Platform Engineering Teams Are Killing Their Terraform
Platform engineering teams are replacing Terraform with higher-level abstractions. Here's why IDP layers are winning and what practitioners are actually building.
Your platform engineering team probably has a dirty secret: nobody wants to touch the Terraform anymore.
Not because Terraform is bad — it's an extraordinary tool that reshaped how we think about infrastructure. But a growing number of platform teams in 2026 are arriving at the same uncomfortable conclusion: asking application developers to write HCL, or even to understand the topology of their cloud resources, was always the wrong abstraction layer. The response isn't to abandon infrastructure as code. It's to bury it beneath something that actually maps to how developers think about shipping software.
This shift has been building for two years, but it's accelerating now in a way that's hard to ignore. And the teams doing it well are making choices that would have been heretical in 2022.
The Abstraction Problem Nobody Wanted to Admit
Here's the core tension: Terraform (and Pulumi, and CDK, and Crossplane) model infrastructure. They describe cloud resources, their relationships, and their configurations. That's exactly the right abstraction for an infrastructure team managing a cloud estate.
But most organizations asked something different of these tools. They asked application developers — people whose primary concern is shipping features — to become proficient in infrastructure topology. "Here's a Terraform module, just fill in the variables." Sound familiar?
The result, in most orgs, was one of two failure modes:
- The copy-paste sprawl: Developers clone an existing service's Terraform, change the names, and pray. Nobody understands the networking config. When something breaks at 2 AM, one infrastructure engineer is the single point of failure.
- The golden path bottleneck: Platform teams build perfectly curated Terraform modules, but every new service request becomes a ticket. The "self-service" platform is really just a queue with extra steps.
Neither of these is a platform. They're coping mechanisms.
What's Actually Replacing It
The teams that are getting this right in 2026 have converged on a layered approach. Terraform (or equivalent) still exists — but it's an implementation detail, not a developer interface. The developer-facing layer is an Internal Developer Platform (IDP) that speaks in workloads, not resources.
The pattern looks roughly like this:
| Layer | Who Owns It | What It Describes | Example Tools |
|---|---|---|---|
| Application spec | App developer | "I need a web service with a Postgres database and a message queue" | Score, custom CRDs, Backstage templates |
| Platform orchestration | Platform team | How that spec maps to actual infrastructure patterns | Humanitec, Kratix, custom controllers |
| Infrastructure provisioning | Platform team (automated) | The actual cloud resources | Terraform, Crossplane, Pulumi |
| Cloud APIs | Cloud providers | Raw compute, storage, networking | AWS, GCP, Azure |
The critical insight is the separation between the top two layers. The application spec is deliberately lossy — it doesn't expose every knob. A developer says "I need Postgres" and the platform decides whether that means RDS, Cloud SQL, or an in-cluster operator based on the environment, the team's tier, compliance requirements, and cost constraints.
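To make that separation concrete, here is a sketch of what a platform-side resolution table might look like. Everything here is hypothetical — field names, environment tiers, and implementation identifiers are illustrative, not tied to any specific tool:

```yaml
# Hypothetical platform-side mapping: how an abstract "database" request
# resolves to a concrete implementation per environment. The developer
# only ever says "I need a database"; the platform owns the rest.
resource_types:
  database:
    environments:
      dev:
        implementation: in-cluster-postgres-operator
        version: "16"
      staging:
        implementation: aws-rds
        instance_class: db.t4g.micro
      prod:
        implementation: aws-rds
        instance_class: db.r6g.large
        multi_az: true
```

The point of the table is that policy decisions (engine, sizing, high availability) live in one platform-owned file instead of being copy-pasted into every service's infrastructure code.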
This is not a new idea in theory. What's new is that a critical mass of platform teams are actually building and operating these systems in production, and the tooling has matured enough that you don't need a 15-person platform org to pull it off.
The Score Specification and the Rise of Workload APIs
One pattern gaining serious traction is the Score specification — an open-source, platform-agnostic way to describe workload requirements. It's essentially a YAML file that says "here's my container, here are the resources it needs, here are its dependencies" without specifying anything about how those resources are provisioned.
A Score file for a typical web service might declare a dependency on a `postgres` resource of type `database` and a `redis` resource of type `cache`. That's it. No subnet IDs, no IAM roles, no provider blocks. The platform team maintains the mapping between those abstract resource types and their concrete implementations.
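A minimal sketch of such a file, loosely following the Score format (check the Score documentation for the exact schema in your version — the placeholder syntax and field names below are approximate):

```yaml
apiVersion: score.dev/v1b1
metadata:
  name: web-service
containers:
  web:
    image: web-service:latest    # image reference is illustrative
    variables:
      # Placeholders are resolved by the platform at deploy time;
      # the developer never sees a connection string or a subnet ID.
      DATABASE_URL: ${resources.postgres.url}
resources:
  postgres:
    type: database
  redis:
    type: cache
```

Notice what is absent: no provider configuration, no networking, no IAM. The file describes what the workload needs, and stops there.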
But Score is just one approach. Many teams are building custom Kubernetes CRDs that serve the same purpose — a `WorkloadSpec` or `ServiceManifest` that's tailored to their organization's specific patterns. The common thread is that the developer-facing API is about intent, not implementation.
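As a sketch, a custom CRD instance of this kind might look like the following. The API group, kind, and every field here are hypothetical — the whole point of a custom CRD is that each organization shapes it around its own patterns:

```yaml
apiVersion: platform.example.com/v1alpha1
kind: WorkloadSpec
metadata:
  name: checkout-api
spec:
  archetype: stateless-web        # one of the platform's supported patterns
  container:
    image: checkout-api:1.4.2
    port: 8080
  dependencies:
    - name: orders-db
      type: database              # resolved by the platform, not the developer
    - name: order-events
      type: queue
```

A platform controller watches these objects and reconciles them into real infrastructure, the same way any Kubernetes operator reconciles desired state.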
This works because most application infrastructure follows a small number of patterns. In practice, the vast majority of services at any given company are one of maybe five archetypes:
- Stateless web service with a database
- Background worker consuming from a queue
- Scheduled job / cron
- API gateway or BFF layer
- Event-driven processor
If your platform can express these archetypes well, you've covered the needs of most developers most of the time. The remaining edge cases — the truly novel infrastructure requirements — still go through the infrastructure team directly. That's fine. Platform engineering isn't about handling 100% of cases. It's about handling the 85% that shouldn't require infrastructure expertise.
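One way to express those archetypes on the platform side is a small catalog that maps each one to the building blocks it provisions. This is a hypothetical structure — names and fields are illustrative:

```yaml
# Hypothetical platform catalog: each archetype lists what the
# platform provisions for it and which dependencies it may declare.
archetypes:
  stateless-web:
    provisions: [deployment, service, ingress, hpa]
    optional_dependencies: [database, cache]
  background-worker:
    provisions: [deployment]
    required_dependencies: [queue]
  scheduled-job:
    provisions: [cronjob]
  event-processor:
    provisions: [deployment]
    required_dependencies: [event-stream]
```

A catalog like this also makes the platform's scope explicit: if a request doesn't fit an archetype, it goes to the infrastructure team directly rather than being forced through the abstraction.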
What Teams Are Getting Wrong
In conversations with platform engineers at meetups across the country, from Denver to Austin to NYC, a few recurring mistakes come up again and again. They're worth flagging.
Mistake 1: Building the IDP Before Understanding Demand
The single most common failure mode is platform teams that start by evaluating tools — Backstage vs. Port vs. Cortex vs. custom — before they've deeply understood what their developers actually struggle with. The best platform teams start with embedded observation. They sit with application developers, watch them deploy, watch them debug, and identify the specific friction points.
One pattern that works: before building anything, create a shared document where application developers log every time they're blocked by infrastructure. After two weeks, the patterns are obvious. Maybe it's database provisioning. Maybe it's secrets management. Maybe it's just that environment creation takes three days. Start there.
Mistake 2: Trying to Abstract Away Kubernetes
This is a hot take, but I'll stand behind it: most IDP efforts that try to completely hide Kubernetes from developers end up creating a worse abstraction that's harder to debug. The goal shouldn't be to pretend Kubernetes doesn't exist. It should be to reduce the surface area of Kubernetes that developers need to understand.
A developer should probably know that their service runs in a pod, that it has resource limits, and that they can look at logs via `kubectl`. They shouldn't need to understand NetworkPolicies, PodDisruptionBudgets, or custom scheduler configurations. Draw the line deliberately.
Mistake 3: No Escape Hatch
Every platform abstraction needs a clean escape hatch. When a team has a genuinely novel requirement — a GPU workload, a multi-region database, a weird compliance constraint — they need a path that doesn't involve fighting the platform. The best IDPs let teams "eject" to the lower layer when needed, with clear documentation about what they're taking on by doing so.
The Terraform Isn't Actually Dead
To be clear: Terraform is very much alive in this architecture. It's just moved from being a developer-facing tool to being a platform implementation detail. Most of the teams I've seen making this transition still use Terraform (or increasingly OpenTofu) under the hood. Some are mixing in Crossplane for Kubernetes-native resource management.
The change is organizational, not technological. Terraform modules are now maintained exclusively by the platform team. They're versioned, tested, and promoted through environments just like application code. Application developers never see them. The platform orchestration layer calls them, parameterized by the developer's workload spec and the platform's policies.
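One way that wiring can look is a resource definition the orchestration layer uses to invoke a platform-owned Terraform module. The structure below is hypothetical — tools like Humanitec resource definitions and Crossplane compositions each have their own schema, and the module URL and input names are placeholders:

```yaml
# Hypothetical: how an abstract resource type binds to a versioned,
# platform-owned Terraform module. Developers never see this file.
resource_definitions:
  database:
    driver: terraform
    module_source: git::https://git.example.com/platform/modules//postgres?ref=v3.2.0
    inputs:
      engine_version: "16.2"           # platform policy, not developer choice
      instance_class: db.t4g.medium    # could vary by team tier or environment
      workload_name: ${workload.name}  # injected from the developer's spec
```

Bumping `ref=v3.2.0` to a new module version in this one file, then promoting it through environments, is what replaces hunting through hundreds of service repos.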
This has a massive secondary benefit: it makes infrastructure changes safer. When the platform team needs to upgrade a database engine version or change a networking pattern, they update the module in one place. Every service using that pattern gets the update through the normal promotion process. No more hunting through 200 service repos to update a provider version.
Practical Takeaways
If you're on a platform team — or thinking about starting one — here's what you can do this week:
1. Audit your developer-facing infrastructure surface area. List every infrastructure concept your application developers currently need to understand to ship a feature. Kubernetes manifests, Terraform variables, CI pipeline configuration, secrets management, DNS — all of it. Then draw a line: above the line is stuff developers must own, below the line is stuff the platform should abstract. Most teams are shocked at how much is below the line.
2. Start with one golden path, not a general-purpose platform. Pick your most common service archetype — probably "stateless web service with a Postgres database" — and build a complete, opinionated path from `git init` to production for just that pattern. Make it ridiculously easy. One command or one merged PR to get a new service running. Once that works and teams trust it, expand to the next archetype. This is infinitely more effective than building a general-purpose service catalog that handles everything poorly.
3. Measure time-to-production, not platform adoption. The metric that matters is how long it takes a new developer to go from onboarding to deploying a meaningful change to production. If your platform isn't moving that number, it's not working — no matter how many services are registered in your catalog. Track this relentlessly.
Where This Goes Next
The convergence of platform engineering with AI-assisted development is the obvious next frontier. We're already seeing teams experiment with LLM-powered interfaces to their platforms — a developer describes what they need in natural language, and the platform generates the workload spec. It's early, but the combination of constrained output formats (YAML specs with known schemas) and well-defined platform contracts makes this a more tractable problem than general-purpose code generation.
The teams that will benefit most from AI-assisted platform interactions are, predictably, the ones that already have clean abstraction layers. If your "platform" is a wiki page with instructions for modifying Terraform files, no amount of AI is going to make that a good developer experience.
Much of the practitioner-level detail behind these patterns surfaces at local infrastructure and DevOps meetups rather than in blog posts — these conversations are happening everywhere from Seattle to Raleigh-Durham, and what you hear in person is hard to replicate in writing.
FAQ
Is Terraform actually being replaced?
No. Terraform (and OpenTofu) are still widely used for infrastructure provisioning. What's changing is who interacts with it. Platform teams are wrapping Terraform in higher-level abstractions so that application developers work with workload specs instead of HCL. The Terraform is still there — it's just an implementation detail of the platform, not a developer-facing tool.
How big does my team need to be to justify an Internal Developer Platform?
Smaller than you think. If you have more than about 20 application developers and they're spending meaningful time on infrastructure configuration, an IDP pays for itself quickly. The key is starting narrow — one service archetype, one deployment pattern — rather than trying to build a general-purpose platform from day one. A single platform engineer can build a useful golden path in a few weeks if the scope is constrained.
Should we buy an IDP product or build our own?
The honest answer is that most teams end up doing both. Products like Backstage (with plugins), Humanitec, or Kratix give you a foundation, but every organization's infrastructure landscape is different enough that some custom integration work is inevitable. Start with an existing tool that handles the parts you don't want to build — service catalog, RBAC, audit logging — and build custom only where your infrastructure patterns are genuinely unique. The worst outcome is spending six months building a custom portal that reimplements features available off the shelf.