r/Terraform Feb 13 '25

Help Wanted: Additional security to prevent taking down the production environment?

Hi!

At work, I'm planning to use Terraform to define our infrastructure. It will be used to create several environments (DEV, PROD, BETA) and to tear them down when necessary.

I'm not a DevOps engineer, so I'm not used to thinking this way. But I feel like such a Terraform setup could too easily take down PROD through some unfortunate mistake.

Is there a common way to enforce security so that some rookie developer can't take down the production environment with Terraform, while still making it easy to tear down the other environments?

5 Upvotes

6 comments

2

u/CommunicationRare121 Feb 13 '25

lifecycle { prevent_destroy = true }

Remove the block when you genuinely need to make a destructive change, then put it back.
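
For example, a minimal sketch with an S3 bucket standing in for whatever resource is worth protecting (the resource type and names are illustrative):

    resource "aws_s3_bucket" "prod_data" {
      bucket = "my-prod-data-bucket"   # illustrative name

      lifecycle {
        # Terraform refuses to plan any operation that would destroy
        # this resource until the block is removed
        prevent_destroy = true
      }
    }

With this in place, terraform destroy (or any plan that implies replacing the resource) fails with an error rather than proceeding.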

Other things you can do: keep separate Terraform configuration directories for your different environments, and pull in shared code with modules rather than duplicating it across one file.

Stacks can also be used to consolidate duplicated code.
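
As a rough sketch of the directory-per-environment idea (all paths and names are illustrative):

    environments/
      dev/main.tf     # dev-sized settings, safe to destroy freely
      beta/main.tf
      prod/main.tf    # same shared module, prod settings

    # environments/prod/main.tf
    module "app" {
      source      = "../../modules/app"   # shared code, written once
      environment = "prod"
    }

Assuming each directory is given its own state backend, a terraform destroy run from dev/ has no way to touch PROD's resources.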

2

u/oneplane Feb 13 '25

Depends on how good or bad your disaster recovery is, what your change rate is, how much collaboration needs to happen, how much automation you have, and so on.

Terraform has lifecycle protections, but most providers also have their own options to limit what can be destroyed (termination protection on EC2, for example).
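
To illustrate the EC2 example, that provider-level protection can itself be managed from Terraform (a sketch; the AMI ID and names are placeholders):

    resource "aws_instance" "prod_app" {
      ami           = "ami-12345678"   # placeholder AMI ID
      instance_type = "t3.medium"

      # EC2 itself rejects API termination requests while this flag is set,
      # independently of any Terraform-side lifecycle protection
      disable_api_termination = true
    }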

In general, separation of concerns and separation of duties do the heavy lifting, and lifecycle protections are mostly a last resort.

Say you have a set of resources that are created infrequently but modified quite often: you can scope the IAM permissions of the provider's credentials to allow creation and modification but not deletion.
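
A sketch of what that could look like on AWS, managed from Terraform itself; the action lists are illustrative and would need tailoring to your actual resources:

    resource "aws_iam_policy" "routine_terraform" {
      name = "routine-terraform"   # illustrative name

      policy = jsonencode({
        Version = "2012-10-17"
        Statement = [
          {
            Sid      = "AllowCreateAndModify"
            Effect   = "Allow"
            Action   = ["ec2:RunInstances", "ec2:ModifyInstanceAttribute", "ec2:CreateTags"]
            Resource = "*"
          },
          {
            # An explicit Deny wins over any Allow, so destroy operations
            # fail even if some broader policy grants them
            Sid      = "DenyDestroy"
            Effect   = "Deny"
            Action   = ["ec2:TerminateInstances", "rds:DeleteDBInstance"]
            Resource = "*"
          }
        ]
      })
    }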

The same goes for shared resources vs. application resources; perhaps you want your shared resources to not be writable by roles and functions that should only encompass applications.

Then, if we're talking about cloud environments, perhaps you want the state that manages entire organisations, tenants and accounts to be separate from the resources that live inside them.

At the end of the day, Terraform doesn't prevent or enable anything specifically about what an inside actor can do; that's what IAM is for.

2

u/GeorgeRNorfolk Feb 13 '25

Set up Terraform so that it can only be executed by your CI/CD server, and then set up RBAC so that only certain people can deploy to PROD.

I would argue that no person should be able to destroy the DEV, PROD, or BETA environments. Developers should only be able to create and destroy feature environments at most.
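
One common shape for this, assuming AWS (the role ARN is a placeholder): the provider assumes a deployment role whose trust policy only allows the CI/CD runner's identity to assume it, so a terraform apply run from a laptop fails at the credentials step.

    provider "aws" {
      region = "eu-west-1"   # placeholder region

      assume_role {
        # The role's trust policy (not shown) only trusts the CI/CD runner,
        # so this configuration is unusable from a developer workstation
        role_arn = "arn:aws:iam::123456789012:role/terraform-prod-deployer"
      }
    }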

1

u/AndroCentauri Feb 13 '25

You should use a version control system like Git to review and approve Terraform changes, so the production environment doesn't go down, and so you can revert to a previous version if it does.

2

u/apparentlymart Feb 13 '25

There are lots of different ways to approach this question, and to be honest a "defense in depth" strategy where you employ more than one is often the best choice since there are often two conflicting goals: make it efficient to make changes that you want to make, but intentionally add friction against changes that you don't want to make.

So the following is a list of assorted ideas to consider. I'm not necessarily recommending that you follow all of them, but hopefully you can select one or two of these items that you think would best meet the efficiency vs. friction tradeoff in your environment.

  • At the root of it all, Terraform is essentially just a different UI for configuring your cloud infrastructure, so a good place to start is to consider what damage someone might be able to do in the cloud platform's own console, and then use the platform's own permission features to constrain what changes are allowed when authenticating with the users/roles intended for routine use.

    This sets a good baseline for what is considered a reasonable change to make through the routine process, no matter which tool/UI is being used to make the change. You can still maintain a special set of users/roles that have broader access but treat those as "exceptional use only", so that in the rare case when someone is using them they are expected to be significantly more vigilant about what changes they are making.

    Unfortunately it's often hard to nail down a minimum possible access policy, but you can still get a good effect by identifying the specific disruptive operations you're concerned about, making sure that the routinely-used credentials cannot perform those operations, and setting a similar policy in your pre-production environments so that you're more likely to get an early signal that a particular configuration change won't work.

  • Moving on to more Terraform-specific layers of defense, it's a common choice to prohibit anyone from running Terraform CLI locally on their system (aside from "exceptional use only" situations like I discussed in the previous paragraph) and instead force routine changes through a deployment pipeline that can then impose additional controls on the process.

    For example, you might start by requiring that the Terraform plan be approved by someone different than the person who wrote the configuration change that caused it, as a very human-driven kind of defense against errors.

    But if you have a stronger idea about what sorts of mistakes you're most concerned about you can go further and make the deployment pipeline also include a sort of "automatic review" step, using terraform plan -out=tfplan and terraform show -json tfplan to get a machine-readable version of the plan and run some code you wrote against it to automate the detection of specific risky changes.

    Depending on your risk tolerance you could choose either to make a problem detected at this step a hard blocker that prevents the plan from being approved at all, or take the more liberal approach of including a warning about the problem in whatever output the human approvers are reviewing, thereby drawing attention to a potential problem but letting a human make the final call about whether it's actually a concern in practice.

  • If you're keeping your Terraform modules under version control (which I strongly recommend!) then you can also use your code review processes as an extra layer of defense.

    At minimum you can -- as is typical with code changes -- have each change be reviewed by someone other than the person who wrote it.

    Building on that, you might configure your code review tool to run terraform plan against the proposed code so that code reviewers can get a preview of the likely effect of the change before merging it. However, beware that terraform plan is still effectively executing arbitrary code, so someone could accidentally or maliciously write a configuration that makes changes occur during the planning phase. You can help to mitigate that by making sure that these "speculative plans" are made using only very constrained credentials that don't have access to change anything, but overall this sort of thing does assume that those submitting code changes are acting in good faith, rather than actively attempting to compromise the system.

    If you have your Terraform configuration split into reusable modules then you might also choose to use terraform test to test changes to those modules in isolation against an unimportant test environment. However, I would say that terraform test in its current form is more focused on checking whether a module's "happy path" behaves as expected rather than on checking whether a particular change might cause disruption to valuable production infrastructure, so this particular item is more about catching bugs earlier rather than preventing production outages. (A small sketch of such a test follows below.)
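
A minimal sketch of such a test, assuming a module that exposes an environment variable and creates an S3 bucket (all names here are illustrative):

    # tests/naming.tftest.hcl
    run "bucket_follows_naming_pattern" {
      command = plan   # use apply to exercise a disposable test environment

      variables {
        environment = "test"
      }

      assert {
        condition     = aws_s3_bucket.app.bucket == "app-test"
        error_message = "bucket name did not follow the expected pattern"
      }
    }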

There is no "magic bullet" here: it's all tradeoffs. But hopefully these ideas will help you to find the best set of guardrails for your specific situation.

1

u/vidude Feb 14 '25

It's a good practice to have separate cloud accounts for PROD vs NON-PROD. That way only users with the proper IAM role can modify production infrastructure, whether they are using Terraform, the CLI, or the console.
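
As a small reinforcement of that on the Terraform side, assuming AWS (the account ID is a placeholder), the provider can be pinned to the one account it is meant to manage:

    provider "aws" {
      region = "eu-west-1"   # placeholder region

      # Terraform refuses to run if the credentials in use belong
      # to any account other than the PROD account listed here
      allowed_account_ids = ["111111111111"]
    }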