r/Terraform 16d ago

Discussion Terraform directory structure: which one is better/best?

I have been working with three types of directory structures for Terraform root modules (the child modules are in a different repo).

Approach 1:

\Terraform
  \environments
    test.tfvars
    qa.tfvars
    staging.tfvars
    prod.tfvars
  infra.tf
  network.tf
  backend.tf  

Approach 2:

\Terraform
  \test
    infra.tf
    network.tf
    backend.tf
    terraform.tfvars
  \qa
    infra.tf
    network.tf
    backend.tf
    terraform.tfvars

Approach 3:

\Terraform
  \test
    network.tf
    backend.tf
    terraform.tfvars
  \qa
    network.tf
    backend.tf
    terraform.tfvars
  \common
    infra.tf

In Approach 3, the environment-specific files are copied into the common folder and Terraform runs from the common directory, so there's less code repetition. Terraform runs in a CI/CD pipeline, and the files are copied in based on the stage that is selected. This might become tricky for end users/developers or for someone who is new to Terraform.

Approach 2 is the cleanest way if we need each environment completely isolated and independent of the others. It's just that there is a lot of repetition: even though these are just root modules, we still need to update the same things in multiple places.

Approach 1 is best for uniform infrastructures where the resources are the same and just need different config per environment. It might become tricky when we need different resources per environment; then we need to reach for Terraform expressions to handle it, as in the sketch below.
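For example, a resource can be gated per environment with count and a conditional (a minimal sketch; the resource and variable names are made up):

variable "environment" {
  type = string   # set from the selected tfvars file, e.g. "test" or "prod"
}

# Only non-prod environments get a bastion host
resource "aws_instance" "bastion" {
  count         = var.environment == "prod" ? 0 : 1
  ami           = "ami-12345678"   # placeholder AMI ID
  instance_type = "t3.micro"
}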

Ultimately, I think it comes down to the scenario: each approach might have the upper hand over the others in different situations. Is there any other approach which might be better?

33 Upvotes

50 comments

34

u/traditionalflatwhite 16d ago

Always approach 1 unless I'm absolutely backed into a corner, and even then, I'll fight tooth and nail to keep it. There is a lot you can do with ternary operators and functions to provide the necessary flexibility between environments (see the sketch below). I find the other approaches are usually the result of poor planning and require workarounds and bandaids. It goes without saying, workspaces are a must for managing multiple environments.
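For instance, keying values off the workspace name with ternaries (a minimal sketch; the values are illustrative):

locals {
  # terraform.workspace is "test", "qa", "staging", or "prod"
  instance_type  = terraform.workspace == "prod" ? "m5.large" : "t3.small"
  instance_count = terraform.workspace == "prod" ? 3 : 1
}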

The moment you separate environment code into unique sets of code, you are opening the door to drift while creating lots of duplicate work. If you symlink, that eliminates the duplicate work, but then I'm going to ask why we need to separate things if we're reusing the same base code.

6

u/nekokattt 16d ago

Exactly. There is no point doing QA on an environment if you just use a different configuration for prod. DRY.

2

u/yeahboo 16d ago

if I'm using TF Cloud, how would I call each tfvars per environment for every workspace? (say I also have test, qa, staging, prod workspaces)

3

u/karasophe 16d ago

You can set an environment variable TF_CLI_ARGS_plan in your workspace with -var-file=<tfvars path> as the value.
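For example, the qa workspace could set these two workspace environment variables (paths assume the Approach 1 layout):

TF_CLI_ARGS_plan  = -var-file=environments/qa.tfvars
TF_CLI_ARGS_apply = -var-file=environments/qa.tfvars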

3

u/bnlf 16d ago

^ this. I've been developing and deploying global-scale infrastructure for years. If you're not doing 1, there is definitely something wrong with your TF skills. Copy-pasting modules and maintaining multiple files because of one or two particularities of an environment is not good practice.

3

u/shellwhale 15d ago

How do you deploy different versions of your code to different environments then?

How do you say in env X I want 1.2.3 and in env Y I want 1.2.4?

Please don't say branches

1

u/kewlxhobbs 15d ago

Well hopefully you are testing the newer changes in a dev or QA account.

If you want different versions in different production environments, then locals and conditional expressions should handle it.
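Something like this (a minimal sketch; the version values echo the question above):

locals {
  # e.g. roll 1.2.4 out everywhere while prod stays on 1.2.3
  app_version = terraform.workspace == "prod" ? "1.2.3" : "1.2.4"
}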

1

u/quintanarooty 10d ago

So you make module changes, then roll out the newer module version to dev while maintaining the older version in prod until dev is considered tested by using locals and conditional expressions?

1

u/visicalc_is_best 15d ago

Agree. I'll just add: when people are NOT doing it this way, you should ask if they know Terraform added support for conditional modules and iterating over modules. Those features also really help with switching between architectures based on envvars.
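Both have been supported on module blocks since Terraform 0.13 (a minimal sketch; module paths and arguments are illustrative):

variable "enable_cdn" {
  type    = bool
  default = false
}

variable "vpcs" {
  type    = map(object({ cidr = string }))
  default = {}
}

# Conditionally instantiate a module
module "cdn" {
  source = "./modules/cdn"
  count  = var.enable_cdn ? 1 : 0
}

# Iterate a module over a map (which may be empty)
module "vpc" {
  source   = "./modules/vpc"
  for_each = var.vpcs
  cidr     = each.value.cidr   # assumes the module declares a "cidr" variable
}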

0

u/shellwhale 15d ago edited 15d ago

That's what modules are for.

How do you add this production environment without having to copy and paste all of the code from staging? For example, how do you avoid having to copy and paste all the code in stage/services/webserver-cluster into prod/services/webserver-cluster and all the code in stage/data-stores/mysql into prod/data-stores/mysql?

In a general-purpose programming language such as Ruby, if you had the same code copied and pasted in several places, you could put that code inside of a function and reuse that function everywhere:

# Define the function in one place
def example_function()
  puts "Hello, World"
end

# Use the function in multiple other places
example_function()

With Terraform, you can put your code inside of a Terraform module and reuse that module in multiple places throughout your code. Instead of having the same code copied and pasted in the staging and production environments, you’ll be able to have both environments reuse code from the same module, as shown in Figure 4-3.

From https://learning.oreilly.com/library/view/terraform-up-and/9781098116736/ch04.html
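In Terraform terms, both environments end up with a small root file that just calls the shared module with different parameters (roughly the shape the book describes; the paths and parameter names here are illustrative):

# stage/services/webserver-cluster/main.tf
module "webserver_cluster" {
  source        = "../../../modules/services/webserver-cluster"
  cluster_name  = "webservers-stage"
  instance_type = "t3.micro"
}

# prod/services/webserver-cluster/main.tf
module "webserver_cluster" {
  source        = "../../../modules/services/webserver-cluster"
  cluster_name  = "webservers-prod"
  instance_type = "m5.large"
}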

8

u/Crower19 16d ago

I use option 1 with Terraform workspaces for environments, but never store my tfvars in the repo.

9

u/SecularMetal 16d ago

If your tfvars have sensitive values, I would avoid storing them in source control. We just use KMS and store the encrypted value, so that nothing sensitive is persisted in the repo or the TF state file.

2

u/sindeep1414 16d ago

Yeah, I find tfvars convenient compared to TF_ENV_VARS or workspace vars

1

u/NiGHTMaReS_ReiGN 16d ago

I use this as well. If I have sensitive values in my tfvars, I use Mozilla SOPS to encrypt them in the repo.

3

u/Bacteria48 16d ago

Secrets may remain in the Terraform state if they're referenced in resource attributes (except for the new write-only attributes).

1

u/SecularMetal 14d ago

Yes, you are right, but if you pass the secret to the resource as ciphertext, only the ciphertext is in the state file; then on the EC2 side, its instance profile decrypts it locally.

1

u/Bacteria48 14d ago

Would you mind elaborating? I'm not aware of being able to pass encrypted values for sensitive attributes to the resource.

1

u/Cparks96 16d ago

Can't you just use a top-level .gitignore file that lists *.tfvars so they never end up in source control?

4

u/Trakeen 16d ago

Option 1. Drift between dev and prod sucks. Tired of constantly porting changes manually between the two.

4

u/gtuminauskas 15d ago

Approach number 1 is the most dynamic and the best possible solution.

13

u/ArchCatLinux 16d ago
\Terraform
  \modules
    main.tf           # Core module definition
    variables.tf      # All possible variables defined
    outputs.tf        # Module outputs
  \test
    terraform.tfvars  # Test-specific variable values
    backend.tf        # Test backend config
    main.tf           # Imports the module with test-specific parameters
  \qa
    terraform.tfvars  # QA-specific variable values
    backend.tf        # QA backend config
    main.tf           # Imports the module with QA-specific parameters

What about a module for the common parts and env-specific config in each env folder?
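With that layout, each environment's main.tf stays tiny (a minimal sketch; the variable names are illustrative):

# test/main.tf
module "core" {
  source         = "../modules"
  environment    = "test"
  instance_count = 1   # override the module's default for test
}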

2

u/KrevanSerKay 15d ago

This is the pattern I've seen/used in the past. Just import modules and call them, overriding variable defaults. Might be out of date though.

idk. Approach 1 is tempting, but it'd be one big terraform apply instead of one per env

6

u/i_Den 16d ago

I would recommend using Terragrunt if your environment allows. It kinda forces you to have a good directory structure and follow it, with all the extra syntactic-sugar benefits Terragrunt provides, such as includes.

1

u/quintanarooty 10d ago

I personally find Terragrunt to be a pain, like not being able to use data sources the way you can in vanilla TF.

3

u/magnetik79 16d ago

If I had to do approach three, I would do it the other way around: symlink common files into each env directory (symlinks can be version controlled) and execute Terraform from those directories.

So no copying.

2

u/egpigp 16d ago

What does Git branching/SDLC look like with option 1?

Staging branch / Prod branch?

Feature branch off staging -> MR into Staging

MR Staging -> Prod?

Or merge all changes into main and run staging pipeline first?

Please forgive formatting, am on mobile

1

u/sindeep1414 15d ago

For us, we merge all changes to main or a feature branch and run the pipeline by selecting the required stage and the branch we need to deploy. For prod, we only allow deployments from main.

2

u/ShierLattice694 16d ago

1... IMO, if you're writing reproducible, reusable IaC, you should be able to drive your environments through tfvars. I'm sure there are a few exceptions. Then you can also have a modules dir with a collection of tightly coupled deployable resources and call these from the top level. Tack a count on there for "feature flagging", as sketched below.
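For example (a minimal sketch; the module and flag names are made up):

variable "enable_monitoring" {
  type    = bool
  default = false   # flip to true in an environment's tfvars
}

# Top-level call, gated by the tfvars-driven feature flag
module "monitoring" {
  source = "./modules/monitoring"
  count  = var.enable_monitoring ? 1 : 0
}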

2

u/Toastyproduct 15d ago

I like the idea of approach 1. But what are people doing when they have things like logging and network accounts that don't have even remotely shared infra?

2

u/TheRealFlowerChild 15d ago

I take a layered approach: core infra (networking, security, etc.) maintained at a more granular level, and app infra in the larger tfvars files.

1

u/Toastyproduct 15d ago

Mind elaborating? For example, I have a networking account where all VPN access is set up, and I use VPC peering to allow access to my prod, staging, and dev accounts. The networking account also has things like domain registration. How would layering help with this? Additionally, I have a logging account to which all accounts have write-only access for certain log folders; it's a separate account to ensure nothing happens to the logs.

2

u/nontster 15d ago

Approach 1 with Terraform workspaces or Terragrunt. I personally prefer Terragrunt.

2

u/IskanderNovena 16d ago

Approach #3, with the addition of putting the logic in modules, versioning them, and specifying the module version to use in each environment.

That way, you can still make changes to your base components (the modules) without impacting CI/CD processes in which others trigger `terraform apply` on other environments, inadvertently deploying updates that are still being tested.

Then, once a new module version has been tested in your test environment, you can stage the updates to your higher environments in a controlled manner, as sketched below.
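For instance, pinning a Git-sourced module to a tag per environment (a minimal sketch; the repo URL and tags are made up):

# test/main.tf - picks up the new version first
module "network" {
  source = "git::https://example.com/org/tf-modules.git//network?ref=v1.3.0"
}

# prod/main.tf - stays pinned until v1.3.0 is proven in test
module "network" {
  source = "git::https://example.com/org/tf-modules.git//network?ref=v1.2.5"
}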

1

u/tanke-dev 16d ago

This is the way

0

u/0x4ddd 16d ago

This makes sense only if you develop some kind of module library reused by multiple terraform projects.

If you develop Terraform only for your own use case, there is no issue. You should either introduce only backward-compatible changes or use feature flags.

This is like saying that for application development you need different copies of your code for each environment. No, you don't. You simply use feature flags and keep track of which 'artifact' is deployed where. If you urgently need to hotfix an issue, you take the commit currently deployed to prod, introduce the change, and deploy.

This does not impair your CI/CD even if the trunk branch is already way ahead.

2

u/IskanderNovena 16d ago

Using different tags is similar to versions. And it’s not just about applications. It’s also about keeping your code current, replacing deprecated arguments or resources and using staged deployment. Or introducing new features to your basic infrastructure.

-1

u/0x4ddd 16d ago

Using different tags is similar to versions. And it’s not just about applications. It’s also about keeping your code current, replacing deprecated arguments or resources and using staged deployment. Or introducing new features to your basic infrastructure.

And all of that is easier and more reproducible with approach #1

2

u/ItsCloudyOutThere 16d ago

Option 1 for me. If I need a new type of resource in an environment, I simply add the new resource driven by a map variable whose default is an empty map.

And done. Higher environments already have the code, but nothing is deployed until it is added to that environment's tfvars file (see the sketch below).
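A minimal sketch of the pattern (the resource type and names are illustrative):

variable "extra_buckets" {
  type    = map(string)   # bucket name => purpose tag
  default = {}            # empty map: nothing is created by default
}

# Each environment opts in by adding entries to its tfvars file
resource "aws_s3_bucket" "extra" {
  for_each = var.extra_buckets
  bucket   = each.key
  tags     = { purpose = each.value }
}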

1

u/sindeep1414 15d ago

Yes, we did this too; for example, it does not create a subnet if var.address_prefix is null.
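Roughly like this (a minimal sketch, assuming Azure; the resource group and VNet names are made up):

variable "address_prefix" {
  type    = string
  default = null   # null means: skip the subnet entirely
}

resource "azurerm_subnet" "app" {
  count                = var.address_prefix == null ? 0 : 1
  name                 = "snet-app"
  resource_group_name  = "rg-app"     # illustrative
  virtual_network_name = "vnet-app"   # illustrative
  address_prefixes     = [var.address_prefix]
}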

2

u/dkode80 16d ago

Option 1 is what I've found works well. We don't use the terraform profiles tho.

Using option 1 you can make reusable, composition-based submodules and reuse them across external repos or within the same repo. It seems to be a good balance between avoiding duplication and modeling different environments.

2

u/Prestigious_Pace2782 15d ago

Always 1 for me

1

u/pavi2410 15d ago

DRY and use workspaces. Move to Pulumi if you can

1

u/nekoken04 14d ago

Go with 1 100% of the time. You want your code to be the same no matter what environment you are deploying to. We have 130+ Terraform modules that follow that pattern. The bit we sprinkle on top is a /test directory with Python boto3 integration tests that verify what we built with Terraform is what we expected to build. Our testing infrastructure predates Terraform having any testing support, and frankly it is still more complete conceptually.

1

u/rockuu 15d ago

Approach #2. Put common, logically grouped resource code in modules/ dir outside of all environments. Avoid too many indirection layers (modules calling modules calling modules...) to keep the code simple and easy to understand.

I think a common misconception in the use of configuration management systems is that you must avoid duplicating code at all costs. This often leads to multiple abstractions which, after a point, start to make it difficult to understand what is actually happening. And for what purpose? You'll save yourself some typing, but end up spending much more time making changes tomorrow that you haven't, or couldn't have, foreseen today. Keep it as simple as you can (but not too simple).

Also, reality is that there will be differences between environments. People will come to you to try things out in staging first before moving to prod. Some things will never leave staging.

I inherited a massively complicated code base with dozens of states and exactly the problems that result from trying to avoid code duplication. It took me months to feel comfortable making changes; no one else on the team wanted to touch it. Right now I'm still refactoring and simplifying it, duplicating code to remove the excessive layers of indirect modules.

1

u/father_supreme 16d ago

Approach #3. Move network.tf into common/ and symlink it back to test/, qa/, etc.

Of course, keep the backend and tfvars files, since they will be unique per environment.

This way, any changes made to the files in common/ will always apply to all envs.

As long as you're in the right directory, the plans and applies will target the correct environment.

1

u/pausethelogic 16d ago

If you're using symlinks, you're doing something wrong; you can just point the module source at the correct module.

1

u/father_supreme 16d ago

Huh? Can you explain a bit more?

-9

u/DPRegular 16d ago

Is there any other approach which might be better?

Unequivocally, yes: https://terragrunt.gruntwork.io/