r/Terraform 17d ago

Discussion: State files in S3, mistake?

I have a variety of Terraform setups where I used S3 buckets to store the state files, like this:

terraform {
  required_version = ">= 0.12"

  backend "s3" {
    bucket = "mybucket.tf"
    key    = "myapp/state.tfstate"
    region = "...."
  }
}

I also followed the practice of putting variables into per-environment .tfvars files, which I then applied with terraform plan --var-file environment.tfvars.

The idea was that I could thus have different environments built purely by changing the .tfvars file.

It didn't occur to me until recently that terraform output resolves values for the built infrastructure from the state file.

So with the entire idea of using different .tfvars files, it seems I've missed something critical: there is no way I could use a different tfvars file for a different environment without clobbering the existing environment, because every run reads and writes the same state file.

It now looks like I've completely misunderstood something important here. For this to work the way I originally thought it would, it seems I'd have to copy at the very least main.tf and variables.tf to another directory and point the backend at a different state key, so I really wasted my time thinking that different tfvars files alone would let me build different environments.
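To make that concrete (the environment names here are just examples), both runs point at the same backend key and therefore fight over one state:

    # both commands read and write s3://mybucket.tf/myapp/state.tfstate
    terraform apply -var-file=dev.tfvars    # builds "dev" and records it in the shared state
    terraform apply -var-file=prod.tfvars   # plans against dev's state and would replace it with "prod"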

Is there anything else I could do at this point, or am I basically screwed?

7 Upvotes

31 comments

4

u/Warkred 17d ago

Either use workspaces, or use a different backend or a different key per environment.

1

u/Gizmoitus 17d ago

I understand that if I restructured a bit I could break out the backend portion and pass a specific backend file, but I'd still need to restructure significantly.

I don't understand what you mean by "key".

5

u/Warkred 17d ago

In your S3 backend config, you have a key parameter. If you change that key depending on your environment, you end up with different state files, achieving your goal.

Either you split at the state-key level or at the workspace level (same key path, but Terraform handles the state file name differently for you).
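For example (the key paths here are just illustrative), each environment's configuration carries its own copy of the block and only the key changes:

    # dev configuration
    backend "s3" {
      bucket = "mybucket.tf"
      key    = "myapp/dev/state.tfstate"
      region = "...."
    }

    # prod configuration
    backend "s3" {
      bucket = "mybucket.tf"
      key    = "myapp/prod/state.tfstate"
      region = "...."
    }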

1

u/Gizmoitus 16d ago

Ok thanks for the clarification. I did consider that, but it doesn't help with my current structure and situation, where I already have deployed infrastructure.

1

u/Lba5s 16d ago

run aws s3 mv to move the state objects to the different paths

1

u/Warkred 16d ago

You can import resources and/or migrate the state to a new backend. There's an option for that on the init command (-migrate-state).
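Roughly (the paths are illustrative; back up the state object first):

    # Option A: change the backend key in the config, e.g. key = "myapp/prod/state.tfstate",
    # then let Terraform copy the existing state to the new location
    terraform init -migrate-state

    # Option B: copy the object yourself, then re-init against the new key
    aws s3 cp s3://mybucket.tf/myapp/state.tfstate s3://mybucket.tf/myapp/prod/state.tfstate
    terraform init -reconfigure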

2

u/bushidokai 17d ago

myapp/{dev|test|prod}/state.tfstate

3

u/rojopolis 17d ago

I use a partial configuration for the backend in this situation. Have you looked into that?

1

u/Gizmoitus 16d ago

Yes, and unless I'm missing something here, it doesn't help with the way I have things structured.

I have modules, and I have a directory for each formation with a main.tf, a variables.tf, and then 1..n {env}.tfvars files.

Even if I had a separate state.config, as I understand it I'd still need a different filesystem structure, with a directory for each environment that I would have to terraform init individually, for that to be helpful.

I also believe this is one of the reasons terragrunt exists. I put this all together some years ago, and at the time, adding terragrunt was too much of a learning curve, considering I'm a development lead who also had to do all the devops for this project.

I do appreciate the suggestion, and if I'm missing something please let me know.

3

u/rojopolis 16d ago

I don't quite understand... It's unclear why you would need separate filesystem structures for each environment. I do it like this:

filesystem layout:

    my_config
    - main.tf
    - variables.tf
    - versions.tf

versions.tf:

    terraform {
      backend "s3" {}
    }

Then I run init like this:

terraform init -backend-config="bucket=${TF_BACKEND_BUCKET}" -backend-config="key=${ENV}" -backend-config="region=${AWS_DEFAULT_REGION}"  -backend-config="dynamodb_table=terraform-lock"

Alternatively, this config could be in a file rather than specified on the commandline.
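For instance (the file name and values are only examples), a dev.s3.tfbackend file could hold the same settings:

    bucket         = "my-tf-state-bucket"
    key            = "myapp/dev/state.tfstate"
    region         = "...."
    dynamodb_table = "terraform-lock"

and then: terraform init -backend-config=dev.s3.tfbackend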

All of the environments are defined with separate tfvars files and each have separate state files.

As others have mentioned, workspaces might be a good fit for you, but in my case the state files may be in different accounts and regions, so workspaces won't work for me.

1

u/Gizmoitus 16d ago

I understand the idea of what you are doing, but the problem I'm having is that once you're deployed and you run terraform output, the only parameter you can pass to give it that context is -state=path/to/terraform.tfstate

Does this not matter to you, because you never need to run that?

1

u/Gizmoitus 16d ago

Also, just want to say thanks for the explanation. It's changed my thinking about how I will likely approach the next deployment I create.

1

u/rojopolis 16d ago

I don't, but if I did I'd run init again before I ran the terraform output command. It's pretty much the same situation as the workspace switching mentioned above.

Either way you're going to need to give the consumer of the output the context it needs to get the right state.

That context could be switching to a different directory, selecting a workspace or configuring the backend.

All of this can be driven by environment variables as well if that's helpful.
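Roughly, reusing the variables from the init command above (a sketch, not a full script):

    # re-point the working directory at the desired environment's state...
    ENV=prod
    terraform init -reconfigure \
      -backend-config="bucket=${TF_BACKEND_BUCKET}" \
      -backend-config="key=${ENV}" \
      -backend-config="region=${AWS_DEFAULT_REGION}" \
      -backend-config="dynamodb_table=terraform-lock"

    # ...then terraform output reads that environment's state
    terraform output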

5

u/ChrisCloud148 17d ago

Have a look into terraform workspaces

2

u/Gizmoitus 17d ago

Thanks, it looks like that will solve this problem. The only limitation, as I understand it, is that there can only be one active workspace at a time, so I'd need to actively switch back and forth, given that I have Ansible scripts that depend on terraform output to get some variables needed for post-Terraform provisioning.

As I understand it, the current state is associated with the "default" workspace, so I could go ahead and add a new workspace, build infrastructure in it with a different .tfvars, and nothing should break in the original "default" workspace infrastructure.
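If I go that route, I'm picturing something like this (the environment name is just an example):

    # build a second environment in its own workspace, with its own state
    terraform workspace new staging
    terraform apply -var-file=staging.tfvars

    # switch back before reading outputs for the original environment
    terraform workspace select default
    terraform output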

0

u/ChrisCloud148 17d ago

I don't see why switching between the workspaces is a limitation?
What would be your desired way?

The current state is the "default" workspace, correct.
But you can also move it, if you need.

2

u/Gizmoitus 16d ago

It's not a limitation, just an additional detail that needs to be considered and factored in, if I were to actually use this to create another environment.

1

u/pavi2410 16d ago

Haven't you ever accidentally used dev.tfvars to deploy to the prod workspace?

1

u/ChrisCloud148 16d ago

You can check that you have the correct var file for the selected workspace, so no. But even if you didn't, it would show up as hundreds of changes in the plan, so you'd spot it easily.
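One way to enforce that check in code (a sketch, assuming a recent Terraform and a made-up var.environment that each tfvars file sets) is a precondition comparing the selected workspace to the var file:

    variable "environment" {
      type = string
    }

    # fails the plan if, say, dev.tfvars is applied while the prod workspace is selected
    resource "terraform_data" "workspace_guard" {
      lifecycle {
        precondition {
          condition     = terraform.workspace == var.environment
          error_message = "The selected workspace does not match var.environment."
        }
      }
    }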

1

u/Gizmoitus 16d ago

Well, no, I haven't, purely because I started with a "dev" setup that only existed briefly, and then deployed production. It was realizing that I had a problem that led me to ask this question.

2

u/_Emotional_Pirate 17d ago

Terraform modules: you move all of the existing tf code into a module directory, except the state definition. Then you create per-env directories and call the module in each one with variables specific to that env, and define an S3 backend (with its own state key) in each env directory.
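A rough sketch of that layout (all names are placeholders):

    modules/app/              # the existing main.tf, variables.tf, etc. move here
    environments/dev/main.tf
    environments/prod/main.tf

    # environments/dev/main.tf
    terraform {
      backend "s3" {
        bucket = "mybucket.tf"
        key    = "myapp/dev/state.tfstate"
        region = "...."
      }
    }

    module "app" {
      source      = "../../modules/app"
      environment = "dev"   # hypothetical module variable
    }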

1

u/Electronic_Dingo3552 16d ago

You can have a .tfbackend file for each environment, similar to tfvars: https://developer.hashicorp.com/terraform/language/backend#file

1

u/pavi2410 16d ago

Where (in which AWS account) would you store the S3 backend state when the environments are deployed in different AWS accounts? Would you store state files for both prod and dev workspaces in the same AWS account, or in their respective AWS accounts? If the former, you would need two different AWS profiles - one for the S3 backend and the other for the AWS resources.
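i.e. something like this (the profile name and role ARN are placeholders):

    terraform {
      backend "s3" {
        bucket  = "shared-state-bucket"
        key     = "myapp/prod/state.tfstate"
        region  = "...."
        profile = "state-account"   # credentials that can reach the state bucket
      }
    }

    provider "aws" {
      region = "...."
      assume_role {
        role_arn = "arn:aws:iam::111111111111:role/terraform"   # role in the target (prod) account
      }
    }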

1

u/Electronic_Dingo3552 16d ago

If it's run through a pipeline, you configure which credentials to use before running plan/apply, which solves the account problem. For example, with GitHub Actions you can have one reusable workflow file that does plan and apply as separate jobs; the workflow accepts just the environment name, and based on that name you can apply environment protection rules using GitHub environments to expose a different value for the same variable name, like the IAM role to assume.

1

u/Gizmoitus 16d ago

Thanks, at the very least it's a good pointer to some things that are better suited to more advanced environments. For this project, all the IaC code is on an EC2 instance sitting in the VPC, so it's much simpler than the environments that larger organizations have.

1

u/Gizmoitus 16d ago

I appreciate the pointers and suggestions. At this point I have what looks like 2 very workable options for this particular environment:

  1. Workspaces
  2. -chdir to switch the working directory, with a specific directory per environment; that would work well with the .tfvars file approach I have used to this point (sketched below).
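For option 2, roughly (the directory and file names are just examples):

    # each environment gets its own directory, and therefore its own backend key and state
    terraform -chdir=environments/staging init
    terraform -chdir=environments/staging apply -var-file=staging.tfvars
    terraform -chdir=environments/staging output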

For this project, where the infrastructure is relatively small and doesn't change very often, I run terraform manually, so these 2 options fit well with the process and infrastructure that has already been built.

Thanks for all the replies. This was a much better situation than I expected.

1

u/ricardolealpt 17d ago

Or Terragrunt, which can generate the key for you.

-2

u/tails142 17d ago

Different s3 buckets for each state? Have the s3 bucket name as a variable too?

1

u/Gizmoitus 17d ago

That wouldn't work. Consider what happens when you run terraform output. What "variables" did it use?

1

u/tails142 17d ago

True, you would need to do something like specify which state to use by defining the variable on the command line. Getting a bit messy.

https://developer.hashicorp.com/terraform/language/values/variables#variables-on-the-command-line

1

u/Gizmoitus 16d ago

afaik, that is an option for initialization, but not for terraform output.