Welcome to the last bit of setting up our WordPress site with Terraform and AWS Lightsail! In this bit, we’ll set up an AWS S3 bucket to store our Terraform state. So far, our Terraform state has been stored locally on your own machine. This has some downsides: if your hard drive crashes – poof, the state is gone. If you get a new computer, again – no state. And if you are working on a team that needs to share state, your local machine isn’t going to be an option. To fix this, we’ll first create an S3 bucket with Terraform and then migrate our local state into that bucket. Let’s get started!

AWS S3 Terraform

All code can be found in my github repo. Let’s start by creating our S3 module:

.
├── README.md
├── main.tf
├── modules
│   ├── lightsail
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── route53
│   │   ├── main.tf
│   │   └── variables.tf
│   └── s3
│       ├── data
│       │   └── terraform-state-policy.json
│       ├── main.tf
│       └── variables.tf
├── terraform.tfstate
└── variables.tf

We’ll start by defining our variables.tf:

variable "user_role_arn" {
  type        = string
  description = "The user_role_arn of the account being used by terraform"
}

variable "bucket_name" {
  type        = string
  description = "The name of the bucket"
}

variable "tags" {
  description = "Tags to set on the bucket."
  type        = map(string)
  default     = {}
}

Our variables are pretty simple; we have the following:

  • user_role_arn – This is the ARN of the user used to execute the Terraform. We created this user back in bit 1.
  • bucket_name – The name of the bucket. Remember, S3 bucket names are globally unique.
  • tags – Honestly, tags are something we should put on everything we create with Terraform.
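To make the wiring concrete, here’s a sketch of what calling this module from the root main.tf might look like. The ARN, bucket name, and tag values are placeholders – substitute your own:

```hcl
module "s3" {
  source = "./modules/s3"

  # ARN of the IAM user created in bit 1 (placeholder value)
  user_role_arn = "arn:aws:iam::123456789012:user/terraform"

  # Must be globally unique across all of S3 (placeholder value)
  bucket_name = "my-wordpress-terraform-state"

  tags = {
    project = "wordpress"
  }
}
```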

Next, we are going to create a JSON file that defines our bucket policy (I like to put these in data folders). Note that in main.tf below we generate the equivalent policy with the aws_iam_policy_document data source, so this file mainly serves as a readable reference for the policy we want:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "<your_user_arn>"
            },
            "Action": "s3:ListBucket",
            "Resource": "<your_bucket_arn>"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "<your_user_arn>"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "<your_bucket_arn>/*"
        }
    ]
}

Now that we have our variables defined, let’s create our main.tf module:

resource "aws_s3_bucket" "s3_bucket" {
  bucket = var.bucket_name

  tags = var.tags
}

resource "aws_s3_bucket_ownership_controls" "s3_bucket" {
  bucket = aws_s3_bucket.s3_bucket.id
  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

resource "aws_s3_bucket_acl" "s3_bucket" {
  bucket = aws_s3_bucket.s3_bucket.id
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "s3_bucket" {
  bucket = aws_s3_bucket.s3_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}

data "aws_iam_policy_document" "s3_bucket" {
    statement {
        principals {
          type = "AWS"
          identifiers = [var.user_role_arn]
        }

        actions = [
            "s3:ListBucket"
        ]

        resources = [
            aws_s3_bucket.s3_bucket.arn
        ]
    }

    statement {
        principals {
          type = "AWS"
          identifiers = [var.user_role_arn]
        }

        actions = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject"
        ]

        resources = [
            "${aws_s3_bucket.s3_bucket.arn}/*"
        ]
    }
}

resource "aws_s3_bucket_policy" "s3_bucket" {
  depends_on = [
    data.aws_iam_policy_document.s3_bucket,
    aws_s3_bucket.s3_bucket
  ]
  bucket = aws_s3_bucket.s3_bucket.id
  policy = data.aws_iam_policy_document.s3_bucket.json
}

We’ll walk through each of these items one by one:

  • resource "aws_s3_bucket" – This creates your bucket.
  • resource "aws_s3_bucket_ownership_controls" – This sets the object ownership of your bucket to “BucketOwnerPreferred”, which is a nice way of saying anything written to the bucket will be owned by the bucket owner. So if another account writes something to the bucket, it will still technically be owned by the bucket owner (not the other account).
  • resource "aws_s3_bucket_acl" – This sets our ACL to private; we don’t want to share items in this bucket with the world.
  • resource "aws_s3_bucket_versioning" – We enable versioning here, a simple and cheap backup solution.
  • data "aws_iam_policy_document" – This generates an IAM policy document we can attach to the bucket.
  • resource "aws_s3_bucket_policy" – This attaches the IAM policy document to the bucket.

Okie dokie, go ahead and update your terraform.tfvars, set the required variables, then run terraform plan/apply to create your bucket! Validate that your bucket was created properly and that the permissions are set up.
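If you have the AWS CLI configured for the same account, one quick way to validate is to ask S3 directly for the bucket’s versioning status and policy (the bucket name below is a placeholder):

```shell
# Should report a Status of "Enabled" once versioning is on
aws s3api get-bucket-versioning --bucket <your-bucket-name>

# Should echo back the policy we attached above
aws s3api get-bucket-policy --bucket <your-bucket-name>
```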

Terraform State Migration

Whew, we are almost done. As I said before, your current Terraform state is being stored on your local machine. To migrate the state, we need to tell Terraform about the remote backend. We do this by updating the root main.tf and adding the following:

terraform {
  backend "s3" {
    bucket = "<insert-your-bucket-name>"
    key    = "<insert-a-key>/terraform.tfstate"
    region = "<insert-your-region>"
  }
}

If you run a plan now, it should error with the following:

╷
│ Error: Backend initialization required, please run "terraform init"
│
│ Reason: Initial configuration of the requested backend "s3"
│
│ The "backend" is the interface that Terraform uses to store state,
│ perform operations, etc. If this message is showing up, it means that the
│ Terraform configuration you're using is using a custom configuration for
│ the Terraform backend.
│
│ Changes to backend configurations require reinitialization. This allows
│ Terraform to set up the new configuration, copy existing state, etc. Please run
│ "terraform init" with either the "-reconfigure" or "-migrate-state" flags to
│ use the current configuration.
│
│ If the change reason above is incorrect, please verify your configuration
│ hasn't changed and try again. At this point, no changes to your existing
│ configuration or state have been made.

The easiest thing to do here is to run terraform init -migrate-state. That’s it! After you verify your state files exist in your S3 bucket, you can remove the local state files.

Final Notes

Earlier I mentioned that using a remote state is a necessity for sharing with teams. Be aware, though, that this walk-through leaves out an important piece: state locking. The proper way to handle locking in AWS is with a DynamoDB table. In the future, we can walk through setting that up with Terraform as well. But for now, just realize that these first four bits are meant for a solo development experience.
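As a preview, a minimal locking setup might look something like the sketch below. The table name and billing mode are my own choices; the one hard requirement is that the hash key is named LockID, which is what the S3 backend expects:

```hcl
resource "aws_dynamodb_table" "terraform_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

You would then add a matching dynamodb_table = "terraform-state-lock" line to the backend "s3" block, and Terraform will acquire a lock in that table before touching the state.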
