Terraform Providers

Recently I was creating sets of AWS instances. To configure themselves they needed access to some shared shell functions and a place to exchange configuration information. They would operate as a cluster, but there was no way for Terraform to tell each machine the IP addresses of the other members. However, Terraform could tell the last-allocated instance the IP addresses of all the other members, so I used an S3 bucket to share that configuration around, along with some other things.
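As a sketch of that trick (the resource names, the count, and the AMI variable are all invented for illustration): allocate all but one instance in one resource, then interpolate the members' addresses into the last one, which exists only after they do.

resource "aws_instance" "member" {
  count         = 2
  ami           = "${var.ami}"
  instance_type = "t3.micro"
}

resource "aws_instance" "last" {
  ami           = "${var.ami}"
  instance_type = "t3.micro"

  # The members above already exist by the time this instance is
  # created, so their addresses can be templated into its user data.
  user_data = "PEERS=${join(",", aws_instance.member.*.private_ip)}"
}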

However, just creating a single S3 bucket didn't work across regions. While the namespace for AWS buckets appears to be global, access appears to be limited per region. So I created one bucket per region, like so.

First I set up the AWS provider and made a number of aliases for the different regions. Note that I use AWS profiles to supply credentials. This avoids storing credentials in configs and makes it easier and clearer to work with multiple AWS accounts.
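For reference, a profile is just a named section in the AWS shared credentials file; the profile name and key values below are placeholders:

~/.aws/credentials:

[my-profile]
aws_access_key_id     = <access key id>
aws_secret_access_key = <secret key>

Passing -var profile=my-profile at apply time then selects those credentials.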

dist.tf:

variable "profile" {}

provider "aws" {
  region  = "eu-west-1"
  profile = "${var.profile}"
}

provider "aws" {
  alias   = "eu-central-1"
  region  = "eu-central-1"
  profile = "${var.profile}"
}

provider "aws" {
  alias   = "eu-west-2"
  region  = "eu-west-2"
  profile = "${var.profile}"
}

provider "aws" {
  alias   = "eu-west-3"
  region  = "eu-west-3"
  profile = "${var.profile}"
}

module "eu-west-1-s3-dist" {
  source = "./modules/s3-dist"
  region = "eu-west-1"
}

module "eu-central-1-s3-dist" {
  source = "./modules/s3-dist"
  region = "eu-central-1"
  providers = { aws = "aws.eu-central-1" }
}

module "eu-west-2-s3-dist" {
  source = "./modules/s3-dist"
  region = "eu-west-2"
  providers = { aws = "aws.eu-west-2" }
}

module "eu-west-3-s3-dist" {
  source = "./modules/s3-dist"
  region = "eu-west-3"
  providers = { aws = "aws.eu-west-3" }
}

The last four stanzas all source the same module directory, which contains two files. The first is the Terraform config, modules/s3-dist/dist.tf:

variable "region" {}

resource "aws_s3_bucket" "dist" {
  bucket = "dist-${var.region}"
  acl    = "private"
  region = "${var.region}"

  lifecycle {
    prevent_destroy = true
  }

  tags {
    Name = "dist"
    Role = "infrastructure"
  }
}

resource "aws_s3_bucket_object" "script" {
  bucket = "dist-${var.region}"
  key    = "script"
  source = "${path.module}/script"
  # filemd5() is in Terraform >=0.11.12
  etag = "${filemd5("${path.module}/script")}"
}

The second is the script, modules/s3-dist/script:

#!/usr/local/bin/bash
echo "Lots of shell functions."

The instances are then told the region-specific path to fetch the script from (which, for one thing, gets past user-data length restrictions) and they also use the bucket to synchronise certain information about their cluster.
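The user data itself can then stay tiny. A minimal sketch, assuming the AMI variable and an instance profile with read access to the bucket (both names invented), and that the AWS CLI is baked into the image:

resource "aws_instance" "node" {
  ami           = "${var.ami}"
  instance_type = "t3.micro"

  # Hypothetical profile granting s3:GetObject on the dist bucket.
  iam_instance_profile = "${var.dist_reader_profile}"

  # Small enough to stay well under EC2's 16KB user-data limit;
  # the real work lives in the script fetched from the region's bucket.
  user_data = <<EOF
#!/usr/local/bin/bash
aws s3 cp s3://dist-${var.region}/script /tmp/script
. /tmp/script
EOF
}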