How To Create Your AWS VPCs Pillars With Terraform

This post shows how Terraform can provision infrastructure resources on AWS in full automation, with a use case covering the steps needed to segregate environments and build a management VPC which offers cross-VPC services (such as monitoring, logging, CI/CD) to a development application VPC.

Both VPCs are deployed in a single region and availability zone, and both have internet access through internet gateways and NAT gateways.

VPC peering ensures connectivity between the two VPCs.

A basic EC2 instance acting as a bastion host is deployed in the management VPC to provide access to the EC2 instance built in the private subnet of the peered development VPC.

We’ll use the latest (at the time of writing) Ubuntu AMI for both EC2 instances.

The whole set-up can be built within the AWS free-tier program.

Terraform Code

On my GitHub account you can also find repositories to practice the earlier steps if needed, such as creating a GitHub account, or building a basic EC2 instance and a VPC with Terraform on AWS.

The code can be cloned from my GitHub repository.

Code structure is the following:

  • dev_vpc, containing the code to set up the development VPC, the EC2 private instance, and the security group enabling SSH traffic
  • mgmt_vpc, containing the code to set up the management VPC, the EC2 bastion host, and the security group enabling SSH traffic
  • mgmt_dev_vpcs_peering, containing the code to set up the VPC peering and the route table updates that allow traffic routing between the VPCs.

State is managed locally, in default terraform.tfstate files, located in each folder.

Management VPC

Let’s look at the main_mgmt.tf file first; it is located in the mgmt_vpc folder and contains the provisioning code to build the management VPC.

Define AWS Provider

First, we need to define the AWS provider:
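As a minimal sketch (the variable name and version constraint are illustrative; the definitive code is in the repository), the provider definition can look like this:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"   # illustrative version constraint
    }
  }
}

# The region comes from an input variable so the same code works in any region.
provider "aws" {
  region = var.aws_region
}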

 

Build the VPC

After that, we make use of the AWS VPC Terraform module, which creates VPC resources on AWS:
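A hedged sketch of the module call follows; the variable names and tags are illustrative and may differ from the repository code:

module "mgmt_vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = var.vpc_name   # e.g. "mgmt-vpc"
  cidr = var.vpc_cidr   # 10.20.0.0/16

  azs             = var.vpc_azs             # a single AZ, e.g. ["eu-west-1a"]
  private_subnets = var.vpc_private_subnets # ["10.20.1.0/24"]
  public_subnets  = var.vpc_public_subnets  # ["10.20.101.0/24"]

  enable_dns_support   = true
  enable_dns_hostnames = true

  enable_nat_gateway = true
  single_nat_gateway = true

  tags = var.vpc_tags
}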

 

This module defines the following attributes of our VPC, through variables defined in the variables.tf file:

  • name
  • cidr
  • availability zones
  • private and public subnets
  • dns resolution
  • nat gateway presence

IPv4 Addressing Scheme Definition

  • CIDR: 10.20.0.0/16
  • private subnet: 10.20.1.0/24
  • public subnet: 10.20.101.0/24

Enable SSH traffic

To allow access to the bastion host from your local machine, we need to enable SSH traffic by using the security group created through the VPC module above:
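A sketch of the rule, assuming the default security group created by the VPC module is reused (the variable holding the allowed source CIDRs is illustrative):

# Allow inbound SSH to the security group created by the VPC module.
# Restricting the source CIDRs to your own public IP is safer than 0.0.0.0/0.
resource "aws_security_group_rule" "mgmt_ssh_in" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = var.ssh_allowed_cidrs   # e.g. ["<your public IP>/32"]
  security_group_id = module.mgmt_vpc.default_security_group_id
}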

 

To access the bastion host from your local machine via SSH, you need to create an SSH key pair. To do so, launch the following command from your terminal:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f $HOME/.ssh/your_key_name

 

By default, your newly generated key files (private and public) will be stored in ~/.ssh folder.

Then open your_key_name.pub and copy its content; paste it into the ssh_public_key variable in the mgmt_vpc/variables.tf file.

The filename your_key_name is referenced in the mgmt_vpc Terraform code, and the .pub file contents have to be added to the variables.tf file.

To let your bastion host access the private instance on the peered VPC via SSH, you need to set up SSH agent forwarding on your local machine.

To associate the newly defined key, we need to add the following resource:
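A sketch of the resource, assuming the public key is passed in through the ssh_public_key variable mentioned above:

resource "aws_key_pair" "mgmt_key" {
  key_name   = var.key_name        # e.g. "your_key_name"
  public_key = var.ssh_public_key  # contents of your_key_name.pub
}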

 

Build the EC2 Bastion Host

To build an EC2 instance, we use the AWS EC2 Instance Terraform module:
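As a sketch (variable names are illustrative; the 2.x version of the module is assumed here because it supports instance_count):

module "bastion_host" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 2.0"

  name           = var.instance_name   # e.g. "mgmt-bastion"
  instance_count = var.instance_count  # 1

  ami                         = var.instance_ami   # latest Ubuntu AMI id
  instance_type               = var.instance_type  # e.g. "t2.micro" (free tier)
  key_name                    = aws_key_pair.mgmt_key.key_name
  vpc_security_group_ids      = [module.mgmt_vpc.default_security_group_id]
  subnet_id                   = module.mgmt_vpc.public_subnets[0]  # bastion sits in the public subnet
  associate_public_ip_address = true
}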

 

This module defines the following attributes of our EC2 instance, through variables defined in the variables.tf file:

  • name
  • number of instances
  • ami
  • type of instance
  • the security group to be mapped
  • subnet id
  • key pair name, defined above

Input Values

Terraform input variables contain the configuration parameters and are located in the variables.tf file:
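An abridged, illustrative sketch of the file (the defaults shown are assumptions matching the addressing scheme above; the full file is in the repository):

variable "aws_region" {
  type    = string
  default = "eu-west-1"   # illustrative region
}

variable "vpc_cidr" {
  type    = string
  default = "10.20.0.0/16"
}

variable "vpc_private_subnets" {
  type    = list(string)
  default = ["10.20.1.0/24"]
}

variable "vpc_public_subnets" {
  type    = list(string)
  default = ["10.20.101.0/24"]
}

variable "ssh_public_key" {
  type        = string
  description = "Contents of your_key_name.pub"
}

The remaining variables (name, availability zones, AMI id, instance type, key name, tags) follow the same pattern.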

 

Output Values

Output values are configured in the file outputs.tf:
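A sketch of the outputs; the VPC id, CIDR and route table ids are exported because the peering code reads them later through the state file (the output names here are illustrative, but they are reused consistently in the peering sketch further below):

output "vpc_id" {
  value = module.mgmt_vpc.vpc_id
}

output "vpc_cidr_block" {
  value = module.mgmt_vpc.vpc_cidr_block
}

output "public_route_table_ids" {
  value = module.mgmt_vpc.public_route_table_ids
}

output "private_route_table_ids" {
  value = module.mgmt_vpc.private_route_table_ids
}

output "bastion_public_dns" {
  value = module.bastion_host.public_dns[0]  # module 2.x outputs are lists
}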

 

Development VPC

The code to build this VPC is located in the dev_vpc folder and is very similar to the code we used to provision the management VPC.

Define AWS Provider
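The provider definition mirrors the one in the management VPC; as a sketch:

provider "aws" {
  region = var.aws_region
}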

 

Build the VPC
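The module call is the same as for the management VPC; only the name and the addressing values change (sketch with illustrative variable names):

module "dev_vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = var.vpc_name   # e.g. "dev-vpc"
  cidr = var.vpc_cidr   # 10.0.0.0/16

  azs             = var.vpc_azs
  private_subnets = var.vpc_private_subnets # ["10.0.1.0/24"]
  public_subnets  = var.vpc_public_subnets  # ["10.0.101.0/24"]

  enable_dns_support   = true
  enable_dns_hostnames = true
  enable_nat_gateway   = true
  single_nat_gateway   = true
}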

 

IPv4 Addressing Scheme Definition

  • CIDR: 10.0.0.0/16
  • private subnet: 10.0.1.0/24
  • public subnet: 10.0.101.0/24

Enable SSH traffic

To allow access to the private instance from the bastion host, we need to enable SSH traffic by using the security group created through the VPC module above:
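A sketch, again assuming the default security group created by the VPC module is reused; here SSH is allowed from the management VPC CIDR, so the bastion host can reach the instance over the peering connection:

resource "aws_security_group_rule" "dev_ssh_in" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = var.ssh_allowed_cidrs   # e.g. ["10.20.0.0/16"], the mgmt VPC CIDR
  security_group_id = module.dev_vpc.default_security_group_id
}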

 

To access the private instance on the peered VPC through the bastion host, you need to create a new SSH key pair for the dev environment. To do so, launch the following command from your terminal:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f $HOME/.ssh/your_dev_key_name

 

By default, your newly generated key files (private and public) will be stored in ~/.ssh folder.

Then open your_dev_key_name.pub and copy its content; paste it into the ssh_public_key variable in the dev_vpc/variables.tf file.

The filename your_dev_key_name is referenced in the dev_vpc Terraform code, and the .pub file contents have to be added to the variables.tf file.

To let your bastion host access the private instance on the peered VPC via SSH, you need to set up SSH agent forwarding on your local machine.

To associate the newly defined key, we need to add the following resource:
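A sketch of the resource for the dev key:

resource "aws_key_pair" "dev_key" {
  key_name   = var.key_name        # e.g. "your_dev_key_name"
  public_key = var.ssh_public_key  # contents of your_dev_key_name.pub
}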

 

Build an EC2 Private Instance

To build an EC2 instance, we use the AWS EC2 Instance Terraform module:
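A sketch, symmetrical to the bastion host but placed in the private subnet and without a public IP (module version and variable names are illustrative):

module "dev_private_instance" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 2.0"

  name           = var.instance_name   # e.g. "dev-private-instance"
  instance_count = var.instance_count  # 1

  ami                    = var.instance_ami   # latest Ubuntu AMI id
  instance_type          = var.instance_type  # e.g. "t2.micro"
  key_name               = aws_key_pair.dev_key.key_name
  vpc_security_group_ids = [module.dev_vpc.default_security_group_id]
  subnet_id              = module.dev_vpc.private_subnets[0]  # private subnet, no public IP
}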

 

This module defines the same attributes of our EC2 instance as in the management VPC, through variables defined in the variables.tf file.

Input Values

This is the variables.tf file for this environment:
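It mirrors the management one; only the defaults change. An abridged, illustrative sketch:

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "vpc_private_subnets" {
  type    = list(string)
  default = ["10.0.1.0/24"]
}

variable "vpc_public_subnets" {
  type    = list(string)
  default = ["10.0.101.0/24"]
}

variable "ssh_public_key" {
  type        = string
  description = "Contents of your_dev_key_name.pub"
}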

 

Output Values

This is the outputs.tf file for this environment:
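A sketch; as for the management VPC, the values needed later by the peering code (VPC id, CIDR, route table ids) are exported, plus the private DNS name used in the final SSH test:

output "vpc_id" {
  value = module.dev_vpc.vpc_id
}

output "vpc_cidr_block" {
  value = module.dev_vpc.vpc_cidr_block
}

output "private_route_table_ids" {
  value = module.dev_vpc.private_route_table_ids
}

output "private_instance_private_dns" {
  value = module.dev_private_instance.private_dns[0]
}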

 

Peering the two VPCs

The code to establish the peering between mgmt_vpc and dev_vpc is in the main.mgmt.dev.peering.tf file, located in mgmt_dev_vpcs_peering folder.

Define AWS Provider
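Same pattern as the other two stacks; as a sketch:

provider "aws" {
  region = var.aws_region
}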

 

Accessing States

We now need to access both mgmt_vpc and dev_vpc states:
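Since both stacks keep their state locally, the terraform_remote_state data source with the local backend can be pointed at the two state files (paths are relative to the mgmt_dev_vpcs_peering folder):

# Read the state files produced by the mgmt_vpc and dev_vpc stacks.
data "terraform_remote_state" "mgmt" {
  backend = "local"
  config = {
    path = "../mgmt_vpc/terraform.tfstate"
  }
}

data "terraform_remote_state" "dev" {
  backend = "local"
  config = {
    path = "../dev_vpc/terraform.tfstate"
  }
}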

 

Peering Configuration

We can now configure VPC peering:
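A sketch, assuming each stack exposes its VPC id as the vpc_id output (as in the output sketches above); since both VPCs live in the same account and region, the connection can be auto-accepted:

resource "aws_vpc_peering_connection" "mgmt_dev" {
  vpc_id      = data.terraform_remote_state.mgmt.outputs.vpc_id
  peer_vpc_id = data.terraform_remote_state.dev.outputs.vpc_id
  auto_accept = true

  tags = {
    Name = "mgmt-dev-peering"
  }
}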

 

Routing Traffic between Peered VPCs

To enable SSH traffic to be routed between the two VPCs' subnets, the route tables created by the VPC modules need to be updated:
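A sketch of the two route resources, reusing the route table ids and CIDR blocks exported by the two stacks (output names as in the sketches above): traffic from the management public subnet, where the bastion host sits, is sent to the dev CIDR through the peering connection, and the dev private subnet gets the return route.

resource "aws_route" "mgmt_to_dev" {
  route_table_id            = data.terraform_remote_state.mgmt.outputs.public_route_table_ids[0]
  destination_cidr_block    = data.terraform_remote_state.dev.outputs.vpc_cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.mgmt_dev.id
}

resource "aws_route" "dev_to_mgmt" {
  route_table_id            = data.terraform_remote_state.dev.outputs.private_route_table_ids[0]
  destination_cidr_block    = data.terraform_remote_state.mgmt.outputs.vpc_cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.mgmt_dev.id
}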

 

Output Values

This is the outputs.tf file for this environment:
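A sketch exposing the peering connection id and its acceptance status:

output "vpc_peering_connection_id" {
  value = aws_vpc_peering_connection.mgmt_dev.id
}

output "vpc_peering_accept_status" {
  value = aws_vpc_peering_connection.mgmt_dev.accept_status
}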

 

Build the Infrastructure

Now we’re ready to provision our VPCs!

Management VPC

Cd into your mgmt_vpc folder and:

  • launch terraform init
  • launch terraform plan to check that everything is all right before actually creating infrastructure
  • launch terraform apply and enter ‘yes’ when prompted (or use terraform apply -auto-approve)

At the end, the values of the output variables defined in the outputs.tf file are displayed.

Development VPC

Cd into your dev_vpc folder and:

  • launch terraform init
  • launch terraform plan to check that everything is all right before actually creating infrastructure
  • launch terraform apply and enter ‘yes’ when prompted (or use terraform apply -auto-approve)

In the end, the values of output variables defined in outputs.tf file are displayed.

VPC Peering

Cd into your mgmt_dev_vpcs_peering folder and:

  • launch terraform init
  • launch terraform plan to check that everything is all right before actually creating infrastructure
  • launch terraform apply and enter ‘yes’ when prompted (or use terraform apply -auto-approve)

In the end, the values of output variables defined in outputs.tf file are displayed.

Inspect Your Infrastructure

Terraform writes configuration data into a file called terraform.tfstate, which you have in your local repo. This file contains the IDs and properties of the resources Terraform created so that Terraform can manage or destroy those resources going forward.

To inspect your infrastructure configuration launch terraform show from any of your environment directories.

Test SSH Connections

From your terminal, launch the command:

ssh -A ubuntu@<dns public name of your bastion host>

 

You should get a message like: “The authenticity of host ‘<dns public name>’ (‘<public ip address>’) can’t be established. ECDSA key fingerprint is <fingerprint>. Are you sure you want to continue connecting (yes/no/[fingerprint])?”. Type ‘yes’.

Now you should successfully be logged into your bastion host!

To log in to your private instance on the peered VPC, launch the following command from your bastion host:

ssh ubuntu@<dns private name of your private ec2 instance>

 

This proves that peering is working and that the ssh traffic is properly routed and enabled!

Destroy Your infrastructure

To destroy your infrastructure, launch terraform destroy -auto-approve in each folder, reversing the creation order.

Considerations On The Architecture

This use case brings modularity and segregation.

It makes it a bit more complex to handle infrastructure code changes in each environment, but it provides higher isolation than the workspace approach and eases team collaboration on the code.

This simplified set-up leverages both Terraform and AWS VPC pillars, and a production VPC can easily be added and peered to the management VPC by following the same approach.

It is important to highlight, though, that while this set-up enables a viable transition towards a production-grade configuration, there are several aspects which need a more robust set-up, such as:

  • management of remote backends for the Terraform state (such as S3 buckets)
  • central and secure management of secrets: you can use encryption or store them in a vault (such as HashiCorp Vault or AWS Secrets Manager)
  • bastion host hardening, or VPN access
  • segregation of the mgmt_vpc into dev and prod, so as to isolate the environment where you can try changes and new central services, which can then be moved into prod to serve the application environments (typically dev, staging, prod)
  • segregation of AWS accounts, leveraging AWS multi-account capabilities for each environment and centralizing account management
  • introduction of persistence layers in the application environments (e.g. for databases).