
    How To Create Your AWS VPCs Pillars With Terraform

    posted in Tech on Nov 20, 2020 by ApprenticeCTO

This post shows how Terraform provisions infrastructure resources on AWS in full automation, with a use case covering the steps needed to segregate environments and build a management VPC that offers cross-VPC services (such as monitoring, logging, and CI/CD) to a development application VPC.

Both VPCs are deployed in a single region and availability zone; each has internet access through an internet gateway and a NAT gateway.

    VPC peering ensures connectivity between the two VPCs.

    A basic EC2 instance acting as Bastion Host is deployed in the management VPC to get access to the EC2 instance built in the private subnet of the peered development VPC.

We’ll use the latest (at the time of writing) Ubuntu AMI for both EC2 instances.

    The whole set-up can be built within the AWS free-tier program.

    Terraform Code

On my GitHub account you can also find repositories to practice the earlier steps if needed, such as creating a GitHub account, building a basic EC2 instance, and building a VPC with Terraform on AWS.

The code can be cloned from my GitHub repository.

    Code structure is the following:

• dev_vpc, containing the code to set up the development VPC, the EC2 private instance, and the security group enabling SSH traffic
• mgmt_vpc, containing the code to set up the management VPC, the EC2 bastion host, and the security group enabling SSH traffic
• mgmt_dev_vpcs_peering, containing the code to set up the VPC peering and the route table updates that allow traffic routing between the VPCs.

    State is managed locally, in default terraform.tfstate files, located in each folder.

    Management VPC

Let’s look at the main_mgmt.tf file first, located in the mgmt_vpc folder and containing the provisioning code to build the management VPC.

    Define AWS Provider

    First, we need to define the AWS provider:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

    Build the VPC

    After that, we make use of the AWS VPC Terraform module, which creates VPC resources on AWS:

module "mgmt_vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.64"

  name = var.vpc_name
  cidr = var.vpc_cidr

  azs             = var.vpc_azs
  private_subnets = var.vpc_private_subnets
  public_subnets  = var.vpc_public_subnets

  enable_dns_hostnames = true
  enable_dns_support   = true
  enable_nat_gateway   = true

  tags = var.vpc_tags
}

This module defines the following attributes of our VPC, through variables defined in the variables.tf file (a rough sketch of the equivalent raw resources follows the addressing scheme below):

    • name
    • cidr
    • availability zones
• private and public subnets
    • dns resolution
    • nat gateway presence

    IPv4 Addressing Scheme Definition

    • CIDR: 10.20.0.0/16
    • private subnet: 10.20.1.0/24
    • public subnet: 10.20.101.0/24
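
For reference, here is a rough, illustrative sketch of the kind of raw resources the module provisions for this configuration (single AZ, one public and one private subnet). It is not the module's actual code, just an approximation of what the module saves you from writing by hand:

# Illustrative approximation of what the VPC module builds for this set-up.
resource "aws_vpc" "mgmt" {
  cidr_block           = "10.20.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.mgmt.id
  cidr_block              = "10.20.101.0/24"
  availability_zone       = "eu-central-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.mgmt.id
  cidr_block        = "10.20.1.0/24"
  availability_zone = "eu-central-1a"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.mgmt.id
}

# The NAT gateway lives in the public subnet and gives the private subnet outbound internet access.
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# The module also creates public and private route tables, a default route to the internet
# gateway (public) and to the NAT gateway (private), and the subnet route table associations.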

    Enable SSH traffic

To allow access to the bastion host from your local machine, we need to enable SSH traffic with a security group attached to the VPC created by the module above:

resource "aws_security_group" "allow_ssh" {
  name        = "allow_ssh"
  description = "Allow SSH inbound traffic"
  vpc_id      = module.mgmt_vpc.vpc_id

  ingress {
    description = "SSH incoming"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_ssh"
  }
}

To access the bastion host from your local machine, you need to create an SSH key pair. To do so, launch this command from your terminal:

    ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f $HOME/.ssh/your_key_name
    

     

By default, your newly generated key files (private and public) will be stored in the ~/.ssh folder.

Then open your_key_name.pub, copy its content, and paste it into the ssh_public_key variable in the mgmt_vpc/variables.tf file.

The key name your_key_name is referenced in the mgmt_vpc Terraform code, and the .pub file contents have to be added to the variables.tf file.

To let your bastion host access the private instance on the peered VPC via SSH, you need to set up SSH agent forwarding on your local machine.
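
Agent forwarding is a local OpenSSH setting rather than part of the Terraform code; a minimal sketch, assuming the key file name used above, looks like this:

# Start the SSH agent (if not already running) and load your private key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/your_key_name

# Later, connect to the bastion with agent forwarding enabled (-A),
# so the bastion can use your local key to reach the private instance
ssh -A ubuntu@<dns public name of your bastion host>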

    To associate the newly defined key, we need to add the following resource:

resource "aws_key_pair" "bastion_key" {
  key_name   = "bastion_host_key_aws"
  public_key = var.ssh_public_key
}

Build an EC2 Bastion Host Instance

    To build an EC2 instance, we use the AWS EC2 Instance Terraform module:

module "ec2_instances" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 2.15"

  name           = var.ec2_instances_name
  instance_count = 1

  ami                    = var.ec2_instances_ami
  instance_type          = var.ec2_instances_type
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]
  subnet_id              = module.mgmt_vpc.public_subnets[0]
  key_name               = "bastion_host_key_aws"

  tags = {
    Terraform   = "true"
    Environment = "mgmt"
  }
}

This module defines the following attributes of our EC2 instance, through variables defined in the variables.tf file (see the note on the AMI after the list):

    • name
    • number of instances
    • ami
    • type of instance
    • the security group to be mapped
    • subnet id
    • key pair name, defined above
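
The AMI ID is hardcoded as a variable default below. As an alternative, a hedged sketch of resolving the latest Ubuntu 20.04 AMI at plan time with an aws_ami data source could look like this (the name filter and Canonical owner ID are assumptions to adapt to your Ubuntu release and region):

# Hypothetical alternative to the ec2_instances_ami variable:
# look up the most recent Ubuntu 20.04 LTS AMI published by Canonical.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# It would then be referenced in the module as:
#   ami = data.aws_ami.ubuntu.id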

    Input Values

    Terraform Input Variables contain configuration parameters, located in the variables.tf file:

variable "vpc_name" {
  description = "Name of VPC"
  type        = string
  default     = "mgmt_vpc"
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.20.0.0/16"
}

variable "vpc_azs" {
  description = "Availability zones for VPC"
  type        = list(string)
  default     = ["eu-central-1a"]
}

variable "vpc_private_subnets" {
  description = "Private subnets for VPC"
  type        = list(string)
  default     = ["10.20.1.0/24"]
}

variable "vpc_public_subnets" {
  description = "Public subnets for VPC"
  type        = list(string)
  default     = ["10.20.101.0/24"]
}

variable "vpc_tags" {
  description = "Tags to apply to resources created by VPC module"
  type        = map(string)
  default = {
    Terraform   = "true"
    Environment = "mgmt"
  }
}

variable "ec2_instances_name" {
  description = "ec2 instance name"
  type        = string
  default     = "bastion-host"
}

variable "ec2_instances_ami" {
  description = "ec2 instance ami"
  type        = string
  default     = "ami-0c960b947cbb2dd16"
}

variable "ec2_instances_type" {
  description = "ec2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "ssh_public_key" {
  description = "bastion host ssh public key"
  type        = string
  default     = "paste your public key file content"
}

    Outputs Values

    Output values are configured in the file outputs.tf:

output "mgmt_vpc_public_subnet" {
  description = "IDs of the VPC's public subnets"
  value       = module.mgmt_vpc.public_subnets
}

output "mgmt_vpc_id" {
  description = "ID of the VPC"
  value       = module.mgmt_vpc.vpc_id
}

output "mgmt_vpc_public_subnets_cidr_block" {
  description = "CIDR block of the VPC's public subnet"
  value       = module.mgmt_vpc.public_subnets_cidr_blocks[0]
}

output "mgmt_vpc_public_route_table_id" {
  description = "ID of the VPC's public route table"
  value       = module.mgmt_vpc.public_route_table_ids[0]
}

output "mgmt_ec2_instance_public_ip" {
  description = "Public IP address of the EC2 instance"
  value       = module.ec2_instances.public_ip
}

    Development VPC

The code to build this VPC is located in the dev_vpc folder and is very similar to the code we used to provision the management VPC.

    Define AWS Provider

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

    Build the VPC

module "dev_vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.64"

  name = var.vpc_name
  cidr = var.vpc_cidr

  azs             = var.vpc_azs
  private_subnets = var.vpc_private_subnets
  public_subnets  = var.vpc_public_subnets

  enable_dns_hostnames = true
  enable_dns_support   = true
  enable_nat_gateway   = true

  tags = var.vpc_tags
}

    IPv4 Addressing Scheme Definition

    • CIDR: 10.0.0.0/16
    • private subnet: 10.0.1.0/24
    • public subnet: 10.0.101.0/24

    Enable SSH traffic

To allow SSH access to the private instance from the bastion host, we need to enable SSH traffic with a security group attached to the VPC created by the module above:

resource "aws_security_group" "allow_dev_ssh" {
  name        = "allow_dev_ssh"
  description = "Allow SSH inbound traffic"
  vpc_id      = module.dev_vpc.vpc_id

  ingress {
    description = "SSH incoming"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_dev_ssh"
  }
}

To access the private EC2 instance in this VPC via SSH, you need to create a new SSH key pair for the dev environment. To do so, launch this command from your terminal:

    ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f $HOME/.ssh/your_dev_key_name
    

     

By default, your newly generated key files (private and public) will be stored in the ~/.ssh folder.

Then open your_dev_key_name.pub, copy its content, and paste it into the ssh_public_key_private_subnet variable in the dev_vpc/variables.tf file.

The key name your_dev_key_name is referenced in the dev_vpc Terraform code, and the .pub file contents have to be added to the variables.tf file.

As described above, to let your bastion host access this private instance via SSH across the peered VPCs, you need SSH agent forwarding set up on your local machine.

    To associate the newly defined key, we need to add the following resource:

resource "aws_key_pair" "dev_vpc_key" {
  key_name   = "dev_vpc_key_pair"
  public_key = var.ssh_public_key_private_subnet
}

    Build an EC2 Private Instance

    To build an EC2 instance, we use the AWS EC2 Instance Terraform module:

module "ec2_instances" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 2.15"

  name           = var.ec2_instances_name
  instance_count = 1

  ami                    = var.ec2_instances_ami
  instance_type          = var.ec2_instances_type
  vpc_security_group_ids = [aws_security_group.allow_dev_ssh.id]
  subnet_id              = module.dev_vpc.private_subnets[0]
  key_name               = "dev_vpc_key_pair"

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}

    This module defines the following attributes of our EC2 instance, through variables which are defined in variables.tf file.

    Input Values

    This is the variables.tf file for this environment:

variable "vpc_name" {
  description = "Name of VPC"
  type        = string
  default     = "dev_vpc"
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "vpc_azs" {
  description = "Availability zones for VPC"
  type        = list(string)
  default     = ["eu-central-1a"]
}

variable "vpc_private_subnets" {
  description = "Private subnets for VPC"
  type        = list(string)
  default     = ["10.0.1.0/24"]
}

variable "vpc_public_subnets" {
  description = "Public subnets for VPC"
  type        = list(string)
  default     = ["10.0.101.0/24"]
}

variable "vpc_tags" {
  description = "Tags to apply to resources created by VPC module"
  type        = map(string)
  default = {
    Terraform   = "true"
    Environment = "dev"
  }
}

variable "ec2_instances_name" {
  description = "ec2 instance name"
  type        = string
  default     = "private_ec2_dev_instance"
}

variable "ec2_instances_ami" {
  description = "ec2 instance ami"
  type        = string
  default     = "ami-0c960b947cbb2dd16"
}

variable "ec2_instances_type" {
  description = "ec2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "ssh_public_key_private_subnet" {
  description = "private subnet instances ssh public key"
  type        = string
  default     = "paste your public key file content"
}

    Outputs Values

    This is the outputs.tf file for this environment:

output "dev_vpc_private_subnet" {
  description = "IDs of the VPC's private subnets"
  value       = module.dev_vpc.private_subnets
}

output "vpc_id" {
  description = "ID of the VPC"
  value       = module.dev_vpc.vpc_id
}

output "dev_vpc_private_route_table_id" {
  description = "ID of the VPC's private route table"
  value       = module.dev_vpc.private_route_table_ids[0]
}

output "dev_vpc_public_subnets_cidr_block" {
  description = "CIDR block of the VPC's public subnet"
  value       = module.dev_vpc.public_subnets_cidr_blocks[0]
}

output "dev_vpc_private_subnets_cidr_block" {
  description = "CIDR block of the VPC's private subnet"
  value       = module.dev_vpc.private_subnets_cidr_blocks[0]
}

output "dev_ec2_instance_private_ip" {
  description = "Private IP address of the EC2 instance"
  value       = module.ec2_instances.private_ip
}

    Peering the two VPCs

    The code to establish the peering between mgmt_vpc and dev_vpc is in the main.mgmt.dev.peering.tf file, located in mgmt_dev_vpcs_peering folder.

    Define AWS Provider

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

    Accessing States

    We now need to access both mgmt_vpc and dev_vpc states:

data "terraform_remote_state" "mgmt_vpc" {
  backend = "local"
  config = {
    path = "../mgmt_vpc/terraform.tfstate"
  }
}

data "terraform_remote_state" "dev_vpc" {
  backend = "local"
  config = {
    path = "../dev_vpc/terraform.tfstate"
  }
}

    Peering Configuration

    We can now configure VPC peering:

resource "aws_vpc_peering_connection" "mgmt_vpc" {
  vpc_id      = data.terraform_remote_state.mgmt_vpc.outputs.mgmt_vpc_id
  peer_vpc_id = data.terraform_remote_state.dev_vpc.outputs.vpc_id
  auto_accept = true
}

resource "aws_vpc_peering_connection_options" "mgmt_vpc" {
  vpc_peering_connection_id = aws_vpc_peering_connection.mgmt_vpc.id

  accepter {
    allow_remote_vpc_dns_resolution = true
  }

  requester {
    allow_vpc_to_remote_classic_link = false
    allow_classic_link_to_remote_vpc = false
  }
}

    Routing Traffic between Peered VPCs

To enable SSH traffic to be routed between the two VPCs' subnets, the route tables created by the VPC modules need to be updated:

resource "aws_route" "mgmt_route_table_dev_peer" {
  route_table_id            = data.terraform_remote_state.mgmt_vpc.outputs.mgmt_vpc_public_route_table_id
  destination_cidr_block    = data.terraform_remote_state.dev_vpc.outputs.dev_vpc_private_subnets_cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.mgmt_vpc.id
}

resource "aws_route" "dev_route_table_mgmt_peer" {
  route_table_id            = data.terraform_remote_state.dev_vpc.outputs.dev_vpc_private_route_table_id
  destination_cidr_block    = data.terraform_remote_state.mgmt_vpc.outputs.mgmt_vpc_public_subnets_cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.mgmt_vpc.id
}

    Outputs Values

    This is the outputs.tf file for this environment:

output "mgmt_dev_vpcs_peering_id" {
  description = "ID of the VPC peering between mgmt and dev vpcs"
  value       = aws_vpc_peering_connection.mgmt_vpc.id
}

    Build the Infrastructure

Now we’re ready to provision our VPCs! (The full command sequence is recapped after the three steps below.)

    Management VPC

    Cd into your mgmt_vpc folder and:

    • launch terraform init
    • launch terraform plan to check that everything is all right before actually creating infrastructure
    • launch terraform apply and enter ‘yes’ when prompted (or use terraform apply -auto-approve)

At the end, the values of output variables defined in the outputs.tf file are displayed.

    Development VPC

Cd into your dev_vpc folder and:

    • launch terraform init
    • launch terraform plan to check that everything is all right before actually creating infrastructure
    • launch terraform apply and enter ‘yes’ when prompted (or use terraform apply -auto-approve)

    In the end, the values of output variables defined in outputs.tf file are displayed.

    VPC Peering

    Cd into your mgmt_dev_vpcs_peering folder and:

    • launch terraform init
    • launch terraform plan to check that everything is all right before actually creating infrastructure
    • launch terraform apply and enter ‘yes’ when prompted (or use terraform apply -auto-approve)

    In the end, the values of output variables defined in outputs.tf file are displayed.
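
Putting the three steps together, the full command sequence from the repository root might look like this (a sketch; the two VPCs must be applied before the peering step, which reads their local state files):

# 1. Management VPC (bastion host)
cd mgmt_vpc
terraform init && terraform apply -auto-approve

# 2. Development VPC (private instance)
cd ../dev_vpc
terraform init && terraform apply -auto-approve

# 3. Peering and route updates (reads ../mgmt_vpc and ../dev_vpc state)
cd ../mgmt_dev_vpcs_peering
terraform init && terraform apply -auto-approve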

    Inspect Your Infrastructure

Terraform records the resources it manages in a file called terraform.tfstate, stored here in your local repo. This file contains the IDs and properties of the resources Terraform created, so that Terraform can manage or destroy those resources going forward.

To inspect your infrastructure configuration, launch terraform show from any of your environment directories.
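
For example, to display the management environment's resources and retrieve the bastion's public IP (the mgmt_ec2_instance_public_ip output defined earlier), you could run:

cd mgmt_vpc
terraform show                                 # human-readable dump of the resources in the local state
terraform output mgmt_ec2_instance_public_ip   # public IP of the bastion host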

    Test SSH Connections

    From your terminal, launch the command:

    ssh -A ubuntu@<dns public name of your bastion host>
    

     

You should get a message like: “The authenticity of host ‘dns public name’ (‘public ip address’) can’t be established. ECDSA key fingerprint is … . Are you sure you want to continue connecting (yes/no/[fingerprint])?” Type ‘yes’ to continue.

    Now you should successfully be logged into your bastion host!

To log in to your private instance on the peered VPC, launch the command from your bastion host:

    ssh ubuntu@<dns private name of your private ec2 instance>
    

     

This proves that the peering is working and that SSH traffic is properly routed and enabled!

    Destroy Your infrastructure

To destroy your infrastructure, run terraform destroy -auto-approve in each folder, reversing the creation order.
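
In practice that means tearing down the peering first, then the two VPCs, for example:

cd mgmt_dev_vpcs_peering && terraform destroy -auto-approve
cd ../dev_vpc && terraform destroy -auto-approve
cd ../mgmt_vpc && terraform destroy -auto-approve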

    Considerations On The Architecture

    This use case brings modularity and segregation.

It makes it a bit more complex to handle infrastructure code changes in each environment, but it provides higher isolation than the workspace approach and eases team collaboration on the code.

This simplified set-up leverages both Terraform and the AWS VPC pillars, and a production VPC can easily be added and peered to the management VPC by following the same approach.

    It is important to highlight, though, that while this set-up enables a viable transition towards a production-grade configuration, there are several aspects which need a more robust set-up, such as:

• management of remote backends for the Terraform state (such as S3 buckets); see the sketch after this list
• central and secure management of secrets: you can use encryption or store them in vaults (such as HashiCorp Vault or AWS Secrets Manager)
• bastion host hardening, or VPN access
• segregation of mgmt_vpc into dev and prod, so as to isolate the environment where you can try out changes and new central services, which can then be promoted to prod to serve the application environments (typically dev, staging, prod)
• segregation of AWS accounts, leveraging AWS multi-account capabilities for each environment, and centralization of account management
• introduction of persistence layers in the application environments (e.g. for databases).
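
As an illustration of the first point, a minimal remote backend sketch might look like the following (the bucket, key, and DynamoDB table names are placeholders, and the bucket and lock table must be created beforehand):

# Hypothetical remote backend replacing the local terraform.tfstate file,
# with S3 for state storage and DynamoDB for state locking.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder
    key            = "mgmt_vpc/terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "terraform-state-lock"      # placeholder
    encrypt        = true
  }
}

These hardening steps go beyond this walkthrough, but they are natural next iterations of the same Terraform codebase.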
    Latest posts

    Managing IT Risks Effectively

    From Overwhelming Obsolscence To Effective Lifecycle Management

    Spread Valuable Tech Capabilities Before Building New Ones

    • M
    • @
    2022 © ApprenticeCTO. (Built on Recked Theme - Photo by Ricardo Rocha on Unsplash)
    Privacy