Provisioning RDS Instances using Terraform

In this blog post, we will learn how to provision RDS instances using Terraform, an Infrastructure as Code tool.

Table of Contents

●    What is Terraform?
●    What is RDS?
●    Installation of Terraform
●    Installation of AWS CLI
●    Configuring AWS CLI
●    Create a Working directory for Terraform
●    Understanding Terraform files
●    Launching an RDS instance from a snapshot
●    Launching an RDS instance from scratch

Prerequisites

●    Installation of Terraform
●    Installation of AWS CLI
●    IAM user Access key and Secret Key

What is Terraform?

●    An Infrastructure as Code tool to create, modify, and delete resources as required.
●    Supports clouds such as AWS, Azure, GCP, DigitalOcean, IBM Cloud, etc.

What is RDS?

RDS stands for Relational Database Service.

Amazon RDS provides an interface to easily create and manage relational databases in the cloud.

It provides salient features such as replication, security, scalability, and high availability with auto-failover.

Amazon RDS provides DB engine types such as Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. 

With the help of DMS (AWS Database Migration Service), you can easily migrate an existing database to RDS.

Installing Terraform

Install Terraform using one of the below-mentioned methods:
1.    Using the binary package (.zip)
2.    Compiling from source

https://learn.hashicorp.com/tutorials/terraform/install-cli

From the above link, download the appropriate Terraform package and install it.
Run the below command to check the installed version.

terraform -v

Installing AWS CLI

A command-line tool to create and manage AWS resources programmatically.
Install the AWS CLI using the below command (on Debian/Ubuntu):

sudo apt-get install awscli

Run the below command to check the version of the AWS CLI.

aws --version


Configuring AWS CLI

A profile can be configured so that it can be used by Terraform for authentication.

With programmatic access, users are provided an AWS access key and secret key.

Enter the keys and the region when prompted while executing the below command.

aws configure

Understanding Terraform Files

variables.tf:

A file that holds the access key, secret key, and the region of AWS.

What not to do with access keys?
Do not hard-code the keys in a file.

What should we do?
Use an AWS CLI profile to pass the keys.

Then we will add the AWS keys to the ~/.aws/credentials file.
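As a minimal sketch (the variable name and default value here are illustrative, not from the original scripts), variables.tf could declare the region like this:

```hcl
# variables.tf - a minimal sketch; variable name and default are illustrative
variable "region" {
  description = "AWS region to provision resources in"
  default     = "us-east-1"
}
```

Keeping only non-secret values such as the region in variables.tf, and letting the CLI profile supply the keys, avoids hard-coding credentials.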

providers.tf:
A plugin that Terraform installs in order to communicate with the respective provider.

Terraform supports providers such as AWS, Microsoft Azure, GCP, IBM Cloud, Oracle Cloud, and DigitalOcean.

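A minimal providers.tf might look like the following; the profile name "default" is an assumption and should match the profile set up with aws configure:

```hcl
# providers.tf - a minimal sketch; the "default" profile is an assumption
provider "aws" {
  region  = "${var.region}"   # assumes a "region" variable is declared in variables.tf
  profile = "default"         # credentials are read from ~/.aws/credentials
}
```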


main.tf:
A file that contains the template to provision the resources.

A custom file name can be used instead of main.tf.

Launch an RDS Instance using a Snapshot

Sometimes there is a requirement to launch a new RDS instance from an existing one. In this case, we create a snapshot of the existing RDS instance and use it to launch a new RDS instance with the same data.

Let's assume you already have a snapshot in place for an RDS instance. Now we can go ahead and create a DB instance using it.

Here is the Terraform script for it.

We are checking for the latest snapshot of the "dbinstance" DB instance:

data "aws_db_snapshot" "db_snapshot" {
  most_recent            = true
  db_instance_identifier = "dbinstance"
}


Pass the snapshot_identifier in the template to launch the RDS instance from the snapshot. The template is below:

resource "aws_db_instance" "db_sample" {
  instance_class         = "db.t2.medium"
  identifier             = "sampledb"
  username               = "sample"
  password               = "Sample@#5832"
  publicly_accessible    = false
  db_subnet_group_name   = "${aws_db_subnet_group.db-subnet.name}"
  snapshot_identifier    = "${data.aws_db_snapshot.db_snapshot.id}"
  vpc_security_group_ids = ["sg-00g52b79"]
  skip_final_snapshot    = true
}


We can configure the template as required.

Execute the terraform apply command to launch an RDS instance from the existing snapshot.

Launch RDS Instance from Scratch

If you're launching an RDS instance for the first time and want it in a desired VPC and subnet group, we need to create resources such as subnet groups, security groups, and parameter groups first.

If not, use the below Terraform script to launch your first RDS instance with the default networking.


resource "aws_db_instance" "default" {
  allocated_storage    = 50
  identifier           = "sampleinstance"
  storage_type         = "gp2"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.m4.large"
  name                 = "sample"
  username             = "dbadmin"
  password             = "DBAdmin@5#41$32"
  parameter_group_name = "default.mysql5.7"
}


aws_db_instance – the RDS instance resource

identifier – a unique name for the DB instance

engine_version – the DB engine version to use
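As an optional sketch (not part of the original script), an output can surface the instance endpoint after apply, so you know where to connect:

```hcl
# outputs.tf - optional sketch; prints the connection endpoint after apply
output "db_endpoint" {
  value = "${aws_db_instance.default.endpoint}"
}
```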

If you want to launch RDS instances in a custom VPC and subnet group, you can create them using Terraform as well.

First, create the VPC where you want to launch the RDS instance:

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}


A subnet group (collection of subnets) is a minimum requirement before creating an RDS Instance.

Let's create subnets from different availability zones.

Private subnet in AZ-A:

resource "aws_subnet" "priv-subnet1" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.2.0/24"
  availability_zone = "AZ-a of the Region"
}


Private subnet in AZ-B:

resource "aws_subnet" "priv-subnet2" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.3.0/24"
  availability_zone = "AZ-b of the Region"
}

Now we can create a subnet group using the above subnets A and B:

resource "aws_db_subnet_group" "db-subnet" {
  name       = "db-subnet-group"
  subnet_ids = ["${aws_subnet.priv-subnet1.id}", "${aws_subnet.priv-subnet2.id}"]
}


And we must pass the db_subnet_group_name parameter in the main script to use the subnet group we have created:

db_subnet_group_name = "${aws_db_subnet_group.db-subnet.name}"

Once we have the Terraform scripts ready, we can execute the following commands to launch the RDS instance.

terraform init

terraform plan

terraform apply

That's it — we have created an RDS instance from scratch using Terraform in a custom VPC.

