Launching a Web Server using AWS EFS, Terraform, and GitHub

Piyush Mehta
Oct 7, 2020

Task Description:

Create/launch an application using Terraform:

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. For this EC2 instance, use an existing or newly provided key and the security group created in step 1.

4. Launch one volume using the EFS service, attach it in your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public-readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Note: EFS (Elastic File System) is a network file storage service, accessed over NFS, which is automatically scalable; storage grows and shrinks with the workload, which makes it a better choice than EBS (Elastic Block Store) for this task of running a web server.
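To see that elasticity in the code itself, compare the two resources below: an EBS volume must be declared with a fixed size, while an EFS file system takes no size argument at all. This is a minimal sketch for comparison only; the `aws_ebs_volume` resource, its names, and the availability zone are illustrative and not part of this task.

```hcl
resource "aws_ebs_volume" "comparison" {
  availability_zone = "ap-south-1a" // illustrative AZ
  size              = 10            // EBS: capacity is fixed up front, in GiB
}

resource "aws_efs_file_system" "comparison" {
  creation_token = "demo" // EFS: no size argument; capacity grows and shrinks with usage
}
```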

So let's begin writing our Terraform code.

The first step is to declare the AWS provider with the profile and region we want to work in:

provider "aws" {
  region  = "ap-south-1"
  profile = "Kaizoku" // Replace with your profile
}

Then we'll declare a variable holding the GitHub repo link where the source code for the site is stored:

variable "github_url" {
  type    = string
  default = "https://github.com/kaiz-O/animesite" // Replace with your GitHub repo link
}
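If you'd rather not edit the default in place, the same variable can also be overridden from a separate terraform.tfvars file (an optional sketch; the URL shown is the same one as above):

```hcl
# terraform.tfvars -- values defined here override the variable's default
github_url = "https://github.com/kaiz-O/animesite"
```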

Then we'll create a new key pair:

// Key generation using the RSA algorithm
resource "tls_private_key" "KeyGen" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

// Copying the key content to a local file
resource "local_file" "KeyFile" {
  content         = tls_private_key.KeyGen.private_key_pem
  filename        = "key_pair.pem"
  file_permission = "0400"
}

// Registering the public key with AWS
resource "aws_key_pair" "KeyAWS" {
  key_name   = "TerraKey"
  public_key = tls_private_key.KeyGen.public_key_openssh
}

Then we’ll create a new Security Group

resource "aws_security_group" "SecurityGroupGen" {
  name        = "TerraSG"
  description = "security_group_with_SSH_and_HTTP_access"

  // Inbound rule which allows connections from ALL IPs on port 22
  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Inbound rule which allows connections from ALL IPs on port 80
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Inbound rule which allows connections from ALL IPs on port 2049
  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Outbound rule which allows connections to ALL IPs on all ports
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "TerraSG"
  }
}

Now we’ll launch our new EC2 instance using the above Keypair and Security Group

resource "aws_instance" "MyOS" {
  depends_on = [
    aws_security_group.SecurityGroupGen,
    aws_key_pair.KeyAWS
  ]

  ami           = "ami-052c08d70def0ac62" // RedHat AMI
  instance_type = "t2.micro"
  key_name      = aws_key_pair.KeyAWS.key_name
  security_groups = [
    aws_security_group.SecurityGroupGen.name
  ]

  tags = {
    Name = "MyInstance"
  }
}

We'll create an EFS file system:

resource "aws_efs_file_system" "efs" {
  depends_on = [
    aws_instance.MyOS
  ]

  creation_token = "volume"

  tags = {
    Name = "StorageEFS"
  }
}

Then we'll attach the EFS file system to the subnet of our EC2 instance via a mount target:

resource "aws_efs_mount_target" "alpha" {
  depends_on = [
    aws_efs_file_system.efs
  ]

  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = aws_instance.MyOS.subnet_id
  security_groups = [
    aws_security_group.SecurityGroupGen.id
  ]
}

Now we'll connect to our EC2 instance over SSH, using the public IP and the key pair we generated earlier, and execute some commands:

resource "null_resource" "connectSSH" {
  depends_on = [
    aws_efs_mount_target.alpha
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.KeyGen.private_key_pem
    host        = aws_instance.MyOS.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      // Install httpd and git
      "sudo yum install httpd git -y",
      // Mount the EFS file system (over NFSv4) onto the /var/www/html directory
      "sudo mount -t nfs4 -o nfsvers=4.1 ${aws_efs_file_system.efs.dns_name}:/ /var/www/html",
      // Remove all existing files from the directory
      "sudo rm -rf /var/www/html/*",
      // Clone the content of our repository into the directory
      "sudo git clone ${var.github_url} /var/www/html/",
      // Start the web server
      "sudo systemctl start httpd",
      // Optional: makes the server start every time the OS boots
      "sudo systemctl enable httpd"
    ]
  }
}

We'll now create an S3 bucket and store our image file in it:

resource "aws_s3_bucket" "bucket" {
  bucket        = "bucketanimesite" // Enter a unique bucket name
  acl           = "public-read"
  force_destroy = true
}

// Bucket Creation

resource "null_resource" "gclone" {
  depends_on = [aws_s3_bucket.bucket]

  provisioner "local-exec" {
    command = "git clone ${var.github_url}"
  }
}

// Cloning the repo onto our local computer

resource "aws_s3_bucket_object" "update_bucket" {
  depends_on = [aws_s3_bucket.bucket, null_resource.gclone]

  bucket = aws_s3_bucket.bucket.id
  source = "animesite/onepiecebg.jpg" // Source of the file
  key    = "onepiecebg.jpg"           // Name to save the file as
  acl    = "public-read"
}

Since our file is stored in S3, we can use CloudFront for better content delivery: it caches the file at edge locations for faster loading, which improves the user experience.

resource "aws_cloudfront_distribution" "cfront" {
  depends_on = [
    aws_s3_bucket.bucket,
    null_resource.gclone,
    aws_s3_bucket_object.update_bucket
  ]

  origin {
    domain_name = aws_s3_bucket.bucket.bucket_regional_domain_name
    origin_id   = "newImage"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "newImage"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

Now we'll update the file URL in our code to the new CloudFront URL:

resource "null_resource" "code_update" {
  depends_on = [
    aws_cloudfront_distribution.cfront,
    null_resource.connectSSH
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.KeyGen.private_key_pem
    host        = aws_instance.MyOS.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chown ec2-user /var/www/html/onepiece.css",
      // Append a rule pointing the background image at the CloudFront domain.
      // tee is used instead of a plain >> redirect, since the redirect would run without sudo.
      "echo 'body { background-image: url(https://${aws_cloudfront_distribution.cfront.domain_name}/onepiecebg.jpg); }' | sudo tee -a /var/www/html/onepiece.css",
      "sudo systemctl restart httpd"
    ]
  }
}

Finally, our web server is ready. We want our site to open automatically in the browser, and the IP address should also be displayed in the terminal:

resource "null_resource" "chrome" {
  depends_on = [null_resource.code_update]

  provisioner "local-exec" {
    // Assumes a "chrome" command is available on the local machine's PATH
    command = "chrome ${aws_instance.MyOS.public_ip}"
  }
}

output "AddressOfSite" {
  value = aws_instance.MyOS.public_ip
}
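If you also want the CloudFront domain printed alongside the IP (handy for checking the image URL by hand), an extra output block can be added; this is optional and not part of the original task:

```hcl
output "CloudFrontDomain" {
  value = aws_cloudfront_distribution.cfront.domain_name
}
```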

Now save and close the file with the .tf extension, open the directory where it is saved in any terminal/CMD, and run the following commands:

terraform init
// Downloads the required plugins

terraform validate
// Checks for any errors

terraform apply --auto-approve
// Runs the Terraform code and creates an infrastructure containing all the resources we have written

terraform destroy --auto-approve
// After our work is done, we can destroy the complete infrastructure with a single command. Isn't it AWESOME!!!

Thank you, hope you like it :)

For the source code, click here.
