Provision AWS EKS Cluster using Terraform

Pium Sudhara
6 min read · Apr 6, 2022


Hello everyone! In my previous article, Infrastructure on AWS With Terraform | by Pium Sudhara | Medium, I talked about Terraform, the most popular open-source Infrastructure as Code (IaC) tool: what Infrastructure as Code is, the common IaC tools, and how to create a simple Nginx server using Terraform. Today I will explain, step by step, how to create an AWS EKS cluster using Terraform. First of all, let’s talk about AWS EKS.

What Is AWS EKS?

Kubernetes (K8s) is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. There are many ways to deploy a Kubernetes cluster: you can use Minikube, kubeadm, or kOps, or a cloud provider’s managed Kubernetes service such as AWS EKS, Google GKE, or Azure AKS.

Image source: Managed Kubernetes Service — Amazon EKS — Amazon Web Services

Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service provided by AWS that lets you run and scale your Kubernetes applications in the cloud or on-premises. You can run Kubernetes without installing and operating the Kubernetes control plane or worker nodes yourself.

There are several benefits to using AWS EKS:

  • High Availability
  • Highly Secure K8s Environment
  • Provision Your Resources For Scale

You can also run EKS in the cloud, in your on-premises data center, or on self-managed resources (VMs).

What Is Terraform?

As mentioned in my previous article, Infrastructure on AWS With Terraform | by Pium Sudhara | Medium, Terraform is “the most popular open-source infrastructure as code software tool that enables you to safely and predictably create, change, and improve infrastructure.”

In Terraform, there are four steps to provisioning infrastructure resources (a quick sketch follows the list):

  • init: Initialize the Terraform working directory and download providers and modules.
  • plan: Preview the changes before applying them.
  • apply: Create the resources or apply changes to the deployment.
  • destroy: Destroy all of your resources.
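In practice, a full provision-and-teardown cycle looks like this; run destroy only when you want to remove everything:

terraform init       # download providers and modules
terraform plan       # preview the changes
terraform apply      # create or update the resources
terraform destroy    # tear everything down when you are finished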

Prerequisites
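
For this walkthrough you will need an AWS account, plus Terraform, the AWS CLI (configured with credentials that are allowed to create VPC and EKS resources), and kubectl installed on your machine. You can verify the tools are in place with:

terraform version
aws --version
kubectl version --client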

Ok, then let’s start.

  • The first step is to create a VPC for our cluster, so let’s create a vpc.tf file to configure the AWS VPC. I used ap-southeast-1 (Asia Pacific, Singapore) as my region; the module provisions subnets across the availability zones in that region.
variable "region" {
default = "ap-southeast-1"
description = "AWS region"
}
provider "aws" {
region = var.region
}
data "aws_availability_zones" "available" {}locals {
cluster_name = "My-AWS-EKS"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.2.0"
name = "My-VPC"
cidr = "10.0.0.0/16"
azs = data.aws_availability_zones.available.names
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
}
public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}

In the VPC file we define a 10.0.0.0/16 CIDR (Classless Inter-Domain Routing) range with three public and three private subnets in our region, and we enable a NAT Gateway and DNS hostnames. The kubernetes.io/cluster and kubernetes.io/role subnet tags are what let EKS and its load balancer integration discover which subnets to place resources in.
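If you want a quick sanity check at this point, Terraform can format the file and validate the configuration written so far (terraform validate needs an initialized directory, so run terraform init first if you have not already):

terraform fmt
terraform validate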

  • The next step is to create an sg.tf file to define the security groups. We create three security groups: one for each of our two worker node groups, plus one shared group for all worker nodes. Each group allows inbound SSH (port 22) only from private network ranges.
resource "aws_security_group" "worker_group_mgmt_one" {
name_prefix = "worker_group_mgmt_one"
vpc_id = module.vpc.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [
"10.0.0.0/8",
]
}
}
resource "aws_security_group" "worker_group_mgmt_two" {
name_prefix = "worker_group_mgmt_two"
vpc_id = module.vpc.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [
"192.168.0.0/16",
]
}
}
resource "aws_security_group" "all_worker_mgmt" {
name_prefix = "all_worker_management"
vpc_id = module.vpc.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
]
}
}
  • The next step is to create the EKS cluster inside our VPC (eks.tf). The worker nodes use t2.micro instances.
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "17.24.0"
cluster_name = local.cluster_name
cluster_version = "1.20"
subnets = module.vpc.private_subnets
vpc_id = module.vpc.vpc_idworkers_group_defaults = {
root_volume_type = "gp2"
}
worker_groups = [
{
name = "worker-group-1"
instance_type = "t2.micro"
additional_userdata = "echo foo bar"
additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
asg_desired_capacity = 2
},
{
name = "worker-group-2"
instance_type = "t2.micro"
additional_userdata = "echo foo bar"
additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
asg_desired_capacity = 1
},
]
}
data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}
  • Create a k8s.tf file to configure the Kubernetes provider. The two data sources above supply it with the cluster endpoint, certificate authority, and an authentication token.
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster.token
cluster_ca_certificate = base64encode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
  • Our last Terraform script is outputs.tf, which exposes the cluster endpoint and other useful values.
output "cluster_id" {
description = "EKS cluster ID."
value = module.eks.cluster_id
}
output "cluster_endpoint" {
description = "Endpoint for EKS control plane."
value = module.eks.cluster_endpoint
}
output "kubectl_config" {
description = "kubectl config as generated by the module."
value = module.eks.kubeconfig
}
output "region" {
description = "AWS region"
value = var.region
}
output "cluster_name" {
description = "Kubernetes Cluster Name"
value = local.cluster_name
}
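Once the stack is applied, you can read any of these outputs on demand, for example:

terraform output cluster_endpoint
terraform output kubectl_config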

Now our scripting part is done. You can store your code in a GitHub repository. Once it is stored, let’s initialize the configuration. As I mentioned earlier, there are four steps in Terraform’s infrastructure provisioning workflow.

  • Run the command below to initialize the working directory; it will download all the necessary providers and modules.
terraform init
  • After the modules are downloaded, you can move on to the plan step, where Terraform previews the changes you are about to apply to the infrastructure.
terraform plan
  • The final step: if you are happy with the plan, run apply.
terraform apply
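Creating an EKS cluster usually takes a while, so give it some time. When the apply finishes, you can point kubectl at the new cluster; a minimal sketch using the AWS CLI, with the region and cluster name from our configuration:

aws eks update-kubeconfig --region ap-southeast-1 --name My-AWS-EKS
kubectl get nodes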

Let’s go to the AWS console to verify that our resources were created:

  • VPC
  • EKS Cluster
  • Subnets

That’s it! We have successfully created our AWS EKS cluster.

  • If you are not going to keep using these resources, please run “terraform destroy” to delete them and avoid unnecessary charges, as shown below.
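The command shows everything it is about to delete and asks for confirmation first:

terraform destroy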

So, today we talked about AWS EKS and provisioned an AWS EKS cluster using Terraform. In the next article, let’s talk about AWS containers and how to work with them. If you need any help, follow me on GitHub, LinkedIn, or Twitter. Also check out the HashiCorp Learn tutorial from this link.

Thank you all! Have a great day, and don’t forget to leave a 👏✌️❤️
