Deploying Web-App on Kubernetes Cluster using AWS-EKS

MRIDUL MARKANDEY
6 min read · Jul 15, 2020
AWS EKS- ELASTIC KUBERNETES SERVICE

WHAT IS EKS?

Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service that makes it easy to run Kubernetes on AWS without the overhead of installing and operating your own Kubernetes clusters. EKS integrates with services such as Amazon CloudWatch, Auto Scaling groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC), giving you a seamless experience for monitoring, scaling, and managing your applications.

PRE-REQUISITES:

  1. The AWS CLI must be installed, added to your PATH, and configured.
  2. kubectl must be installed and added to your PATH.
  3. eksctl must be installed and added to your PATH.
  4. You must be familiar with concepts related to Docker, containers, and Kubernetes.
  5. You should have an AWS account.

TASK DESCRIPTION:

  1. Create an AWS EKS cluster using the eksctl program with an IAM user and a key pair
  2. Deploy WordPress with MySQL on the EKS cluster with AWS EBS as a provisioner
  3. Create an EKS cluster using eksctl
  4. Create an EFS provisioner so that pods can scale up across different Availability Zones
  5. Deploy WordPress with MySQL over the cluster using the EFS provisioner.

IMPLEMENTATION:

STEP-1: CREATE AN IAM USER WITH ADMIN ROLE:

  1. Log in to your AWS account and go to the IAM service
  2. Click Users in the left panel, then Add user, and tick Programmatic access and AWS Management Console access
  3. Select Attach existing policies directly, choose the AdministratorAccess policy, and go to Next: Tags
  4. Add tags if needed, click Next, and create the user
  5. Your IAM user is now created. Download the .csv file containing the access key and secret key for this IAM user; it is available for download only once
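
If you prefer the command line, an equivalent user can be created with the AWS CLI. This is only a sketch; the user name admin-user is an assumption, and you would still note down the generated access key and secret key:

$ aws iam create-user --user-name admin-user
$ aws iam attach-user-policy --user-name admin-user --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
$ aws iam create-access-key --user-name admin-user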

STEP 2: Configure the AWS CLI with the IAM user credentials

Provide the access key and secret key from the .csv file downloaded while creating the IAM user, along with the region and the default output format, i.e. JSON.
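
These values are supplied through the aws configure command. A minimal example (the key values and region below are placeholders, not real credentials):

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: ap-southeast-1
Default output format [None]: json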

STEP 3: Installing eksctl program

Since EKS by itself provides fairly limited options, we use the eksctl command to set up everything in AWS EKS with just one command.

  1. Download and install the Weaveworks eksctl program for your operating system.

$ chocolatey install eksctl

2. Check that the installation was successful

$ eksctl version

STEP 4: Install and configure kubectl

  1. Download the Amazon EKS-vended kubectl binary.

$ curl -o kubectl.exe https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/windows/amd64/kubectl.exe

2. Move kubectl.exe to a folder that is already on your PATH, or add its location to the PATH from the Environment Variables settings.
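
Once kubectl is on the PATH, you can verify the installation by printing the client version:

$ kubectl version --client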

STEP 5: Creating an EKS Cluster

  1. Create a YAML file for an EKS cluster with spot instances and provide a public key for logging in to the nodes
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-cluster
  region: ap-southeast-1

nodeGroups:
  - name: ng-1
    instanceType: t2.micro
    desiredCapacity: 3
    ssh:
      publicKeyName: mykey1122
  - name: ng-mix
    minSize: 2
    maxSize: 5
    instancesDistribution:
      maxPrice: 0.017
      instanceTypes: ["t2.small", "t3.small"] # At least one instance type should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
      publicKeyName: mykey1122

2. Run this command to create the cluster; it will set up everything through a CloudFormation stack

$ eksctl create cluster -f spot_cluster.yml

Cluster creation may take around 30–35 minutes: eksctl first generates a CloudFormation template from this file, CloudFormation then launches the resources, the cluster control plane comes up, and finally the node groups are created.

3. Check whether the cluster has been created

$ eksctl get cluster

4. Create a kubeconfig file or update the existing kubeconfig file

$ aws eks update-kubeconfig --name eks-cluster

5. Check the cluster connectivity

$ kubectl cluster-info

6. Create a namespace and set it to default

$ kubectl create namespace eksns
$ kubectl config set-context --current --namespace=eksns
$ kubectl config view
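
Optionally, you can also confirm that the worker nodes have joined the cluster:

$ kubectl get nodes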

STEP 6: Creation of the EFS provisioner

To use Amazon EFS as storage, we just have to install amazon-efs-utils on all the worker nodes.

sudo yum install amazon-efs-utils -y
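
This command has to be run on each worker node, for example over SSH using the key pair referenced in the cluster configuration (the key file name and node address below are placeholders):

$ ssh -i mykey1122.pem ec2-user@<node-public-ip>
$ sudo yum install amazon-efs-utils -y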

Now, create a provisioner for EFS.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-3e42c8ef
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: eks-prov/aws-efs
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-3e42c8ef.efs.ap-south-1.amazonaws.com
            path: /

Step 7: After creating the provisioner, create a role binding

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-prov-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: wp-mysql
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
  • This will create the role binding. We also need PVCs so that the data inside the pods remains persistent, and since we will be using EFS, we need a storage class that supports EFS.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-efs
provisioner: eks-prov/aws-efs
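
Assuming the provisioner, role binding, and storage class manifests above are saved as efs-provisioner.yaml, rbac.yaml, and storage-class.yaml (the file names are only an assumption), apply them with kubectl:

$ kubectl apply -f efs-provisioner.yaml
$ kubectl apply -f rbac.yaml
$ kubectl apply -f storage-class.yaml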

Step 8: Create mysql-deployment

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-mysql
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: efs-mysql

Step 9: Create wordpress-deployment

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-wordpress
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: efs-wordpress

Step 10: Create and apply kustomization file

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: mysql-pass
    literals:
      - password=redhat
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml

Apply your kustomization:

kubectl apply -k .
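
After applying, you can verify that the Secret, PVCs, pods, and services came up correctly:

$ kubectl get secrets
$ kubectl get pvc
$ kubectl get pods
$ kubectl get svc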

Step 11: Access your wordpress website

To access your WordPress site, take the DNS name of the Elastic Load Balancer created for the wordpress service and open it in your web browser.
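
The load balancer's DNS name can also be retrieved with kubectl; the EXTERNAL-IP column of the wordpress service shows the address to open:

$ kubectl get service wordpress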

STEP 12: Delete cluster once everything is done

To delete the cluster, first delete the resources created from the manifest files using kubectl delete -f <file/name>

Then to delete the cluster use:

eksctl delete cluster -f <path/to/cluster-config-file>

That’s it! You have successfully launched your application on AWS EKS. I hope you find this blog useful. If you have any suggestions, please let me know in the comments.

Thank you.
