What’s up Guys! Welcome to automationcalling.com

Kub_Dashboard

In this post, we are going to take a detailed look at how to set up a Kubernetes cluster on AWS EKS.

To get more exposure and practical experience setting up a Kubernetes cluster in AWS, I suggest you create an AWS Free Tier account. I also request that you look carefully at the “12 months free” and “Always free” options and stay within the limited usage.

There are several methods for running Kubernetes on AWS:

  • KOPS
  • EKS
  • Manual setup on EC2 machines (complicated to maintain; a real operational burden)

Apart from the above, there are different modes for setting up the Kubernetes cluster:

  • AWS Console
  • CLI
  • CloudFormation/Terraform

In this post, we will use a combination of the AWS Console and a CloudFormation template to run the Kubernetes cluster on AWS.

What is Amazon EKS?

EKS stands for Elastic Kubernetes Service, a managed service that allows you to run Kubernetes on AWS.

Amazon EKS makes it easy to deploy, manage, monitor, and scale containerized applications. It runs the Kubernetes management infrastructure for you across multiple Availability Zones and is generally available to all AWS customers.

The benefits of Amazon EKS are:

  • It avoids the operational burden of running your own control plane.
  • Quick setup (a cluster can be created in minutes) via the AWS CLI, Terraform, etc.
  • Worker nodes are created from predefined AMIs with the help of a CloudFormation template.
  • EKS provisions the Kubernetes cluster with high availability, security, and scalability, and keeps you away from maintenance.
  • When it comes to Kubernetes version upgrades or security patch updates, AWS EKS is the best way to go.

Pre-requisites

Before setting up AWS EKS, the following setup is required to proceed further.

  • AWS CLI
  • Kubectl
  • AWS-IAM-Authenticator
  • VPC and 3 private subnets
  • Create IAM roles and Users

AWS CLI

Creating a Kubernetes cluster with the AWS CLI is quite a bit easier than using the console. To install the AWS CLI, I strongly recommend the versions below.

  • Python 3, version 3.3 or above.
  • AWS CLI version 1.16.73 or later; earlier versions don’t support EKS.

To install the AWS CLI, please refer to the official AWS CLI installation guide.

After successfully installing Python 3 and the AWS CLI, supply the command:

aws --version

The following output will be seen.

AWS_version

kubectl

kubectl is a command-line utility for communicating with your Kubernetes cluster’s API server on AWS. kubectl helps you view pods and nodes, read logs, watch node status, run containers, and more.

To install kubectl, please refer to the official installation guide.

To verify kubectl is successfully installed, supply the following command:

kubectl version --short --client

kubetcl_version

AWS-IAM-Authenticator

AWS IAM Authenticator uses AWS IAM credentials to authenticate to a Kubernetes cluster on AWS. The benefit of using the IAM Authenticator for Kubernetes is that you don’t need to maintain separate credentials for Kubernetes access; you can create a dedicated “KubernetesAdmin” role at cluster provisioning time and set up the authenticator to use it.

To install the AWS IAM Authenticator, please refer to the official installation guide.

To verify successful installation, supply the command “aws-iam-authenticator help”; the following output will be seen.

iam_authenticator
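
Since kubectl will later use this authenticator under the hood, it helps to see the call it makes. The sketch below assembles the token command for a hypothetical cluster named EKS-DEMO; the command is echoed rather than executed, because the real call needs valid AWS credentials.

```shell
# Sketch: how a Kubernetes API token is minted from IAM credentials.
# "EKS-DEMO" is a hypothetical cluster name; the command is echoed instead
# of executed because the real call requires valid AWS credentials.
CLUSTER_NAME="EKS-DEMO"
TOKEN_CMD="aws-iam-authenticator token -i ${CLUSTER_NAME}"
echo "${TOKEN_CMD}"
```

Remove the echo (i.e., run the assembled command) once your AWS credentials are configured.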

VPC and 3 private subnets

To set up the Kubernetes cluster on AWS EKS, you must set up a VPC and 3 private subnets, each in a different Availability Zone.

Create the VPC and subnets (minimum 3) for EKS using the AWS Console, a CloudFormation template, or Terraform. A dedicated security group also needs to be created for AWS EKS.
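
As a rough sketch of the CLI route (the CIDR blocks, region, and AZ suffixes below are hypothetical; adjust them to your account), the VPC and three subnets could be created like this. The commands are assembled and echoed so the sketch is safe to run without AWS credentials; remove the echoes to execute them for real.

```shell
# Sketch: create a VPC and three private subnets for EKS via the AWS CLI.
# CIDR blocks, region, and AZ suffixes are hypothetical examples.
REGION="us-east-1"

CREATE_VPC="aws ec2 create-vpc --region ${REGION} --cidr-block 10.0.0.0/16"
echo "${CREATE_VPC}"

# Use the VpcId returned by the create-vpc call here.
VPC_ID="<vpc-id-from-create-vpc-output>"

# One private subnet per Availability Zone.
i=1
for AZ in a b c; do
  CREATE_SUBNET="aws ec2 create-subnet --region ${REGION} --vpc-id ${VPC_ID} --cidr-block 10.0.${i}.0/24 --availability-zone ${REGION}${AZ}"
  echo "${CREATE_SUBNET}"
  i=$((i + 1))
done
```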

Create IAM roles and Users

  1. Create a dedicated role for EKS which should have the following policies attached.
  2. eks_policy
  3. Create a policy with the following actions for the created role:
  4. {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "iam:GetRole",
                    "iam:PassRole"
                ],
                "Resource": "arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>"
            }
        ]
    }
  5. Attach the created policy to the created role.
  6. Create an IAM user and attach the created role to it.
  7. Download the access key and secret access key under Security Credentials.

Note: these credentials should be used to run aws configure for this user in the AWS CLI.
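
The role-creation part of the steps above can also be scripted. The sketch below is a minimal CLI version, assuming an example role name of EKS-ROLE; the AmazonEKSClusterPolicy managed policy is AWS-provided. The AWS commands are echoed so the sketch runs without credentials; remove the echoes to execute them for real.

```shell
# Sketch: create the EKS service role from the CLI instead of the console.
# "EKS-ROLE" is an example name.

# Trust policy that lets the EKS service assume the role.
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

ROLE_NAME="EKS-ROLE"
CREATE_ROLE="aws iam create-role --role-name ${ROLE_NAME} --assume-role-policy-document file://eks-trust-policy.json"
ATTACH_POLICY="aws iam attach-role-policy --role-name ${ROLE_NAME} --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"

# Echoed so the sketch is safe to run without AWS credentials.
echo "${CREATE_ROLE}"
echo "${ATTACH_POLICY}"
```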

Creating the EKS Cluster in AWS

To create the EKS cluster from the AWS CLI, the command is as follows:

aws eks --region <region> create-cluster --name <clusterName> --role-arn <EKS-role-ARN> --resources-vpc-config subnetIds=<subnet-id-1>,<subnet-id-2>,<subnet-id-3>,securityGroupIds=<security-group-id>

Note: if you create the cluster in the console, please make sure the user is the same as the one configured in the AWS CLI.

Examples:

  • <region>: us-east-1, us-west-1, etc.
  • <clustername>: Name of your EKS Cluster
  • <EKS-role-ARN>: This can be found on the IAM page: click Roles in the left pane, select the role you created for EKS, and copy its “Role ARN”.
  • <subnet-id-1>,<subnet-id-2>,<subnet-id-3>: Replace with the 3 subnets you created.
  • <security-group-id>: Replace with the right security group ID.

aws eks --region us-east-1 create-cluster --name EKS-DEMO --kubernetes-version 1.14 --role-arn arn:aws:iam::123456789012:role/EKS-ROLE --resources-vpc-config subnetIds=subnet-012345234,subnet-05234234,subnet-023433423,securityGroupIds=sg-02343213

After executing the above command, you should see the following response in the terminal.

eks_cluster_response

The cluster starts creating; after 5-10 minutes, the following status can be seen on the EKS cluster page in the console.

EKS_Cluster_active

Update KubeConfig

Supply the following command to make sure the AWS IAM Authenticator for Kubernetes will use the same credentials:

aws sts get-caller-identity

The kubeconfig file is used to configure access to Kubernetes when working with the kubectl command-line tool.

The update-kubeconfig command creates or updates the kubeconfig for your cluster:

aws eks --region <region> update-kubeconfig --name <cluster_name>

After supplying the above command, the kubeconfig gets created/updated in the following location:

e.g., on Linux: ~/.kube/config

To verify, supply vi ~/.kube/config

Note: the command updates the cluster details, user, and roles in your kubeconfig file.
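
For reference, the generated kubeconfig looks roughly like the sketch below. The account ID, region, endpoint, and cluster name are hypothetical; depending on your AWS CLI version, the exec section may invoke aws-iam-authenticator (shown here) or aws eks get-token.

```yaml
# Sketch of a kubeconfig generated by "aws eks update-kubeconfig".
# All identifiers are hypothetical examples.
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://<endpoint>.us-east-1.eks.amazonaws.com
    certificate-authority-data: <base64-encoded-ca-cert>
  name: arn:aws:eks:us-east-1:123456789012:cluster/EKS-DEMO
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:123456789012:cluster/EKS-DEMO
    user: arn:aws:eks:us-east-1:123456789012:cluster/EKS-DEMO
  name: arn:aws:eks:us-east-1:123456789012:cluster/EKS-DEMO
current-context: arn:aws:eks:us-east-1:123456789012:cluster/EKS-DEMO
users:
- name: arn:aws:eks:us-east-1:123456789012:cluster/EKS-DEMO
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - EKS-DEMO
```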

Now, verify the created Kubernetes cluster using the kubectl get svc command:

kubectl get svc

kubernet_cluster_version

The Kubernetes cluster successfully shows up with the service IP address, confirming that kubectl can reach the API server.

Setting up Worker Nodes for Created Cluster

So far we have seen how to set up the cluster in EKS; now let’s see how to launch worker nodes using a CloudFormation template.

  1. Open the latest worker node launch guide from the AWS documentation.
  2. Select the worker node option as highlighted below.
  3. workernode_selection
  4. This navigates to the CloudFormation template in your console.
    • StackName: Provide a name for your worker node stack.
    • EKS Cluster: The name of the EKS cluster you created above. It must match exactly (no extra spaces, etc.).
    • Worker Node Configuration:
      • NodeGroupName: Enter the worker group name.
      • NodeAutoScalingGroupMinSize: Minimum 1.
      • NodeAutoScalingGroupDesiredCapacity: Minimum 1.
      • NodeAutoScalingGroupMaxSize: Must be greater than 1.
      • NodeInstanceType: The type of EC2 machine you want to spin up.
      • KeyName: Create a key pair, or select an already created one, to log in to the machines.
      • VpcId: Select the VPC you created above.
      • Subnets: Select the 3 subnets in different Availability Zones.
      • Leave the rest of the fields at their defaults.
  5. Acknowledge that AWS CloudFormation might create IAM resources.
  6. Click on the Create stack button.
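
The same console steps can be scripted. The sketch below assembles an equivalent create-stack command; all parameter values are examples, and the template URL is an assumption based on the 2019-01-09 path used elsewhere in this post, so it may have moved since. The command is echoed rather than executed, since running it needs live AWS credentials.

```shell
# Sketch: launching the worker-node stack from the CLI instead of the console.
# All parameter values are examples; the template URL is an assumption.
TEMPLATE_URL="https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-nodegroup.yaml"

CMD="aws cloudformation create-stack --region us-east-1"
CMD="${CMD} --stack-name EKS-DEMO-worker-nodes"
CMD="${CMD} --template-url ${TEMPLATE_URL}"
CMD="${CMD} --capabilities CAPABILITY_IAM"   # the stack creates IAM resources
CMD="${CMD} --parameters"
CMD="${CMD} ParameterKey=ClusterName,ParameterValue=EKS-DEMO"
CMD="${CMD} ParameterKey=NodeGroupName,ParameterValue=eks-demo-workers"
CMD="${CMD} ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1"
CMD="${CMD} ParameterKey=NodeAutoScalingGroupDesiredCapacity,ParameterValue=2"
CMD="${CMD} ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=3"
CMD="${CMD} ParameterKey=NodeInstanceType,ParameterValue=t3.medium"
CMD="${CMD} ParameterKey=KeyName,ParameterValue=<your-key-pair>"
CMD="${CMD} ParameterKey=VpcId,ParameterValue=<vpc-id>"
# Commas inside a single parameter value must be escaped with a backslash.
CMD="${CMD} ParameterKey=Subnets,ParameterValue=<subnet-1>\\,<subnet-2>\\,<subnet-3>"

# Echoed so the sketch runs without AWS credentials; remove the echo to execute.
echo "${CMD}"
```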

The CloudFormation template starts creating the nodes.

worker-nodes

After successful creation, the EC2 instances start spinning up.

The next important step is the NodeInstanceRole.

In order to integrate the worker group with our created cluster, the NodeInstanceRole is required. It can be found on the CloudFormation page: click on your created stack, then click the Outputs tab.

nodeinstancevalue

To do this, first we need to download AWS authenticator configuration map:

curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/aws-auth-cm.yaml

Open the file and replace the rolearn value with the NodeInstanceRole from above, as below.

replace_rolearn
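
For reference, the edited aws-auth-cm.yaml should look roughly like the sketch below, with the hypothetical role ARN standing in for your own NodeInstanceRole output:

```yaml
# Sketch of the edited aws-auth ConfigMap; the rolearn is a hypothetical
# example -- use the NodeInstanceRole value from your stack's Outputs tab.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/EKS-DEMO-worker-nodes-NodeInstanceRole-XXXXXXXX
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```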

Save the file and apply it with the following command:

kubectl apply -f aws-auth-cm.yaml

The terminal will show the following message

configmap/aws-auth created

To Verify Status of Worker Node using kubectl

Supply the following command:

kubectl get nodes

node_status

To Delete Worker Node

  1. Go to CloudFormation Page in AWS Console page.
  2. Select the Worker Node you just created.
  3. Click on the ‘Delete’ button.
  4. Proceed with Delete in the confirmation dialog box that follows.

delete_worker_node
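
The same deletion can be done from the CLI. The stack name and region below are examples; use the values from your own stack. The command is echoed so the sketch runs without AWS credentials; remove the echo to execute it for real.

```shell
# Sketch: deleting the worker-node stack from the CLI.
# Stack name and region are examples -- substitute your own values.
STACK_NAME="EKS-DEMO-worker-nodes"
REGION="us-east-1"
CMD="aws cloudformation delete-stack --region ${REGION} --stack-name ${STACK_NAME}"
echo "${CMD}"
```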

Conclusion

To sum it all up, EKS is a wonderful container orchestration service with no maintenance or operational burden on the cluster side. If you are an AWS fan and user, then it’s a wonderful tool to leverage. It supports upstream Kubernetes, and the control plane is replicated across three masters in different Availability Zones. Each cluster costs $0.20 per hour, which is sufficient to run multiple applications.
