
Saturday, August 1, 2020

Docker, Kubernetes - Simple Web server on Amazon AWS using container service from scratch - AWS EC2, EKS, ECR




Today we will see how to run a simple web server in a Docker container and use Kubernetes to manage a scalable container environment.

Before we start, let's look at what these technologies do:

Docker: A tool that allows developers to create, deploy, and run applications using containers. Compared to virtual machines, containers are much more lightweight, allow resource sharing, and scale more easily. A Docker image is made up of different layers (OS, applications, etc.) that are used to execute code in a container. If two containers use the same layer (e.g. the OS layer), it is shared between them. We will use Docker to create our container image locally and then use this image in Kubernetes later.

Kubernetes: Automates the deployment, scaling, and management of containerized applications, in our case Docker containers (other containerization tools are supported too). Kubernetes deploys containers in pods; a pod can contain one or more containers, and we will use just one container per pod in our demo. A Kubernetes cluster is a set of nodes, and the nodes run the containerized applications. A Kubernetes service exposes the apps running in the pods; in our case we will expose our web server as an HTTP service on port 80.

EKS (Elastic Kubernetes Service): An offering from Amazon AWS to run Kubernetes on their cloud. We will add this component to our setup to run Kubernetes.

ECR (Elastic Container Registry): A service to store container images. We will push our image to this service's registry URL and then fetch it from there in the Kubernetes implementation.

EC2 (Elastic Compute Cloud): A virtual machine service from AWS. We will create a Linux EC2 instance to run this experiment on.






So let's get started.

EC2 Linux Instance Setup :

Skip this if you already have an EC2 instance running and move on to the Docker Setup.

AWS provides 1 year of free usage under its free tier. You can sign up here:


Once the account is created we can create our first EC2 Linux instance from the AWS console:

1) Select Launch a virtual machine.


2) Select Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type


3) Select the free tier applicable option and click on Review and Launch.


4) Click on Launch in the next step.


5) We need a key pair to connect to our Linux server. Select Create new key pair and give it any name. Next click on Download Key Pair to download the file. Copy the file to a safe location.


6) After downloading the key file click on Launch Instances. You should see a success message.


7) Download and install PuTTY (Windows) to connect to our newly created Linux instance. You can use other SSH tools too and follow their instructions for importing keys. Mac users can skip this step and go to Step 11, as there is a built-in SSH tool.



8) Open PuTTYgen, which is installed along with PuTTY. Click on Load to select the key file you downloaded.




9) Next, click on Save private key to store the private key safely. You will need it later.




10) Go to the AWS console, open the EC2 service, and then go to Running Instances. Select the instance you created. From the bottom right corner copy the Public DNS name. This is the hostname to connect to your Linux instance.




11) Now close PuTTYgen and open PuTTY to connect to the Linux instance. Paste the hostname that you copied, adding "ec2-user@" before it. Give it a name under "Saved Sessions" and click Save. Next, go to Connection > SSH > Auth and browse to the private key file you saved in step 9. Go back to Session, save again, and then click Open. It should log you in without asking for a username or password.
Mac users can cd to the directory where the pem file is saved and use the following command in their terminal, specifying the pem file name and the hostname:

    ssh -i aws-test.pem ec2-user@hostname
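If ssh complains that the key file's permissions are too open, you may first need to restrict them (assuming the same aws-test.pem file name):

    chmod 400 aws-test.pem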



12) You should see a success message on login.



Docker Setup :

To install Docker, run the following commands:

1) Update packages.

     sudo yum update -y

2) Install Docker. When it prompts, type "y" and press Enter.

    sudo yum install docker

3) Once the installation is complete, start the Docker service.

    sudo service docker start


4) Add ec2-user to the docker group so that you can run docker commands without sudo.

    sudo usermod -a -G docker ec2-user

5) Log out from PuTTY and log in again. Then run the following command to check if Docker is running. It will print some information about the installation.

    docker info

6) Now that Docker is up and running, let's create our first container image. Create a new directory.

    mkdir docker-test
    cd docker-test

7) Create an empty Dockerfile (it contains the instructions to build the image). Open it in vi (or any editor), paste the contents below, then save and close the file.
    
    touch Dockerfile

    vi Dockerfile

The Dockerfile tells Docker to use Ubuntu as the base image and install the Apache web server on it. It then creates a web page that prints Hello World in the browser and exposes port 80 of the container for HTTP access.

Contents: 

FROM ubuntu:18.04

# Install the Apache web server
RUN apt-get update && \
    apt-get -y install apache2

# Write the hello world page
RUN echo 'Hello World!' > /var/www/html/index.html

# Configure Apache and create its startup script
RUN echo '. /etc/apache2/envvars' > /root/run_apache.sh && \
    echo 'mkdir -p /var/run/apache2' >> /root/run_apache.sh && \
    echo 'mkdir -p /var/lock/apache2' >> /root/run_apache.sh && \
    echo '/usr/sbin/apache2 -D FOREGROUND' >> /root/run_apache.sh && \
    chmod 755 /root/run_apache.sh

EXPOSE 80

CMD /root/run_apache.sh

8) Build the image using the command below. hello-world-web is the name of our image.

    docker build -t hello-world-web .


9) Verify your image.

    docker images --filter reference=hello-world-web


10) Run the Docker container. -d runs the process in the background. -p maps port 80 of the host to port 80 of the container so that you can access the app. You can also assign a name to the container using --name (see the example after the command below), which makes it easier to stop the container; otherwise Docker generates a random name.

    docker run -ti -d -p 80:80 hello-world-web
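For example, a variant of the same command that also names the container (hello-web is just an arbitrary name chosen here):

    docker run -d -p 80:80 --name hello-web hello-world-web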

11) If you paste your Linux hostname (the Public DNS name you use to log in via PuTTY) into your browser, it will not connect. This is because AWS has not opened port 80 to the outside world. Go to the EC2 console, where on the left you will see Network and Security. Under it, click on Network Interfaces and find which security group the instance belongs to. Then click on Security Groups and select that same security group.


12) At the bottom you will see Inbound rules. Click on Edit Inbound Rules, then add a rule by selecting HTTP as the type and Anywhere as the source. Save the rule.


13) Open the link in the browser again and voila! Your first Docker web app container is ready.


14) Stopping the container is easy. Docker assigns a random name to the container if one is not specified when running it. The running containers can be listed using the following command (the name is under the NAMES column).

    docker ps


The container can be then stopped by:

    docker stop <container-name>
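If you started the container with --name as in the earlier example (hello-web was an arbitrary name), this would simply be:

    docker stop hello-web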



ECR Setup :

We have now tested our Docker image by running it in the local Docker environment. Next we can deploy it to the Kubernetes cluster. Before doing that, we will host our image in a repository so that Kubernetes can fetch it from there instead of from the local file system.

1) Let's create an ECR repository on AWS using the aws CLI tool pre-installed on the Linux instance.

But before running aws commands we need to configure the CLI, which requires generating an access key.
You can generate one from the AWS console by clicking on your name in the top right corner (My Security Credentials) and then clicking Create New Access Key. In the popup that follows, make sure you note down the keys.



Next, run the command below and enter the keys you noted. Enter us-east-1 as the default region and leave the output format blank:

    aws configure


Our AWS CLI is now configured.

We can now create the ECR repository.
Let's name it docker-test-rep and use the us-east-1 region.

    aws ecr create-repository --repository-name docker-test-rep --region us-east-1

Save the output as it contains our repository information; repositoryUri is our repository URL.
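If you lose that output, a quick way to look the URI up again (a sketch, assuming the same repository name and region):

    aws ecr describe-repositories --repository-names docker-test-rep --region us-east-1 --query 'repositories[0].repositoryUri' --output text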


2) We will now tag our Docker image hello-world-web with this repository's URI.

    docker tag hello-world-web xxxxxxx3.dkr.ecr.us-east-1.amazonaws.com/docker-test-rep

3) And push it to the repository.

We will pass the ECR login credentials to Docker so that it can authenticate with the registry.

    aws ecr get-login-password | docker login --username AWS --password-stdin xxxxxxx3.dkr.ecr.us-east-1.amazonaws.com

Now push the image to your repository.

    docker push xxxxxxx3.dkr.ecr.us-east-1.amazonaws.com/docker-test-rep


4) Our image is now available in the ECR repository.
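You can also verify the push from the CLI (a quick check, assuming the same repository name and region):

    aws ecr list-images --repository-name docker-test-rep --region us-east-1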


EKS Setup :

We need a Kubernetes cluster to host and manage our containers, so we will create an EKS cluster on AWS.

1) We need to install the eksctl tool to run the EKS commands. Run the following commands in order:

    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

    sudo mv /tmp/eksctl /usr/local/bin

Test if eksctl works

    eksctl version


2) We need kubectl to run Kubernetes commands.

Since this is the first time, we need to generate an SSH key pair; eksctl will use the public key to enable SSH access to the worker nodes. Run the following command:

    ssh-keygen

Press Enter three times without entering any input to generate the file with the default name and no passphrase.

Change to the directory where the public key was created.

    cd /home/ec2-user/.ssh

3) Install kubectl

    curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.7/2020-07-08/bin/linux/amd64/kubectl

    chmod +x ./kubectl

    sudo mv ./kubectl /usr/local/bin

Check the kubectl command

    kubectl version --short --client


4) We will now create an EKS cluster named test with 2 nodes in the us-east-2 region. This process will take around 15 minutes to complete.

    eksctl create cluster \
    --name test \
    --version 1.17 \
    --region us-east-2 \
    --nodegroup-name linux-nodes \
    --node-type t3.medium \
    --nodes 2 \
    --nodes-min 1 \
    --nodes-max 4 \
    --ssh-access \
    --ssh-public-key id_rsa.pub \
    --managed
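While the cluster is being created (and afterwards), you can check its status with eksctl (a quick check, assuming the same region):

    eksctl get cluster --region us-east-2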

5) Run the following command to check if the cluster is available.

    kubectl get svc


6) Deploy the image to a container via Kubernetes. You need to provide your ECR URI here along with a name of your choice, hello-world-test in our case.

    kubectl run -it --attach hello-world-test --image=xxxxxx3.dkr.ecr.us-east-1.amazonaws.com/docker-test-rep --image-pull-policy=IfNotPresent

You can always list your pods using the following command.

    kubectl get pods
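Step 7 below exposes a deployment, so you can also confirm that kubectl run created one with the name we passed:

    kubectl get deployment hello-world-test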


If you want to open a bash shell inside your container's OS (use the pod name from the output above):

    kubectl exec -it hello-world-test-5b6f5998f7-zfr8p -- /bin/bash

 

7) The container is up and running, but you will not be able to access the web service until it is exposed. Let's do that using the following command:

    kubectl expose deployment hello-world-test --name=hello-world-test-service --type=LoadBalancer --port 80 --target-port 80

8) We need the hostname of the load balancer to access the web service. We can get it using the following command (look at the EXTERNAL-IP column for hello-world-test-service).

    kubectl get service
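If you prefer to grab just the hostname, one option is kubectl's jsonpath output (assuming the same service name):

    kubectl get service hello-world-test-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'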
    

9) Let's access the URL to see if the web application in the container is working as expected.

And there it is !


10) You may want to monitor the Kubernetes cluster, for which you need to install the Kubernetes Dashboard.

To install the Metrics Server:

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

To check if it is running:

    kubectl get deployment metrics-server -n kube-system
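Once the Metrics Server is up, a quick sanity check is to ask it for node metrics (it may take a minute before data is available):

    kubectl top nodes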


To install Dashboard:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

Create the following file to create an admin service account:

  vi  eks-admin-service-account.yaml

Contents:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system

Run the following command to set up the admin account:

    kubectl apply -f eks-admin-service-account.yaml 
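You can verify that the service account was created (a quick check):

    kubectl -n kube-system get serviceaccount eks-admin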


11) We will now expose the dashboard on port 8002 on localhost.

    kubectl proxy --address='0.0.0.0' --port=8002 --accept-hosts='.*'
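Note that kubectl proxy runs in the foreground and blocks the shell. If you would rather keep using the same session, one option is to run it in the background (a sketch; proxy.log is just an arbitrary log file name):

    nohup kubectl proxy --address='0.0.0.0' --port=8002 --accept-hosts='.*' > proxy.log 2>&1 &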


Verify that it is accessible (open another PuTTY session and run the command on the EC2 Linux instance):

    curl 127.0.0.1:8002/api/v1

This should give you a huge JSON output.

Since we are using a Linux OS with no GUI, we cannot open the dashboard in a browser on the instance itself. The dashboard is also only accessible via localhost/127.0.0.1, so we cannot simply use the public hostname of the EC2 instance.
As a workaround, we can set up a port forward from our local (physical Windows/Mac) machine to access it.

First, open incoming traffic on port 8002 from the AWS console -> Security Groups, selecting the same security group (as in step 11 of the Docker setup).


Run CMD as administrator on your Windows system (Mac/Linux users can use an SSH tunnel instead, as shown after the command below).

    netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=8002 connectaddress=ec2-18-204-20-242.compute-1.amazonaws.com connectport=8002
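For Mac/Linux, an alternative is an SSH tunnel from your machine to the instance, which forwards local port 8002 without even needing to open it in the security group (assuming the same aws-test.pem key and EC2 hostname used earlier):

    ssh -i aws-test.pem -L 8002:127.0.0.1:8002 ec2-user@hostname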

Open the following link in your browser (this is the standard Kubernetes Dashboard path behind kubectl proxy):

    http://localhost:8002/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

Select the token option. The token can be generated using the following command (open another shell, as kubectl proxy is running in the current one):

    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

Copy the token and paste it in the browser.





Make sure you delete all the instances on AWS after you are done experimenting to avoid unwanted charges.

    eksctl delete cluster --region us-east-2 test
    
    aws ecr delete-repository \
    --repository-name docker-test-rep \
    --force

Thank You For Reading :)
