I. Setting Up K8S Cluster For A Node Server

Kirtfieldk · 8 min read · Jan 10, 2021

Topics Covered

  • Building Docker Image for NodeJS Server
  • Starting MiniKube VM
  • Building the K8S Cluster Using These Docker Images

Building Docker Image

Docker is possibly the greatest technology I will ever learn. I started getting familiar with Docker in 2018, but my knowledge exploded all through 2019. During that year, I was working as a full-time software dev intern in D.C. where all development was exclusively conducted in containers.

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

This is exactly how Docker defines a container. Containers are essentially smaller, more efficient virtual machines, so we can run tons of them without burdening our machine. The reason we need to package our NodeJS server in a Docker container is that Kubernetes (K8S) builds, deploys, and restarts containers to scale our application.

To start, just create a file called Dockerfile — this will be where we place our instructions to build a successful container.

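A minimal sketch of this file, assuming a dev script in package.json and a server listening on port 5000, looks like this:

# Smallest Linux base image that ships with NodeJS
FROM node:alpine

# All following COPY/RUN instructions execute inside /app
WORKDIR /app

# Copy package.json first so the dependency layer is cached;
# npm install only re-runs when package.json changes
COPY package.json .
RUN npm install

# Copy the rest of the source code into the container
COPY . .

# Start the development server (the dev script is an assumption here)
CMD ["npm", "run", "dev"]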

This Dockerfile builds off of the node:alpine image, the smallest Linux image that contains NodeJS. For simple builds like this, the alpine image will generate the smallest image size that contains NodeJS. We could also compose a Dockerfile as a multi-step file, where we create an executable binary in one step and copy that executable into a smaller base image in a later step, usually the scratch image. Although this particular Dockerfile is one step, our client Dockerfile will be two-step.

After establishing the node:alpine base image, we create a working directory /app; all of our subsequent instructions like COPY and RUN will execute in that directory. We copy the package.json first because when Docker builds images, it reuses cached layers from previous builds of the Dockerfile to speed up the process. This means the initial build of this image will be much slower than the following builds (as long as the cached layers are present on our computers). With this information, the reason we copy the package.json into the work directory first is that if the package.json remains unchanged, Docker will build off of the cached layers and we will not need to install all of our dependencies again.

We then copy the rest of the source code into the container and run the command npm run dev. This command starts our development server.

To build our image, run either of the following (the second assumes we are in the directory containing the Dockerfile):

docker build -t dev /file/path
docker build -t dev .

To start a container from our image

docker run -p 5000:5000 --name dev --restart always dev

Our server will start and be listening on port 5000.

MiniKube

Minikube allows us to develop and play with K8S clusters locally. To start our cluster, run the command

minikube start --driver=hyperkit

where we specify that the VM driver will be hyperkit. In my case, I want to use my local server image. Since minikube has its own virtual machine with its own instance of Docker, custom images on my computer will not be visible to the Docker instance in minikube. To build custom images into our local K8S cluster, first run the command

minikube docker-env

to show all of the environment variables associated with minikube’s Docker instance.


At the bottom of the output, we get the message

To point your shell to minikube's docker-daemon, run:
eval $(minikube -p minikube docker-env)

which gives us the explicit command to run in order to enter minikube's version of Docker. After running this command, we can check whether we are in minikube's Docker by running

docker ps

in our terminal and in a fresh terminal.


We can see a vastly different set of Docker containers running in each. This is because minikube's Docker instance automatically starts up the containers needed to make K8S operate smoothly, while on our local Docker instance there are no running containers. Therefore, we have successfully entered the second instance of Docker.

Now we must build our image inside this new Docker instance, so inside the directory with the Dockerfile run the single command

docker build -t api .

to construct our image. Now we can run the command

docker image ls

to ensure that our image is in minikube by checking that our image tag is listed.

K8S

Now let's move on to the K8S cluster. For this specific project, our K8S cluster will be composed of an nginx-ingress router, cluster IP services, and deployments.

Let's begin with our server_cluster_ip.yaml file, which will be responsible for exposing our deployment pods to the K8S network. Note that deployment here refers to the K8S definition:

Deployments represent a set of multiple, identical Pods with no unique identities

where a pod is simply a collection of similar or identical containers. Our server_cluster_ip.yaml file will be relatively simple.

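A sketch of this file, using the names and ports described below, might look like:

apiVersion: v1
kind: Service
metadata:
  # The name our ingress service will use to reference this cluster IP
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  # Select every pod labeled component: server
  selector:
    component: server
  ports:
    - port: 5000        # port the service exposes on the K8S network
      targetPort: 5000  # port the containers listen on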

First we specify the apiVersion; K8S objects live in specific API versions, and the Service object lives in the v1 API. The service will expose our deployment pods to the K8S network on port 5000, and will expose all the containers in the pods to each other on port 5000. We give our service the name server-cluster-ip-service, which is how we will reference this cluster IP from our ingress controller. Finally, we give our cluster IP service the component label of server, which is how we will reference this service from our deployment config file.

Now let us look at the server_deployment.yaml.

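A sketch of this file, matching the behavior described below (imagePullPolicy: IfNotPresent is the assumption that tells K8S to prefer our local api image over a registry pull), might look like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  # Ideal state: three identical pods running at all times
  replicas: 3
  # Manage the pods that carry the component: server label
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          # Use the api image built inside minikube's Docker, if present
          image: api
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
          # env variables could be listed here under env:, but this
          # project reads them from the server's .env file instead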

The server_deployment.yaml is responsible for deploying and monitoring pods that contain identical containers. Deployments have a current state (the health and config of their running containers) and an ideal state. Whenever the current state deviates from the ideal state, the deployment will kill, reconfig, and restart the pods that are not operating correctly; this includes changes in port, images, and name.

The server_deployment.yaml specifies that we want three identical pods running at all times and that we want our pods matched by the cluster IP service's component: server selector, exposing them to the K8S network on port 5000. This is defined by the matchLabels: {component: server} tag. Our template tag defines how our pods will operate. Each pod is given the label server; we will not be referencing this deployment elsewhere, but this label is nice to have if we ever need to. Under spec, we explicitly say that we will use the image api on our local machine, if present. We give each container a name and expose port 5000. If we have env variables, we can list them here; however, having a .env file in our server directory will establish the env variables for the pod. This is safer because we will not commit our .env file, while we will commit our server_deployment.yaml file.

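A sketch of this file, written against the ingress API version that shipped with ingress-nginx v0.43.0 (the path regex is an assumption consistent with the rewrite rule described below), might look like:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    # Hand routing to the nginx ingress controller
    kubernetes.io/ingress.class: nginx
    # Rewrite /api/<Hello_World> to /<Hello_World> inside the cluster
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          # Anything beginning with /api/ goes to the server service on 5000
          - path: /api/?(.*)
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 5000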

Finally, the ingress_service.yaml will create our ingress service, which will route outside traffic to the appropriate clusters. We will use the open source ingress-nginx package; ingress-nginx routes outside traffic to specific pods within a deployment cluster. This means that ingress-nginx completely bypasses the cluster IP, which acts as the gatekeeper to the pods in its cluster. The key feature of our yaml file is the nginx.ingress.kubernetes.io/rewrite-target: /$1 annotation, which redirects an incoming url <url>:<port>/api/<Hello_world> to <url>:<port>/<Hello_World> inside our cluster. The ingress service states that all urls that begin with /api/ will be routed to a pod in our server-cluster-ip-service that is listening on port 5000. In the client post, this ingress service will be modified to route traffic to our client.

We need to install ingress-nginx, so run

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/cloud/deploy.yaml

to build out and run the containers needed to run ingress-nginx. Finally, use

minikube addons enable ingress

to enable ingress.
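To verify that the controller came up, we can list its pods; the manifest above creates them in the ingress-nginx namespace:

kubectl get pods -n ingress-nginx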

Start Up

The configuration files are all built, so all that is left to do is build out our clusters. Ensure that all three configuration files are placed in the same directory; my project has all three files placed in a directory called K8S. For quick creation of all of our services, run the command

kubectl apply -f <directory>

which will output

ingress.networking.k8s.io/ingress-service created
service/server-cluster-ip-service created
deployment.apps/server-deployment created

Along with the successful creation messages, this output will include any syntax and value errors from parsing our .yaml files. To confirm that all services and pods are running smoothly, we can run the commands

kubectl get pods
kubectl get services
kubectl get ingress

All of the components look good, so let us test out our API in Postman. When looking at the output of the get services command, note that the CLUSTER-IP column is the IP of the service with respect to the K8S network. To actually access our API from our local machine, we must run the command

minikube ip

to get the IP address of the minikube virtual machine. For me, the IP address is 192.168.64.3, so all of our HTTP calls must begin with http://192.168.64.3/api/<route>. The ingress service listens on port 80 by default, so we can omit the port from our HTTP requests; port 80 is assumed when none is specified.
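The same check works from the terminal with curl, where <route> is whatever route our server defines:

curl -i http://192.168.64.3/api/<route>

The -i flag prints the response status line along with the body.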


As expected, our HTTP request returns a response code of 200!
