II. Inserting Client Containers Into Kubernetes

Kirtfieldk · Jan 18, 2021 · 5 min read

Welcome to part two, where the journey of deploying a Kubernetes cluster onto Amazon Web Services continues. The first part of this series focused on setting up the Kubernetes network with a deployment of multiple server pods as well as an ingress-nginx controller. In that post, I explained how to pull local images into K8S and covered some key commands needed to understand the concepts in this post. This post will walk through the steps of deploying client containers into an established Kubernetes network.

In the same fashion as the server, to install our client containers into the K8S network we will need client_deployment and client_clusterIP YAML files. For these config files to work, we will need to navigate into the client directory and build the client image from its Dockerfile. The Dockerfile for most React applications will resemble the one below.

dockerfile

This Dockerfile implements a multi-stage build, which greatly reduces the image size. The first stage is the builder stage, where the image installs all the needed packages and builds the production version of the React UI. The production build creates a static index.html file placed into the /build directory. In the second stage, the image pulls nginx:1.18.0-alpine to serve our static index.html file. We then overwrite the default nginx config with the local nginx.config in the client directory. The nginx image is designed to serve the index.html file placed in the /usr/share/nginx/html path, so we copy the build directory from the builder stage into that path. To build, just run

docker build -t client .

in the terminal.
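A minimal sketch of such a multi-stage Dockerfile, assuming a node:14-alpine base tag for the builder stage and the default conf.d path as the destination for nginx.config, might look like this:

# builder stage: install dependencies and build the static React bundle
FROM node:14-alpine as builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# serve stage: copy the static build into nginx
FROM nginx:1.18.0-alpine
# overwrite the default nginx config with our local nginx.config
COPY ./nginx.config /etc/nginx/conf.d/default.conf
# place the static files where nginx expects to find them
COPY --from=builder /app/build /usr/share/nginx/html
# CMD ["nginx", "-g", "daemon off;"]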

nginx.config

The local nginx.config file establishes that the port to be used is 3000, and the line try_files $uri $uri/ /index.html ensures that our react-router-dom package works seamlessly. If this line were not present, nginx 404 errors would pop up whenever a URL path is typed directly instead of navigated to in the frontend.

Note that the command CMD ["nginx", "-g", "daemon off;"] is commented out. To run this image standalone, uncomment this line to serve the standard index.html file. However, to run this image in K8S we will need to remove the CMD for our network to display the HTML file.
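A minimal sketch of what this nginx.config might contain, assuming the static files sit in the default /usr/share/nginx/html path:

server {
  # port that the K8S service and ingress will forward to
  listen 3000;

  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    # fall back to index.html so react-router-dom can handle client-side routes
    try_files $uri $uri/ /index.html;
  }
}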

We will need the client_clusterIP.yaml to expose our pods to the K8S network.

client_clusterIP.yaml

We give it the name client-cluster-ip-service so we can reference it in our ingress controller, and the selector {component: client} so it matches the pods labeled in our client_deployment.yaml file.
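A sketch of client_clusterIP.yaml along those lines; the port numbers assume the nginx listen port of 3000 from above:

apiVersion: v1
kind: Service
metadata:
  name: client-cluster-ip-service
spec:
  type: ClusterIP
  # route traffic to any pod labeled component: client
  selector:
    component: client
  ports:
    - port: 3000
      targetPort: 3000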

The client_deployment.yaml file ensures that all of the pods managed by this deployment remain healthy and operating as directed.

client_deployment.yaml

Like the server_deployment.yaml file, we label these pods so they match the client cluster IP service and pull the local client image.
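A sketch of what client_deployment.yaml might look like; the replica count and the imagePullPolicy used to pick up the locally built image are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: client
  template:
    metadata:
      labels:
        # must match the selector in client_clusterIP.yaml
        component: client
    spec:
      containers:
        - name: client
          image: client
          # assumed: use the image built locally instead of pulling from a registry
          imagePullPolicy: Never
          ports:
            - containerPort: 3000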

To ensure that outside traffic will be directed to our frontend, we will need to establish routing rules in our ingress controller.

ingress_service.yaml

Updating the ingress_service.yaml file to include the new rule /?(.*) will route all incoming traffic whose URL does not start with /api/ to the frontend.
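A sketch of what the updated ingress_service.yaml might look like; the apiVersion, annotations, and the server rule carried over from part I (including its port number) are assumptions based on a typical ingress-nginx setup:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - http:
        paths:
          # API traffic goes to the server pods (rule from part I, port assumed)
          - path: /api/?(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: server-cluster-ip-service
                port:
                  number: 5000
          # everything else goes to the client pods
          - path: /?(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: client-cluster-ip-service
                port:
                  number: 3000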

To implement these updates, run the command

kubectl apply -f k8s

to see the output of our objects being created or configured.

output

Since most of the objects were created in part I, we will see unchanged for server-cluster-ip-service and server-deployment. Although the output above does not show client-cluster-ip-service and client-deployment being created, the first run of apply will create those objects. The ingress-service shows configured, which means it was updated.

To check if the Kubernetes cluster is healthy, use the commands

kubectl get <object>

and

kubectl describe <object> <name>

to verify that the objects are running.
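For example, with the object names used in this series:

kubectl get pods

kubectl get services

kubectl describe deployment client-deployment

kubectl describe ingress ingress-service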

To access the frontend, run the command

minikube ip

to receive the IP address of our minikube VM. Once we grab the IP address, we can access the UI via http://<minikubeIP>/ as the URL. For this camp application, the homepage makes an immediate API call to the /api/v1/bootcamp route.
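For example, we can hit both routes from the terminal with curl; the exact response bodies will depend on the application:

MINIKUBE_IP=$(minikube ip)

curl http://$MINIKUBE_IP/                # served by the client pods (React index.html)

curl http://$MINIKUBE_IP/api/v1/bootcamp # routed by the ingress to the server pods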

Frontend

From the console, the call to the bootcamp route received a successful response with one camp fetched from the MongoDB database. The call to user was unsuccessful because the route did not receive valid JSON values.

From this brief example, we can see that all traffic into our K8S network passes through the ingress nginx controller. With respect to this K8S cluster, the minikube IP is the localhost of the network. The ingress nginx controller is configured to listen on port 80, so all API calls not specifying a port will automatically direct their traffic to port 80.

In the client src code, all axios calls take the form axios.get('/api/v1/<route>'), so the ingress controller will direct these calls to the server deployment pods.

We have successfully placed our client image into the already existing Kubernetes network!
