This is the second tutorial in the Kubernetes Tutorial Series. The first tutorial covered the architecture of Kubernetes as well as how to provision a cluster on AWS using Kops.
In this article we will learn about the various Kubernetes Objects that help us deploy our application on top of Kubernetes.
What are Kubernetes Objects?
Kubernetes contains a number of abstractions that represent the state of your system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are represented by objects in the Kubernetes API. If this doesn't make sense yet, don't worry, it will by the end of this article. Some of the Kubernetes Objects are Pods, Deployments, Services, and many more…
We will not cover each and every Object in this tutorial series, but we will cover the most widely used ones.
A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy.
A Pod contains an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. Docker is the most common container runtime used in a Kubernetes Pod, but Pods support other container runtimes as well, such as rkt, as explained earlier.
Let us look at how to create a simple Pod. Copy and paste the below code into a file called pod.yaml. The sample below is a simple manifest for a Pod containing a busybox container that prints the message 'Hello Kubernetes!'.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
Let’s break down what we did.
We created a YAML file in which we wrote:
kind as Pod. The kind field specifies which Kubernetes Object we want.
apiVersion specifies the API version of the object. In this case it's v1 for a Pod. Other objects have different apiVersions, as we will see later.
Next we specified some metadata, which includes the name and labels of the Pod.
After that, we specified the spec section; this is the meat of the YAML. Here we specify the containers we want to run inside the Pod, the image to be used for each container, and other things we haven't used in this example but can, like environment variables, volume mounts, commands, etc.
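As a sketch of those extra options, here is a hypothetical variant of the same Pod that also sets an environment variable (the GREETING variable and the myapp-pod-env name are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod-env        # hypothetical name for this illustration
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    env:                     # environment variables injected into the container
    - name: GREETING         # made-up variable name
      value: "Hello Kubernetes!"
    command: ['sh', '-c', 'echo $GREETING && sleep 3600']
```

The container now reads its message from the environment instead of having it hard-coded in the command.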
Now run the below command to spin up the Pod.
kubectl create -f pod.yaml
Check whether the Pod is running by executing kubectl get pod myapp-pod.
You can also check the logs of the Pod by running kubectl logs myapp-pod. You should see Hello Kubernetes! in the output.
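If the Pod does not reach the Running state, a useful next step (not covered above) is kubectl describe, which shows the Pod's recent events:

```shell
# Show detailed status and recent events for the Pod;
# helpful when the image fails to pull or the Pod stays Pending.
kubectl describe pod myapp-pod

# Follow the logs as they are written instead of a one-shot dump.
kubectl logs -f myapp-pod
```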
Pods are the building blocks of our application, but they are not a good candidate when it comes to scaling, and updating them is a pain. That is why Kubernetes provides us with Deployments.
Deployments are responsible for creating Pods and are the production standard for doing so. Deployments give us features like specifying the number of replicas for Pods, rolling updates, etc. Let's quickly look at a Deployment YAML file; let's name it deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
Now create the deployment:
kubectl create -f deployment.yaml
kubectl get deployment nginx-deployment
You can see that 3 Pods are desired and available. If you run kubectl get pods, you should see the 3 running Pods.
Let’s break it down
First of all we specified apiVersion, kind, and metadata. This is the same as in the previous YAML file, but the metadata includes something called labels, which we will look at in a moment. Note here that the apiVersion is apps/v1; Deployments belong to this API group. Then comes the spec section, which contains the following information:
Number of replicas
Selector: It defines how the Deployment finds which Pods to manage.
The template field specifies information like name of the container, labels for the container, which image to be used, ports for the container etc.
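As a sketch of the rolling-update feature mentioned earlier (the nginx:1.16.0 tag is simply an assumed newer image version, not one used elsewhere in this tutorial):

```shell
# Update the container image; the Deployment replaces Pods gradually,
# keeping the application available throughout.
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.0

# Watch the rollout until all replicas run the new version.
kubectl rollout status deployment/nginx-deployment

# If something goes wrong, roll back to the previous version.
kubectl rollout undo deployment/nginx-deployment
```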
If you try to delete a Pod manually, it will come back up again because we specified that we always want 3 replicas of this nginx Pod. Deployments are great, but we can't access the Pods from the outside world. That's what we are going to look at now.
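You can verify this self-healing behaviour yourself; the Pod name below is a placeholder, so substitute a real one from your own kubectl get pods output:

```shell
# Delete one of the Deployment's Pods (replace the name with one of yours).
kubectl delete pod nginx-deployment-xxxxxxxxxx-xxxxx

# The Deployment notices only 2 of the 3 desired replicas exist
# and immediately creates a replacement Pod.
kubectl get pods
```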
Services help us in making our applications which are running inside Pods accessible to the outside world or to the cluster internally.
As we know by now, each Pod has its own IP address, and Pods are ephemeral in nature, so their IP addresses keep changing. For example: suppose you have 3 replicas of a Pod running as the backend of your application, and the frontend wants to communicate with them. If we hard-coded their IPs in the YAML files we write, we would have to change them again and again, which is close to impossible in this world of microservices where Pods come and go continuously. Services act as an abstraction for these backend Pods. Each Service gets a virtual IP (VIP) that doesn't change during the lifetime of the Service. The application's frontend then points to this Service, which forwards the requests to the backend Pods.
The set of Pods targeted by a Service is (usually) determined by a Label Selector. For example, a Service can be configured to match only Pods that carry the labels zone=prod and version=v1, and send traffic to just those Pods.
There are three types of Services:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service reachable only from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
Let's look at an example of how this works. The below specification will create a new Service object named "nginx-service" which targets TCP port 80 on any Pod with the app=nginx label. For this article we have used the type NodePort; you can use LoadBalancer or ClusterIP based on your requirements.
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
Save the above manifest in a file (say service.yaml) and create the Service with kubectl create -f service.yaml. Then run the below command to get the port on which our application can be accessed.
kubectl get svc nginx-service
You should see the below output:
Copy the node port shown in the PORT(S) column of your Service. Now go to the AWS Console where your cluster is running and pick up the IP address of one of the worker nodes. Then combine your worker node IP and the NodePort and paste it into the browser. In my case it's 22.214.171.124:31463. You should see the below page.
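The same check can be done from a terminal with curl; the IP and port below are from my cluster, so substitute your own worker-node IP and NodePort:

```shell
# Fetch the nginx welcome page through the NodePort.
# Replace the IP and port with your own <NodeIP>:<NodePort>.
curl http://22.214.171.124:31463
```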
There are many other Kubernetes Objects like PersistentVolumes, StatefulSets, DaemonSets, Jobs, ConfigMaps, Secrets, etc.; we will look at some of them later. For now you have a good foundation in Kubernetes Objects.
Next, we will look at how to persist data in Kubernetes using Volumes.
Feel free to ask any queries in the comments section below.