I’m new to Kubernetes. So when I saw on Meetup that Weaveworks was hosting a User Group session introducing the concepts of deploying containers and microservices to Kubernetes, I figured it was worth checking out. I’m glad I did!
In just about an hour, Luke Marsden, who heads DX for Weaveworks, gave a detailed walkthrough of Kubernetes' key concepts and architecture, then demoed all of it in real time. Here's a beginner's recap of those concepts: what each element is, and how it all fits together:
What is Kubernetes – key definitions and concepts
Here is a visualization of the essential Kubernetes concepts I outline below from Luke’s slide:
A container is the smallest unit in the Kubernetes world. The main purpose of Kubernetes is to manage, deploy, and, to a certain extent, monitor containers. Kubernetes management is not limited to Docker containers.
A node is the host that the container runs on.
A pod is a management unit in Kubernetes comprised of one or more containers. Each pod has its own unique IP address and storage namespaces, and all containers in the pod share these networking and storage resources. One characteristic mentioned in the presentation is that pods are "mortal": when a pod is deleted, it is gone for good. A YAML ("YAML Ain't Markup Language") file is used to define a Pod. Here is a sample YAML file representing a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
A Deployment is a new way to handle High Availability (HA) in Kubernetes in place of the Replication Controller. A pod by itself is “mortal” but with a Deployment, Kubernetes can make sure that the number of Pods that a user specifies is always up and running in the system. A Deployment specifies how many instances of a pod will run. A YAML file is used to define a Deployment. Here is an example of a YAML file representing a Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
According to the official Kubernetes website, “A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them – sometimes called a micro-service.”
Luke discussed the concepts of ClusterIPs and NodePorts. A ClusterIP is a virtual IP address reachable only from inside the cluster, while a NodePort publishes the service on a port of each node's IP address so that external users can reach it. Routing from a Service to its Pods is based on labels. As with Pods and Deployments, a Service is also defined by a YAML file.
This diagram from the presentation puts the Service YAML file and the Deployment YAML file side by side. Note the arrow that makes it easy to see that a Service is connected to a Deployment via the label attribute. In this case, the label is app: nginx.
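A minimal Service definition along these lines might look like the following (the name and NodePort value here are illustrative, not from the talk). The selector field is what ties the Service to every Pod carrying the app: nginx label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx        # routes traffic to any Pod labeled app: nginx
  ports:
  - port: 80          # ClusterIP port, reachable inside the cluster
    targetPort: 80    # container port on the matched Pods
    nodePort: 30080   # published on each node (default allowed range: 30000-32767)
```

Because routing is driven purely by the selector, the Service keeps working as Pods come and go, as long as replacement Pods carry the same label.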
A side note on “Declarative” in Kubernetes
One important Kubernetes concept to keep in mind: operations in Kubernetes are "declarative," driven by YAML files. You describe the desired state in a YAML file, and Kubernetes checks whether the Pods, Deployments, and Services it is running match that definition. We can perform kubectl apply a hundred times with no undesirable or unexpected outcome: each time, Kubernetes simply reconciles the system with the desired state defined in the YAML file.
To give you a counter-example, consider a script that is not declarative: a command that appends a certain text to a file. It appends without checking whether the text is already in that file, so if you run the script ten times, the exact same text will appear in the file ten times.
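To make the contrast concrete, here is a small shell sketch (the ensure_line helper is my own illustration, not from the talk):

```shell
# Imperative: appending blindly duplicates the line on every run.
echo "hello" >> imperative.txt
echo "hello" >> imperative.txt
wc -l < imperative.txt        # 2 lines: one copy per run

# Declarative: check the desired state first, change only if needed.
ensure_line() { grep -qxF "$1" "$2" 2>/dev/null || echo "$1" >> "$2"; }
ensure_line "hello" declarative.txt
ensure_line "hello" declarative.txt
wc -l < declarative.txt       # 1 line: applying twice changes nothing
```

This property, known as idempotency, is what makes it safe to re-run kubectl apply as often as you like.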
Because of the time limit of the talk, Luke focused on the Master node and the worker node in Kubernetes architecture. The Master node is the control plane while the worker node is where the containers are being deployed.
You can find a more detailed description of Kubernetes architecture on GitHub.
Basically, we need to initialize the Master node with kubeadm init. This creates a few digital certificates and prints a ready-made join command of the form kubeadm join --token=. We can run that command on a worker node to join it to the master.
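The bootstrap flow can be sketched like this (the token and addresses are placeholders; kubeadm init prints the real values):

```shell
# On the master node: initialize the control plane.
# The output includes a generated token and the full join command.
kubeadm init

# On each worker node: join the cluster using the token and the
# master's address taken from the init output.
kubeadm join --token=<token> <master-ip>:<port>
```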
For a more detailed description of the kubeadm command and how to initialize and create various nodes, check out the Kubernetes Getting Started Guide for kubeadm.
Kubernetes Networking Live Demo
To demonstrate this, Luke used only one master and one worker node.
Once the master and worker nodes were initialized, a pod network had to be created. Luke showed how Weave Net makes networking support a breeze, a process that would otherwise take many more steps.
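As I recall, installing Weave Net was a single kubectl apply of a manifest published by Weaveworks, roughly along these lines (the URL shown is from memory; verify it against the current Weave Net documentation):

```shell
# Install the Weave Net pod network add-on onto the cluster.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```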
To learn more about Kubernetes networking, check out the Kubernetes docs.
Once Luke set up the Kubernetes environment, he demonstrated the deployment of the service. Using Katacoda environments, he created Pods with a YAML file and the kubectl apply command. He also created a Service by linking it to the pod using the YAML file contents.
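The deploy-and-verify sequence he ran can be sketched as follows (the file names are illustrative, matching the Pod and Service definitions above):

```shell
# Create the Pod and the Service from their YAML definitions.
kubectl apply -f nginx-pod.yaml
kubectl apply -f nginx-service.yaml

# Verify the wiring: the Service's selector should match the Pod's labels.
kubectl get pods --show-labels
kubectl get services
```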
Luke showed how to create a live dashboard that visualizes all of the activities and pieces of the deployment, an easy way to track its progress. He did this using the visualization component of Weave Cloud.
You can try the visualization component of Weave Cloud yourself by following the step-by-step instructions found in the Weaveworks’ Guides. To try all of Weave’s components in one, you can sign up for Weave Cloud as a free 60-day trial.
For more comprehensive Kubernetes resources, check out:
After this brief intro to key Kubernetes concepts and seeing them demoed live, I’m eager and excited to give Kubernetes a try. Watching Luke walk through the steps of the deployment made it less daunting to try it on my own. For anyone who wants more guided training to use Kubernetes, Weaveworks also offers hands-on training sessions to User Group members. You can find out about upcoming talks and trainings by joining a Weave User Group. If you’re new to Kubernetes like me, or have a decent amount of experience, you’ll find the range of talks and levels of expertise very useful. Join a Weave User Group.