Cloud Architecture Fundamentals (Intro to Kubernetes)

November 24, 2023


Are you launching your career in cloud computing? Are you familiar with containers and how they can make an organization more efficient? In this article, we give a high-level introduction to containers and Kubernetes, and we explore the organizational benefits Kubernetes offers.

About the Authors

Hello! My name is Michael Gibbs, CEO and founder of Go Cloud Architects, where we are dedicated to helping our clients build elite cloud computing careers. I’ve been working in technology for well over 25 years, and I’ve spent 20 years of my career coaching or mentoring others to get their first tech job.

In my 25-year tech career, I have worked in:

  • Networking
  • Security
  • Cloud computing
  • Teaching
  • Coaching
  • Mentoring

My name is Ran Tao. I have more than 20 years of experience in software design and solution architecture as a technical manager in a Global Fortune 500 company. My specialties are Enterprise Project and Portfolio Management (EPPM) system development and enterprise-scale system implementation. I have successfully led several IT projects with multi-million-dollar budgets and durations of up to 2.5 years. The systems and platforms I implemented and managed are mission-critical and support a portfolio of more than $2 billion. Education-wise, I have a master’s degree in Computer Science and a Ph.D. in engineering.

Introduction

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. At a high level, Kubernetes and containerized applications work this way:

  1. Applications and their dependencies are first packaged into standardized images.
  2. Then, images are deployed as containers that run on a fleet of computing instances, such as physical servers or virtual machines (VMs); a minimal example manifest is sketched just after this list.
  3. Containers provide a consistent running environment regardless of differences among the underlying operating systems (OS). Essentially, containers add another layer of abstraction between the application’s running environment and the operating system.
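For instance, the snippet below is a minimal, illustrative pod manifest (nginx is simply a convenient public image, and all names are placeholders; pods, the Kubernetes wrapper around containers, are explained later in this article). Applying it asks Kubernetes to run one container from a pre-built image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-app              # illustrative name
    spec:
      containers:
        - name: web
          image: nginx:1.25        # a standardized, pre-built image
          ports:
            - containerPort: 80    # port the application listens on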

Kubernetes manages the above process. It also automatically scales containers up or down based on the workload. Additionally, Kubernetes keeps the container running environment healthy by automatically recovering from failures. Essentially, Kubernetes and its supporting systems give organizations the freedom to run applications anywhere they desire (private clouds, public clouds, hybrid clouds, or multi-cloud environments). Furthermore, container images can be moved without refactoring the code, so Kubernetes enables organizations to move workloads to wherever it matters most.

To fully understand how beneficial and transformational containerized computing can be, we will walk through three computing methods and how they have changed over time.

Virtualization Technologies Over Time

Let’s take a brief look at how computing technologies have changed over time. This will help you understand the benefits of Kubernetes.

Before Virtualization

Early on, applications ran on dedicated servers: a single application, or a few lightweight applications, ran on the same operating system (OS) and the same hardware. In most cases a single server supported a single application, so a very large number of servers was required to meet an organization’s needs. As computer processing power increased, these servers were left with a tremendous amount of unused capacity. Having so many servers created an environment with high hardware, power, and management costs, all while wasting a tremendous amount of underutilized compute capacity. The solution to this problem was server virtualization.

Server Virtualization (VM)

To address the unused capacity and high costs of the traditional environment, virtualization was born. Virtualization enables a single physical server to be provisioned into multiple logical servers. It provided organizations with significant cost reductions by enabling greater utilization of their servers, which reduced the number of servers needed to meet their computing needs.

Virtualization works by using a hypervisor, a technology that partitions a physical server into multiple logical servers. The hypervisor manages all of the physical server’s resources (memory, CPU, storage), which enables the organization to allocate a percentage of those resources to each individual virtual server. Each virtual machine has its own operating system. While virtualization is widely used today, there is a movement toward containerized applications to improve system efficiency. The main driver of this migration is the efficiency gained by not requiring multiple copies of the operating system on a server. Additionally, containers are easier to manage in a large-scale deployment.

[Figure: the architecture of server virtualization]

Container-Based Virtualization (Containers)

The next evolution of the virtual machine is the container. Unlike the virtual machine, there is no hypervisor. Additionally, there is no requirement for multiple copies of the operating system, as there is with server virtualization.

Effectively, you have a server with an operating system, onto which a container runtime module is installed. Applications are then packaged into logically isolated containers: lightweight units that leverage the host operating system and bundle an application together with its dependencies. This creates a secure environment where each application is logically separated from all other containers.

Since no operating system is needed per container, the container is very efficient in terms of memory and CPU resources.  This enables a single server to securely and efficiently host many applications.

[Figure: the architecture of a container]

How Kubernetes Works

Containers and container runtime modules address the problem of sharing computing resources efficiently. Something needs to manage the containers, and that’s Kubernetes. Kubernetes provides a framework to ensure the resiliency and scalability of containerized resources. Kubernetes manages container scaling, failover, deployments, and more.

Key features provided by Kubernetes (extracted from kubernetes.io)

Service discovery and load balancing

Kubernetes can expose a container using the DNS name or using the container’s IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
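As a sketch (all names here are illustrative), a Kubernetes Service provides a stable DNS name and load-balances traffic across every pod whose labels match its selector:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-service            # resolvable in-cluster via DNS as "web-service"
    spec:
      selector:
        app: web                   # traffic is spread across pods carrying this label
      ports:
        - port: 80                 # port the Service exposes
          targetPort: 8080         # port the containers listen on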

Storage orchestration

Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud storage, and more.
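For example, a pod can claim storage through a PersistentVolumeClaim like the hypothetical one below; Kubernetes then finds and mounts a matching volume from whatever storage backend the cluster offers:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-claim         # illustrative name; pods reference the claim by this name
    spec:
      accessModes:
        - ReadWriteOnce        # mountable read-write by a single node
      resources:
        requests:
          storage: 10Gi        # Kubernetes binds the claim to a volume of at least this size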

Automated rollouts and rollbacks

You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers.
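A minimal sketch of this idea: the hypothetical Deployment below declares a desired state of four replicas and a rolling-update strategy, so changing the image tag triggers a controlled rollout (which can later be reverted with kubectl rollout undo):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-deploy             # illustrative name
    spec:
      replicas: 4                  # desired state: four pods
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1        # replace pods at a controlled rate
          maxSurge: 1              # allow one extra pod during the rollout
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25    # changing this tag triggers a rolling update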

Automatic bin packing

You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
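In manifest terms (the values here are arbitrary), bin packing is driven by the resources block of each container:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sized-app          # illustrative name
    spec:
      containers:
        - name: app
          image: nginx:1.25
          resources:
            requests:          # what the scheduler uses when packing nodes
              cpu: 250m        # a quarter of one CPU core
              memory: 256Mi
            limits:            # hard ceiling the container may not exceed
              cpu: 500m
              memory: 512Mi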

Self-healing

Kubernetes restarts containers that fail, replaces containers when needed, and stops containers that don’t respond to your user-defined health check. Additionally, Kubernetes doesn’t advertise containers to clients until they are ready to serve.
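As a sketch, health checks are declared per container; the /healthz and /ready paths below are hypothetical endpoints your application would need to serve:

    apiVersion: v1
    kind: Pod
    metadata:
      name: probed-app         # illustrative name
    spec:
      containers:
        - name: app
          image: nginx:1.25
          livenessProbe:       # the container is restarted if this check fails
            httpGet:
              path: /healthz   # hypothetical health endpoint
              port: 80
            periodSeconds: 10
          readinessProbe:      # traffic is withheld until this check passes
            httpGet:
              path: /ready     # hypothetical readiness endpoint
              port: 80
            periodSeconds: 5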

Secret and configuration management

Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
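For example (names and values are placeholders), a Secret can be created declaratively and injected into a container as an environment variable, so the image itself never contains the credential:

    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials     # illustrative name
    type: Opaque
    stringData:
      password: example-only   # placeholder; never commit real secrets to source control
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-secret
    spec:
      containers:
        - name: app
          image: nginx:1.25
          env:
            - name: DB_PASSWORD        # exposed to the application as an environment variable
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password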

The Components Of A Kubernetes Deployment

There are many variations of Kubernetes deployments, as Kubernetes is open source with an incredible degree of tunability. A Kubernetes deployment has many components; the most common ones are described in this document.

The first component of a Kubernetes deployment is the set of servers running the Kubernetes cluster. A Kubernetes cluster consists of nodes, which can be categorized as master nodes and worker nodes. Master nodes manage the worker nodes in the cluster; hence, the master nodes are also called the control plane.

Control Plane

The control plane oversees the environment. To ensure high availability, the control plane (master nodes) should run on multiple servers. It is responsible for making global decisions about the cluster, such as scheduling, and for maintaining the target state of the cluster. An illustrative scenario: if an application requires four containers and one of them fails, the control plane automatically starts a replacement container. (Note that, in Kubernetes terminology, applications run in pods, which contain containers. The concept of pods will be explained later; hence, we will use the term “container” for now.)

The control plane has many components, each of which carries out a different function. We will cover the top four components here.

Kube-APIServer

The API server is the front end of the control plane. It exposes the Kubernetes API in REST format and consumes JSON or YAML input. Users need kubectl installed on a local computer to interact with the API server; if EKS from AWS is used, eksctl is also needed. Both kubectl and eksctl are command-line utilities for communicating with the Kube-APIServer.

Cluster Store

The cluster store can be considered the database (a key-value store) of the cluster. It saves the cluster state and configuration. Most Kubernetes implementations use etcd as the backing store; hence, the cluster store is sometimes simply called etcd.

Kube-controller-manager

Within the control plane, there are several controllers, such as the node controller, job controller, and deployment controller. As its name states, the Kube-controller-manager manages all of these controllers and is sometimes called the “controller of controllers”.

Kube-scheduler

Kube-scheduler listens to the API server for new work tasks, then assigns the work to worker nodes, weighing several factors in each scheduling decision. Common decision factors are resource requirements, policy constraints, affinity and anti-affinity specifications, and more.
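For instance, an affinity rule like the hypothetical one below constrains the scheduler to nodes carrying a particular label:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pinned-app         # illustrative name
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: disktype    # hypothetical node label
                    operator: In
                    values:
                      - ssd          # schedule only onto nodes labeled disktype=ssd
      containers:
        - name: app
          image: nginx:1.25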

Worker Nodes (Data Plane)

Worker nodes run the node components, which maintain the running pods and provide the Kubernetes runtime environment. A pod is an encapsulation of one or more containers. Most often, only one container runs in a pod; however, multiple containers can also be placed in a pod. A typical example is one container running the main application and another running a helper application, as sketched below. The key components of worker nodes are then described in the sections that follow.
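A sketch of that pattern (image names are illustrative, and the helper stands in for something like a log shipper): both containers share the pod’s network and can share volumes:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-helper    # illustrative name
    spec:
      containers:
        - name: main-app       # serves the application itself
          image: nginx:1.25
        - name: log-helper     # hypothetical helper; a real one would ship logs somewhere
          image: busybox:1.36
          command: ["sh", "-c", "tail -f /dev/null"]   # placeholder process to keep the container alive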

Kubelet

Kubelet is the main Kubernetes agent and runs on each node. It registers the node with the control plane and listens to the API server for work tasks. The kubelet then creates and runs the pods and, once done, reports back to the control plane.

Kube-Proxy

Kube-Proxy is a network proxy running on each node. It controls network communication to the Pods, via the pod’s IP address, from network sessions inside or outside of the cluster.

Container Runtime

The container runtime is the software responsible for running containers. Container runtimes are pluggable: any runtime that implements the Kubernetes Container Runtime Interface (CRI) can be used. Popular runtimes include containerd, CRI-O, gVisor, and Docker.

Other Kubernetes Components (addons)

There are other addons that provide cluster-level features. An example is the web user interface, which provides a dashboard for monitoring the health of container clusters. There are addons for resource monitoring, cluster-level logging, and more. While many cluster addons are optional, cluster DNS is required for all deployments. Cluster DNS provides its own mapping of names to IP addresses so that workloads can find each other by name.

How Kubernetes Benefits The Organization

Kubernetes is a widely used platform for modern computing. Because containers are so portable and scalable, they are the virtual machine of the future.

Kubernetes is widely used in the datacenter for modern microservice-based applications. It is also widely used in the cloud for the same reasons it’s used in the datacenter: scalability and portability.

Containers allow existing applications to be migrated to the cloud and back, or even across cloud providers. Kubernetes provides an environment that simplifies the management and deployment of containerized applications, which can be anything from a simple website to machine learning.

Since Kubernetes is open source and vendor agnostic, containers can be used in any data center or with any cloud provider. This gives the organization complete control of its containerized applications regardless of the computing environment: you can easily lift and shift existing containerized applications to Microsoft AKS, AWS EKS, or Google GKE, or even back to the on-premises data center.

Kubernetes can also be used to run machine learning models thanks to its flexibility and scalability. Worker nodes can be backed by whatever resources are needed, for example a GPU used by a container for machine learning.
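A sketch of how that looks (assuming the cluster’s GPU nodes run the NVIDIA device plugin, which advertises the nvidia.com/gpu resource; the image is illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: training-job           # illustrative name
    spec:
      containers:
        - name: trainer
          image: pytorch/pytorch:latest   # illustrative ML image
          resources:
            limits:
              nvidia.com/gpu: 1    # requests one GPU; requires the device plugin on the node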

The Kubernetes control plane supports different deployment methods, such as canary deployments and blue/green deployments, which makes deploying and managing microservice applications across pods much easier.
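One common canary pattern, sketched with illustrative names: two Deployments share the app: web label that a Service would select on, so their replica counts set the rough traffic split between the current and candidate versions:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-stable
    spec:
      replicas: 9                  # roughly 90% of traffic
      selector:
        matchLabels:
          app: web
          track: stable
      template:
        metadata:
          labels:
            app: web               # shared label a Service would select on
            track: stable
        spec:
          containers:
            - name: web
              image: nginx:1.25    # current version
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-canary
    spec:
      replicas: 1                  # roughly 10% of traffic reaches the candidate
      selector:
        matchLabels:
          app: web
          track: canary
      template:
        metadata:
          labels:
            app: web
            track: canary
        spec:
          containers:
            - name: web
              image: nginx:1.26    # candidate version under evaluation

Promoting the canary is then just a matter of updating the stable Deployment’s image and removing the canary Deployment.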

If you’re looking for more Kubernetes mentoring resources, join us on LinkedIn and YouTube for regular updates on valuable training materials.
