Let’s assume you’ve developed an application and packaged it in Docker containers. You’ve deployed the application across three different servers, and it begins experiencing significant traffic.
Scaling
Now you need to scale up quickly: how will you go from three servers to the forty you might need in the future?
Once you’ve created the containers, you need to decide where each one should run, monitor them all, and ensure they restart if they fail. This is where Kubernetes comes into play.
Kubernetes
Kubernetes, or K8s, is an open-source container orchestration platform that simplifies the deployment, management, and scaling of containerized applications.
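To make “deploying and managing containerized applications” concrete, here is a minimal sketch of a Kubernetes Deployment manifest. The application name `myapp` and image tag `myapp:1.0` are hypothetical placeholders, not part of any real project:

```yaml
# deployment.yaml — hypothetical example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3              # run three copies of the container
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0        # hypothetical image name
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` asks Kubernetes to keep three copies of the container running; if one dies, Kubernetes restarts it automatically.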
Benefit of Kubernetes
* Deploy and manage applications (containers)
* Scale up and down according to demand
* Zero-downtime application deployments
* Rollbacks of deployments, and much more
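The scaling and zero-downtime points above map directly onto a Deployment’s spec. As a sketch (field values are illustrative, and the Deployment name `myapp` is hypothetical), a rolling-update strategy replaces Pods gradually so the application stays available during a release:

```yaml
# Excerpt from a hypothetical Deployment spec
spec:
  replicas: 40              # scale from 3 to 40 by changing one number
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1     # at most one Pod down during an update
      maxSurge: 1           # at most one extra Pod created during an update
```

Scaling can also be done imperatively with `kubectl scale deployment myapp --replicas=40`, and a bad release can be reverted with `kubectl rollout undo deployment myapp`.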
How does it Work?
CLUSTER
A cluster is a group of NODES, which can be VMs (Virtual Machines) or physical machines. These NODES can be running on the Cloud (AWS, GCP, Azure) or on-premises.
Kubernetes Cluster
Master Node Or Control Plane
This is the central processing unit of the cluster, where all critical decisions are made. The control plane comprises several interconnected components.
→ API Server - api
→ Scheduler - sched
→ Cluster Store - etcd
→ Controller Manager - c-m
→ Cloud Controller Manager - c-c-m
N.B. The control plane components communicate with each other via the API server (api).
Worker Nodes
This is where all the heavy lifting occurs, such as running your application. Each worker node runs two Kubernetes components:
kubelet
kube-proxy
Every cluster has one or more worker nodes. If a node fails, your application will still be accessible from the other nodes because a cluster is made up of multiple nodes working together.
The control plane communicates with each worker node through that node’s kubelet. Every node contains a container runtime and a kubelet, which starts, stops, and manages individual containers.
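To make the kubelet’s role concrete: it watches the API server for Pods scheduled to its node, then asks the container runtime to start their containers. A minimal Pod manifest that a kubelet would run might look like this (the Pod name is a hypothetical example; `nginx` is a real public image):

```yaml
# pod.yaml — hypothetical example
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
    - name: hello
      image: nginx:1.25     # container the kubelet will start on its node
      ports:
        - containerPort: 80
```

After `kubectl apply -f pod.yaml`, the scheduler picks a node, and that node’s kubelet pulls the image and runs the container.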