Mastering Kubernetes Master Node: A Full Guide

In the Kubernetes ecosystem, the Master Node plays a pivotal role in managing the state of a cluster, orchestrating containerized applications, and ensuring that the environment runs efficiently and securely. This blog post delves into what a Kubernetes Master Node is, its key components, and provides practical examples of how to set up and interact with a Master Node in your Kubernetes cluster.

Understanding the Kubernetes Master Node

The Master Node is the heart of a Kubernetes cluster. It is responsible for making global decisions about the cluster (such as scheduling) and for detecting and responding to cluster events (such as starting a new pod when a Deployment’s replicas field is unsatisfied).

Key Components of a Master Node:

  • API Server (kube-apiserver): Acts as the front end to the cluster’s shared state, allowing users, management tools, and other components to communicate with the cluster.
  • Cluster Store (etcd): A persistent storage layer that stores the cluster’s state and configuration.
  • Controller Manager (kube-controller-manager): Runs controller processes that watch the state of your cluster and make changes aiming to move the current state towards the desired state.
  • Scheduler (kube-scheduler): Watches for newly created Pods with no assigned node, and selects a node for them to run on.

Setting Up a Kubernetes Master Node

Setting up a Kubernetes cluster involves initializing a Master Node and joining worker nodes to the cluster. Here’s how to get started with creating a Master Node using kubeadm, a tool that provides a best-practice way to create a Kubernetes cluster.

Prerequisites:

  • One or more machines running one of the following (or a newer release):
      • Ubuntu 16.04+
      • Debian 9+
      • CentOS 7+
      • Red Hat Enterprise Linux (RHEL) 7+
      • Fedora 25+
  • 2 GB or more of RAM per machine
  • 2 CPUs or more
  • Full network connectivity between all machines in the cluster

1. Install kubeadm, kubelet, and kubectl

On Ubuntu, you can install these packages using the following commands:

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
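
Note: the packages.cloud.google.com / apt.kubernetes.io repositories used above have since been deprecated in favor of the community-owned pkgs.k8s.io repositories. A rough equivalent using the newer repositories looks like the following sketch; the v1.30 path is only an example, so substitute the minor version you intend to install and verify the steps against the official Kubernetes documentation:

# Assumes Kubernetes v1.30 packages; adjust the version in both URLs as needed
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl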

2. Initialize the Master Node (run this only on the machine intended to act as the Master)

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The --pod-network-cidr flag is required by some networking plugins; 10.244.0.0/16 is the default range expected by Flannel, which is used in step 4 below.

3. Set Up Local kubeconfig

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
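
Alternatively, if you are operating as the root user, you can point kubectl at the admin kubeconfig directly instead of copying it:

export KUBECONFIG=/etc/kubernetes/admin.conf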

4. Apply a CNI (Container Network Interface) Plugin

For example, to apply Flannel as the network plugin (note that the Flannel project has moved from the coreos GitHub organization to flannel-io, so check the Flannel README for the current manifest location):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Interacting with the Master Node

Once your Master Node is set up, you can start interacting with it to manage your Kubernetes cluster.

View Cluster Information:

kubectl cluster-info

Get Nodes in the Cluster:

kubectl get nodes
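
It is also worth confirming that the control plane components themselves came up cleanly. A quick way to do this is to list the pods in the kube-system namespace (exact pod names vary by version and setup):

kubectl get pods -n kube-system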

Deploying Applications:

Deploy a simple Nginx application:

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80

Apply the deployment:

kubectl apply -f nginx-deployment.yaml
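
To verify the rollout, you can list the Deployment and the Pods it created (the label selector below matches the app: nginx label from the manifest):

kubectl get deployments
kubectl get pods -l app=nginx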

Accessing Deployed Applications:

Expose the Nginx deployment:

kubectl expose deployment nginx-deployment --type=NodePort --port=80

Find the exposed service’s port and access it:

kubectl get services
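
If you prefer to grab the assigned NodePort programmatically, a jsonpath query works; the curl below is only a sketch, with <node-ip> standing in for the address of any node in your cluster:

NODE_PORT=$(kubectl get service nginx-deployment \
  -o jsonpath='{.spec.ports[0].nodePort}')
# <node-ip> is a placeholder for a real node address
curl http://<node-ip>:${NODE_PORT}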

Best Practices for Master Node Management

  • High Availability: For production environments, configure your cluster for high availability by setting up multiple Master Nodes.
  • Backup etcd: Regularly back up the etcd data store to safeguard your cluster’s state.
  • Secure the API Server: Use RBAC policies to control access to the Kubernetes API and apply network policies to restrict traffic to the API server.

Deep Dive into the Kubernetes Control Plane

The Kubernetes Master Node, often referred to as the control plane, orchestrates the cluster’s operations. Its components, including the API Server, etcd, Controller Manager, and Scheduler, work in concert to manage the cluster’s state, respond to events, and schedule workloads.

Extended Component Overview:

  • API Server (kube-apiserver): Serves as the cluster's gateway, handling internal and external requests. It's designed to scale horizontally, ensuring high availability and performance.
  • etcd: A highly reliable distributed data store that persistently stores the cluster’s state and configuration. For production environments, running etcd in a clustered configuration across multiple machines is recommended to ensure resilience and high availability.
  • Controller Manager (kube-controller-manager): Runs a set of controllers that handle routine tasks such as ensuring the correct number of pods for a deployment and managing service endpoints.
  • Scheduler (kube-scheduler): Responsible for assigning workloads to nodes based on resource availability, constraints, and affinity/anti-affinity specifications.

Setting Up a High-Availability Master Node

For production clusters, a high-availability (HA) setup is crucial. This involves running multiple Master Nodes to ensure that the cluster remains operational in case of a failure.

1. Setting Up an HA Cluster with kubeadm:

To create an HA cluster, you need to initialize your first Master Node and then join additional Master Nodes to the cluster.

  • Initialize the First Master Node:
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" \
--upload-certs --pod-network-cidr=10.244.0.0/16
  • Join Additional Master Nodes:

On each additional Master Node, use the join command provided at the end of the kubeadm init output. It will include the --control-plane flag.
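
As a rough illustration, the control-plane join command has the following shape; the token, hash, and certificate key below are placeholders, so always use the exact values printed by your own kubeadm init run:

sudo kubeadm join LOAD_BALANCER_DNS:LOAD_BALANCER_PORT \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <certificate-key>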

2. Configuring a Load Balancer:

Before joining additional Master Nodes, set up a load balancer that directs traffic to all Master Nodes. This ensures the API Server’s high availability.

Advanced Use Cases

Auto-Scaling Applications:

Kubernetes supports the Horizontal Pod Autoscaler (HPA), which automatically increases or decreases the number of pod replicas based on CPU utilization or other selected metrics.

  • Enable HPA for a Deployment:
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10

This command auto-scales nginx-deployment based on CPU usage, maintaining a target of 50% average CPU utilization across the replicas while keeping the replica count between 1 and 10.
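
Note that the HPA needs a metrics source such as metrics-server running in the cluster before it can act on CPU utilization; a commonly used manifest is shown below (verify the URL against the metrics-server project before applying), and you can then inspect the autoscaler’s state:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl get hpa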

Rolling Updates and Rollbacks:

Kubernetes facilitates rolling updates to Deployments, allowing zero-downtime updates by incrementally replacing existing pods with pods running the new version.

Performing a Rolling Update

Update the deployment’s image:

kubectl set image deployments/nginx-deployment nginx=nginx:1.16.1
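
You can watch the update progress and confirm that it completed successfully with:

kubectl rollout status deployment/nginx-deployment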

Rolling Back an Update:

If an update causes issues, you can roll back to the previous version:

kubectl rollout undo deployments/nginx-deployment
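
To see which revisions are available and, if needed, roll back to a specific one, use the rollout history subcommand (revision 2 below is only an example):

kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=2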

Further Best Practices for Master Node Management

Securing the Control Plane:

API Server Authentication and Authorization: Utilize RBAC to define roles and permissions tightly around resources and operations.
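
As a minimal sketch of least-privilege RBAC, the commands below create a role that can only read Pods in the default namespace and bind it to a hypothetical user named jane:

# "jane" is a placeholder user; adjust the namespace and subject to your setup
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods -n default
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=jane -n default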

etcd Encryption: Enable encryption at rest for etcd to protect sensitive data.

Network Policies: Apply network policies to control the flow of traffic between pods and namespaces, reducing the risk of lateral movement in case of a compromise.
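
A minimal sketch of such a policy, assuming a namespace named production, is a default-deny rule that blocks all ingress traffic to pods in that namespace unless another policy explicitly allows it:

# default-deny-ingress.yaml ("production" is a placeholder namespace)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress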

Backup and Disaster Recovery

Regularly back up the etcd data store and Kubernetes object configurations. This can be done using tools like etcdctl for etcd snapshots and kubectl get -o yaml for exporting Kubernetes objects.

  • Backing Up etcd Data:

ETCDCTL_API=3 etcdctl snapshot save snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/path/to/ca.crt \
  --cert=/path/to/etcd-server.crt \
  --key=/path/to/etcd-server.key

  • Restoring etcd Data:

ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
  --data-dir /path/to/new/data-dir \
  --name m1 \
  --initial-cluster m1=https://127.0.0.1:2380 \
  --initial-cluster-token NEW_TOKEN \
  --initial-advertise-peer-urls https://127.0.0.1:2380

The peer URL above is a placeholder for the restored member’s address. Note that --initial-cluster-state new is an etcd server flag used when starting the restored member, not an option of etcdctl snapshot restore.

Monitoring and Logging:

Implement comprehensive monitoring and logging for the control plane. Tools like Prometheus for metrics collection and Elasticsearch-Fluentd-Kibana (EFK) stack for logging can provide deep insights into the cluster’s health and performance.
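
Independent of any external tooling, the API server also exposes health endpoints that are handy for quick checks or probes on reasonably recent Kubernetes versions:

kubectl get --raw='/livez?verbose'
kubectl get --raw='/readyz?verbose'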

Conclusion

The Kubernetes Master Node is a critical component that orchestrates and manages the entire cluster, ensuring that your applications run smoothly and efficiently. By following the steps outlined in this guide, you can set up, configure, and manage a Kubernetes Master Node, laying the foundation for a robust, scalable containerized application infrastructure. Whether you’re deploying simple web applications or complex distributed systems, mastering the Kubernetes Master Node is key to unlocking the full potential of container orchestration with Kubernetes.
