Kubernetes Cluster Setup Guide

VMware Workstation Environment


Document Information

Version: 1.0
Date: May 30, 2025
Environment: VMware Workstation with Ubuntu 22.04
Cluster Configuration: 1 Control Plane (also schedulable as a worker) + 1 Worker Node


Table of Contents

  1. Prerequisites
  2. System Preparation
  3. Container Runtime Installation
  4. Kubernetes Components Installation
  5. Control Plane Initialization
  6. Pod Network Installation
  7. Worker Node Configuration
  8. Cluster Verification
  9. Metrics Server Setup
  10. Dashboard Installation
  11. Testing and Monitoring
  12. Troubleshooting

Prerequisites

VM Configuration

  • ubuntu0 (Control Plane + Worker)

    • IP Address: 192.168.9.131
    • Resources: 2 cores, 2 GB RAM
    • OS: Ubuntu 22.04
  • ubuntu1 (Worker)

    • IP Address: 192.168.9.132
    • Resources: 1 core, 2 GB RAM
    • OS: Ubuntu 22.04

Network Information

  • Network Range: 192.168.9.0/24
  • Pod Network CIDR: 10.244.0.0/16
  • Service CIDR: 10.96.0.0/12 (default)
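
Before starting, it is worth confirming that each VM actually holds its expected address and that the nodes can reach each other; the interface name may vary per installation:

# Run on each VM to confirm its address on the 192.168.9.0/24 network
ip -4 addr show | grep 192.168.9.

# From ubuntu0, verify connectivity to ubuntu1
ping -c 3 192.168.9.132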

System Preparation

Step 1: Update Both Nodes

Execute on both ubuntu0 and ubuntu1:

# Update system packages
sudo apt update && sudo apt upgrade -y

# Install required packages
sudo apt install -y apt-transport-https ca-certificates curl gpg

# Disable swap (required for Kubernetes)
# Temporary - immediately deactivate all swap spaces
sudo swapoff -a

# Permanent - prevent swap from being activated on boot
# Option 1: Manual editing (for beginners)
sudo nano /etc/fstab
# Comment it out by adding # at the beginning:
# /swapfile none swap sw 0 0

# Option 2: Automated commenting (for scripts/automation)
# This sed command finds all swap entries in fstab and comments them out:
# - '/ swap /' matches lines containing ' swap '
# - 's/^\(.*\)$/#\1/g' adds '#' at the start of matching lines
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Verify swap is disabled
free -h
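
With swap disabled, the Swap row shows all zeros (memory figures below are illustrative):

               total        used        free      shared  buff/cache   available
Mem:           1.9Gi       450Mi       900Mi        10Mi       600Mi       1.3Gi
Swap:             0B          0B          0B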

Step 2: Configure Kernel Modules

Execute on both nodes:

# Load required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Verify modules are loaded
lsmod | grep br_netfilter
lsmod | grep overlay

Step 3: Configure Sysctl Parameters

Execute on both nodes:

# Set up required sysctl params
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl parameters
sudo sysctl --system

# Verify settings
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
sysctl net.ipv4.ip_forward
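
All three parameters should print 1. Expected output:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1

If any value is 0, confirm br_netfilter is loaded (Step 2) and re-run sudo sysctl --system.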

Container Runtime Installation

Step 4: Install containerd

Execute on both nodes:

# Install containerd
sudo apt install -y containerd

# Create containerd configuration directory
sudo mkdir -p /etc/containerd

# Generate default configuration
containerd config default | sudo tee /etc/containerd/config.toml

# Enable SystemdCgroup in containerd config
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Restart and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd

# Verify containerd is running
sudo systemctl status containerd
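
To confirm the cgroup driver change from the sed command actually took effect before moving on:

# Should show: SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml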

Kubernetes Components Installation

Step 5: Install Kubernetes Repository

Execute on both nodes:

# Create the apt keyrings directory (may not exist on a fresh Ubuntu 22.04 install)
sudo mkdir -p -m 755 /etc/apt/keyrings

# Add the Kubernetes apt repository signing key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update apt package index
sudo apt update

Step 6: Install Kubernetes Components

Execute on both nodes:

# Install kubelet, kubeadm and kubectl
sudo apt install -y kubelet kubeadm kubectl

# Mark packages as held back from automatic updates
sudo apt-mark hold kubelet kubeadm kubectl

# Enable kubelet service
sudo systemctl enable kubelet

# Verify installation
kubeadm version
kubectl version --client
kubelet --version
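
The commands above pull the latest 1.28 patch release. To pin every node to one exact version instead (the version string below is illustrative), list what the repository offers and install it explicitly:

# List available kubeadm versions in the v1.28 repository
apt-cache madison kubeadm

# Example: install a specific patch release on both nodes
sudo apt install -y kubelet=1.28.2-1.1 kubeadm=1.28.2-1.1 kubectl=1.28.2-1.1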

Control Plane Initialization

Step 7: Initialize Control Plane

Execute only on ubuntu0 (192.168.9.131):

# Initialize the cluster
sudo kubeadm init \
  --apiserver-advertise-address=192.168.9.131 \
  --pod-network-cidr=10.244.0.0/16 \
  --node-name=ubuntu0

# Set up kubectl for regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Remove taint from control plane to allow scheduling pods
kubectl taint nodes ubuntu0 node-role.kubernetes.io/control-plane:NoSchedule-

# Verify control plane node
kubectl get nodes

Important: Save the kubeadm join command from the init output. Example:

kubeadm join 192.168.9.131:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
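
If you ever need just the CA certificate hash (for example, to rebuild the join command by hand), it can be recomputed on ubuntu0:

# Recompute the --discovery-token-ca-cert-hash value from the cluster CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'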

Pod Network Installation

Step 8: Install Flannel CNI

Execute only on ubuntu0:

# Install Flannel CNI
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Wait for Flannel pods to be ready
kubectl wait --for=condition=ready pod -l app=flannel -n kube-flannel --timeout=60s

# Verify Flannel installation
kubectl get pods -n kube-flannel
kubectl get nodes
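
Flannel's manifest defaults to the 10.244.0.0/16 Pod network, which is why that CIDR was passed to kubeadm init in Step 7; the two must match. To confirm what Flannel is actually using:

# Inspect Flannel's network config (should show "Network": "10.244.0.0/16")
kubectl -n kube-flannel get cm kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'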

Worker Node Configuration

Step 9: Join Worker Node

Execute on ubuntu1 (192.168.9.132):

# Use the join command from Step 7
sudo kubeadm join 192.168.9.131:6443 --token <your-token> \
    --discovery-token-ca-cert-hash sha256:<your-hash>

If you lost the join command, generate a new one on ubuntu0:

# admin.conf is readable by root only, hence sudo
sudo kubeadm token create --print-join-command
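
Bootstrap tokens expire after 24 hours by default. To see which tokens are still valid, and to confirm the join worked, run on ubuntu0:

# List current bootstrap tokens and their TTLs
sudo kubeadm token list

# The new node should appear here (it may take a minute to become Ready)
kubectl get nodes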

Cluster Verification

Step 10: Verify Cluster Status

Execute on ubuntu0:

# Check all nodes
kubectl get nodes -o wide

# Check system pods
kubectl get pods --all-namespaces

# Check cluster info
kubectl cluster-info

# Check component status (the componentstatuses API is deprecated but still works for a quick look)
kubectl get componentstatuses

Expected output:

NAME      STATUS   ROLES           AGE   VERSION
ubuntu0   Ready    control-plane   5m    v1.28.x
ubuntu1   Ready    <none>          2m    v1.28.x

Metrics Server Setup

Step 11: Install Metrics Server

Execute on ubuntu0:

# Download and install metrics-server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Patch metrics-server for insecure kubelet connections
kubectl patch deployment metrics-server -n kube-system --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--kubelet-insecure-tls"
  },
  {
    "op": "add", 
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--kubelet-preferred-address-types=InternalIP"
  }
]'

# Wait for metrics-server to be ready
kubectl wait --for=condition=ready pod -l k8s-app=metrics-server -n kube-system --timeout=60s

# Verify metrics are working
kubectl top nodes
kubectl top pods --all-namespaces
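
Sample output once metrics are flowing (values are illustrative for this small lab):

NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ubuntu0   180m         9%     1100Mi          57%
ubuntu1   60m          6%     850Mi           44%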

Dashboard Installation

Step 12: Install Kubernetes Dashboard (No Authentication)

Execute on ubuntu0:

# Install Kubernetes Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Wait for dashboard to be deployed
kubectl wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=60s

# Configure dashboard to skip authentication
kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--enable-skip-login"
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-", 
    "value": "--disable-settings-authorizer"
  }
]'

# Create service account with cluster-admin permissions
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF

# Patch dashboard deployment to use service account
kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"template":{"spec":{"serviceAccountName":"dashboard-admin"}}}}'

# Wait for dashboard to restart
kubectl rollout status deployment/kubernetes-dashboard -n kubernetes-dashboard

Step 13: Access Dashboard

Method 1: kubectl proxy

# Start proxy on ubuntu0 (bound to all interfaces so the host can reach it)
kubectl proxy --address='0.0.0.0' --disable-filter=true &

# Access dashboard from your host machine at:
# http://192.168.9.131:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
# Click "Skip" when prompted for authentication

Method 2: NodePort Service

# Edit dashboard service to use NodePort
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'

# Get the NodePort
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard

# Access dashboard at:
# https://192.168.9.131:<NodePort>
# https://192.168.9.132:<NodePort>
# Click "Skip" when prompted for authentication

Method 3: Insecure HTTP Access

# Create insecure dashboard service
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard-insecure
  name: kubernetes-dashboard-insecure
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
EOF

# Patch dashboard for insecure access
kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--insecure-bind-address=0.0.0.0"
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--insecure-port=9090"
  }
]'

# Get NodePort for insecure access
kubectl get svc kubernetes-dashboard-insecure -n kubernetes-dashboard

# Access via HTTP (no HTTPS, no authentication):
# http://192.168.9.131:<NodePort>
# http://192.168.9.132:<NodePort>

Testing and Monitoring

Step 14: Deploy Test Application

# Create test deployment
kubectl create deployment nginx-test --image=nginx --replicas=3

# Expose as service
kubectl expose deployment nginx-test --port=80 --type=NodePort

# Check deployment
kubectl get deployments
kubectl get pods -o wide
kubectl get services

# Test service access
kubectl get svc nginx-test
curl http://192.168.9.131:<NodePort>
curl http://192.168.9.132:<NodePort>
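
A successful curl returns the default nginx welcome page, which begins like this:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...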

Step 15: Monitoring Commands

# View resource usage
kubectl top nodes
kubectl top pods --all-namespaces

# View cluster information
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide

# View cluster events
kubectl get events --all-namespaces --sort-by='.lastTimestamp'

# Monitor continuously
watch kubectl top nodes
watch kubectl top pods --all-namespaces

Step 16: Create Monitoring Script

# Create monitoring script
cat <<'EOF' > ~/monitor-k8s.sh
#!/bin/bash
echo "=== Kubernetes Cluster Monitor ==="
echo "Date: $(date)"
echo ""
echo "=== Cluster Nodes ==="
kubectl get nodes -o wide
echo ""
echo "=== Resource Usage ==="
kubectl top nodes 2>/dev/null || echo "Metrics not available yet"
echo ""
echo "=== Pod Resource Usage ==="
kubectl top pods --all-namespaces 2>/dev/null || echo "Metrics not available yet"
echo ""
echo "=== System Pods Status ==="
kubectl get pods -n kube-system
echo ""
echo "=== User Workloads ==="
kubectl get pods --all-namespaces | grep -v kube-system | grep -v kubernetes-dashboard | grep -v kube-flannel
echo ""
echo "=== Services ==="
kubectl get svc --all-namespaces
echo ""
echo "=== Recent Events ==="
kubectl get events --all-namespaces --sort-by='.lastTimestamp' | tail -10
EOF

chmod +x ~/monitor-k8s.sh

# Run monitoring script
~/monitor-k8s.sh
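
To capture cluster state over time, you can schedule the script with cron; the interval and log path below are illustrative:

# Run every 5 minutes and append the output to a log file
(crontab -l 2>/dev/null; echo "*/5 * * * * $HOME/monitor-k8s.sh >> $HOME/k8s-monitor.log 2>&1") | crontab -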

Troubleshooting

Common Issues and Solutions

Nodes in NotReady State

# Check kubelet logs
sudo journalctl -xeu kubelet

# Check container runtime
sudo systemctl status containerd

# Restart kubelet
sudo systemctl restart kubelet

# Check node conditions
kubectl describe nodes

Pod Networking Issues

# Check Flannel pods
kubectl get pods -n kube-flannel

# Check Flannel logs
kubectl logs -n kube-flannel -l app=flannel

# Restart Flannel pods
kubectl delete pods -n kube-flannel -l app=flannel

Metrics Server Issues

# Check metrics-server logs
kubectl logs -n kube-system -l k8s-app=metrics-server

# Restart metrics-server
kubectl rollout restart deployment/metrics-server -n kube-system

# Verify metrics-server arguments
kubectl describe deployment metrics-server -n kube-system

Dashboard Access Issues

# Check dashboard pods
kubectl get pods -n kubernetes-dashboard

# Check dashboard logs
kubectl logs -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

# Restart dashboard
kubectl rollout restart deployment/kubernetes-dashboard -n kubernetes-dashboard

# Check service endpoints
kubectl get endpoints -n kubernetes-dashboard

Resource Constraints

# Monitor resource usage
kubectl top nodes
kubectl describe nodes

# Check pod resource requests/limits
kubectl describe pods --all-namespaces | grep -A 5 -B 5 "Requests\|Limits"

# Check for pending pods
kubectl get pods --all-namespaces | grep Pending

Reset Cluster (If Needed)

Reset Worker Node

# On worker node (ubuntu1)
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d
sudo rm -rf $HOME/.kube/config

Reset Control Plane

# On control plane (ubuntu0)
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d
sudo rm -rf $HOME/.kube

Clean Restart

# After reset, restart from Step 7 (Initialize Control Plane)
# The cluster runs on containerd (not Docker), so clean up leftovers with crictl
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock rm --all --force
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock rmi --prune
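
kubeadm reset does not flush iptables rules; if networking misbehaves after re-initializing, clear them manually (note: this removes ALL iptables rules on the node):

sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X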

Useful Commands Reference

Cluster Management

# Label a node as a worker (shows under ROLES in kubectl get nodes)
kubectl label nodes <node-name> node-role.kubernetes.io/worker=true

# View cluster information
kubectl cluster-info
kubectl get nodes -o wide
kubectl get pods --all-namespaces

# Resource monitoring
kubectl top nodes
kubectl top pods --all-namespaces

# Service discovery
kubectl get svc --all-namespaces
kubectl get endpoints --all-namespaces

# Logs and events
kubectl logs <pod-name> -n <namespace>
kubectl get events --all-namespaces --sort-by='.lastTimestamp'

Application Management

# Deploy applications
kubectl create deployment <name> --image=<image>
kubectl expose deployment <name> --port=<port> --type=NodePort

# Scale applications
kubectl scale deployment <name> --replicas=<number>

# Update applications
kubectl set image deployment/<name> <container>=<new-image>

# Delete applications
kubectl delete deployment <name>
kubectl delete service <name>

Configuration Management

# Apply YAML configurations
kubectl apply -f <file.yaml>

# Edit resources
kubectl edit deployment <name>
kubectl edit service <name>

# View resource definitions
kubectl get deployment <name> -o yaml
kubectl describe deployment <name>

Get Kube Config for Lens

# On your control plane node
sudo cat /etc/kubernetes/admin.conf
# or
cat ~/.kube/config
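
To use the cluster from Lens on your host machine, copy the kubeconfig off the VM; the username and destination path below are illustrative:

# Run on your host machine (replace 'user' with your VM username)
scp user@192.168.9.131:~/.kube/config ~/k8s-lab-kubeconfig
# Then add ~/k8s-lab-kubeconfig as a cluster in Lens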

Security Notes

Important Security Considerations

  1. Dashboard Authentication Disabled: This configuration disables authentication for the Kubernetes dashboard. This is acceptable for lab environments but should NEVER be used in production.

  2. Metrics Server TLS Disabled: The metrics server is configured to skip TLS verification for simplicity in the lab environment.

  3. Network Security: The cluster uses default network policies. In production, implement proper network segmentation and policies.

  4. RBAC: While cluster-admin permissions are used for simplicity, production environments should use principle of least privilege.

Production Considerations

For production deployments:

  • Enable proper authentication and authorization
  • Use TLS certificates for all components
  • Implement network policies
  • Use specific RBAC roles instead of cluster-admin
  • Enable audit logging
  • Implement proper backup strategies
  • Use LoadBalancer services instead of NodePort
  • Implement resource quotas and limits

Conclusion

This guide provides a complete setup for a 2-node Kubernetes cluster suitable for learning and development purposes. The cluster includes:

  • Functional Control Plane: ubuntu0 serves as both control plane and worker
  • Worker Node: ubuntu1 provides additional compute capacity
  • Pod Networking: Flannel CNI for pod-to-pod communication
  • Metrics Monitoring: metrics-server for resource monitoring
  • Web Dashboard: Kubernetes Dashboard with authentication disabled (lab use only)
  • Resource Monitoring: kubectl top and the monitoring script for basic observability

Next Steps

  1. Learn Kubernetes Concepts: Practice with deployments, services, configmaps, secrets
  2. Experiment with Applications: Deploy multi-tier applications
  3. Explore Storage: Set up persistent volumes and storage classes
  4. Network Policies: Implement pod-to-pod communication controls
  5. Helm Charts: Learn package management for Kubernetes
  6. CI/CD Integration: Connect with Jenkins, GitLab, or GitHub Actions

Document End

This document was generated on May 30, 2025, for a VMware Workstation environment running Ubuntu 22.04.