Understanding Kubernetes Fundamentals
What is Kubernetes?
Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform that automates deploying, scaling, and managing containerized applications. Think of it as an operating system for your distributed applications.
What problem does it solve? When you have one container running on one server, Docker is sufficient. But when you need to run hundreds of containers across dozens of servers with automatic scaling, health checks, load balancing, and zero-downtime deployments, you need Kubernetes.
Real-world analogy: If Docker is like having a single apartment, Kubernetes is like managing an entire apartment complex, handling maintenance, security, utilities, and ensuring everything runs smoothly across all units.
Why Use Kubernetes?
Kubernetes provides several critical capabilities for production applications:
- Self-healing: Automatically restarts failed containers, replaces containers, kills containers that don't respond to health checks
- Horizontal scaling: Scale your application up or down with a single command or automatically based on CPU/memory usage
- Load balancing: Distributes network traffic across multiple instances of your application
- Automated rollouts: Deploy new versions gradually, rolling back automatically if something goes wrong
- Secret management: Securely store and manage sensitive information like passwords, API keys, and certificates
- Platform independence: Run the same configuration on AWS, Azure, Google Cloud, or your own data center
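As a concrete example of automatic horizontal scaling, a HorizontalPodAutoscaler resizes a workload based on observed metrics. A minimal sketch (the Deployment name `web-app` and the 70% CPU target are assumptions for illustration):

```yaml
# hpa.yml - scales the "web-app" Deployment between 2 and 10
# replicas, targeting 70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that CPU-based autoscaling requires the metrics server to be running in the cluster and CPU requests to be set on the target pods.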
Kubernetes Architecture
A Kubernetes cluster consists of two main components: the Control Plane (the brain) and Worker Nodes (the muscle).
Control Plane Components
The control plane manages the cluster and makes decisions about scheduling and scaling:
- API Server: The front door to Kubernetes. All commands (kubectl, dashboards, automation) talk to the API server
- Scheduler: Decides which node should run each container based on resource requirements and constraints
- Controller Manager: Watches the cluster state and makes changes to match your desired configuration
- etcd: A distributed key-value store that holds all cluster state (the cluster's memory)
Worker Node Components
Each worker node runs your application containers and includes:
- Kubelet: An agent that ensures containers are running in pods as expected
- Container Runtime: Software that runs containers (Docker, containerd, or CRI-O)
- Kube-proxy: Manages networking rules, enabling communication between pods
Core Kubernetes Concepts
Pods
A Pod is the smallest deployable unit in Kubernetes. It represents one or more containers that share storage and network resources. Think of a pod as a wrapper around your container(s).
Why not just containers? Pods allow multiple tightly coupled containers to share resources. For example, a web server container might share a pod with a logging sidecar container.
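A sketch of that sidecar pattern, with a hypothetical log-tailing container sharing a volume with nginx (both containers also share the pod's network namespace):

```yaml
# sidecar-pod.yml - two containers in one pod sharing a "logs" volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}       # scratch volume that lives as long as the pod
  containers:
    - name: web
      image: nginx:1.21
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-tailer   # hypothetical sidecar that follows the access log
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```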
pod.yml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
```

This creates a single nginx pod. However, you rarely create pods directly in production. Instead, you use Deployments.
Deployments
A Deployment manages a set of identical pods, ensuring the desired number of replicas are always running. Deployments handle rolling updates, rollbacks, and scaling.
deployment.yml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
```

This deployment creates 3 identical nginx pods. If one crashes, Kubernetes automatically creates a replacement. You can scale to 10 replicas with:
```shell
kubectl scale deployment web-app --replicas=10
```

Services
Pods are ephemeral; they can be created, destroyed, and replaced, and their IP addresses change with them. A Service provides a stable network endpoint for accessing a set of pods.
service.yml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

Service Types:
- `ClusterIP` - Internal access only (the default)
- `NodePort` - Exposes the service on each node's IP at a static port
- `LoadBalancer` - Provisions a cloud load balancer (AWS ELB, Azure LB, etc.)
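For example, exposing the same `app: web` pods externally only requires changing the type. A sketch (the service name `web-public` is an assumption):

```yaml
# Same selector as web-service, but exposed externally. On a cloud
# provider this provisions a load balancer; on bare metal it stays
# "Pending" unless a load-balancer implementation is installed.
apiVersion: v1
kind: Service
metadata:
  name: web-public
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```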
Namespaces
Namespaces provide logical isolation within a cluster. They're like folders for organizing resources, perfect for separating development, staging, and production environments.
namespace.yml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
```

Resources in different namespaces are isolated. A service named api in the development namespace is separate from an api service in production; inside the cluster, each is reachable at its own DNS name (api.development.svc.cluster.local vs. api.production.svc.cluster.local).

ConfigMaps and Secrets
ConfigMaps store non-sensitive configuration data (API URLs, feature flags) while Secrets store sensitive information (passwords, API keys, certificates).
configmap.yml

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database.url: "postgres://db:5432"
  log.level: "info"
```

Secrets are similar, but their values are only base64-encoded (not encrypted by default):
secret.yml

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  db.password: "mySecurePassword"
  api.key: "secretApiKey123"
```

Both are injected into pods as environment variables or mounted as files, allowing you to change configuration without rebuilding container images.
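A sketch of both injection styles in a pod spec, assuming the `app-config` and `app-secrets` objects above exist in the same namespace (the env-var names are illustrative):

```yaml
# Pod fragment: one ConfigMap key and one Secret key become
# environment variables; the whole Secret is also mounted as files.
spec:
  volumes:
    - name: secrets
      secret:
        secretName: app-secrets
  containers:
    - name: app
      image: nginx:1.21
      env:
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: database.url
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: db.password
      volumeMounts:
        - name: secrets
          mountPath: /etc/secrets   # each Secret key appears as a file
          readOnly: true
```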
Resource Management
Kubernetes allows you to specify resource requests (guaranteed resources) and limits (maximum resources) for each container:
```yaml
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
```

- Requests: Kubernetes guarantees this amount. Used for scheduling decisions.
- Limits: Maximum resources a container can use. Prevents one container from hogging resources.

`Mi` = mebibytes (memory), `m` = millicores (CPU; 1000m = 1 core)
Local Development Options
Before deploying to production, you need a local Kubernetes environment for development and testing:
| Tool | Best For | Pros | Cons |
|---|---|---|---|
| Kind | CI/CD, testing multi-node clusters | Fast, lightweight, multi-node support | Requires Docker knowledge |
| Minikube | Beginners, learning | Easy setup, good docs, addons | Slower than Kind, single-node only |
| Docker Desktop | Mac/Windows users | One-click enable, integrated | Resource heavy, limited features |
| K3s | Edge, IoT, resource-constrained | Minimal resource usage | Simplified, not full K8s |
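Kind's multi-node support is driven by a small config file. A sketch of a three-node local cluster (the filename is an assumption), created with `kind create cluster --config kind-cluster.yml`:

```yaml
# kind-cluster.yml - one control-plane node and two workers,
# each running as a Docker container on the local machine.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```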
kubectl: The Kubernetes Command Line
kubectl is the command-line tool for interacting with Kubernetes clusters. Here are essential commands:

- `kubectl get pods` - List all pods
- `kubectl get services` - List all services
- `kubectl describe pod <name>` - Detailed pod information
- `kubectl logs <pod-name>` - View container logs
- `kubectl apply -f config.yml` - Create/update resources from YAML
- `kubectl delete pod <name>` - Delete a pod
- `kubectl exec -it <pod> -- /bin/bash` - Shell into a container
Kustomize vs Helm
Two popular tools for managing Kubernetes configurations:
Kustomize (built into kubectl) uses a declarative approach to customize YAML files for different environments without templates. It's simpler and doesn't require learning a new DSL.
Helm is a package manager for Kubernetes. It uses templates and allows you to install pre-built applications (charts) from a repository. Better for complex applications or when using third-party software.
When to use what: Use Kustomize for custom applications where you control the manifests. Use Helm for installing third-party software (databases, monitoring tools) or when you need complex templating logic.
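A sketch of a Kustomize production overlay (the directory layout and `web-app` Deployment name are assumptions), applied with `kubectl apply -k overlays/production`:

```yaml
# overlays/production/kustomization.yml - reuses the base manifests
# untouched, then patches environment-specific fields on top.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base        # shared Deployment/Service manifests
namespace: production  # place everything in the production namespace
images:
  - name: nginx
    newTag: "1.21"    # pin the image tag for this environment
replicas:
  - name: web-app
    count: 5          # more replicas in production than in the base
```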
Common Pitfalls to Avoid
- Not setting resource limits: One container can consume all node resources, starving other applications
- Using the `:latest` tag: Makes deployments unpredictable. Always use specific version tags
- Storing secrets in code: Use Kubernetes Secrets or external secret management (Vault, AWS Secrets Manager)
- Running as root: Containers should run as non-root users for security
- No health checks: Kubernetes can't determine if your app is healthy without liveness/readiness probes
- Ignoring namespaces: Leads to cluttered clusters and accidental resource conflicts
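Two of these pitfalls (missing health checks, running as root) are fixed directly in the container spec. A sketch, assuming an HTTP app listening on port 80 (the `/healthz` and `/ready` paths and the user ID are assumptions; adjust to your app):

```yaml
# Container fragment: liveness/readiness probes plus a non-root user.
livenessProbe:          # restart the container if this fails repeatedly
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:         # remove the pod from Service endpoints until ready
  httpGet:
    path: /ready
    port: 80
  periodSeconds: 5
securityContext:        # refuse to run the container as root
  runAsNonRoot: true
  runAsUser: 101
```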
Next Steps
Now that you understand Kubernetes fundamentals, you're ready for hands-on practice:
- Deploy Microservices Locally Using Kubernetes Kind - Set up a local cluster and deploy a complete microservices application
- Explore kubectl: Practice with commands on a local cluster before touching production
- Read official docs: The Kubernetes documentation is comprehensive and well-written
- Try interactive tutorials: Check out Kubernetes interactive tutorials