Deploy Microservices Locally Using Kubernetes Kind

Running microservices locally in a production-like Kubernetes environment helps catch infrastructure issues early, before they ever reach UAT or production. This guide walks through setting up a local Kubernetes cluster using Kind (Kubernetes in Docker), building Docker images for the FreightFlow Nexus services, and deploying them using Kustomize overlays. By the end, all four services (Config Server, API, Tracking, and Gateway) will be running locally and accessible via port-forwarding.

Prerequisites

Make sure the following tools are installed before proceeding:
  • Java 21+
  • Maven 3.9+
  • Docker
  • Kind (Kubernetes in Docker)
  • kubectl
Note: Kind is a lightweight tool that runs Kubernetes clusters inside Docker containers. It is ideal for local development and CI pipelines where a full cloud cluster is not needed.

FreightFlow Nexus Services Overview

The application consists of four Spring Boot microservices:
  • Config Server (8071) - Centralized configuration management using Spring Cloud Config
  • API Service (8080) - Core business logic for freight operations (shipments, orders, customers)
  • Tracking Service (8090) - Real-time shipment tracking and GPS location updates
  • Gateway (8072) - API Gateway for routing, authentication, and load balancing

Dockerfile Example

Each service uses a similar Dockerfile. Here's the structure:
freightflow-api/Dockerfile
FROM eclipse-temurin:21-jre-alpine

# Set working directory
WORKDIR /app

# Copy the jar file
COPY target/*.jar app.jar

# Expose the application port
EXPOSE 8080

# Run the application
ENTRYPOINT ["java", "-jar", "app.jar"]
All services use the eclipse-temurin:21-jre-alpine base image for a lightweight footprint (~170MB). The pattern is identical across services, with only port numbers varying.
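Since only the port differs between services, one hypothetical way to avoid maintaining four near-identical files is a build argument (a sketch, not what the repo actually uses):

```dockerfile
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app

# PORT is a build-time argument; 8080 is the default (API service)
ARG PORT=8080

COPY target/*.jar app.jar
EXPOSE ${PORT}
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Each service would then build with, for example, `docker build --build-arg PORT=8090 -t nexus-tracking:v0.0.4 ./freightflow-tracking`. Note that `EXPOSE` is documentation only; the container still listens on whatever port the Spring Boot app binds.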

Create the Kind Cluster

Start by creating a local cluster named nexus:
kind create cluster --name nexus
Verify the cluster is up and running:
kubectl cluster-info --context kind-nexus
This confirms that kubectl is pointing at your local Kind cluster and the API server is reachable.
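By default this creates a single-node cluster, which is all this guide needs. If you later want worker nodes or host port mappings, Kind also accepts a declarative config file (an optional sketch, not required here):

```yaml
# kind-config.yml (hypothetical) - pass with:
#   kind create cluster --name nexus --config kind-config.yml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
```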

Build the Services

From the project root, package all microservices:
mvn clean package
This produces the .jar artifacts for each service module under their respective target/ directories.

Build Docker Images

Build Docker images for all four services:
docker build -t nexus-config:v0.0.4 ./freightflow-config
docker build -t nexus-api:v0.0.4 ./freightflow-api
docker build -t nexus-tracking:v0.0.4 ./freightflow-tracking
docker build -t nexus-gateway:v0.0.4 ./freightflow-gateway

Load Images into Kind

Kind nodes run containerd with their own image store, separate from the host's Docker daemon. Images built locally are therefore not automatically visible inside the cluster and must be loaded explicitly:
kind load docker-image nexus-config:v0.0.4 --name nexus
kind load docker-image nexus-api:v0.0.4 --name nexus
kind load docker-image nexus-tracking:v0.0.4 --name nexus
kind load docker-image nexus-gateway:v0.0.4 --name nexus
Important: Skipping this step is a common mistake. Without loading images into Kind, Kubernetes will fail to pull them and pods will remain in ErrImagePull or ImagePullBackOff state.
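To confirm the images actually landed in the node's containerd store, you can list them from inside the node container (the node name `nexus-control-plane` follows from the cluster name `nexus`):

```shell
# List the nexus images known to the Kind node's containerd
docker exec nexus-control-plane crictl images | grep nexus
```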

Apply Kubernetes Manifests

The project uses Kustomize to manage environment-specific configurations. A single command applies all base manifests with local patches, including imagePullPolicy, environment variables, and secrets:
kubectl apply -k k8s/overlays/local/
To preview the fully rendered manifests without actually deploying:
kubectl kustomize k8s/overlays/local/
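After the first deployment, `kubectl diff` is also handy for seeing what a reapply would change on the live cluster before committing to it:

```shell
# Show a diff between the rendered overlay and the live cluster state
kubectl diff -k k8s/overlays/local/
```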
The Kustomize directory layout looks like this:
k8s/
├── base/                      # Shared manifests
│   ├── kustomization.yml
│   ├── namespace.yml
│   ├── rbac.yml
│   ├── freightflow-config/
│   ├── freightflow-api/
│   ├── freightflow-tracking/
│   └── freightflow-gateway/
└── overlays/
    ├── local/                 # Kind (local development)
    │   ├── kustomization.yml
    │   ├── secrets.yml
    │   └── patches/
    └── aks/
        ├── uat/               # AKS UAT
        └── prod/              # AKS Prod

Kustomize Configuration Deep Dive

Understanding the Kustomize structure helps you customize deployments for different environments. Let's examine the key configuration files:

Base Configuration

The base kustomization.yml defines shared resources used across all environments:
k8s/base/kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: nexus
resources:
  - namespace.yml
  - rbac.yml
  - freightflow-config/deployment.yml
  - freightflow-config/service.yml
  - freightflow-api/deployment.yml
  - freightflow-api/service.yml
  - freightflow-tracking/deployment.yml
  - freightflow-tracking/service.yml
  - freightflow-gateway/deployment.yml
  - freightflow-gateway/service.yml
The namespace.yml creates an isolated namespace for the application:
k8s/base/namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: nexus
The rbac.yml sets up permissions for service discovery (required for Spring Cloud Kubernetes):
k8s/base/rbac.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nexus-service-account
  namespace: nexus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-discovery-role
  namespace: nexus
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-discovery-rolebinding
  namespace: nexus
subjects:
  - kind: ServiceAccount
    name: nexus-service-account
    namespace: nexus
roleRef:
  kind: Role
  name: service-discovery-role
  apiGroup: rbac.authorization.k8s.io
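Once the manifests are applied, you can sanity-check these permissions with `kubectl auth can-i`, impersonating the service account:

```shell
# Should print "yes" if the Role and RoleBinding are wired up correctly
kubectl auth can-i list services \
  --as=system:serviceaccount:nexus:nexus-service-account -n nexus
```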

Sample Deployment & Service

Here's the API service deployment configuration:
k8s/base/freightflow-api/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      serviceAccountName: nexus-service-account
      containers:
        - name: api
          image: nexus-api:v0.0.4
          ports:
            - containerPort: 8080
          env:
            - name: SPRING_CLOUD_CONFIG_URI
              value: "http://config:8071"
            - name: ENVIRONMENT
              value: "default"
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
And its corresponding service definition:
k8s/base/freightflow-api/service.yml
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  type: ClusterIP
  selector:
    app: api
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      protocol: TCP

Local Overlay Configuration

The local overlay kustomization.yml references base resources and applies environment-specific patches:
k8s/overlays/local/kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - secrets.yml
patches:
  - path: patches/config-deployment-patch.yml
  - path: patches/api-deployment-patch.yml
  - path: patches/tracking-deployment-patch.yml
  - path: patches/gateway-deployment-patch.yml
Secrets are defined in secrets.yml (never commit real secrets to Git):
k8s/overlays/local/secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: nexus-secrets
  namespace: nexus
type: Opaque
stringData:
  CONFIG_REPO_URL: "https://github.com/your-org/nexus-config"
  GITHUB_TOKEN: "<YOUR_GITHUB_TOKEN>"
  DB_URL: "jdbc:postgresql://localhost:5432/nexus"
  DB_USERNAME: "postgres"
  DB_PASSWORD: "<YOUR_DB_PASSWORD>"
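One way to keep real values out of Git entirely is to render the file from environment variables at deploy time. A minimal sketch (the output filename and the choice of variables are assumptions, not part of the repo):

```shell
# Render a secret manifest from environment variables so real
# credentials never land in version control. Unquoted EOF means
# the shell expands ${GITHUB_TOKEN} and ${DB_PASSWORD} here.
cat > secrets.local.yml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: nexus-secrets
  namespace: nexus
type: Opaque
stringData:
  GITHUB_TOKEN: "${GITHUB_TOKEN}"
  DB_PASSWORD: "${DB_PASSWORD}"
EOF
```

You would then add `secrets.local.yml` to `.gitignore` and point the overlay at it instead of a committed file.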
Patch files customize deployments for the local environment. Here's the API deployment patch:
k8s/overlays/local/patches/api-deployment-patch.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  template:
    spec:
      containers:
        - name: api
          imagePullPolicy: Never   # Critical for Kind
          env:
            - name: ENVIRONMENT
              value: "local"
            - name: SPRING_DATASOURCE_URL
              valueFrom:
                secretKeyRef:
                  name: nexus-secrets
                  key: DB_URL
            - name: SPRING_DATASOURCE_USERNAME
              valueFrom:
                secretKeyRef:
                  name: nexus-secrets
                  key: DB_USERNAME
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: nexus-secrets
                  key: DB_PASSWORD
Key Points:
  • imagePullPolicy: Never is critical for Kind to use locally loaded images
  • Patches use strategic merge to add/override specific fields without duplicating entire manifests
  • Secrets are referenced via secretKeyRef to avoid hardcoding sensitive values
  • Resource limits prevent any single service from consuming all cluster resources

Verify Pods are Running

After applying manifests, check the status of all pods in the nexus namespace:
kubectl get pods -n nexus
Wait until all pods show STATUS: Running and READY: 1/1. To watch them in real-time:
kubectl get pods -n nexus -w
If a pod is stuck, describe it to inspect events and identify the root cause:
kubectl describe pod <pod-name> -n nexus
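When `describe` alone isn't conclusive, namespace-wide events and the logs of the previous (crashed) container instance usually are:

```shell
# Recent events across the namespace, newest last
kubectl get events -n nexus --sort-by=.lastTimestamp

# Logs from the previous container instance of a crash-looping pod
kubectl logs <pod-name> -n nexus --previous
```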

Access Services Locally

Kind does not support LoadBalancer-type Services out of the box. Use kubectl port-forward to expose each service on a local port:
# Config Server (port 8071)
kubectl port-forward -n nexus deployment/config 8071:8071

# API Service (port 8080)
kubectl port-forward -n nexus deployment/api 8080:8080

# Tracking Service (port 8090)
kubectl port-forward -n nexus deployment/tracking 8090:8090

# Gateway (port 8072)
kubectl port-forward -n nexus deployment/gateway 8072:8072
Test the gateway by hitting these URLs in your browser or via curl:
http://localhost:8072/config/api/dev
http://localhost:8072/api/<endpoint>
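If Spring Boot Actuator is on the classpath (an assumption about this project, but standard for Spring Cloud services), each port-forwarded service also offers a quick smoke test:

```shell
# Should return {"status":"UP"} once the API service is healthy
curl -s http://localhost:8080/actuator/health
```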

Service Overview

Service        | Spring App Name | Port | K8s Deployment | Image
---------------|-----------------|------|----------------|----------------------
Config Server  | config          | 8071 | config         | nexus-config:v0.0.4
API            | api             | 8080 | api            | nexus-api:v0.0.4
Tracking       | tracking        | 8090 | tracking       | nexus-tracking:v0.0.4
Gateway        | gateway         | 8072 | gateway        | nexus-gateway:v0.0.4

Redeploying After Code Changes

When you modify a service, its image must be rebuilt and reloaded into the Kind node, since the cluster does not pull from an external registry. Here is the full cycle for a single service (e.g., gateway):
# 1. Rebuild the jar
mvn clean package -pl freightflow-gateway

# 2. Rebuild the Docker image
docker build -t nexus-gateway:v0.0.4 ./freightflow-gateway

# 3. Remove the old image from Kind's cache
docker exec nexus-control-plane crictl rmi nexus-gateway:v0.0.4

# 4. Load the new image
kind load docker-image nexus-gateway:v0.0.4 --name nexus

# 5. Restart the deployment to pick up the new image
kubectl rollout restart deployment/gateway -n nexus
To redeploy all four services at once, use this loop:
mvn clean package \
  -pl freightflow-config,freightflow-api,freightflow-tracking,freightflow-gateway

for svc in config api tracking gateway; do
  docker build -t nexus-${svc}:v0.0.4 ./freightflow-${svc}
  docker exec nexus-control-plane crictl rmi nexus-${svc}:v0.0.4
  kind load docker-image nexus-${svc}:v0.0.4 --name nexus
  kubectl rollout restart deployment/${svc} -n nexus
done
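If you run this cycle often, it is worth wrapping in a small script. A hypothetical `redeploy.sh` sketch; note the `|| true` on the `crictl rmi` step, which would otherwise abort the script on a first deploy when there is no old image to remove:

```shell
#!/usr/bin/env bash
# redeploy.sh (hypothetical helper) - rebuild and reload one or more services
set -euo pipefail

VERSION=v0.0.4
CLUSTER=nexus

for svc in "$@"; do
  docker build -t nexus-${svc}:${VERSION} ./freightflow-${svc}
  # Ignore failure if the image was never loaded into the node before
  docker exec ${CLUSTER}-control-plane crictl rmi nexus-${svc}:${VERSION} || true
  kind load docker-image nexus-${svc}:${VERSION} --name ${CLUSTER}
  kubectl rollout restart deployment/${svc} -n nexus
done
```

Usage would be `./redeploy.sh gateway` for one service or `./redeploy.sh config api tracking gateway` for all four.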

Useful Commands

Pod & Deployment Management

# Check pod status
kubectl get pods -n nexus

# Describe a pod (for debugging startup issues)
kubectl describe pod <pod-name> -n nexus

# Reapply config and restart
kubectl apply -k k8s/overlays/local/
kubectl rollout restart deployment/api -n nexus

# Delete and recreate all resources
kubectl delete -k k8s/overlays/local/
kubectl apply -k k8s/overlays/local/

# Delete the entire cluster
kind delete cluster --name nexus

Log Tailing

# Follow logs for a deployment
kubectl logs -f -n nexus deployment/config
kubectl logs -f -n nexus deployment/api
kubectl logs -f -n nexus deployment/gateway

# View last 100 lines
kubectl logs --tail=100 -n nexus deployment/api

# Follow logs for all pods of a service
kubectl logs -f -l app=api -n nexus
That covers the complete local development workflow for FreightFlow Nexus microservices on Kind. This setup gives you a reproducible, production-like Kubernetes environment on your machine, ideal for validating deployment manifests, testing service discovery, and catching configuration issues before they reach UAT.
In the next article, we will look at deploying the same services to AKS UAT using Azure Container Registry (ACR) with managed identity. If you have any questions, feel free to leave a comment below.