Deploy Microservices to Azure Kubernetes Service
Note: This guide assumes you've completed our local Kubernetes setup with Kind and have the base Kustomize configurations ready. If you're new to Kubernetes, start with Understanding Kubernetes Fundamentals.
Prerequisites
Before deploying to AKS, ensure you have:
- Azure CLI installed and configured (az --version)
- kubectl installed (kubectl version --client)
- Docker installed and running
- Active Azure subscription with appropriate permissions
- Microservices source code (FreightFlow Nexus or your own)
- Kustomize base configurations from the local setup
Log into Azure using the CLI:
az login
az account set --subscription "Your-Subscription-Name"
Azure Container Registry Setup
Azure Container Registry (ACR) is a managed Docker registry service where we'll store our microservice images. ACR integrates seamlessly with AKS and provides private image storage with security scanning capabilities.
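Registry names must be globally unique across Azure and consist of 5-50 alphanumeric characters. A quick sketch to validate the name before creating the registry (using this guide's nexusacr; swap in your own):

```shell
ACR_NAME=nexusacr

# ACR names must be 5-50 alphanumeric characters
if echo "$ACR_NAME" | grep -Eq '^[a-zA-Z0-9]{5,50}$'; then
  echo "name format ok"
else
  echo "invalid name format" >&2
fi

# Ask Azure whether the name is still available (requires az login)
az acr check-name --name "$ACR_NAME" --output table
```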
Create Resource Group
First, create a resource group to organize all your AKS-related resources:
az group create \
--name nexus-rg \
--location eastus
Expected Output: JSON response showing the resource group was created with "provisioningState": "Succeeded"
Create ACR Instance
Create a container registry with a unique name. The Basic SKU is cost-effective for development:
az acr create \
--resource-group nexus-rg \
--name nexusacr \
--sku Basic \
--admin-enabled true
The --admin-enabled flag enables the admin account for simplified authentication during development. For production, use managed identities or service principals instead.
Login to ACR
az acr login --name nexusacr
Expected Output: "Login Succeeded"
Build and Push Docker Images
We need to build Docker images for each microservice and push them to ACR. We'll tag images with both the ACR registry URL and a version tag for proper image management.
Get ACR Login Server
ACR_LOGIN_SERVER=$(az acr show --name nexusacr --query loginServer --output tsv)
echo $ACR_LOGIN_SERVER
This command retrieves your ACR URL (e.g., nexusacr.azurecr.io), which we'll use for tagging images.
Build and Push Config Server
cd freight-flow-nexus/config-server
# Build the application
./mvnw clean package
# Build Docker image with ACR tag
docker build -t $ACR_LOGIN_SERVER/config-server:v1.0.0 .
# Push to ACR
docker push $ACR_LOGIN_SERVER/config-server:v1.0.0
Build and Push Remaining Services
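The same three steps repeat identically for every service, so they can also be collapsed into a loop. This is a sketch, assuming it runs from the directory containing freight-flow-nexus/ and that each service directory has a Dockerfile:

```shell
# Build, tag, and push all four services in one pass
SERVICES="config-server api-service tracking-service gateway-service"
for SVC in $SERVICES; do
  (
    cd "freight-flow-nexus/$SVC" || exit 1
    ./mvnw clean package
    docker build -t "$ACR_LOGIN_SERVER/$SVC:v1.0.0" .
    docker push "$ACR_LOGIN_SERVER/$SVC:v1.0.0"
  )
done
```

The explicit per-service commands follow if you prefer to run them one at a time.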
Repeat for each microservice:
# API Service
cd ../api-service
./mvnw clean package
docker build -t $ACR_LOGIN_SERVER/api-service:v1.0.0 .
docker push $ACR_LOGIN_SERVER/api-service:v1.0.0
# Tracking Service
cd ../tracking-service
./mvnw clean package
docker build -t $ACR_LOGIN_SERVER/tracking-service:v1.0.0 .
docker push $ACR_LOGIN_SERVER/tracking-service:v1.0.0
# Gateway Service
cd ../gateway-service
./mvnw clean package
docker build -t $ACR_LOGIN_SERVER/gateway-service:v1.0.0 .
docker push $ACR_LOGIN_SERVER/gateway-service:v1.0.0
Pro Tip: You can verify images in ACR using:
az acr repository list --name nexusacr --output table
az acr repository show-tags --name nexusacr --repository config-server --output table
Create AKS Cluster
Azure Kubernetes Service provides a managed Kubernetes control plane. You only manage the worker nodes. We'll create a small cluster suitable for development and testing.
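AKS supports a rotating window of Kubernetes versions. Before creating the cluster, you can list what's currently offered in your region (eastus here, matching the resource group above):

```shell
LOCATION=eastus

# List the Kubernetes versions AKS currently offers in this region
az aks get-versions --location "$LOCATION" --output table
```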
Create the Cluster
az aks create \
--resource-group nexus-rg \
--name nexus-aks \
--node-count 2 \
--node-vm-size Standard_B2s \
--enable-managed-identity \
--generate-ssh-keys \
--attach-acr nexusacr
This command creates a 2-node cluster with:
- Standard_B2s VMs: Cost-effective for development (2 vCPUs, 4 GB RAM each)
- Managed Identity: Azure-managed authentication instead of service principals
- ACR Integration: The --attach-acr flag grants AKS pull permissions automatically
Cluster creation takes 5-10 minutes. The output shows cluster details including FQDN and resource group.
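While you wait, the provisioning state can be polled from another terminal:

```shell
RESOURCE_GROUP=nexus-rg
CLUSTER_NAME=nexus-aks

# Prints "Creating" while the cluster is being built, "Succeeded" once done
az aks show \
  --resource-group "$RESOURCE_GROUP" \
  --name "$CLUSTER_NAME" \
  --query provisioningState \
  --output tsv
```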
Get Cluster Credentials
Download cluster credentials to configure kubectl:
az aks get-credentials --resource-group nexus-rg --name nexus-aks
# Verify connection
kubectl cluster-info
kubectl get nodes
Expected Output: You should see 2 nodes in "Ready" state.
Create AKS Kustomize Overlay
We'll create an AKS-specific overlay that modifies our base configurations with ACR image references and production-appropriate resource limits. The overlay approach lets us reuse base configurations while customizing for different environments.
Create Overlay Directory
cd k8s-manifests
mkdir -p overlays/aks
Create kustomization.yaml
The AKS overlay references base configurations and applies environment-specific patches:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: nexus
resources:
  - ../../base
images:
  - name: config-server
    newName: nexusacr.azurecr.io/config-server
    newTag: v1.0.0
  - name: api-service
    newName: nexusacr.azurecr.io/api-service
    newTag: v1.0.0
  - name: tracking-service
    newName: nexusacr.azurecr.io/tracking-service
    newTag: v1.0.0
  - name: gateway-service
    newName: nexusacr.azurecr.io/gateway-service
    newTag: v1.0.0
patchesStrategicMerge:
  - resources.yaml
configMapGenerator:
  - name: app-config
    behavior: merge
    literals:
      - ENVIRONMENT=aks
      - LOG_LEVEL=info
The images section replaces local image references with ACR URLs. The configMapGenerator adds AKS-specific environment variables. Note that newer Kustomize versions use resources rather than the deprecated bases field to reference base configurations.
Create Resource Patch
Create resources.yaml to set production-appropriate resource limits:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-server
spec:
  template:
    spec:
      containers:
        - name: config-server
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: api-service
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tracking-service
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: tracking-service
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway-service
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: gateway-service
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
We've increased replicas to 2 for the API, tracking, and gateway services to provide high availability. Resource limits prevent any single pod from consuming all node resources.
Deploy to AKS
With our AKS cluster ready and overlay configured, we can deploy the microservices using Kustomize.
Apply Configurations
# Preview what will be deployed
kubectl kustomize overlays/aks
# Deploy to cluster
kubectl apply -k overlays/aks
Expected Output: You'll see creation confirmations for namespace, deployments, services, and ConfigMaps.
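To spot-check that the overlay's image rewrites took effect, you can filter the rendered manifests for image references:

```shell
# Show only the image fields from the rendered overlay
kubectl kustomize overlays/aks | grep 'image:'
```

Each line should reference nexusacr.azurecr.io rather than a local image name.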
Verify Deployment
# Check all resources
kubectl get all -n nexus
# Watch pod startup
kubectl get pods -n nexus -w
# Check pod details
kubectl describe pod <pod-name> -n nexus
# View logs
kubectl logs -f deployment/config-server -n nexus
Wait for all pods to reach "Running" status. The config-server must be ready before other services start successfully.
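Instead of watching manually, kubectl wait can block until each deployment reports Available; the 300-second timeouts here are arbitrary:

```shell
# config-server first, since the other services depend on it
kubectl wait --for=condition=Available deployment/config-server \
  -n nexus --timeout=300s

# Then the remaining services
for SVC in api-service tracking-service gateway-service; do
  kubectl wait --for=condition=Available "deployment/$SVC" \
    -n nexus --timeout=300s
done
```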
Troubleshooting
Common issues and solutions:
- ImagePullBackOff: Verify ACR integration with the check-acr command below
- CrashLoopBackOff: Check logs with kubectl logs and verify environment variables
- Pending Pods: Check node capacity with kubectl describe nodes
az aks check-acr --resource-group nexus-rg --name nexus-aks --acr nexusacr.azurecr.io
Ingress Controller Setup
To expose services externally, we'll install NGINX Ingress Controller. This provides a single entry point with routing rules instead of exposing each service individually with LoadBalancer services.
Install NGINX Ingress
# Add Helm repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# Install ingress controller
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--set controller.service.type=LoadBalancer
Wait for the LoadBalancer IP to be assigned:
kubectl get service ingress-nginx-controller -n ingress-nginx -w
Create Ingress Resource
Create ingress.yaml in the AKS overlay:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nexus-ingress
  namespace: nexus
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: nexus.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway-service
                port:
                  number: 8080
Update the host field with your domain, then add this file to the resources section of your kustomization.yaml. Apply the updated configuration:
kubectl apply -k overlays/aks
# Verify ingress
kubectl get ingress -n nexus
kubectl describe ingress nexus-ingress -n nexus
Configure your DNS to point to the LoadBalancer IP shown in the ingress output.
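If your zone happens to be hosted in Azure DNS, the record can also be created from the CLI. This sketch uses the placeholder domain example.com and assumes the zone lives in nexus-rg; any DNS provider works equally well:

```shell
# Grab the ingress controller's external IP
INGRESS_IP=$(kubectl get service ingress-nginx-controller \
  -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Point nexus.example.com at the LoadBalancer IP
az network dns record-set a add-record \
  --resource-group nexus-rg \
  --zone-name example.com \
  --record-set-name nexus \
  --ipv4-address "$INGRESS_IP"
```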
Production Considerations
Before going to production, consider these additional configurations:
Security
- Network Policies: Restrict pod-to-pod communication with Kubernetes NetworkPolicies
- Pod Security Standards: Enforce security standards to prevent privileged containers
- Secrets Management: Use Azure Key Vault integration for sensitive data
- TLS/SSL: Enable HTTPS with cert-manager and Let's Encrypt
- RBAC: Implement fine-grained access control with Azure AD integration
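As a concrete starting point for the TLS item, cert-manager installs via Helm in a few commands (configuring a Let's Encrypt issuer is a separate step not shown here):

```shell
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install cert-manager along with its CRDs
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
```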
Monitoring and Logging
- Azure Monitor: Enable Container Insights for cluster and application monitoring
- Log Analytics: Centralize logs with Azure Log Analytics workspace
- Prometheus/Grafana: Add metrics collection and dashboards
- Application Insights: Integrate for distributed tracing
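Container Insights, for instance, can be switched on for the existing cluster with a single addon command:

```shell
# Enable Azure Monitor Container Insights on the cluster
az aks enable-addons \
  --addons monitoring \
  --resource-group nexus-rg \
  --name nexus-aks
```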
Scaling and Performance
- Horizontal Pod Autoscaler: Automatically scale pods based on CPU/memory usage
- Cluster Autoscaler: Automatically scale nodes based on pod demands
- Resource Quotas: Set namespace-level resource limits
- Pod Disruption Budgets: Ensure availability during node maintenance
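For example, an HPA for the api-service can be created imperatively; the 70% CPU target and replica bounds are illustrative:

```shell
# Scale api-service between 2 and 5 replicas, targeting 70% average CPU
kubectl autoscale deployment api-service \
  --cpu-percent=70 --min=2 --max=5 -n nexus

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa -n nexus
```

Note that the HPA computes utilization against the CPU requests set in resources.yaml, so those must be present.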
Disaster Recovery
- Backup: Use Velero for cluster backups and disaster recovery
- Multi-region: Deploy to multiple regions for high availability
- Health Checks: Implement liveness and readiness probes
- Rolling Updates: Configure proper deployment strategies
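For the health-check item, probes are declared on each container. A hedged fragment, assuming the services expose Spring Boot actuator endpoints on port 8080 (adjust the paths if yours differ):

```yaml
# Container-spec fragment; actuator paths are an assumption
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 5
```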
Cost Optimization
- Node Pools: Use different VM sizes for different workload types
- Spot Instances: Leverage Azure Spot VMs for non-critical workloads
- Resource Limits: Set appropriate requests and limits to avoid over-provisioning
- Azure Cost Management: Monitor and optimize spending
CI/CD Integration
Automate deployments using Azure Pipelines or GitHub Actions. Here's a basic GitHub Actions workflow:
name: Deploy to AKS
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Build and Push to ACR
        run: |
          az acr login --name nexusacr
          docker build -t nexusacr.azurecr.io/api-service:${{ github.sha }} .
          docker push nexusacr.azurecr.io/api-service:${{ github.sha }}
      - name: Set AKS Context
        uses: azure/aks-set-context@v3
        with:
          resource-group: nexus-rg
          cluster-name: nexus-aks
      - name: Deploy to AKS
        run: |
          kubectl set image deployment/api-service \
            api-service=nexusacr.azurecr.io/api-service:${{ github.sha }} \
            -n nexus
          kubectl rollout status deployment/api-service -n nexus
This workflow builds images on every push to main, tags them with the commit SHA, and deploys to AKS.
Cleanup
When you're done testing, clean up resources to avoid charges:
# Delete the entire resource group (removes AKS, ACR, and all resources)
az group delete --name nexus-rg --yes --no-wait
# Or delete just the AKS cluster
az aks delete --resource-group nexus-rg --name nexus-aks --yes --no-wait
# Remove kubectl context
kubectl config delete-context nexus-aks