
Kubernetes - A Complete Introduction for Developers

Published on:
6 min read · Author: MDS Software Solutions Group


Kubernetes (commonly abbreviated as K8s) is a container orchestration platform that has revolutionized how applications are deployed, scaled, and managed in production environments. If you work with Docker containers and are wondering how to efficiently manage them at scale, Kubernetes is the answer.

In this guide, we will walk you through all the key Kubernetes concepts - from fundamental objects, through YAML manifests, to a hands-on deployment of a Next.js application in a K8s cluster.

Why Kubernetes?

When your application runs in a single Docker container on a single server, everything is straightforward. Problems begin when you need to:

  • Scale your application across multiple instances based on traffic
  • Ensure high availability - automatic restarts after failures
  • Manage configuration of multiple microservices simultaneously
  • Deploy without downtime - zero-downtime deployments
  • Balance traffic across multiple application instances
  • Manage secrets and configuration securely

Docker Compose works well in development environments, but in production you need something more robust. Kubernetes automates all these tasks and provides a declarative model for infrastructure management.

Kubernetes Architecture

Before diving into practice, it is worth understanding the basic architecture of a Kubernetes cluster:

  • Control Plane (Master) - the brain of the cluster, managing state and making scheduling decisions
    • API Server - the central API for communicating with the cluster
    • etcd - a distributed key-value store holding cluster state
    • Scheduler - decides which node should run a Pod
    • Controller Manager - runs control loops that reconcile the actual cluster state with the desired state
  • Worker Nodes - machines where containers actually run
    • kubelet - an agent managing Pods on a given node
    • kube-proxy - maintains network rules that route Service traffic to Pods

Core Kubernetes Objects

Pod - The Smallest Unit

A Pod is the smallest deployable unit in Kubernetes. It can contain one or more containers that share networking and storage:

apiVersion: v1
kind: Pod
metadata:
  name: my-nextjs-app
  labels:
    app: nextjs
    environment: production
spec:
  containers:
    - name: nextjs
      image: my-registry/nextjs-app:1.0.0
      ports:
        - containerPort: 3000
      resources:
        requests:
          memory: "128Mi"
          cpu: "250m"
        limits:
          memory: "256Mi"
          cpu: "500m"
      livenessProbe:
        httpGet:
          path: /api/health
          port: 3000
        initialDelaySeconds: 10
        periodSeconds: 30
      readinessProbe:
        httpGet:
          path: /api/health
          port: 3000
        initialDelaySeconds: 5
        periodSeconds: 10

In practice, you rarely create Pods directly. Instead, you use higher-level controllers such as Deployments.

Deployment - Managing Replicas

A Deployment is the most commonly used object in Kubernetes. It manages a set of identical Pods through an underlying ReplicaSet and enables declarative updates:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextjs-app
  labels:
    app: nextjs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nextjs
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: nextjs
        version: "1.0.0"
    spec:
      containers:
        - name: nextjs
          image: my-registry/nextjs-app:1.0.0
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"

The RollingUpdate strategy with maxUnavailable: 0 ensures that the full set of Pods stays available throughout a rollout: with replicas: 3 and maxSurge: 1, Kubernetes briefly runs up to four Pods but never fewer than three, which is what guarantees zero-downtime deployments.

Service - Internal Communication

A Service provides a stable access point to a group of Pods. Even when Pods are created and destroyed, a Service maintains a consistent IP address and DNS entry:

apiVersion: v1
kind: Service
metadata:
  name: nextjs-service
spec:
  type: ClusterIP
  selector:
    app: nextjs
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000

Kubernetes offers several Service types:

  • ClusterIP (default) - accessible only within the cluster
  • NodePort - exposes a port on every node in the cluster
  • LoadBalancer - provisions an external load balancer (in cloud environments)
  • ExternalName - maps a Service to an external DNS name
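
For example, a NodePort variant of the Service above would expose the application on every node. The explicit nodePort value below is illustrative - if you omit it, Kubernetes picks one from the allowed range automatically:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nextjs-nodeport
spec:
  type: NodePort
  selector:
    app: nextjs
  ports:
    - port: 80
      targetPort: 3000
      nodePort: 30080 # must fall within 30000-32767
```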

Ingress - External Traffic Routing

Ingress is an object that manages HTTP/HTTPS access from outside the cluster. It enables routing based on hostnames and paths:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextjs-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextjs-service
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080

kubectl Essentials

kubectl is the primary CLI tool for interacting with a Kubernetes cluster. Here are the essential commands you should know:

# Cluster information
kubectl cluster-info
kubectl get nodes

# Pod management
kubectl get pods
kubectl get pods -o wide
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl logs -f <pod-name>  # follow
kubectl exec -it <pod-name> -- /bin/sh

# Deployment management
kubectl get deployments
kubectl rollout status deployment/nextjs-app
kubectl rollout history deployment/nextjs-app
kubectl rollout undo deployment/nextjs-app

# Applying manifests
kubectl apply -f deployment.yaml
kubectl apply -f ./k8s/  # all files in directory
kubectl delete -f deployment.yaml

# Scaling
kubectl scale deployment nextjs-app --replicas=5

# Debugging
kubectl get events --sort-by='.lastTimestamp'
kubectl top pods
kubectl top nodes
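
One more command worth knowing: port-forwarding lets you reach a cluster-internal Service from your local machine without exposing it, which is invaluable for debugging (the Service name matches the examples in this guide):

```shell
# Forward local port 8080 to port 80 of the Service (Ctrl+C to stop)
kubectl port-forward service/nextjs-service 8080:80
```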

ConfigMaps and Secrets

ConfigMap - Application Configuration

A ConfigMap stores configuration data as key-value pairs. It allows you to decouple configuration from the container image:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  NEXT_PUBLIC_API_URL: "https://api.example.com"
  NEXT_PUBLIC_SITE_NAME: "My Application"
  LOG_LEVEL: "info"
  CACHE_TTL: "3600"
  nginx.conf: |
    server {
      listen 80;
      server_name localhost;
      location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
      }
    }

Secret - Sensitive Data

Secrets are used to store sensitive data such as passwords, API tokens, or certificates. Values are encoded in base64:
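
You can produce the encoded strings yourself with the base64 tool. Note the use of printf rather than echo, which keeps a stray trailing newline out of the encoded value - a classic source of mysterious authentication failures. The connection string below is just an example value:

```shell
# Encode a value for the data field of a Secret.
# printf avoids the trailing newline that echo would add.
encoded=$(printf '%s' 'postgresql://user:pass@db:5432/mydb' | base64)
echo "$encoded"

# Round-trip check: decode it back
echo "$encoded" | base64 --decode
```

In practice, kubectl create secret generic app-secrets --from-literal=database-url=... performs the encoding for you.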

apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  database-url: cG9zdGdyZXNxbDovL3VzZXI6cGFzc0BkYjoxNTQzMi9teWRi
  jwt-secret: c3VwZXItc2VjcmV0LWtleS0xMjM0NTY=
  redis-password: cmVkaXMtcGFzc3dvcmQtMTIz

Using ConfigMap and Secret in a Deployment:

spec:
  containers:
    - name: nextjs
      image: my-registry/nextjs-app:1.0.0
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets
      volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: config-volume
      configMap:
        name: app-config
        items:
          - key: nginx.conf
            path: default.conf

Keep in mind that Secrets in Kubernetes are not encrypted by default - they are merely base64-encoded. For production environments, consider using tools like Sealed Secrets, HashiCorp Vault, or AWS Secrets Manager.
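
Conversely, you can check what a Secret actually holds straight from the cluster (the key name matches the manifest above):

```shell
kubectl get secret app-secrets -o jsonpath='{.data.database-url}' | base64 --decode
```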

Namespaces - Resource Isolation

Namespaces provide logical isolation of resources within a cluster. They are particularly useful in multi-team environments:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    environment: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    environment: staging

You can limit resources available in a namespace using ResourceQuota:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "20"

Persistent Volumes - Durable Storage

Containers are ephemeral by nature. PersistentVolumes (PV) and PersistentVolumeClaims (PVC) provide storage that outlives individual Pods. For databases and other stateful workloads a StatefulSet is usually the better choice; a single-replica Deployment is shown here for simplicity:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secrets
                  key: postgres-password
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-data

Horizontal Pod Autoscaler (HPA)

The HPA automatically scales the number of Pods based on observed load. It requires the Metrics Server to be installed:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nextjs-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nextjs-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Pods
          value: 2
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 1
          periodSeconds: 120

The behavior section allows fine-grained control over scaling speed in both directions, preventing overly aggressive changes.

Helm Charts - The Kubernetes Package Manager

Helm is a package manager for Kubernetes that simplifies deploying complex applications. A Chart is a collection of files describing related Kubernetes resources:

# Chart.yaml
apiVersion: v2
name: nextjs-app
description: A Helm chart for Next.js application
type: application
version: 1.0.0
appVersion: "1.0.0"
dependencies:
  - name: postgresql
    version: "13.x.x"
    repository: "https://charts.bitnami.com/bitnami"
    condition: postgresql.enabled

# values.yaml
replicaCount: 3

image:
  repository: my-registry/nextjs-app
  tag: "1.0.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: myapp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: myapp-tls
      hosts:
        - myapp.example.com

resources:
  limits:
    cpu: 500m
    memory: 256Mi
  requests:
    cpu: 250m
    memory: 128Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

postgresql:
  enabled: true
  auth:
    database: myapp
    username: myapp
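
These values are consumed by templates in the chart's templates/ directory. A heavily abridged deployment template might look like this (naming via .Release.Name is a simplification - real charts typically use helper templates from _helpers.tpl):

```yaml
# templates/deployment.yaml (abridged)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nextjs
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-nextjs
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-nextjs
    spec:
      containers:
        - name: nextjs
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 3000
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```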

Essential Helm commands:

# Install a chart
helm install my-release ./nextjs-chart -f values.yaml

# Upgrade
helm upgrade my-release ./nextjs-chart -f values.yaml

# Rollback
helm rollback my-release 1

# List releases
helm list

# Uninstall
helm uninstall my-release

Local Kubernetes Environments

For learning and development, you do not need a cloud cluster. Several tools allow you to run Kubernetes locally:

Minikube

The most popular tool for local Kubernetes:

# Install and start
minikube start --driver=docker --cpus=4 --memory=8192

# Dashboard
minikube dashboard

# Expose a service
minikube service nextjs-service

# Stop
minikube stop

Kind (Kubernetes in Docker)

A lightweight alternative, ideal for CI/CD pipelines:

# Create a cluster
kind create cluster --name my-cluster

# With multi-node configuration
kind create cluster --config kind-config.yaml

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

Docker Desktop

Docker Desktop has built-in Kubernetes support - simply enable it in the settings. This is the simplest option for developers who already use Docker Desktop daily.

Deploying a Next.js Application in Practice

Let us combine all the concepts and deploy a Next.js application to a Kubernetes cluster. Directory structure:

k8s/
  namespace.yaml
  configmap.yaml
  secret.yaml
  deployment.yaml
  service.yaml
  ingress.yaml
  hpa.yaml

Complete deployment manifest:

# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nextjs-production
---
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nextjs-config
  namespace: nextjs-production
data:
  NEXT_PUBLIC_API_URL: "https://api.example.com"
  NODE_ENV: "production"
---
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextjs-app
  namespace: nextjs-production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nextjs
  template:
    metadata:
      labels:
        app: nextjs
    spec:
      containers:
        - name: nextjs
          image: my-registry/nextjs-app:1.0.0
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: nextjs-config
          resources:
            requests:
              cpu: "250m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"
          livenessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
---
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nextjs-service
  namespace: nextjs-production
spec:
  type: ClusterIP
  selector:
    app: nextjs
  ports:
    - port: 80
      targetPort: 3000
---
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextjs-ingress
  namespace: nextjs-production
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextjs-service
                port:
                  number: 80

Deploy everything with a single command (secret.yaml and hpa.yaml follow the patterns shown in the earlier sections):

kubectl apply -f k8s/
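
Then verify that the rollout completed and everything is in place (namespace and resource names as defined above):

```shell
kubectl -n nextjs-production rollout status deployment/nextjs-app
kubectl -n nextjs-production get pods,svc,ingress
```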

Monitoring with Prometheus and Grafana

Monitoring is critical in production environments. Prometheus and Grafana are the standard in the Kubernetes ecosystem:

# prometheus-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
      - job_name: "kubernetes-pods"
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels:
              - __meta_kubernetes_pod_annotation_prometheus_io_scrape
            action: keep
            regex: true
          - source_labels:
              - __address__
              - __meta_kubernetes_pod_annotation_prometheus_io_port
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
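
For this discovery config to actually scrape your application Pods, the Pod template must carry the matching annotations. A fragment for the Deployment's Pod template metadata (this assumes your app exposes Prometheus metrics on its HTTP port):

```yaml
# Deployment fragment - Pod template metadata with scrape annotations
template:
  metadata:
    labels:
      app: nextjs
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "3000"
```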

Installing kube-prometheus-stack with Helm:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.adminPassword=your-password

This installs Prometheus, Grafana, Alertmanager, and a set of pre-built dashboards for cluster monitoring.

Managed Kubernetes in the Cloud

For production environments, managed Kubernetes services from cloud providers are the recommended approach:

Amazon EKS (Elastic Kubernetes Service)

  • Deep integration with the AWS ecosystem (ALB, RDS, S3, IAM)
  • Fargate support (serverless containers)
  • Largest market share

Azure AKS (Azure Kubernetes Service)

  • Integration with Azure DevOps and GitHub Actions
  • Windows container support
  • Free control plane

Google GKE (Google Kubernetes Engine)

  • Developed by the company where Kubernetes originated - a notably mature implementation
  • GKE Autopilot - a fully managed mode
  • Typically among the first to offer new Kubernetes releases

The choice of platform depends on your existing cloud ecosystem, business requirements, and team preferences.

When Kubernetes Is Overkill

Kubernetes is a powerful tool, but not every project needs it. K8s is likely overkill when:

  • You have a simple application with one or two services
  • Low traffic - a few hundred users per day
  • Small team (1-3 people) with no K8s experience
  • Budget is tight - a K8s cluster incurs additional costs
  • No need for autoscaling - traffic is predictable and steady

In such cases, better alternatives include:

  • Docker Compose + VPS - for straightforward applications
  • PaaS platforms - Vercel, Railway, Fly.io
  • Serverless - AWS Lambda, Azure Functions, Google Cloud Run
  • Managed containers - AWS ECS, Azure Container Apps

Kubernetes makes sense when you are managing multiple microservices, need advanced autoscaling, have a DevOps team, and require high availability.

Summary

Kubernetes is an essential tool in the modern developer's toolkit. Key takeaways from this guide:

  • Pods are the basic unit, but you manage them through Deployments
  • Services and Ingress handle internal and external communication
  • ConfigMaps and Secrets separate configuration from code
  • HPA automates scaling based on metrics
  • Helm simplifies managing complex deployments
  • Start with a local cluster (minikube or kind) before migrating to the cloud

Investing in learning Kubernetes pays off many times over - the automation, reliability, and scalability it provides are difficult to achieve by other means.

Need Help with Kubernetes?

At MDS Software Solutions Group, we help companies implement and manage Kubernetes infrastructure. We offer:

  • K8s cluster architecture design
  • Application migration to Kubernetes
  • CI/CD pipeline setup with automated deployments
  • Monitoring and alerting (Prometheus, Grafana)
  • Cloud infrastructure cost optimization
  • Kubernetes training for development teams

Contact us to discuss your project and infrastructure needs!

Author
MDS Software Solutions Group

Team of programming experts specializing in modern web technologies.
