
Kubernetes for Small Teams: A Practical Getting Started Guide

Learn Kubernetes without the enterprise complexity. Practical guide for small teams covering k3s, deployments, services, and when K8s makes sense.

By Dmytro Klymentiev

Kubernetes has a reputation for complexity. Enterprise tutorials throw concepts like Ingress Controllers, Service Meshes, and Helm Charts at you before you deploy a single container.

This guide is different. I've helped several small teams adopt Kubernetes, and I found that most of the complexity is optional. This covers the practical parts you actually need, without the enterprise overhead.

Should Your Small Team Use Kubernetes?

Before diving into Kubernetes for small teams, let's be honest about when it makes sense.

When Kubernetes is Worth It

Multiple services to manage: Running 5+ containers that need to communicate and scale.

High availability requirements: Your app needs zero downtime during deployments and server failures.

Scaling needs: Traffic spikes require automatic scaling up and down.

Multiple environments: You need dev, staging, and production with consistent deployments.

Team growth: DevOps knowledge should be in code, not in someone's head.

When Kubernetes is Overkill

Single application: One web app with a database? Docker Compose is simpler.

Low traffic: Under 10,000 daily users? VPS with PM2 or systemd works fine.

Solo developer: You'll spend more time on infrastructure than features.

Tight budget: Kubernetes has resource overhead. Minimum viable cluster needs 2-3 nodes.

No container experience: Learn Docker first. Kubernetes for small teams assumes Docker knowledge.

Related: Docker for Developers - Master containers first

Kubernetes for Small Teams: Choosing Your Approach

Option 1: Managed Kubernetes (Recommended)

Let cloud providers handle the control plane:

  • DigitalOcean Kubernetes (DOKS): $12/month per node, simplest UI
  • Linode Kubernetes Engine (LKE): $10/month per node, good value
  • Vultr Kubernetes: $10/month per node, global regions
  • Google GKE Autopilot: Pay per resource, scales automatically

For small teams, managed Kubernetes is the right choice. No control plane maintenance, automatic updates, integrated load balancers.

Option 2: k3s (Lightweight Self-Hosted)

k3s is Kubernetes stripped to essentials - 40MB binary, runs anywhere:

  • Single-node laptops
  • Raspberry Pi clusters
  • Edge deployments
  • Budget VPS servers

Perfect for learning and resource-constrained environments.

Option 3: Full Self-Hosted

Running kubeadm or kubespray yourself. Only consider this if:

  • You have dedicated DevOps staff
  • Compliance requires on-premises
  • You need very specific configurations

Not recommended for small teams.

Getting Started: k3s Setup

Let's set up Kubernetes for small teams using k3s. We'll use a single node to start, then add more.

Single Node Installation

SSH to your server (2GB+ RAM recommended):

# Install k3s
curl -sfL https://get.k3s.io | sh -

# Wait for startup
sudo k3s kubectl get nodes

That's it. You have a working Kubernetes cluster.

Configure kubectl

# Copy config for regular user
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER ~/.kube/config

# Verify
kubectl get nodes
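
To run kubectl from your own machine instead of on the server, the usual approach is to copy the same kubeconfig locally and point it at the server's address. The sketch below assumes your SSH user can read k3s.yaml (for example, k3s installed with --write-kubeconfig-mode 644) and uses SERVER_IP as a placeholder:

# On your local machine
scp user@SERVER_IP:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Replace the localhost address with the server's IP (GNU sed; on macOS use sed -i '')
sed -i 's/127.0.0.1/SERVER_IP/' ~/.kube/config

kubectl get nodes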

Add More Nodes (Optional)

On the master node, get the token:

sudo cat /var/lib/rancher/k3s/server/node-token

On additional nodes:

curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_IP:6443 K3S_TOKEN=TOKEN sh -

Kubernetes Core Concepts for Small Teams

Kubernetes introduces several abstractions that might seem confusing at first. But each exists for a good reason. Let's understand the core concepts you'll use daily.

Pods: The Basic Unit

A Pod is the smallest thing Kubernetes can deploy. It wraps one or more containers that share networking and storage.

In most cases, you'll run one container per pod. The main exceptions are sidecars - helper containers that enhance your main app (like log collectors or proxies).

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: nginx:alpine
      ports:
        - containerPort: 80

Why you rarely create Pods directly: Pods are ephemeral. If a pod crashes or the node dies, that's it - the pod is gone. Kubernetes won't recreate it. For production workloads, you need something that manages pods for you. That's where Deployments come in.

Deployments: Managing Multiple Replicas

A Deployment tells Kubernetes "I want X copies of this pod running at all times." If a pod dies, Kubernetes automatically creates a replacement. If you update the image, Kubernetes rolls out the change gradually.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: app
          image: my-app:v1.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"

Let's break down the key parts:

replicas: 3 - Kubernetes will maintain exactly 3 pods. If one crashes, a new one starts. If you scale to 5, Kubernetes adds 2 more.

selector and labels - These connect the Deployment to its pods. The selector says "manage pods with label app=web-app." The template creates pods with that label. This matching is how Kubernetes knows which pods belong to which Deployment.

resources.requests - The minimum resources the container needs. Kubernetes uses this for scheduling decisions - it won't place a pod on a node that can't meet its requests.

resources.limits - The maximum resources allowed. If your container tries to use more memory than its limit, Kubernetes kills it. This prevents one misbehaving app from taking down the whole node.

Services: Stable Network Access

Pods get random IP addresses that change every time they restart. You can't hardcode these IPs. Services solve this by providing a stable endpoint that routes traffic to healthy pods.

apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP

Here's what each part does:

selector: app: web-app - The Service finds all pods with this label and routes traffic to them. As pods come and go, the Service automatically updates its routing.

port: 80 - The port the Service listens on.

targetPort: 3000 - The port your container actually listens on. This mapping lets you use standard ports (80) externally while your app uses whatever port it wants internally.

Service types for small teams:

  • ClusterIP (default): Only accessible within the cluster. Other pods can reach it by name (web-app-service:80), but the outside world can't.
  • NodePort: Exposes the service on a high port (30000-32767) on every node. Useful for testing but not ideal for production.
  • LoadBalancer: Creates an actual load balancer in your cloud provider. This is how you expose services to the internet in managed Kubernetes (see the sketch after this list).
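
For that LoadBalancer case, the change is small. Here's a minimal sketch reusing the app: web-app selector and ports from the earlier examples (the name web-app-public is just illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-app-public
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 3000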

ConfigMaps and Secrets: Separating Configuration

Hardcoding configuration in your container image is bad practice. Every environment (dev, staging, prod) needs different settings. ConfigMaps and Secrets let you inject configuration at runtime.

ConfigMaps store non-sensitive configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres-service"
  LOG_LEVEL: "info"

Secrets store sensitive data like passwords and API keys. They're base64-encoded (not encrypted by default, so use additional security measures in production):

apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DATABASE_PASSWORD: "secret123"
  API_KEY: "xyz789"

To use these in your Deployment, reference them in the container spec. The envFrom directive loads all keys as environment variables:

spec:
  containers:
    - name: app
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets

Your application now sees DATABASE_HOST, LOG_LEVEL, DATABASE_PASSWORD, and API_KEY as regular environment variables. When you update the ConfigMap or Secret, you can restart pods to pick up changes - no image rebuild needed.
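
Pods don't reload environment variables on the fly, so after updating a ConfigMap or Secret, the simplest way to pick up the new values is a rolling restart:

# Gradually replaces pods so they re-read config as env vars
kubectl rollout restart deployment/web-app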

Ingress: Routing External Traffic

Services handle internal routing, but how do you expose your app to the internet with a proper domain name and TLS? That's what Ingress does.

An Ingress resource defines routing rules - which hostnames and paths should go to which services. But Ingress alone does nothing. You need an Ingress Controller (like Traefik or Nginx) that reads these rules and configures actual routing.

Good news: k3s includes Traefik as an Ingress Controller by default. You just create Ingress resources, and Traefik handles the rest.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app-service
                port:
                  number: 80

This rule says: "When traffic comes in for app.example.com, route it to the web-app-service on port 80." You can add multiple rules for different subdomains or paths, all routing to different services.
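
For example, appending a second entry under spec.rules could send an API subdomain to a different Service (api.example.com and api-service are hypothetical names here):

    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80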

Kubernetes for Small Teams: Practical Deployment

Now let's put everything together and deploy a real application - a Node.js app with PostgreSQL. This example shows how all the concepts work together in practice.

Project Structure

Organizing your Kubernetes manifests matters. A clean structure makes your infrastructure maintainable and your team productive.

k8s/
├── namespace.yaml
├── postgres/
│   ├── secret.yaml
│   ├── pvc.yaml
│   ├── deployment.yaml
│   └── service.yaml
├── app/
│   ├── configmap.yaml
│   ├── secret.yaml
│   ├── deployment.yaml
│   └── service.yaml
└── ingress.yaml

Why this structure? Group related resources together. All PostgreSQL resources live in one folder, all app resources in another. When something breaks at 2 AM, you'll know exactly where to look. Keep this folder in your Git repository - infrastructure as code means you can recreate everything from scratch.

Namespace

Namespaces are like folders for your Kubernetes resources. They provide isolation between different applications or environments.

# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app

Why use namespaces? Without them, all resources live in the default namespace and become a mess. Namespaces let you:

  • Organize resources logically (one namespace per app or environment)
  • Apply resource quotas per namespace (see the ResourceQuota sketch after this list)
  • Set up RBAC so team members only access their namespaces
  • Delete an entire app with kubectl delete namespace my-app
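
As an example of the quota point above, a ResourceQuota caps what all workloads in a namespace can request in total (the numbers below are placeholders to adjust for your cluster):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-app-quota
  namespace: my-app
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi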

PostgreSQL

Databases need special handling in Kubernetes because data must survive pod restarts and node failures. We'll use a PersistentVolumeClaim to request storage that lives independently of the pod.

Persistent Volume Claim:

A PVC is like asking Kubernetes "I need 10GB of disk space." Kubernetes finds or creates the storage and attaches it to your pod.

# k8s/postgres/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: my-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

accessModes: ReadWriteOnce means only one pod can mount this volume at a time - perfect for databases. If you need multiple pods reading the same data (like shared files), you'd use ReadWriteMany, but that requires specific storage backends.

Deployment:

Note that we use replicas: 1 for the database. Running multiple PostgreSQL replicas requires proper clustering setup (like Patroni) - don't just increase this number and expect it to work.

# k8s/postgres/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: myapp
            - name: POSTGRES_USER
              value: appuser
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc

Key parts to understand:

secretKeyRef - Instead of hardcoding the password, we reference a Secret. This keeps credentials out of your Git repository. The Secret is created separately - a minimal sketch follows below.
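
A minimal sketch of that Secret, matching the name postgres-secret and key password referenced above (use your own password and keep the real file out of Git):

# k8s/postgres/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  namespace: my-app
type: Opaque
stringData:
  password: "your-secure-password"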

volumeMounts - This attaches our PVC to the container at /var/lib/postgresql/data, which is where PostgreSQL stores its data files. When the pod restarts, it reattaches to the same storage with all data intact.

Service:

The Service gives PostgreSQL a stable DNS name within the cluster. Your application will connect to postgres:5432 instead of tracking pod IPs.

# k8s/postgres/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: my-app
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432

Notice there's no type specified - it defaults to ClusterIP, which is correct for databases. You don't want your database exposed to the internet.

Application

Now for the Node.js application that connects to PostgreSQL.

ConfigMap:

ConfigMaps separate configuration from code. When you need to change an environment variable, you update the ConfigMap - no need to rebuild your Docker image.

# k8s/app/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: my-app
data:
  NODE_ENV: "production"
  DATABASE_HOST: "postgres"
  DATABASE_NAME: "myapp"
  DATABASE_USER: "appuser"

Notice DATABASE_HOST: "postgres" - this is the name of the PostgreSQL Service we created. Kubernetes DNS automatically resolves this to the correct pod IP.
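
If you want to confirm that resolution works, one quick check is a throwaway pod (assuming the busybox image is available in your environment):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -n my-app -- nslookup postgres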

Secret:

Secrets are like ConfigMaps but for sensitive data. The main differences are that you can apply stricter access controls to them and kubectl describe hides their values (anyone with read access can still decode the base64, so treat access control seriously).

# k8s/app/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
  namespace: my-app
type: Opaque
stringData:
  DATABASE_PASSWORD: "your-secure-password"

Important: Don't commit real secrets to Git. Use stringData for plain text (Kubernetes base64-encodes it), or use external secret management (HashiCorp Vault, AWS Secrets Manager, sealed-secrets).
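
A common pattern is to keep only a template in Git and create the real Secret out of band, for example (with a placeholder password):

kubectl create secret generic app-secret \
  --from-literal=DATABASE_PASSWORD='your-secure-password' \
  -n my-app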

Deployment:

This is where it all comes together. The app Deployment references our ConfigMap and Secret, includes health checks, and sets resource limits.

# k8s/app/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: app
          image: your-registry/app:latest
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: app-config
            - secretRef:
                name: app-secret
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 20

Understanding the probes:

Your application needs a /health endpoint that returns HTTP 200 when healthy. Kubernetes uses this endpoint in two ways:

readinessProbe - "Is this pod ready to receive traffic?" Kubernetes won't send traffic to a pod until it passes this check. Useful during startup while your app connects to the database, loads caches, etc. If it fails later, Kubernetes removes the pod from the Service but doesn't restart it.

livenessProbe - "Is this pod still alive?" If this fails, Kubernetes restarts the pod. Use this to recover from deadlocks or hung processes.

initialDelaySeconds - How long to wait before starting checks. Give your app time to start up.

periodSeconds - How often to run the check.

Service:

# k8s/app/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: my-app
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 3000

Ingress with TLS

# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: my-app
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80

Deploy Everything

# Create namespace first
kubectl apply -f k8s/namespace.yaml

# Create secrets
kubectl apply -f k8s/postgres/secret.yaml
kubectl apply -f k8s/app/secret.yaml

# Deploy postgres
kubectl apply -f k8s/postgres/

# Deploy app
kubectl apply -f k8s/app/

# Create ingress
kubectl apply -f k8s/ingress.yaml

# Verify
kubectl get all -n my-app

Kubernetes for Small Teams: Essential Commands

These are the commands you'll use daily. Bookmark this section.

Viewing Resources

Start here when investigating any issue.

# List pods - shows pod names, status, restarts, age
kubectl get pods -n my-app

# List all resources in namespace
kubectl get all -n my-app

# Describe pod - first command when a pod won't start
# Shows events, errors, image pull issues, resource problems
kubectl describe pod POD_NAME -n my-app

# View logs - your primary debugging tool
kubectl logs POD_NAME -n my-app

# Follow logs in real-time (like tail -f)
kubectl logs -f POD_NAME -n my-app

# Logs from all pods with a label - useful when you don't know which replica has the issue
kubectl logs -l app=web-app -n my-app

Debugging

When logs aren't enough, get inside the container.

# Shell into pod - run commands inside the container
# Use for checking files, testing network, debugging environment
kubectl exec -it POD_NAME -n my-app -- sh

# Run one-off command without interactive shell
kubectl exec POD_NAME -n my-app -- ls /app

# Port forward - access your app locally without exposing it
# Opens localhost:3000 → your pod. Great for debugging internal services
kubectl port-forward svc/web-app 3000:80 -n my-app

# View cluster events - shows what Kubernetes is doing/failing
# Sorted by time, most recent last
kubectl get events -n my-app --sort-by=.metadata.creationTimestamp

Updating Deployments

Rolling updates and rollbacks - the production essentials.

# Update image - triggers rolling update
# Old pods stay running until new ones are healthy
kubectl set image deployment/web-app app=your-registry/app:v2 -n my-app

# Watch rollout progress - see pods being replaced
kubectl rollout status deployment/web-app -n my-app

# Rollback to previous version - instant recovery from bad deploys
kubectl rollout undo deployment/web-app -n my-app

# Scale up/down - handle traffic spikes or reduce costs
kubectl scale deployment/web-app --replicas=5 -n my-app

Resource Management

Monitor resource usage and clean up.

# View resource usage per pod - identify memory hogs
kubectl top pods -n my-app

# View resource usage per node - check cluster capacity
kubectl top nodes

# Delete a pod - Kubernetes will recreate it (if managed by Deployment)
# Useful to force restart a stuck pod
kubectl delete pod POD_NAME -n my-app

# Delete resources from YAML file
kubectl delete -f k8s/app/deployment.yaml

SSL Certificates with cert-manager

For automatic Let's Encrypt certificates, install cert-manager:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml

Create ClusterIssuer:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your@email.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: traefik

Certificates are now automatic when Ingress references this issuer.
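
To confirm a certificate was actually issued for the Ingress above, inspect the Certificate resource cert-manager creates (named after the TLS secret, app-tls in our example):

kubectl get certificate -n my-app
kubectl describe certificate app-tls -n my-app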

CI/CD Integration for Small Teams

Automating deployments removes human error and makes releases boring (in a good way). Here's a practical GitHub Actions workflow.

GitHub Actions Deployment

This workflow builds your Docker image, pushes it to a registry, and updates your Kubernetes deployment - all triggered by pushing to main.

name: Deploy to Kubernetes

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Build and push Docker image
        # Assumes you've already authenticated to your registry
        # (e.g. with docker/login-action in a previous step)
        run: |
          docker build -t your-registry/app:${{ github.sha }} .
          docker push your-registry/app:${{ github.sha }}

      - name: Install kubectl
        uses: azure/setup-kubectl@v3

      - name: Configure kubectl
        run: |
          echo "${{ secrets.KUBECONFIG }}" | base64 -d > kubeconfig
          # export doesn't persist across steps - write to GITHUB_ENV instead
          echo "KUBECONFIG=$(pwd)/kubeconfig" >> "$GITHUB_ENV"

      - name: Update deployment
        run: |
          kubectl set image deployment/web-app \
            app=your-registry/app:${{ github.sha }} \
            -n my-app
          kubectl rollout status deployment/web-app -n my-app

Key points:

${{ github.sha }} - Uses the Git commit hash as the image tag. Every build is unique and traceable. You can always match a running container to its source code.

KUBECONFIG secret - Store your cluster credentials in GitHub Secrets. Never commit kubeconfig files. For managed Kubernetes, you can usually download this from your provider's dashboard.

rollout status - The pipeline waits for the deployment to complete. If pods fail health checks, the pipeline fails and you know immediately.

Image Registry

Your images need to live somewhere your cluster can pull from.

Small teams options:

  • GitHub Container Registry: Free for public repos, included in GitHub plans. Integrates well with GitHub Actions.
  • Docker Hub: Free tier allows one private repo. Good for getting started.
  • DigitalOcean Container Registry: $5/month, 500MB. Integrates well with DOKS.
  • Self-hosted: Harbor or GitLab Registry. More work to maintain but full control.

Monitoring and Logging

Basic Monitoring with Metrics Server

Already included in k3s. View with:

kubectl top nodes
kubectl top pods -n my-app

Simple Logging Stack

For small teams, start simple:

# Assumes an Elasticsearch Service named "elasticsearch" already exists
# and is reachable from the kube-system namespace.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch"
      # The official fluentd-kubernetes-daemonset manifests also add a
      # ServiceAccount with RBAC and hostPath mounts for /var/log so Fluentd
      # can read container logs - copy those pieces for a working setup.

Or use managed solutions:

  • Datadog (expensive but comprehensive)
  • Grafana Cloud (generous free tier)
  • Papertrail (simple log aggregation)

Kubernetes for Small Teams: Common Pitfalls

1. Over-Engineering

Don't add complexity you don't need:

  • Skip service mesh (Istio, Linkerd) initially
  • Skip Helm until you have many similar deployments
  • Skip multi-cluster until you truly need it

2. Ignoring Resource Limits

Always set resource requests and limits:

resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "500m"

Without limits, a single runaway pod can exhaust a node's memory and take everything else on that node down with it.
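
As a safety net for containers that forget to declare resources, a LimitRange can apply namespace-wide defaults (a sketch with placeholder values):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: my-app
spec:
  limits:
    - type: Container
      default:
        memory: "256Mi"
        cpu: "500m"
      defaultRequest:
        memory: "128Mi"
        cpu: "100m"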

3. Not Using Health Checks

Kubernetes needs to know when your app is ready:

readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 15
  periodSeconds: 20

4. Storing State in Pods

Pods are ephemeral. Use:

  • PersistentVolumeClaims for databases
  • External services for sessions (Redis)
  • S3/object storage for files

5. Not Backing Up

Regular backups of:

  • YAML manifests (in Git)
  • Secrets (encrypted backup)
  • PersistentVolumes (database dumps - see the CronJob sketch below)
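
For the database dumps, one lightweight approach is a CronJob that runs pg_dump against the postgres Service on a schedule. This is a sketch: backup-pvc is a hypothetical PersistentVolumeClaim you'd create for storing dumps, and the credentials reuse postgres-secret from earlier:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
  namespace: my-app
spec:
  schedule: "0 3 * * *"   # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: postgres:15-alpine
              command: ["/bin/sh", "-c"]
              args:
                - pg_dump -h postgres -U appuser myapp > /backup/myapp-$(date +%F).sql
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secret
                      key: password
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: backup-pvc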

Kubernetes for Small Teams Checklist

Initial Setup

  • Choose managed vs self-hosted
  • Set up kubectl access
  • Configure ingress controller
  • Set up cert-manager for SSL

Per-Application

  • Namespace created
  • Deployment with resource limits
  • Service configured
  • ConfigMaps for configuration
  • Secrets for sensitive data
  • Health checks implemented
  • Ingress with TLS

Operations

  • CI/CD pipeline configured
  • Monitoring in place
  • Log aggregation set up
  • Backup strategy defined
  • Rollback procedure documented

Summary

Kubernetes for small teams is achievable with the right approach:

  1. Start with k3s or managed Kubernetes - Skip the complexity of self-hosted control planes
  2. Master the basics - Deployments, Services, ConfigMaps, Secrets, Ingress
  3. Use namespaces - Organize applications cleanly
  4. Set resource limits - Prevent resource exhaustion
  5. Implement health checks - Let Kubernetes manage availability
  6. Automate deployments - CI/CD from day one

Don't add complexity until you need it. A small team can successfully run Kubernetes with just the concepts in this guide.

Need help setting up Kubernetes for your team or migrating from traditional infrastructure? Let's talk about your container orchestration needs.
