Kubernetes Deployment Strategies Explained

Kubernetes deployment strategies determine how your application transitions from one version to the next without disrupting users. Choosing the wrong strategy can mean minutes of downtime, a bad rollout reaching all users at once, or wasted infrastructure costs. In this article we break down Rolling Update, Blue-Green, and Canary deployments — when to use each, their trade-offs, and working YAML examples you can apply to your clusters today.
Every deployment carries risk. A new image might introduce a regression, a database migration could break backward compatibility, or a performance issue might only show under production load. The deployment strategy is your primary tool for controlling the blast radius: how many users are affected, for how long, and how quickly you can recover.
Kubernetes replaces Pods incrementally by default — bringing up new replicas before terminating old ones. With maxSurge and maxUnavailable settings you control how aggressive the rollout is. Rolling updates are zero-downtime for stateless services and require no extra infrastructure, making them the right choice for the majority of deployments.
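As a minimal sketch (the name, replica count, and health endpoint are illustrative), a Deployment that surges one extra Pod at a time and never drops below the desired replica count might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp            # hypothetical name for illustration
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the desired count
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:v1.0.0
          readinessProbe:      # the rollout only advances once this passes
            httpGet:
              path: /healthz   # assumed health endpoint
              port: 8080
```

With maxUnavailable: 0 the rollout is conservative: old Pods are only terminated after their replacements report Ready, at the cost of briefly running one Pod above the desired count.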
Rolling updates mix old and new Pods during the transition window, which breaks scenarios requiring strict version isolation — for example a database schema migration that is not backward-compatible with the old application version. In those cases you need Blue-Green or a feature-flag controlled Canary.
Set 'minReadySeconds: 30' on your Deployment so Kubernetes waits 30 seconds after a new Pod passes its readiness probe before counting it as available. This gives your application time to warm up and prevents traffic from routing to a Pod that is technically ready but not yet warmed.
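In the manifest, the setting sits at the top level of the Deployment spec, next to the rollout strategy; a minimal fragment:

```yaml
spec:
  minReadySeconds: 30   # a new Pod must stay Ready for 30s before it counts as available
  strategy:
    type: RollingUpdate
```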
Blue-Green maintains two identical environments — Blue (live) and Green (staging). You deploy the new version to Green, run smoke tests, then flip a single Service selector to route 100% of traffic from Blue to Green. The old Blue environment remains intact for instant rollback: just flip the selector back.
The key is the Service's selector field. Both Blue and Green Deployments share the same 'app: myapp' label but differ by 'version: blue' and 'version: green'. Changing the Service selector is a single kubectl patch command and takes effect in seconds with no Pod restarts.
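Using the Service and labels from the example below, the cutover (and rollback) are each one command; a sketch:

```shell
# Cut traffic over from Blue to Green by patching the Service selector
kubectl patch service myapp-service \
  -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'

# Instant rollback: point the selector back at Blue
kubectl patch service myapp-service \
  -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'
```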
# Blue deployment (current live)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
  labels:
    app: myapp
    version: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
        - name: myapp
          image: myapp:v1.0.0
          ports:
            - containerPort: 8080
---
# Service - switch traffic by changing selector
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
    version: blue # change to "green" to cut over
  ports:
    - port: 80
      targetPort: 8080

Blue-Green doubles your compute footprint during the transition. For large deployments this is significant. A mitigation is to scale the old Blue environment down to zero once the cutover to Green is confirmed stable, keeping the Deployment object around for quick rollback. Tools like Argo Rollouts automate this lifecycle and even integrate with load balancers for weighted traffic splitting without duplicating Services.
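Reclaiming the idle environment's capacity is a single scale command; a sketch using the Deployment name from the example:

```shell
# Once the cutover to Green is confirmed stable, reclaim Blue's capacity
kubectl scale deployment myapp-blue --replicas=0

# Rollback later is a scale-up followed by flipping the Service selector back
kubectl scale deployment myapp-blue --replicas=3
```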
A Canary release sends a small percentage of real production traffic to the new version while the majority continues hitting the stable version. You observe error rates, latency, and custom business metrics before gradually increasing the canary weight. If anything looks wrong you shift 100% back to stable immediately.
The simplest native canary uses two Deployments (stable: 9 replicas, canary: 1 replica) both selected by the same Service. With 10 total Pods, roughly 10% of requests land on canary. For finer control without Ingress changes you need a service mesh (Istio, Linkerd) or a traffic-splitting Ingress controller.
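A sketch of that native setup (names and the v1.1.0 canary image are illustrative): both Deployments carry the shared 'app: myapp' label plus a distinguishing 'version' label, and the Service selects only on the shared label, so it load-balances across all ten Pods.

```yaml
# Stable: 9 replicas of the current version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp        # shared label - the Service selects on this alone
        version: stable
    spec:
      containers:
        - name: myapp
          image: myapp:v1.0.0
---
# Canary: 1 replica of the new version -> roughly 10% of requests
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
        - name: myapp
          image: myapp:v1.1.0   # hypothetical canary version
---
# Service selects only on the shared label, spreading traffic over all 10 Pods
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

Note the split is approximate and replica-granular: with 10 Pods the smallest canary weight is 10%, which is why finer-grained splitting needs a mesh or Ingress-level traffic control.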
If your application uses server-side sessions or sticky cookies, some users may be pinned to the canary version for their entire session. Ensure your metrics segment by version label, not just overall error rate, so you can distinguish canary problems from baseline noise. Use Prometheus label selectors like 'version="canary"' in your queries.
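As an illustration (the metric name http_requests_total and its labels are assumptions about your instrumentation), a version-scoped error-rate query might look like:

```promql
# 5xx error rate of the canary Pods alone, over a 5-minute window
sum(rate(http_requests_total{version="canary", status=~"5.."}[5m]))
/
sum(rate(http_requests_total{version="canary"}[5m]))
```

Running the same query with 'version="stable"' gives you the baseline to compare against, rather than judging the canary against an aggregate that it barely moves.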
Argo Rollouts extends Kubernetes with a Rollout custom resource that implements Blue-Green and Canary natively, integrates with analysis templates (automated metric queries to promote or abort), and provides a UI dashboard. It is the production-grade choice when you need automated Canary analysis without maintaining custom scripts.
An AnalysisTemplate defines a PromQL query and success criteria. During a Canary rollout, Argo Rollouts runs the analysis on each step interval. If the query returns a value outside the threshold — for instance error rate above 1% — the rollout is automatically aborted and the stable version is restored without any human intervention.
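A sketch of such a template (the Prometheus address, metric name, and labels are assumptions; the query mirrors the error-rate example above):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
spec:
  metrics:
    - name: error-rate
      interval: 1m
      failureLimit: 1                      # abort after a single failed measurement
      successCondition: result[0] < 0.01   # pass while error rate stays under 1%
      provider:
        prometheus:
          address: http://prometheus.monitoring:9090   # assumed Prometheus endpoint
          query: |
            sum(rate(http_requests_total{version="canary", status=~"5.."}[5m]))
            /
            sum(rate(http_requests_total{version="canary"}[5m]))
```

The template is referenced from the Rollout's canary steps, so the same check can gate every weight increase.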
After your CI pipeline builds and pushes a new image, a CD step can call 'kubectl argo rollouts set image myapp-rollout myapp=myapp:sha-abc123' to start the progressive delivery. The pipeline can then wait and watch the rollout status with 'kubectl argo rollouts status myapp-rollout --watch' and fail the workflow if the rollout is aborted.
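Put together, the CD step is a short script; the rollout name and image tag are illustrative:

```shell
# Start the progressive rollout with the freshly built image
kubectl argo rollouts set image myapp-rollout myapp=myapp:sha-abc123

# Block until the rollout fully promotes or aborts; a non-zero
# exit code fails the pipeline on an aborted rollout
kubectl argo rollouts status myapp-rollout --watch --timeout 10m
```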
Use Argo Rollouts' traffic mirror step to shadow a small percentage of live traffic to the canary without affecting user responses. This gives you real-world load testing with zero user impact before you start shifting actual traffic weight.
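A hedged sketch of the canary steps (this assumes Istio as the traffic router and a recent Argo Rollouts version with the 'setMirrorRoute' step; the VirtualService and route names are hypothetical):

```yaml
strategy:
  canary:
    trafficRouting:
      istio:
        virtualService:
          name: myapp-vsvc     # assumed Istio VirtualService
    steps:
      - setMirrorRoute:
          name: mirror-canary
          percentage: 5        # shadow 5% of live traffic to the canary
          match:
            - method:
                exact: GET     # only mirror idempotent requests
      - pause: { duration: 10m }   # observe canary metrics under mirrored load
      - setMirrorRoute:
          name: mirror-canary      # name with no match removes the mirror route
      - setWeight: 10              # then begin shifting real traffic
```

Mirroring only GET requests is a common precaution so shadowed writes cannot mutate production data twice.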
Rolling Update is the default and covers 80% of use cases — stateless services, frontend apps, and most backend APIs. Blue-Green is ideal for database migrations, compliance-sensitive environments requiring clean cut-overs, or whenever you need instant rollback. Canary is best when you want real user signal before full rollout, especially for high-traffic services where even a 1% error rate has measurable business impact.
Ask yourself: does the new version share a database schema with the old? If yes and the migration is not backward-compatible, you need Blue-Green. Do you have a robust metrics stack and want gradual user exposure? Go Canary with Argo Rollouts. Is the change low-risk and fully backward-compatible? Rolling Update with a reasonable maxUnavailable value is sufficient.