Kubernetes Security: Complete K8s Hardening Guide — From Cluster to Pod
The Kubernetes Security Challenge
Kubernetes is now the de facto orchestration platform — 92% of organizations use it in production as of 2026. But its complexity creates a massive attack surface.
The 2025 Red Hat State of Kubernetes Security Report found:
- 67% of organizations had a K8s security incident in the past 12 months
- 45% delayed deploying applications due to security concerns
- 78% of clusters run with default (overpermissive) RBAC settings
1. RBAC: The First Line of Defense
RBAC misconfigurations are the single biggest K8s vulnerability.
Audit Current RBAC Permissions
# Find all cluster-admin bindings (should be minimal)
kubectl get clusterrolebindings -o json | jq -r '
  .items[]
  | select(.roleRef.name == "cluster-admin")
  | .metadata.name as $binding
  | .subjects[]?
  | "\($binding): \(.name) (\(.kind))"
'

# Find all cluster roles with wildcard verbs
kubectl get clusterroles -o json | jq -r '
  .items[]
  | select(any(.rules[]?; .verbs[]? == "*"))
  | .metadata.name
'

# Check what a specific service account can do
kubectl auth can-i --list \
  --as=system:serviceaccount:default:my-app
Minimal RBAC for Application Workloads
# Role: read-only access to pods, configmaps, services, and
# deployments in its own namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: production
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: bind the Role to the app's service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reader-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: production
roleRef:
  kind: Role
  name: app-reader
  apiGroup: rbac.authorization.k8s.io
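After applying both manifests, the effective permissions can be spot-checked with kubectl auth can-i — one probe that should succeed and one that should fail (this is a sketch against the manifests above; it assumes they are applied to a live cluster):

```shell
# Should print "yes": the Role grants read access to pods
kubectl auth can-i get pods -n production \
  --as=system:serviceaccount:production:my-app

# Should print "no": nothing in the Role allows writes
kubectl auth can-i delete deployments -n production \
  --as=system:serviceaccount:production:my-app
```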
Disable Automounted Service Account Tokens
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: production
automountServiceAccountToken: false
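The same field also exists on the pod spec, and the pod-level setting takes precedence, so the few workloads that genuinely need API access can opt back in individually. A sketch (names match the earlier manifests; the image reference is illustrative):

```yaml
# Pod-level override: this one pod opts back in to the token.
# The pod setting wins over the ServiceAccount default above.
apiVersion: v1
kind: Pod
metadata:
  name: needs-api-access
  namespace: production
spec:
  serviceAccountName: my-app
  automountServiceAccountToken: true
  containers:
  - name: app
    image: my-registry/app:v1.2.3
```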
2. Pod Security Standards (PSS)
Pod Security Standards (PSS), enforced by the built-in Pod Security Admission controller, replaced PodSecurityPolicy, which was removed in Kubernetes 1.25:
Enforce Restricted Profile
# Label the namespace to enforce the restricted profile
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
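Before switching enforce on for a namespace full of legacy workloads, a server-side dry run shows which existing pods would be rejected, without changing anything:

```shell
# Preview violations of the restricted profile without applying
# the label (server evaluates the change, then discards it)
kubectl label --dry-run=server --overwrite ns production \
  pod-security.kubernetes.io/enforce=restricted
```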
Hardened Pod Spec
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      serviceAccountName: my-app
      automountServiceAccountToken: false
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: app
        image: my-registry/app:v1.2.3@sha256:abc123...
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi
        ports:
        - containerPort: 8080
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
      volumes:
      - name: tmp
        emptyDir:
          sizeLimit: 100Mi
What this prevents:
- Container escape via privileged mode
- Root filesystem tampering
- Privilege escalation via setuid binaries
- Cryptomining via unlimited resources
- Image tag mutation (uses digest)
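Assuming the deployment above is running, two of these controls can be spot-checked from outside the pod (this sketch assumes the image ships touch and id):

```shell
# readOnlyRootFilesystem: writing outside the /tmp emptyDir fails
kubectl exec -n production deploy/secure-app -- touch /should-fail

# runAsNonRoot + runAsUser: the process UID should be 10001, never 0
kubectl exec -n production deploy/secure-app -- id -u
```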
3. Network Policies: Microsegmentation
By default, all pods can communicate with all other pods. For an attacker who has compromised a single pod, that flat network means unrestricted lateral movement.
Default Deny All Traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
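A quick way to confirm the deny policy is active is a throwaway client pod. The web-api service name below is illustrative; any in-namespace service will do:

```shell
# With default-deny in place, this probe should time out
kubectl run netpol-test -n production --rm -it --restart=Never \
  --image=busybox:1.36 -- \
  wget -qO- --timeout=5 http://web-api:8080 || echo "blocked (expected)"
```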
Allow Only Specific Traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-traffic
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # automatic label, set by Kubernetes on every namespace
          kubernetes.io/metadata.name: production
      podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
  - to: # Allow DNS
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
4. Admission Controllers: OPA Gatekeeper
Admission controllers intercept API requests before objects are persisted to etcd:
Block Privileged Containers
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sblockprivileged
spec:
  crd:
    spec:
      names:
        kind: K8sBlockPrivileged
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sblockprivileged

      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        container.securityContext.privileged == true
        msg := sprintf("Privileged container not allowed: %v", [container.name])
      }

      violation[{"msg": msg}] {
        container := input.review.object.spec.initContainers[_]
        container.securityContext.privileged == true
        msg := sprintf("Privileged init container not allowed: %v", [container.name])
      }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sBlockPrivileged
metadata:
  name: block-privileged-containers
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    namespaces: ["production", "staging"]
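With the constraint in place, a privileged pod should be rejected at admission time. A smoke test using kubectl run with a spec override (pod and image names are arbitrary):

```shell
# Should be rejected with the "Privileged container not allowed"
# violation message from the constraint
kubectl run priv-test -n production --image=busybox:1.36 \
  --overrides='{
    "apiVersion": "v1",
    "spec": {
      "containers": [{
        "name": "priv-test",
        "image": "busybox:1.36",
        "securityContext": {"privileged": true}
      }]
    }
  }'
```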
Require Image Digests (No :latest)
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredigest
spec:
  crd:
    spec:
      names:
        kind: K8sRequireDigest
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8srequiredigest

      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        not contains(container.image, "@sha256:")
        msg := sprintf("Image must use digest, not tag: %v", [container.image])
      }
5. Runtime Monitoring with Falco
Falco detects anomalous activity at runtime:
# falco-rules.yaml
- rule: Terminal shell in container
  desc: Detect shell in a container (potential exec)
  condition: >
    spawned_process and
    container and
    proc.name in (bash, sh, zsh, dash) and
    not proc.pname in (cron, supervisord)
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name
    image=%container.image.repository
    shell=%proc.name parent=%proc.pname)
  priority: WARNING

- rule: Read sensitive file in container
  desc: Detect reads of sensitive files
  condition: >
    open_read and
    container and
    fd.name in (/etc/shadow, /etc/passwd, /proc/1/environ)
  output: >
    Sensitive file read in container
    (file=%fd.name container=%container.name
    image=%container.image.repository)
  priority: CRITICAL

- rule: Outbound connection to crypto pool
  desc: Detect potential cryptomining
  condition: >
    outbound and
    container and
    (fd.sip.name contains "pool" or
    fd.sport in (3333, 4444, 5555, 8888, 9999))
  output: >
    Crypto mining connection detected
    (container=%container.name dest=%fd.sip:%fd.sport)
  priority: CRITICAL
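Deploying these rules cluster-wide is typically done with the official Helm chart; the falcosecurity/falco chart exposes a customRules value (a map of file name to rules content) for shipping extra rules files. Treat the exact values layout as an assumption to verify against the chart's README:

```shell
# Add the official Falco chart repo
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

# Build a values file embedding falco-rules.yaml under customRules
# (assumption: customRules maps file names to rule file contents)
{
  echo "customRules:"
  echo "  custom-rules.yaml: |-"
  sed 's/^/    /' falco-rules.yaml
} > custom-rules.values.yaml

helm install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  -f custom-rules.values.yaml
```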
6. Secrets Management
Never store secrets in plain YAML:
# BAD: plain-text Secret
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
data:
  password: cGFzc3dvcmQxMjM= # base64 is NOT encryption!
---
# GOOD: External Secrets Operator with Vault
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-creds
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: db-creds
  data:
  - secretKey: password
    remoteRef:
      key: secret/data/production/db
      property: password
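The comment in the BAD example is easy to prove: base64 is reversible by anyone who can read the Secret, no key required:

```shell
# Decoding the Secret's value requires no key at all
echo 'cGFzc3dvcmQxMjM=' | base64 -d
# prints: password123
```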
K8s Security Audit Checklist
Cluster Level
- API server not publicly accessible
- RBAC enabled (default since 1.8+)
- No cluster-admin bindings to user accounts
- etcd encrypted at rest
- Audit logging enabled
- Admission controllers configured
- K8s version is latest stable (patch < 30 days)
Network Level
- Default deny NetworkPolicy in all namespaces
- Ingress controller has WAF (ModSecurity)
- Service mesh with mTLS (Istio/Linkerd)
- No NodePort services in production
- CNI supports NetworkPolicy (Calico/Cilium)
Workload Level
- Pod Security Standards: restricted
- All containers non-root
- Read-only root filesystem
- All capabilities dropped
- Resource limits on all containers
- Image digests (not tags)
- Private registry only
- Seccomp profiles enabled
- No hostPath mounts
- No host networking
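Much of this checklist maps to the CIS Kubernetes Benchmark, which can be run in-cluster with kube-bench. The commands assume the job manifest still lives at the repository root, as the project README describes:

```shell
# Run the CIS Kubernetes Benchmark with aquasecurity/kube-bench
git clone https://github.com/aquasecurity/kube-bench.git
kubectl apply -f kube-bench/job.yaml
kubectl wait --for=condition=complete job/kube-bench --timeout=120s
kubectl logs job/kube-bench
```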
Key Takeaways
- RBAC is broken by default — 78% of clusters have overpermissive roles. Audit with kubectl auth can-i --list
- Network policies are mandatory — without them, one compromised pod can reach every other pod in the cluster
- Pod Security Standards replace PSP — enforce "restricted" profile on production namespaces
- Admission controllers are your last gate — use OPA Gatekeeper or Kyverno to prevent misconfigs
- Runtime monitoring catches what static analysis misses — Falco detects container escapes, cryptomining, and shell access in real time
Scan your Kubernetes manifests and Helm charts with ShieldX — detect RBAC misconfigurations, missing security contexts, and CIS benchmark violations before deployment.