Understanding Kubernetes networking is crucial for building reliable, scalable applications. This guide covers everything from basic Pod communication to advanced Service mesh patterns.
*Figure 1: Kubernetes networking architecture*
## Kubernetes Networking Fundamentals
Kubernetes networking must satisfy these requirements:
1. All Pods can communicate with each other without NAT
2. All Nodes can communicate with all Pods without NAT
3. The IP a Pod sees itself as is the same IP others see it as
## 1. Pod-to-Pod Communication
### Same Node Communication
Pods on the same node communicate through virtual Ethernet (veth) pairs attached to a shared bridge on the node, so traffic never leaves the host.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
```
Pods get unique IP addresses from the cluster CIDR range. The ranges below are common defaults; your cluster may use different ones:

| Component | IP Range | Used For |
|---|---|---|
| Cluster CIDR | 10.244.0.0/16 | Pod IPs |
| Service CIDR | 10.96.0.0/12 | Service IPs |
| Node network | 192.168.1.0/24 | Node IPs |
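To see how these ranges partition the address space, here is a small sketch using Python's standard `ipaddress` module. The CIDRs are the example values from the table above, not values read from a real cluster:

```python
import ipaddress

# Example ranges from the table above; real clusters may use different CIDRs
RANGES = {
    "pod": ipaddress.ip_network("10.244.0.0/16"),
    "service": ipaddress.ip_network("10.96.0.0/12"),
    "node": ipaddress.ip_network("192.168.1.0/24"),
}

def classify(ip: str) -> str:
    """Return which cluster range an address falls into, or 'unknown'."""
    addr = ipaddress.ip_address(ip)
    for name, net in RANGES.items():
        if addr in net:
            return name
    return "unknown"
```

For instance, `classify("10.244.1.5")` returns `"pod"`, while a public address like `8.8.8.8` falls into none of the ranges.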
### Cross-Node Communication
Kubernetes uses Container Network Interface (CNI) plugins for inter-node routing.
**Popular CNI plugins:**

- **Calico** – BGP routing, network policies
- **Flannel** – simple overlay network
- **Cilium** – eBPF-based, advanced features
- **Weave Net** – encrypted overlay
*Figure 2: CNI plugin network flow*
## 2. Services and Load Balancing
### ClusterIP Service

The default Service type; it is reachable only from inside the cluster.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```
How it works:
1. Service gets a virtual IP (ClusterIP)
2. kube-proxy programs iptables/IPVS rules
3. Traffic to ClusterIP is load-balanced to Pods
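In iptables mode, kube-proxy sends each new connection to one ready endpoint chosen with equal probability. A toy sketch of that per-connection selection; the endpoint addresses are made up:

```python
import random

# Hypothetical ready endpoints behind a ClusterIP
ENDPOINTS = ["10.244.1.5:8080", "10.244.2.7:8080", "10.244.3.2:8080"]

def pick_endpoint(endpoints, rng=random):
    """Uniform random choice per new connection, roughly what
    iptables-mode kube-proxy implements with probability rules."""
    return rng.choice(endpoints)
```

Over many connections, traffic spreads evenly across the endpoints; the real kube-proxy also tracks connection state so an established flow keeps hitting the same Pod.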
### NodePort Service
Exposes service on each node's IP at a static port.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30080  # 30000-32767
```
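The `nodePort` must fall inside the API server's node port range, which defaults to 30000-32767 (configurable with the `--service-node-port-range` flag). A quick validation sketch of that default:

```python
# Default --service-node-port-range; cluster operators can change it
NODE_PORT_RANGE = range(30000, 32768)

def valid_node_port(port: int) -> bool:
    """True if the port is inside the default NodePort range."""
    return port in NODE_PORT_RANGE
```

The manifest above uses 30080, which passes; an arbitrary application port like 8080 would be rejected by the API server.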
### LoadBalancer Service

Provisions an external load balancer through the cloud provider's integration.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: public-service
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```
## 3. Ingress Controllers
Ingress provides HTTP/HTTPS routing to Services. Note that the `kubernetes.io/ingress.class` annotation is deprecated; use the `spec.ingressClassName` field instead.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
```
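With `pathType: Prefix`, the controller matches on path segments and, when several rules match, routes to the most specific one. A simplified sketch of that routing decision for the two paths above (not the actual controller code):

```python
# (prefix, backend) pairs from the Ingress above
ROUTES = [("/api", "api-service"), ("/", "frontend-service")]

def route(path: str) -> str:
    """Longest matching prefix wins; a prefix matches only on whole
    path segments, as with pathType: Prefix."""
    for prefix, backend in sorted(ROUTES, key=lambda r: len(r[0]), reverse=True):
        if prefix == "/" or path == prefix or path.startswith(prefix + "/"):
            return backend
    return "<no match>"
```

So `/api/users` goes to `api-service`, while `/apiv2` falls through to the catch-all `/` rule, because `/api` only matches whole segments.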
*Figure 3: Ingress controller routing*
## 4. Network Policies

NetworkPolicies control traffic flow between Pods. They only take effect if your CNI plugin enforces them (Calico and Cilium do; plain Flannel does not).
### Default Deny All Ingress
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```
### Allow Specific Traffic
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```
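The effect of the policy above can be read as a predicate: ingress to Pods labeled `app=backend` is allowed only from Pods labeled `app=frontend` on TCP 8080. A toy evaluation of just this one policy (it ignores namespaces and any other policies that may also apply):

```python
def ingress_allowed(src_labels: dict, dst_labels: dict, dst_port: int) -> bool:
    """Evaluate the single allow-frontend-to-backend policy above."""
    if dst_labels.get("app") != "backend":
        return True  # the policy does not select this Pod
    return src_labels.get("app") == "frontend" and dst_port == 8080
```

Combined with the default-deny policy, unselected Pods would actually be denied too; real policy evaluation is the union of all policies selecting a Pod.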
### Egress Control
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
spec:
  podSelector:
    matchLabels:
      app: secure-app
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system  # label set automatically on every namespace
      ports:
        - protocol: UDP
          port: 53 # Allow DNS
        - protocol: TCP
          port: 53 # DNS falls back to TCP for large responses
```
## 5. DNS in Kubernetes
CoreDNS provides service discovery.
### Service DNS Names
```bash
# Format: <service-name>.<namespace>.svc.cluster.local

# Same namespace: the short name resolves via the search list
curl http://backend-service

# Different namespace: use the fully qualified name
curl http://backend-service.production.svc.cluster.local

# Individual Pod behind a headless Service (e.g. a StatefulSet member)
curl http://database-0.database.production.svc.cluster.local
```
### Custom DNS Configuration
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-pod
spec:
  containers:            # a Pod needs at least one container
    - name: app
      image: nginx:latest
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 8.8.8.8
      - 8.8.4.4
    searches:
      - production.svc.cluster.local
      - svc.cluster.local
    options:
      - name: ndots
        value: "2"
```
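The `ndots` option controls when the resolver consults the search list: names with fewer dots than `ndots` are tried against the search domains first. A sketch of the glibc-style candidate ordering:

```python
def candidate_names(name: str, searches: list, ndots: int) -> list:
    """Order of lookups a glibc-style resolver attempts for a name."""
    if name.endswith("."):
        return [name]  # already fully qualified, no search expansion
    expanded = [f"{name}.{s}" for s in searches]
    if name.count(".") >= ndots:
        return [name] + expanded  # enough dots: try as-is first
    return expanded + [name]      # otherwise try search domains first
```

With the search list and `ndots: "2"` above, `backend-service` is first tried as `backend-service.production.svc.cluster.local`; this is also why a low `ndots` reduces wasted lookups for external names.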
## 6. Service Mesh

Service meshes such as Istio and Linkerd add advanced traffic management on top of Kubernetes networking.
### Istio VirtualService
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
    - reviews
  http:
    - match:
        - headers:
            user-type:
              exact: "premium"
      route:
        - destination:
            host: reviews
            subset: v2
          weight: 100
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```
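The 90/10 split in the second route can be modeled as a cumulative-weight pick. A deterministic sketch, with `r` standing in for the proxy's per-request random draw:

```python
def pick_subset(weights: dict, r: float) -> str:
    """Choose a subset given r in [0, 1), proportionally to weights."""
    total = sum(weights.values())
    cumulative = 0.0
    for subset, weight in weights.items():
        cumulative += weight
        if r * total < cumulative:
            return subset
    return subset  # guard for r at the very top of the range
```

With the weights above, draws below 0.9 land on `v1` and the remaining 10% on `v2`, which is how gradual canary rollouts are typically expressed.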
### Defining Subsets

The `v1` and `v2` subsets referenced by the VirtualService are declared in a DestinationRule:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```
*Figure 4: Service mesh data plane*
## 7. Troubleshooting Network Issues
### Debug Pod
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netshoot
spec:
  containers:
    - name: netshoot
      image: nicolaka/netshoot
      command: ["/bin/bash"]
      args: ["-c", "while true; do sleep 30; done;"]
```
### Common Commands
```bash
# Check Pod IPs
kubectl get pods -o wide

# Test connectivity
kubectl exec -it netshoot -- ping 10.244.1.5

# DNS lookup
kubectl exec -it netshoot -- nslookup backend-service

# Check network policies
kubectl get networkpolicies -A

# View service endpoints
kubectl get endpoints backend-service

# Trace route
kubectl exec -it netshoot -- traceroute backend-service

# Inspect iptables rules (this shows the Pod's own network namespace;
# run on the node or in a hostNetwork Pod to see kube-proxy's rules)
kubectl exec -it netshoot -- iptables -L -t nat
```
## 8. Performance Optimization

### Connection Pooling
```python
# Python example with connection pooling and retries
import requests
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter

session = requests.Session()

retry = Retry(
    total=3,
    backoff_factor=0.3,
    status_forcelist=[500, 502, 503, 504],
)

adapter = HTTPAdapter(
    pool_connections=100,
    pool_maxsize=100,
    max_retries=retry,
)
session.mount('http://', adapter)
session.mount('https://', adapter)

# Connections are reused across requests
response = session.get('http://backend-service/api')
```
### Keep-Alive Configuration
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    upstream backend {
      keepalive 32;
      server backend-service:80;
    }
    server {
      location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
      }
    }
```
## 9. Security Best Practices
- ✅ Implement NetworkPolicies in all namespaces
- ✅ Use TLS for service-to-service communication
- ✅ Enable mutual TLS with service mesh
- ✅ Restrict egress traffic
- ✅ Use private container registries
- ✅ Run regular security audits
## Networking Checklist
- [ ] CNI plugin installed and configured
- [ ] Network policies defined
- [ ] Ingress controller deployed
- [ ] TLS certificates configured
- [ ] DNS resolution working
- [ ] Service mesh evaluated (if needed)
- [ ] Monitoring and logging enabled
- [ ] Load testing performed
## Conclusion
Mastering Kubernetes networking enables:
- ✅ Reliable communication between services
- ✅ Secure traffic with network policies
- ✅ Scalable architecture with proper load balancing
- ✅ Advanced patterns with service mesh
Start with the basics and gradually implement advanced features as your needs grow.
Need help with Kubernetes networking? Contact us for expert consulting.