This guide covers container port configurations in Kubernetes, explaining key concepts and practical setups. If you're setting up ports for the first time or troubleshooting connectivity issues, you'll find clear explanations and useful examples to help you navigate container networking effectively.
What's the Deal with ContainerPort in K8s?
containerPort in Kubernetes declares how an application can be reached from outside its container. It marks the access point for a container's services, and other components (Services, probes, monitoring) build on that declaration.
In its simplest form, containerPort specifies which port an application is listening on inside the container. The concept is fundamental, yet a wrong value can cost hours of troubleshooting an application that deployed cleanly but seems invisible.
ports:
  - containerPort: 8080  # Your app is listening here inside the container
    protocol: TCP        # Default is TCP, but you can specify UDP if needed
    name: http           # Optional but recommended for clarity
The containerPort field is part of the Container spec, not the Pod spec, which is important when thinking about multi-container pods. Each container declares its ports independently.
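As a quick illustration of that independence, a two-container pod might declare ports like this (the container names and images here are hypothetical):
spec:
  containers:
    - name: app             # Main application container
      image: example/app:1.0
      ports:
        - containerPort: 8080
          name: http
    - name: log-shipper     # Sidecar declares its own ports separately
      image: example/shipper:1.0
      ports:
        - containerPort: 2020
          name: forwarder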
Setting Up containerPort in K8s Step by Step
Configuring containerPort properly from the outset prevents many common issues. Here are the key considerations:
1. Know Your App's Port Requirements
Before implementation, it's important to consider:
- The default listening port of the application
- Requirements for multiple ports for different services (API, metrics, admin interface)
- Protocol requirements (TCP/UDP)
- HTTP and HTTPS traffic handling needs
Application documentation typically provides this information. For custom applications, consultation with the development team is recommended.
For standard applications, here are common default ports:
- Node.js apps: 3000
- Spring Boot: 8080
- Nginx: 80
- MongoDB: 27017
- Redis: 6379
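If you're unsure, the image itself often records its intended ports. For example, inspecting the image metadata (assuming a local Docker install) might show:
# List the ports the image declares via EXPOSE
docker inspect --format '{{json .Config.ExposedPorts}}' nginx:latest
# Typical output: {"80/tcp":{}}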
2. The Essential ContainerPort Configuration
Here's a detailed example of a Deployment with containerPort properly configured:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: production
  labels:
    app: web-app
    component: frontend
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
    spec:
      containers:
        - name: web-app
          image: your-repo/web-app:1.0
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 256Mi
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 8443
              name: https
            - containerPort: 9090
              name: metrics
          env:
            - name: PORT
              value: "8080"
            - name: METRICS_PORT
              value: "9090"
      imagePullSecrets:
        - name: registry-credentials
Breaking this down:
- The `name` field for ports isn't required, but it makes your life easier when creating services or debugging
- You can have multiple ports defined for different purposes (HTTP, HTTPS, metrics)
- Make sure environment variables that define ports match your containerPort values
- Adding annotations for metrics collection helps observability tools automatically discover scrape targets
3. Connecting ContainerPort to Services
Your containerPort won't do much good without a Service to expose it. Here's a detailed Service configuration that works with the Deployment above:
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
  namespace: production
  labels:
    app: web-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:region:account:certificate/cert-id
spec:
  selector:
    app: web-app
  ports:
    - port: 80          # The port this service is available on
      targetPort: http  # Points to your named containerPort
      protocol: TCP
      name: http
    - port: 443         # HTTPS port
      targetPort: https # Points to your named https containerPort
      protocol: TCP
      name: https
  type: LoadBalancer    # Exposes the service externally
  sessionAffinity: None
Key details:
- Using named targetPorts (http, https) instead of numbers makes your config more maintainable
- Including annotations configures cloud-specific features like SSL termination
- When you specify multiple ports, naming each one becomes mandatory
- The service selector must match labels that are actually set on the pods
For internal services, you'd typically use:
spec:
  type: ClusterIP
  ports:
    - port: 80          # Port exposed by the service
      targetPort: http  # Container port to route traffic to
For node-level access:
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: http
      nodePort: 30080  # Optional: specific port on each node (30000-32767)
4. Understanding the Full Connection Path
To master containerPort, you need to understand the full connection path:
- Container: App binds to containerPort (e.g., 8080)
- Pod: Contains one or more containers, each with their own ports
- Service: Routes traffic to pods based on labels
  - `port`: What clients connect to
  - `targetPort`: Maps to containerPort
  - `nodePort`: (Optional) Exposes the service on every node's IP
- Ingress: Routes external HTTP(S) traffic to services
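To test each hop in isolation, port-forwarding lets you hit the containerPort directly, bypassing the Service and Ingress layers entirely:
# Forward local port 8080 straight to the pod's containerPort
kubectl port-forward pod/<pod-name> 8080:8080
# In another terminal, this exercises only the container itself
curl localhost:8080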
Common ContainerPort K8s Issues and How to Fix Them
The following section walks through common troubleshooting scenarios for containerPort configurations in Kubernetes environments.
"My Service Can't Connect to My Pods"
The Symptoms: Your service is running, your pods are running, but traffic isn't flowing.
Detailed Diagnosis:
Verify DNS resolution is working:
kubectl exec -it <some-pod> -- nslookup <service-name>
Look for kube-proxy issues:
# Check kube-proxy logs on the node
kubectl logs -n kube-system -l k8s-app=kube-proxy
Check if there's a network policy blocking traffic:
kubectl get networkpolicies -A
Test the connection directly to the pod:
# Get a pod's IP
POD_IP=$(kubectl get pod <pod-name> -o jsonpath='{.status.podIP}')
# Try to connect from another pod in the same namespace
kubectl exec -it <some-other-pod> -- curl $POD_IP:<containerPort>
Verify your service is selecting the right pods:
# See what labels your service is selecting
kubectl get service <service-name> -o jsonpath='{.spec.selector}'
# Find pods matching those labels
kubectl get pods --selector=app=your-app-label
Check that your service's `targetPort` matches your pod's `containerPort`:
# Check pod container ports
kubectl get pods <pod-name> -o jsonpath='{.spec.containers[*].ports[*]}'
# Check service target ports
kubectl get service <service-name> -o jsonpath='{.spec.ports[*].targetPort}'
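Also worth checking whether the Service resolved any backends at all; an empty Endpoints list usually means the selector or targetPort is wrong:
# An empty ENDPOINTS column means no ready pods matched the selector
kubectl get endpoints <service-name> -n <namespace>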
"Port Conflicts in Multi-Container Pods"
The Symptoms: Your pod won't start, with errors about port binding failures.
Detailed Diagnosis:
Check container logs for binding errors:
kubectl logs <pod-name> -c <container-name>
Look at the pod events:
kubectl describe pod <pod-name>
Check for port conflicts in your pod spec:
kubectl get pod <pod-name> -o yaml | grep containerPort -A 5
The Fix: Multiple containers in the same pod can't bind to the same port number on the same interface. Review your pod spec and ensure each container uses unique ports or binds to different interfaces.
containers:
  - name: first-container
    ports:
      - containerPort: 8080
    command: ["nginx", "-g", "daemon off;"]
  - name: second-container
    ports:
      - containerPort: 9090  # Different port!
    command: ["python", "app.py"]
For sidecars that share the pod's network namespace with the main container, you might need localhost binding on a distinct port:
env:
  - name: LISTEN_ADDR
    value: "127.0.0.1:9090"  # Only listen on localhost
"My Readiness Probe is Failing Despite the App Running"
The Symptoms: Your pod is running but stuck in a not-ready state.
Detailed Diagnosis:
Test the health endpoint directly:
kubectl exec -it <pod-name> -- curl localhost:<port>/health
Verify what port your app is listening on:
kubectl exec -it <pod-name> -- netstat -tulpn
Check what port your readiness probe is using:
kubectl describe pod <pod-name> | grep -A10 Readiness
The Fix: Often, this happens because your readiness probe is checking a different port than your app is listening on.
# Complete probe configuration
readinessProbe:
  httpGet:
    path: /health
    port: 8080  # Make sure this matches your containerPort!
    scheme: HTTP
    httpHeaders:
      - name: Custom-Header
        value: Readiness
  initialDelaySeconds: 10  # Wait before first check
  periodSeconds: 5         # How often to check
  timeoutSeconds: 2        # How long to wait for a response
  successThreshold: 1      # How many successes to become ready
  failureThreshold: 3      # How many failures to become not ready
If your app has a startup delay, increase `initialDelaySeconds`.
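If the delay is long or unpredictable, recent Kubernetes versions also offer a startupProbe, which holds off the liveness and readiness checks until the app comes up; a minimal sketch:
startupProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 5
  failureThreshold: 30  # Allows up to 150s for startup before the kubelet gives up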
"My Pod Works Locally but Fails in Production"
The Symptoms: Everything works in development/staging, but connections fail in production.
Common Causes and Fixes:
Container binding to localhost only: Make sure your app binds to `0.0.0.0` inside the container, not just to localhost:
# Check what address your app is binding to
kubectl exec -it <pod-name> -- netstat -tulpn
Environment-specific configuration: Check if your app is binding to different ports based on environment variables:
env:
  - name: PORT
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: port
Different Network Policies:
# Compare network policies across namespaces
kubectl get networkpolicies -n development
kubectl get networkpolicies -n production
ContainerPort vs HostPort vs NodePort: Clearing the Confusion
These similar port-related terms can be confusing. Below is a comprehensive breakdown of their differences:
| Port Type | What It Does | When To Use It | Gotchas | Example YAML |
|---|---|---|---|---|
| containerPort | Declares which port your app listens on inside the container | Always; this is essential documentation | Not specifying it doesn't block the port, but other components won't know about it | `containerPort: 8080` |
| hostPort | Maps the container port directly to the same port on the node | Rarely; only when you need direct access to a specific pod from outside | Creates scheduling constraints; only one pod per node can use a given hostPort | `containerPort: 8080`<br>`hostPort: 80` |
| nodePort | Part of a Service definition; exposes the service on all nodes' IPs | When you need external access but don't have a load balancer | Limited to range 30000-32767 by default | `type: NodePort`<br>`nodePort: 30080` |
| port | The port exposed by the Service | Used in all Service definitions | A virtual port on the service's cluster IP; no process actually listens on it | `port: 80` |
| targetPort | The port on the pod that the Service forwards traffic to | Used in all Service definitions | Can be a name or a number; corresponds to containerPort | `targetPort: http` |
When to Use Each Port Type:
containerPort: Always use this to document what ports your app uses.
containers:
  - name: nginx
    image: nginx
    ports:
      - containerPort: 80
hostPort: Use for development or when you need a fixed port on a specific node.
containers:
  - name: nginx
    image: nginx
    ports:
      - containerPort: 80
        hostPort: 8080  # Warning: scheduling limitation
NodePort: Use when you need externally accessible services without a load balancer.
kind: Service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080  # Accessible at <any-node-ip>:30080
LoadBalancer: Use when you need a stable, external IP in cloud environments.
kind: Service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
Ingress: Use for HTTP(S) routing when you have multiple services.
kind: Ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
Monitoring ContainerPort Traffic Like a Pro
Monitoring traffic through container ports enables proactive issue detection before service disruptions occur.
Setting Up Comprehensive Port Monitoring
1. Prometheus ServiceMonitor Custom Resource:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: web-app-monitor
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: web-app
  endpoints:
    - port: metrics  # Named port from your service
      interval: 15s
      path: /metrics
  namespaceSelector:
    matchNames:
      - production
This automatically discovers and scrapes metrics from your service's metrics port.
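One caveat: `port: metrics` refers to a named port on the Service, so the Service from earlier would also need to expose the metrics port for the ServiceMonitor to find it. A sketch of the extra entry in its ports list:
# Added to web-app-service's ports so the ServiceMonitor can resolve "metrics"
- port: 9090
  targetPort: metrics
  protocol: TCP
  name: metrics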
2. Grafana Dashboard for Port Traffic:
Create a dashboard with these key metrics:
- Request rate by port: `rate(http_requests_total{port="http"}[5m])`
- Error rate by port: `sum(rate(http_requests_total{status=~"5.."}[5m])) by (port) / sum(rate(http_requests_total[5m])) by (port)`
- Connection latency: `histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le, port))`
- Connection backlog: `sum(nginx_connections_waiting) by (pod)`
3. Network Policy Visualization with Cilium:
# Quick-install Cilium; its Hubble component visualizes network flows
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/master/install/kubernetes/quick-install.yaml
4. Setting Up Port-Specific Alerts:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: port-alerts
  namespace: monitoring
spec:
  groups:
    - name: port.rules
      rules:
        - alert: HighErrorRate
          expr: sum(rate(http_requests_total{status=~"5.."}[5m])) by (service, port) / sum(rate(http_requests_total[5m])) by (service, port) > 0.05
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "High error rate on {{ $labels.service }} port {{ $labels.port }}"
            description: "Error rate is {{ $value | humanizePercentage }} for the last 5 minutes"
Network Policy Examples for Traffic Control
Basic policy to allow inbound traffic to specific containerPort:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web
          namespaceSelector:
            matchLabels:
              purpose: frontend
      ports:
        - protocol: TCP
          port: 8080
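To verify the policy actually blocks unlabeled clients, a throwaway pod without the `app: web` label should fail to connect (the `api-service` name here is hypothetical):
# Expect a timeout once the policy is enforced
kubectl run np-test --rm -it --image=busybox --restart=Never -n production \
  -- wget -qO- -T 3 http://api-service:8080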
Policy for restricting outbound traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-outbound
spec:
  podSelector:
    matchLabels:
      app: secured-app
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432  # PostgreSQL port
Advanced ContainerPort Patterns for Production Systems
The following advanced patterns make containerPort configurations more robust in production:
Named Ports for Better Maintainability
# Pod template
ports:
  - name: http
    containerPort: 8080
  - name: metrics
    containerPort: 9090

# Service
ports:
  - port: 80
    targetPort: http  # References the named port
Then if your internal port changes, you only update it in one place:
# Just change this
ports:
  - name: http
    containerPort: 9000  # Changed from 8080
Implementing Port Discovery with Pod Annotations
template:
  metadata:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9090"
      prometheus.io/path: "/metrics"
These annotations help service discovery systems find the right ports automatically.
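For context, the Prometheus side typically consumes these annotations via pod service discovery with relabeling. A common scrape-config fragment (a sketch of the widely used community pattern, not the only way to wire it) looks like:
- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Only scrape pods that opt in via the annotation
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
    # Rewrite the scrape address to use the annotated port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    # Honor a custom metrics path if one is annotated
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      regex: (.+)
      target_label: __metrics_path__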
Health Check Ports
Dedicate a port for health checks to keep them separate from your main traffic:
ports:
  - containerPort: 8080
    name: http
  - containerPort: 8081
    name: health

livenessProbe:
  httpGet:
    path: /live
    port: health
readinessProbe:
  httpGet:
    path: /ready
    port: health
Zero-Downtime Port Migration Strategy
When changing the port your application listens on:
1. Deploy the workload with both the old and new ports exposed
2. Update routing to split traffic between the two ports (weighted routing)
3. Gradually shift traffic to the new port
4. Remove the old port once the migration is complete
For the traffic-splitting step, a service mesh like Istio can weight the routes:
# Using Istio for traffic splitting
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-split
spec:
  hosts:
    - my-service
  http:
    - route:
        - destination:
            host: my-service
            port:
              number: 8080
            subset: old
          weight: 90
        - destination:
            host: my-service
            port:
              number: 9090
            subset: new
          weight: 10
The Deployment itself exposes both ports for the duration of the migration:
ports:
  - containerPort: 8080  # Old port
    name: http-old
  - containerPort: 9090  # New port
    name: http-new
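Note that the `old` and `new` subsets referenced by the VirtualService assume a matching DestinationRule that defines them; a sketch (the `version` label values are hypothetical):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service-subsets
spec:
  host: my-service
  subsets:
    - name: old
      labels:
        version: old
    - name: new
      labels:
        version: new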
Multi-Protocol Support
For applications that need to support both TCP and UDP:
ports:
  - name: http
    containerPort: 8080
    protocol: TCP
  - name: dns
    containerPort: 53
    protocol: UDP
Service configuration:
ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  - name: dns
    port: 53
    targetPort: dns
    protocol: UDP
Service Mesh Integration
Using Istio for advanced traffic management:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
    - reviews.prod.svc.cluster.local
  http:
    - match:
        - headers:
            end-user:
              exact: jason
      route:
        - destination:
            host: reviews.prod.svc.cluster.local
            port:
              number: 9090
            subset: v2
    - route:
        - destination:
            host: reviews.prod.svc.cluster.local
            port:
              number: 9090
            subset: v1
This routes traffic to different versions of your service based on HTTP headers.
Custom Resource Definitions for Port Management
For teams managing many services, creating a CRD can simplify port management:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: serviceports.networking.example.com
spec:
  group: networking.example.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                serviceName:
                  type: string
                containerPorts:
                  type: array
                  items:
                    type: object
                    properties:
                      name:
                        type: string
                      port:
                        type: integer
                      protocol:
                        type: string
  scope: Namespaced
  names:
    plural: serviceports
    singular: serviceport
    kind: ServicePort
Usage:
apiVersion: networking.example.com/v1
kind: ServicePort
metadata:
  name: web-ports
  namespace: production
spec:
  serviceName: web-app
  containerPorts:
    - name: http
      port: 8080
      protocol: TCP
    - name: metrics
      port: 9090
      protocol: TCP
You'd then write a controller to translate these into actual Kubernetes resources.
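Once the CRD is registered, the custom resources behave like any other namespaced object (assuming the manifest above is saved as web-ports.yaml):
# Register the resource, then query it like a built-in type
kubectl apply -f web-ports.yaml
kubectl get serviceports -n production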
Wrapping Up
Configuring containerPort in Kubernetes may appear straightforward but involves significant complexity. Key principles include:
- Explicit declaration of containerPorts
- Alignment of service targetPorts with containerPorts
- Implementation of named ports for clarity and maintainability
- Separation of concerns with dedicated ports for different functions