127.0.0.1 in a kubeconfig is a promise the CI container cannot keep

The first instinct when wiring up a CI pipeline to deploy via Helm is to export the kubeconfig and paste it as a secret. It works locally. In a pipeline container it fails immediately: the server address is 127.0.0.1:6443, which inside a Docker container is the container's own loopback, not the host. Swapping in the LAN IP hits a firewall: k3s binds to all interfaces, but iptables drops traffic from the Docker bridge before it reaches the socket. There is no clean path through iptables that is not also a security hole.
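For reference, this is the shape of the kubeconfig k3s writes to /etc/rancher/k3s/k3s.yaml; the server field is the whole problem:

```yaml
# abbreviated — only the relevant fields shown
clusters:
- cluster:
    certificate-authority-data: <base64 CA>
    # loopback: inside a pipeline container this resolves to the
    # container itself, not the host running k3s
    server: https://127.0.0.1:6443
```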

The correct fix: pipeline steps should run as Kubernetes Pods, not Docker containers. Woodpecker has a Kubernetes backend for this. When WOODPECKER_BACKEND=kubernetes, the agent creates Pods instead of Docker containers. Those Pods run inside the cluster and reach the API server at kubernetes.default.svc.cluster.local:443 using the mounted service account token. No kubeconfig. No networking hack. The deploy step shrinks to a single helm upgrade command.
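A deploy step under the kubernetes backend can be sketched like this. The release name, chart path, namespace, and helm image tag are placeholders, and the step's service account is assumed to already have RBAC permission to upgrade releases in the target namespace:

```yaml
# .woodpecker.yaml — sketch, not a drop-in config
steps:
  deploy:
    image: alpine/helm:3.14.0   # any image with helm on PATH works
    commands:
      # with no kubeconfig present, helm falls back to the in-cluster
      # config: the mounted service account token plus the
      # kubernetes.default.svc address
      - helm upgrade --install myapp ./chart --namespace myapp --set image.tag=${CI_COMMIT_SHA}
```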

That leaves building images without a Docker socket; Kaniko is the usual answer. It reads the Dockerfile and pushes directly to the registry from inside the Pod, using the workspace PVC that Woodpecker shares between steps.
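A build step might look like the following. The registry hostname is a placeholder, and registry credentials are assumed to be provided separately (Kaniko reads them from /kaniko/.docker/config.json):

```yaml
# sketch of a Kaniko build step; adjust context and destination
steps:
  build:
    # the :debug variant ships a busybox shell, which Woodpecker
    # needs in order to run the commands list
    image: gcr.io/kaniko-project/executor:debug
    commands:
      - /kaniko/executor --context=${CI_WORKSPACE} --dockerfile=Dockerfile --destination=registry.example.com/myapp:${CI_COMMIT_SHA}
```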

One practical detail with local-path storage: Woodpecker defaults to ReadWriteMany for workspace PVCs. local-path only supports ReadWriteOnce. Set WOODPECKER_BACKEND_K8S_STORAGE_RWX=false and it stops trying to provision an access mode the storage class cannot satisfy.
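Pulled together, the agent environment for this setup looks roughly like this (the namespace value is an assumption; set it to wherever the pipeline Pods should run):

```
WOODPECKER_BACKEND=kubernetes
WOODPECKER_BACKEND_K8S_NAMESPACE=woodpecker
WOODPECKER_BACKEND_K8S_STORAGE_CLASS=local-path
WOODPECKER_BACKEND_K8S_STORAGE_RWX=false
```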
