Ephemeral nodes on Google Kubernetes Engine, Django, and IPv6

Greetings, and hats off to the beautiful project that is Tailscale.

I am trying to leverage Tailscale to improve our CI/CD setup.
The idea is to have a Tailscale sidecar running alongside a classic Django Pod.
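Roughly this layout (a sketch only: the names, image tags, and the TS_AUTHKEY wiring below are placeholders rather than our exact manifest; current tailscale/tailscale images read the auth key from a TS_AUTHKEY environment variable):

```yaml
# Sketch of the sidecar Pod layout; names and images are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: foobar-app
spec:
  containers:
    - name: django
      image: foobar/django:staging      # uWSGI + Django, listening on 8800
      ports:
        - containerPort: 8800
    - name: tailscale
      image: tailscale/tailscale:latest # sidecar that joins the tailnet
      env:
        - name: TS_AUTHKEY              # ephemeral, tagged auth key
          valueFrom:
            secretKeyRef:
              name: tailscale-auth
              key: authkey
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]            # tailscaled needs this (plus /dev/net/tun)
```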


At first I tried non-ephemeral nodes; those were assigned IPv4 addresses and everything worked.

However, given the nature of the node and the fact that staging deployments on a given branch are short-lived, an ephemeral node is really what we should be using.
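For reference, joining with an ephemeral key looks something like this (the key, generated in the admin console with the ephemeral option, and the hostname are placeholders):

```sh
# Bring the sidecar onto the tailnet with an ephemeral auth key;
# the key value and hostname below are placeholders.
tailscale up --authkey=tskey-XXXXXXXXXXXX --hostname="staging-${BRANCH_NAME}"
```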

With ephemeral nodes we get assigned an IPv6 address, and this is where things get weird.

It's a simple Django app served with uWSGI:

```ini
[uwsgi]
env =
  LANG=en_US.UTF-8
  DJANGO_SETTINGS_MODULE=foobar.settings

buffer-size = 65535
chdir = /opt/foobar/
chmod-socket = 666
gid = foobar

http = :{{USWGI_PORT}}

lazy = true
lazy-apps = true
module = foobar.wsgi:application
post-buffering = 65535
no-orphans = true
single-interpreter = true
uid = foobar
vacuum = true
import = ddtrace.bootstrap.sitecustomize

disable-write-exception = true       ; ignore the log message when a client aborts the connection
ignore-write-errors = true

;; logging
...
```

Tailscale runs but spews the following logs:

```
2021/07/20 00:58:26 [RATELIMIT] format("[unexpected] peerapi listen(%q) error: %v")
2021/07/20 00:58:26 Received error: PollNetMap: EOF
2021/07/20 00:58:26 control: mapRoutine: backoff: 101 msec
2021/07/20 00:58:26 magicsock: home is now derp-8 (lhr)
2021/07/20 00:58:26 magicsock: endpoints changed: 35.187.13.199:1024 (stun), 10.68.0.46:47912 (local)
2021/07/20 00:58:26 control: client.newEndpoints(0, [35.187.13.199:1024 10.68.0.46:47912])
2021/07/20 00:58:26 magicsock: adding connection to derp-8 for home-keep-alive
2021/07/20 00:58:26 magicsock: 1 active derp conns: derp-8=cr0s,wr0s
2021/07/20 00:58:26 control: NetInfo: NetInfo{varies=false hairpin=false ipv6=false udp=true derp=#8 portmap= link=""}
2021/07/20 00:58:26 Switching ipn state Starting -> Running (WantRunning=true, nm=true)
2021/07/20 00:58:26 derphttp.Client.Connect: connecting to derp-8 (lhr)
2021/07/20 00:58:26 magicsock: derp-8 connected; connGen=1
2021/07/20 00:58:29 control: HostInfo: {"IPNVersion":"1.10.2-t50371bb8f-g80dcd5480","BackendLogID":"efd5161010ef8910333139dda78a2796eb91062b73d8615645888681ad950870","OS":"linux","OSVersion":"Debian 10.10 (buster); kernel=5.4.89+","Hostname":"foobar-app-deployment-5df54d5458-6qsjr","GoArch":"amd64","RequestTags":["tag:staging","tag:web-app"],"Services":[{"Proto":"tcp","Port":8800}],"NetInfo":{"MappingVariesByDestIP":false,"HairPinning":false,"WorkingIPv6":false,"WorkingUDP":true,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":8,"DERPLatency":{"1-v4":0.076050254,"10-v4":0.14277602,"2-v4":0.139160703,"4-v4":0.0170741,"7-v4":0.224902491,"8-v4":0.013953524,"9-v4":0.108908927}}}
2021/07/20 00:59:31 [RATELIMIT] format("[unexpected] peerapi listen(%q) error: %v") (2 dropped)
2021/07/20 00:59:31 [unexpected] peerapi listen("fd7a:115c:a1e0:efe3:2a3d:40a8:6dbb:b81d") error: listen tcp6 [fd7a:115c:a1e0:efe3:2a3d:40a8:6dbb:b81d]:0: bind: cannot assign requested address
```

Clearly the issue seems to be around IPv6.


Plus, the NetInfo line reports `WorkingIPv6=false`.

Is it a uWSGI configuration issue? Kubernetes? Django?

Tailscale is an overlay network, running within a WireGuard tunnel using UDP packets. Inside the tunnel, Tailscale supports both IPv4 and IPv6.

The WireGuard UDP packets are carried in the underlay network. The underlay can be either IPv4 or IPv6 and, counterintuitively, it doesn't matter which. Tailscale can send IPv6 packets inside the tunnel, bundled up into a UDP frame, carried over an IPv4 network to their destination, and that will be fine. The destination unpacks an IPv6 frame from inside the UDP packet, and Tailscale can receive it.
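As a concrete illustration (assuming in-tunnel IPv6 is otherwise healthy), from another node on the tailnet you could reach the pod's Tailscale IPv6 address even though the underlay here is IPv4-only:

```sh
# Ping the pod's in-tunnel IPv6 address (taken from the log above);
# outside the tunnel the traffic travels as ordinary IPv4 UDP.
ping -6 fd7a:115c:a1e0:efe3:2a3d:40a8:6dbb:b81d
```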

The “WorkingIPv6=false” log line means that the underlay network doesn't have a working IPv6 address. That is OK; IPv4 is fine.

What I think is happening is this: [unexpected] peerapi listen("fd7a:115c:a1e0:efe3:2a3d:40a8:6dbb:b81d") error: listen tcp6 [fd7a:115c:a1e0:efe3:2a3d:40a8:6dbb:b81d]:0: bind: cannot assign requested address

Might that machine’s kernel have IPv6 disabled completely? That would prevent IPv6 from working inside the Tailscale tunnel.
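One way to check from inside the pod (plain Linux sysctls, nothing Tailscale-specific):

```sh
# 1 means the kernel has IPv6 disabled, 0 means enabled.
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
sysctl net.ipv6.conf.all.disable_ipv6   # equivalent query via sysctl
```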

Thank you @DGentry

What I see about containerd nodes on GKE:

IPv6 address family is enabled on pods running Containerd

Affected GKE versions: all

The IPv6 address family is enabled for Pods running with Containerd. The dockershim image disables IPv6 on all Pods, while the Containerd image does not. For example, localhost resolves to the IPv6 address ::1 first. This is typically not a problem; however, it might result in unexpected behavior in certain cases.

Currently, the workaround is to use an IPv4 address like 127.0.0.1 explicitly, or to configure the application running in the Pod to work on both address families.
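For uWSGI, the second workaround would presumably amount to binding to the IPv6 wildcard; on Linux with the default net.ipv6.bindv6only=0, a [::] socket accepts IPv4 clients as well. A sketch against the config above:

```ini
; Sketch only: one [::] socket serves both address families on Linux
; as long as net.ipv6.bindv6only stays at its default of 0.
http = [::]:{{USWGI_PORT}}
```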


And my Docker image is built FROM python:3.8-slim-buster.

So it seems that everything should be working.
I am going to try a simpler test case.

Just to double-check: there is no way to disable IPv6 on ephemeral nodes, right?

At present, ephemeral keys only allocate an IPv6 address and there isn’t a way to get an IPv4 address.

This is done because IPv4 addresses are a limited resource, and some of the use-cases for ephemeral keys could consume them at a very high rate (even though they would be returned to the pool later).
