Tailscaled in GKE Pod can ping but not route

Greetings, I’m trying to set up a reverse proxy as a sidecar in a GKE pod that routes traffic back to services on my home network. I’ve got tailscale running off a modified version of this gist, brought up with --accept-routes --advertise-exit-node, and it can see all of my services. The node itself was allocated an IPv6 address, and I can tailscale ping most other tailscale nodes, including the BGP addresses that I’ve got a tailscale-router routing to. However, any attempt to hit a running service on my home network from the Kubernetes pod just hangs and eventually times out. tailscale status shows tx/rx for the endpoints I’d expect, but no traffic actually appears to be getting through.
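Roughly, the sidecar startup looks like this (paraphrasing my modified gist; the state path and the TAILSCALE_AUTHKEY variable are my placeholders, not necessarily the gist’s exact names):

```sh
# Start the daemon, then bring the node up with the flags mentioned above.
tailscaled --state=/var/lib/tailscale/tailscaled.state &
tailscale up \
  --authkey="${TAILSCALE_AUTHKEY}" \
  --accept-routes \
  --advertise-exit-node
```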

Interestingly, I have no issues going the other direction: I can successfully enable the pod as an exit node and forward all my traffic through it.

I can’t seem to find any particularly relevant logs in the pod itself, though it does throw this error on startup:
Warning: net.ipv6.conf.all.forwarding is disabled. Subnet routes won't work.
Given it’s in GKE, I’m not sure I can actually resolve that - but I’m also not entirely sure that’s the only issue.

Any suggestions are greatly appreciated, happy to try anything.

For the following error:

Warning: net.ipv6.conf.all.forwarding is disabled. Subnet routes won’t work.

please enable IP forwarding using the commands described here: Enable IP forwarding on Linux · Tailscale
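The commands from that page are roughly the following (persisting the setting, then applying it immediately):

```sh
# Enable IPv4 and IPv6 forwarding persistently, then apply.
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```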

I’m not sure I’ll be able to override the IPv6 forwarding, as these are GKE-managed nodes (IPv4 forwarding is enabled by default).

I’m also not sure that’s the whole problem here, but I’ll see if it can be overridden.
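One avenue I may try: net.* sysctls are scoped to the network namespace, so a privileged init container should be able to enable forwarding for just the pod, without touching the GKE node itself. A minimal sketch of the idea (all names here are hypothetical, and it assumes the node pool permits privileged containers):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ts-sidecar-test            # hypothetical name
spec:
  initContainers:
  - name: enable-forwarding
    image: busybox:1.36
    # net.* sysctls are per network namespace, so this affects only this pod.
    command: ["sh", "-c", "sysctl -w net.ipv4.ip_forward=1 net.ipv6.conf.all.forwarding=1"]
    securityContext:
      privileged: true
  containers:
  - name: proxy                    # stand-in for the nginx + tailscale containers
    image: nginx:1.21
EOF
```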

I’ve been doing some playing around with vanilla Debian GCE instances, and I appear to have consistent issues whenever I’m allocated an IPv6 address (which I gather is standard if you use an ephemeral key). If I do a standard login and get an IPv4 address, things “just work” (even without explicitly enabling IP forwarding), but I can’t get it to work at all if I have IPv6.

I’ve got two GCE instances whose only difference is that on one I did sudo tailscale up --authkey tskey-$ephemeralkey and on the other I did sudo tailscale up with manual authentication. The manually authenticated one works perfectly; the one with the ephemeral key, and its IPv6 address, does not work at all.
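For anyone comparing the same way, this is how I’m checking what each node was assigned and whether a peer is actually reachable (the peer address is a placeholder):

```sh
tailscale ip -4                  # the node's 100.x.y.z address, if it has one
tailscale ip -6                  # the node's Tailscale IPv6 address, if it has one
tailscale ping <peer-ip>         # tests the Tailscale layer (direct or via DERP)
curl -m 5 http://<peer-ip>/      # tests whether real traffic flows end to end
```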

Ok, now I’m at a loss. I stumbled on Tailscale on Google Cloud Run · Tailscale (awesome btw). Following and modifying that example, I can successfully create a local Docker container that gets an IPv6 address and correctly talks to my backend service. However, when I push the exact same container to Cloud Run or GKE, it does not work.
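The shape of that example, as I understand it: Cloud Run has no /dev/net/tun, so tailscaled runs in userspace-networking mode and exposes a local SOCKS5 proxy for the app’s outbound traffic. My start script is a variation on this (my-server and the port are placeholders):

```sh
#!/bin/sh
# No TUN device: run tailscaled with userspace networking and a SOCKS5 proxy.
tailscaled --tun=userspace-networking --socks5-server=localhost:1055 &
tailscale up --authkey="${TAILSCALE_AUTHKEY}"
# Send the app's outbound connections through the proxy so they reach the tailnet.
ALL_PROXY=socks5://localhost:1055/ exec /app/my-server
```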

From Cloud Run / GKE I’m also entirely unable to connect to hello.ipn.dev, yet the local container with IPv6 is fine.

Is the TAILSCALE_AUTHKEY environment variable set in Cloud Run? That is an easy step to miss.

Yeah. I can see the Cloud Run instances joining my tailscale network (TIL their hostnames are “localhost”, so I’ve got a bunch of localhost-1, localhost-2, etc. coming and going).
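Side note for anyone else hitting the localhost-1/localhost-2 pileup: tailscale up accepts an explicit machine name, so something like this avoids it (the name is just an example):

```sh
# --hostname overrides the machine name derived from the OS hostname.
tailscale up --authkey="${TAILSCALE_AUTHKEY}" --hostname="cloudrun-proxy"
```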

Neither the Cloud Run instances nor the GKE pods are able to access any of the addresses (IPv4 or IPv6). I did, however, manage to get a Debian GCE instance with an IPv6 address to talk to another Tailscale IPv6 address, but not to an IPv4 address.

This is what I have at the moment: Tailscale reverse proxy test · GitHub

This all works locally just fine (even with ephemeral addresses). It does not work in Cloud Run, GKE, or GCE.

EDIT: Even when I get a shell into the running container, I’m unable to curl any of my tailscale addresses - so it’s not just my nginx config that could be bad.
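One thing worth ruling out while in that shell: if tailscaled is in userspace-networking mode here, a plain curl never touches the tailnet at all; the request has to go through the SOCKS5 proxy. Something like (the peer address is a placeholder):

```sh
# Plain curl bypasses userspace tailscaled; route it through the SOCKS5
# proxy tailscaled is serving on localhost:1055.
curl --socks5-hostname localhost:1055 http://<tailscale-peer-ip>/
```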

For anyone finding this thread in the future: this was resolved in https://github.com/tailscale/tailscale/issues/2690