Subnet router and Kubernetes

Hello,

I have successfully installed the subnet router on my remote equipment and can access all my machines on the private network from home.

I have installed a second router inside my cluster using the mvisonneau/tailscale:v1.18.2 Helm chart. The install went fine, and the Tailscale admin console picked up the new router. However, when I try to access a service or pod inside the cluster, the tailscale pod logs show the incoming request but nothing gets returned, neither HTTP nor ICMP. Here is what the logs look like:

```
2022/04/07 16:31:24 Accept: ICMPv4{100.72.195.93:0 > 10.244.235.131:0} 60 icmp ok
2022/04/07 16:31:29 Accept: ICMPv4{100.72.195.93:0 > 10.244.235.131:0} 60 icmp ok
2022/04/07 16:31:34 Accept: ICMPv4{100.72.195.93:0 > 10.244.235.131:0} 60 icmp ok
```

The one thing I noticed is that the Helm chart's Tailscale version is pretty far behind. Could this be a compatibility issue between the different versions?

Thanks for any thoughts you may have on the subject.
Brad

OK, the error was that IP forwarding was not set inside the pod. Working with the Helm chart author on a fix, I hope.
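
If anyone wants to confirm the same thing on their side, this is roughly how I checked (the pod name is a placeholder; writing the values back only works in a privileged container and won't survive a restart, so the durable fix belongs in the chart itself):

```
# Read the forwarding flags in the tailscale pod's network namespace:
kubectl exec tailscale-0 -- sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding

# Flip them at runtime (privileged container only; resets on pod restart):
kubectl exec tailscale-0 -- sysctl -w net.ipv4.ip_forward=1 net.ipv6.conf.all.forwarding=1
```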


Hi Dax,

I'm facing the exact same issue. I can, however, tell you that updating the Tailscale version did not seem to fix it for me. Regardless, I've created pull requests for the owner of the Docker image and Helm repo to update to 1.22.2 🙂

Do you know why the pod just stopped forwarding the IPs? This had been working for me for around 3 months or so, and then all of a sudden it just stopped.

Open PRs:

I had to abandon the Helm chart and make my own. Ping me if you would like my recipe.
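
The rough shape of it, heavily simplified: a plain Deployment with a privileged init container to turn forwarding on, plus the tailscale container itself. The image tag, route, and Secret names below are placeholders, the TS_* variables are the ones documented for the current official tailscale/tailscale image (check them against the tag you pull), and state storage is omitted for brevity:

```
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tailscale-subnet-router
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tailscale-subnet-router
  template:
    metadata:
      labels:
        app: tailscale-subnet-router
    spec:
      # Turn forwarding on in the pod's network namespace before tailscaled starts.
      initContainers:
        - name: enable-forwarding
          image: busybox:1.36
          securityContext:
            privileged: true
          command: ["sh", "-c", "sysctl -w net.ipv4.ip_forward=1 net.ipv6.conf.all.forwarding=1"]
      containers:
        - name: tailscale
          image: tailscale/tailscale:latest    # pin a real tag in practice
          env:
            - name: TS_AUTHKEY                 # reusable auth key kept in a Secret
              valueFrom:
                secretKeyRef:
                  name: tailscale-auth
                  key: authkey
            - name: TS_ROUTES                  # subnets to advertise (pod CIDR here)
              value: "10.244.0.0/16"
            - name: TS_USERSPACE               # kernel networking, hence the tun device below
              value: "false"
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
          volumeMounts:
            - name: dev-net-tun
              mountPath: /dev/net/tun
      volumes:
        - name: dev-net-tun
          hostPath:
            path: /dev/net/tun
EOF
```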

Brad

I would like the recipe!

I'm using the updated mvisonneau Tailscale Helm chart but get the same result as above.
I have a Redis server running inside an EKS cluster deployed with Terraform.
The Terraform config uses the mvisonneau tailscale-relay chart. It deploys into the cluster fine, uses a reusable Tailscale auth key, and advertises the route 10.100.0.0/16.
Currently the pods are in a public VPC for simplicity, and net.ipv4.ip_forward=1 is set on both the pod and the node itself. Is net.ipv6.conf.all.forwarding a requirement if we're only using IPv4?

tailscale ping appears to be working as well.
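
For completeness, this is how I'm testing the path end to end from a client at home once the 10.100.0.0/16 route is approved (the relay hostname and Redis address are placeholders; 6379 is the default Redis port):

```
# Reachability of the relay itself over the tailnet:
tailscale ping tailscale-relay

# TCP reachability of redis through the advertised subnet route:
nc -zv 10.100.12.34 6379
```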

Hello all, I would like to do the same. Could you please share your Helm chart?
Thanks

@dvpn were you able to solve the issue? I would like to access the apiserver over the relay, so that I can close down public apiserver access. I can ping the relay's VPN IP from the client but cannot reach any network behind it.
Did you need to set the forwarding flags manually? As far as I can see, there are no such settings in /etc/sysctl.conf in the pod.
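
One thing I still plan to rule out on my side: on Linux clients, advertised subnet routes are ignored unless the client opted in with --accept-routes, and the route also has to be approved in the admin console.

```
# On the Linux client, opt in to subnet routes advertised by the relay:
sudo tailscale up --accept-routes

# Then check what the client currently sees:
tailscale status
```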
thanks