Docker host ports not available w/ exit node enabled

Hello, I currently have a server whose WAN traffic should be routed over another Tailscale node. I have this feature working: my WAN IP is different before/after enabling it, every time.

This host also has some Docker containers that listen on TCP ports; after I set the exit node I can no longer access them over Tailscale. Everything goes back to normal after running --accept-routes again with empty parameters. Also, non-container services are not disrupted.

Tailscale (native, not a container) version: v1.6.0
Operating system & version: Ubuntu 20 and openSUSE Tumbleweed.

Hello!

First I’d like to verify that I understand your situation correctly. You have a Linux server that is hosting containers. These containers have their own IP addresses in a private IP address space and you can’t reach them anymore when using some other device as an exit node.

When using the exit node feature, all destinations (including local networks) are sent to the exit node. This is a safety measure that we implemented to prevent accidentally accessing local networks when using an exit node. Depending on why you enabled the exit node feature, accessing the local network may be beneficial (e.g. at home) or may be unwanted (e.g. on a wifi network you don’t trust), so we chose the more conservative option. It’s possible that this will become configurable in future.

As another safety measure, we block access to private networks (RFC1918 ranges including 192.168.0.0/16, and a few others) on the exit node unless those are also advertised as subnet routes.
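
For illustration, on the exit node itself those routes are advertised with tailscale up; the 192.168.88.0/24 range below is only an example, substitute the LAN you actually want reachable, and the route still has to be approved in the admin console before clients will use it:

sudo tailscale up --advertise-exit-node --advertise-routes=192.168.88.0/24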

For now, as a workaround, on a Linux device you can set a packet mark in iptables so that traffic to the IP range you would like to access bypasses the Tailscale routing table; I think that should solve your problem. Assuming your containers are on 10.0.0.0/24, the command would be:

iptables -t mangle -I OUTPUT 1 -d 10.0.0.0/24 -j MARK --set-mark 0x80000

Generally speaking, you can leave that iptables rule in effect at all times, regardless of whether you are using an exit node, or even whether Tailscale is running (it only has side effects if other firewall or routing rules check for that mark, which is usually not the case).
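
To see why the mark helps: Tailscale installs policy routing rules that send packets carrying fwmark 0x80000 to the normal main/default routing tables, while unmarked packets hit a “lookup 52” rule for Tailscale’s own table. The mangle rule above simply tags traffic to the container range so it takes the ordinary path. You can inspect both with:

ip rule show
ip route show table 52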

Hello, thanks for your answer, you are spot on with my issue; however, those rules don’t fix it. I replaced the 10.0.0.0/24 network in the command with the one I saw in “docker inspect” (172.18.0.0/24).

This new rule is the only one in the mangle table; the rest is empty, with a default ACCEPT policy. I’ll restart the server just to be sure.

Can you include the output of ip rule show?

I suspect this might be caused by reverse path filtering. Assuming the network device is docker0, check this:
sysctl net.ipv4.conf.docker0.rp_filter

If it is 1, try setting it to 2 (sysctl -w net.ipv4.conf.docker0.rp_filter=2 or echo 2 >/proc/sys/net/ipv4/conf/docker0/rp_filter). 1 is strict filtering (a packet must arrive on the same interface that would be used to route back to its source), whereas 2 is loose filtering (the routing table needs some route back to the source, but it doesn’t matter if that goes via a different interface).
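
If that turns out to be the fix and you want it to survive reboots, you can persist it via a sysctl config file (the file name here is just an example):

echo 'net.ipv4.conf.docker0.rp_filter = 2' | sudo tee /etc/sysctl.d/99-docker-rpfilter.conf
sudo sysctl --system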

As a slightly more secure option (more resistant to IP spoofing from inside your containers, which is not usually a huge concern), you may want to consider something like iptables -t mangle -I PREROUTING 1 -i docker0 -s 172.18.0.0/24 -j MARK --set-mark 0x80000, which should make incoming packets see the same routing decision as outgoing packets, but I don’t have any Docker containers installed on my computer to confirm it works.

A third option would be to create a new routing table (containing just the Docker network) that ip rule consults before the “lookup 52” entry Tailscale installs; see the sketch below. That would avoid the need for iptables entirely, but it is a bit more work to set up.
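
A minimal sketch of that, assuming the Docker bridge is docker0 on 172.18.0.0/24 (table number 100 and priority 5000 are arbitrary picks, chosen so the rule is evaluated before Tailscale’s entries); these commands don’t persist across reboots on their own:

ip route add 172.18.0.0/24 dev docker0 table 100
ip rule add to 172.18.0.0/24 lookup 100 priority 5000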

This didn’t work either. I want to clarify that those containers have their ports published on the host.
I’m on 192.168.168.77.12/24
the Docker host is on another site at 192.168.168.88.12/24
the containers on this host are on 172.18.0.0/24

I’ve done a little packet capture and saw that packets are not coming back from Docker:

19:19:49.634214 IP 100.80.193.26.50538 > 100.110.190.45.3000: Flags [S], seq 2997723526, win 64480, options [mss 1240,sackOK,TS val 3515202453 ecr 0,nop,wscale 7], length 0
19:19:49.634270 IP 100.110.190.45.50538 > 172.18.0.5.3000: Flags [S], seq 2997723526, win 64480, options [mss 1240,sackOK,TS val 3515202453 ecr 0,nop,wscale 7], length 0

19:19:49.885377 IP 100.80.193.26.50540 > 100.110.190.45.3000: Flags [S],

UP

I have exactly the same problem and am wondering if anyone has found a solution since then.
I couldn’t find a way to make Tailscale not redirect Docker networks to the exit node.

Is this an option that could be added to the tailscale CLI, or an iptables rule we should add manually?

I believe I have the same problem:

I have a Docker container (a PostgreSQL DB) listening on 0.0.0.0:5432. I can connect to it locally, but when I start the exit node it “captures” the port. I’ve confirmed that with netstat.

Tailscale on Linux, exit node is Linux as well.
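
As an aside, a quick way to double-check which process is listening on the port (ss is the modern netstat equivalent; 5432 is just this example’s port):

sudo ss -tlnp | grep 5432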

FYI, it looks like if you only need to access Docker locally you can use: tailscale up --exit-node=EXIT-NODE --exit-node-allow-lan-access