IPv6 does not work with exit node

tl;dr: IPv4 traffic goes through the exit node, but IPv6 does not.

I have Tailscale running on an Oracle OCI compute instance, configured as an exit node. Let's call it the remote machine:

sudo tailscale up --advertise-exit-node

The remote machine has both a public IPv4 and a public IPv6 address.
My laptop (local machine), on the other hand, is connected to my home Wi-Fi and has IPv4 only, and even that is behind the ISP's NAT.
I have Tailscale running on my local machine, configured to use the remote machine as its exit node:

sudo tailscale up --exit-node=remote-machine-tailscale-ip
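
(For anyone reproducing this setup, a quick sanity check is to run

tailscale status

on the client; depending on the client version, it marks the peer currently being used as the exit node.)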

Now:
On the remote machine, curl ipv4.icanhazip.com and curl ipv6.icanhazip.com show its public IPv4 and IPv6 addresses respectively, as expected.
On the local machine, curl ipv4.icanhazip.com shows the remote machine's IPv4, as expected. But curl ipv6.icanhazip.com times out.

I expect IPv6 to work, because the IPv6 support page says exit nodes fully support IPv6: "You can exit through an IPv6-supporting exit node even if your client device's ISP doesn't have IPv6."
Also, I made sure to enable IP forwarding.

I tried using my Android phone as the client instead (it has both IPv4 and IPv6 connectivity), but the result is the same: IPv4 gets routed through the remote machine, IPv6 does not work at all.

System information:
Remote machine OS: Ubuntu 20.04.2 LTS
Remote machine tailscale version: 1.12.1
Local machine OS: Arch Linux
Local machine tailscale version: 1.12.1-1
Local machine hardware: MacBook Air 2017
Android version: 10
Please feel free to ask for the output of any command if more info would help.

The most common reason why IPv4 might work but not IPv6 is if net.ipv4.ip_forward=1 but net.ipv6.conf.all.forwarding=0 on the exit node.
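
If that's the case, enabling both and making them persistent is a one-time fix; these are the standard lines from the Tailscale exit node docs (the file name is just a convention):

echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf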

Another would be if IPv6 were completely turned off (i.e. disabled in kernel config), not just lacking an IPv6 address on the LAN.

Otherwise if you send the Tailscale IP address of the exit node to support@tailscale.com, we might be able to tell more about what is going wrong.

Hi. Thanks for the reply.

I verified the values of net.ipv4.ip_forward and net.ipv6.conf.all.forwarding on the remote machine (exit node) again; both are set to 1.

ubuntu@vm1:~$ sysctl -n net.ipv4.ip_forward net.ipv6.conf.all.forwarding
1
1

I can use IPv6 on my local machine when connected to my Android phone's hotspot, so that should rule out IPv6 being disabled entirely in the kernel config. I ran this command on the local machine, just to be extra sure:

➜  ~ sudo sysctl -a 2>/dev/null | grep disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.tailscale0.disable_ipv6 = 0
net.ipv6.conf.wlp3s0.disable_ipv6 = 0

I will send the Tailscale IP address of the exit node as suggested.

Do you have access control lists configured? If so, they need to allow the appropriate outbound IPv6 ranges, as documented in this GitHub issue: ACLs have no way to represent "allow user to use this exit node" without completely opening all traffic · Issue #1742 · tailscale/tailscale · GitHub

I did not change the default Access Controls, which are:

// Example/default ACLs for unrestricted connections.
{
  // Declare static groups of users beyond those in the identity service.
  "Groups": {
    "group:example": [ "user1@example.com", "user2@example.com" ],
  },
  // Declare convenient hostname aliases to use in place of IP addresses.
  "Hosts": {
    "example-host-1": "100.100.100.100",
  },
  // Access control lists.
  "ACLs": [
    // Match absolutely everything. Comment out this section if you want
    // to define specific ACL restrictions.
    { "Action": "accept", "Users": ["*"], "Ports": ["*:*"] },
  ]
}
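
Aside for later readers: newer versions of the ACL syntax have autogroup:internet for exactly this case, so a locked-down policy can still allow exit node use without opening all traffic. A rule along these lines (using the newer src/dst field names) should work, assuming your tailnet's ACL version supports it:

{ "action": "accept", "src": ["*"], "dst": ["autogroup:internet:*"] }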

I think I have got a hint…
On my local machine,

➜  ~ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 14:c2:13:11:93:ca brd ff:ff:ff:ff:ff:ff
    inet 10.20.30.123/24 brd 10.20.30.255 scope global noprefixroute wlp3s0
       valid_lft forever preferred_lft forever
    inet6 fe80::d33:e6e0:94b8:d2c5/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet 100.74.95.101/32 scope global tailscale0
       valid_lft forever preferred_lft forever
    inet6 fd7a:115c:a1e0:ab12:4843:cd96:624a:5f65/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::3b0f:94cf:1649:c974/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

On my remote machine,

ubuntu@vm1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 02:00:17:00:78:98 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.0.112/24 brd 10.0.0.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 2603:c024:8000:bfee:e:e:e:e/128 scope global dynamic noprefixroute 
       valid_lft 7330sec preferred_lft 7030sec
    inet6 fe80::17ff:fe00:7898/64 scope link 
       valid_lft forever preferred_lft forever
3: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet 100.98.53.104/32 scope global tailscale0
       valid_lft forever preferred_lft forever
    inet6 fd7a:115c:a1e0:ab12:4843:cd96:6262:3568/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::7b1:1a29:96aa:96b/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

When I ping Google DNS's IPv4 address from my client:

ping -c 1 8.8.8.8

Wireshark on the local machine shows ICMP traffic going through the Tailscale interface.
tcpdump on the remote machine's tailscale0 interface shows:

ubuntu@vm1:~$ sudo tcpdump -n -i tailscale0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tailscale0, link-type RAW (Raw IP), capture size 262144 bytes
06:03:42.424900 IP 100.74.95.101 > 8.8.8.8: ICMP echo request, id 28, seq 1, length 64
06:03:42.439635 IP 8.8.8.8 > 100.74.95.101: ICMP echo reply, id 28, seq 1, length 64

tcpdump on the remote machine's ens3 interface shows:

ubuntu@vm1:~$ sudo tcpdump -n -i ens3 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
06:03:42.424945 IP 10.0.0.112 > 8.8.8.8: ICMP echo request, id 28, seq 1, length 64
06:03:42.439619 IP 8.8.8.8 > 10.0.0.112: ICMP echo reply, id 28, seq 1, length 64

Note that on the tailscale0 interface the source address of the ping request was 100.74.95.101 (the local machine's Tailscale IPv4), while on the ens3 interface it had been rewritten to 10.0.0.112 (the remote machine's ens3 IPv4). In other words, the exit node source-NATs the IPv4 traffic.
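
That rewrite comes from a masquerade rule tailscaled installs in the IPv4 nat table; it can be listed with, for example:

sudo iptables -t nat -S

Look for a MASQUERADE rule reached from POSTROUTING.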

In contrast, when I ping Google DNS's IPv6 address from my client:

ping -6 -c 1 2001:4860:4860::8888

Wireshark on the local machine shows ICMPv6 traffic going through the Tailscale interface.
tcpdump on the remote machine's tailscale0 interface shows:

ubuntu@vm1:~$ sudo tcpdump -n -i tailscale0 icmp6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tailscale0, link-type RAW (Raw IP), capture size 262144 bytes
06:12:34.913091 IP6 fd7a:115c:a1e0:ab12:4843:cd96:624a:5f65 > 2001:4860:4860::8888: ICMP6, echo request, seq 1, length 64

tcpdump on the remote machine's ens3 interface shows:

ubuntu@vm1:~$ sudo tcpdump -n -i ens3 icmp6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
06:12:34.913119 IP6 fd7a:115c:a1e0:ab12:4843:cd96:624a:5f65 > 2001:4860:4860::8888: ICMP6, echo request, seq 1, length 64
06:12:39.976476 IP6 fe80::17ff:fe00:7898 > fe80::200:17ff:fee1:645e: ICMP6, neighbor solicitation, who has fe80::200:17ff:fee1:645e, length 32
06:12:39.976584 IP6 fe80::200:17ff:fee1:645e > fe80::17ff:fe00:7898: ICMP6, neighbor advertisement, tgt is fe80::200:17ff:fee1:645e, length 32

Note that on the tailscale0 interface the source address of the ping request was fd7a:115c:a1e0:ab12:4843:cd96:624a:5f65 (the local machine's Tailscale IPv6), and on the ens3 interface it was still exactly the same address; no rewriting happened.

So I wonder if this is the reason. On the remote machine, in the IPv4 case:
when the ping request arrives from the local machine on the tailscale0 interface, it gets forwarded out the ens3 interface with the source address changed to ens3's IP;
when the ping response comes back, it gets forwarded to the tailscale0 interface with the destination address set back to the local machine's IP.

But in the IPv6 case this is not happening: the request leaves the ens3 interface with the source address still set to the local machine's Tailscale IPv6, a ULA that is not routable on the public internet, so the ping response never reaches the remote machine.
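
One quick way to confirm this hypothesis is to compare the two NAT tables on the remote machine; if the IPv4 table has a MASQUERADE rule and the IPv6 one does not, that would explain the asymmetry:

sudo iptables -t nat -S
sudo ip6tables -t nat -S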

Apologies for the verbosity of my description; I want to avoid any possible confusion.
Can anyone confirm whether this could be what is going wrong?

If I reverse the setup, configuring my local machine (which has public IPv6 via the mobile hotspot) to advertise as an exit node and the remote machine to use it, both IPv4 and IPv6 work as intended.

So I suppose the problem is more about compatibility between Tailscale exit nodes and Oracle OCI instances than a problem in Tailscale itself.

I'm interested in the OCI Always Free instances and have OCI + NixOS + Tailscale on my todo list (and now, based on this thread, IPv6 too :) ).

One approach is to look into IPv6 NAT (NAT66), i.e. masquerading from the ULA addresses (such as fd7a:115c:a1e0:ab12:4843:cd96:624a:5f65) to ens3's IPv6 address. That would require some interesting configuration of Linux ip6tables, etc.

This might be something that Tailscale could automate for their ULA on an IPv6 exit node, either as an option or even by default.

The more IPv6-native way would be to somehow get a /64 allocation from the OCI VCN's /56 associated with the instance itself (as opposed to a subnet), which could then be configured across the Tailscale mesh in addition to the ULA out of the box. The VCN routing would need to see the instance as a router; it might respond to IPv6 router advertisements or be explicitly configurable. AWS recently added such a capability, letting you configure an instance as a source/destination for an IPv6 subnet.

You’ve got me thinking…


Thank you Marc, that was the right approach. I googled a bit about masquerading and ip6tables syntax, and here is how it went.
Comparing the nat tables of ip6tables on the client and the server, I found that a rule for IPv6 masquerading was missing on the server side.
That explained why the source address of IPv6 traffic remained the same during forwarding. When I manually added the rule, IPv6 started working.
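
For anyone hitting the same issue, the missing piece is an IPv6 masquerade rule on the egress interface, roughly like this (fd7a:115c:a1e0::/48 is Tailscale's ULA range and ens3 is this instance's egress interface; adjust both to your setup):

sudo ip6tables -t nat -A POSTROUTING -s fd7a:115c:a1e0::/48 -o ens3 -j MASQUERADE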

As to why the rule was missing: from lines 1 and 2, it seems to be added by Tailscale only when the system has support for the nat table in its IPv6 netfilter stack.
Looking for this line in journalctl, v6nat was always false on my server (and it was sometimes true on my client), though based on this definition I expect it to be true on my server, as:

ubuntu@vm1:~$ sudo cat /proc/net/ip6_tables_names
security
raw
mangle
nat
filter
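
For anyone who wants to check this on their own machine, that log line can be found with something like:

journalctl -u tailscaled | grep v6nat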

I will report this on GitHub issues when I get the time. Thank you to everyone who took the time to respond.


I am running into this exact same problem: Oracle free instance, exit node working with IPv4 but not with IPv6, etc.

I added four DNS nameservers, two IPv4 and two IPv6. Interestingly, when I put the IPv6 ones first, IPv6 works but IPv4 doesn't. Otherwise, it's the other way around.

Were you able to fix the problem? Any tips on how to fix it?