Magicsock: Rebind ignoring IPv6 bind failure: failed to bind any ports (tried [41641 0])

Environment: I have two self-hosted DERP relays, each node also acting as an exit node. Tailscale's default DERP servers are disabled. The client machine running tailscaled has no IPv6.

When I run tailscale up with --force-reauth --authkey=<key>, I find that I cannot also apply an exit node with --exit-node=<ip> in the same invocation. When I try, the network connection never reaches a working state on this machine, and the tailscaled logs repeatedly report IPv6 failures. While in this broken state, the Tailscale health check usually mentions an inability to connect to the home DERP relay. Even when I split this into two commands, --force-reauth first (just to ensure I'm using the latest key) and then --exit-node=<ip>, the connection comes up inconsistently, and sometimes switching to the other exit node IP resolves it. When things don't work, it generally looks like this, with repeated errors about IPv6:
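For reference, the two sequences I'm describing look like this (the key and exit node IP are placeholders, not real values):

```shell
# Combined form: reauth and apply the exit node in one step.
# This is the case that never reaches a working state for me.
tailscale up --force-reauth --authkey=<key> --exit-node=<ip>

# Split form: reauth first, then apply the exit node.
# This works only inconsistently.
tailscale up --force-reauth --authkey=<key>
tailscale up --exit-node=<ip>
```
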

Jul 10 22:55:18 my.host.name tailscaled[1141730]: EditPrefs: MaskedPrefs{ExitNodeID="" ExitNodeIP=100.96.19.63 WantRunning=true}
Jul 10 22:55:18 my.host.name tailscaled[1141730]: wgengine: Reconfig: configuring userspace WireGuard config (with 2/13 peers)
Jul 10 22:55:18 my.host.name tailscaled[1141730]: wgengine: Reconfig: configuring router
Jul 10 22:55:18 my.host.name tailscaled[1141730]: monitor: RTM_NEWROUTE: src=, dst=127.0.0.0/8, gw=, outif=0, table=52
Jul 10 22:55:18 my.host.name tailscaled[1141730]: monitor: RTM_NEWROUTE: src=, dst=192.168.176.0/24, gw=, outif=90, table=52
Jul 10 22:55:18 my.host.name tailscaled[1141730]: monitor: RTM_NEWROUTE: src=, dst=10.105.0.0/24, gw=, outif=90, table=52
Jul 10 22:55:18 my.host.name tailscaled[1141730]: monitor: RTM_NEWROUTE: src=, dst=, gw=, outif=90, table=52
Jul 10 22:55:18 my.host.name tailscaled[1141730]: monitor: RTM_NEWROUTE: src=, dst=172.x.x.x/28, gw=, outif=90, table=52
Jul 10 22:55:18 my.host.name tailscaled[1141730]: [RATELIMIT] format("monitor: %s: src=%v, dst=%v, gw=%v, outif=%v, table=%v")
Jul 10 22:55:18 my.host.name tailscaled[1141730]: wgengine: Reconfig: configuring DNS
Jul 10 22:55:18 my.host.name tailscaled[1141730]: dns: Set: {DefaultResolvers:[http://100.96.19.63:65181/dns-query] Routes:{} SearchDomains:[] Hosts:14}
Jul 10 22:55:18 my.host.name tailscaled[1141730]: dns: Resolvercfg: {Routes:{.:[http://100.96.19.63:65181/dns-query]} Hosts:14 LocalDomains:[]}
Jul 10 22:55:18 my.host.name tailscaled[1141730]: dns: OScfg: {Nameservers:[100.100.100.100] SearchDomains:[] MatchDomains:[]}
Jul 10 22:55:18 my.host.name tailscaled[1141730]: magicsock: adding connection to derp-900 for [JsxFT]
Jul 10 22:55:18 my.host.name tailscaled[1141730]: magicsock: 2 active derp conns: derp-900=cr0s,wr0s derp-901=cr7m0s,wr7m0s
Jul 10 22:55:18 my.host.name tailscaled[1141730]: derphttp.Client.Recv: connecting to derp-900 (my-derp)
Jul 10 22:55:18 my.host.name tailscaled[1141730]: restarted resolved after 239ms
Jul 10 22:55:20 my.host.name tailscaled[1141730]: magicsock: [0xc00002a000] derp.Recv(derp-900): derphttp.Client.Recv connect to region 900 (my-derp): context deadline exceeded
Jul 10 22:55:20 my.host.name tailscaled[1141730]: derphttp.Client.Send: connecting to derp-900 (my-derp)
Jul 10 22:55:20 my.host.name tailscaled[1141730]: magicsock: last netcheck reported send error. Rebinding.
Jul 10 22:55:20 my.host.name tailscaled[1141730]: magicsock: unable to bind udp6 port 41641: listen udp6 :41641: socket: address family not supported by protocol
Jul 10 22:55:20 my.host.name tailscaled[1141730]: magicsock: unable to bind udp6 port 0: listen udp6 :0: socket: address family not supported by protocol
Jul 10 22:55:20 my.host.name tailscaled[1141730]: magicsock: Rebind ignoring IPv6 bind failure: failed to bind any ports (tried [41641 0])
Jul 10 22:55:20 my.host.name tailscaled[1141730]: Rebind; defIf="eth_1g0", ips=[172.x.x.x/28]
Jul 10 22:55:20 my.host.name tailscaled[1141730]: post-rebind ping of DERP region 901 okay

The strange thing is that once I do manage to get connected to one exit node, I can quickly switch back and forth between the two as I would expect, without so much as a blip in a ping, until I shut down the VPN service and attempt to reconnect and reauthenticate.

I’m currently juggling two VPN services in case one goes down, so stopping tailscaled is a normal part of that process.