Hmm, if tailscale up doesn’t ask for reauthentication then key expiry isn’t it.
In your logs, the “network unreachable” problems dialing logtail and DERP are a bad sign. Is it possible your firewall has suddenly started blocking outgoing https? We need this in order to negotiate connections.
There’s also quite a bit of IPv6 noise in there. I wonder if tailscale has accidentally latched itself onto using IPv6 for everything, and then only the IPv6 part of your link has gone down.
> Is it possible your firewall has suddenly started blocking outgoing https? We need this in order to negotiate connections.
Zero chance of the firewall blocking outgoing HTTPS. It is possible, though, that I had pushed a NixOS change that altered a different part of my config (it removed an ethernet bridge that shouldn't have been related at all to the device tailscale would've been using to reach the Internet). I do sort of feel like maybe that dislodged something, somewhere, that caused IPv6 to fail for tailscale.
Or maybe tailscale had latched on to using the bridge, and then I tore it down? Not sure that even makes sense, though. (I think I'd even restarted tailscaled.)
I really wish I'd checked whether general IPv6 was working for other programs. Now everything seems fine, of course.
I had a similar problem: everything worked fine on a computer in the lab (running Ubuntu Bionic), and then things stopped working soon after a power outage. The Tailscale admin console showed it as connected, but I could not ssh in. tailscale status showed a "-", meaning it was not communicating with any other devices successfully. It went on like this for days; I could not figure out why it was connected yet not working. Rebooting had no effect either.
Disabling IPv6 on the LAN network interface and rebooting solved the problem.
Device was connected to an eero router in case any internal eero shenanigans unknowingly contributed to this.
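For anyone wanting to try the same workaround without touching the router: a sysctl drop-in is one way to disable IPv6 on a single interface. This is just a sketch; "eth0" here is a placeholder for whatever your actual LAN interface is called.

```
# /etc/sysctl.d/99-disable-ipv6-lan.conf
# "eth0" is a placeholder -- substitute your real LAN interface name.
net.ipv6.conf.eth0.disable_ipv6 = 1
```

Apply it with `sudo sysctl --system` (a reboot also picks it up).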
Also been having issues with Tailscale disconnecting at random times, after which I can't connect to my services. Restarting tailscale or rebooting the server usually brings it back up, but it's still been frustrating.
What’s the best place to check Tailscale logs on Debian and Ubuntu?
I’ve had this happen to me as well on various Ubuntu systems that I use as exit nodes.
I like to allow Ubuntu to patch automatically and even reboot automatically (i.e. "[reboot required] unattended-upgrades"). The problem I ran into was that even after the patched host rebooted, I would have to use console or out-of-band access to restart sshd.
The reason sshd would fail to start is that the Tailscale address wasn't ready for sshd to bind to. I'd have to manually restart sshd, and then access was restored as expected.
My workaround has been to allow sshd to bind to a nonlocal address, in case tailscale isn't established before sshd tries to bind to the Tailscale address.
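For reference, the nonlocal-bind workaround is a kernel sysctl. A minimal sketch, assuming sshd has been configured (via ListenAddress in sshd_config) to bind to the node's Tailscale IP:

```
# /etc/sysctl.d/99-nonlocal-bind.conf
# Allow services such as sshd to bind to an address that doesn't
# exist yet on any interface -- e.g. the tailscale0 address during
# early boot, before tailscaled has brought the interface up.
net.ipv4.ip_nonlocal_bind = 1
net.ipv6.ip_nonlocal_bind = 1
```

Load it with `sudo sysctl --system`; sshd will then start successfully at boot and simply begin accepting connections once tailscaled assigns the address.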
I have experienced the same, or at least a similar, issue where I am able to ping a node but unable to ssh to the same node, receiving a “… port 22: No route to host” from the ssh client. The destination is a Cloud VPS running Fedora 35 Server. I have not experienced this issue on any of my physical destination nodes.
I eventually resolved the issue by configuring a firewalld zone that accepts port 22/tcp on the tailscale0 interface.
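In case it helps anyone hitting the same "No route to host": the fix above can be done with firewall-cmd roughly like this (a sketch -- zone name "tailscale" is my own choice, and your interface should already be tailscale0):

```
# Create a zone for the Tailscale interface and allow ssh on it.
sudo firewall-cmd --permanent --new-zone=tailscale
sudo firewall-cmd --permanent --zone=tailscale --add-interface=tailscale0
sudo firewall-cmd --permanent --zone=tailscale --add-port=22/tcp
sudo firewall-cmd --reload
```

After the reload, `firewall-cmd --zone=tailscale --list-all` should show the interface and the open port.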
Sorry for necro-ing the thread, but I felt it would be useful to post and underline that jay’s idea of the ip_nonlocal_bind setting in sysctl is a great solution to this problem.
I too had the same problem of being locked out of the server after a reboot, since sshd fails to start because the Tailscale interface takes a few seconds to bring up its IP address. The same probably applies to any service that tries to bind to the Tailscale IP address too early.
As an aside, the sshd systemd configuration does specify retries, but it does not actually restart: in the specific failure case of being unable to bind to an IP, sshd exits with code 255, which makes systemd not retry starting the service. More info in the Debian bug report.
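For completeness, if someone does want to go the systemd route instead of the sysctl: a drop-in can force systemd to treat exit code 255 as retryable. This is a sketch only (the unit may be sshd.service rather than ssh.service on your distro), and as noted below I think the sysctl approach is cleaner.

```
# /etc/systemd/system/ssh.service.d/retry-on-bind-failure.conf
[Service]
Restart=on-failure
# sshd exits with 255 when it cannot bind to its ListenAddress;
# tell systemd to restart it anyway instead of giving up.
RestartForceExitStatus=255
RestartSec=5
```

Run `sudo systemctl daemon-reload` afterwards for the drop-in to take effect.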
P.S. This issue is also discussed in another thread on these forums, but that one did not reach a solution. The support bot's suggestion of modifying the systemd configuration of every service on the machine is very brittle and a pain to do in practice.