Connectivity Issues...how to debug

I’ve got a Mac and an Ubuntu machine. The Ubuntu machine is on my home network; the Mac is wherever I am (not at home).

I can “ping”, “tailscale ping”, and “tailscale ping --tsmp” in any combination, and they all work. I can also access the Kubernetes server running on my Ubuntu machine from my Mac. But there are two issues.
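For concreteness, the successful checks look like this, with 100.x.x.x standing in for the other machine’s Tailscale IP (flag names are from recent versions of the tailscale CLI, so adjust for yours):

    ping 100.x.x.x                    # plain ICMP over the tailnet
    tailscale ping 100.x.x.x         # disco ping; reports whether the path is direct or via DERP
    tailscale ping --tsmp 100.x.x.x  # TSMP ping, exercising the in-tunnel path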

First, I cannot ssh into the Mac. I tried setting the MTU on the network interface to 1200; it didn’t help. The connection gets made, but the client hangs at: “debug1: expecting SSH2_MSG_KEX_ECDH_REPLY”. I tried various things, like fiddling with algorithms. Nothing made a difference.
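A hang at SSH2_MSG_KEX_ECDH_REPLY is a classic sign of large packets being silently dropped, since the server’s key-exchange reply is the first packet in the handshake big enough to hit the path MTU. One way to probe for that is don’t-fragment pings of increasing size (the 1280 figure below is Tailscale’s default tunnel MTU; the ICMP payload sizes add 28 bytes of headers on the wire):

    # From the Mac (BSD ping: -D sets the don't-fragment bit, -s sets payload size)
    ping -D -s 1200 100.x.x.x
    ping -D -s 1252 100.x.x.x    # 1252 + 28 = 1280 bytes on the wire

    # From the Ubuntu machine (Linux ping spells it differently)
    ping -M do -s 1252 100.x.x.x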

But there is another issue. If I run an iperf server on the Ubuntu machine (e.g., iperf -s -p 25700) and then test it from my Mac (by running iperf -c 100.x.x.x -p 25700), I get very strange results. The output indicates they connect, but the measured bandwidth is only 52.2 Kbps. Yet if I reverse which machine is the server and which is the client, it works fine and I get something on the order of 10 Mbps.
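In case it’s useful to anyone else: iperf3 (unlike the classic iperf I used above) can test both directions from a single client with -R, and a UDP run separates raw packet loss from TCP stalling:

    # On the Ubuntu machine
    iperf3 -s -p 25700

    # On the Mac: upload (Mac -> Ubuntu), then the reverse direction
    iperf3 -c 100.x.x.x -p 25700
    iperf3 -c 100.x.x.x -p 25700 -R

    # UDP at a fixed rate, reporting loss and jitter directly
    iperf3 -c 100.x.x.x -p 25700 -u -b 10M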

Clearly something is wrong on the networking side. But it seems more “hobbled” than nonfunctional. I was hoping MTU fragmentation was the source of the issue, but setting the MTU on the Ubuntu machine to a lower value didn’t help.
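For reference, lowering the MTU on the tunnel interfaces looks something like this; tailscale0 is the usual interface name on Ubuntu, while on macOS the tunnel is one of the utunN devices, so the utun3 below is a guess you’d confirm with ifconfig:

    # Ubuntu
    sudo ip link set dev tailscale0 mtu 1200

    # macOS (check ifconfig for the right utun device)
    sudo ifconfig utun3 mtu 1200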

Any other suggestions for things I can try to debug this?

Thanks.

Can you ssh from the Ubuntu machine into the Mac when you’re on the same network, not through tailscale?

I don’t know. I’ve never tried. I don’t even have sshd running on my Mac.

But I can ssh into the Ubuntu machine from the Mac when I’m on the same network (which, to be clear, is what I’m trying to do with tailscale as well, i.e., Mac → Ubuntu). Furthermore, I can ssh into the Ubuntu machine from my Mac using a cloudflared tunnel. So the issue seems particular to tailscale’s transport mechanism.
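Since tailscale’s transport is the suspect, one thing I can check is whether traffic is actually flowing over a direct connection or being bounced through a DERP relay; a relayed path would be consistent with working-but-slow behavior:

    tailscale status      # shows, per peer, whether the connection is direct or via a relay
    tailscale netcheck    # NAT details, nearest DERP region, and whether UDP works at all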

Strangely, it all seems to be working now (iperf and ssh). Not sure what changed. I’m still curious what other debugging techniques there are out there. But there is no urgency around this particular issue now.
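For the record, a few other general-purpose checks I’ve noted in case it recurs (subcommand names per recent Tailscale CLI versions, so they may differ on yours):

    tailscale ping --until-direct 100.x.x.x   # keep probing until a direct path is established
    tailscale bugreport                        # prints a log marker to cite when filing an issue
    journalctl -u tailscaled -e                # the daemon's logs on the Ubuntu side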