Tailscale with MagicDNS works with certain ports but times out on others

I have a NixOS machine running on AWS configured with Tailscale. Its name is doodoo, and I have two servers running on it. However, from my macOS machine (macOS 10.15.4, Tailscale 1.4.4), I can only access one of them:

❯ curl --max-time 5 doodoo:9695
curl: (28) Connection timed out after 5005 milliseconds

❯ curl --max-time 5 doodoo:8080
{"path":"$","error":"resource does not exist","code":"not-found"}% ### successful response

I’ve confirmed that both servers are up and running on doodoo, and can be accessed with localhost:XXXX when ssh’d into the machine.

What could be going on here? Are only certain port ranges supported?

I don’t see any host named doodoo. What’s its Tailscale IP?

What could be going on here?

Maybe your server is listening on a specific IP (or localhost) rather than listening on all IPs?

Are only certain port ranges supported?

All IP packets flow. All protocols, all ports.

MagicDNS is a DNS server, so it just maps the name (doodoo) to an IP address. It doesn’t care about the port number (or even see the port number).
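
As a concrete illustration (a sketch, assuming MagicDNS is enabled on the tailnet; 100.100.100.100 is Tailscale’s own resolver, and the address is the one posted below), a lookup returns nothing but an IP:

$ dig +short doodoo @100.100.100.100
100.118.228.49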

If you’re getting a connection timeout, there’s a good chance that either Tailscale ACLs are blocking the port or you have firewall rules (iptables, etc.) blocking it. It might also be that you just have a really slow web server that takes more than 5 seconds to respond. (You could check that by sending an invalid request, e.g. with the nc command.)
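
One way to tell those cases apart (a sketch using the Tailscale IP posted below; flags as in OpenBSD netcat) is to probe the TCP port directly, without speaking HTTP at all:

$ nc -vz -w 5 100.118.228.49 9695

If nc reports that the connection succeeded, the server is reachable and merely slow; if it also times out, the traffic is being dropped by ACLs, a firewall, or a localhost-only bind.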

Thanks for the swift responses, guys! The Tailscale IP is 100.118.228.49.

That’s interesting. I hadn’t considered that option. The server that isn’t working is out of my control (it’s the Hasura console), and it’s entirely possible that it’s set up to listen only on localhost, possibly as a security precaution.

I don’t think the server is quite that slow, but I’ll poke around further to see if I’m getting firewalled!
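
For what it’s worth, if the localhost-only bind turns out to be the culprit, the Hasura CLI appears to expose the bind address as a flag (worth double-checking against your version with hasura console --help):

$ hasura console --address 0.0.0.0 --no-browser

That would make the console reachable over Tailscale, but it also exposes it on every other interface, so the localhost default is the more conservative choice.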

Can you share the output of:

$ netstat -nap --inet | grep LISTEN

Here’s my output:

[skainswo@ip-172-31-5-175:~]$ netstat -nap --inet | grep LISTEN
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 127.0.0.1:9695          0.0.0.0:*               LISTEN      7555/hasura
tcp        0      0 127.0.0.1:33543         0.0.0.0:*               LISTEN      2729/node
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:9693          0.0.0.0:*               LISTEN      7555/hasura

The service on 8080 that is working isn’t shown for some reason, possibly because it’s running in Docker…
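
That would explain it: Docker usually publishes ports through iptables DNAT rules, with the real listener inside the container’s network namespace, so a published port won’t necessarily show up in the host’s netstat. Two ways to confirm on doodoo (a sketch for this setup):

$ docker ps --format '{{.Names}}\t{{.Ports}}'
$ sudo iptables -t nat -L DOCKER -n

A mapping like 0.0.0.0:8080->8080/tcp binds all interfaces, which is exactly why that one is reachable over Tailscale.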

Yup, the hasura process (on both its ports) and the node process are listening on 127.0.0.1.

We have an open wishlist bug to automatically do what you probably expected in this case, but it’s not quite clear what the options & defaults for that should be if we do it.
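
In the meantime, a workaround sketch (assuming SSH access as in the prompt above): forward the localhost-only port from the Mac,

$ ssh -N -L 9695:127.0.0.1:9695 skainswo@doodoo
$ curl --max-time 5 localhost:9695   # from a second terminal

or re-publish it on the Tailscale address from doodoo itself with something like socat:

$ socat TCP-LISTEN:9695,fork,bind=100.118.228.49 TCP:127.0.0.1:9695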

Ah, perfect! Thanks for the pointer to the GH issue. This was pretty surprising, but it’s not a blocker for me at the moment. I’m happy to hear that it’s not a bug!