I have configured a Tailscale exit node on an Azure VM. The VM sits in a VNet subnet that uses Azure's default internet access as its default route. I have the necessary NSG rules to allow UDP 41641 and 3478, my Tailscale clients make a "direct" connection, and everything works as expected.
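For reference, this is roughly how I opened those ports. This is a sketch using the Azure CLI; the resource group, NSG name, and priority are placeholders for my actual values:

```shell
# Allow inbound UDP 41641 (Tailscale/WireGuard) on the exit node's NSG.
# Resource names below are placeholders.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name exit-node-nsg \
  --name Allow-Tailscale-UDP \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Udp \
  --destination-port-ranges 41641
```

(UDP 3478 is outbound STUN traffic to Tailscale's DERP servers, so in my case it only needed an outbound allow rule.)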
However, I have several remote systems that must allow incoming access only from my Tailscale VPN devices, i.e. from the outbound IP of my exit node. Because my exit node uses Azure's default outbound access, VPN client outbound connections use one of Azure's default outbound IP addresses. I can't allow the default Azure IPs in my remote firewalls because they are shared by many other users and can change over time. I therefore created an Azure NAT gateway with a public IP address and attached it to the exit node's subnet. This works as expected: VPN outbound connections now use the NAT gateway's public IP, and I can allowlist that IP in the firewalls to permit inbound access to my remote systems.
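The NAT gateway setup was essentially the standard one. A sketch of the Azure CLI steps I followed, again with placeholder resource names:

```shell
# Create a static public IP for the NAT gateway (placeholder names throughout).
az network public-ip create \
  --resource-group my-rg \
  --name natgw-ip \
  --sku Standard \
  --allocation-method Static

# Create the NAT gateway and bind the public IP to it.
az network nat gateway create \
  --resource-group my-rg \
  --name exit-node-natgw \
  --public-ip-addresses natgw-ip

# Attach the NAT gateway to the exit node's subnet so all outbound
# traffic from that subnet uses the NAT gateway's public IP.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name exit-node-subnet \
  --nat-gateway exit-node-natgw
```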
However, I then noticed that my Tailscale clients no longer use a "direct" connection; instead they go through a DERP relay, which adds latency. If I revert to my old setup, I get a direct connection again. The Azure documentation does not say anything about the NAT gateway blocking any UDP traffic in particular.
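This is how I am checking whether a connection is direct or relayed, in case it helps anyone reproduce the issue (peer name is a placeholder):

```shell
# Show peers and whether each connection is "direct <ip:port>" or "relay <region>".
tailscale status

# Ping a specific peer over the tailnet; the output reports the path taken.
tailscale ping my-exit-node

# Probe the local NAT behavior. "MappingVariesByDestIP: true" indicates a
# hard (symmetric-style) NAT, which prevents UDP hole punching.
tailscale netcheck
```

On the VM behind the NAT gateway, `tailscale netcheck` is probably the most telling check, since it reports whether the NAT mapping varies by destination.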
One of my colleagues also reproduced the issue with nothing more than an Azure VM behind a NAT gateway and a remote Tailscale client. We are effectively doing something similar to what is documented in "Connect to an AWS VPC using subnet routes" on the Tailscale site, except in Azure rather than AWS.
Does anyone have any insight into how I can use an Azure NAT gateway for the Tailscale host while still getting a direct connection?