I ran into a problem when I tried to set this up for RDS across multiple AWS accounts/VPCs.
For context, our setup is:
Three AWS accounts: one each for dev, staging, and production
Each account has its own RDS cluster and VPC
Because we use Terraform to create resources, the VPC's IPv4 CIDR is 10.5.0.0/16 in all three accounts
When I set up the EC2 subnet routers, all three advertised 10.5.0.0/16 as routes. This seems to be OK with a single router (i.e. when I first set up dev, I could connect through to the RDS instance successfully). However, once I add a second subnet router, everything appears to be set up correctly, but only the dev routes work; staging/prod routes only work if I disable the 10.5.0.0/16 subnet route on all but one of the devices. For example, staging works if I disable the dev and production subnet routes.
Additionally, the macOS app shows three devices (each tagged), but I can't select any of them except the first.
I have tried setting my ACL dst to ["*:*"] and still had no luck.
I can successfully ping the Tailscale IP address of all instances; I just get socket timeouts when trying to connect to the RDS cluster.
How do I successfully get this to work without changing my internal network?
Behaviour I would like:
Have one subnet router per dev/staging/prod, each advertising 10.5.0.0/16
Select a network device from the Tailscale macOS app
Connect to the relevant network (dev/staging/prod)
Get access only to the RDS instance in the chosen account
A tailnet becomes (logically) like a LAN. As such, you won’t be able to have multiple subnets with the same CIDR.
You can selectively enable/disable subnet routers, but that applies to the entire tailnet. Alternatively, you can have multiple tailnets and switch between them, but this would require multiple identity providers (GitHub would be ideal here).
I think having multiple tailnets is probably what I am after. So am I correct in saying that there is no way to select a single network device in the macOS app and connect only to it, without disabling subnet routes?
By having multiple identity providers, do you mean that different identity providers would effectively "own" each machine, and that would allow the switching between tailnets? Or do you mean we'd need three separate Tailscale accounts?
I thought it would be fairly common for users to switch between networks rather than be connected to all of them at once, but is this not the case? Even if I were to change the subnet CIDRs in the other accounts, would I still be connected to all three accounts at the same time?
Ideally I would only ever want to be connected to one VPN in one AWS account and have people select which one they want (rather than being connected to everything, which obviously would not work given the overlapping subnet CIDRs).
Yes, there is some discussion on how to manage overlapping CIDRs, since, as you might expect, it's something we hear about a lot. As of now, if I were to set this up for a small team, my approach would be to disable and enable subnet routers (either in the Admin Console or via the API) based on which network I need access to at the time.
For larger teams, where that would be difficult to coordinate, having multiple GitHub organizations, each with its own tailnet, would be the simplest option. One GitHub IdP account can be used to access multiple tailnets; you choose which one to connect to after logging into GitHub from https://login.tailscale.com/
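For the small-team approach, the enable/disable toggle can be scripted. Here is a rough sketch against the Tailscale API's device routes endpoint; the device IDs and API key below are placeholders, and the request is built but not sent, so you can review it first (verify the endpoint and auth scheme against the current API docs before relying on this):

```python
import base64
import json
import urllib.request

API_BASE = "https://api.tailscale.com/api/v2"

def set_enabled_routes(device_id: str, routes: list, api_key: str) -> urllib.request.Request:
    """Build a request that enables exactly `routes` on a subnet router
    device (pass [] to disable all of its advertised routes).
    Returned unsent so the caller can inspect or execute it."""
    body = json.dumps({"routes": routes}).encode()
    req = urllib.request.Request(
        f"{API_BASE}/device/{device_id}/routes",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )
    # The API key is used as the username in HTTP basic auth
    # (placeholder key below; never hard-code a real one).
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

# Example: switch access from dev to staging. Device IDs are
# hypothetical; look yours up in the Admin Console or via the API.
staging = set_enabled_routes("device-staging", ["10.5.0.0/16"], "tskey-api-EXAMPLE")
dev = set_enabled_routes("device-dev", [], "tskey-api-EXAMPLE")
# urllib.request.urlopen(staging)  # uncomment to actually send
```

Because only one device has the 10.5.0.0/16 route enabled at a time, the overlapping CIDR never collides inside the tailnet.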
We just ran into the same issue: multiple accounts for dev/prod, with the same CIDR in both. Are there any additional thoughts on this other than creating multiple tailnets via the identity providers? What does the pricing look like for that setup? Another option we're considering is tearing down our dev environment and bringing it back up with a different CIDR range; of course, it's just our time at that point.
We are a small team so we found manually turning on/off the subnet routes works well for our use case (no extra cost).
We have three subnet routers (dev/staging/prod), but the 10.5.0.0/16 route is only enabled on one at a time. By default, we leave dev on, and then if you need access to staging or prod, you go and enable that subnet route (Edit route settings).
This works quite well as only admins can turn on prod VPN access and it is off by default. We don’t find it too cumbersome but we are a small team so it’s pretty quick to ask an admin to turn on prod etc.
Each overlapping use of the same IPv4 subnet can be given a unique IPv6 prefix to disambiguate it. Client apps wanting to connect to a node on one of the overlapping IPv4 subnets would instead connect to an IPv6 address to be able to choose which of the overlapping subnets to send to.
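This is the idea behind Tailscale's "4via6" subnet routers: each site gets a small numeric site ID, and the IPv4 subnet is embedded into a unique IPv6 prefix. The sketch below illustrates the address mapping; the fd7a:115c:a1e0:b1a::/64 base prefix and bit layout are my reading of what `tailscale debug via` produces, so treat the exact format as an implementation detail and verify against your client:

```python
import ipaddress

# Assumed 4via6 base prefix: site ID in bits 32-63, IPv4 in bits 0-31.
VIA_BASE = int(ipaddress.IPv6Address("fd7a:115c:a1e0:b1a::"))

def via(site_id: int, ipv4_cidr: str) -> str:
    """Map an IPv4 subnet plus a per-site ID to a unique IPv6 prefix,
    so overlapping IPv4 ranges become distinguishable."""
    net = ipaddress.ip_network(ipv4_cidr)
    addr = ipaddress.IPv6Address(
        VIA_BASE | (site_id << 32) | int(net.network_address)
    )
    # The IPv4 address occupies the low 32 bits, so the prefix
    # length grows by 96.
    return f"{addr}/{net.prefixlen + 96}"

# The same 10.5.0.0/16 in three accounts, disambiguated by site ID:
print(via(1, "10.5.0.0/16"))  # dev
print(via(2, "10.5.0.0/16"))  # staging
print(via(3, "10.5.0.0/16"))  # prod
```

Each site then advertises its own IPv6 prefix instead of the shared IPv4 range, so clients can reach all three environments at once by picking the right IPv6 address.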