How do you create a persistent Docker TS container?

I’m stuck trying to create a persistent Tailscale machine with Docker, and I was wondering if anyone has a solution for how to configure it.

I have 2 PiHoles on my LAN (one on an RPi, one in Docker on a NAS) and I set up my parents’ devices to access them via Tailscale (their own accounts) to keep them safe.
This works until the TS machine changes name, at which point I have to log into their TS accounts to reshare the machine and reconfigure it as their DNS again.

For my own devices it’s no problem to manually reset the machine as a DNS, but they’re not tech savvy so it takes ages before I can get in to do it for them in their Tailscale accounts.

The Config:
Docker Tailscale container with macvlan network to have unique LAN IP
Docker PiHole container using the Tailscale container as its network, so that it can be reached both on the LAN and via TS.
As far as setup, it all works fine. The TS container is created and PiHole behaves perfectly on both connections.
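For context, attaching a second container to the Tailscale container’s network stack is done with `--net=container:<name>`. A sketch of the PiHole half (image tag, timezone, and volume paths are my own illustrative assumptions, not the exact command from this setup):

```shell
# PiHole shares the tailscale container's network namespace,
# so it is reachable on the macvlan LAN IP and over Tailscale.
docker run -d \
  --name=pihole \
  --net=container:tailscale \
  -e TZ=Europe/London `#assumed timezone` \
  -v /volume1/docker/pihole/etc:/etc/pihole \
  -v /volume1/docker/pihole/dnsmasq:/etc/dnsmasq.d \
  --restart=unless-stopped \
  pihole/pihole
```

Note that the DNS/HTTP port publishing has to happen on the Tailscale container, since PiHole has no network stack of its own in this arrangement.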

I use a reusable AUTH KEY in the docker setup and it creates a machine XYZ.
But if I stop/start the container it creates a new machine as XYZ-1 and the original XYZ remains offline.

Current tailscale docker CLI:

docker run -d \
--name=tailscale \
--net custommacvlan \
--ip <LAN_IP> `#redacted in post` \
-p 53/tcp `#PiHole Port` \
-p 53/udp `#PiHole Port` \
-p 67/udp `#PiHole Port` \
-p 80/tcp `#PiHole Port` \
-v /volume1/docker/tailscale/lib:/var/lib \
-v /dev/net/tun:/dev/net/tun \
-e TS_SOCKET=/var/run/tailscale/tailscaled.sock \
-e TS_AUTHKEY=tskey-auth-ThiS1sMyKey-NotSUr3wHatThisBItisButItsPAR70fTheAutHK3y \
--privileged \
--cap-add NET_ADMIN \
--restart=unless-stopped \
tailscale/tailscale `#image name was missing from the post; assuming the official image`
docker exec tailscale tailscale status

Hey @callumw,

I have a similar setup that does not show the issue you are seeing.

I use a persistent volume mapping from the container’s /var/lib/tailscale folder to a local folder; this ensures that Tailscale’s state and keys survive container restarts. I think your /var/lib volume mapping should cover this. Can you verify that a tailscale folder is actually present in /volume1/docker/tailscale/lib?

Secondly, I set the TS_HOSTNAME environment variable to keep the hostname fixed on the Tailscale network.

Thirdly, I don’t use a TS_AUTHKEY; when I start the container I visit the link printed in the container’s log to authenticate the device, and then I disable key expiry for the machine so that I don’t need to log in again.

The full setup, with an example docker compose file and Dockerfile, is published on GitHub.
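For readers who don’t want to dig through the repo, a minimal compose sketch along these lines (volume paths, hostname, and image tag are illustrative assumptions, not the exact published files) could look like:

```yaml
services:
  tailscale:
    image: tailscale/tailscale
    hostname: tsdocker                 # fixed hostname on the tailnet
    environment:
      - TS_HOSTNAME=tsdocker
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      # persist tailscaled state (node key / machine identity)
      - /volume1/docker/tailscale/state:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
```

With no TS_AUTHKEY set, you authenticate once via the login URL in `docker logs tailscale` and then disable key expiry in the admin console.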

Best regards,

Thanks for the reply @Lieven
Yes, the lib/tailscale is in a persistent volume on my drive.
-v /volume1/docker/tailscale/lib:/var/lib \

The last update to the tailscaled.log.conf file was from 2022 (when I first tried this), so I purged the folder to prompt a fresh .conf in case something was off with the old one.

Start, stop, start
It created an incremental machine again: xyz-1 (+new .conf file created)

So I added the environment variable to define the hostname as per your example.
-e TS_HOSTNAME=tsdocker \

Start, stop, start
It created an incremental machine again: tsdocker-1

Next, I created the initial machine and disabled key expiry on it.
I thought that might be the fix, but it created a tsdocker-1 machine again on the second start :frowning:

Hallo callumw,
You can fix that by setting the TS_STATE_DIR environment variable to a path on your persistent volume. For ease of use, pass these environment variables to Docker: TS_SOCKET, TS_AUTHKEY, TS_STATE_DIR, TS_HOSTNAME and TS_TAILNET. If you want the node to announce itself as an exit node, also add TS_EXTRA_ARGS with the corresponding tailscale CLI flags, plus TS_ROUTES. Make sure to mount your persistent volume at /var/lib/tailscale, i.e. the same directory specified in TS_STATE_DIR.
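Applied to the docker run command from the original post, the relevant additions would be something like the following fragment (the host-side path is an example; the key point is that the volume’s container-side target matches TS_STATE_DIR exactly):

```shell
docker run -d \
  --name=tailscale \
  -e TS_STATE_DIR=/var/lib/tailscale `#where tailscaled keeps its node identity` \
  -e TS_HOSTNAME=tsdocker \
  -v /volume1/docker/tailscale/state:/var/lib/tailscale `#must match TS_STATE_DIR` \
  ...
```

Without TS_STATE_DIR, the container can come up with no saved state, register as a brand-new node, and you get the XYZ-1 duplicates described above.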

Let us know if that fixes your issue


Hey @saidearly

I can confirm this is the root of the problem.

In the example Dockerfile I provided to @callumw, I set TS_STATE_DIR explicitly.

I just tried without this setting, and then I can reproduce the behaviour that @callumw is seeing.

Best regards,

@Lieven

Yeah sure it is.