Rsync over SSH slow

Hello.
I’m trying to rsync VPS backup files between two Proxmox hypervisor servers. With a direct connection over the public IP I reach 70 MB/s in download, but using Tailscale I can only reach 40 MB/s…

I’ve tried several times… same result.
Any idea? (At 40 MB/s the connection is direct, without a relay… cpu usage is slow…)

The remote server has a guaranteed 1 Gbps upload speed, and the local server likewise has 1 Gbps download speed…

Tailscale could still use some performance optimization work - there’s a chance you’re running into that.

When you say “cpu usage is slow”, do you mean CPU usage is low? You’re not seeing tailscale max out the CPU when doing a transfer? If that’s the case (low CPU usage) it sounds like it could be some other problem.

The CPU load is normal. With an rsync file transfer the CPU usage depends on the transfer speed: at 90 MB/s (~720 Mbps) the CPU usually reaches 90%.
But with Tailscale the transfer tops out at about 120 Mbit/s… with total CPU usage of 40% (alongside a lot of other services running).
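Part of the confusion when comparing figures like these is that the thread mixes MB/s and Mbit/s. A tiny helper (the function name is mine, purely for illustration) makes the conversion explicit:

```shell
#!/bin/sh
# Convert megabytes per second to megabits per second (1 byte = 8 bits).
mbps_from_mbytes() {
  echo $(( $1 * 8 ))
}

# Figures from this thread:
mbps_from_mbytes 40   # 40 MB/s over the public IP = 320 Mbit/s
mbps_from_mbytes 90   # 90 MB/s rsync transfer     = 720 Mbit/s
```

So a Tailscale ceiling of 120 Mbit/s corresponds to only about 15 MB/s, well below the 40 MB/s seen over the public IP.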

This Tailscale host is not virtual; it’s a Debian machine with an i7 at 3.4 GHz, 32 GB of ECC RAM, and 1 Gbps in/out bandwidth. Using the public WAN IP I can reach at least 40 MB/s.


Also… a Tailscale IP address can’t be used for rsync jobs at server startup: the OS starts the tailscale service too late, so a script that runs at boot can’t find any Tailscale IP and the scheduled job never executes.
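One common workaround for the boot-ordering problem (a sketch, not an official fix - the function name and timeout are my own choices) is to have the boot script poll until tailscaled has an address before starting rsync:

```shell
#!/bin/sh
# Poll `tailscale ip -4` until it returns an address, then print it.
# Used as a readiness check before contacting remote Tailscale nodes.
# Gives up after $1 seconds (default 60) and returns non-zero.
wait_for_tailscale_ip() {
  timeout=${1:-60}
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    ip=$(tailscale ip -4 2>/dev/null) && [ -n "$ip" ] && { echo "$ip"; return 0; }
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

# Hypothetical usage in the boot script (paths and host are illustrative):
# if wait_for_tailscale_ip 120 >/dev/null; then
#   rsync -a /var/backups/ user@100.x.y.z:/backups/
# fi
```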

We are aware of this limitation, and we will be working on it. For further updates, please subscribe to the issue https://github.com/tailscale/tailscale/issues/414.
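Until that issue is resolved, one workaround on systemd-based distros like Debian is to order the backup job after tailscaled and block until the interface has an address. A sketch of a drop-in for a hypothetical backup unit (the unit name and paths are illustrative, not part of Tailscale’s documentation):

```ini
# /etc/systemd/system/vps-backup.service.d/tailscale.conf (hypothetical unit)
[Unit]
After=tailscaled.service network-online.target
Wants=tailscaled.service network-online.target

[Service]
# Block until the node actually has a Tailscale IPv4 address before rsync runs.
ExecStartPre=/bin/sh -c 'until tailscale ip -4 >/dev/null 2>&1; do sleep 1; done'
```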