Tailscale version: 1.24.0
Your operating system & version: Ubuntu 20
I just switched from ZeroTier to Tailscale because of the idle traffic ZeroTier was generating over LTE.
I would like to reduce that usage even further. Is there anything I can do to minimize idle traffic, such as changing the keepalive timer intervals?
Jay
April 25, 2022, 3:03pm
Hello.
We tune our mobile apps to reduce idle traffic, and even the non-mobile apps should be mostly idle when there isn't any activity. Unfortunately, there currently aren't any knobs exposed for tuning; this is something we'd like to improve over time.
There are a couple of GitHub issues you can subscribe to to follow work being done in this area:
opened 10:45AM - 21 Nov 21 UTC
Labels: bug, optimization, OS-ios, OS-android, L3 Some users, P1 Nuisance, T3 Performance/Debugging
### What is the issue?
Device battery runtime is cut in half when running Tailscale always-on.
…
This did not happen with WireGuard.
PS: I did check for existing bug reports and solutions for this, but there don't seem to be any.
### Steps to reproduce
Enable tailscale and wait.
### Are there any recent changes that introduced the issue?
_No response_
### OS
iOS
### OS version
15.1
### Tailscale version
1.16.1
### Bug report
_No response_
opened 03:51PM - 09 Jul 20 UTC
Labels: L5 All users, P2 Aggravating, T3 Performance/Debugging
Currently, active peers heartbeat with each other every 2 seconds.
(@danderson and I had discussed this and considered several options; the overall constraint is that we only ever want to be without a network connection for 5 seconds at most before failing back to DERP, so we need a heartbeat interval of at most half that, ideally.)
Unfortunately, that 2-second heartbeat runs in both directions, and each ping also involves a pong.
So every 2 seconds:
```
srv: 8.9M/0.0M magicsock: disco: d:de3ee729b55e7f2f<-d:c9b9fc10e4b34938 ([MINwg], 167.71.156.251:41641) got ping tx=0ada6c80d510
srv: 8.9M/0.0M magicsock: disco: d:de3ee729b55e7f2f->d:c9b9fc10e4b34938 ([MINwg], 167.71.156.251:41641) sent pong tx=0ada6c80d510
srv: 8.9M/0.0M magicsock: disco: d:de3ee729b55e7f2f->d:c9b9fc10e4b34938 ([MINwg], 167.71.156.251:41641) sent ping tx=764870a09f60
srv: 8.9M/0.0M magicsock: disco: d:de3ee729b55e7f2f<-d:c9b9fc10e4b34938 ([MINwg], 167.71.156.251:41641) got pong tx=764870a09f60 latency=23ms pong.src=209.180.207.193:49249
```
Which works out to 2 packets per second (4 packets every 2 seconds).
I think we can get that down to 1/4 of that.
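The arithmetic behind that claim can be checked with a quick sketch (the function name is illustrative, not from the codebase): today each 2-second interval carries 4 packets (a ping and a pong in each direction), and the proposal below would reduce that to 1 packet per interval.

```go
package main

import "fmt"

// packetsPerSecond is an illustrative helper: it converts "packets
// per heartbeat interval" into a packets-per-second rate.
func packetsPerSecond(packetsPerInterval int, intervalSeconds float64) float64 {
	return float64(packetsPerInterval) / intervalSeconds
}

func main() {
	fmt.Println(packetsPerSecond(4, 2)) // current scheme: 2 packets/sec
	fmt.Println(packetsPerSecond(1, 2)) // proposed scheme: 0.5 packets/sec, 1/4 of today
}
```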
We can have a bit in the Ping frame that says "I'm a heartbeat ping, no need to reply right away" (a ping frame used for health, not latency). Then the response Pong can both 1) say how long it was delayed (so the pinger can still compute a fairly accurate ping time, minus the delay), and 2) carry its own Ping challenge for its direction along with the Pong response.
Then we'd be back down to just 1 packet every 2 seconds, with the pings from each direction 1 second out of phase.
Alternatively, we could do something like elect the lexicographically lesser disco endpoint as the heartbeat pinger and only ping in one direction. The ponger then just trusts the path as long as it keeps receiving pings. That optimization would only come into play when the path is symmetric, which I imagine it almost always is (I haven't analyzed logs yet, but it seems to be).
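The election is trivial to sketch (function name is illustrative): both peers compare the same pair of disco keys, so they deterministically agree on exactly one pinger with no extra coordination. The key strings below mirror the d:... short forms from the log above.

```go
package main

import "fmt"

// isHeartbeatPinger reports whether the local node should send the
// heartbeats on this path: the lexicographically lesser disco key
// pings, the other side only pongs.
func isHeartbeatPinger(localDisco, remoteDisco string) bool {
	return localDisco < remoteDisco
}

func main() {
	local, remote := "d:c9b9fc10e4b34938", "d:de3ee729b55e7f2f"
	fmt.Println(isHeartbeatPinger(local, remote)) // true: we heartbeat
	fmt.Println(isHeartbeatPinger(remote, local)) // false: we only pong
}
```

Since string comparison is a total order, the two sides can never both elect themselves (or both defer), which is the property the one-direction scheme needs.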
Thanks Jay,
I was not specifically looking at mobile. We have some edge Ubuntu devices that are on LTE.
I will take a look at your references.
Joe