Tailscale using non-existent DNS server in Kubernetes pod

I’m attempting to run Tailscale as an exit node and subnet router inside my K8s cluster. I would like to be able to reach both k8s services on 10.43.0.0/16 and, e.g., my NAS on 192.168.88.0/24 from my tailnet. I built this deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tailscale
  name: tailscale
  namespace: tailscale
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tailscale
  strategy: {}
  template:
    metadata:
      labels:
        app: tailscale
    spec:
      hostNetwork: true
      containers:
      - image: tailscale/tailscale
        name: tailscale
        resources: {}
        env:
          - name: TS_ACCEPT_DNS
            value: "true"
          - name: TS_AUTHKEY
            value: "tskey-auth-my-key"
          - name: TS_EXTRA_ARGS
            value: "--advertise-exit-node"
          - name: TS_ROUTES
            value: "192.168.88.0/24,10.43.0.0/16"
        volumeMounts:
          - name: lib
            mountPath: /var/lib
          - name: tun
            mountPath: /dev/net/tun
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
      volumes:
      - name: tun
        hostPath:
          path: /dev/net/tun
      - name: lib
        hostPath:
          path: /var/lib

but when the pod starts, it tries to use a DNS server at 192.168.88.1, where nothing is listening, to look up a k8s service name. I have added my kube-dns server as a DNS server for the pod, and nothing I’m aware of (e.g. DHCP) should be telling the pod to use 192.168.88.1 for DNS. The full startup output:

boot: 2023/06/10 20:33:49 error checking get permission on secret tailscale: Post "https://kubernetes.default.svc/apis/authorization.k8s.io/v1/selfsubjectaccessreviews": dial tcp: lookup kubernetes.default.svc on 192.168.88.1:53: no such host
boot: 2023/06/10 20:33:49 error checking update permission on secret tailscale: Post "https://kubernetes.default.svc/apis/authorization.k8s.io/v1/selfsubjectaccessreviews": dial tcp: lookup kubernetes.default.svc on 192.168.88.1:53: no such host
boot: 2023/06/10 20:33:49 error checking patch permission on secret tailscale: Post "https://kubernetes.default.svc/apis/authorization.k8s.io/v1/selfsubjectaccessreviews": dial tcp: lookup kubernetes.default.svc on 192.168.88.1:53: no such host
boot: 2023/06/10 20:33:49 Starting tailscaled
boot: 2023/06/10 20:33:49 Waiting for tailscaled socket
2023/06/10 20:33:49 logtail started
2023/06/10 20:33:49 Program starting: v1.42.0-tab797f0ab, Go 1.20.3-tsddff070: []string{"tailscaled", "--socket=/tmp/tailscaled.sock", "--state=kube:tailscale", "--statedir=/tmp", "--tun=userspace-networking"}
2023/06/10 20:33:49 LogID: 2c4c1795c8ecac1ab7d48dd23e69e92a425ceb5c63d6f1f9ed822ca7d8a6d19a
2023/06/10 20:33:49 logpolicy: using system state directory "/var/lib/tailscale"
2023/06/10 20:33:49 wgengine.NewUserspaceEngine(tun "userspace-networking") ...
2023/06/10 20:33:49 dns: using dns.noopManager
2023/06/10 20:33:49 link state: interfaces.State{defaultRoute=eno1 ifs={cni0:[10.42.3.1/24] eno1:[192.168.88.72/24]} v4=true v6=false}
2023/06/10 20:33:49 magicsock: disco key = d:b9251c17c3e27887
2023/06/10 20:33:49 Creating WireGuard device...
2023/06/10 20:33:49 Bringing WireGuard device up...
2023/06/10 20:33:49 Bringing router up...
2023/06/10 20:33:49 Clearing router settings...
2023/06/10 20:33:49 Starting network monitor...
2023/06/10 20:33:49 Engine created.
2023/06/10 20:33:49 flushing log.
2023/06/10 20:33:49 logger closing down
2023/06/10 20:33:49 getLocalBackend error: ipnlocal.NewLocalBackend: calling ReadState on state store: Get "https://kubernetes.default.svc/api/v1/namespaces/tailscale/secrets/tailscale": dial tcp: lookup kubernetes.default.svc on 192.168.88.1:53: no such host
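For anyone debugging the same symptom: one way to see where that resolver comes from is to compare what the container actually has in `/etc/resolv.conf` with the cluster DNS service. These commands are a sketch (they assume `kubectl` access and the deployment/namespace names from the manifest above):

```shell
# Show the resolver the container actually uses; with hostNetwork: true
# this is the node's /etc/resolv.conf, not the cluster DNS config.
kubectl -n tailscale exec deploy/tailscale -- cat /etc/resolv.conf

# Compare with the cluster DNS service address (typically 10.43.0.10 on k3s,
# matching the 10.43.0.0/16 service CIDR above):
kubectl -n kube-system get svc kube-dns
```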

I fixed this by removing “hostNetwork: true” from the manifest, but I still don’t know why Tailscale was picking that DNS server.
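Update: this appears to be standard Kubernetes behavior rather than anything Tailscale does. With `hostNetwork: true` and the default `dnsPolicy: ClusterFirst`, a pod falls back to the node’s `/etc/resolv.conf` — here, 192.168.88.1, presumably handed to the host by DHCP. If hostNetwork is actually needed, Kubernetes provides `dnsPolicy: ClusterFirstWithHostNet` to keep cluster DNS; a minimal sketch of the relevant part of the spec (untested in my cluster):

```yaml
spec:
  template:
    spec:
      hostNetwork: true
      # Without this, a hostNetwork pod with the default ClusterFirst policy
      # falls back to the node's /etc/resolv.conf (here: 192.168.88.1).
      dnsPolicy: ClusterFirstWithHostNet
```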