I like Tailscale, but I notice I get more weird, blippy network latency issues when using it. I used to always keep my phone connected to my tailnet so I could use my DNS, etc., but occasionally something won't load right and I have to refresh a couple of times.
It tended to happen a lot more when switching between wifi / cellular when leaving and entering buildings, etc.
I've found that using Tailscale on my Android phone became worlds more reliable (as far as the issues you've described) once I stopped using a custom DNS resolver on my Tailnet.
Very cool, I love Tailscale. I use it to connect together a VPS, desktop computer, phone, and a few laptops. My main use case is self-hosted Immich and Forgejo so this is great.
Can someone help me understand what this is vs exposing my services via MagicDNS using the Tailscale Kubernetes operator? Functionally it looks like there's a fair amount of overlap, but this solution is generic outside of Kubernetes and more baked into Tailscale itself? The operator solution obviously uses kube primitives to achieve a fair amount of the features discussed here.
Fascinating to watch Tailscale evolve from what was (at least in my mind) a consumer / home-lab / small-business client networking product into an enterprise server-networking product.
I know they are good at what they do because it's dev tooling that I will actually pay for, which is, as many people know, a difficult thing to convince developers to do.
If I'm getting this right, it's only highly available from a network-layer perspective. However, if one of your nodes is still responsive but the service you exposed on it isn't, there's no way for Tailscale to know and it'll route the packet just the same? It's not doing health checks like a reverse proxy would, I imagine.
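To make the distinction concrete, here's a minimal Go sketch of the kind of L7 health check a reverse proxy would run before routing to a backend; as I read the announcement, Tailscale decides based on whether the node is reachable, not whether the process behind it answers. The /healthz path and backend addresses are made-up examples, not anything from the announcement.

```go
// Sketch of an L7 health check a reverse proxy might do before picking a backend.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// healthy returns true only if the backend's /healthz endpoint answers 200
// within the timeout - i.e. the service itself is up, not just the node.
func healthy(backend string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(backend + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// Hypothetical backends behind the same service name.
	backends := []string{"http://node-a:8080", "http://node-b:8080"}
	for _, b := range backends {
		fmt.Printf("%s healthy=%v\n", b, healthy(b))
	}
}
```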
This would be great if it supported wildcards for ingress controllers. A service foo would give you foo.tailYYYY.ts.net as well as *.foo.tailYYYY.ts.net.
I understand the usefulness of the feature, but find their examples weird. Are people really exposing their company's databases and web hosts on their tailnet?
I'm happy to see this feature added. It's a feature that I didn't quite realize I was missing, but now that I see it described, I can understand exactly how I'll put it to use. Great work as always by the Tailscale team.
In addition, do people do so in a mesh format? It seems expensive to do for all of your machines; more often the topology I see is a relay/subnet-advertisement-based architecture that handles L3, with some other system handling L6/L7.
This sounds great; I think it's exactly what I was looking for recently for hosting arbitrary services on my tailnet. I figured out a workaround where I created a wildcard certificate and a DNS CNAME record pointing to my Raspberry Pi on my tailnet, but this could potentially be simpler.
I wonder if that architecture screenshot's "MagicDNS" value is a nod to Pangolin, since they are currently working on a new Clients feature that should eventually replicate some of the core Tailscale functionality.
I recently found Tailscale when searching for a way to control my home lab while traveling, and I've been amazed by how simple it makes creating a private network.
I'm normally not one to recommend proprietary services, especially for homelab use, but their solution is just so far above all of the alternatives in terms of usability that I make an exception here.
But what I found particularly interesting on that page was the following:
>" Some especially cruel networks block UDP entirely
, or are otherwise so strict that they simply cannot be traversed using STUN and ICE. For those situations, Tailscale provides a network of so-called DERP (Designated Encrypted Relay for Packets) servers. These fill the same role as TURN servers in the ICE standard, except they use HTTPS streams and WireGuard keys instead of the obsolete TURN recommendations."
DERP seems like one interesting solution (there may be others!) to UDP blockages...
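If you're curious whether you're on one of those networks, here's a rough Go sketch that sends a bare STUN Binding Request over UDP and waits for a reply; no response within the timeout suggests UDP is blocked and traffic would end up relayed (the DERP fallback the docs describe). The public STUN server address is just an example, nothing Tailscale-specific.

```go
// Probe whether outbound UDP/STUN works at all from this network.
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
	"net"
	"time"
)

func main() {
	// Example public STUN server; any reachable STUN server would do.
	conn, err := net.Dial("udp", "stun.l.google.com:19302")
	if err != nil {
		fmt.Println("UDP dial failed:", err)
		return
	}
	defer conn.Close()

	// Bare STUN Binding Request (RFC 5389): type 0x0001, length 0,
	// magic cookie 0x2112A442, random 96-bit transaction ID.
	req := make([]byte, 20)
	binary.BigEndian.PutUint16(req[0:2], 0x0001)
	binary.BigEndian.PutUint32(req[4:8], 0x2112A442)
	rand.Read(req[8:20])

	conn.SetDeadline(time.Now().Add(3 * time.Second))
	if _, err := conn.Write(req); err != nil {
		fmt.Println("UDP send failed:", err)
		return
	}

	buf := make([]byte, 1500)
	n, err := conn.Read(buf)
	if err != nil {
		fmt.Println("no STUN response; UDP may be blocked (DERP fallback territory):", err)
		return
	}
	fmt.Printf("got %d-byte STUN response; UDP looks usable\n", n)
}
```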
I have a GitHub action that uses an OAuth token to provision a new key and store it in our secrets manager as part of the workflow that provisions systems - the new systems then pull the ephemeral key to onboard themselves as they come up
It can get especially interesting when you do things like have your GitHub runners onboard themselves to Tailscale - at that point you can pretty much fully-provision isolated systems directly from GitHub Actions if you want
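For anyone wanting to replicate that setup, here's a hedged Go sketch of the key-provisioning step. It assumes you've already exchanged the OAuth client credentials for an API access token (TS_API_TOKEN here), and the endpoint/payload shape is based on my reading of the Tailscale API docs, so double-check it against them; the tag:ci tag and expiry are placeholders.

```go
// Request a single-use ephemeral auth key from the Tailscale API,
// suitable for handing to a CI runner or freshly provisioned system.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	payload := map[string]any{
		"expirySeconds": 3600,
		"capabilities": map[string]any{
			"devices": map[string]any{
				"create": map[string]any{
					"reusable":      false,
					"ephemeral":     true, // node is removed when it goes offline
					"preauthorized": true,
					"tags":          []string{"tag:ci"}, // placeholder tag
				},
			},
		},
	}
	body, _ := json.Marshal(payload)

	req, _ := http.NewRequest("POST",
		"https://api.tailscale.com/api/v2/tailnet/-/keys", bytes.NewReader(body))
	req.Header.Set("Authorization", "Bearer "+os.Getenv("TS_API_TOKEN"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Key string `json:"key"`
	}
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println("ephemeral auth key:", out.Key) // store this in your secrets manager
}
```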
Now I just don’t use it