
Just one data point, but I can't say I've had any development problems with Docker on macOS. It's frustratingly slow, but it has always worked. Perhaps you are doing stuff way more complicated than just running a few containers.


For individual containers or smaller local deployments using internal Docker networking, it's usually stable and you can use hacks like NFS reverse mounts for faster shared files.
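For anyone curious, the NFS trick generally means exporting the project directory from macOS and mounting it into the container over NFS instead of the default file sharing. A rough sketch, assuming the host already exports the directory in /etc/exports (paths, image, and volume names here are made up):

```shell
# Hypothetical: create a named volume backed by an NFS export on the macOS host
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=host.docker.internal,rw,nolock,hard,nfsvers=3 \
  --opt device=:/Users/me/project \
  project-src

# Mount the NFS-backed volume in place of a plain bind mount
docker run --rm -v project-src:/app some-image
```

Whether this is actually faster than the built-in sharing depends on the Docker Desktop version and file access patterns.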

But I've run into issues emulating more complex environments where you need multiple exposed ports, more complex host-to-Docker networking, etc., which generally works on Linux because it's not stuck in a VM shim layer.

But most of the time I don't have any issues, just occasionally slower environments on the Mac if I don't tune folder mounts when running Ruby/Node/PHP apps with hundreds or thousands of files.
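The folder-mount tuning mentioned above usually comes down to mount consistency flags and keeping dependency directories inside the VM. A hedged compose sketch (service and image names are made up; on recent Docker Desktop builds with VirtioFS the consistency flags are accepted but largely no-ops):

```yaml
services:
  app:
    image: node:20
    volumes:
      # relax host<->VM consistency for the big source tree
      - ./src:/app/src:delegated
      # keep node_modules in a named volume so it never crosses the VM boundary
      - node_modules:/app/node_modules
volumes:
  node_modules:
```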


You can use WireGuard to tunnel from the internal Docker network to the Mac's network, so that each Docker container can have a separate IP address that is visible to Mac processes.

Here's a description of my setup for that: https://news.ycombinator.com/item?id=33665178


I think this touches on what might have been the heart of the issue in one of my projects. It made use of extremely complex, dynamic Docker networking, both inter-container and host networking. It was a beast.


The problem is that the Docker runtime doesn't actually run on macOS; it runs in a Linux VM alongside macOS. This can make a few things complex enough that they require changes to applications which interface with Docker, especially around networking. It's the same with Docker on Windows when running Linux containers (Windows containers are a whole other issue).
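One concrete consequence of the VM: on Linux you can reach a container's bridge IP directly from the host, while on macOS that IP only exists inside the hidden VM, so you're limited to explicitly published ports. A sketch (image and container names are arbitrary):

```shell
# Linux: the container's bridge IP is routable from the host itself
docker run -d --name web nginx
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web
# curl http://<that-ip>/ works on Linux, but not on macOS

# macOS: the only reliable path is a published port
docker run -d --name web2 -p 8080:80 nginx
# curl http://localhost:8080/ works on both
```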


> It's the same with Docker on Windows

And the same with Docker Desktop on Linux: https://docs.docker.com/desktop/install/linux-install/


This strikes me as such an obviously stupid, self-defeating design that I have to be missing something.

Why would the Docker developers do this? Did doing it locally not work well? Is all that just to 'spare' Linux developers from actually installing Docker on their machines?


The trouble with Linux is that anything you don't vendor in will be a giant fucking headache when it comes to support. But then, so will anything you do vendor in, as soon as some shared lib your vendored-in dep relies on is at the wrong version, or when it encounters some oddball config/package combo, or whatever.

Here they appear to have chosen to vendor in the whole damn OS, which, given the realities of supporting Linux, doesn't seem totally insane to me. I'd expect it also lets you run it with reduced permissions (not everyone has root on their workstation) and to generally make cross-distro operation much more reliable and consistent.


Maybe you want to run a completely different kernel, or distro, from your prod environment.



