Deep dark docker, X11 oddities and X11 vs VNC to Ubuntu with nvidia


OK, I need somewhere to stash all of these strange findings. First of all, if you have Docker for Mac as of September 2021, do *not* enable the experimental new virtualization framework. I did, and it apparently breaks docker pull for reasons that remain mysterious. I actually did a full uninstall and reinstall before discovering this.

All of this really makes me want to switch the client from docker to podman and to switch from Docker Hub to GitHub Container Registry. That goes on the list.

Dealing with Docker Volumes and Users on macOS

OK, contrary to previous documentation (and I'm pretty sure it worked in the past): if you do a volume mount like docker run -v .:/data ubuntu, the /data directory gets its permissions set to root:root, which is pretty bad. The fix is supposed to be adding --user 501:20 to say that the files should be accessed with UID 501 and GID 20, for instance, which will then likely match your host user. The defaults are 501:20 on macOS and typically 1000:1000 on Linux. Sadly, this doesn't seem to work for me; no matter what I do, when I do a -v, the mount gets created as the root user in the container. Sigh.
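As a hedged sketch of that --user approach, using id so the IDs match whatever host you are on instead of hardcoding 501:20:

```shell
# Run a throwaway container with the bind mount accessed as your own UID/GID.
# $(id -u) and $(id -g) pick up the current host user, so this works on
# macOS (501:20) or a typical Linux box without hardcoding either.
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD":/data \
  ubuntu ls -la /data
```

On a Linux host this maps straightforwardly; on Docker for Mac the osxfs translation layer can still make the ownership look different, as described below.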

This does mean that you need to make sure your Dockerfile creates a user with those UID and GID values. That is typically done with a RUN useradd -g 20 -u 501 -m user. Often people stick with a generic user name like user, since that is what Jenkins uses.
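A minimal Dockerfile sketch of that step (note that GID 20 already exists in Ubuntu base images, as the dialout group, so -g 20 reuses it rather than creating a new group):

```dockerfile
FROM ubuntu:focal
# Create a non-root user matching the macOS defaults of UID 501, GID 20.
# GID 20 already exists in Ubuntu images (as "dialout"), so -g reuses it.
RUN useradd -g 20 -u 501 -m user
USER user
WORKDIR /home/user
```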

If you are using Docker Compose, there is a user: "501:20" key that you can add to each service to set the UID and GID used for the mount.
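For example (a sketch of a compose file; the service name and image are placeholders):

```yaml
services:
  app:
    image: ubuntu:focal
    user: "501:20"        # UID:GID the container process (and mounts) run as
    volumes:
      - .:/data
```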

However, none of this works on macOS. That's because in between the macOS filesystem you see and the containers sits a virtual machine, in my case Multipass, but it could be Vagrant. The osxfs filesystem driver pushes files from the Mac side into the Linux VM and from there into the containers. It has to answer this question: if the files are owned by 501:20 on the Mac side, what should it tell the Linux virtual machine? Well, basically, that version of osxfs does the mapping between the Mac-side and Linux user IDs for you.

So what happens is the bind mount point is owned by root, but the directories below it are owned by whatever the UID is on the macOS side. Pretty confusing. It also means that if you volume mount a single file, it will be "apparently" owned by root even though it is really owned by you on the host side; in that case, if you want to change it, make sure to mount it rw. And even though it says root ownership, access in fact works just fine. Now that is really confusing.

Note that this is completely different if you have a Linux host. There, it just does a straight UID and GID mapping, so if the file is owned by 1001:1001 on the host, then it is 1001:1001 in the container. Linux hosts map much more sensibly into Docker containers, since Apple uses very different UID/GIDs.

Getting X-Windows Graphical Apps to work with .Xauthority

Well, 99% of the time I'm using Docker for server applications, where they are just waiting around on ports for someone to connect.

But for graphical simulations that really want to send graphics out, you have basically a few choices:

  1. X-Windows. You can use X-Windows, and there is quite a bit of Docker magic to make it happen. Basically, you need to punch the security key into your Docker container. Typically this lives in $HOME/.Xauthority, and you can look at it with xauth list. Just make sure you don't put it somewhere like the root directory where you don't have write permissions, since there is a lock file that X Windows needs to create. In the container, you need to make sure the user can write that file and that OpenGL is turned on.
  2. Xauthority bugs when in root. Many of the examples on the Internet map the Xauthority cookies (these are the tokens that grant access rights) like this: -v ~/.Xauthority:/.Xauthority. The problem is that / is owned by root, and most of these examples (unwisely) have you running X-Windows applications as root. If you are running as a normal user, this is going to fail because X Windows cannot lock the file; it creates -c and -l lock files, and those fail at the root. You can see this if you do an strace xauth list: it will fail and say there is no .Xauthority. The fix is to map that file into your user's home directory, so for instance -v ~/.Xauthority:/home/user/.Xauthority will work, since that home directory is owned by the user.
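Putting the two items above together, a hedged sketch of the docker invocation (the image name some-x11-image and its user account with UID 501 are assumptions for illustration, not a real image):

```shell
# Inspect the cookies you are about to share with the container
xauth list

# Run an X11 app as a non-root user, mapping the cookie file into that
# user's home directory (not /) so xauth can create its -c/-l lock files.
docker run --rm \
  --user 501:20 \
  -e DISPLAY="$DISPLAY" \
  -v ~/.Xauthority:/home/user/.Xauthority \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-x11-image xeyes
```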

OpenGL and X-Windows with GLX

OpenGL over X-Windows is normally turned off by default. You can see this if you try to run glxgears in an Ubuntu Focal container: it just hangs and says there is no renderer. You can also check with glxinfo | grep render and see that nothing comes back; even the software renderer is not available.

On a native Mac, you turn this on with defaults write org.xquartz.X11 enable_iglx -bool true, and then glxgears runs fine on the Mac side using what is called direct rendering. But if you want the rendering to be remote, then you are sending X-Windows commands across the network to the X11 server (I know this is confusing: in the X Windows world, the server is the graphics display and the client is the app that is doing the rendering). The part that doesn't work well is that this is fine for OpenGL 1.x, but not much else, because the X protocol is chatty.
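The two sides of that check as commands (the defaults domain is for current XQuartz; older versions used org.macosforge.xquartz.X11 instead):

```shell
# macOS side: enable indirect GLX in XQuartz, then restart XQuartz
defaults write org.xquartz.X11 enable_iglx -bool true

# Linux/container side: see which renderer (if any) GLX reports;
# no output here means no GL at all, not even the software renderer
glxinfo | grep -i "opengl renderer"
```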

The next frontier: VNC and an Ubuntu host with Nvidia GPUs

Instead, what most people do is use VNC to capture the actual framebuffer using the Remote Framebuffer (RFB) protocol. As TurboVNC explains, this means the Docker container looks at the framebuffer (so all the 3D rendering happens in the container), then does section-by-section compression into JPEG pieces and sends those over. This path also allows web browsers to handle it, so you can serve a VNC viewer straight to a browser, which is pretty nice.
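A rough sketch of that in-container pipeline using stock Debian/Ubuntu tools (Xvfb and x11vnc stand in here for TurboVNC's components; the package names and resolution are assumptions):

```shell
# 1. Start a virtual framebuffer X server -- all 3D rendering lands here
Xvfb :99 -screen 0 1280x720x24 &

# 2. Point the app at it; it renders into the framebuffer, not a real screen
export DISPLAY=:99
glxgears &

# 3. Export the framebuffer over RFB/VNC (default port 5900); a front end
#    like noVNC can then proxy this to a browser-based viewer
x11vnc -display :99 -forever -nopw
```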

So that seems like the best long-term approach, since it lets you use any graphics stack, ideally with a real Nvidia system running Ubuntu underneath. Even with Docker, the underlying tools that use the graphics processor really need direct access to it.

This might be possible with VMware on a Mac, but that would be kind of a miracle we can explore in the next post 🙂
