Well, after you read the tutorial on Docker, you find that there are at least 1M ways that people build Docker images. Trawling through docker.com and github.com, here are the best practices:

  • Believe in microservices. If you are putting more than one major function into a container, you probably want two containers. The isolation benefits are really large when you do this and the performance hit is minimal. The biggest gain is no more conflicts between libraries.
  • For each service, create a GitHub repo, or at least a subdirectory in your main project. This is where the Dockerfile will live, along with all the files that you need. There are some folks (oh, WordPress just autocorrected that to fools, LOL!) who will have one monster CMake, SCons, or bash script “to rule them all”, but I have found it way more readable to use a small Makefile for each service; this makes it easy to move.
  • The folks at Hypriot have a really nice scheme. The directory has a Makefile plus a build.sh for the steps that happen during the image build. When you build, you just pick a target like `build` and it happens. Put all the local configuration files in here and use the Dockerfile to COPY them into your image. It is really nice to have a local Dockerfile and a local Makefile (there is a minimal sketch of such a Makefile after this list). Note that the build.sh can usually be subsumed by the Dockerfile, which is easier to read.
  • This scheme doesn’t help much with two problems. First, what if you have different flavors like Raspberry Pi, i.MX6, and Intel that use the same files? In that case, I use different Dockerfile.{rpi,intel,imx6} files and then have make targets for each of them (see the per-flavor targets sketched below).
  • The other problem is that you do not want all the build libraries in the runtime image, so you need one build image and one runtime image. The trick here is to build the first image and copy all the build artifacts into a Docker data container; then, when you build the runtime image, you use --volumes-from to pull in just those artifacts and copy them back into the runtime image. It is tricky to figure out what the build artifacts actually are, but once you do, the runtime image gets much smaller. You can also use the host environment if you don’t like data containers: create a scratch directory on the host (say, under /var), use docker cp to get the artifacts out of the build container, and then COPY to put them into the runtime image, though this means you need to know about the host file system. Another approach is to use the VOLUME command in the Dockerfile and then use --volumes-from to copy into a runtime container. (There is a sketch of the docker cp flow after this list.)
  • Finally, when you are done, you can push these into Docker Hub (a push target is sketched below as well). Make sure to also push up a README.md, and, though I haven’t figured out the best mechanism yet, you also want to publish the location of the GitHub repo that has the Dockerfile and all the configurations.
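
To make the per-service layout concrete, here is a minimal sketch of the kind of Makefile I mean, sitting next to the Dockerfile and the config files it COPYs in. The image name `mydockerid/webapp` and the targets are illustrative placeholders, not Hypriot’s exact scheme:

```makefile
# Minimal per-service Makefile sketch (names are hypothetical).
# Lives in the service directory next to the Dockerfile and config files.
IMAGE := mydockerid/webapp
TAG   := latest

.PHONY: build run

# Build the image from the Dockerfile in this directory;
# the Dockerfile COPYs the local config files into the image.
build:
	docker build -t $(IMAGE):$(TAG) .

# Quick local smoke test of the image we just built.
run: build
	docker run --rm -it $(IMAGE):$(TAG)
```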
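For the multiple-flavor problem, one way to wire up the per-flavor Dockerfiles as make targets is docker build’s -f flag, which picks a Dockerfile other than the default ./Dockerfile. Again, the image name is a placeholder:

```makefile
# One make target per hardware flavor; each uses its own Dockerfile.
IMAGE := mydockerid/webapp

.PHONY: rpi intel imx6 all

rpi:
	docker build -f Dockerfile.rpi -t $(IMAGE):rpi .

intel:
	docker build -f Dockerfile.intel -t $(IMAGE):intel .

imx6:
	docker build -f Dockerfile.imx6 -t $(IMAGE):imx6 .

# Build every flavor in one shot.
all: rpi intel imx6
```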
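The build-image/runtime-image split is the fiddliest part, so here is a sketch of the docker cp variant described above: build the fat image, create (but don’t run) a container from it, copy the artifacts out to the host, and let the runtime Dockerfile COPY them back in. All the paths, image names, and the Dockerfile.build/Dockerfile.runtime names are assumptions for illustration:

```makefile
# Sketch of the two-image flow using docker cp (names and paths hypothetical).
BUILD_IMAGE   := mydockerid/webapp-build
RUNTIME_IMAGE := mydockerid/webapp
ARTIFACT_DIR  := ./artifacts

.PHONY: build-image artifacts runtime-image

# 1. Build the fat image with all the compilers and build libraries.
build-image:
	docker build -f Dockerfile.build -t $(BUILD_IMAGE) .

# 2. Create a stopped container from it and copy the artifacts to the host.
artifacts: build-image
	docker create --name extract $(BUILD_IMAGE)
	docker cp extract:/usr/src/app/dist $(ARTIFACT_DIR)
	docker rm extract

# 3. Build the slim runtime image; Dockerfile.runtime COPYs $(ARTIFACT_DIR) in.
runtime-image: artifacts
	docker build -f Dockerfile.runtime -t $(RUNTIME_IMAGE) .
```

Note that docker cp works against a created container that has never started, which is exactly what you want here: you never have to run the build image on the host, just reach into its filesystem.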
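And the push at the end is just one more target, assuming you have already run `docker login` and that `mydockerid` is your Docker Hub ID (a placeholder here):

```makefile
# Push the finished image to Docker Hub (assumes `docker login` was run).
IMAGE := mydockerid/webapp
TAG   := latest

.PHONY: push

push:
	docker push $(IMAGE):$(TAG)
```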
