Google Cloud Platform Strategies: Of Kubernetes, GKE, Helm, and locally on a Mac with Microk8s

Ok, with those preliminaries out of the way, how do you get a GPU instance up and running and have it work easily? There are a few steps. The main issue when you are thinking about this is that there are 3.5 different ways to do things these days, and it’s super confusing which approach is right. But here are the ones that I see:

  1. Traditional Bare Metal on GCE (save lots of versions). The oldest way is to get a Google-optimized image for Google Compute Engine that knows how to deal with CUDA and then load up your stuff. When you want to use it, you load your image, play, and then save the new image. You do this because even when you stop an instance, you are still paying for its disks, so you keep incrementing version numbers on your image and there you have it. The biggest problem, of course, is that while this works well for a single developer, it’s pretty confusing when you are sharing or doing something in production. You basically have to copy the images around and know what is what, and there is no version control either. One of the tricks here is to use a Google-optimized OS.
  2. Bare metal with Terraform, Docker, and persistent storage (Persistent Disk, Filestore, and Cloud Storage). This is the variant that is the 0.5 bump: basically the solution above with automation. The big moves are to use Terraform to configure the bare metal, which makes it easy to bring up the same Google Compute Engine hardware every time, and then to run Docker on top of it. Docker is another layer of isolation, so you can have one Terraform configuration for the stuff outside of Docker, like CUDA driver installation, and then load lots of different machine learning things inside containers. The final trick is to decouple the storage from the GCE instance, so each developer can have their own data. This is the simplest way for a few people to work, and it also means they could run this stuff on a local machine or anywhere Terraform can run. The only complicated part is the persistent storage. As usual, Google has lots of different ways to do this: you can have a virtual disk that’s a block device, which they call a Persistent Disk, and which is great for fast temporary storage. The next level is called Filestore, which gives you NFS drives you can mount; since they are NFS, they are relatively fast and can be shared by multiple machines, so this is a good place to put, say, decompressed images that are ready to be processed. For the really big stuff, you can also connect Cloud Storage buckets. The trick is getting the right Docker containers; NVIDIA has a load of them, as does Google. If you are doing one bad-ass machine, this is the way to go.
  3. Google Kubernetes Engine with Helm and Persistent Volumes. This is all the way over on the other side: instead of provisioning your own single bare metal server, you let Kubernetes do the work. You basically initialize a cluster with GKE and then use Helm to load things up. The main advantage is that you can work with multiple machines, each doing different things, so it is great for workflows where you want smaller machines to, say, preprocess data and then have a big instance run the calculations. This stuff is really tuned for things like web services, but you can also attach what Kubernetes calls Persistent Volumes, which map to the underlying real storage mechanisms. For instance, there are cloud volumes (GCE Persistent Disk or AWS EBS), file sharing with NFS, and distributed file systems like GlusterFS, all declared in the YAML files that tell k8s what to do (see the sketch after this list).
  4. Google and Kubeflow. OK, this is the most complex, but it is a way to manage GKE with a specific machine learning workflow that you can monitor and manage. It has the most tools but is of course the most complicated to set up.
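To make option 3 concrete, here is a minimal sketch of the kind of YAML that claims storage for a pod. The names and the 10Gi size are made up for illustration; on GKE the default StorageClass quietly provisions a Persistent Disk behind the claim, and an NFS or GlusterFS volume would plug in at the same spot.

kubectl apply -f - <<'EOF'
# sketch only: claim 10Gi of storage and mount it into a throwaway pod
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: scratch
  volumes:
    - name: scratch
      persistentVolumeClaim:
        claimName: scratch-claim
EOF

Helm charts do this same dance for you; the WordPress chart later in this post creates its own claims.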

So given all this, here are some experiments in the middle tier: first making GKE work with Helm and Persistent Volumes, then NFS, and then Google Cloud Storage.


First, you need to select a zone that has the GPUs that you want. There are a zillion zones, and there’s a magic list that shows which GPUs are available where. The short answer is that us-west1-b and us-east1-c have a huge number, and us-central1-a and us-central1-c are the others that do as well. You can see what’s available with gcloud compute zones list, but make sure you have permission to enable the compute API with gcloud services enable compute.googleapis.com to let this run.
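If you don’t want to squint at the magic list, you can also ask gcloud directly. This is just a sketch with an example zone and region plugged in; swap in whichever ones you care about:

# sketch: list which GPU types a given zone actually offers
gcloud compute accelerator-types list --filter="zone:us-west1-b"
# and confirm the zones themselves
gcloud compute zones list --filter="region:us-west1"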


The basics: Running GKE with the gcloud CLI

So, to start your Kubernetes cluster: the guide skips a few steps, but here are the basics:

# enable it
gcloud services enable container.googleapis.com
# pick zones where GPUs live
gcloud config set compute/zone us-west1-b
# boot up a basic cluster
gcloud container clusters create rich-1
# now add this so kubectl can see it
gcloud container clusters get-credentials rich-1
kubectl cluster-info
# run a random hello world
kubectl create deployment hello-world --image gcr.io/google-samples/hello-app:1.0
# check that it is running
kubectl get pods
# now create a public port 80
kubectl expose deployment hello-world --type LoadBalancer --port 80  --target-port 8080
# figure out the external IP of the load balancer
kubectl get service hello-world
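Since the whole point is GPUs, note that the cluster above has none. Here is a hedged sketch of adding a GPU node pool; the pool name, accelerator type, and count are just examples, and the driver-installer DaemonSet URL is the one from Google’s GKE GPU docs, so check that it is still current:

# sketch: add a node pool with one NVIDIA T4 per node (names and counts are examples)
gcloud container node-pools create gpu-pool --cluster rich-1 \
  --accelerator type=nvidia-tesla-t4,count=1 --num-nodes 1
# GKE does not install NVIDIA drivers by default; Google ships a DaemonSet for that
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml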

Now you can clean it all up:

kubectl delete service hello-world
kubectl delete deployment hello-world
gcloud container clusters delete rich-1

Dealing with persistent storage is the next challenge, and Google’s answer here is called Filestore. Most Kubernetes applications are stateless web services, so storage isn’t a problem for them. But if you just want a simple storage system, then instead of a VM you can dump your application into a container, run it in K8s, and point it at Filestore:

gcloud services enable file.googleapis.com
gcloud filestore instances list
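The list will be empty at first, so here is a hedged sketch of creating an instance. The instance name, share name, and tier are made up; the capacity looks big because the basic tiers have a 1 TB minimum:

# sketch: create an NFS share (names and tier are examples, 1TB is the minimum)
gcloud filestore instances create rich-nfs --zone us-west1-b \
  --tier=BASIC_HDD --file-share=name="share1",capacity=1TB --network=name="default"
# grab the IP address so you can mount it or wire it into a Persistent Volume
gcloud filestore instances describe rich-nfs --zone us-west1-b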

An Interlude: Making it all work locally but not quite Kubeflow

OK, now that this basic stuff is running, how do you mirror this work on a different machine? I really like to verify what I’m doing on a local machine in addition to the cloud infrastructure, so I know where the bugs are 🙂 This leads to a local installation on a Mac. Fortunately, things have gotten much easier because Ubuntu now has something called microk8s that does the job. The only confusing thing is that if you have Docker installed, it does a local installation of kubectl, but if you also install one with Homebrew, there is a conflict. Also, gcloud has a kubectl installation as well, so all of these conflict:

# Docker Desktop also installs kubectl fyi
brew install --cask docker
# overwrite that installation to get the latest, if you want
brew install kubernetes-cli helm
# helm repo add needs a name and a full URL; see the bitnami example below
# get the plugin manager krew for kubectl and the merge utility
brew install krew
# now the plugin that deals with configs
kubectl krew install konfig
brew install ubuntu/microk8s/microk8s
microk8s install
microk8s status --wait-ready
# get dns, storage and a dashboard
microk8s enable dashboard dns storage
# this should work but is broken on the Mac
microk8s enable kubeflow --bundle lite
# but helm works; look at https://artifacthub.io for repos
helm repo add bitnami https://charts.bitnami.com/bitnami
# figure out what values you can change
helm show values bitnami/wordpress
# you can create a local values file and apply it with -f or use --set
# start up the WordPress containers and call the release rich-wp
helm install rich-wp bitnami/wordpress
# when you are done
helm uninstall rich-wp
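As a concrete (and hedged) example of overriding values, the parameter names below come from the bitnami/wordpress chart’s values file, so double-check them against helm show values before relying on them:

# sketch: override a couple of chart values at install time
helm install rich-wp bitnami/wordpress \
  --set wordpressUsername=rich --set wordpressBlogName="Tongfamily Test"
# or, instead, dump the defaults to a file, edit it, and pass it with -f
helm show values bitnami/wordpress > wp-values.yaml
helm install rich-wp bitnami/wordpress -f wp-values.yaml
# see what got deployed on the local cluster
microk8s kubectl get pods,svc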

At this point, you can run microk8s kubectl to operate the cluster; the main kubectl has no idea that you are there, since it uses ~/.kube/config to figure that out. As usual, there are lots of interesting complexities, the biggest being that microk8s has a separate config file, so you end up in this interesting interlude of how to merge YAML files together. Turns out:

# make sure you have the yq utility for merging yaml files
# this syntax is for yq version 3.0, which is deprecated
microk8s config | yq m -i -a append ~/.kube/config -
# for yq version 4+, I could not get this to work
# so instead use kubectl itself, via a Stack Overflow tip
brew install krew
kubectl krew install konfig
# ok watch this one-liner: it creates a temporary file,
# dumps the microk8s config into it, and then uses konfig
# with --save to merge it into ~/.kube/config in place!
TMP=$(mktemp)
microk8s config > $TMP
kubectl konfig import --save $TMP
rm $TMP
# there is a bug, so you need to fix groups and ownership inside the VM
multipass exec microk8s-vm -- bash -c 'sudo usermod -a -G microk8s $USER'
multipass exec microk8s-vm -- bash -c 'sudo chown -f -R $USER $HOME/.kube'
# now start up kubeflow in its light version
microk8s enable kubeflow -- --ignore_min_mem --bundle=lite
# wait for it!
microk8s status --wait-ready
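Once the merge has worked, both clusters should show up as contexts and you can flip between them. The context names below are what the tools generate by default, and the project name in the GKE one is a placeholder, so yours will differ:

# sketch: confirm both clusters are now in ~/.kube/config
kubectl config get-contexts
# switch to the local cluster (microk8s names its context "microk8s")
kubectl config use-context microk8s
kubectl get nodes
# and back to the GKE one (gcloud names it gke_<project>_<zone>_<cluster>)
kubectl config use-context gke_my-project_us-west1-b_rich-1
kubectl get nodes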

To show you how bleeding edge this is, there is an open bug right now where on the VM side (they use Multipass), the actual script that microk8s runs is done as root, but it needs to be in the right group. This is one way to tell you are on the bleeding edge; I had the same issue with Docker four years ago.

I’m Rich & Co.

Welcome to Tongfamily, our cozy corner of the internet dedicated to all things technology and interesting. Here, we invite you to join us on a journey of tips, tricks, and traps. Let’s get geeky!

Let’s connect