Deploying minishift on a remote laptop

Part of my new job working with Fabric8 involves having it deployed via minishift.
Everything is nice and working (try it, it's awesome) as long as you deploy it on your local workstation.

The thing is that my macosx desktop laptop has only 8GB of RAM and is not really up to the task of running all the deployed services while my web browser and other stuff are hogging the memory. I did not want to do it on a remote VM either, since I wanted to avoid the nested virtualisation part that may slow things down even more.

Thankfully I have another linux laptop with 8GB of RAM which I use for my testing, so I wanted to deploy minishift on it and access it from my desktop laptop.

This is not as trivial as it sounds, but thanks to minishift's flexibility there is a way to set this up.

So here is the magic command line :

minishift start --public-hostname localhost --routing-suffix <your-routing-suffix>

What do we do here? We bind everything to localhost, and what for, you may ask? Because we are then going to access it via SSH. First you need to get the minishift IP:

$ minishift ip

and now, since in my case that returns the IP of the minishift VM, I am going to forward it over SSH:

sudo ssh -L 443:<minishift-ip>:443 -L 8443:<minishift-ip>:8443 username@host

Change username@host and <minishift-ip> to your own values. I use sudo here since forwarding the privileged port 443 needs root access.
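Putting it all together, a minimal sketch you can run from the desktop laptop (username@host being your remote linux laptop, and assuming minishift is in the PATH of the remote shell):

# grab the minishift VM IP from the remote laptop, then forward the console ports
MINISHIFT_IP=$(ssh username@host minishift ip)
sudo ssh -L 443:${MINISHIFT_IP}:443 -L 8443:${MINISHIFT_IP}:8443 username@host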

When this is done, if the stars were aligned in the right direction when you typed those commands, you should be able to see the fabric8 login page:

Getting a letsencrypt SSL certificate for the OpenShift console and API

By default, an OpenShift install automatically generates its own certificates.

It uses those certificates for communication between nodes, as well as to automatically auth the admin account. By default those same certificates are the ones served for the OpenShift console and API.

Since they are auto-generated, you will get an ugly error message when connecting to the website with your web browser:

and as the error message says that’s not very secure #sadpanda.

There is an easy way to generate certificates these days: letsencrypt. So let's see how to connect it to the openshift console.

There is something to understand first here: when you want to use an alternate SSL certificate for your console and API, you can't do that on your default (master) URL, it has to be another URL. Phrased in another way, here is a quote from the official documentation:


With that in mind, let's assume you have set up a domain that is a CNAME to your default domain. For myself here, since this is a test install, I went the easy way and used the xip.io service as I have documented in an earlier post. This easily gives me a domain which would look like this:

So now that you have defined it, you first need to generate the letsencrypt certificate. Usually you would use certbot from RHEL EPEL to generate it, but unfortunately at the time of writing this blog post the package was uninstallable for me, which will probably get fixed soon. In the meantime I have used letsencrypt from git directly, like this:

$ git clone https://github.com/letsencrypt/letsencrypt

Before you do anything, you need to understand the letsencrypt process: usually you would have an apache or nginx (etc.) serving the generated files for verification (the /.well-known/ thing). Since that doesn't work for us on openshift, you can use the letsencrypt builtin webserver for that instead.

But to start the builtin webserver, you need to be able to bind it on port 80, and for us on the master there is the router running which binds to it (and 443). So you need to make sure the router is down, and the most elegant way to do that with openshift is like this:

$ oc scale --replicas=0 dc/router

Now that you have nothing on port 80, you can tell letsencrypt to do its magic with this command line:

$ ./letsencrypt-auto --renew-by-default -a standalone --webroot-path /tmp/letsencrypt/ --server <acme-server-url> --email <your-email> --text --agree-tos --agree-dev-preview -d auth.<your-domain>

Change the domain and the email address here to your own. If everything goes well you should get something like this:


Now you should have all the certificates needed in /etc/letsencrypt/live/${domain}.

So there is a little caveat here: there is currently a bug in openshift-ansible with the way it handles symlinks and certificates (the files in /etc/letsencrypt/live/ are symlinks to the real, numbered files in /etc/letsencrypt/archive/). I have filed the bug here and it has already been fixed in git, so hopefully by the time you read this article it will be fixed in the openshift-ansible rpm; if it's not, you can directly use the git openshift-ansible instead of the package, or point directly at the dereferenced files in /etc/letsencrypt/archive/. The number (3) in those filenames is going to change, so you would have to adjust.

Now you just need to add some configuration in your /etc/ansible/hosts file (replace <domain> with your own; fullchain.pem and privkey.pem are the files letsencrypt generates):
openshift_master_named_certificates=[{"certfile": "/etc/letsencrypt/live/<domain>/fullchain.pem", "keyfile": "/etc/letsencrypt/live/<domain>/privkey.pem", "names": ["<domain>"]}]

After you run your playbook (with ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml) you should have it running properly, and when accessing the console you should see the reassuring secure lock:



  • If you need to renew the certs, just redo the steps where you oc scale the router down quickly and renew the certificate with the letsencrypt-auto command line mentioned earlier (see the sketch after this list).
  • There is probably a way more elegant way to do that with a container and a route. I saw this on dockerhub, but it seems to be tailored to apps (and kube) and I don't think it could be used for the OpenShift console.
  • Don't forget to oc scale --replicas=1 dc/router (even though the ansible rerun should have done it for you).
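A minimal renewal sketch, reusing the letsencrypt-auto invocation from earlier:

$ oc scale --replicas=0 dc/router                      # free port 80 for the standalone webserver
$ ./letsencrypt-auto --renew-by-default -a standalone  # plus the same flags as above
$ oc scale --replicas=1 dc/router                      # bring the router back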

Easily test your OpenShift applications exposed by the router

OpenShift integrates[1] a router based on HAProxy to expose your services to the outside world. Whenever you do a:

oc expose service servicename

it would by default expose the servicename at a URL of this form:

http://servicename-namespace.<defaultSubdomain>
The defaultSubdomain is usually a wildcard DNS record that your system administrator has configured in your DNS server.
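As a hedged illustration (myapp and the test project are made-up names):

$ oc expose service myapp
# the route host defaults to myapp-test.<defaultSubdomain>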

Now, for your openshift testing, if you don't want to ask your system administrator to configure a new CNAME pointing to your testing environment, you can just use the free xip.io service.

The XIP.IO service is a special DNS service which takes an IP address embedded in a hostname and resolves that hostname, and all its subdomains, back to that IP, so that:

10.0.0.1.xip.io resolves to 10.0.0.1, and the same goes for foo.10.0.0.1.xip.io, bar.10.0.0.1.xip.io, etc.

You then just need to configure it in OpenShift, by editing the routingConfig subdomain value in the master's master-config.yaml (assuming <public-ip> is your public IP which comes back to your router):

    subdomain: "<public-ip>.xip.io"

Or, if you use the openshift-ansible scripts, add this to your /etc/ansible/hosts:
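Presumably via the openshift_master_default_subdomain inventory variable (the usual BYO knob for this, with the same <public-ip> assumption as above):

openshift_master_default_subdomain=<public-ip>.xip.io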

and then you get all your routes exposed properly without bothering your always busy system admin.

[1] Which lately got merged into kubernetes as the “ingress” feature

How to view openshift router (haproxy) stats

After your fancy openshift install has kicked off the haproxy router automatically, you may want to see the stats of that router.

The HAProxy stats are exposed on port 1936 of the node where the router is located (usually the master node), so first you need a way to access it. You can open the port in your firewall (not ideal) or you can just forward the port to your workstation via SSH:

$ ssh -L 1936:localhost:1936 master.openshift

Now that this is done and you have 1936 tunnelled, you need to figure out the password of the haproxy stats. It's stored in the router's environment variables, so you can just do an oc describe on the router deploymentconfig to see it:
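Something like this (a sketch, assuming the router dc is named router and lives in the default namespace):

$ oc describe dc/router -n default | grep -i stats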


Now that you have the password (uo5LtC6mac in my case), you just point your workstation web browser at the tunnelled port, which looks something like this (admin being the default stats user):

http://admin:uo5LtC6mac@localhost:1936/

Just make sure to replace the user and the password with your own and you should be all set.


Controlling Yamaha AV RX-A830 from command line

At home I have been using a Yamaha AV RX-A830. It's a home theater audio/video solution where you can plug in about everything you need (like 7 HDMI inputs; spoiler alert, there is something wrong with you if you have that many devices) and output to two other HDMI channels (like a TV and a projector).

It has integrations for spotify, airplay, netradio and billions of connections to everything, just look at the damn back of this device:

Since I wanted to control it from the command line to automate it for home automation, I firebugged the web interface and reversed some of the REST calls into a nice bash script.

Here it is, at your convenience, to use or hack:

This doesn't support multi-zone and assumes the web interface is resolvable at http://yamaha.local/ (it should be by default), so be aware. This may support other Yamaha AV devices, but since I don't have them I can't say and you may have to try; if it does, kindly add a comment here so others would know 🙂
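For the curious, here is a hedged sketch of the kind of call the script wraps; the /YamahaRemoteControl/ctrl endpoint and the XML payload shape are assumptions based on Yamaha's network control protocol, so double check them against your own firebug session:

# power on the main zone (a sketch; verify the payload against your device)
curl -s http://yamaha.local/YamahaRemoteControl/ctrl \
  -d '<YAMAHA_AV cmd="PUT"><Main_Zone><Power_Control><Power>On</Power></Power_Control></Main_Zone></YAMAHA_AV>'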

The trick to get your wordpress behind a reverse proxy

I have been meaning to get this blog SSL protected for a while, and since solutions like letsencrypt make it easy, I have generated some SSL keys for my domain and configured them in apache.

So far so good, but the thing is my VM at my hosting provider is pretty small, and I have been using varnish for quite some time or I would quickly run out of memory with the kernel OOM killer kicking in[1].

Varnish doesn't do SSL, so you have to do something else. I went ahead and used Nginx to provide my SSL endpoint, which then would look like this:
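A minimal sketch of such an endpoint (hedged: the certificate paths and the varnish backend port 6081 are assumptions); note the X-Forwarded headers, which matter for the wordpress hack below:

server {
    listen 443 ssl;

    ssl_certificate     /etc/letsencrypt/live/<your-domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<your-domain>/privkey.pem;

    location / {
        # hand everything over to varnish listening on localhost
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}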


I could have done it with apache virtualhosts, which would look like this:


I finally went for nginx, since most people seem to say it is leaner and quicker for this kind of SSL accelerator job.

So far so good for the configuration; you can find this information all over the internet. The nginx SSL configuration was a bit special so I could have the more secure end of SSL encryption:

Now, the thing didn't work very well when accessing the website: I could not see any of the media, including the JS and CSS, since they were served on the old non-SSL URL. I tried to force the wordpress configuration to serve SSL but I would end up in an HTTP redirect loop.

Finally I stumbled on this guy's blog and found a hack to put in the wp-config.php file. I streamlined it to:

if ( (!empty( $_SERVER['HTTP_X_FORWARDED_HOST'])) ||
     (!empty( $_SERVER['HTTP_X_FORWARDED_FOR'])) ) {
    $_SERVER['HTTPS'] = 'on';
}

and that's it: wordpress then understands it is being served over HTTPS and adds its https URLs properly.

Hope this helps

[1] I even had a cron some time ago to mysqlping my mysql server and restart it automatically if it was down, since I was so sick of it.

Using python to drive OpenShift REST API

I have been meaning to automate my deployments directly from my small python application, without having to shell out to the openshift client (oc).

OpenShift uses a REST API, and the oc client uses it to communicate with the server. You can actually see all the REST operations the oc client is doing if you specify --loglevel=7 (it goes up to 10 to get even more debug info):

$ oc --loglevel=7 get pod 2>&1 |head -10
I0919 09:59:20.047350   77328 loader.go:329] Config loaded from file /Users/chmouel/.kube/config
I0919 09:59:20.048149   77328 round_trippers.go:296] GET https://openshift:8443/oapi
I0919 09:59:20.048158   77328 round_trippers.go:303] Request Headers:
I0919 09:59:20.048162   77328 round_trippers.go:306]     User-Agent: oc/v1.4.0 (darwin/amd64) openshift/85eb37b
I0919 09:59:20.048175   77328 round_trippers.go:306]     Authorization: Bearer FOOBAR
I0919 09:59:20.048180   77328 round_trippers.go:306]     Accept: application/json, */*
I0919 09:59:20.095239   77328 round_trippers.go:321] Response Status: 200 OK in 47 milliseconds
I0919 09:59:20.096056   77328 round_trippers.go:296] GET https://openshift:8443/version
I0919 09:59:20.096078   77328 round_trippers.go:303] Request Headers:
I0919 09:59:20.096084   77328 round_trippers.go:306]     User-Agent: oc/v1.4.0 (darwin/amd64) openshift/85eb37b

I was thinking of coming up with my own python REST wrapper, since a quick google search didn't come up with any bindings. But since openshift is built on kubernetes and fully compatible with it (i.e. no fork or changes that make it incompatible), it was as easy as using the tools provided for kube.

The first project coming up in the google search is pykube, and it's easily installable with pip.

You need to provide a kubeconfig that was already set up (with username/password) or already identified if it's token based (i.e. oauth, oidc etc), and you can use it like in this example:

import pykube

# load an already configured (and authenticated) kubeconfig
api = pykube.HTTPClient(pykube.KubeConfig.from_file("/Users/chmouel/.kube/config"))
pods = pykube.Pod.objects(api).filter(namespace="test")
for x in pods:
    print(x.name)
See the documentation of pykube on its website.

Getting openshift origin “cluster up” working with xhyve

In the latest openshift client (oc) there is a nifty (relatively) new feature to get an OpenShift cluster started (very) quickly. It's a pretty nice way to get a new openshift origin environment on your laptop without the hassle.

On macosx there is an (as well relatively) new lightweight virtualization solution called xhyve. It's a bit like KVM in the sense of being lightweight, and it does not need a UI running like virtualbox or vmware do. It seemed to be a perfect fit to try those two together.

The xhyve docker machine driver needs to be installed first, so I just went to its website here:

and followed the installation instructions from the README, after which I could see everything was working:


I then fired up the "oc cluster up --create-machine" command, and to my disappointment it was starting virtualbox by default, and I could not see anything in the options to specify the "--driver xhyve" option to docker-machine, which is what the oc cluster feature uses in the backend to bootstrap a docker environment.

Digging into the code, it seems that oc cluster has the driver statically set to virtualbox:

Since there was no way to pass other options, I first looked in the github issues to see if anything had been reported about it, and sent a feature request here.

I started to think a little bit more about a workaround, going from modifying and recompiling the oc client to my liking, to just giving up on xhyve; but in fact the solution is much simpler.

There is the ability to point "oc cluster up" at an already configured docker-machine environment with the "--docker-machine" option, so we just have to configure that machine properly ourselves first (with the --engine-insecure-registry option):
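Something like this (a sketch; the xhyve machine name is arbitrary and the CIDR is the default openshift origin service network, adjust if yours differs):

$ docker-machine create --driver xhyve --engine-insecure-registry 172.30.0.0/16 xhyve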


and after a bit the new docker machine should be set up, which can easily be used with the command eval $(docker-machine env xhyve)

I then just have to start my oc cluster up with the option --docker-machine="xhyve" and I get my nicely set-up openshift origin cluster to play with in mere seconds:


Dealing with yaml in Emacs

Some time ago, or at least when I started doing programming in the late 90s, XML was all the rage. It promised to be the panacea for everything, from data storage to data presentation and processing. People realised that it was just complexity, as Joel Spolsky points out: an attempt to make the complex seem accessible to ordinary people. Really, people were annoyed at writing all those tags, as those '<' and '>' are hard to reach on a qwerty keyboard.

At the beginning of the new millennium in 2000, the web started to get very popular, and things like "web services" popped up everywhere. People realised that actually XML is not that great, so a format called JSON emerged to get computers talking to each other in a sane manner.

But people realised that JSON was actually not that great for chatter between web services, as it was really designed to serialize objects between programming languages. And really, down the line, it's more about programmers being annoyed by all those { } [ ] brackets.

So here came YAML, the latest "fashion format", riding on the popularity of indentation-based programming languages.

Most new software has been using it lately; the whole container software ecosystem configures things in YAML, so you have to deal with it when you work with those tools.

I don't know if I like YAML or not; the only thing I know is that when I have a big-ass large YAML file it quickly becomes unreadable. You have no idea which block belongs to which one, and you're never sure how many indents you need to add to that block to align with that other one that started 800 lines ago.

This has been driving me crazy, as I need to write some large kubernetes/OpenShift YAML files and sometimes end up spending hours trying to find where my indentation is off.

Some may argue: but you do python, and python is indentation based. Yeah, I have been doing python for the last 10 years and this has never been an issue, cause first I don't write kick-ass 5000-line python functions, and second the python mode of my editor, Emacs, is properly configured.

Ah, there, I said it: the editor needs to be configured properly to have a good workflow. So here is Emacs to the rescue, to make it bearable (and to make this post more productive than another rant from the interweb).

So without further ado and with much fanfare, here are the Emacs extensions I found to make writing YAML bearable:

Highlight Indentation for Emacs


This mode gives you a visual representation of the current indentation, with a bar showing each indentation level.

Smart Shift


This mode doesn't give you a visual cue, but it allows you to indent blocks of text easily. Usually in Emacs you would use the Control-C Tab command to indent, and prefix it with a number for the number of indents; for example, C-u 4 Control-C Tab would indent the text by 4 spaces. Smart-shift makes it much easier to move blocks around.


Flycheck mode

This is a generic mode you should really configure for all your programming needs. It supports YAML files and will try to validate your YAML file (with the ruby-yaml library) and show you where you have an error.



This is a function I found in a post on stackoverflow (by the author of Highlight-Indentation-for-Emacs); it allows you to fold all code on an indentation level greater than the current line. A great way to show the current outline of the file.


openshift-sdn with OpenStack SDN and MTU

I am lucky enough to have a cloud available to me for free. It obviously runs OpenStack, and I can kick off VMs as I want.

Since I am playing with OpenShift a lot lately I have seen issues in that cloud where pushing an image to the internal registry was just randomly failing.

Networking is definitely not my forte, but I could definitely sense it was a networking issue. Since I could not just blame the underlying cloud (hey, it's free!), I had to investigate a bit.

Using the "access to internal docker registry" feature of OpenShift, I could definitively push from the master (where the registry was) in 2s, but not from the node, where the push would get completely stuck at the end: it would only push some bits at first and then wait forever.

I came back to our internal mailing list, and the local experts there pointed me to this file:


and the interesting part is this :

# The $DOCKER_NETWORK_OPTIONS variable is used by sdn plugins to set
# $DOCKER_NETWORK_OPTIONS variable in the /etc/sysconfig/docker-network
# Most plugins include their own defaults within the scripts
# TODO: More elegant solution like this
# DOCKER_NETWORK_OPTIONS='-b=lbr0 --mtu=1450'

I uncommented it and adjusted my MTU to 1400, since 1450 wasn't working for me, and after a reboot I could push my images properly from the nodes to the internal registry.
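For reference, the uncommented line ended up looking like this (lbr0 being the openshift-sdn default bridge, yours may differ):

DOCKER_NETWORK_OPTIONS='-b=lbr0 --mtu=1400'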

Thanks to sdodson and Erik for pointing me to this.