Controlling Yamaha AV RX-A830 from command line

At home I have been using a Yamaha RX-A830; it's a home theater audio/video receiver where you can plug in about everything you need (like 7 HDMI inputs; spoiler alert: there is something wrong with you if you have that many devices) and output to two other HDMI channels (like a TV and a projector).

It has integrations for Spotify, AirPlay, net radio and billions of connections to everything; just look at the damn back of this device:

Since I wanted to control it from the command line to automate it for home automation, I firebugged the web interface and reverse engineered some of the REST calls into a nice bash script.

Here it is, at your convenience to use or hack:
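A minimal sketch of the approach (the /YamahaRemoteControl/ctrl endpoint and the XML payloads below are my recollection of the network control protocol the web interface uses, so verify them against your own device):

#!/usr/bin/env bash
# Send a command to the receiver: the web interface drives the device
# with small XML documents POSTed over plain HTTP.
HOST=${YAMAHA_HOST:-http://yamaha.local}

yamaha_cmd() {
    curl -s "${HOST}/YamahaRemoteControl/ctrl" -d "$1"
}

case $1 in
    on)  yamaha_cmd '<YAMAHA_AV cmd="PUT"><Main_Zone><Power_Control><Power>On</Power></Power_Control></Main_Zone></YAMAHA_AV>' ;;
    off) yamaha_cmd '<YAMAHA_AV cmd="PUT"><Main_Zone><Power_Control><Power>Standby</Power></Power_Control></Main_Zone></YAMAHA_AV>' ;;
    *)   echo "usage: $(basename $0) on|off"; exit 1 ;;
esac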

This doesn't support multi-zone and assumes the web interface is resolvable at http://yamaha.local/ (it should be by default), so be aware. It may work with other Yamaha AV devices, but since I don't have them I can't say and you may have to try; if it does, kindly add a comment here so others would know :)

The trick to get your wordpress behind a reverse proxy

I have been meaning to get this blog SSL protected for a while, and since solutions like letsencrypt make it easy, I generated some SSL keys for my domain and configured them in Apache.

So far so good, but the thing is my VM at my hosting provider is pretty small, and I have been using Varnish for quite some time or I would quickly run out of memory, with the kernel OOM killer kicking in[1].

Varnish doesn't do SSL, so you have to do something else. I went ahead and used Nginx to provide my SSL endpoint, which then looks like this:

[diagram: nginx (SSL) → varnish → apache]

I could have done it with Apache virtual hosts, which would look like this:

[configuration: apache virtual hosts with varnish and SSL]

I finally went for Nginx, since most people seem to say it is leaner and quicker for this kind of SSL accelerator job.

So far so good for the configuration; you can find this information all over the internet. The Nginx SSL configuration was a bit special so I could have the more secure end of SSL encryption:
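It was along these lines (a sketch only: the domain, certificate paths and Varnish port are placeholders for my actual setup):

server {
    listen 443 ssl;
    server_name blog.example.com;

    ssl_certificate     /etc/letsencrypt/live/blog.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;

    # Restrict to modern protocols and strong ciphers.
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        # Hand the request over to varnish on the local port, with the
        # forwarding headers wordpress needs to detect SSL termination.
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}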

Now the thing didn't work very well when accessing the website: I could not see any of the media, including JS and CSS, since they were served from the old non-SSL URL. I tried to force the WordPress configuration to serve SSL, but I would end up in an HTTP redirect loop.

Finally I stumbled on this guy's blog and looked at a hack to put in the wp-config.php file. I streamlined it to:

    
if ( (!empty( $_SERVER['HTTP_X_FORWARDED_HOST'])) ||
     (!empty( $_SERVER['HTTP_X_FORWARDED_FOR'])) ) {
    // The request came through the reverse proxy, so tell wordpress
    // it is effectively being served over HTTPS.
    $_SERVER['HTTPS'] = 'on';
}
    

and that's it: WordPress then understands it is being served over HTTPS and generates its https URLs properly.

Hope this helps

[1] I even had a cron job some time ago to ping my MySQL server and restart it automatically if it was down, since I was so sick of it.

Using python to drive OpenShift REST API

I have been meaning to automate my deployments directly from my small Python application, without having to use the OpenShift client (oc).

OpenShift uses a REST API, and the oc client uses it to communicate with the server. You can actually see all the REST operations the oc client is doing if you specify --loglevel=7 (it goes up to 10 to get even more debug info):

$ oc --loglevel=7 get pod 2>&1 |head -10
I0919 09:59:20.047350   77328 loader.go:329] Config loaded from file /Users/chmouel/.kube/config
I0919 09:59:20.048149   77328 round_trippers.go:296] GET https://openshift:8443/oapi
I0919 09:59:20.048158   77328 round_trippers.go:303] Request Headers:
I0919 09:59:20.048162   77328 round_trippers.go:306]     User-Agent: oc/v1.4.0 (darwin/amd64) openshift/85eb37b
I0919 09:59:20.048175   77328 round_trippers.go:306]     Authorization: Bearer FOOBAR
I0919 09:59:20.048180   77328 round_trippers.go:306]     Accept: application/json, */*
I0919 09:59:20.095239   77328 round_trippers.go:321] Response Status: 200 OK in 47 milliseconds
I0919 09:59:20.096056   77328 round_trippers.go:296] GET https://openshift:8443/version
I0919 09:59:20.096078   77328 round_trippers.go:303] Request Headers:
I0919 09:59:20.096084   77328 round_trippers.go:306]     User-Agent: oc/v1.4.0 (darwin/amd64) openshift/85eb37b

I was thinking of coming up with my own Python REST wrapper, since a quick Google search didn't turn up any bindings. But since OpenShift is built on Kubernetes and fully compatible with it (i.e., no fork or changes that make it incompatible), it was as easy as using the tools provided for Kube.

The first project coming up in the Google search is pykube, and it's easily installable with pip:
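pip install pykube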

You need to provide a kubeconfig that is already set up (with username/password) or already authenticated if it's token based (i.e., OAuth, OID, etc.), and then you can use it like in this example:


import pykube

# Load the credentials from an already configured kubeconfig file.
api = pykube.HTTPClient(pykube.KubeConfig.from_file("/Users/chmouel/.kube/config"))

# List all the pods in the "test" namespace.
pods = pykube.Pod.objects(api).filter(namespace="test")
for x in pods:
    print(x)

See the documentation of pykube on its website.

Getting openshift origin “cluster up” working with xhyve

In the latest OpenShift client (oc) there is a nifty (relatively) new feature to get an OpenShift cluster started (very) quickly. It's a pretty nice way to get a new OpenShift Origin environment on your laptop without the hassle.

On Mac OS X there is a (likewise relatively) new lightweight virtualization solution called xhyve. It's a bit like KVM in the sense of being lightweight, and unlike VirtualBox or VMware it does not need a UI running. The two seemed a perfect fit to try together.

The xhyve docker-machine driver needed to be installed first, so I just went to its website here:

https://github.com/zchee/docker-machine-driver-xhyve

and followed the installation instructions from the README, after which I could see everything was working.


I then fired up the "oc cluster up --create-machine" command and, to my disappointment, it was starting the VirtualBox driver by default, and I could not see anything in the options to specify the "--driver xhyve" option to docker-machine, which is what the oc cluster feature uses on the backend to bootstrap a Docker environment.

Digging into the code, it seems that oc cluster has the driver statically set to virtualbox:

https://github.com/openshift/origin/blob/85eb37b34f0657631592356d020cef5a58470f8e/pkg/bootstrap/docker/dockermachine/helper.go#L56-L79

Since there was no way to pass other options, I first looked in the GitHub issues to see if anything had been reported about it, and sent a feature request here.

I started to think a bit more about a workaround, ranging from modifying and recompiling the oc client to my liking, to just giving up on xhyve, but in fact the solution is much simpler.

There is the ability to point "oc cluster up" at an already configured docker-machine environment with the "--docker-machine" option. We just have to configure that machine properly first (which means with the option --engine-insecure-registry 172.30.0.0/16):

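The creation command looks like this (a sketch; the final xhyve argument is just the machine name I picked):

docker-machine create --driver xhyve \
    --engine-insecure-registry 172.30.0.0/16 \
    xhyve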

After a bit the new Docker machine should be set up, and it can easily be used with the command eval $(docker-machine env xhyve).

I then just have to start my oc cluster up with the option --docker-machine="xhyve", and I get my nicely set up OpenShift Origin cluster to play with in mere seconds.

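Putting the two steps together:

eval $(docker-machine env xhyve)
oc cluster up --docker-machine="xhyve"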

The best ways to work with yaml files in Emacs

Some time ago, or at least when I started programming in the late 90s, XML was all the rage. It promised to be the panacea for everything from data storage to data presentation and processing. People realized that it was just complexity; as Joel Spolsky points out, an attempt to make the complex seem accessible to ordinary people. Really, people were annoyed at writing all those tags, as '<' and '>' are hard to reach on a qwerty keyboard.

At the beginning of the new millennium in 2000, the web started to get very popular and things like "web services" popped up everywhere. People realized that XML is actually not that great, so a format called JSON came along to get computers talking to each other in a sane manner.

But people realized that JSON was actually not that great for chatting between web services either, as it was really designed to serialize objects between programming languages. And down the line, it's more about programmers being annoyed by all those { } [ ] brackets.

So here came YAML, the latest "fashion format", based on the popularity of indentation-based programming languages.

Most new software lately has been using it; the whole container software ecosystem configures things in YAML, so you have to deal with it when you work with those tools.

I don't know if I like YAML or not; the only thing I know is that when I have a big-ass large YAML file it quickly becomes unreadable. You have no idea which block belongs to which, and you are not sure how many indents you need to add to a block to align it with that other one that started 800 lines ago.

This has been driving me crazy, as I need to write some large Kubernetes/OpenShift YAML files and sometimes end up spending hours trying to find where my indentation is off.

Some may argue: but you do Python, and Python is indentation based. Yeah, I have been doing Python for the last 10 years and this has never been an issue, 'cause first I don't write kick-ass 5000-line Python functions, and second the Python mode of my editor Emacs is properly configured.

Ah, there, I said it: the editor needs to be configured properly to have a good workflow. So here is Emacs to the rescue to make it bearable (and to make this post more productive than another rant from the interweb).

So without further ado and with much fanfare, here are the Emacs extensions I found to make writing YAML bearable:

Highlight Indentation for Emacs


This mode gives you a visual representation of the current indentation, with a bar showing each indentation level.

Smart Shift


This mode doesn't give you a visual aid but allows you to indent blocks of text easily. Usually in Emacs you would use the Control-x Tab command to indent, prefixed with a number for the amount of indentation; for example, C-u 4 Control-x Tab would indent the text by 4 spaces. Smart Shift makes it much easier to move blocks around.

Flycheck-mode


This is a generic mode you should really configure for all your programming needs; it supports YAML files and will try to validate your YAML file (with the Ruby yaml library) and show you where you have an error.

aj-toggle-fold


This is a function I found in a post on Stack Overflow (by the author of Highlight-Indentation-for-Emacs); it allows you to fold all code at an indentation level greater than the current line's. A great way to show the current outline of the file.
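For reference, this is the version of the function that circulates on Stack Overflow (reproduced from memory, so treat it as a sketch):

(defun aj-toggle-fold ()
  "Toggle fold all lines larger than indentation on current line."
  (interactive)
  (let ((col 1))
    (save-excursion
      (back-to-indentation)
      (setq col (+ 1 (current-column)))
      ;; selective-display hides any line indented past COL.
      (set-selective-display
       (if selective-display nil (or col 1))))))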

openshift-sdn with OpenStack SDN and MTU

I am lucky enough to have a cloud available to me for free; it obviously runs OpenStack, and I can kick off VMs as I want.

Since I have been playing with OpenShift a lot lately, I have seen issues in that cloud where pushing an image to the internal registry was just randomly failing.

Networking is definitely not my forte, but I could definitely sense it was a networking issue. Since I could not just blame the underlying cloud (hey, it's free!), I had to investigate a bit.

Using the "access to internal docker registry" feature of OpenShift, I could definitely push from the master (where the registry was) in 2s, but not from the node, where it would push some bits at first and then get completely stuck at the end, waiting forever.

I came back to our internal mailing list, and the local experts there pointed me to the file:

/etc/sysconfig/openshift-node

and the interesting part is this:

# The $DOCKER_NETWORK_OPTIONS variable is used by sdn plugins to set
# $DOCKER_NETWORK_OPTIONS variable in the /etc/sysconfig/docker-network
# Most plugins include their own defaults within the scripts
# TODO: More elegant solution like this
# https://github.com/coreos/flannel/blob/master/dist/mk-docker-opts.sh
# DOCKER_NETWORK_OPTIONS='-b=lbr0 --mtu=1450'

I uncommented it and adjusted my MTU to 1400, since 1450 wasn't working for me (the OpenStack SDN encapsulation underneath already eats part of each packet, so the openshift-sdn overlay running on top of it needs an even smaller MTU), and after a reboot I could properly push my images from the nodes to the internal registry.
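In my case the uncommented line ended up as:

DOCKER_NETWORK_OPTIONS='-b=lbr0 --mtu=1400'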

Thanks to sdodson and Erik for pointing me to this

Deploy the openshift router and registry only on master nodes and nothing else

Something that came up when using OpenShift and that was tricky enough to be worth sharing in a blog post.

On OpenShift you have the routers and registry, which by default run on the master nodes, and that's fine. Things get tricky if you don't want anything else on there.

I finally figured this out after digging in some internal mailing lists, and it is actually not too difficult. The key thing is to have this in the 'default' namespace annotations:

openshift.io/node-selector: region=infra

The default namespace is an internal namespace used for openshift infrastructure services.

Let me describe this a little bit further; here is my node label configuration:

root@master:~$ oc get node
NAME                                 LABELS                                                                                STATUS    AGE
master.local.openshift.chmouel.com   kubernetes.io/hostname=master.local.openshift.chmouel.com,region=infra,zone=default   Ready     2d
node1.local.openshift.chmouel.com    kubernetes.io/hostname=node1.local.openshift.chmouel.com,region=primary,zone=west     Ready     2d
node2.local.openshift.chmouel.com    kubernetes.io/hostname=node2.local.openshift.chmouel.com,region=primary,zone=east     Ready     2d

I already had a router running fine on my master, by forcing it with a nodeSelector on the deploymentConfig (this was generated by the oadm router command):

root@master:~$ oc get pod router-1-q3am8 -o yaml
[..]
  nodeName: master.local.openshift.chmouel.com
  nodeSelector:
    region: infra
[..]

Now I am going to edit my /etc/origin/master/master-config.yaml and add:

projectConfig:
    defaultNodeSelector: "region=primary"

which forces all new pods onto the primary region (unless their project specifies its own node selector).

As expected, if I delete my router and redeploy it:

root@master:~$ oc delete pod router-1-q3am8
root@master:~$ oc deploy router --latest

The router could not be deployed: its region=infra nodeSelector now conflicts with the project-level default node selector (region=primary) we just told the scheduler about, as the log shows:

Sep 23 09:45:52 master.local.openshift.chmouel.com origin-master[2879]: I0923 09:45:52.203596 2879 event.go:203] Event(api.ObjectReference{Kind:"ReplicationController", Namespace:"default", Name:"router-1", UID:"454f46b0-5fbc-11e5-9c22-fa163e93ac32", APIVersion:"v1", ResourceVersion:"99201", FieldPath:""}): reason: 'failedCreate' Error creating: pods "" is forbidden: pod node label selector conflicts with its project node label selector

So what I had to do was to edit the default namespace (not the project but the namespace; that's a critical point) and add this in the metadata/annotations section:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/node-selector: region=infra

which says that pods in the default namespace can indeed be deployed on region=infra.
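One way to make that edit is directly with oc, which opens the live namespace object in your editor:

oc edit namespace default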

Now let's try again:

root@master:~$ oc deploy router --latest

and check the log:

Sep 23 09:47:25 master.local.openshift.chmouel.com origin-master[2879]: I0923 09:47:25.341257 2879 event.go:203] Event(api.ObjectReference{Kind:"ReplicationController", Namespace:"default", Name:"router-1", UID:"454f46b0-5fbc-11e5-9c22-fa163e93ac32", APIVersion:"v1", ResourceVersion:"99201", FieldPath:""}): reason: 'successfulCreate' Created pod: router-1-l5r0e

which seems to work fine and deploys on infra:

root@master:~$ oc get pod|grep router
router-1-ed6dk            1/1       Running   0          1h
root@master:~$

Using yaml for OpenShift v3 templates

I have been experimenting a lot with OpenShift v3 and love how everything works well together, plugging Kubernetes and Docker into a PaaS workflow.

One of the things that I don't get is having to manually write verbose JSON templates; it's wonderful for machines to parse, but writing it can get as painful as (dare I say it) XML.

OpenShift natively supports YAML files very nicely, and it's a straight conversion of what you would have in the JSON format.

Since at this time most of the examples are in JSON, I wrote a script to quickly convert them to YAML, and came up with this command line using Python and bash:

for i in $(find . -name '*.json'); do
    python -c 'import sys,json,yaml;print(yaml.safe_dump(json.loads(sys.stdin.read()), default_flow_style=False))' \
        < $i > ${i/json/yaml}
done
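To illustrate with a minimal (made-up) pod definition:

$ echo '{"kind": "Pod", "apiVersion": "v1", "metadata": {"name": "example"}}' > pod.json
$ python -c 'import sys,json,yaml;print(yaml.safe_dump(json.loads(sys.stdin.read()), default_flow_style=False))' < pod.json
apiVersion: v1
kind: Pod
metadata:
  name: example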

Happy Yameling (I just made this word up and I am not even drunk)

Building RPM with Docker images


For an internal project at work I have been thinking more about how to generate RPMs out of our CI. I wanted to have them produced as artifacts of the build, so I can test whether they install and work properly with some smoke tests.

Since we are using Docker for most of the things in our CI, I have been thinking more about how to do that with Docker images and RPM.

Ideally, what I would love from RPM is the ability to integrate with Docker, so when you build your RPM you are building inside a Docker image. Basically, the %prep section would be spun up in a special Docker image and the RPM output would come back to the host.

The advantage, beyond making sure you are building your RPMs in a confined and reproducible environment, is that from the same RPM build you would be able to say that you want to build the RPMs for CentOS/Fedora/RHEL/etc. in whatever flavours.

I am sure there are workarounds to do that with chroot and such, but it would be nice if this mechanism were built into RPM (be it an abstracted system doing it with chroot/Docker or whatever container technology).

Since we are not there yet, I ended up with just the straightforward way of constructing an image with my build dependencies.

It's a Python project which uses PBR for generating the version, so I first have to generate a tarball in my build directory and get the generated version.

I then modify the spec file with that version and start building the image with the new tarball and new spec file.

I run the image with a volume mounted to a local directory on the host; the image runs the start.sh script in the container.

The start.sh script is pretty straightforward: it builds the RPMs and copies them to the volume directory as root (since there is no other way), so they can be copied from the host to the artifact output directory.

I could have skipped the copying and uploaded them to an object storage system instead (like Swift, obviously), but since I needed them available in the CI, I ended up with the local file copy.

Here are my scripts. In SPECS/project.spec and SOURCES/* there are the spec and the sources/patches as in a standard RPM layout; the only thing is to make sure to use a %define _version VERSION and use that macro for Version in your spec file.

The main build.sh, which gets run from the CI:
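The original was embedded from a gist; here is a simplified sketch of what it does (the paths, image name and spec name are illustrative):

#!/bin/bash
set -e
OUTPUT_DIR=$(pwd)/output

# Generate the sdist tarball with PBR and grab the generated version.
python setup.py sdist
VERSION=$(python setup.py --version)

# Inject the version into the spec file and stage the tarball.
sed -i "s/^%define _version.*/%define _version ${VERSION}/" SPECS/project.spec
cp dist/*.tar.gz SOURCES/

# Build the image, then run it with the output directory as a volume.
docker build -t project-rpmbuild .
mkdir -p ${OUTPUT_DIR}
docker run --rm -v ${OUTPUT_DIR}:/output project-rpmbuild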

The Dockerfile, which tries to be optimized a bit for Docker caching:
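Again a sketch, ordered so that the slow dependency installation layer stays cached across rebuilds:

FROM centos:7
# Build dependencies change rarely: keep this layer cached.
RUN yum install -y rpm-build python-setuptools && yum clean all
# The spec changes less often than the sources, so copy it first.
COPY SPECS /root/rpmbuild/SPECS
COPY SOURCES /root/rpmbuild/SOURCES
COPY start.sh /start.sh
CMD ["/start.sh"]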

and the script start.sh that gets run inside the container:
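which is essentially (sketch):

#!/bin/bash
set -e
# Build the binary and source RPMs from the staged spec and sources.
rpmbuild -ba /root/rpmbuild/SPECS/project.spec
# Copy the results to the mounted volume; this runs as root, hence the
# note above about there being no other way.
cp /root/rpmbuild/RPMS/*/*.rpm /root/rpmbuild/SRPMS/*.rpm /output/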

It probably would not fit your environment as-is, but at least it may give you the idea.

Use cases for Docker driven development.

So the trend these days is to talk about containerizing all the things, which usually involves Docker; it even has its own verb now, with people using the word 'containerizing' to describe packaging their application with Docker.

A lot of what is happening lately in the Docker world is about solving how to get those containers into real production environments; there are people working on taking the 'containerization' philosophy to storage and networking, or on getting orchestration right.

While getting Docker into production is an important thing that will hopefully get solved and stable sometime soon, there is a use case for Docker that works as of right now: the developer use case.

I briefly mentioned this in another blog post and wanted to expand my thoughts here, after chatting with some of my colleagues who don't seem to get how it would help and who consider the Docker buzz not much more than a marketing trend.

Take functional testing to another level.

I am not going to go over what functional testing is and all the different types of software testing. There are a lot of them that are very well documented, and except for the unit tests, they usually need to have the real services properly up and running first.

Test-driven development is a very well known process for developing applications, and it is usually tied to unit tests. You start writing your unit tests, write your code, and fix your code/tests in an iterative way.

When the code is working, it usually gets committed to a CI environment, which runs the unit tests and maybe some other functional tests before it gets reviewed and merged.

The functional tests for that feature don't get committed at the same time, because usually running functional tests is painful and can take a lot of time. You have to set all the things up properly: the initial setup of your DB, and how it communicates with your service in the right context for how you want to do your testing.

And even if you go down that path and get it done, most people would just do it in a single VM that is easy to share among colleagues; you won't go through the process of properly setting up a bunch of VMs that communicate together, like a DB, an app, and perhaps a web server. You won't even try to test how your application scales and behaves, because that's even more costly.

Have your functional testing done by multiple services, not just a single VM.

And that's where testing with Docker and an orchestrator like fig can shine. You can specify different scenarios that are really quick to deploy. You can run different runs and targets directly from your CI and, more importantly, you can easily share those with your colleagues/contributors. That's usually very fast, because if there is one thing Docker is good at, it is doing a lot of smart caching when building images and running them in the blink of a second.
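As a sketch of what such a scenario can look like with fig (the service layout and test command here are made up), linking the app under test to a real database:

# Hypothetical fig.yml: the app container is built from the local
# Dockerfile and linked to a real postgres instance for the tests.
web:
  build: .
  command: python run_tests.py
  links:
    - db
db:
  image: postgres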

Show your users how your app should be deployed.

When you build your Dockerfiles, you show the way your app is built and how its configuration is set up. You are able to give your users an idea of how it would work. It may not be optimal or perfect, since you probably don't have the same experience and tweaking as someone who deploys complicated software for a living, but at least you can give a guideline of how things work, without the user pulling their hair out to figure it out.

Even your unit testing can get more robust!

This is to follow up on my blog post about the tool dox I introduced. In OpenStack we have some very complicated tests that are very much dependent on the system, so it gets very complicated to run our unit tests in a portable way. But that's not just OpenStack: take for example an app that needs SQLAlchemy to run. You can sure run your unit tests with the SQLite backend, but you may end up in weird cases with your foreign keys not working properly and other SQL features not being implemented. With containers you can have a container set up with the DB of your choice to test against easily. There are more use cases where a binary (or your dependencies) depends on the system, and you want that system to be controlled and contained.
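A minimal sketch of that idea, assuming the test suite picks up the database URL from an environment variable (DATABASE_URL here is an assumption about your test harness):

# Spin up a throwaway postgres for this test run.
docker run -d --name test-db -p 5432:5432 postgres

# Run the tests against it, then throw the database away.
DATABASE_URL=postgresql://postgres@localhost:5432/postgres python run_tests.py
docker rm -f test-db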

I hope those points will help convince you to bring containers into your development workflow. Hopefully in the future all those workflows will be further generalized, and we will have even more powerful tools to get our development (almost) perfectly done ™
