NextDNS + DNSMasq DHCP and local names

It took me a little while to figure this out, so here is some documentation.

The router from my ISP, which is generally pretty good, doesn’t support local DNS names, which is annoying in itself. Combined with NextDNS, it meant I had no way to identify the devices on my network.

So there I went and configured dnsmasq on my tiny Raspberry Pi:

port=5353
no-resolv
interface=eth0
except-interface=lo
listen-address=::1,192.168.0.3
no-dhcp-interface=
bind-interfaces
cache-size=10000
local-ttl=2
log-async
log-queries
bogus-priv
server=192.168.0.3
add-mac
add-subnet=32,128

This has the dnsmasq service listening on 192.168.0.3:5353 and forwarding everything it can’t answer itself to 192.168.0.3 on the default port 53, where the NextDNS client (configured below) will listen.

I continued and set up the DHCP server:


dhcp-authoritative
dhcp-range=192.168.0.20,192.168.0.251,24h
dhcp-option=option:router,192.168.0.254
dhcp-name-match=set:wpad-ignore,wpad
dhcp-name-match=set:hostname-ignore,localhost
dhcp-ignore-names=tag:wpad-ignore
dhcp-mac=set:client_is_a_pi,B8:27:EB:*:*:*
dhcp-reply-delay=tag:client_is_a_pi,2
dhcp-option=option:dns-server,192.168.0.3
dhcp-option=option:domain-name,lan

domain=lan
dhcp-option=option6:dns-server,[::]
dhcp-range=::100,::1ff,constructor:eth0,ra-names,slaac,24h
ra-param=*,0,0

This is standard DHCP; just make sure you set option:router to your local router’s address, 192.168.0.254 in my config.

I then configured the NextDNS client on 192.168.0.3, on the default DNS port 53:

cache-size 0
report-client-info true
setup-router false
log-queries true
config CONFIG_ID_FROM_NEXTDNS_GET_IT_FROM_THERE
cache-max-age 0s
timeout 5s
control /var/run/nextdns.sock
forwarder .lan.=192.168.0.3:5353
max-ttl 5s
discovery-dns 192.168.0.3:5353
hardened-privacy false
bogus-priv true
auto-activate false
listen 192.168.0.3:53
use-hosts true
detect-captive-portals false

The key settings are discovery-dns, which makes the client discover the local device names to display on the NextDNS web UI, and the forwarder entry, which resolves everything under the lan domain against the local dnsmasq server.
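To verify the whole chain you can query a DHCP-registered hostname through the NextDNS client and then directly against dnsmasq; mylaptop here is just a hypothetical hostname handed out by my DHCP:

# Through the NextDNS client (port 53), which forwards .lan to dnsmasq:
dig @192.168.0.3 mylaptop.lan +short

# Directly against dnsmasq (port 5353):
dig @192.168.0.3 -p 5353 mylaptop.lan +short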

And that’s it…. Hope this helps.

batzconverter – A multiple timezone converter

I write a lot of scripts to automate my day-to-day workflow. Some of them took me three hours to write and saved me five minutes only once; others I wrote in about five minutes but they save me hours and hours of productivity.

The script shown today, which I am proud of for its usefulness and probably not for its code, is called “batzconverter” and is available on GitHub. The problem it tries to solve: when your team is spread across three or four timezones, how do you schedule a meeting easily?

It’s a simple ~200-line shell script that leverages the very powerful GNU date.

In its simplest form, when you type the command batz you get this:

This shows all the timezones (which you can configure) at the current time, with some easily identified emojis. The 🏠 emoji on the right shows the location of the user.

But you can do way more. Let’s say you want to show all timezones for a meeting tomorrow at 13h00:

It will just do that and show it.

Same goes for a specific date :

You can do some extra stuff, like quickly adding another timezone that isn’t configured:

Or give another timezone as the base for the conversion; the airplane emoji ✈️ here lets you know that you are showing another target timezone:

Easy peasy to use, no frills, no bells, just usefulness…
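Roughly, the invocations look like this (from memory, so treat the exact argument syntax as an approximation and check the README for the real one):

batz                      # current time in every configured timezone
batz tomorrow 13:00       # a meeting tomorrow at 13h00
batz 2021-06-21 10:00     # a specific date and time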

There is another flag called “-j” that lets you output JSON; it was implemented so it could be plugged into the awesome Alfred app on macOS as a workflow:

But it doesn’t have to be for Alfred; the JSON output can be used for any other integration.
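And since it is plain JSON on stdout, anything that speaks JSON can consume it, for example pretty-printing it with jq:

batz -j | jq .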

Configuration is pretty simple: you just list the timezones you would like to have, and the emojis associated with them, in a file located under ~/.config (because visuals are important! insert rolled-eyes emoji here).

Head over to GitHub at chmouel/batzconverter to learn how to install and configure it, and feel free to let me know about any suggestions or issues you have using it.

Building packages for multiple distros on launchpad with docker

I have been trying to build packages for several Ubuntu distros for a new program I have released, gnome-next-meeting-applet.

In short, it was quite painful! If you are new to the Launchpad and Debian packaging ways (which I wasn’t, and yet it still took me some time to figure out) you can get quite lost. I have to say the Fedora copr experience is much smoother. After a couple of frustrated Google and StackOverflow searches and multiple tries, I finally figured out a script that builds and uploads properly to Launchpad via Docker to make the package available to my users.

  1. The first rule of uploading to Launchpad is to properly set up the GPG key on your account, making sure it matches what you have locally.
  2. The second rule is to make sure every new upload increases on the previously uploaded version, or it will be rejected.
  3. The third rule is to be patient or to work on the weekend, because the build queue can be quite slow.

Now, the magic happens in this script:

https://github.com/chmouel/gnome-next-meeting-applet/blob/742dbe48795c0151411db69065fdd773762100e1/debian/build.sh#L20-L41

We have a Dockerfile with all the dependencies we need to build the package, in this file:

https://github.com/chmouel/gnome-next-meeting-applet/blob/a2785314365c51200935ad63c38f490c597989c9/debian/Dockerfile

When we launch the script, the main loop rewrites the FROM line to point to the targeted distro Docker tag (here I have LTS and rolling) and starts the container build.

When that’s done we mount our current source as a volume inside the container, along with our ~/.gnupg mapped to the build user’s gnupg directory inside the container.
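Condensed, the loop does roughly this; the image name, mount points and inner script are my own placeholders, the real thing is in the build.sh linked above:

for release in focal rolling; do
    # Point the Dockerfile FROM at the targeted Ubuntu release and build the builder image.
    sed "s,^FROM .*,FROM ubuntu:${release}," debian/Dockerfile > /tmp/Dockerfile.${release}
    docker build -f /tmp/Dockerfile.${release} -t gnma-builder:${release} debian/

    # Run the build inside the container with the source tree and ~/.gnupg mounted in,
    # so the package can be signed with the same key registered on launchpad.
    docker run --rm -v "$(pwd):/src" -v "${HOME}/.gnupg:/home/builder/.gnupg" \
        gnma-builder:${release} /src/debian/build-in-container.sh ${release}
done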

With dch we bump the version for the targeted distro, appending the distro target after the release number with a “~”, like this: “0.1.0-1~focal1”.

We finish with an upload via dput, and Launchpad *should* then send you an email saying it was accepted.
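The versioning and upload steps themselves boil down to something like this, reusing the ${release} variable from the sketch above (the PPA name is a placeholder; dch, debuild and dput are the real tools):

# Give each Ubuntu series its own version by appending the distro after a "~".
dch --newversion "0.1.0-1~${release}1" --distribution "${release}" "Build for ${release}"

# Build and sign the source package; launchpad builds the binaries itself.
debuild -S -sa

# Upload; launchpad emails you when it is accepted (or rejected).
dput ppa:your-launchpad-id/your-ppa ../gnome-next-meeting-applet_0.1.0-1~${release}1_source.changes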

After waiting a bit, your package should be built for the multiple distributions.

Tekton yaml templates and script feature

Don’t you love “yaml”? Yes you do! Or at least that’s what the industry told you to love!

When you were in school your teacher told you about “XML” and how it would solve all the industry’s problems (and there were many in the late 90s). But you learned that you hate reaching for your “<” and “>” keys and would rather have something else. So then the industry came up with “json” so computers and yourself could talk to each other. That’s nice for computers but actually not so nice for yourself: it was a lie, json was not made for you to read and write but only for computers. So then the “industry” came up with yaml. Indentation based? You get it, and that’s, humm, about it; now you are stuck counting whitespace in a 3000-line file trying to figure out what goes where….

Anywoo, ranting about computer history is not the purpose of this blog post. Like every other cloud native (sigh) component out there, Tekton uses yaml to let the user describe what to execute. There is a very nice feature in there (no sarcasm, it really is nice!) allowing you to embed “scripts” directly in tasks. Instead of, like before, having to build a container image with your script and run it from Tekton, you can now just embed the script directly in your “Task” or “Pipeline”.

All good, all good, that’s very nice and dandy, but when you start writing a script that goes over 5 lines you get into the territory where you have a ~1000-line script embedded in 2000 lines of yaml (double sigh).

You can go back to the old way and start going through the development workflow of:

“write” -> commit -> “push” -> “build image” -> “push” -> “update tag” -> “start task”

and realize that you are losing approximately 40 years of your soul to some boring and repetitive tasks.

So now that I am done talking to myself with this way too long preamble, here is the real piece of information in this post: a script that, like everything in your life, works around the real issue.

It’s available here :

https://github.com/chmouel/chmouzies/blob/master/work/tekton-script-template.sh

The idea is that if you have in your template a tag saying #INSERT filename, it gets replaced by the content of that file. It’s dumb and stupid but makes developing your yaml much more pleasing… so if you have something like:

image: foo
script: |
#INSERT script.py

the script will see this and insert the file script.py into your template. It respects the previous line’s indentation and adds four extra spaces to indent the script, and you can have as many INSERT tags as you want in your template….
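To give an idea, the core of the trick fits in a few lines of shell (a simplified sketch of the idea, not the actual script linked above):

#!/usr/bin/env bash
# Replace every "#INSERT <file>" line with the content of <file>, indented
# with the #INSERT line's own indentation plus four extra spaces
# (the real script keys off the previous line's indentation instead).
template=${1:?usage: $0 template.yaml}

while IFS= read -r line; do
    if [[ ${line} =~ ^([[:space:]]*)#INSERT[[:space:]]+(.+)$ ]]; then
        indent="${BASH_REMATCH[1]}    "
        sed "s/^/${indent}/" "${BASH_REMATCH[2]}"
    else
        printf '%s\n' "${line}"
    fi
done < "${template}"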

Now you can edit your code in script.py and your yaml in the yaml template… win win, separation of concerns, sanity win, happy dance and emoji and all…

Deploying minishift on a remote laptop.

Part of my new job working with Fabric8 is to have it deployed via minishift.
Everything is nice and working (try it, it’s awesome: https://fabric8.io/guide/getStarted/gofabric8.html) as long as you deploy it on your local workstation.

The thing is that my macOS desktop laptop has only 8GB of RAM and is not really up to the task of getting all the services deployed while my web browser and other stuff are hogging the memory. I would not do it on a remote VM either, since I want to avoid the nested virtualisation part that may slow things down even more.

Thankfully I have another Linux laptop with 8GB of RAM which I use for my testing, and I wanted to deploy minishift on it and access it from my desktop laptop.

This is not as trivial as it sounds, but thanks to minishift’s flexibility there is a way to set this up.

So here is the magic command line :

minishift start --public-hostname localhost --routing-suffix 127.0.0.1.nip.io

What do we do here? We bind everything to localhost and 127.0.0.1. What for, you may ask? Because we are then going to use it via SSH. First you need to get the minishift IP:


$ minishift ip
192.168.42.209

and since in my case it’s the 192.168.42.209 IP, I am going to forward it over SSH:


sudo ssh -L 443:192.168.42.209:443 -L 8443:192.168.42.209:8443 username@host

Change username@host and 192.168.42.209 to your own values. I use sudo here since forwarding the privileged port 443 needs root access.

When this is done, if the stars were aligned in the right direction when you typed those commands, you should be able to see the fabric8 login page:

Getting a letsencrypt SSL certificate for the OpenShift console and API

By default, an OpenShift install automatically generates its own certificates.

It uses those certificates for communication between nodes as well as to automatically auth the admin account. By default, those same certificates are the ones served for the OpenShift console and API.

Since they are auto-generated, when connecting to the website with your web browser you will get an ugly error message, and as the error message says, that’s not very secure #sadpanda.

There is an easy way to generate certificates these days, and it is to use letsencrypt, so let’s see how to wire it to the OpenShift console.

There is something to understand first here: when you want to use alternate SSL certificates for your console and API you can’t do that on your default (master) URL, it has to be another URL. Phrased another way, the official documentation spells out this same restriction.

With that in mind, let’s assume you have set up a domain as a CNAME to your default domain. For myself, since this is a test install, I went for the easy way and used the xip.io service as documented in an earlier post. This easily gives me a domain which looks like this:

lb.198.154.189.125.xip.io

So now that you have defined it, you first need to generate the letsencrypt certificate. Usually you would use certbot from RHEL EPEL, but unfortunately at the time of writing this blog post the package was uninstallable for me, which will probably get fixed soon. In the meantime I have used letsencrypt from git directly, like this:

$ git clone https://github.com/letsencrypt/letsencrypt

Before you do anything, you need to understand the letsencrypt process: usually you would have an apache or nginx (etc…) serving the generated files for verification (the /.well-known/ thing). Since we can’t do that easily on OpenShift, you can use the letsencrypt builtin webserver instead.

But the builtin webserver needs to bind to port 80, and on the master there is the router running which already binds to it (and 443), so you need to make sure it’s down first. The most elegant way to do that with OpenShift is like this:

$ oc scale --replicas=0 dc/router

Now that you have nothing on port 80, you can tell letsencrypt to do its magic with this command line:

$ ./letsencrypt-auto --renew-by-default -a standalone --webroot-path /tmp/letsencrypt/ --server https://acme-v01.api.letsencrypt.org/directory --email email@email.com --text --agree-tos --agree-dev-preview -d lb.198.154.189.125.xip.io auth

Change the lb.198.154.189.125.xip.io here to your own domain, as well as the email address; if everything goes well, letsencrypt reports that the certificate was generated.

now you should have all the certificates needed in /etc/letsencrypt/live/${domain}
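The live directory contains the usual letsencrypt file names; the two the next step cares about are fullchain.pem and privkey.pem:

$ ls /etc/letsencrypt/live/lb.198.154.189.125.xip.io/
cert.pem  chain.pem  fullchain.pem  privkey.pem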

There is a little caveat here: there is currently a bug in openshift-ansible with symlinks and certificates and the way it operates. I have filed the bug here and it has already been fixed in git, so hopefully by the time you read this article it will be fixed in the openshift-ansible rpm; if it’s not, you can use openshift-ansible directly from git instead of the package.

Now you just need to add some configuration to your /etc/ansible/hosts file:

openshift_master_cluster_public_hostname=lb.198.154.189.125.xip.io
openshift_master_named_certificates=[{"certfile": "/etc/letsencrypt/live/lb.198.154.189.125.xip.io/fullchain.pem", "keyfile": "/etc/letsencrypt/live/lb.198.154.189.125.xip.io/privkey.pem", "names":["lb.198.154.189.125.xip.io"]}]
openshift_master_overwrite_named_certificates=true

After you run your playbook (with ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml) you should have it running properly, and when accessing the console you should now see the reassuring secure lock in your browser.

NB:

  • If you need to renew the certs, just redo the steps where you oc scale the router down, renew the certificate with the letsencrypt-auto command line mentioned earlier, and scale it back up, as sketched after this list.
  • There is probably a way more elegant way to do that with a container and a route. I saw this on dockerhub but this seems to be tailored to apps (and kube) and I don’t think this could be used for the OpenShift console.
  • Don’t forget to oc scale --replicas=1 dc/router (even though the ansible rerun should have done it for you).
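For the renewal in the first point, the whole dance condenses to something like this (hedged: the letsencrypt/certbot command line options have changed over the years, adapt to the client you actually run):

$ oc scale --replicas=0 dc/router          # free port 80 for the standalone webserver
$ ./letsencrypt-auto renew                 # or re-run the full command line from above
$ oc scale --replicas=1 dc/router          # bring the router back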

Easily test your OpenShift applications exposed by the router

OpenShift integrates[1] a router based on HAProxy to expose your services to the outside world. Whenever you do a:

oc expose service servicename

it exposes the service by default at this URL:

servicename-projectname.defaultSubdomain

The defaultSubdomain is usually a wildcard DNS record that your system administrator has configured in your DNS server.

Now, for your OpenShift testing, if you don’t want to ask your system administrator to configure a new CNAME pointing to your testing environment, you can just use the free service xip.io.

The xip.io service is a special DNS service which takes a hostname containing an IP address and ending in xip.io and resolves it (and all its subdomains) back to that IP, so that:

blah.1.2.3.4.xip.io

will resolve to 1.2.3.4, and the same goes for foo.1.2.3.4.xip.io, bar.1.2.3.4.xip.io, etc…
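You can check the behaviour with dig before touching OpenShift at all:

$ dig +short foo.1.2.3.4.xip.io
1.2.3.4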

You then just need to configure it in OpenShift by editing the value (assuming 1.2.3.4 is your public IP which comes back to your router):


routingConfig:
    subdomain: "1.2.3.4.xip.io"

Or, if you use the openshift-ansible scripts, add this to your /etc/ansible/hosts:

osm_default_subdomain=1.2.3.4.xip.io

and then you get all your routes exposed properly without bothering your always-busy system admin.

[1] Which lately got merged into kubernetes as the “ingress” feature

How to view openshift router (haproxy) stats

After you have done your fancy OpenShift install and it has automatically kicked off the HAProxy router, you may want to see the router’s stats.

The HAProxy stats are exposed on port 1936 on the node where the router runs (usually the master node), so first you need a way to access it. You can open that port in your firewall (not ideal) or just forward it to your workstation via SSH:

$ ssh -L 1936:localhost:1936 master.openshift

Now that this is done and you have 1936 tunnelled, you need to figure out the password of the HAProxy stats. It’s stored in the router’s environment variables, so you can just do an oc describe on its deployment config to see it.
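Something like this does the trick (the STATS_* variable names come from the OpenShift 3.x router image and may differ in other versions):

$ oc describe dc/router | grep STATS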


Now that you have the password (uo5LtC6mac in my case), you just point your workstation web browser to :

http://admin:password@localhost:1936

just make sure to replace the password with your own password and you should be all set.


Controlling Yamaha AV RX-A830 from command line

At home I have been using a Yamaha AV RX-A830; it’s a home theater audio/video receiver where you can plug in about everything you need (like 7 HDMI inputs; spoiler alert, there is something wrong with you if you have that many devices) and output to two HDMI outputs (like a TV and a projector).

It has integrations for Spotify, AirPlay, net radio and billions of connections to everything; just look at the damn back of this device:

Since I wanted to control it from the command line for home automation, I firebugged the web interface and reverse-engineered some of the REST calls into a nice bash script.

Here it is, at your convenience, to use or hack:

This doesn’t support multi-zone and assumes the web interface is resolvable at http://yamaha.local/ (it should be by default), so be aware. It may support other Yamaha AV devices, but since I don’t have them I can’t say and you may have to try; if it does, kindly add a comment here so others would know :)
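To give an idea of the kind of call the script wraps: Yamaha network receivers of that generation expose an XML-over-HTTP control endpoint, so powering the main zone on looks roughly like this (endpoint path and XML body are from memory, treat them as an approximation):

$ curl -s http://yamaha.local/YamahaRemoteControl/ctrl \
    -d '<YAMAHA_AV cmd="PUT"><Main_Zone><Power_Control><Power>On</Power></Power_Control></Main_Zone></YAMAHA_AV>'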

The trick to get your wordpress behind a reverse proxy

I have been meaning to get this blog SSL-protected for a while, and since solutions like letsencrypt make it easy, I generated some SSL keys for my domain and configured them in apache.

So far so good, but the thing is my VM at my hosting provider is pretty small, and I have been using varnish for quite some time or I would quickly run out of memory, with the kernel OOM killer kicking[1] in.

Varnish doesn’t do SSL, so you have to do something else. I went ahead and used nginx to provide my SSL endpoint, which then looks like this:

nginx (SSL) → varnish (cache) → apache (wordpress)

I could have done it with apache virtualhosts, which would look like this:

apache virtualhost (SSL) → varnish → apache (wordpress)

I finally went for nginx since most people seem to say it is more lean and quick for this kind of SSL accelerator job.

So far so good for the configuration; you can find that information all over the internet. The nginx SSL configuration was a bit special so I could have the more secure end of SSL encryption:

Now, things didn’t work very well when accessing the website: I could not see any of the media, including JS and CSS, since they were served from the old non-SSL URL. I tried to force the wordpress configuration to serve SSL but I would end up in an http redirect loop.

Finally I stumbled on this guy’s blog and found a hack to put in the wp-config.php file. I streamlined it to:

    
if ( (!empty( $_SERVER['HTTP_X_FORWARDED_HOST'])) ||
     (!empty( $_SERVER['HTTP_X_FORWARDED_FOR'])) ) {
    $_SERVER['HTTPS'] = 'on';
}
    

And that’s it: wordpress then understands it is being served over HTTPS and builds its https URLs properly.
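For that check to work, the proxy in front obviously has to pass the X-Forwarded-* headers along; with nginx that is a couple of proxy_set_header lines in the SSL server block, something like this (127.0.0.1:6081 being a common varnish listen address, adjust to yours):

location / {
    proxy_pass http://127.0.0.1:6081;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
}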

Hope this helps

[1] I even had a cron some time ago to mysql-ping my mysql server and restart it automatically if it was down, since I was so sick of it.