For the love of centered windows: GNOME extension edition

It feels weird, or great, or stupid, or pretty smart, or whatever, to be wrong. Just after I wrote that previous blog post (https://blog.chmouel.com/2021/11/14/for-the-love-of-centered-windows/) I realized that the shell script doesn't work great on Wayland.

I didn't really understand how Wayland works and just assumed that my tiny script would just work. But after experiencing it not working on a native Wayland application, and understanding how Wayland works (https://wayland.freedesktop.org/docs/html/ch05.html), it obviously needed a better approach if I am to keep up with the modern world of the Linux desktop.

So I spent a few hours trying to understand how to make a GNOME Shell extension to replicate this, since with the Wayland architecture I don't think there is any other way to replicate this feature.

I got inspired by [tactile](https://gitlab.com/lundal/tactile). And when I say inspired, I pretty much copied a lot of code from there (my JavaScript and GTK dev skills are near zero, so I truly needed the help).

The result is a nice extension that does exactly the same thing as my script and works exactly as wanted.

I even got a nice gif to show for it.

If you are interested, feel free to grab it from the GNOME extensions website:

https://extensions.gnome.org/extension/4615/one-third-window/

Just press Super (or Windows) + C to center the current window, or Super/Win + Shift + C to rotate the window around.

For the love of centered windows

Sometime during a 2020 confinement, my work decided to give us some money to buy some work-from-home office items.

I didn't need much at that time, since I was already settled with everything I needed in a work-from-home office, but I decided to go for a fancy new screen (since, well, why not) and the old one (a standard 24″ display) could find some good use with my teenage gamer son.

The chosen screen is a Samsung ultra-wide display with the beautiful model name of S34J550WQU.

At first I was thrilled: the display looks nice and it seemed I had much more estate to look at than before.

But in practice it's not like that; this screen is too wide because, if I start having to do some work in the windows located on the right and on the left, I actually need to look at those windows by turning my neck or moving my head left to right. After a while it became quite irritating and gave me serious neck ache and stiffness.

After experimenting for a short while, I decided that a centered window was the way to go, so I was painfully resizing windows and moving them around manually as needed. There were other tools I found trying to automate those tasks, but they never helped much.

Fast forward 3 months later: I had happily switched back to Linux with the promises of those tiling window managers where windows get tiled and rearranged exactly the way I want.

But I never got into tiling window managers and never really understood them. Before that ultra-wide display, I was mostly a one-large-window person with fast Alt-<Tab> fingers. I would have my Emacs (or browser/terminal) in full screen, which was just right to look at without moving my head since the screen was smaller, and I could focus on the task at hand without having five of them at the same time.

Tiling window managers let you customize a lot, but no matter what layout I tried I never could get it right.

I gave up trying to adjust my workflow to i3/xmonad, partly because I am definitively comfortable with the GNOME desktop and I actually quite enjoy its simplicity.

For a while I was kinda thinking of selling that ultra-wide monitor to get a smaller screen, but like a lot of things in life I was able to adjust my workflow with a set of keybindings and a shell script.

I got inspired by this unix.stackexchange answer. With the help of xwininfo and wmctrl, my script identifies the current window and rotates it to the left of the screen, to the right, or to my preferred placement for working: the center. So essentially, when I have my editor (Emacs), a browser and a terminal, I can move the windows around, center the one I want, and easily move the non-active ones to the sides.
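To give an idea of the approach, here is a minimal sketch of that kind of script (not my actual one; it assumes X11 with wmctrl and xdpyinfo installed, and the 40% width is an arbitrary pick):

#!/usr/bin/env bash
# Center the currently focused window, sized to roughly a third of the screen.
read -r screen_w screen_h < <(xdpyinfo | awk '/dimensions:/ {split($2, d, "x"); print d[1], d[2]}')
win_w=$(( screen_w * 40 / 100 ))   # a few pixels more than a perfect third
win_h=$(( screen_h - 100 ))        # leave some room for panels
x=$(( (screen_w - win_w) / 2 ))
# -r :ACTIVE: targets the focused window; -e takes gravity,x,y,width,height
wmctrl -r :ACTIVE: -e "0,${x},50,${win_w},${win_h}"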

I realize now I kinda replicated an (extremely simplified) tiling window manager with the only feature I want. The advantage of my method is that I can have it exactly the way I want: I don't want a perfect 1/3-of-the-screen window; the centered window is a few pixels wider and covers a bit of the side windows, and that is trivial to do in a script, compared to trying to figure out how to do that in i3.

Another win for a perfectly aligned windows desktop.

(I am quite new to Wayland, but to my surprise the script still works on Wayland too.)

How to make a release pipeline with Pipelines as Code

One of the early goals of Pipelines as Code on Tekton was to make sure we were able to have the project's CI running with itself.

The common use case of validating pull requests was quickly implemented, and you can find more information about it in this walkthrough video:

https://www.youtube-nocookie.com/embed/Uh1YhOGPOes

For a slightly more advanced use case, here is how we made a release pipeline for the project.

The goal is that when we tag a release and push the tag to the GitHub repo, it will:

  • Generate the release.yaml file for that version, for users to automatically kubectl apply -f- it.
  • Upload that release.yaml to a release-${version} branch.
  • Generate the tkn-pac binaries for the different operating systems.
  • Generate the GitHub release.
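Kicking all of this off is then just a matter of tagging and pushing (the version number here is only an example):

git tag -a 0.5.0 -m "Release 0.5.0"
git push origin refs/tags/0.5.0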

To be able to do so, I created a Repository CR in the pipelines-as-code-ci namespace:

apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: pipelines-as-code-ci-make-release
  namespace: pipelines-as-code-ci
spec:
  branch: refs/tags/*
  event_type: push
  namespace: pipelines-as-code-ci
  url: https://github.com/openshift-pipelines/pipelines-as-code

The key parts are the branch and event_type spec fields, which in plain English mean: handle every tag push and run it in the pipelines-as-code-ci namespace.

I then created a release-pipeline.yaml PipelineRun in my .tekton directory with the needed annotations:

    pipelinesascode.tekton.dev/on-event: "[push]"
    pipelinesascode.tekton.dev/on-target-branch: "[refs/tags/*]"

which means, in plain English, that this PipelineRun will handle all tag push events.

In my tasks I need the git-clone task and a custom version of the goreleaser task located inside my repository in .tekton/tasks/goreleaser.yaml.

The annotation for this looks like this :

pipelinesascode.tekton.dev/task: "[git-clone, .tekton/tasks/goreleaser.yaml]"

Goreleaser takes care of a lot of things for us: it compiles all the binaries and makes a release on GitHub, and it also has the ability to generate a homebrew release in openshift-pipelines/homebrew-pipelines-as-code/ so users on OSX or LinuxBrew can easily just do:

brew install openshift-pipelines/pipelines-as-code/tektoncd-pac

Uploading the release.yaml is done with a Python script I wrote for it:

https://github.com/openshift-pipelines/pipelines-as-code/blob/main/hack/upload-file-to-github.py

It will fetch the tag SHA, create a branch release-${tagversion} and push the file into it. This gives a stable branch with all the artifacts specific to that version.
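For users, grabbing that release.yaml then looks something like this (the version and raw URL layout are hypothetical here; check the project README for the real ones):

kubectl apply -f https://raw.githubusercontent.com/openshift-pipelines/pipelines-as-code/release-0.5.0/release.yaml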

After all of that, I just need to edit the release, change a few fields to make it a bit nicer, and set it as a release (by default goreleaser does a prerelease).

Here is the link to all the files :

Speed up your Tekton pipeline caching the hacky way

One thing that can wind you up when you try to iterate quickly on a PR is a slow CI.

While working on a Go project with a comprehensive test suite, the CI usually took 20 to 30 minutes to run, and being about as patient as a kid waiting for her candy floss to be ready, I am eagerly waiting to know whether my pipeline is green or not.

My pipeline wasn't that complicated; it's a typical Go project one with only a few steps, which can be described like this:

  1. Checkout the git repo in a Tekton workspace (PVC) to the commit SHA we want to be tested.
  2. Run my functional/unit tests
  3. (I don’t have e2e tests yet :) but that should be step three when it’s done)
  4. Run some linting on my code with golangci-lint
  5. Run some other linters like yamllint.

As a good cloud native citizen, Tekton spins up a new pod and attaches our PVC/workspace to it, and in each pod where a step is using Go, the Go compiler recompiles the dependencies I have vendored in my vendor directory.

I am vendoring inside my repository, so at least I don't have `go mod` downloading again and again, but the Go compilation itself can get very slow.

At first, I made a large container with a large image that had all the tools (the Tekton upstream test-runner image), which sped things up by a good amount, albeit still relatively slow compared to running the test suite on my laptop.

I decided to go the "hacky way": I would save the cache as often as possible to help the Go compiler as much as possible.

For this I would use a toy project of mine called go-simple-uploader, which is a simple Go HTTP server to upload and serve files, to "cache my cache".

First I deployed the go-simple-uploader binary inside a Kubernetes Deployment in front of a Service. The Service is accessible to every pod inside that same namespace via its service name, http://uploader:8080.
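A quick way to get such a Deployment and Service going could look like this (the image location is a placeholder for wherever you host a go-simple-uploader build listening on 8080):

kubectl create deployment uploader --image=registry.example.com/go-simple-uploader
kubectl expose deployment uploader --port=8080   # reachable as http://uploader:8080 in the namespace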

I modified my pipeline so that its first step checks if there is a cache file and uncompresses it:

- image: mirror.gcr.io/library/golang:latest
  name: get-cache
  workingDir: $(workspaces.source.path)
  script: |
     #!/usr/bin/env bash
     set -ex
     mkdir -p go-build-cache;cd go-build-cache

     curl -fsI http://uploader:8080/golang-cache.tar || {
          echo "no cache found"
          exit 0
     }

     echo "Getting cache"
     curl http://uploader:8080/golang-cache.tar|tar -x -f-

This step does a few things:

  • It uses the official golang image, because I know it will be used later on in my pipeline, so I don't care as much about having a small image, and I get bash/curl in there.
  • It uses the source workspace, where my code is checked out by the git-clone task, and creates the go-build-cache directory.
  • If it finds the file called golang-cache.tar, it uncompresses it as quickly as possible.

Later on in my pipeline, I have another task with multiple steps; I moved away from using multiple tasks because PVC attachment can get quite slow, and a Tekton task is a single pod where each step is a container. The steps look like this:

- name: unitlint
  runAfter:
    - get-cache
  taskSpec:
     steps:
      - name: unittest
        image: mirror.gcr.io/library/golang:latest
        workingDir: $(workspaces.source.path)
        script: |
            mkdir -p $HOME/.cache/ && ln -vs $(workspaces.source.path)/go-build-cache $HOME/.cache/go-build
            make test
      - name: lint
        image: quay.io/app-sre/golangci-lint
        workingDir: $(workspaces.source.path)
        script: |
            mkdir -p $HOME/.cache/ && ln -vfs $(workspaces.source.path)/go-build-cache $HOME/.cache/go-build
            mkdir -p $HOME/.cache/ && ln -vfs $(workspaces.source.path)/go-build-cache $HOME/.cache/golangci-lint
            make lint-go

This task runs after `get-cache`; the key part is where I symlink the go-build-cache directory to the $HOME/.cache/go-build directory.

Same goes for golangci-lint: I symlink its cache directory to $HOME/.cache/golangci-lint.

Later on I have another task that saves this cache:

- name: save-cache
  workingDir: $(workspaces.source.path)
  script: |
    #!/usr/bin/env bash
    curl -o/dev/null -s -f -X POST -F path=test -F file=@/etc/motd http://uploader:8080/upload || {
        echo "No cache server found"
        exit 0
    }

    lm="$(curl -fsI http://uploader:8080/golang-cache.tar | sed -n '/Last-Modified/ { s/Last-Modified: //;s/\r//; p}')"
    if [[ -n ${lm} ]]; then
        expired=$(python -c "import datetime, sys;print(datetime.datetime.now() > datetime.datetime.strptime(sys.argv[1], '%a, %d %b %Y %X %Z') + datetime.timedelta(days=1))" "${lm}")
        [[ ${expired} == "False" ]] && {
            echo "Cache is younger than a day"
            exit 0
        }
    fi

    cd $(workspaces.source.path)/go-build-cache
    tar cf - . | curl -# -L -f -F path=golang-cache.tar -X POST -F "file=@-" http://uploader:8080/upload

The task starts by checking that you have a cache server at all, and then, with the help of some shell and Python magic, uses the "Last-Modified" HTTP header to check whether the cache file was generated more than a day ago.

I am doing this because at first I was using an upload server in another environment, which could get quite slow when uploading on every run. Since we now run in the same namespace the upload is very fast, so this is probably not needed, but I am keeping it here in case someone else needs it.

It then uploads the cache file, where it will be available for subsequent runs via the first get-cache task.

Things got much faster: while my pipeline takes over 10 to 12 minutes on a first run, it gets down to about 2 minutes on the next runs. It is mostly as slow as your infra, really the time for Kubernetes to spin up the pods; it can get under a minute when I run my pipeline under Kind.

This is pretty KISS and may actually work on any pipeline that needs caching (I am looking at you, Java Maven and Node.js npm).
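For example, a hypothetical npm variant of the get-cache step would only differ in the cache name and the directory being restored:

curl -fsI http://uploader:8080/npm-cache.tar || { echo "no cache found"; exit 0; }
mkdir -p "$HOME/.npm"
curl -fs http://uploader:8080/npm-cache.tar | tar -x -f- -C "$HOME/.npm"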

Now, the best way to address this generically is probably to have a task in the tektoncd/catalog that behaves like the GitHub Actions cache, where the user can specify a parameter for the cache key, a TTL, a directory, and maybe the storage type (i.e. an object storage, a simple HTTP uploader, or even another PVC), and the task would automatically do everything at the start of your pipeline and at the end in a finally task.

Hopefully we can get this done in the near future.

NextDNS + DNSMasq DHCP and local names

This took me a little while to figure out, so here is some documentation.

The router from my ISP, which is generally pretty good, doesn't support local DNS names, which is annoying in itself. Combined with NextDNS, it means I have no way to identify the devices on my network.

So there I went and configured dnsmasq on my tiny Raspberry Pi:

port=5353
no-resolv
interface=eth0
except-interface=lo
listen-address=::1,192.168.0.3
no-dhcp-interface=
bind-interfaces
cache-size=10000
local-ttl=2
log-async
log-queries
bogus-priv
server=192.168.0.3
add-mac
add-subnet=32,128

This has the dnsmasq service listening on 192.168.0.3:5353 and forwarding everything else to 192.168.0.3 on the default port 53 (where the nextdns client will listen, as set up below).

I continued and set up my DHCP server:


dhcp-authoritative
dhcp-range=192.168.0.20,192.168.0.251,24h
dhcp-option=option:router,192.168.0.254
dhcp-name-match=set:wpad-ignore,wpad
dhcp-name-match=set:hostname-ignore,localhost
dhcp-ignore-names=tag:wpad-ignore
dhcp-mac=set:client_is_a_pi,B8:27:EB:*:*:*
dhcp-reply-delay=tag:client_is_a_pi,2
dhcp-option=option:dns-server,192.168.0.3
dhcp-option=option:domain-name,lan

domain=lan
dhcp-option=option6:dns-server,[::]
dhcp-range=::100,::1ff,constructor:eth0,ra-names,slaac,24h
ra-param=*,0,0

Standard DHCP really; just make sure you set the router option to your local router, here it's 192.168.0.254 in my config.

I then configured the nextdns client on 192.168.0.3, on the default DNS port 53:

cache-size 0
report-client-info true
setup-router false
log-queries true
config CONFIG_ID_FROM_NEXTDNS_GET_IT_FROM_THERE
cache-max-age 0s
timeout 5s
control /var/run/nextdns.sock
forwarder .lan.=192.168.0.3:5353
max-ttl 5s
discovery-dns 192.168.0.3:5353
hardened-privacy false
bogus-priv true
auto-activate false
listen 192.168.0.3:53
use-hosts true
detect-captive-portals false

The key settings are discovery-dns, which means nextdns will try to discover the local names (to display on the NextDNS web UI) from the local dnsmasq server, and the forwarder, which resolves all .lan domains via that same dnsmasq server.
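To check the whole chain, you can query both listeners directly (the hostname here is just an example of a device that got a DHCP lease):

dig @192.168.0.3 -p 5353 mylaptop.lan   # ask dnsmasq directly
dig @192.168.0.3 mylaptop.lan           # ask the nextdns client, which forwards .lan back to dnsmasq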

And that’s it…. Hope this helps.

batzconverter – A multiple timezone converter

I write a lot of scripts to automate my day-to-day workflow; some of them took me 3 hours to write and saved me only 5 minutes once, and some others I wrote in about 5 minutes but they save me hours and hours of productivity.

The script shown today, which I am proud of because of its usefulness and probably not its code, is called "batzconverter" and is available on GitHub. What the script tries to solve: when you work with a team spread around 3 or 4 timezones, how do you schedule a meeting easily?

It's a simple ~200-line shell script that leverages the very powerful GNU date.

In its most simple form, when you type the command batz, it shows all the timezones (which you can configure) at the current time, with some easily identified emojis. The 🏠 emoji is there to show the user's home location.

But you can do way more: let's say you want to show all the timezones for a meeting tomorrow at 13h00, it will just do that and show it.

Same goes for a specific date.

You can do some extra stuff, like quickly adding another timezone that isn't configured.

Or give another timezone as the base for conversion; the airplane emoji ✈️ is there to let you know that you are showing another target timezone.
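The invocations look roughly like this (syntax from memory, check the README for the exact details):

batz                       # all configured timezones at the current time
batz tomorrow 13h00        # tomorrow's 13h00 meeting across timezones
batz 2021-11-25 10h00      # a specific date
batz 10h00 Europe/Paris    # another timezone as the base for conversion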

Easy peasy to use, no frills, no bells, just usefulness…

There is another flag called "-j" that allows you to output JSON; it was implemented to be able to plug it into the awesome Alfred app on OSX as a workflow.

But it doesn't have to be for Alfred: the JSON output can be used for any other integration.
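So, for example, anything that consumes JSON can plug into it:

batz -j | jq .   # pretty-print the JSON for whatever integration you fancy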

Configuration is pretty simple: you just need to set the timezones you would like to have in a file located in ~/.config, along with the emojis associated with them (cause visuals are important! insert rolled-eyes emoji here).

Head over to github.com/chmouel/batzconverter to learn how to install and configure it, and feel free to let me know about any suggestions or issues you have while using it.

Building packages for multiple distros on launchpad with docker

I have been trying to build packages for the Ubuntu distros for a new program I have released, gnome-next-meeting-applet.

In short, it was quite painful! If you are new to the Launchpad and Debian packaging ways (which I wasn't, and yet it took me some time to figure out) you can get quite lost. I have to say that the Fedora COPR experience is much smoother. After a couple of frustrated Google and Stack Overflow searches and multiple tries, I finally figured out a script that builds and uploads properly to Launchpad via Docker, to make the packages available to my users.

  1. The first rule of uploading to Launchpad is to properly set up your GPG key on your account, making sure it matches what you have locally.
  2. The second rule is to make sure every new upload increases the version over the previously uploaded one, or it will be rejected.
  3. The third rule is to be patient or to work on week-ends, because the queue can be quite slow.

Now the magic is happening in this script :

https://github.com/chmouel/gnome-next-meeting-applet/blob/742dbe48795c0151411db69065fdd773762100e1/debian/build.sh#L20-L41

We have a Dockerfile with all the dependencies we need for building the package in this file :

https://github.com/chmouel/gnome-next-meeting-applet/blob/a2785314365c51200935ad63c38f490c597989c9/debian/Dockerfile

When we launch the script, in the main loop we rewrite the FROM line to point to the targeted distro Docker tag (here I have LTS and ROLLING) and we start the container build.

When it's done, we mount our current source as a volume inside the container, along with our ~/.gnupg mounted to the build user's gnupg directory inside the container.

With dch we increase the version for the targeted distro, appending the distro target after a "~" following the release number, like this: "0.1.0-1~focal1".

We finish with the upload via dput, and Launchpad *should* then send you an email saying it was accepted.
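Put together, the per-distro steps inside the container look roughly like this (a sketch; the PPA name and version are placeholders):

dch -v "0.1.0-1~focal1" -D focal "Build for focal"
debuild -S -sa    # build and sign the source package
dput ppa:youruser/yourppa ../gnome-next-meeting-applet_0.1.0-1~focal1_source.changes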

After waiting a bit, your package should be built for the multiple distributions.

Tekton yaml templates and script feature

Don't you love "yaml"? Yes you do! Or at least that's what the industry told you to love!

When you were in school your teacher told you about "XML" and how it would solve all the industry's problems (and there were many in the late 90s). But you learned that you hate reaching for your "<" and ">" keys and would rather have something else. So then the industry came up with "json" so computers or yourself can talk to each other; that's nice for computers but actually not so nice for yourself. It was actually a lie: it was not made for you to read and write but only for computers. So then the "industry" came up with yaml. Indentation based? You get it, and that's, humm, about it. Now you are stuck counting whitespace in a 3000-line file trying to figure out what goes where…

Anywoo, ranting about computer history is not the purpose of this blog post. Like every other cloud native (sigh) component out there, Tekton uses yaml to let the user describe the executions it operates. There is a very nice feature in there (no sarcasm, it really is nice!) that allows you to embed "scripts" directly in tasks. Instead of like before, where you had to build a container image with your script and run that image from Tekton, you can now just embed the script directly in your "Task" or "Pipeline".

All good, all good, that's very nice and dandy, but when you start writing a script that goes over 5 lines you get into the territory where you have a ~1000-line script embedded in 2000 lines of yaml (double sigh).

You can go back to the old way and the development workflow of:

write -> commit -> push -> build image -> push -> update tag -> start task

and realize that you are losing approximately 40 years of your soul to some boring and repetitive tasks.

So now that I am done talking to myself with this way-too-long preamble, here is the real piece of information in this post: a script that, like everything in your life, works around the real issue.

It’s available here :

https://github.com/chmouel/chmouzies/blob/master/work/tekton-script-template.sh

The idea is that if you have in your template a tag saying #INSERT filename, it gets replaced by the content of the file. It's dumb and stupid but makes developing your yaml much more pleasing… so if you have something like:

image: foo
script: |
#INSERT script.py

the script will see this and insert the file script.py into your template. It respects the previous line's indentation and adds four extra spaces to indent the script, and you can have as many INSERTs as you want in your template….
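As an illustration, a minimal version of the same idea fits in a few lines of awk (a simplified sketch, not the actual script; for one thing it indents from the #INSERT line itself rather than the previous line):

awk '
  /^[[:space:]]*#INSERT / {
      indent = $0; sub(/#INSERT.*/, "", indent)   # keep the leading whitespace
      while ((getline line < $2) > 0) print indent "    " line
      close($2)
      next
  }
  { print }
' template.yaml > rendered.yaml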

Now you can edit your code in script.py and your yaml in the yaml template… win win: separation of concerns, sanity, happy dance and emojis and all…

Deploying minishift on a remote laptop.

Part of my new job working with Fabric8 is to have it deployed via minishift.
Everything is nice and working (try it, it's awesome: https://fabric8.io/guide/getStarted/gofabric8.html) as long as you deploy it on your local workstation.

The thing is that my macOS desktop laptop has only 8GB of RAM and is not quite up to the task of getting all the services deployed while my web browser and other stuff hog the memory. I would not do it on a remote VM either, since I want to avoid the nested virtualisation part that may slow things down even more.

Thankfully I have another Linux laptop with 8GB of RAM which I use for my testing, and I wanted to deploy minishift on it and access it from my desktop laptop.

This is not as trivial as it sounds, but thanks to minishift's flexibility there is a way to set this up.

So here is the magic command line :

minishift start --public-hostname localhost --routing-suffix 127.0.0.1.nip.io

What do we do here? We bind everything to localhost and 127.0.0.1. What for, you may ask? Because we are then going to use it via SSH. First you need to get the minishift IP:


$ minishift ip
192.168.42.209

and now, since in my case it's the 192.168.42.209 IP, I am going to forward it over SSH:


sudo ssh -L 443:192.168.42.209:443 -L 8443:192.168.42.209:8443 username@host

Change the username@host and the 192.168.42.209 to your own values. I use sudo here since forwarding the privileged 443 port needs root access.
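Before opening the browser, a quick sanity check of the tunnel from the desktop laptop (the /healthz endpoint is an assumption of mine about the master API; -k because the certificate is self-signed):

curl -k https://localhost:8443/healthz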

When this is done, if the stars were aligned in the right direction when you typed those commands, you should be able to see the fabric8 login page.

Getting a letsencrypt SSL certificate for the OpenShift console and API

By default, when you do an OpenShift install it automatically generates its own certificates.

It uses those certificates for communication between nodes, as well as to automatically auth the admin account. By default those same certificates are the ones served for the OpenShift console and API.

Since they are auto-generated, when connecting to the website with your web browser you will get an ugly certificate error message,


and as the error message says that’s not very secure #sadpanda.

These days there is an easy way to generate certificates: letsencrypt. So let's see how to hook it up to the OpenShift console.

There is something to understand first here: when you want to use an alternate SSL certificate for your console and API, you can't do that on your default (master) URL; it has to be another URL. The official documentation phrases it the same way.


With that in mind, let's assume you have set up a domain as a CNAME to your default domain. For myself, since this is a test install, I went the easy way and used the xip.io service as documented in an earlier post. This easily gives me a domain which looks like this:

lb.198.154.189.125.xip.io

So now that you have defined it, you first need to generate the letsencrypt certificate. Usually you would use certbot from RHEL EPEL to generate it, but unfortunately at the time of writing this blog post the package was uninstallable for me, which will probably get fixed soon. In the meantime I have used letsencrypt from Git directly, like this:

$ git clone https://github.com/letsencrypt/letsencrypt

Before you do anything, you need to understand the letsencrypt process: usually you would have an apache or nginx (etc…) serving the generated files for verification (the /.well-known/ thing). Since we can't do that on the OpenShift master, we can use the letsencrypt builtin webserver instead.

But the builtin webserver needs to bind to port 80, and for us on the master the router is running and already binds to it (and to 443), so you need to make sure the router is down first. The most elegant way to do that with OpenShift is like this:

$ oc scale --replicas=0 dc router

Now that you have nothing on port 80, you can tell letsencrypt to do its magic with this command line:

$ ./letsencrypt-auto --renew-by-default -a standalone --webroot-path /tmp/letsencrypt/ --server https://acme-v01.api.letsencrypt.org/directory --email email@email.com --text --agree-tos --agree-dev-preview -d lb.198.154.189.125.xip.io auth

Change the lb.198.154.189.125.xip.io here to your own domain, as well as the email address. If everything goes well, letsencrypt should report that the certificates were successfully generated.


now you should have all the certificates needed in /etc/letsencrypt/live/${domain}
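That directory should contain the usual letsencrypt symlinks:

$ ls /etc/letsencrypt/live/lb.198.154.189.125.xip.io/
cert.pem  chain.pem  fullchain.pem  privkey.pem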

So there is a little caveat here: there is currently a bug in openshift-ansible with symlinks and certificates and the way it operates. I have filed the bug and it has already been fixed in Git, so hopefully by the time you read this article it will be fixed in the openshift-ansible RPM; if it's not, you can use openshift-ansible directly from Git instead of the package.

Now you just need to add some configuration in your /etc/ansible/hosts file:

openshift_master_cluster_public_hostname=lb.198.154.189.125.xip.io
openshift_master_named_certificates=[{"certfile": "/etc/letsencrypt/live/lb.198.154.189.125.xip.io/fullchain.pem", "keyfile": "/etc/letsencrypt/live/lb.198.154.189.125.xip.io/privkey.pem", "names":["lb.198.154.189.125.xip.io"]}]
openshift_master_overwrite_named_certificates=true

After you run your playbook (with ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml) you should have it running properly, and now when accessing the console you should see the reassuring secure lock.


NB:

  • If you need to renew the certs, just redo the steps where you oc scale the router down quickly and renew the certificate with the letsencrypt-auto command line mentioned earlier.
  • There is probably a more elegant way to do that with a container and a route. I saw this on Docker Hub, but it seems to be tailored to apps (and Kube) and I don't think it could be used for the OpenShift console.
  • Don't forget to oc scale --replicas=1 dc/router (even though the ansible rerun should have done it for you).