User namespaces with Buildah and OpenShift Pipelines

In 2022, one of the hottest CI topics is how to secure every step along the way.

So-called supply chain attacks have become more and more of an attack vector for bad actors, and providers need to make sure every piece of the integration is secure.

One area identified for improvement with OpenShift, and containers in general, is running as root: a container running as root may expose the host, and a process in that container may be able to meddle with other resources.

And this is a problem for us on the OpenShift Pipelines team, since this is what our shipped buildah ClusterTask does by default.

Ideally we would like to run everything rootless with a randomized user id, like OpenShift does by default. But there are some use cases that need to run as root in the container while still being secure on the host.

In this article we will focus on Buildah, but some of these techniques can be reused for different workloads. Bear in mind that some of these technologies are pretty bleeding edge and you may encounter unexpected side effects, but for the sake of a secure pipeline I would encourage you to try this out and see how it works for you.

Running as root in a user namespace

Running in a user namespace means you are running the container as root (user id 0) inside the container, but as a regular user on the host.

OpenShift uses CRI-O as its container engine, and CRI-O now has a workloads feature that lets a pod declare that it wants to run “user namespaced”.

User namespaces are not a new feature; they have been around for a while, see for example this 2013 LWN article introducing them. But it took some time for them to come to RHEL (from RHEL 8) and to be integrated into OpenShift (support was added in OpenShift 4.10). If you want to try this feature manually you can refer to this article explaining how it works.

To run in a user namespace with OpenShift Pipelines, you need to be able to pass the annotations expected by CRI-O to get the pods running user namespaced.

You can directly edit the buildah ClusterTask with “oc edit clustertask buildah” to apply this for all users, or export the ClusterTask as a Task into a specific namespace.

With the latest tkn task create --from feature you can easily automate it :

$ oc new-project test
$ tkn task create --from=buildah
$ oc edit task buildah

And add the annotations to the task :

  metadata:
    annotations:
      io.kubernetes.cri-o.userns-mode: "auto"
      io.openshift.builder: "true"

Now if you are running the (currently unreleased) latest OpenShift Pipelines 1.7 on OpenShift 4.10, you will see your buildah container running as root like before.

But if you venture behind the scenes onto the host running the pod :

# get the nodename where the buildah pod is with "oc get pod -o wide"
$ oc debug nodes/nodename
$ chroot /host
$ lsns -t user

OpenShift and CRI-O did their magic and ran your pod in a “user namespace”.

Running rootless in the container

To be able to run rootless inside a container, where you run as a non-root user inside the container, you need your container image to do some setup to add the subuid and subgid provided by the latest shadow-utils package.

Thankfully the public image based on the Red Hat UBI has everything already set up for us in its Containerfile.

You have an extra step to do though: you need to “force” running the container as the “build” user, since if we leave it to the default it will get a random user id, which does not have the setup needed to run as non-root.

Edit directly, or export to a specific namespace, the buildah ClusterTask as described in the previous section, and add this securityContext to each of the build and push steps :

  securityContext:
    runAsUser: 1000

And for demo purposes I have added this line at the beginning of the task script :

echo "Running as USER ID $(id)"

I then created a ConfigMap workspace with a sample Dockerfile and a sample TaskRun to be able to test it.

When running it, I can see my build is running as a user :

% tkn tr logs -Lf buildah-run
[build] ++ id
[build] Running as USER ID uid=1000(build) gid=1000(build) groups=1000(build),1000690000
[build] + echo 'Running as USER ID uid=1000(build) gid=1000(build) groups=1000(build),1000690000'
[build] + buildah --storage-driver=vfs bud --format=oci --tls-verify=true --no-cache -f ./Dockerfile -t image-registry.openshift-image-registry.svc:5000/test/buildahuser .
[build] STEP 1: FROM AS buildah-runner
[build] Getting image source signatures
[build] Checking if image destination supports signatures
[build] Copying blob sha256:adffa69631469a649556cee5b8456f184928818064aac82106bd08bd62e51d4e
[build] Copying blob sha256:26f1167feaf74177f9054bf26ac8775a4b188f25914e23bda9574ef2a759cce4
[build] Copying config sha256:fca12da1dc30ed8e7d03afb84b287fc695673fff9c04bfcb2ff404b558670a36
[build] Writing manifest to image destination
[build] Storing signatures
[build] STEP 2: RUN dnf -y update && dnf -y install git && dnf clean all
[build] Updating Subscription Management repositories.

More security with a custom SCC

Now, these steps work because OpenShift Pipelines currently runs all TaskRuns/PipelineRuns automatically as the pipeline serviceAccount, which is bound to the pipelines-scc SecurityContextConstraints that allow running as any user.

We added those rights to make it easy to migrate pipelines that need to run or build as root, but if you would like to further lock down your installation, you can allow running containers as user 1000 only when the pod asks for it (or the container image forces it), and not let containers run as root by default.

To do so you need to edit the pipelines-scc and modify the runAsUser and seLinuxContext sections to MustRunAs with uid 1000 :

  runAsUser:
    type: MustRunAs
    uid: 1000
  seLinuxContext:
    type: MustRunAs

The pipeline serviceaccount will then only allow running images as user 1000 and not as root anymore; containers that do not request a user id will be assigned user 1000.

Templates examples

This gist references all the files I mention in this article

  • Taskrun and Dockerfile configmap workspace:

For the love of centered windows gnome extension edition

It feels weird, or great, or stupid, or pretty smart, or whatever, to be wrong. Just as I wrote that previous blog post, I realized that shell script doesn’t work great on Wayland.

I didn’t really understand how Wayland works and just assumed that my tiny script would just work. But after seeing it not working on a native Wayland application, and understanding how Wayland works, it obviously needed a better way if I have to keep up with the modern world of the Linux desktop.

So I spent a few hours trying to understand how to make a GNOME Shell extension to replicate this, since with the Wayland architecture I don’t think there is any other way to replicate this feature.

I got inspired by the Tactile extension, and when I say inspired I pretty much copied a lot of code from there (my JavaScript and GTK dev skills are near zero so I truly needed help).

The result is a nice extension that does exactly the same thing as my script and works exactly as wanted.

I even got a nice gif to show for it :

If you are interested, feel free to grab it from the GNOME extensions website :

Just press Super (or Windows) + C to center the current window, or Super/Win + Shift + C to rotate the window around.

For the love of centered windows

Sometime during the 2020 lockdown, my work decided to give us some money to buy work-from-home office items.

I didn’t need much at that time since I was already settled with everything I needed in my home office, but I decided to go for a fancy new screen, since why not, and the other one (a standard 24″ display) could find some good use with my teenage gamer son.

The chosen screen is a Samsung Ultra Wide display with a beautiful model name called S34J550WQU.

At first I was thrilled; the display looks nice and it seemed I had much more screen estate to look at than before.

But in practice it’s not like this. This screen is so wide that if I start doing some work in the windows located on the right and on the left, I actually need to turn my neck or move my head left to right to look at them. After a while it became quite irritating and gave me serious neck ache and stiffness.

After experimenting for a short while, I decided that a centered window was the way to go, so I was painfully resizing windows and moving them around manually as needed. There were other tools I found trying to automate those tasks but they never helped much.

Fast forward 3 months later, I had happily switched back to Linux with the promises of those tiling window managers where windows get tiled and rearranged exactly the way I would want.

But I never got into tiling window managers and never really understood them. Before that ultra-wide display, I was mostly a one-large-window person with fast Alt-<Tab> fingers. I would have my Emacs (or browser/terminal) in full screen, which was just right to look at without moving my head since the screen was smaller, and I could focus on the task at hand without having five of them at the same time.

Tiling window managers can be customized a lot, but no matter what layout I tried I never could get it right.

I gave up trying to adjust my workflow to i3/xmonad, partly because I am definitively comfortable with the GNOME desktop and actually quite enjoy its simplicity.

For a while I was thinking of selling that ultra-wide monitor and getting a smaller screen, but like a lot of things in life I was able to adjust my workflow with a set of keybindings and a shell script.

I got inspired by this unix.stackexchange answer. With the help of xwininfo and wmctrl, my script identifies the current window and rotates it to the left of the screen, to the right, or to my preferred placement for working: the center. So essentially when I have my editor (Emacs), a browser, and a terminal, I can move the windows around, center the one I want, and easily move the non-active ones to the sides.
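The centering part is not much more than arithmetic plus a wmctrl call; here is a minimal sketch of the idea (not my exact script, and the 40%/95% ratios are made up for illustration, tune them to taste):

```shell
#!/usr/bin/env bash
# Sketch: compute a centered geometry string for the active window and
# feed it to wmctrl. X11 only; the width/height ratios are assumptions.
center_geometry() {
    local screen_w=$1 screen_h=$2
    local win_w=$((screen_w * 40 / 100))   # a bit wider than 1/3 of the screen
    local win_h=$((screen_h * 95 / 100))
    local x=$(((screen_w - win_w) / 2))
    # gravity,x,y,width,height as expected by wmctrl -e
    echo "0,${x},0,${win_w},${win_h}"
}

# on a real X11 desktop you would then run something like:
#   read -r w h < <(xdotool getdisplaygeometry)
#   wmctrl -r :ACTIVE: -e "$(center_geometry "$w" "$h")"
```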

I realize now I kinda replicated an (extremely simplified) tiling window manager with the only feature I want. The advantage of my method is that I can have it exactly the way I want. The thing is that I don’t want a perfect 1/3-of-the-screen window: the window in the center is a few pixels more than that and covers over the side windows, and that is trivial to do in a script, compared to trying to figure out how to do it in i3.

Another win for the perfectly aligned windows desktop.

(I am quite new to Wayland, but to my surprise the script still works on Wayland too.)

How to make a release pipeline with Pipelines as Code

One of the early goals of Pipelines as Code on Tekton was to make sure the project CI could run with itself.

The common use case of validating pull requests was quickly implemented and you can find more information about it in this walkthrough video :

For a slightly more advanced use case, here is how we made a release pipeline for the project.

The goal is that when we tag a release and push the tag to the GitHub repo, it will :

  • Generate the release.yaml file for that version, for users to automatically kubectl apply it.
  • Upload that release.yaml to a release-${version} branch.
  • Generate the tkn-pac binaries for the different operating systems.
  • Generate the GitHub release.

To be able to do so, I created a Repository CR in the pipelines-as-code-ci namespace:

  apiVersion: "pipelinesascode.tekton.dev/v1alpha1"
  kind: Repository
  metadata:
    name: pipelines-as-code-ci-make-release
    namespace: pipelines-as-code-ci
  spec:
    branch: refs/tags/*
    event_type: push
    namespace: pipelines-as-code-ci

The key parts are the branch and event_type spec fields, which in plain English mean: handle all tag pushes, and run them in the namespace pipelines-as-code-ci.

I then created a release-pipeline.yaml PipelineRun in my .tekton directory with the needed annotations, matching "[push]" events on "[refs/tags/*]", which means in plain English that this PipelineRun will handle all pushed-tag events.

For my tasks I need the git-clone task and a custom version of the goreleaser task located inside my repository in .tekton/task/goreleaser.yaml.

The annotation for this looks like this : "[git-clone, .tekton/tasks/goreleaser.yaml]"
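Put together, the top of the PipelineRun looks something like this (the annotation names here are taken from the Pipelines as Code documentation, so double-check them against the version you are running):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: release-pipeline
  annotations:
    pipelinesascode.tekton.dev/on-event: "[push]"
    pipelinesascode.tekton.dev/on-target-branch: "[refs/tags/*]"
    pipelinesascode.tekton.dev/task: "[git-clone, .tekton/tasks/goreleaser.yaml]"
```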

Goreleaser takes care of a lot of things for us: it compiles all the binaries and makes a release on GitHub, and it also has the ability to generate a Homebrew release in openshift-pipelines/homebrew-pipelines-as-code/ so users on macOS or LinuxBrew can easily just do :

brew install openshift-pipelines/pipelines-as-code/tektoncd-pac

Uploading the release.yaml is done with a Python script I wrote for it :

It fetches the tag SHA, creates a branch release-${tagversion}, and pushes the file into it. This gives a stable branch with all the artifacts specific to that version.
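The actual script is in Python (linked above); the equivalent logic, sketched in shell so you can see what it does (branch and file names as described, everything else is my assumption):

```shell
#!/usr/bin/env bash
# Sketch of the upload logic: find the SHA the tag points to, create a
# release-${version} branch from it, and push release.yaml onto it.
make_release_branch() {
    local version=$1 sha
    sha=$(git rev-parse "refs/tags/${version}")     # the tag SHA
    git checkout -b "release-${version}" "${sha}"   # stable branch for that version
    git add release.yaml
    git commit -m "Add release.yaml for ${version}"
    git push origin "release-${version}"
}
```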

After all of that, I just need to edit the release, change a few fields to make it a bit nicer, and set it as a release (by default goreleaser does a prerelease).

Here is the link to all the files :

Speed up your tekton pipeline caching the hacky way

There is one thing that can really wind you up when you try to iterate quickly on a PR: a slow CI.

While working on a Go project with a comprehensive test suite, it was usually taking over 20 to 30 minutes to run, and being as patient as a kid waiting for her candy floss to be ready, I am eagerly waiting to know whether my pipeline is green or not.

My pipeline wasn’t that complicated; it’s a typical Go project one with only a few steps, which can be described like this :

  1. Checkout the git repo in a Tekton workspace (PVC) to the commit SHA we want to be tested.
  2. Run my functional/unit tests
  3. (I don’t have e2e tests yet :) but that should be step three when it’s done)
  4. Run some linting on my code with golangci-lint
  5. Run some other linters like yamllint.

As a good cloud native citizen, Tekton spins up a new pod and attaches our PVC/workspace to it, and in each pod where a step uses Go, the Go compiler recompiles the dependencies I have vendored in my vendor directory.

I am vendoring inside my repository, so at least I don’t have `go mod` downloading again and again, but the Go compilation can still get very slow.

At first, I used a large container image that had all the tools (the Tekton upstream test-runner image), which sped things up by a good amount, albeit still relatively slow compared to running the test suite on my laptop.

I decided to go the “hacky way”: I would save the cache as often as possible, to help the go compiler as much as possible.

This uses a toy project of mine called go-simple-uploader, a simple Go HTTP server to upload and serve files, to “cache my cache”.

First I deployed the go-simple-uploader binary inside a Kubernetes Deployment in front of a Service. The service is accessible from every pod inside that same namespace via its service name, as “http://uploader:8080”.
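For reference, the Service side of it is nothing fancy; a minimal sketch (the labels and selector names are my assumptions, only the service name and port come from the article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: uploader
spec:
  selector:
    app: uploader
  ports:
    - port: 8080
      targetPort: 8080
```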

I modified my pipeline so that the first step checks if there is a cache file and uncompresses it :

- name: get-cache
  image: golang
  workingDir: $(workspaces.source.path)
  script: |
     #!/usr/bin/env bash
     set -ex
     mkdir -p go-build-cache && cd go-build-cache

     curl -fsI http://uploader:8080/golang-cache.tar || {
          echo "no cache found"
          exit 0
     }

     echo "Getting cache"
     curl http://uploader:8080/golang-cache.tar | tar -x -f-

This step does a few things :

  • It uses the official golang image, because I know it will be used later in my pipeline so I don’t care as much about having a small image, and I get bash/curl in there.
  • It uses the source workspace where my code is checked out by the git-clone task, and creates the go-build-cache directory.
  • If it finds the file called golang-cache.tar, it uncompresses it as quickly as possible.

Later on in my pipeline, I have another task with multiple steps. I have moved away from using multiple tasks because PVC attachment can get quite slow; a Tekton task is a single pod where each step is a container. The steps look like this :

- name: unitlint
  runAfter:
    - get-cache
  taskSpec:
    steps:
      - name: unittest
        workingDir: $(workspaces.source.path)
        script: |
            mkdir -p $HOME/.cache/ && ln -vs $(workspaces.source.path)/go-build-cache $HOME/.cache/go-build
            make test
      - name: lint
        workingDir: $(workspaces.source.path)
        script: |
            mkdir -p $HOME/.cache/ && ln -vfs $(workspaces.source.path)/go-build-cache $HOME/.cache/go-build
            mkdir -p $HOME/.cache/ && ln -vfs $(workspaces.source.path)/go-build-cache $HOME/.cache/golangci-lint
            make lint-go

This task runs after `get-cache`. The key part is where I symlink the go-build-cache directory to the $HOME/.cache/go-build directory.

Same goes for golangci-lint: I symlink its cache directory to $HOME/.cache/golangci-lint.

Later on I have another task that saves this cache :

- name: save-cache
  workingDir: $(workspaces.source.path)
  script: |
    #!/usr/bin/env bash
    curl -o/dev/null -s -f -X POST -F path=test -F file=@/etc/motd http://uploader:8080/upload || {
        echo "No cache server found"
        exit 0
    }

    lm="$(curl -fsI http://uploader:8080/golang-cache.tar|sed -n '/Last-Modified/ { s/Last-Modified: //;s/\r//; p}')"
    if [[ -n ${lm} ]];then
        expired=$(python -c "import datetime, sys;print(datetime.datetime.utcnow() > datetime.datetime.strptime(sys.argv[1], '%a, %d %b %Y %X %Z') + datetime.timedelta(days=1))" "${lm}")
        [[ ${expired} == "False" ]] && {
            echo "Cache is younger than a day"
            exit 0
        }
    fi

    cd $(workspaces.source.path)/go-build-cache
    tar cf - . | curl -# -L -f -F path=golang-cache.tar -X POST -F "file=@-" http://uploader:8080/upload

The task starts by checking that you have a cache server, and then, with the help of some shell and Python magic on the “Last-Modified” HTTP header, checks whether the cache file was generated more than a day ago.

I am doing this because at first I was using an upload server in another environment, which could get quite slow to upload to on every run. Since we now run in the same namespace the upload is very fast, so this is probably not needed, but I am keeping it here in case someone else needs it.

It then uploads the cache file, where it will be available for subsequent runs via the first get-cache task.

Things got much faster: my pipeline went from over 10-12 minutes on the first run down to about 2 minutes on the next runs. It is now only as slow as your infra, really the time for Kubernetes to spin up the pods; it can get under 1 minute when I run my pipeline under Kind.

This is pretty KISS and may actually work for any pipeline that needs caching (I am looking at you, Java Maven and Node.js npm).

Now the best way to address this in a generic manner is probably to have a task in the tektoncd/catalog that behaves like the GitHub Actions cache, where the user can specify a parameter for the cache key, a TTL, a directory, and maybe the storage type (i.e. an object storage, a simple HTTP uploader, or even another PVC), and the task automatically does everything at the start of your pipeline and at the end in a finally task.

Hopefully we can get this done in the near future.

NextDNS + DNSMasq DHCP and local names

It took me a little while to figure this out, so here is some documentation.

My router from my ISP, which is generally pretty good, doesn’t support local DNS names, which is annoying in itself. Combined with NextDNS, I had no way to identify the devices on my network.

So there I went and configured dnsmasq on my tiny Raspberry Pi :


This would have the dnsmasq service listening on and forward everything to
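As a sketch of what that looks like (the addresses and port here are placeholders I made up, adapt them to your own network), the dnsmasq side is along these lines:

```
# /etc/dnsmasq.conf (sketch, placeholder values)
port=5353                  # leave port 53 free for the nextdns client
listen-address=127.0.0.1
domain=lan                 # local names resolve under .lan
local=/lan/                # never forward .lan queries upstream
expand-hosts
server=127.0.0.1#53        # forward everything else to the nextdns client
```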

I continued and set up my DHCP server :



Standard DHCP really; just make sure you set the router option to your local router, here it’s 0.254 in my config.
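The DHCP section of the dnsmasq config then looks roughly like this (the range is a placeholder; the router option matches the 0.254 mentioned above, assuming a 192.168.0.0/24 network):

```
# DHCP section of /etc/dnsmasq.conf (sketch)
dhcp-range=192.168.0.100,192.168.0.200,24h
dhcp-option=option:router,192.168.0.254   # your local router
dhcp-authoritative
```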

I then configured the NextDNS client on the default DNS port 53 :

cache-size 0
report-client-info true
setup-router false
log-queries true
cache-max-age 0s
timeout 5s
control /var/run/nextdns.sock
forwarder .lan.=
max-ttl 5s
hardened-privacy false
bogus-priv true
auto-activate false
use-hosts true
detect-captive-portals false

The key setting is the discovery-dns one: it means NextDNS will try to discover the local names to display on the NextDNS web UI, and resolve all lan domains via the local dnsmasq server.

And that’s it…. Hope this helps.

batzconverter – A multiple timezone converter

I write a lot of scripts to automate my day-to-day workflow; some of them I spend 3 hours writing to save 5 minutes only once, and some others I write in about 5 minutes but they save me hours and hours of productivity.

The script shown today, which I am proud of because of its usefulness (and probably not its code), is called “batzconverter” and is available on GitHub. What the script tries to solve: when you work with your team spread around 3 or 4 timezones, how do you schedule a meeting easily?

It’s a simple ~200-line shell script that leverages the very powerful GNU date.
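The core trick it builds on is simply GNU date's TZ handling, something like this (my sketch of the idea, not batz's actual code):

```shell
#!/usr/bin/env bash
# Render one moment in several timezones with GNU date (sketch):
# parse the time once, then print it under different TZ values.
show_time() {
    local when=$1 tz
    for tz in Europe/Paris America/New_York Asia/Kolkata; do
        printf '%-20s %s\n' "${tz}" "$(TZ=${tz} date -d "${when}" '+%a %d %b %H:%M')"
    done
}
show_time "tomorrow 13:00 UTC"
```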

In its most simple form, when you type the command batz you get this :

This shows all timezones (which you can configure) at the current time, with some easily identified emojis. The 🏠 emoji on the right shows the location of the user.

But you can do way more; let’s say you want to show all timezones for a meeting tomorrow at 13h00 :

It will just do that and show it.

Same goes for a specific date :

You can do some extra stuff, like quickly adding another timezone that isn’t configured :

Or give another timezone as the base for the conversion; the airplane emoji ✈️ here is to let you know that you are showing another target timezone :

Easy peasy to use: no frills, no bells, just usefulness…

There is another flag called “-j” that allows you to output JSON; it was implemented to be able to plug into the awesome Alfred app on macOS as a workflow :

But it doesn’t have to be for Alfred; the JSON output can be used for any other integrations.

Configuration is pretty simple: you just need to list the timezones you would like to have in a file located in ~/.config, along with the emojis associated with them (cause visuals are important! insert rolled-eyes emoji here).

Head over to github chmouel/batzconverter to learn how to install and configure it, and feel free to let me know about any suggestions or issues you have using it.

Building packages for multiple distros on launchpad with docker

I have been trying to build some packages of a new program I released, gnome-next-meeting-applet, for the Ubuntu distros.

In short, it was quite painful! If you are new to the Launchpad and Debian packaging ways (which I wasn’t, and yet it took me some time to figure out) you can get quite lost. I have to say that the Fedora COPR experience is much smoother. After a couple of frustrated Google and StackOverflow searches and multiple tries, I finally figured out a script that builds and uploads properly to Launchpad via Docker, to make the packages available to my users.

  1. The first rule of uploading to Launchpad is to properly set up your GPG key on your account, making sure it matches what you have locally.
  2. The second rule is making sure each new upload increments the previously uploaded version, or it will be rejected.
  3. The third rule is to be patient or work on the week-end, because the queue can be quite slow.

Now the magic is happening in this script :

We have a Dockerfile with all the dependencies we need for building the package in this file :

When we launch the script, in the main loop we modify the FROM to point to the targeted distro Docker tag (here I have LTS and ROLLING) and we start the container build.

When it’s done, we mount our current source as a volume inside the container and mount our ~/.gnupg to the build user’s gnupg inside the container.

With dch we increment the version for the targeted distro, adding after the release number the distro target after a “~”, like this: “0.1.0-1~focal1”.
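That versioning step can be sketched like this (the dch/dput invocations in the comments are from their man pages, and the distro names and PPA are placeholders, not my actual script):

```shell
#!/usr/bin/env bash
# Build one upload per distro: suffix the Debian revision with ~${distro}1
# so each distro gets its own distinct version string.
pkg_version="0.1.0-1"
for distro in focal hirsute; do
    version="${pkg_version}~${distro}1"
    echo "${version}"
    # inside the container we would then run something like (sketch):
    #   dch --newversion "${version}" --distribution "${distro}" "Build for ${distro}"
    #   debuild -S -sa
    #   dput ppa:youruser/yourppa ../*_source.changes
done
```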

We finish the upload with dput, and Launchpad *should* then send you an email saying it was accepted.

After waiting a bit, your package should be built for the multiple distributions.

Tekton yaml templates and script feature

Don’t you love “yaml”? Yes you do! Or at least that’s what the industry told you to love!

When you were in school your teacher told you about “XML” and how it would solve all the industry's problems (and there were many in the late 90s). But you learned that you hate reaching for your “<” and “>” keys and would rather have something else. So then the industry came up with “json” so computers or yourself can talk to each other. That’s nice for computers, but actually not so nice for yourself: it was a lie, made not for you to read and write but only for computers. So then the “industry” came up with yaml. Indentation based? You get it, and that’s humm about all of it; now you are stuck counting whitespace in a 3000-line file trying to figure out what goes where….

Anywoo, ranting about computer history is not the purpose of this blog post. Like every other cloud native (sigh) component out there, Tekton uses yaml to let users describe their executions. There is a very nice feature in there (no sarcasm, it really is nice!) allowing you to embed “scripts” directly in tasks. Instead of, like before, having to build a container image with your script and run it from Tekton, you can now specify the script embedded directly in your “Task” or “Pipeline”.

All good, all good, that’s very nice and dandy, but when you start writing a script that goes over 5 lines you get into the territory where you have a ~1000-line script embedded in 2000 lines of yaml (double sigh).

You can go back to the old way, and start going over the development workflow of :

“write” -> “commit” -> “push” -> “build image” -> “push” -> “update tag” -> “start task”

and realize that you are losing approximately 40 years of your soul to some boring and repetitive tasks.

So now that I am over talking to myself with this way too long preamble, here is the real piece of information in this post: a script that, like everything in your life, works around the real issue.

It’s available here :

The idea is that if you have in your template a tag saying #INSERT filename, it is replaced by the content of the file. It’s dumb and stupid, but it makes developing your yaml much more pleasing… so if you have something like :

image: foo
script: |
  #INSERT filename

the script will see this and insert the file into your template. It respects the previous line's indentation and adds four extra spaces to indent the script, and you can have as many INSERTs as you want in your template….
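The script linked above is the real thing; the core of the idea can be sketched in a few lines of shell (my sketch, not the actual script, and here the inserted content is indented relative to the #INSERT line itself):

```shell
#!/usr/bin/env bash
# Replace "#INSERT filename" lines in a template with the content of
# that file, keeping the line's indentation plus four extra spaces.
expand_inserts() {
    local line indent inner
    local re='^([[:space:]]*)#INSERT[[:space:]]+(.*)$'
    while IFS= read -r line; do
        if [[ ${line} =~ $re ]]; then
            indent="${BASH_REMATCH[1]}    "
            while IFS= read -r inner; do
                printf '%s%s\n' "${indent}" "${inner}"
            done < "${BASH_REMATCH[2]}"
        else
            printf '%s\n' "${line}"
        fi
    done < "$1"
}
```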

Now you can edit your code in your code file and your yaml template in the yaml template… win win, separation of concerns, sanity win, happy dance and emojis and all…

Deploying minishift on a remote laptop.

Part of my new job working with Fabric8 is to have it deployed via minishift.
Everything is nice and working (try it, it’s awesome) as long as you deploy it on your local workstation.

The thing is that my desktop macosx laptop has only 8GB of RAM and is not really up to the task of getting all the services deployed while I have my web browser and other stuff hogging the memory. I would not do it on a remote VM either, since I want to avoid the nested virtualisation part that may slow things down even more.

Thankfully I have another Linux laptop with 8GB of RAM which I use for my testing, and I wanted to deploy minishift on it and access it from my desktop laptop.

This is not as trivial as it sounds, but thanks to minishift's flexibility there is a way to set this up.

So here is the magic command line :

minishift start --public-hostname localhost --routing-suffix

What do we do here? We bind everything to localhost. What for, you may ask? Because we are then going to access it via SSH. First you need to get the minishift IP :

$ minishift ip

and now, since in my case it’s the IP, I am going to forward it over SSH :

sudo ssh -L 443: -L 8443: username@host

Change the username@host and the IP to yours. I use sudo here since forwarding the privileged port 443 needs root access.

When this is done, and if the stars were aligned in the right direction when you typed those commands, you should be able to see the fabric8 login page :