New Adventure

I have always been a tooling guy. Early in my career, now more than 10 years ago, I realized how much I love writing things that make my life easier.

Automation for the win!

Open Source

I have also always liked being part of open source communities: sharing knowledge and helping others, learning a lot in the process. I participated in things like the Hibernate documentation translation, the Hibernate forum, the Apache Maven mailing lists, MOJO @ CodeHaus (now MojoHaus), Continuum, Archiva, M2E, Sonar[Qube], Hudson, and… Jenkins.

After switching from Apache Continuum, I started using Jenkins around 2009, when it was still named a bit differently. My first message on the users mailing list seems to date back to early 2010.

My first code contribution to Jenkins seems to date back to 2011, when I had to git bisect a weird issue that probably only very few of us were hitting, as we were using IBM AIX at the time.

Later, I took over the maintenance of some plugins, like the Radiator View Plugin, the Parameterized Scheduler Plugin, and most importantly the Chuck Norris Plugin.

About 2 years ago, I started participating in the Jenkins Governance Meetings on IRC.

Full Time Jenkins!

All those years, I have been spending a fair amount of time on that project so close to my heart. So, having the opportunity to work full-time in "a professional open source company" on all things Jenkins is obviously hugely motivating to me.

I am thrilled to say I will be joining CloudBees next August.

I am very proud to soon start working with so many great individuals, bringing my small contribution to smoothifying the software delivery process in our industry.

Follow me on @bmathus to get more updates if you are interested :-).

Why Managers Should Not Give Their Technical Opinion

A Good Manager Should Never Voice A Technical Opinion.

Wait, what? Why exclude them if their input could be useful?

Management too often gets a bad reputation. In my opinion, that's because we rarely encounter genuinely good managers. You know: useful, inclusive, a good psychologist, able to catalyze work without ever getting in the way. You've known plenty of those, right?

Being a good manager is hard.

Like being a good developer.

It takes passion, time, experience, reading, and so many other things. And by trying to chase too many rabbits, you risk never catching even one.

But that manager is technically very good! She/he has a lot of experience!

Fading thing

In IT, 2 years is a long time. 5 years is a lifetime. 10 years is an eternity.

So, basically, that statement will very rapidly become obsolete. And if you're unlucky and that manager tries to stay up to date, then she/he won't spend that time on what she/he should (you know: caring about people, making the company a great place to work, tackling impediments, logistics).

As said above, being a good manager takes time. Reading a book on the latest technology trend is time taken away from reading a book on human behaviour and getting better at dealing with it…

Not doing the job. Twice.

If that manager really is technically better than anyone in the team, then, in my opinion, you have far bigger organizational issues.

I’ll ask two questions:

  1. What the heck is she/he doing in a manager's position?

  2. Why is she/he spending time working on technical things, when her/his first job should be to start hiring people better than her/him? Not doing so puts the company at serious risk.

HiPPO & the like

If that manager spent more time reading about human behaviour, she/he would be aware that there are many natural human biases. One is that many people, especially introverts, will not dare voice their own opinion once the Highest Paid Person's Opinion has been expressed, even if their idea would have been better.

Said differently, if those managers hope their words will be judged not on their power but on their technical validity, they are wrong, and working around that is difficult.

My Context

I've been meaning to write this down for a while. It started as a gut feeling; then, as time went by, I think I found more arguments to articulate my thoughts. It's still a work in progress, but I felt it was now clear enough to propose here and possibly get some feedback.

I am talking here about people managers of many teams. I mean the type of position where your role is (or should be?) to provide your team members with context so that they know where the company is going (i.e. alignment!).

Even if this may apply to other areas, what I have in mind is IT, the domain I work in and hence know best. I also have in mind a modern/Agile organization, where the goal is to make the company succeed rather than to respect some form of historical hierarchical establishment.

Roughly, in my mind, that kind of manager makes sense for at least 20 people. Below that number, you're heading towards micro-management.

Wrap up

Being and staying a good manager is going to take time.

That time being a finite resource, you cannot be good and up-to-date in all areas.

Upgrading the CentOS 7 kernel to enable using Overlay with Docker


I wanted to use the Overlay storage driver for our Docker hosts (see why devicemapper should be avoided, in my opinion).

The issue is: overlayfs support was merged into the Linux kernel in 3.18, and CentOS 7 currently ships a 3.10 one.[1]

Upgrade your kernel

Hence, since this is not for a customer-facing production machine, I decided to upgrade the kernel.

The simplest way to do it is to use ELRepo and install the package called kernel-ml (as in kernel mainline).


rpm --import
rpm -Uvh
yum --enablerepo=elrepo-kernel install kernel-ml

Bonus: switch grub by command line before rebooting

If you use a remote VM like me, you may not have access to the GRUB menu when the machine reboots. And the thing is: CentOS will by default boot the previous kernel.

So, again by default, if you reboot without changing anything, you will stay on the same old kernel.

If you want your machine to use the newly installed kernel, execute the following command (it selects the first available kernel entry, which on CentOS is the newly installed one by default):

grub2-set-default 0

You can list the available kernels to verify that the new one is indeed the first:

$ awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.1.6-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (4.1.6-1.el7.elrepo.x86_64) 7 (Core) with debugging
CentOS Linux (3.10.0-229.7.2.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-229.7.2.el7.x86_64) 7 (Core) with debugging
CentOS Linux 7 (Core), with Linux 3.10.0-229.el7.x86_64
CentOS Linux 7 (Core), with Linux 0-rescue-53ed95ad53094b469043c84aa868b827
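After rebooting, a quick sanity check can confirm the running kernel is recent enough for overlayfs (merged in 3.18). Here is a minimal sketch that just parses the major.minor version out of `uname -r`:

```shell
# Compare the running kernel's major.minor against 3.18,
# the release where overlayfs was merged into mainline.
kernel=$(uname -r)
major=${kernel%%.*}
rest=${kernel#*.}
minor=${rest%%.*}

if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 18 ]; }; then
  echo "OK: $kernel supports overlayfs"
else
  echo "Too old for overlayfs: $kernel"
fi
```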

Hope this helps!

1 Actually, that 3.10 kernel is not a vanilla one: it contains a lot of backported features. But the fact is that I had a lot of issues with RHEL 7.1. RHEL 7.2 is said to contain yet another batch of backports for better Overlay support, but RHEL 7.2 is still in Beta and its GA date has not been announced yet.

Docker Storage Driver: Don't Use Devicemapper

DISCLAIMER: I'm not a system expert. What follows is more an aggregation of things I've tried and information gathered on the Internet. I also wrote it to serve as a note to self. I made sure to provide many links so that you can form your own opinion. Please don't hesitate to give feedback if you disagree with some statements below.

For some months now, we've been deploying more and more Docker services internally to progressively gather experience.

After encountering a few issues, mainly related to storage, I started reading a lot about Docker storage drivers.

Docker default behaviour (with DeviceMapper): Wrong

As much as I love Docker, some may find it a pity that the default behaviour is NOT suitable for production use (on a non-Ubuntu host)!

Indeed, by default, here’s what Docker will choose as a storage driver:

  • AUFS

  • Devicemapper, in loopback mode

BUT, the thing is: though AUFS is apparently great for Docker (it was used by dotCloud for their PaaS before Docker went public), it's not in the mainline kernel. And it is unlikely to be in the future.

For this reason, distributions like Red Hat (which is upstream-first) chose to support devicemapper instead, in the so-called thin provisioning mode, aka thinp.

But if AUFS is not found in the current kernel, Docker falls back by default to the ubiquitous devicemapper driver. And, again by default, it will create loopback files. This is great for newcomers bootstrapping, but horrible from a least-surprise perspective, since this mode MUST NOT be used in production.

So, can I still use devicemapper, if I make sure to use thinp?

Short answer: no.

Longer one: many knowledgeable Docker people have publicly stated that you should prefer other drivers. Many have even recommended defaulting to Overlay [1]:

@codinghorror more specifically we’ve never seen devmapper work reliably… Overlayfs support is pretty good now. Also zfs & btrfs.

Even Daniel J Walsh, aka rhatdan, working for Red Hat, has stated [2]:

Red Hat kernel engineers have been working hard on Overlayfs, (Perhaps others on the outside, but I am not familiar). We hope to turn it on for docker in rhel7.2 BUT […]


Device Mapper is a complex subsystem

I can't help thinking that this sentence may tell us more about the subject than its author intended. Isn't complexity often the reason software doesn't survive the years? Might be.

Conclusion: if in doubt, use overlay

I’ve started converting the Docker hosts I work with from devicemapper to overlay. My first impressions are good and everything seems to be working as expected [3].
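For reference, here is roughly what the switch looks like on one of those hosts. This is a sketch: the /etc/sysconfig/docker path and the OPTIONS variable are assumptions matching a RHEL/CentOS-style setup of that era (adjust to your distro), and remember that changing the storage driver hides existing images and containers, so re-pull anything you need.

```shell
# /etc/sysconfig/docker -- daemon options on RHEL/CentOS-style hosts (assumed path).
# Add the storage driver flag to the daemon options:
OPTIONS='--storage-driver=overlay'

# Then restart the daemon and check which driver is active:
#   systemctl restart docker
#   docker info | grep 'Storage Driver'
```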

From all I've read, my current wager is that overlay will soon become the default driver. It has the pros of devicemapper (no need to create a dedicated filesystem for it), apparently without most of its cons.

Only some specific use cases will still make people choose other drivers like btrfs or zfs. But as these require creating and sizing a real filesystem, they are unlikely to be used as widely.

Some references

1 Previously named overlayfs, it was renamed simply overlay when it got merged into the kernel.
2 In a pull request by Red Hat's Vincent Batts, one of the most active Docker committers not working for Docker Inc., about making overlay the default driver in place of devicemapper. That may ring yet another bell for you. At least it did for me.
3 I actually had issues with Red Hat's Docker 1.6.x, but they disappeared when I upgraded the Fedora Atomic Host I was playing with to Docker 1.7.1.

Why I think I failed as an architect

I was not actually planning to write this; rather something about Docker.* these days. But that's how it is.

I was listening to the Arrested DevOps podcast — Episode 38 about Career Development, with Jeff Hackert.

For many reasons lately, I've been thinking about my career and what I want to do. By the way, I absolutely, positively recommend you listen to that episode (35 minutes; seriously, it's worth it).

The part that made me think about this article is when Jeff talked about making the things you do visible. Providing context. Understanding people's needs.

Architect Failure

Though I retrospectively think I should sometimes have pushed some more evolved/involved solutions, I'm not actually talking about a technical failure.

No, I'm talking about a human/social one.

To simplify a bit, management decided to reorganize development with dev teams on one side, and a separate architecture team on the other.

Because I had earned technical respect from (at least some of) my coworkers, things didn't go so badly initially. Some teams were asking for reviews, or even for solutions to issues/requirements they had.

But for some people, developers and maybe even more managers, we were intruders. Not welcome.

What I did

Mainly, I think I stayed in my office way too much, and took that position for granted. Kind of the ivory tower issue (well, without the tone, I hope: I tried hard not to be condescending, especially given how much I despised self-proclaimed architects who didn't code).

I thought the requests were going to flow naturally. How wrong, and dumb, I was.

Don't get me wrong: I was not hiding and playing video games on my computer :-). I was actually working for some teams. But even those teams eventually stopped asking us for help, and worked around us.

What I should have done

I should have hung out more with the teams (which is somewhat ironic, if you know me). Go see them, ask them if they needed help. Get involved with them. Simply be more empathetic. Let them know what we did, why, for whom, constantly. Make that publicly available.

I should also have refused to work on subjects supposed to be useful in 1 or 2 years, without any actual need. How many hours I lost on useless PoCs and studies that will never get used.

Wrap up

That made me realize something. Something that may be obvious to more experienced people: the current management structure, the current organization, will NOT stay as-is forever. And you should always strive to break barriers, reach out to the people who do the actual work, and help them, work with them.

This way, people will know you're useful wherever you are, and whatever position you hold. And that might also, transitively, prove your team is useful.

If you don’t, then you’re dead. At the next shakeup, you’ll be wiped out. And you will have deserved it.

Why Docker Is Useful to Every Developer!

In the following article, I'll show you a concrete case where I recently had the opportunity to use Docker.

What's special here is that it was useful for actual development work: the goal was not to run an application, but simply to get access to an environment (through Docker), retrieve some data, and then throw everything away.

In this case, I reckon I saved several tens of minutes, at the very least.

The context

While working on a pull request for the maven-scm project, I needed an old version of Subversion for the integration tests (yes, otherwise I use Git :-)).

More precisely, I needed to be able to check out an SVN repository with metadata in the SVN 1.6 format.

But my machine is up to date, and the version I have locally is a recent 1.8.8…

What to do?

  • Downgrade the version on my machine? Meh, I'd rather not risk breaking my setup.
  • A VM? Where? Locally? Phew, that's going to take a while… On an IaaS? Meh.

But wait a minute!

Docker to the rescue

In the end, the whole operation took me 5 minutes at most. The longest part was finding on Google which Debian release ships the package corresponding to SVN 1.6 (to keep it simple, since one could also take a more recent release and try to install a specific SVN version).

So, there:

Package subversion
squeeze (oldstable) (vcs): Advanced version control system
1.6.12dfsg-7: amd64 armel i386 ia64 kfreebsd-amd64 kfreebsd-i386 mips mipsel powerpc s390 sparc
wheezy (stable) (vcs): Advanced version control system
1.6.17dfsg-4+deb7u6: amd64 armel armhf i386 ia64 kfreebsd-amd64 kfreebsd-i386 mips mipsel powerpc s390 s390x sparc
wheezy-backports (vcs): Advanced version control system
1.8.10-1~bpo70+1: amd64 armel armhf i386 ia64 kfreebsd-amd64 kfreebsd-i386 mipsel powerpc s390 s390x
jessie (testing) (vcs): Advanced version control system
1.8.10-2: amd64 arm64 armel armhf i386 kfreebsd-amd64 kfreebsd-i386 mips mipsel powerpc ppc64el s390x
sid (unstable) (vcs): Advanced version control system
1.8.10-2: alpha amd64 arm64 armel armhf hppa hurd-i386 i386 kfreebsd-amd64 kfreebsd-i386 m68k mips mipsel powerpc ppc64 ppc64el s390x x32
1.8.8-2: sparc
1.7.13-3 [debports]: sparc64
1.6.17dfsg-3 [debports]: sh4

OK, so we'll go with the stable release.

$ sudo docker run --rm -it debian:stable /bin/bash
root@d2645d786f6e:/# apt-get update
root@d2645d786f6e:/# apt-get install subversion zip
root@d2645d786f6e:/# svn --version
svn, version 1.6.17 (r1128011)
   compiled Mar 12 2014, 02:44:28
root@d2645d786f6e:/# svn co -N
A    asf/pom.xml
A    asf/site-pom.xml
 U   asf
Checked out revision 1629441
root@d2645d786f6e:/# zip -rq asf

Then, from the host, in another tab of your favorite terminal emulator:

$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
dbd6d39cbdb1        debian:stable       "/bin/bash"         25 minutes ago      Up 25 minutes                           sick_archimedes
$ sudo docker cp sick_archimedes:/ .
$ unzip -t 
    testing: asf/                     OK
    testing: asf/.svn/                OK
    testing: asf/.svn/dir-prop-base   OK
    testing: asf/.svn/props/          OK
    testing: asf/.svn/entries         OK
    testing: asf/.svn/all-wcprops     OK
    testing: asf/.svn/tmp/            OK
    testing: asf/.svn/tmp/props/      OK
    testing: asf/.svn/tmp/prop-base/   OK
    testing: asf/.svn/tmp/text-base/   OK
    testing: asf/.svn/prop-base/      OK
    testing: asf/.svn/prop-base/pom.xml.svn-base   OK
    testing: asf/.svn/prop-base/site-pom.xml.svn-base   OK
    testing: asf/.svn/text-base/      OK
    testing: asf/.svn/text-base/pom.xml.svn-base   OK
    testing: asf/.svn/text-base/site-pom.xml.svn-base   OK
    testing: asf/pom.xml              OK
    testing: asf/site-pom.xml         OK
No errors detected in compressed data of

And there you go: in barely a few minutes, I had my checkout, I could throw away my container, and move on.

I don't know about you, but it's this kind of simple little example that puts me on the side of those who say Docker is not just another novelty, but truly a genuine revolution!

Forge workshop at the AgileTour: get your machines ready!

Michäel Pailloncy and I will run a workshop at AgileTour Toulouse 2013, on Thursday, October 10th (see the session details). Yes, that's in 3 days :-).

Some additional information if you plan to attend this workshop:

  • be aware that a computer is absolutely required. If you don't have one, feel free to come with a friend who does, but it will probably be less interesting for you.
  • you will also need a working Git client (we will clone a local repository provided on a USB stick, since we won't have Internet access).
  • the machine must have a JDK 7 installed. We will provide the binaries on a USB stick, but you will save a lot of time if you don't have to install it at the start of the lab.

See also the following GitHub repository and its README.

If you need more details, don't hesitate to contact me on Twitter or in the comments of this post.

Thanks for your attention, and spread the word :-).

Investing in people?

What are we going to do if we invest in our people, and they leave?

What are we going to do if we don't, and they stay?…

Want to contribute to an open source project? Jenkins needs you

We need you!

Jenkins is certainly the most widely used Continuous Integration server in the world. If you have any interest, close or remote, in open source and would like to contribute to such a project, read on.

Last year, in August, we started the French translation of the Jenkins Definitive Guide, written in large part by John Ferguson Smart. The work has progressed slowly, but it has progressed all the same. As of today, out of about fifteen chapters, three are translated and reviewed, and almost all the rest is in progress.

But I don't speak English well…

That's fine. Several chapters just need proofreading, so speaking French is enough. And if you don't understand some translated parts and the original needs a second look, you can always raise the question on the project's mailing list, where we speak French.

I'm not a developer, or I don't know Git, or both

If you want to learn Git, this is the occasion. We'll be happy to answer your questions on the mailing list, even if they are exclusively about Git and not (yet) about the translation :-).

But if you're not up for it, or don't have the time, that's fine too. You just need to know how to edit an XML file. There is one per chapter.

Great! Where do I start, then?

If you're interested but have questions, please don't hesitate to ask.

We're waiting for you! :-)

Want to push your git changes, but no connection on holiday? No worries, git bundle is here!

I'm currently writing this article offline, since I'm in a place where even phones don't work well. Imagine the following situation:

  • Granted, it's summer, but outside the weather is better suited to frogs than to human beings…;
  • Your laptop is sitting next to you, waiting for you to tackle that long overdue task on a dev project;
  • You use git, but your Internet connection is somewhere between flaky and nonexistent. Your only way to receive updates is to regularly take your computer to some place where the network is a bit better (so you can sync your emails, for example).

So, what you would like to do is quite simple: work offline with git (it's one of its greatest strengths, right?), then mail your commits somewhere. To do that, you have several options:

  • Zip -9 your repository and send it as an attachment!
    • Ahem, mine is 400 MB. Forget about it.
  • Git request-pull/am/format-patch to send mails and integrate them automatically on the other side
    • Requires too much setup for what I want.

So what’s left? git bundle. Let’s have a look at the documentation:

git-bundle - Move objects and refs by archive

Ahem, well, not very explicit if you ask me. Let’s look at the description:

Some workflows require that one or more branches of development on one machine be replicated on another machine, but the two machines cannot be directly connected. This command provides support for git fetch and git pull to operate by packaging objects and references in an archive at the originating machine, then importing those into another repository using git fetch and git pull after moving the archive by some means (e.g., by sneakernet).

More interesting.

I'll rephrase it: we're going to create a special archive containing only the commits I want, and then send it as an attachment. People receiving this mail will be able to just pull from this archive, as from a normal repository! Sounds great, doesn't it?

So, how do we use it? Here's my use case: I have to do some kind of code review. So I'm going to create a new branch from the main one, "develop"; I'll call that new one reviewFeatX. Then it's at least the content of this branch that I'd like to be able to send.

The principle

For bundling to be efficient and interesting, it's assumed that both repositories share a common basis. That's quite obvious anyway: if the repository you're working on is totally new, then you are likely to have to send it in its entirety. Sending "some commits" only makes sense when there are in fact commits already present in both places.

Thanks to git's "everything is a SHA" design, plus the fact that every commit has a parent, it's quite easy for git to find the link between your commit history and another one.

Creating the archive

What we would like to do is quite obvious: send only the new commits (those on reviewFeatX but not on develop) as an archive, and not a lot more if possible. Now, how do we do that?

$ git bundle create ../reviewFeatX.gitbundle develop..reviewFeatX

Notice the "develop..reviewFeatX": this range is passed through the git rev-list command, which returns all the hashes (SHAs) of the commits reachable from reviewFeatX but not from develop. Now you have a reviewFeatX.gitbundle file that you can send by email, Dropbox, or whatever you want.

Using the archive

On the other end of the pipe, someone will hopefully want to retrieve the commits from the file. Here's how to do that:

  • First, you can check whether the bundle contains enough information to be applied to your repository (that is, whether your local repository contains at least the basis commit the bundle was created onto):
$ git bundle verify ../reviewFeatX.gitbundle
The bundle contains 1 ref
8c7feeb8d13233a466459cffc487ca08334af838 refs/heads/reviewFeatX
The bundle requires these 1 ref
6807f3ac794d72a410ac23fa8e2dc5c0bbd6c422 some log
../reviewFeatX.gitbundle is okay
  • So now, we can just apply it! To do that, just use the bundle as a remote repository.
$ git ls-remote ../reviewFeatX.gitbundle
1fd7         refs/heads/reviewFeatX

$ git fetch ../reviewFeatX.gitbundle reviewFeatX:reviewFeatX
From ../reviewFeatX.gitbundle
 * [new branch]      reviewFeatX -> reviewFeatX

$ git branch
* develop

$ git checkout reviewFeatX
Switched to branch 'reviewFeatX'

$ git log --oneline develop..reviewFeatX
1fd7 log3
df56 log2
abc1 log1

That’s it! You’ve now imported the commits from the bundle you received by mail.
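If you want to rehearse the whole flow before trusting it with real work, here is a self-contained sketch. It builds throwaway repositories in a temp directory (the names origin, reviewer, and the commit messages are purely illustrative), bundles a review branch, and fetches it on the "other side":

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# "Author" repository with a shared base commit on develop.
git init -q origin
cd origin
git config user.email author@example.com
git config user.name "Author"
git checkout -q -b develop
echo base > file.txt
git add file.txt
git commit -qm "base"

# "Reviewer" clone: both sides now share the base commit.
cd "$tmp"
git clone -q origin reviewer

# Author does some work on a review branch, then bundles only the new part.
cd "$tmp/origin"
git checkout -q -b reviewFeatX
echo work > file.txt
git add file.txt
git commit -qm "review work"
git bundle create "$tmp/reviewFeatX.gitbundle" develop..reviewFeatX

# Reviewer verifies the bundle applies, then fetches the branch from it.
cd "$tmp/reviewer"
git bundle verify "$tmp/reviewFeatX.gitbundle"
git fetch -q "$tmp/reviewFeatX.gitbundle" reviewFeatX:reviewFeatX
git log --oneline develop..reviewFeatX
```

The key point the rehearsal shows: the bundle only applies because the reviewer's clone already contains the prerequisite commit on develop.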

As said in the introduction, there are many ways to exchange commits. I hope you found this one interesting and that it will be useful to you.
