Best of both worlds: catalogue for Linked Open Data (LOD) Mon, 18 Feb 2019 00:00:00 +0100 Our client Statistik Stadt Zürich recently launched their Linked Open Statistical Data (LOSD) SPARQL endpoint. They publish the contents of their annual statistical yearbook as RDF using the DataCube vocabulary.

Since we built their Open Data portal, which is based on CKAN, they asked us how we could publish their LOSD on the catalogue.

Why add Linked Data to a data catalogue?

A SPARQL endpoint allows you to ask questions that would normally require combining data from several datasets. But you already need to have an idea of which data exist and what properties they have. It can be difficult to just browse and see which kinds of data are available.
This is where a data catalogue like CKAN can shine. It is neatly organized, provides an easy-to-use search and can be the starting point for a data dive. Once you've found an interesting dataset, you'll be referred to the SPARQL endpoint with an example query that you can start adapting to your needs.

You can compare it to Wikipedia and Wikidata:

Both sides have a right to exist and actually complement each other.

How could that work for RDF DataCube?

DataCube as dataset source

To get a rough understanding of the DataCube, this schema offers a helpful outline of the vocabulary:

Summary of key terms and their relationship in RDF DataCube

Source: (Fig 1)

Let's use the qb:DataSet as a CKAN dataset and try to find metadata for it.

First of all, we tried to extract datasets (as defined by the DataCube) in SPARQL:

PREFIX qb: <http://purl.org/linked-data/cube#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?dataset ?label WHERE {
  # Zurich subgraph
  GRAPH <> {
    ?dataset a qb:DataSet ;
        rdfs:label ?label .
    ?obs <> ?dataset .
  }
}
GROUP BY ?dataset ?label
LIMIT 1000

Run this query

This query extracts all qb:DataSets with at least one observation. The dataset is a container for observations; one observation represents a measured value. As a result, we get all datasets that are available in the specified subgraph of this SPARQL endpoint.

So this gives us the "entries" in our catalogue. Now let's find some more metadata for those entries.
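As a side note on the plumbing: a SPARQL endpoint returns its results in the standard SPARQL 1.1 JSON results format, which needs to be flattened before it can feed a catalogue. A minimal sketch (the payload shown is a hypothetical sample, not actual data from the endpoint):

```python
# Flatten SPARQL JSON results into (dataset URI, label) pairs.
# "sample" mimics the standard SPARQL 1.1 JSON results format.

def extract_datasets(results):
    """Yield (dataset, label) tuples from a SPARQL JSON result set."""
    for binding in results["results"]["bindings"]:
        yield binding["dataset"]["value"], binding["label"]["value"]

sample = {
    "results": {
        "bindings": [
            {
                "dataset": {"type": "uri", "value": "https://example.org/dataset/AST"},
                "label": {"type": "literal", "value": "Arbeitsstätten"},
            }
        ]
    }
}

print(list(extract_datasets(sample)))
```

Each pair is then one candidate entry for the catalogue.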

Extract metadata from LOSD

The next step was to extract as much metadata as possible, to find a match between the DataCube metadata and the CKAN metadata used on our catalogue.

PREFIX qb: <http://purl.org/linked-data/cube#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?dataset ?title ?categoryLabel ?quelleLabel ?zeit ?updateDate ?glossarLabel ?btaLabel ?raumLabel
WHERE {
    ?dataset a qb:DataSet ;
             rdfs:label ?title .

    # group
    OPTIONAL {
      ?category a <> ;
                rdfs:label ?categoryLabel ;
                skos:narrower* ?dataset .
    }

    # source, time, update date
    ?obs <> ?dataset .
    OPTIONAL {
      ?obs <> ?quelle .
      ?quelle rdfs:label ?quelleLabel .
    }
    OPTIONAL {
      ?obs <> ?raum .
      ?raum rdfs:label ?raumLabel .
    }
    OPTIONAL { ?obs <> ?zeit }
    OPTIONAL { ?obs <> ?updateDate }

    # use GLOSSAR and BTA (and others) for tags
    OPTIONAL {
      ?obs <> ?glossar .
      ?glossar rdfs:label ?glossarLabel .
    }
    OPTIONAL {
      ?obs <> ?bta .
      ?bta rdfs:label ?btaLabel .
    }

    FILTER (?dataset = <>)
}
GROUP BY ?dataset ?title ?categoryLabel ?quelleLabel ?zeit ?updateDate ?glossarLabel ?btaLabel ?raumLabel
LIMIT 1000

Run this query

What’s going on here?

  • The “FILTER” clause narrows down the hits; in this case it only returns results for one specific dataset
  • The “OPTIONAL” clause declares triple patterns that don’t have to match
  • In our case, depending on the dataset, an observation might have different properties (some have BTA, others have RAUM or GLOSSAR)
  • Note that the category is modelled as a superset of a dataset using SKOS

Now that we have extracted a bunch of metadata, we can match it to the metadata needed by CKAN. The result looks something like this:

Mapping of LOSD metadata to CKAN catalogue metadata
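In code, such a mapping boils down to renaming and regrouping fields. A minimal sketch, where the input keys are the SPARQL variables from the metadata query and the output keys follow CKAN's dataset schema (the exact mapping used on the portal may differ):

```python
def losd_to_ckan(row):
    """Map one LOSD metadata row to a CKAN-style dataset dict.

    Input keys are the SPARQL variables from the metadata query;
    output keys follow CKAN's package schema.
    """
    return {
        "title": row["title"],
        "groups": [{"name": row["categoryLabel"]}],
        "author": row.get("quelleLabel", ""),           # source ("Quelle")
        "metadata_modified": row.get("updateDate", ""),
        # GLOSSAR, BTA etc. become free-form tags
        "tags": [
            {"name": t}
            for t in (row.get("glossarLabel"), row.get("btaLabel"))
            if t
        ],
    }

row = {
    "title": "Arbeitsstätten",
    "categoryLabel": "Wirtschaft",
    "quelleLabel": "Statistik Stadt Zürich",
    "btaLabel": "BTA",
}
print(losd_to_ckan(row))
```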

Generate a SPARQL-Query to get the actual data

How can we generate a meaningful SPARQL query for each dataset? If we go back to the DataCube vocabulary, we can see that a dataset has a DataStructureDefinition. This helps us uncover the structure of a dataset on a generic level. That means, given a dataset, we can extract the properties it uses:

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX qb: <http://purl.org/linked-data/cube#>

SELECT ?dataset ?datasetLabel ?component ?componentLabel
WHERE {
  ?spec a qb:DataStructureDefinition ;
        qb:component/(qb:dimension|qb:attribute|qb:measure) ?component .

  ?component rdfs:label ?componentLabel .

  ?dataset a qb:DataSet ;
           rdfs:label ?datasetLabel ;
           qb:structure ?spec .
  FILTER (?dataset = <>)
} ORDER BY ?dataset

Run this query

This tells us that the dataset uses the following properties:

And in combination with the dataset, we can easily generate the following query automatically:

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX qb: <http://purl.org/linked-data/cube#>

SELECT * WHERE {
    ?obs a qb:Observation ;
         qb:dataSet <> ;
         <> ?erwartete_aktualisierung ;
         <> ?korrektur ;
         <> ?datenstand ;
         <> ?arbeitsstatten ;
         <> ?betriebsart ;
         <> ?raum ;
         <> ?quelle ;
         <> ?zeit ;
         <> ?fussnote ;
         <> ?glossar .
}

Run this query

This SPARQL query can then be refined by a user or simply used to extract the data as CSV.
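The generation step itself is mechanical: take the component property URIs and labels returned by the DataStructureDefinition query, and emit one triple pattern per component. A rough sketch of such a generator (the URIs and the label-to-variable naming convention are illustrative, not the ones used in our prototype):

```python
import re

def generate_query(dataset_uri, components):
    """Build a SPARQL query selecting all observations of a dataset.

    components: list of (property_uri, label) pairs taken from the
    dataset's DataStructureDefinition.
    """
    lines = [
        "PREFIX qb: <http://purl.org/linked-data/cube#>",
        "SELECT * WHERE {",
        "    ?obs a qb:Observation ;",
        f"         qb:dataSet <{dataset_uri}> ;",
    ]
    patterns = []
    for uri, label in components:
        # derive a variable name from the label, e.g. "Zeit" -> ?zeit
        var = re.sub(r"\W+", "_", label.lower()).strip("_")
        patterns.append(f"         <{uri}> ?{var}")
    lines.append(" ;\n".join(patterns) + " .")
    lines.append("}")
    return "\n".join(lines)

q = generate_query(
    "https://example.org/dataset/AST",
    [("https://example.org/property/ZEIT", "Zeit")],
)
print(q)
```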


  • The metadata can be mapped; all the necessary concepts exist in the DataCube
  • Since a dataset consists of many observations, and in LOSD the metadata is mostly attached to the observations, we suddenly have several values for each dataset. So the data must be aggregated somehow (concatenate all values, always pick the first value, or extend the catalogue to accept multiple values)
  • Some fields are empty, so they either need to be updated in LOSD or another source for the data must be found
  • It’s easy to generate a meaningful SPARQL query for a dataset (to help users explore the LOSD or to extract the data as CSV)
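The aggregation problem mentioned above can be handled in a small post-processing step. Here is a sketch of the "concatenate all distinct values" strategy (field names are illustrative; picking the first value or keeping lists would work just as well):

```python
from collections import defaultdict

def aggregate_rows(rows, key="dataset", sep=", "):
    """Collapse multiple metadata rows per dataset into one record,
    concatenating distinct values per field."""
    merged = defaultdict(lambda: defaultdict(list))
    for row in rows:
        for field, value in row.items():
            if field != key and value not in merged[row[key]][field]:
                merged[row[key]][field].append(value)
    return {
        ds: {field: sep.join(values) for field, values in fields.items()}
        for ds, fields in merged.items()
    }

rows = [
    {"dataset": "AST", "glossarLabel": "Arbeitsstätte"},
    {"dataset": "AST", "glossarLabel": "Beschäftigte"},
]
print(aggregate_rows(rows))
```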

Please find the whole source code of our prototype as a Jupyter notebook on GitHub.

Fast Serialization with Liip Serializer Wed, 13 Feb 2019 00:00:00 +0100 For serialization (from PHP objects to JSON) and deserialization (the other way around), we have long been using JMS Serializer in one of our big Symfony PHP projects, and we still use it for parts of it. We were and still are very happy with the features of JMS Serializer, and would highly recommend it for the majority of use cases.

Some of the functionality we would find difficult to cope without:

  • Different JSON output based on version. So that we can have “this field is here until version 3” etc.
  • Different Serializer groups so we can output different JSON based on whether this is a “detail view” or a “list view”.

JMS Serializer works with “visitors” and a lot of method calls, which is generally fine in PHP. But when you have big and complicated JSON documents, it has a huge performance impact. This was a bottleneck in our application for years before we built our own solution.

Blackfire helped us a lot in finding the bottleneck. This is a screenshot from Blackfire while we were still using JMS Serializer; here you can see that we called visitProperty over 60,000 times!

Our solution removed this and made our application a LOT faster, with an overall performance gain of 55% (390 ms => 175 ms) and both CPU and I/O wait down by ~50%.

Memory gain: 21%, 6.5 MB => 5.15 MB

Let’s look at how we did this!

GOing fast outside of PHP

Having tried a lot of PHP serializer libraries, we started giving up and began to think that it was simply a bottleneck we had to live with. Then Michael Weibel (a Liiper, working in the same team at the time) came up with the brilliant idea of using Go to solve the problem. And we did. And it was fast!

We were using php-to-go and Liip/sheriff.

How this worked:

  • Use php-to-go to parse the JMS annotations and generate go-structs (basically models, but in go) for all of our PHP models.
  • Use sheriff for serialization.
  • Use goridge to interface with our existing PHP application.

This was A LOT faster than PHP with JMS Serializer, and we were very happy with the speed. However, the integration between PHP and the Go binary was a bit cumbersome. Looking at this, we also thought it was a bit unfair to compare generated Go code with the highly dynamic JMS code. So we decided to try the same approach we took with Go in plain PHP. Enter our serializer in PHP.

Generating PHP code to serialize - Liip Serializer

Liip Serializer generates code based on PHP models that you specify, parsing the JMS annotations with a parser we built for this purpose.

The generated code uses no objects, and minimal function calls. For our largest model tree, it’s close to 250k lines of code. It is some of the ugliest PHP code I’ve been near in years! Luckily we don’t need to look at it, we just use it.

For every version and every group, it generates one file for serialization and one for deserialization. Each file contains one single generated function, Serialize or Deserialize.

When serializing or deserializing, it calls those generated functions, picking the right file based on the groups and version specified. This way we got rid of all the visitors and method calls that JMS Serializer used to handle each of these complex use cases. Enter advanced serialization in PHP, the fast way.
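The core idea can be illustrated outside of PHP. The sketch below (in Python, purely an illustration of the approach and not the Liip Serializer API; the model fields are hypothetical) generates one flat serializer function per (version, group) combination ahead of time, then dispatches by key at runtime, just as the generated PHP files are picked by filename:

```python
# "Generate once, dispatch by name": one flat serializer function per
# (version, group) combination, no visitors or reflection at runtime.

GENERATED = {}

def generate_serializer(version, group):
    """Pretend code generator: emit a flat function for one combination."""
    fields = ["id", "name"]
    if group == "detail":
        fields.append("internal_note")
    src = "def serialize(obj):\n    return {" + ", ".join(
        f"'{f}': obj['{f}']" for f in fields
    ) + "}"
    namespace = {}
    exec(src, namespace)  # in PHP this would be a generated file on disk
    GENERATED[(version, group)] = namespace["serialize"]

for version in (2, 3):
    for group in ("list", "detail"):
        generate_serializer(version, group)

obj = {"id": 1, "name": "model", "internal_note": "x"}
print(GENERATED[(3, "list")](obj))
print(GENERATED[(3, "detail")](obj))
```

At runtime the only work left is a dictionary lookup and one function call per object, which is what makes the generated approach so much faster than a visitor walk.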

If you use the JMS event system or handlers they won't be supported by the generated code. We managed to handle all our use cases with accessor methods or virtual properties.

One challenge was to make the generated code expose exactly the same behaviour as JMS Serializer. Some of the edge cases are neither documented nor explicitly handled in code, such as when your models have version annotations and you serialize without specifying a version. We covered all our cases, except that we had to introduce a custom annotation to pick the right property when there are several candidates. (It would have been better design for JMS Serializer too if it allowed an explicit selection in that case.)

In a majority of cases you will not need to do this, but sometimes when your JSON data starts looking as complicated as ours, you will be very happy there’s an option to go faster.

Feel free to play around! We open sourced our solution as Liip/Serializer on GitHub.

These are the developers, besides me, in the Lego team who contributed to this project, with code, architecture decisions and code reviews: David Buchmann, Martin Janser, Emanuele Panzeri, Rae Knowler, Tobias Schultze, and Christian Riesen. Thanks everyone! This was a lot of fun to do with you all, as working in this team always is.

You can read more about the Serializer on the repository on GitHub: Liip/Serializer

And the parser we built to be able to serialize here: Liip/Metadata-Parser

Note: The Serializer and the Parser are Open Sourced as-is. We are definitely missing documentation, and if you have trouble using it, or would like something specific documented, please open an issue on the GitHub issue tracker and we would happily document it better. We are in the process of adding Symfony bundles for the serializer and the parser, and putting the Serializer on packagist, and making it easier to use. Further ideas and contributions are of course always very welcome.


Holacracy and ISO 9001:2015 get on surprisingly well together Mon, 04 Feb 2019 00:00:00 +0100 Why ISO 9001 at Liip?

With the introduction of Scrum more than 10 years ago, it was clear to us how to ensure quality: through consistent use of agile methods such as Scrum. After years of experience with Scrum and many other methods associated with leading-edge software development, we can claim to have software quality under control.

In recent years, however, certificates such as ISO 9001 have become an eligibility criterion for public tenders. No label, no right to participate. Regardless of whether this requirement makes sense or not, certification became urgent.

Mapping Holacracy® Governance

The certification project started in early 2018, two years after the introduction of Holacracy® at Liip. The basis for the certification was exclusively the prevailing Holacracy® governance; in our case it was mapped in the Holaspirit software.

We made our entry into ISO through TQMi, a tailor-made ISO template set for IT SMEs. Although we did not use the TQMi templates themselves, we did use its grouping of standard topics, which suited us thematically. Along the TQMi chapters, we identified two basic ways to implement requirements: through the rules of the Holacracy® constitution, or through our own governance, i.e. Holacracy®'s on-board resources such as roles, circles, expectations or binding guidelines.

Example 1: Dealing with Opportunities & Risks through the Constitution

Example 2: Organizational structure in the Constitution

Example 3: Own Governance - Circle with Domain - for Appearance

Example 4: Own Governance for Service and Support

In order to anchor in governance the expectation that we will remain ISO-compatible in the long term, the ISO role has distributed accountabilities to various circles. Here is the example of the General Incoming Processor. It states that the circle must fulfill the expectation to implement (exec) and document (doc) the TQMi chapter Service and Support for ISO compliance:

Assuring ISO-compliance by documenting and/or executing TQMi-requirement topic Service and Support (37 exec&doc)

To sum up

Right from the beginning, we were well positioned to meet the standard's requirements. The challenge was rather to find a suitable way of mapping TQMi or ISO to our existing - or in a few cases new - governance. The three formal means - norm compliance through reference to the Constitution, reference to our own governance, and accountabilities of circles for entire chapters - were then quite lean to implement.

In general, ISO 9001:2015 and Holacracy® are a good match. Both pursue an approach of continuous improvement through inspection and adaptation. With very few exceptions, ISO also does not assume a hierarchical structure. And both ISO and Holacracy® demand clarity about the expectations of a role and the metrics associated with it, and a clear process for tackling change. And even though we initially dismissed certification as a necessity accepted à contrecœur (reluctantly), we can now say that it made us better. Even though certain formats (such as the management report) may seem inappropriate for our organization, we were able to uncover small but relevant blind spots.

Mobile Apps Trends from Conversions@Google 2018 conference Thu, 31 Jan 2019 00:00:00 +0100 Mobile matters

Since the launch of the iPhone in 2007, mobile has filled a huge gap in the computing devices industry. The numbers speak for themselves.

Earth statistics

As a reminder before we dive into the figures: we are 7.6 billion people around the globe as of 2018.
Among these, the potential market for mobile apps is 5 billion (i.e. people with a mobile phone).
If we only take into account active smartphones, we get around 3.5 billion smartphones (not users, smartphones!).

Mobile addressable market size

To put things into perspective, note that there are currently 1.3 billion active desktops and laptops (1.2B Microsoft, of which 700M run Windows 10, and 110M Mac).
By 2023, we are expected to hit 5B active smartphones, while the population should be around 8B.

Global population vs. devices


In 10 years, we went from 68M to 1.7B smartphones sold per year.
As of 2017, the cumulative numbers of devices on Earth were 3 billion smartphones and 1.5 billion PCs.

Worldwide PCs and devices shipment

iOS vs. Android

Regarding shipments, Apple takes a 15% share and Google Android the remaining 85%. This is explained by their different pricing strategies, with Google covering a wide range while Apple focuses mostly on the high end.

If we think in terms of active smartphones, the split between the two operating systems is 25% of Apple iOS and 75% of Android.

On average, statistics say that an Apple user keeps their device for 4.25 years, and an Android user for 2 years.

Usage and time spent

Let's take the example of Facebook over the last decade.
On desktop, they went from 12M Monthly Active Users (MAU) to 120M MAU.
On mobile, they started from 12M MAU and ended up at 1.74B!

As of 2018, the time spent on mobile apps per adult in the US was 3 hours a day. It is the only medium that keeps growing, compared to other media such as TV or laptops.

Of these 3 hours, 169 minutes are spent in apps vs. 11 minutes on the mobile web.

Time spent in native apps vs. mobile web

App growth

Between 2005 and 2016, the pace of app growth increased. Dramatically.

  • 2005: Skype took 630 days to hit 40M users
  • 2016: SuperMario Run (iOS only) needed only 4 days to hit 40M users

Mobile payments

Mobile payments are not just a trend. To back this up, you only need to look at PayPal: it handled less than 1M USD of mobile payments in 2006, and reached 102B USD in 2016.

Native mobile applications

The Google Play Store claims to have 2.8B apps, while the Apple App Store claims 2.2B.
This means that every month, 2 apps are downloaded per person on the planet!

But don't get too excited about your company's mobile app project just yet... of the apps downloaded, only 1/3 to 1/2 are used every month.

Another warning if you develop or own a mobile product: don't focus on the vanity metric of downloads; check usage instead.
Here is why:

  • 25% of apps aren't used anymore after the first time
  • 34% of apps are used more than 11 times
  • Daily active users drop by 77% on average within the first three days after install
  • And it gets worse with time: the loss reaches 95% ninety days after install
Number of apps per active session

A note about China and India markets

If you follow the tech industry, you must have seen all the initiatives of GAFA and co. to tackle the specific challenges of India and China. Wonder why?

Because their future growth will come from there.
Both countries have 1.3 billion inhabitants. In India, mobile market penetration is only 22%; in China it is 52%.

Mobile's market penetration

These are just stats

With all these numbers, you might think: "Let's go, let's build a native mobile app!"
But be sure to know your market. The statistics and behaviours of one region of the planet do not reflect the entire planet's behaviour!

For instance: even though mobile market penetration in the US is 70%, mobile accounts for "only" 33% of purchases on Cyber Monday. China, by contrast, has "only" 55% mobile market penetration, yet 90% of purchases on its biggest shopping day, Singles' Day, happen on mobile.
So you had better analyze your market before crafting anything.

So, why does mobile matter?

Mobile has this power of distribution that never existed before. One app, one click, and you can reach half of the planet in less than a minute. This creates many opportunities.

As Luke Wroblewski puts it:

"Smartphones enable planet-scale reach."

Nevertheless, it is hard to maintain a good retention rate, as usage decreases a lot during the first days after an app is downloaded.
That's why at Liip we focus our business development efforts on apps that are useful in everyday life, such as Urban Connect, Houston, Compex Coach, and One Thing Less.

What's the future of mobile?

Everyone agrees that the mobile realm has reached a maturity plateau. We are not innovating that much anymore in terms of user experience.
Luke gave the example of the famous Adobe toolbar's evolution on desktop over 30 years: it was just pixels moving around. Nothing more...

Adobe Graphical User Interface evolution over the past 30 years...

Iteration is cool, until you need more innovation. And to innovate you have to get back to your vision, a.k.a. your North Star. What are you trying to achieve? Even more importantly, why are you trying to achieve it?

Personal device and product lifecycle

Luke sees the future of the mobile realm in 3 trends: device hardware capabilities, wrist computers, and voice devices.

Device hardware capabilities

One possible future for mobile devices resides in the devices' hardware: things such as cameras, sensors, etc.
We already have some concrete examples that came from hardware, not from software or UX iterations:

  • Apple and its Touch ID, which removed the painful part of logging in, like remembering a password
  • Amazon Dash with its "one button to order"
  • The Amazon Go trial, which removes the checkout entirely by relying on device hardware capabilities:
    • Sensors to identify the user
    • Camera/ML to confirm the uniqueness of the shopper
    • Cameras during shopping to see what you do
    • Microphones to detect whether or not you put something in your bag
    • Infrared sensors to see what's leaving the shelves
    • Deep learning/ML processing of all this data when you leave the store, to analyze what you bought, invoice you, and automate restocking of the shelves
  • The new Apple AirPods, which connect automatically to your iPhone, start the sound when you put them in your ears and stop the music when you remove them, all while removing the painful cables
  • Google Home, to which you just speak, naturally
  • Urban Connect with its Natural User Interface (NUI) too, which allows you to walk up to your bike and unlock the shackle without thinking for a second about how the lock knows it's you, the owner

Wrist computers and voice devices

Even more current are wrist computers like the Apple Watch, and voice devices like Amazon's Alexa.
These innovations are in the growth phase, and we are starting to grasp their value, as we did with mobile phones back in 2010.

Voice devices market has a strong growth potential

There are existing products out there, and mass-market use cases are starting to emerge. In less than a decade, they will be part of our daily lives. During your next train ride, just look at how millennials rely on voice instead of typing on their keyboard to send a text message these days.

Wrist and voice devices statistics

What about AR and VR?

For Luke, AR and VR are in the emergent phase. This means we still have to work out the use cases.
It is by experimenting, betting, and trying things on real consumers that real use cases will emerge. But he thinks it will take more time than for wrist computers and voice devices.

Comparison of Virtual Reality vs. Mixed Reality vs. Augmented Reality
Augmented Reality examples

Lessons learnt

1/ We live on a mobile planet

  • 3.5B active smartphones
  • iOS = 25%, Android = 75% (warning: it's almost the opposite in Switzerland - know your market!)
  • 3 hours spent per day
  • Used 80 times per day (waking time)
  • Each session: between 15 and 30 seconds
  • 32 touches per second
  • It's hard to make your app stand out, so it had better be useful
  • China and India are where the mobile realm has the most growth potential

2/ The future is at our doorstep
Keep experimenting and reading up on what comes next so you don't get caught by surprise, more precisely:

  • Devices' hardware capabilities
  • Wrist computers
  • Voice devices
  • AR and VR

What's your take on the future of mobile? Share it in the comments section below.

If you want to watch Luke's complete videos (1h45 each), check out these two links: Conversions@Google 2018 edition, Conversions@Google 2017 edition.

Google Analytics Metrics and Dimensions Cheatsheet Wed, 30 Jan 2019 00:00:00 +0100 Google Analytics dimensions & metrics cheatsheet

There are times using Google Analytics when you don't remember the name of a dimension, or whether it is even a dimension or a metric... There are also times when you wonder how much detail Google Analytics captures about the various aspects of user interaction on your website or app.

For such times, here is a PDF cheatsheet of Google Analytics dimensions and metrics, as a visual summary of the official documentation.

How does Agility benefit disability insurance? GILAI has the answer. Wed, 23 Jan 2019 00:00:00 +0100 At the end of 2017, GILAI launched a major project: the redesign of the business application used to manage disability insurance (AI). GILAI chose to manage this project with Scrum, one of the Agile methods. Liip accompanied GILAI for eight months. During this period, the equivalent of ten working days was enough to introduce all the stakeholders and users of the new version of the Web@AI 3.0 business application to the Agile approach. Liip also provided support on project governance and on formalising requirements.

Sandro Lensi, head of technology, infrastructure and information systems at GILAI and manager of the Web@AI 3.0 urbanisation project, looks back on this first fully Agile experience.

Liip: What is GILAI?

Sandro Lensi: GILAI is the association that manages IT for the disability insurance (AI) offices of twenty Swiss cantons and of Liechtenstein. By deploying shared IT systems, GILAI supports the AI offices in carrying out the tasks laid down by the federal law on disability insurance (LAI). The Web@AI management ERP is one of these IT systems.

What is your role in the Web@AI 3.0 platform redesign project?

I manage all the activities, including representation on the steering committee (the Board of Directors), the Project Management Office, as well as the business team (within GILAI) and the development team (within the software vendor). The goal was to bring these stakeholders, with their varied skills, together into a single team.

In what context did you contact Liip at the end of 2017?

The Web@AI 3.0 project was going to run over two years. And we wanted to manage it differently.

Traditional project management and application development methods did not seem ideal to us. The first results often take a long time to arrive, and sometimes they do not, or no longer, match the initial expectations. So we turned to Agility very quickly.

Having no hands-on experience in this area, we wanted to be coached. We did a simple Google search for the keywords "développement agile". We came across your blog post "L'agilité chez QoQa : interview avec Joann Dobler". Although our respective markets are very different, we saw similarities between QoQa's project and ours, in particular the very dynamic aspect. We liked the possibility of getting results very quickly and frequently. This would allow us to steer this large-scale project better and to reassure people internally about its progress. So we contacted Liip. The chemistry was right straight away. After this first meeting, we were convinced that we wanted to work with Liip.

Why did you choose Liip? What convinced you?

At the first meeting, we were immersed in the Liip universe. We felt that Agility was lived fully and at all times. We told ourselves that you really knew what "working in Agile mode" meant. That you had the experience and the expertise. And that you were not going to sell us a theoretical concept.

We wanted a pragmatic approach, and that is what we found at Liip.

How was the Agile coaching structured?

In a very open first session, we explained how we worked. We wanted to move from classic project management to an Agile way of working. The coaching first focused on GILAI as Product Owner, that is, on our internal organisation. It also included the external development team based at our software vendor. The goal was to form a single project team.

We no longer wanted to work in silos. We also wanted to make communication between these different actors more efficient and transparent. And the coaching succeeded.

Another important part of the coaching was training. We wanted everyone affected by the project to understand Agility. The super users of the future platform (business experts) and the staff of the GILAI operations centre were also trained in the Agile approach and in Design Thinking. This allowed us to converge even further towards a single team working in an Agile way. This effort paid off.

We saw that user satisfaction increased after the introduction to the Agile approach.

What is the situation now, after 8 months of Agile coaching?

The project is going really well. The Agile way of working is well understood, lived and mastered by the stakeholders. We have reached a level of maturity that lets us aim for the goal set for the Web@AI 3.0 urbanisation project. The first phase of this coaching is over. But we are considering refresher sessions with Liip in a few months, for example specific coaching on the Product Owner and Scrum Master roles.

What did Liip's coaching bring to GILAI? What did Liip offer that a "traditional" consultant could not have offered?

Liip brought real hands-on practice.

Because Agility is truly lived and practised every day at Liip. We do not think we could have found the equivalent elsewhere.

Liip offered us great expertise as well as strong motivation and commitment. We also really liked the fact that the approach was not (only) commercial. Our gut feeling was very good. And we were not mistaken.

Can Agility be used in every sector?

Yes, I am convinced that any project or any way of running a company can be approached through the lens of Agility. As a company serving disability insurance, GILAI has a duty to promote innovative solutions to optimise its performance and reduce its costs. An Agile approach is therefore timely and entirely feasible.

We are the proof. We even achieved a public-private synergy (GILAI and our software vendor). Whenever development has to be optimised and made more efficient, working with new methods such as Agility becomes a must, whatever the sector.

At GILAI, we currently use the Agile approach in the project and business intelligence teams. The infrastructure and IT service team does not use it (yet). But I would very much like to try the approach in that team too, because I think it is possible and would add value.

What did Liip's coaching bring you personally?

I discovered another way of working, much more active and dynamic. Much more lived, too. And that matters to me in my daily work. It allowed me to be more involved in the project and much closer to the needs of the users of the business application. I also felt I could contribute more value. I no longer want to, and could no longer, go back to a traditional method. The Agile approach saves time and allows finer-grained control in reaching the goal. The rollout of the new platform to the AI offices will be done in an Agile way.

Two playbooks to facilitate Holacracy Meetings Tue, 22 Jan 2019 00:00:00 +0100 Facilitating Holacracy meetings sometimes feels difficult, if not thankless. The process is demanding, and even with well-meaning participants, one easily runs into unexpected situations.

Why another cheatsheet?

The official cheatsheets for the Governance Meeting and the Tactical Meeting do their job wonderfully. Yet they lack depth of information about the mechanics and don't cover the many corner cases a facilitator has to face.

Facing that challenge myself, I started gathering all the advice I could find about Holacracy facilitation (notably from the Holacracy blog, trainings and practice) in two playbooks: a kind of 'verbose' cheatsheet for each of the two meeting formats, which I first shared and used internally at Liip. They also helped me prepare for the Certified Holacracy Coach examination.

Many Liipers have told me not to keep them to ourselves, so here you go:

Holacracy Facilitation Playbook - Governance Meeting
Holacracy Facilitation Playbook - Tactical Meeting

What's inside

These facilitation playbooks cover all the required steps and their mechanics for both the Tactical and the Governance meeting – only the Integrative Election Process is missing, sorry for that.

They also provide concise phrases to introduce the process and to frame each step in terms of purpose and mechanics, as well as coaching advice.

How to use these playbooks

Print both facilitation playbooks for yourself.
Print more and leave copies in every meeting room.

Don't read them aloud from A to Z. They are resources from which you pick useful tips and phrases.

Don't distribute them to your meeting participants. There's no harm in them knowing about the playbooks, yet they contain a lot of detail that participants don't need in order to take part in the meeting at their best. Let them have the official cheatsheet cards; in my opinion, that's just what they need to follow the process.

Next steps

We use them ourselves and will continue to improve them. Come back in a few months for an update!
Feedback and improvement ideas are very welcome.


Turn your building into a power plant - and earn money with sustainability. Tue, 08 Jan 2019 00:00:00 +0100 Exciting opportunities raising new and complex questions

In Switzerland, 2018 began with a revolution in the electricity market. Due to the federal government's Energy Strategy 2050 and the new Swiss Energy Ordinance, owners of solar plants can now sell their own solar power within a private community, a so-called self-consumption community. For investors and owners of solar plants, this new possibility presented interesting opportunities to earn money with solar power. But it also raised new and complex questions. How exactly do you bill your own solar power to your neighbours? Which electricity tariffs apply, and what is legally permitted? How do you set up and manage a self-consumption community?

We wanted to provide answers to precisely these questions. In a joint project, Smart Energy Link and Liip developed a new service for easy and user-friendly management of self-consumption communities - with a customer portal based on modern open source technologies.

Solving complex problems with Service Design, a cross-functional team and an agile approach in several stages.

At the beginning of the project, we faced quite a few challenges. On the one hand, it was very challenging to reconcile all the legal, technical, energy and structural requirements and at the same time make all of that accessible for the users. On the other hand, this was a completely new field for everyone involved. At the start of the project there was hardly any experience in this area and, accordingly, no users we could interview. Last but not least, the new possibility in the Energy Ordinance also opened up an interesting field of new business to many of our competitors. A fast time-to-market was therefore crucial in order to secure a pioneer advantage for SEL in this new market.
For us, the key to success in this project was a user-centered approach with service design, in order to create an easy and user-friendly solution. Furthermore, an agile implementation in several stages helped us achieve a fast time-to-market.
Thanks to a cross-functional team consisting of software developers, service and user experience designers, lawyers and energy experts, we were able to gather all the necessary knowledge in close cooperation.

"The cooperation with experts from various fields was essential for this project. We could only develop the current solution in this quality as a team - and besides, it is much more fun to work together on a common goal." Stefan Heinemann, software developer, Liip.

Stage 1: Implementation of a first version in order to gain experience with real users.

As no experience with self-consumption communities existed in the market yet, we were breaking new ground with this project. It was therefore essential for us to gain experience with real users as quickly as possible. In the first stage of the project, we focused on developing a first functional version of the technical solution with which we could put the first self-consumption communities into operation.

The focus was on developing a first version of a web-based customer portal and linking it to the hardware in the building - the meters - in order to make the huge amount of meter data accessible as the essential basis for controlling, optimising and billing the communities. In joint workshops we designed the core of the portal and the data model and created first design prototypes. We implemented them in 6 development sprints with Scrum and were thus able to present a first version of an integrated technical solution in time for Swissbau 2018 in Basel in January - and at the same time celebrate the commissioning of our first communities.

The fast market launch (GoLive of the first version came only four months after the first concept workshop) was, in retrospect, crucial for the development of the entire SEL service. On the one hand, we already had concrete, visible results that caught the attention of potential customers and investors. On the other hand, it enabled us to gather empirical information about the needs of the different user groups. This allowed us to further develop our future service based on user needs - even though we were operating in an area that had no users at all at the beginning of the project.

Stage 2: From software to a holistic, user-centered service - with service design.

With the launch of the first version, we also began optimising and extending our services for real estate management companies. In the first phase of our service design work, we evaluated the needs and experiences of our communities, as well as those of their property management companies, through user interviews. This gave us facts and clarity about their needs and pain points.

The most important finding? A seamless integration of our service into the existing systems and processes of classical property management is key. The less additional effort the management of self-consumption communities causes, the more willing they are to initiate self-consumption communities and manage them with our solution. Furthermore, we recognised the strong need for a simple and reliable billing process - taking into account the complex structure and legal constraints of the Swiss tariff models.
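To make the billing need concrete, here is a minimal sketch of what such a bill boils down to. All names, rates and rules below are invented for illustration and are not SEL's actual tariff logic: a unit's consumption is split into locally produced solar power and grid power, each billed at its own rate.

```typescript
// Hypothetical self-consumption bill for one unit in a building.
// Invented names and rates - not SEL's actual tariff model.

interface MeterReading {
  consumptionKwh: number; // total consumption of the unit
  solarShareKwh: number;  // portion covered by the building's solar plant
}

interface Tariff {
  solarRatePerKwh: number; // CHF per kWh for local solar power
  gridRatePerKwh: number;  // CHF per kWh for power drawn from the grid
}

function billUnit(reading: MeterReading, tariff: Tariff): number {
  // Whatever the solar plant did not cover was drawn from the grid.
  const gridKwh = reading.consumptionKwh - reading.solarShareKwh;
  const total =
    reading.solarShareKwh * tariff.solarRatePerKwh +
    gridKwh * tariff.gridRatePerKwh;
  return Math.round(total * 100) / 100; // round to centimes
}
```

For example, a unit consuming 300 kWh of which 120 kWh came from the roof, at hypothetical rates of CHF 0.12 (solar) and CHF 0.20 (grid), would be billed CHF 50.40. The real service additionally has to handle the Swiss tariff structures and legal constraints mentioned above, which is exactly why users asked for this to be automated.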

These findings were the basis for setting clear priorities in the second development stage. In a joint ideation workshop, we worked on how to easily manage and, above all, bill self-consumption communities professionally without having to acquire any expert knowledge.

We worked out these ideas in service blueprints and developed simple prototypes for them. We created storyboards for the entire service, from setup to billing, and worked on concrete wireframes for the user interfaces.

Afterwards we carried out further user tests with various property management companies. In interviews, we tested the content of the service for correctness and optimisation potential. In a second step, we iterated on the wireframes of the new billing section in the customer portal with the so-called RITE method. This gave us clarity about which of our ideas to implement and how to improve usability and comprehensibility. The iteration with the RITE method in particular yielded a lot of insight in a short time.
"The user tests clearly pointed out which features and functionalities are most important for our customers. Prioritizing features and tasks is much easier when we can develop them based on concrete customer needs. And we have been able to optimize many usability points with minimal effort." Tobias Stahel, CEO Smart Energy Link

With the findings from the user testing and the revised wireframes, we had everything we needed and could tackle the development sprints in a very targeted manner. After 6 further sprints, this resulted in another GoLive in December 2018 - including, among other things, the new possibility to bill a self-consumption community in just a few clicks.


With this second GoLive, SEL and Liip not only look back on an intensive journey together, but also on an extremely exciting project with many learnings. In summary, we see these central learnings in the project:

1. User interviews provide confidence: The interviews and user tests were essential for the content of our solution. They gave us more confidence that we were on the right track during the creative and development process. It was helpful to realise that some of our initial ideas did not really address a user need. Dropping them saved us a lot of development effort and money, at the cost of just a few days of user testing. Looking back, we could have conducted interviews even earlier - after the last user tests, some of today's functions would probably not have been developed at all, since they are nice to have but do not provide any fundamental added value.

2. Clear user needs make it easier to prioritise: Prioritising tasks and features is much easier when we can build on clear customer needs rather than mere assumptions - we knew which ones would bring success and which ones wouldn't.

3. Experts from different areas are essential for good services: Service design projects are typically characterised by the complexity and multi-layered nature of the challenges to be solved. They can rarely be answered by one person. In such a complex environment, cross-functional teams are essential for finding good solutions.

4. Agile sprints help you leave the hidden chamber quickly: The Scrum sprints proved extremely helpful in such an unknown environment for gaining experience and achieving a fast time-to-market.

5. Service design helps to make complex things simple: Last but not least, with this project we have once again proved that even in a technologically, legally and thematically highly complex environment, easy and user-friendly solutions can be developed. Service design, as a holistic and user-centered approach, provides the structured method - combined with Scrum for a flexible and fast implementation.

Jugend und Medien - the portal for youth and media literacy Thu, 03 Jan 2019 00:00:00 +0100 The goal

Jugend und Medien (Youth and Media) is the national platform of the Bundesamt für Sozialversicherungen (Federal Social Insurance Office) that promotes media literacy. On behalf of the Federal Council, it aims to ensure that children and youngsters use digital media safely and responsibly.
I'm very happy to have contributed to such an important step, focused on redesigning and improving the whole User Experience Design:

  • identify the main user needs
  • simplify the information structure
  • reorganize the content
  • increase general consistency
  • give a fresher look towards a richer experience and “brand” awareness
  • improve the emotional connection

The old website

Previous website

The challenge

  • tight budget
  • collaboration with external development agency
  • many user groups and different needs
  • amount of content, crosslinks and different languages
  • create a digital experience coherent with the print identity

The redesign

The new website not only provides a more modern look, but above all an improved user experience.

As mentioned, Jugend und Medien is a portal with specific information on how parents, teachers, staff of special needs institutions, youth workers and educators can promote media literacy in their day-to-day life. They can find tips to help the younger generation protect themselves from the risks of media usage and networks.


Homepage new website

The homepage sections provide direct entries to the current trends and main user journeys. It combines the latest topics, the most popular (most viewed) pages, numerous offerings and related news.
The main navigation focuses on the major areas for parents and tutors, while for professional experts, there is a separate menu that leads to information on key topics and events, political initiatives, legal foundations, and cantonal strategies.
The clear information architecture and the extended spectrum of content resources (texts, images, videos, PDFs, contacts, external links, facts and statistics) help all users find answers to their needs.

The website is built on 7 different templates, from dynamic to static pages, from overview pages to detail pages, all following a consistent page structure for better usability.
The typography contrast, the yellow highlight elements, and interactive components such as accordions also improve navigation and readability. For longer pages, we developed a sticky sub-navigation which makes it easy to scan the page content and jump to a specific section.
Last but not least, the new look brought a much more modern touch than before! Lighter, with more "white space", and more comfortable to read.

Overview page

Themen new website

Detail page

Thema new website

See more on

The collaboration with the client

With an open mindset, we decided to start from scratch and avoid being influenced by the existing website. In a workshop with the BSV team and the support of Andreas Amsler (ex-Liiper), we listed all the user needs and grouped the data in different ways, striving for an improved concept – simpler and more intuitive – appropriate for an information portal aimed at parents and guardians.
Fabian Ryf, our Digital Analytics & Performance Consultant, validated the conceived information architecture from an SEO point of view. That way we ensured that the content can be easily found and understood, also through search engines.

After the core structure was defined, we matched the existing content to it, splitting, merging, cleaning and even re-writing a lot of it. By refining the whole architecture we achieved a more usable and structured journey, now also with clearer content. It was just about time to adapt the interface too.

The whole concept had been vivid in my head since the beginning. As we didn't have much time for sketches and wireframes, I quickly redesigned the new page templates and styleguide, attempting to involve the developers as soon as possible (remember, they were external to Liip, which adds extra complexity). In the end, the client was happy, and so was I! The visuals were validated, the guidelines were defined and the project could move to implementation.

In the meantime, the client, the developers and I were in touch on a very regular basis, reorganising the content to fit the requirements, reshaping the design to support the users and documenting updates for the developers, all in fast iterations.

I must say that I really enjoyed collaborating with Collete Marti and Yvonne Haldimann. We accomplished a great result with meetings in a mix of English, French, German and Portuguese!
Visiting their Bern office allowed me to get to know the people, the environment and sense the corporate identity which I had to replicate digitally.

The collaboration with the developers

In 6 years working at Liip this was my first experience partnering with an external development agency.
I can say that I’m spoiled by my daily work in cross-functional teams where close collaboration is key for greater results. At Liip, from the very first pitch to the last line of code, User Experience Designers and Developers participate in workshops and co-create, building a product together until the end.
That said, managing expectations when collaborating with an external partner isn't an easy task at all...

UX designers alike might share the feeling that what we conceptualize often isn't implemented accordingly. We design something, ship it to "the other side", and get back "half" of it. It's so frustrating that we – the so-called divas – end up doing "developer-centered design" instead of "user-centered design"...
Luckily, that wasn't the case here! Thanks to the design-eyed frontenders I had the chance to work with, and the use of Sketch + InVision, we shared guidelines and comments, finding compromises and solutions together.

I would like to thank the development team for the amazing efforts: Raphael Wälterlin, Manuel Bloch, Patrick Gutter, Sebastian Thadewald, Michael Bossart and Co.

The result

The huge work from the BSV team and super commitment from cab services ag allowed us to achieve what I'm proud to announce at

lex4you, the call center of the future Thu, 20 Dec 2018 00:00:00 +0100 TCS wanted to give a broad audience access to useful and understandable information on legal questions across various areas of everyday life. In addition, the online telephony service lexCall was developed for a restricted target audience. Via lexCall, TCS policyholders can be put in direct contact with the lawyers of TCS Legal Protection thanks to WebRTC technology (voice over IP), i.e. via a call made from a web browser. Pascale Hellmüller explains how using Scrum made it possible to launch this platform within the deadline while guaranteeing end-user satisfaction.

Liip supported the TCS project team throughout the development of this multi-service platform.

A first in the field of legal assistance

Liip: What is lex4you? And what is its purpose?

Pascale Hellmüller: lex4you is an interactive web platform offering legal information. Three services are provided. lexSearch makes useful and understandable information on legal questions across various areas of everyday life available to everyone. lexForum lets users read about and exchange their experiences with everyday legal questions. lexCall allows a restricted audience, notably TCS Legal Protection policyholders as well as policyholders of some of our B2B partners, to receive legal information directly from a lawyer online, via a voice-over-IP call.

What was the situation before lex4you?

Unlike its competitors, TCS Legal Protection did not offer a dedicated legal information service by telephone. This service could be offered as a commercial gesture. We wanted to align ourselves with our competitors and offer this service to our policyholders in an innovative way.

What is the situation after the go-live of lex4you?

With lex4you, the goal is to guide our policyholders. First on the web platform itself (free access to texts and documents as well as the forum). Then, in a second step, by telephone via the lexCall service. The lawyers answer calls directly and inform policyholders instantly. We support our policyholders in legal situations where they may feel helpless. We guide them through their steps, even when there is no dispute. The lex4you platform also allows policyholders to submit a simple callback request during service hours. We wanted to foster proximity between our lawyers and our policyholders and thereby strengthen our customer service.

How did the project go?

If I had to sum up the project in two words, it would be: dare and learn. It was still something of a gamble, which turned out to be an extremely positive learning experience. From every point of view and for everyone. We had no digital experience in the insurance field. Using Scrum was new to me. Organising the work as a project that favours the participation of all stakeholders was also new to the management of the business unit in question.
The agile approach is demanding. It requires great rigour as well as defining and respecting a certain framework. That framework was defined together with the management of the business unit. This exercise allowed the project team to be autonomous in decisions related to development and made the process run more smoothly.
Scrum generates many iterations. Lessons can be drawn every three weeks. As soon as you recognise that you have the capacity to learn, that is, to adapt, everything becomes possible. With the right partner and the right mindset, there are no obstacles and, in the end, everything works.

Liip was very creative in making our work easier.

The advantages of using Scrum

What were the convincing arguments for using an agile methodology?

We had a time constraint. The new lex4you platform had to be launched 10 months later, so we needed an approach that would allow us to meet that deadline. We also wanted a solution that would let us first develop the core functions adapted to the needs of our policyholders and our organisation, and then build around them later. Waterfall approaches prove less suitable and generally more expensive in such a case. Scrum was therefore the method that best met our needs in terms of deadline and progressive development of features.

What I really like about the agile approach is that every problem is a gift, because the method allows you to identify it during development and immediately integrate its resolution into the process.

What were the benefits of using this method?

We had initiated the project internally with our usual way of working. We had already worked on defining the service and its features before the collaboration with Liip began. To be consistent with Scrum, we had to dare to deconstruct the concept in order to (re)build it step by step. In this way we were able to benefit from the co-creation experience with Liip.
This participative approach brought together not only the internal project team and Liip but also our management. The platform's users as well as the stakeholders involved in promoting and managing the service (the lawyers, the managers of the future service, sales and marketing representatives) were included in the successive test phases. They could give their opinion on the features developed, and we were able to take it into account in the subsequent development phases.
At the end of the project, the nicest comment I received from management was: "We now understand the approach, and if we had to do it again, we would do the same thing."

An innovative technological solution

Why did you choose WebRTC technology for your lexCall service?

Our first idea was to create a classic call center. We would have been dependent on telephone operators. Choosing a web solution gave us greater autonomy as well as flexibility in evolving our lexCall service.
This responsive website features WebRTC (real-time communication). The lexCall service lets policyholders contact our lawyers via voice over IP at no extra cost if they connect from a computer or tablet, and within their phone plan if they connect from their smartphone.
We also saw an opportunity to differentiate ourselves from our competitors. Indeed, with this digital solution there is no waiting queue. The site shows, in real time, whether a lawyer is available to answer a call, in each of the three languages (French, German, Italian). If no lawyer is available at that moment during service hours, the policyholder can submit a callback request.

What are the benefits of using this technology?

A policyholder using our lexCall service is logged in to the platform. The system checks whether the policyholder meets the conditions to access the lexCall service. If not, the policyholder is guided through the steps needed to benefit from the service (again).
The lawyer who answers a policyholder's call therefore does not need to identify them. This saves time and increases security compared with a classic call center.
We also have the option of activating or deactivating the lexCall service in one click at any time. This gives us better cost control, notably over the costs of callbacks that go through the standard telephone network. This web solution gives us more flexibility in managing the service.
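The access and routing decisions behind lexCall can be sketched as a small pure function. Everything below is invented for illustration (names, types and rules are assumptions, not TCS's actual implementation): the logged-in policyholder's entitlement is checked first, then lawyer availability in the caller's language, with a callback request as fallback during service hours.

```typescript
// Hypothetical sketch of the lexCall access decision.
// Invented names and rules - not TCS's actual implementation.

type Lang = "fr" | "de" | "it";

interface Caller {
  entitled: boolean; // does the policy currently cover lexCall?
  language: Lang;
}

interface ServiceState {
  withinServiceHours: boolean;
  availableLawyers: Record<Lang, number>; // lawyers free per language
}

type Decision =
  | "connect-call"         // a lawyer is free: start the voice-over-IP call
  | "offer-callback"       // no lawyer free right now: offer a callback request
  | "explain-entitlement"  // guide the user through the steps to (re)gain access
  | "closed";              // outside service hours

function routeCall(caller: Caller, state: ServiceState): Decision {
  if (!caller.entitled) return "explain-entitlement";
  if (!state.withinServiceHours) return "closed";
  if (state.availableLawyers[caller.language] > 0) return "connect-call";
  return "offer-callback";
}
```

Because the entitlement check happens before any call is placed, the lawyer answering never has to identify the caller, which is the time and security gain described above.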

At the end of sprint 5, during the demo, I realised the platform was really taking shape. I realised, with much emotion, that we were really doing it. It was very uplifting. We were all at the same table, Liip and the TCS project team. Together, we found solutions and brought to life what we had imagined. It was fantastic. This closeness really shows that collective intelligence delivers excellence - something a siloed organisation makes difficult.