Speech recognition with wit.ai Tue, 13 Mar 2018 00:00:00 +0100 Speech recognition is here to stay. Google Home, Amazon Alexa/Dot and the Apple HomePod are storming our living rooms. Speech recognition in assistants on mobile phones such as Siri or the Google Assistant has reached a point where it has actually become reasonably useful. So we might ask ourselves: can we put this technology to other uses than asking Alexa to put beer on the shopping list, or Microsoft Cortana for directions? Not much is actually needed to create your own piece of software with speech recognition, so let's get started!


If you want to have your own speech recognition, there are three options:

  1. You can hack Alexa to do things, but you might be limited in its possibilities
  2. You can use one of the integrated solutions such as Rebox, which gives you more flexibility and has a microphone array and speech recognition built in.
  3. Or you use just a simple Raspberry Pi or your laptop. That's the option I am going to talk about in this article. Oh, btw, here is a blog post from Pascal - another Liiper - showing how to do ASR in the browser.

Speech Recognition (ASR) as Open Source

If you want to build your own device, you can make use of excellent open source projects like CMU Sphinx, Mycroft, CNTK, Kaldi, Mozilla DeepSpeech or KeenASR, which can be deployed locally, often already work on a Raspberry Pi and have the benefit that no data has to be sent over the Internet in order to recognize what you've just said. So there is no lag between saying something and the reaction of your device (we'll cover this issue later). The drawback might be the quality of the speech recognition and the ease of use. You might be wondering why it is hard to get speech recognition right. Well, the short answer is data. The longer answer follows:

In a nutshell - how does speech recognition work?

Normally (original paper here) the idea is that you have a recurrent neural network (RNN). An RNN is a deep learning network where the current state influences the next state. You then feed 20-40 ms slices of audio, which have previously been transformed into a spectrogram, into the RNN as input.

A spectrogram
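If you want to look at such a spectrogram for your own recording, a minimal sketch with scipy (assuming the mono wav file we record later in this article) could look like this:

import scipy.io.wavfile as wavfile
from scipy import signal

# read the mono wav file we record later in this article
rate, samples = wavfile.read("myspeech.wav")

# compute the spectrogram: signal energy per frequency bin over time
frequencies, times, spectrogram = signal.spectrogram(samples, fs=rate)

print(spectrogram.shape)  # (frequency bins, time slices)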

An RNN is useful for language tasks in particular because each letter influences the likelihood of the next. So when you say "speech" for example, the chance of saying "ch" after you've said "spee" is quite high ("speed" might be an alternative too). So each 20 ms slice is transformed into a letter and we might end up with a letter sequence like this: "sss_peeeech", where "_" means nothing was recognized. After removing the blanks and collapsing repeated letters into one we might end up with the word "speech", if we're lucky, among other candidates like "spech", "spich", "sbitsch", etc. Because the word "speech" appears more often in written text, we'll go for that.
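The collapsing step described above is easy to sketch in a few lines of Python. This is only a toy illustration of the idea (blank symbol and example sequence taken from the paragraph above), not the decoder that any of the mentioned libraries actually use:

def collapse(sequence, blank="_"):
    # merge consecutive duplicates, then drop the blank symbol
    merged = []
    previous = None
    for letter in sequence:
        if letter != previous:
            merged.append(letter)
        previous = letter
    return "".join(l for l in merged if l != blank)

print(collapse("sss_peeeech"))  # -> "speech"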

An RNN for speech recognition

Where is the problem now? Well, the problem is that you, as a private person, will not have millions of speech samples, which are needed to train the neural network. On the other hand, everything you say to your phone is collected by e.g. Alexa and used as training examples. You don't believe me? Here you can find all the samples you have ever said to your Android phone. So what options do you have? Well, you can still use one of the open source libraries that already come with a pre-trained model. But often these models have only been trained for the English language. If you want to make them work for German or even Swiss German, you'd have to train them yourself. If you just want to get started, you could use a speech recognition as a service provider.

Speech Recognition as a Service

If you feel like using a speech recognition service, it might surprise you that most startups in this area have been bought up by the giants: Google has bought api.ai and Facebook has bought wit.ai, another startup working in this field. Of course the other big five companies have their own speech services too. Microsoft has Cognitive Services in Azure and IBM has speech recognition built into Watson. Feel free to choose one for yourself. From my experience their performance is quite similar. In this example I went with wit.ai.

Speech recognition with wit.ai

For a fun little project, "Heidi - the smart radio", at the SRF Hackathon (btw. Heidi scored 9th out of 30 :)) I decided to build a smart little radio that basically listens to what you are saying. You just tell the radio to play the station you want to hear and then it plays it. That's about it. So all you need to build a prototype is a microphone and a speaker. Let's get started.

Get the audio

First you will have to get the audio from your microphone, which can be done quite nicely with Python and pyaudio. The idea here is that you create a never-ending loop which always records 4 seconds of your speech and then saves them to a file. In order to send the data to wit.ai, it reads the data back from the file and sends it as a POST request to wit.ai. Btw, we will do the recording in mono.
import pyaudio
import wave

def record_audio(RECORD_SECONDS, WAVE_FILENAME):
    #--------- SETTING PARAMS FOR OUR AUDIO FILE ------------#
    FORMAT = pyaudio.paInt16    # format of wave
    CHANNELS = 1                # no. of audio channels (mono)
    RATE = 44100                # frame rate
    CHUNK = 1024                # frames per audio sample

    # creating PyAudio object
    audio = pyaudio.PyAudio()

    # open a new stream for the microphone
    # It creates a PortAudio Stream Wrapper class object
    stream = audio.open(format=FORMAT, channels=CHANNELS,
                        rate=RATE, input=True,
                        frames_per_buffer=CHUNK)

    #----------------- start of recording -------------------#

    # list to save all audio frames
    frames = []

    for i in range(int(RATE / CHUNK * RECORD_SECONDS)):
        # read audio stream from microphone
        data = stream.read(CHUNK)
        # append audio data to frames list
        frames.append(data)

    #------------------ end of recording --------------------#
    print("Finished recording.")

    stream.stop_stream()    # stop the stream object
    stream.close()          # close the stream object
    audio.terminate()       # terminate PortAudio

    #------------------ saving audio ------------------------#

    # create wave file object
    waveFile = wave.open(WAVE_FILENAME, 'wb')

    # settings for wave file object
    waveFile.setnchannels(CHANNELS)
    waveFile.setsampwidth(audio.get_sample_size(FORMAT))
    waveFile.setframerate(RATE)
    waveFile.writeframes(b''.join(frames))

    # closing the wave file object
    waveFile.close()
def read_audio(WAVE_FILENAME):
    # function to read audio(wav) file
    with open(WAVE_FILENAME, 'rb') as f:
        audio = f.read()
    return audio

def RecognizeSpeech(AUDIO_FILENAME, num_seconds = 5):

    # record audio of specified length in specified audio file
    record_audio(num_seconds, AUDIO_FILENAME)

    # reading audio
    audio = read_audio(AUDIO_FILENAME)

    # ....

if __name__ == "__main__":
    while True:
        text =  RecognizeSpeech('myspeech.wav', 4)

Ok, now you should have a myspeech.wav file in your folder that gets replaced with the newest recording every 4 seconds. We need to send it to wit.ai to find out what we've actually said.

Transform it into text

There is extensive documentation for wit.ai. I will use the HTTP API, which you can simply try out with curl. To help you get started, I thought I'd write this small script to show some of its capabilities. Generally all you need is an access token from wit.ai that you send in the headers, plus the data that you want to be transformed into text. You will receive a text representation of it.
import requests
import json

def read_audio(WAVE_FILENAME):
    # function to read audio(wav) file
    with open(WAVE_FILENAME, 'rb') as f:
        audio = f.read()
    return audio


# get a sample of the audio that we recorded before. 
audio = read_audio("myspeech.wav")

# defining headers for HTTP request
headers = {'authorization': 'Bearer ' + ACCESS_TOKEN,
           'Content-Type': 'audio/wav'}

#Send the request as post request and the audio as data
resp = requests.post('https://api.wit.ai/speech', headers = headers,
                     data = audio)

#Get the text
data = json.loads(resp.content)

So after recording something into your ".wav" file, you can send it off to wit.ai and receive an answer:

{u'entities': {}, u'msg_id': u'0vqgXgfW8mka9y4fi', u'_text': u'Hallo Internet'}
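The recognized text sits in the _text field of that JSON response, so a small sketch for getting at it, using the data dictionary from the snippet above, is simply:

# pull the recognized text out of the wit.ai response
text = data.get("_text", "")
print(text)  # e.g. "Hallo Internet"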

Understanding the intent

Nice, it understood my gibberish! So now the only thing left is to understand the intent of what we actually want. For this, wit.ai has created an interface to figure out what the text was about. Different providers differ quite a bit in how you model intent, but for wit.ai it is nothing more than fiddling around with the GUI.

Teaching wit.ai our patterns

As you can see in the screenshot, wit has a couple of predefined entity types, such as age_of_person, amount_of_money, datetime, duration, email, etc. What you basically do is mark the word you are particularly interested in with your mouse, for example the radio station "srf1", and assign it to a matching entity type. If you can't find a fitting one, you can simply create one, such as "radiostation". Now you can use the textbox to enter some example formulations and mark the entity to "train" wit to recognize your entity in different contexts. It works to a certain extent, but don't expect too much of it. If you are happy with the results, you can use the API to try it.

import requests
import json

headers = {'authorization': 'Bearer ' + ACCESS_TOKEN}

# Send the text
text = "Heidi spiel srf1."
resp = requests.get('https://api.wit.ai/message?q=%s' % text, headers = headers)

#Get the text
data = json.loads(resp.content)

So when you run it you might get:

{u'entities': {u'radiostation': [{u'confidence': 1, u'type': u'value', u'value': u'srf1'}]}, u'msg_id': u'0CPCCSKNcZy42SsPt', u'_text': u'(Heidi spiel srf1.)'}


Nice, it understood our radio station! Well, there is not really much left to do other than just play it. I've used a hacky mplayer call to just play something, but the sky is the limit here.
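To get from that JSON response to the radiostation variable used in the snippet below, you can dig into the entities dictionary (a small sketch, field names as in the response above):

# extract the recognized radio station from the wit.ai response
entities = data.get("entities", {})
radiostation = ""
if "radiostation" in entities:
    radiostation = entities["radiostation"][0]["value"]  # e.g. "srf1"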

if radiostation == "srf1":
    # hacky: just hand a stream URL over to mplayer (SRF1_STREAM_URL is a placeholder)
    os.system("mplayer " + SRF1_STREAM_URL)

That was easy, wasn't it? Well yes, but I omitted one problem, namely that our little smart radio is not very convenient because it feels very "laggy". It has to listen for 4 seconds first, then transmit the data to wit and wait until wit has recognized it, then figure out the intent and finally play the radio station. That takes a while - not really long, maybe 1-2 seconds, but we humans are quite sensitive to such lags. Now if you say the voice command at exactly the right moment when the radio is listening, you might be lucky. But otherwise you might end up having to repeat your command multiple times, just to hit the right slot. So what is the solution?

The solution comes in the form of a so-called "wake word". It's a keyword that the device constantly listens for and the reason why you always have to say "Alexa" first if you want something from it. Once a device picks up its own "wake word", it starts to record what you have to say after the keyword and transmits this bit to the cloud for processing and storage. In order to pick up the keyword fast, most of these devices do the automatic speech recognition for the keyword on the device itself and only send the data off to the cloud afterwards. Some companies, like Google, went even further and put the whole ML model on the mobile phone in order to have a faster response rate and, as a bonus, to work offline too.
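In our toy setup, a very crude approximation of this idea would be to only react when the transcription starts with a keyword such as "Heidi" (this assumes RecognizeSpeech from above is extended to return the _text field of the wit.ai response); real devices run a dedicated on-device model for the wake word instead:

WAKE_WORD = "heidi"

while True:
    # keep recording short chunks and only react when the wake word was heard
    text = RecognizeSpeech('myspeech.wav', 4)
    if text and text.lower().startswith(WAKE_WORD):
        command = text[len(WAKE_WORD):].strip()  # e.g. "spiel srf1"
        print("Command:", command)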

What's next?

Although the "magic" behind the scenes of automatic speech recognition systems is quite complicated, it's easy to use automatic speech recognition as a service. On the other hand, the market is already quite saturated with different devices at quite affordable prices. So there is really not much to win if you want to create your own device in such a competitive market. Yet it might be interesting to use open source ASR solutions in already existing systems where there is a need for confidentiality. I am sure not every user wants their speech data to end up in a Google data center when they are using a third-party app.

On the other hand, for the big players, offering devices at affordable prices turns out to be a good strategy. Not only are they collecting more training data this way - which makes their automatic speech recognition even better - but eventually they control a very private channel to the consumer, namely speech. After all, it's hard to find an easier way of buying things than just saying it out loud.

For all other applications, it depends on what you want to achieve. If you are a media company and want to be present on these devices, which will probably soon replace our old radios, then you should start developing so-called "skills" for each of these systems. The discussion on the pros and cons of smart speakers is already ongoing.

For websites this new technology might finally bring an improvement for impaired people, as most modern browsers increasingly support ASR directly in the client. So it might not take too long until the old paradigm in web development shifts from "mobile first" to "speech first". We will see what the future holds.

Meet Kotlin — or why I will never go back to Java! Fri, 09 Mar 2018 00:00:00 +0100 When Google and JetBrains announced first-class support for Kotlin on Android last year, I could not wait to use it on our next project. Java is an OK language, but when you are used to Swift on iOS and C# on Xamarin, it's sometimes hard to go back to the limited Java that Android has to offer.

Within this past year, we successfully shipped two applications using Kotlin exclusively, with another one to follow soon. We decided to also use Kotlin for previous Java apps that we keep updating.

I took my chance when the Mobile Romandie Beer meetup was looking for speakers. I knew that I had to show others how easy and fun this language is.

It turned out great. We had people from various backgrounds: from people just curious about it, UX/UI designers and iOS developers, to Java developers and people already using Kotlin in production.

You can find my slides below:

I would like to share a few links that helped me learn about Kotlin:

  • Kotlin Koans: a step by step tutorial that directly executes your code in the browser
  • Kotlin and Android: the official Android page to get started with Kotlin on Android
  • Android KTX: a useful library to help with Android development, released by Google

See you at the next meetup!

Machine Learning as a Service with firefly Sun, 04 Mar 2018 00:00:00 +0100 So I know there is Yhat ScienceOps, which is a product built exactly for this problem, but that solution is a bit pricey and maybe not the right thing if you want to prototype something really quickly. There is, of course, the option to use your own servers and wrap your ML model in a thin layer of Flask, as I have shown in a recommender example for Slack before. But now there is an even easier solution using firefly and Heroku, which offers you a way to deploy your prototypes basically for free.


You can easily install firefly with pip

pip install firefly-python

Once it's installed (I've been using Python 2.7 - shame on me) you should be able to test it with:

firefly -h

Hello World Example

So we could write a simple function that returns the sum of two numbers:
# example.py
def add(x, y):
    return x + y

and then run it locally with firefly:

firefly example.add
2018-02-28 15:25:36 firefly [INFO] Starting Firefly...

The cool thing is that the function is now available at http://127.0.0.1:8000/add and you can use it with curl. Make sure the firefly server is still running in another tab.

curl -d '{"x": 4, "y": 5}' http://127.0.0.1:8000/add

or even with the built in client:

import firefly
client = firefly.Client("http://127.0.0.1:8000")
client.add(x=4, y=5)  # returns 9

For any real-world example, you will need to use authentication. This is actually also quite easy with firefly. You simply supply an API token when starting it up:

firefly  example.add --token plotti1234

Using the firefly client you can easily authenticate with:

client = firefly.Client("http://127.0.0.1:8000", auth_token="plotti1234")

If you don't supply it, you will get a:

firefly.client.FireflyError: Authorization token mismatch.

Of course, you can still use curl to do the same:

curl -d '{"x": 6,"y":5}' -H "Authorization: Token plotti1234" http://127.0.0.1:8000/add

Going to production

Config File

You can also use a config.yml file to supply all of these parameters:

# config.yml
version: 1.0
token: "plotti1234"
functions:
  add:
    path: "/add"
    function: "example.add"

and then start firefly with:

firefly -c config.yml

Training a model and dumping it onto drive

Now you can train a model and dump it to drive with scikit-learn's joblib. You can then easily load it with firefly and serve it under a route. First, let's train a hello world decision tree model on the iris dataset and dump it to drive:

from sklearn import tree
from sklearn import datasets
from sklearn.externals import joblib

# Load dataset
iris = datasets.load_iris()
X, Y = iris.data, iris.target
# Pick a model
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
# Try it out
clf.predict([[5.1, 3.5, 1.4, 0.2]])  # -> array([0]), the result of the classification
# Dump it to drive
joblib.dump(clf, 'iris.pkl')

You can then load this model in firefly as a function and you are done:

# iris.py
from sklearn.externals import joblib

clf = joblib.load('iris.pkl')

def predict(a):
    predicted = clf.predict(a)    # numpy array with one predicted class per sample
    return int(predicted[0])

To start it up you use the conventional method:

firefly iris.predict

And now, you can access your trained model simply by the client or curl:

import firefly
client = firefly.Client("http://127.0.0.1:8000")
client.predict(a=[[5.1, 3.5, 1.4, 0.2]]) # the same values as above
0 # the same result yeay!

Deploy it to Heroku!

To deploy it to Heroku you need to add two files: a Procfile that says how to run our app, and a requirements.txt file that lists the libraries it will be using. The requirements.txt is quite straightforward:
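A minimal requirements.txt for this setup needs firefly itself, gunicorn for the Procfile below and scikit-learn for the model; the exact package list and versions here are an assumption, not taken from the original deployment:

# requirements.txt (packages assumed for this sketch)
firefly-python
gunicorn
scikit-learn
numpy
scipy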


And for the Procfile you can use gunicorn to run the app and supply the functions that you want to expose as environment parameters:

# Procfile
web: gunicorn --preload firefly.main:app -e FIREFLY_FUNCTIONS="iris.predict" -e FIREFLY_TOKEN="plotti1234"

The only thing left to do is commit it to git and deploy it to heroku:

git init
git add . 
git commit -m "init"
heroku login # to login into your heroku account. 
heroku create # to create the app

The final step is the deployment, which is done via a git push to Heroku:

git push heroku master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 279 bytes | 279.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote: -----> Python app detected
remote: -----> Installing requirements with pip
remote: -----> Discovering process types
remote:        Procfile declares types -> web
remote: -----> Compressing...
remote:        Done: 119.6M
remote: -----> Launching...
remote:        Released v7
remote: deployed to Heroku
remote: Verifying deploy... done.
   985a4c3..40726ee  master -> master

Test it

Now you've got a machine learning model running on Heroku for free! You can try it out via curl. Notice that I've wrapped the array in a string representation to make things easy.

curl -d '{"a":"[[5.1, 3.5, 1.4, 0.2]]"}' -H "Authorization: Token plotti1234" https://<your-app>.herokuapp.com/predict

You can of course also use the firefly client:

client = firefly.Client("https://<your-app>.herokuapp.com", auth_token="plotti1234")
client.predict(a=[[5.1, 3.5, 1.4, 0.2]])

Bonus: Multithreading and Documentation

Since we are using gunicorn, you can easily start 4 workers so that your API responds better under high load. Change your Procfile to:

web: gunicorn --workers 4 firefly.main:app -e FIREFLY_FUNCTIONS="iris.predict" -e FIREFLY_TOKEN="plotti1234"

Finally, there is only crude support for apidoc-style documentation. But when you do a GET request to the root / of your app, you will get a listing of the docstrings from your code. So hopefully in the future they will also support apidoc or swagger to make the usage of such an API even more convenient:

curl -H "Authorization: Token plotti1234" https://<your-app>.herokuapp.com/
{"app": "firefly", "version": "0.1.11", "functions": {"predict": {"path": "/predict", "doc": "\n    @api {post} /predict\n    @apiGroup Predict\n    @apiName PredictClass\n\n    @apiDescription This function predicts the class of iris.\n    @apiSampleRequest /predict\n    ", "parameters": [{"name": "a", "kind": "POSITIONAL_OR_KEYWORD"}]}}}

I highly recommend this still young project, because for prototypes it really reduces deploying a new model to a git push heroku master. There are obviously some things missing, like extensive logging, performance benchmarking, various methods of authentication and better support for docs. Yet it's so much fun to deploy models in such a convenient way.

One for all, all for one Fri, 02 Mar 2018 00:00:00 +0100 Why a blog?
The cooperation between Raiffeisen and Liip has developed and deepened over the years. From the first steps in agility, a joint Scrum team working in the same office has emerged, in which affiliation with one company or the other does not play a role. That's why we are sharing our experience in a blog series about collaboration and cooperation.

Approximately 255 Raiffeisen banks are currently members of the Raiffeisen Switzerland cooperative. Raiffeisen Switzerland provides services for the entire Raiffeisen Group and is responsible for the strategic orientation of the Raiffeisen banks' business areas as well as for risk management, marketing, information technology, training, and supplying the banks with liquidity. Raiffeisen Switzerland also conducts its own banking business through branches.

As Raiffeisen's customer loyalty platform, MemberPlus has grown over several years and now offers a wide range of services to Raiffeisen customers. In addition to discounts on event tickets, there are special conditions for private customers on hotel accommodation and much more. Corporate customers also have access to special sponsoring deals around the Raiffeisen Super League. Raiffeisen Music is the app for young people to listen to their favourite songs, visit concerts for less money and, with a bit of luck, meet their stars.

The project: MemberPlus Portal 2.0
In 2018, a relaunch of the MemberPlus platform realizes the following vision: "We offer everyone an overview of offers for members. These offers can be booked quickly and easily by MemberPlus customers." In addition to this clear user centricity, the second focus is on technical innovation: an upgrade to Magento 2 and a completely new user interface with Angular.

To be continued
Curious about the next blog post? We'll publish the next article at the end of March on the topic of project setup.

Laura Kalbag – Accessibility for Everyone Wed, 21 Feb 2018 00:00:00 +0100 How does it feel to navigate the web with an impairment?

Imagine you can’t see and you listen to a screen reader, what does it say? What is wrong with a screen reader? It reads titles filled with SEO keywords (generalities and nothing specific), then it goes ‘link-webcontent-banner-link-link-webcontent-image…’ You get the idea.
Imagine you can’t hear and you see a video without subtitles. What can you understand?
Imagine you have a fine motor impairment, how can you click on a tiny ‘here’ link?
Imagine it is your first time on the web, you don’t know the conventions and you don't know how to fill in a contact form (what does the asterisk mean?)

Laura started her talk with a demonstration that provided examples of commonly faced difficulties.

There are 4 ways in which a page can be difficult:

  • The page is hard to operate,
  • The page is hard to understand,
  • The page is not readable,
  • The page is not listenable.

It is not about other people

Are you able-bodied? Do you ever feel concerned about such issues? If the answer is yes and then no, despite the fact that you might lack empathy, you are short-sighted.
There is about a 100% chance that, in the future, you will lose some of your abilities. We grow older: how easy is it for your grandparents to navigate the web? How easy is it for kids in comparison? Don’t fool yourself, you will be the grandparent.
Even temporarily, with a broken arm, a broken leg, an illness or an accident, we will all be impaired at some point. Actually, you could refer to yourself as a TAB: ‘temporarily able-bodied’.
While we enjoy our condition as TABs, it is the world that we create that is impairing.
Imagine a world created for people who are half our size; how easy would it be to move around a house built to a standard height of 1m10? It is uncomfortable and you might even harm yourself; it would be like walking around a medieval house and hurting your head because you are too tall.

What are accessibility and inclusive design?

Accessibility is a way around an obstacle. For example: you have stairs at the main entrance, but you provide a way around your house with a lift for anyone on wheels (from a mother with a child in a buggy to someone living with a wheelchair).
How does it feel to always have to take the back door because you get around on wheels?

Inclusive design goes beyond the alternative; it is designing for everyone from the beginning. Obviously disability is diverse, and there is little chance you can accommodate everybody. However, you can make slight changes that will provide a wider range of possibilities. Inclusive design is designing so that everyone can take the front door.

You can :

  • Make it easy to see,
  • Easy to hear,
  • Easy to operate,
  • Easy to understand.

“We need to design for purpose. Accessibility is not binary: it is our eternal goal. Iteration is our power: digital is not printed,” says Laura.

Practical actions for copywriting

I really liked that Laura provided us with a wide range of practical advice for creating inclusive design. The list below is not exhaustive; it is just what caught my ear.

  • Give your content a clear hierarchy and clear structure,
  • Don’t be an attention thief,
  • Use plain and simple language and explain the content,
  • Give your content order,
  • Use headings to segment and label: headings are not just a visual feature; in plain text, use a hierarchy such as h1, h2, h3,

  • Prefer descriptive links, such as ‘Contact us’ rather than ‘Click here to contact us’,
  • Use punctuation, like commas and full stops → it gives the screen reader a break,
  • Add transcripts (they are also useful to people who just want to scan the text),
  • Use captions and subtitles for video (captions include all of the audio information, not just the speech). Producing captions and subtitles is easy with Jubler. Another way of getting quick subtitles is to reuse and edit the auto-captions.

Alternative content

For people who can’t access your primary content (because of a slow connection or a sight disability), provide a text alternative (the alt attribute). It gives the browser a way to fall back.
Write descriptive, meaningful alternative text. Rather than ‘Picture of my dog’, be creative and use ‘Picture of the head of my dog resting on my knee, looking very sad while I work with my laptop on my lap.’

Try out and iterate

Social media is a great option to work on your alternative text. For example, you can add descriptions of pictures on Twitter.

Can accessible websites be beautiful?

Laura advises us to consider aesthetics as design, not as decoration because ugly is not accessible anyway. "We are not making art, beauty is a thoughtfully-designed interface." says Laura.

Practical actions: Aesthetic principles

  • Use buttons for buttons and links for links: a button makes something happen, a link takes someone somewhere: interfaces should not be confusing, one needs to understand what the purpose is, what they can do with it and when to do it,
  • Conventions: don’t be different for the sake of being different, but don’t do something just because everybody does it,
  • Ensure the layout order reflects the content order, also for keyboard navigation,
  • Width: long lines are difficult to follow,
  • Typography: choose according to readability and suitability, not because it looks cool: Heinemann vs. Georgia (a beautiful serif, but confusing if you are new to reading),
  • Small is not tidy, it is just small,
  • Don’t prevent font resizing,
  • Consider the font weights,
  • Consider the line heights,
  • Colour: it should not be the sole means to convey information (example: use a dotted line),
  • Colour contrast,
  • Don’t decide what is good for other human beings, rather ask them.

"Our industry isn’t so diverse: we don’t all have the same needs but we mostly build product for ourselves. We need to understand and care." advocates Laura.


It is beneficial to work within a diverse team. Empathy is easier because you embrace differences. When needs differ within a team, it becomes much harder to ignore those differences. When you understand problems, you are better at solving them.
A diverse team also prevents us from ‘othering’: let’s not speak about the ‘other’ people.
Laura proposes to go a step further: what if we spoke about a person rather than a user? Then it is not user experience design, just experience design.

We can also diversify our source material.
“Don’t shut people out. It impacts people’s lives. We build the new everyday thing, we have to take responsibility of what we do.” advocates Laura.

Everyday actions you can take

If you are not a designer or a copywriter, or if you feel that you are not in a position to decide, you can still make a difference:

  • Be the advisor: provide info and trainings,
  • Be the advocate: if you are not marginalised you have more power,
  • Be the questioner,
  • Be the gatekeeper,
  • Be difficult : embrace the awkwardness of being annoying,
  • Be unprofessional: don’t let people tell you to be quiet or to be nice,
  • Be the supporter: if you can’t risk things, support the people who speak up.

About Laura Kalbag and IxDA Lausanne

Laura Kalbag is a designer from the UK, and author of Accessibility For Everyone from A Book Apart.


Laura works on everything from design and development, through to learning how to run a sustainable social enterprise, whilst trying to make privacy, and broader ethics in technology, accessible to a wide audience. On an average day, you can find Laura making design decisions, writing CSS, nudging icon pixels, or distilling a privacy policy into something humans understand. (Text by IxDA Lausanne).

IxDA Lausanne is your local chapter of the Interaction Design Association - IxDA.
The team organises events for Interaction Design enthusiasts.
We are really happy to be the main sponsor of this great event and can’t wait for the next one.
Check the programme

Migros Culture Percentage Web-Relaunch Thu, 15 Feb 2018 00:00:00 +0100 Go Live before Christmas
Shortly before Christmas the new website of Migros Culture Percentage went live.

The website presenting Migros Culture Percentage's commitment was based on an outdated content management system (CMS). This was to change by the end of 2017. A migration to Sitecore, the CMS already known and in use within the Migros Group, was a given.

The new site should be mobile-capable and meet current technological standards. The project was used to update the site in all respects. Since the end of 2017 it has been state of the art: the site is responsive and can be accessed conveniently on all mobile devices and, of course, on the desktop computer at home.

Styleguide Generator
A pattern library generator was used as a basis for the development of the front-end. The appearance is therefore built on a technically clean foundation, which secures the front-end for the current appearance and for future design developments. The user journeys have also been reworked. Thanks to its user friendliness, the site is not only pleasing to the eye, but can also be experienced.

Cultural and social offer
Migros Culture Percentage gives a broad population access to cultural and social services. The Migros Cooperative Association finances the voluntary commitment in the areas of culture, society, education, leisure and business.

The website presents the various projects and at the same time makes it possible to apply for funding.

Drupal 8: Using the "config" section of composer Tue, 13 Feb 2018 00:00:00 +0100 Composer is the way we handle our dependencies in Drupal 8. We at Liip use Composer for all our Drupal 8 projects, and even for a lot of Drupal 7 projects we have switched to Composer.

We use the Composer template available on GitHub:

Composer has a lot of cool features. There are several plugins we use in all our Drupal projects.

Useful composer config options for Drupal developers

Today, I would like to share some cool features of the "config"-section of composer.json.

Let's have a look at the following config-section of my Drupal 8 project:

    "config": {
        "preferred-install": "source",
        "discard-changes": true,
        "secure-http": false,
        "sort-packages": true,
        "platform": {
            "php": "7.0.22"

Composer config: "preferred-install": "source"

Have you ever had the need to patch a contrib module? You found a bug and now you would like to publish a patch file to drupal.org. But how can you create a patch if the contrib module was downloaded as a zip file via Composer and extracted into your contrib folder, which is not under source control?

"preferred-install: source" is your friend! Add this option to your composer.json

  • delete your dependency folders and
  • run composer install again

All dependencies will be cloned via git instead of being downloaded and extracted. If you need to patch a module or Drupal core, you can easily create patches via git, because the dependency is under version control.

"discard-changes": true

If you are working with Composer Patches and preferred-install: source, you want to enable this option. If you have applied patches, there will be a difference in the source compared to the current git checkout. If you deploy with Composer, the resulting messages can block the composer install call. This option avoids messages like "The package has modified files." during deployment if you combine it with composer install --no-interaction.

"sort-packages": true

This option will sort all your packages inside the composer.json in alphabetical order. Very helpful.

Composer config: "platform" (force / fake a specific PHP version to be used)

Often we face the fact that our live / deployment servers are running on PHP 7.0, but locally you might run PHP 7.1 or even PHP 7.2. This can be risky, because if you run a "composer update" locally, Composer will assume that you have PHP 7.1 available and will download or update your vendor dependencies to PHP 7.1. If you deploy later, you will run into mysterious PHP errors, because you do not have PHP 7.1 available on your target system.
This is the reason we always fix / force the PHP version in our composer.json to match the current live system.

Drupal core update: How can I preserve the ".htaccess" and "robots.txt" files during the update?

If you do Drupal core updates, these files always get overridden by the default files. On some projects you might have changed these files and it's quite annoying to revert the changes on every core update.

Use the following options in the "extra"-section of your composer.json to get rid of this issue.

    "extra": {
        "drupal-scaffold": {
            "excludes": [
Launching Agile Zürich!! Sun, 11 Feb 2018 00:00:00 +0100 The story of the word “agile” started exactly 17 years ago, when 17 practitioners met in Utah and drafted the famous Manifesto for Agile Software Development. That was the turning point that turned the IT industry completely upside down. Ever since, it has spread across various sectors.

Surprisingly, this small gathering turned out to be a global earthquake in the way we see the world we live in. From finance to business to organizations, “agile” is everywhere now. The downside of this revolution is that the word “agile“ is now an adjective that is used, overused and abused everywhere as well. By becoming mainstream, some shortcuts were taken to push it to mass adoption, and its original flavor was diluted along the way.


Over the years many communities emerged, gathering specialists around the new trending methods and games that are created every year. Around Zürich, a dozen groups coexist around the same subject, “uncovering better ways of [working] by doing it and helping others do it“, to tweak the introduction of the manifesto.

One can easily get lost in the multiplication of new names and subgroups. Especially when you are new to agility. Where to start? Where to meet practitioners?

We lack a place to gather all together, to open the stage to newcomers, to share anecdotes and challenges. To talk freely about “agile”. To get back to the roots sometimes, or to launch crazy ideas!

An open space is definitely needed.

Inspired by the successes of the strong Agile Tour communities around the globe... let’s launch Agile Zürich!!


Like open source software that anyone can inspect, modify and enhance, Agile Zürich is an open source community. It is open to all. From any industry. From beginners to advanced practitioners, from curious minds to established professionals, from skeptics to believers… Agile Zürich intends to make people share.

And because it is always better to start early, its membership is free for students, as well as for unemployed people.


No company nor person owns it; it belongs to its members and intends to evolve through the years, based on its members’ actions.

Agile Zürich is also not for profit so every cent is spent to make it live, to bring value to the community.


Agile Zürich is not about preaching “agile” either, it is about sharing stories and learning techniques to face the complexity of the world. No one is right here, and no one is wrong either.

“At Agile Zürich, we are uncovering better ways of working by doing it and helping others do it.”

Join the group now and start sharing your thoughts! We’ll post the date of the first gathering soon, so let’s keep in touch!

Also, follow Agile Zürich on Twitter.

Innovative web presence of Steps Thu, 08 Feb 2018 00:00:00 +0100 Anniversary year
The dance festival Steps celebrates its 30th birthday in 2018. Every two years, the platform for contemporary dance presents approximately a dozen dance companies throughout Switzerland.

Innovative design
The birthday present to Steps was an innovative, reduced design of the website. The appearance is now completely responsive and therefore usable on mobile phones and tablets as well as at home on the desktop.

Challenging implementation
The implementation of the new design was particularly challenging in the development, as the new appearance consists of several micro animations. This also applies to the current date change in the schedule.

The basis of the website was also renewed in the project: Steps now runs on the Sitecore Content Management System.

Cross-agency cooperation
The new Steps appearance was created in close cooperation with Migros in the lead, Y7K as the design agency, Namics for the technical implementation in the CMS and Liip for the front-end implementation.

Urban Connect Mobile App Golive (And The Challenges We Faced) Thu, 01 Feb 2018 00:00:00 +0100 Back in October, we kickstarted the relaunch of the Urban Connect bicycle fleet solution for their corporate clients (amongst which are Google and Avaloq). We provided the startup with User Experience, design, and mobile development services. The goal was to launch a brand new iOS and Android mobile app, as well as a new back-office and API. This new solution will enable the startup to grow its activity based on a robust and scalable digital platform.

The go-live happened successfully last week — yeah! And while we prepare our celebration lunch, I thought it might be interesting to share the challenges we faced during this purposeful project, and how we overcame them.

Make The User Use The App As Little As Possible!

This product development was particular because we soon realized that, for once, we were crafting a mobile app that should be used as little as possible. That's quite a goal when all we hear about nowadays is user retention and engagement.
The point is that the Urban Connect service is about providing a bike solution with a smart lock that can be opened with your smartphone (via Bluetooth). People want to use it to book a bike and go from point A to point B with the least friction possible — including with our mobile app software.

This key discovery was possible thanks to our UX workshops, which focus on the users' problems first. Concretely, it means that we went to the Google Zürich office to analyze and listen to how real users were using the solution, and which problems they had. That’s when we understood that users wanted to use the app as little as possible, and that it should work automagically without them getting their smartphone out of their pocket.
It’s only afterwards that we started to draw the first wireframes, and iterated on prototypes with Urban Connect and its clients to be sure that what we’re going to build was answering the real issues.
And finally, we developed and applied the user interface design layers.

This resulted in one of the Google users stating:

“Wow, that’s like a Tesla!
I never use the key, but it’s always there with me, and it just works.”

Again, we looked at the problems first, not at the design aspects. It may sound simple, but it still isn’t as mainstream an approach as one could think.

On Third-Party Smart Locks, Fancy Android Phones, and MVP

Most of the struggles we had to overcome were linked to the Bluetooth connectivity between the smart locks from Noke, and the Chinese Android devices.

The issues we faced with the Noke smart lock solution were that the company is still a startup, and as such, there are still some quirks to their product, such as outdated documentation or hardware reliability.
Nevertheless, our solution was not to enter a blame game between them and Urban Connect, but rather to contact them directly to fix the problems, one at a time. We must say that they were really responsive and helpful, so much so that each of our main blockers was removed in less than a few days.
But that's something to take into account in the planning and investment margin when you do estimations — thankfully we did!

Testing Session of our Urban Connect Mobile App.

As for the fancy Android phones, it's the same story one hears everywhere: Android device fragmentation sucks. We were prepared to face such issues, and bought many different Android devices to be sure to have good coverage.
Nevertheless, there are always corner cases, such as a fancy Chinese phone with a specific Android ROM — one that is obviously not sold anymore, to simplify things.
We overcame this issue thanks to a simple tool: communication. We didn’t play the blame game, and got in contact with the end user to figure out what the problem was on their device, so that we could understand it better.

Although communication is a good ally, it is most effective when you start to use it as early as possible in the project.
As we use the Minimum Viable Product approach to develop our products, we could focus on these critical points upfront from day one, face them, and find solutions — vs. building the entire thing and being stuck at the go-live with unsolvable issues.

Trustful Partnership as a Key to Success

On top of the usual UX and technical challenges that we face in every project, one key to the success of this product launch was the people involved, and the trustful collaboration between Urban Connect and us.
We faced many different challenges, from unexpected events to sweats about planning deadlines (we made it on time, rest assured). And every single time, Judith and her team trusted us, as we trusted them, all along the way. Without this trust, we could have failed at many points during the project, but we both chose to remain in a solution-oriented mindset and focus on the product launch goal. It proved to work, once again.

A Trustful Partnership as the Key to a Product Success.

What’s next for Urban Connect?

Now that we have happily pressed the “Go Live!” button, we're first going to celebrate it properly over a nice lunch in Zürich. It was intense, but we love working on such purposeful products, which will help cities get healthier thanks to less pollution and more physical activity for their users.

Then, we’ll have plenty of features to work on in 2018, including innovative ones like integrating IoT modules to the platform. We’ll make sure to share such new cool stuff with you.

In case you want to learn more about the Urban Connect e-bike fleet service, feel free to contact Judith to get more info and a demo of the solution.