Let’s make Moodle amazing Thu, 14 Jun 2018 00:00:00 +0200 A new empowering direction for Moodle

MoodleMoot UK & Ireland 2018 in Glasgow was the place to be if, like me, you asked yourself: “What will be the future of the Learning Management System (LMS) called Moodle?”. From the 26th to the 28th of March 2018, the Moodle Headquarters organized a conference dedicated to Moodle Partners (companies offering Moodle services, such as Liip), as well as developers and administrators of the very popular open source course management system. It was a great opportunity to meet all these stakeholders and learn about the current trends of this LMS. The program began with the announcement of a $6 million investment from the company Education for the Many. Moodle HQ will use this funding to improve consistency and sustainability, to build a new European headquarters in Barcelona and to improve its didactic approach.

A new investor believing in the Moodle mission

Martin Dougiamas, founder and CEO of Moodle HQ, opened the conference with an inspiring keynote about the goals for the near future. Recalling the mission – empowering educators to improve our world – he articulated the vision of the company.

“Education is maybe the only weapon that can make a difference, as we need responsible persons to face the current issues of our world”.

This turning point requires financial support. Education for the Many, an investment company of the France-based Leclercq family involved in well-known businesses such as Decathlon sporting goods, understands the challenges that Moodle is facing. They are not focused solely on the return on investment; they also care about the educational vision. For the time and money invested, Education for the Many receives a minority stake in Moodle HQ and a seat on the board.

Future challenges

“It’s time to make Moodle amazing!”, continued Martin. One of the benefits for Europeans will be the growth of the Moodle office in Barcelona. It should expand to become like the headquarters in Perth, turning Barcelona into the European Moodle HQ. As most Moodle users are located in Europe, being close to them is an advantage. The Moodle product is and should always remain competitive; ensuring this is one of the pillars of the new strategy. With this goal in mind, the future Moodle 3.6+ versions will be designed to achieve sustainability at a high level. Furthermore, they will concentrate on improving usability, creating standards, enhancing system integrations and ensuring support across all devices.

Engaging the learners

One of the big challenges as a teacher is to keep participants engaged during the learning process. To support this, Moodle HQ is developing a special certification for Moodle Partners, so they can deepen their software knowledge and stay up to date on best practices for online content creation. Through official Moodle Partners, teachers get access to the same education platform. This is how the Learn Moodle platform aims to significantly improve the quality of teaching. Moreover, effort will be invested in maximizing connections inside the community of users and administrators, in order to build a big and strong user base through the association. This platform will support the creation of educational content, as well as sharing and offering services. Every Moodler is welcome to take part in this project.

To summarize, I came back from the conference more confident than ever about Moodle's potential, empowered as a Moodle Partner, and impatient to bring Moodle's capabilities to our customers.

Recipe Assistant Prototype with ASR and TTS on Socket.IO - Part 3 Developing the prototype Tue, 12 Jun 2018 00:00:00 +0200 Welcome to part three of three in our mini blog post series on how to build a recipe assistant with automatic speech recognition (ASR) and text to speech (TTS) to deliver a hands-free cooking experience. In the first blog post we gave you a hands-on market overview of existing SaaS and open source TTS solutions; in the second post we put the user in the center by covering the usability aspects of dialog-driven apps and how to create a good conversation flow. Finally it's time to get our hands dirty and show you some code.

Prototyping with Socket.IO

Although we envisioned the final app to be a mobile app running on a phone, it was much faster for us to build a small web application that basically mimics how an app might work on the mobile. Although Socket.IO is not the newest tool in the shed, it was great fun to work with because it is really easy to set up. All you need is a JS library on the HTML side and to tell it to connect to the server, which in our case is a simple Python Flask micro-webserver app.

#socket IO integration in the html webpage
<script src=""></script>
<script>
    var socket = io.connect('http://' + document.domain + ':' + location.port);
    socket.on('connect', function() {
        console.log("Connected recipe");
        socket.emit('start'); // tell the server we are ready to begin
    });
</script>

The code above connects to our flask server and emits the start message, signaling that our audio service can start reading the first step. Depending on different messages we can quickly alter the DOM or do other things in almost real time, which is very handy.

To make it work on the server side in the flask app all you need is a python library that you integrate in your application and you are ready to go:

# in flask
from flask_socketio import SocketIO, emit
socketio = SocketIO(app)


#listen to messages: start the audio thread once the client is ready
@socketio.on('start')
def start_thread():
    global thread
    if not thread.is_alive():
        print("Starting Thread")
        thread = AudioThread()
        thread.start()


#emit some messages
socketio.emit('ingredients', {"ingredients": "xyz"})

In the code excerpt above we start a thread that is responsible for handling our audio processing. It starts when the web server receives the start message from the client, signalling that it is ready to lead a conversation with the user.
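The AudioThread class itself is not shown in the excerpt. Below is a rough sketch of how such a thread could look; the class and event names come from the excerpts in this post, while the scripted-utterance constructor is my own stand-in for real microphone input:

```python
import threading

# global event used to stop the audio thread from the outside
thread_stop_event = threading.Event()

class AudioThread(threading.Thread):
    """Background thread that leads the conversation with the user.

    The real thread records audio and talks to the ASR/TTS services; this
    sketch just walks through the recipe states for scripted utterances.
    """

    def __init__(self, utterances=()):
        super().__init__()
        self.states = ["people", "ingredients", "step1", "end"]
        self.state = self.states[0]
        self.utterances = list(utterances)  # stand-in for microphone input

    def run(self):
        while not thread_stop_event.is_set():
            if not self.utterances:
                thread_stop_event.set()    # nothing left to process
                break
            text = self.utterances.pop(0)  # real code: recognize.RecognizeSpeech(...)
            if "weiter" in text:           # advance to the next state
                i = self.states.index(self.state)
                self.state = self.states[min(i + 1, len(self.states) - 1)]
```

The server-side handler then simply does `thread = AudioThread()` followed by `thread.start()`.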

Automatic speech recognition and state machines

The main part of the application is simply a while loop in the thread that listens to what the user has to say. Whenever we change the state of our application, it displays the next recipe step and reads it out loud. We’ve sketched out the flow of the states in the diagram below. This time it is a simple, mainly linear conversation flow, with the only difference that we sometimes branch off to remind the user to preheat the oven, or to take things out of the oven. This way we can potentially save the user time, or at least offer some sort of convenience that he doesn’t get from a “classic” recipe on paper.


The automatic speech recognition (see below) works in the same manner as I have shown in my recent blog post. Have a look there to read up on the technology behind it and to find out how the RecognizeSpeech class works. In a nutshell, we record 2 seconds of audio locally, then send it over a REST API and wait for the service to turn it into text. While this is convenient from a developer’s side (not having to write a lot of code and being able to use a service), the downside is reduced usability for the user: it introduces roughly 1-2 seconds of lag to send the data, process it and receive the results. Ideally, I think the ASR should take place on the mobile device itself to introduce as little lag as possible.
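The RecognizeSpeech class is explained in the earlier post; the sketch below only illustrates the shape of that round trip. The injected `transport` callable and the JSON response format are assumptions for illustration, not the real service API:

```python
import json
import wave

def record_wav(path, seconds, rate=16000):
    """Write `seconds` of silence as a stand-in for real microphone input."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"\x00\x00" * rate * seconds)

def recognize_speech(path, seconds, transport=None):
    """Record a short clip, send it to an ASR endpoint, return the transcript.

    `transport` is any callable taking the raw audio bytes and returning the
    service's JSON reply; in the prototype it would wrap a requests.post().
    """
    record_wav(path, seconds)          # 1. record locally
    with open(path, "rb") as f:
        audio = f.read()
    if transport is None:
        return ""                      # no ASR service configured in this sketch
    reply = transport(audio)           # 2. REST round trip (the 1-2 s of lag)
    return json.loads(reply)["text"]   # 3. extract the recognized text
```

A fake transport makes the lag problem easy to see: every call pays for the recording time plus the network round trip before the app can react.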

#abbreviated main thread

self.states = ["people","ingredients","step1","step2","step3","step4","step5","step6","end"]
while not thread_stop_event.isSet():
    socketio.emit("showmic") # show the microphone symbol in the frontend signalling that the app is listening
    text = recognize.RecognizeSpeech('myspeech.wav', 2) #the speech recognition is hidden here :)
    socketio.emit("hidemic") # hide the mic, signaling that we are processing the request

    if self.state == "people":
        if intro_not_played:
            self.play(self.texts[self.state]) # read the step out loud (self.texts: assumed mapping from state to recipe text)
            intro_not_played = False
        persons = re.findall(r"\d+", text)
        if len(persons) != 0:
            self.state = self.states[self.states.index(self.state)+1]
            intro_not_played = True # announce the next step on the following pass
    elif self.state == "ingredients":
        if intro_not_played:
            self.play(self.texts[self.state])
            intro_not_played = False
        if "weiter" in text:
            self.state = self.states[self.states.index(self.state)+1]
            intro_not_played = True
        elif "zurück" in text:
            self.state = self.states[self.states.index(self.state)-1]
            intro_not_played = True
        elif "wiederholen" in text:
            intro_not_played = True #repeat the loop

As we see above, depending on the state that we are in, we play the right TTS audio to the user and then progress into the next state. Each step also listens for whether the user wants to go forward (weiter), backward (zurück) or repeat the step (wiederholen), because he might have misheard.
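Since every step handles the same three navigation words, the keyword matching could be factored into a small helper. This is a refactoring suggestion rather than code from the prototype:

```python
def navigate(states, state, text):
    """Map a recognized utterance to (next_state, replay_intro).

    "weiter" advances, "zurück" goes back and "wiederholen" stays on the
    current step but replays its intro text.
    """
    i = states.index(state)
    if "weiter" in text and i < len(states) - 1:
        return states[i + 1], True
    if "zurück" in text and i > 0:
        return states[i - 1], True
    if "wiederholen" in text:
        return state, True
    return state, False  # unrecognized input: stay put and keep listening
```

Each loop iteration then reduces to `self.state, intro_not_played = navigate(self.states, self.state, text)`.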

The first prototype solution shown above is not perfect though, as we are not using a wake-up word. Instead we periodically offer the user a chance to give us input. The main drawback is that when the user speaks at a moment when it is not expected of him, we might not record it, and consequently be unable to react to his input. Additionally, sending audio back and forth to the cloud creates a rather sluggish experience. I would be much happier to have the ASR part on the client directly, especially since we are mainly listening for 3-4 navigational words.

TTS with Slowsoft

Finally, you may have noticed that there is a play method in the code above. That's where the TTS is hidden. As you see below, we first show the speaker symbol in the application, signalling that now is the time to listen. We then send the text to Slowsoft via their API and, in our case, define the dialect "gsw-CHE-gr" as well as the speed and pitch of the output.

#play function
    def play(self,text):
        socketio.emit("showspeaker") # show the speaker symbol, signalling that it's time to listen
        headers = {'Accept': 'audio/wav','Content-Type': 'application/json', "auth": "xxxxxx"}
        resp = requests.post('', headers=headers, data=json.dumps({"text":text,"voiceorlang":"gsw-CHE-gr","speed":100,"pitch":100}))
        with open("response.wav", "wb") as f:
            f.write(resp.content) # save the synthesized audio
        os.system("mplayer response.wav") # play it back

The text snippets are simply parts of the recipe. I tried to cut them into digestible parts, where each part contains roughly one action. Here having an already structured recipe in the open recipe format helps a lot, because we don't need to do any manual processing before sending the data.
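To illustrate why the structured format helps, here is what a recipe might look like once loaded into Python, together with the portion scaling discussed in part 2. The field names and recipe content are illustrative, not the exact open recipe format schema:

```python
# a recipe broken into one-action snippets, as loaded from a YAML file
recipe = {
    "name": "Zopf",
    "portions": 4,
    "ingredients": [
        {"amount": 500, "unit": "g", "name": "Mehl"},
        {"amount": 300, "unit": "ml", "name": "Milch"},
    ],
    "steps": [
        "Mehl und Salz in einer Schüssel mischen.",
        "Milch dazugiessen und zu einem Teig kneten.",
    ],
}

def scale_ingredients(recipe, people):
    """Multiply every ingredient amount for a different number of portions."""
    factor = people / recipe["portions"]
    return [dict(ing, amount=ing["amount"] * factor)
            for ing in recipe["ingredients"]]
```

Each entry of `steps` can be handed to the play function as-is, and `scale_ingredients(recipe, 8)` answers the "how many people?" question without any text parsing.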


We took our prototype for a spin and realized in our experiments that a wake-up word is a must. We simply couldn’t time our input to the moments when the app was listening, which was a big pain for the user experience.

I know that nowadays smart speakers like Alexa or Google Home provide their own wake-up word, but we wanted to have our own. Is that even possible? Well, you have different options here. You could train a deep network from scratch with TensorFlow Lite, or create your own model by following along this tutorial on how to create a simple speech recognition model with TensorFlow. Yet the main drawback is that you might need a lot (and I mean A LOT, as in 65 thousand samples) of audio samples. That is not really applicable for most users.


Luckily, you can also take an existing deep network and train it to understand YOUR wake-up words. That means it will not generalize as well to other persons, but maybe that is not much of a problem. You might as well think of it as a feature: your assistant only listens to you and not your kids :). A solution of this form exists under the name snowboy, where a couple of ex-Googlers created a startup that lets you create your own wake-up words and then download those models. That is exactly what I did for this prototype. All you need to do is go to the snowboy website and provide three samples of your wake-up word. It then computes a model that you can download. You can also use their REST API to do that; the idea here is that you can include this phase directly in your application, making it very convenient for a user to set up his own wake-up word.

#wakeup class 

import snowboydecoder
import sys
import signal

class Wakeup():
    def __init__(self):
        self.detector = snowboydecoder.HotwordDetector("betty.pmdl", sensitivity=0.5)
        self.interrupted = False

    def signal_handler(self, signal, frame):
        self.interrupted = True

    def interrupt_callback(self):
        return self.interrupted

    def custom_callback(self):
        self.interrupted = True
        return True

    def wakeup(self):
        self.interrupted = False
        self.detector.start(detected_callback=self.custom_callback, interrupt_check=self.interrupt_callback, sleep_time=0.03)
        self.detector.terminate() # free the audio stream once the hotword was heard
        return self.interrupted

All it needs then is a Wakeup instance that you can use from any other app you include it in. In the code above you’ll notice that we load our downloaded model there (“betty.pmdl”), and the rest of the methods are there to interrupt the wakeup method once we hear the wake-up word.

We then included this class in our main application as a blocking call: whenever we hit the part where we are supposed to listen for the wake-up word, we remain there until we hear the word:

#integration into main app
            wakeup.Wakeup().wakeup() # blocks until the wake-up word is spoken
            text = recognize.RecognizeSpeech('myspeech.wav', 2)

So you noticed in the code above that we included the wakeup.Wakeup() call, which now waits until the user has spoken the word; only after that do we record 2 seconds of audio and send it off for processing. In our testing that improved the user experience tremendously. You also see that we signal the listening state to the user via graphical cues: we show a little ear when the app is listening for the wake-up word, and a microphone when the app is ready and listening for your commands.


So, finally, time to show you the tech demo. It gives you an idea of how such an app might work and hopefully also gives you a starting point for new ideas and other improvements. While it's definitely not perfect, it does its job and allows me to cook hands-free :). Mission accomplished!

What's next?

In the first part of this blog post series we saw quite an extensive overview of the current capabilities of TTS systems. While we found an abundance of options on the commercial side, sadly we didn’t find the same amount of sophisticated projects on the open source side. I hope this imbalance evens out in the future, especially with the strong IoT movement and the need to have these kinds of technologies as an underlying stack for all kinds of smart assistant projects. Here is an example of a Kickstarter project for a small speaker with built-in open source ASR and TTS.

In the second blog post, we discussed the user experience of audio-centered assistants. We realized that going audio-only might not always provide the best user experience, especially when the user is presented with a number of alternatives that he has to choose from. This was especially the case in the exploration phase, where you have to select a recipe, and in the cooking phase, where the user needs to go through the list of ingredients. Given that the Alexas, HomePods and Google Home smart boxes are on their way to taking over the audio-based home assistant area, I think their usage will only make sense in a number of domains that are very simple to navigate, as in “Alexa, play me something from Jamiroquai”. In more difficult domains, such as cooking, mobile phones might be an interesting alternative, especially since they are much more portable (they are mobile after all), offer a screen and almost every person already has one.

Finally, in the last part of the series I have shown you how to integrate a number of solutions together (a cloud service for ASR, Slowsoft for TTS, snowboy for the wake-up word, and Flask with Socket.IO for prototyping) to create a nice working prototype of a hands-free cooking assistant. I have uploaded the code on GitHub, so feel free to play around with it and sketch your own ideas. For us, a next step could be taking the prototype to the next level by really building it as an app for iPhone or Android, and especially improving the speed of the ASR. Here we might use the existing Core ML or TensorFlow Lite frameworks, or check how well we could use the built-in ASR capabilities of the devices. As a final key takeaway, we realized that building a hands-free recipe assistant is definitely something different than simply having the mobile phone read the recipe out loud for you.

As always I am looking forward to your comments and insights and hope to update you on our little project soon.

Why I travel 4h+ every day to work and back just to write my graduation thesis Mon, 11 Jun 2018 00:00:00 +0200 I am 24 years old and in the 8th semester of my studies in “Business Information Systems”.

This May I started working at Liip. A lot of my fellow students and friends ask me: why would you take that route back and forth every day if you could just do it somewhere nearby? 🤷
This is actually a very reasonable and good question, which will hopefully be answered by the end of this post. (Spoiler: no, the answer is not the juicy Swiss wage everyone thinks of 🤑)

My daily route


Figure 1: Here you can see the route I’d be taking if I were walking. Sure, let me swim across the lake real quick. 🏊‍
The route is about 80 km (about 50 miles) if I were driving by car. Instead of driving, I take the train; I’ll elaborate on why later in this post.
Long story short: it takes me 3 transfers to reach the Liip office in Zurich and about 2h one way, which makes a total of about 4h every day to the office and back.
A lot of time, isn’t it? Read along.

How I got to Liip and why I chose to send an application to this company

Side note: I already knew I wanted to do my graduation thesis at a company. It was also clear to me that I wanted to go in the web development direction. The question was just: which company do I want to do it at?

I already knew about Liip through their open source contributions, such as their PHP one-line installation or their LiipImagineBundle. There are plenty more open source contributions; I just picked two which I had used in the past myself and still use from time to time.
Further, I did some research and noticed the awards the company has won. One example: Liip ranked in the top 5 medium-sized companies.
This was enough to convince me to take the opportunity and write a mail asking whether it would be possible to write my thesis at this company.
After a little back-and-forth mailing, video calling and explaining how writing a graduation thesis at a company works, I was invited to come by for an interview. So I went for it, and it all went well. 🎉

To put it in a nutshell: I don’t know of any other company with open source contributions which I have been using. Other than that, it’d be hard to beat the award count. Also, I’d be lying if I said Zurich didn’t look cool on my CV (right? :D).

"I think it is possible for ordinary people to choose to be extraordinary." - Elon Musk

What my topic is and why it influenced my choice of this company

I was told beforehand that if I worked at Liip, it would most likely be on an open source bundle (which I find cool, as it supports open source) for the eCommerce framework Sylius.
ECommerce is getting more and more important, as more and more people shop online instead of walking to the shops. I want to use this as an opportunity to gain more knowledge in this area.
More about my topic will follow in another blog post. :)

Why I am taking the train and how I am compensating for the travel time

As mentioned earlier, I take the train every time I go to the Zurich office, which takes about 2h one way. This is usually the point where people roll their eyes, and the reason for them to reject such an opportunity.
First of all, what do people usually do on the train? I mostly see people hanging on their phones, probably checking out the same posts on their social media over and over again until they eventually arrive. I told myself I don’t want to do this, so instead I read books.
Yeah, correct. I think for people who get distracted from reading books easily, this is a good opportunity to (kind of) force yourself to read. 📖
Another big option: since I work on a laptop, I can already start working on the train, so I work less in the office and go back home earlier. 💻
Every time I get on the train I tell myself: why pull out your phone if you can be productive instead?
Not only are you using your time on the way when you take the train, but you also help the environment. 🌍

An overview of the advantages of taking the train over the car:

🚗                                   🚆
drive slowly or the fines            no possibility to get
make you poor                        speeding tickets
can't read books                     can read books
can't work remotely                  can work remotely
must concentrate on the road         can relax in the train
don't care about the environment     care about the environment

TL;DR: I chose Liip because of their open source contributions, and because I wanted to make a contribution too. I use the train time to read books and to start working on the train. There is no time wasted when I am reading or working on my laptop. Also, you help the environment by taking the train over the car.

Recipe Assistant Prototype with ASR and TTS on Socket.IO - Part 2 UX Workshop Mon, 04 Jun 2018 00:00:00 +0200 Welcome to part two of three in our mini blog post series on how to build a recipe assistant with automatic speech recognition and text to speech to deliver a hands-free cooking experience. In the last blog post we provided you with an exhaustive hands-on text to speech (TTS) market review; now it's time to put the user in the center.

Workshop: Designing a user experience without a screen

Although the screen used to dominate the digital world, thanks to the rapid improvement of technologies more options are emerging. Most mobile users have used or heard of Siri from Apple iOS or Amazon Echo, and almost 60 million Americans apparently already own a smart speaker. Until recently still unheard of, smart voice-based assistants are nowadays changing our life quickly. This means that user experience has to think beyond screen-based interfaces. Actually, UX has always been about a holistic experience in the context the user is in; with speech recognition and speech as the main input source, UX is needed to prevent potential usability issues in the interaction.

Yuri participated in our innoday workshop as a UX designer; her goal was to help the team define a recipe assistant with ASR and TTS that helps the user cook recipes in the kitchen without using his hands, and is enjoyable to use. In this blog post Yuri helped me write down our UX workshop steps.


We started off with a brainstorming of our long term and short term vision, and then wrote down our ideas and thoughts on post-its. We then grouped the ideas into three organically emerging topics: Business, Technology and User needs. I took the liberty to highlight some of the aspects that came to our minds:

  • User
    • Privacy: Users might not want to have their voice samples saved on some Google server. If you own an Android phone, Google actually lets you listen to all the samples it has stored from you.
    • Alexa vs. Mobile, or is audio only enough?: We spent a lot of time discussing whether a cookbook could work in an audio-only mode. We were aware that there is, for example, an Alexa Skill from Chefkoch, but somehow the low rating made us suspect that the user might need some minimal visual orientation. An app might be able to show you the ingredients or some visual clues on what to do in a certain step, and who doesn't like these delicious pictures in recipes that lure you into giving a recipe a try?
    • Conversational Flow: An interesting aspect that is easy to overlook was how to design the conversational flow in order to allow the user enough flexibility when going through each step of the recipe, while also not being too rigid.
    • Wakeup Word: The so called wakeup word is a crucial part of every ASR system, which triggers the start of the recording. I've written about it in a recent blog post.
    • Assistant Mode: Working with audio also gives interesting opportunities for features that are rather unusual on normal apps. We thought of a spoken audio alert, when the app notifies you to take the food from the oven. Something that might feel very helpful, or very annoying, depending on how it is solved.
  • Technology
    • Structured Data: Interestingly, we soon realized that breaking down a cooking process means that we need to structure our data better than as simple text. A simple example is multiplying the ingredients by the number of people. An interesting project in this area is the open recipe format, which defines a YAML format to hold all the necessary data in a structured way.
    • Lag and Usability: Combining TTS with ASR poses an interesting opportunity to combine different solutions in one product, but also poses the problem of time lags when two different cloud based systems have to work together.
  • Business
    • Tech and Cooking: Maybe a silly idea, but we definitely thought that as men it would feel much cooler to use a tech gadget to cook the meal, instead of a boring cookbook.

User journey

From there we took on the question: “How might we design an assistant that allows for cooking without looking at recipe on the screen several times, since the users’ hands and eyes are busy with cooking.”

We sketched the user journey as a full spectrum of activities that go beyond just cooking, and can be described as:

  • Awareness of the recipes and its interface on App or Web
  • Shopping ingredients according to selected recipe
  • Cooking
  • Eating
  • After eating

Due to the limited time of an inno-day, we decided to focus on the cooking phase only, while acknowledging that this phase is definitely part of a much bigger user journey, where some parts, such as exploration, might be hard to tackle with an audio-only assistant. We did though explore the cooking step of the journey and broke it down into its own sub-steps. For example:

  • Cooking
    • Preparation
    • Select intended Recipe to cook
    • Select number of portions to cook
    • Check ingredients if the user has them all ready
  • Progress
    • Prepare ingredients
    • The actual cooking (boiling, baking, etc)
    • Seasoning and garnishing
    • Setting on a table

This meant that our cooking assistant needs to inform the user when each new sub-step starts and introduce the next steps in an easy, unobtrusive way. It also has to track the multiple starts and stops of small actions during cooking, for example to remind the user to preheat the baking oven at an early point in time, when the user might not yet be thinking of that future step (see below).


User experience with a screen vs. no screen

Although we were at first keen on building an audio-only interface, we found that a quick visual overview helps make the process faster and easier. For example, an overview of ingredients can be viewed at a glance on the mobile screen without listening to every single ingredient from the app. As a result we decided that a combination of minimal screen output and voice output would smooth out potential usability problems.

Since the user needs to navigate with his voice using easy input options like “back”, “stop”, “forward” and “repeat”, we decided to also show the step the user is currently in on the screen. This feedback helps the user to recover from small errors or just orient himself more easily.

During the UX prototyping phase, we also realised that we should visually highlight the moments when the user is expected to speak and when he is expected to listen. That's why, immediately after a question from the app, we show an icon with a microphone meaning “Please tell me your answer!”. In a similar way, we also show an audio icon when we want the user to listen carefully. Finally, since we didn’t want the assistant to listen permanently but only after a so-called “wake-up word”, we show a little ear icon signalling that the assistant is now listening for this wake-up word.

While those micro-interactions and visual cues helped us streamline the user experience, we still think these areas are central to the user experience and should be improved in a next iteration.

Conclusion and what's next

I enjoyed that instead of starting to write code right away, we first sat together and sketched out the concept by writing sticky notes with the ideas and comments that came to our minds. I enjoyed having a mixed group, with UX designers, developers, data scientists and project owners sitting at one table. Although our ambitious goal for the day was to deliver a prototype able to read recipes to the user, we ran out of time and I couldn’t code the prototype on that day. In exchange, I think we gathered very valuable insights into which user experiences do and don’t work without a screen. We realized that going totally without a screen is much harder than it seems. It is crucial for the user experience that the user has enough orientation to know where he is in the process, so that he doesn't feel lost or confused.

In the third and final blog post of this mini series I will provide you with the details on how to write a simple Flask and Socket.IO based prototype that combines automatic speech recognition, text to speech and wake-up word detection to create a hands-free cooking experience.

Growing Like Crazy in a Few Hours: Welcome to Startup Weekend! Sun, 03 Jun 2018 00:00:00 +0200 Well, lucky you, I know a place where people are employees on Friday and entrepreneurs on Sunday. Embark on the journey, it’s going to be a lot of fun (and pleasure)!



Everything begins with an eager desire [PROBLEM]. Shared with others, it expands throughout the people believing in it.

While pitching, you clarify your thoughts and share your passion for it, which, by contagion, infects others [ONE MINUTE PITCH].


And because it’s always better with other people, you share energy with your teammates, gather together when it’s hard, do high-fives to relaunch after the low moments [TEAM].

You take tough decisions together and accept them, whatever they are, committed to being a great partner at every moment.


No one will tell you what to do there. You are free to stay or go, and also free to say “no”!

You are bold and do things you’ve never done [EXECUTION]. And it feels amazing.

Organizers and coaches are here to support you. They won’t tell you what to do or how to do it; they will just ask questions and help you reflect. And it will be your decisions to make, and later your consequences to deal with.

Most of the time, they will just tell you to continue. Don’t worry, they’ve got you covered!


But the idea itself is just an idea, so you go out and search for people who would be ready to pay for it [CUSTOMER VALIDATION]. You don’t need to deliver your service right away; fake it until you make it, as they say [BUSINESS MODEL]!

You may need to change position at some point [PIVOT]. Then, just continue!

At the very end, you wrap it up to present it to the world [FINAL PITCH].

Later, you look back and realize how much you grew, the new person you became in just a matter of hours. Inside of you something changed, and it cannot be removed. Congrats, you became an entrepreneur!

Startup Weekend Zürich, next edition on October 26th, 2018.

Photograph by Jürg Stuker. Startup Weekend Canvas: original by Jozué Morales, translated in English and adjusted with the help of the community by Léo Davesne.

Another post about this fantastic weekend on Namics' blog.

Why and how we use Xamarin Fri, 01 Jun 2018 00:00:00 +0200 When we start a new project, we always ask ourselves if we should choose Xamarin over a full native solution. I wanted to reflect on past projects and see if it was really worth using Xamarin.

But how do you compare projects? I decided to use line counting. It can seem obvious or simplistic, but the number of shared lines of code easily shows how much work has been done once instead of twice. I took the two most recent Xamarin projects that we worked on: Ticketcorner Ski and together.

I used the following method:

  • Use the well-known cloc tool to count the number of lines in a project.
  • Count only C# files.
    • Other types such as json configuration files or API response mocks in unit tests do not matter.
  • Make an exception with Android layout files.
    • Our iOS layout is all done in Auto Layout code and we don't use Xcode Interface Builder.
    • To have a fair comparison, I included the Android XML files in the line count.
  • Do not count auto-generated files.
  • Do not count blank lines and comments.
  • Other tools like Fastlane are also shared, but are not taken into account here.

If you want to try with one of your own projects, here are the commands I used for the C# files:

cloc --include-lang="C#" --not-match-f="(([Dd]esigner)|(AssemblyInfo))\.cs" .

For the Android XML layouts, I used:

cloc  --include-lang="xml" --force-lang="xml",axml Together.Android/Resources/layout

Here is what I found:

Project          | Android | iOS | Shared
Ticketcorner Ski | 31%     | 31% | 38%
together         | 42%     | 30% | 28%

We can see that on those projects, an average of one third of the code can be shared. I was pretty impressed to see that for Ticketcorner Ski we have the same number of lines on the two platforms, and pleasantly surprised to see that the project shares almost 40% of its code.
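As a back-of-the-envelope sketch of how such a table is derived, each bucket of cloc output is simply expressed as a rounded share of the total line count (the counts below are illustrative, not the real project numbers):

```python
def share_percentages(android: int, ios: int, shared: int) -> dict:
    """Express cloc line counts as rounded percentage shares of the total."""
    total = android + ios + shared
    return {name: round(100 * count / total)
            for name, count in [("Android", android),
                                ("iOS", ios),
                                ("Shared", shared)]}

# Illustrative counts roughly matching the Ticketcorner Ski row above.
shares = share_percentages(31_000, 31_000, 38_000)
# → {'Android': 31, 'iOS': 31, 'Shared': 38}
```

Because of rounding, the three shares may not always add up to exactly 100%.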

In a mobile app, most of the unit tests target the business logic, which is exactly what is shared with Xamarin: business logic and its unit tests are written only once. Most libraries not directly related to business logic are also shared: REST client, database client, etc.

The code that is not shared is mostly UI and interaction code, but also platform-specific code: how to access the camera, how to handle push notifications, how to securely store user credentials according to each platform's guidelines.

It would not be fair to conclude that doing those projects natively would have been 30% more expensive. The shared code sometimes has to take into account that it will be used on two different platforms, and it ends up more generic than it would be if written twice.

So... how do you choose one or the other?

My goal with this blog post is not to start a flame war on whether Xamarin is good or bad. I have shown here that for those projects, Xamarin was the right choice. I want to share a few things we think about when we have to make a decision. Note that we use Xamarin.iOS and Xamarin.Android, but not Xamarin.Forms.

  • Does the application contain a lot of business logic, or is it more UI-based?
On one Xamarin project we worked on in the past year, a specific (and complex) use case was overlooked by the client, which resulted in paying users being pretty unhappy. We were very pleased to be able to fix the problem once, and to write the related unit tests once too.
    • As a counterexample, for the Zürich Zoo app, most of our job was writing UX/UI code. The business logic is solely doing GET requests to a backend.
  • Do you plan to use external libraries/SDKs?
    • Xamarin is pretty good at using .jar files on Android.
    • Native libraries on iOS have to be processed manually, which can be tedious. It is also hard to use a library packaged with CocoaPods that depends on many other pods.
    • For both platforms, we encountered closed-source tools that are not that easy to convert. As an example, we could use the Datatrans SDK, but not without some trial and error.
    • There are, however, Xamarin libraries that can replace what you are used to when developing on both platforms. We replaced Picasso on Android and Kingfisher on iOS with FFImageLoading on Xamarin. This library has the same API on both platforms, which makes it easy to use.
  • Do you plan to use platform-specific features?
    • Xamarin is able to provide access to every platform feature, and it works well. They are also known to update the Xamarin SDKs as soon as new iOS/Android versions are announced.
    • For Urban Connect however, the most important part of the app is using Bluetooth Low Energy to connect to bike locks. Even if Xamarin is able to do it too, it was the right decision to remove this extra layer and code everything natively.
  • Tooling, state of the platform ecosystems:
    • In the mobile world, things move really fast:
      • Microsoft pushes really hard for developers to adopt Xamarin, for example with App Center, the new solution to build, test, release, and monitor apps. But Visual Studio for Mac is still really buggy and slow.
      • Google added first-class support for Kotlin, has an awesome IDE and pushes mobile development with platforms like Firebase or Android Jetpack.
      • Apple follows along, but still somehow fails to improve Xcode and its tooling in a meaningful manner.
    • Choices made one day will certainly not be valid one year later.
  • Personal preferences:
    • Inside Liip there are very divergent opinions about Xamarin. We always choose the right tool for the job, and having someone efficient and motivated about a tool is important too.

I hope I was able to share my view on why and how we use Xamarin here at Liip. I personally enjoy working on both Xamarin and native projects. Have a look at together and Ticketcorner Ski and tell us what you think!

Repair Café - Repair Instead of Buying New Thu, 31 May 2018 00:00:00 +0200 Success through sustainability

Off to the Repair Café - an event where visitors bring broken products and repair them together with professionals. From Romandie to eastern Switzerland and from Ticino to Basel, Repair Cafés help people treat consumer goods more sustainably. Repair instead of throwing away is the motto of the Repair Café events. Started as an initiative of the Stiftung für Konsumentenschutz (the Swiss consumer protection foundation), there are now already 87 cafés and restaurants offering repair days. This number is growing steadily: in the last six months alone, 16 new repair days have been added. Over the past year, the Repair Café events saved more than four and a half tonnes of material from ending up in the bin. The development is promising, but how do people find out when and where an event takes place?

How does it work?

On the website you will find all the important information about dates, repair hours and nearby cafés. Visitors can use tools free of charge, and common spare parts can be bought on site. As a provider, you can also complete the registration process via the website.


The foundation's goal was a single website for all information: one that simplifies the collaboration between the Stiftung für Konsumentenschutz and the individual cafés and increases the visibility of the events. The website had to be built in a short time and on a tight budget. With these requirements we started the project.

Agile development methods and frequent exchanges made the implementation possible. The website was built with OctoberCMS to guarantee flexibility and easy handling on the foundation's side. The design is playful and the information architecture clear.

Collaboration - purpose over profit

The Stiftung für Konsumentenschutz advocates for consumers' interests and thus for sustainability. Exactly the right project for a collaboration, since we have already received several awards for our sustainability. Accordingly, the collaboration was marked by trust and the shared goal of making the world more environmentally conscious.

“Zero waste” is on everyone's lips, yet many of our devices are still replaced every year. This is where the Repair Cafés come in. See for yourself and drop by the next event near you.

The role of CKAN in our Open Data Projects Tue, 29 May 2018 00:00:00 +0200 CKAN's Main Goal and Key Features

CKAN is an open source data management system whose main goal is to provide a managed data catalog for Open Data. It is mainly used by public institutions and governments. At Liip we use CKAN mainly to help governments provide their data catalog and publish data to the public in an accessible fashion. Part of our work is supporting data owners in getting their data published in the required data format. We do this by providing interfaces and usable standards that enhance the user experience on the portal and make it easier to access, read and process the data.



Out of the box, CKAN can be used to publish and manage different types of datasets, which can be grouped by organizations and topics. Each dataset can contain resources, which themselves consist of files of different formats or links to other data sources. The metadata schema can be configured to represent the standard you need, but CKAN already includes a simple and useful metadata schema to get you started. The data is saved in a PostgreSQL database by default and indexed using Solr.

Powerful Action API

CKAN ships with an API that can be used to browse the metadata catalog and run advanced queries on the metadata. With authorization, the API can also be used to add, import and update data with straightforward requests.


CKAN also includes a range of CLI commands that can be used to process data or execute various tasks. These can be very useful, e.g. to manage, automate or schedule backend jobs.


CKAN offers the functionality to configure previews for a number of file types, such as tabular data (e.g. CSV, XLS), text data (e.g. TXT), images or PDFs. That way, interested citizens can get a quick look at the data itself without having to download it first and open it in local software merely to get a better idea of what the data looks like.

Data preview on the portal of Statistik Stadt Zürich


While CKAN itself acts as a CMS for data, it really shines when you make use of its extensibility and configure and extend it to fit your business needs and requirements. There is already a wide-ranging list of plugins developed for CKAN, covering a broad range of additional features and making it easier to adjust CKAN to your use cases and look and feel. A collection of most of the plugins can be found in the CKAN extensions list and on GitHub.

At Liip we also help maintain a couple of CKAN's plugins. The most important ones that we use in production for our customers are:


The ckanext-harvest plugin offers the possibility to export and import data. First of all, it enables you to exchange data between portals that both run CKAN.

Furthermore, we use this plugin to harvest data from different data sources on a regular basis. We use two different types of harvesters. Our DCAT harvester consumes XML/RDF endpoints in the DCAT-AP Switzerland format, which is enforced on the Swiss portal.

The geocat harvester consumes data from geocat. As that data is in the ISO-19139_che format (the Swiss version of ISO-19139), the harvester converts it to the DCAT-AP Switzerland format and imports it.

Another feature of this plugin we use is our DCAT-AP endpoint, which allows other portals to harvest our data and also serves as an example for organizations that want to build an export that can be harvested by us.

How our Harvesters interact with the different Portals


The plugin ckanext-datastore stores the actual tabular data (as opposed to 'just' the metadata) in a separate database. With it, we are able to offer an easy-to-use API on top of the standard CKAN API to query the data and process it further. It also provides basic functionality on the resource detail page to display the data in simple graphs.

The datastore is most interesting for data analysts who want to build apps based on the data or analyze it on a deeper level. This is an API example for the Freibäder dataset on the portal of Statistik Stadt Zürich.
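A hypothetical query could look like the sketch below; `datastore_search` with its `resource_id`, `limit` and JSON-encoded `filters` parameters is part of CKAN's documented API, while the base URL and resource id here are placeholders (the real id would be copied from a resource detail page):

```python
import json
import urllib.parse

def datastore_query(base: str, resource_id: str, limit: int = 5, **filters) -> str:
    """Build a datastore_search URL; fetching it returns the matching rows
    under result.records in the JSON response."""
    params = {"resource_id": resource_id, "limit": str(limit)}
    if filters:
        # Equality filters are passed as a JSON object, e.g. {"Jahr": "2017"}
        params["filters"] = json.dumps(filters)
    return (f"{base}/api/3/action/datastore_search?"
            + urllib.parse.urlencode(params))

url = datastore_query("", "some-resource-id",
                      limit=2, Jahr="2017")
```

For more complex questions, the companion `datastore_search_sql` action accepts full SQL queries against the same tables.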


We use ckanext-showcase to give data analysts a platform by displaying what has been built on top of the data the portal offers. There you can find a good overview of how the data can be presented in meaningful ways as statistics, used as a source in narrated videos, or even turned into apps for an easier everyday life. For example, you can browse through the showcases on the portal of the City of Zurich.


ckanext-xloader is a fairly new plugin that we were able to adopt for the City of Zurich portal. It enables us to automatically and asynchronously load data into the datastore, so the data is available right after it has been harvested.

CKAN Community

The CKAN core and a number of its major plugins are maintained by the CKAN core team. The developers are spread around the globe, partly working in companies that run their own open data portals. The community that contributes to CKAN and its plugins is always open to developers who would like to help with suggestions, report issues or provide pull requests on GitHub. It is a strong community that helps beginners, no matter their background. The ckan-dev mailing list provides help with developing CKAN and is also the platform for discussions and ideas about CKAN.

Roadmap and most recent Features

Since the major release 2.7, CKAN requires Redis for its new system of asynchronous background jobs. This helps CKAN to be more performant and reliable. Just a few weeks ago, the new major release 2.8 came out. A lot of work in this release went into driving CKAN forward by updating to a newer version of Bootstrap and deprecating old features that were holding back CKAN's progress.

Another rather new feature is the data tables feature for tabular data. It is intended to help the data owner describe the actual data in more detail, explaining the values and how they were gathered or calculated.

CKAN's roadmap holds many interesting features. One example is the further development of the CKAN Data Explorer, a base component of CKAN that allows you to combine data from any dataset in the DataStore of a CKAN instance and analyze it.


It is important to us to support the Open Data movement, as we see value in publishing governmental data to the public. CKAN helps us support this cause: we work with several organizations to publish their data and advise our customers while developing and improving their portals together.

Personally, I am happy to be part of the CKAN community, which has always been very helpful and supportive. The cause of helping different organizations make their data public, together with the respectful CKAN community, makes it a lot of fun to contribute to the code as well as the community.

Open Data on
Recipe Assistant Prototype with Automatic Speech Recognition (ASR) and Text to Speech (TTS) on Socket.IO - Part 1 TTS Market Overview Mon, 28 May 2018 00:00:00 +0200 Intro

In one of our monthly innodays, where we try out new technologies and different approaches to old problems, we had the idea to collaborate with another company: Slowsoft, a provider of text to speech (TTS) solutions. To my knowledge they are the only ones able to generate Swiss German speech synthesis in various Swiss accents. We thought it would be a cool idea to combine it with our existing automatic speech recognition (ASR) expertise and build a cooking assistant that you can operate completely hands-free. So no more touching your phone with your dirty fingers only to check again how many eggs you need for that cake. We decided it would be great to go with some recipes from a famous Swiss cookbook provider.


Generally, there are quite a few text to speech solutions out there on the market. In the first of two blog posts I would like to give you a short overview of the available options. In the second blog post I will then describe the insights we arrived at in the UX workshop and how we combined them with the solution from Slowsoft in a quick-and-dirty web-app prototype built on Socket.IO and Flask.

But first let us get an overview over existing text to speech (TTS) solutions. To showcase the performance of existing SaaS solutions I've chosen a random recipe from Betty Bossi and had it read by them:

Ofen auf 220 Grad vorheizen. Broccoli mit dem Strunk in ca. 1 1/2 cm dicke Scheiben schneiden, auf einem mit Backpapier belegten Blech verteilen. Öl darüberträufeln, salzen.
Backen: ca. 15 Min. in der Mitte des Ofens.
Essig, Öl und Dattelsirup verrühren, Schnittlauch grob schneiden, beigeben, Vinaigrette würzen.
Broccoli aus dem Ofen nehmen. Einige Chips mit den Edamame auf dem Broccoli verteilen. Vinaigrette darüberträufeln. Restliche Chips dazu servieren. 

But first: How does TTS work?

The classical way works like this: you record dozens of hours of raw speaker material in a professional studio; depending on your use case, the material can range from navigation instructions to jokes. The next trick is called "unit selection": the recorded speech is sliced into a large number (10k - 500k) of elementary components called phones, so that they can be recombined into new words the speaker never recorded. The recombination of these components is not an easy task, because the characteristics of a phone depend on the neighboring phonemes and on the accentuation or prosody, which in turn depend a lot on the context. The problem is to find the combination of units that satisfies the input text and the accentuation, and that can be joined together without audible glitches. The raw input text is first translated into a phonetic transcription, which then drives the selection of the right units from the database; these are finally concatenated into a waveform. Below is a great example from Apple's Siri engineering team showing how the slicing takes place.


Using the Viterbi algorithm, the units are then concatenated in the way that produces the lowest total "cost", consisting of the cost of selecting a unit and the cost of concatenating two units together. Below is a great conceptual graphic from Apple's engineering blog showing this cost estimation.
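To make the idea concrete, here is a toy sketch of that search; the unit names and cost functions are made up for illustration, whereas real systems score acoustic and prosodic features:

```python
def select_units(phones, candidates, target_cost, join_cost):
    """Viterbi search: pick one candidate unit per target phone so that the
    summed target cost (fit to the phone) plus join cost (smoothness of the
    concatenation) is minimal."""
    # best[unit] = (cost of the cheapest path ending in this unit, that path)
    best = {u: (target_cost(phones[0], u), [u]) for u in candidates[phones[0]]}
    for phone in phones[1:]:
        new_best = {}
        for u in candidates[phone]:
            # Cheapest predecessor once the join cost to u is included.
            prev, (cost, path) = min(best.items(),
                                     key=lambda kv: kv[1][0] + join_cost(kv[0], u))
            new_best[u] = (cost + join_cost(prev, u) + target_cost(phone, u),
                           path + [u])
        best = new_best
    return min(best.values(), key=lambda cp: cp[0])

# Two candidate recordings per phone; the "*1" units fit better (cost 0 vs 1).
units = {"b": ["b1", "b2"], "o": ["o1", "o2"]}
cost, path = select_units(["b", "o"], units,
                          target_cost=lambda ph, u: 0 if u.endswith("1") else 1,
                          join_cost=lambda u, v: 0)
# → cost 0, path ['b1', 'o1']
```

With a realistic join cost, a unit that fits its phone slightly worse can still win if it concatenates more smoothly with its neighbors, which is exactly the trade-off the graphic above illustrates.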


In contrast to the classical approach, new methods based on deep learning have emerged, where deep neural networks are used to guide the unit selection. If you are interested in how the new systems work in detail, I highly recommend the engineering blog entry describing how Apple created the Siri voice. As a final note, there is also a format called Speech Synthesis Markup Language (SSML) that lets users manually specify the prosody for TTS systems, for example to put an emphasis on certain words, which is quite handy. So enough with the boring theory, let's have a look at the available solutions.

SaaS / Commercial

Google TTS

When thinking about SaaS solutions, the first thing that comes to mind these days is obviously Google's TTS solution, which they used to showcase Google's virtual assistant capabilities at this year's Google I/O conference. Have a look here if you haven't been wowed today yet. When you go to their website, I highly encourage you to try out their demo with a German text of your choice. It really works well; the only downside for us was that it's not Swiss German, and I doubt they will offer it for such a small user group, but who knows. I've taken a recipe and had it read by Google, and frankly liked the output.

Azure Cognitive Services

Microsoft also offers TTS as part of their Azure Cognitive Services (ASR, intent detection, TTS). Similar to Google, having ASR and TTS from one provider has the benefit of saving a round trip, since normally you would need to perform the following trips:

  1. Send the audio data from the client to the server.
  2. Get the ASR response back on the client (dispatch the message there).
  3. Send the text to be transformed to speech (TTS) from the client to the server.
  4. Get the response on the client and play it to the user.

Having ASR and TTS in one place reduces it to:

  1. ASR from client to server; process it on the server.
  2. TTS response back to the client; play it to the user.

Judging the speech synthesis quality, I personally think that Microsoft's solution doesn't sound as good as Google's. But have a listen for yourself.

Amazon Polly

Amazon - having placed their bets on Alexa - of course has a sophisticated TTS solution, which they call Polly. I love the name :). To get where they are now, they acquired a startup called Ivona back in 2013, which was producing state-of-the-art TTS solutions at the time. Having tried it, I liked the soft tone and the fluency of the results. Have a listen yourself:

Apple Siri

Apple offers TTS as part of their iOS SDK under the name SiriKit. I haven't had the chance to play with it in depth yet. Wanting to try it out, I made the mistake of thinking that Apple's desktop TTS is the same as SiriKit; it is not. For a bit of a laugh, you can try the (really poor) built-in TTS on your MacBook directly from the command line:

say -v fred "Ofen auf 220 Grad vorheizen. Broccoli mit dem Strunk in ca. 1 1/2 cm dicke Scheiben schneiden, auf einem mit Backpapier belegten Blech verteilen. Öl darüberträufeln, salzen.
Backen: ca. 15 Min. in der Mitte des Ofens."

While the output sounds awful, below is the same text read by Siri on the latest iOS 11.3, which shows you how far TTS systems have evolved in the last years. Sorry for the bad quality, but somehow it seems impossible to turn off the external microphone when recording on an iPhone.

IBM Watson

In this arms race, IBM also offers a TTS system, including a way to define the prosody manually using the SSML standard. Compared to the alternatives presented above, I didn't like their output, since it sounded quite artificial. But give it a try for yourself.

Other commercial solutions

Finally, there are also competitors beyond the obvious ones, such as Nuance (formerly ScanSoft, originating from Xerox research). Despite their page promising a lot, I found the quality of their German TTS a bit lacking.

Facebook doesn't offer a TTS solution yet; maybe they have placed their bets on virtual reality instead. Other notable solutions are Acapela, Innoetics, TomWeber Software, Aristech, and Slowsoft for Swiss TTS.


Open Source

For the open source area, instead of the same kind of overview, I think it's easier to list a few projects and provide a sample of the synthesis. Many of these projects are academic in nature and don't give you all the bells and whistles and fancy APIs of the commercial products, but with some dedication they could definitely work if you put your mind to it.

  • eSpeak. sample - my personal favorite.
  • Festival, a project from the University of Edinburgh, focused on portability. No sample.
  • MaryTTS, from the German Research Center for Artificial Intelligence (DFKI). sample
  • Mbrola, from the University of Mons. sample
  • Simple4All, an EU-funded project. sample
  • Mycroft. More of an open source assistant, but runs on the Raspberry Pi.
  • Mimic. Only the TTS from the Mycroft project. No sample available.
  • Mozilla has published over 500 hours of material in their Common Voice project. Based on this data, they offer Deep Speech, a deep learning ASR project. Hopefully they will someday offer TTS based on this data too.
  • Char2Wav, from the University of Montreal (which, by the way, maintains the Theano library). sample

Overall, my feeling is that most of the open source systems have unfortunately not yet caught up with the commercial ones. I can only speculate about the reasons: it might take a significant amount of good raw audio data to produce comparable results, plus a lot of fine-tuning of the final model for each language. For an elaborate overview of TTS systems, especially those that work in German, I highly recommend checking out the extensive list that Felix Burkhardt from the Technical University of Berlin has compiled.

That sums up the market overview of commercial and open source solutions. Overall, I was quite amazed at how fluent some of these solutions sound, and I think the technology is ready to really change how we interact with computers. Stay tuned for the next blog post, where I will explain how we put one of these solutions to use to create a hands-free recipe-reading assistant.

What we learned at Typo Berlin 2018 Fri, 25 May 2018 00:00:00 +0200 5 things we learned

The presentations were full of everything from visionary thoughts to practical tips and tricks, with plenty of typography and content in between.

Designer Timothy Goodman at Typo Berlin 2018

New York based designer Timothy Goodman had the most important message of all.

Brand strategist Alex Mecklenburg at Typo Berlin 2018

Brand strategist Alex Mecklenburg shared a similar message, but from the corporate perspective.

She posed the question "The Wonder of Digital Creation is sacred ... or is it?" and advised against creating internal innovation labs, because they exclude everyone outside the lab from being innovative.

Digital Visionary Johann Jungwirth at Typo Berlin 2018

From vision and philosophy to more practical matters: the faceless truck of the future, as imagined by Volkswagen. And how about that fancy Volkswagen logo of the future?

Brand Talk at Typo Berlin 2018

Nivea showed how they designed fonts to provoke specific feelings. In graphology, this is called the Eindruckscharaktere.
Brand: Nivea
Agency: Juliasys

Brand Talk at Typo Berlin 2018

Europe’s largest network of health clinics presented their new corporate design and website. They followed an interesting approach: content first, because it’s the content that earns their patients’ trust.
Brand: Helios Clinics
Agency: EdenSpiekermann

Professor and typographer Gerd Fleischmann at Typo Berlin 2018

Famous German dadaist, surrealist, and constructivist Kurt Schwitters (1887 - 1948) wasn't just one of the defining artists of the 20th century – he also had lots to say about typography, as Prof Gerd Fleischmann explained to us.

4 inspirational finds

Here is some work that’ll remind you how wonderful and multifaceted creativity can be.

Designer Hansje van Halem at Typo Berlin 2018

Dive into the work of Hansje van Halem – great fun.

Brand talk at Typo Berlin 2018

Fantastic work! Watch case film here.
Brand: London Symphony Orchestra
Agency: Superunion

Urban developer Charles Landry at Typo Berlin 2018

Society is moving faster and faster, and urban developer Charles Landry shared his observations on the consequences. For example, cities have to come up with creative solutions to cope with, or even take advantage of, the ever-increasing speed of life.

Brand talk at Typo Berlin 2018

Speaking of faster moving times: In their “new agenda for strategic branding” the team from KMS Agency showed how fast a channel reaches 50 million users.

3 fun facts

Observing the creative avant-garde, two things came to mind. First, even the big agencies deal with the stuff everyone else deals with. And second, political resistance is alive!

Brand Talks at Typo Berlin 2018

We've all been there. But the good news is: even the big players in the agency world come to the same conclusion.
Brand: Helios Clinics
Agency: EdenSpiekermann

Brand Talk at Typo Berlin 2018

We've all been there, part 2: when the client wants to combine the two (very) different directions the agency presented.
Brand: London Symphony Orchestra
Agency: Superunion

Politics yeah! at Typo Berlin 2018

How refreshingly political Typo Berlin was! Several speakers came out passionately against Trump, one German-speaking presenter gave his entire talk using only the feminine form, we had a female Muslim designer on stage, and many speakers poked fun at the patriarchy. Go go go, forward-thinking creatives!