Defining a company’s voice in three steps
https://www.liip.ch/en/blog/define-your-brand-voice (Tue, 13 Nov 2018)

For this project, which focused on updating texts, we aimed to:

  • Ensure the text's quality and coherence between the different sections of the website
  • Facilitate the work of the editorial team for the long term

This article explains our methods for the project. We worked in stages:

  1. Defining OCN’s identity
  2. Verbalising OCN’s identity in a form that was easy to understand and share
  3. Defining the bases of OCN’s voice

Defining a company’s voice helps an editorial team to understand how to convey their identity in written form.

1. Defining the company’s identity

What does that mean?

A company’s identity forms the basis of its voice. If a company’s identity is clearly defined, its voice is also easy to define.
A company’s identity is its purpose: why you do what you do. Simon Sinek’s video, Start with Why, clearly explains the importance of this question.

Your company has a purpose beyond the money you make, beyond the things you do. The better you put it into words, the better you can see it – we can only see what we have put into words. Once you have put it into words, others can see it and focus all of their efforts into making it happen. It makes work unfulfilling when we don’t know what we are working towards.
Simon Sinek, extract from the presentation Start with Why

The example of OCN

Legal foundations define the functions and services provided by OCN, and an internal charter defines its concept of public service.

Based on OCN’s internal charter, we formulated various questions such as:

  • Why does OCN admit drivers and vehicles to road traffic?
  • Why does OCN admit drivers and boats to navigation traffic?
  • Why does OCN organise prevention activities and courses?
  • Why does OCN enforce administrative measures (warnings and driving licence withdrawals)?
  • Why does OCN collect cantonal and federal taxes (levies on vehicles and boats, charges on heavy goods vehicle traffic)?
  • Are the activities grouped into categories?

During a workshop, we talked to members of the OCN project team to bring internal tacit knowledge to the fore.

In OCN’s case, we were also inspired by the websites of companies in the road safety sector.
The OCN team shared links and images of textual content that they considered to be ‘engaging’ or, alternatively, ‘overly complex’. The team explained how and why they thought this text content was ‘engaging’ or ‘overly complex’.

What can you do?

Are your company’s activities clearly defined, but no documents define your identity or values? Ask yourself what the purpose of your company is: why do we do what we do? Who are we? You will find some answers by reading internal documents, talking to your colleagues, and comparing your company to your competitors.

Read existing documents
You will find answers to the question Why do we exist? in documents entitled ‘corporate identity’ or ‘brand manifesto’, or in the charter of values. If such documents do not exist, talk to your colleagues.

Talk to your colleagues
Talk to your colleagues, such as the company founders and people who are in contact with your customer base. Capturing existing knowledge is vital.
You can help your colleagues to verbalise their ideas by asking questions.

For example, you could ask:

  • In your view, what is our company’s most important activity?
  • Why do we do this activity?
  • Why is this activity important?
  • If we stopped this activity, what would it change for our customers?
  • Are we leaders or followers in our field?

Analyse your competitors
Analysing your competitors will allow you to identify your position in relation to them. For example, you could perform a SWOT analysis to define your strengths and weaknesses in comparison with your competitors.

During your analysis, you can also compile examples of communications that you value highly or that you wish to avoid. This will help you define your position.

2. Verbalise your identity in the form of a vision and a mission

What does that mean?

In point 1, we compiled the key points defining your company’s identity. Now, the aim is to summarise these key points as a vision and a mission. This is an easy way of sharing the central elements of the identity.
Your vision explains to your customers who you are, and your mission explains why you do what you do.

The example of OCN

We used Simon Sinek’s circle. During a workshop, we asked members of the team about OCN’s activities (Sinek’s How) and the aim of these activities (Sinek’s Why).
After drafting a vision and a mission, we asked for feedback from the project team to ensure that we had correctly expressed OCN’s identity.

What can you do?

Use your research from point 1 to draft your company’s vision and mission. Be precise and concise. Avoid adjectives that are subject to interpretation such as good, nice, lovely.
Your vision conveys who you are in your sector or in relation to your customers: We are a leader in..., we are partners, we are consultants, etc.

Your mission explains what you do, above and beyond profit: We provide equality in..., we promote innovation in..., etc.

Request feedback from your colleagues
Test what you have drafted by asking your colleagues’ opinions. Ask questions like:

  • Does this statement represent our company?
  • If not, which word would you replace to represent our company?
  • Does this statement represent why we do what we do?
  • If not, which word would you replace to correctly represent why we do what we do?

If you have to explain or justify your choice of words, your draft is probably not clear. The aim is for the team to agree that ‘yes, that statement represents who we are, yes, that statement represents what we do’.

3. Define the basic principles of your voice

What does that mean?

Building on your vision and your mission, use adjectives to define how your company expresses itself. These basic principles will guide your editorial process.

The example of OCN

In OCN’s case, we selected three series of three adjectives that define OCN’s voice. Here are two examples:

Accessible
We use everyday language to be understood by all. We explain the terms and concepts that we use. We support our customers by drafting texts that can be skim-read.

Respectful
Our language is suitable in all circumstances, no matter who we are talking to. We avoid judgemental adjectives and vocabulary loaded with alternative meanings.

How do you do this?

Choose three to seven adjectives, depending on the complexity of your voice. We recommend pairing each adjective with a brief explanation. Avoid adjectives that are subject to interpretation such as good, nice, lovely.

You could help your colleagues by suggesting adjectives, for example using post-it notes or a card game. You could ask them:

  • Are we meticulous?
  • Are we trendsetting?
  • Are we wacky?

Request feedback from your colleagues
Test what you drafted by asking your colleagues’ opinions. Ask questions like:

  • Does this adjective define how our company, or our digital product, speaks to our customers?
  • If not, what is the right adjective?

Look for collaboration and suitable solutions.

Key points to remember

  • Your company’s voice is based on your company’s identity
  • Draw inspiration from your charter, values, and colleagues’ knowledge to define your company’s identity
  • Examine how your company is positioned compared with your competitors
  • Aim for collective agreement by asking for feedback from your colleagues
  • Define a series of adjectives that are the principles for your voice

Share the love <3

Thank you to the OCN team, especially Fanny and the editorial team for being motivated and welcoming!
Thank you to Yves for his involvement and enthusiasm throughout the project!
Thank you to Darja, Jérémie and Tom for their valuable advice and feedback!
Thank you to Sara, a recent arrival at Liip who is already providing motivation!

Our suggested reading to learn more about this topic

Erika Heald (Content Marketing Institute): 5 Easy Steps to Define and Use Your Brand Voice
Kinneret Yifrah (Medium): 6 reasons to design a voice and tone for your digital product
And of course, here is a link from my colleague Caroline on how to ensure that your voice is strong and heard.

TEDx - make a wish
https://www.liip.ch/en/blog/tedx (Wed, 31 Oct 2018)

Destination Tomorrow

We have been a partner of various TEDx programmes for many years. This year, we sponsored TEDx events in Bern and Fribourg, and are about to go to Geneva. TEDx is a forum for ‘ideas worth spreading’. It consists of self-organised events in various cities. As part of last year’s ‘Destination Tomorrow’ TEDx event at HSG in St. Gallen, we wanted to show how innovation and digitalisation can be combined.
The wish tree
We worked on visions of the future with around 400 participants across seven lecture halls at HSG. The focal point of the event was our wish tree. A wish tree is a living tree, either indoors or outdoors, onto which wishes are hung. The wishes symbolically grow towards the sky with the tree, and so are incorporated into the greater whole. Participants at the event could write down one or more wishes and attach them to our wish tree. It was also possible to send us wishes online: https://makeawish.liip.ch/. We looked through all the wishes, and were astonished by what we found!

Young people wished for world peace and sustainability – and a relationship.

The majority of the wishes were about the well-being of humans and nature. ‘World peace’ and ‘green technologies for all’ made an appearance; most participants wished for a more peaceful world and for sustainability. We thought this was remarkable! Of course, there were also personal wishes, such as successfully completing a university degree, which is perfectly OK. One wish particularly caught our eye: ‘I wish for a girlfriend’. The wisher even hung his phone number on the wish tree. Hats off for audacity! Although we could not couple him up, we gave him a cool present for his pluckiness.

Outlook for TEDx events

There will be more TEDx events for us in 2019. The wishes collected at the TEDx in St. Gallen touched us, so we are making another wish tree in Geneva, following the same concept but in a different format. Do French-speaking Swiss people have different wishes? We will find out on 7 November at Les Jours qui viennent – TEDx Geneva. We are once again organising a Liip bar. Meet us there! Or send us your wish online: https://makeawish.liip.ch/

How a simple solution alleviated a complex problem
https://www.liip.ch/en/blog/how-a-simple-solution-alleviated-a-complex-problem (Tue, 30 Oct 2018)

Estimated reading time: < 5 minutes. Target audience: developers and product owners.

First, a word about software development

Over time, every piece of software goes into maintenance: minor features get developed, bugs are fixed and frameworks get upgraded to their latest versions. One potential side effect of this activity is "regression": something that used to work suddenly doesn't anymore. The most common way to prevent this is to write tests and run them automatically on every change. So every time a new feature is developed or a bug is fixed, a piece of code is written to ensure that the application will, from that moment on, work as expected. And if some changes break the application, the failing tests should prevent them from being released.

In practice however, it happens more often than not that tests get overlooked... That's where it all started.

The situation

We maintain an application built with Symfony. It provides an API for which some automated tests were written when it was first implemented, years ago. But even though the application kept evolving as the years went by (and its API started being used by more and more third-party applications), the number of tests remained unchanged. This slowly created a tension: the importance of the API's stability increased (as more applications depended on it) while the test coverage decreased (as new features were developed and no tests were written for them).

The solution that first came to mind

Facing this situation, my initial thoughts went something like this:

We should review every API-related test, evaluate their coverage and complete them if needed!

We should review every API endpoint to find those that are not yet covered by tests and add them!

That felt like an exhaustive and complete solution; one I could be proud of once delivered. My enthusiasm was high, because it triggered something dear to my heart: my quest to leave the code cleaner than I found it (also known as the "boy-scout rule") was set in motion. If this drive for quality had been my sole constraint, that's probably the path I would have chosen: the complex path.

Here, however, the project's budget did not allow to undertake such an effort. Which was a great opportunity to...

Gain another perspective

As improving the test suite was out of the picture, the question slowly shifted to:

What could bring more confidence and trust to the developers that the API will remain stable in the long run, when changes in the code inevitably occur?

Well, "changes" is still a little vague here to answer the question, so let's get more specific :

  • If a developer changes something in the project related to the API, I trust that they will test the feature they changed; there's not much risk involved in that scenario. But...
  • If a developer changes something in the project that has nothing to do with the API and yet the change may break it, this is trouble!

The biggest risk I've identified to break the API inadvertently, by applying (seemingly) unrelated changes and not noticing it, lies in the Symfony Routing component, used to define API's endpoints :

  • Routes can override each other if they have the same path, and the order in which they are added to the configuration files matters. Someone could add a new route with a path identical to an existing API endpoint's and break the latter.
  • Upgrading to the next major version of Symfony may lead to mandatory changes in the way routes are defined (it has happened already), which opens the door to human error (forgetting to update a route's definition, for example).
  • Route definitions live in different files and folders from the code they relate to, which makes it hard for developers to be conscious of their relationship.

All of this brings fragility. So I decided to focus on that, taking a "Minimum Viable Product" approach that would satisfy the budget constraint too.

Symfony may be part of the problem, but it can also be part of the solution. If the risk comes from changes in the routing, why not use Symfony's tools to monitor them?

The actual implementation (where it gets technical)

The command debug:router lists all the route definitions of a Symfony application. There's also a --format option that allows getting the output as JSON, which is perfect for writing a script that relies on that data.

As for many projects at Liip, we use RMT to release new versions of the application. This tool allows "prerequisite" scripts to be executed before any release is attempted: useful to run a test suite or, in this case, to check whether the application's routing underwent any risky changes.

The first requirement for our script to work is a reference point: a set of the route definitions in a "known stable state". This can be created by running the following command on the master branch of the project, for example:

bin/console debug:router --format=json > some_path/routing_stable_dump.json

Then the script could go something like this:

  1. Use the Process component to run the bin/console debug:router --format=json command, pass the output to json_decode(), and store it in a variable (that's the new routing).
  2. Fetch the reference point using file_get_contents(), pass the output to json_decode(), and store it in a variable (that's the stable routing).
  3. Compare the two variables. I used swaggest/json-diff to create a diff between the two datasets.
  4. Evaluate whether the changes are risky (depending on the business logic), alert the developer if they are, and prevent the release. A sketch of these steps follows.
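
Our actual prerequisite script is written in PHP, using the Process component and swaggest/json-diff as described above. Purely to illustrate the logic, here is a minimal sketch of the same four steps in Python; the dump path is the one from the command above, and the "any change is risky" heuristic is a placeholder for the real business logic:

import json
import subprocess
import sys

# Reference dump created earlier with:
#   bin/console debug:router --format=json > some_path/routing_stable_dump.json
STABLE_DUMP = 'some_path/routing_stable_dump.json'

# Step 1: dump the current routing definitions as JSON.
result = subprocess.run(['bin/console', 'debug:router', '--format=json'],
                        capture_output=True, text=True, check=True)
new_routing = json.loads(result.stdout)

# Step 2: load the last known stable routing.
with open(STABLE_DUMP) as f:
    stable_routing = json.load(f)

# Steps 3 and 4: compare the datasets and treat any stable route that
# changed or disappeared as risky (a deliberately naive heuristic).
risky = {name for name, spec in stable_routing.items()
         if new_routing.get(name) != spec}
if risky:
    print('Potentially breaking routing changes: ' + ', '.join(sorted(risky)))
    sys.exit(1)  # a non-zero exit code makes the release process abort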

Here's an example of output from our script:

Closing thoughts

I actually had a great time implementing this script and feel proud of the work I did. Besides, I'm quite confident that the solution, while not perfect, will be sufficient to increase the peace of mind of the project's developers and product owners.

What do you think? Would you have taken another approach? I'd love to read all about it in the comments.

Drupal Europe 2018
https://www.liip.ch/en/blog/drupal-europe-2018 (Mon, 29 Oct 2018)

In 2017, the Drupal Association decided not to host a DrupalCon Europe 2018 due to waning attendance and financial losses, and took some time to make the European event more sustainable. The Drupal community then decided to organise a Drupal Europe event in Darmstadt, Germany in 2018. My colleagues and I joined this biggest European Drupal event in October, and here is my summary of a few talks I really enjoyed!

Driesnote

By Dries Buytaert
Track: Drupal + Technology
Recording and slides

This year, Dries Buytaert focused on improvements made for Drupal users such as content creators, evaluators and developers.

Compared to last year, Drupal 8 contributions increased by 10% and the number of stable module releases by 46%. Steady progress is noticeable, especially in the core initiatives: the latest version of Drupal 8 ships with features and improvements created by four core initiatives.

Content creators are now the key decision-makers in the selection of a CMS. Their expectations have changed: they need flexibility but also simpler tools to edit content. The layout_builder core module offers some solutions by enabling inline editing of content and drag-and-drop of elements into different sections. Media management has been improved too, and it is possible to prepare different “states” of content using the workspaces module. But the progress doesn’t stop here: the next step is to modernise the administrative UI with a refresh of the Seven administration theme, based on React. Using this modern framework makes it familiar to JavaScript (JS) developers and builds a bridge to the JS community.

Drupal took a big step forward for evaluators, as it now provides a demo profile called “Umami”. By navigating through the demo website, evaluators get a clear understanding of what kind of websites can be produced with Drupal and how it works.
The online documentation on drupal.org has also been reorganised, with a clear separation of Drupal 7 and Drupal 8, and it provides some getting-started guides too. Finally, a quick-install link is available to have a website running within 3 clicks and 1 minute 27 seconds!

The developer experience has been improved as well: minor releases are now supported for 12 months instead of the former 4 weeks, giving teams more time to plan their updates efficiently. Moreover, GitLab will be adopted within the next months to manage code contributions. This modern collaborative tool will encourage more people to participate in projects.

Regarding the support of the current Drupal versions, Dries shared that Symfony 3, the base component of Drupal 8, will be end-of-life by November 2021. To keep the CMS secure, this implies that Drupal 8 will be end-of-life by November 2021 too, and that Drupal 9 should be released in 2020. The upgrade from Drupal 8 to Drupal 9 should be smooth as long as you stay current with the minor releases and don’t use modules with deprecated APIs.
The support of Drupal 7 has been extended to November 2021, as the migration path from Drupal 7 to Drupal 8 is not yet stable for multilingual sites.

This is a slide from the Driesnote presentation showing a mountain with many tooltips: "Drupal 8 will be end-of-life by November 2021", "Drupal 7 will be supported until November 2021", "Drupal 9 will be released in 2020", "Drupal 8 became a better tool for developers", "You now have up to 12 months to upgrade your sites", "Drupal 8 became much easier to evaluate", "We've begun to coordinate the marketing of Drupal", "Drupal 8 became easier to use for content creators", "Drupal.org is moving to GitLab very soon".
Slide from the Driesnote showing the current state of Drupal.

Last but not least, DrupalCon is coming back next year and will be held in Amsterdam!

JavaScript modernisation initiative

By Cristina Chumillas, Lauri Eskola, Matthew Grill, Daniel Wehner and Sally Young
Track: Drupal + Technology
Recording and slides

After a lot of discussions about which JS framework should be used to build the new Drupal administrative experience, React was finally chosen for its popularity.

The initiative members wanted to focus on the content editing experience, as this affects a big group of Drupal users. The goal was to simplify and modernise the current interface, while embracing practices that are familiar to JS developers so they can join the Drupal community more easily.
On one hand, a UX team ran user tests, which showed that users like the flexibility of the Drupal interface but often dislike its complexity. A comparative study was also run to learn what is being used in other tools and CMSs. On the other hand, the User Interface (UI) team worked on the redesign of the administrative interface and built a design system based on components. The refresh of the Seven administration theme is ongoing.
Another group worked on prototyping the User Experience (UX) and User Interface (UI) changes with React. For instance, if an editor leaves a page without saving their last changes, a popup appears offering to restore them. This is possible because the content is stored in the application state.

You can see a demo of the new administrative UI in the video (go to 20 minutes 48 seconds):

Demo of the new administrative UI in Drupal 8

If you are interested, you can install the demo and of course join the initiative!

Drupal Diversity & Inclusion: Building a stronger community

By Tara King and Elli Ludwigson
Track: Drupal Community
Recording

Diversity in gender, race, ethnicity, immigration status, disability, religion etc. helps a lot: it has been proven to make teams more creative, collaborative and effective.

Tara King and Elli Ludwigson, who are part of the Drupal Diversity and Inclusion team, presented how Drupal is building a stronger and smarter community. The initial need was to make Drupal a safer place for all, especially for the less visible ones at community events such as women, minorities and people with disabilities.
The group addresses several issues, such as racism, sexism, homophobia and language barriers, with different efforts and initiatives. For example, diversity is highlighted and supported at Drupal events: pronoun stickers are distributed, the #WeAreDrupal hashtag is used on Twitter, and social events are organised for underrepresented people. Moreover, the group has released an online resource library, which collects articles about diversity. All of this is ongoing and new initiatives keep being created; helping people find jobs and attracting a more diverse range of people as recruiters are only two of them.

Flyer put on a table with the text "Make eye Contact. Invite someone to join the conversation. Consider new perspectives. Call out exclusionary behavior. Be an ally at Drupal events."
Diversity and Inclusion flyer, photo by Paul Johnson, license CC BY-NC 2.0
Sign mentioning "All-gender restrooms" at the Drupal Europe venue.
All-gender restrooms sign, photo by Gábor Hojtsy, license CC BY-SA 2.0

If you are interested in the subject and would like to get involved, there are weekly meetings in the #diversity-inclusion Drupal Slack channel. You can join the contrib team or work on the issue queue too.

Willy Wonka and the Secure Container Factory

By Dave Hall
Track: DevOps + Infrastructure
Recording

Docker is a tool designed to create, deploy and run applications easily by using containers. It is also about “running random code downloaded from the internet and running it as root”. This quote points out how important it is to maintain secure containers. Dave Hall illustrated this with practical advice and images from the “Willy Wonka and the Chocolate Factory” movie. Here is a little recap:

  • Have a light image: big images slow down deployments and also increase the attack surface. Install an Alpine distribution, which is about 20 times lighter, rather than a Debian one;
  • Check downloaded sources very carefully: for instance, use the wget command and validate the checksum of the file. You can also scan your images for vulnerabilities using tools like Microscanner or Clair;
  • Use continuous development workflows: build a plan to maintain your Docker images, using a good Continuous Integration / Continuous Delivery (CI/CD) system, and document it;
  • Specify a user in your Dockerfile: running as root in a container is the same as running as root on the host. You need to limit what a potential attacker can do;
  • Measure your uptime in hours or days: it is important to rebuild and redeploy often, to avoid running a potentially compromised system for a long time.

Now you are able to incorporate this advice into your Dockerfiles in order to build a safer factory than Willy Wonka’s.

Decoupled Drupal: Implications, risks and changes from a business perspective

By Michael Schmid
Track: Agency + Business
Recording

Before 2016, Michael Schmid and his team worked on full Drupal projects. Since then, they have been working on progressively and fully decoupled projects.
A fully decoupled website means that the frontend is not handled by Drupal but by a JS framework such as React. This framework “talks” to Drupal via an API such as GraphQL. It also means that all the frontend interactions Drupal provides are gone: views with filters, webforms, comments etc. If a module provides frontend functionality, it is no longer usable and needs to be re-implemented somehow.
In a progressively decoupled website, the frontend stack is still built with Drupal, but some parts are implemented with a JS framework. Data can be provided by APIs or injected from Drupal. The advantage is that you can benefit from Drupal components and don’t need to re-implement everything. A downside is that CSS styling and build systems are handled on both sides, which can conflict, so you need a clear understanding of what does what.

To run such projects successfully, it is important to train every developer in the new technologies: JS has evolved, and parts of the logic can be built with it. We can say that backenders can now do frontend. In terms of hiring, it means you can hire full-stack developers but also JS engineers, which attracts more developers globally, as they love working with JS frameworks such as React.

Projects are investments which continue over time, so expect failures at the beginning. These kinds of projects are more complex than regular Drupal ones; they can fail or go over budget. Learn from your mistakes and share them with your team in retrospectives. It is also very important to celebrate successes!
Clients request decoupled projects to offer a faster and cooler experience to their users. They need to understand that this is an investment that will pay off in the future.

Finally, fully decoupled Drupal is a trend for big projects, and other CMSs already offer decoupling out of the box. Drupal needs to focus on a better editor experience and a better API. There might also be projects that only require simple backend editing, for which Drupal is not needed.

Hackers automate but the Drupal Community still downloads updates on drupal.org or: Why we need to talk about Auto Updates

By Joe Noll and Hernani Borges de Freitas
Track: Drupal + Technology
Recording and slides

In 2017, 59% of Drupal users were still downloading modules from drupal.org. In other words, more than half of the users didn’t have any automated process to install modules. Knowing that critical security updates have been released in the past months and that it is only a matter of hours until a website potentially gets hacked, it becomes crucial to have a process to automate these updates.
An update can be quite complex and may take time: installing the update, reviewing the changes, deploying on a test environment, testing either automatically or manually, and deploying to production. However, this process can be simplified with automation in place.

There is a core initiative to support small-to-medium site owners who usually do not take care of security updates. The idea is a process that downloads the code and updates the sources in the Drupal directory.
For more complex websites, automating the composer workflow with a CI pipeline is recommended. Every time a security update is released, the developer pushes it manually into the pipeline. The CI system builds an installation containing the security fix in a new branch, which is deployed automatically to a non-production environment where tests can be run and the build approved. Changes can be merged and deployed to production afterwards.

A schema showing the update strategy through all the steps of a CI pipeline
Update strategy slide by Joe Noll and Hernani Borges de Freitas

To go further, the update_runner module focuses on automating the first part: detecting an update and firing up a push for an update job.

Conclusion

Swiss Drupal community members cheering at a restaurant
Meeting the Swiss Drupal community, photo by Josef Dabernig, license CC BY-NC-SA 2.0

We are back with fresh ideas, things we are curious to try, and learnings from great talks! We joined the social events in the evenings too, exchanging with other Drupalists, in particular the Swiss Drupal community! The week went by so fast. Thank you, Drupal Europe organisers, for making this event possible!

Header image credits: Official Group Photo Drupal Europe Darmstadt 2018 by Josef Dabernig, license CC BY-NC-SA 2.0.

Real time numbers recognition (MNIST) on an iPhone with CoreML from A to Z
https://www.liip.ch/en/blog/numbers-recognition-mnist-on-an-iphone-with-coreml-from-a-to-z (Tue, 23 Oct 2018)

Creating a CoreML model from A-Z in less than 10 Steps

This is the third part of our deep learning on mobile phones series. In part one I showed you the two main tricks, convolutions and pooling, and how to use them to train deep learning networks. In part two I showed you how to train existing deep learning networks like ResNet50 to detect new objects. In part three I will now show you how to train a deep learning network, convert it into the CoreML format, and then deploy it on your mobile phone!

TLDR: I will show you how to create your own iPhone app from A-Z that recognizes handwritten numbers:

Let’s get started!

1. How to start

To have a fully working example, I thought we’d start with a toy dataset like the MNIST set of handwritten digits and train a deep learning network to recognize those. Once it’s working nicely on our PC, we will port it to an iPhone X using the CoreML standard.

2. Getting the data

# Importing the dataset with Keras and transforming it
from keras.datasets import mnist
from keras.utils import np_utils  # needed for the one-hot encoding below
from keras import backend as K

def mnist_data():
    # input image dimensions
    img_rows, img_cols = 28, 28
    (X_train, Y_train), (X_test, Y_test) = mnist.load_data()

    if K.image_data_format() == 'channels_first':
        X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
        X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
        input_shape = (1, img_rows, img_cols)
    else:
        X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
        X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
        input_shape = (img_rows, img_cols, 1)

    # rescale [0,255] --> [0,1]
    X_train = X_train.astype('float32')/255
    X_test = X_test.astype('float32')/255

    # transform to one hot encoding
    Y_train = np_utils.to_categorical(Y_train, 10)
    Y_test = np_utils.to_categorical(Y_test, 10)

    return (X_train, Y_train), (X_test, Y_test)

(X_train, Y_train), (X_test, Y_test) = mnist_data()

3. Encoding it correctly

When working with image data, we have to distinguish how we want to encode it. Since Keras is a high-level library that can work on multiple “backends” such as TensorFlow, Theano or CNTK, we first have to find out how our backend encodes the data. It can either be encoded “channels first” or “channels last”, which is the default in TensorFlow, the default Keras backend. So in our case, using TensorFlow, it is a tensor of (batch_size, rows, cols, channels): first the batch_size, then the 28 rows of the image, then the 28 columns, and then a 1 for the number of channels, since our image data is grey-scale.
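
A quick way to check which convention your backend uses; the shapes in the comments assume the TensorFlow default (channels last):

from keras import backend as K

(X_train, Y_train), (X_test, Y_test) = mnist_data()

print(K.image_data_format())  # 'channels_last' with the TensorFlow backend
print(X_train.shape)          # (60000, 28, 28, 1): batch, rows, cols, channels
print(X_test.shape)           # (10000, 28, 28, 1)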

We can take a look at the first six images that we have loaded with the following snippet:

# plot first six training images
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.cm as cm
import numpy as np

(X_train, y_train), (X_test, y_test) = mnist.load_data()

fig = plt.figure(figsize=(20,20))
for i in range(6):
    ax = fig.add_subplot(1, 6, i+1, xticks=[], yticks=[])
    ax.imshow(X_train[i], cmap='gray')
    ax.set_title(str(y_train[i]))

4. Normalizing the data

We see that there are white numbers on a black background, each thickly written right in the middle, and that they are quite low resolution: in our case, 28 x 28 pixels.

You may have noticed that above we rescale each of the image pixels by dividing them by 255. This results in pixel values between 0 and 1, which is quite useful for any kind of training. Before the transformation, the pixel values of each image look like this:

# visualize one number with pixel values
def visualize_input(img, ax):
    ax.imshow(img, cmap='gray')
    width, height = img.shape
    thresh = img.max()/2.5
    for x in range(width):
        for y in range(height):
            ax.annotate(str(round(img[x][y],2)), xy=(y,x),
                        horizontalalignment='center',
                        verticalalignment='center',
                        color='white' if img[x][y]<thresh else 'black')

fig = plt.figure(figsize = (12,12)) 
ax = fig.add_subplot(111)
visualize_input(X_train[0], ax)

As you can see, each grey pixel has a value between 0 and 255, where 255 is white and 0 is black. Notice that here mnist.load_data() loads the original data into X_train[0]. In our custom mnist_data() function, we transform every pixel intensity into a value between 0 and 1 by calling X_train = X_train.astype('float32')/255.

5. One hot encoding

Originally the data is encoded in such a way that the Y vector contains the number value that the X vector (pixel data) represents. So, for example, if the image looks like a 7, the Y vector simply contains the number 7. We need the one-hot transformation because we want to map our output to 10 output neurons in our network that fire when the corresponding number is recognized.
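
As a small illustration of what np_utils.to_categorical() does with a label of 7 (10 classes, as in our mnist_data() function):

from keras.utils import np_utils

# The label 7 becomes a vector with a 1 at index 7 and 0 everywhere else:
print(np_utils.to_categorical([7], 10))
# [[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]]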

6. Modeling the network

Now it is time to define a convolutional network to distinguish those numbers. Using the convolution and pooling tricks from part one of this series we can model a network that will be able to distinguish numbers from each other.

# defining the model
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
def network():
    model = Sequential()
    input_shape = (28, 28, 1)
    num_classes = 10

    model.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.3))
    model.add(Flatten())
    model.add(Dense(500, activation='relu'))
    model.add(Dropout(0.4))
    model.add(Dense(num_classes, activation='softmax'))

    # summarize the model
    # model.summary()
    return model 

So what did we do there? We started with a convolution with a kernel size of 3, meaning the window is 3x3 pixels, and an input shape of 28x28 pixels. We followed this layer with a max pooling layer; the pool_size is 2, so we downscale everything by 2, and the input to the next convolutional layer is 14x14. We repeated this two more times, ending up with a 3x3 output after the final pooling layer. We then use a dropout layer, where we randomly set 30% of the input units to 0 to prevent overfitting during training. Finally, we flatten the input layers (in our case 3x3x32 = 288) and connect them to a dense layer with 500 units. After this step we add another dropout layer and finally connect it to our dense layer with 10 nodes, which corresponds to our number of classes (the numbers 0 to 9).
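
To double-check this shape arithmetic, we can build the model and print its summary; the progression in the comments follows from the layer definitions above (assuming the channels-last format). We keep the resulting model around for the training step below.

model = network()
model.summary()

# Expected shape progression:
# (28, 28, 1)  -> Conv2D 3x3 'same' -> (28, 28, 32) -> MaxPool 2 -> (14, 14, 32)
# -> Conv2D 2x2 'same' -> (14, 14, 32) -> MaxPool 2 -> (7, 7, 32)
# -> Conv2D 2x2 'same' -> (7, 7, 32)  -> MaxPool 2 -> (3, 3, 32)
# -> Flatten -> 288 -> Dense -> 500 -> Dense softmax -> 10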

7. Training the model

# Training the model
import keras  # needed for keras.optimizers below

model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])

model.fit(X_train, Y_train, batch_size=512, epochs=6, verbose=1,validation_data=(X_test, Y_test))

score = model.evaluate(X_test, Y_test, verbose=0)

print('Test loss:', score[0])
print('Test accuracy:', score[1])

We first compile the network by defining a loss function and an optimizer: in our case we select categorical_crossentropy, because we have multiple categories (the numbers 0-9). Keras offers a number of optimizers, so feel free to try out a few and stick with what works best for your case. I’ve found that AdaDelta (an advanced form of AdaGrad) works fine for me.

After training, I got a model with an accuracy of 98%, which is quite excellent given the rather simple network architecture. In the screenshot you can also see that the accuracy increased in each epoch, so everything looks good. We now have a model that can predict the numbers 0-9 quite well from their 28x28 pixel representation.

8. Saving the model

Since we want to use the model on our iPhone, we have to convert it into a format that the iPhone understands. There is actually an ongoing initiative from Microsoft, Facebook and Amazon (and others) to harmonize all the different deep learning network formats into an interchangeable open neural network exchange format that you can use on any device. It’s called ONNX.

Yet, as of today, Apple devices only work with the CoreML format. In order to convert our Keras model to CoreML, Apple luckily provides a very handy helper library called coremltools that we can use to get the job done. It is able to convert scikit-learn, Keras and XGBoost models to CoreML, thus covering quite a few everyday applications. Install it with “pip install coremltools” and then you will be able to use it easily.

import coremltools

coreml_model = coremltools.converters.keras.convert(model,
                                                    input_names="image",
                                                    image_input_names='image',
                                                    class_labels=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
                                                    )

The most important parameters are class_labels, which define the classes the model tries to predict, and input_names or image_input_names. By setting them to image, Xcode will automatically recognize that this model takes in an image and tries to predict something from it. Depending on your application, it makes a lot of sense to study the documentation, especially to make sure the converter encodes the RGB channels in the right order (parameter is_bgr) and that it correctly assumes all inputs are values between 0 and 1 (parameter image_scale); see the sketch below.
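
For illustration only, here is a hypothetical variation of the conversion call showing those two parameters; our grey-scale model does not actually need them, and you should check the coremltools documentation for your version:

# Hypothetical: only needed if the model expects inputs scaled to [0, 1]
# while the app feeds raw [0, 255] pixel values.
coreml_model = coremltools.converters.keras.convert(
    model,
    input_names='image',
    image_input_names='image',
    class_labels=[str(i) for i in range(10)],
    image_scale=1 / 255.0,  # multiply incoming pixel values before inference
    # is_bgr=False,         # channel order; only relevant for colour models
)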

The only thing left is to add some metadata to your model. With this you are helping all the other developers greatly, since they don’t have to guess how your model works and what it expects as input.

#entering metadata
coreml_model.author = 'plotti'
coreml_model.license = 'MIT'
coreml_model.short_description = 'MNIST handwriting recognition with a 3 layer network'
coreml_model.input_description['image'] = '28x28 grayscaled pixel values between 0-1'
coreml_model.save('SimpleMnist.mlmodel')

print(coreml_model)

9. Use it to predict something

After saving the model in the CoreML format, we can check whether it works correctly on our machine. For this we feed it an image and see if it predicts the label correctly. You can use the MNIST training data, or you can snap a picture with your phone and transfer it to your PC to see how well the model handles real-life data.

#Use the core-ml model to predict something
from PIL import Image  
import numpy as np
model =  coremltools.models.MLModel('SimpleMnist.mlmodel')
im = Image.fromarray((np.reshape(mnist_data()[0][0][12]*255, (28, 28))).astype(np.uint8),"L")
plt.imshow(im)
predictions = model.predict({'image': im})
print(predictions)

It works, hooray! Now it's time to include it in a project in Xcode.

Porting our model to Xcode in 10 Steps

Let me start by saying: I am by no means an Xcode or mobile developer. I have studied quite a few super helpful tutorials, walkthroughs and videos on how to create a simple mobile phone app with CoreML and used those to create my app. I can only say a big thank you and kudos to the community for being so open and helpful.

1. Install Xcode

Now it's time to really get our hands dirty. Before you can do anything, you need Xcode. So download it from the Mac App Store and install it. In case you already have it, make sure you have at least version 9.

2. Create the Project

Start Xcode and create a single view app. Name your project accordingly; I named mine “numbers”. Select a place to save it. You can leave “create git repository on my mac” checked.

3. Add the CoreML model

We can now add the CoreML model that we created using the coremltools converter. Simply drag the model into your project directory, making sure to drag it into the correct folder (see screenshot). You can use the option “add as Reference”; this way, whenever you update your model, you don’t have to drag it into your project again. Xcode should automatically recognize your model and realize that it is a model to be used with images.

4. Delete the view or storyboard

Since we are going to use just the camera and display a label, we don’t need a fancy graphical user interface, or in other words a view layer. Since the storyboard corresponds to the view in the MVC pattern, we are simply going to delete it. In the project settings’ deployment info, make sure to also clear the Main Interface field (see screenshot) by setting it to blank.

5. Create the root view controller programmatically

Instead, we are going to create the root view controller programmatically by replacing the application function in AppDelegate.swift with the following code:

// create the view root controller programmatically
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    // create the user interface window, make it visible
    window = UIWindow()
    window?.makeKeyAndVisible()

    // create the view controller and make it the root view controller
    let vc = ViewController()
    window?.rootViewController = vc

    // return true upon success
    return true
}

6. Build the view controller

Finally it is time to build the view controller. We will use UIKit, a lib for creating buttons and labels; AVFoundation, a lib for capturing the camera on the iPhone; and Vision, a lib to handle our CoreML model. The last one is especially handy if you don’t want to resize the input data yourself.

In the ViewController we are going to inherit from UI and AV functionalities, so we will need to override some methods later to make it functional.

The first thing we will do is create a label that will tell us what the camera is seeing. By overriding the viewDidLoad function we trigger the capturing of the camera and add the label to the view.

In the function setupCaptureSession we create a capture session, grab the first available camera (the back-facing one, as requested with position: .back) and capture its output into captureOutput, while also displaying it on the previewLayer.

In the function captureOutput we finally make use of the CoreML model that we imported before. Make sure to hit Cmd+B (build) when importing it, so Xcode knows it’s actually there. We use the model to predict something from the captured image, then grab the first prediction and display it in our label.

// define the ViewController
import UIKit
import AVFoundation
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    // create a label to hold the predicted digit
    let label: UILabel = {
        let label = UILabel()
        label.textColor = .white
        label.translatesAutoresizingMaskIntoConstraints = false
        label.text = "Label"
        label.font = label.font.withSize(40)
        return label
    }()

    override func viewDidLoad() {
        // call the parent function
        super.viewDidLoad()       
        setupCaptureSession() // establish the capture
        view.addSubview(label) // add the label
        setupLabel()
    }

    func setupCaptureSession() {
        // create a new capture session
        let captureSession = AVCaptureSession()

        // find the available cameras
        let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices

        do {
            // select the first available camera (the back-facing one)
            if let captureDevice = availableDevices.first {
                captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice))
            }
        } catch {
            // print an error if the camera is not available
            print(error.localizedDescription)
        }

        // setup the video output to the screen and add output to our capture session
        let captureOutput = AVCaptureVideoDataOutput()
        captureSession.addOutput(captureOutput)
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = view.frame
        view.layer.addSublayer(previewLayer)

        // buffer the video and start the capture session
        captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // load our CoreML MNIST model
        guard let model = try? VNCoreMLModel(for: SimpleMnist().model) else { return }

        // run an inference with CoreML
        let request = VNCoreMLRequest(model: model) { (finishedRequest, error) in

            // grab the inference results
            guard let results = finishedRequest.results as? [VNClassificationObservation] else { return }

            // grab the highest confidence result
            guard let Observation = results.first else { return }

            // create the label text components
            let predclass = "\(Observation.identifier)"

            // set the label text
            DispatchQueue.main.async(execute: {
                self.label.text = "\(predclass) "
            })
        }

        // create a Core Video pixel buffer which is an image buffer that holds pixels in main memory
        // Applications generating frames, compressing or decompressing video, or using Core Image
        // can all make use of Core Video pixel buffers
        guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // execute the request
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }

    func setupLabel() {
        // constrain the label in the center
        label.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true

        // constrain the label to 50 pixels from the bottom
        label.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -50).isActive = true
    }
}

Make sure that you changed the model part to the name of your model. Otherwise you will get build errors.

7. Add a privacy message

Finally, since we are going to use the camera, we need to inform the user that we are going to do so, and thus add a privacy message “Privacy - Camera Usage Description” in the Info.plist file under Information Property List.

8. Add a build team

In order to deploy the app on your iPhone, you need to register with the Apple developer program. There is no need to pay anything; you can register without any fees. Once you are registered, you can select the team (Apple calls it this way) that you signed up with there in the project properties.

9. Deploy on your iPhone

Finally it's time to deploy the app on your iPhone. You will need to connect it via USB and unlock it. Once it's unlocked, select the destination under Product - Destination - Your iPhone. Then the only thing left is to run it on your mobile: select Product - Run (or simply hit Cmd+R) in the menu and Xcode will build and deploy the project onto your iPhone.

10. Try it out

After having jumped through so many hoops, it is finally time to try out our app. If you are starting it for the first time, it will ask you to allow it to use your camera (after all, we placed this info there). Then make sure to hold your iPhone sideways, because of how we trained the network: we have not used any augmentation techniques, so our model is unable to recognize numbers that are “lying on their side”. We could make our model better by applying these techniques, as I have shown in this blog article.
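
As a minimal sketch of such an augmentation, assuming Keras’ ImageDataGenerator and an arbitrary rotation range, training could be adapted like this before converting the model again:

from keras.preprocessing.image import ImageDataGenerator

# Generate randomly rotated copies of the training images on the fly, so the
# model also sees digits that are not perfectly upright.
datagen = ImageDataGenerator(rotation_range=45)

model.fit_generator(datagen.flow(X_train, Y_train, batch_size=512),
                    steps_per_epoch=len(X_train) // 512,
                    epochs=6,
                    validation_data=(X_test, Y_test))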

A second thing you might notice is that the app always recognizes some number, as there is no “background” class. In order to fix this, we could additionally train the model on some random images that we classify as the background class. This way our model would be better equipped to tell whether it is seeing a number or just some random background.

Conclusion or the famous “so what”

Obviously this is a very long blog post. Yet I wanted to get all the necessary info into one place, in order to show other mobile devs how easy it is to create your own deep learning computer vision applications. In our case at Liip, it will most certainly boil down to a collaboration between our data services team and our mobile developers, in order to get the best of both worlds.

In fact, we are currently innovating together by creating an app that will be able to recognize animals in a zoo, and working on a small fun game that lets two people doodle against each other: you are given a task, such as “draw an apple”, and the person who draws the apple faster, in such a way that it is recognised by the deep learning model, wins.

Beyond such fun innovation projects the possibilities are endless, but they always depend on the context of the business and the users. Obviously the saying “if you have a hammer, every problem looks like a nail” applies here too: not every app will benefit from having computer vision on board, and not all apps using computer vision are useful, as some of you might know from the famous Silicon Valley episode.

Yet there are quite a few nice examples of apps that use computer vision successfully:

  • Leafsnap lets you distinguish different types of leaves.
  • Aipoly helps visually impaired people to explore the world.
  • Snooth gets you more info on your wine by taking a picture of the label.
  • Pinterest has launched a visual search that allows you to search for pins that match the product that you captured with your phone.
  • Caloriemama lets you snap a picture of your food and tells you how many calories it has.

As usual, the code that you have seen in this blog post is available online. Feel free to experiment with it. I am looking forward to your comments and I hope you enjoyed the journey. P.S. I would like to thank Stefanie Taepke for proofreading and for her helpful comments, which made this post more readable.

The Liip Bike Grand Tour Challenge 2018
https://www.liip.ch/en/blog/the-liip-bike-grand-tour-challenge-2018 (Fri, 12 Oct 2018)

Birth of an idea

It all started because of Liip's long-lasting engagement with Pro Velo's Bike to Work initiative, which takes place every year in May or June and encourages occasional bikers to get into the habit of biking to work, at least for parts of their trip. Liip is a long-time participant and actively encourages Liipers to take part.

I had been thinking of reaching all offices in one go for quite some time: it could be organized as a relay, participants could use different means of transportation, and so on. At the 2018 LiipConf, I shared the idea with other Liipers and got a lot of enthusiastic feedback, enough to finally get around to organizing "something". That same evening, I turned to my favourite bike router and tried to connect the dots. The idea had by then become "try to work in every Liip office, bike between the offices".

Initial implementation

With five offices, Liip spreads over most of Switzerland, from Lake Geneva to Lake Constance, along the Geneva → St. Gallen IC 1 train line. Initially, I thought of spreading the voyage over five days, one per office. But looking at the map and at the routing calculations, it quickly became obvious that this wouldn't work, because of the Bern → Zürich leg, which is at least a 125 km ride. Cutting it in half, and not staying overnight in Bern, made the plan somewhat realistic.

In early September, I announced the plan in Liip's Slack #announcements channel, in the hope of finding "partners in crime":

🚴‍♀️ Liip Bike Grand Tour Challenge 2018 🚴‍♂️

Motivated to join on a Bike Grand Tour with Liipers? The idea is very simple: connect all Liip offices in one week, on a bike.
  • When? W40: Oct 1. → Oct 5.
  • How? On your bike.
  • Work? Yes; from all offices, in a week!

Afterwards, the team of bikers took some time to materialize: although we work in a very flexible environment, being available for work only half-days for a week still isn't easy to arrange: client and internal meetings, projects and support to work on, and so on. After a month, four motivated Liipers had decided to join, some for all the legs, some only for some of them.

It is important to mention that the concept was never thought of as a sports stunt, or as being particularly tough: e-bikes were explicitly encouraged, and it was by no means mandatory to participate in all parts. In other words: enjoying the outdoors and having a reachable sports challenge with colleagues mattered more than completing the tour in a certain time.

Now, doing it

Monday October 1st - Lausanne → Fribourg

Fast forward to Monday October 1st. The plan was to work in the morning and leave around 3 p.m., with an estimated biking time of approximately 4:30. But the weather in Lausanne was by no means fantastic, with light rain for most if not all of the trail. That's why we decided to leave early, and were on our bikes at 2 p.m. As for routing, we agreed to go through Romont, which has the advantage of providing an intermediate stop with a train station, in case we wished to stop.

We started with a 15 km climb up to Forel and one very steep ascent in La Croix-sur-Lutry, on which we made the mistake of staying on our bikes.
We arrived in Fribourg after 5 hours in the cold, wind and light rain; often in alternation, but also combined. Thankfully, we were welcomed by friendly Liipers in Fribourg who had already planned a pizza dinner and board-games night. It was just perfect!

    Tuesday October 2nd. - Fribourg → Bern

After a well-deserved sleep, the plan was to work only two hours in Fribourg, so as to leave on time and arrive in Bern for lunch.

    • ~ 33 km
• Elevation range: 534 m - 674 m
• Ascent: -72 m net; 181 m total
    • Cantons crossed: Fribourg, Bern
    • Fribourg → Bern

This was frankly a pleasant ride, with an almost 10 km downhill from Berg to Wünnewil, and then a reasonable uphill from Flamatt to Bern. In approximately two hours, we were able to reach Bern. The weather had improved: not as cold as the previous day, and the rain had stopped.

    Tuesday October 2nd. - Bern → Langenthal

In Bern, the team changed: one rider who had made it from Lausanne decided to stop and was replaced by a fresh one! ☺ After a General Company Circle tactical meeting (see Holacracy – what’s different in our daily life?), we jumped on our bikes towards the first overnight stop without a Liip office, in the north of canton Bern.

    • ~ 45 km
• Elevation range: 466 m - 568 m
• Ascent: -71 m net; 135 m total
    • Cantons crossed: Bern, Solothurn
    • Bern → Langenthal

    Wednesday October 3rd. - Langenthal → Zürich

After a long night's sleep in a friendly B&B in downtown Langenthal and a fantastic, gargantuan breakfast, we were now aiming for Zürich: the longest leg so far, crossing canton Aargau from west to east.

    • ~ 80 km
• Elevation range: 437 m - 510 m
• Ascent: -67 m net; 391 m total
    • Cantons crossed: Bern, Aargau, Zürich
    • Langenthal → Zürich

When approaching Zürich, we initially followed Route 66 "Goldküst - Limmatt", with small ups and downs on charming gravel paths. But after 30 minutes of that fun, we realized that we weren't progressing fast enough and had to get to our destination quicker! We re-routed ourselves onto more conventional, car-filled roads and arrived at the Zürich office around 1 p.m., quite hungry!

    Thursday October 4th. - Zürich → St. Gallen

After a half-day of work in the great Zürich office, sore legs included, we headed towards St. Gallen. The longest leg, with the biggest total ascent of the trip:

    • ~ 88 km
• Elevation range: 412 m - 606 m
• Ascent: 271 m net; 728 m total
    • Cantons crossed: Zürich, Thurgau, St. Gallen
    • Zürich → St. Gallen

After three days of biking and more than 200 km in the legs, this leg wasn't expected to be an easy ride, and indeed it wasn't. On the other hand, it offered nice downhill runs (Wildberg → Turbenthal) and fantastic landscapes with stunning views: from the suburbs of Zürich to Thurgau farmland and the St. Gallen hills. Besides, the weather was just as it should be: sunny yet not too warm.

After 4:45 of riding, finishing as the sun was setting, we finally reached the St. Gallen Liip office!

« Fourbus, mais heureux ! » ("Worn out, but happy!")

Friday October 5th. - St. Gallen

Friday was the only day planned without biking, and frankly, for good reason. We were not only greeted by the very friendly St. Gallen colleagues, but were also lucky enough to arrive on a massage day! (Yes, Liip offers each Liiper a monthly half-hour massage… ⇒ https://www.liip.ch/jobs ☺). After a delicious lunch, it was time to jump on a train back to Lausanne: four days to get there, 3:35 to return. It left a bizarre feeling: you can bike from Lake Geneva to Lake Constance in four days, yet it still takes three and a half hours on some of the most efficient trains to get back.

    Wrap up

• ~ 314.15 km (yes; let's say approximately 100 × π)
• 8 cantons crossed
• ~ 2070 m of cumulative ascent
• Not a single mechanical problem
    • Sore legs
    • Hundreds of cows of all colours and sizes
    • One game of Hornussen
    • Wireless electricity in Thurgau

    Learnings

    • Liip has people willing to engage in fun & challenging ideas!
    • Liip has all the good people it takes to support such a project!
• The eightieth kilometer on day two is easier than the eightieth kilometer on day four: it would have been much easier with legs of decreasing intensity.
    • It takes quite some time to migrate from a desk to a fully-equipped ready-to-go bike.
• Carrying personal equipment for a full week makes for a heavy bike;
• Bike bags are a must: one of us rode with a backpack, and that's just not bearable;
• The first week of October is too late in the year and makes for tough conditions (rain and cold);
• One month's advance notice is too short;
• Classic bed-and-breakfasts are very charming.

    Thanks

    Managing this ride would not have been possible without:

    • Liip for creating a culture where implementing crazy ideas like this is encouraged ("Is it safe enough to try?");
    • Biking Liipers Tobias & Heiko, for coming along;
• Supporting Liipers in various roles, for arranging or providing accommodation, ordering cool sports T-shirts, organizing cool welcome gatherings (game night, music night), and always being welcoming, encouraging and simply friendly;
    • The SwitzerlandMobility Foundation for providing fantastic cycling routes, with frequent indicators, orientation maps and markings for "analog" orientation.

    Next year

Given the great experience, and the many declarations of intent, it is very likely that this challenge will happen again next year, in autumn - but in the opposite direction! Want to join?

    ]]>
    Add syntactic sugar to your Android Preferences https://www.liip.ch/en/blog/syntactic-sugar-android-preferences-kotlin https://www.liip.ch/en/blog/syntactic-sugar-android-preferences-kotlin Tue, 09 Oct 2018 00:00:00 +0200 TL;DR

You can find SweetPreferences on GitHub.

    // Define a class that will hold the preferences
    class UserPreferences(sweetPreferences: SweetPreferences) {
        // Default key is "counter"
        // Default value is "0"
        var counter: Int by sweetPreferences.delegate(0)
    
        // Key is hardcoded to "usernameKey"
        // Default value is "James"
        var username: String? by sweetPreferences.delegate("James", "usernameKey") 
    }
    
    // Obtain a SweetPreferences instance with default SharedPreferences
    val sweetPreferences = SweetPreferences.Builder().withDefaultSharedPreferences(context).build()
    
    // Build a UserPreferences instance
    val preferences = UserPreferences(sweetPreferences)
    
// Use the preferences in a type-safe manner
preferences.username = "John Doe"
preferences.counter = 34

    Kotlin magic

The most important part of the library is defining properties that run code instead of just holding a value.

    From the example above, when you do:

val name = preferences.username

what is really happening is:

    val name = sweetPreferences.get("username", "James", String::class)

The username property name is converted to a string key, the "James" default value is taken from the property declaration, and the String type is inferred automatically.

To write this simple library, we used constructs offered by Kotlin such as Inline Functions, Reified type parameters, Delegated Properties, Extension Functions and Function literals with receiver. If you are starting with Kotlin, I warmly encourage you to check these out. They are only a small part of what Kotlin has to offer to ease app development, but they already allow you to create great APIs.
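To illustrate the mechanics behind such a delegate, here is a minimal, hypothetical sketch built directly on Android's SharedPreferences. It is not the actual SweetPreferences implementation: the real library infers the type through a reified type parameter as shown above, whereas this sketch simply dispatches on the runtime type of the default value.

    import android.content.SharedPreferences
    import kotlin.properties.ReadWriteProperty
    import kotlin.reflect.KProperty

    // Hypothetical delegate: the preference key falls back to the property name.
    class PrefDelegate<T>(
        private val prefs: SharedPreferences,
        private val default: T,
        private val key: String? = null
    ) : ReadWriteProperty<Any?, T> {

        @Suppress("UNCHECKED_CAST")
        override fun getValue(thisRef: Any?, property: KProperty<*>): T {
            val prefKey = key ?: property.name // the property name becomes the key
            return when (default) {
                is Int -> prefs.getInt(prefKey, default) as T
                is Boolean -> prefs.getBoolean(prefKey, default) as T
                else -> (prefs.getString(prefKey, default as String?) ?: default) as T
            }
        }

        override fun setValue(thisRef: Any?, property: KProperty<*>, value: T) {
            val prefKey = key ?: property.name
            with(prefs.edit()) {
                when (value) {
                    is Int -> putInt(prefKey, value)
                    is Boolean -> putBoolean(prefKey, value)
                    else -> putString(prefKey, value as String?)
                }
                apply()
            }
        }
    }

    // Usage (illustrative):
    // var counter: Int by PrefDelegate(prefs, 0)
    // var username: String? by PrefDelegate(prefs, "James", "usernameKey")

The property delegate convention is what makes an assignment like preferences.username = "John Doe" run code behind the scenes instead of writing to a plain field.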

Next time you need to store preferences in your Android app, give SweetPreferences a try and share what you have built with it. We'd love to hear your feedback!

    ]]>
    How Content drives Conversion https://www.liip.ch/en/blog/how-content-drives-conversion https://www.liip.ch/en/blog/how-content-drives-conversion Tue, 09 Oct 2018 00:00:00 +0200 What do users really want from website content? We have created a pyramid of needs. Discover our 5 insights.

    ]]>
From coasters to Vuex https://www.liip.ch/en/blog/from-coasters-to-vuex https://www.liip.ch/en/blog/from-coasters-to-vuex Tue, 09 Oct 2018 00:00:00 +0200 You'll take a coaster and start calculating quickly. All factors need to be taken into account as you write down your calculations on the edge of the coaster. Once your coaster is full, you'll know the answers to a lot of questions: How much can I offer for this piece of land? How expensive will one flat be? How many parking lots could be built, and how expensive are they? And of course there are many more.

    In the beginning, there was theory

Architecture students at the ETH learn this so-called "coaster method" in real estate economics classes. Planning and building a house of any size is no easy task to begin with, and neither is understanding its financial side. To handle all of those calculations, some students created spreadsheets that do the calculations for them. This is prone to error: there are many questions to answer and many parameters that influence those answers. The ETH IÖ app was designed to teach students the complex correlations between the factors that influence the decision - and ultimately whether building a house on a certain lot is financially feasible or not.

    The spreadsheet provided by the client PO

The product owner at ETH, a lecturer in real estate economics, took the time to create such spreadsheets, much like the students. These spreadsheets contained all the calculations and formulas that were part of the course, as well as some sample calculations. After a thorough analysis of the spreadsheet, we came up with a total of about 60 standalone values that could be adjusted by the user, as well as about 45 subsequent formulas that used those values and other formulas to yield yet another value.

60 values and 45 subsequent formulas, all of them once calculated on a coaster. Implementing this across several components would end up in a mess. We needed to abstract it away somehow.

    Exploring the technologies

The framework we chose to build the frontend application with was Vue. We had already used Vue to build a prototype, so we figured we could reuse some components. We valued Vue's small size and flexibility and were somewhat familiar with it, so it was a natural choice. There are two main ways of handling your data when working with Vue: either manage the state within the components, or in a central store, like Vuex.

    Since many of the values need to be either changed or displayed in different components, keeping the state on a component level would tightly couple those components. This is exactly what is happening in the spreadsheet mentioned earlier. Fields from different parts of the sheet are referenced directly, making it hard to retrace the path of the data.

    A set of tightly coupled components. Retracing the calculation of a single field can be hard.

Keeping the state outside of the components and providing ways to update it from any component decouples them. Not a single calculation needs to be done in an otherwise very view-related component. Any component can trigger an update, any component can read, but ultimately, the store decides what happens with the data.

    By using Vuex, components can be decoupled. They don't need state anymore.

Vue has a solution for that: Vuex. Vuex allows you to decouple the state from the components, moving it over to dedicated modules. Vue components can commit mutations to the state or dispatch actions that contain logic. For a clean setup, we went with Vuex.

    Building the Vuex modules

    The core functionality of the app can be boiled down to five steps:

    1. Find the lot - Where do I want to build?
    2. Define the building - How large is it? How many floors, etc.?
    3. Further define any building parameters and choose a reference project - How many flats, parking lots, size of a flat?
    4. Get the standards - What are the usual prices for flats and parking lots in this region?
    5. Monetizing - What's the net yield of the building? How can it be influenced?

Those five steps essentially boil down to four different topics:

    1. The lot
    2. The building with all its parameters
    3. The reference project
    4. The monetizing part

These topics can be treated as Vuex modules directly. An example of a basic Lot module would look like the following:

    // modules/Lot/index.js
    
    export default {
      // Namespaced, so any mutations and actions can be accessed via `Lot/...`
      namespaced: true,
    
      // The actual state: All fields that the lot needs to know about
      state: {
        lotSize: 0.0,
        coefficientOfUtilization: 1.0,
        increasedUtilization: false,
        parkingReductionZone: 'U',
        // ...
      }
    }

    The fields within the state are some sort of interface: Those are the fields that can be altered via mutations or actions. They can be considered a "starting point" of all subsequent calculations.

    Those subsequent calculations were implemented as getters within the same module, as long as they are still related to the Lot:

    // modules/Lot/index.js
    
    export default {
      namespaced: true,
    
      state: {
        lotSize: 0.0,
        coefficientOfUtilization: 1.0
      },
    
      // Getters - the subsequent calculations
      getters: {
        /**
         * Unit: m²
         * DE: Theoretisch realisierbare aGF
         * @param state
         * @return {number}
         */
        theoreticalRealizableCountableFloorArea: state => {
          return state.lotSize * state.coefficientOfUtilization
        },
    
        // ...
      }
    }

And we're good to go. Mutations and actions are implemented in their respective store modules too, which makes it more obvious where the data actually changes; a sketch of what that could look like follows below.
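As an illustration only - the mutation and action names here are hypothetical, not taken from the actual app - a Lot module with a mutation and an action could look like this:

    // modules/Lot/index.js

    export default {
      namespaced: true,

      state: {
        lotSize: 0.0
      },

      // Mutations: the only place where the state is actually written
      mutations: {
        setLotSize (state, lotSize) {
          state.lotSize = lotSize
        }
      },

      // Actions: contain logic, then commit mutations
      actions: {
        updateLotSize ({ commit }, lotSize) {
          // Hypothetical normalization step before committing
          commit('setLotSize', Math.max(0, Number(lotSize)))
        }
      }
    }

A component would then call this.$store.dispatch('Lot/updateLotSize', 450) instead of modifying the state directly.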

    Benefits and drawbacks

    With this setup, we've achieved several things. First of all, we separated the data from the view, following the "separation of concerns" design principle. We also managed to group related fields and formulas together in a domain-driven way, thus making their location more predictable. All of the subsequent formulas are now also unit-testable. Testing their implementation within Vue components is harder as they are tightly coupled to the view. Thanks to the mutation history provided by the Vue dev tools, every change to the data is traceable. The overall state of the application also becomes exportable, allowing for an easier implementation of a "save & load" feature. Also, reactivity is kept as a core feature of the app - Vuex is fast enough to make any subsequent update of data virtually instant.

However, as with every architecture, there are also drawbacks. Mainly, by introducing Vuex, the application becomes more complex in general. Hooking the data up to the components requires a lot of boilerplate - without it, it's not clear which component uses which field. As all the store modules need similar methods (e.g. loading data or resetting the entire module), there's also a lot of boilerplate there. And the store modules end up tightly coupled with each other, since they use fields and getters of basically all the other modules. The sketch below shows the kind of per-component wiring this means.
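For the sake of illustration, this is roughly what that wiring looks like; the component and method names are hypothetical, while the fields and the getter follow the Lot module shown above:

    // components/LotForm.vue (script section)
    import { mapState, mapGetters } from 'vuex'

    export default {
      computed: {
        // Every component that displays these fields repeats this mapping
        ...mapState('Lot', ['lotSize', 'coefficientOfUtilization']),
        ...mapGetters('Lot', ['theoreticalRealizableCountableFloorArea'])
      },
      methods: {
        // Dispatch the (hypothetical) action from the input handler
        onLotSizeInput (value) {
          this.$store.dispatch('Lot/updateLotSize', Number(value))
        }
      }
    }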

In conclusion, the benefits of this architecture outweigh the drawbacks. Having a central store in this kind of application makes sense.

    Takeaway thoughts

The journey from the coasters, to the spreadsheets, to a whiteboard, to an actually usable application was thrilling. The chosen architecture allowed us to keep a consistent setup, even as the complexity of the underlying calculations grew. The app became more testable. The Vue components no longer care where the data comes from or what happens with changed fields. Separating the view from the model was a necessary decision to avoid a mess of tightly coupled components - the app stayed maintainable, which is important. After all, the students use it all the time.

    ]]>
Enkeltauglichkeit: responsibility for the future https://www.liip.ch/en/blog/enkeltauglichkeit https://www.liip.ch/en/blog/enkeltauglichkeit Wed, 03 Oct 2018 00:00:00 +0200 Fit for our grandchildren

    On 10 October, we show that we are willing to view nature as the key stakeholder in all decisions, for the well-being of future generations. The ‘Enkeltauglichkeit’ (‘fit for our grandchildren’) concept was created by the enkeltauglichkeit.jetzt association, Bread for All and committed individuals from the business community.

    Purpose over profit

    The first step in creating change is the question of a company’s internal attitude to nature. Thus far, there has been a dominant attitude of superiority (‘we are more important than nature’), with its corresponding results. Our commitment to a world ‘fit for our grandchildren’ means that our work is built on an internal attitude of being a part of nature.

    Liip is thinking and acting for the well-being of future generations

    Liip understands that we all share a connection with nature. It feeds us and our economy, so we are taking responsibility – for our employees, our customers, the community and nature. We are cultivating an integrated view of business and nature.

    We are the forefathers of the future

    Liipers take their responsibility seriously and act sustainably, in a pragmatic way. We support the interests of enkeltauglich.jetzt. Humans are naturally designed to collaborate in groups, take care of each other, and make use of our collective intelligence in a sustainable and self-organised way. This planet is our home, and nature is the basis of our existence. We are now the forefathers of the future. How will our descendants look back at us?

    ]]>