The Data Science Stack 2018 https://www.liip.ch/fr/blog/the-data-science-stack-2018 Mon, 16 Apr 2018 00:00:00 +0200 More than a year ago I sat down and went through my various GitHub stars and browser bookmarks to compile what I then called the Data Science Stack. It was an extensive collection of tools, some of which I use on a daily basis, while others I had only heard of. The outcome was a big PDF poster which you can download here.

The good thing about it was that every tool I had in mind could be found there somewhere, and like a map I could instantly see which category it belonged to. As a bonus I was able to identify my personal white spots on the map. The bad thing about it was that as soon as I had compiled the list, it was out of date. So I transferred the collection into a Google Sheet, and whenever a new tool emerged on my horizon I added it there. Since then - in almost a year - I have added 102 tools to it.

From PDF to Data Science Stack website

While it would be OK to release another PDF of the stack year after year, I thought it might be a better idea to turn it into a website where everybody can add tools.
So without further ado I present to you the http://datasciencestack.liip.ch page. Its goal is still to provide orientation like the PDF, but without ever becoming stale.

frontpage

Adding Tools: Adding tools to my Google Sheet felt a bit lonesome, so I asked others internally to add tools whenever they found new ones too. Finally, when moving away from the old Google Sheet and opening our collection process to everybody, I added a little button on the website that allows everybody to add tools to the appropriate category themselves. Just send us the name, link and a quick description, and we will add it after a quick sanity check. The goal is to gather user generated input too! I am also thinking about turning the website into a “github awesome” repository, so that adding tools can be done in a more programmer-friendly way.

adding tools for everyone

Search: When entering new tools, I realized that I was never sure whether a tool already existed on the page, and since tools are hidden away after the first five, the CTRL+F approach didn’t really work. That's why the website now has a little search box to check whether a tool is already in our list. If not, just add it to the appropriate category.

Mailing List: If you are a busy person and want to stay on top of things, I would not expect you to regularly check back and search for changed entries. This is why I decided to send out a quarterly mailing that contains the new tools we have added since the last data science stack update. This helps you to quickly reconnect with this important topic and maybe also to discover a data science gem you have not heard of yet.

JSON download: Some people asked me for the raw data of the PDF, and at that time I was not able to give it to them quickly enough. That's why I added a JSON route that allows you to simply download the whole collection as a JSON file and create your own visualizations, maps or stacks with the tools that we have collected. Maybe something cool is going to come out of this.
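Fetching and exploring the collection could look like the following minimal sketch in Python; the exact route and the "category" field are assumptions on my part, so check the website for the actual URL and schema.

#minimal sketch, assuming a hypothetical /api/tools JSON route with a "category" field
import requests
from collections import Counter

tools = requests.get("https://datasciencestack.liip.ch/api/tools").json()
categories = Counter(tool["category"] for tool in tools)  # count tools per category
print(categories.most_common(5))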

Communication: Scanning through such a big list of options can sometimes feel a bit overwhelming, especially since we don’t really provide any additional info or orientation on the site. That’s why I added multiple ways of contacting us, in case you are right now searching for a solution for your business. I took the liberty of also linking our blog posts that are tagged with machine learning at the bottom of the page, because we often make use of the tools in these.

Zebra integration: Although it's nowhere visible on the website, I have hooked up the data science stack to our internal “technology database” system, called Zebra (Zebra actually does a lot more, but for us the technology part is relevant). Whenever someone enters a new technology into our technology db, it is automatically added for review to the data science stack. Like this we are basically tapping into the collective knowledge of all the employees of our company. The screenshot below gives a glimpse of our tech db on Zebra, capturing not only the tool itself but also the common feelings towards it.

Zebra integration

Insights from collecting tools for one more year

In the following, I would like to share the questions that guided me in researching each area and the insights that I gathered in the year of maintaining this list. Below you see a little chart showing the categories to which I have added the most tools in the last year.

overview

Data Sources

One of the remaining questions for us is which tools offer good and legally compliant ways to capture user interaction. Instead of taking Google Analytics as the norm, we are always on the lookout for new and fresh solutions in this area. Besides Heatmap Analytics, another new category I added is «Tag Management». Regarding the classic website analytics solutions, I was quite surprised that there are still quite a lot of new solutions popping up. I added a whole lot of solutions, and entirely new categories like mobile analytics and app store analytics, after discovering the great github awesome list of analytics solutions here.

data sources

Data Processing

How can we initially clean or transform the data? How and where can we store the logs that are created by these transformation events? And where can we obtain additional valuable data? Here I’ve added quite a few tools in the ETL area and in the message queue category. It looks like eventually I will need to split up the “message queue” category into multiple ones, because it feels like that one drawer in the kitchen where everything ends up in a big mess.

data processing

Database

What options are out there to store the data? How can we search through it? How can we access data sources efficiently? Here I mainly added a few specialized solutions, such as databases focused on storing time series or graph/network data. I might have missed something, but I feel that there is no new paradigm shift on the horizon right now (like graph-oriented, NoSQL, column-oriented or NewSQL databases were in their time). It is probably in the area of big data where most of the new tools emerged. An awesome list that goes beyond our collection can be found here.

database

Analysis

Which stats packages are available to analyze the data? What frameworks are out there to do machine learning, deep learning, computer vision and natural language processing? Obviously, the high momentum of deep learning led to many new entries in this category. In the “general” category I’ve added quite a few entries, showing that there is still huge momentum in the various areas of machine learning beyond deep learning. Interestingly, I did not find any new stats software packages, probably hinting that the paradigm of these one-size-fits-all solutions is over. The party is probably taking place in the cloud, where the big five have constantly added more and more specialized machine learning solutions, for example for text, speech, image, video or chatbot/assistant related tasks, just to name a few. At least those were the areas where I added most of the new tools. Going beyond the focus on Python, there is an awesome list that covers solutions for almost every programming language.

analysis

Visualization, Dashboards, and Applications

What happens with the results? What options do we have to visually communicate them? How do we turn those visualizations into dashboards or entire applications? Which additional ways to communicate with users besides reports/emails are out there? Surprisingly, I’ve only added a few new entries here, maybe because I happened to be quite thorough at researching this area last year, or simply because the time of JS visualization libraries popping up left and right has cooled off a bit and the existing solutions are maturing. Yet this awesome list shows that development in this area is still far from over.

visualization

Business Intelligence

What solutions exist that try to integrate data sourcing, data storage, analysis and visualization in one package? What BI solutions are out there for big data? Are there platforms/solutions that offer more of a flexible data-scientist approach (e.g. free choice of methods, models, transformations)? Here I have mostly added platforms in the cloud; it seems only logical to offer fewer and fewer desktop-oriented BI solutions, due to restrained computational power and the high complexity of maintaining BI systems on premise. Although business intelligence solutions are less community and open source driven than the other stacks, there are also awesome lists where people curate those solutions.

business intelligence

You might have noticed that I tried to slip an awesome list on github into almost every category, to encourage you to look more in depth into each area. If you want to spend days of your life discovering awesome things, I strongly suggest you check out this collection of awesome lists here or here.

Conclusion or what's next?

I realized that keeping the list up to date in some areas seems almost impossible, while others gradually mature over time and the number of new tools in those areas is easy to keep track of. I also had to recognize that maintaining an exhaustive and always up-to-date list in those 5 broad categories is quite a challenge. That's why I went out to get help. I looked for people in our company with a particular interest in one of these areas and nominated them technology ambassadors for that part of the stack. Their task will be to add new tools whenever they pop up on their horizon.

I have also come to the conclusion that the stack is quite useful when offering customers a bit of an overview at the beginning of a journey. It adds value to simply know what popular solutions are out there and to start digging around yourself. Yet separating the more mature tools from the experimental ones, or knowing which open source solutions have a good community behind them, is quite a hard task for somebody without experience. It would be great to highlight “the pareto principle” in this stack by pointing to only a handful of solutions and saying you will be fine when you use those. Yet I also have to acknowledge that this will not replace a good consultation in the long run.

Already looking towards the improvement of this collection, I think that each tool needs some sort of scoring: while there are plain vanilla tools that are mature and do the job, there are also highly specialized, very experimental tools that offer help in a very niche area only. While this information is somewhat buried in my head, it would be good to make it explicit on the website. Here I highly recommend what Thoughtworks has come up with in their technology radar. Although their radar goes well beyond our little domain of data services, it offers a great way to differentiate tools, namely into four categories:

  • Adopt: We feel strongly that the industry should be adopting these items. We see them when appropriate on our projects.
  • Trial: Worth pursuing. It is important to understand how to build up this capability. Enterprises should try this technology on a project that can handle the risk.
  • Assess: Worth exploring with the goal of understanding how it will affect your enterprise.
  • Hold: Proceed with caution.
Technology radar

Assessing tools according to these criteria is no easy task - Thoughtworks does it by nominating a high-profile jury that votes regularly on these tools. With 4500 employees, I am sure that their assessment is a representative sample of the industry. For us and our stack, a first step would be to adopt this differentiation, fill it out myself and then get other Liipers to vote on these categories. To a certain degree we have already started this task internally in our tech db, where each employee records a common feeling towards a tool.

Concluding this blogpost, I realized that the simple task of “just” having a list of relevant tools for each area seemed quite easy at the start. The more I think about it, and the more experience I collect in maintaining this list, the more I realize that such a list eventually grows into a knowledge and technology management system. While such systems have their benefits (e.g. in onboarding or quickly finding experts in an area), I feel that turning this list into one would mean walking down a rabbit hole from which I might never re-emerge. Let’s see what the next year will bring.

Best of Swiss Web 2018 https://www.liip.ch/fr/blog/bosw2018 Mon, 16 Apr 2018 00:00:00 +0200 Another accolade for Liip at the Best of Swiss Web Award

With four prizes won at the Best of Swiss Web Award 2018, Liip keeps its place among the ten biggest winners at Best of Swiss Web. Liip took gold in the Innovation category, silver in the Public Affairs category and bronze in the Technology and Business categories. All five submitted projects were nominated.

The Tooyoo project from French-speaking Switzerland wins gold

For the first time in Liip's history at the Best of Swiss Web Award, a project from French-speaking Switzerland was nominated for the Master: Tooyoo - plan ahead to pass on with peace of mind. Tooyoo is a secure digital platform that stores all the important information your loved ones will need if something happens to you. A death often comes with tedious administrative procedures. Step by step, Tooyoo helps you organise the administrative tasks that arise after a death - simply, reliably and intuitively. Breaking a real taboo in Switzerland, the platform lets you plan the arrangements around your own death today by recording your last wishes in a digital safe. In cooperation with Superhuit, Liip implemented the project for la Mobilière. «It is the desire to succeed together that allowed us to develop tooyoo.ch. We are delighted to be able to count on Liip's technical know-how and its skills in product definition and development», says Julien Ferrari, Operations Happyender in Chief at the start-up Tooyoo. Tooyoo won gold in the Innovation category.
www.tooyoo.ch

rokka simplifies image processing

«Faster websites with lighter images save loading time and hosting money» – that is rokka's credo. This product created by Liip compresses images without compromising their quality. With rokka, the weight of your image files can be reduced by 80%. Image formats, SEO attributes, specific image crops, ultra-fast delivery: rokka is an ideal tool for digital image processing. To date, more than 5 million images have been uploaded and delivered, handling a data volume of over 120 GB. «With rokka we have a centralised processing service for our images that guarantees fast, secure and efficient storage and delivery. On any of our platforms we can display the same image in the most diverse formats, all without significant image management overhead», says Alain Petignat, Head of Online Development and Operations at the Federation of Migros Cooperatives, about rokka. rokka took bronze in the Technology category.
rokka.io

ETH Zurich's real estate platform makes life easier for architecture students

Liip developed an application for the Chair of Architecture and Building Process at the Department of Architecture of ETH Zurich. The application is an easy way to introduce students to the economic mechanics of real estate. How, where and under which conditions is constructing a building profitable? Starting from these questions, the application helps students carry out complex calculations (building costs, operation, etc.). «Many students dread our subjects because of their strong mathematical component. Performing many complicated calculations quickly and combining data are very rigid processes on paper. During exams, students can run into problems if they notice too late that an error has corrupted all subsequent calculations. The application is a great help here, because the modules can be revised at any time», says Hannes Reichel, professor at the Department of Architecture. The ETH Zurich real estate economics project won silver in the Public Affairs category.

Migros' world of work takes bronze

The migros-gruppe.jobs platform was designed to bring together the vacancies of the roughly sixty companies of the Migros Group on a single website. Created as a point of contact for job seekers, the portal provides valuable support to the Migros Group in an increasingly tough job market. The goal of the project was to develop and establish a new, user-friendly and comprehensive job platform for the Migros Group aimed at the Swiss labour market. Clear and easy to understand, the portal presents all the important information in an optimised form. «Digital transformation requires ever faster reactions in the market. Launching a shared job platform is a first step», says Micol Rezzonico, Head of the Competence Centre for Employer Branding. The job portal, which supports the Migros Group's marketing and employer branding strategy, won bronze in the Business category.
migros-gruppe.jobs

The whole Liip team is extremely happy about this success. And of course we are also delighted about the other agencies' achievements. Special congratulations go to Unic SA and SBB SA, elected Master of Swiss Web 2018.

The Facebook Scandal - or how to predict psychological traits from Facebook likes. https://www.liip.ch/fr/blog/the-facebook-scandal-or-how-to-predict-psychological-traits-from-facebook-likes Fri, 06 Apr 2018 00:00:00 +0200 The Facebook scandal - or tempest in a teapot - seems to be everywhere these days. Because of it, Facebook has lost billions in stock market value, governments on both sides of the Atlantic have opened investigations, and a social movement is calling on users to #DeleteFacebook. While the press around the world, and also in Switzerland, is discussing back and forth how big Facebook’s role was in the recent data misuse, I thought it would be great to answer three central questions, which even I had after reading a couple of articles:

  1. It seems that CA obtained the data somewhat semi-legally, and everyone seems very upset about that. How did they do it, and can you do it too? :)
  2. People feel manipulated with big data and big buzzwords, yet how much can one practically deduce from our Facebook likes?
  3. How does the whole thing work, predicting my innermost psychological traits from Facebook likes? It seems almost like wizardry.

To answer these questions I have structured the article into a theoretical part and a practical part. In the theoretical part I'd like to give you an introduction to psychology on Facebook, explaining the research behind it, showing how such data was collected initially and finally highlighting its implications for political campaigning. In the practical part I will walk you through a step-by-step example that shows how machine learning can deduce psychological traits from Facebook data, showing you the methods, little tricks and actual results.

Big 5 Personality Traits

Back in 2016 an article was frantically shared in the DACH region. With its quite sensationalist title Ich habe nur gezeigt, dass es die Bombe gibt, the article claimed that basically our most “inner” personality traits - often called the big 5 or OCEAN - can be extracted from our Facebook usage and are used to manipulate us in political campaigns.

personality traits

The article itself was based on a research paper by Michal Kosinski and other Cambridge researchers who studied the correlation between Facebook likes and our personality traits; their earlier research on this topic goes back as far as 2013. Although some of these researchers ended up in the eye of the storm of this scandal, they are definitely not the only ones studying this interesting field.

In the OCEAN definition our personality traits are: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. While psychologists normally have to ask people around 100 questions to determine these traits - you can do a test online yourself to get an idea - the article discussed a way in which Facebook data could be used to infer them instead. The researchers showed that "(i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49)". This basically means that, using a machine learning approach based on your Facebook data, they were able to train a model that was better at describing you than a friend of yours. They also found that the more Facebook likes they have for a single person, the better the model predicts those traits. So, as you might have expected, having more data on users pays off in these terms. The picture below shows a birds-eye view of their approach.

the original approach from the paper

They also came to the conclusion that "computers outpacing humans in personality judgment, presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy." In other words, they hinted that their approach might be quite usable beyond the academic domain.

Cambridge Analytica

This is where Cambridge Analytica (CA) comes into play, which originally got its name from its cooperation with the psychometrics department in Cambridge. Founded in 2014, their goal was to make use of such psychometric results in the political campaigning domain. Making use of psychometric insights is nothing new per se: since at least the 1980s, different actors have been using various psychometric classifications for various applications. For example, the Sinus Milieus are quite popular in the DACH region, mainly in marketing.

The big dramatic shift in comparison to these “old” survey-based approaches is that CA did two things differently: firstly, they focused specifically on Facebook as a data source, and secondly, they used Facebook as the primary platform for their interventions. Instead of running long and expensive questionnaires, they were able to collect such data easily, and they could reach their audience in a highly individualized way.

How did Cambridge Analytica get the data?

Cambridge researchers had originally created a Facebook app that was used to collect data for the research paper mentioned above. Users filled out a personality test on Facebook and then agreed that the app could collect their profile data. There is nothing wrong with that, except for the fact that at the time Facebook (and the users) also allowed the app to collect the profile data of their friends, who never participated in the first place. At this point I am not sure which of the many such apps (“mypersonality”, “thisisyourdigitallife” or others) was used to gather the data, but the result was that with this snowball approach CA quickly collected data on roughly 50 million users. This data became the basis for the work of CA. And CA was heading towards using this data to change the way political campaigns work forever.

Campaigns: From Mass communication to individualized communication

I would argue that the majority of people are still used to classic political campaigns that are broadcast to us via billboard ads or TV ads and have one message for everybody, thus following the old paradigm of mass communication. While such messages have a target group (e.g. women over 40), it is still hard to reach exactly those people, since so many other people are reached too, making this approach rather costly and ineffective.

With the internet era things changed quickly: in today's online advertising world a very detailed target group can be reached easily (via Google Ads or Facebook advertising), and each group can potentially receive a custom marketing message. Such programmatic advertising surely disrupted the advertising world, but CA’s idea was to use this mechanism not for advertising but for political campaigns. While quite a lot of details are known about those campaigns, and some shady practices have been made visible, here I want to focus on the idea behind them. The really smart idea about these campaigns is that a political candidate can appeal to many divergent groups of voters at the same time! This means that they could appear safety-loving to risk-averse persons and risk-loving to young entrepreneurs at the same time. So the same mechanisms that are used to convince people of a product are now being used to convince people to vote for a politician - and every one of these voters might do so for individual reasons. Great, isn't it?

doctor or tattoo artist?

This brings our theoretical part to an end. We now know the hows and whys behind Cambridge Analytica and why this new form of political campaigning matters. It's time to find out how they were able to infer personality traits from Facebook likes. I have to disappoint you on one point, though: I won’t cover how to run a political campaign on Facebook in the practical part.

Step 1: Get, read in and view data

What most people probably don’t know is that one of the initial project's websites still offers a big sample of the collected (and of course anonymized) data for research purposes. I downloaded it and put it to use for our small example. After reading in the data from the CSV files we see that we have 3 tables.

#reading in the three csv files
import pandas as pd

users = pd.read_csv("users.csv")
likes = pd.read_csv("likes.csv")
ul = pd.read_csv("users-likes.csv")
users table

The users table, with 110 thousand users, contains information about demographic attributes (such as age and gender) as well as their scores on the 5 personality traits: openness to experience (ope), conscientiousness (con), extraversion (ext), agreeableness (agr), and neuroticism (neu).

pages table

The likes table contains a list of all 1.5 million pages that have been liked by users.

edges

The edges table contains all 10 million edges between the users and the pages. In fact, seen from a network research perspective, our “small sample” network is already quite big by normal social network analysis standards.

Step 2: Merge users with likes into one dataframe, aka the adjacency matrix

In step 2 we now want to create a network between users and pages. In order to do this we have to convert the edge list to a so-called adjacency matrix (see the example image below). In our case we want to use a sparse matrix format, since we have roughly 10 million edges. The conversion, as shown below, transforms our edge list into categorical integer codes which are then used to create our matrix. By doing it in an ordered way, the rows of the resulting matrix still match the rows of our users table. This saves us a lot of cumbersome lookups.

from edge list to adjacency matrix
#transforming the edge list into a sparse adjacency matrix
import numpy as np
from scipy.sparse import csc_matrix

rows = ul["userid"].astype('category',ordered=True,categories=users["userid"]).cat.codes # important to maintain user order
cols = ul["likeid"].astype("category",ordered=True,categories=likes["likeid"]).cat.codes # same for page order
ones = np.ones(len(rows), np.uint32) # one entry per edge
sparse_matrix = csc_matrix((ones, (rows, cols)))

If you are impatient like me, you might have tried to use this matrix directly to predict the users' traits. I tried it, and the results are rather unsatisfactory, due to the fact that we have way too many very “noisy” features, which give us models that predict almost nothing of our personality traits. The next two steps can be considered feature engineering, or simply good tricks that work well with network data.

Trick 1: Prune the network

To obtain more meaningful results, we want to prune this network and only retain the most active users and pages. Theoretically - while working with a library like networkx - we could simply say:

«Let's throw out all users with a degree of less than 5 and all pages with a degree of less than 20.» This would give us a network of users that are highly active and of pages that seem to be highly relevant. In social network research, computing k-cores often gives you a similar pruning effect.
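As a minimal sketch, assuming user and page ids don't collide, such a k-core pruning with networkx could look like this; it is only feasible for networks far smaller than ours:

#sketch: k-core pruning with networkx (assumes distinct user/page ids; small networks only)
import networkx as nx

G = nx.Graph()
G.add_edges_from(zip(ul["userid"], ul["likeid"])) # bipartite user-page graph
core = nx.k_core(G, k=5) # iteratively removes all nodes with degree < 5
print(core.number_of_nodes(), core.number_of_edges())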

But since our network is quite big, I have used a poor man's version of that pruning approach: we go through a while loop that throws out rows (in our case users) where the sum of their edges is less than 50, and columns (pages) where the sum of their edges is less than 150. This is equivalent to throwing out users that like fewer than 50 pages and pages that are liked by fewer than 150 users. Since the removal of a page or user might violate one of our conditions again, we continue to reduce the network as long as columns or rows still need to be removed. Once both conditions are met, the loop stops. While pruning we also update our users and likes tables (via boolean filtering), so we can track which users like which pages and vice versa.

# pruning the network into most relevant users and pages
print(sparse_matrix.shape)
threshold = 50
while True:
    i = sum(sparse_matrix.shape)
    # keep pages (columns) that are liked by more than 150 users
    columns_bool = (np.sum(sparse_matrix,axis=0)>3*threshold).getA1()
    sparse_matrix = sparse_matrix[:, columns_bool]
    likes = likes[columns_bool]
    # keep users (rows) that like more than 50 pages
    rows_bool = (np.sum(sparse_matrix,axis=1)>threshold).getA1()
    sparse_matrix = sparse_matrix[rows_bool]
    users = users[rows_bool]
    print(sparse_matrix.shape)
    if sum(sparse_matrix.shape) == i: # stop once nothing was removed
        break

This process quite significantly reduces our network size, to roughly 19 thousand users with 8-9 thousand pages (our features). What I will not show here is my second failed attempt at predicting the psychological traits. While the results were slightly better due to the better signal-to-noise ratio, they were still quite unsatisfactory. That's where our second, very classic trick comes into play: dimensionality reduction.

Trick 2: Dimensionality reduction or in our case SVD

The logic behind dimensionality reduction can be explained intuitively: when we analyse users' attributes we often find attributes that describe the "same things" (latent variables or factors). Factors are artificial categories that emerge by combining “similar” attributes that behave in a very “similar” way (e.g. your age and your amount of wrinkles). In our case these variables are pages that a user liked. So there might be a page called "Britney Spears" and another one called "Britney Spears Fans", and all users that like the first also like the second. Intuitively we would want both pages to "behave" like one, so we kind of merge them into one page.

A number of methods are available for such approaches - although they all work a little differently - the most used examples being principal component analysis, singular value decomposition and linear discriminant analysis. These methods allow us to “summarize” or “compress” the dataset into as many dimensions as we want. And as a benefit, these dimensions are sorted so that the most important ones come first.

So instead of looking at a couple of thousand pages per user, we can now group them into 5 “buckets”. Each bucket will contain pages that are similar in regard to how users perceive them. Finally we can correlate these factors with the users' personality traits. Scikit-learn offers us a great way to perform a PCA on a big dataset with the incremental PCA method, which even works with datasets that don’t fit into RAM. An even more popular approach (often used in recommender systems) is SVD, which is fast, easy to compute and yields good results.

#performing dimensionality reduction
from sklearn.decomposition import TruncatedSVD, IncrementalPCA

svd = TruncatedSVD(n_components=5) # SVD
#ipca = IncrementalPCA(n_components=5, batch_size=10) # PCA alternative
df = svd.fit_transform(sparse_matrix)
#df = ipca.fit_transform(sparse_matrix)

In the code above I have reduced the thousands of pages to just 5 factors for visualization purposes. We can now compute pairwise correlations between the personality traits and the 5 factors and visualize them in a heatmap.

#generating a heatmap of user traits vs factors
import seaborn as sns

tmp = users.iloc[:,1:9].values # drop userid, convert to np array; users already matches the matrix rows after pruning
combined = pd.DataFrame(np.concatenate((tmp, df), axis=1)) # one big df
combined.columns=["gender","age","political","ope","con","ext","agr","neu","fac1","fac2","fac3","fac4","fac5"]
heatmap = combined.corr().iloc[8:13].iloc[:,0:8] # keep only factor rows vs demographic/trait columns
sns.heatmap(heatmap, annot=True)
heatmap

In the heatmap above we see that factor 3 seems to be quite highly positively correlated with the user's openness. We also see that factor 1 is negatively correlated with age: the older you get, the less you probably visit pages from this area. Generally, though, we see that the correlations between some factors and traits are not very high (e.g. agreeableness).

Step 3: Finally build a machine learning model to predict personality traits

Armed with our new features we can come back and try to build a model that finally does what I promised: namely, predict the user's traits based solely on those factors. What I am not showing here is the experimentation of choosing the right model for the job. After trying out a few models like linear regression, Lasso or decision trees, the LassoLars model with cross-validation worked quite well. In all the approaches I’ve split the data into a training set (90% of the data) and a test set (10% of the data), to be able to compute the accuracy of the models on unseen data. I also applied some poor man's hyperparameter tuning, where all the predictions are run for different values of k in the SVD dimensionality reduction.

#training and testing the model
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LassoLarsCV
from sklearn.metrics import mean_squared_error, r2_score
from scipy.stats import pearsonr

out = []
out.append(["k","trait","mse","r2","corr"])
for k in [2,5,10,20,30,40,50,60,70,80,90,100]:
    print("Hyperparameter SVD dim k: %s" % k)
    svd = TruncatedSVD(n_components=k)
    sparse_matrix_svd = svd.fit_transform(sparse_matrix)
    df_svd = pd.DataFrame(sparse_matrix_svd)
    df_svd.index = users["userid"]
    total = 0
    for target in ["ope","con","ext","agr","neu"]:
        y = users[target]
        y.index = users["userid"]
        tmp = pd.concat([y,df_svd],axis=1)
        data = tmp[tmp.columns.difference([target])]
        X_train, X_test, y_train, y_test = train_test_split(data, y, test_size=0.1)
        clf = LassoLarsCV(cv=5, precompute=False)
        clf.fit(X_train,y_train)
        y_pred = clf.predict(X_test)
        mse = mean_squared_error(y_test,y_pred)
        r2 = r2_score(y_test,y_pred)
        corr = pearsonr(y_test,y_pred)[0]
        total += r2 # accumulate R2 over all five traits
        print('   Target %s Corr score: %.2f. R2 %s. MSE %s' % (target,corr,r2,mse))
        out.append([k,target,mse,r2,corr])
    print(" k %s. Total R2 %s" % (k,total))

To see which number of dimensions gave us the best results we can simply look at the printout or visualize it nicely with seaborn below. In this case I found that solutions with 90 dimensions gave me quite good results. A more production-ready way of doing this is GridSearch, but I wanted to keep the amount of code for this example minimal.
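As a hedged sketch of what such a search could look like with scikit-learn's GridSearchCV, treating the SVD dimension as a pipeline hyperparameter (the parameter grid and the choice of plain LassoLars are illustrative, not the exact setup above):

#sketch: grid search over the SVD dimension with a scikit-learn pipeline
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LassoLars
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([("svd", TruncatedSVD()), ("reg", LassoLars())])
params = {"svd__n_components": [30, 60, 90]} # illustrative grid
grid = GridSearchCV(pipe, params, scoring="r2", cv=5)
grid.fit(sparse_matrix, users["ope"]) # e.g. for the openness trait
print(grid.best_params_, grid.best_score_)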

#visualizing hyperparameter search
gf = pd.DataFrame(columns=out[0], data=out[1:]) # skip the header row
g = sns.factorplot(x="k", y="r2", data=gf, size=10, kind="bar", palette="muted")
results

Step 4: Results

So now we can finally look at how the model performed on each trait when using 90 dimensions in the SVD. We see that we are not great at explaining the user’s traits. Our R^2 score shows that for most traits we can explain only roughly 10-20% of the variance using Facebook likes. Among all traits, openness seems to be the most predictable. That seems to make sense, as open people would be willing to share more of their likes on Facebook. While some of you might feel a bit disappointed that we were not able to predict 100% of the psychological traits for a user, you should know that in a typical re-test of psychological traits, researchers are also only able to explain roughly 60-70% of the traits again. So we did not do too badly after all. Last but not least, it's worth mentioning that if we are able to predict roughly 10-20% per trait, times 5 traits, overall we know quite a bit about the user in general.

results

Conclusion or what's next?

From my point of view, this small example shows two things:

Firstly, we learned that it is hard to predict our personality traits well from JUST Facebook likes. In this example we were only able to predict a maximum of 20% of a personality trait (in our case for openness). While there are of course myriads of ways to improve our model (e.g. by using more data, different types of data, smarter features, better methods), quite a bit of variance might still remain unexplained - not too surprising in the area of psychology or the social sciences. While I have not shown it here, the easiest way to improve our model would be simply to allow demographic attributes as features too. Knowing a user's age and gender would allow us to improve by roughly 10-20%. But this would then rather feel like one of those old-fashioned approaches. What we could do instead is use the user's likes to predict their gender and age; but let's save this for another blog post.
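As a minimal sketch of that demographic variant, assuming the users table and the SVD factors from above are still aligned row-wise, we would just append those columns to the feature matrix:

#sketch: appending demographic columns to the SVD factors as extra features
X = np.hstack([sparse_matrix_svd, users[["age", "gender"]].values])
# X can then go through the same train/test split and LassoLarsCV as before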

Secondly, we should not forget that even knowing a person's personality very well might not translate into highly effective political campaigns that are able to swing a user's vote. After all, that is what CA promised its customers, and that's what the media fears. Yet the general approach of running political campaigns in the same way as marketing campaigns is probably here to stay. Only the future will show how effective such campaigns really are. After all, Facebook is not an island: although you might see Facebook ads showing your candidate in the best light for you, there are still the old-fashioned broadcast media (which still make up the majority of media consumption today), where watching an hour-long interview with your favorite candidate might shine a completely different light on him. I am looking forward to seeing whether old virtues such as authenticity and sincerity might not give better mileage than personalized Facebook ads.

3D Easter Bunny Making-of https://www.liip.ch/fr/blog/3d-easter-bunny-making-of Fri, 06 Apr 2018 00:00:00 +0200 Conceptual challenges

For us as designers, the biggest challenge was to design the bunny out of the blue. Since we didn’t have a concept made by a dedicated 2D artist, there was a lot of trying out to find its ultimate cuteness.

Design Process

The usual way of designing a 3D model is to create a 2D model first, usually drawn by a concept artist. That model can either be a sketch or a detailed drawing. Subsequently, the model goes through a feedback loop, is refined and approved. At that stage, the 3D artist models it, dividing the whole process into three major steps.

It is just like when we design web pages: our UX designers start by crafting mockup screens, followed by a frontend developer who assesses their feasibility and implements them afterwards. Professional 3D artists in big studios, however, are generally given a concept drawing that is detailed enough for them to simply “copy”, or rather translate, into 3D.
Pedro Couto - our main 3D artist at Liip - however, was given free rein in creativity, doing the entire form-finding process directly in 3D. The challenging part was to build everything from his imagination, without a 2D model, but it ultimately led to a great result, we believe.

Decision to go straight into 3D

Practice over theory is one of our core principles at Liip. As such, we worked with compasses over maps, or creativity over fully defined concept art, this time. Starting to model directly in 3D led to several iterations of the bunny and allowed Pedro to further hone his modeling skills. Furthermore, it enabled him to explore new ways to efficiently draft objects in 3D.

Pedro and I, as a UX designer and illustrator, worked closely together to further enhance Hazel’s cuteness factor and to align its visual appearance with Liip’s branding guidelines.

Evolution of the bunny

It takes a lot of effort to build things in 3D. Here you see some of the stages Hazel went through.

Step 1: Sculpting the first draft

bunny-1

Step 2: Joining all the blocks, plus further sculpting. Hazel was in need of some arms too…

bunny2

Last but not least: Hazel received a proper Liip branding shower and a cute-effect facelift.

bunny3
Mannar: from Aleppo to Bern https://www.liip.ch/fr/blog/mannar-from-aleppo-to-bern Thu, 05 Apr 2018 00:00:00 +0200 I met Mannar for the first time last summer in our Bern office. She is one of our trainees from the Powercoders program.
I get the chance to chat with her over a coffee whenever I travel to Bern. It’s time you get to know her too. She is pretty inspiring to me, and we are lucky to have her among us!
Thank you, Mannar, for sharing your experience and being so sincere!

NB: Header picture credit: Marco Zanoni

How did you end up with us?

Mannar: The journey to Switzerland in short: from Syria I escaped to Turkey, then from Turkey by land all the way to Switzerland with several means of transportation - a truck container was one of them.
I first entered Switzerland in the last days of December 2015 and applied for asylum in canton Waadt, after which I was transferred to Niedergösgen in canton Solothurn, where I still live at the moment. There, through more than one person, I heard about a coding bootcamp for refugees taking place in Bern (called Powercoders). I immediately applied, got accepted, and went through the 3-month program. Through Powercoders I was introduced to several IT companies in Bern; Liip was one of them.

Why Bern?

Mannar: I like pandas, so I thought I must go to the city of bears, maybe raising a little panda at home is legal there, haha... a joke! Because the first bootcamp of Powercoders was in Bern :)

Did you already speak German?

Mannar: Nope. I invested the last 2 years in Switzerland in studying it; I’m currently doing a B2 German course. And by the way, I’d love to learn French one day, I’ll have to find a sponsor first :)

What it is like to arrive here and apply for asylum?

Mannar: It was a new country, a new culture; it’s natural to feel the unknown and be overwhelmed, especially for those who cannot speak a mutual language. Luckily I already spoke English, so it didn’t take me long to orient myself and figure out how things work in Switzerland. Luckily, too, I already knew how to use technology, as I had studied computer engineering back in Syria. That was a huge plus in a land where almost everything is digitalized.
I have been here for 2 years and I still hold the Permit N, which means I’m still an asylum seeker. The chances that an N holder gets a job are practically zero, and the same applies to getting an apprenticeship. Here Powercoders came in again and pleaded with the SEM for an exception to enable me to start my internship at Liip, which was successfully issued.

Thank you Christian Hirsig, Sunita Asnani, Hannes Gassert, Marco Jakob, and all the teachers who were so kind and patient with our late attendance and poor concentration, sometimes :)

Why did you apply to Powercoders?

Mannar: Back in Syria, after high school I studied computer engineering for 2 years. After graduation I developed my graphic design knowledge and did some freelance work. As for coding, I studied the basics of programming in college, but never had the chance to go beyond that or to write code myself. Powercoders came at the right time: they offered to teach web development, with all the computers, teachers, projects, support, networking, food and cookies for free, and most importantly a chance for internships later. Who would say no to all of that? I would say yes just for the cookies! Just kidding :)
My goal was to become independent from social help as soon as possible, to get a job (a cool, exciting one that I enjoy doing and can be creative at), and to earn enough money to stand on my own feet.

Part of the Powercoders program is to get us (the students who did the bootcamp) chances to do internships at IT companies in Bern, which I think was the greatest gift ever.
Through a “Career day” where companies met students, I was introduced to several IT companies; eventually I settled on Liip.

How did it go?

Mannar: At first, learning to code was overwhelming: lots of web technologies to learn, my German was not as good as it is now, and I lacked team integration as well.
But then, after the German course was out of the way, my concentration was set and things started to follow a pattern. Goals were defined and evaluated weekly with my onboarding coach. During the weeks I was learning and applying what I learnt on small tasks my colleagues gave me. Once a task is finished, it gets evaluated, then I move on to the next one, and so on.
Did you have an onboarding buddy? Yes, I have an onboarding buddy/coach who still helps me on the vocational level and on the integration level. Simone Wegelin is an irreplaceable support during this journey. All of my work colleagues are friendly and generous. They’ve become a second family to me.

Did you feel shy?

Mannar: Not shy, rather disconnected at first, especially as I had to attend a German course every morning before work, and I travel daily from my village to Bern (1.5 hours). All of it was new and tiring.

How does it go with coding?

Mannar: I forgot to mention that I’m learning UX design here as well, not only coding.
Coding is finding solutions to problems; it's cracking puzzles, which luckily I’ve always liked. Yet it was (and still is) challenging to learn to code. There are days when I feel very disappointed with myself for not being able to solve a simple exercise or to style a few elements. Then I put myself in this tense state of mind, insisting on solving it, and I fail and push on only to fail again, and then I get more angry at myself... There is a psychological inner battle that comes with it, so it’s not only about learning to code, but also about learning how to deal with failure.
On the other hand, there are those moments when I feel I could be the next female coding star in town! (It’s mostly when I solve a good exercise on Codewars.)

What’s the main difference between now at Liip and your previous job?

Mannar: In Syria I was employed at a governmental association, where I basically did data entry and some web content management. It wasn’t exciting for me, rather limited and outdated. I didn’t feel I belonged there. Why did I get into this boring job then, you may ask? Because it was the only job available at that time. I couldn’t take my time to do further education and learn web development; I had to make money fast one way or another.
Here I have to mention that there is no social help back in Syria: if you don’t work, you end up in the street. The government didn’t give a shit. There was no solid health insurance system for employees, and by the end of the month the salary was only enough to cover the basic living requirements. Every family member had to work and earn to keep the ship from sinking. I worked 2 jobs to get a good income.
Syria is an enormously rich country: the textile, wood, iron, plastic and food industries were at their peak, and we have incredible petrol and oil production that could make each and every Syrian citizen live a luxurious life (assuming no corruption and monopolism). Yet we lived in indigence and poverty, and that was one of the reasons the poor people went against the dictatorship. But the rich people, who didn’t want to change their lifestyle, teamed up with the regime against the poor ones to shut them down. However, the political game in the region has played the biggest role, but we’re not going there now :)

On the professional level, the main differences between Switzerland and Syria come down to self-discipline, motivation and taking responsibility. Those I truly embrace here, because I learnt them here. Now I wake up every day with excitement to go to work, to learn and make awesome things, because I’m doing the things I love to do. I’m not forced to do them; I chose this and I enjoy doing it. That was not the case in my previous job in Syria.
I have to mention that in Syria it’s not common for females to work in IT. Most female students get married before or right after graduation, resigning from school life and signing in to the housewife position.

What do you miss most?

Mannar: I mostly miss privacy. I’ve been living for more than 2 years in a shared house with another family that has 2 babies and lots of visitors, with a shared kitchen and bathroom. I miss quiet and solitude. I’m not allowed to change my place of residence because of Permit N.
I also miss financial comfort; the Permit N allowance is honestly a joke. It changed my lifestyle, as I always have to consider every franc I spend. As simple as it might sound to some people, I miss buying things that are not from second-hand shops, buying take-away meals, visiting a gym, etc.
I just hope to get the B Permit soon, to be able to work and earn well. Permit B would also give me the right to travel and see my family, who I miss the most.

mannar-and-isaline

Mannar likes challenges and always has an objective in sight! :)

What’s your next step?

Mannar: My next step will be determined by the decision issued by the SEM: either a negative decision, Permit F (provisionally admitted foreigners), or a positive one, Permit B (resident foreign nationals).
I must consider both possibilities. With a B Permit, things like employment, changing address and family reunion can be achieved smoothly; with F they can be done as well, but as slowly as 5 years minimum.
So there is the highway plan (with B):
first thing, I will buy my own postpaid SIM card! Seriously.
Then I want to do an apprenticeship in informatics at Liip, get a diploma in informatics and apply for a job; only then can I be independent from social help. And rent a place of my own! Finally!

And there is the bicycle track plan (with F): to fulfill my plan I must be ready to go through a long process and bureaucratic procedures and expect refusal at any step. One should be patient and keep failing until it works.

Thank you, Isaline, for giving me the chance to be heard (or read). I hope someone from the SEM is reading this while he or she is in a good mood. ^_^

---------

Update: Since we wrote this blogpost, Mannar has received her Permit B! It means that she is allowed to stay, work, study and rent in Switzerland, as well as to travel. She now has the right to sign a contract (like a mobile phone contract). She will start her apprenticeship at Liip in August!
Follow Mannar on LinkedIn
---------

Check out Powercoders. Don't hesitate to contact them if you wish to get involved as a coach, if you want to welcome a trainee, or just if you want more information :)
The program is currently starting in Suisse Romande too!

---------

Our learnings from changing the feedback culture https://www.liip.ch/fr/blog/our-learnings-from-changing-the-feedback-culture Tue, 03 Apr 2018 00:00:00 +0200 Last November we started a pilot project in which a group of people committed themselves to give and get more feedback over a period of four months. Here are our learnings and the next steps we will implement.

“I started looking at feedback as an art, a skill to develop, and I now feel "officially" supported by Liip in spending time giving/receiving it.” François Bruneau

It takes a committed and organised team to pull something off

The organisers each committed to spending half a day on this project. Of course it turned out to be more. We organised trainings, answered questions, evaluated tools, reported bugs, led interviews, and prepared newsletters and a talk. We organised ourselves in roles, and each of us took responsibility for specific topics. Short reviews of what went well and what didn’t are totally worth their time.

Going with volunteers had pros and cons

We asked for volunteers because we wanted to find people who cared about feedback. It was great to see how many volunteered, and it gave us the reassurance of working on a topic that is important for the company. For the participants it would have been easier to give feedback to people who are open to it. But with volunteers in different locations all over Switzerland this was not possible.

Offering a feedback training paid off greatly and is now offered company wide

We invited Marion Walz as a coach to give feedback trainings (one of many things I learned from Jurgen Appelo’s book “How to Change the World”). More than thirty participants attended. Many told me that they gained a better understanding of feedback mechanics and received input on how to formulate feedback and how to deal with feedback situations. The trainings gave a much better learning experience than any of the videos or reading material that we provided. We have already organised more trainings for the whole company and will continue to do so.

“After the feedback training yesterday I managed to overcome my obstacles of giving difficult feedback and gave some. And it was amazing for all involved.” Michelle Sanver

Asking regularly about progress and obstacles helped us to take decisions

At the end of the pilot we decided whether it was worth spending more time on the topic and what to invest in next. To back this decision with data and to have regular feedback from the participants, we sent out a survey every couple of weeks and had detailed interviews with some of them.
In the first month the participants made remarkable progress, experimented, set goals, and motivation was high. Over time the engagement started sinking. We could identify which participants profited and stayed engaged, and which challenges were still unsolved.
Next time I would care more about how to visualise the data to make it easily accessible for everybody.

Participants who worked with a mentor made visible progress

Unsurprisingly, “no time” was one of the main reasons for not giving feedback. Many participants didn’t or couldn’t take time to reflect on themselves or others.
The group of people who chose to work with a mentor took this time regularly and kept going. The advice from their mentors had a great impact on their progress and on goals that would otherwise have stagnated. Nadja Perroulaz will implement mentoring company-wide this year, yay!

“I could solve two rather big problems in current projects with the help of a mentor which was a big success for me. I also asked somebody for a one-time mentoring for a specific question where I benefited a lot. I really appreciate the chance of being able to discuss challenges with a mentor and get feedback from an outer perspective.” Simone Wegelin

We do not need a tool to deliver feedback

We offered Leapsome to our participants as a tool to give and request feedback and monitor progress. After an initial peak when everybody tested the tool, we only counted two feedbacks per week. Most participants preferred to give feedback face to face and said that they don’t need a tool at all. Some would like to manage their feedback and keep track of what they have given, received or learned, but not with Leapsome. For now we are continuing without a tool.

“I'm sure now the tools or the methodology is not the big issue, but rigor about actually doing it is. And this doesn't apply only to me ':)” Valentin Delley

The feedback given had an impact and led to changes and learning

A few people took the time to share compliments or give difficult or critical feedback to other Liipers. Not all of these conversations went well; some people overreacted or simply ignored the feedback. But mostly the feedback did lead to change in the receiver, and they were grateful for it.

“Before the pilot, I wasn’t sharing when somebody did a good job, I always thought that somebody else will maybe do it. Expressing it now makes the interaction more valuable and precious with this person.” Raphaël Santos

“I gave two critical feedbacks, both hard to give and to receive, but they had a tremendous impact. My learning: if there is a problem, go talk to the person instead of ignoring the person.” Thomas Botton

How might we reach a critical mass of Liipers who address issues directly and promptly?

However, many Liipers don’t address their issues. Offering support, addressing conflicts or sharing compliments is often forgotten or avoided. For a lot of people it is still easier to ignore these issues than to address them. We picked this as our next challenge to work on.
Christina Henkel, Christian Stocker, Martin Meier, Rita Barracha and Simone Wegelin gave their time and energy to come up with ideas on how to solve this. Jake Knapp’s book “Sprint” was a great help in this process. We will test and implement the three winning ideas.

Feedback champions across teams and locations

We will recruit interested Liipers who want to encourage feedback in their teams and locations and help keep the culture alive. We will offer them actionable activities that are easy to do alongside everyday work.

A team budget for feedback

We will provide teams with a budget that they are expected and encouraged to spend on giving feedback and strengthening their feedback culture.

The feedback trophy

We are producing and testing a trophy that can be placed on a colleague’s desk. This colleague has to give feedback to somebody of their choice within a week and then pass on the trophy. A history displays the feedback given.

How about you?

How do you handle feedback in your company? Do you face similar challenges or completely different ones? I’d love to hear from you at zahida.huber@liip.ch

When I realised we could do something about diversity in tech https://www.liip.ch/fr/blog/when-i-realised-we-could-do-something-about-diversity-in-tech https://www.liip.ch/fr/blog/when-i-realised-we-could-do-something-about-diversity-in-tech Wed, 28 Mar 2018 00:00:00 +0200 NB: The header picture was taken at the last Django Girls event in Lausanne.

Educational and diversity related events

Last year, I took over all sponsoring requests for Liip for a while, not just the ones for the Romandie. I learnt a lot about prioritisation, because such coordination is time-consuming. It also gave me an interesting overview of events on innovation, technology, self-management and agility in Switzerland.

Obviously, there is a correlation between our principles at Liip and the types of event we sponsor. For example, we have a sweet spot for diversity- and education-related events. We have supported events ranging from Django Girls and Rails Girls Summer of Code to Girls in Tech, Jugend Hackt, PowerCoders and Webprofessionals.

In terms of communication, I realised that, in comparison with the number of events we sponsor, the media coverage was relatively low. From 2016 to 2017, we often acted as some sort of underground supporter.

Talking about diversity at the coffee table

To me, integration and diversity have always been interesting subjects. However, I had never related these subjects to the tech field or to Liip. As I became better informed about diversity and education in tech, I talked more about these subjects in my professional environment.

For example, I joined our internal Slack channel #diversity. There, I discovered that many colleagues within Liip shared my interests, and many of them had already taken action! It gave me the confidence to start taking action within Liip, too.

At this point, I wished people outside Liip knew more about Liip’s commitment to supporting educational initiatives on the one hand, and about Liipers’ interest in this subject on the other. For me, people caring about diversity and talking about it is a reason to apply to a company. Maybe we could start a virtuous circle: by talking more about diversity on our digital channels, we become more diverse?

Teamwork on language

Before I started working in marketing, I was a bookworm, who turned into a linguistics freak at university. Language is essential: it shapes who we are and how we think, because we think in a language. In other words, the language we use defines how we see the world. For example, when we say ‘fireman’ or ‘Krankenschwester’ (nurse), we define the gender we expect in these jobs.

On our last website, I often felt frustrated to see an old sentence along the lines of ‘For the sake of simplicity and easier reading, only the masculine form has been used for the individual categories of people’. In my opinion, such sentences are not fitting for several reasons. They separate the world into genders and create a dichotomy between masculine and feminine, as if you had to fit into one of these two categories. And it is harder to identify with a company that doesn’t address me and doesn’t bother to adapt its language.

At the same time as I experienced the interest and commitment of my colleagues, a team was working on a project for our new website. Between my personal conviction, my colleagues’ motivation and this timely opportunity, we started a project to modify the language on our website.

We compared several websites and texts to gather ideas. Everyone provided examples and their personal best-practice cases. Thoroughly changing the language on a website requires a strategy, guidelines and time. We achieved a first step by starting the conversation, raising awareness within the company and drafting a first set of guidelines.

What matters here is how awesome my colleagues have been! We've been holding workshops to find solutions for each language. It was a great feeling to receive ideas and read feedback.

A video project that shines

My objective was for people outside Liip to know about the company’s commitment to supporting educational initiatives on the one hand, and about Liipers’ interest in this subject on the other.

I discussed the idea of creating a video about diversity with Tatjana, one of my colleagues, and she took over the project. From then on, the project started to shine.
It became way more than I had dreamt of!

Watch the video, it's worth it :)

I am very thankful to Tatjana for coordinating the video project, and to Rae, Michelle, Léo, Thereza, Mannar and Stefanie for participating and making it happen.

Next steps like a hummingbird

Liip is culturally and socially involved and stands for state-of-the-art working conditions. This means that diversity is not an option, but a principle. In terms of marketing, our strategy is to use inclusive language on all our channels (including visual language) and to support cultural and educational events.

I have stopped working on sponsoring within Liip. However, I’m happy to say that the colleagues who took over the sponsoring activities are planning to carry on the sponsorship of educational initiatives.

The first step we took concerning the language on our website is an interesting one, but there is still a lot to do. However, each step matters.

We achieved one small step, just like the little hummingbird in the legend. Do you know the legend of the hummingbird? I believe it is a Native American legend, retold by Pierre Rabhi, a controversial French author:

One day, says the legend, there was a huge wildfire. All the terrified animals watched the disaster helplessly. Only the little hummingbird kept busy, fetching a few drops of water with its beak to throw onto the fire. After a moment, the armadillo, annoyed by this ridiculous agitation, said: "Hummingbird, you are crazy, these few drops of water will not put out the fire!" The hummingbird answered: "I know, but I am doing what I can."

On a personal level, I will remember to share what matters to me in my working life. And I encourage you to do the same. You might discover some fellow conspirators. You never know what will happen; maybe part of your private interests will become part of your work?

Interesting stuff to click on

Thanks to Fabio, here is an article about diversity and the roles of women in job interviews.

Thanks to Caroline, here is an article about the influence of how we talk (for example, the use of pronouns and pauses) and how it affects relationships. The article is partly based on differences in the language men and women use.

Lukas leads diversity initiatives for the Symfony community. He is currently working on a Code of Conduct and an enforcement process that will hopefully be finalised and adopted very soon. Codes of Conduct are useful tools for integration, because they define behaviours that are destructive to underrepresented groups in tech. Read “The complex reality of adopting a meaningful code of conduct” for more info on the subject.

What is an MVP and what is it good for? https://www.liip.ch/fr/blog/was-ist-ein-mvp-und-wozu-ist-es-gut https://www.liip.ch/fr/blog/was-ist-ein-mvp-und-wozu-ist-es-gut Fri, 23 Mar 2018 00:00:00 +0100 A prerequisite for defining core features is that the strategic direction (what? why? for whom?) is known for the short term (MVP, product version 1) as well as for the medium and long term (product versions 2 to x), knowing full well that it may change with the insights gained during the development process.

mvp-howto

Identifying core features

The strategy behind the MVP approach implies a user-centred process. It is about core features, feedback and iteration, and about quickly learning whether users value a product or whether it meets with rejection or non-use.
Understanding functionality merely as the technical implementation of a process falls short and does not fully exploit the potential of the approach: the simplest technical implementation is not the same thing as the core feature of an MVP. Whether users value and use a product depends on further factors:

  • Does the user feel addressed?
  • Does the product match their expectations and current needs?
  • Are the tonality and the emotional appeal right?
  • Does the user understand the offering, and can they use it easily?

The four dimensions of core features

In his book “The Lean Product Playbook”, Dan Olsen illustrates which dimensions must be considered when defining core features:

  1. They must be functional.
  2. They must be trustworthy and reliable.
  3. They must be simple.
  4. They must “please” the user or, at best, delight them.
mvp-dimensions

Jussi Pasanen, after Dan Olsen’s “The Lean Product Playbook” and Aaron Walter’s “Designing for Emotion”

1. Functional

To ensure the functionality of a process, the users’ needs must be known. These first manifest themselves in actions that the user wants to perform with the help of a digital product. Only once their intention is clear is the core feature translated into a meaningful digital process. Think of an online shop: one need is to buy the goods on offer. The action is the purchase of the goods. The core feature is therefore the checkout process. From a technical point of view, it is clear what needs to be done. However, this definition alone will not yet lead to a good result.
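To make the purely technical reading tangible: reduced to its steps, a checkout flow can be sketched in a few lines, as in this minimal TypeScript example (the step names are hypothetical and not from the article). The point is precisely that such a sketch covers only the first dimension:

    // Hypothetical sketch: the checkout "core feature" seen purely technically,
    // as a minimal sequence of steps. This alone does not make a good MVP.
    type CheckoutStep = "cart" | "address" | "payment" | "confirmation";

    // Each step points to the next one; the flow ends at the confirmation.
    const next: Record<CheckoutStep, CheckoutStep | null> = {
      cart: "address",
      address: "payment",
      payment: "confirmation",
      confirmation: null,
    };

    function advance(step: CheckoutStep): CheckoutStep | null {
      return next[step];
    }

    // Walk through the whole flow once and print each step.
    let step: CheckoutStep | null = "cart";
    while (step !== null) {
      console.log("step:", step);
      step = advance(step);
    }

Trust, simplicity and delight, the three remaining dimensions described below, are exactly what such a purely technical definition leaves out.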

2. Trustworthy & reliable

It is necessary to create trust and give the user a good feeling. The user wants security, for example that the shop is reputable and that the purchase will go smoothly, and needs additional information about delivery times, return conditions, payment options, the supplier, and so on, which is integrated into the process. Perhaps reviews from other buyers, quality seals or the option of contacting the seller create a good feeling during the process. Only when the user feels good and safe during the transaction will they complete it. The task is therefore to find out which information creates trust during the transaction.

3. Simple

Simple means intuitively usable. The prerequisite for this is that the users’ skills and mental models are known and taken into account. Think again of the checkout process: if it seems too complicated or incomprehensible to the user, they feel unsettled and will most likely abort the transaction.

4. Delightful

There is no universal answer to the question of when and how a customer is delighted. In the case of a shop, the customer might be delighted by on-time delivery, discounts or a simple return process. What is clear is that delight cannot be grasped without user feedback. It cannot be designed on the “drawing board”. User research is needed to turn general assumptions into real understanding.

Benefits and use of the MVP approach

The four points described above make it clear that the core features of an MVP must take all four dimensions into account to ensure that a product finds approval. The relative importance of the individual dimensions is then prioritised.
If all dimensions are considered and sensibly weighed against each other, the benefits of this approach are obvious:

  • The users’ real needs are addressed.
  • The risk of developing past the users’ needs is minimised.
  • Development times and time-to-market are shortened.
  • An iterative and adaptive approach saves time and money.

An MVP only makes sense for products developed in a user-centred way. It is primarily decisive for new products or business models that are to be successfully positioned and anchored in the digital world and brought to market quickly. But this approach is also useful for very complex projects, to narrow down the scope of a project phase, or when no knowledge or data about users’ behaviour and preferences is available yet.

Stumbling blocks in MVP development https://www.liip.ch/fr/blog/stolpersteine-bei-der-mvp-entwicklung https://www.liip.ch/fr/blog/stolpersteine-bei-der-mvp-entwicklung Fri, 23 Mar 2018 00:00:00 +0100 The technology lens

The danger (especially during sprints in an agile process) is that the focus lies too heavily on technical development. This happens particularly when it comes to detailed work. The users’ needs then fade into the background and the focus shifts to the technical implementation.

A good project setup provides a remedy: defining personas and user journeys, and later continuously revisiting, reflecting on and refining them, helps to develop products that are valued. This means that user experience designers, as “advocates of the users”, are involved during the development and implementation phase as well. As experts, they can provide valuable input or define suitable tools and (test) methods to verify (partial) results.

mvp-pyramide

Jussi Pasanen, after Dan Olsen’s “The Lean Product Playbook” and Aaron Walter’s “Designing for Emotion”

The yes-but syndrome

Another barrier within the project team (consisting of the client and the implementation team) is room for interpretation: even when the users’ needs are clear, there are different ways of serving and implementing them digitally. Differing opinions within the team and between disciplines lead to discussions that result in different prioritisations of the features. This makes it harder to concentrate on core features and their design.

The way out is to involve the users and test options, so that decisions are based on actual insights rather than opinions. Usually this is easily possible with, for example, wireframes, paper prototypes or A/B testing.
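To make this concrete, here is a minimal sketch, in TypeScript and not taken from the original article, of how users could be assigned deterministically to one of two design options and how results per option could be counted; the experiment name, user ID and variant labels are purely hypothetical:

    // Minimal A/B assignment sketch (hypothetical names, not from the article).
    type Variant = "A" | "B";

    // Simple 32-bit rolling hash, so the same user always gets the same variant.
    function hashCode(s: string): number {
      let h = 0;
      for (let i = 0; i < s.length; i++) {
        h = (h * 31 + s.charCodeAt(i)) | 0;
      }
      return Math.abs(h);
    }

    function assignVariant(userId: string, experiment: string): Variant {
      return hashCode(`${experiment}:${userId}`) % 2 === 0 ? "A" : "B";
    }

    // Count impressions and conversions per variant, to decide with data, not opinions.
    const stats: Record<Variant, { shown: number; converted: number }> = {
      A: { shown: 0, converted: 0 },
      B: { shown: 0, converted: 0 },
    };

    const variant = assignVariant("user-42", "checkout-layout");
    stats[variant].shown++;
    // ... render design option A or B, then on a successful checkout:
    stats[variant].converted++;
    console.log(variant, stats[variant]);

Hashing the user ID means a returning user always sees the same variant, which keeps the comparison between the two options consistent.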

Business-goal tunnel vision

For clients it is difficult to make the shift in perspective from an internal, company view to an external, customer view.

For example, a company sets itself the goal of collecting data about its users: it wants to learn as much about them as possible. Technically this is no problem: for the call to action, e.g. the invitation to subscribe to a newsletter, it simply means implementing a few more input fields besides the e-mail address. At the same time, however, these fields are a hurdle that keeps the user from registering. It is important to consider carefully when and where data is requested and whether the user is willing to give it away. Effort and reward must be in balance: why should I give away my phone number for a newsletter? Oh no, are they going to call me? Conversion is highest when a form is as simple as possible: “Every time you cut a field or question from a form, you increase its conversion rate” (Nielsen Group).
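As an illustration, here is a minimal TypeScript sketch (not from the original article; the field names are hypothetical) of a newsletter signup that requires nothing but the e-mail address and treats everything else as optional:

    // Newsletter signup sketch: only the e-mail is required, every further
    // field is optional, because each extra field costs conversions.
    interface SignupRequest {
      email: string;   // the one required field
      name?: string;   // optional, could be collected later
      phone?: string;  // deliberately not required ("are they going to call me?")
    }

    function validateSignup(req: SignupRequest): string[] {
      const errors: string[] = [];
      if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(req.email)) {
        errors.push("Please enter a valid e-mail address.");
      }
      return errors; // missing optional fields are never an error
    }

    const errors = validateSignup({ email: "jane@example.com" });
    console.log(errors.length === 0 ? "subscribed" : errors.join("\n"));

If the company later needs more data, it can ask for it after the signup, once the user has already converted.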

It is important to keep reconciling the company’s intentions with the users’ needs. A user-centred process ensures that the user is not forgotten. Business goals and user needs must be balanced and brought into harmony again and again.

The budget worry

It happens that iterations are feared or unwanted, and the prevailing wish is to define the right solution ad hoc. The implicit worry behind this is “burning” budget when things are “taken up twice”.

Due to the complexity of digital products and applications, it is usually difficult to know and identify all components at the start of a project. Weaknesses often only surface in the course of the project. Ad hoc is therefore usually wishful thinking anyway.

During the development process, however, it is easy to eliminate weaknesses with the help of user feedback. If they are only discovered later, the effort is many times higher. The MVP approach presupposes a certain tolerance for mistakes. Iterations are to be understood as positive, not as blockers. They do not waste resources; they generate added value and output.

Development without obstacles

How to avoid the stumbling blocks mentioned above:

  • Avoiding the technology lens: a user-centred process, with personas and user journeys revisited and refined throughout the entire project.
  • Avoiding the yes-but syndrome: all parties need a basic willingness to iterate. Instead of yes-but, it is more useful to argue yes-and. That way, problems are discussed constructively and “opinion fronts” are avoided.
  • Avoiding business-goal tunnel vision: jointly prioritising features and related tasks on the basis of user insights creates confidence and enables a focused approach that serves the business goals. It prevents business goals and user needs from drifting apart.
  • Avoiding the budget worry: a mindset that allows mistakes and accepts iterations as profitable and purposeful.
How to make customers happy? Start with your (internal) processes. https://www.liip.ch/fr/blog/how-to-make-customers-happy-start-with-your-internal-processes https://www.liip.ch/fr/blog/how-to-make-customers-happy-start-with-your-internal-processes Thu, 22 Mar 2018 00:00:00 +0100 Forget your fancy new product idea.

When talking about improving customer satisfaction, companies often describe the fancy new product they are about to design: the one that is meant to boost customer satisfaction and sales like a miracle. But is the lack of this new product really the source of unhappy customers?

The biggest obstacles for customer friendliness are (internal) processes.

As a Service Designer at Liip, I do a lot of user research in my projects to find out what makes customers unhappy and how we can solve it. In most cases, I encounter difficult, complicated or non-transparent processes as the biggest pain points. Customers feel that they have to make too much of an effort to get their problem solved. Often it doesn’t seem clear what to do next, or they are redirected many times and have to tell the same story over and over again.
These problems typically result from (internal) processes that don’t suit the customers’ and employees’ needs. At the same time, they have a big impact on customer satisfaction and on the way customers talk about a company.

Internal processes often seem complex and difficult to change.

In my projects, I see that many people don’t dare to touch these processes, even though they realise something is not working well. Why? Typically, the problems have many different causes: a variety of processes and systems are affected, and they can’t be assigned to just one department or one person’s responsibility. So who should take care of them? Who feels responsible for changing something? This threatens to become expensive and complicated. Sounds a bit like Pandora’s box, right?

But the cost of doing nothing is high, too.

People often forget that doing nothing is expensive and complicated too. Unhappy customers who spread bad word of mouth or don’t buy again can have a big impact on a company’s revenue. Handling customer enquiries also costs a lot of money, especially when the internal processes for handling them are complicated as well. And last but not least: the impact of unhappy employees on a company’s performance is not to be underestimated.

Align the customer experience with what happens behind the scenes.

align-what-happens-behind-the-scenes-with-the-customer-exper

So in order to improve customer satisfaction, it’s time to pay attention to user-friendly and efficient services. It’s about aligning the customer experience with what happens behind the scenes, from internal processes to tools and systems.

But how do we get there? And how do we avoid getting lost in complexity, especially when the service touches many different processes, systems and departments? Service Design provides a lot of useful answers to these questions.

How to design user-friendly and efficient services in 9 steps

  1. Have a clear mission:
    At the beginning of every project, I work with the team on creating a clear mission and a common understanding of where we are going, like a lighthouse that helps us keep our orientation along the way.
  2. Understand the problem in all its aspects before working on the solution:
    In my opinion, the most important part of creating useful new services is to have a clear, overall understanding of where exactly the issues are, from the users’ needs to the company’s goals and problems. And very importantly: based on data, not assumptions.
  3. Start with the users’ needs, not with what your tools allow.
  4. Focus in ideation:
    It requires some discipline not to ideate on whatever seems cool. But focusing clearly on solving exactly the problems the team encountered is crucial in order not to get lost in complexity.
  5. Prototype ideas at an early stage:
    The clear common understanding of what an idea consists of is extremely valuable.
  6. Test continuously:
    The more feedback we get, the better. It helps us discover at an early stage whether we are on the right track.
  7. Implement step by step:
    Implementing one idea after the other helps to get things done and not get lost in more measures than we can cope with. Improving services is often about continuously implementing a set of measures in order to fulfil one big long-term mission. Agile methods such as Scrum support this way of working perfectly during implementation.
  8. Think big, but start small:
    Sometimes even small changes are promising.
  9. Evolve:
    Projects are never done at go-live. They just enter a new phase: the one where our work is really put to the test by the mass of users. Every new learning helps us to continuously improve the service.

What are your experiences with designing better services and processes? What was hard, what worked well? Let me know by leaving a comment.
