Experiencing digitalisation: EKZ Prototyping Workshop
https://www.liip.ch/fr/blog/digitalisierung-erleben-ekz-prototyping-workshop
Tue, 11 Dec 2018 00:00:00 +0100

The Workshop

Two weeks ago we ran our first 3.5-hour prototyping training with employees of EKZ. The fictional task we set in the workshop was to develop an app that supports users in planning a holiday, during the trip or after it. The task was deliberately open-ended. We roughly predefined the primary persona: a young couple, a solo traveller, a family or a member of a group.

In the first step, we asked the four groups to pick a persona, and with it one of the given settings, and then to flesh the persona out: What kind of trip is the persona planning? Who is travelling with them? What needs do they have around planning or taking the trip? What difficulties do they run into along the way?

Each group then developed a user journey for its persona, recording which activities the persona wants to carry out and in what order. Based on the persona and the sketched user journey, the participants decided which needs the app should serve and which features are suited to serving them.

Finally we came to the hands-on part of the workshop and drew the app as user flows and screens: a simple paper prototype. The teams presented their prototypes to each other and gave feedback.

We are thrilled, and think great ideas came out of it: an app that supports a traveller with a dog hiking the Way of St. James. Besides dog-specific information it offers orientation and help, while also giving hints on how to keep to yourself and walk the route as much on your own as possible. Or an app that lets EKZ employees put together a team for an Ironman. The app coordinates the participants and their training. It also has something to offer after the event: a photo gallery and a way to register again, because after the event is before the event...

Our learnings

When planning, we were not 100% sure whether the user-centered design process could be run through in such a compressed form. The jump from an activity (user journey) to a functionality (paper sketch of an app screen) also seemed quite big to us. But it worked remarkably well. The reason is that all participants were willing to engage in the experiment of user-centered thinking and sketching by hand.

It was lovely to see that even in teams that do not work on digital products every day, ideas emerge in a short time and are brought to life in simple sketches.

One wish we had not quite anticipated, but will take into account for our next workshop, is the interest in technology: next time we may show a landscape of common tools, beyond pen and paper, for building prototypes as well as apps. We will think about it. It was a very instructive and exciting evening for us too! As David L. Rogers says, "Constant learning and the rapid iteration of products before and after their launch date, are becoming the norm." (cf. David L. Rogers, The Digital Transformation Playbook). A heartfelt thank you to Mr. Hauser, who gave us the opportunity to run the workshop at EKZ and to learn!

Entity Explorer, your trusty helper 🐕
https://www.liip.ch/fr/blog/entity-explorer-your-trusty-helper
Wed, 28 Nov 2018 00:00:00 +0100

When you are building complex Drupal websites you rely on entities, and they are often entities within entities, linked to entities, and so on.

When you need to debug data deep down in the persistence layer, be it in a moderation state, language revision or somewhere else because Drupal gives you an inconsistent response, where do you begin?

The debugger?

A great start is setting a breakpoint instead of just trying to observe the data structure as it passes through your call with kint(), print_r() or the like.

Xdebug is certainly a very good route to discover the state and properties of a specific entity. However, it is often insufficient to debug complex entity relationships in an efficient manner: Child entities might only be visible as their target ID or dynamic properties are empty because they have not been built yet.

The database?

Another place to look is the database, and that’s fine. It is of course the final say on the persistence layer but has significant disadvantages: writing joins by hand to discover the linkages of fields and tables is at best cumbersome and downright annoying when you are jumping from entity to entity. Also, reusing and managing such query snippets is not easy.

Entity API?

So your next step would likely be to write a custom script and make use of entityTypeManager. It can answer most complex queries and if you have done a little bit of Drupal 8 development you’ll likely already have come across it. Just select the desired storage, fetch a query, add your conditions and you can access the relevant revisions, their fields, and properties with ease. In some cases you might need to include some help from EntityFieldManager.
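
As a refresher, such a custom script can look like this (a minimal sketch, not from the post; the bundle and field names are hypothetical):

$storage = \Drupal::entityTypeManager()->getStorage('node');

// Query all revisions, not just the default ones.
$revision_ids = $storage->getQuery()
    ->condition('type', 'article')   // hypothetical bundle
    ->condition('status', 1)
    ->allRevisions()
    ->execute();

// Keys are revision IDs, values are entity IDs.
foreach ($revision_ids as $vid => $nid) {
    $revision = $storage->loadRevision($vid);
    $title = $revision->get('title')->value;
}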

Drupal 7 hint: If you are stuck on a site you can’t migrate just yet, you can still make your life easier by using entity_metadata_wrapper() and EntityFieldQuery. You don’t have to live with stdClass objects and raw database queries.
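
For illustration, a minimal Drupal 7 sketch (assuming a node bundle 'article' and a field 'field_subtitle', both hypothetical):

$query = new EntityFieldQuery();
$result = $query
    ->entityCondition('entity_type', 'node')
    ->entityCondition('bundle', 'article')
    ->propertyCondition('status', NODE_PUBLISHED)
    ->execute();

if (!empty($result['node'])) {
    foreach (node_load_multiple(array_keys($result['node'])) as $node) {
        // Structured access instead of $node->field_subtitle[LANGUAGE_NONE][0]['value'].
        $wrapper = entity_metadata_wrapper('node', $node);
        $subtitle = $wrapper->field_subtitle->value();
    }
}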

However, you’re still writing a lot of common queries repeatedly by hand, when you just want something more efficient than digging through revisions in the UI for a particular problem.

Entity Explorer

Entity Explorer is a simple Drupal Console command which uses just a handful of entityTypeManager calls to build an efficient overview of an entity across languages and revisions, including child elements if needed:
$ drupal entity_explorer node 16287 47850 # Arguments: type, id, revision (optional)

The first example shows the translation revision of a node with the IDs of embedded entities; use --all-fields to recursively show their details (screenshots in the original post).

Check it out:

composer require drupal/entity_explorer:~1.0

drupal.org/project/entity_explorer

My Learnings from Product Management Festival 2018 - Day 1
https://www.liip.ch/fr/blog/my-learnings-from-the-product-management-festival-2018-day-1
Tue, 27 Nov 2018 00:00:00 +0100

Product Management is a set of hard and soft skills that, when mastered, helps bridge the gaps within the triangle of business, UX and tech.
This know-how helps me build the right products for our customers and their end users, and not just build them right using Scrum.

I went to the Product Management Festival 2018 edition last week, in order to learn about the current best practices in the field.

Below are the notes and learnings I took away.

Power of transparency

By Christian Sutherland-Wong, COO at Glassdoor

There are two things that stuck with me after the talk:

  1. Glassdoor wishes that every company would be more transparent about salaries. But internally, they don't openly share them. We made the move to full salary transparency here at Liip. I can't help but recommend every company to do the same, as it only brings more fairness. In the short term, there are painful discussions to have. But it pays off in the long run, as people with salary issues get to understand why, and are then able to improve. Hopefully Glassdoor makes this move soon too.

  2. Christian also talked about monetization. As an entrepreneur myself, that's a topic I love to discuss, as it's so challenging to find ways to make money that are a win-win for the entrepreneur and the client/user. As the saying goes: "Revenue is like oxygen for a startup. We don't live for it, but we need it to live".
    I disagree with him on one point: he explained that monetizing vanity was one key way to make money from their product at Glassdoor.
    My (potentially hasty) conclusion from this statement is that it sounds like the most economically viable solution. I'm challenging it because, as a CEO or founder, I would focus on finding better value-for-money strategies. I don't think exploiting such a human trait fosters better behaviours. I prefer having fewer paying customers who get real value from the product. And there is certainly good value in Glassdoor, which gives employees the same access to information that companies have when they interview you.

I asked Christian to learn more about his opinion. You can follow the discussion here.

"Power of transparency" speech by Glassdoor COO
"Power of transparency" speech by Glassdoor COO

Startup versus Enterprise Product Management

By Tom Leung, Director of PM at YouTube

This talk resonated a lot with me, as I feel that Liip and our self-organization model provide the best of both worlds: having your own startup, within the comfort of a company with 180 employees.

Here is what I learnt (or refreshed) that is applicable to any product or project you could have, independently of your company's size:

  • The only thing that matters is product-market fit and repeatable growth, to sustain the business in the long run. Not your big media coverage, nor your invitation to talk at that big conference. The latter are only vanity metrics, which should come as results, not as objectives in their own right.
  • Would you use your product if you weren't in this company? If not, then you're probably building the wrong thing. Or at least it will be harder for you to stay motivated month after month.

From Tom's experience, if you go for a big corporate career:

  • You'll have to balance the "Just ship it and see what happens" mindset with global-scale impact (you can't have YouTube go down every week)
  • You'll need to accept and cope with some (long) discussions to convince and rally people to follow you (aka politics). Otherwise you're better off staying at your startup.
  • You'll have to motivate teams and earn credibility — you don't have any CEO nor founder title to help you there
  • In certain "modern" corps like Google or Spotify, you may be lucky enough to get small cross-functional teams with a lot of flexibility
  • Startup veterans are like wild horses, and it's very hard for true entrepreneurs to go to a big corporation. Often, after acquisitions, you see founders leaving. Not because they don't fit. Just because they need to move on and craft their next thing.

Regarding the startup career:

  • Startups move fast. Every day. And that's a drug that's hard to give up once you've tasted it.
  • When you're in a startup that grows, you quickly have to take care of management, marketing, and all sorts of boring stuff. Make sure to keep the biggest chunk of your time dedicated to your product. Because that's what made your company in the first place. Easier said than done :)

Tom closed the talk with 3 skills a PM needs to have:

  • Great judgment: being able to take imperfect information and make a call. And being right often with such calls
  • Delivering results: "This product growth happened because this PM was in the team"
  • Customer insights: go talk to customers. Go in the street. Go out of the building. Do whatever it takes but talk to real customers

This led me to continue the discussion with Tom after his session:

How a Product Manager can hone his judgment skills

Viral loop

By John Koenig, Senior Product Manager Growth at Typeform

This was mostly a refresher for me, but it was still interesting to see how they apply it at Typeform.

One of the tricks to ensure the self-fueled growth of a product is to make use of a viral loop.
A viral loop relies on the viral coefficient. Put simply, it's the number of new users that an existing user generates. A 0.3 coefficient means that every 10 users bring in 3 new users. If you get above 1.0, you get exponential growth.
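
To make the arithmetic concrete (my addition, not from the talk): with u_0 initial users and a viral coefficient k, each cycle of the loop adds k times the users of the previous cycle, a geometric series:

u_{\text{total}} = u_0 \sum_{i=0}^{n} k^i = u_0 \cdot \frac{1 - k^{n+1}}{1 - k} \;\xrightarrow{\,n \to \infty,\ k < 1\,}\; \frac{u_0}{1 - k}

So with k = 0.3, 10 initial users top out at roughly 14 users in total; for k > 1 the series diverges, which is the exponential growth John mentioned.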

John explained the viral loop model he uses at Typeform:

  1. Signup: creating a Typeform account requires only 3 fields, for the lowest friction possible.
  2. Motivation: here comes their value proposition, which is sharing a Typeform form. To support this, they provide plenty of options so that you, as a customer, find a way to share your form and don't leave the platform at that point.
  3. Share: the nice fact is that sharing can be part of your product, like Typeform, whose forms are meant to be shared to collect responses. That surely helps to develop your viral coefficient.
  4. Impression: finally, when using the product, there are some tricks to leave your name around. Like the Intercom widget design which, even when customized, is recognizable. Or Kickstarter, which you may use in 3-5 years when you suddenly get the idea for a new product to be backed. At Typeform, their easily identifiable user interactions and the logo at the bottom of each form play this role.

Once this loop is completed by one user, the newly acquired customers fuel the viral loop themselves.

GIST planning

By Itamar Gilad, Product, Strategy and Growth Consultant, Ex-Google PM

GIST stands for Goals, Ideas, Steps, and Tasks.
It's a product framework Itamar came up with while he was a PM at Google. It mixes Lean Startup and Agile concepts to align people from top management to delivery teams.

It aims to avoid the usual pattern: define the strategy, build the roadmap, split it into project plans, then run the (Agile) execution in a waterfall-ish way.

Below are the key elements of GIST planning:
Goal
You define goals using the OKR system for the next quarter. They are easy to adapt on the go, easy to agree on between stakeholders, and transferable/visible to anyone in the organization.

Idea
Below the "Goal" timeline, you then have to fill your tank with ideas to accomplish this objective. The more you fill the tank the better, knowing that only 1/3 of ideas are good at supporting objectives. Afterwards, you evaluate your ideas against a prioritization-by-evidence mechanism, like the ICE one (standing for Impact, Confidence, and Ease) popularized by Sean Ellis.

Step
Once you've found your next idea to validate, you go into a 10-week block of experiments, as in the Lean Startup methodology.

Tasks
Each experiment above can be seen as a Scrum sprint or can be detailed using Kanban. You can use whatever tool you're used to working with for this.

As you can see, compared to a waterfall plan followed by Agile iterations, the whole system iterates as one.
That's somehow how we operate at Liip, but a lot less formally. I will apply it on a small-scale project to see how it works, and experience the pros and cons.

For a mental-model addict like me, it was quite an interesting session.

If you wanna learn more about GIST, I recommend this read.

GIST planning overview

The anatomy of influence power

By Stephanie Judd and Kara Davidson (both founders of Wolf and Heron)

The definition of influence is to "change how people think and act".
After this first sentence from Stephanie, my first reaction was "No thanks, I don't wanna become one of those manipulative people."
My second reaction was: "Well, I influence my kids every day for educational purposes. I also influence my clients to show them how great Lean Startup, Agile, and self-organization principles are. So it may not be that bad to learn more about this topic."

Stephanie and Kara then explained that influence is made of two things: power (given to you, and situation-agnostic) and pathway (which is situation-specific, depending on which path you choose). They focused the rest of their talk on the power part, as it's situation-agnostic and thus applicable to the largest number of people in the room.

Power has 9 main sources:

  1. Expertise: what you know and can do
  2. Title: formal role and authority, as a proxy of your influence
  3. Likeability: the first impression that other people have of you (posture, physical appearance, sense of humor)
  4. Familiarity: your history and closeness with a specific person, gained through shared experiences
  5. Network: some tips are to focus on diversity and to do small favors, paying it forward without expecting rewards
  6. Communication: be precise, be focused on the conversation (no smartphone on the table when talking to people)
  7. Reputation: what people say when you're not there. The best way to score points is honesty. With everybody.
  8. Resources: aggregate, organize, and redistribute them in your own way to add value (what I try to do with this blogpost, commenting on and challenging ideas that I learnt)
  9. Grit: the desire and courage to act. It's an accelerant for all the points above. Everyone has it. It's a muscle. The more you train it, the better you get at it. Make sure you know your why first, then act on it. Relentlessly.

That's another mental model of which power sources I can activate to better influence people.
As with any power source, it's better and stronger when balanced. By the way, their company Wolf and Heron provides a "Personal Influence Diagnostic" on their website if you're interested in developing this part of yourself.

Ah, and don't forget, if you change how people think and act for bad reasons (including selfish ones): that's manipulation. You don't wanna do that.

The cover of W&H Personal Influence Diagnostic

Building for the Next Billion Users

By Minal Mehta (Head of Product at YouTube, Emerging Markets)

The famous NBU, as they now call it. The Next Billion Users. To be frank, I feared I would only hear the message "We at Google will change every single part of the world". I was surprised by Minal's honesty when, during the Q&A, someone asked why big companies didn't ally to have a bigger impact. Her answer: "NBU is a market, and we're doing business there, not non-profit."

Moreover, what she shared was down-to-earth and aligned with my beliefs.
Before listing my takeaways, here are my notes about the big market that is India, which global corporations need to understand as it is rising strongly.

  • People in India are way more social than we are. So much so that they share their own phones.
  • Data usage is getting cheaper and cheaper, connecting a lot more people to the Internet.
  • Feature phones (think Nokia 3310 but with a decent web browser) are kings due to their price. That's one of the reasons behind the launch of Android Go. Thus you need to think about interfaces and interactions differently if you target this market.
  • As Minal confirmed with her research on the ground, Indian people consume content for the same reasons as the rest of the world: messaging, getting info, and entertainment. And they expect the same quality of UX as Western mobile users.

Again, if you build a local product for Swiss people, that may not be interesting. On the other hand, if your digital product is useful worldwide, you had better start planning for the markets where an additional billion users will progressively get access to your tools. Nothing groundbreaking, but worth the reminder.

What resonated the most with me in this talk are the five lessons Minal learnt on how to build great teams and stay connected with them, so as to overcome project problems in the best way possible.

Lesson #1: find people who believe in the product you're building. Additionally, make sure they're comfortable with ambiguity, that they can manage their energy, and that they want to draft with their teammates

Lesson #2: rally your team around the user and the specific needs you're solving. This point is critical to me, and it proved to be a game changer every time I applied it, by asking our client to give the entire Liip team a physical tour of their company and activities before starting the project.

Lesson #3: create a culture of psychological safety. Leave room for failure, without stress, and most importantly for being oneself. I could write for hours about the impact of such a culture, as it makes us thrive both socially and economically at Liip.

Lesson #4: leave the building to connect with your team. We like to do project retrospectives at a café (which happens to be a bar too :)) near our Lausanne office. The place and its atmosphere have a big, positive impact on the outcomes. I recommend you try it out.

Lesson #5: always celebrate your wins. Always. One simple way I organize this is to set a calendar reminder one month before the planned launch to schedule a "Celebration lunch". That helps everyone see the go-live as something positive, not stressful. Obviously, you need to celebrate smaller wins along the way too.

Next Billion Users speech by Minal Mehta
Behind the scenes of the launch of the new Compex Coach mobile app
https://www.liip.ch/fr/blog/secrets-behind-the-launch-of-the-new-compex-coach-mobile-app
Fri, 16 Nov 2018 00:00:00 +0100

Last June, we started building a second mobile application with Compex (a brand of the DJO Global group). Compex is a leader in the field of muscle stimulation for sports, both for performance and for recovery. The mobile app was designed to help users of Compex devices reach their goals (performance, recovery, pain management, etc.), options the devices themselves do not offer.

Devices built to last

At Liip, we love products that are built to last, not to push users to consume more. Compex really won us over with its vision for the mobile app. Compex wanted an app that would extend the lifespan of its devices. By putting the evolving component (the software) in the smartphone instead of the device, Compex guarantees an experience that keeps evolving, while increasing the longevity of its devices (renowned for their sturdiness in the sports market).
Many users are still very happy with their Compex device from 2014. Compex keeps supporting them. And that delights us.

Compex Coach app listing the products of Compex

Choosing your device from the Compex range

Added value for users

When Matteo Morbatti, Senior Product Manager at Compex International, told us about his idea of replacing Compex's paper user manuals with a mobile app, we were a little sceptical. We didn't want to build an app just for the sake of having one.
He then explained that his idea answered a real need. He wanted to create more than a mere PDF in a mobile app. The answer to the question "What do I do now that I've bought my device?" gave us the solution: an app that guides users from the moment they unbox their product.
That is how we defined the structure of the digital tool. Users first indicate the body part they want to strengthen or develop. The tool then asks "What is your goal?", drawing on all the essential information from the user manual. "The right action at the right time", as we like to say in our mobile world.
So that users get real value out of it.

Features of the Compex Coach app

The right action and the right information at the right time

"Compex a identifié un besoin et a décidé d’y répondre en créant une application: Compex Coach. Pour ce projet, Compex s’est tournée vers Liip et ne s’y est pas trompée. Ayant personnellement participé au développement de ce projet, je peux vous assurer que la collaboration a été fructueuse et motivante tout au long de ce projet. L’équipe de projet de Liip a fait part d’un grand professionnalisme et a livré l’application dans le respect des délais et du budget impartis" Matteo Morbatti, International Product Director at DJO Global/Compex

Respectful marketing

Compex also has to sustain its business and retain its customers. One of its commercial goals was to collect users' email addresses, in order to send them useful information about using the products.
Rather than adding an email field that is mandatory to use the app, Compex went further. It chose an option that benefits everyone, offering a 1-year warranty extension and a discount on products that regular Compex users will need.
Most importantly for us at Liip, both options are optional. If users refuse to share their email address, they can still use the app for free. And if they accept, they get something in return. The final choice is left to the user.

Reviews from the Play Store with a 4.5 rating

The app's rating on Google's Play Store when you bring value to people's lives

What's next?

Start small and iterate: that's one of our core philosophies at Liip. Matteo and the Compex team share the same mindset. This allowed us to launch this new product in less than 3 months. We could have built a cloud connection (to let users save their goals online) or provided an updated version of the user manual. But we preferred to build a minimum viable product and launch it. This let us learn from end users' feedback as early as possible and take it into account in the next version, while building the cloud part of the solution.
The ratings given by Compex customers confirm that this is the right way to build products (4.2/5 on Apple's App Store and 4.5/5 on Google's Play Store). Simply a minimum viable solution at first. And we look forward to increasing their satisfaction further with new features!

PSR-18: The PHP standard for HTTP clients
https://www.liip.ch/fr/blog/psr-18-php-standard-for-http-clients
Fri, 16 Nov 2018 00:00:00 +0100

First, PSR-7 "HTTP message interfaces" defined how HTTP requests and responses are represented. For server applications that need to handle incoming requests and send a response, this was generally enough. The application bootstrap creates the request instance with a PSR-7 implementation and passes it into the application, which in turn can return any instance of a PSR-7 response. Middleware and other libraries can be reused as long as they rely on the PSR-7 interfaces.

However, sometimes an application needs to send a request to another server. Be that a backend that communicates over HTTP, like Elasticsearch, or some third-party service like Twitter, Instagram or a weather API. Public third-party services often provide client libraries. Since PSR-17 "HTTP Factories", this code does not need to bind itself to a specific implementation of PSR-7 but can use the factory to create requests.

Even with the request factory, libraries still had to depend on a concrete HTTP client implementation like Guzzle to actually send the request. (They can also do things themselves at a very low level with curl calls, but this basically means implementing your own HTTP client.) Using a specific implementation of an HTTP client is not ideal. It becomes a problem when your application uses a client as well, or when you combine libraries that use different clients - or worse, different major versions of the same client. For example, Guzzle had to change its namespace from Guzzle to GuzzleHttp when switching from version 3 to 4 to allow both versions to be installed in parallel.

Libraries should not care about the implementation of the HTTP client, as long as they are able to send requests and receive responses. A group of people around Márk Sági-Kazár started defining an interface for the HTTP client, branded HTTPlug. Various libraries like Mailgun, Geocoder or Payum adapted their HTTP request handling to HTTPlug. Tobias Nyholm, Márk and myself proposed the HTTPlug interface to the PHP-FIG, and it was adopted as PSR-18 "HTTP Client" in October 2018. The interfaces are compatible from a consumer perspective. HTTPlug 2 implements PSR-18, while staying compatible with HTTPlug 1 for consumers. Consumers can upgrade from HTTPlug 1 to 2 seamlessly and then start migrating their code to the PSR interfaces. Eventually, HTTPlug should become obsolete and be replaced by the PSR-18 interfaces and HTTP clients directly implementing those interfaces.

PSR-18 defines a very small interface for sending an HTTP request and receiving the response. It also defines how the HTTP client implementation has to behave in regard to error handling and exceptions, redirections and similar things, so that consumers can rely on reproducible behaviour. Bootstrapping the client with the necessary setup parameters is done in the application, which then injects the client into the consumer:

use Psr\Http\Client\ClientInterface;
use Psr\Http\Client\ClientExceptionInterface;
use Psr\Http\Message\RequestFactoryInterface;

class WebConsumer
{
    /**
     * @var ClientInterface
     */
    private $httpClient;

    /**
     * @var RequestFactoryInterface
     */
    private $httpRequestFactory;

    public function __construct(
        ClientInterface $httpClient,
        RequestFactoryInterface $httpRequestFactory
    ) {
        $this->httpClient = $httpClient;
        $this->httpRequestFactory = $httpRequestFactory;
    }

    public function fetchInfo()
    {
        $request = $this->httpRequestFactory->createRequest('GET', 'https://www.liip.ch/');
        try {
            $response = $this->httpClient->sendRequest($request);
        } catch (ClientExceptionInterface $e) {
            // Chain the PSR-18 exception instead of passing it as the message.
            throw new DomainException('Could not fetch info', 0, $e);
        }

        $response->...
    }
}

The dependencies of this class in the "use" statements are only the PSR interfaces, no need for specific implementations anymore.
Already, there is a release of php-http/guzzle6-adapter that makes Guzzle available as a PSR-18 client.
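
Wiring this up could look like the following (a minimal sketch, not from the original post; it assumes the Guzzle 6 adapter and nyholm/psr7 as the PSR-17 factory implementation):

use GuzzleHttp\Client as GuzzleClient;
use Http\Adapter\Guzzle6\Client as PsrClientAdapter;
use Nyholm\Psr7\Factory\Psr17Factory;

// Application bootstrap: configure the concrete client once, at the edge.
$guzzle = new GuzzleClient(['timeout' => 5]);

// The adapter implements Psr\Http\Client\ClientInterface (PSR-18);
// Psr17Factory implements Psr\Http\Message\RequestFactoryInterface (PSR-17).
$consumer = new WebConsumer(new PsrClientAdapter($guzzle), new Psr17Factory());
$consumer->fetchInfo();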

Outlook

PSR-18 does not cover asynchronous requests. Sending requests asynchronously allows several HTTP requests to be sent in parallel, or lets you continue with other work and wait for the result later. This can be more efficient and helps to reduce response times. Asynchronous requests return a "promise" that can be checked to see whether the response has been received, or waited on to block until the response has arrived. The main reason PSR-18 does not cover asynchronous requests is that there is no PSR for promises. It would be wrong for an HTTP PSR to define the much broader concept of promises.

If you want to send asynchronous requests, you can use the HTTPlug Promise component together with the HTTPlug HttpAsyncClient. The guzzle adapter mentioned above also provides this interface. When a PSR for promises has been ratified, we hope to do an additional PSR for asynchronous HTTP requests.
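
For illustration, here is a minimal sketch of an asynchronous request with HTTPlug (assuming any client implementing HttpAsyncClient, such as the Guzzle adapter):

use Http\Client\HttpAsyncClient;
use Psr\Http\Message\RequestFactoryInterface;
use Psr\Http\Message\ResponseInterface;

function fetchAsync(HttpAsyncClient $client, RequestFactoryInterface $factory)
{
    $request = $factory->createRequest('GET', 'https://www.liip.ch/');

    // Returns immediately with an Http\Promise\Promise.
    $promise = $client->sendAsyncRequest($request);

    $promise->then(
        function (ResponseInterface $response) {
            // Called once the response has arrived.
            return $response;
        },
        function (\Exception $exception) {
            // Called when the request failed.
            throw $exception;
        }
    );

    // ... do other work here ...

    return $promise->wait(); // block until the response (or an error) arrives
}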

How to define your brand voice in 3 steps
https://www.liip.ch/fr/blog/define-your-brand-voice
Tue, 13 Nov 2018 00:00:00 +0100

For this copy update project, we aimed to:

  • ensure the quality and consistency of the copy across the different sections of the website,
  • make the editorial team's work easier in the long run.

In this article, we explain the method we used for this project. We proceeded in steps:

  1. define OCN's identity,
  2. express OCN's identity in a form that is easy to understand and share,
  3. define the foundations of OCN's voice.

Defining a company's voice lets an editorial team know how to convey the company's identity in writing.

1 - Define the company's identity

What does this mean?

A company's identity is the foundation of its voice. If the company's identity is clearly defined, the voice is easy to define too.
A company's identity is its reason for being: why do you do what you do, beyond financial gain? Simon Sinek's video Start with Why clearly explains the importance of this question.

“Your company has a purpose beyond the money you make, beyond the things you do. The better you put it into words, the better you can see it - we can only see what we have put into words. Once you have it into words, others can see it and focus all of their efforts into making it happen. It makes work unfulfilling when we don’t know what we are working towards.“
Simon Sinek, excerpt from the Start with Why talk.

The OCN example

Legal texts define OCN's functions and services. A charter defines its notion of public service.

Based on OCN's internal charter, we formulated questions such as:

  • why does OCN admit drivers and vehicles to road traffic?
  • why does OCN admit boaters and boats to navigation?
  • why does OCN organise prevention courses and campaigns?
  • why does OCN enforce administrative measures (warnings and driving licence withdrawals)?
  • why does OCN collect cantonal and federal taxes (taxes on vehicles and boats, and heavy goods vehicle charges)?
  • do these activities fall into categories?

During a workshop, we interviewed members of the OCN project team, to bring out internal tacit knowledge.

In OCN's case, we also drew inspiration from the websites of companies in the road safety field.
The OCN team shared links and screenshots of copy they found 'engaging' or, on the contrary, 'too complex'. The team explained to us what made a given piece of copy 'engaging' or 'too complex'.

How to do it yourself?

Your company's activities are clearly defined, but no document defines its identity or values? You can start from a company's activities to question its purpose or reason for being. When using Sinek's model, start by listing the how activities. Then question these activities: Why do we do this activity? Why do we do what we do? Who are we? The answers to these questions give you building blocks. You will also find elements of an answer by reading internal documents, interviewing your colleagues and comparing your company to your competitors.

Read the documents that already exist
You will find answers to the question Why do we exist? in documents called 'corporate identity', 'brand manifesto' or value charters. If these documents don't exist, interview your colleagues.

Interview your colleagues
Interview colleagues such as your company's founders and the people in contact with your customers. It is essential to capture the existing knowledge.
You can help your colleagues verbalise their ideas by asking them questions.
For example, you can ask:

  • in your opinion, what is our company's most important activity?
  • why do we do this activity?
  • why is this activity important?
  • what would it change for our customers if we stopped this activity?
  • are we leaders in our field, or rather followers?

Analyse your competitors
A competitive analysis lets you identify how you position yourself against your competitors. For example, you can run a SWOT analysis to define your weaknesses and strengths compared to your competitors. You can also ask questions like: Compared to our competitors, are we a leader or a follower?

During your analysis, you can also collect examples of communication you like or, on the contrary, want to avoid. This helps you define how you position yourself.

2 - Express the identity as a vision and a mission

What does this mean?

In step 1, you gathered key points that define your company's identity. Now the goal is to condense these key points into a vision and a mission. Vision and mission are an easy way to share the core elements of your identity.
Your vision explains who you are for your customers. Your mission explains why you do what you do.

The OCN example

We used Simon Sinek's golden circle. During a workshop, we questioned team members about OCN's activities (the How, in Sinek's terms) and the purpose of these activities (the Why).

After drafting the vision and the mission, we asked the project team for feedback, to make sure we had expressed OCN's identity correctly.

How to do it yourself?

Based on your research in step 1, write your company's vision and mission. Be precise and concise. Avoid adjectives open to interpretation, such as good, fine, nice, pretty.

Your vision expresses who you are in your field or in relation to your customers: We are the leader in..., we are partners, we are advisors, etc.

Your mission expresses what you do, beyond profit: We bring equality to..., we foster innovation in..., etc.

Ask your colleagues for feedback
Test your wording by asking your colleagues for their opinion. Ask questions like:

  • does this sentence represent our company?
  • if not, which word should we replace to represent our company?
  • does this sentence represent why we do what we do?
  • if not, which word should we replace to represent correctly why we do what we do?

If you have to explain or justify your choice of words, your wording is probably not crystal clear. Your goal is for the team to agree: 'yes, this sentence represents who we are; yes, this sentence represents what we do'.

3 - Define the basic principles of your voice

What does this mean?

Based on your vision and mission, you define with adjectives how your company expresses itself. These basic principles guide your writing.

The OCN example

For OCN, we selected 3 sets of 3 adjectives that define OCN's voice. For example:
Accessible
We use everyday language to be understood by everyone. We explain the terms and concepts we use. We support our customers with writing that allows cross-reading.

Respectful
Our language is appropriate in all circumstances, whoever we are talking to. We avoid judgemental adjectives and vocabulary loaded with alternative meanings.

How to do it?

Choose between 3 and 7 adjectives, depending on the complexity of your voice. We recommend accompanying each adjective with a short explanation. Avoid adjectives open to interpretation, such as good, fine, nice, pretty.

You can help your colleagues by suggesting adjectives, for example via post-its or a card deck. You can ask them:

  • Are we rigorous?
  • Are we avant-garde?
  • Are we quirky?

Ask your colleagues for feedback
Test your wording by asking your colleagues for their opinion. Ask questions like:
Does this adjective define the way our company, or our digital product, talks to its customers?
If not, which adjective would be right?

You are looking for collaboration and appropriate solutions.

Key points to remember

  • Your company's voice is based on your company's identity.
  • You draw on the charter, the values and your colleagues' knowledge to define your company's identity.
  • You examine how your company positions itself against your competitors.
  • You aim for collective alignment by asking your colleagues for feedback.
  • You define a set of adjectives that are the principles of your voice.

Share the love <3

Thanks to the OCN team, especially Fanny and the editorial team, for their motivation and their warm welcome!
Thanks to Yves for his collaboration and enthusiasm throughout the project!
Thanks to Darja, Jérémie and Tom for the precious advice and feedback!
Thanks Sara, who has just arrived at Liip and is already a source of motivation!

Further reading

Erika Heald, Content Marketing Institute, 5 Easy Steps to Define and Use Your Brand Voice

Kinneret Yifrah, on Medium, 6 reasons to design a voice and tone for your digital product

TEDx - make a wish
https://www.liip.ch/fr/blog/tedx
Wed, 31 Oct 2018 00:00:00 +0100

Destination Tomorrow

We have been partners of numerous TEDx initiatives in Switzerland for several years now. We supported the TEDx conferences in St. Gallen, Bern and Fribourg. TEDx is a forum designed to share "ideas worth spreading". This format, which brings together local, self-organised events in different cities, has gradually spread across Europe. At last year's TEDxHSG event "Destination Tomorrow" at the University of St. Gallen, we wanted to show how to combine innovation and digital transformation.

The wish tree

With close to 400 participants, we worked on visions of the future in seven lecture halls. The highlight of the event: our wish tree. Participants could write one or more wishes and hang them on the branches of a real tree, set up outdoors or indoors. Symbolically, the wishes grow with the tree towards the sky, entering the vast network of the universe. We were quite surprised when we then read through all these wishes!

World peace, sustainability. And a stable relationship: that is what young people wish for.

Most of the wishes were about human well-being and protecting nature. "World peace" and "green technologies" were among the most frequent messages. Most participants therefore wish for a more harmonious world and the promotion of sustainable development. Remarkable, isn't it? Other wishes were more personal, such as getting a degree, which is perfectly legitimate. One wish particularly caught our attention: "I would like a girlfriend". A phone number was attached to the message. Kudos for the courage! Unfortunately we could not make its author's dream come true, but to thank him for his boldness we sent him a trendy Sigg water bottle.

Upcoming TEDx events

Several TEDx events are planned for late 2018 and 2019. The wishes collected at TEDxHSG moved us, so we decided to recreate the wish tree in Geneva. Do people in French-speaking Switzerland have different wishes? We will find out on Wednesday 7 November at the conference Les Jours qui viennent - TEDx Genève. And we will set up our Liip bar there again. See you soon!

How a simple solution alleviated a complex problem
https://www.liip.ch/fr/blog/how-a-simple-solution-alleviated-a-complex-problem
Tue, 30 Oct 2018 00:00:00 +0100

Estimated reading time: < 5 minutes. Target audience: developers and product owners.

First, a word about software development

Over time, every piece of software goes into maintenance; minor features might get developed, bugs are fixed and frameworks get upgraded to their latest versions. One of the potential side effects of this activity is "regression": something that used to work suddenly doesn't anymore. The most common way to prevent this is writing tests and running them automatically on every change. So every time a new feature is developed or a bug fixed, a piece of code is written to ensure that the application will, from that moment on, work as expected. And if some changes break the application, the failing tests should prevent them from being released.

In practice however, more often than not, tests get overlooked... That's where it all started.

The situation

We maintain an application built with Symfony. It provides an API for which some automated tests were written when it was first implemented, years ago. But even though the application kept evolving as the years went by (and its API started being used by more and more third-party applications), the number of tests remained unchanged. This slowly created a tension, as the importance of the API's stability increased (more applications depended on it) and the test coverage decreased (new features were developed and no tests were written for them).

The solution that first came to mind

Facing this situation, my initial thoughts went something like this:

We should review every API-related test, evaluate their coverage and complete them if needed!

We should review every API endpoint to find those that are not yet covered by tests and add them!

That felt like an exhaustive and complete solution; one I could be proud of once delivered. My enthusiasm was high, because it triggered something dear to my heart, as my quest to leave the code cleaner than I found it was set in motion (also known as the "boy-scout rule"). If this drive for quality had been my sole constraint, that's probably the path I would have chosen — the complex path.

Here, however, the project's budget did not allow for such an effort. Which was a great opportunity to...

Gain another perspective

As improving the test suite was out of the picture, the question slowly shifted to:

What could give developers more confidence and trust that the API will remain stable in the long run, when changes to the code inevitably occur?

Well, "changes" is still a little vague here to answer the question, so let's get more specific :

  • If a developer changes something in the project related to the API, I trust that they will test the feature they changed; there's not much risk involved in that scenario. But...
  • If a developer changes something in the project that has nothing to do with the API and yet the change may break it, this is trouble!

The biggest risk I've identified of breaking the API inadvertently, by applying (seemingly) unrelated changes and not noticing it, lies in the Symfony Routing component, used to define the API's endpoints:

  • Routes can override each other if they have the same path, and the order in which they are added to the configuration files matters. Someone could add a new route with a path identical to an existing API endpoint's one and break the latter.
  • Upgrading to the next major version of Symfony may lead to mandatory changes in the way routes are defined (it happened already), which opens up the door to human errors (forgetting to update a route's definition for example).
  • Route definitions are located in different files and folders from the code they relate to, which makes it hard for developers to stay aware of the relationship.

All of this brings fragility. So I decided to focus on that, taking a "Minimum Viable Product" approach that would satisfy the budget constraint too.

Symfony may be part of the problem, but it can also be part of the solution. If the risk comes from changes in the routing, why not use Symfony's tools to monitor them?

The actual implementation (where it gets technical)

The command debug:router lists all the route definitions in a Symfony application. There's also a --format option that allows getting the output as JSON, which is perfect for writing a script that relies on that data.

As for many projects at Liip, we use RMT to release new versions of the application. This tool allows "prerequisites" scripts to be executed before any release is attempted: useful for running a test suite or, in this case, for checking whether the application's routing underwent any risky changes.

The first prerequisite for our script to work is to have a reference point. We need a set of route definitions in a "known stable state". This can be done by running the following command on the master branch of the project, for example:

bin/console debug:router --format=json > some_path/routing_stable_dump.json

Then it could go something like this:

  1. Use the Process Component to run the bin/console debug:router --format=json command, pass the output to json_decode(), store it in a variable (that's the new routing).
  2. Fetch the reference point using file_get_contents(), pass the output to json_decode(), store it in a variable (that's the stable routing).
  3. Compare the two variables. I used swaggest/json-diff to create a diff between the two datasets.
  4. Evaluate if the changes are risky or not (depending on the business logic) and alert the developer if they are (and prevent the release), as sketched below.
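
Put together, a minimal sketch of such a prerequisite script could look like this (my reconstruction, not the original code; the paths and the risk rule are placeholders):

require __DIR__.'/vendor/autoload.php';

use Swaggest\JsonDiff\JsonDiff;
use Symfony\Component\Process\Process;

// 1. Dump the current routing as JSON.
$process = new Process(['bin/console', 'debug:router', '--format=json']);
$process->mustRun();
$currentRouting = json_decode($process->getOutput());

// 2. Load the reference dump taken in a known stable state.
$stableRouting = json_decode(file_get_contents('some_path/routing_stable_dump.json'));

// 3. Diff the two datasets.
$diff = new JsonDiff($stableRouting, $currentRouting);
$changedPaths = array_merge($diff->getAddedPaths(), $diff->getRemovedPaths(), $diff->getModifiedPaths());

// 4. Placeholder risk rule: any change touching an /api route blocks the release.
foreach ($changedPaths as $path) {
    if (false !== strpos($path, '/api')) {
        fwrite(STDERR, "Risky routing change detected: $path\n");
        exit(1); // a non-zero exit code makes RMT abort the release
    }
}

echo "Routing check passed.\n";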

Here's an example of output from our script (screenshot in the original post).

Closing thoughts

I've actually had a great time implementing this script and do feel proud of the work I did. And besides, I'm quite confident that the solution, while not perfect, will be sufficient to increase the peace of mind of the project's developers and product owners.

What do you think? Would you have another approach? I'd love to read all about it in the comments.

Drupal Europe 2018
https://www.liip.ch/fr/blog/drupal-europe-2018
Mon, 29 Oct 2018 00:00:00 +0100

In 2017, the Drupal Association decided not to host a DrupalCon Europe in 2018, due to waning attendance and financial losses. They took some time to make the European event more sustainable. In the meantime, the Drupal community decided to organise a Drupal Europe event in Darmstadt, Germany in 2018. My colleagues and I joined this biggest European Drupal event in October, and here is my summary of a few talks I really enjoyed!

Driesnote

By Dries Buytaert
Track: Drupal + Technology
Recording and slides

This year, Dries Buytaert focused on the improvements made for Drupal users such as content creators, evaluators and developers.

Compared to last year, Drupal 8 contributions increased by 10% and stable module releases by 46%. Moreover, steady progress is noticeable, especially in the core initiatives: the latest version of Drupal 8 ships with features and improvements created by 4 core initiatives.

Content creators are now the key decision-makers in the selection of a CMS. Their expectations have changed: they need flexibility but also simpler tools to edit content. The layout_builder core module answers some of this by enabling inline editing of content and drag-and-dropping elements into different sections. Media management has been improved too, and it is possible to prepare different "states" of content using the workspaces module. But the progress doesn't stop here. The next step is to modernize the administrative UI with a React-based refresh of the Seven administration theme. Using this modern framework makes it familiar to JavaScript (JS) developers and builds a bridge with the JS community.

Drupal took a big step forward for evaluators, as it now provides a demo profile called "Umami". By navigating through the demo website, evaluators get a clear understanding of what kind of websites Drupal can produce and how it works.
The online documentation on drupal.org has also been reorganized, with a clear separation of Drupal 7 and Drupal 8. It provides some getting-started guides too. Finally, a quick-install link is available to have a website running within 3 clicks and 1 minute 27 seconds!

The developer experience has been improved as well: minor releases are now supported for 12 months instead of the former 4 weeks. Teams will have more time to plan their updates efficiently. Moreover, GitLab will be adopted within the next months to manage code contributions. This modern collaboration tool will encourage more people to contribute to projects.

Regarding support for the current Drupal versions, Dries shared that Symfony 3, the base component of Drupal 8, will be end-of-life by 2021. To keep the CMS secure, this implies that Drupal 8 will be end-of-life by November 2021, and Drupal 9 should be released in 2020. The upgrade from Drupal 8 to Drupal 9 should be smooth as long as you stay current with the minor releases and don't use modules with deprecated APIs.
Support for Drupal 7 has been extended to November 2021, as the migration path from Drupal 7 to Drupal 8 is not yet stable for multilingual sites.

This is a slide from Driesnote presentation showing a mountain with many tooltips: "Drupal 8 will be end-of-life by November 2021", "Drupal 7 will be supported until November 2021", "Drupal 9 will be released in 2020", "Drupal 8 became a better tool for developers", "You now have up to 12 months to upgrade your sites", "Drupal 8 became much easier to evaluate", "We've begun to coordinate the marketing of Drupal", "Drupal 8 became easier to use for content creators", "Drupal.org is moving to GitLab very soon".
Slide from Driesnote showing current state of Drupal.

Last but not least, DrupalCon is coming back next year and will be held in Amsterdam!

JavaScript modernisation initiative

By Cristina Chumillas, Lauri Eskola, Matthew Grill, Daniel Wehner and Sally Young
Track: Drupal + Technology
Recording and slides

After a lot of discussion about which JS framework would be used to build the new Drupal administrative experience, React was finally chosen for its popularity.

The initiative members wanted to focus on the content editing experience, as it affects a big group of Drupal users. The goal was to simplify and modernize the current interface, while embracing practices that are familiar to JS developers so they can join the Drupal community more easily.
On one hand, a UX team ran some user tests. Those showed that users like the flexibility they have with the Drupal interface but usually dislike its complexity. A comparative study was also run to see what is used in other tools and CMSs. On the other hand, the User Interface (UI) team worked on the redesign of the administrative interface and built a design system based on components. The refresh of the Seven administration theme is ongoing.
Another group worked on prototyping the User Experience (UX) and User Interface (UI) changes with React. For instance, if editors leave a page without saving their last changes, a popup appears offering to restore them. This is possible because the content is stored in the application state.

You can see a demo of the new administrative UI in the video (go to 20 minutes 48 seconds):

Demo of the new administrative UI in Drupal 8

If you are interested, you can install the demo and of course join the initiative!

Drupal Diversity & Inclusion: Building a stronger community

By Tara King and Elli Ludwigson
Track: Drupal Community
Recording

Diversity in gender, race, ethnicity, immigration status, disability, religion etc. helps a lot: it has been proven to make a team more creative, collaborative and effective.

Tara King and Elli Ludwigson, who are part of the Drupal Diversity and Inclusion team, presented how Drupal is building a stronger and smarter community. The initial need was to make Drupal a safer place for all, especially for the less visible ones at community events, such as women, minorities and people with disabilities.
The group addressed several issues, such as racism, sexism, homophobia and language barriers, with different efforts and initiatives. For example, diversity is highlighted and supported at Drupal events: pronoun stickers are distributed, the #WeAreDrupal hashtag is used on Twitter, and social events are organized for underrepresented people as well. Moreover, the group has released an online resource library which collects articles about diversity. All of this is ongoing, and new initiatives keep being created: helping people find jobs, or attracting more diverse people into recruiting, to name just two.

Flyer put on a table with the text "Make eye Contact. Invite someone to join the conversation. Consider new perspectives. Call out exclusionary behavior. Be an ally at Drupal events."
Diversity and Inclusion flyer, photo by Paul Johnson, license CC BY-NC 2.0
Sign mentioning "All-gender restrooms" at Drupal Europe venue.
All-gender restrooms sign, photo by Gábor Hojtsy, license CC BY-SA 2.0

If you are interested in the subject and would like to be involved, there are weekly meetings in #diversity-inclusion Drupal Slack channel. You can join the contrib team or work on the issue queue too.

Willy Wonka and the Secure Container Factory

By Dave Hall
Track: DevOps + Infrastructure
Recording

Docker is a tool designed to create, deploy and run applications easily by using containers. It is also about “running random code downloaded from the internet and running it as root”. This quote points out how important it is to maintain secure containers. Dave Hall illustrated this with practical advice and images from the movie “Willy Wonka and the Chocolate Factory”. Here is a little recap:

  • Keep your image light: big images slow down deployments and also increase the attack surface. Use an Alpine base image, which is about 20 times lighter, rather than a Debian one;
  • Check downloaded sources very carefully: for instance, use the wget command and validate the file against a published checksum (see the sketch below). You can also scan your images for vulnerabilities using tools like Microscanner or Clair;
  • Use continuous development workflows: have a plan to maintain your Docker images, use a good Continuous Integration / Continuous Delivery (CI/CD) system and document it;
  • Specify a user in your Dockerfile: running root in a container is the same as running root on the host, and you need to limit what a potential attacker can do;
  • Measure your uptime in hours or days: it is important to rebuild and redeploy often, to avoid running a compromised system for a long time.

Now you are able to incorporate this advice into your Dockerfiles in order to build a safer factory than Willy Wonka’s.
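
To illustrate the checksum advice from the list above, here is a minimal Python sketch; the file name and expected checksum are placeholders, not values from the talk:

# verify a downloaded file against a published SHA256 checksum before using it
import hashlib

def sha256sum(path):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

expected = 'deadbeef...'  # hypothetical checksum published by the upstream project
if sha256sum('downloaded-package.tar.gz') != expected:
    raise SystemExit('Checksum mismatch: refusing to use this file!')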

Decoupled Drupal: Implications, risks and changes from a business perspective

By Michael Schmid
Track: Agency + Business
Recording

Before 2016, Michael Schmid and his team worked on fully Drupal-based projects. Since then, they have been working on progressively and fully decoupled projects.
A fully decoupled website means that the frontend is not handled by Drupal but by a JS framework such as React, which “talks” to Drupal via an API such as GraphQL. It also means that all frontend provided by Drupal is gone: views with filters, webforms, comments etc. If a module provides frontend, it is no longer usable and needs to be re-implemented somehow.
When it comes to progressively decoupled websites, the frontend stack is still built with Drupal, but some parts are implemented with a JS framework. Data can be provided by APIs or injected from Drupal too. The advantage is that you can benefit from Drupal components and don’t need to re-implement everything. Downsides are conflicts with CSS styling and build systems handled on both sides, so you need a clear understanding of what does what.

To be able to run such projects successfully, it is important to train every developer in the new technologies: JS has evolved, and parts of the logic can now be built with it. We can say that backend developers can do frontend now. In terms of hiring this means you can hire full-stack developers but also JS engineers, which attracts more developers globally, as many love working with JS frameworks such as React.

Projects are investments which continue over time, and failures should be expected at the beginning. These kinds of projects are more complex than regular Drupal ones; they can fail or go over budget. Learn from your mistakes and share them with your team in retrospectives. It is also very important to celebrate successes!
Clients request decoupled projects to offer a faster and cooler experience to their users. They need to understand that this is an investment that will pay off in the future.

Finally, fully decoupled Drupal is a trend for big projects, and other CMSs already offer decoupling out of the box. Drupal needs to focus on a better editor experience and a better API. There might also be projects that only require simple backend editing, where Drupal is not needed at all.

Hackers automate but the Drupal Community still downloads updates on drupal.org or: Why we need to talk about Auto Updates

By Joe Noll and Hernani Borges de Freitas
Track: Drupal + Technology
Recording and slides

In 2017, 59% of Drupal users were still downloading modules from drupal.org. In other words, more than half of the users didn’t have any automated process to install modules. Knowing that critical security updates were released in the past months, and that it is only a matter of hours until a website potentially gets hacked, it becomes crucial to have a process to automate these updates.
An update can be quite complex and may take time: installing the update, reviewing the changes, deploying on a test environment, testing either automatically or manually, and deploying to production. However, this process can be simplified with automation in place.

There is a core initiative to support owners of small-to-medium sites who usually do not take care of security updates. The idea is a process that downloads the code and updates the sources directly in the Drupal directory.
For more complex websites, automating the composer workflow with a CI pipeline is recommended (see the sketch below). Every time a security update is released, the developer triggers it manually in the pipeline. The CI system builds an installation containing the security fix in a new branch. This is deployed automatically to a non-production environment where tests can be run and the build approved. Changes can then be merged and deployed to production.

A schema showing the update strategy through all steps from a CI pipeline
Update strategy slide by Joe Noll and Hernani Borges de Freitas

To go further, the update_runner module focuses on automating the first part: detecting an update and firing up a push for an update job.

Conclusion

Swiss Drupal community members cheering at a restaurant
Meeting the Swiss Drupal community, photo by Josef Dabernig, license CC BY-NC-SA 2.0

We are back with fresh ideas, things we are curious to try and learnings from great talks! We also joined the social events in the evenings, where we exchanged with other Drupalists, in particular the Swiss Drupal community! The week went by so fast. Thank you, Drupal Europe organizers, for making this event possible!

Header image credits: Official Group Photo Drupal Europe Darmstadt 2018 by Josef Dabernig, license CC BY-NC-SA 2.0.

]]>
Real time numbers recognition (MNIST) on an iPhone with CoreML from A to Z https://www.liip.ch/fr/blog/numbers-recognition-mnist-on-an-iphone-with-coreml-from-a-to-z https://www.liip.ch/fr/blog/numbers-recognition-mnist-on-an-iphone-with-coreml-from-a-to-z Tue, 23 Oct 2018 00:00:00 +0200 Creating a CoreML model from A-Z in less than 10 Steps

This is the third part of our deep learning on mobile phones series. In part one I showed you the two main tricks on how to use convolutions and pooling to train deep learning networks. In part two I showed you how to fine-tune existing deep learning networks like resnet50 to detect new objects. In part three I will now show you how to train a deep learning network, convert it into the CoreML format and then deploy it to your mobile phone!

TLDR: I will show you how to create your own iPhone app from A-Z that recognizes handwritten numbers:

Let’s get started!

1. How to start

To have a fully working example, I thought we’d start with a toy dataset like the MNIST set of handwritten digits and train a deep learning network to recognize them. Once it’s working nicely on our PC, we will port it to an iPhone X using the CoreML standard.

2. Getting the data

# Importing the dataset with Keras and transforming it
from keras.datasets import mnist
from keras.utils import np_utils  # needed for the one-hot encoding below
from keras import backend as K

def mnist_data():
    # input image dimensions
    img_rows, img_cols = 28, 28
    (X_train, Y_train), (X_test, Y_test) = mnist.load_data()

    if K.image_data_format() == 'channels_first':
        X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
        X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
        input_shape = (1, img_rows, img_cols)
    else:
        X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
        X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
        input_shape = (img_rows, img_cols, 1)

    # rescale [0,255] --> [0,1]
    X_train = X_train.astype('float32')/255
    X_test = X_test.astype('float32')/255

    # transform to one hot encoding
    Y_train = np_utils.to_categorical(Y_train, 10)
    Y_test = np_utils.to_categorical(Y_test, 10)

    return (X_train, Y_train), (X_test, Y_test)

(X_train, Y_train), (X_test, Y_test) = mnist_data()

3. Encoding it correctly

When working with image data we have to distinguish how we want to encode it. Since Keras is a high-level library that can work on multiple “backends” such as Tensorflow, Theano or CNTK, we first have to find out how our backend encodes the data. It can either be encoded “channels first” or “channels last”; the latter is the default in Tensorflow, the default Keras backend. So in our case, with Tensorflow, an input batch is a tensor of shape (batch_size, rows, cols, channels): first the batch_size, then the 28 rows of the image, then the 28 columns, and then a 1 for the number of channels, since our image data is grey-scale.
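
If you want to double-check what your setup does, a quick sanity check (illustrative, using the mnist_data() function defined above) is to print the data format and the resulting tensor shape:

# check the backend's image encoding and the resulting tensor shape
from keras import backend as K
print(K.image_data_format())  # 'channels_last' on the Tensorflow backend
(X_train, Y_train), (X_test, Y_test) = mnist_data()
print(X_train.shape)          # (60000, 28, 28, 1): batch size, rows, cols, channels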

We can take a look at the first six images that we have loaded with the following snippet:

# plot first six training images
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.cm as cm
import numpy as np

(X_train, y_train), (X_test, y_test) = mnist.load_data()

fig = plt.figure(figsize=(20,20))
for i in range(6):
    ax = fig.add_subplot(1, 6, i+1, xticks=[], yticks=[])
    ax.imshow(X_train[i], cmap='gray')
    ax.set_title(str(y_train[i]))

4. Normalizing the data

We see that there are white numbers on a black background, each thickly written right in the middle, and that they are quite low-resolution: in our case 28 x 28 pixels.

You may have noticed that above we rescale each image pixel by dividing it by 255. This results in pixel values between 0 and 1, which is quite useful for any kind of training. Each image’s pixel values look like this before the transformation:

# visualize one number with pixel values
def visualize_input(img, ax):
    ax.imshow(img, cmap='gray')
    width, height = img.shape
    thresh = img.max()/2.5
    for x in range(width):
        for y in range(height):
            ax.annotate(str(round(img[x][y],2)), xy=(y,x),
                        horizontalalignment='center',
                        verticalalignment='center',
                        color='white' if img[x][y]<thresh else 'black')

fig = plt.figure(figsize = (12,12)) 
ax = fig.add_subplot(111)
visualize_input(X_train[0], ax)

As you can see, each of the grey pixels has a value between 0 and 255, where 255 is white and 0 is black. Notice that here mnist.load_data() loads the original data into X_train[0]. Our custom mnist_data() function transforms every pixel intensity into a value between 0 and 1 by calling X_train = X_train.astype('float32')/255.

5. One hot encoding

Originally the data is encoded in such a way that the Y vector contains the number value shown in the X vector (the pixel data): if the image looks like a 7, the Y vector simply contains the number 7. We transform this into a one-hot encoding, a vector of length 10 that contains a 1 at the position of the digit and 0 everywhere else, because we want to map our output to 10 output neurons in our network that fire when the corresponding number is recognized.
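
As a small illustration (not part of the training code above), this is what the encoding of a 7 looks like:

# one-hot encoding example: the digit 7 becomes a 10-element vector
from keras.utils import np_utils
print(np_utils.to_categorical([7], 10))
# [[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]]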

6. Modeling the network

Now it is time to define a convolutional network to distinguish those numbers. Using the convolution and pooling tricks from part one of this series we can model a network that will be able to distinguish numbers from each other.

# defining the model
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
def network():
    model = Sequential()
    input_shape = (28, 28, 1)
    num_classes = 10

    model.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.3))
    model.add(Flatten())
    model.add(Dense(500, activation='relu'))
    model.add(Dropout(0.4))
    model.add(Dense(num_classes, activation='softmax'))

    # summarize the model
    # model.summary()
    return model 

So what did we do there? We started with a convolution with a kernel size of 3, meaning the window is 3x3 pixels; the input shape is our 28x28 pixels. We followed this layer with a max pooling layer: the pool_size is 2, so we downscale everything by 2, and the input to the next convolutional layer is 14x14. We repeated this two more times, so the feature map after the final pooling layer is 3x3. We then use a dropout layer, where we randomly set 30% of the input units to 0, to prevent overfitting during training. Finally we flatten the layers (in our case 3x3x32 = 288 values) and connect them to a dense layer with 500 nodes. After this step we add another dropout layer and finally connect it to our dense layer with 10 nodes, which corresponds to our number of classes (the digits 0 to 9).
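
To verify the shape arithmetic above, you can instantiate the network and print each layer’s output shape; a minimal check (not in the original post) looks like this:

# print every layer's output shape to follow the downscaling 28 -> 14 -> 7 -> 3
model = network()
for layer in model.layers:
    print(layer.name, layer.output_shape)
# the final MaxPooling2D layer should report (None, 3, 3, 32),
# which the Flatten layer turns into (None, 288)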

7. Training the model

#Training the model
import keras  # for keras.optimizers below

model = network()  # build the network defined above
model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])

model.fit(X_train, Y_train, batch_size=512, epochs=6, verbose=1,validation_data=(X_test, Y_test))

score = model.evaluate(X_test, Y_test, verbose=0)

print('Test loss:', score[0])
print('Test accuracy:', score[1])

We first compile the network by defining a loss function and an optimizer: in our case we select categorical_crossentropy, because we have multiple categories (the numbers 0-9). Keras offers a number of optimizers, so feel free to try out a few and stick with what works best for your case. I’ve found that AdaDelta (an advanced form of AdaGrad) works fine for me.

So after training I’ve got a model with an accuracy of 98%, which is quite excellent given the rather simple network architecture. In the screenshot you can also see that the accuracy was increasing in each epoch, so everything looks good. We now have a model that can predict the numbers 0-9 quite well from their 28x28 pixel representation.

8. Saving the model

Since we want to use the model on our iPhone, we have to convert it into a format that our iPhone understands. There is actually an ongoing initiative from Microsoft, Facebook and Amazon (and others) to harmonize all the different deep learning network formats with an interchangeable open neural network exchange format that you can use on any device. It’s called ONNX.

Yet, as of today, Apple devices only work with the CoreML format. Luckily, to convert our Keras model to CoreML, Apple provides a very handy helper library called coremltools. It can convert scikit-learn, Keras and XGBoost models to CoreML, thus covering quite a bit of the everyday applications. Install it with “pip install coremltools” and you will be able to use it easily.

import coremltools  # the helper library we just installed

coreml_model = coremltools.converters.keras.convert(model,
                                                    input_names="image",
                                                    image_input_names='image',
                                                    class_labels=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
                                                    )

The most important parameters are class_labels, which defines the classes the model tries to predict, and input_names / image_input_names: by setting them to “image”, XCode will automatically recognize that this model takes in an image and tries to predict something from it. Depending on your application it makes a lot of sense to study the documentation, especially to make sure the converter encodes the RGB channels in the expected order (parameter is_bgr) and correctly assumes that all inputs are values between 0 and 1 (parameter image_scale).
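
For instance, if the pixels arriving at the model on the device are raw 0-255 values, a variant of the conversion call could set these parameters explicitly. This is a sketch; the values here are illustrative, not what the original app needed:

# illustrative conversion call with explicit preprocessing parameters
coreml_model = coremltools.converters.keras.convert(model,
                                                    input_names='image',
                                                    image_input_names='image',
                                                    class_labels=[str(i) for i in range(10)],
                                                    is_bgr=False,        # channel order; irrelevant for grey-scale, crucial for RGB models
                                                    image_scale=1/255.0  # rescale 0-255 camera pixels to the 0-1 range the network was trained on
                                                    )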

The only thing left is to add some metadata to your model. With this you are helping other developers greatly, since they don’t have to guess how your model works and what it expects as input.

#entering metadata
coreml_model.author = 'plotti'
coreml_model.license = 'MIT'
coreml_model.short_description = 'MNIST handwriting recognition with a 3 layer network'
coreml_model.input_description['image'] = '28x28 grayscaled pixel values between 0-1'
coreml_model.save('SimpleMnist.mlmodel')

print(coreml_model)

9. Use it to predict something

After saving the model in the CoreML format we can check whether it works correctly on our machine. For this we feed it an image and see if it predicts the label correctly. You can use the MNIST training data, or you can snap a picture with your phone and transfer it to your PC to see how well the model handles real-life data.

#Use the core-ml model to predict something
from PIL import Image  
import numpy as np
model =  coremltools.models.MLModel('SimpleMnist.mlmodel')
im = Image.fromarray((np.reshape(mnist_data()[0][0][12]*255, (28, 28))).astype(np.uint8),"L")
plt.imshow(im)
predictions = model.predict({'image': im})
print(predictions)

It works, hooray! Now it's time to include it in an XCode project.

Porting our model to XCode in 10 Steps

Let me start by saying: I am by no means an XCode or mobile developer. I have studied quite a few super helpful tutorials, walkthroughs and videos on how to create a simple mobile phone app with CoreML and have used those to create my app. I can only say a big thank you and kudos to the community for being so open and helpful.

1. Install XCode

Now it's time to really get our hands dirty. Before you can do anything you need XCode, so download it from the App Store and install it. In case you already have it, make sure it is at least version 9.

2. Create the Project

Start XCode and create a single view app. Name your project accordingly; I named mine “numbers”. Select a place to save it. You can leave “create git repository on my mac” checked.

3. Add the CoreML model

We can now add the CoreML model that we created with the coremltools converter. Simply drag the model into your project directory, making sure to drag it into the correct folder (see screenshot). You can use the option “add as Reference”; this way, whenever you update your model, you don’t have to drag it into the project again. XCode should automatically recognize your model and realize that it is a model meant for images.

4. Delete the view or storyboard

Since we are going to use just the camera and display a label, we don’t need a fancy graphical user interface, or in other words a view layer. Since the storyboard corresponds to the view in the MVC pattern, we are simply going to delete it. In the project settings under deployment info, make sure to delete the Main Interface too (see screenshot), by setting it to blank.

5. Create the root view controller programmatically

Instead, we are going to create the root view controller programmatically by replacing the application function in AppDelegate.swift with the following code:

// create the root view controller programmatically
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    // create the user interface window, make it visible
    window = UIWindow()
    window?.makeKeyAndVisible()

    // create the view controller and make it the root view controller
    let vc = ViewController()
    window?.rootViewController = vc

    // return true upon success
    return true
}

6. Build the view controller

Finally it is time to build the view controller. We will use UIKit, a library for creating buttons and labels; AVFoundation, a library to capture the camera output on the iPhone; and Vision, a library to handle our CoreML model. The latter is especially handy if you don’t want to resize the input data yourself.

In the ViewController we inherit UI and AV functionality (UIViewController and the AVCaptureVideoDataOutputSampleBufferDelegate protocol), so we will need to override some methods later to make it all work.

The first thing we will do is to create a label that will tell us what the camera is seeing. By overriding the viewDidLoad function we will trigger the capturing of the camera and add the label to the view.

In the function setupCaptureSession we create a capture session, grab the first available camera (the back-facing one, as specified in the code below) and capture its output into captureOutput while also displaying it on the previewLayer.

In the function captureOutput we finally make use of the CoreML model that we imported before. Make sure to hit Cmd+B (build) after importing it, so XCode knows it's actually there. We use it to predict something from the captured image, then grab the model's first prediction and display it in our label.

// define the ViewController
import UIKit
import AVFoundation
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    // create a label to hold the predicted digit and confidence
    let label: UILabel = {
        let label = UILabel()
        label.textColor = .white
        label.translatesAutoresizingMaskIntoConstraints = false
        label.text = "Label"
        label.font = label.font.withSize(40)
        return label
    }()

    override func viewDidLoad() {
        // call the parent function
        super.viewDidLoad()       
        setupCaptureSession() // establish the capture
        view.addSubview(label) // add the label
        setupLabel()
    }

    func setupCaptureSession() {
        // create a new capture session
        let captureSession = AVCaptureSession()

        // find the available cameras
        let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices

        do {
            // select the first available camera (the back-facing one)
            if let captureDevice = availableDevices.first {
                captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice))
            }
        } catch {
            // print an error if the camera is not available
            print(error.localizedDescription)
        }

        // setup the video output to the screen and add output to our capture session
        let captureOutput = AVCaptureVideoDataOutput()
        captureSession.addOutput(captureOutput)
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = view.frame
        view.layer.addSublayer(previewLayer)

        // buffer the video and start the capture session
        captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // load our CoreML MNIST model
        guard let model = try? VNCoreMLModel(for: SimpleMnist().model) else { return }

        // run an inference with CoreML
        let request = VNCoreMLRequest(model: model) { (finishedRequest, error) in

            // grab the inference results
            guard let results = finishedRequest.results as? [VNClassificationObservation] else { return }

            // grab the highest confidence result
            guard let Observation = results.first else { return }

            // create the label text components
            let predclass = "\(Observation.identifier)"

            // set the label text
            DispatchQueue.main.async(execute: {
                self.label.text = "\(predclass) "
            })
        }

        // create a Core Video pixel buffer which is an image buffer that holds pixels in main memory
        // Applications generating frames, compressing or decompressing video, or using Core Image
        // can all make use of Core Video pixel buffers
        guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // execute the request
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }

    func setupLabel() {
        // constrain the label in the center
        label.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true

        // constrain the label to 50 pixels from the bottom
        label.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -50).isActive = true
    }
}

Make sure that you have changed the model part to the name of your own model, otherwise you will get build errors.

7. Add a privacy message

Finally, since we are going to use the camera, we need to inform the user that we are going to do so. We do this by adding a privacy message, “Privacy - Camera Usage Description”, in the Info.plist file under Information Property List.

8. Add a build team

In order to deploy the app on your iPhone, you will need to register with the Apple developer program; there is no need to pay any fees to do so. Once you are registered, you can select the team (Apple calls it this way) that you signed up with in the project properties.

9. Deploy on your iPhone

Finally it's time to deploy the model on your iPhone. You will need to connect it via USB and unlock it. Once it's unlocked, select the destination under Product > Destination > Your iPhone. Then the only thing left is to run it on your mobile: select Product > Run in the menu (or simply hit Cmd+R) and XCode will build and deploy the project to your iPhone.

10. Try it out

After having had to jump through so many hoops, it is finally time to try out our app. If you are starting it for the first time, it will ask you to allow it to use your camera (after all, we placed this info there). Then make sure to hold your iPhone sideways, since orientation matters given how we trained the network: we have not used any augmentation techniques, so our model is unable to recognize numbers that are “lying on the side”. We could make the model better by applying these techniques, as I have shown in this blog article.
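
For completeness, here is a hedged sketch of such augmentation with Keras’ ImageDataGenerator; the parameter values are illustrative, not what the original model used:

# augment the training data with small rotations and shifts so the model
# becomes more robust to how the phone is held
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=20,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.1)

model.fit_generator(datagen.flow(X_train, Y_train, batch_size=512),
                    steps_per_epoch=len(X_train) // 512,
                    epochs=6,
                    validation_data=(X_test, Y_test))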

A second thing you might notice is that the app always recognizes some number, as there is no “background” class. To fix this, we could additionally train the model on some random images that we classify as the background class. This way our model would be better equipped to tell whether it is seeing a number or just some random background.

Conclusion or the famous “so what”

Obviously this is a very long blog post. Yet I wanted to get all the necessary info into one place in order to show other mobile devs how easy it is to create your own deep learning computer vision applications. In our case at Liip it will most certainly boil down to a collaboration between our data services team and our mobile developers, in order to get the best of both worlds.

In fact we are currently innovating together on an app that will be able to recognize animals in a zoo, and on another small fun game that lets two people doodle against each other: you are given a task such as “draw an apple”, and the person who first draws the apple in such a way that it is recognised by the deep learning model wins.

Beyond such fun innovation projects the possibilities are endless, but they always depend on the context of the business and the users. Obviously the saying “if you have a hammer, every problem looks like a nail” applies here too: not every app will benefit from having computer vision on board, and not all apps using computer vision are useful ones, as some of you might know from the famous Silicon Valley episode.

Yet there are quite a few nice examples of apps that use computer vision successfully:

  • Leafsnap lets you distinguish different types of leaves.
  • Aipoly helps visually impaired people to explore the world.
  • Snooth gets you more info on your wine by taking a picture of the label.
  • Pinterest has launched a visual search that allows you to search for pins that match the product that you captured with your phone.
  • Caloriemama lets you snap a picture of your food and tells you how many calories it has.

As usual, the code that you have seen in this blog post is available online. Feel free to experiment with it. I am looking forward to your comments and I hope you enjoyed the journey. P.S. I would like to thank Stefanie Taepke for proofreading and for her helpful comments, which made this post more readable.

]]>