Real time numbers recognition (MNIST) on an iPhone with CoreML from A to Z https://www.liip.ch/fr/blog/numbers-recognition-mnist-on-an-iphone-with-coreml-from-a-to-z https://www.liip.ch/fr/blog/numbers-recognition-mnist-on-an-iphone-with-coreml-from-a-to-z Tue, 23 Oct 2018 00:00:00 +0200 Creating a CoreML model from A-Z in less than 10 Steps

This is the third part of our deep learning on mobile phones series. In part one I showed you the two main tricks, convolutions and pooling, used to train deep learning networks. In part two I showed you how to retrain existing deep learning networks like ResNet50 to detect new objects. In part three I will now show you how to train a deep learning network, convert it into the CoreML format and then deploy it on your mobile phone!

TLDR: I will show you how to create your own iPhone app from A-Z that recognizes handwritten numbers:

Let’s get started!

1. How to start

To have a fully working example I thought we’d start with a toy dataset like the MNIST set of handwritten digits and train a deep learning network to recognize those. Once it’s working nicely on our PC, we will port it to an iPhone X using the CoreML format.

2. Getting the data

# Importing the dataset with Keras and transforming it
from keras.datasets import mnist
from keras.utils import np_utils   # needed for the one hot encoding below
from keras import backend as K

def mnist_data():
    # input image dimensions
    img_rows, img_cols = 28, 28
    (X_train, Y_train), (X_test, Y_test) = mnist.load_data()

    if K.image_data_format() == 'channels_first':
        X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
        X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
        input_shape = (1, img_rows, img_cols)
    else:
        X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
        X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
        input_shape = (img_rows, img_cols, 1)

    # rescale [0,255] --> [0,1]
    X_train = X_train.astype('float32')/255
    X_test = X_test.astype('float32')/255

    # transform to one hot encoding
    Y_train = np_utils.to_categorical(Y_train, 10)
    Y_test = np_utils.to_categorical(Y_test, 10)

    return (X_train, Y_train), (X_test, Y_test)

(X_train, Y_train), (X_test, Y_test) = mnist_data()

3. Encoding it correctly

When working with image data we have to distinguish how we want to encode it. Since Keras is a high-level library that can work on multiple “backends” such as TensorFlow, Theano or CNTK, we first have to find out how our backend encodes the data. It can either be encoded in a “channels first” or in a “channels last” way; the latter is the default in TensorFlow, which is the default Keras backend. So in our case, when we use TensorFlow, an input batch is a tensor of shape (batch_size, rows, cols, channels). We first have the batch_size, then the 28 rows of the image, then the 28 columns of the image and then a 1 for the number of channels, since our image data is grey-scale.
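
If you want to double-check what your own setup uses, here is a quick sketch (assuming the default TensorFlow backend is installed):

# check how the Keras backend expects image tensors
from keras import backend as K
print(K.image_data_format())  # 'channels_last' with the TensorFlow backend
# a batch of grey-scale MNIST images is then shaped (batch_size, 28, 28, 1)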

We can take a look at the first six images that we have loaded with the following snippet:

# plot first six training images
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.cm as cm
import numpy as np

(X_train, y_train), (X_test, y_test) = mnist.load_data()

fig = plt.figure(figsize=(20,20))
for i in range(6):
    ax = fig.add_subplot(1, 6, i+1, xticks=[], yticks=[])
    ax.imshow(X_train[i], cmap='gray')
    ax.set_title(str(y_train[i]))

4. Normalizing the data

We see that there are white numbers on a black background, each one written thickly and roughly centered, and they are quite low resolution - in our case 28 x 28 pixels.

You may have noticed that above we rescale each image's pixel values by dividing them by 255. This results in pixel values between 0 and 1, which is quite useful for any kind of training. Each image's pixel values look like this before the transformation:

# visualize one number with pixel values
def visualize_input(img, ax):
    ax.imshow(img, cmap='gray')
    width, height = img.shape
    thresh = img.max()/2.5
    for x in range(width):
        for y in range(height):
            ax.annotate(str(round(img[x][y],2)), xy=(y,x),
                        horizontalalignment='center',
                        verticalalignment='center',
                        color='white' if img[x][y]<thresh else 'black')

fig = plt.figure(figsize = (12,12)) 
ax = fig.add_subplot(111)
visualize_input(X_train[0], ax)

As you can see, each grey-scale pixel has a value between 0 and 255, where 255 is white and 0 is black. Notice that here mnist.load_data() loads the original data into X_train[0]. In our custom mnist_data() function we transform every pixel intensity into a value between 0 and 1 by calling X_train = X_train.astype('float32')/255.

5. One hot encoding

Originally the data is encoded in such a way that the Y vector contains the number that the X vector (the pixel data) represents. So for example, if the image looks like a 7, the Y vector simply contains the value 7. One hot encoding turns this single value into a vector of 10 entries, with a 1 at the position of the digit and 0 everywhere else. We need this transformation because we want to map our output to 10 output neurons in our network that fire when the corresponding number is recognized.
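
To make this concrete, here is a minimal sketch using the same np_utils.to_categorical call as in our mnist_data() function above:

# one hot encoding a single label
from keras.utils import np_utils
print(np_utils.to_categorical([7], 10))
# -> [[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]]  only the entry at index 7 (the digit 7) is set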

6. Modeling the network

Now it is time to define a convolutional network to distinguish those numbers. Using the convolution and pooling tricks from part one of this series we can model a network that will be able to distinguish numbers from each other.

# defining the model
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
def network():
    model = Sequential()
    input_shape = (28, 28, 1)
    num_classes = 10

    model.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.3))
    model.add(Flatten())
    model.add(Dense(500, activation='relu'))
    model.add(Dropout(0.4))
    model.add(Dense(num_classes, activation='softmax'))

    # summarize the model
    # model.summary()
    return model 

So what did we do there? Well, we started with a convolution with a kernel size of 3. This means the window is 3x3 pixels. The input shape is our 28x28 pixels. We then followed this layer with a max pooling layer. Here the pool_size is 2, so we downscale everything by 2 and the input to the next convolutional layer is 14x14. We then repeated this two more times, ending up with a 3x3 feature map after the final pooling layer. We then use a dropout layer where we randomly set 30% of the input units to 0 to prevent overfitting during training. Finally we flatten the resulting feature maps (in our case 3x3x32 = 288 values) and connect them to a dense layer with 500 nodes. After this step we add another dropout layer and finally connect everything to our dense output layer with 10 nodes, which corresponds to our number of classes (the digits 0 to 9).
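
If you want to verify these feature map sizes yourself, model.summary() prints the output shape of every layer. The shapes below follow directly from the layer definitions above ('same' padding keeps the size, each pooling layer halves it, rounding down):

# inspect the layer output shapes (batch dimension omitted in the comments)
model = network()
model.summary()
# Conv2D -> (28, 28, 32), MaxPooling2D -> (14, 14, 32)
# Conv2D -> (14, 14, 32), MaxPooling2D -> (7, 7, 32)
# Conv2D -> (7, 7, 32),   MaxPooling2D -> (3, 3, 32)
# Flatten -> (288,), Dense -> (500,), Dense -> (10,)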

7. Training the model

#Training the model
import keras

model = network()
model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])

model.fit(X_train, Y_train, batch_size=512, epochs=6, verbose=1, validation_data=(X_test, Y_test))

score = model.evaluate(X_test, Y_test, verbose=0)

print('Test loss:', score[0])
print('Test accuracy:', score[1])

We first compile the network by defining a loss function and an optimizer: in our case we select categorical_crossentropy, because we have multiple categories (the digits 0-9). Keras offers a number of optimizers, so feel free to try out a few and stick with what works best for your case. I’ve found that Adadelta (an extension of Adagrad) works fine for me.
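
As a small, hedged example of what “trying out a few” can look like, switching to Adam is a one-line change (the optimizer is created here with Keras' default hyperparameters):

# alternative: compile the same model with Adam instead of Adadelta
from keras.optimizers import Adam
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])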

After training I’ve got a model that has an accuracy of 98%, which is quite excellent given the rather simple network architecture. In the screenshot you can also see that the accuracy was increasing in each epoch, so everything looks good to me. We now have a model that can predict the numbers 0-9 quite well from their 28x28 pixel representation.
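
If you don't want to retrain the network every time you experiment with the conversion below, you can persist the trained Keras model first; a minimal sketch using the standard Keras save/load API (the file name is arbitrary):

# save the trained Keras model and reload it later
model.save('mnist_cnn.h5')

from keras.models import load_model
model = load_model('mnist_cnn.h5')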

8. Saving the model

Since we want to use the model on our iPhone we have to convert it to a format that our iPhone understands. There is actually an ongoing initiative from Microsoft, Facebook and Amazon (and others) to harmonize all of the different deep learning network formats into an interchangeable open neural network exchange format that you can use on any device. It's called ONNX.

Yet, as of today, Apple devices only work with the CoreML format. In order to convert our Keras model to CoreML, Apple luckily provides a very handy helper library called coremltools that we can use to get the job done. It is able to convert scikit-learn, Keras and XGBoost models to CoreML, thus covering quite a bit of the everyday applications. Install it with “pip install coremltools” and then you will be able to use it easily.

import coremltools

coreml_model = coremltools.converters.keras.convert(model,
                                                    input_names="image",
                                                    image_input_names='image',
                                                    class_labels=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
                                                    )

The most important parameters are class_labels, which define the classes the model is trying to predict, and input_names / image_input_names. By setting them to "image", XCode will automatically recognize that this model takes an image as input and tries to predict something from it. Depending on your application it makes a lot of sense to study the documentation, especially when you want to make sure that the RGB channels are encoded in the expected order (parameter is_bgr) or that the converted model correctly assumes that all inputs are values between 0 and 1 (parameter image_scale).
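
As a hedged sketch of what that can look like for our grey-scale model (image_scale and is_bgr are documented parameters of the coremltools Keras converter; is_bgr only matters for colour inputs, so it is omitted here):

# conversion with explicit input preprocessing baked into the CoreML model
coreml_model = coremltools.converters.keras.convert(model,
                                                    input_names='image',
                                                    image_input_names='image',
                                                    image_scale=1/255.0,  # map 0-255 camera pixels to the 0-1 range used in training
                                                    class_labels=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
                                                    )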

The only thing left is to add some metadata to your model. With this you are helping other developers greatly, since they don’t have to guess how your model works and what it expects as input.

#entering metadata
coreml_model.author = 'plotti'
coreml_model.license = 'MIT'
coreml_model.short_description = 'MNIST handwriting recognition with a 3 layer network'
coreml_model.input_description['image'] = '28x28 grayscaled pixel values between 0-1'
coreml_model.save('SimpleMnist.mlmodel')

print(coreml_model)

9. Use it to predict something

After saving the CoreML model we can check whether it works correctly on our machine. For this we can feed it an image and see if it predicts the label correctly. You can use the MNIST training data, or you can snap a picture with your phone and transfer it to your PC to see how well the model handles real-life data.

#Use the core-ml model to predict something
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import coremltools
model = coremltools.models.MLModel('SimpleMnist.mlmodel')
im = Image.fromarray((np.reshape(mnist_data()[0][0][12]*255, (28, 28))).astype(np.uint8),"L")
plt.imshow(im)
predictions = model.predict({'image': im})
print(predictions)

It works, hooray! Now it's time to include it in an XCode project.

Porting our model to XCode in 10 Steps

Let me start by saying: I am by no means an XCode or mobile developer. I have studied quite a few super helpful tutorials, walkthroughs and videos on how to create a simple mobile phone app with CoreML and have used those to create my app. I can only say a big thank you and kudos to the community for being so open and helpful.

1. Install XCode

Now it's time to really get our hands dirty. Before you can do anything you need XCode. So download it from the Mac App Store and install it. In case you already have it, make sure you have at least version 9.

2. Create the Project

Start XCode and create a single view app. Name your project accordingly; I named mine “numbers”. Select a place to save it. You can leave “create git repository on my mac” checked.

3. Add the CoreML model

We can now add the CoreML model that we created using the coremltools converter. Simply drag the model into your project directory. Make sure to drag it into the correct folder (see screenshot). You can use the option “add as Reference”; this way, whenever you update your model, you don’t have to drag it into your project again. XCode should automatically recognize your model and realize that it is a model to be used for images.

4. Delete the view or storyboard

Since we are going to use just the camera and display a label, we don’t need a fancy graphical user interface - or in other words a view layer. Since the storyboard corresponds to the view in the MVC pattern, we are going to simply delete it. In the project settings under deployment info make sure to delete the Main Interface too (see screenshot), by setting it to blank.

5. Create the root view controller programmatically

Instead we are going to create the root view controller programmatically by replacing the application function in AppDelegate.swift with the following code:

// create the view root controller programmatically
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    // create the user interface window, make it visible
    window = UIWindow()
    window?.makeKeyAndVisible()

    // create the view controller and make it the root view controller
    let vc = ViewController()
    window?.rootViewController = vc

    // return true upon success
    return true
}

6. Build the view controller

Finally it is time to build the view controller. We will use UIKit - a library for creating buttons and labels, AVFoundation - a library to capture the camera on the iPhone - and Vision - a library to handle our CoreML model. The latter is especially handy if you don’t want to resize the input data yourself.

In the ViewController we inherit from UIViewController and adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol, so we need to implement some methods later to make it functional.

The first thing we will do is to create a label that will tell us what the camera is seeing. By overriding the viewDidLoad function we will trigger the capturing of the camera and add the label to the view.

In the function setupCaptureSession we will create a capture session, grab the first available camera (the back-facing wide-angle camera in our discovery session) and capture its output into captureOutput while also displaying it on the previewLayer.

In the function captureOutput we will finally make use of the CoreML model that we imported before. Make sure to hit Cmd+B (build) after importing it, so XCode knows it's actually there. We will use it to predict something from the image that we captured. We will then grab the first prediction from the model and display it in our label.

// define the ViewController
import UIKit
import AVFoundation
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    // create a label to hold the predicted digit
    let label: UILabel = {
        let label = UILabel()
        label.textColor = .white
        label.translatesAutoresizingMaskIntoConstraints = false
        label.text = "Label"
        label.font = label.font.withSize(40)
        return label
    }()

    override func viewDidLoad() {
        // call the parent function
        super.viewDidLoad()       
        setupCaptureSession() // establish the capture
        view.addSubview(label) // add the label
        setupLabel()
    }

    func setupCaptureSession() {
        // create a new capture session
        let captureSession = AVCaptureSession()

        // find the available cameras
        let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices

        do {
            // select the first available camera (back-facing)
            if let captureDevice = availableDevices.first {
                captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice))
            }
        } catch {
            // print an error if the camera is not available
            print(error.localizedDescription)
        }

        // setup the video output to the screen and add output to our capture session
        let captureOutput = AVCaptureVideoDataOutput()
        captureSession.addOutput(captureOutput)
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = view.frame
        view.layer.addSublayer(previewLayer)

        // buffer the video and start the capture session
        captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // load our CoreML MNIST model
        guard let model = try? VNCoreMLModel(for: SimpleMnist().model) else { return }

        // run an inference with CoreML
        let request = VNCoreMLRequest(model: model) { (finishedRequest, error) in

            // grab the inference results
            guard let results = finishedRequest.results as? [VNClassificationObservation] else { return }

            // grab the highest confidence result
            guard let observation = results.first else { return }

            // create the label text components
            let predclass = "\(observation.identifier)"

            // set the label text
            DispatchQueue.main.async(execute: {
                self.label.text = "\(predclass) "
            })
        }

        // create a Core Video pixel buffer which is an image buffer that holds pixels in main memory
        // Applications generating frames, compressing or decompressing video, or using Core Image
        // can all make use of Core Video pixel buffers
        guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // execute the request
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }

    func setupLabel() {
        // constrain the label in the center
        label.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true

        // constrain the label to 50 pixels from the bottom
        label.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -50).isActive = true
    }
}

Make sure you change the model name (SimpleMnist) to match your own model. Otherwise you will get build errors.

7. Add Privacy Message

Finally, since we are going to use the camera, we need to inform the user that we are going to do so, and thus add a privacy message “Privacy - Camera Usage Description” in the Info.plist file under Information Property List.

8. Add a build team

In order to deploy the app on your mobile iPhone, you will need to register with the Apple developer program. There is no need to pay any fees for this. Once you are registered, you can select the team (as Apple calls it) that you signed up with in the project properties.

9. Deploy on your iPhone

Finally it's time to deploy the app on your iPhone. You will need to connect it via USB and then unlock it. Once it's unlocked, select the destination under Product - Destination - Your iPhone. Then the only thing left is to run it on your mobile: select Product - Run (or simply hit Cmd + R) in the menu and XCode will build and deploy the project on your iPhone.

10. Try it out

After having jumped through so many hoops it is finally time to try out our app. If you are starting it for the first time, it will ask you to allow it to use your camera (after all, we have placed this info there). Then make sure to hold your iPhone sideways, since orientation matters given how we trained the network. We have not been using any augmentation techniques, so our model is unable to recognize numbers that are “lying on the side”. We could make our model better by applying these techniques, as I have shown in this blog article.
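
For reference, here is a minimal, hedged sketch of such an augmentation step with Keras' built-in ImageDataGenerator (the rotation and shift ranges are illustrative values, not the ones from the linked article):

# train on randomly rotated and shifted digits so the model copes better with tilted input
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=30, width_shift_range=0.1, height_shift_range=0.1)
model.fit_generator(datagen.flow(X_train, Y_train, batch_size=512),
                    steps_per_epoch=len(X_train) // 512,
                    epochs=6,
                    validation_data=(X_test, Y_test))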

A second thing you might notice is that the app always recognizes some number, as there is no “background” class. In order to fix this, we could additionally train the model on some random images, which we classify as the background class. This way our model would be better equipped to tell whether it is seeing a number or just some random background.
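
A hedged sketch of that idea (X_background is a hypothetical array of random non-digit images shaped like the MNIST inputs; the existing digit labels would need to be re-encoded with the extra class as well):

# extend the label space with an 11th "background" class (index 10)
from keras.utils import np_utils
num_classes = 11
Y_background = np_utils.to_categorical([10] * len(X_background), num_classes)
# retrain with Dense(num_classes) as the last layer and pass
# class_labels=['0', '1', ..., '9', 'background'] to the coremltools converter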

Conclusion or the famous “so what”

Obviously this is a very long blog post. Yet I wanted to get all the necessary info into one place in order to show other mobile devs how easy it is to create your own deep learning computer vision applications. In our case at Liip it will most certainly boil down to a collaboration between our data services team and our mobile developers in order to get the best of both worlds.

In fact we are currently innovating together by creating an app that will be able to recognize animals in a zoo, and we are working on another small fun game that lets two people doodle against each other: you will be given a task, such as “draw an apple”, and the person who draws the apple fastest in such a way that it is recognised by the deep learning model wins.

Beyond such fun innovation projects the possibilities are endless, but always depend on the context of the business and the users. Obviously the saying “if you have a hammer, every problem looks like a nail to you” applies here too: not every app will benefit from having computer vision on board, and not all apps using computer vision are useful ones, as some of you might know from the famous Silicon Valley episode.

Yet there are quite a few nice examples of apps that use computer vision successfully:

  • Leafsnap lets you distinguish different types of leaves.
  • Aipoly helps visually impaired people to explore the world.
  • Snooth gets you more info about your wine by taking a picture of the label.
  • Pinterest has launched a visual search that allows you to search for pins that match the product that you captured with your phone.
  • Caloriemama lets you snap a picture of your food and tells you how many calories it has.

As usual, the code that you have seen in this blog post is available online. Feel free to experiment with it. I am looking forward to your comments and I hope you enjoyed the journey. P.S. I would like to thank Stefanie Taepke for proofreading and for her helpful comments, which made this post more readable.

]]>
The Liip Bike Grand Tour Challenge 2018 https://www.liip.ch/fr/blog/the-liip-bike-grand-tour-challenge-2018 https://www.liip.ch/fr/blog/the-liip-bike-grand-tour-challenge-2018 Fri, 12 Oct 2018 00:00:00 +0200 Birth of an idea

It all started because of Liip's long-standing commitment to Pro Velo's Bike to Work action. It takes place every year during May or June and encourages occasional bikers to get into the habit of biking to work, at least for parts of their trip. Liip is a long-time participant and actively encourages Liipers to take part.

I had been thinking of reaching all offices in one go for quite some time: it could be organized as a relay, participants could use different means of transportation, and so on. At the 2018 LiipConf, I shared the idea with other Liipers and got a lot of enthusiastic feedback to finally get around to organizing "something". That same evening, I turned to my favorite bike router and tried to connect the dots. The idea had then become "try to work in every Liip office, bike between the offices".

Initial implementation

With five offices, Liip spreads over most of Switzerland, from Lake Geneva to Lake Constance, along the Geneva → St. Gallen IC 1 train line. Initially, I thought of spreading the voyage over 5 days, one per office. But looking at the map and at the routing calculations, it quickly became obvious it wouldn't work, because of the Bern → Zürich leg, which is at least a 125 km ride. Cutting it in half, and not staying overnight in Bern, made the plan somewhat realistic.

In early September, I announced the plan on Liip's Slack #announcements channel, in the hope of finding "Partners in Crime":

🚴‍♀️ Liip Bike Grand Tour Challenge 2018 🚴‍♂️

Motivated to join on a Bike Grand Tour with Liipers? The idea is very simple: connect all Liip offices in one week, on a bike.
  • When? W40: Oct 1. → Oct 5.
  • How? On your bike.
  • Work? Yes; from all offices, in a week!

    Afterwards, the team of bikers took some time to materialize: although we work in a very flexible environment, being available for work only half-days for a week still isn't easy to arrange: client and internal meetings, projects and support to work on, and so on. After a month, four motivated Liipers had decided to join, some for all the legs, some for only a few of them.

    It is important to mention that the concept was never thought of as a sports stunt, or as being particularly tough: e-bikes were explicitly encouraged, and it was by no means mandatory to participate in all parts. In other words: enjoying the outdoors and having a reachable sports challenge with colleagues mattered more than completing the tour in a certain time.

    Now, doing it

    Monday October 1. - Lausanne → Fribourg

    Fast forward to Monday October 1st. The plan was to work in the morning and leave around 3 p.m. The estimated biking time was approximately 4:30. But the weather in Lausanne was by no means fantastic - light rain for most if not all of the trail. That's why we decided to leave early, and we were on our bikes at 2 p.m. As for routing, we agreed to go through Romont, which has the advantage of providing an intermediate stop with a train station, in case we wished to stop.

    We started with a 15 km climb up to Forel and one very steep ascent in La Croix-sur-Lutry, on which we made the mistake of staying on our bikes.
    We arrived in Fribourg after 5 hours in the cold, wind and light rain; often alternating, but also combined. Thankfully, we were welcomed by friendly Liipers in Fribourg who had already planned a pizza dinner and board-games night; it was just perfect!

    Tuesday October 2nd. - Fribourg → Bern

    After a well-deserved sleep, the plan was to work in Fribourg for two hours only, to leave on time and arrive in Bern for lunch.

    • ~ 33 km
    • Amplitude: 534m - 674m
    • Ascent: -72m; total 181m
    • Cantons crossed: Fribourg, Bern
    • Fribourg → Bern

    This was frankly a pleasant ride, with an almost 10 km downhill from Berg to Wünnewil, and then a reasonable uphill from Flamatt to Bern. In approximately two hours, we were able to reach Bern. The weather had become better: not as cold as the previous day, and the rain had stopped.

    Tuesday October 2nd. - Bern → Langenthal

    In Bern, changes within the team happened; one rider who had made it from Lausanne decided to stop and got replaced by a fresh one! ☺ After a General Company Circle Tactical meeting (see Holacracy – what’s different in our daily life?), we jumped on our bikes towards the first non-Liip-office overnight stop, in the north of canton Bern.

    • ~ 45 km
    • Amplitude: 466m - 568m
    • Ascent: -71m; total 135m
    • Cantons crossed: Bern, Solothurn
    • Bern → Langenthal

    Wednesday October 3rd. - Langenthal → Zürich

    After a long night's sleep in a friendly BnB in downtown Langenthal and a fantastic, gargantuan breakfast, we were now aiming for Zürich. The longest leg so far, crossing canton Aargau from west to east.

    • ~ 80 km
    • Amplitude: 437m - 510m
    • Ascent: -67m; total 391m
    • Cantons crossed: Bern, Aargau, Zürich
    • Langenthal → Zürich

    When approaching Zürich, we initially followed Route 66 "Goldküst - Limmatt", with small up- and downhills on cute gravel. But after 30 minutes of that fun, we realized that we weren't progressing fast enough. Therefore we tried to get to our destination quicker! We re-routed ourselves to more conventional, car-filled routes and arrived at the Zürich office around 1 p.m., quite hungry!

    Thursday October 4th. - Zürich → St. Gallen

    After a half-day of work in the great Zürich office, and with sore legs, we headed towards St. Gallen. The longest leg, with the biggest total ascent of the trip:

    • ~ 88 km
    • Amplitude: 412m - 606m
    • Ascent: 271m; total 728m
    • Cantons crossed: Zürich, Thurgau, St. Gallen
    • Zürich → St. Gallen

    After three days of biking and more than 200 km in the legs, this leg wasn't expected to be an easy ride, and indeed it wasn't. On the other hand, it offered nice downhill stretches (Wildberg → Thurbenthal) and fantastic landscapes, with stunning views: from the suburbs of Zürich to Thurgau farmland and the St. Gallen hills. Besides, the weather was just as it should be: sunny yet not too warm.

    After 4:45 and a finish while the sun was setting, we finally reached the St. Gallen Liip office!

    « Fourbus, mais heureux ! » - "Exhausted, but happy!"

    Friday October 5th. St. Gallen

    Friday was the only day planned without biking. And frankly, for good reason. We were not only greeted by the very friendly St. Gallen colleagues, but were also lucky enough to arrive on a massage day! (Yes, Liip offers each Liiper a monthly half-hour massage… ⇒ https://www.liip.ch/jobs ☺). After a delicious lunch, it was time to jump on a train back to Lausanne: four days to come, 3:35 to return. It really was a bizarre feeling: it takes four days to bike from Lake Geneva to Lake Constance, yet still 3.5 hours on some of the most efficient trains to get back.

    Wrap up

    • ~ 314.15 kms (yes; let's say approximately 100 * π)
    • 8 cantons crossed
    • ~ 2070 m of cumulative ascent
    • No single mechanical problem
    • Sore legs
    • Hundreds of cows of all colours and sizes
    • One game of Hornussen
    • Wireless electricity in Thurgau

    Learnings

    • Liip has people willing to engage in fun & challenging ideas!
    • Liip has all the good people it takes to support such a project!
    • The eightieth kilometre on day two is easier than the eightieth kilometre on day four: it would have been way easier with legs of decreasing intensity.
    • It takes quite some time to migrate from a desk to a fully-equipped ready-to-go bike.
    • Carrying personal equipment for one full week makes a heavy bike;
    • Bike bags are a must: one of us had a backpack and it's just not bearable;
    • The first week of October is too late in the year, and makes for uneasy conditions (rain and cold);
    • One month advance notice is too short;
    • Classical Bed-and-Breakfast are very charming.

    Thanks

    Managing this ride would not have been possible without:

    • Liip for creating a culture where implementing crazy ideas like this is encouraged ("Is it safe enough to try?");
    • Biking Liipers Tobias & Heiko, for coming along;
    • Supporting Liipers in various roles, for arranging or providing accommodation, ordering cool sports T-shirts, organizing cool welcome gatherings (game night, music night), and being always welcoming, encouraging and simply friendly;
    • The SwitzerlandMobility Foundation for providing fantastic cycling routes, with frequent indicators, orientation maps and markings for "analog" orientation.

    Next year

    Given the cool experience, and many declarations of intent, it is very likely that this challenge will happen again next year, in autumn; but in the opposite direction! Want to join?

    ]]>
    Add syntactic sugar to your Android Preferences https://www.liip.ch/fr/blog/syntactic-sugar-android-preferences-kotlin https://www.liip.ch/fr/blog/syntactic-sugar-android-preferences-kotlin Tue, 09 Oct 2018 00:00:00 +0200 TL;DR

    You can find SweetPreferences on Github.

    // Define a class that will hold the preferences
    class UserPreferences(sweetPreferences: SweetPreferences) {
        // Default key is "counter"
        // Default value is "0"
        var counter: Int by sweetPreferences.delegate(0)
    
        // Key is hardcoded to "usernameKey"
        // Default value is "James"
        var username: String? by sweetPreferences.delegate("James", "usernameKey") 
    }
    
    // Obtain a SweetPreferences instance with default SharedPreferences
    val sweetPreferences = SweetPreferences.Builder().withDefaultSharedPreferences(context).build()
    
    // Build a UserPreferences instance
    val preferences = UserPreferences(sweetPreferences)
    
    // Use the preferences in a type-safe manner
    preferences.username = "John Doe"
    preferences.counter = 34

    Kotlin magic

    The most important part of the library is to define properties that run code instead of just holding a value.

    From the example above, when you do:

    val name = preferences.username

    what is really happening is:

    val name = sweetPreferences.get("username", "James", String::class)

    The username property name is converted to a string key, the "James" default value is taken from the property definition, and the String class is automatically inferred.

    To write this simple library, we used constructs offered by Kotlin such as Inline Functions, Reified type parameters, Delegated Properties, Extension Functions and Function literals with receiver. If you are starting with Kotlin, I warmly encourage you to go check those. It's only a small part of what Kotlin has to offer to ease app development, but already allows you to create great APIs.

    Next time you need to store preferences in your Android app, give SweetPreferences a try and share what you have built with it. We’d like to know your feedback!

    ]]>
    How Content drives Conversion https://www.liip.ch/fr/blog/how-content-drives-conversion https://www.liip.ch/fr/blog/how-content-drives-conversion Tue, 09 Oct 2018 00:00:00 +0200 What do users really want from website content? We have created a pyramid of needs. Discover our 5 insights.

    ]]>
    From coasters to Vuex https://www.liip.ch/fr/blog/from-coasters-to-vuex https://www.liip.ch/fr/blog/from-coasters-to-vuex Tue, 09 Oct 2018 00:00:00 +0200 You'll take a coaster and start calculating quickly. All factors need to be taken into account as you write down your calculations on the edge of the coaster. Once your coaster is full, you'll know a lot of answers to a lot of questions: How much can I offer for this piece of land? How expensive will one flat be? How many parking lots could be built and how expensive are they? And of course there's many more.

    In the beginning, there was theory

    Architecture students at ETH learn this so-called "coaster method" in real estate economics classes. Planning and building a house of any size is no easy task to begin with, and neither is understanding the financial aspect of it. To understand all of those calculations, some students created spreadsheets that do the calculations for them, which is prone to error. There are many questions that can be answered and many parameters that influence those answers. The ETH IÖ app was designed to teach students about the complex correlations between the different factors that influence the decision, and whether building a house on a certain lot is financially feasible or not.

    The spreadsheet provided by the client PO

    The product owner at ETH, a lecturer for real estate economics, took the time to create such spreadsheets, much like the students. These spreadsheets contained all calculations and formulas that were part of the course, as well as some sample calculations. After a thorough analysis of the spreadsheet, we came up with a total of about 60 standalone values that could be adjusted by the user, as well as about 45 subsequent formulas that used those values and other formulas to yield yet another value.

    60 values and 45 subsequent formulas, all of them calculated on a coaster. Implementing this over several components would end up in a mess. We needed to abstract this away somehow.

    Exploring the technologies

    The framework we chose to build the frontend application with was Vue. We had already used Vue to build a prototype, so we figured we could reuse some components. We already valued Vue's size and flexibility and were somewhat familiar with it, so it was a natural choice. There are two main possibilities for handling your data when working with Vue: either manage state in the components, or in a state machine like Vuex.

    Since many of the values need to be either changed or displayed in different components, keeping the state on a component level would tightly couple those components. This is exactly what is happening in the spreadsheet mentioned earlier. Fields from different parts of the sheet are referenced directly, making it hard to retrace the path of the data.

    A set of tightly coupled components. Retracing the calculation of a single field can be hard.

    Keeping the state outside of the components and providing ways to update the state from any component decouples them. Not a single calculation needs to be done in an otherwise very view-related component. Any component can trigger an update, any component can read, but ultimately, the state machine decides what happens with the data.

    By using Vuex, components can be decoupled. They don't need state anymore.

    Vue has a solution for that: Vuex. Vuex allows you to decouple the state from components, moving it over to dedicated modules. Vue components can commit mutations to the state or dispatch actions that contain logic. For a clean setup, we went with Vuex.

    Building the Vuex modules

    The core functionality of the app can be boiled down to five steps:

    1. Find the lot - Where do I want to build?
    2. Define the building - How large is it? How many floors, etc.?
    3. Further define any building parameters and choose a reference project - How many flats, parking lots, size of a flat?
    4. Get the standards - What are the usual prices for flats and parking lots in this region?
    5. Monetizing - What's the net yield of the building? How can it be influenced?

    Those five steps essentially boil down to four different topics:

    1. The lot
    2. The building with all its parameters
    3. The reference project
    4. The monetizing part

    These topics can be treated as Vuex modules directly. An example of a basic Lot module would look like the following:

    // modules/Lot/index.js
    
    export default {
      // Namespaced, so any mutations and actions can be accessed via `Lot/...`
      namespaced: true,
    
      // The actual state: All fields that the lot needs to know about
      state: {
        lotSize: 0.0,
        coefficientOfUtilization: 1.0,
        increasedUtilization: false,
        parkingReductionZone: 'U',
        // ...
      }
    }

    The fields within the state are some sort of interface: Those are the fields that can be altered via mutations or actions. They can be considered a "starting point" of all subsequent calculations.

    Those subsequent calculations were implemented as getters within the same module, as long as they are still related to the Lot:

    // modules/Lot/index.js
    
    export default {
      namespaced: true,
    
      state: {
        lotSize: 0.0,
        coefficientOfUtilization: 1.0
      },
    
      // Getters - the subsequent calculations
      getters: {
        /**
         * Unit: m²
         * DE: Theoretisch realisierbare aGF
         * @param state
         * @return {number}
         */
        theoreticalRealizableCountableFloorArea: state => {
          return state.lotSize * state.coefficientOfUtilization
        },
    
        // ...
      }
    }

    And we're good to go. Mutations and actions are implemented in their respective store modules too. This makes it more obvious which parts of the data actually change.

    Benefits and drawbacks

    With this setup, we've achieved several things. First of all, we separated the data from the view, following the "separation of concerns" design principle. We also managed to group related fields and formulas together in a domain-driven way, thus making their location more predictable. All of the subsequent formulas are now also unit-testable. Testing their implementation within Vue components is harder as they are tightly coupled to the view. Thanks to the mutation history provided by the Vue dev tools, every change to the data is traceable. The overall state of the application also becomes exportable, allowing for an easier implementation of a "save & load" feature. Also, reactivity is kept as a core feature of the app - Vuex is fast enough to make any subsequent update of data virtually instant.

    However, as with every architecture, there are also drawbacks. Mainly, by introducing Vuex, the application gets more complex in general. Hooking the data up to the components requires a lot of boilerplate - otherwise it's not clear which component is using which field. As all the store modules need similar methods (e.g. loading data or resetting the entire module), there's a lot of boilerplate there as well. Store modules are also tightly coupled with each other, since they use fields and getters of basically all other modules.

    In conclusion, the benefits of this architecture outweigh the drawbacks. Having a state machine in this kind of application makes sense.

    Takeaway thoughts

    The journey from the coasters, to the spreadsheets, to a whiteboard, to an actual usable application was thrilling. The chosen architecture allowed us to keep a consistent setup, even with the growing complexity of the calculations in the back. The app became more testable. The Vue components don't even care anymore about where the data comes from, or what happens with changed fields. Separating the view and the model was a necessary decision to avoid a mess and tightly coupled components - the app stayed maintainable, which is important. After all, the students are using it all the time.

    ]]>
    Enkeltauglichkeit. Our responsibility towards the future. https://www.liip.ch/fr/blog/enkeltauglichkeit https://www.liip.ch/fr/blog/enkeltauglichkeit Wed, 03 Oct 2018 00:00:00 +0200 Maintaining quality of life for future generations.

    It is about making a gesture that proves we are ready to consider nature as the most important factor in all our decisions, in order to act in favour of future generations. The association enkeltauglich.jetzt, Pain pour le prochain and committed personalities from the business world are behind the concept of «Enkeltauglichkeit», in other words maintaining quality of life for tomorrow.

    Purpose over profit

    The first step towards change is to identify how companies position themselves and what attitude they have towards nature. Until now, they have mostly adopted a posture of superiority («We are above nature»), with the results we all know. Through our commitment to maintaining quality of life for future generations, we anchor being part of nature in the company's intrinsic attitude.

    Liip thinks and acts in favour of maintaining quality of life for future generations

    At Liip, we are convinced that we are all connected to nature. It nourishes us and our economy, and we therefore have a responsibility towards it. Towards our employees, our clients and the community. We thus cultivate an integrated vision of the economy and nature.

    Today, we are the ancestors of tomorrow

    At Liip, we take on our responsibility and act sustainably. Pragmatically, as is our habit, we support the cause of enkeltauglich.jetzt. Because people are meant to cooperate in groups, to take care of one another and to use their collective intelligence - in a sustainable and self-organised way. The Earth is home to all of us, and nature is our means of subsistence. Today, we are the ancestors of tomorrow. How will our descendants look back on us?

    ]]>
    meinplatz.ch https://www.liip.ch/fr/blog/meinplatz-ch https://www.liip.ch/fr/blog/meinplatz-ch Fri, 28 Sep 2018 00:00:00 +0200 Project

    The new platform helps people with disabilities find a day placement, housing or a job in the canton of Zurich. It gathers all the necessary information about existing places. Advisory services for people with disabilities are clearly listed, as are the available places in the institutional sector. Nevertheless, all places are listed on the platform, not only the available ones. The platform thus serves as a first point of contact where everyone can easily find the information they are looking for. Meeting the requirements of all target groups was the main challenge of this project.

    While designing the platform, the whole development team looked intensively into the needs of the different target groups. The people concerned as well as their relatives, advisory and referring bodies, institutions, authorities and legal representatives: all these groups were stakeholders throughout the design phase.

    Design

    Liip designed the meinplatz.ch logo as well as its entire visual identity. Sober and simple, the design is meant to meet the expectations of very different target groups who all use the same platform. To keep the design centred on the needs of the most demanding/intensive users, the focus was placed on people with disabilities. The platform has to be just as accessible for relatives as for authorities and legal representatives. It has to be simple and provide relevant information for all users. The advisory services (one of the platform's features) must also support intensive users. Bringing all these requirements together in a simple design was not easy, but our user experience designers managed it and even exceeded expectations.

    Technology

    The website was built with October, an open-source CMS. Thanks to its ease of use and its precise role-based permission scheme, October CMS was the ideal solution. For meinplatz.ch, different administrators need to enter and edit content. Thanks to our experience and with the help of the accessibility developer guide, building a platform suited to people with disabilities turned out to be a success.

    Trust over control

    Together with INSOS Zurich, we managed to create a new platform for people with disabilities, with a strong focus on accessibility. The key to this success: cooperation based on trust and shared goals.

    INSOS: The collaboration with Liip was very enriching and professional. Liip stands out for its exemplary transparency and communication. I am looking forward to continuing our collaboration.
    Maya Graf-Seelhofer, assistant / deputy director, INSOS Zurich

    A successful delivery with a minimum of time invested, thanks to the efficient collaboration between Liip and INSOS Zurich.
    Marc Brühwiler, Product Owner, Liip AG

    ]]>
    Playing with Hololens https://www.liip.ch/fr/blog/playing-with-hololens https://www.liip.ch/fr/blog/playing-with-hololens Tue, 25 Sep 2018 00:00:00 +0200 Yesterday we took a couple of hours to play with the Hololens, the augmented reality headset from Microsoft. We quickly built a prototype without a single line of code, using the basic tools available out of the box on the Hololens: displaying images, 3D objects or web browser windows in the room. All this was done using the device itself, and it looked quite funny.

    The idea

    As a designer, I wanted to learn how to play and display information in 3D, and of course, have fun with this nice big tech-toy. During the tests, we thought about what was already at our disposal to play with. We remembered a web tool we'd developed internally. It's a simple web interface showing the reservations of our conference rooms. You can display the detail of a room and book it through a simple form. What a nice base to start with!

    The process

    Put up web browsers in the hallway, near the doors to the conference rooms. Display the reservation system in these windows. Then virtually put an image "on" the doors: a green checkmark if the room is free, a red cross if the room is occupied. That's it.

    The result

    We were able to walk in the office, see clearly which room was free and book it through the web interface displayed next to the door. Yay!

    Well, wearing this big device on your head still makes your colleagues laugh, but this technology is so fascinating that you quickly forget you look like an astronaut talking to invisible things, lost in the wide space of the office.

    Next steps

    As the prototype is made with the basic tools of the device, it can't be saved and started as an app. All the windows can be moved or removed by the next user, which is not very convenient. Encapsulating these functionalities into an app should be the next step. In parallel, the UX could be adapted to this new 3D environment: bigger buttons, a simpler interface, or why not trigger a reservation through a single voice command: “Hey, I need this room right now for 30 minutes!”

    Learnings

    Prototyping for augmented reality can be quick and efficient, even for non-coders. Designers can learn intuitively how to lay out information, arrange things and create a visual hierarchy directly in space. A couple of images and a bit of HTML can be enough to deliver great prototypes and be a solid base for your future cutting-edge application.

    ]]>
    Delivering Service Design with Scrum - 6 insights https://www.liip.ch/fr/blog/delivering-service-design-with-scrum-6-insights https://www.liip.ch/fr/blog/delivering-service-design-with-scrum-6-insights Wed, 19 Sep 2018 00:00:00 +0200 Starting something new is always inspiring and exciting.

    Getting the chance to start from scratch designing a new and effective service together with a team is what I like best in my job as a Service Designer at Liip. Immersing myself in customers’ needs, developing great new ideas, making them tangible with prototypes and getting stimulating feedback - these are definitely the most inspiring and fun parts of service design projects.

    But the delivery can be a really hard landing.

    When working on service design projects, we break open existing silos. We align all the different parts involved in the service to create a better and more efficient service experience. For the delivery of the new service, that can also entail a high degree of complexity. In addition to the hard work of developing concrete solutions, we also have to deal with other challenges, for example changing the habits and behavior of people or clarifying organizational uncertainties. Further examples are the search for the right decision-makers and sponsors across the different parts of the company, and technical restrictions. After the thrill of the first creative phases, delivery can mean a really hard landing.

    Combining service design with agile methods helps facing the challenges of delivering.

    Having worked in both Service Designer and Scrum Master roles in recent years, I have tried several ways of combining Service Design with Scrum. My goal is to combine the best of the two ways of working to make this hard landing a little softer. Here are 6 learnings that proved to be very helpful:

    1. Use epics and user stories to split the service into more “digestible” pieces.

    Everyone probably knows the feeling of not seeing the wood for the trees when standing in front of a wall full of sketches and stickies with ideas. Then it’s very helpful to create a list of epics. In the Scrum world, epics are “a large body of work that can be broken down into a number of smaller stories” (see Atlassian). In Service Design, epics can help divide the entire service into smaller pieces. This reduces complexity and allows dealing with the specific and limited challenges of a single epic, rather than the whole. Also, the ability to clarify one epic gives good clues about where to start with this big mountain of work.

    2. Use the service blueprint as the master to create the backlog.

    In software projects we often use user story maps to create epics and user stories. In service design projects, the service blueprint is a very powerful alternative for user story mapping. Service blueprints help map and define all aspects of the future service - from the targeted experience to the internal processes, systems, people, tools, etc. involved. This contains a lot of useful information for user stories, e.g.:

    • The actors involved, e.g. the different types of users (as personas), different staff people, systems, tools, etc.
    • The required functions, as each step of a service blueprint usually contains a number of functions that will be written in the different user stories.
    • The purpose of the function, as you can read from each part of the blueprint what is triggered by this step.

    After a first version of the user story backlog is created, you can reassign the user stories to the service blueprint. Mapping all the written stories to the blueprint is also great to determine if some user stories have been forgotten. This helps a lot to have a better overview of what to do and how it affects the service experience in the end.

    3. Do technical spikes in an early stage of the project in order to make your service more feasible.

    If the service contains digital parts, it’s highly recommended to tackle the tough technical nuts to crack as early in the project as possible. Scrum provides us with so-called technical spikes - a great chance to dive deeper into different possibilities of solving technical issues of the new service. Strictly timeboxed, they allow developers to explore different technical solutions and suggest the one that fits best. Furthermore, the team can discuss the consequences and adapt the service, in order to still create a great experience while finding a feasible way of delivering it.

    4. Estimate the business value of the different aspects of the service.

    In Scrum, we use business value poker to prioritize user stories. A business value is a relative comparison of the value of different user stories. It helps to prioritize the delivery and to show where most of the time and money needs to be invested. This process is also very healthy (and tough!) for service ideas. Knowing how much value each part of the service brings to the whole service vision is very valuable and allows the team to focus on what really matters.

    You can also do business value poker in combination with an adaptation of the six thinking hats method: e.g. one team member estimates the business value wearing the hat of the user, one the hat of the top manager interested in return on investment, and one the hat of the staff member interested in delivering a service experience that doesn’t mean additional work.

    5. Deliver a “Minimum Viable Service” (MVS) before taking care of the rest.

    Once we have the user story backlog rooted in the service blueprint and we know which story brings most value to our service vision, we start delivering the service step by step. In agile software projects, the team starts by producing the Minimum Viable Product (MVP), which means delivering the smallest amount of features necessary to create a valuable, reduced product for users. For services, we do the same - creating a “Minimum Viable Service” (MVS). This allows the team to develop a first basic version of the service with a short time to market. Delivering results at an early stage of the project not only motivates the team but also allows continuous learning, adapting and evolving of the service.

    6. Work in cross functional, self organised and fully empowered teams.

    Scrum teams are self-organised and include all the skills needed, without a hierarchy-based system. In a service design setting, many different fields of a company are involved and it’s hard to specify decision makers and responsible people. But that’s the key. Including each and every stakeholder of a whole service in the project is never-ending and rarely productive. Therefore, dedicate a small and powerful team of the experts involved, give them the full competence to decide and to organise themselves, but also the responsibility to deliver value.

    Scrum provides great ways to deliver complex service projects.

    This blogpost highlights a few aspects of how we manage the challenges of delivering a complex service project by combining service design with Scrum - from the tools and artifacts to the mindset and the way teams work together.

    Yet even when following all these aspects, delivering a complex service remains a hard piece of work. But it becomes a much easier one to handle with structured, well-working delivery methods that bring our ideas to life. Step by step - sprint by sprint.

    ]]>
    A personal view on what Holacracy has brought us https://www.liip.ch/fr/blog/a-personal-view-on-what-holacracy-has-brought-us https://www.liip.ch/fr/blog/a-personal-view-on-what-holacracy-has-brought-us Tue, 18 Sep 2018 00:00:00 +0200 I was invited to talk about our adoption of Holacracy at a recent online gathering of the Siemens Grow2Glow network. It gave me the opportunity to think about the impact the Holacracy organisational system has had on the agency, its culture and the way we work. I first focused on the negative impacts, the things we need to fix, but then realized I wasn’t being fair to reality: the benefits outweigh the challenges by far.

    Thinking about it, and having had the opportunity to be in the company through several “eras”, I realized how much things have changed. I have no scientific evidence for what I am sharing here; these are things I experienced and observed - a personal view.

    Good stuff happened before

    Liip always had a strong focus on values and human-centeredness: ethics always had a say, collaborators also. And before the adoption of Holacracy, we benefited a lot from the introduction of the Scrum agile framework (2009) and later from cross-functional teams and guilds (~ Spotify model, 2012).

    All of that happened before the founders adopted Holacracy (2016), a period of time which I will refer to, in this post, as the "partners-era". Family-like dynamics were the norm, partners were the “elders” of the organisation, employees the “grown-up children”.

    The years since our adoption of Holacracy will hereunder be coined the "holacracy-era".

    Good stuff happening, 3 years after Holacracy adoption

    It is ok to try things, even serious things

    Entrepreneurship in the partners-era boiled down to this: the partners decided on the life and death of the offering. Adventurous workers had to pitch their ideas and get buy-in and approval from the partners.

    In the holacracy-era, things are different: services are launched out of the initiative and authority of any employee, creating a momentum that attracts collaborators and clients. In the last two years we have indeed added several services to our portfolio through individual initiatives, and more are incubating.

    It is ok to stop things, even serious things

    If it has become ok to launch things, it’s also ok to stop things. In one of the circles I am involved in, we launched a new consulting service a few months ago, thinking it was a great complement to our existing offering. Although I still think it was, it turned out the market did not show up. We took the opportunity of someone leaving the company to consider dropping that new offering.

    In the partners-era, I would have had to raise the partners’ awareness to stop the thing – I wouldn’t have had the authority to do so – and the partners would probably have felt compelled to adopt a parental posture: “you should have known better”, “be careful next time”, “what will the clients think?”, “how should I now communicate on that?”, …

    Nowadays: I sensed the tension in my role in that circle and took the initiative, first seeking advice from the ones impacted, clients included, and envisioning with them the possible scenarios. Then I made up my mind on a “closing” scenario. Yet I sensed the need to give the rest of the company the opportunity to react and potentially object to the decision before I would finally enact it. After all, I wasn’t sure all impacted persons had been involved. I launched an integrative decision process on our internal messaging app that read like: “In my role X, I’m about to close service Y, read more in this doc. Today you can ask questions, tomorrow you can give feedback, Thursday you can raise objections.” It all went very well. Those who cared or were impacted decided to participate, the others didn’t feel compelled to – great relief – and we closed this service smoothly.

    Continuous, organic micro re-organisations

    In the partners-era I participated in, and was impacted by, several meetings during which we “reshaped teams”. In such meetings we would list all the people in the office and then try to shape a certain number of “balanced” teams out of them.

    In this holacracy-era we don’t “gather and redistribute the cards” anymore. Teams evolve slowly; roles and people come and go organically. An interesting example: a team of 20+ had a very rough year, and their financial performance was bad for several months. The team could not agree on how to change things: should they focus, and if yes, on which market, technology or value proposition? Disagreement and the implicit need for consensus prevented the situation from evolving.

    What unlocked the situation was that some people from that team literally raised their hands and pitched: “I wanna launch a mini team focused on tech X, I have the passion, we can make it happen, who’s in?” A few joined. Others saw the drive and copied: “I believe there’s a market for Y, who’s in?” The big team slowly dissolved into smaller teams that were more focused, with clearer motivation and purpose. The big team had re-purposed itself into smaller ones.

    Organisational fitness

    In the partners-era my job in the company was necessarily important to me, because it was my only job within the company - like most employees in this world. This one-person-one-job relationship forces people to protect their job, in order to protect their belonging to the organisation, their salary, status, etc. And thus organisations keep adding jobs and almost never remove them, except in abrupt attempts: re-organisations.

    In a holacratic organisation, employeeship and roles are decoupled: I now see people removing a role they fill because the role is not needed anymore. I see others merging their accountabilities into other roles, roles that they don’t themselves hold. The roles that exist are there because of an existing need, not a past one.

    I am not my roles and my roles are not me: getting rid of my role doesn’t directly threaten me.

    Talents, experience and passion flow to roles

    With Holacracy, we see many more opportunities for employees to act in roles that fit their experience, motivation and/or talents. Talents, experience or passion get noticed, and roles get proposed.

    More demand for personal development

    In the partners-era, just a few of us were interested in improving our soft skills. The only incentive would actually have come from a partner, in a talk like “Now that you are a Product Owner, you should improve on this or that soft skill”. In this holacracy-era, I believe the talk happens within each of us: “Now that I am self-org, I sense the need to be a better leader/colleague/collaborator/...”. We do see growing demand and attendance for trainings on social and leadership skills.

    Fewer rants about the heads and the organisation

    Those who process the annual Employee Meetings sense that there are far fewer rants, and that the message from employees has moved from “We (meaning you, Boss) need to change this” to “I know it’s in my hands to make this happen, where do I start?” We have moved from expecting change from others to expecting change from ourselves.

    A last word

    I wanted this blogpost to focus on the benefits we are seeing at Liip from having adopted Holacracy. A post on the challenges, again a personal view on them, will follow.

    ]]>