Liip Blog https://www.liip.ch/en/blog Kirby Wed, 11 Jul 2018 00:00:00 +0200 Latest articles from the Liip Blog en It's never "just a WebView" https://www.liip.ch/en/blog/its-never-just-a-webview https://www.liip.ch/en/blog/its-never-just-a-webview Wed, 11 Jul 2018 00:00:00 +0200 Thomas already talked about how the web and the app ecosystems are different. They don't have the same goals, and should aim for different user experiences. Here I will focus on the technical side of embedding a website in a native app, using WebView on Android and WKWebView on iOS.

Websites have extra UI that you don't want in your app

Websites always contain extra content that is not needed when wrapping them in an app. They have a header title, a navigation menu, and a footer with extra links. Depending on how "native" you want your app to appear, you will show a native navigation bar, a custom navigation flow, and certainly not a footer under each screen.

If you are lucky, you can ask the website developer to create a special template for the app that removes those extra features and only shows the main content. Otherwise you'll have to inject JavaScript to hide this content.

If the website contains a special "app" template, make sure you always use it

We built an app where the website had a special template. Each page could be loaded with the extra parameter mobile=true, like http://www.liip.ch/?mobile=true. This worked great, but the links contained in each page did not have this extra parameter. If we had simply allowed link clicks without filtering, the user would have seen the non-app pages. We had to catch every link the user clicked and append the extra parameter "manually". This is quite easy for GET parameters, but it can get quite tricky when POSTing a form.
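A minimal Android sketch of this interception could look like the following. The class name and the exact filtering logic are illustrative, not the original project's code; only the mobile=true parameter comes from the example above:

```kotlin
import android.net.Uri
import android.webkit.WebResourceRequest
import android.webkit.WebView
import android.webkit.WebViewClient

// Hypothetical sketch: re-append the "app template" parameter to every
// navigation so the user never sees the non-app pages.
class AppTemplateWebViewClient : WebViewClient() {

    override fun shouldOverrideUrlLoading(view: WebView, request: WebResourceRequest): Boolean {
        val url: Uri = request.url
        if (url.getQueryParameter("mobile") == null) {
            val withTemplate = url.buildUpon()
                .appendQueryParameter("mobile", "true")
                .build()
            view.loadUrl(withTemplate.toString()) // reload with the template applied
            return true // we handled this navigation ourselves
        }
        return false // parameter already present, let the WebView proceed
    }
}
```

Note that this only covers GET navigations; as the paragraph above says, POSTed forms need more work.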

Making sure users cannot go anywhere

By default a WebView will follow every link. This means that as long as the web page shows a link, the user will be able to click on it. If your page links to Google or Wikipedia, the user can go anywhere from within the app. That can be confusing.

It is easier to block every link and specifically allow the ones we know, in a "whitelist" fashion. This is particularly important because the webpage can change without the app developer being notified. New links can appear on the page and capsize the whole navigation.
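Such a whitelist can be sketched on Android with a WebViewClient that blocks everything except explicitly allowed hosts; the class name and the host in the usage line are examples, not code from the original project:

```kotlin
import android.webkit.WebResourceRequest
import android.webkit.WebView
import android.webkit.WebViewClient

// Block-by-default navigation: only hosts we explicitly know are followed.
class WhitelistWebViewClient(private val allowedHosts: Set<String>) : WebViewClient() {

    override fun shouldOverrideUrlLoading(view: WebView, request: WebResourceRequest): Boolean {
        // Returning true cancels the navigation, so any link pointing
        // outside the whitelist is silently dropped.
        return request.url.host !in allowedHosts
    }
}

// Usage: webview.webViewClient = WhitelistWebViewClient(setOf("www.liip.ch"))
```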

WebViews take a lot of memory

WebViews use a lot of RAM compared to native views. When a WebView is hidden — when we show another screen or put the app in the background, for example — it is very likely that the system will kill the Android Activity that contains the WebView, or the whole iOS application.

Because the WebView takes so much memory, it matches the system's criteria for killing processes to make space for other apps.

When the Activity or application containing the WebView is restored, the view has lost its context. If the user had entered content in a form, everything is gone. Furthermore, if the user had navigated within the website, the WebView can't remember which page the user was on.

You can mitigate some of the inconvenience by handling state restoration on Android and iOS. Going as far as remembering the state inside a WebView will cause a lot of distress for the developer :)
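On Android, a first level of restoration can be sketched with WebView.saveState()/restoreState(), which preserve the back/forward list (but, as noted above, not form content or the full page state). The layout name and URL here are placeholders:

```kotlin
// Inside the Activity hosting the WebView; layout and URL are placeholders.
override fun onSaveInstanceState(outState: Bundle) {
    super.onSaveInstanceState(outState)
    webview.saveState(outState) // keeps the navigation history, not form input
}

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_webview)
    if (savedInstanceState != null) {
        webview.restoreState(savedInstanceState) // back on the page the user left
    } else {
        webview.loadUrl("https://www.liip.ch/?mobile=true")
    }
}
```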

WebView content is a blackbox

Unless you inject JavaScript to inspect what is inside, you cannot know what is displayed to the user. You don't know which areas are clickable, you don't (easily) know whether the view is scrollable, and so on.

WebViews don't handle file downloads

On iOS, there is no real user-facing file system, even with the new Files app. If your website offers PDF downloads, for example, the WebView simply does nothing. One option is to catch the URLs that point to PDFs and open them in Safari.

On Android, you can use setDownloadListener to be notified when the WebView detects a file that should be downloaded instead of displayed. You have to handle the file download yourself, by using DownloadManager for example.

webview.setDownloadListener { url, _, contentDisposition, mimetype, _ ->
  val uri = Uri.parse(url)
  val request = DownloadManager.Request(uri)
  // Derive a sensible file name from the URL and the server's headers
  val filename = URLUtil.guessFileName(url, contentDisposition, mimetype)
  request.allowScanningByMediaScanner()
  // Show a system notification once the download has completed
  request.setNotificationVisibility(DownloadManager.Request.VISIBILITY_VISIBLE_NOTIFY_COMPLETED)
  // Save the file into the public Downloads folder
  request.setDestinationInExternalPublicDir(Environment.DIRECTORY_DOWNLOADS, filename)
  // Hand the request over to the system's DownloadManager
  val dm = context?.getSystemService(Context.DOWNLOAD_SERVICE) as DownloadManager
  dm.enqueue(request)
}

WebViews don't handle non-http protocols such as mailto:

You will have to catch any non-http protocol load and define what the application should do with it.
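One possible Android approach, sketched here, is to let the WebView handle http(s) and hand every other scheme to the system via an intent. The class name is illustrative, and error handling for a missing mail app (ActivityNotFoundException) is omitted:

```kotlin
import android.content.Intent
import android.webkit.WebResourceRequest
import android.webkit.WebView
import android.webkit.WebViewClient

class SchemeAwareWebViewClient : WebViewClient() {

    override fun shouldOverrideUrlLoading(view: WebView, request: WebResourceRequest): Boolean {
        val url = request.url
        return when (url.scheme) {
            "http", "https" -> false // normal pages stay in the WebView
            "mailto" -> {
                // Open the user's email app with the recipient pre-filled.
                view.context.startActivity(Intent(Intent.ACTION_SENDTO, url))
                true
            }
            else -> {
                // tel:, geo:, ... let the system find a matching app.
                view.context.startActivity(Intent(Intent.ACTION_VIEW, url))
                true
            }
        }
    }
}
```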

The Android back button is not handled

If the user presses the back button, the standard action (leaving the Activity, ...) will happen. It will not trigger the "previous page" action the user is used to. You have to handle it yourself.

webview.setOnKeyListener { _, keyCode, event ->
  // KeyEvent.ACTION_UP (not MotionEvent) is the right constant for key events
  if (keyCode == KeyEvent.KEYCODE_BACK && event.action == KeyEvent.ACTION_UP && webview.canGoBack()) {
    webview.goBack()
    return@setOnKeyListener true // consumed: we navigated back in the WebView
  }
  return@setOnKeyListener false // not consumed: let the system handle it
}

There is no browser UI to handle previous page, reload, loading spinner, etc...

If the website is more than one page, you need to offer a way to go back and forth in the navigation history. This is particularly important if you removed the website's header, footer and menu. On Android, users can still use the back button (if you enabled it), but on iOS there is no built-in way to do that.

You should also offer the possibility to reload the page. Showing a loading indicator, like in a native application or a standard browser, helps users know when the page has loaded completely.
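On Android, a loading indicator can be driven by the WebViewClient page callbacks. This fragment assumes a ProgressBar called progressBar somewhere in your layout (an assumption, not from the original article):

```kotlin
import android.graphics.Bitmap
import android.view.View
import android.webkit.WebView
import android.webkit.WebViewClient

webview.webViewClient = object : WebViewClient() {
    override fun onPageStarted(view: WebView, url: String, favicon: Bitmap?) {
        progressBar.visibility = View.VISIBLE // page started loading
    }

    override fun onPageFinished(view: WebView, url: String) {
        progressBar.visibility = View.GONE // main frame finished loading
    }
}
```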

There is no default way to display errors

When a loading error occurs, because there is no network connection for example, the WebView doesn't handle it for you.

On iOS, the view will not change. If the first page fails to load, you will have a white screen.

On Android, you will have an ugly default error message like this one:

Sample Android WebView error
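On Android you can replace that default page by intercepting load errors. In this sketch, error.html is a hypothetical asset you would ship with the app:

```kotlin
import android.webkit.WebResourceError
import android.webkit.WebResourceRequest
import android.webkit.WebView
import android.webkit.WebViewClient

webview.webViewClient = object : WebViewClient() {
    override fun onReceivedError(view: WebView, request: WebResourceRequest, error: WebResourceError) {
        super.onReceivedError(view, request, error)
        // Only react to the main page failing, not to a missing image or script.
        if (request.isForMainFrame) {
            view.loadUrl("file:///android_asset/error.html")
        }
    }
}
```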

WebViews don't know how to open target="_blank" links

WebViews don't know how to handle links that are supposed to open a new browser window. You have to handle them yourself. You can, for example, cancel the "new window" opening and load the page in the same WebView. But by default nothing happens for those links.
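One way to do this on Android, sketched under the assumption that "open in the same WebView" is the policy you want, uses a WebChromeClient and the requestFocusNodeHref trick to recover the link target:

```kotlin
import android.os.Message
import android.webkit.WebChromeClient
import android.webkit.WebView

webview.settings.setSupportMultipleWindows(true)
webview.webChromeClient = object : WebChromeClient() {
    override fun onCreateWindow(
        view: WebView, isDialog: Boolean, isUserGesture: Boolean, resultMsg: Message
    ): Boolean {
        // Fetch the URL of the link that was tapped...
        val href = view.handler.obtainMessage()
        view.requestFocusNodeHref(href)
        // ...and load it in the same WebView instead of a new window.
        href.data.getString("url")?.let { view.loadUrl(it) }
        return false // no new window was created
    }
}
```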

iOS and security

iOS is — rightfully — very conservative regarding security. Apple added App Transport Security (ATS) in iOS 9 to prevent loading insecure content. If your website does not use a modern TLS setup, you will have to disable ATS, which is never a good sign.
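If only a single legacy domain is the problem, a scoped ATS exception in Info.plist is less drastic than disabling ATS entirely. A sketch, where legacy.example.com is a placeholder:

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>legacy.example.com</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>
```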

It is hard to make code generic

Since every page can be different and have different needs, it is hard to make the code generic. For one client, where we show two sub-pages of their website on different screens, we have two WebView handlers because of their differing needs.

Once you have thought through all these things and believe you can start coding your webview, you will discover that iOS and Android handle them differently.

There is a strong dependency on the website

Once you have faced all these challenges and your app is ready to ship, there is one last thing that you cannot control: you are displaying a website that you did not code, and that is most likely not managed by someone in your company.

If the website changes, chances are the webmaster won't think of telling you, mainly because they don't expect a single change on the website to be enough to mess up your app completely.

Conclusion

WebViews can be used to wrap existing websites quickly, but it is never "just" a WebView. There is some work to do to deliver the high quality that we promise at Liip. My advice for you:

  • Use defensive programming.
  • List all features of each page with the client, and define the steps clearly, like for any native app.
  • Get in touch with the website developer and collaborate. Make sure they tell you when the website changes.
How to change things in your company https://www.liip.ch/en/blog/how-to-change-things-in-your-company https://www.liip.ch/en/blog/how-to-change-things-in-your-company Wed, 11 Jul 2018 00:00:00 +0200 Do you sometimes want to change things in your company? Do you face processes that are slow or error-prone? Do you see things that are missing or should be done in a new way?
I moved from being a UX Designer to take care of changes like these within Liip. In the beginning it often took me forever to have an impact. But over time my process became more stable and reliable. Today I’m able to deliver the first results within a month or two, and this is how I do it.

Form a team with diverse skills that can implement solutions

Like everything else, change means a lot of work. You have to write, develop, design, communicate and organise lots of tasks. I often lost time because I staffed my projects with volunteers who cared about the topic but didn't have the skills needed to implement the ideas. Today I form and book a core team with diverse skills (a developer, a designer, a subject-matter expert) before I start the project.

Formulate a goal and face risks to know where you are going

Finding a common vision was often time-consuming, and talking about risks sometimes drained energy and enthusiasm. After I read the excellent book "Sprint" by Jake Knapp, this went much faster and easier. Many of the following steps are inspired by this book.
On the first day, we (the core team) ask ourselves “Why are we doing this? Where do we want to be in five years?” and formulate a long term goal. We answer “How could we fail?” and formulate the emerging fears and risks as challenges.
This usually takes about one hour and the team is still motivated and energised afterwards.

Make a map to understand the scope of your challenge

Change often feels epic and overwhelming to me, especially in the beginning. Where should I start in this huge, complex topic?
So we start by drawing a map of the process we are about to change or improve. That's sometimes quite difficult to do, but in the end we get an overview of our challenge, who is involved and which steps the different roles need to take.

goal-map

Ask customers and experts to unlock their knowledge

We show the map to affected employees, clients and experts. By interviewing them about the challenge at hand, we unlock their knowledge and experience. This leads to a complete and more differentiated picture of the challenge. Everything is captured on stickies and clustered into topics at the end of the morning.

Pick a specific target to focus your energies on

We vote for the most important insights from the interviews, the most relevant challenges and the most critical steps in the process. In the end we decide what to focus on and where to put our energy. Out of our decisions, we synthesise a design challenge: a problem statement that we want to solve. Out of an open and complex topic, we pick one specific target with a lot of potential.

pick-target

Turn ideas into detailed sketches to choose the best ones

In my first projects I let people generate ideas about a whole process or topic and write them on stickies with only a sentence or two. The results were broad and often not thought through. It was hard to compare them, and more than once we failed to implement the winning ideas.
Today I let the team members sketch out their ideas more carefully, drawing them out in solo work with the 4 Step Sketch technique (also from the Sprint book). We have fewer solutions, but they are more concrete, and it's easier to evaluate them and to implement the winning ideas.

ideation

Use a service blueprint to stitch the ideas into a coherent solution

Often the winning solutions don't fit together neatly and differ in granularity. To come up with a coherent solution and not forget anything, we use a service blueprint. We define how our solutions will be delivered and what material, actions and infrastructure are needed in every step of the process, all mapped on a huge sheet with different lanes.
Out of this blueprint we write user stories of what needs to be done and prioritise them.

blueprint

Block time for the whole team to implement solutions fast

I usually work on changes with volunteers who do this next to their daily job. It's hard for them to make time for an internal project and still finish their tasks on time. We started to block 4-5 half days in advance, where the whole team is together in a room but everyone works individually on their own tasks. This way we can implement and deliver solutions much faster and work on a predictable timeline.

Deliver a first result within a month

When I started my work as an internal Service Designer and Change Agent it took me forever to tackle problems in my company. With this process, I got a lot faster and I can deliver a result in a predictable timeframe.
If a small team is willing to invest 3-5 days into changing something, we can understand the problem, define the most relevant step and deliver a solution within one month. We don’t produce groundbreaking innovations in this time, but at least a first step.
More often than not, the change process continues once it is started and the team keeps producing more solutions over time.

Steal it, if you like

If you can use any part of this process or the whole thing, please do. If you have questions, need more explanation, or want to tell me how your own change went, reach me at zahida.huber@liip.ch.

Which issues are fixed by using message queues, such as RabbitMQ, and why this is interesting to me. https://www.liip.ch/en/blog/which-issues-are-being-fixed-by-using-message-queues-and-why-its-interesting-to-me https://www.liip.ch/en/blog/which-issues-are-being-fixed-by-using-message-queues-and-why-its-interesting-to-me Wed, 04 Jul 2018 00:00:00 +0200 First of all, thanks for all the feedback I got on my last post. 🤟
For everyone who didn't read it: this post is an addition to my previous post.
❗: This post might not be very interesting to people who already know about message queues. 😬

This blog post briefly describes what message queues are and how and where this technology is used, as I promised in my previous blog post. 🙌

Let’s tackle this topic with:

What are message queues?

What are message queues in the first place? 📤 ✉️ 📥

Message queues are used to decouple applications by adding a common communication layer: applications communicate over this layer rather than peer to peer. A message can be anything that can be stored as plain text. An application sends a message to the message queue, where it is stored until another application takes it off the queue again. There are several technologies that get the job done, but one of the best known nowadays is RabbitMQ.
This is a very brief explanation, so bear with me if it's oversimplified; I don't want to deep-dive into the basics too much.

An image showing the Queuesystem

To put the advantages of message queuing in a nutshell:

  • Fault tolerance 😶: if one system breaks for whatever reason, other systems can still send messages to the queue
  • Improved scalability 😮: if there is unexpected load on an application, you can create multiple instances of it to balance the load, and they can easily use message queues for communication
  • Decreased latency 😲: since messages are sent asynchronously, systems don't have to wait for each other to finish a cross-system task
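The decoupling idea behind these advantages can be sketched in miniature with an in-process queue. A real broker like RabbitMQ adds persistence, routing and network transparency on top of this, but the producer/consumer relationship is the same: the sender never waits for the receiver. The event payload here is made up for illustration.

```kotlin
import java.util.concurrent.LinkedBlockingQueue

fun main() {
    // The queue is the only thing producer and consumer share.
    val queue = LinkedBlockingQueue<String>()

    val producer = Thread {
        // Fire and forget: put() returns as soon as the message is queued.
        queue.put("""{"event":"export.requested","id":42}""")
        println("producer: message sent, moving on")
    }

    val consumer = Thread {
        // take() blocks until a message is available, whenever that is.
        val message = queue.take()
        println("consumer: processing $message")
    }

    consumer.start()
    producer.start()
    producer.join()
    consumer.join()
}
```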

How am I going to use message queues?

There is a framework called Sylius, which I will not explain here. If you are interested in it but don't want to spend hours reading its docs, have a look at Lukas' software evaluation post about Sylius. Basically, Sylius is an eCommerce framework based on Symfony. People who know Symfony know its bundle system.
For the others: you can easily extend Symfony applications by adding bundles. Bundles can be seen as plugins for a Symfony application. There is an open-source bundle called "SyliusImportExportPlugin". As the name implies, you can import and export data of the eCommerce application with this bundle. The main topic of my bachelor's thesis is to add a message queue system to this import/export plugin and measure, with benchmarks, how it affects latency, scalability and fault tolerance.
The following (beautiful) drawing illustrates the changes I'll be doing to the bundle:

The drawing shows the bundle before and after I changed my topic outlined above

The regular file export and the red part will be benchmarked, and the implications will be discussed in my scientific paper.
I will write a follow-up post about the benchmarks and findings, as well as a comparison before and after my project.

Why did I choose this subject?

I like the subject because it gives me the opportunity to learn about a solution to the problems of missing fault tolerance, limited scalability, and systems having to wait for each other. Making two different systems talk to each other is not an easy task. Before message queues, an admin had to export data from one system to a file and then (in the best case) import the file into another system without changing anything. By queuing messages, this task can be automated.

"Great things in business are never done by one person. They're done by a team of people."

  • Steve Jobs

Thanks to everyone who is helping me work on the bundle, and for helping me on my way to graduation.
Thanks to Michael Weibel and Fabian Ryf for helping me with this blogpost.
You guys are all 🔥

Http headers to improve your web app security https://www.liip.ch/en/blog/http-headers-improve-your-web-app-security https://www.liip.ch/en/blog/http-headers-improve-your-web-app-security Tue, 03 Jul 2018 00:00:00 +0200 There is a tool out there that will help you assess the general protection of your web application: Mozilla Observatory

The first time I ran the security test, the result was a bleak "D" (with "A+" being the best and "F" the worst). Not a particularly bad result, but no reason for celebration either. High-profile sites like Wikipedia (which scored a "D" as well) are obviously perfectly fine with a result like this, but not us!

Screenshot of wikipedia score on observatory.mozilla.org

Wikipedia doesn't care about security headers

So what can we do?

The answer is to step up your security headers game! There are a few headers you'll have to implement in your web server configuration in order to tell the browser running your web app what it is allowed to do and what not. Please be aware that this is no substitute for hardening your web server security in general! Also don't assume that all browsers will obey the rules you ask them to enforce: these are best practices that have been implemented in browsers in recent years, and older browsers will not support all of them.

The usual suspects:

Let's see which headers are usually not implemented by default:

  • Content Security Policy
  • X-Content-Type-Options
  • X-Frame-Options
  • X-XSS-Protection

Content Security Policy

This is a tough one to begin with. It basically tells the browser which external resources it is allowed to load. This includes, for example, external fonts, external scripts (like Google Analytics and Tag Manager), images, and every asynchronous request done by any script to an external host.
If you enforce this policy, you have to tell the browser exactly what you will allow it to load from external sources. This can get very tedious, especially since Google does all kinds of wizardry involving 3rd-party domains you sometimes won't even see in a debug window. So we refrained from implementing it in our case (even though I tried).

See this article if you are bold enough to try it yourself and don't fear a discussion with your friendly analytics specialist tracking the site.

For everything else on this policy, I'll kindly ask you to go to developer.mozilla.org.

X-Content-Type-Options

This header basically tells the browser not to guess the type of content by "sniffing" the MIME type, but to strictly rely on the MIME type sent in the Content-Type response header.

Be aware that the web server does not always serve a correct MIME type by default, e.g. if you cache/minify scripts/CSS and don't provide a content type in the response (a file extension alone won't suffice). In that case the file would be rejected. Make sure your web app provides a Content-Type header with the correct MIME type in the response. If that is not possible, you can tell the web server to deliver files from a certain path with a default MIME type.

Documentation and details can be found here.

X-Frame-Options

This header tells the browser which sites are allowed to embed your web application in an iframe. There is still some discussion on how web browsers should act when confronted with this header (the rule is generally not enforced on all frame ancestors), and yes, it will be superseded by "Content-Security-Policy: frame-ancestors" (but is still recommended). Nevertheless, if you want to score a good result on Mozilla Observatory, you should add it anyway.

Documentation and details can be found here.

X-XSS-Protection

This header tells the browser to enforce its Cross-site Scripting (XSS) protection. This feature is enabled by default in modern browsers nowadays. But still: no harm done if you enforce it anyway (and it will give you a better rating too).

Documentation and details can be found here.
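To make the three simpler headers concrete, here is what they could look like in an nginx site configuration (Apache users would use "Header always set ..." instead); the CSP line is left commented out as a starting point, for the reasons discussed above:

```nginx
# Inside the server {} block of your site configuration.
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;

# Content Security Policy: start restrictive, then whitelist each
# external resource your pages actually load.
# add_header Content-Security-Policy "default-src 'self'" always;
```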

Conclusion

While not all security headers bring real value to the table, consider the "reputation value" a good rating on Mozilla Observatory can offer you in the next meeting with a CTO. It's like Google PageSpeed, but for security. Not all of it makes perfect sense, but if you don't want to answer inconvenient questions in the next security review, I recommend covering all bases.

Helping refugees https://www.liip.ch/en/blog/helping-refugees https://www.liip.ch/en/blog/helping-refugees Mon, 02 Jul 2018 00:00:00 +0200 The disastrous refugee crisis in Europe affects all of us. Over the past years the tone against refugees has got very harsh, and it can be really frustrating not being able to do anything about how people are treated. There are people who focus on helping refugees who fled terrible war zones or political repression. Michael Räber is one of them. On vacation in Greece, he decided on the spot not to return home, but to help. This bold step led to the foundation of Schwizerchrüz and One Happy Family (OHF), organisations trying to ease the tragedy these people are forced to go through.

What is interesting about OHF and Schwizerchrüz is that they help by inventing a currency: they let people decide what to buy themselves. Whatever they need most. For some it might be a pile of t-shirts, for others a good coffee per day. These organisations don't focus on distributing what we think is needed, but on people's needs.

Why this blog post on our website? As Liipers, we were affected by the crisis too. All of us, through horrible images; some through personal experiences with relatives. As a company we try to help. By supporting a great initiative, Powercoders, where Mannar and Rami actually found Liip, or we found them. People like Rami or Mannar fled from Iraq and Syria. And also by donations. Cash of course, but also material. Like we did a few months back, in cooperation with Freitag: we shipped a big pile of Freitag bags to Lesvos. Refugees mostly live in tents and carry their belongings with them, so robust and sturdy bags are really important. OHF set up a store for people to buy the bags of their choice. And it seems to work quite well: have a look at the video.

We won’t stop supporting these kinds of projects. What about you? #ohf-donate

Method follows project https://www.liip.ch/en/blog/agile-models https://www.liip.ch/en/blog/agile-models Mon, 02 Jul 2018 00:00:00 +0200 Trust over control
Agile methods are based on trust. From contract drafting and requirements to meetings: there is so much knowledge in people's minds that not everything can be documented.

Scrum by textbook
In big projects such as Raiffeisen Login or MemberPlus, a comprehensive Scrum setup is worthwhile. The Raiffeisen Product Owner focuses fully on this multifaceted product, while the Liip ScrumMaster supports the entire Scrum team on the way to outright self-organisation. The cross-company team covers all competencies required for implementation.

Adaptation of agile project methods
When irregular or smaller developments are crafted, the project team is usually too small for a Product Owner and a ScrumMaster, as this can become "overhead". In cooperation with Raiffeisen and our other customers, we use Kanban as a methodology. Compared to iterative, time-boxed Scrum, Kanban is ongoing: each request is actively pulled into the next, self-defined step of the workflow. The main objective is to complete a requirement as fast as possible.
In other projects, such as internal non-IT projects, agility helps us with task boards, daily meetings, regular reviews and retros.

Experience and recommendations
The culture and values of a collaboration are decisive for the choice of project methodology. Without trust there is no (positive) change. If the project manager is merely renamed Product Owner and still "only" leads the team, we even run the risk of regression. Step-by-step changes, no matter how small the improvements seem, are valuable for any project. However, a complete contradiction between method and team culture is often doomed to failure.

To be continued
The next blog post will focus on agile contracts.

Sprint Goal: do you have it? https://www.liip.ch/en/blog/sprint-goal-do-you-have-it https://www.liip.ch/en/blog/sprint-goal-do-you-have-it Tue, 26 Jun 2018 00:00:00 +0200 My story with the Sprint Goal

Lately I am learning a lot, I admit it!
Some months ago I joined our internal Slack channel #ask-scrum. Together with Léo, I create challenging Scrum-related questions for the subscribers. All questions are based on the theory contained in the Scrum Guide. Therefore, I am reading the Scrum “Bible” over and over again. :-)

Once we asked: “Is it possible to add or remove Product Backlog Items from the Sprint Backlog within the Sprint?” The answer is yes: the Development Team can renegotiate the scope with the Product Owner.
If the Development Team wants to renegotiate the scope, a Sprint Goal is needed. Otherwise it is hard to identify what is important and what is not. If the Development Team has a goal, it can add or remove work within the Sprint when doing so is beneficial to the Sprint Goal.
The Sprint Goal describes why the Development Team creates the Product Increment.
It is drafted together with stakeholders during the Sprint Review, where the Scrum Team talks about the next to-dos. It is officially defined during Sprint Planning and serves as the basis for planning and everyday work. Without a Sprint Goal, the Development Team lacks guidance and can’t reduce or increase the scope within the Sprint in a valuable way. The Sprint Goal is one of the key concepts of the Scrum framework.

Does my Scrum Team have a Sprint Goal?

No!
Why? Defining a Sprint Goal is, like Scrum itself, “simple to understand, difficult to master”.
The product my Scrum Team develops has existed for a long time, and as we are in the so-called “maintenance” phase, we have a lot of small enhancements, apparently unrelated to each other. Additionally, our clients don't necessarily have a clear vision and roadmap for the product, which makes defining a detailed goal even harder.

My learning

So basically I said: “OK, this is fine, we can survive without a Sprint Goal”. But I was missing an important point. The Scrum Guide actually says: “...The Sprint Goal can be any other coherence that causes the Development Team to work together rather than on separate initiatives.” This means it does not matter whether you have a well-defined product roadmap or work on several small unrelated enhancements. The Sprint Goal will always help the team work together.

Working together is the key to success

Does that mean we don’t work together as a Scrum Team? No, of course we do, but it is much harder. Having a common goal simplifies our daily life.

The moral of the whole story is: without a Sprint Goal, working together and implementing Scrum successfully is challenging. Let's define one!

Gebäudehülle Schweiz - verhüllt zum Durchblick https://www.liip.ch/en/blog/gebaeudehuelle-schweiz-verhuellt-zum-durchblick https://www.liip.ch/en/blog/gebaeudehuelle-schweiz-verhuellt-zum-durchblick Mon, 25 Jun 2018 00:00:00 +0200 Digitalisation in and around construction

A clear view in just a few clicks, and digitalisation driven forward. Getting many users with different requirements excited about the topic of building envelopes is one of the goals of Gebäudehülle.swiss. The association is the leading competence centre and professional service provider for building envelopes. It unites the interests of around 600 companies in the building envelope industry. The demands on modern building envelopes are broad: energy efficiency, sustainability, aesthetics and comfortable living, for example. The new platform should convey all of this and, of course, be intuitive. The association's rebranding includes a new web presence, and we are proud of its implementation. Since the relaunch, the platform offers an intelligent faceted search. That makes everything usable, clear and tidy.

About the project

Our challenge was to merge three different employer brandings, and thus designs, into one. Our task was to develop an intuitive platform from them. The result is a web application that is user-friendly and appealing. Content published by the association becomes visible immediately and can be retrieved by topic and type. All of this was achieved with the Navision ERP (today Microsoft Dynamics). Members can now easily administer all their important master data through the Drupal 8 frontend. Besides the public search, members also get access to further specialist information and an internal file storage system, all controlled from the ERP. Together with the new websites for Polybau and the early-retirement model, Gebäudehülle is fed from a single development, i.e. large parts of the functionality could be reused with minimal adaptations.

Practice over control

Everything has to be tried, tested and changed. That is how agile projects and collaboration work best. The partnership-based collaboration helped turn a wide range of products into a platform we are all proud of. The desired information can now be found precisely, in just a few clicks. The goal of making the search a daily working tool has been achieved.

The promise from the pitch presentation (in March 2015) was fully kept, despite the length of the project (start January 2016, end December 2018) and the complexity of the initial situation. The biggest challenge of this large project lay in "breaking down" the existing complexity into a tidy, generic platform that can easily be extended in the future. Looking back, it is simply great how we always found the bird's-eye view together at the right moment, and how, thanks to the professional implementation, the vision became reality step by step: phenomenal and a perfect fit. We are greatly looking forward to the further expansion.
Chantal Huser, project manager of the Gebäudehülle website

Thanks to the constructive and goal-oriented collaboration with the Gebäudehülle association, we can provide a solution that is easy to maintain and extend when it comes to additional functionality or business areas. I have rarely experienced such a constructive interplay between client and agency!
Daniel Frey, Product Owner Liip

]]>
Let’s make Moodle amazing https://www.liip.ch/en/blog/lets-make-moodle-amazing https://www.liip.ch/en/blog/lets-make-moodle-amazing Thu, 14 Jun 2018 00:00:00 +0200 A new empowering direction for Moodle

MoodleMoot UK & Ireland 2018 in Glasgow was the place to be if you asked yourself, like I did: “What will be the future of the Learning Management System (LMS) called Moodle?”. In fact, from the 26th to the 28th of March 2018, the Moodle Headquarters organized a conference dedicated to Moodle Partners (companies offering Moodle services, such as Liip), as well as developers and administrators of the very popular open source course management system. It was a great opportunity to meet all these stakeholders and learn about the current trends of this LMS. The program began with the announcement of a $6 million investment from the company Education for the Many. Moodle HQ will use this funding to improve consistency and sustainability, to build a new European headquarters in Barcelona and to improve its didactic approach.

A new investor believing in the Moodle mission

Martin Dougiamas, founder and CEO of Moodle HQ, opened the conference with an inspiring keynote about the goals for the near future. Recalling the mission – empowering educators to improve our world – he articulated the vision of the company.

“Education is maybe the only weapon that can make a difference, as we need responsible persons to face the current issues of our world”.

This turning point requires financial support. Education for the Many, an investment company of the French Leclercq family involved in well-known businesses such as the sporting goods retailer Decathlon, understands the challenges that Moodle is facing. They are not focused solely on the return on investment; they also care about the educational vision. For the time and money invested, Education for the Many receives a minority stake in Moodle HQ and a seat on the board.

Future challenges

“It’s time to make Moodle amazing!”, continued Martin. One of the benefits for Europeans will be the growth of the Moodle office in Barcelona. It should expand to become like the headquarters in Perth. Therefore, Barcelona will turn into the European Moodle HQ. As most Moodle users are located in Europe, being close to them is an advantage. The Moodle product is and should always remain competitive. Ensuring this is one of the pillars of the new strategy. With this goal in mind, the future Moodle 3.6+ versions will be designed to achieve sustainability at a high level. Furthermore, they will concentrate on improving the usability, creating standards, enhancing system integrations as well as being supported across all devices.

Engaging the learners

One of the big challenges as a teacher is to keep participants engaged during the learning process. To support this, Moodle HQ is developing a special certification for Moodle Partners, so they can deepen their software knowledge and stay up to date on best practices for online content creation. Through official Moodle Partners, teachers can access the same education platforms. This is how the Learn Moodle platform aims to significantly improve the quality of teaching. Moreover, effort will be invested in maximizing connections inside the community of users and administrators, in order to build a big and strong user base through the moodle.net association. This platform will support the creation of educational content as well as sharing and offering services. Every Moodler is welcome to take part in this project.

To summarize, I came back from the conference more confident than ever about Moodle's potential, empowered as a Moodle Partner, and impatient to bring Moodle's capabilities to our customers.

]]>
Recipe Assistant Prototype with ASR and TTS on Socket.IO - Part 3 Developing the prototype https://www.liip.ch/en/blog/recipe-assistant-prototype-with-asr-and-tts-on-socket-io-part-3-developing-the-prototype https://www.liip.ch/en/blog/recipe-assistant-prototype-with-asr-and-tts-on-socket-io-part-3-developing-the-prototype Tue, 12 Jun 2018 00:00:00 +0200 Welcome to part three of three in our mini blog post series on how to build a recipe assistant with automatic speech recognition (ASR) and text-to-speech (TTS) to deliver a hands-free cooking experience. In the first blog post we gave you a hands-on market overview of existing SaaS and open source TTS solutions; in the second post we put the user in the center by covering the usability aspects of dialog-driven apps and how to create a good conversation flow. Finally it's time to get our hands dirty and show you some code.

Prototyping with Socket.IO

Although we envisioned the final product as a mobile app running on a phone, it was much faster for us to build a small Socket.IO web application that basically mimics how an app might work on a mobile device. Although Socket.IO is not the newest tool in the shed, it was great fun to work with because it is really easy to set up. All you need is a JS library on the HTML side that you tell to connect to the server, which in our case is a simple Python Flask micro-webserver app.

<!-- Socket.IO integration in the HTML page (jQuery is also needed for the $() call below) -->
...
<script src="https://code.jquery.com/jquery-3.3.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.1.0/socket.io.js"></script>
</head>
<body>
<script>
$(document).ready(function(){
    // connect back to the Flask server that served this page
    var socket = io.connect('http://' + document.domain + ':' + location.port);
    socket.on('connect', function() {
        console.log("Connected recipe");
        socket.emit('start'); // tell the server we are ready
    });
    ...

The code above connects to our Flask server and emits the start message, signalling that our audio service can start reading the first step. Depending on the messages we receive, we can quickly alter the DOM or do other things in near real time, which is very handy.

To make it work on the server side in the Flask app, all you need is a Python library that you integrate into your application, and you are ready to go:

# socket.io in flask
from flask_socketio import SocketIO, emit
socketio = SocketIO(app)

...

#listen to messages 
@socketio.on('start')
def start_thread():
    global thread
    if not thread.is_alive():  # is_alive() is the idiomatic spelling; isAlive() is deprecated
        print("Starting Thread")
        thread = AudioThread()
        thread.start()

...

#emit some messages
socketio.emit('ingredients', {"ingredients": "xyz"})

In the code excerpt above we start a thread that is responsible for handling our audio processing. It starts when the web server receives the start message from the client, signalling that the client is ready to lead a conversation with the user.
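The `AudioThread` class itself is not shown in the post; a minimal sketch of how such a worker might be structured (the class and event names follow the excerpts above, everything else is our assumption):

```python
import threading

# Event used by the main app to signal the worker to stop,
# mirroring the `thread_stop_event` checked in the excerpts above.
thread_stop_event = threading.Event()

class AudioThread(threading.Thread):
    """Worker that drives the listen/speak loop of the assistant."""

    def __init__(self):
        super().__init__(daemon=True)
        self.state = "people"  # first state of the conversation

    def run(self):
        # The real loop records audio, calls the ASR service and plays
        # TTS responses; here we only keep the loop skeleton.
        while not thread_stop_event.is_set():
            thread_stop_event.wait(0.01)
```

Keeping the loop condition on an `Event` means the web server can shut the conversation down cleanly by calling `thread_stop_event.set()`.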

Automatic speech recognition and state machines

The main part of the application is simply a while loop in the thread that listens to what the user has to say. Whenever we change the state of our application, it displays the next recipe step and reads it out loud. We’ve sketched out the flow of the states in the diagram below. This time it is a mostly linear conversation flow; the only difference is that we sometimes branch off to remind the user to preheat the oven, or to take things out of it. This way we can potentially save the user time, or at least offer a convenience that a “classic” recipe on paper doesn’t provide.

(Figure: state flow of the recipe conversation)

The automatic speech recognition (see below) works with Wit.ai in the same manner as shown in my recent blog post. Have a look there to read up on the technology behind it and to find out how the RecognizeSpeech class works. In a nutshell, we record 2 seconds of audio locally, send it over a REST API to Wit.ai, and wait for it to be turned into text. While this is convenient from a developer’s point of view - not having to write a lot of code and being able to use a service - the downside is reduced usability for the user: it introduces roughly 1-2 seconds of lag to send the data, process it and receive the results. Ideally the ASR should take place on the mobile device itself, to introduce as little lag as possible.
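Stripped of the audio recording, the request behind RecognizeSpeech might look roughly like the sketch below. It targets the Wit.ai /speech endpoint with a bearer token; the `_text` response field reflects the Wit.ai API at the time of writing (both details are assumptions, not code from the post):

```python
import json
import urllib.request

WIT_SPEECH_URL = "https://api.wit.ai/speech"

def wit_headers(token):
    # Wit.ai authenticates with a bearer token; the payload is raw WAV audio.
    return {
        "Authorization": "Bearer " + token,
        "Content-Type": "audio/wav",
    }

def recognize_speech(wav_path, token):
    """Send a recorded WAV file to Wit.ai and return the transcribed text."""
    with open(wav_path, "rb") as f:
        req = urllib.request.Request(WIT_SPEECH_URL, data=f.read(),
                                     headers=wit_headers(token), method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("_text", "")
```

The round trip of this single HTTP call is exactly where the 1-2 seconds of lag mentioned above come from.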

#abbreviated main thread

self.states = ["people","ingredients","step1","step2","step3","step4","step5","step6","end"]
while not thread_stop_event.is_set():
    socketio.emit("showmic") # show the microphone symbol in the frontend signalling that the app is listening
    text = recognize.RecognizeSpeech('myspeech.wav', 2) #the speech recognition is hidden here :)
    socketio.emit("hidemic") # hide the mic, signaling that we are processing the request

    if self.state == "people":
        ...
        if intro_not_played:
            self.play(recipe["about"])
            self.play(recipe["persons"])
            intro_not_played = False
        persons = re.findall(r"\d+", text)
        if len(persons) != 0:
            self.state = self.states[self.states.index(self.state)+1]
        ...
    elif self.state == "ingredients":
        ...
        if intro_not_played:
            self.play(recipe["ingredients"])
            intro_not_played = False
        ...
        if "weiter" in text:
            self.state = self.states[self.states.index(self.state)+1]
        elif "zurück" in text:
            self.state = self.states[self.states.index(self.state)-1]
        elif "wiederholen" in text:
            intro_not_played = True #repeat the loop
        ...

As we can see above, depending on the state we are in, we play the right TTS audio to the user and then progress to the next state. Each step also listens for whether the user wants to go forward (weiter), backward (zurück) or repeat the step (wiederholen), in case they misheard.
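Since the weiter/zurück/wiederholen handling repeats in every state, it could be factored into one small helper. A sketch (the function name is ours, not from the prototype):

```python
def next_state(states, state, text):
    """Move through the linear state list based on the recognized command.

    'weiter' advances, 'zurück' goes back, and 'wiederholen' stays put so
    the current step is read out again.
    """
    i = states.index(state)
    if "weiter" in text:
        i = min(i + 1, len(states) - 1)   # clamp at the last step
    elif "zurück" in text:
        i = max(i - 1, 0)                 # clamp at the first step
    # "wiederholen" (and anything unrecognized) keeps the current state
    return states[i]
```

Centralizing the transitions like this also keeps the clamping at both ends of the recipe in one place instead of in every state branch.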

The first prototype solution shown above is not perfect though, as we are not using a wake-up word. Instead we periodically offer the user a chance to give input. The main drawback is that when the user speaks at a moment we don’t expect it, we might not record it and consequently be unable to react. Additionally, sending audio back and forth to the cloud creates a rather sluggish experience. I would be much happier to have the ASR part on the client directly, especially since we are mainly listening for 3-4 navigational words.

TTS with Slowsoft

Finally, you may have noticed that there is a play method in the code above. That’s where the TTS is hidden. As you can see below, we first show the speaker symbol in the application, signalling that now is the time to listen. We then send the text to Slowsoft via their API, in our case defining the dialect "CHE-gr" as well as the speed and pitch of the output.

# play method of the audio thread
def play(self, text):
    socketio.emit('showspeaker')  # show the speaker symbol in the frontend
    headers = {'Accept': 'audio/wav', 'Content-Type': 'application/json', "auth": "xxxxxx"}
    with open("response.wav", "wb") as f:
        resp = requests.post('https://slang.slowsoft.ch/webslang/tts', headers=headers,
                             data=json.dumps({"text": text, "voiceorlang": "gsw-CHE-gr", "speed": 100, "pitch": 100}))
        f.write(resp.content)
        os.system("mplayer response.wav")  # play back the synthesized audio

The text snippets are simply parts of the recipe. I tried to cut them into digestible parts, where each part contains roughly one action. Having an already structured recipe in the open recipe format helps a lot here, because we don’t need to do any manual processing before sending the data.

Wakeup-word

We took our prototype for a spin and realized in our experiments that a wake-up word is a must. We simply couldn’t time our input to when the app was listening, which was a big pain for the user experience.

I know that nowadays smart speakers like Alexa or Google Home provide their own wake-up word, but we wanted our own. Is that even possible? Well, you have different options here. You could train a deep network from scratch with TensorFlow Lite, or create your own model by following this tutorial on how to create a simple speech recognition model with TensorFlow. Yet the main drawback is that you may need a lot (and I mean A LOT, as in 65 thousand samples) of audio samples. That is not really feasible for most users.


Luckily, you can also take an existing deep network and train it to understand YOUR wake-up words. That means it will not generalize as well to other people, but maybe that is not much of a problem. You might as well think of it as a feature: your assistant only listens to you and not your kids :). A solution of this kind exists under the name Snowboy, where a couple of ex-Googlers created a startup that lets you create your own wake-up words and then download the resulting models. That is exactly what I did for this prototype. All you need to do is go to the Snowboy website and provide three samples of your wake-up word. It then computes a model that you can download. You can also use their REST API to do this; the idea is that you can include this phase directly in your application, making it very convenient for users to set up their own wake-up word.

#wakeup class 

import snowboydecoder
import sys
import signal

class Wakeup():
    def __init__(self):
        self.detector = snowboydecoder.HotwordDetector("betty.pmdl", sensitivity=0.5)
        self.interrupted = False
        self.wakeup()

    def signal_handler(self, signal, frame):
        self.interrupted = True

    def interrupt_callback(self):
        return self.interrupted

    def custom_callback(self):
        self.interrupted = True
        self.detector.terminate()
        return True

    def wakeup(self):
        self.interrupted = False
        self.detector.start(detected_callback=self.custom_callback, interrupt_check=self.interrupt_callback,sleep_time=0.03)
        return self.interrupted

All you need then is to create a Wakeup instance from whatever app you include it in. In the code above you’ll notice that we load our downloaded model there (“betty.pmdl”); the rest of the methods are there to interrupt the wakeup method once we hear the wake-up word.

We then included this class in our main application as a blocking call: whenever we hit the part where we are supposed to listen for the wake-up word, we remain there until we hear it:

#integration into main app
...
            #record
            socketio.emit("showear")
            wakeup.Wakeup()
            socketio.emit("showmic")
            text = recognize.RecognizeSpeech('myspeech.wav', 2)
...

As you can see in the code above, we included the wakeup.Wakeup() call, which now waits until the user has spoken the wake-up word; only after that do we record 2 seconds of audio to send to Wit.ai for processing. In our testing this improved the user experience tremendously. You can also see that we signal the listening state to the user via graphical cues: a little ear when the app is listening for the wake-up word, and a microphone when the app is ready to listen to your commands.

Demo

So, finally, it is time to show you the tech demo. It gives you an idea of how such an app might work and hopefully also gives you a starting point for new ideas and improvements. While it’s definitely not perfect, it does its job and allows me to cook hands-free :). Mission accomplished!

What's next?

In the first part of this blog post series we gave quite an extensive overview of the current capabilities of TTS systems. While we saw an abundance of options on the commercial side, we sadly didn’t find the same number of sophisticated projects on the open source side. I hope this imbalance evens out in the future, especially with the strong IoT movement and the need for these kinds of technologies as an underlying stack for all kinds of smart assistant projects. As an example, there is a Kickstarter project for a small speaker with built-in open source ASR and TTS.

In the second blog post, we discussed the user experience of audio-centered assistants. We realized that going audio-only might not always provide the best user experience, especially when the user is presented with a number of alternatives to choose from. This was especially the case in the exploration phase, where you have to select a recipe, and in the cooking phase, where the user needs to go through the list of ingredients. Given that the Alexa, HomePod and Google Home smart speakers are on their way to taking over the audio-based home assistant area, I think their usage will only make sense in domains that are very simple to navigate, as in “Alexa, play me something from Jamiroquai”. In more difficult domains, such as cooking, mobile phones might be an interesting alternative, especially since they are much more portable (they are mobile, after all), offer a screen, and almost every person already has one.

Finally, in this last part of the series I have shown you how to integrate a number of solutions - Wit.ai for ASR, Slowsoft for TTS, Snowboy for the wake-up word, and Socket.IO and Flask for prototyping - to create a nicely working prototype of a hands-free cooking assistant. I have uploaded the code to GitHub, so feel free to play around with it and sketch your own ideas. A next step for us could be taking the prototype to the next level by really building it as an app for iPhone or Android, and especially improving the speed of the ASR. Here we might use the existing Core ML or TensorFlow Lite frameworks, or check how well we could use the built-in ASR capabilities of the devices. As a final key takeaway, we realized that building a hands-free recipe assistant is definitely something different from simply having the mobile phone read the recipe out loud for you.

As always, I am looking forward to your comments and insights, and I hope to update you on our little project soon.

]]>