<?xml version="1.0" encoding="utf-8"?>
<!-- generator="Kirby" -->
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">

  <channel>
    <title>Mot-cl&#233;: prototype &#183; Blog &#183; Liip</title>
    <link>https://www.liip.ch/fr/blog/tags/prototype</link>
    <generator>Kirby</generator>
    <lastBuildDate>Mon, 28 May 2018 00:00:00 +0200</lastBuildDate>
    <atom:link href="https://www.liip.ch" rel="self" type="application/rss+xml" />

        <description>Articles du blog Liip avec le mot-cl&#233; &#8220;prototype&#8221;</description>
    
        <language>fr</language>
    
        <item>
      <title>Recipe Assistant Prototype with Automatic Speech Recognition (ASR) and Text to Speech (TTS) on Socket.IO - Part 1 TTS Market Overview</title>
      <link>https://www.liip.ch/fr/blog/betti-bossi-recipe-assistant-prototype-with-automatic-speech-recognition-asr-and-text-to-speech-tts-on-socket-io</link>
      <guid>https://www.liip.ch/fr/blog/betti-bossi-recipe-assistant-prototype-with-automatic-speech-recognition-asr-and-text-to-speech-tts-on-socket-io</guid>
      <pubDate>Mon, 28 May 2018 00:00:00 +0200</pubDate>
      <description><![CDATA[<h2>Intro</h2>
<p>In one of our monthly innodays, where we try out new technologies and different approaches to old problems, we had the idea to collaborate with another company. Slowsoft is a provider of text to speech (TTS) solutions. To my knowledge they are the only ones able to generate Swiss German speech synthesis in various Swiss accents. We thought it would be a cool idea to combine it with our existing automatic speech recognition (ASR) expertise and build a cooking assistant that you can operate completely hands-free. So no more touching your phone with your dirty fingers only to check again how many eggs you need for that cake. We decided it would be great to go with some recipes from a famous Swiss cookbook provider. </p>
<h2>Overview</h2>
<p>Generally there are quite a few text to speech solutions on the market. In the first of these two blog posts I would like to give you a short overview of the available options. In the second blog post I will then describe the insights we arrived at in the UX workshop and how we combined wit.ai with the solution from Slowsoft in a quick and dirty web-app prototype built on Socket.IO and Flask. </p>
<p>But first let us get an overview of existing text to speech (TTS) solutions. To showcase the performance of existing SaaS solutions I've chosen a random recipe from Betty Bossi and had it read by them:</p>
<pre><code class="language-text">Ofen auf 220 Grad vorheizen. Broccoli mit dem Strunk in ca. 1 1/2 cm dicke Scheiben schneiden, auf einem mit Backpapier belegten Blech verteilen. Öl darüberträufeln, salzen.
Backen: ca. 15 Min. in der Mitte des Ofens.
Essig, Öl und Dattelsirup verrühren, Schnittlauch grob schneiden, beigeben, Vinaigrette würzen.
Broccoli aus dem Ofen nehmen. Einige Chips mit den Edamame auf dem Broccoli verteilen. Vinaigrette darüberträufeln. Restliche Chips dazu servieren. </code></pre>
<h3>But first: How does TTS work?</h3>
<p>The classical way works like this: you record at least dozens of hours of raw speaker material in a professional studio. Depending on your use case, the material can range from navigation instructions to jokes. The next trick is called &quot;unit selection&quot;, where the recorded speech is sliced into a large number (10k - 500k) of elementary components called <a href="https://en.wikipedia.org/wiki/Phone">phones</a>, so that they can be recombined into new words that the speaker never recorded. The recombination of these components is not an easy task, because their characteristics depend on the neighboring phonemes and on the accentuation or <a href="https://en.wikipedia.org/wiki/Prosody">prosody</a>, which in turn depend a lot on the context. The problem is to find the combination of units that satisfies both the input text and the accentuation, and that can be joined together without generating glitches. The raw input text is first translated into a phonetic transcription, which then serves as the input for selecting the right units from the database; these are then concatenated into a waveform. Below is a great example from Apple's Siri <a href="https://machinelearning.apple.com/2017/08/06/siri-voices.html">engineering team</a> showing how the slicing takes place. </p>
<figure><img src="https://liip.rokka.io/www_inarticle/3096e9/components.png" alt=""></figure>
<p>Using the <a href="https://en.wikipedia.org/wiki/Viterbi_algorithm">Viterbi algorithm</a>, the units are then concatenated in such a way that they create the lowest &quot;cost&quot;, with the cost resulting from selecting the right unit and from concatenating two units together. Below is a great conceptual graphic from Apple's engineering blog showing this cost estimation. </p>
<figure><img src="https://liip.rokka.io/www_inarticle/166653/cost.png" alt=""></figure>
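<p>The cost minimisation just described can be sketched as a small dynamic program. The following is a toy illustration only: the unit objects, pitch values and cost functions are invented for this example, real systems derive their costs from acoustic models.</p>

```javascript
// Toy sketch of unit selection as a shortest-path problem.
// Each slot in the phone sequence has candidate units; we pick the
// sequence minimizing target cost + concatenation (join) cost.
function selectUnits(candidates, targetCost, joinCost) {
  // best[j] = cheapest total cost ending with candidate j at the current slot
  let best = candidates[0].map(u => targetCost(u));
  const back = [candidates[0].map(() => -1)];
  for (let i = 1; i < candidates.length; i++) {
    const row = [];
    const ptr = [];
    for (let j = 0; j < candidates[i].length; j++) {
      let bestPrev = 0;
      let bestVal = Infinity;
      for (let k = 0; k < candidates[i - 1].length; k++) {
        const v = best[k] + joinCost(candidates[i - 1][k], candidates[i][j]);
        if (v < bestVal) { bestVal = v; bestPrev = k; }
      }
      row.push(bestVal + targetCost(candidates[i][j]));
      ptr.push(bestPrev);
    }
    best = row;
    back.push(ptr);
  }
  // backtrack the cheapest path
  let j = best.indexOf(Math.min(...best));
  const path = [];
  for (let i = candidates.length - 1; i >= 0; i--) {
    path.unshift(candidates[i][j]);
    j = back[i][j];
  }
  return path;
}

const candidates = [
  [{ id: 'a1', pitch: 100 }, { id: 'a2', pitch: 140 }], // units for slot 0
  [{ id: 'b1', pitch: 110 }, { id: 'b2', pitch: 200 }]  // units for slot 1
];
const targetCost = u => Math.abs(u.pitch - 120);        // distance from desired pitch
const joinCost = (u, v) => Math.abs(u.pitch - v.pitch); // smoothness of the join
const path = selectUnits(candidates, targetCost, joinCost);
console.log(path.map(u => u.id)); // the cheapest sequence of units
```

<p>The real trick, of course, is in estimating those costs well; the sketch only shows why Viterbi applies once you have them.</p>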
<p>Now in contrast to the classical way of doing TTS, <a href="http://josesotelo.com/speechsynthesis/">new methods based on deep learning</a> have emerged, where deep learning networks are used to predict the unit selection. If you are interested in how the new systems work in detail, I highly recommend the <a href="https://machinelearning.apple.com/2017/08/06/siri-voices.html">engineering blog entry</a> describing how Apple created the Siri voice. As a final note I'd like to add that there is also a format called <a href="https://de.wikipedia.org/wiki/Speech_Synthesis_Markup_Language">Speech Synthesis Markup Language (SSML)</a> that allows users to manually specify the prosody for TTS systems. This can be used, for example, to put an emphasis on certain words, which is quite handy. So enough with the boring theory, let's have a look at the available solutions.</p>
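<p>To give a feel for what such SSML markup looks like, here is a small javascript sketch that wraps our recipe steps in SSML. The emphasis level and break duration are made up for illustration; real TTS engines differ in which SSML elements they support.</p>

```javascript
// Sketch: building an SSML string that emphasises the oven temperature
// and pauses between instructions (values are invented for illustration).
function toSsml(steps) {
  const body = steps
    .map(s => s.replace(/220 Grad/, '<emphasis level="strong">220 Grad</emphasis>'))
    .join('<break time="500ms"/>');
  return '<speak version="1.0">' + body + '</speak>';
}

const ssml = toSsml([
  'Ofen auf 220 Grad vorheizen.',
  'Broccoli in Scheiben schneiden.'
]);
console.log(ssml);
```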
<h2>SaaS / Commercial</h2>
<h3>Google TTS</h3>
<p>When thinking about SaaS solutions, the first thing that comes to mind these days is obviously Google's <a href="https://cloud.google.com/text-to-speech/">TTS solution</a>, which they used to showcase Google's virtual assistant capabilities at this year's Google I/O conference. Have a look <a href="https://www.youtube.com/watch?v=d40jgFZ5hXk">here</a> if you haven't been wowed today yet. When you go to their website I highly encourage you to try out their demo with a German text of your choice. It really works well - the only downside for us was that it's not really Swiss German. I doubt that they will offer it for such a small user group - but who knows. I've taken a recipe and had it read by Google and frankly liked the output. </p>
<figure class="embed-responsive embed-responsive--16/9"><iframe src="//player.vimeo.com/video/270423560" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true"></iframe></figure>
<h3>Azure Cognitive Services</h3>
<p>Microsoft also offers TTS as part of their Azure <a href="https://azure.microsoft.com/en-us/services/cognitive-services/speech/">cognitive services</a> (ASR, intent detection, TTS). Similar to Google, having ASR and TTS from one provider definitely has the benefit of saving us one roundtrip, since normally you would need to perform the following trips:</p>
<ol>
<li>Send audio data from client to server, </li>
<li>Get response to client (dispatch the message on the client)</li>
<li>Send our text to be transformed to speech (TTS) from client to server </li>
<li>Get the response on client. Play it to the user.</li>
</ol>
<p>Having ASR and TTS in one place reduces it to:</p>
<ol>
<li>ASR from client to server. Process it on the server. </li>
<li>TTS response to client. Play it to the user.</li>
</ol>
<p>Judging the speech synthesis quality, I personally think that Microsoft's solution didn't sound as great as Google's. But have a look for yourself. </p>
<figure class="embed-responsive embed-responsive--16/9"><iframe src="//player.vimeo.com/video/270423598" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true"></iframe></figure>
<h3>Amazon Polly</h3>
<p>Amazon - having placed their bets on Alexa - of course has a sophisticated TTS solution, which they call <a href="https://console.aws.amazon.com/polly/home/SynthesizeSpeech">Polly</a>. I love the name :). To get where they are now, they acquired a startup called Ivona back in 2013, which was producing state-of-the-art TTS solutions at the time. Having tried it I liked the soft tone and the fluency of the results. Have a check yourself:</p>
<figure class="embed-responsive embed-responsive--16/9"><iframe src="//player.vimeo.com/video/270423539" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true"></iframe></figure>
<h3>Apple Siri</h3>
<p>Apple offers TTS as part of their iOS SDK under the name <a href="https://developer.apple.com/sirikit/">SiriKit</a>. I haven&#8217;t had the chance yet to play with it in depth. Wanting to try it out, I made the error of thinking that Apple's TTS solution on the desktop is the same as SiriKit. Yet SiriKit is nothing like the built-in TTS on macOS. To have a bit of a laugh on your MacBook, you can produce a really poor TTS rendition on the command line with a single command:</p>
<pre><code class="language-bash">say -v fred "Ofen auf 220 Grad vorheizen. Broccoli mit dem Strunk in ca. 1 1/2 cm dicke Scheiben schneiden, auf einem mit Backpapier belegten Blech verteilen. Öl darüberträufeln, salzen.
Backen: ca. 15 Min. in der Mitte des Ofens."</code></pre>
<p>While the output sounds awful, below is the same text read by Siri on the newest iOS 11.3. That shows you how far TTS systems have evolved in the last years. Sorry for the bad quality, but somehow it seems impossible to turn off the external microphone when recording on an iPhone. </p>
<figure class="embed-responsive embed-responsive--16/9"><iframe src="//player.vimeo.com/video/270441878" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true"></iframe></figure>
<h3>IBM Watson</h3>
<p>In this arms race IBM also offers a TTS system, with a way to define the prosody manually using the <a href="https://de.wikipedia.org/wiki/Speech_Synthesis_Markup_Language">SSML markup language standard</a>. Compared to the presented alternatives I didn't like their output, since it sounded quite artificial. But give it a try for yourself.</p>
<figure class="embed-responsive embed-responsive--16/9"><iframe src="//youtube.com/embed/2Er2xl7MPBo" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true"></iframe></figure>
<h3>Other commercial solutions</h3>
<p>Finally there are also competitors beyond the obvious ones, such as <a href="https://www.nuance.com">Nuance</a> (formerly ScanSoft - originating from Xerox research). Despite their page promising a <a href="http://ttssamples.syntheticspeech.de/ttsSamples/nuance-zoe-news-1.mp3">lot</a>, I found the quality of their German TTS to be a bit lacking. </p>
<figure class="embed-responsive embed-responsive--16/9"><iframe src="//player.vimeo.com/video/270423596" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true"></iframe></figure>
<p>Facebook doesn't offer a TTS solution yet - maybe they have put their bets on Virtual Reality instead. Other notable solutions are <a href="http://www.acapela-group.com/">Acapela</a>, <a href="http://www.innoetics.com">Innoetics</a>, <a href="http://www.onscreenvoices.com">TomWeber Software</a>, <a href="https://www.aristech.de/de/">Aristech</a> and <a href="https://slowsoft.ch">Slowsoft</a> for Swiss TTS.</p>
<h2>OpenSource</h2>
<p>Instead of providing the same kind of overview for the open source area, I think it's easier to list a few projects and provide a sample of their synthesis. Many of these projects are academic in nature and often don't give you all the bells and whistles and fancy APIs of the commercial products, but with some dedication they could definitely work for you.</p>
<ul>
<li><a href="http://espeak.sourceforge.net">Espeak</a>. <a href="http://ttssamples.syntheticspeech.de/ttsSamples/espeak-s1.mp3">sample</a> - My personal favorite. </li>
<li><a href="http://www.speech.cs.cmu.edu/flite/index.html">Festival</a>, a project from CMU focused on portability. No sample.</li>
<li><a href="http://mary.dfki.de">Mary</a>. From the German &quot;Forschungszentrum für Künstliche Intelligenz&quot; (DFKI). <a href="http://ttssamples.syntheticspeech.de/ttsSamples/pavoque_s1.mp3">sample</a></li>
<li><a href="http://tcts.fpms.ac.be/synthesis/mbrola.html">Mbrola</a> from the University of Mons <a href="http://ttssamples.syntheticspeech.de/ttsSamples/de7_s1.mp3">sample</a></li>
<li><a href="http://tundra.simple4all.org/demo/index.html">Simple4All</a> - an EU-funded project. <a href="http://ttssamples.syntheticspeech.de/ttsSamples/simple4all_s1.mp3">sample</a></li>
<li><a href="https://mycroft.ai">Mycroft</a>. More of an open source assistant, but runs on the Raspberry Pi.</li>
<li><a href="https://mycroft.ai/documentation/mimic/">Mimic</a>. Only the TTS from the Mycroft project. No sample available.</li>
<li>Mozilla has published over 500 hours of material in their <a href="https://voice.mozilla.org/de/data">common voice project</a>. Based on this data they offer a deep learning ASR project <a href="https://github.com/mozilla/DeepSpeech">Deep Speech</a>. Hopefully they will offer TTS based on this data too someday. </li>
<li><a href="http://josesotelo.com/speechsynthesis/">Char2Wav</a> from the University of Montreal (who btw. maintain the theano library). <a href="http://josesotelo.com/speechsynthesis/files/wav/pavoque/original_best_bidirectional_text_0.wav">sample</a></li>
</ul>
<p>Overall my feeling is that unfortunately most of the open source systems have not yet caught up with the commercial versions. I can only speculate about the reasons: it might take a significant amount of good raw audio data to produce comparable results, plus a lot of fine-tuning of the final model for each language. For an elaborate overview of all TTS systems, especially the ones that work in German, I highly recommend checking out the <a href="http://ttssamples.syntheticspeech.de">extensive list</a> that Felix Burkhardt from the Technical University of Berlin has compiled. </p>
<p>That sums up the market overview of commercial and open source solutions. Overall I was quite amazed at how fluent some of these solutions sounded, and I think the technology is ready to really change how we interact with computers. Stay tuned for the next blog post, where I will explain how we put one of these solutions to use to create a hands-free recipe reading assistant.</p>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/29d939/baking-bread-knife-brown-162786.jpg" length="2948380" type="image/jpeg" />
          </item>
        <item>
      <title>Counting people on stairs &#8211; or IoT with a particle photon and node.js</title>
      <link>https://www.liip.ch/fr/blog/counting-people-stairs-particle-photon-node-js</link>
      <guid>https://www.liip.ch/fr/blog/counting-people-stairs-particle-photon-node-js</guid>
      <pubDate>Mon, 17 Oct 2016 00:00:00 +0200</pubDate>
      <description><![CDATA[<p>In this article I will show you in 3 easy steps how to actually get started with an IoT a project build with a particle photon and a node.js server in order to have your own dashboard. I admit, IoT is a bit of a trend these days, and yes I jumped on the bandwaggon too. But since visiting the maker faire Zürich I have seen so many enthousiastic people building things, it has also motived me to also try out something. Thats why I decided to count the people that are running up and down our stairs at Liip. Follow along if you are – like me – a total noob when it comes to connecting wires but still want to experience the fun of building IoT devices.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/3e59624591e516f70b081270849a5e347a6f5726/1120.jpg" alt="1120"></figure>
<p>These days there is a myriad of possible IoT devices; among the most popular are the <a href="http://www.arduino.cc/">Arduino</a>, Raspberry Pi, ESP8266 or the <a href="https://www.particle.io">Spark/Particle</a> (see this <a href="https://openhomeautomation.net/internet-of-things-platforms-prototyping/">blogpost</a> which gives you a nice overview of the different models). For me the Particle Photon was a great choice because it is cheap (19 USD), has an online IDE (see screenshot below), and works mostly out of the box. The sensor for the project can be bought <a href="https://www.maker-shop.ch/grove-ultrasonic-ranger">online at maker-shop.ch</a> for about CHF 19. If you buy them in bulk from a Chinese retailer, you can probably cut the cost down to a few bucks per sensor.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/bb60770d2ae2d4712b6ef42558af763b63f2f92b/bildschirmfoto-2016-10-17-um-11-43-42.jpg" alt="Bildschirmfoto 2016-10-17 um 11.43.42"></figure>
<h2>Step 1: Making the IoT Device</h2>
<p>If you bought one of the starter kits, the Particle Photon comes with a wiring board (breadboard) where you can plug in the cables easily. If not, you can buy one anywhere for CHF 1-3. I followed this <a href="https://community.particle.io/t/simple-photon-ping-sensor-hc-sr04/16737">blog post about connecting the sensor</a>. That worked great; only don't make the same mistake as I did and put two wires at the same “height” – they should be offset by one pin, because otherwise you will create a short circuit. (Btw. from my experience a short circuit luckily doesn't kill the Photon, but forces it to turn itself off.)</p>
<figure><img src="https://liip.rokka.io/www_inarticle/4f4587d1ff470e7157713d458836f9be63969880/0f62da1c272e71ce8216f8a9f5ff173f16dcc5cc.jpg" alt="0f62da1c272e71ce8216f8a9f5ff173f16dcc5cc"></figure>
<p>Additionally, I connected it to a normal USB powerbank and put the whole thing into a little lunch box, in which I drilled holes for the sensor, so I can put it somewhere and it doesn't break that easily.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/581a4164952cdb5dbf66b48dd7aa0d8aae18ad56/photon-300x169.jpg" alt="Photon in a lunch box"></figure>
<h2>Step 2: Writing the Firmware</h2>
<p>Usually those IoT devices work in such a way that you write a piece of a program that runs on the device. This piece of software, or simply firmware, is responsible for handling the input from the sensor and sending it somewhere over the internet. The Photon devices come with a so-called <a href="https://docs.particle.io/guide/getting-started/tinker/photon/">tinker</a> firmware which lets you use your mobile phone to turn certain bits on and off. It's nice to start with, but for this sensor we can't just turn it on and then see something change on the phone, because this sensor needs to constantly send out a signal (echo) every couple of microseconds and then listen for it. So we are going to replace this firmware with our own, which will take the signal of the <a href="https://www.sparkfun.com/products/13959">HC-SR04 sensor</a>, process it and send it to the server.</p>
<p>Now luckily, the author of the blog post above has already provided some nice <a href="https://gist.github.com/technobly/349a916fb2cdeb372b5e">firmware code</a> that works well with this sensor. All firmware code is written in C++, but don't worry, it's quite easy to understand. Let's have a look at some points:</p>
<h3>The actual ping</h3>
<p>So what we are doing here is sending a signal every 10 ms on the trigger_pin and then listening on the echo_pin for the signal to come back. It basically works like a bat's echolocation: send out something, and the faster the signal comes back, the closer the object is to the sensor.</p>
<pre><code>    digitalWriteFast(trig_pin, HIGH);
    delayMicroseconds(10);
    digitalWriteFast(trig_pin, LOW);
    duration = pulseIn(echo_pin, HIGH);</code></pre>
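<p>The pulse width returned by pulseIn is the round-trip time of the echo in microseconds. Since sound travels roughly one centimetre per 29 microseconds, you divide by 29 and halve the result for the round trip. Here is that conversion as a tiny javascript sketch (the firmware does the equivalent in C++):</p>

```javascript
// Convert an HC-SR04 echo pulse width (microseconds) to centimetres.
// Sound travels ~1 cm per 29 microseconds; halve for the round trip.
function pulseToCm(durationMicros) {
  return durationMicros / 29 / 2;
}

console.log(pulseToCm(580)); // a 580 µs echo means the object is ~10 cm away
```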
<h3>The setup</h3>
<p>What is nice about the Particle devices is that you can connect them to your computer via USB and then <a href="https://www.particle.io/products/development-tools/particle-local-ide">download the offline Atom IDE</a>. In this IDE you can select the USB port to listen to (see screenshot below). You can then run your little firmware and make it output stuff to the console. Like this you can debug your tiny device by sending signals to the console and having them displayed on your computer – that's why we need to open this serial port in setup.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/767727010a51ee13feebc459aeda05734c678b4d/bildschirmfoto-2016-10-17-um-12-12-54.jpg" alt="Bildschirmfoto 2016-10-17 um 12.12.54"></figure>
<pre><code>    Serial.begin(115200);
    Particle.variable("cm", cm);
    Particle.variable("human_count", &amp;human_count, INT);</code></pre>
<h3>The REST API</h3>
<p>The second cool thing about the Particle is that any variable that you use in the code can be made accessible online via a REST API. This makes it insanely easy to access data from your device, be it for debugging reasons or for actually interacting with the device. For this you need to define in the setup method which variables you want to make accessible. So for example the variable “human_count” above can be made accessible online via <a href="https://api.spark.io/v1/devices/240034001147343339383037/human_count?access_token=abcde1234">https://api.spark.io/v1/devices/240034001147343339383037/human_count?access_token=abcde1234</a>. (Notice that this is just a dummy link, I've replaced the access token with random chars.)</p>
<figure><img src="https://liip.rokka.io/www_inarticle/6a2db81cdf73ed7ebdaef13ddffc2e1286d0bb46/bildschirmfoto-2016-10-17-um-12-18-25.jpg" alt="Bildschirmfoto 2016-10-17 um 12.18.25"></figure>
<h3>The main loop</h3>
<p>The main loop basically does the following: it sends out a signal every 10 ms, and once the measured distance drops below a certain threshold, we say we see a human. As long as the distance stays under that threshold, we say we are still tracking the same person. This might occur when someone is standing in front of the device; we don't want to count this person multiple times, but just once. Additionally we have some timeouts, e.g. saying how long the distance has to stay above the threshold between seeing two humans. In the end the loop says, ok, let's count this person: we output some nice debugging information, increment human_count and, most important of all, publish this variable as a stream to the net.</p>
<pre><code>          human_count++;
          Serial.printf("Humans %2d:", human_count);
          Particle.publish("humancount", String(human_count));</code></pre>
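<p>The debouncing idea behind this loop can be sketched in a few lines of javascript. The threshold value here is an assumption for illustration; the real firmware additionally uses timeouts between detections.</p>

```javascript
// Toy sketch of the counting logic: a reading below THRESHOLD_CM means
// "someone is there"; we count that person once and wait until the
// distance rises above the threshold again before counting anew.
const THRESHOLD_CM = 80; // assumed detection distance

function makeCounter() {
  let tracking = false; // are we currently seeing someone?
  let count = 0;
  return {
    reading(cm) {
      if (cm < THRESHOLD_CM && !tracking) {
        tracking = true; // new person entered the beam
        count++;
      } else if (cm >= THRESHOLD_CM) {
        tracking = false; // beam is clear again
      }
      return count;
    }
  };
}

const counter = makeCounter();
[200, 50, 40, 200, 60].forEach(cm => console.log(counter.reading(cm)));
```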
<h3>The stream</h3>
<p>Now the last point is the most interesting one. In order to have a nice real-time application we could simply query the REST API constantly and ask how many persons we have counted so far. This is doable, but it's slow, not actually real-time, and very expensive for the server. Instead we will create a stream of events that is <a href="https://community.particle.io/t/tutorial-getting-started-with-spark-publish/3422">published to the net</a> and consumed by our server. The server can listen to this stream, and whenever a new person comes along we can update our widget. You can actually curl this stream, for example via: curl “<a href="https://api.spark.io/v1/devices/240034001147343339383037/human_count?access_token=abcde1234">https://api.spark.io/v1/devices/240034001147343339383037/human_count?access_token=abcde1234</a>”</p>
<p>and see those events firing in real time. Or you can go to the really nice <a href="https://console.particle.io">particle dashboard</a> that the Particle people have already built and have a look there. It looks like this:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/2524eb6780ef83dfad48f83f155a8988951ffab2/bildschirmfoto-2016-10-17-um-12-30-22.jpg" alt="Bildschirmfoto 2016-10-17 um 12.30.22"></figure>
<p>So basically you are done. You have a little device that counts when a person walks by, and it both makes the total count accessible via a REST API and produces a stream of events that can be consumed by our server. The only thing left is to build a tiny little dashboard. So we did that too.</p>
<h2>Step 3: The dashboard</h2>
<p>The dashboard is a tiny node application with a d3.js widget in the frontend. The backend is served by the node application, and in the frontend we have an event listener that is bound to the stream of events that we are receiving from the Photon. Now normally you wouldn't want to build it this way, because you would want some sort of database or buffer in between. This buffer would log these events and make it easy to aggregate and query them. For that you might use Google <a href="https://cloud.google.com/pubsub/">pubsub</a> or the open source alternative <a href="https://kafka.apache.org">Kafka</a>. Both are fine, but for the sake of brevity we won't go into details on how to publish and subscribe to those services and how to save the events in a database. Let's save that for another blog post.</p>
<pre><code>function connect() {
    console.log('connecting')
    var deviceID = "12345";
    var accessToken = "abcdefg";
    var eventSource = new EventSource("https://api.spark.io/v1/devices/" + deviceID + "/events/?access_token=" + accessToken);

    eventSource.addEventListener('open', (e) =&gt; {
        console.log("Opened!"); },false)

    eventSource.addEventListener('error', (e) =&gt; {
        console.log("Errored!"); },false)

    return eventSource
}
...
eventSource.addEventListener('humancount', (e) =&gt; {
    let parsedData = JSON.parse(e.data)
    console.log('Received data', parsedData)
    pplCount.innerHTML = parsedData.data
    displayChart.newPoint(0.3)
}, false)</code></pre>
<p>So what we see above is that we simply create an event source that connects to our stream, and an event listener that adds another datapoint to our widget once someone walks by. That's basically it. You might want to <a href="http://github.com/plotti/humancounter">check out the project code on github</a> and leave a little star if you like it. Below you can see our little widget in action. Notice the small red spikes that occur when actual people walk by.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/eb9130725e71388ac74239c0ac31e2f95c13ed65/l64yh6cotn.jpg" alt="Our widget in action. Notice the red bars represent people walking by."></figure>
<p>Our widget in action. Notice the red bars represent people walking by in real-time.</p>
<h2>Where to go from here?</h2>
<p>Well for starters, if you were to aggregate this data over minutes, hours or weeks, you would actually get a nice chart of how frequented our stairs are and on which weekdays people walk them the most. For that we would want to save our events to a database and query it with a <a href="https://www.elastic.co/products/kibana">kibana dashboard</a>, or use a prebuilt IoT infrastructure like the <a href="https://ubidots.com">ubidots project</a>.</p>
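<p>As a sketch of such an aggregation, counts per hour could look like the snippet below. It assumes each logged event carries a published_at timestamp, which the Particle event stream does provide, though the exact event shape here is simplified.</p>

```javascript
// Group logged events into counts per UTC hour, as a dashboard would
// before charting foot traffic (event shape is a simplified assumption).
function countsPerHour(events) {
  return events.reduce((acc, e) => {
    const hour = new Date(e.published_at).getUTCHours();
    acc[hour] = (acc[hour] || 0) + 1;
    return acc;
  }, {});
}

console.log(countsPerHour([
  { published_at: '2016-10-17T09:15:00Z' },
  { published_at: '2016-10-17T10:05:00Z' }
]));
```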
<p>We could improve battery life by just collecting the datapoints and not constantly sending them over wifi, because this drains our little battery pack quite fast. But nonetheless our experiments have shown that with a small lipstick battery pack this device can run for up to 24 hours. So this might be enough to deploy it in a one-time measurement scenario, for example in a shop to measure how many customers walk by certain aisles.</p>
<p>On the other hand, you might want to deploy those devices with a fixed power source and monitor data constantly to actually learn something about the seasonality of the data, or use it in a completely different way.</p>
<p>We were thinking of connecting those devices to <a href="https://i.ytimg.com/vi/DJ2JjirBw1o/maxresdefault.jpg">PIR sensors</a> and placing them in our meeting rooms. Like this we could have smart meeting rooms that actually know whether there are people in them or not. Based on that we might discover that a meeting room often looks booked in the calendar but is actually empty. But that is material for another project.</p>
<p>There is btw. a great resource of <a href="https://www.hackster.io/particle/products/photon">photon tutorials</a> out there, if you want to build more things.</p>
<p>I hope you enjoyed this little IoT experiment and have found some motivation to get started yourself.</p>
<p>Cheers</p>
<p>Thomas Ebermann and Lukasz Gintowt</p>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/4f4587d1ff470e7157713d458836f9be63969880/0f62da1c272e71ce8216f8a9f5ff173f16dcc5cc.jpg" length="961094" type="image/jpeg" />
          </item>
        <item>
      <title>Why I don&#8217;t use the javascript &#8220;new&#8221; keyword</title>
      <link>https://www.liip.ch/fr/blog/why-i-dont-use-the-javascript-new-keyword</link>
      <guid>https://www.liip.ch/fr/blog/why-i-dont-use-the-javascript-new-keyword</guid>
      <pubDate>Thu, 09 Oct 2014 00:00:00 +0200</pubDate>
      <description><![CDATA[<p>Coming from the PHP world, I've spent a lot of time trying to reproduce or finding a similar way to work with objects and inheritance with Javascript. A lot of libraries gives you their own style of a class-like functionality with javascript, and yes, it will work similarly as OO classes, but after a big javascript project, you will probably want to understand what's all this <code>prototype</code> stuff appearing in your console during your debug sessions, and eventually find out that the classical inheritance and its preferred keyword <code>new</code> does not suit javascript so well.</p>
<h2>Forget about “new”</h2>
<p>The first thing that confuses developers that really start developing with javascript is the <code>new</code> keyword. “ah ok, like in PHP or Java, I can create instances of <code>Car</code> with <code>new Car()</code>, easy”. I won't dive into history details here, but you have to know that <code>new</code> was only introduced to let javascript gain more popularity. <code>new</code> is <a href="http://en.wikipedia.org/wiki/Syntactic_sugar" title="Definition on wikipedia">syntactic sugar</a>, but hides the real prototypal nature of the language.</p>
<p>Check the following constructor pattern in javascript:</p>
<pre><code class="language-js">function Rectangle(width, height) {
  this.height = height;
  this.width = width;
}

Rectangle.prototype.area = function () {
  return this.width * this.height;
};

var rect = new Rectangle(5, 10);
alert(rect.area());</code></pre>
<p>It doesn't look so bad. Well, I have to use this <code>prototype</code> keyword there, but ok, if it works like that. Let's define a square now, which is a kind of rectangle, so it extends our <code>Rectangle</code> class:</p>
<pre><code class="language-js">function Square(side) {
  return Rectangle.call(this, side, side);
}

Square.prototype = new Rectangle(); // yes, no arguments
Square.prototype.constructor = Square; // fix, otherwise it points to Rectangle

var sq = new Square(5);
alert(sq.area());</code></pre>
<p>It becomes quite complicated to explain what we are doing here: we have to play a lot with this <code>prototype</code> property, and this is not intuitive at all.</p>
<h2>'Object.create' is your friend</h2>
<p>Forget the constructor stuff; we have to think differently, with objects, and you will eventually find out that there is not much to know to build whatever you want: in javascript, <strong>objects inherit from other objects</strong>, and that's what you do when you want a new instance inheriting from another: create a new object 'B' and tell it to inherit from this other object 'A'. All the properties of 'A' will be available in the new object, and you can override them if you want to.<br />
Let's take our previous example with Rectangle and Square and rewrite it:</p>
<pre><code class="language-js">var Rectangle = { // just an object
  width: 0,
  height: 0,
  area: function() {
    return this.width * this.height;
  }
};

var rect1 = Object.create(Rectangle); // create a new object, extend it from Rectangle
rect1.width = 10;
rect1.height = 5;
alert(rect1.area());</code></pre>
<p>Isn't that much clearer? <code>Object.create</code> is a native javascript function in ECMAScript 5, supported in all major browsers (there is a <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/create#Polyfill" title="polyfill on mozilla developer website">polyfill</a> for old ones). It takes as argument the object we want to extend (it also accepts a second argument, but I won't cover it in this post). The way we define <code>width</code> and <code>height</code> is not so nice though, so let's create something like a constructor:</p>
<pre><code class="language-js">var Rectangle = {
  create: function(width, height) {
    var newObj = Object.create(this); // create a new object based on itself
    newObj.width = width;
    newObj.height = height;

    return newObj;
  },
  area: function() {
    return this.width * this.height;
  }
};

var rect1 = Rectangle.create(10, 5);
alert(rect1.area());</code></pre>
<p><code>create</code> is a kind of factory (avoid the term <em>constructor</em> from now on), and what's nice is that it's part of the object itself.</p>
<p>Let's create the square now:</p>
<pre><code class="language-js">var Square = Object.create(Rectangle); // extend Rectangle
Square.create = function(side) {
  // take the create function of Rectangle, or do something totally different
  return Rectangle.create(side, side);
}

var sq = Square.create(10);
alert(sq.area());</code></pre>
<p>This is much more intuitive: <code>Square</code> is a base object; once it is defined you don't want to touch it (unless you want to modify all square instances). Then you create an instance of it by calling <code>Square.create</code>. Note that we didn't have to use the <code>prototype</code> property at all to achieve inheritance.</p>
<h2>What's behind</h2>
<p>Let's have a look at the <code>sq</code> object in the console:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/7efa61a80a8a1dafee529427b6144e073f1157c6/screenshot-square-prototype.jpg" alt="Details of a square instance"></figure>
<p>We can see that the properties of <code>Rectangle</code> are in <code>__proto__</code>, while <code>width</code> and <code>height</code> are own properties of the object. If the object doesn't redefine a property, the javascript engine looks for it in <code>__proto__</code>, and if it doesn't find it there, dives deeper into the <code>__proto__</code> of <code>__proto__</code>, until it finds it (or reaches the end of the chain, in which case the lookup yields <code>undefined</code>). So javascript has this inheritance chaining built into its very nature, and there lies all the power of the language.</p>
<p>By the way, you may also have noticed that the very last <code>__proto__</code> contains some basic functions: all objects in javascript ultimately inherit from <code>Object.prototype</code> and therefore get native functions like <code>valueOf</code> or <code>toString</code>.</p>
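<p>You can observe this chain lookup directly in code. Here is a small sketch reusing the object-based <code>Rectangle</code> from above (redefined so the snippet stands on its own):</p>
<pre><code class="language-js">var Rectangle = {
  width: 0,
  height: 0,
  area: function () {
    return this.width * this.height;
  }
};

var sq = Object.create(Rectangle);
sq.width = 5;
sq.height = 5;

// `width` is an own property, `area` is found up the chain
console.log(sq.hasOwnProperty('width'));              // true
console.log(sq.hasOwnProperty('area'));               // false
console.log(Object.getPrototypeOf(sq) === Rectangle); // true

// the chain ends at Object.prototype, whose own prototype is null
var base = Object.getPrototypeOf(Rectangle);
console.log(base === Object.prototype);               // true
console.log(Object.getPrototypeOf(base));              // null</code></pre>
<p><code>Object.getPrototypeOf</code> is the standard way to inspect the chain; <code>__proto__</code> is what the browser console shows you, but it is better avoided in code.</p>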
<h2>Beware of references</h2>
<p>Because objects extend other objects, you have to keep in mind that some properties of your object live in <code>__proto__</code> and therefore were <strong>not copied</strong> when you created your instance. This can lead to unexpected behaviour if you forget about it. Let's see an example:</p>
<pre><code class="language-js">var CarOption = {
  create: function(name, price) {
    var newOption = Object.create(this);
    newOption.name = name;
    newOption.price = price;

    return newOption;
  }
};

var Car = {
  type: 'car',
  options: [],
  addOption: function(option) {
    this.options.push(option);
  }
};

var leatherOption = CarOption.create('leather', 200);
var metallicOption = CarOption.create('metallic paint', 600);

var luxuriousCar = Object.create(Car);
luxuriousCar.addOption(leatherOption);
luxuriousCar.addOption(metallicOption);
console.log(luxuriousCar.options.length); // 2 options, ok

var poorCar = Object.create(Car);
console.log(poorCar.options.length); // 2 options too!</code></pre>
<p><code>poorCar</code> has 2 options too because both instances have their <code>options</code> property pointing to the base <code>Car.options</code> property. To fix it, you have to create an <code>options</code> array whenever you create a car instance. Let's add a <code>create</code> function to do this:</p>
<pre><code class="language-js">var Car = {
  type: 'car',
  create: function() {
    var newCar = Object.create(this);
    newCar.options = []; // give it its own options array

    return newCar;
  },
  addOption: function(option) {
    this.options.push(option);
  }
};

var luxuriousCar = Car.create(); // use our new factory
luxuriousCar.addOption(leatherOption);
console.log(luxuriousCar.options.length); // 1 option, ok

var poorCar = Car.create();
console.log(poorCar.options.length); // 0 options, ok</code></pre>
<p>Having properties in common can be very useful, though: if I change the <code>type</code> property of <code>Car</code>, all the instances will get the new type.</p>
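<p>A quick sketch of that sharing behaviour, using the fixed <code>Car</code> from above (redefined here so the snippet stands on its own):</p>
<pre><code class="language-js">var Car = {
  type: 'car',
  create: function() {
    var newCar = Object.create(this);
    newCar.options = [];
    return newCar;
  }
};

var car1 = Car.create();
var car2 = Car.create();

Car.type = 'vehicle';     // change the shared property on the base object
console.log(car1.type);   // 'vehicle', both instances see the change
console.log(car2.type);   // 'vehicle'

car1.type = 'sports car'; // assigning on an instance creates an own property
console.log(car1.type);   // 'sports car'
console.log(car2.type);   // still 'vehicle', car2 is unaffected</code></pre>
<p>Note the asymmetry: reading walks up the prototype chain, but writing always creates an own property on the instance (it shadows the shared one instead of changing it).</p>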
<h2>Conclusion</h2>
<p>Javascript requires another way of thinking, where objects are first-class citizens and can construct or be constructed easily, without the need for a <code>new</code> keyword.</p>
<p>Create a base object, extend from it, extend from others, change its behaviour: the language lets you do nearly anything, which leads to a lot of flexibility, but also to multiple ways of doing things (which is not always good for developers, who are often looking for the right way to do what they intend to do).</p>
<p>Of course <code>new</code> sometimes has <a href="http://stackoverflow.com/a/383503/783743" title="Is JavaScript 's “new” Keyword Considered Harmful?">legitimate uses</a>, and it does not really cause problems if you don't need inheritance. But do you really still want to use it, now that you have read this?</p>]]></description>
          </item>
    
  </channel>
</rss>
