<?xml version="1.0" encoding="utf-8"?>
<!-- generator="Kirby" -->
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">

  <channel>
    <title>Mot-cl&#233;: javascript &#183; Blog &#183; Liip</title>
    <link>https://www.liip.ch/fr/blog/tags/javascript</link>
    <generator>Kirby</generator>
    <lastBuildDate>Tue, 09 Oct 2018 00:00:00 +0200</lastBuildDate>
    <atom:link href="https://www.liip.ch" rel="self" type="application/rss+xml" />

        <description>Articles du blog Liip avec le mot-cl&#233; &#8220;javascript&#8221;</description>
    
        <language>fr</language>
    
        <item>
      <title>From coasters to Vuex</title>
      <link>https://www.liip.ch/fr/blog/from-coasters-to-vuex</link>
      <guid>https://www.liip.ch/fr/blog/from-coasters-to-vuex</guid>
      <pubDate>Tue, 09 Oct 2018 00:00:00 +0200</pubDate>
      <description><![CDATA[<p>You'll take a coaster and start calculating quickly. All factors need to be taken into account as you write down your calculations on the edge of the coaster. Once your coaster is full, you'll know a lot of answers to a lot of questions: How much can I offer for this piece of land? How expensive will one flat be? How many parking lots could be built and how expensive are they? And of course there's many more.</p>
<h2>In the beginning, there was theory</h2>
<p>Architecture students at the ETH learn this so-called &quot;coaster method&quot; in real estate economics classes. Planning and building a house of any size is no easy task to begin with, and neither is understanding the financial aspect of it. To understand all of those calculations, some students created spreadsheets that do the calculations for them. This is prone to error: there are many questions that can be answered and many parameters that influence those answers. The ETH IÖ app was designed to teach students about the complex correlations of the different factors that influence the decision of whether building a house on a certain lot is financially feasible or not.</p>
<figure><figure><img src="https://liip.rokka.io/www_inarticle/0c0b57/spreadsheet.jpg" alt=""></figure><figcaption>The spreadsheet provided by the client PO</figcaption></figure>
<p>The product owner at ETH, a lecturer for real estate economics, took the time to create such spreadsheets, much like the students. These spreadsheets contained all calculations and formulas that were part of the course, as well as some sample calculations. After a thorough analysis of the spreadsheet, we came up with a total of about 60 standalone values that could be adjusted by the user, as well as about 45 subsequent formulas that used those values and other formulas to yield yet another value.</p>
<p>60 values and 45 subsequent formulas, all of them calculated on a coaster. Implementing this over several components would end up in a mess. We needed to abstract this away somehow.</p>
<h2>Exploring the technologies</h2>
<p>The framework we chose to build the frontend application with was Vue. We had already used Vue to build a prototype, so we figured we could reuse some components. We valued Vue's size and flexibility and were somewhat familiar with it, so it was a natural choice. There are two main ways of handling your data when working with Vue: either manage the state in the components, or in a state machine, like Vuex.</p>
<p>Since many of the values need to be either changed or displayed in different components, keeping the state on a component level would tightly couple those components. This is exactly what is happening in the spreadsheet mentioned earlier. Fields from different parts of the sheet are referenced directly, making it hard to retrace the path of the data.</p>
<figure><figure><img src="https://liip.rokka.io/www_inarticle/431b52/components-coupled.jpg" alt=""></figure><figcaption>A set of tightly coupled components. Retracing the calculation of a single field can be hard.</figcaption></figure>
<p>Keeping the state outside of the components and providing ways to update the state from any component decouples them. Not a single calculation needs to be done in an otherwise very view-related component. Any component can trigger an update, any component can read, but ultimately, the state machine decides what happens with the data.</p>
<figure><figure><img src="https://liip.rokka.io/www_inarticle/373d8b/components-decoupled.jpg" alt=""></figure><figcaption>By using Vuex, components can be decoupled. They don't need state anymore.</figcaption></figure>
<p>Vue has a solution for that: Vuex. Vuex allows decoupling the state from the components, moving it over to dedicated modules. Vue components can commit mutations to the state or dispatch actions that contain logic. For a clean setup, we went with Vuex.</p>
<h2>Building the Vuex modules</h2>
<p>The core functionality of the app can be boiled down to five steps:</p>
<ol>
<li>Find the lot - Where do I want to build?</li>
<li>Define the building - How large is it? How many floors, etc.?</li>
<li>Further define any building parameters and choose a reference project - How many flats, parking lots, size of a flat?</li>
<li>Get the standards - What are the usual prices for flats and parking lots in this region?</li>
<li>Monetizing - What's the net yield of the building? How can it be influenced?</li>
</ol>
<p>Those five steps essentially boil down to four different topics:</p>
<ol>
<li>The lot</li>
<li>The building with all its parameters</li>
<li>The reference project</li>
<li>The monetizing part</li>
</ol>
<p>These topics can be treated as Vuex modules directly. An example of a basic module <code>Lot</code> would look like the following:</p>
<pre><code class="language-javascript">// modules/Lot/index.js

export default {
  // Namespaced, so any mutations and actions can be accessed via `Lot/...`
  namespaced: true,

  // The actual state: All fields that the lot needs to know about
  state: {
    lotSize: 0.0,
    coefficientOfUtilization: 1.0,
    increasedUtilization: false,
    parkingReductionZone: 'U',
    // ...
  }
}</code></pre>
<p>The fields within the state act as a sort of interface: they are the fields that can be altered via mutations or actions. They can be considered the &quot;starting point&quot; of all subsequent calculations.</p>
<p>Those subsequent calculations were implemented as getters within the same module, as long as they are still related to the <code>Lot</code>:</p>
<pre><code class="language-javascript">// modules/Lot/index.js

export default {
  namespaced: true,

  state: {
    lotSize: 0.0,
    coefficientOfUtilization: 1.0
  },

  // Getters - the subsequent calculations
  getters: {
    /**
     * Unit: m²
     * DE: Theoretisch realisierbare aGF
     * @param state
     * @return {number}
     */
    theoreticalRealizableCountableFloorArea: state =&gt; {
      return state.lotSize * state.coefficientOfUtilization
    },

    // ...
  }
}</code></pre>
<p>And we're good to go. Mutations and actions are implemented in their respective store modules too. This makes it more obvious where the data actually changes.</p>
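<p>To illustrate the pattern (a sketch only - mutation and action names like <code>setLotSize</code> are mine, not taken from the actual app), such a module could look like this:</p>

```javascript
// modules/Lot/index.js (sketch; in the app, this object would be the
// default export of the module)
const Lot = {
  namespaced: true,

  state: {
    lotSize: 0.0
  },

  // Mutations: the only place where the state is actually changed
  mutations: {
    setLotSize (state, lotSize) {
      state.lotSize = lotSize
    }
  },

  // Actions: may contain logic, then commit one or more mutations
  actions: {
    updateLotSize ({ commit }, lotSize) {
      // Reject nonsensical input before committing
      if (lotSize >= 0) {
        commit('setLotSize', lotSize)
      }
    }
  }
}
```

<p>A component would then call <code>this.$store.dispatch('Lot/updateLotSize', 450)</code> instead of touching the data itself.</p>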
<h2>Benefits and drawbacks</h2>
<p>With this setup, we've achieved several things. First of all, we separated the data from the view, following the &quot;separation of concerns&quot; design principle. We also managed to group related fields and formulas together in a domain-driven way, thus making their location more predictable. All of the subsequent formulas are now also unit-testable; testing their implementation within Vue components would be harder, as they would be tightly coupled to the view. Thanks to the mutation history provided by the Vue dev tools, every change to the data is traceable. The overall state of the application also becomes exportable, allowing for an easier implementation of a &quot;save &amp; load&quot; feature. Also, reactivity is kept as a core feature of the app - Vuex is fast enough to make any subsequent update of the data virtually instant.</p>
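<p>The &quot;save &amp; load&quot; part, for example, can be sketched in two lines (assuming a Vuex <code>store</code> instance; <code>replaceState</code> is Vuex's method for swapping out the entire state tree):</p>

```javascript
// Serialize the whole application state to JSON for saving
const saveState = store => JSON.stringify(store.state)

// Restore a previously saved state in one go
const loadState = (store, json) => store.replaceState(JSON.parse(json))
```
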
<p>However, as with every architecture, there are also drawbacks. Mainly, by introducing Vuex, the application becomes more complex in general. Hooking the data up to the components requires a lot of boilerplate - otherwise it's not clear which component is using which field. As all the store modules need similar methods (e.g. for loading data or resetting the entire module), there's also a lot of repetition going on. Furthermore, the store modules end up tightly coupled with each other, as they use fields and getters of basically all other modules.</p>
<p>In conclusion, the benefits of this architecture outweigh the drawbacks. Having a state machine in this kind of application makes sense.</p>
<h2>Takeaway thoughts</h2>
<p>The journey from the coasters, to the spreadsheets, to a whiteboard, to an actual usable application was thrilling. The chosen architecture allowed us to keep a consistent setup, even with the growing complexity of the calculations in the background. The app became more testable. The Vue components don't even care anymore about where the data comes from, or what happens with changed fields. Separating the view and the model was a necessary decision to avoid a mess of tightly coupled components - the app stayed maintainable, which is important. After all, the students are using it all the time.</p>
                  <enclosure url="http://liip.rokka.io/www_card_2/09cc96/coasters.jpg" length="5389277" type="image/jpeg" />
          </item>
        <item>
      <title>Progressive web apps, Meteor, Azure and the Data science stack or The future of web development conference.</title>
      <link>https://www.liip.ch/fr/blog/progressive-web-apps-meteor-azure-and-the-data-science-stack-or-the-future-of-web-development-conference</link>
      <guid>https://www.liip.ch/fr/blog/progressive-web-apps-meteor-azure-and-the-data-science-stack-or-the-future-of-web-development-conference</guid>
      <pubDate>Wed, 09 May 2018 00:00:00 +0200</pubDate>
      <description><![CDATA[<h3>Back to the future</h3>
<p>Although the conference (hosted last week at the Crown Plaza in Zürich) explicitly had the word future in its title, I found that the new trends often felt a bit like &quot;back to the future&quot;. Why? Because it seems that some rather old concepts like plain old SQL, &quot;offline first&quot; or pure JavaScript frameworks are making a comeback in web development - but with a twist. This already brings us to the first talk. </p>
<h3>Modern single page apps with meteor</h3>
<figure><img src="https://liip.rokka.io/www_inarticle/ce3196/meteor.png" alt=""></figure>
<p>Timo Horstschaefer from <a href="https://www.ledgy.com">Ledgy</a> showed how to create modern single page apps with <a href="https://www.meteor.com">meteor.js</a>. Although every framework promises to &quot;ship more with less code&quot;, he showed that for their product Ledgy - a mobile app to allocate shares among stakeholders - they were actually able to write it in less than 3 months using 13'000 lines of code. In comparison to other web frameworks, where there is a backend written in one language (e.g. Ruby with Rails, Python with Django) and a JS-heavy frontend framework (e.g. React or Angular), Meteor does things differently by offering a tightly coupled frontend and backend written purely in JS. The backend is mostly a node component. In their case it is really slim, with only 500 lines of code. It is mainly responsible for data consistency and authentication, while all the other logic simply runs in the client. Such client-centric projects really shine when having to deal with shaky Internet connections, because Meteor takes care of all the data transmission in the background and catches up on the changes once the connection is regained. Although Meteor seemed to have had a rough patch in the community in 2015 and 2016, it is heading for a strong comeback. The framework is highly opinionated, but I personally really liked the high abstraction level, which seemed to allow the team a blazingly fast time to market. A quite favorable development is that Meteor is trying to open up beyond MongoDB as a database by offering its own GraphQL client (Apollo) that even outshines Facebook's own client, giving developers freedom in the choice of a database solution.</p>
<p>I highly encourage you to have a look at Timo's <a href="http://mypage.netlive.ch/demandit/files/M_D0861CC4DCEF62DFADC/dms/File/Moderne%20Single%20Page-Apps%20mit%20Meteor%20_%20Timo.pdf">presentation.</a> </p>
<h3>The data science stack</h3>
<figure><img src="https://liip.rokka.io/www_inarticle/8b4877/datastack.png" alt=""></figure>
<p>Then it was my turn to present the data science stack. I won't bother you with the contents of my talk, since I've already blogged about it in detail <a href="https://www.liip.ch/en/blog/the-data-science-stack-2018">here</a>. If you still want to have a look at the presentation, you can of course <a href="http://mypage.netlive.ch/demandit/files/M_D0861CC4DCEF62DFADC/dms/File/Liip%20Data%20Stack.pdf">download</a> it. In the talk I offered a very subjective bird's-eye view on how the data-centric perspective touches modern web standards. An interesting piece of feedback from the panel was the question whether such an overview really helps our developers to create better solutions. I personally think that having such maps or collections for orientation especially helps people in junior positions to expand their field of view. It might also help senior staff to look beyond their comfort zone and overcome the saying &quot;if everything you have is a hammer, then every problem looks like a nail to you&quot; - that is, using the same set of tools for every project. Yet I think the biggest benefit might be to offer the client a truly unbiased perspective on his options, of which he might have many more than some big vendors are trying to make him believe. </p>
<h3>From data science stack to data stack</h3>
<figure><img src="https://liip.rokka.io/www_inarticle/ed727f/azure.png" alt=""></figure>
<p>Meinrad Weiss from Microsoft offered a glimpse into the Azure universe, showing us the many options for how data can be stored in the Azure cloud. While some facts were indeed surprising, for example Microsoft being unable to find two data center locations in Switzerland that were more than 400 miles apart (apparently the country is too small!), other facts, like the majority of clients still operating in the SQL paradigm, were less surprising. One thing that really amazed me was their &quot;really big&quot; storage solution, so basically everything beyond 40(!) petabytes: The data is spread into 60(!) storage blobs that operate independently of the computational resources, which can be scaled on demand on top of the data layer. In comparison to a classical Hadoop stack, where the computation and the data are baked into one node, here the customer can scale up his computational power temporarily and scale it down again after he has finished his computations, thus saving a bit of money. In regards to the bill, though, such solutions are not cheap - we are talking about an entrance price of roughly 5 digits per month, so not really the typical SME scenario. Have a look at the <a href="http://mypage.netlive.ch/demandit/files/M_D0861CC4DCEF62DFADC/dms/File/AzureAndData_Meinrad_Microsoft.pdf">presentation</a> if you want a quick refresher on current options for big data storage on Microsoft Azure. An interesting insight was also that while a lot of different paradigms have emerged in the last years, Microsoft managed to include them all (e.g. Gremlin Graph, Cassandra, MongoDB) in their database services, unifying their interfaces in one SQL endpoint. </p>
<h3>Offline First or progressive web apps</h3>
<figure><img src="https://liip.rokka.io/www_inarticle/7a4898/pwa.png" alt=""></figure>
<p>Nico Martin, a leading web and frontend developer from the <a href="https://sayhelloagency.com">Say Hello</a> agency, showcased how the web is coming back to mobile again. Coming back? Yes, you heard right. If you thought you had been doing mobile first for many years now, you are right to ask why it is coming back. As it turns out (according to a recent comScore report from 2017), although people are indeed using their mobiles heavily, they are spending 87% of their time inside apps and not browsing the web - which might be surprising. On the other hand, while apps seem to dominate mobile usage, more than 50% of people don't install any new apps on their phone, simply because they are happy with the ones they have. In fact, they spend 80% of their time in their top 3 apps. That poses a really difficult problem for new apps - how can they get their foot in the door given such highly habitualized behavior? One potential answer might be <a href="https://developers.google.com/web/progressive-web-apps/">Progressive Web Apps</a>, a standard defined by Apple and Google already quite a few years ago, that seeks to offer a highly responsive and fast website behavior that feels almost like an application. To pull this off, the main idea is that a so-called &quot;service worker&quot; - a piece of code that is installed on the mobile and continues running in the background - makes it possible for these web apps to, for example, send notifications to users while they are not using the website - rather something that users know from their classical native apps. Another very trivial benefit is that you can install these apps on your home screen, and by tapping them it feels like really using an app and not browsing a website (e.g. there is no browser address bar). Finally, the whole website can operate in offline mode too, thanks to a smart caching mechanism that allows developers to decide what to store on the mobile, in contrast to what the browser cache normally does. 
If you feel like trying out one of these apps, I highly recommend <a href="http://mobile.twitter.com">mobile.twitter.com</a>, where Google and Twitter sat together and tried to showcase everything that is possible with this new technology. If you are using an Android phone, these apps should work right away, but if you are using an Apple phone, make sure you have at least iOS 11.3, which finally supports progressive web apps on Apple devices. While Apple has slightly opened the door to PWAs, I fear that their lack of support for the major features might have something to do with politics. After all, developers circumventing the app store and interacting with their customers without an intermediary doesn't leave much love for Apple's beloved App Store. Have a look at Martin's great <a href="https://slides.nicomartin.ch/pwa-internet-briefing.html">presentation</a>. </p>
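<p>As a rough sketch of that caching idea (the file name, cache name and cached paths are my assumptions, not taken from the talk), such a service worker could look like this:</p>

```javascript
// sw.js (sketch) — cache first, network as fallback
const CACHE = 'app-shell-v1'
const APP_SHELL = ['/', '/index.html']

// The decision logic itself: try the cache, fall back to the network
function respond (request, matchCache, fetchNetwork) {
  return matchCache(request).then(cached => cached || fetchNetwork(request))
}

// Guarded, so the sketch also parses outside of a worker context
if (typeof self !== 'undefined') {
  // On install, put the app shell into the cache
  self.addEventListener('install', event => {
    event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(APP_SHELL)))
  })

  // On every request, answer from the cache if possible
  self.addEventListener('fetch', event => {
    event.respondWith(respond(event.request, r => caches.match(r), r => fetch(r)))
  })
}
```

<p>Registering it from the page is a one-liner: <code>navigator.serviceWorker.register('/sw.js')</code>, guarded by a <code>'serviceWorker' in navigator</code> check.</p>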
<h3>Conclusion</h3>
<p>Overall, although the topics were a bit diverse, I definitely enjoyed the conference. A big thanks goes to the organizers of the <a href="http://internet-briefing.ch">Internet Briefing series</a>, who do an amazing job of organizing these conferences in a monthly fashion. They are definitely a good way to exchange best practices and eventually learn something new. For me, it was the motivation to finally get my hands dirty with progressive web apps, knowing that you don't really need much to make them work.  </p>
<p>As usual I am happy to hear your comments on these topics and hope that you enjoyed that little summary.</p>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/25d9a4/abstract-art-colorful-942317.jpg" length="1981006" type="image/jpeg" />
          </item>
        <item>
      <title>Shapefiles - Of avalanches and ibexes</title>
      <link>https://www.liip.ch/fr/blog/shapefiles-of-avalanches-and-ibexes</link>
      <guid>https://www.liip.ch/fr/blog/shapefiles-of-avalanches-and-ibexes</guid>
      <pubDate>Wed, 06 Dec 2017 00:00:00 +0100</pubDate>
      <description><![CDATA[<p>At Liip, we recently completed an application for architecture students that uses shapefiles as its backbone data to deliver information about building plots in the city of Zurich. They are displayed on a map and can be inspected by the user. A similar project was done by a student team, of which I was part, for the FHNW HT project track: A fun-to-use web application that teaches children in primary school about their commune.</p>
<p>But not only can shapefiles help to enhance the user experience, they also contain data that, when brought into context, can yield some interesting insights. So let's see what I can find out by digging into some shapefiles and building a small app to explore them.</p>
<p>In this blog post, I'm taking a practical approach to working with shapefiles and show you how you can get started working with them as well!</p>
<h2>Alpine Convention</h2>
<p>The first hurdles of working with shapefiles are understanding what they actually are and acquiring them in the first place.</p>
<p>Shapefiles are binary files that contain geographical data. They usually come in groups of at least 3 different files that all have the same name; their only difference is their file ending: <code>.shp</code>, <code>.shx</code> and <code>.dbf</code>. However, a complete data set can contain many more files. The data format was developed by Esri and introduced in the early 1990s. Shapefiles contain so-called <em>features</em>: a geometrical shape together with its metadata. A feature could be a point (a single X/Y coordinate pair) plus a text describing what this point is, or a polygon consisting of several X/Y points, a single line, a multi-line, etc.</p>
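<p>Parsed into JavaScript (as the library used later in this post does), a feature looks much like a GeoJSON object. A point feature might look like this (all values made up for illustration):</p>

```javascript
// A parsed point feature (illustrative values only)
const feature = {
  type: 'Feature',
  properties: {
    // The metadata, coming from the .dbf file
    name: 'Some landmark'
  },
  geometry: {
    // The shape itself, coming from the .shp file
    type: 'Point',
    coordinates: [8.5417, 47.3769] // a single X/Y pair
  }
}
```
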
<p>To acquire some shapefiles to work with, I have a look at <a href="https://opendata.swiss/en/dataset?res_format=SHAPEFILE">OpenData.swiss</a>. OpenData offers a total of 102 (as of November 2017) different shapefile data sets. I only need to download one and start working with it. For this example, I chose a rather small shapefile: <a href="https://opendata.swiss/en/dataset/alpenkonvention">Alpine convention</a>, consisting of the perimeters of the <a href="http://www.alpconv.org/en/convention/default.html">Alpine Convention</a> in Switzerland.</p>
<p>Since these files are binaries and I cannot look at them in some text editor, I need some tool to have a look at what I just downloaded.</p>
<h2>Introducing QGIS - A Free and Open Source Geographic Information System</h2>
<p><a href="http://www.qgis.org/en/site/">QGIS</a> is an extensive solution for all kinds of GIS. It can be downloaded and installed for free, has a lot of plugins and an active community. I'm going to use it to have a look at my previously downloaded shapefile.</p>
<p>For this example, I installed QGIS with the <a href="https://plugins.qgis.org/plugins/openlayers_plugin/">OpenLayers plugin</a>. This is what it looks like when started up:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/f498c7/1-blank.jpg" alt=""></figure>
<p>To get some reference on where I actually am and where my data goes, I add OpenStreetMap as a layer. I also pan and zoom the map, so I have Switzerland in focus.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/8f4fb0/2-osm-layer.jpg" alt=""></figure>
<p>The next step would be to actually open the shapefile, so QGIS can display it on top of the map. For that I can simply double-click the <code>.shp</code> file in the file browser on the left.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/2ffcf0/3-alpine-convention.jpg" alt=""></figure>
<p>This view already gives me some information about the shapefile I'm dealing with: what area it covers and how many features there are. The Alpine Convention shapefile seems to consist of only one feature, a big polygon, which is enough, given that it only shows the perimeters of the Alpine Convention in Switzerland.</p>
<p>To see what areas and/or details it covers, I can alter the style of the layer, making it transparent. I also zoom in more to see if the shape is accurate. Its edges should exactly cover the Swiss borders.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/fb586b/4-layer-style.jpg" alt=""></figure>
<p>Marvelous. Now that I've seen the shape of the feature, I'll have a look at the metadata the shapefile offers. They can be inspected by opening the attributes table of the shapefile in the bottom left browser.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/4814d8/5-attribute-table.jpg" alt=""></figure>
<p>This file also doesn't offer much metadata, but now I can see that there are actually two features in there. Anyway, there might be more interesting shapefiles out there to work with. </p>
<p>But I'm going to stick with the alpine topic.</p>
<h2>Of avalanches and ibexes</h2>
<p>Digging through OpenData's shapefile repository a bit more, I found two more shapefiles that could potentially yield interesting data: <a href="https://opendata.swiss/en/dataset/verbreitung-der-steinbockkolonien">The distribution of ibex colonies</a> and <a href="https://opendata.swiss/en/dataset/stand-der-naturgefahrenkartierung-in-den-gemeinden-lawinen">the state of natural hazard mapping by communes for avalanches.</a></p>
<p>As awesome as QGIS is for examining the data at hand, I want to build my own explorer, maybe even built upon multiple shapefiles and in my own design. For this, there are multiple libraries for all kinds of languages. For web development, the three most interesting ones are for <a href="https://packagist.org/?q=shapefile&amp;p=0">PHP</a>, <a href="https://pypi.python.org/pypi/pyshp">Python</a> and <a href="https://www.npmjs.com/search?q=shapefile">JavaScript</a>.</p>
<p>For my example, I'll build a small frontend application in JavaScript to display my three shapefiles: The alpine convention, the ibex colonies and the avalanche mapping. For this I'm going to use a JavaScript package simply called <a href="https://www.npmjs.com/package/shapefile"><em>&quot;shapefile&quot;</em></a> and <a href="http://leafletjs.com/">leafletjs</a> to display the polygons from the features. But first things first.</p>
<h2>Reading out the features</h2>
<p><strong>Disclaimer:</strong> The following code examples are using ES6 and imply a webpack/babel setup. They're not exactly copy/paste-able, but they show how to work with the tools at hand.</p>
<p>First, I try to load the alpine convention shapefile and log it into the console. The shapefile package mentioned above comes with two examples in the README, so for simplicity, I just roll with this approach:</p>
<pre><code class="language-javascript">import { open } from 'shapefile'

open('assets/shapefiles/alpine-convention/shapefile.shp').then(source =&gt; {
  source.read().then(function apply (result) { // Read feature from the shapefile
    if (result.done) { // If there's no features left, abort
      return
    }

    console.log(result)

    return source.read().then(apply) // Iterate over result
  })
})</code></pre>
<figure><img src="https://liip.rokka.io/www_inarticle/38c897/log-1.jpg" alt=""></figure>
<p>Works like a charm! This yields the two features of the shapefile in the console. What I see here already is that the properties of each feature, normally stored in a separate file, are already included in the feature. This is the result of the library fetching both files and processing them together:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/66d7b1/log-2.jpg" alt=""></figure>
<p>Now I have a look at the coordinates that are attached to the first feature. The first thing I notice is that there are two different arrays, implying two different sets of coordinates, but I'm going to ignore the second one for now.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/8deff6/log-3.jpg" alt=""></figure>
<p>And there's the first pitfall. These coordinates look nothing like <em>WGS84</em> (Latitude/Longitude), but rather like something different. Coordinates in WGS84 should be floating point numbers, ideally starting with 47 and 8 for Switzerland, so this shapefile confronted me with a different coordinate reference system (or short: CRS). In order to correctly display the shape on top of a map, or use it with other data, I want to convert these coordinates to WGS84 first, but in order to do that, I need to figure out which CRS this shapefile is using. Since this shapefile is coming from the Federal Office for Spatial Development ARE, the CRS used is most likely <a href="https://en.wikipedia.org/wiki/Swiss_coordinate_system">CH1903(+), aka the &quot;Swiss coordinate system&quot;.</a></p>
<p>So in order to convert that, I need some maths. Searching the web a bit, I can find <a href="https://raw.githubusercontent.com/ValentinMinder/Swisstopo-WGS84-LV03/master/scripts/js/wgs84_ch1903.js">a JavaScript solution to calculate back and forth between CH1903 and WGS84.</a> But I only need some parts of it, so I copy and alter the code a bit:</p>
<pre><code class="language-javascript">// Inspired by https://raw.githubusercontent.com/ValentinMinder/Swisstopo-WGS84-LV03/master/scripts/js/wgs84_ch1903.js

/**
 * Converts CH1903(+) to Latitude
 * @param y
 * @param x
 * @return {number}
 * @constructor
 */
const CHtoWGSlat = (y, x) =&gt; {
  // Converts military to civil and to unit = 1000km
  // Auxiliary values (% Bern)
  const yAux = (y - 600000) / 1000000
  const xAux = (x - 200000) / 1000000

  // Process lat
  const lat = 16.9023892 +
    (3.238272 * xAux) -
    (0.270978 * Math.pow(yAux, 2)) -
    (0.002528 * Math.pow(xAux, 2)) -
    (0.0447 * Math.pow(yAux, 2) * xAux) -
    (0.0140 * Math.pow(xAux, 3))

  // Unit 10000" to 1" and converts seconds to degrees (dec)
  return lat * 100 / 36
}

/**
 * Converts CH1903(+) to Longitude
 * @param y
 * @param x
 * @return {number}
 * @constructor
 */
const CHtoWGSlng = (y, x) =&gt; {
  // Auxiliary values (% Bern)
  const yAux = (y - 600000) / 1000000
  const xAux = (x - 200000) / 1000000

  // Process lng
  const lng = 2.6779094 +
    (4.728982 * yAux) +
    (0.791484 * yAux * xAux) +
    (0.1306 * yAux * Math.pow(xAux, 2)) -
    (0.0436 * Math.pow(yAux, 3))

  // Unit 10000" to 1 " and converts seconds to degrees (dec)
  return lng * 100 / 36
}

/**
 * Convert CH1903(+) to WGS84 (Latitude/Longitude)
 * @param y
 * @param x
 */
export default (y, x) =&gt; [
  CHtoWGSlat(y, x),
  CHtoWGSlng(y, x)
]</code></pre>
<p>And this I can now use in my main app:</p>
<pre><code class="language-javascript">import { open } from 'shapefile'
import ch1903ToWgs from './js/ch1903ToWgs'

open('assets/shapefiles/alpine-convention/shapefile.shp').then(source =&gt; {
  source.read().then(function apply (result) { // Read feature from the shapefile
    if (result.done) { // If there's no features left, abort
      return
    }

    // Convert CH1903 to WGS84
    const coords = result.value.geometry.coordinates[0].map(coordPair =&gt; {
      return ch1903ToWgs(coordPair[0], coordPair[1])
    })

    console.log(coords)

    return source.read().then(apply) // Iterate over result
  })
})</code></pre>
<p>Which yields the following adjusted coordinates:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/bcbc96/log-4.jpg" alt=""></figure>
<p>A lot better. Now I'll throw them on top of a map, so I have something I can actually look at.</p>
<h2>Making things visible</h2>
<p>For that, I add leaflet to the app, together with the Positron (lite) tiles from CartoDB. The Positron (lite) theme is light grey/white, which provides a good contrast to the features I'm going to display on top of it. OpenStreetMap is awesome, but too colourful - the polygons would not be visible enough.</p>
<pre><code class="language-javascript">import leaflet from 'leaflet'
import { open } from 'shapefile'
import ch1903ToWgs from './js/ch1903ToWgs'

/**
 * Add the map
 */
const map = new leaflet.Map(document.querySelector('#map'), {
  zoomControl: false, // Disable default one to re-add custom one
}).setView([46.8182, 8.2275], 9) // Show Switzerland by default

// Move zoom control to bottom right corner
map.addControl(leaflet.control.zoom({position: 'bottomright'}))

// Add the tiles
const tileLayer = new leaflet.TileLayer('//cartodb-basemaps-{s}.global.ssl.fastly.net/light_all/{z}/{x}/{y}.png', {
  minZoom: 9,
  maxZoom: 20,
  attribution: '&amp;copy; CartoDB basemaps'
}).addTo(map)

map.addLayer(tileLayer)

/**
 * Process the shapefile
 */
open('assets/shapefiles/alpine-convention/shapefile.shp').then(source =&gt; {
  source.read().then(function apply (result) { // Read feature from the shapefile
    if (result.done) { // If there's no features left, abort
      return
    }

    // Convert CH1903 to WGS84
    const coords = result.value.geometry.coordinates[0].map(coordPair =&gt; {
      return ch1903ToWgs(coordPair[0], coordPair[1])
    })

    console.log(coords)

    return source.read().then(apply) // Iterate over result
  })
})</code></pre>
<p>This already shows me a nice looking map I can work with:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/71b463/map-1.jpg" alt=""></figure>
<p>The next step is to hook up the map with the shapefile loading. For this, I create leaflet polygons out of the coordinates I transformed to WGS84 earlier:</p>
<pre><code class="language-javascript">// ...

// Convert CH1903 to WGS84
const coords = result.value.geometry.coordinates[0].map(coordPair =&gt; {
  return ch1903ToWgs(coordPair[0], coordPair[1])
})

const leafletPolygon = new leaflet.Polygon(coords, {
  weight: 0.5,
  color: '#757575',
  fillOpacity: 0.3,
  opacity: 0.3,
})

leafletPolygon.addTo(map)

// ...</code></pre>
<p>And I have a look at the result:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/9019b4/map-2.jpg" alt=""></figure>
<p>Nice! </p>
<h2>The end result</h2>
<p>With some more UI and a toggle for the multiple shapefiles I downloaded earlier, I can build a little app that shows me the hazard zones for avalanches and where in Switzerland ibexes live:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/7767b5/map-3.jpg" alt=""></figure>
<p>We can see the two shapefiles in action: The light green, green, yellow and red polygons are communes with some hazard of avalanches (light green = low, green = mid-low, yellow = mid, red = high), the darker polygons on top are the ibex colonies.</p>
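<p>That legend colouring can be expressed as a small helper that feeds straight into the <code>Polygon</code> style options. This is a hypothetical sketch: the level names and hex values are my own, chosen to match the legend above, not the app's actual code:</p>

```javascript
// Map a hazard level to a fill colour for the leaflet polygons.
// Hypothetical helper: level names and hex values are illustrative,
// picked to match the legend (light green / green / yellow / red).
const hazardColor = (level) => ({
  low: '#c8e6c9',     // light green = low hazard
  midLow: '#4caf50',  // green = mid-low hazard
  mid: '#ffeb3b',     // yellow = mid hazard
  high: '#f44336'     // red = high hazard
}[level] || '#757575') // fallback: the neutral grey used earlier

// Usage with the polygon options from earlier:
// new leaflet.Polygon(coords, { fillColor: hazardColor('high'), fillOpacity: 0.3 })
console.log(hazardColor('high')) // '#f44336'
```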
<p>This map already gives a good insight on the behaviour of ibexes: apparently they're living in low-hazard to mid-low-hazard zones for avalanches only. They also tend to avoid mid-hazard and high-hazard zones. Since ibexes prefer alpine terrain and such terrain usually bears some danger of avalanches, this is plausible, but now I've got some visible data to actually support this claim!</p>
<p>The finished app can be found in action <a href="https://liip.github.io/shapefile-blog-example/">here</a>, its source code can be found <a href="https://github.com/liip/shapefile-blog-example">in this repository</a>.</p>
<h2>The pitfalls</h2>
<p>Although shapefiles are a mighty way to handle GIS data, there are some pitfalls to avoid. One I've already mentioned is an unexpected CRS. In most cases it can be identified at the point of usage, but it is recommended to check which CRS a shapefile uses when first inspecting it. The second big pitfall is the size of the shapefiles. When using large shapefiles with JavaScript directly in the browser, the browser might crash while trying to handle the huge number of polygons. There are several solutions, though: one can either simplify the shapefile by removing unnecessary polygons, or pre-process it, store its polygons in some kind of database and only query the polygons currently in sight.</p>
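<p>The "only the polygons currently in sight" idea boils down to a bounding-box intersection test: compute each polygon's bounding box once, then keep only the polygons whose box overlaps the current viewport. A minimal sketch, with coordinates as [lat, lng] pairs as produced by the converter above:</p>

```javascript
// Compute the bounding box of one polygon ring ([lat, lng] pairs).
const bbox = (ring) => ring.reduce(
  (b, [lat, lng]) => ({
    minLat: Math.min(b.minLat, lat), maxLat: Math.max(b.maxLat, lat),
    minLng: Math.min(b.minLng, lng), maxLng: Math.max(b.maxLng, lng)
  }),
  { minLat: Infinity, maxLat: -Infinity, minLng: Infinity, maxLng: -Infinity }
)

// True if two boxes overlap, i.e. the polygon might be visible.
const intersects = (a, b) =>
  a.minLat <= b.maxLat && a.maxLat >= b.minLat &&
  a.minLng <= b.maxLng && a.maxLng >= b.minLng

// Keep only the polygons whose box overlaps the viewport box.
const visiblePolygons = (polygons, viewport) =>
  polygons.filter(ring => intersects(bbox(ring), viewport))
```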
<h2>Takeaway thoughts</h2>
<p>Personally, I love working with shapefiles. There are many use cases for them, and exploring the data they hold is really exciting. Even though there might be a pitfall or two, in general it's a bliss, as one can get something up and running with little effort. The OpenData community uses shapefiles quite a lot, so it is a well-established standard, and there are many libraries and tools out there that make working with shapefiles as awesome as it is.</p>
                  <enclosure url="http://liip.rokka.io/www_card_2/f02a70/ibex.jpg" length="6101132" type="image/jpeg" />
          </item>
        <item>
      <title>Accessibility: make your website barrier-free with a11ym!</title>
      <link>https://www.liip.ch/fr/blog/accessibility-with-a11ym</link>
      <guid>https://www.liip.ch/fr/blog/accessibility-with-a11ym</guid>
      <pubDate>Tue, 06 Dec 2016 00:00:00 +0100</pubDate>
<description><![CDATA[<p><em>Accessibility is not only about people with disabilities but also about making your website accessible to search engine robots. This blog post shares our experience with making the website of a famous luxury watchmaker, an important e-commerce application, more accessible. We have built a custom tool, called <a href="https://github.com/liip/TheA11yMachine">The A11y Machine</a>, to help us crawl and test each URL against accessibility and HTML conformance. Less than 100 hours were required to fix a thousand strict errors.</em></p>
<h2>Issues with inaccessible applications</h2>
<p>Accessibility is not just a matter of helping people with disabilities. Of course, it is very important that an application can be used by everyone. But have you ever considered that robots are visitors with numerous disabilities? What kind of robots, you may ask: search engine robots, for instance, like Google, Bing, DuckDuckGo or Baidu. They are totally blind, they have neither mouse nor keyboard, and they must understand your content from the source code alone.</p>
<p>Whilst it is easy to picture a man or a woman with a pushchair having a bad time on public transport, someone colour-blind (a widespread disability) could also have issues browsing the web.</p>
<h3>Several domains are concerned by the accessibility of an application:</h3>
<ul>
<li><strong>Content description</strong> , the use of appropriated HTML tags and attributes help to structure the content of a document (like <code>nav</code>, <code>main</code>, <code>article</code>, <code>aside</code>, <code>footer</code> etc.),</li>
<li><strong>Design of the content</strong> , the use of appropriated colors, contrasts, size of elements, layouts, animations etc.,</li>
<li><strong>SEO</strong> , a content that is understandable by a robot can be well referenced, and search engines can print more content, like associated links or menus (with “Jump to”, or “Related section” etc.),</li>
<li><strong>Development framework</strong> , using ARIA recommendations help to create rich applications with good, clear and relevant practices (<code>aria-hidden</code> to hide an element, <code>aria-describedby</code> to describe objects, all roles like <code>tab</code>, <code>tabpanel</code>, <code>progressbar</code>, <code>alert</code>, <code>dialog</code> etc. to create your own component),</li>
<li><strong>Legal</strong> , where more and more foundations or groups of people attempt to sue companies for not respecting common recommendations. Unfortunately, while this might improve the state of accessibility for Web applications, it is also turning into a juicy market…</li>
</ul>
<p>In other words, from an editor's point of view, making a product accessible is unavoidable, and even beneficial.</p>
<p>Do not think that the mobile market is somehow different. iOS and Android are gently becoming the first screen readers of the market, even for Web content.</p>
<p>However, testing the accessibility conformance is often a matter of money. It is costly because it takes time to check everything. Now, this is no longer the case.</p>
<h2>E-commerce solution of a luxury brand</h2>
<p>Our client, a famous watchmaker, runs an important e-commerce application: more than 16 domains, 10 languages, 1000+ products, and hundreds of thousands of visitors from around the globe.</p>
<p>A simple calculation quickly shows that it is very hard to check the application page by page (or rather, URL by URL). Content can differ according to language, business constraints, available products, localized features etc.</p>
<p>For this reason, we were looking for a tool that does 3 things:</p>
<ol>
<li>Crawls an entire application based on a starting URL,</li>
<li>Tests each URL against pre-defined conformance levels/accessibility rules,</li>
<li>Can be installed locally for privacy concerns.</li>
</ol>
<p>Searching online, we found several awesome tools doing either point 1, or point 2, or point 3, but never all 3 at once. So we decided to develop our own tool, called <a href="https://github.com/liip/TheA11yMachine/">The A11y Machine</a> (<code>a11ym</code> for short). More below.</p>
<h2>Conformance levels</h2>
<p>Several accessibility recommendations exist, like:</p>
<ul>
<li><a href="http://www.w3.org/TR/WCAG20/">W3C Web Content Accessibility Guidelines</a> (WCAG) 2.0, including A, AA and AAA levels ( <a href="http://www.w3.org/TR/UNDERSTANDING-WCAG20/conformance.html#uc-levels-head">understanding levels</a> of conformance),</li>
<li>U.S. <a href="http://www.section508.gov/">Section 508</a> legislation.</li>
</ul>
<p>We might consider the following recommendation too:</p>
<ul>
<li><a href="https://www.w3.org/TR/html/">W3C HTML5 Recommendation</a>.</li>
</ul>
<p>Conforming to the HTML specification guarantees that the document is not broken, which is a good basis.</p>
<p>Our client's goal was to reach the WCAG 2.0 AA conformance level, with HTML recommendation. Some other specific SEO rules have been added, like: Only one <code>h1</code> per page, or no link with an empty target ( <a href="https://github.com/liip/TheA11yMachine#write-your-custom-rules">Learn how to write your own rules</a>).</p>
<h2>Results</h2>
<p>In less than 100 hours, we have been able to fix more than 1300 strict errors and 400 warnings. With a team of 3 motivated developers, it took around 4 days to fix everything (including developing The A11y Machine)!</p>
<p>Given a starting URL, The A11y Machine extracts each URL from that document. For each extracted URL, the same operation is repeated until the maximum number of URLs to compute is reached.</p>
<p>In parallel, several tests are run on each URL. They fall into 2 groups: accessibility (e.g. WCAG) and other (e.g. HTML). Test results are stored as standalone HTML reports. Why standalone? Because it makes it possible to simply attach a report to an email, so that everyone is able to read and interact with it, even offline.</p>
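<p>The crawl described above (start from one URL, extract links, repeat until a cap is reached) is essentially a breadth-first traversal. A minimal sketch, with a stubbed link extractor; the real tool delegates crawling to <code>node-simplecrawler</code>, as detailed further down:</p>

```javascript
// Breadth-first crawl up to maxUrls URLs, given a link extractor.
// extractLinks(url) -> array of URLs found in that document (stubbed here;
// The A11y Machine delegates this to node-simplecrawler).
const crawl = (startUrl, extractLinks, maxUrls) => {
  const seen = new Set([startUrl])
  const queue = [startUrl]
  while (queue.length > 0 && seen.size < maxUrls) {
    const url = queue.shift()
    for (const link of extractLinks(url)) {
      if (!seen.has(link) && seen.size < maxUrls) {
        seen.add(link)
        queue.push(link)
      }
    }
  }
  return [...seen] // every URL to run the accessibility tests against
}

// Tiny stubbed site: / links to /a and /b, /a links back to /
const links = { '/': ['/a', '/b'], '/a': ['/'], '/b': [] }
console.log(crawl('/', u => links[u] || [], 10)) // [ '/', '/a', '/b' ]
```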
<p>A report contains only errors. They are classified into 3 classical categories: error, warning and notice. A CSS selector is provided to better select and analyse the culprit element, in addition to a very complete description and a link to the recommendation. For instance:</p>
<ul>
<li>Error: “This button element does not have a name available to an accessibility API. Valid names are: title attribute, element content”,</li>
<li>Code: <a href="http://www.w3.org/TR/WCAG20-TECHS/H91.html">H91</a>,</li>
<li>Rule name: <code>#BRAND.Principle4.Guideline4_1.4_1_2.H91.Button.Name</code>,</li>
<li>Selector: <code>#reel-collection_product-section-580456752a01b &gt; button:nth-child(1)</code>,</li>
<li>Code extract: <code>&lt;button class="reel__previous btn btn--bare btn--disabled" data-navigate="previous"&gt;</code>.</li>
</ul>
<figure><a href="https://www.liip.ch/content/4-blog/20161206-accessibility-with-a11ym/screenshot_error.png"><img src="https://liip.rokka.io/www_inarticle/8566da879f14b62b239ee9246ae06a902a037454/screenshot-error.jpg" alt="A typical error message"></a></figure>
<p>A typical error message</p>
<p>With all this information at hand, it is really easy for a developer to target the element and fix it. Accessibility recommendations are not always hard to apply; they are mostly hard and time-consuming to detect. This tool really eases that step, thus reducing the cost of making an application accessible.</p>
<h2>Automated testing and reports</h2>
<p>Every Monday, <strong>a11ym</strong>  computes a new report. It crawls a set of pre-defined URLs that are important for our customer, and applies all the tests on these URLs.</p>
<p>A board displays the evolution over time. For obvious confidentiality reasons, we cannot display anything about our client. Consequently, the following screenshots are reports from our own website: <a href="https://www.liip.ch/">liip.ch</a>.</p>
<figure><a href="https://www.liip.ch/content/4-blog/20161206-accessibility-with-a11ym/dashboard.jpg"><img src="https://liip.rokka.io/www_inarticle/12f45ca1e769b83105bae35ac4363c18cdc79a83/dashboard.jpg" alt="Example of a typical a11ym dashboard."></a></figure>
<p>Example of a typical a11ym dashboard.</p>
<p>We regularly check this board to see if we have introduced a regression or not. When developing, we can also run The A11y Machine on our local server and check if everything conforms. It takes less than 10 minutes to check hundreds of URLs.</p>
<p>The following screenshot is the index of all reports per URL.</p>
<figure><a href="https://www.liip.ch/content/4-blog/20161206-accessibility-with-a11ym/index.png"><img src="https://liip.rokka.io/www_inarticle/f4bcee02c7b15dd17fe17a2b5bde97e67c65d310/index.jpg" alt="A typical index of all reports generated by a11ym."></a></figure>
<p>A typical index of all reports generated by a11ym.</p>
<p>The following screenshot is a detailed report for one specific URL.</p>
<figure><a href="https://www.liip.ch/content/4-blog/20161206-accessibility-with-a11ym/report.png"><img src="https://liip.rokka.io/www_inarticle/5a54899df271c2f8c270776b6a84c3800ad6f36d/report.jpg" alt="A typical report generated by a11ym"></a></figure>
<p>A typical report generated by a11ym</p>
<h2>Build upon awesome tools</h2>
<p>The A11y Machine is mostly an orchestration of several tools:</p>
<ul>
<li><a href="https://github.com/cgiffard/node-simplecrawler/"><code>node-simplecrawler</code></a> to crawl the Web application, but a custom exploration algorithm has been developed to detect various errors with the lowest similarities (all products may have the same portion of errors because the HTML structure is rather the same, so we need diversity to target more relevant errors). This exploration algorithm also supports parallelism,</li>
<li><a href="http://phantomjs.org/">PhantomJS</a> to open a headless browser and execute <a href="https://github.com/squizlabs/HTML_CodeSniffer"><code>HTML_CodeSniffer</code></a> in order to check accessibility conformance. This step is semi-automated with the help of <a href="https://github.com/nature/pa11y"><code>pa11y</code></a>,</li>
<li><a href="http://validator.github.io/validator/">The Nu HTML Checker</a> for the HTML conformance.</li>
</ul>
<p><a href="https://github.com/liip/TheA11yMachine#how-does-it-work">Learn more about how these tools are used and orchestrated</a>.</p>
<h2>We ❤️ Open Source</h2>
<p><a href="https://github.com/liip/TheA11yMachine#authors-and-license">The A11y Machine is open source, under a BSD-3 license</a>. It is developed on GitHub, in the <a href="https://github.com/liip/TheA11yMachine"><code>liip/TheA11yMachine</code></a> repository.</p>
<p>If you need anything or would like to contribute, you will be very welcome. Let's break the barriers together and make the Web more accessible!</p>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/f4ff61d52897d9421bc66263c2b32d8f60cba5b5/admin-ajax-2.jpg" length="76790" type="image/png" />
          </item>
        <item>
      <title>Hacking with Particle server and spark firmware</title>
      <link>https://www.liip.ch/fr/blog/hacking-particle-server-spark-firmware</link>
      <guid>https://www.liip.ch/fr/blog/hacking-particle-server-spark-firmware</guid>
      <pubDate>Fri, 02 Sep 2016 00:00:00 +0200</pubDate>
      <description><![CDATA[<h2>The particle server</h2>
<p>In my previous <a href="https://blog.liip.ch/archive/2016/08/03/music-dance-web-technologies.html">blog post</a>, I wrote about the concept of <a href="https://github.com/JordanAssayah/MVM">my project</a> using Particle. Now I will explain what I had to do to increase the data transfer rate of my modules (remember, my goal is to get the transfer interval as close as possible to 1 ms).</p>
<p>First, I installed the local API server (<a href="https://github.com/spark/spark-server">github.com/spark/spark-server</a>).</p>
<p>Then I had to register each of my Photons' public keys on my server, and the server's public key on my Photons, using this command:</p>
<pre><code>particle keys server local_server_key.pub.pem IP_ADDRESS</code></pre>
<p>Then, I launched the server to check that my Photons were responding with something like this:</p>
<pre><code>Connection from: 192.168.1.159, connId: 1
on ready { coreID: '48ff6a065067555008342387',
 ip: '192.168.1.159',
 product_id: 65535,
 firmware_version: 65535,
 cache_key: undefined }
Core online!</code></pre>
<p>From here everything was working fine, but I also needed the JS library to get data via OAuth. The thing is that you have to do a lot of configuration to make it work, and that was not the goal of this project: I had to test as quickly as possible. So I did what you usually should not do with a library installed via npm.</p>
<p>In the file “node_modules/particle-api-js/lib/Defaults.js” I replaced:</p>
<pre><code>'use strict';

Object.defineProperty(exports, "__esModule", {
    value: true
});

exports.default = {
    baseUrl: 'https://api.particle.io',
    clientSecret: 'particle-api',
    clientId: 'particle-api',
    tokenDuration: 7776000 
};

module.exports = exports['default'];
//# sourceMappingURL=Defaults.js.map</code></pre>
<p>With:</p>
<pre><code>'use strict';

Object.defineProperty(exports, "__esModule", {
    value: true
});

exports.default = {
    baseUrl: 'https://localhost:8080',
    clientSecret: 'particle',
    clientId: 'particle',
    tokenDuration: 7776000 
};

module.exports = exports['default'];
//# sourceMappingURL=Defaults.js.map</code></pre>
<p>And then you have a server where you can create OAuth user accounts and use them from a local app.</p>
<h2>The spark firmware</h2>
<p>The second part is about the firmware of the Photon. In the spark protocol library, I had to remove some <a href="https://github.com/spark/firmware/blob/v0.5.2/communication/src/spark_protocol.cpp#L483">lines of code</a> that enforced a limit on the data rate per second.</p>
<p>So I removed these lines:</p>
<pre><code>if (now - recent_event_ticks[evt_tick_idx] &lt; 1000) {
   // exceeded allowable burst of 4 events per second
   return false;
}</code></pre>
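<p>To see what that removed check was doing, the firmware's burst limit (at most 4 events per rolling second) can be sketched in JavaScript. This is illustrative only, not the actual C++ implementation:</p>

```javascript
// Sketch of the burst limiter the removed firmware check implements:
// reject an event if the burst-th most recent one is less than windowMs old.
const makeLimiter = (burst = 4, windowMs = 1000) => {
  const recentTicks = [] // timestamps of the last `burst` accepted events
  return (now) => {
    if (recentTicks.length === burst && now - recentTicks[0] < windowMs) {
      return false // exceeded allowable burst of 4 events per second
    }
    if (recentTicks.length === burst) recentTicks.shift()
    recentTicks.push(now)
    return true
  }
}

const allow = makeLimiter()
console.log([0, 100, 200, 300, 400, 1100].map(t => allow(t)))
// -> [true, true, true, true, false, true]
```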
<p>And finally, the longest part of the whole thing was to build a complete, clean firmware and upload it to the Photon without breaking it (a bad firmware uploaded to the device can burn its electronic components).</p>
<p>So you have to install <a href="http://dfu-util.sourceforge.net/">dfu-util</a>, put your Photon in DFU mode and follow these steps:</p>
<ul>
<li>Within the “firmware/main” folder, type </li>
</ul>
<pre><code>make clean all PLATFORM=photon program-dfu</code></pre>
<p>It will generate the new firmware and will upload it to the photon.</p>
<ul>
<li>Restart your local server</li>
<li>Test the code you want to use and see a very big difference (about 20 / 30 [ms] to send, receive and process data; before it was 70 / 80 [ms])</li>
</ul>
<p>From here you can build what you want: you just have an idea in your head, and you can transform your “local” projects into “wireless” projects ;)</p>
<p>In conclusion, it's a bit complicated to configure the environment to use the API on a local network, but it lets you remove all unnecessary processing and get the best performance.</p>
<p>The web API that <a href="https://www.particle.io/">particle.io</a> offers is really great! It's really simple to use. Another alternative would be to use TCP / UDP protocols and launch a server listening on a defined port (about 50 / 60 [ms] to send, receive and process data, but with some lag).</p>
<p>Another goal of this post was to record a little demo video. Unfortunately, my app is not finished, but I'll release the video on my <a href="https://www.youtube.com/channel/UCLQGLUROLHiRdpLe-wRwIsQ">YouTube channel</a> when my app works! So stay tuned ;)</p>
                  <enclosure url="http://liip.rokka.io/www_card_2/4be82c44ac13eb673812adc58f66189614905498/12711026-1154414194578531-30521974511740211-o.jpg" length="206242" type="image/jpeg" />
          </item>
        <item>
      <title>Experimenting with React Create App and Google Sheets API</title>
      <link>https://www.liip.ch/fr/blog/experimenting-with-react-create-app-and-google-apis</link>
      <guid>https://www.liip.ch/fr/blog/experimenting-with-react-create-app-and-google-apis</guid>
      <pubDate>Wed, 24 Aug 2016 00:00:00 +0200</pubDate>
<description><![CDATA[<p>Since the opening of our Lausanne office, a person who shall remain anonymous has collected the most epic — from weirdest to funniest — statements made in the open space, filling a constantly growing database of quotes. From post-its to Google Docs to Google Spreadsheets, it definitely deserved a better interface to read, filter and… vote!</p>
<h3>Setting up the development environment</h3>
<p>After having done some projects with it, it was clear React would be a good choice. But unlike on previous occasions, I did not want to waste time setting everything up; experiments are about coding, right? There are plenty of React boilerplates out there, and whereas some are great, most include too many tools and dependencies.</p>
<p>Luckily, Facebook recently released their own <a href="https://github.com/facebookincubator/create-react-app">React app generator</a>, which is probably the best attempt so far. The packaging is handled by webpack, ES6 features are available thanks to Babel, and your code stays clean and consistent thanks to ESLint. Oh, and Autoprefixer handles any CSS vendor prefixes required here and there. No router, no Redux, no tests, no server-side rendering, no CSS preprocessor, …</p>
<p>After generating an app, you end up with a rather clean tree and two main scripts in your <em>package.json</em>:</p>
<ul>
<li><code>npm start</code> to run the development server with live reload (≠  <em>hot</em> reload)</li>
<li><code>npm build</code> to package everything in a sub-folder, ready to be deployed to Github pages or anywhere you like</li>
</ul>
<p>Their webpack setup supports the import of common files such as images, videos and even JSON. The build task is pretty smart and uses the <em>homepage</em> field of your <em>package.json</em> to correctly generate paths to your assets. And finally, the coolest feature: the ability to escape the generator lock-in by simply running <code>npm run eject</code>. All the dependencies, configuration and scripts will be moved to your project, ready to be customized.</p>
<h3>Interacting with the spreadsheet</h3>
<p>Let's talk a bit about the app now. As the quotes were already in Google Spreadsheets, I thought it would be fun to experiment a bit with its API. The documentation is definitely not the best I've seen so far, but the examples guided me enough to finally be able to read and write the spreadsheet.</p>
<p>I used the official <a href="https://developers.google.com/sheets/">Google Sheets API</a> version 4, initialized through their <a href="https://developers.google.com/api-client-library/javascript/">JavaScript API Client Library</a>. My first attempt was to connect with a token, but this solution didn't allow me to write into the spreadsheet. I decided to move to OAuth, which works like a charm for both reading and writing. It's actually even more useful, as the spreadsheet's sharing policy is respected: people not authorized to interact with the document simply get an error. Better than Basic Auth, isn't it?</p>
<p>The load time might be the biggest issue with this solution. It takes a bit more than 2 seconds for the quotes to show up, and most of that time is spent loading and initializing the API. Nothing unbearable though.</p>
<p>Most of the interactions with the API are done <a href="https://github.com/LeBenLeBen/quotes/blob/master/src/helpers/spreadsheet.js">in this file</a>, if you're interested in the code.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/b4de9a0b58326ba64265db3be5300b0ffac2a02d/quotes-vote.jpg" alt=""></figure>
<h3>The voting system</h3>
<p>Now that the quotes are fetched and displayed, I thought it would be cool to let people vote. Saving the number into the spreadsheet is easy; keeping track of who liked what is a bit more complex. It wasn't worth the effort to build a secure way to validate the uniqueness of likes, so I decided to save this information in the browser's LocalStorage. Of course this means people can vote again if they want to, but the audience is pretty small and there's no point in rigging the results, so…</p>
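<p>The LocalStorage bookkeeping can be isolated behind a tiny helper. Here is a sketch of the idea; <code>hasLiked</code> and <code>recordLike</code> are hypothetical names rather than the app's actual API, and the storage is injected so the logic also runs outside a browser (in the app it would be <code>window.localStorage</code>):</p>

```javascript
// Track which quote IDs this browser has liked.
// storage is anything exposing getItem/setItem, like window.localStorage.
const makeLikes = (storage, key = 'likedQuotes') => {
  const load = () => JSON.parse(storage.getItem(key) || '[]')
  return {
    hasLiked: (id) => load().includes(id),
    recordLike: (id) => {
      const ids = load()
      if (!ids.includes(id)) storage.setItem(key, JSON.stringify([...ids, id]))
    }
  }
}

// In-memory stand-in for window.localStorage
const memoryStorage = (data = {}) => ({
  getItem: (k) => (k in data ? data[k] : null),
  setItem: (k, v) => { data[k] = v }
})

const likes = makeLikes(memoryStorage())
likes.recordLike('quote-42')
console.log(likes.hasLiked('quote-42')) // true
```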
<p>Here we are: a simple React app with authentication, using an uncommon back-end to save its data, all hosted for free and built in a couple of hours. Pretty cool, I'd say. Hope you enjoyed it!</p>
<p>You can experiment with the <a href="https://lebenleben.github.io/quotes/">demo app</a> even if likes won't save (on purpose 😈) and browse/fork the <a href="https://github.com/lebenleben/quotes">source code on GitHub</a>.</p>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/22096f8319920663572f24bc052fe3100c659314/quotes.jpg" length="56927" type="image/jpeg" />
          </item>
        <item>
      <title>When music and dance meet web technologies</title>
      <link>https://www.liip.ch/fr/blog/music-dance-web-technologies</link>
      <guid>https://www.liip.ch/fr/blog/music-dance-web-technologies</guid>
      <pubDate>Wed, 03 Aug 2016 00:00:00 +0200</pubDate>
      <description><![CDATA[<h2>Who am I ?</h2>
<p>I'm Jordan Assayah, I am 18 years old and I am currently working at Liip as a trainee from February 1st to August 31st. My main job is to work in a team and develop new functionality for the Zebra project. I'm also a dancer (especially a tap dancer) and I love music. Every year for the past 4 or 5 years, I have gone to the tap dance world championships to represent Switzerland.</p>
<h2>Personal project</h2>
<p>After 4 months of really nice integration into the team, I asked David J. if it was possible to restart a project I had been developing at home. The main goal of the project is to produce music over WiFi with small, cheap modules (because I'm a student) that you can put on your shoes, and to control the whole thing from a web application where you can choose sounds for the sensors, the velocity, the volume, etc.</p>
<p>I had been searching for good, cheap modules for a long time, and I finally found something: <a href="http://www.particle.io">ParticleIO</a>. They sell Arduino-compatible WiFi and GSM modules: the perfect microcontroller for the project. So I bought two of <a href="https://www.particle.io/products/hardware/photon-wifi-dev-kit">those</a> and tried to use them. The cool thing about it is that there are multiple tools: a Particle CLI, a dashboard to sniff data, an online IDE (a little bit weird, I know), a Particle JS library, iOS and Android SDKs, firmware documentation and, best of all, a <a href="https://github.com/spark/spark-server">local REST API server</a>.</p>
<p>So far, I've only hacked the local server a little bit to let the WiFi modules send data much faster (about 70 / 80 [ms]). This lets me make a lot of queries per second and allows the app to be a “real-time app”. I have started, but not yet completed, the interface that lets the user choose sounds, activate / deactivate sensors, etc.</p>
<h2>Technologies</h2>
<p>As this is a personal project, I wanted to learn new frameworks and new ways to program, so I searched for a framework or library that lets me compose my application out of multiple components (e.g. the audio player as a component). After some days I found a JS library: <a href="http://vuejs.org">VueJS</a>. VueJS is like React but doesn't use a virtual DOM; it is focused on the view layer only. VueJS uses the actual DOM as the template and keeps references to actual nodes for data bindings, which limits VueJS to environments where a DOM is present.</p>
<p>This is a simple example of a Vue component with 3 parts: the script, the template and the style.<figure><img src="https://liip.rokka.io/www_inarticle/90768a104adce1e82957485ab648b8593c0f979d/vue-component.jpg" alt="vue-component"></figure></p>
<h2>The next step</h2>
<p>As I started my project only 2 months ago, I don't think I can finish it before the end of my internship, knowing that I work on it only every Monday. I will maybe do a little demo to show what I've done so far.</p>
<p>I would like to get to the point where I can use the application with, at the very least, the possibility of playing sounds. The final goal is to have a complete and simple interface with the possibility to record “songs” and manage sounds from a timeline.</p>
<p>At the end of my internship, I'll publish a new blog post about the final stage of my project, and especially the REST API server. Finally, if you want to contribute to the project, you can see the code on GitHub: <a href="https://github.com/JordanAssayah/MVM">github.com/JordanAssayah/MVM</a>.</p>
<figure><a href="https://www.liip.ch/content/4-blog/20160803-music-dance-web-technologies/12711026_1154414194578531_30521974511740211_o.jpg"><img src="https://liip.rokka.io/www_inarticle/3ca7543ef1deb4c1101073928f50e53234d3a10c/12711026-1154414194578531-30521974511740211-o-1024x682.jpg" alt="Jordan dancing"></a></figure>
<p>Me dancing at the opening of the ThinkSpace in Lausanne</p>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/4be82c44ac13eb673812adc58f66189614905498/12711026-1154414194578531-30521974511740211-o.jpg" length="206242" type="image/jpeg" />
          </item>
        <item>
      <title>What&#8217;s your twitter mood?</title>
      <link>https://www.liip.ch/fr/blog/whats-your-twitter-mood</link>
      <guid>https://www.liip.ch/fr/blog/whats-your-twitter-mood</guid>
      <pubDate>Tue, 07 Jun 2016 00:00:00 +0200</pubDate>
      <description><![CDATA[<h1>The idea</h1>
<ul>
<li>Analyze tweets of a user for being positive, negative or neutral using machine learning techniques</li>
<li>Show how the mood of your tweets changes over time</li>
</ul>
<h2>Why?</h2>
<ul>
<li>Fun way to experiment with Sentiment Analysis</li>
<li>Experiment with language detection</li>
</ul>
<h2>How</h2>
<h3>Gathering data</h3>
<p>We analyzed tweets from Switzerland, England and Brazil. We took extra care to make sure our model does well on Swiss-German text.</p>
<h3>Make awesome model in node</h3>
<p>We created a fast custom Natural Language Processor in node.js. Why node? It has very good run-time performance when dealing with lots and lots of strings. We used unsupervised machine learning techniques to teach our model Swiss-German and English writing. Once we had a working model, we added a couple of other models using Bayesian inference to create an ensemble: <a href="https://en.wikipedia.org/wiki/Ensemble_learning">en.wikipedia.org/wiki/Ensemble_learning</a></p>
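<p>The ensemble step (combining several classifiers' verdicts into one) can be illustrated with a simple weighted vote. This is a toy sketch only; our actual model combines its classifiers with Bayesian inference, which this does not reproduce, and the stub classifiers below are purely illustrative:</p>

```javascript
// Toy weighted-vote ensemble: each model labels the tweet,
// and the label with the highest total weight wins.
const ensemble = (models) => (tweet) => {
  const scores = {}
  for (const { classify, weight } of models) {
    const label = classify(tweet)
    scores[label] = (scores[label] || 0) + weight
  }
  // Pick the label with the highest accumulated weight
  return Object.entries(scores).sort((a, b) => b[1] - a[1])[0][0]
}

// Stub classifiers standing in for the real language models
const classifyTweet = ensemble([
  { weight: 3, classify: t => t.includes(':)') ? 'positive' : 'neutral' },
  { weight: 1, classify: t => t.includes('hate') ? 'negative' : 'neutral' },
  { weight: 1, classify: () => 'neutral' }
])

console.log(classifyTweet('great day :)')) // 'positive'
```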
<h3>Make nice front-end</h3>
<figure><a href="https://www.liip.ch/content/4-blog/20160607-whats-your-twitter-mood/portugese-sentiment-analysys.png"><img src="https://liip.rokka.io/www_inarticle/d311aad3e328a315dc449563fc4197ff6e4d0146/portugese-sentiment-analysys.jpg" alt="portugese sentiment analysys"></a></figure>
<p>Once we got our server working, we thought about adding a better UI. We asked our user experience specialist Laura to suggest improvements. See for yourself:</p>
<figure><a href="https://www.liip.ch/content/4-blog/20160607-whats-your-twitter-mood/mood-detector-graph11.png"><img src="https://liip.rokka.io/www_inarticle/46a6e9aa682ef26d910255776568662a8fc1f8bc/mood-detector-graph11-1024x536.jpg" alt="mood-detector-graph1"></a></figure>
<h2>Problems and learnings</h2>
<h3>Language detection is needed to use the right sentiment model</h3>
<p>Designing a model for Swiss German is especially hard: the language incorporates German with a lot of French and Italian words, and the spelling of words changes from canton to canton. Add the fact that most people writing tweets are forced to use abbreviations, and you get the whole picture of the challenge.</p>
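<p>A cheap baseline for picking the right sentiment model is stopword counting: count how many of each language's most frequent function words appear in the text. The stopword lists below are illustrative stubs, not Liip's actual lists, and real detection (Swiss German in particular) needs far more data than this.</p>

```javascript
// Toy language detector: pick the language whose stopwords match most.
const STOPWORDS = {
  en: ['the', 'and', 'is', 'you', 'not'],
  de: ['und', 'ist', 'nicht', 'der', 'die'],
  fr: ['et', 'est', 'pas', 'le', 'la'],
};

function detectLanguage(text) {
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  let best = null, bestHits = -1;
  for (const [lang, stops] of Object.entries(STOPWORDS)) {
    const hits = words.filter(w => stops.includes(w)).length;
    if (hits > bestHits) { bestHits = hits; best = lang; }
  }
  return best;
}

console.log(detectLanguage('the weather is not nice')); // 'en'
```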
<h3>An accurate model needs a lot of data</h3>
<p>In order to get a good result, we needed to incorporate data from various people of different nationalities. The good thing is that the more you use our model, the more accurate it gets.</p>
<h3>Training data is available</h3>
<p>One of the problems is that irony and sarcasm are hard for humans to understand, especially in short tweets. So they are also hard for a machine.</p>
<h2>If you want to play with our results in this machine learning experiment:</h2>
<p><a href="https://twittersentiment.liip.ch">twittersentiment.liip.ch</a></p>
<p>I would like to thank Andrey Poplavskiy for his &#8220;css love&#8221;, and Adrian Philipp for his huge contribution and encouragement on this project.</p>
<p>PS.</p>
<p>Some comments that we received were not so nice, but as always we are happy to receive <strong>any</strong> feedback.</p>
<figure><a href="https://www.liip.ch/content/4-blog/20160607-whats-your-twitter-mood/twitter-mood-not-so-nice.png"><img src="https://liip.rokka.io/www_inarticle/8225ce1a57a89d2bf46dba226f032738b9f82c10/twitter-mood-not-so-nice.jpg" alt="twitter-mood-not-so-nice"></a></figure>]]></description>
          </item>
        <item>
      <title>Swiss Confederation, the Styleguide version 3</title>
      <link>https://www.liip.ch/fr/blog/swiss-confederation-the-styleguide-version-3</link>
      <guid>https://www.liip.ch/fr/blog/swiss-confederation-the-styleguide-version-3</guid>
      <pubDate>Wed, 11 May 2016 00:00:00 +0200</pubDate>
<description><![CDATA[<h2>What is a StyleGuide for?</h2>
<p>It is a long-term and flexible solution that lists and exemplifies the web components and tools used to create a website. For instance, it explains how to use each component, how components should appear on the web and how they interact. It supports developers during integration and is useful for designers, as it allows them to keep a general view of the style and the system's functionality. A StyleGuide must obviously be kept up to date whenever the Corporate Design or Corporate Identity is modified.</p>
<h2>Why is it useful for the Swiss Confederation?</h2>
<p>The Swiss Confederation is split into multiple departments, each of them owning one or more websites. The StyleGuide gives them a common ground for the creation of their websites, while ensuring a coherent visual identity for the user. The StyleGuide does not only offer web components certified AA according to the <a href="https://www.w3.org/TR/WCAG20/">Web Content Accessibility Guidelines' recommendations</a> on accessibility, it also provides additional information. The StyleGuide aims at ensuring wide access to information for all kinds of users, including people with disabilities (for instance sight or hearing impairments). These recommendations are also useful to all users.</p>
<h2>Swiss Confederation Web Guideline 3</h2>
<p>The <a href="http://swiss.github.io/styleguide/en/index.html">Swiss Confederation web guidelines</a> define the graphic guidelines of the Swiss Confederation on the web. They ensure coherence across the different websites developed under the admin.ch domain.</p>
<h3>Innovations of version 3</h3>
<ol>
<li>The system generating the StyleGuide changed: Hologram (coded in Ruby) was replaced by <a href="https://fbrctr.github.io/">Fabricator</a> (identical, but coded in <a href="https://nodejs.org/en/">Node</a>), which eases installation and development on Windows</li>
<li>It is translated into the Swiss national languages</li>
<li>The components' accessibility is improved (AA)</li>
<li>Problems raised on GitHub have been solved</li>
</ol>
<h2>Who are the users?</h2>
<p>The StyleGuide is meant to be used by internal federal project managers and external service providers. The code is open source: everyone can use it, modify it, solve issues or propose improvements. The StyleGuide is very convenient to use in any project and easy to install with <a href="https://www.npmjs.com/">NPM</a> or <a href="http://bower.io/">Bower</a>. It is also possible to download an archive or clone the project from GitHub. The whole <a href="https://github.com/swiss/styleguide/">installation process</a> is documented on GitHub.</p>
<p>The latest version of the StyleGuide is built on <a href="http://fbrctr.github.io/">Fabricator</a>. It is automatically generated in several languages with Gulp, which also gathers and optimises all the files the framework needs to work properly. The documentation is written in Markdown, and the components are dynamic Handlebars templates. Translation is performed by a custom Handlebars helper that references translation files in YAML.</p>]]></description>
          </item>
        <item>
      <title>Hackday React Native for Android</title>
      <link>https://www.liip.ch/fr/blog/hackday-react-native-android</link>
      <guid>https://www.liip.ch/fr/blog/hackday-react-native-android</guid>
      <pubDate>Thu, 08 Oct 2015 00:00:00 +0200</pubDate>
<description><![CDATA[<p>When React Native for Android came out, I was excited to investigate it at one of Liip's monthly innovation days. Liip had already developed a React Native app for iOS, and we wanted to know how it works on Android. We were Andrey, Germain, Lukasz and me. Germain is currently working on a cross-platform app written with Xamarin.</p>
<p>For this hackday we tried to port <a href="https://github.com/liip/guess-the-liiper-ios">an existing React Native iOS app</a> to Android.</p>
<p>TL;DR: We are waiting for WebViews to be supported. See the <a href="https://github.com/liip/guess-the-liiper-ios/pull/29/files">pull request</a> for changes. We didn't need to dive deep into Android APIs like XML Layouts for views.</p>
<h3>How code sharing works</h3>
<p>React Native has a packager which is responsible for collecting and loading all JavaScript files and resources. To avoid explicitly checking for the current platform with <em>if/else</em> blocks, the packager ignores all files ending in .android.js on iOS and all files ending in .ios.js on Android. The way to develop platform-specific components is: first divide the app into small components, each component in its own file; then implement a platform-specific version of any component that works differently.</p>
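<p>The lookup order described above can be simulated in a few lines. React Native's packager does this at bundle time; this sketch just illustrates the resolution rule, and the file names are hypothetical.</p>

```javascript
// Simulate the packager's platform-specific file resolution:
// prefer Name.<platform>.js, fall back to the shared Name.js.
function resolveComponent(name, platform, availableFiles) {
  const platformFile = `${name}.${platform}.js`;
  const sharedFile = `${name}.js`;
  if (availableFiles.includes(platformFile)) return platformFile;
  if (availableFiles.includes(sharedFile)) return sharedFile;
  throw new Error(`No implementation of ${name} for ${platform}`);
}

const files = ['ProgressCircle.ios.js', 'ProgressCircle.android.js', 'Button.js'];
console.log(resolveComponent('ProgressCircle', 'android', files)); // 'ProgressCircle.android.js'
console.log(resolveComponent('Button', 'android', files));         // 'Button.js'
```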
<figure><a href="https://www.liip.ch/content/4-blog/20151008-hackday-react-native-android/guess-android.gif"><img src="https://liip.rokka.io/www_inarticle/b3e35a72739b583231ed822accd71287f708c8f9/guess-android.jpg" alt="Guess the Liiper on Android – Progress circle animation"></a></figure>]]></description>
          </item>
    
  </channel>
</rss>
