<?xml version="1.0" encoding="utf-8"?>
<!-- generator="Kirby" -->
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">

  <channel>
    <title>Mot-cl&#233;: services &#183; Blog &#183; Liip</title>
    <link>https://www.liip.ch/fr/blog/tags/services</link>
    <generator>Kirby</generator>
    <lastBuildDate>Tue, 05 Dec 2017 00:00:00 +0100</lastBuildDate>
    <atom:link href="https://www.liip.ch" rel="self" type="application/rss+xml" />

        <description>Articles du blog Liip avec le mot-cl&#233; &#8220;services&#8221;</description>
    
        <language>fr</language>
    
        <item>
      <title>The road ahead for iterative practices in Swiss government</title>
      <link>https://www.liip.ch/fr/blog/iterative-practices-for-government-the-road-ahead-in-switzerland</link>
      <guid>https://www.liip.ch/fr/blog/iterative-practices-for-government-the-road-ahead-in-switzerland</guid>
      <pubDate>Tue, 05 Dec 2017 00:00:00 +0100</pubDate>
<description><![CDATA[<p>When I say ‹relevant›, I mean those government processes, regulations and laws which are relevant to drafting, procuring, building and running government digital services – and which, in that respect, probably generate uncertainty as to whether iterative methods can be applied without violating existing processes or laws.</p>
<p><strong>Why government can[not] build digital services iteratively</strong></p>
<p>I wrote a thesis on the topic and tried to condense my findings into a <strong>20-minute presentation</strong>, available <a href="https://speakerdeck.com/andreasamsler/why-government-can-not-build-digital-services-iteratively">as a PDF on speakerdeck</a> and <a href="http://liip.to/iterate">as a Google presentation</a>, which also includes my speaker notes.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/9828a7/processes-methods-iteration-in-government.jpg" alt=""></figure>
<p>(<a href="https://www.liip.ch/content/4-blog/20171205-iterative-practices-for-government-the-road-ahead-in-switzerland/dna-of-government-and-iterative-methods.jpg?1512490464">Old graphic</a>, published until 5 January 2018.)</p>
<p>My thesis was accepted, but I will spend some time until the end of 2017 revising it before it is published in early 2018. I will add it to this blog post when it's ready.</p>
<p><strong>What is your experience with iterating in government?</strong></p>
<p>I am grateful for any feedback on the topic. Please get in touch with me via the following channels:</p>
<ul>
<li><a href="https://twitter.com/andreasamsler">@andreasamsler</a></li>
<li><a href="mailto:&#97;&#x6e;&#100;&#114;&#x65;&#97;&#115;&#46;&#97;&#109;&#x73;&#108;&#x65;&#x72;&#64;&#108;&#105;&#105;&#112;&#46;&#99;&#104;">andreas.amsler@liip.ch</a></li>
</ul>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/9828a7/processes-methods-iteration-in-government.jpg" length="147361" type="image/jpeg" />
          </item>
        <item>
      <title>Multi-Device Interactions &#8211; Part 3 : The Canvas</title>
      <link>https://www.liip.ch/fr/blog/multi-device-interactions-part-3-the-canvas</link>
      <guid>https://www.liip.ch/fr/blog/multi-device-interactions-part-3-the-canvas</guid>
      <pubDate>Tue, 28 Jan 2014 00:00:00 +0100</pubDate>
<description><![CDATA[<p>This is the final part of the blog post series on Multi-Device Interactions. Previously, I outlined the second-screen trend in the TV industry ( <a href="https://blog.liip.ch/archive/2013/12/20/multi-device-interactions-part-1-the-second-screen.html">Part 1</a>) and introduced some underlying models of our multi-device world ( <a href="https://blog.liip.ch/archive/2014/01/13/multi-device-interactions-part-2-the-models.html">Part 2</a>).</p>
<p>In this blog post we (finally) focus on the practicalities of Multi-Device Interaction Design. It has indeed become a challenge for User Experience Designers to develop solutions that account for the multi-device behaviour of today's users. As mentioned earlier, we have developed a canvas to think through and design multi-device interactions. The Multi-Device Interaction Canvas (MDIC) is a simple, modifiable canvas for mapping multi-device use cases. It is based on the theoretical models we presented in the previous blog <a href="https://blog.liip.ch/archive/2014/01/13/multi-device-interactions-part-2-the-models.html">posts</a>.</p>
<p>At its core, it captures three important factors:</p>
<ul>
<li>Interactions (with Devices)</li>
<li>User Tasks</li>
<li>Context</li>
</ul>
<h2>Activity mapping with the canvas</h2>
<figure><img src="https://liip.rokka.io/www_inarticle/df463adc514cff1736ca0bae42fdec326d02452b/mdic-tool-example.jpg" alt=""></figure>
<p>Have a look at the canvas example. As you can see, the “interaction” zone forms the core of the MDIC. In it, you diagram the different device interactions that can occur. Besides the regular interactions with common devices such as a smartphone, tablet, laptop or TV, we suggest two further options. Throughout the day, we might encounter public screens, for example at the train station or at school. The “other interactions” section is reserved for any other object or device you interact with that might also grab your attention – be it auditory, visual or haptic. Imagine you are in your kitchen and, besides your tablet with the recipe, you use a knife to prepare your ingredients. Driving to work by car (steering wheel) could be another example – practically any activity that does not fit a specific screen but still occupies your sensory channels. List the user's main activity or a short description at the top.</p>
<p>We have provided an example. The legend in the left corner shows the different interaction types (presented below).</p>
<p>A task is either unfinished or completed. Think of reading your favorite news on your tablet for a couple of minutes before moving on to the next task. In this case you would assign filled bullets to the activity, to state that it started and stopped at defined times. Conversely, if an action starts on one device but shifts to another device or gets interrupted, you use open bullets in between. It is that easy.</p>
<p>Dashed lines signal a device shift or complementary usage for the same task. Maybe you listen to music on your tablet and then switch to your smartphone when leaving the house – use dashed lines for this, and also whenever a task is shared (complementary). For example, in our scenario the TV ignites a search for a related actor on Daniel's smartphone: a complementary task.</p>
<p>If your persona engages in multi-tasking, more continuous lines appear on top of each other. The thicker a line, the more attention is directed towards this specific activity.</p>
<p>So far so good. What about the context? The bottom section serves this purpose. As the user's location changes, so do his interactions with the devices. We have based our contextual dimensions on the social presence of other people and on whether privacy (private vs. public) is given. We also include mood types and location specifications. Feel free to adapt the contextual dimensions.</p>
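For readers who like to formalize things, the mapping described above can be sketched as a tiny data model. This is only an illustrative sketch: the class and field names are our own invention for this post, not part of the MDIC toolkit.

```python
from dataclasses import dataclass, field
from enum import Enum

class Bullet(Enum):
    FILLED = "task started or stopped at a defined time"
    OPEN = "task shifted to another device or was interrupted"

class LineStyle(Enum):
    CONTINUOUS = "regular interaction with a single device"
    DASHED = "device shift or complementary usage"

@dataclass
class Interaction:
    device: str         # e.g. "smartphone", "tablet", "laptop", "TV", "public screen", "other"
    start: Bullet       # bullet at the beginning of the activity path
    end: Bullet         # bullet at the end of the activity path
    style: LineStyle
    attention: int = 1  # line thickness: 1 (low) to 3 (high attention)

@dataclass
class CanvasActivity:
    description: str    # the user's main activity, listed on top of the canvas
    context: dict       # bottom section: social presence, privacy, mood, location
    interactions: list = field(default_factory=list)

# Example: music starts on the tablet, then shifts to the smartphone
activity = CanvasActivity(
    description="Listen to music while getting ready to leave",
    context={"location": "home", "privacy": "private", "mood": "relaxed"},
)
activity.interactions.append(
    Interaction("tablet", Bullet.FILLED, Bullet.OPEN, LineStyle.DASHED, attention=2))
activity.interactions.append(
    Interaction("smartphone", Bullet.OPEN, Bullet.FILLED, LineStyle.DASHED, attention=2))
```

The open/filled bullets and dashed line encode exactly the shift described above: the task starts on one device, is interrupted there, and completes on another.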
<h2>Think Multi-Device Interaction through the Canvas</h2>
<p>A simple rule of thumb: never have more than four tasks in parallel. Keep in mind that the user's attention is very limited. Numerous problems can be spotted here. For example, because our user switches devices many times, the main activity gets disrupted. Look for disruptive factors. How long does it take a user to finish a task – basically, the length of an activity path from open to filled bullet? What about device switches – is there a cost of shifting involved? It could be that the next device is simply not within reach.</p>
<p>The canvas will help you to ask the right questions. What if our persona used the tablet and not the smartphone for a given task? Can this action be continued, and how likely is it to occur in parallel? What happens if the context changes and our user spends time with his friends? Is the user interested in social or investigative information engagement (two different types of engaging with the content; see blog post two)? Ask yourself whether complementary issues can occur, e.g. how to actually search for the actor during a film. Does the user actively search for this information, or does the TV channel provide him with hints or, even better, companion applications?</p>
<p>Similar to the well-known business model canvas, it can be used in very different manners.</p>
<p>The empty Canvas can be downloaded <a href="https://www.liip.ch/files/Multi-Device%20Interaction%20Canvas.pdf">here</a>.</p>
<p>To give you an idea how to use the canvas, have a look at our example scenario <a href="https://www.liip.ch/files/Multi-Device%20Interacation%20Canvas%20Example.pdf">here</a>.</p>
<p>Put your ideas and scenarios into practice and “think multi-device” from day one. We hope you like our tool. Feel free to send us feedback or improvements.</p>]]></description>
          </item>
        <item>
      <title>Multi-Device Interactions &#8211; Part 2 : The Models</title>
      <link>https://www.liip.ch/fr/blog/multi-device-interactions-part-2-the-models</link>
      <guid>https://www.liip.ch/fr/blog/multi-device-interactions-part-2-the-models</guid>
      <pubDate>Mon, 13 Jan 2014 00:00:00 +0100</pubDate>
<description><![CDATA[<p>In our <a href="http://blog.liip.ch/archive/2013/12/20/multi-device-interactions-part-1-the-second-screen.html">last blog post</a> we started off with John's story to show the everyday encounter with multiple devices and screens, and outlined the emergence of the second screen business. The classical second screen solution is a companion app for mobile devices that delivers additional information about TV content, e.g. a quiz or sport statistics on your smartphone or tablet. With all the possibilities of a multi-device world, it's crucial to focus on the conductor of all these instruments – the user! In the following sections we dive into some theoretical models of multi-device interaction.</p>
<p><strong>Why would the user choose one device over another, pursue one activity over another? How important is the context and what other factors contribute to a specific user-behavior pattern? </strong> </p>
<p>Let's have a look at the context which in many cases influences the choice of device.</p>
<h2>Context of use</h2>
<p>Digital devices are used in different social environments, as our story of John illustrates. The context of use is key to the selection of a device. It indeed makes a big difference whether you sit in a coffee shop, chill out at home on the couch, or watch a football game with friends at a bar.</p>
<p>What are the key factors which drive the preference for a device over others? <a href="http://www.google.com/think/research-studies/the-new-multi-screen-world-study.html">Google</a> identifies four of them:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/cd2c1e5edaed05ba9ea0179ef59bb1ac780781b7/blog-post-context.jpg" alt=""></figure>
<ul>
<li><strong>Time</strong> : How much time is available and needed for a given task?</li>
<li><strong>Goal</strong> : What is the defined intent, goal or task?</li>
<li><strong>Location</strong> : Where is the user and which devices are physically close?</li>
<li><strong>Attitude</strong> : How does the user feel and what is going on in his mind?</li>
</ul>
<p>In our example scenario, John is on the go, riding the bus from his office to the shop during rush hour. This doesn't allow him to engage in longer tasks, as many distractions and possible disruptions are present. Here, the smartphone is the right device. But later at night, relaxing with Michelle at home, the setting is very different with respect to the four factors above: time is available, John and Michelle are in a relaxed mindset, and they start discussing vacation ideas. Their goal is to find more inspiration for their trip – a casual browsing task that can easily be completed from the couch at home. A tablet is a perfect match for this purpose.</p>
<h2>The right device</h2>
<p>Google attributes different characteristics to each device. Whereas <strong>PCs and laptops are the primary tools to be “productive and informed”</strong>, mostly used at home or at the office to pursue goals that demand time and focus, <strong>smartphones are the connecting devices</strong>: usually used on the go as well as at home, and best for communication and connecting with others. Smartphones are, of course, often used when time is sparse or information needs to be accessed quickly, e.g. for small tasks. <strong>And tablets? Entertainers, clearly.</strong> Tablets are used at home 70% of the time, mostly for browsing and entertainment when time is largely available.</p>
<p>A consumer in the report sums it up as follows:</p>
<p><em>“My phone… I consider it my personal device, my go-to device. It's close to me, if I need that quick, precise feedback.</em><br />
<em>When I need to be more in depth, that's when I start using my tablet. The other part of it is where I disconnect from my work life and kind of go into where I want to be at the moment… I'm totally removed from today's reality. I can't get a phone call, I don't check my email it's my dream world.</em><br />
<em>And then moving to the laptop, well, for me that's business. That's work. I feel like I've got to be crunching numbers or doing something.”</em></p>
<p>Bradely, <a href="http://www.google.com/think/research-studies/the-new-multi-screen-world-study.html">Google Report 2012</a>.</p>
<p><a href="http://advertising.microsoft.com/en-uk/cl/1932/cross-screen-research-report">Microsoft</a> defines different archetypes to help marketers understand how users relate to their devices – a way of labeling six diverse user types.</p>
<p><strong>The Everyman:</strong> The TV is one of the most popular devices in our multi-screen world, delivering passive entertainment and comfort</p>
<p><strong>The Sage:</strong> The laptop informs, empowers and teaches – clearly key to productivity</p>
<p><strong>The Jester:</strong> The gaming console immerses consumers in another world</p>
<p><strong>The Dreamer:</strong> E-readers help us escape into the world of books and are mainly used for this purpose only (although some of them provide internet browsing too)</p>
<p><strong>The Explorer:</strong> The tablet facilitates discovery and investigation, is a great device on the go, and is rich in media and video</p>
<p><strong>The Lover:</strong> The mobile phone is the most personal device and evokes intimacy, commitment and trust – however, its downside is the constant demand on the user's attention</p>
<p>Now that we might understand how a user comes to choose one device over another and how he relates to these devices, we can think about how he interacts with multiple devices.</p>
<h2>Modes of multi-screening</h2>
<p>Screens can either be used sequentially or simultaneously (Google, 2012). </p>
<p>Sequential device usage occurs when a task is initiated on device A and finished on device B. One common example is browsing the web for shoes and bookmarking the interesting products on a smartphone, to later purchase the item on a laptop. In John's story, many sequential actions happen; for instance, the vacation trip planning is an activity that moves through different devices throughout the day. Based on numbers from the Google report, over 90% of users engage in sequential use of devices to accomplish a given task within the same day. It is thus not astonishing that Google launched a new AdWords tracking measurement for marketers in early October 2013: with “Estimated Total Conversions”, cross-device conversions can be calculated.</p>
<p>Parallel device usage is the opposite of the sequential mode. Google speaks of simultaneous use when multiple screens are active and the information on the second screen is either related or unrelated to the main screen. The report distinguishes between multi-tasking (unrelated activity) and complementary usage (related activity). Multi-screening with a smartphone and TV is ranked as the most frequent activity among the users in their study, followed by smartphone &amp; laptop. According to their research, emailing, browsing and social networking are the most performed tasks during simultaneous screen usage.</p>
<p>Even though multi-tasking – juggling different activities at the same time – has been shown to mainly have negative effects on the performance and accuracy of a given task (Rachel et al, 2011), 78% of the participants in the Google Multi-Screen World study perform multi-tasking (simultaneous usage); complementary usage thus accounted for only 22%. It turns out that 77% of TV viewers use another device while watching television. TV often ignites search: at least a quarter of search inquiries are prompted by television (Google, 2012).</p>
<p>There are other approaches to defining modes of use in a multi-device world. Here is another one.</p>
<h2>The four paths of engagement</h2>
<p>Microsoft, on the other hand, describes modes of use through four “paths of engagement” with devices, which partially overlap with Google's definitions of sequential and simultaneous use.</p>
<ol>
<li>
<p>Content Grazing</p>
</li>
<li>
<p>Investigative Spider-Webbing</p>
</li>
<li>
<p>Social Spider-Webbing</p>
</li>
<li>
<p>Quantum</p>
</li>
</ol>
<p>Content Grazing is the classical distractive behavior when using multiple screens. It can be either related or unrelated to the content on the primary screen, similar to Google's definition of simultaneous use. It is often habit-driven and about small tasks running in the background. Think of Michelle texting back and forth with Sarah during the movie.</p>
<p>The Investigative and Social Spider-Webbing paths of engagement are about consuming content that is clearly related to a primary screen. Microsoft's distinction between investigative and social is straightforward: Investigative Spider-Webbing happens when moments of curiosity or knowledge seeking trigger a search action – Michelle's interest in finding additional information about the movie star is a good example. Social Spider-Webbing, on the other hand, is about social engagement in the form of conversations and connecting with like-minded individuals – say, a tweet or online discussion about content on a primary screen (Microsoft, 2013).</p>
<p>Quantum Tasking is the equivalent of Google's sequential use. Here, intended tasks travel over space and time from screen to screen, meaning big tasks are often divided into subtasks. The report also states that, when it comes to shopping, spontaneity plays an important role, and it is mostly present while using a smartphone. In our story it's when John purchases the flight tickets, remember? At home, the PC or laptop is clearly the leading device. Nevertheless, 67% of the studied users started shopping on a smartphone and accomplished the goal on a PC/laptop (Google, 2012), just like John did.</p>
<p>With all the different modes of use mentioned, users and solution providers have started to care a lot about information and interaction orchestration. No wonder, then, that theoretical concepts for screen coordination have been developed.</p>
<h2>Screen coordination</h2>
<p>In <a href="http://www.slideshare.net/preciousforever/patterns-for-multiscreen-strategies?ref=http://previous.precious-forever.com/2011/05/26/patterns-for-multiscreen-strategies/">Ecosystems of screens</a>, PRECIOUS DESIGN STUDIO documents six patterns for screen coordination:</p>
<ul>
<li>Coherence (appearance and functionality are coherent across different devices)</li>
<li>Synchronization (data gets synchronized)</li>
<li>Screen sharing (multiple screens share a single source)</li>
<li>Device shifting (possibility to actively shift content from one device to another)</li>
<li>Complementarity (the classical TV companion app)</li>
<li>Simultaneity (devices display similar content simultaneously)</li>
</ul>
<p>The strategies defined should help us understand and describe our multi-screen world. Here is an example: when John is working on his report, his data is synchronized and displayed coherently on different screens. The companion app Michelle uses while watching TV is most likely simultaneous and complementary. Device shifting occurs when John preselects a song on one device and later streams his music library at the gym.</p>
<h2>Summary</h2>
<p>The trend towards a multi-screen world is emerging and just about to become mainstream. How we interact with multiple screens is an art in itself. The different modes of use, contextual factors and screen coordination strategies outlined here are all crucial, but they must always be considered together, in a holistic approach. The user acts as the “conductor of an orchestra of devices” that plays a hopefully “harmonious experience tune” that some user experience architects designed.</p>
<p>To do so, we can boil it down to four main factors that contribute to this understanding:</p>
<ul>
<li>Context</li>
<li>User</li>
<li>Activity</li>
<li>Screen and Interactions</li>
</ul>
<figure><img src="https://liip.rokka.io/www_inarticle/822d1efc4ed5600909b4b87b1736b8bf0bbe7ae5/blog-post-interactioncanvas.jpg" alt=""></figure>
<p>In a next post we'll introduce the Multi-Device Interaction Canvas – a tool to model and think multi-device interaction scenarios in a simple and convenient way.</p>]]></description>
          </item>
        <item>
      <title>Multi-Device Interactions &#8211; Part 1 : The Second Screen</title>
      <link>https://www.liip.ch/fr/blog/multi-device-interactions-part-1-the-second-screen</link>
      <guid>https://www.liip.ch/fr/blog/multi-device-interactions-part-1-the-second-screen</guid>
      <pubDate>Fri, 20 Dec 2013 00:00:00 +0100</pubDate>
      <description><![CDATA[<figure><img src="https://liip.rokka.io/www_inarticle/cf1bab8380fcacdd8901427655c774ac162ddbfb/blog-post-mdi1.jpg" alt=""></figure>
<p>This article, the first in a series on multi-device interactions, introduces the concept and analyses existing second screen solutions from the broadcasting industry.</p>
<p>Let us start with a (not so) small introductory story (or directly check out the main part).</p>
<p>It's 6pm and John shuts down his desktop computer at work. He has been writing all day on the yearly financial report due next week, and it's not done yet. Of course, everything is available in the cloud and he can continue later from anywhere. John is running late and leaves the office in a hurry (as usual): grocery shopping before the shops close, meeting his wife Michelle at the coffee shop, making a dinner reservation for the next day and sending a first draft of his report to the management. John is far from ‘done with his day'.</p>
<p>6:10pm. He catches the bus. It's rush hour downtown. Arriving at the deli shop ten minutes later, he takes a look at the shopping list he wrote down on paper earlier in the day. His wife texted him to buy that noble vinegar from Italy, but John can't recall the name and quickly checks it on his smartphone. He briefly checks the e-mails on his various mail accounts – a bad habit. Paul, a good friend, has sent him some notes and links for a road trip to southern France in spring, which seems like a perfect idea! John therefore creates a to-do item in his favorite app.</p>
<p>7pm. While walking to the station, John makes use of the idle time and calls the restaurant to make a table reservation for the next day. Ten minutes later, he meets his wife at the coffee shop. They discuss the road trip plan for next spring. The smartphone comes in handy to take notes and dive into some first research on cheap flights. Bookmarked!</p>
<p>8pm, at home. They hang out on the couch and browse sunny pictures of southern France on their tablet to get some inspiration. The TV news is rambling in the background. Oh well, it's about time John finished up his report and delivered a first draft!</p>
<p>10pm and finally done with his draft. Shutting down the laptop. After this late work shift, John feels the urge to go to the gym. The music on his smartphone keeps him going, pushing the weights. One hour later, John and Michelle relax and watch a movie on TV. Michelle is curious about the movie, so she launches the companion app to read about the actors and the storyline. All of a sudden, a text message from a friend interrupts. The smartphone is within close reach – why wouldn't she take a look? It's Sarah, sending a photo of her new red shoes. Of course Michelle wants to know all about them. After messaging back and forth, Sarah shares the link to the web shop with Michelle. She bookmarks it instantly, intending to purchase the same shoes, but in blue, later.</p>
<p>11pm. The movie is still running. Michelle follows up on her smartphone with friends she had missed. Meanwhile, John purchases the flights he had bookmarked earlier that day. On his laptop it's just a few clicks. He shares the news with Paul via text message.</p>
<p>It's been a long and rough day. John sets his phone alarm to 7am. He will wake up to a terrific fresh song he discovered on the bus ride yesterday. An exciting, brand new day is waiting.</p>
<h2>Introduction</h2>
<p>Every minute of our lives we face and interact with many screens, for many more purposes. Just like John. A large portion of us owns multiple connected devices ( <a href="http://www.digital-tsunami.com/2013/03/19/mobile-consumer-2012-statistics/">Digital Tsunami, 2013</a>; <a href="http://www.mobify.com/resources/mobile-device-ownership-statistics/">Mobify, 2012</a>) and uses them together. Think of surfing the web on a tablet while watching television, or listening to music through phone and tablet interchangeably. Bridging information through cloud services has already become indispensable to our daily routines: emails, calendars, music, documents, …</p>
<p>In such a multi-device environment, it becomes complicated for solution providers to ensure the consistency and quality of the user experience ( <a href="http://www.nngroup.com/articles/cross-channel-consistency/">Janelle Estes, 2013</a>). Like the conductor of an orchestra, we interact daily with different ‘instruments' toward a common goal. When, with which device and in what circumstances a user performs which activity is a tough yet essential question to answer.</p>
<p>These days, novel and interesting ways of (social) engagement through multi-screen interaction are emerging, first and foremost in the TV industry. In fact, the market has already evolved at a fast pace towards the new world of multi-screen digitalism. With up to 90% of media interactions being screen-based ( <a href="http://www.google.com/think/research-studies/the-new-multi-screen-world-study.html">Google, 2012</a>) and mobile data traffic on the rise ( <a href="http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-520862.html">Cisco, 2013</a>), the foundation for prospective innovation in the multi-device interaction world is laid.</p>
<h2>Second Screen and Television</h2>
<p>The television industry was the first to coin the term “second screen”, referring to applications that go hand in hand with a primary screen: the TV. Consumers know it better under the name “companion device” – an application that provides supplementary information to the content displayed on the primary screen. But there is clearly more to it than just two screens.</p>
<p>A recent report from Business Insider explains “Why the second screen industry is set to explode”, referring to the very frequent use of mobile devices while watching TV (see <a href="http://www.businessinsider.com/bii-report-why-the-second-screen-industry-is-set-to-explode-2013-2">Business Insider, 2013</a>). It seems to be one of the most popular side activities of this mobile era of broadcasting. According to a survey conducted by Nielsen, about 86% of tablet and smartphone users engage in multi-device activities while watching TV ( <a href="http://www.nielsen.com/us/en/newswire/2012/double-vision-global-trends-in-tablet-and-smartphone-use-while-watching-tv.html">Nielsen, 2012</a>). Here, second screen apps bridge gaps between the user and media content. Often these apps provide incentives in the form of social media engagement or additional, in-depth information. For the latter, think of the sport game companion application from <a href="http://www1.skysports.com/mobile/apps/7378885">Sky Sports</a> that feeds users live statistics and numbers while they stream a football game on a first screen. The app includes recent player transfer deals and historical data on previous team performance, to name a few examples. In Formula One, on the other hand, users can take a more active role and customize their TV experience by switching live camera views of cockpits during the race. In fact, the implementation of such applications has turned out to be a success ( <a href="http://www.broadbandtvnews.com/2013/09/06/sky-summer-of-sport-breaks-multiplatform-records/">BroadbandTV News, 2013</a>).</p>
<p>When it comes to second screen social engagement: according to eMarketer, one sixth of the audience actually shows social media activity about the content they consume on TV ( <a href="http://www.forbes.com/sites/jeffbercovici/2013/10/10/the-second-screen-phenomenon-is-much-bigger-than-twitter-and-facebook/">Forbes, 2013</a>). Many shows nowadays contain live feeds from social media channels such as Twitter. NBC's second screen app for The Voice, for example, uses a live voting system and enables fans to engage with celebrity judges during broadcasting ( <a href="http://simplymeasured.com/blog/2013/04/30/nbcs-the-voice-is-changing-how-we-watch-tv-with-twitter/">Simply Measured, 2013</a>). Another companion app, made by NBC for the Million Second Quiz, lets viewers play the quiz in parallel to the show and enter competitions (Million Second Quiz, 2013). There are many more second screen products available that provide some sort of engagement for users (e.g. the <a href="http://www.youtube.com/watch?v=IdxoCDNx2nQ">Zeebox App</a>).</p>
<p>Of course, one could argue that these second screen applications are specifically designed for the US and UK markets and that the underlying concepts might not be applicable to a culturally different market. This observation is correct. However, a look at the Swiss TV industry shows a similar trend: a recent survey found that up to 76 percent of TV consumers use a second, internet-connected device while watching TV ( <a href="http://www.werbewoche.ch/second-screen-auch-in-der-schweiz-ein-trend">werbewoche.ch, 2013</a>). Also, SRF launched a companion game app for the quiz show Millionenfalle ( <a href="http://www.srf.ch/sendungen/die-millionen-falle/spielen-sie-ab-11-november-2013-live-mit">App</a>) in November 2013. And television pioneer channel <a href="http://www.joiz.ch/">Joiz</a> has been all about social TV since 2011: users can check in to programmes, message with community members, and win goodies for engagement and social interaction activity.</p>
<h2>Beyond the Screen: the User</h2>
<p>The TV world with its social engagement incentives is important, but let us refocus on the user. After all, when engaging in multi-screen activities, the user's attention is split and every additional interaction is a source of distraction. We have to understand the underlying reasons that drive the user towards one device over another, to pursue one activity over another, in what context and why. This leads to a number of questions that are important to study in multi-screen interactions. We'll find possible answers to these questions, based on a variety of sources, in our next blog post.</p>]]></description>
          </item>
    
  </channel>
</rss>
