<?xml version="1.0" encoding="utf-8"?>
<!-- generator="Kirby" -->
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">

  <channel>
    <title>Mot-cl&#233;: iphone &#183; Blog &#183; Liip</title>
    <link>https://www.liip.ch/fr/blog/tags/iphone</link>
    <generator>Kirby</generator>
    <lastBuildDate>Tue, 23 Oct 2018 00:00:00 +0200</lastBuildDate>
    <atom:link href="https://www.liip.ch" rel="self" type="application/rss+xml" />

        <description>Articles du blog Liip avec le mot-cl&#233; &#8220;iphone&#8221;</description>
    
        <language>fr</language>
    
        <item>
      <title>Real time numbers recognition (MNIST) on an iPhone with CoreML from A to Z</title>
      <link>https://www.liip.ch/fr/blog/numbers-recognition-mnist-on-an-iphone-with-coreml-from-a-to-z</link>
      <guid>https://www.liip.ch/fr/blog/numbers-recognition-mnist-on-an-iphone-with-coreml-from-a-to-z</guid>
      <pubDate>Tue, 23 Oct 2018 00:00:00 +0200</pubDate>
      <description><![CDATA[<h1>Creating a CoreML model from A-Z in less than 10 Steps</h1>
<p>This is the third part of our series on deep learning on mobile phones. In part one I showed you <a href="https://www.liip.ch/en/blog/poke-zoo-or-making-deep-learning-tell-oryxes-apart-from-lamas-in-a-zoo-part-1-the-idea-and-concepts">the two main tricks, convolutions and pooling, used to train deep learning networks</a>. In part two I showed you <a href="https://www.liip.ch/en/blog/zoo-pokedex-part-2-hands-on-with-keras-and-resnet50">how to retrain existing deep learning networks like ResNet50 to detect new objects</a>. In part three I will now show you how to train a deep learning network, convert it to the CoreML format, and deploy it on your mobile phone! </p>
<p>TLDR: I will show you how to create your own iPhone app from A to Z that recognizes handwritten numbers: </p>
<figure><img src="https://liip.rokka.io/www_inarticle/812493/output.gif" alt=""></figure>
<p>Let’s get started!</p>
<h2>1. How to start</h2>
<p>To have a fully working example, I thought we’d start with a toy dataset like the <a href="https://en.wikipedia.org/wiki/MNIST_database">MNIST set of handwritten digits</a> and train a deep learning network to recognize them. Once it’s working nicely on our PC, we will port it to an iPhone X using the <a href="https://developer.apple.com/documentation/coreml">CoreML standard</a>. </p>
<h2>2. Getting the data</h2>
<pre><code class="language-python"># Importing the dataset with Keras and transforming it
from keras.datasets import mnist
from keras.utils import np_utils  # needed for to_categorical below
from keras import backend as K

def mnist_data():
    # input image dimensions
    img_rows, img_cols = 28, 28
    (X_train, Y_train), (X_test, Y_test) = mnist.load_data()

    if K.image_data_format() == 'channels_first':
        X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
        X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
        input_shape = (1, img_rows, img_cols)
    else:
        X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
        X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
        input_shape = (img_rows, img_cols, 1)

    # rescale [0,255] --&gt; [0,1]
    X_train = X_train.astype('float32')/255
    X_test = X_test.astype('float32')/255

    # transform to one hot encoding
    Y_train = np_utils.to_categorical(Y_train, 10)
    Y_test = np_utils.to_categorical(Y_test, 10)

    return (X_train, Y_train), (X_test, Y_test)

(X_train, Y_train), (X_test, Y_test) = mnist_data()</code></pre>
<h2>3. Encoding it correctly</h2>
<p>When working with image data we have to distinguish how we want to encode it. Since Keras is a high-level library that can work on multiple “backends” such as <a href="https://www.tensorflow.org">Tensorflow</a>, <a href="http://deeplearning.net/software/theano/">Theano</a> or <a href="https://www.microsoft.com/en-us/cognitive-toolkit/">CNTK</a>, we first have to find out how our backend encodes the data. It can be encoded either “channels first” or “channels last”; the latter is the default in Tensorflow, the <a href="https://keras.io/backend/">default Keras backend</a>. So in our case, with Tensorflow, an image batch is a tensor of shape (batch_size, rows, cols, channels): first the batch size, then the 28 rows of the image, then the 28 columns, and finally a 1 for the number of channels, since our image data is grey-scale.  </p>
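<p>As a small illustration in plain numpy (independent of any backend), the same batch of grey-scale images looks like this under the two conventions:</p>
<pre><code class="language-python">import numpy as np

# a fake batch of 5 grey-scale 28x28 images, shaped as mnist.load_data() returns them
batch = np.zeros((5, 28, 28))

channels_last = batch.reshape(5, 28, 28, 1)    # Tensorflow default: (batch, rows, cols, channels)
channels_first = batch.reshape(5, 1, 28, 28)   # Theano-style: (batch, channels, rows, cols)

print(channels_last.shape)   # (5, 28, 28, 1)
print(channels_first.shape)  # (5, 1, 28, 28)</code></pre>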
<p>We can take a look at the first six images that we have loaded with the following snippet:</p>
<pre><code class="language-python"># plot the first six training images
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.cm as cm
import numpy as np
from keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()

fig = plt.figure(figsize=(20,20))
for i in range(6):
    ax = fig.add_subplot(1, 6, i+1, xticks=[], yticks=[])
    ax.imshow(X_train[i], cmap='gray')
    ax.set_title(str(y_train[i]))</code></pre>
<figure><img src="https://liip.rokka.io/www_inarticle/7cce04/numbers.png" alt=""></figure>
<h2>4. Normalizing the data</h2>
<p>We see that there are white numbers on a black background, each one thickly written in the middle of the image, and they are quite low resolution, in our case 28 x 28 pixels. </p>
<p>You may have noticed that above we rescale each image's pixels by dividing them by 255. This results in pixel values between 0 and 1, which is quite useful for any kind of training. Before the transformation, each image's pixel values look like this:</p>
<pre><code class="language-python"># visualize one number with pixel values
def visualize_input(img, ax):
    ax.imshow(img, cmap='gray')
    width, height = img.shape
    thresh = img.max()/2.5
    for x in range(width):
        for y in range(height):
            ax.annotate(str(round(img[x][y],2)), xy=(y,x),
                        horizontalalignment='center',
                        verticalalignment='center',
                        color='white' if img[x][y]&lt;thresh else 'black')

fig = plt.figure(figsize = (12,12)) 
ax = fig.add_subplot(111)
visualize_input(X_train[0], ax)</code></pre>
<figure><img src="https://liip.rokka.io/www_inarticle/6d0772/detail.png" alt=""></figure>
<p>As you can see, each of the grey-scale pixels has a value between 0 and 255, where 255 is white and 0 is black. Notice that here <code>mnist.load_data()</code> loads the original data into X_train[0]. In our custom mnist_data() function we transform every pixel intensity into a value between 0 and 1 by calling <code>X_train = X_train.astype('float32')/255</code>. </p>
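<p>The rescaling step can also be tried out in isolation; here is a plain numpy sketch with a fake 2x2 image:</p>
<pre><code class="language-python">import numpy as np

# a fake 2x2 grey-scale "image" with raw intensities between 0 and 255
raw = np.array([[0, 128], [191, 255]], dtype=np.uint8)

scaled = raw.astype('float32') / 255  # same transformation as in mnist_data()
print(scaled.min(), scaled.max())     # 0.0 1.0</code></pre>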
<h2>5. One hot encoding</h2>
<p>Originally the Y vector simply contains the numeric label of the image held in the X vector (the pixel data): if the image looks like a 7, the Y vector contains the number 7. We transform this into a one-hot encoding, a vector of ten entries that are all zero except for a 1 at the position of the label, because we want to map our output to the 10 output neurons of our network, which fire when the corresponding number is recognized. </p>
<figure><img src="https://liip.rokka.io/www_inarticle/46a2ef/onehot.png" alt=""></figure>
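<p>If you want to see what np_utils.to_categorical is doing under the hood, the same one-hot transformation can be sketched in a few lines of plain numpy:</p>
<pre><code class="language-python">import numpy as np

def one_hot(labels, num_classes=10):
    # pick, for each label, the matching row of the identity matrix
    return np.eye(num_classes)[labels]

y = np.array([7, 2, 0])
print(one_hot(y))  # three rows, each with a single 1 at index 7, 2 and 0 respectively</code></pre>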
<h2>6. Modeling the network</h2>
<p>Now it is time to define a convolutional network to distinguish those numbers. Using the <a href="https://www.liip.ch/en/blog/poke-zoo-or-making-deep-learning-tell-oryxes-apart-from-lamas-in-a-zoo-part-1-the-idea-and-concepts">convolution and pooling tricks from part one of this series</a> we can model a network that will be able to distinguish numbers from each other. </p>
<pre><code class="language-python"># defining the model
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
def network():
    model = Sequential()
    input_shape = (28, 28, 1)
    num_classes = 10

    model.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.3))
    model.add(Flatten())
    model.add(Dense(500, activation='relu'))
    model.add(Dropout(0.4))
    model.add(Dense(num_classes, activation='softmax'))

    # summarize the model
    # model.summary()
    return model </code></pre>
<p>So what did we do there? We started with a <a href="https://keras.io/layers/convolutional/">convolution</a> with a kernel size of 3, meaning the window is 3x3 pixels; the input shape is our 28x28 pixels. We followed this layer with a <a href="https://keras.io/layers/pooling/">max pooling layer</a>. Here the pool_size is 2, so we downscale everything by 2, and the input to the next convolutional layer is 14x14. We repeated this two more times, so the final convolution-and-pooling block receives a 7x7 input and pools it down to 3x3. We then use a <a href="https://keras.io/layers/core/#dropout">dropout layer</a>, where we randomly set 30% of the input units to 0 to prevent overfitting during training. Finally we flatten the input (in our case 3x3x32 = 288 values) and connect it to a dense layer with 500 nodes. After this step we add another dropout layer and finally connect it to our output layer with 10 nodes, which corresponds to our number of classes (the numbers 0 to 9). </p>
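<p>The size bookkeeping above can be double-checked with a few lines of arithmetic; integer division applies, because a pooling window of 2 discards a leftover odd row or column:</p>
<pre><code class="language-python">size = 28
for _ in range(3):      # three conv ('same' padding keeps the size) + pooling blocks
    size = size // 2    # each MaxPooling2D with pool_size 2 halves the grid
print(size)             # 3  (28, then 14, then 7, then 3)

flattened = size * size * 32  # 32 filters in the last convolution
print(flattened)              # 288 inputs into the Flatten/Dense step</code></pre>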
<h2>7. Training the model</h2>
<pre><code class="language-python"># Training the model
import keras

model = network()
model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])

model.fit(X_train, Y_train, batch_size=512, epochs=6, verbose=1,validation_data=(X_test, Y_test))

score = model.evaluate(X_test, Y_test, verbose=0)

print('Test loss:', score[0])
print('Test accuracy:', score[1])</code></pre>
<p>We first compile the network by defining a loss function and an optimizer: in our case we select categorical_crossentropy, because we have multiple categories (the numbers 0-9). There are a number of optimizers that <a href="https://keras.io/optimizers/#usage-of-optimizers">Keras offers</a>, so feel free to try out a few and stick with what works best for your case. I’ve found that Adadelta (an advanced form of Adagrad) works fine for me. </p>
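<p>To give an intuition for why Adadelta needs no hand-tuned learning rate, here is a toy sketch of its update rule on a one-dimensional quadratic; this is an illustration of the idea, not Keras’s actual implementation:</p>
<pre><code class="language-python">import numpy as np

def adadelta_step(grad, state, rho=0.95, eps=1e-6):
    # running average of squared gradients
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad ** 2
    # step scaled by RMS(recent updates) / RMS(recent gradients): no learning rate needed
    update = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    # running average of squared updates
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * update ** 2
    return update

# minimize the toy loss f(x) = x squared, whose gradient is 2x
x = 3.0
state = {"Eg2": 0.0, "Edx2": 0.0}
for _ in range(500):
    x += adadelta_step(2 * x, state)
print(x)  # x has moved from 3.0 toward the minimum at 0</code></pre>
<p>Each step is scaled by the ratio of the RMS of recent updates to the RMS of recent gradients, so the effective step size adapts on its own.</p>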
<figure><img src="https://liip.rokka.io/www_inarticle/42b4b8/train.png" alt=""></figure>
<p>After training I’ve got a model with an accuracy of 98%, which is quite excellent given the rather simple network architecture. In the screenshot you can also see that the accuracy increased in each epoch, so everything looks good. We now have a model that can predict the numbers 0-9 quite well from their 28x28 pixel representation. </p>
<h2>8. Saving the model</h2>
<p>Since we want to use the model on our iPhone, we have to convert it to a format that our iPhone understands. There is actually an ongoing initiative from Microsoft, Facebook and Amazon (and others) to harmonize all of the different deep learning network formats into an interchangeable open neural network exchange format that you can use on any device. It's called <a href="https://onnx.ai">ONNX</a>. </p>
<p>Yet, as of today, Apple devices only work with the CoreML format. In order to convert our Keras model to CoreML, Apple luckily provides a very handy helper library called <a href="https://apple.github.io/coremltools/generated/coremltools.converters.keras.convert.html">coremltools</a> that we can use to get the job done. It is able to convert scikit-learn, Keras and XGBoost models to CoreML, thus covering quite a few everyday applications. Install it with “pip install coremltools” and you will be able to use it easily. </p>
<pre><code class="language-python">import coremltools

coreml_model = coremltools.converters.keras.convert(model,
                                                    input_names="image",
                                                    image_input_names='image',
                                                    class_labels=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
                                                    )</code></pre>
<p>The most important parameters are class_labels, which defines the classes the model is trying to predict, and input_names together with image_input_names. By setting them to <code>image</code>, Xcode will automatically recognize that this model takes an image as input and tries to predict something from it. Depending on your application it makes a lot of sense to study the <a href="https://apple.github.io/coremltools/generated/coremltools.converters.keras.convert.html">documentation</a>, especially when you want to make sure that the model encodes the RGB channels in the expected order (parameter is_bgr) or that it correctly assumes all inputs are values between 0 and 1 (parameter image_scale). </p>
<p>The only thing left is to add some metadata to your model. With this you help other developers greatly, since they don’t have to guess how your model works and what it expects as input. </p>
<pre><code class="language-python">#entering metadata
coreml_model.author = 'plotti'
coreml_model.license = 'MIT'
coreml_model.short_description = 'MNIST handwriting recognition with a 3 layer network'
coreml_model.input_description['image'] = '28x28 grayscaled pixel values between 0-1'
coreml_model.save('SimpleMnist.mlmodel')

print(coreml_model)</code></pre>
<h2>9. Use it to predict something</h2>
<p>After saving the model as a CoreML model, we can check whether it works correctly on our machine. For this we feed it an image and see if it predicts the label correctly. You can use the MNIST training data, or you can snap a picture with your phone and transfer it to your PC to see how well the model handles real-life data. </p>
<pre><code class="language-python"># Use the CoreML model to predict something
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import coremltools

model = coremltools.models.MLModel('SimpleMnist.mlmodel')
im = Image.fromarray((np.reshape(mnist_data()[0][0][12]*255, (28, 28))).astype(np.uint8),"L")
plt.imshow(im)
predictions = model.predict({'image': im})
print(predictions)</code></pre>
<p>It works, hooray! Now it's time to include it in an Xcode project. </p>
<h1>Porting our model to Xcode in 10 Steps</h1>
<p>Let me start by saying: I am by no means an Xcode or mobile developer. I have studied <a href="https://github.com/markmansur/CoreML-Vision-demo">quite a few</a> <a href="https://sriraghu.com/2017/06/15/computer-vision-in-ios-object-recognition/">super</a> <a href="https://www.raywenderlich.com/577-core-ml-and-vision-machine-learning-in-ios-11-tutorial">helpful tutorials</a>, <a href="https://www.pyimagesearch.com/2018/04/23/running-keras-models-on-ios-with-coreml/">walkthroughs</a> and <a href="https://www.youtube.com/watch?v=bOg8AZSFvOc">videos</a> on how to create a simple mobile phone app with CoreML and have used those to create my app. I can only say a big thank you and kudos to the community for being so open and helpful. </p>
<h2>1. Install XCode</h2>
<p>Now it's time to really get our hands dirty. Before you can do anything you need Xcode. So download it via the <a href="https://itunes.apple.com/us/app/xcode/id497799835?mt=12">App Store</a> and install it. In case you already have it, make sure you have at least version 9. </p>
<h2>2. Create the Project</h2>
<p>Start Xcode and create a single view app. Name your project accordingly; I named mine “numbers”. Select a place to save it. You can leave “create git repository on my mac” checked. </p>
<figure><img src="https://liip.rokka.io/www_inarticle/26a145/single.png" alt=""></figure>
<h2>3. Add the CoreML model</h2>
<p>We can now add the CoreML model that we created using the coremltools converter. Simply drag the model into your project directory, making sure to drag it into the correct folder (see screenshot). If you use the option “add as Reference”, you won't have to drag the model into your project again whenever you update it. Xcode should automatically recognize your model and realize that it is a model to be used on images. </p>
<figure><img src="https://liip.rokka.io/www_inarticle/d4115c/addmodel.png" alt=""></figure>
<h2>4. Delete the view or storyboard</h2>
<p>Since we are going to use just the camera and display a label, we don't need a fancy graphical user interface, or in other words a view layer. Since the storyboard corresponds to the view in the MVC pattern, we are simply going to delete it. In the project settings' deployment info, make sure to delete the Main Interface too (see screenshot) by setting it to blank.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/8f4709/storyboard.png" alt=""></figure>
<h2>5. Create the root view controller programmatically</h2>
<p>Instead we are going to create the root view controller programmatically by replacing <code>func application</code> in AppDelegate.swift with the following code:</p>
<pre><code class="language-swift">// create the root view controller programmatically
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -&gt; Bool {
    // create the user interface window, make it visible
    window = UIWindow()
    window?.makeKeyAndVisible()

    // create the view controller and make it the root view controller
    let vc = ViewController()
    window?.rootViewController = vc

    // return true upon success
    return true
}</code></pre>
<h2>6. Build the view controller</h2>
<p>Finally it is time to build the view controller. We will use UIKit, a lib for creating buttons and labels; AVFoundation, a lib to capture the camera of the iPhone; and Vision, a lib to handle our CoreML model. The last one is especially handy if you don’t want to resize the input data yourself. </p>
<p>In the ViewController we are going to inherit from UI and AV functionalities, so we will need to override some methods later to make it functional. </p>
<p>The first thing we will do is to create a label that will tell us what the camera is seeing. By overriding the <code>viewDidLoad</code> function we will trigger the capturing of the camera and add the label to the view. </p>
<p>In the function <code>setupCaptureSession</code> we will create a capture session, grab the first camera (the back-facing one) and capture its output into <code>captureOutput</code> while also displaying it on the <code>previewLayer</code>. </p>
<p>In the function <code>captureOutput</code> we will finally make use of the CoreML model that we imported before. Make sure to hit Cmd+B (build) when importing it, so Xcode knows it is actually there. We will use it to predict something from the image that we captured, grab the model's first prediction and display it in our label. </p>
<pre><code class="language-swift">// define the ViewController
import UIKit
import AVFoundation
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    // create a label to hold the predicted number
    let label: UILabel = {
        let label = UILabel()
        label.textColor = .white
        label.translatesAutoresizingMaskIntoConstraints = false
        label.text = "Label"
        label.font = label.font.withSize(40)
        return label
    }()

    override func viewDidLoad() {
        // call the parent function
        super.viewDidLoad()       
        setupCaptureSession() // establish the capture
        view.addSubview(label) // add the label
        setupLabel()
    }

    func setupCaptureSession() {
        // create a new capture session
        let captureSession = AVCaptureSession()

        // find the available cameras
        let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices

        do {
            // select the first camera (the back-facing wide-angle camera)
            if let captureDevice = availableDevices.first {
                captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice))
            }
        } catch {
            // print an error if the camera is not available
            print(error.localizedDescription)
        }

        // setup the video output to the screen and add output to our capture session
        let captureOutput = AVCaptureVideoDataOutput()
        captureSession.addOutput(captureOutput)
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = view.frame
        view.layer.addSublayer(previewLayer)

        // buffer the video and start the capture session
        captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // load our CoreML MNIST model
        guard let model = try? VNCoreMLModel(for: SimpleMnist().model) else { return }

        // run an inference with CoreML
        let request = VNCoreMLRequest(model: model) { (finishedRequest, error) in

            // grab the inference results
            guard let results = finishedRequest.results as? [VNClassificationObservation] else { return }

            // grab the highest-confidence result
            guard let observation = results.first else { return }

            // create the label text components
            let predclass = "\(observation.identifier)"

            // set the label text
            DispatchQueue.main.async(execute: {
                self.label.text = "\(predclass) "
            })
        }

        // create a Core Video pixel buffer which is an image buffer that holds pixels in main memory
        // Applications generating frames, compressing or decompressing video, or using Core Image
        // can all make use of Core Video pixel buffers
        guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // execute the request
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }

    func setupLabel() {
        // constrain the label in the center
        label.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true

        // constrain the label to 50 pixels from the bottom
        label.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -50).isActive = true
    }
}</code></pre>
<p>Make sure that you change the model name to the name of your own model. Otherwise you will get build errors. </p>
<figure><img src="https://liip.rokka.io/www_inarticle/b4364b/modeldetails.png" alt=""></figure>
<h2>7. Add Privacy Message</h2>
<p>Finally, since we are going to use the camera, we need to inform the user that we are going to do so, and thus add a privacy message “Privacy - Camera Usage Description” in the Info.plist file under Information Property List. </p>
<figure><img src="https://liip.rokka.io/www_inarticle/29ab1e/privacy.png" alt=""></figure>
<h2>8. Add a build team</h2>
<p>In order to deploy the app on your iPhone, you will need to <a href="https://developer.apple.com/programs/enroll/">register with the Apple developer program</a>. There is no need to pay any money to do so, <a href="https://9to5mac.com/2016/03/27/how-to-create-free-apple-developer-account-sideload-apps/">you can also register without any fees</a>. Once you are registered you can select the team (Apple calls it this way) that you have signed up with there in the project properties. </p>
<h2>9. Deploy on your iPhone</h2>
<p>Finally it's time to deploy the model on your iPhone. You will need to connect it via USB and then unlock it. Once it's unlocked, select the destination under Product - Destination - Your iPhone. Then the only thing left is to run it on your mobile: select Product - Run (or simply hit Cmd+R) in the menu and Xcode will build and deploy the project to your iPhone. </p>
<figure><img src="https://liip.rokka.io/www_inarticle/7cc4f5/destination.png" alt=""></figure>
<h2>10. Try it out</h2>
<p>After having jumped through so many hoops it is finally time to try out our app. When you start it for the first time it will ask you to allow it to use your camera (after all, we placed this info there). Then make sure to hold your iPhone sideways, since it matters how we trained the network. We have not used any augmentation techniques, so our model is unable to recognize numbers that are “lying on the side”. We could make our model better by applying these techniques, as I have shown in <a href="https://www.liip.ch/en/blog/zoo-pokedex-part-2-hands-on-with-keras-and-resnet50">this blog article</a>.</p>
<p>A second thing you might notice is that the app always recognizes some number, as there is no “background” class. To fix this, we could additionally train the model on some random images that we classify as the background class. This way the model would be better equipped to tell whether it is seeing a number or just some random background. </p>
<figure><img src="https://liip.rokka.io/www_inarticle/812493/output.gif" alt=""></figure>
<h2>Conclusion or the famous “so what”</h2>
<p>Obviously this is a very long blog post. Yet I wanted to get all the necessary info into one place in order to show other mobile devs how easy it is to create your own deep learning computer vision applications. In our case at Liip it will most certainly boil down to a collaboration between our <a href="https://www.liip.ch/en/work/data">data services team</a> and our mobile developers, in order to get the best of both worlds. </p>
<p>In fact we are currently innovating together by creating an app that <a href="https://www.liip.ch/en/blog/zoo-pokedex-part-2-hands-on-with-keras-and-resnet50">will be able to recognize</a> <a href="https://www.liip.ch/en/blog/poke-zoo-or-making-deep-learning-tell-oryxes-apart-from-lamas-in-a-zoo-part-1-the-idea-and-concepts">animals in a zoo</a>, and we are working on another small fun game that lets two people doodle against each other: you are given a task, such as “draw an apple”, and the person who draws the apple fastest, in such a way that the deep learning model recognizes it, wins. </p>
<p>Beyond such fun innovation projects the possibilities are endless, but they always depend on the context of the business and the users. Obviously the saying “if you have a hammer, every problem looks like a nail” applies here too: not every app will benefit from having computer vision on board, and not all apps using computer vision are <a href="https://www.theverge.com/2017/6/26/15876006/hot-dog-app-android-silicon-valley">useful ones</a>, as some of you might know from the famous Silicon Valley episode. </p>
<p>Yet there are quite a few nice examples of apps that use computer vision successfully: </p>
<ul>
<li><a href="http://leafsnap.com">Leafsnap</a> lets you distinguish different types of leaves. </li>
<li><a href="https://www.aipoly.com">Aipoly</a> helps visually impaired people to explore the world.</li>
<li><a href="http://www.snooth.com/iphone-app/">Snooth</a> gets you more info on your wine by taking a picture of the label.</li>
<li><a href="https://www.theverge.com/2017/2/8/14549798/pinterest-lens-visual-discovery-shazam">Pinterest</a> has launched a visual search that allows you to search for pins that match the product that you captured with your phone. </li>
<li><a href="http://www.caloriemama.ai">Caloriemama</a> lets you snap a picture of your food and tells you how many calories it has. </li>
</ul>
<p>As usual, the code that you have seen in this blog post is <a href="https://github.com/plotti/mnist-to-coreml">available online</a>. Feel free to experiment with it. I am looking forward to your comments and I hope you enjoyed the journey. P.S. I would like to thank Stefanie Taepke for proofreading and for her helpful comments, which made this post more readable.</p>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/d6f619/p1013593.jpg" length="5538521" type="image/jpeg" />
          </item>
        <item>
      <title>Writing iOS Layout Constraints The Easy Way</title>
      <link>https://www.liip.ch/fr/blog/writing-ios-layout-constraints-the-easy-way</link>
      <guid>https://www.liip.ch/fr/blog/writing-ios-layout-constraints-the-easy-way</guid>
      <pubDate>Thu, 27 Aug 2015 00:00:00 +0200</pubDate>
<description><![CDATA[<p>Coming from a web-development background, native iOS development always feels a bit clunky to me when it comes to creating the layouts.</p>
<p>Yes, there is the Interface Builder and it is a great tool, but sometimes things get more generic and building the views and layouts can be done more efficiently by hand.</p>
<p>Except – layout constraints! Writing layout constraints can be tedious work.</p>
<p>For example, making an element half the width of its parent element in Objective-C:</p>
<pre><code>[self.view addSubview:centerView];

// Width constraint, half of parent view width
[self.view addConstraint:[NSLayoutConstraint constraintWithItem:centerView
                               attribute:NSLayoutAttributeWidth relatedBy:NSLayoutRelationEqual
                               toItem:self.view attribute:NSLayoutAttributeWidth
                               multiplier:0.5 constant:0]];</code></pre>
<p>It is not much better in C# with Xamarin either:</p>
<pre><code>View.AddSubview(centerView);

// Width constraint, half of parent view width
View.AddConstraint(
    NSLayoutConstraint.Create(centerView, 
        NSLayoutAttribute.Width, NSLayoutRelation.Equal, 
        View, NSLayoutAttribute.Width, 
        0.5f, 0f
   )
);</code></pre>
<p><strong>But behold! There is our ConstraintHelper!</strong> </p>
<pre><code>ConstraintHelper.Attach(centerView).WidthOfParent(0.5f).Top().Center();</code></pre>
<p>The ConstraintHelper is a small C# library to help with the layout constraints and it brings less common concepts like <a href="https://en.wikipedia.org/wiki/Method_chaining">Method Chaining</a> to the layout constraints.</p>
<p><a href="https://github.com/semiroot/ConstraintHelper">ConstraintHelper is Open Source and can be forked from GitHub</a>.</p>]]></description>
          </item>
        <item>
      <title>Open Sourcing Radios &#8211; A PhoneGap iPhone/iPad app</title>
      <link>https://www.liip.ch/fr/blog/open-sourcing-radios-a-phonegap-iphoneipad-app</link>
      <guid>https://www.liip.ch/fr/blog/open-sourcing-radios-a-phonegap-iphoneipad-app</guid>
      <pubDate>Wed, 30 Jun 2010 00:00:00 +0200</pubDate>
<description><![CDATA[<p>As of today, we officially open source our iPhone/iPad app “<a href="http://liip.to/radios">Radios</a>”. We started Radios as a Liip Hackday project soon after the iPad was announced, and it was clear from the beginning that we would make the code public some day. We used a lot of Open Source code to build it, after all. We publish it under the same license as the underlying software we used. It's the very liberal <a href="https://github.com/liip/Radios/blob/master/LICENSE">MIT license</a> and the whole code is available at <a href="http://github.com/liip/Radios">our github repository</a>. So feel free to do some great new stuff with the code.</p>
<p>The app is built on <a href="http://www.phonegap.com">PhoneGap</a>, meaning that the whole UI and most of the logic needed is written in html, JavaScript and CSS. It should be therefore very easy to get into it and enhance it. The more “core-ish” stuff is written in Objective-C and is described in <a href="http://blog.liip.ch/archive/2010/06/09/the-technical-details-behind-the-radios-app.html">another blog post</a>.</p>
<p>To get started, all you need is the <a href="http://developer.apple.com/iphone">iPhone SDK</a>, an installation of PhoneGap and, of course, our code. Open “radioapp.xcodeproj” in Xcode, Build&amp;Run it in the simulator and you should be ready. If you later want to test the app on your actual iPhone, you also need a paid iPhone Dev Account (those $99/year).</p>
<p>For some special features like the “3G/Edge detection” I had to patch PhoneGap; that fork is available at <a href="http://github.com/chregu/phonegap-iphone">github.com/chregu/phonegap-iphone</a>. But for just enhancing “Radios” with some new features, you don't need that fork; the standard PhoneGap should be fine.</p>
<p>I'm sure you now want to go ahead, fork Radios and start developing for it. Let us know via comments or <a href="mailto:radios@liip.ch">email</a> if you did something cool, and we will try to integrate it into the official version. And if you don't know what you could add to it, here's a little wishlist:</p>
<ul>
<li>The possibility to add your own radio stations</li>
<li>Use touch instead of click events</li>
<li>A sleep function (turn off after one hour)</li>
</ul>]]></description>
          </item>
        <item>
      <title>Radios for iPhone/iPad with background audio on iOS 4 released</title>
      <link>https://www.liip.ch/fr/blog/radios-for-iphoneipad-with-background-audio-on-ios-4-released</link>
      <guid>https://www.liip.ch/fr/blog/radios-for-iphoneipad-with-background-audio-on-ios-4-released</guid>
      <pubDate>Tue, 29 Jun 2010 00:00:00 +0200</pubDate>
<description><![CDATA[<p>Apple finally approved the next iteration of our <a href="http://liip.to/radios">Radios</a> app. It now runs not only on the iPad, but also on the iPhone. And if you already have iOS 4, you can listen to the radio while using other apps.</p>
<p>Unfortunately, something went wrong with the first iOS 4 submission. We couldn't really figure out whether it was our fault or “theirs”. If it had worked out the first time, you could have enjoyed background music from the day iOS 4 was released. As it is, you just had to wait another week, and I'm very happy it has now ended up in the AppStore.</p>
<p>And as promised we will open source the whole application. This week. Stay tuned :) If you want to know some technical details already, <a href="http://blog.liip.ch/archive/2010/06/09/the-technical-details-behind-the-radios-app.html">read this blog post</a>.</p>]]></description>
          </item>
        <item>
      <title>The technical details behind the Radios App</title>
      <link>https://www.liip.ch/fr/blog/the-technical-details-behind-the-radios-app</link>
      <guid>https://www.liip.ch/fr/blog/the-technical-details-behind-the-radios-app</guid>
      <pubDate>Wed, 09 Jun 2010 00:00:00 +0200</pubDate>
<description><![CDATA[<p>Almost two weeks ago, we released our iPad app <a href="http://liip.to/radios">Radios</a>. And today the first update, with more features, more radio stations and a multilingual interface, was approved and deployed by Apple.</p>
<p>Radios plays MP3 streams of some Swiss radio stations on your iPad, so nothing special. But additionally, it also displays which band and song is currently playing, including artist information and pictures from last.fm. All in all, it is an app designed not for the time you spend with the iPad on your couch, but for the many hours the device sits on the sideboard next to your couch. And it's really fun to see the pictures of the bands currently playing on the radio, especially if the pictures are a little bit older :)</p>
<p>The project started with the idea of trying out this new tablet format, and with a Liip hackday. I did some proof-of-concept work on the MP3 streaming part, and then four Liipers sat together for a whole day (a so-called hackday, an institutionalized format at Liip where you get time to “hack” on whatever you want) and built that thing. Or most of it, at least :)</p>
<p>The app uses <a href="http://phonegap.com/">PhoneGap</a> as the underlying layer. With PhoneGap one can program an iPhone app with “just” HTML, JavaScript and CSS. It provides some <a href="http://wiki.phonegap.com/JavaScript-API">additional APIs</a> you don't have in Mobile Safari for device-specific features, like vibration or the accelerometer.</p>
<p>While PhoneGap has an API for sound, it doesn't have one for live streaming or for reading metadata from those streams. We needed the first to stream the radio station feeds and the second to display the artist info. I started googling and quickly found the “ <a href="http://cocoawithlove.com/2008/09/streaming-and-playing-live-mp3-stream.html">Streaming and playing an MP3 stream</a>” article by Matt Gallagher, including Objective-C code and all. Integrating that into the app was a breeze, even with my pretty basic Objective-C skills.</p>
<p>The next task was to be able to call those newly added methods from JavaScript. Thanks to PhoneGap, that wasn't too hard to implement. Basically, you create some “special” methods in Objective-C, like <a href="http://github.com/chregu/phonegap-iphone/blob/master/PhoneGapLib/Classes/AudioStream.m#L96">the play method in AudioStream.m</a>, and then add some “special” JavaScript methods like <a href="http://github.com/chregu/phonegap-iphone/blob/master/PhoneGapLib/javascripts/plugins/AudioStream.js#L22">the play method in AudioStream.js</a>, which basically just call the native method. It's as easy as that. If you need return values from a method call, you have to work with callbacks; see e.g. <a href="http://github.com/chregu/phonegap-iphone/blob/master/PhoneGapLib/javascripts/plugins/AudioStream.js#L35">getMetaData</a>.</p>
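<p>The pattern boils down to this: the JavaScript method forwards the call over the bridge to native code, and the native side delivers any result by invoking a JavaScript callback in the WebView. Here is a minimal sketch of that callback pattern with a simulated bridge; the names are illustrative, not the actual Radios sources:</p>

```javascript
// Sketch of the JavaScript half of a PhoneGap-style plugin (illustrative
// names, not the real AudioStream.js; the actual bridge call differs
// between PhoneGap versions).
var AudioStream = {
  _metaDataCallback: null,

  // Fire-and-forget: no return value needed, so a plain bridge call suffices.
  play: function (url) {
    nativeBridge('AudioStream.play', [url]);
  },

  // Calls that need a result register a callback first. The native side
  // answers by evaluating JS in the WebView, which calls onMetaData(data).
  getMetaData: function (callback) {
    this._metaDataCallback = callback;
    nativeBridge('AudioStream.getMetaData', []);
  },
  onMetaData: function (data) {
    if (this._metaDataCallback) this._metaDataCallback(data);
  }
};

// Stand-in for the native bridge; in the real app this would forward the
// call to Objective-C. Here it simply simulates the native answer.
function nativeBridge(action, args) {
  if (action === 'AudioStream.getMetaData') {
    AudioStream.onMetaData({ StreamTitle: 'Artist - Song' });
  }
}

AudioStream.getMetaData(function (meta) {
  console.log(meta.StreamTitle); // "Artist - Song"
});
```

The callback indirection exists because the bridge is one-way: JavaScript can ask, but the native code can only answer by calling back into the page later.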
<p>We also added a very simple <a href="http://github.com/chregu/phonegap-iphone/blob/master/PhoneGapLib/Classes/Lang.m">Lang class</a> to detect the user language, so the info can be displayed in the right language.</p>
<p>The one problem with the above-mentioned MP3 streaming class was that it didn't give us the info about which artist is currently playing. I first tried to figure out by myself how that works. Most radio stations use <a href="http://www.smackfu.com/stuff/programming/shoutcast.html">the Shoutcast Metadata Protocol</a>. Fortunately I didn't have to implement it myself: I found the <a href="http://code.google.com/p/audiostreamer-meta/">audiostreamer-meta</a> class by Mike Jablonski, which is based on the work above. I put a wrapper around it, and my work on the native Objective-C side was almost done.</p>
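<p>For the curious: the Shoutcast protocol interleaves metadata into the audio stream itself. The server's icy-metaint response header says how many audio bytes come between metadata blocks; each block starts with one length byte (multiply by 16 to get the block size) followed by text like StreamTitle='Artist - Song'; padded with NUL bytes. A small sketch of parsing such a block (illustrative JavaScript, not the Objective-C the app actually uses):</p>

```javascript
// Sketch of parsing one Shoutcast (ICY) metadata block. After every
// `icy-metaint` audio bytes the server inserts: one length byte L, then
// L * 16 bytes of text such as "StreamTitle='...';" padded with NULs.
function parseIcyMetadata(buffer, offset) {
  var length = buffer[offset] * 16;      // length byte * 16 = block size in bytes
  if (length === 0) return null;         // empty block: metadata unchanged
  var text = '';
  for (var i = offset + 1; i < offset + 1 + length; i++) {
    if (buffer[i] === 0) break;          // stop at NUL padding
    text += String.fromCharCode(buffer[i]);
  }
  var match = /StreamTitle='([^']*)';/.exec(text);
  return match ? match[1] : null;
}

// Build a fake metadata block: length byte, then the NUL-padded text.
var payload = "StreamTitle='Some Artist - Some Song';";
var block = [Math.ceil(payload.length / 16)];
for (var i = 0; i < block[0] * 16; i++) {
  block.push(i < payload.length ? payload.charCodeAt(i) : 0);
}
console.log(parseIcyMetadata(block, 0)); // "Some Artist - Some Song"
```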
<p>The whole code/fork of phonegap-iphone with the mentioned plugins can be found on GitHub at <a href="http://github.com/chregu/phonegap-iphone">github.com/chregu/phonegap-iphone</a>; the audiostream plugin code is at <a href="http://github.com/liip/phonegap-plugins-audiostream">github.com/liip/phonegap-plugins-audiostream</a>.</p>
<p>With that underlying work done, we were able to start on the HTML/JS/CSS part of Radios. It was almost standard procedure, but since we were trying out some of those new CSS3 and HTML5 features, it took some time to figure certain things out. The scrolling wasn't an easy task either. But in the end, we have a standards-based setup which could easily be ported to Android (if the streaming/metadata part were ported to Android/Java, but that's doable), or to a “browser-only” version. Fabian actually built a version based on this, which worked perfectly fine in a standard browser; it just does the metadata handling on the server side.</p>
<p>We really think HTML/JS/CSS-based (aka <a href="http://liip.to/niwea">NIWEA</a>) development of (not only) mobile apps is the future, and where the future is too far away today, we try to mix approaches and get the best of both worlds, as with this Radios app. We're actually doing a rather big customer project on PhoneGap right now, which will go online pretty soon and which allows us to deliver to iPhone and Android with almost the same code base.</p>
<p>More about that “Apps with Web Technologies” topic was published yesterday by Hannes in the blog post “ <a href="http://liip.to/niwea">What's NIWEA?</a>“.</p>
<p>We will also Open Source the app very soon, we just have to clean up some things. Watch this space.</p>
<p>And last but not least, many thanks to Fabian, Peter and Roland for their development work. And to Hannes, Fabienne, Memi and Tobias for the input during the brainstorming phase. It was a fun project and I'm sure it will live on and get some nice new features (like adding any station :)) in the near future. Thanks to the Liip Innovation and Training Program there's certainly time and budget for that.</p>]]></description>
          </item>
        <item>
      <title>GottaGo &#8211; Episode II</title>
      <link>https://www.liip.ch/fr/blog/gottago-episode-ii</link>
      <guid>https://www.liip.ch/fr/blog/gottago-episode-ii</guid>
      <pubDate>Wed, 17 Sep 2008 00:00:00 +0200</pubDate>
      <description><![CDATA[<p>Hi there, it's Marc again, talking about recent development in the GottaGo camp.</p>
<p>As I promised in <a href="http://issuu.com/blickamabend/docs/22082008_zh/23">Blick am Abend</a> and in a recent <a href="http://codesofa.com/blog/archive/2008/08/11/whatever-will-be-will-be.html">blog post</a>, we are ready to roll out the new version of “GottaGo” in September.</p>
<p>We are facing minor naming issues, since another app with the same name is on the App Store, but we got there first, so it shouldn't be a big issue.</p>
<p>The testing versions are already out there and we have gotten amazing feedback so far. That's why we chose to submit GottaGo v0.1.0 to the App Store on September 16th. We hope that Apple will release it sooner rather than later, but we can't really tell. Usually it takes between one and two weeks.</p>
<p>So, what is new in this already useful app? Everything you wanted it to have – and more.</p>
<p>We waited quite long to release this version, against the software engineering maxim “release early, release often”. This is because we wanted to provide you with a complete tool where none of your wishes were prioritized over the others; they were all very important to us.</p>
<p>You can now either watch <a href="http://couch.codesofa.com/static/ggo_010_streaming.mp4">the new video</a> (h.264, 50mb) or read on and see some screenshots. I recommend both.</p>
<p>Before we take a deeper look at the features, here are some screenshots to give you a better picture.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/154f2b483df9c29f048df8658ec79b5bc91a816d/ggo-010-1.jpg" alt=""></figure>
<figure><img src="https://liip.rokka.io/www_inarticle/f40658eb0bdee97dc0802f42f27aa00307b341cd/ggo-010-3.jpg" alt=""></figure>
<figure><img src="https://liip.rokka.io/www_inarticle/63fab7fb89203c791320ebd01bba5896bd87901d/ggo-010-5.jpg" alt=""></figure>
<p>To the features:</p>
<ul>
<li>Language support: We now support English, German, French and Italian. We'd actually support Rumantsch too, but there is no such language setting on the iPhone.</li>
<li>Search as you type: A list of stations and contacts that match your input is displayed as you type. Also known as “live search”.</li>
<li>Stations and addresses are validated, just in case you misspelled them.</li>
<li>All-new locator: This should dramatically improve your experience with GottaGo. You can set the accuracy that fits your needs and your city/town.</li>
<li>Use an address for nearby search: What was previously only possible with “Current Location” is now possible with addresses as well. Just type an address and it will automatically find the nearest stations around it.</li>
<li>Set your travel date: You can now look up later trips, and your return trips as well.</li>
<li>Switch “from” and “to”: With a click on the “From” and “To” texts, you switch the values – just in case you want to get home again.</li>
<li>Transparent offline mode / Recent trips: Your old searches are stored on your iPhone for later reuse. Even when you're in a tunnel or the like, you can take a look at your recent trips, without internet connection.</li>
<li>Open GottaGo where you left it: Whenever you decide to close GottaGo or open Maps out of it, it will open at the same position you left it.</li>
<li>Loads of user interface changes: You will see that the new features come with a much better UI, which will help you find your way through all of them.</li>
</ul>
<p>We hope it's not too much change, but it all makes a lot of sense to us – and if there is something else you want to see in GottaGo v0.2.0, don't hesitate to contact us!</p>
<p>Finally, it's a pleasure to thank the other main contributors to GottaGo – and there were also a lot of small contributions which were just as needed; thanks a lot to you, too!</p>
<p>The main contributors were our partner designer from sichr.com and the guys over at <a href="http://local.ch">local.ch</a>, especially Vasile, who did a really great job with their API.</p>
<p>Thanks a lot to you guys!</p>
<p>If you want bleeding-edge information and updates, visit <a href="http://codesofa.com">my blog on codesofa.com</a>, where you can also sign up for the newest of the new GottaGo test versions.</p>
<p>Have a safe trip.</p>]]></description>
          </item>
        <item>
      <title>GottaGo in &#8220;Blick am Abend&#8221;</title>
      <link>https://www.liip.ch/fr/blog/gottago-in-blick-am-abend</link>
      <guid>https://www.liip.ch/fr/blog/gottago-in-blick-am-abend</guid>
      <pubDate>Mon, 25 Aug 2008 00:00:00 +0200</pubDate>
<description><![CDATA[<p>In Friday's “ <a href="http://www.blick.ch/blickamabend">Blick am Abend</a>“, Marc was interviewed by <a href="http://www.benkoe.ch/">Thomas Benkö</a> about GottaGo, the innovative public transport iPhone app. Quite an interesting read, which led to the title of “ <a href="http://blog.autolos.com/stories/30344/">Hero of the Week</a>” for Marc by <a href="http://blog.autolos.com">blog.autolos.com</a>.</p>
<p>In the same issue, on the same page, the new service <a href="http://news.local.ch/">news.local.ch</a> by our colleagues from local.ch was also mentioned.</p>
<p>Link to the <a href="http://php.blick.ch/ha/download.php?ausgabe=22082008_ZH.pdf">PDF of the whole issue</a> (ca. 15MB)</p>]]></description>
          </item>
        <item>
      <title>GottaGo is Number 2^H1 in the Swiss iTunes Store</title>
      <link>https://www.liip.ch/fr/blog/gottago-is-number-2h1-in-the-swiss-itunes-store</link>
      <guid>https://www.liip.ch/fr/blog/gottago-is-number-2h1-in-the-swiss-itunes-store</guid>
      <pubDate>Thu, 07 Aug 2008 00:00:00 +0200</pubDate>
      <description><![CDATA[<p><a href="http://blog.liip.ch/archive/2008/06/09/gottago-location-based-iphone-bring-me-home-tool.html">GottaGo</a>, the free “iPhone bring me home with public transport” application for Switzerland was officially released this Wednesday on the iTunes App Store. Get it at <a href="http://liip.to/gottaGo">http://liip.to/gottaGo</a> while it's hot.</p>
<p>Today it's already number 2 among the most downloaded applications in the Swiss store. Plus raving reviews and some <a href="http://search.twitter.com/search?q=gottago">Twitter buzz</a>. Here's the proof:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/e89ba1bcd61432120e598147053f3aa21be136c9/gottago2.jpg" alt="Gottago2"></figure>
<p>GottaGo was written by our employee Marc Ammann. There's much more information about it on his <a href="http://www.codesofa.com/blog/">Codesofa</a> blog. And if you find bugs or have any suggestions, you can use the <a href="https://jira.liip.ch/browse/GGO">GGO project in our issue tracker</a>.</p>
<p>To get all the necessary information, GottaGo uses a self-written mashup API combining SBB and Google Maps data; nothing like that was available before. It's built with <a href="http://okapi.liip.ch/">Okapi</a> and is explained in <a href="http://www.codesofa.com/blog/archive/2008/08/03/a-swiss-public-transit-api-sbb-and-google.html">more detail on Marc's blog</a>.</p>
<p>And we're almost sure, that this wasn't our last iPhone application :)</p>
<p>Update 8.8.08: It's number one now:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/c89213a81c3ac752ecb4269ff06ef5405318cf2a/gg2.jpg" alt="Gg2"></figure>]]></description>
          </item>
        <item>
      <title>GottaGo &#8211; iPhone bring me home</title>
      <link>https://www.liip.ch/fr/blog/gottago-iphone-bring-me-home</link>
      <guid>https://www.liip.ch/fr/blog/gottago-iphone-bring-me-home</guid>
      <pubDate>Mon, 09 Jun 2008 00:00:00 +0200</pubDate>
<description><![CDATA[<p>With the <a href="http://friendfeed.com/rooms/venturebeat-wwdc-livestream">upcoming release</a> of the iPhone in Switzerland and the opening of Apple's AppStore, I was certainly pretty excited to get my hands on the iPhone SDK. A few weeks ago, I was granted access to the <a href="http://developer.apple.com/iphone/program/details.html">iPhone SDK</a> beta, which meant a lot more helpful resources and a first try of the new iPhone OS. It felt pretty easy to handle right from the first touch; those “Framework Evangelists” did a pretty decent job. I did miss a couple of things that were available in the unofficial “SDK”, but now the tools of the trade were finally at hand.</p>
<p>Getting into it meant a lot more reading than actual writing, which is not usually the way I learn programming. When I finally caught some time to get started with the iPhone SDK, and after the few hours needed to find my way around, things got up to speed. Once you know how and where to look for help, these things become like your mother tongue ;)</p>
<p>I'll blog about my experiences with the iPhone SDK later on and I might be able to put a few tips and tricks online.</p>
<h2>Now to the real news..</h2>
<p>I'm “proud” to present one of the first Swiss-made native iPhone applications, called “ <strong>GottaGo</strong> ” (well, I'm not aware of any others).</p>
<p>Imagine: how many times were you out in the city and didn't have a clue where your closest, fastest public transport connections were? This happened to me every other day or so. <strong>Enter GottaGo, telling me how to get home, to the office (or just about anywhere) from where I am, using the nearest and best public transport option.</strong> </p>
<p>So GottaGo is a little location-based application to help Switzerland's public transport fans (everybody here is one, right?) find their way around the country. <strong> GottaGo locates you, finds the next stations around you, and tells you for every station the next bus/train/tram connection and when you'll be back home.</strong> It can also look up just a single connection, like you'd do on sbb.ch. I'm sure you get the idea about the little lifesaver. If not, it will probably be way clearer if you take a look at the screenshots or the video:</p>
<table border="0" cellspacing="2" align=""><tbody><tr><td><figure><a href="https://www.liip.ch/content/4-blog/20080609-gottago-iphone-bring-me-home/gottago_0.png"><img src="https://liip.rokka.io/www_inarticle/90b3bb0dd9a8b6f46a31bf19f37a160b3682f3e4/gottago-0.jpg" alt="gottago_0"></a></figure></td>
<td><figure><a href="https://www.liip.ch/content/4-blog/20080609-gottago-iphone-bring-me-home/gottago_1.png"><img src="https://liip.rokka.io/www_inarticle/5cbf4909bd93f38c751adf5732bbab3eb1216f3b/gottago-1.jpg" alt="gottago_1"></a></figure></td>
</tr><tr><td><figure><a href="https://www.liip.ch/content/4-blog/20080609-gottago-iphone-bring-me-home/gottago_2.png"><img src="https://liip.rokka.io/www_inarticle/e8e4d8675cb48d7f1b31a832676d7a30f44eca63/gottago-2.jpg" alt="gottago_2"></a></figure></td>
<td><figure><a href="https://www.liip.ch/content/4-blog/20080609-gottago-iphone-bring-me-home/gottago_3.png"><img src="https://liip.rokka.io/www_inarticle/1991dadf3b22b1a2bab93fd705ba851b90472a36/gottago-3.jpg" alt="gottago_3"></a></figure></td>
</tr></tbody></table>
<p>And the highlight: <strong> <a href="https://www.liip.ch/blog/gottago-iphone-bring-me-home/gottago.mov">the video of GottaGo in action</a></strong> (960×540, H.264).</p>
<p>GottaGo is built on the official SDK and no secret backdoors were used, so it is going to be released through the <strong>AppStore</strong> soon. We hope to release the app for free; we'll see what the conditions will actually be when the AppStore launches. I hope you like the app and will consider using it in the future. If applicable, please do consider switching from your car to public transport. And yes, there will surely be a lot more things to come for the iPhone from Liip/me – so stay tuned for more :)</p>]]></description>
          </item>
    
  </channel>
</rss>
