<?xml version="1.0" encoding="utf-8"?>
<!-- generator="Kirby" -->
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">

  <channel>
    <title>Mot-cl&#233;: php &#183; Blog &#183; Liip</title>
    <link>https://www.liip.ch/fr/blog/tags/php</link>
    <generator>Kirby</generator>
    <lastBuildDate>Mon, 11 Dec 2017 00:00:00 +0100</lastBuildDate>
    <atom:link href="https://www.liip.ch" rel="self" type="application/rss+xml" />

        <description>Articles du blog Liip avec le mot-cl&#233; &#8220;php&#8221;</description>
    
        <language>fr</language>
    
        <item>
      <title>Speeding up Composer based deployments on AWS Elastic Beanstalk</title>
      <link>https://www.liip.ch/fr/blog/speeding-up-composer-based-deployments-on-aws-elastic-beanstalk</link>
      <guid>https://www.liip.ch/fr/blog/speeding-up-composer-based-deployments-on-aws-elastic-beanstalk</guid>
      <pubDate>Mon, 11 Dec 2017 00:00:00 +0100</pubDate>
      <description><![CDATA[<p>Some of our applications are deployed to <a href="https://aws.amazon.com/de/elasticbeanstalk/">Amazaon Elastic Beanstalk</a>. They are based on PHP, Symfony and of course use <a href="https://getcomposer.org">composer</a> for downloading their dependencies. This can take a while, approx. 2 minutes on our application when starting on a fresh instance. This can be annyoingly long, especially when you're upscaling for more instances due to for example a traffic spike.</p>
<p>You could include the vendor directory when you run <code>eb deploy</code>, but then Beanstalk doesn't run <code>composer install</code> at all anymore, so you have to make sure the local vendor directory has the right dependencies. There are other caveats with that approach, so it was not a real solution for us.</p>
<p>The Composer cache to the rescue. Sharing the Composer cache between instances (with a simple upload to and download from an S3 bucket) brought the deployment time for <code>composer install</code> down from about 2 minutes to 10 seconds.</p>
<p>For that to work, we have the following in a file called <code>.ebextensions/composer.config</code>:</p>
<pre><code>commands:
  01updateComposer:
    command: export COMPOSER_HOME=/root &amp;&amp; /usr/bin/composer.phar self-update
  02extractComposerCache:
    command: ". /opt/elasticbeanstalk/support/envvars &amp;&amp; rm -rf /root/cache &amp;&amp; aws s3 cp s3://your-bucket/composer-cache.tgz /tmp/composer-cache.tgz &amp;&amp; tar -C / -xf /tmp/composer-cache.tgz &amp;&amp; rm -f /tmp/composer-cache.tgz"
    ignoreErrors: true

container_commands:
  upload_composer_cache:
    command: ". /opt/elasticbeanstalk/support/envvars &amp;&amp; tar -C / -czf composer-cache.tgz /root/cache &amp;&amp; aws s3 cp composer-cache.tgz s3://your-bucket/ &amp;&amp; rm -f composer-cache.tgz"
    leader_only: true
    ignoreErrors: true

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: COMPOSER_HOME
    value: /root</code></pre>
<p>It downloads composer-cache.tgz on every instance before running <code>composer install</code> and extracts it to <code>/root/cache</code>. After a new deployment has gone through, it creates a new tar file from that directory on the &quot;deployment leader&quot; only and uploads it to S3 again, ready for the next deployment or new instances.</p>
<p>One caveat we haven't solved yet: that .tgz file will grow over time (since old dependencies stay in it as well). Some process should clear it from time to time, or you can simply delete it on S3 when it gets too big. The <code>ignoreErrors</code> options above make sure that the deployment doesn't fail when the tgz file doesn't exist or is corrupted.</p>
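<p>Such a cleanup could be automated, for instance with a rough sketch like the following using the AWS SDK for PHP. The bucket name, region and the 500 MB limit are just examples, not what we run in production:</p>
<pre><code>use Aws\S3\S3Client;

require 'vendor/autoload.php';

$s3 = new S3Client(['version' =&gt; 'latest', 'region' =&gt; 'eu-west-1']);

// Example limit: drop the cache archive once it exceeds 500 MB
$limit = 500 * 1024 * 1024;

$head = $s3-&gt;headObject(['Bucket' =&gt; 'your-bucket', 'Key' =&gt; 'composer-cache.tgz']);
if ($head['ContentLength'] &gt; $limit) {
    // The next deployment will then rebuild a fresh, smaller cache archive
    $s3-&gt;deleteObject(['Bucket' =&gt; 'your-bucket', 'Key' =&gt; 'composer-cache.tgz']);
}</code></pre>
<p>Run from cron or as part of the deployment script, this keeps the archive from growing without bound while still tolerating a missing file on the next download.</p>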
                  <enclosure url="http://liip.rokka.io/www_card_2/847da8/nz-2197-edit.jpg" length="6971833" type="image/jpeg" />
          </item>
        <item>
      <title>libvips adapter for PHP Imagine</title>
      <link>https://www.liip.ch/fr/blog/libvips-adapter-for-php-imagine</link>
      <guid>https://www.liip.ch/fr/blog/libvips-adapter-for-php-imagine</guid>
      <pubDate>Sun, 19 Nov 2017 00:00:00 +0100</pubDate>
      <description><![CDATA[<p>The <a href="https://jcupitt.github.io/libvips/">VIPS image processing system</a> is a very fast, multi-threaded image processing library with low memory needs. And it really is pretty fast, the perfect thing for <a href="https://rokka.io">rokka</a> and we'll be transitioning to using it soon.</p>
<p>Fortunately, there's a <a href="https://github.com/jcupitt/php-vips-ext">PHP extension</a> for VIPS and a <a href="https://github.com/jcupitt/php-vips">set of classes</a> for easier access to the VIPS methods. So I started to write a VIPS adapter for <a href="https://imagine.readthedocs.io/en/latest/">Imagine</a> and came quite far in the last few days. Big thanks to the maintainer of VIPS <a href="https://github.com/jcupitt/">John Cupitt</a>, who helped me with some obstacles I encountered and even fixed some issues I found in a very short time.</p>
<p>So, without much further ado I present <a href="https://github.com/rokka-io/imagine-vips">imagine-vips</a>, a VIPS adapter for Imagine. I won't bore you with how to install and use it, it's all described on the GitHub repo.</p>
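<p>Still, to give you a flavour, here is a minimal sketch following the usual Imagine workflow (the adapter class name follows the Imagine namespace convention; see the repo for the authoritative usage):</p>
<pre><code>$imagine = new Imagine\Vips\Imagine();

$imagine-&gt;open('photo.jpg')
    -&gt;thumbnail(new Imagine\Image\Box(200, 200))
    -&gt;save('thumb.jpg');</code></pre>
<p>Since it implements the common Imagine interfaces, swapping it in for the GD or Imagick adapters should mostly be a one-line change.</p>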
<p>There is still some functionality missing (see the <a href="https://github.com/rokka-io/imagine-vips/blob/master/README.md">README</a> for details), but the most important operations (at least for us) are implemented. One thing that will be hard to implement correctly is layers. Currently the library just loads the first image of, for example, an animated gif. I'm not sure we will ever add that functionality, since libvips can't write those gifs anyway. But with some fallback to Imagick or GD, it would nevertheless be possible.</p>
<p>The other thing not really well tested yet (but we're on it) is images with ICC colour profiles. Proper support is coming.</p>
<p>As VIPS is not installed on many servers, I don't expect huge demand for this package, but it may be of use to someone, so we open sourced it with joy. Did I mention that it's really fast? And maybe someone finds some well-hidden bugs or extends it to make it even more useful. Patches and reports are of course always welcome.</p>
                  <enclosure url="http://liip.rokka.io/www_card_2/b7bab3/nema-5-1-devices2.jpg" length="2679100" type="image/jpeg" />
          </item>
        <item>
      <title>Magento 2 Config Search</title>
      <link>https://www.liip.ch/fr/blog/magento-2-config-search</link>
      <guid>https://www.liip.ch/fr/blog/magento-2-config-search</guid>
      <pubDate>Fri, 29 Sep 2017 00:00:00 +0200</pubDate>
      <description><![CDATA[<p>Do you remember, I recently wrote about implementation of a small but handy <a href="https://blog.liip.ch/archive/2015/05/21/8062.html">extension</a> for config search in Magento1? I have become so used to it, that I had to do the same for Magento 2. And since I heard many rumors about improved contribution process to M2, I also decided to make it as a contribution and get my hands “dirty”.</p>
<p>Since the architecture of the framework has changed drastically, I expected many troubles. But in fact, it was even a little easier than for M1. From the development point of view it was definitely more pleasant to work with the code, but I also wanted to test the complete path to a fully merged pull request.</p>
<h3>Step #0 (Local dev setup)</h3>
<p>For the local setup I decided to use the Magento 2 <a href="http://devdocs.magento.com/guides/v2.1/install-gde/docker/docker-over.html">docker devbox</a>, and since it was still in beta I ran the first command without any hope of smooth execution. But surprisingly, I had no issues with the whole setup. After a few commands in the terminal and a cup of coffee, Magento 2 was successfully installed and ready to use. A totally positive experience.</p>
<h3>Step #1 (Configuration)</h3>
<p>All I had to do was declare my search model in di.xml. Not too hard, right? :)</p>
<figure><img src="https://liip.rokka.io/www_inarticle/9035905badda7e6a90560726a044b8db3b4a572d/screenshot-1.jpg" alt="app/code/Magento/Backend/etc/adminhtml/di.xml"></figure>
<p>app/code/Magento/Backend/etc/adminhtml/di.xml</p>
<h3>Step #2 (Implementation)</h3>
<p>The implementation of the search itself was trivial: we just look for matches for a given keyword in the ConfigStructure object using <a href="http://php.net/manual/en/function.mb-stripos.php">mb_stripos()</a>.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/0f5b0af4ee70e489e3b6e9c79ff1bdd84d489a84/screenshot-2.jpg" alt="app/code/Magento/Backend/Model/Search/Config.php"></figure>
<p>app/code/Magento/Backend/Model/Search/Config.php</p>
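<p>Stripped of the Magento specifics, the matching idea boils down to something like this simplified sketch (a hypothetical helper, not the actual Magento\Backend\Model\Search\Config code):</p>
<pre><code>// Case-insensitive, multibyte-safe matching of a keyword
// against a flat list of configuration labels.
function searchConfigLabels(array $labels, string $query): array
{
    $matches = [];
    foreach ($labels as $label) {
        if (mb_stripos($label, $query) !== false) {
            $matches[] = $label;
        }
    }
    return $matches;
}</code></pre>
<p>For example, searching the labels <code>['Currency Setup', 'General']</code> for <code>'currency'</code> returns <code>['Currency Setup']</code>; the real model additionally resolves each matched label to its config-section URL.</p>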
<h3>Step #3 (View)</h3>
<p>As for M1, the result of the search is a list of URLs to the matched configuration labels. When the user clicks the selected URL, they are redirected to the config page and the searched field is highlighted.</p>
<p>That would be it regarding the implementation :)</p>
<h3>Step #4 (Afterparty)</h3>
<p>Too simple to believe? You are right. I thought that this was enough for submitting the <a href="https://github.com/magento/magento2/pull/10335">PR</a>. But I completely forgot about tests :) This is one of the main requirements for a pull request to be accepted by the Magento team.</p>
<p>Since all the implemented code was well isolated (it had no strict dependencies), it was pretty easy to write tests. I covered most of the code with unit tests, and for the main search method I wrote an integration test.</p>
<h3>Conclusion</h3>
<p>I would like to point out that during the whole cycle of the pull request, I had fast and high-quality support from the Magento team. They were giving useful recommendations and consulted me sometimes even during their vacations. This is what I call outstanding interaction with the community!</p>
<p>My special thanks to <a href="https://github.com/vrann">Eugene Tulika</a> and <a href="https://github.com/maghamed">Igor Miniailo</a>, and of course <a href="https://github.com/vil11">Dmitrii Vilchinskii</a> for the idea behind this handy feature.</p>
          </item>
        <item>
      <title>Advanced Drupal 8 Configuration Management (CMI) Workflows</title>
      <link>https://www.liip.ch/fr/blog/advanced-drupal-8-cmi-workflows</link>
      <guid>https://www.liip.ch/fr/blog/advanced-drupal-8-cmi-workflows</guid>
      <pubDate>Fri, 07 Apr 2017 00:00:00 +0200</pubDate>
      <description><![CDATA[<p>After implementing some larger enterprise Drupal 8 websites, I would like to share some insights, how to solve common issues in the deployment workflow with Drupal 8 CMI.</p>
<h2>Introduction to Drupal CMI</h2>
<p>First of all, you need to understand how configuration management in Drupal 8 works. CMI allows you to export all configurations and their dependencies from the database into yml text files. To make sure you never end up in an inconsistent state, CMI always exports everything. By default, you cannot exclude certain configurations.</p>
<h3>Example:</h3>
<p>If you change some configuration on the live database, these changes will be reverted at the next deployment when you run</p>
<pre><code>drush config-import</code></pre>
<p>This is helpful and makes sure you have the same configuration on all your systems.</p>
<h2>How can I have different configurations on local / stage / live environments?</h2>
<p>Sometimes you want different configurations per environment. For example, we have the “devel” module installed on our local environment but want it disabled on the live environment.</p>
<p>This can be achieved by using <a href="https://www.drupal.org/project/config_split">the configuration split module</a>.</p>
<h3>What does Configuration Split do?</h3>
<p>This module slightly modifies CMI by implementing a <a href="https://www.drupal.org/project/config_filter">Config Filter</a>. Importing and exporting works the same way as before, except that some configuration is read from and written to different directories. Importing configuration still removes configuration not present in the files; thus, the robustness and predictability of the configuration management remain. And the best thing is: you can still use the same drush commands if you have at least <strong>Drush 8.1.10</strong> installed.</p>
<h3>Configuration Split Example / Installation Guide</h3>
<p>Install config_split using Composer. You need at least “<a href="https://www.drupal.org/project/config_split/releases/8.x-1.0-beta4">8.x-1.0-beta4</a>” and Drush &gt;= 8.1.10 for this guide.</p>
<pre><code>composer require drupal/config_split "^1.0"</code></pre>
<p>Enable config_split and navigate to “admin/config/development/configuration/config-split”</p>
<pre><code>drush en config_split -y</code></pre>
<p>Optional: installing the <a href="https://www.drupal.org/project/chosen">chosen module</a> will make the selection of blacklists / greylists much easier. You can enable chosen only on admin pages.</p>
<pre><code>composer require drupal/chosen "^1.0"</code></pre>
<p>I recommend creating an “environments” subfolder in your config folder. Inside this folder you will have a separate directory for every environment:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/9c0892d2d0f29225ff661d1fd126efe86eb98821/folder.jpg" alt="Drupal 8 Configuration Management Folders"></figure>
<p>Now you can configure your environments:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/b227da8edbc1f2a2285e57135989adc7f2ade4c0/sonfig-split-overview.jpg" alt="Config Split in Drupal 8 Configuration Management"></figure>
<p>The most important thing is that you <strong>set every environment to “Inactive”</strong>. We will activate them later according to the environment via settings.php.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/ba0deaae35cf9943916748361f205d59ab66bf53/inactive.jpg" alt="Config Split settings with the Drupal 8 Configuration Management"></figure>
<p>Here is my example where I enable the devel module on local:</p>
<figure><img src="https://liip.rokka.io/www_inarticle/ca408abae031f7fb47eeae005387914febb71661/devel.jpg" alt="Dev Environment Example"></figure>
<h4>Activate the environments via settings.php</h4>
<p>This is the most important part of the whole setup. Normally, we never commit the settings.php into git. But we have a variables-[environment].php in git for every environment:</p>
<pre><code>settings.php (not in git)

variables-dev.php (in git and included in the settings.php of dev)
variables-live.php (in git and included in the settings.php of live)
settings.local.php (in git and included locally)</code></pre>
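<p>The include logic in the uncommitted settings.php can be as simple as this sketch (adapt the file name and path to your project layout):</p>
<pre><code>// settings.php: pull in the per-environment overrides committed to git.
$variables_file = __DIR__ . '/variables-live.php';
if (file_exists($variables_file)) {
  include $variables_file;
}</code></pre>
<p>Each environment's settings.php includes only its own variables file, so the git-tracked files stay free of secrets and host-specific paths.</p>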
<p>You need to add the following line to variables-[environment].php. Please change the <strong>variable name</strong> according to your environment's <strong>machine name</strong>:</p>
<pre><code>// This enables the config_split module
$config['config_split.config_split.dev']['status'] = TRUE;</code></pre>
<p>If you have done everything correctly and cleared the cache, you will see <strong>“active (overridden)”</strong> in the config_split overview next to the current environment.</p>
<p>Now you can continue using</p>
<pre><code>drush config-import -y
drush config-export -y</code></pre>
<p>and config_split will do the magic.</p>
<h2>How can I exclude certain config files and prevent them from being overridden / deleted on my live environment?</h2>
<p>The most prominent candidates for this workflow are <strong>webforms</strong> and <strong>contact forms</strong>. In Drupal 7, webforms were nodes and you were able to give your CMS administrators the opportunity to create their own forms.</p>
<p>In Drupal 8, webforms are <strong>config entities</strong>, which means that they will be deleted during deployment if their yml files are not in git.</p>
<p>After testing a lot of different modules / drush scripts, I finally came up with an easy-to-use workflow that solves this issue and gives CMS administrators the possibility to create webforms without git knowledge:</p>
<h3>Set up an “Excluded” environment</h3>
<p>First of all, we need an “excluded” environment. I created a subfolder in my config folder and added a .htaccess file to protect its content. You can copy the .htaccess from an existing environment if you are lazy. Don't forget to deploy this folder to your live system before you do the next steps.</p>
<figure><img src="https://liip.rokka.io/www_inarticle/45a33184b847dff36284b9214265f1a2f98fe018/excluded.jpg" alt="Folders"></figure>
<figure><img src="https://liip.rokka.io/www_inarticle/cf587f0a75dc0022b0ea79ebe635f6622e3de82f/excluded-list.jpg" alt="Excluded"></figure>
<p>Now you can mark some config files to be excluded / grey-listed on your live environment:</p>
<pre><code>webform.webform.*
contact.form.*</code></pre>
<figure><img src="https://liip.rokka.io/www_inarticle/1b56c2fe044d55da0205588b7a3654a822f2e373/greylisted.jpg" alt="Greylist Webform in Config Split"></figure>
<p>Set the excluded environment to <strong>“Inactive”</strong>. We will enable it later on the live / dev environment via settings.php.</p>
<h3>Enable “excluded” environment and adapt deployment workflow</h3>
<p>We enable the “excluded” environment on the live system via variables-live.php (see above):</p>
<pre><code>// This will allow module config per environment and exclude webforms from being overridden
$config['config_split.config_split.excluded']['status'] = TRUE;</code></pre>
<p>In your deployment workflow / script, you need to add the following lines before you do a drush config-import:</p>
<pre><code>#execute some drush commands
echo "-----------------------------------------------------------"
echo "Exporting excluded config"
drush @live config-split-export -y excluded

echo "-----------------------------------------------------------"
echo "Importing configuration"
drush @live config-import -y</code></pre>
<p>The drush command “<strong>drush @live config-split-export -y excluded</strong>” exports all webforms and contact forms created by your CMS administrators into the folder “excluded”. The “drush config-import” command will therefore not delete them, and your administrators can happily create their custom forms.</p>
<h3>Benefit of disabling “excluded” on the local environment</h3>
<p>We usually disable the “excluded” environment on our local environment. This allows us to create complex webforms for our clients on our local machines and deploy them as usual. In the end you can have a mix of customer-created webforms and your own webforms, which is quite helpful.</p>
<h2>Final note</h2>
<p>CMI is a great tool and I would like to thank the maintainers of the <a href="https://www.drupal.org/project/config_split">config_split module</a> for their great extension. It is a huge step toward making Drupal 8 a real enterprise CMS tool.</p>
<p>If you have any questions, don't hesitate to post a comment.</p>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/283d9942e75d52c1ca4ceb29373d3cd6cc0ba2eb/drupal-blogposts.jpg" length="57057" type="image/png" />
          </item>
        <item>
      <title>The DrupalDay 2017 in Rome</title>
      <link>https://www.liip.ch/fr/blog/drupalday-2017-rome</link>
      <guid>https://www.liip.ch/fr/blog/drupalday-2017-rome</guid>
      <pubDate>Thu, 09 Mar 2017 00:00:00 +0100</pubDate>
      <description><![CDATA[<p>This year was the 6th edition of the <a href="http://roma2017.drupalday.it/">DrupalDay Italy</a>, the main event to attend for Italian-speaking drupalists.</p>
<p>Previous editions took place in other main Italian cities like Milan, Bologna and Naples.</p>
<p>This time Rome had the privilege to host such a challenging event, ideally located in the Sapienza University Campus.</p>
<p>The non-profit event was <strong>free of charge</strong>.</p>
<h2>A 2-days event</h2>
<p>Like most development-related events these days, it spanned two days, March 3rd and 4th.</p>
<p>The first day was the conference day, with more than 20 talks split in 3 different “tracks” or, better, rooms.</p>
<p>In fact, there was no clear separation of scopes, and the same room hosted Biz and Tech talks, probably (but this is just my guess) in an attempt to mix different interests and invite people to get out of their comfort zone.</p>
<p>The second day was mainly aimed at developers of all levels with the “Drupal School” track providing courses ranging from site-building to theming.</p>
<p>The “Drupal Hackaton” track was dedicated to developers willing to contribute (in several ways) to the Drupal Core, community modules or documentation.</p>
<h2>The best and the worst</h2>
<p>As expected, I found the quality of the talks somewhat uneven.</p>
<p>Among the most interesting ones, I would definitely mention Luca Lusso's “<strong>Devel – D8 release party</strong>” and Adriano Cori's talk about the <a href="https://www.drupal.org/project/http_client_manager">HTTP Client Manager</a> module.</p>
<p>I was also positively surprised by (and enjoyed a lot) the presentation “<strong>Venice and Drupal</strong>” by Paolo Cometti and Francesco Trabacchin, where I discovered that the City of Venice has an in-house web development agency using Drupal for its main public websites and services.</p>
<p>On the other hand, I didn't like Edoardo Garcia's Keynote “Saving the world one Open Source project at a time”.</p>
<p>It seemed to me mostly an excuse to advertise his candidacy as <a href="https://assoc.drupal.org/election/18/candidates">Director of the Drupal Association</a>.</p>
<p>I had the privilege to talk about “ <a href="https://speakerdeck.com/ralf57/decoupled-frontend-with-drupal-8-e-openui-5">Decoupled frontend with Drupal 8 and OpenUI 5</a>“.</p>
<p>The audience, initially surprised by the unusual association of Drupal with SAP (the company behind OpenUI), showed real interest and curiosity.</p>
<p>After the presentation, I had the chance to go into the details and discuss my ideas with a few other people.</p>
<p>I also received some criticism, which I really appreciated and which will definitely make me improve as a presenter.</p>
<h2>Next one?</h2>
<p>In the end, I really enjoyed the conference, both the contents and the ambiance, and will definitely join next year.</p>]]></description>
          </item>
        <item>
      <title>We can all learn from the Drupal community</title>
      <link>https://www.liip.ch/fr/blog/can-learn-drupal-community</link>
      <guid>https://www.liip.ch/fr/blog/can-learn-drupal-community</guid>
      <pubDate>Wed, 01 Mar 2017 00:00:00 +0100</pubDate>
      <description><![CDATA[<p>I started hearing about Drupal 8 back in 2014, how this CMS would start using Symfony components, an idea I as a PHP and Symfony developer found very cool.</p>
<p>That is when I got involved with Drupal, not the CMS, but the community.</p>
<p>I got invited to my first DrupalCon back in 2015. That was the biggest conference I have ever been to; thousands of people were there. When I entered the conference building I saw several things; one of them was that the code of conduct was printed and very visible. I also got a t-shirt that fit me really well – a rarity at most tech conferences I go to. The gender and racial diversity also seemed fairly high, so I immediately felt comfortable and like I belonged – a super cool first impression.</p>
<p>I, like many other geeks, have social anxiety, so I was still overwhelmed by all these people, and I did not know who to talk to. Luckily Larry was there, so I had someone to hug.</p>
<p>I went to many great talks, as there were a lot of tracks – including the Symfony one, where I was speaking. A conference well worth going to for EVERYONE. This is also something that I like: they try to make every DrupalCon affordable for everyone.</p>
<p>That evening I felt a bit shy again and stood somewhere all on my own, unable to spot, out of thousands, the two people I knew. Then someone walked up to me and just started talking to me, making me feel welcome. I said I don't do Drupal at all and they said that that's nice! We talked about what I do and they were very interested.</p>
<p>This year I went to a local DrupalCamp here in Switzerland, Drupal Mountain Camp. It was an event much more focused on Drupal, as you would expect, so I did not attend as many talks as at DrupalCon, but again the inclusiveness and the atmosphere were in the air – I felt very, very welcome and safe (except maybe when sledging down a mountain…).</p>
<p>They mentioned the code of conduct at the beginning of the conference and then proceeded to organise an awesome event with winter sports around it.</p>
<p>I spoke at Drupal Mountain Camp, giving an introduction to Neo4j, a talk I have given many times with varying results. People were extremely interested in graph databases, the concepts and how they work, and asked a lot of questions. Again, when I told them I don't do Drupal, no one even tried to convince me to start; that is where our communities differ a bit.</p>
<p>I think that we can learn from Drupal: embrace our differences, and each other, and accept that we do different things and are different people, and that it doesn't matter, because that is what makes a community work; that is what makes us awesome. Diversity matters, and Drupal got this.</p>
<p>Thank you to the Drupal community for showing how to be inclusive the right way: not trying to convince someone to try or be someone they are not, but rather supporting that person and trying to learn from them. This is the best behaviour a community could ever have.</p>
<p>And hugs! So many hugs.</p>
          </item>
        <item>
      <title>Drupal SearchAPI and result grouping</title>
      <link>https://www.liip.ch/fr/blog/drupal-searchapi-result-grouping</link>
      <guid>https://www.liip.ch/fr/blog/drupal-searchapi-result-grouping</guid>
      <pubDate>Mon, 24 Oct 2016 00:00:00 +0200</pubDate>
      <description><![CDATA[<p>In this blog post I will present how, in a recent e-Commerce project built on top of Drupal7 (the former version of the Drupal CMS), we make Drupal7, SearchAPI and Commerce play together to efficiently retrieve grouped results from Solr in SearchAPI, with no indexed data duplication.</p>
<p>We used the <a href="https://www.drupal.org/project/search_api">SearchAPI</a> and the <a href="https://www.drupal.org/project/facetapi">FacetAPI</a> modules to build a search index for products; so far so good: available products and product variations can be searched and filtered, also by using a set of pre-defined facets. In a subsequent request, a new need arose from our project owner: provide a list of products where the results should include, in addition to the product details, a picture of one of the available product variations, while keeping the ability to apply facets on products for the listing. Furthermore, the product variation picture displayed in the list must also match the filter applied by the user: this with the aim of not confusing users and of providing a better user experience.</p>
<p>An example use case is simple: allow users to get the list of available products and filter them by the color/size/etc. fields of the available product variations, while displaying a picture of an available variation, not a sample picture.</p>
<p>For the sake of simplicity and consistency with <a href="https://www.drupal.org/project/commerce">Drupal Commerce</a> module terminology, I will use the term “Product” to refer to any product variation, while the term “Model” will be used to refer to a product.</p>
<h2>Solr Result Grouping</h2>
<p>We decided to use <a href="http://lucene.apache.org/solr/">Solr</a> (the well-known, fast and efficient search engine built on top of the Apache Lucene library) as the backend of the e-commerce platform: the reason lies not only in its full-text search features, but also in the possibility of building a fast retrieval system for the huge number of products we were expecting to be available online.</p>
<p>To solve the request about the display of product models, facets and available products, I intended to use the Solr feature called <a href="https://cwiki.apache.org/confluence/display/solr/Result+Grouping">Result Grouping</a>, as it seemed suitable for our case: Solr is able to return just a subset of results by grouping them by a given single-valued field (previously indexed, of course). The facets can then be configured to be computed from the grouped set of results, from the ungrouped items, or just from the first result of each group.</p>
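<p>At the raw query level, such a grouped request boils down to a handful of Solr parameters; as a sketch (the field name <code>model_id</code> is just an illustration, not our actual schema):</p>
<pre><code>q=*:*&amp;group=true&amp;group.field=model_id&amp;group.limit=1&amp;group.ngroups=true</code></pre>
<p>Here <code>group.field</code> names the single-valued field to group on, <code>group.limit</code> caps the number of documents returned per group, and <code>group.ngroups</code> adds the total number of groups to the response.</p>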
<p>This handy Solr feature can be used in combination with the SearchAPI module by installing the <a href="https://www.drupal.org/project/search_api_grouping">SearchAPI Grouping</a> module. The module allows returning results grouped by a single-valued field, while keeping the building process of the facets on all the results matched by the query; this behavior is configurable.</p>
<p>That allowed us to:</p>
<ul>
<li>group the available products by the referenced model and return just one model;</li>
<li>compute the attribute's facets on the entire collection of available products;</li>
<li>reuse the data in the product index for multiple views based on different grouping settings.</li>
</ul>
<h2>Result Grouping in SearchAPI</h2>
<p>Due to some limitations of the SearchAPI module and its query building components, this plan was not doable with the current configuration, as it would have required us to create a copy of the product index just to apply a specific Result Grouping configuration for each view.</p>
<p>The reason is that the features of SearchAPI Grouping are implemented on top of the “<a href="https://www.drupal.org/node/1254452">Alterations and Processors</a>” functions of SearchAPI. Those are a set of specific functions that can be configured and invoked both at indexing time and at querying time by the SearchAPI module. In particular, <em>Alterations</em> allow programmatically altering the contents sent to the underlying index, while the <em>Processors</em> code is executed when a search query is built and executed and the results are returned.</p>
<p>Those functions can be defined and configured only per-index.</p>
<p>As visible in the following picture, the SearchAPI Grouping module can be configured solely in the index configuration, not per query.</p>
<figure><a href="https://www.liip.ch/content/4-blog/20161024-drupal-searchapi-result-grouping/SearchAPI_processor_settings.png"><img src="https://liip.rokka.io/www_inarticle/663d0523052de6eafe7f91da70a21268f5f6718a/searchapi-processor-settings-300x280.jpg" alt="SearchAPI: processor settings"></a></figure>
<p>Image 1: SearchAPI configuration for the Grouping Processor.</p>
<p>As the SearchAPI Grouping module is implemented as a SearchAPI Processor (it needs to be able to alter the query sent to Solr and to handle the returned results), it would force us to create a new index for each different result grouping configuration.</p>
<p>Such a limitation would introduce a lot of (useless) data duplication in the index, with a consequent decrease in performance when products are saved and later indexed in multiple indexes.</p>
<p>In particular, the duplication is all the more unjustified as the changes performed by the Processor are merely an alteration of:</p>
<ol>
<li>the query sent to Solr;</li>
<li>the handling of the raw data returned by Solr.</li>
</ol>
<p>This shows that there is no need to index the same data multiple times.</p>
<p>Since the possibility to define per-query Processors sounded really promising and such a feature could be used extensively in the same project, a new module has been implemented and published on Drupal.org: the <a href="https://www.drupal.org/project/search_api_extended_processors">SearchAPI Extended Processors</a> module (thanks to SearchAPI's maintainer, <a href="https://www.drupal.org/u/drunken-monkey">DrunkenMonkey</a>, for the help and review :) ).</p>
<h2>The Drupal SearchAPI Extended Processor</h2>
<p>The new module extends the standard SearchAPI behavior for Processors and lets administrators configure the execution of SearchAPI Processors per query, not only per index.</p>
<p>With the new module, any index can be used with multiple, different Processor configurations; no new indexes are needed, which avoids data duplication.</p>
<p>The new configuration is exposed, as visible in the following picture, while editing a SearchAPI view under “Advanced &gt; Query options”.</p>
<p>The SearchAPI Processors can be altered and redefined for the given view; a checkbox allows completely overriding the current index settings rather than providing additional processors.</p>
<figure><a href="https://www.liip.ch/content/4-blog/20161024-drupal-searchapi-result-grouping/SearchAPI_view_extended_processor_settings.png"><img src="https://liip.rokka.io/www_inarticle/49035888583138fd0866e31eb73c51154ac10a51/searchapi-view-extended-processor-settings-300x213.jpg" alt="Drupal SearchAPI: view's extended processor settings"></a></figure>
<p>Image 2: View's “Query options” with the SearchAPI Extended Processors module.</p>
<p>Conclusion: the new SearchAPI Extended Processors module has now been used for a few months in a complex eCommerce project at Liip and allowed us to easily implement new search features without the need to create multiple and separated indexes.</p>
<p>We are able to index product data in one single (and compact) Solr index, and use it with different grouping strategies to build product listings, model listings and model-category navigation pages without duplicating any data.</p>
<p>Since all those listings leverage the Solr <a href="https://cwiki.apache.org/confluence/display/solr/Common+Query+Parameters#CommonQueryParameters-Thefq(FilterQuery)Parameter">fq (FilterQuery) query parameter</a> to filter the correct set of products to be displayed, Solr can make use of its internal set of caches, and specifically the <a href="https://cwiki.apache.org/confluence/display/solr/Query+Settings+in+SolrConfig#QuerySettingsinSolrConfig-filterCache">filterCache</a>, to speed up subsequent searches and facets. This aspect, in addition to the usage of only one index, allows caches to be shared among multiple listings, which would not be possible if separate indexes were used.</p>
<p>For further information, questions or curiosity, drop me a line; I will be happy to help you configure Drupal SearchAPI and Solr for your needs.</p>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/30eba90bb413ef702e64fbb0e6858c1e8106f877/searchapi-processor-settings.jpg" length="90531" type="image/png" />
          </item>
        <item>
      <title>Testing in the Cloud &#8211; Using Bamboo with Amazon AWS</title>
      <link>https://www.liip.ch/fr/blog/testing-cloud-using-bamboo-with-amazon-aws</link>
      <guid>https://www.liip.ch/fr/blog/testing-cloud-using-bamboo-with-amazon-aws</guid>
      <pubDate>Wed, 08 Jun 2016 00:00:00 +0200</pubDate>
<description><![CDATA[<p>Bamboo is the continuous integration service by Atlassian, the company owning the code management service Bitbucket (as well as the Jira issue tracker and the Confluence wiki). Bamboo can run test suites and build any kind of artefact, like generated documentation or installable packages. It integrates with Amazon Web Services, allowing it to spin up EC2 instances as needed. So far, I mostly worked with <a href="http://travis-ci.org/">travis-ci</a>, because of open source projects I maintain on <a href="https://github.com">github.com</a>. What Bamboo does much better than travis-ci – besides supporting code repository services other than github.com – is the dynamic allocation of your own EC2 instances. Bamboo is just the manager and can be configured to spin up EC2 instances when the number of tests to run increases. This keeps the costs at Amazon to a minimum while offering large capacity to run tests when needed.</p>
<p>Besides licensing a Bamboo CI server to run yourself, you can also use it as a cloud service. I recently helped set up tests with this. Unfortunately, the documentation is worse than expected, and the integration is hampered by really silly mistakes that we had to dig up in discussion boards and on Stack Overflow. This blog post contains a few notes that hopefully will help others facing the same challenges. A word of caution: we did most of this in March and April 2016 – things might get fixed or change over time…</p>
<h3>Permissions for starting EC2 Instances</h3>
<p>When setting up the Bamboo cloud service, you provide it with your EC2 credentials. Bamboo uses them to create a bamboo user in AWS, which it then uses to start EC2 instances. Unfortunately, it does not give that user enough permissions to do so. After setting up Bamboo, log into AWS, find the bamboo user and grant it the permissions needed to spin up instances. Bamboo won't tell you what is wrong, just that it failed to start a server.</p>
<p>Note that it takes a little while until Bamboo spins up an instance: while starting EC2 is quite fast, it takes a couple of minutes until the Bamboo Agent connects to the Bamboo server. So don't worry if nothing happens for a moment after setup.</p>
<h3>Trigger Automatic builds</h3>
<p>Once you have connected Bamboo with Bitbucket, you create a project. You can specify that the build should be started automatically on every Bitbucket commit. Bitbucket and Bamboo are both owned by Atlassian – yet, unfortunately, you need to <em>manually</em> specify the Bitbucket IPs in Bamboo as <a href="https://confluence.atlassian.com/bamkb/bitbucket-commits-do-not-trigger-builds-in-bamboo-cloud-779166062.html">IPs that are allowed to trigger builds</a>. That IP field only accepts exact IPs, no ranges or wildcards. <a href="https://confluence.atlassian.com/bamkb/bamboo-triggers-require-an-ip-address-update-for-bitbucket-cloud-794362641.html">This page helpfully lists all the 50+ IPs Bitbucket might use</a>.</p>
<h3>Customizing the Test Runner</h3>
<p>You can use any EC2 image, including custom images. Most of the time, there are much easier solutions and even the EC2 EBS mentioned in the documentation is overkill. There is a simple field in Bamboo to specify shell commands that are executed as root on instance startup. To find it, go to <strong>Bamboo Administration =&gt; Overview =&gt; Elastic Bamboo / Image configurations =&gt; {the image you want} – “Edit” =&gt; Instance startup script</strong> .</p>
<p>You can use the shared keys to ssh into an EC2 test runner and try out the commands you need. Just make sure you keep track of all of them for copying into the startup script. The runner could stop at any point, at which point changes made manually on the runner are lost. And you want new runners to be set up properly.</p>
<p>If you change something in the “Instance startup script”, you should stop all instances and let Bamboo start new ones for the changes to take effect. I recommend doing that even if you first tried things out manually, to verify that the script actually works correctly.</p>
<h3>PHP on Bamboo</h3>
<p>While the Amazon images list phpunit versions, they omit the PHP version itself. With good reason: it's only the outdated and unmaintained 5.3 version. Unless you are into PHP archeology, you want to use a newer version. A good choice is the Ubuntu stock image, which has PHP 5.6. Simply disable all other images in the Elastic Bamboo / Image Configurations section. If you need more than the default PHP extensions, install them with the instance startup script (see above). E.g. if you need curl, do <code>apt-get install php5-curl</code>.</p>
<p>If your project uses Composer (and it should!), it is best to also add the Composer installation to the instance startup script.</p>
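<p>Putting the two previous points together, an instance startup script could look like the following sketch (Ubuntu stock image assumed; adapt the package names to your project's needs):</p>

```shell
#!/bin/bash
# Sketch of a Bamboo "Instance startup script" for the Ubuntu stock image:
# install an extra PHP extension and a system-wide composer binary.
apt-get update
apt-get install -y php5-curl

# Install composer globally so build tasks can simply call "composer".
curl -sS https://getcomposer.org/installer | php -- \
    --install-dir=/usr/local/bin --filename=composer
```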
<p>Our project had a <code>phpunit.xml.dist</code> file in the root folder. Normally, Phpunit picks up this .dist file if there is no phpunit.xml in the folder. But on Bamboo, tests failed with autoloading not working at all, until we realized that we need to explicitly specify the parameter <code>-c phpunit.xml.dist</code> in the Phpunit task definition; otherwise Bamboo uses its own stock phpunit.xml file that makes no sense, instead of letting Phpunit pick the right file.</p>]]></description>
          </item>
        <item>
      <title>Order fulfillment with Swiss Post and YellowCube</title>
      <link>https://www.liip.ch/fr/blog/how-yellowcube-works</link>
      <guid>https://www.liip.ch/fr/blog/how-yellowcube-works</guid>
      <pubDate>Wed, 08 Jun 2016 00:00:00 +0200</pubDate>
      <description><![CDATA[<h2>Setting the stage</h2>
<p>We live in a time where more and more goods are purchased online. Be it your airline ticket or the diapers for your newborn, many things are often cheaper and more convenient to purchase online. As comfortable as this is for the customer, it can be equally challenging for the seller, who needs an online presence that is easy to use and must have all goods in stock to sell and ship them quickly.</p>
<p>This means that selling physical products requires sufficient storage space on site, people who handle picking, packing and shipping, and someone who handles returned goods. All these factors can add up to a costly venture, especially when you don't already have infrastructure from other sales channels.</p>
<figure><a href="https://www.liip.ch/content/4-blog/20160608-how-yellowcube-works/Architekture-PostLogistics-YellowCube-Connector.png"><img src="https://liip.rokka.io/www_inarticle/5b2e5275a9e8fec68a466c7973d57e99b86892c1/architekture-postlogistics-yellowcube-connector-300x225.jpg" alt="Architekture PostLogistics YellowCube Connector"></a></figure>
<p>The PHP-YellowCube interface is part of all YellowCube extensions for PHP based web stores</p>
<h3>What is order fulfillment?</h3>
<p>This is where <a href="https://www.post.ch/en/business/a-z-of-subjects/dropping-off-mail-items/dropping-off-business-parcels/yellowcube">YellowCube</a> comes in handy. YellowCube is a fulfillment service offered and run by Swiss Post. But what is a fulfillment service anyway? In the case of YellowCube it means everything to do with the storing and shipping of the products you sell on your store. This means receiving and storing of the products, handling of the inventory, picking, packing, shipping and handling of returns.</p>
<h3>How to implement it into your web shop?</h3>
<p>So far so good, but how does this translate into real life? Liip has, in close collaboration with Swiss Post, developed a PHP-YellowCube-SDK and also the integration for Magento and Drupal web shops. The PHP-YellowCube interface provides an object-oriented wrapper to the SOAP based interface provided by Swiss Post. The code is readily available on <a href="https://github.com/swisspost-yellowcube">Github</a>. Please feel free to contribute to the <a href="https://github.com/swisspost-yellowcube/yellowcube-php">PHP-YellowCube-SDK</a>, as well as the  <a href="https://github.com/swisspost-yellowcube/magento-yellowcube">Magento extension</a> or the <a href="https://github.com/swisspost-yellowcube/drupal-yellowcube">Drupal extension</a>.</p>
<h3>Please explain…</h3>
<p>Now that we know about the SDK, we need to know how the basic process works. I want to show you what needs to happen in order to implement YellowCube successfully in a web shop. The process is agnostic of any particular integration, so I won't go into details of how it is integrated in a specific shop extension. I will explain the basic process, which applies to all web shop integrations in (more or less) the same manner.</p>
<h2>Go with the flow</h2>
<p>The flowcharts below show you the process and the different “actors” involved (swim-lanes). The red coloured lanes represent the web shop and the yellow lanes show the services provided by YellowCube and exposed by our SDK.</p>
<p>First we need to have products in our web shop. For simplicity, let's assume that all the products in our web shop are handled by YellowCube.</p>
<h3>Feeding data into the system</h3>
<figure><a href="https://www.liip.ch/content/4-blog/20160608-how-yellowcube-works/Screenshot-2016-06-08-09.43.31.png"><img src="https://liip.rokka.io/www_inarticle/3d5ff7c4390b071c91aebfcdb0415d0eed3803b5/screenshot-2016-06-08-09-43-31-e1465372848934.jpg" alt=""></a></figure>]]></description>
                  <enclosure url="http://liip.rokka.io/www_card_2/29e0ffb2de794e2c574f86627cdf7aede9a09bd4/yellowcube.jpg" length="687171" type="image/gif" />
          </item>
        <item>
      <title>Symfony: A Tool to Convert NelmioApiDocBundle to Swagger PHP</title>
      <link>https://www.liip.ch/fr/blog/convert-nelmioapidocbundle-to-swagger-php</link>
      <guid>https://www.liip.ch/fr/blog/convert-nelmioapidocbundle-to-swagger-php</guid>
      <pubDate>Wed, 11 May 2016 00:00:00 +0200</pubDate>
      <description><![CDATA[<p>We have an API built with Symfony that outputs its specification in the Swagger format. We needed to upgrade from version 1 to 2. As we switched the library to generate the specification while upgrading, we had to convert the configuration. In our case that configuration was so extensive that we decided to build a script to convert the configuration.</p>
<p><a href="http://swagger.io/">Swagger</a> is a standard to document REST APIs. Using a JSON file, an application can document its API. Swagger specifies the path for each resource and allowed HTTP methods, as well as input parameters and the returned data. On top of this specification, tools like <a href="http://swagger.io/swagger-ui/">Swagger UI</a> can automatically provide an API client in a browser. This is an excellent way to explore the documentation and also very helpful when investigating data issues.</p>
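<p>For illustration, a minimal Swagger 2.0 document describing a single resource might look like this (the path and parameter are made up for the example):</p>

```json
{
  "swagger": "2.0",
  "info": { "title": "Example API", "version": "1.0" },
  "paths": {
    "/articles/{id}": {
      "get": {
        "parameters": [
          { "name": "id", "in": "path", "required": true, "type": "integer" }
        ],
        "responses": {
          "200": { "description": "The requested article" }
        }
      }
    }
  }
}
```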
<p>We have been using <a href="https://github.com/nelmio/NelmioApiDocBundle">NelmioApiDocBundle</a> with our application for a while now. This bundle reads annotations on the controllers and combines them with the Symfony routing information to produce an API documentation in the Swagger 1 format. Support for Swagger version 2, however, was not available in NelmioApiDocBundle at the time of this blog post. We would have stayed with NelmioApiDocBundle, as it worked well for us, but we did not want to invest the time to refactor that bundle for Swagger 2.</p>
<p>For the consumers of our Swagger json files, we had to migrate to the new version 2 of Swagger. We chose to switch to <a href="https://github.com/zircote/swagger-php/">Swagger-PHP</a>, which is capable of generating Swagger 2. It also uses PHPDoc annotations for the additional meta information. As we have about 100 paths (Symfony actions in our case), we decided to write a converter rather than rewrite all of this by hand. The converter is available as a <a href="https://gist.github.com/dbu/7551f4ed2c5ad62c570d730ad8c1bb0c">Symfony command</a> – of course without any warranty.</p>
<p>To replace the browser view of NelmioApiDocBundle, we use the <a href="https://github.com/activelamp/swagger-ui-bundle">SwaggerUiBundle</a>. That bundle only handles displaying, but has no integration with Swagger-PHP to generate the Swagger json.</p>
<h2>Learnings</h2>
<p>During the transition, we discovered some drawbacks of Swagger-PHP where NelmioApiDocBundle is more convenient:</p>
<ul>
<li>NelmioApiDocBundle reads the Symfony route definition. With Swagger-PHP you need to duplicate that information into the swagger annotations:
<ul>
<li>Route path</li>
<li>HTTP methods (GET / POST / …)</li>
<li>Path parameters with their restrictions and description from the PHPDoc '@param'.</li>
</ul></li>
<li>Swagger-PHP uses the local class names of models for the schema name if no name is explicitly specified. This forces us to manually specify schema names for models from different namespaces that have the same local name.</li>
<li>Swagger-PHP does not use PHP to read the annotations but instead parses the PHP code itself. This makes for some hard to read parser code, should you ever want to fix a bug or extend the library.</li>
</ul>
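<p>To give an idea of the duplication: with Swagger-PHP, information Symfony already has in the route definition is repeated in the annotations. An annotated action might look like the following sketch (Swagger-PHP 2.x annotation style; the route, class and parameter names are made up for the example):</p>

```php
<?php
// Sketch of a controller action annotated for Swagger-PHP 2.x. The path,
// HTTP method and path parameter all duplicate the Symfony route definition.

/**
 * @SWG\Get(
 *     path="/articles/{id}",
 *     @SWG\Parameter(name="id", in="path", type="integer", required=true,
 *         description="Id of the article"),
 *     @SWG\Response(response=200, description="The requested article")
 * )
 */
class ArticleController
{
    public function getAction($id)
    {
        // ... load and return the article identified by $id
    }
}
```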
<p>Most of the redundancy in the annotations is due to Swagger-PHP not knowing about Symfony. Maybe a SwaggerPhpBundle would be a worthwhile effort. Or, even better, merge the efforts of NelmioApiDocBundle and Swagger-PHP to have one good library that supports Swagger 2.</p>
<h2>Further Resources</h2>
<ul>
<li><a href="https://gist.github.com/dbu/7551f4ed2c5ad62c570d730ad8c1bb0c">Symfony command</a> we wrote to convert from NelmioApiDocBundle to Swagger-PHP</li>
</ul>]]></description>
          </item>
    
  </channel>
</rss>
