<?xml version="1.0" encoding="utf-8"?><rss version="2.0">
  <channel>
    <title>Liip Blog</title>
    <link>https://www.liip.ch/en/blog</link>
    <lastBuildDate>Tue, 21 Apr 2026 10:23:03 +0200</lastBuildDate>
            <item>
      <title>Who still needs an apprenticeship when AI exists?</title>
      <link>https://www.liip.ch/en/blog/who-still-needs-an-apprenticeship-when-ai-exists</link>
      <guid>https://www.liip.ch/en/blog/who-still-needs-an-apprenticeship-when-ai-exists</guid>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0200</pubDate>
      <description><![CDATA[<h3>AI is already part of my everyday life</h3>
<p>At the end of 2022, I first saw ChatGPT in a video on TikTok and tried it immediately. It was able to explain content to me faster than my teachers, solve tasks better than I could myself, and even develop ideas I would never have come up with. That really fascinated me.</p>
<p>But it wasn't until I started my apprenticeship that I realized just how powerful and important AI really is. Today, I use it daily. Not to do my work for me, but to teach myself new things, research more effectively, and work more efficiently on certain tasks.</p>
<h3>My apprenticeship is diverse. That's exactly what makes it exciting</h3>
<p>I'm completing an apprenticeship as a <a href="https://www.ict-berufsbildung.ch/grundbildung/ict-lehren/entwickler-in-digitales-business-efz">Digital Business Developer EFZ</a>. I'm currently in my second year. After the first year at Bbc Basislehrjahr, I've been working at Liip since summer 2025.</p>
<p>What I find particularly exciting about this apprenticeship is that I have many different tasks. I work in various Circles, self-organized teams based on holacracy. This way, I get to know many different perspectives, tasks, and people.</p>
<p>I started in the Finance Circle, which focused more on data management. Since then, I've had the opportunity to look into various other teams and get to know different tasks, perspectives, and ways of working. Today, I work in the Content and Design Circle with a focus on process optimization, automation, and AI. I'll also get to know several other Circles, tasks, and roles throughout my apprenticeship.</p>
<figure><img alt="" src="https://liip.rokka.io/www_inarticle_5/ccafe5/grafik-1.jpg" srcset="https://liip.rokka.io/www_inarticle_5/o-dpr-2/ccafe5/grafik-1.jpg 2x"></figure>
<p>This variety suits me well. I like working in a structured way and communicating actively, and I find it exciting to analyze and improve workflows. At Liip, I have the privilege of taking on responsibility early in my apprenticeship. I work on impactful internal and external projects, make decisions, and contribute my own ideas. For me, this strongly distinguishes Liip from many other employers offering apprenticeships.</p>
<h3>Dealing with clients is part of it</h3>
<p>One part of my apprenticeship that might be less visible from the outside is dealing with clients. I learn to communicate professionally, represent my own company well, and understand important factors for good collaboration. This means, for example, clearly communicating information within the team, gathering requirements, and communicating in a structured way in projects. Because I participate in real projects early on as an apprentice, I develop a good sense of what really matters to clients.</p>
<h3>How I use AI specifically</h3>
<p>AI becomes particularly helpful when I provide a lot of context. Then, a general tool becomes a valuable sparring partner. AI supports me especially when I have a clear goal and need a good first draft quickly. I use it for brainstorming, to understand new terms, for research, or to structure texts.</p>
<ul>
<li><em>Fun fact on the side: This blog post was also partially structured with AI. I gathered my thoughts, explained the context, and then got help creating a meaningful structure. This is how I use AI in everyday life.</em></li>
</ul>
<p>A good example is the manual for the <a href="https://www.liip.ch/en/work/projects/liipgpt">LiipGPT</a> backend that I wrote. At the beginning, I had practically no experience writing manuals. So I first taught myself the basics with the help of AI. Then I gave the AI as much context as possible and developed a draft step by step with it. In the end, a handbook was created that is often used internally and individually expanded for clients. For me, this was a moment when I realized how much added value AI can bring, because I could not only work faster and without experience, but also create a result that really helps others in their everyday work.</p>
<figure><img alt="" src="https://liip.rokka.io/www_inarticle_5/0714c5/grafik-2-en.jpg" srcset="https://liip.rokka.io/www_inarticle_5/o-dpr-2/0714c5/grafik-2-en.jpg 2x"></figure>
<aside>
<h3>My Top 3 AI Tools for everyday Digital Business</h3>
<p>A brief look at the tools that help me most in everyday life:</p>
<ul>
<li>
<p><strong>Claude</strong></p>
<p>My most important tool for brainstorming, writing, and understanding new topics.</p>
</li>
<li>
<p><strong>Cursor</strong></p>
<p>Helps me with coding, even without advanced programming skills. For a rendering tool in Figma, I first explained the context in Plan Mode and then implemented it step by step. Cursor helped me really bring the idea to life.</p>
</li>
<li>
<p><strong>NotebookLM</strong></p>
<p>I upload documents or notes and get summaries, questions, or compact learning material from them. Particularly useful for exams or when familiarizing myself with new topics.</p>
</li>
</ul>
</aside>
<p>I don't use these tools thoughtlessly. I test them, compare them, and carefully consider which use cases they really make sense for.</p>
<h3>AI is my best learning buddy</h3>
<p>AI has great potential for learning. It simplifies content, provides examples, and adapts to my level. No textbook can do that. Getting started with a topic becomes faster this way. As an apprentice, I ask the AI questions and follow-up questions that I wouldn't dare to ask in class.</p>
<p>At the same time, however: <strong>AI must never replace your own thinking. Those who don't understand how AI works and how it can be used will end up learning very little.</strong></p>
<p>For me, openness and critical questioning are part of dealing with new tools. AI can empower learners, but only if it's consciously used as a tool. That's why it's important for apprentices to engage with AI during their apprenticeship. Not later, when everyone else has long been a routine user.</p>
<h3>How do you use AI?</h3>
<p>Perhaps it's worth pausing briefly after this blog post and thinking about your own use of AI. How do you really use AI in your everyday life today? What concrete added value does it bring you? Is it just a tool you try out occasionally? Or do you use it consciously to work better, faster, or in a more structured way?</p>
<p>At Liip, I directly experience how an organization doesn't just use AI, but actively shapes it:</p>
<ul>
<li>Regular knowledge exchanges and Liip Talks about AI, from new models to ethical questions.</li>
<li>AI training for all employees. Internally, we offer workshops and special modules so everyone stays confident in using AI.</li>
<li>Development and further development of our own AI tools like <a href="https://www.liip.ch/en/work/projects/liipgpt">LiipGPT</a>, which is already in use with clients from healthcare, legal, and public sectors.</li>
<li><a href="https://www.liip.ch/en/blog/shaping-ai-for-the-people-and-the-planet">AI Sustainability Guidelines</a>, an internal framework that ensures AI projects are implemented ethically and sustainably.</li>
</ul>
<p>This shows me what an organization can look like that doesn't blindly deploy AI, but takes responsibility for it.</p>
<p>How do employees in your company use AI? Are there clear rules or offerings to use AI meaningfully? Where could AI specifically support you in your role if you used it more deliberately?</p>
<p>AI is not a fixed concept that works the same for everyone. But those who use it early quickly realize: <strong>The question is not whether you still need an apprenticeship, but how much more you get out of it when you have AI.</strong></p>]]></description>
    </item>
        <item>
      <title>BOSW 26: the recap</title>
      <link>https://www.liip.ch/en/blog/bosw-26-the-recap</link>
      <guid>https://www.liip.ch/en/blog/bosw-26-the-recap</guid>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0200</pubDate>
      <description><![CDATA[<p>Once again this year, The Hall in Dübendorf brought together a who’s who of the Swiss web industry to honour the best projects of the past year across various categories. As usual, we submitted projects – three this year – all of which made it onto the shortlist. These were the digital platforms we developed for the Museum für Gestaltung Zürich, the Migros Karrierportal and the new website for the Canton of Solothurn. The awards ceremony provided an opportunity for members of the project team from both the client side and Liip to get together. But above all, it was a chance to experience the suspense of finding out which projects would receive gold, silver or bronze awards.</p>
<h2>Museum für Gestaltung: a masterpiece in Public Value and User Experience</h2>
<p>The new website of the Museum für Gestaltung Zürich serves as the central hub for the museum's digital communication and offers visitors the opportunity to connect with the museum. In co-creation, we crafted a solution that allows the design to speak for itself and delivers real public value. The BOSW jury agreed, awarding the new site the Silver prize in both the User Experience and Public Value categories. </p>
<p>More on the <a href="https://www.liip.ch/en/work/projects/museum-fur-gestaltung-zurich">winning project</a> </p>
<p>As Clelia Kanai, head of Marketing &amp; Communication, points out: “For us, design means clarity and responsibility – even in the digital realm. In Liip, we had a partner who shared and upheld this ethos, from the initial concept right through to the final accessibility check. The fact that this resulted in a project which the BOSW jury has awarded silver in the User Experience and Public Value categories is both a validation and a wonderful recognition for us.”</p>
<figure><img alt="" src="https://liip.rokka.io/www_inarticle_5/e48782/bosw2026teampic.jpg" srcset="https://liip.rokka.io/www_inarticle_5/o-dpr-2/e48782/bosw2026teampic.jpg 2x"></figure>
<h2>so.ch: transparency, accessibility and user-friendliness to strengthen the trust of the public</h2>
<p>We developed a new website for the canton of Solothurn, so.ch. It creates genuine public value by making cantonal services transparent, accessible and easy to understand, thereby strengthening the trust of the public, local authorities and the business community in public services. As bs.ch showed us last year, a user-centred website is crucial for improving the lives of citizens who increasingly need to find their public info and documents online. The jury recognised the quality of its user experience by awarding us Bronze in this key category. </p>
<p>More on the <a href="https://www.liip.ch/en/work/projects/so-ch-user-experience">winning project</a></p>
<p>Even though we haven’t won as many awards as we’d have liked (and we, of course, think we deserve), it was interesting to feel the pulse of the web industry. We sincerely congratulate "Swissgrid 24/7" on being named the Master of Swiss Web 26. See you next year!</p>]]></description>
    </item>
        <item>
      <title>Zurich Climate Week</title>
      <link>https://www.liip.ch/en/blog/zurich-climate-week</link>
      <guid>https://www.liip.ch/en/blog/zurich-climate-week</guid>
      <pubDate>Thu, 16 Apr 2026 00:00:00 +0200</pubDate>
      <description><![CDATA[<p>The first <a href="https://www.climateweekzurich.org/" rel="noreferrer" target="_blank">Zurich Climate Week</a> (May 4–9) has arrived—and somehow, it comes at just the right moment. Given the current geopolitical landscape, the initiative could easily have fallen flat. The opposite is true: the response and participation have been remarkable. In fact, other cities are already looking on with a bit of envy—and simply coming to Zurich to be part of it.</p>
<p>In light of recent political developments, a “do-talk gap” in sustainability efforts has been pointed out. Zurich Climate Week proves otherwise: it takes action on the climate crisis <strong>and</strong> talks about it.</p>
<p>And that’s exactly what we need right now. Because sustainable transformation doesn’t happen in silence. It happens where different perspectives and commitments come together: business, policy, academia, and civil society. Where knowledge is shared. Where solutions are developed and scaled together.</p>
<h1>Why we’re getting involved</h1>
<p>At Liip, we decided to contribute to the Zurich Climate Week programme for several reasons:</p>
<ul>
<li>To support the initiative by actively contributing to the programme</li>
<li>To strengthen our commitment to sustainability beyond the Climate Week</li>
<li>And not least: to share our experiences, learn from others, and grow our network of like-minded people</li>
</ul>
<p>Our contribution to Climate Week—just like the 2 focus areas we’ve chosen for our events—is guided by one principle: <strong>taking responsibility</strong>.</p>
<h1>Our contribution</h1>
<p>Liip is part of the programme with 2 formats—both focused on exchange, practical insights, and real-world applicability.</p>
<h2>Hacking for Sustainable AI</h2>
<p><a href="https://climateweekzurich.glueup.com/event/hacking-for-sustainable-ai-176395/" rel="noreferrer" target="_blank">More info &amp; registration</a></p>
<p>Artificial intelligence has long since become part of our everyday lives. It seems to offer solutions to many problems—and is often the first place we turn when we reach a dead end. At the same time, its environmental and societal impacts remain largely unresolved.</p>
<p>When does using AI actually create real value? And how can we use it in ways that contribute to a more sustainable future?</p>
<p>Together with <a href="https://opendata.ch" rel="noreferrer" target="_blank">opendata.ch</a>, we’re bringing these questions to the table in a mini hackathon. Developers, designers, sustainability experts, companies, and other interested participants are invited to collaboratively develop new ideas, prototypes, and perspectives.</p>
<p>👉 Goal: to develop concrete approaches for using AI in a meaningful, responsible, and impactful way.</p>
<h2>Sustainability Reporting for SMEs: Real Cases, Practical Steps</h2>
<p><a href="https://climateweekzurich.glueup.com/event/sustainability-reporting-for-smes-real-cases-practical-steps-176404/" rel="noreferrer" target="_blank">More info &amp; registration</a></p>
<p>SMEs play a key role in the sustainable transformation. And yet many find themselves at the same starting point: where do we begin? What really matters? And how much effort does it take?</p>
<p>In this session, four SMEs share their experiences with voluntary sustainability reporting.</p>
<p>The format is interactive: as a roundtable, participants can ask questions, share their own approaches, and discuss together how to overcome common challenges.</p>
<p>👉 Goal: to lower the barrier to entry, show that getting started is feasible, and grow a community of committed SMEs.</p>
<h1>From talking to doing—together</h1>
<p>Zurich Climate Week stands for a mindset we strongly believe in: combining urgency with optimism. Not just talking about problems, but driving concrete solutions forward. Taking responsibility together—and creating impact in the process.</p>
<p>Especially in a context where uncertainty is increasing and priorities are shifting, it is more important than ever to stay the course. Sustainability is not a short-term trend—it is a long-term commitment. And at the same time, an opportunity: for innovation, for new business models, for real differentiation—and for a different quality of life.</p>
<p>What we need are more spaces for exchange. More transparency. And more courage to share work that is still in progress.</p>
<p>Zurich Climate Week creates exactly these spaces. We’re excited to be part of it.</p>]]></description>
    </item>
        <item>
      <title>Iframes are still odd</title>
      <link>https://www.liip.ch/en/blog/iframes-are-still-odd</link>
      <guid>https://www.liip.ch/en/blog/iframes-are-still-odd</guid>
      <pubDate>Mon, 23 Mar 2026 00:00:00 +0100</pubDate>
      <description><![CDATA[<h2>The Challenge</h2>
<p>The application performs a single, rather complicated task with lots of business logic. There was no way we could rewrite it to include it directly in the website code. And because the application comes with its own Javascript and CSS, we decided to use an iframe to embed the application with clean isolation.</p>
<p>The company maintaining that application provided us with a version - running as a Docker container - where they had stripped all extra elements like the navigation, so that it would visually fit within the website. There was however no way for us to customise anything within the application.</p>
<h2>iframe security</h2>
<p>The promise of an iframe is to keep a clean security boundary between embedding page and embedded content. This means that it is by design not possible to call Javascript across the boundary. </p>
<p>Embedded iframes can be abused to trick a user into submitting data to an attacker (clickjacking), as the iframe may come from a different origin than the main page. To even render an iframe, the browser therefore checks the Content-Security-Policy (CSP) HTTP header. That header has a directive, frame-src, to control what may be included as an iframe. With this, I allowed the domain of the application to be included in iframes:</p>
<pre><code>Content-Security-Policy: frame-src https://my-embed.com;</code></pre>
<p>But not only does the including page need to allow an iframe. The page to be embedded also needs to allow being included with the frame-ancestors attribute of the CSP header. As we run the application Docker image under our control, I was able to add that header in the proxy that runs before the Docker image:</p>
<pre><code>Content-Security-Policy: frame-ancestors https://my-website.com;</code></pre>
<p>Several things to note:</p>
<ul>
<li>If you have other CSP rules, merge them all into one header value: Nginx will overwrite the header rather than add to it</li>
<li>Both options also support <code>'self'</code> to allow embedding, or being embedded, within the same origin</li>
<li>Prior to CSP becoming a standard, there was a non-standard header <code>X-Frame-Options</code>, which is still supported by browsers</li>
<li><code>frame-ancestors</code> must arrive as an actual HTTP header: <code>&lt;meta http-equiv="Content-Security-Policy"&gt;</code> is ignored for this directive (as is <code>X-Frame-Options</code> in a meta tag).</li>
</ul>
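<p>On merging: in Nginx specifically, an <code>add_header</code> in a <code>location</code> block replaces all <code>add_header</code> directives inherited from the <code>server</code> block, so the combined rules have to be declared together. A minimal sketch (the domains are placeholders):</p>
<pre><code># add_header at a lower level replaces inherited headers,
# so keep all CSP directives in one header value
add_header Content-Security-Policy "default-src 'self'; frame-src https://my-embed.com" always;</code></pre>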
<h2>Size of the iframe element</h2>
<p>Now we get to the weird parts. To prevent multiple scrollbars, we need the iframe element to be exactly big enough for the embedded page. If it is too small, there is an additional scrollbar (or hidden content). If it is too large, there is odd whitespace.</p>
<p>The size of the element needs to be set on the iframe, owned by the parent. The dimensions of the content are however only known by the embedded application. HTML / CSS do not provide any means to let the parent page declare that it wants the iframe to have the “necessary size”. </p>
<p>We ended up with a rather convoluted approach, which seems to be the only way to achieve this: sending messages from the child page to the parent. This problem spawned dedicated Javascript libraries like <a href="https://github.com/davidjbradshaw/iframe-resizer">iframe-resizer</a>. We ended up reimplementing the logic in the React application, as it was so small that a dedicated library felt like overkill. Following the tutorial at <a href="https://github.com/craigfrancis/iframe-height/">iframe-height</a> (which also has some interesting background on the WHATWG discussion about iframes), we came up with this code for the containing website:</p>
<pre><code class="language-js">// register an event listener for messages
window.addEventListener('message', receiveMessage, false);

// handle a message
function receiveMessage(event) {
    const origin = event.origin || event.originalEvent.origin;
    // we configure the expected domain to allow for this additional sanity check
    if (expectedDomain !== origin) {
      return;
    }
    if (!event.data.request || 'iframeResize' !== event.data.request) {
      return;
    }
    // the id is known in the js class. we need to find the element that needs to be resized
    const iframe = document.getElementById(`iframe-${id}`);
    if (iframe) {
      // pad the height a bit to avoid unnecessary tiny scrolling
      iframe.style.height = (event.data.height + 20) + 'px';
      // width could be handled the same way if necessary - in our case the width is fixed
    }
}</code></pre>
<p>Now we need to make the embedded content send a message with its height. The Javascript for that is a bit verbose to allow for different browsers, but not complicated either:</p>
<pre><code class="language-js">(
    function(document, window) {
      if (undefined === parent || !document.addEventListener) {
        return;
      }
      function init() {
        let owner = null;
        const width = Math.max(document.body.scrollWidth, document.body.offsetWidth, document.documentElement.clientWidth, document.documentElement.scrollWidth, document.documentElement.offsetWidth);
        const height = Math.max(document.body.scrollHeight, document.body.offsetHeight, document.documentElement.clientHeight, document.documentElement.scrollHeight, document.documentElement.offsetHeight);
        if (parent.postMessage) {
          owner = parent;
        } else if (parent.contentWindow &amp;&amp; parent.contentWindow.postMessage) {
          owner = parent.contentWindow;
        } else {
          return;
        }
        owner.postMessage({
          'request' : 'iframeResize',
          'width' : width,
          'height' : height
        }, '*');
      }

      if (document.readyState !== 'loading') {
        window.setTimeout(init);
      } else {
        document.addEventListener('DOMContentLoaded', init);
      }
      // this is needed to also adjust the iframe if something (e.g. the Javascript of the application) changes the size of the embedded content without an actual page reload.
      const observer = new ResizeObserver(init);
      observer.observe(document.body);
    }
  )(document, window);</code></pre>
<h3>iframes with same origin</h3>
<p>If the iframe comes from the same origin (= domain) as the parent page, Javascript can cross the boundary. From parent to child, there is a <code>contentWindow</code> property on the <code>iframe</code> element. From child to parent, there is <code>window.parent</code>. With same origin, those elements expose all things the window usually has. For different origins, they only expose the function <code>postMessage</code> for the secure separation.</p>
<h2>Injecting content with Nginx</h2>
<p>Remember how I said we can’t modify the application? That still holds true. If we had loaded both applications from the same domain, we could have had the parent page add a listener inside the iframe to directly update dimensions as needed. But the application contained absolute paths for its assets, so serving it from a subfolder of the same domain would have been tricky, and we had to run it on a separate domain. </p>
<p>I ended up injecting the above snippet of Javascript in the Nginx proxy that sits in front of the Docker container:</p>
<pre><code>proxy_set_header Accept-Encoding ""; # make sure we get plain response for substitution to work
...
sub_filter_last_modified on;
sub_filter "&lt;/body&gt;" "&lt;script language='javascript'&gt;${script}&lt;/script&gt;&lt;/body&gt;";
sub_filter_once on;
...
proxy_pass https://my-embed.com$request_uri;</code></pre>
<p>Now the embedded iframe communicates its size to the containing page, which adjusts the iframe size accordingly.</p>
<p>(Note that Nginx does not execute these statements in order. The sub_filter instructions apply to the response, while proxy_set_header and proxy_pass apply to the request.)</p>
<h2>Alternatives</h2>
<p>Web Components are a more lightweight solution to combine separate sources into one website. If what you need to integrate is just an element and not a whole application, they might be a better fit. My colleague Falk wrote about <a href="https://www.liip.ch/en/blog/web-components-the-good-the-bad-and-the-ugly">Web Components</a> last week.</p>
<hr />
<h2>Bonus: Access control for the iframe content</h2>
<p>Because the application is not under our control, we need to manage access to it. We told the supplier of the application to remove access control and simply allow items to be created and edited by ID. Of course, this means that the application must never be directly exposed to the internet, but only reachable through the proxy.</p>
<p>On the embedding side, we track which user is allowed what ids, and have Nginx do a pre-flight check against the website to get the access decision:</p>
<pre><code># at the beginning of the location for the main request to the embedded application
auth_request /auth;

location /auth {
    # preflight authorization request with symfony
    fastcgi_pass phpfcgi;
    include /usr/local/openresty/nginx/conf/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /app/public/index.php;
    # forward the original request URI to allow our main application to verify access to the specific resource
    fastcgi_param REQUEST_URI /embed-check$request_uri;
    # body is not forwarded. we have to remove content length separately, otherwise PHP-FPM will wait for the body until the auth request times out.
    fastcgi_pass_request_body off;
    fastcgi_param CONTENT_LENGTH "";
    fastcgi_param CONTENT_TYPE "";
    internal;
}</code></pre>
<p>If the call at <code>/embed-check/...</code> returns a 2xx status, Nginx continues with the request; otherwise it returns the response with that status code to the client, allowing, for example, a redirect to the login page. In my case, I return an empty response with status 204 if the user is allowed to access the specific resource.</p>
<p>On the Symfony side, I use Symfony security to make sure the user is logged in. I then parse the path to know which item in the application the request wants to access, and check whether the user has access. This leaks knowledge about the URL design of the embedded application, which is unavoidable for granular access control.</p>]]></description>
    </item>
        <item>
      <title>Preventing Context Pollution for AI Agents</title>
      <link>https://www.liip.ch/en/blog/preventing-context-pollution-for-ai-agents</link>
      <guid>https://www.liip.ch/en/blog/preventing-context-pollution-for-ai-agents</guid>
      <pubDate>Wed, 18 Mar 2026 00:00:00 +0100</pubDate>
      <description><![CDATA[<p>Context pollution happens when the context window fills up with information that is irrelevant to the current task. The more an agent has to juggle, the more likely it loses track of what it was actually doing.</p>
<p>Here are practical techniques to prevent it.</p>
<h2>Session Hygiene</h2>
<p>Start a fresh session for each task. This is the simplest technique and the easiest to get right. If earlier research is needed, write it into a temporary handoff file and let a new session pick up from there.</p>
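<p>The handoff itself needs no special tooling; a prompt pair like the following is enough (the file name is my own convention, not a fixed format):</p>
<pre><code>End of session: write everything a fresh session needs to continue
this work into HANDOFF.md, then stop.

New session: read HANDOFF.md, then continue with the next step.</code></pre>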
<h2>Streamline Tool Calling</h2>
<p>Every tool call adds tokens to the context. Poorly built tools add a lot of them. To keep the context lean:</p>
<ul>
<li>Choose tools and MCPs that are well built and optimize token usage</li>
<li>Make sure via prompting that the right tools are used from the start</li>
</ul>
<p>A single bloated tool response can waste more context than an entire conversation turn.</p>
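<p>One caller-side safeguard is to cap what any single tool result may contribute before it enters the context. A minimal sketch in plain Javascript (the wrapper and the rough 4-characters-per-token heuristic are my own illustration, not part of any specific agent framework):</p>
<pre><code class="language-js">// Hypothetical helper: cap a tool result at a token budget before
// handing it to the model. Rough heuristic: ~4 characters per token.
function capToolResult(result, maxTokens) {
  const text = JSON.stringify(result);
  // slice is a no-op when the text already fits the budget
  const kept = text.slice(0, maxTokens * 4);
  return kept.length === text.length ? kept : kept + ' [truncated]';
}</code></pre>
<p>In practice it is worth logging whenever truncation happens, since that points to tools that should be replaced or reconfigured.</p>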
<h2>Subagents</h2>
<p>Agents can spawn other agents that run in their own context. This isolates work and keeps the parent context clean. It helps most when building large features where individual parts are independent.</p>
<p>The easiest way to use subagents is to prompt something like:</p>
<pre><code>Split the current plan into tasks, use a subagent for each task.</code></pre>
<h2>Persistent Tasks</h2>
<p>I built an MCP for Claude Code called <code>deliverables-mcp</code> that lets an agent create persistent tasks per codebase. Tasks are stored in <code>.claude/deliverables.jsonl</code> and persist across sessions.</p>
<p>This allows:</p>
<ul>
<li>Starting a new session before implementing each task</li>
<li>Running subagents in parallel based on task dependencies</li>
<li>Restarting a failed task in a clean session</li>
</ul>
<p>The tool replaces Claude Code's internal tasks and is deliberately called "deliverables" for two reasons:</p>
<ol>
<li>To avoid confusing the agent with two tools both called "tasks"</li>
<li>Deliverables are typically larger than a single task, which is a sweet spot for AI agents: not so small that handoff cost dominates, but small enough that context problems are rare.</li>
</ol>
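<p>Because the tasks live in a plain JSONL file, they are easy to inspect outside the agent as well. A small sketch of reading them (the field names <code>id</code> and <code>status</code> are assumptions for illustration, not the documented format):</p>
<pre><code class="language-js">const fs = require('fs');

// Read a JSONL task file: one JSON object per non-empty line.
// Field names used below are assumed, not the documented format.
function readDeliverables(path) {
  const raw = fs.readFileSync(path, 'utf8');
  return raw.split('\n').filter(Boolean).map(function (line) {
    return JSON.parse(line);
  });
}

// Tasks that still need a session of their own
function openTasks(tasks) {
  return tasks.filter(function (task) {
    return task.status !== 'done';
  });
}</code></pre>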
<p>You can check out <code>deliverables-mcp</code> on <a href="https://github.com/FalkZ/deliverables-mcp">GitHub</a>.</p>]]></description>
    </item>
        <item>
      <title>The ConfIAnce Chatbot, one year later</title>
      <link>https://www.liip.ch/en/blog/the-confiance-chatbot-one-year-later</link>
      <guid>https://www.liip.ch/en/blog/the-confiance-chatbot-one-year-later</guid>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0100</pubDate>
      <description><![CDATA[<p>A little less than a year ago, we introduced <a href="https://www.liip.ch/en/blog/confiance-the-first-llm-chatbot-for-general-medicine-in-switzerland">the ConfIAnce chatbot</a>. In collaboration with the Geneva University Hospitals (HUG), we developed this conversational agent to provide easier, interactive access to medical information about common chronic diseases. The content is produced and validated by the medical institution itself.</p>
<p>An article published by the team behind the project in the latest issue of the Revue Médicale Suisse provides a first assessment one year after its public launch.</p>
<figure><img alt="" src="https://liip.rokka.io/www_inarticle_5/9661ee/rms-confiance.jpg" srcset="https://liip.rokka.io/www_inarticle_5/o-dpr-2/9661ee/rms-confiance.jpg 2x"></figure>
<h2>An official chatbot instead of unreliable answers online</h2>
<p>Primary care, which is essential for a well-functioning healthcare system, is facing a growing shortage, even in urban areas. Without easy access to their primary care physician, many patients turn to the internet to search for answers. Unfortunately, the information they find is often inaccurate or even potentially harmful.</p>
<p>In this context, a well-designed AI solution can help deliver <strong>the right information at the right time</strong>.</p>
<p>This is why we supported HUG in developing a <strong>RAG-based chatbot</strong> (Retrieval Augmented Generation). ConfIAnce is not the first chatbot designed for patients. However, it stands out thanks to its institutional roots, its use of locally validated medical content, and the control layers implemented to ensure reliable responses.</p>
<p>To guarantee safety, the system integrates monitoring mechanisms, including matching, groundedness checks, harmfulness detection, automated testing, and semantic routing.</p>
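<p>Semantic routing here means deciding, before generation, how a question should be handled at all. As a purely illustrative sketch (this is not the actual ConfIAnce implementation, and the rules shown are invented), such a control layer maps a detected topic to a handling strategy:</p>
<pre><code class="language-js">// Illustrative only: not the actual ConfIAnce routing logic.
// Each rule maps trigger terms to a handling strategy.
const routingRules = [
  { triggers: ['emergency', 'urgence'], action: 'show_emergency_contacts' },
  { triggers: ['diagnose me', 'do i have'], action: 'refer_to_physician' },
];

function route(question, rules) {
  const q = question.toLowerCase();
  const hit = rules.find(function (rule) {
    return rule.triggers.some(function (term) {
      return q.includes(term);
    });
  });
  return hit ? hit.action : 'answer_from_knowledge_base';
}</code></pre>
<p>A real system would match semantically (for example with embeddings) rather than on keywords; the sketch only shows where such an adjustable, administrator-controlled layer sits in the pipeline.</p>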
<h2>Keeping control of the tool to ensure quality</h2>
<p>One key challenge is maintaining control over the system, which requires monitoring capabilities. To achieve this, automated tests are run on all answers generated in response to user questions.</p>
<p>These tests measure the factual consistency of the chatbot’s responses compared with the knowledge base.</p>
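<p>The article does not detail the metric itself, but the idea can be sketched as a simple lexical overlap check. This is our own illustrative simplification: production systems typically use NLI- or LLM-based scoring rather than token overlap.</p>

```typescript
// Illustrative sketch only: score how much of an answer is covered by the
// retrieved source passages. Real groundedness checks are far more robust.
function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().match(/[a-zà-ÿ0-9]+/g) ?? []);
}

// Fraction of answer tokens that appear in at least one source passage.
function groundednessScore(answer: string, passages: string[]): number {
  const answerTokens = tokenize(answer);
  if (answerTokens.size === 0) return 0;
  const sourceTokens = new Set(passages.flatMap((p) => Array.from(tokenize(p))));
  let covered = 0;
  answerTokens.forEach((t) => {
    if (sourceTokens.has(t)) covered++;
  });
  return covered / answerTokens.size;
}
```

<p>An answer scoring well below 1 would be flagged for review or blocked rather than shown to the patient.</p>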
<p>In addition, adjustable routing rules allow administrators to maintain human oversight by filtering and directing questions appropriately. Administrators can also immediately take the chatbot offline if a malfunction is suspected.</p>
<p>Topics that are frequently asked about but are insufficiently covered in the source documents are identified. These are then developed further to enrich the knowledge base as part of a continuous improvement process.</p>
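<p>As a very rough sketch of what such a routing rule can look like (the patterns, topics, and route names are our own illustration; ConfIAnce uses semantic routing, not plain keyword matching):</p>

```typescript
// Our own illustrative routing rule; the real system routes semantically.
type Route = "llm" | "emergency" | "out_of_scope";

const EMERGENCY_PATTERNS = [/chest pain/i, /suicid/i, /cannot breathe/i]; // illustrative

function routeQuestion(question: string, allowedTopics: string[]): Route {
  // Safety first: emergency wording bypasses the LLM entirely.
  if (EMERGENCY_PATTERNS.some((re) => re.test(question))) return "emergency";
  const q = question.toLowerCase();
  // Questions outside the covered chronic-disease topics are filtered out.
  if (!allowedTopics.some((topic) => q.includes(topic.toLowerCase()))) return "out_of_scope";
  return "llm";
}
```

<p>Questions routed to "emergency" or "out_of_scope" would receive a fixed, safe response instead of a generated one.</p>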
<h2>Even the best tool is only useful if people use it</h2>
<p>For the chatbot to be useful, patients need to use it. To support adoption, HUG ran a public information campaign that promoted the tool while setting realistic expectations about what it can do.</p>
<p>ConfIAnce is <strong>not a medical device meant to replace a consultation</strong>. Instead, it provides informational support for questions related to the most common chronic diseases affecting adults.</p>
<p>In February 2025, ConfIAnce was released in beta. Between early February and the end of October 2025, 3,823 users interacted with the chatbot, generating 5,969 conversations with <strong>11,781 questions</strong> (about two questions per conversation).</p>
<p><strong>Feedback</strong> provided directly through the chatbot is <strong>75% positive</strong>.</p>
<h2>Strong acceptance for a different kind of chatbot</h2>
<p>Chatbots in healthcare journeys are generally well accepted by patients thanks to their constant availability and ease of use. However, studies highlight recurring issues: inconsistent response quality and a lack of transparency regarding sources.</p>
<p>These are precisely the aspects that differentiate ConfIAnce from many other medical chatbots.</p>
<p>Designed to <strong>support, not replace, the relationship between patients and physicians</strong>, ConfIAnce helps primary care doctors by freeing up time so they can focus on practising medicine with the human-centred approach that motivated them to choose this profession.</p>
<p>The chatbot, developed in the specific context of HUG and its information resources, could be adapted to other institutional settings.<br />
For such a project to succeed, access to high-quality data is essential, as was the case here. Beyond that, control layers, automated testing, and user feedback enable continuous improvement and ensure the safety and relevance needed to build trust.</p>]]></description>
    </item>
        <item>
      <title>Making LiipGPT Accessible: Our Journey to WCAG AA Compliance</title>
      <link>https://www.liip.ch/en/blog/making-liipgpt-accessible</link>
      <guid>https://www.liip.ch/en/blog/making-liipgpt-accessible</guid>
      <pubDate>Mon, 16 Mar 2026 00:00:00 +0100</pubDate>
      <description><![CDATA[<p>After focusing on themability for our chatbot <a href="https://www.liipgpt.ch/" rel="noreferrer" target="_blank">LiipGPT</a> (most recently showcased in the <a href="https://zuericitygpt.ch/" rel="noreferrer" target="_blank">Z&uuml;riCityGPT Relaunch</a>), we turned our attention to accessibility with the goal of reaching WCAG AA compliance. As we do with many features, we first examined how industry leaders like ChatGPT, Perplexity, and Claude handle accessibility. While we found room for improvement across the board, this inspired us to think about how we could do better.</p>
<p>Our accessibility journey followed four main steps: automatic scans and quick-fixes, keyboard navigation, mobile zoom optimization, and screen reader experience.</p>
<h2>Automatic Scans and Quick-Fixes</h2>
<p>We started with automated accessibility testing using browser extensions like <a href="https://chromewebstore.google.com/detail/ibm-equal-access-accessib/lkcagbfjnkomcinoddgooolagloogehp" rel="noreferrer" target="_blank">IBM Equal Access Accessibility Checker</a> and <a href="https://www.deque.com/axe/devtools/extension" rel="noreferrer" target="_blank">axe DevTools</a>. These tools helped us identify common issues: missing labels, insufficient color contrast, improper semantic HTML, and missing ARIA attributes. While automated scans only catch about 40% of accessibility issues, they provided a solid foundation for our work.</p>
<h2>Keyboard Navigation</h2>
<p>Proper keyboard navigation is fundamental to accessibility. Ensuring basic Tab navigation works across the app is straightforward, but more complex components like <a href="https://www.w3.org/WAI/ARIA/apg/patterns/tabs/examples/tabs-automatic/" rel="noreferrer" target="_blank">tabs</a>, <a href="https://www.w3.org/WAI/ARIA/apg/patterns/menubar/" rel="noreferrer" target="_blank">menus</a>, and <a href="https://www.w3.org/WAI/ARIA/apg/patterns/dialog-modal/" rel="noreferrer" target="_blank">modals</a> require advanced keyboard interactions: arrow keys, Escape key handling, and focus management that follow official W3C guidelines. Users who rely on keyboard navigation have learned to expect these specific patterns, and deviating from them creates confusion and frustration. Rather than building these patterns from scratch, we leveraged <a href="https://bits-ui.com/" rel="noreferrer" target="_blank">Bits UI</a>, a headless UI library that implements these accessibility guidelines correctly.</p>
<p>Beyond individual components, we implemented focus loops and focus restoration at the application level to keep users oriented as they move through different stages of the chat interface.</p>
<h2>Mobile Zoom Optimization</h2>
<p>During user testing for <a href="https://meinplatz.ch/" rel="noreferrer" target="_blank">meinplatz.ch</a> with users who have disabilities, we observed something striking: many users navigate websites on mobile devices with 200% or more zoom, holding their devices just 10cm from their eyes. This insight highlighted a critical gap in most chatbot implementations.</p>
<p>Most chatbots use fixed-position elements: a chat input at the bottom and often a header at the top. When users zoom in significantly, these fixed elements can consume the entire viewport, making the interface unusable. Unfortunately, reliably detecting user zoom levels is impossible in browsers. Our solution: use Intersection Observer to detect when the header or footer takes up more space than expected, then dynamically remove the fixed positioning to restore usability.</p>
<figure class="video"><video autoplay controls loop muted playsinline><source src="https://www.liip.ch/media/pages/blog/making-liipgpt-accessible/343fac4583-1769071651/chatgpt-zoom.mp4" type="video/mp4"></video><figcaption>Fixed-position elements are problematic on zoomed viewports.</figcaption></figure> 
<figure class="video"><video autoplay controls loop muted playsinline><source src="https://www.liip.ch/media/pages/blog/making-liipgpt-accessible/6900992c5e-1769071651/liipgpt-zoom.mp4" type="video/mp4"></video><figcaption>Solution: Revert fixed elements to static positioning when zoom is detected.</figcaption></figure>
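<p>The core of the heuristic can be sketched as a pure function (the threshold is our assumption, not the value used in LiipGPT). In the browser, an Intersection Observer on the header and footer feeds it the measured heights and toggles a class that reverts <code>position: fixed</code> to <code>static</code>.</p>

```typescript
// Assumed threshold: fixed chrome may use at most 40% of the viewport.
const MAX_FIXED_SHARE = 0.4;

// When the user zooms in, the layout viewport shrinks while the fixed header
// and footer keep their CSS pixel height, so their share of the screen grows.
function shouldUnfix(headerHeight: number, footerHeight: number, viewportHeight: number): boolean {
  return (headerHeight + footerHeight) / viewportHeight > MAX_FIXED_SHARE;
}
```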
<h2>Screen Reader Experience</h2>
<p>Screen reader accessibility isn't automatic: it requires careful design. We focused on providing clear context through proper page structure (landmarks and headings), ensuring users always understand where they are and what's happening, and giving them shortcuts to the key parts of the app.</p>
<h4>Providing Context</h4>
<p>We implemented a comprehensive outline structure with landmarks for main navigation, settings, and input areas. Each message includes proper headings and labels, and we added a skip link after the chat input (at the bottom of the page) to help users quickly return to the top.</p>
<h4>Web Component Challenges</h4>
<p>Working with web components introduced unique challenges. VoiceOver is particularly sensitive to how libraries are implemented. We worked closely with the bits-ui team (who were very responsive to bug reports) and implemented local portals for dropdown menus to avoid VoiceOver navigation issues, for example.</p>
<h4>Managing Announcements</h4>
<p>One of the trickiest challenges was managing VoiceOver announcements when multiple events occur simultaneously. Since queuing announcements doesn't work reliably, we carefully sequenced events and merged related announcements. For example, when a user clicks "select all options" for a list, individual announcements for each option would normally fire and override each other. Instead, we cancel those separate announcements and replace them with a single clear announcement summarizing everything that happened (all items selected or deselected, reset to the predefined set of items, etc.).</p>
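<p>A minimal sketch of the merging idea (the message formats and summary text are our own illustration, not LiipGPT's actual strings):</p>

```typescript
// Collapse a burst of pending live-region messages into one announcement.
// Individual "… selected" messages would otherwise override each other.
function mergeAnnouncements(pending: string[]): string | null {
  if (pending.length === 0) return null;
  const selections = pending.filter((m) => m.endsWith("selected"));
  if (selections.length > 1) return `${selections.length} options selected`;
  // Otherwise only the most recent message is worth announcing.
  return pending[pending.length - 1];
}
```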
<p>Since the chat is a SPA without page reloads, it was also important to announce all changes that are otherwise only conveyed visually, for example the light/dark mode switch or a language switch.</p>
<h4>Chat Flow for Screen Readers</h4>
<p>We designed the chat experience specifically for screen reader users:</p>
<ul>
<li>The input field includes both a placeholder and an aria-label with the page title, providing context on page load since the input auto-focuses and users skip over the initial page content.</li>
<li>When a response is being generated, we announce this clearly, providing the same feedback that a visual loading indicator would.</li>
<li>Once a response is ready, it's read without markdown formatting (no bold, no links, etc.) to maintain a natural reading flow.</li>
<li>After reading a response, we make users aware that they can ask another question directly or navigate to the last message's options to provide feedback or view references. We dynamically add this interactive section of the last message (where users are most likely to interact) to the document outline, creating a quick navigation shortcut.</li>
<li>Chat history is structured as articles with labels, making it easy to navigate past conversations.</li>
</ul>
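<p>The loading announcement from the list above can be implemented with a visually hidden live region. This is a generic sketch, not LiipGPT's actual code:</p>

```typescript
// Create a visually hidden, polite live region and return an announce() helper.
function createAnnouncer(doc: Document): (message: string) => void {
  const region = doc.createElement("div");
  region.setAttribute("role", "status"); // implies aria-live="polite"
  // Visually hidden, but still exposed to screen readers.
  region.style.cssText = "position:absolute;width:1px;height:1px;overflow:hidden;clip-path:inset(50%);";
  doc.body.appendChild(region);
  return (message: string) => {
    region.textContent = message;
  };
}

// Usage (browser only):
// const announce = createAnnouncer(document);
// announce("Generating a response");
```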
<figure class="video"><video autoplay controls loop muted playsinline><source src="https://www.liip.ch/media/pages/blog/making-liipgpt-accessible/912fd0d7cd-1769018708/screenreader.mp4" type="video/mp4"></video><figcaption>Navigating the chatbot using the VoiceOver screen reader on macOS.</figcaption></figure>
<h2>Try It Yourself</h2>
<p>You can experience these improvements with <a href="https://www.bs.ch/alva" rel="noreferrer" target="_blank">Alva</a>, the chatbot of the Basel-Stadt administration. Try navigating with <a href="https://www.google.com/search?q=how+to+navigate+a+website+with+voiceover" rel="noreferrer" target="_blank">VoiceOver (macOS)</a> or <a href="https://www.google.com/search?q=how+to+navigate+a+website+with+nva+screen+reader" rel="noreferrer" target="_blank">NVDA (Windows)</a>, use only your keyboard, or zoom in significantly on a mobile device.</p>
<h2>An ongoing journey</h2>
<p>Our next goal is to integrate automated accessibility testing into our CI pipeline. However, as mentioned earlier, automated scans only catch around 40% of accessibility issues. This means we'll still need to carefully plan and test each new feature manually. Nothing replaces human testing when it comes to accessibility—automated tools can flag missing labels or contrast issues, but they can't evaluate whether an interface is actually usable for someone navigating with a screen reader or keyboard.</p>
<p>Accessibility is an ongoing journey, not a destination. We're committed to making LiipGPT usable for everyone, and we'll continue refining our approach based on real-world feedback.</p>
<h2>Need Help with your Accessibility?</h2>
<p>We offer accessibility audits to help you identify and fix issues in your own applications. If you're looking to improve the accessibility of your product, <a href="https://www.liip.ch/en/contact">get in touch with us</a>, we would be happy to help.</p>]]></description>
    </item>
        <item>
      <title>Insights on AI and Open Source for government at Drupal4Gov</title>
      <link>https://www.liip.ch/en/blog/insights-on-ai-and-open-source-for-government-at-drupal4gov</link>
      <guid>https://www.liip.ch/en/blog/insights-on-ai-and-open-source-for-government-at-drupal4gov</guid>
      <pubDate>Wed, 11 Mar 2026 00:00:00 +0100</pubDate>
      <description><![CDATA[<p>The Drupal4Gov conference was packed with interesting talks. Here you’ll find my personal highlights. I was also there to showcase our work on the Kanton Basel-Stadt/Alva/blökkli project. We already wrote about it, but I’ll share the current status and new features with you.</p>
<h2>GovNL: From months to minutes to build sites</h2>
<p>GovNL combines open source Drupal components and an open design system to run many Dutch government sites in a way that is accessible and scalable. It reduces time to build new sites from <strong>3 months to about 10 minutes</strong>, pretty impressive to say the least. This is a strong use case of designing for reuse at large scale.</p>
<h2>European Commission: Coordination is key to scaling</h2>
<p>The European Commission already runs no less than <strong>770 sites</strong> and invests heavily in the Drupal ecosystem! What stood out for me was how much they <strong>focus on coordination</strong>—making sure the right content is published through the right channel across that landscape. Open source program offices (OSPOs) were established as a way to drive open source agendas both at government level and inside organisations.</p>
<figure><img alt="" src="https://liip.rokka.io/www_inarticle_5/ac3271/drupal4gov2026-josef.jpg" srcset="https://liip.rokka.io/www_inarticle_5/o-dpr-2/ac3271/drupal4gov2026-josef.jpg 2x"></figure>
<h2>Kanton Basel-Stadt website and Alva: a blueprint for local public administration</h2>
<p>Just before lunch break, it was time for me to present the <strong>different AI use cases we implemented</strong> for <a href="https://www.liip.ch/en/work/projects/basel-stadt">Kanton Basel-Stadt</a>. The canton set new standards with the bs.ch relaunch with user-centred design, topic-based access instead of internal org structure, and <a href="https://www.bs.ch/alva">Alva</a> as <strong>the first AI-based chatbot for a Swiss canton</strong>. The stack is based on open source components and Liip heavily contributed to open source as part of this relaunch. We use Drupal as CMS, Nuxt/Vue, the <a href="https://blokk.li/">bl&ouml;kkli</a> editor for the headless frontend and Elasticsearch for search. Content is produced by a cross-department editorial team following a clear content strategy.</p>
<h2>AI to support the public and editors of the website</h2>
<p>The talk was an opportunity to share figures more than 18 months after the go-live. Today, Alva handles <strong>over 10,000 questions per month</strong>, with about <strong>1.36 questions per conversation and +44% growth since Alva 2.0.</strong> API integrations let the chatbot answer questions based on real-time information. Alva is also used heavily by internal users from the canton as well as the public. The chatbot always shows and validates its sources, which is central to creating trust. </p>
<p>On the editing side, blökkli helps editors simplify texts. Using <strong>the blökkli editor</strong> with integrated AI, editors can now run a readability audit, see proposed simplifications side by side, and accept or adapt them. Alva and the AI features on bs.ch are continuously developed further to provide editors and citizens with trustworthy AI technology.</p>
<h2>AI assisted technologies at the French Government</h2>
<p>Another inspiring talk to watch was about the use cases of AI in the Services Publics+ platform of the French government. With more than 140,000 experiences shared and over a million reactions, the system uses AI-assisted technology to help state services provide feedback to citizens. They leverage speech-to-text and real-time summaries as enabling technologies. C’est magnifique!</p>
<h2>The EU trusts open source more than ever</h2>
<p>The European Union introduced <strong>Website Evidence Collector</strong> that scans sites for security issues and is open source. It was notable that they publish it under the <strong>EUPL</strong> (European Union Public Licence), which emphasises interoperability between countries and licences and supports multilingual collaboration. I wonder if Switzerland has something similar? </p>
<p>Not only does the EU trust open source for security, it also provides a new portal through Interoperable Europe that includes a useful <strong>Licensing Assistant</strong>. You can <a href="https://interoperable-europe.ec.europa.eu/collection/eupl/solution/licensing-assistant/find-and-compare-software-licenses">find and compare software licences</a> and run a <a href="https://interoperable-europe.ec.europa.eu/collection/eupl/solution/licensing-assistant/compatibility-checker">compatibility checker</a> to see if different open source licences can be combined and whether there are legal complications.</p>
<h2>Using open source is not enough, we need champions</h2>
<p>Finally, <strong>Tiffany Farris</strong> from strategy consultancy <a href="https://www.palantir.net/">Palantir.net</a> (not to be confused with the infamous Palantir Technologies) stressed that <strong>using open source is good, but not enough</strong>. You need <strong>champions</strong> in the organisation who put contribution and ecosystem health on the agenda. Designing for reuse should be a core principle. From a US perspective, procurement is a problem: open source usage has grown, but support mechanisms often haven’t. Treating open source as “free” can lead to contracts going to vendors who brand their work as open source without actually supporting a thriving ecosystem. She proposed concrete <strong>public money, public code</strong>-style amendments to public procurement to better support the ecosystem. This was really an inspiring conclusion to an intense day of learning and exchanges.</p>
<p>You can <a href="https://www.youtube.com/playlist?list=PLNubpNMwP36QH5Y3RlbOiV4f9hjlrxCOo">watch the playlist</a> if you would like to dive deeper into the presentations from Drupal4Gov EU 2026.</p>]]></description>
    </item>
        <item>
      <title>Web Components: The Good, the Bad, and the Ugly</title>
      <link>https://www.liip.ch/en/blog/web-components-the-good-the-bad-and-the-ugly</link>
      <guid>https://www.liip.ch/en/blog/web-components-the-good-the-bad-and-the-ugly</guid>
      <pubDate>Wed, 11 Mar 2026 00:00:00 +0100</pubDate>
      <description><![CDATA[<h1>Introduction</h1>
<p>We created a fully themeable chat UI that can be embedded in any website and has no effect on the parent page. <a href="https://www.bs.ch/alva">Kanton Basel-Stadt's Alva</a> and <a href="https://ramms.ch/">RAMMS' Rocky AI</a> are instances of that UI.</p>
<p>This blog post will show you what we learned about creating web components that do not influence the parent page. Here are the good, the bad and the ugly when working with web components.</p>
<h1>The Good</h1>
<p>These are the good parts of web components. They will lay the foundation for why you might use them.</p>
<h2>Portability</h2>
<p>Every system that can handle HTML can handle web components. A simple tag and a script will integrate it into any web framework. It doesn't even need to be a JavaScript framework.</p>
<pre><code class="language-html">&lt;body&gt;
  &lt;your-webcomponent&gt;&lt;/your-webcomponent&gt;
  &lt;script src="path/to/your-webcomponent.js"&gt;&lt;/script&gt;
&lt;/body&gt;</code></pre>
<h2>Native Feel</h2>
<p>IFrames are another way to embed UI into a page, and they are arguably easier to use. But the main difference is that web components feel more native to the page, since they directly integrate into the parent page's layout. This means you can use transparency, intrinsic sizing (size based on the web component's contents), and seamless event communication with the parent page.</p>
<h2>Slots</h2>
<p>With slots you can provide content that will be added at a specified point inside your web component.</p>
<p>In our chat UI, we used a slot to let integrators provide a custom loading spinner. This spinner needs to be visible immediately, before the full theme loads asynchronously.</p>
<h2>Shadow DOM - Isolating Styles</h2>
<p>A robust way to ensure that your styles do not affect the parent page is to use the Shadow DOM. Shadow DOM is a web component feature to add a boundary for styles. Styles applied inside the Shadow DOM never apply to the parent page.</p>
<h3>Caveat: Inheritable CSS Properties</h3>
<p>There are exceptions to this isolation where certain CSS properties of the parent page still apply inside the web component.</p>
<p>These are the properties that pierce through the boundary:</p>
<ul>
<li>Inheritable CSS properties like <code>color</code>, <code>font-family</code>, <code>line-height</code></li>
<li>CSS custom properties like <code>--my-var</code></li>
</ul>
<p>In practice, we have found it helps to fully specify the common properties like fonts and color on every element. That way you will never be surprised by different styles on integration.</p>
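<p>As a sketch, such a defensive rule set could look like this (the concrete values are placeholders for your own theme, not ours):</p>

```typescript
// Explicitly pin the inheritable properties on every element in the shadow
// tree, so values inherited from the parent page never leak in unnoticed.
const defensiveReset = `
  :host, :host * {
    font-family: system-ui, sans-serif;
    line-height: 1.5;
    color: #1a1a1a;
  }
`;

// Browser only: adopt it into the shadow root.
// const sheet = new CSSStyleSheet();
// sheet.replaceSync(defensiveReset);
// shadowRoot.adoptedStyleSheets = [sheet];
```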
<h1>Vite</h1>
<p>For bundling web components, we can highly recommend Vite. There are a lot of neat tricks you can apply while bundling. Here are the Vite features we used for our web component.</p>
<h2>Inlining Assets</h2>
<p>Vite's <a href="https://vite.dev/guide/assets#explicit-inline-handling">explicit inline handling</a> feature allowed us to inline our external CSS files into the JS bundle.</p>
<pre><code class="language-ts">import cssContentString from "./index.css?inline";</code></pre>
<p>This feature will not only inline the raw content of the imported <code>index.css</code>. It will also resolve all CSS imports, apply PostCSS transforms, and even work with CSS preprocessors like SASS. While inlined CSS is not the most efficient for browsers to render, the benefit is that we can ship a single JS file.</p>
<h2>Library Mode</h2>
<p>The Vite <a href="https://vite.dev/guide/build#library-mode">library mode</a> provides you with fine-grained control of how the bundle should behave. To enable the library mode just add the <code>build.lib</code> option in your Vite config.</p>
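<p>A minimal config for a single-file web component bundle might look like this (entry path and names are placeholders):</p>

```typescript
// vite.config.ts — library mode producing one ES module for the component.
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    lib: {
      entry: "src/main.ts", // placeholder entry point
      name: "YourWebcomponent",
      fileName: "your-webcomponent",
      formats: ["es"],
    },
  },
});
```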
<h1>The Bad</h1>
<p>Not everything about web components is great though. Here are the bad parts.</p>
<h2>SSR - Hard to Get Working</h2>
<p>Server-side rendering will almost certainly not work. The rest of the page can still be rendered server-side, but the web component will only show up as a <code>&lt;your-webcomponent&gt;&lt;/your-webcomponent&gt;</code> tag. Its contents will only be rendered in the browser.</p>
<p>There is one <a href="https://lit.dev/docs/ssr/overview/">experimental package by Lit Labs</a> that tries to solve this, but we never tried it.</p>
<h2>Tailwind - Not a Great Fit</h2>
<p>Tailwind feels like a natural choice for web components, but it does not play well with them.</p>
<p>The core issue is twofold. First, Tailwind ships its own CSS reset (called Preflight), which overrides default browser styles. When injected into a page that does not use Tailwind, it potentially breaks the page. Shadow DOM could isolate this reset, but Tailwind is fundamentally not designed to work inside a Shadow DOM. Here is the <a href="https://github.com/tailwindlabs/tailwindcss/discussions/1935">discussion</a> if you are interested.</p>
<p>There are some hacky workarounds, but we tried them and had no success getting them to work reliably.</p>
<p>Our recommendation is to only use Tailwind if you are guaranteed that the parent page also uses it, and then use web components without Shadow DOM.</p>
<h1>The Ugly</h1>
<h2>Verbose Web Components API</h2>
<p>The native web component API is verbose and hard to read. A simple counter component, for example, requires manually defining a class, attaching a shadow root, setting up <code>innerHTML</code>, and wiring event listeners in <code>connectedCallback</code>. This boilerplate adds up quickly. You can see examples of the API <a href="https://github.com/mdn/web-components-examples">here</a>.</p>
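<p>To give an impression of that boilerplate, here is roughly what a minimal counter requires with the native API (our own sketch, browser only, wrapped in a function so nothing runs on load):</p>

```typescript
// Roughly the ceremony the native API demands for a trivial counter.
// Usage in the browser: defineCounter(); then <my-counter></my-counter>.
function defineCounter(): void {
  class MyCounter extends HTMLElement {
    private count = 0;

    connectedCallback(): void {
      const shadow = this.attachShadow({ mode: "open" });
      shadow.innerHTML = `<button type="button">+</button> <span>0</span>`;
      shadow.querySelector("button")?.addEventListener("click", () => {
        this.count += 1;
        const span = shadow.querySelector("span");
        if (span) span.textContent = String(this.count);
      });
    }
  }
  customElements.define("my-counter", MyCounter);
}
```

<p>The same component in Svelte is a handful of declarative lines that compile down to exactly this kind of class.</p>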
<p>Fortunately, web components make for a great compile target. <a href="https://svelte.dev/docs/svelte/custom-elements">Svelte</a> and <a href="https://vuejs.org/api/custom-elements.html#definecustomelement">Vue</a> directly support compiling to web components. <a href="https://blog.logrocket.com/working-custom-elements-react/">React</a> is a bit trickier, but totally doable as well. We used this approach for our chat UI, where the first iteration was built with React and the current one with Svelte.</p>
<h2>Weird Quirks</h2>
<p>Advanced web component features come with edge cases that no documentation warns you about. Even Svelte, which has excellent web component support, ships with a notable <a href="https://svelte.dev/docs/svelte/custom-elements#Caveats-and-limitations">list of caveats</a>.</p>
<p>We even hit an undocumented edge case with slots in Svelte: the bundle script must load after the component markup, or slotted content will not render. An ugly <a href="https://github.com/FalkZ/svelte-web-components-starter/blob/main/src/slot.svelte">wrapper for slots</a> fixes the problem, but quirks like this add up and slow you down.</p>
<h2>Font Loading - Not Working Inside Shadow DOM</h2>
<p>When authoring web components, you get into the habit of defining all stylesheet links and styles inside the web component body. As you should, so they do not affect the parent page. But there is another annoying detail: <code>@font-face</code> will not work when defined in the Shadow DOM. If your web component needs a custom font, you need to inject the font CSS into the parent page to make it work.</p>
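<p>A sketch of such an injection (the font name and path are placeholders):</p>

```typescript
// @font-face is ignored inside the Shadow DOM, so the rule must live in the
// parent document. "MyWebFont" and the file path are placeholders.
function injectFontFace(doc: Document): void {
  const style = doc.createElement("style");
  style.textContent = `
    @font-face {
      font-family: "MyWebFont";
      src: url("/fonts/my-web-font.woff2") format("woff2");
      font-display: swap;
    }
  `;
  doc.head.appendChild(style);
}

// Inside the shadow styles, the family can then be used as usual:
// :host { font-family: "MyWebFont", sans-serif; }
```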
<h1>Conclusion</h1>
<p>I do not want to end on this ugly note though. I really think there are cases where web components are the right choice, and in our case we would choose Svelte &amp; web components again.</p>
<p>To help you get started, here is a <a href="https://github.com/FalkZ/svelte-web-components-starter">Svelte starter template</a>.</p>]]></description>
    </item>
        <item>
      <title>City of Zurich&#039;s 900+ Open Data Sets Now Have an MCP Server</title>
      <link>https://www.liip.ch/en/blog/city-of-zurich-s-900-open-data-sets-now-have-an-mcp-server</link>
      <guid>https://www.liip.ch/en/blog/city-of-zurich-s-900-open-data-sets-now-have-an-mcp-server</guid>
      <pubDate>Thu, 26 Feb 2026 00:00:00 +0100</pubDate>
      <description><![CDATA[<p><a href="https://www.linkedin.com/in/alexander-g%C3%BCntert-3379071b6/">Alexander Güntert</a> <a href="https://www.linkedin.com/posts/activity-7432101739589345280-0YcB">posted on LinkedIn</a> about a new open-source project his colleague <a href="https://www.linkedin.com/in/hayaloezkan/">Hayal Oezkan</a> had built: an <a href="https://github.com/malkreide/zurich-opendata-mcp">MCP server for Zurich's open data</a>. The post got quite a few reactions, and I liked the idea very much. But it still required a local installation, not something non-developers can easily do. So I packaged it and deployed it on our servers, where it is now available for anyone to use as the "OGD City of Zurich" remote MCP server.</p>
<p>The City of Zurich publishes over 900 datasets as open data, spread across six different APIs. There's <a href="https://data.stadt-zuerich.ch">CKAN</a> for the main data catalog, a WFS Geoportal for geodata, the Paris API for parliamentary information from the Gemeinderat, a tourism API, SPARQL linked data, and ParkenDD for real-time parking data. All public, all freely available. But until now, making an AI assistant actually use these APIs meant writing custom integrations for each one.</p>
<p>The MCP server wraps all six APIs into 20 tools that any MCP-compatible AI assistant can call directly. Ask "How warm is it in Zurich right now?" and it queries the live weather stations. Ask about parking availability, and it pulls real-time data from 36 parking garages. It also covers parliamentary motions, tourism recommendations, SQL queries on the data store, and GeoJSON features for school locations, playgrounds, or climate data. All through a single, standardized <a href="https://modelcontextprotocol.io/">Model Context Protocol</a> interface.</p>
<p>Hayal Oezkan built it in Python using FastMCP: one file for the server with all 20 tool handlers. The <a href="https://github.com/malkreide/zurich-opendata-mcp">repo</a> is on GitHub.</p>
<p>Deploying it on our side took very little effort. The server supports both stdio transport for local use (like in Claude Desktop or Claude Code) and SSE and HTTP Streaming for remote deployment. I packaged it with Docker, deployed it to our cluster, and now it's available as a remote MCP server that anyone can add to their AI tools without installing anything locally.</p>
<p>The natural next step was integrating this with <a href="https://zuericitygpt.ch/">ZüriCityGPT</a>. It happened, just not quite in the direction I originally had in mind.</p>
<p>ZüriCityGPT already had its own MCP server at zuericitygpt.ch/mcp, exposing tools for searching the city’s website content and "Stadtratsbeschlüsse" (city council resolutions), and for looking up waste collection schedules. Instead of wiring the open data tools into ZüriCityGPT, I went the other way: the open data MCP server now proxies tools from the ZüriCityGPT MCP server. A lightweight proxy client connects to the remote server via streamable-http and forwards calls. The whole thing is about 40 lines of Python.</p>
<p>So now, when you connect to the Zurich Open Data MCP server, you get 23 tools in one place: the 21 original open data tools across six APIs, plus <code>zurich_search</code> for querying the city's knowledge base and <code>zurich_waste_collection</code> for waste pickup schedules (based on the <a href="https://openerz.metaodi.ch/documentation">OpenERZ API</a>). One MCP endpoint, many services behind it.</p>
<p>A city employee builds something useful in the open, publishes the code, and within a day it's deployed and available to a wider audience. Open data and open source working together, exactly as intended.</p>]]></description>
    </item>
      </channel>
</rss>