After focusing on themability for our chatbot LiipGPT (most recently showcased in the ZüriCityGPT Relaunch), we turned our attention to accessibility with the goal of reaching WCAG AA compliance. As we do with many features, we first examined how industry leaders like ChatGPT, Perplexity, and Claude handle accessibility. While we found room for improvement across the board, this inspired us to think about how we could do better.

Our accessibility journey followed four main steps: automatic scans and quick-fixes, keyboard navigation, mobile zoom optimization, and screen reader experience.

Automatic Scans and Quick-Fixes

We started with automated accessibility testing using browser extensions like IBM Equal Access Accessibility Checker and axe DevTools. These tools helped us identify common issues: missing labels, insufficient color contrast, improper semantic HTML, and missing ARIA attributes. While automated scans only catch about 40% of accessibility issues, they provided a solid foundation for our work.

Keyboard Navigation

Proper keyboard navigation is fundamental to accessibility. Ensuring basic Tab navigation works across the app is straightforward, but more complex components like tabs, menus, and modals require advanced keyboard interactions: arrow keys, Escape key handling, and focus management that follow official W3C guidelines. Users who rely on keyboard navigation have learned to expect these specific patterns, and deviating from them creates confusion and frustration. Rather than building these patterns from scratch, we leveraged Bits UI, a headless UI library that implements these accessibility guidelines correctly.

Beyond individual components, we implemented focus loops and focus restoration at the application level to keep users oriented as they move through different stages of the chat interface.
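To illustrate the kind of keyboard pattern Bits UI implements for us, here is a minimal sketch of the roving-focus logic behind menus and tab lists. The function name and the wrap-around behavior follow the common W3C convention, but this is an illustrative example, not LiipGPT's actual implementation.

```typescript
// Compute the next focus index for a roving-tabindex widget (menus, tabs).
// Arrow keys move focus with wrap-around; Home/End jump to the edges.
type NavKey = 'ArrowDown' | 'ArrowUp' | 'Home' | 'End';

function nextFocusIndex(current: number, key: NavKey, itemCount: number): number {
  switch (key) {
    case 'ArrowDown':
      return (current + 1) % itemCount;             // wrap last -> first
    case 'ArrowUp':
      return (current - 1 + itemCount) % itemCount; // wrap first -> last
    case 'Home':
      return 0;
    case 'End':
      return itemCount - 1;
  }
}
```

In a real component, a keydown handler would call this, move `tabindex="0"` to the returned item, and call `focus()` on it, so that only one item in the widget is ever in the Tab order.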

Mobile Zoom Optimization

During user testing for meinplatz.ch with users who have disabilities, we observed something striking: many users navigate websites on mobile devices with 200% or more zoom, holding their devices just 10cm from their eyes. This insight highlighted a critical gap in most chatbot implementations.

Most chatbots use fixed-position elements: a chat input at the bottom and often a header at the top. When users zoom in significantly, these fixed elements can consume the entire viewport, making the interface unusable. Unfortunately, reliably detecting user zoom levels is impossible in browsers. Our solution: use Intersection Observer to detect when the header or footer takes up more space than expected, then dynamically remove their fixed positioning to restore usability.

Fixed-position elements are problematic on zoomed viewports.
Solution: revert fixed elements to static positioning when zoom is detected.
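The core decision can be sketched as a small pure function. The threshold value and function names here are illustrative assumptions, not our exact production code; the browser wiring is shown in comments.

```typescript
// Decide whether fixed header/footer should fall back to static positioning.
// Hypothetical threshold: if the pinned chrome would cover more than half of
// the visual viewport, unpin it so the chat content stays reachable.
function shouldUnpinChrome(
  headerHeight: number,
  footerHeight: number,
  viewportHeight: number,
  maxChromeRatio = 0.5,
): boolean {
  return (headerHeight + footerHeight) / viewportHeight > maxChromeRatio;
}

// In the browser, an IntersectionObserver on a sentinel element (or a
// ResizeObserver on the header/footer) would feed this check, and the result
// would toggle a CSS class that switches `position: fixed` to `position: static`:
//
//   document.body.classList.toggle(
//     'unpinned',
//     shouldUnpinChrome(header.offsetHeight, footer.offsetHeight,
//                       window.visualViewport?.height ?? window.innerHeight),
//   );
```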

Screen Reader Experience

Screen reader accessibility isn't automatic: it requires careful design. We focused on providing clear context through proper page structure (landmarks and headings), ensuring users always understand where they are and what's happening, and providing shortcuts to the key parts of the app.

Providing Context

We implemented a comprehensive outline structure with landmarks for main navigation, settings, and input areas. Each message includes proper headings and labels, and we added a skip link after the chat input (at the bottom of the page) to help users quickly return to the top.

Web Component Challenges

Working with web components introduced unique challenges. VoiceOver is particularly sensitive to how libraries are implemented. For example, we worked closely with the Bits UI team (who were very responsive to bug reports) and implemented local portals for dropdown menus to avoid VoiceOver navigation issues.

Managing Announcements

One of the trickiest challenges was managing VoiceOver announcements when multiple events occur simultaneously. Since queuing announcements doesn't work reliably, we carefully sequenced events and merged related announcements. For example, when a user clicks "select all options" for a list, individual announcements for each option would normally fire and override each other. Instead, we cancel those separate announcements and replace them with a single clear announcement summarizing everything that happened (all items selected or deselected, reset to the predefined set of items, etc.).

Since the chat is a SPA without page reloads, it was also important to announce changes that are only conveyed visually, such as switching between light and dark mode or changing the language.

Chat Flow for Screen Readers

We designed the chat experience specifically for screen reader users:

  • The input field includes both a placeholder and an aria-label with the page title, providing context on page load since the input auto-focuses and users skip over the initial page content.
  • When a response is being generated, we announce this clearly, providing the same feedback that a visual loading indicator would.
  • Once a response is ready, it's read without markdown formatting (no bold, no links, etc.) to maintain a natural reading flow.
  • After reading a response, we make users aware that they can ask another question directly or navigate to the last message's options to provide feedback or view references. We dynamically add this interactive section of the last message (where users are most likely to interact) to the document outline, creating a quick navigation shortcut.
  • Chat history is structured as articles with labels, making it easy to navigate past conversations.
Navigating the chatbot using the VoiceOver screen reader on macOS.
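For the markdown-free reading mentioned above, a minimal regex-based sketch looks like the following. This is an assumption for illustration: since LiipGPT already parses markdown to render it, the real plain-text version would more likely come from the parser's output than from regexes.

```typescript
// Strip common markdown syntax so a response is announced as plain prose
// (no bold markers, no raw URLs) for a natural screen reader flow.
function toPlainAnnouncement(markdown: string): string {
  return markdown
    .replace(/\[([^\]]+)\]\([^)]*\)/g, '$1') // [text](url) -> text
    .replace(/(\*\*|__)(.*?)\1/g, '$2')      // bold markers
    .replace(/(\*|_)(.*?)\1/g, '$2')         // italic markers
    .replace(/`([^`]*)`/g, '$1')             // inline code
    .replace(/^#{1,6}\s+/gm, '')             // heading markers
    .trim();
}
```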

Try It Yourself

You can experience these improvements with Alva, the chatbot of the Basel-Stadt administration. Try navigating with VoiceOver (macOS) or NVDA (Windows), use only your keyboard, or zoom in significantly on a mobile device.

An ongoing journey

Our next goal is to integrate automated accessibility testing into our CI pipeline. However, as mentioned earlier, automated scans only catch around 40% of accessibility issues. This means we'll still need to carefully plan and test each new feature manually. Nothing replaces human testing when it comes to accessibility—automated tools can flag missing labels or contrast issues, but they can't evaluate whether an interface is actually usable for someone navigating with a screen reader or keyboard.
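As a sketch of what such a pipeline step could look like, here is a hypothetical GitHub Actions job running an automated scan against a local build. The tool choice (pa11y-ci), port, and script names are illustrative assumptions, not a decision we have made.

```yaml
# Hypothetical CI job: automated accessibility scan of a preview build.
accessibility-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: npm ci && npm run build
    # Serve the build locally, then wait until it responds.
    - run: npx serve build & npx wait-on http://localhost:3000
    - run: npx pa11y-ci http://localhost:3000
```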

Accessibility is an ongoing journey, not a destination. We're committed to making LiipGPT usable for everyone, and we'll continue refining our approach based on real-world feedback.

Need Help with your Accessibility?

We offer accessibility audits to help you identify and fix issues in your own applications. If you're looking to improve the accessibility of your product, get in touch with us. We would be happy to help.