AI and sustainability sit in tension. LLMs and AI infrastructure come with real environmental costs: high energy demand, rebound effects, and “climate shadows” that are often invisible at first glance. As Liipers, however, we agree that AI can be shaped to have a lower impact and be more responsible. We have a duty to do so: to our clients, to society, and to our long-term digital ecosystem.

This duty led Liip to publish the Sustainability Guidelines for AI-enabled products. For now, we have a functional draft, to be tested on first projects and progressively adopted across teams.


Why We Started This Work

The initial project proposal framed the challenge well: teams working on AI products often lack a consolidated checklist to help them mitigate negative impacts on people and planet, while still delivering high-quality results that meet client expectations.

The goal was never to restrict creativity or slow delivery. Instead, it was to enable developers, designers, and strategists to make informed decisions, grounded in:

  • Offers and projects
  • User value
  • Technical optimisation
  • UX relevance
  • Ethical considerations
  • Transparency
  • Digital sustainability

In short: better products for users, with lower impact, and stronger alignment with the company strategy.


What the Guidelines Enable Today

After several exchanges and analysis of existing sustainable AI references, we now have a first version ready to use. The guidelines offer:

1. Clear criteria for responsible AI use

A structured set of 38 questions and checks to integrate into projects progressively.

These include impact reporting, model choice, UX considerations, data management, transparency towards users, and more. Some are fairly obvious, some have already been applied for a while, and some will require real effort.

2. A framework for continuous improvement

The expectation is simple: even adding one new criterion per project already counts as valuable progress. Continuous improvement is built into the guidelines, as initially defined in the project outline.

3. Concrete “Essentials” for AI projects

From the draft release shared internally, four essential practices have already emerged as non-negotiable for upcoming GenAI projects:

  • Provide LLM and hosting alternatives in offers (e.g., open-source, efficiency criteria, or greener hosting providers).
  • Include UX designers more systematically to refine user flows and enhance experience.
  • Use Impact Reporting & Performance Cards (such as Ecologits) to track the footprint of our products.
  • Provide a continuous-improvement roadmap based on the guidelines for project maintenance and future releases.
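To make the “Impact Reporting & Performance Cards” idea concrete, here is a minimal sketch of what a per-request impact card can look like. The emission factors below are illustrative placeholders, not real values; in practice a library such as EcoLogits derives energy and emissions from the actual model, token counts, and provider infrastructure.

```python
from dataclasses import dataclass

# Illustrative placeholder factors -- NOT real EcoLogits values.
ENERGY_PER_TOKEN_KWH = 0.000002    # hypothetical energy per generated token
CARBON_INTENSITY_KG_PER_KWH = 0.4  # hypothetical grid carbon intensity

@dataclass
class ImpactCard:
    """Per-request 'performance card' summarising an estimated footprint."""
    output_tokens: int

    @property
    def energy_kwh(self) -> float:
        # Energy scales with the number of generated tokens.
        return self.output_tokens * ENERGY_PER_TOKEN_KWH

    @property
    def gwp_kgco2eq(self) -> float:
        # Global warming potential: energy times grid carbon intensity.
        return self.energy_kwh * CARBON_INTENSITY_KG_PER_KWH

card = ImpactCard(output_tokens=500)
print(f"energy: {card.energy_kwh:.6f} kWh, GWP: {card.gwp_kgco2eq:.8f} kgCO2eq")
```

Aggregating such cards per feature or per release is what makes the footprint of an AI product visible and comparable over time.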

As a bonus, this initiative also fits into Liip's wider sustainability strategy.


Early Adoption: From Optional to Systematic

The guidelines are progressively being considered on projects where AI usage is substantial enough to benefit from impact evaluation and optimisation. This “soft launch” phase enables teams to:

  • Evaluate which criteria are already met
  • Identify friction points
  • Uncover opportunities to propose sustainability add-ons to clients
  • Validate which recommendations should become baseline practice

This gradual integration is key to transforming the guidelines from a draft into standards for Liip.


Toward Transparency & Collective Progress: Open-Sourcing the Guidelines

Once consolidated through real-project feedback, we aim to open-source the guidelines. The AI industry still lacks clear, actionable standards for responsible practice. By sharing our approach, we want to contribute to a collective shift toward lower-impact, higher-transparency AI products, and to collaborate with others interested in this mission.

Alongside this evolution, we are preparing the beta launch of Lowwwimpact, our upcoming web sustainability evaluation platform. An additional module dedicated to sustainable AI practices is planned, and early adopters can already join the waiting list at:

👉 Lowwwimpact.com

Companies or individuals interested in collaborating, testing the guidelines, or contributing to their next iteration can sign up today and become part of the first testing cohort.


What Comes Next

Over the coming months, we will:

  • Consolidate the criteria and documentation based on the experience gained on projects
  • Keep the continuous-improvement loop running on live projects
  • Gather feedback from interested partners
  • Prepare the open-source release
  • Integrate the guidelines into Lowwwimpact as a dedicated AI chapter

This project started with a simple idea: to help our team make more conscious choices when building AI-powered digital products. Today, it has grown into a cross-circle effort, aligned with our sustainability strategy and creating new forms of value for clients, whether through optimisation, transparency, ethics, or new feature opportunities.

If you’d like to be part of the journey, let's get in touch.