The rapid advancement and widespread adoption of AI, including Large Language Models (LLMs) like ChatGPT, have generated significant hype, alongside a complex mix of expectations and concerns. One gets the impression that hardly any new application planned or launched today does without AI in some form.

As a digital agency, we see it as our job to take on such innovations and tools, explore their potential and gain experience. Our work with ChatGPT interfaces is particularly visible to the outside world. (We actively share related experiences in blog posts like here or here.) In workshops with clients or specific communities (like journalists and editorial teams), we are also experimenting with other applications, exploring possibilities and developing useful use cases.

While exploring these possibilities, we actively engage in sustainability and ethics discussions, both publicly and internally.

We perceive the situation as complex. In this blog post, I bring together the points we consider relevant in the discussion about the potential benefits and risks of AI. This is a snapshot of an ongoing development process.

Triple Bottom Line as Framework

Liip’s Purpose is aligned with the Triple Bottom Line: social, ecological and economic sustainability. The UN’s Sustainable Development Goals (SDGs) are a central reference point in our work. That's why I decided to also align this analysis with the Triple Bottom Line and to list, side by side, the potentials and challenges that arise in the development of and work with AI tools. I cannot delve into details here, but I share some resources for further information. (The references shared are recommendations from my colleagues: thank you!)

Social Sustainability

Strengths and potential

  • Education and Accessibility: AI can enhance educational experiences through personalized learning and can make information more accessible, for example to people with disabilities or those in remote areas. On a general level, the question-and-answer interaction of ChatGPT-like interfaces is very user-centered, and many user profiles can benefit from it.[1]
  • Empowering the Middle Class: Some experts believe AI could boost the middle class by challenging the high earnings often associated with seasoned professionals, like doctors or leading lawyers. The idea is that AI can combine detailed information and established rules with learned experience to assist in making decisions. This means a broader group of people, who have the basic required training, might be able to take on tasks that involve making significant decisions.[2]
  • Dangerous and Unpleasant Work: This article in unite.ai describes different potentials for AI and robotics to automate dangerous and unpleasant work, such as mining and underground operations, utilities and energy maintenance, or high-risk jobs in farming and agriculture.

Weaknesses and risks

  • Inequalities and Decent Work: Automation and AI could lead to significant job losses in certain sectors, particularly for routine or manual tasks, potentially exacerbating economic inequalities. This can become a threat to the people affected if they are not further educated and retrained. This dimension is particularly troubling because lower-paid workers often carry out essential but less recognized tasks like cleaning, labelling and preparing the large datasets used to develop and refine AI models. The Verge refers to this situation as the Bizarro Twin: “work that people want to automate, and often think is already automated, yet still requires a human stand-in”. A Times article discusses the example of jobs that were created thanks to OpenAI’s success with ChatGPT but that have stressful consequences for the workers carrying them out. The case described involves work that was outsourced by OpenAI to Kenyan workers who, for less than $2 per hour, had to train systems to recognize and remove toxic language like hate speech from the platform (a practice we have already seen with social media companies). The procedure: “feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild”. The labeling is handled by an outsourcing partner in Kenya that claims to have helped lift more than 50,000 people out of poverty. Such cases are difficult to evaluate: on the one hand, jobs are created; on the other hand, they are difficult for the workers to bear. Additionally, we must be aware that much of the world’s population, especially in poorer countries, will miss out on the benefits of AI due to a lack of access to basic digital infrastructure.[3]
  • Privacy and Security Threats: The accumulation and analysis of vast amounts of personal data raise significant privacy concerns. There is also an increased risk of cyber-attacks and misuse of personal information: for example, AI can enhance the sophistication and evasiveness of phishing attacks by using natural language processing to generate convincing fake messages (even video calls) or emails that mimic personal communications. The Economic Times further mentions the use of AI for surveillance and monitoring purposes (i.e. facial recognition). A report published by Deloitte in fall 2023 also highlights the promises of AI in cybersecurity, like improved threat detection and response times, enhanced automation for routine security tasks, scalability and adaptability.
  • Bias and Discrimination: AI systems can perpetuate or even exacerbate biases present in their training data, leading to discrimination in hiring, law enforcement, lending, and other areas.[4] (Which was actually already an issue for algorithmic systems using big data.)
  • Data Use, New Data Protection, and Copyright: Our colleague Stefan Huber already wrote a blog post on the critical importance of establishing ethical guidelines for AI training, addressing the nuanced challenges of incorporating copyrighted and copyleft material to foster fairness, accessibility, and innovation in the digital age. Another question is who will still produce trustworthy data if everyone is only interested in reusing it. Another colleague on our team points to the economic dimension: generative AI is trained on potentially copyrighted material, in a sense exploiting creative workers (authors, musicians etc.).[5]
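The labeling-then-training workflow quoted above ("feed an AI with labeled examples ... and that tool could learn to detect those forms of toxicity in the wild") can be sketched in a few lines. This is a deliberately minimal illustration using scikit-learn; the example texts and labels are hypothetical stand-ins for the large, human-annotated datasets real moderation systems depend on.

```python
# Minimal sketch of training a toxicity classifier from human-labeled text.
# Illustrative only: real systems use far larger datasets and stronger models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical examples produced by human annotators: 1 = toxic, 0 = acceptable.
texts = [
    "I will hurt you",
    "you are worthless trash",
    "have a great day",
    "thanks for your help",
]
labels = [1, 1, 0, 0]

# Turn text into word-frequency features, then fit a linear classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# The trained model can now score unseen text for human review.
prediction = classifier.predict(["hope you have a great week"])[0]
```

The point of the sketch is that the model itself is simple; the costly, and as described above often stressful, part is producing the labeled examples it learns from.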

Environmental Sustainability

Strengths and potential

  • Efficiency Gains: AI can optimize energy consumption, reducing waste and improving efficiency in sectors like manufacturing, transportation, and buildings. Here we should be aware that the obstacle is often not a lack of awareness that such potential exists, but a lack of pressure to exploit it (as was also shown, for example, by the energy crisis triggered by the war of aggression against Ukraine).
  • Renewable Energy Integration: AI can help balance power grids, predict energy supply and demand, and optimize the distribution of renewables, facilitating the transition to clean energy.
  • Environmental Monitoring and Protection: AI applications in satellite imaging or data analysis can help monitor deforestation, pollution, or biodiversity loss, enabling better environmental protection strategies. The collaboration between people and technology leads to the most promising outcomes.[6]

Weaknesses and risks

  • Energy and Water Consumption: Certain practices in machine learning, particularly the training of AI models, are especially resource-intensive. Several studies have highlighted the need for better estimates of their footprint, which doesn't become easier as companies have grown more secretive (read more in this recently published overview by The Verge). Especially given this large footprint, it becomes necessary to weigh up where the use of these technologies can be justified (because the benefits clearly outweigh it) and where it must be avoided. The increased energy consumption must also be seen in a larger context: the energy transition requires us to reduce our resource consumption overall. AI applications can help us increase efficiency, but in the overall calculation these gains must exceed the additional energy the applications themselves consume if they are actually to help us in this transition.[7]

Economic Sustainability

Strengths and potential

  • Productivity and Innovation: AI can drive productivity gains across many sectors, fostering innovation and potentially leading to new products, services, and markets. Unfortunately, such innovations do not per se contribute positively to the sustainability and biodiversity challenges we are facing.
  • Competitiveness: Companies and economies that effectively integrate AI can gain a competitive edge, attracting investment and talent. (Today, it is difficult to estimate how long such aspects will continue to be a comparative advantage.)

Weaknesses and risks

  • Economic Inequality: The benefits of AI could be unevenly distributed, leading to greater economic disparities between and within countries.
  • Job Losses: The Pew Research Center conducted a study examining which jobs are most threatened by AI. As profiles with high exposure to AI in the U.S., they identified budget analysts, data entry keyers, tax preparers, technical writers and web developers.
  • Market Concentration: The high costs and expertise required to develop AI could lead to market concentration, where a few large players dominate AI innovations, stifling competition and innovation.

Efforts to mitigate the negative effects while maximizing the positive include developing ethical AI guidelines, promoting transparent and inclusive AI research, and investing in AI literacy and education for ourselves and our employees and colleagues. And: using the resources we have consciously. AI is a tool. A powerful tool that must be used wisely. Not using the possibilities it provides, for sustainability reasons, would mean not using the potential that already exists, and potentially leaving it to others for their purposes. Let’s remain in the discussion and actively shape the world we live in and the tools we are using for the better.

What can we draw from all of this for AI-based projects on a meta-level?

We need a strong push for projects that implement AI-for-good. We need these projects to be open (Open data and Open Source) and apply an inclusive user approach. And we need a culture that drives these projects forward, not because AI is the current hype, but because it is the best solution in the specific case. In other words, AI solutions that provide digital progress and not only innovation.

This text is based on an ongoing, internal exchange on the topic of AI, on exchange with and feedback from individual colleagues, an exchange with ChatGPT 4 and language support by Google Translate. Thanks to everyone who contributed, especially Flavio, Grégoire, Max and David.

[1] Jakob Nielsen on the potential of Generative AI for Individualized UX (although we disagree with the Accessibility-is-too-expensive paragraph)
[2] David Autor on how AI could help rebuild the middle class
[3] Read deeper into these arguments in this article on AI’s potential for social good by Mark Purdy
[4] For a deep dive into this topic, one of my colleagues recommends this book: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O'Neil.
[5] Read the Authors and Performers Call for Safeguards Around Generative AI in the European AI Act.
[6] In this podcast episode, Karen Bakker explores how the emerging field of bioacoustics is unveiling complex animal communications and reshaping our ethical and conceptual understanding of the natural world, thanks to advancements in technology and AI.
[7] Find an ongoing debate on ChatGPT’s energy consumption in this forum, and an intro into AI’s contribution to water scarcity in this Forbes article by Federico Guerrini.