
Business Ethics in AI: A Wake-Up Call for the Industry

Artificial Intelligence has been evolving for decades, mainly behind the scenes. However, the breakthrough and widespread availability of generative AI technologies like ChatGPT and Dall-E have catapulted this technology into the public eye, sparking enthusiasm and concern.

 

Doomsayers warn that AI could lead to the end of humankind.1 While this may seem exaggerated, few would dispute that AI has the potential to inflict severe damage. This warrants attention from an ethical point of view in general and a business ethics point of view in particular.

 

The Big Tech monopoly

“I don't care if we burn $50 billion a year,” said OpenAI Chief Sam Altman, who reportedly seeks to raise an exorbitant USD 7 trillion for an AI chip venture. While AI technology promises advancements, intense market concentration and resource consolidation in a few companies endanger fair competition and innovation, leaving smaller enterprises dependent on Big Tech.

Image: freepik

Unlike other ground-breaking and potentially harmful technologies, which have often been developed in universities, some of the most powerful and widely available AI applications have been designed and developed primarily by private corporations driven by profit.

 

To hold these powerful companies accountable, we must ensure they adhere to robust notions of business ethics. The core elements of business ethics are corporate responsibility, accountability along the entire value chain and towards stakeholders, and safe and responsible products. All of these concepts have evolved significantly over the past decades. But how well are AI firms adapting to these modern standards of business ethics?

Recent trends give cause for concern. Major AI players seem predominantly focused on expanding their power and pursuing the development of so-called Artificial General Intelligence (AGI). AGI is only vaguely defined as a stage at which AI’s cognitive abilities would match those of humans. In their quest for what amounts to world dominance, AI leaders do not hesitate to demand exorbitant sums of money for a goal that remains poorly defined and disputed in terms of feasibility, desirability, and safety.2 This singular focus on money and power seems to leave little room to consider the rights of those who contribute to and are affected by AI advancements. Put differently, it is at odds with modern notions of corporate responsibility.

 

Simply saying “the business of business is business” might have been enough in the past.3 However, today’s public discourse and numerous national and international standards, norms, and regulations reflect an understanding of corporate responsibility that holds modern companies accountable for what happens in their own operations and across their entire value chain. They are accountable not only to regulators and shareholders but to a wide range of stakeholders throughout the entire lifecycle of their products.

Yet, supply chain considerations are often overlooked in the AI industry, and stakeholders are not fully acknowledged. For one thing, AI companies like to pretend, and users want to believe, that their products are mainly virtual and intangible and that the work behind them is done either by algorithms or by highly paid human engineers. More recently, however, it has become clear that this is not true. The term “Ghost Work” highlights the hidden labour force in developing countries performing precarious and underpaid click work. Recent revelations, such as OpenAI’s employment of workers in Kenya for less than USD 2 per hour to manually filter the most horrific texts (descriptions of child sexual abuse, murder, suicide, torture, self-harm, incest, and more) out of ChatGPT, underscore the harsh realities behind AI development.4

However, workers in the supply chain are not the only stakeholders who fly under the radar. Beyond the traditional stakeholders, such as suppliers, customers, employees, shareholders, and regulators, AI companies encounter two other, distinctly new types of stakeholders, and they do not pay them the attention they deserve.

 

Firstly, there are “human data sources”. These are the people whose data is used to train AI models. Many AI applications use data without explicit consent from their creators or owners. The term “human data sources” underscores the extractive relationship between AI companies and individuals. Are AI companies aware of these stakeholders? How do they handle their responsibilities towards them? The heated debate on intellectual property infringement in the training of large language models (LLMs) and image generation AI, the refusal to reveal which (and whose) data was used to train their models, and recent statements from top executives like Mira Murati, Chief Technology Officer of OpenAI, suggest that there is a long way to go.5


Secondly, there are the individuals to whom AI is applied. There isn’t a widely accepted term for these stakeholders yet, but we can think of them as “human objects of AI”. This term emphasises their passive role and the often involuntary impact of AI on them. Consider job applicants whose interviews are analysed by emotion recognition AI, welfare recipients whose credibility is judged by AI, borrowers scrutinised by AI for creditworthiness, and patients undergoing AI-driven medical diagnostics. These people are neither users nor automatic beneficiaries, because they do not actively use AI themselves and do not necessarily benefit from it. What is needed here is to empower these stakeholders so that they know their rights when subjected to AI-generated decisions. Regulators across the globe have picked up on this issue. In the EU, citizens will soon “have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights”.6 But to ensure that such legislation is not just a “paper tiger”, AI companies must provide tools that give these stakeholders genuine control.

Did AI clone her voice (or not)?

Actor Scarlett Johansson expressed shock and anger over OpenAI’s use of a voice eerily similar to hers in the GPT-4o version of ChatGPT. Even though OpenAI has denied the imitation and removed the voice, the incident raises concerns about the ethical practices of the tech industry, given generative AI’s capacity to replicate real people without their consent.

Photo: Entertainment Pictures / Alamy Stock Photo

 

Yet, this is a challenge, as becomes evident when we look at what is commonly discussed under the term “product safety”. Discussing corporate responsibility without addressing the critical issue of product safety would be futile. Product safety cases, for example in the automobile, consumer goods, or pharmaceutical industries, typically involve a product, a user, and victims. Sometimes the users are the victims; sometimes others are. When such cases are analysed carefully, it usually becomes apparent why a product failed: an error in construction, a lack of safety mechanisms, or insufficient trials, often driven by prioritising profits over consumer safety. In most cases, things can be fixed or banned, and those in charge can be held accountable.

 

Things are different when we look at AI. Advanced AI models are characterised by an inherent lack of transparency, and this constitutes one of the key impediments to ensuring the safety of AI. Despite efforts to arrive at “explainable AI”, it is often impossible to describe how AI arrives at its conclusions. Yet, the more power we cede to AI and its creators, the more devastating the potential consequences of its errors, and the more important it becomes that we understand why they happen.

In light of this, it is particularly concerning when some of the most prominent minds in AI, who were crucial in developing potentially dangerous AI, warn that their innovation could mean the end of humankind. After all, this is not the first time that “technological progress” has been linked to existential risks. It is reminiscent of debates on the existential dangers of nuclear technology in the 20th century. However, since atomic technology was mainly in the hands of governments rather than profit-seeking private companies, these debates fell under technology and environmental ethics rather than business ethics. One of the fundamental principles from this debate, which plays a vital role in modern environmental regulation, is the precautionary principle of the German-Jewish philosopher Hans Jonas. It states: “Act so that the effects of your action are compatible with the permanence of genuine human life” or, conversely, “Do not compromise the conditions for an indefinite continuation of humanity on earth”.7 Unfortunately, this principle appears to be largely disregarded by those at the forefront of AI development.

Eye on human data sources

On 22 May 2024, the cryptocurrency project Worldcoin, which requires participants to scan their irises with an “Orb” for a “World ID digital passport” in exchange for free crypto tokens, was ordered to cease all operations in Hong Kong for breaching privacy laws. Worldcoin has faced similar privacy issues in Spain, Portugal, and Kenya. The company insists the data is encrypted and secure.

Photo: Worldcoin X

The product safety challenge is further compounded by the emergence of “general-purpose AI”. Traditional products are designed for specific functions, where misuse is clearly defined (e.g., using a toaster for anything other than making toast), and the damage they can cause in the hands of an average user is limited. Yet, as the name suggests, general-purpose AI can be adapted for an almost unlimited range of applications. This begs the question: Can we ever achieve product safety for general-purpose AI? How could you test the safety of AI for every single imaginable purpose? Are AI companies that create general-purpose AI willing to assume responsibility for every application? Or is the term “general-purpose AI” indicative of the hubris of an industry that only looks at its outputs in terms of power but hardly ever in terms of responsibility?

A call to pause and ponder

“Pause Giant AI Experiments” is an open letter urging all AI labs to halt the training of systems more powerful than GPT-4 for at least six months, citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. It has been signed by more than 30,000 people, including tech and business leaders like Elon Musk and Steve Wozniak.

Photo: Thinkhubstudio / iStock

The rapid and extensive development of AI has generated a lot of excitement. AI is exceptional in many ways, and influential figures cultivate this impression with bold claims, threats, and promises. However, we should not lose sight of the fact that profit-seeking companies control the most influential AI applications. In terms of business ethics, these companies must be held to the same fundamental standards as any other company: to act with integrity and to ensure that what they do and sell serves humanity, and not the other way around. ∞

AI incident surge highlights safety urgency

In 2023, the AI Incident Database, an open-source project that indexes the collective history of harms or near-harms caused by AI systems, logged 121 incidents, a 30% increase from the previous year. These incidents constitute one-fifth of all those reported between 2010 and 2023, underscoring the urgent need for enhanced AI safety measures and oversight.

Image: gorodenkoff / 123rf

DR DOROTHEA BAUR

Dr Dorothea Baur is the founder and owner of Baur Consulting and an independent expert with many years of international and interdisciplinary experience in the field of ethics, responsibility, and sustainability. She advises companies, pension funds, foundations, and NGOs on issues related to corporate sustainability, sustainable investments (ESG), and artificial intelligence and ethics, with a focus on the financial and technological industries. She is also a keynote speaker and has spoken at TEDx Zürich, the Futurist of the Year 2024, and the UNESCO Forum. Prior to founding her own consulting company, Dr Baur was a lecturer and researcher at leading European business schools. She continues to lecture at universities and has published in leading academic journals.

JULY 2024 | ISSUE 12

NAVIGATING THE AI TERRAIN

About

Leaders and changemakers of today face unique and complex challenges. The HEAD Foundation Digest features insights and opinions from those in the know addressing a wide range of pertinent issues that factor in a society’s development. 

Informed opinions can inspire healthy discussions and open up our imagination to new possibilities. Interested in contributing? Write to us at info@headfoundation
