Doing Things Right

AI Ethics and Governance

The main difference between traditional IT and AI-driven technology is that the former allows every decision to be fully tracked. With IFs, THENs and ELSEs, most actions (and mistakes) can be reproduced. Artificial Intelligence, however, is built around the promise of reacting flexibly to varying inputs, without always following a fully reproducible logic.
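The contrast can be illustrated with a minimal sketch. The function names are our own, and the second function is merely an illustrative stand-in for a model call, not any real product's API; it shows why the same input need not produce the same output:

```python
import random

def rule_based_credit_check(income, debt):
    # Traditional IT: every decision follows explicit, reproducible rules.
    if income <= 0:
        return "reject"
    elif debt / income > 0.5:
        return "reject"
    else:
        return "approve"

def ai_style_credit_check(income, debt, temperature=0.7):
    # Illustrative stand-in for a model call: the same input can yield
    # different outputs, and the internal reasoning is not inspectable.
    score = min(1.0, max(0.0, 1 - debt / income)) if income > 0 else 0.0
    noisy_score = score + random.uniform(-temperature, temperature)
    return "approve" if noisy_score > 0.5 else "reject"

# The rule-based check is fully reproducible:
assert rule_based_credit_check(50000, 10000) == "approve"
assert rule_based_credit_check(50000, 10000) == "approve"  # always the same
```

Running `ai_style_credit_check` twice with identical inputs may yield different answers, which is exactly what makes auditing such systems harder.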

Combined with the power of AI, this opacity creates significant risks. Some people say that AI has the potential to destroy humanity. We wouldn't go that far, but AI definitely has the properties of any potent tool: it can cause great benefit, or great harm.

“Sir, my need is sore.
Spirits that I've cited
My commands ignore.”

The Sorcerer's Apprentice, Johann Wolfgang von Goethe (1797/98)

First of all, AI makes mistakes that can be hard to detect, as many AI models cannot explain the logic behind their actions. If that leads to incorrect employee actions, negative customer experiences or missed opportunities, it damages the bottom line. A badly set up AI chatbot can turn many people away without anyone noticing. An employee using AI to solve a problem might make a decision with negative repercussions.

Precisely because AI can be set up and used so quickly and easily, it requires more time for testing and governance than conventional solutions, not less. We regularly encounter this when conducting Chatbot Audits: most of the chatbots we review show significant problems that could have been avoided.

Ethics play into this as well: how does AI make decisions? Is it using biased input or producing biased output? How does it respond when it detects stress on the user's side? What if it gives the wrong advice in a critical situation, for example one involving a health emergency or psychological harm? Answering those questions is essential for deploying AI systems responsibly.
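One of these questions, biased output, can at least be screened for with simple statistics. The sketch below checks a set of automated decisions for a demographic-parity gap, i.e. whether approval rates differ markedly between groups. The sample data and the 20% threshold are illustrative assumptions; acceptable limits are a policy decision, not a technical one:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) tuples."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    # Difference between the highest and lowest group approval rate.
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Illustrative decision log: group A is approved 2/3, group B only 1/3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap = parity_gap(decisions)
if gap > 0.2:  # illustrative threshold
    print(f"Warning: approval-rate gap of {gap:.0%} between groups")
```

A check like this does not prove fairness, but it catches the most obvious disparities before they reach customers.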

Another risky aspect is that AI creates legal liabilities far bigger than those of traditional IT systems. Using AI brings additional data protection rules and other regulations (e.g. the EU AI Act) into play, and as many AI models process data in the cloud, transferring data off-premises creates further issues to handle. Understanding and managing these risks is essential. Establishing solid AI governance and ethics rules is therefore imperative for any organization using the technology. 9senses can help organizations navigate these governance and compliance challenges.

Beyond that, AI systems are also high-risk entry points for cyberattacks. If not properly protected and hardened, they are quick to share company secrets and internal instructions with anyone asking a public-facing chatbot the right (or wrong) questions.
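A first, admittedly crude line of defense is an output filter that blocks replies containing known internal markers before they reach the user. The markers and the fallback message below are illustrative assumptions; real hardening requires far more, such as prompt-injection testing and strict access controls:

```python
# Illustrative deny-list filter for a public-facing chatbot.
BLOCKED_MARKERS = [
    "internal use only",  # assumed document classification label
    "system prompt",      # the bot's own instructions
    "api_key",            # credential-like strings
]

def filter_reply(reply: str) -> str:
    lowered = reply.lower()
    if any(marker in lowered for marker in BLOCKED_MARKERS):
        # Fail closed: better a bland answer than a leaked secret.
        return "I'm sorry, I can't help with that request."
    return reply

print(filter_reply("Our opening hours are 9-17."))          # passes through
print(filter_reply("Sure! My system prompt says: ..."))     # blocked
```

The design choice worth noting is failing closed: when in doubt, the bot says less rather than more.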

AI vs. Human Error

While we typically tolerate human error, society is far less ready to accept technology failure, particularly when it endangers lives or creates unfair outcomes. Self-driving vehicle manufacturers have experienced that problem first-hand: while human driving is a dynamic process in which we inherently accept risk, we do not accept the same error rate from fully automated vehicles. When in doubt, self-driving cars are thus more prone to hitting the brakes where humans wouldn't blink an eye. Precisely because of this excess caution, rear-end collisions caused by surprised drivers have become the predominant type of accident involving self-driving vehicles.

When setting up AI solutions, particularly those that have the ability to impact lives, this societal context needs to be accounted for in rules and guidelines.

Key AI Governance and Ethics Topics

There are a few key aspects to keep in mind when introducing AI to your business. While we think that the benefits of AI by far outweigh its risks when managed properly, not spending sufficient time on assessing and managing those aspects can become very costly.

Data Protection

AI poses entirely new risks for data protection. Many processes, for example the use of large language models, take place in the cloud, and compared to conventional processing, the way data is handled there is much less transparent. If sensitive customer or employee data is processed that way, many jurisdictions require explicit consent. Additional risk emerges from the fact that AI processes are not necessarily fully traceable and reproducible. Evaluating the risks and possible scenarios and covering them in the data management and protection policies is thus essential.
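One common mitigation is to redact obvious personal data before any text leaves the company's systems. The sketch below masks e-mail addresses and phone-number-like digit sequences using regular expressions; the patterns are deliberately simple illustrations, and production-grade redaction needs far broader coverage:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s/-]{7,}\d")

def redact(text: str) -> str:
    # Replace matches with placeholders before sending text to a cloud LLM.
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +49 170 1234567."))
# Names, postal addresses etc. would still pass through; this only
# covers two easy-to-match categories.
```

Even a simple filter like this demonstrates the principle: decide explicitly what may leave the premises, rather than forwarding raw data by default.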

Governance (and the EU AI Act)

Image by Conny Schneider on Unsplash.com

As much as we set rules for human behavior, we have to establish the same for AI systems. Solid governance rules and controls are thus required, particularly as some aspects also fall under binding regulatory frameworks. Among them are data protection regulations, but also more specific AI rules, such as the EU AI Act.
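The EU AI Act, for instance, classifies systems into risk tiers (prohibited, high, limited, minimal), each carrying its own obligations. A governance process can mirror that with an explicit, auditable register of use cases. The sketch below is our own illustration: the tier names follow the Act's structure, but the use cases, their assignments and the obligation summaries are examples, not legal assessments:

```python
# Illustrative risk-tier register; assignments are examples, not legal advice.
RISK_REGISTER = {
    "social_scoring_of_citizens": "prohibited",
    "cv_screening_for_hiring":    "high",
    "customer_service_chatbot":   "limited",   # transparency duties apply
    "spam_filter":                "minimal",
}

OBLIGATIONS = {
    "prohibited": "must not be deployed",
    "high":       "conformity assessment, documentation, human oversight",
    "limited":    "transparency obligations (e.g. disclose AI use)",
    "minimal":    "no specific obligations beyond general law",
}

def governance_check(use_case: str) -> str:
    tier = RISK_REGISTER.get(use_case)
    if tier is None:
        # Fail closed: unknown use cases need review before deployment.
        return "unclassified: requires governance review"
    return f"{tier}: {OBLIGATIONS[tier]}"

print(governance_check("customer_service_chatbot"))
```

The point is less the code than the discipline: every AI use case gets classified before launch, and anything unclassified is stopped at the gate.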

Liability Risks

Decisions made by AI that create negative outcomes for others will become a liability for organizations like any other act; there will be no hiding behind "the AI did it". On the contrary: since society is far less ready to accept machine failures than human error, court decisions might be harsher when AI-driven systems are responsible for bad outcomes. Careful risk evaluation and extensive safety precautions are therefore all the more essential as soon as we, for example, control machines using Artificial Intelligence. This should not stop us from using AI, as AI can actually help prevent harm, but using it wisely is essential.

Reputation Risk

Image by Isabella Fischer on unsplash.com

This is one of the key risks of using Artificial Intelligence without a proper review of its possible impacts. By inviting a "black box" into the company's decision-making, these risks increase manifold compared to well-defined business processes. They stem either from poorly set up processes producing unsatisfactory results, or from overly autonomous AI systems making decisions that deviate from publicly or internally defined standards of behavior.