Responsible artificial intelligence means responsible use


SOURCE: MG.CO.ZA
SEP 29, 2021

AI cannot be responsible itself, but there are ways to ensure decisions made by AI are not discriminatory. (Science Photo Library/Sergi Laremenko)

The concept of responsible artificial intelligence (RAI) has been coming to the fore in recent months. With new regulations in certain parts of the world looking to govern artificial intelligence (AI), and analysts ranking smarter, scalable AI as a top priority, many organisations are shifting their focus to finding the guidance and tools needed to use AI responsibly.

Of course, the term RAI can be misleading. It is not the responsibility of AI to be, well, responsible. This is akin to expecting a car to be responsible as opposed to its driver. Instead, it comes down to the responsible use of AI.

There are four main reasons why organisations need to examine this issue. Firstly, there is the accelerated and expanded use of AI. Secondly, regulation and compliance must be considered. Thirdly, there is risk mitigation, and finally, there are customer expectations.

1. AI is becoming ubiquitous

While organisations should always ensure that their decisions do not discriminate against certain customers, this is even more important when such decisions are delegated to AI. AI expands our ability to make decisions faster and at larger scales. Coupled with the fact that we are continuously finding new use cases and applying AI in more areas, this means it can have a significant impact on customers, good or bad. There is always risk attached to each application.

2. Regulation and compliance are being given attention

This brings us to the second reason: regulation and compliance. To date, many countries have developed their own AI guidelines and best practices, but these are not yet part of a legal framework. More recently, the European Commission has proposed hard legislation that applies to AI across all sectors and represents the first attempt at a dedicated and comprehensive AI framework. While the process will be lengthy, the proposal will eventually become law and will likely inspire similar legislation in other countries. Canada, Japan and Singapore are already working on their own regulatory frameworks.

3. AI is not infallible and risk mitigation is necessary

From a risk mitigation perspective, there are many examples where AI has got things wrong. While some are amusing, others are serious enough to damage an organisation's reputation. Plenty of articles call out AI applications that have gone wrong, and nobody wants to make headlines for the wrong reasons.

4. Customers expect transparency

It’s not just about avoiding negative press. On the flip side, organisations can use RAI as a differentiator. As customers, we expect transparency. Customers increasingly want to know more about the products they are offered and purchase. They care about whether the products they buy are made from recycled materials, or whether the materials are sourced ethically. Similarly, they want to know why they were offered a particular product at a particular price. RAI ensures that organisations are able to answer these questions when customers ask. Some organisations have begun to see RAI as an opportunity to differentiate themselves in their customers’ minds.

Attempt to contain bias

One of the biggest drivers of biased and unethical AI is, of course, biased data. Firstly, data must be an accurate representation of reality: it must reflect the demographics of a population, such as gender, race and age. Next, we need to remove the bias that can exist even in data that is a perfect representation of reality. All humans are biased, which means all data is biased, because our biased decisions are recorded in the data. We need to identify these biases to ensure we don't perpetuate and amplify them going into the future.
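As a rough illustration of these two checks, the sketch below first compares a dataset's demographic make-up against external population benchmarks, then compares historical outcome rates across groups. The column names, benchmark figures and data are hypothetical, not drawn from any real system.

```python
import pandas as pd

# Hypothetical training data with a protected attribute column.
df = pd.DataFrame({
    "gender": ["female", "male", "male", "female", "male", "male"],
    "approved": [0, 1, 1, 1, 1, 0],
})

# Assumed external benchmark: each group's share of the real population.
population_benchmark = {"female": 0.51, "male": 0.49}

# Check 1: does the dataset's demographic mix match reality?
sample_share = df["gender"].value_counts(normalize=True)
for group, expected in population_benchmark.items():
    observed = sample_share.get(group, 0.0)
    print(f"{group}: {observed:.0%} of data vs {expected:.0%} of population")

# Check 2: even perfectly representative data can encode biased past
# decisions, so compare historical outcome rates across groups.
print(df.groupby("gender")["approved"].mean())
```

A large gap in either check does not prove bias on its own, but it flags where the data needs closer human scrutiny before a model is trained on it.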

Ethical considerations

But bias isn’t the only problem. AI can also behave in unexpected ways. It is very much a case of being careful what you wish for if you do not understand the fine print. An extreme example: scientists ask an AI to eradicate cancer on Earth, and the AI identifies that the most expedient way to do so is to eliminate all humans. A more practical example is a fashion retail system that analyses customers' transactional data as well as their social media posts. It can detect that a person is most likely to go on a shopping spree when they are emotionally vulnerable, and see that as the perfect time to send out promotions. That is manipulating human behaviour, which is unethical and exploits emotions.

Fundamentally, there is no one-size-fits-all approach when it comes to RAI. Instead, people should ask critical questions about why and how AI is used, and monitor it continually. For example, should we use gender information in AI models? For granting credit, we know the answer is “No”, but for medical diagnosis it’s “Yes”, because certain diseases are gender-specific. While it might be impossible to rule out all bias completely, we must understand how the AI is functioning to prevent it from perpetuating certain stereotypes, and to identify when it is doing something unexpected.
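One way to make that continual monitoring concrete is a simple demographic-parity check on a deployed model's decisions. The sketch below is only an illustration: the decision log, group labels and tolerance threshold are assumptions, and real thresholds depend on context and regulation.

```python
import pandas as pd

# Hypothetical log of decisions made by a deployed credit model.
decisions = pd.DataFrame({
    "gender": ["female", "male", "female", "male", "female", "male"],
    "offered_credit": [0, 1, 1, 1, 0, 1],
})

# Demographic parity: compare the rate of positive outcomes per group.
rates = decisions.groupby("gender")["offered_credit"].mean()
gap = rates.max() - rates.min()
print(rates)

# An assumed tolerance for illustration only.
MAX_GAP = 0.10
if gap > MAX_GAP:
    print(f"Warning: outcome gap of {gap:.0%} exceeds the {MAX_GAP:.0%} tolerance")
```

Run periodically against fresh decision logs, a check like this can surface drift towards discriminatory outcomes long before it becomes a headline.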
