
The Three Core Pillars Of Responsible AI

Forbes Technology Council

João Graça is the co-founder and CTO of Unbabel.

We’d all like to think our technology has a positive impact on people’s lives (or at least a neutral one). But too many examples of the negative effects of AI algorithms keep surfacing in the headlines. As global corporate investments in AI set a pace to exceed a staggering $120 billion by 2025, it's more critical than ever for companies to embrace responsible AI.

Responsible AI is a framework for organizations to figure out how to address both the legal and ethical challenges surrounding AI. While many of the largest developers of AI have established centers or guidelines around responsible AI, even the smallest companies must create best practices around the responsible use of AI in their products and services. 

When done right, responsible AI can open up new opportunities to solve serious problems with technology. As algorithms are trained to replicate human decision-making in a broader context (and in sensitive industries such as healthcare), it becomes more urgent for all AI technologists to take ownership of responsible AI. To break it down, let’s look at three of the discipline’s core pillars: fairness, explainability and sustainability.

Pillar 1: Fairness

Perhaps the most important pillar, fairness speaks to attempts to correct algorithmic biases. There are plenty of examples of AI bias, from recruiting tools that rank candidates for technical jobs in favor of men to algorithms that land people in jail with little evidence. The consequences are too shocking and too frequent for us to stay on the path we are on today.

These biases arise because neural network models are black boxes. When you train one, you are fitting the network’s parameters to a real-world dataset. Many of these massive datasets are drawn from the language of the internet, which can often be biased and even vitriolic. Training on biased data produces biased behavior. In some cases, the unexamined cognitive biases of the people developing the algorithms can creep in as well.

To create fairer algorithms, we need to increase the diversity of the people who work with them. For example, women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. In addition, we need to continuously refine our models with human feedback in the loop, as well as modify the biased data on which we train them.
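As a simplified illustration of what "modifying biased data" can mean in practice, the Python sketch below reweights a hypothetical training set so that an underrepresented group contributes as much to training as the majority group. The examples and group labels are invented; real debiasing work involves far more than reweighting, but the principle of correcting the imbalance before the model ever sees the data is the same.

```python
from collections import Counter

# Hypothetical training examples: (text, group) pairs.
# In a real pipeline the group label might come from annotation or metadata.
examples = [
    ("resume mentions robotics club", "male"),
    ("resume mentions robotics club", "male"),
    ("resume mentions robotics club", "male"),
    ("resume mentions debate team", "female"),
]

# Count how often each group appears in the data.
group_counts = Counter(group for _, group in examples)
n_groups = len(group_counts)
total = len(examples)

# Weight each example inversely to its group's frequency, so every group
# contributes the same total weight to the training loss.
weights = [total / (n_groups * group_counts[group]) for _, group in examples]

for (text, group), w in zip(examples, weights):
    print(f"{group:6s}  weight={w:.2f}  {text}")
```

In this toy run, each example from the minority group receives a weight of 2.0 and each majority example a weight of roughly 0.67, so both groups carry equal influence during training.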

Pillar 2: Explainability

Explainability is closely related to fairness but is more focused on the transparency of the technology itself. According to a recent study from FICO, 65% of companies can’t explain how their AI models’ decisions are made. Only 35% of companies have taken steps to ensure they’re using AI transparently and with accountability. 

Imagine, for example, that a neural network predicts that a patient has a certain disease. Equipped with this diagnosis, the doctor is advised to start a course of medication. But if the doctor isn’t 100% sure about the disease, should they really start medicating the patient? While this is an extreme example, it points to the lack of explainability of algorithms. The doctor should not only receive the diagnosis but also be able to understand the reasoning behind it.
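To make that idea concrete, here is a deliberately simple Python sketch in which a toy linear risk model’s prediction is broken down into per-feature contributions a clinician could inspect. The features, weights and patient values are all invented for illustration; explaining a deep neural network requires more sophisticated attribution methods, but the goal is the same: show which inputs pushed the prediction up or down.

```python
import math

# Hypothetical linear risk model: weights assumed to have been learned elsewhere.
features = ["age", "blood_pressure", "glucose", "bmi"]
weights  = [0.03, 0.02, 0.05, 0.01]
bias     = -6.0

# One made-up patient, with inputs on the same scale as the training data.
patient = [62, 140, 110, 31]

# Each feature's contribution to the model's raw score (the logit).
contributions = [w * x for w, x in zip(weights, patient)]
logit = bias + sum(contributions)
risk = 1 / (1 + math.exp(-logit))  # squash the score into a probability

print(f"Predicted risk: {risk:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:15s} contributes {c:+.2f} to the score")
```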

This is one of the areas the research team at our company works on most. Our machine learning algorithms translate text from one language to another. While this sounds simple in theory, poor translations can be confusing at best and offensive at worst. It’s important for us to be able to explain why a certain translation was made and take steps to refine the algorithm continuously.

Pillar 3: Sustainability

It takes a lot of compute power to train machine learning algorithms. Not only is this power usage costly, but its environmental impact is staggering. In fact, research from UMass Amherst shows that training a single AI model can emit as much carbon into the atmosphere as five cars during their lifetimes. At the AI sector’s current rate of growth, that environmental impact is not sustainable.

There are many attempts to reduce the amount of power required to train and run these models in production. Some techniques are meant to shrink model size or optimize larger models so they can run on low-power, energy-efficient equipment. Other ideas to reduce impact include providing an environmental score for models, much like the “Energy Star” seal on an appliance. Ideally, the industry will converge around a set of principles and best practices for making machine learning models more energy-efficient.
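One common technique in that family is post-training quantization, which stores a model’s weights at lower precision so it can run with a smaller memory and energy footprint. The sketch below applies PyTorch’s dynamic quantization to a small placeholder network; the layer sizes are arbitrary, and the actual savings depend on the model and the hardware it runs on.

```python
import os
import torch
import torch.nn as nn

# A small placeholder model standing in for a much larger trained network.
model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 256),
)

# Convert the Linear layers to 8-bit integer weights for inference.
# Dynamic quantization needs no calibration data or retraining.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Rough on-disk size of a model's parameters, in megabytes."""
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"float32 model: {size_mb(model):.1f} MB")
print(f"int8 model:    {size_mb(quantized):.1f} MB")
```

Quantization is only one lever; pruning, distillation and simply choosing smaller architectures serve the same goal of cutting the energy a model consumes in production.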

Raising Executive-Level Awareness For Responsible AI

While AI practitioners involved in the day-to-day work are aware of these issues with algorithms, many companies still struggle to gain support for responsible AI initiatives. The same FICO research cited above showed that 78% of companies find it hard to secure executive-level support for responsible or ethical AI initiatives. 

A good way to raise executive-level awareness around the benefits of responsible AI is to establish a set of best practices your company can follow that map back to these three core pillars. To make it easier, the World Economic Forum has created a toolkit for boards to support and understand various aspects of AI, including ethics, risk, responsibility and sustainable development. Taking action is crucial. Not only is it good for business and meeting regulatory requirements, but it’s the right thing to do.



