Using Responsible AI to Design a Better Tomorrow

January 19, 2022

Artificial intelligence is the new electricity — it’s rapidly transforming every industry, enhancing our lives, and creating huge economic value. However, left unchecked, AI also has the potential to inflict harm upon humanity. We have seen cases where it has discriminated against minority groups and unintentionally promoted hateful and violence-inciting speech. How can we ensure that AI remains a force for positive change while reducing its capacity to cause harm? 

To address this complex problem, we joined forces with nine other startups (including global industry leaders like Feedzai and Talkdesk) and six AI research centers to form the Center for Responsible AI. Together, we are leading the charge to develop the next generation of AI products, built ethically and used to change society for the better.

Read on to learn more about the three core pillars of responsible AI: fairness, explainability, and sustainability. 

Leveling the playing field 

Fairness, the first pillar of responsible AI, focuses primarily on reducing biases and their negative consequences — both in the machines themselves and in our society as a whole. There have been many headlines in recent years that underscore the impact of biased AI, such as the Apple Card algorithm that gave much higher credit limits to men than to women.
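To make this kind of disparity concrete, here is a minimal sketch of one common first-pass check, the "four-fifths rule" on approval rates. The numbers are invented for illustration and do not describe the Apple Card case; this is one narrow diagnostic, not a definition of fairness.

```python
# A minimal disparity check on a toy dataset. The figures are invented
# for illustration and do not describe any real credit product.
approvals = {
    "group_a": (480, 1000),  # (approved applications, total applications)
    "group_b": (350, 1000),
}

# Approval rate per group, and the ratio of the lowest rate to the highest.
rates = {group: approved / total for group, (approved, total) in approvals.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
# The "four-fifths rule" from US hiring guidelines flags ratios below 0.8.
print(f"disparate-impact ratio: {ratio:.2f} (flag if below 0.80)")
```

A check like this catches only what it measures, which is exactly why a broader, socially grounded notion of fairness matters.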

Much like with humans, it’s impossible to eliminate 100% of biases in machines. Even when a company claims to have found the antidote to biased AI, its models can still turn out to harbor disturbing prejudices. How can we increase fairness in AI while minimizing unintended negative outcomes? 

We must consider the fact that fairness is an inherently socio-technical challenge. Fairness relates not just to the technology itself, but to the societal context it originates from and is deployed into. What is considered fair in one country or culture may not be considered fair in another. As in many areas of the modern business ecosystem, we need to promote greater diversity among the people who develop and work with AI, as well as in the data used to train our systems. 

The Center for Responsible AI will work toward a more holistic definition of fairness through multi-stakeholder conversations in which experts engage in dialogue with users and local communities to discuss what fairness means to them. 

Understanding why AI behaves the way it does 

Explainability, the second pillar of responsible AI, is interlinked with fairness and deals with transparency about how the technology works. AI is often seen as a “black box,” meaning that people can see what goes into it and what comes out, but don’t understand what happens in between. Sometimes even the engineers who create an AI model cannot explain what is happening inside the system that leads to a certain output. 
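To show the contrast in miniature, here is a sketch of a model that is not a black box: a linear classifier whose prediction decomposes into readable per-feature contributions. It assumes scikit-learn and a public demo dataset, and it is illustrative only, not Feedzai's or Unbabel's actual tooling.

```python
# A minimal sketch of an interpretable model, assuming scikit-learn is
# installed. Illustrative only; not Feedzai's or Unbabel's tooling.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# For a linear model, the log-odds are a sum of per-feature terms, so any
# single prediction can be explained feature by feature.
scaler = model.named_steps["standardscaler"]
clf = model.named_steps["logisticregression"]
contributions = scaler.transform(X.iloc[:1])[0] * clf.coef_[0]

# Print the five features that pushed this prediction hardest either way.
top = sorted(zip(X.columns, contributions), key=lambda t: -abs(t[1]))[:5]
for name, value in top:
    print(f"{name:>25s}: {value:+.2f}")
```

Deep neural systems rarely decompose this cleanly, which is why post-hoc explanation tools and human-readable layers like the ones described below matter.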

In the Apple Card example mentioned above, the total lack of AI explainability led to an awkward situation for the company’s support teams when customers came looking for answers and solutions to the gender-biased credit limit debacle. The agents essentially had to tell customers, “We don’t know why this happened, and we don’t have a way to fix it.” 

Our friends at Feedzai, on the other hand, were among the first to champion the importance of explainable AI systems. Their RiskOps platform uses machine learning to process events and transactions in milliseconds and identify instances of potential financial crime. It includes a human-readable semantic layer that helps analysts understand the underlying machine logic. 

In a similar vein, we introduced MT-Telescope to help those working with neural machine translation models understand why one system earns a higher or lower COMET score than another. Because machine decision making can be difficult to put into words, technologists at both Unbabel and Feedzai rely on visual analysis tools to identify patterns and trends that enrich their interpretation of the AI being used. In our own product, we also employ human-in-the-loop AI so that humans can intervene when needed.
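For a flavor of what a COMET comparison looks like, here is a minimal sketch using the open-source unbabel-comet package. The model name and return type follow the v2.x API and vary between package versions; the example sentences are invented, and this is not our production pipeline.

```python
# A minimal sketch of comparing two MT systems with COMET, assuming
# `pip install unbabel-comet` (v2.x API; details vary by version).
from comet import download_model, load_from_checkpoint

model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))

src = ["O gato sentou-se no tapete."]
ref = ["The cat sat on the mat."]
system_a = ["The cat sat on the mat."]
system_b = ["The cat seated itself in the carpet."]

def system_score(hypotheses):
    # COMET scores each (source, hypothesis, reference) triple, then
    # aggregates the segment scores into a system-level score.
    data = [{"src": s, "mt": m, "ref": r} for s, m, r in zip(src, hypotheses, ref)]
    return model.predict(data, batch_size=8, gpus=0).system_score

print("System A:", system_score(system_a))
print("System B:", system_score(system_b))
```

A score alone says which system is better on average; MT-Telescope's visual analyses help answer where and why.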

In particular, we want to avoid critical translation mistakes, such as producing toxic or gender-biased translations or dropping important information. These kinds of critical mistakes are intolerable in many real-world use cases – imagine a translated medicine leaflet that omits a side effect. The consequences can be catastrophic. We just kicked off our QUARTZ project, which aims to develop new AI techniques for Responsible MT precisely to tackle this problem and make MT usable in domains such as life sciences, finance, e-commerce, and legal.
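As one hedged illustration of how human-in-the-loop gating could work, the sketch below scores a translation with a reference-free quality-estimation model and routes low-scoring output to a human translator. The model name and threshold are assumptions for illustration, not choices from the QUARTZ project.

```python
# A minimal sketch of human-in-the-loop routing. The QE model name and the
# threshold are assumptions, not values used by Unbabel or QUARTZ.
from comet import download_model, load_from_checkpoint

# Reference-free ("quality estimation") models score src/mt pairs directly.
qe_model = load_from_checkpoint(download_model("Unbabel/wmt22-cometkiwi-da"))
HUMAN_REVIEW_THRESHOLD = 0.75  # hypothetical cut-off

def route(src: str, mt: str) -> str:
    prediction = qe_model.predict([{"src": src, "mt": mt}], batch_size=8, gpus=0)
    if prediction.scores[0] < HUMAN_REVIEW_THRESHOLD:
        return "route to human translator"  # possible critical error
    return "deliver machine translation"

print(route("Pode causar sonolência.", "May cause drowsiness."))
```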

Making the world a better place 

The third pillar of responsible AI, sustainability, is about tackling the environmental implications of AI development. Training a state-of-the-art machine learning model requires significant energy — in some cases, the process can emit as much carbon dioxide as five vehicles do across their lifespans. One of the solutions that partners of the Center for Responsible AI will collaborate on is the development of a “green score” for rating AI models on their energy efficiency. Similar to Energy Star-certified appliances, we want a seal of approval that designates when an AI model has achieved a certain level of sustainable energy performance. 
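To give a feel for the arithmetic behind such a score, here is a back-of-the-envelope emissions estimate in the style of common ML carbon calculators. Every number is an assumption for illustration, not a measurement of any particular model.

```python
# A back-of-the-envelope training-emissions estimate. All inputs are
# assumptions; a real green score would use measured values.
NUM_GPUS = 8
GPU_POWER_KW = 0.3           # ~300 W per GPU under load (assumed)
TRAINING_HOURS = 24 * 14     # two weeks of training (assumed)
PUE = 1.5                    # data-center power usage effectiveness (assumed)
GRID_KG_CO2_PER_KWH = 0.4    # grid carbon intensity; varies by region (assumed)

energy_kwh = NUM_GPUS * GPU_POWER_KW * TRAINING_HOURS * PUE
emissions_kg = energy_kwh * GRID_KG_CO2_PER_KWH

print(f"estimated energy: {energy_kwh:,.0f} kWh")
print(f"estimated emissions: {emissions_kg:,.0f} kg CO2e")
```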

It’s also important to recognize that AI possesses vast potential to accelerate sustainability efforts in other areas of business and public interest. For example, researchers have already shown that responsible AI can help address several of the United Nations’ Sustainable Development Goals, including: 

  • Promoting gender equality

  • Expanding access to decent work and economic growth for all

  • Improving industry, innovation, and infrastructure

  • Reducing societal inequality 

At Unbabel, we are keen to explore how AI can be used to address more of the UN’s Sustainable Development Goals and to further the positive impact we can make within our lifetimes. For instance, we developed the AI Moonshot Challenge in conjunction with the Foundation for Science and Technology, the Portuguese Space Agency, and the European Space Agency. 

For this challenge, we invite some of the brightest minds in AI to tackle complex societal issues through innovative uses of AI and emerging space technologies. Specific areas of interest include supporting the development of a carbon-free society, managing and preserving vital land resources, and monitoring and reducing maritime waste on a global scale. 

Raising awareness about Responsible AI 

Responsible AI is a team effort — the more businesses that commit to understanding where problem areas exist and taking action to address them, the better. We believe that endeavors such as the Center for Responsible AI are proof that our industry is headed in a positive direction when it comes to using machine learning to build a more just and equitable society while mitigating the potential for unintended negative outcomes. Learn more about how we are working to connect the world through language and give back to the AI community.

About the Author

Paulo Dimas

Paulo Dimas is the VP of Product Innovation at Unbabel, helping to build the world’s translation layer by combining AI with a global community of human translators. Having joined when Unbabel was a 12-person team, Paulo has supported the company’s growth through three rounds of funding totaling $88 million by creating game-changing AI products. His passion for startups and products led him to co-found two startups and, at the age of 14, to develop and launch his first commercial product.