Explainable AI (XAI)

Usually, when an AI system produces a prediction, its end users won’t know how it arrived at it. In effect, the system is a “black box”.

If an AI black box on an e-commerce website suggests new music to a consumer, the risks involved are low. But what happens when AI-powered software turns down a mortgage application for reasons that the bank can’t explain? What if AI flags a certain category of individual at airport security with no apparent justification? How about when an AI trading algorithm makes suspicious bets on the stock market?

XAI, or ‘explainable AI’, is an attempt to build openness, transparency and explicability into AI solutions. XAI systems will have the ability to explain their rationale, characterise their strengths and weaknesses, and convey an understanding of how they will behave in the future.

One approach is to include the decision reasoning in the output labels. For example, without XAI, a neural network trained to recognise cats would identify a cat and return a label such as “this is a cat”. With XAI, in the same scenario the system would return something like “this is a cat because it has fur, claws, whiskers and pointed ears”.
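The idea can be sketched in a few lines of Python: a toy classifier that returns not just a label, but the features that drove the decision. The feature names, weights and threshold here are purely illustrative, not taken from any real model.

```python
# Toy sketch of an "explainable" classifier: alongside the label,
# it reports which input features contributed to the decision.
# Feature names, weights and the threshold are illustrative only.

CAT_WEIGHTS = {"fur": 0.3, "claws": 0.2, "whiskers": 0.3, "pointed ears": 0.2}

def classify_with_explanation(features, threshold=0.5):
    """Return a label plus the features that justified it."""
    present = [f for f in CAT_WEIGHTS if features.get(f)]
    score = sum(CAT_WEIGHTS[f] for f in present)
    if score >= threshold:
        return f"this is a cat because it has {', '.join(present)}"
    return "this is not a cat"

print(classify_with_explanation(
    {"fur": True, "claws": True, "whiskers": True, "pointed ears": True}))
```

A real system would derive the explanation from the trained model itself (for example, from feature attributions), but the principle is the same: the output carries its own justification.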

Figure 18. How AI appears to users TODAY. Image - DARPA[i]


Figure 19. How AI NEEDS to appear to users. Image - DARPA[i]

[i] DARPA XAI - https://www.darpa.mil/program/explainable-artificial-intelligence

But even if an AI system can give reasons for its outputs, users may still not trust it if they can’t understand how it works. Leaders may also be reluctant to invest in AI if they can’t see evidence of how it makes its decisions.

AI solutions therefore need to be built so that the reasoning and modelling behind each decision can be understood, along with the degree of mathematical certainty attached to it.

Truly explainable AI requires every step of the process to be documented and clearly explained. This may well make obtaining the benefits of AI solutions more expensive, but XAI will reduce risks and help establish much needed stakeholder trust.

IEEE, the Institute of Electrical and Electronics Engineers, has developed standards for artificial intelligence affecting human well-being[ii]. It also identifies four forms of transparency needed for AI[ii]:

· Engineering (design and maintenance)
· User
· Professional (AI ‘plumbers’)
· Legal

[ii] https://transmitter.ieee.org/new-ieee-standards-artificial-intelligence-affecting-human-well/
