Five critical questions to explain Explainable AI

Explainable AI is a critical element of the broader discipline of responsible AI. Responsible AI encompasses ethics, regulations, and governance across a range of AI-related risks and issues, including bias, transparency, explicability, interpretability, robustness, safety, security, and privacy.

Who to Explain To?

End users: consumers who receive an explanation of a decision, action, or recommendation made by an AI system.

How to Explain?

There are a number of different modes for explanation, such as visualizations (for example, saliency maps or importance plots), natural-language text, and numeric feature-attribution scores; one mode is sketched below.
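
As a hedged illustration of the textual mode, the sketch below renders numeric feature attributions as a plain-language sentence. The feature names, scores, and the `textual_explanation` helper are hypothetical, not taken from any particular library.

```python
# Minimal sketch: rendering one explanation "mode" (natural-language text)
# from per-feature attribution scores. The feature names and scores below
# are hypothetical placeholders, not outputs of a real model.

def textual_explanation(attributions: dict[str, float], top_k: int = 3) -> str:
    """Summarize the top-k attributions as a short sentence."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = [
        f"{name} {'raised' if score > 0 else 'lowered'} the score"
        for name, score in ranked[:top_k]
    ]
    return "The decision was driven mainly by: " + "; ".join(parts) + "."

if __name__ == "__main__":
    attributions = {"income": 0.42, "credit_history": 0.31, "age": -0.12, "zip_code": 0.03}
    print(textual_explanation(attributions))
```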

What is the Explanation (technique)?

There are six broad approaches to post-hoc explainability.
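
One widely used post-hoc, model-agnostic technique is permutation feature importance. As a minimal sketch, assuming scikit-learn and a synthetic dataset chosen purely for illustration, the example below trains a classifier and then measures feature relevance after the fact.

```python
# Minimal sketch of one post-hoc technique: permutation feature importance.
# scikit-learn and a synthetic dataset are assumed purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model first; the explanation is generated afterwards.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```

Because this style of explanation only needs the model's predictions, it can be applied to any trained model, which is what makes it post-hoc and model-agnostic.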

Why Explain?

The need for an explanation depends on the audience, that is, on the answer to the "who" question above. End users require an explanation of the decision or action recommended by the AI system in order to carry out the recommendation.

When to Explain?

Ex-ante: explainability is addressed before or during model development, typically by choosing a directly interpretable model.

Ex-post: the model is trained and tested first, and then the explanation is generated.
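
A minimal sketch of the ex-ante case, again assuming scikit-learn and synthetic data: a directly interpretable model is chosen up front, so its learned coefficients serve as the explanation without any separate step. An ex-post explanation, by contrast, is computed after training, as in the permutation-importance sketch above.

```python
# Minimal sketch of ex-ante explainability: pick a directly interpretable
# model up front, so no separate explanation step is needed afterwards.
# scikit-learn and a synthetic dataset are assumed purely for illustration.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=3, noise=0.1, random_state=0)

model = LinearRegression().fit(X, y)

# The fitted coefficients are themselves the explanation: each weight says
# how much the prediction moves per unit change in that feature.
for i, coef in enumerate(model.coef_):
    print(f"feature_{i} weight: {coef:.2f}")
```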
