Council Post: Explainable AI and its impact on creating a data-driven culture


SOURCE: ANALYTICSINDIAMAG.COM
FEB 16, 2022

Over the years, the AI & ML domain has evolved by leaps and bounds. Despite this progress, AI/ML models still face a few challenges, including:

  • Lack of explainability and trust.
  • Security, privacy, and ethical regulations.
  • Bias in AI systems.

These challenges can make or break AI systems.

With the rapid evolution of ML, metrics beyond raw accuracy have gained importance, giving rise to explainable AI (XAI). As accuracy-versus-explainability plots typically show, the two qualities tend to pull in opposite directions. For instance, deep learning techniques offer higher accuracy but little interpretability, whereas decision trees lag in predictive performance but are easy to explain.
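
To make that trade-off concrete, the following sketch is a minimal, hedged example (not from the article; the synthetic dataset and model choices are illustrative assumptions). It trains a shallow decision tree whose decision rules can be printed verbatim, next to a gradient-boosted ensemble that typically scores higher but offers no such human-readable summary.

# Minimal sketch of the accuracy/explainability trade-off using scikit-learn.
# The dataset is synthetic and the models are illustrative choices only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The shallow tree's full decision logic is human-readable...
print(export_text(tree, feature_names=[f"f{i}" for i in range(10)]))

# ...while the ensemble usually scores higher but cannot be summarised this way.
print("tree accuracy:   ", tree.score(X_test, y_test))
print("boosted accuracy:", boosted.score(X_test, y_test))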

Enter explainable AI (XAI)

As models grow more complex, developers often fail to understand why the system arrived at a specific decision. This is where explainable AI comes into play.

Explainable AI (XAI) aims to explain how the black-box decisions of AI systems are made. According to ResearchAndMarkets, the global XAI market size is estimated to reach USD 21.03 billion by 2030, growing at a CAGR of 19% (2021-2030).

XAI is a catch-all term for the movements, initiatives, and efforts made in response to AI transparency and trust issues.

According to the Defense Advanced Research Projects Agency (DARPA), XAI aims to produce more explainable ML models while maintaining a high level of prediction accuracy.

Today, explainable AI (XAI) is a hot topic across industries, including retail, healthcare, media and entertainment, and aerospace and defence. For example, in retail, XAI helps predict upcoming trends along with the reasoning behind the forecasts, allowing the retailer to manage inventory better. In ecommerce, explainable AI helps make sense of the recommendation system's suggestions, which are based on customers' search history and spending habits.

Need for XAI

In general, the need to explain AI systems arises for four reasons:

Explain to justify: XAI provides an auditable and provable way to show that algorithmic decisions are fair and ethical, which builds trust.

Explain to control: Understanding system behaviour provides greater visibility into unknown vulnerabilities and flaws, helping teams identify and correct errors quickly and keep the system under control.

Explain to improve: When users know why the system produced a specific output, they also know how to make it smarter. Thus, XAI can be the foundation for further iterations and improvements.

Explain to discover: Asking for explanations can help users learn new facts and gather actionable insights from the data.

Data-driven culture

Interpretable ML is a core concept of XAI. It helps embed trust in AI systems by bringing fairness (predictions made without discernible bias), accountability (predictions traceable back to something or someone), and transparency (an explanation of how and why predictions are made).
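
As a hedged illustration of the transparency point, the sketch below uses model-agnostic permutation importance from scikit-learn (one of several possible techniques, not a method prescribed by the article) to show which input features a trained model actually relies on, giving reviewers something concrete to audit. The dataset and model are placeholder choices.

# Permutation importance: shuffle each feature and measure the drop in test
# accuracy; a large drop means the model leans heavily on that feature.
# The dataset and model below are placeholders for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features, most important first.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]:<25} {result.importances_mean[idx]:.3f}")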

Most importantly, understanding how the AI system makes its decisions leads to better AI governance within the organisation and improves model performance.

Also, knowing how and why a model works, and why it fails, enables ML engineers and data scientists to optimise it and helps create a data-driven culture. For instance, understanding model behaviour across different input data distributions exposes biases in the input data, which ML engineers can use to make adjustments and produce a fairer, more robust model.
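
One simple way to act on that idea is to compare a model's performance across subgroups of the input data. The sketch below is a minimal example under stated assumptions: a pandas DataFrame with a hypothetical sensitive-attribute column named "group", a label column, and an already fitted model. Large gaps between groups flag distributions where the model may be biased.

# Per-group evaluation sketch; "group", "label" and the feature columns are
# hypothetical names, and `model` is assumed to be already trained.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_group(model, df, feature_cols, label_col="label", group_col="group"):
    """Return per-group accuracy so large gaps can be investigated and corrected."""
    rows = []
    for group, subset in df.groupby(group_col):
        preds = model.predict(subset[feature_cols])
        rows.append({"group": group,
                     "n": len(subset),
                     "accuracy": accuracy_score(subset[label_col], preds)})
    return pd.DataFrame(rows)

# Example usage (hypothetical column names):
# report = accuracy_by_group(model, df, feature_cols=["age", "tenure", "spend"])
# print(report)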
