How Explainable AI (XAI) for Health Care Helps Build User Trust – Even During Life-and-Death Decisions.

Picture this: You’re using an AI model when it recommends a course of action that doesn’t seem to make sense. However, because the model can’t explain itself, you’ve got no insight into the reasoning behind the recommendation. Your only options are to trust it or not – but without any context.

It’s a frustrating yet familiar experience for many who work with artificial intelligence (AI) systems, which in many cases function as so-called “black boxes” that sometimes can’t even be explained by their own creators. For some applications, black box-style AI systems are completely suitable (or even preferred by those who would rather not explain their proprietary AI). But in other contexts the consequences of a wrong AI decision can be far-reaching and extremely damaging. Mistakes made by AI systems in the justice system or health care, for example, can ruin a person’s life or livelihood – which, in turn, erodes public and user trust in these systems and undermines their usefulness.

That’s why explainable AI (XAI) is so crucial for the health care industry. Providers and patients need to know the rationale for significant AI recommendations such as surgical procedures or hospitalizations. XAI provides interpretable explanations in natural language or other easy-to-understand representations, allowing doctors, patients, and other stakeholders to understand the reasoning behind a recommendation – and more easily question its validity, if necessary.

When and in what context is XAI necessary?

AI is used by health care professionals to speed up and improve numerous tasks, including decision-making, predictions, risk management, and even diagnosis by scanning medical images to identify anomalies and patterns undetectable by the human eye. AI has become an essential tool for many health care practitioners, but is often not easily explainable – leading to frustrations among providers and patients, especially when making high-stakes decisions. 


According to Ahmad et al., XAI is required in any of the following scenarios:

  • When fairness is paramount, and when end-users or customers need an explanation to make an informed decision
  • When the consequences of a wrong AI decision are far-reaching (such as a recommendation for unnecessary surgery)
  • When the cost of a mistake is high, such as malignant tumor misclassification leading to unnecessary financial costs, elevated health risks, and personal trauma
  • When a new hypothesis is drawn by the AI system that must be verified by domain or subject matter experts 
  • For compliance purposes, such as under the EU’s General Data Protection Regulation (GDPR), which provides a “right to an explanation” when decisions about users are made by automated systems

Some experts say the relatively slow adoption of AI systems in health care is because of the near-impossibility of verifying results from black box-type systems. “Doctors are trained primarily to identify the outliers, or the strange cases that don’t require standard treatments,” explains Erik Birkeneder, a digital health and medical devices expert, in Forbes. “If an AI algorithm isn’t trained properly with the appropriate data, and we can’t understand how it makes its choices, we can’t be sure it will identify those outliers or otherwise properly diagnose patients.”

Indeed, it’s virtually impossible for a doctor to double-check a diagnosis made by a complex deep learning system – such as verifying suspicious masses flagged in MRIs or CT scans – without knowing the context behind that diagnosis. This ambiguity is also a recurring issue for the U.S. Food and Drug Administration (FDA), Birkeneder notes, because the agency is responsible for validating and approving AI models in health care. An FDA draft guidance from September 2019 says doctors must be able to independently verify AI systems’ recommendations or risk having these systems re-classified as medical devices (which carry more stringent compliance standards).

Achieving XAI in health care: Keep it simple (for now)

As Cognilytica’s Ron Schmelzer explains, the easiest way to achieve functional XAI in health care is through algorithms “that are inherently explainable.” That means instead of complicated deep learning or ensemble methods such as random forests, simpler solutions such as decision trees, regression models, Bayesian classifiers, and other transparent algorithms can be used “without sacrificing too much performance or accuracy.”
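To make that concrete, here is a minimal sketch of what “inherently explainable” looks like in practice, using scikit-learn and a purely hypothetical readmission dataset (the feature names, data, and label below are invented for illustration). With a model such as logistic regression, the explanation is the model itself: the fitted coefficients can be read directly as the weight each clinical feature contributes to a recommendation.

```python
# Minimal sketch of an "inherently explainable" model.
# Assumes scikit-learn; the features, data, and "readmitted" label are purely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

feature_names = ["age", "bmi", "systolic_bp", "prior_admissions"]  # hypothetical clinical features

# Toy data standing in for a real, de-identified patient dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 3] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Each coefficient is the weight a (standardized) feature contributes to the
# predicted log-odds of readmission -- a reason a clinician can inspect directly.
coefficients = model.named_steps["logisticregression"].coef_[0]
for name, weight in sorted(zip(feature_names, coefficients), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {weight:+.2f}")
```

A decision tree offers a similar kind of transparency: its learned rules can be printed and reviewed line by line, which is precisely the property a black-box deep network lacks.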

Birkeneder predicts these performance issues will eventually dissipate as explainable algorithms improve and become “the dominant algorithms in health care.” But the fact remains that even though these algorithm types are more explainable, they currently aren’t as powerful and can’t handle nearly as many use cases as more complex ML techniques. Thomas Lukasiewicz, AXA Chair in Explainable Artificial Intelligence for Healthcare at the University of Oxford, adds that along with being less explainable, deep learning algorithms can also be biased or have robustness issues.

However, researchers Bellio et al. say the performance vs. explainability debate misses the point somewhat, and that sacrificing performance for explainability isn’t necessary over the long term. In their view, health care XAI should perform like any other high-performance product such as a car or laptop computer, where most users don’t know the intricate workings under the hood but can quickly notice when something isn’t working correctly.

“We are not expecting humans to grasp sophisticated calculations,” they write. “In fact, explanations should be tailored on what matters to the person, and not on a deep understanding of the model. One way to do that could be by interpreting the outcome, and understanding when the system works or not. For example, I care that my car moves, stops, and steers as I need, in a safe and efficient way, but not really about the engine’s functioning details. So, what makes me trust my car? It is not only confidence in the manufacturing process, but also my ability to promptly detect when something is not working.”

Bellio et al. go on to explain three ways advanced AI for health care can be made more explainable and trustworthy while not sacrificing performance:

  1. Better accounting of generalization errors: Ways to more easily identify deviations or generalization errors in the algorithm could help users recognize when the model isn’t working correctly and judge whether its suggestions are trustworthy (a simplified version of this idea is sketched after this list).
  2. Role-dependent XAI: The level of explanation an AI system needs to provide largely depends on the user’s role – a doctor likely requires more detailed context behind an AI suggestion than an HR staffing planner, for example. Tailoring explanations to user roles and requirements could lead to more satisfactory outcomes.
  3. Interactive user interfaces: Intuitive graphical user interfaces can help users explore a system’s behavior and gain a fuller understanding of a particular AI system’s accuracy.
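On the first point, one very simplified way to surface “the model may not be working here” is to flag predictions whose confidence falls below a threshold and route those cases to a clinician. The function below is a rough sketch under that assumption – the 0.8 threshold is arbitrary, and a production system would rely on calibrated probabilities and genuine out-of-distribution checks rather than raw classifier confidence.

```python
import numpy as np

def triage_predictions(model, X, confidence_threshold=0.8):
    """Return predictions plus a flag for cases that should go to human review.

    `model` is assumed to be any scikit-learn-style classifier exposing predict_proba;
    the threshold is a placeholder, not a clinically validated value.
    """
    probabilities = model.predict_proba(X)
    confidence = probabilities.max(axis=1)      # confidence of the predicted class
    predictions = probabilities.argmax(axis=1)
    needs_review = confidence < confidence_threshold
    return predictions, confidence, needs_review

# Usage, continuing the earlier sketch:
# preds, conf, review = triage_predictions(model, X_new)
# for i in np.flatnonzero(review):
#     print(f"Case {i}: predicted class {preds[i]} at {conf[i]:.0%} confidence -- refer to a clinician")
```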

Riding the ‘third wave’ of AI capabilities

According to Oxford University’s Thomas Lukasiewicz, the limitations of current XAI systems mean a “third wave” of AI technology is now required. First-wave AI systems are rule- or logic-based, and second-wave systems constitute machine learning and deep learning. The third wave, he says, must combine the strengths of the first two while compensating for their weaknesses. Where the first is good at reasoning and the second good at statistical learning and prediction, “A very natural idea is thus to combine them… and to create a third wave of AI systems that we also call neural-symbolic AI systems.”
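Genuine neural-symbolic systems weave logical reasoning into the learning process itself, which goes well beyond a blog snippet. But as a loose, toy illustration of why pairing a learned prediction with explicit symbolic rules yields explanations people can actually read, consider the hypothetical sketch below – the rules, features, and thresholds are invented, and this is not Lukasiewicz’s actual method.

```python
def rule_layer(patient):
    """Hand-written clinical rules (all hypothetical) that can justify or override a learned score."""
    reasons = []
    if patient["creatinine"] > 2.0:
        reasons.append("elevated creatinine suggests renal impairment")
    if patient["age"] > 80 and patient["on_anticoagulants"]:
        reasons.append("advanced age plus anticoagulant use raises bleeding risk")
    return reasons

def explainable_recommendation(learned_risk_score, patient, risk_threshold=0.7):
    """Combine a learned risk score with symbolic rules into a recommendation plus readable reasons."""
    reasons = rule_layer(patient)
    if learned_risk_score > risk_threshold:
        reasons.append(f"learned model estimates risk at {learned_risk_score:.0%}")
    return {"refer_to_specialist": bool(reasons),
            "reasons": reasons or ["no rule fired and the learned risk score is low"]}

print(explainable_recommendation(0.82, {"creatinine": 1.1, "age": 67, "on_anticoagulants": False}))
# -> {'refer_to_specialist': True, 'reasons': ['learned model estimates risk at 82%']}
```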

This new type of AI, the professor explains, has massive implications for health care, including improved disease prevention, more effective and cheaper diagnosis, and better design of treatments and pharmaceutical products – along with being inherently explainable for all types of users, from doctors and hospital staffers to patients and insurance companies.
