
Science Journal: Beware explanations from AI in health care

Artificial intelligence and machine learning (AI/ML) algorithms are increasingly being developed in health care for the diagnosis and treatment of a variety of medical conditions. However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of their users, who, for example, are often vulnerable to well-documented biases or algorithm aversion.


Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake. As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions. Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.


By: Boris Babic, Sara Gerke, Theodoros Evgeniou and I. Glenn Cohen


Stay ahead of the curve – sign up to receive the latest policy and tech advice impacting your business.


Further articles

DSA Compliance

Digital Services Coordinators: Who are they?

Like any regulation, the success of the Digital Services Act (DSA) hinges not only on its wording but, above all, on its enforcement. To this end, the DSA establishes a detailed and robust enforcement framework: the European Commission is not the sole enforcer; instead, Digital Services Coordinators (DSCs) are assigned a critical role within Member…

Best practices in trust & safety

What does it take to make your business LLM and GenAI proof?

Theodoros Evgeniou* (Tremau), Max Spero* (Checkfor.ai). Arguably, the "person of the year" for 2023 has been AI. We have all been taken by surprise by the speed of innovation and the capabilities of Large Language Models (LLMs) and, more generally, generative AI (GenAI). At the same time, many, particularly in online platforms, raise questions about potential…
