2022-04-28 14:08 | Press release

New MedTech Guidance on Risk Management for AI, Machine Learning

Artist's interpretation: An alarmed man sees that his doctor is a robot/artificial intelligence.

AAMI this month published a consensus report (CR) for identifying, evaluating, and managing risk for healthcare technology that incorporates artificial intelligence (AI) or machine learning (ML).

AAMI CR34971:2022, Guidance on the Application of ISO 14971 to Artificial Intelligence and Machine Learning, responds to an urgent need. Existing standards for regulated medical devices do not yet adequately address the potential risks of emerging AI and ML applications, which “could jeopardize patient health and safety, increase inequalities and inefficiencies, undermine trust in healthcare, and adversely impact the management of healthcare,” the CR states.

For those familiar with the widely used international standard ISO 14971:2019, Medical devices—Application of risk management to medical devices, AAMI’s CR is a must-have companion for risk management of AI- or ML-enabled medical systems and devices.

“We intentionally structured the CR to be easy for people who know 14971 to use,” said Pat Baird, co-chair of the AAMI Artificial Intelligence Committee and senior regulatory specialist at Philips. “Readers are probably aware of the existing companion document, 24971, which provides guidance on how to use 14971. We modeled the structure of 34971 to be similar to a section in 24971 about risk management for in vitro diagnostics. The idea is that the risk management process is the same, and here are a few new ways that this particular technology can fail that you might not have thought about.”

Baird is part of a small task force of the AAMI Artificial Intelligence Committee that developed CR34971, which was reviewed by the full committee and by risk analysis experts at the British Standards Institution (BSI). The committee then approved the consensus-driven report. AAMI and BSI plan to use this CR as the basis for an AAMI technical report and a British Standard.

“We hope to complete the AAMI technical report and the BSI standard sometime this year,” said Joe Lewelling, senior advisor on content and strategy at AAMI. “The AAMI committee,” which includes clinical, manufacturing, regulatory, information technology, and risk management expertise, “is working hand in hand with a similarly focused BSI committee on these documents.”

Longer term, AAMI and BSI expect to propose these resources to the International Organization for Standardization (ISO) as guidance, informative, or annex documents to ISO 14971 or ISO 24971.

Learning from Other Industries

To develop the CR, “we conducted a literature review for ML failures in multiple industries, in an attempt to learn from others that have gone before us,” Baird said.

Additionally, the task force reviewed prepublication documents from ISO/IEC JTC 1/SC 42, the artificial intelligence subcommittee of a joint ISO/IEC technical committee, which is developing a series of horizontal (cross-sector) standards that address such issues as bias management.

The CR offers insights into how risk management systems and processes can be adapted for AI and ML medical devices. It also details safety-related characteristics and considerations in five areas:

  1. Data management
  2. Bias
  3. Data storage, security, privacy
  4. Overtrust
  5. Adaptive systems

The CR includes informative annexes covering the risk management process, risk management examples, considerations for autonomous systems, and personnel qualifications.

For example, personnel qualifications apply to people developing AI- or ML-enabled products. “One of the things we noticed in the literature about ML systems is that many times failures occurred because, although the development team had data, they didn’t have knowledge,” Baird said. “Developers had logical assumptions regarding the use of their product, but the reality was different, leading to failure. To be successful, we really need to understand the context of use and leverage the wisdom around us. We felt it was important to stress this point when discussing risk management.”

Next Steps

The AAMI Artificial Intelligence Committee is now turning its attention to other issues that warrant exploration and consensus, such as change control for systems that continue to learn over time.

“The use of ML in healthcare has the potential to make significant improvements in the delivery of healthcare, but only if those products are safe and effective,” Baird said. “So the logical first step is to address safety-related concerns. New technologies always introduce new risks, and good risk management is obviously a key driver for the medical device sector. We always put safety first, and improvements to performance can come a little later.”

Learn More

Machine Learning AI in Medical Devices: Adapting Regulatory Frameworks and Standards to Ensure Safety and Performance



About AAMI

AAMI (www.aami.org) is a nonprofit organization founded in 1967. It is a diverse community of more than 10,000 healthcare technology professionals united by one important mission—supporting the healthcare community in the development, management, and use of safe and effective health technology. AAMI is the primary source of consensus standards, both national and international, for the medical device industry, as well as practical information, support, and guidance for health technology and sterilization professionals.


Contacts

Brian Stallard
Director of News and Media Relations