
10th October 2022

AI CREATES HEADACHES FOR MEDICAL DEVICE MAKERS

Medical devices based on artificial intelligence (AI) present a unique set of challenges for regulators, with no comprehensive standards in place to address them.

It’s an issue that the International Medical Device Regulators Forum (IMDRF) has recognised.

Currently, the regulatory requirements in the European Union (EU) and other major medical markets do not address the uniqueness and complexity of AI-based medical devices and devices incorporating machine learning (ML) technologies. 

Too often, technology that can vastly improve patient outcomes is developed faster than the regulatory standards that govern it. This creates issues for manufacturers seeking approval for new devices that run on frequently updated software, where each update has the potential to change the safety and effectiveness of the device.

AI-based medical devices process large troves of data from electronic medical records, searching for patterns that predict future outcomes. Treatments are then recommended based on the information that has been gathered. AI can assist in spotting subtle but serious changes in a patient’s condition that are not yet visible to, or noticed by, doctors.
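As a rough illustration of this pattern-finding idea only, the sketch below fits a toy risk model to synthetic, EHR-style tabular features and scores held-out patients. The feature names, data, and model choice are assumptions for illustration, not a description of any approved device.

```python
# Minimal, illustrative sketch: a toy risk model trained on made-up tabular
# features standing in for electronic-medical-record data (all assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical EHR-style features: e.g. age, heart rate, lactate, creatinine.
X = rng.normal(size=(1000, 4))
# Synthetic "deterioration within 48 hours" label loosely tied to the features.
y = (X @ np.array([0.8, 0.6, 1.2, 0.4]) + rng.normal(size=1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # predicted risk per held-out patient
print("AUC on held-out patients:", roc_auc_score(y_test, risk))
```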

AI and adaptive technology

AI-based medical devices and software as a medical device (SaMD) adapt to new information and changing conditions quickly. They also optimise their performance in real-time by leveraging advanced algorithms and large quantities of data acquired through their routine use. This results in a significant improvement in the overall quality of healthcare. 
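A minimal sketch of this adaptive behaviour, using a generic online learner updated batch by batch on synthetic data (the features, batch sizes, and choice of learner are assumptions): the point is simply that each update can shift the model’s behaviour after deployment.

```python
# Hedged sketch of a model that updates incrementally as new data arrives
# during routine use; the data stream here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss", random_state=1)

for batch in range(5):  # each batch stands in for newly acquired field data
    X_new = rng.normal(size=(200, 4))
    y_new = (X_new[:, 0] + X_new[:, 2] + rng.normal(size=200) > 0.5).astype(int)
    model.partial_fit(X_new, y_new, classes=np.array([0, 1]))
    # After every update the decision boundary, and so the risk profile, can shift.
    print(f"batch {batch}: coefficients = {np.round(model.coef_[0], 2)}")
```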

Artificial intelligence has an inexhaustible list of use cases in medicine, from identifying symptoms to making diagnoses. It can assist in surgery, predict when an epidemic will break out, and undertake hospital administrative work, such as making appointments and registering patients. AI holds significant promise to transform and improve patient care and safety in the Emergency Room (ER) and Intensive Care Unit (ICU).

However, the technology also poses some challenges when it comes to assessing safety – the most important factor considered in the approval of medical devices. Traditionally, medical device safety assessment is based on pre-established and clearly defined risk-assessment principles and practices, as well as a detailed roadmap. 

However, AI-enabled technologies can adapt their predictions to reflect accumulated data, and this is a major challenge.

Artificial intelligence is not simply a mathematical instrument that runs on the data humans feed it; it can also constantly acquire and update information on its own. Most algorithms and data models in medical technologies with integrated AI capabilities are dynamic, continuously learning and adapting their functionality in real time to enhance performance.

This means that in contrast to static program code, which can be assessed line by line for its suitability, evaluating AI functionality is less transparent because the initial product and the already deployed device may present different risk profiles. 

Assessing quality and quantity

A thorough assessment of AI functionality must evaluate both data quantity and data quality, since they directly influence the performance of the algorithm.

These two – data quantity and data quality – are not without issues. Data quality is affected by disparities in the alignment of the data set with the data models being used, errors in data labelling, and biases in the selection and collection of data. 

The complexity of the AI algorithm, and of the problem the model is tasked with solving, must be taken into consideration when assessing the quantity of data required to validate AI models.
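As a hedged illustration of such checks, the snippet below computes a few simple quality and quantity indicators on a made-up data set; the column names, the tiny sample, and the events-per-predictor rule of thumb are assumptions, not regulatory requirements.

```python
# Illustrative data-quality and data-quantity checks of the kind described above.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":     [64, 71, np.nan, 58, 80],
    "lactate": [1.2, 3.4, 2.1, np.nan, 4.0],
    "label":   [0, 1, 0, 0, 1],  # e.g. deterioration within 48 hours
})

# Data quality: how much is missing, and is the outcome class badly imbalanced?
missing_rate = df.drop(columns="label").isna().mean()
positive_rate = df["label"].mean()
print("missing rate per feature:\n", missing_rate)
print("positive-class rate:", positive_rate)

# Data quantity: one common rule of thumb for clinical prediction models is
# roughly ten outcome events per candidate predictor (an assumption here).
n_events = int(df["label"].sum())
n_predictors = df.shape[1] - 1
print("events per predictor:", n_events / n_predictors)
```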

Itoro Udofia, the director of Medical Health Service at TÜV SÜD, a global product testing and certification organisation, wrote in an article recently published on Med-Tech News: “Organisations developing medical technologies with integrated AI capabilities should strongly consider taking a more expansive approach in assessing the safety of their products.

Such a holistic approach would address every aspect of the product planning and development process and extend beyond the initial product release date to include rigorous post-market surveillance activities.”
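One way such post-market surveillance could look in practice, offered only as a hedged sketch: compare the deployed model’s field performance against the value locked in at approval and flag the device when it drifts. The metric, baseline, and alert threshold below are hypothetical.

```python
# Hedged sketch of post-market performance monitoring for a deployed model.
import numpy as np
from sklearn.metrics import roc_auc_score

APPROVAL_AUC = 0.85  # hypothetical performance recorded at validation
ALERT_DROP = 0.05    # hypothetical tolerated degradation before review

def check_post_market(y_true, y_score):
    """Flag the device for review if field performance drifts too far."""
    auc = roc_auc_score(y_true, y_score)
    if auc < APPROVAL_AUC - ALERT_DROP:
        return f"ALERT: field AUC {auc:.2f} is below the approved baseline"
    return f"OK: field AUC {auc:.2f}"

# Synthetic field data standing in for outcomes collected after deployment.
rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.3, size=500), 0, 1)
print(check_post_market(y_true, y_score))
```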

At present, there are no coordinated standards specifically addressing the unique performance aspects of AI technologies. Where regulations do exist in major jurisdictions, they address only specific aspects of software evaluation.

TAKE PART IN MED-TECH WORLD 2022

It is now estimated that the global digital health market will grow to around $640 billion by 2026. Through our expertise, coupled with optimised networking, we will ensure that both investors and startups are on the ground floor of this health revolution. The event, organised and curated alongside a team of doctors, attracts legislators and policymakers, medical professionals, and investors from across the world, and addresses the opportunities and challenges driving this multi-billion-dollar market.