
10th February 2023

Is it time for an AI regulatory framework?

MedTech World is a global summit that brings together leading companies, startups, investors, healthcare professionals, and the media to share health-tech ideas. It offers a platform where healthcare professionals can explore and help shape products that will impact the future of medicine. One of our panel sessions at the November 2022 Malta Week was themed: Is it time for an AI regulatory framework?

We were honoured to have experts share their wealth of knowledge with us. The invited panellists were Daniel Young, Managing Director at Bold Works; Hugh Harvey, Managing Director at Hardian Health; and Daniel Torres Gonçalves, health lawyer, Head of the Health Economic Unit, and Partner at PRA-Raposo, Sá Miranda & Associados. Also speaking were Vipul Shrivastava, Project Manager – Digital Health at the Centre for Cellular and Molecular Platforms, and Nidhi Gani, Senior Regulatory Affairs, Cybersecurity and Data Privacy at ICAD.

What are the current challenges with trying to get a CE mark on an AI product? What does the process look like?

Hugh Harvey: The regulations are a very logical set of standards that have grown over time, borrowing lessons learnt from physical medical devices and pharmaceutical medicines. The basic premise is that before you can put something on the market that’s going to change the care pathway or affect a patient, you have to do certain things, such as write down what your product does. Once you know what it does, you can start thinking about how to prove that it does what it does.

There are three things you have to prove: that there’s a valid clinical association between what your device does and the outcome you want to claim, that the device is technically capable of doing what it does, and, in a clinical study, that the claimed clinical outcomes are actually achievable.

The regulations are written into EU law and, more recently, UK law and US federal law. So you have to provide a whole pile of technical documentation, sometimes running to thousands of pages, to show how you’ve designed your software around the requirements you’ve decided your device needs, and that you’ve done it in conformance with a logical, scientific, and technical development process.

When it comes to AI, this already long process becomes even more complicated. One particular point of concern is how to monitor AI products once they are on the market.

What kind of framework should be put in place to better embrace machine learning and dynamic learning?

Daniel Torres Gonçalves: Although things are moving, the challenges are great and will take some time to be dealt with. The EU is aware that there’s still a way to go, and some steps have recently been taken in that direction.

The first time that e-health appeared in law was in 2011, in a directive stating that member states have to make health data shareable; however, no significant further steps have been taken since then. With AI, data has to be available and accurate. In Europe, a pool of reliable data is taking shape – the European Health Data Space. The aim is to make the health data of European citizens secure, shareable, and interoperable.

The starting point is to have something to feed the AI systems so that they continuously get better. Another point is controlling the AI after feeding the machine. Last year, the EU published a proposal on artificial intelligence which clearly sets out requirements for improving the acceptability of this technology: AI-based products have to be assessed, human-centred, and accountable. That is the starting point. Beyond that, AI regulation raises other questions, such as liability for the use of AI.

On data security, Nidhi Gani took the floor to share her perspective on cybersecurity.

Nidhi Gani: More than just regulation, AI in medical devices should be about effectiveness and patient safety. Cybersecurity is one of the major safety risks: if an individual’s health data is hacked, a drug dosage could be altered, with severe if not life-threatening outcomes. So when you’re thinking about AI regulation, it’s important to consider cybersecurity, data validation, and the whole data supply chain.

Cybersecurity is less a pre-market issue than a post-market one. What happens when products are out there on the market? Are you monitoring them? Can you detect when there’s a hack? How are you going to respond when there’s a hack? These questions are important and need to be answered, because within 12 seconds of a hack the whole product can be shut down.
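To make these post-market questions concrete, here is a minimal, hypothetical Python sketch of the kind of automated check a monitoring pipeline might run on device telemetry. The device key, the dosing limits, and the function names are all illustrative assumptions, not any real product’s API; the sketch simply flags telemetry whose integrity signature fails to verify or whose reported dose falls outside an approved range.

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned to the device at manufacture.
DEVICE_KEY = b"example-device-key"

# Illustrative approved dosing range (mL/h) for a hypothetical infusion pump.
DOSE_MIN_ML_H, DOSE_MAX_ML_H = 0.5, 10.0


def signature_is_valid(payload: bytes, signature_hex: str) -> bool:
    """Verify that a telemetry payload really came from the device."""
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)


def check_telemetry(payload: bytes, signature_hex: str, dose_ml_h: float) -> list[str]:
    """Return alerts for the post-market monitoring team (empty list = OK)."""
    alerts = []
    if not signature_is_valid(payload, signature_hex):
        alerts.append("TAMPER: signature mismatch - telemetry may be forged")
    if not DOSE_MIN_ML_H <= dose_ml_h <= DOSE_MAX_ML_H:
        alerts.append(f"SAFETY: dose {dose_ml_h} mL/h outside the approved range")
    return alerts


# Example: a forged message carrying an out-of-range dose triggers both alerts.
print(check_telemetry(b'{"dose": 50.0}', "0" * 64, 50.0))
```

A real post-market surveillance system would feed alerts like these into incident-response and regulator-notification processes rather than a simple print-out, but the underlying questions are the ones Gani raises: are you monitoring, can you detect, and how will you respond?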

AI regulation can’t just be a silo. It has to rest on strong foundations: data-validation regulation, cybersecurity regulation, and trust established between the patient and the company.

On the same topic, Vipul Shrivastava built on Nidhi Gani’s points.

He talked about the inherent bias in data and the quality of the data being dealt with. Drawing on his experience in India, he gave us a picture of the current state of things there.

Vipul Shrivastava: At this stage, India is lagging behind in the AI debate and in regulation, but we are watching what’s happening elsewhere; for example, we are closely following the EU. In 2021, the EU came up with proposed AI legislation that introduced a risk-stratification mechanism: products with high risk have to be strictly regulated, while products with low risk face more relaxed regulation. In India, we have come up with technical consultation papers on that, and we are closely following the EU.
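To illustrate the tiered structure Shrivastava refers to, here is a hedged Python sketch of risk stratification. The four tiers mirror the categories commonly described in the EU’s 2021 proposal (unacceptable, high, limited, minimal), but the example uses and the lookup table are hypothetical; in practice, classification follows the legislation’s annexes rather than a dictionary.

```python
from enum import Enum


class RiskTier(Enum):
    # Tiers loosely modelled on the EU AI Act proposal (2021).
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict conformity assessment before market entry"
    LIMITED = "transparency obligations only"
    MINIMAL = "no additional obligations"


# Hypothetical examples for illustration only; a real determination
# follows the Act's annexes, not a simple lookup table.
EXAMPLE_USES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "AI-driven diagnostic medical device": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations_for(intended_use: str) -> str:
    """Map an intended use to its (illustrative) regulatory burden."""
    tier = EXAMPLE_USES.get(intended_use, RiskTier.HIGH)  # default conservatively
    return f"{intended_use}: {tier.name} risk -> {tier.value}"


for use in EXAMPLE_USES:
    print(obligations_for(use))
```

The design point the proposal makes, and that this sketch mimics, is that the regulatory burden scales with the risk the product poses rather than being uniform across all AI systems.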
