Opening the panel, moderator Michael Tendo acknowledges that recent breakthroughs in AI computational power have led to significant advancements in hardware and software. Questions remain, though – chiefly, whether throwing AI into the mix will bring more good into the world. Can we apply it safely to clinical practice? Turning to Ilya Radlgruber, CEO and AI strategist at Pantaflow digital business, he asks whether we are ready yet in terms of security. Can we trust the systems to protect our data – especially sensitive data like that of the healthcare industry?
Radlgruber believes the lack of trust is a topic which has to be tackled. “Stating how well AI works is simply not enough. Saying that you have 99% accuracy won’t be enough. It won’t be enough because the patient, the healthcare insurance company, and the hospital that is paying want to understand what the benefit to the patient is.
“On the other side of the fence, the doctors want to understand the solution before they fully trust it. I have experienced myself that if trust is not there, the solution is not adopted. We see it in the healthcare space everywhere. Technically speaking, we’ve come a long way in healthcare.”
He goes on to illustrate just how far we’ve come by highlighting early breast cancer diagnosis technology from 2016, where AI merely equalled the bottom 10% of all radiologists. By comparison, a 2021 study in Nature showed that AI now surpasses human ability by 10%.
One caveat remains, however. Both Radlgruber’s experience and recent research show that earning trust requires showing empathy. Yet unlike a human doctor, an AI can display empathy and emotion and still fail to receive that trust. This is, he believes, a real obstacle to the implementation of AI in healthcare.
With that said, Tendo turns to Jane Thomason, Chair of the Board at Kasei Holdings: “Humans have a weird psychological connection with AI and they see it as ‘creepy’. There’s a problem of trust and feeling comfortable with robots and chatbots. So how do we deal with this?
“We all know that there are risks with the data sets that the AI is being trained on – are there biases in the data set? Sometimes they’re gender biases, sometimes racial biases, and sometimes the data sets come from a skewed proportion of the population. So I’d want to know that the data set that it was being trained on was actually representative of the population, therefore, representative of me. The second thing that I’d want to know is what’s happening to my data.
“So who’s using it, and for what? Where’s it being stored? How’s it being shared? Do I know that once I’ve given it to the doctor he’s not going to on-sell it to someone else? So I think it’s trust in the whole system. I don’t think it’s just trust in AI. The other thing is GDPR. What if I want my data not used in this particular AI? The patient needs to be convinced. They need to trust in this system. Trust that they’re going to get a decent deal, that they’re not going to be denied surgery, that their data is going to be kept carefully and securely. So I think those are the things that consumers are going to be worried about in terms of AI in healthcare,” opines Jane.
The other question that remains, says Michael Tendo, is who will be held responsible. When mistakes are made, who is to blame? With AI, the potential for harm is multiplied by orders of magnitude. Drug discovery is a positive use of AI – we can very quickly identify drugs for new strains of disease. But what if someone uses this capability maliciously?
Gil Solomon, Founding Partner at Gil Solomon & Co chimes in here. “Privacy issues are a broad topic, and AI, we have to remember, is still just software. It’s software that is subject to vulnerabilities like any other software, and it can leak data. So this is one thing we need to discuss and trust the system to take care of. One way to take care of that is insurance.
“Another thing we need to think about is that once an AI is being used clinically, it has already been through clinical trials. We have medical institutions that have approved it through ERBs – ethical review boards. It went through clinical trials and was tested on actual patients in order to reach the clinical phase. So in this situation, if there’s an issue, it’s either something that comes from reliability problems, which are mostly discovered in later phases, or, prior to that, an issue that comes from malpractice – which is covered by the liability insurance that all hospitals and clinics carry.”
Jumping into the conversation, Jane counters that organisations have traditionally left key ethical decisions about building AI algorithms to the CTO or CIO.
“My view is that boards and CEOs need to take that responsibility themselves. They need to be sure that ethical principles are applied at every stage of everything that the organisation is doing. The more distant you are from the build, the less attention you’re going to pay.”
“I’d say the introduction of AI into clinical practice has actually made ethical medicine possible, perhaps for the first time ever. For the first time we can move away from a provider-led, provider-centric model of care to something that is decentralised. To something that incorporates an augmented intelligence in a way that ultimately benefits the person, or the patient.
“At NAFTA we’ve developed a hybrid healthcare model. It combines digital and traditional healthcare, including AI, along clinical pathways to improve health outcomes – specifically for women, although the principles of hybrid health care are universal.
“I think there’s a bigger underlying question here about ethical clinical practice, which is who does health data belong to? It’s less about how we integrate AI. How do we ensure that with the application of AI and the creation of this augmented intelligence we give health data back to the individual? We allow them to own it, to hold it and also to reap the financial benefit of it. Which is something that is not done today in any field or by anyone.”
“With clinical trials, the big issue we have at the moment is the representation of populations within the data. Today only 19% of clinical trial participants are women. 92% of clinical trials happen in the US and Europe, with the remaining 8% mostly occurring in the Far East. So the Middle Eastern, African and South Asian populations are almost entirely unrepresented in clinical trials. In fact, 23andMe recently found that it was missing genetic data for 430 million Arabs.
“I would say that if we’re starting to introduce AI into clinical practice more, what are the data sets that are powering it? What was the representation of people in the clinical trials? How are we training the AI? Are we training it using only data from white Caucasian men? That will never build something that is representative and capable of materially improving health outcomes across populations.
“How do we ensure that all of the systemic racial and gender biases that exist in healthcare today are not perpetuated?”
“We are trying to build a precision health product,” says Tendo. “Right now we can discover drugs much faster, and we can actually design a custom drug based on a specific mutation of a specific virus. But if we don’t have people’s data, we can’t use this technology. So we decided to build a product for medical records – we launched it just yesterday and it was well received. Part of the feedback was: are you going to make this decentralised? I’m curious to find out whether this is the solution to security. Is this the only way to make the data secure? Should this data be decentralised?”
Moderator Michael Tendo turns to Ilya Radlgruber:
“I think it’s a really tough question, because everything you make and develop in the process of AI development is interdependent with other processes. So making the data transparent is very, very relevant. What my colleagues said about selection bias – it’s really tough. I think it was a Johns Hopkins study that showed that women and black people die more often in intensive care units. So there’s definitely a selection bias.”
Sophie Smith believes that the only way we’ll ever arrive at a truly ethical model of care, where the objective is to improve and meet the needs of the person, is to have a decentralised system.
“At the moment, providers – where there’s some kind of insurance coverage, or where they’re commercially oriented – will work to attract the footfall of a person, because they understand that when a person walks through those doors, that’s when they get paid. People need to feel the same way about data.
“They need to understand that data operates a little bit like footfall, and if they don’t keep the person then they don’t get to keep the data either.
“So as an individual, if I don’t like what you’ve done, I can remove your right to access my data. I think that when data is the currency of the future, when people feel that in order to maintain access to data they have to treat the whole person with care – then we will discover a really interesting model for clinical practice.”