Words by Dr. Ryan Grech and Dr. Dylan Attard, Clinical & Health Tech Advisors for MedTech World and two of the co-founders of Digital Health Malta.
In the previous article, Dr. Ryan Grech and Dr. Dylan Attard discussed the importance of supporting the workforce for digital transformation to improve the future of healthcare.
In the February edition of The Batch, Andrew Ng, one of the foremost minds in Artificial Intelligence (AI) today, pointed out five levels of automation in medicine, with Level 1 being no automation and Level 5 being full automation, independent of any human input. Ng makes an important point: AI and automation are not binary. There is no on/off switch. It is a spectrum, represented in its most basic form by these five levels, and different industry players need to choose where their product fits on it.
Whilst today we certainly have no Level 5 systems, Ng's framework offers a concise and accessible way of understanding how AI's involvement in medicine will unfold in the years and decades to come. As he puts it: "Today's algorithms are good enough only for certain points on the spectrum in a given application. As an AI team gains experience and collects data, it might gradually move to higher levels of automation within ethical and legal boundaries."
Let us delve a bit more into these five levels.
At Level 1, as the name suggests, no automation is involved. Everything is done by humans, although this does not preclude the use of simple algorithms*. Most medical procedures today fall under this category.
At Level 2, the AI acts as a medical student. It takes notes, trains and learns. It shadows the healthcare professional but does not interfere with the real-world decision-making process. We have moved from the notion of a simple algorithm to AI proper, as the system now uses data to improve. In his medical imaging example, Ng portrays a situation where a radiologist reads an X-ray and decides on the diagnosis. The AI system attempts the diagnosis as well, compares notes, analyses what it got wrong and improves.
At Level 3, the algorithm supports the doctor. Again using Ng's example, whilst a human doctor is still responsible for the diagnosis, the AI system may supply suggestions: it may indicate areas to focus on, suggest a differential for the appearances, or highlight a region of subtle abnormality that might go unnoticed by the human eye. It is still, however, up to the doctor to make the final call.
At Level 4, we have moved to a more independent system. The AI can come up with its own diagnosis; in instances where its level of confidence is not within acceptable parameters, it turns to a human for help. So here an AI system looks at an X-ray image and can do one of two things: if it has high confidence, it outputs the diagnosis; if not, it asks the human to make the decision. As you may imagine, this is where things start to get muddy in ethical and legal terms. If the AI makes a mistake, who bears the burden? Is it the hospital? Is it the company? Or is it the software engineer? This is a minefield with no answer yet. We think it is imperative that further research is carried out to establish this before any of these products enter the mainstream.
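The confidence-gated hand-off that defines this level can be sketched in a few lines of code. The threshold value, diagnosis label and message strings below are illustrative assumptions, not part of any real clinical system:

```python
# Minimal sketch of a Level 4 (partial automation) triage step.
# The 0.9 threshold and the diagnosis label are hypothetical examples.

CONFIDENCE_THRESHOLD = 0.9  # assumed minimum confidence for autonomous output

def triage(prediction: str, confidence: float) -> str:
    """Release the AI's diagnosis when confident, else defer to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"AI diagnosis: {prediction}"
    return "Low confidence: referred to human reader"

# A confident call is released; an uncertain one is escalated.
print(triage("pneumothorax", 0.97))
print(triage("pneumothorax", 0.55))
```

Where to set that threshold is itself a clinical and regulatory decision, which is exactly where the ethical and legal questions above come in.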
Level 5 is full automation: processes are performed by an AI alone, without any human input. The AI makes the diagnosis by itself, every time. Again, the ethical and legal minefield remains.
Whilst we stayed true to Andrew Ng's examples, this framework can be extrapolated beyond medical imaging to diagnosis in pathology, diagnosis based on clinical history and biochemical markers, or even automated surgery.
We think we are far from Level 5 automation, but it is certainly something we believe will be achieved; the question is whether it will happen in our lifetime. Whilst our thoughts lie with the beneficial-AI movement, and we believe that in the (perhaps distant) future AI will replace much of what we do as doctors because it will excel at certain tasks, in the short to medium term this will probably not happen. And it may not necessarily be because the technology won't allow it, but because progress will rest on acceptance by the general public and on doctors loosening their grip.
We believe that AI systems in the coming decades will be extremely beneficial to doctors and will help us care for our patients, enabling us to take healthcare to the next level. It therefore seems more likely that doctors who use and understand AI will replace those who do not.
*An algorithm defines a process through which a decision is made by whoever writes said algorithm; AI instead derives such decisions from training data.
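The distinction in the footnote can be made concrete with a toy sketch. Everything here is hypothetical: the feature (lesion size in millimetres), the cut-offs and the labelled data are invented for illustration only:

```python
# Toy contrast between a hand-written algorithm and a learned rule.
# The feature, cut-off and data below are hypothetical.

# Hand-written algorithm: the author chooses the decision rule directly.
def flag_lesion_manual(size_mm: float) -> bool:
    return size_mm > 10.0  # cut-off picked by whoever wrote the algorithm

# Learning in the simplest sense: derive the cut-off from labelled data
# as the midpoint between the largest benign and smallest flagged size.
def learn_cutoff(sizes, labels):
    benign = max(s for s, flagged in zip(sizes, labels) if not flagged)
    flagged = min(s for s, flagged in zip(sizes, labels) if flagged)
    return (benign + flagged) / 2

sizes = [4.0, 6.0, 12.0, 15.0]   # invented training examples
labels = [False, False, True, True]
learned_cutoff = learn_cutoff(sizes, labels)  # 9.0 for this toy data

def flag_lesion_learned(size_mm: float) -> bool:
    return size_mm > learned_cutoff
```

In the first function, the decision boundary is written by a person; in the second, it comes from the data, which is the essential difference between a fixed algorithm and a (very simple) learned model.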
The Med-Tech World conference, which follows a successful digital event in 2020, will run from 18th-19th November 2021 and will highlight innovations and developments in digital health across the globe. With so many countries realising the potential for exponential growth, Med-Tech World will address the opportunities and challenges driving this multi-million forum – embracing the potential for technological innovation to change the face of medicine in this global sector. Register your interest here!