
24th May 2021

Dear Google: sensationalism has no role in health tech innovation

Google’s new “dermatology assist tool” exemplifies the sensationalism that pervades health tech innovation as tech giants race to impress

Words by Dr. Ryan Grech and Dr. Dylan Attard, Clinical & Health Tech Advisors for MedTech World and two of the co-founders of Digital Health Malta. 
We can no longer deny that Artificial Intelligence (AI) is starting to change the way medicine is practised. Unfortunately, sensationalism, coupled with some companies’ over-eagerness to “disrupt” and “be first”, may do more harm than good. Google unveiled its “dermatology assist tool” at the tech giant’s annual developer conference, Google I/O. Whilst it is always great to see big companies that wield influence make an impression in the digital health sphere, the way it is being done concerns us. Taking this latest revelation as an example, we have dissected Google’s announcement to show what we find unacceptable.

1. The need for prospective clinical trials and real-world testing to back claims

Prospective clinical trials and real-world testing are needed to back claims such as “an AI system can achieve accuracy that is on par with U.S. board-certified dermatologists”. Unfortunately, Google has already launched its product to dramatic headlines without running any prospective trials. In the latest published study referenced in its press release, Google evaluated the system only on its own curated test-set images, so it is hardly surprising that the AI performs well. Imagine being given the same exam over and over again; you are bound to score 100% eventually. Too many medical AI algorithms make headlines while still sitting in a silo, untested on real-world data and clinical workflows, and they rarely come to fruition. Until data from a prospective clinical trial are available, we simply do not know whether Google’s dermatology aid can have a real-world impact.
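To make the “same exam” point concrete, here is a minimal, purely illustrative sketch (synthetic data and a toy classifier; none of the numbers, models or datasets are Google’s) of how accuracy measured on a curated, in-distribution test set can overstate performance once acquisition conditions shift in the real world:

```python
# Hypothetical illustration only -- synthetic data, not Google's model or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Curated "study" data: labels depend on two simple features.
X = rng.normal(0.0, 1.0, size=(4000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Accuracy on the curated test set:", round(model.score(X_test, y_test), 3))

# "Real-world" data: the underlying disease is the same, but acquisition shifts
# every measured feature (different cameras, lighting, patient mix, skin tones).
Z = rng.normal(0.0, 1.0, size=(4000, 5))      # underlying biology
y_real = (Z[:, 0] + Z[:, 1] > 0).astype(int)  # biopsy-style ground truth
X_real = Z + 1.0                              # what the deployed model actually sees
print("Accuracy under distribution shift:", round(model.score(X_real, y_real), 3))
```

In this toy setup the model looks near-perfect on its own test split but loses a large chunk of accuracy the moment the input distribution moves, which is exactly the question only a prospective, real-world trial can answer.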

2. Lack of inclusion of all skin types introduces AI bias

In the same published study, Google acknowledges that, of the six Fitzpatrick skin types (which categorize skin tone and propensity to tan), types I and V are underrepresented, and type VI is absent from its data set. This is a huge limitation for an algorithm that is meant “to help find answers to common skin conditions”, because the validity of an answer depends heavily on whether one’s skin tone was part of the original training set. Such bias is a very dangerous thing for an algorithm to learn, particularly in healthcare. Frankly, whilst we are sure the engineers are working tirelessly to reduce this bias, we would have expected Google to minimise it as much as possible before publishing and announcing the tool to the world. We find it appalling, for example, that type VI skin is not represented at all.
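A headline accuracy figure can completely hide this problem. The sketch below is a hypothetical, simulated example (the skin-type mix and per-type accuracies are assumptions, not Google’s published results) showing why evaluation should always be stratified by skin type:

```python
# Hypothetical illustration only: a single headline accuracy figure can hide how a
# model behaves on skin types that were rare or absent in its training data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5000

# Simulated evaluation set with a Fitzpatrick-like label per case (assumed mix).
skin_type = rng.choice(["I", "II", "III", "IV", "V", "VI"], size=n,
                       p=[0.05, 0.30, 0.30, 0.25, 0.08, 0.02])

# Assumed per-type performance: strong on well-represented types, weak on rare ones.
assumed_accuracy = {"I": 0.80, "II": 0.92, "III": 0.93, "IV": 0.90, "V": 0.70, "VI": 0.55}
correct = np.array([rng.random() < assumed_accuracy[t] for t in skin_type])

df = pd.DataFrame({"skin_type": skin_type, "correct": correct})
print("Headline accuracy:", round(df["correct"].mean(), 3))
print(df.groupby("skin_type")["correct"].mean().round(3))  # the stratified view
```

In this simulation the overall figure sits close to 90% while type VI cases fare little better than a coin flip, which is precisely the kind of disparity an unbalanced training set invites.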

3. In the dataset used, not all labels/diagnoses were biopsy-confirmed

Dermatologists typically confirm their suspicion about a skin lesion, say melanoma, by taking a biopsy, after which pathology confirms whether the lesion is malignant. The initial research data on which Google’s latest Nature publication and press release are based show that, in the dataset used, not all labels/diagnoses were biopsy-confirmed. Some were diagnosed based on the “collective intelligence” of a group of board-certified dermatologists. So how can we teach an algorithm what melanoma looks like when we are not 100% sure that the image actually shows melanoma?
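The consequence is a ceiling on what any reported accuracy can mean. Here is a hypothetical back-of-the-envelope sketch (the 10% prevalence and 90% consensus agreement are assumed figures for illustration only) of how label error propagates into an algorithm that learns those labels perfectly:

```python
# Hypothetical back-of-the-envelope: if the reference labels come from dermatologist
# consensus rather than biopsy, the labels themselves carry error, and a model that
# matches them perfectly still inherits that error against biopsy ground truth.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

biopsy_truth = rng.random(n) < 0.10    # assumed: 10% of lesions are truly melanoma
label_is_right = rng.random(n) < 0.90  # assumed: consensus agrees with biopsy 90% of the time
consensus_label = np.where(label_is_right, biopsy_truth, ~biopsy_truth)

model_prediction = consensus_label     # a model that reproduces its labels perfectly

print("Accuracy vs consensus labels:", (model_prediction == consensus_label).mean())  # 1.0
print("Accuracy vs biopsy ground truth:", (model_prediction == biopsy_truth).mean())  # ~0.90
```

Under these assumptions, a model that agrees with its reference labels 100% of the time is still wrong about one lesion in ten when judged against biopsy, and the published accuracy cannot tell the two apart.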

4. Google positions this AI as better than searching for the information yourself, not as a substitute for medical advice

As usual, doctors are left to deal with the fallout from these innovations, which create extra anxiety and drive needless doctor visits, because Google is not accepting any responsibility for what its algorithm comes up with. Whilst this issue is not isolated to Google, it is something regulatory bodies should take a serious look at, especially for a tool that has not been refined to a medical-grade standard yet will operate in much the same way under a different pretence. And whilst the tool carries a Class I CE mark, do not let that fool you: Class I is a self-certification and, as per EU definitions*, the product cannot be used as a diagnostic tool. So if one ends up using the product (at least in its current state), what the algorithm says is not melanoma may well still be.
Whilst we applaud Google for the innovation and acknowledge the efforts it has made in the realm of AI, we must never forget that medicine is not like other industries. We cannot understand why, time and time again, companies want to show and tell before there is concrete evidence that what they are claiming actually works in the real world. Google appears to have a clinical trial lined up, so why not wait until it is done before parading a solution?
Furthermore, we believe that, just as a toy company that sells defective toys should be held liable for any harm it causes, so too should the creators of medical diagnostic AI algorithms be held accountable. After all, if a doctor causes unnecessary anxiety or misdiagnoses a patient, the doctor gets sued. The medical industry needs AI as part of the healthcare system, but such software should be introduced prudently and backed by scientific evidence – sensationalism has no place.
Please note that we do understand Google is only claiming that its product is better than self-searching; however, most of what we have said translates easily to other scenarios, and we expect the scope of the programme to be expanded in future.
*If your product is Class I, and it is not a sterile or measuring device, then all you need to do is self-certify it and formally declare its compliance with the applicable requirements of the MDD via a written statement.

Med-Tech World: 18th-19th November 2021

The Med-Tech World conference, which follows a successful digital event in 2020, will run from 18th-19th November 2021 and will highlight innovations and developments in digital health across the globe. With so many countries realising the potential for exponential growth, Med-Tech World will address the opportunities and challenges driving this multi-million industry – embracing the potential for technological innovation to change the face of medicine in this global sector. Register your interest here!