eHealthNews.nz: AI & Analytics

My View - AI myths


VIEW - Karen Day, Fellow of HiNZ, senior lecturer health systems, School of Population Health, Auckland University

A recent conversation with digital health experts left me feeling that it’s hard to gauge AI maturity, or even define it, and so my mind turned to AI myths.  

The Cambridge Dictionary defines ‘myth’ as a traditional explanatory story, or a false idea that is commonly believed. Myths abound about health information technology, more so about AI. 

Myth 1 - AI thinks like a human 

There are many types of AI, and all of them have been built to mimic human thought processes. For example, neural networks are modelled on what is known about how the human brain works. AI is given a human-computer interface that is programmed to seem helpful, friendly, polite, and eager to please, and to seek reinforcing feedback from the user. AI appears to think like a human. It makes a close approximation, and like a human it makes mistakes and fabricates answers that have no basis in its data.  

It is tempting to anthropomorphise technology of any kind, especially AI. This is evident in the widely contested Turing Test, which holds that if a human interacting with a computer cannot tell whether its output comes from a human or a machine, artificial intelligence exists in that interaction. 

When an AI persists down a tangent the operator has no interest in, the frustrated human is tempted to reach for the metaphor of a stubborn teenager. When the AI tool presents hallucinations, one is tempted to think of them as guesses from someone eager to please.  

AI mimics human thought by design. Its functional maturity is hard to pin down – AI technologies only recently became commercially available en masse, but the technology itself covers a range of sophistication. Has the machine come of age? Not yet.  

Myth 2 - AI is reliable, so we can use it without gathering scientific evidence 

Marketing and commercialisation at scale have led us to believe that sufficient testing has been done for safe implementation and continued use in healthcare contexts. In contrast, medicines and medical devices cannot be implemented in clinical settings unless research has demonstrated their utility, effectiveness, efficiency, and outcomes for the health of those for whom they are prescribed. Some AI tools have been rigorously tested when designed as a medical intervention, e.g., radiology image analysis has been tested for reliability, validity, specificity and scope.  

Narrow AI enables predictions of specific phenomena, predictive AI uses past data to forecast a described future, and generative AI creates new content. Chatbots use conversational AI, and theory-of-mind AI underpins tools such as mental health companions.  

Each clinical application of AI requires rigorous research about the clinical implications, but also about the effects of the AI on how we work, interact with one another and the technology, and how the tools affect work processes, policies, workload, and the cognitive loads and shifts of its users. 

Marketing and commercialisation at scale fill this evidence gap, and we respond to the need the tools meet (if it works, we will use it). This is a dangerous approach, and it requires the governance that is rapidly developing in this space.  

Myth 3 - AI makes clinical work easier and quicker 

It feels like it at first. The promise of information systems and technologies in health care has always been to automate the drudgery, simplify work, and augment what we are usually able to achieve in a day’s work. Research on digital health technology implementations reveals a mixed picture of mostly failure and some success. The ‘productivity paradox’ has emerged whereby the promise of easier and quicker work has not been kept, and digitally supported work can become more difficult, time consuming, and less productive than analogue work.  

AI offers the promise of expanding, amplifying, extending, and enhancing clinical work. At first use, it seems to keep this promise. The implementation of AI scribes has seen rapid and eager adoption in Aotearoa New Zealand. Their core function is to relieve clinical staff (doctors, nurses, allied healthcare professionals) of note taking during consultations with patients – they do this well. But AI does not reason unless specific forms of reasoning and critical thinking are part of its programming. Initial signs of its utility in this scenario are promising, but cracks are emerging: some report that critical thinking is deferred until the next appointment, where the clinician reviews and analyses the AI-recorded summary before starting the consultation.  

An uneasy relationship

These three myths shape our uneasy relationship with AI. AI mimics human thinking, but we are still learning how to introduce critical thinking into its programming. For now, we should resist anthropomorphising the technology and sharpen our own critical thinking skills.  

We do not know how reliable it is, especially when it’s not explainable or where the black box of its reasoning hides what it is doing – research is needed to establish reliability and explainability. Until we have enough evidence of the right kinds, our relationship with AI remains awkward – how do you trust a technology?  

AI is still a ‘new kid on the block’ and we don’t know how well it makes clinical work easier or quicker; implementation science research should document this and help us avoid the productivity paradox. In the end, the humans should always be in charge. We bring humanity to clinical work.  

 
If you want to contact eHealthNews.nz regarding this View, please email the editor Rebecca McBeth.
