eHealthNews.nz: AI & Analytics

Te Whatu Ora advises against using Generative AI in health

Thursday, 7 September 2023  

NEWS - eHealthNews.nz editor Rebecca McBeth

Te Whatu Ora staff must not use AI tools such as ChatGPT to help make clinical decisions, or provide patient care advice or documentation, new guidance says.

Advice issued on the website says, “the available large language models (LLMs) and Generative AI tools have not been validated as safe and effective for use in healthcare; nor have the risks and benefits been adequately evaluated in the Aotearoa New Zealand health context”.

“Te Whatu Ora does not endorse the use of LLMs or generative AI tools where non-public information is used to train the model or used within the context of the model,” it says.

Employees and contractors must not enter any personal, confidential or sensitive patient or organisational data into LLMs or Generative AI tools, or use them for any clinical decision, personalised patient-related documentation or personalised advice to patients.




The guidance comes with posters and a screensaver saying “our data is taonga - treasure it, protect it, don’t feed it to AI”.

Picture: AI guidance tile from Te Whatu Ora

The National Artificial Intelligence and Algorithm Expert Advisory Group (NAIAEAG) advice says AI technology is rapidly advancing and changing, and “safe and appropriate uses that help our staff and the population of Aotearoa NZ may well be developed in the near future.”

But the group currently advises a precautionary approach due to risks around breach of privacy, inaccuracy of output, bias, lack of transparency and data sovereignty.

“We will continue to investigate these risks and the use of these tools,” it says.

Chris Paton, clinical senior lecturer at Otago University, says the group is right to advise a precautionary approach to the use of LLMs in healthcare.

“Although the chatbot user interface of LLMs makes them very appealing and accessible, their over-confidence and willingness to answer questions when they do not know the answer can lull users into a false sense of security that could be harmful if relied on for making a healthcare decision,” he says.

“LLMs take a new approach to machine learning aimed at creating a general-purpose AI with similar capabilities to humans on a wide range of tasks. It may be some time before we can rely on them for routine use for healthcare decisions despite their usefulness in less safety-critical areas.”

However, Paton says “narrow AI” models that have been trained on datasets that can be verified and that are limited to particular areas, such as recognising patterns in radiology images, are already approved by regulators and are safely being used in hospitals around the world.

Staff members with ideas or plans for potential use cases should register them with NAIAEAG for advice on the appropriate process.

Watch the HiNZ webinar ‘AI in health – what does ChatGPT mean for me?’ on demand here.

