Policies regulating use of AI in health needed – new report
Tuesday, 30 April 2024
NEWS - eHealthNews.nz editor Rebecca McBeth
Health delivery entities must have policies regulating the use of Artificial Intelligence (AI), and health practitioners supervising AI are liable for decisions made using AI-generated advice, says a new report from the Office of the Prime Minister’s Chief Science Advisor.
‘Capturing the benefits of AI in healthcare for Aotearoa New Zealand’ says employing AI technologies in healthcare has far-reaching impacts, and robust discussion is needed on the best path forward.
The report has 17 principles and principle 5 says health delivery entities must have policies regulating the use of AI.
“Such policies should specify an assessment process for AI tools to go through before use and an ongoing evaluation process for accuracy, efficacy and safety, addressing issues such as ease of use, bias, security, and data sovereignty,” it says.
Principle 16 says “health organisations are responsible for decision-making about the purchase, provisioning, audit, evaluation, and authorisation of AI systems”, and principle 17 says “practitioners supervising AI are responsible for its operation and they remain liable for decisions made using AI generated advice”.
The New Zealand health sector is taking a cautious approach to adopting Generative AI (GenAI) or Large Language Models (LLMs).
Health NZ Te Whatu Ora is exploring the use of AI in clinical coding, but has issued a national directive that staff must not use Generative AI tools in relation to patient care, saying they have not been validated as safe and effective.
Independent health professionals such as GPs are not employed by Health NZ, and around 400 are already testing the use of GenAI tools to assist in creating patient consultation notes.
The report says, “the use of AI as a ‘practitioner co-pilot’ can be mandated in domains in which its performance is subject to ongoing audit and evaluation showing that it is more accurate and no more biased than human decision-makers”.
Karl Cole, a GP at Papatoetoe Family Doctors, is using Nabla Copilot in his clinics. The tool summarises recordings of appointments with patients, but does not offer advice or make decisions.
He says tools such as Copilot are being used safely globally, but GPs are concerned about their ability to independently assess new AI solutions and their liability if something goes wrong.
“There are always unintended consequences when you introduce new tools into a complex system like health. It has so many moving parts and the liability is on you,” he says.
Medicine is a highly regulated market in terms of drugs and devices, but AI seems to be in a “regulatory black hole” where users may presume it has been tested to a high level, when it has not, he tells eHealthNews.
Cole sits on the board of ProCare and the Royal NZ College of GPs and is helping to develop advice “for how we could have a primary care or community wide approach to AI, as it is very difficult for independent health professionals to each do their own assessment,” he says.
“The Medical Protection Society has a very good process for deciding if AI is right for you and ProCare is releasing guidance this week,” he says.
The Therapeutic Products Act would have regulated some AI tools classified as Software as a Medical Device (SaMD). However, the new Government plans to repeal the Act and is working on a replacement.
Cushla Smyth, chief executive of MTANZ, says medicines will definitely be covered under the new Act and medical devices are likely to be, but she did not know whether SaMD would also be covered.
In the United States, the adoption of AI tools at Kaiser Permanente, a healthcare provider serving 12.5 million people, has sparked protests from nurses who argue that the technology is untested and unregulated and could make patient care worse.
College of Nurses Aotearoa spokesperson Carey Campbell says nurses must be part of discussions about developing and implementing Generative AI tools and LLMs from the very beginning, in order to avoid similar pushback occurring locally.
“Nurses are where the rubber hits the road in healthcare and the best advocates for patients are the people right beside them, and in many cases that is nurses,” she says.
Campbell also wants to ensure nurses are at the table when decisions are made regarding policies, governance and use of AI in healthcare in Aotearoa New Zealand.
“My message for nurses in regards to digital health and AI is to get involved,” she says.
“It is not that AI is going to take jobs, but nurses or health professionals who use AI appropriately may take the jobs of those who ignore it.”
‘Capturing the benefits of AI in healthcare for Aotearoa New Zealand’ is a rapid report produced with Ian Town, chief science advisor at Manatū Hauora Ministry of Health. Read his My View in eHealthNews.
Listen to eHealthTalk podcast episode 43 with Karl Cole and Richard Medlicott on using a Generative AI tool in their practice.