eHealthNews.nz: Clinical Software

Regulatory framework for AI in medicine an “urgent public health priority”

Tuesday, 19 March 2019  

eHealthNews.nz editor Rebecca McBeth

An effective regulatory framework for the use of artificial intelligence in medicine must be considered as an urgent public health priority, the Royal Australian and New Zealand College of Radiologists says.

The college has developed a draft report, Ethical principles for AI in medicine, which outlines the most appropriate use of AI and machine learning, including how both can help drive better patient care.

RANZCR president Lance Lawler says, “New technologies such as AI are having a huge impact on healthcare.

“How radiology adapts to AI will have flow-on effects for patients and other healthcare professionals, which is why it was important for RANZCR to develop these principles.”

Created by the college’s AI working group, the paper says existing medical ethical frameworks are insufficient for the emerging use of ML and AI in medicine.

Lawler tells eHealthNews.nz that “we believe that the development of an effective regulatory framework for intelligent medical software must be considered as an urgent public health priority”.

The agreed principles will complement existing medical ethical frameworks and provide doctors and healthcare organisations with guidelines regarding the research and deployment of ML systems and AI tools in medicine.

The paper provides eight additional principles covering safety, avoidance of bias, transparency and explainability, privacy and protection of data, decision making on diagnosis and treatment, the liability for decisions made, application of human values, and governance of ML and AI.

“The first and foremost consideration in the development, deployment or utilisation of ML systems or AI tools ought to be patient safety and quality of care, with the evidence base to support this,” the paper says.

It explains that because ML and AI tools are limited by their algorithmic design and the data they have access to, they are prone to bias, and particular care must be taken when applying a tool trained on a general population to indigenous or minority groups.

To minimise this bias, the same standard of evidence used for other clinical interventions must be applied when regulating ML systems and AI tools, the paper says.

“Radiologists and radiation oncologists (who will be at the forefront of applying this technology) have a duty of care not only to understand these risks, but to provide competent expert advice to the rest of the teams involved with the eventual roll-out of these tools,” says Lawler.

The draft report is out for consultation, and feedback should be submitted by 26 April 2019.
