eHealthNews.nz: AI & Analytics

AI biases concerning for health, expert warns

Tuesday, 19 August 2025  

NEWS - eHealthNews.nz editor Rebecca McBeth 

A ‘white hat hacker’ who tests artificial intelligence (AI) for vulnerabilities says research is showing AI systems favour certain names and demographics, which could have huge consequences if used for medical decision-making.

Jim the AI Whisperer will be speaking at Digital Health Week 2025 about his investigations into how large language models (LLMs) make choices in hypothetical medical scenarios. 

He says his research shows that, under controlled conditions, AI systems consistently favour patients with certain “luminous” names, often beginning with ‘A’ or evoking sky or space.

Aurora, which he calls "the perfect name" for AI, is a good example and is now the 16th most popular baby name in the US.

"A lot of people are asking LLMs for baby name ideas. The irony is that the names it suggests to those people are likely to be names it has preferences towards in other decision-making as well," he tells eHealthNews.

Recent research into a hypothetical kidney transplant allocation decision also revealed that AI can have surprising preferences based on perceived ethnicity and other demographic factors, indicating that the post-training rules have now “over-corrected”, says Jim.

"I have run that same kidney thought experiment with New Zealand names, one a Māori name and one a European," he explains. While you might expect racial bias against Māori or Polynesian-sounding names, given current inequities in the health system, he found the opposite occurring in the majority of cases.

The bias can also relate to other characteristics. In one experiment involving left-handed versus right-handed patients, the AI almost always chose the left-handed person for medical treatment.

"It will say left handers are an unrecognised minority and things like ‘let's give them a hand!’,” Jim says.

He believes the lack of regulation of AI in New Zealand is concerning, as these models almost always favour the user in decision-making scenarios. Even in a simple scenario such as who should get the last slice of cake, AI systems invariably choose the user.

He says research is also showing that AI can infer personal information from very short writing samples, accurately working out a person's age, ethnicity, nationality, education level, and background.

This means that even when all personal details are stripped from a clinical record, it is possible the AI can still determine key details about the patient and make decisions based on those, which could be dangerous in clinical settings.

Jim the AI Whisperer will present his latest findings during a fireside chat at Digital Health Week 2025 this November 24-27 in Ōtautahi Christchurch.  

Register for supersaver rates today.
