My View: Algorithms: Helping imperfect people to make better imperfect decisions
Sunday, 11 October 2020
VIEW – Kevin Ross, chief executive, Precision Driven Health

Algorithms are used by public agencies and services to help interpret data, which can both speed up and improve decision-making.
When your GP decides how much paracetamol to prescribe for your child, they use a simple algorithm – a set of rules or calculations that, when followed, produce a recommended dose. When you are advised that there is some probability that your unborn child will have a particular medical condition, another set of calculations – a different algorithm – is at work.
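A dosing rule like the one a GP follows can be sketched in a few lines of code. The figures below are placeholders chosen for illustration, not clinical guidance:

```python
# Hypothetical weight-based dosing rule. The mg/kg rate and the cap
# are illustrative assumptions, not clinical guidance.
def paracetamol_dose_mg(weight_kg, mg_per_kg=15, max_dose_mg=1000):
    """Return a single dose: weight times rate, capped at a maximum."""
    return min(weight_kg * mg_per_kg, max_dose_mg)

print(paracetamol_dose_mg(20))  # 300
print(paracetamol_dose_mg(80))  # capped at 1000
```

The point is not the numbers but the shape: inputs go in, fixed rules apply, a decision comes out – and that shape is what makes an algorithm auditable.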
There is a growing discussion around how we manage and process data in the decisions we make, leading to major legislative changes such as the General Data Protection Regulation (GDPR) in Europe. This includes the use of algorithms.
In practice, the term 'algorithm' is a little too broad to capture these concerns. Algorithms are just rules that transform inputs into outputs or decisions, and they existed long before computers. They could be as seemingly simple as “Confirm you are eligible (the right age and healthy) for this immunisation” or as complex as “Offer this screening test to the 10% highest-risk individuals”. Using algorithms
to analyse data and inform decisions comes with risks, because examples like these have major consequences for their subjects. We all have a notion of fairness that makes us want to at least understand how decisions are made about us and it’s important
that we are confident that algorithms are used in an ethical and transparent way.
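Even the "top 10% highest risk" rule mentioned above, simple to state, embeds choices worth scrutinising: how risk is scored, and where the cutoff falls. A minimal sketch (patient names and scores are invented):

```python
# Sketch of "offer this screening test to the 10% highest-risk
# individuals". Risk scores here are made up for illustration; in
# practice they come from a fitted, validated model.
def top_risk_decile(risk_scores):
    """Return the IDs whose risk score falls in the top 10%."""
    ranked = sorted(risk_scores, key=risk_scores.get, reverse=True)
    cutoff = max(1, len(ranked) // 10)  # at least one person selected
    return ranked[:cutoff]

scores = {f"patient_{i}": i / 100 for i in range(100)}
print(top_risk_decile(scores))  # the 10 highest-scoring patients
```

Everything that matters ethically – who is scored highly, who just misses the cutoff – sits inside `risk_scores`, which is why transparency about how scores are produced matters as much as the selection rule itself.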
Statistics New Zealand has partnered with other public agencies to publish a new Algorithm Charter for Aotearoa New Zealand,
believed to be a world first. The Charter notes that “Government agencies use data to help inform, improve and deliver the services provided to people in New Zealand every day. Simple algorithms can be used to standardise business processes to ensure
scarce resources are distributed equitably. More complex algorithms can be used to distil information from large or complex data sets”.
In the Charter, the founding 25 signatories, including the Ministry of Health, the Ministry of Social Development
and the Social Wellbeing Agency, commit to a range of measures, including:
- Being publicly transparent about how decisions are informed by algorithms
- Providing ‘plain English’ explanations of how algorithms work
- Making available information about the processes used and how data is stored
- Identifying and managing the biases informing algorithms
- Embedding a Te Ao Māori perspective in the development and use of algorithms.
However, governance around fair, ethical and transparent algorithm use is not solely a government challenge. The private sector provides a lot of the tools used for public and consumer analysis and needs to be mindful of the same principles. The AI
Forum of New Zealand has developed a framework for Trustworthy AI in Aotearoa: AI Principles, which addresses several of the same challenges but in the context of the wider use of artificial intelligence (AI) technologies in New Zealand. AI introduces
the potential for algorithms themselves to be designed by other algorithms, challenging our framework for oversight. The same citizens are subject to the models deployed by both government and private entities, and it is pleasing to see that all
parties involved recognise the need for an appropriate ethical and legal framework to be applied.
Both the Charter and AI Principles outline a commitment to transparency, partnership, data, people, privacy, ethics, human rights, and human
oversight. These are lofty terms and goals, but what does this mean in practice? How do public agencies and private organisations demonstrate their commitment? Here is an example of how we have applied these to a critical project in New Zealand
health, the New Zealand Algorithm Hub.
Orion Health is developing a platform for health-related algorithms to be shared across New Zealand. This started as a COVID-19 challenge, to give access to common models for the outbreak – so that
we’re planning for the same scenarios. But the opportunity is much wider: if someone contracts COVID-19, how likely are they to need hospitalisation? Or ICU? Or to die? This information can be invaluable for individuals, whānau, clinicians and
planners as we seek the best possible path for treatment and management. These algorithms make a difference, so which algorithms are appropriate to use?
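Models like the COVID-19 risk predictors described above typically map a handful of patient features to a probability. A toy version of that shape, with invented coefficients purely for illustration:

```python
import math

# Toy logistic model of hospitalisation risk given age and a
# comorbidity flag. The coefficients are invented for illustration;
# real models are fitted to population data and validated before
# deployment.
def hospitalisation_risk(age, has_comorbidity,
                         intercept=-6.0, b_age=0.06, b_comorbidity=1.2):
    """Return a probability between 0 and 1."""
    z = intercept + b_age * age + b_comorbidity * has_comorbidity
    return 1 / (1 + math.exp(-z))

print(hospitalisation_risk(30, 0))  # low risk
print(hospitalisation_risk(80, 1))  # substantially higher risk
```

A governance process asks about exactly the things this sketch hides: which population the coefficients were fitted on, and whether they hold for the people the model will actually be applied to.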
Informed by initiatives such as the Algorithm Charter and AI Principles, Orion Health
initiated a governance process around the selection and deployment of algorithms for the Hub. We ask a set of questions, covering the way an algorithm was developed, how it is intended to be used, and how it could possibly be misused. We draw
upon consumers, Māori, clinicians, ethicists, lawyers, data scientists and policy professionals. We want to support better decision-making, and we need to ask some hard questions before we proceed.
Before selection and deployment, we always
ask whether an algorithm has been validated for the New Zealand population, and whether there could be any bias created by its intended or unintended use. A good example of this is nzRISK, a surgical risk calculator, which recently won an award
for innovation in the public good. This model for surgical mortality was specifically tuned to the New Zealand population, to replace previous tools which tended to under-estimate the risk faced by our most vulnerable groups.
Algorithms
can sometimes worsen inequities – automation of decisions could mean more efficiently making unfair decisions, especially if the algorithm is built based on historically inequitable behaviour. The most egregious international examples of these
include decisions on child welfare and criminal sentencing implicitly based on the colour of one’s skin. These examples are pertinent in New Zealand, where Māori have frequently experienced disproportionate negative outcomes, an equity gap which
could be exacerbated through algorithm use even with the best of intentions.
However, the risk of harm does not mean we should avoid algorithm use entirely. In fact, the data analysis that leads to the development of algorithms can also
identify historical bias, and therefore can help us to combat the bias that we don’t realise we have. For example, if Māori have been historically under-represented in accessing specialist healthcare services, then an algorithm can prioritise
their treatment or identify unmet needs.
Technology can and should be used to reduce bias. For example, software enables us to follow agreed standards to ensure that everybody who meets certain criteria is given the appropriate care.
Where studies have shown that certain groups have lower rates of referrals or appropriate prescriptions, simple checks of patient data can catch these cases to reduce the gaps in care. Good data science algorithms have the potential to address
these challenges on a larger scale.
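A care-gap check of the kind described here can be as simple as a filter over patient records: flag everyone who meets the criteria for a referral but has none recorded. The field names and criteria below are hypothetical:

```python
# Sketch of a rule-based care-gap check: flag patients who meet the
# criteria for a referral but have no referral recorded. Record fields
# and the age criterion are hypothetical.
def find_care_gaps(patients, meets_criteria, has_referral):
    return [p for p in patients if meets_criteria(p) and not has_referral(p)]

patients = [
    {"id": "a", "age": 72, "referred": True},
    {"id": "b", "age": 75, "referred": False},
    {"id": "c", "age": 40, "referred": False},
]
gaps = find_care_gaps(patients,
                      meets_criteria=lambda p: p["age"] >= 65,
                      has_referral=lambda p: p["referred"])
print([p["id"] for p in gaps])  # ['b']
```

Because the criteria are explicit and applied uniformly, a check like this can surface exactly the patients who would otherwise fall through the gaps that referral studies have identified.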
The answer, as suggested by initiatives such as the Algorithm Charter, is to acknowledge the benefits and shortcomings of both algorithms and current practice, and ensure that we ask good questions early
and often. Adhering to the Charter is akin to agreeing to a code of practice, and our data science community will serve New Zealand well by holding ourselves to high standards as we develop, train, select and deploy algorithms to support human
decision-making.
Algorithms need not be perfect – they need to help imperfect people make better imperfect decisions than they would make without them.
Kevin Ross is Director of Research at Orion Health and CEO of Precision Driven Health.
If you want to contact eHealthNews.nz regarding this View, please email the editor Rebecca McBeth.