The following articles are from the Centre for Data Ethics and Innovation's March 2026 newsletter. The newsletter comes out quarterly, full of news, commentary, opinion, and education. Sign up by emailing dataethics@stats.govt.nz
The Data Ethics Advisory Group is a group of independent experts from across academia, industry, community, and public life who generously give their time to help government think through the tricky, real-world ethics of using data.
These are people who spend their days grappling with complex questions about trust, technology, fairness, and risk, and they’re here to make sure you don’t end up on the wrong side of the line-of-creepy.
DEAG meets with agencies to talk through live data challenges, novel ideas, or things that just don’t feel straightforward. Government agencies can test assumptions, pressure test decisions, and get early, practical advice before issues become problems. DEAG also brings together what it hears across agencies to identify patterns and system level risks, helping lift ethical practice across government.
And the best part? This advice is independent, informed, and free. No procurement. No consultants. Just thoughtful people helping government make better, more trusted decisions about data.
Pretty cool, really.
We’ve asked some of the DEAG crew to share perspectives from their day jobs, to help unpack a few of the gnarlier issues in the data ethics panopticon.
New Government Statistician and Stats NZ Chief Executive Colin Lynch is encouraging a focus on trust and transparency as digital transformation in the public sector accelerates.
Colin has been at the helm of the National Statistics Organisation since January, bringing extensive leadership experience across the public and private sectors. Read more about Colin.
He has a strong focus on trust, ethics and customer service, which was evident when he addressed audiences at this month’s Digitising Government New Zealand conference in Wellington.
Read the full article on the Stats NZ website
Colin Simpson, global leader in health AI, epidemiology, and data science, DEAG Chair, innovating ethically from way back when.
Being part of DEAG is a genuine pleasure. It’s a rare gathering of sharp minds - academics, practitioners, community thinkers, and industry folk - each bringing their own expertise to untangle tricky questions about risk, opportunity, and fairness. There’s a real joy in that blend of rigour, curiosity, and putting people at the very heart of data.
A key thing I always advocate when we talk about what happens with people’s data is ensuring that the interests of the public, and public trust in our data systems, are at the heart of what we do.
DEAG was set up precisely to help government agencies test ideas and policies in ways that reflect community expectations and keep pace with emerging uses of data. Good data practice isn’t just technical, it's relational.
Similarly, innovation is relational. It is grand to experiment and do new things, but it can’t outrun ethics. New tools and clever ideas only have value if they manage risks and avoid harm to people while unlocking benefits for communities. DEAG promotes the idea that innovation and ethics go hand in hand, because trust isn’t automatic, it’s earned, again and again.
Jonathan Godfrey, Lecturer, Statistician, Chair of Blind Citizens NZ, member of Disability Data and Evidence Working Group, good human.
I want to offer a simple but powerful reminder for anyone working with people - avoid assumptions. While it’s easy to observe someone’s behaviour and think you know, it’s much harder to understand their capability, preferences, or needs without actually engaging with them. When we assume what disabled people can or can’t do, we risk heading down the wrong path, often with the best of intentions, but still, the wrong path.
At the heart of good interaction is something universal: be yourself, just another bog-standard but wonderful human. Disabled people don’t need special treatment; they need the same respect, warmth, and openness you’d offer anyone else. A smile, a greeting, a question. Even “imperfect” questions are better than staying silent, because conversation builds understanding and trust.
While technology can offer us a lot, emerging technologies often leave disabled people behind. Designers frequently build for the majority, assuming everyone has a smartphone or can navigate digital services. But not all technology can be made accessible and when that’s the case, we must have a Plan B that ensures people aren’t excluded from essential services. Accessibility isn’t innovation for innovation’s sake, it’s ensuring new systems don’t slow down or shut out the people who rely on them most.
This is true across the data lifecycle as well. Equity isn’t a single checkpoint. It must shape how data is collected, analysed, presented, and ultimately used. Are methods fair? Are disabled people represented accurately? Do outputs reflect their realities? Ethical practice requires asking these questions at every stage. Not just at the end.
So, for this newsletter, I leave you with a wero. Are your approaches truly equitable for disabled people, rainbow communities, and ethnic minorities? And are you meeting the public service commitment to make services more accessible?
Accessibility begins with curiosity, humility, and a willingness to ask.
Andrew Sporle, social researcher, data geek, bit of a legend.
Andrew has been working in the Integrated Data Infrastructure (IDI) since before it was even a thing. Creator of the IDI Search App.
People love to describe the IDI by its sheer size - billions of records, decades of data - and its ability to provide research insights for the good of New Zealanders. But the interesting stuff isn’t in the scale, it’s in how the data actually came to be - primarily as administrative data.
Administrative data isn’t created for research. It’s created because someone needed to run a service, check eligibility, process an application, or record an interaction. In other words, the IDI reflects systems just as much as it reflects people. So, before you analyse anything, it’s worth asking: What was this data originally collected for? What gets recorded? What doesn’t? And who falls through the cracks?
Linkage quality is another quiet troublemaker. It’s not a small technical detail, it’s a source of bias hiding in plain sight. Different population groups link at different rates, which can nudge your results in directions you didn’t expect. If you haven’t checked coverage, linkage error, and whether your denominator even makes sense… well, you’re already on shaky ground.
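To make that concrete, here’s a minimal sketch of the kind of check being described - computing linkage rates by group and flagging differential linkage. It’s Python with pandas on a toy table; the column names, groups, and flagging threshold are all invented for illustration, not actual IDI fields or official guidance.

```python
# A toy differential-linkage check using pandas.
# Column names ("group", "linked") are illustrative, not real IDI fields.
import pandas as pd

# Stand-in for a linkage table: one row per person,
# with a flag for whether they linked to the spine.
records = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "linked": [1,   1,   0,   1,   1,   1,   1,   1,   0,   0],
})

# Linkage rate per group: differential rates are a bias warning sign.
rates = records.groupby("group")["linked"].mean()
overall = records["linked"].mean()
print(rates, f"overall: {overall:.0%}", sep="\n")

# Flag groups whose linkage rate falls well below the overall rate
# (the 10-point threshold here is arbitrary, purely for the sketch).
flagged = rates[rates < overall - 0.10]
if not flagged.empty:
    print("Groups with notably lower linkage (potential bias):")
    print(flagged)
```

If one group links at 60% while the population links at 90%, any per-person rate you compute for that group rests on a skewed denominator - exactly the shaky ground Andrew warns about.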
Timing also matters more than most people realise. Policy changes, service exposure, and outcome measures rarely line up neatly. The IDI rewards researchers who are careful about cohorts and time windows, and punishes those who aren’t.
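In the same hedged spirit, here’s what “careful about time windows” can look like in code - restricting events to a defined exposure window around a policy date. The table, dates, and twelve-month window are made up for illustration.

```python
# A toy cohort time-window filter; dates and table layout are invented.
import pandas as pd

policy_date = pd.Timestamp("2020-07-01")  # hypothetical policy change

events = pd.DataFrame({
    "person_id":  [1, 1, 2, 3, 3],
    "event_date": pd.to_datetime([
        "2019-05-10", "2020-09-02", "2020-06-30", "2021-01-15", "2018-12-01",
    ]),
})

# Only count exposure inside a defined window after the policy change;
# events outside the window belong to a different question.
window_end = policy_date + pd.DateOffset(months=12)
in_window = events[
    (events["event_date"] >= policy_date) & (events["event_date"] < window_end)
]
print(in_window)
```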
And then there’s causality. The IDI’s richness doesn’t give you a free pass. Good identification still requires design, not wishful thinking.
But perhaps the most important reminder, and one Andrew has championed for decades, is that the IDI sits within Aotearoa, not somewhere abstract. That means doing right by Māori data. DEAG emphasises that Māori authority over Māori data is fundamental and woven into our data landscape, alongside commitments to tikanga-driven, inclusive data practice. Good IDI work requires understanding context, partnering meaningfully, and ensuring Māori are not just represented but respected in methods, interpretation, and impact.
And finally, if your analysis can’t be explained clearly, that’s a red flag. Transparent methods, reproducible code, insights that describe uncertainty, and visuals that show it aren’t optional extras. They’re part of doing the job properly.
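As a small illustration of “insights that describe uncertainty”, here’s a minimal sketch that reports a confidence interval alongside a point estimate. The numbers are invented, and the normal-approximation interval is just one simple choice among many.

```python
# Reporting an estimate with its uncertainty, not a bare point estimate.
# All numbers are invented for illustration.
import math

n = 400          # hypothetical cohort size
successes = 252  # hypothetical count with the outcome

p = successes / n
se = math.sqrt(p * (1 - p) / n)          # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se    # approximate 95% confidence interval

# State the interval alongside the estimate so readers see the uncertainty.
print(f"Estimated rate: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```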
The IDI is a taonga, but the value isn’t in the volume; it’s in thinking carefully about systems, structure, and bias. Method first. Models later.
Russell Craig, digital identity, ethics, AI, many hats, many lives.
Periodically, we ask DEAG members to pull from their vast knowledge and experience and tell us about the ethical lessons we should be learning from past mistakes. Here’s Russell on his time at Aadhaar.
Kate O’Connor, health ethics expert who literally wrote the book, very funky human.
The fear expressed by patients in the aftermath of the recent data theft from several GPs’ patient portals - ranging from anxiety to real terror - exposes the very close connection we have to our information, and the deep trust placed in those we give it to. Data is provided on the basis that it will be fiercely protected, available only to those with good reasons to have it, and used specifically for the purpose it was given. The threat that it can be stolen, sold, or used against us without our permission and for harmful, exploitative or discriminatory purposes is frightening.
Looking from the outside, there appear to be violations of at least two ethical principles: harmlessness (non-maleficence) and respect for autonomy, or self-rule. The fact that health information could be taken maliciously makes this situation particularly scary, but the principles should hold equally strongly in all organisations and agencies which hold our sensitive personal information. These places must expect similar consequences for any misuse of data and unethical conduct: fearful, angry people who may have permanently lost trust in them.
In NZ health care and research, the watershed Cartwright Inquiry (1987-1988) into the “unfortunate experiment” gave patient protections and rights the force of law. Obtaining informed and voluntary individual consent, for both active participation and the use of data in research, has been the default setting ever since.
As the use of data proliferates in previously unimaginable ways - including into AI - this country now has nearly four decades of experience in the ethical conduct of health research that those using data in other settings can and should learn from, particularly when using data without consent.
In NZ ethical and legal frameworks, the secondary use of already collected health data is ethical without consent in only a very limited set of circumstances. These include having a waiver of consent approved by a Health and Disability Ethics Committee (HDEC) or by an institutional ethics committee in a tertiary institution. Approval in cases where there is a strong public interest (one which outweighs the interest in privacy and autonomy) will be based on robust plans for privacy safeguards throughout the project lifecycle and thereafter, as well as for the minimisation of risk. There must be no disadvantage posed by the data use to patients or their relatives, and there must be strong justifications for why consent will not be sought. These could be ethical, scientific, or practical reasons, e.g. infeasibility or the introduction of bias to the results. There must be no reason to think that people would not give consent if asked, and appropriate consultation with Māori and other important stakeholders must be undertaken and their support obtained.
Those using or linking sensitive data for a purpose other than that for which it was obtained - but outside of health or academic research, and regardless of whether prior ethics committee approval is required - should borrow from this blueprint for the protection of our rights and interests in our information.
Where consent for data use can be obtained, it should be. Data users should care about the consequences of their actions and carry out their carefully planned work responsibly, transparently, and skilfully. Only by doing so can the benefits of data use be realised while minimising our fear and anxiety about data-danger and maintaining our trust in the agencies which hold our information.
Check out the guidance DEAG has already put out. This can help you navigate the real-world ethical challenges that come with using data.
Over the last couple of years, we have gathered the collective wisdom of DEAG. These gems come from grappling with tricky, high-impact data decisions across government.
The guidance walks through the big questions (like - should we do this at all?) as well as the everyday ones that shape trust over time. It covers data collection, sharing, consent, engagement with communities, and how to design data products and services responsibly. There’s also targeted advice on social investment, working with group data, and using AI, including what to think about before buying or deploying new tools.
If you’re designing something new, changing how data is used, or just want a clearer ethical compass, this is a useful place to start. It’s grounded, thoughtful, and refreshingly focused on people - not just process.
Worth a read if you want to make good data decisions with confidence.