
Society | April 9, 2018

A computer model may be dodgy on deportation, but not as dodgy as a human


If you remove statistical models and computational algorithms which reveal discriminatory assumptions or outcomes, you’re not removing discrimination, you’re just making it less transparent, writes Danyl Mclachlan.

Imagine you’re the head of Immigration New Zealand. Part of your job is to deport people who are in the country illegally. You have limited resources: you can’t deport everyone at once. So how might you go about prioritising deportations? Who needs to go immediately? Who gets to stay?

This week we learned that the department has been using predictive modelling to help solve that problem. As Radio New Zealand reported:

Immigration New Zealand has been modelling data – including age, gender and ethnicity – of overstayers to identify which groups most commonly run up hospital costs or commit crime.

It could then move faster to deport people instead of prosecuting them or allowing them to re-apply for visas.

Its harm team has been using information such as past overstayers’ convictions and unpaid hospital debts to feed into its data modelling work.

Other work included looking at which demographic groups made multiple failed immigration applications or accounted for a large share of immigration fraud allegations.

Immigration NZ has been heavily criticised for this, and the criticism falls into a number of categories:

Firstly: deporting people on the basis of ethnicity or country of origin – a category which INZ is now apparently claiming wasn’t part of the modelling after all – is racial profiling. It’s discrimination: illegal under the Bill of Rights Act. And even if you take out the variables directly related to race and look at other factors – crime, education, income, health – you’re still going to end up with a racially biased deportation policy, because race and racial inequality pervade all of those issues.

Secondly: even if you end up with a totally non-racial outcome, it still constitutes a form of pre-crime. The government is saying, “Our model predicts you’re more likely to commit a crime or incur a cost to the health system or defraud the immigration system, so even though you haven’t done anything, we’re deporting you because you might.” The state is punishing people who’ve done nothing wrong!

Thirdly: doubts about the efficacy of the model. How does it work? What are the inputs? How reliable is the data? How robust are the predictions? Like, if it predicts that some group of potential deportees will commit some level of crime, or cost the health system some amount of money, how close is that prediction to what actually happens? We have no idea, but there’s a widespread suspicion that the model isn’t that great: that it will generate what statisticians call “false positives” – people the model flags as being high risk or high cost but who are actually low risk or low cost.
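To see why false positives loom so large, here’s a rough back-of-the-envelope calculation. The numbers are invented purely to illustrate the arithmetic – they have nothing to do with INZ’s actual model – but they show how even a model that sounds reasonably accurate will mostly flag the wrong people when the thing it’s predicting is rare:

```python
# Purely hypothetical numbers, for illustration only – not INZ's model or data.
population = 10_000         # people eligible for deportation
base_rate = 0.02            # suppose 2% would genuinely go on to be "high cost"
sensitivity = 0.90          # the model flags 90% of the genuinely high-cost people...
false_positive_rate = 0.10  # ...but also wrongly flags 10% of everyone else

true_positives = population * base_rate * sensitivity                 # 180 people
false_positives = population * (1 - base_rate) * false_positive_rate  # 980 people
flagged = true_positives + false_positives

print(f"People flagged by the model: {flagged:.0f}")
print(f"Flagged people who really are high cost: {true_positives / flagged:.0%}")
# About 16% – the overwhelming majority of the people flagged are false positives.
```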

Fourthly: you might say, “This is politically correct nonsense! This policy is about protecting people from violent crime! How many lives are worth losing just to keep a bunch of social justice warriors happy?” But violent crime is pretty rare. In the US, illegal immigrants are less likely to commit a violent crime than the baseline population. If a member of one deportee group has a 1 in 100,000 chance of committing a violent crime and a member of another group has a 2 in 100,000 chance, then an algorithm – and most humans – is going to jump at members of the second group, who are twice as likely to commit a violent crime, but the actual chance of them doing so is still incredibly low.
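The same point in numbers – again, the figures here are made up to illustrate the arithmetic of relative versus absolute risk, not drawn from any real dataset:

```python
# Made-up figures to illustrate relative vs absolute risk – not real data.
risk_group_a = 1 / 100_000   # chance a member of group A commits a violent crime
risk_group_b = 2 / 100_000   # chance for group B: "twice as likely"

print(f"Relative risk: {risk_group_b / risk_group_a:.0f}x higher for group B")
print(f"Absolute risk for group B: {risk_group_b:.5%}")   # 0.00200%
print(f"Expected violent crimes among 1,000 people from group B: "
      f"{1000 * risk_group_b:.2f}")                        # 0.02
# Twice the relative risk, but the absolute chance is still vanishingly small.
```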

So there are all sorts of problems and issues raised by this. But let’s go back to the original thought experiment. You’re still the head of Immigration NZ. You still have to figure out who you’re going to deport given the limited resources available to you. How can deporting someone be “pre-crime” if the subjects are all eligible for deportation? So what criteria do you use? Who goes?

Some people answer this question by saying, “No one. Stop the deportations.” And if the government was likely to do this, that’d be a good solution, but they aren’t, so it isn’t. Others might say, “We should live in a world without racism.” That’s a noble sentiment and, yes, we should all work towards such a world, but in the meantime the head of Immigration NZ still has to make their decisions. Still others say, “The whole system is wrong! What we really need to do is dismantle nationalism and capitalism,” because it’s easier to operate in far mode and talk about huge systemic problems than it is to switch to near mode and solve actual problems. But the head of Immigration NZ still needs to, etc.

Maybe the decision should be made randomly? An anti-green card lottery? That would preclude any racist critiques. But isn’t that worse than “pre-crime” or a model that generates false positives? Instead of saying, “We’re deporting you because of predictive modelling or bad data,” we’d be saying, “We’re deporting you totally at random!” Or maybe it should be made by some neutral factor like date of arrival? That gets around all those problems – but I wouldn’t like to be the immigration minister explaining that we could reduce immigration fraud, and we could save the crumbling and underfunded health system huge sums of money that could be spent on New Zealand residents, but we decided to base deportations on a meaningless variable instead.

I saw a member of parliament discussing this issue online, and they insisted that we shouldn’t use algorithms: instead, they argued, Immigration NZ should look at the evidence and then make a decision. But having a human, or a number of humans, look at evidence and then make decisions is an algorithm! It’s just not a formally stated one. And humans aren’t actually very good at making evidence-based decisions – that’s why almost every large company and government department in the world is moving towards decision-making processes incorporating computation and statistical modelling.

Humans are very good at rationalising poor or questionable decisions and at hiding the true motives for those decisions, even from ourselves. So if you remove statistical models and computational algorithms which reveal discriminatory assumptions or outcomes, you’re not removing discrimination: you’re just making it less transparent. Which is probably the outcome we’re going to wind up with given the outcry over Immigration NZ’s modelling, but isn’t any kind of a win for that better world we all talk about.


