Borders and AI: Human rights–enhancing legal technologies

Credit: Alejandro Ospina

Can artificial intelligence (AI) be used to advance the rights of asylum seekers and other people on the move?

That might seem like a strange question. When we think of new border control technologies, dystopian visions typically come to mind. For example, the US Department of Homeland Security is investing in robot dogs to patrol remote regions of the US-Mexico border, the EU’s Frontex is deploying pseudo-scientific automated AI lie-detection video kiosks for travellers, and the Canada Border Services Agency is using flawed photo-matching software to strip refugees of their status.

There is much work to be done by human rights advocates to push back against these technologies. But, in addition to critiquing government uses of technology to control the movement of people across borders, we should also explore how technology can promote human rights. One promising avenue is the use of AI to shine light on problematic human border control decision-making. 

The past few years have seen a dramatic acceleration in the ability of generative AI tools trained on vast bodies of text to produce plausible responses to user prompts. OpenAI’s ChatGPT has thus far captured the most public interest, but there are many competing systems, from commercial platforms like Anthropic’s Claude to open-source alternatives such as BigScience’s BLOOM.

Much of the discussion in human rights circles about these technologies has focused on their flaws. For example, generative AI often reflects biases in its training data, including racial, gender, and religious biases. These systems also hallucinate, producing plausible but false statements. Generative AI also raises concerns about privacy, intellectual property, environmental impacts, and labor market disruption.

Despite these concerns, generative AI offers exciting opportunities to help human rights advocates undertake research involving large quantities of text that might otherwise be cost prohibitive.

Consider this example from my research on Canadian immigration decision-making. Over a decade ago, I completed an empirical research project on federal court judicial reviews of refugee determinations, which involved a dozen law student research assistants manually reviewing thousands of online federal court dockets. This project demonstrated that outcomes hinged on the luck of the draw—on which judge heard the case. In response, the federal court undertook measures to try to enhance consistency and fairness.

I repeated the study a few years later. This time I wrote a computer program using rules-based AI (i.e., simple handcrafted instructions such as: if this phrase appears near this word but not that other word, then infer outcome x). I verified the outputs of that program against the data set from the prior study and against a smaller set of manually collected new data. I then applied the program to thousands of new online federal court dockets. This project took much less research assistant time to complete. But it still required hundreds of hours of my time to write and verify the program, and it could not have been completed without the large, human-gathered data set from the first study. As with the first study, the second study found that outcomes hinged at least in part on the luck of the draw. Again, this generated policy discussions about how to improve decision-making in this area.
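To make that rules-based approach concrete, here is a minimal sketch of the kind of proximity rule described above. The trigger phrase, keywords, window size, and outcome labels are illustrative placeholders, not the actual rules used in the study.

```python
import re
from typing import Optional

def infer_outcome(docket_text: str, window: int = 60) -> Optional[str]:
    """Toy proximity rule: if a trigger phrase appears near one keyword
    but not the other, infer an outcome label. Illustrative only."""
    text = docket_text.lower()
    for match in re.finditer(r"application for leave", text):
        start = max(0, match.start() - window)
        end = match.end() + window
        neighbourhood = text[start:end]
        if "granted" in neighbourhood and "dismissed" not in neighbourhood:
            return "leave_granted"
        if "dismissed" in neighbourhood:
            return "leave_dismissed"
    return None  # no rule fired; flag the docket for human review

# Example usage
print(infer_outcome("The application for leave is granted; hearing to follow."))
```

Rules like this are fast and transparent, but each one has to be written, tested, and corrected by hand against human-labeled examples, which is why the approach still demanded hundreds of hours of verification.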

I recently tried again on a different federal court procedure: last-minute stays of removal. This time I used generative AI, specifically custom fine-tuned models on OpenAI’s platform. This project required only a few hundred examples of human-collected data. It also took less than a week of my time to complete, with accuracy rates similar to those achieved by my human research assistants. If I had used the methodology from the first iteration of this research, it would have taken thousands of hours of research assistant time. Again, the project demonstrated that outcomes hinged on which judge decided the case, with grant rates in stays of removal decisions ranging from 2.6% to 79.2%, depending on the judge.
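For readers curious about the mechanics, the sketch below shows how such a fine-tuning workflow might look, assuming the OpenAI Python SDK and its chat-format JSONL training data. The file name, model name, prompt, and label scheme are illustrative assumptions rather than the actual setup used in this project.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each training example pairs a docket entry with a human-assigned label.
examples = [
    {"text": "Motion for a stay of removal granted pending leave decision.",
     "label": "stay_granted"},
    {"text": "The motion for a stay of removal is dismissed.",
     "label": "stay_dismissed"},
    # ... a few hundred human-labeled examples in practice
]

# Write the examples in the chat-format JSONL that fine-tuning expects.
with open("stays_training.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps({
            "messages": [
                {"role": "system", "content": "Classify the outcome of this docket entry."},
                {"role": "user", "content": ex["text"]},
                {"role": "assistant", "content": ex["label"]},
            ]
        }) + "\n")

# Upload the training file and start a fine-tuning job.
training_file = client.files.create(file=open("stays_training.jsonl", "rb"),
                                    purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id,
                                     model="gpt-4o-mini-2024-07-18")
print(job.id)
```

The key difference from the rules-based approach is that the researcher’s effort shifts from writing and debugging rules to labeling a few hundred examples and then checking the fine-tuned model’s outputs against human judgments.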

What can we learn from this research trajectory about how AI can be used to advance the rights of refugees and other people on the move? 

First, when human rights advocates critique the use of AI in a border control setting, including by raising concerns about bias and nontransparent decision-making, we should keep in mind that many of these critiques apply with equal force to human decision-making. Among other things, this has implications for tools meant to mitigate the harms associated with border control AI, such as the requirement, now found in some AI guidelines like Canada’s Directive on Automated Decision-Making, that there always be a human in the loop; that human is just as likely to be a source of bias as any AI tool they are overseeing.

Second, we should recognize that a key problem with AI in the border control setting involves choices about where to direct these technologies. For example, we could choose to invest in technologies to make the journeys undertaken by asylum seekers safer instead of technologies that block their movements. Similarly, we could prioritize building AI tools to detect bias or racism by immigration officials instead of tools to supposedly detect fraud by travellers. Or we could use AI to scrutinize asylum decisions for fairness and consistency rather than scrutinizing materials submitted by asylum seekers for truthfulness. 

As we enter an era in which states increasingly turn to AI in the border control space, we need to go beyond merely critiquing these technologies. Advocates for noncitizens should also experiment with using these technologies to study and critique human border control decision-making. And we should demand that states, as well as companies and individuals with highly sought-after technical skills, make better choices about which problems are priorities to tackle through emerging technologies.