Concerns Rise over AI’s Role in UK Immigration Enforcement as Calls for Transparency Grow
TEHRAN (Tasnim) – A new AI tool used by the UK Home Office for immigration enforcement has sparked concerns from rights groups, who argue it could lead to unchecked automation of life-altering decisions for migrants without adequate human oversight.
The UK Home Office’s AI tool for processing migrant cases, including adults and children, has faced backlash from campaigners, who argue it risks enabling "rubber-stamping" of life-altering decisions, according to the Guardian.
Critics have described the tool as a “robo-caseworker,” fearing it could “encode injustices” through algorithmic decision-making in cases that could end in deportation.
The government defends the tool as a way to improve efficiency, with officials asserting that a human ultimately reviews each decision. The AI is employed to manage a caseload of approximately 41,000 migrants facing potential removal.
Campaigners have urged the Home Office to withdraw the system, denouncing it as “technology being used to make cruelty and harm more efficient.”
A year-long battle over freedom of information requests has shed light on the system’s operation. Documents released to Privacy International revealed that individuals affected by the AI are not explicitly informed that algorithms are involved in their cases.
The AI-powered tool, known as the Identify and Prioritise Immigration Cases (IPIC) system, uses data such as biometric information, ethnicity, health markers, and criminal records to streamline immigration enforcement.
The Home Office insists IPIC is a “rules-based workflow tool” aimed at recommending the next steps for caseworkers. Officials state that all recommendations are reviewed individually. The system also assists with EU nationals’ cases under the EU settlement scheme.
Jonah Mendelsohn, a lawyer at Privacy International, warned the tool could impact hundreds of thousands of lives, with people potentially unaware of how the AI is involved. He highlighted the need for transparency and accountability to avoid “encoding injustices” in immigration.
Migrants' Rights Network CEO Fizza Qureshi called for the system’s withdrawal, expressing concern over racial bias and increased surveillance on migrants due to extensive data sharing across government departments.
The system has been in widespread use since 2019-20, with the Home Office having previously rejected disclosure requests over fears that transparency could undermine enforcement.
Migration Observatory Director Madeleine Sumption acknowledged AI’s potential to improve decision-making but called for transparency. She noted that understanding AI’s impact on reducing unnecessary detentions would require insight into how the system works.
The new draft data bill, introduced in the UK Parliament last month, would allow automated decision-making, provided individuals can appeal, request human intervention, and challenge such decisions.