The Home Office has been using a secretive algorithm to process visa applications, which immigration experts warn could be discriminating against some applicants on the basis of nationality and age.

The department’s “streaming tool” — which is used for all work, study and visitor visa requests — grades applications as red, amber or green according to their level of risk. The graded applications are then forwarded to caseworkers for further processing.
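The Home Office has not revealed which factors the tool weighs. Purely as an illustration of how a traffic-light streaming step could work in principle, the sketch below maps a numeric risk score onto a red, amber or green rating; the factor names, weights and thresholds are invented for this example and are not drawn from the department’s actual system.

    # Purely illustrative: factor names, weights and thresholds are invented.
    # The Home Office has not disclosed what its streaming tool actually uses.
    from dataclasses import dataclass

    @dataclass
    class Application:
        nationality: str       # hypothetical input
        age: int               # hypothetical input
        previous_travel: bool  # hypothetical input

    HIGH_RISK_NATIONALITIES = {"Exampleland"}  # invented list for illustration

    def risk_score(app: Application) -> int:
        """Sum invented weights for a handful of generic markers."""
        score = 0
        if app.nationality in HIGH_RISK_NATIONALITIES:
            score += 2
        if not app.previous_travel:
            score += 1
        if app.age < 25:
            score += 1
        return score

    def stream(app: Application) -> str:
        """Map the score onto a red/amber/green rating; a caseworker still decides."""
        score = risk_score(app)
        if score >= 3:
            return "red"
        if score >= 1:
            return "amber"
        return "green"

    print(stream(Application(nationality="Exampleland", age=22, previous_travel=False)))  # prints "red"

The sketch also shows why the concerns raised by lawyers are hard to assess from outside: whether such a system disadvantages applicants by age, country of origin or travel history depends entirely on the undisclosed factors and weights inside it.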

But the Home Office has refused to provide any detail about the factors used to assess risk, or how regularly the algorithm is updated, because it fears this could encourage fraudulent applications.

It has not published any information about the use of the tool, which only came to the attention of immigration professionals when a group of lawyers were shown the streaming process during a recent visit to a visa processing centre in Sheffield.

Christina Blacklaws, president of the Law Society, which represents solicitors in England and Wales, said she was concerned that algorithmic decision-making “may well disadvantage certain groups of people based on generic markers such as age, country of origin or whether they have travelled before”.

“We urgently need a legal framework for the use of algorithms by public bodies and government,” she said. “There is a real risk of unlawful deployment, or of discrimination or bias that may be unwittingly built into an algorithm or introduced by an operator.”

The Home Office’s digitisation of visa processes is part of a wider efficiency drive to manage the increased bureaucracy caused by Brexit. The department is racing to register about 3.6m EU nationals currently residing in the UK through its settled status scheme, and will in future have to issue visas to European migrants who have previously been able to come and go under free movement rules.

A £91m outsourcing contract with the French company Sopra Steria to bring more visa processing functions online has been beset with difficulties since its launch at the end of last year.

The Home Office confirmed that the algorithm was being used to make sure visas were being “processed as efficiently as possible”, and made clear that human caseworkers always checked the streamed applications to make sure they met the “requirements of the immigration rules”.

However, a department spokesperson declined to answer any questions about how the algorithm operates, saying only that it complies with equalities legislation and does not stream “on the basis of race”.

Nichola Carter, an immigration lawyer at Carter Thomas Solicitors, called on the Home Office to provide more information on how it assesses applications “if it wants to avoid accusations that it is acting like big brother”.

Diane Abbott, Labour’s shadow home secretary, said any indication that algorithms were being used to assess visa applications without being open to scrutiny was “deeply worrying”.

“Every system is only as good as the inputs used to create it,” she said. “If there is bias, or they incorporate the prejudices prevalent in society, then those outputs will be similarly tainted.”

Ed Davey, home affairs spokesman for the Liberal Democrats, said speeding up decision-making “must never come at the cost of basic fairness”.

“If decisions are made by ‘black box’ algorithms, it’s impossible for individuals to understand or challenge them properly,” he said.

The visa algorithm has attracted the attention of the chief inspector of borders and immigration, who issued a warning in a 2017 report that there was “a risk that the streaming tool becomes a de facto decision-making tool”.

He was particularly critical of the Home Office’s failure to take account of the danger of “confirmation bias”, in which the visa caseworker may unconsciously dismiss information that contradicts the algorithm’s streaming rating.

Responding to the watchdog’s report, the department said it did “not agree” that the streaming of applications determined how a caseworker made their final decision.

The chief inspector is understood to have revisited the streaming issue in a report that will be submitted to the Home Office this month, before publication later in the year.
