A number of border control checkpoints in the European Union are about to get increasingly—and unsettlingly—futuristic.
In Hungary, Latvia, and Greece, travelers will be given an automated lie-detection test—by an animated AI border agent. The system, called iBorderCtrl, is part of a six-month pilot led by the Hungarian National Police at four different border crossing points.
“We’re employing existing and proven technologies—as well as novel ones—to empower border agents to increase the accuracy and efficiency of border checks,” project coordinator George Boultadakis of European Dynamics in Luxembourg told the European Commission. “iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.”
The virtual border control agent will ask travelers questions after they’ve passed through the checkpoint. Questions include, “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?” according to New Scientist. The system reportedly records travelers’ faces and uses AI to analyze 38 micro-gestures, scoring each response. The virtual agent is reportedly customized according to the traveler’s gender, ethnicity, and language.
Travelers who pass the test will receive a QR code that lets them through the border. If they don’t pass, the virtual agent will reportedly get more serious, and the traveler will be handed off to a human agent who will assess their report. But according to New Scientist, the pilot program, in its current state, won’t prevent anyone from crossing the border.
This is because the program is still very much experimental. The automated lie-detection system was modeled on an earlier system built by members of the iBorderCtrl team, and that earlier system was tested on only 30 people. In that test, half of the participants told the truth while the other half lied to the virtual agent. It achieved roughly a 76 percent accuracy rate, and that figure doesn’t account for the difference between being instructed to lie and lying for real. “If you ask people to lie, they will do it differently and show very different behavioral cues than if they truly lie, knowing that they may go to jail or face serious consequences if caught,” Maja Pantic, a Professor of Affective and Behavioral Computing at Imperial College London, told New Scientist. “This is a known problem in psychology.”
Keeley Crockett of Manchester Metropolitan University, UK, a member of the iBorderCtrl team, said the team is “quite confident” it can bring the accuracy rate up to 85 percent. But more than 700 million people travel through the EU every year, according to the European Commission, so even at that rate the system would misidentify a troubling number of truthful travelers as “liars” if it were rolled out EU-wide.
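To put those percentages in rough perspective, here is a minimal back-of-the-envelope sketch (ours, not the project’s) that assumes the reported accuracy applies uniformly to every traveler and that errors are spread evenly; both are simplifications, since the pilot hasn’t published a false-positive rate for truthful travelers specifically.

```python
# Rough illustration (not part of iBorderCtrl): how many travelers an
# imperfect screen could misclassify per year, under the simplifying
# assumption that the reported accuracy applies uniformly per traveler.

def misclassified_travelers(annual_travelers: int, accuracy: float) -> int:
    """Expected number of travelers scored incorrectly in one year."""
    error_rate = 1.0 - accuracy
    return round(annual_travelers * error_rate)

travelers = 700_000_000          # EU border crossings per year, per the European Commission
for accuracy in (0.76, 0.85):    # the earlier test's reported rate and the team's target
    errors = misclassified_travelers(travelers, accuracy)
    print(f"At {accuracy:.0%} accuracy: roughly {errors:,} misclassified travelers per year")

# Prints roughly 168,000,000 at 76% accuracy and 105,000,000 at 85%.
```

Even if the real error rate for truthful travelers turned out to be much lower than these crude figures suggest, at this scale a small per-traveler error rate still translates into millions of people flagged incorrectly.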
It’s slightly reassuring that the program, which cost the EU a little more than $5 million, is only being implemented in select countries for a limited trial period. It is crucial for such a system to collect as much training data as possible, from as diverse a pool of travelers as possible. But systems dependent on machine learning, especially those involving facial recognition, remain deeply flawed and biased. At a time when crossing borders is already contentious and prone to bias, throwing a partial, imperfect “agent” into the mix raises justified concerns.