AI Violates Human Rights to Control Migrants in Europe

Advancements in technology have the potential to propel our societies forward, but they can also be used to build systems of control that perpetuate racism. Amnesty International is sounding the alarm as the EU moves toward approving its regulation on the matter.

As technology advances, artificial intelligence (AI) systems are becoming ever more prevalent in daily life, and their use is expanding rapidly across industries. Beyond their original purposes, these programs are now being used to manage migration, process asylum requests, and enforce border controls. In this field in particular, there is significant concern that the technology may infringe basic human rights, more so than in other domains.


Amnesty International has issued an appeal, signed by Agnès Callamard, Secretary General of the NGO, addressed to the Members of the European Parliament responsible for the Artificial Intelligence Act. The regulation, proposed by the European Commission, seeks to establish a legal framework for the use of artificial intelligence; the appeal stems from concerns that it will not regulate AI effectively. Among the MEPs is Brando Benifei, head of the Democratic Party delegation in Strasbourg and Brussels. The committee vote on the text is set for May 11, with the plenary vote slated for mid-June.


Together with the Protect Not Surveil network, a coalition of civil society organizations and academics working at the intersection of digital and human rights, Amnesty has emphasized the importance of regulations that safeguard individuals from the negative impacts of artificial intelligence. The network argues that any form of "differential treatment" based on migrant status would be unjustified and a violation of the EU's human rights obligations.


The letter raises concerns about the impact of new information technologies on the human rights of migrants, refugees, and asylum seekers, and calls for a ban on these technologies in four areas deemed incompatible with those rights: automated risk assessment and profiling systems; predictive analytics used to forecast migratory flows; AI-based "lie detectors"; and facial recognition, technologies already used to study possible migration routes and to score the risk a person may pose based on their background.


Both the United Kingdom and the Netherlands have used automated systems to assess and profile travellers entering their countries, with algorithms conducting a preliminary evaluation of incoming visa applications. The UK system, scrapped in 2020, collected specific details from visa applicants and automatically assigned each one a colour code on a "traffic light" system: green, orange, or red. Nationality was among the parameters considered, though not the only one.


Applicants assigned red faced heightened scrutiny and were frequently rejected. The algorithm, which the Home Office had presented as a tool to streamline procedures and improve efficiency, was challenged by the Joint Council for the Welfare of Immigrants (JCWI), an independent organisation that advocates for the rights of migrants and asylum seekers. Geographic origin can hint at a person's ethnicity, race, or religion, though it is not a definitive indicator; according to the JCWI, combining this information with data on education level or job type could lead to discriminatory, and therefore unlawful, profiling.


The Netherlands had implemented a comparable system, which was scrutinised by the investigative journalism collective Lighthouse Reports and ultimately abolished after a report commissioned by the Foreign Ministry exposed its "structural racism." People from countries deemed high-risk, such as Iran or Afghanistan, are not necessarily a threat; they may instead be people in danger, seeking refuge from persecution. Every case must be approached individually and without preconceptions.


Artificial intelligence systems can gather a significant amount of data about individuals and use it to construct a comprehensive profile. Even a passenger's meal choice on a flight can reveal their religious affiliation: whether a traveller ordered a halal or kosher meal allows inferences about their beliefs. According to Mher Hakobyan, Advocacy Advisor on Artificial Intelligence Regulation at Amnesty International, airlines can transmit such data to border guards, who feed it into a system that builds a profile of a potential migrant or asylum seeker. Systems of this kind may already be in use in various countries without our knowledge, though the final decision on an asylum or visa application still rests with a human being.


That official's decision, however, will inevitably be influenced by the computer's analysis. According to Hakobyan, recent studies show that automation bias, people's over-reliance on automated recommendations, can lead humans to endorse flawed algorithmic decisions, a phenomenon already observed in fields such as law enforcement and justice.


Amnesty International has also raised concerns about systems that use economic and geopolitical data to forecast the origin and direction of migration flows. Such a system could, Hakobyan acknowledges, help manage the phenomenon effectively; but it could also be misused by border police to carry out push-backs, which are prohibited by international law and which, he adds, are already taking place in several European countries even without the assistance of AI.


Emotion recognition, touted as a new and improved lie detector, could prove even more dangerous and discriminatory. By analysing facial expressions, this technology claims to predict a person's true intentions, and technology firms frequently market it as a valuable asset despite the lack of scientific evidence for its efficacy. Some proponents argue that these systems could help people with autism interpret the emotions and facial expressions of those they are speaking with. But, Hakobyan notes, what counts as a "normal" emotional expression is often defined in a discriminatory way. Using these technologies in interrogations, particularly with migrants, carries a high risk of error: the programs may have been developed in the Western world, but how a person responds to a stimulus or emotion is also shaped by their cultural background and nationality. According to Hakobyan, how people smile, and when, differs from country to country.


The NGO has also raised concerns about retrospective facial recognition, which matches an individual's photo against images captured by surveillance cameras at the European Union's borders. This allows quick verification of whether a person entered the territory irregularly, but it raises questions about privacy and potential misuse, and past experience casts doubt on the technology's reliability.


According to Hakobyan, there have been numerous cases in the United States of individuals being arrested and brought to court because of flaws in these systems, including, on several occasions, people who took part in Black Lives Matter protests and were wrongly accused of crimes. In one case, a theft suspect was arrested and charged for an incident that occurred in a different state; after several days in detention, he had to stand trial, and bear the associated costs, simply to prove it had all been an error. For someone who has migrated to a new country without money or knowledge of the language, such an experience can be devastating.


Source: Alfonso Bianchi/TODAY