Technologies and Asylum Procedures

After the COVID-19 pandemic halted many asylum procedures across Europe, new technologies are now reviving these systems. From lie-detection tools tested at the border to systems that validate documents and transcribe asylum interviews, a wide range of technologies is being used in asylum applications. This article explores how these systems have reshaped the ways asylum procedures are conducted. It reveals how asylum seekers are turned into compelled yet hindered techno-users: they are asked to comply with a series of techno-bureaucratic steps and to keep up with unpredictable, minor changes in criteria and deadlines. This obstructs their capacity to navigate these systems and to pursue their right to protection.

It also illustrates how these technologies are embedded in refugee governance: they contribute to the ‘circuits of financial-humanitarianism’ that operate through a flurry of dispersed technological requirements. These requirements increase asylum seekers’ socio-legal precarity by hindering their access to the channels of protection. The article further argues that analyses of securitization and victimization should be combined with an insight into the disciplinary mechanisms of these technologies, through which migrants are turned into data-generating subjects who are disciplined by their reliance on technology.

Drawing on Foucault’s notion of power/knowledge and on local knowledge, the article argues that these technologies have an inherent obstructiveness. They have a double effect: while they help to expedite the asylum process, they also make it difficult for refugees to navigate these systems. Refugees are placed in a ‘knowledge deficit’ that leaves them vulnerable to erroneous decisions made by non-governmental actors, and to ill-informed and unreliable narratives about their conditions. Moreover, these technologies pose new risks of ‘machine mistakes’ that may result in inaccurate or discriminatory outcomes.