Biometric surveillance refers to the technological process through which individuals are identified, authenticated, or monitored based on their unique physiological or behavioural characteristics. These include facial features, fingerprints, iris patterns, voice, gait, and even micro-expressions, all of which are transformed into quantifiable data templates for storage and analysis. The fundamental premise of biometric systems is that the human body itself becomes both a password and a record, a form of “machine-readable identity” that links biological attributes to systems of governance and control. As these technologies have evolved, their scope has expanded beyond traditional uses in criminal forensics and access control into broader regimes of security and social administration. This expansion has led to what Marciano (2019) terms a shift from a “means of inspection” to a “form of control,” where biometric surveillance not only observes but structures the social order through classification and exclusion.
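To make the idea of a “machine-readable identity” concrete, the minimal Python sketch below illustrates, in highly simplified form, how a biometric matching pipeline of this kind typically works: a physiological trait is reduced to a numeric template, stored, and every later encounter becomes a thresholded similarity comparison. Everything named here is an illustrative assumption rather than a description of any deployed system: the 128-dimensional template, the 0.85 similarity cutoff, and the `extract_template` stand-in, which seeds a pseudo-random vector from the pixels instead of running a real trained embedding model.

```python
import numpy as np

EMBEDDING_DIM = 128     # illustrative template size, not any real system's
MATCH_THRESHOLD = 0.85  # assumed decision cutoff; real deployments tune this

def extract_template(image: np.ndarray) -> np.ndarray:
    """Stand-in for a feature extractor. A real system would run a trained
    face- or fingerprint-embedding model; here the pixels merely seed a
    deterministic pseudo-random vector so the example is self-contained."""
    seed = abs(hash(image.tobytes())) % (2**32)
    rng = np.random.default_rng(seed)
    template = rng.normal(size=EMBEDDING_DIM)
    return template / np.linalg.norm(template)  # unit-normalize for cosine

def is_match(stored: np.ndarray, probe: np.ndarray) -> bool:
    """The 'machine-readable identity' test: thresholded cosine similarity
    between a stored template and a freshly captured one."""
    return float(stored @ probe) >= MATCH_THRESHOLD

# Enrollment: the body is reduced to a stored numeric record ...
enrolled = extract_template(np.zeros((64, 64), dtype=np.uint8))
# ... and every later encounter becomes a pass/fail comparison against it.
probe = extract_template(np.zeros((64, 64), dtype=np.uint8))
print(is_match(enrolled, probe))  # True: same input yields the same template
```

The sketch also makes the permanence problem discussed below visible: it is the stored template, not the underlying trait, that circulates between systems, and once it leaks there is no equivalent of a password reset.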
In recent years, the deployment of biometric systems at borders has intensified, transforming them into digital infrastructures that regulate human mobility. Canada’s biometric visa requirements, the European Union’s Entry/Exit System, and the U.S. Customs and Border Protection’s CBP One application exemplify the rise of digital borders: virtual frontiers where identity verification precedes the right to movement. As the European Digital Rights organization warns, such systems constitute “biometric mass surveillance, defined as the untargeted or arbitrarily targeted collection and processing of sensitive data in public or semi-public spaces without meaningful consent”. This normalization of pervasive monitoring, justified in the name of efficiency and security, carries profound human rights implications: it erodes not only the boundary between safety and surveillance but also those between privacy and governance, and between citizenship and datafication. In this emerging technological landscape, borders no longer merely demarcate territory; they extend into airports, databases, and devices, embedding surveillance into the very architecture of global mobility.
- Exposing the Human Cost
Beyond policy rhetoric, biometric governance reorganizes lived experience in ways that are concrete and often impossible to appeal. Qualitative evidence shows that compulsory enrollment and ongoing datafied monitoring produce surveillance anxiety, behavioural self-censorship, and an erosion of trust in institutions. These intrinsic harms are amplified in environments lacking strong legal safeguards or transparent mechanisms through which individuals can seek remedy. Biometric traits are also immutable: unlike a password, a face, fingerprint, or iris cannot be “reset.” If this data is hacked, shared, or repurposed, the exposure becomes permanent. A single breach can follow someone for life, creating long-term risks of tracking, misidentification, or unwanted monitoring with no way for the person to protect themselves across time and jurisdictions. At scale, untargeted capture in border zones and other publicly accessible spaces chills participation in civic life and deters help-seeking, a harm most visible among refugees and precarious migrants who must weigh essential services against the cost of being indexed in transnational systems. These harms are not only psychological but also distributive, cutting across many other domains. Bias in datasets and deployment practices compounds “failure-to-enroll,” misidentification, and disproportionate flagging for racialized groups, producing cascades of exclusion from mobility, employment, and assistance. In parallel, the accelerating uptake of these systems amid broader global authoritarian currents tends to normalize automated triage at the border, where opaque models arbitrate credibility and risk while relocating accountability from human decision-makers to technical infrastructures.
- Interrogating Legality and Ethics
Placed against established human-rights benchmarks, biometric border practices repeatedly collide with the intertwined tests of legality, necessity, and proportionality, and with substantive protections for privacy, equality, due process, and even the right to seek asylum. A European analysis underscores that biometrics constitute a special category of sensitive data requiring strict purpose limitation and heightened safeguards. Yet untargeted or arbitrarily targeted capture in publicly accessible spaces normalizes suspicionless monitoring and thus reverses the presumption of privacy.
From a rights perspective, the immutability of biometric identifiers makes the harm structure qualitatively different from that of ordinary identifiers. Once compromised or repurposed, the data cannot be changed the way a password can, which magnifies states’ positive obligations to prevent misuse, constrain retention, and guarantee redress. Empirical evidence shows that consent in such settings is often illusory. Individuals face de facto coercion when access to mobility or essential services depends on enrollment, undercutting any claim that processing is voluntary or proportionate. These conditions align with what human-rights commentators identify as function creep: data gathered for one asserted purpose (e.g., administrative efficiency) later fuels broader surveillance or inter-agency analytics, a pattern incompatible with proportionality and purpose limitation.
Equally consequential are equality and discrimination concerns. Critical scholarship demonstrates that biometric infrastructures instantiate social sorting, operationalizing classifications that fall unevenly on racialized and marginalized groups. Moreover, by categorizing behaviours as characteristic of particular ethnic groups, these systems reinforce racialized prejudices. Error distributions (e.g., failure-to-enroll and misidentification) and deployment contexts produce material exclusions from mobility, protection, and services. This transforms verification into governance, where profiling and categorization help determine who is credible, risky, or deportable, and where decisions are buffered by opaque technical systems that displace human accountability. The governance problem is intensified by cross-border data flows. As biometric records circulate among multiple agencies and jurisdictions, it becomes unclear who the controller is, who audits risk, and who remedies harm when shared data contribute to detention, denial, or refoulement. Technical and institutional complexity, the “black-box” character of these systems, and their aura of numerical objectivity further insulate decisions from contestation, shift power away from affected individuals, and obfuscate accountability. Faced with these dynamics, commentators call for legislative intervention, independent oversight, and even human-rights impact assessments prior to deployment, warning that expanding biometric portfolios amid a broader authoritarian drift risks entrenching automated triage over rights-based adjudication.
- Highlighting the Invisibility of Digital Harm
One of the most insidious features of biometric surveillance is its administrative invisibility: its capacity to normalize coercion by embedding it within the bureaucratic routines of governance.
Marciano (2019) argues that biometric surveillance has evolved beyond the logic of inspection into a form of control that structures power through infrastructure, algorithms, and data circulation rather than through visible enforcement. This transformation is crucial because it displaces traditional notions of coercion, making domination appear procedural, even benign. The collection and processing of biometric data are framed as “neutral administrative acts,” such as document verification, entry registration, or identity confirmation. In practice, however, they produce a phenomenon referred to as “governing by identity,” where access to rights, movement, and recognition becomes conditional upon being machine-readable. In this framework, the border ceases to be a spatial boundary and instead becomes a continuous digital architecture, present wherever identification occurs. The contemporary border now stretches across refugee registration centers, embassies, and even mobile phone applications, where algorithms, databases, and sensor networks enact surveillance automatically. This distribution of control transforms surveillance into a diffuse ecosystem, at once omnipresent and invisible, making its harms harder to locate and contest.
- Rights-Based Alternatives and Ethical Frameworks
If biometric surveillance represents a reconfiguration of power through data, then addressing its harms requires more than technical reform. It demands a reassertion of human rights as the organizing principle of digital governance. The current literature emphasizes that the problem is not merely technological but normative: biometric infrastructures were constructed without sufficient ethical oversight, allowing efficiency and security to supersede proportionality and accountability. This imbalance stems from the absence of clear, enforceable governance models regulating the collection, processing, and sharing of biometric data across public and private domains. The proliferation of such systems has outpaced both national legislation and international norms, creating a “juridical vacuum” in which administrative systems operate with the appearance of legality but without substantive compliance with human-rights standards. Addressing this gap requires embedding rights-based frameworks directly into the design, procurement, and evaluation of biometric technologies, and above all treating human rights as ex-ante design constraints rather than ex-post remedies.
Central to such a framework is the development of comprehensive data-protection regimes that clearly define consent, purpose limitation, and data-retention standards. The principle of necessity must guide any use of biometrics, meaning that collection should occur only when demonstrably required and when no less intrusive alternative exists. Because people at borders rarely have a real choice, “consent” cannot be treated as meaningful. When someone must give their fingerprints or face scan to receive a visa, cross a border, or access humanitarian aid, the decision is not voluntary. This makes individual consent an unreliable safeguard and underscores the need for structural protections instead: limits on how long data can be kept, strict rules on sharing information between agencies, and strong oversight. Such oversight should not be limited to privacy offices but should rest with independent, multidisciplinary bodies that can audit algorithms, detect bias, and enforce consequences when human-rights risks appear.
Recent policy developments offer preliminary steps toward such governance, but they simultaneously reveal its limitations. The European Union’s AI Act and the Parliament’s ban on real-time remote biometric recognition represent the first coordinated attempts to align artificial-intelligence and surveillance regulation with fundamental rights. Yet the persistence of biometric experimentation at borders, often justified under emergency or security exemptions, illustrates the fragility of legal safeguards in the face of political expediency. This tension underscores the need for Human Rights Impact Assessments (HRIAs) as a prerequisite to deployment, ensuring that proportionality, equality, and redress mechanisms are evaluated before, not after, harm occurs. In parallel, genuine reform must interrogate the cultural authority of data itself, namely the assumption that quantification guarantees truth. Without challenging this epistemological foundation, legal instruments risk legitimizing the very systems they intend to regulate.
Ultimately, a rights-based approach reframes the question of biometric governance from how to manage data securely to how to protect human dignity within digital infrastructures. This shift requires more than compliance checklists, as it calls for democratic accountability, participatory oversight, and transparency in algorithmic decision-making. In practice, this means that the deployment of biometric technologies must pass not only a technical feasibility test but also a moral legitimacy test rooted in international human-rights law. By centering privacy, equality, and agency, states can ensure that technological modernization does not become a mechanism of domination. As the current scholarship suggests, reclaiming the digital border as a space of rights rather than risk is not a matter of halting innovation, but of reorienting it toward justice.
Edited by Lauren Avis and Norah Nehme.

