Your Walk Betrays You: How AI Gait Recognition Ends Anonymity

You have never walked anonymously. You may have believed otherwise, pulling a hood over your head or choosing the busy side of the street, but the truth has been catching up for years. The way you shift your weight from one foot to the other, the cadence of your stride, the particular rhythm of your fingers on a keyboard, even the micro-fluctuations in your voice when you order a coffee: all of these patterns are, increasingly, as identifiable as your fingerprint. And unlike your fingerprint, you leave them everywhere, involuntarily, continuously, without ever pressing your thumb to glass.
Artificial intelligence systems can now identify individuals through subtle behavioural patterns and voice characteristics with startling accuracy. Gait recognition software deployed on the streets of Beijing and Shanghai can pick you out of a crowd from 50 metres away, even with your back turned and your face completely covered. Voice biometric systems in banking can authenticate your identity from a few seconds of speech. Wi-Fi signals bouncing off your body as you walk through a room can betray your identity through walls. The question is no longer whether these technologies work. It is what their proliferation means for the very concept of being unknown in a public space, and whether truly private human interaction remains possible in an age of pervasive, ambient identification.
The Expanding Biometric Frontier Beyond the Face
For over a decade, the surveillance debate has centred on facial recognition. Cities have banned it. Activists have marched against it. Researchers like Joy Buolamwini at the MIT Media Lab have exposed its profound racial biases, demonstrating through her landmark 2018 Gender Shades study that commercial facial analysis systems from IBM, Microsoft, and Face++ misclassified darker-skinned women at error rates as high as 34.7 per cent while achieving error rates below 1 per cent for lighter-skinned men. Her work, co-authored with Timnit Gebru, catalysed a reckoning that led every audited US-based company to stop selling facial recognition technology to law enforcement by 2020.
But while the world was arguing about faces, a quieter revolution was unfolding. Behavioural biometrics, the science of identifying people through how they move, type, speak, and interact with the physical world, has advanced rapidly and without the same degree of public scrutiny. Unlike facial recognition, which requires a camera pointed at your face, behavioural biometrics can operate at a distance, through obstacles, and without the subject's knowledge or cooperation. This makes it, in many respects, a far more consequential threat to anonymity than the technology that has dominated headlines.
The gait biometrics market was valued at USD 1.25 billion in 2024 and is projected to reach USD 3.41 billion by 2032, growing at a compound annual growth rate of 13.38 per cent. Security agencies accounted for roughly 44 per cent of that market in 2024, with Asia-Pacific expected to see the fastest growth at 15.15 per cent annually through 2032. These are not speculative projections from fringe analysts; they reflect sustained investment by governments and corporations in technologies that identify you not by what you look like, but by what you do.
Gait Recognition Comes of Age
The idea that every person walks differently is not new. Forensic investigators have long known that gait is distinctive. What is new is the ability of AI systems to extract that distinctiveness from ordinary surveillance footage and match it against databases at scale.
The Chinese AI company Watrix, incubated by the Chinese Academy of Sciences, has developed gait recognition software that extracts a person's silhouette from video and analyses its movement to create a model of how that person walks. According to Watrix CEO Huang Yongzhen, the technology has been trialled by police in Beijing, Shanghai, and Chongqing, and a pilot system operates in Hubei and Guangdong provinces. The system can identify individuals from up to 50 metres away, from any angle, even when faces are covered and in darkness. “With facial recognition people need to look into a camera,” Huang told the South China Morning Post. “Cooperation is not needed for them to be recognised by our technology.”
The accuracy Watrix claims is striking: up to 96 per cent. The company, which was inspired by a study from the US Defense Advanced Research Projects Agency (DARPA), has been in discussions with security firms in Singapore, India, Russia, the Netherlands, and the Czech Republic. Security officials in China's Xinjiang region, where the Uyghur Muslim population faces intense surveillance, have also expressed interest. The technology is not merely supplementary to existing surveillance; it fills the gaps that facial recognition cannot reach. It operates in conditions where faces are obscured, where lighting is poor, and where subjects are unaware they are being watched. Every person's posture, Huang has stated, is unique, like a fingerprint, and gait recognition is capable of identifying targets from any angle.
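In outline, the approach is simple enough to sketch. The snippet below shows a standard academic baseline, the Gait Energy Image, in which binary silhouettes are averaged over a gait cycle and matched against enrolled templates. It assumes silhouettes have already been extracted from video, and it stands in for, rather than reproduces, Watrix's proprietary pipeline.

```python
# A minimal sketch of gait matching using the Gait Energy Image (GEI),
# a standard academic baseline. It assumes binary silhouette frames have
# already been extracted (e.g. via background subtraction); Watrix's
# actual system is proprietary and not reproduced here.
import numpy as np
import cv2

def gait_energy_image(silhouettes, height=64, width=44):
    """Average aligned binary silhouettes over one gait cycle.
    Bright pixels mark regions the body always occupies; grey
    fringes encode how the limbs swing."""
    resized = [cv2.resize(s.astype(np.float32), (width, height))
               for s in silhouettes]
    return np.mean(resized, axis=0)

def identify(probe, gallery):
    """Return the enrolled identity whose GEI is closest to the probe."""
    return min(gallery, key=lambda pid: np.linalg.norm(probe - gallery[pid]))
```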
Nor is Watrix alone. In September 2024, NEC Corporation launched a gateless biometric authentication system capable of authenticating 100 people per minute while they are in motion. The system, initially deployed at NEC's Tokyo headquarters in July 2024, combines face recognition with gait-based matching technology to identify individuals in crowded areas without requiring them to stop or present credentials. NEC, which has been ranked first in face recognition benchmark tests by the US National Institute of Standards and Technology since 2009, has deployed its biometric technology in more than 50 countries and across 80 airports globally. The new system is being offered in Japan, the United States, and Singapore, and uses deep learning to re-identify tracked individuals even after they pass behind obstructions or through dense crowds.
The Voice That Gives You Away
Your voice is another behavioural signature that AI systems are learning to read with uncomfortable precision. Voice biometrics analyse characteristics including pitch, tone, cadence, and the physical properties of your vocal tract to create a unique voiceprint. Financial institutions have been early adopters: customers can authenticate transactions simply by speaking. The technology is marketed as frictionless and secure, a way to eliminate passwords and PINs. But a voiceprint, once captured, is not a password. It cannot be changed if compromised. And the infrastructure for capturing voice data is already ubiquitous: every smartphone, every smart speaker, every customer service line.
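The mechanics can be illustrated with a deliberately simple verification sketch. It uses classical MFCC features and cosine similarity, where production systems rely on neural speaker embeddings; the file names and threshold below are illustrative assumptions, not any bank's implementation.

```python
# A toy illustration of voiceprint verification using MFCC features,
# a classical building block of speaker recognition. Real deployments
# use far richer neural embeddings; file names here are hypothetical.
import numpy as np
import librosa

def voiceprint(path, sr=16000, n_mfcc=20):
    """Reduce a recording to a fixed-length vector: the mean MFCC frame."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = voiceprint("enrolment_sample.wav")          # hypothetical files
attempt = voiceprint("authentication_attempt.wav")
accepted = cosine_similarity(enrolled, attempt) > 0.9  # threshold is deployment-specific
```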
But the same technology that verifies your identity can also compromise it. Voice recordings are biometric identifiers as sensitive as fingerprints or retinal scans, yet they can be captured from a distance, harvested from voicemail messages, or scraped from social media posts. According to the 2024 Javelin Identity Fraud Study, American consumers lost more than USD 47 billion to identity fraud that year, with AI-generated synthetic identity fraud and voice cloning driving much of that figure. A survey by BioCatch found that 91 per cent of US banks are reconsidering voice biometric authentication due to AI cloning risks.
The threat is not theoretical. In April 2025, Hong Kong police dismantled a deepfake scam ring that used AI-generated video and cloned voices to open accounts at HSBC, resulting in losses exceeding HK$1.5 billion, approximately USD 193.2 million. The UK government has published a briefing note on the ethical issues arising from public sector use of biometric voice recognition technology, acknowledging the tensions between convenience, security, and privacy. Some institutions store biometric voice templates indefinitely or share them with third-party vendors for AI training purposes, often without the knowledge of the individuals whose voices are on file.
The US Department of Justice has affirmed a broad definition of biometric identifiers that encompasses facial images, voiceprints and patterns, retina and iris scans, palm and fingerprints, and behavioural data such as gait and keyboard usage patterns. This definitional expansion matters because it signals that regulators are beginning to recognise what technologists have known for some time: the body is a broadcasting device, and everything it broadcasts can be recorded, analysed, and matched to an identity.
The Invisible Biometric Layer
Beyond gait and voice, there exists an entire category of behavioural biometrics that most people never consider. Keystroke dynamics, the study of how you type, can identify individuals based on the timing between key presses, the duration for which each key is held, and the rhythm of your overall typing pattern. These measurements, captured at millisecond precision, create a biometric template that is unique to each person and extremely difficult to replicate.
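A toy example makes the feature set concrete. The sketch below assumes key events are logged as (key, press time, release time) tuples for a fixed enrolment phrase, so the feature vectors line up; real systems handle free text and layer statistical or neural models on top.

```python
# A toy sketch of keystroke-dynamics features. Events are assumed to be
# (key, press_ms, release_ms) tuples for the same fixed phrase; real
# systems use richer features and models, and cope with free text.
import numpy as np

def keystroke_features(events):
    """Dwell times (how long each key is held) plus flight times
    (gap between releasing one key and pressing the next)."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return np.array(dwell + flight, dtype=float)

def is_same_typist(enrolment_samples, new_sample, tolerance=2.0):
    """Score a sample by mean z-score against the enrolled template."""
    template = np.stack([keystroke_features(e) for e in enrolment_samples])
    mean, std = template.mean(axis=0), template.std(axis=0) + 1e-6
    z = np.abs((keystroke_features(new_sample) - mean) / std)
    return bool(z.mean() < tolerance)
```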
Research published in Discover Applied Sciences in 2025 highlights that keystroke dynamics can be used for continuous, real-time authentication, with any deviation from established typing patterns triggering an alert for possible unauthorised access. A 2024 study published in Sensors demonstrated that deep learning architectures combining convolutional and recurrent neural networks achieve high effectiveness in identifying users based on typing patterns. Forensic applications are also emerging: regardless of the number of machines a person uses, their typing pattern persists, making keystroke dynamics a potential tool for identifying anonymous online activity.
The invisibility of this technology is part of what makes it so consequential. Keystroke dynamics require no specialised hardware. They operate entirely in backend software, and in most cases users are unaware they are being profiled. This passive, invisible collection of behavioural data represents a fundamentally different kind of surveillance from a camera on a pole or a guard at a door. It is ambient, continuous, and nearly impossible to evade. A 2023 study in Applied Sciences also found that keystroke authentication is influenced by the language being typed, meaning bilingual users produce distinct profiles for each language, further enriching the data available for identification.
Wi-Fi Signals as a Surveillance Medium
Perhaps the most unsettling frontier in behavioural identification is the use of ordinary Wi-Fi signals to detect, track, and identify people. Wi-Fi sensing exploits the way radio signals interact with human bodies: as you move through a space, you cause reflections, refractions, and attenuations in the Wi-Fi signal, and these disturbances encode information about your body shape, movement patterns, and activities.
A comprehensive 2024 survey published in ACM Transactions on Sensor Networks documents how researchers have used Channel State Information from Wi-Fi signals to identify individuals based on their unique gait patterns, achieving accuracy rates above 90 per cent. Unlike camera-based systems, Wi-Fi sensing works through walls, in complete darkness, and without requiring any device to be carried by the subject. The technology leverages existing infrastructure, requiring only standard Wi-Fi access points and receiving devices.
Research published in Engineering Applications of Artificial Intelligence demonstrated human activity recognition through walls using deep learning models applied to Wi-Fi CSI data. The PA-CSI model, which combines phase and amplitude analysis with attention mechanisms, has achieved accuracy rates of up to 99.9 per cent on benchmark datasets. A specialised system called WiFind can detect, localise, and estimate body pose through walls, debris, and smoke, using nodes costing under USD 150 each.
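The shape of these pipelines can be sketched under loud assumptions: that CSI capture tooling already yields a time-by-subcarrier amplitude matrix per walking sample, that synthetic noise can stand in for real measurements, and that a nearest-neighbour classifier can stand in for the deep models the papers actually use.

```python
# A schematic sketch of CSI-based person identification. Assumes CSI
# capture already produces a (time_steps x subcarriers) amplitude matrix
# per walking sample; the data below is synthetic, and k-NN is a
# deliberately simple stand-in for the published deep models.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def csi_features(amplitude):
    """Summarise how a walker perturbs each subcarrier over time."""
    return np.concatenate([amplitude.mean(axis=0),  # average attenuation
                           amplitude.std(axis=0)])  # gait-induced fluctuation

rng = np.random.default_rng(0)
# Synthetic stand-in: 10 walking captures each for 3 people,
# 200 time steps x 30 subcarriers, offset per person for illustration.
captures = [(person, rng.normal(loc=person, scale=1.0, size=(200, 30)))
            for person in range(3) for _ in range(10)]
X = np.stack([csi_features(c) for _, c in captures])
y = [person for person, _ in captures]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([csi_features(captures[0][1])]))  # -> [0]
```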
The implications are stark. A person walking through a building equipped with standard Wi-Fi infrastructure could, in principle, be continuously tracked and identified without any visible surveillance equipment, without their knowledge, and without any possibility of covering their face or altering their appearance to avoid detection. Wi-Fi technology has evolved from its initial 802.11 standard to Wi-Fi 6 and the anticipated Wi-Fi 7, with each generation improving the resolution and sensitivity of sensing capabilities. The physical world is becoming readable in ways that were previously confined to science fiction.
Regulation Struggles to Keep Pace
The regulatory response to behavioural biometric surveillance has been fragmented and reactive, consistently trailing the technology it seeks to govern. The most significant legislative development has been the European Union's AI Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024 and began enforcing prohibitions on certain AI systems from 2 February 2025.
Article 5 of the AI Act prohibits real-time remote biometric identification systems in publicly accessible spaces for law enforcement, with limited exceptions. It bans the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases, and prohibits biometric categorisation systems that deduce race, political opinions, religious beliefs, or sexual orientation from biometric data. Violations carry fines of up to 35 million euros or 7 per cent of global annual turnover, whichever is higher.
Yet the Act contains significant exceptions for law enforcement, allowing real-time biometric identification for targeted searches of victims of abduction or trafficking, prevention of imminent threats, and prosecution of serious crimes, all subject to judicial authorisation. These carve-outs have drawn criticism from organisations like European Digital Rights (EDRi), which argues they may legitimise the very practices the Act purports to ban. As a Stanford Law School analysis noted, despite omitting an outright ban on facial recognition in publicly accessible spaces, the AI Act will probably show its full potential in the years after its entry into force.
In the United States, Illinois' Biometric Information Privacy Act remains the strongest state-level protection, granting individuals a private right of action and statutory damages of USD 1,000 per negligent violation and USD 5,000 per intentional violation. BIPA class action settlements totalled more than USD 206 million in 2024, including the landmark Clearview AI settlement in which class members received a 23 per cent equity stake in the company, valued at approximately USD 51.75 million. In August 2024, Illinois amended BIPA to limit damages to one violation per person regardless of how many times data was collected, a change that contributed to a sharp decline in new filings. Clearview AI itself had amassed a database of more than 60 billion facial images scraped from social media platforms, news websites, and other publicly accessible online sources, prompting the wave of litigation.
San Francisco became the first US city to ban government use of facial recognition in May 2019, with Supervisor Aaron Peskin declaring, “We all support good policing but none of us want to live in a police state.” Yet even this landmark ordinance had limits: it carved out exceptions for federal facilities and did not apply to private businesses. Moreover, in the five years since the ban, San Francisco police admitted to circumventing it on six separate occasions.
The UK's Information Commissioner's Office launched its AI and biometrics strategy in June 2025, focusing on situations where risks are highest and public concern is clearest. The ICO plans to set a high threshold of lawfulness for AI systems that infer subjective traits, intentions, or emotions based on physical or behavioural characteristics. Public polling cited in the strategy found that 54 per cent of UK adults have concerns about facial recognition technology impacting civil liberties, and that concern about AI use for welfare eligibility has risen from 44 per cent to 59 per cent between 2022 and 2025.
When the Wrong Person Gets Caught
The dangers of these systems are not abstract. In January 2020, Robert Williams, a Black man living in Farmington Hills, Michigan, was arrested outside his home in front of his wife and two young daughters by Detroit police. He was detained for thirty hours in an overcrowded, dirty cell. The arrest was based on a facial recognition match from a blurry surveillance image of a shoplifting suspect at a Shinola store in Detroit. Williams was actually the ninth-best match in the system's results, and detectives had not investigated his whereabouts before making the arrest.
Williams' case, brought to public attention by the ACLU and the University of Michigan Law School's Civil Rights Litigation Initiative, became the first publicly reported instance of a false facial recognition match leading to a wrongful arrest. On 28 June 2024, the parties reached a groundbreaking settlement that established the nation's strongest police department policies constraining law enforcement's use of facial recognition, including a prohibition on arrests based solely on facial recognition results and mandatory training on the technology's risks and its higher misidentification rates for people of colour.
Williams' case was one of three known wrongful arrests in Detroit where police relied on facial recognition technology. All three individuals wrongfully arrested were Black. This pattern underscores the findings of Buolamwini's Gender Shades study and raises a critical question about behavioural biometrics more broadly: if the training data and deployment contexts of gait recognition, voice identification, and other behavioural systems reproduce the same biases, the consequences for marginalised communities could be severe. The Pew Research Center found in a 2022 survey that 28 per cent of Black adults said police would definitely make more false arrests if facial recognition were widely adopted, compared with just 11 per cent of white adults.
Surveillance and the Death of Free Assembly
The surveillance theorist and Harvard professor Shoshana Zuboff has described surveillance capitalism as “the unilateral claiming of private human experience as free raw material for translation into behavioural data.” Her framework, articulated in The Age of Surveillance Capitalism (2019), identifies a fundamental shift in which human experiences are extracted, computed, and sold as prediction products. “We are not surveillance capitalism's 'customers,'” Zuboff writes. “We are the sources of surveillance capitalism's crucial surplus.”
When behavioural identification systems operate in public spaces, they do not merely observe; they transform public space itself. Research consistently demonstrates that surveillance produces measurable “chilling effects” on freedom of expression, assembly, and political participation. A qualitative study published in the Journal of Human Rights Practice documented, through interviews with 44 participants in Uganda and Zimbabwe, how the fear of surveillance undermines trust and interpersonal relationships, creating spirals of paranoia and mistrust that directly affect the right to freedom of assembly.
These findings extend well beyond authoritarian contexts. In the United States, the Department of Commerce's National Telecommunications and Information Administration found in a survey of 41,000 households that one in five Americans avoided online activity because of concerns about government data collection. The Electronic Frontier Foundation has highlighted studies showing that government surveillance discourages both speech and access to information on the internet. When people know they are being watched, they change what they say, where they go, and whom they associate with. The extension of these chilling effects from the digital to the physical realm, through gait recognition cameras, voice identification microphones, and Wi-Fi sensing systems, represents a qualitative escalation.
Amnesty International's Ban the Scan campaign, launched in 2021, has mapped the surveillance landscape of New York City, documenting more than 25,500 CCTV cameras across the city and revealing that the NYPD used facial recognition technology in 22,000 cases since 2017. The campaign found that the most surveilled neighbourhood across three boroughs was an area in Brooklyn with a population comprising 54.4 per cent Black residents, underscoring the racialised geography of surveillance infrastructure. Amnesty further documented how facial recognition was used at Black Lives Matter protest sites in 2020 to identify, track, and harass people exercising their rights to peaceful assembly.
The UN Office of the High Commissioner for Human Rights is preparing a thematic report, expected at the 62nd session of the Human Rights Council, analysing the impact of digital and AI-assisted surveillance on assembly and association rights, including chilling effects. A 2025 study in Big Data and Society, examining Extinction Rebellion protests in The Hague, revealed that surveillance technology produces effects beyond traditional chilling, including what researchers termed “hyper-transparency” and “hyper-alertness” among both protesters and police.
The Rise of Anti-Surveillance Fashion
The proliferation of behavioural identification systems has given rise to a counter-movement that sounds like it belongs in a cyberpunk novel but is entirely real: adversarial fashion. These are garments, accessories, and cosmetic techniques designed to confuse, disrupt, or defeat AI surveillance systems.
Artist and researcher Adam Harvey pioneered this field with his CV Dazzle project in 2010, which used cubist-inspired makeup patterns to defeat facial detection algorithms. The technique, named after the dazzle camouflage developed by British painter Norman Wilkinson for Allied ships during the First World War, works by obscuring key facial features until recognition systems can no longer detect a human face. Harvey followed this with HyperFace in 2016, which takes the opposite approach: rather than hiding faces, it floods the environment with false face-like patterns printed on fabric, exploiting algorithms' preference for the highest-confidence facial region.
More recently, the Italian company Cap_able has developed a patented process that algorithmically generates adversarial patterns, translating them into knitted garments that retail between USD 300 and USD 600. These garments combine visual adversarial patterns with infrared protection, aiming to disrupt both optical and thermal surveillance. Researchers have also published work on thermally activated dual-modal adversarial clothing that can defeat both visible-light cameras and infrared sensors simultaneously.
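The principle these garments exploit is well documented even where commercial processes are not. The sketch below shows the textbook Fast Gradient Sign Method, which perturbs an image in whichever direction most increases a classifier's loss; Cap_able's patented generation process is not public, and printing a pattern that survives fabric, folds, and viewing angles involves constraints this toy example ignores.

```python
# The textbook principle behind adversarial patterns: the Fast Gradient
# Sign Method (Goodfellow et al., 2015). This shows only the core idea
# on a generic classifier, not any company's production process.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Nudge each pixel in the direction that most increases the
    classifier's loss, bounded by epsilon, so confidence collapses."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach().clamp(0.0, 1.0)

# Usage (shapes assume a batch of one RGB image and a hypothetical model):
#   adversarial = fgsm_perturb(detector, img.unsqueeze(0),
#                              torch.tensor([person_class]))
```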
However, as Harvey himself has cautioned, “Camouflage, in general, should be considered temporary, but especially technical camouflage that targets quickly evolving algorithms.” The arms race between surveillance systems and countermeasures is inherently asymmetric: updating a neural network is cheaper and faster than redesigning a wardrobe. Moreover, the very act of wearing obviously adversarial clothing in public draws human attention, potentially marking the wearer as suspicious even as it confuses the machines.
The market for AI-enhanced video surveillance is projected to grow from USD 3.90 billion in 2024 to USD 12.46 billion by 2030, according to market research. Against this scale of investment, adversarial fashion remains a niche countermeasure, meaningful as a statement of resistance but limited as a practical solution to the erosion of anonymity.
Europe's Border Experiment in Behavioural Biometrics
While much of the debate about behavioural biometrics focuses on domestic surveillance, the technology is also reshaping the boundaries of national security and border control. The European Union's PopEye project, funded through a Horizon Europe grant of 3.2 million euros, represents a significant step towards integrating gait recognition into border security infrastructure.
PopEye, an acronym for “robust Privacy-preserving biOmetric technologies for Passengers' identification and verification at EU external borders maximising the accuracY, reliability and throughput of the rEcognition,” aims to identify individuals on the move, at distances of up to 200 metres, without requiring them to stop. The project combines gait recognition with 3D facial recognition, addressing the limitations of each technology when used in isolation.
The project follows a 2021 Frontex study that examined gait recognition in depth, suggesting that video, radar, and floor sensors could be used to identify people by how they walk. Led by the European Association for Biometrics, PopEye involves partners including the AIT Austrian Institute of Technology, Idiap Research Institute, KU Leuven, and Vrije Universiteit Brussel, among others. Pilot programmes are being conducted at the external borders of Romania and Finland, with the Finnish Ministry of Interior and Romanian Border Authorities serving as key participants.
The project's emphasis on privacy preservation and compliance with the EU's AI Act and GDPR reflects an awareness that the technology it develops operates in a sensitive legal and ethical space. Researchers from VUB and KU Leuven are leading efforts on integrated impact assessments to safeguard human rights and data protection. Yet the fundamental tension remains: a system designed to identify people at a distance, without their cooperation, is inherently a surveillance technology, regardless of the procedural safeguards that surround it.
The Economics of Knowing Who You Are
The security technologist Bruce Schneier, a fellow and lecturer at Harvard's Kennedy School and board member of the Electronic Frontier Foundation, has written extensively about the economics of surveillance and the asymmetries of power it creates. “It is poor civic hygiene to install technologies that could someday facilitate a police state,” Schneier has warned. He has illustrated the collapsing cost of surveillance with a telling comparison: covert human surveillance of an individual costs approximately USD 175,000 per month, while obtaining location information from a mobile provider costs as little as USD 30 per month, a ratio of roughly 5,800 to one.
Behavioural biometrics push this economic logic further still. Gait recognition can operate using existing CCTV infrastructure. Keystroke dynamics require only software. Wi-Fi sensing leverages networks that are already installed in virtually every commercial and institutional building. The marginal cost of identifying one additional person approaches zero, which means the economic incentive to deploy these systems is enormous and the barriers to mass deployment are vanishingly small.
This economic reality creates what Schneier has called an alliance of interests between corporate and government surveillance. Corporations collect behavioural data for authentication, fraud prevention, and customer profiling. Governments seek the same data for security, immigration enforcement, and law enforcement. The data collected for one purpose inevitably becomes available for others, a phenomenon that privacy advocates call “function creep.” The US Consumer Financial Protection Bureau has already issued guidance stating that biometric information, including keystroke frequency and behavioural monitoring, used in employment decisions must comply with the Fair Credit Reporting Act.
Zuboff's analysis cuts deeper. She argues that “the power to predict human behaviour is the power to modify human behaviour, and this is what surveillance capitalism is all about.” When every public interaction can be linked to a known identity through behavioural patterns, the entire notion of a public sphere where individuals can move, speak, and associate without being tracked becomes an anachronism. The right to privacy, she insists, is not merely about data protection; it is about the conditions necessary for human autonomy: “what should become data in the first place, that is where the line has to be drawn.”
What Remains of Anonymity
The question this article set out to address, what does the rise of AI behavioural recognition mean for anonymity in public spaces, has a disquieting answer. The technological trajectory is clear: identification systems are becoming cheaper, more accurate, more pervasive, and harder to evade. They are moving beyond the face into the body's every motion and utterance. They work through walls, in darkness, and across distances that make consent meaningless.
The regulatory response, while significant in certain jurisdictions, remains fragmented and reactive. The EU's AI Act represents the most comprehensive attempt at governance, but its exceptions for law enforcement create significant loopholes. BIPA has produced substantial financial penalties in the United States, but it is a single state's law, and its recent amendments have blunted its deterrent effect. The UK's ICO strategy is still in its early stages. Globally, there is no coherent framework for governing technologies that can identify people from their walk, their voice, or the way they type.
What is at stake is not merely a technical question about privacy settings or data policies. It is a question about the kind of society that emerges when public spaces become zones of continuous, ambient identification. Research on the chilling effects of surveillance demonstrates that when people believe they are being watched, they modify their behaviour, self-censor their speech, and withdraw from political participation. The extension of surveillance from visible cameras to invisible behavioural identification systems does not reduce this effect; it amplifies it, because there is no way to know when you are and are not being observed.
Truly private human interaction in public spaces, a conversation with a stranger that no system records, a protest march where participants cannot be individually identified, a walk through a city where your movements are not logged and matched against a database, is becoming technologically impossible. This does not mean it will vanish entirely; enforcement gaps, technical limitations, and deliberate resistance will preserve pockets of anonymity. But the default condition of public life is shifting, from one where anonymity was assumed to one where identification is the norm.
The technologies being installed today will outlast the political conditions under which they were deployed. Gait recognition cameras placed for counter-terrorism will not be removed when the threat recedes. Voice identification systems built for banking will not be dismantled when fraud declines. Wi-Fi sensing capabilities embedded in building infrastructure will persist indefinitely. The question is not whether these technologies will be misused, but when, by whom, and with what consequences for the freedoms that depend on the ability to move through the world unrecognised.
Bruce Schneier put it plainly: “Privacy protects us from abuses by those in power, even if we're doing nothing wrong at the time of surveillance.” In a world where your walk, your voice, and your keystrokes are all that stand between you and identification, that protection is being quietly, systematically, irreversibly eroded.
References and Sources
Joy Buolamwini and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research, Vol. 81, 2018. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
South China Morning Post. “Chinese police test gait-recognition technology from AI start-up Watrix that identifies people based on how they walk.” November 2018. https://www.scmp.com/tech/start-ups/article/2187600/chinese-police-surveillance-gets-boost-ai-start-watrix-technology-can
NEC Corporation. “NEC Launches new system using Biometric Authentication Technology.” Press Release, 3 September 2024. https://www.nec.com/en/press/202409/global2024090301.html
GlobeNewsWire/SNS Insider. “Gait Biometrics Market Size to Hit USD 3.41 Billion by 2032.” 21 July 2025. https://www.globenewswire.com/news-release/2025/07/21/3118758/0/en/Gait-Biometrics-Market-Size-to-Hit-USD-3-41-Billion-by-2032-at-13-38-CAGR-Research-by-SNS-Insider.html
European Parliament. “EU AI Act: first regulation on artificial intelligence.” https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Stanford Law School. “No. 91: EU Artificial Intelligence Act: Regulating the Use of Facial Recognition Technologies in Publicly Accessible Spaces.” https://law.stanford.edu/publications/no-91-eu-artificial-intelligence-act-regulating-the-use-of-facial-recognition-technologies-in-publicly-accessible-spaces/
WilmerHale. “Year in Review: 2024 BIPA Litigation Takeaways.” 19 February 2025. https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20250219-year-in-review-2024-bipa-litigation-takeaways
Bloomberg Law. “Clearview AI Gets Settlement Approved in Face-Scan Privacy Case.” 2025. https://news.bloomberglaw.com/litigation/clearview-ai-gets-settlement-approved-in-face-scan-privacy-case
CNN. “San Francisco just banned facial-recognition technology.” 14 May 2019. https://www.cnn.com/2019/05/14/tech/san-francisco-facial-recognition-ban
SF Standard. “SFPD skirted facial-recognition ban, lawsuit says.” 18 July 2024. https://sfstandard.com/2024/07/18/san-francisco-police-facial-recognition-violations/
ACLU. “Williams v. City of Detroit.” https://www.aclu.org/cases/williams-v-city-of-detroit-face-recognition-false-arrest
Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.
Oxford Academic. “Chilling Effects of Surveillance and Human Rights: Insights from Qualitative Research in Uganda and Zimbabwe.” Journal of Human Rights Practice, Vol. 16, Issue 1, 2024. https://academic.oup.com/jhrp/article/16/1/397/7234270
Amnesty International. “Ban The Scan New York City.” https://banthescan.amnesty.org/nyc/index.html
Storbeck, M. et al. “Surveillance experiences of extinction rebellion activists and police: Unpacking the technologization of Dutch protest policing.” Big Data & Society, 2025. https://journals.sagepub.com/doi/10.1177/20539517241307892
ICO. “Preventing harm, promoting trust: our AI and biometrics strategy.” June 2025. https://ico.org.uk/about-the-ico/our-information/our-strategies-and-plans/artificial-intelligence-and-biometrics-strategy/
ACM Transactions on Sensor Networks. “A Survey on WiFi-based Human Identification: Scenarios, Challenges, and Current Solutions.” 2024. https://dl.acm.org/doi/10.1145/3708323
ScienceDirect. “WiFi-based human activity recognition through wall using deep learning.” Engineering Applications of Artificial Intelligence, 2024. https://www.sciencedirect.com/science/article/abs/pii/S0952197623013556
Springer Nature. “Keystroke dynamics for intelligent biometric authentication with machine learning.” Discover Applied Sciences, 2025. https://link.springer.com/article/10.1007/s42452-025-07449-5
MDPI/Applied Sciences. “Authentication by Keystroke Dynamics: The Influence of Typing Language.” 2023. https://www.mdpi.com/2076-3417/13/20/11478
Bruce Schneier. “The Eternal Value of Privacy.” Schneier on Security, May 2006. https://www.schneier.com/essays/archives/2006/05/theeternalvalue_of.html
Biometric Update. “PopEye to strengthen EU border biometrics with gait recognition integration.” October 2024. https://www.biometricupdate.com/202410/popeye-to-strengthen-eu-border-biometrics-with-gait-recognition-integration
Mozilla Foundation. “How to Disappear: The Rise of Anti-Surveillance Fashion.” https://www.mozillafoundation.org/en/nothing-personal/anti-surveillance-fashion-privacy-ai/
GOV.UK. “Briefing note on the ethical issues arising from the public sector use of biometric voice recognition technology.” https://www.gov.uk/government/publications/public-sector-use-of-biometric-voice-recognition-technology-ethical-issues/
Pew Research Center. “Public views of police use of facial recognition technology.” 17 March 2022. https://www.pewresearch.org/internet/2022/03/17/public-more-likely-to-see-facial-recognition-use-by-police-as-good-rather-than-bad-for-society/

Tim Green
UK-based Systems Theorist and Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk