CCTV Focus

What exactly counts as biometrics today?

A question that, two decades ago, would have interested only passport-equipment manufacturers and the occasional futurist, has now become one of the hottest battlegrounds in modern regulation. Cameras have learned to see far too much: emotions, age, gender, clothing details, posture, gait, and sometimes even what a person would rather hide from themselves. But while technology races toward prediction, the law struggles to keep it confined within the boundaries society still finds acceptable. And in every country, those boundaries look wildly different—so different that the same surveillance system may be harmless analytics in Moscow, a “high-risk AI system” in Brussels, the basis for a multimillion-dollar lawsuit in Illinois, and just “a smart camera” somewhere in Singapore.

At the center of this chaos lies a deceptively simple word: biometrics. It sounds futuristic, but legally it means something remarkably down-to-earth: information about a person that can be used to uniquely identify them and is used for that purpose. But behind this neat definition hides an ocean of nuance. Is analyzing emotions biometric? What about estimating age? Determining hair color? Detecting glasses? Assessing “look-alikes”? And what do we make of systems that track not humans at all, but cows, pigs, horses, and chickens?

Where Russia answers most of these questions with surprising clarity, other countries approach them with far more caution, and far more severity. Europe has built a legal labyrinth so intricate that emotion recognition is nearly banned in schools and workplaces. The United States lives under a patchwork of state-level laws where one algorithm may be lawful in California and illegal in Illinois. China, with characteristic decisiveness, declares almost any facial analysis “sensitive data,” placing commercial use under tight scrutiny. And Russia stands apart as a kind of legislative pragmatist: if the technology is not being used to identify a person, it is not biometrics and therefore falls outside the harshest regulations.

The European model deserves special attention, if only because the EU has become the world’s trendsetter in high-precision regulation. Under the GDPR, a face is not biometric data by itself; only when the face is processed to uniquely identify a person does it fall into the “special category” of data. Yet the moment European lawmakers tried to peer deeper—into emotion detection, psychological inference, behavioral signals—they discovered that their existing categories were insufficient. The AI Act, which entered into force in 2024 and whose prohibitions begin to apply in 2025, places emotion recognition in a class of its own: not biometrics, yet not ordinary analytics either. It is a technology of “elevated risk,” particularly when used in employment or education. Europe concluded that a system capable of inferring emotional states automatically alters the balance of power between individuals and organizations. And so these systems are effectively banned in those environments, regardless of accuracy, purpose, or benefit.

The American landscape is the complete opposite. There is no unified federal biometric law; each state enforces its own rules. Illinois, with its legendary BIPA, has become the terrifying example everyone cites: a law that turned biometric data into the most expensive category of information in the country. Under BIPA, nearly anything involving the face can count as biometric data: templates, geometry, comparisons, embeddings, and sometimes even age estimation if it uses facial structure. Class-action suits have forced Meta, Google, and countless smaller companies to pay enormous settlements. In other states, however, no such restrictions exist, making the U.S. a legal roulette table: the same algorithm might be entirely legal in one jurisdiction and financially catastrophic in another.

China follows another path—one without hesitation or ambiguity. In Chinese regulation, any facial information is sensitive. Emotions — sensitive. Behavioral patterns — sensitive. Gait analysis — sensitive. Commercial use of such algorithms is allowed only under strict, high-compliance scenarios, and emotion analysis for commercial purposes often sits in the “prohibited” zone entirely. China’s approach avoids the subtleties of Europe and the fragmentation of the U.S.: if a system can analyze a face, it must be under tight state control.

Against all this, Russia’s regulatory framework feels almost disarmingly straightforward. Russian law doesn’t try to classify every camera-based feature as biometrics. On the contrary: emotions, age, glasses, beard, hair color, clothing—none of this is biometric data. Even gender detection is not biometric. All of it remains mere personal data, and only when the system uses the data to identify a specific individual does it cross into biometric territory. Russia draws a clean line: analytics without identification is permitted; identity recognition is regulated.

The key factor in Russian law is purpose. Not what the system can theoretically do, but what it is intended to do. If the camera sees a smiling person, the law doesn’t care. If it sees Ivanov and reports “Ivanov is smiling,” it has entered biometric processing. Russia’s logic is refreshingly consistent: once an algorithm starts establishing identity, it must follow the strict biometric rules of domestic legislation.
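This purpose-based test is simple enough to sketch as a toy decision rule. The sketch below is purely illustrative, not legal advice: the function name, attribute labels, and category strings are invented for the example. What it captures is the point made above, that the list of attributes a camera extracts is irrelevant; only whether the system establishes a specific person’s identity changes the legal category.

```python
# Toy sketch of the purpose-based rule described above; not legal advice.
# Attribute labels and category names are invented for illustration.

def classify_processing(attributes: set, identifies_person: bool) -> str:
    """Classify camera analytics under the purpose-based test:
    the attributes detected (emotion, age, glasses, ...) do not matter;
    only whether the system establishes a specific identity does."""
    if identifies_person:
        return "biometric"        # strict biometric rules apply
    return "personal_data"        # ordinary personal-data rules apply

# The same smiling face, two legal outcomes:
print(classify_processing({"emotion"}, identifies_person=False))        # personal_data
print(classify_processing({"emotion", "age"}, identifies_person=True))  # biometric
```

Note that `attributes` never influences the result: under this reading of the law, a camera could detect a dozen facial features and remain outside biometric regulation, while a bare face match crosses the line immediately.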

Agriculture sits, somewhat comically, at the edge of this debate: an industry in which “facial recognition” is developing almost as quickly as in smart cities. Cow-face identification, pig-behavior tracking, poultry-health analytics — all of this is already commonplace. But animals are not legal subjects of personal data in any country, and therefore no cattle-recognition system is biometric under any personal-data law. The only risk is accidentally recognizing the farmer, not the livestock.

The result of all these global differences is a strange, almost surreal landscape: the world does not have a universal definition of biometrics. Europe fears emotional manipulation. America fears lawsuits. China fears uncontrolled information. Russia fears identity-driven surveillance disguised as harmless analytics. Each region encodes its cultural anxieties into law.

Strip away the legal machinery and one insight remains: biometrics is a collision point between technology, culture, politics, and the collective psychological threshold of society. Emotions, age, behavior—humans have interpreted these signals intuitively for millennia. Now algorithms do it automatically, continuously, and at scale. Legislators seem far more frightened by that fact than the people being analyzed.

And ultimately, the true question is not what a camera can see. It is how society chooses to limit that part of digital vision that crosses the boundary between observation and power.