As part of an ongoing roundtable series dedicated to the sustainable use of artificial intelligence in legal work, the ELTE CSS Institute of Legal Studies and the Algorithmic Constitutionalism 'Lendület'/Momentum Research Group convened a roundtable discussion on February 5, 2026, focused on the use of artificial intelligence (AI) in law enforcement and surveillance. Moderated by Rudolf Berkes, the discussion featured two experts from the Faculty of Law Enforcement at the University of Public Service: Associate Professor Réka Gyaraki and Assistant Professor Levente Tóth. The discussion explored practical applications of the technology, data protection and ethical dilemmas, and implementation issues surrounding the European Union's AI Act.
The discussion opened with a survey of practical AI tools already in use. The experts pointed out that AI already supports law enforcement in several areas, from transcription to facial recognition and predictive policing. Réka Gyaraki approached the topic from the perspective of the heavy workload faced by criminal investigators: as she noted, processing corridors full of documents for a single economic crime case without human error is virtually impossible. In the future, AI's role could be to find connections in massive case files where information might otherwise be lost due to the differing vocabularies of the various investigators involved.
From an engineering perspective, Levente Tóth highlighted the evolution of video analytics. While earlier systems merely detected line-crossing or abandoned luggage, modern systems can recognize complex behavioral patterns and even facial emotions. According to standards, effective facial recognition requires a resolution of about 250 pixels/meter, which works well under optimal conditions, though weather or poor lighting can significantly reduce effectiveness. According to Tóth, 90% of Hungarian law enforcement cameras are manufactured by Hikvision and Dahua. While cost-effective, these systems represent a major cybersecurity vulnerability: the cameras contain communication ports that transmit data to foreign servers for "quality assurance"—ports that cannot be disabled by Hungarian authorities.
Turning to predictive policing, the speakers explained that by analyzing past data, metadata, and even weather information, such systems can predict the likely locations and times of future crimes (such as bicycle thefts in the Netherlands), allowing authorities to allocate their resources accordingly. Réka Gyaraki emphasized that the Hungarian police are also experimenting with their own developments, such as the "BÖBE" system, which detects traffic anomalies within cities. Globally, law enforcement is increasingly reliant on powerful platforms from providers like Palantir and Cellebrite for digital forensics and relationship mapping.
The panelists were quick to point out the technical limitations of these tools, particularly in facial recognition. While current cameras often meet the resolution required by standards, real-world conditions like poor lighting or rain frequently lead to false identifications. In this context, a specific Hungarian case of false identification was discussed, in which a facial recognition system mistakenly implicated a young man in a shoplifting incident involving shampoo. Gyaraki explained that in such cases, the AI likely only provides a probability (e.g., 90%) and generates multiple potential matches. The error was therefore not the machine's alone but also the human case officer's, who accepted the match as fact without further validation. Levente Tóth agreed that with current technology, AI-generated data alone cannot serve as evidence in court; it must be supported by other evidence and subjected to strict human oversight.
The discussion continued with the speakers' perspectives on recent debates and developments regarding the use of biometric identification in public spaces. Levente Tóth detailed the EU legal framework in this area. Under the EU AI Act, real-time biometric identification in public spaces for law enforcement purposes is fundamentally prohibited, save for a few very strict exceptions (e.g., terror threats, searching for missing persons, and investigating perpetrators of severe crimes). Retrospective ("post") remote biometric analysis of existing footage is permitted but falls into the "high-risk" AI category. From August 2026, using such high-risk systems will require strict fundamental rights impact assessments, EU registration, and judicial authorization.
The speakers also addressed the possible "chilling effect" of mass surveillance on civil liberties. Referencing recent Hungarian legislation that extended biometric identification to misdemeanor cases, Tóth warned that "people may not dare to express themselves or attend demonstrations if they know facial recognition is being used." Gyaraki suggested that while the technical capability and regulatory possibility exists, the organizational and human protocols in the Hungarian police force act as a natural brake for mass surveillance. She emphasized that the police do not engage in mass, hobby-level profiling; every query is strictly logged, tied to a specific case number, and purposeful.
Gyaraki also offered a nuanced perspective on the fear of losing privacy, pointing out that citizens voluntarily share their locations and data every day via smart devices and social media (through check-ins, live streams, and photos). Thus, detecting participation in a demonstration often does not even require facial recognition, as digital footprints are revealing enough.
Furthermore, Levente Tóth highlighted the issues of AI "hallucinations" and discrimination stemming from training data. Because the majority of facial recognition systems have been trained on European faces, error rates are significantly higher for individuals of other ethnicities. In his view, the greatest task of the coming years will therefore be cleaning training data to mitigate this bias.
While Gyaraki stated that the police continue to develop technology in closely supervised working groups that adhere strictly to legal frameworks, Tóth predicted that in the longer term—as is already evident in medical imaging diagnostics—AI's accuracy will surpass human capabilities, and human validation may eventually be phased out of the process, if regulation allows. In the end, both speakers insisted that the "human-in-the-loop" principle should remain non-negotiable to protect fundamental rights.
The next event in the roundtable series will take place on April 9, 2026, addressing the contemporary legal and practical questions surrounding the use of AI in the judiciary.
_________________________________________________________
This report was prepared with the support of the ELTE CSS Algorithmic Constitutionalism Lendület/Momentum Research Group (LP2024-20/2024), funded by the Hungarian Academy of Sciences.
_________________________________________________________
The views expressed above belong to the speakers and do not necessarily represent the views of the Centre for Social Sciences.


Rudolf Berkes