As part of an ongoing series dedicated to the sustainable use of artificial intelligence in legal work, the ELTE CSS Institute of Legal Studies and the Algorithmic Constitutionalism 'Lendület' Research Group convened a roundtable discussion on November 27 to explore one of the most fundamental questions posed by this new technology: the legal status of Artificial Intelligence. Set against intense global debate and regulatory activity, most notably the finalization of the EU's AI Act, the discussion aimed not to arrive at a definitive answer, but to critically examine the viable concepts and existing legal frameworks that could contribute to a sustainable legal environment for AI. The session, moderated by Boldizsár Szentgáli-Tóth, senior research fellow at the Institute, featured insights from two distinguished experts: Réka Pusztahelyi, associate professor at the University of Miskolc, who brought a deep academic perspective, and legal expert Szabolcs Korba, whose expertise blends decades of experience in both the academic and business spheres. The discussion offered a comprehensive analysis of the core legal challenges, examining the concept of legal personality, the classification of AI within property law, and the paramount issue of liability.
The debate surrounding AI legal personality holds strategic importance because it interrogates the foundational structure of the law. The concept, which gained significant traction in the mid-2010s, proposes granting legal rights and obligations to a non-human, non-corporate entity, forcing a re-evaluation of long-held legal categories. The central question animating the discussion was whether this represents a necessary evolution of the law to accommodate a new form of autonomous actor, or whether it constitutes a fundamental category error with potentially dangerous implications.
The speakers presented a unified and robust critique of the proposal to grant AI full legal personality, grounding their arguments in legal history, psychological principles, and practical necessity. Szabolcs Korba drew a sharp contrast between the proposed legal status for AI and the historical development of corporate legal personality. The latter, he argued, arose organically from a compelling economic need to manage collective assets and limit liability, thereby facilitating commerce. In the case of AI, he asserted, there is no equivalent "urgent social constraint" or economic driver that would necessitate its recognition as a legal subject. Réka Pusztahelyi highlighted the profound risk of basing legal status on the human tendency to anthropomorphize tools. She argued that the emotional bonds people form with objects - from a cherished car to military robots that save soldiers' lives - are a natural aspect of human psychology. She cautioned, however, that this emotional connection is an unsound and "incredibly dangerous" basis for conferring legal rights, as it conflates human sentiment with legal necessity.
To reinforce this point, Szabolcs Korba provided historical examples demonstrating that the human inclination to personify advanced tools is not a new phenomenon. From Caligula's desire to make his horse a senator to the naming of the first spinning machine the "Spinning Jenny," societies have long attributed human-like qualities to non-human entities and technologies without ever elevating them to the status of legal persons.
The discussion then turned to the primary counter-argument: that AI's capacity for autonomous action justifies its consideration for legal personality. The speakers methodically deconstructed this premise, arguing that existing legal mechanisms are fully capable of addressing AI's operational independence without resorting to a radical reclassification.
Réka Pusztahelyi explained that the legal system is already equipped with the conceptual tools to manage the consequences of autonomous AI actions. She proposed that such actions be treated as "legal facts" - events that carry legal significance - which can then be attributed to a responsible human or corporate entity. She cited the models developed by both UNCITRAL and the European Law Institute for automated contracting as clear evidence that the law can accommodate machine-generated outcomes without granting the machine itself legal standing. Szabolcs Korba reinforced this conclusion from a technical standpoint, defining AI as a human-created algorithm whose autonomy, however advanced, operates within constraints predefined by its human creators. Consequently, he argued, responsibility must ultimately rest with the human designers, developers, and operators who set its operational parameters and purpose.
The panel also evaluated proposals to grant AI a special status analogous to that of animals or natural resources like rivers, which have been recognized as legal entities in some jurisdictions. The expert consensus was that these analogies are misplaced. Such legal constructs, they argued, are not about granting intrinsic rights to the animal or the river itself. Rather, they are human-centric legal tools designed for human purposes: to protect human emotional well-being and reflect our relationship with animals, or to safeguard a collective resource for the benefit of human society.
The comprehensive rejection of legal personality as a viable or necessary path forward naturally shifted the discussion to the question of how AI should be classified within existing legal frameworks. If AI is not to be considered a legal person, the law must find an alternative classification for it. The panel therefore examined the viability of fitting AI into established categories such as property, corporate entities, or software, revealing the significant conceptual and practical challenges presented by each approach.
The classification of AI under property law revealed a key tension between doctrinal purity and functional pragmatism. The proposal to manage AI by placing it within a corporate structure, such as a limited liability company, was critically assessed and deemed unworkable. The speakers identified a central, self-defeating paradox in this approach: a technology deemed advanced enough for legal recognition would, under corporate law, be classified as legally incapacitated, thus requiring a human agent to act on its behalf and negating the very autonomy that prompted the discussion.
Finally, the panel analyzed the limitations of viewing AI simply as a form of software. While AI systems are built on software, this category is insufficient for several reasons. First, copyright law, the primary domain of software regulation, is designed to protect authorship and does not address the crucial issue of liability for actions taken by the program. Furthermore, the experts articulated a clear consensus that complex, self-learning, and adaptive AI systems are fundamentally different from traditional, static software programs, rendering the software analogy inadequate for capturing their unique legal challenges.
These classification challenges all point toward the most pressing and complex legal problem associated with AI: liability. The speakers argued that the potential for AI to cause widespread, systemic harm - affecting millions of individuals simultaneously through a single algorithmic error - requires a fundamental re-evaluation of traditional, ex post liability models. Those models were designed for discrete, individual incidents and are ill-equipped to handle the scale and complexity of AI-driven systems. The experts detailed their critiques of existing and proposed liability rules, finding each insufficient to address the unique nature of AI-related harm:
- Product Liability Directive: This framework was deemed too narrow. Its application is strictly limited to damages arising from a specific "product defect" and explicitly excludes purely economic losses, which are a likely form of harm from AI systems.
- National Tort Law: Applying established national tort concepts, such as the Hungarian rule for "hazardous operations", is fraught with difficulty. It remains ambiguous whether an AI system legally constitutes a "source of heightened danger," making the application of this strict liability rule uncertain.
- The "Black Box" Problem: The panelists underscored the immense practical difficulty of assigning fault when an AI's decision-making process is opaque. Tracing a harmful outcome back through a complex global value chain—involving the designer, data provider, manufacturer, and operator—to pinpoint a single responsible party is often practically impossible.
In response to these inadequacies, Szabolcs Korba advocated for a fundamental shift in regulatory philosophy: from a legal-punitive model to a technologically embedded compliance model. This forward-looking proposal contrasts the traditional approach of imposing sanctions after harm has occurred with an ex ante focus on prevention. His proposal centers on using disruptive technologies (e.g., blockchain) to regulate other disruptive technologies such as AI, building compliance and monitoring mechanisms directly into the systems themselves. Such a paradigm moves the locus of control from courtrooms and regulators (ex post) to the technology's architecture itself (ex ante), a change he argued would be far more effective and less costly than traditional human-led oversight. He also highlighted the role of regulatory sandboxes as a vital collaborative tool where regulators and industry can work together to develop and test these new models.
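To make the ex ante idea concrete, the following is a minimal illustrative sketch in Python - not any system presented at the roundtable - of what an embedded compliance gate could look like. The names (Action, RULES, execute_with_gate) and the rules themselves are invented for illustration, and the hash-chained audit log merely stands in for the tamper-evident ledger role that blockchain would play in such a proposal.

    # Illustrative sketch of an ex ante compliance gate: every action is
    # checked against machine-readable rules BEFORE it executes, instead of
    # being sanctioned after harm occurs (ex post).
    import hashlib
    from dataclasses import dataclass

    @dataclass
    class Action:
        actor: str            # the AI system requesting the action
        kind: str             # e.g. "credit_decision"
        affected_users: int   # scale of the action's impact

    # Hypothetical machine-readable rules; a real system would load a
    # regulator-approved policy rather than hard-code it.
    RULES = {
        "credit_decision": lambda a: a.affected_users <= 1,  # no bulk decisions
        "content_ranking": lambda a: True,
    }

    audit_log: list[str] = []  # stand-in for an append-only (blockchain) ledger

    def execute_with_gate(action: Action) -> bool:
        """Run the compliance check before the action, not after the harm."""
        rule = RULES.get(action.kind)
        allowed = bool(rule and rule(action))
        # Hash-chain each entry to the previous one so the trail is tamper-evident.
        prev = audit_log[-1] if audit_log else ""
        entry = f"{action.actor}|{action.kind}|{allowed}|{prev}"
        audit_log.append(hashlib.sha256(entry.encode()).hexdigest())
        return allowed  # the system proceeds only if True

    print(execute_with_gate(Action("model-A", "credit_decision", 1)))          # True
    print(execute_with_gate(Action("model-A", "credit_decision", 1_000_000)))  # False

The point of the sketch is architectural rather than legal: the check and its audit trail live inside the system itself, so compliance is enforced and documented at the moment of action rather than reconstructed in a courtroom afterwards.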
This call for a new, adaptive regulatory paradigm is reinforced by a further point of expert consensus: the technology itself defies static legal categorization. Both Réka Pusztahelyi and Szabolcs Korba argued that AI's rapid and interconnected evolution makes any rigid classification system - such as the risk-based tiers outlined in the EU AI Act - both impossible to maintain and a potential obstacle to innovation. As technology evolves and systems are combined, categories that seem clear today will quickly become obsolete, underscoring the need for more flexible, principles-based regulatory approaches.
The roundtable discussion culminated in a clear consensus: the pursuit of legal personality for AI is a theoretical distraction from far more urgent and practical legal imperatives. The experts were unanimous in viewing the "electronic personhood" debate as a misdirection of intellectual and regulatory energy. The true path forward, as identified throughout the discussion, lies not in creating new legal subjects but in adapting existing legal tools to new realities. This requires the development of robust yet flexible liability frameworks capable of managing systemic risk, the establishment of clear lines of responsibility throughout the complex data, development, and operational value chains, and a strategic shift towards technologically integrated, ex ante regulatory models that prioritize prevention over retroactive sanction. This pragmatic approach offers a practical roadmap for regulators currently grappling with these challenges.
The next event in the series will take place on February 5, 2026, and will address the contemporary legal and practical questions surrounding the use of AI in law enforcement, with particular attention to surveillance.
_________________________________________________________
This report was prepared with the support of the Algorithmic Constitutionalism Research Group (LP2024-20/2024), funded by the Hungarian Academy of Sciences.
_________________________________________________________
The views expressed above belong to the speakers and do not necessarily represent the views of the Centre for Social Sciences.
__________________________________________________________


Rudolf Berkes