AI’s Escalating Sophistication Presents New Legal Dilemmas

AI Agents and the Rise of Virtual Task Makers

Artificial intelligence has evolved rapidly, progressing from simple automation tools to sophisticated systems capable of independent decision-making. At its core, AI refers to computer programs that mimic human intelligence by learning from data, recognizing patterns and performing tasks with minimal human intervention. A subset of AI, generative AI, specializes in creating new content – such as text, images or code – based on patterns learned from vast datasets, often through a large language model, or LLM.[1]

However, AI is no longer limited to passive content generation; the rise of AI agents marks a shift toward autonomous digital systems that can make decisions, execute tasks and interact dynamically with their environment. AI agents, a programmed form of generative AI, are advanced digital systems designed to perform tasks autonomously, rather than merely providing information in response to prompts. Unlike traditional AI models that generate knowledge-based outputs – such as answering questions or summarizing documents – AI agents take action, execute multistep processes and adapt dynamically to changing conditions. These agents can be assigned specific goals, process data in real time and make decisions to achieve desired outcomes, much like human employees operating under delegated authority. For example, in the financial sector, AI agents can automate fraud detection by continuously monitoring transactions, identifying suspicious patterns and flagging potential risks for review. In customer service, AI-powered virtual assistants handle inquiries, troubleshoot technical issues and even complete transactions, reducing response times while improving user experience. Companies like NVIDIA are at the forefront of this transformation, equipping AI agents with advanced reasoning capabilities that allow businesses to automate complex workflows, from customer service chatbots to AI-driven scientific research.
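
For readers less familiar with how such systems operate, the following is a minimal, illustrative sketch of the observe-decide-act loop a transaction-monitoring agent might run. It is not drawn from any vendor’s product; the Transaction fields, baseline figures and flagging rule are hypothetical stand-ins for the learned models and review workflows a production system would use.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account_id: str
    amount: float
    country: str

# Hypothetical per-account spending baselines; a deployed system would learn
# these from historical data rather than hard-code them.
BASELINES = {"acct-001": 250.0, "acct-002": 1_200.0}

def is_suspicious(txn: Transaction) -> bool:
    """Flag transactions that deviate sharply from the account's baseline."""
    baseline = BASELINES.get(txn.account_id, 500.0)
    return txn.amount > 5 * baseline or txn.country not in {"US", "CA"}

def monitor(transactions):
    """Simplified observe-decide-act loop for an AI 'agent'."""
    for txn in transactions:
        if is_suspicious(txn):
            # In practice this step would open a case for human review,
            # preserving the principal's oversight of its agent's actions.
            print(f"Flagged for review: {txn}")

monitor([Transaction("acct-001", 3_000.0, "US"),
         Transaction("acct-002", 800.0, "CA")])
```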

This increasing autonomy raises critical legal questions, particularly under agency law, which governs the relationship between a principal (who grants authority) and an agent (who acts on their behalf). As AI agents begin to function similarly to human agents – making decisions, forming contracts or even generating intellectual property – the legal framework must adapt to address accountability, liability and rights over AI-generated outputs.

The Principal’s Responsibility

Despite the autonomous nature of AI agents, the user – the principal – remains ultimately responsible for the agent’s actions. In the context of intellectual property (IP), for instance, if an AI agent generates content that infringes on another party’s copyright, the principal (user) may be held liable. The question of who owns the IP rights to the content created by AI agents further complicates matters, especially when the AI has been trained using a vast array of data that may contain copyrighted material.

Liability in AI Tools and the Principal-Agent Relationship

Who or What Constitutes an Agent?

A key issue in AI liability is whether AI systems should be treated as legal agents under traditional agency law. Agency relationships typically involve three elements: (1) a principal, (2) an agent and (3) a third party affected by the agent’s actions.[2] Under the common law, an agent acts on behalf of a principal and is subject to the principal’s control.[3]

Unlike human actors, AI lacks subjective intent, political liberties or autonomy in the legal sense. However, courts and regulators are increasingly faced with cases where AI-generated content causes harm or misinformation. The legal frameworks governing agency relationships, vicarious liability and product liability provide useful lenses for examining these issues. Traditionally, agency law requires that an agent acts on behalf of a principal, with the principal assuming liability for the agent’s actions. The Restatement (Third) of Agency explicitly states that computer programs cannot be considered agents:

[A] computer program is not capable of acting as a principal or an agent as defined by the common law. At present, computer programs are instrumentalities of the persons who use them. If a program malfunctions, even in ways unanticipated by its designer or user, the legal consequences for the person who uses it are no different than the consequences stemming from the malfunction of any other type of instrumentality.[4]

However, as AI grows more autonomous, agency law may require reexamination. While AI may not be an agent in the legal sense, courts may still attribute liability to its deployers.[5] In tort law, the application of respondeat superior – holding an employer vicariously liable for an employee’s actions within the scope of employment – offers a potential model for AI-related harms.[6] Applying this principle to AI would place responsibility on the human or corporate entity deploying the system, ensuring accountability without requiring courts to recognize AI as an independent agent.[7] Alternatively, regulatory frameworks could impose direct liability on AI developers or operators based on risk assessments, akin to the approach taken with autonomous vehicles.[8] Such frameworks would strike a balance between fostering innovation and safeguarding consumer protection and public safety.

Subjective Liability and AI Intent

AI does not engage in self-censorship out of legal concern, nor does it possess the intent a human would when generating outputs. Subjective intent standards exist to prevent liability from unduly suppressing legitimate speech, to uphold the fundamental principle of mens rea in criminal law and, more broadly, to preserve individual autonomy and participation in public discourse. Because those rationales do not extend to AI-generated content, alternative liability frameworks are needed; a reasonable person prompting an LLM should recognize the risk that it may produce defamatory material through hallucination.[9] In Counterman v. Colorado, the Supreme Court reinforced the importance of subjective intent by holding that criminal liability for online threats requires proof that the defendant subjectively understood the harm their statements could cause.[10] This distinction between human intent and AI-generated content highlights the need for tailored legal frameworks that balance accountability with the realities of AI’s non-intentional nature.

These considerations do not readily extend to artificial intelligence. Unlike human speakers, AI programs lack subjective awareness, autonomy or political liberty. An AI does not engage in speech as an act of self-expression, nor can it be censored in the way a human speaker might be. As such, the rationale for applying a subjective standard to AI-generated statements is unpersuasive, and strict liability or alternative frameworks for assessing AI’s role in harm may be more appropriate.

Despite existing legal classifications, advancements in AI challenge the assumption that it remains a mere instrument. As AI systems gain greater autonomy and decision-making capacity, the law may need to adapt. If AI entities begin fulfilling functions traditionally assigned to human agents, courts may reconsider agency doctrine or develop new liability frameworks to address these shifts.

Developer and User Liability: Who Is Responsible?

The challenge of matching AI-generated content to copyrighted works is not new. Platforms like YouTube already use automated detection systems to identify copyrighted material. AI developers inherently have access to training data, enabling comparisons between generated content and protected works. However, as AI advances in text, audio and visual generation, new complexities arise in identifying unauthorized derivative works.

A safe harbor framework could encourage companies to develop and implement effective filtering technologies. The objective is not perfect copyright enforcement but rather reasonable safeguards to minimize unauthorized reproduction. Filters would also need to address copyright-violating prompts, particularly as LLMs allow users to input extensive text. AI systems already apply filters for harmful content, and a safe harbor model could extend these obligations to copyright compliance, requiring developers to update filters and respond to takedown requests, akin to the Digital Millennium Copyright Act.
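
As a rough illustration of what such a safeguard could look like, the sketch below screens generated text against a hypothetical takedown registry before release. The registry contents, similarity test and threshold are simplified assumptions; production systems would more likely rely on content fingerprinting or embedding-based matching.

```python
import difflib

# Hypothetical registry of protected excerpts, e.g., populated from licensing
# databases or DMCA-style takedown requests submitted by rights holders.
TAKEDOWN_REGISTRY = [
    "example protected passage that a rights holder asked to be filtered",
]

def too_similar(output: str, protected: str, threshold: float = 0.85) -> bool:
    """Crude textual similarity test; a stand-in for real matching techniques."""
    ratio = difflib.SequenceMatcher(None, output.lower(), protected.lower()).ratio()
    return ratio >= threshold

def release_or_block(generated_text: str) -> str:
    """Screen model output before release, mirroring a safe harbor-style duty."""
    for work in TAKEDOWN_REGISTRY:
        if too_similar(generated_text, work):
            return "[Output withheld: matches content on the takedown registry]"
    return generated_text

print(release_or_block("an unrelated, original response"))
```

Under a safe harbor model, the legal significance would lie less in the filter being perfect than in its being maintained and updated in response to takedown requests.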

Infringement Risk and Fair Use

As artificial intelligence evolves, so do the legal challenges surrounding its outputs; one of the most pressing concerns is copyright infringement, especially when AI-generated content closely resembles existing copyrighted works. A landmark case illustrates the complexities of intellectual property rights in the digital age, particularly the contentious issue of alleged copyright infringement linked to the use of protected materials in training artificial intelligence models.[11] Stability AI, the developer of an AI image generator, was sued for allegedly using copyrighted photographs from Getty Images without permission to train its model, resulting in AI-produced images that closely mirrored the originals.[12] Cases like this highlight the growing tension between technological innovation and intellectual property rights, raising difficult questions about who bears legal responsibility when AI outputs infringe on protected works.

A key legal question in these disputes is whether AI-generated content is sufficiently transformative to qualify as fair use.[13] Courts have long assessed fair use by considering whether a work adds new meaning, expression or message, rather than merely replicating the original. However, this analysis becomes more complex when applied to AI, which lacks intent and creative discretion.

Although intent to infringe is not necessary for copyright liability, it is relevant in determining damages. Under U.S. copyright law, willful infringement – defined as a knowing and deliberate violation – can lead to statutory damages of up to $150,000 per work.[14] Additionally, a party that induces or encourages infringement can be held liable under the doctrine of contributory infringement.[15] However, because LLMs operate without intent, they cannot themselves engage in willful infringement or actively induce others to infringe.

Liability for AI-generated content will likely depend on whether those designing, deploying and using AI systems exercised reasonable care – a principle that courts have historically applied in assessing secondary liability for technology providers. This tension was underscored in a recent landmark ruling, where the court held that ROSS’s use of Thomson Reuters’s copyrighted content to develop a competing AI-driven legal research tool did not qualify as fair use under the U.S. Copyright Act.[16] As AI-generated content becomes more widespread, the legal system must adapt to balance innovation with copyright enforcement.

Developer Liability

AI developers bear responsibility for ensuring their systems do not facilitate widespread copyright infringement. In another case against Stability AI, artists accused the company of using their work without authorization to train models, arguing this constituted copyright infringement.[17] If developers train models on copyrighted materials without licensing agreements or a valid fair use defense, they risk legal liability. Compliance measures, such as implementing dataset transparency and licensing protocols, could help mitigate these risks.

User Liability

Even though the AI agent executes tasks autonomously, users (principals) remain liable for the actions of their agents. Under agency principles, users act as principals directing AI systems, meaning they could be held liable if the AI generates infringing content. This principle is particularly relevant in industries where intellectual property rights are paramount, such as publishing, entertainment and technology. A user who knowingly prompts AI (or an AI agent) to generate a work that infringes on an existing copyright could be held legally responsible for that infringement, even if the action was carried out by the AI agent. The user’s liability could stem from a failure to properly vet the AI agent’s outputs or an oversight in understanding the source of the content generated by the tool.

Transparency and Disclosure

To mitigate the risks of IP infringement, AI developers should implement transparency and disclosure mechanisms within their systems. Users should be made aware of how the AI tool generates content and whether the training data could include copyrighted materials. Additionally, clear licensing terms and attribution guidelines should be established to ensure that the AI agent’s outputs do not inadvertently infringe on intellectual property rights. As AI becomes more ubiquitous, these disclosure practices will be vital in safeguarding both users and developers from legal exposure.

Corporate Data: Data Privacy Concerns and Compliance Challenges

Data Privacy in the Age of AI Agents

As AI agents increasingly integrate into business operations, they inevitably process vast amounts of sensitive personal and corporate data. This raises significant concerns regarding data privacy and security. AI’s autonomous data processing capability poses a risk of unauthorized access or unintentional breaches, which can result in legal liabilities.

The case Doe v. GitHub, Inc. highlights the intersection of data privacy and AI, underscoring the potential legal ramifications of AI’s use of data without consent.[18] Though the case centers on copyright issues under the Digital Millennium Copyright Act, the underlying data security concerns are pertinent. Legal professionals must be vigilant about compliance with stringent regulations like the General Data Protection Regulation and the California Consumer Privacy Act. Artificial intelligence systems handling personal data must comply with privacy principles such as data minimization and the right to erasure. Violations can lead to heavy penalties under the General Data Protection Regulation, including fines of up to €20 million or 4% of global annual turnover, whichever is greater.[19]
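
To make the data-minimization principle concrete, the following is a small, hypothetical preprocessing step that strips direct identifiers before records reach an AI pipeline. The patterns and placeholder tokens are simplified assumptions and are no substitute for a vetted PII-detection tool or legal review.

```python
import re

# Illustrative patterns for two common direct identifiers; a real deployment
# would cover far more categories (names, addresses, account numbers, etc.).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def minimize(record: str) -> str:
    """Remove direct identifiers so downstream AI systems never receive them."""
    record = EMAIL.sub("[email removed]", record)
    record = PHONE.sub("[phone removed]", record)
    return record

print(minimize("Contact Jane at jane.doe@example.com or 555-123-4567."))
# -> Contact Jane at [email removed] or [phone removed].
```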

Data privacy concerns will be central to the deployment of AI agents. Lawyers should help businesses ensure that their AI systems comply with applicable data protection regulations. This may involve conducting regular audits of AI systems, implementing strong data security measures and developing transparent user consent frameworks. Ensuring that AI systems adhere to privacy laws and guidelines can mitigate the risk of non-compliance, particularly in industries where data privacy is a key concern, such as health care or financial services.

For businesses working with artificial intelligence tools, it’s essential to address how data privacy and compliance intersect with intellectual property ownership and contractual obligations. If AI systems are trained on sensitive or proprietary customer data, there may be additional legal risks, including data breach liabilities or unauthorized use of that data in future AI applications. Lawyers should help clients negotiate data sharing, protection and retention terms in their contracts to prevent such issues.

Transparency and User Consent

For companies deploying AI agents, transparency and user consent are paramount. Users must be informed about how their data is collected, processed and utilized, with clear, understandable disclosures regarding their rights. This includes the ability to access, correct and delete personal data. It’s essential that users can opt in or out and modify their preferences to maintain control over their data. Failure to meet transparency standards and obtain proper consent can expose companies to legal risks, regulatory scrutiny and reputational damage.[20]
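
One hedged sketch of how such preferences might be represented internally appears below; the field names, defaults and update method are hypothetical rather than drawn from any statute, but they illustrate explicit opt-in, user-modifiable preferences and a timestamp that supports audit trails.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical per-user consent state supporting opt-in/opt-out control."""
    user_id: str
    analytics_opt_in: bool = False        # defaults favor opting out
    model_training_opt_in: bool = False   # reuse of data requires explicit consent
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def update(self, **prefs: bool) -> None:
        """Let the user modify preferences and record when the change occurred."""
        for key, value in prefs.items():
            if hasattr(self, key) and key != "user_id":
                setattr(self, key, value)
        self.updated_at = datetime.now(timezone.utc)

record = ConsentRecord(user_id="u-123")
record.update(model_training_opt_in=True)  # explicit opt-in to training use
print(record)
```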

Takeaways for Lawyers

Navigating Data Ownership and Liability

Lawyers must be proactive in advising clients about the potential intellectual property risks associated with AI-generated content. This includes clarifying ownership issues and ensuring that proper licensing agreements are in place for any third-party content used in AI training datasets. Legal practitioners should also be prepared to address liability concerns, particularly in cases where AI agents infringe on existing rights. When contracting parties evaluate prospective relationships involving AI technology, it’s crucial to consider how intellectual property and data rights will be allocated. For example, businesses integrating artificial intelligence tools, platforms or data sets must negotiate terms related to intellectual property ownership, especially when the customer’s data plays a significant role in the development, training or fine-tuning of an AI model.

In more complex scenarios, where a customer’s data is a cornerstone for training a model or fine-tuning it for a specific use case, there may be a need to evaluate the balance between intellectual property rights and data privacy. For instance, if the customer’s data is unique, proprietary or particularly valuable, they may see themselves as a collaborator rather than a mere user of the technology. Negotiations should ensure that both parties understand their rights regarding data ownership, transformation and any resulting outputs, particularly when these outputs could have implications for the business’s competitive position or regulatory compliance.

Preparing for Future Regulatory Challenges

As AI technologies continue to develop and impact various sectors, businesses and developers must stay vigilant about state-level regulations that are quickly gaining momentum. In 2025, states are expected to increasingly legislate on AI, particularly in the areas of employment, criminal justice, housing and education. Many state laws aim to promote transparency in AI decision-making, prevent algorithmic discrimination and ensure fairness. For example, California’s AI bills, effective in 2025, will enforce stricter rules on transparency and privacy.[21] As a result, businesses must be proactive in aligning their AI deployment with state requirements, especially around data minimization and algorithmic accountability.

At the federal level, comprehensive artificial intelligence regulation remains elusive. Federal agencies such as the Federal Trade Commission and the Department of Justice are already pursuing AI-related enforcement under existing frameworks. The FTC has particularly targeted companies that misrepresent AI capabilities or fail to protect sensitive consumer data.[22] As the federal government focuses on utilizing existing regulatory tools, businesses should anticipate more enforcement actions from agencies that emphasize transparency and fairness in AI usage. Additionally, the emergence of federal guidance, such as that from the National Institute of Standards and Technology, will further shape best practices for AI system development.[23]

The intersection of AI and data privacy regulation is poised to be a significant challenge in 2025. As states pass laws protecting personal data, including sensitive information like genetic and location data, companies must navigate a complex patchwork of regulations.[24] These evolving state laws are complemented by federal efforts to protect data privacy, with the FTC increasingly scrutinizing the collection, sale and processing of sensitive data. As AI systems rely heavily on vast amounts of data, businesses must ensure that their data practices align with both federal and state privacy laws. Preparing for these dual pressures will require legal professionals to advise clients to include clauses that address future regulatory changes, particularly as governments and agencies begin to impose stricter standards on AI applications.[25]

AI’s rapid evolution demands swift action from legal professionals and policymakers to tackle issues of responsibility, liability and privacy. As artificial intelligence becomes more autonomous, traditional agency law must be revisited to clarify accountability for AI-driven actions. Legal frameworks need to adapt, ensuring clear responsibility for developers, users and companies – especially concerning copyright and data privacy. Lawyers must stay ahead of emerging state and federal regulations on transparency, consent and data protection, using proactive strategies to mitigate risks. Establishing clear liability frameworks will help clients navigate this shifting landscape, ensuring innovation continues within legal and ethical bounds.

Nicoletta V. Kolpakov has nearly a decade of experience working on legislative and strategic consulting through her involvement with national think tanks and congressional campaigns, and more recently in blockchain, artificial intelligence, and decentralized finance projects. She will be graduating in May with a J.D. from New York Law School.

Endnotes:

[1] A large language model (LLM) is a type of artificial intelligence trained on vast datasets to generate human-like text based on probabilistic predictions. It specializes in language processing and can be fine-tuned for specific applications, such as legal analysis or customer support. Generative AI (GenAI) is a broader category of AI systems that create original content, including text, images, audio and video. While all LLMs are GenAI, not all GenAI systems are LLMs, as some focus on visual or multimodal content generation.

[2] Restatement (Second) of Agency § 1 (1958); see Restatement (Third) of Agency § 1.01 (2006).

[3] Agency, Wex Legal Dictionary, Oct. 2024, https://www.law.cornell.edu/wex/agency.

[4] Restatement (Third) of Agency § 1.04 Cmt. E (Am. Law Inst. 2006).

[5] Moffatt v. Air Canada, 2024 BCCRT 149, where the airline was held liable for incorrect information provided by its chatbot about bereavement fares.

[6] Id.

[7] When an AI system causes harm in the course of its designated functions, its operator could be vicariously liable, similar to how an employer is responsible for an employee’s actions.

[8] Miriam Buiten, Alexandre de Streel, Martin Peitz, The Law and Economics of AI Liability, Computer Law & Security Review (2023), vol. 48, doi: 10.1016/j.clsr.2023.105794.

[9] AI hallucination refers to instances where an AI model generates false or misleading information that seems credible but is not based on real data. This happens because AI predicts responses based on patterns rather than verifying facts. For example, an AI might fabricate a court case or a source that doesn’t exist. These errors stem from incomplete training data, lack of real-world grounding or the AI’s tendency to generate statistically likely – but not necessarily accurate – answers. See What Are AI Hallucinations?, Google Cloud, https://cloud.google.com/discover/what-are-ai-hallucinations, last accessed Mar. 3, 2025.

[10] Counterman v. Colorado, 600 U.S. 66 (2023).

[11] Getty Images v. Stability AI, No. 1:23-cv-00135 (D. Del.).

[12] Id.

[13] Katherine Lee, A. Feder Cooper and James Grimmelmann, Talkin’ ‘Bout AI Generation: Copyright and the Generative-AI Supply Chain, July 27, 2023, forthcoming, J. of the Copyright Soc. 2024, doi: 10.1145/3614407.3643696.

[14] 17 U.S.C. § 504(c).

[15] Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913 (2005).

[16] In what is considered the first substantive copyright ruling on AI training, issued in February 2025, the court held that ROSS’s copying of Thomson Reuters’s content to build a competing AI-based legal research platform was not fair use under the U.S. Copyright Act. Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., No. 1:20-cv-00613 (D. Del. Feb. 11, 2025).

[17] Andersen v. Stability AI Ltd., 700 F. Supp. 3d 853 (N.D. Cal. 2023).

[18] Doe v. GitHub, Inc., No. 22-cv-06823-JST (N.D. Cal. Jan. 3, 2024).

[19] General Data Protection Regulation (EU) 2016/679, art. 83 (penalties up to €20 million or 4% of global annual turnover); California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.155.

[20] California Consumer Privacy Act of 2018, Cal. Civ. Code §§ 1798.100–1798.199; General Data Protection Regulation (EU) 2016/679, arts. 12–22.

[21] See California AI Bills (2024-2025), California Legislative Information, https://leginfo.legislature.ca.gov. Similarly, Illinois’s amendments to the Human Rights Act now limit AI usage in employment contexts, preventing discrimination based on protected characteristics. Tennessee’s legislation will require impact assessments of AI systems to evaluate potential risks.

[22] AI Companies: Uphold Your Privacy and Confidentiality Commitments, Federal Trade Comm’n, Jan. 9, 2024, https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/01/ai-companies-uphold-your-privacy-confidentiality-commitments. See also In the Matter of 1Health.io Inc. (Sept. 7, 2023), where a genetic testing company’s alleged retroactive privacy policy changes, failure to obtain consent, lack of adherence to its privacy policy and security failures resulted in FTC action. FTC Sends Refunds to Consumers Deceived by Genetic Testing Firm 1Health.io Over Data Deletion and Security Practices, Federal Trade Comm’n, Sept. 9, 2024, https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-sends-refunds-consumers-deceived-genetic-testing-firm-1healthio-over-data-deletion-security.

[23] AI Risk Management Framework, National Institute of Standards and Technology (NIST) (2024), https://www.nist.gov/itl/ai-risk-management-framework.

[24] Maryland’s Online Data Privacy Act, SB-0541 and HB-0567, set to take effect in October 2025, establishes stringent requirements for data minimization and consumer consent.

[25] E.g., if AI systems are used in highly regulated industries (such as health care or financial services), lawyers should ensure that the legal framework accounts for specific compliance requirements that may change over time.

Reprinted with permission from the New York State Bar Association ©2025.
