Insight
May 15, 2025

The Rise of Agentic AI: From Conversation to Action

As AI systems evolve from generating responses to autonomously executing tasks, organizations face complex legal challenges that will reshape the fundamental relationship between humans and machines
The artificial intelligence (AI) landscape is evolving at a dizzying pace. Just as organizations were adapting to generative AI and its capabilities for creating content, a new paradigm has emerged. Agentic AI — systems that don’t just generate responses but take actions on behalf of users — is rapidly becoming the focal point of enterprise AI strategies, bringing powerful capabilities and complex challenges. And before we fully grasped the agentic inflection point, AI agents began interacting with each other in multi-agent environments.

The Agentic Revolution

The shift began in late January 2025, when OpenAI launched Operator. Rather than simply responding to queries with text, images, or code, AI systems such as Operator, Anthropic’s “Computer Use,” and Perplexity’s “Buy with Pro” were now making decisions and executing tasks autonomously. What followed was a cascade of corporate adoptions, with the world’s largest professional services firms announcing major initiatives within months.

EY unveiled EY.ai Agentic Platform on the same day that Deloitte launched Zora AI, while PwC quickly followed with its own agentic offering. This wasn’t mere coincidence but evidence of a fundamental shift in how enterprises view AI’s role — from assistant to actor.

Remember the hype around generative AI? That was five minutes ago in human terms but eons ago in an AI context. The rapid technological evolution has left even recent regulatory efforts struggling to keep pace.

From Co-Pilot to Pilot

Co-pilot is out; pilot is in. Where previous AI implementations were designed to augment human decision-making, agentic systems are increasingly taking the controls and even interacting agent-to-agent, circumventing guardrails and driving humans away from decision-making. This evolution represents both opportunity and risk across multiple domains.

Cybersecurity has emerged as an early proving ground. Microsoft’s recent integration of agentic capabilities into its security tools exemplifies this trend. The domain is particularly well-suited for autonomous AI systems — cybersecurity requires analyzing vast amounts of machine data instantaneously, a task in which human cognition becomes a bottleneck rather than an asset.

Legal and Ethical Implications

The transition from generative to agentic AI amplifies existing challenges while introducing entirely new ones. Think generative AI issues, but with higher stakes.

Problems with accuracy and hallucination become considerably more concerning when AI doesn’t just provide incorrect information but acts on it. Access to personal information expands as agents need comprehensive visibility to optimize decisions. Automated decision-making — a heavily regulated domain in many jurisdictions — becomes the core function rather than a peripheral concern. And human oversight, often required by emerging regulations, becomes more difficult to implement meaningfully.

Consider a seemingly simple scenario: You ask your AI assistant to order a coffee maker to replace your old one. Without specific instructions, the AI prioritizes “best-rated” over “budget-friendly” and orders a $1,200 high-end espresso machine instead of the basic drip coffee maker you had in mind. This raises several critical questions:

  • Are contracts entered into by the AI agent legally enforceable?
  • Who bears responsibility for this error — the user, the AI developer, or the merchant?
  • What level of safeguards should AI developers implement to prevent such misalignments?
  • Can existing legal frameworks adequately address these novel situations?

These questions cast doubt on the validity of contracts entered into by AI agents. What happens when an agent makes commitments misaligned with a user’s preferences or — perhaps more problematically — advances what it believes those preferences should be?

Legal Frameworks for Agentic AI

Interestingly, existing legal frameworks may provide guidance. The Uniform Electronic Transactions Act (UETA), adopted across most US states, defines an “electronic agent” as “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.”

Under UETA, contracts can be formed by the interaction of electronic agents without human review, and actions taken by these agents are attributed to the person who authorized them. This suggests that if you configure an AI agent to make purchases on your behalf, you would generally be bound by those transactions — a principle that could extend to agentic AI systems.

However, this framework was developed for relatively simple automated systems, not sophisticated AI agents that make complex judgment calls based on perceived preferences. The gap between existing law and emerging technology creates significant uncertainty. Similarly, the European Union AI Act, whose drafting was temporarily halted to incorporate provisions for generative AI, now references “general-purpose AI” 227 times — but makes no mention of agentic systems. This highlights a fundamental challenge in technology regulation: prescriptive legislation is perpetually chasing technological evolution.

Key Legal Risks in AI Agent Development

Organizations developing or deploying agentic AI face several critical legal challenges:

  1. Transparency and Explainability: As AI agents make decisions with real-world consequences, transparency requirements become more stringent. The EU AI Act and California AI Transparency Act impose obligations that may be difficult to meet with complex systems. Organizations will need to implement comprehensive AI governance frameworks, including thorough documentation and auditing of decision-making processes.
  2. Bias and Discrimination: When AI agents move from recommending to acting, biased outcomes become more than theoretical concerns. Developers must validate training data for biases, conduct regular audits, and implement bias correction techniques such as reweighting and adversarial debiasing (a minimal reweighting sketch appears after this list). These safeguards must align with multiple regulatory frameworks, from fairness principles to civil rights laws and sector-specific regulations.
  3. Privacy and Data Security: Agentic AI’s expanded access to personal information raises complex privacy questions. How do existing frameworks like the General Data Protection Regulation and the California Consumer Privacy Act apply to processing activities by AI agents? Organizations will need to implement robust data minimization, anonymization, and pseudonymization practices (a pseudonymization sketch also follows this list), while conducting Data Protection Impact Assessments for higher-risk applications. Moreover, agentic AI expands the cybersecurity risk surface, since AI can now cause harm not only by creating content but also by taking actions in the real world.
  4. Accountability and Agency: Perhaps the most challenging question is accountability: Who bears responsibility when AI agents make harmful decisions? Product liability, negligence, breach of contract — multiple legal frameworks could apply when things go wrong, creating a complex liability landscape for developers, deployers, and users.
  5. Agent-Agent Interactions: Multi-agent use cases are becoming more common. These interactions allow agents to learn from external sources, enabling them to circumvent guardrails typically built into training datasets. They are more dynamic and complex, and they drive humans further away from decision-making.
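
To make item 2 concrete, here is a minimal, hypothetical sketch of inverse-frequency reweighting, one of the bias correction techniques mentioned above. The group labels and weighting scheme are illustrative assumptions, not a prescribed compliance method:

```python
# Minimal sketch of inverse-frequency reweighting (illustrative only).
# Assumption: each training sample carries a demographic group label.
from collections import Counter

def inverse_frequency_weights(groups: list[str]) -> list[float]:
    """Weight each sample inversely to its group's frequency, so that
    underrepresented groups contribute equally in aggregate during training."""
    counts = Counter(groups)
    total, n_groups = len(groups), len(counts)
    return [total / (n_groups * counts[g]) for g in groups]

# Example: group "B" is underrepresented relative to group "A".
weights = inverse_frequency_weights(["A", "A", "A", "B"])
print(weights)  # [0.666..., 0.666..., 0.666..., 2.0] -> passed as sample weights to a trainer
```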
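
Similarly, for item 3, keyed pseudonymization can let an agent link records without handling raw identifiers. This is a simplified sketch assuming an HMAC-SHA256 construction with a secret key held outside the agent's reach; real deployments would also need key management and re-identification controls:

```python
# Simplified pseudonymization sketch (illustrative only, not a compliance recipe).
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: stored in a secrets vault

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g., an email address) with a stable
    pseudonym so the agent can link records without seeing the raw value."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The agent sees only the pseudonym; re-identification requires the key.
print(pseudonymize("jane.doe@example.com")[:16])
```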

The Road Ahead: Safeguards and Governance

As organizations deploy agentic systems, they’ll need to develop frameworks for appropriate oversight, clarify legal responsibilities, and establish boundaries for autonomous action. Users will need transparent information about what actions AI agents can take on their behalf and what data they can access. And developers will need to implement cybersecurity measures to prevent cascading failures across the layers of multi-agent ecosystems.

Practical design considerations for agentic systems should include the following (illustrated in the sketch after this list):

  • Confirmation of Actions: Implementing verification steps for critical transactions, particularly those exceeding certain thresholds 
  • Error Detection: Creating processes that automatically alert users to anomalies and allow them to establish their own safeguards 
  • Error Correction: Providing straightforward mechanisms to reverse or modify agent actions 
  • Human Intervention: Enabling manual review for higher-risk transactions
  • Auditability: Maintaining comprehensive logs of all user interactions, responses, and related metadata
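
By way of illustration, the sketch below shows one way these safeguards could be combined in an agent's action pipeline. Everything here (the ActionGuard class, the $500 confirmation threshold, the AgentAction fields) is a hypothetical assumption for demonstration, not an established pattern or API:

```python
# Hypothetical sketch combining the safeguards above; all names and the
# $500 threshold are illustrative assumptions, not an established API.
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.audit")

@dataclass
class AgentAction:
    description: str
    amount_usd: float
    reversible: bool = True  # Error Correction: reversible actions are lower risk

@dataclass
class ActionGuard:
    confirmation_threshold: float = 500.0  # Confirmation of Actions
    audit_trail: list = field(default_factory=list)  # Auditability

    def execute(self, action: AgentAction, user_confirms) -> bool:
        # Auditability: record every proposed action with a timestamp.
        record = {
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action.description,
            "amount_usd": action.amount_usd,
        }
        # Human Intervention: route high-value or irreversible actions to the user.
        needs_review = (
            action.amount_usd > self.confirmation_threshold or not action.reversible
        )
        if needs_review and not user_confirms(action):
            record["outcome"] = "blocked_pending_review"  # Error Detection: surfaced to user
            self.audit_trail.append(record)
            log.info("Blocked pending review: %s", action.description)
            return False
        record["outcome"] = "executed"
        self.audit_trail.append(record)
        log.info("Executed: %s", action.description)
        return True

# The $1,200 espresso machine from the earlier scenario would be held for review:
guard = ActionGuard()
purchase = AgentAction("Order espresso machine", amount_usd=1200.0)
guard.execute(purchase, user_confirms=lambda a: False)  # user declines; action blocked
```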

For business leaders, the transition to agentic AI represents both opportunity and risk. The potential productivity gains are substantial — having AI systems that can independently execute complex workflows could transform operations across sectors. But the governance challenges are equally significant.

The coming months will likely see organizations experimenting with agentic systems in domains with clearer regulatory frameworks or lower-risk profiles before expanding to more sensitive applications. As with previous technological shifts, early adopters will help define best practices that others can follow.

What’s clear is that agentic AI isn’t just another incremental advance — it represents a fundamental rethinking of the relationship between humans and machines. We’re moving from a world in which AI responds to our commands to one in which it anticipates our needs and acts accordingly. Managing that transition effectively will be one of the defining challenges for technology governance in the coming years.

 

This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.
