Artificial intelligence (AI) agents are expected to work autonomously or semi-autonomously, making decisions, executing complex tasks at lightning speed, and often delivering insights and efficiency that surpass human capabilities. They promise to revolutionize how work gets done—but only if they don’t get lost or confused along the way.
The challenge? The underlying large language models (LLMs) often work in an information vacuum, cut off from the resources and context needed to create real value. AI agents are designed to fill this void, provided they are equipped to behave predictably, efficiently, and repeatably.
Specifically, AI agents need seamless interoperability with enterprise systems, data repositories, and other agents. Anthropic’s Model Context Protocol (MCP), Google’s Agent2Agent (A2A), and other emerging protocols address this by standardizing how agents gather, interpret, and act on external information, helping to move deployments beyond one-off prototypes.
As we will explore, the impact of these new standards could mirror how standardized application programming interfaces (APIs) transformed the internet, allowing e-commerce and extended supply chains to take off. For agentic AI, protocols like MCP or A2A could do the same, paving the way for new business models, such as digital workers that use advanced reasoning to navigate and execute complex business processes.
While early agentic AI projects have focused on routine tasks—akin to “robotic process automation 2.0”—more powerful use cases with greater return on investment are rapidly coming into focus.
“Very few companies outside of pure AI companies had implemented agentic AI last year at scale, but that’s changing quickly,” said Bassel Haidar, Booz Allen Vice President of AI for Civil. “Today we’re witnessing a fundamental shift from passive query-response AI models to autonomous, goal-oriented agents capable of planning, reasoning, and taking actions on behalf of the user to complete complex tasks.”
Traditional business process systems often break when requirements evolve and fail when scaling adds complexity. When properly designed, agentic systems do not: they draw on context and past experience to respond effectively to unexpected situations, without explicit instructions.
Today these systems are being actively operationalized. Fortune 500 companies are moving beyond pilots and committing to ambitious use cases. Salesforce is memorably advertising its embrace of agents through Agentforce. Orchestration frameworks (e.g., LangChain/LangGraph) are emerging to break tasks down, while highly specialized agents power domain-specific work.
LLM performance is also leaping ahead. “Today’s models hallucinate dramatically less than models two years ago,” said Alison Smith, Booz Allen Director of Generative AI. “As that continues to progress and reasoning capabilities improve, we’re going to see agentic AI adoption further accelerate.”
As the flurry of new protocols shows, communication standards like MCP and A2A will speed up this process, bringing full-scale agentic AI closer.
Government faces a key challenge in using AI: Even the most advanced systems may lack critical context. Isolated from agency-specific information, an AI agent could generate reports that violate federal data requirements, configure software in prohibited ways, or introduce security vulnerabilities.
A universal translator would meet this challenge by enabling AI systems to easily access agency data and other context sources. Since its launch about six months ago, MCP has started to do just that. This two-way communications protocol, often described as a USB-C port for agentic AI, lets agents quickly discover what they need to know about their enterprise surroundings.
“MCP abstracts the ‘context layer,’ allowing applications to be model-agnostic and compatible across multiple providers with minimal rework. Additionally, it standardizes how context is represented, making it easier to reason about and reproduce the inputs to a model,” said Haidar. “MCP defines a structured format for tool specifications, empowering AI agents with the tools they need to tackle various enterprise tasks.”
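The structured tool format Haidar mentions is, in practice, a small JSON document: a name, a human-readable description, and a JSON Schema for the tool's arguments. The sketch below builds one such specification. The field names (`name`, `description`, `inputSchema`) follow the MCP specification, but the `lookup_beneficiary` tool itself is invented for illustration.

```python
import json

# A hypothetical MCP tool specification: a name, a description the
# model can read, and a JSON Schema constraining the arguments.
tool_spec = {
    "name": "lookup_beneficiary",
    "description": "Retrieve a beneficiary record by case ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "case_id": {"type": "string", "description": "Agency case identifier"},
        },
        "required": ["case_id"],
    },
}

# Serializing the spec is all a server needs to do to advertise the
# tool; any MCP-compatible client can then discover and invoke it.
print(json.dumps(tool_spec, indent=2))
```

Because the schema travels with the tool, any compliant client can validate arguments before calling, which is part of what makes the inputs to a model reproducible.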
With access to a wide range of relevant information, systems, and tools, agentic AI systems can now better understand their environment, reason effectively, and act on behalf of the user.
At its core, MCP establishes a common language between your AI systems and your existing tools—databases, search engines, enterprise systems, and more. Users interact with an application. The application connects to an AI model. The model communicates through MCP to access the tools and data it needs.
Anthropic’s MCP seeks to standardize the language LLMs use to communicate with external services. It does this through a three-part relationship: a client, the protocol itself, and a server.
Note: Any of these services can be hosted locally or externally; for example, you could use MCP to communicate with a Postgres database running on your laptop or to query information stored on a Confluence site.
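The client-protocol-server exchange described above rides on JSON-RPC 2.0 messages. The sketch below constructs two of them, the way an MCP client would: a discovery request and a tool invocation. The method names `tools/list` and `tools/call` come from the MCP specification; the `query_database` tool and its argument are hypothetical.

```python
import json

def make_request(method, params, req_id):
    """Build a JSON-RPC 2.0 request of the kind MCP clients send."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# First, the client asks the server what tools it offers...
list_tools = make_request("tools/list", {}, req_id=1)

# ...then invokes one by name, with arguments matching the tool's schema.
call_tool = make_request(
    "tools/call",
    {"name": "query_database", "arguments": {"sql": "SELECT 1"}},
    req_id=2,
)

print(json.dumps(list_tools))
print(json.dumps(call_tool))
```

In a real deployment these messages travel over a transport such as stdio (for local servers) or HTTP, but the message shape is the same either way.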
“Where things get interesting is with different types of MCP servers,” said Smith. “Tools allow you to search databases and send e-mails. Resources provide data for more complex tasks. And prompt templates make repeated interactions more efficient.”
Reusing these tools, resources, and prompts and making that information accessible and discoverable by the MCP clients gives the system access to an abundance of capabilities.
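The discoverability just described can be sketched with a toy in-memory "server" exposing the three primitive types Smith lists; a real server answers separate `tools/list`, `resources/list`, and `prompts/list` requests, and the example entries here are all invented for illustration.

```python
# A toy capability catalog covering MCP's three primitive types.
server_capabilities = {
    "tools": [{"name": "send_email", "description": "Send an e-mail"}],
    "resources": [{"uri": "postgres://reports/q3", "name": "Q3 report data"}],
    "prompts": [{"name": "weekly_summary", "description": "Summarize the week"}],
}

def discover(capabilities):
    """Flatten everything a client could reuse into one inventory."""
    inventory = []
    for kind, entries in capabilities.items():
        for entry in entries:
            inventory.append((kind, entry.get("name", entry.get("uri"))))
    return inventory

for kind, name in discover(server_capabilities):
    print(f"{kind}: {name}")
```

The point of the flattened inventory is reuse: once a capability is published this way, every MCP client in the enterprise can find it without custom integration work.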
This advance is more than technical convenience, as Haidar explains in his recent LinkedIn blog post, “Agent-to-Agent Protocol: Redefining Software’s Potential.” In software development, he notes, it directly enables a shift from a deterministic, rule-based approach—built on explicit instructions and predefined logic—to an objective-based specification.
This new approach allows AI to break down an objective into smaller tasks, assign these tasks to various AI agents with different skills, and autonomously determine the best way to complete the work. As a result, you no longer need to provide step-by-step instructions for the AI to follow, allowing for greater flexibility and efficiency in achieving complex objectives.
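The objective-based shift Haidar describes can be boiled down to a few lines: the caller states a goal rather than a procedure, and a planner decomposes it and routes subtasks to agents by skill. Everything below is hypothetical scaffolding, not a real framework; in a production system an LLM would do the planning and the "agents" would be full services.

```python
# Hypothetical skill registry: each agent is reduced to a callable.
AGENTS = {
    "research": lambda task: f"findings for {task!r}",
    "drafting": lambda task: f"draft for {task!r}",
}

def plan(objective):
    """Toy planner: in a real system an LLM would decompose the goal."""
    return [
        ("research", f"background on {objective}"),
        ("drafting", f"report on {objective}"),
    ]

def execute(objective):
    """Route each planned subtask to the agent with the matching skill."""
    return [AGENTS[skill](task) for skill, task in plan(objective)]

results = execute("benefits fraud trends")
print(results)
```

Note what the caller never supplies: an ordered list of instructions. The decomposition and routing live inside the planner, which is exactly where the flexibility comes from.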
MCP promises to dramatically accelerate the evolution from today’s often rigid, instruction-based AI implementations toward more flexible systems. But like any emerging technology, it comes with pros and cons.
The obvious strength is rigorous standardization. “MCP separates the development of host applications and AI systems from the tools these systems can use, allowing them to talk to each other,” said Ryan Swope, Booz Allen Senior Lead AI/Machine Learning Engineer. “This makes development much easier, as developers can now build applications or tools independently, knowing they'll work together through MCP without having to create custom connections for every new tool.”
Other advantages include reusability, modularity, and the ability to discover tools. However, the protocol also faces several challenges. First, it’s an evolving standard with incomplete documentation, potentially creating uncertainty for implementers.
Security concerns are another drawback. As Haidar explains, “Imagine an AI agent with an MCP server that can access your database, cloud storage, or hard disk. If the AI agent is hacked or misused, it has a master key to all your data, potentially leaking personal information and causing various damaging and nefarious acts.”
Giving autonomous agents access to external services increases the need for guardrails on both actions and permissions. HiddenLayer (a Booz Allen Ventures investment) recently assessed these risks and noted “MCP server developers should mind best practices when considering API security issues, such as the OWASP Top 10 API Security Risks” among other safeguards. Additional MCP limitations include dependency on model capabilities and potential scalability issues.
Google’s A2A standard, released in April 2025, differs from but complements MCP. A2A focuses specifically on communication between agents, rather than how agents connect to external tools and resources. It establishes a protocol for interactions between “client” agents that define tasks and “remote” agents that execute them.
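In A2A, a remote agent advertises itself with an "Agent Card," a JSON document a client agent can fetch to learn what skills it can delegate, and delegated tasks then move through a lifecycle of states. The sketch below is illustrative only: the card fields mirror the general shape of published A2A examples, but the benefits-triage agent and its endpoint are hypothetical.

```python
import json

# A hypothetical A2A Agent Card: how a "remote" agent describes itself
# to "client" agents that want to delegate work.
agent_card = {
    "name": "benefits-triage-agent",
    "description": "Routes benefits applications to the right reviewer.",
    "url": "https://agents.example.gov/benefits-triage",
    "skills": [
        {
            "id": "triage",
            "name": "Application triage",
            "description": "Classify and prioritize incoming applications",
        },
    ],
}

# A client agent delegating a task. The remote agent advances the
# task through states (e.g., submitted -> working -> completed).
task = {
    "id": "task-001",
    "state": "submitted",
    "message": "Triage application #4471 for eligibility review.",
}

print(json.dumps(agent_card, indent=2))
print(json.dumps(task))
```

The division of labor is the key design choice: MCP describes how one agent reaches tools and data, while the Agent Card and task lifecycle describe how agents reach each other.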
In modern enterprises, organizations maintain licenses for specialized systems—Salesforce, ServiceNow, Workday, and others—with data distributed across multiple platforms. “As each platform develops its own agent capabilities,” Smith said, “the challenge is enabling communication between distinct agentic systems to create a holistic picture or implement enterprise-wide use cases.”
A2A addresses this need by focusing on integration across existing enterprise agents. Unlike protocols centered on agent-tool interactions, A2A makes disparate agentic systems discoverable and consumable by one another.
Key considerations for government implementations include adapting MCP to accommodate complex information classification systems and legacy infrastructure not designed for dynamic information exchange.
“AI is becoming the new intelligence layer of the technology stack—transforming how software thinks, decides, and interacts,” said Haidar. “Just as APIs standardized communication between services, MCP standardizes how AI systems consume context, enabling consistent, portable, and reliable intelligence across tools, models, and platforms. For our clients to take advantage of that, we need to have this intelligence layer access their data to do something useful. MCP is a great way of accessing this data.”
Imagine a federal agency responsible for disbursing benefits. Increasingly, such agencies want to deploy agentic AI to answer routine inquiries, process benefits updates, and assist with application issues. However, each case involves varying eligibility rules, document types, user preferences, and fraud safeguards, making it difficult for AI agents to provide an accurate, personalized response.
With MCP implementation, a lot changes. The system can automatically reference current policy rules, beneficiary history, recent application filings, and potential security alerts to formulate a response, all while maintaining proper access controls and creating comprehensive audit trails.
Government agencies can begin adopting agentic AI by starting with narrow, high-value use cases—such as automating document reviews, triaging benefits applications, or generating reports—where transparency and auditability are essential. MCP provides a standardized way to structure the context that AI agents consume, making their reasoning transparent, traceable, and portable across providers.
“For meaningful acceleration, three critical components need to mature at the same time,” said Haidar. “Standardized protocols for agent communication, robust governance frameworks addressing security concerns, and clear policy guidance specific to autonomous systems operating in sensitive government contexts.”
Agencies can begin by having their technical teams set up ready-made MCP servers using approved MCP clients in their environments. This allows for initial testing in controlled settings. Technical staff can follow step-by-step guides to build their first MCP server and connect AI systems to their internal data.
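To make the "first MCP server" step concrete, here is a stdlib-only sketch of the request handling at the heart of one: a dispatcher that answers the `tools/list` and `tools/call` methods from the MCP specification. A production server would use an MCP SDK and a real transport (stdio or HTTP); the `get_time` tool is invented for illustration.

```python
import json
from datetime import datetime, timezone

# One registered tool: its spec plus the function that implements it.
TOOLS = {
    "get_time": {
        "description": "Return the current UTC time as ISO 8601.",
        "inputSchema": {"type": "object", "properties": {}},
        "handler": lambda args: datetime.now(timezone.utc).isoformat(),
    },
}

def handle(request):
    """Dispatch a JSON-RPC request to the matching MCP method."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/list":
        result = {"tools": [
            {"name": n, "description": t["description"],
             "inputSchema": t["inputSchema"]}
            for n, t in TOOLS.items()
        ]}
    elif method == "tools/call":
        tool = TOOLS[params["name"]]
        result = {"content": [
            {"type": "text", "text": tool["handler"](params.get("arguments", {}))}
        ]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(json.dumps(resp))
```

Even in this toy form, the shape is visible: registering a tool is a dictionary entry, and the dispatcher, not the AI model, enforces what can be called, which is where agency-specific guardrails would attach.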
“A sound approach focuses on creating agency-specific adaptations that maintain the protocol’s core benefits while accommodating unique security requirements and legacy system constraints,” said Swope.
Ultimately, what may be most noteworthy about MCP is the breadth of industry support it has garnered, with OpenAI, Google, and others joining Anthropic in supporting it. This widespread embrace may reflect these companies’ belief that interoperability challenges are a chokepoint to agentic AI’s growth. And early reviews of Google’s A2A have been positive as well, with Microsoft recently announcing support. This suggests these protocols have staying power.
This is fortunate, as government needs trustworthy protocols between the existing IT infrastructure and agentic AI capabilities to maintain security boundaries while enabling new systems to access relevant information without friction. While current implementations may have limits, the technology is evolving rapidly. Agencies that begin exploring and implementing protocols now will be better positioned to take advantage of more sophisticated agentic AI capabilities as they mature.