
A2A to the Rescue

💡
This was originally published on my LinkedIn account and has been recently updated.

Be sure to read this great article in the MIT Technology Review summarizing the arc of innovation from GenAI chatbots and vibe coding to context engineering. After my own experimentation with spec-driven development and agents.md, I was quickly pining for a better approach to context management: one where I could deterministically tune, package, share, and secure context.

The article presents the case for building a team of agents with the Agent2Agent (A2A) protocol. From a software engineering perspective it adds a layer of complexity, but it's an interesting concept to explore. If you seek a quick deep dive into A2A, check out Hugging Face's blog, then hop over to a2aprotocol.ai for a complete technical drenching on the subject.

Update 12/15

After a month of catching up on agentic AI news, the commerce implications of implementing A2A remain very interesting to me. The Dec 10th, 2025 issue of The Economist contains an article titled "The next version of the web will be built for machines, not humans". It discusses how Amazon sued Perplexity to prevent its Comet browser from initiating purchases, and how ChatGPT's Instant Checkout feature relies on pre-arranged agreements with merchants. The vision is for A2A to spawn an Internet where agents talk with agents directly, replacing today's consumer-to-website usage pattern.

How likely is this vision for the Internet to be realized? Let's consider how machine-to-machine (M2M) communication infrastructure evolved for the Internet of Things (IoT). The IoT revolution expanded the adoption of IPv6, introduced new UDP-based protocols (e.g. CoAP, MQTT-SN, and QUIC), and gave us WebSockets. The IoT technical areas that struggled with adoption were security, certificate management, and service discovery.

I thought universal service discovery would be widely adopted through DNS Service Discovery (DNS-SD) and its various extensions. This DN.org blog discusses the current state of DNS-based service discovery. I eventually concluded that businesses wanted security but were reluctant to invest in the provisioning, administration, and additional software infrastructure it required. Having an IoT security plan was sufficient for most. Given that the data shared between agents is likely to be of higher value than IoT data, will that be enough to compel companies to adopt standards-based service discovery and security?
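
To make the DNS-SD idea concrete, here is a minimal sketch of the naming scheme RFC 6763 defines: clients browse a service type with a PTR query, then resolve each discovered instance via SRV and TXT records. The `_a2a._tcp` service type is my own hypothetical, used only to illustrate how agent discovery could map onto this scheme; no such type is registered today.

```python
# Sketch of DNS-SD (RFC 6763) naming, applied hypothetically to agents.
# The "_a2a._tcp" service type is illustrative, not a registered type.

def browse_name(service_type: str, domain: str) -> str:
    """PTR query name used to enumerate instances of a service type."""
    return f"{service_type}.{domain}"

def instance_name(instance: str, service_type: str, domain: str) -> str:
    """SRV/TXT owner name for one advertised service instance."""
    return f"{instance}.{service_type}.{domain}"

# A client browsing for agents would issue a PTR query for:
ptr = browse_name("_a2a._tcp", "example.com")
print(ptr)  # _a2a._tcp.example.com

# Each PTR answer names an instance; SRV then yields host/port, TXT metadata:
srv = instance_name("billing-agent", "_a2a._tcp", "example.com")
print(srv)  # billing-agent._a2a._tcp.example.com
```

The provisioning burden I mention above is exactly this layer: someone has to publish and maintain these records, plus the certificates that make them trustworthy.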

What is unclear to me is the need and importance of sharing agent context (i.e., knowledge) between agents. Is there a need to share access to a GraphRAG database like Zep, or subsections of the database, to pass along the A2A chain of agents?

How complex can these agents become? Building a scalable agent requires, at a minimum, implementing a semantic router and incorporating MCP tools. Regression testing the agent and maintaining observability become priorities as the agent scales. All of this adds complexity.
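
To show what a semantic router actually does, here is a dependency-free sketch: score an incoming request against a few route descriptions and dispatch to the best match. A production router would compare embedding vectors from a model; plain bag-of-words cosine similarity stands in here so the example stays self-contained. The route names and keyword lists are illustrative, not from any real system.

```python
import math
from collections import Counter

# Hypothetical routes: each maps a name to a rough textual description.
ROUTES = {
    "billing": "invoice payment refund charge subscription billing",
    "support": "error bug crash broken help troubleshoot support",
    "sales":   "pricing demo quote purchase buy plan upgrade sales",
}

def _vec(text: str) -> Counter:
    """Bag-of-words term counts (a stand-in for a real embedding)."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(query: str) -> str:
    """Return the route whose description is most similar to the query."""
    qv = _vec(query)
    return max(ROUTES, key=lambda name: _cosine(qv, _vec(ROUTES[name])))

print(route("I need a refund for a duplicate charge"))  # billing
```

Even this toy version hints at the testing burden: every new route or reworded description can silently shift where existing queries land, which is why regression testing matters as the agent grows.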

My complexity question is addressed in the book Agentic Design Patterns by Antonio Gulli, Director of the Engineering Office of the CTO at Google. A recent VentureBeat article quotes Antonio: "the solution to the 'trough of disillusionment' is not a smarter model, but better architecture" ...I bought the book!

I want to experiment with A2A to help color in these technical unknowns. At least for now, maybe we should think of A2A agents as a new type of reverse proxy. The technical challenges of maintaining and configuring reverse proxies are a proxy for estimating the engineering effort required to deploy A2A agents.

Update 2/9/2026

What I’ve learned since this article was first posted is that AI tool providers prioritize easy enablement. With a background in networking, I naturally prioritize revision management, scalability, repeatability, and security over easy installation, but that isn’t ACP’s focus. ACP is designed to let an agent easily interact with multiple subprocesses.

MCP servers have rolled out under the same easy-enablement premise. The default transport for most MCP servers is stdio. This convenient transport lets an agent launch multiple MCP servers as subprocesses and talk to each over its stdin/stdout.
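
Concretely, the stdio transport means the host writes newline-delimited JSON-RPC 2.0 messages to the subprocess's stdin and reads responses from its stdout. The sketch below builds an `initialize` request in that shape; the exact parameter fields vary by MCP spec revision, so treat the payload as illustrative rather than normative.

```python
import json

def jsonrpc_request(req_id: int, method: str, params: dict) -> str:
    """Serialize one JSON-RPC 2.0 request, newline-terminated for stdio."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return json.dumps(msg) + "\n"

# Illustrative initialize handshake; field values are assumptions, not
# copied from any particular MCP server.
line = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2025-03-26",
    "clientInfo": {"name": "demo-agent", "version": "0.1"},
    "capabilities": {},
})

# In a real host this line would be written to the subprocess's stdin:
#   proc = subprocess.Popen([...], stdin=PIPE, stdout=PIPE)
#   proc.stdin.write(line.encode())
print(json.loads(line)["method"])  # initialize
```

The appeal is obvious: no ports, no TLS, no service discovery; the cost is that everything lives and dies with the parent process on one machine.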

The A2A protocol, in comparison, is network focused. It supports JSON-RPC over HTTPS out of the box and, optionally, gRPC. This is why I am excited about Docker's cagent, which is currently tagged as experimental. cagent supports integration with ACP, A2A, and MCP out of the box.
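
To illustrate the network-first contrast with stdio, here is a sketch of an A2A client request. As I read the spec, a client discovers an agent via its Agent Card (served under the agent's `/.well-known/` path) and then POSTs JSON-RPC 2.0 calls such as `message/send` to the agent's HTTPS endpoint. The field names below follow my current understanding of the spec and may drift as the protocol evolves; the endpoint URL is hypothetical.

```python
import json
import uuid

def a2a_send_message(text: str) -> dict:
    """Build a JSON-RPC 2.0 "message/send" request body (per my reading
    of the A2A spec; field names may change as the protocol evolves)."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }

body = a2a_send_message("What is the status of order 1234?")
print(body["method"])  # message/send

# A real client would POST this over HTTPS, e.g. (hypothetical endpoint):
#   requests.post("https://agent.example.com/a2a", json=body)
```

Everything stdio gives you for free (identity, transport, lifecycle) becomes explicit here: TLS, endpoints, and discovery all have to be provisioned, which loops back to my earlier point about DNS-SD and IoT security.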

My design preference is to explore Docker's cagent and MCP Gateway; together they may offer the scalability, manageability, and distribution tools I seek. The examples Docker has posted for cagent imply built-in RAG support with multiple retrieval strategies.