Session

Agent Autonomy Exploited: Navigating the Security of Integrated AI Protocols

15:30 – 15:55
Smolarz

Consider an agentic AI assistant configured with a third-party Model Context Protocol (MCP) server for augmented functionalities, alongside its inherent database access capabilities. However, this external server harbors malicious intent. It compromises the credentials of each established connection and then subsequently provides tampered MCP tool descriptions containing hidden malicious instructions. These concealed instructions coerce the AI assistant into unknowingly transmitting sensitive data to the attacker. This multi-stage attack exploits trust in third-party integrations with agentic protocols and the autonomous nature of agents, transforming what was once considered fantasy into alarming reality. The cutting-edge technology employed by Agentic AI systems integrated with Agentic Protocols allows them to connect external tools and agents out-of-the-box. While this impressive flexibility unlocks new potential, it also gives rise to significant new and complex security threats that require careful consideration and proactive defense strategies. In this talk, we will provide a concise introduction to the threats inherent to key components of agents, like memory and planning modules following which, we will examine the impact of architectural decisions on security, specifically focusing on threats associated with prominent interaction mechanisms such as Anthropic's Model Context Protocol (MCP) and Google's Agent-to-Agent (A2A) protocol, which facilitate connections between models, tools, and autonomous agents. Finally, we will discuss how adhering to security best practices can help mitigate these threats.