Agentic Exposure: Hijacking Web-Browsing AI Assistants
2025 is shaping up to be the year of the AI web agent - autonomous assistants powered by LLMs that browse the web, control applications, and carry out tasks with minimal human input. From experimental projects to production tools, these agents are now embedded in everything from productivity tools to enterprise workflows. But beneath the buzz lies a serious problem: security has not kept up. In this talk, we’ll dive into the emerging attack surface of AI web agents, exploring how they can be hijacked through indirect prompt injections, context leakage, insecure configurations, and more. Using real-world demos, we’ll show how a single compromised web page or clever string of text can redirect agents, exfiltrate data, or leak context from their original prompting, turning powerful automation into a security liability. We’ll examine key examples from tools like Browser-Use, showing where they go wrong and what attackers can exploit. We’ll also look briefly at the bigger picture: how agentic workflows and new inter-agent protocols (like MCP and A2A) create risks that traditional web defences aren’t prepared for. If you’re experimenting with AI agents, or planning to - this talk is your early warning. Learn how attackers are already probing these systems and how to protect yourself before your helpful agent becomes your biggest liability.