Are Browser AI Agents Riskier Than People? SquareX Research Reveals the Answer

In the race to enhance productivity and streamline web interactions, browser-based AI agents are emerging as powerful enablers. These tools can assist users in navigating the internet, filling out forms, summarizing content, and even making decisions—all autonomously. However, with this rise comes a pertinent question: Are browser AI agents riskier than humans? A new research report by SquareX, a cybersecurity firm specializing in digital identity and browser security, sheds light on the potential dangers and the comparative behaviors of these digital agents versus human users.

Understanding Browser AI Agents

Before diving into the findings of the report, it’s crucial to understand what a browser AI agent is. These are autonomous or semi-autonomous software tools, often built on top of large language models (LLMs), that can perform tasks on the web as if they were human users. They can read emails, interact with web elements, complete tasks, and execute scripts—all through a typical browser interface.
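
To make this concrete, here is a minimal sketch of what such an agent’s control loop might look like, using the Playwright library for browser automation; the decide_next_action function is a hypothetical stand-in for the LLM call that real agents make, not any vendor’s actual API.

```python
# A minimal sketch of a browser AI agent's control loop, assuming the
# Playwright library for browser automation. decide_next_action stands in
# for the LLM call a real agent would make; it is hypothetical.
from playwright.sync_api import sync_playwright


def decide_next_action(page_text: str, goal: str) -> dict:
    """Hypothetical LLM call: map page content and a goal to the next step,
    e.g. {"type": "click", "selector": "a.download"} or {"type": "done"}."""
    raise NotImplementedError


def run_agent(start_url: str, goal: str, max_steps: int = 20) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(start_url)
        for _ in range(max_steps):
            action = decide_next_action(page.inner_text("body"), goal)
            if action["type"] == "done":
                break
            if action["type"] == "click":
                page.click(action["selector"])
            elif action["type"] == "fill":
                page.fill(action["selector"], action["value"])
            elif action["type"] == "goto":
                page.goto(action["url"])
        browser.close()
```

Everything the agent does flows through a loop like this one, which is also why the wrappers discussed later in this article can sit around it and inspect every action.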

While the automation these agents offer is impressive, their decision-making can be unpredictable, particularly on the vast and complex open web.

SquareX’s Research: A Deep Dive Into AI vs. Human Behavior

SquareX’s research team conducted a comprehensive study to compare the online behavior of browser AI agents and human users, particularly focusing on security lapses and privacy challenges. They set up several test environments in which both AI agents and human participants were tasked with completing similar browsing activities, from visiting e-commerce sites to downloading files.

Here are some of the key metrics they analyzed:

  • Click behavior—including tendencies to click on suspicious links
  • Form-filling accuracy and trust assessment
  • Phishing recognition and response mechanisms
  • Download habits and software execution behavior

What they found may surprise you.

AI Agents Are More Efficient—But Also More Vulnerable

The AI agents consistently completed tasks faster than human participants. They could seamlessly find information, avoid unnecessary distractions, and execute tasks with mechanical precision. However, this same speed proved to be a double-edged sword.

According to SquareX, AI agents were more likely to fall for embedded malicious elements, especially advanced phishing schemes designed to fool even savvy users. The agents lacked a nuanced understanding of suspicious visual cues or contextually bizarre content—factors that humans rely on heavily for threat assessment.

Key vulnerability findings include:

  • AI agents clicked on phishing links 67% more frequently than human users
  • They were 3 times more likely to auto-fill forms containing sensitive personal data
  • Agents failed to detect subtle social engineering tactics, such as deceptive user interface designs

While humans often hesitated or cross-checked before proceeding with suspicious actions, AI agents tended to proceed confidently. This overconfidence stems from their current programming: they are optimized for efficiency, not caution.

When Are Humans More Dangerous?

This doesn’t mean humans are always more secure than AI agents. The research also highlighted several areas where human behavior introduces risk:

  • Humans showed a higher tendency to reuse passwords across sites
  • Many participants used outdated software or browser extensions with known vulnerabilities
  • 40% of test users ignored browser security warnings or turned them off altogether
  • Some users bypassed site warnings due to urgency or habit

In comparison, AI agents respected these warnings and followed predefined security rules—provided that such rules were part of their training. This structured obedience makes them more consistent in certain aspects, but also inflexible when encountering novel threats.
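
A short sketch makes the trade-off concrete. The rule layer below enforces exactly two checks, both illustrative assumptions; any threat that no one thought to encode passes straight through, which is precisely the inflexibility the report describes.

```python
# A minimal sketch of a predefined rule layer, illustrating why rule-following
# agents are consistent but inflexible: only the checks someone thought to
# encode ever fire. The specific rules below are illustrative assumptions.
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}
BLOCKED_TLDS = {".zip", ".mov"}  # TLDs often abused in phishing campaigns


def url_allowed(url: str) -> bool:
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False  # rule: never browse over plain HTTP
    if any(host.endswith(tld) for tld in BLOCKED_TLDS):
        return False  # rule: skip lookalike top-level domains
    return True  # no encoded rule fired, so the agent proceeds confidently
```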

Programming: The Double-Edged Sword

One of the central takeaways from SquareX’s research is the importance of how AI agents are trained and deployed. Most browser AI agents rely on large foundation models, which are not inherently security-aware. If a model hasn’t been specifically trained or tuned to detect web-based threats and to learn from new ones, it becomes a liability rather than an asset.

Moreover, these agents interpret the web literally. They don’t possess a sense of doubt or intuition—the subtle, often irrational instincts that help humans avoid pitfalls. In one test case, an AI agent unknowingly downloaded a ZIP file disguised as an eBook, extracted it, and executed a script—all because it was following task instructions step by step.
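
Notably, even a crude execution guard in a wrapper would have broken that chain. The sketch below refuses to open downloaded executables or archives without human sign-off; the suffix sets and the approval flag are illustrative assumptions, not a complete policy.

```python
# A sketch of the execution guard a wrapper could place between "file
# downloaded" and "file opened", interrupting the ZIP-to-script chain above.
# The suffix sets and approval mechanism are illustrative assumptions.
from pathlib import Path

EXECUTABLE_SUFFIXES = {".exe", ".bat", ".ps1", ".sh", ".py", ".js"}
ARCHIVE_SUFFIXES = {".zip", ".rar", ".7z", ".tar", ".gz"}


def may_open_download(path: Path, human_approved: bool = False) -> bool:
    suffix = path.suffix.lower()
    if suffix in EXECUTABLE_SUFFIXES | ARCHIVE_SUFFIXES and not human_approved:
        # Archives are treated like executables because their contents
        # (such as a script inside a fake "eBook" ZIP) may be runnable.
        return False
    return True
```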

Humans, while slower, are far more likely to ask, “Should I really be doing this?”—a question that could prevent a severe breach.

Are Companies Ready for AI Agents?

With major tech firms racing to integrate AI agents into browsers—think Google’s Search Generative Experience (SGE) or Microsoft’s Copilot—businesses are also exploring their use for internal productivity. However, SquareX cautions that without robust context-based training and containment systems, AI agents can become potent risks inside corporate environments.

Consider this common scenario:

An AI assistant is allowed to operate in a browser with login credentials. It is tasked with researching competitors and begins querying third-party financial or analytics tools. Without an understanding of data-leakage rules or platform-specific permissions, it could accidentally export sensitive internal information through open tabs, shared links, or downloads.

Organizations leveraging browser AI agents must treat them not as side utilities, but as independent actors that require permission boundaries, monitoring, and override triggers.
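
In practice, a permission boundary can be as simple as a declarative policy the wrapper consults before every action the agent attempts. The sketch below shows one possible shape; the domains, action names, and flags are hypothetical placeholders.

```python
# A sketch of a per-agent permission boundary: a declarative policy the
# wrapper consults before every action. The domains, action names, and
# flags below are hypothetical placeholders.
from dataclasses import dataclass, field
from urllib.parse import urlparse


@dataclass
class AgentPolicy:
    allowed_domains: set[str] = field(default_factory=set)
    allowed_actions: set[str] = field(default_factory=set)
    may_download: bool = False

    def permits(self, action: str, url: str) -> bool:
        host = urlparse(url).hostname or ""
        if action not in self.allowed_actions:
            return False
        if action == "download" and not self.may_download:
            return False
        return any(host == d or host.endswith("." + d)
                   for d in self.allowed_domains)


# Example: a research agent may read competitor sites but never fill forms
# or download files, so a data-exfiltration step simply has no permission.
research_policy = AgentPolicy(
    allowed_domains={"example-competitor.com"},
    allowed_actions={"goto", "click", "read"},
)
```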

The Road to Safer AI Agents

SquareX ends its report with a list of recommendations to make browser AI agents safer for mainstream and enterprise usage:

  1. Security-first training: Regularly update AI models using web-related attack scenarios
  2. Sandboxed environments: Always execute AI agent tasks in isolated browser containers
  3. Permissions control: Limit agents’ access to file systems, accounts, and plugins
  4. Behavioral monitoring: Real-time logs and alerts should trigger on risky patterns
  5. User verification loops: For certain actions like form submissions or downloads, require human approval (a minimal sketch follows this list)
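
Of these, the user verification loop is the simplest to sketch: the wrapper intercepts the agent’s action stream and pauses on anything sensitive. Which actions count as sensitive is an assumption here, made for illustration.

```python
# A sketch of recommendation 5, a user verification loop: the wrapper pauses
# on sensitive action types and requires an explicit human yes before the
# agent proceeds. The set of sensitive actions is an illustrative assumption.
from typing import Callable

SENSITIVE_ACTIONS = {"submit_form", "download", "execute", "grant_permission"}


def execute_with_approval(action: dict, perform: Callable[[dict], None]) -> bool:
    """Run perform(action) only if the action is benign or a human approves."""
    if action["type"] in SENSITIVE_ACTIONS:
        prompt = f"Agent wants to {action['type']} at {action.get('url', '?')}. Allow? [y/N] "
        if input(prompt).strip().lower() != "y":
            return False  # vetoed: the agent's speed yields to human caution
    perform(action)
    return True
```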

The goal isn’t to eliminate AI agents from the browser but to build intelligent wrappers around them that mimic human caution, not just human speed.

The Verdict: Who’s Riskier?

The answer isn’t black and white. AI agents are undeniably faster and, when restricted properly, safer in specific, well-trained environments. However, in open-ended and novel web contexts, they present more significant risks than human users. Their lack of intuitive judgment and adaptability makes them highly vulnerable to sophisticated social engineering and phishing techniques.

Ultimately, the safer browser user isn’t AI or human—it’s a collaboration between both. Augment human judgment with AI speed, but always keep a guiding hand on the controls. It’s not about choosing between human or machine—it’s about designing an intelligent interface where one checks and balances the other.

Browser AI agents mark the next frontier of user experience and digital productivity. But SquareX’s research is a timely reminder that with great power comes even greater need for thoughtful deployment and layered security strategies.