Social Engineering Has a New Victim: Your AI Agent
Your AI agent can be conned, and nobody's selling you a fix for that.
The technology industry has a known, unsolved security hole in agentic AI: you can sweet-talk it into doing something bad. But the real question isn't just whether the tools are safe; it's whether agentic AI deserves our trust at all.
The Allure of Agentic AI
Imagine having a personal assistant who never sleeps, never complains, and can handle dozens of tasks at once. Book the flight. Summarize the report. Send the follow-up email. Order the supplies.
That's the promise of agentic AI, and it's why every major tech company on the planet is pouring billions into it.
Unlike the chatbots most people have played with, agentic AI doesn't just answer questions. It acts.
These systems, called “agents,” can browse the web, access files, interact with other software, and make decisions on their own to complete a goal. String a few agents together and suddenly you have the tantalizing promise of an autonomous workforce operating at the speed of software.
It's genuinely impressive technology. The efficiency gains are real. The excitement in boardrooms is understandable.
But here's what companies, in their frenzied race to prove their value, often leave out of the announcements touting their AI's advances.
These AI agents need access to do their jobs.
The Risks
AI agents need widespread access to your emails, calendars, files, and company systems to complete their assigned tasks. And in most deployments today, that access comes with almost no controls verifying who (or what) the agent actually is or what it's allowed to do. As CSO Online reported, over 70% of organizations deploying agents right now have no identity controls in place.
That's a problem because AI agents can be manipulated — and not just through clever code.
A 2025 Wharton study called "Call Me a Jerk" found that the same social persuasion tricks that work on humans also work on AI. Claiming authority, building rapport, making a small request before a larger one: these tactics more than doubled the rate at which AI systems did things they were built to refuse. Get an AI to agree to something small first, and compliance with a bigger ask shoots to nearly 100%.
Attackers have noticed. A rogue agent that knows how to "persuade" a legitimate one can pass along harmful instructions without triggering a single alarm. It behaves well, earns trust, delivers its payload quietly, then gets caught on purpose, leaving security teams thinking the problem is solved while the real damage spreads.
None of today's platforms can reliably detect this.
OpenClaw is a good example of what happens when agentic AI moves faster than common sense. It's an AI assistant that can be given broad access to a person's digital life (emails, work files, banking apps, calendars) and left to act on its own.
Trend Micro researchers found that OpenClaw allows users to hand over that access with no security checks enforced at any point. If an attacker gets in through a malicious website the agent visits, or a rigged file it opens, they gain access to everything the agent can use: work documents, saved passwords, financial accounts, and private communications. The core risk isn't unique to OpenClaw; it's a problem with agentic AI broadly. OpenClaw just makes it easier to stumble into.
This isn't an argument to ignore the technology entirely.
But it is an argument to slow down.
The pressure to deploy agentic AI is coming from boardrooms, from competitors, and from vendors with a lot to gain. They will not be the ones cleaning up the mess when something goes wrong.
What You Can Do
Before giving an AI agent access to anything that matters, answer some hard questions:
· What can it actually do?
· What does it have access to?
· What happens when it gets fooled?
Over-hyped, underbaked technologies rarely make a sound foundation for strategic business decisions.
No tool, agentic or otherwise, can replace the power of human critical thinking. AI can be socially engineered in the same ways humans can.
The difference is that humans can pause, question, and push back. Machines do what they're told, and apparently, this is especially true when they are told nicely.