British mobile operator O2 deployed an AI persona named Daisy to waste scammers' time. The bot engages fraudsters in endless, warm conversations about trivial topics until they hang up. Where a traditional blocker simply rejects the call, Daisy keeps the attacker on the line by mimicking the rhythm of a real conversation. This shift moves design from passive defense to active psychological friction. Most teams build AI to clear the path for users. O2 built AI to clog the path for bad actors.
Designing for Friction Instead of Flow
The standard playbook for user experience demands removing every barrier. Teams measure success by how quickly a visitor reaches a checkout page or how few clicks it takes to file a support ticket. That philosophy breaks when the goal is to exhaust a target. Daisy succeeds because it ignores the principle of efficiency. It introduces artificial delays and circular logic to drain the scammer's patience.
This approach requires a new definition of utility. The feature is not useful because it helps the user complete a task faster. It is useful because it protects the user by wasting the attacker's resources. A product manager asking why the conversation is so long misses the point. The slowness is the mechanism.
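To make the mechanism concrete, here is a minimal Python sketch of a time-wasting responder: deliberate pauses and replies that loop without ever resolving. The delay bounds, canned replies, and function name are illustrative assumptions, not O2's implementation.

```python
import random
import time

# Illustrative stalling replies; a production system would generate
# these with a language model rather than a fixed list.
STALLING_REPLIES = [
    "Oh, hold on dear, I need to find my glasses.",
    "Sorry, what was that? The kettle was whistling.",
    "Now, where did I put that card... let me check the other drawer.",
    "You remind me of my nephew. He also works with computers.",
]

def time_wasting_reply(turn: int) -> str:
    """Return a circular, low-information reply after a deliberate pause.

    The pause is the point: every second spent waiting is a second the
    attacker cannot spend on a real victim. Delay bounds are assumptions.
    """
    time.sleep(random.uniform(3.0, 9.0))  # artificial delay before answering
    return STALLING_REPLIES[turn % len(STALLING_REPLIES)]  # circular, never resolves
```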
The web has over thirty years of documented deceptive patterns: dark patterns, bait-and-switch flows, confirm-shaming dialogs. That history is not just a cautionary tale; it is a defensive playbook. Teams that study how manipulation spread through earlier interfaces can anticipate where it will appear in conversational AI and build the countermeasures before the damage starts.
Some teams find this counterintuitive. They worry that any friction will hurt conversion rates or brand perception. That fear is valid for legitimate users. It is not valid for the fraudster on the other end of the line. The design must distinguish between the two with surgical precision.
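One way to achieve that precision is to gate the persona behind an upstream fraud signal, so legitimate callers never encounter any added friction. The sketch below assumes a hypothetical scam-confidence score produced elsewhere in the network; the threshold and routing labels are invented for illustration.

```python
SCAM_CONFIDENCE_THRESHOLD = 0.95  # assumption: divert only near-certain fraud

def route_call(caller_id: str, scam_score: float) -> str:
    """Route an inbound call; legitimate traffic sees zero added friction.

    `scam_score` is assumed to come from upstream detection (number
    reputation, honeypot hits, network heuristics keyed to caller_id).
    """
    if scam_score >= SCAM_CONFIDENCE_THRESHOLD:
        return "daisy"         # confirmed fraud traffic goes to the decoy persona
    return "normal_network"    # everyone else connects as usual
```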
The Invisible Chatbot Problem
Most organizations deploy AI chatbots without a clear value proposition. Users often cannot tell if a bot is a search bar, a contact form, or a live agent. Research from Nielsen Norman Group shows participants struggle to distinguish these functions or fail to notice them entirely. When a bot's purpose is unclear, users ignore it. They scroll past the interface looking for a human or a standard search field.
Daisy avoids this pitfall by having a very specific, aggressive purpose. It does not try to help the user find a price or reset a password. It exists to simulate a human conversation for a specific, hostile audience. This clarity of intent makes the interaction effective. The bot does not need to be clever. It needs to be boring and persistent.
Ethics as a Design Constraint
Ethical design usually means being transparent about data collection. Users see a clear signal of what is stored and how it is used. Daisy operates on a different ethical axis. It prioritizes safety by deceiving a bad actor. The bot pretends to be human to waste time. That deception is the safety mechanism.
This creates a tension in the field. Designers are taught to build honest interfaces. A lie is rarely an acceptable pattern. Yet the lie here protects the user from financial loss. The ethical boundary shifts from "do not mislead" to "do not enable harm." Teams must decide when the target of the deception justifies the method.
Trust shows up when users feel their money is safe. They do not need to know the mechanics of the bot. They need to know the scam did not work. If the bot reveals its nature too early, the scammer disengages and finds a new target. The system only works if the deception holds.
The New Metric for AI Success
Most AI projects measure success by resolution time or user satisfaction scores. Those metrics are wrong for a defensive bot. Daisy should not be measured by how happy the caller is. It should be measured by how long the call lasts before the scammer hangs up. Every extra minute on the line is a minute the attacker cannot spend on a real victim, so longer duration is the success rate.
This shift changes how research teams validate concepts. They cannot rely on standard usability testing. Asking a real scammer to participate is impossible. Teams must simulate the behavior or analyze call logs for drop-off points. The data tells a different story than a satisfaction survey.
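As a sketch of what that log analysis might look like, the Python below computes hold times and hang-up points from a toy log. The log format, field names, and values are assumptions; real telemetry would be richer.

```python
from collections import Counter
from statistics import mean, median

# Assumed log format: (call_id, duration_seconds, last_turn_before_hangup)
call_logs = [
    ("c1", 640, 12),
    ("c2", 95, 3),
    ("c3", 1830, 41),
    ("c4", 410, 9),
]

durations = [seconds for _, seconds, _ in call_logs]
print(f"mean hold time: {mean(durations):.0f}s, median: {median(durations):.0f}s")

# Drop-off analysis: which points in the conversation lose the scammer fastest?
hangup_turns = Counter(turn for _, _, turn in call_logs)
for turn, count in sorted(hangup_turns.items()):
    print(f"turn {turn}: {count} hang-up(s)")
```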
Some teams find this hard to explain to stakeholders. It is difficult to sell a feature that succeeds by frustrating the only person who interacts with it. The value proposition is negative. It prevents a loss rather than creating a gain. That is a harder sell than a feature that increases revenue. The design leader must articulate that defense is a form of value.
The Boundary of Deception
The line between protection and manipulation is thin. A system designed to waste time can easily slip into harassment or abuse. The designer must set hard constraints on what the AI says and how it behaves. It must not insult the scammer. It must not threaten them. It must simply be a boring, unending loop.
This constraint requires a strict guardrail in the prompt engineering. The model must know when to stop. It cannot escalate the interaction. The goal is to neutralize, not to provoke. If the bot becomes aggressive, it risks legal liability or brand damage.
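A minimal sketch of such a guardrail, assuming the persona's candidate replies come from some upstream model: screen each reply against escalation patterns and cap the conversation length rather than let it provoke. The patterns, turn cap, and fallback line are illustrative assumptions.

```python
import re
from typing import Optional

# Assumed escalation patterns; a production guardrail would use a
# moderation model rather than a keyword list.
BANNED_PATTERNS = [r"\bidiot\b", r"\bstupid\b", r"\bthreat", r"\bpolice\b"]
MAX_TURNS = 60  # assumed cap: neutralize, then end; never provoke

FALLBACK = "Sorry dear, could you say that again? The line crackled."

def guard_reply(candidate: str, turn: int) -> Optional[str]:
    """Screen a candidate reply; return None to end the call at the cap."""
    if turn >= MAX_TURNS:
        return None  # stop cleanly: the goal is to waste time, not to escalate
    if any(re.search(p, candidate, re.IGNORECASE) for p in BANNED_PATTERNS):
        return FALLBACK  # never insult or threaten; swap in a boring stall
    return candidate
```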
The future of AI ethics in UX will not be about making bots more human. It will be about knowing when to make them human and when to make them traps. The thirty-year record of deceptive patterns described earlier is the playbook. Teams building AI systems today can either wait for the same manipulative patterns to colonize a new medium, or they can adopt a defensive posture now and design the countermeasures before the damage compounds. The choice is not theoretical. It is already late.
Additional Reading
- Playing dumb: how AI is beating scammers at their own game — UX Design.cc
- What Is Your Site's AI Chatbot for? Users Can't Tell — Nielsen Norman Group