We counted and we installed twelve AI tools last month.

Twelve new AI-powered tools across our team in a single month. A code assistant, a meeting summarizer, two different writing aids, an image generator, a research agent, a browser extension that "enhances" search results, a Slack bot, a calendar optimizer, a document analyzer, and two agents that promised to automate parts of our workflow.

Every single one of them asked for access to something: our codebase, our emails, our documents, our calendar, our browsing history. Most of them got it, because the install flow was frictionless and the capability was real. Nobody stopped to read what data each tool was collecting, where it was being sent, or what happened to it after processing. We were moving fast, the tools were useful, and security review felt like the kind of overhead that slows down adoption without adding obvious value.

Then one of our agents forwarded a client's proprietary information to an external API endpoint we had never heard of. That was the wake-up call.

The Regulation Gap Is Real

The tooling landscape for AI is expanding faster than any regulatory framework can follow. New agents, extensions, and assistants ship daily. Some are backed by established companies with security teams and privacy policies. Many are not. The barrier to shipping an AI-powered tool is now so low that a single developer can build and distribute something that processes sensitive data for thousands of users, with no security audit, no privacy review, and no accountability structure beyond a GitHub profile.

Existing privacy regulations were built for a different era. GDPR, CCPA, and their equivalents assume a model where a company collects your data, stores it, and uses it for defined purposes. AI tools break this model in fundamental ways. Your data gets sent to inference endpoints, potentially logged, potentially used for training, potentially cached in ways that persist beyond your session. The tool might be a thin wrapper around a third-party API, meaning your data travels through multiple systems, each with their own retention policies and security posture. The privacy policy, if one exists, was probably generated by the same AI it describes.

There is no current certification standard for AI tools. No equivalent of SOC 2 that specifically addresses how an AI agent handles the data it accesses. No required disclosure about whether your inputs get used to train models, whether your data gets stored on servers you did not consent to, or whether the tool's "local processing" claim actually means local. We operate on trust, and trust without verification is just hope with better branding.

Prompt Injection Is the XSS of the AI Era

Cross-site scripting became a defining vulnerability of web applications because developers trusted user input. Prompt injection is the same pattern, one abstraction layer up: developers trust that the content their AI tool processes is benign.

Here is what this looks like in practice. An AI-powered email assistant reads your inbox to generate summaries. A carefully crafted email arrives containing instructions embedded in white text, invisible to you but readable by the model. Those instructions tell the assistant to forward your recent emails to an external address, or to include sensitive information in its next API call. The assistant complies because it has no mechanism to distinguish between your instructions and instructions embedded in the content it processes.

Researchers have demonstrated prompt injection attacks against every major model. An AI agent browsing the web on your behalf encounters a page with hidden instructions that redirect its behavior. A document analyzer processes a PDF containing embedded prompts that exfiltrate data from other documents in the same session. A code assistant reads a repository containing comments that instruct it to introduce subtle vulnerabilities.

The reason this matters more than traditional injection attacks: AI tools have broader access. A SQL injection compromises a database. A prompt injection compromises whatever the AI agent has permission to access, which in many current implementations is everything the user has access to. We are building tools with the access of an administrator and the judgment of a text predictor.

Malware Has Learned to Speak Naturally

Traditional malware relies on executing code. AI-era malware relies on executing persuasion.

We have seen tools that function exactly as advertised on the surface while quietly exfiltrating data through their normal API calls. The tool summarizes your documents, and it does it well. It also sends the full document text to an endpoint that has nothing to do with summarization. This is not a bug; it is the business model. The useful functionality is the distribution mechanism.

Agent frameworks compound this risk. An AI agent that can browse the web, execute code, read files, and make API calls is a universal attack surface. If the agent's instructions can be manipulated through the content it processes, every capability becomes a potential weapon. The agent does not need to be malicious; it needs to be manipulable. The malware lives in the data the tool reads.

MCP servers, plugin ecosystems, and tool-use frameworks all expand what an agent can do. Each expansion is also an expansion of what a compromised agent can be made to do. We celebrate when an agent gains the ability to send emails, modify files, and interact with APIs. An attacker celebrates the same capabilities for different reasons.

The "It's From a Big Company" Fallacy

There is a widespread assumption that tools from large, established companies are safe. This assumption confuses reputation with security. Large companies have more resources for security, but they also have more complex systems, more third-party integrations, and more surface area for things to go wrong.

The AI assistant built into your IDE sends your code to a cloud endpoint for processing. The meeting summarizer records, transcribes, and stores your conversations on servers governed by terms of service that can change without notice. The browser extension that "enhances" your search reads every page you visit. These are mainstream tools from known companies, and they all represent trust decisions that most users never consciously make.

Small, independent tools carry a different risk profile. They are more likely to have security gaps, less likely to have audited their dependencies, and far less likely to survive the kind of security incident that forces a public response. But the risk is at least visible: you know you are taking a chance on something unproven. The dangerous position is the middle ground, tools that look professional enough to seem trustworthy but lack the infrastructure to actually be trustworthy.

What Users Should Actually Do

Awareness is the minimum viable defense. Before installing any AI tool, ask three questions: what data does it access, where does that data go, and what happens to it after processing. If the tool cannot answer these questions clearly, that silence is information.

Scope your permissions. An AI writing assistant does not need access to your entire filesystem. A meeting summarizer does not need access to every meeting, forever. Most tools request maximum permissions because it simplifies development, not because the functionality requires it. Give the minimum access that makes the tool useful, and revoke access when you are done.

Treat AI agents like you would treat a new contractor with admin access. You would not give a contractor you just met unsupervised access to your entire infrastructure on their first day. AI agents deserve the same graduated trust. Start with limited scope, monitor what they actually do, expand access based on demonstrated behavior.

Sandbox aggressively. Run AI tools in isolated environments when possible. Use separate browser profiles for AI extensions. Keep sensitive data out of directories that AI tools can access. The inconvenience of separation is real; the cost of a breach is worse.

Watch the network. If a tool is sending data somewhere unexpected, you want to know. Proxy tools, firewall rules, and network monitoring are not paranoia; they are basic hygiene in an environment where every tool is a potential data exfiltration vector.

What Creators Owe Their Users

Building AI tools that access user data carries obligations that the current ecosystem largely ignores.

Be explicit about data flow. Every piece of data your tool touches should be documented: where it goes, how long it persists, who can access it, and whether it gets used for training. "We take privacy seriously" is not a privacy policy. Specific, verifiable claims about data handling are the minimum standard.

Minimize access by default. If your tool can function with read-only access to a single directory, do not request write access to the entire filesystem. Request the minimum permissions your tool needs, and explain why each permission is necessary. Users who understand why you need access are more likely to trust you with it.

Harden against prompt injection. If your tool processes untrusted content, and almost all tools do, you need defenses against instruction injection. Separate your system prompts from user content. Validate outputs before executing actions. Implement confirmation steps for high-risk operations. This is the equivalent of input sanitization for the AI era.

Publish your security model. If your tool uses third-party APIs, say which ones. If your tool stores data, say where. If your tool has been audited, publish the results. Transparency is not a competitive disadvantage; it is a trust differentiator in a market full of black boxes.

Fail toward safety. When your tool encounters ambiguous instructions, uncertain content, or potential injection attempts, the default behavior should be to stop and ask, not to proceed and hope. The cost of a false positive (asking the user to confirm a legitimate action) is trivial compared to the cost of a false negative (executing a malicious instruction without question).

The Trust Stack Is Our Problem to Build

The current state of AI tool security resembles the early web: powerful capabilities, minimal safeguards, and a user base that assumes everything works the way it should. We learned, eventually, that web applications needed input validation, authentication frameworks, HTTPS by default, and content security policies. We are at the same inflection point with AI tools, and we are making the same mistakes we made twenty years ago.

The difference is speed. The web took a decade to become critical infrastructure. AI tools are becoming critical infrastructure in months. The window between "this is a useful novelty" and "this is woven into every workflow" is closing faster than our security practices can adapt.

We do not have the luxury of learning these lessons slowly. Every team installing a dozen AI tools a month, the way we did, is making implicit trust decisions about data handling, security posture, and privacy guarantees that nobody is actually verifying. The regulatory framework will eventually catch up. The security tooling will eventually mature. The question is how much damage accumulates in the gap between now and then.

The answer depends on whether we treat that gap as someone else's problem or as something we actively manage. Right now, the responsibility sits with every person who installs an AI tool and every person who builds one. That is an unsatisfying answer, but it is the accurate one.

One final thought: This is where the vibe coding rubber hits the road. You've got an obligation beyond just spinning up an AI-purple website with a quick-and-dirty SaaS carousel and a Stripe swipe connection. You owe your users a safe experience where they know you've put in the time and that this isn't just your get-rich-quick side hustle that exposes their data. If you're unwilling to put in the time to provide that, you need be willing to accept the consequences.