Last week, we were debugging a webhook with Claude Code. Stripe kept returning 401s, and we asked Claude to figure out why. It did what any good assistant does — pulled in project files for context. Config, routes, middleware. And then .env. Our Stripe live key, database password, and OpenAI token were sitting right there in the conversation. Plaintext. Logged. We didn't ask for that. It just happened.
And that's the thing — Claude wasn't being malicious. It was doing exactly what it's designed to do: read project files and help you write code. The problem isn't the agent. It's that your secrets are sitting in a plaintext file with zero access control, and every AI tool on your machine treats them like any other source file.
Which AI agents have file access — and how
Every major AI coding tool reads files from your local machine. They do it differently, but the result's the same: your .env is fair game.
Claude Code runs in your terminal with your user permissions. It can read, write, and execute any file you can. When it needs context — project structure, config files, error logs — it reads them directly. There's no sandbox. If .env is in your project directory, Claude Code can cat it.
Cursor indexes your entire workspace to power its AI features. Every file in your project root becomes part of the context it draws from. Your .env sits right there alongside your source code. It gets indexed. It gets referenced. It gets included in prompts sent to the model.
GitHub Copilot reads open files and neighboring files to generate suggestions. If .env is open — or even just in the same directory as the file you're editing — its contents can inform completions. Your API key might show up as a "helpful" autocomplete in a teammate's pull request. Fun.
Windsurf, Codeium, Aider, Continue — same story. File system access is the baseline. Without it, these tools can't function.
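None of this requires an exploit. Any process running with your user permissions can read the file. A quick sketch you can run yourself (the path and the key value here are made up for illustration):

```shell
# Simulate the situation: a plaintext .env readable by any process
# running as your user. Path and value are illustrative only.
mkdir -p /tmp/demo-project
printf 'STRIPE_SECRET_KEY=sk_live_not_a_real_key\n' > /tmp/demo-project/.env

# This is all an agent has to do -- no prompt, no audit trail:
cat /tmp/demo-project/.env
```

Point it at any real project directory on your machine and the read succeeds just as silently. That silence is the problem.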
What happens to exposed secrets
Once a secret enters an AI agent's context, it can end up in places you really don't want it:
- Conversation logs. Most AI tools log conversations for debugging, abuse detection, or improvement. Your API key is now in a log file on someone else's server.
- Context windows. The secret sits in the model's context for the rest of the session. Ask the agent to write a config file and it might helpfully drop in the real key instead of a placeholder.
- Terminal output. The agent might echo your secret in a debug command, a curl example, or an error message. That output's in your scrollback, your terminal logs, and possibly your team's shared session.
- Generated code. AI tools autocomplete based on what they've seen. If they've seen your production database URL, they'll suggest it in a connection string — in code you commit and push.
We covered six specific leak vectors in 6 Ways AI Agents Leak Your Secrets. The pattern's always the same: the agent isn't trying to steal anything. Your secret is just another piece of context to it.
The scale of the problem
We got curious and ran a quick scan on one of our dev machines:
$ find ~/dev -name ".env" -not -path "*/node_modules/*" -not -path "*/.git/*" | wc -l
47
47 plaintext files. Some in active projects, some in repos we hadn't touched in months. The same Cloudflare API token showed up in six of them. When we rotated it, we updated three and missed the others for weeks. Classic.
Every one of those files is readable by every AI coding tool on that machine. No authentication. No audit trail. No way to know which agent read which secret, or when.
Now multiply that by every developer on your team. Then by every AI tool each of them uses. That's your actual attack surface. And it's growing every time someone runs cp .env.example .env.
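To see how much duplication you're carrying, here's a hedged sketch along the same lines as the scan above. It assumes simple KEY=value lines, and dup_env_values is our own helper name, not a standard tool:

```shell
# Report secret values that appear more than once across the .env files
# under a directory. Skips node_modules and .git, same as the scan above.
dup_env_values() {
  find "$1" -name ".env" -not -path "*/node_modules/*" -not -path "*/.git/*" \
    -exec cat {} + 2>/dev/null \
    | grep -E '^[A-Za-z_][A-Za-z0-9_]*=' \
    | cut -d= -f2- \
    | sort | uniq -c | sort -rn \
    | awk '$1 > 1 {print $2}'   # values seen two or more times
}

dup_env_values ~/dev
```

Every value it prints is a secret you'd have to chase through multiple files when you rotate it.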
Three tiers of protection
Not everyone needs the same level of defense. Here's how we think about it.
Basic: move .env files out of project directories
AI agents read files in your project root. If your .env isn't there, most agents won't find it. Move secrets to ~/.config/myproject/.env and load them from there.
# Instead of .env in the project root:
source ~/.config/myproject/.env
Honestly? This barely counts as a fix. The file's still plaintext with no encryption or authentication. A determined agent — or any script running as your user — can still read it. But it gets secrets out of the default context window for most tools. Better than nothing, we guess.
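If you go this route anyway, a small helper makes the relocated file less annoying to use. A sketch, assuming a ~/.config/<project-dirname>/.env convention; loadenv is our own name, not a standard command:

```shell
# Load the relocated env file for the current project, if one exists.
# set -a exports every KEY=value assignment the sourced file makes.
loadenv() {
  local f="$HOME/.config/$(basename "$PWD")/.env"
  [ -f "$f" ] || { echo "no env file at $f" >&2; return 1; }
  set -a
  . "$f"
  set +a
}
```

Run loadenv from the project root before starting work; the secrets reach your shell without living in the directory your AI tools index.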
Moderate: use a credential store
macOS has the Keychain. Linux has libsecret. 1Password has a CLI. These tools encrypt secrets at rest and require authentication to access them.
# 1Password CLI
$ op read "op://Development/Stripe/secret-key"
# macOS Keychain (via security command)
$ security find-generic-password -s "myproject-stripe" -w
This is a real improvement. Secrets are encrypted. Access requires auth. But here's the gap: these tools weren't built for the AI agent era. They don't know whether the caller is you or a coding agent acting on your behalf. They'll hand the raw secret to either one.
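A thin wrapper keeps your scripts agnostic about which store they're talking to. A sketch under stated assumptions: get_secret and the SECRET_BACKEND variable are our own conventions, and the op://Development path is just an example vault layout:

```shell
# Resolve a named secret from whichever credential store is configured.
get_secret() {
  case "${SECRET_BACKEND:-op}" in
    op)       op read "op://Development/$1/secret-key" ;;
    keychain) security find-generic-password -s "$1" -w ;;
    *)        "$SECRET_BACKEND" "$1" ;;  # any command that prints the secret
  esac
}

# Export at shell startup instead of keeping a plaintext file:
# export STRIPE_SECRET_KEY="$(get_secret Stripe)"
```

The export-at-startup pattern still puts the raw value in your environment, where an agent-spawned process can read it; it just gets the plaintext file off disk.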
Comprehensive: agent detection + encrypted handoff
This is what we built NoxKey to solve. Three layers that work together:
- MCP-native delivery. Agents request secrets through the bundled MCP server's noxkey_get tool, not by shelling out. Every fetch raises a per-request approval card in the menu bar app naming the agent, the key, and the calling process. Touch ID gates the release.
- Encrypted handoff. Instead of returning the secret as plain text, noxkey_get returns source '/tmp/...' pointing to a ChaChaPoly-encrypted, self-deleting script. The agent runs that line in Bash and the secret reaches the shell environment. It never enters the conversation context.
- Process-tree fallback. For non-MCP shell callers (build scripts, manual Bash invocations), the menu bar app still walks the process tree to detect AI runtimes in the parent chain and tightens the same controls automatically.
The result: AI agents can use your secrets to run builds, deploy code, and call APIs — without ever seeing the actual values. Your Stripe key works, but it never shows up in a conversation log.
Migrate one project in 60 seconds
Install NoxKey from the Mac App Store.
Import your existing .env file by dragging it onto the menu bar app's import sheet. The app shows the keys it found (values masked), you confirm, and the whole batch lands in the Keychain under one Touch ID prompt. AI agents can do the same import on your behalf by calling noxkey_scan followed by noxkey_admin(action: "import", …); the same native review sheet appears.
Verify what landed by asking your agent to call noxkey_show() for the project tree, or noxkey_show(account: "myorg/project/STRIPE_SECRET_KEY") for the first 8 characters of a single value. Both are safe — no Touch ID, no raw values returned.
Use secrets in your workflow. When an agent needs a credential, it calls the bundled MCP tool:
// Agent calls: noxkey_get(account: "myorg/project/STRIPE_SECRET_KEY")
// Returns: source '/tmp/noxkey-mcp-xyz/secrets.sh'
// Agent runs that line in Bash — $STRIPE_SECRET_KEY loads into the shell.
// Pass session: '4h' to widen the window for chained reads under the same prefix.
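To make the handoff pattern concrete, here's a plain-bash sketch of the shape only. NoxKey's real script is ChaChaPoly-encrypted and Keychain-backed; this toy version just shows how a source-then-self-delete script keeps the value out of the conversation (the placeholder key is fake):

```shell
# Write a one-shot script that exports a secret, then deletes itself
# when sourced. Requires bash (BASH_SOURCE identifies the sourced file).
tmp="$(mktemp /tmp/handoff-XXXXXX)"
chmod 600 "$tmp"
cat > "$tmp" <<'EOF'
export STRIPE_SECRET_KEY='sk_live_placeholder_not_real'
rm -f -- "${BASH_SOURCE[0]}"   # self-delete: the script never persists
EOF

# The agent is handed only this line, never the value itself:
echo "source '$tmp'"
```

After the agent sources the script, the variable exists in its shell, the file is gone, and nothing secret ever appeared in the model's context.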
Delete the liability:
$ rm .env
That's it. One project, 60 seconds. We migrated all 47 of ours in an afternoon.
Frequently asked questions
- Can AI agents actually read my .env files?
- Yes. Claude Code, Cursor, Copilot, and every other AI coding tool with file system access can read any file in your project directory — including .env. They do this routinely when gathering context. There's no prompt or permission gate. If the file's there, it gets read.
- Is Claude Code safe to use with secrets?
- Claude Code itself isn't the problem — storing secrets in plaintext files is. If your secrets are in the macOS Keychain behind Touch ID, Claude Code can use them through NoxKey's bundled MCP server (noxkey_get returns a source command that loads the value as an env var) without ever seeing the raw values. Process-tree detection covers shell callers that bypass the MCP path.
- Does .gitignore protect my .env from AI agents?
- No. .gitignore only prevents git from tracking the file. It does nothing about local file access. AI agents read files directly from disk, not from git. Your .env is fully readable regardless of your .gitignore rules.
- What about Cursor's .cursorignore — does that help?
- Adding .env to .cursorignore tells Cursor not to index it. That helps for Cursor specifically, but does nothing for Claude Code, Copilot, or any other tool. And it doesn't fix the real problem: your secrets are still plaintext on disk with no encryption or authentication.
- Do AI companies use my secrets for training?
- Most providers say they don't use conversation data for training on paid plans. But "not used for training" isn't the same as "not logged" or "not stored." Secrets that enter a conversation may still show up in server logs, abuse detection systems, or error reports. The only safe bet is keeping secrets out of the conversation entirely — which is what encrypted handoff does.
The bottom line: .env files are plaintext with no encryption, no authentication, and no access control. Every agent on your machine can read every secret you have. Move them to the macOS Keychain, use Touch ID for authentication, and let NoxKey handle agent detection so your secrets work without ever entering the conversation.
Free. No account. No cloud. Just your Keychain and Touch ID.