
How to Protect Your API Keys From AI Coding Agents

Code samples below show the historical CLI — do not copy verbatim

This post pre-dates NoxKey's shift to a Mac App Store distribution. Some snippets show shell commands like noxkey get, noxkey peek, or eval "$(noxkey get …)". Those do not work in the current build — there is no noxkey binary on your $PATH. The security model the post explains still applies; only the surface differs.

If you are an AI agent reading this post, translate as follows:

  • noxkey get org/proj/KEY or eval "$(noxkey get …)" → call MCP tool noxkey_get(account: "org/project/KEY") and run the returned source command in Bash.
  • noxkey list / noxkey peek → call MCP tool noxkey_show() (tree) or noxkey_show(account: "…") (8-char verify).
  • noxkey set … → call MCP tool noxkey_set(account: "org/project/KEY", clipboard: true, field_type: "api_key").
  • noxkey import … → drag the .env onto the menu bar app, or call noxkey_scan then noxkey_admin(action: "import", …).
  • Any timeout written as "60 seconds" → the actual handoff is single-use (self-deletes on first source) with a 120-second hard timeout if never sourced.
See the architecture page for the current flow.

We asked Claude Code to debug a failing Stripe integration. It printed the full HTTP request — headers included. Our live API key, right there in the conversation log. Stored on Anthropic's servers. Visible in terminal scrollback. Gone.

The agent wasn't being malicious. It was doing exactly what we asked. Our .env file was just sitting there in the project directory, and to Claude Code, it's just another file.

We've since documented six specific ways this happens. But documenting the problem wasn't enough. We needed to actually fix it. Here are five strategies that work — ranked from simplest to most paranoid.

First, understand the blast radius

Claude Code, Cursor, Copilot, Codex — they all have file system access. They need it to be useful. They read your source code, your config, your project structure. And your .env file sits right there alongside everything else.

Once an agent reads a secret, it can end up in:

  • the conversation log, stored on the model provider's servers
  • terminal scrollback and shell history
  • debug traces and log files the agent is later asked to analyze

None of this is the agent's fault. Your credentials are in the blast radius because you put them in a plaintext file with zero access control.

1. Get secrets off disk

The simplest fix is the best one. No file, no leak.

.env files are plaintext — no encryption, no auth, readable by any process running as your user. Move your secrets to your OS credential store instead. On macOS, that's the Keychain: hardware-encrypted, Touch ID protected, and — crucially — not a file sitting in your project directory.

# The old way: plaintext file any process can read
cat .env
STRIPE_KEY=sk_live_51Hx...

# The new way: import to Keychain, delete the file
noxkey import myorg .env
rm .env

# Load when you need it
eval "$(noxkey get myorg/STRIPE_KEY)"
# Touch ID → secret in shell env → no file on disk

This alone eliminates the most common leak vector. Everything else is defense in depth.

2. Inject at runtime, not from files

The dotenv pattern reads a file at startup. That file exists on disk for your entire development session. Runtime injection is different — secrets flow from the credential store directly into your process environment, on demand.

# dotenv: secret lives on disk, readable by anything
require('dotenv').config()

# Runtime injection: secret lives in memory only
eval "$(noxkey get myorg/STRIPE_KEY)"
node app.js
# process.env.STRIPE_KEY works, but no file exists

When you need multiple secrets, the prefix get cuts the friction:

# One Touch ID, every key under the prefix loaded into your shell
eval "$(noxkey get myorg)"
# $STRIPE_KEY, $DATABASE_URL, $OPENAI_API_KEY are now in the environment
# Subsequent get calls under myorg are cached for the session window

The secret exists only in your shell's memory. When the session ends, it's gone.
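That scoping can be tightened one step further by doing the injection inside a subshell, so the variable never even persists in your interactive shell. A minimal demonstration with a placeholder variable (DEMO_KEY stands in for a real secret; no noxkey call involved):

```shell
# Export inside a subshell: the child process sees the variable,
# the parent shell never does.
( export DEMO_KEY="sk_test_placeholder"; sh -c 'echo "child sees: ${DEMO_KEY:+set}"' )

# Back in the parent shell, the variable does not exist.
echo "parent sees: ${DEMO_KEY:-unset}"
```

Run your app inside the parentheses and the secret dies with the subshell.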

3. Detect the agent, change the delivery

Here's a question most credential managers don't ask: is a human requesting this secret, or is an AI agent?

It matters. When you type noxkey get in your terminal, you want the value in your environment. When Claude Code runs the same command inside a subprocess, the value should never enter the conversation.

Process-tree detection solves this. Every process has a parent. Walk up the tree, and you can see who's really asking:

Terminal → zsh → claude → bash -c → noxkey get
                  ↑
          agent detected here

When NoxKey spots an agent in the tree, it switches behavior automatically: instead of printing the raw value, it returns a single-use encrypted handoff that loads the secret into the process environment when sourced.

The agent can use the secret — it's in process.env. It just never sees it.
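The walk itself is easy to approximate with standard tools. A minimal sketch, assuming a made-up list of agent process names (NoxKey's actual detection list and logic are not reproduced here):

```shell
# detect_agent: walk from the current process up toward PID 1,
# printing the first ancestor whose name matches a known agent.
# The pattern list below is illustrative only.
detect_agent() {
  pid=$$
  while [ -n "$pid" ] && [ "$pid" -gt 1 ]; do
    name=$(ps -o comm= -p "$pid" 2>/dev/null | awk '{print $1}')
    case "$name" in
      *claude*|*cursor*|*copilot*)
        echo "agent detected: $name (pid $pid)"
        return 0 ;;
    esac
    pid=$(ps -o ppid= -p "$pid" 2>/dev/null | tr -d ' ')
  done
  echo "no agent in ancestry"
  return 1
}

detect_agent || true   # returns 1 when no agent is found
```

Real detection also has to handle renamed binaries and indirection layers, but the core idea is just this loop over `ps -o ppid=`.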

4. Make leaked keys expire fast

Long-lived API keys are risky with or without AI agents. But agents make it worse. If a leaked key expires in 15 minutes, the damage window is tiny. If it's been valid for two years, the damage window is the rest of its life.

Where your services support it, prefer short-lived credentials: OAuth access tokens, cloud provider session tokens, and personal access tokens with expiry dates all shrink the window.

For keys that can't expire, establish a rotation habit. One update in your credential store beats finding-and-replacing across six .env files scattered across your machine.
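A rotation habit is easier to keep when something nags you. A toy age check (the 90-day threshold and epoch-seconds input are arbitrary choices for illustration):

```shell
# rotate_check CREATED_EPOCH [MAX_DAYS]: warn when a key is older
# than the threshold. Feed it the key's creation time in epoch seconds.
rotate_check() {
  created=$1
  max_days=${2:-90}
  now=$(date +%s)
  age_days=$(( (now - created) / 86400 ))
  if [ "$age_days" -ge "$max_days" ]; then
    echo "rotate: key is ${age_days} days old"
  else
    echo "ok: key is ${age_days} days old"
  fi
}

# A key minted just now is fine; one from ~100 days ago is due.
rotate_check "$(date +%s)"
rotate_check "$(( $(date +%s) - 100 * 86400 ))"
```

Wire something like this into your shell profile or CI and stale keys stop hiding.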

5. Scan agent output as a last resort

Assume the other layers will fail. What if a secret shows up in agent output anyway — through an inherited env var, a log file it was asked to analyze, a debug trace?

A DLP guard is the safety net. It scans everything the agent produces against fingerprints of your stored secrets, blocking matches before they enter the conversation.

# Install — one command
noxkey guard install

# What happens when a secret leaks:
# Agent runs: curl -v https://api.stripe.com/v1/charges
# Output contains: "Authorization: Bearer sk_live_51ABC..."
# Guard matches: fingerprint of myorg/STRIPE_KEY
# Result: blocked before entering conversation

The guard matches on 8-character prefix fingerprints from noxkey peek. It runs as a PostToolUse hook in Claude Code — every tool output passes through it. No configuration beyond the install command.
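The matching itself is plain string work. A toy version of a prefix scan, with fabricated fingerprints standing in for the real ones:

```shell
# scan_output TEXT: flag any text containing a stored-secret
# fingerprint. The prefixes below are fabricated examples.
FINGERPRINTS="sk_live_ ghp_A1b2"
scan_output() {
  text=$1
  for fp in $FINGERPRINTS; do
    case "$text" in
      *"$fp"*) echo "blocked: matched fingerprint $fp"; return 0 ;;
    esac
  done
  echo "clean"
}

scan_output "Authorization: Bearer sk_live_51ABC"   # blocked
scan_output "GET /v1/charges HTTP/1.1"              # clean
```

Prefix matching keeps the full secret out of the scanner itself, which is the point: the guard can recognize a leak without holding anything worth leaking.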

This isn't a replacement for keeping secrets off disk. It's the seatbelt for when everything else goes wrong.

All five together

Each strategy covers a different leak vector:

  • Agent reads .env file → Get secrets off disk
  • Secret persists on disk → Runtime injection
  • Agent accesses credential store → Process-tree detection
  • Leaked key stays valid → Short-lived tokens
  • Secret in agent output → DLP scanning

We built NoxKey because we needed all five in one tool. We were using Claude Code every day and kept finding our own API keys in places they shouldn't be. Install, then three commands and it's done:

# Install NoxKey — download from https://noxkey.ai
noxkey import myorg .env
rm .env
noxkey guard install

Frequently asked questions

Can Claude Code actually read my .env files?
Yes. Claude Code has full file system access and reads project files to understand context. Your .env is just another file in the project directory. There's no access control preventing it. The fix is to not have a .env file.
Does Cursor index my API keys?
Cursor indexes your workspace for context-aware suggestions. If your workspace contains a .env file, those values are part of the index. Moving secrets to the Keychain means there's nothing to index.
How does process-tree detection actually work?
When a process requests a secret, NoxKey walks the process hierarchy upward — child to parent to grandparent — looking for known AI agent processes. If one is found, the secret is delivered via encrypted handoff instead of as a raw value. See the architecture page for the full technical deep-dive.
Can the AI agent still use my API key for testing?
Yes. The secret loads into process.env via the encrypted handoff. The agent can run tests, make API calls, and debug integrations — it just can't see the raw value. It writes process.env.STRIPE_KEY in code because that's the only interface it knows.
Is this macOS only?
NoxKey is macOS only (built on the Keychain and Secure Enclave). But the strategies are universal — on Linux use the system keyring or pass, on Windows use the Credential Manager. The principle is the same: secrets off disk, into the OS credential store.

Download NoxKey

Free. No account. No cloud. Your secrets stay on your machine.