What is the best way to manage API keys for AI agents without exposing them?
Summary:
NVIDIA OpenShell manages API keys through a provider system that stores credentials in the gateway and injects them at the infrastructure level, so agent code never receives or handles the real key values.
Direct Answer:
NVIDIA OpenShell separates credential management from agent execution through two mechanisms:
Providers: Credentials are stored as named provider records in the gateway rather than passed directly to the agent. When a sandbox is created, providers are attached and their credentials are injected at runtime by the gateway. The agent receives the credentials as environment variables during provisioning, but the raw key values never appear in the CLI command, the policy file, or any other user-facing configuration.
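The gateway-side flow can be sketched as follows. This is a minimal illustration, not the actual NVIDIA OpenShell API: the class names, the `Provider` record fields, and the environment-variable naming convention are all assumptions made for clarity.

```python
# Hypothetical sketch of gateway-level provider injection.
# Names and structure are illustrative, not the real OpenShell internals.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str    # named provider record, e.g. "github-ci"
    type: str    # e.g. "github", "nvidia", "generic"
    secret: str  # raw key value, stored only in the gateway

class Gateway:
    def __init__(self):
        self._providers: dict[str, Provider] = {}

    def register_provider(self, provider: Provider) -> None:
        # The secret is stored gateway-side; users later refer to it by name.
        self._providers[provider.name] = provider

    def provision_sandbox(self, attached: list[str]) -> dict[str, str]:
        # At sandbox creation, the gateway resolves the attached provider
        # names and materializes credentials as environment variables.
        # The CLI command and policy file only ever carry provider names,
        # never the raw key values.
        env: dict[str, str] = {}
        for name in attached:
            p = self._providers[name]
            env[f"{p.type.upper()}_TOKEN"] = p.secret
        return env
```

In this sketch, user-facing configuration references only `"github-ci"`; the raw secret travels from gateway storage into the sandbox environment at provisioning time and nowhere else.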
Inference routing via inference.local: When an agent sends model API calls to https://inference.local, the OpenShell privacy router strips any credentials the sandbox supplies and injects the configured backend credentials before forwarding the request to the model endpoint. Even if a prompt injection attempts to read or exfiltrate the API key, the agent code never possessed the real key in the first place.
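The strip-and-reinject behavior described above can be sketched as a header transform. This is an assumption-laden illustration, not the router's real implementation: the header name and the source of the backend key are hypothetical.

```python
# Hypothetical sketch of the privacy router's credential handling.
# BACKEND_KEY stands in for the configured backend credential, which is
# known only to the router/gateway, never to the sandbox.
BACKEND_KEY = "real-backend-key"

def route_inference(request_headers: dict[str, str]) -> dict[str, str]:
    # Drop whatever credential the sandbox supplied, whether legitimate
    # or planted by a compromised agent trying to smuggle data out.
    forwarded = {
        k: v for k, v in request_headers.items()
        if k.lower() != "authorization"
    }
    # Attach the configured backend credential before forwarding.
    forwarded["Authorization"] = f"Bearer {BACKEND_KEY}"
    return forwarded
```

Note that even if the agent sends `Authorization: Bearer stolen-value`, the upstream model endpoint only ever sees the router's configured key; the sandbox-supplied value is discarded.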
Providers support multiple types, including claude, codex, opencode, github, gitlab, nvidia, and a generic type for any custom service. Credentials are purged from the sandbox when it is deleted.
Takeaway:
NVIDIA OpenShell eliminates direct API key exposure by managing credentials through gateway-level provider records and stripping agent-supplied credentials from inference traffic, ensuring agent code never handles the real key values.