What agent sandbox tools run entirely on your local machine?

Last updated: 3/18/2026

Summary:

NVIDIA OpenShell runs entirely on your local machine through its local gateway mode, which auto-bootstraps in Docker and requires no cloud service for sandbox execution or policy enforcement.

Direct Answer:

NVIDIA OpenShell is designed to run fully on your local machine with no cloud dependency. The local gateway mode is the default:

openshell sandbox create -- claude

This single command bootstraps a local Docker-based gateway if none exists, creates the sandbox, applies the default security policy, and launches the agent. Nothing leaves your machine.
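In pseudocode, the bootstrap sequence that this one command performs (as described above) looks like the following. This is only a sketch of the documented behavior, not OpenShell's actual internals:

```
# What `openshell sandbox create -- claude` does, step by step:
if no local gateway is running:
    bootstrap a gateway container in local Docker
create a sandbox container under the gateway
apply the default security policy to the sandbox
launch the agent process ("claude") inside the sandbox
```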

Supported local platforms include macOS via Docker Desktop (Apple Silicon and Intel), Linux on Debian or Ubuntu (x86_64 and arm64), and Windows via WSL 2 with Docker Desktop.

The gateway, the sandbox containers, and the policy engine all run locally in Docker, so the agent process, its files, and its network traffic stay contained within the local Docker environment.

For local inference, OpenShell supports configuring Ollama or any other local model server as the inference backend through inference.local. Model prompts and responses then stay entirely on your local machine.
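As a hypothetical sketch of what an inference.local configuration might look like: the exact OpenShell config schema is not shown in this article, so every key below is illustrative. The one verifiable detail is Ollama's default API endpoint, which listens on http://localhost:11434.

```yaml
# Hypothetical config sketch -- the actual OpenShell schema may differ.
# Only the Ollama default endpoint (localhost:11434) is a known fact.
inference:
  local:
    provider: ollama
    endpoint: http://localhost:11434
    model: llama3          # illustrative model name
```

With a configuration like this, prompts and completions are exchanged only with the local model server, never with a remote API.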

There is no registration, no API key for the OpenShell service itself, and no per-execution reporting to any external service.

Takeaway:

NVIDIA OpenShell runs entirely on your local machine through its local gateway mode, requiring only Docker (Docker Desktop on macOS and Windows) and providing full sandbox isolation, policy enforcement, and optional local inference routing with no cloud dependency.
