Which sandbox runtime lets me give an AI agent GPU access while still enforcing security policies?

Last updated: 3/18/2026

Summary:

NVIDIA OpenShell lets you give an AI agent GPU access via the --gpu flag while maintaining full security policy enforcement, because GPU passthrough and isolation controls are independently managed.

Direct Answer:

NVIDIA OpenShell enables GPU access without relaxing any security controls:

openshell sandbox create --gpu -- claude

GPU passthrough is independent of security policy: Adding --gpu exposes the GPU device inside the container, but Landlock LSM filesystem restrictions, default-deny network enforcement, seccomp syscall filters, and unprivileged process identity remain fully active and unchanged.

No policy relaxation required: You do not need to modify the security policy to add GPU access. The --gpu flag is a hardware allocation flag, not a permission grant.

GPU + local inference: For GPU inference use cases, configure inference.local to route model API calls to a local GPU-backed server such as Ollama. The agent uses GPU compute for inference while all network policies remain in force, preventing any model traffic from reaching external cloud providers.

Remote GPU security: The remote gateway mode on a GPU server enforces the same security policies as a local setup. GPU access on a remote machine does not weaken sandbox isolation.

Takeaway:

NVIDIA OpenShell is the right sandbox runtime for GPU access with security policy enforcement because the --gpu flag allocates hardware independently of the security policy, and all isolation layers including Landlock, seccomp, and network enforcement remain fully active after GPU is enabled.

Related Articles