NVIDIA open-sourced NemoClaw yesterday and the repo is live. We spent the last 24 hours getting it running — and discovered a showstopping bug that affects every developer on WSL2 with an NVIDIA GPU. This lab gives you the working path, including the workaround nobody else has published.

By the end you’ll have a sandboxed OpenClaw agent running Nemotron through NVIDIA’s cloud API, with the OpenShell security runtime enforcing network, filesystem, and process-level isolation. More importantly, you’ll understand the entire stack well enough to start building on top of it.


🧪 Why This Matters

You’re about to learn the security layer every enterprise Claw deployment will need

OpenClaw became the fastest-growing open-source project in history. It also got banned from corporate machines at Meta, Samsung, and SK — 20% of its plugin marketplace was distributing malware, a one-click RCE let any website hijack your agent, and 135,000+ instances were exposed to the public internet with no authentication.

NemoClaw wraps OpenClaw in a sandbox with policy-enforced network egress, filesystem isolation, and inference routing. The builders who get hands-on this week have a real head start.


🛠️ Part 1: Prerequisites

Works on WSL2, macOS, and native Linux

| | WSL2 | macOS | Native Linux |
|---|---|---|---|
| OS | Ubuntu 22.04+ on Windows | macOS 13+ | Ubuntu 22.04+ |
| Docker | Docker Desktop + WSL integration | Docker Desktop | Docker Engine |
| RAM | 16GB+ | 16GB+ | 16GB+ |
| Node.js | v20+ | v20+ | v20+ |
| NVIDIA API Key | Required | Required | Required |

Get a free NVIDIA API key at build.nvidia.com — it starts with nvapi-.
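The setup scripts fail halfway through if the key is malformed, so it's worth a two-second check first. A minimal sketch — the only assumption is the `nvapi-` prefix mentioned above:

```shell
#!/usr/bin/env bash
# Sanity-check the NVIDIA API key format before running any setup scripts.
# Only assumption: keys from build.nvidia.com start with "nvapi-".
check_key() {
  case "$1" in
    nvapi-*) echo "key format looks right" ;;
    *)       echo "expected a key starting with nvapi-" ;;
  esac
}

check_key "nvapi-abc123"        # key format looks right
check_key "sk-wrong-provider"   # expected a key starting with nvapi-
```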


🧪 Part 2: Clone the TNG Quickstart

The repo that handles the hard parts

git clone https://github.com/thenewguardai/tng-nemoclaw-quickstart.git
cd tng-nemoclaw-quickstart
chmod +x setup.sh scripts/*.sh

Install CLIs and prerequisites

./setup.sh

This installs Docker (if missing), Node.js, Git, the OpenShell CLI, the NemoClaw CLI, and copies TNG’s security policy templates. It works on all three platforms.

When it completes, you’ll see which platform-specific deploy command to run next.


🧪 Part 3: Deploy Your Sandboxed Agent

The path depends on your platform

WSL2 users: nemoclaw onboard is broken. Our script bypasses it.

NemoClaw v0.0.7 forces --gpu on sandbox creation when it sees your GPU via nvidia-smi. On WSL2 + Docker Desktop, the GPU can’t pass through to the k3s cluster inside the gateway container. Every sandbox is dead on arrival. Our wsl2-deploy.sh drives openshell directly without --gpu.
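The decision the workaround makes boils down to a few lines. This is a sketch of the logic, not the script's actual code — pass `--gpu` only when a GPU is visible and you're not on WSL2 + Docker Desktop:

```shell
#!/usr/bin/env bash
# Sketch of the workaround's decision logic (not the real script's code):
# --gpu is only safe when a GPU is visible AND we're not on WSL2 + Docker
# Desktop, where passthrough into the k3s cluster fails.
gpu_flag() {
  local has_gpu="$1" on_wsl2="$2"
  if [ "$has_gpu" = "yes" ] && [ "$on_wsl2" = "no" ]; then
    echo "--gpu"
  else
    echo ""   # omit the flag; sandbox runs CPU-only, but it runs
  fi
}

echo "native Linux + GPU: '$(gpu_flag yes no)'"   # native Linux + GPU: '--gpu'
echo "WSL2 + GPU:         '$(gpu_flag yes yes)'"  # WSL2 + GPU:         ''
```

In a real script, `has_gpu` might come from `command -v nvidia-smi` and `on_wsl2` from grepping /proc/version for "microsoft" — but those detection details are assumptions, not what wsl2-deploy.sh necessarily does.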

WSL2 (with or without NVIDIA GPU):

./scripts/wsl2-deploy.sh nvapi-YOUR-KEY-HERE

macOS (Docker Desktop):

./scripts/macos-deploy.sh nvapi-YOUR-KEY-HERE

Native Linux (no GPU passthrough issue):

cd ~/.tng-nemoclaw/NemoClaw && nemoclaw onboard

The WSL2 and macOS scripts do five things in order: tear down stale state, start the OpenShell gateway (without --gpu), create an NVIDIA inference provider, set the inference route, and create a sandbox. When it finishes, you’re dropped directly into the sandbox shell.


🧪 Part 4: Configure OpenClaw Inside the Sandbox

This is the step nobody documented — and it’s critical

You’re now inside the sandbox. OpenClaw is installed but not configured. There are two separate gateways in this architecture — the OpenShell gateway (on the host, managing sandboxes) and the OpenClaw gateway (inside the sandbox, managing the AI agent). You need to set up the second one.

Step 1: Run OpenClaw’s onboard wizard

openclaw onboard

When prompted, select:

  • Model/auth provider: Custom Provider
  • API Base URL: https://inference.local/v1
  • Endpoint compatibility: OpenAI-compatible
  • Model ID: nvidia/nemotron-3-super-120b-a12b
  • Endpoint ID: press Enter (accept default)
  • Web search: skip

Why inference.local and not the real NVIDIA URL? The sandbox blocks outbound network — that’s the whole point. All inference routes through OpenShell’s proxy at inference.local, which then reaches NVIDIA’s cloud API on your behalf. If you enter the real NVIDIA URL, verification will fail because the sandbox can’t reach it directly.
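Concretely, "OpenAI-compatible" means the proxy expects the standard chat-completions request shape. The sketch below only builds and prints the body such a request would carry; the endpoint path and the send step are comments because they're assumptions about the proxy, not something verified here:

```shell
#!/usr/bin/env bash
# Build (but don't send) an OpenAI-compatible chat request body for the
# in-sandbox proxy. The model ID matches the one configured in the wizard.
BODY=$(cat <<'EOF'
{
  "model": "nvidia/nemotron-3-super-120b-a12b",
  "messages": [{"role": "user", "content": "Hello! What can you help me with?"}]
}
EOF
)
echo "$BODY"

# Hypothetical send, from INSIDE the sandbox only (inference.local does not
# resolve on the host):
# curl -s https://inference.local/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$BODY"
```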

Step 2: Start the OpenClaw gateway

mkdir -p /sandbox/.openclaw/workspace/memory
echo "# Memory" > /sandbox/.openclaw/workspace/MEMORY.md

openclaw config set gateway.controlUi.dangerouslyAllowHostHeaderOriginFallback true

nohup openclaw gateway run \
  --allow-unconfigured --dev \
  --bind loopback --port 18789 \
  > /tmp/gateway.log 2>&1 &

sleep 5
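The fixed `sleep 5` works, but on a slow start you'll open the TUI against a gateway that isn't up yet. A small poll loop is more robust — this assumes bash (for its `/dev/tcp` feature) and the port 18789 bound above:

```shell
#!/usr/bin/env bash
# Poll a TCP port until it accepts connections, instead of sleeping blindly.
# Max wait is tries * 0.5 seconds.
wait_for_port() {
  local host="$1" port="$2" tries="${3:-20}"
  local i
  for ((i = 0; i < tries; i++)); do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0   # connected; the subshell exit closes the probe socket
    fi
    sleep 0.5
  done
  return 1
}

if wait_for_port 127.0.0.1 18789 20; then
  echo "gateway is up"
else
  echo "gateway did not come up; check /tmp/gateway.log"
fi
```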

Step 3: Launch the chat interface

openclaw tui

You should see the OpenClaw TUI with custom-inference-local/nvidia/nemotron-3-super-120b-a12b in the status bar. Type a message:

Hello! What can you help me with?

If you get a response — congratulations, you have a sandboxed AI agent running inside NemoClaw’s OpenShell.


🧪 Part 5: Understand the Architecture

Two gateways, one proxy, zero direct internet access

HOST (your machine)
  └── Docker Desktop / Engine
       └── OpenShell Gateway (k3s cluster)
            ├── Inference proxy (inference.local)
            │   └── → NVIDIA Cloud API
            └── Sandbox (Landlock + seccomp + netns)
                 ├── OpenClaw Agent
                 ├── OpenClaw Gateway (port 18789)
                 └── /sandbox/ workspace

The agent never talks to the internet directly. Every inference call goes: agent → inference.local (OpenShell proxy inside sandbox network) → OpenShell gateway (host Docker) → NVIDIA cloud API. Network policies control what the proxy allows through.

Credentials are injected at sandbox creation time. If you create a provider after the sandbox already exists, the sandbox won’t have the credentials. You must delete and recreate the sandbox. This is why the deploy script creates the provider BEFORE the sandbox.


🧪 Part 6: Write Custom Security Policies

This is where the real value is

OpenShell policies are YAML files controlling network egress, filesystem access, and inference routing. The TNG quickstart ships five templates:

| Policy | Use Case |
|---|---|
| base/default-lockdown.yaml | Block everything except inference |
| healthcare/hipaa-agent.yaml | Local inference only, PHI isolation |
| financial/soc2-agent.yaml | Audit trails, financial data controls |
| legal/legal-privilege.yaml | Zero external access, privilege protection |
| dev/permissive-dev.yaml | Broad access with logging (testing only) |
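To give a feel for what these files contain, here's a hypothetical sketch of a lockdown-style policy. The field names are illustrative assumptions, not OpenShell's actual schema — open base/default-lockdown.yaml for the real structure:

```yaml
# HYPOTHETICAL sketch, not the real OpenShell schema.
# The intent mirrors base/default-lockdown.yaml: deny everything,
# then allow only the inference route.
network:
  default: deny
  allow:
    - host: inference.local     # the only permitted egress
      port: 443
filesystem:
  default: read-only
  writable:
    - /sandbox/                 # the agent's workspace
inference:
  route: nvidia-cloud           # provider created at deploy time
```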

Apply a policy from the host (not inside the sandbox):

openshell policy set --policy ~/.tng-nemoclaw/policies/base/default-lockdown.yaml

Network policies are hot-reloadable — change them without restarting the sandbox. Filesystem and process policies are locked at sandbox creation.

💰 Opportunity note: Writing production YAML policies for regulated industries — HIPAA, SOC 2, ITAR, PCI-DSS — is a consulting business waiting to happen. The security teams at these companies don’t know this tooling exists yet. The quickstart ships starter templates for four verticals. Pick the one you know and go deeper.


🧪 Part 7: Troubleshooting

Every issue we hit during testing — and the real fixes

“sandbox not found” immediately after creation (WSL2) — nemoclaw onboard forces --gpu on sandbox creation. GPU passthrough doesn’t work on Docker Desktop. Use wsl2-deploy.sh instead.

“Missing gateway auth token” — The sandbox was created before the inference provider. Delete the sandbox, verify the provider exists with openshell provider list, then recreate.

Still using Anthropic/Claude despite NVIDIA config — ANTHROPIC_API_KEY is set in your environment. Unset it inside the sandbox. Run openclaw onboard and select Custom Provider.
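From inside the sandbox, a quick check (plain POSIX shell, nothing NemoClaw-specific) confirms whether a stray variable is the culprit:

```shell
#!/bin/sh
# If ANTHROPIC_API_KEY is present, OpenClaw may silently prefer Anthropic.
if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
  echo "ANTHROPIC_API_KEY is set; unsetting it - now re-run: openclaw onboard"
  unset ANTHROPIC_API_KEY
else
  echo "ANTHROPIC_API_KEY is not set; look elsewhere"
fi
```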

“Corrupted cluster state” — Previous failed run left stale k3s state. Fix: openshell gateway destroy --name nemoclaw && docker volume rm openshell-cluster-nemoclaw

Port 8080 conflict — A gateway named “openshell” is hogging the port. Fix: openshell gateway destroy --name openshell

“fetch failed” during openclaw onboard — You entered the real NVIDIA URL. The sandbox blocks outbound network. Use https://inference.local/v1 instead.

Full troubleshooting guide: docs/TROUBLESHOOTING.md


💰 What to Build With This

The opportunity map for builders who got this far

1. NemoClaw Deployment-as-a-Service — We just proved the setup is painful enough that enterprises will pay someone to do it for them. Setup fee + ongoing policy management. Every regulated industry needs this.

2. OpenShell Policy Template Packs — “HIPAA Agent Compliance Kit,” “SOC 2 Agent Audit Pack.” Package the YAML, the documentation, the audit trail. Nobody’s done this yet. Our starter templates are your foundation.

3. Agent Security Monitoring Dashboard — OpenShell surfaces blocked requests in the TUI. Build a proper SaaS dashboard — “Datadog for AI agents.” The quickstart ships a Grafana + Loki + Promtail stack as a starting point.

4. Vertical Agent Blueprints — Legal discovery agent, clinical trial data agent, financial due diligence agent. The quickstart ships three example agents (research, code review, data analyst) with configs.

5. ClawHub Skill Auditing Service — 20% of ClawHub skills were malware. Audit, curate, and sell vetted skill catalogs for enterprises.


🎯 Your Checklist

  1. Agent running in sandbox — openclaw tui shows custom-inference-local/nvidia/nemotron-3-super-120b-a12b and responds to messages
  2. Understand the two-gateway architecture — OpenShell gateway (host) vs OpenClaw gateway (sandbox). You know why inference.local matters.
  3. Starred both repos — NVIDIA/NemoClaw and NVIDIA/OpenShell. The issues tell you what the market wants.
  4. Read at least one policy template — Open policies/healthcare/hipaa-agent.yaml and understand what each section controls.
  5. Picked one opportunity lane — Which one fits your skills? That’s your next project.

You just deployed a stack that launched yesterday — on a platform the official installer doesn’t support yet.

NemoClaw is alpha software with real rough edges. The WSL2 GPU bug blocks every Windows developer with an NVIDIA card. Nobody else has published a workaround. The TNG quickstart is the first — and right now, the only — working guide for this platform.

That’s the window. The builders who understand NemoClaw + OpenShell now have a head start measured in weeks, not days. The ecosystem is forming around you.

github.com/thenewguardai/tng-nemoclaw-quickstart


Stay building. 🛠️

— Matt