✦ CLC Labs

Governed OpenAI Shell Proxy

Enforce hosted shell access and skills allowlists on every OpenAI Responses API call—then prove prefix stability with hashes, without storing a single prompt.

Request → Policy → Receipt
# Request with shell + skills
curl https://api.cachepilot.clclabs.ai/v1/responses \
  -H "X-CachePilot-Key: cp_live_proj_abc" \
  -H "Authorization: Bearer sk-your-key" \
  -d '{
    "model": "gpt-4.1",
    "stream": true,
    "tools": [
      {"type": "shell"},
      {"type": "code_interpreter"},
      {"type": "file_search"}
    ],
    "input": "Refactor auth module..."
  }'

# Response headers (policy receipt)
< X-CP-Policy-Version: 1
< X-CP-Output-Budget-Applied: 800
< X-CP-Skills-Applied-Hash: a3f8...c1e2
< X-CP-Prefix-Hash: 7b2d...f491

Features

Everything you need to govern your LLM calls

🔏

Prefix Determinism Proof

SHA-256 prefix hashes on every request prove instruction stability across sessions. Detect prompt fragmentation before it costs you.
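Reproducing the proof client-side takes a few lines; a minimal sketch, assuming the hash is SHA-256 over the UTF-8 bytes of the instruction prefix (CachePilot's exact canonicalization may differ):

# Sketch: recompute a prefix hash and compare across sessions.
# Assumes SHA-256 over the UTF-8 instruction prefix; CachePilot's
# exact canonicalization may differ.
import hashlib

def prefix_hash(instructions: str) -> str:
    return hashlib.sha256(instructions.encode("utf-8")).hexdigest()

a = prefix_hash("You are a careful refactoring assistant.")
b = prefix_hash("You are a careful refactoring assistant. ")  # stray space
assert a != b  # any drift in the prefix shows up as a new hash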

🛡️

Shell & Skills Governance

Allow/deny hosted shell per project. Rewrite skills with deterministic algebra: ((requested ∩ allowed) − denied) ∪ required.
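The rewrite is plain set algebra, so it is easy to reason about; a sketch in Python (the policy list names are illustrative, not CachePilot's schema):

# Sketch of the skills rewrite: ((requested ∩ allowed) − denied) ∪ required.
# Policy list names here are illustrative, not CachePilot's schema.
def rewrite_skills(requested, allowed, denied, required):
    return ((set(requested) & set(allowed)) - set(denied)) | set(required)

print(rewrite_skills(
    requested={"git", "pytest", "docker"},
    allowed={"git", "pytest", "docker"},
    denied={"docker"},
    required={"lint"},
))  # -> {'git', 'pytest', 'lint'}, deterministically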

🔐

BYOK Streaming

Your OpenAI keys, your data. Zero-buffer SSE passthrough with full streaming fidelity. No prompts or outputs stored.
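Streaming through the proxy needs nothing special; for illustration, raw SSE with a generic HTTP client (request shape as in the example above):

# Sketch: consume the SSE stream through the proxy with plain HTTP.
import requests

resp = requests.post(
    "https://api.cachepilot.clclabs.ai/v1/responses",
    headers={
        "X-CachePilot-Key": "cp_live_proj_abc",
        "Authorization": "Bearer sk-your-key",
    },
    json={"model": "gpt-4.1", "stream": True, "input": "Refactor auth module..."},
    stream=True,  # read events as they arrive; nothing is buffered
)
for line in resp.iter_lines():
    if line:
        print(line.decode())  # SSE event lines, passed through unmodified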

📋

Policy Receipts

Every response carries X-CP headers: policy version, applied output budget, skills hash, and prefix hash. Auditable by default.
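Because receipts are ordinary response headers, capturing them for audit needs no SDK support (header names as shown in the example above):

# Sketch: collect the policy receipt from any HTTP response for audit logs.
RECEIPT_HEADERS = (
    "X-CP-Policy-Version",
    "X-CP-Output-Budget-Applied",
    "X-CP-Skills-Applied-Hash",
    "X-CP-Prefix-Hash",
)

def receipt(resp) -> dict:
    # resp: any response object with a case-insensitive headers mapping
    return {h: resp.headers.get(h) for h in RECEIPT_HEADERS}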

🔁

Retry Intelligence

Automatic 429/503 retry with exponential backoff. Retry counts logged per request for capacity planning.
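The schedule is the standard doubling-with-jitter pattern; a sketch of the idea (the proxy runs this server-side, and the exact delays and attempt cap here are illustrative):

# Sketch of exponential backoff on 429/503. The proxy handles this
# for you; base delay, cap, and attempt count are illustrative.
import random

def backoff_delays(base=0.5, cap=30.0, attempts=5):
    for n in range(attempts):
        # 0.5s, 1s, 2s, 4s, 8s... capped, jittered to avoid retry storms
        yield min(cap, base * 2**n) * random.uniform(0.5, 1.5)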

📊

Token & Latency Telemetry

Input, output, and cached token counts per request. Latency breakdowns and error rates—content-free.
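Counts come straight off the response body; a sketch of a content-free log row (usage field names follow the OpenAI Responses API and may vary by model):

# Sketch: build a content-free telemetry row from a parsed response body.
# Usage field names follow the OpenAI Responses API.
def telemetry_row(body: dict, latency_ms: float) -> dict:
    usage = body["usage"]
    return {
        "input_tokens": usage["input_tokens"],
        "output_tokens": usage["output_tokens"],
        "cached_tokens": usage.get("input_tokens_details", {}).get("cached_tokens", 0),
        "latency_ms": latency_ms,  # no prompt or output text is retained
    }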

How It Works

Three steps to governed hosted shell

01

Configure

Define per-project policy: shell allow/deny, skills allowlist, output budget mode, and telemetry scope.
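Sketched as a plain dict, a policy might cover the following (every field name here is hypothetical, not CachePilot's actual schema):

# Hypothetical per-project policy; all field names are illustrative only.
policy = {
    "shell": "deny",                       # allow or deny hosted shell
    "skills_allowed": ["git", "pytest"],   # allowlist
    "skills_denied": ["docker"],
    "skills_required": ["lint"],
    "output_budget": {"mode": "hard", "max_output_tokens": 800},
    "telemetry": "content_free",           # tokens and latency only
}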

02

Proxy

Swap your base URL, add an X-CachePilot-Key header. Shell access and skills are rewritten before they reach OpenAI.
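With the official openai Python SDK, that swap is two constructor arguments (the SDK is shown for illustration; any HTTP client works):

# Sketch: point the openai SDK at the proxy instead of api.openai.com.
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-key",  # BYOK: forwarded upstream, never stored
    base_url="https://api.cachepilot.clclabs.ai/v1",
    default_headers={"X-CachePilot-Key": "cp_live_proj_abc"},
)

stream = client.responses.create(
    model="gpt-4.1",
    input="Refactor auth module...",
    tools=[{"type": "shell"}],  # rewritten by policy before reaching OpenAI
    stream=True,
)
for event in stream:
    ...  # events arrive exactly as OpenAI sent them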

03

Verify

Every response carries X-CP receipt headers. Export telemetry logs to verify prefix stability and policy compliance.
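Verification can be a one-pass scan over exported rows; a sketch (log field names are illustrative):

# Sketch: flag sessions whose prefix hash drifts between requests.
# Log field names ("session", "prefix_hash") are illustrative.
from collections import defaultdict

def fragmented_sessions(rows):
    seen = defaultdict(set)
    for row in rows:
        seen[row["session"]].add(row["prefix_hash"])
    # more than one hash per session means the instruction prefix changed
    return {s for s, hashes in seen.items() if len(hashes) > 1}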

Ready to govern your LLM calls?

Set up CachePilot in minutes. BYOK, no prompt storage, deterministic policy on every call.

Open Dashboard →