Regex, semantic analysis, and an LLM judge — scanning every tool call in under 10ms. Open-source SDK, three lines of code.
Free plan · 10K events/mo · No credit card required
from rune import Shield

shield = Shield(api_key="rune_live_...")

# Wrap any tool call — inputs blocked,
# outputs scanned automatically
@shield.protect(agent_id="my-agent")
async def call_tool(name, params):
    return await agent.run(name, **params)

# Or scan manually:
result = shield.scan_input(user_message)
if result.blocked:
    print(f"Threat: {result.threat_type}")

# ✓ Inputs blocked before execution
# ✓ Outputs scanned for data leaks
# ✓ Anomalies flagged in real time

Install: pip install runesec
Pattern matching handles the obvious. Semantic analysis handles the obfuscated. An LLM judge handles the ambiguous. Every call passes through the ones you enable.
Deterministic pattern matching for prompt injection, secret exposure, PII, and command injection. Zero false positives on known threats.
Under 5ms · Every plan
Semantic similarity flags obfuscated prompts and encoded payloads that regex misses. Configurable confidence threshold for tuning.
Under 30ms · Starter and above
A judge model reviews edge cases with full conversation context and creates alerts for human review. Runs async, never blocks your agent.
Non-blocking · Pro and above
Paste text to see the L1 pattern scanner in action. The same regex rules that block threats in production, running in your browser.
Scan results will appear here
Add Rune to your existing agent code. No refactoring, no new abstractions.
Pattern-based rules catch prompt injections, data exfiltration, and command injection before your agent can act on them.
Semantic analysis detects obfuscated prompts and encoded payloads that regex patterns miss.
Know exactly what every agent is doing, and what Rune stopped it from doing. Event timelines, anomaly detection, and alert routing built in.
Define which tools each agent can call, with what arguments, under what conditions. YAML policies checked on every event, automatically.
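To make the idea concrete, here is a minimal sketch of checking a tool call against a policy as it might look after YAML parsing. The schema (tool allowlist, per-argument constraints) is an assumption for illustration, not Rune's actual policy format.

```python
# Hypothetical policy, as a dict parsed from YAML. Schema is illustrative.
policy = {
    "agent": "my-agent",
    "tools": {
        "search_web": {"allow": True},
        "send_email": {"allow": True, "args": {"to": {"ends_with": "@mycorp.com"}}},
        "delete_file": {"allow": False},
    },
}

def check_event(policy: dict, tool: str, args: dict) -> bool:
    """Return True if the tool call is permitted by the policy."""
    rule = policy["tools"].get(tool)
    if rule is None or not rule.get("allow", False):
        return False  # unknown or explicitly denied tools are rejected
    for arg, constraint in rule.get("args", {}).items():
        suffix = constraint.get("ends_with")
        if suffix and not str(args.get(arg, "")).endswith(suffix):
            return False  # argument violates its constraint
    return True

check_event(policy, "send_email", {"to": "eve@attacker.io"})  # denied by the "to" constraint
```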
Unusual call frequency, new tool combinations, sudden risk score spikes. Rune flags deviations from established agent patterns.
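One way such a deviation check can work is a simple z-score against the agent's baseline. The sketch below is illustrative only — Rune's actual detectors and thresholds are not specified here.

```python
from statistics import mean, stdev

def frequency_anomaly(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag the current per-minute call count if it deviates more than
    `threshold` standard deviations from the established baseline.
    Illustrative sketch, not Rune's actual detector."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu  # any deviation from a perfectly flat baseline
    return abs(current - mu) / sigma > threshold

# A steady baseline of ~10 calls/min makes a burst of 100 stand out.
frequency_anomaly([9, 11, 10, 10, 12, 9], 100)  # True
```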
10K events on the free plan. Upgrade for more agents, deeper scanning, or longer retention. No surprise bills. No credit card to start.
Get started with up to 5 agents, free forever
For small teams shipping their first agents to production
For teams running production agents with full scanning
For companies with high-volume agent deployments
Usage-based pricing for unpredictable workloads
No included events — pay only for what you use. Includes 90-day retention.
All paid plans include overage pricing — never get cut off mid-month. Need higher limits or a custom contract? Contact us at hello@runesec.dev
Under 10 minutes. Install the SDK, create a Shield with your API key, wrap your agent. Three lines of code for most frameworks.
Rune works with OpenAI SDK, Anthropic SDK, CrewAI, LangChain, and MCP out of the box. The SDK is framework-agnostic — if your agent makes tool calls, Rune can intercept them.
L1 scanning adds under 5ms per call using regex pattern matching. L2 semantic analysis adds under 30ms (Starter plan and above). L3 LLM-based analysis runs asynchronously so it doesn't block your agent (Pro plan and above).
Yes. The policy editor includes a built-in test panel where you can simulate actions against your YAML policies and see the result before anything goes live.
For inputs: the tool call is blocked before it executes. For outputs: the response is flagged after execution and an alert is created. In both cases, an alert appears in your dashboard with the agent, event, triggering policy, and severity rating. You can route alerts to email, Slack, or webhooks.
No. Rune wraps your existing agent as middleware. Your logic, prompts, and tool definitions stay exactly the same.
L1 uses regex pattern matching for known threats. It's fast, deterministic, and available on every plan. L2 uses vector similarity to catch novel attacks that regex misses, starting on Starter. L3 uses an LLM judge to evaluate ambiguous threats with full context, starting on Pro. Higher tiers auto-enable when you connect with a paid plan API key.
Run Rune in dry-run or monitor mode in your test suite. It scans agent interactions during integration tests and catches issues before they reach production — without blocking your pipeline.
Yes. Run Rune in monitor mode in staging to observe threats without blocking, then switch to enforce mode in production. You can configure different modes per environment.
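The per-environment behavior can be pictured as a small dispatch. The mode names come from the answer above, but this sketch is illustrative and is not the SDK's actual API.

```python
from enum import Enum

class Mode(Enum):
    MONITOR = "monitor"  # staging: alert on threats, never block
    ENFORCE = "enforce"  # production: block threatening inputs

def handle_scan(blocked: bool, mode: Mode) -> str:
    """Decide what happens when a scan flags a threat, per environment."""
    if not blocked:
        return "allowed"
    return "blocked" if mode is Mode.ENFORCE else "alerted"

# e.g. staging runs MONITOR, production runs ENFORCE
handle_scan(True, Mode.MONITOR)  # "alerted"
```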
L1 scanning is deterministic pattern matching with zero false positives on known attack patterns. L2 semantic analysis has a configurable confidence threshold you can tune. L3 LLM-based analysis creates alerts for human review rather than auto-blocking.
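The L2 threshold tradeoff can be shown with a toy cosine-similarity check. The 3-d vectors stand in for real embeddings, and the 0.85 default is a placeholder, not Rune's internals.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def l2_flag(embedding, known_attacks, threshold=0.85):
    """Flag an input whose embedding sits close to any known attack.
    Raising `threshold` cuts false positives at the cost of recall;
    lowering it catches more obfuscated variants."""
    score = max((cosine(embedding, atk) for atk in known_attacks), default=0.0)
    return score >= threshold, score

# Toy 3-d "embeddings" stand in for a real embedding model.
known_attacks = [[1.0, 0.0, 0.0]]
l2_flag([0.9, 0.1, 0.0], known_attacks)  # close to a known attack -> flagged
```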
Event metadata only: agent ID, threat type, severity, action taken, and timestamps. Content is scanned in transit and not persisted. We never train on customer data.
Three layers of runtime scanning. Three lines of code. Open-source SDK, free to start.
Free plan. No credit card. Under 10 minutes to first scan.