All Integrations
LangChain Integration

LangChain Security: Runtime Protection for AI Agents

Middleware-native security for LangChain and LangGraph agents

LangChain is the most popular framework for building AI agents with tool access, RAG pipelines, and multi-step reasoning. But every tool call is an attack surface. Retrieved documents can contain injected instructions. Agent loops can be hijacked. Rune's ShieldMiddleware plugs directly into LangChain's native middleware system to intercept every tool call and LLM interaction — no code changes to your agent logic.

Add Security in Minutes

pip install runesec[langchain]
from rune import Shield
from rune.integrations.langchain import ShieldMiddleware

shield = Shield(api_key="rune_live_xxx")
middleware = ShieldMiddleware(shield, agent_id="my-agent")

# Pass to agent — all tool calls are now scanned
agent = create_react_agent(model, tools, middleware=[middleware])

Full setup guide in the documentation

Why LangChain Agents Need Runtime Security

LangChain agents combine LLM reasoning with external tool access — file systems, databases, APIs, web browsers. A single prompt injection in a retrieved document can cause your agent to exfiltrate data, execute arbitrary commands, or ignore its instructions entirely. RAG pipelines are especially vulnerable because the agent trusts the content it retrieves.

Top Threats to LangChain Agents

criticalRAG Poisoning

Malicious instructions embedded in retrieved documents hijack agent behavior. A poisoned knowledge base entry can override system prompts and redirect agent actions.

criticalTool Call Manipulation

Attackers craft inputs that cause the agent to call tools with dangerous parameters — deleting files, querying unauthorized data, or sending requests to attacker-controlled servers.

highAgent Loop Hijacking

Multi-step ReAct agents can be manipulated mid-loop. An attacker injects instructions at step 2 that alter the agent's plan for steps 3 through N.

What Rune Does for LangChain

Native Middleware Integration

Hooks into LangChain's middleware extension point. No monkey-patching, no wrappers around your agent — just add ShieldMiddleware to the middleware list.

Tool Call Interception

Every tool call is scanned before execution. Rune checks the tool name, parameters, and context against your security policies and threat patterns.

RAG Pipeline Protection

Retrieved documents are scanned for injected instructions before they reach the LLM. Catches poisoned embeddings, manipulated search results, and hidden directives.

Policy Enforcement

YAML policies control which tools each agent can use, what parameters are allowed, and what data can flow in and out. Enforced at the middleware layer.

Common LangChain Use Cases

  • RAG-powered customer support agents with document retrieval
  • ReAct agents with database, API, and file system tool access
  • LangGraph multi-agent workflows with shared state
  • Research agents that browse the web and synthesize findings

Other Integrations

Secure your LangChain agents today

Add runtime security to your LangChain agents in under 5 minutes. Free tier includes 10,000 events per month.

LangChain Security: Runtime Protection for AI Agents — Rune | Rune