LangChain / LangGraph

Add runtime security to your LangChain agent in under 5 minutes. The middleware intercepts every tool call and LLM response, scanning for threats and enforcing policies — with zero changes to your existing agent code.

Installation

Terminalbash
pip install runesec[langchain]

This installs the core SDK and the LangChain integration. Requires Python 3.9+ and langchain-core >= 0.2.0.

Quick Start

Wrap your existing agent with Rune's ShieldMiddleware. Three lines of code.

agent.pypython
from langchain.agents import create_react_agent
from langchain_openai import ChatOpenAI
from rune import Shield
from rune.integrations.langchain import ShieldMiddleware

# Your existing setup
model = ChatOpenAI(model="gpt-4")
tools = [...]  # your tools

# Add Rune (3 lines)
shield = Shield(api_key="rune_live_xxx")
middleware = ShieldMiddleware(
    shield,
    agent_id="research-agent",
    agent_tags=["research", "prod"],
)

# Create agent with security middleware
agent = create_react_agent(model, tools, middleware=[middleware])

# Use as normal — all tool calls are now scanned
result = agent.invoke({"input": "Find revenue data for Q4"})

What Gets Scanned

The middleware hooks into LangChain's native extension points and scans at three stages:

Tool call inputs

Before any tool executes, Rune scans the function name and parameters for prompt injection, command injection, and policy violations.

Tool call outputs

After a tool returns, Rune scans the result for secrets, PII, and indirect injection payloads embedded in retrieved data.

LLM responses

Final model outputs are scanned for data leaks, PII, and secrets before reaching the user.

Common LangChain Threats

LangChain agents are particularly vulnerable to these threats:

  • Indirect prompt injection — Tools that fetch web pages, read files, or query databases can return data containing hidden instructions that redirect the agent. Learn more
  • Tool abuse via injection — A successful injection can cause the agent to call tools with malicious parameters (e.g., sending data to external URLs). Learn more
  • Secret leaking in tool outputs — Tools may return config files, environment variables, or database records containing credentials. Learn more

Adding a Security Policy

Restrict which tools the agent can use with a YAML policy. Create this in the Rune dashboard or apply it via the SDK.

policy.yamlyaml
version: "1.0"
rules:
  - name: restrict-research-agent
    type: tool_access
    allow: ["web_search", "read_file", "calculator"]
    deny: ["send_email", "execute_sql", "file_write"]
    match:
      agent_tags: [research]
    action: block

  - name: block-prompt-injection
    scanner: prompt_injection
    action: block
    severity: critical

When the agent attempts to use a denied tool, Rune blocks the call and generates an alert in your dashboard. The agent receives a safe error response and continues operating.

LangGraph

The same middleware works with LangGraph's graph-based agents:

graph_agent.pypython
from langgraph.prebuilt import create_react_agent
from rune import Shield
from rune.integrations.langchain import ShieldMiddleware

shield = Shield()
middleware = ShieldMiddleware(shield, agent_id="graph-agent")

# Works with LangGraph's create_react_agent too
agent = create_react_agent(model, tools, middleware=[middleware])

Configuration

Configuration optionspython
middleware = ShieldMiddleware(
    shield,
    agent_id="my-agent",             # Required: unique agent identifier
    agent_tags=["prod", "research"], # Optional: tags for policy targeting
    block_on_error=False,            # Optional: fail open if Rune is unreachable
    scan_outputs=True,               # Optional: scan tool outputs (default: True)
)

Complete Runnable Example

Copy, paste, and run to verify your LangChain integration:

langchain_rune_test.pypython
import os
assert os.environ.get("RUNE_API_KEY"), "Set RUNE_API_KEY"
assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY"

from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent
from rune import Shield
from rune.integrations.langchain import ShieldMiddleware

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))  # noqa: S307

shield = Shield()
middleware = ShieldMiddleware(shield, agent_id="langchain-test", agent_tags=["test"])
agent = create_react_agent(ChatOpenAI(model="gpt-4"), [calculator], middleware=[middleware])

result = agent.invoke({"input": "What is 2 + 2?"})
print("Result:", result)
print("Stats:", shield.stats)

Next Steps