Security Guides

Secure Your AI Agents

Framework-specific security guides with real vulnerability examples, secure code patterns, and production checklists. Each guide covers the unique attack surface of your framework and how to protect against it.

Why Framework-Specific Security Matters

Every AI framework has its own security surface area. LangChain's callback system processes data differently than OpenAI's function calling, which works differently than CrewAI's multi-agent delegation. A security approach that works for one framework may miss critical vulnerabilities in another.

These guides go deep into each framework's architecture to identify where attacks enter, how they propagate, and exactly where to place runtime scanning for maximum protection. Each guide includes a vulnerability assessment, secure vs. vulnerable code comparisons, and a prioritized security checklist.

LangChain

Complete security guide for LangChain agents. Prevent prompt injection in RAG pipelines, secure tool calls, and add runtime protection to LangGraph workflows with working code examples.

RAG Document Poisoning · Tool Call Parameter Injection

OpenAI

Definitive security guide for OpenAI API agents with function calling. Prevent parameter injection, secure the Assistants API, protect multi-function chains, and add runtime security with working code.

Function Parameter Injection · Assistants API Code Interpreter Abuse

Anthropic

Definitive security guide for Anthropic Claude agents with tool use. Protect against long-context injection, secure tool_use blocks, monitor multi-turn conversations, and add runtime protection with working code.

Long-Context Hidden Injection · Tool Use Block Exploitation

CrewAI

Security guide for CrewAI multi-agent systems. Prevent inter-agent escalation, secure tool chains, and protect crew workflows from cascading attacks with working code examples.

Inter-Agent Escalation · Cross-Agent Tool Chain Attacks

MCP

Security guide for Model Context Protocol (MCP) servers. Protect against malicious servers, verify tool integrity, enforce policies on MCP tool calls, and add a security proxy with working examples.

Malicious MCP Server Responses · Server Integrity Compromise (Supply Chain)

LlamaIndex

Security guide for LlamaIndex RAG pipelines. Protect against index poisoning, secure query engines, and add runtime scanning to your retrieval-augmented generation stack.

Index Poisoning · Query Injection

AutoGen

Security guide for Microsoft AutoGen multi-agent systems. Protect agent conversations, secure code execution, and prevent inter-agent manipulation.

Conversational Agent Manipulation · Code Execution Injection

DSPy

Security guide for DSPy programs and optimized prompts. Protect against injection in compiled programs, secure retrieval modules, and validate optimized signatures.

Optimized Prompt Exploitation · Retrieval Module Poisoning

Frequently Asked Questions

Is generic AI security enough or do I need framework-specific protection?

Generic security catches common patterns but misses framework-specific attack vectors. For example, LangChain's RAG pipeline creates document poisoning risks that don't exist in direct OpenAI SDK usage. CrewAI's inter-agent communication creates lateral movement risks unique to multi-agent systems. Framework-specific guides address these gaps.

Which framework has the most security vulnerabilities?

Frameworks with more tool access and autonomy have larger attack surfaces. Multi-agent frameworks like CrewAI and MCP-based systems have the most complex security requirements because compromising one agent can cascade to others. Single-agent frameworks like direct OpenAI or Anthropic SDK usage have smaller but still significant attack surfaces around tool calling and output handling.

How do I secure a custom framework not listed here?

Rune's generic Shield class works with any Python-based agent. Wrap your agent's input/output processing with shield.scan_input() and shield.scan_output(). The framework-specific integrations add deeper hooks (tool call interception, middleware chains) but the core scanning works universally.
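The wrapping pattern described above can be sketched as follows. The `Shield` class and the `scan_input()` / `scan_output()` method names come from the text; everything else here (the stand-in scan logic, the agent function, and the return shapes) is an illustrative assumption, not the real Rune SDK:

```python
# Sketch of wrapping a custom agent with a Shield-style scanner.
# Shield, scan_input(), and scan_output() are named in the guide;
# this stand-in implementation exists only so the sketch runs
# without the SDK installed.

class Shield:
    """Stand-in for Rune's generic Shield (illustrative, not the SDK)."""

    def scan_input(self, text: str) -> str:
        # A real shield would detect injected instructions, jailbreaks,
        # and policy violations before the text reaches the model.
        if "ignore previous instructions" in text.lower():
            raise ValueError("blocked: possible prompt injection")
        return text

    def scan_output(self, text: str) -> str:
        # A real shield would check the model's response for data
        # exfiltration, unsafe tool calls, or policy violations.
        return text


def run_agent(prompt: str) -> str:
    """Placeholder for your framework's agent call."""
    return f"echo: {prompt}"


shield = Shield()


def guarded_agent(user_input: str) -> str:
    safe_input = shield.scan_input(user_input)   # scan before the model sees it
    raw_output = run_agent(safe_input)
    return shield.scan_output(raw_output)        # scan before the user sees it
```

Because the scanning happens at the input/output boundary rather than inside the framework, the same wrapper works for any Python-based agent; the framework-specific integrations add the deeper hooks (tool call interception, middleware chains) on top of this pattern.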

Add runtime security in under 5 minutes

Pick your framework, install the SDK, and wrap your client. Every input, output, and tool call is scanned automatically.

Start Free — 10K Events/Month