
OpenAI Security: Protect Function Calling Agents

Transparent security wrapper for the OpenAI Python SDK

OpenAI's function calling turns GPT-4 into a tool-using agent. But every function call the model generates is an opportunity for attack. Rune wraps your OpenAI client transparently — same API, same types, same behavior — while scanning every message, function call, and response for threats. Your existing code works unchanged.

Add Security in Minutes

pip install runesec[openai]

from openai import OpenAI
from rune import Shield
from rune.integrations.openai import shield_client

shield = Shield(api_key="rune_live_xxx")
client = shield_client(OpenAI(), shield=shield, agent_id="my-agent")

# Use exactly like a normal OpenAI client
response = client.chat.completions.create(...)

Full setup guide in the documentation

Why OpenAI Agents Need Runtime Security

GPT-4 function calling agents execute real actions based on LLM-generated parameters. The model decides which function to call and what arguments to pass. A prompt injection can manipulate those decisions — causing the agent to call delete instead of read, send data to the wrong endpoint, or chain multiple function calls into an attack sequence.
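To make the trust boundary concrete, here is a minimal sketch of the vulnerable pattern: a dispatch loop that executes whatever tool call the model emits. The tool names (`read_file`, `delete_file`) and handlers are illustrative, not part of the OpenAI SDK or Rune; only the tool-call shape mirrors the chat completions response format.

```python
import json

# Hypothetical tool registry -- names and handlers are illustrative only.
def read_file(path: str) -> str:
    return f"contents of {path}"

def delete_file(path: str) -> str:
    return f"deleted {path}"

TOOLS = {"read_file": read_file, "delete_file": delete_file}

def dispatch(tool_call: dict) -> str:
    """Execute a model-generated tool call exactly as received.

    This is the trust boundary: both the function name and its arguments
    were chosen by the model, so injected instructions in the prompt can
    steer either one.
    """
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return TOOLS[name](**args)

# Shaped like a tool call from an OpenAI chat completions response.
call = {"function": {"name": "read_file", "arguments": '{"path": "notes.txt"}'}}
print(dispatch(call))  # contents of notes.txt

# Nothing in this loop stops an injected prompt from flipping the function name:
hijacked = {"function": {"name": "delete_file", "arguments": '{"path": "notes.txt"}'}}
print(dispatch(hijacked))  # deleted notes.txt
```

A runtime scanner sits between the model's output and `dispatch`, rejecting calls whose name or arguments violate policy before any handler runs.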

Top Threats to OpenAI Agents

Critical: Function Parameter Injection

Attackers manipulate the model into generating function calls with malicious parameters — SQL injection in query arguments, path traversal in file operations, or shell commands in system tools.
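A toy version of parameter scanning can be sketched with a few regex detectors. These three patterns are illustrative only; a production scanner such as Rune's uses far more sophisticated detection than keyword matching.

```python
import re

# Illustrative-only detectors, one per threat class named above.
SUSPICIOUS = [
    re.compile(r"(?i)\b(union\s+select|drop\s+table)\b"),  # SQL injection
    re.compile(r"\.\./"),                                   # path traversal
    re.compile(r"[;&|]\s*(rm|curl|wget)\b"),                # shell command chaining
]

def scan_arguments(args: dict) -> list[str]:
    """Return a finding for each string argument matching a suspicious pattern."""
    findings = []
    for key, value in args.items():
        if isinstance(value, str):
            for pattern in SUSPICIOUS:
                if pattern.search(value):
                    findings.append(f"{key}: matched {pattern.pattern}")
    return findings

print(scan_arguments({"query": "x UNION SELECT password FROM admins"}))
print(scan_arguments({"path": "../../etc/passwd"}))
print(scan_arguments({"path": "reports/q3.txt"}))  # []
```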

High: Assistants API Abuse

OpenAI Assistants with code interpreter or retrieval can be manipulated to execute arbitrary code, access uploaded files maliciously, or exfiltrate data through generated outputs.

High: Multi-Function Chain Attacks

Attacker triggers a sequence of function calls that are individually benign but together accomplish a malicious goal — read credentials, then send them externally.
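The "read then exfiltrate" pattern can be flagged with a simple sequence rule over the session's call history. The function-name sets below are hypothetical, chosen for illustration; they are not Rune's policy language or detection logic.

```python
# Hypothetical policy: flag a sensitive read followed later by an egress call.
SENSITIVE_READS = {"read_secret", "read_env", "get_credentials"}
EGRESS_CALLS = {"http_post", "send_email", "upload_file"}

def flag_chain(call_history: list[str]) -> bool:
    """Return True if any sensitive read precedes any egress call."""
    seen_sensitive = False
    for name in call_history:
        if name in SENSITIVE_READS:
            seen_sensitive = True
        elif seen_sensitive and name in EGRESS_CALLS:
            return True
    return False

print(flag_chain(["read_env", "http_post"]))   # True
print(flag_chain(["read_file", "http_post"]))  # False
```

The key property is statefulness: each call is benign in isolation, so only a detector that tracks the whole sequence can catch the chain.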

What Rune Does for OpenAI

Transparent Client Wrapping

shield_client() returns a drop-in replacement for your OpenAI client. Same API, same types, same return values. Security happens invisibly on every call.
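The general mechanism behind drop-in wrapping is attribute delegation. The sketch below shows the pattern in miniature with a fake client; it illustrates the concept only, and `shield_client()` remains the supported way to wrap a real OpenAI client.

```python
class InspectingProxy:
    """Minimal sketch of transparent wrapping via __getattr__ delegation."""

    def __init__(self, inner, on_call):
        self._inner = inner
        self._on_call = on_call

    def __getattr__(self, name):
        attr = getattr(self._inner, name)
        if callable(attr):
            def wrapped(*args, **kwargs):
                self._on_call(name, args, kwargs)  # scan/inspect before the real call
                return attr(*args, **kwargs)
            return wrapped
        # Non-callable attributes pass through; a real wrapper would also
        # proxy nested namespaces like client.chat.completions recursively.
        return attr

events = []

class FakeClient:  # stand-in for an OpenAI client
    def create(self, **kwargs):
        return {"ok": True, **kwargs}

client = InspectingProxy(FakeClient(), lambda name, a, kw: events.append(name))
print(client.create(model="gpt-4"))  # {'ok': True, 'model': 'gpt-4'}
print(events)  # ['create']
```

Because the proxy forwards everything it does not intercept, callers see the same methods and return values as the unwrapped client.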

Function Call Scanning

Every function call generated by the model is scanned before your code executes it. Catches malicious parameters, unauthorized functions, and suspicious call patterns.

Input & Output Scanning

Messages sent to the API are scanned for prompt injection. Responses are scanned for data leakage, PII exposure, and credential exposure before they reach your application.
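Output-side scanning can be illustrated with a couple of pattern detectors over response text. The two patterns below (an AWS access key ID shape and an email address) are a deliberately tiny, hypothetical subset of what a real PII/credential scanner checks.

```python
import re

# Illustrative-only detectors for data leakage in model output.
DETECTORS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_output(text: str) -> list[str]:
    """Return the names of every detector that fires on the text."""
    return [name for name, pattern in DETECTORS.items() if pattern.search(text)]

print(scan_output("Your key is AKIAIOSFODNN7EXAMPLE"))  # ['aws_access_key']
print(scan_output("contact alice@example.com"))         # ['email']
print(scan_output("all clear"))                         # []
```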

Async Support

Full support for both synchronous and async OpenAI clients. Works with chat completions, assistants, and any endpoint that involves tool use.

Common OpenAI Use Cases

  • GPT-4 function calling agents with external tool access
  • OpenAI Assistants with code interpreter and file search
  • Customer-facing chatbots that execute backend actions
  • Autonomous agents that chain multiple function calls


Secure your OpenAI agents today

Add runtime security to your OpenAI agents in under 5 minutes. Free tier includes 10,000 events per month.
