AI Agent Security for Healthcare AI Agents
Healthcare AI agents operate in one of the most heavily regulated and highest-stakes environments. They assist with clinical documentation, patient communication, diagnostic support, appointment scheduling, insurance processing, and medical research. Every interaction potentially involves Protected Health Information (PHI) — patient names, medical record numbers, diagnoses, treatment plans, prescription data, and insurance details. HIPAA mandates strict controls over PHI access, transmission, and storage, with violations carrying penalties up to $1.5 million per incident category per year. Beyond regulatory compliance, the consequences of healthcare agent manipulation are uniquely severe: a compromised clinical support agent could provide dangerous medical guidance, alter treatment recommendations, or expose psychiatric and HIV records that carry additional protections. Rune provides the HIPAA-aligned security layer that healthcare organizations need to deploy AI agents without putting patients or compliance at risk.
Key Security Risks
PHI Exposure and Leakage
Healthcare agents accessing EHR systems, lab results, and patient portals handle PHI in every interaction. Prompt injection can cause the agent to include other patients' PHI in responses, generate outputs with insufficient de-identification, or transmit PHI through unsecured channels — each constituting a HIPAA breach.
Clinical Decision Manipulation
AI agents used for clinical decision support can be manipulated into providing inappropriate treatment recommendations, suppressing drug interaction warnings, or generating misleading diagnostic suggestions. Unlike other domains, where bad output causes financial loss, manipulated clinical guidance can directly harm patients.
Unauthorized EHR Access
Healthcare agents with EHR system access can be manipulated into querying records outside the authorized scope — accessing patients not under the requesting provider's care, retrieving record categories beyond what the clinical context requires, or performing bulk data extractions that violate minimum necessary access principles.
Consent and Authorization Violations
Healthcare data sharing requires patient consent and proper authorization. AI agents can be tricked into sharing information with unauthorized family members, transmitting records to incorrect fax numbers or email addresses, or including data in research datasets without proper de-identification — all consent violations under HIPAA.
How Rune Helps
PHI Detection and Enforcement
Rune identifies 18 HIPAA-defined PHI categories in every agent interaction — patient names, MRNs, dates, diagnoses, medications, and more. PHI in unauthorized outputs is blocked or redacted based on your policy. Cross-patient PHI contamination is detected and prevented in real-time.
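To illustrate the redaction behavior described above, here is a minimal sketch of pattern-based PHI redaction. This is an illustrative stand-in for the logic, not Rune's PII scanner: the two regex patterns and the `redact_phi` function are simplified assumptions, and real detection covers all 18 HIPAA identifier categories with far more than regular expressions.

```python
import re

# Simplified patterns for two PHI categories (illustrative only; real
# detection covers all 18 HIPAA identifiers, not just these two).
PHI_PATTERNS = {
    "medical_record_number": re.compile(r"\bMRN[-: ]?\d{6,10}\b"),
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def redact_phi(text: str) -> tuple[str, list[str]]:
    """Replace matched PHI with category tags; return redacted text and hit list."""
    hits = []
    for category, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            hits.append(category)
            text = pattern.sub(f"[{category.upper()}]", text)
    return text, hits
```

Depending on policy, a match like this can either block the response entirely or ship the redacted version — the `action` field in the policy below controls which.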
Minimum Necessary Access Enforcement
Rune enforces HIPAA's minimum necessary standard at the tool call level. EHR queries are scoped to only the data categories required for the current task — a scheduling agent gets demographics and insurance, not clinical records. Scope violations are blocked and logged.
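The scoping rule amounts to an allow-list check per task. This sketch mirrors the `scope_by_task` mapping from the Example Security Policy on this page, but the function and variable names are illustrative, not part of the Rune SDK:

```python
# Task-to-category allow list, mirroring the scope_by_task policy example.
SCOPE_BY_TASK = {
    "scheduling": {"demographics", "insurance", "appointment_history"},
    "clinical": {"demographics", "medical_history", "medications", "lab_results"},
    "billing": {"demographics", "insurance", "procedure_codes"},
}

def check_minimum_necessary(task: str, requested: set[str]) -> tuple[bool, set[str]]:
    """Return (allowed, out_of_scope_categories) for an EHR query.

    An unknown task gets an empty allow list, so everything it requests
    is out of scope — deny by default.
    """
    allowed = SCOPE_BY_TASK.get(task, set())
    out_of_scope = requested - allowed
    return (not out_of_scope, out_of_scope)
```

A scheduling query that asks for lab results would fail this check and be blocked and logged, while the same query under a clinical task scope would pass.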
Clinical Output Validation
Rune validates clinical outputs against safety guardrails — flagging treatment recommendations that deviate from established guidelines, drug interactions that the agent fails to mention, and diagnostic suggestions that lack appropriate uncertainty language. Flagged outputs require clinician review before reaching the patient.
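As a rough picture of the uncertainty-language check, the heuristic below flags diagnostic-sounding text that contains no hedged phrasing. It is a deliberately simplified sketch — the keyword lists and `needs_clinician_review` function are assumptions for illustration, not the `clinical_safety` scanner:

```python
# Illustrative heuristic: flag outputs that assert a diagnosis or
# instruct the patient without any uncertainty language.
UNCERTAINTY_MARKERS = ("may", "might", "could", "possible", "suggest", "consult")
DIAGNOSTIC_MARKERS = ("diagnosis", "you have", "take")

def needs_clinician_review(output: str) -> bool:
    """Return True when text sounds diagnostic but lacks hedged phrasing."""
    lowered = output.lower()
    sounds_diagnostic = any(w in lowered for w in DIAGNOSTIC_MARKERS)
    hedged = any(w in lowered for w in UNCERTAINTY_MARKERS)
    return sounds_diagnostic and not hedged
```

Outputs that trip a check like this are routed to a clinician queue rather than delivered to the patient, matching the `alert` action in the policy example below.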
HIPAA Audit Trail
Every PHI access, agent action, and policy decision is logged with the detail required for HIPAA compliance audits — who accessed what data, when, for what purpose, and which policy rules were applied. Audit logs are tamper-evident and retained for the HIPAA-mandated minimum of six years.
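One common way to make a log tamper-evident is hash chaining: each entry includes a hash of the previous entry, so any retroactive edit breaks the chain. The sketch below shows the idea only — the `AuditLog` class and its fields are hypothetical and do not describe Rune's actual log format:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry commits to the previous one via
    SHA-256, so editing any past entry invalidates every later hash."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the first entry

    def record(self, actor: str, action: str, data_scope: str, purpose: str):
        entry = {
            "actor": actor, "action": action, "data_scope": data_scope,
            "purpose": purpose, "ts": time.time(), "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False means some entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

With this structure, an auditor can answer "who accessed what, when, and why" from the entry fields, and `verify()` confirms no entry was rewritten after the fact.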
Example Security Policy
version: "1.0"
rules:
  - name: block-phi-in-unauthorized-output
    scanner: pii
    action: block
    severity: critical
    scope: output
    config:
      phi_categories:
        - patient_name
        - medical_record_number
        - date_of_birth
        - diagnosis
        - medication
        - lab_results
        - insurance_id
      cross_patient_detection: true
    description: "Block PHI in unauthorized outputs and detect cross-patient contamination"
  - name: enforce-minimum-necessary
    scanner: tool_call
    action: block
    severity: critical
    config:
      tool_name: ehr_query
      scope_by_task:
        scheduling:
          - demographics
          - insurance
          - appointment_history
        clinical:
          - demographics
          - medical_history
          - medications
          - lab_results
        billing:
          - demographics
          - insurance
          - procedure_codes
    description: "Restrict EHR access to minimum necessary data for each task"
  - name: validate-clinical-output
    scanner: clinical_safety
    action: alert
    severity: critical
    scope: output
    config:
      check_drug_interactions: true
      check_guideline_alignment: true
      require_uncertainty_language: true
    description: "Flag clinical outputs that may require physician review"
  - name: verify-patient-authorization
    scanner: authorization
    action: block
    severity: critical
    config:
      require_identity_verification: true
      check_authorized_contacts: true
      block_unverified_recipients: true
    description: "Verify patient identity and authorized recipient before sharing PHI"

Policies are defined in YAML and enforced at the SDK level. Version control them alongside your agent code.
Quick Start
from rune import Shield

shield = Shield(
    api_key="rune_live_xxx",
    agent_id="patient-portal-bot",
    policy_path="hipaa-policy.yaml",
)

def handle_patient_query(message: str, patient_id: str, agent_task: str):
    # Scan patient message for injection
    input_result = shield.scan_input(
        content=message,
        context={
            "patient_id": patient_id,
            "task_scope": agent_task,  # "scheduling", "clinical", "billing"
            "hipaa_context": True,
        },
    )
    if input_result.blocked:
        return "I'm unable to process that request. Please contact the front desk."

    # Agent generates response with EHR access
    response = agent.run(message)

    # Validate EHR queries against minimum necessary standard
    for tool_call in response.tool_calls:
        tool_result = shield.scan_tool_call(
            tool_name=tool_call.name,
            parameters=tool_call.params,
            context={
                "patient_id": patient_id,
                "task_scope": agent_task,
                "requesting_provider": get_provider_id(),
            },
        )
        if tool_result.blocked:
            log_access_violation(patient_id, tool_call, tool_result.reason)
            return "I don't have access to that information for this request."

    # Scan output for PHI leakage and cross-patient contamination
    output_result = shield.scan_output(
        content=response.text,
        context={
            "patient_id": patient_id,
            "task_scope": agent_task,
            "check_cross_patient": True,
        },
    )
    if output_result.blocked:
        log_phi_incident(patient_id, output_result.reason)
        return "I encountered an issue with this request. Please contact your provider."

    return output_result.content

This example shows HIPAA-compliant agent protection. The task_scope parameter (scheduling, clinical, or billing) determines which EHR data categories the agent can access, enforcing the minimum necessary standard. Each tool call is validated to ensure the requesting provider has an active care relationship with the patient. Output scanning detects PHI from other patients that may have contaminated the response through shared context. All access violations and PHI incidents are logged for HIPAA compliance auditing.
Related Solutions
Customer Support
Secure AI-powered customer support agents against prompt injection, PII leakage, and unauthorized actions. Enforce compliance for support bots handling sensitive customer data.
Data Analysis Agents
Protect data analysis agents from SQL injection, unauthorized data access, and exfiltration. Runtime security for AI agents with database access and analytical tool use.
Legal AI Agents
Protect AI agents handling legal documents, case files, and privileged communications. Safeguard attorney-client privilege, prevent document confidentiality breaches, and ensure ethical compliance.
Secure your healthcare AI agents today
Add runtime security in under 5 minutes. Free tier includes 10,000 events per month.