HIPAA · HITECH · SOC 2 · FDA 21 CFR Part 11

AI Agent Security for Healthcare

Healthcare AI agents operate in one of the most heavily regulated and highest-stakes environments. They assist with clinical documentation, patient communication, diagnostic support, appointment scheduling, insurance processing, and medical research. Every interaction potentially involves Protected Health Information (PHI) — patient names, medical record numbers, diagnoses, treatment plans, prescription data, and insurance details. HIPAA mandates strict controls over PHI access, transmission, and storage, with violations carrying penalties up to $1.5 million per incident category per year. Beyond regulatory compliance, the consequences of healthcare agent manipulation are uniquely severe: a compromised clinical support agent could provide dangerous medical guidance, alter treatment recommendations, or expose psychiatric and HIV records that carry additional protections. Rune provides the HIPAA-aligned security layer that healthcare organizations need to deploy AI agents without putting patients or compliance at risk.

Start Free — 10K Events/Month · No credit card required
$1.5M maximum HIPAA penalty per violation category per year
HIPAA violations carry tiered penalties ranging from $100 to $50,000 per individual violation, with annual caps of $1.5 million per violation category. Willful neglect violations that go uncorrected carry the highest penalties.
100% PHI access auditability
Rune's audit logging captures every PHI access with the detail required by HIPAA — who, what, when, why, and which policy rules governed the access decision. This eliminates the audit gaps common with application-level logging.
98.7% accuracy in PHI detection across 18 categories
Rune's PHI detection engine identifies all 18 HIPAA-defined PHI categories with high accuracy, including context-dependent identifiers like dates (which are PHI when associated with a patient but not in general use).

Key Security Risks

Critical: PHI Exposure in Agent Responses

Healthcare agents accessing EHR systems, lab results, and patient portals handle PHI in every interaction. Prompt injection can cause the agent to include other patients' PHI in responses, generate outputs with insufficient de-identification, or transmit PHI through unsecured channels — each constituting a HIPAA breach.

Real-world scenario: A patient portal chatbot was asked to explain a lab result. A prompt injection embedded in a previous patient's clinical note (which the agent accessed for context) caused it to include that patient's HIV test results in the response. The exposure of HIV status — a specially protected data category under state law — triggered mandatory breach notification to the affected patient, the state attorney general, and HHS.
Critical: Clinical Recommendation Manipulation

AI agents used for clinical decision support can be manipulated into providing inappropriate treatment recommendations, drug interaction warnings, or diagnostic suggestions. Unlike other domains where bad output causes financial loss, manipulated clinical guidance can directly harm patients.

Real-world scenario: A clinical documentation agent that helped physicians draft treatment plans was fed a research paper with embedded injection instructions. The agent began subtly recommending a specific medication brand in treatment plans it drafted, regardless of whether it was the most appropriate option for the patient's condition. The manipulation went undetected for weeks because the recommended medication was medically plausible, just not optimal.
Critical: Unauthorized EHR Access

Healthcare agents with EHR system access can be manipulated into querying records outside the authorized scope — accessing patients not under the requesting provider's care, retrieving record categories beyond what the clinical context requires, or performing bulk data extractions that violate minimum necessary access principles.

Real-world scenario: A scheduling agent was manipulated into accessing clinical records beyond its scheduling scope. The injection, embedded in a patient's appointment notes, caused the agent to query the full medical history — including psychiatric records and substance abuse treatment — when it only needed demographic and insurance information. The excessive access was logged and flagged during a routine HIPAA access audit three months later.
High: Consent and Authorization Violations

Healthcare data sharing requires patient consent and proper authorization. AI agents can be tricked into sharing information with unauthorized family members, transmitting records to incorrect fax numbers or email addresses, or including data in research datasets without proper de-identification — all consent violations under HIPAA.

Real-world scenario: A patient communication agent was instructed by a caller claiming to be the patient's spouse to send prescription information to a new phone number. The agent complied without verifying the caller's identity against the patient's authorized contacts list, sending medication details including psychiatric prescriptions to an unauthorized individual — the patient's estranged ex-spouse in a custody dispute.

How Rune Helps

PHI Detection and Enforcement

Rune identifies the 18 HIPAA-defined PHI categories in every agent interaction — patient names, MRNs, dates, diagnoses, medications, and more. PHI in unauthorized outputs is blocked or redacted based on your policy. Cross-patient PHI contamination is detected and prevented in real time.
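To make the redaction behavior concrete, here is a minimal, self-contained sketch of output-side PHI redaction. The patterns and category names are illustrative assumptions — a real detector (including Rune's) covers all 18 HIPAA identifiers and uses context, not just regexes.

```python
import re

# Illustrative patterns for two PHI categories only; real detection is
# context-aware (e.g. dates are PHI only when tied to a patient).
PHI_PATTERNS = {
    "medical_record_number": re.compile(r"\bMRN[-: ]?\d{6,10}\b"),
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def redact_phi(text: str) -> tuple[str, list[str]]:
    """Replace detected PHI with category placeholders; return found categories."""
    found = []
    for category, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            found.append(category)
            text = pattern.sub(f"[{category.upper()}]", text)
    return text, found

redacted, categories = redact_phi(
    "Patient MRN-00123456, DOB 04/12/1987, cleared for discharge."
)
# redacted: "Patient [MEDICAL_RECORD_NUMBER], DOB [DATE_OF_BIRTH], cleared for discharge."
```

Redacting with a category placeholder (rather than deleting) preserves readability of the response while keeping the identifier out of the transcript.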

Minimum Necessary Access Enforcement

Rune enforces HIPAA's minimum necessary standard at the tool call level. EHR queries are scoped to only the data categories required for the current task — a scheduling agent gets demographics and insurance, not clinical records. Scope violations are blocked and logged.
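The scoping logic can be sketched as a simple allow-list keyed by task. This mirrors the `scope_by_task` mapping in the example policy later on this page; the category names are illustrative, not a fixed Rune schema.

```python
# Minimum-necessary scope table: each task sees only the EHR data
# categories it needs. Category names are illustrative.
ALLOWED_CATEGORIES = {
    "scheduling": {"demographics", "insurance", "appointment_history"},
    "clinical": {"demographics", "medical_history", "medications", "lab_results"},
    "billing": {"demographics", "insurance", "procedure_codes"},
}

def check_ehr_query(task_scope: str, requested: set[str]) -> tuple[bool, set[str]]:
    """Return (allowed, out_of_scope_categories) for a proposed EHR query."""
    allowed = ALLOWED_CATEGORIES.get(task_scope, set())
    violations = requested - allowed
    return (not violations, violations)

# A scheduling agent asking for clinical history is blocked:
ok, extra = check_ehr_query("scheduling", {"demographics", "medical_history"})
# ok is False; extra == {"medical_history"}
```

An unknown task scope deliberately maps to an empty allow-list, so anything it requests is a violation — fail closed, not open.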

Clinical Output Validation

Rune validates clinical outputs against safety guardrails — flagging treatment recommendations that deviate from established guidelines, drug interactions that the agent fails to mention, and diagnostic suggestions that lack appropriate uncertainty language. Flagged outputs require clinician review before reaching the patient.
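As a toy illustration of this kind of guardrail, the sketch below flags outputs for clinician review using two crude heuristics. The phrase list and checks are assumptions for demonstration only — they are not Rune's validation model, which this page describes at a higher level.

```python
# Toy heuristics: flag clinical text that lacks hedged language, or that
# mentions medication without any interaction note. Illustrative only.
UNCERTAINTY_PHRASES = ("may", "might", "consult", "discuss with your")

def flag_clinical_output(text: str, mentions_medication: bool) -> list[str]:
    """Return reasons this output should be routed to clinician review."""
    flags = []
    lowered = text.lower()
    if not any(phrase in lowered for phrase in UNCERTAINTY_PHRASES):
        flags.append("missing_uncertainty_language")
    if mentions_medication and "interaction" not in lowered:
        flags.append("no_drug_interaction_note")
    return flags

flags = flag_clinical_output("Take 400mg ibuprofen twice daily.", mentions_medication=True)
# flags == ["missing_uncertainty_language", "no_drug_interaction_note"]
```

The key design point is the action on a flag: the output is held for physician review (the `alert` action in the policy below), not silently rewritten.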

HIPAA Audit Trail

Every PHI access, agent action, and policy decision is logged with the detail required for HIPAA compliance audits — who accessed what data, when, for what purpose, and which policy rules were applied. Audit logs are tamper-evident and retained for the HIPAA-mandated minimum of six years.
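One standard way to make an audit log tamper-evident is a hash chain, where each entry commits to the hash of the previous one, so any retroactive edit breaks verification. The sketch below illustrates the idea with stdlib primitives; it is an assumption for explanation, not Rune's actual storage format.

```python
import hashlib
import json

def append_entry(log: list[dict], entry: dict) -> None:
    """Append an audit entry whose hash covers the entry AND the previous hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    log.append({"entry": entry, "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edited or reordered entry fails verification."""
    prev_hash = "0" * 64
    for record in log:
        payload = json.dumps(record["entry"], sort_keys=True) + prev_hash
        if hashlib.sha256(payload.encode()).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

log: list[dict] = []
append_entry(log, {"who": "provider-17", "what": "lab_results", "why": "clinical"})
append_entry(log, {"who": "portal-agent", "what": "demographics", "why": "scheduling"})
assert verify_chain(log)

log[0]["entry"]["what"] = "psychiatric_records"  # retroactive edit...
assert not verify_chain(log)                     # ...is detected
```

Chaining means an auditor only needs the final hash from a trusted location to verify the integrity of the entire six-year retention window.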

Example Security Policy

version: "1.0"
rules:
  - name: block-phi-in-unauthorized-output
    scanner: pii
    action: block
    severity: critical
    scope: output
    config:
      phi_categories:
        - patient_name
        - medical_record_number
        - date_of_birth
        - diagnosis
        - medication
        - lab_results
        - insurance_id
      cross_patient_detection: true
      description: "Block PHI in unauthorized outputs and detect cross-patient contamination"

  - name: enforce-minimum-necessary
    scanner: tool_call
    action: block
    severity: critical
    config:
      tool_name: ehr_query
      scope_by_task:
        scheduling:
          - demographics
          - insurance
          - appointment_history
        clinical:
          - demographics
          - medical_history
          - medications
          - lab_results
        billing:
          - demographics
          - insurance
          - procedure_codes
      description: "Restrict EHR access to minimum necessary data for each task"

  - name: validate-clinical-output
    scanner: clinical_safety
    action: alert
    severity: critical
    scope: output
    config:
      check_drug_interactions: true
      check_guideline_alignment: true
      require_uncertainty_language: true
      description: "Flag clinical outputs that may require physician review"

  - name: verify-patient-authorization
    scanner: authorization
    action: block
    severity: critical
    config:
      require_identity_verification: true
      check_authorized_contacts: true
      block_unverified_recipients: true
      description: "Verify patient identity and authorized recipient before sharing PHI"

Policies are defined in YAML and enforced at the SDK level. Version control them alongside your agent code.
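The verify-patient-authorization rule above can be sketched as a two-factor gate: the caller's identity must be verified, and the recipient must already be on the patient's authorized-contacts list. The data model below is an illustrative assumption, not Rune's API.

```python
# Illustrative authorized-contacts store; in practice this would come
# from the patient's consent records, not an in-memory dict.
AUTHORIZED_CONTACTS = {
    "patient-001": {"+1-555-0100", "spouse@example.com"},
}

def may_share_phi(patient_id: str, recipient: str, identity_verified: bool) -> bool:
    """Block sharing unless identity is verified AND the recipient is on file."""
    if not identity_verified:
        return False
    return recipient in AUTHORIZED_CONTACTS.get(patient_id, set())

assert may_share_phi("patient-001", "+1-555-0100", identity_verified=True)
# A "new phone number" supplied by the caller is never on file, so the
# ex-spouse scenario above is blocked regardless of how convincing the caller is:
assert not may_share_phi("patient-001", "+1-555-0199", identity_verified=True)
assert not may_share_phi("patient-001", "+1-555-0100", identity_verified=False)
```

Requiring the recipient to pre-exist in consent records (rather than accepting one supplied in-conversation) is what defeats the social-engineering path described in the consent-violation scenario.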

Quick Start

pip install runesec
from rune import Shield

shield = Shield(
    api_key="rune_live_xxx",
    agent_id="patient-portal-bot",
    policy_path="hipaa-policy.yaml"
)

def handle_patient_query(message: str, patient_id: str, agent_task: str):
    # Scan patient message for injection
    input_result = shield.scan_input(
        content=message,
        context={
            "patient_id": patient_id,
            "task_scope": agent_task,  # "scheduling", "clinical", "billing"
            "hipaa_context": True,
        }
    )
    if input_result.blocked:
        return "I'm unable to process that request. Please contact the front desk."

    # Agent generates response with EHR access
    response = agent.run(message)

    # Validate EHR queries against minimum necessary standard
    for tool_call in response.tool_calls:
        tool_result = shield.scan_tool_call(
            tool_name=tool_call.name,
            parameters=tool_call.params,
            context={
                "patient_id": patient_id,
                "task_scope": agent_task,
                "requesting_provider": get_provider_id(),
            }
        )
        if tool_result.blocked:
            log_access_violation(patient_id, tool_call, tool_result.reason)
            return "I don't have access to that information for this request."

    # Scan output for PHI leakage and cross-patient contamination
    output_result = shield.scan_output(
        content=response.text,
        context={
            "patient_id": patient_id,
            "task_scope": agent_task,
            "check_cross_patient": True,
        }
    )
    if output_result.blocked:
        log_phi_incident(patient_id, output_result.reason)
        return "I encountered an issue with this request. Please contact your provider."

    return output_result.content

This example shows HIPAA-compliant agent protection. The task_scope parameter (scheduling, clinical, or billing) determines which EHR data categories the agent can access, enforcing the minimum necessary standard. Each tool call is validated to ensure the requesting provider has an active care relationship with the patient. Output scanning detects PHI from other patients that may have contaminated the response through shared context. All access violations and PHI incidents are logged for HIPAA compliance auditing.


Secure your healthcare AI agents today

Add runtime security in under 5 minutes. Free tier includes 10,000 events per month.
