Clinical Decision Support Agent

Build an AI agent that performs real-time drug safety checks during prescription workflows. Autonomous contraindication detection, interaction warnings, and dosage verification with complete FDA traceability.

The Problem

Clinical Workflows Need Real-Time Safety Verification

Manual lookup is too slow: Clinicians can't search FDA labels manually for every prescription. Real-time decisions need instant verification.

Generic drug databases lack context: Static databases show interactions but don't reason about patient-specific scenarios or provide FDA evidence.

Liability concerns: Incorrect safety checks lead to medication errors, adverse events, and malpractice exposure. Decision support must be verifiable.

No audit trail: When adverse events occur, it's impossible to prove what information the system provided at decision time.

The Solution

AI Agent with Lemma Reasoning API

Deploy an autonomous AI agent that queries Lemma's Reasoning API in real-time during prescription workflows. The agent checks contraindications, drug interactions, and warnings with structured FDA evidence — all in under 2 seconds.

Instant safety checks with multi-hop reasoning over FDA data

Structured results with severity levels and relationship types

Complete FDA evidence for every warning with fingerprints

Automatic audit trail for compliance and liability protection

How It Works

1. Doctor Prescribes Medication

Doctor enters prescription: "Warfarin 5mg daily" for patient with atrial fibrillation. System triggers AI agent to perform safety checks.

2. Agent Checks Patient Context

Agent examines patient record: Age 72, currently on aspirin and ibuprofen, history of GI bleeding. Identifies potential risks.

3. Agent Queries Lemma Reasoning API

Sends structured queries: "Does warfarin interact with aspirin?" and "Is warfarin contraindicated with history of GI bleeding?" Gets instant structured answers.

4. Agent Alerts with Evidence

Returns prioritized warnings to doctor with severity levels, complete FDA evidence, and fingerprints stored in patient record for audit trail.
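Steps 2 and 3 reduce to composing one question per current medication and per relevant history item. A minimal sketch of that query composition (the helper name is illustrative, not part of the Lemma API):

```python
from typing import Dict, List

def build_safety_queries(new_drug: str, current_meds: List[str],
                         conditions: List[str]) -> List[Dict[str, str]]:
    """Compose the request bodies the agent sends to POST /v1/graph:
    one interaction check per current medication, one contraindication
    check per condition in the patient's history."""
    questions = [f"Does {new_drug} interact with {med}?" for med in current_meds]
    questions += [f"Is {new_drug} contraindicated with {cond}?" for cond in conditions]
    return [{"question": q} for q in questions]

# For the patient above: two interaction checks plus one contraindication check
queries = build_safety_queries("warfarin", ["aspirin", "ibuprofen"], ["gi bleeding"])
```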

Complete Integration Flow

1. Agent Receives Patient Context

Patient Record:

• New prescription: Warfarin 5mg daily

• Current medications: Aspirin 81mg, Ibuprofen 200mg PRN

• Medical history: GI bleeding (2022)

• Age: 72 years old

2. Agent Queries Lemma Reasoning API

POST /v1/graph
{
  "question": "Does warfarin interact with aspirin?"
}

3. Lemma Returns Structured Relationship

200 OK - Drug Interaction Found
{
  "results": [
    {
      "drug": {
        "name": "warfarin",
        "brand_names": ["Coumadin"]
      },
      "relationship": {
        "type": "interaction",
        "condition": "increased bleeding risk with NSAIDs and antiplatelet agents",
        "severity": "serious"
      },
      "evidence": [
        {
          "text": "NSAIDs and antiplatelet agents increase the risk of bleeding...",
          "fingerprint": "11001100110011001100110011001100...",
          "section": "Drug Interactions"
        }
      ]
    }
  ]
}
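A response in this shape can be flattened into a single alert record before it reaches the clinician. A sketch, assuming the response layout shown above (some deployments wrap the payload in a "data" key, which this handles defensively):

```python
from typing import Dict, Optional

def to_alert(response: Dict) -> Optional[Dict]:
    """Flatten a /v1/graph response into one alert record; None if no findings."""
    data = response.get("data", response)  # unwrap optional "data" envelope
    results = data.get("results", [])
    if not results:
        return None
    top = results[0]
    return {
        "drug": top["drug"]["name"],
        "severity": top["relationship"]["severity"],
        "condition": top["relationship"]["condition"],
        "fingerprints": [e["fingerprint"] for e in top["evidence"]],
    }
```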

4. Agent Generates Alert

⚠️ SERIOUS DRUG INTERACTION DETECTED

Warfarin interacts with patient's current medications (aspirin, ibuprofen). Increased bleeding risk, especially with history of GI bleeding.

FDA Evidence:

"NSAIDs and antiplatelet agents increase the risk of bleeding..."

[fp:11001100110011001100110011001100...]

Implementation Code

Python - Clinical Agent

import requests
from datetime import datetime
from typing import List, Dict

LEMMA_API_KEY = "your_api_key"

class ClinicalSafetyAgent:
    """
    Autonomous agent for real-time drug safety checks.
    Queries Lemma Reasoning API to verify prescriptions.
    """
    
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://api.lemma.la/v1"
    
    def check_prescription(self, new_drug: str, current_meds: List[str], 
                          patient_history: Dict) -> Dict:
        """
        Perform comprehensive safety check for new prescription.
        Returns: alerts with severity levels and FDA evidence.
        """
        alerts = []
        
        # Check 1: Drug interactions with current medications
        for med in current_meds:
            interaction = self._check_interaction(new_drug, med)
            if interaction:
                alerts.append(interaction)
        
        # Check 2: Contraindications based on patient history
        for condition in patient_history.get('conditions', []):
            contraindication = self._check_contraindication(new_drug, condition)
            if contraindication:
                alerts.append(contraindication)
        
        # Prioritize by severity
        alerts.sort(key=lambda x: self._severity_score(x['severity']), reverse=True)
        
        return {
            "drug": new_drug,
            "safe": len(alerts) == 0,
            "alerts": alerts,
            "audit_trail": self._generate_audit_trail(alerts)
        }
    
    def _check_interaction(self, drug1: str, drug2: str) -> Dict:
        """Check if two drugs interact using Reasoning API."""
        response = requests.post(
            f"{self.base_url}/graph",
            headers={
                "x-api-key": self.api_key,
                "Content-Type": "application/json"
            },
            json={
                "question": f"Does {drug1} interact with {drug2}?"
            },
            timeout=10  # Don't let a hung request stall the clinical workflow
        )
        
        response_data = response.json()
        data = response_data.get('data', response_data)
        
        if data.get('results'):
            result = data['results'][0]
            return {
                "type": "interaction",
                "severity": result['relationship']['severity'],
                "condition": result['relationship']['condition'],
                "evidence": result['evidence'],
                "recommendation": self._generate_recommendation(result)
            }
        return None
    
    def _check_contraindication(self, drug: str, condition: str) -> Dict:
        """Check if drug is contraindicated for condition."""
        response = requests.post(
            f"{self.base_url}/graph",
            headers={
                "x-api-key": self.api_key,
                "Content-Type": "application/json"
            },
            json={
                "question": f"Is {drug} contraindicated with {condition}?"
            },
            timeout=10  # Don't let a hung request stall the clinical workflow
        )
        
        response_data = response.json()
        data = response_data.get('data', response_data)
        
        if data.get('results'):
            result = data['results'][0]
            return {
                "type": "contraindication",
                "severity": result['relationship']['severity'],
                "condition": result['relationship']['condition'],
                "evidence": result['evidence'],
                "recommendation": self._generate_recommendation(result)
            }
        return None
    
    def _severity_score(self, severity: str) -> int:
        """Convert severity to numeric score for prioritization."""
        scores = {
            "life_threatening": 4,
            "serious": 3,
            "moderate": 2,
            "mild": 1
        }
        return scores.get(severity, 0)
    
    def _generate_recommendation(self, result: Dict) -> str:
        """Generate clinical recommendation based on finding."""
        severity = result['relationship']['severity']
        if severity in ['life_threatening', 'serious']:
            return "DO NOT PRESCRIBE - Consider alternative medication"
        elif severity == 'moderate':
            return "CAUTION - Monitor patient closely if prescribed"
        else:
            return "AWARE - Inform patient of potential interaction"
    
    def _generate_audit_trail(self, alerts: List[Dict]) -> List[Dict]:
        """Generate audit trail with fingerprints for compliance."""
        return [
            {
                "alert_type": alert['type'],
                "severity": alert['severity'],
                "fingerprints": [e['fingerprint'] for e in alert['evidence']],
                "timestamp": datetime.now().isoformat()
            }
            for alert in alerts
        ]

# Example usage
agent = ClinicalSafetyAgent(LEMMA_API_KEY)

result = agent.check_prescription(
    new_drug="warfarin",
    current_meds=["aspirin", "ibuprofen"],
    patient_history={
        "conditions": ["gi bleeding", "atrial fibrillation"],
        "age": 72
    }
)

if not result['safe']:
    print(f"⚠️ ALERTS DETECTED: {len(result['alerts'])}")
    for alert in result['alerts']:
        print(f"  • {alert['severity'].upper()}: {alert['condition']}")
        print(f"    Recommendation: {alert['recommendation']}")
else:
    print("✓ No safety concerns detected")

Why This Works

Real-Time Performance

Reasoning API returns results in under 2 seconds. Agent can check multiple interactions and contraindications without slowing clinical workflow.
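Because each check is an independent API call, the agent can issue them concurrently so total latency stays close to that of the slowest single query rather than their sum. A sketch using a thread pool (the `check` callable stands in for any of the per-drug check methods; it is an assumption of this sketch, not part of the Lemma API):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, List, Tuple

def run_checks(check: Callable[[str, str], Dict],
               pairs: List[Tuple[str, str]]) -> List[Dict]:
    """Fan independent safety checks out across threads, preserving input order."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda p: check(*p), pairs))
```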

Structured Reasoning

Gets structured drug-condition relationships with severity levels. Agent can prioritize alerts and generate actionable recommendations automatically.

Complete FDA Evidence

Every alert backed by exact FDA label text with fingerprints. Clinicians see the evidence behind warnings, not just generic alerts.

Automatic Audit Trail

Fingerprints stored in patient record provide cryptographic proof of what information was available at decision time. Complete liability protection.
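Once fingerprints are stored in the patient record, later review can confirm whether a given piece of FDA evidence was surfaced at decision time. A minimal sketch, assuming audit entries shaped like those produced by `_generate_audit_trail` above (the sample trail data is illustrative):

```python
from typing import Dict, List

def evidence_was_shown(audit_trail: List[Dict], fingerprint: str) -> bool:
    """Return True if any audit entry recorded this evidence fingerprint."""
    return any(fingerprint in entry["fingerprints"] for entry in audit_trail)

# Illustrative stored trail for the warfarin alert above
trail = [{"alert_type": "interaction",
          "severity": "serious",
          "fingerprints": ["11001100110011001100110011001100"],
          "timestamp": "2024-01-15T10:30:00"}]
```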

Ready to Build?

Deploy clinical decision support agents that verify prescriptions in real-time with complete FDA traceability.