
Building a GLP-1 Prior Authorization Agent

  • April 2, 2026
  • 0 replies
  • 12 views
Lu.Hunnicutt
Pathfinder Community Team


High volumes, time-sensitive approvals, payer variability, and mountains of unstructured clinical data make it nearly impossible to script your way through prior authorizations. This hands-on session tackled that problem head-on by walking participants through building a real, working GLP-1 prior authorization agent inside the Automation Anywhere platform.

 

Setting the Stage: What Is an Agent, Really?

Before anyone touched the control room, the session opened with a grounding conversation on what "agent" actually means: a system that coordinates multiple automations, tools, and people to achieve a business outcome, with enterprise controls and guardrails built in. That last part matters a lot in healthcare, where "fully autonomous" is not the goal. Autonomy is a dial, not a switch, and this agent is designed with a human-in-the-loop step that you'll see play out during the live build.

 

Why GLP-1 Prior Auth? Why Now?

The use case was chosen deliberately. GLP-1 drugs (Wegovy, Ozempic, Mounjaro, and others) are seeing explosive prescription growth, and payers are continuously adjusting their guidelines to keep pace. Each prior auth case requires staff to confirm eligibility, pull patient charts, review clinical notes, assess BMI and treatment history, match findings against payer-specific questionnaires, and submit through portals like CoverMyMeds. That's 30 to 60 minutes per case, for a process that can run in the thousands per day.
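To put those figures in perspective, a quick back-of-the-envelope calculation shows why this process can't be staffed manually at scale. The 2,000-case volume below is an assumed midpoint for "thousands per day"; the per-case times come straight from the session:

```python
# Rough workload estimate using the session's figures (illustrative only).
CASES_PER_DAY = 2000               # assumed midpoint of "thousands per day"
MIN_MINUTES, MAX_MINUTES = 30, 60  # manual handling time per case

def daily_staff_hours(cases: int, minutes_per_case: int) -> float:
    """Total staff-hours needed to clear one day's queue by hand."""
    return cases * minutes_per_case / 60

low = daily_staff_hours(CASES_PER_DAY, MIN_MINUTES)
high = daily_staff_hours(CASES_PER_DAY, MAX_MINUTES)
print(f"{low:.0f}-{high:.0f} staff-hours per day")  # prints "1000-2000 staff-hours per day"
```

That's the equivalent of 125 to 250 full-time staff doing nothing but prior auths, every day.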

About 80% of that workflow is consistent across auth types, which makes it automatable. The hard part is the middle: unstructured clinical notes that require reading, reasoning, and synthesis. That's exactly where a goal-based agent, powered by Automation Anywhere's Process Reasoning Engine, changes what's possible. As Stelle Smith pointed out during the session, swap GLP-1 for radiology prior auth, surgical procedures, or any analogous process, and the architecture holds. Only the connectors change.

 

The Build: Step by Step

Participants built directly inside a shared control room environment. Here's the breakdown of what was constructed:

1. Creating the AI Skill (Guideline Review)

The first build step was creating an AI skill called Guideline Review. This skill uses a model connection grounded in Automation Anywhere's Enterprise Knowledge system, which uses RAG (Retrieval-Augmented Generation) to pull payer-specific guidelines and questionnaires at runtime. Participants wrote both a system prompt (defining the role and frame of reference for the model) and a user prompt (defining the task), then mapped input variables for medication, payer, and payer-specific questions. The COSTAR prompting framework was referenced for anyone looking to go deeper on grounding and structuring prompts effectively.
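Conceptually, the skill boils down to a system prompt, a templated user prompt, and three mapped input variables. This sketch is not Automation Anywhere's API; the prompt wording and variable names are illustrative, showing the shape of what participants configured:

```python
# Hypothetical sketch of the Guideline Review skill's prompt structure.
# Prompt text and variable names are illustrative, not the platform's schema.
from string import Template

SYSTEM_PROMPT = (
    "You are a prior-authorization specialist. Answer payer questionnaire "
    "items using only the retrieved guideline passages. If a guideline "
    "does not address a question, say so rather than guessing."
)

USER_PROMPT = Template(
    "Medication: $medication\n"
    "Payer: $payer\n"
    "Questionnaire:\n$payer_questions\n"
    "Answer each question, citing the guideline section you relied on."
)

def build_prompt(medication: str, payer: str, payer_questions: str) -> dict:
    """Map the skill's three input variables into a system/user message pair."""
    return {
        "system": SYSTEM_PROMPT,
        "user": USER_PROMPT.substitute(
            medication=medication, payer=payer, payer_questions=payer_questions
        ),
    }
```

The system prompt sets the frame of reference, the user prompt carries the task, and the RAG layer supplies the guideline passages at runtime.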

2. Building the API Task (Guideline Review Tool)

With the AI skill saved, the next step was wiring it into an API task that the goal-based agent can call as a modular tool. Participants dragged the Execute AI Skill action onto the canvas, browsed to the Guideline Review prompt, used the Quick Map feature to auto-associate input variables, and saved the output to a GuidelineReviewOutput variable. The critical step here was adding a description to the output variable. That description is what tells the goal-based agent how and when to use this tool during execution.
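The pattern here is common to most agent frameworks: a tool is a callable plus the natural-language metadata a planner reads when deciding whether to invoke it. A minimal sketch, with hypothetical field names (only GuidelineReviewOutput comes from the session):

```python
# Illustrative tool registration; field names are assumptions, not the
# platform's actual schema. The key idea is that output_description is
# what the agent's planner reads when choosing tools.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentTool:
    name: str
    run: Callable[..., str]
    output_variable: str
    output_description: str  # consumed by the agent during planning

def guideline_review(medication: str, payer: str, questions: str) -> str:
    # In the real build this executes the Guideline Review AI skill.
    return f"Guideline findings for {medication} under {payer}"

guideline_review_tool = AgentTool(
    name="Guideline Review",
    run=guideline_review,
    output_variable="GuidelineReviewOutput",
    output_description=(
        "Payer-specific guideline answers for a GLP-1 medication; use when "
        "questionnaire items must be matched against payer policy."
    ),
)
```

Skip the description and the agent has no basis for deciding when the tool applies, which is why the session called it the critical step.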

3. Configuring the Goal-Based Agent

The GLP-1 Prior Auth Agent was pre-built with eight tools already configured. Participants added the newly created Guideline Review API task as a ninth tool. The agent's prompt defines its role, goal, action plan, and end-state conditions (successful completion, failure, escalation). Automation Anywhere's Process Reasoning Engine can generate this action plan automatically from a short description of the business problem, using its trained knowledge of how processes behave and the tool descriptions you've provided.
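As a mental model, the agent definition can be pictured as a structure like the one below. The overall shape (role, goal, nine tools, three end states) follows the session; the eight pre-built tool names and the end-state wording are assumptions for illustration:

```python
# Hypothetical shape of the goal-based agent definition. The eight
# pre-built tool names are assumed; only "Guideline Review" and the
# role/goal/end-state structure come from the session.
agent_config = {
    "name": "GLP-1 Prior Auth Agent",
    "role": "Prior-authorization coordinator",
    "goal": "Assemble and validate a complete GLP-1 prior auth package",
    "tools": [
        "Eligibility Check", "Chart Retrieval", "Clinical Note Summary",
        "BMI Assessment", "Treatment History", "Questionnaire Mapper",
        "Portal Submission", "PDF Formatter",  # the eight pre-built tools (names assumed)
        "Guideline Review",                    # the ninth, added in step 2
    ],
    "end_states": {
        "success": "All questionnaire items answered and human review passed",
        "failure": "Required patient data unavailable after retries",
        "escalation": "Ambiguous clinical evidence routed to a pharmacist",
    },
}
assert len(agent_config["tools"]) == 9
```

The Process Reasoning Engine's auto-generated action plan effectively fills in the ordering and branching between these tools so you don't have to hand-script it.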

4. Running the End-to-End Process

The full process flow (GLP-1PA Process) was triggered manually to simulate a medication order coming in from a system like Epic, Cerner, or Meditech. The agent kicked off, called multiple tools in parallel where inputs and outputs were independent, reasoned through patient chart data and payer guidelines, and surfaced a human review step with pre-populated answers for a staff member to validate before submission. The output was a formatted PDF summarizing the prior auth responses by diagnosis code, question, and answer.

 

Key Features Highlighted

Parallel Tool Execution: The agent calls multiple tools simultaneously when their inputs are not interdependent, significantly reducing total processing time.
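The scheduling idea is the same one behind concurrent I/O in general-purpose code: when neither tool consumes the other's output, both calls can be in flight at once. A minimal sketch using asyncio as a stand-in for the agent's scheduler (tool names and timings are invented):

```python
# Minimal sketch of parallel tool calls when inputs are independent.
# asyncio.gather stands in for the agent's scheduler; the tools are stubs.
import asyncio

async def check_eligibility(patient_id: str) -> str:
    await asyncio.sleep(0.1)  # stands in for a payer API call
    return f"eligible:{patient_id}"

async def pull_chart(patient_id: str) -> str:
    await asyncio.sleep(0.1)  # stands in for an EHR lookup
    return f"chart:{patient_id}"

async def run_independent_tools(patient_id: str) -> list[str]:
    # Neither tool consumes the other's output, so both run concurrently.
    return await asyncio.gather(
        check_eligibility(patient_id), pull_chart(patient_id)
    )

results = asyncio.run(run_independent_tools("P-1001"))
print(results)  # both finish in ~0.1s total rather than ~0.2s sequentially
```

A tool whose input depends on another tool's output (say, guideline review needing the pulled chart) would instead be sequenced after its dependency completes.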

Human-in-the-Loop: Before submission, the agent presents its answers for human review and editing. This is configurable and can be surfaced in the Control Room, embedded in Salesforce, or delivered directly in Microsoft Teams via connector.

AI Governance and Auditability: Every tool call, variable passed, and decision made by the agent is logged in Automation Anywhere's AI governance layer. For healthcare teams that live in audit cycles, this is not a nice-to-have. It's the feature that makes deployment possible.

Enterprise Knowledge Base: Payer guidelines are uploaded as documents and tagged with metadata. The RAG system associates the right documents to the right questions at runtime, and the knowledge base can be updated automatically as guidelines change. This is available on-premises.
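The metadata-tagging step is what keeps retrieval payer-specific. A simplified sketch of the idea, with invented document contents; a real RAG layer would also rank passages by embedding similarity rather than exact tag matches alone:

```python
# Illustrative metadata-filtered retrieval: guideline documents tagged with
# payer and medication so the right policy is matched at runtime.
# Document contents are invented for illustration.
documents = [
    {"payer": "Acme Health", "medication": "Wegovy",  "text": "BMI >= 30 required..."},
    {"payer": "Acme Health", "medication": "Ozempic", "text": "T2DM diagnosis required..."},
    {"payer": "Beta Plan",   "medication": "Wegovy",  "text": "Six months lifestyle therapy..."},
]

def retrieve_guidelines(payer: str, medication: str) -> list[str]:
    """Return guideline text for documents whose metadata tags match."""
    return [
        d["text"] for d in documents
        if d["payer"] == payer and d["medication"] == medication
    ]

print(retrieve_guidelines("Acme Health", "Wegovy"))  # prints "['BMI >= 30 required...']"
```

Because filtering happens on tags rather than hardcoded logic, refreshing a payer's guideline means replacing a document, not editing the automation.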

Model Flexibility: The session addressed a common question about which LLM performs best. The practical answer: the right model is the one that solves your use case at the lowest cost. The platform supports multiple model connections, including GPT-4.1 for reasoning-heavy tasks, with Claude model support in early trial at time of recording.

 

On-Prem Availability

For teams running on-premises, nearly everything demonstrated is available: AI skills, API tasks, Process Composer, and the Enterprise Knowledge Base. The one nuance is the auto-generation of the agent action plan, which currently uses cloud-based model infrastructure. If you write your own action plan manually, you are fully on-prem compatible.