
February 2026 Dev Meetup Recap: Developing Agentic Processes

May 8, 2026
Developer Meetup Recap
Shreya.Kumar
Pathfinder Community Team

 

Catch the recording at https://youtu.be/yDHamniWXHo?si=LsEM9yFt_1iMYvWY

Previously, on Dev Meetups…

 

If you attended the ‘From Idea to Agent Series’ (FIAS) during Pathfinder Summit and thought four parts were too few, this dev meetup will resonate with you. It was the hands-on extension of that series: we built a live agent from the Agentic Process Automation (APA) idea we had identified in earlier installments.

 

Developing Agentic Processes — Recap from the FIAS [00:10:21]

Before getting into the build, the session opened with a question everyone should ask before they log in to the control room to build an agentic process: does this process actually need APA?

Traditional automations follow a path you set, whereas Agentic Process Automation adapts in real time: the agent reads changing inputs, decides which tools to invoke, and sequences them toward a goal without you hardcoding every branch. That makes it the right fit for processes with high variability, multiple context-dependent decision points, and cross-team scope. Not every process clears that bar, and that's fine.
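To make that bar concrete, here's a minimal sketch of the pre-build check in Python. The three criteria come straight from the session; the needs_apa function, its parameters, and the threshold of two decision points are invented for illustration.

```python
# Hypothetical pre-build checklist. This is not a control-room API, just
# the session's "does this process actually need APA?" question as code;
# the function name, parameters, and threshold are invented.

def needs_apa(high_variability: bool,
              decision_points: int,
              cross_team: bool) -> bool:
    """APA fits when inputs vary, several decisions depend on context,
    and the process crosses team boundaries."""
    return high_variability and decision_points >= 2 and cross_team

# A fixed-path data-entry job fails the bar: traditional automation is fine.
print(needs_apa(False, 0, False))  # False
# A complaint-triage flow with shifting inputs and judgment calls clears it.
print(needs_apa(True, 3, True))    # True
```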

normalise a healthy mixture of agentic and traditional automation in your workflows 💅

 

Building the Customer Investigations Agent [00:27:00]

The scenario: a customer complaint comes in through a portal. The agent picks it up, reads the room, resolves the case, and documents everything — no human in the loop.

The build covered three tools: a getNextWorkItem API task to fetch the complaint, a Sentiment AI Skill built on AI Skills 4.1 to classify it as happy, neutral, frustrated, or hostile, and a disposeComplaint API task to close it out. Standard enough in structure — what made it instructive was the emphasis on variable naming. The agent reads variable names and descriptions at runtime to decide what data goes where and which tool to call next. That's not a nice-to-have. Sloppy descriptions mean an agent that guesses wrong.
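To see why the naming matters, here's a sketch of what well-described tools might look like. The three tools are from the session, but the dict schema and every description string below are assumptions; the point is that names and descriptions are all the agent has to reason over at runtime.

```python
# Hypothetical tool metadata. getNextWorkItem, disposeComplaint, and the
# Sentiment AI Skill are from the session; the schema shape and all of the
# description strings below are illustrative, not the actual platform format.
tools = [
    {
        "name": "getNextWorkItem",
        "description": "Fetch the next open customer complaint from the portal queue.",
        "outputs": {
            "workItemId": "Unique ID of the work item; required later by disposeComplaint.",
            "complaintText": "Verbatim text of the customer's complaint.",
        },
    },
    {
        "name": "sentimentSkill",  # the AI Skills 4.1 sentiment classifier
        "description": "Classify a complaint as happy, neutral, frustrated, or hostile.",
        "inputs": {"complaintText": "Verbatim complaint text returned by getNextWorkItem."},
        "outputs": {"sentiment": "One of: happy, neutral, frustrated, hostile."},
    },
    {
        "name": "disposeComplaint",
        "description": "Close the work item with a disposition derived from the sentiment.",
        "inputs": {
            "workItemId": "ID returned by getNextWorkItem.",
            "sentiment": "Classification produced by the sentiment skill.",
        },
    },
]
```

Rename complaintText to var1 and blank out the descriptions, and the agent has nothing left to reason over.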
Once the tools were wired into the customer investigations agent and the role and goal prompts were set, it ran. The agent fetched the complaint, analysed sentiment, determined a disposition, and marked the work item complete — autonomously. One detail worth noting: even without custom task names configured, the agent generated its own human-readable labels mid-run. "Analysing the sentiment." "Retrieving next available complaint." Not something you set — something it figures out.
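For readers who want the sequence spelled out, here's a minimal sketch of that run with the three tools stubbed in Python. The stubs, the sample values, and the escalate/resolve dispositions are all assumptions; in the live demo the agent chose this ordering itself from its role and goal prompts rather than following scripted code.

```python
# Minimal sketch of the demonstrated run. All three stubs and the
# escalate/resolve dispositions are invented for illustration; the real
# agent sequences the tools itself based on its role and goal prompts.

def get_next_work_item() -> dict:
    # Stub for the getNextWorkItem API task.
    return {"workItemId": "WI-1042", "complaintText": "My order never arrived."}

def classify_sentiment(text: str) -> str:
    # Stub for the Sentiment AI Skill: happy, neutral, frustrated, or hostile.
    return "frustrated"

def dispose_complaint(work_item_id: str, disposition: str) -> None:
    # Stub for the disposeComplaint API task.
    print(f"Closed {work_item_id} with disposition '{disposition}'.")

def run_customer_investigations_agent() -> None:
    item = get_next_work_item()                            # "Retrieving next available complaint."
    sentiment = classify_sentiment(item["complaintText"])  # "Analysing the sentiment."
    disposition = "escalate" if sentiment in ("frustrated", "hostile") else "resolve"
    dispose_complaint(item["workItemId"], disposition)

run_customer_investigations_agent()
```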

 

🔮 What's Next