
During this virtual meetup, we discussed queues, device pools, the Workload package, and work item prioritization.

Concepts


Queue: A queue is one of the main building blocks of Workload Management (WLM). A queue holds data known as Work Items for further processing. The system distributes these Work Items to individual unattended Bot Runners in a device pool for processing.

Device Pool: Device pools provide built-in High Availability for the Bot Runner machines. You are not tied to a single Bot Runner machine, so if your Bot Runner machine is unavailable for any reason, your automation is not affected. The scheduled automation will automatically run on the next available Bot Runner machine, thereby providing high availability.

Running a Bot with Queue: Use this feature to collectively process all work items of a queue across all the Bot Runners present in one or more device pools.
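To make the relationship between these concepts concrete, here is a minimal Python sketch (an illustration only, not Control Room internals): work items wait in a queue, and the device pool distributes them across whichever Bot Runners are currently available, skipping any runner that is offline. The item names and runner names are invented for the example.

```python
from collections import deque

# Conceptual sketch only -- not the actual Control Room implementation.
# Work items sit in a queue; the device pool spreads them across the
# Bot Runners that are currently available (high availability: an
# offline runner is simply skipped).

queue = deque(["invoice-001", "invoice-002", "invoice-003", "invoice-004"])
device_pool = {"runner-A": "available", "runner-B": "offline", "runner-C": "available"}

available = [r for r, status in device_pool.items() if status == "available"]

assignments = []
i = 0
while queue:
    item = queue.popleft()
    # Round-robin over the available runners in the pool.
    assignments.append((item, available[i % len(available)]))
    i += 1

for item, runner in assignments:
    print(f"{item} -> {runner}")
```

Note that `runner-B` never receives work: the scheduled automation simply runs on the next available machine, which is the high-availability behavior described above.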

Here are the links to packages showcased during the session.

IQ Bot Extraction

Workload

Analyze


Conclusion

Workload Management lets you configure Automation 360 to distribute workload automatically and effectively among the Bot Runners.


To learn about our upcoming meetups, here are the meetup group links for different cities. We look forward to meeting you in person once conditions improve; until then, let's continue to meet online.

Bengaluru Meetup Group

Delhi Meetup Group

Chennai Meetup Group

Hyderabad Meetup Group

Pune Meetup Group

Singapore Meetup Group

We want to make sure the content at our meetup groups effectively meets the needs of our audience. Please let us know at developer@automationanywhere.com which topics, concepts, and capabilities you'd like to see in upcoming meetups.

Using the Control Room API:

  • How can I set a Failed status on a work item?
  • How can I pass input parameters from the Credential Vault to a bot queue?


Thanks for the support


I’m curious about the overhead implications of splitting a batch process into a queue-based process. In a scheduled batch bot, the bot launches once and processes its entire input batch. Refactored into a queue-based process, the bot launches once for every work item. What is the overhead in time and resource consumption for each item's launch?



@donvnielsen 

You are correct that splitting up a batch process has huge implications for the overall process design, but let’s reframe the conversation, because you started with “taking a scheduled batch bot and breaking it up.” Let’s talk about the work items themselves and where they come from.

Think about this question: how did the work items get into that batch? Most of the time they are added gradually throughout the course of a workday, and when the scheduled time comes, all of them are run in sequence. Imagine that workers or systems are adding 5-10 items per hour to this batch, so by the end of the day we’ll have 200+ items waiting to be worked. This is the traditional scheduled-batch design you are likely used to.

Introducing WLM here isn’t about converting from batch to queue at the moment of scheduling for a multi-runner approach. It’s so that we don’t have to wait for the batch at all!

The WLM queue offers an endpoint with a backlog that can take in each work item as it gets created. Now, instead of sitting untouched in a batch until the next day, within minutes of an item’s creation it will hit the WLM queue, and the Control Room will identify an available Bot Runner and execute it, completing in near real time. When the number of incoming work items exceeds the capacity of a single runner, WLM can draw on the rest of the device pool to maximize throughput. When even the device pool can’t handle a burst in volume, the WLM queue handles the backlog with a first-in, first-out approach.

WLM offers a different way to handle automation design that allows for dynamically handling volume in near real time. The closest you could get with a scheduled-batch approach is to make the schedule execute more frequently: instead of daily runs, you end up with hourly runs, or even “run every 10 minutes” style schedules that become difficult to maintain.
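A back-of-the-envelope calculation makes the latency difference vivid. The sketch below uses hypothetical numbers (items trickling in over an 8-hour workday, ~3 minutes of processing per item, a single runner for the batch case) and is only an illustration of the reasoning above, not a measurement of any real deployment.

```python
# Hypothetical numbers for illustration: 3 items arrive per hour over an
# 8-hour workday, and each work item takes ~3 minutes to process.

ITEM_MINUTES = 3  # assumed processing time per work item
arrivals = [h * 60 + m for h in range(8) for m in (0, 20, 40)]  # minute of arrival

# Scheduled batch: every item waits until the end of the day (minute 480),
# then all items run in sequence on a single runner.
batch_waits = []
for i, t in enumerate(sorted(arrivals)):
    finish = 480 + (i + 1) * ITEM_MINUTES
    batch_waits.append(finish - t)

# WLM queue: an available runner picks each item up almost immediately,
# so (ignoring contention) each item waits only its own processing time.
queue_waits = [ITEM_MINUTES for _ in arrivals]

print(f"avg wait, daily batch: {sum(batch_waits) / len(batch_waits):.1f} min")
print(f"avg wait, WLM queue:   {sum(queue_waits) / len(queue_waits):.1f} min")
```

Even with these modest volumes, the average time-to-completion drops from hours to minutes, which is the "near real time" benefit described above.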

