Automation Pathfinder Program

Deciding on the Right Data to Prove Performance

  • 28 September 2022

Consistency is Key

 

If you expect to have 3-4 different federated teams building automations on the Automation 360 platform 18 months from now, expect to have about 12 different methods of reporting savings. Why? Without guidance from the organization’s governing Automation Practice on capturing, storing, and reporting savings, every team will invent its own methods - and many of them will be neither accurate nor auditable. Seek to answer questions like:

  • How are we measuring the value that our automations are generating?
  • How do we track how much money an automation saves?
  • What’s the best way to track these values across multiple bots?
  • How do business value metrics like money and/or time saved relate to operational metrics like the number of bot runs? And which ones do I report on to an executive?

 

See the Future

 

Whether you’re just starting out with Automation Anywhere, or your organization has been building bots for a while, one thing is certain: someone - with some level of authority - will soon ask you: “How is our automation program going?” You’ll be calm and collected in the face of the question, because after reading this blog you’ll know exactly how to track the metrics that answer it.

 

To be ready for this conversation - and to have the data to back it up - start with a basic question: What data am I going to wish I had 3-6 months from now to clearly demonstrate the benefits of our automation program to the leadership team?

 

While the answer to that question may vary from company to company, and even between business units, the basics are generally the same. You’ll need to be able to report on the operational health of the program as well as the benefits the implemented automations provide to the organization.

 

Let’s use the example of an automation process that starts with an Excel spreadsheet on a shared drive that needs its data validated and transposed to a web application for customer transactions.

 

During the requirements definition process for each automation opportunity, define:

  • What’s the human time that it takes for the task that is being automated to be completed?
    • For our example, let’s say that it takes a human 15 minutes to fully process each row of our sample scenario’s spreadsheet - validating the data is in the correct format and transposing it into the web app in question.
  • What’s the average hourly rate of the workers currently doing those tasks?
    • For our example, let’s say that the average hourly rate is $45/hour.
    • Depending on how specific you want to get, beyond just what the individual is paid, consider the cost of benefits, insurance, etc.
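As a sketch, the two requirement values for each opportunity could be captured in a simple structure like this (the names are illustrative, not part of any Automation Anywhere API):

```python
from dataclasses import dataclass

@dataclass
class AutomationOpportunity:
    """Per-opportunity values captured during requirements definition."""
    name: str
    minutes_per_task: float  # human time to complete one task
    hourly_rate: float       # average (ideally fully loaded) rate, $/hour

# Our running example: validating spreadsheet rows and transposing
# them into the web application.
spreadsheet_bot = AutomationOpportunity(
    name="Spreadsheet to web app",
    minutes_per_task=15.0,
    hourly_rate=45.0,
)
```

Capturing these two numbers up front, in the same shape for every opportunity, is what makes reporting comparable across federated teams later.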

 

With those values defined for each opportunity you’re evaluating, the reporting per automation (standardized across use cases, federated teams, etc.) should be:

  • Number of Tasks Completed on Automations Executed
    • Using our example from above: if on Monday the spreadsheet had 100 rows for processing, and on Tuesday it had 150, we’d need to report those counts separately, since the savings from the two bot runs won’t be equal.
      • Note: This number could be 1, and that’s OK. Not every automation opportunity lends itself to a variable number of tasks completed; just make sure you’re recording, as accurately as possible, how much work was actually done.
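A minimal sketch of per-run task logging (the dates and structure are illustrative, not a prescribed schema):

```python
from datetime import date

# Each bot run records how many tasks (rows) it actually processed,
# since the savings differ from run to run.
run_log = [
    {"run_date": date(2022, 9, 26), "tasks_completed": 100},  # Monday: 100 rows
    {"run_date": date(2022, 9, 27), "tasks_completed": 150},  # Tuesday: 150 rows
]

# Aggregate for period-level reporting.
total_tasks = sum(run["tasks_completed"] for run in run_log)  # 250 across both runs
```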

 

Based on our example above, we could then report on:

 

Business Value Metrics

 

Time Saved

  • Time saved is a great reporting value, as you can clearly demonstrate the number of hours of staff time that bots have given back to your organization.

  • Human equivalent time per task is multiplied by the total number of tasks completed. Depending on the use case, you may call this “time saved” or “productivity hours generated” (e.g. if you used to audit only 10% of transactions but can now automate auditing of 100%, the extra hours are productivity generated rather than time saved).

  • Example from above: Monday, the automation completed 100 rows for processing. 15 minutes per task x 100 tasks completed = 1500 minutes (or 25 hours) of time saved.
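The calculation above as a Python sketch:

```python
# From requirements definition: human time per task.
MINUTES_PER_TASK = 15

# From Monday's bot run log.
tasks_completed = 100

minutes_saved = MINUTES_PER_TASK * tasks_completed  # 1500 minutes
hours_saved = minutes_saved / 60                    # 25 hours of time saved
```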

 

Money Saved

  • Money saved is the value that most leaders are going to be the most interested in, as this theoretically results in bottom line savings for your organization.

  • The human equivalent hourly rate is multiplied by the time to complete a single task to determine the savings per task.

    • This value can then be multiplied by the number of tasks completed to determine the total savings for all completed tasks

  • Example from above:

    • $45/hour x 15 minutes (time for a single task) = a cost of $11.25/task

    • $11.25 x 100 tasks (Monday’s processing) = $1,125.00
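The same example as a Python sketch, showing how per-task cost rolls up to total money saved:

```python
# From requirements definition.
HOURLY_RATE = 45.0       # average rate, $/hour
MINUTES_PER_TASK = 15    # human time per task

# Savings per task: rate x fraction of an hour per task.
cost_per_task = HOURLY_RATE * (MINUTES_PER_TASK / 60)  # $11.25 per task

# Monday's run processed 100 tasks.
money_saved = cost_per_task * 100                      # $1,125.00
```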

 

Operational Metrics

 

Bot Runs

  • This metric is less about a specific benefit to the organization than about the overall health of the automation program.
  • It can be helpful for showing how the program is expanding quarter over quarter or year over year, but it shouldn’t be the primary metric you report - use it to support the story of program health.

 

Automation Pipeline

  • A healthy pipeline of automation opportunities means there’s a backlog of work to be done that the automation team has yet to address.
  • Ideally, this wouldn't be an infinite list of “work to be done” - but just like reporting on Bot Runs, reporting on your pipeline can be an operational forward-looking metric on where your automation practice is going.
  • Consider expanding out this pipeline reporting to capture the expected dollar and time savings, so your reporting could include “savings in pipeline” - not just a count of automation opportunities waiting to be worked on.
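“Savings in pipeline” is just an aggregation over per-opportunity estimates. As an illustration (the entries below are hypothetical):

```python
# Pipeline entries with estimated savings captured during intake.
pipeline = [
    {"opportunity": "Invoice matching", "est_hours_saved": 300, "est_dollars_saved": 13500},
    {"opportunity": "Order entry",      "est_hours_saved": 120, "est_dollars_saved": 5400},
]

# Roll up the backlog into a forward-looking "savings in pipeline" figure.
savings_in_pipeline = {
    "hours": sum(o["est_hours_saved"] for o in pipeline),
    "dollars": sum(o["est_dollars_saved"] for o in pipeline),
}
```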

 

Storing and Reporting

 

So we know what to track, and we have some ideas on how best to report on it - but where do we store all of this? There are several options, depending partly on the applications and platforms your organization already uses for workflow and operations reporting.

 

Bot Insight

 

Bot Insight is the first option - and the one that works most natively with Automation 360-created bots. Bot Insight is an embedded RPA analytics platform that delivers actionable business intelligence on what bots are doing and how they’re doing it. These insights help to report on the ROI of your automation program, as well as custom bot-specific values which may offer additional analytical insights into the operations of your bots. Dashboards are built into the Automation Anywhere Control Room, and access can be granted to business users and leaders outside of your automation practice to keep an eye on their automations.

 

Database

 

A do-it-yourself approach is to store the savings data and any other bot metrics in a database. The downside is that you’d need to set up a flexible database structure yourself and develop a way to report on the data. On the plus side, you can store whatever data you want with no limitations. If you’re going to go this route, consider a NoSQL database (like MongoDB or DynamoDB). These databases let you store any number of custom values per bot run, which can be completely specific to the opportunity itself. This data might be details related to the transaction, data for future business insights, or audit information for the bot. The format for committing and reading data is typically JSON, so you have maximum flexibility.
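As a sketch of what one such per-run JSON document might look like (the field names are illustrative, not a required schema):

```python
import json
from datetime import datetime, timezone

# One record per bot run, combining the standard savings metrics with
# opportunity-specific audit fields - NoSQL stores accept arbitrary keys,
# so each bot can add whatever extra detail it needs.
bot_run_record = {
    "bot_name": "spreadsheet_to_webapp",
    "run_started": datetime(2022, 9, 26, 8, 0, tzinfo=timezone.utc).isoformat(),
    "tasks_completed": 100,
    "minutes_saved": 1500,
    "dollars_saved": 1125.00,
    "audit": {"source_file": "transactions.xlsx", "rows_failed_validation": 0},
}

# Serialize for committing to the datastore of your choice.
document = json.dumps(bot_run_record)
```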

 

Business Intelligence Platforms

 

Business Intelligence providers (Tableau, Power BI, etc.) provide interactive data visualization software focused on business intelligence. These customizable dashboards enable organizations to visualize the data from other platforms for greater insight into operations and tasks. Business intelligence platforms can be used in conjunction with Bot Insight using data connectors, or independently with data stored in another repository.

 

Conclusion & Actionable Takeaways

 

When you read the intro about 3-4 federated teams producing a dozen different reporting methods, did the organizational leader in you get a bit anxious? With a clearly defined methodology for “how” and “where” metrics are saved and reported, performance conversations with leadership become an opportunity to demonstrate your vision and thought leadership. That’s far more satisfying than saying “we’re crushing it” with no data to back up the claim.

 

Actionable Takeaways

  • Work with the developers within your automation practice to determine exactly what you want to capture for reporting metrics, where best to store the data, and how to include it in the automation frameworks/templates that you’re using for development - so you can ensure these values are being consistently captured.
    • If you’ve already started on your automation journey, check to see what (if any) metrics have been captured so far, and ensure that they are in line with your new approach to capturing and reporting on metrics. You may need to go back and add or fix the way metrics reporting is done on some previously completed automations.
  • Talk through some of these metrics with the business partners you’re delivering automations for - are they encompassing the types of values they need to capture and report on as well?
    • As we move into the Accelerate phase of Performance Management, you’ll be coming back to them for input on additional metrics, so get a pulse on what they want to capture and how it can be implemented.
  • Check out some of the popular YouTube videos on how Bot Insight can provide actionable business intelligence and insights into your automations.
