Developer Meetup

Stellar Keynote Recap: Developer Meetup - Solution Pattern Modeling Frameworks

  • 15 May 2023

Welcome to our Stellar Keynote Recap Series!

 

As part of the 1st Annual Pathfinder Community Space Camp & Generative AI Showcase, we’re hosting live sessions with Community MVPs and industry experts to share the latest developments in intelligent automation—especially Generative AI!—and provide you with learnings and resources to drive success at scale. If you don’t have the chance to attend the live session or want to come back to reference some of the critical mission information that was discussed, we’ve captured key intel from each session to share with you!
 

Day 4 Developer Meetup

 

 

This session was developer-dedicated with in-depth discussions and demo use cases on our latest developer innovations to take your automations from a one-task solution to a superior end-to-end process solution. Our stellar Product team members—Geoffrey Laissus, Sr. Director of Product Management, Aziz Khan, Sr. Director, Product Management, and Jay Bala, SVP Product Management—joined Arjun Meda, Principal Developer Evangelist, to drop all the product knowledge you’ve been seeking!
 

INTRODUCTION

 

Business processes are complex. There is collaboration happening between multiple users and on top of multiple systems: it could be systems we know very well, like Salesforce; it could be public websites, like Experian; but also internal databases and systems, etc. So everything is complex in terms of systems. It is also complex in terms of who is interacting in that process, and who and what the touchpoints to the users in the process are.
 

DEMO USE CASE

 

💫  Main Intel: Employee experience is just as important as customer experience and impacts automation adoption within an organization. It should be a serious consideration when designing an automation.

Let’s look at a customer journey process—let’s say we are an insurance agent at Acme company, and we have our team comprised of the insurance agent, the team manager, an account executive on the sales side, and we have someone on the finance side. All those people will coordinate with each other to solve the customer journey. A customer will ask a question, and will provide details about their own situation, their own needs, and their own requirements. At some point there will be a quote generated and then they'll sign a contract and pay a bill.

Let's focus on a specific part of the process through the insurance agent's role, Zoe. She's on the phone with a customer, collecting details about a house the customer wants to insure: What is the roof type? How many square feet? When was the house built? With all these details, the agent generates a quote, which is pretty high, so she'll need an approval before communicating the quote back to the customer. The process begins in Salesforce, but in reality it could happen with internal systems, or could also be in SAP.

If we look at this process as is, there are a lot of manual tasks and steps involved to ultimately get a quote to the customer. Zoe collects all the inputs from the customer and logs the details in Salesforce. She still has many manual steps after generating the quote: going to an internal system, carrying the details from Salesforce over to the quote application, and then sending the quote by email to the manager. The manager on the other end has to parse emails, which creates lag time, and it's not very convenient to have to parse and approve by email. Then Zoe has to watch her own email for the approval, attach the approved quote in Salesforce, and finally communicate it back to the customer. So there are a lot of manual steps and a long turnaround between the time we get the details from the customer and the customer actually receiving a quote.

This is, in essence, what we're calling a five-star experience. Something we often underestimate in RPA and automation projects is that the employee also has their own rating, and that will have a huge impact on how much the automation is adopted. If you provide employees a superior experience, they will be more likely to advocate for automation and spread the word that it changed their life and improved their overall employee experience. That's truly your differentiation as a company.
If I had to name the focus of my last 4 years at Automation Anywhere, it would be employee experience. So this is our baseline, our 5-star experience, when everything is manual.

 

UPGRADING FROM A 5- TO 12-STAR EXPERIENCE

 

💫  Main Intel: There are several options to keep improving a process end-to-end using different automations and triggers. Ultimately, Automation Co-Pilot, Process Composer, and APIs are powerful tools for developers to go beyond automating 1 step and 1 task. You can go from being a task-automation partner to being a process-automation partner, one that engages users at the right time, in the right format, with an intuitive and compact user interface.

Let's have a look at how we could improve the rating for our employees as well as for our customers, one improvement at a time, by looking at what automation can do.

First, scheduling automation is a very good way to improve experience and improve an overall process. In this case, an automation can generate the quotes every night. Let's say Zoe and her team logged details for 5, 10, even 50 different quotes. We can have this automation running on remote devices, scheduled every night at 2am, to grab all those quote requests from Salesforce, extract the details, go into the quote application (which is internal to the company), work through the UI to fill in fields and click buttons, generate the quote as a PDF, and then send it by email. That's a massive gain of time for Zoe. This kind of automation is good for handling a few transactions at a time. But because it's scheduled, it can only run at a fixed frequency. So in this case, Zoe has to end the call with the customer before the automation kicks off, and the customer still has to wait because the quote is generated overnight. So really, it's only a slightly better experience, and only for Zoe.
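The overnight batch mechanics can be sketched in plain Python. This is a hypothetical stand-in, not Automation Anywhere code: the `QuoteRequest` fields and the `price_quote` rule are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class QuoteRequest:
    record_id: str      # hypothetical Salesforce record id
    roof_type: str
    square_feet: int
    year_built: int

def price_quote(req: QuoteRequest) -> float:
    """Toy pricing rule: base rate per square foot, adjusted for age and roof type."""
    base = req.square_feet * 1.20
    age_factor = 1.5 if req.year_built < 1980 else 1.0
    roof_factor = 1.3 if req.roof_type == "wood" else 1.0
    return round(base * age_factor * roof_factor, 2)

def nightly_batch(pending: list[QuoteRequest]) -> dict[str, float]:
    """Process every pending request in one scheduled run (e.g. the 2am job),
    the way the overnight automation would drain Salesforce in bulk."""
    return {req.record_id: price_quote(req) for req in pending}
```

A scheduler (cron, or the platform's own scheduling) would invoke `nightly_batch` once per night, which is exactly why the customer waits until the next day.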

Let’s improve it further by adding email triggers. Whenever the email is received, an automation can extract the attachment, connect to Salesforce using APIs, and attach the quote to the sales record. Again, another improvement for Zoe’s work.

Let’s now improve turnaround time by looking at how we could generate quotes every hour instead of overnight. This will be a distributed automation on remote devices in a device pool. Once Zoe submits a quote request, it is sent to a queue, and the devices will wake up, say every 5 minutes, to check the queue. Because there are multiple devices in the pool, whenever one is free it will take a queued item, generate a quote, and send an email. This is best for handling a large batch of transactions. This is more of an 8-star experience, because now Zoe is happy and the customer is happier as well with the lower turnaround time.
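The device-pool pattern is essentially a shared queue drained by whichever worker is free. Here is a minimal sketch using Python threads as stand-ins for remote devices; the names and behavior are illustrative assumptions, not the platform's actual workload-management implementation.

```python
import queue
import threading

work_queue: "queue.Queue[str]" = queue.Queue()   # queued quote requests
results: list[str] = []
results_lock = threading.Lock()

def device_worker(device_name: str) -> None:
    """Each 'device' in the pool pulls items whenever it is free."""
    while True:
        try:
            item = work_queue.get(timeout=1)  # poll the queue
        except queue.Empty:
            return  # no more work: the device goes back to sleep
        with results_lock:
            results.append(f"{device_name} generated quote for {item}")
        work_queue.task_done()

# Zoe's team submits quote requests to the queue...
for record_id in ["Q-101", "Q-102", "Q-103", "Q-104"]:
    work_queue.put(record_id)

# ...and a pool of devices drains it in parallel.
devices = [threading.Thread(target=device_worker, args=(f"device-{i}",))
           for i in range(2)]
for d in devices:
    d.start()
for d in devices:
    d.join()
```

The key design point is that throughput scales with the number of devices in the pool, while no single item waits for a fixed overnight schedule.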

What’s the next leap? If we were to automate the process in the right way, with the right tools, what would we do? This time, let’s focus on the team manager, Thomas. He’s working in SAP on a daily basis. We’re still asking him to look at his emails to download the quote, review it, and maybe connect back to Salesforce to compare details. This is where our 2 biggest innovations come in. We want to generate the quote on demand because we don’t want to end the call until we’ve resolved the query. So Zoe logs details into Automation Co-Pilot—which is living in Salesforce—while discussing details with the customer. Process Composer takes over, generates a quote, and notifies the team manager in SAP, because that’s what he works in all day, and he approves it in SAP. Then an API attaches the approved quote to the Salesforce record and sends the quote by email to the customer. All while Zoe was on the call with the customer. Now it is a 12-star experience! Zoe can focus on talking to the customer, the customer receives the quote while on the call, and the team manager has no disruption in his work because it all happens in SAP for him.

Automation Co-Pilot and Process Composer are here to run automation on demand from any web application. The process orchestrates hand-offs and approvals, and remote or local automation is also managed. It engages users at the right time, with the right format, with an intuitive and compact user interface. And if there is a discussion between users, or between users and bots, that is all possible in Process Composer.

Sometimes as an RPA developer, we focus on 1 or 2 steps to automate because we’re not aware or in touch with the right people. So it’s good to have this Automation Co-Pilot, Process Composer and API in mind when you’re automating because you can go beyond automating 1 step and 1 task. We can switch from being a task-automation partner to being a process-automation partner. These are tools in your toolbox and you can mix and match and compose all the tools together to solve any particular use case.

The Process Composer and Automation Co-Pilot are things we are actively investing in.

 

SOLUTION PATTERN MODELING FRAMEWORK

 

Most common solution patterns review:

  • Schedules - running automations on remote devices at a specified time and frequency. Best at handling a few transactions at a time attached to low SLAs.
  • Triggers - running automations remote or local whenever a system or user event occurs. Best at handling single transactions attached to medium/high SLAs.
  • Workload Management - running automations on distributed remote devices. Best at handling large batches of transactions.
  • Automation Co-Pilot & Process Composer - running automations on-demand from any web app. Orchestrating hand-offs, approvals, remote/local automation. Best at handling complex workflows with high SLAs.
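As a rough rule of thumb, the choice between these patterns can be expressed as a small decision function. This is illustrative only; real solutioning weighs many more factors than the three inputs invented here.

```python
def recommend_pattern(volume: str, sla: str, user_in_loop: bool) -> str:
    """Map a workload's shape to one of the solution patterns above.

    volume: "single", "few", or "batch"; sla: "low", "medium", or "high".
    These inputs and thresholds are hypothetical simplifications.
    """
    if user_in_loop:
        # Hand-offs, approvals, on-demand runs from a web app
        return "Automation Co-Pilot & Process Composer"
    if volume == "batch":
        # Large batches of transactions over a device pool
        return "Workload Management"
    if sla in ("medium", "high"):
        # Single transactions reacting to a system or user event
        return "Triggers"
    # Few transactions, low SLA: run at a fixed time and frequency
    return "Schedules"
```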

3 main factors will shape your automation solutioning process: Experience, Architecture, and Orchestration.

  • Experience - Is the automation running in tandem with the user? Is it on-demand? Is it reacting to a user event? Is there a decision to be made by the user? How are the outputs displayed to the user? Is the user in need of the automation when inside another application?
  • Architecture - Is the automation running on the local device or remote? Running in the background on the local device or in the foreground? Running on a single device at a time or on distributed devices? Relying on UI, API, database or other? Is there a need for document orchestration?
  • Orchestration - Is there a sequence of automations to run? Is there a user in the loop? Is it a long-running or short-lived automation? Is there SLA attached to the automation outcome? Is there a time window where the automation needs to run?

A solution pattern is first about understanding the initiation. Types of initiation include: Co-Pilot, run now, triggers, schedules, queues, and API. Then comes the orchestration, for which you can use Process Composer or a Master Bot. Next is execution, which could be local foreground, local background, remote device, remote pool, or cloud (API task). Finally, in terms of automation, you have multiple items in your toolbox, including UI connectors, API connectors, iPaaS connectors, REST/SOAP, database, document automation, and the recorder. At the end of the day, you also want to provide some kind of monitoring, bringing the user back into the loop. To do this you can use Co-Pilot for business-level monitoring, or the activity view if you just want to focus on the activity of your automations. You also have bot monitoring inside CoE Manager when you want to take monitoring to the next level.
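The initiation → orchestration → execution → tooling → monitoring breakdown can be captured as a small data model. This is a hypothetical sketch for reasoning about patterns, not a platform API; the field names and allowed values are lifted from the lists above.

```python
from dataclasses import dataclass, field

INITIATIONS = {"co-pilot", "run now", "trigger", "schedule", "queue", "api"}
EXECUTIONS = {"local foreground", "local background",
              "remote device", "remote pool", "cloud api task"}

@dataclass
class SolutionPattern:
    initiation: str
    orchestration: str                 # e.g. "process composer" or "master bot"
    execution: str
    tools: list[str] = field(default_factory=list)  # UI/API/iPaaS connectors, etc.
    monitoring: str = "co-pilot"

    def validate(self) -> bool:
        """Check that each stage uses a known option from the lists above."""
        return self.initiation in INITIATIONS and self.execution in EXECUTIONS
```

For example, the 12-star demo would roughly be `SolutionPattern("co-pilot", "process composer", "cloud api task", ["api connector"])`.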

It's a continuum of different tools interconnected to each other, and our documentation can help you understand what exactly can connect to the Process Composer, what is the local background, how many API connectors we have, and how we can use them.
 

LIVE COMMUNITY Q&A

  • When you're working with complex process automation, how do you ensure that the automation solution meets regulatory requirements, whether it's security, compliance, governance, or regulatory rules?
    • Jay: A lot of times you have data that is being entered through Co-Pilot into some of these human input forms, so we are very sensitive to PII. We are very aware of the fact that, in terms of regulatory rules around GDPR and other things, it's very important that the information captured meets those regulatory requirements. We have 2 or 3 types of capabilities. One is the ability to define fields that are obfuscated when you define the forms, so the data that's entered will not be seen. The other aspect is that in the data storage itself you can hash the data, so you will never get to see the actual data. From a traditional automation perspective, we also have aspects around CyberArk, where you don't want credentials available except when picked up at runtime. So all of those are important, and from an overall governance perspective, we have the ability inside the core platform itself to go from private to public. So we have a classic life cycle where not just any automation that's built can be easily moved to production.

 

  • How does Co-Pilot help ensure SLAs are met by providing business users with real-time visibility? And how does it alert in case of any deviations from expected performance?
    • Geoffrey: First and foremost, by design: because Automation Co-Pilot is where the users are working, we have a direct touchpoint. We can notify them in Co-Pilot, and if we notify them in Co-Pilot, it means we notify them in Salesforce, for example, or wherever they’re working. So we have this way of reaching the user very quickly where they are, and that's a good primary way to enforce the SLA and make sure people are reacting in time, because in a process, usually the automation is not the bottleneck; the actual users are. Next, when you're using Process Composer, you can have timeouts—expiration time or escalation time—within the process. You can have a task, let's say an approval task, and enforce that if a user is not responding in 2 days you skip the task and escalate to another user, or end the request, or fail it, or send an email, or ping someone in Teams. So you have all those capabilities to enforce SLAs and continue the notifications, or to just build the process that way.
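The timeout-and-escalate behavior described here boils down to comparing elapsed time against an expiration window. A minimal sketch (the function name, return values, and 2-day default are invented for illustration, not Process Composer's API):

```python
from datetime import datetime, timedelta

def route_task(assigned_at: datetime, now: datetime,
               expiration: timedelta = timedelta(days=2)) -> str:
    """Decide whether an approval task should keep waiting or escalate.

    If the approver hasn't responded within the expiration window,
    escalate (skip the task, notify another user) instead of waiting forever.
    """
    if now - assigned_at >= expiration:
        return "escalate"
    return "wait"
```

An orchestrator would evaluate this check periodically for each open task, which is what keeps a single slow approver from stalling the whole process.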

 

  • What integration does Co-Pilot provide with existing business systems? And how does it also integrate with external partners’ ecosystems?
    • Aziz: Integration is a key component of our platform. We have a strong connector framework that allows us to connect to a number of different enterprise applications. So out of the box you'll have the primary applications that you would work with on a daily basis, whether it's Salesforce, ServiceNow, SAP, Workday, Genesys, etc. Then our platform provides you the ability to connect to any REST or SOAP endpoint, so you have the flexibility to do that. In addition, we're also working on developing a framework which will allow you to build connectors on the fly. Today you have to wait for us to build an integration to an application and offer it as a package, but imagine the flexibility of having a tool within the product itself that allows you to build that on the fly for any application that provides a REST endpoint. Those capabilities are exposed for any application within our portfolio, so Co-Pilot inherits them out of the box, and you'll have connectivity to any application.

 

  • When you are working on a complex automation workflow, what techniques do you use to break down large processes into simpler and more manageable tasks?
    • Geoffrey: Process Composer is here to compose the different building blocks of a process. There are a few kinds of building blocks. There are forms, if you want to compose the user interface. And then there are the automations themselves, the bots and API tasks, which are composed in Process Composer. Process Composer sits at a level of abstraction a bit higher than those assets: the bots, the forms, and the API tasks. So reusability is built in. You can have a team building those blocks—for example, a specific bot that will connect to Salesforce, or a specific API task that will connect to Workday—using the connectors that we have or the UI as mentioned. You can build an internal library of automation building blocks, which are bots, forms, and API tasks, and also draw on the existing automations you already have running. Inside Process Composer, you can reuse them, and you can also select the versions you're using. It's really meant to enforce and promote that reusability: all those blocks, bots, and forms are already there, and you can just reuse them inside the process.

 

  • We’ve heard that Process Composer connects all the bots, forms, and multiple APIs—What steps do we take to ensure there is always compatibility and interoperability between all of these technologies, especially when we are connecting to external APIs and machine learning models?
    • Geoffrey: We take versioning and compatibility very seriously, because we understand that production, enterprise-grade automations rely on them to be successful and resilient. In terms of compatibility, even if you’re building bots or API tasks on an older release, like .17, and you’re using them inside Process Composer now, that will be compatible; we maintain that backward compatibility. Also, let’s say a request in Co-Pilot is going on for 2 weeks: if in that time you updated a bot or form, or released a new version of a connector, we keep the versions for that request, so the request will go through and not fail. On a more granular level, if you’re using connectors inside the bots or API tasks themselves, those are packages, and you have access to all versions of these packages to understand what default version is running on a specific release, and you can choose whether to enforce a specific version. If you selected a specific version in your bots, you can also seamlessly update all your bots to run on a new version.

 

  • Can you set up 2 different processes with different triggers on the same runner?
    • Geoffrey: Yes, you can have different triggers that you add to the same device. There is a trigger management page: as you build triggers in your automations, you can access them, deploy them to devices, and say “that device will run with that trigger,” but you can also mix and match one device with multiple triggers.

 

  • Isn’t the detection of a trigger in fact a continuous running bot?
    • Geoffrey: It is a bit more than that; it’s actually a service that will run and listen for every event. If those are user events, they will need the user to be connected; if they are system events, they can run in the background and the service will listen for them. When you’re listening for a trigger, you’re not blocking the device and you’re not blocking it from being used. When triggers happen concurrently, there’s a queue system, and they will be queued on the device itself. So it’s a little bit more than a bot, because a bot can run at the same time.

 

  • The use case which was showcased—isn’t that more of a BPA use case? What are the advantages of using RPA instead of BPA for this specific use case?
    • Geoffrey: It’s a natural journey as an automation developer to look at automation as a whole category, and at other types of solutions than just RPA or task automation. We’re on a journey as a company to go from task automation to process automation because it’s what our customers are asking for; they want to take it to the next level. In this use case, it’s fine to generate quotes at night, but we want to truly make an impact on the customer and employee experience. We live in a world where every company is struggling to keep up with demand and wants to take care of its customers, but they also want to take care of their employees. There is an employee shortage, people are switching jobs, and turnover in a contact center is no joke. So our tools are going on that journey. And we’re not the only ones doing this, which is why we have APIs and connectors: we can integrate with other tools doing that quite efficiently. It’s a toolbox interconnected with other vendors’ tools, and we’re proud to have a lot of technological partnerships with those vendors so we can work together.
    • Aziz: The key realization is that there will be data assets in silos across your organization. Your IT may be composing a whole bunch of data through MuleSoft, SnapLogic, Apigee, or some other tool they already house to manage data services. So it’s important for us to make sure you’re able to consume those data objects and reuse the work your organization has already done. If you’ve already built processes in those tools, then you can call out to automations, API tasks, bots, and processes directly from them as well.
    • Jay: A lot of the BPM and BPA tools typically start out at the logical level. Then when it comes to physical implementation, sometimes there are data and application silos, and it becomes a custom project to actually connect all of those. So the fact that we have our core platform, which provides this level of connectivity, means that stringing together a process that spans these applications with the right workflow becomes easy to do. That’s where Co-Pilot and Process Composer shine.

 

  • If there is an application for which we don’t have an API-related package, what does the process look like if somebody wants to build it on their own?
    • Jay: Typically the challenge used to be that you had a mainframe application or terminal server, etc., that didn’t necessarily have a lot of API-based connectivity, so UI-based automation became a good way to get to the data inside. As technology has evolved into more SaaS-based applications with more open APIs, we’ve realized that for the Automation Success Platform to be truly successful, we need a combination of both. Aziz’s team has been working over the past year to provide out-of-the-box connectivity to Salesforce, ServiceNow, Workday, all these tools, based on APIs. And all of these are open APIs, so they are typically backward compatible for 7 or 8 releases. This really provides resilient automations. That said, there is still a time and place for using the recorder. If you’re still talking about a green-screen terminal server, or you have situations where the only access is through a Citrix-based layer, you still have to go with image-based recognition.
