
July 2024 | AI Agent Studio

In case you haven't heard about Product Club — the Pathfinder Community Product Club is a monthly virtual meetup led by Automation Anywhere product leaders that focuses on our latest proprietary product innovations. It offers a place for community members to stay informed, connect with product leaders, and gain insights into real-world applications of the latest innovations in intelligent automation.

P.S. If you can’t attend a meeting, no worries — we'll be dropping a recap of each month's session right here in our Product Club hub.

HOSTS

  1. Kate Ressler, Pathfinder Community Director
  2. Jason Trent, Director of Product Management
  3. Rinku Sarkar, Director of Product Management
  4. Smita Biswas, Lead Writer (Documentation Team)
     

TOPIC

For July’s Product Club, our team of product experts introduced the live audience to AI Agent Studio, a new product arriving in a few weeks with the .33 release. AI Agent Studio addresses the challenge of building automations with the power of generative AI, combining AI tools with AI governance.

Here’s a rundown of the session:

  1. Jason reviews the latest AI innovations at Automation Anywhere and introduces AI Agent Studio.
  2. Jason demos Model Connections, a new area of the Control Room, and shows how to add AI skills to an AI Agent process.
  3. Rinku covers AI governance.
  4. Smita takes us through AI Agent product documentation and additional resources.

 

AI AGENT STUDIO

AI Agent Studio provides the tools and governance necessary to empower your automation developers to successfully build AI-powered agents.

What exactly is an AI Agent? AI Agents combine the power of foundational generative AI models with the ability to execute and take actions. Essentially, an AI Agent is a combination of an AI skill + an action.

Often, foundational models alone aren't enough for your daily operations: their responses can drift away from your domain or simply be made up (known as "model drift" and "hallucination"). The remedy is to enrich the model with additional information relevant to your organization so it is tailored to your use cases, a practice known as "grounding." With AI Agent Studio, you can leverage our Enterprise Data capability to do just that, along with prompt tuning to send the most effective prompts and get a proper response in return.
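To make the grounding idea concrete, here is a minimal, illustrative Python sketch (not AI Agent Studio's implementation). It assumes a hypothetical retrieve_enterprise_snippets() helper standing in for an enterprise knowledge lookup, and simply folds the retrieved snippets into the prompt before it is sent to a model.

```python
# Illustrative grounding sketch -- not the AI Agent Studio API.
# retrieve_enterprise_snippets() is a hypothetical stand-in for an
# enterprise knowledge / RAG lookup over your organization's content.

def retrieve_enterprise_snippets(question: str) -> list[str]:
    # In a real system this would query a vector index or knowledge base.
    return [
        "Support SLA: priority-1 cases must receive a response within 2 hours.",
        "Refunds above $500 require manager approval.",
    ]

def build_grounded_prompt(question: str) -> str:
    context = "\n".join(f"- {s}" for s in retrieve_enterprise_snippets(question))
    return (
        "Answer using ONLY the company context below. "
        "If the context is insufficient, say so.\n\n"
        f"Company context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("How fast must we respond to a priority-1 case?"))
```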

Finally, the agent takes an action through UI or API automation. We always recommend introducing a human-in-the-loop step when using generative AI, even if you're grounding the data. Leveraging AI Agents to assemble processes, execute multiple skills, and orchestrate across your enterprise is what will transform your business operations.

Underneath all of this capability within AI Agent Studio is an Automation Anywhere security and governance layer with a longstanding history of providing deep insights and controls around how you build bots.
 

DEMO: SUPPORT CASE HANDLING

For this product demo, we’re showcasing how AI Agent Studio can be leveraged to triage support cases, ensuring that support cases are evaluated by AI and routed to the proper agent or person who can provide additional information to the customer as quickly as possible. The demo involves three personas: Jake, an Automation Admin; Marcus, a Pro Developer; and Sue, a Citizen Developer.

First, Jason introduces a new area in Control Room called Model Connections. Here, you can establish connections to foundation models by connecting to an LLM, providing the credential and authentication details, and then sharing that model connection for use across the organization. Model Connections provides critical governance: it ensures that the right teams and developers use the correct models, and that models you want to keep out of your environment can't be introduced.
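As a rough illustration of what a connection test verifies under the hood, here is a minimal sketch using boto3 against Amazon Bedrock, the vendor used in the demo. This is not the Control Room's Model Connections implementation; it only shows that the supplied credentials can reach the service and list its foundation models.

```python
# Illustrative sketch of verifying an Amazon Bedrock connection with boto3.
# Not the Control Room's Model Connections implementation.
import boto3
from botocore.exceptions import ClientError

def test_bedrock_connection(region: str = "us-east-1") -> bool:
    """Return True if the credentials can reach Bedrock and list models."""
    try:
        bedrock = boto3.client("bedrock", region_name=region)
        models = bedrock.list_foundation_models()["modelSummaries"]
        print(f"Connection OK: {len(models)} foundation models visible.")
        return True
    except ClientError as err:
        print(f"Connection failed: {err}")
        return False

if __name__ == "__main__":
    test_bedrock_connection()
```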
 

  1. Jake, the Automation Admin, is going to start by creating a new model connection called “Support Operations.” He first selects a vendor. Automation Anywhere supports the major hyperscalers, and today Jake will use Amazon Bedrock.
  2. Next, he will pass through authentication details from his credential vault and test the connection. Testing the model connection ensures the connection is viable and reusable, and gives you the opportunity to verify that the credential info is accurate. The test completes and the model connection is ready to use.
  3. Now Jake needs to extend this new model connection to a role in the organization. The Control Room includes a new set of permissions covering model connection creation and access to other parts of AI Agent Studio, specifically the AI governance pieces. For this demo, the connection is mapped to the admin and Pro Developer roles.
  4. Time for Marcus, the Pro Developer, to jump in. He moves to Automations, where he already has a folder called AI Skills. He opens the folder and we see one prompt Marcus has started, called “Tone Detection.”
  5. Marcus opens his file and we see the prompt includes “$texttoAnalyze$,” a prompt input. Inputs are how an AI skill dynamically receives information whenever it is used within an automation or an AI Agent. The prompt essentially instructs the model to ingest the input, evaluate it for tone and sentiment, and return a score based on those factors. For this demo, the input text is shown in the side panel and comes from a sample inbound support e-mail message. (A rough sketch of a comparable prompt appears after this list.)
  6. Marcus selects ‘Get Response.’ The response the generative AI returns is a little underwhelming, so Marcus wants to evaluate a different model.
  7. He clicks ‘Choose’ and gets a pop-up with all the different model connections available to him. He selects the new Support Operations model connection Jake just deployed and clicks ‘Get Response’ again. This time it runs as an Anthropic Claude model on our Amazon Bedrock deployment, and the response returned is a lot more verbose: it gives him not only the tone and the ratings, but also detailed information on why this input is rated the way it is. This is incredibly helpful! Marcus feels good about this response, so he hits ‘Save.’
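For readers who want a feel for what a tone-detection prompt with a dynamic input looks like, here is a hypothetical sketch that invokes an Anthropic Claude model on Amazon Bedrock via boto3. The prompt wording, function names, and model ID are assumptions for illustration only, not the actual Tone Detection AI skill.

```python
# Hypothetical tone-detection prompt sketch -- not the actual AI skill.
import json
import boto3

PROMPT_TEMPLATE = (
    "Evaluate the tone and sentiment of the following text. "
    "Return a rating from 1 (very negative) to 10 (very positive) "
    "and briefly explain why.\n\nText: {text_to_analyze}"
)

def detect_tone(text_to_analyze: str, region: str = "us-east-1") -> str:
    runtime = boto3.client("bedrock-runtime", region_name=region)
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [
            {"role": "user",
             "content": PROMPT_TEMPLATE.format(text_to_analyze=text_to_analyze)}
        ],
    }
    # Model ID is an example; use whichever Claude model your connection exposes.
    response = runtime.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps(body),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

print(detect_tone("I've been waiting three days for a reply and I'm extremely frustrated."))
```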

Now we’ll see how Sue can use this AI skill in a process she's designing and extend it into an AI Agent. She navigates to the Service Operations folder and opens a Support Case Triage process she’s been working on. This process initiates when an e-mail is received in the Support inbox, then parses the e-mail to extract the body text and triages it for tone and sentiment.

  1. Sue opens the AI Agent to add the AI skill into this automated process. To do that, she pulls in the brand-new Generative AI Prompt Template package and adds it to the canvas.
  2. Now she has the opportunity to select an AI skill. She hits ’Browse,’ navigates to an AI Skills folder, and locates the Tone Detection skill that Marcus made available for her. She selects it and clicks ‘Choose.’
  3. She can see a preview of the prompt itself, as well as the prompt template input. That input is the case text, which she inserts.
  4. The response is returned and Sue can determine whether the sentiment is at or below a certain value. If it is, she inserts a new case record in the case system, which happens to be built on Salesforce. (A minimal sketch of this end-to-end flow follows.)
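To tie the demo steps together, here is a minimal, hypothetical sketch of the triage flow: score an inbound e-mail's tone (any scoring function, such as the detect_tone() sketch above, could plug in here) and create a Salesforce case when the score falls at or below a threshold. The simple-salesforce usage, credentials, threshold, and field names are assumptions.

```python
# Hypothetical support-case triage flow -- illustrative only.
from simple_salesforce import Salesforce

SENTIMENT_THRESHOLD = 4  # assumed cutoff: at or below this, escalate to a case

def score_sentiment(email_body: str) -> int:
    # Stand-in for the Tone Detection AI skill; parse the model's rating here.
    return 2  # pretend the model rated this e-mail as very negative

def triage_email(email_body: str, subject: str) -> None:
    score = score_sentiment(email_body)
    print(f"Sentiment score: {score}")
    if score <= SENTIMENT_THRESHOLD:
        # Human-in-the-loop review is still recommended before customer-facing actions.
        sf = Salesforce(
            username="automation@example.com",   # assumed credentials
            password="********",
            security_token="********",
        )
        case = sf.Case.create({
            "Subject": subject,
            "Description": email_body,
            "Priority": "High",
        })
        print(f"Created Salesforce case: {case['id']}")

triage_email("I've been waiting three days for a reply.", "Order #1042 delayed")
```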

 

AI GOVERNANCE

Through AI governance, you can monitor every interaction with large language models. Here’s the governance we’ve built into the Automation Success Platform and what’s coming soon:

  • Monitor and Audit - Our built-in audit capabilities enable you to monitor the usage of AI within automations, ensuring that users follow responsible AI best practices when writing prompts and that model responses are accurate.
  • Platform Security Controls - Role-based access controls to data, integrations with SIEM, and encryption for data at rest and in transit.
  • Enforce Best Practices - With the .34 release, we will add rule enforcement through code analysis, which ensures that users are only using approved and qualified models when leveraging AI within automations.
  • AI Analytics - Also with the .34 release, we’re adding a rich set of dashboards to help you derive valuable insights and deep analysis of usage patterns across your automation estate.
  • Data Privacy - On the roadmap is enhanced data privacy so customers reap the benefits of generative AI without compromising their data security and privacy controls.
  • Protect against Toxic Content - Also on the roadmap is an intelligent data masking approach that empowers users to freely use data with LLMs without sacrificing privacy. The objective is to support compliance-focused customers implementing responsible AI by taking a privacy-first approach and ensuring model outcomes are always as desired. We want to prevent any toxic language or toxic content in the model responses, so that whether users are writing prompts or models are sending responses, they are always appropriate. (A rough sketch of a simple masking guardrail follows this list.)
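As a rough illustration of the data-masking idea (not the product's roadmap implementation), here is a minimal sketch that redacts e-mail addresses, card-like numbers, and SSN-like patterns from a prompt before it is sent to an LLM. The patterns are deliberately simplistic and for illustration only.

```python
# Illustrative sensitive-data masking sketch -- not the product's guardrail.
import re

MASK_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<EMAIL>"),      # e-mail addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),  # card-like numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),           # US SSN pattern
]

def mask_sensitive(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before sending to an LLM."""
    for pattern, placeholder in MASK_RULES:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(mask_sensitive("Customer jane.doe@example.com paid with 4111 1111 1111 1111."))
```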

ADDITIONAL RESOURCES

Access the full AI Agent Studio product documentation here, which can answer any questions you have regarding license requirements, roles and permissions, model connection, AI governance, and more.

You can also reference release notes from the latest .33 release here which includes specific release notes on AI Agent Studio. You can find all past and current release note documentation here.

Did You Know? If you are logged in, there are tons of special features you can access with our Product Documentation including adding content to a Watch List to receive a notification when updates are published, saving topics to your Favorites, and leaving feedback that comes directly to us. We highly recommend taking advantage of these helpful features!

 

AUDIENCE FEEDBACK

We polled our live audience throughout the meeting to gain insights and share feedback. Here’s what we learned from our attendees during this session:
 

  • Governance is top of mind for the audience. We asked our audience to select their top three concerns, and rising to the top of the list were data privacy and regulatory controls. Here’s how the rest of the challenges ranked:

 

  • We are always curious which generative AI model vendor our customers turn to most often. We asked our live audience which model vendor is preferred in their organizations, and the overwhelming majority reported OpenAI (43%) and Azure OpenAI (40%). While only 2% of the audience said they prefer Amazon Bedrock at the moment, Jason noted that he has observed Amazon Bedrock garnering more and more use over the past quarter.

 

  • We highly value customer feedback when it comes to planning our roadmap. We asked our audience what capabilities they wanted to see prioritized for AI Agent Studio and the top response (36%) was an Automation Anywhere-provided generative AI model with no need to connect a third-party service. Many audience members also hope to see guardrails for masking sensitive data, native RAG service including data management, and AI-powered response evaluation capabilities prioritized.

SESSION Q&A

Thank you to our audience for submitting their questions! Unfortunately, we aren’t always able to answer them all during the live session. We also want to express our gratitude to our special co-hosts, Jason, Rinku, and Smita for providing their responses.

Please note that all answers were shared live during the July 2024 meeting and are subject to change. We strongly encourage you to contact your account management team for any specific licensing and pricing inquiries.

 

Q: What license do we need to access AI Agent Studio?
A: An Enterprise license.
 

 

Q: How does AI Agent Studio mitigate genAI model hallucination?
A: To help ensure that the responses coming back from your foundational models are accurate, Automation Anywhere offers Enterprise Knowledge, a native Retrieval Augmented Generation (RAG) capability that allows customers to provide access to organization content (such as product documentation, knowledge base articles, etc.) to ground responses from genAI models. In future releases, we are also bringing in RAG support from hyperscalers such as Amazon, Azure, Google Cloud, and others.
 

 

Q: Are we required to license Azure AI for AI Agent Studio?
A: Our open platform allows you to bring your own models, which means you manage the licenses for your Azure AI service. If you have an existing one, you can use it. However, to use the capabilities offered through AI Agent Studio, you do need the platform enterprise license.
 

 

Q: Which models will be available?
A: The models you connect to AI Agent Studio will come from model vendors with which you've already established relationships.
 

 

Q: Are governance and guardrails after the fact?
A: Yes, what we showed are the audit and monitoring capabilities for seeing what happened and what the interactions were. Model Connections already provides additional guardrails: admins typically configure connections only to vetted, approved models. Along with that, in the upcoming release there will be enforcement through code analysis, where you can enforce which models can be used within an automation. That's another guardrail we are adding. And on the roadmap is sensitive data masking, a feature that will protect any sensitive data embedded in prompts or responses.
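As a rough illustration of what model allow-list enforcement through code analysis might look like, here is a minimal, hypothetical sketch that scans an automation definition for model IDs and flags anything outside an approved list. The data structure and model IDs are assumptions, not the product's code analysis engine.

```python
# Illustrative model allow-list check -- not the product's code analysis engine.
APPROVED_MODELS = {
    "anthropic.claude-3-sonnet-20240229-v1:0",
    "amazon.titan-text-express-v1",
}

def check_automation(automation: dict) -> list[str]:
    """Return a list of violations for any step using a non-approved model."""
    violations = []
    for step in automation.get("steps", []):
        model_id = step.get("model_id")
        if model_id and model_id not in APPROVED_MODELS:
            violations.append(f"Step '{step['name']}' uses unapproved model: {model_id}")
    return violations

automation = {
    "steps": [
        {"name": "Tone Detection", "model_id": "anthropic.claude-3-sonnet-20240229-v1:0"},
        {"name": "Summarize", "model_id": "some-unvetted-model"},
    ]
}
for violation in check_automation(automation):
    print(violation)
```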
 

 
