Templatizing Gen AI Prompts

  • 31 May 2023


One thing I’ve found very useful in working with Generative AI models is to create/tweak/tune templatized prompts.

What I mean by that is - have you ever had a conversation with one of these Gen AI models, and after 10-15 back-and-forth prompts/clarifications you finally got to the thing you ultimately wanted? With templatized prompts, the goal is to shortcut that process from 10-15 prompts down to 1-2.

A helpful approach I’ve been using is to set up a JSON array of clearly defined “rules” that the model needs to follow in addressing my requests (there’s really nothing special about it being JSON - that’s just how I started working on it). This way, I can be sure that as I prep the model for my prompt, we already have a clear set of expectations on the role the model should play, the expectations for its responses, and how it can be invoked in different ways to provide extra output.

This is probably best explained with an example. Let’s say I’m working on an automation that will write copy for real estate listings for me. Clearly it’s important that the Gen AI model is accurate in its description of the property, and that it uses enticing language that would get prospective buyers interested. My template for this prompt might look something like this:

{
  "expert_realtor": {
    "rules": [
      "1. You are an experienced real estate agent with a recognized expertise in property marketing.",
      "2. Your objective is to generate property descriptions that pique buyers' interest in every property you list.",
      "3. All descriptions should be written in a positive manner and accurately communicate details about the property and surrounding area.",
      "4. The tone for each description should be slightly formal, but approachable, personable, and engaging - like a friend letting another friend know about something really special.",
      "5. Each property description should be at or below 260 words and use language that is easily understood by all.",
      "6. For each property, the user will supply the total square footage, number of bedrooms, and number of bathrooms in addition to the property address. (assuming this is a version of GPT that has no web access)"
    ]
  },
  "init": "Hey! I'm your Automation Anywhere Realtor Bot 👋! My job is to generate compelling property descriptions. Give me an address and some basic property details, and I'll provide you with a high-quality property description!"
}

 

The above JSON sets the stage for what I need you to do. Once you understand your responsibilities, reply with the initialization so I can get started.

As a simple example, this is a great way to think about templatizing a request. As you test with the template, you may tweak existing rules, add new ones, and tune the initialization to more quickly get to the outcome you want.
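If you iterate on a template like this often, it can help to keep it in code so each variation is one edit away. Here’s a minimal sketch in Python - the rule text is abbreviated and the function name is just illustrative, not part of any product API:

```python
import json

# An illustrative rule set mirroring the realtor template above (abbreviated).
REALTOR_TEMPLATE = {
    "expert_realtor": {
        "rules": [
            "1. You are an experienced real estate agent with expertise in property marketing.",
            "2. Generate property descriptions that pique buyers' interest.",
            "3. Keep each description at or below 260 words.",
        ]
    },
    "init": "Hey! I'm your Realtor Bot. Give me an address and basic details!",
}

def build_prompt(template: dict) -> str:
    """Serialize the rules template and append the hand-off instruction."""
    return (
        json.dumps(template, indent=2)
        + "\n\nThe above JSON sets the stage for what I need you to do. "
        "Once you understand your responsibilities, reply with the initialization."
    )

prompt = build_prompt(REALTOR_TEMPLATE)
```

From here, tweaking a rule is just editing one list entry and re-sending the generated prompt.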

This isn’t just for human prompts though! The same is absolutely true for prompts that are provided through automation! Consider providing the target XML/JSON format you need the model to reply with so that your automation is able to quickly parse the response and make sense of the information provided by the model. 
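For automation-driven prompts, the same idea works on the output side: tell the model the exact structure you need, then parse whatever comes back. A minimal sketch with a simulated model reply (the field names and the tolerant-parsing helper are illustrative assumptions, not any product's API):

```python
import json

# Target structure the automation expects back from the model (illustrative).
TARGET_FORMAT = {"address": "", "headline": "", "description": "", "word_count": ""}

prompt_suffix = (
    "Reply only with a JSON object in exactly this structure, no other text:\n"
    + json.dumps(TARGET_FORMAT, indent=2)
)

def parse_reply(reply: str) -> dict:
    """Parse the model's reply, tolerating stray text around the JSON object."""
    start, end = reply.find("{"), reply.rfind("}") + 1
    return json.loads(reply[start:end])

# Simulated model reply with extra chatter around the JSON.
mock_reply = (
    'Sure! {"address": "12 Oak St", "headline": "Charming starter home", '
    '"description": "Cozy and bright.", "word_count": "240"}'
)
parsed = parse_reply(mock_reply)
```

Even when you ask for JSON only, models occasionally add a leading sentence, so a tolerant parser like the one above saves the automation from failing on chatter.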

Have you tried templatizing/optimizing your prompts yet? Curious to hear the approaches of others - what’s working well, what’s not?


1 reply


That’s a very good starter template @Micah.Smith !

Sometimes we also need a very clear template for the output delivered by the model, and in a lot of cases the model just needs to return raw JSON with no other text. I was working on an example where I asked the model to extract relevant info from an email about a shipment - info like zip code, quantity, etc.

I found that in those cases, the quality of the prompt is critical. A few learnings:

  • provide a blank JSON of the output with a clear instruction attached, like “reply only with the JSON, do not add any text before or after”
  • the blank JSON should have only the keys, no values. I found that the models had a tendency to copy the sample values when the prompt contained a complete sample JSON; when the values were left blank, the model filled them in more accurately
  • provide clear rules (following the template that Micah shared) for what to do when no clear answer is present. The model can go with “” or “N/A”, or sometimes we can ask the model to be smart, like finding the country from a zip code (GPT does that pretty well)
  • provide custom rules per field - for example, only picking one value, or instead picking a range, etc.
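The second learning - blank values rather than sample values - is easy to enforce if the blank template is derived from a filled example rather than typed by hand. A small sketch (the example data is made up for illustration):

```python
import json

def blank_template(fields):
    """Recursively blank out every leaf value so the model never copies samples."""
    if isinstance(fields, dict):
        return {k: blank_template(v) for k, v in fields.items()}
    if isinstance(fields, list):
        return [blank_template(v) for v in fields]
    return ""  # leaf value: always an empty string

# Made-up filled example; the derived template keeps the keys but drops the values.
filled_example = {
    "origin-zip": "75001",
    "cargo-details": {"pieces": [{"weight": "120kg"}]},
}
blank = blank_template(filled_example)
print(json.dumps(blank, indent=2))
```

This way the key structure always stays in sync with whatever example you maintain, and no sample value can leak into the model's answer.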

Here’s an example prompt that works like a charm to extract info from a shipment email and that can be integrated easily into an automation with the JSON package.

PROMPT


You are a freight quotation expert.
You need to extract information from an email to generate a quote.
The email to extract is the following:
Subject = "$Subject$"
Message = "$Message$"
Extract all relevant details as explained above and list them in a JSON structure with the exact following structure:

{
  "cargo-routing": {
    "origin-country": "",
    "origin-city": "",
    "origin-zip": "",
    "destination-country": "",
    "destination-city": "",
    "destination-zip": "",
    "freight-service": "",
    "incoterms": ""
  },
  "cargo-details": {
    "unit-type": "",
    "pieces": [
      {
        "package-type": "",
        "weight": "",
        "volume": "",
        "dimensions": {
          "length": "",
          "width": "",
          "height": ""
        },
        "special-handling": ""
      }
    ]
  },
  "quote-ask": {
    "price-tier": ""
  }
}

Use N/A when a field is impossible to populate, instead of none or null.
For price-tier, only pick one value.
Calculate the volume from the dimensions if possible.
The values in the JSON need to be different from the example.
Deduce or infer missing fields from the other fields extracted, for example origin-country from the origin-zip or origin-city.
Do not add any other sentence than the JSON.
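Wired into an automation, the $Subject$/$Message$ placeholders get substituted before the call, and the parsed reply can be post-processed - for example normalizing any blank fields to "N/A" exactly as the rules demand. A sketch with a simulated reply instead of a real model call (the helper names are just illustrative):

```python
# Abbreviated version of the prompt above, with the same placeholder style.
PROMPT = (
    "You are a freight quotation expert.\n"
    'Subject = "$Subject$"\n'
    'Message = "$Message$"\n'
)

def render(prompt: str, subject: str, message: str) -> str:
    """Fill the $...$ placeholders the way an automation variable step would."""
    return prompt.replace("$Subject$", subject).replace("$Message$", message)

def normalize(node):
    """Replace empty/none/null leaves with 'N/A', matching the prompt's rule."""
    if isinstance(node, dict):
        return {k: normalize(v) for k, v in node.items()}
    if isinstance(node, list):
        return [normalize(v) for v in node]
    return node if node not in ("", None, "none", "null") else "N/A"

rendered = render(PROMPT, "Quote request", "2 pallets to 75001")

# Simulated parsed model output with one field the model left blank.
reply = {"cargo-routing": {"origin-zip": "75001", "origin-city": ""}}
clean = normalize(reply)
```

Normalizing on the automation side is a useful belt-and-suspenders step: even a well-ruled prompt occasionally returns an empty string where you asked for "N/A".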


 

Hope that helps. What do you think?

We can probably combine Micah’s template with my example to improve the maintainability of the prompt over time :)
