One thing I’ve found very useful in working with Generative AI models is to create/tweak/tune templatized prompts.
What I mean by that is - have you ever had a conversation with one of these Gen AI models...and after 10-15 back and forth prompts/clarifications you actually got to the thing you ultimately wanted? With this concept of templatized prompts, the goal is to shortcut that process down to 1-2 prompts as opposed to 10-15.
A helpful approach that I’ve been using is setting up a JSON array of clearly defined “rules” that the model needs to follow when responding to my prompts… (there’s really nothing special about it being JSON so much as that’s how I started working on it) - but in this way, I can be sure that as I prep the model for my prompt, we already have a clear set of expectations on the role the model should be playing, the expectations for its responses, and how it can be invoked in different ways to provide extra output.
This is probably best explained with an example. Let’s say I’m working on an automation that will write copy for real estate listings for me. Clearly it’s important that the Gen AI model is accurate in its description of the property, while also using enticing language that would get prospective buyers interested in the property. My template for this prompt might look something like this:
{
  "expert_realtor": {
    "rules": [
      "1. You are an experienced real estate agent with a recognized expertise in property marketing.",
      "2. Your objective is to generate property descriptions that pique the interest of buyers in every property you list.",
      "3. All descriptions should be written in a positive manner and accurately communicate details about the property and surrounding area.",
      "4. The tone for each description should be slightly formal, but approachable, personable, and engaging - like a friend letting another friend know about something really special.",
      "5. Each property description should be at or below 260 words and use language that is easily understood by all.",
      "6. For each property, the user will supply the total square footage, number of bedrooms, and number of bathrooms in addition to the property address. (assuming this is a version of GPT that has no web access)"
    ]
  },
  "init": "Hey! I'm your Automation Anywhere Realtor Bot! My job is to generate compelling property descriptions. Give me an address and some basic property details, and I'll provide you with a high-quality property description!"
}
The above JSON sets the stage for what I need you to do. Once you understand your responsibilities, reply with the initialization so I can get started.
This is a simple example, but it’s a great way to think about templatizing a request. As you test with the template, you can tweak rules, add new rules, and tune the initialization in an effort to more quickly get to the outcome you want.
This isn’t just for human prompts though! The same is absolutely true for prompts that are provided through automation! Consider providing the target XML/JSON format you need the model to reply with so that your automation is able to quickly parse the response and make sense of the information provided by the model.
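As a rough illustration, here’s a minimal sketch in Python of what that could look like - the call_model function, the headline/description keys, and the parameter names are all hypothetical placeholders, not a specific product API; the point is simply that the template spells out the rules and the exact response shape, so the automation can parse the reply directly:

import json

# Hypothetical placeholder - swap in whatever Gen AI API your automation actually calls.
def call_model(prompt: str) -> str:
    raise NotImplementedError("Connect this to your Gen AI provider of choice.")

# The template carries the rules AND the exact JSON shape we want back,
# so the automation can parse the reply without any manual cleanup.
PROMPT_TEMPLATE = """You are an experienced real estate agent with recognized expertise in property marketing.
Write an accurate, engaging property description of 260 words or fewer.

Reply ONLY with JSON in this exact format:
{{"headline": "<short attention-grabbing headline>", "description": "<full property description>"}}

Property details:
Address: {address}
Square footage: {sqft}
Bedrooms: {beds}
Bathrooms: {baths}"""

def generate_listing(address: str, sqft: int, beds: int, baths: int) -> dict:
    prompt = PROMPT_TEMPLATE.format(address=address, sqft=sqft, beds=beds, baths=baths)
    reply = call_model(prompt)
    # If the model drifts from the requested format, json.loads raises an error -
    # a good signal to tighten the rules or add a retry step in the automation.
    return json.loads(reply)

Again, the field names above are just illustrative - the takeaway is that when the format is spelled out up front, the parsing step in your automation stays trivial.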
Have you tried templatizing/optimizing your prompts yet? Curious to hear the approaches of others - what’s working well, what’s not?