Our motivation for this demo is twofold:
LLM prompts for simple tasks often take a familiar format: a `Human:` turn containing the question or instruction, followed by an `Assistant:` turn, and optionally some prefilling of the response.
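For example, a question-answering prompt in this format might be assembled like this (a minimal sketch; the question and prefill wording are illustrative placeholders):

```python
# A sketch of the familiar prompt format: a Human turn with the question,
# an Assistant turn, and an optional prefill that starts the response.
question = "Why is the sky blue?"  # placeholder question

prompt = (
    f"\n\nHuman: Answer the following question accurately and concisely.\n"
    f"Question: {question}"
    f"\n\nAssistant:"
)

# Optional prefill: append the opening words of the desired response so the
# model continues from them instead of starting with filler.
prompt_with_prefill = prompt + " The sky appears blue because"
```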
Let's try generating a prompt that elicits a relevant and accurate answer to an arbitrary question. In fact, we'll generate two versions. They'll be generated by the LLM of course, using a seed prompt.
We don't want to create the same prompt twice. To introduce variation between the two prompts we have two options:

1. Use two different seed prompts. E.g. one seed prompt can ask the LLM to generate a verbose, formal prompt, and the other can ask it to generate a concise, informal prompt.
2. Use one seed prompt with `n > 1` and `temperature >= 0.9`. This way we tell the model to generate `n` possible versions in a single call; with the temperature high enough, we expect the versions to be sufficiently different from one another (see the sketch below).
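Here is a minimal sketch of option 2. It assumes the OpenAI Python SDK and an OpenAI-style chat completions endpoint (which exposes an `n` parameter); the model name, seed-prompt wording, and helper name are illustrative placeholders, so adapt them to your provider.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder seed prompt: asks the model to write a prompt, not to answer.
SEED_PROMPT = (
    "Write a prompt that instructs an AI assistant to give a relevant and "
    "accurate answer to the question below. Return only the prompt text.\n\n"
    "Question: {question}"
)

def generate_prompt_versions(question: str, n: int = 2, temperature: float = 0.9) -> list[str]:
    """Request n candidate prompts in one call; high temperature adds variation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": SEED_PROMPT.format(question=question)}],
        n=n,                      # ask for n independent completions
        temperature=temperature,  # >= 0.9 so the versions diverge
    )
    return [choice.message.content for choice in response.choices]

for i, version in enumerate(generate_prompt_versions("Why is the sky blue?"), start=1):
    print(f"--- version {i} ---\n{version}\n")
```

Option 1 is the same call made twice with `n=1` and two different seed prompts (one asking for a verbose, formal prompt, the other for a concise, informal one).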