Our motivation for this demo is twofold:
LLM prompts for simple tasks often take a familiar format:
`Assistant:` and optionally some prefilling of the response
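As a sketch of that format (assuming an Anthropic-style Human/Assistant transcript, which the prefilled `Assistant:` turn suggests; the placeholders are illustrative):

```
Human: <the task or question>

Assistant: <optional prefill, e.g. the start of the expected answer>
```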
Let's try generating a prompt that elicits a relevant and accurate answer to an arbitrary question. In fact, we'll generate two versions. They'll be generated by the LLM itself, of course, using a seed prompt.
We don't want to create the same prompt twice. To introduce variation between the prompts, we have two options:
1. Use different seed prompts. For example, one seed prompt can ask the LLM to generate a verbose, formal prompt, while another asks it to generate a concise, informal prompt.
2. Use a single seed prompt, but set `n > 1` and `temperature >= 0.9`. This way we use one seed prompt but tell the model to generate `n` versions. With the model temperature high enough, we expect the versions to be sufficiently different from one another.
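The single-seed-prompt option can be sketched as follows, assuming an OpenAI-style Chat Completions request. The seed prompt wording, model name, and `build_request` helper are illustrative assumptions, not part of the original demo:

```python
# Sketch: one seed prompt, n completions, high temperature for variation.
SEED_PROMPT = (
    "Write a prompt that will elicit a relevant and accurate answer "
    "to this question: {question}"
)

def build_request(question: str, n: int = 2, temperature: float = 0.9) -> dict:
    """Assemble sampling parameters that ask for n prompt variants."""
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [
            {"role": "user", "content": SEED_PROMPT.format(question=question)}
        ],
        "n": n,                      # request n completions in a single call
        "temperature": temperature,  # high temperature -> more variation
    }

request = build_request("Why is the sky blue?")
# With a real client this would be passed along, e.g.:
#   client.chat.completions.create(**request)
# and the response would contain n choices, each a candidate prompt.
print(request["n"], request["temperature"])  # → 2 0.9
```

The two generated prompts come back as separate `choices` in one response, so a single call covers both versions.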