The OpenAI API offers various parameters to alter model output, with model and temperature being the most commonly used. Higher-performance models generally have higher latency and cost more. Temperature controls how often the model samples a less likely token: higher temperatures produce more random and creative output, but not necessarily more truthful output. For factual use cases like data extraction and truthful Q&A, a temperature of 0 is best.
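As a rough sketch of how these two parameters interact (assuming the current openai Python SDK; the model name is just a placeholder, not a recommendation from the source), a factual extraction call might pin temperature to 0 while a brainstorming call raises it:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Factual extraction: temperature 0 keeps the output deterministic and on-topic.
extraction = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; pick the model your latency/cost budget allows
    temperature=0,
    messages=[{
        "role": "user",
        "content": "Extract the company names from: 'Apple and Nvidia beat estimates.'",
    }],
)

# Brainstorming: a higher temperature samples less likely tokens more often.
brainstorm = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.9,
    messages=[{"role": "user", "content": "Suggest five playful names for a note-taking app."}],
)

print(extraction.choices[0].message.content)
print(brainstorm.choices[0].message.content)
```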
Top Highlights
Put instructions at the beginning of the prompt and use ### or """ to separate the instruction from the context (see the prompt sketch after this list)
Articulate the desired output format through examples
Start with zero-shot, then few-shot (adding examples); if neither works, fine-tune (see the zero-shot vs. few-shot sketch after this list)
Reduce “fluffy” and imprecise descriptions
Rather than only saying what not to do, say what to do instead
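Putting several of these tips together, here is a minimal prompt sketch (the task and wording are illustrative, not taken from the source): the instruction comes first, ### fences off the context, a sample line pins down the desired output format, and the prompt says what to do when data is missing rather than only what to avoid.

```python
# Instruction first; context separated with ###; desired format shown by example;
# positive fallback instruction instead of a bare prohibition.
prompt = """Extract the product names and prices mentioned in the text below.
Use the format shown; if a price is missing, write "unknown" instead of guessing.

Desired format:
- Product: <name>, Price: <price>

Text: ###
The new Acme Kettle sells for $49, while the Acme Mug has no listed price yet.
###"""
```

And for the zero-shot to few-shot progression, the same kind of task can be posed first without examples and then, if the output is unreliable, with a couple of demonstrations that show the expected format (again an illustrative sketch under those assumptions):

```python
# Zero-shot: just the instruction and the input.
zero_shot = """Extract keywords from the text below.

Text: Glasp lets readers highlight and organize quotes from the web.
Keywords:"""

# Few-shot: the same instruction plus worked examples that demonstrate the output.
few_shot = """Extract keywords from the corresponding texts below.

Text 1: Stripe provides APIs that web developers can use to process payments.
Keywords 1: Stripe, APIs, web developers, payments
##
Text 2: OpenAI has trained language models that are very good at understanding text.
Keywords 2: OpenAI, language models, understanding text
##
Text 3: Glasp lets readers highlight and organize quotes from the web.
Keywords 3:"""
```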
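If neither zero-shot nor few-shot prompting gives consistent results, the remaining option from the list above is to fine-tune a model on a dataset of such input/output pairs.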
Glasp is a social web highlighter with which people can highlight and organize quotes and thoughts from the web, and learn from other like-minded people.