Best practices for prompt engineering with OpenAI API | OpenAI Help Center
help.openai.com

Summary

The OpenAI API offers several parameters for shaping model output; model and temperature are the most commonly used. Higher-performance models cost more and have higher latency. Temperature controls how often the model samples a less likely token: higher values produce more varied and creative output, but not necessarily more truthful output. For factual use cases such as data extraction and truthful Q&A, a temperature of 0 is best.
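The summary above can be sketched as two request payloads, assuming the OpenAI Chat Completions API; the model name "gpt-4o-mini" is illustrative, and the creative temperature of 0.8 is an assumed value, not one the article prescribes:

```python
def build_request(prompt: str, factual: bool) -> dict:
    """Build a chat-completion payload, using temperature 0 for factual tasks."""
    return {
        # Illustrative model choice: higher-performance models cost more
        # and add latency, so pick the cheapest model that works.
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
        # 0 = deterministic, factual; higher = more varied/creative (assumed 0.8).
        "temperature": 0 if factual else 0.8,
    }

extraction = build_request("Extract all dates from the text below: ...", factual=True)
brainstorm = build_request("Suggest names for a coffee shop.", factual=False)
```

The payload would then be sent with a client such as `openai.OpenAI().chat.completions.create(**extraction)`.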

Top Highlights

  • Put instructions at the beginning of the prompt and use ### or """ to separate the instruction and context
  • Articulate the desired output format through examples
  • Start with zero-shot, then few-shot (examples); if neither works, then fine-tune
  • Reduce “fluffy” and imprecise descriptions
  • Instead of just saying what not to do, say what to do instead
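The highlights above can be combined into a single prompt: the instruction comes first, `###` separates it from the context, and a few-shot example demonstrates the desired output format. The entity-extraction task and its wording are hypothetical, chosen only to illustrate the structure:

```python
# Instruction first, stating what to do (not just what to avoid).
instruction = "Extract company names from the text and return them as a JSON list."

# One worked example (few-shot) that shows the desired output format.
few_shot = (
    'Text: "Apple and Microsoft reported earnings."\n'
    'Companies: ["Apple", "Microsoft"]\n'
)

# The actual context to process.
context = "OpenAI partnered with several startups last quarter."

# ### separates the instruction, the example, and the context.
prompt = (
    f"{instruction}\n\n"
    f"###\n{few_shot}###\n"
    f'Text: "{context}"\n'
    "Companies:"
)
```

Per the highlights, one would try this prompt zero-shot (without `few_shot`) first, add examples if needed, and fine-tune only as a last resort.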

Tags

prompts
prompt
OpenAI
ChatGPT
prompt engineering
