From the course: Introduction to Prompt Engineering for Generative AI

Fine-tuning your prompts

- [Instructor] Earlier we discussed few-shot learning, where we saw prompts and completions. Now, if you have several hundred high-quality prompts and completions, you can perform what's called fine-tuning. It typically requires several hundred good examples, which is quite a bit, and it's definitely not free, but it allows you to get more out of a model. A really good example of this is the GPT-based model that powers GitHub Copilot, which has been fine-tuned on lots and lots of open source code. With fine-tuning, you can often use smaller models and achieve similar or superior results. Also, if your model is fine-tuned, you need fewer tokens in your prompt, because the model already has a sense of what the completion should look like. You can also expect lower latency on some tasks. Now remember, for fine-tuning you need extremely high-quality examples of prompts and completions. You need to choose the right base model for your purpose and train it on those examples. And finally, you can go ahead and use the fine-tuned model. There are costs associated with fine-tuning, as it is a computationally heavy operation, but a good fine-tuned model may save you quite a lot of money in the long run, since it requires fewer tokens and perhaps a smaller model to achieve the same results.
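To make the first step concrete, here is a minimal sketch of how a set of prompt/completion pairs might be packaged for fine-tuning. It assumes a service that accepts training data as JSON Lines (one `{"prompt": ..., "completion": ...}` object per line, a format used by several fine-tuning APIs); the example pairs and the `training_data.jsonl` filename are hypothetical.

```python
import json

# Hypothetical examples -- a real fine-tuning set needs several hundred
# high-quality prompt/completion pairs, not two.
examples = [
    {"prompt": "Translate to French: Hello ->", "completion": " Bonjour"},
    {"prompt": "Translate to French: Thank you ->", "completion": " Merci"},
]

# Write the pairs as JSON Lines: one JSON object per line, which is the
# training-data format many fine-tuning services expect.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Once a file like this is validated and uploaded, the service trains the chosen base model on it, and you then call the resulting fine-tuned model with much shorter prompts, since the examples no longer need to be included in every request.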