GPT-4o can now be fine-tuned to make it a better fit for your project
Earlier this year OpenAI introduced GPT-4o, a cheaper version of GPT-4 that is almost as capable. However, GPT-4o is trained on data from across the internet, so it may not produce output in the tone and style you want for your project. You can try crafting a detailed prompt to achieve that style or, starting today, you can fine-tune the model.
“Fine-tuning” is the final polish applied to an AI model. It comes after the bulk of training is complete, but it can have a strong impact on the output with relatively little effort. OpenAI says that just a few dozen examples are enough to change the tone of the output to better fit your use case.
For example, if you are trying to build a chatbot, you can write several question-answer pairs and feed them to GPT-4o. Once fine-tuning is complete, the AI’s answers will be much closer to the examples you provided.
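In practice, that means uploading your question-answer pairs as a JSON Lines file and starting a fine-tuning job through the API. Here is a minimal sketch using the OpenAI Python SDK; the example data, the file name, and the "gpt-4o-2024-08-06" snapshot name are illustrative assumptions rather than details taken from this article.

```python
# Minimal sketch: fine-tuning GPT-4o on question-answer pairs (assumed names/data).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each training example is a short chat transcript ending with the answer
# you want the model to imitate (tone, format, level of detail).
examples = [
    {"messages": [
        {"role": "system", "content": "You are a friendly support bot for Acme gadgets."},
        {"role": "user", "content": "How do I reset my device?"},
        {"role": "assistant", "content": "Hold the power button for 10 seconds until the light blinks twice."},
    ]},
    # ... a few dozen more question-answer pairs
]

# Fine-tuning data is uploaded as JSON Lines: one example per line.
with open("chatbot_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

training_file = client.files.create(
    file=open("chatbot_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job; when it finishes, the job's fine_tuned_model id
# can be used like any other model with the chat completions endpoint.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)
```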
You may never have tried fine-tuning an AI model before, but now is a good time to give it a try: OpenAI is letting you use 1 million training tokens for free until September 23. After that, fine-tuning will cost $25 per million training tokens, and using a fine-tuned model will cost $3.75 per million input tokens and $15 per million output tokens (you can think of tokens as roughly syllable-sized chunks of text, so a million tokens is a lot of text). OpenAI has published detailed and accessible documentation on fine-tuning.
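To put those prices in perspective, here is a quick back-of-the-envelope calculation using the rates quoted above; the token counts are invented purely for illustration.

```python
# Rough cost estimate at the quoted rates: $25 / 1M training tokens,
# $3.75 / 1M input tokens, $15 / 1M output tokens.
TRAIN_PER_M, INPUT_PER_M, OUTPUT_PER_M = 25.00, 3.75, 15.00

training_tokens = 2_000_000        # made-up: a few thousand Q&A pairs, several epochs
monthly_input_tokens = 5_000_000   # made-up monthly traffic
monthly_output_tokens = 1_500_000

training_cost = training_tokens / 1_000_000 * TRAIN_PER_M
monthly_usage_cost = (monthly_input_tokens / 1_000_000 * INPUT_PER_M
                      + monthly_output_tokens / 1_000_000 * OUTPUT_PER_M)

print(f"One-time training: ${training_cost:.2f}")      # $50.00
print(f"Monthly usage:     ${monthly_usage_cost:.2f}")  # $41.25
```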
The company has been working with partners to test the new functionality. Developers being developers, what they did was try to build better coding AIs. Cosine has an AI called Genie, which can help users find and fix bugs, among other software engineering tasks; Cosine trained it on real examples of engineering work.
Then there’s Distyl, which has developed a text-to-SQL model (SQL is a language for looking up information in a database). It ranked first on the BIRD-SQL benchmark with an accuracy of 71.83%. For comparison, human developers (data engineers and students) achieve 92.96% accuracy on the same test.
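If text-to-SQL is unfamiliar, the idea is simply to turn a plain-English question into a database query. The sketch below shows the kind of mapping such a model is expected to produce; the schema, question, and query are hand-written stand-ins, not output from Distyl’s model.

```python
# Illustration of the text-to-SQL task: natural-language question in, SQL query out.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, placed_on TEXT);
    INSERT INTO orders VALUES
        (1, 'Ada',   120.0, '2024-08-01'),
        (2, 'Grace',  75.5, '2024-08-03'),
        (3, 'Ada',    40.0, '2024-08-10');
""")

question = "How much has each customer spent in total?"
generated_sql = """
    SELECT customer, SUM(total) AS total_spent
    FROM orders
    GROUP BY customer
    ORDER BY total_spent DESC;
"""  # the query a text-to-SQL model would be expected to produce for the question

for row in conn.execute(generated_sql):
    print(row)  # ('Ada', 160.0), ('Grace', 75.5)
```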
You may be concerned about privacy, but OpenAI says users who fine-tune GPT-4o retain full ownership of their business data, including all inputs and outputs. The data you use to train the model is never shared with others or used to train other models. That said, OpenAI is also monitoring for abuse, in case someone tries to fine-tune the model in a way that would violate its usage policies.