Free fine-tuning allows OpenAI customers to train the GPT-4o mini model on additional data at no charge until September 23, starting with Tier 4 and Tier 5 users.

OpenAI is offering free fine-tuning on its new GPT-4o mini model, allowing users to train the model on additional data at no charge to enable higher performance for specific use cases.

GPT-4o mini fine-tuning is available to developers in OpenAI’s Tier 4 and Tier 5 usage tiers, the highest-priced tiers among OpenAI’s plans. OpenAI plans to gradually expand access to free fine-tuning to all tiers. Free fine-tuning is offered now through September 23.

Developers can start fine-tuning GPT-4o mini for free by visiting their fine-tuning dashboard, clicking “create,” and selecting “GPT-4o mini” from the base model drop-down menu. Each organization gets 2M training tokens per 24-hour period to train the model; any overage will be charged $3.00 per 1M tokens. More details on free fine-tuning are available in OpenAI’s fine-tuning docs. Sketches of the training data format and of the equivalent API calls appear at the end of this article.

Compared to fine-tuning with OpenAI’s GPT-3.5 Turbo, GPT-4o mini is positioned as more cost-efficient and more capable, with a longer context window and lower latency.

GPT-4o mini was launched July 18 with the intention of expanding the range of applications built with AI by making intelligence more affordable. It enables applications that chain or parallelize model calls, pass a large volume of context to the model, or interact with customers through fast, real-time text responses, such as customer support chatbots.
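
Fine-tuning data for OpenAI’s chat models is supplied as a JSONL file in which each line is a complete example conversation. The Python sketch below builds a minimal training file in that format; the system prompt, questions, answers, and file name are purely illustrative, not from OpenAI.

```python
import json

# Each line of the training file is one chat-format example: a JSON object
# with a "messages" array of system/user/assistant turns.
# The conversations below are invented for illustration only.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support agent for Acme Corp."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a support agent for Acme Corp."},
            {"role": "user", "content": "Can I change my billing email?"},
            {"role": "assistant", "content": "Yes. Go to Billing > Contact info and update the email address."},
        ]
    },
]

# Write one JSON object per line (JSONL).
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```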
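
Besides the dashboard flow described above, fine-tuning jobs can also be created programmatically. The sketch below uses the official openai Python package (1.x client) to upload the training file and start a job; the snapshot name "gpt-4o-mini-2024-07-18" is an assumption based on OpenAI’s fine-tuning docs at the time, so confirm the exact base-model string in your own dashboard drop-down.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL training file created in the previous sketch.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Create the fine-tuning job against the GPT-4o mini base model.
# The snapshot identifier below is assumed; verify it in your dashboard.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

print(job.id, job.status)

# Once the job completes, the resulting model id (job.fine_tuned_model)
# can be passed as the "model" argument in chat completion requests,
# just like any other model.
```

Training token usage from jobs like this counts against the 2M free tokens per 24-hour period; anything beyond that is billed at $3.00 per 1M tokens during the free period.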