Microsoft adds RFT & SFT support in Azure AI Foundry for smarter model fine-tuning
The company calls RFT "The Future of Adaptive AI in Azure OpenAI Service"
Microsoft is expanding the model fine-tuning techniques available in Azure AI Foundry, including RFT and SFT. The update brings more control, better performance, and new model support.
The notable addition is Reinforcement Fine-Tuning (RFT). This method helps improve model reasoning using feedback from specific tasks. It’s based on OpenAI’s earlier work, which showed a 40% performance gain using RFT over standard models.
Azure AI Foundry will soon support RFT for OpenAI’s o4-mini model. Microsoft recommends it in these use cases:
- Custom Rule Implementation: RFT thrives in environments where decision logic is highly specific to your organization and cannot be easily captured through static prompts or traditional training data. It enables models to learn flexible, evolving rules that reflect real-world complexity.
- Domain-Specific Operational Standards: Ideal for scenarios where internal procedures diverge from industry norms—and where success depends on adhering to those bespoke standards. RFT can effectively encode procedural variations, such as extended timelines or modified compliance thresholds, into the model’s behavior.
- High Decision-Making Complexity: RFT excels in domains with layered logic and variable-rich decision trees. When outcomes depend on navigating numerous subcases or dynamically weighing multiple inputs, RFT helps models generalize across complexity and deliver more consistent, accurate decisions.
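To make the "custom rule" use case concrete, here is a minimal sketch of what RFT-style training data and a grader could look like. The field names follow the format OpenAI has published for reinforcement fine-tuning (chat messages plus a reference field the grader checks against), but the return-policy example, file name, and grader name are invented for illustration; treat the exact schema as an assumption, not Azure documentation.

```python
import json

# Hypothetical RFT training item: a prompt plus a reference answer that a
# grader (not the model) sees. The return-policy question is a made-up
# stand-in for an organization-specific rule.
rft_item = {
    "messages": [
        {"role": "user", "content": "Does the 45-day return window apply to clearance items?"}
    ],
    # Consumed by the grader when scoring the model's output.
    "reference_answer": "no",
}

# A simple string-check grader config: exact match against the reference.
# Template placeholders like {{sample.output_text}} mirror OpenAI's grader
# docs; assume Azure's service resolves them the same way.
grader = {
    "type": "string_check",
    "name": "policy_answer_match",
    "input": "{{sample.output_text}}",
    "reference": "{{item.reference_answer}}",
    "operation": "eq",
}

# Write the item as one JSONL line, the usual fine-tuning data format.
with open("rft_data.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(rft_item) + "\n")
```

During training, the service samples answers from the model, scores each one with the grader, and uses those scores as the reinforcement signal, which is how organization-specific rules get encoded without static prompts.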
Alongside RFT in Azure AI Foundry, Microsoft also introduced Supervised Fine-Tuning (SFT) for OpenAI’s new GPT-4.1-nano. This model works well for cost-sensitive AI tasks. Fine-tuning support for GPT-4.1 will go live in the next few days.
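For comparison, SFT data is simpler: labeled input/output pairs in the chat-style JSONL format that OpenAI-family models are fine-tuned on. The sketch below prepares and validates such a file; the file name and example conversation are placeholders, and uploading the file and launching the job would happen separately through Azure AI Foundry.

```python
import json

# Illustrative SFT examples: each record is a full conversation showing the
# model the exact answer it should learn to produce.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Contoso."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Account > Reset password."},
        ]
    },
]

# Write one JSON object per line (JSONL), the expected fine-tuning format.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Quick validation pass: every line must parse and carry a messages list.
with open("training_data.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        assert isinstance(record["messages"], list)
```

Unlike RFT, there is no grader or reward signal here: the model is trained directly toward the assistant turns, which is why SFT suits cheaper, cost-sensitive models like GPT-4.1-nano.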
Microsoft also added fine-tuning support for Meta’s Llama 4 Scout 17B model. It supports a 10M token context window. This model is now available in Azure AI Foundry and Azure Machine Learning as a managed component.