Microsoft has revealed the latest addition to its Phi series of generative AI models, known as Phi-4.
According to Microsoft, Phi-4 demonstrates significant advancements over its predecessors, particularly in solving mathematical problems. This improvement is attributed in part to the enhanced quality of its training data.
As of Thursday night, Phi-4 is available on a very limited basis, accessible exclusively through Microsoft’s newly launched Azure AI Foundry development platform. Access is restricted to research purposes under a Microsoft research license agreement.
Phi-4 is Microsoft’s latest small language model, weighing in at 14 billion parameters. It enters a competitive field alongside other small models such as OpenAI’s GPT-4o mini, Google’s Gemini 2.0 Flash, and Anthropic’s Claude 3.5 Haiku.
Small models like these are prized for their speed and lower operating costs, and their performance has improved steadily in recent years.
Microsoft attributes Phi-4’s performance leap to a training mix that combines “high-quality synthetic datasets” with high-quality human-generated content, along with unspecified post-training optimizations.
The use of synthetic data and post-training improvements has become a focal point for many AI research labs.
Scale AI CEO Alexandr Wang highlighted this trend in a tweet on Thursday, stating that “we have reached a pre-training data wall,” a view echoed in several recent industry reports.
Phi-4 also marks a milestone for the Phi series as the first model released since the departure of Sébastien Bubeck, formerly a vice president of AI at Microsoft and a key figure in the development of the Phi models, who left the company in October to join OpenAI.