Microsoft has unveiled a series of updates to its Azure AI platform, including the expansion of the Phi-3 family of small language models (SLMs).
The company has added two new models to the family – Phi-3.5-MoE and Phi-3.5-mini – designed to improve efficiency and accuracy.
One of the main advantages of Microsoft’s new models is their multilingual capabilities: they now support more than 20 languages.
Microsoft Adds Two New Phi-3 Models
Phi-3.5-MoE, a 42-billion-parameter Mixture of Experts model, combines 16 smaller expert models into one. Because only a subset of those experts is activated for any given input, Microsoft is able to combine the speed and computational efficiency of smaller models with the quality and accuracy of larger models.
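To make the Mixture of Experts idea concrete, here is a minimal, illustrative sketch of top-k expert routing in PyTorch. It is not Microsoft’s Phi-3.5-MoE implementation; the layer sizes, the 16-expert count, and the choice of two active experts per token are assumptions picked only to show how a router keeps most parameters idle for any single token.

```python
# Illustrative top-k mixture-of-experts layer (not Microsoft's Phi-3.5-MoE code):
# a router scores all experts per token, but only the top_k experts actually run.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=16, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # per-token expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                                      # x: (tokens, d_model)
        scores = self.router(x)                                # (tokens, num_experts)
        weights, idx = torch.topk(scores, self.top_k, dim=-1)  # keep best experts only
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e                       # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

layer = TopKMoELayer()
tokens = torch.randn(4, 512)
print(layer(tokens).shape)  # torch.Size([4, 512])
```

Because each token passes through only its top-scoring experts, the compute per token stays close to that of a small dense model even though the total parameter count is much larger.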
Phi-3.5-mini is considerably smaller, with 3.8 billion parameters, but its multilingual capabilities open it up to a much wider range of global usage scenarios. It supports Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish and Ukrainian.
Microsoft says that, based on user feedback, Phi-3.5-mini is a significant update to the Phi-3-mini model, which launched two months ago.
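For readers who want to try the new model, the sketch below shows one plausible way to call a deployed Phi-3.5-mini endpoint with a multilingual request. It assumes a serverless deployment created in Azure AI Studio and the azure-ai-inference Python package; the endpoint URL, key, and prompt are placeholders, not values from Microsoft’s announcement.

```python
# Hypothetical call to a serverless Phi-3.5-mini deployment; the endpoint and key
# are placeholders that would come from your own Azure AI Studio deployment.
from azure.core.credentials import AzureKeyCredential
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage

client = ChatCompletionsClient(
    endpoint="https://<your-phi-35-mini-deployment>.inference.ai.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

# Exercise the multilingual support: ask for the answer in Spanish.
response = client.complete(
    messages=[
        SystemMessage(content="You are a concise multilingual assistant."),
        UserMessage(content="Reply in Spanish: summarize what a small language model is in one sentence."),
    ],
    temperature=0.2,
    max_tokens=120,
)
print(response.choices[0].message.content)
```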
In addition to the two new models, Microsoft has also introduced several new tools and services within Azure AI to make it easier to extract insights from unstructured data.
Beyond its own models, Microsoft will launch the AI21 Jamba 1.5 Large and Jamba 1.5 models on Azure AI via Models as a Service, bringing long-context processing capabilities.
Other announcements included the general availability of the VS Code extension for Azure Machine Learning and of the Conversational PII Detection Service in Azure AI Language.
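The Conversational PII Detection Service targets transcripts and chat logs. As a rough illustration of the kind of redaction output the Azure AI Language service produces, here is a sketch using the document-level recognize_pii_entities call from the azure-ai-textanalytics Python package as a stand-in; the endpoint and key are placeholders, and the conversational variant uses a separate, conversation-oriented request format.

```python
# Stand-in example: document-level PII detection with azure-ai-textanalytics.
# The conversational feature works on chat transcripts rather than flat documents,
# but returns the same kind of redacted text and typed PII entities.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-api-key>"),
)

documents = ["Hi, this is Jane Doe. You can reach me at 555-0100 or jane@example.com."]
results = client.recognize_pii_entities(documents, language="en")

for doc in results:
    if not doc.is_error:
        print(doc.redacted_text)  # original text with detected PII masked out
        for entity in doc.entities:
            print(entity.category, entity.text, entity.confidence_score)
```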
“We continue to invest in the Azure AI stack to bring the latest innovations to our customers, so you can build, deploy, and scale your AI solutions safely and confidently,” said Eric Boyd, Corporate VP of Azure AI Platform.