Microsoft releases DeepSeek R1 7B & 14B distilled models for Copilot+ PCs, letting developers build apps that weren't previously possible

The models are now available to developers.

Microsoft has announced the availability of DeepSeek R1 7B and 14B distilled models for Copilot+ PCs via Azure AI Foundry. These new models build on the company’s vision of integrating AI seamlessly across cloud and edge systems, enabling developers to harness powerful AI capabilities directly on their devices.

The DeepSeek R1 7B and 14B models are optimized to run on the Neural Processing Units (NPUs) in Copilot+ PCs, starting with Qualcomm Snapdragon X processors and followed by Intel Core Ultra 200V and AMD Ryzen processors. This optimization allows for efficient AI computation with minimal impact on battery life and thermal performance, freeing up the CPU and GPU for other tasks.

The introduction of these models enables developers to build AI-powered applications that were previously not possible. The ability to run 7B and 14B parameter reasoning models locally on NPUs democratizes access to large-scale machine learning models, allowing for sustained AI compute power with less impact on device resources.

Microsoft’s DeepSeek models are optimized using the ONNX QDQ format and are available for download via the AI Toolkit VS Code extension. Earlier this year, the Redmond-based tech giant surprised everyone by bringing DeepSeek R1 to Azure and GitHub, even though the China-based AI model directly competes with OpenAI’s GPT-4o.
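
For developers who want to experiment once a distilled model has been downloaded through the AI Toolkit, a request to the toolkit's local OpenAI-compatible endpoint might look roughly like the sketch below. The base URL, port, and model identifier here are assumptions for illustration only, not values confirmed in Microsoft's announcement; check the AI Toolkit playground for the exact settings on your machine.

```python
# Minimal sketch: querying a locally hosted DeepSeek R1 distilled model through
# an OpenAI-compatible REST endpoint such as the one the AI Toolkit for VS Code exposes.
# The base URL, port, and model identifier below are assumptions for illustration.
import requests

BASE_URL = "http://127.0.0.1:5272/v1"            # assumed local AI Toolkit endpoint
MODEL_ID = "deepseek-r1-distill-qwen-7b"         # hypothetical model identifier


def ask(prompt: str) -> str:
    """Send one chat-completion request to the local model and return its reply."""
    response = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": MODEL_ID,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 512,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Summarize in one sentence why running a 7B model on an NPU saves battery."))
```

Because the endpoint follows the OpenAI chat-completions shape, existing client code can usually be pointed at the local model by swapping the base URL and model name.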

Head over to the Windows Developer Blog if you want to find out more.
