Microsoft revealed Project Volterra today at Build 2022, a Qualcomm Snapdragon-powered device designed to let developers explore "AI scenarios" using Qualcomm's new Neural Processing SDK for Windows toolkit. The hardware ships with Windows support for neural processing units (NPUs), specialized processors built for AI and machine learning workloads.


Microsoft has partnered with Qualcomm on AI developer hardware before. In 2018, the two companies launched the Vision Intelligence Platform, which included "fully integrated" support for computer vision algorithms running on Microsoft's Azure Machine Learning and Azure IoT Edge services. Four years later, even after Qualcomm's exclusive arrangement for Windows on Arm licenses reportedly expired, Project Volterra is evidence that Microsoft and Qualcomm remain close partners in this field.



In a blog post, Panos Panay, Microsoft's chief product officer for Windows and devices, stated: "We believe in an open hardware ecosystem for Windows, allowing [developers] more freedom and alternatives as well as the capacity to serve a wide range of situations. As a result, we are continuously changing."



Dedicated AI processors, which accelerate AI computation while reducing battery drain, have become commonplace in mobile devices such as smartphones. As AI-powered image upscalers and similar features grow more popular, manufacturers have begun to include such processors in their laptop lines as well: Apple's M1 Macs include the Neural Engine, and Microsoft's Surface Pro X uses the SQ1 processor (co-developed with Qualcomm). Intel had previously stated that it will


Project Volterra will be available later this year, according to Microsoft, and will feature a neural processor with "best-in-class" AI computing capability and efficiency. The main chip will be Arm-based and supplied by Qualcomm, allowing developers to build and test Arm-native applications with Visual Studio, VSCode, Microsoft Office, and Teams.


Project Volterra, it turns out, is the precursor to Microsoft's "end-to-end" developer toolchain for Arm-native applications, which will include Visual Studio 2022, VSCode, Visual C++, .NET 6, Windows Terminal, Java, the Windows Subsystem for Linux, and the Windows Subsystem for Android (for running Android apps). Previews of each component will begin rolling out in the coming weeks, with more open source projects, including Python, Node.js, Git, LLVM, and others, to follow.


Developers will be able to execute, debug, and profile the performance of deep neural networks on Windows devices with Snapdragon hardware, and to integrate those networks into applications and other software, according to Panay. The Qualcomm Neural Processing SDK for Windows, which extends Qualcomm's existing Neural Processing SDK toolset, includes tools for converting and running AI models on Snapdragon-based Windows devices, as well as APIs for targeting different processor cores with varied power and performance characteristics.
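Targeting cores with different power and performance characteristics typically boils down to a preference-ordered fallback: try the most efficient accelerator first, then degrade gracefully. The sketch below illustrates that pattern in Python; the `Runtime` names and the `select_runtime` helper are assumptions for illustration only, not part of Qualcomm's actual API.

```python
from enum import Enum
from typing import Iterable, Sequence


class Runtime(Enum):
    # Hypothetical target names; the real SDK defines its own identifiers.
    NPU = "npu"   # neural processing unit: best inference performance per watt
    GPU = "gpu"   # faster than CPU, but higher power draw than the NPU
    CPU = "cpu"   # universal fallback, always present


def select_runtime(
    available: Iterable[Runtime],
    preference: Sequence[Runtime] = (Runtime.NPU, Runtime.GPU, Runtime.CPU),
) -> Runtime:
    """Return the first preferred runtime the device actually reports."""
    available = set(available)
    for runtime in preference:
        if runtime in available:
            return runtime
    raise RuntimeError("no supported runtime available on this device")
```

On a device that exposes only a CPU and GPU, for instance, `select_runtime({Runtime.CPU, Runtime.GPU})` would skip the NPU and return `Runtime.GPU`.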


Qualcomm's new tooling benefits products beyond Project Volterra, notably laptops built on Qualcomm's new Snapdragon 8cx Gen 3 system-on-chip. Designed to compete with Apple's Arm-based hardware, the Snapdragon 8cx Gen 3 includes an AI accelerator, the Hexagon processor, that can apply AI processing to photos and video.


"When Project Volterra is released, the Qualcomm Neural Processing SDK for Windows, along with the Qualcomm AI Engine, part of the Snapdragon computing platform, will create better and innovative Windows experiences," a Qualcomm representative told TechCrunch via email. "Qualcomm AI Engine" refers to the combined AI processing capabilities of the CPU, GPU, and digital signal processor components (e.g., Hexagon) in top-of-the-line Snapdragon systems-on-chip.


Project Volterra also has a cloud component called hybrid loop, a "cross-platform development pattern" for building AI apps that span the cloud and the edge. The idea, according to Microsoft, is that developers can decide at runtime whether an AI app executes on Azure or on the local client. For added flexibility, the hybrid loop also lets them shift load dynamically between the client and the cloud.
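A rough sketch of what such a runtime decision might look like: the heuristic below picks between local and cloud execution from a few device signals. The `Device` fields, the thresholds, and the `choose_target` helper are illustrative assumptions, not Microsoft's actual hybrid loop API.

```python
from dataclasses import dataclass


@dataclass
class Device:
    """Hypothetical snapshot of local-client capabilities."""
    has_npu: bool      # device exposes a neural processing unit
    battery_pct: int   # remaining battery, 0-100
    online: bool       # cloud endpoint is reachable


def choose_target(device: Device, model_mb: int, *, local_limit_mb: int = 500) -> str:
    """Pick "local" or "cloud" execution for one inference request.

    Prefer the on-device NPU when the model fits and the battery allows;
    otherwise fall back to the cloud. An offline device must run locally.
    """
    if not device.online:
        return "local"
    if device.has_npu and model_mb <= local_limit_mb and device.battery_pct > 20:
        return "local"
    return "cloud"
```

The same check could run per request, which is what lets load shift between client and cloud as conditions (battery, connectivity, model size) change.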