Microsoft's SIGMA is here, and it helps you with tasks in real time

The system relies on HoloLens 2




SIGMA used by a tiny AI robot with HoloLens 2

Microsoft has released Situated Interactive Guidance, Monitoring, and Assistance (SIGMA), an open-source research prototype that combines mixed reality with AI to assist with real-world tasks. According to Microsoft, the system gives you step-by-step instructions tailored to your skill level and adapts when you make mistakes. In other words, SIGMA could help us fix objects around the house, cook, or build furniture.

How can you access Microsoft’s SIGMA?

SIGMA is available on GitHub, and you can try it right away. However, you will need a HoloLens 2 headset, because the system relies on its sensing and rendering capabilities. To scan your environment for useful objects, it uses vision models such as Detic and SEEM, and it generates its answers with the help of large language models (LLMs) such as GPT-4.
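To give a rough idea of how those pieces fit together, here is a minimal Python sketch of the perception-to-LLM flow described above. It is purely illustrative: SIGMA itself is built on Microsoft's .NET-based Platform for Situated Intelligence, and every function and class name below is a hypothetical stand-in rather than SIGMA's real API.

```python
# Illustrative sketch only: the detector and prompt-building functions are
# hypothetical stand-ins, not SIGMA's actual code.
from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str          # e.g. "screwdriver", as a detector like Detic/SEEM might report
    confidence: float   # detection confidence between 0 and 1


def detect_objects(rgb_frame) -> list[DetectedObject]:
    """Stand-in for an open-vocabulary detector scanning the headset's camera feed."""
    # A real system would run a model such as Detic or SEEM on the frame here;
    # this sketch just returns a fixed example result.
    return [DetectedObject("screwdriver", 0.92), DetectedObject("wood screw", 0.87)]


def build_llm_prompt(task_step: str, objects: list[DetectedObject]) -> str:
    """Combine the current task step with what the headset currently sees."""
    seen = ", ".join(o.label for o in objects if o.confidence > 0.5)
    return (f"The user is on step: '{task_step}'. Visible objects: {seen}. "
            f"Give the next instruction, and adapt if something is missing.")


if __name__ == "__main__":
    objects = detect_objects(rgb_frame=None)  # the frame would come from HoloLens 2
    prompt = build_llm_prompt("Attach the first table leg", objects)
    print(prompt)  # this prompt would then be sent to an LLM such as GPT-4
```

The idea is simply that the headset's camera feed is turned into a list of recognized objects, and that list is folded into the question the LLM is asked.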

Even though anyone can use SIGMA, the system is aimed primarily at researchers. Microsoft made it open-source to speed up research: instead of redoing the basic engineering, researchers can focus on the features they actually want to study.

Additional features for researchers

The developers built SIGMA on top of another open-source project, the Platform for Situated Intelligence (PSI) framework, which provides tools for visualization, debugging, and maintenance. The system also uses a client-server architecture.

Thanks to the PSI framework, SIGMA supports fast prototyping, so researchers can quickly build their own AI assistant prototypes. They can also use the built-in tools to replay the data they collect.

Because of this architecture, SIGMA works around the headset’s hardware limits: it sends the various data streams, including RGB (red-green-blue) color, depth, audio, and head-, hand-, and gaze-tracking information, to a desktop server, so the heavy processing happens on a more powerful machine. In the future, this could also make it possible to run the system on other devices.
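For a sense of what that client-server split looks like, here is a small, self-contained Python sketch that ships one bundle of sensor streams from a “headset” client to a “desktop” server over a local socket. Again, this is only an illustration: the real SIGMA pipeline streams its data through the Platform for Situated Intelligence, and the field names and transport here are assumptions made for the example.

```python
# Illustrative client/server sketch; SIGMA's real streaming layer is the
# Platform for Situated Intelligence, and these field names are assumptions.
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 9999   # in practice the server runs on a desktop PC
server_ready = threading.Event()


def run_server():
    """Desktop side: receive one sensor packet and do the heavy processing."""
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        server_ready.set()                       # let the client know we are listening
        conn, _ = srv.accept()
        with conn:
            packet = json.loads(conn.recv(4096).decode())
            # A real server would run the vision models and LLM calls here.
            print("Server received streams:", list(packet.keys()))


def run_client():
    """Headset side: package the sensor streams and ship them to the server."""
    packet = {
        "rgb": "<color frame>",     # color camera frame
        "depth": "<depth frame>",   # depth sensor frame
        "audio": "<audio chunk>",   # microphone audio
        "head_pose": [0.0, 1.6, 0.0],
        "hand_pose": [0.2, 1.1, 0.3],
        "gaze": [0.1, 1.5, 1.0],
    }
    with socket.socket() as cli:
        cli.connect((HOST, PORT))
        cli.sendall(json.dumps(packet).encode())


if __name__ == "__main__":
    t = threading.Thread(target=run_server)
    t.start()
    server_ready.wait()   # avoid connecting before the server is listening
    run_client()
    t.join()
```

The point is that the headset only captures and forwards the streams, while the desktop side is free to run the heavy vision models and LLM calls.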

Ultimately, SIGMA is one more step forward for Microsoft in the AI race. The system could eventually reshape research on AI assistants and bring them closer to everyday use. After all, who wouldn’t like an assistant capable of providing professional answers at any time? It could also be a great help in finding the objects we need.

What are your thoughts? Are you going to try SIGMA? Let us know in the comments.

More about the topics: AI, Microsoft, open source