Build .NET apps for the metaverse with StereoKit

Much of the Windows Mixed Reality platform depends on Unity. However, Unity is not always the best option, not least because of a licensing model that remains focused on the gaming market. There are alternatives. You can use WebXR in an embedded browser, or work with the cross-platform tools in Power Platform, which are built around the Babylon.js React Native implementation. But if you're working with .NET code and want to extend it to augmented and virtual reality, you need a set of .NET mixed reality libraries.

OpenXR: an open mixed reality standard

Fortunately, there is an open, standards-based approach to working with mixed reality and a set of .NET tools to go with it. The Khronos Group is the industry body responsible for graphics standards such as OpenGL and OpenCL that help code get the most out of GPU hardware. As part of its remit, it manages the OpenXR standard, which is designed to let you write code once and run it on any headset or augmented reality device. With runtimes from Microsoft, Oculus, and Collabora, among others, OpenXR code should work on most platforms that can host .NET code.

OpenXR's cross-platform and cross-device nature makes it possible to have a single codebase that delivers mixed reality to all supported platforms, provided you use a language or framework that runs on those platforms. Since the modern .NET platform now supports most of the places you're likely to want to host OpenXR applications, the Microsoft-sponsored StereoKit tool is an ideal candidate for building those applications, especially alongside cross-platform UI tools like MAUI hosting non-OpenXR content. You can find the project on GitHub.

Since it is being developed by the same team as the Windows Mixed Reality Toolkit, there are plans to support Microsoft's Mixed Reality Design Language. This should give the two tools a similar feature set, so you can bring what would otherwise be Unity-based applications into the broader C# development ecosystem.

Working with StereoKit

StereoKit is designed specifically to take your 3D assets and display them in an interactive mixed reality environment, with a focus on performance and a concise API that simplifies writing code. It's aimed at C# developers, although there's additional support for C and C++ if you need to get closer to the hardware. Although originally designed for HoloLens 2 and augmented reality applications, the tool is equally suitable for building virtual reality code and augmented reality on mobile devices.

Currently, platform support is focused on 64-bit applications, and StereoKit is delivered as a NuGet package. Windows desktop developers currently only have access to x64 code, although you should be able to use the ARM64 HoloLens Universal Windows Platform (UWP) build on other ARM hardware such as the Surface Pro X. The Linux package has both x64 and ARM64 support; Android apps will only run on ARM64 devices (although testing should work through the Android Bridge technology used by the Windows Subsystem for Android on Intel hardware). Unfortunately, StereoKit can't be fully cross-platform at present: there is no iOS implementation, because there is no official OpenXR runtime for iOS. Apple is focusing on its own ARKit tool, so as a workaround the StereoKit team is working on a WebAssembly implementation that should run anywhere there's a WebAssembly-compatible JavaScript runtime.

Developing with StereoKit shouldn’t be too difficult for anyone who has written .NET UI code. It’s probably best to work with Visual Studio, although there’s no reason you can’t use any other .NET development environment that supports NuGet. Visual Studio users will need to ensure they have enabled desktop .NET development for Windows OpenXR apps, UWP for apps targeting HoloLens, and mobile .NET development for Oculus and other Android-based hardware. You’ll need the OpenXR runtime to test the code, with the option of using a desktop simulator if you don’t have a headset. One advantage of working with Visual Studio is that the StereoKit development team has provided a set of Visual Studio templates that can speed up getting started by loading prerequisites and filling in some boilerplate code.

Most developers will probably want the .NET Core template, since it works with modern .NET implementations on Windows and Linux and sets you up for cross-platform development. Cross-platform .NET development is now focused on tools like MAUI and WinUI, so the UWP implementation is likely to become less important over time, especially if the team ships a WebAssembly version.

Build your first mixed reality C# application

When building in StereoKit, well-defined 3D primitives help simplify the creation of objects in mixed reality space. Drawing a cube (the mixed reality version of "Hello, world") can be done in a few lines of code; another example, a free-space drawing application, runs to just over 200 lines of C#. The library handles most interactions with OpenXR for you, allowing you to work directly with your environment instead of implementing low-level drawing functions or writing code to handle different cameras and displays.
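That "Hello, world" cube really is just a few lines. Here is a minimal sketch, assuming the StereoKit NuGet package is referenced and an OpenXR runtime (or the desktop simulator) is available; the app name is illustrative:

```csharp
using StereoKit;

class Program
{
    static void Main(string[] args)
    {
        // Connect to the active OpenXR runtime, or fall back
        // to StereoKit's desktop simulator if none is present.
        SK.Initialize(new SKSettings { appName = "HelloCube" });

        // A 10 cm cube using StereoKit's default material.
        Mesh     cube = Mesh.GenerateCube(Vec3.One * 0.1f);
        Material mat  = Material.Default;

        // SK.Run invokes this lambda once per frame until the app quits.
        SK.Run(() =>
        {
            // Draw the cube half a meter in front of the origin.
            cube.Draw(mat, Matrix.T(0, 0, -0.5f));
        });
    }
}
```

Note that there's no explicit render loop, swap chain, or camera setup; StereoKit drives the frame loop through `SK.Run` and handles the per-eye rendering itself.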

When writing code, you'll need to consider some key differences between traditional desktop applications and working in StereoKit. Perhaps the most important is state management. StereoKit uses an immediate-mode model: UI elements are declared every frame, with as little state as possible stored between frames. There are aspects of this approach that simplify things considerably. All elements of the user interface are hierarchical, so turning off one element automatically removes its child elements.
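That immediate-mode pattern means your UI code runs once per frame. A sketch of a simple window with a button illustrates it; the only state kept between frames is the window's `Pose` (the window title and position values here are illustrative):

```csharp
using StereoKit;

SK.Initialize(new SKSettings { appName = "ImmediateUI" });

// Persistent state: just the window's position and orientation.
Pose windowPose = new Pose(0, 0, -0.4f, Quat.LookDir(0, 0, 1));

SK.Run(() =>
{
    // The window and button are declared fresh every frame;
    // StereoKit keeps no retained widget objects for them.
    UI.WindowBegin("Settings", ref windowPose);
    if (UI.Button("Reset position"))
        windowPose = new Pose(0, 0, -0.4f, Quat.LookDir(0, 0, 1));
    UI.WindowEnd();
});
```

Because nothing is retained, simply not calling `UI.WindowBegin` on a given frame makes the window, and everything declared inside it, disappear.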

This approach allows you to attach UI elements to other objects in your model. StereoKit supports many standard 3D object formats, so all you need to do is load a model from a file, then define its interactions and add a layout area to the model, which acts as a host for UI elements and makes the object the top of the UI hierarchy. It's important not to reuse element IDs within a UI object, as they form the basis of StereoKit's minimal interaction state model and are used to track which elements are currently active and available for user interaction.
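Attaching UI to a loaded model follows the same per-frame pattern. A hedged sketch, assuming a `model.glb` asset of your own (StereoKit loads glTF among other formats), using `UI.Handle` to make the model a movable root for interaction:

```csharp
using StereoKit;

SK.Initialize(new SKSettings { appName = "ModelUI" });

Model model     = Model.FromFile("model.glb"); // your own asset
Pose  modelPose = new Pose(0, 0, -0.6f, Quat.Identity);

SK.Run(() =>
{
    // The handle ID ("modelRoot") must be unique within the UI;
    // StereoKit uses these IDs to track which elements are active
    // between frames, so reusing one would confuse interactions.
    UI.Handle("modelRoot", ref modelPose, model.Bounds);

    // Draw the model at wherever the user has dragged its handle.
    model.Draw(modelPose.ToMatrix());
});
```

The handle makes the whole model grabbable, and `modelPose` becomes the transform under which any attached UI elements are positioned.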

StereoKit takes a hands-first approach to mixed reality interactions, using hand-tracking sensors such as HoloLens' tracking cameras where they're available, or simulating hands for mouse and gamepad controls. Hands are displayed in the interaction space and can be used to position other UI elements relative to the hands themselves, for example keeping a control menu close to the user's hands no matter where they are in the application space.
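Hand data comes straight from StereoKit's input API. A sketch of the hand-anchored-menu idea, using `Input.Hand` to keep a small window floating near the left palm (the offset and the menu contents are illustrative assumptions):

```csharp
using StereoKit;

SK.Initialize(new SKSettings { appName = "HandMenu" });

Pose menuPose = new Pose(0, 0, -0.4f, Quat.Identity);

SK.Run(() =>
{
    Hand hand = Input.Hand(Handed.Left);
    if (hand.IsTracked)
    {
        // Float the menu a few centimeters above the palm,
        // oriented to face the user's head.
        menuPose.position    = hand.palm.position + Vec3.Up * 0.05f;
        menuPose.orientation = Quat.LookAt(menuPose.position,
                                           Input.Head.position);
    }
    UI.WindowBegin("Menu", ref menuPose);
    UI.Button("Action");
    UI.WindowEnd();
});
```

On hardware without hand tracking, the same code works unchanged because StereoKit simulates a hand from mouse or gamepad input.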

If you need inspiration for implementing certain features, a useful library of demo scenes is available in the StereoKit GitHub repository. They include sample code for working with controllers and handling hand input, among other essential elements of mixed reality interaction. The code is well documented and offers plenty of tips on using key elements of the StereoKit API.

Removing Microsoft’s dependency on Unity for mixed reality is a good thing. Having its own open source tool ensures that mixed reality is a first-class citizen in the .NET ecosystem, supported by as much of that ecosystem as possible. Targeting OpenXR is also key to StereoKit’s success as it provides a common level of support across mixed reality devices like HoloLens, virtual reality like Oculus, and augmented reality on Android. You’ll be able to use the same project to target different devices and integrate with familiar tools and technologies like MAUI. Mixed reality doesn’t have to be a separate aspect of your code. StereoKit makes it easy to introduce it into existing .NET projects without the need for significant changes. After all, it’s just another UI layer now!

Copyright © 2022 IDG Communications, Inc.
