Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

We would like to direct you to a few other places for support, both from Microsoft and from the community.

The first way to connect with us is through our Mixed Reality Developer Program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Mixed Reality Headset Clarification

Hi, I understand that the Acer and HP headsets are capable of inside-out head tracking and displaying 3D virtual reality graphics. But I'm wondering whether they and the SDK are capable of actually mixing camera images into the graphics before they are displayed in the headset. Can the headset deliver the color and depth of what the user is looking at into the render pipeline? Do the cameras on the front actually capture all the necessary information, or are they used only for positional tracking?

Thanks for any clarification. For background: I have experience tracking a ZED depth-sensing camera in space and matching a 3D camera/projection to the ZED camera's specs, then exposing the ZED camera's color and depth image buffers to the render pipeline in order to mix/augment real-world images with 3D graphics. The depth information lets the render process tell whether 3D graphics are in front of or behind what the camera captures.
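To make the occlusion idea concrete, here is a minimal CPU-side sketch of that per-pixel depth comparison. The buffer names and layout are hypothetical, not tied to the ZED SDK or any Windows Mixed Reality API, and in a real pipeline this test would run per-fragment in a shader after both depth buffers are brought into the same units and coordinate convention:

```cpp
#include <cstdint>
#include <vector>
#include <limits>

struct RGBA { uint8_t r, g, b, a; };

// camColor/camDepth: color and depth (meters) captured by the camera.
// vrColor/vrDepth:   color and depth produced by the 3D render pass.
// out:               composited frame, same resolution as the inputs.
void compositeCameraAndGraphics(const std::vector<RGBA>&  camColor,
                                const std::vector<float>& camDepth,
                                const std::vector<RGBA>&  vrColor,
                                const std::vector<float>& vrDepth,
                                std::vector<RGBA>&        out)
{
    for (size_t i = 0; i < out.size(); ++i) {
        // Treat invalid camera depth (e.g. 0 or NaN) as infinitely far,
        // so rendered graphics always win where the sensor has no data.
        float cd = (camDepth[i] > 0.0f)
                       ? camDepth[i]
                       : std::numeric_limits<float>::infinity();
        // The closer surface occludes the other: real-world pixels hide
        // virtual geometry behind them, and vice versa.
        out[i] = (vrDepth[i] < cd) ? vrColor[i] : camColor[i];
    }
}
```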

Are the headsets capable of mixing the camera images into the typical 3D pipeline before the 3D graphics are lens-corrected?
