Hello everyone.

We have decided to phase out the Mixed Reality Forums over the next few months in favor of other ways to connect with us.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

The plan between now and the beginning of May is to clean up old, unanswered questions that are no longer relevant. The forums will remain open and usable.

On May 1st we will be locking the forums to new posts and replies. They will remain available for another three months for the purposes of searching them, and then they will be closed altogether on August 1st.

So, where does that leave our awesome community to ask questions? Well, there are a few places we want to engage with you. For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality. If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers. And always feel free to hit us up on Twitter @MxdRealityDev.

Mixed Reality HMD Clarification

Hi, I get that the Acer and HP headsets should be capable of inside-out head tracking and of displaying 3D virtual reality graphics. But I'm wondering whether they and the SDK can actually mix camera images into the graphics before they are displayed in the headset. Is the headset capable of delivering the color and depth of what a user is looking at into the render pipeline? Do the cameras on the front actually capture all the necessary information, or are they used only for positional tracking?

Thanks for any clarification. I have experience tracking the ZED depth-sensing camera in space and matching a 3D camera/projection to the ZED camera's specs, then exposing the ZED camera's color and depth image buffers to the render pipeline so as to mix/augment real-world images with 3D graphics. The depth information lets the render process tell, on a per-pixel level, whether the 3D graphics are in front of or behind what the camera captures.
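The per-pixel occlusion test described above can be sketched roughly like this (a minimal numpy illustration of depth compositing, not actual ZED SDK code; all array names and values here are hypothetical):

```python
import numpy as np

def composite(camera_rgb, camera_depth, render_rgb, render_depth):
    """Per-pixel depth compositing: show the rendered graphics only
    where they are closer to the viewer than the camera image."""
    # Boolean mask: True where the virtual geometry occludes the real scene
    graphics_in_front = render_depth < camera_depth
    # Broadcast the mask over the color channels and pick per pixel
    return np.where(graphics_in_front[..., None], render_rgb, camera_rgb)

# Tiny example: a 2x2 frame where only the top-left pixel's graphics are closer
camera_rgb   = np.zeros((2, 2, 3), dtype=np.uint8)      # black camera image
render_rgb   = np.full((2, 2, 3), 255, dtype=np.uint8)  # white graphics
camera_depth = np.array([[1.0, 0.5], [0.5, 0.5]])       # distance from viewer
render_depth = np.array([[0.4, 0.9], [0.9, 0.9]])
out = composite(camera_rgb, camera_depth, render_rgb, render_depth)
# Only the top-left pixel shows the graphics (white); the rest stay camera black
```

In a real pipeline this comparison happens in the depth buffer on the GPU rather than on CPU arrays, but the principle is the same: a per-pixel depth comparison decides whether the real or virtual image wins.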

Are the headsets capable of mixing the camera images into the typical 3D pipeline before the 3D graphics are lens corrected?

Answers

  • Is the headset capable of delivering color and depth of what a user is looking at into the render pipeline?

This is not exposed to developers by the HoloLens SDK, so it will not be available on the immersive headsets either, since they run plain Windows 10. The front-facing cameras are not Kinect 2 sensors, which is the only Microsoft camera that gives developers access to a depth stream.

  • Thanks for the response. While I completely back inside-out head tracking, I think these developer-oriented hardware releases should be clearer about the specifics developers care about, like what the sensors are actually capable of delivering to an application.

  • @Poppyspy: The reality is, only a very small number of the headsets are out in developers' hands, and they are the only ones with the new SDK, so we don't really know what APIs are available on the new headsets. It's not just Windows 10: the immersive headsets are also on the Mixed Reality platform, so there are APIs beyond normal Windows 10. We just haven't been given access to them yet.
