Hello everyone.

We have decided to phase out the Mixed Reality Forums over the next few months in favor of other ways to connect with us.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

The plan between now and the beginning of May is to clean up old, unanswered questions that are no longer relevant. The forums will remain open and usable.

On May 1st we will lock the forums to new posts and replies. They will remain available for search for another three months, and then they will be closed altogether on August 1st.

So, where does that leave our awesome community to ask questions? Well, there are a few places we want to engage with you. For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality. If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers. And always feel free to hit us up on Twitter @MxdRealityDev.

Will we have access to the depth sensor, IR camera, and RGB camera data streams?

It's no secret this thing runs like a Kinect. Will we have access to the following streams?

Best Answer

  • The closest I've seen is the locatable camera: https://developer.microsoft.com/en-US/windows/holographic/locatable_camera, which gives RGB camera output along with 3D viewing-ray coordinates.

  • @Dwight_Goins_EE_MVP said:

    Currently the only thing that is exposed is the web camera. The depth is given to us at a much higher level through Spatial Mapping.

    I haven't gotten around to checking it out myself yet, but are you saying the coordinates are not currently exposed, or that I misread the documentation and they will never be exposed via the locatable camera? The 'Images with Coordinate Systems' section seems to strongly imply that spatial coordinates are provided along with the image capture.

  • The coordinates are exposed, but not an array of per-pixel depth values in meters. That's a Kinect concept, not a HoloLens concept. The raw depth values are processed by the HPU and exposed to us through the Spatial Mapping API, if I'm not mistaken. I don't see anywhere you can get depth as an image, but color as an image is exposed through Media Foundation and the media capture APIs.

    Dwight Goins
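
    For anyone landing here later, the split described in the answer above can be sketched roughly as pseudocode. The type and method names (SpatialSurfaceObserver, TryComputeLatestMeshAsync, MediaCapture) are my understanding of the UWP perception and media-capture APIs; treat this as a sketch of the two access paths, not working code.

        // Depth: no raw stream — the HPU's processed output arrives as
        // surface meshes via the Spatial Mapping (spatial surfaces) API.
        observer = new SpatialSurfaceObserver()
        observer.SetBoundingVolumes(volume around the user)
        for surface in observer.GetObservedSurfaces():
            mesh = await surface.TryComputeLatestMeshAsync(trianglesPerCubicMeter)
            // mesh vertex positions / triangle indices: reconstructed geometry,
            // not a per-pixel depth image in meters

        // Color: exposed as an ordinary camera through the media capture APIs,
        // with pose (the "locatable camera" coordinates) attached to each frame.
        capture = new MediaCapture()
        await capture.InitializeAsync()
        // ...capture photos or video frames as with any other Windows camera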
