Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Depth buffer access / detailed local spatial mesh including user hands

Hello everyone!

I just wonder if there is an easy (or even hard) way to access the HoloLens depth buffer data directly. We think that for our application it would be great to get the depth map straight from the user's viewpoint, so we can do some custom processing and statistics on the areas covered by the user's hands.

I guess this is not the usual use case, but it seems there should be a way to get this out of the system, or at least a continuous spatial mapping that also records the user's hands. Of course, I am talking about the hands being in the user's view!

Why would this be great for us? Basically, we need some custom processing on top of this data: guessing the "protected area" in boxing games and things like that. There are a lot of things we could use this for without overdoing the custom processing, so we are confident that what we want will not slow the system down; we would just be happy to be able to do this, and we are investigating whether it is possible.

We plan to use C++ with DirectX, and we are not afraid of a little low-level programming, so we don't really mind whether this is possible in Unity or not; however, I hope the answer to that part can be helpful for others. We already plan to roll our own engine for the HoloLens anyway, so even if achieving this properly takes a harder implementation, we don't mind. I hope there is a solution.

Richard Thier
IT expert
Grepton Zrt.

Answers

  • edited August 2016

    Raw depth map values have not been exposed to us. Maybe if we ask nicely, Microsoft and the HL team could provide a firmware and API update that does this.

    However, this data currently isn't available from the HPU.

    Check out this post for more info: https://forums.hololens.com/discussion/207/access-to-raw-depth-data-stream

    If you look at Spatial Mapping, you are given a mesh: vertices, indices, and normals you can use to reconstruct a surface. You then apply a scaling factor to each vertex position to get its coordinates in meters.

    This will give you depth from a surface mesh, but not a per-pixel depth map like we've seen on the Kinect devices.
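    A rough sketch of that idea, with all names and the data layout ours (the real API hands you packed vertex buffers, and the per-surface scale factor is an assumption based on the description above): scale each vertex to meters, then measure its distance from the head position to get a per-vertex "depth".

```cpp
#include <cmath>
#include <vector>

struct Float3 { float x, y, z; };

// Sketch: scale each mesh vertex to meters using the per-surface
// scale factor, then take its Euclidean distance from the head
// (camera) position. Coarse compared to a depth image, but enough
// for range statistics over the mapped surfaces.
std::vector<float> VertexDepths(const std::vector<Float3>& vertices,
                                const Float3& scale,        // per-surface scale factor
                                const Float3& headPosition) // camera origin in the same space
{
    std::vector<float> depths;
    depths.reserve(vertices.size());
    for (const Float3& v : vertices) {
        float dx = v.x * scale.x - headPosition.x;
        float dy = v.y * scale.y - headPosition.y;
        float dz = v.z * scale.z - headPosition.z;
        depths.push_back(std::sqrt(dx * dx + dy * dy + dz * dz));
    }
    return depths;
}
```

    Note this only covers surfaces the system has already meshed, which is why it won't pick up fast-moving hands.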

    P.S. Tag your original post with DirectX so it doesn't get lost among the Unity threads.

    Dwight Goins
    CAO & Founder| Independent Architect | Trainer and Consultant | Sr. Enterprise Architect
    MVP | MCT | MCSD | MCPD | SharePoint TS | MS Virtual TS |Windows 8 App Store Developer | Linux Gentoo Geek | Raspberry Pi Owner | Micro .Net Developer | Kinect For Windows Device Developer
    http://dgoins.wordpress.com

  • Hello Dwight!

    Thank you for your answer. I understand how to get depth from meshes and the like; I am just quite unsure we can use the spatial-mapping meshes for what we have in mind, as they are really different from a depth stream. Spatial mapping seems really great for the purpose it was made for: providing a low/mid-poly mesh of the (mostly static) surroundings at a usable resolution, with which hologram occlusion, interaction with walls, etc. can be implemented. We would indeed aim for much denser, unprocessed data, like the guys in the other thread (but for a different purpose).

    I don't have a deep understanding of the underlying HPU technology, so I don't know whether the depth map even exists, but if it is there, just hidden by the abstraction layers, it would be helpful to give developers access to it.

    The spatial mapping is of course great too, just not for this. Hopefully we can get this depth data out of the APIs soon.

    Best regards,
    Richard Thier
    IT expert - R&D dep. Grepton Zrt.
