Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first place we want to connect with you is our Mixed Reality Developer Program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Unity locatable camera depth example

It's a bit hard to test with just the emulator, so I'm curious whether someone can give a code example of grabbing frames via the locatable camera and combining them with depth information. For example, say I grab a low-resolution frame from the camera and want to remove anything past 3 meters from the image: are there any examples of how I'd do this, and how computationally intensive is it? Can I grab a frame per second without much slowdown? This seems like it could be quite powerful, but I can't really experiment without a device.
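For what it's worth, the depth cut itself is cheap once you have a per-pixel depth map aligned with the frame (the locatable camera itself only returns RGB plus pose matrices, so the depth would have to come from another source, such as the spatial-mapping mesh reprojected into the camera view). A minimal sketch of the masking step in Python/NumPy, with the function name and inputs purely illustrative:

```python
import numpy as np

def mask_beyond(frame, depth, max_dist=3.0):
    """Zero out pixels whose depth exceeds max_dist (meters).

    frame: H x W x 3 uint8 RGB image from the camera
    depth: H x W float depth map aligned to the frame (assumed to be
           available from elsewhere; the locatable camera is RGB-only)
    """
    keep = depth <= max_dist           # boolean mask: one compare per pixel
    return frame * keep[..., None]     # broadcast the mask over color channels
```

This is one comparison and one multiply per pixel, so at low resolution it is nowhere near a bottleneck at one frame per second; the hard part is obtaining an aligned depth map, not applying the cut.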

Comments

  • Sounds like you could use a pixel shader like the one mentioned here to render any pixels past 3 meters transparent.

    I too would be interested in seeing sample apps with source that leverage the locatable camera.

    It sounds like the Photo Booth app from Build is going to be open sourced in the near future. I would speculate that there's a decent chance some of the code in that app leverages the locatable camera.

    Windows Holographic User Group Redmond

    WinHUGR.org | @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  • Thanks, I'll look into the shader. I'd really like some way to get an actual image output, not just a shader effect, with everything beyond a certain depth removed. That would let me send better data to libraries like OpenCV. The shader looks like it'll give me a good start on the math of it, though.
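Getting an image output rather than a shader effect comes down to turning the depth cut into a mask array that CV libraries can consume directly. A hypothetical sketch in Python/NumPy (the OpenCV call is shown only as a comment, since it assumes cv2 is available; zero is treated as "no depth reading"):

```python
import numpy as np

def depth_cutoff_mask(depth, max_dist=3.0):
    """Convert a float depth map into a single-channel uint8 mask (255 = keep).

    Pixels with no depth reading (0) or beyond max_dist meters are dropped.
    Many OpenCV functions accept exactly this kind of uint8 mask argument,
    so the depth cut can be computed once and reused across calls.
    """
    return ((depth > 0) & (depth <= max_dist)).astype(np.uint8) * 255

# Example use with OpenCV (assuming cv2 and an aligned RGB frame):
# masked = cv2.bitwise_and(frame, frame, mask=depth_cutoff_mask(depth))
```

Because the mask is just a boolean compare over the array, producing it per frame adds negligible cost on top of whatever depth source is used.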
