
Hand Occlusion

I wanted to get other people's advice about hand occlusion and how to implement it. I am defining "hand occlusion" as the ability of the user's hands to mask out holograms distal to the position of the hand such that when the hand is placed between the user's head and the hologram, that portion of the hologram which would be occluded is no longer rendered.

Hand occlusion is important for maintaining the illusion of mixed reality. If I move my hand "in front of" a hologram, I should see my hand and not the hologram. When that fails, the user's brain spends a moment trying to work out where in space the hologram actually exists, which immediately breaks the illusion.

I believe that all the information needed to implement hand occlusion is already available; we just need to take advantage of it. The depth sensors deliberately don't perform spatial mapping on objects closer than 0.8 meters, specifically to avoid mapping the hands: https://developer.microsoft.com/en-us/windows/holographic/spatial_mapping_design

But the HoloLens is able to constantly determine the position and gesture of my hand. Surely we can build a dynamic mesh representing the shape of the hand, similar to the dynamic spatial-mapping mesh. That mesh could then be rendered with a solid black material in Unity to occlude objects more distal than the hand.
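For concreteness, here is roughly what I mean by a "solid black material". In Unity the usual trick is a depth-mask shader that writes only to the depth buffer, so any hologram behind the hand mesh fails the depth test and never renders; and since the HoloLens display is additive, the unrendered (black) region simply shows the real hand. This is just the standard Unity depth-mask pattern sketched from memory, not something I've tested on device:

```shaderlab
Shader "Custom/HandOcclusionMask" {
    SubShader {
        // Render before regular geometry so the depth buffer already
        // contains the hand when the holograms are drawn.
        Tags { "Queue" = "Geometry-10" }
        ColorMask 0   // write no color at all...
        ZWrite On     // ...only depth
        Pass { }
    }
}
```

Assign this shader to a material on the dynamic hand mesh and the holograms behind it should be masked out with no other changes.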

Answers

  • @mavasher,
    As @ahillier points out, the HoloLens only sees the hand in the ready or pressed poses. If the user holds their hand in any other position (even if it was previously ready) the HoloLens will no longer track that hand.

    Thanks!
    David

  • edited May 2016

    @DavidKlineMS said:
    @mavasher,
    As @ahillier points out, the HoloLens only sees the hand in the ready or pressed poses. If the user holds their hand in any other position (even if it was previously ready) the HoloLens will no longer track that hand.

    Thanks!
    David

    edit: I guess you're talking about the finger needing to be up or pressed for the hand tracking to update, which is true. I thought you meant that only the default gestures were tracked. My bad.

    (original post)
    Are you guys talking about tracking like this or something different? :smiley:

    https://www.youtube.com/watch?v=vi4mW5O_bQ8&feature=youtu.be

  • Thanks for the information, and @LeefromSeattle, good work; it's a step toward what I'm talking about.

    It looks like this is going to be hard to implement well enough that it will help maintain the illusion of the holograms existing behind the hands when they are in view. But I think it is such a key feature that future versions will have to integrate some solution to this problem.

  • Yeah, I really wish I could just plug a Leap Motion into the HoloLens and get full skeletal tracking. Being able to do finger gestures and interactions, hand rotations, etc. would make the HoloLens way more awesome.

  • @mavasher
    Here is what I have so far. I used a Leap Motion for recognizing the hand. There is still a small mismatch between the real hand and the hand returned by the Leap Motion, but with a few more position adjustments it should match pretty well. I am also thinking about using a chroma-key shader to detect the color of the hand to do the trick, but I am new to Unity and HoloLens and have had little luck with either (since the HoloLens doesn't really have a video rendering process).
    https://youtu.be/Yohj2ffvERA

  • Wow. A few questions:

    1. How did you integrate the Leap Motion data? Since the Leap Motion device needs a USB 3 port and appropriate drivers, I assume you're using a second computer to process the Leap input, correct? More than a few follow-up questions on this one...
    2. Are you using the Orion Leap Motion drivers?
    3. Where is the Leap Motion (i.e. is it below your hands, or did you mount it on or near the HoloLens)?

  • @Zen Neat stuff! @DocStrange's approach makes sense to me. Are you effectively streaming it to the HoloLens from another computer? If so, capture and transmission lag would need to stay within the 16.7 ms needed to render a frame.
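For reference, the 16.7 ms figure is just the period of the HoloLens's 60 Hz refresh. A quick back-of-the-envelope check; the individual latency numbers below are made-up placeholders, not measurements:

```python
# At 60 frames per second, each frame has 1000 ms / 60 ~= 16.7 ms.
FRAME_BUDGET_MS = 1000.0 / 60.0

# Hypothetical end-to-end latency for the Leap-over-wifi setup:
# capture on the PC, transmission over wifi, render on the device.
capture_ms, transmit_ms, render_ms = 5.0, 8.0, 3.0
total_ms = capture_ms + transmit_ms + render_ms

print(f"budget: {FRAME_BUDGET_MS:.1f} ms, spent: {total_ms:.1f} ms")
if total_ms > FRAME_BUDGET_MS:
    print("hand mesh will trail the real hand by at least a frame")
```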

  • @DocStrange Yes, I am using a computer to process the Leap Motion data and send it to the HoloLens wirelessly over wifi. Also yes, I am using the Orion software. Orion has some great capabilities, but I am still waiting for features like Image Hands, which hasn't been ported to Orion yet. I put (taped) the Leap Motion on top of the HoloLens.
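Roughly, the PC side of that pipeline can be sketched like this. Everything here is a placeholder of my own (the JSON packet layout, the joint names, the port); it is not the actual Leap SDK API, just the shape of a sender that ships one hand frame per UDP datagram:

```python
import json
import socket

def pack_hand_frame(joints):
    """Serialize {joint_name: (x, y, z)} positions (meters) to JSON bytes."""
    return json.dumps(
        {name: [round(c, 4) for c in pos] for name, pos in joints.items()}
    ).encode("utf-8")

def unpack_hand_frame(payload):
    """Inverse of pack_hand_frame, for the receiving end on the HoloLens."""
    return {name: tuple(pos) for name, pos in json.loads(payload).items()}

def send_frame(sock, payload, addr):
    # UDP is fire-and-forget: a dropped frame is simply superseded by the
    # next one, which is fine for a continuous stream of hand poses.
    sock.sendto(payload, addr)

if __name__ == "__main__":
    # Replace 127.0.0.1 with the HoloLens's address on your wifi network.
    hololens_addr = ("127.0.0.1", 9000)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frame = {"palm": (0.0, 0.25, -0.1), "index_tip": (0.03, 0.28, -0.12)}
    send_frame(sock, pack_hand_frame(frame), hololens_addr)
    sock.close()
```

On the HoloLens side, a Unity script would read datagrams off a socket each frame, unpack the joints, and move the occlusion mesh to match.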
