Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our Mixed Reality Developer Program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Can we detect left vs right hand?

So far, no joy in the docs or anywhere else. InteractionManager gives a hand source id, but the id is assigned per tracked/visible interaction (i.e., once hand tracking is lost, putting the same hand back in the FOV yields a new id).
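
For reference, here's a minimal Unity sketch that logs those ids (assuming the Unity 5.x HoloLens API in UnityEngine.VR.WSA.Input; HandIdLogger is just an illustrative name). Move a hand out of the FOV and back in, and the log shows a fresh id each time:

    using UnityEngine;
    using UnityEngine.VR.WSA.Input;

    public class HandIdLogger : MonoBehaviour
    {
        void Awake()
        {
            InteractionManager.SourceDetected += OnSourceDetected;
            InteractionManager.SourceLost += OnSourceLost;
        }

        void OnDestroy()
        {
            InteractionManager.SourceDetected -= OnSourceDetected;
            InteractionManager.SourceLost -= OnSourceLost;
        }

        void OnSourceDetected(InteractionSourceState state)
        {
            if (state.source.kind != InteractionSourceKind.Hand) return;
            // A new id is assigned every time a hand (re)enters the FOV, so the
            // id alone can't tell you whether this is the left or right hand.
            Debug.Log("Hand detected, id: " + state.source.id);
        }

        void OnSourceLost(InteractionSourceState state)
        {
            if (state.source.kind != InteractionSourceKind.Hand) return;
            Debug.Log("Hand lost, id: " + state.source.id);
        }
    }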

Answers

  • Thanks for the link.

    Along with a bunch of other questions, I gotta wonder why. Is it a technical limitation? That seems somewhat... strange; surely it's not difficult to detect. But if that's not the case, then why the limitation?

    Maybe if/when we are finally granted access to the raw data, these questions will make sense.

  • @MrSteveT out of curiosity, what would you do differently if the user used the left hand vs. the right hand? From a usability standpoint, how would you indicate to a user that using one hand vs. the other would yield a different result?

  • MrSteveT
    edited August 2016

    I would double the number of user commands that could be issued :). And I'm developing an interaction-heavy app, so having a single user action (drag hand) made for a bit of an icky workflow. One idea was: move with the right hand, rotate with the left, and two hands for pinch-to-zoom.

    It's a trade-off between hiding the meaning of the movement in some kind of in-scene toggle ("Why does it move now, when it caused a rotation a second ago?") and encoding the meaning of the movement in the user's right-vs-left hand choice. From a usability standpoint, both require minimal learning, but one is much easier for the user to trigger. Also, using left vs. right gives you consistent feedback on every user action, which makes for a more natural learning curve. Not to mention I find selecting (to toggle the drag action) with gaze very annoying once you start doing a lot of manipulation; your eyes are rarely looking straight ahead when moving around a scene.

    I've worked around the limitation by implementing a 'steering-wheel' action for rotate, pinch for zoom, and a one-handed drag for positioning (rough sketch below). I think it works fantastically; it's certainly a lot more enjoyable to interact with :)
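
    Roughly, the two-handed part looks like this (a sketch under the same Unity 5.x UnityEngine.VR.WSA.Input assumption as above; TwoHandManipulator and the exact zoom/rotate mapping are illustrative, not the code from my app):

        using System.Collections.Generic;
        using UnityEngine;
        using UnityEngine.VR.WSA.Input;

        public class TwoHandManipulator : MonoBehaviour
        {
            // Latest known position per source id. Ids are transient (see above),
            // but they stay stable while a hand remains tracked, which is enough here.
            private readonly Dictionary<uint, Vector3> hands = new Dictionary<uint, Vector3>();
            private bool hadTwoHands;
            private float lastDistance;
            private float lastAngle;

            void Awake()
            {
                InteractionManager.SourceUpdated += OnSourceUpdated;
                InteractionManager.SourceLost += OnSourceLost;
            }

            void OnDestroy()
            {
                InteractionManager.SourceUpdated -= OnSourceUpdated;
                InteractionManager.SourceLost -= OnSourceLost;
            }

            void OnSourceLost(InteractionSourceState state)
            {
                hands.Remove(state.source.id);
                hadTwoHands = false;
            }

            void OnSourceUpdated(InteractionSourceState state)
            {
                if (state.source.kind != InteractionSourceKind.Hand) return;

                Vector3 position;
                if (state.properties.location.TryGetPosition(out position))
                    hands[state.source.id] = position;

                if (hands.Count != 2)
                {
                    // One hand tracked: the one-handed drag for positioning would go here.
                    hadTwoHands = false;
                    return;
                }

                // Sort ids so the hand-to-hand vector keeps a consistent direction
                // while the same two hands stay tracked.
                var ids = new List<uint>(hands.Keys);
                ids.Sort();
                Vector3 delta = hands[ids[1]] - hands[ids[0]];

                float distance = delta.magnitude;                             // pinch-to-zoom
                float angle = Mathf.Atan2(delta.y, delta.x) * Mathf.Rad2Deg;  // steering wheel

                if (hadTwoHands && lastDistance > 0f)
                {
                    // Scale by the change in hand separation, rotate around the
                    // world forward axis by the change in wheel angle.
                    transform.localScale *= distance / lastDistance;
                    transform.Rotate(Vector3.forward, angle - lastAngle, Space.World);
                }

                lastDistance = distance;
                lastAngle = angle;
                hadTwoHands = true;
            }
        }

    The one-handed drag is the same idea with a single tracked position, so I've left it as a comment.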
