
Combine hand position with gaze for cursor navigation and click

I've been using the HoloLens for two weeks now, and some co-workers have tried it too. What I always see is that people almost intuitively try to use their hand not only to click but also to navigate, and they keep trying even after I explain that you use gaze to focus on an item and your hand to click.

I've been playing with the API for a while, and I've found that the HoloLens actually knows your hand position whenever your hand is in the ready, click, or bloom pose; that's what makes manipulation gestures possible in the API. I've made some attempts to combine gaze with this extra information to move the cursor around with your hand, and I think this is a far better solution than gaze alone.
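
To make the idea concrete, here's a rough Unity sketch of what I mean. It assumes the InteractionManager API from UnityEngine.XR.WSA.Input; the cursor object, the sensitivity value, and the fixed 2 m gaze distance are just placeholders I picked for the example:

    using UnityEngine;
    using UnityEngine.XR.WSA.Input;

    // Sketch: nudge the gaze cursor by the hand's movement while the hand
    // is tracked (ready/press pose), instead of steering with the head alone.
    public class HandOffsetCursor : MonoBehaviour
    {
        public Transform cursor;          // the gaze cursor object (placeholder)
        public float sensitivity = 2.0f;  // hand-to-cursor movement scale (tune this)

        private bool hasLastHandPos;
        private Vector3 lastHandPos;
        private Vector3 cursorOffset = Vector3.zero;

        void Awake()
        {
            InteractionManager.InteractionSourceUpdated += OnSourceUpdated;
            InteractionManager.InteractionSourceLost += OnSourceLost;
        }

        void OnDestroy()
        {
            InteractionManager.InteractionSourceUpdated -= OnSourceUpdated;
            InteractionManager.InteractionSourceLost -= OnSourceLost;
        }

        private void OnSourceUpdated(InteractionSourceUpdatedEventArgs args)
        {
            if (args.state.source.kind != InteractionSourceKind.Hand)
                return;

            Vector3 handPos;
            if (args.state.sourcePose.TryGetPosition(out handPos))
            {
                // Accumulate the hand's frame-to-frame movement as a cursor offset.
                if (hasLastHandPos)
                    cursorOffset += (handPos - lastHandPos) * sensitivity;
                lastHandPos = handPos;
                hasLastHandPos = true;
            }
        }

        private void OnSourceLost(InteractionSourceLostEventArgs args)
        {
            hasLastHandPos = false;        // hand left the tracking frustum
            cursorOffset = Vector3.zero;   // fall back to pure gaze
        }

        void Update()
        {
            // Start from a point along the gaze ray and add the hand offset.
            Vector3 basePoint = Camera.main.transform.position
                              + Camera.main.transform.forward * 2.0f; // 2 m, arbitrary
            cursor.position = basePoint + cursorOffset;
        }
    }

In practice you would clamp the offset and raycast from the shifted point to pick targets, but even something this rough behaves much more like a mouse than head movement alone does.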

One of the biggest disadvantages of using gaze alone is that you have to move your head for even the simplest task, which also means that other important content can drift out of your field of view while you do it. With something like a Bluetooth mouse, you don't have that problem.

I think a solution combining hand position and gaze could work much the way a Bluetooth mouse does, and in my opinion it is a far better approach.

Comments

  • AlexD ✭✭✭

    I somewhat disagree. Yes, you can track hand position and use it to move a cursor, but that goes against the fundamental interaction model introduced by the HoloLens: GGV (Gaze, Gesture, Voice).

    Sure, you can move the cursor with your hand, and we sort of do that in the shell when you move windows around, but it is prone to errors and limited by the tracking camera's field of view.

    Your example is actually a textbook case for introducing Voice in your application. If you need to execute a task without breaking gaze contact with an object, you could use a voice command to accomplish it. Unlike Gesture, which acts only on the gazed object, Voice can work on any element in the scene.
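
    For reference, here is a minimal sketch of how I would wire that up with Unity's KeywordRecognizer; the phrases and the handler body are only examples:

        using UnityEngine;
        using UnityEngine.Windows.Speech;

        // Sketch: voice commands that can act on any element in the scene,
        // no gaze contact required.
        public class VoiceCommands : MonoBehaviour
        {
            private KeywordRecognizer recognizer;

            void Start()
            {
                recognizer = new KeywordRecognizer(new[] { "next step", "reset view" });
                recognizer.OnPhraseRecognized += OnPhraseRecognized;
                recognizer.Start();
            }

            private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
            {
                // Dispatch on args.text here; the phrases above are placeholders.
                Debug.Log("Heard: " + args.text);
            }

            void OnDestroy()
            {
                recognizer.Dispose();
            }
        }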

  •

    I think that basing everything on the GGV model currently used by the HoloLens greatly limits the potential of the device.

    Gaze-and-click is far more prone to error, and you are limited to the center of your view.

    Using voice is far from the best option. Recognition is not perfect, it isn't fast, and in fact it's more prone to errors than what I'm suggesting; it isn't precise either. At this point, the only real-world way to use the HoloLens as a professional work unit is with a Bluetooth keyboard and mouse, which also breaks the GGV model in favor of usability.

  •

    I agree with the OP. I believe MS created the gaze-and-gesture control scheme as a semi-reliable method of interaction BECAUSE of the current limitations of the hardware. In actuality, almost everyone who first uses the HoloLens wants to reach out, touch buttons, and interact with objects directly with their hands. Distance is also a factor: you have to be fairly close to actually touch something with your hands, yet people want to "press buttons" on interfaces even when they're far away spatially (and most demos don't want you too close to objects).
