Hand movement tracking and detection help

Hey,
I'm a student at the University of Washington, currently developing a HoloLens application for my VR capstone course. We are trying to get our application to detect hand movement (in the X, Y, and Z directions), but we are unable to get it to work. The Holograms 211 tutorial isn't very clear about how the pieces of code interact with each other to achieve hand navigation, and the documentation for Navigation events doesn't give any specifics about what is happening. We are hoping that someone could shed some light on what we need to do to get hand movement detection working.
We have tried using the Holographic Academy 211 course materials and incorporating the HandsManager, InteractibleManager, Interactible, and CursorStates scripts into our project. We expected hand detection to trigger a change in the cursor type, as seen in the tutorial, but the cursor stays the same, which probably means the HandsManager is not working correctly.
For some context, we are trying to do something identical to Skype's drawing tool, so we need to detect the hand movement in order to draw on the canvas.
Thanks!
Best Answer
ahillier mod
Hey @boruil,
It sounds like you should use the Manipulation gesture. Navigation is used for scrolling along a single axis (x, y, or z), but Manipulation will give you free-form motion like the drawing tool does in Skype.
You cannot use Manipulation and Navigation at the same time. In the 211 course, you had to use the voice command 'Move Astronaut' to trigger Manipulation. That's because both gestures look identical to the system, so you need a trigger to distinguish between them. In your app, this could happen when the user selects a 'draw' tool (disable Navigation events), or you could simply not implement Navigation if it isn't needed.
Search the completed 211 code project for Manipulation; that should give you a good idea of how to implement Manipulation support in your application. Specifically, look at GestureManager.cs, which creates a ManipulationRecognizer, subscribes to and handles the Manipulation events (started, updated, completed, canceled), and keeps track of the hand position. Also, be sure to check out GestureAction.cs, which uses the position of the hand to move the astronaut around.
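If it helps, here is a rough sketch of that recognizer setup, following the same pattern as GestureManager.cs. It assumes the Unity 5.x HoloLens API (UnityEngine.VR.WSA.Input); later Unity versions moved these types to UnityEngine.XR.WSA.Input with slightly different event signatures, and the SimpleManipulationManager name is just for illustration, so treat this as a sketch rather than a drop-in copy of the course script:

using UnityEngine;
using UnityEngine.VR.WSA.Input;

// Minimal Manipulation recognizer sketch, loosely modeled on the 211 GestureManager.
public class SimpleManipulationManager : MonoBehaviour
{
    private GestureRecognizer manipulationRecognizer;

    // Cumulative hand offset reported by the Manipulation gesture.
    public Vector3 ManipulationPosition { get; private set; }
    public bool IsManipulating { get; private set; }

    void Awake()
    {
        manipulationRecognizer = new GestureRecognizer();

        // Listen only for Manipulation (free-form x/y/z movement), not Navigation.
        manipulationRecognizer.SetRecognizableGestures(GestureSettings.ManipulationTranslate);

        manipulationRecognizer.ManipulationStartedEvent += OnManipulationStarted;
        manipulationRecognizer.ManipulationUpdatedEvent += OnManipulationUpdated;
        manipulationRecognizer.ManipulationCompletedEvent += OnManipulationCompleted;
        manipulationRecognizer.ManipulationCanceledEvent += OnManipulationCanceled;

        manipulationRecognizer.StartCapturingGestures();
    }

    private void OnManipulationStarted(InteractionSourceKind source, Vector3 cumulativeDelta, Ray headRay)
    {
        IsManipulating = true;
        ManipulationPosition = cumulativeDelta;
    }

    private void OnManipulationUpdated(InteractionSourceKind source, Vector3 cumulativeDelta, Ray headRay)
    {
        // cumulativeDelta is the hand's offset since the gesture started;
        // read this each frame to drive whatever you are moving or drawing.
        ManipulationPosition = cumulativeDelta;
    }

    private void OnManipulationCompleted(InteractionSourceKind source, Vector3 cumulativeDelta, Ray headRay)
    {
        IsManipulating = false;
    }

    private void OnManipulationCanceled(InteractionSourceKind source, Vector3 cumulativeDelta, Ray headRay)
    {
        IsManipulating = false;
    }

    void OnDestroy()
    {
        manipulationRecognizer.StopCapturingGestures();
        manipulationRecognizer.ManipulationStartedEvent -= OnManipulationStarted;
        manipulationRecognizer.ManipulationUpdatedEvent -= OnManipulationUpdated;
        manipulationRecognizer.ManipulationCompletedEvent -= OnManipulationCompleted;
        manipulationRecognizer.ManipulationCanceledEvent -= OnManipulationCanceled;
    }
}

GestureAction.cs feeds that same offset into the astronaut's position; for your drawing tool you would feed it into your brush position instead.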
The hand detected asset should appear whenever the HoloLens detects the ready gesture (point your index finger towards the sky). Verify that you have the hand detected asset properly wired up to the CursorStates script; if the asset isn't assigned, you won't see it! Also, you'll probably need to use the cursor from the 211 course for it to work properly.
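If you want to confirm that hand detection itself is working, separate from the cursor asset, you can listen for the same source events that HandsManager does. Another sketch under the same Unity 5.x API assumption (the SimpleHandsManager name is just illustrative):

using UnityEngine;
using UnityEngine.VR.WSA.Input;

// Tracks whether any hand is currently visible to HoloLens in the ready pose.
public class SimpleHandsManager : MonoBehaviour
{
    public bool HandDetected { get; private set; }

    void Awake()
    {
        InteractionManager.SourceDetected += OnSourceDetected;
        InteractionManager.SourceLost += OnSourceLost;
    }

    private void OnSourceDetected(InteractionSourceState state)
    {
        if (state.source.kind == InteractionSourceKind.Hand)
        {
            // This is the moment the 211 cursor swaps in the "hand detected" asset.
            HandDetected = true;
        }
    }

    private void OnSourceLost(InteractionSourceState state)
    {
        if (state.source.kind == InteractionSourceKind.Hand)
        {
            // Simplified: a fuller implementation would track hands by state.source.id
            // so that losing one of two hands doesn't clear the flag.
            HandDetected = false;
        }
    }

    void OnDestroy()
    {
        InteractionManager.SourceDetected -= OnSourceDetected;
        InteractionManager.SourceLost -= OnSourceLost;
    }
}

If HandDetected never goes true while you hold the ready gesture, the problem is upstream of the cursor scripts.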
Finally, to trigger Manipulation events: 1) begin with the ready gesture (index finger pointed to the sky); 2) move your index finger down to rest on your thumb, like a pinch or hold (Manipulation started); 3) move your hand around (Manipulation updated); 4) move your index finger back up, away from your thumb (Manipulation completed); or 5) move your hand out of view (Manipulation canceled).
I hope this adds some clarity,
~Angela
Answers
Yeah I figured that out. Thank you so much!
In how much detail can "finger motion" be captured? In other words, can HoloLens track each joint in the hand/fingers? I realize that navigation gestures are part of the OS, but in a gaming environment, how much detail can be tracked for the hands and fingers? Thanks.
None of this data appears to escape the HPU. Unless there are undocumented calls (which you ought not develop on top of anyway), the SDK doesn't provide a finger-and-joint-level abstraction; it exposes only hand position (which is pretty rough) and tap state.
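To put that in concrete terms, here is roughly everything the public API hands you: a coarse position per detected hand plus pressed/released state. A sketch assuming the Unity 5.x InteractionManager API (names shifted a bit in later Unity releases, and HandPositionReader is just an illustrative name):

using UnityEngine;
using UnityEngine.VR.WSA.Input;

// Logs the only per-hand data the SDK exposes: a rough position and the tap state.
public class HandPositionReader : MonoBehaviour
{
    void Awake()
    {
        // SourceUpdated fires while a hand in the ready pose is being tracked.
        InteractionManager.SourceUpdated += OnSourceUpdated;
        InteractionManager.SourcePressed += OnSourcePressed;   // finger down (tap begins)
        InteractionManager.SourceReleased += OnSourceReleased; // finger up (tap ends)
    }

    private void OnSourceUpdated(InteractionSourceState state)
    {
        Vector3 handPosition;
        if (state.source.kind == InteractionSourceKind.Hand &&
            state.properties.location.TryGetPosition(out handPosition))
        {
            // One point per hand; no fingers, no joints.
            Debug.Log("Hand " + state.source.id + " at " + handPosition);
        }
    }

    private void OnSourcePressed(InteractionSourceState state)
    {
        Debug.Log("Tap down from source " + state.source.id);
    }

    private void OnSourceReleased(InteractionSourceState state)
    {
        Debug.Log("Tap up from source " + state.source.id);
    }

    void OnDestroy()
    {
        InteractionManager.SourceUpdated -= OnSourceUpdated;
        InteractionManager.SourcePressed -= OnSourcePressed;
        InteractionManager.SourceReleased -= OnSourceReleased;
    }
}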
This is something that I'd very much like to see made available soon, and I have high hopes that it will be, but if you've ever coded for gesture recognition, you know how finicky it is. Since Windows Holographic is a platform, not just a device, there is a certain sense in providing a very high-level abstraction that can be swapped out for a clicker or a touchpad controller on future devices that may lack refined gesture recognition. The things it recognizes and converts to clicks and drags are the most predictable and repeatable of gestures.
I'd love to have Leap Motion-like skeletal geometry, but I understand why maybe you'd keep that out of the initial offering. No point encouraging people to use a feature that may never be as dependable as a mouse click. Air tap and air tap-and-drag meet that standard where many other gestural combinations would not.
HoloLens can only detect your hands when they are in the ready (air-tap) or bloom position, right? Can I say this is useless? Can I know the hand position at any time?