Inside-Out Tap Gesture Recognition
I am currently implementing a system that needs to look for the tap gesture itself.
I don't want to attach a script to every single object if all it will do is return "True" if the tap gesture was performed while the object was in focus.
One example: if I tap in open space, I would like a cube to appear 10 meters away from my face where I tapped.
Another example is when I tap a cube that ONLY has a transform component attached, I want to destroy it.
All I am looking to do is Inside-Out sensing rather than Outside-In.
Thank You
Best Answer
stepan_stulov ✭✭✭
Hey, @Nuka_DiLucca
I think your confusion comes from the HoloToolkit, which has a bit more wiring between gestures and gaze that you seem to be trying to unwire, so to say. I'm not fully aware of how it works in the HoloToolkit, so I cannot advise on that, although I believe it's possible there too.
If you use the pure Unity APIs for HoloLens without the HoloToolkit, you can simply use the GestureRecognizer class, as I already said. Create an instance of it, set the list of recognizable gestures, and start capturing. When a tap (or any other desired gesture) happens, your event handler will be called. That's it! No gaze, no objects, nothing else is required: an empty scene with a camera and an instance of GestureRecognizer created anywhere. You can look at empty space, at a collider, or at the Mona Lisa, and it will behave the same. :)
Tracking on HoloLens is always inside-out, regardless of your application code. What you refer to is capturing gestures independently of anything gaze-related, which is what I described above. In fact, gestures ARE global and require no object to be gazed at. It's just that the HoloToolkit combines these features for easier application development, so you can forget about gaze, gestures, etc. and focus on the scene's interactive content: "tappables", "manipulatables", and so on.
There should be something like "global gestures" somewhere in the HoloToolkit though.
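A minimal sketch of the above, assuming the legacy UnityEngine.VR.WSA.Input namespace (newer Unity versions moved this to UnityEngine.XR.WSA.Input, and the TappedEvent delegate was later replaced by a Tapped event):

```csharp
using UnityEngine;
using UnityEngine.VR.WSA.Input; // UnityEngine.XR.WSA.Input in later Unity versions

public class GlobalTapRecognizer : MonoBehaviour
{
    private GestureRecognizer recognizer;

    void Start()
    {
        // Create the recognizer, tell it which gestures to report,
        // and start capturing. No gaze, colliders, or per-object
        // scripts are involved.
        recognizer = new GestureRecognizer();
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.TappedEvent += OnTapped;
        recognizer.StartCapturingGestures();
    }

    private void OnTapped(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        // Fires for every tap, regardless of what (if anything) is gazed at.
        Debug.Log("Tap detected");
    }

    void OnDestroy()
    {
        recognizer.TappedEvent -= OnTapped;
        recognizer.StopCapturingGestures();
        recognizer.Dispose();
    }
}
```

Attach this to any GameObject (an empty one is fine) and every tap will reach OnTapped.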
Does this make it clearer?
Cheers
Building the future of holographic navigation. We're hiring.
Answers
I am trying to understand the event system a little better. If I want to look for a tap event when
GazeManager.Instance.HitObject == null
do I have to grab the event handler from GestureRecognizer and watch for when an event is subscribed to
public event TappedEventDelegate TappedEvent;
? I'm just not understanding the process for receiving an empty tap.
i.e. in Unity, if I tap empty space, print "Hello World."
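One way that empty-tap check could look, as a sketch: subscribe to GestureRecognizer's tap event and, inside the handler, consult the gaze state. This assumes HoloToolkit's GazeManager (namespace HoloToolkit.Unity here; it moved between HoloToolkit versions) and the legacy UnityEngine.VR.WSA.Input namespace:

```csharp
using UnityEngine;
using UnityEngine.VR.WSA.Input;   // UnityEngine.XR.WSA.Input in later Unity versions
using HoloToolkit.Unity;          // assumed location of GazeManager in this HoloToolkit version

public class EmptySpaceTap : MonoBehaviour
{
    private GestureRecognizer recognizer;

    void Start()
    {
        recognizer = new GestureRecognizer();
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.TappedEvent += OnTapped;
        recognizer.StartCapturingGestures();
    }

    private void OnTapped(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        // The recognizer fires for every tap; filter down to "empty" taps
        // by checking whether the gaze ray is currently hitting anything.
        if (GazeManager.Instance.HitObject == null)
        {
            Debug.Log("Hello World.");
        }
    }

    void OnDestroy()
    {
        recognizer.TappedEvent -= OnTapped;
        recognizer.StopCapturingGestures();
        recognizer.Dispose();
    }
}
```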
When I say inside-out sensing, I am referring to the way tapping is recognized, similar to VR, where the HTC Vive with its base stations is considered outside-in sensing. What I'm trying to phrase simply is the process of NOT having objects determine whether they have been tapped, but instead having a listener look for when an object is tapped.
i.e. An object looks for a tap gesture by using
public void OnInputClicked(InputClickedEventData eventData)
A listener looks for a tap gesture by using... GestureRecognizer?
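Within the HoloToolkit itself, the listener side of that contrast can be sketched with a global input handler: the same OnInputClicked signature as above, but registered with the InputManager as a global listener so it fires for every tap, not just taps on this object. This assumes HoloToolkit's InputManager.AddGlobalListener/RemoveGlobalListener API (namespace HoloToolkit.Unity.InputModule):

```csharp
using UnityEngine;
using HoloToolkit.Unity.InputModule; // assumed HoloToolkit namespace

// A scene-level listener: because it is registered as a *global* listener,
// OnInputClicked is called for every tap, regardless of gaze focus.
public class GlobalTapListener : MonoBehaviour, IInputClickHandler
{
    void Start()
    {
        InputManager.Instance.AddGlobalListener(gameObject);
    }

    public void OnInputClicked(InputClickedEventData eventData)
    {
        Debug.Log("Tap seen by global listener");
    }

    void OnDestroy()
    {
        if (InputManager.Instance != null)
        {
            InputManager.Instance.RemoveGlobalListener(gameObject);
        }
    }
}
```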