
GestureRecognizer vs. InteractionManager (Holographic Academy 211.2)

In Holographic Academy 211, chapter 2, we learn how to rotate a hologram based on a navigation gesture. The code uses both the InteractionManager's SourcePressed/SourceReleased events (mainly to get the focused object) and the GestureRecognizer's NavigationStarted, NavigationUpdated, and NavigationCompleted events.

It's not clear to me what purpose the InteractionManager's events serve here; they seem redundant. Why not just use the GestureRecognizer's events? I can imagine a context where you'd want to react to a "press" without interpreting it as a navigation gesture, but that doesn't seem to be what's happening in this exercise. Am I missing something?
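To illustrate what I mean, here's a rough sketch of the GestureRecognizer-only approach: the focused object is grabbed by a gaze raycast inside NavigationStarted instead of in SourcePressed. This is based on the UnityEngine.XR.WSA.Input API the Academy targets; the class and field names are just placeholders I made up, and I haven't run this, so treat it as a sketch rather than working code.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class NavigationOnlyRotator : MonoBehaviour
{
    GestureRecognizer recognizer;
    GameObject focusedObject;   // object under the gaze when the gesture began

    void Awake()
    {
        recognizer = new GestureRecognizer();
        // Only horizontal navigation, as in the 211.2 rotation exercise.
        recognizer.SetRecognizableGestures(GestureSettings.NavigationX);

        recognizer.NavigationStarted += args =>
        {
            // Find the focused object here instead of in SourcePressed:
            // raycast along the gaze to see what the user is looking at.
            RaycastHit hit;
            focusedObject = Physics.Raycast(
                Camera.main.transform.position,
                Camera.main.transform.forward,
                out hit) ? hit.collider.gameObject : null;
        };

        recognizer.NavigationUpdated += args =>
        {
            if (focusedObject != null)
            {
                // normalizedOffset.x runs from -1 to 1 across the gesture.
                float rotationFactor = args.normalizedOffset.x * 2.0f;
                focusedObject.transform.Rotate(new Vector3(0, -1 * rotationFactor, 0));
            }
        };

        recognizer.NavigationCompleted += args => focusedObject = null;
        recognizer.NavigationCanceled += args => focusedObject = null;

        recognizer.StartCapturingGestures();
    }
}
```

As far as I can tell this covers everything the exercise does, which is why the extra SourcePressed/SourceReleased handlers puzzle me.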
