Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is through our Mixed Reality Developer Program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Implement Rotation and Movement in One Recognizer

How can I implement rotation and movement in a single recognizer? I would rather not design two separate recognizers and switch between them with voice commands.

Answers

    This is not a new UX problem; the same is true in 2D. 2D editors have typically solved it by providing rotation "handles" that trigger only rotation of the object. For natural manipulation, though, that is a fairly cumbersome UX.

    For a more natural approach, you can take inspiration from multi-touch screens, where a single-finger drag does translation while a two-finger drag does 2D translation and 2D rotation at the same time.
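
    To make that concrete, here is a minimal sketch of the two-finger math (plain C++ with illustrative names, not tied to any particular touch API): each frame you compare the two touch points against the previous frame, and translation, rotation, and zoom all fall out of one computation.

    ```cpp
    #include <cmath>

    struct Vec2 { float x, y; };

    struct Delta2D {
        Vec2  translation; // movement of the midpoint between the two fingers
        float rotation;    // radians; change in heading of the finger-to-finger vector
        float scale;       // ratio of finger distances (pinch zoom)
    };

    // Derive translation, rotation, and zoom from one two-finger gesture.
    // aPrev/bPrev are last frame's touch points; aCur/bCur are this frame's.
    Delta2D TwoFingerDelta(Vec2 aPrev, Vec2 bPrev, Vec2 aCur, Vec2 bCur) {
        Delta2D d;
        d.translation = { (aCur.x + bCur.x - aPrev.x - bPrev.x) * 0.5f,
                          (aCur.y + bCur.y - aPrev.y - bPrev.y) * 0.5f };
        float prevAngle = std::atan2(bPrev.y - aPrev.y, bPrev.x - aPrev.x);
        float curAngle  = std::atan2(bCur.y - aCur.y,  bCur.x - aCur.x);
        d.rotation = curAngle - prevAngle;
        float prevLen = std::hypot(bPrev.x - aPrev.x, bPrev.y - aPrev.y);
        float curLen  = std::hypot(bCur.x - aCur.x,  bCur.y - aCur.y);
        d.scale = (prevLen > 1e-5f) ? curLen / prevLen : 1.0f;
        return d;
    }
    ```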

    Doing something similar in 3D lets you handle translation and rotation naturally: a single-handed tap-and-move translates in 3D, while a double-handed tap-and-move translates in 3D and also rotates in 3D as the hands move relative to each other. (You could enable concurrent 3D zoom as well; see the sketch below.)
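
    Here is a sketch of that combined recognizer in 3D, assuming only that your input source (for example, the HoloLens interaction events) gives you tracked hand positions each frame; every type and function name below is illustrative rather than part of any SDK. One tracked hand translates; two tracked hands translate by the midpoint delta, rotate by the change of the hand-to-hand vector, and optionally scale by the change in hand separation.

    ```cpp
    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3  Sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3  Mid(Vec3 a, Vec3 b)   { return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f }; }
    Vec3  Cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
    float Dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
    float Len(Vec3 a)           { return std::sqrt(Dot(a, a)); }

    struct AxisAngle { Vec3 axis; float angle; }; // rotation as axis + radians

    // Rotation that carries last frame's hand-to-hand vector onto this frame's.
    AxisAngle RotationBetween(Vec3 from, Vec3 to) {
        Vec3 axis = Cross(from, to);
        float axisLen = Len(axis);
        // atan2(|from x to|, from . to) is a numerically stable angle between vectors.
        AxisAngle r{ {0.0f, 1.0f, 0.0f}, std::atan2(axisLen, Dot(from, to)) };
        if (axisLen > 1e-6f)
            r.axis = { axis.x / axisLen, axis.y / axisLen, axis.z / axisLen };
        return r;
    }

    struct ManipulationDelta {
        Vec3      translation;
        AxisAngle rotation; // angle stays 0 in one-handed mode
        float     scale;    // optional 3D "zoom"; 1 in one-handed mode
    };

    // One recognizer for both gestures: one tracked hand translates only;
    // two tracked hands translate, rotate, and scale at the same time.
    ManipulationDelta UpdateManipulation(int handCount,
                                         Vec3 aPrev, Vec3 aCur,
                                         Vec3 bPrev, Vec3 bCur) {
        ManipulationDelta d{ {0, 0, 0}, { {0, 1, 0}, 0.0f }, 1.0f };
        if (handCount == 1) {
            d.translation = Sub(aCur, aPrev); // single hand: drag only
        } else if (handCount >= 2) {
            d.translation = Sub(Mid(aCur, bCur), Mid(aPrev, bPrev));
            Vec3 prevSpan = Sub(bPrev, aPrev);
            Vec3 curSpan  = Sub(bCur, aCur);
            d.rotation = RotationBetween(prevSpan, curSpan);
            float prevLen = Len(prevSpan);
            if (prevLen > 1e-5f) d.scale = Len(curSpan) / prevLen;
        }
        return d;
    }
    ```

    Applying the per-frame delta with the rotation pivoted on the hands' midpoint, rather than the object's origin, tends to feel the most natural.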

    If you want to see an example of this, check out the "Tower Blocks" app in the Store, which already uses something like this: it lets you move and rotate blocks to stack them into a tower.
