Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Creating a new Gesture



  • @leprechaunne please be assured that several user research and comfort studies were done before introducing the standard gestures. I am sure we will evolve more in the future.

  • edited December 2016

    @neerajwadhwa please allow developers to build more gestures, or introduce more official gestures with machine learning scripts. With Oculus Touch coming out this week, everybody in VR is excited about the hands! The HTC Vive controllers are also quite accurate already. There are lots of positive reviews for having accurate hand/finger representations in VR. Please don't let the HoloLens's fantastic mixed reality fall behind in that regard.

    It seems it's possible at Microsoft Research:

  • My kingdom for a "Grab" gesture.

    How I would love a new gesture, different from the Air Tap / Tap n Hold, that could allow you to move a hologram around. The intuitive choice would be a full-hand "Grab", and Grab n Hold.

  • This uses the depth camera of the Kinect 2. The depth camera stream is not currently available from the HoloLens device.

    The Intel camera shown has a depth stream as well.
    Therefore, even if the software were available for the Kinect 2 and the Intel camera, it would not work with the HoloLens.

  • As a developer, I didn't think about the degradation of my tendons while holding my arm(s) up in the air in front of me.

    However, the arm fatigue did grab my attention from the very first use of the HoloLens, even during initial setup, when I had to rest my arms.

    Using Fragments for any length of time forced me to rest my arms while using it; the fatigue sets in within minutes.

    Finger tapping gets old quickly, as Rift and Vive developers and users also learned very quickly. They now use handheld controllers (more complex than the HoloLens clicker, which is more or less a wasted piece of hardware).

    The UX ends up being very poor, the desire to use the software quickly fades, and this in turn affects the desire to use the HoloLens at all.

    Voice and gaze do work very well, though. In Unity the voice features are not well exposed, but they are there.
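    For reference, here is a minimal sketch of wiring up voice commands in Unity with the built-in KeywordRecognizer from UnityEngine.Windows.Speech (available in UWP/HoloLens builds). The keyword strings and handlers are illustrative, not part of any official sample:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.Speech; // UWP/HoloLens builds only

public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly Dictionary<string, Action> commands =
        new Dictionary<string, Action>();

    void Start()
    {
        // Illustrative keywords; replace with your own.
        commands.Add("select", () => Debug.Log("Select spoken"));
        commands.Add("reset",  () => Debug.Log("Reset spoken"));

        recognizer = new KeywordRecognizer(commands.Keys.ToArray());
        recognizer.OnPhraseRecognized += args =>
        {
            Action action;
            if (commands.TryGetValue(args.text, out action)) action();
        };
        recognizer.Start();
    }

    void OnDestroy()
    {
        if (recognizer != null)
        {
            if (recognizer.IsRunning) recognizer.Stop();
            recognizer.Dispose();
        }
    }
}
```

    This requires the Microphone capability in the UWP app manifest; the recognizer fires OnPhraseRecognized with the matched text, so no gesture or gaze is involved at all.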

  • I agree with the comments. Not every possible need is fulfilled by the stock gestures, and even the stock gestures could use improvement. I think one of the most amazing things that happens with a new technology is the things that the original creators didn't envision for it, but the users did. Look at third-party keyboards for mobile phones, like Swype. Or the economy and player tactics that emerge in games like Eve Online.

    I think complex, user created gestures are going to take this device's potential to the next level. I want users of my games and apps to feel like Tony Stark. Not E.T.

    That being said, if @neerajwadhwa is looking for suggestions, the type of gestures I am looking to make are: 1) better swiping as others have pointed out, 2) two handed gestures for zooming in and out, 3) a "grab" motion instead of a "pinch" is far more organic for dragging/holding, and 4) a "throw forward" motion for pinning an app or window (or even an object within apps) to a plane directly in front of you.
  • Also, before I forget. Thanks for hearing out my opinion and letting me get on my soap box. :smile:
  • @neerajwadhwa said:
    @ChaseBowman currently there are no such recommendations from a platform perspective but feedback noted.

    Good morning @neerajwadhwa, I'm an Italian student and I'm looking for a way to create/modify gestures in mixed reality.
    My question is: is it possible to do this without a mixed reality device such as the HoloLens?

    Thanks in advance.

  • Hello,
    I am also currently working on a HoloLens project that incorporates business-related functions, which should be aided by a 3D interface for exploring and manipulating files. This is not the main purpose of the project, but it is still a significant part of it. I am quite intrigued by the possibilities laid out by the HoloLens, but also quite disturbed to hear that Microsoft is actively blocking (or purposely not supporting) an API for custom gestures.

    The basic gestures (a single one, to be precise - tap; bloom is reserved) are not sufficient, especially given that the user has to gaze in order to trigger interactions. Now, that may be circumvented by using the hand position and checking for the tap gesture.

    I understand why Microsoft decides to maintain restrictions for the sake of shared interaction methods between applications. Yet, you have to consider the following: there are plenty of creative people out there who can manifest great ideas nobody else thought of (and who have the ambition, opportunity, and means to realize them). Setting such restrictions basically cuts those people out of the game. For the HoloLens to have actual value to customers (or, to be precise, for the upcoming hardware designed for consumers), you desperately need the applications and utility it has to offer, which primarily come from developers.

    The current gestures are placeholders at best, some standard to fall back on. So either Microsoft vastly expands on the gesture front, or they allow and support developers to do so, or both. As of now, I cannot see the HoloLens and its successor (along with the software we would create) being sellable if the value of the software that can be created is vastly reduced by enforcing arbitrary standards, by valuing equality over freedom, or rather arbitrary dogma over creativity. The trade-off is simply not worth it, and it will be reflected in the revenues for everybody involved.

    Our software has no chance of succeeding if we have to force our customers to use the gaze-and-tap gesture. The best we can do now is to use hand position plus the tap gesture. But there are many more gestures, including swipe, grab, and pinch (with no forefinger pointed upwards), which would be much better.

    Given the newest Microsoft technology for mapping hands, it seems it's no longer an issue of lacking technology, but of lacking philosophy. All we would need is the ability to build colliders in Unity to form hands. An additional API would be a godsend. But to clarify: we will not use the gaze & tap gestures, especially because we need our users to be able to look around freely. We want them to do work faster and more easily than before, not the other way around! Our software is meant to boost work efficiency; it is not a game. (And if we were to make a game, the requirements for positive user interaction would be even higher. Oh, and imagine the possibilities... using the real-life environment as a map... something like a car racing game... with multiplayer.)

    If Microsoft wants to adhere to some shared standards for users, then they have to either implement them themselves, or merely recommend, but NOT enforce, these standards. How do they even think they can rely on user experience collected with technology that is barely accessible to real customers? The data collected in VR does not even apply; it is an entirely different context.

    The future is there, right before our eyes... yet Microsoft says gaze & tap - and it's game over.
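    The hand-position-plus-tap workaround described above can be sketched with Unity's interaction APIs. This is an illustrative sketch, not an official sample: the class name and the overlap-sphere selection logic are my own, and the namespace is UnityEngine.XR.WSA.Input in later Unity versions (UnityEngine.VR.WSA.Input in older ones):

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input; // UnityEngine.VR.WSA.Input on older Unity versions

// Tracks the hand position from the interaction source and, on tap,
// selects by hand position rather than by gaze.
public class HandTapSelector : MonoBehaviour
{
    private GestureRecognizer recognizer;
    private Vector3 lastHandPosition;
    private bool handTracked;

    void Awake()
    {
        InteractionManager.InteractionSourceUpdated += OnSourceUpdated;

        recognizer = new GestureRecognizer();
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.Tapped += OnTapped;
        recognizer.StartCapturingGestures();
    }

    private void OnSourceUpdated(InteractionSourceUpdatedEventArgs args)
    {
        if (args.state.source.kind != InteractionSourceKind.Hand) return;
        Vector3 pos;
        handTracked = args.state.sourcePose.TryGetPosition(out pos);
        if (handTracked) lastHandPosition = pos;
    }

    private void OnTapped(TappedEventArgs args)
    {
        if (!handTracked) return;
        // Illustrative: select any collider within 5 cm of the hand.
        Collider[] hits = Physics.OverlapSphere(lastHandPosition, 0.05f);
        foreach (Collider hit in hits)
            Debug.Log("Tapped near: " + hit.name);
    }

    void OnDestroy()
    {
        InteractionManager.InteractionSourceUpdated -= OnSourceUpdated;
        recognizer.Tapped -= OnTapped;
        recognizer.Dispose();
    }
}
```

    The tap event itself still comes from the platform recognizer; only the targeting is decoupled from gaze, which is as far as the first-generation HoloLens API lets you go.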

  • I was able to get a look at the HoloLens, and I have to say that the gestures are not very ergonomic. I watched people having trouble learning the air tap. For some people it is easy, but for the majority, especially people in their 40s, it seems not to be. Don't get me wrong, I liked the experience, but letting the ergonomics slip for the sake of multi-platform use isn't the best way, in my eyes. That's why I would like to be able to build custom control interfaces for it.

  • edited July 2020

    Now that the HoloLens 2 is out, is this a possibility yet? One of the first pieces of feedback we always get is that the hand gestures quickly become uncomfortable and put strain on the wrist, arm, and shoulder. Quicker, less pinpoint actions would decrease this strain and increase interaction speed for frequent actions.
