The Mixed Reality Forums here are no longer being used or maintained.
There are a few other places we would like to direct you to for support, both from Microsoft and from the community.
The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.
For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.
If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.
And always feel free to hit us up on Twitter @MxdRealityDev.
Open discussion about new gestures for HoloLens
I'm new to the forum and to Mixed Reality development, and while I've only halfheartedly searched for it (don't hate me), I could not find an open discussion on adding new gestures.
From what I've gathered, it's not impossible, but it's not recommended either, as it would break the consistent way HoloLens is used across apps. I do see interest from developers in what people would want to add as gestures, but these are usually requests in comments or answers to questions about adding new gestures, not an actual open discussion on what an industry standard should look like.
So if I've missed it, I apologize and hope someone will point me in the right direction. Otherwise, I think this is a discussion that should be held openly, not in the offices of Microsoft or its competitors; after all, defining a good UX now is key, and we are the users.
I won't go into which gestures I think should be added here, as I think this thread would quickly get messy, so if anyone knows a way of gathering input in an orderly fashion, that would be helpful.
While I think creative freedom is great, any gesture you suggest should follow Microsoft's design guidelines for optimal user comfort and should include a simple sketch of how you feel the gesture should be used (simple hands and arrows, similar to those in the "tutorial"). Chances are a gesture has already been suggested, so a poll to track votes might be one way to go about it.
Let me know what you think!
Someone posted this video as a way of re-imagining the clicking gesture, and while sci-fi isn't always the best guideline for functional UX, I think it's worth posting to get some ideas flowing:
New gestures ... are hard ... so hard to do ... but what is easy ... VOICE. It's easy. Add voice commands ... and allow the user to add VOICE commands of their own choosing. Then your problem will be mostly solved. I did this in my program (though it's currently off the market).
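The "let the user add VOICE commands of their own choosing" idea above can be sketched as a phrase-to-action table the user extends at runtime. This is a toy illustration only; the recognizer is faked, all names are invented, and a real HoloLens app would use the platform speech APIs instead:

```python
# Toy sketch of user-definable voice commands: a phrase-to-callback
# table the user can extend at runtime. The speech recognizer itself
# is not modeled; on_recognized() stands in for whatever the real
# speech engine would call with the heard phrase.

class VoiceCommands:
    def __init__(self):
        self._commands = {}

    def add(self, phrase, action):
        """Let the user bind any phrase of their own choosing."""
        self._commands[phrase.lower()] = action

    def on_recognized(self, phrase):
        """Called with whatever the speech engine heard.
        Returns True if a user command matched and ran."""
        action = self._commands.get(phrase.lower())
        if action:
            action()
            return True
        return False
```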
Now the problem you've not considered, because you are writing about hand gestures ... is the problem of arm fatigue .. the problem of what happens when SOMEONE does use your program ... for more than a few minutes at a time ... ARM FATIGUE ...
Now, the thing most people, especially me, never seem to tire of ... is talking. Plenty of energy to do that.
Also, I suspect you will never read or respond to this post of mine ... because, well, because Microsoft isn't interested in the Truth of this matter. And it does hide my posts ... but prove me wrong ... and RESPOND. Even if it's to call me a stupid idiot, just like a monkey-minded fool.
Utekai ... of simple monkey mind ... because ... Truth is Simple ... and even a monkey mind like me can understand truth. Can you?
That's why I suggested that all gestures follow Microsoft's design guide; it's pretty clearly stated there that voice control is the primary way of using HoloLens. Discomfort to the user in any form, such as eye strain, fatigue, or neck pain, has to be carefully considered, which is precisely why we need a discussion. But that doesn't mean natural movement shouldn't be used as a supplement, just as gesturing is used in everyday conversation.
I have seen multiple people with no prior experience of AR or VR headsets try the same gestures on HoloLens because it felt like the natural thing to do: examples include swiping to discard or using two hands to resize an object.
Yes ... but constantly having your arms up leads to fatigue. Supplementing with arm motions ... is supplemental to the main issue.
At this point I'm pretty sure you're not even reading my comments, but I will leave one last comment, as this thread isn't generating many views either way.
As stated in my first and second posts, gestures SHOULD follow Microsoft's design guide; just because you're adding new gestures doesn't mean voice isn't the main intended input. I have never posted that you should keep your arms up for an extended period, nor that doing so doesn't lead to fatigue; quite the opposite.
Have you tried using HoloLens in a crowded, noisy room? Voice also has its weaknesses, and to help cover them, especially for new, non-tech-savvy users, adding natural-feeling gestures would help with immersion and ease of use.
Looking at the gaming industry, arm fatigue never stopped anyone from buying the Wii, and has it stopped 6DoF controllers for VR headsets? No.
Many of the concepts and intended uses for HoloLens don't require you to wear it for a full day. Having used it for many hours, neck and head pain was a much larger issue, and I mostly use it with gestures so I don't disturb others when I don't need to.
Now, I don't actually believe Microsoft won't bring more gestures, but I would like us users and developers to be there to help set a good standard UX, so as not to limit the entire industry when others follow suit.
This will be my last reply unless there is interest in, and a possibility of, an actual discussion about gesture UX between us and Microsoft. I will not argue that flailing your arms for hours is a bad idea, because no one is saying it isn't.
This would be pretty great, but Microsoft is doing what it does best, i.e. creating half-assed products. I doubt we'll ever get full control to make them. One can hope. Here's a quote on this topic:
As far as more gestures are concerned I don't believe that they are going to come out with that many more gestures if they even come out with any more. However, HoloLens should soon support two-handed manipulation.
I've yet to find the Microsoft design guide that recommends voice as the primary way of using the HoloLens.
There's a guy posting recent YouTube videos using other technologies with HoloLens to get various gestures recognized, and using handheld devices.
However, consider that arm fatigue is ENORMOUS ... everything is fine for the first few minutes, until the fatigue sets in and gets noticed ... and then it is really noticed.
While arm fatigue wasn't in play as a person pulled out their wallet or credit card to buy the Wii, it did set in and stopped people from playing with the Wii. Have you bought a Wii lately?
It's not that I'm not reading your comments (I did skim them); it's that arm fatigue is going to set in after any extended use, which makes the HoloLens lose much of its value once fatigue takes center stage. You will spend much time developing the gestures, or linking in other technologies ... and then what?
Working over extended periods in a noisy environment likely isn't a good idea. And yes, I've tried working with HoloLens in noisy as well as dark environments ... and, frankly, it sucks. But that's because even simple background-noise filtering of the audio input used for command detection isn't applied. It would benefit everyone if Microsoft applied this simple technique ... but why listen to me ... just an idiot, right? That's me.
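The post doesn't say which "simple background noise filtering" technique it means; one common minimal approach is an energy-based noise gate that suppresses audio frames quieter than a threshold, so the speech recognizer only sees louder, likely-voice audio. This Python sketch is purely illustrative; the frame format and threshold are assumptions, not anything from a real HoloLens API:

```python
# Minimal sketch of an energy-based noise gate. Frames are lists of
# audio samples in [-1.0, 1.0]; the threshold is an illustrative
# assumption that would normally be calibrated to ambient noise.

def rms(frame):
    """Root-mean-square energy of one audio frame."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def noise_gate(frames, threshold):
    """Zero out frames whose energy falls below the threshold,
    passing louder (likely-voice) frames through unchanged."""
    return [f if rms(f) >= threshold else [0.0] * len(f) for f in frames]
```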
Now, according to recent market-share data, Oculus has about 50% share, Vive has about 45%, and HoloLens ... meh, what share it had is falling, falling ....
And the way Microsoft supports individual developers like me ... the only thing worth noting is that I get no support. I get what they offer, and nothing more, and from my viewpoint that is basically nothing. And ... why would they change?
As far as basic gestures go, Microsoft could have supplied developers with a basic system of four assignable gestures ... with event handlers for the developer to fill in ... but they didn't.
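The "assignable gestures with event handlers for the developer to fill in" idea could look something like this sketch. Everything here is hypothetical: the gesture names, the router, and its API are invented for illustration and do not correspond to any real HoloLens interface:

```python
# Hypothetical sketch of "four assignable gestures with developer-
# supplied event handlers". GestureId and GestureRouter are invented
# names, not a real HoloLens API.

from enum import Enum

class GestureId(Enum):
    GESTURE_1 = 1
    GESTURE_2 = 2
    GESTURE_3 = 3
    GESTURE_4 = 4

class GestureRouter:
    def __init__(self):
        self._handlers = {}

    def assign(self, gesture, handler):
        """Bind a developer-supplied callback to one of the four slots."""
        self._handlers[gesture] = handler

    def dispatch(self, gesture):
        """Called by the (imaginary) recognizer when a gesture fires;
        unassigned gestures are silently ignored."""
        handler = self._handlers.get(gesture)
        if handler:
            handler()
```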
As far as extending existing gestures ... why not provide a palette of buttons, and use the existing gestures to pick the button that assigns what you need? Or use tricks like a double look: if the item gets highlighted by Gaze twice within a couple of seconds, or if the Gaze lingers for, say, two seconds, then select it ...
Lots of ways to get what you need without relying upon Gestures and producing arm fatigue.
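The "Gaze lingers for two seconds, then select" trick above can be sketched as a small dwell timer fed once per frame. The two-second dwell and the update-loop shape are assumptions for illustration, not a real HoloLens gaze API:

```python
# Illustrative sketch of gaze-dwell selection: if gaze stays on the
# same target for ~2 seconds, treat it as a select. The caller feeds
# the currently gazed-at target (or None) plus a timestamp each frame.

DWELL_SECONDS = 2.0  # assumed dwell time from the post

class DwellSelector:
    def __init__(self, dwell=DWELL_SECONDS):
        self.dwell = dwell
        self.target = None
        self.gaze_start = 0.0

    def update(self, gazed_target, now):
        """Returns the target once gaze has lingered long enough,
        otherwise None. Switching targets restarts the timer."""
        if gazed_target != self.target:
            self.target = gazed_target
            self.gaze_start = now
            return None
        if self.target is not None and now - self.gaze_start >= self.dwell:
            self.target = None  # reset so we don't fire repeatedly
            return gazed_target
        return None
```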
Most people don't use the HoloLens for extended periods of time ... and thus most people will never buy one, because there is so little value in a pricey kit of technology that is hardly ever used.
Simple logic that seems to be missing from Microsoft. When all you do is chase $billions ... you will lose the few $$ that matter.
As devices give much finer-grained hand control than actual hand signals do, my guess is that is the direction Microsoft will go.
I mean, it's not like Microsoft would actually say anything to their developers, right? Got to keep them guessing ... but they do put out docs from time to time.
Some devices are designed without voice ... so what's a tired-armed user to do?
Drop the arms and start clicking.
"For Windows Mixed Reality immersive headsets, motion controllers are the primary way that users will take action in their world."
That sums up Microsoft's open discussion about this issue, at least for VR headsets.
I've only come across one app that uses voice as its primary UI mechanism; that app is currently off the market, and it is a HoloLens app not developed by Microsoft, which apparently doesn't have devs with voice experience.
I'm very interested in this topic and verrrry surprised that I haven't been able to find more discussion of it.
Humans are so used to using their hands and arms to interact with things in the world that it boggles my mind that Microsoft has so far chosen to limit that instinct.
The HoloLens is not a product used by millions of people every day for 8 hours a day, so I'm confused by all the concern about arm fatigue, noisy rooms, and uniformity across apps. It's a development kit. It's meant to explore the potential and possibilities that will make AR great.
I'm sure Microsoft has done lots of research into ergonomics, but the true UX innovation is going to come from people playing around and doing a lot of things that are terrible before stumbling on an interaction that works great.
AR is still super new. I wish Microsoft would embrace that rather than presuming they know exactly what the best experience is.