
object recognition

Can we determine who an actor is, the way the Kinect can on Xbox One, for example?

Can we recognize a 'blob shaped a certain way' as a rough estimate of a person - that cylindrical thing with these protrusions is 'bill'? If not: it looks like we can 'find a wall', so can we 'find a bill'?

Can we locate the source of a sound and 'cheat' - that's bill speaking in my field of view?

I can't find good documentation on object recognition with HoloLens.

...notice me senpais...
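
For the 'find a bill the way we can find a wall' idea above, one rough option that doesn't rely on any HoloLens-specific API is to run a classical person detector over frames grabbed from the device camera. This is only a minimal sketch, assuming OpenCV (cv2) is installed and a captured frame is available as an image file (the filename is a placeholder); it finds person-shaped blobs, it doesn't say who they are.

```python
import cv2

# Classical HOG + linear SVM person detector that ships with OpenCV.
# It only gives rough "this blob is probably a person" boxes.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def find_people(frame_bgr):
    """Return a list of (x, y, w, h) boxes around person-shaped blobs."""
    # Downscale first: the detector is slow on full-resolution frames.
    scale = 640.0 / frame_bgr.shape[1]
    small = cv2.resize(frame_bgr, None, fx=scale, fy=scale)
    boxes, _weights = hog.detectMultiScale(small, winStride=(8, 8))
    # Map the boxes back to the original frame size.
    return [(int(x / scale), int(y / scale), int(w / scale), int(h / scale))
            for (x, y, w, h) in boxes]

if __name__ == "__main__":
    # "camera_frame.jpg" is a placeholder for a frame captured from the device camera.
    frame = cv2.imread("camera_frame.jpg")
    if frame is None:
        raise SystemExit("put a captured frame at camera_frame.jpg first")
    for box in find_people(frame):
        print("possible person at", box)
```

This answers 'there is a person-shaped blob here', not 'this blob is bill'; pairing it with a face-identification service is what the answers below discuss.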


Answers

    I think what you are talking about is the humanoid detection that the Kinect does.

    The HoloLens appears to be able to recognize gestures.

    I have seen in the Windows 10 API (which I can't seem to find now) that hand position is something the HoloLens can pick up. That would of course be your own hand, not someone else's.

    Object tracking will be a critical function for many, if not all, good HoloLens applications, so I hope they expose as much tracking capability as possible. I haven't been able to find much, though.

    edited March 2016

    Thanks for the info and the link to Project Oxford!

    OK, so we have 'bill' because we interpreted a given frame: in our example we know where his face is in that frame from the Oxford Face API response.

    It's going to get very expensive very fast, both in API usage and in compute/power, if we re-verify his face with a web service every few frames.

    Now we need some method by which we can 'see' a blob as part of the frame and tag it as the identified bill - it can see surfaces, as they've demoed, but can it cut out a particular surface and let us interact with it as an object?

    The blob is moving - bill is moving - and that causes problems if we treat bill as a surface seen from one particular perspective. (A rough sketch of the verify-occasionally, track-locally idea follows at the end of this post.)

    ...notice me senpais...
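
    A minimal sketch of that pattern, assuming the 2016-era Project Oxford Face API endpoints (now part of Azure Cognitive Services - check the current docs for exact URLs and response shapes), the Python requests and OpenCV packages, and a webcam as a stand-in for the HoloLens camera. The subscription key and person group name are placeholders. The idea: pay for a web call only every few seconds to (re)identify 'bill', and in between follow his cached face crop locally with cheap template matching.

    ```python
    import time
    import cv2
    import requests

    FACE_API = "https://api.projectoxford.ai/face/v1.0"  # 2016-era endpoint; check current Face API docs
    KEY = "<subscription-key>"                           # placeholder
    PERSON_GROUP = "my-people"                           # placeholder person group that contains 'bill'
    REVERIFY_EVERY_S = 5.0                               # how often we are willing to pay for a web call

    def detect_and_identify(frame_bgr):
        """Send one frame to the Face API; return (person_id_or_None, face_box) for the first face."""
        ok, jpg = cv2.imencode(".jpg", frame_bgr)
        headers = {"Ocp-Apim-Subscription-Key": KEY,
                   "Content-Type": "application/octet-stream"}
        faces = requests.post(FACE_API + "/detect", headers=headers, data=jpg.tobytes()).json()
        if not isinstance(faces, list) or not faces:
            return None, None
        r = faces[0]["faceRectangle"]                    # {"top", "left", "width", "height"}
        body = {"personGroupId": PERSON_GROUP, "faceIds": [faces[0]["faceId"]]}
        result = requests.post(FACE_API + "/identify",
                               headers={"Ocp-Apim-Subscription-Key": KEY}, json=body).json()
        candidates = result[0].get("candidates", []) if isinstance(result, list) and result else []
        person_id = candidates[0]["personId"] if candidates else None
        return person_id, (r["left"], r["top"], r["width"], r["height"])

    def track_locally(frame_gray, template):
        """Find the cached face crop in a new frame without any web call."""
        scores = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
        _, best, _, (x, y) = cv2.minMaxLoc(scores)
        h, w = template.shape
        return (x, y, w, h) if best > 0.6 else None      # threshold is a guess; tune it

    cap = cv2.VideoCapture(0)                            # stand-in for however HoloLens frames arrive
    last_verify, template, person_id = 0.0, None, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if template is None or time.time() - last_verify > REVERIFY_EVERY_S:
            person_id, box = detect_and_identify(frame)  # expensive: web call
            last_verify = time.time()
            if box:
                x, y, w, h = box
                template = gray[y:y + h, x:x + w]        # cache the face crop for local tracking
        else:
            box = track_locally(gray, template)          # cheap: local template matching
        if person_id and box:
            print("that blob is person", person_id, "at", box)
    ```

    This doesn't solve the 'bill is moving and deforming' problem - template matching drifts quickly - but it shows why the expensive re-verification can be throttled instead of done every frame.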

    utekai

    Recognizing and tracking objects and people, and then 'augmenting' or enhancing the user's view with relevant information about that object or person, will open the door wide to mixed reality applications. Using holograms is just a viewport through the door; it doesn't open the door. Holograms are interesting but very limiting.

    Because the HoloLens is self-contained and wearable, it invites the user to wear it and interact with the surrounding environment, and it shouldn't be assumed that the environment must be static. So one great limitation is the need for a static environment for the HoloLens to work well.

    As a user, I want to learn about the real objects that dynamically move through my view. That is the enhanced reality I seek - the primary value I seek.
