
spatial mapping cameras

The documentation says: "The camera used for scanning provides data within a 75-degree cone, from a minimum of 0.8 meters to a maximum of 3.1 meters distance from the camera. Real-world surfaces will only be scanned within this field of view. Note that these values are subject to change in future versions."

Does this mean that only the single depth camera is used for spatial mapping?

Or are the four IR sensors used for this too? Or are the IR sensors just for gesture tracking?

James Ashley
VS 2017 v5.3.3, Unity 2017.3.0f3, MRTK 2017.1.2, W10 17063
Microsoft MVP, Freelance HoloLens/MR Developer
www.imaginativeuniversal.com
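For intuition, the scanning volume quoted above can be expressed as a simple geometric test. This is an illustrative sketch only, not a HoloLens API; the 75-degree cone and the 0.8 m / 3.1 m limits are the documented values, which the docs themselves warn may change:

```python
import math

# Illustrative geometry only: checks whether a point (in camera space,
# +Z forward, metres) falls inside the documented scanning volume --
# a 75-degree cone clipped to the 0.8 m - 3.1 m range. The real device
# exposes no such API; the constants are the values quoted in the docs.
CONE_FULL_ANGLE_DEG = 75.0
MIN_RANGE_M = 0.8
MAX_RANGE_M = 3.1

def in_scanning_volume(x, y, z):
    dist = math.sqrt(x * x + y * y + z * z)
    if not (MIN_RANGE_M <= dist <= MAX_RANGE_M):
        return False
    if z <= 0:  # behind the camera
        return False
    # Angle between the point's direction and the camera's forward axis.
    off_axis = math.degrees(math.acos(z / dist))
    return off_axis <= CONE_FULL_ANGLE_DEG / 2

print(in_scanning_volume(0.0, 0.0, 1.5))  # directly ahead, in range
print(in_scanning_volume(0.0, 0.0, 0.5))  # closer than the 0.8 m minimum
print(in_scanning_volume(2.0, 0.0, 1.0))  # far off-axis, outside the cone
```

Surfaces outside this volume at any given moment simply aren't updated in that frame; they can still exist in the spatial map from earlier scans.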

Answers

  • james_ashley ✭✭✭✭

    Thanks @ChimeraScorn ,

    I actually know Kinect Fusion really well. :wink: My confusion is really about the difference between the roles of the EW cameras and the depth camera. It's much clearer now. EW is for positional awareness. Depth for spatial mapping.


  • DaTruAndi

    I somehow hope that the additional cameras at one point will also contribute to the integration phase ... e.g.: http://research.microsoft.com/en-us/projects/monofusion/

  •

    Just to see if I understood: does this mean that if I scan something from, say, 1 meter away and then move closer to the object, spatial mapping will no longer be able to recognize where I am?
    Is there a way to adjust the 0.8 meter value?

  • Jimbohalo10 ✭✭✭
    edited May 2016

    @DaTruAndi said:
    I somehow hope that the additional cameras at one point will also contribute to the integration phase ... e.g.: http://research.microsoft.com/en-us/projects/monofusion/

    There is another paper on how to create MonoFusion-style reconstruction using a Microsoft LifeCam webcam.

    Lots of complex math, but the PDF makes interesting reading. There is no code; it just describes potential functionality: http://research.microsoft.com/apps/pubs/default.aspx?id=199618

  •

    Are we able to do spatial mapping with the Kinect? I would like to accomplish the same as HoloLens with the Kinect. Is it possible, and do you have some docs? What about the HoloLens SDK, will it work by just passing in the Kinect data, since it's pretty much the same?

  •

    @Chop With the Kinect it was called Surface Reconstruction; it's mostly a name change, with the same underlying idea. You generate meshes in both cases, but the level of detail is probably going to be different. What are you trying to do?

    James Ashley
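For what it's worth, both Kinect Surface Reconstruction and HoloLens spatial mapping start from the same raw ingredient: depth frames back-projected into 3-D points, which a fusion pipeline then integrates into a mesh. A minimal pinhole back-projection sketch follows; the intrinsics (fx, fy, cx, cy) are made-up example numbers, not real calibration data for either device:

```python
# Minimal pinhole back-projection: turns a depth image (metres) into
# 3-D camera-space points -- the common first step behind both Kinect
# Fusion / Surface Reconstruction and HoloLens spatial mapping.
# Intrinsics here are illustrative placeholders, not device calibration.

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: list of rows of depth values in metres (0 = no reading)."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0:
                continue  # skip invalid pixels
            x = (u - cx) * d / fx
            y = (v - cy) * d / fy
            points.append((x, y, d))
    return points

# Tiny 2x2 "depth frame" with one invalid pixel.
frame = [[1.0, 0.0],
         [2.0, 1.5]]
pts = depth_to_points(frame, fx=2.0, fy=2.0, cx=0.5, cy=0.5)
print(pts)
```

The per-device difference is mostly in resolution, noise characteristics, and the fusion step that merges these points across frames, which is why mesh detail differs between the two.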
