Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Storing the "best" spatial anchor between multiple devices?

I am looking at an experience where multiple HoloLens devices should share the location of anchors. Updates do not need to be real-time or near-real-time. However, we would want to make sure that the "best" anchor is stored for a given position.
That is, not the anchor from a device that has only seen a small part of the room from a single angle, but rather the anchor from a device that has "been around" and observed the relevant area, ideally fairly recently and at a high level of detail.
Is there any metric to "judge" the extensiveness and data quality of a given spatial anchor?

Just using the size of the serialized anchor might not be the best approach.
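One way to frame the question is as a ranking problem over per-anchor metadata that each device records alongside the exported anchor blob. The fields and weights below are purely hypothetical (none of them come from the HoloLens APIs); this is just a sketch of combining observed coverage with recency, assuming devices can report both:

```python
import time
from dataclasses import dataclass

# Hypothetical metadata a device could record when it exports an anchor.
# None of these fields are provided by the platform; they are assumptions.
@dataclass
class AnchorCandidate:
    device_id: str
    serialized_bytes: int   # size of the exported anchor blob
    mapped_area_m2: float   # spatial-mapping surface observed near the anchor
    captured_at: float      # unix timestamp of the export

def score(a: AnchorCandidate, now: float, half_life_s: float = 3600.0) -> float:
    """Combine coverage with recency; the weights are arbitrary guesses."""
    age = max(0.0, now - a.captured_at)
    recency = 0.5 ** (age / half_life_s)  # exponential decay with age
    return a.mapped_area_m2 * recency

def best_anchor(candidates, now=None):
    """Pick the candidate with the highest combined score."""
    now = time.time() if now is None else now
    return max(candidates, key=lambda a: score(a, now))
```

A shared store could then keep only the winning candidate per position and replace it whenever a new upload scores higher.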

Thoughts anybody?

  • I can't give you a definitive answer.

    I can give thoughts though.

I do use size as an anti-metric. I.e., if the anchor's serialization is smaller than a certain size, I won't trust it. This does make my code fragile to any future optimizations of the sharing algorithm.

I've recently started placing anchors near dense areas of the spatial mapping mesh. It would be better to use RGB, since spatial anchors use RGB to generate tracking points, but that would take time to implement and my approach would likely be pretty CPU-intensive, so on HoloLens I'm hoping the spatial-mapping approach will be good enough.

    This post provided as-is with no warranties and confers no rights. Using information provided is done at own risk.

    (Daddy, what does 'now formatting drive C:' mean?)
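The two heuristics in the reply above, size as an anti-metric and spatial-mapping density near the anchor, can be sketched roughly as follows. The byte threshold is an empirical guess, not a documented contract, and the density function assumes you can get mesh vertices in world space (on HoloLens you would pull these from the spatial mapping observer; here they are plain tuples):

```python
import math

# Hypothetical threshold: below this, distrust the exported anchor.
# The exact value is a guess and would break if the serialization changes.
MIN_ANCHOR_BYTES = 100 * 1024

def trustworthy(serialized_bytes: int) -> bool:
    """Size as an anti-metric: reject small blobs, but do not assume
    a large blob is therefore a good anchor."""
    return serialized_bytes >= MIN_ANCHOR_BYTES

def mesh_density_near(anchor_pos, mesh_vertices, radius=2.0):
    """Count spatial-mapping vertices within `radius` metres of the
    anchor as a cheap proxy for how well-observed that region is.
    Returns vertices per cubic metre."""
    r2 = radius * radius
    count = 0
    for vx, vy, vz in mesh_vertices:
        dx, dy, dz = vx - anchor_pos[0], vy - anchor_pos[1], vz - anchor_pos[2]
        if dx * dx + dy * dy + dz * dz <= r2:
            count += 1
    volume = 4.0 / 3.0 * math.pi * radius ** 3
    return count / volume
```

Scanning every vertex per candidate is O(n) and would be CPU-heavy on device, which matches the caveat above; a spatial hash or voxel grid over the mesh would be the obvious optimization.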

  • DaTruAndi ✭✭
    edited September 2017

    @Patrick thank you for your thoughts.
    You write "spatial anchors use RGB to generate tracking points".
    As far as I understand it, spatial anchors sit on top of spatial mapping, which is itself based on the internal 3D reconstruction coming from the ToF (IR-based) depth sensing only. The environment-understanding cameras (whose specs are undisclosed) are then used internally to align against that model, probably by doing some edge detection and matching it somehow against the spatial mapping geometry.
    It would surprise me if spatial anchors actually stored any RGB information.

    That is just my understanding, not based on any official or unofficial information.
    But maybe I missed an important document somewhere :)

    Unfortunately so much of the behavior is in the "undocumented" territory.
