Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Understanding Spatial Anchors

I'm currently using the Holograms 240 project to sync an experience between multiple devices. The official Microsoft dev page describes anchors briefly, but I'm trying to understand them in a bit more detail.

So suppose my initial HoloLens device starts and it needs to create the anchors. From what I understand, it scans the room and searches for identifiable physical features? So would it see a bottle of water and register that physical object as the anchor? (I'm not sure.)

Are spatial anchors determined purely by geometric characteristics? Or can anchors be identified by something such as colour (such as a bright red piece of paper on the floor)?

How does HoloLens choose what will be the spatial anchor when there are various potential anchors in the room?

Would it be possible to record a specific physical anchor and clamp the experience to search only for that anchor? Kind of like how Vuforia uses image targets?

I'm trying to streamline my AR experience and remove the waiting time of searching for anchors, creating anchors, sending anchor data across the sharing service and so on. Instead, I'd just want all HoloLens devices in the experience to search for a single real-world reference point and not worry about learning about the environment over time. (I can guarantee a static anchor/reference point on start-up for my current project.)




    I don't know how HoloLens works internally, so I tend to treat the anchoring capability as a "black box" with APIs that I can use to achieve the things the docs talk about. I haven't found it too necessary to get bogged down in the 'how', given that the device is always scanning and doing its thing regardless of my implementation choices.

    The docs are the definitive guide, but my own thoughts on what spatial anchors give me are:

    1) The ability to stabilise content associated with them such that as the user roams, the anchor keeps the content in its real-world position.

    2) The ability to be exported from the device (if the spatial perception capability is available) as an opaque piece of data with APIs that can report success/failure.

    3) The ability to be imported to a device with APIs that report success/failure.

    4) The ability to be stored for some period of time.
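
    In Unity terms (which is what the Holograms 240 era of projects uses), point (1) is just a component attached to a GameObject. A minimal sketch, assuming the legacy `UnityEngine.XR.WSA` namespace from those Unity versions; the class and method names here are my own invention:

    ```csharp
    using UnityEngine;
    using UnityEngine.XR.WSA; // legacy Unity WSA APIs (HoloLens, pre-XR SDK)

    public class AnchorPlacer : MonoBehaviour
    {
        // Call this once the hologram is positioned where the user wants it.
        public void LockInPlace()
        {
            // Adding a WorldAnchor hands control of this GameObject's transform
            // to the device, which keeps it at that real-world position as the
            // user roams and the coordinate system is refined.
            gameObject.AddComponent<WorldAnchor>();
        }

        // To move the hologram again, the anchor must be removed first.
        public void Unlock()
        {
            var anchor = gameObject.GetComponent<WorldAnchor>();
            if (anchor != null)
            {
                DestroyImmediate(anchor);
            }
        }
    }
    ```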

    The device that does the export in (2) doesn't have to be the same device that does the import in (3) and the anchors might have been written to disk or passed across the network in the meantime.
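
    Points (2) and (3) are exposed through `WorldAnchorTransferBatch` in the `UnityEngine.XR.WSA.Sharing` namespace. A sketch of the export/import round trip, where the anchor id string and class name are placeholders of my own:

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.XR.WSA;
    using UnityEngine.XR.WSA.Sharing;

    public class AnchorTransfer : MonoBehaviour
    {
        private readonly List<byte> exportedData = new List<byte>();

        // (2) Export: serialise an anchor into an opaque byte stream.
        public void ExportAnchor(WorldAnchor anchor)
        {
            var batch = new WorldAnchorTransferBatch();
            batch.AddWorldAnchor("myAnchorId", anchor);
            WorldAnchorTransferBatch.ExportAsync(
                batch, OnExportDataAvailable, OnExportComplete);
        }

        private void OnExportDataAvailable(byte[] data)
        {
            exportedData.AddRange(data); // the data arrives in chunks
        }

        private void OnExportComplete(SerializationCompletionReason reason)
        {
            if (reason == SerializationCompletionReason.Succeeded)
            {
                // exportedData can now be written to disk or sent over the network.
            }
        }

        // (3) Import: on the receiving device, deserialise the bytes and lock
        // a GameObject to the anchor they describe.
        public void ImportAnchor(byte[] data, GameObject target)
        {
            WorldAnchorTransferBatch.ImportAsync(data, (reason, batch) =>
            {
                if (reason == SerializationCompletionReason.Succeeded)
                {
                    batch.LockObject("myAnchorId", target);
                }
            });
        }
    }
    ```

    Note that the two halves can (and in the sharing scenario do) run on different devices, with the byte stream carried between them by whatever transport the app chooses.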

    Combining (2), (3) and (4) means that a single device can potentially restore a hologram in a physical space at a later point in time if the anchor(s) have been stored and re-loaded.
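
    Point (4) is the `WorldAnchorStore`, which persists anchors on the device between app sessions. A sketch, again with hypothetical class and id names:

    ```csharp
    using UnityEngine;
    using UnityEngine.XR.WSA;
    using UnityEngine.XR.WSA.Persistence;

    public class AnchorPersistence : MonoBehaviour
    {
        private WorldAnchorStore store;

        private void Start()
        {
            // The store is handed back asynchronously on app start-up.
            WorldAnchorStore.GetAsync(loadedStore => store = loadedStore);
        }

        // Save an anchor so a later session can find the same real-world spot.
        public bool SaveAnchor(string id, WorldAnchor anchor)
        {
            return store != null && store.Save(id, anchor);
        }

        // On a later run, re-attach the stored anchor to a GameObject; the
        // device's coordinate system may be entirely different this session,
        // but the content lands in the same physical place.
        public bool RestoreAnchor(string id, GameObject target)
        {
            return store != null && store.Load(id, target) != null;
        }
    }
    ```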

    This is in spite of the device almost certainly presenting different coordinate systems to an app each time it runs in the same physical space even on the same device.

    It's the app's choice as to which anchors it might store, which it might restore and in what order it might attempt to restore them - there's potential for optimisation there.

    This also means that multiple devices can work out a common co-ordinate system in a physical space if they can all get a (local) position/orientation for a world anchor that they have all successfully imported. This enables the 'shared holograms' idea of content which is located differently on each app on each device but displays at the same physical place in the world.

    In that scenario, the [create -> export -> import -> compute transform to local co-ordinate system] process around the anchor is providing an automatic, hardware-provided way of aligning the different co-ordinate systems of multiple devices.
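
    One way of realising that common co-ordinate system in practice is to lock a single root GameObject to the shared anchor on every device and then express all shared content relative to that root. A sketch under that assumption (names are mine, not from any SDK):

    ```csharp
    using UnityEngine;

    public class SharedSceneRoot : MonoBehaviour
    {
        // Precondition: every device has imported the same anchor and locked
        // this root GameObject to it (e.g. via WorldAnchorTransferBatch.LockObject).
        // Content placed via anchor-relative coordinates then appears at the
        // same physical spot on every device, even though each device's own
        // world origin is different.
        public GameObject PlaceShared(GameObject prefab, Vector3 positionRelativeToAnchor)
        {
            // TransformPoint converts an anchor-relative position into this
            // device's local world coordinates.
            var instance = Instantiate(
                prefab,
                transform.TransformPoint(positionRelativeToAnchor),
                transform.rotation);

            // Parenting under the root keeps the content following the anchor
            // as the device refines its understanding of the space.
            instance.transform.SetParent(transform, worldPositionStays: true);
            return instance;
        }
    }
    ```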

    It's not the only way of aligning those co-ordinate systems. As you say, you could make sure that each HoloLens starts the app at an exact, fixed position and orientation within a room. The devices would all then agree on the origin and direction of the axes, although you might still want to anchor for stability.

    Alternatively, you could place some 'alignment object' in the room, get each app user to align a digital hologram of that object with the real-world object, and then tap some button to say 'align'. I did a very rough experiment along those lines.

    Or, you might automate that process through the use of something like Vuforia.

    I think the trade-offs would be around ease-of-use, accuracy and then around how much time an app spends exporting/importing/storing/sending anchors if you went with the automatic way of doing things.

    Where I've used anchors, I haven't found that they take much time to create and get located, but I have found that they can be reasonably large blobs of data. Some consideration might be given to how many anchors are created, how frequently they are passed over a network, and the user experience while that is happening.

    I hope that helps - all mistakes mine, sorry in advance if you (or others) notice some :-)
