Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

HoloLens user position

If I use the HoloLens for a mixed reality application, I need to know not only the position of holograms relative to the HoloLens but also the position of real-world objects relative to the HoloLens. How can you do that? (I understand that it has no GPS, and that GPS wouldn't be accurate enough indoors anyway.)

As an example of what I want to do: let's say I have the layout of an apartment (living room, toilet, kitchen) and I want to navigate it with my HoloLens. To do that I would need to know the relative positions of things, but the problem is that the user experience does not start at exactly the same point every time, right?


  • stepan_stulov ✭✭✭
    edited November 2017

    Hey, @hitoruna

    While from the perspective of the HoloLens device itself the whole world moves around it and the device stays static for all practical purposes, you as a developer (especially a Unity developer) can think in reverse: your environment (the Space) is static, and you as the HoloLens wearer, along with all other objects, move through it. That environment is represented by the Unity scene. The HoloLens itself, or rather its wearer's head, is represented by the main camera (control of which is fully delegated to the HoloLens's internals). Objects can either be moved by code or be stabilised against the environment using WorldAnchors.
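    As a minimal sketch of the stabilisation idea, using Unity's legacy `UnityEngine.XR.WSA.WorldAnchor` component (the API available for HoloLens development around this era; the script and class names here are illustrative, not from the original post):

    ```csharp
    using UnityEngine;
    using UnityEngine.XR.WSA; // legacy Windows Mixed Reality APIs

    public class AnchorPlacer : MonoBehaviour
    {
        // Pins this GameObject to a fixed spot in the physical environment.
        // Once anchored, HoloLens keeps its world position stable even as
        // tracking refines its understanding of the room; you can no longer
        // move the object by code until the anchor is removed.
        void Start()
        {
            WorldAnchor anchor = gameObject.AddComponent<WorldAnchor>();
            anchor.OnTrackingChanged += (self, located) =>
            {
                Debug.Log(located ? "Anchor located" : "Anchor lost");
            };
        }
    }
    ```

    Note that an anchored object cannot be repositioned while the WorldAnchor component is attached; destroy the anchor first, move the object, then re-anchor it.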

    When developing for HoloLens you must assume that only world-anchored objects keep environment-fixed positions, and even that holds only within certain precision limits. Everything else you must treat as unpredictable. That's why many existing HoloLens applications either use heuristics to determine where the user is (the harder task for the developer) or simply offload that problem onto the user, à la "please look at the floor for 5 seconds" (the easier task for the developer).
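    For the apartment-navigation scenario above, anchors can also be persisted across sessions with the legacy `WorldAnchorStore`, so room markers placed once (e.g. "kitchen") snap back to the same physical spots on every app start, even though the user never starts in the same place. A hedged sketch (the ids and method names of this wrapper class are illustrative; the store API itself is Unity's):

    ```csharp
    using UnityEngine;
    using UnityEngine.XR.WSA;             // legacy WorldAnchor
    using UnityEngine.XR.WSA.Persistence; // legacy WorldAnchorStore

    public class AnchorPersistence : MonoBehaviour
    {
        WorldAnchorStore store;

        void Start()
        {
            // The store loads asynchronously; keep the reference when ready.
            WorldAnchorStore.GetAsync(s => store = s);
        }

        // Save an anchor under a stable id, e.g. "kitchen".
        public void SaveAnchor(string id, WorldAnchor anchor)
        {
            store.Delete(id);     // replace any previous anchor with this id
            store.Save(id, anchor);
        }

        // On a later session, re-attach the saved anchor so the hologram
        // returns to the same physical position in the apartment.
        public void LoadAnchor(string id, GameObject target)
        {
            store.Load(id, target);
        }
    }
    ```

    With one persisted anchor per room, relative navigation falls out naturally: each session, the anchors relocate against the scanned environment, and the app reads the anchored objects' positions instead of assuming a fixed starting point.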

    Does that clarify things a bit?

    PS: This thread would be better posted in "Holographic apps", not "Immersive apps". The latter is for the, well, immersive (opaque) headsets.

    Building the future of holographic navigation. We're hiring.
