The Mixed Reality Forums here are no longer being used or maintained.
There are a few other places we would like to direct you to for support, both from Microsoft and from the community.
The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.
For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.
If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.
And always feel free to hit us up on Twitter @MxdRealityDev.
What Is HoloLens's Spatial Mapping Lifecycle In Relation to Unity
Hello, everybody.
Glad to be here. It's my second day reading and playing around with some Unity examples for HoloLens.
My main question for now is this: what are the different abstraction layers in the lifecycle of spatial mapping inside a HoloLens Unity application?
So far I managed to understand this:
HoloLens itself provides persistence and improves precision when it finds itself in "the same" room across application runs, within a single application installation. This process is largely hidden; what developers are given is the language of surfaces: did a surface appear, did it disappear, and could you give me the mesh of the surface by its id?
On the Unity side, when using this API in conjunction with PlaneFinding (from the HoloToolkit), you are speaking a higher-level language: walls, floors, ceilings. It's up to you to interpret this data, serialise/deserialise it between application runs, and so on.
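As a sketch of point 1, Unity of this era exposes the surface lifecycle through `SurfaceObserver` (in `UnityEngine.VR.WSA` on older Unity versions, `UnityEngine.XR.WSA` later). The GameObject bookkeeping below is illustrative, not prescriptive — only the observer calls are the actual API:

```csharp
// Sketch of the "language of surfaces": appear, disappear, mesh-by-id.
// Assumes the UnityEngine.VR.WSA namespace (UnityEngine.XR.WSA in later
// Unity versions); the per-surface GameObject bookkeeping is illustrative.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.VR.WSA;

public class SurfaceLifecycle : MonoBehaviour
{
    private SurfaceObserver observer;
    // One GameObject (MeshFilter/WorldAnchor/MeshCollider) per surface id.
    private readonly Dictionary<SurfaceId, GameObject> surfaces =
        new Dictionary<SurfaceId, GameObject>();

    void Start()
    {
        observer = new SurfaceObserver();
        observer.SetVolumeAsAxisAlignedBox(Vector3.zero, new Vector3(3f, 3f, 3f));
    }

    void Update()
    {
        // Pump the observer; real apps throttle this rather than
        // calling it every frame.
        observer.Update(OnSurfaceChanged);
    }

    private void OnSurfaceChanged(SurfaceId id, SurfaceChange change,
                                  Bounds bounds, System.DateTime updateTime)
    {
        switch (change)
        {
            case SurfaceChange.Added:
            case SurfaceChange.Updated:
                GameObject go;
                if (!surfaces.TryGetValue(id, out go))
                {
                    go = new GameObject("Surface-" + id.handle,
                        typeof(MeshFilter), typeof(MeshRenderer),
                        typeof(WorldAnchor), typeof(MeshCollider));
                    surfaces[id] = go;
                }
                // "Could you give me the mesh of the surface by id?"
                // Args: id, output mesh, anchor, collider,
                // trianglesPerCubicMeter, bakeCollider.
                var data = new SurfaceData(id,
                    go.GetComponent<MeshFilter>(),
                    go.GetComponent<WorldAnchor>(),
                    go.GetComponent<MeshCollider>(),
                    1000f,
                    true);
                observer.RequestMeshAsync(data, OnMeshReady);
                break;

            case SurfaceChange.Removed:
                GameObject removed;
                if (surfaces.TryGetValue(id, out removed))
                {
                    surfaces.Remove(id);
                    Destroy(removed);
                }
                break;
        }
    }

    private void OnMeshReady(SurfaceData bakedData, bool outputWritten,
                             float elapsedBakeTimeSeconds)
    {
        // The mesh for bakedData.id is now baked into the MeshFilter
        // we passed in; this is the data you would serialise yourself.
    }
}
```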
I would highly appreciate it if you could add to, remove from, or modify this understanding. I also apologise that I don't have a specific question; I'm trying to understand where I, as a Unity developer, stand in the lifecycle of spatial mapping, so that I can implement things accordingly. Reading the existing documentation leaves me slightly unsure of it.
Thanks!
Building the future of holographic navigation. We're hiring.
Best Answer
DavidKlineMS mod
@stepan_stulov,
Welcome! Glad to have you here.
In Unity, you have both semantics and, in fact, you need to use your semantic 1 and the HoloToolkit to achieve your semantic 2.
Take a look at the Holograms 230 Spatial Mapping course. It demonstrates how to get the Spatial Mapping mesh data as you describe (data added, updated and deleted). The class also shows how to take that data and perform plane finding.
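For the plane-finding half (semantic 2), a hedged sketch of how the HoloToolkit pieces fit together: `PlaneFinding.FindPlanes` takes the meshes you collected via the surface observer and returns `BoundedPlane` results. Namespace and signatures vary between HoloToolkit versions, so verify against your copy:

```csharp
// Sketch only: feeding collected surface meshes into HoloToolkit's
// PlaneFinding. The namespace is HoloToolkit.Unity.SpatialMapping in
// later toolkit versions (HoloToolkit.Unity in earlier ones).
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity.SpatialMapping;

public static class PlaneFindingSketch
{
    // meshFilters: the MeshFilters you filled via RequestMeshAsync.
    public static BoundedPlane[] FindRoomPlanes(List<MeshFilter> meshFilters)
    {
        var meshData = new List<PlaneFinding.MeshData>();
        foreach (MeshFilter filter in meshFilters)
        {
            if (filter.sharedMesh != null)
            {
                meshData.Add(new PlaneFinding.MeshData(filter));
            }
        }

        // snapToGravityThreshold (degrees) snaps near-horizontal and
        // near-vertical planes into exact alignment with gravity.
        return PlaneFinding.FindPlanes(meshData, 5.0f);
    }
}
```

Classifying each `BoundedPlane` as wall, floor, or ceiling (by its normal) is what the toolkit's `SurfaceMeshesToPlanes`/`SurfacePlane` components then do on top of this.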
I hope this helps clear up your question.
Thanks!
David
Answers
Do we have to scan the room first for plane finding to work? How can I use plane finding at runtime?
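Plane finding operates on whatever mesh has been observed so far, so some scanned geometry is needed first, but it can be re-run at runtime as more mesh arrives. A hedged sketch using the HoloToolkit components from the Holograms 230 course (`SpatialMappingManager`, `SurfaceMeshesToPlanes`; the names and the fixed scan delay below assume that toolkit version and are illustrative):

```csharp
// Sketch: triggering plane finding at runtime with the HoloToolkit
// components used in Holograms 230. Names assume that toolkit version;
// the fixed scan delay is an illustrative placeholder.
using UnityEngine;
using HoloToolkit.Unity.SpatialMapping;

public class RuntimePlaneFinder : MonoBehaviour
{
    // Let the observer scan for a while before the first plane pass.
    public float scanTimeSeconds = 10f;
    private bool planesRequested;

    void Start()
    {
        SpatialMappingManager.Instance.StartObserver();
        SurfaceMeshesToPlanes.Instance.MakePlanesComplete += OnPlanesComplete;
    }

    void Update()
    {
        if (!planesRequested && Time.unscaledTime >= scanTimeSeconds)
        {
            planesRequested = true;
            // Runs plane finding over all meshes observed so far; it can
            // be called again later once more of the room is scanned.
            SurfaceMeshesToPlanes.Instance.MakePlanes();
        }
    }

    private void OnPlanesComplete(object source, System.EventArgs args)
    {
        var planes = SurfaceMeshesToPlanes.Instance.ActivePlanes;
        Debug.Log("Found " + planes.Count + " planes");
    }
}
```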