The Mixed Reality Forums here are no longer being used or maintained.
There are a few other places we would like to direct you to for support, both from Microsoft and from the community.
The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.
For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.
If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.
And always feel free to hit us up on Twitter @MxdRealityDev.
System's Spaces vs in-app mapping
Hi,
It seems the system already needs to know where you are located (the 'spaces'), so I don't really understand why my app needs to scan the room itself instead of just using the current system space.
A related question: the tutorials all show how to save the result of the mapping to your desktop computer (for later use in Unity), but the end user will not have Unity. Why don't you add tutorials on how to save/load these mapping results on the HoloLens device itself?
Thanks
Best Answer
ahillier mod
@DomDom,
Knowledge dump (or "How I like to think about spatial mapping"):
The HoloLens is constantly scanning the environment and saves that data as 'spaces'. Applications can access that data when the 'Spatial Perception' capability is enabled. The information contained in 'spaces' is not directly usable by your Unity application. You need to create game objects with colliders (for physics) and renderers (for visualization) that represent the meshes contained in the 'spaces' file. Some applications will alter the spatial mapping data (blow holes in the mesh, convert them to planes, smooth the mesh, etc.), and you really don't want them doing this to the meshes saved in the 'spaces' file, or else it would negatively impact all other applications that need to use environment data.
When you call SpatialMappingObserver.Start(), you will begin to get Add/Update/Remove events from the observer. If you were to add some Debug.Log() messages to each of these events, you would probably see that the very first time you call Start() you actually get a ton of updates (vs Adds). This is because the HoloLens already knows about the environment (from the 'spaces' file), and is just updating the spatial mapping meshes currently in view with new information. If you were to turn off your HoloLens and then venture to a new area before restarting your app, then you would probably see a lot more 'Add' events after the initial .Start() call. Unfortunately, you cannot guarantee that the user has scanned their environment before running your application, or that the current 'spaces' file contains enough data to be useful to your application. This is where scanning becomes important to your application.
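To see this Add-vs-Update pattern for yourself, here is a minimal sketch using Unity's `SurfaceObserver` API (the layer that HoloToolkit's `SpatialMappingObserver` wraps). The class name `SurfaceChangeLogger` and the 10m observation volume are just illustrative choices:

```csharp
using System;
using UnityEngine;
using UnityEngine.XR.WSA;

public class SurfaceChangeLogger : MonoBehaviour
{
    private SurfaceObserver observer;
    private int added, updated, removed;

    void Start()
    {
        observer = new SurfaceObserver();
        // Only surfaces inside this 10m box around the origin are reported.
        observer.SetVolumeAsAxisAlignedBox(Vector3.zero, Vector3.one * 10f);
    }

    void Update()
    {
        // Poll for surface changes (a real app would throttle this).
        observer.Update(OnSurfaceChanged);
    }

    void OnSurfaceChanged(SurfaceId id, SurfaceChange change, Bounds bounds, DateTime updateTime)
    {
        switch (change)
        {
            case SurfaceChange.Added:   added++;   break;
            case SurfaceChange.Updated: updated++; break;
            case SurfaceChange.Removed: removed++; break;
        }
        Debug.Log($"Surfaces added={added} updated={updated} removed={removed}");
    }
}
```

If the device already knows the room from its 'spaces' data, the `updated` count should dominate right after startup; in a freshly visited area you would see mostly `added` events instead.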
The HoloLens' knowledge of the room is constantly changing. Objects in the room can move around, and meshes will get more refined as users move closer to real-world surfaces (best scan results occur ~0.8-2m from a surface). To get the most up-to-date version of the mesh, it's good practice to have the user scan their area first. While you can certainly do this outside of your application (the user just needs to walk around while wearing the HoloLens), there is no guarantee that the user will actually take the time to do so. By providing a built-in scanning experience, you can ensure that the user has scanned their area and you can also do some processing of the mesh to ensure that their space is adequate for your scenario (if your application requires a wall, you want to ensure that at least one wall has been scanned before exiting the scanning phase).
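One way to implement that "is there a wall yet?" gate, sketched here as a hypothetical helper: walk the triangles of the spatial mapping meshes collected so far and accumulate the area of roughly vertical surfaces. The 0.2 normal threshold and 1 m² minimum are arbitrary illustrative values:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Returns true once the scanned meshes contain enough near-vertical
// surface area to count as a wall candidate.
static bool HasWallCandidate(IEnumerable<Mesh> meshes, float minArea = 1.0f)
{
    float wallArea = 0f;
    foreach (var mesh in meshes)
    {
        var v = mesh.vertices;
        var t = mesh.triangles;
        for (int i = 0; i < t.Length; i += 3)
        {
            Vector3 a = v[t[i]], b = v[t[i + 1]], c = v[t[i + 2]];
            Vector3 cross = Vector3.Cross(b - a, c - a);
            // A wall triangle's normal is near-horizontal (small Y component).
            if (Mathf.Abs(cross.normalized.y) < 0.2f)
                wallArea += cross.magnitude * 0.5f; // triangle area
        }
        if (wallArea >= minArea) return true;
    }
    return false;
}
```

In practice the Spatial Understanding module described below does this kind of classification far more robustly; this just shows the shape of the check.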
Usually the raw spatial mapping meshes are not very useful to an application that will interact with the environment (physics, navigation, etc.). That's where processing of the mesh is important. The Spatial Understanding and PlaneFinding features of the HoloToolkit can help here. The level of processing required (if any) is unique to each application. Ideally, if you have a lot of processing, you will save the resulting processed mesh off and then load it at the beginning of future sessions. If the reload is successful, then rescanning won't be necessary. However, you still need to support a scanning phase, in case the user has moved to a new location and loading the saved processed mesh fails.
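The load-else-rescan startup flow can be sketched roughly like this, assuming HoloToolkit's `MeshSaver` utility for persistence; `BuildSceneFrom()` and `StartScanningPhase()` are hypothetical application methods standing in for your own processing pipeline:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity.SpatialMapping;

void InitializeEnvironment()
{
    const string fileName = "ProcessedRoom";
    try
    {
        IList<Mesh> meshes = MeshSaver.Load(fileName);
        if (meshes != null && meshes.Count > 0)
        {
            // Reuse last session's processed mesh; no rescan needed.
            BuildSceneFrom(meshes);
            return;
        }
    }
    catch (Exception)
    {
        // File missing or unreadable: fall through to a fresh scan.
    }
    // No usable saved data, or the user may be in a new room: rescan,
    // then persist the processed result for the next session.
    StartScanningPhase(processedMeshes =>
    {
        MeshSaver.Save(fileName, processedMeshes);
        BuildSceneFrom(processedMeshes);
    });
}
```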
Overall, how much scanning and processing is required is very much dependent on your application's design. Some applications won't use spatial mapping at all, so this won't be an issue. Others, like the 'Origami' tutorial, need to move/set an object on real-world surfaces that might be changing (like people), so constantly scanning the environment while using the raw mesh data is adequate (these apps have no built-in 'scanning' phase; they just keep the SpatialMappingObserver running during the entire application). Other applications, like 'RoboRaid', 'Conker', and 'Fragments', have a dedicated scan time to ensure that they get quality meshes before running expensive processing of the spatial data (to place objects in sensible locations, build AI navigation paths, determine physics properties for various objects in the room, etc.). When you have a dedicated scanning phase, you can keep the user from getting bored (if you create a unique visual for the mesh, users will be happy to keep scanning their environment) as they wait for processing to complete and the rest of the application to begin.
I hope something in the long description above will prove helpful.
Happy scanning
~Angela
Answers
The app doesn't actually need to scan the room. The spatial mapping features will pull in whatever data the currently loaded space has available (within the bounding volume). The spatial understanding feature does require that you look around the space, but this is an artificial requirement, and it could be rewritten to not have this behavior.
The purpose of helping developers get their spatial map into Unity is to allow you to iterate in the Unity editor with a sense of real-world scale and to experiment with processing/shading the spatial mapping mesh. It's not a feature end users would need to use.
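For that editor-iteration workflow, the underlying idea is simply to rebuild the saved room meshes as GameObjects so raycasts and placement logic work at real-world scale without deploying to the device. A sketch, again assuming HoloToolkit's `MeshSaver`; `LoadRoomIntoEditor` is a hypothetical helper and the material choice is yours:

```csharp
using UnityEngine;
using HoloToolkit.Unity.SpatialMapping;

// Recreate a scanned room from a MeshSaver file inside the Unity editor.
void LoadRoomIntoEditor(string fileName, Material wireframe)
{
    foreach (Mesh mesh in MeshSaver.Load(fileName))
    {
        var surface = new GameObject("RoomSurface");
        surface.AddComponent<MeshFilter>().mesh = mesh;
        surface.AddComponent<MeshRenderer>().sharedMaterial = wireframe;
        surface.AddComponent<MeshCollider>().sharedMesh = mesh; // physics raycasts
    }
}
```

HoloToolkit also ships observer components aimed at exactly this scenario, so check its SpatialMapping scripts before rolling your own loader.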
===
This post provided as-is with no warranties and confers no rights. Using information provided is done at own risk.
(Daddy, what does 'now formatting drive C:' mean?)
@Patrick @ahillier Thanks a lot for your explanations , the whole thing is becoming clearer everyday ;-)
@ahillier
We developers are understanding HoloLens a little more every day. Please provide us developers with the real-time raw depth data stream, the raw data used for updating the spatial mapping!
Hi @CurvSurf,
Since I came from Kinect, I can understand why you would like access to the raw depth streams, but unfortunately I do not have the power to grant your request.
Could you please file a feature request using the 'Feedback Hub' on the HoloLens?
Thanks!
~Angela
@ahillier
Hi Angela,
Does that mean any feedback posted here is useless?
https://forums.hololens.com/discussion/1762/hololens-with-leap-motion-vuforia
https://forums.hololens.com/discussion/1302/what-happens-when-you-pair-hololens-with-kinect
https://forums.hololens.com/search?Page=p1&Search=kinect
Thanks,
Joon