The Mixed Reality Forums here are no longer being used or maintained.
There are a few other places we would like to direct you to for support, both from Microsoft and from the community.
The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.
For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.
If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.
And always feel free to hit us up on Twitter @MxdRealityDev.
Working with Spatial Understanding
Hi, I'm new to SpatialUnderstanding and have a couple of questions about it, so I'll put them all in one post...
(1) First off, I'm mapping my room with Spatial Understanding and grabbing the values from the Vector3 vertex array meshVertices (in SpatialUnderstandingCustomMesh -> Import_UnderstandingMesh(), right after meshVertices is filled). I'm having trouble using that set of coordinates to identify which vertices are part of the floor. I know the SpatialUnderstanding example scene identifies the floor through raycasting, but I don't want to raycast; I just want my script to know which areas are part of the floor and then use only those vertices. I tried keeping only vertices whose y values fall within a certain range (y < -0.5 or something), but then I realized (I may be wrong here) that the vertex coordinates depend on the position the HoloLens app was launched from, so that wouldn't work for different users. How can I identify the set of vertices that form the floor, given that the starting position will always vary?
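[Editor's note: one way to make the filter independent of the launch position is to avoid a hard-coded y threshold. Instead, obtain a reference floor height first (for example from one of SpatialUnderstanding's floor/alignment queries, or a one-time raycast) and keep only vertices within a tolerance of it. A minimal sketch, where `floorY` is assumed to come from such a query and `meshVertices` is the array mentioned above; `FloorVertexFilter` is a hypothetical helper, not part of the toolkit:]

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class FloorVertexFilter
{
    // Keep vertices whose height is within `tolerance` of a known floor height.
    // `floorY` must come from a query (e.g. SpatialUnderstanding's floor or
    // alignment data) rather than a hard-coded value, since the world origin
    // depends on where the HoloLens app was launched.
    public static List<Vector3> FilterFloorVertices(Vector3[] meshVertices,
                                                    float floorY,
                                                    float tolerance = 0.05f)
    {
        var floor = new List<Vector3>();
        foreach (Vector3 v in meshVertices)
        {
            if (Mathf.Abs(v.y - floorY) <= tolerance)
            {
                floor.Add(v);
            }
        }
        return floor;
    }
}
```

A tolerance around 5 cm accounts for mesh noise; tune it for your scan quality.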
(2) While playing around with placing objects on the SpatialUnderstanding mesh I noticed that my objects (with TapToPlace script attached from HTK) were not actually being placed on the mesh when the raycast was deployed. Rather, it would always hit the original scanned mesh. Not sure if this next part is how it works, but I looked in SpatialMappingSource -> CreateSurfaceObject(...) and there is a line that says:
surfaceObject.Object.layer = SpatialMappingManager.Instance.PhysicsLayer;
I thought that this would put the mesh on the spatial mesh layer (31), but that is not the case. How can I use the SpatialUnderstanding mesh as my collision mesh instead of the "original" one?
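[Editor's note: for reference, one way to check which mesh your raycasts are actually hitting is to pass an explicit layer mask restricted to the spatial mapping physics layer. This sketch assumes the HoloToolkit `SpatialMappingManager` singleton from the snippet above:]

```csharp
using UnityEngine;

public class SpatialMeshRaycaster : MonoBehaviour
{
    void Update()
    {
        // Build a mask containing only the spatial mapping physics layer
        // (31 by default in HoloToolkit).
        int spatialMask = 1 << SpatialMappingManager.Instance.PhysicsLayer;

        Ray gazeRay = new Ray(Camera.main.transform.position,
                              Camera.main.transform.forward);
        RaycastHit hit;
        if (Physics.Raycast(gazeRay, out hit, 10.0f, spatialMask))
        {
            // hit.collider tells you which surface object the ray struck;
            // if this is always the original scanned mesh, the SU mesh's
            // colliders are likely missing or on a different layer.
            Debug.Log("Hit " + hit.collider.name +
                      " on layer " + hit.collider.gameObject.layer);
        }
    }
}
```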
(3) Is there a way to make SpatialUnderstanding take the scanned mesh from the HPU then process that right when the app starts instead of having the user scan the room as the app is setting up?
p.s. sorry for the essay
Best Answers
jbienzms mod
- Take a look at how the Surface Understanding example executes queries. There is a query for locating spaces "on the floor". I would recommend leveraging these queries instead of doing the raycasting yourself; there are also queries for spaces on walls, the ceiling, etc. Beyond a smooth, watertight mesh, these queries are the core value of Spatial Understanding.
- This should be working. I would double-check that the original mesh isn't simply getting in your way. When the Spatial Understanding mesh is shown, the other mesh is hidden, but if I remember correctly that's done just by changing the renderer; the mesh may still be there and in your way. If not, please file an issue so that someone can look into why the SU mesh isn't participating in collisions.
- Building the SU mesh is a fairly CPU-intensive process. That's why the pattern starts with you scanning your "play space" and then clicking some UI to put it into understanding mode. During the transition from scanning to understanding, a lot of processing happens to smooth the mesh and make it watertight. Once this process is done, the SU mesh is locked down even if things move in the room. Since it's such an intensive process, it's not something that could (or should) be done continuously. If I remember correctly, there is a way to start over and maybe even a way to do an update, but keep in mind that the process takes a few seconds to complete and churns the CPU, so you really shouldn't be doing it all the time.
Perhaps there's a way you can use the SU mesh for queries but use the regular mesh for real-time collisions and physics? The two meshes are not exactly the same, but when I've used the regular mesh for physics interactions I've had to pay really close attention to notice any unexpected behavior. The one exception is that the SU mesh is watertight, meaning nothing can "roll off the end of the world" (though it may bounce off an invisible wall the user might not expect to be there).
Our Holographic world is here
RoadToHolo.com WikiHolo.net @jbienz
I work in Developer Experiences at Microsoft. My posts are based on my own experience and don't represent Microsoft or HoloLens.
thebanjomatic ✭✭✭
@ccyuen There is an open pull request to address the issue with the collider mesh not being updated correctly. https://github.com/Microsoft/HoloToolkit-Unity/pull/542
Answers
I didn't want to make another post so I'm just going to revive this dead thread...
I still don't understand how Spatial Understanding ties the floor to specific vertices. I don't really care whether it can recognize the floor (which, to my understanding, is what the SU queries do) unless it can also give me the meshes/vertices that belong to that floor; all I need are the floor vertices. I went through the scripts trying to understand how they work, but so far I haven't found anything I could use for this. My temporary fix is to raycast from the camera's position to the floor mesh and isolate vertices based on the hit's y value. It's been working well with few inconsistencies, but there are edge cases: if the user sits on a chair, for example, the y value of the vertices will be significantly off. I'm kind of new to C#, so it would be great if I could get some guidance on this issue. Sorry for the trouble.
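[Editor's note: some HoloToolkit versions expose the floor height directly through the playspace alignment query, which avoids depending on the camera's current height. The wrapper names below (`GetStaticPlayspaceAlignmentPtr`, `QueryPlayspaceAlignment`, `FloorYValue`) are recalled from the HoloToolkit-Unity SpatialUnderstanding wrapper and may differ in your version, so treat this as a sketch to check against your copy of `SpatialUnderstandingDll.cs`:]

```csharp
using HoloToolkit.Unity;
using UnityEngine;

public class FloorHeightQuery : MonoBehaviour
{
    // Returns the floor height in world coordinates, or null if the query is
    // unavailable. Names below are from memory of the HoloToolkit-Unity
    // SpatialUnderstanding wrapper and may differ in your toolkit version.
    public float? GetFloorY()
    {
        if (SpatialUnderstanding.Instance.ScanState !=
            SpatialUnderstanding.ScanStates.Done)
        {
            return null; // alignment data is only valid after scanning finishes
        }

        var dll = SpatialUnderstanding.Instance.UnderstandingDLL;
        System.IntPtr alignmentPtr = dll.GetStaticPlayspaceAlignmentPtr();
        if (SpatialUnderstandingDll.Imports.QueryPlayspaceAlignment(alignmentPtr) == 0)
        {
            return null; // query failed
        }

        return dll.GetStaticPlayspaceAlignment().FloorYValue;
    }
}
```

Combined with a tolerance filter over meshVertices, this gives floor vertices without any camera raycast.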
I also understand that working with exact coordinates is a bad idea and a pain in the ass, but I kind of need to in order to set up a grid system to implement A* (which is my project).
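[Editor's note: as a hedged sketch of the grid idea, once you have the floor vertices you can quantize their x/z coordinates into fixed-size cells and mark those cells walkable for A*. All names here (`FloorGridBuilder`, `BuildWalkableCells`) are hypothetical illustrations, not part of SpatialUnderstanding; `Vector2Int` requires Unity 2017.2 or later:]

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class FloorGridBuilder
{
    // Quantize floor vertices into a set of walkable (col, row) cells.
    // cellSize is in meters; origin anchors the grid so that cell indices
    // are consistent for a given scan regardless of camera movement.
    public static HashSet<Vector2Int> BuildWalkableCells(
        IEnumerable<Vector3> floorVertices,
        Vector3 origin,
        float cellSize = 0.25f)
    {
        var cells = new HashSet<Vector2Int>();
        foreach (Vector3 v in floorVertices)
        {
            int col = Mathf.FloorToInt((v.x - origin.x) / cellSize);
            int row = Mathf.FloorToInt((v.z - origin.z) / cellSize);
            cells.Add(new Vector2Int(col, row));
        }
        return cells; // feed these into your A* graph as walkable nodes
    }
}
```

Cells with no floor vertices (furniture, holes in the scan) are simply absent from the set and thus unwalkable.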
@stepan_stulov Even if I were to ask the user to look at the floor, how does SU actually grab the meshes/vertices from the recognized surface? I know that SU can recognize surfaces, but does it also bind a recognized surface to its meshes/vertices in some sort of object that I can access?
@thebanjomatic the PR fixed my issue with the collision, thanks!