Working with Spatial Understanding
Hi, I'm new to SpatialUnderstanding and have a few questions about it, so I'll put them all in one post...
(1) First off, I'm mapping my room with SpatialUnderstanding and grabbing the values from the Vector3 vertex array meshVertices (in SpatialUnderstandingCustomMesh -> Import_UnderstandingMesh(), right after meshVertices is filled). I'm having trouble using that set of coordinates to identify which vertices belong to the floor. I know the SpatialUnderstanding example scene identifies the floor through raycasting, but I don't want to raycast; I just want my script to know which areas are part of the floor and then use only those vertices. I tried keeping only vertices whose y values fall within a certain range (y < -0.5 or something), but then I realized (I may be wrong here) that the vertex coordinates depend on where the HoloLens was when the app launched, so that wouldn't work for different users. How can I identify the set of vertices that form the floor, given that the starting position will always vary?
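In case it helps anyone answering: one approach I've been considering, instead of a hard-coded y range, is asking the understanding DLL for the playspace alignment, which (as I understand it) reports the detected floor height in the same space as the mesh, and then filtering vertices against that. A rough sketch, assuming the HoloToolkit wrapper names QueryPlayspaceAlignment / FloorYValue I've seen in the source (please correct me if these are wrong for the current HTK version):

```csharp
using System;
using System.Collections.Generic;
using HoloToolkit.Unity;
using UnityEngine;

public class FloorVertexFilter : MonoBehaviour
{
    // Vertices within this distance of the detected floor height count as "floor".
    public float floorTolerance = 0.05f;

    public List<Vector3> GetFloorVertices(List<Vector3> meshVertices)
    {
        // Ask the DLL where it thinks the floor is, instead of assuming a fixed y.
        IntPtr alignmentPtr = SpatialUnderstanding.Instance.UnderstandingDLL
            .GetStaticPlayspaceAlignmentPtr();
        SpatialUnderstandingDll.Imports.QueryPlayspaceAlignment(alignmentPtr);
        float floorY = SpatialUnderstanding.Instance.UnderstandingDLL
            .GetStaticPlayspaceAlignment().FloorYValue;

        // Keep only vertices near the reported floor height.
        // Note: if meshVertices are in the mesh's local space, convert them with
        // transform.TransformPoint() first so they match the DLL's space.
        var floorVerts = new List<Vector3>();
        foreach (Vector3 v in meshVertices)
        {
            if (Mathf.Abs(v.y - floorY) < floorTolerance)
                floorVerts.Add(v);
        }
        return floorVerts;
    }
}
```

Since FloorYValue comes from the DLL's own analysis, this should be independent of where the app was launched from — but I haven't verified that, so I'd love confirmation.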
(2) While playing around with placing objects on the SpatialUnderstanding mesh, I noticed that my objects (with the TapToPlace script attached from HTK) were not actually being placed on that mesh when the raycast fired; instead, it would always hit the original scanned mesh. I'm not sure if this next part is how it works, but I looked in SpatialMappingSource -> CreateSurfaceObject(...) and there is a line that says:
surfaceObject.Object.layer = SpatialMappingManager.Instance.PhysicsLayer;
I thought this would put the mesh on the spatial mapping layer (31), but that doesn't seem to be the case. How can I use the SpatialUnderstanding mesh as my collision mesh instead of the "original" one?
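For context, the workaround I've been experimenting with is to hide the raw mapping mesh and disable its colliders once scanning finishes, so raycasts can only hit the understanding mesh. A sketch of what I mean — property names (DrawVisualMeshes, PhysicsLayer, UnderstandingCustomMesh) are what I recall from the HTK source, so please double-check them:

```csharp
using HoloToolkit.Unity;
using HoloToolkit.Unity.SpatialMapping;
using UnityEngine;

public class UseUnderstandingMesh : MonoBehaviour
{
    // Call this after the SpatialUnderstanding scan is finished.
    public void OnScanDone()
    {
        // Stop drawing the original scanned mesh.
        SpatialMappingManager.Instance.DrawVisualMeshes = false;

        // Disable the original mesh's colliders so raycasts (e.g. from
        // TapToPlace) can no longer hit it.
        foreach (MeshCollider col in SpatialMappingManager.Instance
                     .GetComponentsInChildren<MeshCollider>())
        {
            col.enabled = false;
        }

        // Make sure the understanding mesh itself sits on the physics layer
        // that TapToPlace raycasts against.
        var customMesh = SpatialUnderstanding.Instance.UnderstandingCustomMesh;
        foreach (Transform child in customMesh.transform)
        {
            child.gameObject.layer = SpatialMappingManager.Instance.PhysicsLayer;
        }
    }
}
```

Is disabling the original colliders like this the intended approach, or is there a supported way to swap the mapping source?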
(3) Is there a way to have SpatialUnderstanding take the already-scanned mesh from the HPU and process it right when the app starts, instead of making the user scan the room during app setup?
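My current understanding (possibly wrong) is that SpatialUnderstanding has to build its playspace during a live scan and can't ingest a previously captured mesh after the fact. The closest thing I've come up with is driving the scan automatically: start it at launch and request the finish as soon as the stats look adequate, so the user never sees an explicit scan phase. A sketch, with API names (RequestBeginScanning, RequestFinishScan, QueryPlayspaceStats, ScanStates) taken from the HTK source as I remember it:

```csharp
using System;
using HoloToolkit.Unity;
using UnityEngine;

public class AutoScan : MonoBehaviour
{
    // Square meters of horizontal surface to accept before ending the scan.
    // This threshold is my own guess — tune it for your use case.
    public float minHorizSurfaceArea = 1.0f;

    private void Start()
    {
        // Kick off scanning immediately instead of waiting for user input.
        SpatialUnderstanding.Instance.RequestBeginScanning();
    }

    private void Update()
    {
        if (SpatialUnderstanding.Instance.ScanState !=
            SpatialUnderstanding.ScanStates.Scanning)
        {
            return;
        }

        // Poll the DLL's playspace stats and finish as soon as enough
        // horizontal surface has been observed.
        IntPtr statsPtr = SpatialUnderstanding.Instance.UnderstandingDLL
            .GetStaticPlayspaceStatsPtr();
        if (SpatialUnderstandingDll.Imports.QueryPlayspaceStats(statsPtr) != 0)
        {
            var stats = SpatialUnderstanding.Instance.UnderstandingDLL
                .GetStaticPlayspaceStats();
            if (stats.HorizSurfaceArea >= minHorizSurfaceArea)
            {
                SpatialUnderstanding.Instance.RequestFinishScan();
            }
        }
    }
}
```

Happy to be corrected if the DLL can actually import an existing mesh directly.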
p.s. sorry for the essay