
Working with Spatial Understanding

ccyuen
edited March 2017 in Questions And Answers

Hi, I'm new to SpatialUnderstanding and have a couple of questions about it, so I'll put them all in one post...

(1) First off, I'm trying to map my room using Spatial Understanding and then grab the values from the Vector3 array of vertices, meshVertices (from SpatialUnderstandingCustomMesh -> Import_UnderstandingMesh(), right after meshVertices is filled). I'm having trouble using that set of coordinates to identify which vertices are part of the floor. I know that the SpatialUnderstanding example scene identifies the floor through raycasting, but I don't want to raycast; I just want my script to know which areas are part of the floor and then use only those vertices. I tried using only vertices whose y values fall within a certain range (y < -0.5 or something), but then I realized (I may be wrong here) that the vertex coordinates depend on the position the HoloLens app was launched from, so that would not work reliably for different users. How can I identify the set of vertices that form the floor, given that the starting position will always vary?
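
For reference, the kind of filter I tried looks roughly like this (a sketch only; it assumes meshVertices is the Vector3 array captured from Import_UnderstandingMesh(), the usual UnityEngine and System.Collections.Generic usings, and the -0.5 threshold is arbitrary):

    // Naive filter I tried (sketch): keep only vertices below a fixed height.
    // This breaks because the coordinates are relative to where the app started,
    // so the real floor's height differs per user and per session.
    List<Vector3> GetFloorVerticesNaive(Vector3[] meshVertices)
    {
        List<Vector3> floorVertices = new List<Vector3>();
        foreach (Vector3 v in meshVertices)
        {
            if (v.y < -0.5f) // arbitrary threshold, tied to the app's start position
            {
                floorVertices.Add(v);
            }
        }
        return floorVertices;
    }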

(2) While playing around with placing objects on the SpatialUnderstanding mesh, I noticed that my objects (with the TapToPlace script attached from the HoloToolkit) were not actually being placed on that mesh when the raycast was performed. Instead, the raycast would always hit the original scanned mesh. I'm not sure if this next part is how it works, but I looked in SpatialMappingSource -> CreateSurfaceObject(...) and there is a line that says:
surfaceObject.Object.layer = SpatialMappingManager.Instance.PhysicsLayer;
I thought this would make the mesh part of the spatial mesh layer (31), but that is not the case. How can I use the SpatialUnderstanding mesh as my mesh instead of the "original" one?
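
To make it concrete, what I'm imagining is something along these lines (purely a sketch of the kind of workaround I mean, not something from the toolkit; it assumes the understanding mesh ends up as MeshFilter children under the SpatialUnderstanding object):

    // Hypothetical workaround sketch: give each piece of the understanding mesh a
    // collider and put it on the spatial mapping physics layer, so that raycasts
    // from TapToPlace can hit it instead of the original scanned mesh.
    void MoveUnderstandingMeshToSpatialLayer()
    {
        foreach (MeshFilter filter in SpatialUnderstanding.Instance.GetComponentsInChildren<MeshFilter>())
        {
            MeshCollider collider = filter.gameObject.GetComponent<MeshCollider>();
            if (collider == null)
            {
                collider = filter.gameObject.AddComponent<MeshCollider>();
            }
            collider.sharedMesh = filter.sharedMesh;
            filter.gameObject.layer = SpatialMappingManager.Instance.PhysicsLayer;
        }
    }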

(3) Is there a way to make SpatialUnderstanding take the already-scanned mesh from the HPU and process it right when the app starts, instead of having the user scan the room while the app is setting up?

p.s. sorry for the essay :)

Answers

  • stepan_stulov ✭✭✭
    edited March 2017

    Hello, @ccyuen.

    1. You cannot rely on absolute coordinates when analysing spatial mapping. Instead, normalise them between the lowest and highest points found and look for an area that is mostly horizontal and mostly at the bottom of the mesh (a rough sketch of this idea follows after this list). You can also ask the user for help and have them look at the floor for a few seconds so you have a better starting point.
    2. and 3. As far as I understand, and please correct me if I'm wrong, there is no way to substitute your own mesh for HoloLens's spatial mapping and lead the device into "believing" it is its own spatial mapping. It only works one way: you can only save snapshots of the otherwise completely independent and unstoppable scanning process. HoloLens scans "no matter what". Spatial anchors also work against that same realtime spatial mapping, not against a snapshot. In other words, it's read-only. These snapshots are good for mesh analysis (floors, walls, etc.), for visualisation (that "scan wave" in the shell) or for other post-processing, but remember that you're always "looking at the past".
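
    A minimal sketch of the idea in point 1, assuming you already have the vertices and triangle indices of the understanding mesh (the method and names here are placeholders, not toolkit API):

    // Sketch: normalise vertex heights between the lowest and highest points found,
    // then keep triangles that are mostly horizontal (normal close to world up)
    // and sit near the bottom of that normalised range.
    List<Vector3> FindLikelyFloorVertices(Vector3[] vertices, int[] triangles)
    {
        float minY = float.MaxValue, maxY = float.MinValue;
        foreach (Vector3 v in vertices)
        {
            if (v.y < minY) minY = v.y;
            if (v.y > maxY) maxY = v.y;
        }

        List<Vector3> result = new List<Vector3>();
        for (int i = 0; i < triangles.Length; i += 3)
        {
            Vector3 a = vertices[triangles[i]];
            Vector3 b = vertices[triangles[i + 1]];
            Vector3 c = vertices[triangles[i + 2]];

            // Mostly horizontal: triangle normal close to +/- world up
            // (Abs because the mesh's winding order may vary).
            Vector3 normal = Vector3.Cross(b - a, c - a).normalized;
            bool horizontal = Mathf.Abs(Vector3.Dot(normal, Vector3.up)) > 0.9f;

            // Mostly at the bottom: normalised height within the lowest ~10%.
            float centreY = (a.y + b.y + c.y) / 3f;
            float normalisedY = (centreY - minY) / Mathf.Max(maxY - minY, 0.001f);
            bool nearBottom = normalisedY < 0.1f;

            if (horizontal && nearBottom)
            {
                result.Add(a);
                result.Add(b);
                result.Add(c);
            }
        }
        return result;
    }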

    Building the future of holographic navigation. We're hiring.

  • ccyuen
    edited April 2017

    I didn't want to make another post so I'm just going to revive this dead thread...

    I still don't understand how Spatial Understanding ties the floor to specific vertices. In my case, I don't really care whether it can recognize the floor (which is, to my understanding, what the queries in SU do) unless it can also give me the meshes/vertices that go along with that floor; all I need are the floor vertices. I went through the scripts trying to understand how they work, but so far I haven't seen anything I could use to do this. My temporary fix is to do a raycast from the camera's position down to the floor mesh and isolate vertices based on that y value. It's been working well with few inconsistencies, but there are edge cases: for example, if the user sits on a chair, the y value of the vertices will be significantly off. I'm fairly new to C#, so it would be great if I could get some guidance on this issue; sorry for the trouble.
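
    Roughly, the temporary fix looks like this (a sketch; layer 31 is the spatial mapping physics layer in my setup, and the distance and tolerance values are arbitrary):

    // Temporary fix (sketch): raycast straight down from the camera onto the
    // spatial mesh layer, treat the hit height as the floor, and keep vertices
    // near that height. Breaks when the user's head height changes a lot
    // (e.g. sitting on a chair).
    List<Vector3> GetFloorVerticesByRaycast(IList<Vector3> meshVertices)
    {
        List<Vector3> floor = new List<Vector3>();
        RaycastHit hit;
        int spatialLayerMask = 1 << 31; // spatial mapping physics layer
        if (Physics.Raycast(Camera.main.transform.position, Vector3.down, out hit, 10f, spatialLayerMask))
        {
            foreach (Vector3 v in meshVertices)
            {
                if (Mathf.Abs(v.y - hit.point.y) < 0.1f) // arbitrary tolerance
                {
                    floor.Add(v);
                }
            }
        }
        return floor;
    }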

    I also understand that working with exact coordinates is a bad idea and a pain in the ass, but I kind of need to in order to set up a grid system to implement A* (which is my project).
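
    For context, the grid I have in mind is just floor positions quantised into cells, roughly like this (a sketch; the cell size is arbitrary):

    // Sketch: quantise floor vertex positions into grid cells for A*.
    // A cell counts as walkable if any floor vertex falls inside it.
    HashSet<Vector2> BuildWalkableCells(IEnumerable<Vector3> floorVertices, float cellSize)
    {
        HashSet<Vector2> walkable = new HashSet<Vector2>();
        foreach (Vector3 v in floorVertices)
        {
            int x = Mathf.FloorToInt(v.x / cellSize);
            int z = Mathf.FloorToInt(v.z / cellSize);
            walkable.Add(new Vector2(x, z)); // integer cell coordinates stored as a Vector2
        }
        return walkable;
    }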

    @stepan_stulov Even if I were to ask the user to look at the floor, how does SU actually grab the meshes/vertices from the recognized surface? I know that SU can recognize surfaces, but does it also bind the recognized surface to the meshes/vertices tied to it in some sort of object that I can access?
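
    What I'm hoping for is something along these lines (purely hypothetical; I don't know whether the DLL wrapper actually exposes this, and the names below are my guesses from skimming SpatialUnderstandingDll.cs, so please correct me):

    // Hypothetical sketch: ask the understanding DLL for the playspace alignment,
    // read the floor height from it, and then filter meshVertices against that height.
    // Names like GetStaticPlayspaceAlignmentPtr / QueryPlayspaceAlignment / FloorYValue
    // are assumptions; the real wrapper may differ.
    float? TryGetFloorY()
    {
        IntPtr alignmentPtr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignmentPtr();
        if (SpatialUnderstandingDll.Imports.QueryPlayspaceAlignment(alignmentPtr) != 0)
        {
            SpatialUnderstandingDll.Imports.PlayspaceAlignment alignment =
                SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignment();
            return alignment.FloorYValue;
        }
        return null; // playspace not analysed yet
    }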

    @thebanjomatic The PR fixed my collision issue, thanks :smile:
