Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is through our Mixed Reality Developer Program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Mapping Small Objects

Is there any guidance on how large objects have to be in order to be resolved by spatial mapping, and is there a limit on TrianglesPerCubicMeter? I am trying to map a face with a high TrianglesPerCubicMeter, but it seems to hit some unspecified limit: even at a million triangles per cubic meter, the complexity of the mesh does not increase. I need as much detail as possible in a small space.
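
For context, TrianglesPerCubicMeter is the density value handed to Unity's SurfaceObserver when a baked mesh is requested for a surface. Below is a minimal sketch of that request path; it assumes a recent Unity (the namespace is UnityEngine.VR.WSA on older versions and UnityEngine.XR.WSA from 2017.2 on), and the 2000f density and half-metre observation box are illustrative values, not known device limits.

using System;
using UnityEngine;
using UnityEngine.XR.WSA;   // UnityEngine.VR.WSA on older Unity versions

public class HighDetailMapping : MonoBehaviour
{
    // Requested mesh density; the device may clamp this well below what is asked for.
    public float trianglesPerCubicMeter = 2000f;

    private SurfaceObserver observer;

    void Start()
    {
        observer = new SurfaceObserver();
        // Only watch a small volume around the object of interest.
        observer.SetVolumeAsAxisAlignedBox(transform.position, new Vector3(0.5f, 0.5f, 0.5f));
    }

    void Update()
    {
        // Poll for surface changes (a real app would throttle this, e.g. every few seconds).
        observer.Update(OnSurfaceChanged);
    }

    void OnSurfaceChanged(SurfaceId id, SurfaceChange change, Bounds bounds, DateTime updateTime)
    {
        if (change == SurfaceChange.Added || change == SurfaceChange.Updated)
        {
            var go = new GameObject("Surface-" + id.handle);
            var request = new SurfaceData(
                id,
                go.AddComponent<MeshFilter>(),
                go.AddComponent<WorldAnchor>(),
                go.AddComponent<MeshCollider>(),
                trianglesPerCubicMeter,   // the value the question is asking about
                true);
            observer.RequestMeshAsync(request, OnDataReady);
        }
    }

    void OnDataReady(SurfaceData data, bool outputWritten, float elapsedBakeTimeSeconds)
    {
        // data.outputMesh now holds the baked mesh; comparing its triangle count across
        // different requested densities shows where extra requested density stops having an effect.
    }
}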

Answers

  • @digitalnelson - I'm starting to work on something with similar needs. I assume there isn't necessarily a hard limit, but that resource usage makes the returns taper off. Any suggestions on where you saw the point of diminishing returns so far, e.g. unusable FPS, or no perceptible change in the visual mesh?

    The other thing I'd like to experiment with is how to focus on just a few known areas of interest and, if possible, leave the resolution unchanged/low outside that small area. If it's an all-or-nothing setting, then because of the focal lengths involved I'm going to get high resolution for areas my app doesn't care about, so the main option seems to be raycasting so that the extra detail is only captured while the user is looking at the right spot, to minimize over-mapping (a rough sketch of that idea follows the answers below). I'm also curious what resolution would be theoretically possible with the hardware if you could stream the raw spatial/depth image feeds directly to a workstation for interpretation...

  • I'm also interested in mapping small objects and tracking them. Hope to see some documentation on this very soon.

  • FWIW, related: I noticed there is a hard-coded minimum size in the FindPlanes logic. See this discussion - http://forums.hololens.com/discussion/1764/scanning-a-window-or-door-frame.

  • Raw depth data are the only real solution.

    The HoloLens team's unfortunate decision was to omit an API for the raw depth data stream. The Kinect team did not recognize the industrial demands, which have existed for 30+ years, that come from fields other than games.

    For games, spatial mapping may be enough.
    For robots, raw depth data are indispensable.

    MS HoloLens without raw depth data:
    https://www.youtube.com/watch?v=bhp-uZASd_k

    Google Tango with raw depth data:
    https://www.youtube.com/watch?v=8ruiudETvuY
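
Following up on the area-of-interest idea above: one way to avoid mapping the whole room at high density is to keep a small observation volume centred on whatever surface the user is gazing at. The sketch below assumes the baked spatial-mapping meshes sit on physics layer 31 and uses an illustrative half-metre box; the class and field names are made up for the example.

using UnityEngine;
using UnityEngine.XR.WSA;   // UnityEngine.VR.WSA on older Unity versions

public class GazeFocusedObserver : MonoBehaviour
{
    // Illustrative values: size of the high-detail region and the physics layer
    // that the baked spatial-mapping meshes are placed on.
    public float boxSize = 0.5f;
    public int spatialMappingLayer = 31;

    private SurfaceObserver focusObserver;

    void Start()
    {
        focusObserver = new SurfaceObserver();
    }

    void Update()
    {
        // Raycast along the user's gaze against the already-baked mapping meshes.
        var gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        RaycastHit hit;
        if (Physics.Raycast(gaze, out hit, 3.0f, 1 << spatialMappingLayer))
        {
            // Re-centre the small observation volume on the gaze point. Mesh requests
            // made through this observer (with a high trianglesPerCubicMeter, as in the
            // earlier sketch) then only cover the region the user is actually looking at.
            focusObserver.SetVolumeAsAxisAlignedBox(hit.point, Vector3.one * boxSize);
        }
    }
}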
