Hello everyone.

We have decided to phase out the Mixed Reality Forums over the next few months in favor of other ways to connect with us.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

The plan between now and the beginning of May is to clean up old, unanswered questions that are no longer relevant. The forums will remain open and usable.

On May 1st we will be locking the forums to new posts and replies. They will remain available for another three months for the purposes of searching them, and then they will be closed altogether on August 1st.

So, where does that leave our awesome community to ask questions? Well, there are a few places we want to engage with you. For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality. If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers. And always feel free to hit us up on Twitter @MxdRealityDev.

System spaces vs. in-app mapping

Hi,

It seems the system already needs to know where you are located (via spaces), so I don't really understand why my app needs to scan the room itself instead of just using the current system space.

A related question: the tutorials all show how to save the result of the mapping to your desktop computer (for later use in Unity), but the end user will not have Unity. Why not add tutorials on saving/loading these mapping results on the HoloLens device itself?

Thanks

Answers

  • The app doesn't actually need to scan the room. The spatial mapping features will pull in whatever data the currently loaded space has available (within the bounding volume). The spatial understanding feature does require that you look, but this is an artificial requirement, and it could be rewritten to not have this behavior.

    The purpose of helping developers get their spatial map into Unity is to allow you to iterate in the Unity editor with a sense of real-world scale, and to let you experiment with processing / shading the spatial mapping mesh. It's not a feature end users would need to use.

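    To address the original question about saving and loading a scanned mesh on the device itself (rather than exporting it to a desktop machine), here is a minimal sketch. It assumes Unity on HoloLens, where `Application.persistentDataPath` is a writable folder inside the app's sandbox; the `MeshCache` class, its file naming, and the binary layout are all hypothetical choices for illustration, not an official API.

    ```csharp
    using System.IO;
    using UnityEngine;

    // Hypothetical helper: serialize a spatial-mapping mesh into the app's
    // local storage on the HoloLens so it can be reloaded on a later run.
    public static class MeshCache
    {
        // Writable path inside the app's sandbox on device.
        static string PathFor(string id) =>
            Path.Combine(Application.persistentDataPath, id + ".bin");

        public static void Save(string id, Mesh mesh)
        {
            using (var w = new BinaryWriter(File.Create(PathFor(id))))
            {
                Vector3[] verts = mesh.vertices;
                int[] tris = mesh.triangles;

                // Write vertex count, then each vertex as three floats.
                w.Write(verts.Length);
                foreach (var v in verts) { w.Write(v.x); w.Write(v.y); w.Write(v.z); }

                // Write triangle-index count, then the indices.
                w.Write(tris.Length);
                foreach (var t in tris) w.Write(t);
            }
        }

        public static Mesh Load(string id)
        {
            using (var r = new BinaryReader(File.OpenRead(PathFor(id))))
            {
                // Read back in the same order Save wrote: vertices, then indices.
                var verts = new Vector3[r.ReadInt32()];
                for (int i = 0; i < verts.Length; i++)
                    verts[i] = new Vector3(r.ReadSingle(), r.ReadSingle(), r.ReadSingle());

                var tris = new int[r.ReadInt32()];
                for (int i = 0; i < tris.Length; i++) tris[i] = r.ReadInt32();

                var mesh = new Mesh { vertices = verts, triangles = tris };
                mesh.RecalculateNormals();
                return mesh;
            }
        }
    }
    ```

    Note that a mesh cached this way is only a snapshot: it is not anchored to the room the way the live spatial mapping data is, so you would typically pair it with a spatial anchor to keep it aligned across sessions.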

  • @Patrick @ahillier Thanks a lot for your explanations, the whole thing is becoming clearer every day ;-)

  • CurvSurf ✭✭
    edited September 2016

    @ahillier

    We developers are understanding HoloLens a little more every day. Please provide us developers with the real-time raw depth data stream, the raw data used for updating the spatial mapping!

    • Intel RealSense: max 60 fps raw depth data stream
    • Google Tango: about 5 fps raw depth data stream
    • MS HoloLens: no raw depth data stream, only spatial mapping!
  • Hi @CurvSurf,
    Since I came from Kinect, I can understand why you would like access to the raw depth streams, but unfortunately I do not have the power to grant your request :)

    Could you please file a feature request using the 'Feedback Hub' on the HoloLens?

    Thanks!
    ~Angela
