The Mixed Reality Forums here are no longer being used or maintained.
There are a few other places we would like to direct you to for support, both from Microsoft and from the community.
The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.
For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.
If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.
And always feel free to hit us up on Twitter @MxdRealityDev.
WayPoints - Gaze at real world objects and pull info
I am trying to understand how this developer achieved this Holo + IoT, or this Floor change, or this Prescription filling. How are the colliders made on real-world objects? I read that the developer set the locations of the objects ahead of time through waypoints. What exactly are waypoints? Is there a simple tutorial to kick-start?
Best Answer
HoloSheep mod
Colliders are made on (attached to) gameObjects.
It is possible to programmatically create game objects based on encountered real world objects, but that is a lot of work and I think beyond the scope of this post.
My guess is that in two of the demos you refer to, the developer simply had two modes or phases to their app.
In the first phase they configured the app by placing GameObjects with colliders that were similarly sized and shaped to the real-world objects, at the physical locations of those objects. They probably saved a world anchor for each GameObject, and the GameObjects were either invisible, non-rendering, or super small. So basically you end up with colliders wrapping the real-world objects of interest. Those colliders then provide a way for GGV (Gaze, Gesture & Voice) to trigger the related action.
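A minimal sketch of what that setup phase might look like, assuming the legacy HoloLens anchor APIs in `UnityEngine.XR.WSA` (`WorldAnchor`, `WorldAnchorStore`). The class and method names here (`ProxyPlacer`, `LockProxy`) are made up for illustration, not from the demos:

```csharp
// Sketch only: persist an invisible collider "proxy" at a real-world
// object's location using the HoloLens world anchor store.
using UnityEngine;
using UnityEngine.XR.WSA;
using UnityEngine.XR.WSA.Persistence;

public class ProxyPlacer : MonoBehaviour
{
    WorldAnchorStore store;

    void Start()
    {
        // The anchor store loads asynchronously.
        WorldAnchorStore.GetAsync(s => store = s);
    }

    // Call this in "setup" mode once the proxy (e.g. a cube roughly
    // matching the real object) has been dragged into position.
    public void LockProxy(GameObject proxy, string anchorId)
    {
        // Hide the proxy but keep its collider active so gaze can still hit it.
        var renderer = proxy.GetComponent<Renderer>();
        if (renderer != null) renderer.enabled = false;

        // Anchor the proxy to this physical location and persist it
        // across app sessions.
        var anchor = proxy.AddComponent<WorldAnchor>();
        if (store != null)
        {
            store.Delete(anchorId);      // replace any stale anchor
            store.Save(anchorId, anchor);
        }
    }
}
```

On the next app launch, loading the saved anchor by the same id puts the invisible collider back on the real object without re-placing it.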
In the second phase the recorded interactions take place: the previously placed invisible GameObjects' colliders encase the real physical objects and detect the GGV interactions.
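The gaze half of GGV then reduces to a raycast from the camera hitting those invisible colliders. A sketch with a plain `Physics.Raycast` (in a real app the HoloToolkit GazeManager / InputManager would do this for you; `InfoProxy` and `ShowInfoPanel` are hypothetical names):

```csharp
// Sketch only: detect when the user gazes at an invisible proxy
// collider wrapping a real-world object, and trigger its action.
using UnityEngine;

public class GazeInfoTrigger : MonoBehaviour
{
    void Update()
    {
        var head = Camera.main.transform;
        RaycastHit hit;

        // The invisible proxy's collider is what makes the physical
        // object "hittable" by the gaze ray.
        if (Physics.Raycast(head.position, head.forward, out hit, 10f))
        {
            var proxy = hit.collider.GetComponent<InfoProxy>();
            if (proxy != null)
            {
                proxy.ShowInfoPanel(); // e.g. pull IoT data for this object
            }
        }
    }
}

// Hypothetical component attached to each placed proxy.
public class InfoProxy : MonoBehaviour
{
    public void ShowInfoPanel() { /* display hologram / fetch data */ }
}
```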
My guess on the Floor example is that it is done with a combination of PlaneFinding on the surface meshes to detect the floor, and then a Quad aligned to that plane with dynamic textures.
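That floor-detection step could be sketched with the HoloToolkit spatial mapping components (`SurfaceMeshesToPlanes`, `PlaneTypes`); exact names vary between toolkit versions, and `FloorRetexturer` / `newFloorMaterial` are assumptions for this example:

```csharp
// Sketch only: turn spatial mapping meshes into planes, pick out the
// floor planes, and swap their material for a "changed floor" look.
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity.SpatialMapping;

public class FloorRetexturer : MonoBehaviour
{
    public Material newFloorMaterial; // the replacement floor texture

    void Start()
    {
        SurfaceMeshesToPlanes.Instance.MakePlanesComplete += OnPlanesFound;
        SurfaceMeshesToPlanes.Instance.MakePlanes();
    }

    void OnPlanesFound(object source, System.EventArgs args)
    {
        // Ask the toolkit for just the floor planes it found.
        List<GameObject> floors =
            SurfaceMeshesToPlanes.Instance.GetActivePlanes(PlaneTypes.Floor);

        // Each plane is a quad aligned to the surface mesh; swapping its
        // material gives the retextured-floor effect.
        foreach (var floor in floors)
        {
            floor.GetComponent<Renderer>().material = newFloorMaterial;
        }
    }
}
```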
HTH.
Windows Holographic User Group Redmond
WinHUGR.org - - - - - - - - - - - - - - - - - - @WinHUGR
WinHUGR YouTube Channel -- live streamed meetings
Answers
@HoloSheep If possible , Can you help me out with this?
@HoloSheep Thanks for your quick response. So I have to start by saving the anchor of an invisible or super-small GameObject on each real-world object, and then restart the app so that it always stays there until I destroy the GameObject, like this Post?
And for the Floor example: are there any sample projects like that, detecting the floor / ceiling?
And regarding your statement "It is possible to programmatically create game objects based on encountered real world objects, but that is a lot of work and I think beyond the scope of this post" - where can I learn more about this?