Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Vuforia's principal question: how does it work?

What I want to know is how Vuforia recognizes 2D Image Targets, Multi Targets, and so on. Why is it that, when you wear a HoloLens and it recognizes an image in the real world, the positional relationship between the 3D model and the (real) image stays consistent with the one you set between the 3D model and the (virtual) image in Unity? Does it use the SIFT or SURF algorithm? I have never found any proof or papers about that. What is the relationship between those coordinate systems? Could somebody give me some practical advice or recommend some papers about this? Many thanks!
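For context: Vuforia's detector is proprietary and undocumented, but planar image-target tracking in general follows a well-known pipeline: extract natural features from the registered reference image, match them against the live camera frame, and recover the target's 6-DoF pose, which is then composed with the fixed model-to-image offset authored in Unity. The following is a minimal sketch of that generic pipeline using OpenCV (not Vuforia's actual algorithm), with ORB standing in for SIFT/SURF; the filenames, target dimensions, and camera intrinsics are placeholders.

```python
# Sketch of the generic planar-target pipeline (NOT Vuforia's proprietary
# algorithm). Uses ORB features in place of SIFT/SURF.
# Requires: pip install opencv-python numpy
import cv2
import numpy as np

TARGET_WIDTH_M = 0.20   # placeholder: physical width of the printed image (m)
TARGET_HEIGHT_M = 0.15  # placeholder: physical height (m)

# Camera intrinsics -- placeholder values; use a calibrated matrix in practice.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume no lens distortion for this sketch

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

ref = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)        # registered image
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)  # live frame

kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_frame, des_frame = orb.detectAndCompute(frame, None)
matches = sorted(matcher.match(des_ref, des_frame),
                 key=lambda m: m.distance)[:100]

# Map matched reference pixels onto the physical target plane (z = 0),
# pairing each 3D plane point with its 2D detection in the camera frame.
h_px, w_px = ref.shape
obj_pts, img_pts = [], []
for m in matches:
    u, v = kp_ref[m.queryIdx].pt
    obj_pts.append([u / w_px * TARGET_WIDTH_M,
                    v / h_px * TARGET_HEIGHT_M, 0.0])
    img_pts.append(kp_frame[m.trainIdx].pt)
obj_pts = np.array(obj_pts, dtype=np.float32)
img_pts = np.array(img_pts, dtype=np.float32)

# Robustly estimate the target's pose in camera coordinates
# (RANSAC rejects bad feature matches).
ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
T_cam_target = np.eye(4)
T_cam_target[:3, :3] = R
T_cam_target[:3, 3] = tvec.ravel()

# The offset you authored in Unity between the (virtual) image and the
# 3D model is a constant rigid transform T_target_model; composing it
# with the live pose places the model relative to the real image.
T_target_model = np.eye(4)  # placeholder: identity offset
T_cam_model = T_cam_target @ T_target_model
print("Model pose in camera coordinates:\n", T_cam_model)
```

This also suggests an answer to the coordinate question: the Unity editor fixes a constant rigid transform between the virtual image and the 3D model, and at runtime the tracker re-estimates only the image's pose in camera (and hence world) coordinates each frame, so the model-to-image relationship is preserved by construction.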
