How does Vuforia's image recognition and tracking work in principle?
What I want to know is how Vuforia recognizes 2D image targets, multi-targets, and so on. When you wear a HoloLens and it recognizes an image in the real world, why is the spatial relation between the 3D model and the (real) image consistent with the relation you set up between the 3D model and the (virtual) image target in Unity? Does Vuforia use the SIFT or SURF algorithm? I have never found any documentation or papers confirming that.

Also, what is the relationship between those coordinate systems? Could somebody give me some practical advice or recommend some papers on this? Many thanks!
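My current guess about the coordinate question (not based on any Vuforia documentation, just standard AR practice): the offset you author between the model and the image target in Unity is a fixed rigid transform, and at runtime the tracker estimates the pose of the real image in camera/world coordinates; composing the two transforms reproduces the authored relation. A minimal numpy sketch of that composition, with hypothetical pose values:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Offset of the 3D model relative to the image target, as authored in Unity
# (hypothetical values: model floats 0.1 m above the target, no rotation).
T_model_in_target = make_pose(np.eye(3), np.array([0.0, 0.1, 0.0]))

# Pose of the real printed image in world coordinates, as the tracker would
# estimate at runtime (hypothetical: 1 m in front, rotated 90 degrees about Z).
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T_target_in_world = make_pose(Rz, np.array([0.0, 0.0, 1.0]))

# Composing the runtime target pose with the authored offset places the model
# in world coordinates while preserving the model-to-target relation.
T_model_in_world = T_target_in_world @ T_model_in_target

print(np.round(T_model_in_world[:3, 3], 3))  # model position in world coordinates
```

If this is roughly right, the "magic" is just that the same 4x4 transform chain is evaluated every frame with the freshly tracked target pose, so the model follows the real image. Corrections welcome if Vuforia actually does something different.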