Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Support technician scenario


We are looking into HoloLens for building a remote support system. When a technician puts on a HoloLens, we want him to see information about the device in front of him. If he opens the door of the device, he should see different information in his field of view. We were thinking of scanning a QR code with the HoloLens to show the correct device information. That part is not a difficult problem to tackle, but there is one thing we don't know where to get started on: how can we position the correct information? Is there a way to detect real-world objects in the HoloLens SDK and do collision detection? It looks a lot like the scene in this video at minute 2:19.

Can someone help us get started?



  • Detecting/identifying real-world objects isn't something that is currently supported on HoloLens. If you go the QR-code route for identifying the device, you could render your application's information relative to where you detect the QR code in world space. For showing different information when the door to the device is open, you could position a different QR code inside the device, or have a (voice) command your technician uses to open the virtual representation of the device.
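A minimal sketch of the "render relative to the QR code" idea above. This is not HoloLens SDK code; it just shows the geometry, assuming you already have the four corner points of the detected code in world space (the corner values below are invented example data):

```python
# Compute an anchor pose for an info panel from the four world-space
# corner points of a detected QR code (example data, not SDK output).

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def panel_pose(corners, offset=0.1):
    """Center of the QR code, plus a point `offset` metres along its normal."""
    center = tuple(sum(c[i] for c in corners) / 4 for i in range(3))
    # Normal of the plane spanned by two edges of the code.
    n = norm(cross(sub(corners[1], corners[0]), sub(corners[3], corners[0])))
    panel = tuple(center[i] + offset * n[i] for i in range(3))
    return center, panel

# Example: a 10 cm code one metre in front of the user.
corners = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.1, 0.1, 1.0), (0.0, 0.1, 1.0)]
center, panel = panel_pose(corners)
```

In a real app you would parent your info panel to this pose so it stays attached to the device as the user walks around.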

  • By slightly modifying the ZXing SDK, the most popular QR code reader/generator library, you can get the positions of the four corners of a QR code; you can then track and trace its position within AR space to project the augmented information. Attaching a zebra pattern (like a strip of tape) to the surface of an object could also do the job.
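One practical detail of the frame-by-frame tracking suggested above: the detected corner positions jitter between frames, so smoothing the estimate keeps the projected overlay stable. A stdlib-only sketch of a simple exponential moving average (the detections list is made-up example data, not ZXing output):

```python
# Smooth a stream of noisy per-frame position detections with an
# exponential moving average so the overlay does not jitter.

def smooth(positions, alpha=0.3):
    """Blend each new detection with the running estimate."""
    est = positions[0]
    out = [est]
    for p in positions[1:]:
        est = tuple(alpha * pi + (1 - alpha) * ei for pi, ei in zip(p, est))
        out.append(est)
    return out

# Three noisy detections of a code roughly at x = 1.0.
detections = [(1.00, 0.0, 2.0), (1.06, 0.0, 2.0), (0.98, 0.0, 2.0)]
track = smooth(detections)
```

A lower `alpha` gives a steadier but laggier overlay; tune it to how fast the technician moves his head.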


  • We are currently doing something similar, and the way we are planning to implement this is via anchors. Once an anchor is placed next to an item of interest (e.g. a lift terminal), the relevant information can be retrieved for that item automatically.

  • I have had some success with this sort of application using the Vuforia API from Qualcomm. In my application, if I look at a given marker, a 3D interactive item appears. I have managed to show both 3D items and UI elements this way. As a side note, Vuforia is not free, but you can start your project for free.
