Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

A Reading Partner: Understanding pointing gestures and the content (pictures and text)

Hello, I have an idea for creating a kind of reading partner. Here is a simple scenario: there is a real poster on the wall, and a virtual reading partner (a humanoid character) stands beside it. I point at some text on the poster and ask what it means, and the partner gives me an answer. So the main issues to solve are: the character needs to understand where I am pointing, and what content (picture or text) on the poster I am pointing at. Any links to the required knowledge are appreciated. Thanks!
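As a rough starting point for the "where am I pointing" part, here is a minimal geometric sketch in Python. It assumes the device already gives you a pointing ray (origin and direction of the hand ray or gaze) and a tracked poster pose; the names `ray_origin`, `poster_origin`, and the region table are hypothetical placeholders, not any specific HoloLens API.

```python
# Sketch: map a pointing ray onto a tracked poster and look up the region
# (text block or picture) it hits. Inputs are assumed to come from the
# device's hand/gaze tracking and a known poster pose; names are illustrative.
import numpy as np

def point_on_poster(ray_origin, ray_dir, poster_origin, poster_right, poster_up):
    """Intersect a pointing ray with the poster plane and return (u, v) in [0, 1]."""
    normal = np.cross(poster_right, poster_up)
    denom = np.dot(ray_dir, normal)
    if abs(denom) < 1e-6:
        return None  # ray is parallel to the poster plane
    t = np.dot(poster_origin - ray_origin, normal) / denom
    if t < 0:
        return None  # poster is behind the user
    hit = ray_origin + t * ray_dir
    local = hit - poster_origin
    u = np.dot(local, poster_right) / np.dot(poster_right, poster_right)
    v = np.dot(local, poster_up) / np.dot(poster_up, poster_up)
    if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:
        return u, v
    return None  # pointing past the edge of the poster

def region_at(uv, regions):
    """Return the label of the annotated poster region containing uv, if any."""
    if uv is None:
        return None
    u, v = uv
    for label, (u0, v0, u1, v1) in regions.items():
        if u0 <= u <= u1 and v0 <= v <= v1:
            return label
    return None

# Example: a 1 m x 0.7 m poster with two pre-annotated regions.
regions = {
    "title_text": (0.0, 0.8, 1.0, 1.0),
    "main_picture": (0.1, 0.2, 0.9, 0.8),
}
uv = point_on_poster(
    ray_origin=np.array([0.0, 1.5, -2.0]),
    ray_dir=np.array([0.1, -0.1, 1.0]) / np.linalg.norm([0.1, -0.1, 1.0]),
    poster_origin=np.array([-0.5, 1.0, 0.0]),  # lower-left corner of the poster
    poster_right=np.array([1.0, 0.0, 0.0]),    # width vector (1 m)
    poster_up=np.array([0.0, 0.7, 0.0]),       # height vector (0.7 m)
)
print(region_at(uv, regions))  # -> "main_picture" for this example ray
```

For the content side, once you have the (u, v) hit point you can either pre-annotate the poster regions as above, or capture a photo from the device camera and run OCR/image recognition on the area around the hit point; that part depends on the platform's camera and OCR APIs, so treat this sketch as only the geometry half of the problem.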
