Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

How to blend virtual and real footage in third person view?

In almost all HoloLens campaign videos, we can see footage that blends the user and the virtual world together, for example, footage of a user playing Minecraft on a table. This blending technology helps the audience see the virtual world that the user is seeing.

Even at the HoloLens launch event, the live stream blending the virtual and real worlds was produced from a camera with a HoloLens mounted on it to track the camera's position.
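The blending described above boils down to rendering the holograms from the tracked camera's pose into an image with transparency, then alpha-compositing that layer over the real camera frame. Here is a minimal sketch of that per-pixel composite in Python with NumPy; the function name and array layout are illustrative assumptions, not part of any HoloLens SDK:

```python
import numpy as np

def composite(camera_frame: np.ndarray, virtual_rgba: np.ndarray) -> np.ndarray:
    """Blend a rendered hologram layer over a real camera frame.

    camera_frame: (H, W, 3) RGB floats in [0, 1] from the physical camera.
    virtual_rgba: (H, W, 4) RGBA floats in [0, 1] rendered from the
                  tracked camera pose; alpha is 0 where no hologram is drawn.
    """
    alpha = virtual_rgba[..., 3:4]   # per-pixel opacity of the hologram layer
    rgb = virtual_rgba[..., :3]
    # Standard "over" operator: hologram where alpha is 1, camera where alpha is 0.
    return rgb * alpha + camera_frame * (1.0 - alpha)
```

The real pipeline also has to calibrate the offset between the HoloLens and the camera lens and match the render's projection to the camera, but the final blend is essentially this operation per frame.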

I was wondering whether this blending technology, for both live and post production, is readily available to all HoloLens developers, since we all need it to produce campaign videos for our own apps. Compared to screencasting, it is a better way to let the audience see what the user sees in the virtual world.

Will it be available in the HoloLens SDK?
