Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our Mixed Reality Developer Program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Selective Rendering in Mixed-Reality Live Preview?

Hi,
I'm doing some streaming work using the HoloLens locatable camera, and I want to be able to encode some information onto the live preview images without it being visible to the actual HoloLens wearer.

Is there any way to render a virtual object in Unity such that it appears on the mixed reality video being captured from the locatable camera, but does not appear to the actual wearer?

In case it helps, my end goal is to be able to encode data (in image form) about the current user head pose into each live preview frame, so that a good guess about the world-space position of the user's head can be recovered solely by looking at the video frames. My application requires knowing the camera pose for an arbitrary video frame from the locatable camera. The built-in PhotoCapture methods are unsuitable because they are meant for taking single photos, which is too slow. Keeping track of the user's head pose locally on the HoloLens is also less desirable because of the difficulty of synchronizing the current HoloLens head pose with the video frame a remote viewer is currently seeing.
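
For concreteness, here is roughly what I mean by sampling the head pose each frame — a minimal Unity C# sketch (the class and property names are placeholders I made up). On HoloLens the main camera transform follows the user's head, so it serves as the head pose here; how the resulting bytes get drawn into the preview image is exactly the part I'm asking about, so that is not shown:

    using System.IO;
    using UnityEngine;

    // Placeholder sketch: sample the head pose each frame and pack it into a
    // small byte buffer (3 floats position + 4 floats rotation = 28 bytes).
    public class HeadPoseSampler : MonoBehaviour
    {
        // Latest serialized pose, refreshed once per rendered frame.
        public byte[] LatestPoseBytes { get; private set; }

        void LateUpdate()
        {
            if (Camera.main == null) return;  // no camera tagged MainCamera yet

            Transform head = Camera.main.transform;  // tracks the head on HoloLens
            Vector3 p = head.position;
            Quaternion q = head.rotation;

            using (var ms = new MemoryStream(28))
            using (var w = new BinaryWriter(ms))
            {
                w.Write(p.x); w.Write(p.y); w.Write(p.z);
                w.Write(q.x); w.Write(q.y); w.Write(q.z); w.Write(q.w);
                LatestPoseBytes = ms.ToArray();
            }
        }
    }

Those 28 bytes are what I'd like to render into the captured frame (for example as a small block of encoded pixels) without the wearer ever seeing it.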

Answers

  • @DanAndersen,

    Cool idea. I don't know of any way to do it with the default streaming through the device portal or companion app, but you could build your own stream client starting from the MixedRemoteViewCompositor sample in the HoloLens Companion Kit: https://github.com/Microsoft/HoloLensCompanionKit/tree/master/MixedRemoteViewCompositor

    Then send your pose data to the client separately and either composite it with the image or just update the pose information in a textbox (rough sketch of the sending side below).

    James Ashley
    VS 2017 v5.3.3, Unity 2017.3.0f3, MRTK 2017.1.2, W10 17063
    Microsoft MVP, Freelance HoloLens/MR Developer
    www.imaginativeuniversal.com
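
    A rough sketch of the "send your pose data to the client separately" part, assuming a HeadPoseSampler component like the one sketched in the question above. The address, port, and UDP transport are placeholders; depending on your Unity version and scripting backend you may need the UWP networking APIs on the device instead of System.Net.Sockets:

        using System.Net.Sockets;
        using UnityEngine;

        // Placeholder sketch: push the serialized head pose to a remote client
        // once per frame over UDP, alongside (not inside) the video stream.
        public class PoseSender : MonoBehaviour
        {
            public HeadPoseSampler sampler;                 // assign in the Inspector
            public string clientAddress = "192.168.1.100";  // placeholder address
            public int clientPort = 9050;                   // placeholder port

            private UdpClient udp;

            void Start()
            {
                udp = new UdpClient();
            }

            void Update()
            {
                byte[] pose = (sampler != null) ? sampler.LatestPoseBytes : null;
                if (pose != null)
                {
                    udp.Send(pose, pose.Length, clientAddress, clientPort);
                }
            }

            void OnDestroy()
            {
                if (udp != null) udp.Close();
            }
        }

    On the receiving side you'd read the 28 bytes back into a position and rotation and pair them with the latest video frame; since nothing extra is rendered in the headset, the wearer never sees the pose data.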
