Hello everyone.

We have decided to phase out the Mixed Reality Forums over the next few months in favor of other ways to connect with us.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

The plan between now and the beginning of May is to clean up old, unanswered questions that are no longer relevant. The forums will remain open and usable.

On May 1st we will be locking the forums to new posts and replies. They will remain available for another three months for the purposes of searching them, and then they will be closed altogether on August 1st.

So, where does that leave our awesome community to ask questions? Well, there are a few places we want to engage with you. For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality. If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers. And always feel free to hit us up on Twitter @MxdRealityDev.

Selective Rendering in Mixed-Reality Live Preview?

Hi,
I'm doing some streaming work using the HoloLens locatable camera, and I want to be able to encode some information onto the live preview images without it being visible to the actual HoloLens wearer.

Is there any way to render a virtual object in Unity such that it appears on the mixed reality video being captured from the locatable camera, but does not appear to the actual wearer?

In case it helps, my end goal is to encode data (in image form) about the user's current head pose into each live preview frame, so that we can get a good estimate of the world-space position of the user's head just by looking at the video frames. My application requires that I be able to determine the camera pose for an arbitrary video frame from the locatable camera. The built-in photo-capture methods are unsuitable because they are meant for taking single photos, which is too slow. Keeping track of the user's head pose locally on the HoloLens is also less desirable because it is difficult to synchronize the current HoloLens head pose with the video frame a remote viewer is currently seeing.
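
To make this concrete, here is a rough Unity sketch of the kind of thing I'm hoping for: the data overlay lives on its own layer that is culled from the camera the wearer sees. The "CaptureOnly" layer name and the CaptureOnlyOverlay script below are made up for illustration; I don't know whether the capture pipeline would honor a separate culling mask, which is really what I'm asking.

using UnityEngine;

// Rough sketch (not a confirmed solution): put the encoded-data object on its
// own layer and remove that layer from the culling mask of the camera the
// wearer sees. Whether the mixed reality capture would still render the layer
// is exactly what I'm unsure about. "CaptureOnly" is a layer name I made up in
// the project's Tags and Layers settings.
public class CaptureOnlyOverlay : MonoBehaviour
{
    public string overlayLayerName = "CaptureOnly";

    void Start()
    {
        int overlayLayer = LayerMask.NameToLayer(overlayLayerName);
        if (overlayLayer < 0)
        {
            Debug.LogWarning("Layer not defined in project settings: " + overlayLayerName);
            return;
        }

        // Move this object (e.g. the quad carrying the encoded pose image)
        // onto the overlay layer.
        gameObject.layer = overlayLayer;

        // Hide the overlay layer from the wearer's camera. A secondary camera
        // that I control could keep the layer in its culling mask.
        Camera.main.cullingMask &= ~(1 << overlayLayer);
    }
}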

Answers

  • @DanAndersen,

    Cool idea. I don't know of a way to do this with the default streaming through the dev portal or the companion app, but you could build your own stream client from the companion kit source code: https://github.com/Microsoft/HoloLensCompanionKit/tree/master/MixedRemoteViewCompositor

    Then send your pose data to the client separately and either composite it into the image or just update the pose information in a text box (a rough per-frame pose-capture sketch follows after this post).

    James Ashley
    VS 2017 v5.3.3, Unity 2017.3.0f3, MRTK 2017.1.2, W10 17063
    Microsoft MVP, Freelance HoloLens/MR Developer
    www.imaginativeuniversal.com
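
As a minimal sketch of the "send the pose separately" suggestion above: sample the head pose once per frame in Unity and hand it to whatever transport the custom stream client ends up using. The HeadPoseSender class and the SendPoseToClient placeholder below are assumptions for illustration, not part of the companion kit.

using System;
using UnityEngine;

// Minimal sketch of the "send the pose separately" idea, assuming Unity's main
// camera tracks the user's head (the default HoloLens setup). SendPoseToClient
// is a placeholder for whatever transport the custom stream client uses, e.g.
// a socket alongside the MixedRemoteViewCompositor stream.
public class HeadPoseSender : MonoBehaviour
{
    void LateUpdate()
    {
        Transform head = Camera.main.transform;
        Vector3 position = head.position;
        Quaternion rotation = head.rotation;

        // Pack the pose into 7 floats: position xyz plus rotation quaternion.
        float[] pose =
        {
            position.x, position.y, position.z,
            rotation.x, rotation.y, rotation.z, rotation.w
        };
        byte[] payload = new byte[pose.Length * sizeof(float)];
        Buffer.BlockCopy(pose, 0, payload, 0, payload.Length);

        SendPoseToClient(payload);
    }

    void SendPoseToClient(byte[] payload)
    {
        // Placeholder: hook this up to the networking layer the stream client
        // exposes (UDP socket, companion-kit channel, etc.).
    }
}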
