Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

About Rendering in DirectX

Hi,
While reading Rendering in DirectX at
https://developer.microsoft.com/en-us/windows/holographic/rendering_in_directx

I am not clear on instanced stereo rendering, in particular the cameraPose loop inside Render():

Render()
{
    ...
    for (auto cameraPose : prediction->CameraPoses)
    {
        ...
        // Specify the render target and DepthStencilView for this cameraPose

        // Render one object, e.g. m_spinningCubeRenderer->Render();
        ...
    }
    ...
}

I checked the code of m_spinningCubeRenderer->Render(), and it uses instanced stereo rendering to draw to both the left and right eye in a single draw call (a condensed sketch of that draw follows below). Since m_spinningCubeRenderer->Render() already covers both eyes, why do we need the cameraPose loop? What is the cameraPose loop actually used for?
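
For reference, here is a condensed sketch of what SpinningCubeRenderer::Render() looks like in the Windows Holographic app template that the article builds on. Member names (m_deviceResources, m_vertexBuffer, m_modelConstantBuffer, m_indexCount, ...) follow the template, and I have left out the geometry-shader fallback for devices that cannot set the render target array index from the vertex shader, so treat it as an illustration rather than the exact template code:

void SpinningCubeRenderer::Render()
{
    const auto context = m_deviceResources->GetD3DDeviceContext();

    // Bind the cube geometry, shaders, and the per-object model constant buffer.
    // The per-camera view/projection constant buffer was already bound by the
    // main Render() loop before this call.
    const UINT stride = sizeof(VertexPositionColor);
    const UINT offset = 0;
    context->IASetVertexBuffers(0, 1, m_vertexBuffer.GetAddressOf(), &stride, &offset);
    context->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R16_UINT, 0);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    context->IASetInputLayout(m_inputLayout.Get());
    context->VSSetShader(m_vertexShader.Get(), nullptr, 0);
    context->VSSetConstantBuffers(0, 1, m_modelConstantBuffer.GetAddressOf());
    context->PSSetShader(m_pixelShader.Get(), nullptr, 0);

    // Instanced stereo: the geometry is drawn twice in one call. The vertex
    // shader reads SV_InstanceID to pick viewProjection[0] (left eye) or
    // viewProjection[1] (right eye) and routes the instance id to
    // SV_RenderTargetArrayIndex, so each instance lands in the matching slice
    // of the camera's texture-array back buffer.
    context->DrawIndexedInstanced(
        m_indexCount, // index count per instance
        2,            // instance count: one instance per eye
        0, 0, 0);
}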

Thanks a lot.

YL

Answers

  • This might help you understand:
    https://developer.microsoft.com/en-us/windows/holographic/rendering_in_directx

    "The app begins each new frame by calling the CreateNextFrame method. When this method is called, predictions are made using the latest sensor data available, and encapsulated in CurrentPrediction object."

    In short, HoloLens predicts where your head will be for the upcoming frame so that it can position holograms correctly as your head (and therefore the view matrix) moves. The prediction can contain more than one camera, and each camera has its own view and projection matrices that you apply to your vertices in the vertex shader (take a look at the sample shader that comes with the cube sample). For a given camera, the view matrix is an array of two matrices, one per eye; the instanced draw consumes both in a single pass, while the cameraPose loop repeats the setup once per camera, not once per eye. A sketch of that loop follows below.

    http://www.redsprocketstudio.com/
    Developer | Check out my new project blog
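
    To make that concrete, here is a rough sketch of the per-camera part of the main Render() loop, following the structure of the Windows Holographic app template (C++/CX). Names such as DX::CameraResources, cameraResourceMap, m_deviceResources and m_referenceFrame come from that template, and error handling plus the UseHolographicCameraResources lambda are omitted, so treat it as an outline rather than the exact template code:

    HolographicFramePrediction^ prediction = holographicFrame->CurrentPrediction;

    for (HolographicCameraPose^ cameraPose : prediction->CameraPoses)
    {
        // Resources (back buffer, depth buffer, view/projection constant buffer)
        // that belong to this specific camera.
        DX::CameraResources* pCameraResources =
            cameraResourceMap[cameraPose->HolographicCamera->Id].get();

        const auto context          = m_deviceResources->GetD3DDeviceContext();
        const auto depthStencilView = pCameraResources->GetDepthStencilView();

        // Bind and clear this camera's render target. The back buffer is a
        // texture array with two slices, one per eye.
        ID3D11RenderTargetView* const targets[1] =
            { pCameraResources->GetBackBufferRenderTargetView() };
        context->OMSetRenderTargets(1, targets, depthStencilView);
        context->ClearRenderTargetView(targets[0], DirectX::Colors::Transparent);
        context->ClearDepthStencilView(
            depthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

        // cameraPose carries this camera's predicted stereo view transform and
        // projection. UpdateViewProjectionBuffer packs BOTH eye matrices into a
        // single constant buffer; AttachViewProjectionBuffer binds it for the
        // vertex shader.
        pCameraResources->UpdateViewProjectionBuffer(
            m_deviceResources, cameraPose, m_referenceFrame->CoordinateSystem);
        const bool cameraActive =
            pCameraResources->AttachViewProjectionBuffer(m_deviceResources);

        if (cameraActive)
        {
            // One instanced draw renders both eyes of THIS camera; the loop only
            // repeats this work for each additional camera (for example, the one
            // added during mixed reality capture).
            m_spinningCubeRenderer->Render();
        }
    }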
