Hello everyone.

We have decided to phase out the Mixed Reality Forums over the next few months in favor of other ways to connect with us.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

The plan between now and the beginning of May is to clean up old, unanswered questions that are no longer relevant. The forums will remain open and usable.

On May 1st we will be locking the forums to new posts and replies. They will remain available for another three months for the purposes of searching them, and then they will be closed altogether on August 1st.

So, where does that leave our awesome community to ask questions? Well, there are a few places we want to engage with you. For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality. If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers. And always feel free to hit us up on Twitter @MxdRealityDev.

About Rendering in DirectX

Hi,
As I was reading Rendering in DirectX from
https://developer.microsoft.com/en-us/windows/holographic/rendering_in_directx

I am not clear about instanced stereo rendering, especially the cameraPose loop inside Render():

Render()
{
    ...
    for (auto cameraPose : prediction->CameraPoses)
    {
        ...
        // Specify the render target and DepthStencilView for this cameraPose

        // Render one object, e.g. m_spinningCubeRenderer->Render();
        ...
    }
    ...
}

I checked the code of m_spinningCubeRenderer->Render(), and it uses instanced stereo rendering to render to both the left and right eyes. Since m_spinningCubeRenderer->Render() already renders both eyes, why do we need the cameraPose loop? What is the cameraPose loop used for?

Thanks a lot.

YL

Answers

  • This might help you understand:
    https://developer.microsoft.com/en-us/windows/holographic/rendering_in_directx

    "The app begins each new frame by calling the CreateNextFrame method. When this method is called, predictions are made using the latest sensor data available, and encapsulated in CurrentPrediction object."

    In short, HoloLens predicts where your head will be in the current/next frame so that it can position holograms correctly as your view matrix (your head) moves. The prediction can contain more than one camera, each with its own view/projection matrices that you apply to your vertex positions in the vertex shader. (Take a look at the sample shader that comes with the cube sample.) The view matrix is an array of two matrices, one for each eye.

    http://www.redsprocketstudio.com/
    Developer | Check out my new project blog
