Rendering in DirectX 11 (holographic app template)
Hi,
I have a question about the rendering in holographic app template.
The main code for rendering is:
for (auto cameraPose : prediction->CameraPoses)
{
    // 1. Get the device context for the current cameraPose.
    // 2. Set render targets to the current holographic camera.
    // 3. Clear the back buffer and depth-stencil view.
    // 4. Refresh the view and projection matrices in the constant buffer
    //    for the holographic camera indicated by cameraPose, and attach
    //    the constant buffer to the graphics pipeline.
    // 5. spinningCubeRenderer->Render();
}
I checked the code for spinningCubeRenderer->Render() and found that it uses instanced stereo rendering to render to both the left and right displays simultaneously. So the loop above iterates only once, right? In other words, only one stereo holographic camera is added to the holographic space, right?
Thanks.
YL
Answers
Yes! The DeviceResources object contains two view matrices, one with an offset for each eye. In the sample vertex shader you'll see a view-index lookup: depending on the instance id, it pulls either the left- or right-eye view matrix to apply to your vertices.
http://www.redsprocketstudio.com/
Developer | Check out my new project blog
Thanks, AmerAmer. One more thing I want to confirm: the reason for using the cameraPose loop is to support MRC (mixed reality capture), right?
Yeah, because the lens has to predict where to place the holograms on the next frame. My understanding is that it tries to eliminate as much drift as possible by predicting your next head position and creating a new view/world matrix to compensate for the delay between movement and draw. I'm sure there is a lot more to it, but that's my surface-level understanding.
Thanks, AmerAmer.
I guess that when we enable MRC, a mono camera is added to the HolographicSpace, and the CameraPose loop then loops twice to render both the stereo and mono cameras.
Well, it does one draw call for both eyes. The stereo component is taken care of in the shader.