Geometry Shaders
Are geometry shaders available on HoloLens?
Having seen spatial mapping effects in Fragments and Young Conker, I am assuming geometry shaders are available on HoloLens, but I wanted to confirm.
Has anyone played with geometry shaders on HoloLens?
Best Answer
Alex480
Sure, in the example below I split the regular vertex stream into one stream per eye, so only a single render call is needed. The idea is the same as rendering reflection cube maps or shadow maps, where the same geometry needs to be rendered several times from different views. I can't think of an external link right now, but there should be many examples of this out there.
The mStereoView[2] and mStereoViewProjection[2] arrays are part of the constant buffer and hold the left- and right-eye matrices.
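For reference, a minimal sketch of what that constant buffer declaration might look like; the buffer name and register slot are assumptions for illustration, only the two matrix arrays come from the answer:

```hlsl
// Hypothetical constant buffer layout (name and register are assumptions).
cbuffer StereoConstants : register(b0)
{
    float4x4 mStereoView[2];           // left/right eye view matrices
    float4x4 mStereoViewProjection[2]; // left/right eye view-projection matrices
};
```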
It is also important to add the required field to the PS input struct:
uint nRTIndex : SV_RenderTargetArrayIndex;
The spatial mapping shader is almost the same, but I add hard-coded barycentric coordinates in it to reduce the input size (it is a specialized shader, so it could be optimized; the one below is a generic one).
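For context, the PS input struct could look something like the sketch below. The field list is inferred from the geometry shader in this answer; the semantic names other than SV_Position and SV_RenderTargetArrayIndex are assumptions:

```hlsl
// Sketch of a PS input struct matching the geometry shader's output
// (semantic assignments are assumptions, not the author's exact code).
struct PSMaterialInput
{
    float4 vPosition  : SV_Position;
    float4 vView      : TEXCOORD0;
    float4 vWorld     : TEXCOORD1;
    float3 vNormal    : NORMAL;
    float3 vTangent   : TANGENT;
    float3 vBitangent : BINORMAL;
    float2 vTexCoord  : TEXCOORD2;
    uint   nRTIndex   : SV_RenderTargetArrayIndex;
};
```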
// --- Geometry shader
[maxvertexcount(6)]
void GS(triangle GSMaterialInput input[3], inout TriangleStream<PSMaterialInput> StereoStream)
{
    // --- For each eye create a new triangle
    [unroll]
    for (int nEye = 0; nEye < 2; nEye++)
    {
        PSMaterialInput output = (PSMaterialInput)0;

        // --- Assign triangle to the RT corresponding to this eye
        output.nRTIndex = nEye;

        // --- For each vertex of the triangle
        [unroll]
        for (int nVertex = 0; nVertex < 3; nVertex++)
        {
            // --- Transform position
            output.vPosition = mul(input[nVertex].vPosition, mStereoViewProjection[nEye]);
            output.vView = mul(input[nVertex].vPosition, mStereoView[nEye]);
            output.vWorld = input[nVertex].vPosition;

            // --- Normal
            output.vNormal = input[nVertex].vNormal;

            // --- Tangent and bitangent
            output.vTangent = input[nVertex].vTangent;
            output.vBitangent = input[nVertex].vBitangent;

            // --- Pass texture coordinates
            output.vTexCoord = input[nVertex].vTexCoord;

            StereoStream.Append(output);
        }

        // --- New triangle
        StereoStream.RestartStrip();
    }
}
Answers
Yes, they are. In fact, they are quite useful for splitting the vertex data for stereo rendering, and for other things like the spatial mapping visualization.
Mind sharing any of these resources or pointing to where they might be located?
Stephen Hodgson
Microsoft HoloLens Agency Readiness Program
Virtual Solutions Developer at Saab
HoloToolkit-Unity Moderator
@Alex480 Thanks, this is good news.