DirectX: Adding a Texture2D to render 2D content on 3D Surfaces
Hey guys,
I spent the whole day modifying the C# DirectX template so I could render 2D content on a 3D surface.
Basically, I tried to create
- a Texture2D
- a Surface
- a Render Target/BitmapRenderTarget
- a SamplerState
and tried to hook all of this into the PixelShader with SetShaderResource and SetSampler. But, to be honest, this is day one for me with D3D and HLSL, and I'm faaaar away from comprehending it all.
I thought, in the Render method, I could first render the 2D content into the BitmapRenderTarget and then take that as a Texture onto the 3D object.
But I think I screwed up the PixelShader and/or VertexShader HLSL files.
To be honest, I have not at all understood what the 2 different VertexShader files are all about. Amongst other things... :-S
So, how exactly would I have to modify those to get this to work?
Will this approach work at all?
Thanks a lot - any help is really appreciated!!
Klaus
Best Answer
The vertex shader does the math to figure out where points land in world space; it's the job of the pixel shader to determine which color to display at a particular pixel, that's all.
Your Texture2D is applied in the pixel shader. For every pixel, it draws the corresponding texture color based on the UV coordinates. The model, view, and projection matrices have already been applied by this stage.
What are you loading into your UV coordinates? It's possible all you're sending in is one coordinate. Or, if all you have is one pixel in one location, then the other coordinates are most likely 0, or a coordinate that maps to no pixel color in the texture. Turn on DirectX debugging and look at your output window to see if you're getting any errors from any of the shaders.
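For reference, turning on the debug layer in a SharpDX app is a one-flag change at device creation (a sketch; it assumes the Windows 10 "Graphics Tools" optional feature is installed so the debug layer is available):

```csharp
using SharpDX.Direct3D;
using SharpDX.Direct3D11;

// Create the D3D11 device with the debug layer enabled so pipeline and
// shader binding errors show up in the Visual Studio output window.
var device = new SharpDX.Direct3D11.Device(
    DriverType.Hardware,
    DeviceCreationFlags.BgraSupport | DeviceCreationFlags.Debug);
```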
Dwight Goins
CAO & Founder| Independent Architect | Trainer and Consultant | Sr. Enterprise Architect
MVP | MCT | MCSD | MCPD | SharePoint TS | MS Virtual TS |Windows 8 App Store Developer | Linux Gentoo Geek | Raspberry Pi Owner | Micro .Net Developer | Kinect For Windows Device Developer
http://dgoins.wordpress.com
Answers
Hi Klaus,
A couple of things to answer your question. In a nutshell: yes, this should work. You have to use a ColorTexture and ColorSampler in the pixel shader to tell your shader to use the pixels from the texture.
The vertex shader applies the matrices: model, view, and projection. These transforms translate your coordinates into world space. It's invoked once for each vertex you pass in. Typically you pass in vertices, indices, and colors/texture coordinates, plus any custom data you need for processing in the later shader stages.
After the vertex shader comes an optional geometry shader, which is used to do further primitive processing if needed.
Lastly comes the pixel shader, which is used to determine which colors to show. It runs for every pixel in the view. The data is passed from the vertex shader to the geometry shader to the pixel shader.
The HoloLens is a stereoscopic rendering engine, which means you render two views: left eye and right eye. The two different vertex shaders are there to support this mechanism. When running on an emulator (which does not support stereo view) you have to mimic the stereo design. The Microsoft example you have does this through custom shaders. One vertex shader uses an optional Direct3D 11.3 feature:
// On devices that do support the D3D11_FEATURE_D3D11_OPTIONS3::
// VPAndRTArrayIndexFromAnyShaderFeedingRasterizer optional feature
// we can avoid using a pass-through geometry shader to set the render
// target array index, thus avoiding any overhead that would be
// incurred by setting the geometry shader stage.
This basically allows a render target array to be used with instanced drawing, where the array index is set automatically for you per instance drawn. In the example you are drawing 2 instances (if the system is stereo, i.e. running on a HoloLens or another stereo-enabled device), so the index will be 0 and 1 respectively. If you don't have this feature, then you have to set the index manually, which is what the geometry shader does in that case. The array holds the vertices, indices, texture coordinates, etc. for the two views (left and right).
By the time the pixel shader gets the buffers, it knows which index (left = 0 or right = 1) to use to retrieve any buffer info it needs to determine which pixel color to send to the rasterizer for rendering to the screen.
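Sketched in HLSL, the render-target-array trick looks roughly like this (names are illustrative, not the template's exact code; the model transform is omitted for brevity):

```hlsl
cbuffer ViewProjectionConstantBuffer : register(b1)
{
    float4x4 viewProjection[2];   // one view/projection matrix per eye
};

struct VertexShaderOutput
{
    float4 pos   : SV_POSITION;
    // With VPAndRTArrayIndexFromAnyShaderFeedingRasterizer, the vertex
    // shader may set the render-target slice directly, so no pass-through
    // geometry shader is needed.
    uint   rtvId : SV_RenderTargetArrayIndex;
};

VertexShaderOutput main(float3 pos : POSITION, uint instId : SV_InstanceID)
{
    VertexShaderOutput output;
    int idx = instId % 2;                               // 0 = left, 1 = right
    output.pos   = mul(float4(pos, 1.0f), viewProjection[idx]);
    output.rtvId = idx;                                 // selects the eye's slice
    return output;
}
```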
Now, I say all that to ask: what are you actually trying to render? I would look into https://github.com/Microsoft/DirectXTK. It handles a lot of this for you.
Now onto another point... You can't just render to any target. Typically you render onto the back buffer; however, in a HoloLens app the back buffer is not as readily available as in normal DirectX apps. Take a look at CameraResources.cs for a hint on how to get at the back buffer. You have to cast it to an IResource, which could be a Texture2D.
HTH
Dwight Goins
Hey Dwight, fellow MVP - thanks for the quick response!
And thanks for the excellent explanation. I had a rough idea of how the rendering worked, but now it is much clearer!
So, here is what I want to achieve: we need a couple of virtual info displays we'd like to place as holograms, shaped as a roughly 16:9, not very deep cuboid, over a couple of workstations in a production plant.
This is actually all I need in terms of 3D for the first version of the prototype. Rendering the info ON those displays with Direct2D (lines, circles, DrawText, etc.) is then the really important task. So, actually, I do not really care how I get text, lines, areas and all that onto those cuboids; I just thought this was a feasible way to go.
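For that 2D part, the usual SharpDX pattern is to wrap the texture's DXGI surface in a Direct2D render target and draw into it. A sketch, assuming `texture` is a `B8G8R8A8_UNorm` Texture2D created with `BindFlags.RenderTarget | BindFlags.ShaderResource` on a device created with `DeviceCreationFlags.BgraSupport`:

```csharp
using SharpDX;
using SharpDX.Direct2D1;
using SharpDX.DXGI;

using (var surface = texture.QueryInterface<Surface>())
using (var d2dFactory = new SharpDX.Direct2D1.Factory())
{
    var props = new RenderTargetProperties(
        new PixelFormat(Format.Unknown, SharpDX.Direct2D1.AlphaMode.Premultiplied));

    using (var d2dTarget = new RenderTarget(d2dFactory, surface, props))
    using (var brush = new SolidColorBrush(d2dTarget, Color.Yellow))
    {
        // All Direct2D drawing goes between BeginDraw and EndDraw.
        d2dTarget.BeginDraw();
        d2dTarget.Clear(Color.DarkSlateGray);
        d2dTarget.DrawLine(new Vector2(10, 10), new Vector2(200, 120), brush, 2.0f);
        d2dTarget.EndDraw();
    }
}
```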
So, to this end, I modified the PixelShader.hlsl to look like this, so I could use a texture within the program code:
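A texture-sampling pixel shader along those lines (a minimal sketch; the register slots and names are assumptions, not the poster's actual code):

```hlsl
Texture2D    baseTexture : register(t0);
SamplerState baseSampler : register(s0);

struct PixelShaderInput
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

// Sample the bound texture at the interpolated UV coordinate.
float4 main(PixelShaderInput input) : SV_TARGET
{
    return baseTexture.Sample(baseSampler, input.uv);
}
```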
And I modified the VertexShaders (both) to look like this:
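The texture-related change in the vertex shader is simply carrying a UV coordinate through to the pixel shader. A sketch of the instanced (no-geometry-shader) variant, with assumed names:

```hlsl
cbuffer ModelConstantBuffer : register(b0)
{
    float4x4 model;
};

cbuffer ViewProjectionConstantBuffer : register(b1)
{
    float4x4 viewProjection[2];   // one matrix per eye
};

struct VertexShaderInput
{
    float3 pos    : POSITION;
    float2 uv     : TEXCOORD0;
    uint   instId : SV_InstanceID;
};

struct VertexShaderOutput
{
    float4 pos   : SV_POSITION;
    float2 uv    : TEXCOORD0;
    uint   rtvId : SV_RenderTargetArrayIndex;
};

VertexShaderOutput main(VertexShaderInput input)
{
    VertexShaderOutput output;
    int idx = input.instId % 2;
    float4 p = mul(float4(input.pos, 1.0f), model);
    output.pos   = mul(p, viewProjection[idx]);
    output.uv    = input.uv;      // interpolated per pixel downstream
    output.rtvId = idx;
    return output;
}
```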
And this is where I am comparatively lost, because I do not really know what I'm dealing with. Is this the right way to extend the VertexShaderOutput? And I have to extend both VertexShaders (for emulator use and for the actual HoloLens deployment), is that correct? Can I invoke the Texture2D like this? I'm not sure the perspective is taken into account for what I'm rendering in 2D.
Oh, and one more question: what did you mean by ColorTexture and ColorSampler? I tried to google/bing it, but could not really find anything useful.
I'm using C#/SharpDX, so I'm not sure the toolkit can help me here, or can it?
Thanks so much!
Best
Klaus
ColorTexture & ColorSampler was a typo; I meant Texture2D and SamplerState. Sorry about that.
Yes, your shaders look fine.
Now the next step is to change the shader vertex structure so it contains your texture coordinate:
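In SharpDX terms, that means extending the vertex struct and matching the input layout to it (a sketch; the struct and field names are assumptions, not the template's exact code):

```csharp
using System.Runtime.InteropServices;
using SharpDX;
using SharpDX.Direct3D11;
using SharpDX.DXGI;

// Vertex now carries a UV coordinate alongside the position.
[StructLayout(LayoutKind.Sequential)]
public struct VertexPositionTex
{
    public Vector3 Position;
    public Vector2 TexCoord;

    public VertexPositionTex(Vector3 position, Vector2 texCoord)
    {
        Position = position;
        TexCoord = texCoord;
    }
}

// The input layout must match the new struct and the vertex shader's
// input signature: a float3 at offset 0, then a float2 at offset 12.
var layoutElements = new[]
{
    new InputElement("POSITION", 0, Format.R32G32B32_Float,  0, 0),
    new InputElement("TEXCOORD", 0, Format.R32G32_Float,    12, 0),
};
```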
then load the texture and create the resourceView, and pass that into the shaders:
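Creating the texture, its shader resource view, and a sampler might look like this (a sketch; `device` is the template's D3D11 device, and the 512x512 size is arbitrary):

```csharp
using SharpDX.Direct3D11;
using SharpDX.DXGI;

// A texture the pixel shader can sample and Direct2D can draw into.
var textureDesc = new Texture2DDescription
{
    Width = 512,
    Height = 512,
    MipLevels = 1,
    ArraySize = 1,
    Format = Format.B8G8R8A8_UNorm,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget,
};
var texture = new Texture2D(device, textureDesc);
var textureView = new ShaderResourceView(device, texture);

// A basic linear-filtering sampler with clamped addressing.
var samplerDesc = SamplerStateDescription.Default();
samplerDesc.Filter = Filter.MinMagMipLinear;
samplerDesc.AddressU = TextureAddressMode.Clamp;
samplerDesc.AddressV = TextureAddressMode.Clamp;
var sampler = new SamplerState(device, samplerDesc);
```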
Then, inside the Render() method:
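A sketch of the binding calls, assuming `context`, `textureView`, `sampler`, and `indexCount` come from the setup code (names illustrative):

```csharp
// Bind the texture and sampler to the pixel shader's t0 and s0 registers
// before issuing the draw call.
context.PixelShader.SetShaderResource(0, textureView);
context.PixelShader.SetSampler(0, sampler);

// Two instances: one per eye on a stereo device.
context.DrawIndexedInstanced(indexCount, 2, 0, 0, 0);
```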
Dwight Goins
Hey Dwight,
I'm not really getting this to work, although I might be making some progress here!
:-)
So far, I can see my cuboid, and it is doing something with the texture. For example, when I "clear" the texture by issuing a Clear command on it, the cuboid really takes on the clear color.
But when I draw a line in a completely different color, it seems that just one pixel of that line fills the whole area.
So, I'm wondering about this: shouldn't the texture also be perspective-corrected in some way?
Are there any other things you could think of that would contribute to that effect?
Thanks (again!)!
Klaus
Hey Dwight,
I got it finally working - thanks so much for all your support!
Hope to see you at the summit in November!
Best from Germany
Klaus
This is good to hear... and hopefully see you soon
Dwight Goins