
DirectX: Adding a Texture2D to render 2D content on 3D Surfaces

Hey guys,

I spent the whole day modifying the C# DirectX template so that I could render 2D content on a 3D surface.

Basically, I tried to create

  • a Texture2D
  • a Surface
  • a Render Target/BitmapRenderTarget
  • a SamplerState

and tried to hook all of this into the PixelShader with SetShaderResource and SetSampler. But, to be honest, this is day 1 for me with D3D and HLSL, and I'm faaaar away from comprehending this.

I thought that, in the Render method, I could first render the 2D content into the BitmapRenderTarget and then apply that as a texture to the 3D object.

But I think I screwed up the PixelShader and/or VertexShader HLSL files.
To be honest, I have not understood at all what the two different VertexShader files are about. Amongst other things... :-S

So, how exactly would I have to modify those to get this to work?
Will this approach work at all?

Thanks a lot - any help is really appreciated!!

Klaus


Answers

  • edited July 2016

    Hi Klaus,

    A couple of things to answer your question. In a nutshell: yes, this should work. You have to use a ColorTexture and ColorSampler in the pixel shader to tell your shader to use the pixels from the texture.

    The vertex shader applies the model, view, and projection matrices; these transforms take your coordinates from model space into world space and then onto the screen. It is invoked once for each vertex you pass in. Typically you pass in vertices, indices, and colors/texture coordinates, plus any custom data you need for processing in the later shader stages.

    After the vertex shader comes an optional geometry shader, which is used to do further geometry processing if needed.

    Lastly there is the pixel shader, which determines which colors to show. It runs for every pixel your geometry covers on screen. The data flows from the vertex shader to the (optional) geometry shader to the pixel shader.
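
    In SharpDX terms, the stage wiring looks roughly like this (a sketch; context and the shader objects are assumed to already exist):

    // Bind one shader per pipeline stage ('context' is the D3D11 DeviceContext;
    // the shader objects are created from compiled HLSL bytecode).
    context.VertexShader.Set(vertexShader);
    context.GeometryShader.Set(geometryShader);   // optional stage; pass null to skip it
    context.PixelShader.Set(pixelShader);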

    The HoloLens is a stereoscopic rendering device, which means you render two views: one for the left eye and one for the right eye. The two different vertex shaders exist to support this mechanism. When running on the emulator (which does not support the optional feature below) you have to mimic the stereo design. The Microsoft example you have does this through custom shaders. One of the vertex shaders uses a Shader Model 5.1 feature (an optional Direct3D 11.3 capability) to support:

    // On devices that do support the D3D11_FEATURE_D3D11_OPTIONS3::
    // VPAndRTArrayIndexFromAnyShaderFeedingRasterizer optional feature
    // we can avoid using a pass-through geometry shader to set the render
    // target array index, thus avoiding any overhead that would be
    // incurred by setting the geometry shader stage.

    This basically allows a render target array to be used with instanced drawing, where the array index is set automatically for you for each instance that is drawn. In the example you are drawing 2 instances (if the system is stereo, i.e. running on a HoloLens or another stereo-enabled device), so the indices will be 0 and 1 respectively. If you don't have this support, then you have to set the index manually, which is what the pass-through geometry shader does in this case. The render target array holds the output for the two views, one slice per eye (left and right).

    By the time the pixel shader gets its inputs, it knows which view it belongs to (left = 0 or right = 1) and can use that index to retrieve any per-view data it needs to determine which pixel color to write for that eye.
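
    On the app side, the corresponding draw call looks roughly like this (a SharpDX sketch; isStereo and indexCount are assumed names):

    // Draw the geometry once per view using instancing: two instances when
    // rendering in stereo, one otherwise. The % 2 in the vertex shader turns
    // the instance ID back into the view index.
    int instanceCount = isStereo ? 2 : 1;
    context.DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);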

    Now, I say all that to ask: what are you actually trying to render? I would look into the DirectX Tool Kit (https://github.com/Microsoft/DirectXTK); it handles a lot of this for you.

    Now onto another point... You can't just render to any target. Typically you render onto the back buffer; however, in a HoloLens app the back buffer is not as readily available as in normal DirectX apps. Take a look at CameraResources.cs for a hint on how to get to the back buffer. You have to cast it to an IResource, which could be a Texture2D.
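
    The cast itself is a one-liner in SharpDX once you have the camera's back-buffer resource in hand (a sketch; backBufferResource is an assumed name):

    // QueryInterface the generic resource down to the concrete Texture2D.
    var backBufferTexture = backBufferResource.QueryInterface<SharpDX.Direct3D11.Texture2D>();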

    HTH

    Dwight Goins
    CAO & Founder| Independent Architect | Trainer and Consultant | Sr. Enterprise Architect
    MVP | MCT | MCSD | MCPD | SharePoint TS | MS Virtual TS |Windows 8 App Store Developer | Linux Gentoo Geek | Raspberry Pi Owner | Micro .Net Developer | Kinect For Windows Device Developer
    http://dgoins.wordpress.com

  • Hey Dwight, fellow MVP - thanks for the quick response!
    And thanks for the excellent explanation. I had a rough idea of how the rendering worked, but now it is much clearer!

    So, here is what I want to achieve: we need a couple of virtual info displays that we'd like to place as holograms, on flat, roughly 16:9 cuboids, above a couple of workplaces in a production plant.

    This is actually all I need in terms of 3D for the first version of the prototype. Rendering the info ON those displays with Direct2D (lines, circles, DrawText, etc.) is then the really important task. So I do not really care how I get text, lines, areas and all that onto those cuboids; I just thought this was a feasible way to go.

    So, to this end, I modified PixelShader.hlsl to look like this, so that I can use a texture from the program code:

    Texture2D ShaderTexture : register(t0);
    SamplerState Sampler : register(s0);
    
    // Per-pixel color data passed through the pixel shader.
    struct PixelShaderInput
    {
        min16float4 pos   : SV_POSITION;
        min16float3 color : COLOR0;
        float2 TextureUV : TEXCOORD0;
    };
    
    // The pixel shader samples the texture. The texture coordinates are
    // interpolated and assigned to each pixel at the rasterization step.
    float4 main(PixelShaderInput input) : SV_TARGET
    {
        return ShaderTexture.Sample(Sampler, input.TextureUV);
    }
    

    And I modified both vertex shaders to look like this:

    struct VertexShaderOutput
    {
        min16float4 pos     : SV_POSITION;
        min16float3 color   : COLOR0;
        float2    TextureUV : TEXCOORD0;
        uint        viewId  : TEXCOORD1;  // SV_InstanceID % 2
    };
    
    // Simple shader to do vertex processing on the GPU.
    VertexShaderOutput main(VertexShaderInput input)
    {
        VertexShaderOutput output;
        float4 pos = float4(input.pos, 1.0f);
    
        // Note which view this vertex has been sent to. Used for matrix lookup.
        // Taking the modulo of the instance ID allows geometry instancing to be used
        // along with stereo instanced drawing; in that case, two copies of each 
        // instance would be drawn, one for left and one for right.
        int idx = input.instId % 2;
    
        // Transform the vertex position into world space.
        pos = mul(pos, model);
    
        // Correct for perspective and project the vertex position onto the screen.
        pos = mul(pos, viewProjection[idx]);
        output.pos = (min16float4)pos;
        output.TextureUV = input.TextureUV;
    
        //// Pass the color through without modification.
        output.color = input.color;
    
        // Set the instance ID. The pass-through geometry shader will set the
        // render target array index to whatever value is set here.
        output.viewId = idx;
    
        return output;
    }
    

    And this is where I am comparatively lost, because I do not really know what I'm dealing with. Is this the right way to extend the VertexShaderOutput? And I have to extend both vertex shaders (the one for emulator use and the one for actual HoloLens deployment), is that correct? Also, can I use the Texture2D like this? I'm not sure that the perspective of what I'm 2D-rendering is taken into account.

    Oh, and one more question: what did you mean by ColorTexture and ColorSampler? I tried to google/bing it, but could not really find anything useful.

    I'm using C#/SharpDX, so I'm not sure whether the Toolkit can help me here, or can it?

    Thanks so much!

    Best

    Klaus

  • edited August 2016

    ColorTexture & ColorSampler was a typo; I meant Texture2D and SamplerState. Sorry about that.

    Yes, your shaders look fine.

    Now the next step is to change the vertex structure to contain your texture coordinate:

    // CPU-side vertex structure: a position plus a texture coordinate.
    public struct VertexPositionCoordinate
    {
        public VertexPositionCoordinate(Vector3 pos, Vector2 coord)
        {
            this.pos = pos;
            this.coordinate = coord;
        }

        public Vector3 pos;
        public Vector2 coordinate;
    }


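    One step not shown here: the input layout bound with the vertex shader has to match the new structure, otherwise the texture coordinates never reach the shader. A rough SharpDX sketch, assuming the POSITION/TEXCOORD semantics from the shaders above (inputLayout and vertexShaderByteCode are assumed names):

    // Input layout matching VertexPositionCoordinate: a float3 position at
    // byte offset 0 and a float2 UV at byte offset 12.
    var vertexDesc = new[]
    {
        new SharpDX.Direct3D11.InputElement("POSITION", 0, SharpDX.DXGI.Format.R32G32B32_Float,  0, 0),
        new SharpDX.Direct3D11.InputElement("TEXCOORD", 0, SharpDX.DXGI.Format.R32G32_Float,    12, 0),
    };
    inputLayout = new SharpDX.Direct3D11.InputLayout(
        deviceResources.D3DDevice, vertexShaderByteCode, vertexDesc);
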
    Then load the texture and create the ShaderResourceView, and pass those into the shaders:

    // TextureLoader is a WIC-based helper class (not part of SharpDX itself).
    string textureName2 = "Content\\Textures\\holding_fire2.dds";

    var bitmapSource = TextureLoader.LoadBitmap(new SharpDX.WIC.ImagingFactory2(), textureName2);
    var texture2D = TextureLoader.CreateTexture2DFromBitmap(deviceResources.D3DDevice, bitmapSource);

    // The sampler controls filtering and what happens outside the [0,1]
    // UV range (here: a solid black border).
    var samplerStateDescription = new SharpDX.Direct3D11.SamplerStateDescription();
    samplerStateDescription.Filter = SharpDX.Direct3D11.Filter.MinMagMipLinear;
    samplerStateDescription.AddressU = SharpDX.Direct3D11.TextureAddressMode.Border;
    samplerStateDescription.AddressV = SharpDX.Direct3D11.TextureAddressMode.Border;
    samplerStateDescription.AddressW = SharpDX.Direct3D11.TextureAddressMode.Border;
    samplerStateDescription.BorderColor = new SharpDX.Mathematics.Interop.RawColor4(0f, 0f, 0f, 1f);

    samplerState = new SharpDX.Direct3D11.SamplerState(deviceResources.D3DDevice, samplerStateDescription);

    // The shader resource view is what gets bound to the pixel shader.
    textureResource = new SharpDX.Direct3D11.ShaderResourceView(deviceResources.D3DDevice, texture2D);
    
    // Cube corners with texture coordinates (note that several corners
    // share the same UV, so the mapping differs per face).
    VertexPositionCoordinate[] cubeVertices =
    {
        new VertexPositionCoordinate(new Vector3(-0.1f, -0.1f, -0.1f), new Vector2(0.0f, 1.0f)),
        new VertexPositionCoordinate(new Vector3(-0.1f, -0.1f,  0.1f), new Vector2(0.0f, 0.0f)),
        new VertexPositionCoordinate(new Vector3(-0.1f,  0.1f, -0.1f), new Vector2(0.0f, 0.0f)),
        new VertexPositionCoordinate(new Vector3(-0.1f,  0.1f,  0.1f), new Vector2(0.0f, 0.0f)),
        new VertexPositionCoordinate(new Vector3( 0.1f, -0.1f, -0.1f), new Vector2(1.0f, 1.0f)),
        new VertexPositionCoordinate(new Vector3( 0.1f, -0.1f,  0.1f), new Vector2(1.0f, 0.0f)),
        new VertexPositionCoordinate(new Vector3( 0.1f,  0.1f, -0.1f), new Vector2(1.0f, 0.0f)),
        new VertexPositionCoordinate(new Vector3( 0.1f,  0.1f,  0.1f), new Vector2(1.0f, 1.0f)),
    };
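
    The vertices then have to be uploaded into a GPU buffer and bound with the stride of the new structure; roughly (a SharpDX sketch, field names assumed):

    // Upload the vertex data into a GPU buffer.
    vertexBuffer = SharpDX.Direct3D11.Buffer.Create(
        deviceResources.D3DDevice, SharpDX.Direct3D11.BindFlags.VertexBuffer, cubeVertices);

    // At render time, bind it with the stride of VertexPositionCoordinate.
    int stride = SharpDX.Utilities.SizeOf<VertexPositionCoordinate>();
    context.InputAssembler.SetVertexBuffers(
        0, new SharpDX.Direct3D11.VertexBufferBinding(vertexBuffer, stride, 0));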
    

    Then, inside the Render() method:

    // Slot 0 here must match register(s0) / register(t0) in the pixel shader.
    context.PixelShader.SetSampler(0, samplerState);
    context.PixelShader.SetShaderResource(0, textureResource);
    

    Dwight Goins
    http://dgoins.wordpress.com

  • Hey Dwight,

    I'm not really getting this working yet, although I might be making some progress here!
    :-)

    So far, I can see my cuboid, and it is doing something with the texture. For example, when I "clear" the texture by drawing the Clear command into it, the cuboid really takes on the clear color.

    But when I draw a line in a completely different color, it seems that just a single pixel of that line completely fills the area.

    So, I'm wondering about this:

    // Simple shader to do vertex processing on the GPU.
    VertexShaderOutput main(VertexShaderInput input)
    {
        VertexShaderOutput output;
        float4 pos = float4(input.pos, 1.0f);
    
        // Note which view this vertex has been sent to. Used for matrix lookup.
        // Taking the modulo of the instance ID allows geometry instancing to be used
        // along with stereo instanced drawing; in that case, two copies of each 
        // instance would be drawn, one for left and one for right.
        int idx = input.instId % 2;
    
        // Transform the vertex position into world space.
        pos = mul(pos, model);
    
        // Correct for perspective and project the vertex position onto the screen.
        pos = mul(pos, viewProjection[idx]);
        output.pos = (min16float4)pos;
        output.TextureUV = input.TextureUV;
    
        //// Pass the color through without modification.
        //output.color = input.color;
    
        // Set the render target array index.
        output.rtvId = idx;
    
        return output;
    }
    

    Shouldn't the texture also be corrected for perspective in some way?

    Are there any other things you could think of that would contribute to that effect?

    Thanks (again!)!

    Klaus

    PS: Here is the code that creates the texture:
    
    private void SetupTexture()
    {
        var context = this.deviceResources.D3DDeviceContext;
        var factory = new SharpDX.Direct2D1.Factory();

        // The texture is both a render target (for Direct2D) and a
        // shader resource (for sampling in the pixel shader).
        myTexture2d = new Texture2D(this.deviceResources.D3DDevice, new Texture2DDescription()
        {
            ArraySize = 1,
            BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
            CpuAccessFlags = CpuAccessFlags.None,
            Format = Format.B8G8R8A8_UNorm,
            Width = 1280,
            Height = 720,
            MipLevels = 1,
            OptionFlags = ResourceOptionFlags.None,
            SampleDescription = new SampleDescription()
            {
                Count = 1,
                Quality = 0
            },
            Usage = ResourceUsage.Default
        });

        // Direct2D draws onto the texture through its DXGI surface.
        var surface = myTexture2d.QueryInterface<Surface>();

        var pFormat = new SharpDX.Direct2D1.PixelFormat();
        pFormat.Format = Format.Unknown;
        pFormat.AlphaMode = SharpDX.Direct2D1.AlphaMode.Premultiplied;
        var rtp = new RenderTargetProperties(RenderTargetType.Default,
                                             pFormat, 0, 0,
                                             RenderTargetUsage.None,
                                             FeatureLevel.Level_DEFAULT);

        myRenderTarget = new SharpDX.Direct2D1.RenderTarget(factory, surface, rtp);

        var samplerStateDescription = new SamplerStateDescription
        {
            AddressU = TextureAddressMode.Border,
            AddressV = TextureAddressMode.Border,
            AddressW = TextureAddressMode.Border,
            Filter = SharpDX.Direct3D11.Filter.MinMagMipLinear,
            BorderColor = new RawColor4(0f, 0f, 0f, 1f)
        };

        mySampler = new SamplerState(this.deviceResources.D3DDevice, samplerStateDescription);

        var brush = new SolidColorBrush(myRenderTarget, new RawColor4(1f, 0f, 0f, 1f));

        // Clear to blue, then draw a red diagonal line.
        myRenderTarget.BeginDraw();
        myRenderTarget.Clear(new RawColor4(0.0f, 0.0f, 1f, 1f));
        myRenderTarget.DrawLine(new RawVector2() { X = 0, Y = 0 },
                                new RawVector2() { X = 600f, Y = 600f },
                                brush, 5f);
        //myRenderTarget.DrawLine(new RawVector2() { X = 0, Y = 200 },
        //                        new RawVector2() { X = 200, Y = 0 },
        //                        brush, 1);
        myRenderTarget.EndDraw();
    }
    
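    For reference, the pixel shader also needs the texture exposed through a shader resource view plus the per-frame bindings, presumably done elsewhere in the code; a sketch (myTextureView is an assumed name). Note that Direct2D interop like this also requires the D3D11 device to be created with DeviceCreationFlags.BgraSupport.

    // Shader resource view over the Direct2D-backed texture.
    myTextureView = new ShaderResourceView(this.deviceResources.D3DDevice, myTexture2d);

    // Each frame, before drawing the cuboid:
    context.PixelShader.SetShaderResource(0, myTextureView);
    context.PixelShader.SetSampler(0, mySampler);
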
  • Hey Dwight,

    I got it finally working - thanks so much for all your support!

    Hope to see you at the summit in November!

    Best from Germany

    Klaus

  • This is good to hear... and hopefully see you soon ;)

    Dwight Goins
    http://dgoins.wordpress.com
