Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

DirectX Depth buffer precision 24 bit?

I'm noticing that the ANGLE project I use to translate OpenGL to Direct3D consistently chooses DXGI_FORMAT_R16_TYPELESS (16-bit) for the depth buffer. Because of the OpenGL-to-D3D depth-space conversion, I tend to lose quite a bit of precision when I'm far (> 5 meters) away from holograms.

Question: can I manually force this to 24 bits? And is there a performance penalty if I run the HoloLens with a 24-bit depth buffer?
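If the application controls the EGL initialization that ANGLE sits behind, the usual place to ask for more depth precision is the config attribute list. A minimal sketch, assuming a standard EGL setup (the helper name is illustrative; whether ANGLE returns a 24-bit config on a given backend is exactly the open question here):

```cpp
#include <EGL/egl.h>

// Sketch: ask EGL (ANGLE) for a config with at least 24 bits of depth.
// ANGLE maps the chosen config to a DXGI depth format internally.
EGLConfig ChooseDepth24Config(EGLDisplay display)
{
    const EGLint attribs[] = {
        EGL_RED_SIZE,        8,
        EGL_GREEN_SIZE,      8,
        EGL_BLUE_SIZE,       8,
        EGL_ALPHA_SIZE,      8,
        EGL_DEPTH_SIZE,      24,  // request at least 24 bits of depth
        EGL_STENCIL_SIZE,    8,   // 24-bit depth usually pairs with an 8-bit stencil
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };

    EGLConfig config = nullptr;
    EGLint numConfigs = 0;
    if (eglChooseConfig(display, attribs, &config, 1, &numConfigs) != EGL_TRUE || numConfigs == 0)
    {
        return nullptr;  // no config with >= 24 bits of depth on this backend
    }
    return config;
}
```

EGL_DEPTH_SIZE is a minimum, so the returned config may carry even more depth bits; eglGetConfigAttrib can report what was actually granted.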

Answers

  • Jimbohalo10 ✭✭✭
    edited December 2016

    Well, it looks like this page has a big choice:

    DXGI_FORMAT_R24G8_TYPELESS: a two-component, 32-bit typeless format that supports 24 bits for the red channel and 8 bits for the green channel.
    DXGI_FORMAT_D24_UNORM_S8_UINT: a 32-bit z-buffer format that supports 24 bits for depth and 8 bits for stencil.
    DXGI_FORMAT_R24_UNORM_X8_TYPELESS: a 32-bit format that contains a 24-bit, single-component, unsigned-normalized integer with an additional 8 typeless bits (24 bits red channel, 8 bits unused).
    DXGI_FORMAT_X24_TYPELESS_G8_UINT: a 32-bit format that contains a 24-bit, single-component, typeless value with an additional 8-bit unsigned integer component (24 bits unused, 8 bits green channel).

    The problem seems to be selecting the right sort of 24-bit format (a sketch of the usual pattern is at the end of this answer).
    I must confess I never worked out which was better, and as you can see they are all stored as 32-bit values, matching the 32-bit architecture of the HoloLens.

    The problem is that we don't know much about the GPU in the HoloLens.
    Is this program rendering on the slow software path (around 8 FPS) or on the GPU (30 to 60 FPS)?

    I feel sure the DirectX frame capture and graphics debugging in Visual Studio may give you the answers you want if you are rendering.

    This thread seems to have similar problems: DirectX: Adding a Texture2D to render 2D content on 3D Surfaces.
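    To make "the right sort of 24 bit" concrete, a common D3D11 pattern, sketched below under the assumption of a plain ID3D11Device (the helper name and parameters are illustrative, not from this thread), is to create the texture with the typeless format and let the views fix how the 24 + 8 bits are interpreted:

```cpp
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: a 24-bit depth buffer whose depth bits can also be sampled later.
// Error handling is omitted; check the HRESULTs in real code.
void CreateDepth24(ID3D11Device* device, UINT width, UINT height,
                   ComPtr<ID3D11DepthStencilView>& dsv,
                   ComPtr<ID3D11ShaderResourceView>& srv)
{
    // The texture itself is typeless; the views below decide the interpretation.
    D3D11_TEXTURE2D_DESC depthDesc = {};
    depthDesc.Width            = width;
    depthDesc.Height           = height;
    depthDesc.MipLevels        = 1;
    depthDesc.ArraySize        = 1;
    depthDesc.Format           = DXGI_FORMAT_R24G8_TYPELESS;
    depthDesc.SampleDesc.Count = 1;
    depthDesc.Usage            = D3D11_USAGE_DEFAULT;
    depthDesc.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

    ComPtr<ID3D11Texture2D> depthTexture;
    device->CreateTexture2D(&depthDesc, nullptr, &depthTexture);

    // Depth-stencil view: 24 bits of depth, 8 bits of stencil.
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
    dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
    device->CreateDepthStencilView(depthTexture.Get(), &dsvDesc, &dsv);

    // Shader resource view: read the 24 depth bits, ignore the 8 stencil bits.
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
    srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = 1;
    device->CreateShaderResourceView(depthTexture.Get(), &srvDesc, &srv);
}
```

    If the depth buffer never needs to be sampled, dropping D3D11_BIND_SHADER_RESOURCE and creating the texture directly as DXGI_FORMAT_D24_UNORM_S8_UINT is the simpler route.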

  • Thanks for answering. ANGLE converts GL_DEPTH_COMPONENT24 to the right DXGI format, it seems. So far (at least in the emulator), 24-bit works and my artifacts have gone away.

    My real question is: should I expect a performance hit on the HoloLens hardware now that I've gone from a 16-bit to a 24-bit depth buffer?
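    For reference, a minimal sketch of the GL-side call that ANGLE translates, assuming an OpenGL ES 3.0 context (on ES 2.0 the 24-bit renderbuffer format needs the OES_depth24 extension) and a framebuffer object that is already bound:

```cpp
#include <GLES3/gl3.h>

// Sketch: attach a 24-bit depth renderbuffer to the currently bound FBO.
// ANGLE maps GL_DEPTH_COMPONENT24 onto one of the 24-bit DXGI depth formats.
GLuint CreateDepth24Renderbuffer(GLsizei width, GLsizei height)
{
    GLuint depthRb = 0;
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);
    return depthRb;
}
```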

  • FYI: it seemed to work just fine on the device.
  • Jimbohalo10 ✭✭✭
    edited December 2016

    Try installing the Microsoft Remote Desktop assistant on your PC.

    Then install the Microsoft Remote Desktop app on the HoloLens.

    Log on to your PC from Microsoft Remote Desktop the same way you normally would, using the PC's IP address and your MSA account.

    Run the program under the HoloLens Emulator. You are now seeing the performance of the HoloLens device rather than your PC's video card.

    Running Galaxy Explorer this way is noticeably crippled when displayed on a Windows 10 tablet in the HoloLens Emulator, so what you see is the performance of the HoloLens device GPU.

    Look at the DirectX/10.0/Direct3D sample "FPS, CPU usage and timers".

    This introduces an FPS counter class that can be used to record performance; all you do is change the parameters and compare the results on the PC CPU and then on the remote HoloLens GPU (a minimal stand-in counter is sketched at the end of this answer).

    Ideally you could use the Device Portal, as described in Windows Device Portal - Performance.
    First try the 16-bit version and store the results, then run the 24-bit version and compare it against the 16-bit numbers.
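    The sample's FPS counter class is not reproduced here, but a minimal stand-in using only the C++ standard library (the class and member names are illustrative) is enough to compare the 16-bit and 24-bit runs with the same measurement:

```cpp
#include <chrono>
#include <cstdio>

// Minimal frame-rate counter: call Tick() once per rendered frame.
// Prints the average FPS over roughly the last second, which is enough
// to compare the 16-bit and 24-bit depth-buffer builds on the same scene.
class FpsCounter
{
public:
    void Tick()
    {
        using namespace std::chrono;
        ++m_frames;
        const auto now    = steady_clock::now();
        const double secs = duration_cast<duration<double>>(now - m_windowStart).count();
        if (secs >= 1.0)
        {
            std::printf("FPS: %.1f\n", m_frames / secs);
            m_frames      = 0;
            m_windowStart = now;
        }
    }

private:
    std::chrono::steady_clock::time_point m_windowStart = std::chrono::steady_clock::now();
    int m_frames = 0;
};
```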

  • Dealing with performance: I'm going to assume you're passing the depth value into a vertex shader through a constant buffer. If so, constant buffers on DirectX x86 platforms (including HoloLens) have to pack into 16-byte aligned registers. It doesn't matter whether you use 16, 24 or 32 bits; the data always packs into 16-byte aligned slots. Likewise, a 24-bit depth value is stored internally as two 16-bit halves (32 bits) with 8 bits unused (24 + 8 unused = 32 bits), so for performance you may want to put some other value in those 8 unused bits or convert your depth buffer to a full 32 bits. This is also how the DirectX SIMD (DirectXMath) types work on x86 platforms for heap-allocated variables. (A packing sketch follows below the signature.)

    Dwight Goins
    CAO & Founder| Independent Architect | Trainer and Consultant | Sr. Enterprise Architect
    MVP | MCT | MCSD | MCPD | SharePoint TS | MS Virtual TS |Windows 8 App Store Developer | Linux Gentoo Geek | Raspberry Pi Owner | Micro .Net Developer | Kinect For Windows Device Developer
    http://dgoins.wordpress.com
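    As a concrete illustration of the 16-byte packing rule described above, here is a hypothetical constant-buffer layout (the struct and field names are invented for this example, not taken from the thread). Each HLSL constant-buffer register is 16 bytes regardless of how wide the individual values are, so grouping small scalars together keeps a register from being wasted on padding:

```cpp
#include <DirectXMath.h>

// Hypothetical C++ mirror of an HLSL constant buffer such as:
//
//   cbuffer SceneConstants : register(b0)
//   {
//       float4x4 worldViewProj;  // 4 x 16-byte registers
//       float    nearPlane;      // \
//       float    farPlane;       //  > packed into a single 16-byte register
//       float2   depthParams;    // /
//   };
//
// The register size is 16 bytes whether the values are conceptually
// 16-, 24- or 32-bit quantities, so the small scalars are grouped.
struct SceneConstants
{
    DirectX::XMFLOAT4X4 worldViewProj; // 64 bytes = four registers
    float               nearPlane;     // these four floats fill exactly
    float               farPlane;      // one 16-byte register
    DirectX::XMFLOAT2   depthParams;
};

static_assert(sizeof(SceneConstants) % 16 == 0,
              "D3D11 constant buffer sizes must be multiples of 16 bytes");
```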
