Spatial Surface Mesh returning NaN for positions

I'm trying to display SpatialSurfaceMesh::VertexPositions::Data as a mesh, like in the "Spatial mapping in DirectX" example. However, the position data comes back as NaN for almost every other float. At first I thought this was because the positions use a stride of 8 and contain twice as many values as SpatialSurfaceMesh::VertexNormals::Data does. But the Graphics Debugger also shows NaN, and I would expect the Graphics Debugger to skip the right spacing when the stride is set.

So I ended up writing a converter to change the array from a stride of 8 to a stride of 4. That didn't really help: the values were oddly small, and there were a lot of NaNs further down the vertex list. I imagine the problem could be in how I'm converting the IBuffer to an array of bytes (http://stackoverflow.com/questions/11853838/getting-an-array-of-bytes-out-of-windowsstoragestreamsibuffer). However, I have nothing to check this against, since I can't find the sample source code anywhere; I could probably solve this on my own if I had it. The documentation even leaves a note saying to see "GetDataFromIBuffer.h" for the definition of the CreateDirectXBuffer() helper function.
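
For reference, the usual C++/CX pattern for getting a raw byte pointer out of an IBuffer (what the linked Stack Overflow answer describes, and presumably roughly what the sample's GetDataFromIBuffer.h wraps) is the IBufferByteAccess interface from robuffer.h. A minimal sketch, with the helper name being my own placeholder and error handling omitted:

    #include <robuffer.h>   // Windows::Storage::Streams::IBufferByteAccess
    #include <wrl/client.h>

    // Returns a raw pointer to the bytes backing an IBuffer (valid only while the buffer is alive).
    inline byte* GetPointerToBufferData(Windows::Storage::Streams::IBuffer^ buffer)
    {
        Microsoft::WRL::ComPtr<Windows::Storage::Streams::IBufferByteAccess> byteAccess;
        reinterpret_cast<IInspectable*>(buffer)->QueryInterface(IID_PPV_ARGS(&byteAccess));

        byte* data = nullptr;
        byteAccess->Buffer(&data);
        return data;
    }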

Has anyone come across this or does anyone know where the DirectX sample source code is located?

Answers

  • The only reference I can find is spatial_mapping_in_directx,
    which talks about how the API works.

  • Thanks BillOrr, that was the problem.

  • @BillOrr said:
    Check that your input layout matches what is being returned from VertexPositions->Format. The position data is typically packed as R16G16B16A16UIntNormalized, or equivalently DXGI_FORMAT_R16G16B16A16_UNORM.

    I was mistaken. The type is actually R16G16B16A16IntNormalized/DXGI_FORMAT_R16G16B16A16_SNORM.
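
    For context, "matching the input layout" just means the D3D11 vertex description must use the same DXGI format the mesh reports. A minimal sketch of that element for the SNORM case (the semantic name, slots, and offsets here are placeholders for whatever your shader expects):

    #include <d3d11.h>

    static const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
    {
        // Format must match SpatialSurfaceMesh::VertexPositions->Format (DXGI_FORMAT_R16G16B16A16_SNORM here).
        { "POSITION", 0, DXGI_FORMAT_R16G16B16A16_SNORM, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    };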

  • @BillOrr said:
    R16G16B16A16IntNormalized/DXGI_FORMAT_R16G16B16A16_SNORM.

    So, after I get a byte array from the IBuffer, does it actually contain a single DXGI_FORMAT_R16G16B16A16_SNORM structure per vertex? How can I convert it into, say, {float x, float y, float z}?

  • @Tennoheikabanzai said:

    @BillOrr said:
    R16G16B16A16IntNormalized/DXGI_FORMAT_R16G16B16A16_SNORM.

    So, after I get a byte array from the IBuffer, does it actually contain a single DXGI_FORMAT_R16G16B16A16_SNORM structure per vertex? How can I convert it into, say, {float x, float y, float z}?

    Found any solution? I am trying to achieve the same thing.

  • Only a suggestion rather than an attempt at an answer, but does it help if you use SpatialSurfaceMeshOptions to request a different format from the mesh APIs? For instance, R32G32B32A32Float?
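
    A rough C++/CX sketch of that suggestion (surfaceInfo and maxTrianglesPerCubicMeter are assumed to come from your existing surface-observer code):

    using namespace Windows::Graphics::DirectX;
    using namespace Windows::Perception::Spatial::Surfaces;

    SpatialSurfaceMeshOptions^ options = ref new SpatialSurfaceMeshOptions();
    // Ask for 32-bit float positions so no SNORM unpacking is needed;
    // SpatialSurfaceMeshOptions::SupportedVertexPositionFormats lists what the device supports.
    options->VertexPositionFormat = DirectXPixelFormat::R32G32B32A32Float;

    auto meshOperation = surfaceInfo->TryComputeLatestMeshAsync(maxTrianglesPerCubicMeter, options);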

  • @Khronossos said:

    @Tennoheikabanzai said:

    @BillOrr said:
    R16G16B16A16IntNormalized/DXGI_FORMAT_R16G16B16A16_SNORM.

    So, after I get a byte array from the IBuffer, does it actually contain a single DXGI_FORMAT_R16G16B16A16_SNORM structure per vertex? How can I convert it into, say, {float x, float y, float z}?

    Found any solution? I am trying to achieve the same thing.

    Oh. Yes, I did, quite some time ago; I thought briefly about posting it here but didn't think anybody cared... Well, if anybody still does: the vertices are stored as a buffer of DXGI_FORMAT_R16G16B16A16_SNORM, i.e. four 16-bit signed normalized integers per vertex (not half-precision floats). Homogeneous coordinates, I guess; the fourth value was always 1 for me. This is how I converted those snorm values into "real" floats (I don't remember where I found the formula, but it works):

    #include <cstdint>
    #include <DirectXMath.h>
    using namespace DirectX;

    // Converts a 16-bit signed normalized (snorm) value to a float.
    inline float snorm_to_float(int16_t i)
    {
        const float coeff = 3.051850947599719e-05f; // 1 / ( 2^(n-1) - 1 ), where n == 16 (number of bits)
        return float(i) * coeff;
    }

    //...................
    // GetDataFromIBuffer<T> is the helper from the sample's GetDataFromIBuffer.h
    int16_t *vertices_data = GetDataFromIBuffer<int16_t>(positions);
    // one vertex is four 16-bit snorm values (x, y, z, w)
    for (int i = 0; i < n_vertices * 4; i += 4)
    {
        XMVECTOR vec = XMVectorSet(
            snorm_to_float(vertices_data[i + 0]),
            snorm_to_float(vertices_data[i + 1]),
            snorm_to_float(vertices_data[i + 2]),
            snorm_to_float(vertices_data[i + 3]));
        // the vertex coordinates are stored in `vec` now - apply transformations, etc.
        //..................
    }
    

    Probably obvious for those who have experience with DirectX. You probably don't need to convert at all if all you need to do is render (I wanted to do some additional processing), and you should probably vectorize the conversion (e.g. with SSE).
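
    One way to do that vectorization (a sketch, reusing vertices_data and n_vertices from the snippet above) is to let DirectXMath do the unpacking: its packed XMSHORTN4 type is exactly four 16-bit snorm values, and XMLoadShortN4 converts one to a float XMVECTOR:

    #include <DirectXPackedVector.h>
    using namespace DirectX;
    using namespace DirectX::PackedVector;

    // Reinterpret the R16G16B16A16_SNORM buffer as packed 4-component snorm vertices.
    const XMSHORTN4* packed = reinterpret_cast<const XMSHORTN4*>(vertices_data);
    for (int v = 0; v < n_vertices; ++v)
    {
        // snorm16x4 -> float4; apply VertexPositionScale / transforms afterwards, as before.
        XMVECTOR vec = XMLoadShortN4(&packed[v]);
    }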
