World coordinates from meshes (VertexPositions->Data)

Hello,
I am trying to extract the spatial mapping meshes into an array of world-space coordinates. How can I get them out of mesh->VertexPositions->Data? I don't need to render the meshes, so I would like to avoid the "CreateDirectXBuffer" machinery used in the samples. PS: I am not using Unity.

// Request meshes with 16-bit signed-normalized vertex positions.
auto options = ref new SpatialSurfaceMeshOptions();
auto formats = options->SupportedVertexPositionFormats;
options->VertexPositionFormat = DirectXPixelFormat::R16G16B16A16IntNormalized;

// Enumerate every surface currently tracked by the observer.
auto mapContainingSurfaceCollection = m_surfaceObserver->GetObservedSurfaces();
for (auto& pair : mapContainingSurfaceCollection)
{
    auto const& surfaceInfo = pair->Value;

    // Compute the latest mesh at up to 1000 triangles per cubic meter.
    auto mesh = co_await surfaceInfo->TryComputeLatestMeshAsync(1000, options);
    auto positions = mesh->VertexPositions->Data;
    // IBuffer to Vector3[]?..
}

Answers

  • edited August 2016

    IBuffer is just a WinRT wrapper around a BYTE*, so cast it to a BYTE* and then iterate through the array of bytes.

    Each vertex is stored as an x, y, z, w value, if I remember correctly.

    Something like this:

    // First, we acquire the raw data buffers.
    Windows::Storage::Streams::IBuffer^ positions = m_surfaceMesh->VertexPositions->Data;
    Windows::Storage::Streams::IBuffer^ normals = m_surfaceMesh->VertexNormals->Data;
    Windows::Storage::Streams::IBuffer^ indices = m_surfaceMesh->TriangleIndices->Data;

    BYTE * ptrPositions = (BYTE *)positions;
    BYTE * ptrNormals = (BYTE *)normals;

    // We also cache the stride properties for the buffers we just acquired.
    m_vertexStride = m_surfaceMesh->VertexPositions->Stride;
    m_normalStride = m_surfaceMesh->VertexNormals->Stride;
    auto numOfVertices = (UINT)(verticesCount / m_normalStride);

    for (UINT i = 0; i < numOfVertices; i++)
    {
        // Read the four normal components.
        auto fred = (float)*ptrNormals;
        ptrNormals++;
        auto fgreen = (float)*ptrNormals;
        ptrNormals++;
        auto fblue = (float)*ptrNormals;
        ptrNormals++;
        auto fAlpha = (float)*ptrNormals;
        ptrNormals++;

        XMFLOAT3 normal = XMFLOAT3(fred, fgreen, fblue);

        // Read the four position components.
        auto fX = (float)*ptrPositions;
        ptrPositions++;
        auto fY = (float)*ptrPositions;
        ptrPositions++;
        auto fZ = (float)*ptrPositions;
        ptrPositions++;
        auto fW = (float)*ptrPositions;
        ptrPositions++;

        XMFLOAT3 position = XMFLOAT3(fX, fY, fZ);

        XMFLOAT2 texture;
        texture.x = textureCoords.at(i).x;
        texture.y = textureCoords.at(i).y;

        vertices.push_back(VertexPositionNormalTexture(position, normal, texture));
    }

    Dwight Goins
    CAO & Founder| Independent Architect | Trainer and Consultant | Sr. Enterprise Architect
    MVP | MCT | MCSD | MCPD | SharePoint TS | MS Virtual TS |Windows 8 App Store Developer | Linux Gentoo Geek | Raspberry Pi Owner | Micro .Net Developer | Kinect For Windows Device Developer
    http://dgoins.wordpress.com

  • Egorbo
    edited August 2016

    @Dwight_Goins_EE_MVP Thank you so much for the help!
    What is verticesCount in your code? Is it mesh->VertexPositions->ElementCount? If so, why is it divided by m_normalStride? Thanks!

    PS: Do these options work with your code?

    auto options = ref new SpatialSurfaceMeshOptions();
    options->VertexPositionFormat = DirectXPixelFormat::R16G16B16A16IntNormalized;
    options->IncludeVertexNormals = true;
    
  • verticesCount is: auto verticesCount = m_surfaceMesh->VertexPositions->ElementCount;

    I was doing something else - in your case, don't divide by the stride. I was giving a general idea of how to get at the IBuffer data for the positions. Again, it's a BYTE*, so you have to walk the BYTE* and convert the bytes into the data type you need. If the format uses 2-byte components, every 2 bytes should be read as one 16-bit value.

    Your format uses a 2-byte value for each component: 16 bits for R, 16 for G, 16 for B, and 16 for A. Each 16-bit value is a normalized integer, i.e. it represents a float in a fixed range (for the signed-normalized format, -32767..32767 maps to -1.0..1.0).
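
    A minimal sketch of that decoding, assuming the signed-normalized R16G16B16A16IntNormalized format and a raw short* over the position buffer (the helper name SnormToFloat and the rawShorts pointer are just placeholders):

    // Decode one 16-bit signed-normalized (SNORM) component to a float in [-1, 1].
    inline float SnormToFloat(short v)
    {
        // -32768 is clamped so that both -32768 and -32767 map to -1.0f.
        return (v <= -32767) ? -1.0f : v / 32767.0f;
    }

    // Usage: each vertex occupies four consecutive shorts (x, y, z, w).
    // float x = SnormToFloat(rawShorts[i * 4 + 0]);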

    HTH

    Dwight Goins
    CAO & Founder| Independent Architect | Trainer and Consultant | Sr. Enterprise Architect
    MVP | MCT | MCSD | MCPD | SharePoint TS | MS Virtual TS |Windows 8 App Store Developer | Linux Gentoo Geek | Raspberry Pi Owner | Micro .Net Developer | Kinect For Windows Device Developer
    http://dgoins.wordpress.com

  • I think this is what you need. I extracted it from a test project I use, so there might be some noise:

    float3 vertexScale = m_surfaceMesh->VertexPositionScale;
    int InputVertexCount = m_surfaceMesh->VertexPositions->ElementCount;
    float spacing = 1.0f / 12.0f;

    // Convert all the verts to approximate verts.
    // It would probably be better to check the surface of each triangle rather
    // than taking each vertex...
    for (int index = 0; index < InputVertexCount; index++)
    {
        // Read the current position as an XMSHORTN4 (four 16-bit SNORM components).
        XMSHORTN4 currentPos = XMSHORTN4(rawVertexData[index]);
        XMFLOAT4 xmfloat;

        // XMLoadShortN4 converts the XMSHORTN4 to actual floating-point coordinates.
        XMVECTOR xmvec = XMLoadShortN4(&currentPos);

        // Store that into an XMFLOAT4 so we can read the values.
        XMStoreFloat4(&xmfloat, xmvec);

        // The values need to be scaled by the vertex scale before approximating.
        XMFLOAT4 scaledVector = XMFLOAT4(xmfloat.x * vertexScale.x, xmfloat.y * vertexScale.y, xmfloat.z * vertexScale.z, xmfloat.w);

        // Then downscale the vector, since it will be rescaled when rendering.
        float4 nextFloat = float4(scaledVector.x, scaledVector.y, scaledVector.z, scaledVector.w);
        localRawVoxels->Append(nextFloat);
        nextFloat = ConvertToApproximateVector(nextFloat, spacing);
        m_rawVoxels->Append(nextFloat);
    }
    

    ===
    This post provided as-is with no warranties and confers no rights. Using information provided is done at own risk.

    (Daddy, what does 'now formatting drive C:' mean?)

    // Requires <robuffer.h> (for Windows::Storage::Streams::IBufferByteAccess)
    // and <wrl.h> (for Microsoft::WRL::ComPtr).
    XMSHORTN4* rawVertexData = (XMSHORTN4*)GetDataFromIBuffer(m_surfaceMesh->VertexPositions->Data);

    template <typename t = byte>
    t* GetDataFromIBuffer(Windows::Storage::Streams::IBuffer^ container)
    {
        if (container == nullptr)
        {
            return nullptr;
        }

        unsigned int bufferLength = container->Length;

        if (!(bufferLength > 0))
        {
            return nullptr;
        }

        HRESULT hr = S_OK;

        // Query the buffer for IBufferByteAccess to reach the raw bytes.
        ComPtr<IUnknown> pUnknown = reinterpret_cast<IUnknown*>(container);
        ComPtr<IBufferByteAccess> spByteAccess;
        hr = pUnknown.As(&spByteAccess);
        if (FAILED(hr))
        {
            return nullptr;
        }

        byte* pRawData = nullptr;
        hr = spByteAccess->Buffer(&pRawData);
        if (FAILED(hr))
        {
            return nullptr;
        }

        return reinterpret_cast<t*>(pRawData);
    }
    

    ===
    This post provided as-is with no warranties and confers no rights. Using information provided is done at own risk.

    (Daddy, what does 'now formatting drive C:' mean?)

  • I also found that you might have to multiply each vertex by

    m_surfaceMesh->VertexPositionScale

    and then transform each vertex by

    SpatialCoordinateSystem^ scs = m_surfaceMesh->CoordinateSystem;
    Platform::IBox<float4x4>^ scsToWorld = scs->TryGetTransformTo(currentCoordinateSystem);
    vertex = transform(vertex, scsToWorld->Value);
    

    where currentCoordinateSystem was obtained as a Stationary SpatialCoordinateSystem.

    Finally, I was using sample code where DirectX stored the vertices in the DXGI_FORMAT_R16G16B16A16_SNORM format (not sure if you can change this), so I had to extract the actual vertices as follows:

    // raw_vertices points at the raw 16-bit position data (e.g. obtained via
    // GetDataFromIBuffer); scale is m_surfaceMesh->VertexPositionScale.
    float* vertices = new float[m_numberOfVertexPositions * 3];
    for (int i = 0; i < m_numberOfVertexPositions; i++)
    {
        short x = raw_vertices[i * 4 + 0];
        short y = raw_vertices[i * 4 + 1];
        short z = raw_vertices[i * 4 + 2];
        short w = raw_vertices[i * 4 + 3];
        // SNORM decode: -32768 clamps to -1.0f, otherwise divide by 32767.
        float xx = (x == -32768) ? -1.0f : x * 1.0f / 32767.0f;
        float yy = (y == -32768) ? -1.0f : y * 1.0f / 32767.0f;
        float zz = (z == -32768) ? -1.0f : z * 1.0f / 32767.0f;
        float ww = (w == -32768) ? -1.0f : w * 1.0f / 32767.0f;  // w is not used below
        float3 vertex = float3(xx * scale.x, yy * scale.y, zz * scale.z);
        vertex = transform(vertex, scsToWorld->Value);
        vertices[i * 3 + 0] = vertex.x;
        vertices[i * 3 + 1] = vertex.y;
        vertices[i * 3 + 2] = vertex.z;
    }
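
    Putting the pieces from this thread together, a rough end-to-end sketch (an outline, not tested code) of going from the mesh straight to world-space positions without creating any D3D buffers could look like the following. It assumes the GetDataFromIBuffer helper posted earlier, a currentCoordinateSystem obtained from a stationary frame of reference, the R16G16B16A16IntNormalized position format, and the name ExtractWorldPositions is invented purely for illustration:

    // Requires <vector>, <WindowsNumerics.h>, and the GetDataFromIBuffer helper above.
    using namespace Windows::Foundation::Numerics;
    using namespace Windows::Perception::Spatial;
    using namespace Windows::Perception::Spatial::Surfaces;

    // Returns one surface's vertices expressed in currentCoordinateSystem,
    // or an empty vector if the two coordinate systems cannot be related.
    std::vector<float3> ExtractWorldPositions(
        SpatialSurfaceMesh^ mesh,
        SpatialCoordinateSystem^ currentCoordinateSystem)
    {
        std::vector<float3> worldPositions;

        // Mesh-to-world transform; TryGetTransformTo can return nullptr.
        Platform::IBox<float4x4>^ meshToWorld =
            mesh->CoordinateSystem->TryGetTransformTo(currentCoordinateSystem);
        if (meshToWorld == nullptr)
        {
            return worldPositions;
        }

        // Raw 16-bit SNORM positions (x, y, z, w per vertex) plus the per-mesh scale.
        short* raw = GetDataFromIBuffer<short>(mesh->VertexPositions->Data);
        float3 scale = mesh->VertexPositionScale;
        unsigned int count = mesh->VertexPositions->ElementCount;

        worldPositions.reserve(count);
        for (unsigned int i = 0; i < count; i++)
        {
            short x = raw[i * 4 + 0];
            short y = raw[i * 4 + 1];
            short z = raw[i * 4 + 2];

            // SNORM -> float, then apply the vertex position scale.
            float fx = (x == -32768) ? -1.0f : x / 32767.0f;
            float fy = (y == -32768) ? -1.0f : y / 32767.0f;
            float fz = (z == -32768) ? -1.0f : z / 32767.0f;
            float3 vertex = float3(fx * scale.x, fy * scale.y, fz * scale.z);

            // Transform from the mesh's own coordinate system into world space.
            worldPositions.push_back(transform(vertex, meshToWorld->Value));
        }

        return worldPositions;
    }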
    
  • MarcDub ✭✭
    edited September 2016

    Based on the information found in this post, I wrote some code to access the surface mesh vertex information. The problem is that I sometimes get an error "Access violation reading location 0x00E01000" (the location varies) when attempting to read q[4*i] (see below). Does anyone have an explanation for why that error occurs?

    int vertexCount = m_surfaceMesh->VertexPositions->ElementCount;
    short *q = (short *)m_surfaceMesh->VertexPositions->Data;
    for (int i = 0; i < vertexCount; i++) // goes through all vertices of the mesh
    {
        short xc = q[4 * i];
        short yc = q[4 * i + 1];
        short zc = q[4 * i + 2];
    }

  • Patrick mod
    edited September 2016

    Is it expected to work that an IBuffer can be cast directly to a short*? That would be very convenient, but I don't think you can rely on the memory layout of the Data object. I hope I'm wrong, because I would much rather treat IBuffers like void*'s.

    I posted in this thread how I get the actual data pointer from the buffer. Do you still have the same problem if you follow that approach?
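
    For reference, a minimal sketch of that loop using the GetDataFromIBuffer helper posted earlier in this thread (assuming the 16-bit position format) would look roughly like this:

    // Get a real pointer to the buffer bytes instead of casting the IBuffer^ itself.
    short* q = GetDataFromIBuffer<short>(m_surfaceMesh->VertexPositions->Data);
    int vertexCount = m_surfaceMesh->VertexPositions->ElementCount;
    for (int i = 0; i < vertexCount; i++) // goes through all vertices of the mesh
    {
        short xc = q[4 * i];
        short yc = q[4 * i + 1];
        short zc = q[4 * i + 2];
    }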

    ===
    This post provided as-is with no warranties and confers no rights. Using information provided is done at own risk.

    (Daddy, what does 'now formatting drive C:' mean?)

  • MarcDubMarcDub ✭✭
    edited September 2016

    Thank you, Patrick. That was the problem. I had not noticed the IBuffer cast. It did work about 90% of the time though, which is not good enough by a long shot.
