Hello everyone.

We have decided to phase out the Mixed Reality Forums over the next few months in favor of other ways to connect with us.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

The plan between now and the beginning of May is to clean up old, unanswered questions that are no longer relevant. The forums will remain open and usable.

On May 1st we will be locking the forums to new posts and replies. They will remain available for another three months for the purposes of searching them, and then they will be closed altogether on August 1st.

So, where does that leave our awesome community to ask questions? Well, there are a few places we want to engage with you. For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality. If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers. And always feel free to hit us up on Twitter @MxdRealityDev.

Displaying spatial mapping meshes like the 3D Viewer in C++

Hey,
I would like to display the environment the way the HoloLens web tool does in its 3D View.
To do that, I generate meshes with the SurfaceObserver and send them to a server on my desktop PC, where I read the sent data and display the received meshes.
The problem I currently have is that their positions in the world are not correct: they fit if I just use one mesh collection (the set of meshes returned by the observer), but if I move with the HoloLens and have a different position in the world, the meshes get displayed inside each other.
The following code shows how I apply the transform to the current mesh:

XMMATRIX scaleTransform = XMMatrixScalingFromVector(XMLoadFloat3(&m_surfaceMesh->VertexPositionScale));
XMMATRIX transform;

transform = XMLoadFloat4x4(&transformValue);
transform = scaleTransform * transform;

applyTransform = true;

for (int index = 0; index < vertexCount; ++index)
{
    // Read the current position as an XMSHORTN4
    XMSHORTN4 currentPosition = XMSHORTN4(rawVertexData[index]);
    XMFLOAT4 vals;

    // XMVECTOR knows how to convert an XMSHORTN4 to actual floating-point coordinates
    XMVECTOR vec = XMLoadShortN4(&currentPosition);

    // Store that into an XMFLOAT4 so we can read the values
    XMStoreFloat4(&vals, vec);

    XMFLOAT4 scaledPos;

    // Scale by the vertex scale
    if (!applyTransform)
    {
        scaledPos = XMFLOAT4(vals.x * vertexScale.x, vals.y * vertexScale.y, vals.z * vertexScale.z, vals.w);
    }
    else
    {
        XMVECTOR scaledVec = XMVector4Transform(vec, transform);
        XMStoreFloat4(&scaledPos, scaledVec);
    }
}
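For anyone wondering what XMLoadShortN4 does numerically: each XMSHORTN4 component is a 16-bit signed normalized (SNORM) value, decoded to the range [-1, 1] by dividing by 32767 and clamping, after which the per-axis VertexPositionScale stretches it back to metric space. A minimal standalone sketch of that decoding (the helper names here are hypothetical, not DirectXMath APIs):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical helper mirroring what XMLoadShortN4 does per component:
// a 16-bit signed normalized value maps to [-1, 1] as value / 32767,
// clamped so that -32768 also decodes to exactly -1.
float DecodeSnorm16(int16_t v)
{
    return std::max(static_cast<float>(v) / 32767.0f, -1.0f);
}

// The decoded position is then multiplied by the per-axis
// VertexPositionScale, which is what the scaleTransform matrix above does
// before the world transform is applied.
float ScaleComponent(int16_t raw, float axisScale)
{
    return DecodeSnorm16(raw) * axisScale;
}
```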

The transformValue is generated like this:

SpatialCoordinateSystem^ currentCoordinateSystem = m_referenceFrame->GetStationaryCoordinateSystemAtTimestamp(prediction->Timestamp);
auto transformToOrigin = currentCoordinateSystem->TryGetTransformTo(m_stationaryReferenceFrame->CoordinateSystem);
Windows::Foundation::Numerics::float4x4 transformValue = transformToOrigin->Value;

The m_stationaryReferenceFrame->CoordinateSystem is the coordinate system captured at the start of the application, and I use it as my reference point for all meshes.
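One detail worth double-checking: DirectXMath treats vectors as row vectors, so the product scaleTransform * transform applies the scale first and the coordinate-system transform second. A minimal plain-C++ sketch of that row-vector convention (the Mat4/Vec4 types and helpers here are hypothetical stand-ins, not DirectXMath):

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec4 = std::array<float, 4>;

// Row-vector convention, as DirectXMath uses: result = v * M,
// so the translation lives in the last row of the matrix.
Vec4 Transform(const Vec4& v, const Mat4& m)
{
    Vec4 r{};
    for (int c = 0; c < 4; ++c)
        for (int k = 0; k < 4; ++k)
            r[c] += v[k] * m[k][c];
    return r;
}

// With row vectors, Multiply(a, b) builds a matrix that applies a first,
// then b — matching the scaleTransform * transform order above.
Mat4 Multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}
```

If the composition were flipped to transform * scaleTransform, the scale would be applied in the already-transformed space, which is one common way meshes end up overlapping incorrectly.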

Does anybody have a suggestion about what I should change or what I am doing wrong?
