
Coordinate System Transforms

Could anyone help with what should be a simple conversion:

I have a 3D coordinate (as a 4x4 homogeneous matrix) in camera space, obtained from image processing with OpenCV, and would like to convert it to Unity world space.

Following the examples from ARToolKit / HoloLensForCV / Spectator View, I have:
1. The cameraToUnity transform, obtained by relating the Windows::Perception::Spatial SpatialCoordinateSystem objects of the camera frame and the Unity world
2. The cameraView transform, obtained from the MediaFrameReference properties in Windows::Media::Capture::Frames
3. The 3D coordinates from OpenCV in camera space

How I think it should work:
coordUnity = coordCamera * cameraToUnity * cameraView^-1
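For concreteness, here is a minimal numeric sketch of that chain in Python/NumPy. The matrices are made-up stand-ins, not values from the APIs above. One thing it highlights: the HoloLens/DirectX samples use the row-vector convention (point on the left, transforms applied left to right), in which the view inverse is applied *before* the camera-to-world transform; if the formula above is read that way, the two matrices are in the reverse order. In the column-vector convention the same chain reads cameraToUnity * cameraView^-1 * coordCamera.

```python
import numpy as np

def make_transform(rot_z_deg=0.0, translation=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous transform (column-vector convention):
    rotate about Z, then translate. Purely a made-up stand-in."""
    t = np.radians(rot_z_deg)
    m = np.eye(4)
    m[0, 0], m[0, 1] = np.cos(t), -np.sin(t)
    m[1, 0], m[1, 1] = np.sin(t), np.cos(t)
    m[:3, 3] = translation
    return m

# Hypothetical stand-ins for the two transforms listed above.
camera_view = make_transform(rot_z_deg=90, translation=(0, 0, 2))  # coordinate system -> view space
camera_to_unity = make_transform(translation=(1, 0, 0))            # camera coord system -> Unity world

p_camera = np.array([0.5, 0.0, 1.0, 1.0])  # homogeneous point in camera (view) space

# Column-vector convention: the rightmost matrix is applied first, so the
# view transform is undone *before* mapping into the Unity world.
p_unity = camera_to_unity @ np.linalg.inv(camera_view) @ p_camera

# Equivalent in the row-vector (DirectX) convention used by the HoloLens
# samples: the point comes first and each matrix is transposed.
p_unity_row = p_camera @ np.linalg.inv(camera_view.T) @ camera_to_unity.T
assert np.allclose(p_unity, p_unity_row)
```

A separate pitfall worth checking: Unity's world space is left-handed while the Windows perception APIs report right-handed coordinate systems, so a Z-flip (e.g. multiplying by diag(1, 1, -1, 1), or negating the z component) is typically needed somewhere in the chain. A missing or misplaced handedness flip, or mixing the two multiplication conventions, are both frequent causes of coordinates that look like nonsense.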

Something has gone wrong in this reasoning, because the resulting Unity coordinates don't make sense.
Does anyone have an idea?
