Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is through our Mixed Reality Developer Program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Locatable camera intrinsics given that the images are inverted

We use the camera intrinsics embedded in the video frames for each HoloLens. Since the intrinsics map between 3D features and pixel coordinates, it seems we would need to adjust them when we invert the images. Since everyone will run into this problem, I figured I'd ask before doing the math and possibly reinventing the wheel. What are users expected to do with the embedded intrinsics after inverting the images? Are the published intrinsics valid for the inverted image, or for the image as given?
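
To make the question concrete, here is a minimal sketch of the adjustment I would expect to need, assuming a standard pinhole model where pixel row v increases downward and the embedded intrinsics describe the un-flipped image. The matrix values are made up for illustration, not actual HoloLens calibration data.

```python
import numpy as np

def flip_intrinsics_vertically(K: np.ndarray, image_height: int) -> np.ndarray:
    """Adjust a 3x3 pinhole intrinsic matrix for a vertical (top-bottom) flip.

    Assumes K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] with pixel row v
    increasing downward and pixel centers at integer coordinates. A feature
    that projected to row v appears at row (H - 1 - v) after the flip.
    (If your convention puts pixel centers at half-integers, use H - v.)
    """
    H = image_height
    # The flip in pixel space is v' = (H - 1) - v, so left-multiply K by F.
    F = np.array([[1.0,  0.0, 0.0],
                  [0.0, -1.0, H - 1.0],
                  [0.0,  0.0, 1.0]])
    K_flipped = F @ K
    # Result: fx and cx are unchanged, fy is negated, and cy becomes
    # (H - 1) - cy. To keep fy positive under the same projection
    # convention, fold the sign change into the extrinsics instead
    # (negate the camera-frame Y axis).
    return K_flipped

# Example with made-up numbers (not actual HoloLens calibration values):
K = np.array([[1500.0,    0.0, 672.0],
              [   0.0, 1500.0, 378.0],
              [   0.0,    0.0,   1.0]])
print(flip_intrinsics_vertically(K, 756))
```

If that is the right adjustment, the remaining choice is just whether to carry the negative fy or push the reflection into the extrinsics, which is why I'd like to know which image the published intrinsics refer to.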
