Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Mapping pixel coordinates to real world objects

Hello community,

I keep trying, but I can't figure out what I am doing wrong. I have to confess that I am really new to the whole computer vision and holographic topic.
Let me explain what I am trying to do and then where my problem is.
I am building an app for the HoloLens that communicates with a deep learning object detection framework. After receiving the results, the HoloLens should transform the pixel coordinates of the detected objects, cast a ray in the direction of each object, and place a primitive cube at the hit point.
I found a project that seems similar to mine (https://blogs.sap.com/2016/06/09/annotating-the-world-using-microsoft-hololens/), so I used their code for the transformation. I had to build a bit around it because I use my own deep learning server, but the results I get from my server should be the same as the ones they get from Microsoft Cognitive Services (I receive the top, bottom, left, and right coordinates of the bounding boxes).
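
A minimal sketch of that first step, assuming (this is not stated explicitly in the thread) that the server returns the box edges in pixel coordinates of the captured photo; Detection and CenterPixel are hypothetical names, not part of any framework:

```csharp
using UnityEngine;

// Hypothetical shape of one detection as returned by the custom server:
// bounding-box edges in pixel coordinates of the captured image.
public struct Detection
{
    public float Top, Bottom, Left, Right;
}

public static class DetectionUtil
{
    // Center pixel of the bounding box; this is the pixel that later gets
    // unprojected into a world-space ray (see the sketch after the answers).
    public static Vector2 CenterPixel(Detection d)
    {
        return new Vector2((d.Left + d.Right) * 0.5f,
                           (d.Top + d.Bottom) * 0.5f);
    }
}
```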

My resulting cubes are really far off from the detected objects. I tried to figure it out on my own (I removed the scale-vector function from the other project because, if I understand it correctly, it maps all objects onto the size of the holographic view, but I also want to map objects that are detected outside of that view), yet I have no clue what I am doing wrong.

I have read many threads about what I think is nearly the same problem, but I didn't find any solutions.

I hope someone can explain to me what the best approach would be.

Kind regards
Matthias

Answers

  • CarlvH
    edited September 2017

    Hey

    Maybe take a look at this. The chapter "Pixel to Application-specified Coordinate System" should explain how to convert pixel coordinates to world rays, which can be used to perform a Physics.Raycast against the spatial map to get a world point (see the sketch after the answers below).
    Also keep in mind that you need to remember the camera position at the time the image was taken if there is a delay between capturing the image and receiving the result from your deep-learning network.

  • vertex
    edited October 2017

    Is it possible to map real-world coordinates to pixel coordinates?

  • I have the same problem. I want to click on the picture with the mouse and create a hologram in the 3D world, but the objects are not where I expect them.
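
To make CarlvH's answer concrete, here is a minimal sketch of the pixel-to-world conversion it describes, written for Unity on HoloLens. It assumes the camera-to-world and projection matrices were stored at the moment the photo was captured (for example via PhotoCaptureFrame.TryGetCameraToWorldMatrix and TryGetProjectionMatrix) and that the spatial mapping meshes carry colliders on a layer named "SpatialMapping"; the class, field, and method names are illustrative, and the sign conventions should be verified against the documentation chapter mentioned in the answer above.

```csharp
using UnityEngine;

public class DetectionPlacer : MonoBehaviour
{
    // Store these at the moment the photo is captured (e.g. from
    // PhotoCaptureFrame.TryGetCameraToWorldMatrix / TryGetProjectionMatrix),
    // so that a slow deep-learning round trip doesn't use a stale head pose.
    public Matrix4x4 cameraToWorldAtCapture;
    public Matrix4x4 projectionAtCapture;
    public int imageWidth = 1280;   // resolution of the captured photo
    public int imageHeight = 720;

    // Converts an image pixel (origin top-left, y pointing down) into a
    // world-space ray through that pixel.
    public Ray PixelToWorldRay(Vector2 pixel)
    {
        // Pixel -> [0,1] image coordinates, flipping y so it points up.
        Vector2 zeroToOne = new Vector2(pixel.x / imageWidth,
                                        1.0f - (pixel.y / imageHeight));
        // [0,1] -> [-1,1] projected image space.
        Vector2 projected = (zeroToOne * 2.0f) - new Vector2(1.0f, 1.0f);

        // Undo the projection to get a point in camera space.
        Vector3 cameraSpacePoint = UnProjectVector(projectionAtCapture,
            new Vector3(projected.x, projected.y, 1.0f));

        // Camera position and a second point on the ray, both in world space.
        Vector3 origin = cameraToWorldAtCapture.MultiplyPoint(Vector3.zero);
        Vector3 through = cameraToWorldAtCapture.MultiplyPoint(cameraSpacePoint);

        // Depending on the matrix conventions you may need to negate the
        // camera-space z; verify against the documentation mentioned above.
        return new Ray(origin, (through - origin).normalized);
    }

    // Places a primitive cube where the ray hits the spatial mapping mesh.
    public void PlaceCubeAtPixel(Vector2 pixel)
    {
        Ray ray = PixelToWorldRay(pixel);
        int spatialLayer = LayerMask.GetMask("SpatialMapping"); // assumed layer name

        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, 15.0f, spatialLayer))
        {
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.position = hit.point;
            cube.transform.localScale = Vector3.one * 0.1f; // 10 cm cube
        }
    }

    // Inverts the projection for a single point, ignoring the w row; this is
    // good enough to recover a ray direction.
    private static Vector3 UnProjectVector(Matrix4x4 proj, Vector3 to)
    {
        Vector3 from = Vector3.zero;
        Vector4 axisX = proj.GetRow(0);
        Vector4 axisY = proj.GetRow(1);
        Vector4 axisZ = proj.GetRow(2);
        from.z = to.z / axisZ.z;
        from.y = (to.y - (from.z * axisY.z)) / axisY.y;
        from.x = (to.x - (from.z * axisX.z)) / axisX.x;
        return from;
    }
}
```

The reverse direction asked about in the answers (mapping a world point to pixel coordinates) would, in principle, use the same two matrices the other way around: transform the world point by the inverse of the camera-to-world matrix, apply the projection, and rescale from the [-1, 1] projected space back to pixel coordinates.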
