Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Converting 2D coordinates into 3D

Hello fellow HoloLens developers,

I want to convert 2D coordinates from a picture sent from the HoloLens to another PC. The user of the PC picks a spot in the picture and sends the coordinates back to the HoloLens, and the HoloLens should place an object at that spot.
My problem is figuring out how the 3D coordinates are measured. Currently I am using the HolographicApp demo content with the spinning cube. I added 0.25f to each of the x and y coordinates of its position. My intention was to check whether it moves by one quarter of the viewport; for the Y coordinate that seems to be right, but not for X.

Let the picture speak for itself. I expected the cube to appear at the blue circle.

I hope that makes sense to you guys.
Thank you very much in advance for your help.

Cheers


Answers

  • stepan_stulov
    edited March 2017

    OK, in addition to my previous post, which covered only one direction: the other direction is very similar. You just use one of the inverse methods, such as:
    https://docs.unity3d.com/ScriptReference/Camera.WorldToViewportPoint.html
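Since the question uses the DirectX template rather than Unity, the math behind that Unity method can be sketched by hand. A minimal, hypothetical example (left-handed camera space, camera looking down +Z; struct and function names are mine, not from any HoloLens API), showing how a camera-space point maps to normalized [0, 1] viewport coordinates:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Project a camera-space point (left-handed, camera at the origin looking
// down +Z) to normalized viewport coordinates in [0, 1], analogous to the
// x and y that Unity's Camera.WorldToViewportPoint returns.
// fovY is the vertical field of view in radians; aspect = width / height.
Vec2 cameraToViewport(Vec3 p, float fovY, float aspect) {
    float t = std::tan(fovY * 0.5f);          // half-height of the view plane at z = 1
    float ndcX = p.x / (p.z * t * aspect);    // [-1, 1] across the viewport width
    float ndcY = p.y / (p.z * t);             // [-1, 1] across the viewport height
    return { ndcX * 0.5f + 0.5f, ndcY * 0.5f + 0.5f };
}
```

Note that a fixed world offset like 0.25 m does not map to a fixed fraction of the viewport: the viewport offset shrinks with depth, and for the same world offset the horizontal shift is smaller than the vertical one by the aspect ratio. With illustrative numbers (a 30° vertical FOV, 16:9 aspect, point at 2 m; the HoloLens's actual FOV differs), a 0.25 m offset moves the point roughly a quarter of the viewport vertically but noticeably less horizontally, which would explain Y looking right while X does not.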

    Building the future of holographic navigation. We're hiring.

  • stepan_stulov
    edited March 2017

    Also, now I'm wondering: are you trying to map the HoloLens's viewport onto a picture taken by the HoloLens's camera? I believe this is where the discrepancy can come from. The HoloLens's camera "sees" wider than what you see on its rendering displays, I'm pretty sure; these are two separate "visions". A normalized coordinate on the entire picture is not the same as a normalized coordinate on the part of the picture that corresponds to the edges of the viewport. So when the user selects a point, you need to calculate that point's coordinates relative not to the entire picture but to the red rectangle / sub-picture. To calculate that you need to know the coordinates of the corners, which you will either need to ask the user to specify or find somehow. But there is a problem with this: as far as I know, taking a photo doesn't include holograms as seen by the HoloLens wearer, so you can't really place markers to be found by either a human or image processing. Or is this not a picture taken by the HoloLens's camera, but rather a render / snapshot of the mixed reality capture?
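The remapping described above is only a couple of lines once the sub-rectangle is known. A hypothetical helper (names are mine), assuming the viewport sub-rectangle's corners (u0, v0) and (u1, v1) have been measured or calibrated as fractions of the full picture:

```cpp
#include <cassert>
#include <cmath>

struct Norm2 { float u, v; };

// Remap a normalized point on the full camera picture to a point
// normalized against the viewport sub-rectangle inside that picture.
// (u0, v0) and (u1, v1) are the sub-rectangle's corners, also expressed
// as fractions of the full picture.
Norm2 pictureToViewport(Norm2 p, float u0, float v0, float u1, float v1) {
    return { (p.u - u0) / (u1 - u0),
             (p.v - v0) / (v1 - v0) };
}
```

For example, if the viewport occupies the central half of the picture (corners at 0.25 and 0.75), the picture's center still maps to the viewport's center, but a point at the sub-rectangle's corner maps to the viewport's corner rather than to a quarter of the way in.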


  • stepan_stulov

    OK, there seems to be a very good read, with examples, on how to transform between pictures taken by the LocatableCamera and in-app 3D coordinates:

    https://developer.microsoft.com/en-us/windows/holographic/locatable_camera


  • edited March 2017

    @stepan_stulov said:
    OK, there seems to be a very good read, with examples, on how to transform between pictures taken by the LocatableCamera and in-app 3D coordinates:

    https://developer.microsoft.com/en-us/windows/holographic/locatable_camera

    Hi Stepan,
    thank you very much for your answers. I am using DirectX with the simplecommunication sample + mixedreality sample from Microsoft's GitHub page. The HoloLens streams live video to an app, and the user selects spots on the received video. The 2D coordinates should be sent back, and a marker should be displayed at the exact position.

    Edit:

    I think it is not as simple as I thought. I came up with these steps, please correct me if I am wrong:

    1. Raycast through the selected spot and find the collision with the spatial mesh; from the hit I can get the distance.
    2. Use some kind of projection with that distance to calculate the position of the marker.

    But how? :dizzy:
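The two steps above can be sketched as follows. This is a hypothetical outline (names are mine): step 1 turns the normalized viewport point into a camera-space ray to feed into the spatial-mapping raycast, and step 2 places the marker at the reported hit distance along that ray. The actual mesh raycast and the camera-to-world transform are left to the HoloLens APIs:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Step 1: turn a viewport point in [0, 1]^2 into a ray direction in
// camera space (left-handed, camera looking down +Z), not yet normalized.
// fovY is the vertical field of view in radians; aspect = width / height.
Vec3 viewportToRayDir(float u, float v, float fovY, float aspect) {
    float t = std::tan(fovY * 0.5f);
    return { (u * 2.0f - 1.0f) * t * aspect,   // x on the z = 1 view plane
             (v * 2.0f - 1.0f) * t,            // y on the z = 1 view plane
             1.0f };
}

// Step 2: once the spatial-mapping raycast reports a hit at distance d,
// the camera-space position is d along the normalized ray. Transform the
// result by the camera-to-world matrix to get the marker's world position.
Vec3 pointAlongRay(Vec3 dir, float d) {
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    return { dir.x / len * d, dir.y / len * d, dir.z / len * d };
}
```

With this split, no hand-built projection matrix is needed for step 2: the distance from the mesh hit fixes the point along the ray, and only the camera pose at the moment the frame was captured (which the locatable camera article linked above explains how to obtain) is needed to move it into world space.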
