Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Head pose in the DX and Unity APIs

Hi, I just installed VS2017 with UWP support, and created an example using the Holographic DX11 template. I am trying to understand the generated code, and I would appreciate it if you could correct my misunderstanding.

Based on my experience with ARKit and ARCore, I was expecting some kind of frame update() callback with the timestamp, frame, and camera parameters. But in the generated Update() method, the closest thing I can find is "prediction.Timestamp.TargetTime", which is not the same as the current time. I suppose I could subtract "prediction.Timestamp.PredictionAmount" from it, but I feel like there is a more straightforward path.

Similarly, I was expecting the camera pose, but the prediction's CameraPoses is a list, and I am not sure how many cameras I should expect. If I just want to use this for tracking the head pose in the world coordinate frame (let's say seated), should I just use the first camera in the list, assuming that it is the left camera? When I look at the Unity approach, I am tempted to just treat the "MainCamera" as the head and use its pose, but this would ignore the subtleties of the multi-camera setup on the HoloLens (which I am planning to target). For just getting started and understanding the head tracking offered by Windows, I would rather stay away from Unity (with its left-handed coordinate frame).

Thanks for reading

Answers

  • pstueven ✭✭
    edited March 2018

    Hey @henry10210 ,
    I am not really sure what you are trying to achieve with the time. What exactly is your question?

    About the camera: I have no experience using DX with the HoloLens, but in Unity you can use Camera.main.transform to get the current position and orientation of the HoloLens; its forward property is the direction of gaze.
    You don't need to use the individual cameras (and actually I don't know how to do that in Unity) to use head tracking. Unity/the HoloLens updates the main camera's position automatically.


    Thanks for the answer pstueven. I know that if I use Unity/Unreal, I will get a pose (6 DOF) of the combined solution from the main camera. But I want the details: is that pose supposed to be the halfway point between my eyes? It's not clear to me, so I thought I'd just look at the poses of all the cameras returned by the DX API. I don't know which one is left vs. right, but I thought that at least the poses of the individual cameras in the DX API would be less open to interpretation than the Unity API. The timestamps are required to plot the pose as a function of time, because after all, there is no guarantee of when I will receive the pose, or at what rate.

    This is of course assuming that the "cameras" list in the DX API actually refers to the cameras shown in the HoloLens teardown (i.e. not the center depth camera, but the two side cameras). If the HoloLens pose in the local world coordinate frame actually refers to the pose of the center depth camera, then I don't understand why the DX API would return a list of cameras to me; if I were writing this API, I would just put the HoloLens body frame right where the depth camera sensor is.

    Just looking for clarification. Thanks
