Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

How does HoloLens get the view transform for rendering?

Hi,
When we render, we use the view transform provided by HoloLens. I am wondering how HoloLens gets it.
By the way, when we render, we use stereo rendering, right?

Many thanks.

YL

Answers

  • AmerAmer ✭✭✭
    edited January 2017

    HoloLens uses your physical position at the start of the app as the world origin and provides the view matrix relative to that position. It creates that matrix when it starts spatial mapping and determines where you are relative to the room. It constantly adjusts this as it learns more about the room and as the spatial mapping becomes more accurate. As you walk around, it gives you the view matrix from that origin.

    Stereo instanced rendering should be the new default in Unity 5.5.x. It's a lot faster than multi-pass.

    http://www.redsprocketstudio.com/
    Developer | Check out my new project blog
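The origin-relative view matrix described above can be sketched with a standard look-at construction. This is a hypothetical stand-alone helper, not a HoloLens API; on the device, the matrices come from head tracking rather than from an explicit target point:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  norm(Vec3 v)          { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Row-major 4x4 matrix.
struct Mat4 { float m[16]; };

// Right-handed view matrix looking from `eye` toward `target`. It is the
// inverse of the camera's world transform: rotate into the camera basis,
// then translate the eye position back to the origin.
Mat4 lookAt(Vec3 eye, Vec3 target, Vec3 up) {
    Vec3 f = norm(sub(target, eye)); // forward
    Vec3 r = norm(cross(f, up));     // right
    Vec3 u = cross(r, f);            // orthonormal up
    return {{
         r.x,  r.y,  r.z, -dot(r, eye),
         u.x,  u.y,  u.z, -dot(u, eye),
        -f.x, -f.y, -f.z,  dot(f, eye),
         0.f,  0.f,  0.f,  1.f
    }};
}
```

A head at the world origin looking down -Z yields the identity view matrix; walking to another spot only changes the translation terms, which is the "view matrix from that origin" behaviour described above.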

  • Hi AmerAmer,
    Thanks for your high-level explanation. However, I want to know some details.
    For instance, when we put HoloLens on our head, we observe a hologram. If we slightly move the HoloLens but keep our head fixed, I find the hologram does not change (I am not sure whether others have the same observation), yet the view transform does change. So I am wondering whether HoloLens tracks our eyes, which could be treated as two cameras?

    Thanks.

    YL

  • AmerAmer ✭✭✭
    edited January 2017
    Unless you apply transforms to a hologram, it won't move: your view matrix changes, but the world matrix won't. The device provides both the world matrix and the view matrix. The eyes are just offsets applied for each screen, so that each eye sees the same hologram from slightly different angles to give you depth perception.

    It's possible that you've locked your holograms to the view matrix, so they move with your HoloLens. All the cameras point outward, and there is no eye tracking.

    http://www.redsprocketstudio.com/
    Developer | Check out my new project blog

  • Hi,
    I will use a figure to explain my question.
    In the figure, we treat the human eye as a camera and represent it with the human-eye camera coordinate system. The holographic camera is a virtual camera whose position and orientation in world space (i.e., its view transform) are provided by HoloLens.

    As we see the real world through HoloLens, we find that a real object (observed by our eyes) and the hologram (rendered by HoloLens) are consistent; namely, their positions and orientations are the same.

    Let's say we define a virtual cube, rendered by DirectX, at the same location and orientation as a real cube in world space. To reach this consistency, I think the holographic camera should observe the virtual cube the same way our eyes observe the real cube; namely, the view transform of the holographic camera should be the same as the view transform of the human-eye camera.

    This is just my speculation. If it is correct, how does HoloLens achieve this?

    Thanks.

    YL

  • Well, simply: HoloLens does not track your eyes, period. Also, don't confuse actual cameras with the view matrix; the view matrix is what is typically referred to as the camera in a virtual world.
    There is one view matrix that represents your head (the lenses on your head). That view matrix is modified for each eye to fake depth perception, usually in the shader (look at the sample vertex shader and you'll see a two-element view-matrix array). Based on the instance index, it applies the corresponding matrix; the geometry shader clones each instance with the second view matrix.

    When you place a cube in the real world, HoloLens uses spatial perception: its HPU processes sensor data and constructs a mesh that can be viewed and modified. Just as you would place a mesh in a virtual world, this spatial mesh exists in the same way, and when you place a cube in that world, it is rendered as if overlaid on the real space. There is really not a lot of magic here: it renders a "game level" just like a game renderer would; the difference is that spatial mapping provides the world and view coordinates.

    Also, you have no control over what the holographic sensors do; all you can do for now is accept their data and process it as you wish. I think there is some basic background you are missing here. I would invest some time going over how the lenses work, as well as how world coordinate systems and spatial mapping work:
    https://developer.microsoft.com/en-us/windows/holographic/holograms_230

    http://www.redsprocketstudio.com/
    Developer | Check out my new project blog
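The single-pass instanced trick described in the answer above can be sketched on the CPU side (names here are illustrative; on the device the selection happens in the HLSL vertex shader from SV_InstanceID, with the geometry shader routing each clone to its eye): the draw call doubles the instance count, and each instance picks its eye's view matrix from the two-element array by the parity of its instance id.

```cpp
#include <cassert>
#include <vector>

// One draw call, instanceCount * 2 GPU instances: even instance ids render
// the left eye (view matrix 0), odd ids the right eye (view matrix 1).
struct StereoInstance {
    int instanceId;
    int eyeIndex; // index into the two-element view-matrix array
};

std::vector<StereoInstance> expandForStereo(int instanceCount) {
    std::vector<StereoInstance> out;
    out.reserve(instanceCount * 2);
    for (int id = 0; id < instanceCount * 2; ++id) {
        out.push_back({id, id % 2}); // the shader derives this from the instance id
    }
    return out;
}
```

Drawing 3 logical instances issues 6 GPU instances alternating between eyes, which is why single-pass instanced rendering beats running the whole render pass once per eye.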
