Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

What coordinate space is the InteractionManager's position in?

I'm having trouble understanding the Vector3 that comes back from the InteractionManager when receiving hand coordinates. The position does not seem related at all to the player's position as represented by Camera.main.transform.position, and it definitely doesn't seem to be in meters. An example coordinate is (-6.6, 2.1, -7.5).

The x coordinate seems to range between 0 and -10 and appears to depend on which direction I am facing. I am unsure how to interpret y or z, but y seems loosely correlated with world y space (though the value seems to be resetting/regressing every few moments).

Is there a conversion to world space that I'm missing?

I would appreciate any help.

Answers

  •

    @mikerz Total speculation on my part, since I haven't started working with the InteractionManager yet, but looking at the docs it sounds like there are a number of events that fire (like SourcePressed and SourceReleased), so perhaps the parameters coming back are intended to represent relative position changes?

    Since the Gesture Frame is distinct and different from the Display Frame, maybe the "raw" position and "velocity" data only relates to changes within the gesture frame (which might not reflect world coordinates).

    Windows Holographic User Group Redmond

    WinHUGR.org | @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  •

    Thanks for the docs link, I had seen that earlier but couldn't find it again when I was specifically searching for InteractionManager documentation. I haven't really seen much about the "Gesture Frame" but I think you're correct in that the position is relative to it.

    My primary inspiration has been the HandInput class found in the GalaxyExplorer project: https://github.com/Microsoft/GalaxyExplorer/blob/master/Assets/Scripts/Input/HandInput.cs

    It has a public interface which can be queried to check if any hand is visible in the field of view, and in its implementation it is used to activate taps. It also queries for and stores the hand position -- this is the data I need to be able to interpret. The GalaxyExplorer project does not seem to use the position for anything, and the hand state is not publicly accessible.
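
    For reference, the pattern a HandInput-style class uses can be sketched roughly like this (a simplified sketch from memory of the 2016-era UnityEngine.VR.WSA.Input API, not the actual GalaxyExplorer code; the class and member names here are illustrative): subscribe to the InteractionManager events, track hand sources by id, and expose the latest reported position.

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.VR.WSA.Input;

    // Sketch of a HandInput-style tracker; names are illustrative.
    public class SimpleHandTracker : MonoBehaviour
    {
        // Latest known position per detected hand source id.
        private readonly Dictionary<uint, Vector3> handPositions = new Dictionary<uint, Vector3>();

        public bool IsHandVisible { get { return handPositions.Count > 0; } }

        void Awake()
        {
            InteractionManager.SourceDetected += OnSourceChanged;
            InteractionManager.SourceUpdated += OnSourceChanged;
            InteractionManager.SourceLost += OnSourceLost;
        }

        private void OnSourceChanged(InteractionSourceState state)
        {
            Vector3 pos;
            // Only track hand sources that actually report a position.
            if (state.source.kind == InteractionSourceKind.Hand &&
                state.properties.location.TryGetPosition(out pos))
            {
                handPositions[state.source.id] = pos;
            }
        }

        private void OnSourceLost(InteractionSourceState state)
        {
            handPositions.Remove(state.source.id);
        }
    }
    ```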

    I would LOVE to see something like the implementation of the Navigation Gesture; I imagine it would at least help to answer the question of converting between display frame and gesture frame.
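
    The navigation gesture itself is exposed through GestureRecognizer rather than InteractionManager. A minimal sketch (again from memory of the 2016-era API): the NavigationUpdated event delivers a normalizedOffset whose components run from -1 to 1 relative to where the gesture started, which is essentially the "gesture frame" rather than a world-space position.

    ```csharp
    using UnityEngine;
    using UnityEngine.VR.WSA.Input;

    public class NavigationSketch : MonoBehaviour
    {
        private GestureRecognizer recognizer;

        void Awake()
        {
            recognizer = new GestureRecognizer();
            recognizer.SetRecognizableGestures(GestureSettings.NavigationX |
                                               GestureSettings.NavigationY |
                                               GestureSettings.NavigationZ);
            // normalizedOffset is relative to where the navigation gesture
            // began, each component in [-1, 1] -- not a world-space position.
            recognizer.NavigationUpdatedEvent += (source, normalizedOffset, headRay) =>
            {
                Debug.Log("Navigation offset: " + normalizedOffset);
            };
            recognizer.StartCapturingGestures();
        }
    }
    ```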

  •

    "I would LOVE to see something like the implementation of the Navigation Gesture"

    @mikerz have you taken a look at the Academy 211 tutorial?


  •
    edited June 2016

    It seems like it's relative to the scene origin rather than relative to the camera.

    I just held my hand out in front of me and kept it in the same spot while rolling around in my chair, and the values moved relative to the direction I was moving.

    `
    void InteractionManager_SourceUpdated(InteractionSourceState state)
    {
        Vector3 handPosition;
        // Only handle hand sources that report a position.
        if (state.source.kind == InteractionSourceKind.Hand && state.properties.location.TryGetPosition(out handPosition))
        {
            // handPosition already appears to be in world space (relative to
            // the scene origin), so adding the camera position offsets it twice.
            Vector3 dposition = Camera.main.transform.position + handPosition;
            QueuedAction newAction = new QueuedAction { actionType = QueuedActionType.HandMoved, id = state.source.id, position = dposition, timestamp = Time.time };
            int handIndx = GetRegisteredHandIndex(state.source.id);
            Debug.Log(dposition + " " + handPosition + " handID: " + state.source.id + " handIndex: " + handIndx);
            lastHandPosition = handPosition;

            // Drag the focused object by the frame-to-frame hand delta.
            if (FocusedObject != null && dragging)
            {
                delta = handPosition - lastPos;
                destPos = delta * 5;
                FocusedObject.transform.position += destPos;
                //FocusedObject.transform.position = Vector3.Lerp(FocusedObject.transform.position, FocusedObject.transform.position + destPos, Time.deltaTime * 15);
            }

            // Mirror each hand's position onto its drag visual and share it.
            if (dragVisual != null && handIndx == 0)
            {
                dragVisual.transform.position = handPosition;
                RemoteHeadManager.Instance.lhandPos = handPosition;
            }

            if (dragVisual2 != null && handIndx == 1)
            {
                dragVisual2.transform.position = handPosition;
                RemoteHeadManager.Instance.rhandPos = handPosition;
            }

            lock (actionQueue)
            {
                actionQueue.Enqueue(newAction);
            }
            lastPos = handPosition;
        }
    }`
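
    If that's right and handPosition is already in Unity world space, then converting the other way (world space to a head-relative position) is just a transform, e.g.:

    ```csharp
    // Sketch: express a world-space hand position in the head's local frame.
    // Result: x = right of head, y = above head, z = in front of head (meters).
    Vector3 headRelative = Camera.main.transform.InverseTransformPoint(handPosition);
    ```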
    
  •

    @LeefromSeattle Correct, it's relative to where you were last in your scene. Do let us know if you need help with the implementation in the Gesture 211 Academy course.
