Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

A HoloLens app design question

My customer is asking whether we could build an app to train people's physical movements.
The trainee moves both hands into the proper position, and then the HoloLens shows a positive animation (turning some area from red to green).
Currently I am thinking of using Unity's "Network Manager" to accept an external signal.
Questions:
1) Any suggestions on the above situation?
2) Has anyone used Unity's "Network Manager" in a HoloLens app? How well did it work?
3) Is there any example of a HoloLens app accepting a network signal directly (for example, an HTTP command with a parameter) and triggering some event? (See the sketch below.)

Thanks a lot!
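For illustration, here is a minimal sketch of one way an app could accept an HTTP command and react to it on Unity's main thread. The port, query format, and class name are assumptions for this example, not from any official sample; HttpListener works in the Unity editor and on desktop, while a UWP (HoloLens) build may need Windows.Networking.Sockets instead.

using System.Net;
using System.Threading;
using UnityEngine;

public class CommandListener : MonoBehaviour
{
    HttpListener listener;
    Thread listenThread;
    volatile string pendingArea;   // written on the listener thread, read on the main thread

    void Start()
    {
        listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/");   // hypothetical port; may need URL ACL rights on desktop
        listener.Start();
        listenThread = new Thread(Listen) { IsBackground = true };
        listenThread.Start();
    }

    void Listen()
    {
        try
        {
            while (listener.IsListening)
            {
                HttpListenerContext context = listener.GetContext();   // blocks until a request arrives
                pendingArea = context.Request.QueryString["area"];     // e.g. GET /trigger?area=left
                context.Response.StatusCode = 200;
                context.Response.Close();
            }
        }
        catch (HttpListenerException)
        {
            // Listener was stopped; let the thread exit.
        }
    }

    void Update()
    {
        // Unity objects must be touched on the main thread, so the event fires here.
        if (pendingArea != null)
        {
            Debug.Log("Received command for area: " + pendingArea);
            // e.g. turn the matching area from red to green here.
            pendingArea = null;
        }
    }

    void OnDestroy()
    {
        if (listener != null) listener.Stop();
    }
}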

Answers

  •

    Why not just use the built-in hand tracking through the InteractionManager? Will the hands not be visible the entire time?
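
    For reference, a rough sketch of what reading tracked hand positions through the built-in InteractionManager could look like (Unity 2017-era UnityEngine.XR.WSA.Input API; the class name is just for illustration):

    using UnityEngine;
    using UnityEngine.XR.WSA.Input;

    public class HandPositionTracker : MonoBehaviour
    {
        void OnEnable()
        {
            InteractionManager.InteractionSourceUpdated += OnSourceUpdated;
        }

        void OnDisable()
        {
            InteractionManager.InteractionSourceUpdated -= OnSourceUpdated;
        }

        void OnSourceUpdated(InteractionSourceUpdatedEventArgs args)
        {
            if (args.state.source.kind != InteractionSourceKind.Hand)
                return;

            Vector3 handPosition;
            if (args.state.sourcePose.TryGetPosition(out handPosition))
            {
                // Compare handPosition against the target zone and flip it
                // from red to green when both hands are in place.
                Debug.Log(args.state.source.id + " at " + handPosition);
            }
        }
    }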

  •

    Two hands need to hold a stick, and we need to calculate whether the direction the stick points matches the right area. So only one hand is guaranteed to be visible throughout the process.
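
    If both hands do happen to be tracked at the same moment, the stick direction could be estimated from the two hand positions, roughly like this (hypothetical helper; the alignment threshold would need tuning):

    using UnityEngine;

    public static class StickAim
    {
        // Returns true when the line through the two hands points at the target
        // within the given tolerance (1.0 = perfectly aligned).
        public static bool IsPointingAt(Vector3 leftHand, Vector3 rightHand,
                                        Vector3 target, float tolerance = 0.95f)
        {
            Vector3 stickDirection = (rightHand - leftHand).normalized;
            Vector3 toTarget = (target - leftHand).normalized;
            return Vector3.Dot(stickDirection, toTarget) > tolerance;
        }
    }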

  •

    I don't believe that is possible at this time as I don't think we can define custom sources for the interaction manager quite yet. It might exist in the HoloLens, but we do not have access to the required documentation or example code to train the source recognizer on a new object (the stick). You would most likely have to implement a computer vision library outside of the HoloLens or Unity SDK, or go back to have the object itself track position (which may be just as complex at this point). I haven't exactly looked for it, but I haven't come across any documentation that would assist with connecting and tracking an external device.

    You may need to wait for more resources and access to more APIs to become available unless you plan to write some pretty complex systems.

  • james_ashley

    @Wayne,

    I'm reading between the lines a lot, but it sounds like you are asking about using an external camera, like a Kinect, to communicate with the HoloLens.

    Network Manager may work, but the tricky part will be getting the lag down to a reasonable level -- for physical rehab or sports training, probably into the 20-30 ms range. It will take some experimentation to get this right, and it also depends on your hardware. Your camera will also add some lag for the motion analysis.

    The HoloLens 240 tutorial has a great example of network sharing, but you may have to dig through the prefab code to find it.

    James Ashley
    VS 2017 v5.3.3, Unity 2017.3.0f3, MRTK 2017.1.2, W10 17063
    Microsoft MVP, Freelance HoloLens/MR Developer
    www.imaginativeuniversal.com
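
    As a rough sketch of the networking piece (assuming the external PC streams hand or joint positions as plain "x,y,z" UDP packets; the port, packet format, and class name are made up for illustration and are not from the 240 tutorial's sharing sample):

    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading;
    using UnityEngine;

    public class ExternalCameraReceiver : MonoBehaviour
    {
        // UdpClient works in the editor/desktop; a UWP (HoloLens) build may need
        // Windows.Networking.Sockets.DatagramSocket instead.
        UdpClient client;
        Thread receiveThread;
        volatile bool running;
        Vector3 latestPosition;
        readonly object positionLock = new object();

        void Start()
        {
            client = new UdpClient(11000);   // hypothetical port
            running = true;
            receiveThread = new Thread(Receive) { IsBackground = true };
            receiveThread.Start();
        }

        void Receive()
        {
            IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
            try
            {
                while (running)
                {
                    byte[] data = client.Receive(ref remote);              // blocks until a packet arrives
                    string[] parts = Encoding.UTF8.GetString(data).Split(',');
                    if (parts.Length != 3) continue;                       // ignore malformed packets
                    Vector3 p = new Vector3(float.Parse(parts[0]),
                                            float.Parse(parts[1]),
                                            float.Parse(parts[2]));
                    lock (positionLock) { latestPosition = p; }
                }
            }
            catch (SocketException)
            {
                // Socket was closed while blocked in Receive; let the thread exit.
            }
        }

        void Update()
        {
            Vector3 p;
            lock (positionLock) { p = latestPosition; }
            // Drive the training feedback (e.g. red -> green) from the streamed position.
            transform.position = p;
        }

        void OnDestroy()
        {
            running = false;
            if (client != null) client.Close();
        }
    }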

  •

    @james_ashley said:
    @Wayne,

    I'm reading between the lines a lot, but it sounds like you are asking about using an external camera, like a Kinect, to communicate with the HoloLens.

    Network Manager may work, but the tricky part will be getting the lag down to a reasonable level -- for physical rehab or sports training, probably into the 20-30 ms range. It will take some experimentation to get this right, and it also depends on your hardware. Your camera will also add some lag for the motion analysis.

    The HoloLens 240 tutorial has a great example of network sharing, but you may have to dig through the prefab code to find it.

    Ah, I didn't interpret his question that way. Integrating a Kinect camera seems rather interesting. Good job reading between the lines here.

    Wayne,

    You might get some help and support if you started an open-source project for the external camera/networking piece. This seems like something people could use in a wide range of apps if perfected.

  • DiegoV
    @james_ashley,
    You are right. The time between two computers running the service is around 150-200 ms with XSockets. I do not think getting down to 20-30 ms is possible with a gateway server (like shareservice.exe).
  •

    @Wayne, as you already know, HoloLens provides high-level constructs and APIs instead of raw depth data. Trying @james_ashley's suggestion seems like a good way to experiment.
