
Shared experience with Unity

We want to develop a shared experience in Unity that users can connect to from anywhere in the world. As far as I understand, this functionality works by running an .exe file that starts a server on a PC.

Our application contains CAD models whose data is restricted.

So my question is: what exactly is uploaded to the server for sharing? Can we track what is transmitted? Does it include any information about the CAD models, or are only the objects' transforms shared?

Is it possible to use a different server option, such as a cloud service like AWS or our own server, and still use the sharing libraries from the MixedRealityToolkit?

How can we implement a shared experience without transmitting restricted CAD data?

Answers

  • You have a lot of options here.

    Usually, shared experiences are about multiple users seeing the same holograms in the same place in the physical world via a common coordinate system.

    That common coordinate system is usually established by using the HoloLens's ability to create one or more world anchors representing physical positions/orientations and 'sharing' those anchors across devices. One device creates an anchor, exports it, and sends it over a network; the other devices import it. Each device can then determine the position of the anchor in its own local coordinate system and map to and from that system, so that 'shared' content is seen by all participants in the same place in the real world.
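
    As a rough sketch of what that anchor exchange looks like in code, assuming the legacy UnityEngine.XR.WSA APIs of this era (WorldAnchorTransferBatch) and a hypothetical SendToPeers method standing in for whatever networking you use:

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.XR.WSA;
    using UnityEngine.XR.WSA.Sharing;

    public class AnchorSharer : MonoBehaviour
    {
        private readonly List<byte> exportedData = new List<byte>();

        // On the device that created the anchor: serialize it for the network.
        public void ExportAnchor(WorldAnchor anchor)
        {
            var batch = new WorldAnchorTransferBatch();
            batch.AddWorldAnchor("sharedAnchor", anchor);
            WorldAnchorTransferBatch.ExportAsync(batch, OnExportDataAvailable, OnExportComplete);
        }

        private void OnExportDataAvailable(byte[] data)
        {
            exportedData.AddRange(data); // export data arrives in chunks
        }

        private void OnExportComplete(SerializationCompletionReason reason)
        {
            if (reason == SerializationCompletionReason.Succeeded)
            {
                SendToPeers(exportedData.ToArray()); // hypothetical: your own transport
            }
        }

        // On the receiving devices: import the bytes and lock an object to the anchor.
        public void ImportAnchor(byte[] data, GameObject objectToAnchor)
        {
            WorldAnchorTransferBatch.ImportAsync(data,
                (reason, batch) =>
                {
                    if (reason == SerializationCompletionReason.Succeeded)
                    {
                        batch.LockObject("sharedAnchor", objectToAnchor);
                    }
                });
        }

        private void SendToPeers(byte[] data)
        {
            // hypothetical: send over your chosen networking layer
        }
    }
    ```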

    If your users aren't in the same physical space, then there's no need for this sharing of world anchors and real-world positions become less important. Objects can still be positioned relative to some 'parent' so that their relative positions are the same on all devices, even if their real-world positions differ.
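
    For that non-colocated case, a minimal sketch: each device places a shared 'root' wherever it likes in its own space, and all content is positioned in the root's local coordinates so the relative layout matches everywhere:

    ```csharp
    using UnityEngine;

    public class SharedRootPlacement : MonoBehaviour
    {
        // Each device positions this root independently (e.g. on a table in
        // front of the user); only local offsets from it need to be shared.
        public Transform sharedRoot;

        public GameObject PlaceContent(GameObject prefab, Vector3 localPosition, Quaternion localRotation)
        {
            GameObject item = Instantiate(prefab, sharedRoot);
            item.transform.localPosition = localPosition; // identical on every device
            item.transform.localRotation = localRotation;
            return item;
        }
    }
    ```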

    Beyond this, how much you 'share' over the network is up to you. For instance, if users interact to create/delete/move/rotate/scale objects then you can choose to share that for some/all objects over the network to keep everyone's scene in sync. Equally, it's your choice as to whether you share the positions of your users over the network and provide some UX to represent remote and/or co-located users.
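
    For example, representing remote users might look something like the sketch below, where BroadcastPose and OnRemotePose are hypothetical hooks into whichever networking layer you choose:

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;

    public class UserPresence : MonoBehaviour
    {
        public GameObject avatarPrefab;

        private readonly Dictionary<int, GameObject> avatars = new Dictionary<int, GameObject>();
        private float nextSendTime;

        private void Update()
        {
            if (Time.time >= nextSendTime)
            {
                nextSendTime = Time.time + 0.1f; // ~10 pose updates per second
                Transform head = Camera.main.transform;
                BroadcastPose(head.position, head.rotation); // hypothetical transport
            }
        }

        // Called by your networking layer when a remote user's pose arrives.
        public void OnRemotePose(int userId, Vector3 position, Quaternion rotation)
        {
            GameObject avatar;
            if (!avatars.TryGetValue(userId, out avatar))
            {
                avatar = Instantiate(avatarPrefab);
                avatars[userId] = avatar;
            }
            avatar.transform.SetPositionAndRotation(position, rotation);
        }

        private void BroadcastPose(Vector3 position, Quaternion rotation)
        {
            // hypothetical: serialize and send over your chosen networking layer
        }
    }
    ```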

    But, it's likely that you'd have some pieces of interaction (e.g. user 2 opens a menu) that don't share across devices (e.g. user 3 perhaps doesn't care that user 2 has opened a menu and doesn't expect to see it).

    How you go about doing the networking is up to you. Within the Mixed Reality Toolkit, I think there are two implementations - one makes use of Unity's UNET networking option and the other makes use of a 'sharing service' executable acting as a central server.

    Those approaches have differences, and I don't think (last time I looked) that the toolkit has attempted to put an abstraction over the top of them. The differences come through both in functionality and in some of the helper classes, which do things like 'spawning' an object across all devices participating in sharing and 'syncing' data on those objects (including transforms) across devices.
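
    To give a flavour of the UNET-style approach (this is illustrative, not the toolkit's actual scripts): a NetworkBehaviour syncs a small amount of state from the server to every client, for example:

    ```csharp
    using UnityEngine;
    using UnityEngine.Networking;

    // Attach to a networked prefab registered with UNET and spawned via
    // NetworkServer.Spawn(); only the SyncVar fields cross the network.
    public class SharedObjectSync : NetworkBehaviour
    {
        [SyncVar] private Vector3 syncedPosition;
        [SyncVar] private Quaternion syncedRotation;

        [Command] // invoked by the owning client, runs on the server
        public void CmdMove(Vector3 position, Quaternion rotation)
        {
            syncedPosition = position;
            syncedRotation = rotation;
        }

        private void Update()
        {
            if (!hasAuthority)
            {
                // non-owning devices follow the server's state
                transform.position = syncedPosition;
                transform.rotation = syncedRotation;
            }
        }
    }
    ```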

    But I think it's fair to say that in both cases the data transferred in the toolkit examples is typically small messages such as 'object of type X has been created' or 'object ID Y has been moved'. It doesn't contain much about the internals of the models/meshes themselves; that wouldn't be sent over the network unless you had some reason, and wrote code, to do it.
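
    In other words, the kind of payload crossing the network is closer to this illustrative struct than to any mesh data; as long as you never serialize the geometry yourself, restricted CAD data stays on each device:

    ```csharp
    using System;
    using UnityEngine;

    // Illustrative only: an 'object moved' message carries an ID for a model
    // that already exists on every device, plus a transform. No vertices,
    // materials or other CAD internals are included.
    [Serializable]
    public struct ObjectMovedMessage
    {
        public int objectId;
        public Vector3 position;
        public Quaternion rotation;
        public Vector3 scale;
    }
    ```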

    So I think that if you wanted to use some other networking technology, you might borrow some of the 'higher-level' pieces from scripts in the toolkit, but you'd perhaps have to re-plumb them on top of the networking technology you want to use: more in the way of using the toolkit as a blueprint than as a framework where you can plug in a new network implementation.

    I hope that helps, sorry if I've misinterpreted what you're trying to do & for any mistakes.
