The Mixed Reality Forums here are no longer being used or maintained.
There are a few other places we would like to direct you to for support, both from Microsoft and from the community.
The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.
For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.
If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.
And always feel free to hit us up on Twitter @MxdRealityDev.
WorldAnchorTransferBatch Limitations?
Hello, everybody.
In my current project I'm serializing a list of world anchors into a WorldAnchorTransferBatch and sending its serialized bytes from a "mapper" client to a server, to be used by the "consumer" clients, which deserialize their own WorldAnchorTransferBatch instances from those bytes.
At the moment I'm using only 5 world anchors and my current batch is already a big fat 25 MB. Practical scenarios will have 100+ world anchors.
I wonder if there are any officially documented or empirically found limitations associated with the WorldAnchorTransferBatch. Namely:
- Will serialization/deserialization fail if the batch is too big?
- What is the relationship between the batch size and the number of anchors, the size of the room, and perhaps the "inconvenience" of the anchoring?
I am also curious to hear about practical experience with big world anchor batches, especially transferring them over the network.
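For reference, the export/import path I have in mind looks roughly like this (a sketch against Unity's UnityEngine.XR.WSA.Sharing API; SendToServer and OnBatchReady are hypothetical placeholders for my own networking layer):

```csharp
using System.Collections.Generic;
using UnityEngine.XR.WSA;
using UnityEngine.XR.WSA.Sharing;

public class AnchorTransfer
{
    readonly List<byte> buffer = new List<byte>();

    // Mapper side: serialize all anchors into one batch.
    public void Export(Dictionary<string, WorldAnchor> anchors)
    {
        var batch = new WorldAnchorTransferBatch();
        foreach (var pair in anchors)
            batch.AddWorldAnchor(pair.Key, pair.Value);

        WorldAnchorTransferBatch.ExportAsync(batch, OnExportDataAvailable, OnExportComplete);
    }

    // Export delivers the serialized batch in pieces; accumulate them.
    void OnExportDataAvailable(byte[] data) => buffer.AddRange(data);

    void OnExportComplete(SerializationCompletionReason reason)
    {
        if (reason == SerializationCompletionReason.Succeeded)
            SendToServer(buffer.ToArray()); // hypothetical networking call
    }

    // Consumer side: rebuild the batch from the received bytes.
    public void Import(byte[] serialized)
    {
        WorldAnchorTransferBatch.ImportAsync(serialized, OnImportComplete);
    }

    void OnImportComplete(SerializationCompletionReason reason, WorldAnchorTransferBatch batch)
    {
        if (reason == SerializationCompletionReason.Succeeded)
            OnBatchReady(batch); // hypothetical: lock objects via batch.LockObject(id, gameObject)
    }

    void SendToServer(byte[] bytes) { /* project-specific */ }
    void OnBatchReady(WorldAnchorTransferBatch batch) { /* project-specific */ }
}
```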
Cheers!
Building the future of holographic navigation. We're hiring.
Best Answer
@stepan_stulov said:
Hello, everybody. At the moment I'm using only 5 world anchors and my current batch is at big fat 25Mb. Practical examples will have 100+ world anchors.
For a sharing experience it's enough to use one anchor per room. After you define the anchor, you can transform all other objects relative to it. I don't see a reason to use multiple world anchors.
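For example, a minimal Unity sketch of that approach (assuming the standard WorldAnchor component; the names are illustrative): anchor one root object and position everything else relative to it, so only plain local coordinates need to be shared.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;

public class SharedRoot : MonoBehaviour
{
    public void AnchorContent(GameObject root, GameObject content, Vector3 localOffset)
    {
        // Only the root gets a world anchor.
        root.AddComponent<WorldAnchor>();

        // Everything else is parented to the root; its local position
        // can be shared as ordinary coordinates, no extra anchors needed.
        content.transform.SetParent(root.transform, false);
        content.transform.localPosition = localOffset;
    }
}
```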
- Will serialization/deserialization fail if the batch is too big?
It may fail, or only serialize after several attempts. I'm still seeing this issue with 25 MB+ anchors.
- What is the relationship of the batch size to the number of anchors, size of the room, perhaps "inconvenience" of the anchoring?
It depends on the spatial data: the more you have, the larger the created anchor will be. You can look at Settings -> System -> Spaces for more info. A big room will have 150+ MB of spatial data. You can try cleaning the active spaces, and the world anchor will be small enough for fast serialization, but it may lack accuracy at the start of your sharing experience.
Hope my answer helped you a bit.
Answers
My team just came across this same issue. We are having to "chunk" the data out to our server and then have it, in turn, send the data out in "chunks" to any clients requesting it. The good news is you should only need to send your anchors for a particular room once; after that you are just moving the objects in relation to the anchors.
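In case it helps, the chunking itself can be sketched in plain C# (the 64 KB chunk size is an arbitrary example, and the transport layer is left out):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Chunker
{
    // Split the serialized batch into fixed-size pieces for transport.
    public static List<byte[]> Split(byte[] data, int chunkSize = 64 * 1024)
    {
        var chunks = new List<byte[]>();
        for (int offset = 0; offset < data.Length; offset += chunkSize)
        {
            int size = Math.Min(chunkSize, data.Length - offset);
            var chunk = new byte[size];
            Array.Copy(data, offset, chunk, 0, size);
            chunks.Add(chunk);
        }
        return chunks;
    }

    // Reassemble the original byte array on the receiving client,
    // in the order the chunks were sent.
    public static byte[] Join(IEnumerable<byte[]> chunks) =>
        chunks.SelectMany(c => c).ToArray();
}
```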
Thanks a lot for your answer!
Mostly for better local stability, due to the leverage effect: the farther the visual content is from its anchor, the more noticeable the angular flickering becomes.