Holograms 220 Tutorial - Immersion: How do audio files get paired with their source and mixer?
After building, running, and walking through the scripts, including the HoloToolkit Spatial Sound classes, I can piece together how it all works in my mind except for one bit of magic that escapes my understanding: when in the SpatialMusic mode, how did the samples from Assets/Audio/Music find their way into a UAudioClip, and how did the clip's output get routed to the correct mixer channel?

The game objects' SurroundEmitters refer to AudioSources that have a null AudioClip and Output, so that's not it. I've seen code in other projects that loads clips directly within C# scripts, but I see nothing like that here.

How do the preloaded "Vocals" samples become part of the "Vox" emitter? When switching to SpatialMusic, how has the clips' content been routed to that submix, if not in code or in a spec (or at least a spec I can find)? Please enlighten!
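For comparison, the explicit in-code pattern I've seen elsewhere looks something like the sketch below. This is not from the Holograms 220 project; the asset path, mixer group name, and class name are made up for illustration. It just shows a clip being loaded and its output routed to a named mixer group entirely in script, which is the behavior I can't find in this project.

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Hypothetical example of loading an AudioClip and routing it to a
// mixer submix in code, rather than via Inspector-assigned references.
public class ExplicitClipRouting : MonoBehaviour
{
    // The mixer asset containing the target group; assigned in the Inspector.
    public AudioMixer mixer;

    void Start()
    {
        var source = gameObject.AddComponent<AudioSource>();

        // Load a clip from a Resources folder (path is illustrative only).
        source.clip = Resources.Load<AudioClip>("Audio/Music/Vocals");

        // Route the source's output to a named submix group on the mixer.
        AudioMixerGroup[] groups = mixer.FindMatchingGroups("SpatialMusic");
        if (groups.Length > 0)
        {
            source.outputAudioMixerGroup = groups[0];
        }

        source.Play();
    }
}
```

In this project, though, nothing like that appears, which is why I suspect the pairing happens through serialized data I haven't located yet.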