Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.


I tried using this method after configuring my project for spatial sound, and it seems to work well. It also seems like a good method for generic sound effects such as selection clicks and alert bells. Before I go peppering my code with calls to this method, are there any caveats I should be aware of? Is the static method already configured for optimal spatialization? If not, can I configure it to be?
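For context, the static method in question is Unity's `AudioSource.PlayClipAtPoint`. A minimal sketch of how I'm calling it (the class name and the `selectClip` field are just illustrative, not from my actual project):

```csharp
using UnityEngine;

// Illustrative sketch: AudioSource.PlayClipAtPoint spawns a temporary
// GameObject with an AudioSource at the given world position, plays the
// clip once, and destroys the object when the clip finishes.
public class SelectionFeedback : MonoBehaviour
{
    [SerializeField] private AudioClip selectClip; // assigned in the Inspector

    public void OnSelect(Vector3 worldPosition)
    {
        // Third argument is the volume scale and is optional.
        AudioSource.PlayClipAtPoint(selectClip, worldPosition, 1.0f);
    }
}
```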

Best Answer


  • It does -- thanks David!

    FYI - the context I'm using the method in does include visible source emitters. For example, I created a generic behavior for moving objects with a snapping effect. For each "snap" I play an audio clip using `AudioSource.PlayClipAtPoint`. I architected it this way so that I could use a single editing-controller instance to coordinate editing a multitude of objects; it seemed impractical to attach an AudioSource to each object or to manage a pool of AudioSources. I agree that the quality is fine for this purpose - we will stick with instanced AudioSources for higher-quality effects.
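    The controller pattern described above might look roughly like this. All names here (`SnapEditingController`, `snapClip`, `gridSize`, the snapping rule) are hypothetical placeholders for whatever the real project uses:

    ```csharp
    using UnityEngine;

    // Hypothetical sketch of the pattern: one controller moves many
    // objects and plays a one-shot clip at each object's position when
    // it snaps, instead of attaching an AudioSource to every object.
    public class SnapEditingController : MonoBehaviour
    {
        [SerializeField] private AudioClip snapClip;    // assigned in the Inspector
        [SerializeField] private float gridSize = 0.1f; // snap increment in meters

        public void MoveObject(Transform target, Vector3 requestedPosition)
        {
            Vector3 snapped = Snap(requestedPosition);
            if (snapped != target.position)
            {
                target.position = snapped;
                // One-shot audio at the snap location; no per-object
                // AudioSource component is needed.
                AudioSource.PlayClipAtPoint(snapClip, snapped);
            }
        }

        private Vector3 Snap(Vector3 p)
        {
            // Round each coordinate to the nearest grid increment.
            return new Vector3(
                Mathf.Round(p.x / gridSize) * gridSize,
                Mathf.Round(p.y / gridSize) * gridSize,
                Mathf.Round(p.z / gridSize) * gridSize);
        }
    }
    ```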
