Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

XAudio2 vs AudioGraph emitters in DirectX

The developer pages for Direct3D give instructions for using XAudio2 to create spatialized sound in a HoloLens app. A presentation at Build announced AudioGraph emitters and listeners for creating spatialized sound as well, but I can find no information on whether AudioGraph can also be used in a C++ HoloLens app. It seems like it should work. Assuming it does, is there any reason to prefer XAudio2 over AudioGraph when longer sounds are being generated or streamed on the fly?
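For reference, here is roughly the AudioGraph setup I have in mind, written with C++/WinRT. This is a hedged sketch, not tested on device: the node types (`AudioGraph`, `AudioNodeEmitter`, `AudioFrameInputNode`) are real WinRT APIs, but the specific format requirements (mono float PCM for emitter nodes) and the emitter position are assumptions on my part.

```cpp
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.Media.Audio.h>
#include <winrt/Windows.Media.MediaProperties.h>
#include <winrt/Windows.Media.Render.h>

using namespace winrt;
using namespace winrt::Windows::Media::Audio;
using namespace winrt::Windows::Media::MediaProperties;
using namespace winrt::Windows::Media::Render;

// Sketch: build a graph with a spatialized frame-input node that is fed
// generated PCM on the fly via the QuantumStarted callback.
fire_and_forget CreateSpatialGraphAsync()
{
    AudioGraphSettings settings(AudioRenderCategory::GameMedia);
    CreateAudioGraphResult result = co_await AudioGraph::CreateAsync(settings);
    if (result.Status() != AudioGraphCreationStatus::Success)
        co_return;

    AudioGraph graph = result.Graph();
    auto outputResult = co_await graph.CreateDeviceOutputNodeAsync();
    AudioDeviceOutputNode output = outputResult.DeviceOutputNode();

    // Emitter placed one meter in front of the (default) listener.
    AudioNodeEmitter emitter;
    emitter.Position({ 0.0f, 0.0f, -1.0f });

    // Assumption: spatialized emitter nodes want mono 48 kHz float PCM.
    AudioEncodingProperties props = AudioEncodingProperties::CreatePcm(48000, 1, 32);
    props.Subtype(L"Float");

    AudioFrameInputNode input = graph.CreateFrameInputNode(props, emitter);
    input.AddOutgoingConnection(output);

    input.QuantumStarted(
        [](AudioFrameInputNode node, FrameInputNodeQuantumStartedEventArgs args)
        {
            // Synthesize args.RequiredSamples() samples here and push them
            // with node.AddFrame(frame) -- this is where streamed/generated
            // audio would be produced each quantum.
        });

    graph.Start();
}
```

If this is valid, the QuantumStarted callback looks like the natural place to stream longer generated sounds, which is exactly the scenario I am asking about versus an XAudio2 source voice with submitted buffers.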
