Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Only a single camera is found when debugging, so what's the point of using a HashMap to maintain camera resources?

Throughout the Holographic DirectX template source code, a std::map is used as the container to hold all camera resources, and most camera-related operations are performed by traversing all cameras, like this:
HolographicFramePrediction^ prediction = holographicFrame->CurrentPrediction;
for (auto cameraPose : prediction->CameraPoses)
{
    // This represents the device-based resources for a HolographicCamera.
    DX::CameraResources* pCameraRes = cameraResourceMap[cameraPose->HolographicCamera->Id].get();

    // Do something with this camera resource ...
}

However, when I debug the template application on a HoloLens (not the emulator), I find that only a single camera is ever added to the std::map, and that prediction->CameraPoses always contains a single camera. This is also confirmed by the fact that the HolographicSpace::CameraAdded event fires only once.
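For reference, the map is populated from the CameraAdded / CameraRemoved events. This is a simplified sketch of that part (member names like cameraResourceMap, OnCameraAdded and OnCameraRemoved are my own shorthand; the real template routes this through DX::DeviceResources and takes a deferral before creating the resources):

// std::map<UINT32, std::unique_ptr<DX::CameraResources>> cameraResourceMap;

void AppMain::OnCameraAdded(
    Windows::Graphics::Holographic::HolographicSpace^ sender,
    Windows::Graphics::Holographic::HolographicSpaceCameraAddedEventArgs^ args)
{
    // Allocate per-camera device resources (back buffer, depth buffer, viewport)
    // and key them by the camera's unique Id -- the same Id used for the
    // lookup in the render loop above.
    Windows::Graphics::Holographic::HolographicCamera^ camera = args->Camera;
    cameraResourceMap[camera->Id] = std::make_unique<DX::CameraResources>(camera);
}

void AppMain::OnCameraRemoved(
    Windows::Graphics::Holographic::HolographicSpace^ sender,
    Windows::Graphics::Holographic::HolographicSpaceCameraRemovedEventArgs^ args)
{
    // Release the per-camera resources when a camera goes away.
    cameraResourceMap.erase(args->Camera->Id);
}

On my device, OnCameraAdded fires exactly once at startup and OnCameraRemoved never fires, so the map never holds more than one entry.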

Here are my questions:
(1) Under what circumstances will there be multiple camera resources?
(2) Which physical (hardware) camera does the single camera I see while debugging correspond to?

Answers
