Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

The Accuracy of HoloLens' SLAM

I want to use the HoloLens as an indoor GPS. I've looked at the HoloLens' SLAM .obj file from the Windows Device Portal. Amazing! But it only has a mesh without color, and the resolution of the model is not very good. So I mounted a Kinect on the HoloLens to grab an RGB point cloud and used the HoloLens as a 6DoF sensor, taking the Unity camera's position and rotation values as the pose. But the SLAM result is not as good as the HoloLens' .obj. So, is the Unity camera value believable? BTW, I truly can't see any object drift when I use the HoloLens.
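
For reference, the registration step described here -- composing the fixed Kinect-to-HoloLens offset with the per-frame headset pose -- might look like the sketch below. This is illustrative only: pose_to_matrix, holo_T_kinect, and the numeric values are placeholders, not a HoloLens API, and the fixed offset would have to come from your own extrinsic calibration of the rig.

```python
# Sketch: map Kinect points into the world frame via the HoloLens pose.
# All names and values here are placeholders, not a real HoloLens API.
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_to_matrix(position_xyz, quaternion_xyzw):
    """Build a 4x4 homogeneous transform from a position and a quaternion."""
    T = np.eye(4)
    T[:3, :3] = R.from_quat(quaternion_xyzw).as_matrix()
    T[:3, 3] = position_xyz
    return T

# Per-frame headset pose, logged from the Unity camera (placeholder values).
world_T_holo = pose_to_matrix([0.1, 1.6, 0.0], [0.0, 0.0, 0.0, 1.0])

# Fixed Kinect-to-HoloLens mount offset, measured once by extrinsic
# calibration of the rig (placeholder values).
holo_T_kinect = pose_to_matrix([0.0, 0.08, 0.05], [0.0, 0.0, 0.0, 1.0])

def kinect_points_to_world(points_kinect):
    """Transform an (N, 3) Kinect point cloud into world coordinates.
    Caveat: Unity uses a left-handed, Y-up frame, while most point-cloud
    tools are right-handed, so a handedness flip may be needed first."""
    n = len(points_kinect)
    pts_h = np.hstack([points_kinect, np.ones((n, 1))])   # homogeneous coords
    world_T_kinect = world_T_holo @ holo_T_kinect
    return (world_T_kinect @ pts_h.T).T[:, :3]
```

Note that with this scheme any per-frame pose error feeds straight into the merged cloud, which is what the answers below quantify.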

Answers

  • The Unity camera value is sometimes believable. I think the HoloLens' SLAM is good. Has anyone done a similar job? Can someone tell me the correct way to use the HoloLens' position data?

  • A while back, I tried an experiment where I placed a calibration chessboard in front of the HoloLens and moved the HoloLens around so that the locatable camera could view the chessboard from different poses. For each frame I got from the locatable camera, I saved the HoloLens' estimate of the locatable camera pose and also the frame itself. I used this data with OpenCV to find the pose of the camera given only the locatable camera frames, and compared those poses with what the HoloLens thought was the pose.

    My results were that the average HoloLens pose error was about 2cm in translation and about 2 degrees in rotation. You may want to do a similar experiment on your own HoloLens, but what this means is that if you want to combine the HoloLens pose with other sensors, you will probably need to do some sort of bundle adjustment to account for this small error and get your reconstructed meshes to line up properly.
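
    A minimal sketch of this kind of check, assuming a 9x6 inner-corner board with 25 mm squares and intrinsics K/dist from a prior cv2.calibrateCamera run (all of these are assumptions, not values from the original experiment). Comparing the relative motion between pairs of frames avoids having to align the board frame with the HoloLens world frame:

    ```python
    # Sketch: per-frame chessboard pose via OpenCV, plus an error metric
    # for comparing it against logged HoloLens poses.
    import cv2
    import numpy as np

    PATTERN = (9, 6)   # inner corners (columns, rows) -- assumption
    SQUARE = 0.025     # square edge length in meters -- assumption

    # 3D board corner positions in the board's own frame (z = 0 plane).
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

    def cam_T_board(gray, K, dist):
        """Camera pose relative to the board for one grayscale frame,
        as a 4x4 matrix, or None if the board was not found."""
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if not found:
            return None
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
        if not ok:
            return None
        T = np.eye(4)
        T[:3, :3] = cv2.Rodrigues(rvec)[0]
        T[:3, 3] = tvec.ravel()
        return T

    def pose_error(T_est, T_ref):
        """Translation error (m) and rotation error (deg) between poses."""
        D = np.linalg.inv(T_ref) @ T_est
        t_err = float(np.linalg.norm(D[:3, 3]))
        cos_a = (np.trace(D[:3, :3]) - 1.0) / 2.0
        r_err = float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
        return t_err, r_err

    # Usage idea: for two frames i and j, both of these equal cam_j_T_cam_i:
    #   T_vis  = cam_T_board(gray_j, K, dist) @ np.linalg.inv(cam_T_board(gray_i, K, dist))
    #   T_holo = np.linalg.inv(world_T_cam_j) @ world_T_cam_i   # logged poses
    # and pose_error(T_vis, T_holo) gives the per-pair discrepancy.
    ```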

  • Hi Dan, thank you for your reply. It looks like we are doing the same thing. I'm an application engineer, not a SLAM researcher, so I'm sure I can't build a SLAM system as good as the HoloLens'. I fixed the Kinect to the top of the HoloLens, so the relative position between the Kinect and the HoloLens is fixed; the only SLAM information I have is the Unity camera's position and rotation values.

    What confuses me is that the camera pose from Unity is sometimes good, not always bad. I can't get any information about the HoloLens' SLAM system (maybe ICP, monocular vision, an IMU... I don't know). My question is: is there some way to help the HoloLens improve its own SLAM system's robustness and accuracy? You remind me that maybe I could put some chessboards around the room for the HoloLens to learn from. Do you have any good ideas about this?
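
    Picking up the ICP idea mentioned above: without access to the HoloLens' internal tracker, a practical alternative is to accept its pose as an initial guess and correct each Kinect cloud downstream, for example with point-to-plane ICP against the map built so far. A rough sketch using Open3D, where cloud, world_map, and world_T_kinect are placeholders from the surrounding pipeline and the thresholds are guesses:

    ```python
    # Sketch: refine a HoloLens-derived pose with point-to-plane ICP (Open3D).
    # `cloud`, `world_map`, and `world_T_kinect` are placeholders from the
    # surrounding pipeline, not HoloLens APIs.
    import open3d as o3d

    def refine_pose(cloud, world_map, world_T_kinect, voxel=0.02):
        """Refine one Kinect cloud's pose against the map built so far."""
        src = cloud.voxel_down_sample(voxel)
        tgt = world_map.voxel_down_sample(voxel)
        # Point-to-plane ICP needs target normals.
        tgt.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 3, max_nn=30))
        result = o3d.pipelines.registration.registration_icp(
            src, tgt,
            max_correspondence_distance=0.05,  # ~2x the reported ~2 cm error
            init=world_T_kinect,               # HoloLens pose as initial guess
            estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPlane())
        return result.transformation           # corrected world_T_kinect
    ```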

  • I don't know of a way to improve the HoloLens' accuracy without large modifications to the room the HoloLens is in. Adding some chessboards might give the HoloLens tracker some high-contrast objects to use in its internal SLAM, but I don't know if that would help very much -- because the HoloLens uses internal depth/stereo sensors as part of its sensor fusion, the improvement from high-contrast objects is probably only marginal.

    If you have full control over your physical environment, and are able to pay some extra money, you could try getting an external tracking system set up in your environment. Something like an Optitrack system, or using the Vive trackers, could let you place a tracker on the HoloLens itself and have its position determined by cameras around the room.
