Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

3D Scanning & Texture Mapping Real World Objects?

I remember doing the 3D body scan with a Kinect at //Build 2015, so I'm curious whether the same can be done with HoloLens, given its camera and the generated 3D environment mesh. Are there APIs that help map camera-captured textures onto the mesh? Or is this something we'd have to play with and figure out how to implement ourselves?
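One common technique for this kind of problem is projective texture mapping: for each captured photo, use the camera's pose and intrinsics at capture time to project each mesh vertex into the image and derive a texture coordinate. A minimal sketch of that projection step in Python (the function name, intrinsics, and simplified identity-rotation pose are all illustrative assumptions, not any HoloLens API):

```python
# Projective texture mapping sketch: project a world-space mesh vertex
# into a captured camera frame to get a texture coordinate (UV).
# All intrinsic/pose values below are illustrative assumptions.

def project_vertex(vertex, cam_pos, fx, fy, cx, cy, width, height):
    """Project a world-space point into normalized UV space of a pinhole
    camera at cam_pos looking down -Z (identity rotation for brevity)."""
    # 1. World -> camera space (translate; a real pose also rotates).
    x = vertex[0] - cam_pos[0]
    y = vertex[1] - cam_pos[1]
    z = vertex[2] - cam_pos[2]
    if z >= 0:                      # point is behind the camera
        return None
    # 2. Perspective divide through the pinhole intrinsics.
    u = fx * (x / -z) + cx
    v = fy * (y / -z) + cy
    # 3. Reject points outside the image; normalize to 0..1 UVs.
    if not (0 <= u < width and 0 <= v < height):
        return None
    return (u / width, v / height)

# Example: a vertex 2 m straight ahead of a camera at the origin,
# with a 640x480 image and roughly a 60-degree horizontal FOV.
uv = project_vertex((0.0, 0.0, -2.0), (0.0, 0.0, 0.0),
                    fx=554.0, fy=554.0, cx=320.0, cy=240.0,
                    width=640, height=480)
print(uv)  # the optical axis lands at the image center: (0.5, 0.5)
```

In practice you would also need occlusion testing (a vertex visible in the mesh may be hidden in the photo) and blending across multiple photos, which is where most of the real work lies.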

Comments

  • I would love this kind of functionality, but I haven't seen any HL demos that do this. It sounds like the raw sensor data will not be directly available; I understand we developers will only have access to the higher-level APIs.

  • It would be great to have some possibilities for object recognition. It doesn't have to be complete 3D scanning; I think that's asking too much.
    I'd be interested in how fine the spatial mesh resolution can get. Would it be enough to identify, let's say, a bottle standing on a table? Or is that just unidentifiable polygon garbage? I could imagine a mesh-refinement algorithm on specified regions of interest. If used efficiently, this wouldn't increase the mesh size much. Is there any info on this topic that I have overlooked?

  • It is a very, very difficult problem, and exactly what I hope to move to HoloLens from Google's Tango tablet.
    The real question for HoloLens is twofold:

    1. Can I get a 3D depth field map based on some ratio of structured light and pure visual field analysis for a given camera frame?
    2. Can I get an accurate pose (location and attitude) for the camera at the time that info was collected?

    (I'd love to get a definitive call on those two issues from an authority, as it's key to this entire question; otherwise we have to send the image stream somewhere and process it ourselves.)
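    Assuming the answer to both questions is yes, combining them is mechanical in principle: unproject each depth pixel through the camera intrinsics, then transform the result by the camera pose. A toy sketch of that back-projection (all values illustrative; the pose here is translation-only, whereas a real pose also includes rotation):

```python
def unproject_pixel(u, v, depth, cam_pos, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth sample (meters along -Z)
    into world space, for a camera at cam_pos with identity rotation."""
    # Pinhole inverse: pixel -> camera-space ray, scaled by depth.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    z = -depth                      # camera looks down -Z
    # Camera -> world (translation only; a full pose also rotates).
    return (x + cam_pos[0], y + cam_pos[1], z + cam_pos[2])

# Example: the center pixel of a 640x480 frame at 2 m depth,
# seen by a camera mounted 1.5 m above the world origin.
p = unproject_pixel(320, 240, 2.0, (0.0, 1.5, 0.0),
                    fx=554.0, fy=554.0, cx=320.0, cy=240.0)
print(p)  # (0.0, 1.5, -2.0): straight ahead of the camera
```

    The hard part, as the next paragraph says, is that both the depth sample and the pose are noisy, so the reconstructed points scatter around the true surface.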

    Once you have the data in hand, things get very nasty, very quickly. Here's a post I did for Tango that roughly summarizes the problem; the tl;dr is that all of the data has noise and absolutely nothing can be fully trusted, so it's really a horrible multidimensional statistical problem.

    https://plus.google.com/+MarkMullin/posts/Hm2RCf6iEPj

    Here's where I got to (this was real-time processing of the data, but not on Tango): https://www.youtube.com/watch?v=lkb0I5WG570 - there are four acts: the raw collected camera frames (jerky because pose estimation is not always smooth), the frames processed in a timid fashion, then more aggressively while painting a live webcam feed (Osaka, Japan) onto the scene, and then very aggressively while trying not to obliterate the furniture and large pet.

    My goal is to move that software onto the HoloLens for recognition of structural elements (walls, floors, ceilings, etc.) and then send the really, really hard stuff, e.g. vases, to a Caffe system and let it sweat it out. :smiley:

  • Using Kinect for Windows 2.0 SDK, you can use Kinect Fusion to scan objects or environments and generate accurate meshes.

    HoloLens v1.0 does not give developers access to raw depth data and does not support object recognition.
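    For context on why Kinect Fusion produces accurate meshes from noisy sensors: it integrates many depth frames into a voxel grid as a weighted running average of a truncated signed distance function (TSDF), so per-frame noise averages out. A stripped-down sketch of that per-voxel update (illustrative only, not the SDK's actual API):

```python
def integrate(tsdf, weight, measured_dist, trunc=0.05, max_weight=64.0):
    """Fuse one new signed-distance measurement into a voxel's running
    TSDF average; repeated noisy samples converge toward the surface."""
    d = max(-trunc, min(trunc, measured_dist))   # truncate the distance
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    new_weight = min(weight + 1.0, max_weight)   # cap to stay adaptive
    return new_tsdf, new_weight

# Simulate fusing noisy measurements of a surface at distance 0.0.
noisy = [0.02, -0.03, 0.01, -0.01, 0.02, -0.02]
tsdf, w = 0.0, 0.0
for m in noisy:
    tsdf, w = integrate(tsdf, w, m)
print(round(tsdf, 4))  # settles near 0.0, the true surface, despite noise
```

    The weight cap keeps the grid responsive to scene changes; the surface mesh is then extracted from the zero crossing of the averaged TSDF.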

  • @Bryan,

    Sounds like there's no raw access to depth data, but it appears you can record spatial maps with Windows Device Portal and then use them in the HoloLens emulator for development.

    Even without raw access, the spatial maps are stored in xef files -- which may be similar to the xef files that Kinect Studio v2 uses. So maybe it's possible to access depth data that way even though it's not the intended purpose for it.

    James Ashley
    VS 2017 v5.3.3, Unity 2017.3.0f3, MRTK 2017.1.2, W10 17063
    Microsoft MVP, Freelance HoloLens/MR Developer
    www.imaginativeuniversal.com

  • I'd definitely like to see the ability to scan objects in the real world at a finer resolution than rooms are scanned.

  • @Bryan @mavasher @soundglider @MarkMMullin @neerajwadhwa @james_ashley @ng_hololens
    I've been presenting use cases on Twitter for how Mixed Reality and Augmented Reality can work together, and I recalled Microsoft's "MobileFusion" 3D scanning technology from my research. It leads me to conclude that this kind of 3D scanning should be applicable to HoloLens, since less advanced mobile phone hardware can already capture 3D objects as scans. That should open the door to creating holograms of real physical objects very effectively. You can see the related MobileFusion article in my tweet:


    I hope this helps.
    I will really appreciate feedback.
    Thank you.
    Jaswinder Brar.
  • I think this project is similar to what you want.

  • @javad329 said:
    I think this project is similar to what you want.

    Hi, the repository is unavailable on GitHub. Do you have it anywhere else?
