Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Overlaying holograms onto real world objects with precision

Hi All,

I am trying to overlay holograms onto real world objects to create a sort of "Color Map" that I can use to direct me where to paint lines and certain colors on a given object.

I've found it difficult to overlay the hologram onto the object. It's a problem I'm sure a lot of people have dealt with in Blender or other modeling programs, where you "drop" or "click" objects into place instead of inputting precise coordinates.

I was wondering if anyone has ideas or suggestions about how to improve the ability to precisely overlay a hologram of an object onto the object itself. I am using 3D Viewer Beta to visualize the holograms.

Thanks!

Answers

  • Box01
    edited October 2016

    Just raising my hand to say I am looking for a solution to this exact issue as well.

  • utekai ✭✭✭
    edited October 2016

    I think the underlying issue with OpenCV is that there isn't yet a free .NET Core wrapper; there are .NET wrappers, but they won't run on UWP. There is one available in the Unity Asset Store: https://www.assetstore.unity3d.com/en/#!/content/21088 . The UWP port is still in beta and, based on the comments, may not be an excellent choice.

    Likely OpenCV can run on a server, be fed images/video, and send results back to a HoloLens client. This would of course greatly reduce the load on the HoloLens, which might otherwise be significant or overtaxing. We'll soon find out.
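A minimal sketch of the wire format such a server/client split might use - a length-prefixed framing scheme so the server can carve individual camera frames out of the byte stream. The 4-byte big-endian length header is an assumption made for illustration, not an established protocol:

```python
import struct

# Hypothetical framing for shipping camera frames from a HoloLens client to
# an OpenCV server and results back. Each message is a 4-byte big-endian
# length header followed by the payload bytes (e.g. a JPEG frame).

def frame_message(payload: bytes) -> bytes:
    """Prefix a payload with its length so the receiver can split the stream."""
    return struct.pack(">I", len(payload)) + payload

def parse_message(stream: bytes) -> tuple[bytes, bytes]:
    """Split one framed message off the front of a byte stream.

    Returns (payload, remaining_bytes)."""
    (length,) = struct.unpack(">I", stream[:4])
    return stream[4:4 + length], stream[4 + length:]
```

The same framing works in both directions: the client frames outgoing frames, the server frames its detection results.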

  • Currently you can manually place your hologram over the real-world object and then anchor it to the world so it's fixed. It's a workaround, but it will give you the result you need. Of course, this requires that your real-world object doesn't move or rotate.

    Healthcare IT professional by day - Indie GameDev for UWP and mobile platforms by night

  • CurvSurf ✭✭
    edited October 2016

    @Printer12
    The spatial mapping from the current version of HoloLens consists of sparse and inaccurate points (see the videos below).

    And the current version of HoloToolkit can only extract sufficiently large planes (e.g. floor, wall, large table) from the spatial mapping.

    In order to position holograms onto/into a real object surface, we have to know the shape, size, position, and orientation of the object. Not an easy problem!

    MS must provide developers with the raw depth data streams, along with sensor position & orientation.

    CurvSurf is preparing to release 'FindSurface', an SDK middleware for object detection, recognition, and measurement, in Q1 2017.

    Microsoft HoloLens
    HoloLens: Feature Extraction by Voice Commands

    Google Tango
    Improvements by Google Project Tango

    Intel RealSense
    Augmented Reality: Real time geometric feature extraction and annotation

  • @CurvSurf Thank you for this insightful reply. Another question, if one wanted to get a detailed spatial map of a smaller object, would bringing the lens closer to the object improve the resolution of the map for that object? Theoretically, could an app be designed where that data was stored, and an identical hologram was programmed to align with the object?

  • @Printer12
    The raw depth data stream is processed inside HoloLens and converted to the spatial mapping data with a preset number of triangles per cubic meter. The distances between the device and object surfaces have practically no effect.

    Theoretically possible, but in practice not an easy task. In order to align a virtual object with a real object, e.g. a gym ball, we have to know the shape, size, position, and rotation of the real object by processing the spatial mapping data. Only then can we select, scale, translate, and rotate the corresponding hologram onto the real object's surface.
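As a toy illustration of the fitting step described above - solving for scale, rotation, and translation once point correspondences between the model and the scanned surface are known (which is itself the hard part) - here is a 2-D least-squares similarity fit using complex arithmetic. The function name and the 2-D simplification are my own; a real spatial-mapping pipeline would work in 3-D:

```python
# 2-D similarity (Procrustes) fit: represent points as complex numbers, so a
# single complex factor a = scale * e^{i*theta} encodes rotation and scale.

def fit_similarity(model, scene):
    """Least-squares scale/rotation/translation mapping `model` points onto
    `scene` points (both lists of complex numbers, paired by index)."""
    n = len(model)
    mp = sum(model) / n                      # model centroid
    mq = sum(scene) / n                      # scene centroid
    p = [z - mp for z in model]              # centered model points
    q = [z - mq for z in scene]              # centered scene points
    num = sum(pc.conjugate() * qc for pc, qc in zip(p, q))
    den = sum(abs(pc) ** 2 for pc in p)
    a = num / den                            # a = scale * e^{i*theta}
    return lambda z: a * (z - mp) + mq       # maps model frame -> scene frame
```

Given a few corresponded points on the hologram and on the scanned object, the returned function transplants any further model point into the scene.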

  • @DanglingNeuron said:
    Currently you can manually place your hologram over the real world object and then anchor it to the world so its fixed. So its a work around but will give you the result that you need. of course this requires that your real world object doesnt move or rotate.

    The only way I've been able to do this is through 3D Viewer Beta. Unfortunately, the near clipping plane is only about a foot away, so I can't get up close to my hologram. This creates problems for what I am trying to do. Is there a way to visualize holograms with a reduced clipping plane without building my own app in Unity?

    @CurvSurf You've been so helpful, any chance you might have a suggestion for this as well? :)

  • CurvSurfCurvSurf ✭✭
    edited October 2016

    @Printer12
    HoloLens will not provide spatial mapping data for surfaces closer than the reach of the user's hands. Otherwise, there would be a bunch of practical problems on both the hardware and software sides.

    We humans can determine the lateral position of an object point, and its changes, relatively accurately. But determining the radial distance of an object point? Our stereo vision has an error ellipsoid (or error rhombus) elongated in the radial direction. We can judge the lateral separation between two buildings at the same distance relatively accurately, but not their distance from us.

    We can place a hologram relatively accurately in the lateral direction, but not in the radial direction. We may have to rotate the scene 90 degrees horizontally or vertically and repeat the placement. Then comes the next problem of rotational alignment! Placing a virtual object onto a real object through visual feedback alone is not rewarding.

    A machine or algorithm will have no such difficulties.
    Mathematics is about numbers, after all!

    https://forums.hololens.com/discussion/comment/9434#Comment_9434

    https://plus.google.com/+CurvSurf
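The stereo-vision point above can be made concrete with a back-of-envelope sketch: for a stereo rig, depth (radial) uncertainty grows roughly with the square of range, while lateral uncertainty grows only linearly - exactly the radially elongated error ellipsoid described. The focal length, baseline, and disparity-noise numbers below are invented for illustration and are not HoloLens specifications:

```python
# Illustrative stereo error model. Depth from disparity: z = f * b / d, so a
# disparity error dd propagates to a depth error dz ~ z**2 / (f * b) * dd.

FOCAL_PX = 500.0      # focal length in pixels (assumed)
BASELINE_M = 0.1      # stereo baseline in meters (assumed)
DISP_NOISE_PX = 0.5   # disparity measurement noise in pixels (assumed)

def depth_error(z):
    """Radial (depth) uncertainty in meters at range z meters."""
    return z ** 2 / (FOCAL_PX * BASELINE_M) * DISP_NOISE_PX

def lateral_error(z):
    """Lateral uncertainty in meters at range z for the same pixel noise."""
    return z / FOCAL_PX * DISP_NOISE_PX
```

At a 2 m range these assumed numbers give about 4 cm of radial uncertainty against only 2 mm laterally - a 20:1 elongation of the error ellipsoid.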

  • For precise placement, try an Xbox One S controller - it works amazingly well, and there is a setup for it in Unity.

    Rotation, tilt, and near interaction are big problems here.

    I wonder if it's possible to use an external orientation plane for the HoloLens sensors, like a reflective grid... that would work for near interaction.

  • I haven't had a chance to try it out yet, but this seems like a job for the Vuforia integration: https://blogs.windows.com/devices/2016/11/17/innovating-with-microsoft-hololens-vuforia-sdk-is-here/#1rtu5haQTY5Dl8RL.97

  • What about a situation where you barcode the things you want to automatically apply an interface to? We could recognize a QR code (an object ID) that is always positioned at a fixed point on the object, and then overlay an interface at that reference point.
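To sketch the idea above: once a marker's pose (rotation and translation) has been recovered by some detector (e.g. a QR reader or a Vuforia image target - the detection itself is out of scope here), placing the overlay is just a transform composition. The helper names and the example poses are hypothetical:

```python
# Place an overlay at a fixed offset from a detected marker by transforming
# the offset from the marker's local frame into world coordinates.

def mat_vec(m, v):
    """Multiply a 3x3 rotation matrix (tuple of row tuples) by a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def marker_to_world(marker_rot, marker_pos, offset_in_marker):
    """World position of a point given in the marker's local frame:
    p_world = R_marker * p_local + t_marker."""
    rotated = mat_vec(marker_rot, offset_in_marker)
    return tuple(rotated[i] + marker_pos[i] for i in range(3))
```

Because the QR code sits at a known fixed spot on the object, a single stored offset per object ID is enough to drop the interface at the right reference point every time the marker is seen.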
