Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Hololens application

I wanted to ask you a question about a HoloLens application for Electrical Utility/Substation Engineering. We have thousands of old electrical high-voltage substations that were built from paper/2D designs. Can the HoloLens stitch together a 3D model of a brownfield substation? If not, what would it take to get this done? Today, to get a station's Smart 3D (virtual) model, we perform a LiDAR scan of the substation, capture point clouds, and then convert those point clouds into a 3D model (vectorizing). The whole process takes 3 to 4 weeks per station, and we have 3000+ transmission stations.

I read this in an article:

"The HoloLens uses a series of 3D video cameras that capture images of a person from all angles and then seamlessly stitches together a 3D model of the person that can be reconstructed, compressed and transmitted anywhere instantly."

http://newatlas.com/microsoft-holoportation-hololens-virtual-reality-hologram/42501/
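As context for the workflow described in the question (LiDAR scan, then point clouds, then a vectorized 3D model): the first stage of that conversion is usually reducing the raw scan to a manageable point set before surface reconstruction. This is a minimal, hypothetical sketch of voxel-grid downsampling using NumPy — not HoloLens-specific and not any particular vendor's pipeline, just an illustration of the kind of preprocessing involved:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Collapse a point cloud onto a voxel grid, keeping one
    averaged point per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

# One million synthetic scan points in a 10 m cube,
# reduced to one point per 0.5 m voxel.
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 10.0, size=(1_000_000, 3))
reduced = voxel_downsample(cloud, voxel_size=0.5)
print(reduced.shape)
```

After a step like this, surface reconstruction (the "vectoring" the question mentions) runs on the reduced cloud; the heavy, weeks-long part of the real workflow is typically the modeling, not the downsampling.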

Answers


    So the camera is pretty useless unless you are recording video or Skype calling. The spatial mapping sensors are rough. They work, but the resolution is pretty low. You can tell a flat surface OK, but shadows and lighting conditions will really affect how the final mesh looks. If you load up the emulator and load in a room (Rooms tab), you can see roughly the resolution it captures. It's pretty rough. It's good for detecting edges and surfaces, but getting the resolution detail you might be looking for will require some magical algorithms to parse out the data. There is a spatial recognition set of algorithms provided by the studio that created Conker, but even that stuff is approximations that could help.

    Also, Microsoft's marketing team must have been drinking unicorn piss when they created the videos. The capabilities are quite hyped up. It takes a lot of work to get stuff to look even remotely as good as they fabricate.

    http://www.redsprocketstudio.com/
    Developer | Check out my new project blog


    It's not so much that the marketing team is wrong, but there is a misunderstanding of how this works. The HoloLens is not capturing the person and stitching everything together; that is a separate system of cameras and Kinect sensors placed around the person. The HoloLens is the medium for viewing the person's 3D image remotely.


    The "avatars" are premade meshes that represent the person, but you are right, there is a lot of misunderstanding of how it actually works.
    I've had conversations with people who've worked on the lens, and the marketing side of things uses a lot of fabrication and post effects that are just not possible on the lens, at least while keeping up 60 fps. Their job is to show what could be possible, but I'm not sure this version has the hardware to push those limits, not with this Atom CPU/APU combo. It's possible to get the effects to look similar to what's advertised, but the actual frame rate drops heavily and the experience is pretty much destroyed.

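On the point above that the coarse spatial mapping mesh is still "good for detecting edges and surfaces": plane detection on a low-resolution mesh usually starts from triangle normals. This is a toy, hypothetical sketch in NumPy — not the HoloLens API, just the geometric idea of flagging floor/ceiling candidates by how closely each triangle's normal aligns with the up axis:

```python
import numpy as np

def triangle_normals(vertices, faces):
    """Unit normal for each triangle of a mesh."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def horizontal_fraction(vertices, faces, up=(0.0, 0.0, 1.0), cos_tol=0.95):
    """Fraction of triangles whose normal is near the up axis,
    i.e. candidate floor/ceiling surfaces."""
    normals = triangle_normals(vertices, faces)
    return float(np.mean(np.abs(normals @ np.asarray(up)) > cos_tol))

# A unit square in the z=0 plane, split into two triangles:
# both normals point straight up, so the fraction is 1.0.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(horizontal_fraction(verts, faces))  # 1.0
```

Even a rough mesh keeps this kind of test workable, which is why flat-surface detection holds up at low resolution while fine geometric detail does not.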
