Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Is it possible to use WebGL with HoloLens? (repost)

spt ✭✭
edited March 2016 in Questions And Answers

Will it be possible to use WebGL with HoloLens? If you made an app using WebGL, could it work? Would a WebGL app running in a browser get full hologram/input/output features?

I have reworded and reposted this as a question because my previous question was marked as answered when it had not been answered.

Let me say I am having trouble with this forum. This should be a question marked as unanswered. I see no 'help' topic that explains how this is supposed to work, so here I am rambling.
I'm going to add another comment right here to see how the view count goes up.
Yeah, the view count goes up by 2 on every edit, when it should go up by zero.

Answers

  • spt ✭✭

    They could build the inputs into Edge. That would make sense to me.

  • Rocky

    Can I ask some further questions about WebGL on HoloLens?
    1. It seems that either Microsoft Edge or an embedded IE component in a UWP app can run WebGL on HoloLens, can't it?
    2. If it works, how do I make the background transparent? Going by the current Unity tutorial, I guess the black color #00000000 might work, right?
    3. How do I split the view for both eyes? Do I just render two different views in one WebGL browser window, or is there some other special mechanism for holographic rendering?
    4. For other things such as camera movement callbacks, I guess I will have to use UWP, or call the JavaScript DeviceOrientation API?

    Thanks in advance!

  • kimdba

    @Rocky said:
    Can I ask some further questions about WebGL on HoloLens?
    1. It seems that either Microsoft Edge or an embedded IE component in a UWP app can run WebGL on HoloLens, can't it?

    Yes I believe it can.

    2. If it works, how do I make the background transparent? Going by the current Unity tutorial, I guess the black color #00000000 might work, right?

    Technically, WebGL content is projected onto the 2D plane contained within the Edge browser window; it isn't really being rendered in 3D space on HoloLens...

    3. How do I split the view for both eyes? Do I just render two different views in one WebGL browser window, or is there some other special mechanism for holographic rendering?

    ...

    4. For other things such as camera movement callbacks, I guess I will have to use UWP, or call the JavaScript DeviceOrientation API?

    That is an interesting idea. If I understand correctly, you are trying to hook your Edge browser up to your HoloLens and project 3D graphics with WebGL in the browser based on the REST APIs available from the HoloLens... I would say that is quite an extreme approach to achieving something like that...

    Thanks in advance!

  • MikeRiches

    @kimdba is correct in that Edge isn't a path to creating holographic OpenGL apps at this time. To create a holographic app with OpenGL, you'll need to use Windows APIs to create a holographic space and at least one spatial coordinate system. You will also in some way use DirectX APIs to support stereo rendering.
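
    As a rough illustration (this is a hand-written sketch in C++/CX for a plain UWP CoreWindow app, not code from the template, and the function name is just for illustration), that setup boils down to something like this:

    // Sketch only: create the holographic space and a stationary spatial
    // coordinate system, using the standard Windows Holographic APIs.
    using namespace Windows::Graphics::Holographic;
    using namespace Windows::Perception::Spatial;

    void InitializeHolographicScene(Windows::UI::Core::CoreWindow^ window)
    {
        // The holographic space connects the app to the HoloLens display pipeline.
        HolographicSpace^ holographicSpace = HolographicSpace::CreateForCoreWindow(window);

        // A stationary frame of reference at the user's current location provides
        // the spatial coordinate system that holograms are positioned in.
        SpatialLocator^ locator = SpatialLocator::GetDefault();
        SpatialStationaryFrameOfReference^ referenceFrame =
            locator->CreateStationaryFrameOfReferenceAtCurrentLocation();
        SpatialCoordinateSystem^ coordinateSystem = referenceFrame->CoordinateSystem;

        // The renderer then uses holographicSpace and coordinateSystem every frame.
    }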

    We do currently have experimental support for ANGLE on HoloLens. ANGLE is an open-source library that provides support for writing UWP apps in C++, using a subset of OpenGL; what we have today is an experimental way of extending that support to Windows Holographic. This includes a way for your app to provide a holographic space and a spatial coordinate system, and access to the stereo rendering pipeline that, in this case, is mostly managed for you by ANGLE. Note that the holographic space works together with spatial coordinate systems to track camera movement in our system - sample code for this is in the UWP app template in the experimental branch.
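
    The README linked below covers the exact setup. Purely as a hedged sketch (this is not code from the branch; in particular, the assumption that the HolographicSpace is handed to ANGLE through the usual UWP PropertySet mechanism should be checked against the README), surface creation would look roughly like this:

    // Hedged sketch: standard ANGLE-for-UWP EGL surface creation, assuming the
    // experimental branch accepts a HolographicSpace in place of the CoreWindow.
    #include <EGL/egl.h>
    #include <angle_windowsstore.h>

    using namespace Windows::Foundation::Collections;

    EGLSurface CreateHolographicSurface(
        EGLDisplay display, EGLConfig config,
        Windows::Graphics::Holographic::HolographicSpace^ holographicSpace)
    {
        // ANGLE's UWP backend receives its native window through a PropertySet.
        PropertySet^ surfaceProperties = ref new PropertySet();
        surfaceProperties->Insert(ref new Platform::String(EGLNativeWindowTypeProperty),
                                  holographicSpace);

        const EGLint surfaceAttributes[] = { EGL_NONE };
        return eglCreateWindowSurface(display, config,
                                      reinterpret_cast<IInspectable*>(surfaceProperties),
                                      surfaceAttributes);
    }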

    Some helpful info:
    * For an overview, and for instructions on getting started, see the README file.
    * If you're already using ANGLE and you just need to see what we changed for this branch, take a look at commit 5ff1ffca. This includes the changes for ANGLE, as well as the changes for the app template.

    We are interested in potentially doing more with this. If you are able to share details, we'd love to hear more about the holographic apps you'd like to create using ANGLE.

    Thanks again,

    Mike.

  • mlfarrell
    edited June 2016

    Hahahaa. This is funny. I literally just started this today

    https://github.com/mlfarrell/angle/commit/a931bccba32ec25cf7a0c80525fdf76f984fb5ea

    Really glad to see you starting on this. Please use my one commit if it does anything that can help you guys, or if you haven't gotten to that part yet. I'm so relieved that I don't have to adapt ANGLE for holographic rendering completely by myself.

    To answer your question about the kinds of apps this can enable, I'm currently in the middle of porting my 3D modeling studio (vertostudio.com) to the HoloLens. I've already gotten the initial rendering engine up and running on vanilla ANGLE on Windows 10 XAML.

  • mlfarrell
    edited June 2016

    folded into above comment

  • MikeRiches

    @mlfarrell: Glad to hear that this is both helpful and timely. I see that you added the render target array index as a built-in - we are considering that as well. For now, the prototype is using shader variables with special names, which you can get from the app template.

    If your rendering engine is already running on ANGLE, then you should not have very far to go before you can render holograms. Let us know how it goes!

  • spt ✭✭

    I am interested in WebGL because it is intrinsically multi-platform and robust over the internet. For example, WebGL apps don't need to be installed or configured; they work just like web pages. They are web pages! Write once, run everywhere is very appealing. I have a server that synchronizes hundreds of users at once, including PC, Android, and Mac users, using full encryption and compression, yet it requires zero platform-specific code, just internet standards and a browser. Also, it requires no license, which is a big deal.
    I also believe that WebGL scales better than Unity. Unity makes it very easy to make a simple app, but it can get clunky as a project becomes really complex. The price for being easy at the low end is inflexibility at the high end.

  • Jimbohalo10 ✭✭✭
    edited June 2016

    @spt
    Well, WebGL works in the Microsoft Edge browser on the HoloLens Emulator.

    Run the link in a new tab; the WebGL browser test runs just the same as on the Windows 10 Insider desktop.

    If you want more Edge support, I suggest you start a poll discussion. Really, this is Windows 10 Insider feedback territory, as that team controls Microsoft Edge development.

    If you are really interested in this, watch the
    Channel 9 Build video: Hyper-Fast Web Graphics with WebGL

    For WebGL in Unity 5.4, see the Unity Manual: PlayerSettingsForWebGL and the

    Unity Manual: Building and Running a WebGL Project

  • spt ✭✭

    @Jimbohalo10 said:
    @spt
    Well, WebGL works in the Microsoft Edge browser on the HoloLens Emulator.

    The 3D in Edge is not stereoscopic on HoloLens, so no, WebGL is not really supported.
    In contrast, stereoscopic 3D with WebGL is supported on the Oculus Rift.

  • mlfarrell

    @MikeRiches said:
    @mlfarrell: Glad to hear that this is both helpful and timely. I see that you added the render target array index as a built-in - we are considering that as well. For now, the prototype is using shader variables with special names, which you can get from the app template.

    Yeah, I find that the built-in provides cleaner code. Please feel free to use my code for that in your code base. (Though I'm not 100% sure about the name of the output variable).

    Also, I noticed in your template code (I haven't studied it closely enough to verify) that you mentioned the glClear call causes an implicit update to the holographic camera buffers. Hopefully this does not occur when clearing a non-default offscreen framebuffer target? I only ask because my code base makes heavy use of multiple framebuffer objects for offscreen rendering.
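
    To illustrate, the pattern I'm concerned about is just the ordinary OpenGL ES 2.0 offscreen pass below (a generic sketch with made-up sizes, nothing HoloLens-specific):

    // Generic ES 2.0 offscreen pass: render into a texture-backed FBO, then
    // switch back to the default (holographic) framebuffer.
    GLuint fbo = 0, colorTex = 0;

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

    // This clear targets the offscreen FBO, not the holographic swap chain, so it
    // hopefully should not trigger a holographic camera update.
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the offscreen pass here ...

    // Back to the default framebuffer for the holographic render targets.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);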

  • Jimbohalo10 ✭✭✭

    @MikeRiches said:
    @kimdba is correct in that Edge isn't a path to creating holographic OpenGL apps at this time. To create a holographic app with OpenGL, you'll need to use Windows APIs to create a holographic space and at least one spatial coordinate system. You will also in some way use DirectX APIs to support stereo rendering.

    We do currently have experimental support for ANGLE on HoloLens. [...]
    Thanks again,

    Mike.

    Hi, the GitHub distribution is missing some files and declarations that stop the build, so general users cannot build it: namely "windows.graphics.holographic.h" is missing
    when "#ifdef ANGLE_ENABLE_WINDOWS_HOLOGRAPHIC" is enabled.

    I also needed to grab some files from the GitHub project "A more universal DirectX SDK can work with multiple versions of Visual Studio, MinGW, etc." to get this far.

    Q:" You will also in some way use DirectX APIs to support stereo rendering."
    This may help Direct3D stereoscopic 3D sample C++
    Just curiosity build, keep up the good work :)

  • MikeRiches

    @Jimbohalo10: Thanks so much for trying this out! To clarify, you do need to use the Windows 10 SDK to build a UWP app in order for VS to find the headers and APIs for Windows Holographic. The Windows 10 app template from the prototype branch is a good starting point for this, and ANGLE_ENABLE_WINDOWS_HOLOGRAPHIC should be defined there in the project properties - if it's not, that's a bug.

    The example you found is a good example for rendering to stereo swap chains in a general sense. If you want to see code for rendering holograms, the best place to start is actually the Windows Holographic app template for Visual Studio (instructions here). Stereo render targets are just one aspect of drawing holograms; we also have holographic cameras with hardware-driven view and projection matrices, back buffers and viewports, spatial coordinate systems, and so on. If you want to learn more after reviewing the app template, we also have examples for specific developer tasks over on the Windows Universal samples repo - to find them, visit the Samples folder and look for folder names that start with "Holo". Currently, we have a code sample that shows how to acquire the spatial mapping data, and we have another code sample that uses spatial coordinate systems to locate the user and display a holographic UI message that they can see.
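
    To give a flavor of the per-frame flow (a paraphrased sketch, not a verbatim excerpt from the template, and the function name is illustrative), the stereo view and projection matrices come straight from each holographic camera pose:

    // Paraphrased sketch (C++/CX) of reading per-camera stereo matrices,
    // in the spirit of the Windows Holographic app template.
    using namespace Windows::Graphics::Holographic;
    using namespace Windows::Perception::Spatial;

    void UpdateCameraMatrices(HolographicFrame^ frame, SpatialCoordinateSystem^ coordinateSystem)
    {
        HolographicFramePrediction^ prediction = frame->CurrentPrediction;
        for (unsigned int i = 0; i < prediction->CameraPoses->Size; ++i)
        {
            HolographicCameraPose^ pose = prediction->CameraPoses->GetAt(i);

            // The system supplies the stereo projection matrices per camera.
            HolographicStereoTransform projection = pose->ProjectionTransform;

            // The view transform is relative to the chosen coordinate system and
            // can be temporarily unavailable if tracking is lost.
            Platform::IBox<HolographicStereoTransform>^ view =
                pose->TryGetViewTransform(coordinateSystem);
            if (view != nullptr)
            {
                // view->Value.Left/.Right and projection.Left/.Right feed the
                // left- and right-eye constant buffers for stereo rendering.
            }
        }
    }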

  • MikeRiches

    @mlfarrell: You should still be able to use off-screen render targets. Each call to
    Framebuffer11::clear() will check to see if the frame buffer's color render target is based on a holographic swap chain; if so, it will set up the holographic frame. If not, it will skip that and continue as normal.

    Relevant code from Framebuffer11.cpp:

    #ifdef ANGLE_ENABLE_WINDOWS_HOLOGRAPHIC
    // Look up the render target backing the first color attachment being cleared.
    RenderTarget11* colorHolographicRenderTarget = nullptr;
    error = mData.getFirstColorAttachment()->getRenderTarget(&colorHolographicRenderTarget);
    if (error.isError())
    {
        return error;
    }
    // Only render targets backed by a holographic swap chain take the path below;
    // regular off-screen FBO attachments skip it and the clear proceeds as normal.
    SurfaceRenderTarget11 *surfaceRenderTarget11 = GetAs<SurfaceRenderTarget11>(colorHolographicRenderTarget);
    if (surfaceRenderTarget11 != nullptr)
    {
        HolographicSwapChain11* holographicSwapChain = surfaceRenderTarget11->getHolographicSwapChain11();
        if (holographicSwapChain != nullptr)
        {
            HolographicNativeWindow* holographicNativeWindow = holographicSwapChain->getHolographicNativeWindow();
            if (holographicNativeWindow != nullptr)
            {
                // Refresh the holographic camera resources before clearing the swap chain.
                HRESULT hrFromCameraUpdate = holographicNativeWindow->UpdateHolographicResources();
                if (FAILED(hrFromCameraUpdate))
                {
                    return gl::Error(GL_INVALID_OPERATION);
                }
            }
        }
    }
    #endif
    
  • mlfarrell

    @MikeRiches said:
    @mlfarrell: You should still be able to use off-screen render targets. Each call to

    Great, thanks!

  •

    This library might do what you want: https://github.com/Microsoft/HoloJS

    Give it a try!
