The Mixed Reality Forums here are no longer being used or maintained.
There are a few other places we would like to direct you to for support, both from Microsoft and from the community.
The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.
For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.
If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.
And always feel free to hit us up on Twitter @MxdRealityDev.
Can we get an HWND from CoreWindow?
Hello, I am trying to run our application on a HoloLens using OpenGL ES and ANGLE, but I am facing several issues. Our application uses an HWND to bind the render context, but in a holographic app we have the CoreWindow and the HolographicSpace. Is there a way I can get an HWND from the CoreWindow and do the bind?
Best Answer
Hi gbhatt,
I don't think this is the correct approach. In the DirectX sample, the CoreWindow is given to the Holographic APIs to create a HolographicSpace, which internally creates the back buffers and stereoscopic render targets to render to.
You don't have access to an HWND for creating render targets, as you would in traditional Win32 and .NET apps. To get at the render target, you have to access the back buffers through the device resources, or more specifically the camera resources in the DirectX samples. Take a look at the CameraResources::CreateResourcesForBackBuffer() method. It lets you get the camera back buffer as an ID3D11Resource or IDirect3DSurface, from which you can create a render target view for a particular view. Keep in mind it's stereoscopic, so you have a left eye and a right eye to render to.
At that point you have a render target you can draw to.
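As a rough sketch of that step, loosely following what CameraResources::CreateResourcesForBackBuffer() does in the Windows Holographic DirectX template (GetDXGIInterfaceFromObject is a helper from the template's DirectXHelper.h; treat this as an outline, not the exact sample code):

```cpp
// Sketch: getting the holographic camera's back buffer and wrapping it in a
// render target view, roughly as the template's CameraResources does.
// Assumes the Windows 10 SDK holographic headers and C++/CX; error handling
// is omitted for brevity.
#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;
using namespace Windows::Graphics::Holographic;

void CreateBackBufferResources(
    ID3D11Device* device,
    HolographicCameraRenderingParameters^ cameraParameters,
    ComPtr<ID3D11Texture2D>& backBuffer,
    ComPtr<ID3D11RenderTargetView>& renderTargetView)
{
    // Direct3D11BackBuffer is an IDirect3DSurface; the sample's
    // GetDXGIInterfaceFromObject helper unwraps it to the underlying
    // ID3D11 resource.
    ComPtr<ID3D11Resource> resource;
    GetDXGIInterfaceFromObject(
        cameraParameters->Direct3D11BackBuffer, IID_PPV_ARGS(&resource));
    resource.As(&backBuffer);

    // The back buffer is a Texture2D array with two slices (left and right
    // eye); passing a null RTV description creates a view over the whole
    // array, covering both eyes.
    device->CreateRenderTargetView(
        backBuffer.Get(), nullptr, &renderTargetView);
}
```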
Dwight Goins
CAO & Founder| Independent Architect | Trainer and Consultant | Sr. Enterprise Architect
MVP | MCT | MCSD | MCPD | SharePoint TS | MS Virtual TS |Windows 8 App Store Developer | Linux Gentoo Geek | Raspberry Pi Owner | Micro .Net Developer | Kinect For Windows Device Developer
http://dgoins.wordpress.com
Answers
Hi Dwight,
Thanks for explaining. Yes, I got your point, but in my case I am using OpenGL and the ANGLE APIs, which do the conversion from OpenGL to DirectX. Our application is written in OpenGL, and since we cannot change the existing software, I am researching the best way to integrate the existing OpenGL application so that it can run as a hologram.
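For what it's worth, on UWP (which HoloLens apps are) ANGLE's EGL implementation does not use an HWND at all: eglCreateWindowSurface accepts a WinRT object instead, either the CoreWindow itself or a PropertySet holding it under EGLNativeWindowTypeProperty. A minimal sketch in C++/CX, following the pattern from ANGLE's Windows Store app templates (mEglDisplay and config are assumed to be initialized elsewhere; error handling omitted):

```cpp
// Sketch: creating an EGL window surface from a CoreWindow with ANGLE on UWP.
// No HWND is involved; on this platform EGLNativeWindowType is an
// IInspectable*. Assumes the EGL/ANGLE headers from the ANGLE package.
#include <EGL/egl.h>
#include <EGL/eglext.h>

using namespace Windows::UI::Core;
using namespace Windows::Foundation::Collections;

EGLSurface CreateSurfaceFromCoreWindow(
    EGLDisplay display, EGLConfig config, CoreWindow^ window)
{
    // ANGLE reads the CoreWindow out of a PropertySet keyed by
    // EGLNativeWindowTypeProperty (declared in ANGLE's eglext.h).
    PropertySet^ props = ref new PropertySet();
    props->Insert(ref new Platform::String(EGLNativeWindowTypeProperty), window);

    const EGLint surfaceAttributes[] = { EGL_NONE };
    return eglCreateWindowSurface(
        display, config,
        reinterpret_cast<EGLNativeWindowType>(props),
        surfaceAttributes);
}
```

That gets an OpenGL ES context rendering into the CoreWindow; presenting into the HolographicSpace's stereo back buffers is a separate problem, so it is still worth checking whether the ANGLE build you are using has holographic support.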