Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

How to develop 2D+3D hybrid apps (C#)?

Hi,

I read the documentation, and it seems possible to have multiple views in an app. If you create a 2D app with multiple windows in Visual Studio, you can switch between the windows (2D views) using the standard UWP API. If you create a 3D app with multiple scenes (3D views?) in Unity, I presume you can switch between these scenes.

My question is, if I want to have both 2D views and 3D views in an app, how do I go about doing that? Should I use Visual Studio and start with a Universal app (and, add a Unity scene in some way later)? Or, should I start with a 3D app in Unity (and, add XAML views later in some way)?

What's the recommended way of developing 2D/3D hybrid apps in C#?

Thanks,
~Harry+

Answers

  • @harryplus your question is very similar to one of the topics in a thread I created earlier called UWP Build Type D3D vs XAML, which I am still waiting to hear back on with some better clarity.

    I also just noticed another post, which the moderators just merged into a keyboard-specific thread, that asks something similar and is not really keyboard specific.

    There seem to be a lot of us who would like to know the answer to this, but it has been a little difficult to get a clear answer.

    Windows Holographic User Group Redmond

    WinHUGR.org - - - - - - - - - - - - - - - - - - @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  • Hi @HoloSheep

    Yes, they are all related. I agree.

    If you look at the "App Model" doc, the general framework seems to be well-defined (e.g., using HolographicSpace for a 3D view, etc.). But, since I'm going to use Unity (for 3D views), I need some specific directions as to how to go about building this "hybrid" type of app. Should I start with a 3D Unity app first? Or, should I start with a XAML app in VS first? How do I add generic XAML views from Unity? How do I add a 3D Unity scene in VS? ...

    I have no clue as to how to get started. (I'm not even concerned about how to switch between these views at runtime at this point since I know that's possible.)

    Thanks for your reply,
    ~Harry+

  • All of my exploration on this so far leads me to believe that the answer lies in using a Unity 3D app with the XAML build flag as your starting point.

    My experiments and research on starting from a 2D UWP app lead me to believe that 2D apps are much more limited in their abilities, and that most of the important volumetric stuff is handled by the HoloLens Shell app, which manages the 3D world for your 2D UWA. It looks like 2D UWP apps are very limited in their spatial abilities and seem, for the most part, to live in a wrapper that is handled and controlled by the shell. That said, it is possible that there may be some sort of hole that can be punched through a 2D app that somehow gives a window, handle, or access of some sort into volumetric space, but I have found no evidence of that being the case so far.

    Starting from a Unity 3D app gives you access to all the tough volumetric and holographic sweetness. We also have solid evidence already with the "keyboard" example that it is somehow possible to invoke 2D XAML windows from the 3D Unity world. Not to mention that "XAML" is the actual description text of the build setting in question from that type of Unity 3D app.

    So until we get something more official, my guess would be that the safer starting point would be going with a Unity 3D app with the XAML build setting for now.

    The 2D UWP side doesn't really involve a lot of new learning if you are coming from a XAML background anyway. The new Holographic platform stuff that we all need to ramp up on looks to be on the Unity side of things (except for those brave souls who want to build their own engine).

    HTH

    Windows Holographic User Group Redmond

    WinHUGR.org - - - - - - - - - - - - - - - - - - @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  • @HoloSheep

    I'm relatively new to Unity, and I'm not entirely sure how to add XAML views within Unity. But, that "XAML" flag in the build type seems to be a promising direction.

    One other option I'm currently contemplating experimenting with is to "aggregate" Unity-generated VS projects (one per scene?) into a UWP app project solution. I'm not exactly sure of the details, but I think it's doable.

    ~Harry+

  • Okay @harryplus, a couple of things that might help to fill in parts of the bigger picture...

    2D UWP apps.

    When you create a typical UWP app in Visual Studio, in most cases it will just deploy to and run on the HoloLens or the HoloLens emulator (as long as it doesn't do something on the unsupported list or trip over some yet-to-be-discovered compatibility issue).

    When that 2D app runs on the HoloLens, it appears much like a two-dimensional sheet of paper inside the 3D volumetric world of the HoloLens. You could think of it as a paper-thin floating screen or monitor that is displaying your 2D UWP app inside the 3D world.

    Some have gotten a little tripped up on the 2D vs. 3D concepts and said, "but wait, what if I create a 2D UWP app with 3D content inside it, maybe with OpenGL or a managed 3D library like SharpDX?" However, the 3D content inside a 2D UWP app would appear just as flat and two-dimensional inside the HoloLens as it does on a typical monitor; it would just be displayed on a cool floating paper-thin monitor when viewed through the HoloLens. So yes, it can be done, but there is little to be gained from it. Such an approach will not result in the 3D volumetric and holographic experience that the HoloLens is all about.

    So in summary, a regular UWP app created in Visual Studio doesn't usually contain any holographic platform specific code and when it runs on a HoloLens device it will appear as a 2D screen or view inside the 3D environment seen through the HoloLens.

    3D Unity apps.

    When you create a typical 3D Unity app for HoloLens (which, for the record, is also a UWP app) you are not really creating screens, views, or scenes per se. The base scene in your project becomes a representation of the 3D world around you when that app is running. So anything that you place in that base scene will map to a real-world location in the physical environment that you are in when wearing the HoloLens. In this kind of 3D app, things truly take on a 3D form and location and mix into your 3D physical space. This single app can have multiple 3D objects in different locations: some that look like flat-screen TVs on different walls around the room, others that could be some 3D shape or a 3D holographic teleportation of someone sitting on a table in your room.

    Holo Shell

    Currently on the HoloLens only one application runs at a time.
    You start off running the shell. When the shell is running, it is responsible for dealing with the 3D volumetric aspects of the environment. When you are in shell mode, the interface is mostly 2D: things are mostly paper-thin views located in the space around you, some pinned, some floating and following you around. The 2D UWP apps pretty much live and run in this world of the shell, with one allowed to be active at a time. Unity 3D apps, however, take over responsibility for the 3D volumetric world from the shell when they run.

    Switching between apps/view

    From what I can tell there are 2 distinct and exclusive states while running on the HoloLens.
    1. 3D mode, where the physical world maps directly to a 3D space managed by the active application. This is typically what you get when you are running a Unity-built HoloLens app. It is also the mode that the shell itself runs in and looks after.
    2. 2D mode, where the underlying shell stays in control of the 3D space that maps to your surrounding environment, but serves up paper-thin 2D view windows to applications that ask for them, and each application renders its 2D content onto its window.

    So, if you develop a Unity 3D app for HoloLens, I believe it is in the first mode described above until you do something to request a special mode switch (like when you need to display the system keyboard, which is basically a 2D XAML system component). Unity 3D apps can switch from their default 3D mode over to a requested 2D mode and back again.

    NOTE: The flexibility and customizability of what kinds of requested 2D modes we can ask the system to swap over to is one of the things that seems to need some clarification.

    However, if you develop a typical UWP 2D XAML app for HoloLens, it will start up in 2D mode by default and seems to be a child of the shell, which in this case owns or manages the 3D space. So switching while in this mode takes on a different twist, since your originating 2D app does not really have command over the more complex 3D space in the first place. Switching to 3D from an app that is fundamentally 2D to start with probably means handing off control to the shell. Perhaps the shell can expose a limited set of 3D-capable APIs to your 2D app, but my suspicion is that it will be very limited compared to the opposite scenario, where the base app has full control of the 3D complexity and capability to start with and is simply swapping out that world to display the occasional 2D component.

    I hope this brain dump helps more than it hurts.

    Apologies to all of the wonderful folks from the HoloLens group for any and all of the concepts I may have butchered in another overly long post. :p

    Windows Holographic User Group Redmond

    WinHUGR.org - - - - - - - - - - - - - - - - - - @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  • @ahillier
    I'm not certain by any stretch, but the composition and rendering mentioned for the Windows Phone Unity-and-XAML mixture sounds to me to be more about combining two 2D renderings (one of the XAML, and the other a DirectX-rendered representation of the 3D Unity scene) into a single composed image for display on a 2D device like a phone or tablet.

    I am not sure if that would help with displaying 3D volumetric Holograms in the physical world from a 2D app. But hey, maybe.

    There must be someone (but probably a really busy and hard-to-track-down someone) on the team that has the skinny on this one. Thank you for not giving up and for still keeping this one on your radar... you're such an awesome mentor :)

    Windows Holographic User Group Redmond

    WinHUGR.org - - - - - - - - - - - - - - - - - - @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  • Hi @ahillier

    Thanks for the helpful pointers. I quickly browsed the doc, and this seems to provide a solution to my problem. The doc essentially suggests, if I understand it correctly, that you treat the Unity-generated Visual Studio solution just like a normal VS project solution, and you can add any UWP-specific constructs like XAML views. I presume I can create a secondary window with a XAML view to create an extra "2D view". http://docs.unity3d.com/Manual/windowsstore-projecttypes.html This almost sounds like a no-brainer solution (if it works). :smile:

    I'll try to test it when I have a chance, but it appears that this method of development will give me a Unity scene as the primary view ("3D view"). Then, I can add as many "2D views" as I want, and I can use the app switcher API to switch between these views.
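
    By "app switcher API" I mean ApplicationViewSwitcher. Just to sketch the switching part I have in mind (completely untested, and viewToShow is only a placeholder name for whichever CoreApplicationView I want to bring forward), it might look roughly like this:

    // Untested sketch: viewToShow is whichever view (the primary Unity/3D view
    // or one of the added 2D XAML views) should become active next.
    await viewToShow.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, async () =>
    {
        // SwitchAsync takes the Id of the view being switched TO, and here it is
        // invoked on that view's own dispatcher.
        await ApplicationViewSwitcher.SwitchAsync(ApplicationView.GetForCurrentView().Id);
    });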

    Thanks!
    ~Harry+

  • Hi @HoloSheep

    I think you are making this a little more complicated than the way I understand it. The "App Model" doc, https://developer.microsoft.com/en-us/windows/holographic/app_model, clearly defines how the HoloLens system is supposed to work. There are 2D views and 3D views. 2D views live in the "Mixed world", and when you open a 3D view, it takes over the whole space ("Exclusive mode"?). If you "close" a 3D view, it goes back to the mixed world (or, maybe, to the HoloLens Shell screen?). The way I see it, all you have to do is provide an input UI for the user to switch between these 2D/3D views (or implement some kind of trigger/automatic mechanism). For example, an app may launch with a 2D view as its primary view, which includes some kind of menu items. When a user "selects" one of the menu items (e.g., by gesture or voice), the app switches its view from the 2D view (in the mixed world) to the exclusive 3D view, for instance. The 3D view may include an input "control" so that the user can close the view and go back to the mixed world, with the previous 2D view activated, etc.

    My apologies if I am misunderstanding you.

    The caveat is that I don't own a HoloLens, and this is just my imagination. But I am reasonably certain that this is, or should be, the way the HoloLens works.

    ~Harry+

  • @harryplus I am not certain but I think the "App Model" doc at that link may have seen some additions since I first read it about a month ago. Or maybe it didn't register when I first went through it.

    Either way, thank you for pointing it out. Some interesting bits in there.

    I find the use of "light up" a particularly interesting term in the following section:

    If your app started as a universal app for devices other than HoloLens, your primary view is 2D, but you could "light up" on HoloLens by adding an additional app view that's Holographic to show an experience volumetrically.

    I too will have to conduct some tests with both Unity and non-Unity UWP apps to see if I can understand this better, and perhaps resolve my skepticism about whether it is "a no-brainer", or even how feasible or supported it is at this point when starting from a non-Unity UWP app.

    @ahillier a couple of very basic examples from the team demonstrating these scenarios would be a ton of help and would shine some light on the topic.

    Windows Holographic User Group Redmond

    WinHUGR.org - - - - - - - - - - - - - - - - - - @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  • Be aware that exporting the Unity project as a XAML app can cause problems right now with the app failing to start. See here:

    http://forum.unity3d.com/threads/known-issues.394627/

    The article you want to read into is App Model.

    "If the app uses XAML, then the XAML IFrameworkViewSource will control the first view of the app. The app will need to switch to the holographic view before activating the CoreWindow, to ensure the app launches directly into the holographic experience.

    Use CoreApplication::CreateNewView and ApplicationViewSwitcher::SwitchAsync to make it the active view."

    The question is when those two methods get called if you export your Unity project as a XAML project. Unity does a LOT of stuff in the XAML world in a way that you never see. It's all hidden in the UnityBridge class, which is part of their SDK, and it all executes when your XAML app calls UnityBridge.Initialize.

    I'm wondering whether calling CreateNewView and SwitchAsync before UnityBridge initializes will cause a problem, or maybe whether calling them after UnityBridge is initialized will cause a problem. You'd have to play with this and let us know. But definitely check out the App Model page linked above for more details.

    Our Holographic world is here

    RoadToHolo.com      WikiHolo.net      @jbienz
    I work in Developer Experiences at Microsoft. My posts are based on my own experience and don't represent Microsoft or HoloLens.

  • @jbienzms I'm not sure, but the known-issue link from the beginning of the month that you point to might be referring to the same app-start issue described here and here, for which there are now a couple of work-arounds.

    I too saw the paragraph that you highlight in the App Model doc, but was still unclear about what was meant by

    The app will need to switch to the holographic view before activating the CoreWindow

    It didn't really say how to even create a "holographic view" to switch to from a 2D XAML app.

    I understand most of the words, just not the sentences :)

    Windows Holographic User Group Redmond

    WinHUGR.org - - - - - - - - - - - - - - - - - - @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  • I think what it's saying is that just by having a XAML application, you automatically start as a 2D application. The "plumbing" under a XAML app will initialize a FrameworkViewSource that is in 2D mode. If you do nothing else, you will always run in 2D mode. However, calling CreateNewView should create a holographic secondary view.

    What I don't know is which overload of CreateNewView we need to call. There is a version that takes no parameters and one that takes two. I'm worried we have to use the version that takes two parameters, because that version allows you to specify a "runtime type" and we may have to pass some magic string to get the view created as a holographic view. But I haven't had a chance to test any of this yet. Simply creating a view, even with the zero-parameter option, may be all we need to do.

    Once that secondary view is created we can call SwitchAsync to switch between views. SwitchAsync takes in the ID of the view to switch TO, but it's important to realize that you have to call SwitchAsync using the Dispatcher that is on the view we are switching TO (not the one we are starting from). That would look something like this:

    // Create the secondary view, then call SwitchAsync on that view's dispatcher.
    var view = CoreApplication.CreateNewView();

    await view.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, async () =>
    {
        // GetForCurrentView() runs on the new view here, so this is the new view's Id.
        await ApplicationViewSwitcher.SwitchAsync(ApplicationView.GetForCurrentView().Id);
    });

    This article could help, but keep in mind that article assumes the secondary view is also a XAML view. That's why it's creating a Frame in that view, etc.
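
    For the XAML-secondary-view case, a rough (untested) sketch of that pattern might look like the following, where SecondaryPage is just a placeholder for whatever XAML page you add to your own project:

    // Create the secondary view and, on its own dispatcher, give it XAML content
    // and activate its window; remember the Id so you can switch to it later.
    int secondaryViewId = 0;
    var secondaryView = CoreApplication.CreateNewView();

    await secondaryView.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        var frame = new Frame();
        frame.Navigate(typeof(SecondaryPage));   // placeholder page type
        Window.Current.Content = frame;
        Window.Current.Activate();

        secondaryViewId = ApplicationView.GetForCurrentView().Id;
    });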

    Our Holographic world is here

    RoadToHolo.com      WikiHolo.net      @jbienz
    I work in Developer Experiences at Microsoft. My posts are based on my own experience and don't represent Microsoft or HoloLens.

  • @jbienzms

    I am with ya... but...

    I do have experience with the CreateNewView and ApplicationViewSwitcher APIs and with dispatching. I had a UWP app using these calls that was running on the desktop prior to the emulator, HoloLens tools, and documentation coming out.

    Loading that UWP app into the emulator and running it was one of the first things I tried, since it is related to one of my HoloLens projects for work. As I mention elsewhere in the forum, it worked great for spawning 2D views from a 2D app, except for an issue that this API has with threads, which has little to do with HoloLens.

    Each new view created from within our app results in a new 2D window that can be moved around and pinned in its own location.

    Now admittedly I have not tried passing any magic strings into the parameters of the CreateNewView call, but I have it working with 2D views in a 2D app.

    Suppose for a moment that there was a magic parameter to pass to CreateNewView and the secondary view that got created was magically a "holographic view"... then what? That is where I am at, and why I stated above that the App Model docs are unclear. It is also why my earlier explanation of my current understanding speculates that mixed-mode apps are most likely only supported when starting from a Unity 3D XAML app.

    How do you create and control the holographic content that you want to display in that secondary view? Unfortunately, I am pretty sure that 3D XAML is still only available in WPF. Perhaps you could use DirectX, but then you get into the whole "you have to build your own spatial engine" rabbit hole.

    Trust me, I hope I am wrong and that there are simple ways to accomplish hybrid 2D apps starting from a plain (non-Unity) UWP app, but based on a number of explorations and a lot of questions that I have previously asked, I am honestly a little skeptical (but happy to be corrected and cured of my skepticism).

    Windows Holographic User Group Redmond

    WinHUGR.org - - - - - - - - - - - - - - - - - - @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  • @HoloSheep @jbienzms

    Thanks for the links, etc. I have a slightly different take on this: do not use the XAML build type. I can imagine limited, very special use cases where you'll have to use a soft keyboard in a 3D view. But, in most cases, I think we need not, and should not, use keyboards in a 3D view, and hence the "D3D" build type should suffice.

    If you need to accept keyboard input from the user, just create a new XAML-based 2D view, and switch to it for the user input. And, when you are done, switch back to the 3D view. I could be wrong, but for this, I don't think you'll need to use the "XAML" build type.
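
    To make that flow concrete (again untested, and assuming the 2D input view was created ahead of time, with inputView and inputViewId being placeholder names for its CoreApplicationView and its saved Id), it might look roughly like this:

    // From the 3D view: hand off to the 2D XAML view to collect keyboard input.
    // A focused TextBox on that 2D page should bring up the system keyboard.
    await inputView.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, async () =>
    {
        await ApplicationViewSwitcher.SwitchAsync(inputViewId);
    });

    // ...later, once the user is done typing, return to the primary (3D) view.
    await CoreApplication.MainView.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, async () =>
    {
        await ApplicationViewSwitcher.SwitchAsync(ApplicationView.GetForCurrentView().Id);
    });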

    BTW, @HoloSheep, I've been wondering about that "light up scenario" as well. It'll be cool if we can see an example of that 2D -> 3D transition.

    This seems all too abstract. I'll really need HoloLens. When you are in Wave 27 or Wave 312 (jk), it's a bit discouraging. I hope I can purchase HoloLens before I retire. :wink:

    I think Microsoft should really ramp up HoloLens production for people like us.

  • @HoloSheep
    "Suppose for a moment that there was a magic parameter to pass to CreateNewView and the secondary view that got created was magically a "holographic view"... then what?"

    When you export an app from Unity with XAML as the option, it generates a MainPage.xaml with a bunch of code to initialize the UnityBridge. I'm guessing that the code inside UnityBridge puts the window in 3D mode, and that what you would do in your secondary window would be the same things Unity does with its exported page (initialize the bridge on the new page created for the secondary view). This may actually involve creating a Frame for the new window and navigating to that page generated by Unity.

    @harryplus
    Interesting approach. I'm a bit leery of going this route because I've not done enough deep C++ work combining an app that's primarily a DirectX app with XAML. I'm afraid there may be strange issues with threading, or just a lot of work to do. But if this route works for you I'd like to know about it. This is probably the best route when all you need to do is show the keyboard and capture input. But there are other scenarios where I might want to develop a XAML companion version of the app but still pop up some of those views on HoloLens, just because I've already created the UI and that particular piece doesn't greatly benefit from being in 3D. Just being able to show the XAML form and use the same code I've already written in C# for the companion app would be very handy.

    Our Holographic world is here

    RoadToHolo.com      WikiHolo.net      @jbienz
    I work in Developer Experiences at Microsoft. My posts are based on my own experience and don't represent Microsoft or HoloLens.

  • @jbienzms
    Maybe we are not talking about the same thing here.
    2D UWP apps (as I am referring to them) are Visual Studio XAML solutions and do not involve Unity. They do not provide an option to navigate to pages generated by Unity. They have no Unity components, and hence the problem: if there were a magic 3D view in this kind of app, you would be left with an empty 3D window and no way to create 3D content for it, or to control the 3D, spatial, or volumetric environment for that magically created 3D view in a traditional UWP 2D XAML app (short of building your own 3D engine in DirectX).

    It sounds like you are talking about a Unity 3D app with the XAML build flag ... which I have been advocating/speculating all along looks like the only option for a mixed app, if there is one.

    The trick then becomes: how do we mess with the Unity 3D XAML project's generated solution code to switch to our own custom XAML views instead of just the system keyboard?

    @harryplus
    In your last post you mention:

    If you need to accept keyboard input from the user, just create a new XAML-based 2D view, and switch to it for the user input. And, when you are done, switch back to the 3D view. I could be wrong, but for this, I don't think you'll need to use the "XAML" build type.

    Pretty sure that in order to display a XAML window in a Unity 3D app of any kind you need to choose the XAML build type. For example @ahillier posted the following in another thread:

    Using XAML as the build type is the trick for accessing the onscreen keyboard in HoloLens.

    The reason for needing the XAML build type for the keyboard is that the system keyboard is a XAML resource. If you create a custom XAML page of your own with an input box that in turn brings up the system keyboard, you would be leveraging two XAML components (both the custom XAML page and the system keyboard), which would make the XAML build type equally necessary, wouldn't it?

    This too still leaves us with the question of "how to call up a custom XAML page in a Unity 3D XAML app".

    Windows Holographic User Group Redmond

    WinHUGR.org - - - - - - - - - - - - - - - - - - @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  • Hi @HoloSheep @jbienzms

    You two raise slightly different issues regarding my earlier comment, but this is directed to both of you.

    This may seem like a bit of a philosophical question, but I see a "view" (as defined in the HoloLens doc), rather than an app, as the fundamental building block. An app can contain one or more views. But, at any given moment, one view will be active, and as far as HoloLens is concerned that's the view it's dealing with. It does not care, very roughly speaking, whether there are more views in the app.

    Obviously, this is just speculation on my part (and a gross oversimplification), since the HoloLens doc is rather "high level", and I have no way of knowing how the system was designed without having access to their source code (or, maybe, their design doc).

    If you take this viewpoint, what I previously suggested may seem a bit more reasonable. If you create a 3D view in Unity, and if that view requires access to the soft keyboard (XAML-based), then you will need to use the "XAML" build type. If that 3D view does not need the soft keyboard, then you could just use the "D3D" build type. As far as this particular 3D view is concerned, it doesn't care whether the app contains other 2D and 3D views (XAML-based or Unity-created) or whether another view requires the user's keyboard input. As for @ahillier's comment (and other "known issue" references, etc.), I don't think what they are saying is necessarily inconsistent with my interpretation. English, like any language, is an imprecise medium, and there is always a bit of ambiguity.

    Also, I'm not particularly concerned about "threading" issues and whatnot, since if you take this "views are more or less independent" viewpoint, those issues are not really something an app developer has to worry about. The framework (HoloLens and the Unity bridge, etc.) will take care of whatever is needed to provide this "one view at a time" illusion. Again, this is all speculation, but I'm pretty certain that this is what the HoloLens system designers at Microsoft intended.

    I'll probably test this "Unity first and add 2D views from Visual Studio later" approach over the weekend, and I'll let you guys know whether it works or not.

    ~Harry+

  • @harryplus the big long post that I did earlier in this thread attempted to clarify some of the subtleties involved.
    (I guess not as clearly as I intended, sorry :smile: )

    Bottom line...
    Two major kinds of apps:

    • 2D non-Unity UWP
    • 3D Unity UWP

    "View" has a different meaning in the "inside the app" context for each of those kinds of apps.

    The Shell adds yet another context to the meaning of views in relation to those apps.

    Unfortunately it is a little complicated, but that is what the 3rd dimension tends to bring to the party. :smiley:

    Windows Holographic User Group Redmond

    WinHUGR.org - - - - - - - - - - - - - - - - - - @WinHUGR
    WinHUGR YouTube Channel -- live streamed meetings

  • @HoloSheep

    I appreciate (and I am sure others do as well) your effort to share your knowledge. It was clearly rather useful for me in getting a better understanding of HoloLens in general. I feel that I have a clearer picture now than yesterday when I asked this question, through the discourse with you and others. As far as this particular question is concerned, I'm rather satisfied. I think we solved half of the puzzle: I'll verify this, but I am sure that I can add 2D views to an app which includes at least one 3D view created by Unity. As for the other half, how to add a 3D view created by Unity to a general UWP app created in Visual Studio, I am not too concerned at this point. The development model, Unity -> Visual Studio, suits me, at least for now.

    I'll keep you posted.
    ~Harry+

  • Hello,

    I successfully created a 2D UWP app which is able to start a 3D view (DirectX or Unity). I can share the code if needed.

    I was not able to have both "views" (2D and 3D scenes) together.

    Hope this helps.

    Regards

  • @JonathanANTOINE said:
    Hello,

    I successfully created a 2D UWP app which is able to start a 3D view (DirectX or Unity). I can share the code if needed.

    I was not able to have both "views" (2D and 3D scenes) together.

    Hope this helps.

    Regards

    GitHub?

  • @JonathanANTOINE said:
    Hello,

    I successfully created a 2D UWP app which is able to start a 3D view (DirectX or Unity). I can share the code if needed.

    I was not able to have both "views" (2D and 3D scenes) together.

    Hope this helps.

    Regards

    Can you share the code?

    Thanks,
    Sathiya

  • Can you, say, on a timer or some other event, switch to a 3D app, and then after a hologram is done with an animation, or something, switch back to the 2D app?

  • @JonathanANTOINE said:
    Hello,

    I successfully created a 2D UWP app which is able to start a 3D view (DirectX or Unity). I can share the code if needed.

    I was not able to have both "views" (2D and 3D scenes) together.

    Hope this helps.

    Regards


    Please share and save my life
  • Hi @HoloSheep @jbienzms @JonathanANTOINE ,

    The scenario I have is that I want to launch a browser inside the 3D Unity world. In WPF there used to be a way to map a 2D object onto a 3D object (in XAML) that allowed me to do this kind of thing. The scenario could be, for example, that I want users to purchase something inside a game, and to do that they need to use a browser that is placed on a wall inside the 3D environment.

    This shouldn't be hard, but it seems to be impossible to do. I know it can be done, though, because they did it in WPF XAML: https://blogs.msdn.microsoft.com/wpf3d/2006/12/12/interacting-with-2d-on-3d-in-wpf/

    Okay, I know that was 2006, but you'd figure we'd have moved forward by now. This is a common request, I would think...

  • Spawning a 2D XAML control or another 2D application inside your Unity app in the way you describe isn't currently possible.

    ===
    This post provided as-is with no warranties and confers no rights. Using information provided is done at own risk.

    (Daddy, what does 'now formatting drive C:' mean?)

  • I fiddled with this over the weekend and got it to work. Basically, create a new view for your app as described, activate it, and execute the ApplicationViewSwitcher. Blog post: hubs.ly/H0391GP0

