Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is our mixed reality developer program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

An idea for a movable cursor for the HoloLens via an eye tracker

LBochtler
edited March 2017 in Discussion

I would like to propose an idea for implementing an eye tracker in the HoloLens.

Since I can't contact anyone from the HoloLens team directly, I decided to post here.

So here is my idea:

Since the HoloLens is effectively a binocular HUD with a fixed-position cursor, I thought that adding an eye tracker that uses the HUD beamsplitters to view the eyes might be a good way of moving the cursor to wherever the person is looking, instead of where the person is pointing their head.

My idea is based on either polarizing beamsplitters (option 1) or dichroic beamsplitters (option 2) inserted just before the HUD beamsplitter. (There is probably another beamsplitter type I haven't thought of yet, but the operating principle stays the same.)

In option 1, the camera views the eye in the visible light spectrum via the second beamsplitter, which gives the lens more working distance to the eye. The light from the normal HUD would be polarized, resulting in only a small loss of luminosity. The camera can then use the opposite polarization to view the eye, also with minimal loss of light intensity. The problem with this method, however, is that the eye is illuminated mostly by ambient light, so using it in the dark is not easy. This is solved with option 2.

Option 2 would be to use a dichroic mirror to view the eye in the near-infrared part of the spectrum. This offers a few benefits: lower light loss from the HUD screen, and the ability to directly illuminate the eye. The illumination can be done in a few ways, such as retro-illumination and/or direct LED illumination (LEDs at an angle relative to the centre line of the HUD beamsplitter). This works because the human eye cannot pick up infrared light in any meaningful way, so the infrared illumination would not produce glare, allowing for a better user experience.

The only thing I have not yet found a good solution for is discarding random eye movement and only registering the position the user actually wants to look at. It would be very distracting for the cursor to be flying about the HUD all the time. Once this problem is solved, I believe this would be a good addition to the functionality and usability of the HoloLens.
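
For illustration, one common approach in eye-tracking software is a dispersion-threshold fixation filter: the cursor only follows the gaze once samples have stayed inside a small window for a minimum dwell time, so quick saccades between objects never move it. A minimal Python sketch, with all thresholds and names purely illustrative (none of this is a HoloLens API):

```python
from collections import deque

class FixationFilter:
    """Dispersion-threshold fixation detection (I-DT style).

    Gaze samples are 2D points in normalized display coordinates.
    The cursor target only updates when recent samples stay inside
    a small dispersion window for at least `min_duration` seconds.
    All thresholds below are illustrative guesses, not measured values.
    """

    def __init__(self, max_dispersion=0.03, min_duration=0.15, sample_rate_hz=60):
        self.max_dispersion = max_dispersion          # window size in normalized units
        self.window_len = int(min_duration * sample_rate_hz)
        self.samples = deque(maxlen=self.window_len)  # most recent gaze samples
        self.fixation = None                          # last accepted cursor target

    def update(self, gaze_xy):
        """Feed one gaze sample; return the current cursor target."""
        self.samples.append(gaze_xy)
        if len(self.samples) < self.window_len:
            return self.fixation

        xs = [p[0] for p in self.samples]
        ys = [p[1] for p in self.samples]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))

        if dispersion <= self.max_dispersion:
            # The eye has dwelled long enough: accept the centroid as the new target.
            self.fixation = (sum(xs) / len(xs), sum(ys) / len(ys))
        # Otherwise the eye is still saccading; keep the previous target.
        return self.fixation
```

The dispersion threshold trades responsiveness against stability: a larger window or longer dwell time means the cursor moves less often, but more deliberately.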

In addition, the optical system needed for this eye tracker shouldn't add too much bulk to the HoloLens (if a small sensor is used). I am unsure about the processing resources, though; the eye tracking might need to be done on a separate SoC so as not to bog down the main SoC with processing the eye tracker as well. This might not be necessary, but I don't know enough about the HoloLens SoC and its utilization to make that judgement.

Edit:
If anyone wants an illustration of my concept, just say so, and I'll sketch it.

Comments

  • Peter_NZ ✭✭✭

    I believe eye tracking was one of the features Microsoft was going to include in V2 (now rolled into V3). However, eye tracking is a feature I DON'T WANT, so when Microsoft implements it I hope they give users the ability to turn it off. The reason being: it is common when using the HoloLens to also be using a computer, looking at some paper, or talking with colleagues, and I don't want a white dot in the middle of the view - which is what you would end up getting with eye tracking!

  • @Peter_NZ I think it all comes down to implementation. One way I could see eye tracking being useful is to guide the cursor when in the 'ready' gesture. You'd probably want to slowly lerp toward the eye target from the default gaze vector, as I agree that having the cursor bouncing all over the place would be distracting (a rough sketch of that blend is below).

    But at least in the case where you have the intent to tap on something, your eyes are probably already looking at whatever it is you are trying to tap. If it means you don't have to turn your head as much, or at all, then that sounds like a good deal.
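
    To make that concrete, here is a rough Python sketch of what the slow lerp could look like, assuming the app exposes a head-gaze point and an eye-gaze estimate on the display plane. All names, rates, and signatures here are made up for illustration; nothing is from the actual HoloLens SDK.

    ```python
    def blend_cursor(head_gaze, eye_gaze, ready_gesture_active, blend, dt,
                     blend_in_per_s=4.0, blend_out_per_s=8.0):
        """Ease the cursor from the head-gaze point toward the eye-gaze target.

        `head_gaze` and `eye_gaze` are (x, y) points on the display plane.
        `blend` is the current mix factor (0 = head only, 1 = eye only); it is
        returned updated so the caller can carry it across frames. The rates
        are illustrative, not tuned values.
        """
        target = 1.0 if ready_gesture_active else 0.0
        rate = blend_in_per_s if ready_gesture_active else blend_out_per_s
        step = rate * dt
        # Move the blend factor toward its target at a fixed rate per second.
        blend = min(target, blend + step) if blend < target else max(target, blend - step)

        # Linear interpolation between the two gaze points by the blend factor.
        cursor = (head_gaze[0] + (eye_gaze[0] - head_gaze[0]) * blend,
                  head_gaze[1] + (eye_gaze[1] - head_gaze[1]) * blend)
        return cursor, blend
    ```

    Tying the blend factor to the ready gesture would also keep the cursor on the plain head-gaze ray the rest of the time, which speaks to the concern above about not wanting the dot to chase the eyes while reading or talking to colleagues.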

  • Peter_NZ ✭✭✭
    edited March 2017

    @thebanjomatic I see your point and it has some merit. I think your idea of slowly transitioning between gaze and eye tracking would be key in making this work successfully.

  • LBochtler
    edited March 2017

    @thebanjomatic I do like your idea. I have an alternative to offer: using gaze time per point to draw the cursor toward the current viewing direction. Basically, the point you are consciously looking at will accumulate the most time per sample frame (not the camera frame). The eye will jolt around a bit, viewing other things briefly in comparison to the main target, and those points would drag the cursor in their direction based on how long the user looked at them over the course of a frame. This would allow faster cursor action while removing the jumping-about problem, though it would require a fast image sensor to capture the eye movements.

    Based on what I read about your idea @thebanjomatic, your method would have the cursor move relatively slowly, as every camera frame nudges the cursor toward its new position. My idea instead uses a stack of frames and their timing measurements to reposition the cursor once every integration frame (an integration frame would be every 100 ms, as an example, made up of 24 to 100 sub-frames). I would think this creates a more natural cursor-movement sensation than having the cursor move slowly (relatively speaking, of course) to the desired object.

    However, this would come with a higher cost for the eye-tracking camera, as well as needing more computational power to process the eye trackers. Also, I forgot to add that more RAM would be needed to act as a frame buffer for up to 4 sample frames, as well as for the program memory itself. This whole thing should probably be done on a dedicated processor, not the main SoC, thus freeing up resources on the SoC.

    Having this active while using the ready gesture is definitely a good idea.

    Edit 2:

    Another idea is to just use the most-viewed spot per sample frame as the new cursor position, skipping the math needed to nudge the cursor based on the view time of all positions. That way, it could be done on the main SoC directly, though the frames would still need to be stored in memory, as well as the program. (A rough sketch of both variants follows below.)
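
    To make the two variants concrete, here is a rough Python sketch of one integration frame: sub-frame gaze samples are binned, and the cursor is either nudged by dwell-time weights (the first idea) or snapped outright to the most-viewed bin (the Edit 2 variant). The bin size, gain, and frame lengths are placeholders, not real HoloLens figures.

    ```python
    from collections import Counter

    BIN = 0.02  # gaze bin size in normalized display units (placeholder)

    def integrate_frame(samples, cursor, nudge_gain=0.5, snap_mode=False):
        """Reposition the cursor once per integration frame (e.g. every 100 ms).

        `samples` is the list of (x, y) gaze points captured during the frame
        (the 24 to 100 sub-frames mentioned above). Each coarse gaze bin pulls
        the cursor toward it in proportion to how long the eye rested there;
        with `snap_mode=True` the cursor instead jumps to the most-viewed bin.
        """
        if not samples:
            return cursor

        # Count how many sub-frames landed in each coarse gaze bin.
        bins = Counter((round(x / BIN), round(y / BIN)) for x, y in samples)

        if snap_mode:
            # Edit 2 variant: take the most-viewed spot outright.
            (bx, by), _ = bins.most_common(1)[0]
            return (bx * BIN, by * BIN)

        # Weighted variant: every bin drags the cursor in proportion to dwell time.
        total = sum(bins.values())
        dx = sum((bx * BIN - cursor[0]) * n for (bx, by), n in bins.items()) / total
        dy = sum((by * BIN - cursor[1]) * n for (bx, by), n in bins.items()) / total
        return (cursor[0] + nudge_gain * dx, cursor[1] + nudge_gain * dy)
    ```

    With snap_mode=True this reduces to the Edit 2 variant and only needs the per-bin counts, which is presumably why it could plausibly run on the main SoC without the extra math.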
