Category Archives: eye tracking

First lecture in BME autonomous robots and vehicles lab

Today was my first lecture in BME's Autonomous Robots and Vehicles Lab (Autonóm robotok és járművek laboratórium). This lab is led by Bálint Kiss, who is my host during my Fulbright scholarship in Hungary.

Today’s lecture covered the use of eye trackers in designing human-computer interaction. I talked about our work on in-vehicle human-computer interaction, and drew parallels to human-robot interaction. Tomorrow I’ll introduce the class to our Seeing Machines eye tracker, and in the coming weeks I’ll run a number of lab sections in which the students will conduct short experiments in eye tracking and pupil diameter measurement.

Here's the overview of today's lecture, translated from the Hungarian original (I'm thrilled to be teaching in Hungarian):

Using eye trackers in the evaluation of human-computer interaction

Researchers at the University of New Hampshire have been working on in-vehicle human-machine interfaces for more than a decade. This lecture first provides a brief overview of the development and deployment of the Project54 system, designed for police vehicles. The system provides user interfaces in multiple modalities, including speech. The lecture then reports on recent driving simulator experiments in which we used data from the simulator and from an eye tracker to estimate the driver's cognitive load, driving performance, and visual attention to the outside world.

Through this lecture, students gain insight into the use of eye trackers in evaluating and designing human-computer interaction. Human-computer interaction is, in turn, a central problem in the successful deployment of autonomous robots, since autonomous robots will not be used only by experts. On the contrary, these robots will find users in all parts of society. Such widespread deployment of robots can only succeed if the human-machine interaction is acceptable to its users.

Video calling while driving? Not a good idea.

Do you own a smart phone? If yes, you’re likely to have tried video calling (e.g. with Skype or FaceTime). Video calling is an exciting technology, but as Zeljko Medenica and I show in our CHI 2012 Work-in-Progress paper [1], it’s not a technology you should use while driving.

Zeljko and I conducted a driving simulator experiment in which a driver and another participant were given the verbal task of playing the game of Taboo. The driver and the passenger were in separate rooms and spoke to each other over headsets. In one experimental condition, the driver and the other participant could also see each other as shown in the figure below. We wanted to find out if in this condition drivers would spend a significant amount of time looking at the other participant. This is an important question, as time spent looking at the other participant is time not spent looking at the road ahead!

We found that, when drivers felt that the driving task was demanding, they focused on the road ahead. However, when they perceived the driving task to be less demanding they looked at the other participant significantly more.
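To give a sense of how such glance behavior can be quantified: with a fixed-rate eye tracker, the fraction of samples whose gaze falls in each area of interest equals the fraction of time spent looking there. Below is a minimal sketch of that computation; the region labels and the data are hypothetical illustrations, not our actual analysis code.

```python
from collections import Counter

def dwell_proportions(gaze_regions):
    """Fraction of gaze samples falling in each area of interest.

    gaze_regions: per-sample labels from an eye tracker, e.g.
    "road" or "video_call". With a fixed sampling rate, sample
    fractions equal time fractions.
    """
    counts = Counter(gaze_regions)
    total = sum(counts.values())
    return {region: n / total for region, n in counts.items()}

# Hypothetical 10-sample trace: 70% road, 30% video call partner.
samples = ["road"] * 7 + ["video_call"] * 3
print(dwell_proportions(samples))  # {'road': 0.7, 'video_call': 0.3}
```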

What this tells us is that, under certain circumstances, drivers are willing to engage in video calls. This is due, at least in part, to the (western) social norm of looking at the person you’re talking to. These results should serve as a warning to interface designers, lawmakers (yes, there’s concern [2]), transportation officials, and drivers that video calling can be a serious distraction from driving.

Here’s a video that introduces the experiment in more detail:

References

[1] Andrew L. Kun, Zeljko Medenica, “Video Call, or Not, that is the Question,” to appear in CHI ’12 Extended Abstracts

[2] Claude Brodesser-Akner, “State Assemblyman: Ban iPhone4 Video-Calling From the Road,” New York Magazine. Date accessed 03/02/2012

Further progress towards disambiguating the effects of cognitive load and light on pupil diameter

In driving simulator studies participants complete both visual and aural tasks. The most obvious visual task is driving itself, but there are others, such as viewing an LCD screen that displays a map. Aural tasks include talking to an in-vehicle computer. I am very interested in estimating the cognitive load of these various tasks. One way to estimate this cognitive load is through changes in pupil diameter: in an effect called the Task-Evoked Pupillary Response (TEPR) [1], the pupil dilates with increased cognitive load.
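To make the idea concrete, here is a minimal sketch of a TEPR-style analysis: average the pupil diameter just after a task onset and subtract a pre-onset baseline. The sampling rate, window lengths, and simulated data are my illustrative assumptions, not the exact pipeline we use.

```python
import numpy as np

def tepr(pupil_mm, task_onsets_s, fs=60, baseline_s=1.0, window_s=2.0):
    """Baseline-corrected task-evoked pupillary responses.

    For each task onset (in seconds), subtract the mean pupil diameter
    in the interval just before the onset from the mean diameter in the
    window just after it. Positive values indicate dilation, which we
    read as increased cognitive load.
    """
    responses = []
    for onset in task_onsets_s:
        i = int(onset * fs)
        baseline = pupil_mm[max(0, i - int(baseline_s * fs)):i].mean()
        evoked = pupil_mm[i:i + int(window_s * fs)].mean()
        responses.append(evoked - baseline)
    return np.array(responses)

# Hypothetical 60 Hz pupil trace with simulated 0.3 mm dilations
# starting at 10 s and 20 s.
t = np.arange(0, 30, 1 / 60)
pupil = 3.5 + 0.05 * np.random.randn(t.size)
for onset in (10.0, 20.0):
    pupil[(t >= onset) & (t < onset + 2.0)] += 0.3
print(tepr(pupil, [10.0, 20.0]))  # roughly [0.3, 0.3]
```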

However, in driving simulator studies participants scan a non-uniformly illuminated visual scene. If unaccounted for, this non-uniformity in illumination might introduce an error in our estimate of the TEPR. Oskar Palinko and I will have a paper at ETRA 2012 [2] extending our previous work [3], in which we established that it is possible to separate the pupil’s light reflex from the TEPR. While in our previous work TEPR was the result of participants’ engagement in an aural task, in our latest experiment TEPR is due to engagement in a visual task.

The two experiments taken together support our main hypothesis that it is possible to disambiguate (and not just separate) the two effects even in complicated environments, such as a driving simulator. We are currently designing further experiments to test this hypothesis.

References

[1] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 91(2), 276-292, 1982

[2] Oskar Palinko, Andrew L. Kun, “Exploring the Effects of Visual Cognitive Load and Illumination on Pupil Diameter in Driving Simulators,” to appear at ETRA 2012

[3] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

Augmented Reality vs. Street View for Personal Navigation Devices

Personal navigation devices (PNDs) are ubiquitous and primarily come in three forms: as built-in devices in vehicles, as brought-in stand-alone devices, or as applications on smart phones.

So what is next for PNDs? In a driving simulator study to be presented at MobileHCI 2011 [1], Zeljko Medenica, Tim Paek, Oskar Palinko and I explored two ideas:

  • Augmented reality PND: An augmented reality PND overlays route guidance on the real world using a head-up display. Our version is simulated and we simply project the route guidance on the simulator screens along with the driving simulation images. Augmented reality PNDs are not yet available commercially for cars.
  • Street-view PND: This PND uses a simplified version of augmented reality. It overlays route guidance on a sequence of still images of the road. The images and overlay are displayed on a head-down display. Google Maps Navigation runs on smart phones and can be used with street view.

The following video demonstrates the two PNDs.

Our findings indicate that augmented reality PNDs allow for excellent visual attention to the road ahead and excellent driving performance. In contrast, street-view PNDs can have a detrimental effect on both. Thus, while further research is clearly needed, it might be best if navigation with a street-view PND were handled by a passenger and not by the driver.

References

[1] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Augmented Reality vs. Street Views: A Driving Simulator Study Comparing Two Emerging Navigation Aids,” to appear at MobileHCI 2011

Towards disambiguating the effects of cognitive load and light on pupil diameter

Light intensity affects pupil diameter: the pupil contracts in bright environments and dilates in the dark. Interestingly, cognitive load also affects pupil diameter, with the pupil dilating in response to increased cognitive load. This effect is called the task-evoked pupillary response (TEPR) [1]. Thus, changes in pupil diameter are a physiological measure of cognitive load; however, changes in lighting introduce noise into the estimate.

Last week Oskar Palinko gave a talk at Driving Assessment 2011 introducing our work on disambiguating the effects of cognitive load and light on pupil diameter in driving simulator studies [2]. We hypothesized that we can simply subtract the effect of lighting on pupil diameter from the combined effect of light and cognitive load, producing an estimate of cognitive load alone. We tested the hypothesis through an experiment in which participants were given three tasks (a sketch of the subtraction idea follows the list):

  • Cognitive task with varying cognitive load and constant lighting. This task was adapted from the work of Klingner et al. [3]. Participants listened to a voice counting from 1 to 18 repeatedly. Participants were told that every sixth number (6, 12, and 18) might be out of order and were instructed to push a button if they detected an out-of-order number. This task induced increased cognitive load at every sixth number as participants focused on the counting sequence. A new number was read every 1.5 seconds, thus cognitive load (and pupil diameter) increased every 6 × 1.5 = 9 seconds.
  • Visual task with constant cognitive load (assuming no daydreaming!) and varying lighting. Participants were instructed to follow a visual target that switched location between a white, a gray, and a black truck. The light reaching the participant’s eye varied as the participant’s gaze moved from one truck to another. Participants held their gaze on each truck for 9 seconds, allowing the pupil diameter ample time to settle.
  • Combined task with varying cognitive load and lighting. Participants completed the cognitive and visual tasks in parallel. We synchronized the two tasks so that increases in cognitive load occurred after the pupil diameter had stabilized in response to moving the gaze between trucks. Synchronization was straightforward, as the cognitive task had a 9-second period and lighting intensity in the visual task also changed every 9 seconds.
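The subtraction idea itself is simple. Here's a minimal sketch; the per-truck pupil diameters are hypothetical placeholders standing in for the settled values measured in the visual-only task, not our measured data.

```python
# Sketch of the disambiguation hypothesis: subtract the light-driven
# pupil diameter (measured in the visual-only task) from the diameter
# observed in the combined task, leaving an estimate of the TEPR.
# All numbers below are hypothetical placeholders.

# Mean settled pupil diameter (mm) while fixating each truck in the
# visual-only task: the brighter the target, the smaller the pupil.
light_only_mm = {"white": 3.0, "gray": 3.4, "black": 3.9}

def tepr_estimate(combined_mm, truck):
    """Estimate the task-evoked component of pupil diameter by
    removing the light reflex predicted for the fixated truck."""
    return combined_mm - light_only_mm[truck]

# Combined task: participant fixates the gray truck while the
# counting task raises cognitive load.
print(tepr_estimate(3.55, "gray"))  # ~0.15 mm dilation attributed to load
```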

Our results confirm that, at least in this simple case, our hypothesis holds and we can indeed detect changes in cognitive load under varying lighting conditions. We are planning to extend this work by introducing scenarios in which participants drive in realistic simulated environments. Under such scenarios gaze angles, and thus the amount of light reaching participants’ eyes, will change rapidly, making the disambiguation more complex, and of course more useful.

References

[1] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 91(2), 276-292, 1982

[2] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

[3] Jeff Klingner, Rakshit Kumar, Pat Hanrahan, “Measuring the Task-Evoked Pupillary Response with a Remote Eye Tracker,” ETRA 2008

Visit to FTW, Vienna

On June 4, 2010 I visited the Telecommunications Research Center Vienna (FTW). My host was Peter Froehlich, Senior Researcher in FTW’s User-Centered Interaction area of activity. Peter and I met at the CHI SIG meeting on automotive user interfaces [1] that I helped organize.

Peter and his colleagues are investigating automotive navigation aids and are currently preparing for an on-road study. I’m happy to report that this study will utilize one of our eye trackers. My visit provided an opportunity for us to discuss this upcoming study and how the eye tracker may be useful in evaluating the research hypotheses. Part of this discussion was a Telecommunications Forum talk I gave – see the slides below:

I want to thank Peter and his colleagues at FTW for hosting me and I’m looking forward to our upcoming collaboration. I also want to thank FTW for providing funding for my visit.

References

[1] Albrecht Schmidt, Anind K. Dey, Andrew L. Kun, Wolfgang Spiessl, “Automotive User Interfaces: Human Computer Interaction in the Car,” CHI 2010 Extended Abstracts