Category Archives: pupillometry

UNH IRES: HCI summer student research experience in Germany

HCI Lab, Stuttgart

UNH ECE professor Tom Miller and I were recently awarded an NSF International Research Experiences for Students (IRES) grant. Our IRES grant will fund students conducting research at the University of Stuttgart in Germany.

Albrecht Schmidt

Under our NSF IRES grant, each summer between 2014 and 2017, three undergraduate and three graduate students will conduct research for just under 9 weeks at the Human Computer Interaction (HCI) Lab of Professor Albrecht Schmidt at the University of Stuttgart. Professor Schmidt and his lab are among the world leaders in the field of HCI.

Student research will focus on two areas: in-vehicle speech interaction and speech interaction with public displays. For in-vehicle speech, students will relate the benefits and limitations of speech interaction with in-vehicle devices to real-world parameters, such as how well speech recognition works at any given moment. They will also work to identify why talking to a passenger appears to reduce the probability of a crash, and how we might use this information to create safer in-vehicle speech interactions. Similarly, students will explore how speech interaction can enable smooth interaction with electronic public displays.

Stuttgart Palace Square (Stefan Fussan: https://www.flickr.com/photos/derfussi/)

Successful applicants will receive full financial support for participation, covering airfare, room and board, and health insurance, as well as a $500/week stipend. The total value of the financial package is approximately $8,500 for the 9 weeks.

Details about the program, including application instructions, are available here. Please note that this program is only available to US citizens and permanent residents. If you have questions, please contact Andrew Kun (andrew dot kun at unh dot edu) or Tom Miller (tom dot miller at unh dot edu).

First lecture in BME autonomous robots and vehicles lab

Today was my first lecture in BME's Autonomous Robots and Vehicles Lab (Autonóm robotok és járművek laboratórium). This lab is led by Bálint Kiss, who is my host during my Fulbright scholarship in Hungary.

Today’s lecture covered the use of eye trackers in designing human-computer interaction. I talked about our work on in-vehicle human-computer interaction, and drew parallels to human-robot interaction. Tomorrow I’ll introduce the class to our Seeing Machines eye tracker, and in the coming weeks I’ll run a number of lab sections in which the students will conduct short experiments in eye tracking and pupil diameter measurement.

Here's the overview of today's lecture, translated from the Hungarian (I'm thrilled to be teaching in Hungarian):

Using eye trackers in the evaluation of human-computer interaction

Researchers at the University of New Hampshire have been working on in-vehicle human-machine interfaces for more than a decade. This lecture first gives a brief overview of the development and deployment of the Project54 system, designed for police vehicles. The system provides user interfaces in multiple modalities, including speech. The lecture then reports on recent driving simulation experiments in which we estimated the driver's cognitive load, driving performance, and visual attention to the outside world based on data from the simulator and an eye tracker.

Through this lecture, students gain insight into the use of eye trackers in the evaluation and design of human-computer interaction. Human-computer interaction is, in turn, a central problem in the successful deployment of autonomous robots, since autonomous robots will not be used by experts alone. On the contrary, these robots will find users in every part of society. Such widespread deployment of robots can only succeed if the human-machine interaction is acceptable to the users.

Further progress towards disambiguating the effects of cognitive load and light on pupil diameter

In driving simulator studies participants complete both visual and aural tasks. The most obvious visual task is driving itself, but there are others, such as viewing an LCD screen that displays a map. Aural tasks include talking to an in-vehicle computer. I am very interested in estimating the cognitive load of these various tasks. One way to estimate this cognitive load is through changes in pupil diameter: in an effect called the Task-Evoked Pupillary Response (TEPR) [1], the pupil dilates with increased cognitive load.
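
As a rough illustration of how a TEPR can be quantified from eye tracker data, here is a minimal Python sketch that compares mean pupil diameter in a short window after a task begins against a pre-task baseline. The function name, the window lengths, and the assumption that blink artifacts have already been removed are choices made for the example, rather than the exact pipeline from our papers.

    import numpy as np

    def tepr_estimate(pupil_mm, t, task_onset, baseline_s=1.0, window_s=3.0):
        """Baseline-corrected pupil dilation around a task onset.

        pupil_mm   : pupil diameter samples (mm), blinks already removed
        t          : sample timestamps (s), same length as pupil_mm
        task_onset : time (s) at which the task begins
        Returns the mean dilation (mm) in the task window relative to the
        pre-task baseline; larger values suggest higher cognitive load.
        """
        pupil_mm = np.asarray(pupil_mm, dtype=float)
        t = np.asarray(t, dtype=float)
        baseline = pupil_mm[(t >= task_onset - baseline_s) & (t < task_onset)].mean()
        task = pupil_mm[(t >= task_onset) & (t < task_onset + window_s)].mean()
        return task - baseline

Averaging such estimates over many task onsets gives a more stable picture of the load-driven dilation.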

However, in driving simulator studies participants scan a non-uniformly illuminated visual scene. If unaccounted for, this non-uniformity in illumination might introduce an error in our estimate of the TEPR. Oskar Palinko and I will have a paper at ETRA 2012 [2] extending our previous work [3], in which we established that it is possible to separate the pupil’s light reflex from the TEPR. While in our previous work TEPR was the result of participants’ engagement in an aural task, in our latest experiment TEPR is due to engagement in a visual task.

The two experiments taken together support our main hypothesis that it is possible to disambiguate (and not just separate) the two effects even in complicated environments, such as a driving simulator. We are currently designing further experiments to test this hypothesis.

References

[1] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 91(2), 1982, 276-292

[2] Oskar Palinko, Andrew L. Kun, “Exploring the Effects of Visual Cognitive Load and Illumination on Pupil Diameter in Driving Simulators,” to appear at ETRA 2012

[3] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

Towards disambiguating the effects of cognitive load and light on pupil diameter

Light intensity affects pupil diameter: the pupil contracts in bright environments and dilates in the dark. Interestingly, cognitive load also affects pupil diameter, with the pupil dilating in response to increased cognitive load. This effect is called the task-evoked pupillary response (TEPR) [1]. Thus, changes in pupil diameter serve as a physiological measure of cognitive load; however, changes in lighting introduce noise into the estimate.

Last week Oskar Palinko gave a talk at Driving Assessment 2011 introducing our work on disambiguating the effects of cognitive load and light on pupil diameter in driving simulator studies [2]. We hypothesized that we can simply subtract the effect of lighting on pupil diameter from the combined effect of light and cognitive load, producing an estimate of cognitive load alone (a short sketch of this subtraction follows the task list below). We tested the hypothesis through an experiment in which participants were given three tasks:

  • Cognitive task with varying cognitive load and constant lighting. This task was adapted from the work of Klingner et al. [3]. Participants listened to a voice counting from 1 to 18 repeatedly. Participants were told that every sixth number (6, 12, and 18) might be out of order and were instructed to push a button if they detected an out-of-order number. This task induced increased cognitive load at every sixth number as participants focused on the counting sequence. A new number was read every 1.5 seconds, thus cognitive load (and pupil diameter) increased every 6 x 1.5 sec = 9 seconds.
  • Visual task with constant cognitive load (assuming no daydreaming!) and varying lighting. Participants were instructed to follow a visual target which switched location between a white, a gray and a black truck. The light reaching the participant’s eye varied as the participant’s gaze moved from one truck to another. Participants held their gaze on a truck for 9 seconds, allowing the pupil diameter ample time to settle.
  • Combined task with varying cognitive load and lighting. Participants completed the cognitive and visual tasks in parallel. We synchronized the two tasks so that increases in cognitive load occurred after the pupil diameter had stabilized in response to moving the gaze between trucks. Synchronization was straightforward, since the cognitive task had a 9-second period and the lighting intensity in the visual task also changed every 9 seconds.
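
To make the subtraction hypothesis concrete, here is a minimal Python sketch. It averages each pupil trace over the repeated 9-second cycles of its task and then subtracts the light-only (visual task) cycle from the combined-task cycle, leaving an estimate of the load-driven dilation. The function names, the sampling-rate handling, and the cycle averaging are illustrative assumptions, not the analysis reported in the paper.

    import numpy as np

    def mean_cycle(pupil_mm, fs, period_s=9.0):
        """Average a pupil diameter trace over its repeated task cycles.

        pupil_mm : pupil diameter samples (mm), blinks already removed
        fs       : eye tracker sampling rate (Hz)
        Returns the mean 9-second cycle as a 1-D array.
        """
        n = int(round(fs * period_s))
        n_cycles = len(pupil_mm) // n
        trace = np.asarray(pupil_mm[:n_cycles * n], dtype=float)
        return trace.reshape(n_cycles, n).mean(axis=0)

    def cognitive_component(pupil_combined, pupil_light_only, fs):
        """Estimate of the load-driven (TEPR) dilation over one cycle:
        the combined-task cycle minus the time-aligned light-only cycle."""
        return mean_cycle(pupil_combined, fs) - mean_cycle(pupil_light_only, fs)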

Our results confirm that, at least in this simple case, our hypothesis holds and we can indeed detect changes in cognitive load under varying lighting conditions. We are planning to extend this work by introducing scenarios in which participants drive in realistic simulated environments. Under such scenarios gaze angles, and thus the amount of light reaching participants’ eyes, will change rapidly, making the disambiguation more complex, and of course more useful.

References

[1] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 91(2), 1982, 276-292

[2] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

[3] Jeff Klingner, Rakshit Kumar, Pat Hanrahan, “Measuring the Task-Evoked Pupillary Response with a Remote Eye Tracker,” ETRA 2008

Estimating cognitive load using pupillometry: paper accepted to ETRA 2010

Our short paper [1] on using changes in pupil diameter to estimate cognitive load was accepted to the Eye Tracking Research and Applications 2010 (ETRA 2010) conference. The lead author is Oskar Palinko, and the co-authors are my PhD student Alex Shyrokov, my OHSU collaborator Peter Heeman, and me.

In previous experiments in our lab we have concentrated on performance measures to evaluate the effects of secondary tasks on the driver. Secondary tasks are those performed in addition to driving, e.g. interacting with a personal navigation device. However, as Jackson Beatty has shown, when people's cognitive load increases, their pupils dilate [2]. This fascinating phenomenon provides a physiological measure of cognitive load. Why is it important to have multiple measures of cognitive load? As Christopher Wickens points out [3], this allows us to avoid circular arguments such as “… saying that a task interferes more because of its higher resource demand, and its resource demand is inferred to be higher because of its greater interference.”

In a driving simulator experiment conducted by Alex, we found that performance-based and pupillometry-based (that is, physiological) cognitive load measures show high correspondence for tasks that lasted tens of seconds. In other words, both driving performance measures and pupil size changes appear to track changes in cognitive load. In the experiment the driver is involved in two spoken tasks in addition to the manual-visual task of driving. We hypothesize that different parts of these two spoken tasks present different levels of cognitive load for the driver. Our measurements of driving performance and pupil diameter changes appear to confirm this hypothesis. Additionally, we introduced a new pupillometry-based cognitive load measure that shows promise for tracking changes in cognitive load on time scales of several seconds.
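
As an illustration of what checking this correspondence might look like, here is a small Python sketch that computes a per-segment driving performance measure and a per-segment mean pupil diameter, and then correlates the two across task segments. The segment boundaries, the lane-keeping variability metric, and the correlation check are assumptions made for the example; the measures used in the paper may differ.

    import numpy as np

    def per_segment_measures(lane_pos, pupil_mm, segments):
        """Per-task-segment driving performance and pupil-based measures.

        lane_pos : lane position samples for one drive
        pupil_mm : time-aligned pupil diameter samples (mm)
        segments : list of (start_idx, end_idx) pairs, one per task segment
        Returns arrays of lane-keeping variability and mean pupil diameter.
        """
        perf = np.array([np.std(lane_pos[a:b]) for a, b in segments])
        pupil = np.array([np.mean(pupil_mm[a:b]) for a, b in segments])
        return perf, pupil

    # Correspondence between the two measures across segments can then be
    # summarized with a simple correlation coefficient:
    # perf, pupil = per_segment_measures(lane_pos, pupil_mm, segments)
    # r = np.corrcoef(perf, pupil)[0, 1]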

In Alex’s experiment one of the spoken tasks required participants to ask and answer yes/no questions. We hypothesize that different phases of this task also present different levels of cognitive load to the driver. Will this be evident in driving performance and pupillometric data? We hope to find out soon!

References

[1] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, “Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator,” ETRA 2010

[2] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 91(2), 1982, 276-292

[3] Christopher D. Wickens, “Multiple Resources and Performance Prediction,” Theoretical Issues in Ergonomics Science, 3(2), 2002, 159-177