Tag Archives: automotive

Announced: Cognitive load and in-vehicle HMI special interest session at the 2012 ITS World Congress

Continuing the work of the 2011 Cognitive Load and In-vehicle Human-Machine Interaction workshop at AutomotiveUI 2011, Peter Fröhlich and I are co-organizing a special interest session on this topic at this year’s ITS World Congress.

The session will be held on Friday, October 26, 2012. Peter and I were able to secure the participation of an impressive list of panelists. They are (in alphabetical order):

  • Corinne Brusque, Director, IFSTTAR LESCOT, France
  • Johan Engström, Senior Project Manager, Volvo Technology, Sweden
  • James Foley, Senior Principal Engineer, CSRC, Toyota, USA
  • Chris Monk, Project Officer, US DOT
  • Kazumoto Morita, Senior Researcher, National Safety and Environment Laboratory, Japan
  • Scott Pennock, Chairman of the ITU-T Focus Group on Driver Distraction and Senior Hands-Free Standards Specialist at QNX, Canada

The session will be moderated by Peter Fröhlich. We hope that the session will provide a concise update on the state of the art in cognitive load research, and that it will serve as inspiration for future work in this field.

2012 PhD and MS positions

A PhD and an MS position are available in the Project54 lab at the University of New Hampshire. The lab is part of the Electrical and Computer Engineering department at UNH. Successful applicants will explore human-computer interaction in vehicles. We are looking for students with a background in electrical engineering, computer engineering, computer science, or related fields.

The Project54 lab was created in 1999 in partnership with the New Hampshire Department of Safety to improve technology for New Hampshire law enforcement. Project54’s in-car system integrates electronic devices in police cruisers into a single voice-activated system. Project54 also integrates cruisers into agency-wide communication networks. The Project54 system has been deployed in over 1,000 vehicles across more than 180 state and local law enforcement agencies in New Hampshire.

Research focus

Both the PhD and the MS student will focus on the relationship between various in-car user interface characteristics and the cognitive load of interacting with these interfaces, with the goal of designing interfaces that do not significantly increase driver workload. Work will involve developing techniques to estimate cognitive load using performance measures (such as the variance of lane position), physiological measures (such as changes in pupil diameter [1-5]) and subjective measures (such as the NASA-TLX questionnaire).
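To make the first two types of measures concrete, here is a minimal Python sketch (illustrative only, not Project54 code) of a performance-based indicator, the variance of lane position over short windows, and a simple physiological indicator, the change in mean pupil diameter relative to a baseline period. The sampling rate and signal names are assumptions.

```python
import numpy as np

def lane_position_variance(lane_pos, fs=60, window_s=10):
    """Variance of lateral lane position over non-overlapping windows.

    lane_pos : 1-D array of lane position samples (meters)
    fs       : sampling rate in Hz (assumed 60 Hz for illustration)
    window_s : window length in seconds
    """
    n = int(fs * window_s)
    windows = [lane_pos[i:i + n] for i in range(0, len(lane_pos) - n + 1, n)]
    return np.array([np.var(w) for w in windows])

def pupil_dilation(pupil, baseline_slice, task_slice):
    """Change in mean pupil diameter (mm) from a baseline period to a task period."""
    pupil = np.asarray(pupil, dtype=float)
    return pupil[task_slice].mean() - pupil[baseline_slice].mean()
```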

The PhD student will focus on spoken in-vehicle human-computer interaction, and will explore the use of human-human dialogue behavior [6-11] to guide the design process.

The work will utilize experiments in Project54’s world-class driving simulator laboratory, which is equipped with two research driving simulators, three eye trackers, and a physiological data logger.

Appointment

The PhD student will be appointed for four years, and the MS student for two years. Initial appointments will be for one year, starting between June and September 2012. Continuation of funding will be dependent on satisfactory performance. Appointments will be a combination of research and teaching assistantships. Compensation will include tuition, fees, health insurance, and academic-year and summer stipends.

How to apply

For application instructions, and for general information, email Andrew Kun, Project54 Principal Investigator at andrew.kun@unh.edu. Please attach a current CV.

References

[1] Oskar Palinko, Andrew L. Kun, “Exploring the Effects of Visual Cognitive Load and Illumination on Pupil Diameter in Driving Simulators,” ETRA 2012

[2] Andrew L. Kun, Zeljko Medenica, Oskar Palinko, Peter A. Heeman, “Utilizing Pupil Diameter to Estimate Cognitive Load Changes During Human Dialogue: A Preliminary Study,” AutomotiveUI 2011 Adjunct Proceedings

[3] Andrew L. Kun, Peter A. Heeman, Tim Paek, W. Thomas Miller, III, Paul A. Green, Ivan Tashev, Peter Froehlich, Bryan Reimer, Shamsi Iqbal, Dagmar Kern, “Cognitive Load and In-Vehicle Human-Machine Interaction,” AutomotiveUI 2011 Adjunct Proceedings

[4] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

[5] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, “Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator,” ETRA 2010

[6] Andrew L. Kun, Alexander Shyrokov, and Peter A. Heeman, “Interactions between Human-Human Multi-Threaded Dialogues and Driving,” PUC Online First

[7] Andrew L. Kun, Zeljko Medenica, “Video Call, or Not, that is the Question,” to appear in CHI ’12 Extended Abstracts

[8] Fan Yang, Peter A. Heeman, Andrew L. Kun, “An Investigation of Interruptions and Resumptions in Multi-Tasking Dialogues,” Computational Linguistics, 37, 1

[9] Andrew L. Kun, Alexander Shyrokov, Peter A. Heeman, “Spoken Tasks for Human-Human Experiments: Towards In-Car Speech User Interfaces for Multi-Threaded Dialogue,” AutomotiveUI 2010

[10] Fan Yang, Peter A. Heeman, Andrew L. Kun, “Switching to Real-Time Tasks in Multi-Tasking Dialogue,” Coling 2008

[11] Alexander Shyrokov, Andrew L. Kun, Peter Heeman, “Experimental Modeling of Human-Human Multi-Threaded Dialogues in the Presence of a Manual-Visual Task,” SIGdial 2007

Personal and Ubiquitous Computing theme issue: Automotive user interfaces and interactive applications in the car

I’m thrilled to announce that the theme issue of Personal and Ubiquitous Computing entitled “Automotive user interfaces and interactive applications in the car” is now available in PUC’s Online First. I had the pleasure of serving as co-editor of this theme issue with Albrecht Schmidt, Anind Dey, and Susanne Boll.

The theme issue includes our editorial [1] and three papers. The first is by Tuomo Kujala, who explores scrolling on touch screens while driving [2]. The second is by Florian Schaub, Markus Hipp, Frank Kargl, and Michael Weber, who address the issue of credibility in the context of automotive navigation systems [3]. The third paper is co-authored by me, my former PhD student Alex Shyrokov, and Peter Heeman. We explore multi-threaded spoken dialogues between a driver and a remote conversant [4]. The three papers were selected from 17 submissions in a rigorous review process involving approximately 50 reviewers.


References

[1] Andrew L. Kun, Albrecht Schmidt, Anind Dey and Susanne Boll, “Automotive User Interfaces and Interactive Applications in the Car,” PUC Online First

[2] Tuomo Kujala, “Browsing the Information Highway while Driving – Three In-Vehicle Touch Screen Scrolling Methods and Driver Distraction,” PUC Online First

[3] Florian Schaub, Markus Hipp, Frank Kargl, and Michael Weber, “On Credibility Improvements for Automotive Navigation Systems,” PUC Online First

[4] Andrew L. Kun, Alexander Shyrokov, and Peter A. Heeman, “Interactions between Human-Human Multi-Threaded Dialogues and Driving,” PUC Online First

Video calling while driving? Not a good idea.

Do you own a smart phone? If yes, you’re likely to have tried video calling (e.g. with Skype or FaceTime). Video calling is an exciting technology, but as Zeljko Medenica and I show in our CHI 2012 Work-in-Progress paper [1], it’s not a technology you should use while driving.

Zeljko and I conducted a driving simulator experiment in which a driver and another participant were given the verbal task of playing the game of Taboo. The driver and the other participant were in separate rooms and spoke to each other over headsets. In one experimental condition, the driver and the other participant could also see each other, as shown in the figure below. We wanted to find out if, in this condition, drivers would spend a significant amount of time looking at the other participant. This is an important question, as time spent looking at the other participant is time not spent looking at the road ahead!
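As a rough illustration of how glance behavior can be quantified, here is a minimal Python sketch (not our actual analysis code) that summarizes eye tracker output as dwell-time fractions per area of interest (AOI). The AOI labels and sampling rate below are hypothetical.

```python
from collections import Counter

def dwell_fractions(aoi_labels):
    """Fraction of gaze samples falling on each area of interest (AOI)."""
    counts = Counter(aoi_labels)
    total = sum(counts.values())
    return {aoi: count / total for aoi, count in counts.items()}

# Example: ten seconds of 60 Hz gaze samples, labeled by AOI
samples = ["road"] * 540 + ["video"] * 50 + ["other"] * 10
print(dwell_fractions(samples))  # {'road': 0.9, 'video': 0.083..., 'other': 0.016...}
```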

We found that, when drivers felt that the driving task was demanding, they focused on the road ahead. However, when they perceived the driving task to be less demanding they looked at the other participant significantly more.

What this tells us is that, under certain circumstances, drivers are willing to engage in video calls. This is due, at least in part, to the (western) social norm of looking at the person you’re talking to. These results should serve as a warning to interface designers, lawmakers (yes, there’s concern [2]), transportation officials, and drivers that video calling can be a serious distraction from driving.

Here’s a video that introduces the experiment in more detail:

References

[1] Andrew L. Kun, Zeljko Medenica, “Video Call, or Not, that is the Question,” to appear in CHI ’12 Extended Abstracts

[2] Claude Brodesser-Akner, “State Assemblyman: Ban iPhone4 Video-Calling From the Road,” New York Magazine. Date accessed 03/02/2012

Further progress towards disambiguating the effects of cognitive load and light on pupil diameter

In driving simulator studies participants complete both visual and aural tasks. The most obvious visual task is driving itself, but there are others, such as viewing an LCD screen that displays a map. Aural tasks include talking to an in-vehicle computer. I am very interested in estimating the cognitive load of these various tasks. One way to estimate this cognitive load is through changes in pupil diameter: in an effect called the Task-Evoked Pupillary Response (TEPR) [1], the pupil dilates with increased cognitive load.

However, in driving simulator studies participants scan a non-uniformly illuminated visual scene. If unaccounted for, this non-uniformity in illumination might introduce an error in our estimate of the TEPR. Oskar Palinko and I will have a paper at ETRA 2012 [2] extending our previous work [3], in which we established that it is possible to separate the pupil’s light reflex from the TEPR. While in our previous work TEPR was the result of participants’ engagement in an aural task, in our latest experiment TEPR is due to engagement in a visual task.

The two experiments taken together support our main hypothesis that it is possible to disambiguate (and not just separate) the two effects even in complicated environments, such as a driving simulator. We are currently designing further experiments to test this hypothesis.

References

[1] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 91(2), 276-292

[2] Oskar Palinko, Andrew L. Kun, “Exploring the Effects of Visual Cognitive Load and Illumination on Pupil Diameter in Driving Simulators,” to appear at ETRA 2012

[3] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

2011 Cognitive Load and In-Vehicle Human-Machine Interaction workshop

I’m thrilled to announce the 2011 Cognitive Load and In-Vehicle Human-Machine Interaction workshop (CLW 2011) to be held at AutomotiveUI 2011 in Salzburg, Austria. I’m co-organizing the workshop with Peter Heeman, Tim Paek, Tom Miller, Paul Green, Ivan Tashev, Peter Froehlich, Bryan Reimer, Shamsi Iqbal and Dagmar Kern. 

Why have this workshop? Interactions with in-vehicle electronic devices can interfere with the primary task of driving. The concept of cognitive load helps us understand the extent to which these interactions interfere with the driving task and how this interference can be mitigated. While research results on in-vehicle cognitive load are frequently presented at automotive research conferences and in related journals, so far no dedicated forum is available for focused discussions on this topic. This workshop aims to fill that void.

Submissions to the workshop are due October 17. Topics of interest include, but are not limited to:

– Cognitive load estimation in the laboratory,
– Cognitive load estimation on the road,
– Sensing technologies for cognitive load estimation,
– Algorithms for cognitive load estimation,
– Performance measures of cognitive load,
– Physiological measures of cognitive load,
– Visual measures of cognitive load,
– Subjective measures of cognitive load,
– Methods for benchmarking cognitive load,
– Cognitive load of driving,
– Cognitive overload and cognitive underload,
– Approaches to cognitive load management inspired by human-human interactions.

For a detailed description of workshop goals take a look at the call for papers.

Augmented Reality vs. Street View for Personal Navigation Devices

Personal navigation devices (PNDs) are ubiquitous and primarily come in three forms: as built-in devices in vehicles, as brought-in stand-alone devices, or as applications on smart phones.

So what is next for PNDs? In a driving simulator study to be presented at MobileHCI 2011 [1], Zeljko Medenica, Tim Paek, Oskar Palinko and I explored two ideas:

  • Augmented reality PND: An augmented reality PND overlays route guidance on the real world using a head-up display. Our version is simulated and we simply project the route guidance on the simulator screens along with the driving simulation images. Augmented reality PNDs are not yet available commercially for cars.
  • Street-view PND: This PND uses a simplified version of augmented reality. It overlays route guidance on a sequence of still images of the road. The images and overlay are displayed on a head-down display. Google Maps Navigation runs on smart phones and can be used with street view.

The following video demonstrates the two PNDs.

Our findings indicate that augmented reality PNDs allow for excellent visual attention to the road ahead and excellent driving performance. In contrast, street-view PNDs can have a detrimental effect on both. Thus, while further research is clearly needed, it might be best if navigation with a street-view PND were handled by a passenger rather than by the driver.

References

[1] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Augmented Reality vs. Street Views: A Driving Simulator Study Comparing Two Emerging Navigation Aids,” to appear at MobileHCI 2011

Towards disambiguating the effects of cognitive load and light on pupil diameter

Light intensity affects pupil diameter: the pupil contracts in bright environments and dilates in the dark. Interestingly, cognitive load also affects pupil diameter, with the pupil dilating in response to increased cognitive load. This effect is called the task-evoked pupillary response (TEPR) [1]. Thus, changes in pupil diameter are a physiological measure of cognitive load; however, changes in lighting introduce noise into the estimate.

Last week Oskar Palinko gave a talk at Driving Assessment 2011 introducing our work on disambiguating the effects of cognitive load and light on pupil diameter in driving simulator studies [2]. We hypothesized that we can simply subtract the effect of lighting on pupil diameter from the combined effect of light and cognitive load and produce an estimate of cognitive load only. We tested the hypothesis through an experiment in which participants were given three tasks:

  • Cognitive task with varying cognitive load and constant lighting. This task was adapted from the work of Klingner et al. [3]. Participants listened to a voice counting from 1 to 18 repeatedly. Participants were told that every sixth number (6, 12, and 18) might be out of order and were instructed to push a button if they detected an out-of-order number. This task induced increased cognitive load at every sixth number as participants focused on the counting sequence. A new number was read every 1.5 seconds, thus cognitive load (and pupil diameter) increased every 6 x 1.5 sec = 9 seconds.
  • Visual task with constant cognitive load (assuming no daydreaming!) and varying lighting. Participants were instructed to follow a visual target which switched location between a white, a gray and a black truck. The light reaching the participant’s eye varied as the participant’s gaze moved from one truck to another. Participants held their gaze on a truck for 9 seconds, allowing the pupil diameter ample time to settle.
  • Combined task with varying cognitive load and lighting. Participants completed the cognitive and visual tasks in parallel. We synchronized the cognitive and visual tasks such that increases in cognitive load occurred after the pupil diameter stabilized in response to moving the gaze between trucks. Synchronization was straightforward as the cognitive task was periodic with 9 seconds and in the visual task lighting intensity also changed every 9 seconds.
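For readers who prefer code to prose, the sketch below (illustration only, not our published analysis) shows the basic idea behind the subtraction hypothesis: the light-driven pupil response measured during the visual task is subtracted from the combined-task trace, leaving an estimate of the task-evoked component. The trace names and time alignment are assumptions.

```python
import numpy as np

def estimate_tepr(combined_trace, light_only_trace):
    """Estimate the task-evoked pupillary response under varying lighting.

    combined_trace   : mean pupil diameter over the 9-second cycle, combined task (mm)
    light_only_trace : mean pupil diameter over the same cycle, visual (light-only) task (mm)
    Both traces are assumed to be time-locked to the gaze shifts between trucks.
    """
    return np.asarray(combined_trace, dtype=float) - np.asarray(light_only_trace, dtype=float)

# The result can then be compared against the pupil dilation measured directly in
# the cognitive-only task to check how well the subtraction recovers the TEPR.
```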

Our results confirm that, at least in this simple case, our hypothesis holds and we can indeed detect changes in cognitive load under varying lighting conditions. We are planning to extend this work by introducing scenarios in which participants drive in realistic simulated environments. Under such scenarios gaze angles, and thus the amount of light reaching participants’ eyes, will change rapidly, making the disambiguation more complex, and of course more useful.

References

[1] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 91(2), 276-292

[2] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

[3] Jeff Klingner, Rashit Kumar, Pat Hanrahan, “Measuring the Task-Evoked Pupillary Response with a Remote Eye Tracker,” ETRA 2008

Zeljko Medenica advances to candidacy

Last week my PhD student Zeljko Medenica advanced to candidacy. Zeljko plans to create a driving performance measure that would be sensitive to short-lived and/or infrequent degradations in driving performance. In previous driving simulator-based studies [1, 2] we found that glancing away from the road is correlated with worse driving performance. Importantly, this is true even when performance averages over the length of the entire experiment are not affected. Thus, Zeljko plans to explore the use of cross-correlation in creating a new, highly sensitive driving performance measure.
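As a rough sketch of the kind of analysis Zeljko has in mind (hypothetical code, not his actual method), one can cross-correlate a binary off-road glance signal with a lane-keeping error signal to see whether glances are followed, after a short lag, by degraded performance. The signal names, sampling rate, and lag range below are assumptions.

```python
import numpy as np

def glance_driving_xcorr(off_road_glance, lane_error, fs=60, max_lag_s=5):
    """Normalized cross-correlation between a binary off-road glance signal and
    lane-keeping error, for lags up to +/- max_lag_s seconds.

    Returns (lags in seconds, correlation at each lag); positive lags mean the
    lane error follows the glance.
    """
    g = np.asarray(off_road_glance, dtype=float)
    e = np.asarray(lane_error, dtype=float)
    g = (g - g.mean()) / (g.std() + 1e-12)
    e = (e - e.mean()) / (e.std() + 1e-12)
    max_lag = int(fs * max_lag_s)
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = np.array([
        np.mean(g[max(0, -k):len(g) - max(0, k)] * e[max(0, k):len(e) - max(0, -k)])
        for k in lags
    ])
    return lags / fs, xcorr
```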

Zeljko’s PhD committee includes Paul Green (UMTRI), Tim Paek (Microsoft Research), Nicholas Kirsch (UNH) and Tom Miller (UNH). Thanks to all for serving!

References

[1] Andrew L. Kun, Tim Paek, Zeljko Medenica, Nemanja Memarovic, Oskar Palinko, “Glancing at Personal Navigation Devices Can Affect Driving: Experimental Results and Design Implications,” Automotive UI 2009

[2] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Augmented Reality vs. Street Views: A Driving Simulator Study Comparing Two Emerging Navigation Aids,” to appear at MobileHCI 2011

Presentation at the 2011 Emergency Responders Workshop

Yesterday I participated in the work of the 2011 Emergency Responders Workshop (pdf) organized by WisDOT, CVTA and GLTEI. The workshop had two major goals. One was to provide a sampling of state-of-the-art technologies used by emergency responders. The other was to begin charting a path toward developing advanced technologies. Participants from emergency responder agencies, industry and academia discussed their vision for future technologies as well as barriers to progress.

My presentation focused on pervasive (or ubiquitous) computing for law enforcement. I encouraged participants to ask the following question:

“What should be the focus of R&D efforts targeting percom technologies for emergency responders?”

CVTA President Scott McCormick (in picture below) and WisDOT’s John Corbin led the meeting superbly – thanks to both for including me in this effort.

For more pictures from the event visit Flickr.