
LED Augmented Reality: Video Posted

During the 2012-2013 academic year I worked with a team of UNH ECE seniors to explore using an LED array as a low-cost heads-up display that would provide in-vehicle turn-by-turn navigation instructions. Our work will be published in the proceedings of AutomotiveUI 2013 [1]. Here’s the video introducing the experiment.

References

[1] Oskar Palinko, Andrew L. Kun, Zachary Cook, Adam Downey, Aaron Lecomte, Meredith Swanson, Tina Tomaszewski, “Towards Augmented Reality Navigation Using Affordable Technology,” AutomotiveUI 2013

Personal and Ubiquitous Computing theme issue: Automotive user interfaces and interactive applications in the car

I’m thrilled to announce that the theme issue of Personal and Ubiquitous Computing entitled “Automotive user interfaces and interactive applications in the car” is now available in PUC’s Online First. I had the pleasure of serving as co-editor of this theme issue with Albrecht Schmidt, Anind Dey, and Susanne Boll.

The theme issue includes our editorial [1] and three papers. The first is by Tuomo Kujala, who explores scrolling on touch screens while driving [2]. The second is by Florian Schaub, Markus Hipp, Frank Kargl, and Michael Weber, who address the issue of credibility in the context of automotive navigation systems [3]. The third paper is co-authored by me, my former PhD student Alex Shyrokov, and Peter Heeman; in it we explore multi-threaded spoken dialogues between a driver and a remote conversant [4]. The three papers were selected from 17 submissions in a rigorous review process involving approximately 50 reviewers.


References

[1] Andrew L. Kun, Albrecht Schmidt, Anind Dey and Susanne Boll, “Automotive User Interfaces and Interactive Applications in the Car,” PUC Online First

[2] Tuomo Kujala, “Browsing the Information Highway while Driving – Three In-Vehicle Touch Screen Scrolling Methods and Driver Distraction,” PUC Online First

[3] Florian Schaub, Markus Hipp, Frank Kargl, and Michael Weber, “On Credibility Improvements for Automotive Navigation Systems,” PUC Online First

[4] Andrew L. Kun, Alexander Shyrokov, and Peter A. Heeman, “Interactions between Human-Human Multi-Threaded Dialogues and Driving,” PUC Online First

Video calling while driving? Not a good idea.

Do you own a smart phone? If yes, you’re likely to have tried video calling (e.g. with Skype or FaceTime). Video calling is an exciting technology, but as Zeljko Medenica and I show in our CHI 2012 Work-in-Progress paper [1], it’s not a technology you should use while driving.

Zeljko and I conducted a driving simulator experiment in which a driver and another participant were given the verbal task of playing the game of Taboo. The driver and the other participant were in separate rooms and spoke to each other over headsets. In one experimental condition, the driver and the other participant could also see each other, as shown in the figure below. We wanted to find out whether, in this condition, drivers would spend a significant amount of time looking at the other participant. This is an important question, as time spent looking at the other participant is time not spent looking at the road ahead!
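To make the eye-glance question concrete, here is a minimal Python sketch of how gaze time could be split between the road ahead and the video-call display, assuming eye-tracker samples that have already been labeled with an area of interest (AOI). The AOI labels, sampling rate, and function name are illustrative assumptions, not the analysis code used in the study.

    # Hypothetical sketch: share of gaze time per area of interest (AOI),
    # computed from eye-tracker samples recorded at a fixed rate.
    from collections import Counter

    def gaze_time_shares(aoi_samples, sample_period_s=1.0 / 60):
        """aoi_samples: iterable of labels such as 'road', 'video_call', 'other'."""
        counts = Counter(aoi_samples)
        total = sum(counts.values())
        shares = {aoi: n / total for aoi, n in counts.items()}
        return shares, total * sample_period_s

    # Toy example: 70% of samples on the road, 25% on the video-call display.
    shares, duration_s = gaze_time_shares(
        ["road"] * 70 + ["video_call"] * 25 + ["other"] * 5)
    print(shares, f"over {duration_s:.2f} s")

Every sample spent on the video-call display directly reduces the share of samples on the road, which is exactly the trade-off the experiment probes.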

We found that, when drivers felt that the driving task was demanding, they focused on the road ahead. However, when they perceived the driving task to be less demanding, they looked at the other participant significantly more.

What this tells us is that, under certain circumstances, drivers are willing to engage in video calls. This is due, at least in part, to the (western) social norm of looking at the person you’re talking to. These results should serve as a warning to interface designers, lawmakers (yes, there’s concern [2]), transportation officials, and drivers that video calling can be a serious distraction from driving.

Here’s a video that introduces the experiment in more detail:

References

[1] Andrew L. Kun, Zeljko Medenica, “Video Call, or Not, that is the Question,” to appear in CHI ’12 Extended Abstracts

[2] Claude Brodesser-Akner, “State Assemblyman: Ban iPhone4 Video-Calling From the Road,” New York Magazine. Date accessed 03/02/2012

Augmented Reality vs. Street View for Personal Navigation Devices

Personal navigation devices (PNDs) are ubiquitous and primarily come in three forms: as built-in devices in vehicles, as brought-in stand-alone devices, or as applications on smart phones.

So what is next for PNDs? In a driving simulator study to be presented at MobileHCI 2011 [1], Zeljko Medenica, Tim Paek, Oskar Palinko and I explored two ideas:

  • Augmented reality PND: An augmented reality PND overlays route guidance on the real world using a head-up display. Our version is simulated and we simply project the route guidance on the simulator screens along with the driving simulation images. Augmented reality PNDs are not yet available commercially for cars.
  • Street-view PND: This PND uses a simplified version of augmented reality. It overlays route guidance on a sequence of still images of the road. The images and overlay are displayed on a head-down display. Google Maps Navigation runs on smart phones and can be used with street view.

The following video demonstrates the two PNDs.

Our findings indicate that augmented reality PNDs allow for excellent visual attention to the road ahead and excellent driving performance. In contrast, street-view PNDs can have a detrimental effect on both. Thus, while further research is clearly needed, it might be best if navigation with a street-view PND were handled by a passenger and not by the driver.

References

[1] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Augmented Reality vs. Street Views: A Driving Simulator Study Comparing Two Emerging Navigation Aids,” to appear at MobileHCI 2011

Towards disambiguating the effects of cognitive load and light on pupil diameter

Light intensity affects pupil diameter: the pupil contracts in bright environments and dilates in the dark. Interestingly, cognitive load also affects pupil diameter, with the pupil dilating in response to increased cognitive load. This effect is called the task-evoked pupillary response (TEPR) [1]. Thus, changes in pupil diameter provide a physiological measure of cognitive load; however, changes in lighting introduce noise into the estimate.

Last week Oskar Palinko gave a talk at Driving Assessment 2011 introducing our work on disambiguating the effects of cognitive load and light on pupil diameter in driving simulator studies [2]. We hypothesized that we can simply subtract the effect of lighting on pupil diameter from the combined effect of light and cognitive load and produce an estimate of cognitive load only. We tested the hypothesis through an experiment in which participants were given three tasks:

  • Cognitive task with varying cognitive load and constant lighting. This task was adapted from the work of Klingner et al. [3]. Participants listened to a voice counting from 1 to 18 repeatedly. Participants were told that every sixth number (6, 12, and 18) might be out of order and were instructed to push a button if they detected an out-of-order number. This task induced increased cognitive load at every sixth number as participants focused on the counting sequence. A new number was read every 1.5 seconds, so cognitive load (and pupil diameter) increased every 6 x 1.5 sec = 9 seconds (a minimal sketch of this schedule follows the list).
  • Visual task with constant cognitive load (assuming no daydreaming!) and varying lighting. Participants were instructed to follow a visual target which switched location between a white, a gray and a black truck. The light reaching the participant’s eye varied as the participant’s gaze moved from one truck to another. Participants held their gaze on a truck for 9 seconds, allowing the pupil diameter ample time to settle.
  • Combined task with varying cognitive load and lighting. Participants completed the cognitive and visual tasks in parallel. We synchronized the cognitive and visual tasks such that increases in cognitive load occurred after the pupil diameter stabilized in response to moving the gaze between trucks. Synchronization was straightforward since the cognitive task had a 9-second period and, in the visual task, lighting intensity also changed every 9 seconds.
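As a rough illustration, here is a small sketch of the counting-task stimulus schedule described in the first bullet above. The swap rule (replacing a probe number with an earlier one with some probability) and the parameter names are assumptions for illustration only; the actual task followed Klingner et al.'s design [3].

    # Hypothetical sketch of the auditory counting-task schedule: numbers 1-18
    # read every 1.5 s, where every sixth number (6, 12, 18) may be out of order.
    import random

    def counting_task_schedule(n_cycles=2, interval_s=1.5, p_out_of_order=0.5):
        """Yield (time_s, spoken_number, is_probe) tuples."""
        t = 0.0
        for _ in range(n_cycles):
            for n in range(1, 19):
                is_probe = n % 6 == 0
                spoken = n - 2 if is_probe and random.random() < p_out_of_order else n
                yield t, spoken, is_probe
                t += interval_s

    for time_s, number, probe in counting_task_schedule(n_cycles=1):
        print(f"{time_s:5.1f} s  {number:2d}" + ("  <- probe position" if probe else ""))

Probe positions recur every 6 x 1.5 s = 9 s, which is what lets the cognitive task line up with the 9-second lighting changes of the visual task.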

Our results confirm that, at least in this simple case, our hypothesis holds and we can indeed detect changes in cognitive load under varying lighting conditions. We are planning to extend this work by introducing scenarios in which participants drive in realistic simulated environments. Under such scenarios gaze angles, and thus the amount of light reaching participants’ eyes, will change rapidly, making the disambiguation more complex, and of course more useful.
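As a toy illustration of the subtraction idea, here is a minimal sketch, assuming aligned pupil-diameter recordings from the visual-only task (light changes only) and from the combined task. The variable names and the simple averaging of light-only repetitions into a baseline are assumptions, not the exact analysis in the paper [2].

    # Toy sketch of subtracting the light-driven pupil response from the
    # combined (light + cognitive load) response to estimate the load component.
    import numpy as np

    def cognitive_load_component(combined_mm, light_only_mm):
        """Average the light-only repetitions into a baseline and subtract it."""
        light_baseline = np.mean(light_only_mm, axis=0)
        return combined_mm - light_baseline

    # One 9 s segment sampled at 10 Hz: a light-driven pupil transient that
    # settles toward 3.0 mm, plus (in the combined signal) a small dilation
    # late in the segment.
    t = np.linspace(0, 9, 90)
    light_only = np.tile(3.0 + 0.4 * np.exp(-t / 2.0), (5, 1))   # 5 repetitions
    combined = light_only[0] + 0.15 * (t > 6)                    # extra dilation after 6 s
    print(cognitive_load_component(combined, light_only)[-5:])   # ~0.15 mm near the end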

References

[1] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, Vol. 91(2), 1982, 276-292

[2] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

[3] Jeff Klingner, Rakshit Kumar, Pat Hanrahan, “Measuring the Task-Evoked Pupillary Response with a Remote Eye Tracker,” ETRA 2008

Co-chairing AutomotiveUI 2010

On November 11 and 12 I was at the AutomotiveUI 2010 conference serving as program co-chair with Susanne Boll. The conference was hosted by Anind Dey at CMU and co-chaired by Albrecht Schmidt.

The conference was successful and really fun. I could go on about all the great papers and posters (including two posters from our group at UNH [1,2]), but in this post I'll only mention two talks: John Krumm's keynote and, selfishly, my own talk (this is my blog after all).

John gave an overview of his work with data from GPS sensors. He discussed predicting where people will go, his experiences with location privacy, and creating road maps from GPS data. Given that John is, according to his own website, the “all seeing, all knowing, master of time, space, and dimension,” this was indeed a very informative talk 😉 OK, in all seriousness, the talk was excellent. I find John's work on predicting people's destinations and selected routes the most interesting. One really interesting consequence of accurate predictions, shared by many people in the cloud, would be for cloud-hosted routing algorithms: if such an algorithm knew where all of us were going at any instant, it could propose routes that make more efficient use of the roads, reduce pollution, and so on.

My talk focused on collaborative work with Alex Shyrokov and Peter Heeman on multi-threaded dialogues. Specifically, I talked about designing spoken tasks for human-human dialogue experiments for Alex's PhD work [3]. Alex wanted to observe how pairs of subjects switch between two dialogue threads while one of the subjects is also engaged in operating a simulated vehicle. Our hypothesis is that observed human-human dialogue behaviors can serve as the starting point for designing computer dialogue behaviors for in-car spoken dialogue systems. One of the suggestions we put forth in the paper is that the tasks for human-human experiments should be engaging: these are the types of tasks that result in interesting dialogue behaviors and can thus teach us something about how humans manage multi-threaded dialogues.

Next year the conference moves back to Europe. The host will be Manfred Tscheligi in Salzburg, Austria. Judging by the number of submissions this year and the quality of the conference, we can look forward to many interesting papers next year, both from industry and from academia. Also, the location will be excellent – just think Mozart, Sound of Music (see what Rick Steves has to say), and world-renowned Christmas markets!

References

[1] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Comparing Augmented Reality and Street View Navigation,” AutomotiveUI 2010 Adjunct Proceedings

[2] Oskar Palinko, Sahil Goyal, Andrew L. Kun, “A Pilot Study of the Influence of Illumination and Cognitive Load on Pupil Diameter in a Driving Simulator,” AutomotiveUI 2010 Adjunct Proceedings

[3] Andrew L. Kun, Alexander Shyrokov, Peter A. Heeman, “Spoken Tasks for Human-Human Experiments: Towards In-Car Speech User Interfaces for Multi-Threaded Dialogue,” AutomotiveUI 2010

Automotive user interfaces SIG meeting to be held at CHI 2010

There will be a special interest group (SIG) meeting on automotive user interfaces at CHI 2010. The lead author of the paper describing the aims of the SIG [1] is Albrecht Schmidt and the list of coauthors includes Anind Dey, Wolfgang Spiessl and me. CHI SIGs are 90 minute scheduled sessions during the conference. They are an opportunity for researchers with a common interest to meet face-to-face and engage in dialog.

Our SIG deals with human-computer interaction in the car. This is an exciting field of study that was the topic of a CHI 2008 SIG [2] as well as the AutomotiveUI 2009 conference [3], and the AutomotiveUI 2010 CFP will be posted very soon. In the last several years the amount of human-computer interaction in the car has grown for two main reasons. First, many cars now come equipped with myriad electronic devices, such as displays indicating power usage and advanced driver assistance systems. Second, users (drivers and passengers) bring mobile devices into cars. The list of these brought-in mobile devices is long, but personal navigation devices and mp3 players are probably the most common.

At the SIG we hope to discuss user interface issues that are the result of having all of these devices in cars. Some of the questions are:

  • How can we reduce (or eliminate) driver distraction caused by the in-car devices?
  • Can driver interactions with in-car devices actually improve driving performance?
  • Can users take advantage of novel technologies, such as streaming videos from other cars?
  • How do we build interfaces that users can trust and will thus actually use?
  • How can car manufacturers, OEMs, brought-in device manufacturers and academia collaborate in envisioning, creating and implementing automotive user interfaces?

The 2008 CHI SIG [2] attracted over 60 people and we’re hoping for similar (or better!) turnout.

References

[1] Albrecht Schmidt, Anind K. Dey, Andrew L. Kun, Wolfgang Spiessl, “Automotive User Interfaces: Human Computer Interaction in the Car,” CHI 2010 Extended Abstracts (to appear)

[2] D. M. Krum, J. Faenger, B. Lathrop, J. Sison, A. Lien, “All roads lead to CHI: interaction in the automobile,” CHI 2008 Extended Abstracts

[3] Albrecht Schmidt, Anind Dey, Thomas Seder, Oskar Juhlin, “Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2009),” 2009

Estimating cognitive load using pupillometry: paper accepted to ETRA 2010

Our short paper [1] on using changes in pupil diameter to estimate cognitive load was accepted to the Eye Tracking Research and Applications 2010 (ETRA 2010) conference. The lead author is Oskar Palinko and the co-authors are my PhD student Alex Shyrokov, my OHSU collaborator Peter Heeman, and me.

In previous experiments in our lab we have concentrated on performance measures to evaluate the effects of secondary tasks on the driver. Secondary tasks are those performed in addition to driving, e.g. interacting with a personal navigation device. However, as Jackson Beatty has shown, when people's cognitive load increases their pupils dilate [2]. This fascinating phenomenon provides a physiological measure of cognitive load. Why is it important to have multiple measures of cognitive load? As Christopher Wickens points out [3], having multiple measures allows us to avoid circular arguments such as “… saying that a task interferes more because of its higher resource demand, and its resource demand is inferred to be higher because of its greater interference.”

In a driving simulator experiment conducted by Alex, we found that performance-based and pupillometry-based (that is, physiological) measures of cognitive load show high correspondence for tasks that last tens of seconds. In other words, both driving performance measures and pupil size changes appear to track changes in cognitive load. In the experiment the driver is involved in two spoken tasks in addition to the manual-visual task of driving. We hypothesized that different parts of these two spoken tasks present different levels of cognitive load for the driver; our measurements of driving performance and pupil diameter changes appear to confirm this hypothesis. Additionally, we introduced a new pupillometry-based cognitive load measure that shows promise for tracking changes in cognitive load on time scales of several seconds.
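To illustrate what a measure operating on a several-second time scale might look like, here is a hypothetical sliding-window sketch (the net rate of pupil-diameter change per window). This is an assumed example for illustration, not necessarily the measure introduced in the paper [1].

    # Hypothetical windowed pupillometric measure: net pupil-diameter change
    # rate (mm/s) over consecutive few-second windows.
    import numpy as np

    def windowed_change_rate(pupil_mm, fs_hz, window_s=4.0):
        win = int(window_s * fs_hz)
        starts = range(0, len(pupil_mm) - win + 1, win)
        return np.array([(pupil_mm[s + win - 1] - pupil_mm[s]) / window_s for s in starts])

    # Toy signal: 30 s at 60 Hz with a dilation ramp between 10 s and 20 s.
    fs = 60
    t = np.arange(0, 30, 1 / fs)
    pupil = 3.0 + 0.3 * np.clip((t - 10) / 10, 0, 1)
    print(windowed_change_rate(pupil, fs))   # positive only while the pupil is dilating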

In Alex’s experiment one of the spoken tasks required participants to ask and answer yes/no questions. We hypothesize that different phases of this task also present different levels of cognitive load to the driver. Will this be evident in driving performance and pupillometric data? We hope to find out soon!

References

[1] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, “Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator,” ETRA 2010

[2] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, Vol. 91(2), 1982, 276-292

[3] Christopher D. Wickens, “Multiple Resources and Performance Prediction,” Theoretical Issues in Ergonomics Science, 2002, Vol. 3, No. 2, 159-177