Category Archives: simulator

Zeljko Medenica defends dissertation

Last November Zeljko Medenica defended his dissertation [1]. Zeljko explored new performance measures that can be used to characterize interactions with in-vehicle devices. The impetus for this work came from our work with personal navigation devices. Specifically, in work published in 2009 [2] we found fairly large differences in the time drivers spend looking at the road ahead (more for voice-only turn-by-turn directions, less when there’s also a map displayed). However, the commonly used driving performance measures (average variance of lane position and steering wheel angle) did not indicate differences between these conditions. We thought that driving might still be affected, and Zeljko’s work confirms this hypothesis.
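To make the idea of a cross-correlation based measure concrete, here is a rough sketch, not the exact formulation from the dissertation: it computes a normalized cross-correlation between steering wheel angle and lane position signals (the signal names, sampling rate, and lag range below are assumptions).

```python
import numpy as np

def normalized_cross_correlation(steering, lane_pos, max_lag):
    """Normalized cross-correlation between two driving signals.

    steering, lane_pos: 1-D arrays sampled at the same rate
    max_lag:            largest lag to evaluate, in samples
    Returns (lags, correlations), with correlations roughly in [-1, 1].
    """
    s = (np.asarray(steering, float) - np.mean(steering)) / np.std(steering)
    p = (np.asarray(lane_pos, float) - np.mean(lane_pos)) / np.std(lane_pos)
    n = len(s)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.empty(len(lags))
    for i, lag in enumerate(lags):
        if lag < 0:
            corr[i] = np.mean(s[-lag:] * p[:n + lag])
        else:
            corr[i] = np.mean(s[:n - lag] * p[lag:]) if lag < n else 0.0
    return lags, corr

# Hypothetical example: 60 s of signals at 20 Hz, lane position lagging steering
fs = 20
t = np.arange(0, 60, 1 / fs)
steering = np.sin(0.5 * t) + 0.1 * np.random.randn(len(t))
lane_pos = np.sin(0.5 * (t - 1.5)) + 0.1 * np.random.randn(len(t))
lags, corr = normalized_cross_correlation(steering, lane_pos, max_lag=3 * fs)
print("peak correlation %.2f at lag %.2f s" % (corr.max(), lags[np.argmax(corr)] / fs))
```

The intuition is that the strength and lag of the peak correlation capture how promptly steering corrections translate into changes in lane position, which aggregate variance measures can miss.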

Zeljko is now with Nuance, working with Garrett Weinberg. Garrett and Zeljko collaborated during Zeljko’s internships at MERL (where Garrett worked prior to joining Nuance) in 2009 and 2010.

I would like to thank Zeljko’s committee for all of their contributions: Paul Green, Tim Paek, Tom Miller, and Nicholas Kirsch. Below is a photo of all of us after the defense. See more photos on Flickr.

Tim Paek (left), Zeljko Medenica, Andrew Kun, Tom Miller, Nicholas Kirsch, and Paul Green (on the laptop)

 

References

[1] Zeljko Medenica,  “Cross-Correlation Based Performance Measures for Characterizing the Influence of In-Vehicle Interfaces on Driving and Cognitive Workload,” Doctoral Dissertation, University of New Hampshire, 2012

[2] Andrew L. Kun, Tim Paek, Zeljko Medenica, Nemanja Memarovic, Oskar Palinko, “Glancing at Personal Navigation Devices Can Affect Driving: Experimental Results and Design Implications,” Automotive UI 2009

New York Times article discusses our work on in-vehicle navigation devices

Last week I was interviewed by Randall Stross for an article that appeared in the September 2 edition of the New York Times. Mr. Stross’ article, “When GPS Confuses, You May Be to Blame,” discusses research on in-vehicle personal navigation devices, including our work on comparing voice-only instructions to map+voice instructions [1].

Specifically, Mr. Stross reports on a driving simulator study published at AutomotiveUI 2009, in which we found that drivers spent significantly more time looking at the road ahead when navigation instructions were provided using a voice-only interface than when both voice instructions and a map were available. In fact, with voice-only instructions drivers spent about 4 more seconds per minute looking at the road ahead. Furthermore, we found evidence that this difference in the time spent looking at the road ahead also affected driving performance measures. These results led us to conclude that voice-only instructions might be safer to use than voice+map instructions. However, the majority of our participants preferred having a map in addition to the voice instructions.
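To put the 4-seconds-per-minute figure in context, the underlying computation is simply the number of seconds per minute during which the driver’s gaze falls on the road-ahead area of interest. Here is a minimal sketch of that computation, assuming gaze samples that have already been labeled with an area of interest (the AOI labels and the 60 Hz sampling rate are assumptions, not details from the paper):

```python
# Seconds spent looking at the road ahead, per minute of driving.
# Assumes gaze samples already labeled with an area of interest (AOI);
# the AOI names and 60 Hz sampling rate are assumptions for this sketch.

def road_ahead_seconds_per_minute(aoi_labels, fs=60):
    """aoi_labels: sequence of per-sample AOI strings, e.g. 'road' or 'device'.
    Returns a list with seconds spent on the road ahead for each full minute."""
    samples_per_minute = 60 * fs
    results = []
    for start in range(0, len(aoi_labels) - samples_per_minute + 1, samples_per_minute):
        window = aoi_labels[start:start + samples_per_minute]
        on_road = sum(1 for label in window if label == "road")
        results.append(on_road / fs)  # convert samples to seconds
    return results

# Toy example: two one-minute blocks, the second with more glances at a device
labels = ["road"] * 3300 + ["device"] * 300 + ["road"] * 3000 + ["device"] * 600
print(road_ahead_seconds_per_minute(labels))  # -> [55.0, 50.0]
```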

This latter finding was the impetus for a follow-on study in which we explored projecting navigation instructions onto the real world scene (using augmented reality) [2]. We found that augmented reality navigation aids allow for excellent visual attention to the road ahead and excellent driving performance.

References

[1] Andrew L. Kun, Tim Paek, Zeljko Medenica, Nemanja Memarovic, Oskar Palinko, “Glancing at Personal Navigation Devices Can Affect Driving: Experimental Results and Design Implications,” Automotive UI 2009

[2] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Augmented Reality vs. Street Views: A Driving Simulator Study Comparing Two Emerging Navigation Aids,” MobileHCI 2011

Video calling while driving? Not a good idea.

Do you own a smart phone? If yes, you’re likely to have tried video calling (e.g. with Skype or FaceTime). Video calling is an exciting technology, but as Zeljko Medenica and I show in our CHI 2012 Work-in-Progress paper [1], it’s not a technology you should use while driving.

Zeljko and I conducted a driving simulator experiment in which a driver and another participant were given the verbal task of playing the game of Taboo. The driver and the passenger were in separate rooms and spoke to each other over headsets. In one experimental condition, the driver and the other participant could also see each other as shown in the figure below. We wanted to find out if in this condition drivers would spend a significant amount of time looking at the other participant. This is an important question, as time spent looking at the other participant is time not spent looking at the road ahead!

We found that, when drivers felt that the driving task was demanding, they focused on the road ahead. However, when they perceived the driving task to be less demanding they looked at the other participant significantly more.

What this tells us is that, under certain circumstances, drivers are willing to engage in video calls. This is due, at least in part, to the (western) social norm of looking at the person you’re talking to. These results should serve as a warning to interface designers, lawmakers (yes, there’s concern [2]), transportation officials, and drivers that video calling can be a serious distraction from driving.

Here’s a video that introduces the experiment in more detail:

References

[1] Andrew L. Kun, Zeljko Medenica, “Video Call, or Not, that is the Question,” to appear in CHI ’12 Extended Abstracts

[2] Claude Brodesser-Akner, “State Assemblyman: Ban iPhone4 Video-Calling From the Road,” New York Magazine. Date accessed 03/02/2012

Augmented Reality vs. Street View for Personal Navigation Devices

Personal navigation devices (PNDs) are ubiquitous and primarily come in three forms: as built-in devices in vehicles, as brought-in stand-alone devices, or as applications on smart phones.

So what is next for PNDs? In a driving simulator study to be presented at MobileHCI 2011 [1], Zeljko Medenica, Tim Paek, Oskar Palinko and I explored two ideas:

  • Augmented reality PND: An augmented reality PND overlays route guidance on the real world using a head-up display. Our version is simulated: we simply project the route guidance onto the simulator screens along with the driving simulation images (a minimal sketch of this projection step follows the list). Augmented reality PNDs are not yet available commercially for cars.
  • Street-view PND: This PND uses a simplified version of augmented reality. It overlays route guidance on a sequence of still images of the road. The images and overlay are displayed on a head-down display. Google Maps Navigation runs on smart phones and can be used with street view.
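The projection step mentioned above for the simulated augmented reality PND amounts to mapping route points from the simulated world into screen coordinates so the guidance appears to lie on the road. Here is a minimal pinhole-projection sketch of that step; the coordinate conventions and camera parameters are assumptions, not the setup actually used in the study.

```python
import numpy as np

def project_route_point(point_world, cam_pos, cam_yaw, f_px, cx, cy):
    """Project a route point (x, y, z in meters, world frame) onto the screen.

    cam_pos: camera (driver eye point) position in the world frame
    cam_yaw: camera heading in radians (rotation about the vertical axis)
    f_px:    focal length in pixels; cx, cy: screen center in pixels
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    (All conventions here are assumed for illustration.)
    """
    # Translate into the camera frame, then rotate by -yaw about the up axis.
    dx, dy, dz = np.asarray(point_world, float) - np.asarray(cam_pos, float)
    cos_y, sin_y = np.cos(-cam_yaw), np.sin(-cam_yaw)
    x_cam = cos_y * dx - sin_y * dy      # lateral offset from the view axis
    depth = sin_y * dx + cos_y * dy      # distance along the view direction
    z_cam = dz                           # height offset
    if depth <= 0.1:                     # behind (or too close to) the camera
        return None
    u = cx + f_px * x_cam / depth
    v = cy - f_px * z_cam / depth
    return u, v

# Route point 30 m ahead, 1.5 m to the right, 1.2 m below eye level,
# drawn on a 1920x1080 screen
print(project_route_point((1.5, 30.0, -1.2), cam_pos=(0, 0, 0), cam_yaw=0.0,
                          f_px=1000, cx=960, cy=540))
```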

The following video demonstrates the two PNDs.

Our findings indicate that augmented reality PNDs allow for excellent visual attention to the road ahead and excellent driving performance. In contrast, street-view PNDs can have a detrimental effect on both. Thus, while further research is clearly needed, it might be best if navigation with a street-view PND were handled by a passenger and not by the driver.

References

[1] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Augmented Reality vs. Street Views: A Driving Simulator Study Comparing Two Emerging Navigation Aids,” to appear at MobileHCI 2011

Co-chairing AutomotiveUI 2010

On November 11 and 12 I was at the AutomotiveUI 2010 conference serving as program co-chair with Susanne Boll. The conference was hosted by Anind Dey at CMU and co-chaired by Albrecht Schmidt.

The conference was successful and really fun. I could go on about all the great papers and posters (including two posters from our group at UNH [1,2]), but in this post I’ll only mention two: John Krumm’s keynote talk and, selfishly, my own talk (this is my blog after all). John gave an overview of his work with data from GPS sensors. He discussed his work on predicting where people will go, as well as his experiences with location privacy and with creating road maps. Given that John is, according to his own website, the “all seeing, all knowing, master of time, space, and dimension,” this was indeed a very informative talk 😉 OK, in all seriousness, the talk was excellent. I find John’s work on predicting people’s destinations and selected routes the most interesting. One particularly interesting consequence of accurate predictions, with many people sharing such data in the cloud, would be for cloud-hosted routing algorithms: if such an algorithm knew where all of us were going at any instant, it could propose routes that make efficient use of roads, reduce pollution, and so on.

My talk focused on collaborative work with Alex Shyrokov and Peter Heeman on multi-threaded dialogues. Specifically, I talked about designing spoken tasks for human-human dialogue experiments for Alex’s PhD work [3]. Alex wanted to observe how pairs of subjects switch between two dialogue threads, while one of the subjects is also engaged in operating a simulated vehicle. Our hypothesis is that observed human-human dialogue behaviors can be used as the starting point for designing computer dialogue behaviors for in-car spoken dialogue systems. One of the suggestions we put forth in the paper is that the tasks for human-human experiments should be engaging. These are the types of tasks that will result in interesting dialogue behaviors and can thus teach us something about how humans manage multi-threaded dialogues.

Next year the conference moves back to Europe. The host will be Manfred Tscheligi in Salzburg, Austria. Judging by the number of submissions this year and the quality of the conference, we can look forward to many interesting papers next year, both from industry and from academia. Also, the location will be excellent – just think Mozart, Sound of Music (see what Rick Steves has to say), and world-renowned Christmas markets!

References

[1] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Comparing Augmented Reality and Street View Navigation,” AutomotiveUI 2010 Adjunct Proceedings

[2] Oskar Palinko, Sahil Goyal, Andrew L. Kun, “A Pilot Study of the Influence of Illumination and Cognitive Load on Pupil Diameter in a Driving Simulator,” AutomotiveUI 2010 Adjunct Proceedings

[3] Andrew L. Kun, Alexander Shyrokov, Peter A. Heeman, “Spoken Tasks for Human-Human Experiments: Towards In-Car Speech User Interfaces for Multi-Threaded Dialogue,” AutomotiveUI 2010

Estimating cognitive load using pupillometry: paper accepted to ETRA 2010

Our short paper [1] on using changes in pupil diameter to estimate cognitive load was accepted to the Eye Tracking Research and Applications 2010 (ETRA 2010) conference. The lead author is Oskar Palinko, and the co-authors are my PhD student Alex Shyrokov, my OHSU collaborator Peter Heeman, and me.

In previous experiments in our lab we have concentrated on performance measures to evaluate the effects of secondary tasks on the driver. Secondary tasks are those performed in addition to driving, e.g. interacting with a personal navigation device. However, as Jackson Beatty has shown, when people’s cognitive load increases their pupils dilate [2]. This fascinating phenomenon provides a physiological measure of cognitive load. Why is it important to have multiple measures of cognitive load? As Christopher Wickens points out [3], this allows us to avoid circular arguments such as “… saying that a task interferes more because of its higher resource demand, and its resource demand is inferred to be higher because of its greater interference.”

We found that, in a driving simulator experiment conducted by Alex, performance-based and pupillometry-based (that is, physiological) cognitive load measures show high correspondence for tasks lasting tens of seconds. In other words, both driving performance measures and pupil size changes appear to track changes in cognitive load. In the experiment the driver is involved in two spoken tasks in addition to the manual-visual task of driving. We hypothesize that different parts of these two spoken tasks present different levels of cognitive load for the driver. Our measurements of driving performance and pupil diameter changes appear to confirm this hypothesis. Additionally, we introduced a new pupillometry-based cognitive load measure that shows promise for tracking changes in cognitive load on time scales of several seconds.
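As a rough illustration of a pupillometry-based measure operating on a time scale of seconds (a sketch under assumed signal names and rates, not the measure introduced in the paper), one can compute the mean pupil diameter change relative to a pre-task baseline over short windows:

```python
import numpy as np

def mean_pupil_change(pupil_mm, fs, baseline_s=2.0, window_s=5.0):
    """Baseline-corrected mean pupil diameter change over consecutive windows.

    pupil_mm:   pupil diameter samples (mm) from a remote eye tracker
    fs:         sampling rate in Hz
    baseline_s: initial interval used as the low-load baseline
    window_s:   length of each analysis window in seconds
    Returns an array with one mean change value (mm) per window.
    """
    pupil = np.asarray(pupil_mm, float)
    baseline = pupil[: int(baseline_s * fs)].mean()
    win = int(window_s * fs)
    n_windows = len(pupil) // win
    changes = [pupil[i * win:(i + 1) * win].mean() - baseline
               for i in range(n_windows)]
    return np.array(changes)

# Toy example at 60 Hz: baseline ~3.0 mm, then a dilation during a demanding task
fs = 60
signal = np.concatenate([np.full(10 * fs, 3.0),   # low load
                         np.full(10 * fs, 3.4)])  # higher load
print(mean_pupil_change(signal + 0.02 * np.random.randn(len(signal)), fs))
```

Larger positive values in later windows would suggest increased cognitive load relative to the baseline interval.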

In Alex’s experiment one of the spoken tasks required participants to ask and answer yes/no questions. We hypothesize that different phases of this task also present different levels of cognitive load to the driver. Will this be evident in driving performance and pupillometric data? We hope to find out soon!

References

[1] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, “Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator,” ETRA 2010

[2] Jackson Beatty, “Task-evoked pupillary responses, processing load, and the structure of processing resources,” Psychological Bulletin. Vol. 91(2), Mar 1982, 276-292

[3] Christopher D. Wickens, “Multiple resources and performance prediction,” Theoretical Issues in Ergonomic Science, 2002, Vol. 3, No. 2, 159-177

At the 2009 fall NIJ CommTech TWG meeting

On Wednesday and Thursday, Oskar Palinko, Mark Taipan and I participated in the NIJ CommTech Technical Working Group meeting. On Wednesday I gave the presentation below reporting on our lab’s progress.


On Thursday we participated in the meeting’s demo session. We demonstrated the advantage of using voice commands to control a police radio over using the radio’s buttons. We used a single-computer driving simulator and a radio setup. In fact, the first driving simulator experiment we published investigated this very effect [1]. We also demonstrated accessing a remote database using the Project54 system running on a Symbol handheld computer. We expect that, once we get approval from the NH State Police to deploy such devices (NHSP is responsible for data access for all officers in the state), they will be a big hit with local departments.

One of the many people we had a chance to talk to at the TWG meeting is Gil Emery, Communications Manager at the Portsmouth, NH PD. Gil was interested in the handhelds and we may be able to work with him on using these handhelds as cameras that allow tagging pictures on the spot and then using a cellular network to transmit them to headquarters. This work would build on Michael Farrar’s MS thesis research.

You can see pictures from this event on Flickr.

References

[1] Zeljko Medenica, Andrew L. Kun, “Comparing the Influence of Two User Interfaces for Mobile Radios on Driving Performance,” Driving Assessment 2007

Two posters at Ubicomp 2009

Our group presented two posters at last week’s Ubicomp 2009. Oskar Palinko and Michael Litchfield were on hand to talk about our multitouch table effort [1] (a great deal of the work for this poster was done by Ankit Singh). Zeljko Medenica introduced a driving simulator pilot, work done in collaboration with Tim Paek, that deals with using augmented reality for the user interface of a navigation device [2].

Oskar (center) and Mike (right)

Zeljko (center)

Oskar, Mike and I are working on expanding the multitouch study. We plan to start with an online study in which subjects will watch two videos, one in which a story is presented using the multitouch table and another with the same story presented using a simple slide show. Zeljko will head up the follow-on to the pilot study – take a look at the video below to see (roughly) what we’re planning to do.

Take a look at other pictures I took at Ubicomp 2009 on Flickr.

References

[1] Oskar Palinko, Ankit Singh, Michael A. Farrar, Michael Litchfield, Andrew L. Kun, “Towards Storytelling with Geotagged Photos on a Multitouch Display,” Conference Supplement, Ubicomp 2009

[2] Zeljko Medenica, Oskar Palinko, Andrew L. Kun, Tim Paek, “Exploring In-Car Augmented Reality Navigation Aids: A Pilot Study,” Conference Supplement, Ubicomp 2009