Tag Archives: automotive

Bryan Reimer visit to UNH

It was my great pleasure to host Bryan Reimer at UNH. Bryan is a Research Scientist at the MIT AgeLab, as well as Associate Director of the New England University Transportation Center. His research focuses on the measurement and understanding of human behavior in dynamic environments, such as in cars.

Bryan spent time in the Project54 lab discussing various aspects of driving simulator and field studies. He then gave a thought-provoking talk reviewing results from multiple studies exploring driver workload and distraction. I especially enjoyed his discussion of physiological measures that can be used to estimate workload. For example, Bryan has found that heart rate is a robust estimate of workload, and is often more useful than the more commonly used measure of heart rate variability. Bryan also discussed work on validating driving simulator results through field studies. His data indicate that driving simulator results can be used to predict relative changes in workload measures under different situations in real-life driving. However, the actual values of the measures collected in simulator and field studies often differ significantly.

For more pictures visit Flickr.

PhD and MS position at the University of New Hampshire exploring in-car human-computer interaction

A PhD and an MS position are available in the Project54 lab at the University of New Hampshire. The lab is part of the Electrical and Computer Engineering department at UNH. Successful applicants will explore human-computer interaction in vehicles. 

The Project54 lab was created in 1999 in partnership with the New Hampshire Department of Safety to improve technology for New Hampshire law enforcement. Project54’s in-car system integrates electronic devices in police cruisers into a single voice-activated system. Project54 also integrates cruisers into agency-wide communication networks. The Project54 system has been deployed in over 1000 vehicles in New Hampshire in over 180 state and local law enforcement agencies.

Research focus

Both the PhD and the MS student will focus on the relationship between various in-car user interface characteristics and the cognitive load of interacting with these interfaces, with the goal of designing interfaces that do not significantly increase driver workload. Work will involve developing techniques to estimate cognitive load using performance measures (such as the variance of lane position), physiological measures (such as changes in pupil diameter) and subjective measures (such as the NASA-TLX questionnaire).
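To make the performance-measure idea concrete, here is a minimal sketch of how the variance of lane position might be computed from logged lateral-position samples. The function name and the sample values are hypothetical illustrations, not part of the Project54 software:

```python
# Hypothetical sketch: computing the sample variance of lane position,
# one common performance-based proxy for driver workload. Higher variance
# in lateral position is often interpreted as degraded lane keeping.

def lane_position_variance(samples):
    """Return the sample variance of lateral lane position samples (meters^2)."""
    n = len(samples)
    if n < 2:
        raise ValueError("need at least two samples")
    mean = sum(samples) / n
    # Sample (n-1) variance, as is typical for experimental data
    return sum((x - mean) ** 2 for x in samples) / (n - 1)

# Example: lateral offsets (in meters) from the lane center,
# as a driving simulator might log them at fixed intervals
offsets = [0.05, -0.10, 0.20, -0.15, 0.10, 0.00]
print(round(lane_position_variance(offsets), 4))  # prints 0.0167
```

In practice such a measure would be computed over a sliding window and compared across experimental conditions, alongside the physiological and subjective measures mentioned above.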

The work will utilize experiments in Project54’s world-class driving simulator laboratory, which is equipped with two research driving simulators, three eye trackers and a physiological data logger. Laboratory experiments will be complemented by field deployments in law enforcement agencies such as the New Hampshire State Police, which operates over 300 police cruisers. Project54 has deployed a state-wide data update infrastructure for the New Hampshire State Police, which allows remote updates to in-car experimental software and remote collection of experimental data.


The PhD student will be appointed for four years, and the MS student for two years. Initial appointments will be for one year, starting between June and September 2011. Continuation of funding will be dependent on satisfactory performance. Appointments will be a combination of research and teaching assistantships. Compensation will include tuition, fees, health insurance and academic year and summer stipend.

How to apply

For application instructions, and for general information, email Andrew Kun, Project54 Principal Investigator at andrew.kun@unh.edu. Please attach a current CV.

Co-chairing AutomotiveUI 2010

On November 11 and 12 I was at the AutomotiveUI 2010 conference serving as program co-chair with Susanne Boll. The conference was hosted by Anind Dey at CMU, with Anind and Albrecht Schmidt serving as general co-chairs.

The conference was successful and really fun. I could go on about all the great papers and posters (including two posters from our group at UNH [1,2]) but in this post I’ll only mention two: John Krumm’s keynote talk and, selfishly, my own talk (this is my blog after all). John gave an overview of his work with data from GPS sensors. He discussed work on prediction of where people will go, his experiences with location privacy and with creating road maps. Given that John is, according to his own website, the “all seeing, all knowing, master of time, space, and dimension,” this was indeed a very informative talk 😉 OK, in all seriousness, the talk was excellent. I find John’s work on predicting people’s destinations and selected routes the most interesting. One really interesting consequence of having accurate predictions, and of people sharing such data, would be the effect on routing algorithms hosted in the cloud. If such an algorithm knew where all of us were going at any instant, it could propose routes that overall allow efficient use of roads, reduced pollution, etc.

My talk focused on collaborative work with Alex Shyrokov and Peter Heeman on multi-threaded dialogues. Specifically, I talked about designing spoken tasks for human-human dialogue experiments for Alex’s PhD work [3]. Alex wanted to observe how pairs of subjects switch between two dialogue threads, while one of the subjects is also engaged in operating a simulated vehicle. Our hypothesis is that observed human-human dialogue behaviors can be used as the starting point for designing computer dialogue behaviors for in-car spoken dialogue systems. One of the suggestions we put forth in the paper is that the tasks for human-human experiments should be engaging. These are the types of tasks that will result in interesting dialogue behaviors and can thus teach us something about how humans manage multi-threaded dialogues.

Next year the conference moves back to Europe. The host will be Manfred Tscheligi in Salzburg, Austria. Judging by the number of submissions this year and the quality of the conference, we can look forward to many interesting papers next year, both from industry and from academia. Also, the location will be excellent – just think Mozart, Sound of Music (see what Rick Steves has to say), and world-renowned Christmas markets!


[1] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Comparing Augmented Reality and Street View Navigation,” AutomotiveUI 2010 Adjunct Proceedings

[2] Oskar Palinko, Sahil Goyal, Andrew L. Kun, “A Pilot Study of the Influence of Illumination and Cognitive Load on Pupil Diameter in a Driving Simulator,” AutomotiveUI 2010 Adjunct Proceedings

[3] Andrew L. Kun, Alexander Shyrokov, Peter A. Heeman, “Spoken Tasks for Human-Human Experiments: Towards In-Car Speech User Interfaces for Multi-Threaded Dialogue,” AutomotiveUI 2010

Talk at SpeechTEK 2010

On Tuesday (August 3, 2010) I attended SpeechTEK 2010. I had a chance to see several really interesting talks including the lunch keynote by Zig Serafin, General Manager, Speech at Microsoft. He and two associates discussed, among other topics, the upcoming releases of Windows Phone 7 and of the Kinect for Xbox 360 (formerly Project Natal). We also saw successful live demonstrations of both of these technologies.

One of Zig’s associates to take the stage was Larry Heck, Chief Scientist, Speech at Microsoft. Larry believes that there are three areas of research and development that will combine to make speech a part of everyday interactions with computers. First, the advent of ubiquitous computing and the need for natural user interfaces (NUIs) means that we cannot keep relying on GUIs and keyboards for many of our computing needs. Second, cloud computing makes it possible to gather rich data to train speech systems. Finally, with advances in speech technology we can expect to see search move beyond typing keywords (which is what we do today sitting at our PCs) to conversational queries (which is what people are starting to do on mobile phones).

I attended four other talks with topics relevant to my research. Brigitte Richardson discussed her work on Ford’s Sync. It’s exciting to hear that Ford is coming out with an SDK that will allow integrating devices with Sync. This approach appears similar to ours at Project54 – we also provide an SDK which can be used to write software for the Project54 system [1]. Eduardo Olvera of Nuance discussed the differences and similarities between designing interfaces for speech interaction and those for interaction on a small form factor screen. Karen Kaushansky of TellMe discussed similar issues focusing on customer care. Finally, Kathy Lee, also of TellMe, discussed her work on a diary study exploring when people are willing to talk to their phones. This work reminded me of an experiment in which Ronkainen et al. asked participants to rate the social acceptability of mobile phone usage scenarios they viewed in video clips [2].

I also had a chance to give a talk reviewing some of the results of my collaboration with Tim Paek of Microsoft Research. Specifically, I discussed the effects of speech recognition accuracy and PTT button usage on driving performance [3] and the use of voice-only instructions for personal navigation devices [4]. The talk was very well received by the audience of over 25, with many follow-up questions. Tim also gave this talk earlier this year at Mobile Voice 2010.

For pictures from SpeechTEK 2010 visit my Flickr page.


[1] Andrew L. Kun, W. Thomas Miller, III, Albert Pelhe and Richard L. Lynch, “A software architecture supporting in-car speech interaction,” IEEE Intelligent Vehicles Symposium 2004

[2] Sami Ronkainen, Jonna Häkkilä, Saana Kaleva, Ashley Colley, Jukka Linjama, “Tap Input as an Embedded Interaction Method for Mobile Devices,” TEI 2007

[3] Andrew L. Kun, Tim Paek, Zeljko Medenica, “The Effect of Speech Interface Accuracy on Driving Performance,” Interspeech 2007

[4] Andrew L. Kun, Tim Paek, Zeljko Medenica, Nemanja Memarovic, Oskar Palinko, “Glancing at Personal Navigation Devices Can Affect Driving: Experimental Results and Design Implications,” AutomotiveUI 2009

Visit to FTW, Vienna

On June 4, 2010 I visited the Telecommunications Research Center Vienna (FTW). My host was Peter Froehlich, Senior Researcher in FTW’s User-Centered Interaction area of activity. Peter and I met at the CHI SIG meeting on automotive user interfaces [1] that I helped organize.

Peter and his colleagues are investigating automotive navigation aids and are currently preparing for an on-road study. I’m happy to report that this study will utilize one of our eye trackers. My visit provided an opportunity for us to discuss this upcoming study and how the eye tracker may be useful in evaluating the research hypotheses. Part of this discussion was a Telecommunications Forum talk I gave.

I want to thank Peter and his colleagues at FTW for hosting me and I’m looking forward to our upcoming collaboration. I also want to thank FTW for providing funding for my visit.


[1] Albrecht Schmidt, Anind L. Dey, Andrew L. Kun, Wolfgang Spiessl, “Automotive User Interfaces: Human Computer Interaction in the Car,” CHI 2010 Extended Abstracts

Project54 on front page of New York Times

In a front page article of the March 11, 2010 edition of the New York Times Matt Richtel discusses in-vehicle electronic devices used by first responders. Based on a number of interviews, including one with me, Matt gets the point across that interactions with in-vehicle devices can distract first responders from the primary task for any driver: driving. The personal accounts from first responders are certainly gripping. Thanks Matt for bringing this issue to the public.

Enter Project54. According to Matt “[r]esearchers are working to reduce the risk.” He goes on to describe UNH’s Project54 system which allows officers to issue voice commands in order to interact with in-car electronic devices. This means officers can keep their eyes on the road and their hands on the wheel. The article includes praise for the Project54 system by Captain John G. LeLacheur of the New Hampshire State Police. The Project54 system was developed in partnership with the NHSP and almost every NHSP cruiser has the Project54 system installed.

Both the print and the online versions of the article begin with a picture of the Project54 in-car system. This great picture was taken by Sheryl Senter and it shows Sergeant Tom Dronsfield of the Lee, NH Police Department in action.

Alex Shyrokov defends PhD

Two weeks ago my student Alex Shyrokov defended his PhD dissertation. Alex was interested in human-computer interaction for cases when the human is engaged in a manual-visual task. In such situations a speech interface appears to be a natural way to communicate with a computer. Alex was especially interested in multi-threaded spoken HCI. In multi-threaded dialogues the conversants switch back and forth between multiple topics.

How should we design a speech interface that will support multi-threaded human-computer dialogues when the human is engaged in a manual-visual task? In order to begin answering this question Alex explored spoken dialogues between two human conversants. The hypothesis is that a successful HCI design can mimic some aspects of human-human interaction.

In Alex’s experiments one of the conversants (the driver) operated a simulated vehicle while the other (an assistant) was only engaged in the spoken dialogue. The conversants were engaged in an ongoing and in an interrupting spoken task. Alex’s dissertation discusses several interesting findings, one of which is that driving performance is worse during and after the interrupting task. Alex proposes that this is due to a shift in the driver’s attention away from driving and toward the spoken tasks. The shift in turn is due to the perceived urgency of the spoken tasks – as the perceived urgency increases, the driver is more likely to shift her attention away from driving. The lesson for HCI design is to be very careful in managing the driver’s perceived urgency when interacting with devices in the car.

Alex benefited tremendously from the help of my collaborator on this research, Peter Heeman. Peter provided excellent guidance throughout Alex’s PhD studies, for which I am grateful. Peter and I plan to continue working with Alex’s data. The data includes transcribed dialogues, videos, driving performance as well as eye tracker data. I am especially interested in using the eye tracker’s pupil diameter measurements to estimate cognitive load, as we have done in work led by Oskar Palinko [1].
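As a rough illustration of the pupil-diameter idea, here is a minimal sketch of a baseline-relative pupillary change computation. This is a hypothetical simplification for illustration only, not the specific measure used in [1]; the function name and sample values are my own:

```python
# Hypothetical sketch: mean percent change in pupil diameter relative to a
# resting baseline. Pupil diameter tends to increase with cognitive load,
# so a larger positive change suggests a higher-load episode (illumination
# and other confounds would need to be controlled in a real study).

def percent_change_from_baseline(baseline_mm, task_samples_mm):
    """Mean percent change in pupil diameter during a task vs. a baseline."""
    if baseline_mm <= 0 or not task_samples_mm:
        raise ValueError("baseline must be positive and samples non-empty")
    mean_task = sum(task_samples_mm) / len(task_samples_mm)
    return 100.0 * (mean_task - baseline_mm) / baseline_mm

# Example: resting baseline of 3.5 mm; eye tracker samples (mm) logged
# while the driver handled an interrupting spoken task
samples = [3.6, 3.7, 3.8, 3.7]
print(round(percent_change_from_baseline(3.5, samples), 2))  # prints 5.71
```

In practice such a statistic would be computed per task segment and compared across dialogue conditions, alongside the driving performance measures in Alex’s data.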


[1] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, “Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator,” ETRA 2010

Automotive user interfaces SIG meeting to be held at CHI 2010

There will be a special interest group (SIG) meeting on automotive user interfaces at CHI 2010. The lead author of the paper describing the aims of the SIG [1] is Albrecht Schmidt and the list of coauthors includes Anind Dey, Wolfgang Spiessl and me. CHI SIGs are 90 minute scheduled sessions during the conference. They are an opportunity for researchers with a common interest to meet face-to-face and engage in dialog.

Our SIG deals with human-computer interaction in the car. This is an exciting field of study that was the topic of a CHI 2008 SIG [2] as well as the AutomotiveUI 2009 conference [3], and the AutomotiveUI 2010 CFP will be posted very soon. In the last several years human-computer interaction in the car has increased for two main reasons. First, many cars now come equipped with myriad electronic devices, such as displays indicating power usage and advanced driver assistance systems. Second, users (drivers and passengers) bring mobile devices into cars. The list of these brought-in mobile devices is long, but personal navigation devices and mp3 players are probably the most common.

At the SIG we hope to discuss user interface issues that are the result of having all of these devices in cars. Some of the questions are:

  • How can we reduce (or eliminate) driver distraction caused by the in-car devices?
  • Can driver interactions with in-car devices actually improve driving performance?
  • Can users take advantage of novel technologies, such as streaming videos from other cars?
  • How do we build interfaces that users can trust and will thus actually use?
  • How can car manufacturers, OEMs, brought-in device manufacturers and academia collaborate in envisioning, creating and implementing automotive user interfaces?

The 2008 CHI SIG [2] attracted over 60 people and we’re hoping for similar (or better!) turnout.


[1] Albrecht Schmidt, Anind L. Dey, Andrew L. Kun, Wolfgang Spiessl, “Automotive User Interfaces: Human Computer Interaction in the Car,” CHI 2010 Extended Abstracts (to appear)

[2] D. M. Krum, J. Faenger, B. Lathrop, J. Sison, A. Lien, “All roads lead to CHI: interaction in the automobile,” CHI 2008 Extended Abstracts

[3] Albrecht Schmidt, Anind Dey, Thomas Seder, Oskar Juhlin, “Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 2009”