Tag Archives: hci

Coming up: CHI 2016 course on automotive user interfaces

At this year’s CHI conference Bastian Pfleging, Nora Broy and I will present a course introducing automotive user interfaces. Here’s the course abstract:

The objective of this course is to provide newcomers to automotive user interfaces with an introduction and overview of the field. The course will introduce the specifics and challenges of in-vehicle user interfaces that set this field apart from others. We will provide an overview of the specific requirements of automotive UIs and discuss the design of such interfaces, including relevant standards and guidelines. We will further outline how to evaluate interfaces in the car, discuss the challenges of upcoming automated driving, and present trends and challenges in this domain.

Interested? Please register through the conference registration system and sign up for our course.

UNH IRES: HCI summer student research experience in Germany

HCI Lab, Stuttgart

UNH ECE professor Tom Miller and I were recently awarded an NSF International Research Experiences for Students (IRES) grant. Our IRES grant will fund students conducting research at the University of Stuttgart in Germany.

Albrecht Schmidt

Under our NSF IRES grant, each summer between 2014 and 2017, three undergraduate and three graduate students will conduct research for just under 9 weeks at the Human Computer Interaction (HCI) Lab of Professor Albrecht Schmidt at the University of Stuttgart. Professor Schmidt and his lab are among the world leaders in the field of HCI.

Student research will focus on two areas: in-vehicle speech interaction and speech interaction with public displays. For in-vehicle speech, students will relate the benefits and limitations of speech interaction with in-vehicle devices to real-world parameters, such as how well speech recognition works at any given moment. They will also work to identify why talking to a passenger appears to reduce the probability of a crash, and how this insight might be used to create safer in-vehicle speech interactions. Similarly, students will explore how speech interaction can enable smooth interaction with electronic public displays.

Stuttgart Palace Square (Stefan Fussan: https://www.flickr.com/photos/derfussi/)

Successful applicants will receive full financial support for participation, covering items such as airfare, room and board, health insurance, as well as a $500/week stipend. The total value of the financial package is approximately $8,500 for 9 weeks.

Details about the program, including application instructions, are available here. Please note that this program is only available to US citizens and permanent residents. If you have questions please contact Andrew Kun (andrew dot kun at unh dot edu) or Tom Miller (tom dot miller at unh dot edu).

2014 visit to University College London

Last week I travelled to London to give a talk at University College London (UCL). My host was Duncan Brumby, who also recently visited us at UNH. My talk introduced our work on in-vehicle human-computer interaction, touching on subjects from Project54 to driving simulator-based experiments.

It was great to talk to Duncan again, and it was really nice to meet some of his colleagues, including Anna Cox, Paul Marshall, and Sandy Gould. Thanks to all for hosting me.

While my trip was brief, I did get a chance to also visit the British Museum. This is one of my favorite places in the world, and here’s a photo of my favorite artifact from the museum’s vast collection, the Rosetta Stone:

You can see more pictures from my trip on Flickr.




First lecture in BME autonomous robots and vehicles lab

Today was my first lecture in BME’s Autonomous Robots and Vehicles Lab (Autonóm robotok és járművek laboratórium). The lab is led by Bálint Kiss, who is my host during my Fulbright scholarship in Hungary.

Today’s lecture covered the use of eye trackers in designing human-computer interaction. I talked about our work on in-vehicle human-computer interaction, and drew parallels to human-robot interaction. Tomorrow I’ll introduce the class to our Seeing Machines eye tracker, and in the coming weeks I’ll run a number of lab sections in which the students will conduct short experiments in eye tracking and pupil diameter measurement.

Here’s the overview of today’s lecture, translated into English (I’m thrilled to be teaching in Hungarian):

Using eye trackers in the evaluation of human-computer interaction

Researchers at the University of New Hampshire have been working on in-vehicle human-machine interfaces for more than a decade. This talk first provides a brief overview of the development and deployment of the Project54 system, designed for police vehicles. The system provides user interfaces in multiple modalities, including speech. The talk then reports on recent driving simulator experiments in which we used data from the simulator and an eye tracker to estimate the driver’s cognitive load, driving performance, and visual attention to the outside world.

Through this lecture, students gain insight into the use of eye trackers in the evaluation and design of human-computer interaction. Human-computer interaction, in turn, is a central problem in the successful deployment of autonomous robots, since autonomous robots will not be used only by experts. On the contrary, these robots will find users in all parts of society. Such widespread deployment of robots can only succeed if the human-machine interaction is acceptable to its users.

Announced: Cognitive load and in-vehicle HMI special interest session at the 2012 ITS World Congress

Continuing the work of the 2011 Cognitive Load and In-vehicle Human-Machine Interaction workshop at AutomotiveUI 2011, Peter Fröhlich and I are co-organizing a special interest session on this topic at this year’s ITS World Congress.

The session will be held on Friday, October 26, 2012. Peter and I were able to secure the participation of an impressive list of panelists. They are (in alphabetical order):

  • Corinne Brusque, Director, IFSTTAR LESCOT, France
  • Johan Engström, senior project manager, Volvo Technology, Sweden
  • James Foley, Senior Principal Engineer, CSRC, Toyota, USA
  • Chris Monk, Project Officer, US DOT
  • Kazumoto Morita, Senior Researcher, National Safety and Environment Laboratory, Japan
  • Scott Pennock, Chairman of the ITU-T Focus Group on Driver Distraction and Senior Hands-Free Standards Specialist at QNX, Canada

The session will be moderated by Peter Fröhlich. We hope that the session will provide a compressed update on the state-of-the-art in cognitive load research, and that it will serve as inspiration for future work in this field.

Award of Excellence at 2012 Undergraduate Research Conference

Two of my undergraduate research assistants, Josh Clairmont and Shawn Bryan, won an Award of Excellence at the 2012 Undergraduate Research Conference. The URC is UNH’s annual event aimed at engaging undergraduate students in research.

Josh and Shawn created a tangible user interface for the Microsoft Surface multitouch table. Their interface allows users to play a game of air hockey on the Surface. Josh, a computer engineering senior, was in charge of creating the Arduino-based game controller. Shawn, a computer science senior, created the game on the Surface.

Here is a video introducing the work of Josh and Shawn:

Congratulations Josh and Shawn!

2012 PhD and MS positions

A PhD and an MS position are available in the Project54 lab at the University of New Hampshire. The lab is part of the Electrical and Computer Engineering department at UNH. Successful applicants will explore human-computer interaction in vehicles. We are looking for students with a background in electrical engineering, computer engineering, computer science, or related fields.

The Project54 lab was created in 1999 in partnership with the New Hampshire Department of Safety to improve technology for New Hampshire law enforcement. Project54’s in-car system integrates electronic devices in police cruisers into a single voice-activated system. Project54 also integrates cruisers into agency-wide communication networks. The Project54 system has been deployed in over 1000 vehicles in New Hampshire in over 180 state and local law enforcement agencies.

Research focus

Both the PhD and the MS student will focus on the relationship between various in-car user interface characteristics and the cognitive load of interacting with these interfaces, with the goal of designing interfaces that do not significantly increase driver workload. Work will involve developing techniques to estimate cognitive load using performance measures (such as the variance of lane position), physiological measures (such as changes in pupil diameter [1-5]) and subjective measures (such as the NASA-TLX questionnaire).
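As an illustration of the kinds of measures mentioned above, here is a minimal sketch of one performance measure and one physiological measure. The function names and sample values are hypothetical, not Project54 code:

```python
import statistics

def lane_position_variance(lateral_positions_m):
    """Performance measure: variance of the vehicle's lateral lane position (m^2).
    Higher variance generally indicates poorer lane keeping."""
    return statistics.pvariance(lateral_positions_m)

def mean_pupil_dilation(baseline_mm, task_mm):
    """Physiological measure: mean change in pupil diameter during a task,
    relative to a no-task baseline (mm). Dilation suggests increased load."""
    return statistics.mean(task_mm) - statistics.mean(baseline_mm)

# Hypothetical samples: lateral positions (m) and pupil diameters (mm)
positions = [0.1, -0.05, 0.2, 0.0, -0.1]
print(lane_position_variance(positions))
print(mean_pupil_dilation([3.0, 3.1, 3.0], [3.3, 3.4, 3.2]))
```

Subjective measures such as the NASA-TLX, by contrast, are collected via questionnaire after the task rather than computed from logged data.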

The PhD student will focus on spoken in-vehicle human-computer interaction, and will explore the use of human-human dialogue behavior [6-11] to guide the design process.

The work will utilize experiments in Project54’s world-class driving simulator laboratory, which is equipped with two research driving simulators, three eye trackers, and a physiological data logger.


The PhD student will be appointed for four years, and the MS student for two years. Initial appointments will be for one year, starting between June and September 2012. Continuation of funding will be dependent on satisfactory performance. Appointments will be a combination of research and teaching assistantships. Compensation will include tuition, fees, health insurance and academic year and summer stipend.

How to apply

For application instructions, and for general information, email Andrew Kun, Project54 Principal Investigator at andrew.kun@unh.edu. Please attach a current CV.


[1] Oskar Palinko, Andrew L. Kun, “Exploring the Effects of Visual Cognitive Load and Illumination on Pupil Diameter in Driving Simulators,” ETRA 2012

[2] Andrew L. Kun, Zeljko Medenica, Oskar Palinko, Peter A. Heeman, “Utilizing Pupil Diameter to Estimate Cognitive Load Changes During Human Dialogue: A Preliminary Study,” AutomotiveUI 2011 Adjunct Proceedings

[3] Andrew L. Kun, Peter A. Heeman, Tim Paek, W. Thomas Miller, III, Paul A. Green, Ivan Tashev, Peter Froehlich, Bryan Reimer, Shamsi Iqbal, Dagmar Kern, “Cognitive Load and In-Vehicle Human-Machine Interaction,” AutomotiveUI 2011 Adjunct Proceedings

[4] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

[5] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, “Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator,” ETRA 2010

[6] Andrew L. Kun, Alexander Shyrokov, and Peter A. Heeman, “Interactions between Human-Human Multi-Threaded Dialogues and Driving,” PUC Online First, to appear in PUC

[7] Andrew L. Kun, Zeljko Medenica, “Video Call, or Not, that is the Question,” to appear in CHI ’12 Extended Abstracts

[8] Fan Yang, Peter A. Heeman, Andrew L. Kun, “An Investigation of Interruptions and Resumptions in Multi-Tasking Dialogues,” Computational Linguistics, 37, 1

[9] Andrew L. Kun, Alexander Shyrokov, Peter A. Heeman, “Spoken Tasks for Human-Human Experiments: Towards In-Car Speech User Interfaces for Multi-Threaded Dialogue,” Automotive UI 2010

[10] Fan Yang, Peter A. Heeman, Andrew L. Kun, “Switching to Real-Time Tasks in Multi-Tasking Dialogue,” Coling 2008

[11] Alexander Shyrokov, Andrew L. Kun, Peter Heeman, “Experimental modeling of human-human multi-threaded dialogues in the presence of a manual-visual task,” SigDial 2007

Personal and Ubiquitous Computing theme issue: Automotive user interfaces and interactive applications in the car

I’m thrilled to announce that the theme issue of Personal and Ubiquitous Computing entitled “Automotive user interfaces and interactive applications in the car” is now available in PUC’s Online First. I had the pleasure of serving as co-editor of this theme issue with Albrecht Schmidt, Anind Dey, and Susanne Boll.

The theme issue includes our editorial [1] and three papers. The first is by Tuomo Kujala, who explores scrolling on touch screens while driving [2]. The second is by Florian Schaub, Markus Hipp, Frank Kargl, and Michael Weber, who address the issue of credibility in the context of automotive navigation systems [3]. The third paper is co-authored by me, my former PhD student Alex Shyrokov, and Peter Heeman. We explore multi-threaded spoken dialogues between a driver and a remote conversant [4]. The three papers were selected from 17 submissions in a rigorous review process involving approximately 50 reviewers.



[1] Andrew L. Kun, Albrecht Schmidt, Anind Dey and Susanne Boll, “Automotive User Interfaces and Interactive Applications in the Car,” PUC Online First

[2] Tuomo Kujala, “Browsing the Information Highway while Driving – Three In-Vehicle Touch Screen Scrolling Methods and Driver Distraction,” PUC Online First

[3] Florian Schaub, Markus Hipp, Frank Kargl, and Michael Weber, “On Credibility Improvements for Automotive Navigation Systems,” PUC Online First

[4] Andrew L. Kun, Alexander Shyrokov, and Peter A. Heeman, “Interactions between Human-Human Multi-Threaded Dialogues and Driving,” PUC Online First

Video calling while driving? Not a good idea.

Do you own a smart phone? If yes, you’re likely to have tried video calling (e.g. with Skype or FaceTime). Video calling is an exciting technology, but as Zeljko Medenica and I show in our CHI 2012 Work-in-Progress paper [1], it’s not a technology you should use while driving.

Zeljko and I conducted a driving simulator experiment in which a driver and another participant were given the verbal task of playing the game of Taboo. The driver and the other participant were in separate rooms and spoke to each other over headsets. In one experimental condition, the driver and the other participant could also see each other, as shown in the figure below. We wanted to find out if in this condition drivers would spend a significant amount of time looking at the other participant. This is an important question, as time spent looking at the other participant is time not spent looking at the road ahead!

We found that, when drivers felt that the driving task was demanding, they focused on the road ahead. However, when they perceived the driving task to be less demanding they looked at the other participant significantly more.
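Glance behavior in this kind of study is often summarized as the fraction of eye tracker samples that fall on each area of interest (AOI). A minimal sketch of that computation (the AOI labels and samples below are illustrative, not our actual analysis code):

```python
from collections import Counter

def gaze_shares(aoi_labels):
    """Fraction of eye tracker samples falling on each area of interest (AOI)."""
    counts = Counter(aoi_labels)
    total = len(aoi_labels)
    return {aoi: n / total for aoi, n in counts.items()}

# Hypothetical gaze samples from one drive: mostly road, some glances at the video feed
samples = ["road"] * 7 + ["video_call"] * 2 + ["instrument_panel"]
print(gaze_shares(samples))  # road: 0.7, video_call: 0.2, instrument_panel: 0.1
```

Comparing these shares between the audio-only and video conditions is one simple way to quantify how much visual attention the video feed draws away from the road.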

What this tells us is that, under certain circumstances, drivers are willing to engage in video calls. This is due, at least in part, to the (western) social norm of looking at the person you’re talking to. These results should serve as a warning to interface designers, lawmakers (yes, there’s concern [2]), transportation officials, and drivers that video calling can be a serious distraction from driving.

Here’s a video that introduces the experiment in more detail:


[1] Andrew L. Kun, Zeljko Medenica, “Video Call, or Not, that is the Question,” to appear in CHI ’12 Extended Abstracts

[2] Claude Brodesser-Akner, “State Assemblyman: Ban iPhone4 Video-Calling From the Road,” New York Magazine. Date accessed 03/02/2012

Further progress towards disambiguating the effects of cognitive load and light on pupil diameter

In driving simulator studies participants complete both visual and aural tasks. The most obvious visual task is driving itself, but there are others, such as viewing an LCD screen that displays a map. Aural tasks include talking to an in-vehicle computer. I am very interested in estimating the cognitive load of these various tasks. One way to estimate this cognitive load is through changes in pupil diameter: in an effect called the Task-Evoked Pupillary Response (TEPR) [1], the pupil dilates with increased cognitive load.

However, in driving simulator studies participants scan a non-uniformly illuminated visual scene. If unaccounted for, this non-uniformity in illumination might introduce an error in our estimate of the TEPR. Oskar Palinko and I will have a paper at ETRA 2012 [2] extending our previous work [3], in which we established that it is possible to separate the pupil’s light reflex from the TEPR. While in our previous work TEPR was the result of participants’ engagement in an aural task, in our latest experiment TEPR is due to engagement in a visual task.

The two experiments taken together support our main hypothesis that it is possible to disambiguate (and not just separate) the two effects even in complicated environments, such as a driving simulator. We are currently designing further experiments to test this hypothesis.
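One simple way to picture the disambiguation problem is as a baseline subtraction: if we know how large the pupil is when the driver looks at each part of the non-uniformly lit scene without a cognitive task, the remaining dilation during the task can be attributed to the TEPR. The sketch below is only a conceptual illustration with made-up numbers, not the method from the papers cited here:

```python
import statistics

def estimate_tepr(task_samples, baseline_samples):
    """For each scene region, subtract the no-task (illumination-driven) mean pupil
    diameter from the task-period mean, leaving an estimate of the task-evoked
    pupillary response (TEPR) in millimeters."""
    return {
        region: statistics.mean(task_samples[region]) - statistics.mean(baseline_samples[region])
        for region in task_samples
    }

# Hypothetical pupil diameters (mm) while looking at a dark road vs. a bright screen
baseline = {"road": [3.0, 3.1], "screen": [2.5, 2.6]}  # driving only
task = {"road": [3.4, 3.5], "screen": [2.9, 3.0]}      # driving plus cognitive task
print(estimate_tepr(task, baseline))  # similar TEPR despite different illumination
```

The point of the illustration: even though the screen region yields smaller pupil diameters overall (a brighter stimulus constricts the pupil), the task-evoked component can still be recovered once the illumination-driven baseline is accounted for.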


[1] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 276-292, 91(2)

[2] Oskar Palinko, Andrew L. Kun, “Exploring the Effects of Visual Cognitive Load and Illumination on Pupil Diameter in Driving Simulators,” to appear at ETRA 2012

[3] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011