Category Archives: hci

2014 visit to University College London

Last week I travelled to London to give a talk at University College London (UCL). My host was Duncan Brumby, who also recently visited us at UNH. My talk introduced our work on in-vehicle human-computer interaction, touching on subjects from Project54 to driving simulator-based experiments.

It was great to talk to Duncan again, and it was really nice to meet some of his colleagues, including Anna Cox, Paul Marshall, and Sandy Gould. Thanks to all for hosting me.

While my trip was brief, I did get a chance to also visit the British Museum. This is one of my favorite places in the world, and here’s a photo of my favorite artifact from the museum’s vast collection, the Rosetta Stone:

You can see more pictures from my trip on Flickr.

First lecture in BME autonomous robots and vehicles lab

Today was my first lecture in BME's Autonomous Robots and Vehicles Lab (Autonóm robotok és járművek laboratórium). This lab is led by Bálint Kiss, who is my host during my Fulbright scholarship in Hungary.

Today’s lecture covered the use of eye trackers in designing human-computer interaction. I talked about our work on in-vehicle human-computer interaction, and drew parallels to human-robot interaction. Tomorrow I’ll introduce the class to our Seeing Machines eye tracker, and in the coming weeks I’ll run a number of lab sections in which the students will conduct short experiments in eye tracking and pupil diameter measurement.

Here's the overview of today's lecture, translated from the Hungarian (I'm thrilled to be teaching in Hungarian):

Using eye trackers in the evaluation of human-computer interaction

Researchers at the University of New Hampshire have been working on in-vehicle human-machine interfaces for more than a decade. This lecture first gives a brief overview of the development and deployment of the Project54 system, designed for police vehicles. The system provides user interfaces in multiple modalities, including speech. The lecture then reports on recent driving simulation experiments in which we used data from the simulator and an eye tracker to estimate the driver's cognitive load, driving performance, and visual attention to the outside world.

Through this lecture, students gain insight into the use of eye trackers in the evaluation and design of human-computer interaction. Human-machine interaction, in turn, is a central problem in the successful deployment of autonomous robots, since autonomous robots will not be used only by experts. On the contrary, these robots will find users in all parts of society. Such widespread deployment of robots can only succeed if the human-machine interaction is acceptable to its users.

Visiting Noble High School

This morning I discussed distracted driving research with students at Noble High School in North Berwick, ME. I was there at the invitation of David Parker who teaches physics. This year David and his students are exploring vehicle safety as part of their introduction to various aspects of physics.

Every time I talk to pre-college students, I want to communicate the idea that scientific research is exciting. With David’s students this was easy. From the beginning of my visit they were engaged in our conversation and they displayed critical thinking skills. I am sure this is gratifying for David and his Noble High School colleagues – their efforts are paying off.

Announced: Cognitive load and in-vehicle HMI special interest session at the 2012 ITS World Congress

Continuing the work of the Cognitive Load and In-Vehicle Human-Machine Interaction workshop at AutomotiveUI 2011, Peter Fröhlich and I are co-organizing a special interest session on this topic at this year's ITS World Congress.

The session will be held on Friday, October 26, 2012. Peter and I were able to secure the participation of an impressive list of panelists. They are (in alphabetical order):

  • Corinne Brusque, Director, IFSTTAR LESCOT, France
  • Johan Engström, Senior Project Manager, Volvo Technology, Sweden
  • James Foley, Senior Principal Engineer, CSRC, Toyota, USA
  • Chris Monk, Project Officer, US DOT
  • Kazumoto Morita, Senior Researcher, National Traffic Safety and Environment Laboratory, Japan
  • Scott Pennock, Chairman of the ITU-T Focus Group on Driver Distraction and Senior Hands-Free Standards Specialist at QNX, Canada

The session will be moderated by Peter Fröhlich. We hope that the session will provide a compressed update on the state-of-the-art in cognitive load research, and that it will serve as inspiration for future work in this field.

Award of Excellence at 2012 Undergraduate Research Conference

Two of my undergraduate research assistants, Josh Clairmont and Shawn Bryan, won an Award of Excellence at the 2012 Undergraduate Research Conference. The URC is UNH’s annual event aimed at engaging undergraduate students in research.

Josh and Shawn created a tangible user interface for the Microsoft Surface multitouch table. Their interface allows users to play a game of air hockey on the Surface. Josh, a computer engineering senior, was in charge of creating the Arduino-based game controller. Shawn, a computer science senior, created the game on the Surface.

Here is a video introducing the work of Josh and Shawn:

Congratulations Josh and Shawn!

2012 PhD and MS positions

A PhD and an MS position are available in the Project54 lab at the University of New Hampshire. The lab is part of the Electrical and Computer Engineering department at UNH. Successful applicants will explore human-computer interaction in vehicles. We are looking for students with a background in electrical engineering, computer engineering, computer science, or related fields.

The Project54 lab was created in 1999 in partnership with the New Hampshire Department of Safety to improve technology for New Hampshire law enforcement. Project54’s in-car system integrates electronic devices in police cruisers into a single voice-activated system. Project54 also integrates cruisers into agency-wide communication networks. The Project54 system has been deployed in over 1000 vehicles in New Hampshire in over 180 state and local law enforcement agencies.

Research focus

Both the PhD and the MS student will focus on the relationship between various in-car user interface characteristics and the cognitive load of interacting with these interfaces, with the goal of designing interfaces that do not significantly increase driver workload. Work will involve developing techniques to estimate cognitive load using performance measures (such as the variance of lane position), physiological measures (such as changes in pupil diameter [1-5]) and subjective measures (such as the NASA-TLX questionnaire).
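As a rough illustration of the first two measure types (not our lab's actual analysis pipeline), here is a minimal Python sketch; the sample values, function names, and the simple baseline-subtraction approach for pupil data are assumptions made for the example:

```python
import numpy as np

def lane_position_variance(lane_pos_m):
    """Performance measure: variance of lane position (meters^2)."""
    return np.var(lane_pos_m)

def mean_pupil_dilation(pupil_mm, baseline_mm):
    """Physiological measure: mean pupil diameter change from a pre-task
    baseline; dilation suggests increased cognitive load."""
    return np.mean(pupil_mm) - baseline_mm

# Hypothetical samples logged during one task condition
lane_pos = np.array([0.12, 0.18, -0.05, 0.22, 0.30, 0.08])  # meters from lane center
pupil = np.array([3.61, 3.72, 3.80, 3.77, 3.69, 3.74])      # mm, during the task

print(f"Variance of lane position: {lane_position_variance(lane_pos):.4f} m^2")
print(f"Mean pupil dilation: {mean_pupil_dilation(pupil, baseline_mm=3.50):.3f} mm")
```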

The PhD student will focus on spoken in-vehicle human-computer interaction, and will explore the use of human-human dialogue behavior [6-11] to guide the design process.

The work will utilize experiments in Project54's world-class driving simulator laboratory, which is equipped with two research driving simulators, three eye trackers, and a physiological data logger.

Appointment

The PhD student will be appointed for four years, and the MS student for two years. Initial appointments will be for one year, starting between June and September 2012. Continuation of funding will be dependent on satisfactory performance. Appointments will be a combination of research and teaching assistantships. Compensation will include tuition, fees, health insurance and academic year and summer stipend.

How to apply

For application instructions, and for general information, email Andrew Kun, Project54 Principal Investigator at andrew.kun@unh.edu. Please attach a current CV.

References

[1] Oskar Palinko, Andrew L. Kun, “Exploring the Effects of Visual Cognitive Load and Illumination on Pupil Diameter in Driving Simulators,” ETRA 2012

[2] Andrew L. Kun, Zeljko Medenica, Oskar Palinko, Peter A. Heeman, “Utilizing Pupil Diameter to Estimate Cognitive Load Changes During Human Dialogue: A Preliminary Study,” AutomotiveUI 2011 Adjunct Proceedings

[3] Andrew L. Kun, Peter A. Heeman, Tim Paek, W. Thomas Miller, III, Paul A. Green, Ivan Tashev, Peter Froehlich, Bryan Reimer, Shamsi Iqbal, Dagmar Kern, “Cognitive Load and In-Vehicle Human-Machine Interaction,” AutomotiveUI 2011 Adjunct Proceedings

[4] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

[5] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, “Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator,” ETRA 2010

[6] Andrew L. Kun, Alexander Shyrokov, and Peter A. Heeman, “Interactions between Human-Human Multi-Threaded Dialogues and Driving,” PUC Online First

[7] Andrew L. Kun, Zeljko Medenica, “Video Call, or Not, that is the Question,” to appear in CHI ’12 Extended Abstracts

[8] Fan Yang, Peter A. Heeman, Andrew L. Kun, “An Investigation of Interruptions and Resumptions in Multi-Tasking Dialogues,” Computational Linguistics, 37(1)

[9] Andrew L. Kun, Alexander Shyrokov, Peter A. Heeman, “Spoken Tasks for Human-Human Experiments: Towards In-Car Speech User Interfaces for Multi-Threaded Dialogue,” Automotive UI 2010

[10] Fan Yang, Peter A. Heeman, Andrew L. Kun, “Switching to Real-Time Tasks in Multi-Tasking Dialogue,” Coling 2008

[11] Alexander Shyrokov, Andrew L. Kun, Peter Heeman, “Experimental modeling of human-human multi-threaded dialogues in the presence of a manual-visual task,” SigDial 2007

Personal and Ubiquitous Computing theme issue: Automotive user interfaces and interactive applications in the car

I’m thrilled to announce that the theme issue of Personal and Ubiquitous Computing entitled “Automotive user interfaces and interactive applications in the car” is now available in PUC’s Online First. I had the pleasure of serving as co-editor of this theme issue with Albrecht Schmidt, Anind Dey, and Susanne Boll.

The theme issue includes our editorial [1], and three papers. The first is by Tuomo Kujala, who explores scrolling on touch screens while driving [2]. The second is by Florian Schaub, Markus Hipp, Frank Kargl, and Michael Weber, who address the issue of credibility in the context of automotive navigation systems [3]. The third paper is co-authored by me, my former PhD student Alex Shyrokov, and Peter Heeman. We explore multi-threaded spoken dialogues between a driver and a remote conversant [4]. The three papers were selected from 17 submissions in a rigorous review process involving approximately 50 reviewers.

References

[1] Andrew L. Kun, Albrecht Schmidt, Anind Dey and Susanne Boll, “Automotive User Interfaces and Interactive Applications in the Car,” PUC Online First

[2] Tuomo Kujala, “Browsing the Information Highway while Driving – Three In-Vehicle Touch Screen Scrolling Methods and Driver Distraction,” PUC Online First

[3] Florian Schaub, Markus Hipp, Frank Kargl, and Michael Weber, “On Credibility Improvements for Automotive Navigation Systems,” PUC Online First

[4] Andrew L. Kun, Alexander Shyrokov, and Peter A. Heeman, “Interactions between Human-Human Multi-Threaded Dialogues and Driving,” PUC Online First

Video calling while driving? Not a good idea.

Do you own a smart phone? If yes, you’re likely to have tried video calling (e.g. with Skype or FaceTime). Video calling is an exciting technology, but as Zeljko Medenica and I show in our CHI 2012 Work-in-Progress paper [1], it’s not a technology you should use while driving.

Zeljko and I conducted a driving simulator experiment in which a driver and another participant were given the verbal task of playing the game of Taboo. The two were in separate rooms and spoke to each other over headsets. In one experimental condition, the driver and the other participant could also see each other, as shown in the figure below. We wanted to find out if, in this condition, drivers would spend a significant amount of time looking at the other participant. This is an important question, as time spent looking at the other participant is time not spent looking at the road ahead!

We found that, when drivers felt that the driving task was demanding, they focused on the road ahead. However, when they perceived the driving task to be less demanding they looked at the other participant significantly more.
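To give a sense of how such glance behavior can be quantified, here is a minimal Python sketch; the gaze labels, frame rate, and proportions are hypothetical, not data from our experiment:

```python
import numpy as np

# Hypothetical per-frame gaze labels from an eye tracker (60 Hz):
# "road" = looking at the road ahead, "video" = looking at the other participant
gaze = np.array(["road"] * 540 + ["video"] * 60)  # 10 seconds of data

def dwell_fraction(labels, target):
    """Fraction of frames spent looking at a given region of interest."""
    return np.mean(labels == target)

print(f"Time on road:  {dwell_fraction(gaze, 'road'):.0%}")
print(f"Time on video: {dwell_fraction(gaze, 'video'):.0%}")
```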

What this tells us is that, under certain circumstances, drivers are willing to engage in video calls. This is due, at least in part, to the (western) social norm of looking at the person you’re talking to. These results should serve as a warning to interface designers, lawmakers (yes, there’s concern [2]), transportation officials, and drivers that video calling can be a serious distraction from driving.

Here’s a video that introduces the experiment in more detail:

References

[1] Andrew L. Kun, Zeljko Medenica, “Video Call, or Not, that is the Question,” to appear in CHI ’12 Extended Abstracts

[2] Claude Brodesser-Akner, “State Assemblyman: Ban iPhone4 Video-Calling From the Road,” New York Magazine. Date accessed 03/02/2012

Further progress towards disambiguating the effects of cognitive load and light on pupil diameter

In driving simulator studies participants complete both visual and aural tasks. The most obvious visual task is driving itself, but there are others, such as viewing an LCD screen that displays a map. Aural tasks include talking to an in-vehicle computer. I am very interested in estimating the cognitive load of these various tasks. One way to estimate this cognitive load is through changes in pupil diameter: in an effect called the Task Evoked Pupillary Response (TEPR) [1], the pupil dilates with increased cognitive load.

However, in driving simulator studies participants scan a non-uniformly illuminated visual scene. If unaccounted for, this non-uniformity in illumination might introduce an error in our estimate of the TEPR. Oskar Palinko and I will have a paper at ETRA 2012 [2] extending our previous work [3], in which we established that it is possible to separate the pupil’s light reflex from the TEPR. While in our previous work TEPR was the result of participants’ engagement in an aural task, in our latest experiment TEPR is due to engagement in a visual task.

The two experiments taken together support our main hypothesis that it is possible to disambiguate (and not just separate) the two effects even in complicated environments, such as a driving simulator. We are currently designing further experiments to test this hypothesis.

References

[1] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 91(2), 276-292

[2] Oskar Palinko, Andrew L. Kun, “Exploring the Effects of Visual Cognitive Load and Illumination on Pupil Diameter in Driving Simulators,” to appear at ETRA 2012

[3] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

2011 opportunity for UNH CS students: multi-touch surface manipulation of geo-coded time series

When I think back to the recent BP oil spill in the Gulf of Mexico, the images that come to mind are of wildlife affected on beaches, idle fishing vessels, and a massive response that involved thousands of people across multiple states.

How can such a massive response be managed? There is no single answer. However, one thing that can help is to make data about various aspects of the disaster, as well as the response effort, accessible to those conducting the response activities. This is the role of the Environmental Response Management Application (ERMA). ERMA is a web-based data visualization application. It visualizes geo-coded time series without requiring users to know how to access specialized databases or how to overlay data from these databases on virtual maps. ERMA was developed at UNH under the guidance of the Coastal Response Research Center (CRRC).

Nancy Kinner is the co-director of the UNH Coastal Response Research Center. Building on Nancy’s experiences with ERMA, she and I are interested in exploring how a multi-touch table could be used to access and manipulate geo-coded time series.

Seeking UNH CS student

To further this effort, we are seeking a UNH CS student interested in developing a user interface on a multi-touch table. The interface would allow a human operator to access remote databases, manipulate the data (e.g., by sending it to Matlab for processing), and display the results on a virtual map or a graph. This work will be part of a team effort, with two students working with Nancy on identifying data and manipulations of interest.

What should the user interface do?

The operator should be able to select data, e.g. from a website such as ERMA. Data types of interest include outputs from various sensors (temperature, pressure, accelerometers, etc.). Data manipulation will require some simple processing, such as setting beginning and end points for sensor readings. It will also require more complex processing of data, e.g. filtering.
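As a rough sketch of these manipulations in Python (hypothetical data and function names; the actual interface would run on the Surface and could hand data off to tools such as Matlab):

```python
import numpy as np

def trim(t, x, t_start, t_end):
    """Simple manipulation: keep only samples between chosen start/end times."""
    mask = (t >= t_start) & (t <= t_end)
    return t[mask], x[mask]

def moving_average(x, window=5):
    """More complex manipulation: smooth a reading with a moving-average filter."""
    return np.convolve(x, np.ones(window) / window, mode="same")

# Hypothetical temperature sensor readings sampled once per minute
t = np.arange(0, 60)                                  # minutes
temp = 15 + 0.05 * t + np.random.normal(0, 0.3, t.size)

t_sel, temp_sel = trim(t, temp, 10, 50)  # operator picks begin/end points
temp_smooth = moving_average(temp_sel)   # then applies a filter before plotting
```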

What platform will be used?

The project will leverage Project54's Microsoft Surface multi-touch table. Here is a video by UNH ECE graduate student Tim April introducing some of the interactions he has explored with the Surface.

What are the terms of this job?

We are interested in hiring an undergraduate or graduate UNH CS student for the 2011-2012 academic year, with the possibility of extending the appointment for the summer of 2012 and beyond, pending satisfactory performance and the availability of funding. The student will work up to 20 hours/week during the academic year and up to 40 hours a week during the summer break.

What are the required skills? And what new skills will I acquire?

Work on this team project will require object-oriented programming, which is necessary to control the multi-touch table. You will explore the application of these skills to the design of surface user interfaces, as well as to experiments with human subjects – after all, we will have to systematically test your creation! Finally, you will interact with students and faculty from at least two other disciplines (civil/environmental and electrical/computer engineering), which means you will gain valuable experience working on multi-disciplinary teams.

Interested? Have questions, ideas, suggestions?
Email me.