At the end of October I was in Eindhoven for AutomotiveUI 2013. The conference was hosted by Jacques Terken, who along with his team did a splendid job: from the technical content, to the venue, to the banquet, and the invited demonstrations, everything was of high quality and ran smoothly. Thanks Jacques and colleagues!
The conference started with workshops, including our CLW 2013. The workshop was sponsored by Microsoft Research and it brought together over 30 participants. In addition to six contributed talks, the workshop featured two invited speakers. The keynote lecture was given by Klaus Bengler, who discussed challenges with cooperative driving. Tuhin Diptiman discussed the implications of the 2013 NHTSA visual-manual guidelines on the design of in-vehicle interfaces. Thanks Klaus and Tuhin for your engaging presentations! See pictures from CLW on Flickr.
The main conference included talks, posters and demos. Our group was productive: we had one talk and three posters. One of the memorable moments of the conference was the invited demos. I had a chance to ride in a TNO car using Cooperative Adaptive Cruise Control (CACC). See the video below (the driver and narrator is TNO researcher E. (Ellen) van Nunen – thanks Ellen!).
During the 2012-2013 academic year I worked with a team of UNH ECE seniors to explore using an LED array as a low-cost heads-up display that would provide in-vehicle turn-by-turn navigation instructions. Our work will be published in the proceedings of AutomotiveUI 2013. Here’s the video introducing the experiment.
This week I attended the Microsoft Research Faculty Summit, an annual event held at MSR Redmond. The 2013 event gathered over 400 faculty from around the world. I was honored to receive an invitation, as these invitations are competitive: MSR researchers recommend faculty to invite, and a committee at MSR selects a subset who receive invitations.
Below are some of my impressions from the event. But, before I go on, I first wanted to thank MSR researchers John Krumm, Ivan Tashev and Shamsi Iqbal for spending time with me at the summit. Thanks also to MSR’s Tim Paek, who has played a key role in a number of our studies at UNH.
Bill Gates inspires
Bill Gates was the opening keynote speaker. He discussed his work with the Gates Foundation and answered audience questions. One of the interesting things from the Q&A session was Bill’s proposed analogy that MOOCs are similar to recorded music: in the past there was much more live music, while today we primarily listen to recorded music. In the future live lectures might also become much less common and we might instead primarily listen to recorded lectures by the best lecturers. While this might sound scary to faculty, Bill pointed out that lectures are just one part of a faculty member’s education-related efforts. Others include work in labs, study sessions, and discussions.
MSR is a uniquely open industry lab
While MSR is only about 1% of Microsoft, it spends as much on computing research as the NSF. And most importantly, as Peter Lee, Corporate VP MSR, pointed out, MSR researchers publish, and in general conduct their work in an open fashion. MSR also sets its own course independently, even of Microsoft proper.
Microsoft supports women in computing
The Faculty Summit featured a session on best practices in promoting computing disciplines to women. One suggestion that stuck with me is that organizations (e.g. academic departments) should track their efforts and outcomes. Once you start tracking, and creating a paper trail, things will start to change.
Moore’s law is almost dead (and will be by 2025)
Doug Burger, Director of Client and Cloud Applications in Microsoft Research’s Extreme Computing Group, pointed out that we cannot keep increasing computational power by reducing transistor size, as our transistors are approaching atomic scale. There’s a need for new approaches. One possible direction is to customize hardware: e.g. if we only need 20 bits for a particular operation, why implement the logic with 32?
The Lab of Things is a great tool for ubicomp research
Are you planning a field experiment in which you expect to collect data from electronic devices in the home? Check out the Lab of Things (LoT), it’s really promising. It allows you to quickly deploy your system, monitor system activity from the cloud, and log data in the cloud. Here’s a video introducing the LoT:
Seattle and the surrounding area are beautiful
I really like Seattle, with the Space Needle, the lakes, the UW campus, Mount Rainier and all of the summer sunshine.
Over 35 people from government, industry and academia attended CLW 2012.
For CLW 2012 the organizers made the decision to involve a large number of experts in the workshop, instead of only including contributions by authors responding to our CFP. Thus, the CLW 2012 program included three expert presentations, as well as a government-industry panel with four participants. Each of these expert participants discussed unique aspects of estimating and utilizing cognitive load for the design and deployment of in-vehicle human-machine interfaces.
The expert presentations were followed by a government-industry panel. Chris Monk (Human Factors Division Chief at NHTSA) presented the NHTSA perspective on cognitive load and HMI design. Jim Foley (Toyota Technical Center, USA) introduced the OEM perspective. Scott Pennock (QNX & ITU-T Focus Group on Driver Distraction) introduced issues related to standardization. Garrett Weinberg (Nuance) focused on issues related to voice user interfaces.
Following these presentations, and the accompanying lively discussions, workshop participants viewed eight posters.
Evaluation
At the end of the workshop we asked participants to indicate their level of agreement with these four statements:
I found the workshop to be useful.
I enjoyed the workshop.
I would attend a similar workshop at a future AutomotiveUI conference.
This workshop is the reason I am attending AutomotiveUI 2012.
The responses of 13 participants are shown below (the workshop organizers in attendance did not complete the questionnaire). They indicate that the workshop was a success.
Since the conclusion of CLW 2012, co-organizers Peter Fröhlich and Andrew Kun have joined forces with Susanne Boll and Jim Foley to organize a workshop at CHI 2013 on automotive user interfaces. Also, a proposal for CLW 2013 at AutomotiveUI 2013 is in the works.
Thank you presenters and participants!
The organizers would like to extend our warmest appreciation to all of the presenters for the work that went into the expert presentations, the panel discussion, and the poster papers and presentations. We would also like to thank all of the workshop attendees for raising questions, discussing posters, and sharing their knowledge and expertise.
You can see more pictures from CLW 2012 on Flickr.
The report on the AutomotiveUI 2012 conference, co-authored by Linda Boyle, Bryan Reimer, Andreas Riener and me, was recently published by IEEE Pervasive Computing. The reference on this page points to my final version of the paper. You can also download the paper with the published layout directly from the IEEE here.
Continuing the work of the 2011 Cognitive Load and In-vehicle Human-Machine Interaction workshop at AutomotiveUI 2011, Peter Fröhlich and I are co-organizing a special interest session on this topic at this year’s ITS World Congress.
The session will be held on Friday, October 26, 2012. Peter and I were able to secure the participation of an impressive list of panelists. They are (in alphabetical order):
James Foley, Senior Principal Engineer, CSRC, Toyota, USA
Chris Monk, Project Officer, US DOT
Kazumoto Morita, Senior Researcher, National Safety and Environment Laboratory, Japan
Scott Pennock, Chairman of the ITU-T Focus Group on Driver Distraction and Senior Hands-Free Standards Specialist at QNX, Canada
The session will be moderated by Peter Fröhlich. We hope that the session will provide a compressed update on the state-of-the-art in cognitive load research, and that it will serve as inspiration for future work in this field.
Do you own a smart phone? If yes, you’re likely to have tried video calling (e.g. with Skype or FaceTime). Video calling is an exciting technology, but as Zeljko Medenica and I show in our CHI 2012 Work-in-Progress paper, it’s not a technology you should use while driving.
Zeljko and I conducted a driving simulator experiment in which a driver and another participant were given the verbal task of playing the game of Taboo. The driver and the passenger were in separate rooms and spoke to each other over headsets. In one experimental condition, the driver and the other participant could also see each other as shown in the figure below. We wanted to find out if in this condition drivers would spend a significant amount of time looking at the other participant. This is an important question, as time spent looking at the other participant is time not spent looking at the road ahead!
We found that, when drivers felt that the driving task was demanding, they focused on the road ahead. However, when they perceived the driving task to be less demanding they looked at the other participant significantly more.
What this tells us is that, under certain circumstances, drivers are willing to engage in video calls. This is due, at least in part, to the (western) social norm of looking at the person you’re talking to. These results should serve as a warning to interface designers, lawmakers (yes, there’s concern), transportation officials, and drivers that video calling can be a serious distraction from driving.
Here’s a video that introduces the experiment in more detail:
Augmented reality PND: An augmented reality PND overlays route guidance on the real world using a head-up display. Our version is simulated and we simply project the route guidance on the simulator screens along with the driving simulation images. Augmented reality PNDs are not yet available commercially for cars.
Street-view PND: This PND uses a simplified version of augmented reality. It overlays route guidance on a sequence of still images of the road. The images and overlay are displayed on a head-down display. Google Maps Navigation runs on smart phones and can be used with street view.
The following video demonstrates the two PNDs.
Our findings indicate that augmented reality PNDs allow for excellent visual attention to the road ahead and excellent driving performance. In contrast, street-view PNDs can have a detrimental effect on both. Thus, while further research is clearly needed, it might be best if navigation with a street-view PND were handled by a passenger and not by the driver.
Light intensity affects pupil diameter: the pupil contracts in bright environments and it dilates in the dark. Interestingly, cognitive load also affects pupil diameter, with the pupil dilating in response to increased cognitive load. This effect is called the task-evoked pupillary response (TEPR). Thus, changes in pupil diameter are a physiological measure of cognitive load; however, changes in lighting introduce noise into the estimate.
Last week Oskar Palinko gave a talk at Driving Assessment 2011 introducing our work on disambiguating the effects of cognitive load and light on pupil diameter in driving simulator studies. We hypothesized that we can simply subtract the effect of lighting on pupil diameter from the combined effect of light and cognitive load and produce an estimate of cognitive load only. We tested the hypothesis through an experiment in which participants were given three tasks:
Cognitive task with varying cognitive load and constant lighting. This task was adapted from the work of Klingner et al. Participants listened to a voice counting from 1 to 18 repeatedly. Participants were told that every sixth number (6, 12, and 18) might be out of order and were instructed to push a button if they detected an out-of-order number. This task induced increased cognitive load at every sixth number as participants focused on the counting sequence. A new number was read every 1.5 seconds, thus cognitive load (and pupil diameter) increased every 6 x 1.5 sec = 9 seconds.
Visual task with constant cognitive load (assuming no daydreaming!) and varying lighting. Participants were instructed to follow a visual target which switched location between a white, a gray and a black truck. The light reaching the participant’s eye varied as the participant’s gaze moved from one truck to another. Participants held their gaze on a truck for 9 seconds, allowing the pupil diameter ample time to settle.
Combined task with varying cognitive load and lighting. Participants completed the cognitive and visual tasks in parallel. We synchronized the cognitive and visual tasks such that increases in cognitive load occurred after the pupil diameter stabilized in response to moving the gaze between trucks. Synchronization was straightforward as the cognitive task was periodic with 9 seconds and in the visual task lighting intensity also changed every 9 seconds.
Our results confirm that, at least in this simple case, our hypothesis holds and we can indeed detect changes in cognitive load under varying lighting conditions. We are planning to extend this work by introducing scenarios in which participants drive in realistic simulated environments. Under such scenarios gaze angles, and thus the amount of light reaching participants’ eyes, will change rapidly, making the disambiguation more complex, and of course more useful.
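The subtraction hypothesis is easy to illustrate numerically. Here is a minimal sketch with made-up signal shapes and values (not data from the study, and not the authors' actual analysis pipeline): a light-driven pupil response, a superimposed task-evoked dilation, and the subtraction that recovers the cognitive-load component.

```python
import numpy as np

# One 9-second period, sampled at 10 Hz (illustrative values only).
t = np.arange(0.0, 9.0, 0.1)

# Hypothetical pupil response to lighting alone (visual task): the pupil
# dilates toward a new baseline after the gaze moves to a darker truck.
light_only = 3.0 + 0.5 * (1.0 - np.exp(-t / 1.0))  # diameter in mm

# Hypothetical task-evoked pupillary response (TEPR): a small dilation
# peaking when the critical number in the counting task is heard (t = 6 s).
tepr = 0.15 * np.exp(-((t - 6.0) ** 2) / 2.0)

# Combined task: lighting response plus the task-evoked dilation.
combined = light_only + tepr

# The hypothesis: subtracting the light-only response from the combined
# signal leaves an estimate of the cognitive-load component alone.
estimated_load = combined - light_only
```

In this idealized case the subtraction recovers the TEPR exactly; the paper's point is that the same idea can work on real pupil data, where the lighting response must first be measured in a separate, cognitively undemanding task.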
Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 91(2), 276-292, 1982.