Alex Shyrokov defends PhD

Two weeks ago my student Alex Shyrokov defended his PhD dissertation. Alex was interested in human-computer interaction in cases where the human is engaged in a manual-visual task. In such situations a speech interface appears to be a natural way to communicate with a computer. Alex was especially interested in multi-threaded spoken HCI; in multi-threaded dialogues the conversants switch back and forth between multiple topics.

How should we design a speech interface that will support multi-threaded human-computer dialogues when the human is engaged in a manual-visual task? In order to begin answering this question, Alex explored spoken dialogues between two human conversants. The underlying hypothesis is that a successful HCI design can mimic some aspects of human-human interaction.

In Alex’s experiments one of the conversants (the driver) operated a simulated vehicle while the other (an assistant) was only engaged in the spoken dialogue. The conversants carried on an ongoing spoken task as well as an interrupting one. Alex’s dissertation discusses several interesting findings, one of which is that driving performance is worse during and after the interrupting task. Alex proposes that this is due to a shift in the driver’s attention away from driving and toward the spoken tasks. The shift, in turn, is due to the perceived urgency of the spoken tasks: as the perceived urgency increases, the driver is more likely to shift her attention away from driving. The lesson for HCI design is to be very careful in managing the driver’s perceived urgency when she interacts with devices in the car.

Alex benefited tremendously from the help of my collaborator on this research, Peter Heeman. Peter provided excellent guidance throughout Alex’s PhD studies, for which I am grateful. Peter and I plan to continue working with Alex’s data, which includes transcribed dialogues, videos, driving performance measures, and eye tracker data. I am especially interested in using the eye tracker’s pupil diameter measurements to estimate cognitive load, as we have done in work led by Oskar Palinko [1].


[1] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, “Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator,” ETRA 2010

Automotive user interfaces SIG meeting to be held at CHI 2010

There will be a special interest group (SIG) meeting on automotive user interfaces at CHI 2010. The lead author of the paper describing the aims of the SIG [1] is Albrecht Schmidt, and the list of coauthors includes Anind Dey, Wolfgang Spiessl and me. CHI SIGs are 90-minute scheduled sessions during the conference. They are an opportunity for researchers with a common interest to meet face-to-face and engage in dialogue.

Our SIG deals with human-computer interaction in the car. This is an exciting field of study that was the topic of a CHI 2008 SIG [2] as well as the AutomotiveUI 2009 conference [3], and the AutomotiveUI 2010 CFP will be posted very soon. In the last several years the amount of human-computer interaction in the car has increased for two main reasons. First, many cars now come equipped with myriad electronic devices, such as displays indicating power usage and advanced driver assistance systems. Second, users (drivers and passengers) bring mobile devices into cars. The list of these brought-in mobile devices is long, but personal navigation devices and mp3 players are probably the most common ones.

At the SIG we hope to discuss user interface issues that are the result of having all of these devices in cars. Some of the questions are:

  • How can we reduce (or eliminate) driver distraction caused by the in-car devices?
  • Can driver interactions with in-car devices actually improve driving performance?
  • Can users take advantage of novel technologies, such as streaming videos from other cars?
  • How do we build interfaces that users can trust and will thus actually use?
  • How can car manufacturers, OEMs, brought-in device manufacturers and academia collaborate in envisioning, creating and implementing automotive user interfaces?

The 2008 CHI SIG [2] attracted over 60 people and we’re hoping for similar (or better!) turnout.


[1] Albrecht Schmidt, Anind L. Dey, Andrew L. Kun, Wolfgang Spiessl, “Automotive User Interfaces: Human Computer Interaction in the Car,” CHI 2010 Extended Abstracts (to appear)

[2] D. M. Krum, J. Faenger, B. Lathrop, J. Sison, A. Lien, “All roads lead to CHI: interaction in the automobile,” CHI 2008 Extended Abstracts

[3] Albrecht Schmidt, Anind Dey, Thomas Seder, Oskar Juhlin, “Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications,” 2009

Estimating cognitive load using pupillometry: paper accepted to ETRA 2010

Our short paper [1] on using changes in pupil diameter to estimate cognitive load was accepted to the Eye Tracking Research and Applications 2010 (ETRA 2010) conference. The lead author is Oskar Palinko, and the co-authors are my PhD student Alex Shyrokov, my OHSU collaborator Peter Heeman and me.

In previous experiments in our lab we have concentrated on performance measures to evaluate the effects of secondary tasks on the driver. Secondary tasks are those performed in addition to driving, e.g. interacting with a personal navigation device. However, as Jackson Beatty has shown, when people’s cognitive load increases their pupils dilate [2]. This fascinating phenomenon provides a physiological measure of cognitive load. Why is it important to have multiple measures of cognitive load? As Christopher Wickens points out [3], this allows us to avoid circular arguments such as “… saying that a task interferes more because of its higher resource demand, and its resource demand is inferred to be higher because of its greater interference.”
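Beatty’s finding gives us a simple physiological signal to work with. Purely as an illustrative sketch (the function, sample values and units below are my own assumptions, not taken from [2] or from our paper), the task-evoked pupillary response can be quantified as the percent change in mean pupil diameter during a task relative to a pre-task baseline:

```python
# Sketch: task-evoked pupillary response as a percent change from a
# pre-task baseline. The numbers are made-up illustrative samples.

def mean(xs):
    return sum(xs) / len(xs)

def tepr_percent(baseline_mm, task_mm):
    """Percent change in mean pupil diameter, task vs. baseline."""
    b = mean(baseline_mm)
    return 100.0 * (mean(task_mm) - b) / b

# Pupil diameter samples (mm) before and during a demanding spoken task
baseline = [3.10, 3.08, 3.12, 3.11]
during_task = [3.30, 3.34, 3.28, 3.32]
print(round(tepr_percent(baseline, during_task), 2))  # a few percent dilation
```

A dilation of a few percent on a ~3 mm pupil is roughly the magnitude Beatty reports for demanding cognitive tasks, which is why a remote eye tracker’s diameter estimates can be informative at all.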

In a driving simulator-based experiment conducted by Alex, we found that performance-based and pupillometry-based (that is, physiological) cognitive load measures show high correspondence for tasks that last tens of seconds. In other words, both driving performance measures and pupil size changes appear to track changes in cognitive load. In the experiment the driver is involved in two spoken tasks in addition to the manual-visual task of driving. We hypothesize that different parts of these two spoken tasks present different levels of cognitive load for the driver, and our measurements of driving performance and pupil diameter changes appear to confirm this hypothesis. Additionally, we introduced a new pupillometry-based cognitive load measure that shows promise for tracking changes in cognitive load on time scales of several seconds.
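The measure itself is defined in the paper; as a hedged illustration of the general idea of tracking load at second-level time scales (the window length, sampling rate and sample values below are my own assumptions, not the measure from [1]), one can smooth noisy pupil diameter samples with a short sliding window and watch the smoothed signal rise and fall with task demands:

```python
# Illustrative only: a sliding-window average of pupil diameter samples,
# one simple way to follow a noisy physiological signal at time scales
# of a few seconds. Window length and sampling rate are assumptions.

def sliding_mean(samples, window):
    """Mean over each run of `window` consecutive samples."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

# 1 Hz pupil diameter samples (mm): load rises mid-task, then recedes
pupil = [3.1, 3.1, 3.2, 3.4, 3.5, 3.5, 3.3, 3.2]
smoothed = sliding_mean(pupil, 3)
print([round(x, 2) for x in smoothed])  # peaks near the middle of the task
```

The smoothing trades temporal resolution for noise robustness: a longer window suppresses measurement jitter but blurs exactly the several-second changes one is trying to detect, which is the core tension any such measure has to manage.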

In Alex’s experiment one of the spoken tasks required participants to ask and answer yes/no questions. We hypothesize that different phases of this task also present different levels of cognitive load to the driver. Will this be evident in driving performance and pupillometric data? We hope to find out soon!


[1] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, “Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator,” ETRA 2010

[2] Jackson Beatty, “Task-evoked pupillary responses, processing load, and the structure of processing resources,” Psychological Bulletin. Vol. 91(2), Mar 1982, 276-292

[3] Christopher D. Wickens, “Multiple resources and performance prediction,” Theoretical Issues in Ergonomic Science, 2002, Vol. 3, No. 2, 159-177

Promoting the CEPS-BUTE Exchange Program

In an effort to promote the CEPS-BUTE exchange program I recently gave a presentation to two similar audiences here at UNH. Last Monday Kent Chamberlin hosted me in his ECE 401 class (the introductory ECE course) and I had a chance to talk to about 75 ECE freshmen. Today I gave the presentation to Bob Henry’s TECH 400 students (TECH 400 introduces the CEPS majors to CEPS undeclared students).


My main point was this: spending a semester abroad gives students a competitive advantage because it proves that they can adapt to change. Of course, spending a semester in Europe also allows students to travel, and I spent some time promoting my favorite travel guide author, Rick Steves 🙂

NSF SBIR review panel

On Thursday I participated in a Phase II National Science Foundation Small Business Innovation Research (NSF SBIR) panel. While I’ve been to Phase I panels before, this was my first Phase II panel. In Phase I, companies can request up to $150,000 for six months to a year. A company that receives a Phase I award, and successfully delivers on its grant, is eligible to compete in Phase II with a proposal for up to $500,000 over two years.

The one thing that always strikes me at SBIR panels is that proposals have to make a good business case. Panels include both technical experts and business experts, and a proposal has to clear the bar with both groups in order to be recommended for funding. I’ve always taken it for granted that an NSF proposal (SBIR or scientific) should make a good argument for why the technology or scientific innovation is worth funding. However, before my involvement in the SBIR review process, I didn’t think much about the business case to be made when requesting funding for a business venture. In this respect I’m hardly alone: engineers usually don’t spend much time exploring the business side of their work. At the UNH ECE department we’re looking into alleviating this problem by involving Brad Gillespie in our senior project courses. Brad is a UNH ECE alumnus, Microsoft veteran and business strategy consultant. Read about Brad’s last visit to UNH ECE and check back for more on this in a future post.

So, if you’re a technical person planning to submit an SBIR proposal (note that many federal agencies run SBIR programs, not just the NSF), my advice is this: bring in people who can help you think through (and coherently present in the proposal) a business plan for your venture. Without a compelling business plan your proposal will not be funded.

At the 2009 fall NIJ CommTech TWG meeting

On Wednesday and Thursday, Oskar Palinko, Mark Taipan and I participated in the NIJ CommTech Technical Working Group (TWG) meeting. On Wednesday I gave a presentation reporting on our lab’s progress.


On Thursday we participated in the meeting’s demo session. Using a single-computer driving simulator and a radio setup, we demonstrated the advantage of controlling a police radio with voice commands over using the radio’s buttons. Fittingly, the first driving simulator experiment we published investigated this very effect [1]. We also demonstrated accessing a remote database using the Project54 system running on a Symbol handheld computer. We expect that, once we get approval from the NH State Police to deploy such devices (the NHSP is responsible for data access for all officers in the state), they will be a big hit with local departments.

One of the many people we had a chance to talk to at the TWG meeting is Gil Emery, Communications Manager at the Portsmouth, NH PD. Gil was interested in the handhelds and we may be able to work with him on using these handhelds as cameras that allow tagging pictures on the spot and then using a cellular network to transmit them to headquarters. This work would build on Michael Farrar’s MS thesis research.

You can see pictures from this event on Flickr.


[1] Zeljko Medenica, Andrew L. Kun, “Comparing the Influence of Two User Interfaces for Mobile Radios on Driving Performance,” Driving Assessment 2007

Two posters at Ubicomp 2009

Our group presented two posters at last week’s Ubicomp 2009. Oskar Palinko and Michael Litchfield were on hand to talk about our multitouch table effort [1] (a great deal of the work for this poster was done by Ankit Singh). Zeljko Medenica introduced a driving simulator pilot, work done in collaboration with Tim Paek, that deals with using augmented reality for the user interface of a navigation device [2].

Oskar (center) and Mike (right)

Zeljko (center)

Oskar, Mike and I are working on expanding the multitouch study. We plan to start with an online study in which subjects will watch two videos, one in which a story is presented using the multitouch table and another with the same story presented using a simple slide show. Zeljko will head up the follow-on to the pilot study – take a look at the video below to see (roughly) what we’re planning to do.

Take a look at other pictures I took at Ubicomp 2009 on Flickr.


[1] Oskar Palinko, Ankit Singh, Michael A. Farrar, Michael Litchfield, Andrew L. Kun, “Towards Storytelling with Geotagged Photos on a Multitouch Display,” Conference Supplement, Ubicomp 2009

[2] Zeljko Medenica, Oskar Palinko, Andrew L. Kun, Tim Paek, “Exploring In-Car Augmented Reality Navigation Aids: A Pilot Study,” Conference Supplement, Ubicomp 2009

Visiting Budapest University of Technology and Economics (BUTE)

After my trip to Automotive UI 2009 I flew to Budapest, Hungary. The UNH College of Engineering and Physical Sciences has an exchange program with BUTE and I went to promote this program to BUTE students. I also got a chance to meet two people responsible for implementing the program “on the ground” in Budapest, Eszter Kiss and Máté Helfrich. Eszter is the person who looks after the UNH students (and many others from all over the world) from the time they arrive in Budapest, so I was very happy to meet her and express UNH’s gratitude for all of her efforts.

Eszter organized a talk in which I presented some of the reasons why a semester at UNH would be beneficial to BUTE students (see the slides). The discussion that followed my presentation was excellent, with students asking questions about many aspects of the exchange program, as well as a new summer internship program. The discussion was in Hungarian, which was fun, as I don’t use this language for work very much 🙂

You can see more pictures from my visit on Flickr.

Automotive UI 2009, Essen

Last Monday and Tuesday I was in Essen, Germany, at the Automotive User Interfaces 2009 conference. This was the first Automotive UI conference and it was quite successful with around 60 participants, according to conference chair Albrecht Schmidt. Here’s Albrecht welcoming us to AutoUI ’09 and the University of Duisburg-Essen:

I gave a talk at the conference about our latest navigation study, which investigated the influence of two personal navigation devices on driving performance and visual attention. This was collaborative work with Tim Paek of Microsoft Research. For more information on our findings check out the paper or take a look at the slides.


Visiting MERL

Three weeks ago I visited Mitsubishi Electric Research Laboratories (MERL) in Cambridge, MA. My hosts were Bret Harsham and his colleagues Garrett Weinberg and Bent Schmidt-Nielsen. The proximate reason for my visit was that one of my PhD students, Zeljko Medenica, worked under Bret as a summer intern.

As part of my visit I saw the MERL driving simulator, which is an excellent adaptation of a computer game for research purposes (read more about it in Garrett and Bret’s Automotive UI 2009 paper). I really like the driving courses they can use (e.g. winding mountain roads and narrow village streets), and I’m impressed with the performance of the simulator’s chair, which shakes and tilts.

After the simulator tour I gave a talk on our latest navigation study, which compared driving performance and visual attention when using two personal navigation aids: one that displays a map and provides spoken instructions and another that provides spoken instructions only. The talk was based on our Automotive UI 2009 paper.

Finally, I had a chance to talk to Fatih Porikli, who showed me some great videos of his work on recognizing pedestrians. We also discussed possible collaboration on learning grammars for using voice commands to tag photos. More about this in another post.

Associate Professor, Electrical and Computer Engineering, University of New Hampshire