Tag Archives: hci

2011 opportunity for UNH CS students: multi-touch surface manipulation of geo-coded time series

When I think back to the recent BP oil spill in the Gulf of Mexico, the images that come to mind are of affected wildlife on beaches, idle fishing vessels, and a massive response that involved thousands of people across multiple states.

How can such a massive response be managed? There is no single answer. However, one thing that can help is to make data about various aspects of the disaster, as well as about the response effort, accessible to those conducting the response activities. This is the role of the Environmental Response Management Application (ERMA). ERMA is a web-based data visualization application: it visualizes geo-coded time series without requiring users to know how to access specialized databases or how to overlay data from those databases on virtual maps. ERMA was developed at UNH under the guidance of the Coastal Response Research Center (CRRC).

Nancy Kinner is the co-director of the UNH Coastal Response Research Center. Building on Nancy’s experiences with ERMA, she and I are interested in exploring how a multi-touch table could be used to access and manipulate geo-coded time series.

Seeking UNH CS student

To further our effort, we are seeking a UNH CS student interested in developing a user interface on a multi-touch table. The interface would allow a human operator to access remote databases, manipulate the data (e.g. by sending it to Matlab for processing), and display the results on a virtual map or a graph. This work will be part of a team effort, with two students working with Nancy on identifying data and manipulations of interest.

What should the user interface do?

The operator should be able to select data, e.g. from a website such as ERMA. Data types of interest include outputs from various sensors (temperature, pressure, accelerometers, etc.). Data manipulation will require some simple processing, such as setting beginning and end points for sensor readings. It will also require more complex processing of data, e.g. filtering.
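To make the kind of manipulation we have in mind more concrete, here is a minimal sketch in Python (the actual interface would run on the Surface, and heavier processing might be handed off to Matlab as noted above). The function and variable names are illustrative only, not part of any existing Project54 or ERMA code:

    import numpy as np

    def trim_series(timestamps, values, t_start, t_end):
        """Keep only the readings that fall between t_start and t_end."""
        mask = (timestamps >= t_start) & (timestamps <= t_end)
        return timestamps[mask], values[mask]

    def moving_average(values, window=5):
        """Simple low-pass filter: average each sample with its neighbors."""
        kernel = np.ones(window) / window
        return np.convolve(values, kernel, mode="same")

    # Example: trim a temperature series to its first hour and smooth it.
    t = np.arange(0.0, 7200.0, 10.0)               # one reading every 10 s
    temp = 20.0 + 0.5 * np.random.randn(t.size)    # noisy sensor values
    t_hr, temp_hr = trim_series(t, temp, 0.0, 3600.0)
    temp_smooth = moving_average(temp_hr)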

What platform will be used?

The project will leverage Project54’s Microsoft Surface multi-touch table. Here is a video by UNH ECE graduate student Tim April introducing some of the interactions he has explored with the Surface.

What are the terms of this job?

We are interested in hiring an undergraduate or graduate UNH CS student for the 2011-2012 academic year, with the possibility of extending the appointment for the summer of 2012 and beyond, pending satisfactory performance and the availability of funding. The student will work up to 20 hours/week during the academic year and up to 40 hours/week during the summer break.

What are the required skills? And what new skills will I acquire?

Work on this team project will require object-oriented programming skills, which are necessary to control the multi-touch table. You will explore the application of these skills to the design of surface user interfaces as well as to experiments with human subjects – after all, we will have to systematically test your creation! Finally, you will interact with students and faculty from at least two other disciplines (civil/environmental and electrical/computer engineering), which means you will gain valuable experience working on multi-disciplinary teams.

Interested? Have questions, ideas, suggestions?
Email me.

2011 Cognitive Load and In-Vehicle Human-Machine Interaction workshop

I’m thrilled to announce the 2011 Cognitive Load and In-Vehicle Human-Machine Interaction workshop (CLW 2011) to be held at AutomotiveUI 2011 in Salzburg, Austria. I’m co-organizing the workshop with Peter Heeman, Tim Paek, Tom Miller, Paul Green, Ivan Tashev, Peter Froehlich, Bryan Reimer, Shamsi Iqbal and Dagmar Kern. 

Why have this workshop? Interactions with in-vehicle electronic devices can interfere with the primary task of driving. The concept of cognitive load helps us understand the extent to which these interactions interfere with the driving task and how this interference can be mitigated. While research results on in-vehicle cognitive load are frequently presented at automotive research conferences and in related journals, so far no dedicated forum is available for focused discussions on this topic. This workshop aims to fill that void.

Submissions to the workshop are due October 17. Topics of interest include, but are not limited to:

– Cognitive load estimation in the laboratory,
– Cognitive load estimation on the road,
– Sensing technologies for cognitive load estimation,
– Algorithms for cognitive load estimation,
– Performance measures of cognitive load,
– Physiological measures of cognitive load,
– Visual measures of cognitive load,
– Subjective measures of cognitive load,
– Methods for benchmarking cognitive load,
– Cognitive load of driving,
– Cognitive overload and cognitive underload,
– Approaches to cognitive load management inspired by human-human interactions.

For a detailed description of workshop goals, take a look at the call for papers.

Augmented Reality vs. Street View for Personal Navigation Devices

Personal navigation devices (PNDs) are ubiquitous and primarily come in three forms: as built-in devices in vehicles, as brought-in stand-alone devices, or as applications on smart phones.

So what is next for PNDs? In a driving simulator study to be presented at MobileHCI 2011 [1], Zeljko Medenica, Tim Paek, Oskar Palinko and I explored two ideas:

  • Augmented reality PND: An augmented reality PND overlays route guidance on the real world using a head-up display. Our version is simulated and we simply project the route guidance on the simulator screens along with the driving simulation images. Augmented reality PNDs are not yet available commercially for cars.
  • Street-view PND: This PND uses a simplified version of augmented reality. It overlays route guidance on a sequence of still images of the road. The images and overlay are displayed on a head-down display. Google Maps Navigation runs on smart phones and can be used with street view.

The following video demonstrates the two PNDs.

Our findings indicate that augmented reality PNDs allow for excellent visual attention to the road ahead and excellent driving performance. In contrast, street-view PNDs can have a detrimental effect on both. Thus, while further research is clearly needed, it might be best if navigation with a street-view PND were handled by a passenger and not by the driver.

References

[1] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Augmented Reality vs. Street Views: A Driving Simulator Study Comparing Two Emerging Navigation Aids,” to appear at MobileHCI 2011

Zeljko Medenica advances to candidacy

Last week my PhD student Zeljko Medenica advanced to candidacy. Zeljko plans to create a driving performance measure that would be sensitive to short-lived and/or infrequent degradations in driving performance. In previous driving simulator-based studies [1, 2] we found that glancing away from the road is correlated with worse driving performance. Importantly, this is true even when performance averages over the length of the entire experiment are not affected. Thus, Zeljko plans to explore the use of cross-correlation in creating a new, highly sensitive driving performance measure.
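As an illustration of the underlying idea (this is my sketch, not Zeljko’s actual measure, which is still under development), one could cross-correlate an eyes-off-road signal with lane position deviation and look for peaks at short positive lags:

    import numpy as np

    def glance_lane_xcorr(glances, lane_dev):
        """Cross-correlate an eyes-off-road signal with lane deviation.

        glances:  1.0 while the driver looks away from the road, else 0.0
        lane_dev: lateral deviation from the lane center (m)
        Both are assumed to be sampled at the same, constant rate.
        """
        g = (glances - glances.mean()) / (glances.std() + 1e-12)
        d = (lane_dev - lane_dev.mean()) / (lane_dev.std() + 1e-12)
        xcorr = np.correlate(d, g, mode="full") / len(g)
        lags = np.arange(-len(g) + 1, len(g))
        # A peak at a small positive lag suggests that lane deviations
        # tend to follow glances away from the road.
        return lags, xcorr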

Zeljko’s PhD committee includes Paul Green (UMTRI), Tim Paek (Microsoft Research), Nicholas Kirsch (UNH) and Tom Miller (UNH). Thanks to all for serving!

References

[1] Andrew L. Kun, Tim Paek, Zeljko Medenica, Nemanja Memarovic, Oskar Palinko, “Glancing at Personal Navigation Devices Can Affect Driving: Experimental Results and Design Implications,” Automotive UI 2009

[2] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Augmented Reality vs. Street Views: A Driving Simulator Study Comparing Two Emerging Navigation Aids,” to appear at MobileHCI 2011

PhD and MS position at the University of New Hampshire exploring in-car human-computer interaction

A PhD and an MS position are available in the Project54 lab at the University of New Hampshire. The lab is part of the Electrical and Computer Engineering department at UNH. Successful applicants will explore human-computer interaction in vehicles. 

The Project54 lab was created in 1999 in partnership with the New Hampshire Department of Safety to improve technology for New Hampshire law enforcement. Project54’s in-car system integrates electronic devices in police cruisers into a single voice-activated system. Project54 also integrates cruisers into agency-wide communication networks. The Project54 system has been deployed in over 1000 vehicles at more than 180 state and local law enforcement agencies in New Hampshire.

Research focus

Both the PhD and the MS student will focus on the relationship between various in-car user interface characteristics and the cognitive load of interacting with these interfaces, with the goal of designing interfaces that do not significantly increase driver workload. Work will involve developing techniques to estimate cognitive load using performance measures (such as the variance of lane position), physiological measures (such as changes in pupil diameter) and subjective measures (such as the NASA-TLX questionnaire).
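As a simple example of the performance-measure side, variance of lane position can be computed over short windows so that brief degradations are not averaged away over a whole drive. A minimal sketch in Python (the array name and window length are illustrative):

    import numpy as np

    def lane_position_variance(lane_pos, fs, window_s=10.0):
        """Variance of lane position over non-overlapping windows.

        lane_pos: lateral offset from the lane center (m), sampled at fs Hz.
        Returns one variance value per window, so short-lived degradations
        in driving performance are not hidden by experiment-long averages.
        """
        n = int(window_s * fs)
        usable = len(lane_pos) - len(lane_pos) % n
        return lane_pos[:usable].reshape(-1, n).var(axis=1)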

The work will utilize experiments in Project54’s world-class driving simulator laboratory, which is equipped with two research driving simulators, three eye trackers and a physiological data logger. Laboratory experiments will be complemented by field deployments in law enforcement agencies such as the New Hampshire State Police, which operates over 300 police cruisers. Project54 has deployed a state-wide data update infrastructure for the New Hampshire State Police which allows remote updates to in-car experimental software and remote collection of experimental data.

Appointment

The PhD student will be appointed for four years, and the MS student for two years. Initial appointments will be for one year, starting between June and September 2011. Continuation of funding will be dependent on satisfactory performance. Appointments will be a combination of research and teaching assistantships. Compensation will include tuition, fees, health insurance, and academic-year and summer stipends.

How to apply

For application instructions, and for general information, email Andrew Kun, Project54 Principal Investigator at andrew.kun@unh.edu. Please attach a current CV.

Talk at SpeechTEK 2010

On Tuesday (August 3, 2010) I attended SpeechTEK 2010. I had a chance to see several really interesting talks, including the lunch keynote by Zig Serafin, General Manager, Speech at Microsoft. He and two associates discussed, among other topics, the upcoming releases of Windows Phone 7 and of Kinect for Xbox 360 (formerly Project Natal). We also saw successful live demonstrations of both of these technologies.

One of Zig’s associates who took the stage was Larry Heck, Chief Scientist, Speech at Microsoft. Larry believes that there are three areas of research and development that will combine to make speech a part of everyday interactions with computers. First, the advent of ubiquitous computing and the need for natural user interfaces (NUIs) mean that we cannot keep relying on GUIs and keyboards for many of our computing needs. Second, cloud computing makes it possible to gather rich data to train speech systems. Finally, with advances in speech technology we can expect to see search move beyond typing keywords (which is what we do today sitting at our PCs) to conversational queries (which is what people are starting to do on mobile phones).

I attended four other talks with topics relevant to my research. Brigitte Richardson discussed her work on Ford’s Sync. It’s exciting to hear that Ford is coming out with an SDK that will allow integrating devices with Sync. This appears to be an approach similar to ours at Project54 – we also provide an SDK which can be used to write software for the Project54 system [1]. Eduardo Olvera of Nuance discussed the differences and similarities between designing interfaces for speech interaction and those for interaction on a small form factor screen. Karen Kaushansky of TellMe discussed similar issues, focusing on customer care. Finally, Kathy Lee, also of TellMe, discussed her work on a diary study exploring when people are willing to talk to their phones. This work reminded me of an experiment in which Ronkainen et al. asked participants to rate the social acceptability of mobile phone usage scenarios they viewed in video clips [2].

I also had a chance to give a talk reviewing some of the results of my collaboration with Tim Paek of Microsoft Research. Specifically, I discussed the effects of speech recognition accuracy and PTT button usage on driving performance [3] and the use of voice-only instructions for personal navigation devices [4]. The talk was very well received by an audience of over 25, with many follow-up questions. Tim also gave this talk earlier this year at Mobile Voice 2010.

For pictures from SpeechTEK 2010, visit my Flickr page.

References

[1] Andrew L. Kun, W. Thomas Miller, III, Albert Pelhe and Richard L. Lynch, “A software architecture supporting in-car speech interaction,” IEEE Intelligent Vehicles Symposium 2004

[2] Sami Ronkainen, Jonna Häkkilä, Saana Kaleva, Ashley Colley, Jukka Linjama, “Tap Input as an Embedded Interaction Method for Mobile Devices,” TEI 2007

[3] Andrew L. Kun, Tim Paek, Zeljko Medenica, “The Effect of Speech Interface Accuracy on Driving Performance,” Interspeech 2007

[4] Andrew L. Kun, Tim Paek, Zeljko Medenica, Nemanja Memarovic, Oskar Palinko, “Glancing at Personal Navigation Devices Can Affect Driving: Experimental Results and Design Implications,” Automotive UI 2009

Albrecht Schmidt visit to UNH

Last month (April 16) Albrecht Schmidt visited UNH and the Project54 lab. Albrecht gave an excellent talk introducing some of the research problems in pervasive computing and specifically touching on the latest results from his lab, which were just published at CHI 2010 [1, 2]. I was especially interested in the work on helping users quickly find the last place of interest on a map. Albrecht and colleagues track the user’s gaze, and when the user looks away they place a marker (or gazemark) on the map. When the user looks back at the map she can start where she left off: at the place of the marker. Clearly this could be very useful when looking at GPS maps in a car. In such a situation the driver has to keep going back and forth between the map and the road, and the time spent looking at the map should be minimized (the road being the more important thing to look at!). The gazemarks introduced by Albrecht’s group may help. It would be interesting to conduct a driving simulator study with gazemarks.
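For readers unfamiliar with the technique, here is a rough sketch of the gazemark idea in Python. It is my paraphrase of the behavior described above, not the authors’ implementation, and the gaze-sample callback and map-display methods are hypothetical names:

    class GazemarkTracker:
        """Keep a placeholder at the last map location the user fixated,
        so visual search can resume where it left off."""

        def __init__(self, map_display):
            # map_display is assumed to expose draw_marker(x, y)
            # and clear_marker(); both names are hypothetical.
            self.map_display = map_display
            self.last_fixation = None

        def on_gaze_sample(self, x, y, on_map):
            if on_map:
                self.last_fixation = (x, y)      # remember newest map fixation
                self.map_display.clear_marker()  # no marker while looking at map
            elif self.last_fixation is not None:
                # Eyes have left the map: drop a gazemark at the last fixation.
                self.map_display.draw_marker(*self.last_fixation)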

After the talk Albrecht spent about an hour with students from the Project54 lab and those in my Ubicomp Fundamentals course. This was a more intimate setting for conversations about Albrecht’s research. Finally, Project54 staff and students spent a couple of hours discussing Project54 research with Albrecht – our work on handheld computers, on driving simulator-based investigations of in-car user interfaces and our budding efforts in multi-touch table interaction.

I am grateful to the UNH Provost’s Office for helping to fund Albrecht’s visit through a grant from the Class of 1954 Academic Enrichment Fund.

References

[1] Dagmar Kern, Paul Marshall, Albrecht Schmidt, “Gazemarks: Gaze-Based Visual Placeholders to Ease Attention Switching,” CHI 2010

[2] Alireza Sahami Shirazi, Ari-Heikki Sarjanoja, Florian Alt, Albrecht Schmidt, Jonna Häkkilä, “Understanding the Impact of Abstracted Audio Preview of SMS,” CHI 2010

Alex Shyrokov defends PhD

Two weeks ago my student Alex Shyrokov defended his PhD dissertation. Alex was interested in human-computer interaction for cases when the human is engaged in a manual-visual task. In such situations a speech interface appears to be a natural way to communicate with a computer. Alex was especially interested in multi-threaded spoken HCI. In multi-threaded dialogues the conversants switch back and forth between multiple topics.

How should we design a speech interface that will support multi-threaded human-computer dialogues when the human is engaged in a manual-visual task? In order to begin answering this question Alex explored spoken dialogues between two human conversants. The hypothesis is that a successful HCI design can mimic some aspects of human-human interaction.

In Alex’s experiments one of the conversants (the driver) operated a simulated vehicle while the other (an assistant) was only engaged in the spoken dialogue. The conversants were engaged in an ongoing spoken task and an interrupting spoken task. Alex’s dissertation discusses several interesting findings, one of which is that driving performance is worse during and after the interrupting task. Alex proposes that this is due to a shift in the driver’s attention away from driving and toward the spoken tasks. The shift in turn is due to the perceived urgency of the spoken tasks – as the perceived urgency increases, the driver is more likely to shift her attention away from driving. The lesson for HCI design is to be very careful in managing the driver’s perceived urgency when interacting with devices in the car.

Alex benefited tremendously from the help of my collaborator on this research, Peter Heeman. Peter provided excellent guidance throughout Alex’s PhD studies, for which I am grateful. Peter and I plan to continue working with Alex’s data. The data includes transcribed dialogues, videos, driving performance measures as well as eye tracker data. I am especially interested in using the eye tracker’s pupil diameter measurements to estimate cognitive load, as we have done in work led by Oskar Palinko [1].
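One simple baseline-relative pupil measure could look like the sketch below. This is an illustration of the general approach rather than the specific measures used in [1]:

    import numpy as np

    def mean_pupil_change(pupil_mm, baseline_mm):
        """Mean pupil diameter change relative to a low-load baseline.

        pupil_mm: eye tracker samples (mm), with blinks/dropouts as NaN.
        A larger positive change is commonly read as higher cognitive load.
        """
        return np.nanmean(pupil_mm) - baseline_mm

    # Example: compare a spoken-task segment against a driving-only baseline.
    task = np.array([3.4, 3.5, np.nan, 3.6, 3.5])  # mm; NaN marks a blink
    load_proxy = mean_pupil_change(task, baseline_mm=3.2)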

References

[1] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, “Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator,” ETRA 2010

Automotive user interfaces SIG meeting to be held at CHI 2010

There will be a special interest group (SIG) meeting on automotive user interfaces at CHI 2010. The lead author of the paper describing the aims of the SIG [1] is Albrecht Schmidt and the list of coauthors includes Anind Dey, Wolfgang Spiessl and me. CHI SIGs are 90 minute scheduled sessions during the conference. They are an opportunity for researchers with a common interest to meet face-to-face and engage in dialog.

Our SIG deals with human-computer interaction in the car. This is an exciting field of study that was the topic of a CHI 2008 SIG [2] as well as the AutomotiveUI 2009 conference [3], and the AutomotiveUI 2010 CFP will be posted very soon. In the last several years human-computer interaction in the car has increased for two main reasons. First, many cars now come equipped with myriad electronic devices such as displays indicating power usage and advanced driver assistance systems. Second, users (drivers and passengers) bring mobile devices into cars. The list of these brought-in mobile devices is long, but personal navigation devices and mp3 players are probably the most common ones.

At the SIG we hope to discuss user interface issues that are the result of having all of these devices in cars. Some of the questions are:

  • How can we reduce (or eliminate) driver distraction caused by the in-car devices?
  • Can driver interactions with in-car devices actually improve driving performance?
  • Can users take advantage of novel technologies, such as streaming videos from other cars?
  • How do we build interfaces that users can trust and will thus actually use?
  • How can car manufacturers, OEMs, brought-in device manufacturers and academia collaborate in envisioning, creating and implementing automotive user interfaces?

The 2008 CHI SIG [2] attracted over 60 people and we’re hoping for similar (or better!) turnout.

References

[1] Albrecht Schmidt, Anind K. Dey, Andrew L. Kun, Wolfgang Spiessl, “Automotive User Interfaces: Human Computer Interaction in the Car,” CHI 2010 Extended Abstracts (to appear)

[2] D. M. Krum, J. Faenger, B. Lathrop, J. Sison, A. Lien, “All roads lead to CHI: interaction in the automobile,” CHI 2008 Extended Abstracts

[3] Albrecht Schmidt, Anind Dey, Thomas Seder, Oskar Juhlin, “Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 2009”