Tag Archives: navigation

LED Augmented Reality: Video Posted

During the 2012-2013 academic year I worked with a team of UNH ECE seniors to explore using an LED array as a low-cost heads-up display that would provide in-vehicle turn-by-turn navigation instructions. Our work will be published in the proceedings of AutomotiveUI 2013 [1]. Here’s the video introducing the experiment.
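To give a rough sense of the idea (this is purely an illustrative Python sketch, not the team’s design or code), a turn-by-turn instruction can be reduced to a simple arrow pattern on a small LED matrix; here, printing the pattern stands in for driving real hardware:

# Purely illustrative sketch, not the actual system: render a turn-by-turn
# maneuver as an 8x8 arrow pattern, the kind of output a low-cost LED
# matrix could display. Printing stands in for lighting real LEDs.

ARROWS = {
    "left": [
        "...#....",
        "..##....",
        ".#######",
        "########",
        ".#######",
        "..##....",
        "...#....",
        "........",
    ],
    "right": [
        "....#...",
        "....##..",
        "#######.",
        "########",
        "#######.",
        "....##..",
        "....#...",
        "........",
    ],
    "straight": [
        "...##...",
        "..####..",
        ".######.",
        "...##...",
        "...##...",
        "...##...",
        "...##...",
        "........",
    ],
}

def show_instruction(maneuver: str) -> None:
    """Display the arrow pattern for 'left', 'right', or 'straight'."""
    for row in ARROWS[maneuver]:
        # On real hardware each '#' would switch one LED on; here we just print.
        print("".join("■" if px == "#" else "·" for px in row))

if __name__ == "__main__":
    show_instruction("left")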

References

[1] Oskar Palinko, Andrew L. Kun, Zachary Cook, Adam Downey, Aaron Lecomte, Meredith Swanson, Tina Tomaszewski, “Towards Augmented Reality Navigation Using Affordable Technology,” AutomotiveUI 2013

New York Times article discusses our work on in-vehicle navigation devices

Last week I was interviewed by Randall Stross for an article that appeared in the September 2 edition of the New York Times. Mr. Stross’ article, “When GPS Confuses, You May Be to Blame,” discusses research on in-vehicle personal navigation devices, including our work on comparing voice-only instructions to map+voice instructions [1].

Specifically, Mr. Stross reports on a driving simulator study published at AutomotiveUI 2009, in which we found that drivers spent significantly more time looking at the road ahead when navigation instructions were provided by a voice-only interface than when both voice instructions and a map were available. In fact, with voice-only instructions drivers spent about 4 more seconds per minute looking at the road ahead. We also found evidence that this difference in visual attention affected driving performance measures. These results led us to conclude that voice-only instructions might be safer to use than voice+map instructions. However, the majority of our participants preferred having a map in addition to the voice instructions.

This latter finding was the impetus for a follow-on study in which we explored projecting navigation instructions onto the real world scene (using augmented reality) [2]. We found that augmented reality navigation aids allow for excellent visual attention to the road ahead and excellent driving performance.

References

[1] Andrew L. Kun, Tim Paek, Zeljko Medenica, Nemanja Memarovic, Oskar Palinko, “Glancing at Personal Navigation Devices Can Affect Driving: Experimental Results and Design Implications,” AutomotiveUI 2009

[2] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Augmented Reality vs. Street Views: A Driving Simulator Study Comparing Two Emerging Navigation Aids,” MobileHCI 2011

Augmented Reality vs. Street View for Personal Navigation Devices

Personal navigation devices (PNDs) are ubiquitous and primarily come in three forms: as built-in devices in vehicles, as brought-in stand-alone devices, or as applications on smart phones.

So what is next for PNDs? In a driving simulator study to be presented at MobileHCI 2011 [1], Zeljko Medenica, Tim Paek, Oskar Palinko and I explored two ideas:

  • Augmented reality PND: An augmented reality PND overlays route guidance on the real world using a head-up display. Our version is simulated: we simply project the route guidance on the simulator screens along with the driving simulation images (see the overlay sketch after this list). Augmented reality PNDs are not yet commercially available for cars.
  • Street-view PND: This PND uses a simplified version of augmented reality. It overlays route guidance on a sequence of still images of the road. The images and overlay are displayed on a head-down display. Google Maps Navigation runs on smart phones and can be used with street view.
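To make the two designs concrete, here is a minimal sketch (assuming a generic OpenCV pipeline and a hypothetical arrow placement, not our simulator code) of the operation both share: alpha-blending a route-guidance arrow onto an image of the forward scene. The augmented reality PND does this over the live forward view on a head-up display, while the street-view PND does it over pre-recorded stills on a head-down display:

# Minimal sketch, assuming a generic OpenCV pipeline (not the actual
# simulator code): alpha-blend a route-guidance arrow onto a scene image.
import numpy as np
import cv2

def overlay_guidance(scene_bgr: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Draw a semi-transparent guidance arrow onto a copy of the scene."""
    h, w = scene_bgr.shape[:2]
    layer = scene_bgr.copy()
    # Hypothetical maneuver: a left-turn arrow placed roughly at the next intersection.
    cv2.arrowedLine(layer, (w // 2, int(h * 0.8)), (int(w * 0.3), int(h * 0.5)),
                    color=(0, 255, 0), thickness=12, tipLength=0.3)
    # Blend the arrow layer with the original scene.
    return cv2.addWeighted(layer, alpha, scene_bgr, 1 - alpha, 0)

if __name__ == "__main__":
    # Stand-in for a simulator frame or a street-view still: a plain gray image.
    frame = np.full((480, 640, 3), 128, dtype=np.uint8)
    cv2.imwrite("guidance_overlay.png", overlay_guidance(frame))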

The following video demonstrates the two PNDs.

Our findings indicate that augmented reality PNDs allow for excellent visual attention to the road ahead and excellent driving performance. In contrast, street-view PNDs can have a detrimental effect on both. Thus, while further research is clearly needed, it might be best if navigation with a street-view PND were handled by a passenger and not by the driver.

References

[1] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Augmented Reality vs. Street Views: A Driving Simulator Study Comparing Two Emerging Navigation Aids,” to appear at MobileHCI 2011

Talk at SpeechTEK 2010

On Tuesday (August 3, 2010) I attended SpeechTEK 2010. I had a chance to see several really interesting talks, including the lunch keynote by Zig Serafin, General Manager, Speech at Microsoft. He and two associates discussed, among other topics, the upcoming releases of Windows Phone 7 and of Kinect for Xbox 360 (formerly Project Natal). We also saw successful live demonstrations of both of these technologies.

One of Zig’s associates who took the stage was Larry Heck, Chief Scientist, Speech at Microsoft. Larry believes that three areas of research and development will combine to make speech a part of everyday interactions with computers. First, the advent of ubiquitous computing and the need for natural user interfaces (NUIs) mean that we cannot keep relying on GUIs and keyboards for many of our computing needs. Second, cloud computing makes it possible to gather rich data for training speech systems. Finally, with advances in speech technology we can expect search to move beyond typed keywords (which is what we do today sitting at our PCs) to conversational queries (which is what people are starting to do on mobile phones).

I attended four other talks with topics relevant to my research. Brigitte Richardson discussed her work on Ford’s Sync. It’s exciting to hear that Ford is coming out with an SDK that will allow developers to integrate devices with Sync. This approach appears similar to ours at Project54 – we also provide an SDK that can be used to write software for the Project54 system [1]. Eduardo Olvera of Nuance discussed the differences and similarities between designing interfaces for speech interaction and those for interaction on a small form factor screen. Karen Kaushansky of TellMe discussed similar issues, focusing on customer care. Finally, Kathy Lee, also of TellMe, discussed her work on a diary study exploring when people are willing to talk to their phones. This work reminded me of an experiment in which Ronkainen et al. asked participants to rate the social acceptability of mobile phone usage scenarios they viewed in video clips [2].

I also had a chance to give a talk reviewing some of the results of my collaboration with Tim Paek of Microsoft Research. Specifically, I discussed the effects of speech recognition accuracy and PTT button usage on driving performance [3] and the use of voice-only instructions for personal navigation devices [4]. The talk was very well received by the audience of over 25, with many follow-up questions. Tim also gave this talk earlier this year at Mobile Voice 2010.

For pictures from SpeechTEK 2010 visit my Flickr page.

References

[1] Andrew L. Kun, W. Thomas Miller, III, Albert Pelhe, Richard L. Lynch, “A software architecture supporting in-car speech interaction,” IEEE Intelligent Vehicles Symposium 2004

[2] Sami Ronkainen, Jonna Häkkilä, Saana Kaleva, Ashley Colley, Jukka Linjama, “Tap Input as an Embedded Interaction Method for Mobile Devices,” TEI 2007

[3] Andrew L. Kun, Tim Paek, Zeljko Medenica, “The Effect of Speech Interface Accuracy on Driving Performance,” Interspeech 2007

[4] Andrew L. Kun, Tim Paek, Zeljko Medenica, Nemanja Memarovic, Oskar Palinko, “Glancing at Personal Navigation Devices Can Affect Driving: Experimental Results and Design Implications,” AutomotiveUI 2009

Visit to FTW, Vienna

On June 4, 2010 I visited the Telecommunications Research Center Vienna (FTW). My host was Peter Froehlich, Senior Researcher in FTW’s User-Centered Interaction area of activity. Peter and I met at the CHI SIG meeting on automotive user interfaces [1] that I helped organize.

Peter and his colleagues are investigating automotive navigation aids and are currently preparing for an on-road study. I’m happy to report that this study will utilize one of our eye trackers. My visit provided an opportunity for us to discuss this upcoming study and how the eye tracker may be useful in evaluating the research hypotheses. Part of this discussion was a Telecommunications Forum talk I gave – see the slides below:

I want to thank Peter and his colleagues at FTW for hosting me and I’m looking forward to our upcoming collaboration. I also want to thank FTW for providing funding for my visit.

References

[1] Albrecht Schmidt, Anind K. Dey, Andrew L. Kun, Wolfgang Spiessl, “Automotive User Interfaces: Human Computer Interaction in the Car,” CHI 2010 Extended Abstracts

Two posters at Ubicomp 2009

Our group presented two posters at last week’s Ubicomp 2009. Oskar Palinko and Michael Litchfield were on hand to talk about our multitouch table effort [1] (a great deal of the work for this poster was done by Ankit Singh). Zeljko Medenica introduced a driving simulator pilot, work done in collaboration with Tim Paek, that deals with using augmented reality for the user interface of a navigation device [2].

Oskar (center) and Mike (right)

Zeljko (center)

Oskar, Mike and I are working on expanding the multitouch study. We plan to start with an online study in which subjects will watch two videos, one in which a story is presented using the multitouch table and another with the same story presented using a simple slide show. Zeljko will head up the follow-on to the pilot study – take a look at the video below to see (roughly) what we’re planning to do.

Take a look at other pictures I took at Ubicomp 2009 on Flickr.

References

[1] Oskar Palinko, Ankit Singh, Michael A. Farrar, Michael Litchfield, Andrew L. Kun, “Towards Storytelling with Geotagged Photos on a Multitouch Display,” Conference Supplement, Ubicomp 2009

[2] Zeljko Medenica, Oskar Palinko, Andrew L. Kun, Tim Paek, “Exploring In-Car Augmented Reality Navigation Aids: A Pilot Study,” Conference Supplement, Ubicomp 2009