
MERL gift

I’m happy to report that I received a gift grant in the amount of $5,000 from Mitsubishi Electric Research Laboratories (MERL). The gift is intended to support my work on speech user interfaces and was awarded by Dr. Kent Wittenburg, Vice President & Director of MERL.

This gift comes in the context of ongoing interactions between researchers at MERL and my group at UNH. Kent and Bent Schmidt-Nielsen hosted me several years ago for a demonstration of the Project54 system (I drove to Boston in a police SUV, which was fun), and I also gave a talk at MERL last fall. In 2009 my PhD student Zeljko Medenica worked as a summer intern at MERL under the direction of Bret Harsham (Bret recently gave a talk at UNH on some of this work – see picture below). Zeljko is headed back to MERL this summer and he will work under the direction of Garrett Weinberg.

I greatly appreciate MERL’s generous gift and I plan to use it to help fund a graduate student working on speech user interfaces. I hope to report back to Kent, Bent, Bret and Garrett on the student’s progress by the end of this summer.

Visiting MERL

Three weeks ago I visited Mitsubishi Electric Research Laboratories (MERL) in Cambridge, MA. My hosts were Bret Harsham and his colleagues Garrett Weinberg and Bent Schmidt-Nielsen. The proximate reason for my visit was that one of my PhD students, Zeljko Medenica, worked under Bret as a summer intern.

As part of my visit I saw the MERL driving simulator, which is an excellent adaptation of a computer game for research purposes (read more about it in Garrett and Bret’s Automotive UI 2009 paper). I really like the driving courses they can use (e.g. winding mountain roads and narrow village streets), and I’m impressed with the performance of the simulator’s chair, which shakes and tilts.

After the simulator tour I gave a talk on our latest navigation study, which compared driving performance and visual attention when using two personal navigation aids: one that displays a map and provides spoken instructions, and another that provides spoken instructions only. The talk was based on our Automotive UI 2009 paper.

Finally, I had a chance to talk to Fatih Porikli, who showed me some great videos of his work on recognizing pedestrians. We also discussed possible collaboration on learning grammars for using voice commands to tag photos. More about this in another post.