Tag Archives: talk

2014 Visit to National University of Public Service

Last week I visited the Hungarian National University of Public Service. I was there at the invitation of Zoltán Székely. Zoltán is a major in the Hungarian Police, a lawyer, and an instructor at the University. I met Zoltán through Bálint Kiss, who is my host at BME during my Fulbright scholarship in Hungary.

Zoltán Székely introduces the proposed “Application of Robotics for Enhanced Security” effort

Zoltán is leading an effort to make robots part of law enforcement work in Hungary, and beyond. He invited me to the kick-off meeting for this effort, where I gave a talk sharing some of our experiences at Project54. The Project54 effort at UNH addressed ubicomp in the law enforcement setting.

Zoltán has brought together a diverse and talented group of researchers and practitioners for this effort. I am looking forward to seeing the results of their work.

You can see pictures from my visit on Flickr. My favorite is this one of the beautiful University building where the meeting was held:

National University of Public Service, Budapest, Hungary

2014 visit to University College London

Last week I travelled to London to give a talk at University College London (UCL). My host was Duncan Brumby, who also recently visited us at UNH. My talk introduced our work on in-vehicle human-computer interaction, touching on subjects from Project54 to driving simulator-based experiments.

It was great to talk to Duncan again, and it was really nice to meet some of his colleagues, including Anna Cox, Paul Marshall, and Sandy Gould. Thanks to all for hosting me.

While my trip was brief, I did get a chance to also visit the British Museum. This is one of my favorite places in the world, and here’s a photo of my favorite artifact from the museum’s vast collection, the Rosetta Stone:

You can see more pictures from my trip on Flickr.


2014 Visit to University of Szeged

The home of the Research Group on Artificial Intelligence

Yesterday I visited the University of Szeged. I was there at the invitation of fellow Fulbrighter Mark Jelasity. Mark is a member of the Research Group on Artificial Intelligence. He and I met at a recent event organized by Fulbright Hungary, where Hungarian Fulbright scholars reported on their experiences in the USA.

The primary purpose of my visit was to give a talk at the Institute of Informatics; thanks to Peter Szabó for including my talk in the Institute’s seminar series. I was really excited, as this was the first time in my career that I gave a talk in Hungarian. For fun, here are the Hungarian title and abstract:

Szemkövetők használata autóvezetéses szimulációs kísérletekben (in English: Using eye trackers in driving simulation experiments)

Absztrakt: A University of New Hampshire kutatói több mint egy évtizede foglalkoznak a járműveken belüli ember-gép interfészekkel. Ez az előadás először egy rövid áttekintést nyújt a rendőr járművekre tervezett Project54 rendszer fejlesztéséről és telepítéséről. A rendszer különböző modalitású felhasználói felületeket biztosít, beleértve a beszéd modalitást. A továbbiakban az előadás beszámol közelmúltban végzett autóvezetés-szimulációs kísérletekről, amelyekben a szimulátor és egy szemkövető adatai alapján becsültük a vezető kognitív terhelését, vezetési teljesítményét, és vizuális figyelmét a külső világra.

Abstract (in English): Researchers at the University of New Hampshire have been working on in-vehicle human-machine interfaces for more than a decade. This talk first gives a brief overview of the development and deployment of the Project54 system, designed for police vehicles. The system provides user interfaces in multiple modalities, including speech. The talk then reports on recent driving simulator experiments in which we estimated the driver's cognitive load, driving performance, and visual attention to the outside world based on data from the simulator and an eye tracker.

As part of the visit I had a chance to talk to Mark about his research (see Mark’s Google Scholar profile). Mark’s interest is in distributed learning, which might have fascinating applications in the automotive domain.

Szeged fish soup

My visit was also exciting on a personal level, as I was born in Szeged, and my family often travelled there during my childhood. I walked around the town a bit to reminisce, and Mark treated me to an excellent lunch of fish soup.

See pictures from my trip on Flickr.

Eitan Globerson discusses piano and brain at BME

Yesterday I attended a talk by Eitan Globerson at BME. Professor Globerson is a conductor, pianist, and a brain scientist. His talk explored the brain mechanisms involved in playing the piano. A key mechanism is automaticity, which allows pianists to produce complex musical sequences at very high tempo. I really enjoyed this talk, with its mix of performing music and discussing brain imaging.

Professor Globerson was hosted by BME professor Bertalan Forstner. Thanks to Luca Szegletes for inviting me. See more pictures from the talk on Flickr.

Duncan Brumby visit to UNH

In December 2013 Duncan Brumby visited UNH ECE. Duncan is a senior lecturer (assistant professor) at University College London (UCL). His research includes the exploration of how people interact with mobile devices. As part of this work Duncan is interested in in-vehicle interactions, which are also of interest to me.

Duncan gave a talk to my ECE 900 class, in which he discussed a number of studies that explored “interactions on the move.” I really liked the fact that Duncan not only presented results, but also addressed nuts-and-bolts issues of interest to graduate students, from how to find a research topic, to how to handle reviewer comments.

See more photos from the visit on Flickr.

2013 Liberty Mutual visit to UNH ECE

Yesterday I hosted four researchers from the Liberty Mutual Research Institute for Safety: Bill Horrey, Yulan Liang, Angela Garabet, and Luci Simmons. This visit follows my own visit to Liberty Mutual this summer.

As part of the visit, Bill gave a talk to my ECE 900 class. He discussed the wide variety of research performed at his institute, with an emphasis on the vehicle-related work that he is involved in. As part of this work Bill and colleagues conduct studies on a test track with an instrumented vehicle, which they brought along:

After the talk Tom Miller and I had a chance to show our visitors our driving simulator lab and discuss a host of research issues. It was fun – thanks Bill, Yulan, Angela and Luci.

See pictures from the visit on Flickr.


Visit to Liberty Mutual Research Institute for Safety

Last month I had a chance to visit the Liberty Mutual Research Institute for Safety in Hopkinton, MA. My host was Bill Horrey, senior research scientist at the institute. I also had a chance to talk to Marvin Dainoff, Yulan Liang, Angela Garabet, and a number of other researchers.

The institute is truly impressive. First, as Marvin explained to me, this is an organization devoted to independent research. The institute is funded by Liberty Mutual, but sets its own agenda. The only “constraint” on this agenda is that the research has to be related to Liberty Mutual’s business. Given that this business covers areas from driving, to homes, to health care, this is hardly a constraint. Furthermore, the institute is committed to publishing its work in peer-reviewed publications. In fact, publications are the institute’s central measure of success. The institute has a hallway with three (!) whiteboards, where staff keep track of their publications for the year. Each year the goal is to fill all three boards by December.

The institute has a number of very interesting labs. Of course for me the most interesting one was the driving simulator lab (the instrumented vehicle is a very close second!). The simulator is made by Real Time Technologies and the lab also has a head-mounted eye tracker.

So thanks to Bill and colleagues for hosting me. I really enjoyed talking to them about some research problems (including their recent work on drowsy drivers [1]), as well as the technical details of running a simulator lab. You can see a few more pictures from my visit on Flickr.


References

[1] William J. Horrey, Yulan Liang, Michael L. Lee, Mark E. Howard, Clare Anderson, Michael S. Shreeve, Conor O’Brien, Charles A. Czeisler, “The Long Road Home: Driving Performance and Ocular Measurements of Drowsiness Following Night Shift-Work,” Driving Assessment 2013

Towards disambiguating the effects of cognitive load and light on pupil diameter

Light intensity affects pupil diameter: the pupil contracts in bright environments and dilates in the dark. Interestingly, cognitive load also affects pupil diameter, with the pupil dilating in response to increased cognitive load. This effect is called the task evoked pupillary response (TEPR) [1]. Thus, changes in pupil diameter are a physiological measure of cognitive load; however, changes in lighting introduce noise into the estimate.

Last week Oskar Palinko gave a talk at Driving Assessment 2011 introducing our work on disambiguating the effects of cognitive load and light on pupil diameter in driving simulator studies [2]. We hypothesized that we could simply subtract the effect of lighting on pupil diameter from the combined effect of lighting and cognitive load, producing an estimate of the effect of cognitive load alone. We tested the hypothesis through an experiment in which participants were given three tasks:

  • Cognitive task with varying cognitive load and constant lighting. This task was adapted from the work of Klingner et al. [3]. Participants listened to a voice counting from 1 to 18 repeatedly. Participants were told that every sixth number (6, 12, and 18) might be out of order and were instructed to push a button if they detected an out-of-order number. This task induced increased cognitive load at every sixth number as participants focused on the counting sequence. A new number was read every 1.5 seconds; thus cognitive load (and pupil diameter) increased every 6 × 1.5 s = 9 seconds.
  • Visual task with constant cognitive load (assuming no daydreaming!) and varying lighting. Participants were instructed to follow a visual target that switched location among a white, a gray, and a black truck. The light reaching the participant’s eye varied as the participant’s gaze moved from one truck to another. Participants held their gaze on a truck for 9 seconds, allowing the pupil diameter ample time to settle.
  • Combined task with varying cognitive load and lighting. Participants completed the cognitive and visual tasks in parallel. We synchronized the two tasks such that increases in cognitive load occurred after the pupil diameter had stabilized in response to moving the gaze between trucks. Synchronization was straightforward, as the cognitive task had a period of 9 seconds and in the visual task the lighting intensity also changed every 9 seconds (a small timing sketch follows this list).
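
To make the timing concrete, here is a minimal sketch of how the two 9-second periods can be lined up. This is illustrative only, not the actual experiment script; the 6-second settle offset and all names are my assumptions.

# Sketch of the combined-task timing (illustrative; all names and the settle
# offset are assumptions, chosen only so the load peak falls after the pupil
# has settled following a gaze switch).

NUMBER_INTERVAL_S = 1.5   # a new number is read every 1.5 seconds
NUMBERS_PER_CYCLE = 6     # every sixth number may be out of order
CYCLE_S = NUMBERS_PER_CYCLE * NUMBER_INTERVAL_S  # = 9 s, matching the 9 s gaze hold
SETTLE_OFFSET_S = 6.0     # hypothetical delay between gaze switch and load peak

def combined_task_schedule(n_cycles):
    """Return (time_s, event) pairs for gaze switches and cognitive-load peaks."""
    events = []
    for cycle in range(n_cycles):
        t0 = cycle * CYCLE_S
        events.append((t0, "gaze switch: look at the next truck"))
        events.append((t0 + SETTLE_OFFSET_S, "cognitive load peak: sixth number read"))
    return events

if __name__ == "__main__":
    for t, event in combined_task_schedule(3):
        print(f"{t:5.1f} s  {event}")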

Our results confirm that, at least in this simple case, our hypothesis holds and we can indeed detect changes in cognitive load under varying lighting conditions. We are planning to extend this work by introducing scenarios in which participants drive in realistic simulated environments. Under such scenarios gaze angles, and thus the amount of light reaching participants’ eyes, will change rapidly, making the disambiguation more complex, and of course more useful.
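
To make the subtraction hypothesis itself concrete, here is a minimal sketch in Python. It is not the analysis code from the paper; the variable names and the made-up pupil diameters are my own assumptions.

# Minimal sketch of the subtraction idea (illustrative only).
# d_visual:   mean pupil diameter trace from the visual-only task (lighting
#             varies, cognitive load roughly constant), aligned to a gaze switch
# d_combined: mean trace from the combined task (lighting and load both vary),
#             aligned the same way

def estimate_load_effect(d_combined, d_visual):
    """Subtract the lighting-driven trace from the combined trace; the
    residual dilation is attributed to cognitive load."""
    return [round(dc - dv, 3) for dc, dv in zip(d_combined, d_visual)]  # rounded for readability

if __name__ == "__main__":
    # Made-up diameters in millimeters, one sample per second after a gaze switch:
    d_visual = [3.0, 2.6, 2.5, 2.5, 2.5]     # constriction after looking at a bright truck
    d_combined = [3.0, 2.6, 2.5, 2.7, 2.8]   # same constriction, plus a task-evoked dilation
    print(estimate_load_effect(d_combined, d_visual))  # [0.0, 0.0, 0.0, 0.2, 0.3]

In practice the traces would of course be averaged over many cycles and participants, and the residual dilation would be examined statistically rather than by inspection.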

References

[1] Jackson Beatty, “Task-Evoked Pupillary Responses, Processing Load, and the Structure of Processing Resources,” Psychological Bulletin, 91(2), 276-292, 1982

[2] Oskar Palinko, Andrew L. Kun, “Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies,” Driving Assessment 2011

[3] Jeff Klingner, Rashit Kumar, Pat Hanrahan, “Measuring the Task-Evoked Pupillary Response with a Remote Eye Tracker,” ETRA 2008

Bryan Reimer visit to UNH

It was my great pleasure to host Bryan Reimer at UNH. Bryan is a Research Scientist at the MIT AgeLab, as well as Associate Director of the New England University Transportation Center. His research focuses on the measurement and understanding of human behavior in dynamic environments, such as in cars.

Bryan spent time in the Project54 lab discussing various aspects of driving simulator and field studies. He then gave a thought-provoking talk reviewing results from multiple studies exploring driver workload and distraction. I especially enjoyed his discussion of physiological measures that can be used to estimate workload. For example, Bryan has found that heart rate is a robust estimate of workload and is often more useful than the widely used measure of heart rate variability. Bryan also discussed work on validating driving simulator results through field studies. His data indicate that driving simulator results can be used to predict relative changes in workload measures under different situations in real-life driving. However, the actual values of the measures collected in simulator and field studies often differ significantly.

For more pictures visit Flickr.

Co-chairing AutomotiveUI 2010

On November 11 and 12 I was at the AutomotiveUI 2010 conference, serving as program co-chair with Susanne Boll. The conference was hosted at CMU by Anind Dey, who served as general co-chair with Albrecht Schmidt.

The conference was successful and really fun. I could go on about all the great papers and posters (including two posters from our group at UNH [1,2]), but in this post I’ll only mention two talks: John Krumm’s keynote and, selfishly, my own talk (this is my blog after all). John gave an overview of his work with data from GPS sensors. He discussed work on predicting where people will go, as well as his experiences with location privacy and with creating road maps. Given that John is, according to his own website, the “all seeing, all knowing, master of time, space, and dimension,” this was indeed a very informative talk 😉 OK, in all seriousness, the talk was excellent. I find John’s work on predicting people’s destinations and selected routes the most interesting. One really interesting consequence of accurate predictions, with people sharing such data in the cloud, would be for cloud-hosted routing algorithms. If such an algorithm knew where all of us were going at any instant, it could propose routes that, taken together, allow more efficient use of roads, reduced pollution, and so on.

My talk focused on collaborative work with Alex Shyrokov and Peter Heeman on multi-threaded dialogues. Specifically, I talked about designing spoken tasks for human-human dialogue experiments for Alex’s PhD work [3]. Alex wanted to observe how pairs of subjects switch between two dialogue threads, while one of the subjects is also engaged in operating a simulated vehicle. Our hypothesis is that observed human-human dialogue behaviors can be used as the starting point for designing computer dialogue behaviors for in-car spoken dialogue systems. One of the suggestions we put forth in the paper is that the tasks for human-human experiments should be engaging. These are the types of tasks that will result in interesting dialogue behaviors and can thus teach us something about how humans manage multi-threaded dialogues.

Next year the conference moves back to Europe. The host will be Manfred Tscheligi in Salzburg, Austria. Judging by the number of submissions this year and the quality of the conference, we can look forward to many interesting papers next year, both from industry and from academia. Also, the location will be excellent – just think Mozart, Sound of Music (see what Rick Steves has to say), and world-renowned Christmas markets!

References

[1] Zeljko Medenica, Andrew L. Kun, Tim Paek, Oskar Palinko, “Comparing Augmented Reality and Street View Navigation,” AutomotiveUI 2010 Adjunct Proceedings

[2] Oskar Palinko, Sahil Goyal, Andrew L. Kun, “A Pilot Study of the Influence of Illumination and Cognitive Load on Pupil Diameter in a Driving Simulator,” AutomotiveUI 2010 Adjunct Proceedings

[3] Andrew L. Kun, Alexander Shyrokov, Peter A. Heeman, “Spoken Tasks for Human-Human Experiments: Towards In-Car Speech User Interfaces for Multi-Threaded Dialogue,” AutomotiveUI 2010