A PhD and an MS position are available in the Project54 lab at the University of New Hampshire. The lab is part of the Electrical and Computer Engineering department at UNH. Successful applicants will explore human-computer interaction in vehicles. We are looking for students with a background in electrical engineering, computer engineering, computer science, or related fields.
The Project54 lab was created in 1999 in partnership with the New Hampshire Department of Safety to improve technology for New Hampshire law enforcement. Project54’s in-car system integrates electronic devices in police cruisers into a single voice-activated system. Project54 also integrates cruisers into agency-wide communication networks. The Project54 system has been deployed in over 1000 vehicles in New Hampshire in over 180 state and local law enforcement agencies.
Both the PhD and the MS student will focus on the relationship between various in-car user interface characteristics and the cognitive load of interacting with these interfaces, with the goal of designing interfaces that do not significantly increase driver workload. Work will involve developing techniques to estimate cognitive load using performance measures (such as the variance of lane position), physiological measures (such as changes in pupil diameter [1-5]) and subjective measures (such as the NASA-TLX questionnaire).
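As a simple illustration of the performance-measure approach mentioned above, the variance of lateral lane position can be computed over fixed time windows of simulator data; higher variance in a window is often taken as a sign of elevated workload. The sketch below is purely illustrative (the function name, window size, and toy data are not from the project's actual tooling):

```python
import statistics

def lane_position_variance(samples, window=60):
    """Sliding-window variance of lateral lane position.

    samples: lane-position readings (meters from lane center).
    window:  samples per window (e.g., one minute at 1 Hz).
    Returns one variance value per full, non-overlapping window.
    """
    return [
        statistics.pvariance(samples[i:i + window])
        for i in range(0, len(samples) - window + 1, window)
    ]

# Toy data: a steady driver vs. one drifting more within the lane.
steady = [0.05, -0.03, 0.04, -0.02, 0.01, -0.04]
drifty = [0.40, -0.35, 0.30, -0.45, 0.25, -0.30]

print(lane_position_variance(steady, window=3))
print(lane_position_variance(drifty, window=3))
```

In a real study such a performance measure would be combined with the physiological and subjective measures noted above, since no single signal is a reliable workload indicator on its own.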
The PhD student will focus on spoken in-vehicle human-computer interaction, and will explore the use of human-human dialogue behavior [6-11] to guide the design process.
The work will involve experiments in Project54's world-class driving simulator laboratory, which is equipped with two research driving simulators, three eye trackers, and a physiological data logger.
The PhD student will be appointed for four years, and the MS student for two years. Initial appointments will be for one year, starting between June and September 2012. Continuation of funding will be dependent on satisfactory performance. Appointments will be a combination of research and teaching assistantships. Compensation will include tuition, fees, health insurance and academic year and summer stipend.
How to apply
For application instructions and general information, email Andrew Kun, Project54 Principal Investigator, at firstname.lastname@example.org. Please attach a current CV.
References
[1] Oskar Palinko, Andrew L. Kun, "Exploring the Effects of Visual Cognitive Load and Illumination on Pupil Diameter in Driving Simulators," ETRA 2012
[2] Andrew L. Kun, Zeljko Medenica, Oskar Palinko, Peter A. Heeman, "Utilizing Pupil Diameter to Estimate Cognitive Load Changes During Human Dialogue: A Preliminary Study," AutomotiveUI 2011 Adjunct Proceedings
[3] Andrew L. Kun, Peter A. Heeman, Tim Paek, W. Thomas Miller, III, Paul A. Green, Ivan Tashev, Peter Froehlich, Bryan Reimer, Shamsi Iqbal, Dagmar Kern, "Cognitive Load and In-Vehicle Human-Machine Interaction," AutomotiveUI 2011 Adjunct Proceedings
[4] Oskar Palinko, Andrew L. Kun, "Exploring the Influence of Light and Cognitive Load on Pupil Diameter in Driving Simulator Studies," Driving Assessment 2011
[5] Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter Heeman, "Estimating Cognitive Load Using Remote Eye Tracking in a Driving Simulator," ETRA 2010
[6] Andrew L. Kun, Alexander Shyrokov, Peter A. Heeman, "Interactions between Human-Human Multi-Threaded Dialogues and Driving," Personal and Ubiquitous Computing, Online First
[7] Andrew L. Kun, Zeljko Medenica, "Video Call, or Not, that is the Question," to appear in CHI '12 Extended Abstracts
[8] Fan Yang, Peter A. Heeman, Andrew L. Kun, "An Investigation of Interruptions and Resumptions in Multi-Tasking Dialogues," Computational Linguistics, 37(1)
[9] Andrew L. Kun, Alexander Shyrokov, Peter A. Heeman, "Spoken Tasks for Human-Human Experiments: Towards In-Car Speech User Interfaces for Multi-Threaded Dialogue," AutomotiveUI 2010
[10] Fan Yang, Peter A. Heeman, Andrew L. Kun, "Switching to Real-Time Tasks in Multi-Tasking Dialogue," Coling 2008
[11] Alexander Shyrokov, Andrew L. Kun, Peter Heeman, "Experimental Modeling of Human-Human Multi-Threaded Dialogues in the Presence of a Manual-Visual Task," SIGdial 2007