Spencer Castro is a Postdoctoral Fellow in the Underrepresented and Disadvantaged Scholars Program. He is also a former National Science Foundation Graduate Research Fellow (GRFP) at the University of Utah, working with Dr. David Strayer. Spencer was awarded the NSF GRFP for research on the capacity of attention under cognitive workload, particularly in the context of technology and multitasking. He focuses on the validity of reaction time and accuracy as measures of different aspects of workload, as well as on quantifying the risk of adverse driving outcomes associated with these workload metrics. He employs advanced cognitive modeling techniques to examine the mechanisms of attentional capacity, multitasking, and performance. In a recent publication in the Journal of Experimental Psychology: Human Perception and Performance, Spencer and collaborators propose new mathematical models for analyzing reaction time data that capture the classically difficult tradeoff between speed and accuracy.
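The speed–accuracy tradeoff mentioned above is often formalized with sequential-sampling models of reaction time. As a rough illustration (a textbook drift-diffusion random walk, not the specific models proposed in the paper), the sketch below simulates evidence accumulating toward one of two decision boundaries; widening the boundaries trades speed for accuracy. All parameter values here are arbitrary assumptions for demonstration.

```python
import random

def simulate_ddm_trial(drift, boundary, dt=0.001, noise=1.0, max_t=10.0):
    """One drift-diffusion trial: evidence x starts at 0 and random-walks
    (mean step = drift*dt) until it crosses +boundary (correct response)
    or -boundary (error). Returns (decision_time_s, correct)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    return t, x >= boundary

def summarize(drift, boundary, n=800):
    """Accuracy and mean decision time over n simulated trials."""
    results = [simulate_ddm_trial(drift, boundary) for _ in range(n)]
    accuracy = sum(correct for _, correct in results) / n
    mean_rt = sum(t for t, _ in results) / n
    return accuracy, mean_rt
```

Running `summarize` with a wider boundary (e.g., 1.5 vs. 0.5) yields slower but more accurate simulated responses, which is the tradeoff such models are designed to capture.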
As a member of the Paiute and Southern Sierra Miwuk Nations, Spencer was awarded a Postdoctoral Fellowship for Underrepresented and Disadvantaged Scholars from the University of Utah to support his ongoing research on cognitive modeling. Spencer is a strong advocate for minoritized groups and is the president of the Diversity G.A.P. (Graduate Application Preparation) program at the University of Utah, which prepares underrepresented students to apply to graduate school.
People around the world endanger their own lives and the lives of others every day by dividing their attention across multiple tasks, such as driving while talking on a cell phone. These dangers result from splitting and overtaxing our limited voluntary attentional efforts. Current tools for measuring attentional effort, also known as cognitive workload, lack insight into the cognitive factors that can cause fatal errors. With the advent of new distracting technology in cars, fatal human errors may grow if we do not measure cognitive workload effectively. To quantify cognitive workload under a simulated driving-like task, the current study details our application of mathematical modeling to an International Standard task for measuring ongoing cognitive workload in the vehicle. This research provides a framework for accurately quantifying cognitive workload and the factors that contribute to it, which will allow future researchers and policy makers to determine the danger inherent in many tasks within the vehicle.
Human operators – particularly in demanding defence jobs – experience workload levels varying from light to complete overload. These workload fluctuations can be associated with sub-optimal performance, which can lead to poor outcomes or even mission failure. Automation, in the form of artificial intelligence that can take over routine tasks and/or recommend smart options, promises to alleviate some of these concerns but raises problems of its own. Under-load can cause mind-wandering, sometimes called "automation neglect", leaving the operator ill-prepared for emergencies. Automated recommenders, lacking the situational awareness of human assistants, can cause failures by intruding at critical times or by providing options that overload the operator's capacity. This project aims to develop a hardware and software package for monitoring and predicting operator engagement and workload in real time. – Andrew Heathcote, PI
An R package implementing advanced methods for analyzing reaction time data – under development here.
Previous research demonstrates that people increasingly use multiple displays along with mobile devices simultaneously and that this split in attention has detrimental effects on goal-directed behavior. However, few studies have assessed the impact of the physical attributes of mobile devices – including dimensions, weight, and screen size – on attention. Understanding how device dimensions and screen size affect attention is an essential first step in creating safety guidelines for high-risk industries that rely on displays, such as automotive and aeronautics engineering. The aim of this work is to determine to what extent the display dimensions and screen size of mobile devices influence attention. To explore this question, participants interacted with mobile devices of varying sizes while performing a change detection task on a stationary device located behind and above the mobile device. Results suggest that participants using a smaller mobile device achieved higher performance on the background change detection task than those using a larger device, while performing similarly on the mobile device task. This work demonstrates that when attention is divided, larger displays may be more attentionally demanding. We therefore recommend that, when users or designers must accommodate multitasking between a foreground and a background task, they use smaller foreground displays to optimize background performance.
We developed an approach of converging measures from neurological, physiological, and behavioral outcomes to determine their predictive qualities on the road, and found that simple behavioral detection tasks can provide an ongoing measure of different components of workload (Castro, Cooper, & Strayer, 2016; Cooper, Castro, & Strayer, 2016) while having minimal impact on driving performance. The resulting Detection Response Task (DRT) device consists of a small portable microcontroller, a stimulus, and a response button. It provides millisecond-accurate reaction times, wirelessly sends data to a computer, can be paired with other DRTs for work with dyads or groups, can be configured for Go/No-Go or choice tasks, and can be easily deployed in mobile real-world settings.
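The DRT's trial bookkeeping is simple enough to sketch. In the illustration below, responses between 100 ms and 2500 ms after stimulus onset count as hits, following the response window commonly reported for the ISO 17488 DRT standard; the function names, the "premature" category, and the exact cutoffs are illustrative assumptions, not the actual device firmware.

```python
# Response-window cutoffs (ms); values assumed per the ISO 17488 convention.
MIN_RT_MS = 100
MAX_RT_MS = 2500

def classify_drt_response(rt_ms):
    """Classify one DRT trial: None means no button press before timeout."""
    if rt_ms is None:
        return "miss"
    if rt_ms < MIN_RT_MS:
        return "premature"  # too fast to be stimulus-driven; invalid
    if rt_ms <= MAX_RT_MS:
        return "hit"
    return "miss"           # response arrived after the window closed

def summarize_drt(rts_ms):
    """Hit rate over all trials and mean RT over valid hits only."""
    hits = [rt for rt in rts_ms if rt is not None and MIN_RT_MS <= rt <= MAX_RT_MS]
    hit_rate = len(hits) / len(rts_ms)
    mean_rt = sum(hits) / len(hits) if hits else None
    return hit_rate, mean_rt
```

For example, `summarize_drt([300, 450, None, 2600, 50])` counts only the 300 ms and 450 ms trials as hits. In practice, slower mean RTs and lower hit rates on the DRT index higher cognitive workload in the concurrent task.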