Browsing by Author "Clegg, Benjamin A."
Now showing 1 - 3 of 3
Item Fast, Accurate, But Sometimes Too-Compelling Support: The Impact of Imperfectly Automated Cues in an Augmented-Reality Head-Mounted Display on Visual Search Performance (Institute of Electrical and Electronics Engineers, 2023-01)
Warden, Amelia C.; Wickens, Christopher D.; Rehberg, Daniel; Ortega, Francisco R.; Clegg, Benjamin A.
Visual search for targets in a complex scene might benefit from augmented-reality (AR) head-mounted display (HMD) technologies that help direct human attention efficiently, but imperfectly reliable automation support can produce occasional errors. The current study examined the effectiveness of different HMD cues that might support visual search performance, and the consequences of each when the automation erred. A total of 56 participants searched a three-dimensional environment containing 48 objects in a room to locate a target object viewed prior to each trial. They searched either unaided or assisted by one of three HMD cue types: an arrow pointing to the target, a plan-view minimap highlighting the target, or a constantly visible icon depicting the appearance of the target object. The cue was incorrect in 17% of trials for one group of participants and 100% correct for the second group. Analysis and modeling of search speed and accuracy indicated that the arrow and minimap cues, which conveyed location information, were more effective than the icon cue, which conveyed visual appearance, both overall and when the cue was correct. However, there was a tradeoff on the infrequent occasions when the cue erred: the most effective AR-based cue led to greater automation bias, in which the cue was more often blindly followed without careful examination of the raw images.
The results speak to the benefits of AR and the need to examine the potential costs when AR-conveyed information may be incorrect because of imperfectly reliable systems.

Item How history trails and set size influence detection of hostile intentions (Springer Science and Business Media LLC, 2022-05)
Patton, Colleen E.; Wickens, Christopher D.; Clegg, Benjamin A.; Noble, Kayla M.; Smith, C. A. P.
Previous research suggests people struggle to detect a series of movements that might imply hostile intentions of a vessel, yet this ability is crucial in many real-world naval scenarios. To investigate possible mechanisms for improving performance, participants engaged in a simple simulated ship-movement task. One of two hostile behaviors was present in one of the vessels: Shadowing (mirroring the participant's vessel's movements) or Hunting (closing in on the participant's vessel). In the first experiment, history trails, showing the previous nine positions of each ship connected by a line, were introduced as a potential diagnostic aid. In a second experiment, the number of computer-controlled ships on the screen also varied. A smaller set size improved detection performance. History trails also consistently improved detection of both behaviors, although performance still fell well short of optimal, even with the smaller set size. These findings suggest that working memory plays a critical role in this dynamic decision-making task, and that the constraints of working memory capacity can be eased through a simple visual aid and an overall reduction in the number of objects being tracked. The implications for the detection of hostile intentions are discussed.

Item Supporting detection of hostile intentions: automated assistance in a dynamic decision-making context (Springer Science and Business Media LLC, 2023-11)
Patton, Colleen E.; Wickens, Christopher D.; Smith, C. A. P.; Noble, Kayla M.; Clegg, Benjamin A.
In a dynamic decision-making task simulating basic ship movements, participants attempted, through a series of actions, to elicit and identify which one of six other ships was exhibiting either of two hostile behaviors. A high-performing, although imperfect, automated attention aid was introduced: it visually highlighted the ship categorized by an algorithm as the most likely to be hostile. Half of the participants also received automation transparency in the form of a statement about why that ship was highlighted. Results indicated that although the aid's advice was often complied with, leading to higher accuracy and shorter response times, detection remained suboptimal. Additionally, transparency had limited impact on all aspects of performance. Implications for the detection of hostile intentions and the challenges of supporting dynamic decision making are discussed.