Developing and Evaluating an Augmented Reality Interface to Assist the Joint Tactical Air Controller by Applying Human Performance Models

The authors developed a 3D augmented reality head-mounted display (DARSADS-SVS HMD) interface to support the Joint Tactical Air Controller (JTAC). The JTAC's job is to integrate information about enemy attack units and nearby friendly forces and to direct weapons-equipped aircraft to neutralize the enemy via close air support (CAS), while also safely routing air traffic. The JTAC's numerous, often overlapping tasks involve maintaining detailed situational awareness (SA) of a large quantity of information and making rapid decisions that carry life-or-death consequences. The role therefore demands many different cognitive operations across different mission phases. Designing an effective human-factored system that supports maximum SA while minimizing cognitive load required the authors to harness computational cognitive models of SA-supporting visual scanning, display layout, 3D frame-of-reference transformations, clutter, legibility, and working memory. The authors applied these models to different phases of the JTAC mission (e.g., airspace management, call for fire), computing an overall Figure of Merit (FOM) for each candidate design by summing the FOMs produced by the individual models, thus creating a mechanism to evaluate designs by their balanced impact on competing cognitive drivers. Model FOMs were differentially weighted for each phase, according to the relative importance of the corresponding cognitive process to the phase in question. In this research paper, the authors illustrate two such design comparisons.
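The phase-weighted FOM aggregation described above can be sketched as a simple weighted sum. The model names, weights, and scores below are illustrative placeholders, not values from the paper; only the aggregation scheme (per-model FOMs weighted by phase-specific importance, then summed) follows the abstract.

```python
def overall_fom(model_foms, phase_weights):
    """Combine per-model FOM scores into one design-level FOM,
    weighting each model by the importance of its cognitive
    process to the current mission phase."""
    return sum(phase_weights[m] * fom for m, fom in model_foms.items())

# Hypothetical weights for a single mission phase (e.g., call for fire):
weights = {"visual_scanning": 0.5, "clutter": 0.3, "working_memory": 0.2}

# Hypothetical per-model FOM scores for two candidate display designs:
design_a = {"visual_scanning": 0.8, "clutter": 0.6, "working_memory": 0.7}
design_b = {"visual_scanning": 0.6, "clutter": 0.9, "working_memory": 0.8}

print(round(overall_fom(design_a, weights), 2))  # 0.72
print(round(overall_fom(design_b, weights), 2))  # 0.73
```

Because the weights differ by phase, the same two designs can rank differently in, say, airspace management versus call for fire, which is what lets the method balance competing cognitive drivers.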

Language

  • English

Filing Info

  • Accession Number: 01707963
  • Record Type: Publication
  • Files: TRIS
  • Created Date: May 24 2019 4:23PM