A Human-Interactive Course of Action Planner for Aircraft Carrier Deck Operations

Bibliographic Details
Main Authors: Michini, Bernard J. (Contributor), How, Jonathan P. (Contributor)
Other Authors: Massachusetts Institute of Technology. Aerospace Controls Laboratory (Contributor), Massachusetts Institute of Technology. Department of Aeronautics and Astronautics (Contributor)
Format: Article
Language: English
Published: American Institute of Aeronautics and Astronautics, 2013-10-23T13:40:08Z.
Subjects:
Description
Summary: Aircraft carrier deck operations present a complex and uncertain environment in which time-critical scheduling and planning must be done, and to date all course of action planning is done solely by human operators who rely on experience and training to safely negotiate off-nominal situations. A computer decision support system could provide the operator with both a vital resource in emergency scenarios and suggestions to improve efficiency during normal operations. Such a decision support system would generate a schedule of coordinated deck operations for all active aircraft (taxi, refuel, take off, queue in Marshal stack, land, etc.) that is optimized for efficiency, amenable to the operator, and robust to the many types of uncertainty inherent in the aircraft carrier deck environment. This paper describes the design, implementation, and testing of a human-interactive aircraft carrier deck course of action planner. The planning problem is cast in the MDP framework such that a wide range of current literature can be used to find an optimal policy. It is designed such that human operators can specify priority aircraft and suggest scheduling orders. Inverse reinforcement learning techniques are applied that allow the planner to learn from recorded expert demonstrations. Results are presented that compare various types of human and learned policies, and show qualitative and quantitative matching between expert demonstrations and learned policies.
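To illustrate the MDP formulation the summary refers to, the following is a minimal value-iteration sketch on a toy launch-sequencing problem. The states, actions, transition probabilities, and rewards below are all hypothetical stand-ins invented for illustration; the paper's actual carrier-deck MDP is far richer and is not reproduced here.

```python
# Toy MDP sketch (hypothetical): an aircraft moves from "queued" (0) through
# "fueled" (1) to "launched" (2, absorbing). Actions: 0 = "wait", 1 = "advance".
# transitions[s][a] = list of (probability, next_state, reward).
transitions = {
    0: {0: [(1.0, 0, -1.0)], 1: [(0.9, 1, -1.0), (0.1, 0, -1.0)]},
    1: {0: [(1.0, 1, -1.0)], 1: [(0.8, 2, 10.0), (0.2, 1, -1.0)]},
    2: {0: [(1.0, 2, 0.0)], 1: [(1.0, 2, 0.0)]},
}

def value_iteration(trans, gamma=0.95, tol=1e-6):
    """Standard value iteration: returns the optimal value function and policy."""
    V = {s: 0.0 for s in trans}
    while True:
        delta = 0.0
        for s in trans:
            # Bellman backup: best expected return over actions.
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                 for outcomes in trans[s].values()]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Greedy policy with respect to the converged values.
    policy = {s: max(trans[s],
                     key=lambda a: sum(p * (r + gamma * V[s2])
                                       for p, s2, r in trans[s][a]))
              for s in trans}
    return V, policy

V, policy = value_iteration(transitions)
```

With these illustrative rewards, the optimal policy advances the aircraft at every non-absorbing state, since waiting only accumulates the per-step delay penalty. Inverse reinforcement learning, as applied in the paper, works in the opposite direction: it infers a reward function under which recorded expert schedules are (near-)optimal.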
United States. Office of Naval Research (Science of Autonomy Program)