Acquisition Program: FNC: Naval Expeditionary Maneuver Warfare; MCSC PM Motor Transportation

Objective: Develop human-centric tools for the supervisory control of autonomous unmanned ground vehicles, based on natural language dialogue, that enhance the likelihood of mission success, lower the operator's cognitive workload, simplify the interface hardware, enable heads-up and hands-free operation, and improve operator trust in vehicle autonomy.
Description: Emerging Marine Corps strategy calls for small units to be deployed beyond the limits of traditional logistics support. One potential solution to this problem is the use of autonomous ground vehicles for supply transport and casualty evacuation. The DARPA Urban Challenge demonstrated that autonomous navigation of ground vehicles is feasible. However, in tactical situations involving small units, complex vehicle control interfaces are impractical: they impose training burdens, place high cognitive demands on the operator, render the operator vulnerable, limit situation awareness, and preclude the operator from performing other duties. There is a need for supervisory control technologies for autonomous vehicles that are natural and intuitive. In small combat units, teamwork is based on natural language dialogue and gesture interactions, where gesture serves to disambiguate spatial and object references. Research to develop a natural language dialogue system integrated with the control architecture of an autonomous vehicle would enable natural supervisory control of that vehicle, including the ability of the vehicle to express its current status with respect to its goals and subsystems. Where line-of-sight control is available, this could be supplemented by hand-and-arm gestures, Wiimote pointing and gestures, or laser designators to help disambiguate spatial references.
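To illustrate the kind of interface this integration implies, the sketch below maps a spoken-style directive onto a goal record that a vehicle control architecture could consume. This is a minimal, hypothetical example only: the pattern rules, the `Goal` fields, and the function names are assumptions for illustration, standing in for the grammar-based or logic-based semantic parsing discussed in the references.

```python
# Minimal illustrative sketch (not a deliverable design): a rule-based
# mapping from a natural-language directive to a goal record for a
# hypothetical vehicle controller. All names here are invented.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    action: str      # e.g. "goto", "follow", "halt"
    target: str      # object or location referent, if any
    constraint: str  # temporal/manner qualifier, if any

# Hypothetical directive patterns a dialogue front end might handle.
PATTERNS = [
    (r"go to (?:the )?(?P<target>[\w ]+?)(?: before (?P<constraint>[\w ]+))?", "goto"),
    (r"follow (?:the )?(?P<target>[\w ]+)", "follow"),
    (r"(?:stop|halt)", "halt"),
]

def parse_directive(utterance: str) -> Optional[Goal]:
    """Map one directive to a Goal, or None if it is not understood.

    A real system would use a semantic parser and dialogue state; the
    regexes here only show the shape of the dialogue-to-control interface.
    Returning None would trigger a clarification request in the dialogue loop.
    """
    text = utterance.lower().strip().rstrip(".")
    for pattern, action in PATTERNS:
        m = re.fullmatch(pattern, text)
        if m:
            groups = m.groupdict()
            return Goal(action=action,
                        target=groups.get("target") or "",
                        constraint=groups.get("constraint") or "")
    return None

print(parse_directive("Go to the supply point before nightfall"))
print(parse_directive("Follow the lead vehicle"))
```

The key design point the sketch conveys is that the control architecture must expose plans, goals, and actions in a symbolic form that dialogue can address, and that the dialogue side must be able to signal non-understanding rather than guess.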
PHASE I: Identify natural language dialogue algorithms and architectures suitable for integration with the control systems of autonomous ground vehicles. Conduct research into autonomous control architectures that represent plans, goals, and actions in a form addressable by natural language. Conduct a design study of the integration of natural language dialogue with the control architecture. Analyze the minimal display requirements needed to supplement natural-language-based control. Identify a gesture-based control language and gesture recognition algorithms suitable for ground vehicle control.
PHASE II: Based on the natural language dialogue software and control system identified and developed in Phase I, design and build a prototype software and hardware system using an existing vehicle capable of outdoor operation.
PHASE III: Further develop the prototype dialogue system integrated with autonomous control, and demonstrate its capability for vehicle control in an outdoor environment. Evaluate the ease of use of the interface and the performance of vehicle control.
PRIVATE SECTOR COMMERCIAL POTENTIAL/DUAL-USE APPLICATIONS: Supervisory control of farm equipment, warehouse vehicles, surveillance patrol vehicles, and port freight-handling equipment.
References:
1. Dzifcak, J., Scheutz, M., Baral, C., and Schermerhorn, P. (2009). "What to do and how to do it: Translating natural language directives into temporal and dynamic logic representation for goal management and action execution." In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, May 2009.
2. Lison, P. and Kruijff, G.-J. M. (2009). "Efficient parsing of spoken inputs for human-robot interaction." In Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), Toyama, Japan.
3. Loper, M., Koenig, N., Chernova, S., Jones, C., and Jenkins, O. (2009). "Mobile human-robot teaming with environmental tolerance." In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 157-164.
Keywords: natural language dialogue, autonomous ground vehicles, gesture, control architecture