|Acquisition Program: || Objective: ||The objective is to provide technology that automatically discovers objects in a scene and performs a context-based assessment for mobile users. The capability should support hands-free operation for military personnel engaged in field operations.
|| Description: ||An operational need exists to better support tactical users in knowledge discovery specific to information seen or heard in their area of operation. Military operators are engaged in activities that may require hands-free operation of devices (voice or other I/O). New mobile devices feature GPS for location, network connectivity, and numerous applications (apps). It is desired to use these devices as digital assistants that make operators aware of significant entities or events in their area of operations and alert them to dangers. Devices need to ingest imagery and sounds (from warfighter-mounted or proximity sensors), identify entities and behaviors within a scene, automatically send entity/behavior knowledge representations to the ISR enterprise, support the discovery of enterprise knowledge relevant to those representations, and display the discovered information on retrieved imagery aligned to warfighter position and heading. The handheld system needs to describe objects or behaviors with sufficient clarity that related information, including cultural data, held within the ISR enterprise can be discovered. Upon receipt of entity/behavior contextual information, the handheld system needs to display tactically relevant context on location- and heading-registered imagery on the mobile device. Algorithms need to be developed that filter displayed information based on operator mission tasking.
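The entity/behavior knowledge representation and the mission-tasking filter described above could take many forms; the sketch below is one minimal illustration, assuming a simple JSON-serializable schema. All field names (`kind`, `label`, `tags`, etc.) and the `mission_filter` function are hypothetical, not drawn from any existing ISR enterprise standard:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class KnowledgeObject:
    """Hypothetical entity/behavior report sent to the ISR enterprise."""
    kind: str        # "entity" or "behavior"
    label: str       # e.g. "vehicle", "crowd", "weapon_fire"
    lat: float       # observer-estimated position (WGS-84)
    lon: float
    timestamp: str   # ISO-8601 observation time
    confidence: float  # recognizer confidence, 0.0-1.0
    tags: list = field(default_factory=list)  # free-form context tags

def mission_filter(objects, mission_tags, min_confidence=0.5):
    """Keep only reports relevant to the operator's current tasking."""
    return [o for o in objects
            if o.confidence >= min_confidence
            and (not mission_tags or set(o.tags) & mission_tags)]

# Example: a route-clearance tasking suppresses a low-confidence crowd report
reports = [
    KnowledgeObject("entity", "vehicle", 33.312, 44.361,
                    "2010-06-01T09:14:00Z", 0.9, ["ied", "route"]),
    KnowledgeObject("behavior", "crowd", 33.310, 44.360,
                    "2010-06-01T09:15:00Z", 0.4, ["civil"]),
]
relevant = mission_filter(reports, {"route", "ied"})
print(json.dumps([asdict(o) for o in relevant], indent=2))
```

The JSON serialization stands in for whatever knowledge-object encoding the enterprise actually adopts; the point is that a structured, tagged representation makes both enterprise-side discovery and device-side tasking filters straightforward.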
The Department of Defense deals with very large quantities of data and requires methods to rapidly assess information relevant to users. Mobile devices such as iPhone and Android handsets offer mobile computing assistance and communications; however, these devices need to be adapted for military environments. Mobile devices offer direct or indirect contextual awareness of surroundings: indirect awareness comes through networks, and direct awareness through sensor or user device inputs.
Challenges for this topic include 1) entity (e.g., location, vehicle, weapon fire, facility) and behavior (e.g., crowd, clan symbol, loitering) recognition in scenes, 2) translation of images/sounds to knowledge objects that can be understood by a larger ISR enterprise, 3) space/time/context-based knowledge retrieval on received knowledge objects, 4) display of context data on location- and heading-registered imagery on a mobile device, and 5) keeping the warfighter and area sensors in a common coordinate (space and time) system. A mature system should make effective use of direct and indirect sources of information to maintain a current state of awareness in a dynamic data environment.
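Challenge 4, registering context data to the device's position and heading, reduces at its core to standard great-circle geometry: compute the true bearing from the warfighter to an entity, then offset by the device heading to place the overlay relative to the screen. A minimal sketch, using the standard initial-bearing formula (function names are illustrative only):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from true north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def relative_bearing(device_heading_deg, target_bearing_deg):
    """Angle of the target relative to device heading, in (-180, 180]; 0 = dead ahead."""
    rel = (target_bearing_deg - device_heading_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel

# Example: an entity due east of a north-facing warfighter sits at roughly +90 degrees,
# i.e. off the right edge of a heading-registered display
tgt = bearing_deg(33.3100, 44.3600, 33.3100, 44.3700)
print(relative_bearing(0.0, tgt))
```

Keeping warfighter and area sensors in a common coordinate system (challenge 5) means all parties report positions in the same datum (e.g., WGS-84) with synchronized timestamps, so that bearings computed this way stay consistent across sources.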
Technology needs to be developed that allows a warfighter tactical device to convey scene understanding, add context, and display a composite view to the tactical warfighter. The focus of the topic is on bandwidth-constrained expeditionary warfighters. The vision is a personal digital assistant that makes the user aware of relevant, visually oriented critical information in a changing environment by enabling an accurate representation of the environment to be transmitted to the ISR enterprise. The goal is to greatly increase the effective utilization of large data stores. User safety in dangerous and unfamiliar areas is of the highest priority. The system should automatically generate alerts such as “building ahead is associated with terrorist activity,” “entering area of recent RPG attack,” and “explosion detected 1 km north by an overhead sensor.”
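The location-keyed alerts above (e.g., “entering area of recent RPG attack”) amount to geofencing against enterprise hazard records: compute the great-circle distance from the operator to each known hazard and fire an alert when it falls inside a threshold. A minimal sketch, assuming a hypothetical hazard record format and a haversine distance on a spherical Earth:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius; spherical approximation

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def proximity_alerts(position, hazards, radius_m=500.0):
    """Return alert strings for hazards within radius_m of the operator."""
    lat, lon = position
    return [f"entering area of {h['desc']}"
            for h in hazards
            if haversine_m(lat, lon, h["lat"], h["lon"]) <= radius_m]

# Example: a hazard ~280 m away triggers an alert; one ~10 km away does not
hazards = [
    {"desc": "recent RPG attack", "lat": 33.3125, "lon": 44.3600},
    {"desc": "reported minefield", "lat": 33.4000, "lon": 44.3600},
]
print(proximity_alerts((33.3100, 44.3600), hazards))
```

The alert radius and hazard schema are placeholders; an operational system would tune these per hazard type and draw the records from the enterprise knowledge store.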
The OSD is interested in innovative R&D that involves technical risk. Proposed work should have technical and scientific merit. Creative solutions are encouraged.
|| ||PHASE I: Complete a feasibility study and research plan that lead to a matured technical approach and a successful proof-of-principle demonstration of hands-free operation of mobile devices for military users. Identify the critical technology issues that must be overcome to achieve success and track key risk-reduction activity. Prepare a research plan for Phase II that addresses critical issues for prototyping.
|| ||PHASE II: Produce a prototype system capable of providing context-based, spatially oriented information assistance and operator alerts. Demonstrate device operation in a hands-free mode (e.g., voice, sign, or other means). Test the device in a lab setting in two or more contexts. Develop a concept of operations in support of a specific user community with potential situational dangers and a need for assistance alerts. Outline methods to incorporate input from smart area EO/IR and acoustic sensors. The prototype will need to track area sensors and the warfighter in a common coordinate system. The existence of a searchable enterprise knowledge store can be assumed.
|| ||PHASE III: Produce a system capable of deployment in an operational setting. Test the system in an operational setting both in a stand-alone mode and as a component of a larger network. The work should focus on the capability required to achieve transition to a program of record of one or more of the military Services. The capability should use open standards and military guidance for net-centric operations. Performance metrics should be defined with the transition program partner and assessments made against them.
|| References: ||1. Data Sharing in a Net-Centric Department of Defense, DoD Directive No. 8320.2, December 2, 2004.
2. B. Sutherland, “Apple’s New Weapon: To help soldiers make sense of data from drones, satellites and ground sensors, the U.S. military now issues the iPod Touch,” Newsweek, April 2009. http://newsweek.com/id/1945623
3. Prince McClean, “Inside Google's Android and Apple's iPhone OS as core platforms,” November 5, 2009.
4. H. W. Gellersen, A. Schmidt, and M. Beigl, “Multi-Sensor Context-Awareness in Mobile Devices and Smart Artifacts,” Mobile Networks and Applications, vol. 7, pp. 341–351, 2002.
|Keywords: ||knowledge discovery, augmented reality, human-computer interaction|