|Acquisition Program: |
| ||The technology within this topic is restricted under the International Traffic in Arms Regulations (ITAR), which controls the export and import of defense-related material and services. Offerors must disclose any proposed use of foreign nationals, their country of origin, and the tasks each would accomplish in the statement of work, in accordance with section 3.5.b.(7) of the solicitation.|| Objective: ||Develop new tools for the creation of entity and unit behaviors to be embedded in military simulation systems. These tools should leverage novel paradigms of human-computer interaction so that users can apply their knowledge of the desired result without being encumbered by unnecessary details of primitives, composition rules, programming constructs, and the like.
|| Description: ||The OneSAF Objective System (OOS) has made a significant step forward in allowing non-programmers to create new semi-automated behaviors in the simulation. A behavior is a complex, tactical task performed by combat entities and units within the simulation. The OneSAF program has a well-defined behavior modeling language, a robust initial set of behaviors, and a graphical tool to allow non-programmers to modify or create behaviors without writing or recompiling software. The current behavior composer uses a graphical programming language to do this. This language and the Behavior Composer graphical user interface allow the user to combine and compose more primitive behaviors into aggregate or “higher-level” behaviors using a graphical flow-chart-type metaphor. Newly-defined behaviors can then be used in the creation of more advanced behaviors, and so on.
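The flow-chart-style composition described above, in which primitives combine into aggregates that can themselves serve as building blocks for still higher-level behaviors, can be sketched in a few lines of code. This is a purely illustrative sketch in Python, not the OOS behavior modeling language; all class and behavior names below are hypothetical.

```python
"""Illustrative sketch of composing primitive behaviors into aggregates.

NOT the OOS behavior modeling language: the classes and behavior names
are invented to mirror the composition metaphor described in the text.
"""

class Behavior:
    """Base class: every behavior, primitive or composite, runs the same way."""
    def __init__(self, name):
        self.name = name

    def run(self, entity):
        raise NotImplementedError

class Primitive(Behavior):
    """A leaf behavior wrapping a single action."""
    def __init__(self, name, action):
        super().__init__(name)
        self.action = action

    def run(self, entity):
        return self.action(entity)

class Sequence(Behavior):
    """A composite behavior: run its children in order."""
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children

    def run(self, entity):
        return [child.run(entity) for child in self.children]

# Primitives compose into an aggregate, and the aggregate can itself be
# reused as a building block in a still higher-level behavior.
move = Primitive("move_to_cover", lambda e: f"{e} moves to cover")
scan = Primitive("scan_sector", lambda e: f"{e} scans sector")
bound = Sequence("bounding_overwatch_step", [move, scan])
assault = Sequence("assault",
                   [bound, Primitive("suppress", lambda e: f"{e} suppresses")])
print(assault.run("Squad A"))
```

Because composites and primitives share one interface, a newly defined behavior is immediately usable wherever a primitive is, which is the property the Behavior Composer exploits.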
The behavior composer, however, still requires the user to have some familiarity with behavior primitives, composite behaviors, and programming constructs such as iteration, conditional, and looping statements. In addition, it can be difficult for the user to understand how a completed or draft behavior will actually work across a variety of situations, to perceive anomalies or gaps in its intended execution, or to have confidence that the behavior is robust under the desired conditions. At least two factors underlie this difficulty: the completeness of a behavior under a variety of environmental conditions, and the robustness of defined behaviors, especially when a defined behavior is itself used as a composition primitive.
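The completeness problem noted above can be made concrete: given the environmental factors a behavior may encounter, a tool could enumerate their combinations and flag the situations the behavior author never covered. The factors and responses below are invented for this sketch; a real tool would draw them from the behavior definition and the simulation's environment model.

```python
"""Illustrative completeness check for a defined behavior.

All condition names and responses here are hypothetical examples.
"""
from itertools import product

# Hypothetical environmental factors the behavior may encounter.
factors = {
    "visibility": ["day", "night"],
    "terrain": ["open", "urban"],
}

# The (deliberately incomplete) behavior defines responses for only
# some combinations of conditions.
defined_cases = {
    ("day", "open"): "travelling_overwatch",
    ("day", "urban"): "bounding_overwatch",
    ("night", "open"): "travelling_overwatch",
}

# Flag combinations for which the behavior is silent.
gaps = [combo for combo in product(*factors.values())
        if combo not in defined_cases]
print(gaps)
```

Even this brute-force enumeration surfaces the kind of gap a user is unlikely to notice in a flow-chart view; a practical tool would need smarter coverage analysis as the number of factors grows.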
Another impediment to a more natural behavior definition system is the general problem of sketch recognition. It would be highly advantageous if the user could communicate with the computer by sketching, instead of laboriously dragging and dropping icons in a pre-defined graphical user interface.
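To give a flavor of what sketch recognition involves, the toy classifier below labels a stroke by a single geometric cue: if the gap between its endpoints is small relative to its path length, the stroke is treated as a closed shape. Real engines such as SketchREAD (see the references) combine many geometric and contextual cues; this sketch and its threshold are illustrative assumptions only.

```python
"""Toy stroke classifier hinting at the flavor of sketch recognition.

The closure-ratio heuristic and threshold are illustrative assumptions,
not an algorithm from the cited literature.
"""
import math

def classify_stroke(points, closure_ratio=0.2):
    """Label a stroke 'closed' or 'open' by comparing the gap between
    its endpoints to its total path length."""
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    gap = math.dist(points[0], points[-1])
    return "closed" if gap < closure_ratio * length else "open"

# A rough square (nearly closed) versus a straight stroke (open).
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.05, 0.02)]
line = [(0, 0), (1, 0.1), (2, 0.2)]
print(classify_stroke(square), classify_stroke(line))
```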
Two basic tenets of human-computer interaction (HCI) are that the computer should do what it does best, thus allowing the human to do what he or she does best, and that the computer should support the human in achieving the human's goals. In this domain, the computer could use its knowledge of the library of behaviors, the composition and control-flow rules, and other knowledge to allow the user to iteratively and interactively converge on the desired behavior in a natural fashion. The result should be a more powerful and natural behavior creation capability that reduces the problems of completeness and robustness. If the computer could also recognize hand-sketched input, the utility, power, and ease of use of the system would increase greatly.
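One way the computer can apply its knowledge of the composition rules is to treat a behavior definition as a set of required slots and ask the user a clarifying question for each slot left unspecified. The slots, questions, and answers below are hypothetical, standing in for whatever a real composer would derive from the OOS modeling language.

```python
"""Sketch of the computer eliciting missing behavior details from the user.

The slot names, questions, and canned answers are invented for this
illustration; a real tool would derive them from the modeling language.
"""

# Each slot the composer needs before a behavior is well-formed, with
# the clarifying question to ask if the user left it unspecified.
required_slots = {
    "trigger": "When should this behavior begin?",
    "formation": "What formation should the unit adopt?",
    "end_state": "How do we know the behavior is complete?",
}

def elicit(partial_spec, answer_fn):
    """Fill unanswered slots by asking the user one question at a time."""
    spec = dict(partial_spec)
    for slot, question in required_slots.items():
        if slot not in spec:
            spec[slot] = answer_fn(question)
    return spec

# Simulated user answers in place of a real interactive session.
answers = {"What formation should the unit adopt?": "wedge",
           "How do we know the behavior is complete?": "objective secured"}
spec = elicit({"trigger": "on contact"}, answers.get)
print(spec)
```

In an interactive system, `answer_fn` would be a dialogue with the user rather than a lookup table, and the question order could be driven by what the partial behavior already implies.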
The goal of this research effort is to develop new paradigms, tools, and systems for human-computer interaction that could be used by a next-generation behavior composer to work interactively with the user to modify or create behaviors. The vision is for the user to interactively and iteratively draw a storyboard of the behavior, much as the user might when instructing new soldiers in the proper execution of the tactical behavior. The system should be intelligent enough to query the user for more information, ask elucidating questions, and interact with the user in a natural way, much as new soldiers or students would in a teaching situation.
While it is envisioned that the developed tool could benefit many simulation systems and tools, the focus of this research is on the human-computer interactions and the development of the tool. The purpose of this research is not to define a new behavior modeling language or simulation system. To focus this research effort on the intended area, the use of OOS; the OOS modeling language; and OOS entities, units, and behaviors is specified. Researchers who propose a project under this STTR should coordinate with the OneSAF program office for access to OOS software and documentation, and should include with the proposal an indication that the program office will grant that access.
|| ||PHASE I: In Phase I, the performer will outline the human-computer interaction (HCI) technologies and techniques envisioned to implement a next-generation behavior composer. Storyboards or other prototype mock-ups are expected. As part of the final report, plans for Phase II are required. There is no expectation that a prototype would be built during Phase I, but the report must address how the performer plans to implement a prototype in Phase II and how that prototype will comply with the OneSAF Objective System (OOS) architecture and interfaces. In addition, the plan for Phase II must address the set of behaviors that will be built with the tool in Phase II to demonstrate the efficacy of the proposed approach.
|| ||PHASE II: In Phase II, the researcher will be expected to build a system that fully supports the OOS behavior modeling language and could potentially supplement or replace the current composer tool. The researcher will demonstrate that a range of behaviors can be created and modified with the developed tool. As part of the final report for Phase II, the researchers will present their plans for Phase III, including other simulations with which the next-generation composer tool will be integrated.
|| ||PHASE III Dual Use Applications: Direct potential uses of this technology include simulation systems for homeland security, emergency response, civil law enforcement, large-scale wildfire response, and others. In Phase III, the researchers will apply the proposed tool to a simulation system other than OOS and/or a domain other than combat modeling.
|| References: ||1. Surdu, J.R., One Semi-Automated Force (OneSAF) Objective System (OOS): Program Overview
2. Karr, C.R., OneSAF Behaviors, OneSAF User Conference, 1993.
3. Henderson, C., Grainger, B., Composable Behaviors in the OneSAF Objective System, Proceedings of the 2002 Interservice/Industry Training, Simulation, and Education Conference, Orlando, Florida, December 2-5, 2002.
4. Henderson, C., Grainger, B., Behavior Modeling in the OneSAF Objective System (accompanying presentation), http://list.onesaf.org/html/baselined/papers/iitsec_02__final.pdf.
5. Alvarado, C., Davis, R., SketchREAD: A Multi-Domain Sketch Recognition Engine, Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology, Santa Fe, NM, USA, 2004. Abstract and full paper available at http://portal.acm.org/citation.cfm?id=1029637.
6. Forbus, K., Usher, J., Chapman, V., Qualitative Spatial Reasoning About Sketch Maps, Proceedings of the Fifteenth Annual Conference on Innovative Applications of Artificial Intelligence, Acapulco, Mexico, 2003.
|Keywords: ||OneSAF, Behavior Composition, Military Simulation.|