---------- NGA ----------

4 Phase I Selections from the 11.2 Solicitation

(In Topic Number Order)
Computer Vision Group, Inc.
P.O. Box 569
Leakey, TX 78873
Phone: (401) 427-0860
PI: Ozge C. Ozcanli
Topic#: NGA11-001      Awarded: 12/16/2011
Title: Geo-registration of Aerial Imagery Using 3-D Volumetric Models
Abstract: With the advancement of aerial imaging sensors, high-quality data equipped with partial sensor calibration models is available. There is recent research activity in the computer vision community that aims to reconstruct the 3-D structure of observed scenes in fully automated ways, relying on the content of the imagery. However, this research has not matured into robust systems ready for operational settings. In this proposal, a novel architecture is presented that reconstructs the 3-D geometry of a scene in the form of a geo-registered 3-D point cloud, given imagery from multiple sensor platforms. The 3-D point cloud is equipped with LE and CE measurements through propagation of errors in the sensor calibration and geometry reconstruction stages. The CVG team proposes to use a volumetric probabilistic 3-D representation (P3DM) and dense image matching to reconstruct the geometry and appearance of the scene starting from a set of images with partial calibration data. The P3DM technology is at Technology Readiness Level (TRL) 4, with critical modules of the system parallelized and implemented on GPU hardware for real-time processing.

Toyon Research Corp
6800 Cortona Drive
Goleta, CA 93117
Phone: (805) 968-6787
PI: Andrew Brown
Topic#: NGA11-001      Awarded: 12/16/2011
Title: Fully Automated Dense 3D Modeling, Geo-Registration and Error Modeling
Abstract: Toyon Research Corporation proposes research and development of advanced algorithms and efficient software for performing high-resolution, georegistered 3D reconstruction with associated error models. The technology will be capable of processing sequences of 2D images to automatically generate and georegister 3D models, with no user intervention required. Rapid processing will be achieved by leveraging modern computer architectures that enable massively parallel processing using graphics processing units (GPUs), in addition to multi-core central processing units (CPUs). In Phase I, the feasibility of high-resolution 3D modeling, georegistration, and the development of statistically consistent error models will be demonstrated. Phase I R&D will also include speed benchmarking and analysis of massively parallel, GPU hardware-accelerated processing to guide the design of the Phase II architecture. Implementation and rigorous testing, evaluation, and validation of the developed technology will follow in Phase II. Integration of the developed technology into NGA analysis processes will also be pursued in Phase II.

Aptima, Inc.
12 Gill Street
Woburn, MA 01801
Phone: (781) 496-2430
PI: Stacy Pfautz
Topic#: NGA11-002      Awarded: 12/16/2011
Title: IMAGINE: Imagery Management through Agile, Geo-Interactive, Natural Embodiment
Abstract: Overhead imagery analysts employ computer-based software known as Electronic Light Tables (ELTs) to perform detailed analysis of aerial images in search of elements of interest. Conventional display design for ELT software requires analysts to take their eyes away from the image they are analyzing to perform routine functions. This interaction overhead typically leads to losses in visual momentum and in situation and context awareness, and requires significant cognitive shifting, all of which contribute to degraded performance. To overcome these challenges, an immersive, context-based method of interacting with overhead imagery software is needed to support and improve visual search. Aptima proposes to develop IMAGINE (Imagery Management through Agile, Geo-Interactive, Natural Embodiment), a tool and framework for interacting with imagery that leverages direct, embodied control beyond typical input devices. IMAGINE will be a multi-modal, naturalistic, and flexible environment that layers on top of existing software and that includes:
• Control devices to supplement conventional mouse/keyboard input systems;
• A context-driven command and visualization model that will interact with imagery software to interpret the analyst's input based on the current task and interaction environment;
• Output displays, such as large screens or mobile device interfaces, to complement and augment the computer screen-based visualization of imagery and its manipulation.

Hadron Industries
90 Airport Road
Concord, NH 03301
Phone: (855) 267-4253
PI: Klee Dienes
Topic#: NGA11-002      Awarded: 12/16/2011
Title: Novel Methods of Interacting with Overhead Imagery for Broad-Area Search
Abstract: We will develop a novel system for interacting with overhead imagery that allows image analysts to use more of their bodies, and to maintain eye contact with the imagery, during image manipulation and analysis. We will build upon an existing spatially aware, embodied human-computer interface system, Oblong Industries' G-Speak, integrating it with current IMINT tools. G-Speak is the state-of-the-art platform on which to build a successful solution for overhead image analysis. G-Speak already employs a gesture interface, but it has not been integrated with imagery technology. Technical objectives to achieve this integration include first defining and developing body gestures to perform image analysis tasks in G-Speak, which includes obtaining direct feedback from image analysts. We will then integrate G-Speak with external image exploitation packages (ESRI and/or ENVI, ERDAS, VITEC). Finally, we will demonstrate the capabilities of the proof-of-concept system by completing a traditional image analysis workflow: a human-led search for Elements of Interest in overhead imagery. The demonstration will show that the system effectively allows analysts to use intuitive body gestures to perform image analysis tasks, thus recruiting greater body involvement and preventing breaks in visual contact.