SITIS Archives - Topic Details
Program:  SBIR
Topic Num:  A10-143 (Army)
Title:  Perception for Persistent Surveillance with Unmanned Ground Vehicles
Research & Technical Areas:  Ground/Sea Vehicles, Sensors

Acquisition Program:  PM Future Combat Systems Brigade Combat Team
  Objective:  Develop a system to allow an unmanned ground vehicle to extract useful information from surveillance video, to reposition itself to optimize data collection and communications, and to conceal itself.
  Description:  Persistent surveillance is a major role envisioned for unmanned ground vehicles (UGVs). Persistent surveillance refers to the use of networked assets over a wide area and extended duration to collect and process sensor data to produce actionable intelligence, which includes the type, location, and movement of identifiable threats or targets, as well as generally suspicious activities. Persistent surveillance dynamically reallocates assets, does not imply complete and constant coverage, and can employ a mix of different platforms and sensors providing complementary coverage and resolution. Common platforms used for persistent surveillance include unmanned aerial vehicles (UAVs) and stationary unattended ground sensors (UGSs). The focus of this topic is the use of UGVs for persistent surveillance.

Unlike UAVs, UGVs can remain in place without significant energy expenditure and can maneuver inside buildings, tunnels, and constrained spaces. Unlike UGSs, UGVs can reposition and reorient themselves and, with a manipulator arm, can improve their positions for surveillance, cover and conceal their positions, and emplace and retrieve UGSs. A major disadvantage is that UGVs must deal with the many obstacles on the ground. Although many sensors could potentially be used for persistent surveillance, this topic focuses on visual (and perhaps infrared) imaging sensors and acoustic sensors.

Communications bandwidth is and will remain a limited resource. Even with video compression technologies, there is insufficient bandwidth to upload all video and high-resolution still images from all persistent surveillance network nodes. Artifacts due to heavy video compression would degrade most analysis applications, and viewing all the data would overwhelm analysts. Local processing is therefore preferable to central processing for extracting actionable intelligence from the sensor data and for planning sensor position adjustments.
An individual node can determine whether there has been a significant change in the situation that would warrant transmitting a package of sensor-level data. The scenario to be addressed in this topic is that a UGV has been placed at a particular location in order to perform surveillance of a particular area, whose location has been provided by maps, landmarks, and/or GPS. Capabilities desired for the UGV include positioning itself in the correct orientation to view the desired area, finding a location that offers concealment, periodically adjusting position and orientation to improve concealment, data collection, and/or communications, and extracting actionable information from the sensor stream. In the real world one or more of these functions may be performed via tele-operation, but the intent of the research is to determine how much can be performed autonomously by the UGV. Information of interest includes detection and analysis of humans and vehicles, analysis of traffic patterns, and identification of suspicious activities or behaviors. The intended platform size is on the order of 20 kg and the platform is expected to function for 72 hours, so energy-efficient algorithms are of interest. UGV platform and payload development, including sensors and communications, is outside the scope of this topic.
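The local decide-before-transmitting behavior described above can be sketched as a minimal frame-differencing test. This is an illustrative assumption, not a prescribed method: the function names, the grayscale list-of-lists frame representation, and the threshold value are all hypothetical, and a fielded system would use a more robust change detector.

```python
# Minimal sketch of a per-node "significant change" test (illustrative only).
# Frames are simplified to 2-D grayscale arrays (lists of lists, values 0-255).

def frame_difference(prev, curr):
    """Mean absolute pixel difference between two same-sized frames."""
    total, count = 0, 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += abs(p - c)
            count += 1
    return total / count

def should_transmit(prev, curr, threshold=10.0):
    """Decide locally whether the scene changed enough to justify spending
    limited uplink bandwidth on a package of sensor-level data."""
    return frame_difference(prev, curr) > threshold
```

Under this sketch, sensor noise (small per-pixel jitter) stays below the threshold and nothing is transmitted, while a large localized change (e.g., a vehicle entering the scene) pushes the mean difference above it and triggers an upload.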

  PHASE I: The first phase consists of scenario/capability selection, initial system design, researching sensor options, investigating signal and video processing algorithms, and showing feasibility on sample data. The final report shall document design tradeoffs and projected system performance.
  PHASE II: The second phase consists of a final design and full implementation of the system, including sensors and UGV software. At the end of the contract, extraction of actionable information and autonomous local maneuvering shall be demonstrated in a realistic outdoor environment. Deliverables shall include the prototype system and a final report, which shall contain documentation of all activities in the project and a user's guide and technical specifications for the prototype system.

  PHASE III: The end-state of this research is to further develop the prototype system and potentially transition the system to the field, in support of OEF/OIF missions and objectives. Potential military applications include monitoring highways, overpasses, intersections, buildings and security checkpoints. Potential commercial applications include monitoring high profile events, border security and commercial and residential surveillance. The most likely path for transition of the SBIR from research to operational capability is through collaboration with robotic companies from industry or through collaboration with the Robotic Systems Joint Project Office (RS JPO).

  References:
  1. http://www.afcea.org/signal/articles/templates/SIGNAL_Article_Template.asp?articleid=97&zoneid=88 (Persistent Surveillance Comes Into View)
  2. http://www.dtic.mil/srch/doc?collection=t3&id=ADA497188 (Automated Knowledge Generation with Persistent Surveillance Video)
  3. http://www.tardec.info/roboticsrodeo/Documents/PersitentStare.pdf (Persistent Stare Scenarios)
  4. http://www.ee.washington.edu/research/nsl/papers/iscas-08.pdf (Human Activity Recognition for Video Surveillance)
  5. http://www.robots.ox.ac.uk/~lav//Publications/robertson_reid_cviu2006/robertson_reid_cviu2006.pdf (A General Method for Human Activity Recognition in Video)
  6. http://mha.cs.umn.edu (Monitoring Human Activity)
  7. http://www.araa.asn.au/acra/acra2004/papers/marzouqi.pdf (Covert Robotics: Covert Path Planning in Unknown Environments)

Keywords:  robotics, surveillance, autonomy, image processing, ground vehicle, human activity

Questions and Answers:
Q: 1. The last sentence in the topic description suggests that the system should use the robot's existing on-board sensors. Is this correct? If so, what sensors are available on the robot: camera, GPS, IMU, laser scanner? What is the available resolution/framerate/FOV of the visible/IR camera?

2. Can we assume a map of the environment (including 3D structures suitable for concealment) is available, or should this be built autonomously by the robot?
A: 1. That sentence was meant to mean that the sensors should not be custom made; rather, they should be COTS sensors or proven sensors that your company owns (only visual, infrared, and/or acoustic sensors should be used). The main focus of this effort should be on the development of intelligent video analytics capabilities and autonomous maneuvering capabilities for the robot. Also, the robot should be COTS or a robot that your company owns. There are no specific restrictions on the resolution/framerate/FOV of the visible/IR camera; however, keep in mind that limited communications bandwidth is available, as mentioned in the solicitation.

2. A map of the environment should not be assumed; it should be built autonomously by the robot.
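One common way to build such a map autonomously is a 2-D occupancy grid updated in log-odds form; the sketch below is a hedged illustration of that idea, not a requirement of the topic. The grid layout, the log-odds constants, and the assumption that sensor readings have already been converted to grid cells are all hypothetical (and, per the Q&A above, range estimates would have to come from the permitted visual/IR/acoustic sensors rather than a laser scanner).

```python
# Illustrative sketch of occupancy-grid map building (log-odds form).
# Cells start at 0.0 (unknown); repeated observations push a cell toward
# occupied (positive) or free (negative). Constants are assumptions.

def make_grid(width, height):
    """Create a width x height grid of unknown cells (log-odds 0.0)."""
    return [[0.0] * width for _ in range(height)]

def update_cell(grid, x, y, occupied, l_occ=0.85, l_free=-0.4):
    """Fold one observation of cell (x, y) into the grid."""
    grid[y][x] += l_occ if occupied else l_free

def is_occupied(grid, x, y, threshold=0.0):
    """Classify a cell as occupied once its log-odds exceed the threshold."""
    return grid[y][x] > threshold
```

The log-odds accumulation makes the map robust to single noisy readings: one spurious "occupied" observation can be outweighed by subsequent "free" observations of the same cell.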
Q: Under the phase 1 objectives - it states that an initial system design is required - now having said that - does the initial system design entail a software system? a hardware system? or both?
A: Both. However, the emphasis of this SBIR should be on the software development (intelligent video analytics, autonomous maneuvering algorithms, etc.). Hardware for the system should primarily be COTS or proven hardware that your company already owns.
Q: Is this solicitation eligible for a phase 1 option?
A: Yes.
Q: Are laser scanners permitted for the design?
A: No.
Q: If additional sensors are required that are not owned by the company submitting a proposal - may they be included in the costs for the proposal?
A: Yes, but they would need to be delivered to the Government at the end of the contract.
Q: Would there be interest in abstract models for interpreting abstract sensors in pursuit of the defined goals: positioning, concealment, extracting actionable information from sensor artifacts, detection and analysis of humans and vehicles, analysis of traffic patterns, identification of suspicious activities or behaviors? --From a company that does not have direct access to robotic vehicles or sensors (only a new way to process information).
A: We are looking for both the software and hardware for this effort. The hardware (sensors, communications, UGV platform, etc.) can be purchased COTS. There are no restrictions on the UGV platform that is used for Phase I/II.
