While low-level counter-insurgency operations in the Middle East have dragged on for nearly two decades, a war between the United States and a true near-peer adversary would prove far more costly. To deter such a conflict, America and her allies should field advanced wide-area airborne sensors to detect and track, in real time, a rival power's troop, vehicle, and ship movements in and around disputed regions. The early warning provided by these sensors would give all parties the opportunity to de-escalate before tensions spiral out of control.
It could start with ethnic separatists fighting in a neighboring country or with an unresolved dispute over an old, meandering border. It might even begin at sea with the construction of an artificial island. Indeed, there are any number of ways the United States and allied nations could find themselves clashing with a near-peer adversary.
Such a great-power conflict would be costly to all sides. Paradoxically, that very cost might make war more likely: a well-armed irredentist, betting that others would balk at the price of a response, might be tempted to act opportunistically, overwhelming a weaker neighbor and then quickly consolidating its gains in hopes of delivering a fait accompli. To prevent a slide into war, America and her allies will need to demonstrate that they can pre-empt such surprise attacks.
This means better aerial surveillance.
“If we can fly powerful sensors over an area and pick up, in real time, suspicious activity being carried out by our near-peer competitors—for example, building runways, moving in supply trucks, or otherwise bolstering their logistics capabilities—this will allow us to tamp down an escalation before it goes too far,” contends Doug Rombough, VP of Business Development at Logos Technologies, a sensor company based in Fairfax, Va.
The Center for Strategic and Budgetary Assessments has dubbed this concept "deterrence by detection." Making it work efficiently will take more than standard, narrow-field video cameras. America and her partners will need to employ wide-area motion imagery (WAMI) systems, mounted on unmanned aircraft and working in conjunction with other advanced sensors, all backed by powerful airborne edge processors.
Persistent surveillance with WAMI systems
The very first WAMI systems were 1,500-pound beasts deployed on counter-IED missions in Iraq and Afghanistan between 2006 and 2009. They flew on turboprop aircraft like the MC-12W Liberty, and analysts could not access the collected imagery until the aircraft landed. But that was then.
Current-generation systems developed by Logos Technologies, such as BlackKite and the Laureate Award-winning RedKite, are light enough to be carried by a wide range of platforms, including Group 3 tactical unmanned aircraft. Yet they are still powerful enough to image an area the size of a small city, detecting and tracking hundreds of targets of interest at once. They can also transmit this imagery to users on the ground in real time.
“Each of these lightweight WAMI systems can monitor 12 square kilometers while operating at 12,000 feet above the surface,” Rombough says. “Users can stream on their mobile devices up to 10 different live video feeds pulled from that vast coverage area.”
Should a user want to take a closer look at a person, vehicle, or site, the WAMI system can also cue a high-resolution video camera to make a positive identification.
Besides keeping a pilot and sensor operator out of harm's way, mounting a WAMI system on an unmanned aircraft lets it exploit the platform's long endurance for true wide-area persistent surveillance. A WAMI user can follow, for example, multiple supply trucks for hours in real time and, with the aid of recorded imagery, even track their movements "back in time" to hidden points of origin.
A flying concert of sensors
WAMI really shines, though, when combined with other sophisticated sensor modalities (LIDAR, SIGINT, synthetic aperture radar, hyperspectral cameras, etc.) in a single airborne pod. In such a configuration, each sensor contributes its collected data to a larger intelligence picture.
Developed by Logos Technologies, the platform-agnostic Multi-Modal Sensor Pod (MMSP) weighs less than 100 pounds and can house a WAMI system, hyperspectral imager, and a high-definition spotter (or a different combination of sensors, if so desired).
It’s a capability that has already caught the interest of the U.S. Department of Defense.
“The user can fly the MMSP over an area, and the system will characterize the entire landscape and anyone moving across it. The hyperspectral sensor detects spectral signatures of interest while the WAMI tracks movement and the spotter zooms in on those targets,” Rombough explains.
All of this is done in real time, with operators on the ground getting the combined sensor data before the aircraft even lands.
Converting it all into real-time intelligence
Of course, the sensors in an MMSP generate an enormous amount of data. A 100-megapixel WAMI system operating at two hertz churns out terabytes of imagery per hour, and then there is the data generated by the hyperspectral imager and high-definition spotter to consider. Processing all of that data in real time requires a powerful embedded computer.
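As a rough sanity check on that figure, the raw data rate follows from pixel count, frame rate, and bit depth. This is a back-of-the-envelope sketch under assumed values: the article states only the pixel count and frame rate, so the 16-bit, uncompressed samples here are an assumption, not a Logos specification.

```python
# Back-of-the-envelope data rate for a 100-megapixel sensor at two hertz.
# Assumption: 16 bits (2 bytes) per pixel, no compression.

PIXELS_PER_FRAME = 100e6    # 100-megapixel imager
FRAMES_PER_SECOND = 2       # two hertz
BYTES_PER_PIXEL = 2         # assumed 16-bit raw samples

bytes_per_second = PIXELS_PER_FRAME * FRAMES_PER_SECOND * BYTES_PER_PIXEL
terabytes_per_hour = bytes_per_second * 3600 / 1e12

print(f"{bytes_per_second / 1e6:.0f} MB/s -> {terabytes_per_hour:.2f} TB/hour")
# -> 400 MB/s -> 1.44 TB/hour
```

Even under these conservative assumptions, a single WAMI sensor produces on the order of 1.5 terabytes per hour, consistent with the claim above.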
The Multi-Modal Edge Processor, a customizable mix of hardware, software, and firmware, weighs only 33 oz. and fits in the palm of a hand. Yet it can chew through WAMI data at up to one billion pixels per second, LIDAR returns at up to six billion points per second, and hyperspectral data at up to three million spectra per second.
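Simple arithmetic (not a vendor benchmark) shows why that pixel rate matters: the quoted throughput comfortably exceeds what the 100-megapixel, two-hertz WAMI stream described above demands, leaving margin for the pod's other sensors.

```python
# Headroom check: can a one-gigapixel-per-second processor keep up
# with a 100-megapixel imager running at two hertz?

WAMI_PIXELS_PER_SECOND = 100e6 * 2    # 200 Mpx/s from the imager
PROCESSOR_PIXELS_PER_SECOND = 1e9     # quoted edge-processor rate

headroom = PROCESSOR_PIXELS_PER_SECOND / WAMI_PIXELS_PER_SECOND
print(f"Headroom: {headroom:.0f}x")
# -> Headroom: 5x
```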
Altogether, the lightweight WAMI system, the long-endurance aircraft, the platform-agnostic MMSP, and the edge processor provide a formidable ISR capability that, fielded in sufficient numbers, can convince near-peer powers not to attempt dangerous gambits.
“It’s all about being credible. Once they realize you can catch them in real time, and you’re prepared to respond, there is a greater chance of working the diplomatic channels,” Rombough notes.