Automating Radar Target Recognition with Simulated Data

The Challenge: Need for Simulated Target Data to Rapidly Train ML Models

As technological advances accelerate the pace of battlefield activity and decision making, it is increasingly critical that the Air Force automate the process of identifying enemy assets using data from sensors, signals intelligence, imagery, video, and other intelligence sources. A chief reason is that there are not nearly enough analysts available to manually analyze all the data being created. Moreover, to prevail on the modern battlefield, the Air Force must significantly compress the decision cycles between data collection and target identification and acquisition.

In 2021, the Air Force was working specifically on the challenge of automating target identification using synthetic aperture radar (SAR) images. Developed in the 1950s, SAR technologies make it possible to construct a detailed radar image of an object in real time. Currently, the task of identifying enemy assets from SAR images is highly manual and time-consuming.

The process can be largely automated with machine-learning models, which learn over time how to distinguish one object from another by analyzing available SAR data. However, it takes a great deal of data to train the models to accurately recognize objects. And there is far less SAR data available than, say, video or photographic imagery. 

For newer enemy weapons systems, the Department of Defense (DOD) may have few, if any, SAR images on hand. And even if a government agency possesses the needed SAR images, they can be hard to access due to organizational jurisdictions, bureaucratic silos, and security-clearance hurdles.  

The Approach: Using AI and Other Capabilities to Generate Simulated SAR Images

One answer is to generate simulated, or synthetic, SAR data to train the machine learning (ML) models. In June 2021, an Air Force client tasked Booz Allen with developing simulated SAR images of specific ground vehicle targets and training ML models to recognize and identify those targets.

Simulating data from radar sensors, such as SAR, is much more difficult than simulating data from optical sensors, such as a camera. Optical sensors collect data in the visible, near-infrared, and short-wave infrared portions of the electromagnetic spectrum. In contrast, radar sensors such as those used by SAR operate at much longer wavelengths of the spectrum, often referred to as bands, which give them special properties, such as the ability to see through clouds or forests.

Generating simulated data of this type and then training machine learning models to analyze the data require highly specialized expertise, tools, and resources that are not widely held. However, Booz Allen has advanced capabilities in all of those areas.

Within 5 months, the Booz Allen team developed a solution that generated the needed quantities of simulated images and trained algorithms on that data for an end-to-end automated target recognition system.

Moreover, the Booz Allen team did this using only 120 simulated SAR images for each target class—one image for every three degrees of a 360-degree view—which is a small fraction of the hundreds or thousands of images that are typically employed to train an automated target-recognition model.
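The angular sampling described above can be sketched as a simple calculation; the 3-degree spacing and 120-image total come from the case study, while the function name is illustrative:

```python
# Enumerate the azimuth angles at which simulated SAR images are rendered:
# one image every 3 degrees around a full 360-degree view of each target.

ANGULAR_STEP_DEG = 3  # spacing stated in the case study


def sampling_angles(step_deg: int = ANGULAR_STEP_DEG) -> list[int]:
    """Return the azimuth angles, in degrees, for one full rotation."""
    return list(range(0, 360, step_deg))


angles = sampling_angles()
print(len(angles))  # 120 simulated images per target class
```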

To develop this SAR simulation solution, Booz Allen brought together its industry-leading expertise in artificial intelligence (AI) and experience in Air Force mission areas. Using computer-aided design models of the relevant ground targets, the Booz Allen team simulated illuminating those ground vehicles with radar waves from many angles. This produced the simulated SAR images of the targets.

The team then supplemented those simulated images with real SAR images of the ground targets taken from an unclassified internet-accessible repository known as MSTAR. Finally, the team tested various combinations of real and synthetic SAR data on deep learning neural networks developed for radar data by the University of California, Berkeley.
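The case study does not detail how the real and synthetic images were combined, so the following is only a minimal sketch of one plausible mixing scheme for testing combinations of the two data sources; the function name, ratios, and sample counts are illustrative assumptions:

```python
import random


def mix_datasets(synthetic, real, real_fraction, seed=0):
    """Build a training set that combines all synthetic samples with a
    given fraction of the available real samples, one common way to
    probe how much real data is still needed once synthetic data is
    in the mix. The split strategy here is an assumption, not the
    method described in the case study."""
    rng = random.Random(seed)
    n_real = round(len(real) * real_fraction)
    subset = rng.sample(real, n_real)
    combined = list(synthetic) + subset
    rng.shuffle(combined)
    return combined


# Hypothetical example: 120 synthetic images plus 25% of 200 real images.
synthetic = [("synthetic", i) for i in range(120)]
real = [("real", i) for i in range(200)]
train = mix_datasets(synthetic, real, real_fraction=0.25)
print(len(train))  # 170 training samples
```

Sweeping `real_fraction` from 0 to 1 would let an experimenter compare model accuracy across real/synthetic blends, which is the kind of combination testing the text describes.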

The Solution: Synthetic Data Improves Automated Radar Target Recognition

By bringing all these capabilities together, Booz Allen generated the necessary simulated SAR images and trained machine learning models to identify five specific classes of ground targets with greater than 90% accuracy.

This effective SAR simulation solution was an important proof of concept for the Air Force client. It demonstrated that the processes, approaches, and algorithms employed by Booz Allen are highly capable not only of synthetically generating remote-sensing data, but also of integrating that simulated data into a machine-learning pipeline for specific automated target-recognition use cases.

Those capabilities are critical components of any end-to-end automated target-recognition solution. While certain components of the solution, such as the neural network algorithms used, are specific to SAR simulation, many other features lend themselves to generating a variety of simulated data, including data for other types of remote battlefield sensors, such as infrared and multispectral imagers.

This accomplishment builds upon many years of research and experience by Booz Allen in developing synthetic data for remote sensors, such as radar, and incorporating that capability into broader pipeline systems for practical, mission-supporting applications.

The views, opinions, and/or findings contained in this case study are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the Department of the Air Force or the Department of Defense. 
