As artificial intelligence (AI) systems become increasingly important to national defense, intelligence, and citizen services, the nation's exposure to manipulation of those systems also grows. Damaging attacks against AI systems are no longer theoretical: adversaries ranging from individual bad actors to nation-states seeking to challenge U.S. interests and ideals are launching them against commercial and government entities.
Creating a better and safer future through AI requires the nation to secure its AI systems against a real and evolving set of adversarial cyberattacks. In the words of the National Security Commission on Artificial Intelligence, “Adversaries may target the data sets, algorithms, or models that an ML system uses in order to deceive and manipulate their calculations, steal data appearing in training sets, compromise their operation, and render them ineffective.”
As the single largest provider of AI services to the federal government, Booz Allen works closely with implementers, researchers, and leaders across the government to build, deploy, and field machine learning (ML) algorithms that deliver mission advantage and are resilient to adversarial attack. Using techniques such as differential privacy, adversarial training, red teaming, and operational monitoring, we help the government realize the benefits of AI systems while thwarting cyberattacks.
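To make one of the techniques above concrete, the sketch below shows the core idea of adversarial training on a toy problem: during each training step, the model is also fit against inputs perturbed by the Fast Gradient Sign Method (FGSM), so it learns to resist bounded manipulation. This is a minimal illustration using a NumPy logistic-regression model with synthetic data; it is not Booz Allen tooling, and all names and values are illustrative assumptions.

```python
# Minimal adversarial-training sketch (illustrative only).
# Toy logistic-regression model hardened with FGSM perturbations.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fgsm(w, b, X, y, eps):
    """Fast Gradient Sign Method: nudge each input in the direction
    that increases the logistic loss, bounded coordinate-wise by eps."""
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)  # d(loss)/d(x) for logistic loss
    return X + eps * np.sign(grad_x)

# Adversarial training loop: fit on a mix of clean and perturbed examples.
w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.3
for _ in range(200):
    X_adv = fgsm(w, b, X, y, eps)
    X_batch = np.vstack([X, X_adv])
    y_batch = np.concatenate([y, y])
    p = sigmoid(X_batch @ w + b)
    w -= lr * X_batch.T @ (p - y_batch) / len(y_batch)
    b -= lr * np.mean(p - y_batch)

# The hardened model should classify clean data well and hold up
# against eps-bounded perturbations of its inputs.
clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
adv_acc = np.mean((sigmoid(fgsm(w, b, X, y, eps) @ w + b) > 0.5) == y)
```

In practice the same pattern applies to deep networks: the attack generates worst-case inputs inside a perturbation budget, and the training objective mixes clean and adversarial losses so the deployed model degrades gracefully under manipulation.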