Responsible AI, Quantified

Hear from Geoff Schaefer, who leads our responsible AI practice, on the state of AI ethics and the questions AI ethicists should make central to their work to improve outcomes.

Video Transcript

What does it mean to live a good life? How can AI help us flourish? These are questions that AI ethicists should make central to their work. We should consider an AI system’s potential benefits and risks in concert with one another. In fact, a more robust (and historically accurate) ethical calculus will focus on the net good that an AI system will generate over its lifespan.

As we think about the future of AI ethics, the field should emphasize three questions: First, what is the maximal good an AI system can do? Second, what are the potential risks in its design? And third, how can we mitigate those risks to achieve the maximal good? The order of these questions is intentional, as they shift our focus from harms to happiness and from failure to flourishing. This will help us open up new missions and needs for AI ethics to support.

After all, ethics was never about compliance. Nor was it simply about the difference between right and wrong. Rather, it addressed the overriding question of philosophy in ancient times: How can we be happy and flourish? Revisiting this ancient question will ensure that the future of AI ethics is bright, useful, and critical to the advancement of society. In other words, AI ethics can help us live lives that are, indeed, well-lived. The field is just getting started.

Video Transcript

Hi, I’m Geoff Schaefer. I lead our AI ethics and safety work at Booz Allen Hamilton. Today, I want to talk about the future of AI ethics. The field is still in its earliest, most formative days. This gives us a unique opportunity to shape its evolution, and I think there’s a compelling reason to rethink how we apply the tools, techniques, and concepts in AI ethics today.

Before we dive in, let’s set the stage. The field of AI ethics is in a paradoxical state. On the one hand, it has never been healthier. It is one of the fastest-growing subfields of AI. There has been a profusion of ethical guidance in the form of frameworks, principles, and toolkits from organizations big and small, public and private. The issues of bias, fairness, transparency, and other focal points have been cemented as domains in their own right. And it is now considered impolitic to establish an AI practice without an associated body of work in responsible AI.

As we think about the future of AI ethics, the field should emphasize three questions: First, what is the maximal good an AI system can do? Second, what are the potential risks in its design? And third, how can we mitigate those risks so that the maximal good can be achieved? The order of these questions is intentional, as they shift our focus from harms to happiness, and from failure to flourishing. This will help us open up new missions and needs for AI ethics to support. After all, ethics was never about compliance. Nor was it simply about the difference between right and wrong. Rather, it addressed the overriding question of philosophy in ancient times: How can we be happy and flourish?

Video Transcript

We discussed before that the most prominent questions in AI ethics are focused on mitigating risk and avoiding harm. Examples include ensuring the privacy of users, reducing bias in training data, and maximizing the explainability of a system’s outputs. But if we shift our attention from focusing exclusively on questions of harm to the concern of human flourishing, we’re able to expand our ethical calculus in helpful ways. And this has important implications for thinking about the types of AI systems that we should build and how we should build them.

Let’s demonstrate this by looking at a case study. Some of the most promising applications of AI can be found in healthcare, including precision medicine and early-stage diagnostics. Yet two of the most common AI ethics questions in this space are: How can we protect patient privacy? And how can we ensure our training data isn’t biased? While both of these questions are clearly very important, neither one directly addresses the principal goal of healthcare: reducing human suffering. To better align with this goal, what if we asked questions such as: How can we better treat rare diseases? Or, how can we prevent a patient from getting sick in the first place? These questions drive much of the science behind our most promising medical advances, but they rarely factor into our ethical calculus of the role AI plays in those same advances.

The AlphaFold AI system from DeepMind is a perfect example. This algorithm has been widely touted as having cracked biology’s 50-year challenge of protein folding. Proteins are the building blocks of life, but deciphering their structure (a process called “folding”) is notoriously difficult, often taking months or years, if it can be done at all. Yet understanding protein folding is essential for everything from developing medicine to understanding antibiotic resistance. And while there are hundreds of millions of proteins known to science, AlphaFold has already folded over 200 million of them. AlphaFold has rightly garnered the attention and admiration of the AI community and beyond. But we don’t celebrate it as an ethical algorithm. Why not? Given the criticality of protein folding to modern medicine, it stands to become one of the biggest drivers of human flourishing in history. And it will do so in many different ways, from the elimination of debilitating disease to the strengthening of our immune systems.

A more holistic and future-focused approach to AI ethics, then, would encourage the development of similar types of AI systems across a range of critical sectors and applications that impact human lives. In the case of AlphaFold and similar systems, this might include increasing access to their powerful technology to ensure they benefit all of society.

Coming full circle here: What does it mean to live a good life? How can AI help us flourish? These are questions that AI ethicists should make central to their work. We should consider an AI system’s potential benefits and its potential risks in concert with one another. In fact, a more robust (and historically accurate) ethical calculus will focus on the net good that an AI system will generate over its lifespan.

At Booz Allen, we believe that responsible AI practices enable our organization and clients to meet modern global challenges. We believe harnessing the power of AI—safely, securely, and transparently—is one of the great transformative opportunities of this generation. 

From Aristotle to Algorithms

At Booz Allen, we help agency leaders bridge the gap between Aristotle—the philosophical underpinnings of AI ethics—and algorithms, the operationalized AI applications that enable complex missions across the defense, national security, and civil sectors.

One key to realizing this modern approach to responsible AI is enabling decision-makers to measure the ethical risk of their AI systems systematically. With a quantitative scorecard of their systems’ “ethical surface area,” they can more effectively capitalize on proven strategies to de-risk and recalibrate those systems. This will not only ensure their AI ecosystem is measurably responsible but will also enhance the overall mission performance of their individual AI systems.
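As a purely illustrative sketch of the idea (the component names, weights, and 0-to-5 scale below are assumptions, not Booz Allen’s actual framework), a scorecard of this kind might roll per-component risk scores into a single surface-area figure that can be tracked over time as mitigations land:

    # Hypothetical sketch: rolling per-component ethical-risk scores
    # (0 = low risk, 5 = high risk) into one weighted "ethical surface
    # area" figure. Components, weights, and scale are illustrative.
    RISK_SCORES = {
        "training_data":  {"score": 4, "weight": 0.30},  # e.g., bias exposure
        "model_outputs":  {"score": 2, "weight": 0.25},  # e.g., explainability
        "user_interface": {"score": 1, "weight": 0.15},  # e.g., misuse potential
        "data_pipeline":  {"score": 3, "weight": 0.30},  # e.g., privacy handling
    }

    def ethical_surface_area(scores: dict) -> float:
        """Weighted average of component risk scores, normalized to 0-100."""
        total = sum(s["score"] * s["weight"] for s in scores.values())
        maximum = 5 * sum(s["weight"] for s in scores.values())
        return round(100 * total / maximum, 1)

    print(ethical_surface_area(RISK_SCORES))  # 55.0 -> flags mitigation priority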

Our practical, quantitative approach to responsible AI accelerates agencies’ progress from theoretical principles to concrete models and actions, enabling the design and deployment of AI systems for any mission in any sector.

A Practical, Quantitative Approach to Responsible AI

Industry-first ethical risk framework and criteria for ethical test and evaluation
Ethical X-ray of an AI system’s architecture
Deployment-focused evaluation to increase mission success
Actionable recommendations to reduce ethical risk
Different assessment types and timelines for unique mission needs
Validation that an AI system is ethically safe to operate

How an AI E/ATO Works

With an expanding suite of tools and solutions, including the Booz Allen AI Ethical Authority to Operate (AI E/ATO) Assessment, we provide agencies with a quantitative approach to operationalizing the responsible AI principles most relevant to their sector and mission.

1. Map the AI System’s Architecture:

First, an AI E/ATO assessment decomposes an AI system into its architectural components and analyzes the ethical risk unique to each one.
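For illustration only (the component and risk names below are hypothetical, not the actual E/ATO decomposition), the inventory this step produces might look like the following:

    # Illustrative sketch of step 1: decomposing an AI system into
    # architectural components, each annotated with the ethical risks
    # it can introduce. Names are hypothetical examples.
    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        ethical_risks: list[str] = field(default_factory=list)

    system = [
        Component("data_ingestion", ["privacy leakage", "unconsented collection"]),
        Component("training_data", ["sampling bias", "label bias"]),
        Component("model", ["opaque decision logic"]),
        Component("output_interface", ["automation bias in end users"]),
    ]

    for component in system:
        print(f"{component.name}: {', '.join(component.ethical_risks)}")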

2. Identify the Strategic Vulnerabilities:

Second, it zooms out to provide a more strategic view of the external forces and interconnected systems that will influence the design and operation of the AI system.
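A minimal, hypothetical sketch of the strategic view this step produces (all entries invented for illustration):

    # Illustrative sketch of step 2: cataloging the external forces and
    # interconnected systems that shape an AI system's ethical risk.
    # All entries are hypothetical examples.
    STRATEGIC_VIEW = {
        "upstream_dependencies": ["third-party data broker", "pretrained base model"],
        "downstream_consumers": ["case-management system", "human review workflow"],
        "external_forces": ["evolving regulation", "adversarial users"],
    }

    for dimension, factors in STRATEGIC_VIEW.items():
        print(f"{dimension}: {', '.join(factors)}")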

3. Conduct Ethical Risk-Scoring:

Third, it quantifies these two dimensions using a custom Ethical Risk Framework that’s tied directly to the organization’s ethical principles. 
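One hedged sketch of what that quantification could look like: each finding is scored for severity and likelihood, then weighted by the principle it touches. The principles, weights, and findings below are invented for illustration, not the actual Ethical Risk Framework.

    # Illustrative sketch of step 3: scoring findings from the two
    # dimensions (architectural and strategic) against weighted ethical
    # principles. Principles, weights, and findings are hypothetical.
    PRINCIPLE_WEIGHTS = {"fairness": 0.40, "privacy": 0.35, "transparency": 0.25}

    # Each finding: (principle touched, severity 1-5, likelihood 0-1)
    findings = [
        ("fairness", 4, 0.6),      # architectural: biased training data
        ("privacy", 3, 0.4),       # strategic: upstream data broker
        ("transparency", 2, 0.7),  # architectural: opaque model internals
    ]

    risk_by_principle = {p: 0.0 for p in PRINCIPLE_WEIGHTS}
    for principle, severity, likelihood in findings:
        risk_by_principle[principle] += severity * likelihood * PRINCIPLE_WEIGHTS[principle]

    for principle, score in sorted(risk_by_principle.items(), key=lambda kv: -kv[1]):
        print(f"{principle}: {score:.2f}")  # fairness: 0.96, privacy: 0.42, ...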

4. Stress-Test Its Ethical Robustness:

Finally, it provides a set of custom ethical test and evaluation (T&E) criteria to stress-test an AI system’s alignment to the organization’s ethical principles.
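Expressed in code, such criteria can take the form of measurable pass/fail thresholds. The metric names and limits below are invented examples of the general idea, not Booz Allen’s actual T&E criteria:

    # Illustrative sketch of step 4: ethical T&E criteria as measurable
    # pass/fail checks. Metrics and thresholds are invented examples.
    TE_CRITERIA = {
        "demographic_parity_gap": ("<=", 0.05),  # fairness principle
        "pii_leakage_rate": ("<=", 0.00),        # privacy principle
        "explanation_coverage": (">=", 0.90),    # transparency principle
    }

    def evaluate(measured: dict) -> bool:
        all_passed = True
        for metric, (op, limit) in TE_CRITERIA.items():
            value = measured[metric]
            passed = value <= limit if op == "<=" else value >= limit
            print(f"{metric}: {value} ({'PASS' if passed else 'FAIL'})")
            all_passed = all_passed and passed
        return all_passed

    evaluate({"demographic_parity_gap": 0.03,
              "pii_leakage_rate": 0.00,
              "explanation_coverage": 0.87})  # transparency check fails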

Outcomes

A detailed mapping of your AI system’s ethical risk
A holistic view of the sources, types, and factors driving that risk
A quantifiable scorecard that turns your ethical principles into action-focused guidance
Measurable, ethics-focused criteria to embed in your T&E plan

Driving Responsible Innovation with Quantitative Confidence

Whatever the governing principles, policies, and compliance standards, Booz Allen helps agencies quantify the real-world human impact of their AI systems and put ethical principles into practice. This support makes it easier to build and deploy measurably responsible AI systems with confidence.

But it’s important to remember that a best-in-class approach to responsible AI doesn’t stop at identifying ethical risk, critical as that is. That’s why Booz Allen helps agencies take the additional step of turning ethical mitigation into an opportunity: making technical improvements to existing systems and surfacing insights that lead to new ideas for how AI systems are designed and used.

Here’s how: The rich insights we uncover during our ethical and compliance analysis provide a roadmap for designing and redesigning systems so our clients can achieve their critical mission goals more effectively. By assessing and reducing ethical risks, especially in alignment with the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, we can help agencies capture these performance gains and seize opportunities to innovate in ways that eliminate other critical risks to mission success.
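For example, each finding from an ethical risk assessment can be tracked against the four core functions of NIST’s AI RMF (Govern, Map, Measure, Manage). A minimal sketch, with invented findings and actions:

    # Minimal sketch: tagging assessment findings with the four NIST AI
    # RMF core functions (Govern, Map, Measure, Manage). The findings
    # and actions are invented examples.
    RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

    findings = [
        {"risk": "biased training data", "rmf": "measure",
         "action": "add subgroup performance tests to the T&E plan"},
        {"risk": "unclear accountability for model updates", "rmf": "govern",
         "action": "assign a model owner and a review cadence"},
        {"risk": "privacy exposure in the data pipeline", "rmf": "manage",
         "action": "minimize and encrypt collected fields"},
    ]

    for finding in findings:
        assert finding["rmf"] in RMF_FUNCTIONS
        print(f"[{finding['rmf'].upper()}] {finding['risk']} -> {finding['action']}")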


Explore AI Insights for Federal Innovators

Velocity, an annual Booz Allen publication, studies the complex issues that are emerging for mission and technology leaders on the front lines of government innovation.

Read this year's cover story, "The Age of Principled AI," to learn more about how agencies can optimize the ethical rigor and real-world mission performance of their AI systems.

Contact us to learn more about responsible AI