As the adoption of artificial intelligence (AI) grows throughout government, awareness of the need to build and maintain AI systems with a clear understanding of their ethical risk has never been higher. Every day, these systems shape human experience, bringing issues of trust, privacy, equity, autonomy, data integrity, and regulatory compliance into focus. But how do agencies turn a commitment to abstract ethical AI principles into a fully operational responsible AI strategy—one that delivers not just transparency and reduced risk, but also innovation that improves mission performance?
As AI increasingly drives decisions that affect both individual lives and critical government missions, decision makers urgently need a data-driven way to understand whether their AI systems are ethical. Evaluating the ethical dimensions of an AI system is challenging, however, because it requires an organization to pull off the difficult feat of “quantifying the philosophical.”
Consider the many frameworks, principles, and policies that define the field of responsible AI—such as the Department of Defense’s (DOD) AI Ethical Principles, the Principles of Artificial Intelligence Ethics for the Intelligence Community, and the Blueprint for an AI Bill of Rights. These frameworks give agencies the overarching guidelines essential to defining an ethical vision, but they offer few of the tangible tools and little of the practical guidance needed to operationalize responsible AI.
What’s needed is a rigorous, risk-based method for assessing the ethical risk of AI systems—and a corresponding roadmap for taking continuous, concrete action to keep these systems fully and responsibly aligned with mission objectives.
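To make “quantifying the philosophical” a little more concrete, the minimal sketch below shows one hypothetical way a risk-based assessment might be structured in code: each ethical dimension receives an analyst-assigned severity and likelihood score, and the weighted results roll up into a single risk figure that can be tracked over time. The dimensions, weights, thresholds, and names such as `EthicalDimension` and `aggregate_risk` are illustrative assumptions, not a published methodology.

```python
from dataclasses import dataclass

@dataclass
class EthicalDimension:
    """One assessed dimension of AI ethical risk (illustrative, not a standard)."""
    name: str
    weight: float      # relative importance; weights across dimensions sum to 1.0
    severity: int      # assessed impact if harm occurs, 1 (low) to 5 (high)
    likelihood: int    # assessed probability of harm, 1 (rare) to 5 (frequent)

    def score(self) -> float:
        # Classic risk-matrix product (severity x likelihood), scaled to 0-1.
        return (self.severity * self.likelihood) / 25.0

def aggregate_risk(dimensions: list[EthicalDimension]) -> float:
    """Weighted ethical-risk score in [0, 1]; higher means riskier."""
    return sum(d.weight * d.score() for d in dimensions)

# Hypothetical assessment of a single AI system; all values are made up
# to illustrate the mechanics, not drawn from any real evaluation.
assessment = [
    EthicalDimension("privacy",        weight=0.25, severity=4, likelihood=3),
    EthicalDimension("equity",         weight=0.30, severity=5, likelihood=2),
    EthicalDimension("transparency",   weight=0.20, severity=3, likelihood=5),
    EthicalDimension("autonomy",       weight=0.15, severity=2, likelihood=2),
    EthicalDimension("data_integrity", weight=0.10, severity=3, likelihood=3),
]

risk = aggregate_risk(assessment)
print(f"Aggregate ethical risk: {risk:.2f}")  # 0.42 on this toy input

# An illustrative escalation threshold, chosen arbitrarily for the example.
if risk >= 0.40:
    print("Risk exceeds threshold: escalate for mitigation planning.")
```

The point of a rubric like this is not the specific numbers but the discipline it imposes: scoring each dimension separately forces explicit, reviewable judgments, and re-running the assessment as the system evolves turns an abstract ethical commitment into a continuous, data-driven practice.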