This article summarizes “Enhancing Trust in AI Through Industry Self-Governance,” published online in April 2021 in the Journal of the American Medical Informatics Association.
In an article for the Journal of the American Medical Informatics Association (JAMIA), Dr. Joachim Roski, Booz Allen health analytics leader; Dr. Ezekiel Maier, Booz Allen analytics leader; Dr. Kevin Vigilante, Booz Allen chief medical officer; Elizabeth Kane, Booz Allen health operations expert; and Dr. Michael Matheny of Vanderbilt University Medical Center present insights that organizations of industry stakeholders can use to adopt self-governance as they work to maintain trust in artificial intelligence (AI) and prevent an "AI winter."
Industry stakeholders see AI as critical for extracting insights and value from the ever-increasing amount of health and healthcare data. Organizations can use AI to synthesize information, support clinical decision making, develop interventions, and more—creating high expectations that AI technologies can effectively address nearly any health challenge. However, throughout the history of AI development, waves of enthusiasm have been followed by periods of disillusionment. During these AI winters, both investment in AI and adoption of best practices wane.
“To counter growing mistrust of AI solutions, the AI/health industry could implement similar self-governance processes, including certification/accreditation programs targeting AI developers and implementers. Such programs could promote standards and verify adherence in a way that balances effective AI risk mitigation with the need to continuously foster innovation.”
- “Enhancing Trust in AI Through Industry Self-Governance,” JAMIA, April 2021
Today, publicity around highly touted but underperforming AI solutions has placed the health sector at risk for another AI winter. To respond to this challenge, we propose that industry organizations consider implementing self-governance standards to better mitigate risks and encourage greater trust in AI capabilities.
Building on the National Academy of Medicine’s AI implementation lifecycle, we created a detailed organizational framework that identifies 10 groups of AI risks and 14 groups of mitigation practices across the four lifecycle phases. AI developers, implementers, and other stakeholders can use this analysis to guide collective, voluntary actions to select, establish, and track adherence to trust-enhancing AI standards.
Without industry self-governance, government agencies may act to institute their own compliance requirements. However, industries that have proactively defined, adopted, and implemented standards complementary to government regulation have reduced the urgency of public-sector action while making more appropriate use of available resources. Industry self-governance also enables exceptional agility in responding to evolving technologies and markets.
Several key success factors should be taken into account when pursuing self-governance. These include creating an industry-sanctioned certification and accreditation program. Just as important, self-governance succeeds only when stakeholders are confident that all standards and methods have been developed in coordination with consumers and patients, clinicians, AI developers, AI users, and other key parties.
While AI advancement continues with government support, there are also signs of a technology backlash, underscoring the need to mitigate AI-related risks. Government-led management of such public risks occurs in various ways; however, targeted, AI-specific legislation does not yet exist. Diverse organizations of health industry stakeholders could step in to help manage AI risks through self-governance. Adopting evidence-based risk mitigation practices across the industry could simultaneously promote and sustain user trust in AI, fending off the next AI winter.