How AI will unfold is uncertain, but we do know that this is the AI moment and that the future is bright. AI will become part of everything, from accelerating the economy to revealing scientific breakthroughs. That's why, for this year's edition of Velocity Magazine, we are placing the spotlight on AI with timely insights at the intersection of mission and technology. In this AI issue, experts from Booz Allen and across the industry explore fundamental challenges and opportunities as we harness emerging AI capabilities for government. I'm excited to be sitting here today with John Larson, who leads our AI practice, to talk about some of the major issues covered in Velocity at this remarkable moment of transformation. Hello, John. It's great to have you here. How are you today?
I'm doing great. It's wonderful to be with you today.
Thank you. So let's dive in. There's been so much hype around AI these days, and in particular generative AI, yet the government has been doing this for years. So what's different today? What's so transformative?

What I think about is that in the past, most of us have been consumers of AI, right? We didn't get to actually do it ourselves.
You had to be a practitioner. You had to be a coder. You had to be a mathematician. What's happened now is that generative AI, through its language interface, through prompt engineering, has allowed all of us to be generators of AI, to engage with AI in a way that we never could before. And what's exciting is when you think about the government mission. The government has this amazing obligation to serve the citizenry, and in doing so, it has to take complex policies and help citizens navigate those policies to get services and meet requirements, whatever they may be. Generative AI is going to help speed and accelerate that. So that's one way in which it's truly going to change things: on the government side, providing those services more effectively and impactfully in meeting their missions, and on the citizens' side, where the ability to consume and interact with the government is going to change. And the last thing I think about is all of the data the government gathers and what it does with that data to shape policies and inform missions. The ability to interact with that data through a generative framework is going to allow for deeper insights, and for more individuals to achieve those insights, to query data, to build tables, to analyze it. That's what's really exciting and why what we're seeing today is fundamentally different.

There is, of course, a lot of apprehension about the application of AI in industry and in the federal government. What is causing this fear and anxiety today?

We tend to fear things that disrupt our work lives, right? There's a lot of fear around how this is going to change employment. And the thing I think about a lot is that the question isn't whether it will create disruption. Will there be displacement? Yes, there will. But it's important to remember that it's going to change how we all work.
I think the more we embrace it, the more productive it's going to make us. It's going to allow us to focus on things that are intellectually more interesting and challenging, and it's going to cognitively offload some of the repetitive tasks. Overall, on balance, I think the good is going to outweigh the bad. And I think with the right frameworks, we can ensure that we harness the power of this technology for good while we mitigate those downside risks.
Sounds like a lot of education and awareness needs to happen in the meantime.

A lot of education, a lot of awareness, and I think a lot of evaluation to understand the risks and the benefits.

So, John, you touched briefly on the idea of responsible AI, and I know this is the cover story in Velocity. Can you say more about how you're thinking about responsible AI and what the journey is here?
Responsible AI is about harnessing the power of AI and ensuring that innovation can take place within a framework commensurate with our core values and democratic principles. What we're looking at from that framework is an evaluation of the respective risks. How do you quantify those risks? What is the mission that the AI is being pointed at? How do you quantify that and understand the value it creates? Then it's a classic benefit-cost analysis: here are the risks, what is the probability of each, and what is its impact? And on the other side, what is the impact of the AI model, the good it's going to do, the value it's going to create? Then you weigh those two things against each other. When you do that, you're able to evaluate the merits of that AI in the context of its application.

The notion of responsible AI is discussed conceptually, but what are we really doing here? What are the mechanisms? What are the controls? What are the guardrails? How is it really applied in an industry setting?

Yeah, it's a great question, because there's a theory, and you go from theory to practice to application. When we think about it, we're thinking at that integration layer. When you build these models, you go through a process called AI Ops: the process from data ingestion, to the models, to the deployment of those models, the learning of those models, and then the reintegration. It's this entire environment. It's in that AI Ops process that you need to integrate those responsible AI frameworks, bringing them down to that level and ingraining them in the actual process along the way.
So when you think about things like data ops, you want to ingrain data bias detection in that process and operationalize it, so that when you run the model and you're developing this data, it's already looking at and trying to understand: are there data biases I need to be worried about that will propagate through the model and have negative impacts? Then when you get to model training, you're integrating the guardrails that you want to adhere to, and the model operations framework will ensure that the guidelines you've defined are embedded, so that the model will learn within the bounding box you've created. And then as you look at the results, you're embedding model drift detection to ensure that the model continues to do what it was intended to do, and that you can monitor its results and ensure it doesn't start to manifest any bias. That's how we think about it, from theory to practice to application.
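The risk weighing and pipeline checks described above can be sketched in a few lines of Python. Everything here, the function names, the thresholds, and the numbers, is an illustrative assumption for the sake of the sketch, not Booz Allen's actual framework.

```python
from statistics import mean

def expected_risk(risks):
    """Probability-weighted impact, summed over identified risks."""
    return sum(p * impact for p, impact in risks)

def net_benefit(mission_value, risks):
    """The classic benefit-cost weighing: mission value minus expected risk."""
    return mission_value - expected_risk(risks)

def check_data_bias(outcomes_by_group, max_gap=0.1):
    """Data-ops stage: pass only if group outcome rates stay within max_gap."""
    rates = [mean(v) for v in outcomes_by_group.values()]
    return max(rates) - min(rates) <= max_gap

def clip_to_guardrail(prediction, lo=0.0, hi=1.0):
    """Training stage: keep model output inside the defined bounding box."""
    return min(max(prediction, lo), hi)

def detect_drift(baseline_scores, live_scores, tolerance=0.05):
    """Monitoring stage: flag drift when live results move past tolerance."""
    return abs(mean(live_scores) - mean(baseline_scores)) > tolerance

# Illustrative run for a hypothetical citizen-services model.
print(net_benefit(100.0, [(0.10, 50.0), (0.01, 400.0)]))  # 91.0
print(check_data_bias({"group_a": [1, 0, 1, 1], "group_b": [1, 0, 1, 0]}))  # False (gap 0.25)
print(clip_to_guardrail(1.7))  # 1.0
print(detect_drift([0.70, 0.72], [0.60, 0.62]))  # True
```

In practice each check would run automatically inside its pipeline stage (ingestion, training, monitoring) rather than as standalone calls, which is the operationalization John describes.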
So, John, as we experience AI transformation, what do you think we can learn from disruptions of the past?
I think you can look at history and see how these major innovations and transformations at first seem disruptive, but over time they become ubiquitous, and you start to use them every day in every way. I think that's what's going to happen with AI. It's probably going to happen faster, and at a scale we can't comprehend from these prior technologies. But that's where I think we're heading, and it's going to be an incredibly exciting time. I could not think of a better time to be in this space.
So, John, thank you so much. I really appreciate the time we spent today. Amazing conversation. In this era of great change, the way we approach, build, and deploy AI for the mission today will set the stage for responsible innovation, and I can only start to imagine where we'll go with the power of AI for government and across society.
This conversation is just a glimpse into those possibilities. For in-depth insights at the intersection of mission and technology, explore the AI issue of Velocity at BoozAllen.com/Velocity.