Booz Allen technology leader Andrew Savala thrives on change. The technology he works with evolves at such a blistering pace that a life of learning is required. From founding a small tech company focused on payment kiosks to orchestrating sophisticated multi-agent artificial intelligence (AI) systems, Andrew's story shows how Booz Allen's people are always evolving, looking for new ways to expand our impact in the AI era.
I was a software engineer out of college, writing software for embedded hardware used in remote alarm monitoring. Eventually, I went into business for myself. I had a small company, and we were coders for hire. The niche we fell into—it wasn’t intentional—was writing software for self-service payment kiosks. Our clients were mainly municipal governments, and the software drove devices like bill acceptors so people could pay parking tickets. Because this was my business, I had to wear many hats: I did software engineering but also figured out sales and marketing. Later, I was a fractional chief technology officer, consulting with startups on their technology needs and the viability of their ideas. I joined Booz Allen in 2024 as an artificial intelligence solution architect, architecting and engineering generative AI solutions for our customers.
It was helpful to realize early on that the end goal is not just to build an amazing thing. It’s to take an idea and build something that positively affects people’s lives. I learned to work with clients to understand their pain points. I would ask myself, “Okay, this thing we’re building, if I could get a genie to instantly materialize it, so what? What problem is it solving? Who’s going to buy it today? Who cares if we build it?” It’s how I made sure we were solving a real problem people cared about and not just building cool tech. Today, this same mindset helps me and my team think through how agents should work in our agentic solutions.
I was initially hired to do generative AI work within the Chief Technology Office. We were experimenting with agents, but the language models were still too immature to build production-ready agents. Since then, the language models have evolved, and today I’m focused on engineering agentic AI solutions. It was a natural step to move from building generative AI to developing autonomous AI agents capable of reasoning about their environment and making advanced decisions. I have a solutions architect role, too, so I help design which pieces of cloud technology we’re going to use and how we build a solution.
Definitely the people. I like the team I work with. Like me, they’re very excited about agentic AI. It’s the hottest topic in AI now. It’s fun to work with people who are excited about what they’re doing, and I genuinely feel that they’re rooting for me and the success of our team. It’s very positive.
There are two things. First, the fun part of the problem—and part of what attracted me to it—is that we’re working with large language models (LLMs). If you know LLMs, you know they don’t behave the same way every time they’re prompted; they are non-deterministic. The second, which is related, is having to be good at debugging and troubleshooting. It’s tempting to only look at the surface-level agent output—people refer to it as “vibe checking”—but it’s important to look at your data and understand why the AI is doing what it’s doing and what path it’s taking. Digging deeper, asking tough questions, and investigating aren’t the glamorous parts of the job, but those steps are critical to systematically improving agents and ensuring their reliability in production environments.
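Going beyond "vibe checking" can be as simple as asserting properties of an agent's recorded trace instead of eyeballing its final answer. The sketch below is a hypothetical illustration—the trace format and tool names are assumptions, not any particular framework's schema:

```python
def evaluate_trace(trace, expected_tools):
    """Check an agent run's trace (a list of tool-call steps) and
    return a list of issues found; an empty list means it looks healthy."""
    issues = []
    called = [step["tool"] for step in trace]
    # Did the agent actually take the path we expect, or skip a step?
    for tool in expected_tools:
        if tool not in called:
            issues.append(f"expected tool '{tool}' was never called")
    # Surface tool failures that a surface-level output check would hide.
    for step in trace:
        if step.get("error"):
            issues.append(f"tool '{step['tool']}' failed: {step['error']}")
    return issues
```

Because LLMs are non-deterministic, checks like these are typically run over many sampled runs to measure how often the agent takes a sound path, not just whether one run looked good.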
I think it’s this idea of AI taking everybody’s job. It’s more helpful to recognize how important it is to leverage AI and to be someone who knows how to use the tools. AI cannot yet be fully trusted, so it will remain imperative for people to validate its output. We’re not going to let AI design our bridges without double-checking that it’s right, so we cannot afford intellectual laziness.
I think both have their place. Human-in-the-loop is where the human is a link in the communication chain. The AI can’t bypass humans; it must go through a human reviewer who validates the AI’s decisions at critical checkpoints. Then there’s human-on-the-loop, where the human is essentially supervising: the AI runs autonomously, but humans can step in if they want. With autonomous systems like coding agents, having the human sit on the loop and watch what the agents are doing is useful, with the ability to redirect or stop them as needed. But there are times when, for safety or quality control, it’s only responsible to have the human in the loop and require the AI output to pass through human reviewers.
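The distinction between the two patterns can be sketched in a few lines of Python. This is a minimal illustration under assumed names (`AgentAction`, the `risk` tag, the `approve` callback are all hypothetical), not a real oversight framework:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class AgentAction:
    description: str
    risk: str  # hypothetical tag: "low" or "high"

def run_in_the_loop(actions: List[AgentAction],
                    approve: Callable[[AgentAction], bool]) -> List[Tuple[str, str]]:
    """Human-in-the-loop: the AI cannot bypass the reviewer; every
    high-risk action is blocked unless a human approves it first."""
    results = []
    for action in actions:
        if action.risk == "high" and not approve(action):
            results.append((action.description, "blocked"))
        else:
            results.append((action.description, "executed"))
    return results

def run_on_the_loop(actions: List[AgentAction]) -> List[Tuple[str, str]]:
    """Human-on-the-loop: everything runs autonomously; the human
    supervises the resulting log and can intervene after the fact."""
    return [(action.description, "executed") for action in actions]
```

In the in-the-loop variant, the `approve` callback is the checkpoint where a real reviewer would sit; in the on-the-loop variant, the returned log is what the supervising human would watch, with redirect/stop controls layered on top.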
It depends on where you are in your journey. Someone who has a non-technical role will likely have a bigger chasm to cross. If you’re not in engineering, learning a programming language, typically Python, would be a good first step. Fortunately, today, you do not have to be an expert coder because AI coding tools are getting really good. I would say learn an AI coding tool like Cursor, Windsurf, or Claude Code. They are the prominent ones today that will give you superpowers when it comes to the actual coding and being able to build rapidly. Also get really good at using LLMs like Claude, ChatGPT, and Gemini. One of the best ways to do that is to find a reason to use it at some point every day. Give yourself practice prompting. Tell it your plan and ask it, “Do you see any holes in my approach to solving this problem?” The more you do this, the more you’ll naturally see how to use it. It’s an art form, not unlike prompting Google to get the right answers, but more conversational in nature.
Being part of the tech industry means the world moves fast. With AI, it’s moving even faster. To me, it’s fun being a continuous learner and solving new challenges. I make it a point to learn something new every day. My go-to for learning about the latest developments in AI is usually YouTube or a podcast. I’m typically listening to something while exercising or driving—inevitably, someone will say something I don’t know. I find myself picking up a lot of new knowledge this way.
What excites me most is using AI to help reduce—I’d like to say “eliminate”—human toil. Using AI to do the things that are repetitive and that people don’t want to be doing. Essentially, empowering people to work on more creative, nuanced things and tougher, bigger problems. I want to give people more time to do work that they care about, serve our nation, and support our customers.
One is a multi-agent system for IT incident triage. A supervisor agent is responsible for responding to IT incidents and generating a report for the operations teams. Incidents are ticketed in a system, and the supervisor agent responds in real time as tickets are created, orchestrating a team of supporting agents that pull in additional context on the problem, perform network investigations, comb through log files, and assess the impact. Using networking tools and log analysis, the agents diagnose the root cause of the issue. The system prioritizes incidents based on the organization’s policies and escalates each ticket to the appropriate engineer. Another is around insider-threat investigation: AI agents search through log files and help the human investigator analyze disparate data sources, pull together pertinent information, and generate actionable intelligence insights.
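The supervisor/worker shape of the triage system can be sketched as follows. All names here (the worker functions, ticket fields, and the priority policy) are hypothetical stand-ins for illustration; in the real system each worker would be an LLM-backed agent rather than a stub function:

```python
# Hypothetical worker agents: each investigates one aspect of an incident.
def network_agent(ticket):
    return f"network check for {ticket['id']}: stub finding"

def log_agent(ticket):
    return f"log scan for {ticket['id']}: stub finding"

def impact_agent(ticket):
    return f"impact assessment for {ticket['id']}: stub finding"

WORKERS = [network_agent, log_agent, impact_agent]

# Stand-in for the organization's prioritization policy.
PRIORITY_POLICY = {"outage": "P1", "degradation": "P2", "cosmetic": "P4"}

def supervisor(ticket):
    """Orchestrate the worker agents over a new ticket, then
    prioritize per policy and decide where to escalate."""
    findings = [worker(ticket) for worker in WORKERS]
    priority = PRIORITY_POLICY.get(ticket["category"], "P3")
    return {
        "ticket": ticket["id"],
        "priority": priority,
        "findings": findings,
        "escalate_to": "on-call engineer" if priority in ("P1", "P2") else "queue",
    }
```

The key design point from the description above is that the supervisor owns the workflow—fan out to specialists, gather findings, apply policy—while the workers stay narrow and composable.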
Make AI part of your daily workflow now. The people who thrive next won’t be those who avoid AI but those who instinctively know when and how to leverage it. That intuition only comes from consistent daily practice.