Hey, good evening, everyone, and thank you all for being here. As they said, my name is Susan Penfield and I'm the firm's chief technology officer. And I want to welcome you to Helix, which is our Center for Innovation. We opened about a year ago, we're really proud of the space, and a lot of our amazing clients and all of our teams come through here on a regular basis. So welcome to tonight's program, The Age of Principled AI. We are excited to be hosting this event in partnership with Fast Company about a topic that I think is near and dear, certainly to all of our hearts, but certainly near and dear to Booz Allen as well. AI has been a uniquely powerful tool for many years now within industry and in the federal government. But this past year has clearly been a turning point for the ubiquity of AI in our lives, at work and at home. And this conversation about responsible AI is becoming timelier and more pressing. Just last week, as I know you're all aware, the White House signed the executive order on safe, secure and trustworthy artificial intelligence. With this, federal agencies now have detailed guiding principles and actions to put into place as they explore the emerging uses of AI. Across society, we are seeing organizations harness the power of AI to unravel puzzles in all sectors. While the future of AI is absolutely bright, this turning point comes with questions that we all need to address in order to transform with purpose. How do we control and govern AI? How do we mitigate unintended consequences? And who has a role in ensuring that we get this right as an industry? These are very important questions to consider in the context of critical government missions. For the federal innovation community, the consequences of bias, of errors, of cybersecurity threats and ethical breaches are not just bad for business.
They impact our national security and may affect whether the American public has access to vital services and entitlements. The paradigm for government AI adoption, therefore, must be different, and in fact it must set the standard for other industries. How we approach, build and deploy AI today for the mission will set us on a path to responsible, accelerated and long-term impact. As AI pushes the boundaries of what was once thought possible, we need to harness, not hinder, the immense opportunity. But we need to do so with practical, scalable and technical approaches to responsible AI for the mission. I hope you will find in tonight's discussion that the age of principled AI isn't theoretical or philosophical, but is truly made possible by a collective commitment to real standards, real parameters and governance that is already starting to take hold across our industry. So without further ado, I would like to transition to our main event, where we are convening voices from across government and industry to explore responsible AI in action. So let's please welcome our panelists to the stage. Come on up. Come on up. Come on up. Thank you. And I will just take a minute to introduce each of you. Our moderator for this evening is Greg Lindsay. Greg is a contributing writer for Fast Company, an author, a journalist, a fellow at Cornell Tech's Jacobs Institute, and a frequent speaker on all things emerging technology. And, oh, by the way, he went undefeated on Jeopardy against IBM Watson. Pretty incredible, right? And I'm also pleased to welcome our panelists. First, Navrina Singh, the founder and CEO of Credo AI. Her company is on a mission to empower enterprises to deliver responsible AI. She was appointed to the National AI Advisory Committee for the Department of Commerce. She is an executive board member of Mozilla and an advisor on the World Economic Forum Council for AI Policies. And this list could go on and on, Navrina. So please welcome Navrina. Thank you.
She's an amazing woman. Okay. Dr. Matthew Johnson, a senior technical adviser in responsible AI at the DOD's Chief Data and Artificial Intelligence Office, fondly known as the CDAO in our group. His team is putting AI principles into practice across the DOD, and he has a fascinating background in philosophy and cognitive science. Welcome, Matthew. Next, John Beezer. John is a senior advisor for the Senate Committee on Commerce, Science and Transportation. There he focuses on digital media, AI and other key areas for the committee, and brings his extensive background in private-sector innovation and product development. Welcome, John. And last but not least, Randal Meyer, chief counsel and legislative director for Representative Nancy Mace. He brings to this discussion an extensive background in government, as legislative counsel to Senator Rand Paul, and in the private sector. And after the panel, you will hear from my great colleague, John Larson, executive vice president, a member of the chief technology office, and also the leader of our AI practice here at Booz Allen. John also serves on the advisory boards for General Assembly's AI and Data Science programs, as well as for the AI Education Project. Thank you all for being here. I turn it over to you, Greg, and I look forward to an amazing discussion. Thank you. Thank you so much, Susan. On behalf of the Fast Company team, it's a pleasure to be hosting this tonight in partnership with our friends at Booz Allen Hamilton. And it's a pleasure to be here. And yes, as you can tell, they asked the guy who's played game shows against AI to be the host on behalf of the magazine. So, so much expertise from the magazine here. But more seriously, it's a pleasure. And I know Booz Allen Hamilton's been working on this event for a long time, involved in planning it. And I must salute them and their impeccable sense of timing with the White House executive order. And let's start there.
Let's EO, let's go. And I want to start with our Hill representatives here. Obviously, the White House put out the executive order not in consultation with Congress, or at least only to an extent. But I want to start with John on this: the EO obviously has incredible ramifications and provisions, across what it means for the Cabinet departments, which we'll discuss, but also various aspects of American life beyond national security, etc. What does it mean for you and your work, and what does it mean to actually have this sort of effort at creating a coherent national policy around AI? The short answer is a lot. And you mentioned it was done without consultation, and that's true. Nobody ever called me up and said, what should we put in it? But there are parts of it that read like transcripts of calls we've had with the White House. So I don't think it was entirely created in a vacuum. Obviously, it's huge. We had a little bit of a debate. There was a statement made, something like, this is the most substantial or the most extensive executive order, possibly the most pages ever. I did check around. There have been more significant executive orders. So it is not the most significant, but it's up there. It's a big deal for me personally. Right up front, it addresses safety and security and puts a lot of the weight on the Department of Commerce, NIST and OSTP, which are all under Commerce, Science and Transportation jurisdiction. So there's a pretty substantial chunk that's national security and defense. That's not us. But there's a pretty big chunk that's consumer oriented. That is us. So we're very pleased to see it. You know, I think there's naturally rivalry between the branches, and we would love to have a comprehensive AI bill out, passed and being implemented as we speak, but we were not really designed for that, nor do we have the capability to do that.
So I think we are moving fast. We will move quickly. You know, I came into this game working on privacy, so I come by my skepticism honestly. I'm not convinced we will have a bill by the middle of next year, but it's not out of the question. How's that? That's great. Well, as I say, one valence of this question is: finally, we're a year into ChatGPT, and now we finally have an effort at a coherent national policy. And the flip side of that is: this is too much, too soon. We are now trying to regulate something that has barely been born. Randal, I'd love to hear your thoughts on this one, because I imagine you're on that side of the camp. So the executive order touches on a lot of important areas. We are talking about transparency, we're talking about national security. But there are also a number of areas where you can see in the legal architecture of it that it's pulling from statutes that aren't exactly designed for this. There's one portion that pulls from the Defense Production Act, which is a 1950 statute that was meant to convert refrigerator factories into bomb-making factories. That's not the best statutory authority to use for ordinary governance of how something should work in the ordinary regulatory process. So some components of the order speak to me about the need for serious statutory authorities to come through Congress, so there is actually a well-adapted baseline from which the agencies can work and improve on the order in the future, especially when it comes to the next rulemaking processes for carrying out an EO of that size in particular. So I think we're going to be running into some tension with the statutory authorities.
I think that some of those areas can even open up potential judicial challenges, and it really highlights the need for getting a product out of Congress that has bipartisan consensus to provide a very strong foundation for what the next phase of regulation looks like. Hey, if I could really quickly: I agree with that. Great. We're off to a great start here. Congress united. Yes. Round of applause. I love it. All right. Oh, you're good. All right, well, good. I'm glad we got some comity on the Hill over here, but I want to go to who was consulted. Obviously, Navrina is not going to talk about her work with NAIAC on the EO. But obviously, you've been dealing with these issues for a while. We were discussing beforehand that you had team members at Bletchley Park for the UK aspect of this. What parts of the EO jump out for you? Where do we go from here? And particularly with the debates that are happening in the UK, that are happening in Canada where I reside, etc., what is happening more broadly in this moment for regulating AI? Yeah. So, you know, it's interesting, because we were in the conversations for the past two and a half years. I am not representing the National AI Advisory Committee on this panel today, but as you can imagine, I'm actually really excited to see the EO come out. And the reason for that is because, as a technologist who's built products for the past 20 years, we always embrace this experimentation and iterative approach. You saw that with ChatGPT, where the world was a test bed; we threw it out in the market and just hoped it would work. I think we are seeing some semblance of that with policymaking, which is needed. I think this is the first time you're going to see a lot of adaptive policymaking in action. The thing I like about the EO is that I think it's an extension of, first, the AI Bill of Rights, which was put out last year.
And so there's a big focus on people, policy, and then procurement, which is an angle I'm really excited about, because there are certain levers here; investment and procurement are two great levers to actually make change happen. And so the OMB policy, which I think is still out as a draft (we are looking for everyone's feedback), is going to be a really critical mechanism to actually put a lot of these transparency requirements, impact assessment requirements, independent audit requirements and risk management frameworks in place. So, to answer your question, Greg, on what's happening in the global ecosystem: as a very young startup, we have in the past three and a half years actually been involved at the global scale. We're very involved with the EU AI Act, which, as you can imagine, is a very risk-based approach that the European Commission is taking. We've been very involved with Singapore, Canada, Australia, and then obviously the UK just snuck in, which was great. But having said that, I think what we are seeing right now is a coming together of policymakers and technologists to really think about the first seedlings, if you will, of adaptive policymaking. Is it perfect? Absolutely not. Do we expect perfection? Yes, and I think that's a problem that we all have. But as a startup, there are two things that we are being very mindful about. One is we do want this to really go into that implementation phase. So this is where we are spending a lot of time. And I'll share a little bit more about what Credo AI is doing in just a moment. But the second thing, which I think is really critical in this moment, is that we should really stop thinking about perfection and start thinking about how we can make sure these multistakeholder groups are actually coming together to put something in place. And so the EO for me is a testament to those first steps.
And yes, there's a lot of focus now on NIST and OSTP and OMB trying to actually put in place what the next implementation phase looks like. But I think it's a great starting point. Thank you. I saved Matt for last because this is the most substantive piece of it; obviously, you're already in the weeds of this. I mean, there's this framing that now we have a national policy for it, but obviously DOD has been working on this for quite a while. I would love for you to talk about what the EO means for your work, given that you're already fairly far along in this. Certainly. So I think with the EO, there are two axes that are very clear. The first is where it says there has to be a whole-of-government approach. I think what's really helpful about this is that this isn't the starting line for going about this work, right? My colleagues across the entire USG have been doing this work for quite some time, and we've been in dialog and we've been in communication. But what's really helpful about this is it raises the national consciousness and sets up the equities and responsibilities for how we work together more closely on that. And from the DOD's perspective, right, it's a 3.5-million-person organization, the biggest employer in the entire world; basically any AI-enabled capability use case you can think of, somebody in the DOD is trying to do it, and it's really critical for whatever use case they're thinking of. And so, looking across the diversity of use cases across the USG, there's a lot of potential for us to collaborate and work more closely together. So that's one dimension. And like I said, this work has already been happening, but now this is giving the kind of catalyst to really be knitting these efforts and lessons learned together more closely. The other dimension we can think about is where it says, right at the beginning of the EO, that this has to be a whole-of-society effort as well.
And I think what's really important, right, is to realize that these aren't just issues to do with the technology. This isn't just solved through better DevSecOps processes. These are also sociotechnical issues, and you have to be thinking about the broader environment in which these technologies are designed, developed, deployed, used and eventually retired. And so this has got to be a whole-of-society effort, engaging stakeholders throughout that entire product lifecycle and making sure we're tracking how this is influencing our values, our norms, our overall society, and making sure that the technologies we design are not just aligned with those values, not just instantiating them, but also spreading them. So I'll leave it there. Great. All right. Well, thank you for that, Matt. I want to come back to our gentlemen on the Hill. Now that we know you're in agreement on these matters, I'm going to see how long I can keep you in agreement, but I want to frame it, too. Obviously, going back a bit to what Navrina was talking about with global frameworks and global cooperation, there's also global competition to develop AI. There is, of course, a national security aspect to it and a geopolitical aspect to it as well. And I'm curious, and I'll go with you first, Randal, on this, because this goes back to the innovation piece. I mean, we've seen leaders of tech companies, and I know Mark Zuckerberg has raised this, for example, argue that if we do not allow untrammeled innovation to happen, we run the risk of being left behind by potential competitors in the rest of the world. And I'm curious how you see that fitting together with national security and also American economic fitness with regard to these other regimes, too.
So from the American economic fitness perspective: when the Internet came about in the early 1990s, we took the approach of Section 230. Whatever anyone thinks about it today, we took a very open Internet approach. And that's not to say that we didn't solve problems as they came up: the Economic Espionage Act of 1996, the Children's Online Privacy Protection Act, before we all got to Y2K. So the American innovation scheme that we applied to Internet technologies and the Internet of Things worked extremely well. This is not to say there weren't exigencies. It's not to say that Sandra Bullock in The Net wasn't a scary movie back in the 1990s, and ordering pizza online is still way easier than that movie portrays. But the overarching point is that we had so many businesses come up, succeed, fail, grow: everything that you require in a dynamic economic marketplace. Now, one of the concerns that I've heard from colleagues and from European partners about the AI Act, or whatever the EU does, is that they know only one of the top ten AI developer companies is actually in the EU right now. So they can license whatever they want, but if no one's there to build, well, that's kind of a problem. So from an approach perspective, what I would hope to see is that we replicate the freedom of innovation that we successfully applied to the Internet and the Internet of Things, and apply it to new sets of technology. And we do have to deal with exigencies in national security. We're going to have to deal with child privacy protection. All of these things have to get dealt with. But we need to do it in a way that allows for the problem to actually arise, and to understand what it is in order to solve it, rather than theorizing about what it could be and then solving the theoretical problem before we get there. That's great.
I would throw it to John on this one, because here's my favorite fact about John that I learned beforehand: John wrote one of the first papers about this thing called digital currencies that could exist, back in 1996. So I want to throw it to you, John, since you seem like the perfect person to discuss this in the context of whether we can foresee some of these things as they develop. And in fact, the 1996 Telecommunications Act came only a few years after the commercialization of the Internet. So that's a fairly compressed timetable for what we're talking about. So I'm curious about your thoughts, perhaps as a counterpoint: how can we anticipate threats and dangers and regulate them appropriately, or at least put that framework in place? Yeah. So thank you. I'm going to go ahead and disagree with Randal. All right. I knew it wouldn't last. You know, there's a famous photo, I don't know if everyone's seen it, of the tsunami that hit Indonesia 20 years ago. And there's one little guy down on the beach looking at this giant wave coming in, because he'd misread what it meant when the tide went out. And, you know, the implication of that picture is that guy's toast. I relate to that guy. I'm not the least bit concerned that something I or we do is going to block the tidal wave of innovation that is coming at us. You know, even if we screw up badly and way overdo it, I think what we're looking at is just a massive wave of change that is going to happen. And so I'm not sitting here saying this is just a delicate little flower and we'd better be careful not to step on it. It's a giant force that is coming at us. So I think we have to do something. We have to anticipate as well as we possibly can, and take action. So that being said, I think we're in an interesting situation where, you know, industry is actually asking to be regulated. They're kind of looking for safe harbor, basically.
Essentially, everybody knows they're playing with fire. And if maybe there was just this checklist we could go down to make sure we've done everything right, then we'll deploy this and kind of see what happens. And so, you know, I think that we do have at least an early version of that checklist, and that's the NIST AI RMF. And maybe I'll take an extreme position here: even though I greatly admire the NIST AI RMF, there's, in my mind, sort of a competing document, which is the Blueprint for an AI Bill of Rights. And in the NAIAC report that came out recently, there was a dissenting note saying, well, we talked a lot about risk management; we barely even talked about rights-based approaches. And, you know, I don't actually have an extreme position on this, but I do think we have focused mostly on how do we make it easy to deploy without too many hurdles, when in fact we are playing with fire. And I don't think it's enough to say, hey, we had a fire extinguisher, right? You know, you still burned down the house. So, to a certain extent, we need to make sure that rights are protected. And so, just in terms of broad goals, that's kind of what I'm thinking: how do we do both? And I would just quickly point out that, as I was reading the executive order, I was thinking, how are they going to play this? And I thought they rather brilliantly said, we need to use a risk-based approach to implement the rights outlined in the OSTP blueprint. Like, oh, nice. It's actually right where I am. It's right down the middle. We need to think about both. So, can I disagree, or agree? So thank you. I love your tsunami example. You know, one of the things that I truly believe is that the NIST AI RMF is not just a risk-based approach. It's actually focused on rights-based outcomes.
Because again, if you think about the NIST AI RMF, which is a very horizontal hygiene approach to how you govern, map, measure and manage some of the baseline risks, none of those matter unless they're in the context of an AI use case and an industry. So one of the next steps that NIST has been focusing on is building profiles. And these profiles are very use-case specific, industry specific. Think about these profiles with an example: when you're using a machine learning based or AI-based system for hiring, the impact there is that there could be unfair outcomes for certain demographics, right? But you can't really pull that right out of the AI RMF on its own. The NIST AI RMF was intentionally built so that you can actually start thinking about what the disparate impact is: are certain demographics going to be left out? And then, when you map it to a particular use case and industry, that's when the rights approach comes in. So I have looked at the NIST AI RMF from a very different lens, because there's a horizontal component, but then there's a vertical component. And then, yes, I truly believe that when the EO said that it's a risk-based approach to actually get to the rights, I see there's a pathway to it. And I think this is where I'm just encouraging everyone to start being a little bit more flexible, because everything in AI is moving so fast: the capabilities, the complexities of the technologies. And as technologists, we don't even understand all the risks of the gen AI systems; I'm happy to go down the list of risks that we know and risks that we don't know. So I think this is where we need to be a little bit adaptable. Going back to something that you said, Randal: I think it's really important that, within the context, understanding the risk and trying to do something about it is really critical. Well, thank you, Navrina.
I'm sitting here listening to this and I'm thinking we're having a wonderful intellectual debate about AI, and I'm looking at Matt and I'm imagining Matt sitting here thinking, well, this is all great, but I actually have to operationalize this inside the world's largest single organization. So my question for you is, Matt: how do we move, to John's point, from just checking boxes to actually operationalizing this at every level of the DOD, and of course all the other government departments? What's your strategy for that? Yeah, thanks so much. So this actually perfectly tees up the value proposition of my team for the DOD, right? In 2020, the DOD became the first military in the world to adopt these five AI ethical principles. And exactly as you're saying, it's fine to have these five principles, but what does this actually look like if you're doing development work on the ground, or if you're an operational end user? What happens when they conflict? What happens when you're in a high-conflict scenario? How do you translate between all of those levels? So last year, the DOD issued the Responsible AI Strategy and Implementation Pathway, which outlined 64 lines of effort for how we were actually going to do this. And it's everything from education to data and model card efforts to test and evaluation harnesses and tools and so on. And we thought, oh, maybe this will solve it. We'll just build these 64 things and then we'll finally know how to operationalize these five principles, right? So it turns out that if you just build those capabilities out, it's hard to know how to find the right tool and how to use it at the right time. So I spearheaded, as chief architect, the DOD's Responsible AI Toolkit.
And what we did is really deep dives into all of these assessments and checklists like John was talking about, into NIST's AI RMF and playbook. We basically ran a bunch of tabletop exercises and looked at the delta between those resources and what was actually going to be usable for the DOD, given its wide variety of use cases and risk profiles and so on. And so what we did was try to combine all of them, and basically turn all of the pain points we identified into design principles. So, for instance, one of them is that a lot of these are very static documents. They work really well for certain particular use cases, but when new things come down the pipeline, they're not really able to address them. That was a major pain point for some of the generative AI cases that we were dealing with for the DOD's generative AI task force. Another one is that they kind of assume the program manager has perfect knowledge about everything that's happening on the project, and really high levels of technical facility across all of the different disciplines involved, so you can just give them one of these assessments and they know how to fill it out, or they know who to talk to, right? And so, to turn those pain points into design principles: on the first one, we wanted to build the toolkit in a really modular, tailorable way. What we did was create a backlog of a whole bunch of assessment questions and tag it, so that you can actually set filters and it will auto-populate an assessment and checklist for you. On the second pain point, like I mentioned, we labeled every item through a RACI matrix, so that you can actually go in and set your work role and it will auto-populate everything that's relevant to you, and then it all gets rolled up. I only mention this because I'm very pleased to announce that this will actually be publicly released very soon as an interactive web app.
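The mechanism Matthew describes, a tagged backlog of assessment questions filtered by use-case tags and RACI work role to auto-populate a tailored checklist, can be sketched roughly as follows. This is a hypothetical illustration only: the item texts, tags and role names below are invented for the example and are not drawn from the actual DOD toolkit.

```python
# Hypothetical sketch of a tag-filtered assessment backlog.
# Each backlog item carries topic tags plus a work-role label (the "R" in a
# RACI matrix); filtering by use-case tags and role auto-populates a checklist.

BACKLOG = [
    {"question": "Is training data provenance documented?",
     "tags": {"data", "generative"}, "role": "data_engineer"},
    {"question": "Has the model card been reviewed?",
     "tags": {"model"}, "role": "program_manager"},
    {"question": "Were red-team findings triaged?",
     "tags": {"generative", "security"}, "role": "program_manager"},
]

def build_checklist(use_case_tags, work_role):
    """Return backlog questions matching any use-case tag and the given role."""
    return [item["question"] for item in BACKLOG
            if item["tags"] & use_case_tags and item["role"] == work_role]

# A program manager on a generative-AI use case sees only the items
# relevant to that use case and that role.
checklist = build_checklist({"generative"}, "program_manager")
```

The point of the sketch is the design principle, not the data: new question types can be added to the backlog without restructuring any document, which is what makes the approach modular and tailorable rather than static.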
And you can see what all of these functionalities look like. And the reason for doing this is because we think the DOD needs to be really transparent about the kind of standards it's holding itself to, both from the perspective of public trust, but also so that our industry partners are able to see: this is how we understand those five principles in practice. From the perspective of interoperability with efforts like JADC2 and CJADC2, we want to have common processes that we're putting out, that we can put our stuff through and our partners can put their stuff through, so that we can see how these are grounded in shared values. So we're doing a lot of work with the toolkit across the USG and with international allies for that purpose of interoperability. So yeah, that's a long answer, but stay tuned. It's coming very soon. Well, that's great. Something to tease here: we have some news you can use. And I love getting granular, so that's great. And speaking of getting granular: Navrina, I want to go back to you quickly for another granular example, particularly when it comes to how to put guardrails in place. We talk about innovation and getting it out there and deployment, but how do we actually do it? I'm curious about some examples that you've worked on with Credo and others about putting the guardrails around it, because I think the EO also specifically mentioned red teaming as an aspect of this. There's been a lot of discussion of red teaming, which is a tactic that I'm a big fan of, but I've also heard the criticism, from Data & Society and some other NGOs, that red teaming isn't enough, that it isn't holistic enough. And I'm curious about your approach, and how we should think about this holistically, with some of these challenges of shaping it. Yeah, absolutely.
Thank you. So, a little bit about Credo AI: we are an AI governance SaaS platform that basically provides oversight and accountability across your entire AI lifecycle. So, very similar to what Matthew was mentioning, there are a couple of core elements to how we operationalize responsible AI. The first is really aligning on principles, and, as you can imagine, aligning on what good looks like within an organization is a tough challenge because there are multiple stakeholders. But within Credo AI we help an organization align on industry best practices, company policies, regulations (existing or emerging), and standards like NIST, ISO and others. And once you've aligned on what that good looks like, you use those alignment metrics as a mechanism to interrogate your data sets, your models, your AI use cases, as well as your processes. And this is something that we fundamentally believe in, and I'm sure with generative AI all of you are seeing this quite a bit: model-level interrogation is just a teeny tiny piece, because of the dual nature of these models. It's how the model is going to be used, and the governance at the point of use, that is really, really critical. So within our platform we are able to do that interrogation across your entire pipeline, but we focus on contextual use cases to really figure out what that alignment looks like. And then lastly, and I think you said this really well, once you've done this interrogation, what does that really mean? It really needs to show up as trust reports: transparent mechanisms by which you can actually share the outcomes of these interrogations with the right stakeholders, whether those stakeholders are internal or external. So, a concrete example: you know, we work a lot in the private sector, and now we are foraying into the government sector.
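The align-then-interrogate pattern described above, agree up front on measurable thresholds for "what good looks like," then check a system's measured metrics against them and roll the results into a report, can be sketched in a few lines. This is a generic illustration of the pattern, not Credo AI's actual product logic; the metric names and thresholds are hypothetical examples for a hiring-style use case.

```python
# Hypothetical sketch of the align-then-interrogate governance pattern:
# stakeholders agree on metric thresholds, then each evaluation run is
# compared against them to produce a pass/fail report.

ALIGNMENT = {                          # agreed definition of "good"
    "demographic_parity_gap": 0.10,    # maximum allowed gap between groups
    "accuracy": 0.85,                  # minimum required accuracy
}

def interrogate(measured):
    """Compare measured metrics to the agreed thresholds."""
    return {
        "demographic_parity_gap":
            measured["demographic_parity_gap"] <= ALIGNMENT["demographic_parity_gap"],
        "accuracy":
            measured["accuracy"] >= ALIGNMENT["accuracy"],
    }

# e.g. one evaluation run of a hiring model
result = interrogate({"demographic_parity_gap": 0.07, "accuracy": 0.91})
```

In this framing, the report dictionary is the seed of the "trust report" mentioned above: a transparent record, shareable with internal or external stakeholders, of which agreed criteria a given system did or did not meet at a given point of use.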
Booz Allen has been a strong partner of ours this year, but as you can imagine, a lot of our government clients, the USG included, are using Credo AI both for back-office optimization, where they're actually using these tools for everything from hiring to answering questions from their employees, etc., but also in warfighting and situational awareness use cases. So when you start thinking about, let's say, a situational awareness use case, what is really critical is: how do you build trust in that entire pipeline, so that when you have a soldier on the ground, they understand that the final response from the AI systems has been reviewed, has had the right oversight, has had the buy-in of the commanders, etc. So you can imagine a governed AI system actually acts as a force multiplier for soldiers on the ground. So we are seeing some really innovative use cases. Obviously, with gen AI right now, most of our customers are still in the sandbox phase. That means that within a governed sandbox they are making sure that all the hallucination challenges, IP leakage issues, and factuality issues are first addressed before they unleash the power of gen AI within their organization. Thank you. I want to tie together some of these threads right now and go back to our gentlemen from the Hill, taking John's comments about NIST and OMB earlier, and also obviously the toolkit Matt and the DOD are developing, and the tools you're developing. And going back to what you said near the beginning, Randal, about how the EO betrays the fact that there's really no body with statutory power for this: I want to ask the two of you, first you, Randal, if Congress were to design some entity with full statutory power, what would it look like and what should it do? I'm curious because, number one, in polls I think most Americans don't want Congress to design that; Congress polls pretty low there.
But two, is it, I mean, is it a new government agency? Is there a Department of AI? Is it something that looks like the Consumer Financial Protection Bureau or some other agency? I'm curious, from your position on the Hill, what kind of entity should be in charge of coordinating these efforts across all of this to make sure we're not having redundancy, duplication, etc.? So on that, you get different perspectives from different political and regulatory philosophies on the Hill, for sure. How you doing, John? But what you've touched on with licensing companies, having basically a national licensing structure for artificial intelligence, has gone over as quite unpopular, really, with policy staff and the American public. The notion of needing a license to do business across sectors is something that usually polls unfavorably with Americans. But we're looking at, what can I say? So from the Republican House regulatory perspective, I think what you're going to be seeing is something which answers the question of: what are the actual risk cases we need to be handling quickly? What are the national security issues? What are the vulnerability issues from an intelligence community perspective that we need to have nipped immediately, because we are in an international competition with China and other foreign actors over this technology, its applications, and national security? But from a new-agency perspective, I'd also just add that I've never known a new agency, on top of the four agencies that are doing the exact same thing right now, to add anything helpful for getting a quicker outcome. We're looking at NIST, OSTP, the Department of Commerce, the existing agencies that handle federal procurement and IT, to be given better authorities to handle the technology and the tidal wave that's coming in.
But that is a different perspective, of course, than licensing everyone from the first instance and potentially even moving that into a new agency. So those kinds of policy discussions are something that we're still having, and there is a difference between the approaches that the two parties want to take. That being said, usually some compromise comes out of that over time. Well, John, I'd love to hear your thoughts from the other chamber. And also, if you could weave in, if you have thoughts on this: to your point there about licensing, I think it was interesting that with the EU AI Act, I forget if it's in the current draft, but at one point they were going to come down very hard on open source models. They wanted very tight regulation of which foundation models would be allowed to run, because they were very distrustful of having that kind of un-interrogated innovation. But yeah, just to quickly follow up on that: the open source community is something there is concern about, because it is fundamentally American and free to have an open source community. It is part of what helped build the Internet and Linux. It is incredibly important to have it, to protect it and to allow it, and not to regulate it out of existence. Yeah, John, curious about your thoughts in the other chamber. I'm going to partially agree. I figured we would. I think there's a bill that's not public, but I think probably everybody who cares has seen a copy of it. So I'm going to be a little bit vague. You too? The first time I read it, it had self-certification in there as the regulatory mechanism, and I just kind of sneered at that, like, oh, sure, Scout's honor, that's not going to work. But then some pretty serious discussions got started within my group about, well, exactly what are we going to do? Sure, we'll stand up a new agency, and maybe we'll have that by March. Like, no. Well, what about existing authorities, and so forth?
And what eventually became clear to me is that there's kind of this straight line that goes from what we can do now, which is almost nothing, to what we can do three, five, ten years out, which could be quite substantial, right? So I finally came all the way around to self-certification as what we can do now. And I think, you know, we're probably all more or less in agreement on that. But the open questions are: how long until it's not "now" anymore? How fast do we have to move to ramp up, and ultimately, where are we going? So I can kind of just talk about some of the stages that I see coming. And I think it's all open to discussion, and we are discussing it. So, you know, I think the next entity that probably needs to exist is third-party auditing. And I think that's something that probably exists to a certain extent now and can be ramped up over the next year or two. So that's probably next. I think there probably does need to be a central agency of some sort, but not a central agency like "the AI agency"; more of a coordinating role. And then I really did agree with pretty much everything Randal said, although as a Republican, I noticed he's unable to say the letters FTC. It's always there. But I would, I mean, I think a lot of people sort of say, well, NIST would be great, but they're not an enforcement agency, right? And, you know, NIST: I wonder how many people had even heard of NIST two years ago? Now they are the stars of the AI world, because we're essentially flying blind, and NIST is all about measurement, right? And you all know about them more than I do. We need control panels. We need to know what's happening. And NIST is all about that. But again, in terms of enforcement, their hands are tied. So, the FTC or a similar agency.
And as I've mentioned previously, I'm looking at this from the consumer side. There's a whole national security and defense side that I'm not really acquainted with. So can I add a few points? Yes, add a few. So self-certification and third-party audits won't work if that's the goalpost for us, because for third-party audits, you actually need standards against which you can audit, and without those standards there's going to be no benefit to a third-party audit. It's pretty much going to be a third-party assessment, which is a very different thing. And I think this is one of the issues with this very fast-evolving space: this conflation of terms. So I truly believe that where we are today, if we were to propose a third-party assessment, meaning just a different set of eyes, that is a possibility. But a third-party audit as a goalpost really needs a lot of, I would say, intermediate steps. The second thing is self-certification. I mean, we've seen the commitments that came out from the White House around the foundation models. The issue with those commitments is: who's measuring them, and how are they actually going to be upheld? Who's accountable for them? How often are those commitments going to be checked? There's no mechanism for it. So something we have been pushing the White House and some of the ecosystem partners hard on is: one, sure, you can start with those commitments, but they need to be spread throughout the value chain. So not only the foundation model builders, but the application developers and consumers have to all see those commitments. Secondly, NIST or some other mechanism has to come up with those benchmarks, which have to allow for a third-party assessment, not an audit. Without those standards, I don't think those are going to work.
So, I don't know, this is a topic we could talk about forever, but it looks like I'm not on the inside here; whichever bill this is, I would love to get on the inside. I think if those are the goalposts, we might have an issue. Yes. All right. Well, I want to go to Matt real quick on this, because obviously Randal mentioned procurement, and I was thinking about this: we've got to discuss procurement. In addition to the formal power of regulation, there is, of course, the soft power, the power to procure. And The New York Times, of course, had a story today on startups and defense tech and AI. And I love, if you haven't read the piece, the hero's journey of the piece, like, "Oh man, we've got to lobby up here if we're going to get anywhere." So I'm curious, Matt: obviously, how is procurement going to change? How is it changing with AI, and how are you going to use procurement to basically shape the outcomes you want as well? How does that change? Yeah, definitely. So, I mean, one of the things our team thinks about a lot is, you know, how do we incentivize responsible AI? And we've been thinking primarily in terms of carrots rather than sticks, right? Because, as Navrina was saying, a lot of these standards by which you could build a kind of regulatory scaffolding are going to take some more experimentation before they're ready. So we've really been thinking about these mostly in terms of carrots. And you're absolutely right: one of the big carrots we have, with the DOD and a $900 billion a year budget, is funding, right? And so we've been thinking a lot about, in the acquisitions and procurement process, how do we set in place these very clear requirements and criteria to demonstrate that your technology is aligned with the DOD ethical principles and our values? And that's a big piece of why we developed the Responsible AI Toolkit, right?
Because it's one thing to say, okay, show how you're aligned with the principles, and then get some kind of vague hand-waving narrative about it. It's another to actually have different benchmarks and processes and artifacts that you can be measured against. And so that's one of the things where we're really trying to shape this overall ecosystem, because, surprise, there are also others who are trying to shape this ecosystem, right? And so when people ask what US-values-aligned technology looks like, well, the easiest way to answer that question is to say, well, what does it not look like? Right? And it probably looks a lot like Belt and Road, right? So the flip side of all that great stuff that was in the EO about data protection is surveillance technology, right? And so, you know, we really believe that our strength is going to be through our industry partners, and we want to be using these tools and our funding to help shape that ecosystem in a way that not only instantiates and represents our values, but can also spread them as US technology spreads. And one last piece I'll mention really quickly: if you look at some of our near-peer competitors and the kind of policy that's coming out, what their governments are saying, they are aiming to be leaders in not just AI but AI norms and standards by 2035. And it seems like this really boring thing, and people don't want to get involved in these bodies or take time to sit on these councils. But it's really important to look at who is actually sitting on these bodies, setting these standards, right? Because you can think about the kind of leg up that companies like Ericsson had decades ago in the telecommunications sphere. If you set the standards so that they're aligned with the technology your country has, it's a huge economic benefit, because you already have a leg up.
But there's also a values benefit to it, right? Because these standards aren't just abstract things. They can also be aligned to things like data protection and other ways in which values are instantiated. So, you know, responsible AI looks like this kind of soft, cushy, amorphous thing, but actually I think it is a tremendous source of soft power and a huge potential source of our ability to exercise integrated deterrence, right? If we can spread the technologies that spread US values. Navrina, I want you to jump in a bit on this, since you are on many of those councils yourself: how does this play out when it comes to protecting national security information, the cybersecurity aspect of this? You know, in addition to being on game shows, I've also written reports for the Army Cyber Institute for NATO, and reports for the US Secret Service on misinformation, on attacks on US citizens, soldiers, and law enforcement using AI and other tools. I'm curious what you're seeing in the development of these standards, principles, etc., to make sure that we are protecting national security information and developing the appropriate cyber defense tools and weapons to protect against potential incursions. Great question. And I think, again, it's not a simple answer, because there are a lot of different components that need to come together. So as an example: incentive alignment. I think that's a pretty big one. What's the incentive for all these different parties, whether it's NATO or our ally nations? What's the incentive for them to actually adopt responsible AI principles which are aligned with, obviously, the things that they care about for their citizens, but also what they care about for national security? I think the second thing is tools. I think there's lots more work that needs to happen.
We are seeing, for example, standards and protocols and tools emerge for everything from adversarial attacks to data poisoning to model inversion attacks. So I think: how do you actually come up with a solid set of standards that you can measure against? Because at the end of the day, again, operationalizing a lot of these principles has to move away from just high-level policy to actual implementation at the model, data set, and system level. And I would say that the third thing that we unfortunately don't talk much about, but that we've started to see even in the EO, is capacity building. So one of the biggest challenges we've seen on the private sector side, which now I'm seeing a lot in the government: you can imagine that in the private sector, data science, machine learning, and AI experts were basically a very important, I would say, commodity. And now what we are recognizing is that the risk, compliance, and policy folks really need to come into the AI ecosystem. And so how do you bring your expertise to AI without being an AI expert? That's a big focus of discussion in, I would say, the private sector, and in the EO, if you've seen it, AI literacy is a pretty big component. So what I'm seeing in these councils is really discussions around how you actually establish, whether through AI councils within the USG or otherwise, the right kind of AI expertise. How do you actually educate different individuals? You know, we work with NATO as well, and they've come up with a great set of what are called principles of responsible use. But they are taking that a step further: they are already implementing them across the allied nations, and more importantly, now there's a discussion around certification, which I think Matthew will know better than I would. So I would say those are some of the core components that we are seeing show up. Great. All right.
We're almost out of time, but one last substantive question for the gentlemen from the Hill as well, and I'll go to John first this time. We started talking about procurement, obviously, from a DOD perspective, but procurement is going to be big in terms of, you know, staffing and building out expertise across all the other government agencies at the federal level, etc. And I'm sort of curious about your thoughts on how we start building out the appropriate level of skills, expertise, roles, etc., so that decision makers in other federal departments can make the right decisions when it comes to procuring AI systems and the appropriate solutions and technology. How do we start thinking about thinking about AI? Right, right. I've got an answer, and it might take you a second to understand that I'm going towards procurement. I'm very interested in the problem of inauthentic media. And I think there are sort of different levels of understanding. On the surface it's sort of simple, but the deeper you get into it, the trickier it is. On the surface it's like, well, we just need to spot, you know, movies that are fake. How hard can that be? Well, first of all, it's very hard. Secondly, there is zero room for error. And it's not just movies; it's any kind of document or media content. If you have some sort of detection service that says "this is fake" and it's right 99.9% of the time, it is shutting down someone's right to speak freely 0.1% of the time, and it's never going to get to that level of perfection. The danger on that side is high, so maybe you could build it to lean a little more in the other direction. But in the other direction, when you say something is legitimate and it's not, you have just defeated the whole purpose and validated something that's not.
So this idea that you can just sort of spot inauthentic media is very problematic. Go a little further, and when we talk to experts about this, they say, well, we expect to see this, but we don't see it. What we do see are cheap fakes, which is a scene from a movie that has nothing to do with the topic it's being used to illustrate. So it's phony, but there's nothing AI about it, except quite possibly the image search that found it, right? So you have a piece of content that is in no way modified; it's phony, but it's not created by AI. It is found and used through AI. And then there are other techniques, like micro-targeting, where the phony information you're getting is different from the phony information I'm getting, because AI has the ability to really figure out what everybody's buttons are. So it gets deeper and deeper and deeper. And, you know, again, at the heart of all this is that we don't have a system where we tell people what they can and can't say, right? So we can't afford to get it wrong, and it's a very, very inexact science. So I'm, let's just say, provisionally a big fan of the C2PA standard for this. And we've been talking to them, and we asked them, how robust is it? And they go, well, we're pretty confident in it. Well, can we legislate that? And the answer is, well, not yet. What they want is to deploy at scale, which is my one contact with procurement: we're trying to get some government agencies to adopt the standard and use it at scale. Gotcha. All right. The last word is yours, Randal: your thoughts on these issues as well, a lot on procurement. So I kind of have two sets of responses. The first one is that procurement is obviously going to exist on the defense side, and I don't think I'm telling anyone in this room anything special by saying there will be procurement opportunities in the DOD, right?
But there are a lot of federal agencies that have a lot of problems and a lot of issues that they deal with on a daily basis where AI can make a substantial difference in the rest of the federal budget landscape as well. There's "only" $900 billion going to DOD; there's hundreds and hundreds of billions of dollars going elsewhere. One example, I think we both learned about it when we went to an AI camp together at Stanford, one great example that's really helped shape how I try to think about this for federal agencies, and a lot of staffers have kind of taken this and gone with it, at least from our side: the Social Security Administration is employing AI to assist judges with trying to keep results more consistent across disability claims cases. That is incredibly valuable from a service perspective for the government. It is going to provide more consistent service, it's going to provide better service, it can provide cheaper service, and it can provide quicker service for citizens. That is still the fundamental value of what a new tool can bring into agencies. So what I would encourage is not just looking exclusively at the defense space, but at the incredibly varied ways that this incredible tool can make life better for people who are dealing with some of the hardest things: Social Security, disability, FTC complaints. There's a vast array of areas where this could be employed to make life better. So from the human side, I'd say innovation is also one of the strongest parts; we need to keep driving that, because we haven't figured out all the ways we're going to make life better for people yet. Can I get a round of applause for our panelists? Thank you all so much. As we file off here, folks, take your seats, because I would like to invite John Larson to come up and deliver some closing remarks. Thank you, John. I was trying to summarize the points, so you can take a photo of my crib sheets afterwards.
It was certainly a very informative panel. Let me first thank you all for participating. I really appreciate that you all took the time to sit up here. Greg, thank you for moderating. Navrina, Matthew, John, Randal, I really appreciate all your perspectives. And Fast Company, thank you for helping to pull this off. Next, a couple of things I want to highlight that I thought were really important. First, I think we did it, right? We broke some ice; we got agreement, partial agreement, and disagreement. So I feel really good about that. You know, I think the shared perspective was that there does need to be, overall, some sort of statutory power behind what's being sought here. And so I think it was exciting to hear that level of agreement. Obviously, the devil's in the details, but we do need to bring some statutory power to this. The other thing that really resonated with me was this sort of all-of-society aspect, right? This is a problem and a challenge and an opportunity that we all have to embark on, and harnessing it for the greater good is going to necessitate that we all are involved. It takes expertise, not just from AI practitioners like myself; Navrina, you were talking about the array of experts that are required to do this accurately. And so that's true in terms of the technical application of this, but it's also true as a country. This is a moment, this is an economic opportunity for us as a nation, and for us as a world, to figure out how to solve some of the most critical challenges we face. This technology is going to help us do that if we do it in the right way. And so I think that's a really important point.
The third thing I walked away with: we talked a lot about use cases, and this is really important, and I think this is why I struggle with the idea of a monolithic agency that regulates all of this, because I think it really does matter at the point of application. One of the best examples I can think of is this: if you create an AI application that can identify and predict cancer with a high degree of accuracy, the governance of that application is going to be very different if you give it to your doctor versus if you give it to your insurance company, right? And so it's very difficult to have one monolith regulating those types of things. So I do think the use case is really important, and we heard multiple times up here that you have to understand what the use case is to really understand how we want to think about the challenges that we're facing. The other thing I walked away with, and Matt, thank you for all the details, and Navrina, around the operationalization: the scale, the speed, the complexity of these algorithms is such that you can't do this through process alone. There's a lot of, I think, lofty language out there. You really need to operationalize this into the pipelines. This is something we spend a lot of time at Booz Allen focused on: how do you bring the appropriate guardrails and guidelines into the actual engineering of the model, so that you aren't dependent on a process or a checklist? And so I think solving that operationalization issue is really critical. The other thing I noted that I thought was really, really cool was this notion of soft power. Matthew, I thought that was a really good point. With responsible AI, we talk about this age of principled AI because we think it is incumbent on us to be the leaders in this space, because we need to bring our values, our principles, how we see the world. And those democratic values need to shine through in how we see AI unfold.
And the way you do that is by being the leader, setting the pace, and making everyone else follow you. And I think, Matthew, you really hit that point: if we're involved in these different types of organizations, if we're helping to create the thought leadership around those organizations, we can drive the use and application of these tools in a way that comports with our values, which is absolutely critical. And so I think those were the things that I really took away from this conversation. I thought it was really powerful and really eye-opening. There is enormous opportunity here, and I think we have collectively overcome similar challenges in the past, and I think we can do that again for the future. And I'm excited about that. And we have to harness, and not hinder, the growth and adoption of AI in a safe, secure, and trustworthy way. This year's edition of Velocity was focused on AI, but if you read it, you're going to find something in that magazine for almost everyone in this room, because we think of it as "AI and." And that's why it's so important that we get this right: because AI is not going to be limited to a narrow application. It is going to become ubiquitous. It's going to be found in everything that we do and every tool that we apply. And so we have to think about it in that "AI and" sense. We think it's going to transform how the government does every single one of its missions and change the way we achieve our goals and outcomes for us as a nation. So thank you all so much. Have a wonderful evening. We look forward to talking to you all. Thank you.