Insights from the Space+AI Summit

Hear from the Experts

Introduction and Formal Welcome

Video Duration: 4:42 
  • Ginny Cevasco, Booz Allen Vice President, National Security
  • Judi Dotson, Booz Allen President, Global Defense Business
Full Transcript of Video

Good morning, and welcome to our Space and AI Summit hosted by Booz Allen Hamilton. I hope at least some of you are as excited as I am to be here. We are thrilled to have you. My name is Ginny Cevasco. I work in the national security sector here at Booz Allen, and I'm delighted to be your host and emcee to guide you through our incredible lineup today. You'll be hearing from a variety of distinguished speakers: the former director of the James Webb Space Telescope, the agency chief technologist of NASA, and some of the foremost experts on the application of artificial intelligence to space for government. We are extremely grateful to all of them for taking time to be here with us today. We have a full house here at the Helix and hundreds more, I heard more than 700, joining us online. Let's share a round of applause. Before we begin in earnest, I wanted to share a couple of housekeeping notes for both our in-person audience and our virtual attendees joining us via the webcast. We're going to save 10 or 15 minutes at the end of every panel or conversation for questions. If you're here in the room, raise your hand and we'll make sure a mic gets to you; that's important so the folks online can hear your questions too. For those of you online, just drop your question into the chat. We have people monitoring it, and they'll make sure your questions get forwarded to the panelists. We have a jam-packed schedule today, so I won't hold us up any longer. I would like to introduce our Global Defense president, Judi Dotson, who will welcome you to the Helix.

Good morning. We are so excited to be here today, and I'm speaking on behalf of the global defense team as well as the planners and the marketers. We have been thinking about this summit for months, and to see the turnout, and to understand who's online and what the topics are, is amazing. We're so grateful for those of you who joined us today.
So I'm going to take a minute to talk a little bit about Booz Allen, for those of you who may not be as familiar with us. We're a technology and consulting company. We believe in mission first, which means we partner our mission experts with our technologists to make sure we're delivering the right capabilities to our clients. Today we have more than 2,000 people supporting space clients across the globe, including at NASA, Space Force, and NGA, along with other clients. We're really proud of the work we do, particularly in important missions like improving ground systems and space domain awareness. And we do this, as I said, with the mission-first approach, using the latest technologies. But what we realize is, as proud as we are, we cannot do it alone. It takes government leaders, it takes industry experts, and it takes people from across the commercial sector coming together to develop the solutions needed for today's space challenges. To that end, it brings me great excitement to announce that Booz Allen Ventures has made a strategic investment in Albedo. Winston, you're here somewhere. Yes, it's amazing. Albedo is the first company to operate satellites in very low Earth orbit, which enables them to offer ultra-high-resolution commercial imagery from space. We are so excited to be working with you and to be a part of your success, and we can't wait to see where you take this industry. Thanks for being here. So as our space capabilities grow, I'm thrilled to add this partnership to the portfolio, further enhancing our ability to accelerate U.S. space capabilities with the power of data. When we talk to clients across the country, that's what we hear about: data. How do we manage it? How can we take full advantage of the investment we have made, using advanced technologies and capabilities?
And Albedo will be on our third panel today to talk about the power of space data with us. 

Applying AI to Space Domain Awareness—to Fuse, Predict, and Decide

Video Duration: 45:03
  • Moderated by Tony DiFurio (OODATOPIA)
  • Lt Col Ashton Harvey (NRO)
  • Major Sean P Allen (USSF)
  • Pat Biltgen (Booz Allen) 
Full Transcript of Video

Good morning. As Ginny said, in our first panel we're going to talk about artificial intelligence and how it applies to space domain awareness. First I want to introduce our three panelists, starting with Lieutenant Colonel Ashton Harvey. Colonel Harvey is currently the chief technology officer at the National Reconnaissance Office, specifically in the ground systems program office. Prior to that, he was a Service Chiefs Fellow at DARPA, and he has held many roles across the Department of Defense and the Air Force. He also holds a Ph.D. in engineering and operations research from George Mason University. Welcome, Colonel Harvey. Next up is Major Sean Allen. The major is currently at Space Systems Command, where he has the honor of being the inaugural chief of the SDA TAP Lab. Prior to that, he was a mission director at the NSDC. He did a lot of work on hardware and software prototyping when he was at the Space Security and Defense Program, and he also worked extensively on OPIR systems. Welcome, Major Allen. Our third panelist is Dr. Pat Biltgen. Dr. Biltgen is a leader in Booz Allen's artificial intelligence group, where they apply artificial intelligence across defense, intelligence, and space clients. He has spent more than 15 years in the defense industrial base working on national security missions, and he just released a book on AI for defense and intelligence that's out on Amazon, so I'm sure we're all going to pick up a copy in the lobby or rush out to read it so we can talk to Pat about it. Dr. Biltgen holds a bachelor's, a master's, and a Ph.D. in aerospace engineering from Georgia Tech. Welcome, Dr. Biltgen. Thank you. So, before we dive into this topic of artificial intelligence, I want to say it's fascinating to me that we're here talking about this. I grew up as a kid prior to the internet, prior to cell phones and mobile phones as we know them today.
Artificial intelligence: the only thing we ever heard about it was when we were watching Star Trek every week, and we heard Captain Kirk and the crew talk about this all-powerful computer with all this intelligence on the Enterprise. And so I find it amazing that we are living it now. Where those TV series and movies were foreshadowing what artificial intelligence would do in space, this team, and the team across the nation, is making it happen. So we're going to talk about what making it happen means, and have a little discussion of what it is and where we should be focusing. So we're going to dive in. I think we should start with the basics, since this is a very complex topic and we only have a short amount of time. Major, maybe you could start our discussion by talking about the basic foundational needs we have in the government with regard to AI, before we start diving into the solutions and the priorities.

OK, thanks, I appreciate that. Something I will say is I'm the only person here who doesn't have a Ph.D. I have done some technical program management, but I am an operator, so all of my perspectives are observations about how we implement AI from the operator's perspective. One question we were talking about just before we came in: why don't we see widespread adoption of AI? ChatGPT took off this last year, but this is not new. What makes it hard to adopt in the operational context? Instead of giving you the list of reasons why I think it's hard to adopt, I'll say I saw something change when Colonel Raj Agrawal took over this last summer as commander of Space Delta 2, the space domain awareness and battle management delta. He made one simple change in how he described kill chains. One of the buzzwords has been "let's get the kill chains to close."
We've said that for several years. He turned that around and said, "My priority, to avoid operational surprise, is to detect the start of a kill chain." And for whatever reason, that small change in language has translated very well to software engineers, machine learning ops folks, and data scientists, who can say, "I can do event detection. That's something I can measure." So that's where I've been focused: what does it mean to avoid operational surprise by detecting the start of a kill chain, and how can I break that down into a set of tasks where we could implement AI to help us?

So, is that getting at it? No small challenge, right? Do any of the others want to add to that?

Well, Tony, one aspect would be, to Major Allen's point, that if you're trying to detect those early precursors, the adversary is trying to prevent you from doing that. A lot of their signatures are very weak, or they're trying to deceive you, and humans tend to have cognitive biases or preconceived ideas: "this is what I think that means." And the enemy knows that too. So an area where AI can help, back to the comment about big data, is that it's really good at finding latent patterns and weak signals. It may not always be right, but it can at least suggest those to the operator and say, "Here are five things I thought were weird," and the operator will go, "You know what, I didn't notice four of those." That's a very common thing we see in almost every domain: the AI finds non-intuitive things, and the human usually dismisses them as the AI being wrong, when in many cases it was actually something we never imagined.

And in that case, Pat, maybe you or the others could talk about this. As I understand it, AI really, in some ways, has the potential to be more powerful than the human brain, but it needs time, right?
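The weak-signal idea the panelists describe, an algorithm surfacing the handful of "weird" points an operator might miss, can be sketched in a few lines. Everything below (function names, data, thresholds) is invented for illustration; it is not any operational system, just a crude z-score detector standing in for far more sophisticated methods.

```python
# Illustrative sketch: rank points in a measurement stream by how far
# they sit from the mean (in standard deviations) and surface the top
# few for operator review. A toy stand-in for weak-signal detection.

def flag_anomalies(values, top_n=5):
    """Return indices of the top_n points farthest from the mean,
    measured in standard deviations."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0  # avoid dividing by zero on a flat signal
    scored = [(abs(v - mean) / std, i) for i, v in enumerate(values)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:top_n]]

# A mostly flat signal with two injected "weak" deviations an analyst
# could easily overlook by eye:
signal = [10.0] * 50
signal[17] = 10.9   # subtle bump
signal[42] = 8.8    # subtle dip
print(flag_anomalies(signal, top_n=2))  # -> [42, 17]
```

The point is not the math, which is trivial here, but the workflow: the detector proposes, the operator disposes.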
It needs data to learn, and that takes time. So how does that work for and against our desire to speed up and improve the kill chain and the speed of the mission, while staying accurate?

I'd like to share my thought on that. If you can't clearly define the objective functions you're trying to train a model to detect, then more data is not going to help you, nor is more training time or more CPU hours. We need to think more clearly and be more specific about the tasks required for battle management. One challenge in the space domain awareness ops community is that it's complex. There are many, many things we have to do correctly to achieve space superiority, and there's a tendency for SDA to be all things to all people. If SDA is all things to all people, sprinkling more AI on top of that and hoping the operators will adopt it is probably not an effective strategy. So I think Colonel Agrawal's comment, that the way we are going to achieve space superiority and avoid operational surprise is to first detect the start of a kill chain, is a slightly different way of thinking about this problem. Because now I can ask: what is the finite list of ways I could be surprised? What are the specific attack vectors by which an adversary can come at me? And now we can start asking, to detect an event, what are those weak signals? Those may end up being indicators that an adversary is attempting to surprise me by mimicking a payload when he's a different type of system, or by pretending to be debris when he's actually a payload. Those are things I think we can actually specify, get data for, train models on, and incorporate into battle management functions.

Right. Yeah, I think that's a great point.
So, Sean is more on the operational side, and I've spent many hours on phone calls while he's out on the floor trying to work through different problems. I sit in a SPO and we build things, so I try to think about how to create a problem I can actually solve. Someone hands me a stack of nebulous requirements: we need more things, faster and better. How do I turn that into something I can turn around and give to folks like Pat and say, "OK, here is a decomposed problem, here are objective measures, here's where you're going"? So I think, as in your example of Colonel Agrawal, setting a good framework to scope a problem, give it shape, and decompose it into subportions turns it into chewable, understandable problems and reduces the mental complexity of framing the problem for someone. I don't have to solve all of SDA; I can focus on: how do I detect that this event is about to happen? How do I detect that a maneuver has happened? How do I classify that a maneuver is non-nominal? Building that systems engineering structure around decomposing the problem, with good ICDs and APIs between those problems, lets you scope it down to a manageable size where people can really start to chew on it. That's one of the things you've seen out of Space Systems Command and the NRO: a lot of thought leadership in recent years on how to decompose that problem well and communicate it to our industrial base, so that folks like Pat or others can put real code behind it and turn it into approaches that automate what are typically manual processes. And as we build trusted automated methods, we can get to the point where we gain the operators' trust to institute very understandable AI approaches to solve those problems where it's appropriate. Yeah.
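One of the decomposed subproblems named above, "how do I detect that a maneuver has happened?", can be framed very simply as a residual check against a predicted trajectory. The sketch below is a hypothetical illustration only: the data, the one-dimensional framing, and the threshold are all invented, and a real system would propagate full orbital states.

```python
# Minimal sketch of one decomposed SDA subproblem: flag a maneuver as
# the first epoch where the observation departs from the prediction
# by more than a threshold. Data and threshold are illustrative.

def detect_maneuver(observed, predicted, threshold_km=1.0):
    """Return the first index where |observed - predicted| exceeds
    threshold_km, or None if the object stays on its predicted path."""
    for i, (obs, pred) in enumerate(zip(observed, predicted)):
        if abs(obs - pred) > threshold_km:
            return i
    return None

# Along-track position (km) from a simple propagation...
predicted = [float(i) * 7.5 for i in range(10)]
# ...versus observations where a burn shifts the object at epoch 6:
observed = predicted[:6] + [p + 3.0 for p in predicted[6:]]
print(detect_maneuver(observed, predicted))  # -> 6
```

Scoped this way, the subproblem has an objective measure (detection epoch, false-alarm rate) that a SPO can hand to industry, which is exactly the decomposition argument being made.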
And Tony, I think that's a critical point. I'm glad Major Allen used the word "sprinkle." That's a word we use a lot: "let's just sprinkle a little AI on it." But to the colonel's point, it's very important. Systems engineering has fallen a little out of favor, because people say, "Oh, it's got these processes, I've got to decompose these things; let's just prototype it and see what happens." There is a place for that, for getting things in front of the operator and asking what they think. But you spend a lot of cycles bringing the wrong rock if nobody can articulate the problem. Most of the systems that are successful in the Department of Defense are the ones where the operators and the nerds can work together and say, "Hey, I got this requirement. It passed down through Colonel Harvey's SPO and was mistranslated 19 times from the original person you talked to. Now that we see it, what do we really mean by 'maintain custody of object'?" Let's figure out what each word means. It sounds really pedantic to do a definition exercise on each word in that sentence. But then you understand: to maintain custody, do you need to see it all the time, or is one shot every five minutes enough? You don't want to lose it for an hour. Help me understand what this word means from an operational standpoint, because a lot of those nerds don't have the operational experience. We can imagine, from a physics or math standpoint, "this is what I think they mean," but those systems are never successful. You have to talk to the operators and say, "Show me what you're doing, and let me try to understand how I might make your life better."

And it sounds like what we're talking a lot about here is the challenge both government and industry have in working together: to harness the technology push going on in artificial intelligence, but to apply it responsibly, with a rigorous methodology.
So we can get incremental advances to our operational community to help the mission, right? Not just spend a lot of time and money pushing the edge and hoping something comes of it because we're learning; we also need tangible outcomes fast. So I think that leads us into the next topic, to dive a little deeper. Where do we see, right now and in the coming years, areas where you think AI is particularly suited to space domain awareness? You started talking about the fact that there's so much out there we want to collect on, and it will only grow as time goes on; part of this will help us make sense of it and then decide what to do on the back end. Can you talk a little bit about that? Maybe, Colonel Harvey, lead us off: where do you see some specific areas, maybe not big formal requirements but smaller evolving requirements, that can point people who are trying to bring solutions to the table?

Yeah, absolutely. I think Sean again had a really good insight there about defining the objective function. If the government is not clear about what we want, and I don't mean in words, I mean mathematically, where we're trying to go, what numbers we're trying to hit, what we're trying to do, folks will never really get there. You see that in a lot of the research publicized at AMOS around sensor tasking. A lot of it has focused on how you do sensor tasking better: I want to maintain objects in space that aren't moving, and not lose them. Well, that's not super interesting. We can do that; we have an algorithm that can do that.
There are a lot of other things in that sensor tasking world that don't have good algorithmic approaches: the other tasking types that the 18th SDS has defined. Turning those into actual objective functions, weighing the value of going to look at something after it maneuvered to refine its state against the resources we spend on characterization shots of an object we're not quite sure is actually what it says it is, will help folks really solve that. But until that happens, folks are going to spin on it. We've seen a little bit of good research at AMOS where people have started on some of those, but not all of them, moving away from the decision problem of sensor tasking. If you look at classification, we talked about how AI algorithms are really good at looking at a lot of streaming data and saying "this is a change," "this is non-nominal," "this is nominal." There are a lot of places where that is a really good opportunity to apply AI techniques. And then, just because I feel like I'm required to talk about ChatGPT: there are a lot of opportunities to leverage large language models on unstructured data we otherwise couldn't pass into an algorithm very easily, to potentially pull out insights. That might be news reporting in foreign languages, because the model can read the foreign language and I can't, or Twitter posts, media, and video, turning video into text that can then be searched, looking for other left-of-launch indicators that might tell me I need to spend more cycles on something.

Can I jump in? Absolutely.
It's my opinion that if you want your innovative new research prototype AI thing to gain adoption in an operational setting, it has to be very clearly attached to a battle management function. If you can't articulate that in a short sentence, the rest of the research paper will go unread. One place where I see missed opportunity is anomaly detection on orbit. I think there is huge opportunity to make use of existing algorithms, but they're not well integrated, because we haven't yet explained to the operational community why this type of anomaly, when detected, gives you decision-making power in protecting an asset. One example from the past three months in the lab: if somebody tells me an object is stable or unstable, that's somewhat trivial and may not require machine learning. But it might, given ragged time-series data from a heterogeneous network of uncalibrated optical sensors. If you want to know quickly, from sparse data, whether that target goes from unstable to three-axis stable, that change can tell me something very useful about the intent of the target. Then I can tell a battle manager: if objects on this list become stable, that may mean the next proximity event is potentially hostile. So being able to answer those operational questions in that language, not in machine learning language, is going to help people adopt your whatever-it-is. If you've got some cool long short-term memory thing doing neat ragged time-series processing, rock on; tell me whether or not this target is potentially hostile and whether I need to take action against it.

So it sounds like, in that example, and he brought up ChatGPT, we're not talking at this point about what a lot of people worry about, using AI to replace the human. We're talking about using AI to enable the human.
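The stability-change example above has a simple intuition behind it: a tumbling object's light curve swings widely, while a three-axis-stable one settles to near-constant brightness. The sketch below illustrates only that intuition with a sliding-window scatter test; the curve, window size, and threshold are all made up, and real sensors produce far messier, irregularly sampled data.

```python
# Hedged sketch: detect when an object's brightness scatter drops,
# suggesting a transition from tumbling to three-axis stable.
# All numbers here are illustrative, not operational values.

def window_std(samples):
    m = sum(samples) / len(samples)
    return (sum((s - m) ** 2 for s in samples) / len(samples)) ** 0.5

def became_stable(brightness, window=5, std_threshold=0.2):
    """Return the start index of the first window whose brightness
    scatter falls below std_threshold (object settled), else None."""
    for i in range(len(brightness) - window + 1):
        if window_std(brightness[i:i + window]) < std_threshold:
            return i
    return None

# Tumbling (large swings) then stabilized (near-constant) magnitudes:
curve = [6.0, 8.5, 5.5, 9.0, 6.2, 8.8, 7.0, 7.05, 7.1, 7.0, 7.02, 7.08]
print(became_stable(curve))  # -> 6
```

Note the output is phrased as "the object settled at this epoch," which a battle manager can act on, rather than as a model score, which is the panel's adoption point.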
We want our operators to work on the things they should be spending time on, and let this AI engine deal with the volumetrics of the data and, as you said, help us with back-end decision making, course-of-action development, and things like that. Pat, have you seen any other areas, beyond what we were just talking about, that you think really fit this near-term application of AI?

Well, you know, ChatGPT and I are very good friends, so thank you for asking. ChatGPT belongs to a class of algorithms called large language models that has enabled a lot of new applications. But large language models are probabilistic token generators. What they do is take language and break the words up into tokens, which are parts of speech. The big breakthrough, the "large" part, was that if you give it enough human language, it figures out how nouns and verbs are related, and how certain facts are related. It doesn't always get it right, but it can predict the next token in the sequence. One of the things that surprises people is its ability to do things for which it hasn't been trained. I have an example in the book where I said, "ChatGPT, here is an Excel file that contains ship movements; find transshipment events." A transshipment event is when two ships come together and offload something to transfer to each other. And it says, "I found six transshipment events, and here they are." You go, "It shouldn't know what that is or how to do that analysis," yet it produces six events that are correct. And so researchers have found it's predicting patterns that are not all based on language.
So in space domain awareness, you could take a series of motion patterns, treat them all as tokens, and ask the model to predict the next token, which would be the future position of the spacecraft. Now, we have physics-based ways of solving that problem. You go, "Hey, I know the math," but it's possible these algorithms will find different math. Some of the companies doing motion prediction for self-driving cars, instead of doing the kinematics of "here's my velocity, my acceleration, my angle, so I should be here," are treating it as a token prediction problem and using large language models to guess where the vehicle should be in the future. That's an example of a domain where you go, "But you shouldn't be using text-based language models to drive a car," and some researchers say, "Well, they work." And as you know, there are a lot of things in our society where I don't know how it works, someone tried to explain it to me, it doesn't make any sense, but it does seem to work. I know that doesn't give us a good feeling as technical people. When Major Allen asks, "Can you explain to me how this would work?" and I answer, "Sir, it's preventing operational surprise. I have no idea how it's doing it and I can't explain it. But have you felt surprised lately?", that's a weird way of looking at the problem. The colonel who checks my requirement is going, "This is not going well for you right now." But we're entering a new domain where algorithms are solving problems, and we may not be able to understand why. One last aside: there's a lot of chemistry we don't understand. When you do chemistry in high school, you're told, "I have this reaction, and then it goes to this."
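The "motion as tokens" framing can be illustrated with a deliberately tiny stand-in: discretize positions into token bins, count which token follows which, and predict the most frequent successor. A real system would use a transformer; this Markov-style counter only shows the framing, and every name and sequence below is invented.

```python
# Toy illustration of treating motion as next-token prediction:
# discretized position bins are "tokens," and we predict the next
# one from bigram counts. A stand-in for an LLM-style predictor.
from collections import Counter, defaultdict

def train_bigram(token_seqs):
    """Count successor frequencies for each token across sequences."""
    nxt = defaultdict(Counter)
    for seq in token_seqs:
        for a, b in zip(seq, seq[1:]):
            nxt[a][b] += 1
    return nxt

def predict_next(nxt, token):
    """Most frequent successor of token, or None if token is unseen."""
    return nxt[token].most_common(1)[0][0] if nxt[token] else None

# Tracks discretized into bins: bin 3 is always followed by bin 4.
tracks = [[0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6]]
model = train_bigram(tracks)
print(predict_next(model, 3))  # -> 4
```

The physics-based propagator and this learned predictor answer the same question, "where next?", which is why the two approaches can be benchmarked against each other.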
It wasn't until I was in grad school, in a combustion course, that Professor Lieuwen at Georgia Tech said, "By the way, there are hundreds of precursor reactions that happen between those two things, and we can't actually observe them. We know they're there, but we simplified this for you in undergrad and said this turns into this. There's a lot of other stuff going on that's unobservable by any sensor." That may be true in medicine. That may be true in psychology. That may be true in how I fill out my expense reports. All of those could be totally new domains where AI could solve a problem. And we really like explainable AI; it's a great buzzword, it's a great concept, but we may have to accept that sometimes we can't explain it, and it works.

So, for all three of you, that's a really good segue. We are pushing the boundary here, right? Even though in a lot of cases we as humans are trying to develop AI to augment our human operations, we usually hold the human brain and human analysis as the gold standard. How do we know, as we develop these AI solutions, with hundreds and thousands of contractors working with all different parts of the government on different solutions, how are we going to work toward trusting them? And I don't mean trusting as in the Terminator sense; I mean trusting that what we said it was going to do meets the outcome, especially when that gold standard may be the human. How do we measure a machine against human performance?
Can we talk a little about how we see both government and industry heading down that path? We've talked about the invention of AI, but how do we get to the operational side: how do I know I can trust it, that I can step back and not worry about what happens, maybe not tomorrow but a month from now, or as it learns? Are we feeding that back?

Yeah, I'll throw in an opinion. Even if you're doing something super novel and very interesting, whatever it is, large language models for orbit determination, we do need to measure the outcome and evaluate performance against some benchmark. And there are 50 years of good data science standard practices on how to do that. In the space domain community, I think a lot of the bottom of the data science hierarchy of needs goes unmet: in many cases, access to data, publicly available, expert-labeled datasets, so people could throw some spicy new hotness at them and go, "OK, I knew there were six ship transshipment events in this data." How would you know there are six unless an expert who knows the answer labeled it? Again, it's a supervised kind of technique. I'm not the expert in AI, but it seems likely that over the next few years we're still going to see a lot of growth in supervised machine learning techniques, and our community has done a poor job, and could do a lot better, at solving the bottom of this pyramid. Let's go get all of the commercial GP data related to a finite set of class-labeled events, maneuvers, proximity events, RF changes, attitude changes, launches, reentries, whatever those events happen to be, and make it easily accessible to the machine learning community with nice clean labels. I would argue that's a nice place to start. Then I may not actually care, or maybe "worry" is the right word, whether you're misusing some cool technology; at least I can measure the performance.
And then I can tell you whether I'm going to spend a nickel on it.

So the data is really important. You made a great point; this morning you made a post about the MNIST handwritten digit dataset that came from back in the nineties. Almost all computer vision algorithms were advanced because of a dataset called ImageNet, millions of images that were just scraped off the internet. But they were labeled using a whole taxonomy of words that goes down: animal, mammal, cat, and then what type of cat. The people labeling it had a set of words from a dataset called WordNet that let them label the images. That was a major undertaking, and you go, "Well, you've got to start somewhere." On the satellite imagery side, the government released a dataset called xView, about a million labeled objects in 60 different classes, that almost all the satellite algorithms have been trained against. So at some point you have to ask: how much money are we spending trying to tune up algorithms against bad data, when we could just take the time, however long it takes, five years, to create a gold-standard dataset, and then you'd see that progress? And by the way, if you don't believe me, Tesla built one of the world's largest machine learning supercomputers, and they're pulling data off millions of Teslas and sending it back to the mothership. Every time you're in Autopilot and you yank the wheel and it disengages, you just labeled the data: you said whatever the car was doing, the human did not think was correct. Their data says they're actually right more often than we are. The jury is still out on that. Literally, the jury's still out on that one. But is that also why my insurance bill keeps going up?
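The benchmarking argument above, that a gold-standard labeled set lets you score any model regardless of how exotic its internals are, reduces to standard metrics. The sketch below is illustrative only: the labels and predictions are made up, and real evaluations would use an established library and a much larger test set.

```python
# Sketch of scoring a detector against expert-labeled gold data with
# precision and recall. Labels and predictions here are invented.

def precision_recall(gold, pred, positive="maneuver"):
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

gold = ["maneuver", "nominal", "maneuver", "nominal", "maneuver", "nominal"]
pred = ["maneuver", "maneuver", "maneuver", "nominal", "nominal", "nominal"]
print(precision_recall(gold, pred))  # precision 2/3, recall 2/3
```

With a shared labeled benchmark, "can I trust it?" becomes "here are its measured precision and recall on data an expert validated," which is the speaker's point about deciding whether to spend a nickel.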
Tony, actually, to your original question, how do we trust it? You know, 50% of us are below the median at any task, so that means a couple of us up here are below-median drivers. OK. And then, lastly, on the trusting part: if you have that labeled data, a gold standard that people have validated and that algorithms can be run against, trust comes with time. The really tricky thing about AI is that even when it's right 99.9% of the time, DARPA uses the phrase "statistically impressive but individually unreliable," you see the one example that's really obviously wrong and think, "Oh, this is never going to work." It was right so many times that you didn't even notice, but the one time it's really, really badly wrong, as if none of us have ever made a mistake, it just becomes, "Ah, this is never going to work." And when you have trained operators, they go, "I just can't accept that," and now you're building your way back up from the pit of despair.

Right. Yeah. For one, it feels like my parents in that sense, right? The one thing you did wrong, they remember. Also, on the note of trust, I did make a note to review Pat's expense reports for unobservable data, so we'll get around to that. Actually, Ginny approves them all. Oh, there we go; they're probably fine then.

So I want to circle back. One of the first interactions I had when Sean stood up the TAP Lab was, "Hey Sean, I think you're going to do great things. Also, I spent a year-plus with a bunch of really smart optics people and really smart mathematicians trying to build a high-quality simulated dataset where I knew ground truth." Because often I have great space data and no idea what ground truth is: we don't actually instrument a lot of the satellites, and I don't have access to the GPS receivers on board all of them.
So I don't really know what truth is, and I have no GPS data for debris. So I'm just guessing what was actually going on out there. But with high-quality simulated data, there's tradeoffs. All models are wrong, some are useful. Hopefully this is a useful one. You know, I really know what ground truth was in the simulation because I made it. So my first thing was, take this. It's dangerous to go alone, take this. You know, to be a good partner, I need to be able to understand, as his folks are coming up with new innovative ways of approaching problems, what does that look like on a dataset that I've seen other people work on, where I know what the performance is. And I think your point about ImageNet, about putting out challenge problems with good data that clearly define a problem, scope it down, and say, here's a whole bunch of data, a ton of investment, I want you to do this really well. That can really help the small teams that can't afford all of that upstart cost, that can't build Tesla's supercomputers and get Pat to train them every day with his driving skills, to be able to reduce that startup cost and allow them to work. So I think that is an area in the space domain awareness problem set where we have some underinvestment, and a little bit of that is because of classification. You know, just ask OSD's Plumb, right, Assistant Secretary Plumb, about overclassification in the space world. He's made a bunch of statements about it. So it is a challenge. And we talked about two ways to get around it: commercial data is not beholden to DoD classification guides, and simulated data, if done right, can sidestep a lot of those problems. So there are ways around it.
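The simulated-data argument above boils down to this: if you generate the scenario yourself, ground truth is known exactly, so any algorithm can be scored honestly. A minimal sketch, using a toy 1-D constant-velocity track rather than real astrodynamics:

```python
# Sketch of the simulated-data idea above: because we generated the "orbit,"
# ground truth is known exactly, so any estimator can be scored honestly.
# The dynamics are a toy 1-D constant-velocity stand-in, not astrodynamics.
import random

random.seed(0)

def simulate_track(n=50, v=1.0, noise=0.5):
    truth = [v * t for t in range(n)]                   # exact ground truth
    meas = [x + random.gauss(0, noise) for x in truth]  # noisy observations
    return truth, meas

def naive_estimator(meas):
    return meas                        # an estimator that trusts raw data

truth, meas = simulate_track()
err = [e - t for e, t in zip(naive_estimator(meas), truth)]
rmse = (sum(d * d for d in err) / len(err)) ** 0.5     # honest score vs truth
```

With real on-orbit data, `truth` is exactly the thing you don't have; in the simulation it is free, which is what lets different teams' approaches be compared on the same footing.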
We just need to find a good, effective way of doing it and establish those problems that are tied to operator needs, that can then have good verification and validation behind them to build that operator trust. Did you do an OG Legend of Zelda reference? Yes. I won't say who, but somebody in this room caught it. Yes, I know, I'm sure. So let me, I wanna touch on one more topic before we get a chance to let the people in the audience and virtually ask us some questions. We're delving into different aspects of the human role in this, so I wanna go down that path a little bit more. So let's talk about the workforce. Like we saw over the last 10, 15, 20 years, where the services and the intelligence community had to really address how do we evolve our workforce to deal with new cyber technology. Now we have artificial intelligence. How do we see this, on both the government and industry side, really pushing the boundaries of our workforce? Are we going to be able to keep up with it? Do we have the people? Can we recruit? Does it affect, you know, skill codes and things like that? Because we know the commercial world is pushing on it very heavily. We know we have our own classification issues with the workforce and how that limits things. So can we talk a little bit about that part of the human element as it relates to the workforce and applying it to the technology, especially when we're working in a sensitive topic area like SDA? Yeah, I've got a thought on that. So I think those very technical skills, AI literacy, software development, cloud computing, all of these various technical skills, even in technical fields, not everybody is, you know, an enterprise software engineer or a cloud computing expert. These tend to be partitioned-out skill sets. So if we want adoption of AI technology, we want trusted datasets that have been scrutinized and measured and built up rigorously, and I want workforce development.
This sounds like a multidiscipline kind of activity, where I'm gonna have to have people who are physically co-located from very different backgrounds. So we're doing a three-month, it's called the Apollo Accelerator, but they're innovation cycles, right? Build a prototype, show it off in three months. This first cohort, there was one gentleman fresh out of his undergrad who did not know what a RESTful API was but could solve, you know, quadratics and astrodynamics stuff with pen and paper for fun. There were DevOps guys, I had machine learning folks from national labs, and every single person on the team benefited very quickly by being in proximity with experts in something different. That sounds trivial, but making the incentives so that people want to show up in the same place and talk about something, that is a huge deal. If you don't have organic ways to grow cross-discipline teams, then you're gonna have to mandate it, and that may be very challenging. So, yeah. Yeah, I think you make a great point there. So, being, you know, a guy who sits in an actual program office, I always think T-shaped skills, right? You need to be deep in an area, you need to have an expertise that other people can rely on you for, but you need to build out an understanding of who's to your right and who's to your left sufficiently that you know how to work with them. Because, you know, I need coders who understand the domain they're working in. I need domain experts who understand how to work in a systems engineering process. I need systems engineering guys who understand that the thoughts and dreams that they're putting into this Visio diagram have to get turned into code. And they have to have enough of an understanding and appreciation of the other folks around them, and humbleness, to know how to interact with them, to ask the right questions at the right time.
And then the connective tissue. I see the benefits of working closely together. As a guy living in DC with customers out in Colorado, I will say you can make virtual work. But there is a barrier, and you need the right collaboration tools to help bring some of those barriers down, so that a conversation 2,000 miles away can still look like you're sitting across the table from somebody, can still feel organic and easy, and not like you're interrupting someone. So those are all important aspects. But yeah, I do think fluency in the techniques, to understand what algorithmic approaches might be valid here, is a very basic thing. And you've seen it with him already, right? You know, he's very much, I'm an operator, and then he starts talking about, you know, labeled datasets and large language models, and he's obviously fluent in the techniques that he's trying to operate around. I think that's important. And do you see, coming from the program office world, also the need, because we're talking about diversity, but diversity also includes leadership, right? Many of us that are in leadership ranks are not, you know, that deep into the technology. So they need to also be educated, right? So that they can be part of this diverse team, making the right decisions and understanding this new technology as it comes up, so we're making programmatic decisions, right? Yeah. I think, as a guy sitting in a, you know, doing ground software, or just software development, in a place that's used to launching satellites, there's a culture shock there, where, you know, software takes about 2.5 seconds to field. When I hit my little git push and I compile, now I've launched a new software version. That paradigm is very different from someone who spends a decade building a satellite that needs to work the first time. I can push it, I can check it and go, no, that didn't work. Oh, well, let me go fix a few things.
So it's a very different paradigm, and there's a lot of communication that you need to do to educate them on what your risk posture should be based on what the cost of making a mistake is. Now, that being said, it's easy to push code. That doesn't mean you should push it directly to prod and then go out for the weekend. On a Friday, of course, you know, you got everything done, it's 5:30 on a Friday, hit push, go to prod, wish Sean the best, hope his weekend shift goes well. That's a recipe for disaster. But, you know, at the same time, if I take the risk posture that every push to prod is a spacecraft launch, I'm not gonna innovate in cycles that close the loop with China. So, Pat, I'd like to ask you to kind of wrap this up here before we go to the audience. And I think your perspective, coming from probably one of the largest industry leaders in artificial intelligence: how do you see the workforce challenge, and how are you maybe dealing with it? Well, I resonate with Mr. Robinson's comment about the calculator, where it was like, the calculator is gonna ruin math. And, you know, the freak-out that professors have about, oh, they're gonna use ChatGPT to write papers. And I kind of go, like, I don't know why we write papers anymore. And they go, like, hey, Pat, that's a dumb thing to say from a guy that just wrote a book. And you go, like, I know. On page four it says the reason why I'm writing this is I don't think people are gonna read books anymore. Because I actually think it's like, if you have this companion that can answer all your questions, you can even be like, tell me a bedtime story, and it does. You can be like, hey, tell me everything about the James Webb Space Telescope, and it does. And so I think that, especially for children today who are growing up with the calculator that's called AI, there's a magic box that will make text appear. OK, you still have to check it and make sure it's correct. You still have to check against references.
You still have to say, is that my voice that's coming out of the box? But what I often tell our leaders in the intelligence community is, if you have someone that is 16 years old today, that will be entering your workforce when they're 22 or 23, and you sit them down and go, I need you to write me a 10-page paper, a 10-page prose report, on what's going on in Ukraine, they're gonna freak out. They're gonna go, I haven't written a 10-page paper ever. I haven't ever typed that many keys in a sequence in my life. I would go to my friend and go, here's my prompt: I need a paper on Ukraine, and it needs to cover the following topics, and I need you to pull this reference, and I need a map of this city, and I need to know what the major industries of the city are, what it connects to, how it works. But that person is gonna know everything that they need to put into prose. And so I think a lot of our workflows are gonna change. And the tradition of, you know, I mentioned expense reports, but I also have to do my annual performance appraisal, which the company requires, which is a giant wall of prose. And I won't say where that prose came from, but we can use this to streamline that. Our expectations of the things that we produce, I think, will change. And to Major Allen's point about, it's about operations, it's about avoiding surprise: like, I don't know that I need to write a five-page report that says here's how I avoided surprise today. I just need to avoid surprise. So I do think that the young people that are entering the workforce are going to bring a new set of skills, a new appreciation for human-machine teaming. By the way, this happened before, in the Iraq war. There's a famous quote that says the Iraq war was fought in chat, and it was like, the 18-year-olds that you deployed in 2003 were used to using chat.
And they're like, oh, there's this thing called mIRC on SIPRNet, and I'm gonna chat to someone and say, the drone is over here, here's what it's armed with, here's the target, and they're just chatting like they were at home when they were playing video games. So I do think, Tony, you're gonna see that kind of transformation, where young people are gonna bring new skills in, and we have to try really hard not to squash them and say, you have to do it the way we did this in 1990. Because, by the way, the 1990s are just like a thing that had Seinfeld and Friends, and that's all that people remember from that entire decade. Two great things, though. They will last forever. So with that, thank you. We're gonna take time to turn to the audience for some questions, our virtual audience too, and we've got some mics that are gonna be passed around. I have several questions posed here, and the first is a two-part question. I'm gonna start with the second part. Are you concerned about warfighting-scenario and adversarial-activity myopia within the future mission development framework? OK. Well, I think we definitely heard that one, and I think we have our operational person here all ready to champ at the bit at that one. So, Major? Already? Can you repeat it slowly for me? Are you concerned about warfighting-scenario and adversarial-activity myopia within the future mission development framework? Yeah, I'm not sure that I'm qualified. I mean, do we mean that we have myopia toward adversary actions, like a bias toward interpreting their behavior, that kind of thing? Can we get the first part of that question? The first part is: have operators been surprised by DRMs, design reference missions, quote, being surprised, unquote, in the space warfighting domain, that have been developed by individuals and teams outside of the DoD? So, you make a DRM, you build your system to achieve those goals, and it turns out it was a bad DRM, is that what we're getting at? Yeah, it's absolutely possible.
And, I mean, yeah, multidiscipline is a huge deal here, right? Like, the guy that is changing the gears on the truck isn't the one designing the architecture that he sits inside, typically. So... Well, I think on the question, too, maybe not necessarily the perfect intent of it, but we did touch on this earlier: the fact that, you know, this is a cycle of what we do on our side and how the adversary reacts. It's like, I call it the radar, then the radar detector, then the radar-detector detector, and now the detector is AI. Yeah. So, one example of interpreting that, Tony, would be like: OK, we went to the same school that they did, we used the same book. So when you go, hey, what's the best way to do this, you go, well, if you turn to this page, here's the way to do this maneuver with the minimum amount of propellant. And you go, well, why do you think they're gonna do it with the minimum amount of propellant? That's the same challenge with humans doing it. That's right. So an area for AI might be this: I would do this with the minimum amount of propellant, because only recently have we started to say dynamic space operations and freedom of maneuver, and what if I could refuel things? So I like to conserve propellant because my vehicles are really expensive. But you go, is that what the other guy is doing? Maybe he doesn't care, because he can launch vehicles whenever he wants, or he can refuel them, or he doesn't care about burning up a whole rocket. An AI engine could potentially get ahead of that with courses of action. If I was building an AI engine to mess with Sean, I would do things like that, and it doesn't do things that make physics sense or economic sense. It does things that win. And there's, yeah, I just want to jump in here. Now that I understand it better, I do have a strong opinion about this. Yes, we got it going. Yeah.
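The "same textbook" trap above can be made concrete. The textbook answer is the Hohmann transfer, the minimum-propellant two-burn path between circular coplanar orbits, and it is also slow and predictable. A sketch using the standard two-body formulas (Earth parameters, illustrative orbit radii):

```python
# The minimum-propellant assumption, made concrete: the Hohmann transfer is
# the cheapest two-burn path between circular coplanar orbits, but it is
# slow and flies a predictable arc.  Standard two-body formulas.
import math

MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter

def hohmann_delta_v(r1, r2):
    """Total delta-v (km/s) for a Hohmann transfer from radius r1 to r2."""
    dv1 = math.sqrt(MU / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(MU / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return dv1 + dv2

def hohmann_time_hours(r1, r2):
    """Transfer time: half the period of the transfer ellipse."""
    a = (r1 + r2) / 2
    return math.pi * math.sqrt(a ** 3 / MU) / 3600

leo, geo = 6778.0, 42164.0   # km: roughly 400-km-altitude LEO, and GEO
dv = hohmann_delta_v(leo, geo)        # about 3.9 km/s
hours = hohmann_time_hours(leo, geo)  # about 5.3 hours, on a known arc
```

An adversary willing to burn more propellant can take a faster, higher-energy transfer off that predictable arc, which is exactly why "assume minimum propellant" is a dangerous default.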
So, I think, you know, time travel is not real, as far as I know, unless you guys have something going on in the back room. But there are limits to how an engagement can be performed. Physics will constrain those things. That's a math problem. Now, the weapon system that I'm worried about, what is its performance? There's an envelope, and that's, you know, informed by intelligence, and I can make predictions and estimates, and maybe AI can help me with that. Assessing the intent, though, I feel like this is the trap. We should be very careful to test the null hypothesis, which is why I'm a little bit concerned about hyperfocus on anomaly detection. You can be a strange guy all day long; if you're not holding me at risk, there's no imminent use of force. So now we need to have a policy discussion. I think these discussions have been ongoing for some time. But what are the norms of behavior for hostile intent? How do you evaluate them? And then we should be messaging those things very broadly, to say there are certain behaviors that we do not tolerate, regardless of imminent use of force. And that's a discussion for leadership in the government, right? So, by the way, Star Trek postulated AI before it was real. It also postulated time travel, so maybe we'll be back here in a few years talking time travel with everyone. Right. So, all right, do we have time for one more question here in the room? Oh, no stealing. So, history is kind of rife with examples. You guys talked about both new technology and operational surprise: the Trojan Horse, Germany bypassing the Maginot Line with fast armor, refitting merchant ships. How will AI help the operator identify those things that achieve operational surprise, that we don't know the tactic for, that we don't know the kill chain for? How will AI help identify new things we aren't expecting? So this is really around the idea of really early indication work that we're not used to, right?
So... Perfect, Colonel Harvey. Yeah. So I'll say you achieved operational surprise when I saw you specifically raise your hand. Yeah, I knew what it was gonna be. So, you know, we kind of talked a little bit about, like, how do you not get so focused on, like, oh, I'm gonna do this minimum-fuel approach. I will call out, it's a reach, I think we're a long way from getting there in the space community, of being able to throw AI at course-of-action determination type stuff. But you've seen it in some of the reinforcement learning approaches. You know, before large language models became a thing, everyone was excited about reinforcement learning to solve all the problems. There was a series of successes that eventually got to something called AlphaStar, which was DeepMind's approach to solving the StarCraft problem, which was a set of three asymmetric forces where any two of them would be chosen at a time to go head to head on a complex map with fog of war. You had partial observability. So there are all kinds of problems that are analogous to reality, right? War is typically asymmetric, you have different forces, you have different capabilities, you have fog of war, you don't see everything, you know what it once looked like, you don't know what it looks like right now. And what they did is they said, OK, here's how we think you should solve the problem, by giving it millions of playthroughs of how people have done it. And we have a lot of exercise data of how we think people have white-carded, you know, what red's gonna do and what blue's gonna do, putting that together to teach an algorithm how to start playing the game that is war. And once you have a foundation of letting it play the game, let it fight itself, let it fight itself so many times that you start to have instances like they showed in AlphaStar, where it performed maneuvers, performed moves, that no human had yet been documented as having done.
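The self-play loop described above can be sketched in miniature: one shared policy plays against itself and reinforces whatever wins. The "game" here is a trivial duel where the higher number wins; it illustrates the mechanism, not StarCraft:

```python
# Toy sketch of self-play reinforcement: one shared policy plays itself
# and reinforces winning moves.  The "game" is a trivial higher-number-wins
# duel, invented for illustration -- nothing like StarCraft.
import random

random.seed(1)
ACTIONS = range(5)
weights = {a: 1.0 for a in ACTIONS}   # shared policy, shaped by self-play

def sample(w):
    """Draw an action with probability proportional to its weight."""
    r, acc = random.uniform(0, sum(w.values())), 0.0
    for action, weight in w.items():
        acc += weight
        if r <= acc:
            return action
    return max(w)                      # guard against float round-off

for _ in range(2000):                  # self-play episodes
    a, b = sample(weights), sample(weights)
    if a != b:
        weights[max(a, b)] += 0.1      # reinforce the winning move

# The policy concentrates on strong actions it discovered by itself;
# action 0 can never win, so its weight never grows.
```

In a richer game, the same loop is what lets the agent wander into admissible action space no human playthrough ever covered.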
So it managed to find action space that was admissible by the game that, you know, people had not previously thought of. It wasn't doing things that people were incapable of doing; they limited it so it couldn't just act faster than people can act, because computers are really fast and people can only act at a certain rate, which is still crazy fast when you get to the really good experts. So they didn't let it do superhuman things, but it did do things that humans had not done. And I think that that's at least one approach that I've seen that might get us to the point where we start to find action space that the adversary could explore that we haven't explored, which could then give us a chance to learn how to defeat those unexplored actions. Once you've seen it in a simulation, once you've seen it in a playthrough, you can start building your indications. You can say, OK, this is what it took to set that up, these are the observables that could have come out of this, this is how we could have detected that. But you need an opportunity to see that event to start thinking through how you decompose that problem and how you turn it into a real alert that an operator isn't gonna dismiss with, that's not gonna happen, that's not happening, I don't know what you're talking about. So if I pop up an alert today that says, you know, this spacecraft is gonna do this crazy maneuver and it's going to spend a ton of delta-V and circle past three different things and do something we've never seen before, the Sean that's working on the ops floor today is gonna go, that's a mistake, and he's gonna throw it away. He needs some kind of confidence that says, yeah, that's a real thing, I've seen that in exercises, I've seen that play out, I understand what this might be doing, I know what I should do now. Well, that's a perfect wrap to this session. So I want to thank you all for taking the time.
Thank the panel for having such a quality discussion in a very short amount of time. So thank you all. Thank you.

Taking the “Ground” Out of Ground Systems with AI-Enabled Virtualization and Automation

Video Duration: 51:00
  • Moderated by Mimi Geerges (CSPAN)
  • Jimmy Spivey (NASA)
  • Josh Perrius (Booz Allen)
  • Brian Schmanske (Former NRO, Georgetown University)
Full Transcript of Video

Thank you all for joining us. Thank you for having me. It's always a challenge to be right after lunch, so guys, please do not put anybody to sleep. It's gonna be on you guys if you do. I'm joined by Jimmy Spivey; he's the Chief of Mission Systems at Johnson Space Center for NASA. Brian Schmanske is former director of the integrated intelligence systems office at NRO. And Josh Perrius is at the end, Senior Vice President, Space Intelligence Business, for Booz Allen. So let's start, Jimmy, with you. Tell us about how AI is being used in your work at NASA, specifically with the ground systems, and what are the areas you're looking at in the future? So today, I would say we deploy some early-level AI in the monitoring of space systems. Historically, it's been a flight controller and an astronaut who have been trained. Mostly, the monitoring of the systems came to the flight controller in the mission control center in Houston, and that person was trained in their systems expertise, and they monitored systems for problems and failures, and also monitored, you know, where they are in the mission. The astronauts responded to a lot of incidents that we've had over the years because they had hands on the spacecraft. That's evolved over a number of years, and, you know, for the last 25-plus years we've been able to fly the International Space Station. And as we progressed through that continuous human presence in space, we've used certain systems to monitor the performance of the spacecraft, like leak detection for fluid systems and things like that. In some cases we've even used it to predict, as the previous panel talked about. Early on in the space station program, we had some issues with our antennas acquiring certain communication satellites.
And so we used, again, what I would say is an early version of AI. Once we realized what part of the mission, what times of the year, because it affected the angle and the antenna selection on the satellites, we gathered a lot of data and put that into software systems that then could be predictive and say, hey, this time of the year, this orbit, this situation or this attitude of the space station, these are the times when you're gonna see failures. And then our mission planning team could actually put that into the timeline, and they could do that as far as a year in advance, which really helped us with a lot of our mission planning. And Brian, to you now, on your experience in the IC and your thoughts regarding both generative AI, large language models, and the shift to a more automated collection management and tasking. Sure, thanks. Yeah, my experience in the IC: I joined the NRO after some time in the Marine Corps. I was born in Huntsville, Alabama, so I was always fascinated with space, and I really joined the NRO just to get into space, and I worked on a lot of space systems. After about 15 years in the NRO, I went over to work at CIA, and I was pulled back to the NRO with something called the Commercial GEOINT Activity, where we started evaluating commercial start-ups, the space start-ups and new space, and how we could apply these new space assets toward NRO problems. It became apparent to me at that time that space, the space segment, was really becoming a commodity, and the magic was on the ground. So I had an opportunity to go over to the integrated intelligence systems office, where we were starting to automate a lot of the processes, sense-making and collection management activities there. At the time I retired, maybe a little over a year and a half ago, it was just before ChatGPT made its big splash. We were still doing a lot of heuristic models.
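The predictive scheduling Jimmy describes, and the heuristic precursor models Brian turns to next, share a shape: mine historical logs for the conditions under which failures or events occurred, then flag those windows in future plans. A minimal sketch, with field names and log entries invented for illustration:

```python
# Minimal sketch of the "early AI" described above: estimate, from
# hypothetical historical logs, the failure rate of antenna acquisitions
# under each (month, attitude) condition, so planners can schedule around
# high-risk windows.  All fields and records are invented.
from collections import Counter

log = [  # (month, attitude_regime, outcome)
    (6, "high-beta", "fail"), (6, "high-beta", "fail"),
    (6, "high-beta", "ok"),   (1, "nominal", "ok"),
    (1, "nominal", "ok"),     (12, "high-beta", "fail"),
]

fails, totals = Counter(), Counter()
for month, attitude, outcome in log:
    totals[(month, attitude)] += 1
    if outcome == "fail":
        fails[(month, attitude)] += 1

def failure_risk(month, attitude):
    """Historical failure fraction for a condition; 0.0 if never observed."""
    key = (month, attitude)
    return fails[key] / totals[key] if totals[key] else 0.0
```

With enough history, a mission-planning team can fold `failure_risk` into the timeline as far out as the orbit and season are predictable, which is essentially the year-ahead planning described above.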
So the heuristic models being: an analyst would tell us, these are the types of activities we see before an event; once we see those activities, then we task accordingly. The problem with these models, although they were great for automation of the process, was they're limited, because you needed the analyst to predict what was gonna happen. So there's a lot of very touch-labor, analyst-intensive processes. Once ChatGPT made a splash, I thought, hey, this is something we could probably use to help collection management. The problem that I see, though, even though there's great promise, is the training of the ChatGPT-style models. You need to bring it into the IC, you need to train it on the IC's lexicon and vernacular, which is different, and you also need to train it based on years of experience: these are the drivers that drove collection, these are the collection decisions that were being made. And most importantly, as an engineer, I understand feedback: you need to know what the impact of that collection was. And having all that information together, I'm not sure it exists, and that's gonna make it difficult to adopt these tools. And I think the biggest thing was really, what was the impact? Did the collection have the desired impact? And when you look at past performance, you really have one data point; you don't really know what the alternatives were to that particular piece of collection. And then, once we train the models: suppose we use the last 20 years. The last 20 years have been spent on counterterrorism with a particular constellation. Is that model that we train based on the historic information gonna be representative of what we're gonna see moving forward, when the collection architecture is gonna change and our focus has pivoted away from counterterrorism to INDOPACOM? Josh, what do you do about that situation?
I mean, obviously, when you train AI, it's going to be on historical data. So how do you get it to predict what could happen in the future? Isn't that impossible? I mean, it's definitely difficult. I think it goes back to kind of basics: you know, getting access to the data, making your systems open, and enabling the insertion of those kinds of algorithms and machine learning models into your systems. There's still a lot of work to be done on just those kinds of bare basics to get the system ready to handle those. So in some cases, where you don't have the perfect dataset yet, or you're gonna be collecting that data, just instrumenting and designing and architecting your systems to allow that future flexibility is really, really important. And that's what we found: you know, looking at how do we design these fully open systems that are modular, that allow you to insert technology and capabilities that you might not predict. I think, you know, ground systems have a history of being kind of exquisitely designed, you know, long-lead, kind of monolithic systems that do one thing really well, designed for their particular constellation. And we need to, you know, we use the phrase sometimes, perforate the stovepipes, to get that data out of those systems and make it available, so that you can do the analysis on it, you can plug it into algorithms, you can really make it interoperable more quickly, rather than trying to over-architect a bunch of ICDs up front, knowing where the system interfaces are going to be. Well, speaking of design, you wrote a blog post, and the title was Why Are Space System Experts Designing Ground Systems? So how should ground systems be designed?
So, I mean, it was a little tongue-in-cheek, with the idea of trying to point out that a lot of times, historically, we have, you know, these really brilliant, we've talked to a bunch of them today and we'll see more, space engineers, people who are experts in either the sensor phenomenology or orbital mechanics, and they design these amazing spacecraft, and they're like, oh yeah, we need to build a ground system afterwards, I guess, to control this and to get the data out of it. But really, if you look at the way software has evolved really rapidly over the last decade, there are just so many differences and changes in how we do things that you really need software engineers who are focused on that skill set and that capability to be designing and building your ground systems. Because there are these new paradigms with DevOps and open architectures, and just so many different techniques, that you can't expect people whose job it is to be really, really great at space to understand exactly how to build those. You need to have those ground-system, software-focused people being the ones designing that and bringing those capabilities into those ground systems, designing them for that future capability. Because, as we talked about, I think on one of the earlier panels, I think it was Colonel Harvey who talked about this: the mindset of, OK, I have to make this perfect because I'm not gonna be able to go up in space very easily and manipulate it, is different in software, right? So we gotta fail fast, move quickly, design it, but you also have to design it with flexibility in mind, because you're not gonna be able to change that sensor or that platform in space very easily. You're gonna need to make all the mission changes that are gonna need to come on the ground. We're gonna delve into that a little bit more.
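The "insert capabilities you might not predict" idea above is essentially a plugin architecture: the ground system discovers processing steps through a registry instead of hard-wiring them. A minimal sketch, with names that are illustrative rather than any real API:

```python
# Minimal sketch of a modular ground-processing pipeline: new steps register
# themselves and slot in without redesigning the system.  The plugin names
# and frame fields are invented for illustration.
PLUGINS = {}

def plugin(name):
    """Decorator that registers a processing step under a name."""
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("despin")
def despin(frame):
    return {**frame, "despun": True}

@plugin("cloud_mask")
def cloud_mask(frame):
    return {**frame, "clouds_flagged": True}

def process(frame, pipeline):
    """Run a frame through a pipeline named at runtime, not compile time."""
    for step in pipeline:
        frame = PLUGINS[step](frame)
    return frame

result = process({"id": 42}, ["despin", "cloud_mask"])
```

A new algorithm, even one nobody anticipated when the system was built, only needs to register itself; the pipeline definition is data, not code.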
But Jimmy, when considering modernization and going to that next level, you've said, quote, silicon is cheap and carbon is expensive. What do you mean by that? Well, in my world, cost drives a lot of things. And usually my experience has been, it's the people we need, not really to design and deploy the systems, but to maintain the systems, that's very expensive. And so an automated system that actually helps you maintain the existing system that you have, that's really, I think, a goal that we have to go after in the future. So anytime you have a complex system, like, for instance, the mission systems in the mission control center, it's continually monitoring, you know, an onboard spacecraft with anywhere from, nowadays, six to ten astronauts on board, and we wanna make sure that they're safe, the spacecraft is safe, and we execute the mission. And the number of people it requires, the carbon it requires, to make sure those systems are up to date and are maintained properly: we do software updates sometimes weekly for the multiple systems that we have, and that all takes people. So a system that can really automate that maintenance, of your sustaining engineering, as we like to call it, is really something that we need to move to in the future. Josh, back to you. You mentioned DevOps and other open software development frameworks. How does that work when you're dealing with the tension between the old systems and the new systems? Yeah, no, it's a great question. It's a big challenge, because, you know, the infrastructure for these space systems, especially in defense and the IC, a lot of them are pretty mature, I guess, would be a nice way to say it. And so you have different parts in place, and some of them have really great business logic and algorithms inside them that have been thought through, but they're just not to the modern technology standard.
And so what we've looked at doing is standing up software frameworks that have that open architecture, are DevOps-enabled, and take a microservices approach, so that we can sometimes take that legacy software as is, deploy it initially into those frameworks, and, following design patterns there, isolate it and modernize it later. That way we can take on the different pieces we need to fix at different times, depending on the mission, which lets us capture that investment quickly while also moving to a more open, more maintainable architecture where we can patch automatically and get all the other benefits of working in a DevOps environment. So that's one thing. The other is a lot of data services around enabling interoperability. We do a lot of translation of legacy formats and legacy data into modern, common formats so we can act as bridges between different systems, whether modernized or legacy. We really have a 30-year history of doing that kind of data translation between ground systems in the IC, so that you don't have to do a big-bang upgrade where all your systems change at once; you can do it piecemeal, with middleware in between, to enable rolling updates. And what about hardware? So, you know, that's a challenge. Through the years we've been part of these migrations, taking things from DEC Alpha-type OpenVMS systems all the way through to the cloud now, and historically it's been massive rewrites or full redesigns as you move to different hardware platforms. The nice place we've gotten to now is that the hardware is more commodity, and we're using container-based technologies like Kubernetes and Docker.
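The legacy-to-modern data translation described here can be sketched very simply. The fixed-width record layout below is entirely invented for illustration; the point is only the middleware pattern of parsing a legacy wire format and re-emitting it in a common modern format such as JSON.

```python
import json

# Hypothetical legacy fixed-width telemetry record, made up for this sketch:
# an 8-char satellite ID, a 14-char epoch (YYYYMMDDhhmmss), and a 10-char
# battery reading in millivolts, packed into one line.
LEGACY_FIELDS = [("sat_id", 0, 8), ("epoch", 8, 22), ("battery_mv", 22, 32)]

def translate_record(raw: str) -> str:
    """Translate one fixed-width legacy record into a modern JSON message."""
    rec = {name: raw[start:end].strip() for name, start, end in LEGACY_FIELDS}
    rec["battery_v"] = int(rec.pop("battery_mv")) / 1000.0  # normalize units
    return json.dumps(rec)

print(translate_record("SAT-0042" + "20240101120000" + "     12600"))
```

A bridge service built this way can sit between a legacy ground system and a modernized one, which is what makes the rolling, piecemeal upgrades described above possible.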
We can flex the things we're now mostly building and designing to deploy in the cloud into either on-prem or edge solutions, because that hardware standard lets us move across the different platforms on the ground. Brian, I want to ask you about using AI in space: using AI to analyze data in orbit without having to bring it down to Earth, storing it and parsing it and doing all that in space. What are the possibilities there? Yeah, this is one of those areas that always seems to be in question. There's such a tremendous amount of processing power on the ground, and when you see the improvements in communications with things like Starlink, it just seems easy to bring the raw data down, process it, and push the product back out to whoever needs it, wherever they are in the world. But I think there are some instances, when constellations of satellites need to work together, where you want to do some analysis up in orbit to reduce the latency and allow the satellites in the constellation to work better amongst each other. I think there's some promise there. It's still being worked, and I can point to SDA as a driver of some of these requirements, because they do everything unclassified. Their most recent solicitation, I think it was TA7, was talking about a need for battlespace management, command and control, and communications hardware and middleware to do certain types of processing. They're already thinking about things to do in orbit. Right now the compute power you have in orbit is limited. Because when you think about image processing, that's even more data-intensive, isn't it? It is, yeah. So you're limited in the number of models you can run, and you're limited in the accuracy of those models. But in some cases it might be good enough, if you just want to point the next sensor at an area of interest.
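That "good enough" on-board inference idea can be illustrated with a toy sketch. Here a coarse model is assumed to have already scored a grid of image tiles (the scores and the threshold are invented); rather than downlinking the whole frame, the satellite just cues the next sensor at the highest-scoring tile.

```python
# Toy sketch of on-orbit cueing: pick the image tile with the strongest
# detection score and task the next sensor there, or do nothing if no tile
# clears the (made-up) confidence threshold.
def cue_next_sensor(tile_scores, threshold=0.6):
    """Return (row, col) of the best tile to task next, or None."""
    best, best_rc = threshold, None
    for r, row in enumerate(tile_scores):
        for c, score in enumerate(row):
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

scores = [[0.1, 0.2, 0.1],
          [0.3, 0.9, 0.4],  # a strong detection in the center tile
          [0.1, 0.2, 0.1]]
print(cue_next_sensor(scores))
```

The appeal of this pattern is that only a tiny cueing decision, not the raw imagery, has to cross the constrained space link.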
So the types of things they want to do in orbit: AI/ML image and signal processing, parallel processing, distributed processing. These are all things that need to be enabled in order to really do AI/ML in orbit effectively. And you also see some startups starting to do space-qualified compute processing in orbit, with launches supposed to happen anywhere between 2024 and 2025. Jimmy, turning to you, and talking about the vast quantities of data coming from space, storing it, parsing it, being able to have access to it: what are you thinking about in that case, specifically about the amount of data and how to deal with it? Yeah, so I think the data piece is key. For AI to work, it needs enormous amounts of data, the data needs to be formatted, and it needs to be accessible. Any system you deploy that you want to use an AI tool on has got to have all three of those. For us, the amount of data we bring down is limited by the speed of the systems we have to bring it down, and then the storage, the storage of that data, I think is really key for AI to work. We talk a lot about cloud computing and cloud data storage, and I think the future of AI and cloud technology go hand in hand. Without the data, AI can't do its job. We talk about it learning on enormous amounts of data, repetitive data, so it can learn and then do predictive things for you, like leak detection on critical spacecraft systems. You know, think about a Mars mission, where the comm delay is 30 minutes. In today's world, a flight controller in the Mission Control Center can send a command to the Space Station and it's there within three or four seconds.
Now you're talking about 30 minutes for a command to get to a spacecraft, and another 30 minutes before you get any response back saying, hey, did it get there in the proper order, did the computer take it on board? So a lot of that data needs to be resident on the computer and accessible so that AI can do its job, if you use AI to monitor your spacecraft over the long term. So I think the keys are the data storage, the data location, and how those systems get to that data. And then it's got to understand the format. In my world, formatting data is huge, because we have so many international partners and so many commercial partners, and everybody looks at data a little bit differently. The translation of the data into a format so it can go from one system to the next successfully is key, and using cloud technology for that data storage, again, I think is key to making AI a really applicable tool for human spaceflight in the future. Josh, your thoughts on that, and specifically how you go from one thing to the other. Do you have to get your cloud set up, get all your data into the cloud, and all that done before you start thinking about AI? So, no. I mean, you should be thinking about it right away, because, to the point of the earlier panel, there needs to be the systems engineering and the mission and problem decomposition to really think through what you're trying to solve with those techniques. But I do think that from a technology perspective it's very helpful to get that infrastructure in place in the cloud, and that shared data lake, which is more than just storage, right? There is a nice aspect to the fact that cloud technology and the advances in commodity computing have made raw storage very cheap, so we can hold on to a lot more data than we used to.
But there are still lots of challenges, back to the carbon-over-silicon comment, and what's left is that there's a lot of work to get that data properly engineered and stored in a way people can access: having tiers of data, so that you're storing the most time-sensitive data, the data you need right away, in environments where you can get to it quickly, versus other data where you can have more of a cold-storage setup. All of that design and engineering needs to be done, and that's a lot of the work we've been part of in the intelligence community, and it has been moving forward really well. To enable the development and design of machine learning algorithms and AI, you want to get all that data co-located, have a really good description and data catalog of what that data is, what's available, and how to interact with it, and have data services that can translate the data as needed from one format to another, so that you can bring in your data scientists, mission experts, and AI/ML engineers and they can really start working right away to have impact. You know, I've read studies suggesting that when you start an AI/ML-type project from scratch, your engineers, your highly paid Ph.D.s, these brilliant people... even though I was a Marine with a Ph.D. It's always exciting. I was in the Marine Corps also. Sorry. But, you know, they spend 90 percent of their time just formatting data, getting it prepared, getting ready to work on the algorithms. If you do all that infrastructure work and have it in place, they can really focus their energy on the hard problem they're solving, or that particular algorithm, to get the impacts that are needed. Brian, your thoughts on that, as far as the cloud and the culture shift required? Because I know the IC is not going to like having their data somewhere they can't see it and hold it, right? There are a lot of issues to be resolved.
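The tiering idea mentioned here, hot storage for time-sensitive data and cold storage for the rest, can be sketched as a simple policy function. The thresholds and tier names below are invented for illustration, not taken from any real system.

```python
from datetime import datetime, timedelta, timezone

# Illustrative data-tiering policy: recently or heavily accessed data stays
# on fast "hot" storage; older, rarely touched data rolls toward cold
# storage. All cutoffs here are assumptions made for the sketch.
def pick_tier(last_access: datetime, accesses_30d: int, now: datetime) -> str:
    age = now - last_access
    if age < timedelta(days=7) or accesses_30d > 100:
        return "hot"
    if age < timedelta(days=90):
        return "warm"
    return "cold"

now = datetime.now(timezone.utc)
print(pick_tier(now - timedelta(days=1), accesses_30d=5, now=now))
print(pick_tier(now - timedelta(days=400), accesses_30d=0, now=now))
```

In practice a policy like this would run as part of the data catalog and lifecycle tooling described above, migrating objects between storage classes automatically.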
The IC likes to stovepipe their data. They don't like to share. There are good reasons for that and there are bad reasons for that, even with our contractor base, right? We had a lot of interesting AI programs that got built up organically. They all had their own mission engineering and their own data engineering, so they built up their own stovepipes and put their applications on top of them, which is fine, except nobody else can use their data. So now the data is locked up. What we'd like to do is free up the data, because there's a lot of cost in the data, the data engineering, and the data management, and, like Josh said, provide an environment where we can bring in a data science team and they can immediately start working on the problem, because the data is already accessible and the data security has already been worked out. They can just log in and start working, and save six months on a program. Jimmy, you at NASA have to integrate across a lot of different things. Your ground systems have to speak to other ground systems, and you have to speak to international partners, to your commercial partners, and to the general public. So how are you thinking about horizontal integration of your ground systems? Well, it's really huge today, what we have to do, so the technology we need to do it is key, and looking forward to the Artemis campaign, that grows exponentially. We're going to have more commercial partners and more international partners, though many are the same ones we have today, and how we share that data and integrate our data and systems together matters. And also, we have to protect their data, right? There's proprietary data there, and we're very sensitive at NASA Johnson Space Center about making sure we only share the data that's supposed to be shared and don't share the data that can't be.
So it's a very difficult problem across the board. Many of our systems are designed to do that, but I admit that a lot of times today it's kind of a brute-force method, where we have certain access to certain systems and then we cut it off, and a lot of that is managed through our distributed architecture, where folks can get access to the MCC systems through VPNs, through how they log in. But once they log in, all of that is managed internally to the system, so as we grow partners that problem becomes greater, and that horizontal integration is always a difficult challenge. And as we bring on new partners and new companies, there are times, when the NASA mission isn't there, when there aren't astronauts going to or coming back from the Moon or on the surface, that there are ideas for commercial partners to actually operate certain things. How do we then maintain those systems so the right people have the right access to the data they need? And then, once again, there's the very difficult problem of protecting the data we can't share. So it's a continual battle, and there could be AI systems in the future that help manage all that (you can see this, you can't see that), but we're just not there yet. Josh, what are the solutions to some of those issues, then? So, it's a great point. I've been talking a little bit about AI and ML and some of those infrastructure things in terms of how to enable the mission, but they can also be turned inward to help make sure the system is resilient and secure. There's lots of opportunity to use AI/ML in a zero trust architecture environment, where you're collecting all that log data and you have a good security architecture in place to keep track of who's accessing what. Because lots of these problems in the space domain are multi-domain.
There are aspects in the commercial world, and we talk about commercial integration on the DoD and IC side. But also, when you look at something like space traffic management as a problem, even though we're not the SDA panel, it's a multi-domain issue where there are lots of commercial providers, lots of people in space who aren't part of the defense or intel community, or part of the adversary environment. So all that data needs to be shared. Having an approach that can quickly discern bad actors and bad behavior that might be happening in the system, or just faults and things like that, in order to design the system for more resiliency, helps a lot, and the same on the cyber piece, to help keep track of those same things. But I think the big one for AI is predicting faults; getting all that data together, and the openness of your architecture, enables that as well by getting the data shared. And we will start taking questions from the audience and from our virtual audience as well, so you can start getting those ready as we wrap up. Your thoughts, Brian, on cybersecurity? Yeah, I think AI can play a large role in cybersecurity, in making the ground systems, and frankly the entire architecture, more resilient. Josh already mentioned what AI can do on the problem-detection side, right? It can look for anomalies in your systems, it can look for unusual behaviors by your users and systems on your network, and it can integrate cyber intelligence into the framework, because if you see something going on somewhere else in the world, you can start taking protective measures on your own network against that potential threat.
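The log-driven, zero-trust-style monitoring described here can be sketched as a frequency baseline: learn which resources each account normally touches, then flag accesses that fall outside that baseline. The users, resources, and events below are invented for illustration; a real system would feed this from actual access logs.

```python
from collections import Counter

# Minimal sketch of access-log anomaly flagging: build a per-user baseline
# of accessed resources, then flag new accesses to resources the user has
# rarely or never touched. All names here are made up.
def build_baseline(events):
    baseline = {}
    for user, resource in events:
        baseline.setdefault(user, Counter())[resource] += 1
    return baseline

def flag_anomalies(baseline, new_events, min_seen=1):
    """Return accesses falling outside each user's historical baseline."""
    return [(u, r) for u, r in new_events
            if baseline.get(u, Counter())[r] < min_seen]

history = [("ops1", "telemetry"), ("ops1", "telemetry"), ("ops2", "planning")]
baseline = build_baseline(history)
print(flag_anomalies(baseline, [("ops1", "telemetry"), ("ops1", "crypto_keys")]))
```

This is the simplest possible stand-in for the ML-based behavior models the panel mentions, but it shows the shape of the pipeline: centralized logs in, per-entity baselines, anomalies out.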
I think once you've identified a problem, or you've experienced an attack, you can automatically allocate your compute resources to isolate where that problem is, protect the rest of your network, and make sure you can continue with your mission. And at the same time, as you're isolating that attack, you can automatically deploy temporary protective measures before you call in the security teams to do the actual research. Brian, from your perspective, where is the IC on that spectrum from human in the loop to human on the loop? Right. They're still very anchored in human in the loop. Why is that? Is it just because they don't trust computers, or... No, I think the real reason is not that they don't trust computers. The architecture really hasn't changed much over the last several decades, so they have these processes that have worked, and they see no reason to change them because they've been working fine. Why would you want to accept the technical risk associated with changing when you don't have to? I don't think that works in the era of proliferated LEO architectures like SDA is proposing, and I'll point back to what SDA is doing because they publicly acknowledge these things, right? The temporal sampling rates are much higher than what you experience today with less proliferated architectures. That gives you less time to identify objects of interest or activities of interest, and less time to task the next opportunity to do a collection in order to maintain contact or custody of a particular object, and just to make sure you know what's going on. So I think the more satellites you get in orbit, the less time you have to react, and you're going to be forced into automation in order to make really effective use of the assets you have in orbit.
And maybe if I could just add: cyber threats are a daily and growing threat in our world as well, and we've looked at using AI, as these two gentlemen described, to be that front wall. Could you use AI to monitor the firewalls we use to protect data in the Mission Control Center, to look for those threats? The other comment I would add is that we have to worry that bad actors could use AI on the other end, too; they could start using AI as the threat initiator to try to break into those things. So we may have to fight fire with fire in that regard. So, Jimmy, if cost wasn't such a driver for you, what's the art of the possible? If you could just dream, what would it be? Oh, wow, we'll get in trouble here. But I think a system that takes a minimal number of people to sustain. I'll just use the example of cyber threats: if we have a reliable AI system that is our front line of defense for cyber threats, then the number of people we have deployed doing that today can go down, and that saves us money. So that's a dream world. Then there are the onboard monitoring systems, or ground monitoring systems, to look for things like leak detection on board our spacecraft, or planning the next three months of the mission, continuous ISS operations, the planning and then the replanning. A lot of times we're planning for a launch and there's bad weather. There's not really anything we can do about weather; maybe we'll say AI can take care of weather for us, but I don't think so. So now a launch slips three, four, or five days, and now you're constantly replanning. If you had a system that could replan things like that with a minimum number of humans involved, at the end of the day that's going to save me a lot of money, which is my dream goal. That is your dream. OK.
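The replanning idea can be sketched in a few lines: when a launch slips, shift every downstream activity by the same delta instead of rebuilding the plan by hand. The activities and dates below are made up, and a real replanner would also re-check constraints (lighting, crew duty cycles, comm windows) that this sketch omits.

```python
from datetime import datetime, timedelta

# Toy replanning sketch: slide every scheduled activity by the launch slip.
def replan(schedule, slip_days):
    """Return a new schedule with each activity shifted by slip_days."""
    return [(name, t + timedelta(days=slip_days)) for name, t in schedule]

plan = [("launch", datetime(2025, 3, 1, 9, 0)),    # invented example plan
        ("docking", datetime(2025, 3, 2, 14, 0)),
        ("eva", datetime(2025, 3, 5, 12, 0))]
for name, t in replan(plan, slip_days=4):
    print(name, t.isoformat())
```

Even this trivial version shows where the savings come from: the mechanical shift is automated, so the humans only have to review the handful of constraints that the slip actually breaks.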
Well, we'll take questions from the audience and from our friends joining virtually as well. Is there anybody who wants to ask a question? Anybody? Anything coming in from virtual? Yes. OK. So, looking forward a little bit here: we stream stuff down to the ground and do the processing here, ground control down here. That's highly data-intensive and bandwidth-intensive, and that's not always possible. Are there any notions in any of your communities of taking some of the ground station functions and moving them to the edge, doing them in space as opposed to on the ground? What does that road map look like? So, I'll start. I really resonated with what Dr. Schmancy was saying: I think at this point the use cases are fairly limited for what makes sense to push to the edge, because SWaP is so constrained up in space. I do think that as we can train algorithms, especially around image detection and software-defined radios and things like that, you can push some of those things to the edge, and there's value in that. And the other example you gave is a good one, similar to how we sometimes think about UASs and things that work in concert and have communication between them. Some of the things the Space Development Agency is doing around this idea of a proliferated architecture, where it's almost like a swarm with cross-link communications, seem like good approaches. But I do think a lot of work still has to be done on getting the compute improved in space, and the real shift to smaller and cheaper satellites isn't helping drive up the compute capability.
But, as you know, Moore's law continues, so we should see better and better computing that gets smaller and lighter, that we can get up there. And as launch costs go down, and we've already seen that, right, we'll get the ability to put heavier things into space more regularly. So I do think it's coming, but it's a little farther out because of those constraints, in my opinion. Yeah, when I see requirements like those show up in SBIRs and STTRs, to me that suggests the technology is still very immature and quite a long way out. Maybe with the stuff you're seeing on the commercial side: there are always questions about flying commercial hardware in orbit that isn't space-qualified, and how long it will last, and there have been arguments one way or the other. But the military and the IC seem to put a lot of stock in space-qualified hardware, so you'll probably see a lot less of that reliance there than on the commercial side, where they're more willing to take the risk and just put a computer on a bus and launch it to see what happens. So if that works in 2025, you could see an acceleration of more and more of the ground compute moving up into orbit. We have a question in from our virtual audience: understanding that AI uses large language models, has any research been done into the use of large action models to execute on human intentions before a command request is input? Do you know what that means? I think I follow the logic of the question, but I don't know of anything around large action models. It might be a new term; I've never heard it. Yeah. No, but maybe stepping back from the wording to the intent: the idea of using AI/ML to better predict what people are going to do. That's definitely something a lot of work, or at least research and thinking, is being done around.
You know, we've done some work around belief models, but it takes a lot of what I'd call a priori knowledge from analysts, about how people have behaved in the past and what they're likely to do, being put into those models. It's hard to do that with machine learning models, because we don't have a lot of data collected on actions from people. Yeah, if I were looking for this, I would look to the medical community. I know there's some work being done for people with Alzheimer's, who tend to get agitated and then have violent outbursts, looking at ways to use AI to predict when these outbursts are going to occur so you can start taking corrective action to calm the person back down. So if there is work being done in this area, it's probably being done first in the medical community, and then, once it shows promise, I suspect it would be moved over to the IC, like computer vision was. So, go ahead. Yes, my question is about the workforce. So, Jimmy, from a NASA perspective, and Dr. Brian, from an intelligence community perspective: Jimmy, your missions are maybe to use AI and data to protect the humans going up; Dr. Brian, your data is going to be used to protect the humans doing missions here on the ground. But I want to ask about the humans supporting the mission. What is the evolution of those people in the background, in the back rooms, supporting the missions? We've talked a lot about how this will enable both human spaceflight and the intelligence operators, but talk to me a little bit about what you see as the future evolution of the humans supporting your missions. For my area, human spaceflight, we want to be able to automate things so that we can have fewer people in those back rooms at times.
But in order to do that, AI has to be reliable. I can't stress that enough. It has to be reliable, and there's not a lot of trust right now in my community, and I would say in the human spaceflight community generally, in those systems, enough to take a room of people today and make it half a room, or even a quarter of a room. The eyes on the data, the eyes monitoring the system: I think it's going to be a tough stretch, but we're going to have to prove ourselves along the way, and I think we can do that, because I've already seen, in my time in the human spaceflight world, the evolution of those systems. The tools our flight controllers have on console today to monitor the spacecraft are much more capable than when I started, when I was sitting monitoring a Space Shuttle: a lot more automation, a lot more long-term leak detection, things like that, that definitely tell the human, here's what's going on with your spacecraft. The thing I struggle with, and maybe my community does too, is that we're the ones who, through that experience, through just flying missions, were able to put together those tools and those algorithms and teach the machine to do that. If I stop doing that because I get to a certain point, how do I do it in the future? So I'm always going to have to have somebody there who understands how to operate a spacecraft system, to think about every failure that can happen. We do that all the time. A lot of times our engineering counterparts will say, you guys always think this will fail and this will fail; you stack failures. But we have seen things in human spaceflight that surprised us. A system we thought would never freeze up froze; it happened to me on a Space Shuttle mission. Three redundant computers? Oh, they'll never go down at the same time.
It happened on ISS early in the program. I think it was 6A, where we lost three of the main MDMs; they all went down. We had a backup system that saved the day, and there were those who had said you don't need a backup system, because you have three and three will never fail. So those are the kinds of things I've learned over my career: we always think about the worst thing that can happen, and then we train ourselves for it. And in recent years we've put that, no kidding, maybe paranoia about spacecraft failures into our automated tools. So for me the question for the future is, will I ever be smart enough to just put all of that into a tool and walk away? I don't know. We don't trust it today. Yeah, when I was... oh, sorry, go ahead. Yeah, when I was in I2, we started doing some things a little bit differently. I would get these Air Force officers who had master's degrees in data science from Stanford, or Ph.D.s from MIT or UVA, and our instinct was to turn them into program managers and COTRs to run contracts. Instead of doing that, we thought, look, why don't we see if we can actually have them use their skills, because they were the smartest people we had, and they were coming straight out of school; they may have been smarter than the people in industry whose degrees might be getting a little stale. So we started building environments in which they could actually code tools, and I think it was being pretty successful. We'll see if it works long term, when they actually have to maintain the tools. But we looked at what the Air Force was doing with Kobayashi Maru and Kessel Run, and it seemed very promising. My thought was always that the person who best knows the problem, and how to solve it, or, I'll generalize, the people closest to the problem, the analysts on the edge who see the problem every day...
They know best how to solve the problem, and we want to push the AI, and the data scientists, all the way out to them, so the people closest to the problem are the ones actually writing code and solving it. Which brings its own problems, but I think we'll just have to encounter those problems over time if we really want to scale AI and ML. I think it's good enough to use right now; we just need to start doing it, and it will continue to improve over time. Another question from virtual? Yes: could you please summarize, in a single sentence, your expectations for the integration of AI in space 10 years into the future? Oh, Lord. Seventeen slides. Yeah. You can ask a professor to do one sentence... don't ask Brian; it will be a long sentence. OK? It will be expensive. I think a lot depends; you're waiting for that next big hurdle, for something to be proven, right? That you can actually have compute power that will fly, and that you'll get communication amongst the satellite swarms so they can communicate with one another and do distributed processing. It could be within five years that we're able to realize that vision: five years to actually have confidence in the tools, and then another three to five years to actually build a satellite that operates in this manner. So in 10 years, maybe we'll be there with a very limited capability in orbit, or it might never happen, because we just can't get the compute power. OK, I'll try one sentence: reliable AI to help enable, execute, and protect the human spaceflight mission at a lower cost. Wow. Yeah, that was one sentence. Josh, wrap things up for us. What's at stake here, to get this right? Sure. Yeah, with all the things we're talking about, there's so much promise in the technology and where we're going, and there are three things I think we should walk away with.
One is that we need to make sure our ground systems are constantly using the latest technology and development paradigms, in order to keep them moving fast and so that we can attract the right talent into the mission space to work on them. All of those things are really important at that level. Second, I think we need to continue to design those systems to be open and flexible, and as non-proprietary as possible, in order to inject new technology quickly and have the flexibility to move things around. Because, you know, we joke that two or three years ago nobody was talking about large language models, right? That kind of blew up out of nowhere, so trying to guess ten years out, as in that last question, is really hard. Two years from now there could be a brand-new topic, space plus whatever, that we'd be talking about here, because the technology moves so quickly. So we need to really design for that openness and flexibility in our systems. And the other thing that resonated for me in the discussion today is partnerships. There's a lot happening between industry, government, commercial, and academia as all these things move forward, and we've seen this democratization in space: so many more people are involved than used to be, so there's a greater diversity of thinking and more opportunity to pull in different thoughts. I think we need to keep focused on that going forward to make this all happen. All right, gentlemen, thank you so much for joining us here. Thank you.

Data Is the Challenge; AI Is the Opportunity: Turning the Flood of Space Data into Actionable Insights

Video Duration: 52:28
  • Moderated by Dr. Jim Reilly (Booz Allen, Former USGS Director and NASA Astronaut)
  • Steve Kitay (Microsoft)
  • Winston Tri (Albedo)
  • Dr. Brent Bartlett (NVIDIA)
  • Dr. Neil Jacobs (Booz Allen, Former NOAA Administrator) 
Full Transcript of Video

The subject of our panel, though, is this flood of information that's coming from space systems. It's a lot, and it's coming from a lot of different sectors. How do we utilize it? The first two panels have talked about that in some applications, and we're going to talk a little bit more about how you incorporate what are now becoming huge volumes of information, and how you get from there to actionable information or intelligence for your user, your customer. And we have a great panel here with us today. I'll introduce them in a second, but I also want to give you one other perspective. When I showed up at the USGS, most people don't know this, they think NASA owns the Landsat system, but it's actually owned and operated by the US Geological Survey. I showed up as the director, and of course I was old enough to have seen the very first Landsat imagery. As a geologist, those were magical images, right? We could see things at scales that just weren't available anywhere else. The first set of images that came off of Landsat 1 changed my perspective on how you do things with imagery to begin with, but really space-based imagery. So when I became the director of the USGS, I was interested in how much information we had available, and it turns out we have somewhere around five petabytes, just from Landsat: 52 years of Landsat data. What does that mean in terms of scale? That's about 330,000 high-end servers' worth of information. Not only that, it's all calibrated and validated, so it's an equivalent set of information. And that was the first thing that led into: how do we manage this? Cloud-hosted systems, of course, revolutionized how we manage data. Now, what do we do with it?
That's the subject of our panel here today: what are we going to do with this space data? And there's semaphore going on up here; I'm not exactly sure what's happening. OK, good. We're good to go. OK. So we have a great panel today to work with this amount of information I just mentioned, five petabytes, and that's just Landsat alone. But think about all the commercial sector that's now bringing imagery to the marketplace. How do you incorporate that? What does it take to normalize that information? So we're going to get into the AI applications of how we manage this information, with the whole objective of how we deliver actionable intelligence or actionable information: what does your customer need, and what do they need at the speed of decision? Our panel includes Winston Tri, who's here on the end. Winston is the co-founder and the chief product officer at Albedo, one of the companies that is now doing imagery from the commercial sector. That's a piece of the marketplace that you can now apply to what we were just talking about. He supports the sales, product, and software engineering teams, making satellite imagery more broadly accessible. Before Albedo, Winston worked at Facebook as a software engineer. So if you really want an interesting post to put on Facebook, we were talking about this earlier: 15 minutes after we're finished with this panel, just go outside, look up and wave, and you'll get a great post from Albedo. No, I'm kidding. And then next to him is Steve Kitay. Steve is the Senior Director of Azure Space at Microsoft. Before this position, he was a member of the Senior Executive Service as the Deputy Assistant Secretary of Defense for Space Policy, where he had a key leadership role in the establishment of the US Space Force.
Steve was also an active-duty Air Force officer and has held civilian positions in the intelligence community and on Capitol Hill. And in spite of his appearance, he apparently has packed a lot of experience into a very short lifespan. Next to him is Dr. Brent Bartlett, a senior solutions architect with NVIDIA. He's also an image scientist who has developed software and hardware to provide advanced solutions to challenging geospatial problems. Prior to this role, he was chief operating officer at BMIO and co-founder of TIC Technologies, and he enjoys exploring the intersection of traditional remote sensing with emerging AI techniques. And finally, on my immediate left is a colleague of mine, Dr. Neil Jacobs, former acting NOAA Administrator and an early pioneer of EPIC and unified modeling. He's worked in both the public and the private sector for the past 15 years. He has also served as the chair of forecasting improvements for the American Meteorological Society and was affiliated with the World Meteorological Organization. So I have a great panel, and we're going to kick off with a few questions. We'll probably address them to individuals, but feel free to jump in; everybody who would like to can offer opinions. I want to start with Neil. US government agencies are now acquiring, or have available, ever-growing volumes and types of information from both government and commercial providers, and this rapidly expanding commercial industry is now seeking ways of delivering actionable intelligence. So when we look at AI and machine learning applications, which we've been talking about today, how can they be applied to generate integrated data solutions for any customer from these different sources? Well, if you've been watching the news lately, you've probably seen everyone talking about AI being used for weather forecasting.
And to me it's a really fascinating time to be in weather and climate modeling because of all the new capabilities. But one of the things I think is left out of the conversation, whether you're using AI for forecasting or not, is the training data sets. All these AI systems that are actually being used right now to predict the weather don't actually forecast the weather; they forecast what the model they were trained on would say about the weather. And that's a big difference, because those models can't go away. It's an initial value problem that, going forward, is probably going to be exclusively initialized from space-based data. And the problem with that is there's really no way to avoid moving the data around, particularly with polar-orbiting satellites, because they downlink the data at the poles. And this was a shocking number to me, coming from the numerical weather prediction world with the tremendous volume of data we had access to: you'll probably be surprised to know that less than 1% of what is actually thrown at a model gets used in the model. And as the data sets become higher and higher resolution, both in space and time, that number gets much, much smaller. The reason why is that you need your weather forecast now; it doesn't do you any good if it takes me three days to make a two-day forecast. And with the latency issue being so critical in high-resolution, rapid-cycling models for weather forecasting, the correct but frustrating choice was made that lower-resolution data with lower latency is better than high-resolution data that gets to the model too late. So anytime someone develops a more sophisticated sensor that collects higher-resolution data, the first thing that runs through my mind is: how much more thinning am I going to have to do to get the file size down to where I can get it where it needs to be in time?
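As an editorial aside, the observation thinning Dr. Jacobs describes can be pictured with a minimal sketch. This is not an operational NWP tool; the grid, tagging scheme, and stride are all hypothetical, and real thinning schemes are far more sophisticated (they weight observations by quality and location, not just position).

```python
# Hypothetical sketch: decimate a dense satellite swath so the file is small
# enough to reach the model before the forecast cycle closes.
def thin_observations(obs, stride):
    """Keep every `stride`-th observation along each spatial axis.

    A stride of 4 keeps roughly 1/16th of the values, trading spatial
    resolution for lower latency on the downlink and transfer.
    """
    return [row[::stride] for row in obs[::stride]]

# Hypothetical 8 x 8 swath of observed values, tagged by (row, col).
dense = [[(r, c) for c in range(8)] for r in range(8)]
thinned = thin_observations(dense, 4)
print(len(thinned), len(thinned[0]))   # 2 2
```

The point of the sketch is the trade-off itself: a larger stride shrinks the file (and the latency) quadratically, at the cost of resolution.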
And that's where I think we're going to see the mix of AI, not on the prediction side but on the preprocessing side of the satellite data. Ultimately, I think the holy grail would be on the edge, where you're actually processing a lot of this on orbit and downlinking the really useful, important stuff. And that may vary day to day, hour to hour, minute by minute, depending on what's going on in the atmosphere. But to me, that's the biggest challenge. I think a lot of the manufacturers of these instruments are thinking about how much high-resolution data they can produce; they're not necessarily always talking to the people who use the data about where they're using it. So doing a lot of the preprocessing on the edge, to me, is the ultimate goal. I really think you're going to see a mix of edge computing along with AI on orbit to solve this challenge. Jim, if I can build upon the comments that were just made: first off, I want to say thank you, and thanks to Booz Allen for hosting. This is a terrific event. I just want to add a little bit of context on the remarks about the size of data. You started off by mentioning the five petabytes of data coming off the Landsat satellites, and I went and looked at some of the numbers to try to put context around these massive amounts of data, even if you just look at earth observation satellites. There are a little over 8,000 active satellites in orbit today, and about 1,200 of them are earth observation satellites. Those numbers are projected to grow anywhere between 10 and 14 times over the next 10 years, so they are going up rapidly from what they are today. If you translate the amount of data coming from these earth observation satellites into HD movies, like on Netflix, it's 100 billion movies.
And if you think about that, just the sheer amount of information in 100 billion movies and being able to extract information out of that, it's a tremendous challenge as well as a great opportunity for us. Yeah, and again, thank you for having us at this event. Brent Bartlett from NVIDIA. I wanted to build on that even a little bit more. From my background in remote sensing, we've talked about raw data, but if you look at the holdings of each of these satellite platforms, it's even worse, because it compounds. You have raw data that comes off the satellite, but then we do a lot of processing, what we call level-zero cleaning, level one, level two, et cetera. And these derived products that get pulled out of the raw data all get stored as well, right? So we have this compounding storage debt that grows over time. It's good that it's being made available, but it's perhaps not sustainable going forward to store that much data derived from this one source. So one of the things I think is an opportunity for us is to look at ways to more efficiently store those raw pixels, and then rethink the architecture around using tools like accelerated compute to create those products on demand, versus storing everything forever. So we've heard two things coming out here. One is just the sheer volume of data, and that's going to expand almost exponentially as the private sector gets more and more involved; we're already seeing that. Second is how you manage that so that you deliver actionable information to your customer. Obviously, there are some challenges and limitations, but also capabilities. Would you like to discuss how AI/ML applications and computing at the edge, which Brent brought up, could help? What are those capabilities?
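As an editorial aside, Brent's point about regenerating derived products on demand, rather than archiving every processing level forever, can be sketched in a few lines. Everything here is a hypothetical placeholder: the archive, the calibration step, and the derived product are invented for illustration, and a real pipeline would use accelerated compute over real level-0 data.

```python
# Hypothetical sketch: archive only the raw (level-0) pixels and regenerate
# higher-level products when a user asks, with a small cache instead of a
# permanent archive of every level.
from functools import lru_cache

RAW_ARCHIVE = {"scene-001": [120, 200, 40]}   # stand-in for stored raw counts

def level1(scene_id):
    # hypothetical radiometric calibration: counts -> reflectance-like values
    return [counts / 255.0 for counts in RAW_ARCHIVE[scene_id]]

@lru_cache(maxsize=32)   # cache recent requests; recompute, don't archive
def level2(scene_id):
    # hypothetical derived product: classify each pixel as bright or dark
    return tuple("bright" if v > 0.5 else "dark" for v in level1(scene_id))

print(level2("scene-001"))   # ('dark', 'bright', 'dark')
```

The design choice being illustrated: storage holds one copy of the raw data, and the higher levels exist only transiently, which is what keeps the "storage debt" from compounding.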
What are our advantages? But also, when we talk about computing at the edge, what are some of the limitations that we're going to have to address? That's open to anybody. I can start, because it does come down to, you know, we're starting this conversation on this panel with the data, and then how you use artificial intelligence, machine learning, and generative AI, these different tools, to understand the data. Data management is a foundational piece of it. At Microsoft, one of the areas we're working on is something called the Planetary Computer, which is the ability to bring in massive geospatial data sets. You had mentioned five petabytes earlier; we're currently hosting in the cloud over 70 petabytes of geospatial data across different space systems, to then serve as the platform for bringing artificial intelligence and machine learning tools on top of it. That's in the hyperscale cloud, where there is a tremendous amount of compute and storage, which really does provide a great foundation for these other tools. So I guess I'll jump in again. We're very familiar with the Planetary Computer; it's an awesome resource. I think one of the things that has gotten folks excited about the opportunity of things like generative AI is the ability of the technology, which to some extent is still very new, to democratize computing, right? You can now access a computer in a way that you've never been able to before, without necessarily needing to know low-level assembly language, or Python at a higher level. So I think it'll be really interesting to see where that type of technology goes relative to vision. We're starting to see attention mechanisms work well with foundation model building for imagery and remotely sensed geospatial data.
So, yeah, I think there's a big opportunity to democratize insights in a way that hasn't been available before, and to more easily access the information in those, what did you say, 70 petabytes. Yeah, so that would be exciting. So you bring up the concept of democratization of data, right? What does that mean to you? I'll just start with Winston and come all the way down the panel. I think at Albedo, we're focused on, one, capturing the imagery itself. So we'll have the imagery next year once we launch our satellite, but then it's about getting that imagery out in front of users in a timely manner. We have a lot of DoD and IC folks in the building; you know that the value of imagery decreases over time, and at a certain point it falls off a cliff, right? So it's about getting that imagery in front of users when they need it, and letting them access it in the ways they need to. I mentioned formats earlier, but it's about putting controls in front of the user so they can specify what kind of data they want and how fast they need it, and then immediately be able to access it. That helps with the latency problem, and it helps with the usability problem. So in a nutshell, for me, democratization is making sure everyone has a level and fair playing field, where they understand how to get access to that data. Yeah, I love that term, democratization of data. When I think about it, to me it really comes down to the accessibility of these tools and technologies. We're here at this Space plus AI summit, and those are two major things, one of course a domain, the other a technology, but both going through what I would call this democratization moment.
Whereas space historically was the domain of governments, in fact of superpowers, if you go back to the history of the start of the space age, now you have all this commercial activity, with companies like Albedo and others bringing these capabilities not only to governments but to organizations in different sectors across the entire globe. Additionally, AI is at this moment where lots of people now have access to leveraging the technology. And I think ChatGPT, which has been talked about some today, can really be credited for that, for this moment of putting AI at the fingertips of everybody. One of the things I would close with is something our CEO, Satya Nadella, said, I believe this week or last week at the World Economic Forum: computers provided information at your fingertips; AI is now providing expertise at your fingertips. Yeah, that's a really interesting way to frame it. There are a lot of good quotes. So maybe the part I'll layer on to that circles back a little bit to the edge comments. We've talked about the huge amount of, you might call it, store and forward: you're storing up data, then you're downlinking it, then storing it back on earth, and there's a certain value you can drive from those historical holdings. There's another interesting way to use AI in conjunction with these large proliferated low-earth-orbit constellations going up, where you have individualized tasking that is AI-driven, instead of having to have a lot of expertise in how to fly a constellation, where you'd need to be a government to do it.
I think the tooling has the opportunity to bring us to a point where people can, at a very cost-effective price point, get near-term data collected in a particular area, maybe a farmer with their field or something, and use AI tuning to enable the kind of hyperlocal analysis they haven't been able to do before. You know, when I was at NOAA, there was a pretty interesting use case on democratization of data, because a lot of times if you ask people what they think that means, they'll say, oh, it means everyone gets equal access to it. There's a congressional mandate that requires NOAA to make all their data, all their code, everything, freely available to the public. And I asked for the global modeling code about 20 years ago, and they said, it's free; here's a tarred-up version of all our FORTRAN code. It's 9 million lines and it's hard-coded for IBM AIX, so you're going to have to go buy a $100 million supercomputer if you actually want to run it. So just having access to it is not enough. Over probably the last 10 years, all of their code has been refactored; it's all on GitHub, and it can be run on almost any architecture, which really opens a lot of doors. Because the problem was, yeah, the code was freely available, and all these university researchers and postdocs and PIs and grad students had great ideas on how to improve the weather and climate model forecasts, but they couldn't test their ideas because they didn't have a security clearance, and the only people who had the machine the code would run on were in a government agency. The code was freely available, but where were they going to run it? So now all the code is out there, and everyone can access it.
I can run it anywhere from commercial cloud to my laptop, and it's a force multiplier, because now we're seeing the science and research community, the universities, coming up with cool new ideas on ways to use the code and ways to improve it, and they're sharing that back. The other thing is just having equal access to the observations. Well, that's great, but you still need the code and the data assimilation systems, whether you want to run the model or do your training. Now all of that is in the cloud too. A lot of the cloud service providers are hosting the data, and you can run everything there. There's almost no reason, and this is probably something we should touch on at some point, to move the data around. That's what I've been telling people: just stop moving the data; move your processing to the data. So that brings up a topic we've touched on a couple of times now, and the previous panel touched on it as well: computing at the edge, doing your processing at the edge, in other words, on board the spacecraft. That does two things. One, it allows faster dissemination of actionable intelligence, but it also frees up the bandwidth restrictions we've always had; the way I think of it is as narrow pipes going down. So the information that's coming back, as Jimmy Spivey mentioned, coming back to mission control, all of that has to come back in one form, if nothing else, at least in an archive-and-access capability. But then there's the actionable intelligence piece, which seems to be more the domain of the on-board, on-spacecraft processing capability. How do you see those as requirements, if you want to think of it that way? It's almost like two separate sets of requirements.
And what does that mean in terms of how we would apply generative AI, as we were talking about in the back room? I'll kick that open to whoever wants to jump on it. I can go first again. One of the areas I'm tracking right now is the tooling being built around large language models: using a large language model almost as a new form of operating system, and one that runs very efficiently. Autonomous agents are coming out where the large language model acts almost like a decision maker over which different tools it might run to perform an action. In terms of applying that kind of methodology to space processing, I think there are some interesting opportunities to rethink what it means to be an edge in space. You can try to place a processor onto a satellite that is also running comms, mission control, and the payloads, but you can also think about putting up specialized satellites that are almost like floating little data centers, and using cross-communication links to get data to those processing nodes. So as our architectures mature, there will be more opportunity to do that kind of compute in orbit and, like you said, provide those insights back down. Yeah, I'll build on the generative AI piece. I think it's really amazing what this technology is doing insofar as empowering others. I remember when it first kicked off out there, ChatGPT and OpenAI, people were saying, OK, well, it can create haikus and jokes and things like that.
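As an editorial aside, the autonomous-agent pattern Brent describes, a language model acting as a decision maker over which tool to run, can be shown as a toy sketch. No real model is called here; `choose_tool`, the tool names, and the request format are all hypothetical stand-ins for that decision step.

```python
# Toy sketch of an LLM-as-router agent. A real agent would prompt a model
# with the tool descriptions and parse its choice; here that decision is a
# hard-coded stand-in.
def task_imaging(request):
    return "tasked imaging over " + request["area"]

def summarize_downlink(request):
    return "summarized downlink for " + request["area"]

TOOLS = {"task": task_imaging, "summarize": summarize_downlink}

def choose_tool(request):
    # Hypothetical stand-in for the language model's routing decision.
    return "task" if request["intent"] == "collect" else "summarize"

def agent(request):
    # The agent loop: pick a tool, run it, return the result.
    return TOOLS[choose_tool(request)](request)

print(agent({"intent": "collect", "area": "the Gulf Coast"}))
# tasked imaging over the Gulf Coast
```

The structure, not the stub, is the point: the model never executes actions directly; it only selects among a fixed set of tools, which is what makes the pattern attractive for constrained environments like on-orbit processing.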
But then pretty soon you saw, for instance, Stanford showing how it could pass the bar exam, and it wasn't just passing the multiple choice, it was the written section as well, and it wasn't just doing average, it was scoring around the 90th percentile. So you start thinking about what these capabilities can provide for space, and I think it really goes across all elements of the life cycle. If you're building, there are aspects of these capabilities; for instance, on GitHub there's something called Copilot, where you can actually create code with natural language. Then there are training and testing elements, where you're able to look into large amounts of information, ask questions of it, and have that information at your fingertips. Or operations, where it may be helping you. We did something, for instance, where we worked with the Defense Innovation Unit to show the intersection of large language models and geospatial information: you ask questions and bring those together to get responses about what's happening in the space data. And I think that's really where the future lies, this multimodal aspect where you have these different technologies coming together. It's not just the large language piece; you're going to have the visual piece, you're going to have an audio piece, and it's really each of these coming together. Yeah, that's great. At Albedo, we aren't necessarily using edge processing on our satellite. We kind of believe in the adage that space is hard, so we don't try to overcomplicate things up there, at least for now. But on the ground, we're really looking forward to using all the things we've seen come about in the past year, ChatGPT and so on. Maybe as a silly example: I love playing basketball outside.
So once we start being able to image the earth, I want to find all of the undiscovered basketball courts and play on them. I think, just as Steve pointed out, you're using it not only as an existing search function, where instead of searching POI data in Google Maps you're searching real features from fresh imagery, but also using it in another part of the chain. Using a ChatGPT-like function to, say, develop more training data: that's an easier process, because instead of having to go through and label things one by one, you start out with some sort of prediction. So I think the integration of these tools that we're seeing, and will continue to see, across parts of the chain will really make the big difference. I don't really think there's a silver bullet; change comes in parts. So from that standpoint, what do I want to know, and when do I want to know it, is really what we're getting to. And democratization to me almost means: what is the information I want to know versus what's the information Neil wants to know? They're going to be different. So it's about having the ability to anticipate what's required for the operator or the customer. In the case that Jimmy brought up in the last panel, what does the crew need to know? I'm an ops guy, so that's what I want to know: what do I need to know now, and what do I need to do about it? What do the folks in mission control, in an integrated center, need to know? Those two things are completely different, but the more we move toward this adaptive capability, the more we will be able to deliver actionable intelligence in different flavors. Which gets to: what are some of the challenges you see in being able to deliver that actionable intelligence? You can take it from ground processing or processing at the edge.
However you want to approach it, what are the challenges that we see in front of us, and how could we use generative or adaptive AI to help solve some of those problems? I can start on that. A challenge sometimes comes down to, like you're saying, the timeliness piece: certain information is perishable and needs to be known right away. When talking about edge processing, one of the examples is that we worked with NASA to bring artificial intelligence on board the International Space Station. That was a project we did with Hewlett Packard Enterprise, using something they have called the Spaceborne Computer. We worked with the NASA scientists to use computer vision technologies to rapidly identify whether there's damage to an astronaut's gloves after they go on a spacewalk. When they're out on the spacewalk, they're using their hands, doing different experiments or maintenance on the space station, and any damage to their gloves could be a serious issue for the safety and health of that astronaut. So when they come in, they take images, and can even use video, to rapidly identify, on the edge, in space, anything they should be concerned about. That's a great example. Having done a few of those myself, the gloves are the worst part, because of the mass of the spacesuit. Most people don't necessarily know that with me in the suit and the equipment on, it's about 750 pounds of mass, and when you're working in a weightless environment, you're putting a lot of force on those gloves, right? The ability, as I understand it, is that you can now just do an image analysis of the gloves and have an on-board assessment: what are the weaknesses in this glove that I need to worry about?
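As an editorial aside, the glove check described here reportedly uses trained computer vision models on the Spaceborne Computer; the simplest possible version of the underlying idea, flagging regions that differ sharply from a pre-spacewalk reference image, can be sketched as follows. The images, the comparison rule, and the threshold are all hypothetical.

```python
# Illustrative sketch only: compare a post-EVA image against a pre-EVA
# reference and flag pixels whose values changed beyond a threshold.
def damaged_regions(reference, current, threshold=0.3):
    """Return (row, col) positions where the two images differ beyond threshold."""
    flags = []
    for r, (ref_row, cur_row) in enumerate(zip(reference, current)):
        for c, (ref_px, cur_px) in enumerate(zip(ref_row, cur_row)):
            if abs(ref_px - cur_px) > threshold:
                flags.append((r, c))
    return flags

before = [[0.8, 0.8], [0.8, 0.8]]      # hypothetical pre-EVA glove image
after_eva = [[0.8, 0.2], [0.8, 0.8]]   # one patch much darker afterward
print(damaged_regions(before, after_eva))   # [(0, 1)]
```

A production system would need to handle lighting, pose, and registration differences between the two images, which is exactly why a trained model is used rather than raw pixel differencing.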
And then that information, of course, gets transferred down, I don't want to say as the archive set, but as the set that goes down to Jimmy's mission control team, which says: this is the decision being made based on this information. And then mission control has the ability to vote yea or nay, I would guess, on something like that. Yeah, that's actually really interesting. This is a space panel, but we haven't talked a lot about auxiliary sensors in a particular environment and how they can enhance data that's coming from space. As an example, my dad's actually a farmer, and I grew up throwing hay bales around, and it was always a challenge to go out and decide: OK, the hay is ready, we're going to cut it today, and do we have enough time for it to dry before the rainstorm comes in, going back to the weather prediction challenge earlier. And it's also about when it's dry enough, right? So I was brainstorming with my dad about this the other day, and he said, why can't I just, you know, you guys do AI, right? Why can't you tell me when to do this? I'll put a sensor on my tractor. And it's not that far of a leap from a precision agriculture use case to, say, a tactical environment. So I think that is an opportunity we have: to have these AI systems inform which sensing we place in situ in an environment, which will make the insight we try to drag out of the satellite imagery a lot easier to obtain. So, yeah, I've been thinking about this actionable information, because in my world it always comes down to probabilities.
I'll give you an example with the weather forecast. If I were to tell you there's going to be a 30% chance of rain, you don't know if that means there's slightly less than a one-in-three chance you're going to get rained on, or whether it's going to rain with certainty for 30% of the day. Actionable information, when you provide someone with probabilities, is really tricky. Just look, for example, at hurricane landfall or tornado warnings: people are inherently irrational. There's a study out there, and everyone thought, oh, if we could extend the lead times on tornado warnings, we'd save more lives. Well, it's actually not true. If someone thinks they have a lot of time, they may think, yeah, I've got 20 minutes, I'll go get my kids out of school or something, and they'll get in their car and get caught on the road. So translating information, particularly probabilities, into actionable information, by which I mean a decision that we hope someone would make to protect themselves, is to me a big challenge. And even though we've been talking about the edge as it applies to space-based capabilities that initialize these models, I think there's another use case we'll probably see coming soon. Once we have the capability to replicate these forecasts using AI that runs on such a tiny fraction of the compute, you could literally do it on your phone, and then you have the opportunity for a phone that knows your habits and your location to have a custom-tailored solution with you at all times. That's where you're going to get your information from. Yeah, I love that example. I see the value chain as starting with the sensor, which is where Albedo is, and at the end there is the value provided to the end user or the customer. So it's a big value chain.
And as much as a seller of satellite imagery would like to say we're the only piece of information you need to drive that actionable insight and decision, we aren't, right? You need to combine all these multimodal sources of data and put them into context. If there's a hurricane coming, the decision you want to make, when should I leave, is not the same decision for everybody. If I'm a single person living alone in an apartment, I can just pack up and go. But if I have a family with six kids, a small car, and a dog, how am I going to fit everything in the car? I have to plan, pack, and leave earlier. So you need people in the middle of the chain to provide that value, that additional translation, to make it specific to the end user. There's a degree of generalization you can do, but in the end it has to make sense for that customer. So really, when you look at the whole chain, as you were just mentioning, Winston, and you want to democratize the information down to each of us individually on this panel, for example, it depends on what I want to know and when I want to know it. And so there's an in-between step that seems to have to exist, right? One that looks at the patterns of your behaviors versus my behaviors and then tailors that information so it becomes ingestible by the end user or customer. If I can build upon that, because it does come down to your data, as you're saying: we're here in the nation's capital, a couple of miles from the White House and from Capitol Hill, and there's a certain amount of public data, from certain satellites we've talked about before. But then there's data that is not going to be public.
For instance, we had panelists up here earlier from the Department of Defense and from the National Reconnaissance Office in the intelligence community. And they're going to have data that they need to keep at the right security classification level, accessible only to those with the right access. What's really interesting is these capabilities. For instance, Azure OpenAI, which is based on the GPT AI technologies, enables enterprises or government to protect their data. So it's not just using the large model that's out there, but being able to ground it with their own data, so they're getting insights from these models based on what they're putting into them. And we announced late last year that those capabilities are going to the classified clouds. So governments, the US government in particular, are now able to bring their data and have the benefit of these technologies with it. So, one thing we haven't really touched on yet today is the term digital twins. What I'm becoming aware of is that when we talk digital twin, we might have five different opinions on what a digital twin is just on this panel alone. So in terms of the application of a digital twin: first, what does a digital twin mean to you? And second, what does it mean to your customer? How do you see the digital twin as an application of generative AI that allows you to drive to a true digital twin, and where do you see some of the challenges and risks associated with it? I can go first. We happen to have a whole suite of products around digital twins, right? So I sort of decoupled them in my head from generative AI, and I think as the months tick by, they start to merge together more and more.
But we do have a tool called NVIDIA Omniverse that came into fruition through quite a bit of internal work, over quite a few years before I was at NVIDIA. The Omniverse, metaverse concept came out just a little bit before the big splash of generative AI, I would say. At least from my perspective, supporting the DoD and the federal sector at NVIDIA, I see it as a really enabling tool to take a lot of things that we do physically in the world today, things like test and evaluation, and bring them into the digital world, and there are a lot of advantages to that. We can iterate more quickly. We talk about closing the real-to-sim gap: as we close the gap between the physical world and its simulation, we're able to iterate, create our systems, and test them a lot more efficiently and safely. And then, as AI starts to permeate that ecosystem, we're starting to see some really compelling things happen. NVIDIA actually created a tool called ChatUSD. I won't get into the weeds of what Universal Scene Description is, but it's essentially a general file format that can encapsulate the 3D virtual world. It's a language like any other, and it's possible to train generative AI to understand that language. So there are a lot of fascinating tech capabilities coming out of that, where, if you're in the DoD and you're trying to do mission planning to go rescue a stranded sailor, you can actually leverage these AI tools within a virtual world that's grounded in the physics of our world and come rapidly to an answer about what type of solution you might want, like which helicopter platforms are most suitable, and, if that doesn't work, backup plans.
So yeah, that's what it means to me. We're just at the beginning here, but it's about being able to do things a lot more efficiently, rapidly, and safely. Do you want to touch on that? I love where you went with that. Thinking of digital twins plus artificial intelligence, I would expand that to plus cloud computing, plus augmented reality. It's really the intersection of these different technologies coming together against the hardest use cases where things start getting interesting. And that's the type of stuff we are working on. In fact, the Space Force recently announced a contract with Microsoft called I3E, the Integrated, Immersive, Intelligent Environment. That was a follow-on to the initial six-month contract, where we demonstrated capabilities including: how do you have a digital twin of the International Space Station? You bring that telemetry down; you're then able to visualize and understand it in an augmented reality environment; you're then able to bring artificial intelligence tools to understand when anomalies or situations are happening to it; and you're doing it in a cloud environment that expands beyond a physical location to multiple sites. You can start seeing the integration of these different technologies addressing use cases in ways they really never have before. That's an interesting one; let me follow that for just a second. When you mentioned bringing the environment of the space station down to the ground, for example, I could conceivably see how a mission control architecture could adapt to that, particularly since, as Jimmy mentioned, the farther out we go in space, the longer the latency is going to get. When we talk about distances to Mars, we're looking at potentially a 40-minute round trip when we're at 250 million miles, and two weeks of occultation where, unless you have relays, you're really out of communication.
So having that capability, say, in mission control would allow, with enough fidelity, a mission control architecture to operate with near-real-time capability even though they are 40 minutes away, so to speak. And, JR, I would expand it: it's not only mission control, it's the people training and learning before they go to mission control to work in those environments, and it's the people building the next generation of the system, figuring out what the next requirements are, what features and new things they're going to add to it. These technologies we're talking about really do end up transcending across all of them. And probably the most important thing I would say today, to encourage this audience and anybody listening, is: how are you getting involved? How are you leveraging these technologies? How are you experimenting with them? How are you getting your hands on them? There was a Fortune article in the last week or two whose headline caught my eye; it was essentially, AI might not replace your job, but it will definitely become your coworker. And the thing I ask you all is: is it becoming your coworker yet? And if it's not, why not? I think there's a use case for the digital twin that really applies to satellite observations in space, and I'll give you some background on how we used to do it. It would be: design a satellite, launch it into space, pull down the data. Let's say, for example, there's a 10-year usable lifespan for this instrument. Well, it may take the scientists five years to figure out what to do with the data, how to quality-control it, how to optimize it and use it in the model. By the time it's actually providing a positive impact, the lifespan of the instrument on orbit is half over. You could eliminate that process by just having a digital twin of the Earth.
You can synthetically extract data on the ground using your digital twin, simulate what the satellite would produce, feed that back into the models, and optimize everything before you even launch. To me, that is one of the most exciting use cases. We've done these experiments in the past; we call them observing system simulation experiments. But they've always only been as good as what we call the nature run, which is sort of our best guess of the atmosphere, and which is orders of magnitude more crude than a very sophisticated digital twin. So that's where, at least in my world of Earth observations, I see an exciting use case for the digital twin. Actually, I just need to jump in real quick. I got an email right before I got on the panel that they were releasing access to Earth-2. So jump on Earth-2; we're pursuing that actively. That's something our CEO is really passionate about: building a supercomputer able to simulate the entire Earth at a reasonably useful resolution. And I know, watching the engineering teams iterate over the past six months to a year, a lot of that was figuring out how to pull all these feeds into it. So, yeah, that's something that jumps out. Well, unfortunately, we could go for another hour on this, but we're running out of time, so we want to make sure we have an opportunity for questions. If you have a question, raise your hand and we'll come to you. Sure, right here. Thank you. Good afternoon. I'm Sandra Erwin with SpaceNews. I have a question for Steve Kitay, or the other panelists if they'd like to weigh in as well. It's about the Department of Defense and their initiatives with AI. We've heard that they're doing a lot to try to promote AI and increase the use of AI; even the Secretary of the Air Force said we need more tools to analyze all the data, like all the things you just talked about.
So, Steve, you mentioned this Azure GPT tool that the government is using, but is that the solution to the DoD's problems? Why does this continue to be a problem? Given all the technology that's available, why are they not using it at the scale that maybe they should be? Thanks. It's a great question. Speed of adoption of these new technologies can often be a challenge, not only, I would say, with the Department of Defense but in the United States government writ large. But we are seeing great people, great leaders, who are trying to promote the speed of adoption and recognizing how much these new tools and technologies, for instance Azure OpenAI in the air-gapped cloud, can help empower their workforce to achieve more. I would say we're getting there, but there is a lot more work to do to get that speed of adoption, and then to move from demonstrations to a scaled, global capability. Because at the end of the day, if we're talking about the Department of Defense, you need to get the capability into the hands of the operators who are trying to deter conflict and maintain peace, and, if peace fails, be prepared to win. So it's making sure that technologies get all the way out to them, beyond the demonstration. There are pockets where we see it happening, which is exciting, but overall we are looking for more adoption and faster speed. And it's not just Microsoft; there is an ecosystem of innovation, Albedo, NVIDIA, Booz Allen here; in fact, they've got some really exciting stuff I saw in the back, and it's helping the department leverage these across the board. We'll take two more. Do you have any examples of digital twins gone wrong, or what are some common mistakes to look out for with that sort of thing? I can do a quick one; I see lots of signs flashing, so I'll try to be brief.
I'll just talk to some of the challenges of digital twins, since we've talked about the opportunities. The thing I see come up over and over again is that we get very excited about making a perfect replica of a facility, or the whole Earth, or this building. The reality is you have to have a pretty broad skill set on your development team to make that happen: people who are experts at 3D design, getting all the lighting and finishes in here, the buildings, the material attributes that are real. Closing that sim-to-real gap is a real challenge, making sure you get the physics right. And then typically, as soon as you move out of RGB, the visible band, which is pretty well described, into infrared or different sensing modalities like RF or radar, the challenges become compounded. So those are some of the things that are starting to be addressed and that will hopefully mature over time. OK, I want to thank our panel. It was a great opportunity to have this discussion with you today, and thank you very much for joining us here at Booz Allen. We really appreciate your insights today. Thanks very much.

Closing Keynote

Video Duration: 27:16
  • A.C. Charania, Agency Chief Technologist, NASA
Full Transcript of Video

So, good afternoon, everyone. Thanks for staying at the summit to listen to me talk. Before I start, I did want to say something, probably as an aside. Today, at NASA, we are celebrating the Day of Remembrance. Every year we take the whole week at NASA to think about the previous journeys and experiences we've had and the lessons we can learn in terms of safety and a safety culture. We remember Columbia, Challenger, and Apollo 1. We take this week to have our employees listen to our leadership reinstill the culture of safety, but also innovation, at the agency. So today is a special day; this week we've had different ceremonies at Arlington National Cemetery and with our workforce. This is a week when, at NASA, we reflect on the past; today, though, we're actually looking at the future. And so, as I was telling someone earlier, this is a good bookend. For those of you who stayed all day: at the beginning of the day, we looked back at the James Webb Space Telescope, that incredible achievement, how we achieved it, the data that came out of it. Now, as we look forward: how do we drive that innovation within the agency and within the nation? First, a little bit about me. I'm not going to PowerPoint you to death here today. I'm the agency chief technologist, or ACT as I call it, at NASA Headquarters. We support our senior NASA leadership, Bill Nelson, Pam Melroy, and Jim Free, in terms of technology strategy, policy, and economics. In terms of our particular office at NASA Headquarters, I'm in the Office of Technology Policy and Strategy within the A-suite, and we basically look several moves ahead for the agency in terms of technology disruption, places we can infuse technologies, and how we're investing within the agency.
I've been at NASA about a year, my first year as a civil servant, coming from commercial industry: 23 years in space and aviation. Prior to joining NASA, I was at an aviation autonomy startup called Reliable Robotics, where we were trying to automate the world's cargo airplanes to fly themselves, a very interesting problem dealing with human-machine interfaces. Prior to that, I worked for Jeff Bezos at Blue Origin, maturing the lunar lander program at Blue: in that sense, trying to build large cargo landers and crewed landers that use machine learning and other algorithms to land humans on the Moon. Prior to that, I worked for Richard Branson for five years at Virgin Galactic, in that particular instance on the LauncherOne program: basically an autonomous rocket on the left wing of a 747, launching small satellites to space. And prior to that, 12 years at a company in Atlanta called SpaceWorks, doing a lot of advanced concept systems analysis and starting new companies. Part of that started at Georgia Tech in Atlanta, Georgia. And Dr. Biltgen right there, who I believe wrote a new book on AI that I think you'll find here; he and I went to school together. So it's good to see Dr. Biltgen here at Booz Allen; I feel a little bit at home with Pat here. So, at NASA, as agency chief technologist, my portfolio is to look at our technology strategy and at where we can leverage the disruption that's happening, things coming into the agency. There are actually two major technology areas I've been focused on, beyond some of the others you might traditionally think of: quantum sensing and AI. Over the last year, our office in particular has been supporting the agency in thinking about those technologies more strategically.
Though I'm not responsible for cybersecurity or IT; we do have a chief information officer and a chief data officer at NASA, and I work very closely with those entities, personnel, staff, and leaders in those collaborative areas. For instance, an area where we collaborate is AI. We look at AI obviously for our IT functions, but also AI for engineering, digital engineering, and machine learning and how we use it. That also brings together our chief scientist. So our chief scientist at NASA, our CIO, myself, and others many times collaborate on how we as an agency can leverage these tools and technologies. All right, let's see if this works. Next slide. Once again, I'm not going to PowerPoint you to death. As I was thinking about how we're using AI, at the end of the day, for us it's our strategic objectives. Ultimately, if you look at the NASA vision, it is to help and benefit humanity. We decompose that objective into science, national posture, and inspiration: the knowledge we as humans have, how we help US industry and academia, and how we inspire. Those three pillars of science, national posture, and inspiration drive a lot of our thinking about what we're doing, and there's a decomposition of those objectives and the stakeholders that play into them. For instance, in science, we have the National Academies' decadal survey process, where we canvass our scientific community for the top scientific objectives we should attempt in planetary science, heliophysics, and Earth science. So there are processes that drive the scientific objectives, and those then get promulgated into actual missions: science missions, space science missions, planetary missions, human space exploration. We also mature technologies.
Different parts of the agency are working on research and development internally with NASA civil servants as well as with industry and academia. Now, in all three of those areas, we've been using AI and machine learning for years. In terms of generative AI: yes, last year, I think we were all there when ChatGPT came on board. I was at Reliable Robotics, and it was an awesome time to be in the room when the coding team realized this was available to them. It was a eureka moment; I still recall that moment very crisply from last year when I was in industry. But separate from that, we've been using AI and machine learning for many years at NASA, from Earth science data to how we're looking at different data sets and missions. So for us it's not a new thing, specifically in the science portion of NASA. But now we're looking at digital engineering and digital twins; as I mentioned in the previous panel, we've been looking at those for many years, and at how we use them in missions. And now, how do we use these large language models, obviously, but also how do we couple AI with things like 3D manufacturing? Different examples of how we're using AI, or have been thinking about using AI going forward, are in these actual missions. As we look at autonomous missions on other planets, on the lunar surface, or on Mars: we've got a helicopter on Mars, the Ingenuity helicopter, which has flown more than 60 missions. In future generations of Mars helicopters that we're looking at, how do we instill more autonomous operations to decouple such helicopters from, let's say, the rovers they leverage as a base station? Those are the kinds of missions we're looking at. We've got missions on the lunar surface looking at autonomy, and we've got a program with JPL called CADRE that's looking at a collaborative set of rovers that use autonomy.
And also as we look out further in terms of decreasing the life cycle cost of these missions, how can autonomy AI support that kind, those kinds of missions and in operations, we also look at science obviously and a lot of that science is looking at earth as observational data in many different wavelengths from earth to the planets of our solar system. This particular picture is of a project we did with Nasa God or and academia that looked at losing a human train data set to then identify and characterize how many trees are in Western Africa. So a human train data set was used on tens of thousands to identify tens of thousands of trees. Then that data set was used to identify more than a billion trees and just the population of those trees are potentially arid areas. Now, that data can now be used to help determine the carbon footprint of all those trees in Western Africa. And we are using those kinds of AI machine learning techniques all across the board. The other thing I we think about both in terms of myself, the cio and other parts of their agencies as we go forward in time. How do we work more digitally with all these tools? So machine learning, Digital twins model based system engineering AI, all those tools coupled with kind of physical tools like we have with 3D manufacturing to actually work faster. For me the 21st century in aerospace and in the agencies also, how do we execute rapidly bring decision making down to lower levels and and can execute rapidly on these missions with the constrained budgets that we have and working digitally, I think will allow us to operate faster, reduce overall life cycle cost and hopefully develop safer systems. So this actual picture I think is representative of projects we've got ongoing at NASA Goddard where researchers are coupling A I machine learning with actually 3D printing to then 3d print structures that humans would never have imagined that would be more rigid or way less. 
And so the AI and machine learning capabilities are developing things that will make these systems lighter and cheaper, things we would never have imagined as humans to create. So there are a lot of tools and techniques, beyond just large language models and generative AI, that we can provide to our civil servant workforce and to industry, and then promulgate out to the contractors that are supporting us at NASA. Next slide. Here I've got different examples of uses of AI and ML, and I'll go through them. Some of these are examples we're exploring; some we've actually used in an actual mission. These examples range across the observational side. Starting from the right-hand side, this particular example is kind of cool, and Dr. Biltgen will like this: it was a collaborative project between NASA Langley, commercial industry, and Georgia Tech, where we took some data from the CALIPSO lidar. CALIPSO is one of our spacecraft, which you see there. We took some of the data coming from CALIPSO and worked with academia and researchers to look at how we could identify the health of coral reefs. A lot of researchers are looking at which coral reefs on the planet are at most risk. In this particular case, we were using CALIPSO data, not originally developed to help with this problem, together with trained data sets, to identify which of the coral reefs CALIPSO observes are at most risk. That was a cool experiment we did, and I think the results were published in 2021, during COVID. It was a cool test case study of using Earth observation data, in partnership with academia, to look at a problem the original instruments were never designed or optimized to solve.
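The CALIPSO coral-reef study illustrates a general pattern: take measurements from an instrument built for another purpose, add a small human-labeled training set, and learn a classifier. The sketch below is a toy illustration of that pattern only; the feature vectors, labels, and the deliberately simple nearest-centroid model are invented and bear no relation to what the actual study used:

```python
# Toy illustration of repurposing an existing sensor's measurements with
# a small human-labeled training set. The 'lidar-style' feature vectors
# and health labels are invented for demonstration.

def centroid(rows):
    """Mean feature vector of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(labeled):
    """labeled: dict mapping label -> feature vectors. Returns label -> centroid."""
    return {label: centroid(rows) for label, rows in labeled.items()}

def classify(model, features):
    """Nearest-centroid classification by squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

# Invented backscatter/turbidity-style features for labeled reef sites.
training = {
    "healthy": [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]],
    "at_risk": [[0.3, 0.7], [0.2, 0.8], [0.25, 0.75]],
}
model = train(training)
print(classify(model, [0.82, 0.18]))  # → healthy
```

The point is the workflow, not the model: once labels exist, the same measurements can answer a question the instrument was never designed for.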
So that's kind of cool, and there are multiple other examples of using Earth observation data in this way with AI and ML. Another example I like is the middle one. OK, landing on Mars is pretty difficult, and a lot of nations have tried. The last few times, we've been pretty successful, and I think we're celebrating 20-plus years since the Mars Pathfinder landing, a cool anniversary. But it's pretty hard to land on Mars, and even harder to land in a particular place you want to land. If you're trying to land somewhere and return samples back to Earth, you want to land in a place where you think you'll have pretty good samples that might have evidence of possible life, or historical life, on the surface of Mars. So you need high landing accuracy. Well, one of the ways you can do that, a way we've used on the Perseverance rover to land it on the surface of Mars, is to reduce what we call the landing ellipse. A landing ellipse is the uncertainty of where you might land on the Martian or lunar surface. With these technologies, a camera system on the vehicle that's entering the Mars atmosphere and coming down compares the pictures it's seeing with the data sets we've gathered from previous Mars orbiters, and through that comparison it can tell itself where it is. There's no GPS on Mars, so you've got to figure out where you are. Using these terrain-relative navigation technologies to identify landmarks, see where you are, and maneuver yourself to land reduces the landing-ellipse error on the Martian surface. We're using, and thinking about using, similar techniques like TRN on the lunar surface. We are funding several; we are the anchor tenants, the anchor customers, of several small lunar landers that are going to land on the lunar surface, and they're using some of these technologies as well.
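The terrain-relative navigation idea described here, comparing descent-camera images against maps built from orbiter data, reduces at its core to image registration. A toy sketch of that matching step follows, using normalized cross-correlation over a tiny invented map; real TRN systems are far more sophisticated, and none of the data below is from any actual mission:

```python
# Illustrative sketch of TRN's core step (not NASA's implementation):
# slide a small descent-camera patch across a prior orbital map and pick
# the offset with the highest normalized cross-correlation.
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size 2D patches."""
    n = len(a) * len(a[0])
    ma = sum(map(sum, a)) / n
    mb = sum(map(sum, b)) / n
    cells = [(i, j) for i in range(len(a)) for j in range(len(a[0]))]
    num = sum((a[i][j] - ma) * (b[i][j] - mb) for i, j in cells)
    da = math.sqrt(sum((a[i][j] - ma) ** 2 for i, j in cells))
    db = math.sqrt(sum((b[i][j] - mb) ** 2 for i, j in cells))
    return num / (da * db) if da and db else 0.0

def locate(camera_patch, orbital_map):
    """Return (row, col) in the orbital map where the patch matches best."""
    ph, pw = len(camera_patch), len(camera_patch[0])
    best, best_rc = -2.0, (0, 0)
    for r in range(len(orbital_map) - ph + 1):
        for c in range(len(orbital_map[0]) - pw + 1):
            window = [row[c:c + pw] for row in orbital_map[r:r + ph]]
            score = ncc(camera_patch, window)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

# Toy 6x6 orbital map with a distinctive 2x2 'crater' at row 3, col 2.
orbital_map = [[0.0] * 6 for _ in range(6)]
orbital_map[3][2], orbital_map[3][3] = 1.0, 0.5
orbital_map[4][2], orbital_map[4][3] = 0.5, 1.0
patch = [[1.0, 0.5], [0.5, 1.0]]  # what the descent camera 'sees'
print(locate(patch, orbital_map))  # → (3, 2)
```

Knowing where the camera view sits on the map gives a position fix without GPS, which is what lets the lander steer down the landing-ellipse error.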
Another reason I'm excited about that one: when I was at Blue Origin, we actually tested some of those technologies on New Shepard, the vertical-takeoff, vertical-landing suborbital vehicle. So we've done terrestrial tests with Blue Origin and Masten on these technologies, we've actually flown them on Mars, and we're looking to apply them, comparing orbital data sets with the real-time images these landers are seeing, to help them land more accurately. Pretty cool there. Another one is a periodic table of life project called PeTaL. This is an open-source framework developed a few years ago to look at how to use AI to examine linkages in large language models relative to biological data. Once again, a lot of people are looking at this now, but this was a few years ago, before ChatGPT; we were already developing these kinds of frameworks. Kind of a cool experiment there. Once again, using AI/ML for Earth is a great application, but we also explore the rest of the solar system, so we're also looking at how we can use AI/ML to look at other planetary bodies. What excites me about this century is the achievements NASA could possibly help enable. Some of those achievements include our human exploration goals of going to the Moon and landing humans on Mars, but they also extend to future observatories. We talked about James Webb; I also hope that in this century we can image an exoplanet. We have a Habitable Worlds Observatory program whose objective is basically to return to humanity a picture of a planet in another solar system. But in addition, there are other very cool missions regarding life that could be here in our solar system.
Europa and Enceladus, the moons of Jupiter and Saturn, are icy worlds where we think there might be hydrothermal vents underneath the ice layers that could be areas of possible life. In this particular case, can we use images of Europa, one of the moons in our solar system, and use AI/ML to help detect different ice plates? It's a cool application. We've also got missions going to Titan, and we are interested in the composition of Titan, so we're using AI/ML to take observational imagery from Titan and detect methane cloud formations. Another example of how we might use AI/ML to help in the search for life: JPL has looked at how to use AI and ML to help develop autonomous underwater robots to be used on Europa. So we're using AI/ML to help us identify ice plates on Europa, and we're also using AI/ML in missions we're formulating for maybe two or three decades from now: autonomous systems that would go through the ice melt and explore these icy ocean worlds. What's actually kind of cool about that one is that when I was a freshman at Georgia Tech with Dr. Biltgen, one of my first undergraduate design projects was to look at a Europa lander mission. I don't think I imagined 25 years ago that we'd be using AI/ML to help us develop the missions, analyze the orbital imagery, and then also help these robotic underwater submarines navigate these ocean worlds. So that's a cool set of examples of how we're using AI/ML across the agency. If you want to know more, you can do a lot of searching; all federal government agencies, or many of them, also put out an AI inventory. If you search NASA AI inventory, you can get a list of some of the projects we're working on relative to AI/ML. And that's an annual inventory.
We and other federal agencies update that annually. Another resource for understanding the AI/ML and autonomy technologies we're developing at NASA is a website we call TechPort, where we list many of the technologies we're investing in. So if you're wondering what NASA is investing in, TechPort is a good website to go to, and not just for AI/ML but for a whole range of technologies. The other thing I think we're all wrestling with, relative to this summit and the government, is our strategic understanding and strategic use of AI/ML. Earlier today, for instance, there were conversations about the executive order and an OMB memo that apply to all federal government agencies. If I came back here a year from now, I think all of that White House stakeholder interest in AI and ML will have resulted in a chief AI officer at many of these agencies. So I would actually encourage you next year to have a panel of chief AI officers from across the federal government, because by then many agencies will have named those individuals and defined their roles and responsibilities, which is kind of cool to think about. I'm not saying what panels you should have next year, but that might be a good one. It's a pretty exciting time at NASA, relative to the challenges we have in human exploration, science, and urban air mobility, which we also work on and which leverages some of these autonomy and AI technologies, and going back to the Moon. For me, and I was speaking to Pat earlier, it's really strange to think that 25 years ago we were at Georgia Tech, and who knew that I'd be at NASA while we're headed back to the Moon, and that Pat would be writing a book on AI? We couldn't have imagined it.
But for me, it's not just about the inspiration and the science; it's about the strategic objectives of these kinds of endeavors. What I'll leave you with is that all of these technologies are helping humanity. In particular, I look at the lunar surface and lunar exploration, which is pretty important to NASA and to our stakeholders, and I'll leave you with a final thought about what AI/ML and these sorts of technologies can get us to, which is what I would call the lunar moment. As we invest in these technologies and in lunar exploration, for me it's about the moment when humans walk on the lunar surface and, from that moment forward, there will always be a human on the lunar surface. These kinds of technologies will enable those humans to live on these planetary bodies, and in space, more permanently, for longer durations, and sustainably. It's a pretty powerful tool set that 25 years ago we would never have imagined having. The machines we design, the data we look at, the experiments we do, and the knowledge we gain will all be enabled by these tools, and we're a part of it. So thank you very much. All right, maybe one or two questions. Yes, Pat. Dr. Biltgen, please. I do have a question. You were doing commercial space before it was cool, and you've been working on it for a huge chunk of your career: a lot of startups, innovative concepts, high-risk things. I have read that sometimes when people with that background go into the government, there are cultural challenges, because the government has structured processes and established ways of doing things. So when you were appointed chief technologist, I thought, this is really exciting, both for you and for NASA.
But can you give us your perspective on how you bring that commercial perspective into NASA, and the challenges or approaches you use to help a federal agency adopt a commercial mindset? I think, once again, I'm not coming into NASA as a novice; I've dealt with NASA for 23, 25 years. In some sense, NASA paid for my grad school: when I was at Georgia Tech, NASA funded our research lab. So I'm a product of the space agency in some sense. I have submitted contract proposals ranging from $50,000 to $6 billion to the government. I've been involved in lobbying stakeholders and in putting technical teams together, and I understand that ecosystem of working programs with stakeholders, with the government, and with NASA, having gone to NASA headquarters and other centers throughout my career. So I was pretty familiar with the ecosystem and how NASA operates; it didn't really surprise me, and I knew what I was getting into, which mattered, because people did warn me: "Are you sure about this?" So I think having that experience is helpful. An approach I use is a kind of Platonic approach of questioning: in these meetings, to get people to reach a conclusion, you ask, from fundamental or first principles, why this, why this, why this? And I can also point to successes. The COTS program for commercial cargo resupply to the International Space Station, and commercial crew: these models have worked successfully, and those successes help us advocate for additional public-private partnerships.
When I was in industry, we helped create public-private partnerships from, I would say, scratch, but in collaboration with the government, and we can now point to those successful partnerships to continue the momentum. One of the reasons I came to the government was to continue this momentum of partnerships and say, hey, these have been successful, let's expand them. And now you see the Human Landing System program, a public-private partnership with SpaceX and Blue Origin developing propellant-depot-based, single-stage lunar landers for humans. Those are not architectures NASA probably would have come up with on its own, but in collaboration with industry and their co-funding, we can work with them to mature those technologies and support them where we can. So I think it's those smaller successes over the last two decades, continuing those successes in low Earth orbit and beyond, and using first principles. And finally, for me, the icing on the cake of why to take this job in the government was leadership. I don't know if many of you know our deputy NASA administrator, Pam Melroy; I knew Pam when she was at DARPA, and she was probably the icing on the cake for me to take this job. There's a spirit of public-private partnership at NASA; I know that because I was part of it in industry. We have great partnerships with companies that didn't exist 20 years ago and that are now pretty substantial, with great capabilities. And the internal leadership, like Pam Melroy, is supportive of these kinds of collaborations. Those things gave me the energy and confidence to come into this job, knowing that people, and leadership itself, have an openness to these kinds of collaborations. We've got to be measured, though.
Also, for me, it's about talking about how this world we're in came to be, and relating that internally. What I've realized is that sometimes I tell people at NASA the stories of how we got here, because sometimes even that history isn't written down: how a given innovation happened with industry, or how a public-private partnership came about. So that's probably a long way of answering your question. All right, probably one more. Oh, did you want to ask one? No? All right, well, that's it. Thank you very much.

Closing Remarks

Video Duration: 6:47
  • Chris Bogdan, Booz Allen Executive Vice President, Space
Full Transcript of Video

And to wrap up our day, it's now my great pleasure to introduce Booz Allen's space lead, retired Lieutenant General Chris Bogdan. OK, we're supposed to end at three, so I have about 20 minutes of comments for you. I'm only kidding. First, I want to thank everyone for being here today, and thank those who tuned in virtually; at one point we had nearly 400 folks listening to the panels, which was pretty impressive. Thank you, folks out there in virtual land. I hope you found today's Space and AI Summit insightful and fascinating. Our purpose here was to make you think about what the future could be when it comes to space and AI, and it truly has had me thinking. Throughout the day we heard a variety of topics and insights from our colleagues. Greg Robinson from NASA kicked us off with what it means to be a space pioneer and the challenges he and his team faced putting up the Webb telescope. He led us to understand that many of the technologies developed for the Webb telescope were one of a kind and first of a kind; they put them together, and it actually worked, and when it worked it created, and will continue to create, tremendous advances in what we know about the universe. AI holds the same kind of promise; we just have to take that challenge and meet it. We also had some incredibly insightful panels today on domain awareness, ground systems, and space data, and on how AI can really improve space missions like ISR, Earth observation and climate, domain awareness, and traffic management, including modernizing ground systems for future complex constellations. And finally, we were very grateful to A.C. for coming by to talk to us about how AI is being used at NASA today and in the future; there's some tremendous promise there. I thought your charts were about the best I've ever seen.
I'd like to get a copy of those; they were really pretty awesome. So what lies ahead for AI and space? What I heard today was lots and lots of challenges, and we know they're out there. But I also heard a lot of potential for AI, digital technologies, and data technologies across many different organizations and agencies, including our military partners, our intel community, and civil and commercial space operators, with respect to really figuring out how to harness the power of the data that space is providing us. I think AI is a huge part of that solution. So leave here today thinking about how AI can help your mission and help you harness the power of your space data. We here at Booz Allen are positioning ourselves to be a leading provider of space data, a builder of modern ground systems, and an integrator of complex space constellations, and we know for sure that AI is an important part of the tool set we need to bring our clients. So we're thinking long and hard about those challenges and about how AI can meet them for our clients. We talked a little bit about it today: probably the biggest hindrance today to using AI, in this domain and others, is culture and trust. We heard some really good stories about how professors thought calculators were going to ruin math for children, and it didn't turn out that way. I think we're on the precipice of something really significant when it comes to AI. So we have to overcome the barriers of culture and trust, and it's incumbent upon us, those of us who understand and use AI, to help our clients understand what we're doing and how we're doing it, to build that trust, and to help them adopt these capabilities within the cultures of their organizations. We can't forget that you can't just give folks technology and expect it to do wonderful things.
You've got to take the human dimension into account, and I think that's an important part of moving forward with ethical AI: building trust, having people understand it, and changing the culture so it's used properly. So with that, on behalf of myself, Judi Dotson, our boss in global defense here at Booz Allen, and all of my great Booz Allen teammates who made this happen (and I'm going to give a round of applause for that to my Booz Allen team), we want to thank each and every one of you for coming. Now, I was supposed to look straight into the camera and tell all 400 people out there virtually: this concludes our event for you, so you can click "Leave" in the little box on your web page. For those of you who are here, before you leave, please stop and take a look at our demo on space domain awareness and traffic management. Even more important, there are snacks out there, including brownies and cookies, and on the way out you can get some Booz Allen Oreo cookies. And if you're a real risk-taker, you can eat some freeze-dried astronaut ice cream. I tried the cookies; I have yet to try the ice cream, but I will, and if you want me to go first, I will. So thank you all for coming to this wonderful venue. I hope you leave here thinking long and hard about how AI can help your mission and how we can help bring this tremendous power to our space clients. Thanks very much.

Note: We are unable to publish the opening keynote due to contractual limitations.

Questions on how to advance your space missions? Contact us here.

Solve Complex Space Challenges