AI & Defense: The Joint Warfighter and Managing Technological Innovation
With Drew Nowicki @ Department of Defense Chief Digital and Artificial Intelligence Office and USAF Reserve
Commonweal Ventures backs top entrepreneurs building companies that matter for America in sectors like clean energy, defense, fintech, manufacturing, and health. In these markets, partnership with government can be a critical unlock. Commonweal helps founders mobilize public sector commitments to build category-winning companies. We focus on pre-seed and seed stage investments.
We spoke with Drew Nowicki, Team Lead for the AI & Data Accelerator @ DoD Chief Digital and Artificial Intelligence Office and concurrently a field grade officer in the USAF Reserve.
Topics of discussion:
How is the DoD thinking about artificial intelligence?
What AI use cases are driving value for commanders in the field?
Where is the military headed from a technology-and-doctrine perspective?
When will this start to impact how wars are fought and how militaries around the world think about their tactics/techniques/procedures, capabilities, and policies?
Hint: It already has…
This interview reflects Drew's experience as a technologist, not the official position of the United States Government.
There's been a lot of analysis (and hand-wringing) in the press and among policymakers about how AI is going to affect the American workforce. Jobs are changing. Is AI affecting the kind of roles and responsibilities members of the armed services have?
That's a good question. To remain somewhat neutral in my response, I think senior DoD leadership is well aware of how disruptive AI can be. It's somewhat similar to the conversations on cyber before the word “cyber” became ubiquitous. We had forms of communications, information management, and data well before "cyber" became a word; then cyber operations created an action and a new capability out of what was traditionally perceived as a utility. I think AI is having that same effect around the world. It's not limited to the government or technology companies. Data and AI are having profound impacts on workflows across all industries.
It certainly is creating a need to retool individuals within the workforce and advance digital talent management. For the most part, it is my opinion that we're generally in the stage of leveraging AI as an enabler to the workforce rather than a displacement factor. It's about enabling individuals to be more productive and leverage their cognitive skills, rather than spending too much time on onerous work, or work that might be completed with better accuracy and efficiency by an AI algorithm or perhaps a robot. Even today, many citizens are leveraging generative AI in their personal lives in addition to at work…it’s astounding how much more quickly GenAI became widely available than anyone expected.
Can you give us some examples of how generative AI is being used in the armed services? I understand how it would be useful but can't quite picture what that means day-to-day.
A good example is business operations: running an AI model that is particularly good at identifying keywords, and applying prompt engineering so that you can query volumes of files to find answers to questions in a way that is logical and provides meaningful results. The scenarios are almost limitless because of how much data is generated within the government, particularly within the military. Email traffic, PowerPoint presentations, PDF files…the list of data sources goes on. Ultimately, it is still the human user's decision to determine the validity or usefulness of GenAI outputs.
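To make that concrete, here is a minimal sketch of what keyword-scored document retrieval plus prompt assembly could look like. The folder path, keywords, scoring scheme, and the idea of handing the assembled prompt to an approved generative model are illustrative assumptions, not a description of any DoD system.

```python
# Minimal sketch: keyword-scored retrieval over a folder of text files,
# then assembling a prompt for a generative model. Paths, keywords, and
# the question are hypothetical; no real DoD workflow is depicted.
from pathlib import Path

def score(text: str, keywords: list[str]) -> int:
    """Crude relevance score: total keyword occurrences in the document."""
    lowered = text.lower()
    return sum(lowered.count(k.lower()) for k in keywords)

def top_documents(folder: str, keywords: list[str], k: int = 3) -> list[tuple[str, str]]:
    """Return the k highest-scoring (path, text) pairs under `folder`."""
    docs = [(str(p), p.read_text(errors="ignore"))
            for p in Path(folder).glob("**/*.txt")]
    docs.sort(key=lambda doc: score(doc[1], keywords), reverse=True)
    return docs[:k]

def build_prompt(question: str, docs: list[tuple[str, str]]) -> str:
    """Inline retrieved excerpts so the model answers from them, with citations."""
    context = "\n\n".join(f"[{path}]\n{text[:2000]}" for path, text in docs)
    return ("Answer the question using only the context below. "
            "Cite the source file for each claim.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")

# Hypothetical usage: the prompt would go to whatever generative model the
# organization has approved, and a human still validates the output.
docs = top_documents("./reports", ["fuel", "resupply", "inventory"])
prompt = build_prompt("What is the current fuel resupply status?", docs)
```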
I think when AI in the military gets thrown around, the popular press loves imagining Terminator robots instead of software running your logistics stack. How does the DoD approach controls around AI for lethal and nonlethal uses?
I know lethality is a controversial area. Going back to policy, there are specific DoD directives, such as DoD Directive 3000.09 on “Autonomy in Weapon Systems,” that define the scope of what constitutes an autonomous weapon and how autonomy is applied in a lethal setting versus a non-lethal application such as ISR (Intelligence, Surveillance, and Reconnaissance) capabilities. There is governance and policy that is explicit for those reasons. You also have human-level review of the technologies, as well as a thorough understanding of the use case and risk calculus in any given scenario.
The Department is moving in a very gradual, incremental manner, with risk calculus applied in lockstep along the way. Not at the speed of vendor engagement or industry excitement. It's very methodical, and that's why responsible and trusted AI is so important. The DoD has published a Responsible AI Strategy and Implementation Pathway in addition to releasing a Responsible AI Toolkit.
How do you think about the relationship between civilian trust in government and cutting-edge technological progress?
I think it all goes back to a very foundational level. You should have a trusted workforce, where factors such as security clearances and ethics training, as well as zero trust on the cyber side, remain important. Reliable product managers and acquisition officials should be empowered to make informed, conscientious decisions, which are paramount for the current and future development of AI and its implementation within the government.
There is a checks-and-balances aspect as well. The scientific community is involved. When you look at the executive order pertaining to AI, NIST standards, or the other best-of-breed standards out there, those are key building blocks.
What are some trends in technology that you are paying close attention to?
I'll open the aperture to what I'm seeing not only within government but also in industry. I think the wave of robotics is going to hit sooner than expected. Reading some of the interviews that Axios and other media outlets have conducted with former government officials and military leaders, I think we're closer than the timelines suggested. I mention this because of what we've already seen in the world. An example that comes to mind is the drone war between Armenia and Azerbaijan. That was very significant, a watershed event: two sovereign nations revisited geopolitical terms because of the impact drone warfare had on a territorial dispute. There's more to unpack there, but that's just at the surface level. In the present day, with the war in Ukraine, there are so many examples of semi-automated weaponry, drones, and other technologies being applied, underscoring the dependency on digital infrastructure and the hardening of systems that the consequential electronic warfare environment demands.
Aggregating these examples and going back to the development of doctrine, those are timely lessons that get studied and reviewed so that weapon system designs, tactics, techniques, and procedures can evolve and change.
Tell me about the joint warfighting concept.
For context, here is a report on the Joint Warfighting Concept from the National Defense University. Read an excerpt below:
In July 2023, General Milley introduced the key tenets of the JWC, which seek to reinforce the NDS force development priorities: "infrastructure, logistics, command and control, dispersal and relocation, and mobilization." The JWC tenets are:
Integrated, combined joint force: Seamless integration of all military Services across all warfighting domains, enabling them to function as a unified force. This involves synchronized planning, shared situational awareness, and effective communication across different Services, fully aligned and interoperable with key allies and partners.
Expanded maneuver: Fluidly moving through space and time, including but not limited to maneuvering through land, sea, air, space, cyber, the electromagnetic spectrum, information space, and the cognitive realm.
Pulsed operations: A type of joint all-domain operation characterized by the deliberate application of joint force strength to generate or exploit advantages over an adversary.
Integrated command, agile control: Seamless command and control (C2) across all domains, integrating sensors, platforms, and decision-making processes to achieve real-time battlespace awareness and enable rapid decision-making.
Global fires: Integration of kinetic and nonkinetic fires to deliver precise, synchronized global effects across all domains and multiple areas of responsibility.
Information advantage: The rapid collection, analysis, and dissemination of information using advanced technologies to enable decision-making superiority and action.
Resilient logistics: The rapid movement of personnel and equipment to places and times of our choosing.
Some could argue the military has changed over the years from a parochial way of organizing, training, and equipping to more of a joint force. The Joint Warfighting Concept is the spirit of how multi-domain operations will be conducted. Technology powered by AI is changing the way commanders make decisions and integrate operations across different combatant commands and geographies in real time. It encourages the joint force to rethink conflict, deterrence, and competition by adapting to new threats, partnering with allies, and innovating. The orchestration of what they call the OODA loop (observe, orient, decide, act) is executed at remarkable speeds now compared to a few years ago.
Can you give me some examples of how AI is helping commanders make quicker and better decisions?
For instance, with GIDE (Global Information Dominance Experiment) there are some great examples of information and decision advantage: logistics, understanding your order of battle, and knowing where your blue force assets are in comparison to threats that may be out there. Looking at it from a global perspective, you're considering the logistics aspect because things such as fuel and ammunition are paramount to the ability to project force. Having trusted, reliable information, paired with the capabilities of AI, is a game changer.
Do you know how they used to process all of that data before GIDE was implemented?
I would politely impart that it was probably unstructured data within spreadsheets, emails, presentations, and legacy systems requiring significant manual input.
Doesn't sound great.
No, it doesn't sound great, but it's been a journey, right? The technology that existed several years ago was probably adequate at the time and for the workflows. Today’s compute power was unimaginable back then, and the breakthroughs expected from quantum science will continue that trajectory.
What is an AI bill of materials? Help me understand how the AI supply chain works.
The AI bill of materials has emerged fairly recently. A good comparison is the software bill of materials: it takes some of the best practices for mitigating risk within the software supply chain and applies them to gaining a better understanding of all of an AI model's components. An AI BOM considers things like:
Provenance: Who developed it? Is there a name for that AI model (i.e., Claude, GPT-4, etc.)? Are there specific developers who can be identified by name?
Methodology: What datasets was the specific model trained on? Was the training data scraped off the public Internet, or was there a specific, curated training set? Are there RAG (Retrieval-Augmented Generation) features that enhance the model’s responses with external data?
Testing and evaluation: What techniques were applied to determine weights and biases? How does the model perform on metrics such as precision, recall, and F1 scores?
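Since that last item names precision, recall, and F1, here is a small worked example of how those metrics are computed from classification counts; the counts are invented purely for illustration.

```python
# Worked example: precision, recall, and F1 from classification counts.
# The counts are invented purely for illustration.
tp, fp, fn = 90, 10, 30   # true positives, false positives, false negatives

precision = tp / (tp + fp)                           # 0.900: of items flagged, share correct
recall = tp / (tp + fn)                              # 0.750: of actual positives, share found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean, about 0.818

print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```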
The approach to AI BOMs is still a work in progress, evolving with advancements within industry. The U.S. Army is certainly taking a vested interest in AI bills of materials. One of the desired end results is an added risk-mitigation framework for the supply chain, and a better understanding of what AI lifecycle management requires before deploying a model within secure government information systems operating at different information security impact levels. The statistics on private sector organizations experiencing cybersecurity or software supply chain incidents are eye-opening.
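As a rough illustration of what an AI BOM record might capture across the three areas above, here is a hypothetical sketch; the field names and values are assumptions, since AI BOM formats are still being standardized across industry and government.

```python
# Hypothetical AI BOM record mirroring the provenance / methodology /
# testing-and-evaluation areas above. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AIBOM:
    # Provenance
    model_name: str                # e.g. "Claude" or "GPT-4"
    developer: str                 # organization or named developers
    version: str
    # Methodology
    training_data: list[str]       # dataset names or sources
    scraped_from_open_web: bool
    uses_rag: bool                 # retrieval-augmented generation at inference
    # Testing and evaluation
    eval_metrics: dict[str, float] = field(default_factory=dict)

bom = AIBOM(
    model_name="example-classifier",
    developer="Example Corp",
    version="1.2.0",
    training_data=["curated-logistics-corpus-v3"],
    scraped_from_open_web=False,
    uses_rag=True,
    eval_metrics={"precision": 0.90, "recall": 0.75, "f1": 0.82},
)
```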
What does a risk in the AI supply chain look like?
Risk in AI varies. For example, you have models that are susceptible to data poisoning, active exploits, and several other threats. Additionally, GANs (generative adversarial networks) can pose a threat to the performance and trustworthiness of an AI model. Such risk factors have led to technological considerations around distributed AI model concepts, more assured data and model architectures, and zero trust, all in order to improve overall security.
Training datasets are also very important: ideally, developers want to minimize bias while applying proper statistical weights and improving generative AI responses to prompt-engineered queries. Similarly, object misclassification in a training dataset for a computer vision model poses a significant risk, which is why human oversight and human-centered design remain very much important.
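As a toy illustration of that human-oversight point, here is a sketch in which low-confidence computer vision classifications are routed to a human reviewer rather than accepted automatically; the labels, confidence scores, and threshold are invented for the example.

```python
# Toy human-in-the-loop gate: computer vision predictions below a confidence
# threshold are routed to a human reviewer instead of being trusted outright.
# Labels, scores, and the threshold are invented for the example.
REVIEW_THRESHOLD = 0.85

def route(label: str, confidence: float) -> str:
    """Accept confident predictions; flag uncertain ones for human review."""
    if confidence >= REVIEW_THRESHOLD:
        return f"AUTO-ACCEPT: {label} ({confidence:.2f})"
    return f"HUMAN REVIEW: {label} ({confidence:.2f} < {REVIEW_THRESHOLD})"

for label, conf in [("cargo truck", 0.97), ("transport aircraft", 0.62)]:
    print(route(label, conf))
```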