The transformative potential of artificial intelligence is ushering in an era of national security reshaped by revolutionary technology. AI stands to improve future military operations by sharpening decision-making, combat effectiveness and operational efficiency.
The US military is already leveraging AI for autonomous reconnaissance and combat systems, data analysis and cybersecurity. Over the next five years, AI will enable advanced new applications such as swarm intelligence for greater situational awareness, predictive analytics to anticipate enemy movements, and enhanced cybersecurity. These developments will be facilitated by the convergence of IT growth, big data, and emerging wearable and embedded systems technologies that could make the military more effective, more agile and more efficient.
Given the “global AI arms race,” we must take action now to ensure the U.S. military is prepared to stay at the forefront of this evolving landscape.
Adopting the benefits of AI without falling behind
Managing investments and policies for AI adoption will enable the military to maintain technological superiority. However, the Department of Defense’s traditional funding strategies, contracting vehicles and acquisition pathways cannot keep pace with advances in AI. This must change. An immediate step would be to reallocate R&D budget resources to cover both near-term applications of AI and long-term fundamental research. This change requires parallel preparation of acquisition offices and operational end users, so they can capture fundamental results and evolve them to mission-ready status as soon as they become available.
New data repositories from cross-branch training and mission operations must be continually populated into a robust corpus designed for interoperability and rapid use in validating assumptions and performance. A new investment ecosystem fostering low-risk, barrier-free, rapid-fail fundamental science feasibility explorations, with necessary safeguards, will also attract more candidate technologies through DoD’s notorious technology “valley of death.”
The Defense Federal Acquisition Regulation Supplement and other policies should be revised to enable more agile AI procurement. For example, adopting modular contracting methods under other transaction authority or indefinite-delivery/indefinite-quantity vehicles could let AI technology offerors ramp up rapidly without the traditional burdens of prime-contractor oversight.
Additionally, a central repository of AI best practices and development frameworks, one that can also integrate standardized data formats from non-AI technologies, will foster cross-industry learning and accelerate R&D efforts.
It will also be essential to foster better partnerships with industry and academia, promote AI technology transitions, and invest in startup R&D. To motivate partner contributions, DoD should avoid overly restricting partners’ intellectual property and data rights. A controlled-access data-sharing ecosystem, in which our military services and allied nations invest, will allow AI models to be trained faster and more thoroughly. The advantage will go to the countries that apply these models most effectively to achieve the right results.
Beyond the rewards, AI poses increased risks in military operations
AI systems operating without human oversight carry moral, sociopolitical and legal implications, particularly when they automate parts of the “kill chain.” Carefully considered ethical and legal frameworks for AI-driven actions and decisions will require changes in national and global policies and standards.
This should go beyond just keeping a human in the loop. AI outperforms humans in a myriad of technical, creative and strategic areas. Using AI to quantify risks within valid operational scenarios could refocus humans on establishing only ethically acceptable risk thresholds that partner countries can follow. Remember that general policies prohibiting or degrading the application of AI create opportunities for less-monitored foreign actors to gain a technological advantage.
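As a hypothetical illustration of that division of labor, the sketch below (all names, fields and numbers are invented) lets a human-set policy threshold gate an action whose risk an AI model has quantified:

```python
# Hypothetical sketch: the AI quantifies risk for a proposed action,
# while a human-established policy threshold decides what is acceptable.

RISK_THRESHOLD = 0.2  # set by human policy, not by the model

def assess_risk(action: dict) -> float:
    """Stand-in for an AI model's quantified risk estimate."""
    return action.get("estimated_collateral", 0.0)

def authorize(action: dict) -> bool:
    """Permit the action only when its quantified risk stays under the
    human-set threshold."""
    return assess_risk(action) < RISK_THRESHOLD

approved = authorize({"name": "recon-flight", "estimated_collateral": 0.05})
blocked = authorize({"name": "strike-option", "estimated_collateral": 0.6})
```

The point of the sketch is the separation of concerns: the model produces a number, but the ethical boundary itself stays a human decision that partner countries can audit and adopt.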
AI also raises security concerns related to increased vulnerability to cyberattacks and potential data manipulation. Every sensor, data transfer, and endpoint creates an attack surface that experienced adversaries could target. DoD needs technology investment policies and strategies that address these data collection points. When adversaries find an attack surface, resilience will be essential. Security methods from other high-risk technical fields like nuclear energy offer valuable lessons on how to approach the risks associated with complex AI-based systems.
Responsibility and accountability for the development and use of AI in military operations are essential. AI systems will require rigorous testing, validation and safeguards to ensure their reliability, robustness and security. Existing medical device practices provide a useful analogy.
There are also legal questions like those we encounter in social media cases: Who is responsible for monitoring and moderating content: the internet service provider, the platform, or the user who supplied the data? Policies and constructs that penalize new AI developers striving to establish market share may well drive technology innovators away from DoD applications.
Data privacy presents another consideration. The European Union’s General Data Protection Regulation and similar laws call for a “right to transparency.” In the same way, we all want AI to be able to explain how it arrived at a result. The challenge is how to define acceptable standards to achieve such transparency, which requires an understanding of how AI works.
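One way to make such transparency concrete, sketched below with hypothetical feature names and weights, is to pair every model score with a per-feature breakdown of what drove it:

```python
# Minimal transparency sketch: a linear scoring model whose per-feature
# contributions can be reported alongside each decision. The feature
# names and weights here are illustrative assumptions, not a real system.

def explain_decision(features: dict[str, float],
                     weights: dict[str, float]) -> tuple[float, list[tuple[str, float]]]:
    """Return a score plus each feature's contribution to it."""
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    # Rank features by absolute influence so a human reviewer can see
    # which inputs drove the result.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

weights = {"sensor_confidence": 0.6, "target_speed": 0.3, "terrain_risk": -0.4}
score, explanation = explain_decision(
    {"sensor_confidence": 0.9, "target_speed": 0.5, "terrain_risk": 0.8}, weights)
```

Linear attributions like this are only one transparency technique, and deep models need more elaborate methods, but the standard-setting question is the same: what form of explanation counts as acceptable.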
Finally, there is the question of how to restore trust in AI once it is lost. If an AI learns a behavior and acts in an unexpected or undesirable way, how can that behavior not only be avoided in the future, but unlearned so it is never again considered? How far back should AI go in the verification process to achieve a necessary and acceptable level of trust? This will be a difficult issue to negotiate, as different governments and cultures have different risk acceptance criteria. However, lengthy debate timelines may not be an option given how quickly AI is advancing.
The human-machine dynamic
Humans have always co-evolved with tools and technologies that make our lives more productive. Future AI-based systems must accommodate both humans and AI as users of the system itself. People need to learn to approach and view AI systems as new team members. As with any team, productivity often comes from sharing situational awareness and understanding teammates’ goals, motivations, behaviors and the impacts of their actions. In the military, much of this understanding comes through rigorous joint training and explicit tactics, techniques, and procedures that foster trust and create common ground.
Future AI systems may need to start at this level, integrating and training human and AI users simultaneously as a team. But when the tandem actually needs to perform work, the interfaces each needs are radically different.
For example, humans are visually oriented and rely on graphical human-machine interfaces (HMIs) to identify and make sense of visual patterns. With AI as a collaborator, users will still need a natural HMI to understand their role in the system’s operation. They will also need ways to engage productively with AI, such as natural language models and/or training in the new field of “prompt engineering” to direct AI action. The AI itself will need consistent, well-formatted data so it can properly integrate inputs into its model structure and improve its results.
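A minimal sketch of that dual interface, with hypothetical task and field names, might pair a plain-language instruction for the human side with consistently formatted JSON for the AI side:

```python
import json

def build_prompt(task: str, observations: list[dict]) -> str:
    """Assemble a structured prompt: a plain-language instruction for the
    human-readable part, consistently formatted JSON for the machine part."""
    payload = json.dumps(observations, indent=2, sort_keys=True)
    return (
        f"Task: {task}\n"
        "Respond with a ranked list of recommended actions.\n"
        "Observations (JSON):\n"
        f"{payload}"
    )

prompt = build_prompt(
    "Summarize activity in sector 4",
    [{"sensor": "uav-12", "reading": "vehicle column", "confidence": 0.82}],
)
```

The design choice worth noting is that the same artifact serves both teammates: the human can read and audit the instruction, while the AI receives its data in a stable, machine-parseable schema.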
The future of AI is fast approaching
Although AI holds great promise for the future of U.S. military operations, many complex questions remain to be resolved. There is no time to hesitate.
To realize the full potential of AI, DoD must act quickly to understand its current use, anticipate future developments, and address associated risks. By adapting its investments, policies, and strategies, the U.S. military can maintain its technological edge and ensure the security and success of future operations.
Michael P. Jenkins is chief scientist at Knowmadics Inc.
Copyright © 2023 Federal News Network. All rights reserved.