
‘A decisive capability’: Don’t undercut U.S. military power by undermining its AI

A B-2 Spirit takes off from Whiteman Air Force Base, Missouri, in support of Operation Midnight Hammer, June 21, 2025. Midnight Hammer was a deeply coordinated strike combining air and naval firepower to target Iran’s underground nuclear sites. (U.S. Air Force photo)

In the U.S. military and intelligence communities, we can’t cut corners. We must equip our highly trained people with the most advanced weapons and the most powerful technology available—because lives, missions, and the defense of freedom depend on it.

So why would we shortchange our troops and analysts by undermining artificial intelligence (AI), the very technology that is quickly becoming one of the most decisive capabilities of the 21st century?

Today, AI is beginning to deliver meaningful operational gains for the Defense Department (DoD) and the Intelligence Community (IC). AI is being used to simulate battlefield conditions in training, process vast amounts of information in support of decision making and intelligence, enhance cyber defense, enable real-time combat system updates, and field autonomous weapons systems.

If we underinvest in American AI or place heavy-handed restrictions on its development, we don’t just risk falling behind in innovation. We risk falling behind on the front lines.

Unfortunately, that risk is growing. Policymakers of both parties have proposed more than 1,000 AI rules at the state level, many of which, while well-intentioned, could slow the development of powerful, American-made AI models and hand our adversaries a lasting advantage.

Meanwhile, misguided efforts to rewrite longstanding fair use copyright principles or overly restrict the data AI systems can be trained on may sound like minor regulatory tweaks, but in reality, they strike at the heart of how these systems function. And their consequences for national security could be severe.

Why does training data matter so much? Because large language models (LLMs) don't "reason" like humans; they learn patterns from the data they've seen. The broader and more representative the training data, the more accurately they can interpret complex scenarios, spot emerging threats, or generate mission-relevant insights.

For national security missions, that means combining high-quality, publicly available data with mission-specific, classified datasets. While models used by the DoD and IC will most often be specially trained for internal needs, they still benefit enormously from being built atop strong commercial foundations. The goal is simple: avoid creating dangerous “blind spots” in AI models that could delay time-sensitive decisions or miss signals altogether. If U.S. law makes it harder for our own models to learn from publicly available content, while our adversaries face no such limits, we handicap ourselves by design.

Let’s be clear: some guardrails are necessary, especially where AI applications intersect with life-and-death decisions, such as weapons of mass destruction or autonomous targeting systems. But general-purpose AI models, used across commercial and government settings alike, must be trained comprehensively if they’re going to help safeguard U.S. national security and support mission-critical operations.

Case in point: Two leading U.S.-built AI models—Meta’s Llama and Anthropic’s Claude—were recently approved for use in Amazon Web Services’ secure government cloud after meeting the highest federal security standards. These models are now cleared to support sensitive defense and intelligence operations in secure, compliant environments.

This transformation holds enormous promise for U.S. military and intelligence operations, but only if we enable American AI to innovate at the pace required to remain globally competitive. If we stumble, we will jeopardize the very tools and technologies that strengthen our warfighters and analysts, an outcome foreign adversaries like China, Russia, and Iran would welcome.

Beijing has made AI a cornerstone of its military doctrine and is investing more than $1.4 trillion in AI development by 2030. It has launched hundreds of university programs and is aggressively exporting both open- and closed-source models worldwide, all of which are aligned with authoritarian values of surveillance, censorship, and centralized control.

The stakes are high. The country that leads in AI will not only shape the global economy but also shape the global order. The Administration’s recent AI Action Plan is a strong first step toward meeting that challenge. Now, Congress must codify and expand it to ensure enduring national advantage.

That’s why U.S. AI policy, especially in defense and intelligence, must be guided by four clear principles:

  1. Scale America’s AI infrastructure—energy, compute, data, talent, and more—while strategically depriving China of the critical enablers of its AI ambitions.
  2. Ensure our policies support our strategic goals. That includes boosting research and development (R&D); attracting capital; maximizing AI model quality through broad, responsible training; and working toward a more centralized AI regulatory framework that enables AI scaling and deployment, including a pause on state-level AI regulation and continuation of existing fair use law.
  3. Protect the public against high-risk scenarios with smart, enforceable guardrails that don’t overreach.
  4. Ensure military and intelligence access to the most capable and secure commercial AI tools available—both open- and closed-source—and that those tools can be adapted for mission-specific use.

AI is now foundational to protecting our national interests. We always seek to send our troops into battle with the best weapons and equipment available. Today, we must also ensure that those in uniform and our intelligence professionals have a competitive advantage in AI.


General Joseph F. Dunford Jr. is the 19th chairman of the Joint Chiefs of Staff. He serves as a national security board advisor for the American Edge Project.

This article was originally published by RealClearDefense and made available via RealClearWire.













