Pentagon fears “AI psychosis” as machines learn to make lethal decisions — Politico

Artificial intelligence is reshaping U.S. national security debates, according to Politico. Pentagon officials are exploring how large language models could support battlefield strategy and how autonomous weapons systems could make real-time decisions, including whether to use lethal force. Experts, however, warn that such capabilities could spiral out of control.
Former Defense Department officials have raised concerns about "AI psychosis," a scenario in which misaligned algorithms make unpredictable or destructive choices. Publicly available models like ChatGPT are considered too constrained for military use, yet the Pentagon is reportedly pursuing its own versions built specifically for defense operations.
Analysts caution that AI systems trained on human data inherit our cognitive biases and tendencies toward escalation. Without strict oversight, these technologies could accelerate conflicts rather than prevent them, a risk that underscores the urgent need for ethical safeguards in military AI development.
