The Pentagon vs. Anthropic: AI Between Military Force and Ethics
The U.S. Department of Defense has asked AI company Anthropic to relax some of the "ethical restrictions" on the Claude model in military contracts, with a deadline of February 27, 2026.
The Department of Defense also stated that if Anthropic does not comply, it may take tougher measures, including contract penalties.
Anthropic has relied on its "Constitutional AI" rules to keep Claude from being involved in lethal weapons that lack human control and from assisting in mass surveillance.
The military, however, argues that the tool should be available for any "lawful military use."
The problem is that what is legally allowed may not align with the ethical boundaries set by the company.
What is more worrying is that even if AI never fires a weapon directly, it can analyze intelligence faster, help select targets, and compress human decision time, making decisions more rushed and accountability harder to assign.
Currently, there are no strong international rules to regulate these matters.
If countries and companies keep lowering restrictions to stay competitive, ethics may come to be seen as a "hindrance."
This conflict is a reminder that AI is not merely a tool; it also shapes security and order.
In the future, clearer rules are needed to ensure that critical decisions remain truly under human responsibility.