AI vs. the Pentagon: killer robots, mass surveillance, and red lines
Summary
The US Department of Defense is asking AI companies to loosen the restrictions on their models to allow "any lawful use," including mass surveillance of the American public and fully autonomous lethal weapons. Anthropic is locked in heated negotiations with the Pentagon after refusing to comply with the new terms, holding firm on its red lines against lethal autonomous weapons and mass surveillance. Even though its rivals have reportedly agreed to the new terms and the company faces pressure of being designated a "supply chain risk," Anthropic's CEO says threats will not change its position. The episode highlights how AI companies…
Can AI firms set limits on how and where the military uses their models? Anthropic is in heated negotiations with the Pentagon after refusing to comply with new military contract terms that would require it to loosen the guardrails on its AI models, allowing for “any lawful use,” even mass surveillance of Americans and fully autonomous lethal weapons.
Pentagon CTO Emil Michael is pushing for Anthropic to be designated a “supply chain risk” if it doesn’t comply, a label usually only given to national security threats. Anthropic’s rivals OpenAI and xAI have reportedly agreed to the new terms, but even after a White House meeting with Defense Secretary Pete Hegseth, Anthropic CEO Dario Amodei is still refusing to cross his company’s red line, stating that “threats do not change our position: we cannot in good conscience accede to their request.”
Follow along here for the latest updates on the clash between AI companies and the Pentagon…
- We don’t have to have unsupervised killer robots
- Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
- Pete Hegseth’s Pentagon AI bro squad includes a former Uber executive and a private equity billionaire
- Inside Anthropic’s existential negotiations with the Pentagon