Hours After Trump's Ban, U.S. Uses Anthropic Tools for Iran Attack

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack on Iran with the help of those very same tools. Commands around the world, including U.S. Central Command in the Middle East, use Anthropic’s Claude AI tool, people familiar with the matter confirmed.

Trump Orders U.S. Agencies to Stop Using Anthropic's Products

President Trump ordered all federal agencies to stop using artificial intelligence technology made by Anthropic, a directive that could vastly complicate government intelligence analysis and defense work. Defense Secretary Pete Hegseth designated the company a “supply-chain risk to national security,” a label that means no contractor or supplier that works with the military can do business with Anthropic, a step legal experts said is all but unheard of. It strips an American company of its government work through a process previously deployed only against foreign companies the United States considered security risks.

Plaintiff in Social-Media Addiction Lawsuit Testifies at Trial

The plaintiff in a landmark trial over whether the design of social-media apps can foster addiction in children told a jury that using YouTube and Instagram had contributed to her social isolation and mental health issues, including anxiety, body dysmorphia and depression. The 20-year-old woman, described in court as Kaley G.M., took the witness stand after nearly three weeks of testimony in the civil case.

Anthropic Says It Won't Accommodate Pentagon's Demands

The standoff between the Pentagon and Anthropic over how artificial intelligence can be used in defense continued on Thursday as the AI start-up reiterated its reservations, a day before a deadline imposed by the Trump administration for the company to permit its powerful technology to be applied broadly for military operations. The two sides are hurtling toward a deadline over a Pentagon demand that Anthropic provide unfettered access to its AI system without safeguards demanded by the company, as part of the negotiations over a $200 million contract involving AI in classified systems.

Instagram to Notify Parents if Teens Frequently Search for Suicide

Instagram said it would notify parents if their teenager repeatedly searches for terms related to suicide or self-harm within a short period, as pressure grows on governments to follow Australia's ban on social media use by under-16s. Britain said in January it was considering restrictions to protect children online, after Australia's move in December. Spain, Greece, and Slovenia have in recent weeks said they are also looking at limiting access.

FTC Issues Policy Statement on Children's Online Privacy Protection Rule

The Federal Trade Commission (FTC) issued a policy statement advising industry that it will not bring enforcement actions against website and online service providers that collect, use, and share personal data through age-verification technologies. Companies have historically worried that collecting data for age verification could violate the FTC’s Children’s Online Privacy Protection Rule (COPPA Rule), which requires commercial websites and online service operators to obtain parental consent before collecting, using or disclosing personal information of children under 13.

Hacker Exploits Anthropic Chatbot to Attack Mexican Gov't Agencies

A hacker exploited Anthropic PBC’s artificial intelligence chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information, according to cybersecurity researchers. The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research.

Anthropic, Facing Competition, Scales Back Safety Commitment

Anthropic, the artificial-intelligence company known for its devotion to safety, is scaling back that commitment by softening its core safety policy to stay competitive with other AI labs. Anthropic previously paused development work on its model if it could be classified as dangerous, but said it would end that practice if a comparable or superior model was released by a competitor.

Apple Creating 'Age Assurance' Tools to Comply with Countries' Laws

Apple is launching new tools to comply with the growing number of age-verification laws both in the U.S. and abroad. As part of the changes, Apple will block the downloads of apps rated 18+ in Brazil, Australia, and Singapore, while also rolling out other features to comply with laws in Utah and Louisiana in the U.S. The company informed developers that it’s expanding its set of “age assurance” tools, including an updated Declared Age Range API now available for beta testing.

Defense Department Threatens to Cancel Anthropic's Contract

Defense Secretary Pete Hegseth gave Anthropic Chief Executive Dario Amodei three days to comply with the Pentagon’s demands on using its artificial-intelligence models or face cancellation of the company’s contract, people familiar with the matter said. If Anthropic doesn’t show more flexibility working with the military, Hegseth said he could also label the company a supply-chain risk, a move typically reserved for overseas companies linked to foreign adversaries, or invoke the Defense Production Act to essentially force the company to work more collaboratively with the Pentagon.

Hegseth Summons Anthropic CEO to Discuss Military's Use of Claude

Defense Secretary Pete Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon for what sources say is likely to be a tense meeting over terms for military use of Anthropic's Claude. Claude is the only AI model available in the military's classified systems, and the most capable model for sensitive defense and intelligence work. The Pentagon doesn't want to lose access to Claude but is furious with Anthropic for refusing to lift its safeguards entirely.

Anthropic Accuses Chinese Companies of Siphoning Info from Claude

U.S. artificial-intelligence startup Anthropic said three Chinese AI companies set up more than 24,000 fraudulent accounts with its Claude AI model to help their own systems catch up. The three companies — DeepSeek, Moonshot AI and MiniMax — prompted Claude more than 16 million times, siphoning information from Anthropic’s system to train and improve their own products, Anthropic said in a blog post.

Zuckerberg Defends Meta at Trial Over Social Media Addiction

In his first time testifying about child safety in front of a jury, Meta CEO Mark Zuckerberg said the company does not seek to make Instagram addictive to younger users, pushing back against claims that the social media app is designed to be harmful to children. “I’m focused on building a community that is sustainable,” he said when he was asked whether Meta wants people to be addicted to its social media platforms.

UK Prime Minister Wants to Fine Tech Firms for Not Removing 'Revenge Porn'

Deepfake nudes and “revenge porn” must be removed from the internet within 48 hours or technology firms risk being blocked in the UK, Keir Starmer has said, calling it a “national emergency” that the government must confront. Companies could be fined millions or even blocked altogether if they allow the images to spread or be reposted after victims give notice.

Disney Sends Demand Letter to ByteDance Over 'Seedance' AI Tool

The Walt Disney Company sent a cease-and-desist letter to ByteDance, alleging the Chinese tech giant has been infringing on its works to train and develop an AI video generation model without compensation, according to a copy of the letter obtained by Axios. It's the most serious action a Hollywood studio has taken so far against ByteDance since it launched Seedance 2.0.

Public Radio Host Sues Google for Recreating His Voice on NotebookLM

David Greene, a public radio veteran who has hosted NPR’s “Morning Edition” and KCRW’s political podcast “Left, Right & Center,” is suing Google, alleging that it violated his rights by building a product that replicated his voice without payment or permission, giving users the power to make it say things Greene would never say. The dispute involves NotebookLM, which is built on language models trained on vast libraries of writing and speech from real humans who were never told their words and voices would be used that way, raising profound questions of copyright and ownership.