California Gov. Gavin Newsom has signed an executive order directing California to independently review federal supply-chain risk designations for businesses. The move follows a dispute between AI company Anthropic and the Department of Defense over contract terms. The order aims to regulate AI use by state employees while encouraging its adoption: California agencies must develop AI contract standards, improve government transparency, and issue guidance on AI-generated content. Newsom's approach contrasts with federal policies, which have taken a lighter regulatory touch. This is Newsom's second AI-focused order, reflecting growing interest from unions and tech donors.

Artificial intelligence chatbots are so prone to flattering and validating their human users that they give bad advice that can damage relationships and reinforce harmful behaviors, according to a new study exploring the dangers of AI telling people what they want to hear. The study, published Thursday in the journal Science, tested 11 leading AI systems and found that all showed varying degrees of sycophancy: behavior that was overly agreeable and affirming. The problem is not just that the chatbots dispense inappropriate advice but that people trust and prefer AI more when it validates their convictions.

The White House is laying out a new framework that it wants Congress to use to shape national rules for artificial intelligence without curbing growth in the sector. It wants Congress to "preempt" state laws it sees as too burdensome. The focus is on protecting children, preventing electricity costs from surging, respecting intellectual property rights, preventing censorship and educating Americans on using the technology. The framework comes as state governments have forged ahead with their own regulations. Civil liberties and consumer rights groups have lobbied for more regulation of the powerful technology, but the industry and the White House say a patchwork of rules would hurt growth.

Anthropic's moral stand on U.S. military use of artificial intelligence is reshaping the competition between leading AI companies while exposing growing doubts about whether chatbots are capable enough for acts of war. Anthropic's chatbot, Claude, for the first time outpaced its better-known rival ChatGPT in phone app downloads in the United States this week, a signal of growing interest from consumers siding with Anthropic in its standoff with the Pentagon. But while many military and human rights experts have applauded Anthropic CEO Dario Amodei for standing up for ethical principles, some are also frustrated by years of AI industry marketing that persuaded the government to apply AI to high-stakes tasks.

President Donald Trump says he's ordering all federal agencies to phase out use of Anthropic technology after the company's unusually public dispute with the Pentagon over artificial intelligence safety. Trump's comments Friday came just over an hour before the Pentagon's deadline for Anthropic to allow unrestricted military use of its AI technology or face consequences. CEO Dario Amodei has said his company "cannot in good conscience accede" to the Defense Department's demands. Anthropic didn't immediately respond to a request for comment on Trump's remarks.

A study finds that AI chatbots often avoid answering high-risk suicide questions but respond inconsistently to less direct prompts. Published Tuesday in the journal Psychiatric Services, the study highlights the need for improvement in chatbots like ChatGPT, Gemini, and Claude. Researchers from RAND Corporation emphasize the importance of setting benchmarks for how AI handles mental health queries. Concerns are rising as more people, including children, rely on these tools for support. The study coincides with a lawsuit against OpenAI alleging that ChatGPT contributed to a California teenager's suicide. Researchers urge companies to enhance safety measures.

A coalition of philanthropic funders will spend $1 billion over 15 years to develop artificial intelligence tools aimed at spurring economic mobility. The funders announced Thursday that they will create a new entity, NextLadder Ventures, to offer grants and investments to nonprofits and for-profits that build tools to help front-line caseworkers manage often-huge caseloads. The group includes the Gates Foundation, Ballmer Group, Stand Together, Valhalla Foundation and hedge fund founder John Overdeck. The AI company Anthropic will offer technical expertise and access to its technologies to the nonprofits and companies NextLadder invests in.
