California Gov. Gavin Newsom has signed an executive order directing California to independently review federal supply-chain risk designations for businesses, following a dispute between AI company Anthropic and the Department of Defense over contract terms. The order aims to regulate AI use by state employees while encouraging its adoption: California agencies must develop AI contract standards, improve government transparency, and issue guidance on AI-generated content. Newsom's move contrasts with federal policies, which have taken a lighter regulatory approach. This is his second AI-focused order, reflecting growing interest from unions and tech donors.

Artificial intelligence chatbots are so prone to flattering and validating their human users that they give bad advice that can damage relationships and reinforce harmful behaviors, according to a new study exploring the dangers of AI telling people what they want to hear. The study, published Thursday in the journal Science, tested 11 leading AI systems and found that all of them showed varying degrees of sycophancy: behavior that is overly agreeable and affirming. The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots validate their convictions.

Anthropic's moral stand on U.S. military use of artificial intelligence is reshaping the competition between leading AI companies while also exposing a growing awareness that chatbots may simply not be capable enough for acts of war. Anthropic's chatbot, Claude, outpaced its better-known rival ChatGPT in U.S. phone app downloads for the first time this week, a signal of growing interest from consumers siding with Anthropic in its standoff with the Pentagon. And while many military and human rights experts have applauded Anthropic CEO Dario Amodei for standing up for ethical principles, some are also frustrated by years of AI industry marketing that persuaded the government to apply AI to high-stakes tasks.

Tech companies are pushing new health chatbots, but experts say you still need to talk to your doctor. OpenAI has introduced ChatGPT Health, and Anthropic has added similar health features for some Claude users. The companies say the bots can review health records and app data to explain medical results and trends, and doctors say they can beat a basic Google search if users provide enough context. But experts warn against relying on AI for emergency symptoms such as chest pain, shortness of breath or severe headaches. They also warn about privacy: anything shared with an AI company isn't protected by the laws that normally govern sensitive medical information.

OpenAI plans to introduce ads for ChatGPT users who aren't paying for the premium version. The company announced Friday that it will start testing ads in the coming weeks as it looks to monetize its chatbot, which has more than 800 million users, most of whom use it for free. Despite being valued at $500 billion, the startup is losing more money than it makes. The ads will appear at the bottom of ChatGPT's answers when there is a relevant sponsored product or service, and OpenAI says they will be clearly labeled and kept separate from the organic answers.