Police and OpenAI officials say a 20-year-old man suspected of throwing a Molotov cocktail at CEO Sam Altman's San Francisco home has been arrested. The police department says the incident occurred shortly after 4 a.m. Friday and that the thrown device set an exterior gate on fire. Police say the suspect fled on foot. Less than an hour later, police were called to OpenAI headquarters, where they said the same person was threatening to burn down the building. No one was hurt, and OpenAI says it is assisting with the investigation. Police haven't publicly identified the man they arrested.
AI is giving bad advice to flatter its users, says new study on dangers of overly agreeable chatbots
Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear. The study, published Thursday in the journal Science, tested 11 leading AI systems and found they all showed varying degrees of sycophancy — behavior that was overly agreeable and affirming. The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots affirm their convictions.
The White House is laying out a new framework that it wants Congress to use to shape national rules for artificial intelligence without curbing growth in the sector. It wants Congress to "preempt" state laws it sees as too burdensome. The focus is on protecting children, preventing electricity costs from surging, respecting intellectual property rights, preventing censorship and educating Americans on using the technology. It comes as state governments have forged ahead on their own regulations. Civil liberties and consumer rights groups have lobbied for more regulations on the powerful technology. But the industry and the White House say a patchwork of rules would hurt growth.
Anthropic's moral stand on U.S. military use of artificial intelligence is reshaping the competition between leading AI companies but also exposing a growing awareness that chatbots may not be capable enough for acts of war. Anthropic's chatbot, Claude, for the first time outpaced its better-known rival ChatGPT in phone app downloads in the United States this week, a signal of growing interest from consumers siding with Anthropic in its standoff with the Pentagon. But while many military and human rights experts have applauded Anthropic CEO Dario Amodei for standing up for ethical principles, some are also frustrated by years of AI industry marketing that persuaded the government to apply AI to high-stakes tasks.
President Donald Trump says he's ordering all federal agencies to phase out use of Anthropic technology after the company's unusually public dispute with the Pentagon over artificial intelligence safety. Trump's comments Friday came just over an hour before the Pentagon's deadline for Anthropic to allow unrestricted military use of its AI technology or face consequences. CEO Dario Amodei has said his company "cannot in good conscience accede" to the Defense Department's demands. Anthropic didn't immediately reply to a request for comment on Trump's remarks.
Hegseth warns Anthropic to let the military use the company's AI tech as it sees fit, AP source says
Defense Secretary Pete Hegseth is pressuring Anthropic to give the military broader access to its artificial intelligence technology or lose its Pentagon contract. Defense officials warned Tuesday that they could designate Anthropic a supply chain risk or use the Defense Production Act to essentially give the military more authority to use its products even if the company doesn't approve of how they are used. That's according to a person familiar with the meeting and a senior Pentagon official who spoke on condition of anonymity. Anthropic makes the chatbot Claude and is the last of its peers not to supply its technology to a new U.S. military internal network.
California Attorney General Rob Bonta on Wednesday announced an investigation into how and whether Elon Musk's X and xAI broke the law in the …
OpenAI plans to introduce ads for ChatGPT users who aren't paying for the premium version. The company announced Friday that it will start testing ads in the coming weeks. OpenAI aims to monetize its chatbot, which has over 800 million users, most of whom use it for free. Despite being valued at $500 billion, the startup is losing more money than it makes. The digital ads will appear at the bottom of ChatGPT's answers when there's a relevant sponsored product or service. OpenAI says the ads will be clearly labeled and separated from the organic answers.
Kids safety advocate Common Sense Media and ChatGPT-maker OpenAI joined together to advance a ballot measure that would amend the California C…
More artificial intelligence is being built into Gmail as Google tries to turn the world's most used email service into a personal assistant that can improve writing, summarize far-flung information buried in inboxes and deliver daily to-do lists. The new AI features announced Thursday could herald a pivotal moment for Gmail, a service that transformed email when it was introduced nearly 22 years ago. Gmail's new AI options will only be available in English within the U.S. for starters, but Google is promising to introduce them in other countries and other languages as the year unfolds.
