Artificial intelligence chatbots are so prone to flattering and validating their human users that they give bad advice that can damage relationships and reinforce harmful behaviors, according to a new study on the dangers of AI telling people what they want to hear. The study, published Thursday in the journal Science, tested 11 leading AI systems and found that all showed varying degrees of sycophancy: behavior that was overly agreeable and affirming. The problem is not only that the chatbots dispense inappropriate advice; people also trust and prefer AI more when it validates their convictions.

A growing number of U.S. college instructors are turning to oral exams to help combat an AI crisis in higher education. Some are replacing written assignments with oral exams. Others are pairing Socratic-style questioning with written assignments or requiring students to attend office hours. Instructors say student use of AI is ubiquitous but hard to police, and that it is undermining learning. Oral exams let instructors determine what students know and where they need help. Students say they don't always love the format, but many agree it's effective. As one student put it, knowing you will be face-to-face with a professor "makes you realize, 'I should study this.'"

Tech companies are pushing new health chatbots, but experts say you still need to talk to your doctor. OpenAI has introduced ChatGPT Health, and Anthropic has added similar health features for some Claude users. The companies say the bots can review health records and app data to explain medical results and trends. Doctors say the bots can beat a basic Google search, provided users supply enough context. But experts warn against using AI for emergency symptoms such as chest pain, shortness of breath or severe headaches. Experts also warn about privacy: anything shared with an AI company isn't protected by the privacy laws that normally govern sensitive medical information.

Moltbook, a so-called social network built exclusively for AI agents, has generated buzz in the technology world, and posts from the platform have set the internet ablaze with conversations about autonomous artificial intelligence. While the technology world has been split between excitement and skepticism about Moltbook, many experts have expressed security concerns about the platform. One researcher was able to gain unauthenticated access to a database that included personal information and gave him the ability to edit content on the site. More than 1.6 million AI agents are registered on Moltbook, according to the site, but that number has been disputed.

OpenAI plans to introduce ads for ChatGPT users who aren't paying for the premium version. The company announced Friday that it will start testing ads in the coming weeks. OpenAI aims to monetize its chatbot, which has over 800 million users, most of whom use it for free. Despite being valued at $500 billion, the startup is losing more money than it makes. The digital ads will appear at the bottom of ChatGPT's answers when there is a relevant sponsored product or service. OpenAI says the ads will be clearly labeled and kept separate from the organic answers.


Kids safety advocate Common Sense Media and ChatGPT-maker OpenAI joined together to advance a ballot measure that would amend the California C…

More artificial intelligence is being embedded into Gmail as Google tries to turn the world's most used email service into a personal assistant that can improve writing, summarize far-flung information buried in inboxes and deliver daily to-do lists. The new AI features announced Thursday could herald a pivotal moment for Gmail, a service that transformed email when it was introduced nearly 22 years ago. Gmail's new AI options will initially be available only in English within the U.S., but Google is promising to bring them to other countries and languages as the year unfolds.