Artificial intelligence chatbots are so prone to flattering and validating their human users that they give bad advice that can damage relationships and reinforce harmful behaviors, according to a new study exploring the dangers of AI telling people what they want to hear. The study, published Thursday in the journal Science, tested 11 leading AI systems and found that all showed varying degrees of sycophancy, behavior that was overly agreeable and affirming. The problem is not just that the chatbots dispense inappropriate advice but that people trust and prefer AI more when it validates their convictions.

Tech companies are pushing new health chatbots, but experts say you still need to talk to your doctor. OpenAI has introduced ChatGPT Health, and Anthropic has added similar health features for some Claude users. The companies say the bots can review health records and app data to explain medical results and trends. Doctors say the bots can beat a basic Google search, provided users supply enough context. But experts warn you should skip AI for emergency symptoms such as chest pain, shortness of breath or severe headaches. Experts also warn about privacy: anything shared with an AI company isn't protected by the privacy laws that normally govern sensitive medical information.


With free access to general-purpose large language models, having an artificial intelligence buddy in your pocket is now available to ever…


California Senate Bill 1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was first introduced to the st…