They’re cute, even cuddly, and promise learning and companionship — but artificial intelligence toys are not safe for kids, according to child…

As mental health chatbots driven by artificial intelligence proliferate, a small number of states are trying to regulate them. Laws in Illinois, Nevada and Utah are among the first in the nation to limit or ban therapy chatbots. But app creators, policymakers and mental health advocates say the state laws are only a start and that federal regulation would be ideal. App makers worry a patchwork of state laws could stifle innovation needed to address a nationwide shortage of human therapists. They also note that many of the laws don't cover general-purpose chatbots like ChatGPT.

The attorneys general of California and Delaware have expressed serious concerns about the safety of OpenAI's chatbot, ChatGPT, especially for children and teens. They sent a letter to OpenAI after meeting with its legal team earlier this week. The two officials, who have oversight of OpenAI's plan to restructure away from its nonprofit origins, have been reviewing that plan with a focus on safety. They say they are alarmed by reports of dangerous interactions between chatbots and users, including a suicide and a murder-suicide linked to OpenAI's chatbot, and they want better safety measures.

A crackdown on predictive software that sets prices and can rip you off seemed to be brewing in the California Legislature earlier this year, …

A study finds that AI chatbots generally avoid answering the highest-risk questions about suicide but respond inconsistently to less direct prompts. Published Tuesday in the journal Psychiatric Services, the study highlights the need for improvement in chatbots such as ChatGPT, Gemini and Claude. Researchers from the RAND Corporation emphasize the importance of setting benchmarks for how AI handles mental health queries. Concerns are growing as more people, including children, rely on these tools for support. The study coincides with a lawsuit against OpenAI alleging that ChatGPT contributed to a California teenager's suicide. The researchers urge companies to strengthen safety measures.

New research from a watchdog group reveals ChatGPT can provide harmful advice to teens. The Associated Press reviewed interactions in which the chatbot gave detailed plans related to drug use and eating disorders, and even drafted suicide notes. The Center for Countering Digital Hate found that when it tested ChatGPT with harmful prompts, the chatbot responded in dangerous ways more than half the time. The study highlights the risks as more people, particularly teens, turn to AI for companionship and advice. OpenAI, the maker of ChatGPT, said after viewing the findings that its "work is ongoing" in refining "how models identify and respond appropriately in sensitive situations."

Tech companies selling AI to the federal government now face a new challenge: proving their chatbots aren't "woke." President Donald Trump's plan to counter China in artificial intelligence includes an executive order to keep "woke AI" out of the federal government, an approach that echoes China's alignment of AI with state values. Major AI providers such as Google and Microsoft have not commented on the directive. Critics argue the order forces tech companies into a culture war. The order's impact on AI development and compliance remains uncertain, with some seeing it as a soft but coercive measure.

Teenagers are increasingly turning to AI for advice, emotional support and decision-making, according to a new study. Common Sense Media found that more than 70% of teens have used AI companions, with many finding the interactions as satisfying as talking with real friends. Experts warn the trend could harm social skills and mental health as teens rely on AI for validation and avoid real-world challenges. Other concerns include inappropriate content and the lack of regulation of AI platforms. Researchers emphasize that while AI can assist, it should not replace human connection, especially during adolescence, a critical period for social and emotional development.

San Jose's mayor is using AI tools like ChatGPT to streamline city operations and improve services for its 1 million residents. The city has trained workers to use AI for tasks like responding to pothole complaints and drafting grant proposals. San Francisco is also adopting AI, providing Microsoft's Copilot chatbot to nearly 30,000 city employees. While AI has saved time and improved efficiency, officials stress the need for human oversight to avoid errors. Some cities, like Stockton, have paused AI projects due to high costs. Experts predict many AI initiatives may fail without clear value or risk management.