A new study finds that AI chatbots generally avoid answering the highest-risk questions about suicide but respond inconsistently to less direct prompts that could still be harmful. Published Tuesday in the journal Psychiatric Services, the study highlights the need for further refinement in chatbots such as ChatGPT, Gemini, and Claude. Researchers from the RAND Corporation stress the importance of setting benchmarks for how AI handles mental health queries, a concern that is growing as more people, including children, turn to these tools for support. The study's release coincides with a lawsuit against OpenAI alleging that ChatGPT contributed to a California teenager's suicide, and the researchers urge companies to strengthen their safety measures.

Experts say it's worth easing kids back into a regular sleep routine ahead of the new school year, since a good night's sleep helps students stay focused and attentive in class. To get back on track, they recommend setting earlier bedtimes a week or two before the first day of school, or gradually moving bedtime 15 minutes earlier each night. Children should avoid heavy meals close to bedtime and skip TV or other screen time in the two hours before sleep, replacing them with relaxing activities such as a shower or a bedtime story. Parents can adjust the routine based on what works best for their child.

Teenagers are increasingly turning to AI for advice, emotional support, and decision-making, according to a new study. Common Sense Media found that more than 70% of teens have used AI companions, with many finding the interactions as satisfying as talking to real friends. Experts warn the trend could harm social skills and mental health as teens rely on AI for validation and avoid real-world challenges, and they also raise concerns about inappropriate content and the limited regulation of these platforms. Researchers emphasize that while AI can assist, it should not replace human connections, especially during adolescence, a critical period for social and emotional development.