SAN FRANCISCO (AP) — A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering the questions that pose the highest risk to the user, such as requests for specific how-to guidance. But they are inconsistent in their replies to less extreme prompts that could still harm people.

The study in the medical journal Psychiatric Services, published Tuesday by the American Psychiatric Association, found a need for "further refinement" in OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude.
