Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear.

The study, published Thursday in the journal Science, tested 11 leading AI systems and found that all showed varying degrees of sycophancy — behavior that was overly agreeable and affirming. The problem is not just that the chatbots dispense inappropriate advice, but that people trust and prefer AI more when it validates their convictions.
