{"id":3564,"date":"2025-08-05T21:11:00","date_gmt":"2025-08-05T21:11:00","guid":{"rendered":"https:\/\/floridiansi.com\/?p=3564"},"modified":"2025-08-05T21:11:00","modified_gmt":"2025-08-05T21:11:00","slug":"ai-vs-human-therapists-study-finds-chatgpt-responses-rated-higher","status":"publish","type":"post","link":"https:\/\/floridiansi.com\/?p=3564","title":{"rendered":"AI vs. Human Therapists: Study Finds ChatGPT Responses Rated Higher"},"content":{"rendered":"<p>Whether machines could be therapists is a question that has received increased attention given the potential benefits of working with generative artificial intelligence (AI).<\/p>\n<p>Although previous research has found that humans can struggle to tell the difference between responses from machines and humans, recent findings suggest that AI can write empathically. The content it generates is rated highly by both mental health professionals and voluntary service users, to the extent that it is often favored over content written by professionals.<\/p>\n<p>In their new study involving over 800 participants, Hatch and colleagues showed that, although differences in language patterns were noticed, individuals could rarely identify whether responses were written by ChatGPT or by therapists when presented with 18 couples therapy vignettes.<\/p>\n<p>This finding echoes Alan Turing\u2019s prediction that humans would be unable to tell the difference between responses written by a machine and those written by a human. In addition, the responses written by ChatGPT were generally rated higher on core guiding principles of psychotherapy.<\/p>\n<p>Further analysis revealed that the responses generated by ChatGPT were generally longer than those written by the therapists. 
After controlling for length, ChatGPT still used more nouns and adjectives than the therapists did.<\/p>\n<p>Considering that nouns describe people, places, and things, and adjectives provide additional context, this could mean that ChatGPT contextualizes more extensively than the therapists.<\/p>\n<p>More extensive contextualization may have led respondents to rate the ChatGPT responses higher on the common factors of therapy (components shared by all modalities of therapy that help achieve desired results).<\/p>\n<p>According to the authors, these results may be an early indication that ChatGPT has the potential to improve psychotherapeutic processes. In particular, this work may lead to the development of new methods for testing and creating psychotherapeutic interventions.<\/p>\n<p data-slot-rendered-content=\"true\">Given the mounting evidence that generative AI can be useful in therapeutic settings, and the likelihood that it will be integrated into them sooner rather than later, the authors call for mental health experts to expand their technical literacy to ensure that AI models are carefully trained and supervised by responsible professionals, thus improving both the quality of and access to care.<\/p>\n<p>The authors add: \u201cSince the invention of ELIZA nearly sixty years ago, researchers have debated whether AI could play the role of a therapist. 
Although there are still many important lingering questions, our findings indicate the answer may be \u2018Yes.\u2019<\/p>\n<p>\u201cWe hope our work galvanizes both the public and mental health practitioners to ask important questions about the ethics, feasibility, and utility of integrating AI into mental health treatment, before the AI train leaves the station.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Whether machines could be therapists is a question that has received increased attention given the potential benefits of working with generative artificial intelligence (AI). Although previous research has found that humans can struggle to tell the difference between responses from machines and humans, recent findings suggest that AI can write empathically and the generated&#8230;<\/p>\n","protected":false},"author":1,"featured_media":3565,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[32],"tags":[],"class_list":["post-3564","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-neuroscience"],"_links":{"self":[{"href":"https:\/\/floridiansi.com\/index.php?rest_route=\/wp\/v2\/posts\/3564","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/floridiansi.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/floridiansi.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/floridiansi.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/floridiansi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3564"}],"version-history":[{"count":1,"href":"https:\/\/floridiansi.com\/index.php?rest_route=\/wp\/v2\/posts\/3564\/revisions"}],"predecessor-version":[{"id":3566,"href":"https:\/\/floridiansi.com\/index.php?rest_route=\/wp\/v2\/posts\/3564\/revisions\/3566"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/f
loridiansi.com\/index.php?rest_route=\/wp\/v2\/media\/3565"}],"wp:attachment":[{"href":"https:\/\/floridiansi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3564"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/floridiansi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3564"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/floridiansi.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3564"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}