"Sincerity, if you can fake that, you've got it made."  This quip has been attributed to comedians, Hollywood moguls, and senior attorneys instructing newcomers.  Wherever it comes from, faking sincerity is the punchline—an absurdity.

Yet the makers of modern AI therapy apps like Woebot and Youper don’t get the joke.  If anything, the joke is on us.

It’s not hard to fake sincerity and concern.  Corporate voicemail often assures callers “your call is very important to us,” as we remain on hold for additional interminable minutes.  Politicians offer “thoughts and prayers” while their actions belie their words.  Greeting cards express sentiments their senders may or may not really feel.  And as far back as the 1960s, computers were programmed to mimic connection and concern.

Computer scientist Joseph Weizenbaum developed ELIZA in the mid-1960s to simulate a Rogerian therapist.  That is, ELIZA simulated a humanistic therapist in the mold of the famous psychologist and author Carl Rogers.  It didn’t do this very well, as Weizenbaum was the first to admit.

Mostly the program rephrased what the user typed in, printing it back out in the form of a question, or a request to hear more about that topic.  For example, if a user typed “I feel depressed,” ELIZA might respond “You feel depressed?  Tell me more.”  Later versions incorporated better grammar rules and stored user statements in a memory stack, so the program could periodically offer something like: “Earlier you said you were mad at your boss.  Can you expand on that?”  In all cases, the program was simply manipulating text, replacing “I” with “you” and so forth.  If it recalled a topic, it did so randomly, as there was no understanding involved.
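
To make concrete how little machinery this takes, here is a minimal Python sketch of an ELIZA-style exchange.  It is only an illustration of the pronoun-swapping and memory-stack ideas described above; the names and details are invented for this example and are not Weizenbaum’s actual code.

    import random

    # A toy ELIZA-style responder: it swaps first-person words for
    # second-person ones, reflects statements back as questions, and
    # occasionally recalls an earlier statement chosen at random.
    PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

    memory = []  # stack of earlier user statements

    def reflect(text):
        """Replace first-person words with second-person equivalents."""
        words = text.lower().strip(".!?").split()
        return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

    def respond(user_input):
        memory.append(user_input)
        # Sometimes bring back an earlier statement at random; there is
        # no understanding involved, just a stored string.
        if len(memory) > 2 and random.random() < 0.3:
            earlier = random.choice(memory[:-1])
            return f"Earlier you said {reflect(earlier)}. Can you expand on that?"
        return f"{reflect(user_input).capitalize()}? Tell me more."

    print(respond("I feel depressed"))     # "You feel depressed? Tell me more."
    print(respond("I am mad at my boss"))  # "You are mad at your boss? Tell me more."

Even at this crude level, the output mimics the cadence of concern without a trace of understanding.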

Weizenbaum never intended ELIZA to replace real therapists.  He later expressed dismay that some users, including his own secretary, treated the program as though it actually cared about them.*  In fact, his aim in creating ELIZA was to demonstrate the superficiality of communication between humans and machines: to show that humans can be lulled into believing they are conversing with an intelligent entity, fooled by rather simple mechanical responses.

Computers have advanced dramatically in the past half-century, not least in their handling of natural language.  Alexa and Siri parse our questions and commands in meaningful, helpful ways, and respond in a lifelike human voice.  Yet they, and the banks of computer servers that operate behind the scenes, still don’t care about us.  They’re machines.

Embodying AI in a robot that looks and/or moves as if alive feeds our tendency to ascribe emotions and motivations to it.  Stuffed animals with embedded AI are now given to some elderly people to keep them company.  The practice is usually regarded as a benign kindness to those with dementia who can’t tell the difference, but others point out that children and even some lucid adults feel warmly toward dolls and stuffed animals, including inert toys without “intelligence.”  Indeed, the cuddliness may be more compelling than the technology.

It’s interesting to ponder what to make of this.  Perhaps the ability to suspend disbelief in the service of empathizing with inanimate objects is a strength, akin to our suspension of disbelief when enjoying a movie or play.  Or maybe better analogies are to hypnotizability and suggestibility, which can be either strengths or weaknesses depending on the circumstances.

Which brings us back to Woebot and Youper.  Unlike ELIZA, these chatbots aim to help real people in distress.  Both apps offer a lot up front, then hedge in the fine print.  Youper offers “daily therapy exercises,” yet denies being “therapy.”  Woebot says it doesn’t replace human therapists, but its website testimonials strongly imply otherwise.  Both simulate cognitive behavioral therapy (CBT), not Rogerian or another therapy of depth, insight, and relationship.  It is probably easier to simulate the former, as CBT is more algorithmic — rule-based, operationalized — than depth therapy.  However, even CBT requires subtlety lacking in these AI versions.  And like ELIZA decades earlier, modern mental health chatbots annoy some users with their blind missteps.

Woebot in particular aspires to be more than a self-help book or a set of exercises.  For unlike Weizenbaum, but like the makers of robot pets, Woebot’s creators welcome the illusion that their program is caring — that it can form a “human level bond.”  Indeed, they cite this as a feature and proudly present a supportive study by authors who all have financial ties to Woebot Health.  If validated, this finding would only underscore Weizenbaum’s concern from the 1960s: It doesn’t take much artificial intelligence to fool people.  Psychologist and writer Sherry Turkle says of Woebot:

We will humanize whatever seems capable of communicating with us…. You’re creating the illusion of intimacy, without the demands of a relationship. You have created a bond with something that doesn’t know it is bonding with you. It doesn’t understand a thing.

While it is plainly unethical to base treatment on outright deception, the ethics are more complicated when treatment relies on human frailties.  Psychoanalysis and dynamic psychotherapy utilize transference: the unconscious, often faulty assumptions a patient brings to the therapy relationship.  The ultimate goal, however, is to shine a light on these assumptions, to bring them more in line with reality.  Not so with AI therapy: success would not be a dawning realization that the app has no feelings and doesn’t care.  Instead, the false perception of sincerity has to persist through treatment and beyond.  One user’s testimonial displayed on the Woebot Health website reads: “Woebot is sweet and has plenty of human warmth.”

Human therapists can fake sincerity too, of course.  However, doing so is a fault, not a selling point.  Psychotherapy training programs never advise students to “fake sincerity, then you have it made.”  We recognize that joke for what it is.  Therapies of depth, insight, and relationship are founded on genuine human connection.  If in the distant future an advanced AI passes the Turing Test—that is, conducts therapy of depth, insight, and relationship indistinguishable from that of a skilled human therapist—then interstellar PsiAN will be open-minded enough to endorse its work without human favoritism.  But not before.

Written By:

Dr. Reidbord, editor of the Forum, is a San Francisco psychiatrist. His practice is primarily psychodynamic psychotherapy, but also includes medication management and integrative practices (meditation, journaling, nutrition, exercise). He has taught and supervised psychiatry residents for 30 years, and has blogged on medical and psychiatric topics since 2008.

Reference:

*Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman, p. 7.

Image courtesy of @lazycreekimage, via Unsplash
