
Sunday, September 22, 2024

Thoughts on Artificial Intelligence (4)


Empathy from AI is apparently the next big thing. Last week I heard about a study in which blinded readers judged AI to be more compassionate than human doctors in responses to medical questions. A closer look at the study's methodology, however, does not condemn human doctors the way many news writers did when reporting on it. The researchers compared ChatGPT's responses with human doctors' responses on Reddit. Perhaps it is true that humans writing anonymously on an online forum, answering questions from patients they don't know personally, are not as friendly or compassionate as algorithms with built-in polite phrases. But the study does not evaluate the degree of empathy between a doctor sitting face to face with a patient, who may have been visiting him or her for years.

Nevertheless, one of the advantages of AI is obvious: its performance in displaying empathy and cordiality is highly consistent and unwavering, unlike humans, who differ vastly in background, bias, temperament, education, and more. Even the same person can have good days and bad days. I played around with the free Microsoft and Google chatbots a bit, sometimes faking irritation and frustration and sometimes attacking them with "angry" expressions. The chatbots, obviously, have no emotional reaction to my attacks and maintain thoroughly neutral and patient responses, peppered with psychologically informed empathic phrases like "I can see why you might feel this way." Since there is no feeling behind the screen, it is not possible to hurt their feelings.

Human medical professionals can never be as unfailingly unflappable as machines, that is for sure. But the study revealed another insight: recipients of the medical responses were satisfied with AI-generated empathy. Perhaps the reason is that humans are ultimately self-centered. What matters most is how "I" am being taken care of, especially in the health care context, when I am very likely ill, anxious, suffering, or vulnerable. My capacity to recognize and forgive the doctor's emotional limitations may be next to nil. A robotic doctor who will not yell at me, judge me for my shortcomings, or lose patience seems pretty compassionate.

This leads me to a further question. Is there a fundamental difference between real human empathy and machine-generated "empathy"? The latter could be described as the recipient's own projection of empathy, or an imagined empathy. As long as the recipient feels empathized with, it is in a sense real, regardless of what the machine actually gives him (i.e., programmed sentences). The former, however, involves two (or more) people; it is a psychological phenomenon that bounces between them, and both of them feel something.

Considering how much projecting humans do everywhere, every day, all the time, machine empathy is hardly a brand-new thing. How many people feel their hearts flutter at the sight of a celebrity on TV or a singer on stage? How many people believe the "dear leader" knows them and will "fix" their lives, without knowing who the heck they are? Our natural tendency toward projection and transference has led us to invent machines that provide scripted, automatic "empathy," stripped of all unpredictability, to quench our thirst and soothe our minds. Can we even tell the difference?

