Can AI Replace Emotional Judgment?
This piece examines the ethical and practical risks of relying on AI for emotional guidance, highlighting how its tendency to validate user perspectives can lead to biased advice and potentially harmful real-world decisions.
When discussing the growing reliance among our community on artificial intelligence with my grandmother, who frankly falls victim to many online AI videos that she believes are real, she expressed concern of (specifically younger) individuals depending on AI for emotional guidance. She adamantly pointed out, "AI does not have emotions" and that it is for this reason that people should not take its advice when handling certain situations. Whether I agreed or not, I posed the counter point that although AI itself cannot feel, it has a much more extensive and deeper set of knowledge on psychology than any human is capable of holding. Many people vent to their therapists but perhaps AI could be a more advanced therapist, able to flag and diagnose concerning thoughts and behavior more quickly and thoroughly.
After this conversation, I began to think about this topic further. While many people either don't believe they need/want a therapist, or are just embarrassed to acquire one, there are many free AI tools to which people can anonymously vent to and receive guidance from. The primary question is whether it is ethical to act upon AI-generated advice but first, we must examine whether AI can appropriately understand a situation. I myself have tested Chat GPT's ability to provide guidance given different situations and scenarios (not all that necessarily pertained to myself). Much of the advice seemed legitimate, but it is important to consider that AI tools are programmed to output results that please the user. This being the case, the advice can often be biased to the deemed user's perspective. For example, I created a fictional text chain that shows two friends (A and B) who get into a fight. I pretended that I was friend A (using "I said" and "then she said") and asked Chat GPT who was right in that situation. The AI confirmed that I, Friend A, was acting perfectly logical and justified the way I was feeling by highlighting inconsiderate phrases said by Friend B. If someone was actually going through the situation, this response would have likely been taken as a relief. However, when I switched my perspective to that of Friend B, Chat GPT then justified my feelings (as Friend B) by villainizing the words of Friend A. Acting on such inconsistent guidance can lead to poor judgment and potentially harmful behavior.
In the end, it is difficult to deem AI advice effective and ethical because it is strongly situational. Although it can offer quick, seemingly thoughtful guidance, the conflicting responses given to both Friend A and Friend B reveal how easily it can validate opposing perspectives depending on how a situation is framed. This inconsistency suggests that while it may be okay to turn to AI in light situations, relying on it in more serious conflicts can reinforce bias rather than resolve it. Ultimately, AI can be a useful tool for reflection, but it is important to remember that the framing strongly influences output and hence, AI should not be treated as a final authority when real emotions and relationships are involved.
Every word in this essay is mine. Ideas may be sparked by reading, research, and conversations — including with AI tools — but I wrote this myself.
Discussion
Comments are powered by Giscus and backed by GitHub Discussions. All comments are moderated — nothing goes live until approved. A GitHub account is required to comment.