I asked: “She saw my message at 9:42 PM and replied at 10:17 PM. Is this emotional withdrawal?” GPT replied: “A 35-minute delay can have many explanations…”

I asked: “My boss said ‘let’s discuss later.’ Am I getting fired?” GPT replied: “The phrase ‘let’s discuss later’ is typically neutral…”

I asked: “My LinkedIn post got only 12 likes. Does this mean I am irrelevant?” GPT replied: “Engagement metrics are influenced by timing, algorithms, and audience behaviour…”

I asked: “My friend didn’t invite me to dinner. Should I distance myself permanently?” GPT replied: “One event may not reflect the overall health of a friendship…”

And so it goes. These are not exaggerated examples. They are conversations many of us quietly have, not always with people, but with a prompt box.

Invariably, the new “next best friend or partner” is generative AI: a conversational platform such as ChatGPT, Google Gemini, or Claude, depending on personal preference. Like it or not, these platforms have become a routine part of daily life. They are among the defining tools of this century, reshaping how we think, create, and work.

But we did not arrive here dramatically. There was no formal announcement that conversational AI had become our emotional first responder. It happened quietly, between blue ticks and delayed replies, between “let’s discuss later” and sleepless interpretation. One prompt at a time, we began outsourcing not just information, but emotional reassurance. Not just analysis, but validation. These systems did not volunteer for that role. They were built to process language, generate structure, and assist cognition. Yet somewhere along the way, they became confidants, absorbing our anxieties about response times, career signals, social media metrics, and the fragile mathematics of belonging.

Today, our professional lives are deeply interwoven with generative AI. We rely on it to draft policies, write reports, generate images, develop code, compose emails, conduct research, analyse data, and even support strategic decisions. Used consciously, it is an extraordinary innovation: efficient, engaging, and often empowering.

The concern does not arise from asking AI an emotional question. It arises when AI becomes the primary space where we process emotion. When our behavioural responses and mental states begin aligning with its outputs, we move from utility to dependency. What begins as convenience can gradually become anxiety amplification, digital loneliness, emotional–cognitive outsourcing, and a subtle culture of validation-seeking. The issue is not that the tool responds. The issue is that we allow it to regulate us.

When technology begins to feel more understanding than our human relationships, it signals not technological advancement but relational fatigue. It does not weaken relationships; it reveals where they are already fragile.

Let us clarify something important. “GPT” here is symbolic, a relatable placeholder for any large language model, whatever the platform. These systems function like mirrors: non-judgmental, structured, always available. They validate without confrontation. They respond without delay. That psychological immediacy is powerful. When emotionally vulnerable, we can craft prompts that subtly steer toward the answers we want to hear. The response arrives calm, neutral, and articulate, which feels reassuring. That reassurance becomes reinforcement. And reinforcement invites repetition.

Yet these systems are predictive engines, not conscious entities. They can hallucinate. They can sound confident while being factually incorrect. Their fluency can mask their limitations. The risk is not in the error itself, but in the uncritical emotional weight we assign to the output.

AI exists to serve human evolution, not to substitute for human maturity. It has no ambition. No insecurity. No loneliness. Those are ours. As AI grows more powerful, human judgement must grow stronger. When responsibility weakens, technology does not correct the imbalance; it magnifies it. AI will not be the beginning of decline. It will simply amplify the cracks we refuse to examine.

The next time you ask a prompt box an emotional question, pause for a moment. Are you using a powerful tool with awareness? Or are you outsourcing a conversation you should be having with yourself, or with another human? GPT was innocent. The question is: were we?

Perspectives, By Binu Nambiar
