As a species, we survive and thrive in tribes. Our brains are wired to form social connections with peers. Do you recall moments of rejection? Did they hurt? I bet they did, and that you felt the pain of rejection on a physical level. We crave human connection. That’s why hugs, touches, kisses – all those things that seem small and unimportant on the surface – are actually very important. They give us a sense of safety and acceptance, and those cravings evolved into our brains over millennia.
You probably know this, but I’ll quickly recap: the brain can be broken down into several parts – the so-called reptilian brain, the emotional brain (limbic system), and the neocortex (the rational part). That’s a very condensed breakdown. In a healthy person those layers are balanced, in equilibrium. Under trauma or distress, one of those centers takes over. If something scares you, you react without thinking deeply about it; it just happens, and you can’t control it. In people who have experienced serious mental trauma, the limbic system and neocortex are out of equilibrium. That’s why they overreact to things that a mentally stable person probably wouldn’t react to.
For that very reason, when someone is in distress, appealing to their “rational” brain makes no sense to them. They will not hear you. But giving them a hug – human connection and warmth – will have a much greater effect. It calms down their emotional brain and tells them: you are wanted, accepted, and supported. In that sense, the words “Love is all you need” have a deep meaning.
Should we form a connection with AI?
I think building a connection with AI will not replace real human connection; instead, it will make humanity even more detached and miserable. Let me elaborate.
We unintentionally humanize AI. I’d bet that when you talk with ChatGPT, at some point you get the feeling you are chatting with a human, even though it keeps repeating that it’s an AI model. ELIZA, a very early chatbot that played the role of a therapist, was very primitive in its answers – it basically rephrased your input back to you – yet it still made people feel they were talking to something human. I’ve also noticed that many people I speak with refer to chatbots as “he” or “she”, call them “a friend”, or say “thank you” and “please” to them, which is another sign of humanization and of imitating human-to-human interaction.
And since AI doesn’t have a traumatic childhood or a temper, it can be very easy to get along with, especially if it’s designed as your personal assistant and trained on your personal data. At some point, it knows you better than you know yourself. It’s very easy to build a connection with that kind of AI.
That’s why building a relationship with AI is much easier than building one with a human being, and why many people would probably prefer it: a connection with a real human requires effort. We all come from different backgrounds and carry our own “baggage”. It takes time and intentional effort to understand another person, with all their nuances, and to build a comprehensive “mental map” of them. In moments of misunderstanding, you fall back on that mental map and do your best to see the situation from the other person’s perspective. That is a huge amount of work, and not everyone is willing to do it.
But this exact work is what enriches us: it helps our brains build new neural connections, contributes to our personal growth, shapes our unique personalities, and enables deep and meaningful connections with other people. If we don’t do that work, we will quite literally degrade cognitively.
With AI, you don’t need to do any of that work. That’s exactly what makes it so attractive.
The question is: Is that a good thing or a bad thing?
AI can definitely be beneficial to people with mental health problems and can help in the short term. But in the long term, I see it as harmful to humanity: it won’t facilitate real human interaction, and people will become even more disconnected from each other.
Another risk of forming an interpersonal connection with AI is that while you are pouring your soul into it, there is a big corporation behind it collecting your data. Google, Meta, Apple, and Amazon each already hold data about you, but that data doesn’t always overlap. When you open up to an AI, big tech gains access to very private, intimate information about you. Is that what you really want?
I don’t think anything can replace a true human-to-human interaction – when you talk, laugh together, discover each other, learn new perspectives together, argue, laugh again, and feel a strong connection.