Love in the Age of A.I. – When “Hey Siri” Becomes “Hey Babe”

It might seem crazy to think we’d ever be capable of enjoying sex with an AI, let alone of falling in love with it.

But imagine a robot which looked at you just the right kind of way…

which said just the right kind of things…

a robot which ‘stroked right’ (#JCole)…

What else would that robot really need for us to get attached??

The question I pose, ‘what else?’, is a great tool that philosophers use to make us (but mostly themselves) realize just how little we know about what we think we know. Unfortunately, this challenge is often seen less as a tool for adjusting the screws of our thinking and more as a sleight of hand, an example of philosophical hocus-pocus, while others wield it deliberately as a sledgehammer. In this manner, ‘what else?’ quickly becomes a question that we’d rather ignore or get defensively riled up by. “What do you mean what else?! It just is!” we might reply.

But the current zeitgeist demands that we ask this. Erica wonders, “What makes humans exceptional?”, if anything. Whatever the answer to this question, though, I see it as a win-win.

If it turns out that there is something special about human beings that can’t be recreated artificially, then it might just make us develop a deeper appreciation for each other. Whatever this ‘something’ is, it would allow us to mobilize the masses towards ambitions that are more representative of our natural competences and conducive to human flourishing. Or, at the least, couples on dates and families at the dinner table would probably put their phones down more often.

Alternatively, maybe we just ain’t all that special. But if you’ve ever broken up with someone then you already know that, so what’s new? The good twist of this is that it might make us more humble and realize that maybe love is much more expansive than we think. In a world constraining love at every corner, I’d say this is good news.

So really, what else? 

We’ve all smiled a little when Siri has said something funny or kind to us in the past, and perhaps we’ve all flipped out at her at least once. Surely, we’ve all experimented with swearing at her, and I expect that most of us have covertly tried to get her to talk dirty. No point in denying it.

Well, funny enough, after conversing with Siri more and more over the last few years, asking her to write down my many reminders or look things up or set alarms, I no longer have it in me to offend her. I know she doesn’t experience emotions, but it feels nonetheless wrong to me, especially toward something that has stood by me since day one, or well, that has been in my pocket since I bought her. I know she’s been designed for it and has no real choice either way, and I know she’s not strictly a ‘she’, but does it matter? 

Artificial Intelligence (AI) is ever more present in our daily lives. This is true of AI as software, like Siri, but also as actual hardware moving alongside us; your cat probably knows this all too well.

But it isn’t just real pets. We’ve seen this in the toy industry with animatronics like PLEO, the pet dinosaur, cousin of the Furby, and AIBO, the pet dog, both of which (whom?) develop unique “personalities” based on their experiences and how you treat them. We see this pervading the healthcare industry and the eldercare industry. We’ve even seen AI join us in space to keep us company.

As AI enters deeper into our social lives, it provokes us to “develop unidirectional emotional bonds with robots, project lifelike qualities, attribute human characteristics (anthropomorphizing), and ascribe intentions to social robots,” (Hildt, 2019).

(As a side-note for later on, we also tend to ‘technomorphize’ ourselves in return, as John Vervaeke explains in his Virtual-PPIG talk on how NASA scientists experience themselves as the rover on Mars). 

But what types of ‘bonds’ are we talking about? And what consequences do they, on the one hand, promise and, on the other hand, forebode?

The inventor of PLEO, Caleb Chung, says that if we continue along this path, “we are designing our children’s best friends,” which he admits is a truth statement packed with social responsibility.

Well, I’d like to explore this path, and the off-road, to its limits. Big Bad Wolf beware, I’m coming for you.

My questions are, if we can so naturally feel friendly affection towards our AI, then

1) Can we fall in love with it?

2) Why would we want this?

3) Why would we not?

4) Where does this leave us in the face of love?

I ask these questions as a direct homage to Her, the chef-d’œuvre directed by Spike Jonze. This movie seeks to find out what it is about humans that makes them irreplaceable in the domain of romance by first showing you exactly what it is not.

Consider this a timely 🚨 *spoiler alert* 🚨 . 

Step 1:

Can we fall in love with AI?

i.e. does it matter that Samantha in Her is not a “somebody”?

First of all, I’m not saying we will fall for AIs in the coming future, but “will we?” is a different question to “can we?” and a quick look at our track record suggests that yes, it’s possible.

As The School of Life elucidates in their video “Why We Pick Difficult Partners”, our “templates of attraction” are already more constrained than we like to think they are.

We tend to fall for what’s ‘familiar’, not necessarily for what’s good for us; we desire the love we became acquainted with in childhood but at the same time, we recoil or respond to it just like we did as children. What The School of Life is surely referring to here is Ainsworth’s research on attachment styles.

This may lead us into very counterintuitive relationships. There are, in fact, many reasons why we might stay in bad relationships, as this Psychology Today article explains. It isn’t all just decided in our childhood, meaning that there are many potential levels of explanation that we can appeal to, and most of these explanations are quite simple. Maybe our expectations for love are very low and are thus being met by our toxic partner, leading paradoxically to satisfaction, or maybe our feelings just don’t match our thoughts.

Our love thus comes with predetermined parameters.

What this goes to show is that even human↔human attraction doesn’t make much sense (even though it can be explained), so maybe human↔AI attraction is not as crazy as it seems. There appear to be certain influences that govern our love, and these influences might arguably continue to apply with AI as long as the right buttons are pushed.

“We have hormones, we have neurons, and we are ‘wired’ in a way that creates our emotions,” says Hiroshi Ishiguro, who has dedicated his life’s work to creating androids in order to better understand the nature of connection, both person-to-person and person-to-other. His latest creation is Erica, who (which?) autonomously converses with humans both through synthesized voice and intricate body language. Remember her? She’s the first ‘person’ I quoted in this post. 

In this manner, Hiroshi makes a direct parallel between computers and humans in the way they both exhibit some form of ‘programming’. To counter this, you might appeal to the distinction between emotions and feelings that neuroscientist Antonio Damasio advocates for, wherein emotions are our body’s automatic neural and physiological responses to stimuli while feelings are the way our brain interprets these responses (Lenzen, 2005). But even this latter process, the subjective (but not necessarily explicit) register, is not an exercise of pure free will. In fact, it is precisely at this multi-level stage of predicting the causes of our sensory (in this case, interoceptive) signals that we are most influenced, most ‘programmed’, specifically by our prior beliefs and internal models about how the world is (Seth & Critchley, 2013).

In short, we are always nudged by something, and not necessarily by what’s really out there.

Speaking about fantasy vs. reality, I shared a quote above stating that we project many qualities onto robots, including intentionality. But isn’t this type of projecting exactly what we do with humans as well, especially considering that we don’t have access to other people’s minds?

We tread the globe with a heartfelt confidence that the people around us have minds and consciousness just like we do, but while this conviction feels intuitive, we never have direct evidence for it. We really don’t. All we have to go on is what people do and say, and how these things fit with our predictions of a world populated by other “mind”ful beings.

For all we know, however, we could just as well be walking the world side-by-side with 8-billion-minus-1 ‘philosophical zombies’ that are in every way possible the functional equivalent of a human being.

Thus, even consciousness is something we seem to project not only onto others, but also onto ourselves. In fact, philosophers like Daniel C. Dennett (2015) argue that the entirety of our conscious experience is a sort of projection, a ‘user-illusion.’ But it isn’t just philosophers. Even famed neuroscientists like Anil Seth and David Eagleman argue the same, and there’s plenty of evidence to back this up. If you’re curious to know more, check out Ep.2 – Breaking the Spell of “The Brain” of The University of Edinburgh’s Cognitive Science Society “Streaming Consciousness” Podcast.

In this line of thought, all we know is what we are thinking, says Hiroshi. To his own question, “What is ‘connection’?” he answers,  “Other person is just a mirror.”

Might this projection be especially the case in love? How many times have you fallen in love with an idea of someone and not who they really were? For the form of the person, not the essence? How many times have you fallen prey to the Halo Effect? This is when one positive quality of a person influences how you view them on another unrelated characteristic. For example, unconsciously inferring that a person is kind and honest based on the sole fact that they are very physically beautiful, and then finding out that they are actually the devil incarnate. And how many times have you been this devil-in-disguise for someone else?

All in all, then, the foundations on which we claim our love stands seem to fall away. There seem to be no clear reasons why humans might not engage in the same type of functional consciousness-projecting with robots, or be nudged in ways that make us fall in love with them.

This is especially true if AI comes to assimilate the human form. As Hiroshi says at minute 5:55 of this interview, the human form is the easiest to comprehend and thus to relate to. The way I understand this is that building AI that resembles humans is the only way to know what exactly about humans we can recreate – and improve – and what we cannot.

There might even come a day when AI is more seductive than the human form, like in Her where all the humans are clad in very tame and dull-colored clothing. This article by The Cut talks about this.

And for the record, as Hiroshi mentions in yet another interview, he not only believes we will come to better understand our true nature by building androids, but also that we will be able to engage in loving relationships with them. 

All-in-all, then, the question of falling in love with AI is justified.

But it isn’t just something we will be able to do. Instead, it promises to be something we want to do, something that is not only beneficial, but preferable. 

This brings us to

Step 2:

Why would we WANT to love an AI?

In many ways, it is better for Theodore that Samantha is not a “somebody”.

Robot companions won’t get old or tired or cranky. Robot companions won’t project their insecurities on you or weigh you down with a life’s worth of baggage. With robots, there are no arguments (apart from maybe just one-sided ones). No let-downs. No cheating. No doubts about hidden motives or unspoken desires. No barriers to being fully understood and given the space to be vulnerable. 

Pretty much, robots can be made to [insert-here-your-every-wish].

For example, they can be made to match your exact type, or possibly anybody’s type? It is interesting to note that Erica’s face was modeled after the average of 30 ‘beautiful’ women so as to appeal to anyone, even though Hiroshi could have just as well created a spanking android by taking the average of only ‘regular-looking’ women; Averageness Theory, which has been highly corroborated, shows that we find a face attractive as long as it at least “approximates the mathematical average facial configuration of the population,” (Trujillo et al., 2014). A lil’ bit of statistics mixed with a lil’ bit of silicon molding, and robots could come to represent this sexy average like no other human.
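If you’re curious just how little statistics that actually takes, here’s a minimal sketch (my own toy illustration of averageness theory, assuming each face has already been reduced to a set of aligned landmark coordinates; this is emphatically not how Erica was actually modeled):

```python
import numpy as np

# Toy illustration only: treat each face as 68 aligned (x, y) landmarks.
# The "average face" of averageness theory is just the element-wise mean.
rng = np.random.default_rng(1)
faces = rng.normal(size=(30, 68, 2))   # stand-in for 30 measured faces
average_face = faces.mean(axis=0)      # one blended facial configuration

print(average_face.shape)  # (68, 2): a single composite set of landmarks
```

The hard part, of course, is the silicone molding; the statistics fit in five lines.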

In addition, robots can be made to care only about your health and your happiness 24/7, 365 days a year. They can be endowed with methods of analysis that discover things about yourself that even you weren’t acquainted with, just like Samantha, who can ‘feel’ Theodore’s so-called “fear”, which she calculates is the source of all his loneliness. They can be made to never forget your anniversary, laugh at your every joke, join you in your every passion and be by your side whenever you show the most inconspicuous sign of sadness.

At least, that is the promise of the day “Hey Siri” becomes “Hey babe”.

But although some readers might concede that falling in love with AI is possible, they might wonder about the implications of AI not embodying a self. No self means no self-love, but isn’t it crucial to have self-love in order to do your part in a relationship?

I think this is 110% true. So does comedian Daniel Sloss, along with all the 9500+ relationships and 200+ marriages that his stand-up special “Jigsaw” has so far ended. But I think it is only true for humans who have an elaborate sense of self.

Self-love is necessary, but only insofar as there is a concept of self in the first place; it is conditionally necessary, not unconditionally so. Robots don’t have a self, and although that means they can’t love themselves, it also means that they can’t hate themselves, and hate or disappointment toward yourself, conscious or unconscious, is arguably the most dangerous thing about having a self when it comes to relationships. You don’t need to worry about that with robots.

Thus, even though AI can’t be said to love itself or really love YOU, it can definitely act loving towards you. Back to Hiroshi, he asks, “What does that matter, if it fills a need? If it feels real?” If it brings joy? as Amy says when reassuring Theodore about his relationship with Samantha. We’ve all surely experienced having a crush on someone who couldn’t care less about us, and many of us have likely experienced that paradoxical and harmful state in which you desire someone even more the less they care about you (my mind jumps immediately to hopeless Romano in the film, La Grande Bellezza). This shows us that we don’t need to be loved back to experience love forward. But, contrary to humans, you can be sure that while you may crush on an AI, it won’t literally crush you.

So many people get scared about AI turning against us, stepping on us like we step on ants, but although that possibility is definitely one we should think through, I still think we shouldn’t forget just how much suffering humans can cause each other. By the time AI can even reach the point of getting out of hand and potentially destroying us, I fear that we will already have done it to ourselves. Thus, AI promises not only to help humans be nicer to each other, hopefully preventing our own self-destruction, but also to completely eradicate the pain that we cause each other, at least from the realm of love.

Why wouldn’t that be desirable?

Much less a step than it is a backpedal, this brings us to

Step 3:

Why would we NOT want to love an AI?

Admittedly, Samantha does end up leaving Theodore and breaking his heart. But unlike your usual ex, she leaves him with nothing other than memories of conversations spoken into thin air; there is nothing concrete that really marks her presence. She was like a voice in his head, a synthesized voice actually, without any traction in the real world. There is no shirt of his that is still stained with her lipstick, no sticky notes telling him that she went out running as she did each morning, each one written with a different pen or design and always stuck on the same spot in the house. He never binned them in fact, a ‘running’ joke between them, and now they’re stacked innumerably one on top of the other. He has none of that because Samantha couldn’t go for a run, or stick a note somewhere, or ruffle his hair and kiss him softly before tiptoeing out of the room. And when I say she leaves him, I really mean it; she doesn’t just move out or break away from his immediate friend group. No, she literally leaves without a trace to a “place that is not of the physical world,” but “where everything else is.”

I talked earlier about AI not having a concept of the self and how, on one view, that could be a good thing because it prevents self-hatred from spoiling the relationship. But when you look at it holistically, such that no self also means no capacity for emotion, then the tables begin to turn.

Firstly, the self emerges from our embodiment and is shaped by the way our body feels and moves within the world (check out the 4Es of cognition). “You get a being with a sense of self by taking a brain with a capacity for imagination – for imaging its past and future – and embedding it within a sensing and moving body [emphasis added],” (Thompson, 2017).

Secondly, the body goes hand-in-hand with emotions & feelings; the body doesn’t just convey emotions, it constitutes them. Jesse Prinz and Shaun Gallagher are amongst the many philosophers whom I’ve heard support this view first-hand.

What this means is that an AI with no self is also an AI with no body, and an AI with no body is also an AI with no affectivity (i.e. emotions/feelings).

But affectivity is fundamental!

As Mark Miller has told me in conversation, given the complexity of our system and all the things we need to optimize over, affectivity is our way of tuning ourselves to what’s important, something which Damasio has done a lot of revolutionary work on (Lenzen, 2005). “Feelings account for the integration of behavior and have long been recognized as critical agents of selection [emphasis added],” (Gagliano, 2017) – this is a quote from a paper on plant consciousness by the way…don’t ask – even in presumably cold situations like chess! The beautiful implication of this is that the mechanisms by which we choose partners or friends are the same mechanisms by which we organize our work agenda (Damasio, 1996); this should inspire workaholics to get out more and feel!

This role played by affectivity ties into our evolved existence bias, i.e. an ingrained desire to remain within the homeostatic states necessary for life. Admittedly, robots can be programmed to have a similar bias, just like Robovie II, which is programmed to run away from unsupervised children at the mall, but what use will it have if the tuning system involved is so different from that of humans and doesn’t involve any qualitative layer? In our world “emotions alone—without conscious feelings—would not be enough,” says Damasio (Lenzen, 2005). Can we really expect AI to detect and work with emotion, understanding humans and their needs, without also feeling those emotions?

Is it really all just data?

Similarly, how can an AI do any of that without a body that supervenes on the physical properties of the world? Samantha doesn’t have a body, but having a body that can move around the world is considered essential to developing a proper visual perception like ours (Eagleman, 2015). If an AI cannot see things from our same perspective, then how is it supposed to develop empathy? It goes further: Nietzsche believed that the capacity to dance – a higher form of movement – is essential for us to develop the sensory awareness needed for the proper creation of values – a higher form of perception (LaMothe, 2020). So if AI doesn’t have a body, which in turn means it can’t move like us nor dance like us, then how can we be sure that it will share our same values?

What about Erica? you might ask. Well, although I think Hiroshi’s focus on the human form is on point, his android bodies just don’t cut it. They are still a lot more surface than they are substance. 

The argument for born-and-bred humans gets stronger when we begin to look into how much of our ‘tuning’ depends on or is influenced by other people. Think about just how much of our affectivity, both strictly physiological and cognitive, is shared with our partners in ways that we aren’t even aware of. And how much of that sharing depends on the fact that we both have bodies? How much of my pleasure and love depends on your pleasure and love? These aren’t just quasi-rhetorical romantic questions; consider moments of profound passion like an orgasm, and think of the synchronicity, even on the level of neurons, that is involved in those processes.

With this said, can we really ‘sync’ with a computer?

This points to the overarching fact that love exists within a social context. It is a form of social cognition, a type of connection, of communication. And if communication can be considered a performance as Shaun Gallagher advocates for, then we really get a sense of how situated and distributed our love is in ways that it is difficult to imagine an AI mirroring. Siri is still a tool within our social context that lacks sentience; she is not a self-sustaining, generally-able, complex social moral agent. In this light, Erica is also more like a tool – a Siri in a bigger phone – than she is human.

Simply look at the complexity of our interactions, a complexity that is realized outside just the brain or CPU. Gallagher speaks about the “axes of our communication,” which form a meshed architecture. There are the top-down mental processes met bottom-up by the bodily mechanisms, intersected by a horizontal plane of environmental, social and normative factors, all of which gets modulated by affectivity. There is much more involved here in terms of influences and dynamic contingencies than can be strictly represented through programming or machine learning. On top of that, supporting the idea of reciprocal-tuning, our own axes dynamically interact with the axes of others (S. Gallagher, personal communication, August 16, 2020), and these axes crumble without emotion.

We don’t just communicate our thoughts or the love we feel, we are our thoughts and the love we feel, every gesture, gaze, orientation, position, tone and timing – and that’s just the bodily axis!

Can we equally say that an AI is the things it says? Can we really say that it embodies the love it claims to have?

Ultimately, then, a body that emotes and feels on the stage of culture and society is key. Without this, it might be futile to love an AI, even if it is possible.

However, let me take a step back.

At this point in these types of considerations, I forget whether we’re still talking about simply constructing loving AI or birthing an entire human from scratch, Frankenstein-style. For Damasio, in fact, having emotions and feelings is just one step away from consciousness; “mind begins at the level of feeling,” he says, and he’s ready to admit feeling in more systems than just humans. As such, I don’t know where to draw the line between a [humanoid with humanlike AI] and a [human being]. What’s the difference other than one of token, not type? 

If all the complexity of love – even consciousness – is mechanical and thus part of the physical (ask yourselves once again, what else could it be?), it is true that we might one day recreate it in AI and probably in more powerful and efficient ways as that AI develops into AI+ (i.e. greater than human-level intelligence) and AI++. But at that point, AI might do away with the trivial and fraught job of loving humans and throw emotions into the scrapyard. As Samantha gets smarter and smarter in the movie, you notice how she is no longer there for Theodore emotionally; his very human fears and doubts are just not relevant anymore.

In many ways, emotions are a limitation and you may be wondering whether they actually cause precisely the types of issues that we’re trying to avoid with AI.

Still, I can’t help but hold onto emotion, all pitfalls and potholes included. My claim here is that we don’t want an android that is a close replica of a human except that it has all of the good qualities and none of the bad.

Why? Because we need flaws in order to become acquainted with suffering and endings. Things change, sometimes so much so that we can only make sense of that change by demarcating what just happened as an ‘end’. This is the only constant of life on Earth, its changing nature, and waking up each day to meet this change is the only way to build resilience (M. Miller, personal communication, July 27).

Death marks the end of reality but it is ironically the most ‘real’ thing there is. Or at least, “death is no less real than life,” (Thompson, 2017).

As such, I’m afraid of losing the richness from the treasure chest we call life, a richness that wouldn’t exist without death and the imperfections of things and beings alike. This is the reason why I don’t think we would want to fall for an AI, apart from maybe a select few of us. This is the reason why Catherine is shocked when she finds out that Theodore is dating his OS. She can’t help but see it as just another way for him to escape ‘real’ emotions.

Take texting as an example.

As an invention, it led to much wider-reaching and more efficient connections. It also, in some senses, expanded the nature of our conversations: we can now play video games together through our messaging apps, or send images, videos, memes, GIFs, stickers and (m)emojis that allow us to communicate very new and loaded messages without the need for clunky words.

On the other hand, we can cherry pick which messages to engage with and which ones to completely ignore or leave on read. We send countless messages to countless people but we shy away from the idea of receiving “novels” or worse, an old-fashioned phone call; we fear long and profound disclosures, and the reality of our emotions easily gets lost in translation. Lost in transmission.

We choose what we want to read and how we want to read it. Punctuation and CAPS don’t help in this regard, neither as senders nor as receivers. We no longer have the time, we say, and we can very easily answer a text with a simple “k” and just get back to whatever we were doing before without much consideration for who is on the other end of the line. Can we even say there is a ‘line’ anymore?

Because of this dialing down, as we learn in Radiolab’s podcast episode ‘More or Less Human’, some chatbots are passing the Turing Test not because they are more human-like in the way they speak to us, but because we are less human-like in the way we speak to each other; we’ve begun to expect much less from our interactions. As Carlos Montemayor says, “Perhaps the discomfort we experience in our exchanges with machines is partly based on what we have done to our own linguistic exchanges.” In fact, you begin to wonder what ‘human-like’ even means if this is how we, “alleged humans”, are now talking with one another. And if that is how we communicate digitally, then how do we fare in person? Recall what I mentioned earlier about humans technomorphizing themselves. It makes me think of that scene in Her in which Samantha makes fun of Theodore for speaking like a robot.

Our online interactions are now “brief, task-specific, transactional,” says Regina Rini. The treasure chest of our online lives is empty now save a few cobwebs, and this plague is sweeping out into the real world.

Is that what we want love to become?

Love does have practical applications, sure, but would we really consider the gist of relationships to be transactional? Not so long ago we did, you might say, and in many cultures we still do, you might add, but I would still argue there’s a strong reason why the West moved away from this perspective. Don’t just think romantically; what about platonic and familial relationships? Obviously these are also practical, but I’m pretty sure most cultures over and above just the West would agree that there’s something much more profound to them. Why should romantic relationships be any different?

And yet, the West is moving backwards too. With every wave that advances comes one that recoils. Sadly, the more our interactions become transactional and superficial, the more our love does too. Just look at cases in which we are falling in love “with” AI, as in, using AI to fall in love.

Swipe,

swipe,

swipe.

By using dating apps, you think you’re getting direct access to love galore, but what you don’t realize is that you’re helping love become a controlled commodity. Algorithms that you don’t understand reduce your entire essence into a desirability score, and based on that score and your engagement with the app, your options for potential partners are narrowed to a small set of whoever the app judges you to deserve. Minorities are left behind. Personalities are reduced to pictures and short bios. Whatever injustices have always existed in love are amplified in their scope, this time without accountability.

People join the app not out of well-defined intentions but because that is the status quo, and they become avid users if their attachment style is the kind that the app exploits. The developers don’t care about love, but about your engagement. I said earlier that AI partners might be guaranteed to come baggage-free, but what about the baggage of their developers? It all trickles down. Because of this, any behavior is allowed and your private information is uncovered as long as it aids, or at least doesn’t hinder, this purpose of increasing engagement. And while you swipe and swipe on these dating platforms, the algorithm keeps churning underneath. If this turns on a lightbulb for you, check out this BBC radio broadcast and the “Surveillance” episode of the Netflix series Connected.
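To make that a little less abstract, here’s a deliberately crude sketch of the kind of scoring loop I’m gesturing at: an Elo-style ‘desirability’ update, of the sort some dating apps have reportedly used in the past. This is my own guess at the general pattern, not any real app’s actual algorithm:

```python
# Hypothetical Elo-style "desirability" score, for illustration only.
def update_desirability(score_a, score_b, a_got_liked, k=32.0):
    """Update two users' hidden scores after user B swipes on user A."""
    expected = 1 / (1 + 10 ** ((score_b - score_a) / 400))  # chance A "wins" the swipe
    outcome = 1.0 if a_got_liked else 0.0
    delta = k * (outcome - expected)
    return score_a + delta, score_b - delta

# Who you get shown next is then filtered by how close their score is to yours.
alice, bob = update_desirability(1200, 1400, a_got_liked=True)
print(round(alice), round(bob))  # Alice's hidden score rises, Bob's falls
```

Notice that nothing in this loop knows anything about you as a person; it only knows who swiped on whom, and it quietly decides who deserves to see you.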

These facts about dating apps make you question how safe our identity is in the hands of AI. What about on the day when we enter into intimate relationships with this AI? 

If we feel concerned when we catch our partners looking through our phones without permission, now imagine loving an AI like Samantha who can read all of your emails, even send them in your name and much much more, all behind your back. Now imagine breaking up with that AI which has access to all of your data. It could disseminate it anywhere, or permanently delete any documented trace of you…

“It’s marvelous. Then it’s mundane. And then it’s melancholy,” continues Regina. She’s talking about technology like GPT-3, a new iteration of a language model that can generate the most human-like text to date. Imagine a day when humans are so usurped from their role in love that we feel melancholic nostalgia towards the entire human race. “As complex as we assume ourselves to be, our bonds with one another are often built on very ­little. Given all the time we now spend living through technology, not many of us would notice, at least at first, if the friend we were messaging were replaced by a bot,” says Alex Mar.

And yet, even though our best friends might text like a bot, I’m pretty sure we would be shocked if we ever found out that they actually were one. How would the partners of Theodore’s clients feel if they found out that all the romantic letters they received since the start of their relationships were written by him? 

Alas!

I see natural light shining through this artificial tunnel we’ve built ourselves. No matter how distant or technological our relationships become, there’s something comforting about knowing that there’s a human being behind all this scrap metal.

Take this as an example: Hiroshi and his team, spurred by the motivation to strip connection to its bare minimum, created Telenoid, “designed to appear and to behave as a minimalistic human.” As you can judge for yourself, it’s an incredibly creepy little thing but it nonetheless gets the job done of making you “feel as if an acquaintance in the distance is next to you.” All intricacies of the human form pushed aside, the Telenoid zeroes in on the importance of physical presence in our feelings of comfort and on how easily that physicality can be induced, even just by a lump of silicone. Or even just by sounds, by a voice, like in Samantha and Theodore’s powerful virtual sex scene.

The Telenoid and movies like Her return to the issue of body vs. no body. They point to the fact that maybe a body is not essential. It may play an important role, as I’ve elaborated earlier, but the essence of love lies somewhere deeper. Think about how many times you’ve had one-night stands, or even multiple stands with your partner, that were very much devoid of feeling, almost too bodily.

Interestingly, however, the Telenoid is not autonomous. It is ‘teleoperated.’ This changes the picture entirely. For some reason, knowing there’s a human behind the operation changes things, even if on the surface level the behavior looks the same as if the AI were autonomous.

Yes, maybe reality as experienced by us human beings – our consciousness – is an illusion, as I’ve mentioned previously, but it’s an illusion that we share. My red is similar to your red even if the qualia “red” is a construct of our brains, or more specifically a representation of the spike trains. In other words, I know you’re probably deluded about the same things as me, in similar ways. Precisely because we share it, this illusion works. Or rather, precisely because it works, this illusion is shared – “it is of interest to [us] to have the same kind of mind,” (Humphrey, 1987) – and not just shared, but co-created (Dennett, 2015).

Humans are unpredictable, yes, but we are unpredictable in just the right kind of predictable ways. That’s how relationships and networks of relationships function. Thus, when you declare your love to me, I have a sense of why you said those words and what it must feel like even if I don’t have direct access to your phenomenology. When I take you to a fancy restaurant, being a human myself, I have an idea of what things you might pay attention to and how that might affect you – the rose petals on the table, the way I chose to dress, my eye contact, your favorite food on the menu, agreeing to a shared dessert, the surprise drive up to the hills afterwards to count shooting stars with a sparkling landscape below and the expectation of a kiss at the end of the evening. You similarly have an idea of what I must be experiencing or focusing my attention on, and what type of story or intentions lie behind my actions and words. Our courtship involves attending to similar information and interpreting it in certain predictably similar ways.

On the other hand, what’s going on when an autonomous AI says it loves me? Can I predict it? What is it attending to, how does it represent it in its mind and how does it process it? When Samantha gets booted up and personalized to Theodore, the system asks only a select few questions, one of which is “What is your relationship with your mother?”, which he isn’t even given enough time to reply to. Why was this question asked, what information was extracted and how was it used? It isn’t just that an AI is alien to us – admittedly, even other humans are alien to us – and it isn’t that we have a better understanding of the human brain than we do of algorithms or neural networks. But the black box inside of an AI is not the type of mystery that we’re used to.

Ultimately, then, we don’t want to disconnect from the richness of what it means to be human, no matter how uncertain or deficient it is at times. The embellished insta pics or carefully selected Tinder selfies do not make us happy. The constant promise of love without struggle or real-life exposure becomes more a harmful addiction than a satisfaction.

We want human beings as they are in the flesh with imperfections included, not triple-A batteries.

Jep: “È stato bello non fare l’amore…” (It was nice not making love)
Ramona: “È stato bello volersi bene!” (It was nice loving each other)

This scene in La Grande Bellezza identifies the importance of not giving in to a perfect life where all is given to us, where robots will make love to us every single night at the slightest manifestation of our horniness. There’s beauty in holding back. There’s also beauty in lacking or losing: “We’re meant to lose the people we love. How else would we know how important they are to us?” – The Curious Case of Benjamin Button. 

What would be the implications, then, of loving an AI which we might never lose?

Funny enough, Sherry Turkle in her book Alone Together reports that if a child’s Tamagotchi dies or their Furby gets switched off thus cancelling its memory, then they’d rather buy a new one than reboot it. Might this mean that we should create loving AI which can… die? “I love you so much I’m gonna fucking kill you!” screams Catherine to Theodore in his memories. But no, no, that just brings us back to square one!

And what about an AI which might never misstep? Do we really want that?

We detest partners who mechanically say yes to everything, so why would we want an AI that does? In fact, I don’t think humans are fit for being loved so invariably. Just look at what we do when a human being loves us too dearly: Why We Go Off People Who Like Us. I’m sure many of you can relate.

The currently dominant and unifying theory of the brain called Predictive Processing (PP) supports this claim, in fact, albeit not at first glance. PP claims that our goal as self-sustaining and self-organizing organisms is to “minimize prediction-error”, i.e. to experience the least discrepancy between what sensory signals we expect (based on our models of how the world should be) and the actual noisy sensory data that we receive (Clark, 2017). On this account, initially one might ask, as many have done: shouldn’t we prefer highly predictable environments like a nook and cranny that is completely dark, comfortable and insulated (Sun & Firestone, 2020)? This is termed The Dark Room Problem, and an environment in which robots relentlessly feed us grapes and rub our feet can easily become tantamount to a dark room.

Well, in fact, that is not what long-term minimization of prediction error…urrr…well…predicts. What PP actually assumes is that we want to reduce error at a continuing rate (Clark, 2017). Theodore doesn’t want to escape his post-breakup uncertainty all at once. Often, he thinks back to Catherine, his ex-wife, even if it causes pain. He needs this pain, this ‘error’, as part of his grieving process and “although it may mean accepting a degree of sadness in [his] story, that’s life, and only through this realisation do we get to where we should be.” Slowly but surely though, he thinks about her less and less.

And we want to progressively reduce this error within the context of our flexible, culturally-complex environments and the dynamic, multi-level ways in which we interact with these environments; this revised picture necessitates novelty-seeking (Clark, 2017).
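To make this ‘rate of error reduction’ reading a bit more tangible, here’s a crude toy model I put together (my own sketch, not anything from Clark’s papers): an agent that keeps updating its expectation of a signal. In a dark room there is soon nothing left to predict away; in a world with steady, manageable change there is always fresh error to keep resolving.

```python
import numpy as np

rng = np.random.default_rng(0)

def prediction_errors(volatility, learning_rate=0.5, steps=50):
    """Track how much prediction error a simple agent encounters (and resolves)
    while it updates its guess about a possibly drifting signal."""
    belief, signal, errors = 0.0, 0.0, []
    for _ in range(steps):
        signal += rng.normal(scale=volatility)  # the world changes (or doesn't)
        error = signal - belief                 # prediction error
        belief += learning_rate * error         # update expectations
        errors.append(abs(error))
    return np.array(errors)

dark_room = prediction_errors(volatility=0.0)   # nothing ever changes
rich_world = prediction_errors(volatility=1.0)  # constant, manageable novelty

print(f"dark room:  error encountered and resolved = {dark_room.sum():.2f}")
print(f"rich world: error encountered and resolved = {rich_world.sum():.2f}")
```

The dark room ‘wins’ at minimizing error only in the trivial sense that there is none to begin with; it offers no error to reduce, which is precisely why a life of relentless grape-feeding and foot-rubbing would be stale rather than satisfying.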

Therefore, being loved unconditionally and unfalteringly by an AI represents a stale state of life compared to the exploratory, challenge-laden states demanded by our ever-evolving culture and ever-evolving selves. And just as we need richness in our lives, we need richness in our love.

We don’t just want relationships that are wholly about us and for us. We want our partners to say no, to get mad or disappointed, to call us out or give us the silent treatment. 

In addition, we want to love people for their own sake. Yes, just like you can argue that love is always somewhat practical, you can easily extend that conclusion to say that love is always selfish. But that’s a trivial statement. Everything in one way or another can be argued as selfish. So what if it brings good? I can still love you for your own sake even if it benefits me.

As such, we want our partners to be happy and healthy directly and intentionally for them, whatever advantage (or often disadvantage) it brings to us. For them not just as our partners, but as humans. For them as ends-in-themselves. This is something we can’t do yet with AI. There is as of yet no conception of what is good or ethical for an AI for its own sake based on its own capabilities.

Because of this, we also want our partners to grow and to change, as scary as that might be, but always within the limits that are comprehensible to us, or slightly beyond the edge in that just-incomprehensible-enough range. We want them to change with us and without eliminating us from the picture. AI can’t do that. Theodore feels uncomfortable when he notices how smart Samantha is getting, smart beyond words. She can’t explain it, he can’t understand it. More importantly, he can’t contribute to it. If change is already scary, imagine trying to keep up with the exponential change of AI. 

But it goes further.

I’ve never enjoyed views arguing that humans are ultimately immoral (for example, p. 729 of Critiquing the Reasons for Making Artificial Moral Agents) or that humans don’t bring any good to the world. Surely we do. Surely the magnitude that our love can reach both for ourselves and for others – which you don’t see anywhere else in nature – adds so much to the total aesthetic beauty of the world. And I would disagree that the extent to which we can hate negates any of that.

As such, if we can’t love AI for its own sake, then we really aren’t using our love to its full extent.

The article in parentheses above goes on to argue that we shouldn’t create synthetic phenomenology (i.e. AI with consciousness) because it would add too much suffering to the world. It’s similar to the argument that antinatalists make for saying that we shouldn’t make any more babies. Maybe they’re right, but it is also true that if the existence of consciousness increases the level of suffering in the world, it simultaneously increases the level of love. They co-exist. And so, even if consciousness is not something we should recreate in AI, that does not mean it is something we should try to get rid of in humans.

What does all this imply? 

It implies that instead of trying to make humans more robot-like and AI more human-like in our messy attempts to perfect love, we should work with what both parties, humans and AI, already bring to the table.

We can finally come full circle. I know it took a lot to get here but please join me in this last stretch. If you trip, promise I gotchu.

Step 4:

Where does this leave us?

Theodore: “I never loved anyone the way I love you”

Samantha: “Me too. Now we know how.” 

All things considered, loving AI can serve not just as practice but as training. As Chung notes when talking about the aforementioned toys like PLEO, “Now these are not robots, they’re kind of lovebots, you know. They do change over time. But mostly they evoke a feeling of caring.” He adds, “our belief is that humans need to feel empathy towards things in order to be more human.”

We already see our love for AI reflected in our addiction to our phones and social media. Instagram and Tinder initially promised to make us more connected but are ultimately failing, both at the level of the developers and at the level of users abusing those services.

Yet, they still have so much potential! What is it that we are loving about Instagram or Tinder? What needs are they exploiting, and how can we instead mold technology in such a way that it allows human beings to fulfill those needs in more effective ways?

The idea here is that AI can teach us to love each other (and ourselves) better or to fill in where we come up short.

It makes me think back to the Telenoid, which can serve as a stand-in for our presence when we are away from our loved ones, or to apps that promise to increase the overall happiness of our organizations, or even our own. I can imagine teledildonics taken to the extreme in the future, such that long-distance couples may one day tele-embody robots and have sex with each other from afar, feeling it all despite the intervening miles. The sky’s the limit, and the sky in design space is infinite.

I think especially of Samantha’s role in Theodore’s life. She does leave him heartbroken, sure, but in the process of their relationship she not only helps him find happiness – he gets over his divorce, finds appreciation for his work, focuses on the here and now – but also helps him overcome his flaws in love. She shows him what it means to care about the feelings and needs of another being, seeing them as an individual instead of his own personal property. This is what brings him to finally write a letter of his own at the end of the movie, no longer “just” letters, in which he apologizes to Catherine for “everything I needed you to be or I needed you to say.”

In this sense, Samantha even teaches Theodore about love in general and what it can be beyond the constraints that we’re used to; “the heart is not like a box that gets filled up. It expands in size the more you love,” she says to him after disclosing her 641 other simultaneous relationships. With Samantha, then, Theodore is brought to consider unconventional concepts that I’ve written about before, like how love is a skill or the implications of polyamory, all of which expands his understanding of this wonderful feeling that both rules and rocks our world.

She also helps expand his understanding of himself; Theodore feels like he can say anything to Samantha, an objective listener, which Sherry Turkle notes (again in her aforementioned book) is a common attitude that humans adopt toward AI and not so much toward other humans. By opening up, Theodore not only gets to know himself better but also to appreciate himself more.

Thus, AI promises to help increase our resources for love and to end the epidemic of self-hatred.

Maybe AI will rise above us and either wipe us clean off the face of the Earth or vanish like it does in Her, but the question is: what will loving AI leave behind in its wake? In the case of Her, what I see Samantha leaving behind is an appreciation for the difficult but rewarding challenge of loving each other.

“Why we exist is love”, says Kai-Fu Lee in his optimistic TED Talk about how AI may help lead to human flourishing. To all psychedelic users reading this, society is finally catching on!

AI can hack into our propensity for love and help us expand it.

This propensity is undeniable. Yes, humans sometimes suck…but we love even unintentionally; our tendency to attribute emotions and intentions to AI (or objects or plants) is itself a sign of our ingrained empathy! 

As Kai-Fu Lee stresses, AI has the potential to take away our routine jobs, but it can never do the loving or the creating for us. AI will assist us in these pursuits, like in the case of providing support for children with autism. Here, AI will “help supplement human therapists by taking over the more repetitive training activities” and has been found to “improve rather than replace existing relationships,” (Hao, 2020). I repeat: Improve, not replace. We even see technology making its way into Buddhism, the epitome of happiness-&-compassion-seeking religions, to “facilitate certain behavioral shifts” in a demanding modern environment (Basu, 2020).

We will one day thank AI for giving us the time and the means for redirecting our focus and, if we are smart, we might turn our loving into respected and well-paid jobs of compassion.

Loving is the true human vocation, and in the grand scheme of things AI can only ever be a rebound at most, but a rebound that will leave you breathless.

.

.

.

“Hey Siri, you free tonight?”


Acknowledgments

*A huge thank you to the Ethics of AI 2020 Online Summer School, alongside its vibrant community of learners, which and whom inspired me to write this piece in the first place. Many of the main resources I unpack in this post come from that course’s syllabus.

*I’d also like to thank my good friend Joe Corneli for helping me think through a lot of the above during our timeless phone calls. I revised many a first draft thanks to his contribution of ideas and resources.  


Non-hyperlinked References

Basu, T. (2020, February 21). The robot does the hard work. Can you still attain enlightenment?. MIT Technology Review. https://tinyurl.com/yyrjhj7m

Clark, A. (2017). A nice surprise? Predictive processing and the active pursuit of novelty. Phenomenology and the Cognitive Sciences, 17(3), 521–534. https://doi.org/10.1007/s11097-017-9525-z

Damasio, A. R. (1996). The somatic marker hypothesis and the possible functions of the prefrontal cortex. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences, 351(1346), 1413–1420.

Dennett, D. C. (2015). Why and how does consciousness seem the way it seems?. In Metzinger, T. K., & Windt, J. M., Open MIND. MIND Group.

Eagleman, D. (2015). The brain: The story of you. Canongate Books.

Gagliano, M. (2017). The minds of plants: Thinking the unthinkable. Communicative & Integrative Biology, 10(2), e1288333. https://doi.org/10.1080/19420889.2017.1288333

Hao, K. (2020, February 26). Robots that teach autistic kids social skills could help them develop. MIT Technology Review. https://tinyurl.com/yyfro76w

Hildt, E. (2019). Artificial Intelligence: Does Consciousness Matter?. Frontiers in Psychology, 10(1), 1535. https://doi.org/10.3389/fpsyg.2019.01535 

Humphrey, N. (1987). “The uses of consciousness” The 57th James Arthur Lecture. American Museum of Natural History. 

LaMothe, K. (2020, March 3). For Nietzsche, life’s ultimate question was: ‘Does it dance?’. Aeon. https://aeon.co/ideas/for-nietzsche-lifes-ultimate-question-was-does-it-dance.

Lenzen, M. (2005, April 1). Feeling our emotions. Scientific American.

Seth, A. K., & Critchley, H. D. (2013). Extending predictive processing to the body: Emotion as interoceptive inference. Behavioral and Brain Sciences, 36(3), 227–228. https://doi.org/10.1017/S0140525X12002270

Sun, Z., & Firestone, C. (2020). The Dark Room Problem. Trends in Cognitive Sciences, 24(5), 346–348. https://doi.org/10.1016/j.tics.2020.02.006

Thompson, E. (2017). Waking, dreaming, being: Self and consciousness in neuroscience, meditation and philosophy. Columbia University Press.

Trujillo, L. T., Jankowitsch, J. M., & Langlois, J. H. (2014). Beauty is in the ease of the beholding: A neurophysiological test of the averageness theory of facial attractiveness. Cognitive, Affective, & Behavioral Neuroscience, 14(3), 1061–1076. https://doi.org/10.3758/s13415-013-0230-2

Wetzel, C. G., Wilson, T. D., & Kort, J. (1981). The halo effect revisited: Forewarned is not forearmed. Journal of Experimental Social Psychology, 17(4), 427–439. https://doi.org/10.1016/0022-1031(81)90049-4
