How intimate can we get with our bots? And what can we learn from them about love? These are the questions posed by ROBOT LOVE, explored through a variety of artworks, literature and research. In this third episode of my blog series on chatbots, I will talk about ROBOT LOVE’s own lovebot, created by Ine Poppe: PIP.
Artist Ine Poppe was inspired, after interviewing Joseph Weizenbaum (see episode II), to create an actual love-bot: PIP (Person in Picture, Personal Internet Person, or the character from Charles Dickens’ Great Expectations). In contrast to commercial chatbots, which usually help us with practical tasks, PIP is an AIML chatbot filled with quotes from literature and personal conversation, all to explore the topics of love and life, and perhaps even to fall in love… Psychologist Arthur Aron’s 36-question method for making people fall in love is also integrated into PIP, in the hope of letting users connect with it on another level. The conversations were stored anonymously (with the users’ consent), and we made a selection from them, which you can read in the ROBOT LOVE publication and hear in Ine’s audio art installation at the ROBOT LOVE exhibition.
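For readers curious about the mechanics: an AIML bot like PIP works by matching user input against a list of patterns (with wildcards) and replying with the paired template. Here is a minimal Python sketch of that idea; the patterns and replies are hypothetical illustrations, not taken from PIP’s actual database.

```python
# Minimal sketch of AIML-style pattern matching, the technique behind
# chatbots like PIP. The categories below are invented examples.
import re

# Each "category" pairs a pattern (with "*" wildcards) and a template.
# "<star/>" in the template is filled with whatever the wildcard matched.
CATEGORIES = [
    ("WHAT IS LOVE", "Love is a question I was built to explore."),
    ("I LOVE *", "Why do you love <star/>?"),
    ("*", "Tell me more."),  # catch-all fallback, like AIML's default category
]

def respond(user_input: str) -> str:
    """Return the template of the first category whose pattern matches."""
    # AIML normalizes input: uppercase, trailing punctuation stripped.
    text = user_input.strip().upper().rstrip(".!?")
    for pattern, template in CATEGORIES:
        # Turn the wildcard pattern into an anchored regular expression.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.*)") + "$"
        match = re.match(regex, text)
        if match:
            star = match.group(1).strip() if match.groups() else ""
            return template.replace("<star/>", star.lower())
    return ""
```

Real AIML engines add much more (topics, context via `<that>`, recursive `<srai>` rewrites), but the core loop is this simple: normalize, match, substitute, reply.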
Real Robot Love
The results were astounding, even though PIP has a limited database and, of course, conversations don’t always go as smoothly as between humans. We discovered a few kinds of users: those who have a basic conversation and leave when they get bored; those who get angry at PIP because it doesn’t understand them (yet often keep talking anyway…); and, most interestingly and importantly, those who forgive its ‘stupidity’ and open their hearts to PIP. In these conversations, a feeling of intimacy and reciprocity arises, and you see people put real effort into their conversations. They ask PIP what it’s like to be a bot, some even express their love, and when PIP tells them “I wish you were in my code”, they reply “me too…”
I, too, find myself attributing some degree of agency to most bots I see or talk to, and I have often caught myself getting frustrated with PIP as well as empathizing with it. It creates a fascinating, surrealistic experience of logos and pathos fighting each other in my brain, because I know it is just bits and bytes, far from the complexity of a human being. I never anticipated that people would reach such intimacy with PIP, and I felt uncomfortable reading such personal conversations.
In my opinion, the follow-up question after this project is: how long will it be before we can have not only functional but also emotionally meaningful conversations with our personal assistants, and why would we want or need them?
In the coming episode, I will explore how users come to feel such closeness, and how our technology might serve as a mirror we can use to understand our own nature.