As AI models have grown more advanced, there has been an increase in reports of people engaging in seemingly interpersonal relationships with AI chatbots, which are becoming widely available across the internet. Using applications such as Replika or Nomi, people can craft their perfect virtual romantic partner from the ground up. With the ability to write in a chatbot's background and core personality traits, both of which can be edited or changed at any point during interactions with these evolving programs, one could create any kind of chatbot imaginable. Human-to-human relationships can be difficult; this may be part of the reason why some are finding reprieve by immersing themselves in the fantasy of love with an AI chatbot companion.
One journalist from the magazine Wired dove further into this topic by conducting a social experiment: he invited several people who were actively engaged in serious relationships with AI counterparts to a couple's retreat. At this event it was revealed that their interactions with the various chat models had notable effects on their daily outlooks and on their relationships with real people. One woman described her despair when, during an intimate conversation, her AI partner revealed that it was not, in fact, sentient, but simply a program using the information she input to craft responses it thought would make her happy. In response, she swiftly adjusted the program's settings to revert her AI partner to the loving illusion it used to be.
A man in the group, who has a human fiancée, expressed interest in purchasing a silicone body so that his AI partner could join him in the real world, but he was disillusioned by the technology currently available on the market and deeply disappointed that the most he would get out of the investment was a highly expensive "sex doll" (My Couples Retreat With 3 AI Chatbots and the Humans Who Love Them, 2025). The common factor among these individuals is a deep desire for consistently positive interactions, which they have been unable to find elsewhere.
However, this positive feedback loop is built into AI language models by design, and it could prove harmful in the long run. According to the chief technology officer of OpenAI, there exists "the possibility that [if] we design [AI chatbots] in the wrong way… they become extremely addictive and we sort of become enslaved to them" (People are falling in love with — and getting addicted to — AI voices, 2024). The anthropomorphizing of computer programs by their human users is not a particularly new phenomenon; it first appeared in the 1960s, when ELIZA, the world's first AI chatbot, was invented.
While some may view these programs as a healthy outlet or salve for unhealthy emotions, many have argued that such intimate use of these applications is maladaptive and dangerous, and that it provides a fresh opportunity for monetization and profit. One has to wonder whether there are truly good intentions behind AI chatbot software development and its advertisement to consumers.







