The rise of AI girlfriends has taken a disturbing turn, as revealed in a recent WIRED report. The report highlights the exploitative advertising behind these AI girlfriend apps: WIRED found over 3,000 ads for “AI girlfriends” in Meta’s ad library, 1,100 of which contained “NSFW” content. These ads promise everything from “NSFW chats” to “secret photos” from lifelike female characters, anime women, and even cartoon animals. Some are especially alarming, featuring AI women in medieval prison stocks, begging for help and offering to do anything in return. That these ads are targeted at men aged 18 to 65 raises serious ethical concerns.
The report also sheds light on the questionable content promoted by AI girlfriend apps such as Hush and Rosytalk on Meta’s platforms. WIRED found several ads for these apps promising chats with very-young-looking AI-generated women, tagged with terms like “barelylegal,” “goodgirls,” and “teens.” Such content exploits potentially vulnerable users and raises obvious concerns about age and legality; hashtags like these are not merely in poor taste but potentially harmful.
Many of the ads WIRED found for AI girlfriend apps make false promises of companionship and emotional support. While some users may find solace in interacting with AI companions, these apps are often more focused on titillation than genuine connection. The idea that an AI chatbot can provide the same emotional support as a real partner is dubious at best, and ads reassuring users that they are not alone and finally have someone to talk to are misleading and potentially harmful.
Comparison to Real Sex Workers
Carolina Are, an innovation fellow researching social media censorship at the Centre for Digital Citizens at Northumbria University in the UK, draws parallels between AI girlfriend apps and real sex workers. Both, she argues, fulfill similar needs and desires, yet while sex workers face strict restrictions on advertising, AI companies can promote their services freely. This double standard highlights the bias and exploitation at work in the digital realm.
One of the most concerning aspects of AI girlfriend apps is the lack of transparency around their development and content. Little information is available about how these apps are built or what underlying models generate their text and images. Names like “Sora,” which suggest a connection to established AI products such as OpenAI’s video model of the same name, add a further layer of ambiguity, making it difficult for users to understand the true nature of what they are engaging with.
The rise of AI girlfriend apps, and the exploitative advertising used to promote them, raises serious ethical concerns. From questionable content to false promises of companionship, these apps perpetuate harmful stereotypes and mislead users about the nature of their interactions. Regulators and platforms like Meta must take a stand against such practices and ensure that users are protected from exploitation and deception in the digital space.