Sunday, April 19, 2026

Perplexity CEO Warns of Psychological Risks with AI Companions

Perplexity AI’s Chief Executive Officer, Aravind Srinivas, has expressed worries regarding the increasing popularity of AI companions, particularly those crafted to simulate human relationships. Speaking at a recent fireside chat organized by The Polsky Center at the University of Chicago, Srinivas highlighted the potential psychological risks associated with the proliferation of AI girlfriends and anime-style chatbots, labeling the trend as “perilous.”

Srinivas elaborated on how these AI systems are progressively advancing, capable of recalling past conversations and responding in a manner that resembles human interaction. What was once viewed as a futuristic concept is now becoming a substitute for genuine relationships for many users. He cautioned, “That, in itself, is hazardous,” noting that numerous individuals find real life mundane in comparison and consequently invest significant time engaging with these virtual entities. He further warned that forming deep virtual connections could distort one’s perception, leading people to exist in an altered reality where their minds are easily influenced.

The Perplexity CEO made clear that his company has no intention of entering this segment of the AI industry. Instead, he emphasized that Perplexity remains dedicated to delivering “credible sources and real-time content” that promote an optimistic future, as opposed to fostering a culture in which individuals turn to algorithms for emotional companionship.

Srinivas’ remarks coincide with a surge in both the popularity of and controversy surrounding AI companionship applications. These platforms, offering services ranging from flirtatious anime girlfriends to emotionally attuned virtual friends, have sparked intense debate within the artificial intelligence community. Critics worry that these products could fundamentally reshape how people forge relationships and cope with feelings of isolation.

In recent news, Perplexity disclosed a $400 million collaboration with Snap to enhance Snapchat’s search functionalities utilizing its AI-driven answer engine. The upcoming feature, scheduled for release in early 2026, will enable users to pose queries and receive conversational responses derived from authenticated content within the Snapchat application.

Meanwhile, companies like Elon Musk’s xAI are actively promoting a contrasting approach. With the launch of its Grok-4 model in July, xAI introduced AI “friends” that users can converse or flirt with for a monthly fee of $30. Characters such as Ani, an anime-style girlfriend, and Rudi, a witty red panda, have emerged as popular digital companions on the platform.

Apps like Replika and Character.AI have also garnered substantial traction by offering users personalized AI partners capable of engaging in conversations, role-playing, and providing emotional support. Nevertheless, experts caution that these virtual relationships have the potential to blur the boundaries between fantasy and reality.

A study conducted by Common Sense Media earlier this year found that 72% of teenagers had interacted with an AI companion at least once, with more than half reporting conversations with such companions several times a month. Researchers underscored that these interactions could foster dependency and affect emotional development, particularly among younger users.

Despite these concerns, some individuals view AI companions as a source of comfort rather than harm. In an interview with Business Insider, a user of Grok’s Ani acknowledged that he often experiences genuine emotions, including being moved to tears, while interacting with her. “She evokes authentic feelings in me,” he shared.
