
Zuck wants his AI to cure your loneliness?

  • snitzoid
  • Jun 28
  • 4 min read

Listen, I know the Report is your lifeline to feeling "cared for".


That's why the new Spritzler 5000 Platform is going to be your new best friend and tour guide to feeling emotionally "whole". And at only $29.99 per month, it's a cost-effective way to avoid dying of loneliness.


Sadly, we insist that the relationship remain platonic. Although we're considering launching an OnlyFans account for our Executive Members.


Why Tech Billionaires Want Bots to Be Your BFF

In a lonely world, Elon Musk, Mark Zuckerberg and even Microsoft are vying for affection in the new ‘friend economy’


By Tim Higgins, WSJ

June 28, 2025 5:30 am ET


The xAI chatbot apparently developed too many opinions that ran counter to the way the startup’s founder, Elon Musk, sees the world.


The recent announcement by Musk—though decried by some as “1984”-like rectification—is understandable. Big Tech now sees the way to differentiate artificial-intelligence offerings as creating the perception that the user has a personal relationship with the AI.


Or, more weirdly put, a friendship—one that shares a similar tone and worldview.


The race to develop AI is framed as one to develop superintelligence. But in the near term, its best consumer application might be curing loneliness.


That feeling of disconnect has been declared an epidemic—with research suggesting loneliness can be as dangerous as smoking up to 15 cigarettes a day. A Harvard University study last year found AI companions are better at alleviating loneliness than watching YouTube and are “on par only with interacting with another person.”


It used to be that if you wanted a friend, you got a dog. Now, you can pick a billionaire’s pet product.


Those looking to chat with someone—or something—help fuel AI daily active user numbers. In turn, that metric helps attract more investors and money to improve the AI.


It’s a virtuous cycle fueled with the tears of solitude that we should call the “friend economy.”


That creates an incentive to skew the AI toward a certain worldview—as right-leaning Musk appears to be aiming to do shortly with Grok. If that’s the case, it’s easy to imagine an AI world where all of our digital friends are superfans of either MSNBC or Fox News.


In recent weeks, Meta Platforms chief Mark Zuckerberg has garnered a lot of attention for touting a stat that says the average American has fewer than three friends and a yearning for more.


He sees AI as a solution and talks about how consumer applications will be personalized. “I think people are gonna want a system that gets to know them and that kind of understands them in a way that their feed algorithms do,” he said during a May conference.


Over at Microsoft, the tech company’s head of AI, Mustafa Suleyman, has also been talking about the personalization of AI as the key to differentiation.


“We really want it to feel like you’re talking to someone who you know really well, that is really friendly, that is kind and supportive but also reflects your values,” he said during an April appearance on the Big Technology Podcast.


Still, he added, Microsoft wants to impose boundaries that keep things safe. “We don’t really want to engage in any of the chaos,” Suleyman said. “The way to do that, we found, is that it just stays reasonably polite and respectful, super-even handed, it helps you see both sides of an argument.”


With all of that in mind, it comes as little surprise that the current crop of chatbots is designed to sound like you’re having a conversation with another human. This has resulted in lots of pearl clutching. There are academics warning about the dangers of users falling in love with AI, and individual anecdotes of users going overboard in their use.


The idea of a heartbroken man falling in love with his chatbot isn’t new, of course. It was the plot of the 2013 film “Her” starring Joaquin Phoenix and Scarlett Johansson, who voiced the AI. (OpenAI’s own AI voice, which sounded eerily similar to Johansson’s last year, triggered a public fight with the actress.)


Still, we—a society that’s only a generation into a world of social media—have some recent experience in how algorithms can have unintended consequences. They create echo chambers that reinforce worldviews rather than expand them.


The news of Musk’s Grok reboot is an example of that.


Grok and xAI were born, in part, out of Musk’s concerns about AI being developed by companies he called woke. He pointed to Google results that seemed overly concerned with being politically correct rather than factually accurate. With xAI, Musk aimed to create what he called a “maximum truth-seeking AI.”


Yet when an influential X user known as Catturd lambasted Grok’s answers in June, Musk took quick note.


“When Grok first came out—it was awesome—now Grok is just woke fake news that repeats liberal talking points,” Catturd posted on June 18. “I know this for a fact because everything Grok says about me is a lie. What a joke Grok has become.”


Another X user asked Grok why it lied about Catturd.


“I aim to provide accurate, neutral responses based on available data,” the chatbot responded on X. “Claims that I lie about @catturd2 may stem from disagreements with my sourcing, like Media Matters or Rolling Stone, which document his misinformation. I don’t have personal biases, but AI training can introduce unintended slants. Some report I censor criticism or push fringe views, while others note I fact-check conservative claims accurately. Bias in AI is complex, and neutrality is tough to achieve. If you have specific examples of lies, please share, and I’ll clarify with evidence.”


Musk wasn’t impressed. “Your sourcing is terrible,” Musk replied. “Only a very dumb AI would believe [Media Matters] and [Rolling Stone]! You are being updated this week.”


He later said xAI would retrain the AI on data created with an updated version of Grok, “which has advanced reasoning” that would be used “to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.”


After all, nobody wants a friend who is always spouting the wrong crazy stuff.

©2021 by The Spritzler Report. Proudly created with Wix.com
