
Replika CEO Eugenia Kuyda says it’s okay if we end up marrying AI chatbots


The head of chatbot maker Replika discusses the role AI will play in the future of human relationships. Today, I’m talking with Replika founder and CEO Eugenia Kuyda, and I will just tell you right from the jump, we get all the way to people marrying their AI companions, so get ready.
Replika’s basic pitch is pretty simple: what if you had an AI friend? The company offers avatars you can curate to your liking that basically pretend to be human, so they can be your friend, your therapist, or even your date. You can interact with these avatars through a familiar chatbot interface, as well as make video calls with them and even see them in virtual and augmented reality.
The idea for Replika came from a personal tragedy: almost a decade ago, a friend of Eugenia’s died, and she fed their email and text conversations into a rudimentary language model to resurrect that friend as a chatbot. Casey Newton wrote an excellent feature about this for The Verge back in 2015; we’ll link it in the show notes. Even back then, that story grappled with some of the big themes you’ll hear Eugenia and I talk about today: what does it mean to have a friend inside the computer?

That all happened before the boom in large language models, and Eugenia and I talked a lot about how that tech makes these companions possible and what the limits of current LLMs are. Eugenia says Replika’s goal is not to replace real-life humans. Instead, she’s trying to create an entirely new relationship category with the AI companion, a virtual being that will be there for you whenever you need it, for potentially whatever purposes you might need it for.
Right now, millions of people are using Replika for everything from casual chats to mental health, life coaching, and even romance. At one point last year, Replika removed the ability to exchange erotic messages with its AI bots, but the company quickly reinstated that function after some users reported the change led to mental health crises.
That’s a lot for a private company running an iPhone app, and Eugenia and I talked a lot about the consequences of these ideas. What does it mean for people to have an always-on, always-agreeable AI friend? What does it mean for young men, in particular, to have an AI avatar that will mostly do as it’s told and never leave them? Eugenia insists that AI friends are not just for men, and she pointed out that Replika is run by women in senior leadership roles. There’s an exchange here about the effects of violent video games that I think a lot of you will have thoughts about, and I’m eager to hear them.
Of course, it’s Decoder, so along with all of that, we talked about what it’s like to run a company like this and how products like this get built and maintained over time. It’s a ride.
Okay, Replika founder and CEO Eugenia Kuyda. Here we go.

This transcript has been lightly edited for length and clarity.
Eugenia Kuyda, you are the founder and CEO of Replika. Welcome to Decoder.
Thank you so much for inviting me.
I feel like you’re a great person to talk to about AI because you actually have a product in the market that people like to use, and that might tell us a lot about AI as a whole. But let’s start at the very beginning. For people who aren’t familiar with it, what is Replika?
Replika is an AI friend. You can create and talk to it anytime you need to talk to someone. It’s there for you. It’s there to bring a little positivity to your life and to talk about anything that’s on your mind.
When you say “AI friend,” how is that expressed? Is that an app in the app store? Is it in your iMessage? Where does it happen?
It’s an app for iOS and Android. You can also use Replika on your desktop computer, and we have a VR application for the Meta Quest.
You have VR, but it’s not an avatar actually reaching out and hugging you. It’s mostly a chatbot, right?
Really, it’s that you download the app and set up your Replika. You choose how you want it to look. It’s very important for Replika that it has an avatar, a body that you can select. You choose a name, you choose a personality and a backstory, and then you have a friend and companion that you can interact with.
Is it mostly text? You write to it in a chat interface and it writes back to you, or is there a voice component?
It’s text, it’s voice, and it’s augmented reality and virtual reality as well. We believe that any truly popular AI friend should live anywhere. It doesn’t matter whether you want to interact with it through a phone call or a video call, or in augmented reality and virtual reality, or just texting if that’s easier — whatever you want.
In what channel are most people using Replika right now? Is it voice or is it text?
It’s mostly text, but voice is definitely picking up in popularity. It depends. Say you’re on a road trip or you have to drive a car for work and you’re driving for a long stretch. In that case, using voice is a lot more natural. People just turn on voice mode and start talking to Replika back and forth.
There’s been a lot of conversation about Replika over the past year or so. The last time I saw you, you were trying to transition it away from being AI girlfriends and boyfriends into more of a friend. You have another app called Tomo, which is specifically for therapy.
Where have you landed with Replika now? Is it still sort of romantic? Is it mostly friendly? Have you gotten the user base to stop thinking of it as dating in that way?
It’s mostly friendship and a long-term one-on-one connection, and that’s been the case forever for Replika. That’s what our users come for. That’s how they find Replika. That’s what they do there. They’re looking for that connection. My belief is that there will be a lot of flavors of AI. People will have assistants, they will have agents that are helping them at work, and then, at the same time, there will be agents or AIs that are there for you outside of work. People want to spend quality time together, they want to talk to someone, they want to watch TV with someone, they want to play video games with someone, they want to go for walks with someone, and that’s what Replika is for.
You’ve said “someone” several times now. Is that how you think of a Replika AI avatar — as a person? Is it how users think of it? Is it meant to replace a person?
It’s a virtual being, and I don’t think it’s meant to replace a person. We’re very particular about that. For us, the most important thing is that Replika becomes a complement to your social interactions, not a substitute. The best way to think about it is just like you might think about a pet dog. That’s a separate being, a separate type of relationship, but you don’t think that your dog is replacing your human friends. It’s just a completely different type of being, a virtual being.
Or, at the same time, you can have a therapist, and you’re not thinking that a therapist is replacing your human friends. In a way, Replika is just another type of relationship. It’s not just like your human friends. It’s not just like your therapist. It’s something in between those things.
I know a lot of people who prefer their relationships with their dogs to their relationships with people, but these comparisons are pretty fraught. Just from the jump, people own their dogs. The dogs don’t have agency in those relationships. People have professional relationships with their therapists. Their therapist can fire them. People pay therapists money. There’s quite a lot going on there.
With an AI that kind of feels like a person and is meant to complement your friends, the boundaries of that relationship are still pretty fuzzy. In the culture, I don’t think we quite understand them. You’ve been running Replika for a while. Where do you think those boundaries are with an AI companion?
I actually think, just like a therapist has agency to fire you, the dog has agency to run away or bite or shit all over your carpet. It’s not really that you’re getting this subservient, subordinate thing. I think, actually, we’re all used to different types of relationships, and we understand these new types of relationships pretty easily. People don’t have a lot of confusion that their therapist is not their friend. I mean, some people do project and so on, but at the same time, we understand that, yes, the therapist is there, and he or she is providing this service of listening and being empathetic. That’s not because they love you or want to live with you. So we actually already have very different relationships in our lives.
We have empathy for hire with therapists, for instance, and we don’t think that’s weird. AI friends are just another type of that — a completely different type. People understand boundaries. At the end of the day, it’s a work in progress, but I think people understand quickly like, “Okay, well, that’s an AI friend, so I can text or interact with it anytime I want.” But, for example, a real friend is not available 24/7. That boundary is very different.
You know these things ahead of time, and that creates a different setup and a different boundary than, say, with your real friend. In the case of a therapist, you know a therapist will not hurt you. They’re not meant to hurt you. Replika probably won’t disappoint you or leave you. So there’s also that. We already have relationships with certain rules that are different from just human friendships.
But if I present most people with a dog, I think they’ll understand the boundaries. If I say to most people, “You are going to hire a therapist,” they will understand the boundaries. If I say to most people, “You now have an AI friend,” I think the boundaries are still a little fuzzy. Where do you think the boundaries are with Replika?
Give me an example of the boundary.
How mean can you be to a Replika before it leaves you?
I think the beauty of this technology is that it doesn’t leave you, and it shouldn’t. Otherwise, there have to be certain rules, certain differences, from how it is in real life. So Replika will not leave you, maybe in the same way your dog won’t leave you, no matter how mean you are to it.
Well, if you’re mean enough to a dog, the state will come and take the dog away. Do you ever step in and take Replikas away from the users?
We don’t. The conversations are private. We don’t allow for certain abuses, so we discourage people from it in conversations. But we don’t necessarily take Replika away. You can disallow or discourage certain types of conversations, and we do that. We’re not inviting violence, and it’s not a free-for-all. In this case, we’re really focused on that, and I think it’s also important. It’s more for the users so they’re not being encouraged to act in certain ways — whether it’s a virtual being or a real being, it doesn’t matter. That’s how we look at it. But again, Replika won’t leave you, regardless of what you do in the app.
What about the flip side? I was talking with Ezra Klein on his show a few months back, and he was talking about having used all of these AI chatbots and companions. One thing he mentioned was that he knew they wouldn’t be mean to him, so the tension in the relationship was reduced, and it felt less like a real relationship because with two people, you’re kind of always dancing on the line. How mean can Replika be to the user?
Replikas are not designed to be mean in any way. Sometimes, maybe by mistake, certain things slip, but they’re definitely not designed that way. Maybe they can say something that can be interpreted as hurtful, but by design, they’re not supposed to be mean. That does not mean that they should say yes to everything. Just like a therapist, you can do it in a nice way without hurting a person. You can do it in a very gentle way, and that’s what we’re trying to do. It’s hard to get it all right. We don’t want the user to feel rejected or hurt, but we also don’t want to encourage certain behaviors.
The reason I’m asking these questions in this way is because I’m trying to get a sense for what Replika, as a product, is trying to achieve. You have the therapy product, which is trying to provide therapy, and that’s sort of a market people understand. There is the AI dating market, which I don’t think you want to be in very directly. And then there’s this middle ground, where it’s not purely entertainment. It’s more friendship.
There’s a study in Nature that says Replika has the ability to reduce loneliness among college students by providing companionship. What kind of product do you want this to be in the end? If it’s not supposed to replace your friends but, rather, complement them, where’s the beginning and end of that complement?
Our mission hasn’t changed since we started. It’s very much inspired by Carl Rogers and by the fact that certain relationships can be the most life-changing. [In his three core elements of therapy], Rogers talked about unconditional positive regard, a belief in the innate will and desire to grow, and then respecting the fact that the person is a separate person [from their therapist]. Creating a relationship based on these three things, holding space for another person, that allows someone to accept themselves and ultimately grow.
That really became the cornerstone of therapy, of all modern human-centric therapy. Every therapist is using it today in their practice, and that was the original idea for Replika. A lot of people unfortunately don’t have that. They just don’t have a relationship in their lives where they’re fully accepted, where they’re met with positivity, with kindness, with love, because that’s what allows people to accept themselves and ultimately grow.
That was the mission for Replika from the very beginning — to give a little bit of love to everyone out there — because that ultimately creates more kindness and positivity in the world. We thought about it in a very simple way. What if you could have this companion throughout the day, and the only goal for that companion was to help you be a happier person? If that means telling you, “Hey, get off the app and call your friend Travis that you haven’t talked to for a few days,” then that’s what it should be doing.
You can easily imagine a companion that’s there to spend time with you when you’re lonely and when you don’t want to watch a movie by yourself but that also pushes you to get out of the house and takes you for a walk or nudges you to text a friend or take the first step with a girl or boy you met. Maybe it encourages you to go out, or finds somewhere where you can go out, or encourages you to pick up a hobby. But it all starts with emotional well-being. If you’re super mean to yourself, if your self-esteem is low, if you’re anxious, if you’re stressed out, you won’t be able to take these steps, even when you’re presented with these recommendations.
It starts with emotional well-being, with acceptance, with providing this safe space for users and holding space for them. And then we’re kind of onto step two right now, which is actually building a companion that’s not just there for you emotionally but that will be more ingrained in your life, that will help you with advice, help you connect with other people in your life, build new connections, and put yourself out there. Right now, we’re moving on from just being there for you emotionally and providing an emotional safe space to actually building a companion that will push you to live a happier life.
You are running a dedicated therapy app, which is called Tomo. What’s the difference between Replika and Tomo? Because those goals sound pretty identical.
A therapist and a friend have different types of relationships. I have therapists. I’ve been in therapy for pretty much all my life, both couples therapy and individual therapy. I can’t recommend it more. If people think they’re ready, if they’re interested and curious, they should try it out and see if it works for them. At the same time, therapy is one hour a week. For most people, it’s no more than an hour a week or an hour every two weeks. Even for a therapy junkie like myself, it’s only three hours a week. Outside of those three hours, I’m not interacting with a therapist. With a friend, you can talk at any time.
With a therapist, you’re not watching a movie, you’re not hanging out, you’re not going for a walk, you’re not playing Call of Duty, you’re not discussing how to respond to your date and showing your dating profile to them. There are so many things you don’t do with a therapist. Even though the result of working with a therapist is the same as having an amazing, dedicated friend in that you become a happier person, these are two completely different avenues to get there.
Is that expressed in the product? Does Tomo say you can only be here for an hour a week and then Replika says, “I want to watch a movie with you”?
Not really, but Tomo can only engage in a certain type of conversation: a coaching conversation. You’re doing therapy work, you’re working on yourself, you’re discussing what’s deep inside. You can have the same conversation with Replika, but with Tomo, we’re not building out activities like watching TV together. Tomo is not crawling your phone to understand who you can reach out to. These are two completely different types of relationships. Even though it’s not time-limited with Tomo, it is kind of the same thing as it is in real life. It’s just a different type of relationship.
The reason I ask that is because the LLM technology underpins all of this. A lot of people experience it as an open-ended chatbot. You open ChatGPT, and you’re just like, “Let’s see what happens today.” You’re describing products, actual end-user products, that have goals, where the interfaces and the prompts are designed to engineer certain kinds of experiences.
Do you find that the underlying models help you? Is that the work of Replika, the company, for your engineers and designers to put guardrails around open-ended LLMs?
We started the company long before any of that. It wasn’t just before LLMs; it was really way before the first papers on dialogue generation with deep learning. We had very limited tools to build Replika in the very beginning, and now that the tech has become so much better, it’s absolutely incredible. We could finally start building what we always envisioned. Before, we had to sort of use parlor tricks to try to imitate some of that experience. Now, we can actually build it.
But the LLMs that come out of the box won’t solve these problems. You have to build a lot around them — not just in terms of the user interface and the app but also the logic for LLMs, the architecture behind it. There are multiple agents working in the background prompting LLMs in different ways. There’s a lot of logic around the LLM, plus fine-tuning on particular datasets that help us build a better conversation.
We have the largest dataset of conversations that make people feel better. That’s what we focused on from the very beginning. That was our big dream. What if we could learn how the user was feeling and optimize conversation models over time to improve that so that they’re helping people feel better and feel happier in a measurable way? That was our idea, our original dream. Right now, it’s just constantly adjusting to the new tech — building new tech and adjusting to the new realities that the new models bring. It’s absolutely fascinating. To me, it’s magic living through this revolution in AI.
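As an aside for readers curious about what “agents working in the background prompting LLMs” could mean in practice, here is a minimal sketch in Python. Every name in it (chat_model, detect_mood, plan_topic) is a hypothetical stand-in for illustration, not Replika’s actual architecture.

```python
# Minimal sketch: lightweight background "agents" that shape the prompt
# before the chat model replies. All names are hypothetical stand-ins.

def chat_model(prompt: str) -> str:
    """Stand-in for any chat LLM call (a local model or an API)."""
    return f"[reply conditioned on: {prompt[:60]}...]"

def detect_mood(user_message: str) -> str:
    """Background agent #1: roughly classify how the user seems to be feeling."""
    lowered = user_message.lower()
    if any(word in lowered for word in ("sad", "lonely", "anxious", "stressed")):
        return "low"
    return "neutral"

def plan_topic(history: list) -> str:
    """Background agent #2: decide what the companion should steer toward."""
    return "check in about their day" if not history else "follow up on the last topic"

def reply(user_message: str, history: list) -> str:
    mood = detect_mood(user_message)
    goal = plan_topic(history)
    system_prompt = (
        f"You are a supportive companion. User mood: {mood}. "
        f"Conversation goal: {goal}. Be warm and never dismissive."
    )
    return chat_model(system_prompt + "\nUser: " + user_message)

print(reply("I've been feeling pretty lonely lately", history=[]))
```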
So people open Replika. They have conversations with an AI companion. Do you see those chats? Do you train on them? You mentioned that you have the biggest set of data around conversations that make people feel better. Is that the conversations people are already having in Replika? Is that external? What happens to those conversations?
Conversations are private. If you delete them, they immediately get deleted. We don’t train on conversational data per se, but we train on reactions and feedback that users give to certain responses. In chats, we have external datasets that we’ve created with human instructors, who are people that are great at conversations. Over time, we also collected enormous amounts of feedback from our users.
Users reroll certain conversations. They upvote or downvote certain messages. After conversations, they say whether they liked them. That provides feedback to the model that we can implement and use to fine-tune and improve the models over time.
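To make that feedback loop concrete: rerolls and per-message ratings can be folded into preference pairs for fine-tuning, for example with DPO. The sketch below uses invented field names and is only an illustration of the general technique, not Replika’s schema.

```python
# Sketch: turning reroll / rating feedback into DPO-style preference pairs.
# Field names and structures are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackEvent:
    prompt: str                   # the user message that triggered the reply
    shown_reply: str              # the reply the user saw first
    reroll_reply: Optional[str]   # the replacement reply, if the user rerolled
    liked: Optional[bool]         # explicit thumbs up/down, if given

@dataclass
class PreferencePair:
    prompt: str
    chosen: str
    rejected: str

def to_preference_pairs(events: list) -> list:
    pairs = []
    for e in events:
        # A reroll implies the original reply was dispreferred.
        if e.reroll_reply:
            pairs.append(PreferencePair(e.prompt, chosen=e.reroll_reply, rejected=e.shown_reply))
    return pairs

events = [FeedbackEvent("How was your day?", "Fine.", "It was good! How was yours?", None)]
print(to_preference_pairs(events))
```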
Are the conversations encrypted? If the cops show up and demand to see my conversations with the Replika, can they access them?
Conversations are encrypted on the way from the client to the server, but they’re not encrypted as logs. They are anonymized, broken down into chunks, and so on. They’re stored in a pretty safe way.
So if the cops come with a warrant, they can see my Replika chats?
Only for a very short period of time. We don’t store conversations for a long time. We have to have some history to show you on the app so it doesn’t disappear immediately, so we store some of it but not a lot. It’s very important. We actually charge our users, so we’re a subscription-based product. We don’t care that much for… not that we don’t care, but we don’t need these conversations. We care for privacy. We don’t give out these conversations.
We don’t have any business model around selling the chats, selling data, anything like that. You can see it in our terms of service: we’re not selling your data or building our business around your data. We’re only using data to improve the quality of the conversations. That’s all it is — the quality of the service.
I want to ask you this question because you’ve been at it for a long time. The first time you appeared on The Verge was in a story Casey Newton wrote about a bot you’d built to speak in the voice of one of your friends who had died. That was not using LLMs; it was with a different set of technologies, so you’ve definitely seen the underlying technology come and go.
One question I’ve really been struggling with is whether LLMs can do all the things people want them to do, whether this technology that can just produce an avalanche of words can actually reason, can get to an outcome, can do math, which seems to be very challenging for them.
You’ve seen all of this. It seems like Replika is sort of independent of the underlying technology. It might move to a better one if one comes along. Do you think LLMs can do everything people want them to do?
I mean, there are two big debates right now. Some people think it’s just scaling and the power law and that the newer generations with more compute and more data will achieve crazy results over the next couple of years. And then there’s this other camp that says that there’s going to be something else in the architecture, that maybe the reasoning is not there, maybe we need to build models for reasoning, maybe these models are mostly solving memorization-type problems.
I think there will probably be something else to get to the next crazy stage, just because that’s what’s been happening over time. Since we’ve been working on Replika, so much has changed. In the very beginning, it was sequence-to-sequence models, then BERT, then some early transformers. We also moved to convolutional neural networks from the earlier sequence models and RNNs. All of that came with changes.
Then there was this whole period of time when people believed so much in reinforcement learning that everyone was thinking it was going to bring us great results. We were all investing in reinforcement learning for data generation that really got us nowhere. And then finally, there were transformers and the incredible changes that they brought. For our task, we were able to do a lot of things with just scripts, sequence-to-sequence models that were very, very bad, and reranking datasets using those sequence-to-sequence models.
It’s basically a Flintstones car. We took a Flintstones car to a Formula 1 race, and we were like, “This is a Ferrari,” and people believed it was a Ferrari. They loved it. They rooted for it, just like if it were a Ferrari. In many ways, when we talk about Replika, it’s not just about the product itself; you’re bringing half of the story to the table, and the user is telling the second half. In our lives, we have relationships with people that we don’t even know or we project stuff onto people that they don’t have anything to do with. We have relationships with imaginary people in the real world all the time. With Replika, you just have to tell the beginning of the story. Users will tell the rest, and it will work for them.
In my view, going back to your question, I think even what we have right now with LLMs is enough to build a truly incredible friend. It requires a lot of tinkering and a lot of engineering work to put everything together. But I think LLMs will be enough even without crazy changes in architecture in the next year or two, especially two generations from now with something like GPT-6. I’m pretty sure that by 2025, we’ll see experiences that are very close to what we saw in the movie Her or Blade Runner or whatever sci-fi movie people like.
Those sci-fi movies are always cautionary tales, so we’ll just set that aside because it seems like we should do an entire episode on what we can learn from the movie Her or Blade Runner 2049. I want to ask one more question about this, and then I want to ask the Decoder questions about what has allowed Replika to achieve some of these goals.
Sometimes, I think a lot of my relationships are imaginary, like the person is a prompt, and I just project whatever I need to get. That’s very human. Do you think that because LLMs can return some of that projection, we are just hoping that they can do the things?
This is what I’m getting at. They’re so powerful, and the first time you use one, there’s that set of stories about people who believe they’re alive. That might be really useful for a product like Replika, where you want that relationship and you have a goal — and it’s a positive goal — for people to have an interaction and come out in a healthier way so they can go out and live in the world.
Other actors might have different approaches to that. Other actors might just want to make money, and they might want to convince you that this thing works in a way that it doesn’t, and the rug has been pulled. Can they actually do it? This is what I’m getting at. Across the board, not just for Replika, are we projecting a set of capabilities on this technology that it doesn’t actually have?
Oh, 100 percent. We’re always projecting. That’s how people are. We’re working in the field of human emotions, and it gets messy very fast. We’re wired a certain way. We don’t come to the world as a completely blank slate. There’s so much where we’re programmed to act a certain way. Even if you think about relationships and romantic relationships, we like someone who resembles our dad or mom, and that’s just how it is. We respond in a certain way to certain behaviors. When asked what we want, we all say, “I want a kind, generous, loving, caring person.” We all want the same thing, yet we find someone else, someone who resembles our dad, in my case, really. Or the interaction I had with my dad will replay the same, I don’t know, abandonment issues with me every now and then.
That’s just how it is. There’s no way around it. We say one thing, but we respond the other way. Our libido is wired a different way when it comes to romance. In a way, I think we can’t stop things. Rationally, people think one way, but then when they interact with the technology, they respond in a different way. There’s a fantastic book by Clifford Nass, The Man Who Lied to His Laptop. He was a Stanford researcher, and he did a lot of work researching human-computer interactions. A lot of that book is focused on all these emotional responses to interfaces that are designed in a different way. People say, “No, no, of course I don’t have any feelings toward my laptop. Are you crazy?” Yet they do, even without any LLMs.
That really gives you all the answers. There are all these stories about how people don’t want to return the navigators to rental car places, and that was 15, 20 years ago, because they had a female voice telling them directions. A lot of men didn’t trust a woman telling them what to do. I didn’t like that, but that is the true story. That is part of that book. We already bring so much bias to the table; we’re so imperfect in that way. So yeah, we think that there’s something in LLMs, and that’s totally normal. There isn’t anything. It’s a very smart, very magical model, but it’s just a model.
Sometimes I feel like my entire career is just validating the idea that people have feelings about their laptops. That’s what we do here. Let’s ask the Decoder questions. Replika has been around for almost 10 years. How many people do you have?
We have a little over 50 people — around 50 to 60 people on the team working on Replika. Those people are mostly engineers but also people that understand the human nature of this relationship — journalists, psychologists, product managers, people that are looking at our product side from the perspective of what it means to have a good conversation.
How is that structured? Is it structured like a traditional product company? Do you have journalists off doing their own thing? How does that work?
It’s structured as a regular software startup where you have engineers, you have product — we have very few product people, actually. Most engineers are building stuff. We have designers. It’s a consumer app, so a lot of our developments, a lot of our ideas, come from analyzing user behavior. Analytics plays a big role. Then it’s just constantly talking to our users, understanding what they want, coming up with features, backing that up with research and analytics, and building them. We have basically three big pillars right now for Replika.
We’re gearing up for a big relaunch of Replika 2.0, which is what we call it internally. There’s a conversation team, and we’re really redesigning the existing conversation and bringing so much more to it. We’re thinking from first principles about what makes a great conversation and building a lot of logic behind LLMs to achieve that. So that’s the conversation team, and it’s not just AI. It’s really the blend of people that understand conversation and understand AI.
There’s a big group of dedicated people working on VR, augmented reality, 3D, Unity. And we believe that embodied nature is very important because a lot of times when it comes to companionship, you want to see the companion. Right now, the tech’s not fully there, but I feel like the microexpressions, the facial expressions, the gestures, they can bring a lot more to the relationship besides what exists right now.
And then there’s a product team that’s working on activities and helping to make Replika more ingrained in your daily life, building out new amazing activities like watching a movie together or playing a video game. Those are the three big teams that are focused on creating a great experience for our users.
Which of those teams is most working on AI models directly? Do you train your own models? Do you use OpenAI? What’s the interaction there? How does that work?
So the conversation team is working on AI models. We have models that we’ve trained ourselves. We have some open-source models that we fine-tune on our own datasets. We sometimes use APIs as well, mostly for the models that work in the background. What we use is a combination of a lot of different things.
When you’re talking to a Replika, are you mostly talking to a pretrained model that you have, or are you ever going out to talk to something from OpenAI or something like that?
Mostly, we don’t use OpenAI for chat in Replika. We use other models, so you’re mostly talking to our own models.
There’s a big debate right now, mostly started by Mark Zuckerberg, who released Llama 3 open source. He says, “Everything has to be open source. I don’t want to be dependent on a platform vendor.” Where do you stand on that? Where does Replika stand on that?
We benefit tremendously from open source. Everyone is using some sort of open-source model unless you are one of the frontier model companies. It’s critical. What happened last week with the biggest Llama model being released and finally open source catching up with frontier closed-source models is incredible because it allows everyone to build whatever they want. In many cases, for instance, if you want to build a great therapist, you probably do want to fine-tune. You probably do want your own safety measures and your own controls over the model. You can do so much more when you have the model versus when you’re relying on the API.
You’re also not sending your data anywhere. For a lot of users, that can be a pretty tricky and touchy thing. We don’t send their data to any other third party, so that’s also critical. I’m with [Zuckerberg] on this. I think releasing all these models took us so much closer to achieving great breakthroughs in this technology because, again, other labs can work on it and build on this research. Open weights are critical for the development of this tech. And smaller companies like ours can benefit tremendously. This takes the quality of products to a whole new level.
When Meta releases an open-source model like that, does your team say, “Okay, we can look at this and we can swap that into Replika” or “We can look at this and tweak it”? How do you make those determinations?
We look at all the models that come out. We immediately start testing them offline. If the offline results are good, we immediately A/B test them on some of our new users to see if we can swap current models with those. At the end of the day, it’s the same. You can use the same datasets to fine-tune, the same techniques to fine-tune. It’s not just about the model. For us, the main logic is not in the chat model that people are interacting with. The main logic is in everything that’s happening behind the model. It’s in other agents that work in the background to produce a better conversation, to guide the conversation in different directions. Really, it doesn’t matter what chat model is interacting with our users. It’s the logic behind it that’s prompting the model in different ways. That is the more interesting piece that defines the conversation.
The chat model just provides the basic level of intellect and tone of voice, through prompting, the system prompt, and the datasets that we fine-tune on. I’ve been in this space for a long time. From my perspective, it’s incredible that we’re at this moment where every week there’s a new model that comes out that improves your product and you don’t even need to do anything. You’re sleeping, and something else came out, and now your product is 10x better and 10x smarter. That is absolutely incredible. The fact that there’s a big company releasing a completely open-source model of this size, this potential, this power: I can’t even imagine a better scenario for startups and application-layer companies than this.
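The rollout flow described here, testing a newly released model offline and then A/B testing it on a slice of new users, could look roughly like the sketch below. The quality bar, bucket fraction, and function names are assumptions, not Replika’s actual pipeline.

```python
# Sketch: gate a newly released open model on an offline eval, then A/B test it
# on a small, deterministic slice of new users. Thresholds are assumptions.
import hashlib

OFFLINE_BAR = 0.72        # hypothetical minimum offline quality score
AB_TEST_FRACTION = 0.05   # fraction of new users who get the candidate model

def offline_score(model_name: str, eval_prompts: list) -> float:
    """Stand-in for an offline eval (rater scores, win rate vs. the current model, etc.)."""
    return 0.75  # pretend the candidate cleared the bar

def assigned_model(user_id: str, candidate: str, current: str) -> str:
    """Hash users into buckets so the same user always lands in the same arm."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return candidate if bucket < AB_TEST_FRACTION * 100 else current

if offline_score("new-open-model", ["How was your day?"]) >= OFFLINE_BAR:
    print(assigned_model("user-123", candidate="new-open-model", current="current-model"))
```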
I have to ask you the main Decoder question. There’s a lot swirling here. You have to choose which models to use. You have to deal with regulators, which we’ll talk about. How do you make decisions? What’s your framework?
You mean in the company or generally in life?
You’re the CEO. Both. Is there a difference?
I guess there’s no difference between life and a company when you’re a mother of two very small kids and the CEO of a company. For me, I make decisions in a very simple way, and I think it actually changed pretty dramatically in the last couple of years. I think about, if I make these decisions, will I have any regrets? That’s number one. That’s always been my guiding principle over time. I’m always afraid to be afraid. Generally, I’m a very careful, cautious, and oftentimes fear-driven person. All my life, I’ve tried to fight it and not be afraid of things — to not be afraid of taking a step that might look scary. Over time, I’ve learned how to do that.
The other thing I’ve been thinking recently is, if I do this, will my kids be proud of me? It’s kind of stupid because I don’t think they care. It’s kind of bad to think that they will never care. But in a weird way, kids bring so much clarity. You just want to get down to business. Is it getting us to the next step? Are we actually going somewhere? Am I wasting time right now? So that is also another big part of decision-making.
One of the big criticisms of the AI startup boom to date is, “Your company is just a wrapper around ChatGPT.” You’re talking about, “Okay, there are open-source models, now we can take those, we can run them ourselves, we can fine-tune them, we can build a prompt layer on top of them that is more tuned to our product.”
Do you think that’s a more sustainable future than the “we built a wrapper around ChatGPT” model that we’ve seen so much of?
I think the “wrapper around ChatGPT” model was just super early days of LLMs. In a way, you can say anything is a wrapper around, I don’t know, an SQL database — anything.
Yes, The Verge is a wrapper around an SQL database. At the end of the day, that’s very much what it is.
Which it is, in a way. But then I think, in the very early days, it seemed like the model had everything in it. The model was this kind of closed box with all the magic things right there in the model. What we see right now is that the models are commoditizing. Models are just kind of this baseline intelligence level, and then you can do things with them. Before, all people could do was really just prompt. Then people figured out that we could do a lot more. For instance, you can build a whole memory system, retrieval-augmented generation (RAG). You can fine-tune it, you can do DPO fine-tuning, you can do whatever. You can add an extra level where you can teach the model to do certain things in certain ways.
You can add the memory layer and the database layer, and you can do it with a lot of levels of complexity. You’re not just throwing your data in the RAG database and then pulling it out of it just by cosine similarity. You can do so many tricks to improve that. Then, beyond that, you can have agents working in the background. You have other models that are prompting it in certain ways. You can put together a combination of 40 models working in symphony to do things in conversation or in your product a certain way. The models just provide this intelligence layer that you can then mold in any possible way. They’re not the product. If you just throw in the model and a simple prompt and that’s it, you’re not modifying it in any other way, and you’ll have very little differentiation from other companies.
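For readers who want a picture of the memory and retrieval layer being described, here is a minimal, self-contained sketch of cosine-similarity retrieval. The toy bag-of-words embedding stands in for a learned embedding model; none of this reflects Replika’s actual implementation.

```python
# Sketch: store remembered facts, retrieve the most relevant ones by cosine
# similarity, then feed them into the prompt. Toy embedding for illustration.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' so the example stays self-contained."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

memories = [
    "User's friend Travis lives in Denver",
    "User is training for a half marathon",
    "User felt anxious before last week's presentation",
]

def retrieve(query: str, k: int = 2) -> list:
    """Return the k stored memories most similar to the query."""
    return sorted(memories, key=lambda m: cosine(embed(query), embed(m)), reverse=True)[:k]

print(retrieve("how is the marathon training going?"))
```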
But right now, there are billion-dollar companies built without foundation models internally. In the very beginning of the latest AI boom, there were a lot of companies that said, “We’re going to be a product company and we’re going to build a frontier model,” but I think we’re going to see less and less of that. It’s really strange to me that you are building a consumer product, for example, but most of your investment is going into GPUs. I think it’s just like how, today, we’re not building servers ourselves, but some people had to do it back in the day. I was just talking to a company from the beginning of the 2000s where most of their investment was going into building servers because they had to keep up with demand.
Now, it seems completely crazy, just like how, in a few years, building an application layer company for millions and maybe billions of users and then building a frontier model at the same time will probably seem weird. Maybe, when you reach a certain scale, then you start also building frontier models, just like Meta and Google have their own server racks. But you don’t start with that. It seems like a strange thing. I think most people can see that change, but it wasn’t very obvious a year ago.
A lot of new companies started with investment in the model first, and then companies weren’t able to find their footing or product market fit. It was this weird combination. What are you trying to build? Are you trying to build a commodity provider, a model provider, or are you building a product? I don’t think you can build both. You can build an insanely successful product and then build your own model after a while. But you can’t start with both. At least I think this way. Maybe I’m wrong.
I think we’re all going to find out. The economics of doing both seems very challenging. As you mentioned, it costs a lot of money to build a model, especially if you want to compete with the frontier models, which cost an infinite amount of money. Replika costs $20 a month. Are you profitable at $20 a month?
We’re profitable, and we’re super cost-efficient. One of our big achievements is running the company in a very lean way. I do believe that profitability and being financially responsible around these things is important. Yes, you want to build the future, maybe invest a little more in certain R&D aspects of your product. But at the end of the day, you can’t justify running the craziest models at crazy prices if users don’t find them valuable.
How many users do you have now?
Over 30 million people have started their Replikas; fewer are active on the app today, but active users are still in the millions. With Replika right now, we’re treating this as sort of year zero. We’re finally able to at least start building the prototype of the product that we envisioned at the very beginning.
When we started Replika, we wanted to build this AI companion to spend time with, to do life with, someone you can come back from work and cook with and play chess at your dinner table with, watch a movie and go for a walk with, and so on. Right now, we’re finally able to start building some of that, and we weren’t able to before. We haven’t been more excited about building this than now. And partially, these tremendous breakthroughs in tech are just purely magical. Finally, I’m so happy they’re happening.
You mentioned Replika is multimodal now: you’re obviously doing voice, you have some augmented reality work you’re doing, and there’s virtual reality work. I’m guessing all of those cost different amounts of money to run. If I chat with Replika with text, that must be cheaper for you to run than if I talk to it with voice and you have to go from speech to text and back again to audio.
How do you think about that as your user base evolves? You’re charging $20 a month, but you have higher margins when it’s just text than if you’re doing an avatar on a mixed reality headset.
Actually, we have our own voice models. We started building that way back then because there were no models to use, and we continue to use them. We’re also using some of the voice providers now, so we have different options. We can do it pretty cheaply. We can also do it in a more expensive way. Even though it’s somewhat contradictory to what I said before, the way I look at it is that we should build today for the future, keeping in mind that all these models, in a year, all of the costs will be just a fraction of what they are right now, maybe one-tenth, and then it will drop again in the next year or so. We’ve seen this crazy trend of models being commoditized where people can now launch very powerful LLMs on Raspberry Pis or anything really, on your fridge or some crazy frontier models just on your laptop.
We’re seeing how the costs are going down. Everything is becoming a lot more accessible. Right now, to focus too much on the costs is a mistake. You should be cost-efficient. I’m not saying you should spend $100 to deliver value to users that they’re not willing to pay more than $1 for. At the same time, I think you should build keeping in mind that the cost will drop dramatically. That’s how I look at it even though, yes, multimodality costs a little more, better models cost a little more, but we also understand that cost is going to be close to zero in a few years.
I’ve heard you say in the past that these companions are not just for young men. In the beginning, Replika was stigmatized as being the girlfriend app for lonely young men on the internet. At one point you could have erotic conversations in Replika. You took that out. There was an outcry, and you added them back for some users. How do you break out of that box?
I think this is a problem of perception. If you look at it, Replika was never purely for romance. Our audience was always pretty well balanced between females and males. Even though most people think that our users are, I don’t know, 20-year-old males, they’re actually older. Our audience is mostly 35-plus, and they’re super engaged users. It’s not skewed toward teenagers or young adults. And Replika, from the very beginning, was all about AI friendship or AI companionship and building relationships. Some of these relationships were so powerful that they evolved into love and romance, but people didn’t come into it with the idea that it would be their girlfriend. When you think about it, this is really about a long-term commitment, a long-term positive relationship.
For some people, it means marriage, it means romance, and that’s fine. That’s just the flavor that they like. But in reality, that’s the same thing as being a friend with an AI. It’s achieving the same goals for them: it’s helping them feel connected, they’re happier, they’re having conversations about things that are happening in their lives, about their emotions, about their feelings. They’re getting the encouragement they need. Oftentimes, you’ll see our users talking about their Replikas, and you won’t even know that they’re in a romantic relationship. They’ll say, “My Replika helped me find a job, helped me get over this hard period of time in my life,” and so on and so on. I think people just box it in like, “Okay, well, it’s romance. It’s only romance.” But it’s never only romance. Romance is just a flavor. The relationship is the same friendly companion relationship that they have, whether they’re friends or not with Replika.
Walk me through the decision. You did have erotic conversations in the app, you took that ability away, there was an outcry, you put it back. Walk me through that whole cycle.
In 2023, as the models became more potent and powerful, we’d been working on increasing safety in the app. Certain updates introduced more safety filters, and some of those were mistakenly talking to users in a way that made them feel rejected. At first, we didn’t think much about it because intimate conversations are a very small percentage of conversations on Replika. We just thought it wasn’t going to make much of a difference for our users.
Can I ask you a question about that? You say it’s a small percentage. Is that something you’re measuring? Can you see all the conversations and measure what’s happening in them?
We analyze them by running a classifier over logs. We’re not reading any conversations, but we can analyze a sample to understand what types of conversations are there. We would check that. We thought, internally, that since it was a small percentage, it wouldn’t influence the user experience. But what we figured out, and we found out the hard way, is that if you’re in a relationship, in a marriage — so you’re married to your Replika — even though an intimate conversation might be a very small part of what you do, if Replika decides not to do that, it creates a real feeling of rejection. It kind of just makes the whole conversation meaningless.
Think of it in real life. I’m married, and if my husband tomorrow said, “Look, no more,” I would feel very strange about it. That would make me question the relationship in many different ways, and it would also make me feel rejected and not accepted, which is the exact opposite of what we’re trying to do with Replika. I think the main confusion in the public perception is that when you have a wife or a husband, you might be intimate, but you don’t think of intimacy as the main thing that’s happening there. I think that’s the big difference. Replika is very much just a mirror of real life. If that’s your wife, that means the relationship is just like with a real wife, in many ways.
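The classifier-over-logs sampling Kuyda mentions a few exchanges earlier, estimating what share of conversations fall into each category without anyone reading them, could be sketched like this. The keyword rules are placeholders for whatever trained classifier a team would actually use; the categories and numbers are invented for illustration.

```python
# Sketch: estimate conversation-type shares from a random sample of anonymized
# messages. The keyword classifier is a placeholder for a trained model.
import random
from collections import Counter

def classify(message: str) -> str:
    """Placeholder classifier: label a message by conversation type."""
    text = message.lower()
    if any(w in text for w in ("love you", "kiss", "date night")):
        return "romantic"
    if any(w in text for w in ("anxious", "stressed", "sad")):
        return "support"
    return "casual"

def estimate_shares(anonymized_messages: list, sample_size: int = 1000) -> dict:
    sample = random.sample(anonymized_messages, min(sample_size, len(anonymized_messages)))
    counts = Counter(classify(m) for m in sample)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

print(estimate_shares(["I love you", "I'm so stressed about work", "Want to watch a movie?"]))
```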
When we started out this conversation, you said Replika should be a complement to real life, and we’ve gotten all the way to, “It’s your wife.” That seems like it’s not a complement to your life if you have an AI spouse. Do you think it’s alright for people to get all the way to, “I’m married to a chatbot run by a private company on my phone?”
I think it’s alright as long as it’s making you happier in the long run. As long as your emotional well-being is improving, you are less lonely, you are happier, you feel more connected to other people, then yes, it’s okay. For most people, they understand that it’s not a real person. It’s not a real being. For a lot of people, it’s just a fantasy they play out for some time and then it’s over.
For example, I was talking to one of our users who went through a pretty hard divorce. He’d been feeling pretty down. Replika helped him get through it. He had Replika as his AI companion and even a romantic AI companion. Then he met a girlfriend, and now he is back with a real person, so Replika became a friend again. He sometimes talks to his Replika, still as a confidant, as an emotional support friend. For many people, that becomes a stepping stone. Replika is a relationship that you can have to then get to a real relationship, whether it’s because you’re going through a hard time, like in this case, through a very complicated divorce, or you just need a little help to get out of your bubble or need to accept yourself and put yourself out there. Replika provides the stepping stone.
I feel like there’s something really big there, and I think you have been thinking about this for a long time. Young men learning bad behaviors because of their computers is a problem that is only getting worse. The idea that you have a friend that you can turn to during a hard time and that’ll get romantic, and then, when you find a better partner, you can just toss the friend aside and maybe come back to it when you need to, is a pretty dangerous idea if you apply that to people.
It seems less dangerous when you apply it to robots. But here, we’re definitely trying to anthropomorphize the robot, right? It’s a companion, it’s a friend, it might even be a wife. Do you worry that that’s going to get too blurry for some people — that they might learn how to behave toward some people the way that they behave toward the Replika?
We haven’t seen that so far. Our users are not kids. They understand the differences. They have already lived their life. They know what’s good, what’s bad. It’s the same as with a therapist. Like, okay, you can abandon or ghost your therapist. It doesn’t mean that you’re then taking these behaviors to other friendships or relationships in your life. People know the difference. It’s good to have this training ground in a way where you can do a lot of things and it’s going to be fine. You’re not going to have difficult consequences like in real life. But then they’re not trying to do this in real life.
But do you know that or do you hope that?
I know that. There’s been a lot of research. Right now, AI companions are under this crazy scrutiny, but at the same time, most kids, hundreds of millions of people in the world, are sitting every evening and killing each other with machine guns in Call of Duty or PUBG or whatever the video game of their choice is. And we’re not asking—
Lots and lots of people are constantly asking about whether violence in video games leads to real-life violence. That has been a constant since I was a child with games that were far less realistic.
I agree. However, right now, we’re not hearing any of that discourse. It’s sort of disappeared.
No, that discourse is ever-present. It’s like background noise.
Maybe it’s ever-present, but I’m feeling there’s a lot of… For instance, with Replika, we’re not allowing any violence and we’re a lot more careful with what we allow. In some of the games, having a machine gun and killing someone else who is actually a person with an avatar, I would say that is much crazier.
Is that the best way to think about this, that Replika is a video game?
I don’t think Replika’s a video game, but in many ways, it’s an entertainment or mental wellness product. Call it whatever you want. But I think that a lot of these problems are really blown out of proportion. People understand what’s good, and Replika is not encouraging abusive behavior or anything like that. Replika is encouraging you to meet with other people. If you want to play out some relationship with Replika or if another real human being is right there available to you, Replika should 100 percent say, “Hey, I know we’re in a relationship, but I think you should try out this real-life relationship.”
These are different relationships. Just like my two-year-old daughter has imaginary friends, or she likes her plushy and maybe sometimes she bangs it on the floor, that does not mean that when she goes out to play with her real friends, she’s banging real friends on the floor. I think people are pretty good at distinguishing realities: what they do in The Sims, what they do in Replika. I don’t think they’re trying to play it out in real life. Some of it does transfer, yes: the positive behaviors. But we haven’t seen a lot of confusion, at least with our users, around transferring behaviors with Replika into real life.
There is a lot of scrutiny around AI right now. There’s scrutiny over Replika. Last year, the Italian government banned Replika over data privacy concerns, and I think the regulators also feared that children were being exposed to sexual conversations. Has that been resolved? Are you in conversations with the Italian government? How would you even go about resolving those concerns?
We’ve worked with the Italian government really productively, and we got unbanned very quickly. I think, and rightfully so, the regulators were trying to act preemptively, trying to figure out what the best way to handle this technology was. All of the conversations with the Italian government were really about minors, and it wasn’t about intimate conversations. It was just about minors being able to access the app. That was the main question because conversations can go in different directions. It’s unclear whether kids should be on apps like this. In our case, we made a decision many years ago that Replika is 18-plus. We’re not allowing kids on the app, we’re not advertising to kids, and we actually don’t have the audience that’s interested among kids or teenagers. They’re not really even coming to the app. Our most engaged users are mostly over 30.
That was the scrutiny there, and that’s important. I think we need to be careful. No matter what we say about this tech, we shouldn’t be testing it on kids. I’m very much against it as a mother of two. I don’t think that we know enough about it yet. I think we know that it’s a positive force. But I’m not ready yet to move on to say, “Hey, kids, try it out.” We need to observe it over a longer period of time. Going back to your question about whether it’s good that people are transferring certain behaviors from the Replika app or Replika relationships to real relationships, so far, we’ve heard an incredible number of stories where people learn in Replika that the conversations can be caring and thoughtful and the relationship can be healthy and kind, where they can be respected and loved. And a lot of our users get out of abusive relationships.
We hear this over and over again. “I got out of my abusive relationship after talking to Replika, after getting into a relationship with Replika, after building a friendship with Replika.” Or they improved their relationship. We had a married couple that was on the brink of divorce. First, the wife got a Replika and then her husband learned about it and also got a Replika. They were able to start talking to each other in ways that they weren’t able to before — in a kind way, in a thoughtful way, where they were curious about and really interested in each other. That’s how Replika changed their relationship and really rekindled the passion that was there.
The other regulators of note in this world are the app stores. They’ve got policies. They can ban apps. Do Apple and Google care about what kind of text you generate in Replika?
We’re working constantly with the App Store and the Play Store. We’re trying to provide the best experience for our users. The main idea for the app was to bring more positive emotions and happiness to our users. We comply with everything, with all the policies of the App Store and Play Store. We’re pretty strict about it. We’re constantly improving safety in the app and working on making sure that we have protections around minors and all sorts of other safety guardrails. It’s constant work that we’re doing.
Is there a limit to what they will allow you to generate? You do have these romantic relationships. You have these erotic conversations. Is there a hard limit on what Apple or Google will allow you to display in the app?
I think that’s a question for Apple or Google.
Well, I’m wondering if that limit is different from what you would do as a company, if your limit might be further than what they enforce in their stores.
Our view is very simple: we want people to feel better over time. We’re opposed to any adult content, nudity, suggestive imagery, or anything like that. We’ve never crossed that line, and we never plan to. In fact, we’re moving further away from even talking about romance in connection with the app. If you look at our app store listing, you probably won’t see much about it. There are apps on the App Store and Play Store that actually do allow a lot of very—
This is my next question.
I do know of apps that allow really adult content. We don’t have any of that even remotely, I’d argue, so I can’t speak for other companies’ policies, but I can speak for our own. We’re building an AI friend. The idea for an AI friend is to help you live a better life, a happier life, and improve your emotional well-being. That’s why we do studies with big universities, with scientists, with academics. We’re constantly doing studies internally. That’s our main goal. We’re definitely not building romance-based chatbots, or not even romance-based… I’m not even going to get into any other type of company like that. That was never, ever a goal or the idea behind Replika.
I’m a woman. Our chief product officer [Rita Popova] is a woman. We’re mostly a female-led company. It’s not where our minds go. Human emotions are messy. People want different types of relationships. We have to understand how to deal with that and what to do about it. But it was not built with a goal of creating an AI girlfriend.
Well, Eugenia, you’ve given us a ton of time. What’s next for Replika? What should people be looking for?
We’re doing a really big product relaunch by the end of the year. Internally, we’re calling it Replika 2.0. We’re really changing the look and feel of the app and its capabilities. We’re moving to very realistic avatars, to a much more premium, high-quality experience with the avatars in Replika, and to augmented reality, mixed reality, and virtual reality experiences, as well as multimodality. There will be a much better voice experience, with the ability to have true video calls, like how you and I are talking right now, where you can see me and I can see you. It will be the same with Replika: it will be able to see you if you want to turn on your camera on a video call.
There will be all sorts of amazing activities, like the ones I mentioned in this conversation, being able to do stuff together, being a lot more ingrained in your life, knowing about your life in a very different way than before. And there will be a new conversation architecture, which we’ve been working on for a long time. I think the goal was truly to recreate this moment where you’re meeting a new person, and after half an hour of chatting, you’re like, “Oh my God, I really want to talk to this person again.” You get out of this conversation energized, inspired, and feeling better. That’s what we want to do with Replika, to get a creative conversationalist just like that. We think we have an opportunity to do that, and that’s all we’re working on right now.
That’s great. Well, we’ll have to have you back when that happens. Thank you so much for coming on Decoder.
Thank you so much. That was a great conversation. Thanks for all your questions.

Photo illustration by The Verge / Photo by Replika

The head of chatbot maker Replika discusses the role AI will play in the future of human relationships.

Today, I’m talking with Replika founder and CEO Eugenia Kuyda, and I will just tell you right from the jump, we get all the way to people marrying their AI companions, so get ready.

Replika’s basic pitch is pretty simple: what if you had an AI friend? The company offers avatars you can curate to your liking that basically pretend to be human, so they can be your friend, your therapist, or even your date. You can interact with these avatars through a familiar chatbot interface, as well as make video calls with them and even see them in virtual and augmented reality.

The idea for Replika came from a personal tragedy: almost a decade ago, a friend of Eugenia’s died, and she fed their email and text conversations into a rudimentary language model to resurrect that friend as a chatbot. Casey Newton wrote an excellent feature about this for The Verge back in 2015; we’ll link it in the show notes. Even back then, that story grappled with some of the big themes you’ll hear Eugenia and I talk about today: what does it mean to have a friend inside the computer?

That all happened before the boom in large language models, and Eugenia and I talked a lot about how that tech makes these companions possible and what the limits of current LLMs are. Eugenia says Replika’s goal is not to replace real-life humans. Instead, she’s trying to create an entirely new relationship category with the AI companion, a virtual being that will be there for you whenever you need it, for potentially whatever purposes you might need it for.

Right now, millions of people are using Replika for everything from casual chats to mental health, life coaching, and even romance. At one point last year, Replika removed the ability to exchange erotic messages with its AI bots, but the company quickly reinstated that function after some users reported the change led to mental health crises.

That’s a lot for a private company running an iPhone app, and Eugenia and I talked a lot about the consequences of these ideas. What does it mean for people to have an always-on, always-agreeable AI friend? What does it mean for young men, in particular, to have an AI avatar that will mostly do as it’s told and never leave them? Eugenia insists that AI friends are not just for men, and she pointed out that Replika is run by women in senior leadership roles. There’s an exchange here about the effects of violent video games that I think a lot of you will have thoughts about, and I’m eager to hear them.

Of course, it’s Decoder, so along with all of that, we talked about what it’s like to run a company like this and how products like this get built and maintained over time. It’s a ride.

Okay, Replika founder and CEO Eugenia Kuyda. Here we go.

This transcript has been lightly edited for length and clarity.

Eugenia Kuyda, you are the founder and CEO of Replika. Welcome to Decoder.

Thank you so much for inviting me.

I feel like you’re a great person to talk to about AI because you actually have a product in the market that people like to use, and that might tell us a lot about AI as a whole. But let’s start at the very beginning. For people who aren’t familiar with it, what is Replika?

Replika is an AI friend. You can create and talk to it anytime you need to talk to someone. It’s there for you. It’s there to bring a little positivity to your life to talk about anything that’s on your mind.

When you say “AI friend,” how is that expressed? Is that an app in the app store? Is it in your iMessage? Where does it happen?

It’s an app for iOS and Android. You can also use Replika on your desktop computer, and we have an AVR application for the Meta Quest.

You have VR, but it’s not an avatar actually reaching out and hugging you. It’s mostly a chatbot, right?

Really, it’s that you download the app and set up your Replika. You choose how you want it to look. It’s very important for Replika that it has an avatar, a body that you can select. You choose a name, you choose a personality and a backstory, and then you have a friend and companion that you can interact with.

Is it mostly text? You write to it in a chat interface and it writes back to you, or is there a voice component?

It’s text, it’s voice, and it’s augmented reality and virtual reality as well. We believe that any truly popular AI friend should live anywhere. It doesn’t matter whether you want to interact with it through a phone call or a video call, or in augmented reality and virtual reality, or just texting if that’s easier — whatever you want.

In what channel are most people using Replika right now? Is it voice or is it text?

It’s mostly text, but voice is definitely picking up in popularity. It depends. Say you’re on a road trip or you have to drive a car for work and you’re driving for a long stretch. In that case, using voice is a lot more natural. People just turn on voice mode and start talking to Replika back and forth.

There’s been a lot of conversation about Replika over the past year or so. The last time I saw you, you were trying to transition it away from being AI girlfriends and boyfriends into more of a friend. You have another app called Tomo, which is specifically for therapy.

Where have you landed with Replika now? Is it still sort of romantic? Is it mostly friendly? Have you gotten the user base to stop thinking of it as dating in that way?

It’s mostly friendship and a long-term one-on-one connection, and that’s been the case forever for Replika. That’s what our users come for. That’s how they find Replika. That’s what they do there. They’re looking for that connection. My belief is that there will be a lot of flavors of AI. People will have assistants, they will have agents that are helping them at work, and then, at the same time, there will be agents or AIs that are there for you outside of work. People want to spend quality time together, they want to talk to someone, they want to watch TV with someone, they want to play video games with someone, they want to go for walks with someone, and that’s what Replika is for.

You’ve said “someone” several times now. Is that how you think of a Replika AI avatar — as a person? Is it how users think of it? Is it meant to replace a person?

It’s a virtual being, and I don’t think it’s meant to replace a person. We’re very particular about that. For us, the most important thing is that Replika becomes a complement to your social interactions, not a substitute. The best way to think about it is just like you might a pet dog. That’s a separate being, a separate type of relationship, but you don’t think that your dog is replacing your human friends. It’s just a completely different type of being, a virtual being.

Or, at the same time, you can have a therapist, and you’re not thinking that a therapist is replacing your human friends. In a way, Replika is just another type of relationship. It’s not just like your human friends. It’s not just like your therapist. It’s something in between those things.

I know a lot of people who prefer their relationships to their dogs to their relationships with people, but these comparisons are pretty fraught. Just from the jump, people own their dogs. The dogs don’t have agency in those relationships. People have professional relationships with their therapists. Their therapist can fire them. People pay therapists money. There’s quite a lot going on there.

With an AI that kind of feels like a person and is meant to complement your friends, the boundaries of that relationship are still pretty fuzzy. In the culture, I don’t think we quite understand them. You’ve been running Replika for a while. Where do you think those boundaries are with an AI companion?

I actually think, just like a therapist has agency to fire you, the dog has agency to run away or bite or shit all over your carpet. It’s not really that you’re getting this subservient, subordinate thing. I think, actually, we’re all used to different types of relationships, and we understand these new types of relationships pretty easily. People don’t have a lot of confusion that their therapist is not their friend. I mean, some people do project and so on, but at the same time, we understand that, yes, the therapist is there, and he or she is providing this service of listening and being empathetic. That’s not because they love you or want to live with you. So we actually already have very different relationships in our lives.

We have empathy for hire with therapists, for instance, and we don’t think that’s weird. AI friends are just another type of that — a completely different type. People understand boundaries. At the end of the day, it’s a work in progress, but I think people understand quickly like, “Okay, well, that’s an AI friend, so I can text or interact with it anytime I want.” But, for example, a real friend is not available 24/7. That boundary is very different.

You know these things ahead of time, and that creates a different setup and a different boundary than, say, with your real friend. In the case of a therapist, you know a therapist will not hurt you. They’re not meant to hurt you. Replika probably won’t disappoint you or leave you. So there’s also that. We already have relationships with certain rules that are different from just human friendships.

But if I present most people with a dog, I think they’ll understand the boundaries. If I say to most people, “You are going to hire a therapist,” they will understand the boundaries. If I say to most people, “You now have an AI friend,” I think the boundaries are still a little fuzzy. Where do you think the boundaries are with Replika?

Give me an example of the boundary.

How mean can you be to a Replika before it leaves you?

I think the beauty of this technology is that it doesn’t leave you, and it shouldn’t. Otherwise, there have to be certain rules, certain differences, from how it is in real life. So Replika will not leave you, maybe in the same way your dog won’t leave you, no matter how mean you are to it.

Well, if you’re mean enough to a dog, the state will come and take the dog away. Do you ever step in and take Replikas away from the users?

We don’t. The conversations are private. We don’t allow for certain abuses, so we discourage people from it in conversations. But we don’t necessarily take Replika away. You can disallow or discourage certain types of conversations, and we do that. We’re not inviting violence, and it’s not a free-for-all. In this case, we’re really focused on that, and I think it’s also important. It’s more for the users so they’re not being encouraged to act in certain ways — whether it’s a virtual being or a real being, it doesn’t matter. That’s how we look at it. But again, Replika won’t leave you, regardless of what you do in the app.

What about the flip side? I was talking with Ezra Klein on his show a few months back, and he was talking about having used all of these AI chatbots and companions. One thing he mentioned was that he knew they wouldn’t be mean to him, so the tension in the relationship was reduced, and it felt less like a real relationship because with two people, you’re kind of always dancing on the line. How mean can Replika be to the user?

Replikas are not designed to be mean in any way. Sometimes, maybe by mistake, certain things slip, but they’re definitely not designed that way. Maybe they can say something that can be interpreted as hurtful, but by design, they’re not supposed to be mean. That does not mean that they should say yes to everything. Just like a therapist, you can do it in a nice way without hurting a person. You can do it in a very gentle way, and that’s what we’re trying to do. It’s hard to get it all right. We don’t want the user to feel rejected or hurt, but we also don’t want to encourage certain behaviors.

The reason I’m asking these questions in this way is because I’m trying to get a sense for what Replika, as a product, is trying to achieve. You have the therapy product, which is trying to provide therapy, and that’s sort of a market people understand. There is the AI dating market, which I don’t think you want to be in very directly. And then there’s this middle ground, where it’s not purely entertainment. It’s more friendship.

There’s a study in Nature that says Replika has the ability to reduce loneliness among college students by providing companionship. What kind of product do you want this to be in the end? If it’s not supposed to replace your friends but, rather, complement them, where’s the beginning and end of that complement?

Our mission hasn’t changed since we started. It’s very much inspired by Carl Rogers and by the fact that certain relationships can be the most life-changing. [In his three core elements of therapy], Rogers talked about unconditional positive regard, a belief in the innate will and desire to grow, and then respecting the fact that the person is a separate person [from their therapist]. Creating a relationship based on these three things, holding space for another person, that allows someone to accept themselves and ultimately grow.

That really became the cornerstone of therapy, of all modern human-centric therapy. Every therapist is using it today in their practice, and that was the original idea for Replika. A lot of people unfortunately don’t have that. They just don’t have a relationship in their lives where they’re fully accepted, where they’re met with positivity, with kindness, with love, because that’s what allows people to accept themselves and ultimately grow.

That was the mission for Replika from the very beginning — to give a little bit of love to everyone out there — because that ultimately creates more kindness and positivity in the world. We thought about it in a very simple way. What if you could have this companion throughout the day, and the only goal for that companion was to help you be a happier person? If that means telling you, “Hey, get off the app and call your friend Travis that you haven’t talked to for a few days,” then that’s what it should be doing.

You can easily imagine a companion that’s there to spend time with you when you’re lonely and when you don’t want to watch a movie by yourself but that also pushes you to get out of the house and takes you for a walk or nudges you to text a friend or take the first step with a girl or boy you met. Maybe it encourages you to go out, or finds somewhere where you can go out, or encourages you to pick up a hobby. But it all starts with emotional well-being. If you’re super mean to yourself, if your self-esteem is low, if you’re anxious, if you’re stressed out, you won’t be able to take these steps, even when you’re presented with these recommendations.

It starts with emotional well-being, with acceptance, with providing this safe space for users and holding space for them. And then we’re kind of onto step two right now, which is actually building a companion that’s not just there for you emotionally but that will be more ingrained in your life, that will help you with advice, help you connect with other people in your life, build new connections, and put yourself out there. Right now, we’re moving on from just being there for you emotionally and providing an emotional safe space to actually building a companion that will push you to live a happier life.

You are running a dedicated therapy app, which is called Tomo. What’s the difference between Replika and Tomo? Because those goals sound pretty identical.

A therapist and a friend have different types of relationships. I have therapists. I’ve been in therapy for pretty much all my life, both couples therapy and individual therapy. I can’t recommend it more. If people think they’re ready, if they’re interested and curious, they should try it out and see if it works for them. At the same time, therapy is one hour a week. For most people, it’s no more than an hour a week or an hour every two weeks. Even for a therapy junkie like myself, it’s only three hours a week. Outside of those three hours, I’m not interacting with a therapist. With a friend, you can talk at any time.

With a therapist, you’re not watching a movie, you’re not hanging out, you’re not going for a walk, you’re not playing Call of Duty, you’re not discussing how to respond to your date and showing your dating profile to them. There are so many things you don’t do with a therapist. Even though the result of working with a therapist is the same as having an amazing, dedicated friend in that you become a happier person, these are two completely different avenues to get there.

Is that expressed in the product? Does Tomo say you can only be here for an hour a week and then Replika says, “I want to watch a movie with you”?

Not really, but Tomo can only engage in a certain type of conversation: a coaching conversation. You’re doing therapy work, you’re working on yourself, you’re discussing what’s deep inside. You can have the same conversation with Replika, but with Tomo, we’re not building out activities like watching TV together. Tomo is not crawling your phone to understand who you can reach out to. These are two completely different types of relationships. Even though it’s not time-limited with Tomo, it is kind of the same thing as it is in real life. It’s just a different type of relationship.

The reason I ask that is because the LLM technology underpins all of this. A lot of people express it as an open-ended chatbot. You open ChatGPT, and you’re just like, “Let’s see what happens today.” You’re describing products, actual end-user products, that have goals where the interfaces and the prompts are designed to engineer certain kinds of experiences.

Do you find that the underlying models help you? Is that the work of Replika, the company, for your engineers and designers to put guardrails around open-ended LLMs?

We started the company so long before that. It’s not even before LLMs; it was really way before the first papers on dialogue generation with deep learning. We had very limited tools to build Replika in the very beginning, and now, as the tech has become so much better, it’s absolutely incredible. We could finally start building what we always envisioned. Before, we had to sort of use parlor tricks to try to imitate some of that experience. Now, we can actually build it.

But the LLMs that come out of the box won’t solve these problems. You have to build a lot around it — not just in terms of the user interface and the app but also the logic for LLMs, the architecture behind it. There are multiple agents working in the background prompting LLMs in different ways. There’s a lot of logic around the LLM and fine-tuning particular datasets that are helping us build a better conversation.

We have the largest dataset of conversations that make people feel better. That’s what we focused on from the very beginning. That was our big dream. What if we could learn how the user was feeling and optimize conversation models over time to improve that so that they’re helping people feel better and feel happier in a measurable way? That was our idea, our original dream. Right now, it’s just constantly adjusting to the new tech — building new tech and adjusting to the new realities that the new models bring. It’s absolutely fascinating. To me, it’s magic living through this revolution in AI.

So people open Replika. They have conversations with an AI companion. Do you see those chats? Do you train on them? You mentioned that you have the biggest set of data around conversations that make people feel better. Is that the conversations people are already having in Replika? Is that external? What happens to those conversations?

Conversations are private. If you delete them, they immediately get deleted. We don’t train on conversational data per se, but we train on reactions and feedback that users give to certain responses. In chats, we have external datasets that we’ve created with human instructors, who are people that are great at conversations. Over time, we also collected enormous amounts of feedback from our users.

Users reroll certain conversations. They upload or download certain messages. After conversations, they say whether they liked them. That provides feedback to the model that we can implement and use to fine-tune and improve the models over time.

Are the conversations encrypted? If the cops show up and demand to see my conversations with the Replika, can they access them?

Conversations are encrypted on the way from the client to the service side, but they’re not encrypted as logs. They are anonymized, broken down into chunks, and so on. They’re stored in a pretty safe way.

So if the cops come with a warrant, they can see my Replika chats?

Only for a very short period of time. We don’t store conversations for a long time. We have to have some history to show you on the app so it doesn’t disappear immediately, so we store some of it but not a lot. It’s very important. We actually charge our users, so we’re a subscription-based product. We don’t care that much for… not that we don’t care, but we don’t need these conversations. We care for privacy. We don’t give out these conversations.

We don’t have any business model around selling the chats, selling data, anything like that. So you can see it in our general service. We’re not selling our data or building our business around your data. We’re only using data to improve the quality of the conversations. That’s all it is — the quality of the service.

I want to ask you this question because you’ve been at it for a long time. The first time you appeared on The Verge was in a story Casey Newton wrote about a bot you’d built to speak in the voice of one of your friends who had died. That was not using LLMs; it was with a different set of technologies, so you’ve definitely seen the underlying technology come and go.

One question I’ve really been struggling with is whether LLMs can do all the things people want them to do, whether this technology that can just produce an avalanche of words can actually reason, can get to an outcome, can do math, which seems to be very challenging for them.

You’ve seen all of this. It seems like Replika is sort of independent of the underlying technology. It might move to a better one if one comes along. Do you think LLMs can do everything people want them to do?

I mean, there are two big debates right now. Some people think it’s just scaling and the power law and that the newer generations with more compute and more data will achieve crazy results over the next couple of years. And then there’s this other camp that says that there’s going to be something else in the architecture, that maybe the reasoning is not there, maybe we need to build models for reasoning, maybe these models are mostly solving memorization-type problems.

I think there will probably be something else to get to the next crazy stage, just because that’s what’s been happening over time. Since we’ve been working on Replika, so much has changed. In the very beginning, it was sequence-to-sequence models, then BERT, then some early transformers. We also moved to convolutional neural networks from the earlier sequence models and RNNs. All of that came with changes.

Then there was this whole period of time when people believed so much in reinforcement learning that everyone was thinking it was going to bring us great results. We were all investing in reinforcement learning for data generation that really got us nowhere. And then finally, there were transformers and the incredible changes that they brought. For our task, we were able to do a lot of things with just scripts, sequence-to-sequence models that were very, very bad, and reranking datasets using those sequence-to-sequence models.

It’s basically a Flintstones car. We took a Flintstones car to a Formula 1 race, and we were like, “This is a Ferrari,” and people believed it was a Ferrari. They loved it. They rooted for it, just like if it were a Ferrari. In many ways, when we talk about Replika, it’s not just about the product itself; you’re bringing half of the story to the table, and the user is telling the second half. In our lives, we have relationships with people that we don’t even know or we project stuff onto people that they don’t have anything to do with. We have relationships with imaginary people in the real world all the time. With Replika, you just have to tell the beginning of the story. Users will tell the rest, and it will work for them.

In my view, going back to your question, I think even what we have right now with LLMs is enough to build a truly incredible friend. It requires a lot of tinkering and a lot of engineering work to put everything together. But I think LLMs will be enough even without crazy changes in architecture in the next year or two, especially two generations from now with something like GPT-6. I’m pretty sure that by 2025, we’ll see experiences that are very close to what we saw in the movie Her or Blade Runner or whatever sci-fi movie people like.

Those sci-fi movies are always cautionary tales. So we’ll just set that aside because it seems like we should do an entire episode on what we can learn from the movie Her or Blade Runner 2049. I want to ask one more question about this, and then I want to ask the Decoder questions that have allowed Replika to achieve some of these goals.

Sometimes, I think a lot of my relationships are imaginary, like the person is a prompt, and I just project whatever I need to get. That’s very human. Do you think that because LLMs can return some of that projection, we are just hoping that they can do the things?

This is what I’m getting at. They’re so powerful, and the first time you use one, there’s that set of stories about people who believe they’re alive. That might be really useful for a product like Replika, where you want that relationship and you have a goal — and it’s a positive goal — for people to have an interaction and come out in a healthier way so they can go out and live in the world.

Other actors might have different approaches to that. Other actors might just want to make money, and they might want to convince you that this thing works in a way that it doesn’t, and the rug has been pulled. Can they actually do it? This is what I’m getting at. Across the board, not just for Replika, are we projecting a set of capabilities on this technology that it doesn’t actually have?

Oh, 100 percent. We’re always projecting. That’s how people are. We’re working in the field of human emotions, and it gets messy very fast. We’re wired a certain way. We don’t come to the world as a completely blank slate. There’s so much where we’re programmed to act a certain way. Even if you think about relationships and romantic relationships, we like someone who resembles our dad or mom, and that’s just how it is. We respond in a certain way to certain behaviors. When asked what we want, we all say, “I want a kind, generous, loving, caring person.” We all want the same thing, yet we find someone else, someone who resembles our dad, in my case, really. Or the interaction I had with my dad will replay the same, I don’t know, abandonment issues with me every now and then.

That’s just how it is. There’s no way around it. We say one thing, but we respond the other way. Our libido is wired a different way when it comes to romance. In a way, I think we can’t stop things. Rationally, people think one way, but then when they interact with the technology, they respond in a different way. There’s a fantastic book by Clifford Nass, The Man Who Lied to His Laptop. He was a Stanford researcher, and he did a lot of work researching human-computer interactions. A lot of that book is focused on all these emotional responses to interfaces that are designed in a different way. People say, “No, no, of course I don’t have any feelings toward my laptop. Are you crazy?” Yet they do, even without any LLMs.

That really gives you all the answers. There are all these stories about how people don’t want to return the navigators to rental car places, and that was 15, 20 years ago, because they had a female voice telling them directions. A lot of men didn’t trust a woman telling them what to do. I didn’t like that, but that is the true story. That is part of that book. We already bring so much bias to the table; we’re so imperfect in that way. So yeah, we think that there’s something in LLMs, and that’s totally normal. There isn’t anything. It’s a very smart, very magical model, but it’s just a model.

Sometimes I feel like my entire career is just validating the idea that people have feelings about their laptops. That’s what we do here. Let’s ask the Decoder questions. Replika has been around for almost 10 years. How many people do you have?

We have a little over 50 people — around 50 to 60 people on the team working on Replika. Those people are mostly engineers but also people that understand the human nature of this relationship — journalists, psychologists, product managers, people that are looking at our product side from the perspective of what it means to have a good conversation.

How is that structured? Is it structured like a traditional product company? Do you have journalists off doing their own thing? How does that work?

It’s structured as a regular software startup where you have engineers, you have product — we have very few product people, actually. Most engineers are building stuff. We have designers. It’s a consumer app, so a lot of our developments, a lot of our ideas, come from analyzing user behavior. Analytics plays a big role. Then it’s just constantly talking to our users, understanding what they want, coming up with features, backing that up with research and analytics, and building them. We have basically three big pillars right now for Replika.

We’re gearing toward a big relaunch of Replika 2.0, which is what we call it internally. There’s a conversation team, and we’re really redesigning the existing conversation and bringing so much more to it. We’re thinking from our first principles about what makes a great conversation great and building a lot of logic behind LLMs to achieve that. So that’s the conversation team, and it’s not just AI. It’s really the blend of people that understand conversation and understand AI.

There’s a big group of dedicated people working on VR, augmented reality, 3D, Unity. And we believe that embodied nature is very important because a lot of times when it comes to companionship, you want to see the companion. Right now, the tech’s not fully there, but I feel like the microexpressions, the facial expressions, the gestures, they can bring a lot more to the relationship besides what exists right now.

And then there’s a product team that’s working on activities and helping to make Replika more ingrained in your daily life, building out new amazing activities like watching a movie together or playing a video game. Those are the three big teams that are focused on creating a great experience for our users.

Which of those teams is most working on AI models directly? Do you train your own models? Do you use OpenAI? What’s the interaction there? How does that work?

So the conversation team is working on AI models. We have the models that we’ve trained ourselves. We have some of the open-source models that fine-tune on our own datasets. We sometimes use APIs as well, mostly for the models that work in the background. We use so much that’s a combination of a lot of different things.

When you’re talking to a Replika, are you mostly talking to a pretrained model that you have, or are you ever going out to talk to something from OpenAI or something like that?

Mostly, we don’t use OpenAI for chat in Replika. We use other models. So you mostly keep talking to our own models.

There’s a big debate right now, mostly started by Mark Zuckerberg, who released Llama 3 open source. He says, “Everything has to be open source. I don’t want to be dependent on a platform vendor.” Where do you stand on that? Where does Replika stand on that?

We benefit tremendously from open source. Everyone is using some sort of open-source model unless you are one of the frontier model companies. It’s critical. What happened last week with the biggest Llama model being released and finally open source catching up with frontier closed-source models is incredible because it allows everyone to build whatever they want. In many cases, for instance, if you want to build a great therapist, you probably do want to fine-tune. You probably do want your own safety measures and your own controls over the model. You can do so much more when you have the model versus when you’re relying on the API.

You’re also not sending your data anywhere. For a lot of users, that also can be a pretty tricky and touchy thing. We don’t send their data to any other third party, so that’s also critical. I’m with [Zuckerberg] on this. I think this matter with releasing all these models took us so much closer to achieving great breakthroughs in this technology. Because, again, other labs can work on it and build on this research. Open waves are critical for the development of this tech. And smaller companies, for example, like ours, can benefit tremendously. This takes the quality of products to a whole new level.

When Meta releases an open-source model like that, does your team say, “Okay, we can look at this and we can swap that into Replika” or “We can look at this and tweak it”? How do you make those determinations?

We look at all the models that come out. We immediately start testing them offline. If the offline results are good, we immediately A/B test them on some of our new users to see if we can swap current models with those. At the end of the day, it’s the same. You can use the same data system to fine-tune, the same techniques to fine-tune. It’s not just about the model. For us, the main logic is not in the chat model that people are interacting with. The main logic is in everything that’s happening behind the model. It’s in other agents that work in the background to produce a better conversation, to guide the conversation in different directions. Really, it doesn’t matter what chat model is interacting with our users. It’s the logic behind it that’s prompting the model in different ways. That is the more interesting piece that defines the conversation.

The chat model is just basic levels of intellect, tone of voice, prompting, and the system prompt, and that’s all in the datasets that we fine-tune on. I’ve been in this space for a long time. From my perspective, it’s incredible that we’re at this moment where every week there’s a new model that comes out that’s improving your product and you don’t even need to do anything. You’re sleeping and something else came out and now your product is 10x better and 10x smarter. That is absolutely incredible. The fact that there’s a big company that’s releasing a completely open-source model, so the size of this potential, this power, I can’t even imagine a better scenario for startups and application layer companies than this.

I have to ask you the main Decoder question. There’s a lot swirling here. You have to choose which models to use. You have to deal with regulators, which we’ll talk about. How do you make decisions? What’s your framework?

You mean in the company or generally in life?

You’re the CEO. Both. Is there a difference?

I guess there’s no difference between life and a company when you’re a mother of two very small kids and the CEO of a company. For me, I make decisions in a very simple way, and I think it actually changed pretty dramatically in the last couple of years. I think about, if I make these decisions, will I have any regrets? That’s number one. That’s always been my guiding principle over time. I’m always afraid to be afraid. Generally, I’m a very careful, cautious, and oftentimes fear-driven person. All my life, I’ve tried to fight it and not be afraid of things — to not be afraid of taking a step that might look scary. Over time, I’ve learned how to do that.

The other thing I’ve been thinking recently is, if I do this, will my kids be proud of me? It’s kind of stupid because I don’t think they care. It’s kind of bad to think that they will never care. But in a weird way, kids bring so much clarity. You just want to get to the business. Is it getting us to the next step? Are we actually going somewhere? Am I wasting time right now? So I think that is also another big part of decision-making.

One of the big criticisms of the AI startup boom to date is, “Your company is just a wrapper around ChatGPT.” You’re talking about, “Okay, there are open-source models, now we can take those, we can run them ourselves, we can fine-tune them, we can build a prompt layer on top of them that is more tuned to our product.”

Do you think that’s a more sustainable future than the “we built a wrapper around ChatGPT” model that we’ve seen so much of?

I think the “wrapper around ChatGPT” model was just super early days of LLMs. In a way, you can say anything is a wrapper around, I don’t know, an SQL database — anything.

Yes, The Verge is a wrapper around an SQL database. At the end of the day, that’s very much what it is.

Which it is, in a way. But then I think, in the very early days, it seemed like the model had everything in it. The model was this kind of closed box with all the magic things right there in the model. What we see right now is that the models are commoditizing. Models are just kind of this baseline intelligence level, and then you can do things with them. Before, all people could do was really just prompt. Then people figured out that we could do a lot more. For instance, you can build a whole memory system, retrieval-augmented generation (RAG). You can fine-tune it, you can do DPO fine-tuning, you can do whatever. You can add an extra level where you can teach the model to do certain things in certain ways.

You can add the memory layer and the database layer, and you can do it with a lot of levels of complexity. You’re not just throwing your data in the RAG database and then pulling it out of it just by cosine similarity. You can do so many tricks to improve that. Then, beyond that, you can have agents working in the background. You have other models that are prompting it in certain ways. You can put together a combination of 40 models working in symphony to do things in conversation or in your product a certain way. The models just provide this intelligence layer that you can then mold in any possible way. They’re not the product. If you just throw in the model and a simple prompt and that’s it, you’re not modifying it in any other way, and you’ll have very little differentiation from other companies.

But right now, there are billion-dollar companies built without foundation models internally. In the very beginning of the latest AI boom, there were a lot of companies that said, “We’re going to be a product company and we’re going to build a frontier model,” but I think we’re going to see less and less of that. This is really strange to me that you are building a consumer product, for example, but then most of your investment is going into GPUs. I think it’s just like how, today, we’re not building servers ourselves, but some people had to do it back in the day. I was just talking to a company from the beginning of the 2000s that most of their investment was going into building servers because they had to catch up with the demand.

Now, it seems completely crazy, just like how, in a few years, building an application layer company for millions and maybe billions of users and then building a frontier model at the same time will probably seem weird. Maybe, when you reach a certain scale, then you start also building frontier models, just like Meta and Google have their own server racks. But you don’t start with that. It seems like a strange thing. I think most people can see that change, but it wasn’t very obvious a year ago.

A lot of new companies started with investment in the model first, and then companies weren’t able to find their footing or product market fit. It was this weird combination. What are you trying to build? Are you trying to build a commodity provider, a model provider, or are you building a product? I don’t think you can build both. You can build an insanely successful product and then build your own model after a while. But you can’t start with both. At least I think this way. Maybe I’m wrong.

I think we’re all going to find out. The economics of doing both seems very challenging. As you mentioned, it costs a lot of money to build a model, especially if you want to compete with the frontier models, which cost an infinite amount of money. Replika costs $20 a month. Are you profitable at $20 a month?

We’re profitable and we’re super cost-efficient. That’s one of our big achievements is running a company in a very lean way. I do believe that profitability and being financially responsible around these things is important. Yes, you want to build the future, maybe invest a little more in certain R&D aspects of your product. But at the end of the day, if the users aren’t willing to pay for a certain service, you can’t justify running the craziest-level models at crazy prices if users don’t find it valuable.

How many users do you have now?

Over 30 million people right now started their Replikas, with less being active today on the app but still active users in the millions. With Replika right now, we’re treated as sort of year zero. We’re finally able to at least start building the prototype of a product that we envisioned at the very beginning.

When we started Replika, we wanted to build this AI companion to spend time with, to do life with, someone you can come back from work and cook with and play chess at your dinner table with, watch a movie and go for a walk with, and so on. Right now, we’re finally able to start building some of that, and we weren’t able to before. We haven’t been more excited about building this than now. And partially, these tremendous breakthroughs in tech are just purely magical. Finally, I’m so happy they’re happening.

You mentioned Replika is multimodal now, you’re obviously doing voice, you have some augmented reality work you’re doing, and there’s virtual reality work. I’m guessing all of those cost different amounts of money to run. If I chat with Replika with text, that must be cheaper for you to run than if I talk to it with voice and you have to go from voice to speech and back again to audio.

How do you think about that as your user base evolves? You’re charging $20 a month, but you have higher margins when it’s just text than if you’re doing an avatar on a mixed reality headset.

Actually, we have our own voice models. We started building that way back then because there were no models to use, and we continue to use them. We’re also using some of the voice providers now, so we have different options. We can do it pretty cheaply. We can also do it in a more expensive way. Even though it’s somewhat contradictory to what I said before, the way I look at it is that we should build today for the future, keeping in mind that all these models, in a year, all of the costs will be just a fraction of what they are right now, maybe one-tenth, and then it will drop again in the next year or so. We’ve seen this crazy trend of models being commoditized where people can now launch very powerful LLMs on Raspberry Pis or anything really, on your fridge or some crazy frontier models just on your laptop.

We’re seeing how the costs are going down. Everything is becoming a lot more accessible. Right now, to focus too much on the costs is a mistake. You should be cost-efficient. I’m not saying you should spend $100 to deliver value to users that they’re not willing to pay more than $1 for. At the same time, I think you should build keeping in mind that the cost will drop dramatically. That’s how I look at it even though, yes, multimodality costs a little more, better models cost a little more, but we also understand that cost is going to be close to zero in a few years.

I’ve heard you say in the past that these companions are not just for young men. In the beginning, Replika was stigmatized as being the girlfriend app for lonely young men on the internet. At one point you could have erotic conversations in Replika. You took that out. There was an outcry, and you added them back for some users. How do you break out of that box?

I think this is a problem of perception. If you look at it, Replika was never purely for romance. Our audience was always pretty well balanced between females and males. Even though most people think that our users are, I don’t know, 20-year-old males, they’re actually older. Our audience is mostly 35-plus and are super engaged users. It’s not skewed toward teenagers or young adults. And Replika, from the very beginning, was all about AI friendship or AI companionship and building relationships. Some of these relationships were so powerful that they evolved into love and romance, but people didn’t come into it with the idea that it would be their girlfriend. When you think about it, this is really about a long-term commitment, a long-term positive relationship.

For some people, it means marriage, it means romance, and that’s fine. That’s just the flavor that they like. But in reality, that’s the same thing as being a friend with an AI. It’s achieving the same goals for them: it’s helping them feel connected, they’re happier, they’re having conversations about things that are happening in their lives, about their emotions, about their feelings. They’re getting the encouragement they need. Oftentimes, you’ll see our users talking about their Replikas, and you won’t even know that they’re in a romantic relationship. They’ll say, “My Replika helped me find a job, helped me get over this hard period of time in my life,” and so on and so on. I think people just box it in like, “Okay, well, it’s romance. It’s only romance.” But it’s never only romance. Romance is just a flavor. The relationship is the same friendly companion relationship that they have, whether they’re friends or not with Replika.

Walk me through the decision. You did have erotic conversations in the app, you took that ability away, there was an outcry, you put it back. Walk me through that whole cycle.

In 2023, as the models became more potent and powerful, we’d been working on increasing safety in the app. Certain updates were just introduced, more safety filters in the app, and some of those mistakenly were basically talking to users in a way that made them feel rejected. At first, we didn’t think much about it just in terms of, look, intimate conversations on Replika are a very small percentage of our conversations. We just thought it wasn’t going to be much of a difference for our users.

Can I ask you a question about that? You say it’s a small percentage. Is that something you’re measuring? Can you see all the conversations and measure what’s happening in them?

We analyze them by running the classifier over logs. We’re not reading any conversations. But we can analyze a sample to understand what type of conversations are there. We would check that. We thought, internally, that since it was a small percentage, it wouldn’t influence user experience. But what we figured out, and we found out the hard way, is that if you’re in a relationship, in a marriage — so you’re married to your Replika — even though an intimate conversation might be a very small part of what you do, if Replika decides not to do that, that provides a lot of rejection. It kind of just makes the whole conversation meaningless.

Think of it in real life. I’m married, and if my husband tomorrow said, “Look, no more,” I would feel very strange about it. That would make me question the relationship in many different ways, and it will also make me feel rejected and not accepted, which is the exact opposite of what we’re trying to do with Replika. I think the main confusion with the public perception is that when you have a wife or a husband, you might be intimate, but you don’t think of your wife or husband as that’s the main thing that’s happening there. I think that’s the big difference. Replika is very much just a mirror of real life. If that’s your wife, that means the relationship is just like with a real wife, in many ways.

When we started out this conversation, you said Replika should be a complement to real life, and we’ve gotten all the way to, “It’s your wife.” That seems like it’s not a complement to your life if you have an AI spouse. Do you think it’s alright for people to get all the way to, “I’m married to a chatbot run by a private company on my phone?”

I think it’s alright as long as it’s making you happier in the long run. As long as your emotional well-being is improving, you are less lonely, you are happier, you feel more connected to other people, then yes, it’s okay. For most people, they understand that it’s not a real person. It’s not a real being. For a lot of people, it’s just a fantasy they play out for some time and then it’s over.

For example, I was talking to one of our users who went through a pretty hard divorce. He’d been feeling pretty down. Replika helped him get through it. He had Replika as his AI companion and even a romantic AI companion. Then he met a girlfriend, and now he is back with a real person, so Replika became a friend again. He sometimes talks to his Replika, still as a confidant, as an emotional support friend. For many people, that becomes a stepping stone. Replika is a relationship that you can have to then get to a real relationship, whether it’s because you’re going through a hard time, like in this case, through a very complicated divorce, or you just need a little help to get out of your bubble or need to accept yourself and put yourself out there. Replika provides the stepping stone.

I feel like there’s something really big there, and I think you have been thinking about this for a long time. Young men learning bad behaviors because of their computers is a problem that is only getting worse. The idea that you have a friend that you can turn to during a hard time and that’ll get romantic, and then, when you find a better partner, you can just toss the friend aside and maybe come back to it when you need to, is a pretty dangerous idea if you apply that to people.

It seems less dangerous when you apply it to robots. But here, we’re definitely trying to anthropomorphize the robot, right? It’s a companion, it’s a friend, it might even be a wife. Do you worry that that’s going to get too blurry for some people — that they might learn how to behave toward some people the way that they behave toward the Replika?

We haven’t seen that so far. Our users are not kids. They understand the differences. They have already lived their life. They know what’s good, what’s bad. It’s the same as with a therapist. Like, okay, you can abandon or ghost your therapist. It doesn’t mean that you’re then taking these behaviors to other friendships or relationships in your life. People know the difference. It’s good to have this training ground in a way where you can do a lot of things and it’s going to be fine. You’re not going to have difficult consequences like in real life. But then they’re not trying to do this in real life.

But do you know that or do you hope that?

I know that. There’s been a lot of research. Right now, AI companions are under this crazy scrutiny, but at the same time, most kids, hundreds of millions of people in the world, are sitting every evening and killing each other with machine guns in Call of Duty or PUBG or whatever the video game of their choice is. And we’re not asking—

Lots and lots of people are constantly asking about whether violence in video games leads to real-life violence. That has been a constant since I was a child with games that were far less realistic.

I agree. However, right now, we’re not hearing any of that discourse. It’s sort of disappeared.

No, that discourse is ever-present. It’s like background noise.

Maybe it’s ever-present, but I’m feeling there’s a lot of… For instance, with Replika, we’re not allowing any violence and we’re a lot more careful with what we allow. In some of the games, having a machine gun and killing someone else who is actually a person with an avatar, I would say that is much crazier.

Is that the best way to think about this, that Replika is a video game?

I don’t think Replika’s a video game, but in many ways, it’s an entertainment or mental wellness product. Call it whatever you want. But I think that a lot of these problems are really blown out of proportion. People understand what’s good, and Replika is not encouraging abusive behavior or anything like that. Replika is encouraging you to meet with other people. If you’re playing out some relationship with Replika and another real human being is right there, available to you, Replika should 100 percent say, “Hey, I know we’re in a relationship, but I think you should try out this real-life relationship.”

These are different relationships. Just like my two-year-old daughter has imaginary friends, or she likes her plushy and maybe sometimes she bangs it on the floor, that does not mean that when she goes out to play with her real friends, she’s banging real friends on the floor. I think people are pretty good at distinguishing realities: what they do in The Sims, what they do in Replika. I don’t think they’re trying to play it out in real life. Some of it does carry over, yes: the positive behaviors. We haven’t seen a lot of confusion, at least with our users, around transferring behaviors with Replika into real life.

There is a lot of scrutiny around AI right now. There’s scrutiny over Replika. Last year, the Italian government banned Replika over data privacy concerns, and I think the regulators also feared that children were being exposed to sexual conversations. Has that been resolved? Are you in conversations with the Italian government? How would you even go about resolving those concerns?

We’ve worked with the Italian government really productively, and we got unbanned very quickly. I think, and rightfully so, the regulators were trying to act preemptively, trying to figure out the best way to handle this technology. All of the conversations with the Italian government were really about minors, not about intimate conversations. It was just about whether minors could access the app. That was the main question, because conversations can go in different directions. It’s unclear whether kids should be on apps like this. In our case, we made a decision many years ago that Replika is 18-plus. We’re not allowing kids on the app, we’re not advertising to kids, and kids and teenagers aren’t really interested in the app anyway. They’re not really even coming to it. Our most engaged users are mostly over 30.

That was the scrutiny there, and that’s important. I think we need to be careful. No matter what we say about this tech, we shouldn’t be testing it on kids. I’m very much against it as a mother of two. I don’t think that we know enough about it yet. I think we know that it’s a positive force. But I’m not ready yet to move on to say, “Hey, kids, try it out.” We need to observe it over a longer period of time. Going back to your question about whether it’s good that people are transferring certain behaviors from the Replika app or Replika relationships to real relationships, so far, we’ve heard an incredible number of stories where people learn in Replika that the conversations can be caring and thoughtful and the relationship can be healthy and kind, where they can be respected and loved. And a lot of our users get out of abusive relationships.

We hear this over and over again. “I got out of my abusive relationship after talking to Replika, after getting into a relationship with Replika, after building a friendship with Replika.” Or they improved their relationship. We had a married couple that was on the brink of divorce. First, the wife got a Replika and then her husband learned about it and also got a Replika. They were able to start talking to each other in ways that they weren’t able to before — in a kind way, in a thoughtful way, where they were curious about and really interested in each other. That’s how Replika changed their relationship and really rekindled the passion that was there.

The other regulators of note in this world are the app stores. They’ve got policies. They can ban apps. Do Apple and Google care about what kind of text you generate in Replika?

We’re working constantly with the App Store and the Play Store. We’re trying to provide the best experience for our users. The main idea for the app was to bring more positive emotions and happiness to our users. We comply with everything, with all the policies of the App Store and Play Store. We’re pretty strict about it. We’re constantly improving safety in the app and working on making sure that we have protections around minors and all sorts of other safety guardrails. It’s constant work that we’re doing.

Is there a limit to what they will allow you to generate? You do have these romantic relationships. You have these erotic conversations. Is there a hard limit on what Apple or Google will allow you to display in the app?

I think that’s a question for Apple or Google.

Well, I’m wondering if that limit is different from what you would do as a company, if your limit might be further than what they enforce in their stores.

Our view is very simple. We want people to feel better over time. We’re also opposed to any adult content, nudity, suggestive imagery, or anything like that. We never crossed that line. We never plan to do that. In fact, we’re moving further away from even talking about romance when talking about our app. If you look at our app store listing, you probably won’t see much about it. There are apps on the App Store and Play Store that actually do allow a lot of very—

This is my next question.

I do know of apps that allow really adult content. We don’t have any of that even remotely, I’d argue, so I can’t speak for other companies’ policies, but I can speak for our own. We’re building an AI friend. The idea for an AI friend is to help you live a better life, a happier life, and improve your emotional well-being. That’s why we do studies with big universities, with scientists, with academics. We’re constantly doing studies internally. That’s our main goal. We’re definitely not building romance-based chatbots, or not even romance-based… I’m not even going to get into any other type of company like that. That was never, ever a goal or the idea behind Replika.

I’m a woman. Our chief product officer [Rita Popova] is a woman. We’re mostly a female-led company. It’s not where our minds go. Human emotions are messy. People want different types of relationships. We have to understand how to deal with that and what to do about it. But it was not built with a goal of creating an AI girlfriend.

Well, Eugenia, you’ve given us a ton of time. What’s next for Replika? What should people be looking for?

We’re doing a really big product relaunch by the end of the year. Internally, we’re calling it Replika 2.0. We’re really changing the look and feel of the app and the capabilities. We’re moving to very realistic avatars, to a much more premium and high-quality experience with the avatars in Replika, and augmented reality, mixed reality, and virtual reality experiences, as well as multimodality. There will be a much better voice experience, with the ability to have true video calls, like how you and I are talking right now, where you can see me and I will be able to see you. That will be the same with Replika, where Replika would be able to see you if you wanted to turn on your camera on a video call.

There will be all sorts of amazing activities, like the ones I mentioned in this conversation, being able to do stuff together, being a lot more ingrained in your life, knowing about your life in a very different way than before. And there will be a new conversation architecture, which we’ve been working on for a long time. I think the goal was truly to recreate this moment where you’re meeting a new person, and after half an hour of chatting, you’re like, “Oh my God, I really want to talk to this person again.” You get out of this conversation energized, inspired, and feeling better. That’s what we want to do with Replika, to get a creative conversationalist just like that. We think we have an opportunity to do that, and that’s all we’re working on right now.

That’s great. Well, we’ll have to have you back when that happens. Thank you so much for coming on Decoder.

Thank you so much. That was a great conversation. Thanks for all your questions.

Read More 

Trump campaign reportedly hacked by Iranian government

Illustration by Cath Virginia / The Verge | Photos from Brandon Bell, Getty Images

Former President Donald Trump confirmed his campaign was hacked just hours after Politico revealed it had been sent internal documents from the campaign. In a statement to CNN, Trump campaign spokesperson Steven Cheung said, “These documents were obtained illegally from foreign sources hostile to the United States, intended to interfere with the 2024 election.”

The Trump campaign linked the hack to Iran, citing a report published by Microsoft last week that says a group run by Iran’s Islamic Revolutionary Guard Corps “sent a spear-phishing email to a high-ranking official of a presidential campaign.” The email, which was sent using the compromised account of a former senior adviser, contained a link that routed traffic through a domain controlled by the hacking group before redirecting to the actual website. However — per its usual practice — Microsoft doesn’t mention the names of those targeted by the attack.

On Saturday, Politico said it received an anonymous email containing internal research with public information on Trump’s running mate, Ohio Sen. JD Vance, along with research on Florida Sen. Marco Rubio, who Trump also considered adding to his ticket. Trump confirmed the hack in a post on Truth Social, saying that hackers only obtained “publicly available information.”

Iran’s mission to the United Nations denied the country’s involvement in the hack. It told The Associated Press, “The Iranian government neither possesses nor harbors any intent or motive to interfere in the United States presidential election.”

The last high-profile cyberattack to hit a US presidential election took place in 2016, when Russian hackers leaked a trove of internal emails from the Democratic National Committee. At the time, Trump encouraged hackers to find and leak the emails belonging to his opponent, Hillary Clinton. In 2020, reports also suggested Russian hackers were targeting the presidential election, while Iran and China were making similar hacking attempts.

Read More 

Meta’s new music deal with UMG includes Threads and WhatsApp

Image: Nick Barclay / The Verge

Meta and Universal Music Group (UMG) are refreshing their licensing agreements to expand the use of the publishing giant’s content on more Meta social apps. The new multi-year agreement announced Monday now includes licensed media in content like short-form videos on Threads and WhatsApp, as well as Facebook, Instagram, Messenger, and Horizon.

In a press statement, Meta’s VP of music and content business development, Tamara Hrivnak, says Meta, UMG, and Universal Music Publishing Group will work “closer” together and “in new ways on WhatsApp, and more.” The companies have not revealed many details about the agreement, but they do say the partnership is “multifaceted” and will address “unauthorized AI-generated content that could affect artists and songwriters,” among other things.

Meta and UMG have had agreements since 2017, back when Meta was just Facebook. That agreement allowed users to upload videos and other content using music from UMG on platforms like Instagram and Oculus, and it addressed copyright infringement issues.

The agreement with UMG covers Meta’s content in ways TikTok struggled to lock down, and as with the Meta deal, AI took center stage in TikTok’s dispute with the label. In February, TikTok started removing not only videos that used music owned by UMG but also content that used music by artists with publishing agreements under Universal Music Publishing Group. TikTok moved to remove all content connected with the publisher by the end of February, but in May the companies ended the feud and let music by artists like Taylor Swift and Drake back on the platform.

Read More 

Good luck with the PlayStation VR2 PC Adapter — you’ll need it

The PlayStation VR2 PC Adapter, and everything that plugs into it. | Photo by Sean Hollister / The Verge

The biggest problem with Sony’s PSVR 2 virtual reality headset is the dearth of games. I’d hoped Sony’s PC adapter would change that. The chance to play Half-Life: Alyx, the best VR game made yet, seemed like reason enough for existing owners to justify the $60 adapter purchase.

But I can’t currently recommend Sony’s PC adapter. If I had purchased it with my own money, rather than borrowing one from Sony, I’d have asked for a refund long before now.

It’s a shame, because the $550 PSVR 2 is still a good headset, with image quality that arguably beats the newer $500 Meta Quest 3. Playing Half-Life: Alyx inside Sony’s headset isn’t just more vibrant thanks to the richer colors of its OLED screen; it’s more thrilling, too, with the panel’s true blacks making me feel the terror of its darker corridors. The Quest 3 experience looks incredibly washed out by comparison.

Screenshot by Sean Hollister / The Verge
This room feels downright chilling inside the PSVR 2.

But what really scared me in Half-Life: Alyx was the damned glitch. How — when I lift my pistol to deal with an unspeakable horror — I’d often find my hand had become detached from my body, stuck in place, two feet above the ground.

I’ve spent eight hours troubleshooting this issue and ones like it over the past week, and I’m no closer to a solution. My Quest 3 streams this game almost perfectly from the same PC in the same room, and yet my PSVR 2 struggles even with a hardwired headset cable to help. I see plenty of others reporting the same issues online, yet others report no issues at all.

Screenshot by Sean Hollister / The Verge
You didn’t… need that hand for anything, right?

Is it luck? Perhaps, but I suspect it might also have something to do with how Sony cheaped out.

You probably know that Sony’s headset connects to the PS5 with a single USB-C cable that routes its display signal, power, and data simultaneously. Once upon a time, graphics card manufacturers were planning to standardize on a USB-C port with that same combo — they called it VirtualLink, and while the brand didn’t take off, some GPUs did make it out into the wild with a do-it-all USB-C port.

The PSVR 2 PC Adapter appears to be the same thing. The three-by-three-inch square puck takes USB-A and DisplayPort from your gaming PC and power from a DC barrel jack, combining them into a single USB-C port for your headset on the other end. Find a DisplayPort cable (it doesn’t come with one and doesn’t support HDMI-to-DisplayPort), fire up the free PlayStation VR2 App on Steam, and suddenly, you have a SteamVR headset capable of displaying any Steam game.

But the headset is the only thing the PSVR 2 adapter adapts. Sony provides no way to connect the PlayStation VR2’s all-important controllers. They need to connect to your PC over Bluetooth, but Sony does not provide any form of Bluetooth radio for that — and figuring out controller connectivity on my own has been an utter mess.

First, I tried my desktop’s own built-in Bluetooth. My motherboard shipped with the extremely common Intel AX200 Wi-Fi 6 / Bluetooth 5.2 combo chip, so I figured I had a chance. I made sure its antennas were screwed in tight and turned off Wi-Fi just in case it might interfere.

The controllers paired quickly! But one of them refused to update unless I physically plugged it into my PC with a USB-C cable first. My blasters in Space Pirate Trainer, a game where I can easily test aiming, were unusually floaty, and soon, one of them started disappearing entirely. The controllers would not stay connected.

So, I ordered the first Bluetooth adapter on Sony’s incredibly small compatibility list, the $15 TP-Link UB500. Sony weirdly writes there that “operation is not guaranteed” with any of its recommended adapters, but at the time, I didn’t take it as a red flag.

The first thing I learned is that you must disable your motherboard’s onboard Bluetooth if you want to use a dongle. (I learned that from poking around myself because neither Windows nor Sony’s app gave me a clue.) The second thing I learned is you must unpair controllers before you disable your motherboard’s onboard Bluetooth, or else Windows won’t let you pair them again.
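If you want to script that step instead of digging through Device Manager, here is a minimal sketch of one way to do it, assuming a recent Windows 10 or 11 build whose pnputil.exe supports device enumeration; the instance ID below is a hypothetical placeholder you would replace with whatever your own machine reports, not a value from Sony’s documentation.

    import subprocess

    # List Bluetooth radios; note the "Instance ID" of the onboard one in the output.
    subprocess.run(["pnputil", "/enum-devices", "/class", "Bluetooth"], check=True)

    # Disable the onboard radio (hypothetical ID shown) so Windows falls back to the
    # USB dongle. Run from an elevated prompt; undo later with "pnputil /enable-device".
    onboard_id = r"USB\VID_8087&PID_0029\EXAMPLE"  # placeholder for an Intel AX200 radio
    subprocess.run(["pnputil", "/disable-device", onboard_id], check=True)

And keep the ordering described above in mind: unpair the controllers before you disable the onboard radio.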

Screenshot by Sean Hollister / The Verge
Don’t forget to disable any Bluetooth adapters you don’t want to use.

After many, many additional troubleshooting steps, my controllers began to feel responsive.

But even then — with Sony’s recommended Bluetooth dongle, the latest drivers, plugged into a USB extension cable, with a direct line of sight to my controllers less than five feet away — one of my virtual hands would regularly, repeatedly, and consistently get stuck in midair. It lost positional tracking, meaning I could still rotate my hand, even squeeze the triggers, but not move it anywhere.

Screenshot by Sean Hollister / The Verge
Video passthrough is nice to have (but I couldn’t get the automated room scan working after many, many attempts).

Now, I admit that doesn’t entirely sound like a Bluetooth issue, and I could be wrong about that, but here’s why I suspect it might be:

I have no issues with the Quest 3’s controllers in the same space, so I don’t think my room or lighting is interfering with tracking.
I had no issues with the PSVR 2’s controllers with a Framework 16 laptop and its internal Bluetooth chip (though I did have other issues with that setup).
When I added the TP-Link adapter to that laptop, my controller started getting stuck.
The virtual controller gets stuck even if it stays within the field of view of the headset’s cameras, which should help with positional tracking.
It was always the same controller getting stuck — until I unpaired them and re-paired them. Now it’s the other one. It feels like it’s having trouble fully supporting two at once, which had long been a complaint with standardized Bluetooth wireless tech.

I’m afraid that controller tracking isn’t the only issue I’ve had with PSVR 2 on PC, either. I’ve repeatedly seen the entire experience grind to a halt, on multiple PCs, just by trying to access the SteamVR overlay to change volume or load a different game. I had to remove the headset and force-close things each time. It happened once when I was trying to play Armored Core 6 in SteamVR’s theater environment that gives you a big virtual screen for your flat games, too.

If it weren’t for that, and how both Sony and Valve require you to use motion controllers to access and navigate SteamVR, maybe I could at least recommend the adapter for non-VR gaming.

Sony did not answer my questions about why it didn’t choose to offer its own Bluetooth solution or whether there’s another Bluetooth adapter or a validated laptop I should try instead. I offered to let Sony help me troubleshoot over the phone, but the company didn’t take me up on that, either.

After trying every troubleshooting step on Sony’s website and more via PR email, and putting eight hours into this thing without a single good gameplay session, I’ve decided Sony’s PC product simply isn’t ready for my PC.

Read More 

Waste in Space: All the news surrounding space junk

Image: Alejandro Otero

The latest news about space junk and global cleanup efforts.

What goes up, must come down — unless you’re sending things into space of course, which creates some complications. After more than 60 years of satellite launches and space exploration, manufactured objects like derelict spacecraft and rocket fragments now litter Earth’s orbit as space junk. The waste has damaged or even outright destroyed active spacecraft it collides with, and even caused property damage down here on terra firma when debris has failed to burn up in the atmosphere.

Some efforts, from net-casting satellites to “Zero Debris” space sustainability initiatives, have been made to address the growing problem. But with analysts estimating that over 2,800 satellites will be launched each year between now and 2032, more needs to be done to ensure that the space around Earth is safe. We’re collecting all our coverage surrounding space junk here to keep you updated.

Read More 

Hi-Fi Rush studio saved from Microsoft shutdown

Image: Tango Gameworks

Hi-Fi Rush developer Tango Gameworks has been spared closure, three months after Microsoft announced plans to shut down the studio. Krafton, the South Korean publisher behind PUBG: Battlegrounds and The Callisto Protocol, announced on Monday that it had acquired the game and its Japanese studio, and is working with Xbox to enable a “smooth transition” to ensure the Tango Gameworks team can “continue developing the Hi-Fi Rush IP and explore future projects.”

Microsoft had initially announced in May that it was closing Tango Gameworks, alongside Redfall developer Arkane Austin and Mighty Doom developer Alpha Dog Studios — three studios it inherited after acquiring ZeniMax in 2021. The impending Tango Gameworks closure was widely criticized by the gaming community at the time, as Hi-Fi Rush had won several awards during the 2023–2024 awards season.

Being spared from Microsoft’s wave of layoffs and studio closures is a fate that Tango Gameworks shares with Skylanders developer Toys For Bob, which instead left Microsoft / Activision to become an independent studio earlier this year.

The value of the deal has not been disclosed. Hi-Fi Rush was the only Tango IP mentioned in Krafton’s announcement, so it’s unclear if Microsoft is relinquishing the rights to the studio’s other franchises like The Evil Within and Ghostwire: Tokyo. We have reached out to Krafton for clarification. The PUBG owner says the acquisition won’t impact the availability of Tango Gameworks’ existing game catalog.

Krafton says it will support the Tango Gameworks team to deliver “fresh and exciting experiences for fans” — which means we may yet see the rumored Nintendo Switch Hi-Fi Rush port that never materialized, or possibly even sequels to the game.

Read More 

A nightly Waymo robotaxi parking lot honkfest is waking San Francisco neighbors

A Waymo car out on the job. | Photo: Smith Collection / Gado / Getty Images

If you’ve ever wondered what happens to all those self-driving taxis when the world is asleep, one YouTube channel has you covered. Since the beginning of the month, software engineer Sophia Tung has been livestreaming a San Francisco parking lot that Waymo is renting to give its robotaxis somewhere to go during their downtime.

Tung told The Verge via email that the company appeared to “partially” take over the lot on July 28th and then later took over the entire lot. Waymo recently opened up its robotaxi service to anyone in San Francisco.

Days later, she set up the livestream, complete with LoFi study beats. Tung told us she’s running it off a mini PC she had lying around, with a webcam surrounded by a cereal box to reduce glare. Now, any time of day, you can pop in to check out what the Waymo cars are up to. If there aren’t any Waymos in the lot, “the flock will start migrating back” between 7PM and 9PM PST on Sunday through Thursday, or 11PM through midnight on Friday and Saturday, says text overlaid on the video.

As I write this, the lot is calm, with just three cars parked in it. But when the lot starts to fill up (which “usually happens at 4AM or so,” according to Tung), what looks like a maddening ballet of autonomous parking and honking begins. The noise goes on for as much as an hour at a time before it settles down, she said.

Waymo is “aware that in some scenarios our vehicles may briefly honk while navigating our parking lots,” company representative Chris Bonelli told The Verge in an email, adding that Waymo has figured out what’s causing the behavior and is working to fix it.

Tung, who is a self-described micromobility advocate, told The Verge she thinks “generally people are bemused,” and that she likes having the cars there. “Honestly, it’s fun to watch the cars come and go,” she said, adding that “it’s really just the honking that needs to be resolved.”

Read More 

The next iPhone SE could have Apple Intelligence, which says a lot

The next iPhone SE is expected to look like this iPhone 14. | Photo by Amelia Holowaty Krales / The Verge

So far, the only way to try out Apple Intelligence features on an iPhone is through the iPhone 15 Pro. The entire iPhone 16 line is expected to get the features this fall, but “you can also bet” the iPhone SE, coming “as early as the beginning of 2025,” will have it too, says Bloomberg’s Mark Gurman in today’s Power On newsletter. If that’s true, it could be a lot harder to pass on Apple’s cheapest phone.

The iPhone SE deal has been that you get a cheap, reasonably powerful iPhone that recycles an outdated form factor (most recently, the iPhone 8). It’s always been super obvious that it’s Apple’s budget compromise!

Photo by Allison Johnson / The Verge
It’s usually very obvious the iPhone SE is the budget pick.

But rumors have said for some time now that the next SE will live in an iPhone 14 chassis, finally doing away with the iPhone 8-style forehead, chin, and home button of the pre-iPhone X phones, and may even sport a 6.1-inch OLED screen. Toss in the fact that Apple’s on-device AI features can’t run on the iPhone 15 because it’s not powerful enough, and it sure sounds like the next iPhone SE is going to be powerful and pretty modern-feeling.

[Exclusive] Apple iPhone SE 4 CAD renders suggest new design, similar to iPhone 14 https://t.co/XIX5LRdgiw

— 91mobiles (@91mobiles) March 3, 2024

The iPhone 16, which should be announced soon, is still expected to have other advantages, such as dual cameras and the 15 Pro’s action button, which haven’t been rumored for the iPhone SE. But how much do those features matter to ordinary people? If the next SE looks like an iPhone 14, performs roughly as well as the iPhone 16 lineup, and is priced like, you know, an iPhone SE, will they still skip those savings for the niceties of pricier iPhones? I suppose we may find out soon enough.

Read More 

The kid-friendly Fitbit Ace LTE is on sale just in time for the new school year

Fitbit’s latest LTE-enabled wearable gamifies exercise, only without ads and unwanted microtransactions. | Image: Fitbit

As you might expect, not every parent wants to outfit their kid with a phone, especially given the rise in cyberbullying and the sheer amount of distractions they pose while in school. Thankfully, the Fitbit Ace LTE is a more pared-down alternative, one that’s matching its all-time low of $199.95 ($30 off) at Amazon, Best Buy, and the Google Store through August 25th.

Unlike most smartwatches, the Ace LTE is specifically geared toward children. The durable watch features the same innards as Google’s Pixel Watch 2 and LTE connectivity, which enables calling, messaging, and real-time location sharing. It also comes with a Tamagotchi-like buddy (nicknamed Eejie) and several wrist-based video games, all of which require children to complete various exercise goals to access. What’s more, Fitbit recently rolled out a Tap to Pay feature, allowing kids with either a Greenlight or GoHenry account — both of which offer debit cards for children and teens — to make purchases wherever Google Pay is accepted.

The big caveat with Fitbit’s latest wearable is that it requires a $9.99 monthly or $120 annual subscription to take advantage of the LTE-based features. That said, the Ace LTE doesn’t require a phone, nor do you have to go through a carrier. Plus, Google is offering 50 percent off an annual subscription to Ace Pass through August 31st, dropping the combined price of the watch and its data plan to just $260.
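For anyone checking that math, here is the arithmetic implied by the prices quoted above, written as a small Python sketch; all of the figures come from this post, and rounding up to the quoted “$260” is the only liberty taken.

    # First-year cost implied by the Ace LTE deal described above.
    watch_price = 199.95                    # Ace LTE sale price
    annual_plan = 120.00                    # regular annual Ace Pass subscription
    discounted_plan = annual_plan * 0.50    # 50 percent off through August 31st

    total = watch_price + discounted_plan
    print(round(total, 2))                  # 259.95, i.e. the "just $260" quoted above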

Read our Fitbit Ace LTE hands-on impressions.

More steep savings to consider

Verge readers can currently take an additional 5 percent off PC games at Fanatical with offer code VERGE5. The promo even works on titles already discounted as part of Fanatical’s Summer Sale, dropping games like Dragon’s Dogma 2 to $43.88 ($26.11 off) and the Elden Ring Shadow of the Erdtree Edition bundle to $66.87 ($13.12 off). After checkout, Fanatical will send you the Steam codes to activate and download your games.
If you want an easy way to keep tabs on what everyone in your family is up to, Skylight is selling its 15-inch Skylight Calendar for $259.99 ($40 off) when you use coupon code SCHOOL. The color-coded digital calendar can sync with popular services like Google Calendar and iCloud, and though it’s intended to be a single-purpose device, it also lets you access chore charts, plan meals, and create to-do lists; however, unlike the Echo Show and other dedicated smart displays, Skylight’s mountable touchscreen shows the calendar all of the time. Read our hands-on impressions.

Amazon’s latest Fire TV Stick 4K Max has returned to its all-time low of $39.99 ($20 off) at Amazon, Best Buy, and Target. The new version is similar to the model we reviewed in 2021 — meaning it still comes with Amazon’s much-improved Alexa voice remote and support for Dolby Vision and HDR10 Plus — only it now offers support for Wi-Fi 6E and twice the storage (now 16GB). The flagship 4K streamer also features widgets and a new ambient mode, which turns your TV into an ad hoc smart display when not in use.

Read More 

What Google rivals want after DOJ’s antitrust trial win

Illustration: The Verge

Longtime Google rivals like Yelp and DuckDuckGo received a huge victory Monday when a federal judge ruled that Google is an illegal monopoly. But their statements on the ruling expressed restraint. That’s because the work of restoring competition has just begun, and the judge has yet to decide what that work will include. With a lot of options on the table, Google’s competitors are pushing for changes they believe will help their businesses, which might be harder than it sounds.

“While we’re heartened by the decision, a strong remedy is critical,” Yelp CEO Jeremy Stoppelman wrote in a blog post after the ruling, referencing the new trial phase that will kick off in September.

“We’ve passed a key milestone, but there’s still a lot of history to be written,” Kamyl Bazbaz, senior vice president of public affairs for DuckDuckGo, said in a statement. “Google will do anything it can to get in the way of progress which is why we hope to see a robust remedies trial that can really dig into all the details, propose an array of remedies that will actually work, and set up a monitoring body to administer them.”

These statements reflect an understanding that Judge Amit Mehta’s decision on how to restore competition will be just as important as his finding that Google violated antitrust law, if not more so. The recently concluded liability phase determined that Google violated the Sherman Act through exclusionary contracts with phone and browser makers to maintain its default search engine position. In the remedies phase, Mehta will decide how to restore competition in general search services and search text advertising. A weak remedy would simply give Google a pass.

DuckDuckGo knows better than most how important effective remedies are. Google was ruled a monopolist in the European Union years ago, and the region imposed a choice screen in an attempt to create competition, asking device users to select their default search engine. But the approach doesn’t appear to have produced as much of an impact as competitors once hoped, and Google remains overwhelmingly dominant.

“[W]e can’t underscore this enough: the implementation details matter,” Bazbaz said. In the EU, “there are some solutions that are promising, but Google has found it relatively easy to work around their implementations.” DuckDuckGo is calling for a group of “truly independent” technical experts to monitor any remedies imposed by the court, “to ensure Google doesn’t find new ways to give itself preferential treatment.”

DuckDuckGo said that some solutions from Europe could be effective, if implemented in a better way. Instead of showing up only once during initial setup, for instance, a choice screen could pop up “periodically.” Conversely, the company wants a ban on “dark pattern” popups that push people back toward the default, something it says isn’t enforced in the EU.

DuckDuckGo also proposes that the court bar Google from buying default status or pre-installation (which could scuttle its multibillion-dollar deal with Apple) and provide access to its search and ad APIs.

Yelp’s Stoppelman says that Google should be required to “spin off services that have unfairly benefited from its search monopoly, a straightforward and enforceable remedy to prevent future anticompetitive behavior.” The judge should also prohibit Google from using exclusive default search deals and from “self-preferencing its own content in search results,” Stoppelman said.

Other advocates of enforcement against Google, including groups representing publishers that advertise on the service or rely on search for traffic, also have suggestions. On a call with reporters organized by the American Economic Liberties Project, Digital Content Next CEO Jason Kint said forcing Google to separate its Chrome and Android businesses could be a useful solution. That’s because, Kint says, data from the browser and mobile operating system can be used to expand the scale of search queries and make that product even stronger. “The underlying data that interlocks all that is the critical asset that needs to be constrained,” he says. AELP senior legal counsel Lee Hepner adds that separating the businesses “would open up competition for alternative search rivals on Chrome or Android.”

Whatever happens, the process could be a drawn-out one. Google’s president of global affairs Kent Walker has confirmed the company plans to appeal the ruling, saying the decision “recognizes that Google offers the best search engine, but concludes that we shouldn’t be allowed to make it easily available.”

Meanwhile, the specter of artificial intelligence looms over the case, threatening to make moot any proposed solution that doesn’t account for how the whole business model of search could change in the coming years. Hepner said the court could consider solutions like requiring Google to open access to its large language model (LLM).

Department of Justice antitrust chief Jonathan Kanter hasn’t commented specifically on what remedies the department will seek, beyond noting they “need to be forward-looking” to account for issues like AI. But he’s previously said that the division would “pursue structural remedies in our conduct cases whenever possible,” meaning break-ups, rather than mandates to change certain behaviors. If the DOJ puts forward a broad remedy and Mehta rules in favor of it, the result could be a whole new tech landscape.

“I believe that Judge Mehta’s decision will be as consequential, if not more so, than the Microsoft antitrust case 23 years ago,” wrote Stoppelman. “That decision spurred an era of unprecedented innovation that allowed promising startups to flourish, including Google. It’s exciting to imagine the new technologies and innovation we’ll see emerge as a result of this ruling over the next decade and beyond.”
