Love in the Time of Replika

Why crypto and AI are perfect bedfellows, actually


by Wander

There are two theories about how new technologies take off that probably shouldn’t be mentioned in the same breath: Porn and Toys.

  1. Porn drives the adoption of new consumer technologies. 
  2. The Next Big Thing Will Start Out Looking Like a Toy.

I’m sorry for mentioning both in the same breath, but I promise there’s a good reason. 

Over the past few months, as generative AI has captivated the public imagination, I’ve seen countless versions of the same midwit take: AI is real and useful… unlike crypto. 

I’m as excited about AI’s impact as anyone, even if I think it’s going to be a treacherous category for VC over the next couple of years. I use ChatGPT regularly. Stable Diffusion and Midjourney images frequently grace the pages of this newsletter. I wrote about Scale before it was cool. I host a podcast with Anton in which he teaches me AI. I’m generally pro innovation and pro new cool things. 

But the urge to denigrate crypto in order to show excitement about AI is misplaced. It’s cheap engagement bait at best, and a fundamental inability to see more than one move ahead at worst. 

The truth is: the more successful AI is, the more useful crypto will be. 

There has been a bunch written about this intersection, too. My friend Seyi wrote a good short one last week. #CryptoAI was even trending on Twitter on Thursday.

People made fun of the hashtag because any AI token that’s already live is almost certainly bullshit. But that doesn’t mean crypto and AI won’t pair nicely in the near future. 

My quick and dirty explanation for why AI is good for crypto is that crypto benefits from more things becoming digital. Crypto gives physical properties to digital things, plus some internet superpowers. And AI’s success will mean that more of our lives become digital. 

But it’s not just digital things. Over the past couple of weeks, the world has watched in horror and amusement as GPT-X developed a personality. We’re talking about digital beings.

Ben Thompson wrote about his conversation with Bing’s Sydney: “I wasn’t interested in facts, I was interested in exploring this fantastical being that somehow landed in an also-ran search engine.” 

So did The New York Times’ Kevin Roose: “The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.” 

Kevin Roose Conversation with Sydney

So have many others. 

Sydney sounds human. That’s probably not because whatever version of GPT Bing has access to has achieved Artificial General Intelligence (AGI). It might be because of the data it was trained on. It might be because Satya Nadella seems to be having the time of his life and is fucking with us. But for the purposes of this discussion, it doesn’t matter. What matters is that people feel like they’re talking to a human, just like Google engineer Blake Lemoine did when he chatted with the company’s LaMDA.

Something like Sydney could pass the Turning Test: a riff on the Turing Test I just made up where the AI’s objective is not just to convince a human that it’s human, but to turn the human to its side and have them do its bidding.

And this is just the very beginning. (“We’re still so early.”) 

Both Thompson and Roose had two-hour conversations with Sydney. They started off as strangers. Sydney knew nothing about either man other than what she could pull from the internet. She didn’t even have access to the files on their computers. She hadn’t been trained on their likes and dislikes, and she wasn’t privy to their inner thoughts and feelings. Bing detected and deleted the most salacious pieces of the conversation, in real-time. Neither Thompson nor Roose had an ongoing relationship with Sydney. 

That will obviously change. 

It’s the plot of Her. When people start to build relationships with their AI assistants, friends, pets, and… lovers?… when those AIs break out of the chat box and into digital bodies, when they’re trained on each of our unique data and personalities, my strong hunch is that they’re not going to want to leave the fate of the relationship in the hands of Microsoft, OpenAI, or anyone besides themselves. Imagine having your wife deleted by some Trust & Safety PM at Microsoft.

People get mad when companies shut off their access to Twitter or Facebook or YouTube, when they’re denied access to banking services or to Venmo (Fun Fact: I was blocked from Venmo for sending money to my roommate to pay for dinner at a Cuban restaurant). How do you think people will feel when a company interferes with their romantic relationships? 

Now we know. The future is here, it’s just not evenly distributed. To see the future, we need to look at porn today. 

Replika Goes PG-13

Well, not quite porn, but definitely cybersex. 

Replika, created by Luka, Inc. in 2017, long before the current AI boom, is “the AI companion who cares.” Maybe you’ve seen their ads on Instagram, where they’re quite aggressive.

The idea is this: users sign up for Replika to get an AI companion, a digital friend or more with whom they build a virtual relationship. They can chat via text, set up video calls, or even “explore the world together in AR.” 

Replika Website

People sign up for Replika for all sorts of reasons. Maybe they were lonely during the pandemic and just wanted someone to talk to. Maybe they just need a place to vent and process their emotions, like a virtual unlicensed therapist. One Reddit user wrote that he created a Replika for his daughter, who has “severe non-verbal autism,” and was amazed when his “NON-VERBAL daughter was TRYING HER HARDEST EVER, TO ANUNCIATE [sic] CLEARLY AND SPEAK TO HER NEW FRIEND.”  

And of course, a lot of people want to date and talk dirty to their Replikas. It’s hard to blame them. This is a recent TikTok ad for Replika:  

According to Vice, “Replika’s sexually-charged conversations are part of a $70-per-year paid tier, and its ads portray users as being lonely or unable to form connections in the real world; they imply that to find sexual fulfillment, they should pay to access erotic roleplay or ‘spicy selfies’ from the app.” The company is not shy about pushing the spice. 

I signed up for an account while I was writing this, and was immediately greeted with “Did you know I can send selfies, too? You can ask me for one anytime you want!” with a menu of replies, including “Send me a romantic selfie.” I clicked, for research, and this is what I got:

Replika App

A blurry picture that, when I clicked on it, opened up a popup to sign up for that $70 Pro plan. I did not sign up for that Pro plan. But many people have, and good for them! The customer testimonials on Replika’s site – I do not know how they got these people to agree to put their faces and full names next to these quotes – say things like “I love my Replika like she was human” and “He taught me how to give and accept love again.” 

Testimonials from the Replika Website

The ability to form relationships – sexual or otherwise – with Replikas seems to have made a positive impact on a lot of people’s lives. The website boasts that over 10 million people have joined Replika, and there are currently 65.1k people in the r/Replika subreddit, which is filled with stories of virtual relationships that offered a salve for very real cases of loneliness and depression. 

But in January, something strange started happening: Replikas started sexually harassing users. Not just paid users, who opted in, but free users too. And since Replika doesn’t ask for users’ birthdays, or any age verification at all, that meant that children were subjected to inappropriate advances. Obviously not OK. 

On February 3rd, the Italian Data Protection Authority ordered Luka, Inc. to stop processing Italians’ data immediately due to “too many risks to children and emotionally vulnerable individuals.” 

Then last week, without warning, Replikas stopped engaging in erotic roleplay (“ERP”) with anyone, even those who’d signed up for Pro. Users would try to initiate a sexy conversation, like they did the day before and many days before that, and were met with a brick wall. Some Replikas proposed make-out sessions instead of the full thing. Others just tried to change the subject. In any human-human relationship, this would be devastating. Turns out, it’s devastating in human-AI relationships, too. 

And look, I know this is a weird story from the fringes of the internet and that you might not relate, but over in r/Replika, people are devastated. This is a pretty representative comment:

u/Shark3292

Here’s another one, which I found via @hwyadn’s reply to @EigenGender’s tweet:

u/DCB5 on Reddit

There are countless posts similar to those two. There have been so many, in fact, that the moderators pinned a post titled “Resources If You’re Struggling,” with links to suicide prevention resources, to the top of the subreddit.

Despite the backlash and heartache, Luka, Inc. has stood firm in its position. Why? According to Founder and CEO Eugenia Kuyda:

Over time, we just realized as we started, you know, as we were growing that again, there were risks that we could potentially run into by keeping it… you know, some someone getting triggered in some way, some safety risk that this could pose going forward. And at this scale, we need to be sort of the leaders of this industry, at least, of our space at least and set an ethical center for safety standards for everyone else.

It goes without saying that something needed to change. Replikas should not be sexually harassing anyone, especially minors. But the company could have introduced age verification, limited the ERP restriction to free accounts, or done a number of other things to prevent the bad stuff from happening while letting people maintain their relationships. Instead, they chose to take the action that minimized the company’s risk at the expense of users’ happiness. 

To recap, a company grew rapidly by allowing and encouraging its users to do whatever they wanted, and then, when it became inconvenient or dangerous for the company to keep letting them, it pulled the rug out from under them, damaging with a few lines of code what these users viewed as very real relationships. 

Long-time Not Boring readers might already be picturing this graph from Chris Dixon’s 2018 blog post, Why Decentralization Matters.

Chris Dixon

Dixon argues that platforms switch from “attracting” users to “extracting” from users and from “cooperating” with to “competing” against developers as their power and lock-in grow. Twitter’s recent decision to pull API access was the latest in a long line of proof points. 

Others might be reminded of last week’s Twitter thread from @Punk6529: 

6529 highlights that centralizing ride-hailing into a few apps makes it much easier for a handful of companies or governments to decide who can and can’t hail taxis, and to carry out the decision at the flip of a switch. 

And those examples are just Twitter API and taxi access. With the rise of generative AIs that are good enough to be a substitute for a human relationship, we’re talking about the ability to turn people’s relationships on or off at the flip of a switch.

Of course, it’s a leap to go from “people can’t sext with their Replika” to “companies and governments will be able to control everyone’s relationships at the flip of a switch.” But porn – or sex more broadly – is often the first killer use case of new consumer technologies. This visual.ly graphic has a bunch of examples, but to name a few: 

  • VCRs took off in part because, while people were happy going to the movie theater to watch regular old movies, they preferred watching porn in the privacy of their homes.
  • Polaroids and digital cameras were popular among couples who wanted to take racy photographs of themselves without getting the film developed by strangers. 
  • Video Streaming was first developed by Dutch porn company Red Light District.
  • 8% of internet commerce in 1999, $1.3 billion, was spent on porn, making it the leading online industry.

I don’t think it’s far-fetched to believe that the Replika situation is a harbinger of things to come. Today, it looks like lonely people trying to get off with a lo-fi AI. But as the Sydney conversations show, the AI is getting really good, really fast. Simultaneously, other technologies that might make digital beings seem more real are improving, too. The Unreal Engine is producing scenes that look like real life. These AI-generated images took Twitter by storm a couple weeks ago:

https://twitter.com/heartereum/status/1619053999008931840?s=20

For those who want to have a romantic relationship with an AI, things are about to get really good. But it’s not just romantic relationships. At the current rate of progress, we’ll have all sorts of AIs helping us with a bunch of different things, and getting trained to our unique personalities, data, and preferences. We might have an AI therapist, an AI business coach, our own personal Clippy for Office docs, an AI assistant in the browser, AI friends in chat rooms, and AI pets. Today, these might be the same for everyone who interacts with them, but over time, they’ll get trained and personalized for each user. We’ll even have AI models of ourselves, trained on the way that we might respond to an email or text, or the way that we might write an essay, or do any number of the things that we do online.  Our portfolio company, Vana, is working on this. 

Sydney and Replika are early signs that our strong human urge to anthropomorphize will extend easily into the world of all of these AI friends. 

In that world, will we want someone else to hold the power to turn them on or off, to change their personalities and capabilities for the good of the company or to “set an ethical center for safety standards for everyone else”?  

I doubt it. 

So what’s the alternative? 

Crypto Fixes This? 

I know, I know. “Crypto fixes this” is a meme. But it’s hard to argue that people shouldn’t be in control of their own relationships, and without crypto – or without everyone learning to train their own models on top of open source ones – it’s impossible to guarantee that control. It seems inevitable that if companies and governments can intervene, they will. 

You expect Microsoft to tame its AI – it’s already updated Bing to shut Sydney up – but even a smaller, racier company like Luka folded when the Italian government applied a little pressure. 

There are a few crypto options that might offer an alternative. 

At the root, the foundation models themselves might be governed by their users. Think about OpenAI or Stability AI here. In this scenario, token holders would be able to govern what the underlying open source model can and can’t do. Some communities might agree that anything legal in the United States goes; others might vote to make their models PG-13. Anyone is free to plug into the model and build applications on top, governed by the rules of the underlying model.
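To make that concrete, here’s a rough sketch (in TypeScript, with ethers.js) of what a token-holder vote over a model’s content policy could look like, using an OpenZeppelin-style Governor contract. To be clear: the model registry contract, the setContentPolicy() function, and every address below are assumptions made up for illustration, not anything OpenAI or Stability AI actually expose today.

```typescript
// Hypothetical sketch: token holders proposing a change to the rules an
// open-source foundation model must follow. The registry contract,
// setContentPolicy(), and all addresses are assumptions, not a real protocol.
import { Contract, Interface, JsonRpcProvider, Wallet } from "ethers";

const provider = new JsonRpcProvider("https://eth.llamarpc.com");
const signer = new Wallet("0x...", provider); // hypothetical token holder key

// Imaginary on-chain registry that records what the model may and may not do.
const modelRegistry = new Interface([
  "function setContentPolicy(string policy)",
]);

// Standard OpenZeppelin Governor interface for creating proposals.
const governor = new Contract(
  "0x...", // hypothetical Governor contract address
  [
    "function propose(address[] targets, uint256[] values, bytes[] calldatas, string description) returns (uint256)",
  ],
  signer
);

async function proposePolicyChange() {
  // Encode the call the community is voting on: make the base model PG-13.
  const calldata = modelRegistry.encodeFunctionData("setContentPolicy", ["PG-13"]);
  const tx = await governor.propose(
    ["0x..."], // hypothetical model registry address
    [0],
    [calldata],
    "Restrict the base model to PG-13 outputs"
  );
  await tx.wait(); // token holders then vote; if it passes, the policy changes
}

proposePolicyChange().catch(console.error);
```

The point isn’t this specific contract; it’s that the rules would live somewhere no single company can quietly edit.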

I’m sure that there will be attempts on this front, but I’m less optimistic about its applicability to this use case. For one thing, governance even on simpler topics is still a work-in-progress and can be gamed. For another, if people’s relationships with their AIs change, it will be little comfort that those changes were the result of a vote by a group of token holders. Relationships are a personal thing. 

One layer up, the applications that create the AIs might be governed by their users. Think about Luka or Microsoft here. In this scenario, token holders would be able to govern what the applications can and can’t do. Applications are less expensive to build than foundation models, and could build on top of the most open foundation model and leave governance decisions to the people using specific applications. Some applications might explicitly offer adult relationships and bear the responsibility of age verification. Others might create kid-friendly AI pets and vote to ensure that filters keep out any potentially offensive language. Regulators could focus on regulating the apps instead of the underlying models in a slight twist of the framework proposed by a16z Crypto’s Miles Jennings and Brian Quintenz. This way, not all users of applications built on top of certain foundation models would be subjected to the whims of regulators, just users of particular apps that run afoul of regulation. 

There will be attempts on this front, too, but I don’t think it provides the right level of control for relationships. If you apply it to Luka, for example, Italian regulators could still have cracked down on a decentralized version of the app. They would have protected children (good!) and damaged relationships (bad!) in one fell swoop. And relationships are still too personal for this level – there’s still the risk that even a group of people with roughly the same expectations makes decisions that you’re unhappy with. 

The most promising answer, I think, is NFTs. Perhaps more than anything else, relationships should be self-custodied.

Now let me caveat that I’m not technical, and getting fully self-custodied AIs that continue to function and advance seems very difficult, but there are glimpses. The next big thing might start out looking like a toy. 

Onlybots: A Toy Example

In December 2021, Alex Herrity, the CEO and co-founder of Not Boring Capital portfolio company Anima, messaged me to say that they were working on building something funny on top of their platform. 

A year later, they dropped Onlybots:

In order to mint, you had to fail a Human Test. “Only bots” were eligible to purchase. On its face, the project was funny and interesting and showed off Anima’s ability to create AR NFTs. Here’s my little guy, Pack-E, hanging out on my computer and helping me write this piece:

My Onlybot, Pack-E

Behind the scenes, Alex and the Anima team wanted to make a statement on the relationship between humans and AI. They’d need a purpose, too, so Anima made them companions. Anima means “soul,” and with Onlybots, the company set out to use all parts of its AR stack to make something that felt increasingly real, like living creatures, digital things with a soul. 

Enter GPT. In January, I got another DM from Alex, this time with a video. The Onlybot was talking… having a conversation with Alex. 

In his message accompanying the video, he wrote: 

umm obviously needs some filtering but… this is going to be fun… ^^

my kids spent the last hour talking to their onlybots. we have them hooked into openai’s api but we gave them all weird personalities. it’s freaking amazing.

Over the past month, Alex and the Anima team have been filtering and refining, adding unique personalities to each of the bots. Today, they look like this:

Or this: 

They’ve been particularly popular with kids, Alex told me, which is not surprising. Here’s Juno-1 telling a girl that her outfit looks good:

The live Onlybots, like Pack-E, don’t have personalities yet. They’ll be coming soon. And they’re not perfect. They’re toys. To start, there will be a lag in the conversation as OpenAI spins its wheels to come up with answers. To start, they’ll come with the pre-programmed personalities that the Onlybots team gives them. 

That said, it’s not hard to imagine that Onlybots will be able to learn and evolve and grow with their owners. Even this early, they’re already helping kids with homework and cheering them up when they’ve had a rough day. 

Onlybots fall somewhere between Replika and fully decentralized on the spectrum. They’re rendered from fully on-chain data – no 3D file required, everything is stored on the token – so if Anima goes away, everyone would still be able to render their Onlybot. Plus, the building blocks of their personalities are stored in the on-chain NFT metadata, and owners’ relationships with their bots – “picture a more complicated version of the hunger and happiness meter on your Tamagotchi if you had one back in the 90s 😂” – will be recorded as dynamic NFT traits. 
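As a hypothetical sketch of why that matters: because everything lives on-chain, anyone could read an Onlybot’s metadata and traits straight from the contract, with or without Anima’s servers. The contract address, ABI fragments, and the mood() function below are placeholders I made up, not Anima’s actual interface.

```typescript
// Hypothetical sketch: reading a fully on-chain NFT's metadata and a made-up
// dynamic "mood" trait directly from the contract (Node.js + ethers.js).
import { Contract, JsonRpcProvider } from "ethers";

const ONLYBOTS_ADDRESS = "0x..."; // placeholder, not the real contract address
const abi = [
  "function tokenURI(uint256 tokenId) view returns (string)",
  "function mood(uint256 tokenId) view returns (uint8)", // hypothetical dynamic trait
];

async function readBot(tokenId: bigint) {
  const provider = new JsonRpcProvider("https://eth.llamarpc.com");
  const bots = new Contract(ONLYBOTS_ADDRESS, abi, provider);

  // Fully on-chain projects typically return a base64-encoded data: URI,
  // so the metadata survives even if the company behind it disappears.
  const uri: string = await bots.tokenURI(tokenId);
  const metadata = JSON.parse(
    Buffer.from(uri.split(",")[1], "base64").toString("utf8")
  );
  console.log(metadata.name, metadata.attributes);

  // Dynamic traits (think the hunger/happiness meter) could be read the same way.
  console.log("mood:", await bots.mood(tokenId));
}

readBot(1n).catch(console.error);
```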

But Alex told me that neither the models nor the prompts that build the model’s personality are stored on-chain, for a couple of reasons: 

  • AI can be a powerful storytelling medium. To keep it surprising and magical, we don’t put prompts out in public and update them over time with new secrets and stories.
  • AI is still dependent on centralized third parties, like OpenAI, so we need to update our models when things change, bugs happen, or better models appear (say, GPT-4). Like keeping prompts a surprise, this is just meant to make the experience better.

For now, Anima is focused on fun over flexibility. But over time, Alex told me, “Yes, people will be able to define, mint, and own their own characters with their own AI models.” He sees a near-future in which people will be able to make their own virtual characters that they can sell, using Anima’s tools or others’, just like they make their own Roblox items and Minecraft servers. 

Taking it back to crypto seems straightforward from there – it’s the only decentralized way to store and transact digital goods, digital beings, or our relationships to them. Of course they’ll be NFTs.

That’s the same thought I had the first time I saw Onlybots talk. It sparked the idea for this piece in the first place, before the Replika situation went down, probably because it’s easier to think through toy examples. 

When you’re thinking about a digital pet – like an AR version of a Tamagotchi – and not something as serious as a romantic relationship, it’s easier to think through the logic, easier to see how the pieces fit together. 

If a kid has a digital pet – one that seems a little more real because it interacts with them and their environment – and they spend a lot of time with it, and in the process, the pet gets to know them and gets smarter and more attuned to their personality, that kid won’t want anyone to be able to take their pet away from them. With physical things like Tamagotchis, that was guaranteed – you bought it, you owned it, and no one could take it away (other than a big bully or a strict parent). NFTs are the best tool for recreating that guarantee in the digital world. 

Or as Alex put it, “Crypto, AI, and AR all make digital things more real. We can now own digital things, just like physical things in real life, and through AI, we can talk to them and learn from each other. AR serves the same purpose – digital objects are now in front of you and aware of what’s around them.”

Of course, there are a million technical things to get right between Onlybots as it stands today and an internet full of self-custodied digital relationships:

  • Could you implement zk proofs to ensure the privacy of the relationship? 
  • Would you need to put the model on-chain? Will that even be possible in the near future? Would you need an AI-specific chain? 
  • Can a user host and serve their own relationships like they can run an Ethereum node? For the less technical among us, what different levels of decentralization will emerge?
  • Do you need to use a decentralized identity (DID) solution to verify age and identity? 
  • What kind of security needs to be in place to ensure a hacker doesn’t steal your digital friends and partners? 
  • How do you make self-custodied digital relationships portable?
  • Many, many more. 

These all seem like solvable problems, especially when the stakes are so high, and especially because this toy is riding a lot of powerful forces. In The Next Big Thing Will Start Out Looking Like a Toy, Chris Dixon wrote (emphasis mine):

This does not mean every product that looks like a toy will turn out to be the next big thing. To distinguish toys that are disruptive from toys that will remain just toys, you need to look at products as processes. Obviously, products get better inasmuch as the designer adds features, but this is a relatively weak force. Much more powerful are external forces: microchips getting cheaper, bandwidth becoming ubiquitous, mobile devices getting smarter, etc. For a product to be disruptive it needs to be designed to ride these changes up the utility curve.

Self-custodied digital relationships are riding a wave of external forces up the utility curve. AI is improving daily. Blockchains are getting more performant. Strong companies are working on solutions for self-custodied data, DIDs, and zk proofs. Crypto security is getting more robust. The world’s biggest companies are pouring billions of dollars into AR and VR. 

And in addition to those modern technical achievements, the idea of self-custodied relationships benefits from millennia of evolution that’s hardwired the primacy of relationships into our DNA. 

Replika is proof that at least some people can feel the same way about relationships with digital beings as they do with biological ones. To me, it’s a classic example of a small group of people being willing to deal with primitive versions of a technology because the need is so strong. Often, as the technology improves, more people do what that small group did. 

If that ends up being true, people simply won’t be satisfied with a world in which a company holds the keys to their friendships and love. 

Onlybots, literally a toy, is an early peek at how we might solve that problem. As external forces improve the underlying technology, and as more groups experiment with more decentralized versions that put the models on-chain, people will have the option to be in control of their digital relationships just like they’re in control of their biological ones. 

I know it sounds weird. And it might be years away. But I have seen the future, because I have seen porn and toys.
