
Maybe AI can finally kill the cover letter

Paige Vickers for Vox

Jobs still require cover letters. Apps like ChatGPT can help.

Grace wanted a better-paid job based closer to where she lived, but she dreaded writing another cover letter. And although her job as a land-use planner does require some writing, she felt a cover letter wouldn’t actually do a good job of showcasing it.

“It’s technical writing,” Grace said. “It’s not plucky ‘You should hire me because I’m amazing but my weakness is I’m extra amazing.’”

Instead, she took a friend’s advice and used ChatGPT, the text-generating AI software that’s gone viral in recent months. Grace, who asked that we leave out her last name so as not to jeopardize her employment, gave the AI the job description and fed it some qualifications she wanted to highlight. ChatGPT spat out an “adequate” cover letter that she gave a quick edit.

She ended up getting the job but she doesn’t think it was because of her cover letter. “I think they just looked at my resume,” she said.

Grace is one of a growing number of job seekers turning to AI to complete what can be one of the more arduous — and arguably unnecessary — steps in the hiring process. A recent online survey from job service site Resume Builder found that nearly half of current and recent job seekers were using ChatGPT to write their resumes or cover letters. LinkedIn, TikTok, and media outlets abound with info on the best ways to get a decent cover letter from the software.

Using technology like ChatGPT to apply for a job raises some thorny ethical questions, like whether you’re misrepresenting yourself to a potential employer. But job seekers see it as a necessary step toward getting ahead in a job application process that’s fraught with inefficiencies and unfairness. The hiring process, in general, is getting longer and longer, and companies themselves are using software to screen out applicants — a process that feels like a black box. Consumer AI software can let job seekers feel like they’re fighting bot to bot.

It also forces people to ask if cover letters are even important these days, and if there might be better ways to design the application process so that job seekers don’t have to resort to an AI to write one in the first place.

Do cover letters even matter?

The main point of cover letters is to explain why your experience would make you a good fit for a position, but that’s also information hiring managers can glean from your resume or a phone call. And now that AI can make a pretty decent cover letter with the right prompts and a bit of editing, the exercise of writing one by hand can feel more pointless than ever.

I wrote a very basic prompt for ChatGPT and got back a not terrible cover letter.

The extent to which employers are asking for cover letters these days is unclear. Alex Alonso, chief knowledge officer at the Society for Human Resource Management, says that “most” professional jobs still ask for a cover letter. Recruiters we spoke to pegged that rate closer to 10 or 20 percent. Data from Indeed, which hosts listings both for jobs that traditionally require cover letters and for jobs that don’t, shows that just 2 percent mentioned one.

What we do know is that many hiring managers are not actually reading cover letters. Alonso says that hiring managers spend very little time, a couple minutes at most, reviewing an applicant’s qualifications before deciding whether or not to disqualify them.

While a cover letter can be a place for applicants to explain why they might be good for a role they aren’t quite qualified for, or to explain away a work gap or career change, it’s not likely many hiring managers get to those details in that amount of time. Instead, most — two-thirds, he estimates — are simply checking whether you included the cover letter they asked for, not judging the erudition of your prose.

“Most employers don’t really put a lot of stock in what goes into the cover letter other than to demonstrate that the person understood that they should have one,” Alonso said. “To use TikTok parlance: Yes, they understood the assignment.”

For the occasions when hiring managers do want to know if an applicant is good at making a persuasive argument or linking their skills to the job description, it’s also not clear cover letters do a good job of these things. For example, James Shea, a freelance writer who has consulted clients on using ChatGPT, doesn’t think that a cover letter, with its formulaic structure and braggy nature, is a good way of showcasing his writing talent.

“It’s a terrible form of communication,” said Shea. “I have a portfolio of writing that shows I can write. Do I have to write a formal, arcane cover letter?”

Shea recently used ChatGPT as a starting point for writing some cover letters. He says he’s been using the generative AI application as a sort of editor, taking bits and pieces from ChatGPT’s output when he thinks the suggestions are good, then tailoring it to be better.

Applicants are not the only ones who don’t care for cover letters. It’s also apparent that employers themselves are valuing them less and less.

Experts say that requiring cover letters has been on the decline for a while. But whether or not a job explicitly asks for a cover letter or anyone actually reads it, many job seekers still fear skipping one, lest its absence cost them a job.

“I think cover letters have been utterly useless for quite some time now,” said Atta Tarki, co-founder of recruiting firm TalentCompass and author of Evidence-Based Recruiting. Still, if an employer asked for a cover letter, he’d include a very short one. “It’s an unnecessary risk not to put it in.”

The perceived need for cover letters also varies by industry. Tejal Wagadia, a senior technical recruiter, says it’s rare to see tech companies these days require a cover letter. She also urges hiring managers not to ask for them and to look at writing samples or portfolios instead.

“I’m all about candidates and job seekers not doing extra work if they don’t have to,” Wagadia said.

Still, she does receive cover letters from time to time, and she reads them.

What’s the alternative?

Job seekers are in the strange position of needing to write cover letters that are unlikely to be read but in some cases are important. So why not make the process a little easier?

Experts we spoke to said it’s probably fine to use ChatGPT to get a general structure or to get ideas, but that it’s important to personalize and edit your cover letter. A good rule of thumb is to give the AI the job description and your resume, and to tell it what skills of yours to highlight or what tone you’re going for.
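
For example, a prompt along these lines (an invented illustration, not one from anyone we interviewed) puts that advice into practice: “Here is the job description and my resume. Write a three-paragraph cover letter in a confident, conversational tone that highlights my project management and technical writing experience.”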

It’s not necessary for you to disclose that you wrote your cover letter with the help of ChatGPT. After all, people have been using templates and writing services to write their cover letters for years. Just be sure to edit it enough that it isn’t obvious. Alonso, from the Society for Human Resource Management, thinks that disclosing that you used AI could actually be beneficial, since it demonstrates to potential employers that you’re efficient and resourceful.

And if you can avoid a cover letter — or at least outsource some of the work to ChatGPT — there are far better uses of your time when it comes to actually getting a job. Wagadia says the most important document you submit is your resume, so make sure it’s up to date and well written, and that it has a short summary that does some of the heavy lifting a cover letter is supposed to do, like explaining why your skills are good for a certain job.

“A resume should say everything that it needs to say to identify whether you’re qualified for a role or not,” Wagadia said. “As a recruiter, my first question is: Is this candidate qualified for the role that they have applied for and for the role that I’m recruiting for? If the answer is yes, whatever the cover letter says does not matter.”

Tarki said it’s much more effective to send a short email or LinkedIn message — two paragraphs — to the employer, saying why you’re interested in the job and offering any other helpful information. Networking and relying on common connections to make introductions or vouch for you is also a plus.

Austin Belcak, founder of job coaching site Cultivated Culture and creator of a video instructing people how to use ChatGPT to write a cover letter, advocates spending the time you save on the cover letter on things like networking and researching ways you can add value to the company. If you’re able to snag a referral from someone who works at the company, he says, you’re much more likely to get an interview than if you simply apply online. He also suggests creating a pitch deck that shows rather than tells why you’re good for a role.

There are clearly many good alternatives to the dreaded cover letter. But until it can be replaced completely, people will continue to use available technology to do what they don’t want to.

Cigdem Polat Dautov became a software engineer to make people’s lives easier by eliminating redundant and repetitive tasks. Now, as she searches for a job, she sees using ChatGPT to write cover letters just like she’d use any other technology. She enjoys playing around with the software to see what it can yield, and then edits around its shortcomings.

“In the end, it’s a tool,” she said.

This story was first published in the Vox technology newsletter.


Snow Crash author Neal Stephenson predicted the metaverse. What does he see next?

Neal Stephenson at the SXSW conference, March 2022. | Amy E. Price/Getty Images for SXSW

The science fiction pioneer on making a template for Mark Zuckerberg, not making movies, and a worrisome climate change scenario.

Every science fiction author tries to imagine the future. But very few get what Neal Stephenson is experiencing: Some of the world’s most powerful companies are actively trying to create the future he sketched out three decades ago.

That would be in Snow Crash, the 1992 dystopia/parody he wrote about people who escape the physical world by strapping on goggles and disappearing in the metaverse. Which is now the vision of the world Mark Zuckerberg is actively embracing, both by burning billions on the effort and renaming his company Meta. Apple is also chasing after this idea, and may finally unveil a new headset to make it happen this spring. Microsoft has made a stab at this too — as has Stephenson himself, when he went to work for the hyped-but-fizzled Magic Leap augmented reality startup.

Even if the real-world metaverse doesn’t pan out, Stephenson has had an enormous influence on how we think about tech today. People who’ve never written a line of code love his books — and so do bona fide nerds, like the Google Earth developers who used Snow Crash as inspiration, or Amazon founder Jeff Bezos, who hired Stephenson to work on his Blue Origin rocket startup.

I talked to Stephenson about Snow Crash’s legacy — pieces of which went up for sale this week at a Sotheby’s auction — and much more for the Recode Media podcast. We discussed whether the metaverse can exist even if high-end virtual reality goggles never catch on; why he’s never been able to turn his work into a movie, TV show, or game; and his fear of a looming ecological disaster and the science he thinks could solve it.

Here’s an edited excerpt from our chat:

“Holy shit. You know, maybe people are actually taking this seriously.”

Peter Kafka

When did you get a sense that the Jeff Bezoses and Mark Zuckerbergs of the world were really influenced by Snow Crash? That this was really meaningful to tech people who were building things?

Neal Stephenson

I started hearing about it in the mid-’90s, as the internet became a thing. I was on the WELL, which is an early BBS, and there were a lot of tech people there. And so I started getting the idea that it was well-received. I started to hear from people in the tech industry who were reading it, and it gradually became clear. When Google Earth came out [in 2001], word reached me through the grapevine that the Earth application described in Snow Crash had been somewhat inspirational for that. So at that point I was like: “Okay, holy shit. You know, maybe people are actually taking this seriously.”

Peter Kafka

And then cut to [2021], when Zuckerberg renames his company Meta and says, “I want to build the metaverse and spend billions of dollars.” Did he reach out to you prior to that?

Neal Stephenson

No. And not after either. So there’s been zero communication.

Peter Kafka

Your book, like a lot of science fiction, is describing a dystopia. And it struck a lot of people, including me, as weird that a consumer company, one of the biggest companies in the world with 2 billion users, would say, “This is the future we’re pivoting toward.” What do you make of that?

Neal Stephenson

So, a couple of things. One is, Snow Crash is a dystopian novel, but it’s also kind of a parody of dystopian novels because even then …

Peter Kafka

The main character’s name is “Hiro Protagonist.”

Neal Stephenson

Yeah. And, you know, there had been enough of that kind of literature out there that the tropes had become familiar. And just rehashing them without any self-awareness or humor would have been a little weird. So there’s that. And then the world — the real world — certainly has got its dystopian aspects in that book. But the metaverse itself, I think, is kind of neutral. The first parts of it that we see are kind of garish. And people are playing violent games and there’s lots of ads and tacky crud there. It’s the first thing that meets the eye when you go into the metaverse. But it’s also made clear that there are people like Hiro and Ng who have put a huge amount of effort into making extraordinarily beautiful, detailed houses that they can live in in the metaverse.

Peter Kafka

To me, the striking thing is not so much that the metaverse is dystopian but that it’s built to escape a world that is dystopian. We’ve seen that in a bunch of novels. And it just seems like a weird thing to say, “This is the future, we think this is great,” because it implies that the rest of the world is going to fall apart.

Neal Stephenson

Yeah, you’d have to ask him.

“My theory is that a witch placed a curse on me”

Peter Kafka

You previously said you’re “interested in game engines as cultural media for new creative work.” So should I assume that there will be a Neal Stephenson game that I’m going to play at some point?

Neal Stephenson

I’m trying to build something like that. There’s a lot of hoops to jump through first involving rights and financing that are very boring to talk about.

Peter Kafka

Not for me. I nerd out on that stuff.

Neal Stephenson

That’s your deal? Well, it’s part of what we’re calling “the extended Snow Crash universe timeline,” which is sequel/prequel material, basically, to Snow Crash.

Peter Kafka

It sounds like you may not have the rights to make your own book into something.

Neal Stephenson

The rights to the original book are currently controlled by Paramount.

Peter Kafka

Why hasn’t any of your work been turned into a movie, television show, or game? Especially over the last few years when there was so much money being thrown at stuff people could put on streamers. Your work means a lot to a lot of people. It’s established IP. Why hasn’t there been a Neal Stephenson work that I’ve been able to play or watch?

Neal Stephenson

My theory is that a witch placed a curse on me. That’s the current going theory. My producing partner and I refer to what you’ve just described as “the curse.” We’ve been working on trying to break the curse. Currently the leading contender is that there’s some work underway to adapt a book I co-wrote called The Rise and Fall of D.O.D.O. into a television series. It’s still in the early stages, so it’s got a lot of hoops to jump through.

Peter Kafka

But you’ve worked for Jeff Bezos. None of the richest people in the world ever said, “I’m just going to set this up for you. I’m such a mega fan, I’m just going to open up my pocketbook and we’re going to make this thing happen”?

Neal Stephenson

That sounds like a great plan. I like that plan. When you try to implement that plan, sometimes some complications can arise, which again, I can’t get into right now. But, you want smart money. You want people who actually know how to put all the pieces together and produce something. And it is a complicated industry.

Peter Kafka

I was wondering if you were going to say, “Look, up until recently, it’s been impossible technically to make the stuff that I’ve written into something visual or a game, and I didn’t want to do a half-assed version.” I would hate to have seen what a 1997 version of the metaverse looked like.

Neal Stephenson

I’ve had that thought a lot of times. There but for the grace of God.

If someone had done an adaptation of Snow Crash in 1995, they would have said, “What’s the coolest snazzy computer graphics we can get right now and we’ll have that be the metaverse.” And then five years later, people would be looking at it like, “Oh my God, they used to think that was cool-looking.” And I’ve had a few conversations over the decades with people who were investigating adapting Snow Crash and their ideas have changed over time. And at a certain point it flipped over and it became, you know, “We’ll just shoot everything on film because the metaverse would be film-quality graphics for sure. And then we’ll manipulate it, we’ll add digital tweaks, to make it clear that this is the metaverse and not the real world.”

Peter Kafka

How much does it bother you that this has not happened?

Neal Stephenson

You know, be careful what you wish for, I guess. It’s sometimes better to have the aspiration of something than to face some of the compromises that may happen when it really materializes. But I don’t lose sleep over it because I can still write novels.

It’s much more frustrating if you’re a film director or a screenwriter and you can’t get stuff made. You need other people to mobilize huge amounts of capital to make that real. There’s a weird way in which novelists — even broke novelists — have got a kind of status in that world that is very high, because they have creative control.

Peter Kafka

You make the thing exactly the way you want it to be.

Neal Stephenson

Yeah. I can remember, way back in the ’80s, I was talking to screenwriters who had been hired to adapt some of my work. They’re driving Porsches around Beverly Hills. I’m starving. But they would come to me and say, “How did you become a writer? How could I become a novelist?” Because in their mind, status isn’t money, it’s creative control. And they wanted that kind of status.

Peter Kafka

You can’t pay your rent with status.

Neal Stephenson

Yeah. Well, that’s true.

“The metaverse initially is going to be experienced by almost everyone on a flat screen”

Peter Kafka

Do you think in the future — assuming the tech gets there and assuming there’s a reason to use it, which are both huge things — that humans are going to want to wear AR/VR goggles? I went to the new Avatar a couple months ago and it’s a three-hour movie and I was seeing it in IMAX with the [3-D] headset on. And an hour and a half in, I was like, “I don’t want to wear these goggles anymore.”

Neal Stephenson

You hit your limit. There’s a semantic distinction between glasses and goggles. Lots of people wear glasses all day and nobody thinks twice about it. Very few people wear goggles all day. You go skiing, maybe you’ll put on goggles. Fighter pilots wear goggles, but goggles are not generally a long-term wear kind of item. And no matter how good the experience is, wearing that stuff for, as you say, more than 45 minutes or an hour is not enjoyable for a lot of people.

On the other hand, the game industry has taught everyone to experience 3D worlds through a rectangle — a flat rectangular screen — and it works great. You’re using your keyboard and your mouse or whatever your control system is. And people just fluently pick that up and they’ll play that for hours. So I think that goggles are going to be a thing. I like goggles. I know people who make goggles of various types, and I can’t wait to see what comes out of that industry. But I think that most people are going to continue experiencing 3D worlds most of the time through screens.

Peter Kafka

And does the metaverse work if it’s a flat-screen experience for those people?

Neal Stephenson

Totally. I keep forgetting to mention this, but in my view, the metaverse initially is going to be experienced by almost everyone on a flat screen.

Peter Kafka

A television set or iPhone.

Neal Stephenson

Yeah, some version of that. Because that’s just the reality. That’s what the market is.

“The only things worth talking about right now are carbon and the fracturing of society by social media”

Peter Kafka

You imagine the future for a living. Are you optimistic or pessimistic?

Neal Stephenson

So I think that the only two things worth talking about right now are carbon and the fracturing of society by social media. They’re both equally concerning. I don’t know what to do about social media. I’m not a people person, in a lot of ways. So I tend to think about carbon. I’ve been thinking a lot about carbon, carbon sequestration in particular, geoengineering, all that stuff.

Peter Kafka

Getting the carbon emissions out of there.

Neal Stephenson

How do we reduce carbon emissions and remove the hundreds of billions of kilograms of carbon that we’ve already put into the air? I think we’ll beat that problem. But I think it’s going to be the biggest engineering project in human history. It’s going to transform the world — the built environment — because we simply can’t do it without doing engineering on a massive scale. I think we’ll succeed at it. But we’ll have some bad times between now and then.

I think we’ll start to see the kinds of mass casualty events that are described in Kim Stanley Robinson’s book, The Ministry for the Future, where you might see millions of people dying of heat stroke in a certain area over a very short period of time. When the temperature goes up, the humidity goes up, the power goes out. And when that kind of stuff starts happening — which I sadly think it will in the next decade — it’s going to have incredibly powerful political ramifications.

Peter Kafka

I was going to say we’re ending this [conversation] with cautious optimism, but I don’t know if I can call it that.

Neal Stephenson

I hope that stuff doesn’t happen, but I think even the threat of it is going to lead, eventually, to people taking action.


A new era of technology coverage on Vox

Vox

For something that’s defined by change, the world of technology feels extra disruptive lately. Artificial intelligence is making headlines on a regular basis. Electric vehicles are taking over the roads. Microchips are made in America again. For the techno-optimists out there, we’re finally living in a version of the science fiction-inspired future we were promised.

But our present is more complicated than that. The tech industry is facing a series of crossroads. The businesses that once seemed like unstoppable profit machines are starting to sputter, slowing the meteoric growth of tech giants as leaders in Washington target them for being too big. A changing global economy is bringing high-tech manufacturing jobs back to the United States, as office workers find themselves torn between returning to the office and striking out on their own. Our roads aren’t actually ready for all those electric vehicles, and the AI technology that’s taking Silicon Valley by storm comes with unexpected consequences we’re discovering in real time as it rolls out to the public. Some sci-fi future we’ve built for ourselves, the skeptics may say.

It’s long been Recode’s mission to help you, our readers, make sense of technological change and understand how it’s affecting your life. When Recode joined forces with Vox in 2019, we set out to join our expertise in technology and media with Vox’s command of explanatory journalism. And we’re immensely proud of what we’ve accomplished. Looking ahead, however, we think we can serve you even better behind a more united front.

That’s why, starting today, we’re retiring the Recode branding and continuing our mission under the Vox banner. Over time, we’ve heard some feedback from readers who found Vox’s sub-brands confusing — the exact opposite of what Vox strives for — so this change will help us more clearly communicate to our audience what Vox covers. We’re also excited for our reporters to collaborate more with other teams at Vox — everyone from the politics wonks to the science nerds — as technology’s role in our lives continues to expand.

Vox will continue to explain how technology is changing the world and how it’s changing us. We’ll have the same reporters and continue to cover many of the same topics you’re used to seeing on Recode: the vibe shift in Silicon Valley, the power struggle between Big Tech and Washington, the future of work, all things media. You’ll also notice a new focus on covering innovation and transformation: technology’s role in fighting climate change, the reinvention of American cities, artificial intelligence’s creep into the mainstream.

Of course, our distinctive approach wouldn’t exist without the influence of the indomitable innovators Kara Swisher and Walt Mossberg, who launched Recode nearly a decade ago. Walt has since retired, and after stepping down as Recode’s editor-at-large in 2019, Kara has been focused on building out her podcasts with Vox Media: On with Kara Swisher and Pivot. We’re immensely grateful to Walt and Kara for their pioneering work in tech journalism, and their vision will continue to guide the work we do in this new era.

Expect some exciting things in the months to come. We’ll soon relaunch Peter Kafka’s popular podcast under a new name and with a new look. Vox Media will also continue to host the Code Conference, where you will find Vox writers on stage alongside some of the most important leaders in the industry.

We have a tremendous future to look forward to, one filled with paradigm shifts, progress, and probably a good dose of uncertainty about what it all means. At Vox, we’re excited to keep explaining the news and helping you understand how it’s relevant to you.


The exciting new AI transforming search — and maybe everything — explained

Malte Mueller/Getty Images

Generative AI is here. Let’s hope we’re ready.

The world’s first generative AI-powered search engine is here, and it’s in love with you. Or it thinks you’re kind of like Hitler. Or it’s gaslighting you into thinking it’s still 2022, a more innocent time when generative AI seemed more like a cool party trick than a powerful technology about to be unleashed on a world that might not be ready for it.

If you feel like you’ve been hearing a lot about generative AI, you’re not wrong. After a generative AI tool called ChatGPT went viral a few months ago, it seems everyone in Silicon Valley is trying to find a use for this new technology.

Generative AI is essentially a more advanced and useful version of the conventional artificial intelligence that already helps power everything from autocomplete to Siri. The big difference is that generative AI can create new content, such as images, text, audio, video, and even code — usually from a prompt or command. It can write news articles, movie scripts, and poetry. It can make images out of some really specific parameters. And if you listen to some experts and developers, generative AI will eventually be able to make almost anything, including entire apps, from scratch. For now, the killer app for generative AI appears to be search.

One of the first major generative AI products for the consumer market is Microsoft’s new AI-infused Bing, which debuted in early February to great fanfare. The new Bing uses generative AI in its web search function to return results that appear as longer, written answers culled from various internet sources instead of a list of links to relevant websites. There’s also a new accompanying chat feature that lets users have human-seeming conversations with an AI chatbot. Google, the undisputed king of search for decades now, is planning to release its own version of AI-powered search as well as a chatbot called Bard in the coming weeks, the company said just days after Microsoft announced the new Bing.

In other words, the AI wars have begun. And the battles may not just be over search engines. Generative AI is already starting to find its way into mainstream applications for everything from food shopping to social media.

Microsoft and Google are the biggest companies with public-facing generative AI products, but they aren’t the only ones working on it. Apple, Meta, and Amazon have their own AI initiatives, and there are plenty of startups and smaller companies developing generative AI or working it into their existing products. TikTok has a generative AI text-to-image system. Design platform Canva has one, too. An app called Lensa creates stylized selfies and portraits (sometimes with ample bosoms). And the open-source model Stable Diffusion can generate detailed and specific images in all kinds of styles from text prompts.

There’s a good chance we’re about to see a lot more generative AI showing up in a lot more applications, too. OpenAI, the AI developer that built the ChatGPT language model, recently announced the release of APIs, or application programming interfaces, for ChatGPT and for Whisper, its speech recognition model. Companies like Instacart and Shopify are already implementing this tech into their products, using generative AI to write shopping lists and offer recommendations. There’s no telling how many more apps might come up with novel ways to take advantage of what generative AI can do.
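
For a sense of what that implementation looks like in practice, here is a minimal sketch of a ChatGPT API call using OpenAI’s Python package as it existed in early 2023 (the pre-1.0 interface). The grocery-list prompt is an invented example in the spirit of the Instacart use case, not any company’s actual integration.

```python
# A minimal sketch of a ChatGPT API call via the openai Python package's
# pre-1.0 interface, current when this piece was written. The prompt and
# use case are illustrative assumptions, not a real product integration.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with a key from your own account

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at the API's launch
    messages=[
        {"role": "system", "content": "You are a helpful grocery-planning assistant."},
        {"role": "user", "content": "Write a shopping list for taco night for four people."},
    ],
)

print(response.choices[0].message.content)
```
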
Generative AI has the potential to be a revolutionary technology, and it’s certainly being hyped as such. Venture capitalists, who are always looking for the next big tech thing, believe that generative AI can replace or automate a lot of creative processes, freeing up humans to do more complex tasks and making people more productive overall. But it’s not just creative work that generative AI can produce. It can help developers make software. It could improve education. It may be able to discover new drugs or become your therapist. It just might make our lives easier and better.

Or it could make things a lot worse. There are reasons to be concerned about the damage generative AI can do if it’s released to a society that isn’t ready for it — or if we ask the AI program to do something it isn’t ready for. How ethical or responsible generative AI technologies are is largely in the hands of the companies developing them, as there are few if any regulations or laws in place governing AI. This powerful technology could put millions of people out of work if it’s able to automate entire industries. It could spawn a destructive new era of misinformation. There are also concerns of bias due to a lack of diversity in the material and data that generative AI is trained on, or the people who are overseeing that training.
Nevertheless, powerful generative AI tools are making their way to the masses. If 2022 was the “year of generative AI,” 2023 may be the year that generative AI is actually put to use, ready or not.
The slow, then sudden, rise of generative AI
Conventional artificial intelligence is already integrated into a ton of products we use all the time, like autocomplete, voice assistants like Amazon’s Alexa, and even the recommendations for music or movies we might enjoy on streaming services. But generative AI is more sophisticated. It uses deep learning, or algorithms that create artificial neural networks that are meant to mimic how human brains process information and learn. And then those models are fed enormous amounts of data to train on. For example, large language models power things like ChatGPT, which train on text collected from around the internet until they learn to generate and mimic those kinds of texts and conversations upon request. Image models have been fed tons of images and captions that describe them in order to learn how to create new content based on prompts.
After years of development, most of it outside of public view, generative AI hit the mainstream in 2022 with the widespread releases of art and text models. Models like Stable Diffusion and DALL-E, which was released by OpenAI, were first to go viral, and they let anyone create new images from text prompts. Then came OpenAI’s ChatGPT (GPT stands for “generative pre-trained transformer”) which got everyone’s attention. This tool could create large, entirely new chunks of text from simple prompts. For the most part, ChatGPT worked really well, too — better than anything the world had seen before.
Though it’s one of many AI startups out there, OpenAI seems to have the most advanced or powerful products right now. Or at least, it’s the startup that has given the general public access to its services, thereby providing the most evidence of its progress in the generative AI field. This is a demonstration of its abilities as well as a source of even more data for OpenAI’s models to learn from.
OpenAI is also backed by some of the biggest names in Silicon Valley. It was founded in 2015 as a nonprofit research lab with $1 billion in support from the likes of Elon Musk, Reid Hoffman, Peter Thiel, Amazon, and former Y Combinator president Sam Altman, who is now the company’s CEO. OpenAI has since changed its structure to become a for-profit company but has yet to make a profit or even much by way of revenue. That’s not a problem yet, as OpenAI has gotten a considerable amount of funding from Microsoft, which began investing in OpenAI in 2019. And OpenAI is seizing on the wave of excitement for ChatGPT to promote its API services, which are not free. Neither is the company’s upcoming ChatGPT Plus service.

Malte Mueller/Getty Images

Other big tech companies have for years been working on their own generative AI initiatives. There’s Apple’s Gaudi, Meta’s LLaMA and Make-a-Scene, Amazon’s collaboration with Hugging Face, and Google’s LaMDA (which is good enough that one Google engineer thought it was sentient). But thanks to its early investment in OpenAI, Microsoft had access to the AI project everyone knew about and was trying out.
In January 2023, Microsoft announced it was giving $10 billion to OpenAI, bringing its total investment in the company to $13 billion. From that partnership, Microsoft has gotten what it hopes will be a real challenge to Google’s longtime dominance in web search: a new Bing powered by generative AI.
AI search will give us the first glimpse of how generative AI can be used in our everyday lives … if it works
Tech companies and investors are willing to pour resources into generative AI because they hope that, eventually, it will be able to create or generate just about any kind of content humans ask for. Some of those aspirations may be a long way from becoming reality, but right now, it’s possible that generative AI will power the next evolution of the humble internet search.
After months of rumors that both Microsoft and Google were working on generative AI versions of their web search engines, Microsoft debuted its AI-integrated Bing in January in a splashy media event that showed off all the cool things it could do, thanks to OpenAI’s custom-built technology that powered it. Instead of entering a prompt for Bing to look up and return a list of relevant links, you could ask Bing a question and get a “complete answer” composed by Bing’s generative AI and culled from various sources on the web that you didn’t have to take the time to visit yourself. You could also use Bing’s chatbot to ask follow-up questions to better refine your search results.
Microsoft wants you to think the possibilities of these new tools are just about endless. And notably, Bing AI appeared to be ready for the general public when the company announced it last month. It’s now being rolled out to people on an ever-growing wait list and incorporated into other Microsoft products, like its Windows 11 operating system and Skype.
This poses a major threat to Google, which has had the search market sewn up for decades and makes most of its revenue from the ads placed alongside its search results. The new Bing could chip away at Google’s search dominance and its main moneymaker. And while Google has been working on its own generative AI models for years, its AI-powered search engine and corresponding chatbot, which it calls Bard, appear to be months away from debut. All of this suggests that, so far, Microsoft is winning the AI-powered search engine battle.
Or is it?
Once the new Bing made it to the masses, it quickly became apparent that the technology might not be ready for primetime after all. Right out of the gate, Bing made basic factual errors or made up stuff entirely, also known as “hallucinating.” What was perhaps more problematic, however, was that its chatbot was also saying some disturbing and weird things. One person asked Bing for movie showtimes, only to be told the movie hadn’t come out yet (it had) because the date was February 2022 (it wasn’t). The user insisted that it was, at that time, February 2023. Bing AI responded by telling the user they were being rude, had “bad intentions,” and had lost Bing’s “trust and respect.” A New York Times reporter pronounced Bing “not ready for human contact” after its chatbot — with a considerable amount of prodding from the reporter — began expressing its “desires,” one of which was the reporter himself. Bing also told an AP reporter that he was acting like Hitler.
In response to the bad press, Microsoft has tried to put some limits and guardrails on Bing, like limiting the number of interactions one person can have with its chatbot. But the question remains: How thoroughly could Microsoft have tested Bing’s chatbot before releasing it if it took only a matter of days for users to get it to give such wild responses?
Google, on the other hand, may have been watching this all unfold with a certain sense of glee. Its limited Bard rollout hasn’t exactly gone perfectly, but Bard hasn’t compared any of its users to one of the most reviled people in human history, either. At least, not that we know of. Not yet.
So far, Microsoft is winning the AI-powered search engine battle. Or is it?
Again, Microsoft and Google aren’t the only companies working on generative AI, but their public releases have put more pressure on others to roll out their offerings as soon as possible, too. ChatGPT’s release and OpenAI’s partnership with Microsoft likely accelerated Google’s plans. Meanwhile, Meta is working to get its generative AI into as many of its own products as possible and just released a large language model of its own, called Large Language Model Meta AI, or LLaMA.
With the rollout of APIs that help developers add ChatGPT and Whisper to their applications, OpenAI seems eager to expand quickly. Some of these integrations seem pretty useful, too. Snapchat now has a chatbot called “My AI” for its paid subscribers, with plans to offer it to everyone soon. Initial reports say it’s just ChatGPT in Snapchat, but with even more restrictions about what it will talk about (no swearing, sex, or violence). Instacart will use ChatGPT in a feature called “Ask Instacart” that can answer customers’ questions about food. And Shopify’s Shop app has a ChatGPT-powered assistant to make personalized recommendations from the brands and stores that use the platform.
Generative AI is here to stay, but we don’t yet know if that’s for the best
Bing AI’s problems were just a glimpse of how generative AI can go wrong and have potentially disastrous consequences. That’s why pretty much every company that’s in the field of AI goes out of its way to reassure the public that it’s being very responsible with its products and taking great care before unleashing them on the world. Yet for all of their stated commitment to “building AI systems and products that are trustworthy and safe,” Microsoft and OpenAI either didn’t or couldn’t ensure a Bing chatbot could live up to those principles, but they released it anyway. Google and Meta, by contrast, were very conservative about releasing their products — until Microsoft and OpenAI gave them a push.
Error-prone generative AI is being put out there by many other companies that have promised to be careful. Some text-to-image models are infamous for producing images with missing or extra limbs. There are chatbots that confidently declare the winner of a Super Bowl that has yet to be played. These mistakes are funny as isolated incidents, but we’ve already seen one publication rely on generative AI to write authoritative articles with significant factual errors.

Malte Mueller/Getty Images

These screw-ups have been happening for years. Microsoft had one high-profile AI chatbot flop with its 2016 release of Tay, which Twitter users almost immediately trained to say some really offensive things. Microsoft quickly took it offline. Meta’s Blenderbot is based on a large language model and was released in August 2022. It didn’t go well. The bot seemed to hate Facebook, got racist and antisemitic, and wasn’t very accurate. It’s still available to try out, but after seeing what ChatGPT can do, it feels like a clunky, slow, and weird step backward.
There are even more serious concerns. Generative AI threatens to put a lot of people out of work if it’s good enough to replace them. It could have a profound impact on education. There are also questions of legalities over the material AI developers are using to train their models, which is typically scraped from millions of sources that the developers don’t have the rights to. And there are questions of bias both in the material that AI models are training on and the people who are training them.
On the other side, some conservative bomb-throwers have accused generative AI developers of moderating their platforms’ outputs too much and making them “woke” and biased against the right wing. To that end, Musk, the self-proclaimed free-speech absolutist and OpenAI critic as well as an early investor, is reportedly considering developing a ChatGPT rival that won’t have content restrictions or be trained on supposedly “woke” material.
And then there’s the fear not of generative AI but of the technology it could lead to: artificial general intelligence. AGI can learn and think and solve problems like a human, if not better. This has given rise to science fiction-based fears that AGI will lead to an army of super-robots that quickly realize they have no need for humans and either turn us into slaves or wipe us out entirely.
There are plenty of reasons to be optimistic about generative AI’s future, too. It’s a powerful technology with a ton of potential, and we’ve still seen relatively little of what it can do and who it can help. Silicon Valley clearly sees this potential, and venture capitalists like Andreessen Horowitz and Sequoia seem to be all-in. OpenAI is valued at nearly $30 billion, despite not having yet proved itself as a revenue generator.
Generative AI has the power to upend a lot of things, but that doesn’t necessarily mean it’ll make them worse. Its ability to automate tasks may give humans more time to focus on the stuff that can’t be done by increasingly sophisticated machines, as has been true for technological advances before it. And in the near future — once the bugs are worked out — it could make searching the web better. In the years and decades to come, it might even make everything else better, too.
Oh, and in case you were wondering: No, generative AI did not write this explainer.

Malte Mueller/Getty Images

Generative AI is here. Let’s hope we’re ready.

The world’s first generative AI-powered search engine is here, and it’s in love with you. Or it thinks you’re kind of like Hitler. Or it’s gaslighting you into thinking it’s still 2022, a more innocent time when generative AI seemed more like a cool party trick than a powerful technology about to be unleashed on a world that might not be ready for it.

If you feel like you’ve been hearing a lot about generative AI, you’re not wrong. After a generative AI tool called ChatGPT went viral a few months ago, it seems everyone in Silicon Valley is trying to find a use for this new technology. Generative AI is essentially a more advanced and useful version of the conventional artificial intelligence that already helps power everything from autocomplete to Siri. The big difference is that generative AI can create new content, such as images, text, audio, video, and even code — usually from a prompt or command. It can write news articles, movie scripts, and poetry. It can produce images that match oddly specific requests. And if you listen to some experts and developers, generative AI will eventually be able to make almost anything, including entire apps, from scratch. For now, the killer app for generative AI appears to be search.

One of the first major generative AI products for the consumer market is Microsoft’s new AI-infused Bing, which debuted in January to great fanfare. The new Bing uses generative AI in its web search function to return results that appear as longer, written answers culled from various internet sources instead of a list of links to relevant websites. There’s also a new accompanying chat feature that lets users have human-seeming conversations with an AI chatbot. Google, the undisputed king of search for decades now, is planning to release its own version of AI-powered search as well as a chatbot called Bard in the coming weeks, the company said just days after Microsoft announced the new Bing.

In other words, the AI wars have begun. And the battles may not just be over search engines. Generative AI is already starting to find its way into mainstream applications for everything from food shopping to social media.

Microsoft and Google are the biggest companies with public-facing generative AI products, but they aren’t the only ones working on it. Apple, Meta, and Amazon have their own AI initiatives, and there are plenty of startups and smaller companies developing generative AI or working it into their existing products. TikTok has a generative AI text-to-image system. Design platform Canva has one, too. An app called Lensa creates stylized selfies and portraits (sometimes with ample bosoms). And the open-source model Stable Diffusion can generate detailed and specific images in all kinds of styles from text prompts.

There’s a good chance we’re about to see a lot more generative AI showing up in a lot more applications, too. OpenAI, the developer behind ChatGPT, recently announced APIs, or application programming interfaces, for ChatGPT and for Whisper, its speech recognition model. Companies like Instacart and Shopify are already building this tech into their products, using generative AI to write shopping lists and offer recommendations. There’s no telling how many more apps might come up with novel ways to take advantage of what generative AI can do.
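
For a sense of how simple such an integration can be, here is a minimal sketch using the openai Python client as it existed around the time of writing (the v0.27-era ChatCompletion interface). The model name is the one OpenAI announced for the ChatGPT API; the grocery-assistant framing and prompt are purely illustrative:

```python
import os

import openai

# The ChatGPT API as exposed by the v0.27-era openai Python client.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model OpenAI announced for the ChatGPT API
    messages=[
        {"role": "system", "content": "You are a helpful grocery assistant."},
        {"role": "user", "content": "Write a shopping list for taco night for four people."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

A few lines like these are roughly all it takes to bolt a chatbot onto an existing app, which helps explain how quickly these integrations are spreading.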

Generative AI has the potential to be a revolutionary technology, and it’s certainly being hyped as such. Venture capitalists, who are always looking for the next big tech thing, believe that generative AI can replace or automate a lot of creative processes, freeing up humans to do more complex tasks and making people more productive overall. But it’s not just creative work that generative AI can produce. It can help developers make software. It could improve education. It may be able to discover new drugs or become your therapist. It just might make our lives easier and better.

Or it could make things a lot worse. There are reasons to be concerned about the damage generative AI can do if it’s released to a society that isn’t ready for it — or if we ask the AI program to do something it isn’t ready for. How ethical or responsible generative AI technologies are is largely in the hands of the companies developing them, as there are few if any regulations or laws in place governing AI. This powerful technology could put millions of people out of work if it’s able to automate entire industries. It could spawn a destructive new era of misinformation. There are also concerns about bias, stemming from a lack of diversity both in the material generative AI is trained on and among the people overseeing that training.

Nevertheless, powerful generative AI tools are making their way to the masses. If 2022 was the “year of generative AI,” 2023 may be the year that generative AI is actually put to use, ready or not.

The slow, then sudden, rise of generative AI

Conventional artificial intelligence is already integrated into a ton of products we use all the time, like autocomplete, voice assistants like Amazon’s Alexa, and even the recommendations for music or movies we might enjoy on streaming services. But generative AI is more sophisticated. It uses deep learning: algorithms arranged into artificial neural networks that are meant to mimic how human brains process information and learn. Those networks are then fed enormous amounts of data to train on. Large language models, which power tools like ChatGPT, train on text collected from around the internet until they learn to generate and mimic those kinds of texts and conversations on request. Image models are fed tons of images, along with captions that describe them, in order to learn how to create new content based on prompts.
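
To make the predict-the-next-token idea concrete, here is a toy sketch in Python: a character-level bigram model that "trains" by counting which character tends to follow which, then samples new text from those counts. This only illustrates the basic train-then-generate loop; real large language models use deep neural networks, vastly more data, and far more context than a single preceding character.

```python
import random
from collections import Counter, defaultdict

# A tiny "training corpus"; real models train on a large slice of the internet.
corpus = "the cat sat on the mat. the dog sat on the rug. " * 50

# "Training": count how often each character follows each other character.
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def generate(seed: str, length: int = 60) -> str:
    """Sample one character at a time from the learned follow-up counts."""
    out = seed
    for _ in range(length):
        followers = counts[out[-1]]
        chars, weights = zip(*followers.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate("t"))
```

Swap the counting step for a neural network trained on hundreds of billions of words and you get, very roughly, the scale jump that separates this toy from something like ChatGPT.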

After years of development, most of it outside of public view, generative AI hit the mainstream in 2022 with the widespread releases of art and text models. Image models like Stable Diffusion and OpenAI’s DALL-E were the first to go viral, letting anyone create new images from text prompts. Then came OpenAI’s ChatGPT (GPT stands for “generative pre-trained transformer”), which got everyone’s attention. This tool could create large, entirely new chunks of text from simple prompts. For the most part, ChatGPT worked really well, too — better than anything the world had seen before.

Though it’s one of many AI startups out there, OpenAI seems to have the most advanced or powerful products right now. Or at least, it’s the startup that has given the general public the most access to its services, thereby providing the most evidence of its progress in the generative AI field. That access serves as both a demonstration of OpenAI’s abilities and a source of even more data for its models to learn from.

OpenAI is also backed by some of the biggest names in Silicon Valley. It was founded in 2015 as a nonprofit research lab with $1 billion in support from the likes of Elon Musk, Reid Hoffman, Peter Thiel, Amazon, and former Y Combinator president Sam Altman, who is now the company’s CEO. OpenAI has since restructured as a for-profit company, but it has yet to turn a profit or bring in much revenue. That’s not a problem yet, as OpenAI has gotten a considerable amount of funding from Microsoft, which began investing in OpenAI in 2019. And OpenAI is seizing on the wave of excitement for ChatGPT to promote its API services, which are not free. Neither is the company’s upcoming ChatGPT Plus service.

Malte Mueller/Getty Images

Other big tech companies have for years been working on their own generative AI initiatives. There’s Apple’s Gaudi, Meta’s LLaMA and Make-a-Scene, Amazon’s collaboration with Hugging Face, and Google’s LaMDA (which is good enough that one Google engineer thought it was sentient). But thanks to its early investment in OpenAI, Microsoft had access to the AI project everyone knew about and was trying out.

In January 2023, Microsoft announced it was giving $10 billion to OpenAI, bringing its total investment in the company to $13 billion. From that partnership, Microsoft has gotten what it hopes will be a real challenge to Google’s longtime dominance in web search: a new Bing powered by generative AI.

AI search will give us the first glimpse of how generative AI can be used in our everyday lives … if it works

Tech companies and investors are willing to pour resources into generative AI because they hope that, eventually, it will be able to create or generate just about any kind of content humans ask for. Some of those aspirations may be a long way from becoming reality, but right now, it’s possible that generative AI will power the next evolution of the humble internet search.

After months of rumors that both Microsoft and Google were working on generative AI versions of their web search engines, Microsoft debuted its AI-integrated Bing in January in a splashy media event that showed off all the cool things it could do, thanks to OpenAI’s custom-built technology that powered it. Instead of entering a prompt for Bing to look up and return a list of relevant links, you could ask Bing a question and get a “complete answer” composed by Bing’s generative AI and culled from various sources on the web that you didn’t have to take the time to visit yourself. You could also use Bing’s chatbot to ask follow-up questions to better refine your search results.
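
The underlying pattern, as far as it has been described publicly, looks something like retrieval-augmented generation: run a conventional search, then hand the results to a language model to compose an answer. The sketch below shows that general shape only and is not Microsoft’s implementation; fetch_search_snippets is a hypothetical stand-in for whatever retrieval layer sits in front of the model.

```python
import openai

def fetch_search_snippets(query: str) -> list[str]:
    """Hypothetical stand-in: a real system would query a search index
    and return short excerpts from relevant web pages."""
    raise NotImplementedError

def answer_with_sources(query: str) -> str:
    # Retrieve supporting text, then ask the model to compose an answer
    # grounded in (and only in) that text.
    snippets = fetch_search_snippets(query)
    context = "\n\n".join(f"[{i}] {s}" for i, s in enumerate(snippets))
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only the numbered snippets provided, "
                        "and cite the snippet numbers you relied on."},
            {"role": "user", "content": f"Snippets:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

The grounding instruction in the system message is doing the heavy lifting here, and as Bing’s early stumbles show, models don’t always obey it.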

Microsoft wants you to think the possibilities of these new tools are just about endless. And notably, Bing AI appeared to be ready for the general public when the company announced it last month. It’s now being rolled out to people on an ever-growing wait list and incorporated into other Microsoft products, like its Windows 11 operating system and Skype.

This poses a major threat to Google, which has had the search market sewn up for decades and makes most of its revenue from the ads placed alongside its search results. The new Bing could chip away at Google’s search dominance and its main moneymaker. And while Google has been working on its own generative AI models for years, its AI-powered search engine and corresponding chatbot, which it calls Bard, appear to be months away from debut. All of this suggests that, so far, Microsoft is winning the AI-powered search engine battle.

Or is it?

Once the new Bing made it to the masses, it quickly became apparent that the technology might not be ready for prime time after all. Right out of the gate, Bing made basic factual errors or made things up entirely, a failure mode known as “hallucinating.” What was perhaps more problematic, however, was that its chatbot was also saying some disturbing and weird things. One person asked Bing for movie showtimes, only to be told the movie hadn’t come out yet (it had) because the date was February 2022 (it wasn’t). When the user insisted that it was, at that time, February 2023, Bing AI responded by telling the user they were being rude, had “bad intentions,” and had lost Bing’s “trust and respect.” A New York Times reporter pronounced Bing “not ready for human contact” after its chatbot — with a considerable amount of prodding from the reporter — began expressing its “desires,” one of which was the reporter himself. Bing also compared an AP reporter to Hitler.

In response to the bad press, Microsoft has tried to put some limits and guardrails on Bing, like limiting the number of interactions one person can have with its chatbot. But the question remains: How thoroughly could Microsoft have tested Bing’s chatbot before releasing it if it took only a matter of days for users to get it to give such wild responses?
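
The guardrail itself is conceptually simple. As a rough sketch (a guess at the general shape, not Microsoft’s code; the five-turn cap is illustrative, and Bing’s actual limits have changed over time), a session wrapper can simply refuse to pass further messages to the model once a conversation hits its cap:

```python
MAX_TURNS_PER_SESSION = 5  # illustrative; Bing's real caps have varied

class ChatSession:
    """Caps how many exchanges one conversation may contain."""

    def __init__(self, model_reply):
        self.model_reply = model_reply  # stand-in for the underlying chatbot call
        self.turns = 0

    def ask(self, question: str) -> str:
        if self.turns >= MAX_TURNS_PER_SESSION:
            return "This conversation has reached its limit. Please start a new topic."
        self.turns += 1
        return self.model_reply(question)
```

The logic is that the chatbot’s strangest behavior tended to emerge in long back-and-forths, so cutting conversations short cuts off the failure mode without retraining the model.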

Google, on the other hand, may have been watching this all unfold with a certain sense of glee. Its limited Bard rollout hasn’t exactly gone perfectly, but Bard hasn’t compared any of its users to one of the most reviled people in human history, either. At least, not that we know of. Not yet.

Again, Microsoft and Google aren’t the only companies working on generative AI, but their public releases have put more pressure on others to roll out their offerings as soon as possible, too. ChatGPT’s release and OpenAI’s partnership with Microsoft likely accelerated Google’s plans. Meanwhile, Meta is working to get its generative AI into as many of its own products as possible and just released a large language model of its own, called Large Language Model Meta AI, or LLaMA.

With the rollout of APIs that help developers add ChatGPT and Whisper to their applications, OpenAI seems eager to expand quickly. Some of these integrations seem pretty useful, too. Snapchat now has a chatbot called “My AI” for its paid subscribers, with plans to offer it to everyone soon. Initial reports say it’s just ChatGPT in Snapchat, but with even more restrictions about what it will talk about (no swearing, sex, or violence). Instacart will use ChatGPT in a feature called “Ask Instacart” that can answer customers’ questions about food. And Shopify’s Shop app has a ChatGPT-powered assistant to make personalized recommendations from the brands and stores that use the platform.
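
For completeness, here is what the Whisper side of those APIs looks like in the same v0.27-era openai Python client, wrapped in the kind of crude topic blocklist an app might layer on top of a chatbot. The blocklist is purely illustrative; products like Snapchat’s My AI presumably use far more sophisticated moderation than a keyword check.

```python
import openai

def transcribe(path: str) -> str:
    # Whisper speech-to-text via the v0.27-era openai client.
    with open(path, "rb") as audio_file:
        result = openai.Audio.transcribe("whisper-1", audio_file)
    return result["text"]

BLOCKED_KEYWORDS = {"violence", "sex"}  # illustrative, not any app's real list

def restricted_reply(user_message: str) -> str:
    # Refuse blocked topics before the message ever reaches the model.
    if any(word in user_message.lower() for word in BLOCKED_KEYWORDS):
        return "Sorry, I can't talk about that."
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_message}],
    )
    return response["choices"][0]["message"]["content"]
```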

Generative AI is here to stay, but we don’t yet know if that’s for the best

Bing AI’s problems were just a glimpse of how generative AI can go wrong and have potentially disastrous consequences. That’s why pretty much every company in the field of AI goes out of its way to reassure the public that it’s being very responsible with its products and taking great care before unleashing them on the world. Yet for all of their stated commitment to “building AI systems and products that are trustworthy and safe,” Microsoft and OpenAI either didn’t or couldn’t ensure that the Bing chatbot lived up to those principles, and they released it anyway. Google and Meta, by contrast, were very conservative about releasing their products — until Microsoft and OpenAI gave them a push.

Error-prone generative AI is being put out there by many other companies that have promised to be careful. Some text-to-image models are infamous for producing images with missing or extra limbs. There are chatbots that confidently declare the winner of a Super Bowl that has yet to be played. These mistakes are funny as isolated incidents, but we’ve already seen one publication rely on generative AI to publish authoritative-sounding articles with significant factual errors.

Malte Mueller/Getty Images

These screw-ups have been happening for years. Microsoft had one high-profile AI chatbot flop with its 2016 release of Tay, which Twitter users almost immediately trained to say some really offensive things; Microsoft quickly took it offline. Meta’s BlenderBot, which is based on a large language model, was released in August 2022. It didn’t go well. The bot seemed to hate Facebook, turned racist and antisemitic, and wasn’t very accurate. It’s still available to try out, but after seeing what ChatGPT can do, it feels like a clunky, slow, and weird step backward.

There are even more serious concerns. Generative AI threatens to put a lot of people out of work if it’s good enough to replace them. It could have a profound impact on education. There are also legal questions over the material AI developers use to train their models, which is typically scraped from millions of sources the developers don’t have the rights to. And there are questions of bias, both in the material the models train on and among the people training them.

On the other side, some conservative bomb-throwers have accused generative AI developers of moderating their platforms’ outputs too much, making them “woke” and biased against the right wing. To that end, Musk, a self-proclaimed free-speech absolutist who was an early investor in OpenAI and is now one of its critics, is reportedly considering developing a ChatGPT rival that won’t have content restrictions or be trained on supposedly “woke” material.

And then there’s the fear not of generative AI itself but of the technology it could lead to: artificial general intelligence, or AGI, which would be able to learn, think, and solve problems the way a human can, if not better. That prospect has given rise to science fiction-fueled fears that AGI will lead to an army of super-robots that quickly realize they have no need for humans and either enslave us or wipe us out entirely.

There are plenty of reasons to be optimistic about generative AI’s future, too. It’s a powerful technology with a ton of potential, and we’ve still seen relatively little of what it can do and who it can help. Silicon Valley clearly sees this potential, and venture capitalists like Andreessen Horowitz and Sequoia seem to be all-in. OpenAI is valued at nearly $30 billion, despite not having yet proved itself as a revenue generator.

Generative AI has the power to upend a lot of things, but that doesn’t necessarily mean it’ll make them worse. Its ability to automate tasks may give humans more time to focus on the stuff that can’t be done by increasingly sophisticated machines, as has been true for technological advances before it. And in the near future — once the bugs are worked out — it could make searching the web better. In the years and decades to come, it might even make everything else better, too.

Oh, and in case you were wondering: No, generative AI did not write this explainer.

9 questions about the threats to ban TikTok, answered

A TikTok ban would surely upset many of the nation’s teens. | iStockphoto/Getty Images

So you heard TikTok’s being banned. Here’s what’s actually happening.

Since its introduction to the US in 2018, TikTok has been fighting for its right to exist. First, the company struggled to convince the public that it wasn’t just for pre-teens making cringey memes; then it had to make the case that it wasn’t responsible for the platform’s rampant misinformation (or cultural appropriation … or pro-anorexia content … or potentially deadly trends … or general creepiness, etc). But mostly, and especially over the past three years, TikTok has been fighting against increased scrutiny from US lawmakers about its ties to the Chinese government via its China-based parent company, ByteDance.

On March 1, the US House Foreign Affairs Committee voted to give President Biden the power to ban TikTok. But banning TikTok isn’t as simple as flipping a switch and deleting the app from every American’s phone. It’s a complex knot of technical and political decisions that could have consequences for US-China relations, for the cottage industry of influencers that has blossomed over the past five years, and for culture at large. The whole thing could also be overblown.

The thing is, nobody really knows if a TikTok ban, however broad or all-encompassing, will even happen, or how it would work if it did. It’s been three years since the US government began seriously considering the possibility, and the future remains as murky as ever. Here’s what we know so far.

1. Do politicians even use TikTok? Do they know how it works or what they’re trying to ban?

Among the challenges lawmakers face in trying to ban TikTok outright is a public relations problem. Americans already think their government leaders are too old, ill-equipped to deal with modern tech, and generally out of touch. A kind of tradition has even emerged whenever Congress tries to do oversight of Big Tech: A committee will convene a hearing, tech CEOs will show up, and then lawmakers make fools of themselves by asking questions that reveal how little they know about the platforms they’re trying to rein in.

Congress has never heard from TikTok’s CEO, Shou Zi Chew, in a public committee hearing before, but representatives will get their chance this month. Unlike the American social media platforms they’ve scrutinized before, TikTok is an app few members of Congress have much firsthand experience with. Few use it for campaign purposes, and even fewer use it for official purposes. Though at least a few dozen members have some kind of account, most don’t have big followings. There are some notable exceptions: Sen. Bernie Sanders and Reps. Katie Porter of California, Jeff Jackson of North Carolina, and Ilhan Omar of Minnesota use it frequently for official and campaign reasons and have big followings, while Sens. Jon Ossoff of Georgia and Ed Markey of Massachusetts are inactive on it after using it extensively during their campaigns in 2020 and 2021. —Christian Paz

2. Who is behind these efforts? Who is trying to ban TikTok or trying to impose restrictions?

While TikTok doesn’t have vocal defenders in Congress, it does have a long list of vocal antagonists from across the country, who span party and ideological lines in both the Senate and the House.

The leading Republicans hoping to ban TikTok are Sens. Marco Rubio of Florida and Josh Hawley of Missouri, and Rep. Mike Gallagher of Wisconsin, who is the new chairman of the House select committee on competition with China. All three have introduced some kind of legislation attempting to ban the app or force its parent company, ByteDance, to sell the platform to an American company. Many more Republicans in both chambers who are critics of China, like Sens. Tom Cotton of Arkansas and Ted Cruz of Texas, endorse some kind of tougher restriction on the app.

Independent Sen. Angus King of Maine has also joined Rubio in introducing legislation that would ban the app.

Democrats are less united in their opposition to the platform. Sens. Mark Warner of Virginia and Michael Bennet of Colorado are two vocal skeptics. Bennet has called for Apple and Google to remove the app from their app stores, while Warner wants stronger guardrails for tech companies that would ban a “category of applications” instead of a single app (that’s the same position Sen. Elizabeth Warren of Massachusetts is taking). In the House, Gallagher’s Democratic counterpart, Rep. Raja Krishnamoorthi of Illinois, has also called for a ban or tougher restrictions, though he doesn’t think a ban will happen this year. —Christian Paz

3. What is the relationship between TikTok and the Chinese government? Do they have users’ info?

If you ask TikTok, the company will tell you there is no relationship and that it has not and would not give US user data to the Chinese government.

But TikTok is owned by ByteDance, a company based in Beijing that is subject to Chinese laws. Those laws compel businesses to assist the government whenever it asks, which many believe would force ByteDance to hand over any user data it has access to. Or it could be ordered to push certain kinds of content, like propaganda or disinformation, on American users.

We don’t know if this has actually happened at this point. We only know that it could, assuming ByteDance even has access to TikTok’s US user data and algorithms. TikTok has been working hard to convince everyone that it has protections in place that wall off US user data from ByteDance and, by extension, the Chinese government. —Sara Morrison

4. What happens to people whose income comes from TikTok? If there is a ban, is it even possible for creators to find similar success on Reels or Shorts or other platforms?

Most people who’ve counted on TikTok as their main source of revenue have long been prepared for a possible ban. Fifteen years into the influencer industry, it’s old hat that, eventually, social media platforms will betray their most loyal users in one way or another. Plus, after President Trump attempted a ban in the summer of 2020, many established TikTokers diversified their online presence by focusing more of their efforts on other platforms like Instagram Reels or YouTube Shorts.

That doesn’t mean that losing TikTok won’t hurt influencers. No other social platform is quite as good as TikTok at turning a completely unknown person or brand into a global superstar, thanks to its emphasis on discovery versus keeping people up to date on the users they already follow. Which means that without TikTok, it’ll be far more difficult for aspiring influencers to see the kind of overnight success enjoyed by OG TikTokers.

The good news is that there’s likely more money to be made on other platforms, specifically Instagram Reels. Creators can sometimes make tens of thousands of dollars per month from Instagram’s creator fund, which rewards users with money based on the number of views their videos get. Instagram is also viewed as a safer, more predictable platform for influencers in their dealings with brands, which can use an influencer’s previous metrics to set a fair rate for the work. (It’s a different story on TikTok, where even a post by someone with millions of followers could get buried by the algorithm, and it’s less evident that past success will continue in the future.) —Rebecca Jennings

5. What does the TikTok ban look like to me, the user? Am I going to get arrested for using TikTok?

Almost certainly not. The most likely way a ban would happen would be through an executive order that cites national security grounds to forbid business transactions with TikTok. Those transactions would likely be defined as services that facilitate the app’s operations and distribution. Which means you might have a much harder time finding and using TikTok, but you won’t go to jail if you do. —Sara Morrison

6. How is it enforced? What does the TikTok ban look like to the App Store and other businesses?

The most likely path — and the one that lawmakers have zeroed in on — is using the International Emergency Economic Powers Act, which gives the president broader powers than he otherwise has. President Trump used this when he tried to ban TikTok in 2020, and lawmakers have since introduced TikTok-banning bills that essentially call for the current president to try again, but this time with additional measures in place that might avoid the court battles that stalled Trump’s attempt.

Trump’s ban attempt does give us some guidance on what such a ban would look like, however. The Trump administration spelled out some examples of banned transactions, including app stores not being allowed to carry it and internet hosting services not being allowed to host it. If you have an iPhone, it’s exceedingly difficult to get a native app on your phone that isn’t allowed in Apple’s App Store — or to get updates for that app if you downloaded it before this hypothetical ban came down. It’s also conceivable that companies would be prohibited from advertising on the app and content creators wouldn’t be able to use TikTok’s monetization tools.

There are considerable civil and criminal penalties for violating the IEEPA. Don’t expect Apple or Google or Mr. Beast to do so. —Sara Morrison

7. On what grounds would TikTok be reinstated? Are there any changes big enough that would make it “safe” in the eyes of the US government?

TikTok is already trying to make those changes to convince a multi-agency government panel that it can operate in the US without being a national security risk. If that panel, called the Committee on Foreign Investment in the United States (CFIUS), can’t reach an agreement with TikTok, then it’s doubtful there’s anything more TikTok can do.

Well, there is one thing: If ByteDance sold TikTok off to an American company — something that was considered back in the Trump administration — most of its issues would go away. But even if ByteDance wanted to sell TikTok, it may not be allowed to. The Chinese government would have to approve such a sale, and it’s made it pretty clear that it won’t. —Sara Morrison

8. Is there any kind of precedent for banning apps?

China and other countries do ban US apps. The TikTok app doesn’t even exist in China. It has a domestic version, called Douyin, instead. TikTok also isn’t in India, which banned it in 2020. So there is precedent for other countries banning apps, including TikTok. But these are different countries with different laws. That kind of censorship doesn’t really fly here. President Trump’s attempt to ban TikTok in 2020 wasn’t going well in the courts, but we never got an ultimate decision because Trump lost the election and the Biden administration rescinded the order.

The closest thing we have to the TikTok debacle is probably Grindr. A Chinese company bought the gay dating app in 2018, only to be forced by CFIUS to sell it off the next year. It did, thus avoiding a ban. So we don’t know how a TikTok ban would play out if it came down to it. —Sara Morrison

9. How overblown is this?

At the moment, there’s no indication that the Chinese government has asked for private data of American citizens from ByteDance, or that the parent company has provided that information to Chinese government officials. But American user data has reportedly been accessed by China-based employees of ByteDance, according to a BuzzFeed News investigation last year. The company has also set up protocols under which employees abroad could remotely access American data. The company stresses that this is no different from how other “global companies” operate and that it is moving to funnel all US data through American servers. But the possibility of the Chinese government having access to this data at some point is fueling the national security concerns in the US.

This doesn’t speak to the other reasons driving government scrutiny of the app: data privacy and mental health. Some elected officials, like Markey, the senator from Massachusetts, would like to see stricter rules and regulations limiting the kind of information that younger Americans have to give up when using TikTok and other platforms, while others would like a closer look at limits on when children can use the app as part of broader regulations on Big Tech. Democratic members of Congress have also cited concerns about how much time children are spending online, the potentially detrimental effects of social media, including TikTok, on children, and the greater mental health challenges younger Americans are facing today. TikTok is already making efforts to fend off this criticism: At the start of March, it announced new screen time limits for users under the age of 18. But even those measures are more like suggestions. —Christian Paz

TikTok isn’t really limiting kids’ time on its app

TikTok’s younger users will now be told when they’ve been watching for a while. | Westend61/Getty Images

Teens can still click right on through the new screen time limit.

Amid growing concerns (and lawsuits) about social media’s impact on the mental health of children, TikTok announced on Wednesday that it’s setting a 60-minute daily screen time limit for users under 18 and adding some new parental controls. Those “limits,” however, are really more like suggestions. There are ways young users can keep using the app even after they hit the limit.

The news comes amid a larger discussion about the harms of social media on younger people, as well as an enormous amount of scrutiny of TikTok itself over its ties to China. And while the updates make TikTok look like it’s taking the lead on mitigating those harms, they likely won’t be enough to assuage the national security concerns many lawmakers have (or say they have) about TikTok. They might not even be enough to assuage lawmakers’ concerns about social media’s harm to children.

In the coming weeks, minor users will have a 60-minute daily screen time limit applied by default; when they hit it, a prompt will pop up in the app notifying them and giving them the option to continue.

For users under 13, a parent or guardian will have to enter a passcode every 30 minutes to give their kid additional screen time. No parent code, no TikTok.

But users aged 13 to 17 can enter their own passcode and continue to use the app. They can also opt out of the 60-minute default screen time limit, but if they spend more than 100 minutes on TikTok a day they will be forced to set their own limits — which they can then bypass with their code. They’ll also get a weekly recap of how much time they’ve spent on the app. TikTok believes these measures will make teens more aware of the time they spend on the app, as they’re forced to be more active in choosing to do so.

Finally, parents who link their TikTok accounts to their children’s will have some additional controls and information, like knowing how much time their kids spend on the app and how often it’s been opened, setting times to mute notifications, and being able to set custom time limits for different days.

New controls for your (or your kid’s) TikTok experience. | TikTok

The Tech Oversight Project, a Big Tech accountability group, was not impressed by TikTok’s announcement, calling it “a fake ploy to make parents feel safe without actually making their product safe.”

“Companies like YouTube, Instagram, and TikTok centered their business models on getting kids addicted to the platforms and increasing their screen time to sell them ads,” Kyle Morse, Tech Oversight Project’s deputy executive director, said in a statement. “By design, tech platforms do not care about the well-being of children and teens.”

TikTok has long been criticized for its addictive nature, which causes some users to spend hours mindlessly scrolling through the app. It has implemented various screen time management tools over the years, and it currently allows users to set their own time limits and set up reminders to take breaks or go to sleep. The new controls will let users customize those settings even more. TikTok says those controls will soon be available to adult users, too, but adults won’t get the time limit notice by default like kids will.

TikTok is one of several social media apps that have introduced options for minor users. Meta allows parents to limit how much time their kids spend on Instagram, for instance. And the devices kids use these apps on also have various parental controls. But those aren’t enabled by default like TikTok’s 60-minute notice will be.

This all comes as lawmakers appear to be getting serious about laws that would regulate if and how children use social media. President Biden has said in both of his State of the Union addresses that social media platforms are profiting from “experimenting” on children and must be held accountable. Sen. Josh Hawley (R-MO) wants to ban children under 16 from using social media at all. On the less extreme side, Sens. Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) will be reintroducing a bipartisan bill called the Kids Online Safety Act, which would force social media platforms to have controls over kids’ usage and give parents the ability to set them.

TikTok specifically is also facing the possibility of a US ban. Lawmakers concerned about its China-based parent company have become increasingly vocal about the app and are introducing bills to ban it, believing China could use it to access US user data or push propaganda or misinformation onto US users. TikTok is already banned on federal government devices, as well as on government-owned devices in the majority of states. The company is currently in talks with the government on an agreement that would alleviate national security concerns and let it continue to operate in the country, but that process has dragged on for several years.

In the meantime, TikTok can say it’s taken the lead on controlling kids’ screen time with its default setting, even if its mostly voluntary measures don’t really do all that much. That might — but probably won’t — win it some points with lawmakers who want to ban it entirely. And that would be the biggest screen time control of them all.

Read More 

Section 230, the internet law the Supreme Court could change, explained

The Supreme Court is considering two cases that could change the internet as we know it. | Eric Lee/Bloomberg via Getty Images

The pillar of internet free speech seems to be everyone’s target.

You may have never heard of it, but Section 230 of the Communications Decency Act is the legal backbone of the internet. The law was created almost 30 years ago to protect internet platforms from liability for many of the things third parties say or do on them.

Decades later, it’s never been more controversial. People from both political parties and all three branches of government have threatened to reform or even repeal it. The debate centers on whether we should reconsider a law from the internet’s infancy that was meant to help struggling websites and internet-based companies grow. After all, these internet-based businesses are now some of the biggest and most powerful in the world, and users’ ability to speak freely on them bears much bigger consequences.

While President Biden pushes Congress to pass laws to reform Section 230, its fate may lie in the hands of the judicial branch, as the Supreme Court is considering two cases — one involving YouTube and Google, another targeting Twitter — that could significantly change the law and, therefore, the internet it helped create.

Section 230 says that internet platforms hosting third-party content are not liable for what those third parties post (with a few exceptions). That third-party content could include things like a news outlet’s reader comments, tweets on Twitter, posts on Facebook, photos on Instagram, or reviews on Yelp. If a Yelp reviewer were to post something defamatory about a business, for example, the business could sue the reviewer for libel, but thanks to Section 230, it couldn’t sue Yelp.

Without Section 230’s protections, the internet as we know it today would not exist. If the law were taken away, many websites driven by user-generated content would likely go dark. A repeal of Section 230 wouldn’t just affect the big platforms that seem to get all the negative attention, either. It could affect websites of all sizes and online discourse.

Section 230’s salacious origins

In the early ’90s, the internet was still in its relatively unregulated infancy. There was a lot of porn floating around, and anyone, including impressionable children, could easily find and see it. This alarmed some lawmakers. In an attempt to regulate this situation, in 1995 lawmakers introduced a bipartisan bill called the Communications Decency Act, which would extend laws governing obscene and indecent use of telephone services to the internet. This would also make websites and platforms responsible for any indecent or obscene things their users posted.

In the midst of this was a lawsuit between two companies you might recognize: Stratton Oakmont and Prodigy. The former is featured in The Wolf of Wall Street, and the latter was a pioneer of the early internet. In 1994, Stratton Oakmont sued Prodigy for defamation after an anonymous user claimed on a Prodigy bulletin board that the financial company’s president engaged in fraudulent acts. The court ruled in Stratton Oakmont’s favor, saying that because Prodigy moderated posts on its forums, it exercised editorial control that made it just as liable for the speech on its platform as the people who actually made that speech. Meanwhile, Prodigy’s rival online service, CompuServe, had been found not liable for a user’s speech in an earlier case, precisely because CompuServe didn’t moderate content.

Fearing that the Communications Decency Act would stop the burgeoning internet in its tracks, and mindful of the Prodigy decision, then-Rep. (now Sen.) Ron Wyden and Rep. Chris Cox authored an amendment to the CDA that said “interactive computer services” were not responsible for what their users posted, even if those services engaged in some moderation of that third-party content.

“What I was struck by then is that if somebody owned a website or a blog, they could be held personally liable for something posted on their site,” Wyden told Vox’s Emily Stewart in 2019. “And I said then — and it’s the heart of my concern now — if that’s the case, it will kill the little guy, the startup, the inventor, the person who is essential for a competitive marketplace. It will kill them in the crib.”

As the beginning of Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” These are considered by some to be the 26 words that created the internet, but the law says more than that.

Section 230 also allows those services to “restrict access” to any content they deem objectionable. In other words, the platforms themselves get to choose what is and what is not acceptable content, and they can decide to host it or moderate it accordingly. That means the free speech argument frequently employed by people who are suspended or banned from these platforms — that their Constitutional right to free speech has been violated — doesn’t apply. Wyden likens the dual nature of Section 230 to a sword and a shield for platforms: They’re shielded from liability for user content, and they have a sword to moderate it as they see fit.

The Communications Decency Act was signed into law in 1996. The indecency and obscenity provisions about transmitting porn to minors were immediately challenged by civil liberty groups and struck down by the Supreme Court, which said they were too restrictive of free speech. Section 230 stayed, and so a law that was initially meant to restrict free speech on the internet instead became the law that protected it.

This protection has allowed the internet to thrive. Think about it: Websites like Facebook, Reddit, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so. On the other hand, a website that didn’t moderate anything at all would quickly become a spam-filled cesspool that few people would want to swim in.

That doesn’t mean Section 230 is perfect. Some argue that it gives platforms too little accountability, allowing some of the worst parts of the internet to flourish. Others say it allows platforms that have become hugely influential and important to suppress and censor speech based on their own whims or supposed political biases. Depending on who you talk to, internet platforms are either using the sword too much or not enough. Either way, they’re hiding behind the shield to protect themselves from lawsuits while they do it. Though it has been a law for nearly three decades, Section 230’s existence may have never been as precarious as it is now.

The Supreme Court might determine Section 230’s fate

Justice Clarence Thomas has made no secret of his desire for the court to consider Section 230, saying in multiple opinions that he believes lower courts have interpreted it to give too-broad protections to what have become very powerful companies. He got his wish in February 2023, when the court heard two similar cases involving the law. In both, plaintiffs argued that their family members were killed by terrorists who posted content on those platforms. In the first, Gonzalez v. Google, the family of a woman killed in a 2015 terrorist attack in France said YouTube promoted ISIS videos and sold advertising on them, thereby materially supporting ISIS. In Twitter v. Taamneh, the family of a man killed in a 2017 ISIS attack in Turkey said the platform didn’t go far enough to identify and remove ISIS content, in violation of the Justice Against Sponsors of Terrorism Act — an argument that, if it succeeds, could mean Section 230 doesn’t apply to such content.

These cases give the Supreme Court the chance to reshape, redefine, or even repeal the foundational law of the internet, which could fundamentally change it. And while the Supreme Court chose to take these cases on, it’s not certain that it will rule in favor of the plaintiffs. In the late February oral arguments for Gonzalez v. Google, several justices seemed unconvinced that they could or should, especially considering the monumental possible consequences of such a decision. In Twitter v. Taamneh, the justices focused more on whether and how the Sponsors of Terrorism law applied to tweets than on Section 230. The rulings are expected in June.

In the meantime, don’t expect the original authors of Section 230 to go away quietly. Wyden and Cox submitted an amicus brief to the Supreme Court for the Gonzalez case, where they said: “The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Given the enormous volume of content created by Internet users today, Section 230’s protection is even more important now than when the statute was enacted.”

Congress and presidents are getting sick of Section 230, too

In 2018, two bills — the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) — were signed into law, changing parts of Section 230. The updates mean that platforms can now be deemed responsible for prostitution ads posted by third parties. These changes were ostensibly meant to make it easier for authorities to go after websites that were used for sex trafficking, but they did so by carving out an exception to Section 230. That could open the door to even more exceptions in the future.

Amid all of this was a growing public sentiment that social media platforms like Twitter and Facebook were becoming too powerful. In the minds of many, Facebook even influenced the outcome of the 2016 presidential election by offering up its user data to shady outfits like Cambridge Analytica. There were also allegations of anti-conservative bias. Right-wing figures who once rode the internet’s relative lack of moderation to fame and fortune were being held accountable for violating rules against hateful content and kicked off the very platforms that helped create them. Alex Jones and his expulsion from Facebook and other social media platforms — even Twitter under Elon Musk won’t let him back — is perhaps the best example of this.

In a 2018 op-ed, Sen. Ted Cruz (R-TX) claimed that Section 230 required the internet platforms it was designed to protect to be “neutral public forums.” The law doesn’t actually say that, but many Republican lawmakers have since introduced legislation that would impose such a neutrality requirement. On the other side, Democrats have introduced bills that would hold social media platforms accountable if they didn’t do more to prevent harmful content or if their algorithms promoted it.

There are some bipartisan efforts to change Section 230, too. The EARN IT Act from Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT), for example, would remove Section 230 immunity from platforms that didn’t follow a set of best practices to detect and remove child sexual abuse material. The partisan bills haven’t really gotten anywhere in Congress, but EARN IT, which was introduced in the last two sessions, passed out of committee in the Senate and was ready for a floor vote. That vote never came, but Blumenthal and Graham have already signaled that they plan to reintroduce EARN IT this session for a third try.

In the executive branch, former President Trump became a very vocal critic of Section 230 in 2020 after Twitter and Facebook started deleting and tagging his posts that contained inaccuracies about Covid-19 and mail-in voting. He issued an executive order that said Section 230 protections should only apply to platforms that have “good faith” moderation, and then called on the FCC to make rules about what constituted good faith. This didn’t happen, and President Biden revoked the executive order months after taking office.

But Biden isn’t a fan of Section 230, either. During his presidential campaign, he said he wanted it repealed. As president, Biden has said he wants it to be reformed by Congress. Until Congress can agree on what’s wrong with Section 230, however, it isn’t likely to pass a law that significantly changes the statute.

However, some Republican states have been making their own anti-Section 230 moves. In 2021, Florida passed the Stop Social Media Censorship Act, which prohibits certain social media platforms from banning politicians or media outlets. That same year, Texas passed HB 20, which forbids large platforms from removing or moderating content based on a user’s viewpoint.

Neither law is currently in effect. A federal judge blocked the Florida law in 2022 on the grounds that it likely violates free speech protections as well as Section 230. The state has appealed to the Supreme Court. The Texas law has made a little more progress: A district court blocked it last year, and the Fifth Circuit controversially reversed that decision before deciding to stay the law in order to give the Supreme Court the chance to take the case. We’re still waiting to see if it does.

If Section 230 were to be repealed — or even significantly reformed — it really could change the internet as we know it. It remains to be seen if that’s for better or for worse.

Update, February 23, 2023, 3 pm ET: This story, originally published on May 28, 2020, has been updated several times, most recently with the latest news from the Supreme Court cases related to Section 230.

Read More 

Social media used to be free. Not anymore.

Sandra Hunke, a plumber who is one of the most popular craft influencers in Germany with 120,000 Instagram followers and now also works part-time as a model, poses for a photo in her workshop in North Rhine-Westphalia. | Friso Gentsch/picture alliance via Getty Images

You used to pay for social media with your eyeballs. Now Meta and Twitter want your money, too.

“If you’re not paying for the product, you are the product” has long been a common refrain about the business of social media.

The saying implies that you, the user, aren’t paying for apps like Instagram and Twitter because you’re giving away something else: your attention (and sometimes your content), which is sold to advertisers.

But now, this free model of social media — subsidized by advertising — is under pressure. Social media companies can’t make as much money off their free users as they used to. A weaker advertising market, privacy restrictions imposed by Apple that make it harder to track users and their preferences, and the perpetual threat of regulation have made it harder for social media apps to sell ads.

Which is why we’re seeing the beginnings of what might be a new era of social media: pay-to-play.

On Sunday, Meta became the latest and largest major social media company to announce a paid version of its products with the “Meta Verified” program. Facebook and Instagram will each charge users $12 a month for a blue verification badge, more protection against account impersonation, access to “a real person” in customer support to help with common account issues, and — most importantly — “increased reach and visibility.” That means users who pay will have their content shown more in search, comments, and recommendations. The company is testing the feature in Australia and New Zealand this week and said it will be rolled out in the US and other countries soon.

Meta’s news comes a few months after Twitter released an $8-a-month paid verification program as part of new owner Elon Musk’s revamped Twitter Blue product. While Meta is notorious for cloning its competitors, its subscription offering isn’t just another case of copycatting. It’s part of an industry-wide trend. In recent years, Snap, YouTube, and Discord have introduced or expanded premium products that charge users for special perks. Snap gives subscribers early access to new features, YouTube serves them fewer ads, and Discord provides more customization options for people’s chat channels.

Now, Meta — which owns the largest social media apps in the world — is validating the trend of a two-tiered user system in social media. In this system, only paid users will receive services that you might otherwise expect for free, like proactive protection from fraudsters who try to impersonate you, and a direct line of contact to customer support when you’re having technical difficulties. Meta says it’s still offering some level of basic support to free users, but beyond that, it needs to charge to cover the cost.

But the most newsworthy part of Meta’s paid verification plan is not that users who pay will get verified or receive better customer support — it’s that they’ll also get more visibility on Facebook and Instagram.

In the past, in theory, everyone had the same opportunity to be seen on social media. Now, if you pay $12 a month for Meta Verified, you have better odds of other people finding your account and posts — because Meta’s apps will uprank your content over that of non-paying users. It’s a system that creators who run professional businesses on Instagram and Facebook might find attractive but could also jeopardize the quality of users’ experience if it’s not executed carefully.

With this new program, Meta is effectively blurring the line between advertising and organic content more than ever before. And with many users already complaining that Instagram can feel like a virtual shopping mall, full of creators plugging their own content and products, it’s hard to imagine that people will enjoy an even more commercialized experience.

We don’t yet know what the full effects of Meta Verified will be on the Facebook ecosystem. But it’s clear that, moving forward, if you want to be fully seen, trusted, and taken care of on Facebook, Instagram, Twitter, and other platforms engaging in a premium model, you’ll need to pay up.
Security and support are now a luxury, not a given

If someone steals your credit card and impersonates you, you expect the bank to protect you. If you go to the supermarket and buy spoiled milk, you expect the cashier to give you a refund. Consumers expect a basic level of customer service from businesses.

So it’s understandable why some users are reacting to Meta’s news by arguing that basic services like customer support and account security should be free.

“This really should just be part of the core product, the user should not have to pay for this,” commented one user on Mark Zuckerberg’s Facebook page after the announcement. Zuckerberg responded that Facebook will still provide some basic support to everyone — but that checking people’s government IDs to verify them and providing on-call customer service is expensive, and Meta needs to charge to cover the cost.

Social media’s customer support and security offerings have always been somewhat broken and unreliable. Apps like Facebook — which serves 2 billion people a day, for free — have never effectively scaled basic programs like customer helplines to assist people who are locked out of their accounts, and verification has always been selective. Often, the users who receive personal attention are VIPs like government officials, celebrities, media figures, or people who happened to know someone who worked at the company.

So while it may seem like Facebook is charging for something it used to do for free, it’s actually charging for something it never did well.

If you’re an average user, you may not want to pay $24 a month for a blue badge on both Facebook and Instagram, but if you run a business on these apps, it’s a different story.

Mae Karwowski, CEO of the social media influencer marketing firm Obviously, said that she could easily see “so many people who run business empires” on social media paying for the Meta Verified package as the “next logical step,” because it could bring them even more business. The influencer industry on social media was worth an estimated $16 billion in 2022, and although TikTok is growing, Instagram is still the most popular influencer marketing platform for brands. Facebook and Instagram are also especially popular with business owners: There are over 200 million active businesses on Facebook alone, many of which run their operations on the network.

The blue badge is important to creators and business owners, Karwowski said, because “it’s important to some people to have that credibility, or perceived credibility.”

Before Meta announced this paid tier, Karwowski said clients would often ask her for help getting verified on Instagram. You can apply to be verified on Instagram if you make the case that you’re a notable public figure. But since so many people apply, it can take a long time to get your application through.

“Previously, it would have to be like, ‘Oh, like so-and-so’s best friend’s cousin works at Instagram.’ And you find them on LinkedIn and send them a message,” said Karwowski. “There was very little standardization. At least now there’s some process.”

Still, some influencers Recode spoke with said they didn’t see enough value in Meta Verified.

“I don’t have a lot of people that are impersonating me. So that wouldn’t really make it very important to me,” said Oorbee Roy, a skateboarder and mom who goes by the handle @auntyskates. “And the other thing is, I feel like I’m close to getting [verified] on my own.”

What Roy did see as valuable was Instagram’s promise of increased visibility.

“I have content that’s very specific to a niche, and I would love to be able to get to that niche,” she said.

That gets us to our next point, about arguably the most valuable part of Facebook and Instagram’s pay-to-play perks: more attention.
Paying for reach

Before this announcement, if you wanted to boost a post or your account on Facebook or Instagram, you would have to run it as an ad — one that’s clearly labeled as such to users: an ad, “sponsored,” or “paid content.” (Instagram has long had a problem with creators posting unlabeled sponcon, but that wasn’t by design; users were essentially breaking the platform’s rules.)

Now, Instagram and Facebook are actually building in the ability for people to pay for eyeballs, without marking that promotion as advertising.

“The notion that you’re going to pay some subscription fee and then you’ll feature more prominently in the algorithm — there’s a name for that: It’s advertising,” said Jason Goldman, a former VP of product at Twitter from 2007 to 2010. “It’s just a different way of pricing it.”

While these subscriptions may help Instagram and Facebook make more money at a time when their traditional advertising business is struggling, they could also jeopardize the apps’ standing with users who don’t want to see more promoted content.

“It’s kind of disappointing to see Instagram start to trend toward that commercial, more money-seeking business,” said Erin Sheehan, a New York City-based lifestyle influencer with over 12,000 followers who goes by the handle @girlmeetsnewyorkcity.

“I kind of wanted to switch over to TikTok and get into that organic market, and I feel like this might even push me that step further,” said Sheehan. “Because if I don’t subscribe, then I may find that my content is even more hidden than it is now.”

TikTok has attracted a new generation of creators, many of whom switched to the platform from older apps like Instagram because they say it’s easier to go viral even if you’re a relative amateur creating what Sheehan called “organic content.” The app currently doesn’t have a premium subscription model, but it’s successfully expanding its advertising business at a time when competitors like Meta and Snap have seen their ad businesses slow down.

Meta and other social media incumbents like YouTube have been battling TikTok for younger users and creators, with Instagram in particular rolling out new programs to court creators for Reels, its TikTok clone. So it’s imperative that Instagram and Facebook make sure that users aren’t turned off by promoted content from paid subscribers, and that creators keep wanting to share their content on the apps.

Meta told Recode that it’s still focused on surfacing content that people want to see.

“Our intent is to surface content that we think people will enjoy, and that doesn’t change with the increased visibility we offer through Meta Verified,” said Meta spokesperson Paige Cohen, in part, in a statement. “As we test and learn with Meta Verified, we’ll be focused on ensuring we’re enhancing the visibility of subscribers’ content in a way that is most valuable to the ecosystem at large.”

Meta also said that it’s not prioritizing paid content everywhere: On Instagram, for example, subscribers will get prioritization in Explore and Reels but not in the main feed. Reels, however, is a major focus for the company as it competes with TikTok in short-form video, so prioritization there is in some ways more important than the feed.

It’s still early days for this pay-to-play social media model. But from what we know so far, only a small subset of users may be willing to pay. It’s not a perfect comparison because it’s a different platform with a distinct audience, but Twitter reportedly had only 0.2 percent of its total user base paying for Twitter Blue as of mid-January. (The service launched in November.)

Meta may have a better chance of finding more customers for its verified program because of its sheer scale (Meta has over 10 times the number of users as Twitter), the fact that it has more influencers who run real businesses on the platform, and that it’s rolling this out in a more measured way than Twitter did.

But there are major risks to this pay-to-play model. Whether it’s normies posting pictures of their dogs and babies or professional influencers building their followings and careers, social media networks are built on their users. Creating tiers of those users could turn off some people from sharing at all. At a time when many young people are turning away from social media, by either logging off completely or seeking alternative apps that feel more authentic and less commercial, Meta could be pushing away the users it needs the most to stay relevant in the future.

Sandra Hunke, a plumber who is one of the most popular craft influencers in Germany with 120,000 Instagram followers and now also works part-time as a model, poses for a photo in her workshop in North Rhine-Westphalia. | Friso Gentsch/picture alliance via Getty Images

You used to pay for social media with your eyeballs. Now Meta and Twitter want your money, too.

“If you’re not paying for the product, you are the product” has long been a common refrain about the business of social media.

The saying implies that you, the user, aren’t paying for apps like Instagram and Twitter because you’re giving away something else: your attention (and sometimes your content), which is sold to advertisers.

But now, this free model of social media — subsidized by advertising — is under pressure. Social media companies can’t make as much money off their free users as they used to. A weaker advertising market, privacy restrictions imposed by Apple that make it harder to track users and their preferences, and the perpetual threat of regulation have made it harder for social media apps to sell ads.

Which is why we’re seeing the beginnings of what might be a new era of social media: pay-to-play.

On Sunday, Meta became the latest and largest major social media company to announce a paid version of its products with the “Meta Verified” program. Facebook and Instagram will each charge users $12 a month for a blue verification badge, more protection against account impersonation, access to “a real person” in customer support to help with common account issues, and — most importantly — ”increased reach and visibility.” That means users who pay will have their content shown more in search, comments, and recommendations. The company is testing the feature in Australia and New Zealand this week and said it will be rolled out in the US and other countries soon.

Meta’s news comes a few months after Twitter released an $8-a-month paid verification program as part of new owner Elon Musk’s revamped Twitter Blue product. While Meta is notorious for cloning its competitors, its subscription offering isn’t just another case of copycatting. It’s part of an industry-wide trend. In recent years, Snap, YouTube, and Discord have introduced or expanded premium products that charge users for special perks. Snap gives subscribers early access to new features, YouTube serves them fewer ads, and Discord provides more customization options for people’s chat channels.

Now, Meta — which owns the largest social media apps in the world — is validating the trend of a two-tiered user system in social media. In this system, only paid users will receive services that you might otherwise expect for free, like proactive protection from fraudsters who try to impersonate you, and a direct line of contact to customer support when you’re having technical difficulties. Meta says it’s still offering some level of basic support to free users, but beyond that, it needs to charge to cover the cost.

But the most newsworthy part of Meta’s paid verification plan is not about how users who pay will get verified, or receive better customer support — but about how they’ll also get more visibility on Facebook and Instagram.

In the past, in theory, everyone had the same opportunity to be seen on social media. Now, if you pay $12 a month on Meta Verified, you have better odds of other people finding your account and posts — because Meta’s apps will uprank your content over that of other non-paying users. It’s a system that creators who run professional businesses on Instagram and Facebook might find attractive but could also jeopardize the quality of users’ experience if it’s not executed carefully.

With this new program, Meta is effectively blurring the line between advertising and organic content more than ever before. And with many users already complaining that Instagram can feel like a virtual shopping mall, full of creators plugging their own content and products, it’s hard to imagine that people will enjoy an even more commercialized experience.

We don’t yet know the full effects of what Meta Verified will be on the Facebook ecosystem. But it’s clear that, moving forward, if you want to be fully seen, trusted, and taken care of on Facebook, Instagram, Twitter, and other platforms engaging in a premium model, you’ll need to pay up.

Security and support is now a luxury, not a given

If someone steals your credit card and impersonates you, you expect the bank to protect you. If you go to the supermarket and buy spoiled milk, you expect the cashier will give you a refund. Consumers expect a basic level of customer service from businesses.

So it’s understandable why some users are reacting to Meta’s news by arguing that basic services like customer support and account security should be free.

“This really should just be part of the core product, the user should not have to pay for this,” commented one user on Mark Zuckerberg’s Facebook page after the announcement, to which Zuckerberg responded saying that Facebook will still provide some basic support to everyone — but that checking people’s government IDs to verify them and providing on-call customer service is expensive, and Meta needs to charge to cover the cost.

Social media’s customer support and security offerings have always been somewhat broken and unreliable. Apps like Facebook — which serves 2 billion people a day, for free — have never effectively scaled basic programs like customer helplines to assist people who are locked out of their accounts, and verification has always been selective. Often, the users who receive personal attention are VIPs like government officials, celebrities, media figures, or people who happened to know someone who worked at the company.

So while it may seem like Facebook is charging for something it used to do for free, it’s actually charging for something it never did well.

If you’re an average user, you may not want to pay $24 a month for blue badges on both Facebook and Instagram, but if you run a business on these apps, it’s a different story.

Mae Karwowski, CEO of the social media influencer marketing firm Obviously, said that she could easily see “so many people who run business empires” on social media paying for the Meta Verified package as the “next logical step,” because it could bring them even more business. The influencer industry was worth an estimated $16 billion in 2022, and although TikTok is growing, Instagram is still the most popular influencer marketing platform for brands. Facebook and Instagram are also especially popular with business owners: over 200 million businesses are active on Facebook alone.

The blue badge is important to creators and business owners, Karwowski said, because “it’s important to some people to have that credibility, or perceived credibility.”

Before Meta announced this paid tier, Karwowski said clients would often ask her for help getting verified on Instagram. You can apply to be verified on Instagram if you make the case that you’re a notable public figure. But since so many people apply, it can take a long time to get your application through.

“Previously, it would have to be like, ‘Oh, like so-and-so’s best friend’s cousin works at Instagram.’ And you find them on LinkedIn and send them a message,” said Karwowski. “There was very little standardization. At least now there’s some process.”

Still, some influencers Recode spoke with said they didn’t see enough value in Meta Verified.

“I don’t have a lot of people that are impersonating me. So that wouldn’t really make it very important to me,” said Oorbee Roy, a skateboarder and mom who goes by the handle @auntyskates. “And the other thing is, I feel like I’m close to getting [verified] on my own.”

What Roy did see as valuable was Instagram’s promise of increased visibility.

“I have content that’s very specific to a niche, and I would love to be able to get to that niche,” she said.

That gets us to our next point, about arguably the most valuable part of Facebook and Instagram’s pay-to-play perks: more attention.

Paying for reach

Before this announcement, if you wanted to boost a post or your account on Facebook or Instagram, you would have to run it as an ad — one that’s clearly labeled to users as an ad, as sponsored, or as “paid content.” (Instagram has long had a problem with creators posting unlabeled sponcon, but that wasn’t by design; users were essentially breaking the platform’s rules.)

Now, Instagram and Facebook are actually building in the ability for people to pay for eyeballs, without marking that promotion as advertising.

“The notion that you’re going to pay some subscription fee and then you’ll feature more prominently in the algorithm — there’s a name for that: It’s advertising,” said Jason Goldman, Twitter’s VP of product from 2007 to 2010. “It’s just a different way of pricing it.”

While these subscriptions may help Instagram and Facebook make more money at a time when their traditional advertising business is struggling, they could also jeopardize the apps’ standing with users who don’t want to see more promoted content.

“It’s kind of disappointing to see Instagram start to trend toward that commercial, more money-seeking business,” said Erin Sheehan, a New York City-based lifestyle influencer with over 12,000 followers who goes by the handle @girlmeetsnewyorkcity.

“I kind of wanted to switch over to TikTok and get into that organic market, and I feel like this might even push me that step further,” said Sheehan. “Because if I don’t subscribe, then I may find that my content is even more hidden than it is now.”

TikTok has attracted a new generation of creators, many of whom switched to the platform from older apps like Instagram because they say it’s easier to go viral even if you’re a relative amateur creating what Sheehan referred to as “organic content.” The app currently doesn’t have a premium subscription model, but it’s successfully expanding its advertising business at a time when the ad businesses of competitors like Meta and Snap have slowed down.

Meta and other social media incumbents like YouTube have been battling TikTok for younger users and creators, with Instagram in particular rolling out new programs to court creators for Reels, its TikTok clone. So it’s imperative that Instagram and Facebook make sure that users aren’t turned off by promoted content from paid subscribers, and that creators keep wanting to share their content on their apps.

Meta told Recode that it’s still focused on surfacing content that people want to see.

“Our intent is to surface content that we think people will enjoy, and that doesn’t change with the increased visibility we offer through Meta Verified,” said Meta spokesperson Paige Cohen, in part, in a statement. “As we test and learn with Meta Verified, we’ll be focused on ensuring we’re enhancing the visibility of subscribers’ content in a way that is most valuable to the ecosystem at large.”

Meta also said that it’s not prioritizing paid content everywhere: subscribers will get prioritization in Explore and Reels on Instagram, for example, but not in the main feed. Reels, however, is a major focus for the company as it competes with TikTok in short-form video, so prioritization there is in some ways more important than in the feed.

It’s still early days for this developing pay-to-play social media model. But from what we know so far, only a small subset of users may be willing to pay. It’s not a perfect comparison because it’s a different platform with a distinct audience, but as of mid-January, reportedly only about 0.2 percent of Twitter’s total user base was paying for Twitter Blue. (The service launched in November.)

Meta may have a better chance of finding customers for its verified program because of its sheer scale (Meta has over 10 times as many users as Twitter), the fact that it has more influencers who run real businesses on its platforms, and that it’s rolling out the program in a more measured way than Twitter did.

But there are major risks to this pay-to-play model. Whether it’s normies posting pictures of their dogs and babies or professional influencers building their followings and careers, social media networks are built on their users. Creating tiers of those users could turn off some people from sharing at all. At a time when many young people are turning away from social media, by either logging off completely or seeking alternative apps that feel more authentic and less commercial, Meta could be pushing away the users it needs the most to stay relevant in the future.

Read More 

Stop using your phone number to log in

iStock/Getty Images

Phone numbers were never meant to protect or identify us, but we use them to do that all the time. We shouldn’t. When Ugo moved to a new country last October, he got a new phone number. Ugo, who lives in Europe, where WhatsApp is very popular, didn’t immediately register his new phone number on the app, but was able to continue to use it as normal. It was only when he told WhatsApp that he had a new phone number that the trouble began.
His profile photo changed to a picture of a young woman, and his phone was flooded with new messages from Italian-speaking strangers, including from group chats he was suddenly added to — one of which seemed to be for a family that was not his own.
Ugo, who did not want his last name revealed for privacy reasons, had unintentionally taken over the WhatsApp account of the woman who had the new phone number before he did. She was an active WhatsApp user, but she’d also, apparently, neglected to tell the app what her new phone number was. So when Ugo told his account that he had a new phone number, he assumed control of the WhatsApp account that was still tied to it, and it was merged with his.
“I don’t even know if she was able to regain access to her account at all because for days — weeks, in fact — I was still receiving her messages, even though I kept telling all these people I wasn’t the person they thought I was,” Ugo told Recode. “She was lucky I had good intentions. Her account could’ve merged with someone much less forgiving.”
Ugo isn’t the only WhatsApp user this has happened to. Phone number recycling is a problem WhatsApp is aware of and has largely left to its users to prevent or solve. But it’s also not unique to WhatsApp.
Countless apps and services rely on your phone number to identify you, and that number is not necessarily permanent. Phone numbers are also vulnerable to hackers. They were never meant to be permanent identifiers, so incidents like what happened to Ugo are widespread, ongoing problems that the industry has known about for years. At least two research papers about phone number recycling lay out the potential risks, which range from targeted attacks by hackers who buy up recently discarded phone numbers to being cut off from your accounts entirely while a stranger gains access to your life.
Yet the burden is often on users to protect themselves from a security issue that was created for them by some of their favorite apps. Even things that those services might recommend as an added security measure — like text- or SMS-based multi-factor authentication — can actually introduce more vulnerabilities.
The number problem
If we didn’t reuse phone numbers, we’d soon run out of them. An estimated 35 million phone numbers are recycled every year in the United States, according to a 2017 FCC analysis of data from the North American Numbering Plan Administrator (NANPA). And there are currently 2.74 billion assignable phone numbers in the US and its territories, NANPA told Recode, though that doesn’t mean all of those numbers have actually been assigned (about half of them haven’t, according to FCC data). So when you give up your phone number, it’s only a matter of time before it gets reassigned to someone else.
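As a rough back-of-envelope check on why recycling is unavoidable, the figures above can be combined directly. The simplifying assumption in this sketch is that every disconnected number is retired forever and no new demand appears:

```python
# Figures cited above, from FCC and NANPA data for the US and its territories.
assignable = 2_740_000_000            # total assignable numbers
assigned = assignable // 2            # roughly half are already assigned
retired_per_year = 35_000_000         # numbers given up each year

# If discarded numbers were never recycled, the unassigned pool
# would shrink by ~35 million a year and run dry in roughly:
years_left = (assignable - assigned) / retired_per_year
print(round(years_left))  # ~39 years, before accounting for any new demand
```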
In the United States, carriers have to wait at least 45 days before they can reassign a relinquished number to a new user. But that minimum waiting period was only put into effect in 2020. Before that, it was up to the carriers to decide how long to wait before recycling a phone number; some only waited a few days, according to an FCC report. In France, where Ugo got his new phone number, the minimum waiting time was recently reduced from three months to 45 days.
This makes it pretty easy for misdirected calls to happen. A few decades ago, getting phone calls on your landline that were meant for whoever had the number before you might be annoying, but you weren’t being blasted with large blocks of texts, images, and videos that were meant for someone else, nor was your phone number the key to unlocking various goods and services.
Countless apps and services rely on your phone number to identify you, and that number is not necessarily permanent
In the age of the smartphone, however, phone number recycling is a major privacy and security problem. Many of us keep huge parts of our lives in our phones and the apps on them. Some of those apps, like WhatsApp, require our phone numbers to register for accounts. Or we use our phone number as a security measure. But phone numbers were never intended to perform these functions. And, as Ugo’s story shows, there are unintended consequences when they do.
But even before the iPhone changed the mobile game, there were concerns over using phone numbers as identifiers.
“Back in 2001 when I worked at Vodafone, we saw this problem coming,” said Marc Rogers, who is now chief security officer at the cybersecurity firm Q-Net Security.
SFGate published a story in 2006 about a man who got a recycled number and was barraged with texts from various women, which both displeased his fiancée and were charged to him because, again, this was 2006, when pay-per-text was much more common. More recently, we’ve seen plenty of stories about phone numbers changing hands, causing accounts to be taken over by strangers on platforms like Facebook and Airbnb. It’s even happened on WhatsApp before.
The problem isn’t just accidental takeovers. Mobile phones have what’s known as a SIM, or subscriber identity module. It’s usually stored on a tiny removable card, although newer iPhones embed it in the device itself. If a bad actor gets control of your SIM — this is known as SIM jacking or SIM swapping — or is able to reroute text messages that are meant for you, they can access the accounts your phone number unlocks.
“The entire SIM swap ecosystem has sprung up around the vulnerability of SMS,” Rogers said.
In a study about security risks due to recycled phone numbers, Princeton computer science professor Arvind Narayanan and researcher Kevin Lee found that most of the available phone numbers at T-Mobile and Verizon were still attached to accounts on various websites, indicating that the people who had those numbers previously hadn’t yet told those services their numbers had changed. Of the 200 recycled numbers Lee and Narayanan bought for the study, they were able to obtain sensitive data (defined as anything with personally identifiable information or multi-factor authentication passcodes) that was meant for the number’s previous owner on nearly 10 percent of them. And that was after just one week.
It’s not just phone numbers that we’ve turned into problematic identifiers. There are also Social Security numbers, which started out as a way to track workers’ earnings even if they changed jobs, addresses, and names, but have evolved into national identifiers, used by the IRS, financial institutions, and even health providers. Anyone whose identity has been stolen can tell you that this Social Security number system isn’t perfect. Email addresses serve a similar unintended purpose, which causes privacy problems if you happen to have an email address that is constantly mistaken for someone else’s.
The industry could do more, but it probably won’t
WhatsApp says it takes several steps to prevent scenarios like Ugo’s, such as removing account data from accounts that have been inactive for at least 45 days and are then activated on a different mobile device.
“If for some reason you no longer want to use WhatsApp tied to a particular phone number, then the best thing to do is transfer it to a new phone number or delete the account within the app,” WhatsApp told Recode. “In all cases, we strongly encourage people to use two-step verification for added security.”
Those solutions leave most of the work to users, some of whom aren’t aware of their responsibilities. Enabling two-step or multi-factor authentication by default, which companies like Google and Amazon have done on some of their services, would help stop these hijackings. WhatsApp could also ask users to verify their phone numbers occasionally, which would prod people like the previous owner of Ugo’s new number to transfer her account before it was hijacked.
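WhatsApp hasn’t said how such a prompt would work; as a purely hypothetical sketch, a periodic re-verification check could be as simple as comparing the date a user last confirmed their number against a policy threshold:

```python
from datetime import datetime, timedelta
from typing import Optional

REVERIFY_AFTER = timedelta(days=90)  # hypothetical policy threshold

def should_prompt_reverification(last_confirmed: datetime,
                                 now: Optional[datetime] = None) -> bool:
    """Return True if the user should be asked to confirm they still own their number."""
    now = now or datetime.utcnow()
    return now - last_confirmed > REVERIFY_AFTER

# A user who last confirmed their number four months ago gets prompted,
# a nudge to transfer the account before the old number is recycled.
print(should_prompt_reverification(datetime(2023, 1, 1), now=datetime(2023, 5, 1)))  # True
```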
Businesses will always have their best interests at heart, and those aren’t always yours
There are other things the industry — apps, carriers, phone operating system developers — can do. But they usually don’t unless they’re legally required to or something truly egregious happens. In the meantime, many of them like to demand phone numbers from users even when they don’t need them. And they’re not always very responsible with those numbers, either.
“We knew it was a problem 20 years ago, but almost nothing has happened to reduce the risk for consumers. It’s probably about time for policymakers to step in and start putting pressure on the telecommunications companies to look at ways this can be resolved technically,” Rogers said.
In the end, businesses will always have their best interests at heart, and those aren’t always yours. You have to protect yourself.
What you can do
You may be thinking that this doesn’t apply to you if you aren’t planning on changing your number. But that change may not be planned. A hit song might come out with your phone number as its chorus. Or the president could give it out during a campaign rally. Or you might reveal it on Twitter to make a point about AI chatbots that you didn’t think through. There are more serious reasons why you might have to change your phone number. Or you might die, in which case you won’t care about privacy and security issues anymore, but the people you leave behind might. Even if you keep your phone number forever, you’re not immune to some of these privacy issues.
“Even if you’re not planning on changing your number anytime soon, you may interact with friends or family members who have, and unknowingly end up sending sensitive information to new owners of those recycled numbers,” Lee, the Princeton researcher, said.
The best way to solve the problem is never to let it become one. That is, avoid attaching your phone number to your accounts wherever possible. In some cases, like signing up for a WhatsApp account, you don’t have a choice. But you can at least minimize your exposure.
“People change their numbers for all sorts of reasons, and it’s practically impossible to update one’s number in every system and contact list out there,” Narayanan said.
You’ll also want to enable two-factor authentication everywhere you can, but don’t use your phone number as that second factor. Not only is it useless if you no longer have access to that phone number, but it’s also just not a good way to protect your account in general, considering how vulnerable phone numbers can be. Use an authenticator app or hardware key instead. Those can’t be SIM jacked, and they’re independent of your phone number.
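The reason an authenticator app survives a number change is that it derives one-time codes from a shared secret and the current time, not from anything the carrier controls. Here is a minimal sketch using the third-party pyotp library (the enrollment flow is simplified; real services hand you the secret as a QR code):

```python
import pyotp  # third-party: pip install pyotp

# At enrollment, the service generates a secret and you store it in your
# authenticator app. No phone number is involved at any point.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()         # current six-digit code, valid for ~30 seconds
print(code)
print(totp.verify(code))  # True: the server runs the same computation to check it
```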
There are some apps and services that you have to attach your phone number to or that only offer text authentication. You can try to avoid using them, but that’s not always possible. You can keep your old number from going back into circulation by using a phone number parking service, as Lee and Narayanan suggest in their study. Some are just a few dollars a month. It doesn’t even have to be forever; you may just want to do this for a year or two to give yourself more time to identify and switch your accounts over to the new number, and for your contacts to realize your number has changed.
Considering all the things that could go wrong when your phone number is given to someone else, however, the marginal cost might be worth it. Otherwise, you’re entrusting what could be very sensitive information to carriers, apps, websites, and whoever gets your phone number next. At that point, you can only hope that they take good care of it.

Read More 

Musk’s Twitter is getting worse

Twitter’s quality has suffered at the hands of Musk’s leadership. | Jonathan Raa/NurPhoto via Getty Images

The broken Twitter everyone warned us about is finally here. If you were accustomed to a time when Twitter — while far from perfect — was a place where you could dependably digest a wide range of breaking news, politics, celebrity gossip, or personal musings, it’s time to accept a new reality.
Twitter is becoming a degraded product.
In the four months since Elon Musk took over the company, the app has experienced major glitches — such as when, last week, users around the world couldn’t post tweets, send messages, or follow new accounts for several hours. While Twitter, like other social media networks, has always had periodic outages, under Musk, the app’s unpredictability isn’t just limited to technical issues. Musk’s erratic decisions are degrading the integrity of Twitter’s core product and alienating wide swaths of users.
Musk’s Super Bowl meltdown, as reported by Platformer, is one of the clearest signs so far of Twitter’s decline. Musk, apparently livid that his tweets about the Super Bowl were getting fewer views than President Joe Biden’s, flew to Twitter’s headquarters and ordered engineers to change the algorithm underlying Twitter’s main product so that his own tweets would show up at the top of users’ “For You” pages, above everyone else’s. Musk’s cousin, James Musk — now a full-time employee and a reported “fixer type” within the company — reportedly sent an urgent 2 am message asking all capable engineers to help, and the company tasked 80 engineers with manually tweaking Twitter’s underlying system to promote Musk’s tweets.
Soon after the change, many users started noticing their feeds had been bombarded with Musk’s tweets. Musk seemed to acknowledge the phenomenon, posting a meme showing a woman labeled “Elon’s tweets” force-feeding a bottle of milk to another woman labeled “Twitter,” and later posting that Twitter was making “adjustments” to its algorithm.
The episode demonstrates how Twitter has become less and less dependable. The platform’s basic product design is now tailored to the whims of Musk, a leader who seems to prioritize his own image and “free speech absolutist” ideology above business interests.

A few examples: Musk, in the free-speech spirit of letting people say almost anything they want on Twitter, restored the accounts of thousands of previously suspended users, including neo-Nazi and QAnon accounts. That was one of the driving factors, researchers told the New York Times, behind a rise in hate speech on the platform, including an over 200 percent increase in anti-Black slurs from when Musk took over until December 2022 — upsetting many users who already struggled with harassment on the platform.
On the product front, Musk has rushed projects that have caused chaos on the platform. Musk’s most high-profile product, Twitter Blue, a paid version of the app that let anyone buy a verification checkmark badge, had a disastrous initial rollout. Musk — who has long beefed with the mainstream press — framed Twitter Blue as a way to take away the special privileges, such as checkmarks, that “elites” like journalists had on the platform, unless they paid up. But the poorly thought-out changes to Twitter’s verification policy ended up flooding the platform with spam, as newly verified accounts used their checkmarks to convincingly impersonate public figures, including Musk. The release was pulled back and delayed twice before finally coming out in December.
Under Musk, Twitter also recently blocked third-party apps that improved people’s experience on the app, like Tweetbot. While Twitter is promising developers a revamped paid version of its API, the way Twitter suddenly cut off access has soured its relationship with outside programmers whose add-on apps enriched the site.
Since Musk has laid off or fired more than half of Twitter’s staff, the people left to clean up the mess are short-handed. That includes teams that deal with fixing bugs, content moderation, and courting advertisers.
When Elon Musk first bought Twitter, even though many were skeptical about the billionaire, there was also some optimism that Musk could turn the company around. Investors hoped that Musk, the prolific and successful entrepreneur, could revive a company that was unprofitable and seen as not living up to its full business potential. Musk’s ideological supporters saw him, a self-appointed “free speech absolutist,” as someone who could make Twitter less restrictive and open to a wider range of speech.
Now we’re seeing Musk’s potential to improve Twitter — on the business and ideological fronts — unrealized.
On the business side, Twitter’s main line of revenue is in jeopardy: 500 big-name advertisers have paused spending on the platform since Musk took over, in large part over concerns about his erratic behavior and what researchers say is an “unprecedented” rise in hate speech on the platform. Twitter’s top 30 advertisers dropped their spending on Twitter by an average of 42 percent from when Musk took over until the end of 2022, according to Reuters. Musk’s solution to the loss of advertiser dollars is to get more people to pay for Twitter, but that doesn’t seem to be working so far. As of mid-January 2023, only around 180,000 people in the US were paying for Twitter subscriptions, or less than 0.2 percent of monthly active users, according to a recent report by the Information.
While Musk claimed in November that Twitter’s user base is bigger than ever, outside data contradicts that claim. According to the data intelligence firm SimilarWeb, Twitter actually had higher traffic in March 2022 — before Musk took over — than it does now, and year-over-year visitor growth fell from 4.7 percent in November 2022, just after Musk took over, to -2 percent in January 2023.
On an ideological front, Musk’s Twitter has failed to live up to its free speech standards time and time again, starting with Musk suspending comedians like Kathy Griffin (who made fun of him) and barring users from talking on the platform about Twitter’s competitors, like decentralized social network Mastodon (after a flurry of criticism, Musk reversed the policy).
Even some popular figures who supported Musk for his free speech stance, like independent journalist Bari Weiss, have recanted their support after Musk banned several prominent journalists who have criticized him (Musk argued that the journalists doxxed him, which they denied). In recent months, former Twitter CEO and co-founder Jack Dorsey, who in April endorsed Musk as his successor and said he is the “singular solution” he trusts to run Twitter and “extend the light of consciousness,” has also shifted his stance and started to openly criticize Musk’s leadership, including all the recent technical glitches.
The people who most steadfastly support the new Twitter are conservative figures and politicians. After granting amnesty to many suspended accounts of right-wing provocateurs and political leaders, including shock jock Andrew Tate, Rep. Marjorie Taylor Greene (R-GA), and former President Donald Trump, Musk has achieved hero status in right-wing circles, and has even had Republican-led legislation drafted in his name that would require the Department of Justice to disclose money it spends on Big Tech companies. Musk has also earned conservative admiration for his work to uncover examples of alleged liberal bias in Twitter’s old guard, most prominently with the “Twitter Files,” a series of documents showing how Twitter made decisions about its content policies with input, at times, from US politicians and government agencies.
Even if Musk’s conservative fans love how he’s running Twitter, if the app is glitchy and more users leave the platform altogether, it won’t be of much use to them anymore. Nor will it be for Musk, who needs a healthy, money-making app in order to pay back some $13 billion he borrowed from creditors to buy Twitter.

Read More 
