
Microsoft Word just fixed its default paste option

Illustration: The Verge

Have you ever pasted text into your beautifully formatted Microsoft Word document, only for it to ruin everything? Well, those days should finally be over, as Microsoft Word will now merge the text’s formatting with your document by default.
Unlike the previous “keep source formatting” default, the “merge formatting” option preserves the original bold and underlined text, along with list and table structure. But it also changes the visual aspects of the text, such as font family, size, and color, to match the document you’re working on. That should save you from messing up the formatting of your entire document when pasting in text from another source.

You could previously choose the “merge formatting” option from Word’s pasting menu, but it wasn’t the default. If you still want to use the “keep source formatting” option as the default, you can change it by heading to File > Options > Advanced > Cut, copy, and paste and then selecting the Pasting from other program drop-down menu. From there, choose Keep Source Formatting.
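
If you manage a fleet of machines and would rather script the change than click through menus, Word exposes the same paste defaults through its COM object model. Below is a minimal sketch in Python using the pywin32 package. It assumes Windows with Word installed; the property name and the WdPasteOptions values come from the Word object model as I recall them (the UI’s “Merge Formatting” is called wdMatchDestinationFormatting there), so verify against your Word version before relying on it.

```python
# Minimal sketch: set Word's default paste behavior via COM (pywin32).
# Assumes Windows with Microsoft Word installed; run "pip install pywin32".
import win32com.client

# WdPasteOptions values from the Word object model (verify on your version):
#   0 = wdKeepSourceFormatting, 1 = wdMatchDestinationFormatting
#   (what the UI calls "Merge Formatting"), 2 = wdKeepTextOnly,
#   3 = wdUseDestinationStyles
WD_KEEP_SOURCE_FORMATTING = 0

word = win32com.client.Dispatch("Word.Application")

# "Pasting from other programs" is the drop-down described above.
word.Options.PasteFormatFromExternalSource = WD_KEEP_SOURCE_FORMATTING

# Quit the COM instance we started (skip this if Word was already open).
word.Quit()
```

There are sibling properties for the other drop-downs in that panel (PasteFormatBetweenDocuments, PasteFormatWithinDocument), if memory serves; treat the names here as a starting point rather than gospel.
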
Last year, Microsoft finally started supporting the Control + Shift + V shortcut, which lets you paste in text without formatting at all.



The Pixel 8A and the camera you have with you

You know, normal backyard nighttime stuff.

Sometimes you travel thousands of miles from home to see something amazing. And every once in a while, that something appears right in your backyard.
That’s what happened on Friday. “What’s the aurora forecast for tonight?” I asked a friend who keeps up with those things. “Amazing,” he said. He wasn’t lying. At 9:30PM, my husband and I went to the backyard to check the sky — nothing but hazy light pollution on the northern horizon. But just an hour later, the sky erupted.
I didn’t think twice about what camera to use to photograph the event since I already had my SIM card in the Pixel 8A — it had arrived a couple of days ago fresh off its announcement. I handed my husband the Pixel 8 Pro; he left his night mode-less iPhone XR in the house. The rippling lights were visible even to the naked eye, but the sky came alive with greens and purples in our photos. It put the wispy blob we’d seen from our flight to Reykjavik, Iceland, the previous year to shame.

I’m still reeling from seeing this in my backyard.

We had everything on our side on that cold night in Iceland. The sky was clear; our tour boat in the bay put the city lights far behind us. I had the excellent Xiaomi 13 Pro in hand ready to photograph it — but there was nothing to see.
The sky just wasn’t talking. The ship’s crew passed around plastic cups of aquavit as consolation. We gave it our best shot, we thought, and at least we saw that one light from the plane. Then, just over a year later, improbably, we saw the light show of our lives right in our backyard. Life is funny.

The next day was unseasonably sunny and warm, and I made a last-minute decision to take my two-year-old to West Seattle’s finest sandy beach. It was the kind of day that reminds you why you suffer through months of dark and drizzle to live here; from our little patch of man-made sandy shoreline, we had a view of the snow-capped Olympic Mountains and downtown Seattle. A small armada of sailboats approached from the north. We walked down to the low tide, and I watched my son put his bare feet in saltwater for the first time.
The thing about core memories is sometimes you just stumble into them. Sometimes you’re just trying to have a perfectly average morning by the water or a night at home eating ice cream and playing Diablo, and boom — you’re face-to-face with a once-in-a-lifetime moment.

The Pixel 8A also comes with full dust resistance, and boy was I grateful for that at the beach.

I have a tendency to shrug off phone cameras as all being good enough. “The best camera is the one you have with you,” etc. And that’s true for 90 percent of the things we take pictures of day to day. But in those core memory moments, having a camera that can do the scene justice really does matter.
The Pixel 8A was the camera I happened to have with me this weekend; it’s far from the most expensive phone I have on hand, but it delivered. Not every budget phone can take a good portrait mode photo of a toddler running in the surf having the time of his life. Not every budget phone can take a decent night sky photo. I have more testing to do with the Pixel 8A, and a full review is in the works. In the meantime, I’m grateful that it has a camera that can keep up.
Photography by Allison Johnson / The Verge



Why Adobe CEO Shantanu Narayen is confident we’ll all adapt to AI

Photo illustration by The Verge / Photo: Adobe

The tech and the consumers both might not be quite ready yet, but he’s betting big on an AI future.

Today, I’m talking with Adobe CEO Shantanu Narayen. Shantanu’s been at the top of my list of people I’ve wanted to talk to for the show since we first launched — he’s led Adobe for nearly 17 years now, but he doesn’t do too many wide-ranging interviews. I’ve always thought Adobe was an underappreciated company — its tools sit at the center of nearly every major creative workflow you can think of — and with generative AI poised to change the very nature of creative software, it seemed particularly important to talk with Shantanu now.
Adobe has an enormously long and influential history when it comes to creative software. It began in the early 1980s, developing something called PostScript that became the first industry-standard language for connecting computers to printers — a huge deal at the time. Then, in the 1980s and 1990s, it released the first versions of software that’s now so ubiquitous that it’s hard to imagine the computing and design industries without them. Adobe created the PDF, the document standard everyone now kind of loves to hate, as well as programs like Illustrator, Premiere, and — of course — Photoshop. If you work in a creative field, it’s a near certainty that there’s Adobe software running somewhere close to you.

All that influence puts Adobe right at the center of the whole web of tensions we like to talk about on Decoder — especially as the company has evolved its business and business model over time. Shantanu joined the company in 1998, back when desktop software was a thing you sold on a shelf. He was with the company when it started bundling a whole bunch of its flagship products into the Creative Suite, and he was the CEO who led the company’s pivot to subscription software with Creative Cloud in 2012. He also led some big acquisitions that turned into Adobe’s large but under-the-radar marketing business — so much of what gets made in tools like Photoshop is marketing and advertising collateral, after all, and the company is a growing business in helping businesses create, distribute, and track the performance of all that work around the web.
But AI really changes what it means to make and distribute creative work — even what it means to track advertising performance across the web — and you’ll hear us talk a lot about all the different things generative AI means for a company like Adobe. There are strategic problems, like cost: everyone’s pouring tons of money into R&D for AI, but not many people are seeing revenue returns on it just yet, and Shantanu explained how he’s betting on that investment return.
Then there are the fundamental philosophical challenges of adding AI to photo and video tools. How do you sustain human creativity when so much of it can be outsourced to the tools themselves with AI? And I asked a question I’ve been thinking about for a long time as more and more of the internet gets so deeply commercialized: What does it mean when a company like Adobe, which makes the tools so many people use to make their art, sees the creative process as a step in a marketing chain, instead of a goal in and of itself?
This one got deep — like I said, Shantanu doesn’t do many interviews like this, so I took my shots.
Okay: Adobe CEO Shantanu Narayen. Here we go.

This transcript has been lightly edited for length and clarity.
Shantanu Narayen, you’re the CEO of Adobe. Welcome to Decoder!
Thanks for having me, Nilay.
I am very excited to talk to you. You are one of the first guests I ever put on a list of guests I wanted on the show because I think Adobe is under-covered. As the CEO, you’ve been there for a long time. You don’t give a lot of interviews, so I’m very excited you chose to join us on the show.
Adobe is 40-plus years old. It has been a lot of different kinds of companies. You have been there since 1998. You became CEO in 2007. You saw at least one paradigm shift in computing. You led the company through another shift in computing. How would you describe Adobe today?
I think Adobe has always been about fundamental innovation, and I think we are guided by our mission to change the world through digital experiences. I think what motivates us is: Are we leveraging technology to deliver great value to customers and staying true to this mission of digital experiences?
What do you mean, specifically, by digital experiences?
The way people create digital experiences, the way they consume digital experiences, the new media types that are emerging, the devices on which people are engaging with digital, and the data associated with it as well. I think we started off way more with the creative process, and now we’re also into the science and data aspects. Think about the content lifecycle — how people create content, manage it, measure it, mobilize it, and monetize it. We want to play a role across that entire content lifecycle.
I love this; you’re already way into what I wanted to talk about. Most people think of Adobe as the Photoshop company or, increasingly, the Premiere company. Wherever you are in the digital economy, Adobe is there, but what most people see is Creative Cloud.
You’re talking about everything that happens after you make the asset. You make the picture in Photoshop, and then a whole bunch of stuff might happen to it. You make the video in Premiere, and then a lot of things might happen. If you’re a marketer, you might make a sale. If you’re a content creator, you might run an ad. Something will happen there. You’re describing that whole expansive set of things that happen after the asset is made. Is that where your focus is, or is it still at the first step, which is someone has to double-click on Photoshop and do a thing?
I think it is across the entire chain — and, Nilay, I’d be remiss if I didn’t also say we are also pretty well known for PDF and everything associated with PDF!
[Laughs] Don’t worry, I have a lot of PDF questions coming for you.
I think as it relates to the content, which was your question: it doesn’t matter which platform you’re using to create content, whether it’s a desktop, whether it’s a mobile device, whether it’s web — that’s just the first step. It’s how people consume it, whether it’s on a social media site or whether it’s a company that’s engaging with customers and they’re creating some sort of a personalized experience. So, you’re right — very much, we’ve changed our aspirations. I think 20 years ago, we were probably known just for desktop applications, and now we’ve expanded that to the web, and the entire chain has certainly been one of the areas in which we’ve both innovated and grown.
I want to come back to that because there are a lot of ideas embedded in that. One thing that’s on my mind as I’ve been talking to people in this industry and all the CEOs on Decoder: half of them tell me that AI is a paradigm shift on the order of mobile, on the order of desktop publishing, all things that you have lived through. Do you buy that AI is another one of these paradigm shifts?
I think AI is something that we’ve actually been working on for a long time. What do computers do really well? Computers are great at pattern matching. Computers are great at automating inefficient tasks. I think all the buzz is around generative AI, which is the starting point of whether you’re having a conversational interface with your computer or you’re trying to create something and it enables you to start that entire process. I do think it’s going to be fairly fundamental because of the amount of energy, the amount of capital, the amount of great talent that’s focused on, “What does it mean to allow computers to have a conversation and reason and think?” That’s unprecedented. Even more so than, I would say, what happened in the move to mobile or the move to cloud because those were happening at the same time, and perhaps the energy and investment were divided among both, whereas now it’s all about generative AI and the implications.
If you are Microsoft or Google or someone else, one of the reasons this paradigm shift excites you is because it lets you get past some gatekeepers in mobile, it lets you create some new business models, it lets you invent some new products maybe that shift some usage in another way. I look at that for them and I say: Okay, I understand it. I don’t quite see that paradigm shift for Adobe. Do you see that we’re going to have to invent a new business model for Adobe the way that some of the other companies see it?
I think any technology shift has the same profound impact in terms of being a tailwind. If you think about what Microsoft does with productivity, and if you think about what Adobe does with creativity, one can argue that creativity is actually going to be more relevant to every skill moving forward. So I do think it has the same profound implications for Adobe. And we’ve innovated in a dramatic way. We like to break up what we are doing with AI into three layers: the interface layer, which is what people use to accomplish something; the foundation models we’re creating for ourselves, which are the underlying brain of the things we are attempting to do; and the data. I think Adobe has innovated across all three. And in our different clouds — we can touch on this later — Creative Cloud, Document Cloud, and Experience Cloud, we’re actually monetizing in different ways, too. So I am really proud of both the innovation on the product side and the experimentation on the business model side.
The reason I asked that question that way, and right at the top, is generative AI. So much of the excitement around it is letting people who maybe don’t have an affinity for creative tools or an artistic ability make art. It further democratizes the ability to generate culture, however you wish to define culture. For one set of companies, that’s not their business, and you can see that expands their market in some way. The tools can do more things. Their users have more capabilities. The features get added.
For Adobe, that first step has always been serving the creative professional, and that set of customers actually feels under threat. They don’t feel more empowered. I’m just wondering how you see that, in the broadest possible sense. I am the world’s foremost, “What is a photo?” philosophical handwringer, and then I use AI Denoise in Lightroom without a second’s hesitation, and I think it’s magic. There’s something there that is very big, and I’m wondering if you see that as just a moment we’re all going to go through or something that fundamentally changes your business.
Whether you’re a student, whether you’re a business professional, or whether you’re a creative, we like to say at Adobe that you have a story to tell. The reality is that there are way more stories that people want to tell than skills that exist to be able to tell that story with the soul that they want and the emotion that they want. I think generative AI is going to attract a whole new set of people who previously perhaps didn’t invest the time and energy into using the tools to be able to tell that story. So, I think it’s going to be tremendously additive in terms of the number of people who now say, “Wow, it has further democratized the ability for us to tell that story,” and so, on the creative side, whether you’re ideating, whether you’re trying to take some picture and fix it but you don’t quite know how to do it.
When people have looked at things like Generative Fill, their jaws drop. What’s amazing to us is when, despite decades of innovation in Photoshop, something like Generative Fill captures the imagination of the community — and the adoption of that feature has been dramatically higher than any other feature that we’ve introduced in Photoshop. When layers first came out, people looked at it, and their jaws dropped. It just speaks to how much more we can do for our customers to be able to get them to tell their story. I think it’s going to be dramatically expansive.
I feel like [Google CEO] Sundar Pichai likes to say AI is more profound than electricity —
You still need electricity to run the AI, so I think they’re both interrelated.
But I honestly think “used as much as layers” is the same statement. It’s at the same level of change. It’s pretty good.
I want to drill down into some of these ideas. You have been the CEO since 2007. That’s right at the beginning of the mobile era. Many things have changed. You’ve turned Adobe into a cloud business. You started as a product manager in 1998. I’m assuming your framework for making decisions has evolved. How do you make decisions now, and what’s your framework?
I think there are a whole bunch of things that have perhaps remained the same and a whole bunch of things that are different. I think at your core, when you make decisions — whether it’s our transition to the cloud, whether it’s what we did with getting into the digital marketing business — it’s always been about: Are we expanding the horizons and the aspirations we look at? How can we get more customers to the platform and deliver more value? At our core, what’s remained the same is this fundamental belief that by investing in deep technology platforms and delivering fundamental value, you will be able to deliver value, monetize it, and grow as a company.
I think what’s different is scale. How do you recognize the importance, which was always there but becomes increasingly obvious, of creating a structure in which people can innovate? And how do you scale that? At $20 billion, how do you scale that business and make decisions that are appropriate? I think that’s changed. But at my core … I managed seven people then, I manage seven people now, and it’s leveraging them to do amazing things.
That gets into the next Decoder question almost perfectly: How is Adobe structured today? How did you arrive at that structure?
I think structures are pendulums, and you change the pendulum based on what’s really important. We have three businesses. One is what we call the creative business that you touch on so much, and the vision there is how we enable creativity for all. We have the document business, which is really thinking about how we accelerate document productivity. And we have the marketing business, which is about powering digital businesses. I would say we have product units: we call the first two, Creative Cloud and Document Cloud, our digital media business, and we call the marketing business the digital experience business. So we have two core product units run by two presidents, Anil [Chakravarthy, president of Adobe’s digital experience business] and David [Wadhwani, president of Adobe’s digital media business]. For the rest of the company, we have somebody focused on strategy and corporate development. Partnerships is an important part. And then you have finance, legal, marketing, and HR as functional areas of expertise.
Where do you spend your time? I always think about CEOs as having timelines. There’s a problem today some customer is having, you’ve gotta solve that in five minutes. There’s an acquisition that takes a year or maybe even more than that. Where do you spend your time? What timeline do you operate on?
Time is our most valuable commodity, right? I think prioritization is something where we’ve been increasingly trying to say: what moves the needle? One of the things I like to do — both for myself as well as at the end of the year with my senior executives — is say, “How do we move the needle and have an impact for the company?” And that might change over time.
I think what’s constant is product. I love products, I love building products, I love using our products, but the initiatives might change. A few years ago, it was all about building this product that we call the Adobe Experience Platform — a real-time customer data platform — because we had this vision that if you had to deliver personalized engaging experiences, you needed a next-generation infrastructure. This was not about the old generation of, “Where was your customer data stored?” It was more about: what’s a real-time platform that enables you to activate that data in real time? And that business has now exploded. We have tens of billions of profiles. The business has crossed $750 million in the book of business.
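
As a rough illustration of what a real-time customer data platform does mechanically, here is a toy sketch: events from different channels are stitched into a single profile keyed by a shared identity, and each update is immediately available to the next personalization decision. This is a simplification invented for illustration, not Adobe Experience Platform’s actual design or API.

```python
# Toy sketch of a real-time profile store: stitch per-channel events into
# one unified customer profile as they arrive. Illustrative only; not the
# Adobe Experience Platform API.
from collections import defaultdict

profiles = defaultdict(lambda: {"channels": set(), "events": []})

def ingest(identity: str, channel: str, event: dict) -> dict:
    """Merge an incoming event into the unified profile and return it."""
    profile = profiles[identity]
    profile["channels"].add(channel)
    profile["events"].append(event)
    return profile  # immediately usable for the next personalization call

# Events from web and email land on the same profile in real time.
ingest("user-123", "web", {"type": "page_view", "sku": "A1"})
p = ingest("user-123", "email", {"type": "open", "campaign": "spring"})
print(sorted(p["channels"]), len(p["events"]))  # ['email', 'web'] 2
```
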
Incubating new businesses is hard. In companies, the power structure tends to be with the businesses that are making money today, and so incubating businesses requires sponsorship. Adobe Express is another product that we’ll talk about. We just released a phenomenal new version of Adobe Express on both mobile and web, which is all about this creativity for all. So I think about what the needle-moving initiatives are. Sometimes, it might be about partnerships. And as we think about LLMs and what’s happening in generative AI, where do we partner versus where do we build? While it changes, I would say there are three parts to where I spend my time.
There’s strategy because, at the end of the day, our jobs are planting the flag for where the company has to go and the vision for the company. The second is a cadence of execution: if you don’t execute against the things that are important for you, it doesn’t matter how good your strategy is. And the third set of things that you focus on is people. Are you creating a culture where people want to come in and work, so they can do their best work? Is the structure optimized to accomplish what is most important, and are you investing in the right places? I would say those are the three buckets, but it ebbs and flows based on what’s critical. And you’re right — you do get interrupted, and having to deal with whatever is the interruption of the day is also an important part of what you do.
You said you had three core divisions. There’s the Creative Cloud — the digital media side of the business. There’s the Experience Cloud, which is the marketing side of the business, and then there’s … I think you have a small advertising line of revenue in that report. Is that the right structure for the AI moment? Do you think you’re going to have to change that? Because you’ve been in that structure for quite some time now.
I think what’s been really amazing and gratifying to us is, at the end of the day, while you have a portfolio of businesses, if you can integrate them to deliver value in a way that no other company can by itself, that’s the magic that a company can do. We just had MAX in London, and we had our Summit here in Las Vegas — these are our big customer events. And the story, even at the financial analyst meetings, is all about how these are coming together: the integration of the clouds is where we’re delivering value. When you talk about generative AI, we do creation and we do production. We have to do asset management.
If you’re a marketer and you’re creating all this content, whether it’s for social, whether it’s for email campaigns, whether it’s for media placement or just TV, where is all that content stored, and how do you localize it, and how do you distribute it? How do you activate it? How do you create these campaigns? What do you do with workflow and collaboration? And then what is the analysis and insight and reporting?
This entire framework is called GenStudio, and it’s actually the bringing together of the cloud businesses. The challenge in a company is you want people who are ruthlessly focused on driving innovation in a competitive way and leading the market in what they are responsible for, but you also want them to take a step back and realize that it’s actually putting these together in a way that only Adobe can uniquely do that differentiates us from everybody else. So, while we have these businesses, I think we really run the company as one Adobe, and we recognize the power of one Adobe, and that’s a big part of my job, too.
How do you think about investing at the cutting edge of technology? I’m sure you made AI investments years ago before anyone knew what they could become. I’m sure you have some next-gen graphics capabilities right now that are just in the research phase. That’s pure cost. I think Adobe has to have that R&D function in order to remain Adobe. At the same time, even the cost of deploying AI is going up as more and more people use Firefly or Generative Fill or anything else. And then you have a partnership with OpenAI to use Sora in Premiere, and that might be cheaper than developing on your own. How do you think about making those kinds of bets?
Again, we are in the business of investing in technology. A couple of things have really influenced how we think about it at the company. Software has an S-curve. You have things that are in incubation and have a horizon that’s not immediate, and you have other things that are mature. I would say our PostScript business is a mature business. It changed the world as we know it right now, but it’s a more mature business. So I think you have to be thoughtful about where something is in its stage of evolution; you’re certainly making investments ahead of the “monetization” part, but you have other metrics, and you ask, am I making progress against those metrics? We’re thoughtful about having this portfolio approach — some people call it a horizon approach — and which phase you’re in. But in each one of them, are we impatient for success in some way? It may be impatience for usage. It may be impatience for making technology advancements. It may be impatience for revenue and monetization. It may be impatience for geographic distribution. I think you still have to create a culture where the expectations of why you are investing are clear and you measure the success against those criteria.
What are some of the longer-term bets you’re making right now that you don’t know when they’re going to pay off?
Well, we’re always investing. AI, building our own foundation models — I think we’re all fairly early in this phase. We decided very early on that with Firefly, we’re going to be investing in our own models. We are doing the same on the PDF side. We had Liquid Mode, which allowed you to make all your PDFs responsive on a mobile device. In the Experience Cloud, how do you think about customers, and what’s a model for customers and profiles and recommendations? Across the spectrum, we’re doing it.
I would say the area where we probably do the most fundamental research is in Creative [Cloud]: what’s happening with compression models or resolution or image-enhancement techniques or the mathematical models behind them? We’ve always had advanced technology there. There, you actually want the team to experiment with things that are further from the tree because if you’re too close to the tree and your only metric is what part of that ships, you are perhaps going to miss some fundamental moves. So, again, you have to be thoughtful about who you are. But I would say core imaging science, core video science, and 3D immersive are clearly where we are making the most fundamental research investments.
You mentioned AI and where you are in the monetization curve. Most companies, as near as I can tell, are investing a lot in AI, rolling out a lot of AI features, and the best idea anyone has is, “We’ll charge you 20 bucks a month to ask this chatbot a question, and maybe it will confidently hallucinate at you.” And we’ll see if that’s the right business. But that’s where we are right now for monetization. Adobe is in a different spot. You already have a huge SaaS business. People are already using the features. Is the use of Firefly creating any margin pressure on Creative Cloud? You’re not charging extra for it, but you could in the future. How are you thinking about that increased cost?
We have been thoughtful about different models for the different products that we have. You’re right about Creative. Think about Express versus Creative Cloud. In Creative Cloud, we want low friction. We want people to experiment with it. Most people look at it and say, “Hey, are you acquiring new customers?” And that’s certainly an important part. But what’s equally important is whether it helps with retention and usage; for a subscription business, that also has a material impact on how customers engage and the value they get.
Express is very different. Express is an AI-first new product that’s designed to be this paradigm change where, instead of knowing exactly what you want to do, you have a conversation with the computer: I want to create this flyer or I want to remove the background of an image or I want to do something even more exciting and I want to post something on a social media site. And there, it’s, again, about acquisition and successful exports.
You’re right in that there’s a cost associated with it. I would say for most companies, the training cost is probably higher right now than the inference cost, partly because we can start to offload the inferencing onto local computers as that becomes a reality. But it’s what we do for a living. If you are uncomfortable investing in fundamental technology, you’re in the wrong business. And we’re not a company that has focused on being a fast follower and letting somebody else invent. We like creating markets. And so you have to recognize who you are as a company, and that comes with the consequences of how you have to operate.
I think it remains to be seen how consumer AI is monetized. It remains to be seen even with generative AI in Photoshop. At the individual creative level, I think it remains to be seen. Maybe it will just help you with retention, but I feel like retention in Photoshop is already pretty high. Maybe it will bring you new customers, but you already have a pretty high penetration of people who need to use Photoshop.
It’s never enough. We’re always trying to attract more customers.
But that’s one part of the business. I think there are just a lot of question marks there. There’s another part of your business that, to me, is the most fascinating. When I say Adobe is under-covered, the part of the business that I think is just fully under-covered is — you mentioned it — GenStudio. It’s the marketing side of the business, the experience side of the business. We’re going to have creatives at an ad agency make some assets for a store. The store is going to pump its analytics into Adobe’s software. The software is going to optimize the assets, and then maybe at some turn, the AI is going to make new assets for you and target those directly to customers. That seems like a very big vision, and it’s already pre-monetized in its way. That’s just selling marketing services to e-commerce sites. Is that the whole of the vision, or is it bigger than that?
It’s a big part of the vision, Nilay. We’ve been talking about this vision of personalization at scale — whether you’re running a promotion or a campaign or making a recommendation on what to watch next — and we’re in our infancy in terms of what happens there. When I look at how we create our own content and partner with great agencies — the amount of content that’s created, and the way to personalize that and run variations and experiment and run this across the 180 countries where we might do business — that entire process from a campaign brief to an individual in some country experiencing that content is a long, laborious process. And we think that we can bring a tremendous amount of technology to bear in making that way more seamless. So I think that is an explosive opportunity, and every consumer is now demanding it, and they’re demanding it on their mobile device.
I think people talk about the content supply chain and the amount of content that’s being created and the efficacy of that piece of content. It is a big part of our vision. But documents also. The world’s information is in documents, and we’re equally excited about what we are doing with PDF and the fact that now, in Reader, you can have a conversational interface, and you can say, “Hey, summarize for me,” and then over time, how does this document, if I’m doing medical research, correlate with the other research that’s in there and then go find things that might be on my computer or might be out there on the internet. You have to pose these interesting problems for your product team: how can we add value in this particular use case or scenario? And then they unleash their magic on it. Our job is posing these hard things, which is like, “Why am I starting the process for Black Friday or Cyber Monday five months in advance? Why can’t I decide a week before what campaign I want to run and what promotion I want to run?” And enabling that, I think we will deliver tremendous value.
I promised you I would ask you a lot of questions about PDF, and I’m not going to let go of that promise, but not yet. I want to stay focused on the marketing side.
There’s an idea embedded in two phrases you just said that I find myself wrestling with. I think it is the story of the internet: how commercialized the internet has become. You said “content supply chain” and “content life cycle.” The point of the content is to lead to a transaction — that is an advertising- and marketing-driven view of the internet. Someone, for money, is going to make content, and that content will help someone else down the purchase funnel, and then they’re going to buy a pair of shoes or a toothbrush or whatever it is. And that, I think, is in tension with creativity in a real way. That’s in tension with creativity and art and culture. Adobe sits at the center of this. Everybody uses your software. How do you think about that tension? Because it’s the thing that I worry about the most.
Specifically, the tension is as a result of what? The fact that we’re using it for commerce?
Yeah. I think if the tools are designed and organized and optimized for commerce, then they will pull everybody toward commerce. I look at young creators on social platforms, and they are just slowly becoming ad agencies. A one-person ad agency is where a creator ends up if they are at the top of their game. MrBeast is such a successful ad agency that his rates are too high, and it is better for him to sell energy bars and make ads for his own energy bars than it is for him to sell ads to someone else. That is a success story in one particular way, and I don’t deny that it’s a success story, but it’s also where the tools and the platforms pull the creatives because that’s the money. And because the tools — particularly Adobe’s tools — are used by everybody for everything, I wonder if you at the very top think about that tension and the pull, the optimization that occurs, and what influence that has on the work.
We view our job as enablement. If you’re a solopreneur or you want to run a business, you want to be a one-person shop in terms of being able to do whatever your passion is and create it. And the internet has turned out to be this massively positive influence for a lot of people because it allows them distribution. It allows them reach. But I wouldn’t underplay the —
There are some people who would make, at this point, a very different argument about the effect of the internet on people.
But I was going to go to the other side. Whether it’s just communication and expressing themselves, one shouldn’t minimize the number of people for whom this is a creative outlet and it’s an expression, and it has nothing to do with commerce and they’re not looking to monetize it, but they’re looking to express themselves. Our tools, I think, do both phenomenally well. And I think that is our job. Our job is not doing value judgment on what people are using this for. Our job is [to ask], “How do we enable people to pursue their passion?”
I think we do a great job at that. If you’re a K–12 student today, when you write a project, you’re just using text. How archaic is that? Why not put in some images? Why not create a video? Why not point to other links? The whole learning process is going to be dramatically expanded visually for billions of people on the internet, and we enable that to happen. I think there are different users and different motivations, and again, as I said, we’re very comfortable with that.
One of the other tensions I think about right now when it comes to AI is that the whole business — the marketing business, the experience business you have — requires a feedback loop of analytics. You’re going to put some content ideally on the web. You’re going to put some Adobe software on the website. You own a big analytics suite that you acquired with Omniture back in the day. Then that’s going to result in some conversions. You’ll do some more tracking. You’ll sell some stuff.
That all depends on a vibrant web. I’m guessing when people make videos in Premiere and upload them to YouTube, you don’t get to see what happens on YouTube. You don’t have great analytics from there. I’m guessing you have even worse analytics from TikTok and Instagram Reels. More and more people are going to those closed platforms, and the web is getting choked by AI. You can feel that it’s being overrun by low-quality SEO spam or AI content, or it’s mostly e-commerce sites because you can avoid some transaction fees if you can get people to go to a website. Do you worry about the pressure that AI is putting on the web itself and how people are going to the more closed platforms? Because that feels like it directly hits this business, but it also directly impacts the future of how people use Photoshop.
I think your point really brings to the forefront the fact that the more people use your products, the more differentiating yourself with your content is a challenge. I think that comes with the democratization of access to tools and information. It’s no different from if you’re a software engineer and you have all this access to GitHub and everything that you can do with software. How do you differentiate yourself as a great engineer, or if you’re a business, how do you differentiate yourself with a business? But as it relates to the content creation parts —
Actually, can I just interrupt you?
Sure.
I want you to talk about the distribution side. This is the part that I think is under the most pressure. Content creation is getting easier and more democratic. However you feel about AI, it is easier to make a picture or a video than it’s ever been before. On the distribution side, the web is being choked by a flood of AI content. The social platforms, which are closed distribution, are also being flooded with AI content. How do you think about Adobe living in that world? How do you think about the distribution problem? Because it seems like the problem we all have to solve.
You’re absolutely right in that, as the internet has evolved, there’s what you might consider open platforms and closed platforms. But we produce content for all of that. You pointed out that, whether it’s YouTube, TikTok, or just the open internet, we can help you create content for all of that. I don’t know that I’d use the word “choked.” I used the word “explosion” of content, certainly, and “flooded” also is a word that you used. It’s a consequence. It’s a consequence of the access. And I do think that for all the companies that are in that business, even for companies that are doing commerce, there are a couple of key hypotheses that, when they follow them, make them lasting platforms. The first is transparency about what they are doing with that data and how they’re using that data. The second is the monetization model: how are they sharing what the content distributed through their sites earns with the people who are making those platforms incredibly successful?
I don’t know that I worry about that a lot, honestly. I think most of the creators I’ve spoken to like a proliferation of channels because they fundamentally believe that their content will be differentiated on those channels, and getting exposure to the broadest set of eyeballs is what they aspire to. So I haven’t had a lot of conversations with creators where they are telling us, as Adobe, that they don’t like the fact that there are more platforms on which they have the ability to create content. They do recognize that it’s harder, then, for them to differentiate themselves and stand out. Ironically, that’s an opportunity for Adobe because the question is, for that piece of content, how do you differentiate yourself in the era of AI if there’s going to be more and more lookalikes, and how do you have that piece of content have soul? And that’s the challenge for a creative.
How do you think about the other tension embedded in that, which is that you can go to a number of image generators, and if someone is distinctive enough, you can say, “Make me an image in the style of X,” and that can be trained upon and immediately lifted, and that distinction goes to zero pretty fast. Is that a tension that you’re thinking about?
Given the role that Adobe plays in the content creation business, I think we take both the innovation angle and the responsibility angle very seriously. And I know you’ve had conversations with Dana [Rao, Adobe’s general counsel] and others about what we are doing with Content Credentials and what we are doing with the FAIR Act. If you look at Photoshop, we’re also taking a very thoughtful approach in saying that when you upload a picture for which you want to do a structure match or style match, you bear the responsibility of saying you have access to that IP and a license to that IP in order to do that.
So I can interpret your question in one of two ways. One is: how do we look at all of the different image generators that have emerged? In that case, we are both creating our own image generator and, as we showed at the NAB Show, supporting third-party models. It was really critical for us to sequence this by first creating our own image model, both because we had one that was designed to be commercially safe and because it respected the rights of the creative community, which we have to champion. But if others have decided that they are going to use a different model and want to use our interfaces, then with the appropriate permissions and policies, we will support that as well.
The other way I interpret your question is about responsibility: when we provide something ourselves, how are we making sure that we recognize IP? Because it is important, and it’s people’s IP. I think at some point, the courts will opine on this, but we’ve taken a designed-to-be-commercially-safe approach where we recognize the creator’s IP. Others have not. And the question might be, well, why are you supporting them in some of our products? A lot of our customers are saying, “Well, we will take the responsibility, but please integrate this in our interfaces,” and that’s something that we are supporting with third-party models.
It bears mentioning that literally today, as we’re speaking, an additional set of newspapers has sued OpenAI for copyright infringement. And that seems like the thing that is burbling along underneath this entire revolution: yeah, the courts are going to have to help us figure this out. That seems like the very real answer. I did have a long conversation with Dana [Rao] about that, and I don’t want to sit in the weeds of it. I’m just wondering, for you as the CEO of Adobe, what is your level of risk? How risky do you think this is right now for your company?
I think the approach that we’ve taken has shown just tremendous leadership by saying … Look at our own content. We have a stock business, and we have rights to train the models on that stock content. We have Behance, the social site where creative professionals share their images. While that’s owned by Adobe, we did not train our Firefly image models on it, because that was not the agreement we had with the people who post there.
I think we’ve taken a very responsible path, so I feel really good about what we are doing. I feel really good about how we are indemnifying customers. I feel really good about how we are doing custom models, where we allow a company in the media business or the CPG business to say, “We will upload our content to you, Adobe, and you will create a custom model only we can use, based on what we have rights to.” So, we have done a great job. I think other companies, to your point, are not completely transparent yet about what data they use and [if] they scrape the internet, and that will play out in the industry. But I like the approach that we’ve taken, and I like the way in which we’ve engaged with our community on this.
It’s an election year. There are a lot of concerns about misinformation and disinformation with AI. The AI systems hallucinate a lot. It’s just real. It’s the reality of the products that exist today. As the CEO of Adobe, is there a red line of capability that you won’t let your AI tools cross right now?
To your point, I think something like 50 percent of the world’s population is going to the polls over a 12-month period, including in the US and other major democracies. And so we’ve been actively working with all these governments. For any piece of content that’s being created, how does somebody put their digital signature on the provenance of that content? Where did it get created? Where did it get consumed? We’ve done an amazing job of partnering with so many companies in the camera space, in the content distribution space, and in the PC space, which all say we need to do this. I think we’ve also now made the switch to the question of how you visually identify that there is this watermark or digital signature showing where the content came from.
I think the unsolved problem to some degree is how do you, as a society, get consumers to say, “I’m not going to trust any piece of content until I see that content credential”? We’ve had nutrition labels on food for a long time — this is the nutrition label on a piece of content. Not everybody reads the nutrition label before they eat whatever they’re eating, so I think it’s a similar thing, but I think we’ve done a good job of acting responsibly. We’ve done a great job of partnering with other people. The infrastructure is there. Now it’s the change management with society and people saying, “If I’m going to go see a piece of video, I want to know the provenance of that.” The technology exists. Will people want to do that? And I think that’s—
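
To make the nutrition-label idea concrete, here is a deliberately simplified sketch of binding a provenance label to a file. It is not Adobe’s actual Content Credentials format (the real system follows the C2PA standard and uses certificate-based signatures); the manifest fields and the HMAC key below are invented for illustration. The point is the two checks a viewer performs: is the label authentic, and does it describe these exact bytes?

```python
# Illustrative sketch of a provenance "nutrition label": sign a manifest
# describing a file, and later verify both the signature and the file hash.
# Not the C2PA / Content Credentials format; fields and key are invented.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-stand-in-for-a-real-certificate"

def make_manifest(path: str, creator: str, tool: str) -> dict:
    """Build and sign a provenance manifest for a content file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    claim = {
        "asset_sha256": digest,  # ties the claim to these exact bytes
        "creator": creator,
        "tool": tool,
    }
    body = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(path: str, manifest: dict) -> bool:
    """Check the label is authentic and the file was not altered."""
    body = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was tampered with
    with open(path, "rb") as f:
        current = hashlib.sha256(f.read()).hexdigest()
    return current == manifest["claim"]["asset_sha256"]
```

Editing a single pixel changes the hash, so verification fails; that is the whole mechanism a “check the label” habit would rely on.
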
The thing everyone says about this idea is, well, Photoshop existed. You could have done this in Photoshop. What’s the difference? That’s you. You’ve been here through all these debates. I’m going to tell you what you are describing to me sounds a little bit naive. No one’s going to look at the picture of Mark Zuckerberg with the beard and say, “Where’s the nutrition label on that?” They’re going to say, “Look at this cool picture.” And then Zuck is going to lean into the meme and post a picture of his razor. That’s what’s happening. And that’s innocent. A bunch of extremely polarized voters in a superheated election cycle is not going to look at a nutrition label. It just doesn’t seem realistic. Are you saying that because it’s convenient to say, or do you just hope that we can get there?
I actually acknowledge that the last step in this process is getting the consumer to care, and getting the consumer to care [about] pieces of information that are important. To your point, some of your examples are in fun and in jest, and everybody knows they’re in jest, so it doesn’t matter; others are real pieces of information. But there is precedent for this. When we all transacted business on the internet, we said we want to see that HTTPS. We want to know that our credit card information is being kept securely. And I agree with you. I think it’s an unsolved problem in terms of when consumers will care and what percentage of consumers will care. So, I think our job is the infrastructure, which we’ve done. Our job is educating, which we are doing. But there is a missing step in all of this. We are going into this with our eyes open, and if there are ideas that you have on what else we can do, we’re all ears.
Is there a red line for you where you’ve said, “We are not going to cross this line and enable this kind of feature”?
Photoshop has actually drawn a couple of lines in the past. Creating currency, if you remember, was one place. I think pornography is another. There are some things in terms of content where we have drawn the line. But that’s a judgment call, and we’ll keep iterating on that, and we’ll keep refining what we do.
Alright. Let’s talk about PDF. PDF is an open standard. You can make a PDF pretty much anywhere all the time. You’ve built a huge business around managing these documents. And the next turn of it is, as you described, “Let an AI summarize a bunch of documents, have an archive of documents that you can treat almost like a wiki, and pull a bunch of intelligence out of it.” The challenge is that the AI is hallucinating. The future of the PDF seems like training data for an AI. And the thing that makes that really happen is the AIs have to be rock-solid reliable. Do you think we’re there yet?
It’s getting better, but no. Even the fact that we use the word hallucinate. The incredible thing about technology right now is we use these really creative words that become part of the lexicon in terms of what happens. But I think we’ve been thoughtful in Acrobat about how we get customer value, and it’s different because when you’re doing a summary of it and you can point back to the links in that document from which that information was gleaned, I think there are ways in which you provide the right checks and balances. So, this is not about creation when you’re summarizing and you’re trying to provide insight and you’re correlating it with other documents. It will get better, and it’ll get better through customer usage. But it’s a subset of the problem of all hallucinations that we have in images. And so I think in PDF, while we’re doing research fundamentally in all of that, I think the problems that we’re trying to solve immediately are summarization — being able to use that content and then create a presentation or use it in an email or use it in a campaign. And so I think for those use cases, the technology is fairly advanced.
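
The kind of check-and-balance Narayen describes, a summary whose every line points back to the passage it came from, can be illustrated with a toy extractive summarizer. This is a sketch, not Acrobat’s AI Assistant: it scores sentences by document-wide term frequency and carries along the page each sentence came from, so a reader can always flip to the cited page and verify.

```python
# Toy "summary with sources": extractive, so every output line keeps the
# page number it was lifted from. A real product would pair an LLM with
# retrieval; the grounding idea is the same.
import re
from collections import Counter

def summarize(pages: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Return the top_n highest-scoring sentences with their page numbers."""
    sentences = []  # (sentence, page_number) pairs
    for page_no, text in enumerate(pages, start=1):
        for s in re.split(r"(?<=[.!?])\s+", text):
            if s.strip():
                sentences.append((s.strip(), page_no))

    # Score a sentence by the document-wide frequency of its words,
    # normalized by length so long sentences don't win automatically.
    freq = Counter(w for s, _ in sentences for w in re.findall(r"\w+", s.lower()))
    def score(item: tuple[str, int]) -> float:
        words = re.findall(r"\w+", item[0].lower())
        return sum(freq[w] for w in words) / max(len(words), 1)

    return sorted(sentences, key=score, reverse=True)[:top_n]

# Every summary line carries a citation a reader can check by hand.
pages = ["PDF archives hold decades of research. Summaries must cite sources.",
         "A summary without a pointer cannot be checked. Pointers build trust."]
for sentence, page in summarize(pages, top_n=2):
    print(f"[p. {page}] {sentence}")
```

Because nothing is generated, nothing can be hallucinated; the trade-off is that the summary can only rephrase by selection, which is why production systems blend generation with this kind of source pointing.
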
There’s a thing I think about all the time — an AI researcher told me this a few years ago. If you just pull the average document off the average website, the document is useless. It’s machine-generated. It’s a status update for an IoT sensor on top of a light pole. Statistically, that is the vast majority of all the documents on the internet. When you think about how much machine-generated documentation any business makes, the AI problem amps it up. Now I’m having an AI write an email to you; you’re having an AI summarize the email for you. We might need to do a transaction or get a signature. My lawyer will auto-generate some AI-written form or contract. Your AI will read it and say it’s fine. Is there a part where the PDF just drops out of that because it really is just machines talking to each other to complete a transaction and the document isn’t important anymore?
Well, I think this is so nascent that we’ll have different kinds of experiences. I’ll push back first a little — the world’s information is in PDF. And so if we think about knowledge management of the universe as we know it today, I think the job that Adobe and our partners did to capture the world’s information and archive it [has] been a huge societal benefit that exists. So you’re right in that there are a lot of documents that are transient that perhaps don’t have that fundamental value. But I did want to say that societies and cultures are also represented in PDF documents. And that part is important. I think — to your other question associated with “where do you eliminate people even being part of a process and let your computer talk to my computer to figure out this deal” — you are going to see that for things that don’t matter, and judgment will always be about which ones of those matter. If I’m making a big financial investment, does that matter? If I’m just getting an NDA signed, does that matter? But you are going to see more automation I think in that particular respect. I think you’re right.
The PDF to me represents a classic paradigm of computing. We’re generating documents. We’re signing documents. There are documents. There are files and folders. You move into the mobile era, and the entire concept of a file system gets abstracted. And maybe kids, they don’t even know what file systems are, but they still know what PDFs are. You make the next turn. And this is just to bring things back to where we started. You say AI is a paradigm shift, and now you’re just going to talk to a chatbot and that is the interface for your computer, and we’ve abstracted one whole other set of things away. You don’t even know how the computer is getting the task done. It’s just happening. The computer might be using other computers on your behalf. Does that represent a new application model for you? I’ll give you the example: I think most desktop applications have moved to the web. That’s how we distribute many new applications. Photoshop and Premiere are the big stalwarts of big, heavy desktop applications at this point in time. Does the chatbox represent, “Okay, we need yet another new application model”?
I think you are going to see some fundamental innovation. And the way I would answer that question is first abstracting the entire world’s information. It doesn’t matter whether it was in a file on your machine, whether it was somewhere on the internet, and being able to have access to it and through search, find the information that you want. You’re absolutely right that the power of AI will allow all of this world’s information to come together in one massive repository that you can get insight from. I think there’s always going to be a role though for permanence in that. And I think the role of PDF in that permanence aspect of what you’re trying to share or store or do some action with or conduct business with, I think that role of permanence will also play an important role. And so I think we’re going to innovate in both those spaces, which is how do you allow the world’s information to appear as one big blob on which you can perform queries or do something interesting? But then how do you make it permanent, and what does that permanence look like, and what’s the application of that permanence? Whether it’s for me alone or for a conversation that you and I had, which records that for posterity?
I think both of these will evolve. And it’s areas that — how does that document become intelligent? Instead of just having data, it has process and workflow associated with it. And I think there’s a power associated with that as well. I think we’ll push in both of these areas right now.
Do you think that happens on people’s desktops? Do you think it happens in cloud computing centers? Where does that happen?
Both and on mobile devices. Look at a product like Lightroom. You talked about Denoising and Lightroom earlier. When Lightroom works exactly the same across all these surfaces, that power in terms of people saying, oh my God, it’s exactly the same. So I think the boundaries of what’s on your personal computer and what’s on a mobile device and what’s in the cloud will certainly blur because you don’t want to be tethered to a device or a computer to get access to whatever you want. And we’ve already started to see that power, and I think it’ll increase because you can just describe it. It may not have that permanent structure that we talked about, but it’ll get created for you on the fly, which is, I think, really powerful.
Do you see any limits to desktop chip architectures where you’re saying, “Okay, we want to do inference at scale. We’re going to end up relying on a cloud more because inference at scale on a mobile device will make people’s phones explode”? Do you see any technical limitations?
It’s actually just the opposite. We had a great meeting with Qualcomm the other day, and we talked to Nvidia and AMD and Qualcomm. I think a lot of the training, that’s the focus that’s happening on the cloud. That’s the infrastructure. I think the inference is going to increasingly get offloaded. If you want a model for yourself based on your information, I think even today with a billion parameters, there’s no reason why that just doesn’t get downloaded to your phone or downloaded to your PC. Because otherwise, all that compute power that we have in our hands or on our desktop is really not being used. I think the models are more nascent in terms of how you can download it and offload that processing. But that’s definitely going to happen without a doubt. In fact, it’s already happening, and we’re partnering with the companies that I talked about to figure out how that power of Photoshop can actually then be on your mobile device and on your desktop. But we’re a little early in that because we’re still trying to learn, and the model’s getting on the server.
I can’t think of a company that is more tied to the general valence of the GPU market than Adobe. Literally, the capabilities you ship have always been at the boundary of GPU capabilities. Now that market is constrained in different ways. Different people want to buy GPUs for vastly different reasons. Is that something you’re thinking about: how the GPU market will shape as the overwhelming financial pressure to optimize for training begins to alter the products themselves?
For the most part, people look at the product . I don’t know anybody who says, “I’ve got enough processing power,” or “I’ve got enough network bandwidth,” or “I’ve got enough storage space.” And so I think all those will explode – you’re right. We tend to be a company that wants to exploit all of the above to deliver great value, but when you can have a conversation with [Nvidia CEO] Jensen [Huang] and talk about what they are doing and how they want to partner with us, I think that partnership is so valuable in times like this because they want this to happen.
Shantanu, I think we are out of time. Thank you so much for being on Decoder. Like I said, you were one of the first names I ever wrote down. I really appreciate you coming on.
Thanks for having me. Really enjoyed the conversation, Nilay.

Photo illustration by The Verge / Photo: Adobe

The tech and the consumers both might not be quite ready yet, but he’s betting big on an AI future.

Today, I’m talking with Adobe CEO Shantanu Narayen. Shantanu’s been at the top of my list of people I’ve wanted to talk to for the show since we first launched — he’s led Adobe for nearly 17 years now, but he doesn’t do too many wide-ranging interviews. I’ve always thought Adobe was an underappreciated company — its tools sit at the center of nearly every major creative workflow you can think of — and with generative AI poised to change the very nature of creative software, it seemed particularly important to talk with Shantanu now.

Adobe has an enormously long and influential history when it comes to creative software. It began in the early 1980s, developing something called PostScript that became the first industry-standard language for connecting computers to printers — a huge deal at the time. Then, over the late 1980s and 1990s, it released the first versions of software that’s now so ubiquitous that it’s hard to imagine the computing and design industries without them. Adobe created the PDF, the document standard everyone now kind of loves to hate, as well as programs like Illustrator, Premiere, and — of course — Photoshop. If you work in a creative field, it’s a near certainty that there’s Adobe software running somewhere close to you.

All that influence puts Adobe right at the center of the whole web of tensions we like to talk about on Decoder — especially as the company has evolved its business and business model over time. Shantanu joined the company in 1998, back when desktop software was a thing you sold on a shelf. He was with the company when it started bundling a whole bunch of its flagship products into the Creative Suite, and he was the CEO who led the company’s pivot to subscription software with Creative Cloud in 2012. He also led some big acquisitions that turned into Adobe’s large but under-the-radar marketing business — so much of what gets made in tools like Photoshop is marketing and advertising collateral, after all, and the company has built a growing business helping other businesses create, distribute, and track the performance of all that work around the web.

But AI really changes what it means to make and distribute creative work — even what it means to track advertising performance across the web — and you’ll hear us talk a lot about all the different things generative AI means for a company like Adobe. There are strategic problems, like cost: everyone’s pouring tons of money into R&D for AI, but not many people are seeing revenue returns on it just yet, and Shantanu explained how he’s thinking about the return on that investment.

Then there are the fundamental philosophical challenges of adding AI to photo and video tools. How do you sustain human creativity when so much of it can be outsourced to the tools themselves with AI? And I asked a question I’ve been thinking about for a long time as more and more of the internet gets so deeply commercialized: What does it mean when a company like Adobe, which makes the tools so many people use to make their art, sees the creative process as a step in a marketing chain, instead of a goal in and of itself?

This one got deep — like I said, Shantanu doesn’t do many interviews like this, so I took my shots.

Okay: Adobe CEO Shantanu Narayen. Here we go.

This transcript has been lightly edited for length and clarity.

Shantanu Narayen, you’re the CEO of Adobe. Welcome to Decoder!

Thanks for having me, Nilay.

I am very excited to talk to you. You are one of the first guests I ever put on a list of guests I wanted on the show because I think Adobe is under-covered. As the CEO, you’ve been there for a long time. You don’t give a lot of interviews, so I’m very excited you chose to join us on the show.

Adobe is 40-plus years old. It has been a lot of different kinds of companies. You have been there since 1998. You became CEO in 2007. You saw at least one paradigm shift in computing. You led the company through another shift in computing. How would you describe Adobe today?

I think Adobe has always been about fundamental innovation, and I think we are guided by our mission to change the world through digital experiences. I think what motivates us is: Are we leveraging technology to deliver great value to customers and staying true to this mission of digital experiences?

What do you mean, specifically, by digital experiences?

The way people create digital experiences, the way they consume digital experiences, the new media types that are emerging, the devices on which people are engaging with digital, and the data associated with it as well. I think we started off way more with the creative process, and now we’re also into the science and data aspects. Think about the content lifecycle — how people create content, manage it, measure it, mobilize it, and monetize it. We want to play a role across that entire content life cycle.

I love this; you’re already way into what I wanted to talk about. Most people think of Adobe as the Photoshop company or, increasingly, the Premiere company. Wherever you are in the digital economy, Adobe is there, but what most people see is Creative Cloud.

You’re talking about everything that happens after you make the asset. You make the picture in Photoshop, and then a whole bunch of stuff might happen to it. You make the video in Premiere, and then a lot of things might happen. If you’re a marketer, you might make a sale. If you’re a content creator, you might run an ad. Something will happen there. You’re describing that whole expansive set of things that happen after the asset is made. Is that where your focus is, or is it still at the first step, which is someone has to double-click on Photoshop and do a thing?

I think it is across the entire chain — and, Nilay, I’d be remiss if I didn’t also say we are also pretty well known for PDF and everything associated with PDF!

[Laughs] Don’t worry, I have a lot of PDF questions coming for you.

I think as it relates to the content, which was your question: it doesn’t matter which platform you’re using to create content, whether it’s a desktop, whether it’s a mobile device, whether it’s web — that’s just the first step. It’s how people consume it, whether it’s on a social media site or whether it’s a company that’s engaging with customers and they’re creating some sort of a personalized experience. So, you’re right — very much, we’ve changed our aspirations. I think 20 years ago, we were probably known just for desktop applications, and now we’ve expanded that to the web, and the entire chain has certainly been one of the areas in which we’ve both innovated and grown.

I want to come back to that because there are a lot of ideas embedded in that. One thing that’s on my mind as I’ve been talking to people in this industry and all the CEOs on Decoder: half of them tell me that AI is a paradigm shift on the order of mobile, on the order of desktop publishing, all things that you have lived through. Do you buy that AI is another one of these paradigm shifts?

I think AI is something that we’ve actually been working on for a long time. What do computers do really well? Computers are great at pattern matching. Computers are great at automating inefficient tasks. I think all the buzz is around generative AI, which is the starting point of whether you’re having a conversational interface with your computer or you’re trying to create something and it enables you to start that entire process. I do think it’s going to be fairly fundamental because of the amount of energy, the amount of capital, the amount of great talent that’s focused on, “What does it mean to allow computers to have a conversation and reason and think?” That’s unprecedented. Even more so than, I would say, what happened in the move to mobile or the move to cloud because those were happening at the same time, and perhaps the energy and investment were divided among both, whereas now it’s all about generative AI and the implications.

If you are Microsoft or Google or someone else, one of the reasons this paradigm shift excites you is because it lets you get past some gatekeepers in mobile, it lets you create some new business models, it lets you invent some new products maybe that shift some usage in another way. I look at that for them and I say: Okay, I understand it. I don’t quite see that paradigm shift for Adobe. Do you see that we’re going to have to invent a new business model for Adobe the way that some of the other companies see it?

I think any technology shift has the same profound impact in terms of being a tailwind. If you think about what Microsoft does with productivity, and if you think about what Adobe does with creativity, one can argue that creativity is actually going to be more relevant to every skill moving forward. So I do think it has the same amount of profound implication for Adobe. And we’ve innovated in a dramatic way. We like to break up what we are doing with AI into three layers: the interface layer, which is what people use to accomplish something; the foundation models we’re creating for ourselves, which are the underlying brain of the things that we are attempting to do; and the data. I think Adobe has innovated across all three. And in our different clouds — we can touch on this later — Creative Cloud, Document Cloud, and Experience Cloud, we’re actually monetizing in different ways, too. So I am really proud of both the innovation on the product side and the experimentation on the business model side.

The reason I asked that question that way, and right at the top, is generative AI. So much of the excitement around it is letting people who maybe don’t have an affinity for creative tools or an artistic ability make art. It further democratizes the ability to generate culture, however you wish to define culture. For one set of companies, that’s not their business, and you can see that expands their market in some way. The tools can do more things. Their users have more capabilities. The features get added.

For Adobe, that first step has always been serving the creative professional, and that set of customers actually feels under threat. They don’t feel more empowered. I’m just wondering how you see that, in the broadest possible sense. I am the world’s foremost, “What is a photo?” philosophical handwringer, and then I use AI Denoise in Lightroom without a second’s hesitation, and I think it’s magic. There’s something there that is very big, and I’m wondering if you see that as just a moment we’re all going to go through or something that fundamentally changes your business.

Whether you’re a student, whether you’re a business professional, or whether you’re a creative, we like to say at Adobe that you have a story to tell. The reality is that there are way more stories that people want to tell than skills that exist to be able to tell that story with the soul that they want and the emotion that they want. I think generative AI is going to attract a whole new set of people who previously perhaps didn’t invest the time and energy into using the tools to be able to tell that story. So, I think it’s going to be tremendously additive in terms of the number of people who now say, “Wow, it has further democratized the ability for us to tell that story.” And on the creative side, it helps whether you’re ideating or whether you’re trying to take some picture and fix it but don’t quite know how to do it.

When people have looked at things like Generative Fill, their jaws drop. What’s amazing to us is when, despite decades of innovation in Photoshop, something like Generative Fill captures the imagination of the community — and the adoption of that feature has been dramatically higher than any other feature that we’ve introduced in Photoshop. When layers first came out, people looked at it, and their jaws dropped. It just speaks to how much more we can do for our customers to be able to get them to tell their story. I think it’s going to be dramatically expansive.

I feel like [Google CEO] Sundar Pichai likes to say AI is more profound than electricity —

You still need electricity to run the AI, so I think they’re both interrelated.

But I honestly think “used as much as layers” is the same statement. It’s at the same level of change. It’s pretty good.

I want to drill down into some of these ideas. You have been the CEO since 2007. That’s right at the beginning of the mobile era. Many things have changed. You’ve turned Adobe into a cloud business. You started as a product manager in 1998. I’m assuming your framework for making decisions has evolved. How do you make decisions now, and what’s your framework?

I think there are a whole bunch of things that have perhaps remained the same and a whole bunch of things that are different. I think at your core, when you make decisions — whether it’s our transition to the cloud, whether it’s what we did with getting into the digital marketing business — it’s always been about: Are we expanding the horizons and the aspirations we look at? How can we get more customers to the platform and deliver more value? At our core, what’s remained the same is this fundamental belief that by investing in deep technology platforms and delivering fundamental value, you will be able to deliver value, monetize it, and grow as a company.

I think what’s different is that the company has scaled, and with scale, something that was always important becomes increasingly obvious: how do you create a structure in which people can innovate, and how do you scale that? At $20 billion, how do you scale that business and make decisions that are appropriate? I think that’s changed. But at my core … I managed seven people then, I manage seven people now, and it’s leveraging them to do amazing things.

That gets into the next Decoder question almost perfectly: How is Adobe structured today? How did you arrive at that structure?

I think structures are pendulums, and you change the pendulum based on what’s really important. We have three businesses. One is what we call the creative business that you touched on so much, and the vision there is how we enable creativity for all. We have the document business, which is really about how we accelerate document productivity. And the marketing business is about powering digital businesses. I would say we have product units. We call the first two, Creative Cloud and Document Cloud, our digital media business, and we call the marketing business the digital experience business. So we have two core product units run by two presidents, Anil [Chakravarthy, president of Adobe’s digital experience business] and David [Wadhwani, president of Adobe’s digital media business]. And with the rest of the company, we have somebody focused on strategy and corporate development. Partnerships is an important part. And then you have finance, legal, marketing, and HR as functional areas of expertise.

Where do you spend your time? I always think about CEOs as having timelines. There’s a problem today some customer is having, you’ve gotta solve that in five minutes. There’s an acquisition that takes a year or maybe even more than that. Where do you spend your time? What timeline do you operate on?

Time is our most valuable commodity, right? I think prioritization is something where we’ve been increasingly trying to say: what moves the needle? One of the things I like to do — both for myself as well as at the end of the year with my senior executives — is say, “How do we move the needle and have an impact for the company?” And that might change over time.

I think what’s constant is product. I love products, I love building products, I love using our products, but the initiatives might change. A few years ago, it was all about building this product that we call the Adobe Experience Platform — a real-time customer data platform — because we had this vision that if you had to deliver personalized engaging experiences, you needed a next-generation infrastructure. This was not about the old generation of, “Where was your customer data stored?” It was more about: what’s a real-time platform that enables you to activate that data in real time? And that business has now exploded. We have tens of billions of profiles. The business has crossed $750 million in the book of business.

Incubating new businesses is hard. In companies, the power structure tends to be with the businesses that are making money today, and so incubating new businesses requires sponsorship. Adobe Express is another product that we’ll talk about. We just released a phenomenal new version of Adobe Express on both mobile and web, which is all about this creativity for all. And so I think about what the needle-moving initiatives are. Sometimes, it might be about partnerships. And as we think about LLMs and what’s happening in generative AI, where do we partner versus where do we build? While it changes, I would say there are three parts of where I spend my time.

There’s strategy because, at the end of the day, our job is planting the flag for where the company has to go and the vision for the company. The second is a cadence of execution. If you don’t execute against the things that are important for you, it doesn’t matter how good your strategy is. And the third set of things that you focus on is people. Are you creating a culture where people want to come in and work, so they can do their best work? Is the structure optimized to accomplish what is most important, and are you investing in the right places? I would say those are the three buckets, but it ebbs and flows based on what’s critical. And you’re right — you do get interrupted, and having to deal with whatever is the interruption of the day is also an important part of what you do.
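
To make the earlier point about the Adobe Experience Platform concrete, here is a minimal sketch of what a real-time customer data platform does: stitch incoming events to one unified profile and evaluate segments on the fly. Every field name and rule here is hypothetical, a toy rather than an actual AEP API.

from collections import defaultdict

# Unified profiles keyed by a resolved customer identity (simplified).
profiles: dict[str, dict] = defaultdict(lambda: {"events": [], "traits": {}})

def ingest(event: dict) -> None:
    """Merge an incoming event into the unified profile in real time."""
    profile = profiles[event["customer_id"]]
    profile["events"].append(event)
    profile["traits"].update(event.get("traits", {}))

def in_segment(customer_id: str) -> bool:
    """Toy activation rule: cart activity but no purchase yet."""
    seen = {e["type"] for e in profiles[customer_id]["events"]}
    return "add_to_cart" in seen and "purchase" not in seen

ingest({"customer_id": "c42", "type": "page_view", "traits": {"locale": "en-US"}})
ingest({"customer_id": "c42", "type": "add_to_cart"})
print(in_segment("c42"))  # True -> activate a personalized campaign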

You said you had three core divisions. There’s the Creative Cloud — the digital media side of the business. There’s the Experience Cloud, which is the marketing side of the business, and then there’s … I think you have a small advertising line of revenue in that report. Is that the right structure for the AI moment? Do you think you’re going to have to change that? Because you’ve been in that structure for quite some time now.

I think what’s been really amazing and gratifying to us is, at the end of the day, while you have a portfolio of businesses, if you can integrate them where you deliver value to somebody that is incredible and that no other company can do by themselves, that’s the magic that a company can do. We just had MAX in London, and we had our Summit here in Las Vegas. These are our big customer events. And the story even at the financial analyst meetings is all about how these are coming together: how the integration of the clouds is where we’re delivering value. When you talk about generative AI, we do creation and we do production. We have to do asset management.

If you’re a marketer and you’re creating all this content, whether it’s for social, whether it’s for email campaigns, whether it’s for media placement or just TV, where is all that content stored, and how do you localize it, and how do you distribute it? How do you activate it? How do you create these campaigns? What do you do with workflow and collaboration? And then what is the analysis and insight and reporting?

This entire framework is called GenStudio, and it’s actually the bringing together of the cloud businesses. The challenge in a company is you want people who are ruthlessly focused on driving innovation in a competitive way and leading the market and what they are responsible for, but you also want them to take a step back and realize that it’s actually putting these together in a way that only Adobe can uniquely do that differentiates us from everybody else. So, while we have these businesses, I think we really run the company as one Adobe, and we recognize the power of one Adobe, and that’s a big part of my job, too.

How do you think about investing at the cutting edge of technology? I’m sure you made AI investments years ago before anyone knew what they could become. I’m sure you have some next-gen graphics capabilities right now that are just in the research phase. That’s pure cost. I think Adobe has to have that R&D function in order to remain Adobe. At the same time, even the cost of deploying AI is going up as more and more people use Firefly or Generative Fill or anything else. And then you have a partnership with OpenAI to use Sora in Premiere, and that might be cheaper than developing on your own. How do you think about making those kinds of bets?

Again, we are in the business of investing in technology. A couple of things have really influenced how we think about it at the company. Software has an S-curve. You have things that are in incubation and have a horizon that’s not immediate, and you have other things that are mature. I would say our PostScript business is a mature business. It changed the world as we know it right now. But it’s a more mature business. And so, it’s about being thoughtful about where something is in its stage of evolution; therefore, you’re making investments certainly ahead of the “monetization” part, but you have other metrics. And you say, am I making progress against those metrics? We’re thoughtful about having this portfolio approach. Some people call it a horizon approach, based on which phase you’re in. But in each one of them, are we impatient for success in some way? It may be impatience for usage. It may be impatience for making technology advancements. It may be impatience for revenue and monetization. It may be impatience for geographic distribution. I think you still have to create a culture where the expectations of why you are investing are clear and you measure the success against that criteria.

What are some of the longer-term bets you’re making right now that you don’t know when they’re going to pay off?

Well, we’re always investing. AI, building our own foundation models. I think we’re all fairly early in this phase. We decided very early on that with Firefly, we’re going to be investing in our own models. We are doing the same on the PDF side. We had Liquid mode, which allowed you to make all your PDFs responsive on a mobile device. In the Experience Cloud, how do you think about customers, and what’s a model for customers and profiles and recommendations? Across the spectrum, we’re doing it.

I would say the area where we probably do the most fundamental research is in Creative [Cloud]: what’s happening with compression models or resolution or image enhancement techniques or mathematical models for that? We’ve always had advanced technology in that. There, you actually want the team to experiment with things that are further from the tree because if you’re too close to the tree and your only metric is what part of that ships, you are perhaps going to miss some fundamental moves. So, again, you have to be thoughtful about what you are. But I would say core imaging science and core video science are clearly the areas, along with 3D immersive. That’s where we are probably making the most fundamental research investments.

You mentioned AI and where you are in the monetization curve. Most companies, as near as I can tell, are investing a lot in AI, rolling out a lot of AI features, and the best idea anyone has is, “We’ll charge you 20 bucks a month to ask this chatbot a question, and maybe it will confidently hallucinate at you.” And we’ll see if that’s the right business. But that’s where we are right now for monetization. Adobe is in a different spot. You already have a huge SaaS business. People are already using the features. Is Firefly usage by Creative Cloud subscribers creating any margin pressure? You’re not charging extra for it, but you could in the future. How are you thinking about that increased cost?

We have been thoughtful about different models for the different products that we have. You’re right in Creative. Think about Express versus Creative Cloud. In Creative Cloud, we want low friction. We want people to experiment with it. Most people look at it and say, “Hey, are you acquiring new customers?” And that’s certainly an important part. What’s equally important is whether it helps with retention and usage, which, for a subscription business, also has a material impact on how you deliver value to customers.

Express is very different. Express is an AI-first new product that’s designed to be this paradigm change where, instead of knowing exactly what you want to do, you have a conversation with the computer: I want to create this flyer or I want to remove the background of an image or I want to do something even more exciting and I want to post something on a social media site. And there, it’s, again, about acquisition and successful exports.

You’re right in that there’s a cost associated with it. I would say for the most part, for most companies, the training cost is probably higher right now than the inference cost, partly because we can start to offload the inferencing onto devices as that becomes a reality. But it’s what we do for a living. If you are uncomfortable investing in fundamental technology, you’re in the wrong business. And we’re not a company that has focused on being a fast follower and letting somebody else invent. We like creating markets. And so you have to recognize who you are as a company, and that comes with the consequences of how you have to operate.
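
To illustrate that offloading point with one hedged, concrete example: with the open-source llama-cpp-python bindings, a small quantized model downloaded once can serve prompts entirely on local hardware, with no server round trip. The model path below is a placeholder, not anything Adobe ships.

from llama_cpp import Llama  # pip install llama-cpp-python

# One-time download of a small quantized model; inference then runs locally.
llm = Llama(model_path="./models/small-model.Q4_K_M.gguf",  # placeholder path
            n_ctx=2048, verbose=False)

out = llm("In one sentence, why does on-device inference cut serving costs?",
          max_tokens=48)
print(out["choices"][0]["text"])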

I think it remains to be seen how consumer AI is monetized. It remains to be seen even with generative AI in Photoshop. At the individual creative level, I think it remains to be seen. Maybe it will just help you with retention, but I feel like retention in Photoshop is already pretty high. Maybe it will bring you new customers, but you already have a pretty high penetration of people who need to use Photoshop.

It’s never enough. We’re always trying to attract more customers.

But that’s one part of the business. I think there’s just a lot of question marks there. There’s another part of your business that, to me, is the most fascinating. When I say Adobe is under-covered, the part of the business that I think is just fully under-covered is — you mentioned it — GenStudio. It’s the marketing side of the business, the experience side of the business. We’re going to have creatives at an ad agency make some assets for a store. The store is going to pump its analytics into Adobe’s software. The software is going to optimize the assets, and then maybe at some turn, the AI is going to make new assets for you and target those directly to customers. That seems like a very big vision, and it’s already pre-monetized in its way. That’s just selling marketing services to e-commerce sites. Is that the whole of the vision or is it bigger than that?

It’s a big part of the vision, Nilay. We’ve been talking about this vision of personalization at scale. Whether you’re running a promotion or a campaign or making a recommendation on what to watch next, we’re in our infancy in terms of what happens. When I look at how we create our own content and partner with great agencies, the amount of content that’s created, and the way to personalize that and run variations and experiment and run this across 180 countries where we might do business — that entire process from a campaign brief to where an individual in some country is experiencing that content — it’s a long, laborious process. And we think that we can bring a tremendous amount of technology to bear in making that way more seamless. So I think that is an explosive opportunity, and every consumer is now demanding it, and they’re demanding it on their mobile device.

I think people talk about the content supply chain and the amount of content that’s being created and the efficacy of that piece of content. It is a big part of our vision. But documents also. The world’s information is in documents, and we’re equally excited about what we are doing with PDF and the fact that now, in Reader, you can have a conversational interface, and you can say, “Hey, summarize for me,” and then over time, how does this document, if I’m doing medical research, correlate with the other research that’s in there and then go find things that might be on my computer or might be out there on the internet. You have to pose these interesting problems for your product team: how can we add value in this particular use case or scenario? And then they unleash their magic on it. Our job is posing these hard problems, like, “Why am I starting the process for Black Friday or Cyber Monday five months in advance? Why can’t I decide a week before what campaign I want to run and what promotion I want to run?” By enabling that, I think we will deliver tremendous value.

I promised you I would ask you a lot of questions about PDF, and I’m not going to let go of that promise, but not yet. I want to stay focused on the marketing side.

There’s an idea embedded in two phrases you just said that I find myself wrestling with. I think it is the story of the internet. It is how commercialized the internet has become. You said “content supply chain” and “content life cycle.” The point of the content is to lead to a transaction. That is an advertising- and marketing-driven view of the internet. Someone, for money, is going to make content, and that content will help someone else down the purchase funnel, and then they’re going to buy a pair of shoes or a toothbrush or whatever it is. And that I think is in tension with creativity in a real way. That’s in tension with creativity and art and culture. Adobe sits at the center of this. Everybody uses your software. How do you think about that tension? Because it’s the thing that I worry about the most.

Specifically, the tension is as a result of what? The fact that we’re using it for commerce?

Yeah. I think if the tools are designed and organized and optimized for commerce, then they will pull everybody toward commerce. I look at young creators on social platforms, and they are slowly becoming ad agencies. A one-person ad agency is where a creator ends up if they are at the top of their game. MrBeast is such a successful ad agency that his rates are too high, and it is better for him to sell energy bars and make ads for his own energy bars than it is for him to sell ads to someone else. That is a success story in one particular way, and I don’t deny that it’s a success story, but it’s also where the tools and the platforms pull the creatives because that’s the money. And because the tools — particularly Adobe’s tools — are used by everybody for everything, I wonder if you at the very top think about that tension and the pull, the optimization that occurs, and what influence that has on the work.

We view our job as enablement. If you’re a solopreneur or you want to run a business, you want to be a one-person shop in terms of being able to do whatever your passion is and create it. And the internet has turned out to be this massively positive influence for a lot of people because it allows them distribution. It allows them reach. But I wouldn’t underplay the —

There are some people who would make, at this point, a very different argument about the effect of the internet on people.

But I was going to go to the other side. Whether it’s just communication and expressing themselves, one shouldn’t minimize the number of people for whom this is a creative outlet and it’s an expression, and it has nothing to do with commerce and they’re not looking to monetize it, but they’re looking to express themselves. Our tools, I think, do both phenomenally well. And I think that is our job. Our job is not doing value judgment on what people are using this for. Our job is [to ask], “How do we enable people to pursue their passion?”

I think we do a great job at that. If you’re a K–12 student today, when you write a project, you’re just using text. How archaic is that? Why not put in some images? Why not create a video? Why not point to other links? The whole learning process is going to be dramatically expanded visually for billions of people on the internet, and we enable that to happen. I think there are different users and different motivations, and again, as I said, we’re very comfortable with that.

One of the other tensions I think about right now when it comes to AI is that the whole business — the marketing business, the experience business you have — requires a feedback loop of analytics. You’re going to put some content ideally on the web. You’re going to put some Adobe software on the website. You own a big analytics suite that you acquired with Omniture back in the day. Then that’s going to result in some conversions. You’ll do some more tracking. You’ll sell some stuff.

That all depends on a vibrant web. I’m guessing when people make videos in Premiere and upload them to YouTube, you don’t get to see what happens on YouTube. You don’t have great analytics from there. I’m guessing you have even worse analytics from TikTok and Instagram Reels. More and more people are going to those closed platforms, and the web is getting choked by AI. You can feel that it’s being overrun by low-quality SEO spam or AI content, or it’s mostly e-commerce sites because you can avoid some transaction fees if you can get people to go to a website. Do you worry about the pressure that AI is putting on the web itself and how people are going to the more closed platforms? Because that feels like it directly hits this business, but it also directly impacts the future of how people use Photoshop.

I think your point really brings to the forefront the fact that the more people use these products, the more of a challenge it becomes to differentiate yourself with your content. I think that comes with the democratization of access to tools and information. It’s no different from if you’re a software engineer and you have all this access to GitHub and everything that you can do with software. How do you differentiate yourself as a great engineer, or if you’re a business, how do you differentiate yourself as a business? But as it relates to the content creation parts —

Actually, can I just interrupt you?

Sure.

I want you to talk about the distribution side. This is the part that I think is under the most pressure. Content creation is getting easier and more democratic. However you feel about AI, it is easier to make a picture or a video than it’s ever been before. On the distribution side, the web is being choked by a flood of AI content. The social platforms, which are closed distribution, are also being flooded with AI content. How do you think about Adobe living in that world? How do you think about the distribution problem? Because it seems like the problem we all have to solve.

You’re absolutely right in that, as the internet has evolved, there’s what you might consider open platforms and closed platforms. But we produce content for all of that. You pointed out that, whether it’s YouTube, TikTok, or just the open internet, we can help you create content for all of that. I don’t know that I’d use the word “choked.” I used the word “explosion” of content, certainly, and “flooded” also is a word that you used. It’s a consequence. It’s a consequence of the access. And I do think that for all the companies that are in that business, even for companies that are doing commerce, there are a couple of key things that, when they get them right, make them lasting platforms. The first is transparency about what they are doing with that data and how they’re using that data. And the second: what’s the monetization model, and how are they sharing whatever content is being distributed through their sites with the people who are making those platforms incredibly successful?

I don’t know that I worry about that a lot, honestly. I think most of the creators I’ve spoken to like a proliferation of channels because they fundamentally believe that their content will be differentiated on those channels, and getting exposure to the broadest set of eyeballs is what they aspire to. So I haven’t had a lot of conversations with creators where they are telling us, as Adobe, that they don’t like the fact that there are more platforms on which they have the ability to create content. They do recognize that it’s harder, then, for them to differentiate themselves and stand out. Ironically, that’s an opportunity for Adobe because the question is, for that piece of content, how do you differentiate yourself in the era of AI if there’s going to be more and more lookalikes, and how do you have that piece of content have soul? And that’s the challenge for a creative.

How do you think about the other tension embedded in that, which is that you can go to a number of image generators, and if someone is distinctive enough, you can say, “Make me an image in the style of X,” and that can be trained upon and immediately lifted, and that distinction goes to zero pretty fast. Is that a tension that you’re thinking about?

Given the role that Adobe plays in the content creation business, I think we take both the innovation angle and the responsibility angle very seriously. And I know you’ve had conversations with Dana [Rao, Adobe counsel] and others about what we are doing with content credentials and what we are doing with the FAIR Act. If you look at Photoshop, we’re also taking a very thoughtful approach in saying that when you upload a picture for which you want to do a structure match or style match, you bear the responsibility of saying you have access to that IP and license to that IP in order to do that.

So I can interpret your question in one of two ways. One is: how do we look at all of the different image generators that have emerged? In that case, we are both creating our own image generator and, as we showed at the NAB Show, supporting other third parties. It was really critical for us to sequence this by first creating our own image model, both because we wanted one designed to be commercially safe and because it respected the rights of the creative community, which we have to champion. But if others have decided that they are going to use a different model but want to use our interfaces, then with the appropriate permissions and policies, we will support that as well.

And so I interpret your question in those two ways: when we provide something ourselves, we take responsibility for making sure that we recognize IP, because it is important, and it’s people’s IP. I think at some point, the courts will opine on this, but we’ve taken a very designed-to-be-commercially-safe approach where we recognize the creator’s IP. Others have not. And the question might be, well, why are you supporting them in some of our products? A lot of our customers are saying, “Well, we will take the responsibility, but please integrate this in our interfaces,” and that’s why we are supporting third-party models.

It bears mentioning that literally today, as we’re speaking, an additional set of newspapers has sued OpenAI for copyright infringement. The thing burbling along underneath this entire revolution is that, yeah, the courts are going to have to help us figure this out. That seems like the very real answer. I did have a long conversation with Dana [Rao] about that, and I don’t want to sit in the weeds of it. I’m just wondering, for you as the CEO of Adobe, what is your level of risk? How risky do you think this is right now for your company?

I think the approach that we’ve taken has shown just tremendous leadership by saying … Look at our own content. We have a stock business where we have rights to train the models based on our stock business. We have Behance, and Behance is the creative professional social site for people sharing their images. While that’s owned by Adobe, we did not train our Firefly image models based on that because that was not the agreement that we had with people who do it.

I think we’ve taken a very responsible way, so I feel really good about what we are doing. I feel really good about how we are indemnifying customers. I feel really good about how we are doing custom models where we allow a person in the media business or the CPG business to say, “We will upload our content to you Adobe, and we will create a custom model for us that only we can use, what we have rights for.” So, we have done a great job. I think other companies, to your point, are not completely transparent yet about what data they use and [if] they scrape the internet, and that will play out in the industry. But I like the approach that we’ve taken, and I like the way in which we’ve engaged with our community on this.

It’s an election year. There are a lot of concerns about misinformation and disinformation with AI. The AI systems hallucinate a lot. It’s just real. It’s the reality of the products that exist today. As the CEO of Adobe, is there a red line of capability that you won’t let your AI tools cross right now?

To your point, I think it’s something like 50 percent of the world’s population that is going to the polls over a 12-month period, including the US and other major democracies in the world. And so, we’ve been actively working with all these governments. For any piece of content that’s being created, how does somebody put their digital signature on what the provenance of that content was? Where did it get created? Where did it get consumed? We’ve done an amazing job of partnering with so many companies in the camera space, in the distribution of content space, in the PC space, to all say we need to do it. We’ve also now, I think, made the switch to asking: how do you visually identify that there is this watermark or this digital signature about where the content came from?

I think the unsolved problem to some degree is how do you, as a society, get consumers to say, “I’m not going to trust any piece of content until I see that content credential”? We’ve had nutrition labels on food for a long time — this is the nutrition label on a piece of content. Not everybody reads the nutrition label before they eat whatever they’re eating, so I think it’s a similar thing, but I think we’ve done a good job of acting responsibly. We’ve done a great job of partnering with other people. The infrastructure is there. Now it’s the change management with society and people saying, “If I’m going to go see a piece of video, I want to know the provenance of that.” The technology exists. Will people want to do that? And I think that’s—

The thing everyone says about this idea is, well, Photoshop existed. You could have done this in Photoshop. What’s the difference? That’s you. You’ve been here through all these debates. I’m going to tell you what you are describing to me sounds a little bit naive. No one’s going to look at the picture of Mark Zuckerberg with the beard and say, “Where’s the nutrition label on that?” They’re going to say, “Look at this cool picture.” And then Zuck is going to lean into the meme and post a picture of his razor. That’s what’s happening. And that’s innocent. A bunch of extremely polarized voters in a superheated election cycle is not going to look at a nutrition label. It just doesn’t seem realistic. Are you saying that because it’s convenient to say, or do you just hope that we can get there?

I actually acknowledge that the last step in this process is getting the consumer to care, and getting the consumer to care [about] pieces of information that are important. To your point again, you had a couple of examples where some of them are in fun and in jest and everybody knows they’re in fun and jest and it doesn’t matter. Whereas others are pieces of information. But there is precedent for this. When we all transacted business on the internet, we said we want to see that HTTPS. We want to know that our credit card information is being kept securely. And I agree with you. I think it’s an unsolved problem in terms of when consumers will care and what percentage of consumers will care. So, I think our job is the infrastructure, which we’ve done. Our job is educating, which we are doing. But there is a missing step in all of this. We are going into this with our eyes open, and if there are ideas that you have on what else we can do, we’re all ears.
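
For a sense of what that provenance infrastructure does mechanically, here is a deliberately simplified sketch: hash the asset, sign a small manifest describing where it came from, and let anyone verify the label later. This is a toy illustration, not the actual C2PA / Content Credentials format.

import hashlib, json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(asset: bytes, creator: str, tool: str) -> dict:
    """A minimal 'nutrition label': who made it, with what, and a content hash."""
    return {"asset_sha256": hashlib.sha256(asset).hexdigest(),
            "creator": creator, "tool": tool}

key = Ed25519PrivateKey.generate()   # the creator's signing key
image = b"...pixels..."              # stand-in for a real asset
payload = json.dumps(make_manifest(image, "jane@example.com", "Photoshop"),
                     sort_keys=True).encode()
signature = key.sign(payload)

# A viewer checks the signature, then checks the hash against the pixels.
try:
    key.public_key().verify(signature, payload)
    label = json.loads(payload)
    assert label["asset_sha256"] == hashlib.sha256(image).hexdigest()
    print("content credential verified")
except InvalidSignature:
    print("tampered or unsigned content")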

Is there a red line for you where you’ve said, “We are not going to cross this line and enable this kind of feature”?

Photoshop has actually done a couple of things in the past. I think with creating currency, if you remember, that was a place. I think pornography is another place. There are some things in terms of content where we have drawn the line. But that’s a judgment call, and we’ll keep iterating on that, and we’ll keep refining what we do.

Alright. Let’s talk about PDF. PDF is an open standard. You can make a PDF pretty much anywhere all the time. You’ve built a huge business around managing these documents. And the next turn of it is, as you described, “Let an AI summarize a bunch of documents, have an archive of documents that you can treat almost like a wiki, and pull a bunch of intelligence out of it.” The challenge is that the AI is hallucinating. The future of the PDF seems like training data for an AI. And the thing that makes that really happen is the AIs have to be rock-solid reliable. Do you think we’re there yet?

It’s getting better, but no. Even the fact that we use the word hallucinate says something. The incredible thing about technology right now is we use these really creative words that become part of the lexicon. But I think we’ve been thoughtful in Acrobat about how we get customer value, and it’s different because when you’re doing a summary and you can point back to the links in that document from which that information was gleaned, I think there are ways in which you provide the right checks and balances. So, this is not about creation when you’re summarizing and you’re trying to provide insight and you’re correlating it with other documents. It will get better, and it’ll get better through customer usage. But it’s a subset of the problem of all the hallucinations that we have in images. And so I think in PDF, while we’re doing fundamental research in all of that, the problems that we’re trying to solve immediately are summarization — being able to use that content and then create a presentation or use it in an email or use it in a campaign. And so I think for those use cases, the technology is fairly advanced.
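
Here is a minimal sketch of that grounded-summary pattern: each extracted sentence carries a pointer back to its source page, so every claim stays checkable. The scoring is toy term frequency, not Acrobat’s actual AI Assistant.

import re
from collections import Counter

def summarize_with_sources(pages: list[str], top_n: int = 2) -> list[dict]:
    """Pick the highest-scoring sentences, keeping a page citation for each."""
    sentences = []
    for page_no, page in enumerate(pages, start=1):
        for sent in re.split(r"(?<=[.!?])\s+", page.strip()):
            if sent:
                sentences.append((page_no, sent))
    # Score a sentence by how many of the document's frequent words it uses.
    freq = Counter(w.lower() for _, s in sentences for w in re.findall(r"\w+", s))
    def score(sent: str) -> int:
        return sum(freq[w.lower()] for w in re.findall(r"\w+", sent))
    ranked = sorted(sentences, key=lambda ps: score(ps[1]), reverse=True)
    return [{"summary": s, "source_page": p} for p, s in ranked[:top_n]]

doc = ["Firefly was trained on licensed stock imagery. Licensing protects creators.",
       "Licensed training data makes the output commercially safe for enterprises."]
for item in summarize_with_sources(doc):
    print(f"p.{item['source_page']}: {item['summary']}")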

There’s a thing I think about all the time. An AI researcher told me this a few years ago: if you just pull the average document off the average website, the document is useless. It’s machine-generated. It’s a status update for an IoT sensor on top of a light pole. Statistically, that is the vast majority of all the documents on the internet. When you think about how much machine-generated documentation any business makes, AI only amps that up. Now I’m having an AI write an email to you; you’re having an AI summarize the email for you. We might need to do a transaction or get a signature. My lawyer will auto-generate some AI-written form or contract. Your AI will read it and say it’s fine. Is there a part where the PDF just drops out of that because it really is just machines talking to each other to complete a transaction and the document isn’t important anymore?

Well, I think this is so nascent that we’ll have different kinds of experiences. I’ll push back first a little — the world’s information is in PDF. And so if we think about knowledge management of the universe as we know it today, I think the job that Adobe and our partners did to capture the world’s information and archive it [has] been a huge societal benefit that exists. So you’re right in that there are a lot of documents that are transient that perhaps don’t have that fundamental value. But I did want to say that societies and cultures are also represented in PDF documents. And that part is important. I think — to your other question associated with “where do you eliminate people even being part of a process and let your computer talk to my computer to figure out this deal” — you are going to see that for things that don’t matter, and judgment will always be about which ones of those matter. If I’m making a big financial investment, does that matter? If I’m just getting an NDA signed, does that matter? But you are going to see more automation I think in that particular respect. I think you’re right.

The PDF to me represents a classic paradigm of computing. We’re generating documents. We’re signing documents. There are documents. There are files and folders. You move into the mobile era, and the entire concept of a file system gets abstracted. And maybe kids don’t even know what file systems are, but they still know what PDFs are. You make the next turn. And this is just to bring things back to where we started. You say AI is a paradigm shift, and now you’re just going to talk to a chatbot, and that is the interface for your computer, and we’ve abstracted one whole other set of things away. You don’t even know how the computer is getting the task done. It’s just happening. The computer might be using other computers on your behalf. Does that represent a new application model for you? I’ll give you the example: I think most desktop applications have moved to the web. That’s how we distribute many new applications. Photoshop and Premiere are the big stalwarts of big, heavy desktop applications at this point in time. Does the chatbot represent, “Okay, we need yet another new application model”?

I think you are going to see some fundamental innovation. And the way I would answer that question is first abstracting the entire world’s information. It doesn’t matter whether it was in a file on your machine, whether it was somewhere on the internet, and being able to have access to it and through search, find the information that you want. You’re absolutely right that the power of AI will allow all of this world’s information to come together in one massive repository that you can get insight from. I think there’s always going to be a role though for permanence in that. And I think the role of PDF in that permanence aspect of what you’re trying to share or store or do some action with or conduct business with, I think that role of permanence will also play an important role. And so I think we’re going to innovate in both those spaces, which is how do you allow the world’s information to appear as one big blob on which you can perform queries or do something interesting? But then how do you make it permanent, and what does that permanence look like, and what’s the application of that permanence? Whether it’s for me alone or for a conversation that you and I had, which records that for posterity?

I think both of these will evolve. And it’s areas that — how does that document become intelligent? Instead of just having data, it has process and workflow associated with it. And I think there’s a power associated with that as well. I think we’ll push in both of these areas right now.

Do you think that happens on people’s desktops? Do you think it happens in cloud computing centers? Where does that happen?

Both, and on mobile devices. Look at a product like Lightroom. You talked about Denoise in Lightroom earlier. When Lightroom works exactly the same across all these surfaces, there’s real power in people saying, oh my God, it’s exactly the same. So I think the boundaries of what’s on your personal computer and what’s on a mobile device and what’s in the cloud will certainly blur because you don’t want to be tethered to a device or a computer to get access to whatever you want. And we’ve already started to see that power, and I think it’ll increase because you can just describe it. It may not have that permanent structure that we talked about, but it’ll get created for you on the fly, which is, I think, really powerful.

Do you see any limits to desktop chip architectures where you’re saying, “Okay, we want to do inference at scale. We’re going to end up relying on a cloud more because inference at scale on a mobile device will make people’s phones explode”? Do you see any technical limitations?

It’s actually just the opposite. We had a great meeting with Qualcomm the other day, and we talked to Nvidia and AMD and Qualcomm. I think a lot of the training, that’s the focus that’s happening on the cloud. That’s the infrastructure. I think the inference is going to increasingly get offloaded. If you want a model for yourself based on your information, I think even today with a billion parameters, there’s no reason why that just doesn’t get downloaded to your phone or downloaded to your PC. Because otherwise, all that compute power that we have in our hands or on our desktops is really not being used. I think the models are more nascent in terms of how you can download them and offload that processing. But that’s definitely going to happen without a doubt. In fact, it’s already happening, and we’re partnering with the companies that I talked about to figure out how that power of Photoshop can actually then be on your mobile device and on your desktop. But we’re a little early in that because we’re still trying to learn, and the models are still being trained on the server.

I can’t think of a company that is more tied to the general valence of the GPU market than Adobe. Literally, the capabilities you ship have always been at the boundary of GPU capabilities. Now that market is constrained in different ways. Different people want to buy GPUs for vastly different reasons. Is that something you’re thinking about: how the GPU market will shape as the overwhelming financial pressure to optimize for training begins to alter the products themselves?

For the most part, people look at the product. I don’t know anybody who says, “I’ve got enough processing power,” or “I’ve got enough network bandwidth,” or “I’ve got enough storage space.” And so I think all those will explode – you’re right. We tend to be a company that wants to exploit all of the above to deliver great value, but when you can have a conversation with [Nvidia CEO] Jensen [Huang] and talk about what they are doing and how they want to partner with us, I think that partnership is so valuable in times like this because they want this to happen.

Shantanu, I think we are out of time. Thank you so much for being on Decoder. Like I said, you were one of the first names I ever wrote down. I really appreciate you coming on.

Thanks for having me. Really enjoyed the conversation, Nilay.

Read More 

TCL’s new Mini LED TVs offer blazing brightness on a budget

Image: TCL

5,000 nits. That’s the incredible peak brightness of TCL’s latest flagship QM8 Mini LED TV, and there’s no better example of how the company is hoping to stand out from competitors like Samsung, LG, Hisense, Sony, and others. That’s just on a different level than what OLED can offer. Are we getting to a point where TVs are getting too bright? No such thing, right?

Starting at $1,999.99 for a 65-inch size (and ranging up to an enormous 115-inch model), the QM8 contains thousands of dimming zones — “up to over 5,000,” to be specific. Combined with TCL’s image processing, the company is confident that people will get an enthralling home theater experience from this Mini LED set for significantly less than, say, a top-tier OLED would cost.

The step-down QM7 is no slouch in the brightness department, either: it can hit a peak of 2,400 nits and still includes TCL’s premium features like a variable refresh rate that can be pushed all the way to 240Hz. Both the QM8 and QM7 are what TCL now refers to as “QD-Mini LED,” which designates that you’re getting the best picture quality that the company is capable of. (The “QD” part stands for quantum dot color.)

From TCL’s perspective, “Mini LED” by itself has become meaningless tech jargon since there’s no set standard for what it actually means within the industry. It’s not just about the size or number of dimming zones. The algorithms that control those dimming zones are just as important to avoid crushed blacks and other image issues.
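To make that concrete, here’s a minimal, hypothetical sketch of what a local-dimming controller has to juggle. The zone grid, the max-brightness rule, and the neighbor-spill blend below are all invented for illustration; TCL’s actual algorithms are proprietary and far more sophisticated.

```python
import numpy as np

def zone_backlight(frame: np.ndarray, zones=(8, 8), spill=0.25) -> np.ndarray:
    """Compute per-zone backlight levels for a grayscale frame in [0, 1].

    Naive rule: drive each zone at the brightest pixel it contains. The
    fix-up pass then lets each zone borrow a fraction of its neighbors'
    level, so a dark zone beside a bright one isn't slammed to zero,
    which is exactly what crushes shadow detail near highlights.
    """
    h, w = frame.shape
    zh, zw = h // zones[0], w // zones[1]  # pixels per zone (edges truncated)
    levels = np.zeros(zones)
    for i in range(zones[0]):
        for j in range(zones[1]):
            levels[i, j] = frame[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw].max()
    padded = np.pad(levels, 1, mode="edge")
    neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4
    return np.clip(np.maximum(levels, spill * neighbors), 0.0, 1.0)

# A single bright window on a black frame: its zone goes to 1.0, and the
# zones around it get a gentle lift instead of pure black.
frame = np.zeros((1080, 1920))
frame[400:500, 900:1000] = 1.0
print(zone_backlight(frame).round(2))
```

Raise spill and you get halos; drop it to zero and you crush blacks. That trade-off is the tuning work TCL is claiming to have gotten right.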

Giant-sized screens are another focus this year. Consumer buying habits have been trending bigger and bigger in recent years. An 85-inch TV is basically the new 65-inch, haven’t you heard? Aside from the jumbo QM8, TCL is also offering three different 98-inch TVs throughout its lineup. Are they cheap? Absolutely not. The 98-inch QM8 is a penny shy of $8,000. But if you’ve got the space and the money, they might be preferable to a projector.

Even the more basic, budget-priced S5 series is 25 percent brighter than last year’s model. It’s natively a 60Hz panel, but TCL’s software trickery can push it to 120Hz for gaming. And you’re still getting other features like the new “enhanced dialog” mode to help voices cut through more clearly. The company’s 2024 lineup continues to run Google TV software, with all Q-series and S-series TVs set to begin shipping imminently.

Read More 

Amazon’s robotaxi company is under investigation after two crashes with motorcyclists

Image: Tayfun Coskun / Anadolu Agency via Getty Images

US safety regulators are looking into two crashes involving Amazon’s robotaxi company, Zoox. The Office of Defects Investigation, under the US National Highway Traffic Safety Administration, opened a preliminary evaluation into Zoox after two separate reports of the vehicles suddenly braking and causing motorcyclists to crash into their rear ends.

NHTSA confirms that the Zoox vehicles were operating in driverless mode without safety drivers when the incidents occurred. The vehicles involved in both crashes were Toyota Highlander SUVs, which Zoox uses for testing and data gathering. According to the Office of Defects Investigation, the investigation covers an estimated 500 vehicles.

The crashes did not involve Zoox’s unique toaster-looking vehicles that lack traditional pedals and steering wheels — which were approved for testing on California roads in 2023. Those vehicles just started to appear on roads in March.

This isn’t Zoox’s first run-in with NHTSA. Last year, the agency investigated claims by the company that its driverless vehicle met federal safety standards without an exemption from the government.

Read More 

The Gamma app brings PS1 emulation to the iPhone

Time for some Abe’s Oddysee! | Screenshot: Gamma

iPhone users without a penchant for jailbreaking can finally enjoy the blocky polygons and shifty textures of the original PlayStation with Gamma, a free PS1 emulator that hit the iOS App Store last night. Gamma comes courtesy of developer ZodTTD, which has been creating emulators for the iPhone since the earliest days of third-party iOS apps.

The app has both iPhone and iPad versions with support for Bluetooth controllers and keyboards, as well as customizable on-screen controller skins. It uses Google Drive and Dropbox syncing for backing up your game files and save states (those are the snapshots you can save at any time and reload, a little like pausing your game — great for old-school games that don’t let you save any time you want). Like the Delta emulator that ruled the App Store’s top free apps list for weeks before being unseated by free donuts, the app will also go grab game cover artwork for you automatically.
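Under the hood, a save state is just the entire emulated machine serialized to disk: registers, RAM, video memory and all. Here’s a schematic Python sketch of the idea; the class and field names are invented for illustration and have nothing to do with Gamma’s actual internals.

```python
import gzip
import pickle
from dataclasses import dataclass, field

@dataclass
class MachineState:
    """Illustrative PS1-style machine snapshot (fields are hypothetical)."""
    registers: dict = field(default_factory=dict)
    ram: bytes = bytes(2 * 1024 * 1024)   # the PS1 had 2 MB of main RAM
    vram: bytes = bytes(1 * 1024 * 1024)  # plus 1 MB of video RAM
    frame_count: int = 0

def save_state(state: MachineState, path: str) -> None:
    # gzip helps a lot here: early-game RAM snapshots are mostly zeros.
    with gzip.open(path, "wb") as f:
        pickle.dump(state, f)

def load_state(path: str) -> MachineState:
    with gzip.open(path, "rb") as f:
        return pickle.load(f)

state = MachineState(frame_count=1234)
save_state(state, "slot1.sav")
assert load_state("slot1.sav").frame_count == 1234
```

Restoring one is just deserializing that blob back into the running emulator, which is why it behaves like a universal pause button.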

Screen recording: Gamma
PS1, emulated.

The default skin for landscape orientation is mostly transparent and hard to see, though, so you’ll want to replace that when you can.

Screenshot: Gamma
I’ve never played this game, and I probably never will.

Thankfully, Gamma doesn’t require you to go find any BIOS files to run PS1 games. That said, I had trouble getting the first two games I tried to run — NASCAR 98 and Shrek Treasure Hunt. But that may have just been the game files I was using, as I could run Oddworld: Abe’s Oddysee just fine. Third time’s the charm, right?

According to Gamma’s App Store page, it collects identifiers that can be used to track you, and may collect location and usage data. For what it’s worth, the app didn’t trigger a location data access request for me, nor did it prompt me for tracking permission (though it did do so for my colleague, Sean Hollister).

Benjamin Stark, aka ZodTTD, has been around the block. Stark pointed out to The Verge via email that Delta developer Riley Testut’s first iOS emulator, GBA4iOS, borrowed code from an emulator Stark had made called gpSPhone (something Testut wrote about in 2013). But even that app, Stark said, was based (with permission, he added) on gpSP, an Android emulator created by a developer called Exophase.

Stark also developed TurboGrafx-16 and N64 emulators for the iPhone in 2008 and 2009, respectively. Later, he had a run-in with Google when the company pulled his app PSX4Droid, also a PS1 emulator, from the Android Market in 2011, at a time when Google was removing many of the most popular emulators from the online store. He later made the emulator freely available and open-sourced the code.

Update May 12th, 11:36AM ET: Added additional context and details shared by Stark.

Read More 

The DJI Pocket 3 is almost everything I wanted my iPhone camera to be

Despite its name, the Pocket 3 isn’t exactly comfortable to stuff in tighter pockets. | Photo by Quentyn Kennemer / The Verge

I can’t think of anything permeating mainstream camera culture as aggressively as the DJI Osmo Pocket 3. The Fujifilm X100VI has stolen some of its thunder among film simulation enthusiasts, but DJI’s still having something of a cultural moment on YouTube, Instagram, and the troubled TikTok by spurring all sorts of creator glee.

Of course, the camera buffs are all over it, but serious and casual creators from other genres have paused their usual programming to rave about how it transcends amateur vlogging pursuits, whether you’re filming a wedding or self-shooting a scene for a Sundance-hopeful short film.

Some of us at The Verge are excited, too: Vjeran liked it enough to call it his favorite gadget of 2023, and Sean just bought one after using it to elevate his Today I’m Toying With videos.

I felt tingles about the $519 Osmo Pocket 3 when DJI first announced it, but it wasn’t until I purchased a Creator Combo that I fully understood the hype. The video quality often comes close to my full-frame Sony mirrorless (although I can’t get all the same shots) and is very noticeably better than my phone.

The original Osmo Pocket and Pocket 2 couldn’t make those boasts, but the Pocket 3 is a cut above. Its larger one-inch-equivalent sensor is now bigger than those in most phones, with better low-light performance and more reliable autofocusing than its predecessors. It has a much bigger display, longer battery life, faster charge time, more microphones — the list goes on like that for nearly everything that makes it tick.

Photo: Quentyn Kennemer / The Verge

My first heavy outing with the Pocket 3 was at a WWE SmackDown show at the American Airlines Center in Dallas. Without a photographer’s pass, I couldn’t enter the venue with my Sony A7 IV or anything else bigger than pocket-sized. But the Osmo got in after I showed security that its battery grip wasn’t a selfie stick.

I’d gone with the simple hope of capturing some good stabilized audience point-of-view footage that might look a touch better than what my iPhone 12 Pro Max produced at the last show I attended. I left with clips that look so good that I could see them appearing in WWE’s social media reels or pre-match hype promos.

I didn’t care to watch many of the clips I got at similar shows with my iPhone, but I keep coming back to moments like these that I shot on the Osmo Pocket 3.

The Pocket 3 was better than my iPhone at capturing the majesty of the heavy light rays and pyrotechnic embers that define WWE’s grand productions, and its microphones did a better job at taming the loud audio levels without overly dampening the sound and stripping it of acoustic character. The footage was also considerably less hazy than the iPhone’s, with smoother stabilization, though the iPhone’s software stabilization compared decently.

Even if I could have brought a mirrorless or DSLR, the Osmo let me live more in the moment. I had a large popcorn and a cold one occupying one hand for most of the night, so I’d have been miserable trying to adjust dials and deep-dive menus. With the Pocket 3, powering it on is just a matter of swiveling open the display. The record button’s right under your thumb, and settings are a single swipe away.

Even with a limited zoom range, the Pocket 3’s sensor can produce some great cinematic visuals.

The Pocket 3 has its limitations. It can only manage a 2X-equivalent digital zoom, for starters. That’s enough to capture impromptu closeups — like then-WrestleMania-bound Cody Rhodes looking into the rafters after he walked right past my seat, for example. But you won’t be able to achieve the dreamy, bokeh-heavy images reserved for interchangeable lens cameras.

Meanwhile, my iPhone’s telephoto sensor offered better reach at a Monday Night Raw show in October. I sat in the same exact seat at both shows, with a great view of the ring and decent visibility of the entrance stage from the first row of the risers. My iPhone gave me clear face shots of Becky Lynch and Damian Priest’s entrances, even if I greatly preferred the overall color, clarity, and exposure of the Osmo during the SmackDown show.

The iPhone 12 Pro Max couldn’t match the Pocket 3’s fidelity in similar environments, but it’s not the fairest comparison. I’m certain the iPhone 15 with Apple ProRes Log would come much closer.

I’ve shot a number of personal videos since SmackDown and spent a fair bit of time comparing my footage to my Sony and iPhone results. Compared to my phone, colors don’t look overly muddy and washed out in low light, and there’s far less noise. I get more leeway to push and pull colors in post-process when shooting in D-Log M. (Though, that might be a wash if I had an iPhone 15 Pro with a similarly flexible ProRes Log color profile.)
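The leeway comes from how log profiles spend the file’s code values: roughly one equal slice per stop of light, instead of a linear ramp that starves the shadows. Here’s a toy Python illustration built on a generic log2 curve; to be clear, this is not DJI’s actual D-Log M transfer function, just the shape of the idea.

```python
import numpy as np

STOPS = 12             # dynamic range squeezed into the encoded signal
BLACK = 2.0 ** -STOPS  # linear value that maps to code 0.0

def log_encode(linear: np.ndarray) -> np.ndarray:
    """Map linear scene light in (0, 1] to a flat log signal in [0, 1]."""
    return (np.log2(np.clip(linear, BLACK, 1.0)) + STOPS) / STOPS

def log_decode(encoded: np.ndarray) -> np.ndarray:
    return 2.0 ** (encoded * STOPS - STOPS)

# Pushing exposure in the grade is just addition on the log signal.
shadows = np.array([0.01, 0.02])
pushed = log_decode(log_encode(shadows) + 2 / STOPS)  # a +2-stop push
print(pushed)  # ≈ [0.04, 0.08]: four times brighter, detail intact
```

Because every stop gets equal room, shadow detail survives the push instead of posterizing, which is the whole point of shooting log.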

Even in well-lit scenarios, there’s still a decent gap: the bokeh on the Osmo Pocket 3, while subtle, is more pronounced than the iPhone’s. It’s enough to draw the viewer’s eye to your subject while muting an otherwise distracting background.

Sean filmed the Transformer above with iPhone 14 Pro and Pocket 3 — you can probably tell which shot is which!

And it’s just so easy to use. Going from powered off to an effortlessly stabilized video is as simple as swiveling open the screen and hitting the record button right next to it, no separate multi-pound gimbal or balancing weights needed. Tap the screen to flip it into selfie mode, and it’ll automatically pan and tilt to keep your face in frame.

Most phones don’t let you use the higher-quality sensor to record yourself while previewing your shot; here, you can frame your own walk-and-talking headshots on the two-inch OLED screen, then spin the same sensor around to capture viral content, short films, and the world’s beauty in front of you.

You can also fire up DJI’s smartphone app to remotely preview and control the entire camera over Bluetooth — and if you spring for the $669 Creator Combo, you get a high-quality wireless lav mic with 32-bit float recording that effortlessly integrates, too. The mic automatically connects to the Osmo as soon as you power it on, can record separately to its own internal storage, has both a clip and a strong magnet to keep it attached to clothing, vibrates in specific patterns so you know when you’re rolling, and can charge and transfer recordings over USB-C. (Plus, the combo comes with a nice extended battery grip, an iffy wide-angle lens, and other accessories.)
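That 32-bit float spec is more than a bullet point. Fixed-point 16- or 24-bit recordings hard-clip anything past full scale, but float files keep sample values above 1.0, so a take that slammed the meters can simply be scaled back down in post. A small, hypothetical Python sketch of that rescue:

```python
import numpy as np

def rescue_float_take(samples: np.ndarray, target_peak: float = 0.891) -> np.ndarray:
    """Scale an 'overloaded' 32-bit float recording back under full scale.

    target_peak of 0.891 is roughly -1 dBFS. With fixed-point audio,
    samples beyond 1.0 would already be flattened and unrecoverable;
    in float they're still intact, just too hot.
    """
    peak = float(np.abs(samples).max())
    if peak <= target_peak or peak == 0.0:
        return samples
    return samples * (target_peak / peak)

# A shout that peaked about 7.6 dB over full scale survives untouched:
loud = np.array([0.2, 1.7, -2.4, 0.5], dtype=np.float32)
print(rescue_float_take(loud))  # loudest sample now sits near 0.891
```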

No, you won’t find the same shooting options that enthusiasts and professionals seek out of a proper camera body. You can adjust white balance, shutter, and ISO to varying degrees, but you don’t get advanced recording codecs, LUT previews, alternative metering modes, and the like. It’s not exactly comfortable to have in your pocket despite the name, and for still photography, I’d sooner grab my phone. Did I mention you should run like hell if you see a raindrop? There’s no waterproofing at all.

From my seat, the Osmo was too wide to record The Rock’s new Final Boss entrance in a way that clearly shows his face and makes him feel bigger than the Hollywood-level lighting emanating from the fixtures around him.

But everything about the Osmo Pocket 3 makes me want to get out and record because it’s fun and easy to do. It encourages the lazy part of my brain to stop whining. It narrows the gap for people who need an ultra-portable camera that can shoot better-looking footage than their iPhone and lightens the load for those who don’t need a more complex camera for every shoot. For me, right now, it’s up there with the wallet, keys, and phone as something I’ll always consider grabbing on my way out the door.

That’s remarkable for a camera that isn’t much larger than the average vape pen — and costs less than a new phone.

Read More 

The rise of the audio-only video game

Image: Samar Haddad / The Verge

Not all video games need video. Over the years, games that exist only in audio have taken players into entirely new worlds in which there’s nothing to see and still everything to do. These games have huge accessibility implications, allowing people who can’t see to play an equally fun, equally immersive game with their other senses. And when all you have is sound, there’s actually even more you can do to make your game great.

On this episode of The Vergecast, we explore the history of audio-only games with Paul Bennun, who has been in this space longer than most. Years ago, Bennun and his team at Somethin’ Else made a series of games called Papa Sangre that were among the most innovative and most popular games of their kind. He explains what makes an audio game work, why the iPhone 4 was such a crucial technological achievement for these games, and more.

Bennun also makes the case that, right now, even in this ultra-visual time, is the perfect time for a rebirth of audio games. He points to AirPods and other spatial audio headphones along with devices like the Vision Pro, advances in location tracking, and improvements in multiplayer gaming as reasons to think that audio-first games could be a huge hit now. It even sounds a bit like Bennun might have a game in the works, but he won’t tell us about that.
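The core trick behind audio-only worlds is binaural positioning: delaying and attenuating a sound slightly differently at each ear so your brain places it in space. Here’s a deliberately crude Python sketch using interaural time and level differences; the constants below are textbook approximations, not anyone’s shipping code, and real engines use measured HRTFs instead.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, a rough average human head
SAMPLE_RATE = 44_100    # Hz

def pan_binaural(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Place a mono source left or right using ITD and ILD cues.

    ITD: sound reaches the far ear a fraction of a millisecond later
    (Woodworth's approximation below). ILD: the head shadows the far
    ear, so it hears the source quieter. The brain reads the pair of
    cues as direction, even with eyes closed.
    """
    az = abs(np.radians(azimuth_deg))      # 0 = dead ahead, 90 = full side
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))
    delay = int(round(itd * SAMPLE_RATE))  # far-ear lag in samples
    far_gain = 1.0 - 0.6 * np.sin(az)      # crude head-shadow attenuation
    near = np.concatenate([mono, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono]) * far_gain
    # Sources to the right reach the right ear first; mirror for the left.
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)  # (samples, 2) stereo buffer

# A click 60 degrees to the right: the right channel is louder and earlier.
click = np.zeros(1000)
click[0] = 1.0
stereo = pan_binaural(click, 60.0)
```

Production engines replace those crude constants with measured head-related transfer functions, but the principle is the same.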

If you want to know more about the topics we cover in this episode, here are a few links to get you started:

Gaming in darkness: Papa Sangre II is a terrifying world made entirely of sound
From Polygon: Blind games: the next battleground in accessibility
From Audiomob: Six of the best audio-only (and audio-first) video games
Blind Drive
Feer
The Papa Sangre II trailer
The Audio Defence trailer

Read More 

The new iPad Pro looks like a winner

Image: David Pierce / The Verge

Hi, friends! Welcome to Installer No. 37, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, send me links, and also, you can read all the old editions at the Installer homepage.)

This week, I’ve been writing about iPads and LinkedIn games, reading about auto shows and typewriters and treasure hunters, watching Everybody’s in LA and Sugar, looking for reasons to buy Yeti’s new French press even though I definitely don’t need more coffee gear, following almost all of Jerry Saltz’s favorite Instagram accounts, testing Capacities and Heptabase for all my note-taking needs and Plinky for all my link-saving, and playing a lot of Blind Drive.

I also have for you a thoroughly impressive new iPad, a clever new smart home hub, a Twitter documentary to watch this weekend, a sci-fi show to check out, a cheap streaming box, and much more. Let’s do it.

(As always, the best part of Installer is your ideas and tips. What are you reading / watching / cooking / playing / building right now? What should everyone else be into as well? Email me at installer@theverge.com or find me on Signal at @davidpierce.11. And if you know someone else who might enjoy Installer, tell them to subscribe here.)

The Drop

The new iPad Pro. The new Pro is easily the most impressive piece of hardware I’ve seen in a while. It’s so thin and light, and that OLED screen… gorgeous. It’s bonkers expensive, and the iPad’s big problem continues to be its software, but this is how you build a tablet, folks.

Animal Well. Our friends over at Polygon called this “one of the most inventive games of the last decade,” which is obviously high praise! By all accounts, it’s unusual, surprising, occasionally frustrating, very smart, and incredibly engaging. Even the trailer looks like nothing I’ve seen before. (I got a lot of recommendations for this one this week — thanks to everyone who sent it in!)

Final Cut Camera. This only got a quick mention at Apple’s event this week, but it’s kind of a huge deal! It’s a first-party, pro-level camera app for iPhones and iPads that gives you lots of manual control and editing features. It’s exactly what a lot of creatives have been asking for. No word yet on exactly when it’ll be available, but I’m excited.

The Aqara Hub M3. The only way to manage your smart home is to make sure your devices can support as many assistants, protocols, and platforms as possible. This seems like a way to do it: it’s a Matter-ready device that can handle just about any smart-home gear you throw at it.
“Battle of the Clipboard Managers.” I don’t think I’ve ever linked to a Reddit thread here, but check this one out: it’s a long discussion about why a clipboard manager is a useful tool, plus a bunch of good options to choose from. (I agree with all the folks who love Raycast, but there are a lot of choices and ideas here.)

Proton Pass. My ongoing No. 1 piece of technology advice is that everyone needs a password manager. I’m a longtime 1Password fan, but Proton’s app is starting to look tempting — this week, it got a new monitoring tool for security threats, in addition to all the smart email hiding and sharing features it already has.

The Onn 4K Pro. Basically all streaming boxes are ad-riddled, slow, and bad. This Google TV box from Walmart is at least also cheap, comes with voice control and support for all the specs you’d want, and works as a smart speaker. I love a customizable button, too.

Dark Matter. I’ve mostly loved all the Blake Crouch sci-fi books I’ve read, so I have high hopes for this Apple TV Plus series about life in a parallel universe. Apple TV Plus, by the way? Really good at the whole sci-fi thing.

The Wordle archive. More than 1,000 days of Wordle, all ready to be played and replayed (because, let’s be honest, who remembers Wordle from three weeks ago?). I don’t have access to the archive yet, but you better believe I’ll be playing it all the way through as soon as it’s out.

Black Twitter: A People’s History. Based on a really fun Wired series, this is a three-part deep dive Hulu doc about the ways Black Twitter took over social media and a tour of the internet’s experience of some of the biggest events of the last decade.

Screen share

Kylie Robison, The Verge’s new senior AI reporter, tweeted a video of her old iPhone the other day that was like a perfect time capsule of a device. She had approximately 90,000 games, including a bunch that I’m 100 percent sure were scams, and that iPod logo in her dock made me feel a lot of things. Those were good days.

I messaged Kylie in Slack roughly eight minutes after she became a Verge employee, hoping I could convince her to share her current homescreen — and what she’d been up to during her funemployment time ahead of starting with us.

Sadly, she says she tamed her homescreen chaos before starting, because something something professionalism, or whatever. And now she swears she can’t even find a screenshot of her old homescreen! SURE, KYLIE. Anyway, here’s Kylie’s newly functional homescreen, plus some info on the apps she uses and why.

The phone: iPhone 14 Pro Max.

The wallpaper: A black screen because I think it’s too noisy otherwise. (My lock screen is about 20 revolving photos, though.)

The apps: Apple Maps, Notes, Spotify, Messages, FaceTime, Safari, Phone.

I need calendar and weather apps right in front of me when I unlock my phone because I’m forgetful. I use Spotify for all things music and podcasts.

Work is life so I have all those apps front and center, too (Signal, Google Drive, Okta).

Just before starting, I reorganized my phone screen because 1) I had time and 2) I knew I’d have to show it off for David. All the apps are sorted into folders now, but before, they were completely free-range because I use the search bar to find apps; I rarely scroll around. So just imagine about 25 random apps filling up all the pages: Pegasus for some international flight I booked, a random stuffed bell pepper recipe, what have you.

I also asked Kylie to share a few things she’s into right now. Here’s what she shared:

Stardew Valley took over my life during my work break.
I actually started 3 Body Problem because of an old Installer. Also, I loved Fallout and need more episodes.
My serious guilty pleasure is Love Island UK, and I’ve been watching the latest season during my break.

Crowdsourced

Here’s what the Installer community is into this week. I want to know what you’re into right now as well! Email installer@theverge.com or hit me up on Signal — I’m @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. And if you want even more recommendations, check out the replies to this post on Threads.

“I have always found Spotify’s recommendation algorithm and music channels to be terrible; wayyy too much fussing and tailoring required when all I want is to hit play and get a good diversity of music I will like. So I finally gave up and tried Pandora again. Its recommendation / station algorithm is so wildly better than Spotify’s (at least for me), it’s shocking how it has seemed to fade into cultural anonymity. Can’t speak for others, but if anyone out there is similarly frustrated with Spotify playlists, I highly recommend the Pandora option.” – Will

“Everything coming out of Netflix Is a Joke Fest has been 10/10.” – Mike

“Mantella mod for Skyrim (and Fallout 4). Not so much a single mod, but a mod plus a collection of apps that gives (basically) every NPC their own lives and stories. It’s like suddenly being allowed to participate in the fun and games with Woody and Buzz, rather than them having to say the words when you pull the string.” – Jonathan

“The Snipd podcast app (whose primary selling point is AI transcription of podcasts and the ability to easily capture, manage, and export text snippets from podcasts) has a new feature that shows you a name, bio, and picture for podcast guests, and allows you to find more podcasts with the same guest or even follow specific guests. Pretty cool!” – Andy

“I have recently bought a new Kindle, and I’m trying to figure out how to get news on it! My current plan is to use Omnivore as my bookmarks app, which will sync with this awesome community tool that converts those bookmarks into a Kindle-friendly website.” – David

“Turtles All the Way Down! Great depiction of OCD.” – Saad

“With all the conversation around Delta on iOS, I have recently procured and am currently enamored with my Miyoo Mini Plus. It’s customizable and perfectly sized, and in my advanced years with no love for Fortnite, PUBG, or any of the myriad of online connected games, it’s lovely to go back and play some of these ‘legally obtained’ games that I played in my childhood.” – Benjamin

“Rusty’s Retirement is a great, mostly idle farm sim that sits at the bottom or the side of your monitor for both Mac and Windows. Rusty just goes and completes little tasks of his own accord while you work or do other stuff. It rocks. Look at him go!” – Brendon

“Last week, Nicholas talked about YACReader and was asking for another great comic e-reader app for DRM-free files. After much searching myself, I settled on Panels for iPad. Great Apple-native UI, thoughtful features, and decent performance. The free version can handle a local library, but to unlock its full potential, the Pro version (sub or lifetime) supports iCloud, so you can keep all your comics in iCloud Drive, manage the files via a Mac, and only download what you’re currently reading — great for lower-end iPads with less storage.” – Diogo

Signing off

I have spent so much time over the years trying to both figure out and explain to people the basics of a camera. There are a billion metaphors for ISO, shutter speed, and aperture, and all of them fall short. That’s probably why a lot of the photographer types I know have been passing around this very fun depth of field simulator over the last few days, which lets you play with aperture, focal length, sensor size, and more in order to understand how different settings change the way you take photos. It’s a really clever, simple way to see how it all works — and to understand what becomes possible when you really start to control your camera. I’ll be sharing this link a lot, I suspect, and I’m learning a lot from it, too.

See you next week!

Read More 

Game stores are refunding Ghost of Tsushima pre-orders in non-PSN countries

Screenshot: Ghost of Tsushima

Steam is refunding preorders of the director’s cut of Ghost of Tsushima for buyers who live in countries without PlayStation Network access. That’s despite the fact that arguably the most important part of the game is still playable without PlayStation Network account linking. The news comes after Valve abruptly delisted the game yesterday.

Ghost of Tsushima only requires PSN account linking for its Legends multiplayer mode; the single-player campaign is exempt, as the game’s developer went out of its way to say in a recent post. Steam, Green Man Gaming, and the Epic Games Store each have disclaimers noting the same thing. In theory, that would mean you could still play if you don’t care about multiplayer; in practice, not so much.

The Steam team is sending this message out to players who are being refunded:

You are receiving a refund for a game you pre-purchased – Ghost of Tsushima. The publisher of this game is now requiring a secondary account to play portions of this game – and this account cannot be created from your country.

Frustrating as it is, the situation with Tsushima feels cut-and-dried compared to that of Helldivers 2. Earlier this month, Sony announced it would add mandatory PSN account linking to Helldivers 2, which had already been available to buy in non-PSN countries for almost three months. Steam quickly restricted sales of the game to the countries where PSN was available. Players weren’t happy.

Following a review-bombing campaign that slid the game’s Steam rating from “overwhelmingly positive” to “overwhelmingly negative” in a matter of days, Sony walked back the change. But despite that, Steam didn’t remove the sale restrictions.

Arrowhead’s CEO says they have no idea why Sony just restricted more countries from purchasing Helldivers 2 and found out they did so through the community. Arrowhead wants the game to be available worldwide. Operation Defeat Sony continues! #Helldivers2 pic.twitter.com/14PXrL8JwY

— Rebs Gaming (@Mr_Rebs_) May 11, 2024

Then yesterday, three more countries (Estonia, Latvia, and Lithuania) were added to Steam’s list of sale-restricted countries. Arrowhead CEO Johan Pilestedt said on Discord that he wasn’t told about the newly added regions, only finding out about them through the game’s Discord community.

Screenshot: Discord

He later said this came down to Valve noticing an administrative error and that the countries were supposed to be on the list from the start.

Screenshot: Discord

Pilestedt has said he is trying to get both PlayStation and Valve to undo the sale restrictions. That this decision was made by Sony seems plausible, given the situation with Ghost of Tsushima on multiple game store platforms. However, since neither Sony nor Valve has responded to The Verge’s request for comment, it’s impossible to say for sure whether that’s true or whether the stores are delisting Sony’s games on their own.

Read More 
