Cyberpunk 2077 will launch on Mac next year

Image: CD Projekt Red

Cyberpunk 2077 is making the jump to Macs. According to Apple’s announcement video for the new, M4-equipped MacBook Pros, CD Projekt Red’s sci-fi RPG will be available on Macs “early next year.”

“Taking full advantage of Apple silicon and advanced technologies of Metal, the world of the dark future is available to Mac gamers for the very first time,” CD Projekt Red says in an announcement. “Players can enjoy advanced features like path tracing, frame generation, and built-in Spatial Audio for even more immersive gameplay and stunning visuals.”

The game will be available on the Mac App Store and on Steam when it launches next year. The version that’s launching on Mac is the Ultimate Edition that includes the very good Phantom Liberty expansion, and CD Projekt Red says that “existing PC Steam purchases of the game” will carry over to the Mac. You’ll need to have a Mac with Apple Silicon to play it.

Cyberpunk 2077 first launched in 2020, and while it had an infamously rocky release, the game is in great shape now.

Watch Apple’s M4 MacBook Pro announcement video

Image: Apple

Apple’s week of episodic hardware releases continues today with a new video announcing the updated MacBook Pros with M4, M4 Pro, and M4 Max chips. Their announcement follows the updated M4 iMac on Monday and the redesigned M4 Mac Mini on Tuesday.

Like the iMac and Mac Mini videos, the new MacBook Pro video is available to watch on Apple’s website by clicking the “watch the announcement” link on the product page or over on the company’s YouTube channel. This video yet again starts with Apple’s SVP of engineering, John Ternus, doing the hardware reveal honors. He says it’s “packed with pro features, Apple Intelligence, and Apple silicon that’s more powerful than ever before.” Then he opens the new notebook and a gust of wind blows his hair back.

Image: Apple

Next comes senior engineering program manager for Mac systems Trevor McLeod, who reveals that the entry-level M4 MacBook Pro, still starting at $1,599, now comes in a space black finish that was previously reserved for the higher-end configurations. McLeod says the new 14-inch M4 is up to 1.8 times faster than the M1 for tasks like editing photos and has a neural engine that’s “up to 3 times more powerful than in M1,” which helps make Apple Intelligence features perform better. Like the iMac and Mac Mini, the MacBook Pro line has also shed the anemic 8GB RAM base option and now starts with 16GB for the entry-level model with the M4 chip.

Meanwhile, the M4 Pro version of the MacBook Pro now starts with 24GB of RAM, up from 18GB on the M3 Pro version. According to Apple’s VP of platform architecture, Tim Millet, the higher-end M4 Max chip option has 16 CPU cores and 40 GPU cores. Plus, it offers more than half a terabyte per second of unified memory bandwidth, which Apple says is “four times the bandwidth of the latest AI PC chip.” Apple also flashed a CPU performance chart that promises the M4 Max is 1.2 times faster than the M3 Max.

Apple has also announced an updated MacBook Air with its minimum RAM bumped to 16GB.

The MacBook Air gets a surprise upgrade to 16GB of RAM

Photo by Amelia Holowaty Krales / The Verge

Apple has just announced that 16GB of RAM is now the minimum for the M2 and M3 MacBook Air, giving the laptops the same RAM bump as all of the company’s other new computers this week. The cheapest MacBook Air still starts at $999.

Before this, it was a $200 upsell to get a MacBook Air with 16GB of RAM. Now, that’s no longer the case, bringing Apple’s cheapest laptop more in line with its competitors. Apple made a similar move in 2016 when it made 8GB the minimum for the 13-inch MacBook Air and discontinued the 4GB 11-inch version months later.

Of the company’s new iMacs, a smaller Mac Mini, and refreshed MacBook Pros introduced this week, none start with less than 16GB. Given the high RAM floor of the existing Mac Studio and Mac Pro models, the 8GB RAM tier is now gone from the company’s computer lineup.

The most likely reason for all that extra RAM? It’s Apple Intelligence — AI models tend to need a lot of RAM to function well. While 8GB seems to be fine for the iPhone, 16GB is the new table stakes for machines running the much more open macOS.

Apple’s Mac week: everything announced

Screenshot: Apple

Apple appears to have wrapped up its week of Mac announcements — and it had a lot to share. Along with updating several devices across its Mac lineup with a more powerful M4 chip, Apple showed off some revamped accessories, too.

If you want to catch up on all the news, here’s a quick rundown.

The M4 iMac adds new colors and 16GB of RAM

Image: Apple

The new iMac comes with an upgraded M4 chip, offering up to a 10-core CPU and a 10-core GPU. It comes with a 24-inch display with an optional “nano-texture glass” to help reduce glare, along with a base 16GB of RAM.

The entry-level iMac starts at $1,299 and has two Thunderbolt 4 ports. There’s also a more expensive $1,499 model with four Thunderbolt 4 ports. The new iMac comes in seven colors: green, yellow, orange, pink, purple, blue, and silver.

The MacBook Pro gets an M4-powered performance boost

Image: Apple

Apple is giving its MacBook Pro a big upgrade as well, with three new options featuring a standard M4 chip, an M4 Pro, and its highest-performing M4 Max processor. The MacBook Pro comes in both 14- and 16-inch models with the same nano-texture display option as the iMac. And this time around, the entry-level model starts with 16GB of RAM.

The 14-inch M4 MacBook Pro is priced at $1,599, while the 16-inch model starts at $2,499. The laptops are available in space black and silver finishes.

An even more miniature Mac Mini

Image: Apple

The new Mac Mini comes with an M4 chip and 16GB of RAM — all in a smaller package, measuring just five inches in length and width. Despite its size, the Mac Mini comes with several ports, including two USB-C ports and an audio jack on its front, along with ethernet, HDMI, and three Thunderbolt USB-C ports on the back.

The Mac Mini starts at $599 with a standard M4 chip, but Apple bumps the price to $1,399 with the more powerful M4 Pro.

Apple doubles the MacBook Air’s RAM

Photo by Amelia Holowaty Krales / The Verge

Even though the MacBook Air didn’t get a full update this time around, Apple is increasing the base amount of RAM that comes with the M2- and M3-equipped versions of the laptops from 8GB to 16GB — likely to account for the addition of Apple Intelligence.

Before this change, it cost $200 more to upgrade the MacBook Air’s RAM to 16GB. Now, both models ship with the extra memory without a bump in their starting prices.

USB-C comes to Apple’s accessories — without other big changes

Screenshot: Apple 3D iMac model

Apple has finally brought USB-C to all of its Mac accessories, including the Magic Keyboard, the Magic Trackpad, and the Magic Mouse. While this is great and all, Apple still hasn’t changed the location of the charging port on the Magic Mouse, leaving it on the underside of the device.

The new Magic Mouse starts at $79; the Magic Keyboard starts at $99; and the Magic Trackpad starts at $129.

Apple Intelligence starts rolling out

Image: Apple

Apple highlighted the rollout of Apple Intelligence during each of its announcements. The launch introduces AI-powered writing tools and a redesigned Siri on the Mac, iPhone, and iPad, but Apple plans to add other features, like an integration with ChatGPT, in December. You can sign up once your device is updated, but there’s still a waitlist to access the features.

Apple updates the MacBook Pro with M4 Pro and M4 Max chips

They look a whole lot like last year’s models, even down to the wallpaper choice. | Image: Apple

Apple is updating the MacBook Pro and introducing some even more powerful chips. Announced this morning via a low-key press release, the 14- and 16-inch MacBook Pros are being updated to the M4 line of processors, which now includes the M4 Pro chip that debuted yesterday in the Mac Mini and a new, even higher-end M4 Max. The entry-level 14-inch MacBook Pro is also getting a small design upgrade in the form of an extra USB-C / Thunderbolt 4 port on the right-hand side and a space black option to match its more premium brethren.

Like previous models, the M4 Pro-equipped laptops will start at $1,999 for the 14-inch and $2,499 for the 16-inch, but both are getting an upgrade from 18GB of base RAM to 24GB. The basic 14-inch M4 MacBook Pro still starts at $1,599, but that model now (mercifully) starts with 16GB of RAM instead of just 8GB. The new MacBook Pros will be available on November 8th, with preorders available now.

Image: Apple
In addition to Thunderbolt 5 ports, the M4 Pro / Max MacBook Pros still include an SD card slot, HDMI-out, and MagSafe.

Outside of chip bumps and RAM improvements, the 14- and 16-inch Pros with M4 Pro / Max chips are also the first Mac laptops with Thunderbolt 5 ports. All three MacBook Pros come with new 12-megapixel webcams that feature a desk view, and they can each be configured with a new nano-texture display capable of up to 1,000 nits of SDR brightness and 1,600 nits in HDR.

Those ancillary upgrades are nice, but the big changes are still the chips. Apple claims that the entire M4 generation of chips has “the world’s fastest CPU core” and “the industry’s best single-threaded performance.” Single-core performance has been a strong suit of Apple’s Mac chips since the M1 generation, and the M4 chips are promised to have “dramatically faster multithreaded performance.” The M4 Pro and Max also have faster GPU cores, with a ray-tracing engine that’s twice as fast. The neural engine is also 2x faster than the M3 generation for improved machine learning and AI workloads.

Image: Apple

Apple’s latest and most powerful silicon should be best equipped for its big AI push with Apple Intelligence, which just launched this week across supported Macs, iPhones, and iPads. That said, Apple Intelligence runs on Macs going back to the M1 chip from 2020 — the big hurdle for Macs seems to be RAM. Coinciding with today’s MacBook Pro announcements, Apple is also bumping up the base configurations of M2 and M3 MacBook Air models from 8GB of RAM to 16GB, starting at $999 for the M2. So perhaps it’s only a matter of time before older models with lesser specs get left behind on the future features Apple continues to slow-roll via updates.

The M3 generation of MacBook Pros was a mix of continued excellence in the form of the M3 Pro / Max models and an awkward middle child in the non-Pro M3 14-inch. The top-tier Mac laptops with Pro and Max chips have remained great choices for creatives who have performance-intensive workflows with apps like Premiere Pro or Final Cut Pro. The entry-level 14-inch M3 model, on the other hand, was a little hard to justify compared to a cheaper MacBook Air or one of its pricier siblings on sale. The new 14-inch with M4 looks a little more interesting now and a tiny bit more deserving of that “Pro” branding. It’s amazing how an extra USB port can make you feel.

Update October 30th: Added base RAM amounts for both the 14- and 16-inch MacBook Pros with M4 Pro chips.

All of Apple’s Macs now start with 16GB of RAM

Illustration: The Verge

This week’s new Macs all have one thing in common: a minimum of 16GB of RAM. That’s true of the new Mac Mini, MacBook Pros, and iMac, which were all refreshed with M4 processors this week. The MacBook Air was updated to start at 16GB of RAM, too, even though it didn’t get a bump up to the M4 chip. The change brings an end to the long-running era of 8GB of RAM as the default on consumer-grade Macs.

Apple had transitioned most of its Macs to 8GB of RAM by 2016. But now, after eight years, that quantity feels increasingly insufficient. Reviewers have criticized the entry-level RAM as limited since at least 2022. Local AI features like Apple Intelligence, which need a persistent chunk of RAM to work, have only accentuated the need for a change.

That said, Apple isn’t getting generous with RAM everywhere. If you want more, it’ll still cost you a pretty penny. Apple charges $200 for extra memory — for example, bumping the iMac from 16GB to 24GB is $200, while it costs $400 to go all the way to 32GB.

The RAM upgrade likely has to do with the launch of Apple Intelligence. As I wrote in September, the general approach to running on-device AI models is to keep them persistently loaded in RAM. 8GB already felt like a pittance (even if Apple itself thought it was just as good as 16GB), and that would’ve been felt much harder if users had to give some of that up to run AI.
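
For a rough sense of the scale involved, here is a back-of-the-envelope sketch in Python. The roughly 3-billion-parameter model size and the quantization levels are assumptions chosen for illustration, not figures Apple has published, and the math counts only the weights themselves:

```python
# Rough sketch: memory needed just to keep a model's weights resident in RAM.
# The ~3B parameter count and the bit widths below are illustrative assumptions,
# not Apple's disclosed numbers; KV cache and activations are ignored.

def weights_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight / (1024 ** 3)

for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weights_footprint_gb(3.0, bits):.1f} GB resident")

# Prints roughly 5.6 GB, 2.8 GB, and 1.4 GB respectively. Even the most
# aggressively quantized case is a noticeable slice of an 8GB machine, and a
# far smaller one out of 16GB.
```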

Deadpool & Wolverine hits Disney Plus this November

Jay Maidment

For folks who have been waiting to stream Deadpool & Wolverine from the comfort of their couches, the time is nigh.

Today, Disney announced that Deadpool & Wolverine is finally making its way to Disney Plus on November 12th along with a commentary track from writer / director Shawn Levy and Ryan Reynolds. In… celebration of the film’s streaming debut, Disney is also rolling out a series of Deadpool & Wolverine bathroom “makeovers” at a handful of sports venues in Philadelphia, Memphis, and Columbus beginning today.

Marvel / Disney Plus
A mockup of Disney’s new Deadpool & Wolverine bathroom ad campaign.

Only Disney knows why it’s leaning so hard into potty humor for one of its most successful films yet. But that might just be the studio’s way of reminding everyone that the movie’s not exactly for younger viewers.

The Alexa Skills revolution that wasn’t

Image: Mojo Wang for The Verge

Ten years ago, Amazon imagined a future beyond apps — and it had the idea basically right. But the perfect ambient computer remains frustratingly far away.

The first Amazon Echo, all the way back in 2014, was pitched as a device for a few simple things: playing music, asking basic questions, getting the weather. Since then, Amazon has found a few new things for people to do, like control smart home devices. But a decade later, Alexa is still mostly for playing music, asking basic questions, and getting the weather. And that’s largely because, even as Amazon made Alexa ubiquitous in devices and homes all over the place, it never convinced developers to care.

Alexa was never supposed to have an app store. Instead, it had “skills,” which Amazon hoped developers would use to connect Alexa to new functionality and information. Developers weren’t supposed to build their own things on top of an operating system; they were supposed to build new things for Alexa to do. The difference is subtle but important. Our phones are mostly a series of disconnected experiences — Instagram is a universe entirely apart from TikTok and Snapchat and your calendar app and Gmail. That just doesn’t work for Alexa or any other successful assistant. If it knows your to-do list but not your calendar or knows your favorite kind of pizza but not your credit card number, it can’t do much. It needs access to everything, and all the necessary tools at its disposal, to get things done for you.

In Amazon’s dream world, where “ambient computing” is perfect and everywhere, you’d just ask Alexa a question or give it an instruction: “Find me something fun to do this weekend.” “Book my train to New York next week.” “Get me up to speed on deep learning.” Alexa would have access to all the apps and information sources it needs, but you’d never need to worry about that; Alexa would just handle it however it needed and bring you the answers. There are a thousand complicated questions about how it actually works, but that’s still the big idea.

“Alexa Skills made it fast and easy for developers to build voice-driven experiences, unlocking an entirely new way for developers and brands to engage with their customers,” Amazon spokesperson Jill Tornifoglio said in a statement. Customers use them billions of times a year, she said, and as the company embraces generative AI, “we’re excited for what’s next.”

In retrospect, Amazon’s idea was pretty much exactly right. All these years later, OpenAI and other companies are also trying to build their own third-party ecosystems around chatbots, which are just another take on the idea of an interactive interface for the internet. But for all its prescience on the AI revolution, Amazon never figured out how to make skills work. It never solved some fundamental problems for developers, never cracked the user interface, and never found a way to show people all the things their Alexa device could do if only they’d ask.

In retrospect, Amazon’s idea was pretty much exactly right

Amazon certainly tried its best to make skills happen. The company steadily rolled out new tools for developers, paid them in AWS credits and cash when their skills got used (though it recently stopped doing so), and tried to make skill development practically effortless. And on some level, all that effort paid off: Amazon says there are more than 160,000 skills available for the platform. That pales next to the millions of app store apps on smartphones, but it’s still a big number.

The interface for finding and using all those skills, though, has always been a mess. Let’s just take one simple example: if you ask Alexa to order you pizza, it might tell you it has a few skills for that and recommend Domino’s. (If you’re wondering why Amazon would pick Domino’s and not Pizza Hut or DoorDash or any other pizza-summoning service? Great question. No idea.) You respond yes. “Here’s Domino’s,” Alexa says. Then a moment later: “Here’s the skill Domino’s, by Domino’s Pizza, LLC.” Another moment, then: “To link your Domino’s Pizza Profile please go to the Skills setting in your Alexa app. We’ll need your email address to place a guest order. Please enable ‘Email Address’ permissions in your Alexa app.” At this point, you have to find a buried setting in an app you might not even have on your phone; it would be vastly easier to just go to Domino’s website. Or, heck, call the place.

If you know the skill you’re looking for, the system is a little better. You can say “Alexa, open Nature Sounds” or “Alexa, enable Jeopardy,” and it’ll open the skill with that name. But if you don’t remember that the skill is called “Easy Yoga,” asking Alexa to start a yoga workout won’t get you anywhere.

Image: Amazon
Alexa can do a lot of things. Figuring out which ones is the real challenge.

There are little friction points like this all across the system. When you’ve activated a skill, you have to explicitly say “stop” or “cancel” to back out of it in order to use another one. You can’t easily do things across skills — I’d like to price-check my pizza, but Alexa won’t let me. And maybe most frustrating of all, even once you’ve enabled a skill, you still have to address it specifically. Saying “Alexa, ask AnyList to add spaghetti to my grocery list” is not seamless interaction with an all-knowing assistant; that’s having to learn a computer’s incredibly specific language just to use it properly.
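
To see why the phrasing has to be that specific, here is a minimal sketch of how such a skill is wired up with the Alexa Skills Kit SDK for Python (ask-sdk-core). The AddItemIntent name and the item slot are hypothetical stand-ins rather than AnyList’s real implementation; the point is that Alexa only routes a request to this handler after the user has named the skill and the utterance has matched one of the intent’s predefined sample phrases:

```python
# Minimal sketch of a grocery-list skill using the Alexa Skills Kit SDK for
# Python. "AddItemIntent" and the "item" slot are hypothetical examples.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name, get_slot_value


class AddItemIntentHandler(AbstractRequestHandler):
    """Handles "Alexa, ask <skill name> to add <item> to my grocery list"."""

    def can_handle(self, handler_input):
        # Only reached after Alexa has already matched the skill's invocation
        # name and this intent's sample utterances.
        return is_intent_name("AddItemIntent")(handler_input)

    def handle(self, handler_input):
        item = get_slot_value(handler_input=handler_input, slot_name="item")
        # A real skill would persist the item somewhere; this sketch just replies.
        return handler_input.response_builder.speak(
            f"I've added {item} to your grocery list."
        ).response


sb = SkillBuilder()
sb.add_request_handler(AddItemIntentHandler())
# Entry point that AWS Lambda would call for each voice request.
lambda_handler = sb.lambda_handler()
```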

As it has turned out, many of the most popular Alexa skills have two things in common: they’re simple Q&A games, and they’re made by a company called Volley. From Song Quiz to Jeopardy to Who Wants to Be a Millionaire to Are You Smarter Than a 5th Grader, Volley is one of the companies that has figured out how to make skills that really work. And Max Child, Volley’s cofounder and CEO, says that getting your skill in front of people is one of the most important — and hardest — parts of the job.

“I think one of the underrated reasons that the iOS and Android app stores are so successful is because Facebook ads are so good,” he says. The pipeline from a hyper-targeted ad to an app install has been ruthlessly perfected over the years, and there’s just nothing like that for voice assistants. The nearest equivalent is probably people asking their Alexa devices what they can do — which Child says does happen! — but there’s just no competing with in-feed ads and hours of social scrolling. “Because you don’t have that hyper-targeted marketing, you end up having to do broad marketing, and you have to build broad games.” Hence games like Jeopardy and Millionaire, which are huge brands that appeal to practically everyone.

One way Volley makes money is through subscriptions. The full Jeopardy experience, for instance, is $12.99 a month, and like so many other modern subscriptions, it’s a lot easier to subscribe than to cancel. It’s also one of the few ways to make money with a skill: developers are allowed to have audio ads in some kinds of skills, or to ask users to add their credit card details directly the way Domino’s does, but asking a voice-first user to pick up their phone and dig through settings is a high bar to clear. Ads are only useful at vast scale — there was a brief moment when a lot of media companies thought the so-called “flash briefings” might be a hit, but that hasn’t turned into much.

These are hardly unique challenges, by the way. Mobile app stores have similar huge discovery problems, issues with monetization, sketchy subscription systems, and more. It’s just that with Alexa, the solution seemed so enticing: you shouldn’t, and wouldn’t, even need an app store. You should just be able to ask for what you want, and Alexa can go do it for you.

With Alexa, the solution seemed so enticing: you shouldn’t, and wouldn’t, even need an app store

A decade on, it appears that an all-powerful, omni-capable voice AI might just be impossible to pull off. If Amazon were to make everything so seamless and fast that you never even have to know you’re interacting with a third-party developer and your pizza just magically appears at your door, it raises some huge privacy concerns and questions about how Amazon picks those providers. If it asked you to choose all those defaults for yourself, it’s signing every new user up for an awful lot of busy work. If it allows developers to own and operate even more of the experience, it wrecks the ambient simplicity that makes Alexa so enticing in the first place. Too much simplicity and abstraction is actually a problem.

We’re at something of an inflection point, though. A decade after its launch, Alexa is changing in two key ways. One is good news for the future of skills, the other might be bad. The good is that Alexa is no longer a voice-only, or even voice-first, experience — as Echo Show and Fire TV devices have gotten more popular, more people are interacting with Alexa with a screen nearby. That could solve a lot of interaction problems and give developers new ways to put their skills in front of users. (Screens are also a great place to advertise your skill, a fact Amazon knows maybe too well.) When Alexa can show you things, it can do a lot more.

Already, Child says that a majority of Volley’s players are on a device with a screen. “We’re very long on smart TVs,” he says, laughing. “Every single smart TV that’s sold now has a microphone in the remote. I really think casual voice games … might make a lot of sense, and I think could be even more immersive.”

Amazon is also about to re-architect Alexa around LLMs, which could be the key to making all of this work. A smarter, AI-powered Alexa could finally understand what you’re actually trying to do, and do away with some of the awkward syntax required to use skills. It could understand more complicated questions and multistep instructions and use skills on your behalf. “Developers now need to only describe the capabilities of their device,” Amazon’s Charlie French said at Amazon’s AI Alexa launch event last year. “They don’t need to try and predict what a customer is going to say.” Amazon is just one of the companies promising that LLMs will be able to do things on your behalf with no extra work required; in that world, do skills even need to exist, or will the model simply figure out how to order pizza?
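
As a rough illustration of what “describing a capability” might look like, here is an assumed sketch in the function-calling style that most LLM tool use has converged on. This is a hypothetical order_pizza capability written as a generic schema, not Amazon’s actual developer interface for the LLM-based Alexa:

```python
# Hypothetical capability description in a generic, OpenAI-style tool schema.
# The developer declares what the capability does and what arguments it takes;
# the model decides when to invoke it and fills in the arguments itself.
order_pizza_capability = {
    "name": "order_pizza",
    "description": "Order a pizza for delivery from the user's preferred pizzeria.",
    "parameters": {
        "type": "object",
        "properties": {
            "size": {"type": "string", "enum": ["small", "medium", "large"]},
            "toppings": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["size"],
    },
}
# No sample utterances ("order me a pizza", "get me a large pepperoni", ...)
# need to be enumerated, which is the shift French is describing.
```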

There’s some evidence that Amazon is behind in its AI work and that plugging in a language model won’t suddenly make Alexa amazing. (Even the best LLMs feel like they’re only sort of slightly close to almost being good enough to do this stuff.) But even if it does, it only makes the bigger question more important: what can virtual assistants really do for us? And how do we ask them to do it? The correct answers are “anything you want,” and “any way you like.” That requires a lot of developers to give Alexa new powers. Which requires Amazon to give them a product, and a business, worth the effort.

Shinichirō Watanabe is ready to tell humanity something about itself with Lazarus

Image: Verge Staff, Adobe Stock

The visionary behind Cowboy Bebop opens up about his return to sci-fi and collaborating with Chad Stahelski.

After some time away, Shinichirō Watanabe is jumping back into science fiction. Though each of the director’s original projects has spoken to his artistic sensibilities, Cowboy Bebop solidified him as a sci-fi visionary whose style and eclectic taste in music set him apart. Almost 30 years later, Cowboy Bebop is still regarded as one of the most iconic and influential anime series of the 20th century. But Watanabe’s desire to grow by trying new things led him away from telling stories about far-flung futures and toward projects like the historical action series Samurai Champloo and Kids on the Slope.
That same feeling is what brought him back to his roots and inspired him to dream up Lazarus — a new series premiering on Adult Swim in 2025 that enlists John Wick director Chad Stahelski for its action sequences.
Set in a future where most of the world’s population has begun using a new wonder drug called Hapna, Lazarus tells the story of what happens when the painkiller is revealed to be a time-delayed toxin that is guaranteed to kill. The revelation sets off a race to track down Hapna’s creator in hopes of stopping his plan to punish humanity for its self-destructive sins against the planet. But the situation also sets off a wave of panic and confusion as people come to grips with the idea of being killed by the very same thing that once seemed to be the key to their salvation.
When I recently sat down with Watanabe to talk about Lazarus, he told me that, as excited as he was to return to hard sci-fi, he wanted the series to feel like a heightened rumination on our own present-day reality. Lazarus, he explained, is a kind of fantasy — one that’s trying to make you think about how the present shapes the future.

This interview has been edited for length and clarity.
How did the concept for Lazarus first come to you? What was on your mind as this story began coming into focus?
The image of Axel, our main protagonist, was actually what came to me first, and I had an impression about his physicality — how I wanted him to move through the world. I also knew that I wanted to do a story about humanity facing the end of the world. In the very first episode, our character Dough delivers a monologue that’s set to a montage of images, and a lot of what he’s saying is actually very close to how I’ve imagined our world could fall apart.
What aspects of our own present-day society did you want to explore or unpack through this specific vision of the future you’ve created for the show?
When people think about the end of the world in fiction, usually the cause is some kind of war or maybe an alien invasion. But with this story, the collapse of everything begins with the creation of this new painkiller, Hapna. The real-world opioid crisis was one of my bigger inspirations for this series, but so was the fact that many of the musicians I love listening to ultimately died from drug overdoses.
In the old days, you would hear about musicians overdosing on illegal street drugs, but over the years, you’ve seen more and more cases like Prince, for instance, who wind up dying while taking prescribed painkillers. Prince’s death still shocks me. I love hip-hop culture and rap as well, and unfortunately you’re seeing more of this kind of overdosing with younger artists.
What made you want to come back to sci-fi after being away from the genre for so long?
After Cowboy Bebop, I wanted to try something different genre-wise, which was how I ended up making Kids on the Slope and Carole & Tuesday. When I wound up working on Blade Runner Black Out 2022, it felt so good to come back to sci-fi, but because that was just a short, I still felt like I needed to find an opportunity to stretch those specific creative muscles.
I didn’t just want to repeat or rehash what I’d done with Cowboy Bebop, though, and that’s part of why I initially reached out to Chad Stahelski, who worked on the John Wick films. I thought that he was able to really update action sequences in a new way, and I wanted to bring that kind of energy to my next project.

Adult Swim

Talk to me about collaborating with Chad.
When I first mentioned Chad’s name, a lot of people were skeptical about whether we would be able to find the time to collaborate because of how busy he is and how many people want to work with him. But I felt such a strong affinity for Chad’s approach to building action scenes, and so I still reached out. It turned out that he had seen and was a big fan of Cowboy Bebop and Samurai Champloo, and he immediately said yes to coming onto Lazarus.
Chad’s team would do their own interpretation of fight scene choreography and send videos to us, and then we would study that footage to find different elements of the action that we wanted to incorporate into Lazarus. Obviously, live-action and animation are different mediums, so our process involved a lot of figuring out which aspects of the footage felt like things we could heighten and stylize.
What was that process like?
Different episodes have different ways of incorporating Chad’s choreography. For the premiere, we were actually still in the early stages and we weren’t able to benefit from Chad’s input as much. And for episodes two and three, we created very short action sequences specifically to incorporate things we saw in the videos from Chad’s team. But for the fourth episode, we went with a much bigger set piece because at that point, we had really found our rhythm.
For some episodes, we gave them information about what kind of scene we wanted, and they would just go brainstorming. But in a lot of cases, before getting detailed instructions from us, Chad’s team would offer up their ideas, and we used quite a bit of those as well. It was a constant ongoing discussion between our teams, and that open communication was key to striking the right tone for Lazarus’ action. For instance, the John Wick movies’ fight scenes involve a lot of headshot kills, but that was a little too much for us because Axel isn’t really a killer the way John Wick himself is.
You mentioned earlier that Axel was the first piece of this story that came into focus for you. How did you envision him?
I don’t want you to get the wrong idea when I say this, but Axel was somewhat inspired by Tom Cruise. Axel thrives on danger and at times, it seems like he’s almost addicted to it. Lazarus features a lot of parkour because we wanted the action built around Axel to always feel as if one wrong move could lead to him falling. He’s risking his life, but that danger is something he gets off on — it makes him feel alive.

You’ve always been known for populating your worlds with diverse arrays of people, but there’s a pronounced multiculturalism to Babylonia City [one of the show’s important locations] that feels really distinct in the larger anime landscape. What was your thinking behind building a story around such a culturally diverse group of characters?
Whenever I’m thinking of the specific environments my characters will exist in, the most important thing is that the space feels real and like a place where people can actually move around in a realistic way. What I’ve always felt looking at other sci-fi depictions of the future is that often, they don’t have a sense of being truly lived-in, and that’s what I want to avoid. So for Babylonia City, my thinking was that a big, busy cityscape would lend itself to characters’ expressions of their personalities.
I always try to incorporate multicultural elements into my stories, I think, because they were such an important part of Blade Runner, which really stuck with me after I first saw it when I was young. Blade Runner’s multiculturalism — the cultural blending — was part of how the movie illustrated how society had changed in the future. I half expected the future to be more like that, which is funny to say today because the original film is set in 2019.
People will be able to hear Lazarus’ soundtrack for themselves, but what did the series sound like in your mind as you were thinking about the musical palettes you wanted to build for it?
With Cowboy Bebop, we used somewhat older jazz music to create a contrast with the story’s futuristic feel. But for Lazarus, I wanted to find a different kind of sound and feature more of the relatively recent music I’ve been listening to.
Was there a specific song or songs that really crystallized the show for you?
This hasn’t been made public yet, but the end credits sequence features The Boo Radleys’ song “Lazarus,” which was really a huge inspiration for this series as a whole. I’m very interested to hear what they think of the show.

You talked about your process for collaborating on visuals, but what about for the music?
There was a lot of back and forth with that process as well. Typically for animation, you try to have all the music cues created and ready while production is going on. We had a lot of discussions with our musical collaborators about the show and the specific kind of feeling we wanted to evoke from scene to scene, and the challenge with that process is always that the visuals we’re creating music for aren’t entirely finished. They’re partway there, but the musician has to imagine something more complete to compose to. Then, once the visuals are finished, there might be requests for retakes or small adjustments so the musical piece fits more cohesively.
There has been an increase in focus on the working conditions that make it harder and harder for illustrators and animators to thrive and cultivate sustainable careers. What’s your read on the current state of the industry?
In a nutshell, the problem is that there are too many shows being made and there aren’t enough experienced animators to go around. Even for Lazarus, we weren’t able to get all of the experienced animators we needed domestically, so we had to bring in quite a few from overseas. On the first episode, a lot of the animators are non-Japanese, especially for the action scenes.
Big picture, what do you think needs to really change in order for animators to get that kind of experience?
In order for an animator to really develop their skills, I think they need to be working on a project and able to focus on it exclusively. But more often than not, because of the sheer number of shows and films, many animators have to jump from one project to another and scramble to finish their work, and that’s not an environment conducive to genuine artistic growth.
Going back to that first episode, the action scenes in the first half were drawn by a single animator, and another animator handled all of the action in the second half. They each had 50 shots to work on. That, to me, is the ideal way — to have someone who’s already good at action animation be able to focus on a substantial chunk of scenes. That’s how you grow. But so often in other animated projects, you see experienced animators limited to working on maybe two or three shots max, and the end product just isn’t as good.

Read More 

Judges are using algorithms to justify doing what they already want

Image: Cath Virginia / The Verge, Getty Images

When Northwestern University graduate student Sino Esthappan began researching how algorithms decide who stays in jail, he expected “a story about humans versus technology.” On one side would be human judges, whom Esthappan interviewed extensively. On the other would be risk assessment algorithms, which are used in hundreds of US counties to assess the danger of granting bail to accused criminals. What he found was more complicated — and suggests these tools could obscure bigger problems with the bail system itself.
Algorithmic risk assessments are intended to calculate the risk of a criminal defendant not returning to court — or, worse, harming others — if they’re released. By comparing criminal defendants’ backgrounds to a vast database of past cases, they’re supposed to help judges gauge how risky releasing someone from jail would be. Along with other algorithm-driven tools, they play an increasingly large role in a frequently overburdened criminal justice system. And in theory, they’re supposed to help reduce bias from human judges.
But Esthappan’s work, published in the journal Social Problems, found that judges aren’t wholesale adopting or rejecting the advice of these algorithms. Instead, they report using them selectively, motivated by deeply human factors to accept or disregard their scores.
Pretrial risk assessment tools estimate the likelihood that accused criminals will return for court dates if they’re released from jail. The tools take in details fed to them by pretrial officers, including things like criminal history and family profiles. They compare this information with a database that holds hundreds of thousands of previous case records, looking at how defendants with similar histories behaved. Then they deliver an assessment that could take the form of a “low,” “medium,” or “high” risk label or a number on a scale. Judges are given the scores for use in pretrial hearings: short meetings, held soon after a defendant is arrested, that determine whether (and on what conditions) they’ll be released.
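To make the basic shape of that pipeline concrete, here is a minimal, purely illustrative sketch in Python. It is not based on any real vendor’s system; the feature names, weights, and cutoffs are invented for illustration, and actual tools derive their scoring from hundreds of thousands of historical cases rather than hand-set rules.

```python
# Hypothetical sketch of a pretrial risk score. The features, weights, and
# cutoffs below are invented for illustration only; real tools are far more
# complex and are calibrated against large databases of past cases.

def risk_score(prior_felonies: int, failed_to_appear: int, age: int, pending_charges: int) -> int:
    """Return a toy 1-10 risk score from a defendant's pretrial profile."""
    score = 0
    score += min(prior_felonies, 3)          # prior record raises the score, capped
    score += 2 * min(failed_to_appear, 2)    # past failures to appear weigh heavily
    score += 1 if pending_charges > 0 else 0
    score += 1 if age < 23 else 0            # younger defendants score slightly higher
    return min(score + 1, 10)

def risk_label(score: int) -> str:
    """Bucket the numeric score into the low/medium/high labels judges see."""
    if score <= 3:
        return "low"
    if score <= 6:
        return "medium"
    return "high"

if __name__ == "__main__":
    s = risk_score(prior_felonies=1, failed_to_appear=0, age=28, pending_charges=1)
    print(s, risk_label(s))  # e.g. "3 low"
```

The point of the sketch is the flow, not the numbers: structured inputs about a defendant go in, a score and a label come out, and a judge decides what, if anything, to do with them.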
As with other algorithmic criminal justice tools, supporters position them as neutral, data-driven correctives to human capriciousness and bias. Opponents raise issues like the risk of racial profiling. “Because a lot of these tools rely on criminal history, the argument is that criminal history is also racially encoded based on law enforcement surveillance practices,” Esthappan says. “So there already is an argument that these tools are reproducing biases from the past, and they’re encoding them into the future.”
It’s also not clear how well they work. A 2016 ProPublica investigation found that a risk score algorithm used in Broward County, Florida, was “remarkably unreliable in forecasting violent crime.” Just 20 percent of the people the algorithm predicted would commit violent crimes actually did so within two years of their arrest. The program was also more likely to label Black defendants as future criminals or higher risk than white defendants, ProPublica found.
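That 20 percent figure is a statement about precision: of everyone the tool flagged as likely to commit violent crime, how many actually did. A quick back-of-the-envelope illustration (the counts below are invented; only the 20 percent rate comes from the reporting):

```python
# Illustrative only: made-up counts chosen to match the roughly 20 percent
# precision ProPublica reported for violent-crime predictions.
flagged_as_violent = 500   # defendants the tool predicted would commit violent crime
actually_violent = 100     # how many of those did, within two years

precision = actually_violent / flagged_as_violent
print(f"precision = {precision:.0%}")  # 20% -> 4 of every 5 flagged people did not go on to violence
```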
Both the fears and promises around algorithms in the courtroom assume judges are consistently using them
Still, University of Pennsylvania criminology professor Richard Berk argues that human decision-makers can be just as flawed. “These criminal justice systems are made with human institutions and human beings, all of which are imperfect, and not surprisingly, they don’t do a very good job in identifying or forecasting people’s behaviors,” Berk says. “So the bar is really pretty low, and the question is, can algorithms raise the bar? And the answer is yes, if proper information is provided.”
Both the fears and promises around algorithms in the courtroom, however, assume judges are consistently using them. Esthappan’s study shows that’s a flawed assumption at best.
Esthappan interviewed 27 judges across four criminal courts in different regions of the country over one year between 2022 and 2023, asking questions like, “When do you find risk scores more or less useful?” and “How and with whom do you discuss risk scores in pretrial hearings?” He also analyzed local news coverage and case files, observed 50 hours of bond court, and interviewed others who work in the judicial system to help contextualize the findings.
Judges told Esthappan that they used algorithmic tools to process lower-stakes cases quickly, leaning on automated scores even when they weren’t confident in their legitimacy. Overall, they were leery of following low risk scores for defendants accused of offenses like sexual assault and intimate partner violence — sometimes because they believed the algorithms under- or over-weighted various risk factors, but also because their own reputations were on the line. And conversely, some described using the systems to explain why they’d made an unpopular decision — believing the risk scores added authoritative weight.
“Many judges deployed their own moral views about specific charges as yardsticks to decide when risk scores were and were not legitimate in the eyes of the law.”
The interviews revealed recurring patterns in judges’ decisions to use risk assessment scores, frequently based on defendants’ criminal history or social background. Some judges believed the systems underestimated the importance of certain red flags — like extensive juvenile records or certain kinds of gun charges — or overemphasized factors like an old criminal record or low education level. “Many judges deployed their own moral views about specific charges as yardsticks to decide when risk scores were and were not legitimate in the eyes of the law,” Esthappan writes.
Some judges also said they used the scores as a matter of efficiency. These pretrial hearings are short — often less than five minutes — and require snap decisions based on limited information. The algorithmic score at least provides one more factor to consider.
Judges also, however, were keenly aware of how a decision would reflect on them — and according to Esthappan, this was a huge factor in whether they trusted risk scores. When judges saw a charge they believed to be less of a public safety issue and more of a result of poverty or addiction, they would often defer to risk scores, seeing a small risk to their own reputation if they got it wrong and viewing their role, as one judge described it, as calling “balls and strikes,” rather than becoming a “social engineer.”
For high-level charges that involved some sort of moral weight, like rape or domestic violence, judges said they were more likely to be skeptical. This was partly because they identified problems with how the system weighted information for specific crimes — in intimate partner violence cases, for instance, they believed even defendants without a long criminal history could be dangerous. But they also recognized that the stakes — for themselves and others — were higher. “Your worst nightmare is you let someone out on a lower bond and then they go and hurt someone. I mean, all of us, when I see those stories on the news, I think that could have been any of us,” said one judge quoted in the study.
Keeping a truly low-risk defendant in jail has costs, too. It keeps someone who’s unlikely to harm anyone away from their job, their school, or their family before they’ve been convicted of a crime. But there’s little reputational risk for judges — and adding a risk score doesn’t change that calculus.
The deciding factor for judges often wasn’t whether the algorithm seemed trustworthy, but whether it would help them justify a decision they wanted to make. Judges who released a defendant based on a low risk score, for instance, could “shift some of that accountability away from themselves and towards the score,” Esthappan said. If an alleged victim “wants someone locked up,” one subject said, “what you’ll do as the judge is say ‘We’re guided by a risk assessment that scores for success in the defendant’s likelihood to appear and rearrest. And, based on the statute and this score, my job is to set a bond that protects others in the community.’”
“In practice, risk scores expand the uses of discretion among judges who strategically use them to justify punitive sanctions”
Esthappan’s study pokes holes in the idea that algorithmic tools result in fairer, more consistent decisions. If judges are picking when to rely on scores based on factors like reputational risk, Esthappan notes, they may not be reducing human-driven bias — they could actually be legitimizing that bias and making it hard to spot. “Whereas policymakers tout their ability to curb judicial discretion, in practice, risk scores expand the uses of discretion among judges who strategically use them to justify punitive sanctions,” Esthappan writes in the study.
Megan Stevenson, an economist and criminal justice scholar at the University of Virginia School of Law, says risk assessments are something of “a technocratic toy of policymakers and academics.” They have seemed like an attractive way to “take the randomness and the uncertainty out of this process,” she says, but based on studies of their impact, they often don’t have a major effect on outcomes either way.
A larger problem is that judges are forced to work with highly limited time and information. Berk, the University of Pennsylvania professor, says collecting more and better information could help the algorithms make better assessments. But that would require time and resources court systems may not have.
But when Esthappan interviewed public defenders, they raised an even more fundamental question: should pretrial detention, in its current form, exist at all? Judges aren’t just working with spotty data. They’re determining someone’s freedom before that person even gets a chance to fight their charges, often based on predictions that are largely guesswork. “Within this context, I think it makes sense that judges would rely on a risk assessment tool because they have so limited information,” Esthappan tells The Verge. “But on the other hand, I sort of see it as a bit of a distraction.”
Algorithmic tools are aiming to address a real issue with imperfect human decision-making. “The question that I have is, is that really the problem?” Esthappan tells The Verge. “Is it that judges are acting in a biased way, or is there something more structurally problematic about the way that we’re hearing people at pretrial?” The answer, he says, is that “there’s an issue that can’t necessarily be fixed with risk assessments, but that it goes into a deeper cultural issue within criminal courts.”

Read More 
