verge-rss

The Alexa Skills revolution that wasn’t

Image: Mojo Wang for The Verge

Ten years ago, Amazon imagined a future beyond apps — and it had the idea basically right. But the perfect ambient computer remains frustratingly far away.

The first Amazon Echo, all the way back in 2014, was pitched as a device for a few simple things: playing music, asking basic questions, getting the weather. Since then, Amazon has found a few new things for people to do, like control smart home devices. But a decade later, Alexa is still mostly for playing music, asking basic questions, and getting the weather. And that’s largely because, even as Amazon made Alexa ubiquitous in devices and homes, it never convinced developers to care.

Alexa was never supposed to have an app store. Instead, it had “skills,” which Amazon hoped developers would use to connect Alexa to new functionality and information. Developers weren’t supposed to build their own things on top of an operating system; they were supposed to build new things for Alexa to do. The difference is subtle but important. Our phones are mostly a series of disconnected experiences — Instagram is a universe entirely apart from TikTok and Snapchat and your calendar app and Gmail. That just doesn’t work for Alexa or any other successful assistant. If it knows your to-do list but not your calendar, or knows your favorite kind of pizza but not your credit card number, it can’t do much. It needs access to everything, and all the necessary tools at its disposal, to get things done for you.

In Amazon’s dream world, where “ambient computing” is perfect and everywhere, you’d just ask Alexa a question or give it an instruction: “Find me something fun to do this weekend.” “Book my train to New York next week.” “Get me up to speed on deep learning.” Alexa would have access to all the apps and information sources it needs, but you’d never need to worry about that; Alexa would just handle it however it needed and bring you the answers. There are a thousand complicated questions about how it actually works, but that’s still the big idea.

“Alexa Skills made it fast and easy for developers to build voice-driven experiences, unlocking an entirely new way for developers and brands to engage with their customers,” Amazon spokesperson Jill Tornifoglio said in a statement. Customers use them billions of times a year, she said, and as the company embraces generative AI, “we’re excited for what’s next.”

In retrospect, Amazon’s idea was pretty much exactly right. All these years later, OpenAI and other companies are also trying to build their own third-party ecosystems around chatbots, which are just another take on the idea of an interactive interface for the internet. But for all its prescience on the AI revolution, Amazon never figured out how to make skills work. It never solved some fundamental problems for developers, never cracked the user interface, and never found a way to show people all the things their Alexa device could do if only they’d ask.

In retrospect, Amazon’s idea was pretty much exactly right

Amazon certainly tried its best to make skills happen. The company steadily rolled out new tools for developers, paid them in AWS credits and cash when their skills got used (though it recently stopped doing so), and tried to make skill development practically effortless. And on some level, all that effort paid off: Amazon says there are more than 160,000 skills available for the platform. That pales next to the millions of app store apps on smartphones, but it’s still a big number.

The interface for finding and using all those skills, though, has always been a mess. Let’s just take one simple example: if you ask Alexa to order you pizza, it might tell you it has a few skills for that and recommend Domino’s. (If you’re wondering why Amazon would pick Domino’s and not Pizza Hut or DoorDash or any other pizza-summoning service? Great question. No idea.) You respond yes. “Here’s Domino’s,” Alexa says. Then a moment later: “Here’s the skill Domino’s, by Domino’s Pizza, LLC.” Another moment, then: “To link your Domino’s Pizza Profile please go to the Skills setting in your Alexa app. We’ll need your email address to place a guest order. Please enable ‘Email Address’ permissions in your Alexa app.” At this point, you have to find a buried setting in an app you might not even have on your phone; it would be vastly easier to just go to Domino’s website. Or, heck, call the place.

If you know the skill you’re looking for, the system is a little better. You can say “Alexa, open Nature Sounds” or “Alexa, enable Jeopardy,” and it’ll open the skill with that name. But if you don’t remember that the skill is called “Easy Yoga,” asking Alexa to start a yoga workout won’t get you anywhere.

Image: Amazon
Alexa can do a lot of things. Figuring out which ones is the real challenge.

There are little friction points like this all across the system. When you’ve activated a skill, you have to explicitly say “stop” or “cancel” to back out of it in order to use another one. You can’t easily do things across skills — I’d like to price-check my pizza, but Alexa won’t let me. And maybe most frustrating of all, even once you’ve enabled a skill, you still have to address it specifically. Saying “Alexa, ask AnyList to add spaghetti to my grocery list” is not seamless interaction with an all-knowing assistant; that’s having to learn a computer’s incredibly specific language just to use it properly.
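
That rigidity is baked into how skills are built. A developer declares an exact invocation name and a fixed list of sample utterances up front, and Alexa routes requests by matching what you said against those strings. Below is a minimal sketch of an Alexa Skills Kit interaction model, written as a Python dict mirroring the JSON developers upload; the skill and intent names here are illustrative, not from a real skill.

```python
# Minimal sketch of an Alexa Skills Kit interaction model, written as a
# Python dict mirroring the JSON a developer uploads. Skill and intent
# names are illustrative.
easy_yoga_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "easy yoga",  # the exact name users must say
            "intents": [
                {
                    "name": "StartWorkoutIntent",
                    "slots": [],
                    "samples": [
                        "start a workout",
                        "begin my morning session",
                    ],
                },
            ],
        }
    }
}

# "Alexa, open Easy Yoga" matches the invocation name, so the skill launches.
# "Alexa, start a yoga workout" matches nothing: the sample utterances only
# apply once the skill is addressed, which is why you end up saying
# "Alexa, ask Easy Yoga to start a workout" instead.
```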

As it has turned out, many of the most popular Alexa skills have two things in common: they’re simple Q&A games, and they’re made by a company called Volley. From Song Quiz to Jeopardy to Who Wants to Be a Millionaire to Are You Smarter Than a 5th Grader, Volley is one of the companies that has figured out how to make skills that really work. And Max Child, Volley’s cofounder and CEO, says that getting your skill in front of people is one of the most important — and hardest — parts of the job.

“I think one of the underrated reasons that the iOS and Android app stores are so successful is because Facebook ads are so good,” he says. The pipeline from a hyper-targeted ad to an app install has been ruthlessly perfected over the years, and there’s just nothing like that for voice assistants. The nearest equivalent is probably people asking their Alexa devices what they can do — which Child says does happen! — but there’s just no competing with in-feed ads and hours of social scrolling. “Because you don’t have that hyper-targeted marketing, you end up having to do broad marketing, and you have to build broad games.” Hence games like Jeopardy and Millionaire, which are huge brands that appeal to practically everyone.

One way Volley makes money is through subscriptions. The full Jeopardy experience, for instance, is $12.99 a month, and like so many other modern subscriptions, it’s a lot easier to subscribe than to cancel. It’s also one of the few ways to make money with a skill: developers are allowed to have audio ads in some kinds of skills, or to ask users to add their credit card details directly the way Domino’s does, but asking a voice-first user to pick up their phone and dig through settings is a high bar to clear. Ads are only useful at vast scale — there was a brief moment when a lot of media companies thought the so-called “flash briefings” might be a hit, but that hasn’t turned into much.

These are hardly unique challenges, by the way. Mobile app stores have similar huge discovery problems, issues with monetization, sketchy subscription systems, and more. It’s just that with Alexa, the solution seemed so enticing: you shouldn’t, and wouldn’t, even need an app store. You should just be able to ask for what you want, and Alexa can go do it for you.

With Alexa, the solution seemed so enticing: you shouldn’t, and wouldn’t, even need an app store

A decade on, it appears that an all-powerful, omni-capable voice AI might just be impossible to pull off. If Amazon were to make everything so seamless and fast that you never even have to know you’re interacting with a third-party developer and your pizza just magically appears at your door, it raises some huge privacy concerns and questions about how Amazon picks those providers. If it asked you to choose all those defaults for yourself, it’s signing every new user up for an awful lot of busy work. If it allows developers to own and operate even more of the experience, it wrecks the ambient simplicity that makes Alexa so enticing in the first place. Too much simplicity and abstraction is actually a problem.

We’re at something of an inflection point, though. A decade after its launch, Alexa is changing in two key ways. One is good news for the future of skills; the other might be bad. The good is that Alexa is no longer a voice-only, or even voice-first, experience — as Echo Show and Fire TV devices have gotten more popular, more people are interacting with Alexa with a screen nearby. That could solve a lot of interaction problems and give developers new ways to put their skills in front of users. (Screens are also a great place to advertise your skill, a fact Amazon knows maybe too well.) When Alexa can show you things, it can do a lot more.

Already, Child says that a majority of Volley’s players are on a device with a screen. “We’re very long on smart TVs,” he says, laughing. “Every single smart TV that’s sold now has a microphone in the remote. I really think casual voice games … might make a lot of sense, and I think could be even more immersive.”

Amazon is also about to re-architect Alexa around LLMs, which could be the key to making all of this work. A smarter, AI-powered Alexa could finally understand what you’re actually trying to do, and do away with some of the awkward syntax required to use skills. It could understand more complicated questions and multistep instructions and use skills on your behalf. “Developers now need to only describe the capabilities of their device,” Amazon’s Charlie French said at the company’s AI Alexa launch event last year. “They don’t need to try and predict what a customer is going to say.” Amazon is just one of the companies promising that LLMs will be able to do things on your behalf with no extra work required; in that world, do skills even need to exist, or will the model simply figure out how to order pizza?
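
Amazon hasn’t published full details of that developer model, but what French describes sounds like the “tool” or “function calling” schemas other LLM platforms use: you describe what a capability does, and the model decides when to invoke it. Here is a hypothetical sketch of that pattern, not Amazon’s actual API.

```python
# Hypothetical capability description in the generic LLM function-calling
# style. This sketches the pattern French alludes to; it is not Amazon's
# actual developer API, whose details haven't been published.
order_pizza = {
    "name": "order_pizza",
    "description": "Order a pizza for delivery from the user's saved vendor.",
    "parameters": {
        "type": "object",
        "properties": {
            "size": {"type": "string", "enum": ["small", "medium", "large"]},
            "toppings": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["size"],
    },
}

# With a skill, the developer had to predict phrasings ("order me a pizza",
# "get me a large pie", ...). Here the developer only describes the
# capability; the model maps a request like "I'm hungry, get dinner sorted"
# onto order_pizza(size="large") on its own.
```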

There’s some evidence that Amazon is behind in its AI work and that plugging in a language model won’t suddenly make Alexa amazing. (Even the best LLMs feel like they’re only sort of slightly close to almost being good enough to do this stuff.) But even if it does, it only makes the bigger question more important: what can virtual assistants really do for us? And how do we ask them to do it? The correct answers are “anything you want,” and “any way you like.” That requires a lot of developers to give Alexa new powers. Which requires Amazon to give them a product, and a business, worth the effort.

Shinichirō Watanabe is ready to tell humanity something about itself with Lazarus

Image: Verge Staff, Adobe Stock

The visionary behind Cowboy Bebop opens up about his return to sci-fi and collaborating with Chad Stahelski.

After some time away, Shinichirō Watanabe is jumping back into science fiction. Though each of the director’s original projects has spoken to his artistic sensibilities, Cowboy Bebop solidified him as a sci-fi visionary whose style and eclectic taste in music set him apart. Almost 30 years later, Cowboy Bebop is still regarded as one of the most iconic and influential anime series of the 20th century. But Watanabe’s desire to grow by trying new things led him away from telling stories about far-flung futures and toward projects like the historical action series Samurai Champloo and Kids on the Slope.

That same feeling is what brought him back to his roots and inspired him to dream up Lazarus — a new series premiering on Adult Swim in 2025 that enlists John Wick director Chad Stahelski for its action sequences.

Set in a future where most of the world’s population has begun using a new wonder drug called Hapna, Lazarus tells the story of what happens when the painkiller is revealed to be a time-delayed toxin that is guaranteed to kill. The revelation sets off a race to track down Hapna’s creator in hopes of stopping his plan to punish humanity for its self-destructive sins against the planet. But the situation also sets off a wave of panic and confusion as people come to grips with the idea of being killed by the very same thing that once seemed to be the key to their salvation.

When I recently sat down with Watanabe to talk about Lazarus, he told me that, as excited as he was to return to hard sci-fi, he wanted the series to feel like a heightened rumination on our own present-day reality. Lazarus, he explained, is a kind of fantasy — one that’s trying to make you think about how the present shapes the future.

This interview has been edited for length and clarity.

How did the concept for Lazarus first come to you? What was on your mind as this story began coming into focus?

The image of Axel, our main protagonist, was actually what came first to me, and I had an impression about his physicality — how I wanted him to move through the world. I also knew that I wanted to do a story about humanity facing the end of the world. In the very first episode, our character Dough delivers a monologue that’s set to a montage of images, and a lot of what he’s saying is actually very close to what I’ve imagined as how our world could fall apart.

What aspects of our own present-day society did you want to explore or unpack through this specific vision of the future you’ve created for the show?

When people think about the end of the world in fiction, usually the cause is some kind of war or maybe an alien invasion. But with this story, the collapse of everything begins with the creation of this new painkiller, Hapna. The general real-world opioid crisis was one of my bigger inspirations for this series, but also the fact that many of the musicians I love listening to ultimately died from drug overdoses.

In the old days, you would hear about musicians overdosing on illegal street drugs, but over the years, you’ve seen more and more cases like Prince’s, where artists wind up dying while taking prescribed painkillers. Prince’s death still shocks me. I love hip-hop culture and rap as well, and unfortunately you’re seeing more of this kind of overdosing with younger artists.

What made you want to come back to sci-fi after being away from the genre for so long?

After Cowboy Bebop, I wanted to try something different genre-wise, which was how I ended up making Kids on the Slope and Carole & Tuesday. When I wound up working on Blade Runner Black Out 2022, it felt so good to come back to sci-fi, but because that was just a short, I still felt like I needed to find an opportunity to stretch those specific creative muscles.

I didn’t just want to repeat or rehash what I’d done with Cowboy Bebop, though, and that’s part of why I initially reached out to Chad Stahelski, who worked on the John Wick films. I thought that he was able to really update action sequences in a new way, and I wanted to bring that kind of energy to my next project.

Adult Swim

Talk to me about collaborating with Chad.

When I first mentioned Chad’s name, a lot of people were skeptical about whether we would be able to find the time to collaborate because of how busy he is and how many people want to work with him. But I felt such a strong affinity for Chad’s approach to building action scenes, and so I still reached out. It turned out that he had seen and was a big fan of Cowboy Bebop and Samurai Champloo, and he immediately said yes to coming onto Lazarus.

Chad’s team would do their own interpretation of fight scene choreography and send videos to us, and then we would study that footage to find different elements of the action that we wanted to incorporate into Lazarus. Obviously, live-action and animation are different mediums, so our process involved a lot of figuring out which aspects of the footage felt like things we could heighten and stylize.

What was that process like?

Different episodes have different ways of incorporating Chad’s choreography. For the premiere, we were actually still in the early stages and we weren’t able to benefit from Chad’s input as much. And for episodes two and three, we created very short action sequences specifically to incorporate things we saw in the videos from Chad’s team. But for the fourth episode, we went with a much bigger set piece because at that point, we had really found our rhythm.

For some episodes, we gave them information about what kind of scene we wanted, and they would just go brainstorming. But in a lot of cases, before getting detailed instructions from us, Chad’s team would offer up their ideas, and we used quite a bit of those as well. It was a constant ongoing discussion between our teams, and that open communication was key to striking the right tone for Lazarus’ action. For instance, the John Wick movies’ fight scenes involve a lot of headshot kills, but that was a little too much for us because Axel isn’t really a killer the way John Wick himself is.

You mentioned earlier that Axel was the first piece of this story that came into focus for you. How did you envision him?

I don’t want you to get the wrong idea when I say this, but Axel was somewhat inspired by Tom Cruise. Axel thrives on danger and at times, it seems like he’s almost addicted to it. Lazarus features a lot of parkour because we wanted the action built around Axel to always feel as if one wrong move could lead to him falling. He’s risking his life, but that danger is something he gets off on — it makes him feel alive.

You’ve always been known for populating your worlds with diverse arrays of people, but there’s a pronounced multiculturalism to Babylonia City [one of the show’s important locations] that feels really distinct in the larger anime landscape. What was your thinking behind building a story around such a culturally diverse group of characters?

Whenever I’m thinking of the specific environments my characters will exist in, the most important thing is that the space feels real and like a place where people can actually move around in a realistic way. What I’ve always felt looking at other sci-fi depictions of the future is that often, they don’t have a sense of being truly lived-in, and that’s what I want to avoid. So for Babylonia City, my thinking was that a big, busy cityscape would lend itself to characters’ expressions of their personalities.

I always try to incorporate multicultural elements into my stories, I think, because they were such an important part of Blade Runner, which really stuck with me after I first saw it when I was young. Blade Runner’s multiculturalism — the cultural blending — was part of how the movie illustrated how society had changed in the future. I half expected the future to be more like that, which is funny to say today because the original film is set in 2019.

People will be able to hear Lazarus’ soundtrack for themselves, but what did the series sound like in your mind as you were thinking about the musical palettes you wanted to build for it?

With Cowboy Bebop, we used somewhat older jazz music to create a contrast with the story’s futuristic feel. But for Lazarus, I wanted to find a different kind of sound and feature more of the relatively recent music I’ve been listening to.

Was there a specific song or songs that really crystallized the show for you?

This hasn’t been made public yet, but the end credits sequence features The Boo Radleys’ song “Lazarus,” which was really a huge inspiration for this series as a whole. I’m very interested to hear what they think of the show.

You talked about your process for collaborating on visuals, but what about for the music?

There was a lot of back and forth with that process as well. Typically for animation, you try to have all the music cues created and ready while the production is going on. We had a lot of discussions with our musical collaborators about the show and the specific kind of feeling we wanted to evoke from scene to scene, and the challenging thing about that process is always that the visuals we’re creating music for just aren’t entirely finished. They’re partway there, but the musician has to imagine something more complete to compose to. But then, when the visuals are finished, there might be requests for retakes or small adjustments that make the musical piece fit more cohesively.

There has been an increase in focus on the working conditions that make it harder and harder for illustrators and animators to thrive and cultivate sustainable careers. What’s your read on the current state of the industry?

In a nutshell, the problem is that there are too many shows being made and there aren’t enough experienced animators to go around. Even for Lazarus, we weren’t able to get all of the experienced animators we needed domestically, so we had to bring in quite a few from overseas. With the first episode, there are many non-Japanese animators, especially for the action scenes.

Big picture, what do you think needs to really change in order for animators to get that kind of experience?

In order for an animator to really develop their skills, I think they need to be working on a project and able to focus solely on it. But more often than not, because of the sheer number of shows and films, many animators have to jump from one project to another and scramble to finish their work, and it’s not an environment that’s conducive to genuine artistic growth.

Going back to that first episode, the action scenes in the first half were drawn by a single animator, and another animator handled all of the action in the second half. They each had 50 shots to work on. That, to me, is the ideal way — to have someone who’s already good at action animation be able to focus on a substantial chunk of scenes. That’s how you grow. But so often in other animated projects, you see experienced animators limited to working on maybe two or three shots max, and the end product just isn’t as good.

Judges are using algorithms to justify doing what they already want

Image: Cath Virginia / The Verge, Getty Images

When Northwestern University graduate student Sino Esthappan began researching how algorithms decide who stays in jail, he expected “a story about humans versus technology.” On one side would be human judges, who Esthappan interviewed extensively. On the other would be risk assessment algorithms, which are used in hundreds of US counties to assess the danger of granting bail to accused criminals. What he found was more complicated — and suggests these tools could obscure bigger problems with the bail system itself.

Algorithmic risk assessments are intended to calculate the risk of a criminal defendant not returning to court — or, worse, harming others — if they’re released. By comparing criminal defendants’ backgrounds to a vast database of past cases, they’re supposed to help judges gauge how risky releasing someone from jail would be. Along with other algorithm-driven tools, they play an increasingly large role in a frequently overburdened criminal justice system. And in theory, they’re supposed to help reduce bias from human judges.

But Esthappan’s work, published in the journal Social Problems, found that judges aren’t wholesale adopting or rejecting the advice of these algorithms. Instead, they report using them selectively, motivated by deeply human factors to accept or disregard their scores.

Pretrial risk assessment tools estimate the likelihood that accused criminals will return for court dates if they’re released from jail. The tools take in details fed to them by pretrial officers, including things like criminal history and family profiles. They compare this information with a database that holds hundreds of thousands of previous case records, looking at how defendants with similar histories behaved. Then they deliver an assessment that could take the form of a “low,” “medium,” or “high” risk label or a number on a scale. Judges are given the scores for use in pretrial hearings: short meetings, held soon after a defendant is arrested, that determine whether (and on what conditions) they’ll be released.
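
The scoring step itself can be mechanically simple. As a toy illustration of the pattern described above (fit past cases, score the new defendant, bin the score into a label), here is a sketch in Python. It is not any vendor’s actual model; deployed tools often rely on fixed point weights chosen by their designers rather than a trained classifier, and the features and numbers here are invented.

```python
# Toy illustration of the score-then-bin pattern, not any vendor's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented past cases: [prior arrests, prior failures to appear, age]
past_cases = np.array([
    [0, 0, 34],
    [2, 1, 22],
    [5, 3, 19],
    [1, 0, 45],
    [4, 2, 27],
    [0, 1, 30],
])
# 1 = failed to appear or was rearrested after release, 0 = returned to court
outcomes = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(past_cases, outcomes)

def risk_band(defendant_features):
    """Estimate failure probability for a new defendant and bin it."""
    p = model.predict_proba([defendant_features])[0, 1]
    if p < 0.33:
        return "low"
    if p < 0.66:
        return "medium"
    return "high"

print(risk_band([1, 0, 40]))  # the judge sees only a label like "low"
```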

As with other algorithmic criminal justice tools, supporters position them as neutral, data-driven correctives to human capriciousness and bias. Opponents raise issues like the risk of racial profiling. “Because a lot of these tools rely on criminal history, the argument is that criminal history is also racially encoded based on law enforcement surveillance practices,” Esthappan says. “So there already is an argument that these tools are reproducing biases from the past, and they’re encoding them into the future.”

It’s also not clear how well they work. A 2016 ProPublica investigation found that a risk score algorithm used in Broward County, Florida, was “remarkably unreliable in forecasting violent crime.” Just 20 percent of those the algorithm predicted would commit violent crimes actually did in the next two years after their arrest. The program was also more likely to label Black defendants as future criminals or higher risk compared to white defendants, ProPublica found.

Both the fears and promises around algorithms in the courtroom assume judges are consistently using them

Still, University of Pennsylvania criminology professor Richard Berk argues that human decision-makers can be just as flawed. “These criminal justice systems are made with human institutions and human beings, all of which are imperfect, and not surprisingly, they don’t do a very good job in identifying or forecasting people’s behaviors,” Berk says. “So the bar is really pretty low, and the question is, can algorithms raise the bar? And the answer is yes, if proper information is provided.”

Both the fears and promises around algorithms in the courtroom, however, assume judges are consistently using them. Esthappan’s study shows that’s a flawed assumption at best.

Esthappan interviewed 27 judges across four criminal courts in different regions of the country over one year between 2022 and 2023, asking questions like, “When do you find risk scores more or less useful?” and “How and with whom do you discuss risk scores in pretrial hearings?” He also analyzed local news coverage and case files, observed 50 hours of bond court, and interviewed others who work in the judicial system to help contextualize the findings.

Judges told Esthappan that they used algorithmic tools to process lower-stakes cases quickly, leaning on automated scores even when they weren’t confident in their legitimacy. Overall, they were leery of following low risk scores for defendants accused of offenses like sexual assault and intimate partner violence — sometimes because they believed the algorithms under- or over-weighted various risk factors, but also because their own reputations were on the line. And conversely, some described using the systems to explain why they’d made an unpopular decision — believing the risk scores added authoritative weight.

“Many judges deployed their own moral views about specific charges as yardsticks to decide when risk scores were and were not legitimate in the eyes of the law.”

The interviews revealed recurring patterns in judges’ decisions to use risk assessment scores, frequently based on defendants’ criminal history or social background. Some judges believed the systems underestimated the importance of certain red flags — like extensive juvenile records or certain kinds of gun charges — or overemphasized factors like an old criminal record or low education level. “Many judges deployed their own moral views about specific charges as yardsticks to decide when risk scores were and were not legitimate in the eyes of the law,” Esthappan writes.

Some judges also said they used the scores as a matter of efficiency. These pretrial hearings are short — often less than five minutes — and require snap decisions based on limited information. The algorithmic score at least provides one more factor to consider.

Judges also, however, were keenly aware of how a decision would reflect on them — and according to Esthappan, this was a huge factor in whether they trusted risk scores. When judges saw a charge they believed to be less of a public safety issue and more of a result of poverty or addiction, they would often defer to risk scores, seeing a small risk to their own reputation if they got it wrong and viewing their role, as one judge described it, as calling “balls and strikes,” rather than becoming a “social engineer.”

For high-level charges that involved some sort of moral weight, like rape or domestic violence, judges said they were more likely to be skeptical. This was partly because they identified problems with how the system weighted information for specific crimes — in intimate partner violence cases, for instance, they believed even defendants without a long criminal history could be dangerous. But they also recognized that the stakes — for themselves and others — were higher. “Your worst nightmare is you let someone out on a lower bond and then they go and hurt someone. I mean, all of us, when I see those stories on the news, I think that could have been any of us,” said one judge quoted in the study.

Keeping a truly low-risk defendant in jail has costs, too. It keeps someone who’s unlikely to harm anyone away from their job, their school, or their family before they’ve been convicted of a crime. But there’s little reputational risk for judges — and adding a risk score doesn’t change that calculus.

The deciding factor for judges often wasn’t whether the algorithm seemed trustworthy, but whether it would help them justify a decision they wanted to make. Judges who released a defendant based on a low risk score, for instance, could “shift some of that accountability away from themselves and towards the score,” Esthappan said. If an alleged victim “wants someone locked up,” one subject said, “what you’ll do as the judge is say ‘We’re guided by a risk assessment that scores for success in the defendant’s likelihood to appear and rearrest. And, based on the statute and this score, my job is to set a bond that protects others in the community.’”

“In practice, risk scores expand the uses of discretion among judges who strategically use them to justify punitive sanctions”

Esthappan’s study pokes holes in the idea that algorithmic tools result in fairer, more consistent decisions. If judges are picking when to rely on scores based on factors like reputational risk, Esthappan notes, they may not be reducing human-driven bias — they could actually be legitimizing that bias and making it hard to spot. “Whereas policymakers tout their ability to curb judicial discretion, in practice, risk scores expand the uses of discretion among judges who strategically use them to justify punitive sanctions,” Esthappan writes in the study.

Megan Stevenson, an economist and criminal justice scholar at the University of Virginia School of Law, says risk assessments are something of “a technocratic toy of policymakers and academics.” They’ve seemed like an attractive way to “take the randomness and the uncertainty out of this process,” she says, but based on studies of their impact, they often don’t have a major effect on outcomes either way.

A larger problem is that judges are forced to work with highly limited time and information. Berk, the University of Pennsylvania professor, says collecting more and better information could help the algorithms make better assessments. But that would require time and resources court systems may not have.

But when Esthappan interviewed public defenders, they raised an even more fundamental question: should pretrial detention, in its current form, exist at all? Judges aren’t just working with spotty data. They’re determining someone’s freedom before that person even gets a chance to fight their charges, often based on predictions that are largely guesswork. “Within this context, I think it makes sense that judges would rely on a risk assessment tool because they have so limited information,” Esthappan tells The Verge. “But on the other hand, I sort of see it as a bit of a distraction.”

Algorithmic tools aim to address a real issue: imperfect human decision-making. “The question that I have is, is that really the problem?” Esthappan tells The Verge. “Is it that judges are acting in a biased way, or is there something more structurally problematic about the way that we’re hearing people at pretrial?” The answer, he says, is that “there’s an issue that can’t necessarily be fixed with risk assessments, but that it goes into a deeper cultural issue within criminal courts.”

Image: Cath Virginia / The Verge, Getty Images

When Northwestern University graduate student Sino Esthappan began researching how algorithms decide who stays in jail, he expected “a story about humans versus technology.” On one side would be human judges, who Esthappan interviewed extensively. On the other would be risk assessment algorithms, which are used in hundreds of US counties to assess the danger of granting bail to accused criminals. What he found was more complicated — and suggests these tools could obscure bigger problems with the bail system itself.

Algorithmic risk assessments are intended to calculate the risk of a criminal defendant not returning to court — or, worse, harming others — if they’re released. By comparing criminal defendants’ backgrounds to a vast database of past cases, they’re supposed to help judges gauge how risky releasing someone from jail would be. Along with other algorithm-driven tools, they play an increasingly large role in a frequently overburdened criminal justice system. And in theory, they’re supposed to help reduce bias from human judges.

But Esthappan’s work, published in the journal Social Problems, found that judges aren’t wholesale adopting or rejecting the advice of these algorithms. Instead, they report using them selectively, motivated by deeply human factors to accept or disregard their scores.

Pretrial risk assessment tools estimate the likelihood that accused criminals will return for court dates if they’re released from jail. The tools take in details fed to them by pretrial officers, including things like criminal history and family profiles. They compare this information with a database that holds hundreds of thousands of previous case records, looking at how defendants with similar histories behaved. Then they deliver an assessment that could take the form of a “low,” “medium,” or “high” risk label or a number on a scale. Judges are given the scores for use in pretrial hearings: short meetings, held soon after a defendant is arrested, that determine whether (and on what conditions) they’ll be released.

As with other algorithmic criminal justice tools, supporters position them as neutral, data-driven correctives to human capriciousness and bias. Opponents raise issues like the risk of racial profiling. “Because a lot of these tools rely on criminal history, the argument is that criminal history is also racially encoded based on law enforcement surveillance practices,” Esthappan says. “So there already is an argument that these tools are reproducing biases from the past, and they’re encoding them into the future.”

It’s also not clear how well they work. A 2016 ProPublica investigation found that a risk score algorithm used in Broward County, Florida, was “remarkably unreliable in forecasting violent crime.” Just 20 percent of the people the algorithm predicted would commit violent crimes actually did so within two years of their arrest. The program was also more likely to label Black defendants as future criminals or as higher risk than white defendants, ProPublica found.

Both the fears and promises around algorithms in the courtroom assume judges are consistently using them

Still, University of Pennsylvania criminology professor Richard Berk argues that human decision-makers can be just as flawed. “These criminal justice systems are made with human institutions and human beings, all of which are imperfect, and not surprisingly, they don’t do a very good job in identifying or forecasting people’s behaviors,” Berk says. “So the bar is really pretty low, and the question is, can algorithms raise the bar? And the answer is yes, if proper information is provided.”

Both the fears and promises around algorithms in the courtroom, however, assume judges are consistently using them. Esthappan’s study shows that’s a flawed assumption at best.

Esthappan interviewed 27 judges across four criminal courts in different regions of the country over one year between 2022 and 2023, asking questions like, “When do you find risk scores more or less useful?” and “How and with whom do you discuss risk scores in pretrial hearings?” He also analyzed local news coverage and case files, observed 50 hours of bond court, and interviewed others who work in the judicial system to help contextualize the findings.

Judges told Esthappan that they used algorithmic tools to process lower-stakes cases quickly, leaning on automated scores even when they weren’t confident in their legitimacy. Overall, they were leery of following low risk scores for defendants accused of offenses like sexual assault and intimate partner violence — sometimes because they believed the algorithms under- or over-weighted various risk factors, but also because their own reputations were on the line. And conversely, some described using the systems to explain why they’d made an unpopular decision — believing the risk scores added authoritative weight.

“Many judges deployed their own moral views about specific charges as yardsticks to decide when risk scores were and were not legitimate in the eyes of the law.”

The interviews revealed recurring patterns in judges’ decisions to use risk assessment scores, frequently based on defendants’ criminal history or social background. Some judges believed the systems underestimated the importance of certain red flags — like extensive juvenile records or certain kinds of gun charges — or overemphasized factors like an old criminal record or low education level. “Many judges deployed their own moral views about specific charges as yardsticks to decide when risk scores were and were not legitimate in the eyes of the law,” Esthappan writes.

Some judges also said they used the scores as a matter of efficiency. These pretrial hearings are short — often less than five minutes — and require snap decisions based on limited information. The algorithmic score at least provides one more factor to consider.

Judges also, however, were keenly aware of how a decision would reflect on them — and according to Esthappan, this was a huge factor in whether they trusted risk scores. When judges saw a charge they believed to be less of a public safety issue and more of a result of poverty or addiction, they would often defer to risk scores, seeing a small risk to their own reputation if they got it wrong and viewing their role, as one judge described it, as calling “balls and strikes,” rather than becoming a “social engineer.”

For high-level charges that involved some sort of moral weight, like rape or domestic violence, judges said they were more likely to be skeptical. This was partly because they identified problems with how the system weighted information for specific crimes — in intimate partner violence cases, for instance, they believed even defendants without a long criminal history could be dangerous. But they also recognized that the stakes — for themselves and others — were higher. “Your worst nightmare is you let someone out on a lower bond and then they go and hurt someone. I mean, all of us, when I see those stories on the news, I think that could have been any of us,” said one judge quoted in the study.

Keeping a truly low-risk defendant in jail has costs, too. It keeps someone who’s unlikely to harm anyone away from their job, their school, or their family before they’ve been convicted of a crime. But there’s little reputational risk for judges — and adding a risk score doesn’t change that calculus.

The deciding factor for judges often wasn’t whether the algorithm seemed trustworthy, but whether it would help them justify a decision they wanted to make. Judges who released a defendant based on a low risk score, for instance, could “shift some of that accountability away from themselves and towards the score,” Esthappan said. If an alleged victim “wants someone locked up,” one subject said, “what you’ll do as the judge is say ‘We’re guided by a risk assessment that scores for success in the defendant’s likelihood to appear and rearrest. And, based on the statute and this score, my job is to set a bond that protects others in the community.’”

“In practice, risk scores expand the uses of discretion among judges who strategically use them to justify punitive sanctions”

Esthappan’s study pokes holes in the idea that algorithmic tools result in fairer, more consistent decisions. If judges are picking when to rely on scores based on factors like reputational risk, Esthappan notes, they may not be reducing human-driven bias — they could actually be legitimizing that bias and making it hard to spot. “Whereas policymakers tout their ability to curb judicial discretion, in practice, risk scores expand the uses of discretion among judges who strategically use them to justify punitive sanctions,” Esthappan writes in the study.

Megan Stevenson, an economist and criminal justice scholar at the University of Virginia School of Law, says risk assessments are something of “a technocratic toy of policymakers and academics.” She says it’s seemed to be an attractive tool to try to “take the randomness and the uncertainty out of this process,” but based on studies of their impact, they often don’t have a major effect on outcomes either way.

A larger problem is that judges are forced to work with highly limited time and information. Berk, the University of Pennsylvania professor, says collecting more and better information could help the algorithms make better assessments. But that would require time and resources court systems may not have.

But when Esthappan interviewed public defenders, they raised an even more fundamental question: should pretrial detention, in its current form, exist at all? Judges aren’t just working with spotty data. They’re determining someone’s freedom before that person even gets a chance to fight their charges, often based on predictions that are largely guesswork. “Within this context, I think it makes sense that judges would rely on a risk assessment tool because they have so limited information,” Esthappan tells The Verge. “But on the other hand, I sort of see it as a bit of a distraction.”

Algorithmic tools are aiming to address a real issue with imperfect human decision-making. “The question that I have is, is that really the problem?” Esthappan tells The Verge. “Is it that judges are acting in a biased way, or is there something more structurally problematic about the way that we’re hearing people at pretrial?” The answer, he says, is that “there’s an issue that can’t necessarily be fixed with risk assessments, but that it goes into a deeper cultural issue within criminal courts.”

Read More 

The Amazon Echo graveyard

Image: Mojo Wang for The Verge

The Echo ecosystem has seen its fair share of failures while trying to popularize Amazon’s Alexa smart assistant. For the past decade, Amazon has aspired for Alexa to be more than just a convenient way to start a cooking timer. To convince consumers of the smart assistant’s potential, the company has reinvented its Echo line again and again. From fashion-critiquing cameras to microwaves you can ask to make popcorn, Echo’s repeated renaissance has often felt wildly experimental in a way that hasn’t always clicked with consumers.
Although the Echo smart speaker has endured, many other Echo spinoffs, accessories, and variations have not. They were either too weird, too redundant, or too ahead of their time to survive longer than a few years before quietly disappearing from Amazon’s online store.
Let’s take a look at the Echo products that failed to win consumers over or failed to convince Amazon they were worth keeping around.

Echo Look

Photo by Vjeran Pavic / The Verge

Using a camera and built-in LED lighting, the Echo Look could capture body-length photos and videos of users wearing various outfits that were cataloged through a standalone app and rated using “machine learning algorithms with advice from fashion specialists.”
It remains one of Amazon’s most peculiar and controversial Echo devices and immediately raised concerns about privacy and AI when it debuted in 2017. At $199.99, it was also one of the more expensive Echo spin-offs. It was eventually discontinued in 2020.
Should Amazon resurrect it? No one needed it in 2017. No one needs it now.
Amazon Tap

Photo: The Verge

The Tap was Amazon’s first smart speaker to disconnect Alexa from a power outlet. It was a smart Bluetooth speaker with nine hours of battery life and a convenient charging dock. Unlike Amazon’s Echo smart speakers, the Tap required users to physically press a button to summon Alexa but was eventually updated so that the smart assistant was always listening for voice commands.
At $130, it was priced competitively with similarly sized wireless speakers, but its smart capabilities were only available while it had Wi-Fi connectivity. The Tap was discontinued in 2018, just two years after it launched.
Should Amazon resurrect it? Yes, not every device needs to be always listening.
Echo Buttons

Photo by Lauren Goode / The Verge

The first in a line of new “Alexa Gadgets” that never really took off, the Echo Buttons debuted in 2017 as wireless, puck-shaped buzzers that could be used to play single or multiplayer trivia games through an Echo smart speaker.
Available in two-packs for $19.99, the Echo Buttons were meant to expand the usefulness of Echo products as fun and playful devices, but they were discontinued a few years later as smart speakers never really caught on as gaming devices.
Should Amazon resurrect it? No, we have better ways to game.
Echo Spot

Photo by Amelia Holowaty Krales / The Verge

Following the Echo Look, the Echo Spot was the second Amazon product to sneak a camera into bedrooms. With a circular 2.5-inch screen, the Spot was a smaller, cheaper, and subtler version of the Echo Show, allowing it to be more discreetly used around a home, but it functioned best as a smart alarm clock on a bedside table.
The Spot could be used for video calls, but the camera could also be disabled for those with privacy concerns. Amazon discontinued the Echo Spot in 2019 but revived it in 2024 without the camera.
Should Amazon resurrect it? It’s already back from the dead.
Echo Connect

Image: Amazon

In 2017, the Echo Connect arrived to expand the Echo’s calling abilities to actual phone numbers, not just other Echo devices. When plugged into a telephone jack, the small black box turned Echo smart speakers into speakerphones that could call landline numbers, including 911.
Amazon stopped selling the hardware a few years after its debut as similar functionality was added to later Echo speakers — even though that built-in calling was limited to a select number of contacts and to outgoing calls to numbers in the US, Canada, and UK.
Should Amazon resurrect it? Yes, if only for our grandparents.
Echo Plus

Photo by Dan Seifert / The Verge

Debuting three years after the original Amazon Echo launched in 2014, the Echo Plus included a redesigned speaker with improved sound and had aspirations of being a one-stop smart home hub. The Echo Plus was cheaper than the original and included Zigbee support, allowing it to control smart lights, outlets, and locks without the need for a separate hub. But the Echo Plus lacked support for Z-Wave, the other popular smart home protocol at the time, and was $50 more expensive than a smaller Echo that debuted alongside it.
An updated version of the Echo Plus was announced in 2018, but the product was eventually discontinued in 2020 as smart home technology evolved.
Should Amazon resurrect it? No, there are now better smart home solutions.
Echo Wall Clock

Photo by Dan Seifert / The Verge

The Echo Wall Clock, announced in 2018, lacked a microphone and was instead designed to be an accessory for Echo smart speakers that displayed the current time and the progress of running timers using a ring of LEDs.
Amazon later partnered with Disney for a Mickey Mouse version of the clock, while Citizen introduced alternate designs. The clock’s limited functionality, and a problematic rollout with many users experiencing connectivity issues, contributed to Amazon eventually discontinuing the clock.
Should Amazon resurrect it? No, its usefulness was a little too limited.
AmazonBasics Microwave

Photo by Amelia Holowaty Krales / The Verge

Although it lacked a microphone and speaker of its own, the $59.99 AmazonBasics Microwave was designed to connect to existing Echo devices in a home so you could ask Alexa to microwave a potato or a bag of popcorn without having to navigate a menu of cooking presets on the oven itself.
Being able to quickly stop the microwave with a voice command when you smelled burning food was a useful feature, but the microwave was more useful as a tool for Amazon to demonstrate its Alexa Connect Kit as it tried to convince other hardware makers to integrate its smart assistant. Four years after its debut, the microwave was discontinued.
Should Amazon resurrect it? No, but we’ll take an Alexa-equipped air fryer.
Echo Input

Photo by Dieter Bohn / The Verge

The Echo Input was a small puck-shaped dongle that used an audio cable or Bluetooth to bring music-streaming capabilities and access to Amazon Alexa to existing speakers and audio setups.
When it debuted in 2018, its ability to connect to Amazon’s smart assistant gave the Echo Input an advantage over Google’s Chromecast Audio. But given that other Echo products could also be connected to existing speakers, the Input was redundant and eventually discontinued.
Should Amazon resurrect it? No.
Echo Link and Echo Link Amp

Image: Amazon

The Echo Link and Echo Link Amp offered similar functionality to the Echo Input but with features that catered to those using music services with higher-quality audio streams. The $199.99 Echo Link included more output options than the Echo Input for connecting to an audio system’s receiver or amplifier, plus its own volume knob.
As the name implies, the $299.99 Echo Link Amp also included a built-in 60-watt amplifier, allowing it to directly connect to speakers. Both products were meant to help Amazon compete with Sonos but were discontinued within a few years.
Should Amazon resurrect it? No, just buy a Sonos.
Echo Dot with Clock

Photo by Chris Welch / The Verge

By 2019, the compact Echo Dot had become one of the best-selling products on Amazon, and that same year, it gained one of its most useful features. The Echo Dot with Clock had a four-digit, seven-segment LED display hidden beneath its fabric cover that made info like the time, weather, and timers accessible with just a quick glance.
It would eventually be updated with a spherical design in 2020 and an improved LED dot matrix display in 2022, but that would be the last version. The Echo Dot with Clock was discontinued in 2024 and replaced by the revival of the Echo Spot featuring a full-color LCD display.
Should Amazon resurrect it? Yes, not every device needs a screen.
Echo Loop

Photo: The Verge

Amazon’s Echo Loop smart ring debuted in 2019 as a tiny wearable Echo smart speaker. While companies like Oura were pushing smart rings as health-tracking tools, the Echo Loop featured a speaker and microphones so users could talk to their hands to interact with Alexa.
Although the Echo Loop allowed for discreet interactions, it had a limited battery life, was expensive at $179.99, and its speaker was sometimes too quiet to actually hear. Smartwatches, headphones, and smart glasses proved to be better ways to quietly interact with smart assistants, and Amazon discontinued the Echo Loop a year later.
Should Amazon resurrect it? No, there are better uses for smart rings.
Echo Flex

Photo by Dan Seifert / The Verge

A voice-activated smart assistant is only useful if it’s close enough to hear you. For $24.99, the Echo Flex, which debuted in 2019, was an affordable way to put Alexa in every room of your home.
The tiny smart speaker plugged directly into a wall outlet, and its functionality could be expanded through modular accessories, including a night light, motion sensor, and digital clock.
But that clock accessory pushed the price of the Echo Flex closer to the Echo Dot with Clock, which had a better speaker for listening to music. The Echo Flex was eventually discontinued in 2023.
Should Amazon resurrect it? Yes, but integrate all the functionality of the modular accessories.


Read More 

Alexa, thank you for the music

Image: Mojo Wang for The Verge

When I was dealing with an aging parent, Alexa was a great help — in both practical and emotionally important ways. About two years ago, I got a call from my mother. “You know,” she said, “that Alexa is really working out. I was feeling a little depressed, so I told Alexa to play some nice music, and that’s exactly what she did. In a few minutes, I was feeling so much better!”
Alexa had become, not exactly a companion, but a presence in my mother’s home — one that made both her and me feel a little better. This was at least part of what I hoped would happen when I first went shopping for an Echo device. Websites focused on senior care are full of advice on using Amazon’s smart speakers as a caregiving tool, and Amazon’s technology was designed to make tech more approachable and accessible — goals that it often, though not always, succeeds at.

Here’s how it started. My mother had lived most of her life as a teacher in the NYC public schools system, a smart, savvy woman with a master’s in education, a progressive political point of view, and a sometimes irritating ability to assume charge of almost any situation. But she was now entering her late 90s and beginning to have serious problems with her health and her short-term memory. Despite her determination to stay independent as long as possible — by playing games on her computer, keeping up with the news, and writing copious journal entries about her day-to-day activities — these problems increasingly affected her ability to do simple tasks, to learn new skills, and to live on her own.
We were able to hire an aide to help her during the daylight hours — make meals, clean up, and help with other chores that she was now unable to do herself. But my mom was also stubborn and refused to have anyone there at night or to wear any kind of emergency button in case she needed help. I lived about 40 minutes away and only spent weekends with her. We needed some way of making sure she was okay when she was the only person in the apartment.
My mother grew up at a time when just having a home telephone was new and exciting
So I got her an Amazon Echo Show 8 smart display in the hopes that it could be the beginning of a smart home system that would help keep her safe and active. It all depended on how well my mother, who grew up at a time when just having a home telephone was new and exciting, would accept the device. The Echo’s eight-inch screen was large enough for her to be able to view it easily but small enough so it wouldn’t overwhelm the room. She could interact with the personal assistant, while the camera would allow me to interact with her remotely. I set it up and introduced her to Alexa.
And — it worked. Sort of.
I thought we could start by using it as a way to communicate visually. That was pretty much a failure. My mother was used to calling people on a phone, and while she was impressed with the whole “see the person you’re talking to” idea, she wasn’t very enthusiastic about using it herself. “It’s not for me,” she said firmly.

Photo by Jennifer Pattison Tuohy / The Verge
Verger Jennifer Pattison Tuohy could drop in on her dad via an Echo Show. My mother wasn’t as cooperative.

Okay, I thought, there’s always the “drop-in” feature. I could use it to monitor what was happening in the apartment. However, the Echo Show had been placed in a small room off the kitchen that we called “The Den” where my mother had her meals, wrote in her journal, and spent a lot of her time — and as a result, it could only “see” into that room and the kitchen. The one time I suggested that I put cameras around the apartment, I got one of her looks — the one that made me feel as if I were five years old again. A camera in the bedroom? No way.
But luckily, there were some things the Echo did help with. About that time, my mother’s ancient bedside clock radio finally gave up the ghost. With some trepidation, I replaced it with an Echo Dot with Clock — and was delighted when my mother informed me that she loved it! She could not only see what time it was but also ask Alexa what the weather was, right from her bed. And what made me happy was that I was able to teach her to yell, “Alexa, call Barbara” if she needed me in an emergency. Between the Dot and the Show, Alexa could now respond no matter where my mother was in the apartment — including the bathroom with the door closed. (I checked.) She only used the feature a couple of times, and never for an actual emergency, but it was there for “just in case.”
In the end, though, the most important gift that the two Echos gave to my mother was music.
Decades ago, my parents bought what was then the latest in audio technology: a modular stereo system that consisted of a turntable, a receiver, an AM / FM radio, and a cassette tape player. Now it sat unused, having become too complicated for my mother to deal with. But with the Echo, she could play music whenever she liked. She didn’t even have to remember the names of the songs she liked or the musicians that she had once doted on. All she had to do was say, “Alexa, play some quiet music,” or “Alexa, play some happy music.” Alexa would play some old-time blues or folk or big-band music. And I’d get a call about how she had listened to her music and how good it made her feel.

Photo by Jennifer Pattison Tuohy / The Verge
An Echo Dot with Clock substituted nicely for the old clock radio.

Did the two Echos do everything I had hoped they would? Well, yes and no. They certainly gave my mother a simple and friendly way to get information and reminders. More importantly, they provided a way that she could contact me in an emergency. But I never found the time to install other smart setups that were available. It was, at least then, just too complex a task to deal with.
In fact, Amazon has experimented with extending the usefulness of its smart devices for seniors. I never got around to trying Amazon’s $20-a-month Alexa Together service, which connected to its own 24/7 emergency line — and it was apparently not very successful, since it was discontinued in June of this year. I might have opted for the less expensive Emergency Assist feature, introduced last September, which allows users to contact emergency services. But by that time, my mother was getting round-the-clock care from family and aides and no longer needed it.
Still, the Echo was good to have. Near the end of her life, when my mother was bedridden and too weak to speak, I could sit next to her and say, “Alexa, play some Woody Guthrie” or “Alexa, play some Bessie Smith” or “Alexa, play some Count Basie.” The music would start, and my mother would smile — and would, for a time, feel better. And although Amazon’s smart speaker was not the perfect answer to all our needs, for those few moments, I will always be grateful to Alexa.


Read More 

Avride rolls out its next-gen sidewalk delivery robots

Avride, the robotics company that spun out of Russian search giant Yandex, has a new sidewalk delivery robot to show off.
The company currently has plans to operate a fleet of six-wheeled delivery robots in Austin, Texas, delivering Uber Eats orders to customers, as well as in South Korea. Now Avride’s next-generation model is shedding a couple of wheels — and showing big gains in efficiency.
The new robot has only four wheels, a design Avride says is more energy efficient than its six-wheeled model. The six-wheeled versions were simple to build and could turn confidently on a variety of surfaces. But they also created a lot of friction, which drained energy from the robot’s internal battery.
The new four-wheeled designs are much more efficient in their energy consumption, which means they can stay in operation longer before needing to be recharged. And Avride redesigned the chassis to support improved maneuverability and precision.

The robot’s wheels are mounted on movable arms attached to a pivoting axle, which allows the wheels to rotate both inward and outward, reducing friction during turns. And instead of using traditional front and rear axles, the wheels are mechanically connected in pairs on each side. This allows for “simultaneous adjustment of the turning angles of both wheels on each side, enabling precise positioning for executing maneuvers,” Avride says.
The new-generation models can turn 180 degrees almost instantly, which the company says will improve the robot’s ability to navigate narrow sidewalks and reverse out of the way for someone using a wheelchair or pushing a stroller.
This video shows how Avride’s new robot can navigate tight turns, as well as inclines.

The company also improved the robot’s control system for better torque and updated the hardware with Nvidia’s Jetson Orin platform. A modular cargo area will now allow Avride’s operators to swap in a variety of compartments based on the size of the package. And a new front-facing LED panel can display friendly-seeming digital eyes — to reduce instances of the robot being attacked or vandalized.
“The various eye expressions not only ‘bring the robot to life’ but also create a sense of interaction for clients when the robot looks around or winks after delivering an order,” the company says.
Avride’s new robots are being manufactured in Taiwan, and are expected to join its Austin-based fleet in the coming days. Avride spokesperson Yulia Shveyko said the company expects to have “at least a hundred” deployed by January 2025.
The company recently struck a deal with Uber to expand operations to Jersey City and Dallas, as well as to launch a robotaxi service.


Read More 

Alexa, where’s my Star Trek Computer?

Image: Mojo Wang for The Verge

When Alexa launched 10 years ago, Amazon envisioned a new computer platform that could do anything for you. A decade later, the company is still trying to build it. Amazon’s Alexa was announced on November 6th, 2014. A passion project for its founder, Jeff Bezos, Amazon’s digital voice assistant was inspired by and aspired to be Star Trek’s “Computer” — an omniscient, omnipresent, and proactive artificial intelligence controlled by your voice. “It has been a dream since the early days of science fiction to have a computer that you can talk to in a natural way and actually ask it to have a conversation with you and ask it to do things for you,” Bezos said shortly after Alexa’s launch. “And that is coming true.”
At the time, that future felt within reach. In the months following Alexa’s launch, it wowed early buyers with its capabilities. Playing music, getting the weather, and setting a timer were suddenly hands-free experiences. Packaged inside a Pringles can-shaped speaker called an Echo, Alexa moved into 5 million homes in just two years, including my own.
Alexa is still mainly doing what it’s always done: playing music, reporting the weather, and setting timers
Fast-forward to today, and there are over 40 million Echo smart speakers in US households, with Alexa processing billions of commands a week globally. But despite this proliferation of products and popularity, the “superhuman assistant who is there when you need it, disappears when you don’t, and is always working in the background on your behalf” that Amazon promised just isn’t here.
Alexa is still mainly doing what it’s always done: playing music, reporting the weather, and setting timers. Its capabilities have expanded — Alexa can now do useful things like control your lights, call your mom, and remind you to take out the trash. But despite a significant investment of time, money, and resources over the last decade, the voice assistant hasn’t become noticeably more intelligent. As one former Amazon employee said, “We worried we’ve hired 10,000 people and we’ve built a smart timer.”
It’s disappointing. Alexa holds so much promise. While its capabilities are undoubtedly impressive — not to mention indispensable for many people (particularly in areas like accessibility and elder care) — it’s still basically a remote control in our homes. I’m holding out hope for the dream of a highly capable ambient computer — an artificial intelligence that will help manage our lives and homes as seamlessly as Captain Picard ran the Starship Enterprise. (Only preferably with fewer red alerts).
Today, I may have an Alexa smart speaker in every room of my house, but that hasn’t made the assistant any more useful. Alexa has gained thousands of abilities over the last few years, but I still won’t rely on it to do anything more complicated than execute a command on a schedule, add milk to my shopping list, and maybe tell me if grapes are poisonous to chickens. (They’re not, but Alexa says I should check with my vet to be sure.) If anything, on the eve of the voice assistant’s 10th birthday, Alexa’s original dream feels further away.

Photo by Jennifer Pattison Tuohy
My first Echo arrived under the Christmas tree in 2015. A decade on, and it’s still plugging away.

It’s easy to forget how groundbreaking Alexa was when it first appeared. Instead of being trapped in a phone like Apple’s Siri or a computer like Microsoft’s Cortana, Alexa came inside the Echo, the world’s first voice-activated speaker. Its far-field speech recognition, powered by a seven-microphone array, was seriously impressive — using it felt almost magical. You could shout at an Echo from anywhere in a room, and that glowing blue ring would (almost) always turn on, signaling Alexa was ready to tell you a joke or set that egg timer.
It was Amazon’s pivot into smart home control that provided the first hints of the promised Star Trek-like future. Silly fart jokes and encyclopedic knowledge aside, the release of an Alexa smart home API in 2016, followed by the Echo Plus packing a Zigbee radio in 2017, allowed the assistant to connect to and control devices in our homes. Saying “Alexa, Tea. Earl Grey. Hot,” and having a toasty cuppa in your hands a few moments later felt closer than ever.
This was genuinely exciting. While tea from a replicator wasn’t here yet, asking Alexa to turn your lights off while sitting on the couch or to turn up your thermostat without getting out from under the covers felt like living in the future. We finally had something that resembled Star Trek’s “Computer” in our homes — Amazon even let us call it “Computer.”
In retrospect, Alexa brought with it the beginnings of the modern smart home. Simple voice control made the Internet of Things accessible; it brought technology into the home without being locked behind a complicated computer interface or inside a personal device. Plus, Amazon’s open approach to the connected home — in a time of proprietary smart home ecosystems — helped spur a wave of new consumer-level connected devices. Nest, August, Philips Hue, Ecobee, Lutron, and LIFX all partly owe their success to Alexa’s ease of operation.
But the ecosystem that sprang up around Alexa grew too quickly. Anyone could develop capabilities (which Amazon calls skills) for the assistant with little oversight. While some skills were simple and fun, many were buggy and unreliable, and specific wording was needed to invoke each one. It all added up to an inconsistent and often frustrating experience.
Asking Alexa to turn up your thermostat without getting out from under the covers felt like living in the future
Then Alexa hit a wall. There’s an assumption with technology that it will just keep improving. But instead of developing the core technology, Amazon relied on third-party developers to make Alexa do more, focusing its resources on putting the voice assistant in more devices and making it capable of controlling more things.
The more devices that worked with Alexa and the more capabilities Amazon added to the platform, the harder it became to manage, control, and access them all. Voice control is great for simple commands, but without easier ways to talk to Alexa, these new features were lost on most users.
Alexa Routines emerged as a way to corral all the different gadgets and functions you could use Alexa for, but they relied on you spending time programming in an app, alongside constantly troubleshooting devices and their connectivity.
Hearing “‘Lamp’ isn’t responding. Please check its network connection and power supply” after issuing a command is beyond frustrating. And spending hours a month configuring and troubleshooting your smart home wasn’t part of the promise. This is what a smart computer should be able to do for you.
We’ve had to learn how to speak to Alexa rather than Alexa learning to speak to us
Amazon masked Alexa’s failure to get smarter with an ever-ballooning line of Echo hardware. New smart speakers arrived annually, Alexa moved into a clock and a microwave, and new form factors attempted to push customers to take Alexa outside of the house in their ears (Echo Buds), on their fingers (Echo Loop), on their faces (Echo Glasses), and in their cars (Echo Auto).
Many of these devices were forgettable, did little to advance Alexa’s capabilities, and mostly served to lose Amazon money. The Wall Street Journal reported earlier this year that Amazon has lost tens of billions of dollars on its broader devices unit.
Even with this “throw everything at the wall and see what sticks” approach, Amazon never cracked that second must-have form factor. In 2017, it invented the smart display — an Echo with a touchscreen that added benefits like video calling, watching security cameras, and showing information rather than just telling you. But sluggish processors, finicky touchscreens, and too many ads meant the smart display never really furthered Alexa’s core benefit.
Today, people buy Echo devices primarily because they’re cheaper than the competition, and they can use them to do basically what Alexa could do in 2014: set timers, check the weather, and listen to music. There’s no expectation for something better from a device that costs as little as $18.

Photo by David Pierce / The Verge
Dave Limp, Amazon’s former head of devices and services, at the company’s last big hardware event in September 2023. He demoed a “new” Alexa that is more conversational and less transactional. It has yet to arrive.

After all these years, just talking to Alexa remains the biggest hurdle. We’ve had to learn how to speak to Alexa rather than Alexa learning to speak to us. Case in point, my connected kitchen faucet still requires me to say, “Alexa, ask Moen to dispense 2 cups of hot water.” As my husband points out, if Alexa really was “smart,” wouldn’t it just know that I’m standing in front of the kitchen sink and doing what I ask without the need for hard-to-remember phrases?
The good news, at least on that front, is that technology is catching up. Large language models and generative AI could bring us an Alexa we can talk to more naturally. Last year, Amazon announced that it’s working on a “new” LLM-powered Alexa that is more proactive and conversational and less pedestrian and transactional.
This alone would be a big leap forward. But while generative AI could make voice assistants smarter, it’s not a silver bullet. LLMs solve the basic “make sense of language” problem, but they don’t — yet — have the ability to act on that language, not to mention the concerns about a powerful AI hallucinating in your home.
What Alexa really needs to become “Computer” is context
What Alexa really needs to become a “Computer” is context. To be effective, an omniscient voice assistant needs to know everything about you, your home, and the people and devices in it. This is a hard task. And while Echo speakers with ultrasound tech and smart home sensors can provide some context, there is one crucial area where Amazon is way behind the competition: you.
Unlike Google and Apple — which have access to data about you through your smartphone, calendar, email, or internet searches — Amazon has largely been locked out of your personal life beyond what you buy on its store or select data you give it access to. And its privacy missteps have kept people from trusting it.
But Google and Apple haven’t cracked the smart home yet, and while they are making serious moves in the space, Alexa still has a sizable head start. According to Amazon, the “New Alexa” can complete multistep routines you can create just by listing tasks. Add in context about who lives in your home, where they are at any point, and what they should be doing, and it’s feasible that the assistant could handle a task like this with just one command:
Alexa, tell my son not to forget his science project; set the alarm when he leaves. Disarm the alarm and unlock the back door for the plumber at 4PM, then lock it again at 5PM. Preheat the oven to 375 degrees at 6PM, but if I’m running late, adjust the time.
This type of capability would bring a whole new level of utility to Alexa, maybe enough to justify charging for it, as the company has said it plans to.
It’s time for Alexa to boldly go where no voice assistant has gone before
However, despite last year’s splashy launch of this LLM-powered assistant, we’ve heard nothing more. Amazon even skipped its big hardware event this year, where it traditionally announces dozens of new Alexa and Alexa-compatible devices and services. This is likely because, based on reports, Amazon is far from achieving its promised “New Alexa.”
But it needs to pull off its promised reinvention of Alexa, or Apple and Google will overtake it.
In 2014, Amazon set the stage for voice control in the home and, over the last decade, laid the groundwork for a smarter home. Today, Alexa is the most popular voice assistant inside a smart speaker — with over two-thirds of the US market. Outside of the home, Google’s Assistant and Apple’s Siri dominate. As those companies invest more in the smart home and eventually bring Apple Intelligence and Gemini smarts to their home products, Alexa’s days of dominance may be numbered.
The path to a generative AI-powered, context-aware smart home is fraught with pitfalls, but with all its history here, Amazon feels best poised to pull it off — if it can get out of its own way. The home is the final frontier, and it’s time for Alexa to boldly go where no voice assistant has gone before and become truly intelligent.

Image: Mojo Wang for The Verge

When Alexa launched 10 years ago, Amazon envisioned a new computer platform that could do anything for you. A decade later, the company is still trying to build it.

Amazon’s Alexa was announced on November 6th, 2014. A passion project for its founder, Jeff Bezos, Amazon’s digital voice assistant was inspired by and aspired to be Star Trek’s “Computer” — an omniscient, omnipresent, and proactive artificial intelligence controlled by your voice. “It has been a dream since the early days of science fiction to have a computer that you can talk to in a natural way and actually ask it to have a conversation with you and ask it to do things for you,” Bezos said shortly after Alexa’s launch. “And that is coming true.”

At the time, that future felt within reach. In the months following Alexa’s launch, it wowed early buyers with its capabilities. Playing music, getting the weather, and setting a timer were suddenly hands-free experiences. Packaged inside a Pringles can-shaped speaker called an Echo, Alexa moved into 5 million homes in just two years, including my own.

Alexa is still mainly doing what it’s always done: playing music, reporting the weather, and setting timers

Fast-forward to today, and there are over 40 million Echo smart speakers in US households, with Alexa processing billions of commands a week globally. But despite this proliferation of products and popularity, the “superhuman assistant who is there when you need it, disappears when you don’t, and is always working in the background on your behalf” that Amazon promised just isn’t here.

Alexa is still mainly doing what it’s always done: playing music, reporting the weather, and setting timers. Its capabilities have expanded — Alexa can now do useful things like control your lights, call your mom, and remind you to take out the trash. But despite a significant investment of time, money, and resources over the last decade, the voice assistant hasn’t become noticeably more intelligent. As one former Amazon employee said, “We worried we’ve hired 10,000 people and we’ve built a smart timer.”

It’s disappointing. Alexa holds so much promise. While its capabilities are undoubtedly impressive — not to mention indispensable for many people (particularly in areas like accessibility and elder care) — it’s still basically a remote control in our homes. I’m holding out hope for the dream of a highly capable ambient computer — an artificial intelligence that will help manage our lives and homes as seamlessly as Captain Picard ran the Starship Enterprise. (Only preferably with fewer red alerts.)

Today, I may have an Alexa smart speaker in every room of my house, but that hasn’t made my home more useful. Alexa has gained thousands of abilities over the last few years, but I still won’t rely on it to do anything more complicated than execute a command on a schedule, add milk to my shopping list, and maybe tell me if grapes are poisonous to chickens. (They’re not, but Alexa says I should check with my vet to be sure.) If anything, on the eve of the voice assistant’s 10th birthday, Alexa’s original dream feels further away.

Photo by Jennifer Pattison Tuohy
My first Echo arrived under the Christmas tree in 2015. A decade on, and it’s still plugging away.

It’s easy to forget how groundbreaking Alexa was when it first appeared. Instead of being trapped in a phone like Apple’s Siri or a computer like Microsoft’s Cortana, Alexa came inside the Echo, the world’s first voice-activated speaker. Its far-field speech recognition, powered by a seven-microphone array, was seriously impressive — using it felt almost magical. You could shout at an Echo from anywhere in a room, and that glowing blue ring would (almost) always turn on, signaling Alexa was ready to tell you a joke or set that egg timer.

It was Amazon’s pivot into smart home control that provided the first hints of the promised Star Trek-like future. Silly fart jokes and encyclopedic knowledge aside, the release of an Alexa smart home API in 2016, followed by the Echo Plus packing a Zigbee radio in 2017, allowed the assistant to connect to and control devices in our homes. Saying “Alexa, Tea. Earl Grey. Hot,” and having a toasty cuppa in your hands a few moments later felt closer than ever.
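
For a flavor of what that integration looked like from the developer’s side, here is a minimal sketch of an AWS Lambda handler for a smart home skill’s power controls. The directive and response envelopes follow the shape of Amazon’s published Smart Home Skill API (v3) payloads, but the device store and turn_on helper are hypothetical stand-ins, not any real product’s backend.

# Minimal sketch of a smart home skill handler (Smart Home Skill API v3).
# The envelope shape follows Amazon's documented payload format; the device
# registry and turn_on() below are invented for illustration.
import uuid

DEVICES = {"living-room-lamp": {"on": False}}  # hypothetical device store

def turn_on(endpoint_id):
    DEVICES[endpoint_id]["on"] = True  # a real skill would call a device cloud

def lambda_handler(event, context):
    directive = event["directive"]
    header = directive["header"]
    if header["namespace"] == "Alexa.PowerController" and header["name"] == "TurnOn":
        endpoint_id = directive["endpoint"]["endpointId"]
        turn_on(endpoint_id)
        return {
            "event": {
                "header": {
                    "namespace": "Alexa",
                    "name": "Response",
                    "payloadVersion": "3",
                    "messageId": str(uuid.uuid4()),
                    "correlationToken": header.get("correlationToken"),
                },
                "endpoint": {"endpointId": endpoint_id},
                "payload": {},
            },
            "context": {
                "properties": [{
                    "namespace": "Alexa.PowerController",
                    "name": "powerState",
                    "value": "ON",
                    "timeOfSample": "2024-11-06T00:00:00Z",
                    "uncertaintyInMilliseconds": 500,
                }]
            },
        }
    # A real handler would also answer Alexa.Discovery and report errors.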

This was genuinely exciting. While tea from a replicator wasn’t here yet, asking Alexa to turn your lights off while sitting on the couch or to turn up your thermostat without getting out from under the covers felt like living in the future. We finally had something that resembled Star Trek’s “Computer” in our homes — Amazon even let us call it “Computer.”

In retrospect, Alexa brought with it the beginnings of the modern smart home. Simple voice control made the Internet of Things accessible; it brought technology into the home without being locked behind a complicated computer interface or inside a personal device. Plus, Amazon’s open approach to the connected home — in a time of proprietary smart home ecosystems — helped spur a wave of new consumer-level connected devices. Nest, August, Philips Hue, Ecobee, Lutron, and LIFX all partly owe their success to Alexa’s ease of operation.

But the ecosystem that sprang up around Alexa grew too quickly. Anyone could develop capabilities (which Amazon calls skills) for the assistant with little oversight. While some skills were simple and fun, many were buggy and unreliable, and specific wording was needed to invoke each one. It all added up to an inconsistent and often frustrating experience.

Asking Alexa to turn up your thermostat without getting out from under the covers felt like living in the future

Then Alexa hit a wall. There’s an assumption with technology that it will just keep improving. But instead of developing the core technology, Amazon relied on third-party developers to make Alexa do more, focusing its resources on putting the voice assistant in more devices and making it capable of controlling more things.

The more devices that worked with Alexa and the more capabilities Amazon added to the platform, the harder it became to manage, control, and access them all. Voice control is great for simple commands, but without easier ways to talk to Alexa, these new features were lost on most users.

Alexa Routines emerged as a way to corral all the different gadgets and functions you could use Alexa for, but they relied on you spending time programming in an app and constantly troubleshooting devices and their connectivity.

Hearing “‘Lamp’ isn’t responding. Please check its network connection and power supply” after issuing a command is beyond frustrating. And spending hours a month configuring and troubleshooting your smart home wasn’t part of the promise. That upkeep is exactly what a smart computer should be able to handle for you.

We’ve had to learn how to speak to Alexa rather than Alexa learning to speak to us

Amazon masked Alexa’s failure to get smarter with an ever-ballooning line of Echo hardware. New smart speakers arrived annually, Alexa moved into a clock and a microwave, and new form factors attempted to push customers to take Alexa outside of the house in their ears (Echo Buds), on their fingers (Echo Loop), on their faces (Echo Glasses), and in their cars (Echo Auto).

Many of these devices were forgettable, did little to advance Alexa’s capabilities, and mostly served to lose Amazon money. The Wall Street Journal reported earlier this year that Amazon has lost tens of billions of dollars on its broader devices unit.

Even with this “throw everything at the wall and see what sticks” approach, Amazon never cracked that second must-have form factor. In 2017, it invented the smart display — an Echo with a touchscreen that added benefits like video calling, watching security cameras, and showing information rather than just telling you. But sluggish processors, finicky touchscreens, and too many ads meant the smart display never really furthered Alexa’s core benefit.

Today, people buy Echo devices primarily because they’re cheaper than the competition, and they can use them to do basically what Alexa could do in 2014: set timers, check the weather, and listen to music. There’s no expectation for something better from a device that costs as little as $18.

Photo by David Pierce / The Verge
Dave Limp, Amazon’s former head of devices and services, at the company’s last big hardware event in September 2023. He demoed a “new” Alexa that is more conversational and less transactional. It has yet to arrive.

After all these years, just talking to Alexa remains the biggest hurdle. We’ve had to learn how to speak to Alexa rather than Alexa learning to speak to us. Case in point: my connected kitchen faucet still requires me to say, “Alexa, ask Moen to dispense 2 cups of hot water.” As my husband points out, if Alexa really were “smart,” wouldn’t it just know I’m standing in front of the kitchen sink and do what I ask, without the need for hard-to-remember phrases?
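
Part of that rigidity comes from how custom skills are defined: a skill is keyed to an invocation name, and each intent matches only its listed sample utterances. Here is a purely hypothetical sketch of what such an interaction model looks like, expressed as a Python dict; the intent, slot, and utterance names are invented for illustration and are not Moen’s actual skill.

# Hypothetical interaction model behind a phrase like
# "Alexa, ask Moen to dispense 2 cups of hot water."
# All intent, slot, and sample names here are invented for illustration.
INTERACTION_MODEL = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "moen",  # the word users must remember to say
            "intents": [
                {
                    "name": "DispenseWaterIntent",
                    "slots": [
                        {"name": "amount", "type": "AMAZON.NUMBER"},
                        {"name": "unit", "type": "VolumeUnit"},               # custom slot type
                        {"name": "temperature", "type": "WaterTemperature"},  # custom slot type
                    ],
                    "samples": [
                        "dispense {amount} {unit} of {temperature} water",
                        "pour {amount} {unit} of {temperature} water",
                    ],
                }
            ],
            # A real model would also define the custom slot types under "types".
        }
    }
}

Stray from a sample utterance and the request falls through; nothing in the model can infer that someone standing at the sink probably wants water.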

The good news, at least on that front, is that technology is catching up. Large language models and generative AI could bring us an Alexa we can talk to more naturally. Last year, Amazon announced that it’s working on a “new” LLM-powered Alexa that is more proactive and conversational and less pedestrian and transactional.

This alone would be a big leap forward. But while generative AI could make voice assistants smarter, it’s not a silver bullet. LLMs solve the basic “make sense of language” problem, but they don’t — yet — have the ability to act on that language, not to mention the concerns about a powerful AI hallucinating in your home.

What Alexa really needs to become “Computer” is context

What Alexa really needs to become a “Computer” is context. To be effective, an omniscient voice assistant needs to know everything about you, your home, and the people and devices in it. This is a hard task. And while Echo speakers with ultrasound tech and smart home sensors can provide some context, there is one crucial area where Amazon is way behind the competition: you.

Unlike Google and Apple — which have access to data about you through your smartphone, calendar, email, or internet searches — Amazon has largely been locked out of your personal life beyond what you buy on its store or select data you give it access to. And its privacy missteps have kept people from trusting it.

But Google and Apple haven’t cracked the smart home yet, and while they are making serious moves in the space, Alexa still has a sizable head start. According to Amazon, the “New Alexa” can complete multistep routines you can create just by listing tasks. Add in context about who lives in your home, where they are at any point, and what they should be doing, and it’s feasible that the assistant could handle a task like this with just one command:

Alexa, tell my son not to forget his science project; set the alarm when he leaves. Disarm the alarm and unlock the back door for the plumber at 4PM, then lock it again at 5PM. Preheat the oven to 375 degrees at 6PM, but if I’m running late, adjust the time.

This type of capability would bring a whole new level of utility to Alexa, maybe enough to justify charging for it, as the company has said it plans to.
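
To make that concrete, here is a purely hypothetical sketch of how an LLM-backed planner might decompose that one command into structured, schedulable tasks. The schema and trigger names are invented for illustration and reflect nothing about Amazon’s actual implementation.

# Hypothetical decomposition of the multistep command above into
# structured tasks. Everything here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    action: str   # what to do
    target: str   # which device or person
    trigger: str  # the time or event that fires it

PLAN = [
    Task("remind", "son: science project", "when he heads for the door"),
    Task("arm", "alarm", "when son leaves"),
    Task("disarm", "alarm", "16:00"),
    Task("unlock", "back door", "16:00"),
    Task("lock", "back door", "17:00"),
    Task("preheat_375F", "oven", "18:00, shifted later if owner is delayed"),
]

def run(plan):
    # A real assistant would dispatch each task to the right device API
    # and re-plan as context changes (location, schedule, who is home).
    for task in plan:
        print(f"{task.trigger}: {task.action} -> {task.target}")

The hard part isn’t the list itself; it’s the context needed to fill in triggers like “when he leaves,” which is exactly where Alexa is short on data today.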

It’s time for Alexa to boldly go where no voice assistant has gone before

However, despite last year’s splashy demo of this LLM-powered assistant, we’ve heard nothing more. Amazon even skipped its big hardware event this year, where it traditionally announces dozens of new Alexa and Alexa-compatible devices and services. Reports suggest that’s because Amazon is still far from delivering its promised “New Alexa.”

But it needs to pull off its promised reinvention of Alexa, or Apple and Google will overtake it.

In 2014, Amazon set the stage for voice control in the home and, over the last decade, laid the groundwork for a smarter home. Today, Alexa is the most popular voice assistant inside a smart speaker — with over two-thirds of the US market. Outside of the home, Google’s Assistant and Apple’s Siri dominate. As those companies invest more in the smart home and eventually bring Apple Intelligence and Gemini smarts to their home products, Alexa’s days of dominance may be numbered.

The path to a generative AI-powered, context-aware smart home is fraught with pitfalls, but with all its history here, Amazon feels best poised to pull it off — if it can get out of its own way. The home is the final frontier, and it’s time for Alexa to boldly go where no voice assistant has gone before and become truly intelligent.

Nothing is making a glow-in-the-dark phone

Only 1,000 units of the Phone (2a) and Phone (2a) Plus Community Edition will be produced. | Image: Nothing

Nothing has announced a new version of its Phone 2A Plus featuring a customized glow-in-the-dark design and packaging created in part by some of the company’s “most talented followers.” The Phone 2A Plus Community Edition is the result of a contest held by the company encouraging its community to “build a smartphone of their own imagination.”

The Phone 2A Plus Community Edition is Nothing’s first “major pilot to co-create hardware,” the company says, and resulted in over 900 entries from its community customizing everything from its look to how it will be marketed. The phone will be available to purchase starting on November 12th through Nothing’s website for $399 but is being limited to just 1,000 units.

Image: Nothing
The Phone (2a) Plus Community Edition’s glowing finish doesn’t draw any power.

The concept for the Phone 2A Plus Community Edition’s updated design was created by Astrid Vanhuyse and Kenta Akasaki and realized through a collaboration with Nothing’s Adam Bates and Lucy Birley. The phone’s functionality, including three light strips around its rear cameras, hasn’t changed. But the back of the phone is now tinted with a green phosphorescent material that will “emit a soft glow in dark environments” for hours, Nothing says, requiring just daylight to charge.

The glow-in-the-dark accents are carried forward to the Phone 2A Plus Community Edition’s new packaging, which was reinterpreted by Ian Henry Simmonds with reflective elements and a macro crop of the phone itself.

Image: Nothing
The Phone (2a) Plus Community Edition will also include a collection of new matching wallpapers.

Inspired by the original phone’s hardware, Andrés Mateos and Nothing’s software designers used a mix of design tools and AI to create a new set of six matching wallpapers called the “Connected Collection” that will be bundled with the Phone 2A Plus Community Edition. Lastly, Sonya Palma created a new “Find your light. Capture your light” marketing campaign that will be used to promote the Community Edition.

Although this is Nothing’s first attempt to enlist its community to help design hardware, the project is reminiscent of a collaboration between CMF (Nothing’s affordability-focused subbrand) and Bambu Lab encouraging CMF Phone 1 users to design 3D-printed accessories and contraptions that could be added to the back of that phone.

Sennheiser’s new wireless clip-on mics can convert to a tabletop microphone

The Profile Wireless microphones can be clipped or magnetically attached to clothing. | Image: Sennheiser

Sennheiser has announced a new portable wireless microphone kit designed to be an affordable and flexible all-in-one solution for content creators and videographers. The Profile Wireless system features a wireless receiver that can be connected to various devices, a pair of compact clip-on transmitters with built-in microphones that can also be used as handheld or tabletop mics, and a mobile charger.

The Sennheiser Profile Wireless kit isn’t expected to start shipping until late 2024 or early 2025, but it’s available for preorder starting today for $299. That’s cheaper than both the popular $349 DJI Mic 2 kit, which includes similar hardware, and the Shure MoveMic system, which is $499 when bundled with a wireless receiver. Rode’s Wireless Go II kit is also $299, but it doesn’t include an on-the-go charging solution.

Image: Sennheiser
The Profile Wireless’ charging bar features a 2,000mAh battery.

The Profile Wireless microphones are similar in size to the DJI Mic 2’s and can be attached to clothing using either a clip on the back or a magnet, which allows for more freedom with placement. If you want to use a higher-quality microphone or need a more discreet lav mic, the transmitter includes a lockable 3.5mm connector for attaching external mics.

The microphones come pre-paired to a two-channel receiver and communicate over a 2.4GHz wireless signal that has a range of just over 800 feet with a clear line of sight. If anybody gets in between the receiver and mic, the range drops to around 490 feet. Sennheiser says the battery life for the mics and wireless receiver is around seven hours, but all three can be recharged while away from a power outlet using the included charging bar that is equipped with a 2,000mAh battery.

Each microphone has 16GB of built-in storage with an optional “Backup Recording Mode” that will automatically start recording locally if the connection to the wireless receiver becomes unreliable. There’s also a “Safety Channel Mode” that will record a second copy of the audio at a lower level to help prevent louder sounds from being clipped or distorted.
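
The safety channel is a standard recording trick: capture a second copy at lower gain so peaks that clip the main track survive on the backup. Here is a minimal sketch of the idea, assuming a -6dB offset for illustration (Sennheiser doesn’t publish the exact level it uses).

# Minimal sketch of a "safety channel": a duplicate track recorded at
# lower gain so clipped peaks on the main track can be recovered.
# The -6 dB offset is an assumption for illustration.
import numpy as np

def record_with_safety(signal: np.ndarray, offset_db: float = -6.0):
    main = np.clip(signal, -1.0, 1.0)                         # main track, may clip
    safety = np.clip(signal * 10 ** (offset_db / 20), -1.0, 1.0)  # headroom copy
    return main, safety

loud = np.array([0.5, 1.4, -1.6, 0.8])     # peaks above full scale would clip
main, safety = record_with_safety(loud)
restored = safety * 10 ** (6.0 / 20)       # in post, boost the safety copy back up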

Image: Sennheiser
The included wireless receiver connects to laptops, mobile devices, or cameras with a cable or charging port adapter.

Since the Profile Wireless system doesn’t use Bluetooth, capturing audio to another device requires the receiver to be connected using an included USB-C or Lightning adapter for mobile devices, a USB-C cable for computers, or an audio cable for cameras. The receiver itself includes an OLED screen that displays information like audio levels and the charge level of the mics; thanks to a built-in gyro sensor, the screen will automatically flip 180 degrees as needed.

Image: Sennheiser
Attach the wireless microphones to the included charging bar, and they become easier to use as handheld mics or on a desk.

Although wireless mic systems like this are becoming more popular because of their ease of use and convenient size, using a tiny clip-on mic in hand to conduct an impromptu interview can sometimes be challenging. Sennheiser’s solution to that problem has you attaching one of the microphones to the end of the included charging bar and then adding a foam windscreen.

This results in a larger microphone that’s easier to hold or use on a desk when connected to a microphone support or a tiny tripod. The larger microphone’s shape is a bit odd and may result in an extra question or two when sticking it in someone’s face, but it does bring some extra flexibility to an affordable microphone kit that already offers a lot of functionality.

Canon’s budget-friendly 3D lens will be available in November

Canon’s budget-friendly VR lens will be available in November with an estimated retail price of $449.99. | Image: Canon

Canon has officially announced its new RF-S7.8mm F4 STM Dual lens, which features stereoscopic elements that have been squeezed into a body no larger than a traditional 2D camera lens. It was originally teased during the Apple WWDC 2024 keynote last June and is designed to work with a Canon EOS R7 as a more affordable tool for creators making 3D VR content for headsets like the Meta Quest 3 or spatial videos for the Apple Vision Pro.

The company hasn’t set a specific date for when the new 3D lens will be available, but it says it will be sometime in November 2024, with an “estimated retail price” of $449.99. That’s considerably cheaper than Canon’s existing dual-fisheye lenses designed to capture 3D video content, including the $1,999 RF5.2mm F2.8 L Dual and the $1,099 RF-S3.9mm F3.5 STM Dual.

Pairing Canon’s new 3D lens with the company’s 32.5MP EOS R7 digital camera — which itself starts at $1,299 — pushes the total price tag of the kit to over $1,700. However, that’s still cheaper than Canon’s higher-end 3D solutions, which start at $2,498 (and can go as high as $6,298) when paired with their requisite camera gear.

Image: Canon
The lens is designed to be easy to use, with minimal controls.

Canon’s new 3D lens has an aperture range of f/4.0 to f/16, supports autofocus, and features a button and a control wheel for making separate manual focus adjustments to the left and right sides. What makes it so much cheaper than Canon’s existing 3D lenses is its limited field of view. Canon’s pricier lenses are capable of capturing 180-degree video and images — close to what the human eye is capable of seeing — while the new RF-S7.8mm F4 STM Dual lens only takes in about a third of that at 63 degrees.

Image: Canon
The lenses on Canon’s new 3D lens are much smaller than the fisheye lenses on its pricier 3D lenses.

Using a standard Canon RF mount, the new lens has stereoscopic elements aligned in a straight optical path, resulting in its front lenses being positioned just 11.8mm apart compared to the 60mm gap between the dual-fisheye lenses on Canon’s existing 3D lenses. As a result, Canon says the strongest 3D effect will be experienced when capturing subjects or objects that are just 6 to 20 inches from the lens. When using it to capture something that’s farther away, the 3D effect will be less pronounced.
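
That falloff is ordinary stereo geometry: the depth cue scales with the angle the two lens centers subtend at the subject, roughly the baseline divided by the distance. A quick back-of-the-envelope sketch (the formula is standard stereo geometry; only the distances are chosen to match Canon’s stated sweet spot):

# Rough stereo-strength estimate: the angle subtended by the 11.8mm
# baseline at the subject distance. Bigger angle = stronger 3D effect.
import math

BASELINE_MM = 11.8

def convergence_angle_deg(distance_mm: float) -> float:
    return math.degrees(2 * math.atan(BASELINE_MM / (2 * distance_mm)))

for inches in (6, 20, 120):  # Canon's sweet spot is 6 to 20 inches
    mm = inches * 25.4
    print(f"{inches:>3} in -> {convergence_angle_deg(mm):.2f} degrees")
# ~4.4 degrees at 6 inches, ~1.3 at 20 inches, but only ~0.2 at 10 feet,
# which is why the 3D effect fades on distant subjects.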

Images and videos captured using this lens need to be processed before they can be viewed on VR or AR headsets, either through the EOS VR plugin for Adobe Premiere Pro or Canon’s own EOS VR Utility software, available for Macs and PCs. Both tools require a paid subscription but can generate 180-degree 3D, VR, or spatial video content.
