verge-rss

Alexa, thank you for the music

Image: Mojo Wang for The Verge

When dealing with an aging parent, Alexa was a great help — in both practical and emotionally important ways.

About two years ago, I got a call from my mother. “You know,” she said, “that Alexa is really working out. I was feeling a little depressed, so I told Alexa to play some nice music, and that’s exactly what she did. In a few minutes, I was feeling so much better!”

Alexa had become, not exactly a companion, but a presence in my mother’s home — one that made both her and me feel a little better. This was at least part of what I hoped would happen when I first went shopping for an Echo device. Websites focused on senior care are full of advice on using Amazon’s smart speakers as a caregiving tool, and Amazon designed them to make technology more approachable and accessible — goals they often, though not always, achieve.

Here’s how it started. My mother had lived most of her life as a teacher in the NYC public school system, a smart, savvy woman with a master’s in education, a progressive political point of view, and a sometimes irritating ability to take charge of almost any situation. But she was now entering her late 90s and beginning to have serious problems with her health and her short-term memory. Despite her determination to stay independent as long as possible — by playing games on her computer, keeping up with the news, and writing copious journal entries about her day-to-day activities — these problems increasingly affected her ability to do simple tasks, to learn new skills, and to live on her own.

We were able to hire an aide to help her during the daylight hours — to make meals, clean up, and help with other chores she was no longer able to do herself. But my mom was also stubborn and refused to have anyone there at night or to wear any kind of emergency button in case she needed help. I lived about 40 minutes away and only spent weekends with her. We needed some way of making sure she was okay when she was alone in the apartment.

My mother grew up at a time when just having a home telephone was new and exciting

So I got her an Amazon Echo Show 8 smart display in the hope that it could be the beginning of a smart home system that would help keep her safe and active. It all depended on how well my mother, who grew up at a time when just having a home telephone was new and exciting, would accept the device. The Echo’s eight-inch screen was large enough for her to see easily but small enough not to overwhelm the room. She could talk to the voice assistant, while the camera would let me check in on her remotely. I set it up and introduced her to Alexa.

And — it worked. Sort of.

I thought we could start by using it as a way to communicate visually. That was pretty much a failure. My mother was used to calling people on a phone, and while she was impressed with the whole “see the person you’re talking to” idea, she wasn’t very enthusiastic about using it herself. “It’s not for me,” she said firmly.

Photo by Jennifer Pattison Tuohy / The Verge
Verger Jennifer Pattison Tuohy could drop in on her dad via an Echo Show. My mother wasn’t as cooperative.

Okay, I thought, there’s always the “drop-in” feature. I could use it to monitor what was happening in the apartment. However, the Echo Show had been placed in a small room off the kitchen that we called “The Den,” where my mother had her meals, wrote in her journal, and spent a lot of her time — and as a result, it could only “see” into that room and the kitchen. The one time I suggested that I put cameras around the apartment, I got one of her looks — the one that made me feel as if I were five years old again. A camera in the bedroom? No way.

But luckily, there were some things the Echo did help with. About that time, my mother’s ancient bedside clock radio finally gave up the ghost. With some trepidation, I replaced it with an Echo Dot with Clock — and was delighted when my mother informed me that she loved it! She could not only see what time it was but also ask Alexa what the weather was, right from her bed. And what made me happy was that I was able to teach her to yell, “Alexa, call Barbara” if she needed me in an emergency. Between the Dot and the Show, Alexa could now respond no matter where my mother was in the apartment — including the bathroom with the door closed. (I checked.) She only used the feature a couple of times, and never for an actual emergency, but it was there for “just in case.”

In the end, though, the most important gift that the two Echos gave to my mother was music.

Decades ago, my parents bought what was then the latest in audio technology: a modular stereo system that consisted of a turntable, a receiver, an AM / FM radio, and a cassette tape player. Now it sat unused, having become too complicated for my mother to deal with. But with the Echo, she could play music whenever she liked. She didn’t even have to remember the names of the songs she liked or the musicians that she had once doted on. All she had to do was say, “Alexa, play some quiet music,” or “Alexa, play some happy music.” Alexa would play some old-time blues or folk or big-band music. And I’d get a call about how she had listened to her music and how good it made her feel.

Photo by Jennifer Pattison Tuohy / The Verge
An Echo Dot with Clock substituted nicely for the old clock radio.

Did the two Echos do everything I had hoped they would? Well, yes and no. They certainly gave my mother a simple and friendly way to get information and reminders. More importantly, they gave her a way to contact me in an emergency. But I never found the time to install any of the other smart home devices that were available. It was, at least back then, just too complex a task to deal with.

Amazon has, in fact, experimented with extending the usefulness of its smart devices for seniors. I never got around to trying Amazon’s $20-a-month Alexa Together service, which connected to its own 24/7 emergency line — and it was apparently not very successful, since it was discontinued in June of this year. I might have opted for the less expensive Emergency Assist feature, introduced last September, which lets users contact emergency services. But by that time, my mother was getting round-the-clock care from family and aides and no longer needed it.

Still, the Echo was good to have. Near the end of her life, when my mother was bedridden and too weak to speak, I could sit next to her and say, “Alexa, play some Woody Guthrie” or “Alexa, play some Bessie Smith” or “Alexa, play some Count Basie.” The music would start, and my mother would smile — and would, for a time, feel better. And although Amazon’s smart speaker was not the perfect answer to all our needs, for those few moments, I will always be grateful to Alexa.

Avride rolls out its next-gen sidewalk delivery robots

Avride, the robotics company that spun out of Russian search giant Yandex, has a new sidewalk delivery robot to show off.

The company currently operates a fleet of six-wheeled delivery robots in Austin, Texas, where they deliver Uber Eats orders to customers, as well as in South Korea. Now Avride’s next-generation model is shedding a couple of wheels — and showing big gains in efficiency.

The new robot has only four wheels, a design Avride says is more energy efficient than its six-wheeled model. The six-wheeled versions were simple to build and could turn confidently on a variety of surfaces. But their wheels also created a lot of friction during turns, which ate up a lot of energy from the robot’s internal battery.

The new four-wheeled design is much more energy efficient, which means the robots can stay in operation longer before needing to be recharged. And Avride redesigned the chassis to support improved maneuverability and precision.

The robot’s wheels are mounted on movable arms attached to a pivoting axle, which allows the wheels to rotate both inward and outward, reducing friction during turns. And instead of using traditional front and rear axles, the wheels are mechanically connected in pairs on each side. This allows for “simultaneous adjustment of the turning angles of both wheels on each side, enabling precise positioning for executing maneuvers,” Avride says.
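
Avride hasn’t published its control math, but the no-scrub behavior it describes is standard four-wheel-steering kinematics: every wheel is angled so that its axis passes through a single turning center, which is what keeps the tires from dragging during a turn. Here is a minimal Python sketch of that geometry (the chassis dimensions are invented for illustration, not Avride’s specs):

```python
import math

# Hypothetical chassis dimensions -- Avride hasn't published these.
WHEELBASE = 0.60  # meters, front-to-rear wheel spacing (assumed)
TRACK = 0.45      # meters, left-to-right wheel spacing (assumed)

WHEELS = {
    "front_left":  ( WHEELBASE / 2,  TRACK / 2),
    "front_right": ( WHEELBASE / 2, -TRACK / 2),
    "rear_left":   (-WHEELBASE / 2,  TRACK / 2),
    "rear_right":  (-WHEELBASE / 2, -TRACK / 2),
}

def steering_angles(turn_radius: float) -> dict[str, float]:
    """Steering angle (degrees) for each wheel such that all four wheel
    axes meet at one turning center -- the condition that lets a chassis
    turn without scrubbing (i.e., wasting battery on friction).

    turn_radius: signed distance (meters) from the robot's center to the
    turning center, measured to the left; 0.0 means spin in place.
    """
    angles = {}
    for name, (x, y) in WHEELS.items():
        # Each wheel points perpendicular to its radius from the turning
        # center at (0, turn_radius); front and rear angles mirror each
        # other, much like the side-paired linkage described above.
        a = math.degrees(math.atan2(x, turn_radius - y))
        # Angles past +/-90 degrees fold back, with that wheel rolling
        # in reverse -- this is how a near-instant 180 works.
        if a > 90:
            a -= 180
        elif a < -90:
            a += 180
        angles[name] = a
    return angles

print(steering_angles(2.0))  # gentle left turn
print(steering_angles(0.0))  # rotate in place
```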

The new-generation models can turn 180 degrees almost instantly, which the company says will improve the robot’s ability to navigate narrow sidewalks and reverse out of the way of someone in a wheelchair or with a stroller.

This video shows how Avride’s new robot can navigate tight turns, as well as inclines.

The company also made improvements to the robot’s control system for improved torque, and updated the hardware with Nvidia’s Jetson Orin platform. A modular cargo area will now allow Avride’s operators to swap in a variety of compartments based on the size of the package. And a new front-facing LED panel can display friendly-seeming digital eyes — to reduce instances of the robot being attacked or vandalized.

“The various eye expressions not only ‘bring the robot to life’ but also create a sense of interaction for clients when the robot looks around or winks after delivering an order,” the company says.

Avride’s new robots are being manufactured in Taiwan, and are expected to join its Austin-based fleet in the coming days. Avride spokesperson Yulia Shveyko said the company expects to have “at least a hundred” deployed by January 2025.

The company recently struck a deal with Uber to expand its delivery operations to Jersey City and Dallas, as well as to launch a robotaxi service.

Alexa, where’s my Star Trek Computer?

Image: Mojo Wang for The Verge

When Alexa launched 10 years ago, Amazon envisioned a new computer platform that could do anything for you. A decade later, the company is still trying to build it.

Amazon’s Alexa was announced on November 6th, 2014. A passion project for its founder, Jeff Bezos, Amazon’s digital voice assistant was inspired by and aspired to be Star Trek’s “Computer” — an omniscient, omnipresent, and proactive artificial intelligence controlled by your voice. “It has been a dream since the early days of science fiction to have a computer that you can talk to in a natural way and actually ask it to have a conversation with you and ask it to do things for you,” Bezos said shortly after Alexa’s launch. “And that is coming true.”

At the time, that future felt within reach. In the months following Alexa’s launch, it wowed early buyers with its capabilities. Playing music, getting the weather, and setting a timer were suddenly hands-free experiences. Packaged inside a Pringles can-shaped speaker called an Echo, Alexa moved into 5 million homes in just two years, including my own.

Alexa is still mainly doing what it’s always done: playing music, reporting the weather, and setting timers

Fast-forward to today, and there are over 40 million Echo smart speakers in US households, with Alexa processing billions of commands a week globally. But despite this proliferation of products and popularity, the “superhuman assistant who is there when you need it, disappears when you don’t, and is always working in the background on your behalf” that Amazon promised just isn’t here.

Alexa is still mainly doing what it’s always done: playing music, reporting the weather, and setting timers. Its capabilities have expanded — Alexa can now do useful things like control your lights, call your mom, and remind you to take out the trash. But despite a significant investment of time, money, and resources over the last decade, the voice assistant hasn’t become noticeably more intelligent. As one former Amazon employee said, “We worried we’ve hired 10,000 people and we’ve built a smart timer.”

It’s disappointing. Alexa holds so much promise. While its capabilities are undoubtedly impressive — not to mention indispensable for many people (particularly in areas like accessibility and elder care) — it’s still basically a remote control in our homes. I’m holding out hope for the dream of a highly capable ambient computer — an artificial intelligence that will help manage our lives and homes as seamlessly as Captain Picard ran the Starship Enterprise. (Only preferably with fewer red alerts.)

Today, I may have an Alexa smart speaker in every room of my house, but that hasn’t made Alexa any more useful. It has gained thousands of abilities over the last few years, but I still won’t rely on it to do anything more complicated than execute a command on a schedule, add milk to my shopping list, and maybe tell me if grapes are poisonous to chickens. (They’re not, but Alexa says I should check with my vet to be sure.) If anything, on the eve of the voice assistant’s 10th birthday, Alexa’s original dream feels further away.

Photo by Jennifer Pattison Tuohy
My first Echo arrived under the Christmas tree in 2015. A decade on, and it’s still plugging away.

It’s easy to forget how groundbreaking Alexa was when it first appeared. Instead of being trapped in a phone like Apple’s Siri or a computer like Microsoft’s Cortana, Alexa came inside the Echo, the world’s first voice-activated speaker. Its far-field speech recognition, powered by a seven-microphone array, was seriously impressive — using it felt almost magical. You could shout at an Echo from anywhere in a room, and that glowing blue ring would (almost) always turn on, signaling Alexa was ready to tell you a joke or set that egg timer.

It was Amazon’s pivot into smart home control that provided the first hints of the promised Star Trek-like future. Silly fart jokes and encyclopedic knowledge aside, the release of an Alexa smart home API in 2016, followed by the Echo Plus packing a Zigbee radio in 2017, allowed the assistant to connect to and control devices in our homes. Saying “Alexa, Tea. Earl Grey. Hot,” and having a toasty cuppa in your hands a few moments later felt closer than ever.

This was genuinely exciting. While tea from a replicator wasn’t here yet, asking Alexa to turn your lights off while sitting on the couch or to turn up your thermostat without getting out from under the covers felt like living in the future. We finally had something that resembled Star Trek’s “Computer” in our homes — Amazon even let us call it “Computer.”

In retrospect, Alexa brought with it the beginnings of the modern smart home. Simple voice control made the Internet of Things accessible; it brought technology into the home without locking it behind a complicated computer interface or inside a personal device. Plus, Amazon’s open approach to the connected home — in a time of proprietary smart home ecosystems — helped spur a wave of new consumer-level connected devices. Nest, August, Philips Hue, Ecobee, Lutron, and LIFX all partly owe their success to Alexa’s ease of operation.

But the ecosystem that sprang up around Alexa grew too quickly. Anyone could develop capabilities (which Amazon calls skills) for the assistant with little oversight. While some skills were simple and fun, many were buggy and unreliable, and specific wording was needed to invoke each one. It all added up to an inconsistent and often frustrating experience.
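
To see why the wording matters so much: a skill only runs when Alexa matches what you said against the invocation name and sample utterances its developer registered, and the code itself just receives a resolved intent. Here’s a minimal sketch using Amazon’s ASK SDK for Python (the skill, intent, and slot names are made up for illustration):

```python
# A toy Alexa skill handler (ASK SDK for Python). The interaction model
# -- invocation name, intents, and their sample utterances -- lives in a
# separate JSON file, and only sentences matching those utterances ever
# reach this code. Names below are invented for illustration.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response


class DispenseWaterHandler(AbstractRequestHandler):
    """Fires only for utterances registered under DispenseWaterIntent,
    e.g. "dispense {amount} cups of {temperature} water". Phrase the
    request another way and Alexa won't route it here at all."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("DispenseWaterIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        slots = handler_input.request_envelope.request.intent.slots
        amount = slots["amount"].value            # e.g. "2"
        temperature = slots["temperature"].value  # e.g. "hot"
        speech = f"Dispensing {amount} cups of {temperature} water."
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(DispenseWaterHandler())
lambda_handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda
```

Every third-party skill bolts its own small grammar onto Alexa this way, which is why off-script phrasing so often fails.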

Asking Alexa to turn up your thermostat without getting out from under the covers felt like living in the future

Then Alexa hit a wall. There’s an assumption with technology that it will just keep improving. But instead of developing the core technology, Amazon relied on third-party developers to make Alexa do more, focusing its resources on putting the voice assistant in more devices and making it capable of controlling more things.

The more devices that worked with Alexa and the more capabilities Amazon added to the platform, the harder it became to manage, control, and access them all. Voice control is great for simple commands, but without easier ways to talk to Alexa, these new features were lost on most users.

Alexa Routines emerged as a way to corral all the different gadgets and functions you could use Alexa for, but they relied on you spending time programming in an app and constantly troubleshooting devices and their connectivity.

Hearing “‘Lamp’ isn’t responding. Please check its network connection and power supply” after issuing a command is beyond frustrating. And spending hours a month configuring and troubleshooting your smart home wasn’t part of the promise. This is what a smart computer should be able to do for you.

We’ve had to learn how to speak to Alexa rather than Alexa learning to speak to us

Amazon masked Alexa’s failure to get smarter with an ever-ballooning line of Echo hardware. New smart speakers arrived annually, Alexa moved into a clock and a microwave, and new form factors attempted to push customers to take Alexa outside of the house in their ears (Echo Buds), on their fingers (Echo Loop), on their faces (Echo Frames), and in their cars (Echo Auto).

Many of these devices were forgettable, did little to advance Alexa’s capabilities, and mostly served to lose Amazon money. The Wall Street Journal reported earlier this year that Amazon has lost tens of billions of dollars on its broader devices unit.

Even with this “throw everything at the wall and see what sticks” approach, Amazon never cracked that second must-have form factor. In 2017, it invented the smart display — an Echo with a touchscreen that added benefits like video calling, watching security cameras, and showing information rather than just telling you. But sluggish processors, finicky touchscreens, and too many ads meant the smart display never really furthered Alexa’s core benefit.

Today, people buy Echo devices primarily because they’re cheaper than the competition, and they can use them to do basically what Alexa could do in 2014: set timers, check the weather, and listen to music. There’s no expectation for something better from a device that costs as little as $18.

Photo by David Pierce / The Verge
Dave Limp, Amazon’s former head of devices and services, at the company’s last big hardware event in September 2023. He demoed a “new” Alexa that is more conversational and less transactional. It has yet to arrive.

After all these years, just talking to Alexa remains the biggest hurdle. We’ve had to learn how to speak to Alexa rather than Alexa learning to speak to us. Case in point: my connected kitchen faucet still requires me to say, “Alexa, ask Moen to dispense 2 cups of hot water.” As my husband points out, if Alexa really were “smart,” wouldn’t it just know that I’m standing in front of the kitchen sink and do what I ask without the need for hard-to-remember phrases?

The good news, at least on that front, is that technology is catching up. Large language models and generative AI could bring us an Alexa we can talk to more naturally. Last year, Amazon announced that it’s working on a “new” LLM-powered Alexa that is more proactive and conversational and less pedestrian and transactional.

This alone would be a big leap forward. But while generative AI could make voice assistants smarter, it’s not a silver bullet. LLMs solve the basic “make sense of language” problem, but they don’t — yet — have the ability to act on that language, not to mention the concerns about a powerful AI hallucinating in your home.

What Alexa really needs to become “Computer” is context

What Alexa really needs to become a “Computer” is context. To be effective, an omniscient voice assistant needs to know everything about you, your home, and the people and devices in it. This is a hard task. And while Echo speakers with ultrasound tech and smart home sensors can provide some context, there is one crucial area where Amazon is way behind the competition: you.

Unlike Google and Apple — which have access to data about you through your smartphone, calendar, email, or internet searches — Amazon has largely been locked out of your personal life beyond what you buy on its store or select data you give it access to. And its privacy missteps have kept people from trusting it.

But Google and Apple haven’t cracked the smart home yet, and while they are making serious moves in the space, Alexa still has a sizable head start. According to Amazon, the “New Alexa” can complete multistep routines you can create just by listing tasks. Add in context about who lives in your home, where they are at any point, and what they should be doing, and it’s feasible that the assistant could handle a task like this with just one command:

Alexa, tell my son not to forget his science project; set the alarm when he leaves. Disarm the alarm and unlock the back door for the plumber at 4PM, then lock it again at 5PM. Preheat the oven to 375 degrees at 6PM, but if I’m running late, adjust the time.
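
Amazon hasn’t described how the “New Alexa” would represent a request like that internally. One plausible sketch (purely illustrative, not Amazon’s design) is to have the LLM translate the utterance into a structured task list that a home scheduler can execute, reorder, and amend:

```python
from dataclasses import dataclass

@dataclass
class Task:
    action: str              # device- or service-level operation
    params: dict             # arguments for that operation
    trigger: str             # a clock time or an event name
    condition: str | None = None  # a rule the scheduler re-evaluates

# Hand-written decomposition of the command above; in a real system the
# LLM, not a human, would emit this structure from the spoken request.
plan = [
    Task("send_reminder", {"to": "son", "text": "science project"}, "son_leaves"),
    Task("arm_alarm",     {},                                       "son_leaves"),
    Task("disarm_alarm",  {},                                       "16:00"),
    Task("unlock_door",   {"door": "back"},                         "16:00"),
    Task("lock_door",     {"door": "back"},                         "17:00"),
    Task("preheat_oven",  {"temp_f": 375},                          "18:00",
         condition="if owner's ETA is after 18:00, shift to the ETA"),
]
```

The hard part isn’t the data structure; it’s the context (knowing who “my son” is, when he leaves, and what your ETA is), which is exactly the contextual gap described above.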

This type of capability would bring a whole new level of utility to Alexa, maybe enough to justify charging for it, as the company has said it plans to.

It’s time for Alexa to boldly go where no voice assistant has gone before

However, despite last year’s splashy demo of this LLM-powered assistant, we’ve heard nothing more. Amazon even skipped its big hardware event this year, where it traditionally announces dozens of new Alexa and Alexa-compatible devices and services. This is likely because, based on reports, Amazon is far from delivering its promised “New Alexa.”

But it needs to pull off its promised reinvention of Alexa, or Apple and Google will overtake it.

In 2014, Amazon set the stage for voice control in the home and, over the last decade, laid the groundwork for a smarter home. Today, Alexa is the most popular voice assistant inside a smart speaker — with over two-thirds of the US market. Outside of the home, Google’s Assistant and Apple’s Siri dominate. As those companies invest more in the smart home and eventually bring Apple Intelligence and Gemini smarts to their home products, Alexa’s days of dominance may be numbered.

The path to a generative AI-powered, context-aware smart home is fraught with pitfalls, but with all its history here, Amazon feels best poised to pull it off — if it can get out of its own way. The home is the final frontier, and it’s time for Alexa to boldly go where no voice assistant has gone before and become truly intelligent.

Nothing is making a glow-in-the-dark phone

Only 1,000 units of the Phone (2a) and Phone (2a) Plus Community Edition will be produced. | Image: Nothing

Nothing has announced a new version of its Phone 2A Plus featuring a customized glow-in-the-dark design and packaging created in part by some of the company’s “most talented followers.” The Phone 2A Plus Community Edition is the result of a contest held by the company encouraging its community to “build a smartphone of their own imagination.”

The Phone 2A Plus Community Edition is Nothing’s first “major pilot to co-create hardware,” the company says, and resulted in over 900 entries from its community customizing everything from its look to how it will be marketed. The phone will be available to purchase starting on November 12th through Nothing’s website for $399 but is being limited to just 1,000 units.

Image: Nothing
The Phone (2a) Plus Community Edition’s glowing finish doesn’t draw any power.

The concept for the Phone 2A Plus Community Edition’s updated design was created by Astrid Vanhuyse and Kenta Akasaki and realized through a collaboration with Nothing’s Adam Bates and Lucy Birley. The phone’s functionality, including three light strips around its rear cameras, hasn’t changed. But the back of the phone is now tinted with a green phosphorescent material that will “emit a soft glow in dark environments” for hours, Nothing says, requiring just daylight to charge.

The glow-in-the-dark accents are carried forward to the Phone 2A Plus Community Edition’s new packaging, which was reinterpreted by Ian Henry Simmonds with reflective elements and a macro crop of the phone itself.

Image: Nothing
The Phone (2a) Plus Community Edition will also include a collection of new matching wallpapers.

Inspired by the original phone’s hardware, Andrés Mateos and Nothing’s software designers used a mix of design tools and AI to create a new set of six matching wallpapers called the “Connected Collection” that will be bundled with the Phone 2A Plus Community Edition. Lastly, Sonya Palma created a new “Find your light. Capture your light” marketing campaign that will be used to promote the Community Edition.

Although this is Nothing’s first attempt to enlist its community to help design hardware, the project is reminiscent of a collaboration between CMF (Nothing’s affordability-focused subbrand) and Bambu Lab that encouraged CMF Phone 1 users to design 3D-printed accessories and contraptions that could be added to the back of that phone.

Sennheiser’s new wireless clip-on mics can convert to a tabletop microphone

The Profile Wireless microphones can be clipped or magnetically attached to clothing. | Image: Sennheiser

Sennheiser has announced a new portable wireless microphone kit designed to be an affordable and flexible all-in-one solution for content creators and videographers. The Profile Wireless system features a wireless receiver that can be connected to various devices, a pair of compact clip-on transmitters with built-in microphones that can also be used as handheld or tabletop mics, and a mobile charger.

The Sennheiser Profile Wireless kit isn’t expected to start shipping until late 2024 or early 2025, but it’s available for preorder starting today for $299. That’s cheaper than both the popular $349 DJI Mic 2 kit, which includes similar hardware, and the Shure MoveMic system, which is $499 when bundled with a wireless receiver. Rode’s Wireless Go II kit is also $299, but it doesn’t include an on-the-go charging solution.

Image: Sennheiser
The Profile Wireless’ charging bar features a 2,000mAh battery.

The Profile Wireless microphones are similar in size to the DJI Mic 2’s and can be attached to clothing using either a clip on the back or a magnet, which allows for more freedom with placement. If you want to use a higher-quality microphone or need a more discreet lav mic, the transmitter includes a lockable 3.5mm connector for attaching external mics.

The microphones come pre-paired to a two-channel receiver and communicate over a 2.4GHz wireless signal with a range of just over 800 feet with a clear line of sight. If anything gets between the receiver and a mic, the range drops to around 490 feet. Sennheiser says the battery life for the mics and wireless receiver is around seven hours, but all three can be recharged away from a power outlet using the included charging bar, which is equipped with a 2,000mAh battery.

Each microphone has 16GB of built-in storage with an optional “Backup Recording Mode” that will automatically start recording locally if the connection to the wireless receiver becomes unreliable. There’s also a “Safety Channel Mode” that will record a second copy of the audio at a lower level to help prevent louder sounds from being clipped or distorted.
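
That “Safety Channel” trick is a longstanding one in field recording: keep a duplicate track padded down by a fixed amount so that peaks loud enough to clip the main track survive intact on the quiet copy. Sennheiser doesn’t specify its pad level, so the figure below is an assumption; here is a small numpy sketch of the idea:

```python
import numpy as np

PAD_DB = -12.0             # assumed safety-channel pad; Sennheiser doesn't say
PAD = 10 ** (PAD_DB / 20)  # -12 dB -> linear gain of ~0.25

def record(signal: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Simulate capturing a main track plus a padded safety track.
    Both channels hard-clip at digital full scale (+/-1.0)."""
    main = np.clip(signal, -1.0, 1.0)
    safety = np.clip(signal * PAD, -1.0, 1.0)
    return main, safety

def recover(main: np.ndarray, safety: np.ndarray) -> np.ndarray:
    """In post, splice in the safety track (gain restored) wherever the
    main track hit the rails; keep the cleaner main track elsewhere."""
    clipped = np.abs(main) >= 1.0
    return np.where(clipped, safety / PAD, main)

# A source that peaks at twice full scale would distort the main track...
t = np.linspace(0.0, 1.0, 48_000)
source = 2.0 * np.sin(2 * np.pi * 440.0 * t)
main, safety = record(source)
# ...but the padded copy preserves the waveform: peak is back to ~2.0.
print(np.max(np.abs(recover(main, safety))))
```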

Image: Sennheiser
The included wireless receiver connects to laptops, mobile devices, or cameras with a cable or charging port adapter.

Since the Profile Wireless system doesn’t use Bluetooth, capturing audio on another device requires the receiver to be connected using an included USB-C or Lightning adapter for mobile devices, a USB-C cable for computers, or an audio cable for cameras. The receiver itself includes an OLED screen that displays information like audio levels and the charge level of the mics; thanks to a built-in gyro sensor, the screen automatically flips 180 degrees as needed.

Image: Sennheiser
Attach the wireless microphones to the included charging bar, and they become easier to use as handheld mics or on a desk.

Although wireless mic systems like this are becoming more popular because of their ease of use and convenient size, using a tiny clip-on mic in hand to conduct an impromptu interview can sometimes be challenging. Sennheiser’s solution to that problem has you attaching one of the microphones to the end of the included charging bar and then adding a foam windscreen.

This results in a larger microphone that’s easier to hold or use on a desk when connected to a microphone support or a tiny tripod. The larger microphone’s shape is a bit odd and may result in an extra question or two when sticking it in someone’s face, but it does bring some extra flexibility to an affordable microphone kit that already offers a lot of functionality.


Canon’s budget-friendly 3D lens will be available in November

Canon’s budget-friendly VR lens will be available in November with an estimated retail price of $449.99. | Image: Canon

Canon has officially announced its new RF-S7.8mm F4 STM Dual lens, which features stereoscopic elements that have been squeezed into a body no larger than a traditional 2D camera lens. It was originally teased during the Apple WWDC 2024 keynote last June and is designed to work with a Canon EOS R7 as a more affordable tool for creators making 3D VR content for headsets like the Meta Quest 3 or spatial videos for the Apple Vision Pro.

The company hasn’t set a specific date for when the new 3D lens will be available, but it says it will be sometime in November 2024, with an “estimated retail price” of $449.99. That’s considerably cheaper than Canon’s existing dual-fisheye lenses designed to capture 3D video content, including the $1,999 RF5.2mm F2.8 L Dual and the $1,099 RF-S3.9mm F3.5 STM Dual.

Pairing Canon’s new 3D lens with the company’s 32.5MP EOS R7 digital camera — which itself starts at $1,299 — pushes the total price tag of the kit to over $1,700. However, that’s still cheaper than Canon’s higher-end 3D solutions, which start at $2,498 (and can go as high as $6,298) when paired with their requisite camera gear.

Image: Canon
The lens is designed to be easy to use, with minimal controls.

Canon’s new 3D lens has an aperture range of f/4.0 to f/16, supports autofocus, and features a button and a control wheel for making separate manual focus adjustments to the left and right sides. What makes it so much cheaper than Canon’s existing 3D lenses is its limited field of view. Canon’s pricier lenses are capable of capturing 180-degree video and images — close to what the human eye is capable of seeing — while the new RF-S7.8mm F4 STM Dual lens only takes in about a third of that at 63 degrees.

Image: Canon
The lenses on Canon’s new 3D lens are much smaller than the fisheye lenses on its pricier 3D lenses.

Using a standard Canon RF mount, the new lens has stereoscopic elements aligned in a straight optical path, resulting in its front lenses being positioned just 11.8mm apart compared to the 60mm gap between the dual-fisheye lenses on Canon’s existing 3D lenses. As a result, Canon says the strongest 3D effect will be experienced when capturing subjects or objects that are just 6 to 20 inches from the lens. When using it to capture something that’s farther away, the 3D effect will be less pronounced.
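Canon hasn’t published the math behind that guidance, but it follows from basic stereo geometry: the angular disparity between the two views, which drives the perceived depth, scales roughly with the baseline divided by the subject distance. A rough Python comparison of the 11.8mm baseline against the 60mm dual-fisheye gap, at Canon’s 6-to-20-inch sweet spot and a farther 60 inches, makes the falloff obvious:

```python
import math

def parallax_deg(baseline_mm: float, distance_mm: float) -> float:
    """Angular disparity between the two lens views of a point subject;
    a larger angle means a stronger perceived 3D effect."""
    return math.degrees(2 * math.atan(baseline_mm / (2 * distance_mm)))

for inches in (6, 20, 60):                 # Canon's sweet spot, then farther
    mm = inches * 25.4
    print(f'{inches:>2} in: new lens {parallax_deg(11.8, mm):.2f} deg, '
          f'dual-fisheye {parallax_deg(60.0, mm):.2f} deg')
# At 6 in the narrow baseline still yields ~4.4 degrees of disparity;
# by 60 in it has fallen under half a degree, so the 3D effect flattens.
```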

Images and videos captured using this lens need to be processed before they can be viewed using VR or AR headsets, either through the EOS VR plugin that’s available for Adobe Premiere Pro, or Canon’s own EOS VR Utility software, available for Macs and PCs. Both tools require a paid subscription but can generate 180-degree 3D, VR, or spatial video content.


AMD confirms its next-gen RDNA 4 GPUs will launch in early 2025

An AMD Radeon GPU. | Image: AMD

AMD’s Q3 2024 earnings call today wasn’t bullish on gaming revenue overall, but it did confirm a hot new rumor on GPUs — specifically, the launch of AMD’s next-gen RDNA 4 parts early next year. “We are on track to launch the first RDNA4 GPUs in early 2025,” said AMD CEO Lisa Su, and the company confirmed to PCWorld that it’s the first time it’s shared those plans publicly.

“In addition to a strong increase in gaming performance, RDNA 4 delivers significantly higher ray tracing performance and adds new AI capabilities,” Su said on the call.

AMD confirming those chips might help lend credibility to other leaks, too. Earlier today, a Chiphell leaker claimed that AMD would announce its RDNA 4 graphics at CES 2025 in January, alongside its leaked Strix Halo and Fire Range gaming notebook parts, its confirmed Ryzen Z2 handheld gaming chips, and more.

AMD expects its gaming revenue to continue to decline this quarter, due in no small part to the aging PlayStation 5 and Xbox Series consoles, and gaming isn’t exactly the company’s primary focus these days anyhow. On today’s call, Su pointed out that gaming accounts for only two percent of the company’s revenue, while data center is now well over half of the company’s business. She says that after spending 10 years turning AMD around, her next task is to “make AMD the end-to-end AI leader.”

The company had previously revealed it’s turning its back on flagship GPUs to chase AI first, so you shouldn’t expect new consumer RDNA 4 parts to compete with Nvidia’s best and priciest GPUs.


Reddit is profitable for the first time ever, with nearly 100 million daily users

Image: The Verge

Reddit just turned a profit for the first time. As part of its third-quarter earnings results released on Tuesday, the company reported a profit of $29.9 million, along with $348.4 million in revenue — a 68 percent increase year over year.

The company hadn’t been profitable at any point in its nearly 20-year history. Reddit lost $575 million during its first quarter on the public market, narrowed that loss to $10 million last quarter, and is now finally in the green.

Reddit also grew to 97.2 million daily users over the past few months, marking a 47 percent increase from the same time last year. That number exceeded 100 million users on some days during the quarter, Reddit says.

Reddit’s advertising revenue grew to $315.1 million, while “other” revenue reached $33.2 million on account of “data licensing agreements signed earlier this year.” Both Google and OpenAI have cut deals with Reddit to train their AI models on its posts.
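Those segment numbers square with the headline figure, and the stated growth rates let you back into the year-ago numbers, which Reddit didn’t spell out. A quick sanity check in Python; the implied 2023 figures are derived, not reported:

```python
ads, other, total = 315.1, 33.2, 348.4          # $M, Q3 2024 as reported
dau, rev_growth, dau_growth = 97.2, 0.68, 0.47  # M daily users, YoY growth

print(f'{ads + other:.1f}')               # 348.3 -> matches the total within rounding
print(f'{total / (1 + rev_growth):.1f}')  # ~207.4 -> implied Q3 2023 revenue ($M)
print(f'{dau / (1 + dau_growth):.1f}')    # ~66.1 -> implied Q3 2023 daily users (M)
```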

In a letter to shareholders, Reddit CEO Steve Huffman attributed the recent increase in users to the platform’s AI-powered translation feature. Reddit started letting users translate posts into French last year before expanding it to Spanish, Portuguese, Italian, and German. Now Huffman says Reddit plans to expand translation to over 30 countries through 2025.

“Reddit’s influence continues to grow across the broader internet,” Huffman wrote. “In 2024 so far, ‘Reddit’ was the sixth most Googled word in the U.S., underscoring that when people are looking for answers, advice, or community, they’re turning to Reddit.” The platform is also working to make its search feature “easier and more intuitive.”

Since going public earlier this year, Reddit has made a number of changes to generate more revenue, including inking advertising deals with professional sports leagues, upgrading its “ask me anything” posts, and cracking down on web crawlers attempting to scrape its content. Huffman has weighed the idea of letting users create paid subreddits and has even moved to prevent sitewide protests.


More than a quarter of new code at Google is generated by AI

Illustration: The Verge

Google is building a bunch of AI products, and it’s using AI quite a bit as part of building those products, too. “More than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers,” CEO Sundar Pichai said on the company’s third quarter 2024 earnings call. It’s a big milestone that marks just how important AI is to the company.

AI is helping Google make money as well. Alphabet reported $88.3 billion in revenue for the quarter, with Google Services (which includes Search) revenue of $76.5 billion, up 13 percent year-over-year, and Google Cloud (which includes its AI infrastructure products for other companies) revenue of $11.4 billion, up 35 percent year-over-year.

Operating incomes were also strong. Google Services hit $30.9 billion, up from $23.9 billion last year, and Google Cloud hit $1.95 billion, significantly up from last year’s $270 million.
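Divide operating income by revenue and the margin gap between the two segments is stark: Google Services runs at roughly 40 percent, while Cloud, even after its big jump, sits around 17 percent. A quick calculation from the reported figures:

```python
segments = {                           # Q3 2024, $B, as reported
    'Google Services': (76.5, 30.9),   # (revenue, operating income)
    'Google Cloud':    (11.4, 1.95),
}
for name, (revenue, op_income) in segments.items():
    print(f'{name}: {op_income / revenue:.1%} operating margin')
# Google Services: 40.4% operating margin
# Google Cloud: 17.1% operating margin
```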

The results indicate that, while many people feel Google isn’t as reliable as it once was, the company continues to operate a very strong business. AI is a huge focus across Google, with the release of features like custom AI chatbots powered by Gemini (called “Gems”), automatic AI note-taking in Google Meet, and a bunch of generative AI tools to help YouTube creators. Google’s well-reviewed Pixel 9 lineup of smartphones was also packed with AI tools.

“In Search, our new AI features are expanding what people can search for and how they search for it,” CEO Sundar Pichai says in a statement. “In Cloud, our AI solutions are helping drive deeper product adoption with existing customers, attract new customers and win larger deals. And YouTube’s total ads and subscription revenues surpassed $50 billion over the past four quarters for the first time.”

Google is facing a potentially tough road ahead, however, following the August ruling that the company is a monopolist in the search and advertising markets. That case, brought by the US Department of Justice, is now in its remedies phase, and while the dust is far from settled, a Google breakup is on the table.


Google accused of violating labor law for asking workers to ‘refrain’ from talking about antitrust case

Illustration: The Verge

The Alphabet Workers Union filed a charge against Google with the National Labor Relations Board after Google management asked workers to “refrain” from talking about its ongoing Search antitrust case.

The union charges that Google issued an “overly broad directive” on discussing the case to employees, according to a copy of the charge filed in August and viewed by The Verge. On August 5th, just after US District Court Judge Amit Mehta issued his decision finding Google to have an illegal monopoly, president of global affairs Kent Walker sent an email (also reviewed by The Verge) directing employees to “please refrain from commenting on this case, both internally and externally.” Walker sent a similar message at the start of the trial last fall, Business Insider reported at the time.

That could be a problem for Google if the NLRB concludes that Walker’s directive might chill protected concerted activity: actions by two or more employees together that are protected by labor law, like discussing working conditions. “I could certainly imagine that there would be ways that the case would ultimately bear on working conditions,” says Charlotte Garden, a professor at the University of Minnesota who specializes in labor law. The DOJ has since suggested that remedying Google’s anticompetitive harms could mean something as drastic as a breakup of its Android and Chrome businesses — something that could plausibly result in significant changes for workers in those units.

“We respect Googlers’ rights to speak about their terms and conditions of employment”

Still, Garden says some discussions employees might have about the case might not be protected, like pondering how management should respond to the government. The NLRB will also weigh Google’s legitimate business interests — perhaps including controlling the course of its own litigation or authorizing only specific spokespeople to speak on the case for the company — and how likely management’s statements are to chill protected conversations between employees.

“We respect Googlers’ rights to speak about their terms and conditions of employment,” Google spokesperson Peter Schottenfels said in a statement to The Verge. “As is standard practice, we’re simply asking that employees not speak about ongoing litigation on behalf of Google without prior approval.”

Even though Walker’s email did not include an outright prohibition on speaking about the antitrust case, the NLRB could still find it to be a violation if it concludes it would likely chill employee speech, says Garden. The board will evaluate how employees did and were likely to interpret the email — either as general guidance that wouldn’t be enforced, or as a line not to cross without risking trouble or forgoing future opportunities, she says. To do that, Garden explains, the NLRB would look at employees’ own reactions and interpretations of the directives and how the company has responded when workers went against such guidance in the past.

“I think that the company does have a history of silencing or retaliating against workers who speak about their working conditions or raise complaints”

Stephen McMurtry, a senior software engineer at Google and communications chair of the Alphabet Workers Union, sees his employer’s past actions as a warning. “I think that the company does have a history of silencing or retaliating against workers who speak about their working conditions or raise complaints with the company with things that they believe are wrong or unethical. So even if the language is a kind of corporate ‘please refrain,’ I think we can all see what’s happened to some of our coworkers in the past who have raised concerns about different issues.”

McMurtry pointed to the massive 2018 walkout in the wake of the #MeToo movement. Two of the organizers claimed retaliation for their role in the demonstration (which Google denied) and ultimately left the company. Another former Google engineer told The Verge in 2019 that she was fired for creating a browser popup letting employees know about their labor protections. A Google spokesperson at the time did not confirm the employee’s termination, saying the company had fired someone who “abused privileged access to modify an internal security tool” but that the popup’s contents weren’t the issue. “It doesn’t seem so far-fetched that it could happen in this situation,” McMurtry says.

McMurtry doesn’t really know what his coworkers think about the outcome of the case and what remedies could impact their jobs because he says it’s not really discussed. He doesn’t even have much of an opinion on the remedies the DOJ has suggested so far but says being able to talk through it with his coworkers would make it easier to reach an informed opinion about likely effects on workers.

The case could take a while to resolve, if the NLRB even decides to take it up. Garden says a regional office would first investigate the charge to determine whether to move forward with it — though many cases settle before that happens. NLRB spokesperson Kayla Blado told The Verge that its Oakland office is investigating the charge, which was filed on August 15th. The NLRB says it typically takes seven to 14 weeks to determine the merits of a charge, which could kick off a case before an administrative law judge if the government chooses to pursue it. Meanwhile, Google and the Justice Department are set to return to court in April to argue over which remedies the judge should impose to address Google’s anticompetitive conduct.

