
Gannett is shuttering site accused of publishing AI product reviews

Photo: Getty Images

Newspaper giant Gannett is shutting down Reviewed, its product reviews site, effective November 1st, according to sources familiar with the decision. The site offers recommendations for products ranging from shoes to home appliances and employs journalists to test and review items — but has also been at the center of questions around whether its work is actually produced by humans.

“After careful consideration and evaluation of our Reviewed business, we have decided to close the operation. We extend our sincere gratitude to our employees who have provided consumers with trusted product reviews,” Reviewed spokesperson Lark-Marie Antón told The Verge in an email.

But the site has more recently been the subject of scrutiny, at times from its own unionized employees. Last October, Reviewed staff publicly accused Gannett of publishing AI-generated product reviews on the site. The articles in question were written in a strange, stilted manner, and staff found that the authors the articles were attributed to didn’t seem to exist on LinkedIn or other platforms. Some questioned whether the authors were real at all. In response to questions, Gannett said the articles were produced by a third-party marketing company called AdVon Commerce and that the original reviews didn’t include proper disclosure. But Gannett denied that AI was involved.

As The Verge reported last fall, the marketing firm behind the Reviewed content is the same company that was responsible for a similar dust-up at Sports Illustrated, in which remarkably similar product reviews were published and attributed to freelancers. But in the case of Sports Illustrated, the evidence that AI was involved was obvious: authors’ headshots were for sale on AI image websites. Sports Illustrated maintained that though authors’ names were indeed not real, AdVon had assured the company that real humans wrote the content.

But an investigation by The Verge into AdVon showed that the company has spammed the web with marketing content, some of which former employees say was indeed AI-generated. Ben Faw, CEO and cofounder of AdVon, has for years used his connections in media to land contracts with news outlets, often setting up elaborate marketing schemes to enrich himself. AdVon’s marketing content appeared everywhere from small blogs to outlets like Us Weekly and the Los Angeles Times. In response to The Verge’s reporting, Faw said in an emailed statement that the company “generate[s] affiliate revenue which publishers use to fund newsroom operations and salaries.” He also said AdVon offers “human-only, AI-enhanced, and hybrid solutions” to customers hiring the firm.

Antón didn’t offer a reason for shutting down Reviewed. Product reviews are often seen as a lucrative venture for publishers, who can draw readers looking for purchasing advice on search engines and make money when readers buy items from the articles. In recent months, other news organizations, including The Associated Press, have announced similar ventures. But even content that has historically made news outlets money is vulnerable to changes in Google Search, where the bulk of traffic comes from. Some independent sites have said their search traffic has steadily evaporated, and Google’s pivot to AI search tools threatens to eat into revenue even further.

Unionized workers at Reviewed have gone on limited strikes multiple times after impasses with Gannett management. Most recently, in July, staffers staged a temporary work stoppage, saying they were expected to take on additional work without adjustments to compensation. Gannett didn’t comment on whether staff at Reviewed will be offered new roles at the company or whether they would be laid off.

Correction, August 26th: This story previously stated that Reviewed staff were given additional work with adjustments to pay. Their compensation was not adjusted.


D&D publisher walks back controversial changes to online tools

Image: Wizards of the Coast

Wizards of the Coast has walked back some of its planned updates to D&D Beyond. All your current Dungeons & Dragons character sheets are once again safe, with the publisher no longer forcing updates to newer versions of spells, weapons, and magic items.

Last week, as a part of the updates to Dungeons & Dragons Fifth Edition — collectively known as the 2024 revision — the publisher announced that it would update D&D Beyond, the tabletop RPG’s official digital toolkit that players use to reference content and create characters using a host of official and third-party sources. The update would add the new 2024 rulebooks to the toolkit, mark outdated content with a “legacy” badge, and change players’ character sheets to reflect all the new rules and features.

Over the coming months, you’re going to see big changes here on D&D Beyond!

⚙️ Here’s a breakdown of what we’re working on: https://t.co/QJMhaPigZD pic.twitter.com/GttPl9GJdf

— D&D Beyond (@DnDBeyond) August 21, 2024

That last part is critical to understanding why some D&D players (including my own dungeon master) spent the last 72 hours in a state of panic. Though some of the 2024 revisions are essentially cosmetic in nature — for example, “races” will be updated to “species” — other updates like the ones to weapons, spells, and magic items fundamentally alter the game. Wizards of the Coast would have essentially overwritten every user’s character sheet with the new information whether they wanted it or not.

“All entries for mundane and magical items, weapons, armor, and spells will also be updated to their 2024 version,” Wizards said in its initial announcement.

The publisher did say that players would have the option to continue to use the 2014 versions of spells and magic items. But doing so requires using the game’s homebrew rules, which aren’t known for being user-friendly.

To put this in perspective, think of it like owning a car. Imagine that after 10 years with one car, learning its ins and outs, the manufacturer decides that when it rolls out the latest model of your car, it’s going to magically change your car to the new model, too. Now, though your car is essentially the same, it doesn’t work like you’re used to. And when you ask the manufacturer if you can go back to your old car, it says you can but that you’ll have to manually restore it yourself.

Thankfully, Wizards of the Coast isn’t in the car business, and after a weekend of backlash on social media, the company will no longer force the new changes on players.

“We misjudged the impact of this change, and we agree that you should be free to choose your own way to play,” Wizards said in its latest announcement. Current character sheets will only be updated with new terminology, while the older versions of spells, magic items, and weapons will be preserved. Also, players who have access to both the 2014 and 2024 digital versions will have the option to use both when creating new characters.

Essentially, Wizards of the Coast is doing what it should have done in the first place: simply adding the new content and giving players the choice to opt in.


Hope and disparity: a colorful new way to visualize air quality around the world

A visualization of PM2.5 air pollution concentrations in Islamabad, Pakistan, from 1850 to 2021. | Image: Air Quality Stripes

A new tool shows how much air quality has changed since the Industrial Revolution in cities across the world. It generates a single image made up of different colored stripes representing pollution each year in each major city.

You can see stark contrasts from place to place, showing how much work is left to do to clean up pollution and also how well those efforts can pay off in the long run. Air pollution has fallen sharply in wealthy Western nations but is still a serious health risk in many places around the world.

“These images make the invisible visible.”

“Air pollution is often called the ‘invisible killer,’ but these images make the invisible visible,” Kirsty Pringle, a codirector of the project who is based at the University of Edinburgh, said in a press release.

The project was a collaboration between the University of Leeds, the University of Edinburgh, North Carolina State University, and the UK Met Office. Researchers use data from the UK Met Office to estimate average annual concentrations of fine particle pollution, or PM2.5. That takes into account particulates with a diameter less than a 30th of the width of a human hair — small enough to potentially enter the lungs and bloodstream. This kind of pollution — which might include dust, soot, and smoke — comes from smokestacks, tailpipes, and increasingly from wildfires made worse by climate change.

Image: airqualitystripes.info

The researchers created colored stripes for the capitals of each country, along with some other major cities and their universities’ hometowns. Each image represents changes in air pollution from 1850 to 2021. Satellite and ground-level readings of PM2.5 provide data for roughly the past two decades. Since such data was largely lacking before 2000, the researchers also rely on computer model simulations to peer back in time.

The stripes range in color from light blue to dark brown or black to represent “extremely poor” air quality. The scientists worked with artist Ethan Brain to come up with a color palette, sourced from some 200 images gathered by searching Google for “air pollution.”

The lightest blue indicates air quality below the World Health Organization’s recommendation of less than 5 micrograms of fine particle pollution per cubic meter of air (5 µg/m³). You can see London and Los Angeles start to approach those levels in recent years after decades of efforts to rein in pollution from industry and transportation. In the US, pollution levels started to fall after the enactment of the landmark Clean Air Act in 1970.
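
To make that mapping concrete, here is a minimal sketch of how annual PM2.5 averages could be rendered as a stripes image. This is not the Air Quality Stripes project’s code; the city values, the 60 µg/m³ ceiling on the color scale, and the four-color ramp below are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

years = np.arange(1850, 2022)
# Hypothetical annual-mean PM2.5 values (µg/m³) for a single city:
# rising through industrialization, easing after clean-air policies.
pm25 = 5 + 40 * np.clip((years - 1900) / 80, 0, 1) - 25 * np.clip((years - 1990) / 30, 0, 1)

# Light blue near the WHO guideline (5 µg/m³), darkening toward brown and black
# as concentrations rise; an approximation of the project's palette, not its actual colors.
cmap = LinearSegmentedColormap.from_list(
    "air_quality", ["#cbe9ff", "#f3c969", "#a65e2e", "#1b1b1b"]
)

fig, ax = plt.subplots(figsize=(10, 2))
# One unit-height bar per year, colored by that year's concentration.
ax.bar(years, np.ones_like(years), width=1.0, color=cmap(np.clip(pm25 / 60.0, 0, 1)))
ax.set_xlim(years[0], years[-1])
ax.set_yticks([])
ax.set_title("Hypothetical city: annual mean PM2.5, 1850-2021")
plt.show()
```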

Air quality can be very different from neighborhood to neighborhood, though, with communities of color in the US often burdened with a disproportionate amount of air pollution from nearby highways and industrial facilities.

Image: airqualitystripes.info

The unfortunate reality is that 99 percent of the world’s population lives in places with air quality that’s worse than the World Health Organization’s guideline for PM2.5. Cities in low- and middle-income countries in parts of South Asia and Africa are particularly hard-hit, the Air Quality Stripes researchers find. Air quality in Delhi, India, and Abuja, Nigeria, has deteriorated toward “extremely poor” and “very poor,” respectively, since the 1970s, for example.

Image: airqualitystripes.info

You can check out the Air Quality Stripes website to see visualizations for each city. The images resemble climate warming stripes that have become a popular way to show temperatures rising as a result of greenhouse gas emissions from fossil fuels.

Fortunately, action on climate change could also improve air quality this decade. At least 118 countries pledged to triple the world’s renewable energy capacity by 2030 during the United Nations annual climate summit last year. To stop climate change, the transition to renewable energy can’t leave any countries behind. Activists around the world are calling on wealthy nations and Wall Street to stop funding new fossil fuel projects and to cancel debt that makes it harder for less affluent nations to invest in clean energy.

Image: airqualitystripes.info

After all, there’s still hope to be found in stripes the color of blue skies.

“The images show that it is possible to reduce air pollution; the air in many cities in Europe is much cleaner now than it was 100 years ago, and this is improving our health. We really hope similar improvements can be achieved across the globe,” Pringle said.


Ikea is testing a secondhand marketplace, but only in two countries

Image: Ikea

Ikea is now testing an online platform in Madrid and Oslo where people can sell their used Ikea furniture to others. The platform, Ikea Preowned, lets users list their furniture with photos and a price, and the actual listings (like this example) look a lot like what you might see on Ikea’s main website. However, buyers and sellers have to agree on a place and time to meet to hand over the furniture.

Right now, sellers have the option to get paid via a bank transfer with no added fees or by receiving an Ikea gift card with an extra 15 percent added to the furniture’s purchase price, per the Ikea Preowned website. You can’t return items that are damaged or aren’t in the condition you might have expected, though you can request a refund, Ikea says.

Making a listing is free, but down the line, there could be “a symbolic fee, a humble fee,” Jesper Brodin, CEO of Ingka Group (which operates the vast majority of Ikea retail stores), said to the Financial Times. Brodin also told the FT that the tests in Madrid and Oslo will last through the end of this year and that the company plans to expand the platform globally.

Ingka Group didn’t immediately respond to a request for comment.


Hello, you’re here because you said AI image editing was just like Photoshop

Image: Cath Virginia / The Verge, Getty Images

Let’s put this sloppy, bad-faith argument to rest.

“We’ve had Photoshop for 35 years” is a common response to rebut concerns about generative AI, and you’ve landed here because you’ve made that argument in a comment thread or on social media.

There are countless reasons to be concerned about how AI image editing and generation tools will impact the trust we place in photographs and how that trust (or lack thereof) could be used to manipulate us. That’s bad, and we know it’s already happening. So, to save us all time and energy, and from wearing our fingers down to nubs by constantly responding to the same handful of arguments, we’re just putting them all in a list in this post.

Sharing this will be far more efficient after all — just like AI! Isn’t that delightful!

Argument: “You can already manipulate images like this in Photoshop”

It’s easy to make this argument if you’ve never actually gone through the process of manually editing a photo in apps like Adobe Photoshop, but it’s a frustratingly over-simplified comparison. Let’s say some dastardly miscreant wants to manipulate an image to make it look like someone has a drug problem — here are just a few things they’d need to do:

Have access to (potentially expensive) desktop software. Sure, mobile editing apps exist, but they’re not really suitable for much outside of small tweaks like skin smoothing and color adjustment. So, for this job, you’ll need a computer — a costly investment for internet fuckery. And while some desktop editing apps are free (Gimp, Photopea, etc.), most professional-level tools are not. Adobe’s Creative Cloud apps are among the most popular, and the recurring subscriptions ($263.88 per year for Photoshop alone) are notoriously hard to cancel.

Locate suitable pictures of drug paraphernalia. Even if you have some on hand, you can’t just slap any old image in and hope it’ll look right. You have to account for the appropriate lighting and positioning of the photo they’re being added to, so everything needs to match up. Any reflections on bottles should be hitting from the same angle, for example, and objects photographed at eye level will look obviously fake if dropped into an image that was snapped at more of an angle.

Understand and use a smorgasbord of complicated editing tools. Any inserts need to be cut from whatever background they were on and then blended seamlessly into their new environment. That might require adjusting color balance, tone, and exposure levels, smoothing edges, or adding in new shadows or reflections. It takes both time and experience to ensure the results look even passable, let alone natural.

There are some genuinely useful AI tools in Photoshop that do make this easier, such as automated object selection and background removal. But even if you’re using them, it’ll still take a decent chunk of time and energy to manipulate a single image. By contrast, here’s what The Verge editor Chris Welch had to do to get the same results using the “Reimagine” feature on a Google Pixel 9:

Launch the Google Photos app on their smartphone. Tap an area, and tell it to add a “medical syringe filled with red liquid,” some “thin lines of crumbled chalk,” alongside wine and rubber tubing.

That’s it. A similarly easy process exists on Samsung’s newest phones. The skill and time barrier isn’t just reduced — it’s gone. Google’s tool is also freakishly good at blending any generated materials into the images: lighting, shadows, opacity, and even focal points are all taken into consideration. Photoshop itself now has an AI image generator built-in, and the results from that often aren’t half as convincing as what this free Android app from Google can spit out.

Image manipulation techniques and other methods of fakery have existed for close to 200 years — almost as long as photography itself. (Cases in point: 19th-century spirit photography and the Cottingley Fairies.) But the skill requirements and time investment needed to make those changes are why we don’t think to inspect every photo we see. Manipulations were rare and unexpected for most of photography’s history. But the simplicity and scale of AI on smartphones will mean any bozo can churn out manipulative images at a frequency and scale we’ve never experienced before. It should be obvious why that’s alarming.

Argument: “People will adapt to this becoming the new normal”

Just because you have the estimable ability to clock when an image is fake doesn’t mean everyone can. Not everyone skulks around on tech forums (we love you all, fellow skulkers), so the typical indicators of AI that seem obvious to us can be easy to miss for those who don’t know what signs to look for — if they’re even there at all. AI is rapidly getting better at producing natural-looking images that don’t have seven fingers or Cronenberg-esque distortions.

In a world where everything might be fake, it’s vastly harder to prove something is real

Maybe it was easy to spot when the occasional deepfake was dumped into our feeds, but the scale of production has shifted seismically in the last two years alone. It’s incredibly easy to make this stuff, so now it’s fucking everywhere. We are dangerously close to living in a world in which we have to be wary about being deceived by every single image put in front of us.

And when everything might be fake, it’s vastly harder to prove something is real. That doubt is easy to prey on, opening the door for people like former President Donald Trump to throw around false accusations about Kamala Harris manipulating the size of her rally crowds.

Argument: “Photoshop was a huge, barrier-lowering tech, too — but we ended up being fine”

It’s true: even if AI is a lot easier to use than Photoshop, the latter was still a technological revolution that forced people to reckon with a whole new world of fakery. But Photoshop and other pre-AI editing tools did create social problems that persist to this day and still cause meaningful harm. The ability to digitally retouch photographs on magazines and billboards promoted impossible beauty standards for both men and women, with the latter disproportionately impacted. In 2003, for instance, a then-27-year-old Kate Winslet was unknowingly slimmed down on the cover of GQ — and the British magazine’s editor, Dylan Jones, justified it by saying her appearance had been altered “no more than any other cover star.”

Edits like this were pervasive and rarely disclosed, despite major scandals when early blogs like Jezebel published unretouched photos of celebrities on fashion magazine covers. (France even passed a law requiring airbrushing disclosures.) And as easier-to-use tools like Facetune emerged on exploding social media platforms, they became even more insidious.

One study in 2020 found that 71 percent of Instagram users would edit their selfies with Facetune before publishing them, and another found that media images caused the same drop in body image for women and girls with or without a label disclosing that they’d been digitally altered. There’s a direct pipeline from social media to real-life plastic surgery, sometimes aiming for physically impossible results. And men are not immune — social media has real and measurable impacts on boys and their self-image as well.

Impossible beauty standards aren’t the only issue, either. Staged pictures and photo editing could mislead viewers, undercut trust in photojournalism, and even emphasize racist narratives — as in a 1994 photo illustration that made OJ Simpson’s face darker in a mugshot.

Generative AI image editing not only amplifies these problems by further lowering barriers — it sometimes does so with no explicit direction. AI tools and apps have been accused of giving women larger breasts and revealing clothes without being told to do so. Forget viewers not being able to trust what they’re seeing is real — now photographers can’t trust their own tools!

Argument: “I’m sure laws will be passed to protect us”

First of all, crafting good speech laws — and, let’s be clear, these likely would be speech laws — is incredibly hard. Governing how people can produce and release edited images will require separating uses that are overwhelmingly harmful from ones lots of people find valuable, like art, commentary, and parody. Lawmakers and regulators will have to reckon with existing laws around free speech and access to information, including the First Amendment in the US.

Tech giants ran full speed into the AI era seemingly without considering the possibility of regulation

Tech giants also ran full-speed into the AI era seemingly without even considering the possibility of regulation. Global governments are still scrambling to enact laws that can rein in those who do abuse generative AI tech (including the companies building it), and the development of systems for identifying real photographs from manipulated ones is proving slow and woefully inadequate.

Meanwhile, easy AI tools have already been used to manipulate voters, digitally undress pictures of children, and grotesquely deepfake celebrities like Taylor Swift. That’s just in the last year, and the technology is only going to keep improving.

In an ideal world, adequate guardrails would have been put in place before a free, idiot-proof tool capable of adding bombs, car collisions, and other nasties to photographs in seconds landed in our pockets. Maybe we are fucked. Optimism and willful ignorance aren’t going to fix this, and it’s not clear what will or even can at this stage.


Samsung will soon let you create AI wallpapers on its touchscreen refrigerators

Illustration by Alex Castro / The Verge

Samsung has announced new AI capabilities for its smart appliances, including new AI-generated wallpapers for certain refrigerators, as well as an AI upgrade for its Bixby voice assistant. The updated Bixby will hit five Samsung Bespoke appliances starting on August 27th.

AI-generated wallpapers are coming to Samsung Family Hub refrigerators released in the US and Korea after 2022, the company says. To create them, users will pick a theme from seven categories and one of six art styles. Once the new wallpaper is created, it can go on the fridge’s cover screen or whiteboard, or be saved in an album for later. It will require a Wi-Fi connection, and users will be limited to creating up to five wallpapers per day.

The improved Bixby will be able to understand complex commands, remember your previous conversations, and answer troubleshooting questions. That means you can ask your washing machine what time dinner is and follow up by asking it to have the laundry done an hour before that, according to one of the examples Samsung listed. In another, the company suggests asking Bixby about error codes or why your refrigerator might be making clicking noises.

Only certain Samsung appliances launched in 2024 will get the Bixby upgrade, though. That includes the Bespoke 4-Door Refrigerator with AI Family Hub, the Bespoke AI Laundry Combo, and the AI WindFree Gallery Freestanding Air Conditioner.


Indiegogo will guarantee shipping for some crowdfunding campaigns

Image: Indiegogo

Indiegogo is taking a renewed swing at holding crowdfunding campaigns accountable for shipping products with a new program. The company’s “Shipping Guarantee” initiative promises that anytime you support an eligible campaign, you’ll get your order on time or your money back.

The guarantee will apply to campaigns that meet eligibility requirements, which include launching with a product that’s in the “final manufacturing stages” and having “a proven track record of successful crowdfunding on the site.” Covered campaigns will have a Shipping Guarantee badge above the Pick Your Perk button on the site. If you back one of these campaigns and don’t receive shipping confirmation by the date a campaign promised, you can immediately request a refund, chief revenue officer Julie dePontbriand tells The Verge.

Those refunds will come straight from Indiegogo, which will hold backers’ funds until a campaign has successfully shipped — under the guarantee, campaign operators won’t get any money until they’ve confirmed shipping. And campaigners that don’t fulfill their obligation “may forfeit their Trust Badge in the future,” dePontbriand said, referring to the company’s Trust Proven Program. It’s not a feasible system for campaigns that need backers’ cash to finish their product, but one that makes sense for established companies using Indiegogo to gauge interest and take preorders.

Image: Indiegogo
The Shipping Guarantee badge tells you whether a campaign is covered.

Indiegogo has experimented with shipping guarantees as far back as 2018, offering very similar terms. It’s also tried other methods to keep campaigns honest, including closer screening and threats to refer campaigners to collections agencies. “Indiegogo is building on the promising results of our initial test programs with an expanded Shipping Guarantee initiative,” dePontbriand tells The Verge.


Apple’s iPhone 16 launch event is set for September

Screenshot by Jay Peters / The Verge

Apple has announced the date of its next big event: September 9th, 2024, at 1PM ET / 10AM PT. The event, which has the tagline “It’s Glowtime,” will take place at the Steve Jobs Theater at Apple Park.

During the show, the company is expected to launch the iPhone 16 lineup. The big change to the iPhone 16 and 16 Plus is expected to be a switch to a vertically aligned camera system on the back. (If the final phones look like what we’ve seen on iPhone 16 dummy units, I’m already a big fan of this change.) The iPhone 16 Pro and 16 Pro Max phones might get bigger screens but are rumored to keep Apple’s familiar three-camera layout. Those phones could also come in a new bronze color.

Image: Apple
It’s (almost) Glowtime.

All four iPhone 16 models are expected to have the Action Button, which was exclusive to the Pro line with the iPhone 15. Apple’s new iPhones may also have a new button dedicated to capturing photos and videos, but it’s unclear if that will be a Pro-exclusive feature or will be available on the regular iPhone 16 models as well.

AI and the company’s Apple Intelligence features will likely be a big part of Apple’s event, too. Right now, the only iPhones that support Apple Intelligence are the iPhone 15 Pro and 15 Pro Max, but the full iPhone 16 lineup is rumored to be able to use Apple Intelligence. (Well, when Apple Intelligence is actually available, that is.)

Apple is also rumored to launch the Apple Watch Series 10 and two versions of new AirPods at the event.


HoverAir X1 Pro and Promax: folding, self-flying 4K and 8K drones with modular controllers

The HoverAir X1 Promax. | Photo by Owen Grove / The Verge

The HoverAir X1 wasn’t your traditional drone. People who have little interest in piloting, like my colleague Thomas Ricker, use the highly crash-resistant self-flying $350 camera to easily get aerial video just by pressing two buttons. But it’s not particularly high-quality video, it’s not particularly fast, and you can’t fly it far away even if you wanted to.
That’s where the new HoverAir X1 Pro and X1 Promax come in.
Zero Zero Robotics has just put both drones up for sale on Indiegogo. We briefly checked them out in person, and they sound like an improvement on the X1 in practically every way.

Photo by Owen Grove / The Verge
The new HoverAir X1 Pro and Promax, next to the original X1.

While they are slightly bigger and heavier, they’d still fit in a cargo pants pocket, and they weigh under 200 grams — meaning you shouldn’t need to register them with aviation authorities since they’re under the typical 250-gram limit.
In exchange for that size bump, they now shoot 4K60 or 8K30 footage, respectively; have a wider field of view; last 4.5 minutes longer on a charge (16 minutes total); track you nearly twice as fast (26mph); can resist higher levels of wind (10.7 m/s); support microSD storage so you can finally hold more than 32GB of footage; and offer a two-axis gimbal for increased video stability, up from just one axis previously.
In addition to 8K30 footage, the Promax model can also shoot 4K at 120 frames per second for a slow motion effect, and 4K HDR footage in 10-bit HLG at up to 60fps. Both drones also shoot 24fps video across all resolutions if you’re looking for a more cinematic look. The original X1 only offered 30fps and 60fps.
While they still don’t have GPS, they do have a new visual positioning system that lets them fly over water, snow, and cliffs — the original would get confused and slowly land, which was bad!

Photo by Owen Grove / The Verge
The new drone, flanked by a set of new ND filters, batteries, and a multi-battery charger.

And, for those who actually want to frame shots or pilot a drone themselves, Zero Zero is introducing a multi-part modular controller system called the Beacon with its own built-in 1.78-inch OLED display.

Photo by Owen Grove / The Verge
The core of the Beacon system.

By itself, the $129 Beacon should already be a much more capable way of connecting to the X1 Pro or Promax than your phone. It offers one kilometer of video transmission range (which still pales in comparison to DJI drones that manage up to 20km), acts as a tracker to help the drone follow you, and has built-in Wi-Fi, Bluetooth, and a microphone for recording audio and some voice commands.

Image: Zero Zero Robotics
The X1 Pro’s modular controller system.

But add one of Zero Zero’s little $69 modular joystick controllers, and now you can manually aim the drone one-handed and extend the Beacon’s battery life from an hour to an hour and a half. Add two joysticks, and you get a full gamepad-style drone controller, plus a shelf for your smartphone as an extra monitor underneath.
The company’s also devised a $169 “Power Case” that gives you 2.5 drone charges (40 minutes of flight time, it claims) just by sliding the folded drone in, and my colleague Owen Grove spotted a $69 set of ND filters during his hands-on, as well as a $79 charging hub for multiple batteries.

DJI has set the bar very high for prosumer drones like these, and it wouldn’t be surprising if Zero Zero hasn’t caught up in terms of image quality, reliable connectivity, or collision avoidance quite yet. (These drones now have a time-of-flight proximity sensor and / or a camera in the rear, but it only protects that side of the drone from crashes.) DJI’s $759 Mini 4 Pro has omnidirectional avoidance, nearly triple the battery life, and vastly longer range.

But HoverAir’s advantages are that its drones are cheaper, easier, more durable, and far faster to use, and those don’t seem to be changing today. Owen launched one from his hand with no instructions whatsoever, just by hitting two buttons. These new drones will retail for $499 and $699, respectively, with discounts for Indiegogo early adopters, while the original X1 appears to be sticking around for $350.
DJI seems nearly ready to announce its own budget easy-to-launch drone around the $350 mark, too: it’s called the DJI Neo.
Zero Zero says it should start shipping its new drones in October — and unlike most Indiegogo projects, this one will come with a guarantee. Zero Zero says it’s Indiegogo’s first partner for a new Shipping Guarantee program: backers get a full refund if the product doesn’t ship within the promised timeframe, because Indiegogo will withhold funds so it can process those refunds.
“By providing a guarantee that backers will receive their products or their money back, we are enhancing the overall crowdfunding experience and encouraging more people to support innovative projects,” writes Indiegogo CEO Becky Center in Zero Zero’s press release.
Here’s Zero Zero’s spec comparison between the HoverAir X1, X1 Pro, and X1 Promax:

Read More 

Ted Lasso could come back for a fourth season

Image: Apple

Though Ted Lasso’s season 3 finale felt like a solid place for the series to end, it might not be long before AFC Richmond’s back on our screens.
Deadline reports that Warner Bros. Television — which coproduces Ted Lasso with Universal Television for Apple — has tapped a number of the show’s core cast members to return for a not-yet-greenlit fourth season. Actors Hannah Waddingham, Brett Goldstein, and Jeremy Swift have all had their options picked up by the studio.
Cocreator Jason Sudeikis is said to be attached to the “revival” (which feels silly to say considering the season three finale only debuted last year). As Variety notes, it’s also not yet clear which actors with SAG-AFTRA contracts may return for the new season (Waddingham, Goldstein, and Swift are all UK Equity members). Of course, nothing’s official until the studios all announce that production is up and running. But at this point, it seems very likely that there’s more Ted Lasso to come.

Read More 
