Month: August 2024

A Lot of New In-car Tech is ‘Not Necessary,’ Survey Finds

Car buyers are increasingly skeptical of advanced automotive technologies, a new JD Power survey reveals. The study found that while drivers appreciate practical innovations like blind spot monitoring, they see little value in features such as automatic parking systems and passenger-side infotainment screens. The survey measured user experiences with new vehicle technologies. Results show that systems partially automating driving tasks had low perceived usefulness, aligning with recent Insurance Institute for Highway Safety data indicating no safety improvements from such features. The survey identified AI-based smart climate control as popular among users. However, facial recognition, fingerprint scanners, and gesture controls were largely viewed negatively.

Read more of this story at Slashdot.

Apple Shares ‘It’s Glowtime’ Event Placeholder on YouTube

Following this morning’s announcement of an upcoming iPhone-centric event that’s set to take place on September 9 at 10:00 a.m. Pacific Time, Apple has added an event placeholder on its YouTube channel.

Apple plans to stream the iPhone event on YouTube, on its website, and through the Apple TV app. There’s also an event placeholder on the Apple Events site. For YouTube, users can click on the video placeholder and choose the “Notify me” option to get a notification when the livestream is up.

If YouTube has your location information, the notification and countdown are in local time, so it’s a good way to double check when you’ll need to tune in to see Apple’s announcements. Apple almost always holds events at 10:00 a.m. Pacific Time, which is 1:00 p.m. Eastern Time, evening in Europe, and early morning in countries in Asia and Australia.
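The time-zone arithmetic above is easy to check yourself. As a minimal sketch (using only Python's standard `zoneinfo` module, with the event date and zones chosen for illustration), you can convert the announced 10:00 a.m. Pacific start into a few local times:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Apple's announced event time: September 9, 2024, 10:00 a.m. Pacific
event = datetime(2024, 9, 9, 10, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

# Convert to a handful of other zones to double-check when to tune in
for tz in ("America/New_York", "Europe/London", "Asia/Tokyo", "Australia/Sydney"):
    local = event.astimezone(ZoneInfo(tz))
    print(f"{tz}: {local:%a %H:%M}")
```

Running this confirms the times in the paragraph above: 13:00 Monday in New York, 18:00 in London, and 02:00–03:00 Tuesday in Tokyo and Sydney.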

Apple’s Events website also has an option to add the event to your calendar for a cross-platform reminder.

For those who are unable to watch, MacRumors will be providing full event coverage on MacRumors.com and the MacRumorsLive Twitter account so you can follow along with the announcements.

Tag: September 2024 Apple Event

This article, “Apple Shares ‘It’s Glowtime’ Event Placeholder on YouTube” first appeared on MacRumors.com

Discuss this article in our forums

One More Thing: Don’t Buy Apple Products Right Now (Video)

With new Apple products expected to drop in September, it might be best to wait to upgrade your watch or phone.

Hello, you’re here because you said AI image editing was just like Photoshop

Image: Cath Virginia / The Verge, Getty Images

Let’s put this sloppy, bad-faith argument to rest. “We’ve had Photoshop for 35 years” is a common response to rebut concerns about generative AI, and you’ve landed here because you’ve made that argument in a comment thread or on social media.

There are countless reasons to be concerned about how AI image editing and generation tools will impact the trust we place in photographs and how that trust (or lack thereof) could be used to manipulate us. That’s bad, and we know it’s already happening. So, to save us all time and energy, and from wearing our fingers down to nubs by constantly responding to the same handful of arguments, we’re just putting them all in a list in this post.

Sharing this will be far more efficient after all — just like AI! Isn’t that delightful!

Argument: “You can already manipulate images like this in Photoshop”

It’s easy to make this argument if you’ve never actually gone through the process of manually editing a photo in apps like Adobe Photoshop, but it’s a frustratingly over-simplified comparison. Let’s say some dastardly miscreant wants to manipulate an image to make it look like someone has a drug problem — here are just a few things they’d need to do:

Have access to (potentially expensive) desktop software. Sure, mobile editing apps exist, but they’re not really suitable for much outside of small tweaks like skin smoothing and color adjustment. So, for this job, you’ll need a computer — a costly investment for internet fuckery. And while some desktop editing apps are free (Gimp, Photopea, etc.), most professional-level tools are not. Adobe’s Creative Cloud apps are among the most popular, and the recurring subscriptions ($263.88 per year for Photoshop alone) are notoriously hard to cancel.

Locate suitable pictures of drug paraphernalia. Even if you have some on hand, you can’t just slap any old image in and hope it’ll look right. You have to account for the appropriate lighting and positioning of the photo they’re being added to, so everything needs to match up. Any reflections on bottles should be hitting from the same angle, for example, and objects photographed at eye level will look obviously fake if dropped into an image that was snapped at more of an angle.

Understand and use a smorgasbord of complicated editing tools. Any inserts need to be cut from whatever background they were on and then blended seamlessly into their new environment. That might require adjusting color balance, tone, and exposure levels, smoothing edges, or adding in new shadows or reflections. It takes both time and experience to ensure the results look even passable, let alone natural.

There are some genuinely useful AI tools in Photoshop that do make this easier, such as automated object selection and background removal. But even if you’re using them, it’ll still take a decent chunk of time and energy to manipulate a single image. By contrast, here’s what The Verge editor Chris Welch had to do to get the same results using the “Reimagine” feature on a Google Pixel 9:

Launch the Google Photos app on their smartphone. Tap an area, and tell it to add a “medical syringe filled with red liquid,” some “thin lines of crumbled chalk,” alongside wine and rubber tubing.

That’s it. A similarly easy process exists on Samsung’s newest phones. The skill and time barrier isn’t just reduced — it’s gone. Google’s tool is also freakishly good at blending any generated materials into the images: lighting, shadows, opacity, and even focal points are all taken into consideration. Photoshop itself now has an AI image generator built-in, and the results from that often aren’t half as convincing as what this free Android app from Google can spit out.

Image manipulation techniques and other methods of fakery have existed for close to 200 years — almost as long as photography itself. (Cases in point: 19th-century spirit photography and the Cottingley Fairies.) But the skill requirements and time investment needed to make those changes are why we don’t think to inspect every photo we see. Manipulations were rare and unexpected for most of photography’s history. But the simplicity and scale of AI on smartphones will mean any bozo can churn out manipulative images at a frequency and scale we’ve never experienced before. It should be obvious why that’s alarming.

Argument: “People will adapt to this becoming the new normal”

Just because you have the estimable ability to clock when an image is fake doesn’t mean everyone can. Not everyone skulks around on tech forums (we love you all, fellow skulkers), so the typical indicators of AI that seem obvious to us can be easy to miss for those who don’t know what signs to look for — if they’re even there at all. AI is rapidly getting better at producing natural-looking images that don’t have seven fingers or Cronenberg-esque distortions.

In a world where everything might be fake, it’s vastly harder to prove something is real

Maybe it was easy to spot when the occasional deepfake was dumped into our feeds, but the scale of production has shifted seismically in the last two years alone. It’s incredibly easy to make this stuff, so now it’s fucking everywhere. We are dangerously close to living in a world in which we have to be wary about being deceived by every single image put in front of us.

And when everything might be fake, it’s vastly harder to prove something is real. That doubt is easy to prey on, opening the door for people like former President Donald Trump to throw around false accusations about Kamala Harris manipulating the size of her rally crowds.

Argument: “Photoshop was a huge, barrier-lowering tech, too — but we ended up being fine”

It’s true: even if AI is a lot easier to use than Photoshop, the latter was still a technological revolution that forced people to reckon with a whole new world of fakery. But Photoshop and other pre-AI editing tools did create social problems that persist to this day and still cause meaningful harm. The ability to digitally retouch photographs on magazines and billboards promoted impossible beauty standards for both men and women, with the latter disproportionately impacted. In 2003, for instance, a then-27-year-old Kate Winslet was unknowingly slimmed down on the cover of GQ — and the British magazine’s editor, Dylan Jones, justified it by saying her appearance had been altered “no more than any other cover star.”

Edits like this were pervasive and rarely disclosed, despite major scandals when early blogs like Jezebel published unretouched photos of celebrities on fashion magazine covers. (France even passed a law requiring airbrushing disclosures.) And as easier-to-use tools like Facetune emerged on exploding social media platforms, they became even more insidious.

One study in 2020 found that 71 percent of Instagram users would edit their selfies with Facetune before publishing them, and another found that media images caused the same drop in body image for women and girls with or without a label disclaiming they’d been digitally altered. There’s a direct pipeline from social media to real-life plastic surgery, sometimes aiming for physically impossible results. And men are not immune — social media has real and measurable impacts on boys and their self-image as well.

Impossible beauty standards aren’t the only issue, either. Staged pictures and photo editing could mislead viewers, undercut trust in photojournalism, and even emphasize racist narratives — as in a 1994 photo illustration that made O.J. Simpson’s face darker in a mugshot.

Generative AI image editing not only amplifies these problems by further lowering barriers — it sometimes does so with no explicit direction. AI tools and apps have been accused of giving women larger breasts and revealing clothes without being told to do so. Forget viewers not being able to trust what they’re seeing is real — now photographers can’t trust their own tools!

Argument: “I’m sure laws will be passed to protect us”

First of all, crafting good speech laws — and, let’s be clear, these likely would be speech laws — is incredibly hard. Governing how people can produce and release edited images will require separating uses that are overwhelmingly harmful from ones lots of people find valuable, like art, commentary, and parody. Lawmakers and regulators will have to reckon with existing laws around free speech and access to information, including the First Amendment in the US.

Tech giants ran full speed into the AI era seemingly without considering the possibility of regulation

Tech giants also ran full-speed into the AI era seemingly without even considering the possibility of regulation. Global governments are still scrambling to enact laws that can rein in those who do abuse generative AI tech (including the companies building it), and the development of systems for identifying real photographs from manipulated ones is proving slow and woefully inadequate.

Meanwhile, easy AI tools have already been used for voter manipulation, digitally undressing pictures of children, and to grotesquely deepfake celebrities like Taylor Swift. That’s just in the last year, and the technology is only going to keep improving.

In an ideal world, adequate guardrails would have been put in place before a free, idiot-proof tool capable of adding bombs, car collisions, and other nasties to photographs in seconds landed in our pockets. Maybe we are fucked. Optimism and willful ignorance aren’t going to fix this, and it’s not clear what will or even can at this stage.

When to Expect the iPhone SE 4 to Launch

Over two and a half years have passed since Apple released the current iPhone SE, so the device is due for an update. Below, we recap the latest rumors about the next-generation iPhone SE, including potential features and launch timing.

Timing

The latest word comes from Bloomberg’s Mark Gurman. In his Power On newsletter on Sunday, he said he expects the next iPhone SE to launch in the spring of 2025. All three existing iPhone SE models were announced in March over the years, so it seems likely that the fourth-generation model will also be introduced in March next year.

Apple analyst Ming-Chi Kuo and the technology publication The Information also previously claimed that the next iPhone SE will launch in the first quarter of 2025, so that timeframe has been corroborated by multiple sources. The Information said that Apple suppliers would begin ramping up mass production of the device in October.

Features

The current iPhone SE starts at $429 in the U.S. with 64GB of storage and 4GB of RAM. The device’s key features include a 4.7-inch LCD display, A15 Bionic chip, Touch ID, 5G support with a Qualcomm chip, a single 12-megapixel rear camera, and a Lightning port.

The next iPhone SE is rumored to have an iPhone 14-like design with the following features:
6.1-inch OLED display
A18 chip
Face ID
USB-C port
Action button
Apple-designed 5G chip
A single 48-megapixel rear camera
8GB of RAM for Apple Intelligence
Gurman believes the next iPhone SE will likely be priced in the $400 to $500 range, so the device would remain a lower-cost option in Apple’s smartphone lineup, despite gaining a larger display and a more modern set of features.

Related Roundup: iPhone SE
Tags: Bloomberg, Mark Gurman
Buyer’s Guide: iPhone SE (Don’t Buy)
Related Forum: iPhone

This article, “When to Expect the iPhone SE 4 to Launch” first appeared on MacRumors.com

Discuss this article in our forums
