Month: August 2024

This system can sort real pictures from AI fakes — why aren’t platforms using it?

Image: Cath Virginia / The Verge, Chris Strider

Big tech companies are backing the C2PA’s authentication standard, but they’re taking too long to put it to use. As the US presidential election approaches, the web has been filled with photos of Donald Trump and Kamala Harris: spectacularly well-timed photos of an attempted assassination; utterly mundane photos of rally crowds; and shockingly out-of-character photos of the candidates burning flags and holding guns. Some of these things didn’t actually happen, of course. But generative AI imaging tools are now so adept and accessible that we can’t really trust our eyes anymore.
Some of the biggest names in digital media have been working to sort out this mess, and their solution so far is: more data — specifically, metadata that attaches to a photo and tells you what’s real, what’s fake, and how that fakery happened. One of the best-known systems for this, C2PA authentication, already has the backing of companies like Microsoft, Adobe, Arm, OpenAI, Intel, Truepic, and Google. The technical standard provides key information about where images originate from, letting viewers identify whether they’ve been manipulated.
“Provenance technologies like Content Credentials — which act like a nutrition label for digital content — offer a promising solution by enabling official event photos and other content to carry verifiable metadata like date and time, or if needed, signal whether or not AI was used,” Andy Parsons, a steering committee member of C2PA and senior director for CAI at Adobe, told The Verge. “This level of transparency can help dispel doubt, particularly during breaking news and election cycles.”

But if all the information needed to authenticate images can already be embedded in the files, where is it? And why aren’t we seeing some kind of “verified” mark when the photos are published online?
The problem is interoperability. There are still huge gaps in how this system is being implemented, and it’s taking years to get all the necessary players on board to make it work. And if we can’t get everyone on board, then the initiative might be doomed to fail.
The Coalition for Content Provenance and Authenticity (C2PA) is one of the largest groups trying to address this chaos, alongside the Content Authenticity Initiative (CAI) that Adobe kicked off in 2019. The technical standard they’ve developed uses cryptographic digital signatures to verify the authenticity of digital media, and the spec itself is already finalized. But this progress is still frustratingly inaccessible to the everyday folks who stumble across questionable images online.

“It’s important to realize that we’re still in the early stage of adoption,” said Parsons. “The spec is locked. It’s robust. It’s been looked at by security professionals. The implementations are few and far between, but that’s just the natural course of getting standards adopted.”
The problems start from the origin of the images: the camera. Some camera brands like Sony and Leica already embed cryptographic digital signatures based on C2PA’s open technical standard — which provides information like the camera settings and the date and location where an image was taken — into photographs the moment they’re taken.
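The capture-time signing these cameras perform can be sketched in miniature. The toy below is not the real C2PA format (actual manifests use X.509 certificate chains and COSE public-key signatures embedded in the file, not a shared-secret HMAC, and every name here is invented for illustration), but it shows the core idea: a signature minted at capture time binds the metadata to a hash of the pixels, so any later change to either is detectable.

```python
import hashlib
import hmac
import json

# Toy stand-in for the camera's signing key; a real camera would use a
# private key whose certificate chains back to the manufacturer.
CAMERA_KEY = b"device-signing-key"

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Produce a provenance record binding metadata to the image hash."""
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(CAMERA_KEY, blob, hashlib.sha256).hexdigest()
    return payload

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Check that neither the pixels nor the metadata changed since signing."""
    claimed = dict(record)
    sig = claimed.pop("signature")
    # Pixels must still hash to the value recorded at capture time.
    if hashlib.sha256(image_bytes).hexdigest() != claimed["image_sha256"]:
        return False
    # The metadata and hash must match what the key signed.
    blob = json.dumps(claimed, sort_keys=True).encode()
    return hmac.compare_digest(
        sig, hmac.new(CAMERA_KEY, blob, hashlib.sha256).hexdigest()
    )

photo = b"...raw sensor data..."
record = sign_capture(photo, {"camera": "example", "taken": "2024-08-10T12:00Z"})
print(verify_capture(photo, record))            # the untouched image verifies
print(verify_capture(photo + b"edit", record))  # any alteration fails
```

In the real standard, anyone can verify with the camera maker's public certificate; the shared-secret HMAC here only stands in for that signature step.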
This is currently supported on only a handful of cameras, whether on new models like the Leica M11-P or via firmware updates to existing models like Sony’s Alpha 1, Alpha 7S III, and Alpha 7 IV. While other brands like Nikon and Canon have also pledged to adopt the C2PA standard, most have yet to meaningfully do so. Smartphones, typically the most accessible cameras for most people, are also lacking. Neither Apple nor Google responded to our inquiries about implementing C2PA support or a similar standard in iPhone or Android devices.
If the cameras themselves don’t record this precious data, important information can still be applied during the editing process. Software like Adobe’s Photoshop and Lightroom, two of the most widely used image editing apps in the photography industry, can automatically embed this data in the form of C2PA-supported Content Credentials, which note how and when an image has been altered. That includes any use of generative AI tools, which could help to identify images that have been falsely doctored.
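Conceptually, an edit history like Content Credentials is an append-only, tamper-evident log. The sketch below models that with a simple hash chain; it is an illustration only (real Content Credentials are cryptographically signed assertions inside a C2PA manifest, and the function names here are invented), but it shows why a recorded edit can't be quietly rewritten after the fact: changing any earlier entry breaks every later hash.

```python
import hashlib
import json

def append_edit(history: list, action: str, tool: str) -> list:
    """Append an edit entry whose hash covers the previous entry's hash."""
    prev_hash = history[-1]["entry_hash"] if history else "0" * 64
    entry = {"action": action, "tool": tool, "prev": prev_hash}
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return history + [entry]

def history_intact(history: list) -> bool:
    """Re-derive every hash in order; any retroactive change breaks the chain."""
    prev = "0" * 64
    for entry in history:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev"] != prev:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["entry_hash"] != expected:
            return False
        prev = entry["entry_hash"]
    return True

log = []
log = append_edit(log, "crop", "Photoshop")
log = append_edit(log, "generative_fill", "Firefly")
print(history_intact(log))          # True: the recorded history verifies
log[0]["action"] = "color_correct"  # try to falsify an earlier edit...
print(history_intact(log))          # False: the chain no longer verifies
```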

But again, many applications, including Affinity Photo and GIMP, don’t support a unified, interoperable metadata solution that can help resolve authenticity issues. Some members of these software communities have expressed a desire for them to do so, which might bring more attention to the issue. Phase One, developer of the popular pro photo editor Capture One, told The Verge that it was “committed to supporting photographers” being impacted by AI and is “looking into traceability features like C2PA, amongst others.”
Even when a camera does support authenticity data, it doesn’t always make it to viewers. A C2PA-compliant Sony camera was used to take the now-iconic photo of Trump’s fist pump following the assassination attempt, as well as a photo that seemed to capture the bullet fired at him in mid-flight. That metadata isn’t widely accessible to the general public, though, because the online platforms where these images circulated, like X and Reddit, don’t display it when images are uploaded and published. Even media websites that back the standard, like The New York Times, don’t visibly flag verification credentials after using them to authenticate a photograph.
Part of that roadblock, besides getting platforms on board in the first place, is figuring out the best way to present that information to users. Facebook and Instagram are two of the largest platforms that check content for markers like the C2PA standard, but they only flag images that have been manipulated using generative AI tools — no information is presented to validate “real” images.

Image: Meta
Meta has updated its AI labels, but none of its platforms currently flag when images are verifiably authentic.

When those labels are unclear, it can cause a problem, too. Meta’s “Made with AI” labels angered photographers when they were applied so aggressively that they seemed to cover even minor retouching. The labels have since been updated to deemphasize the use of AI. And while Meta didn’t disclose to us if it will expand this system, the company told us it believes a “widespread adoption of Content Credentials” is needed to establish trust.
Truepic, an authenticity infrastructure provider and another member of C2PA, says there’s enough information present in these digital markers to provide more detail than platforms currently offer. “The architecture is there, but we need to research the optimal way to display these visual indicators so that everyone on the internet can actually see them and use them to make better decisions without just saying something is either all generative AI or all authentic,” Truepic chief communications officer Mounir Ibrahim said to The Verge.
X doesn’t currently support the standard, but Elon Musk has previously said the platform “should probably do it”
A cornerstone of this plan involves getting online platforms to adopt the standard. X, which has attracted regulatory scrutiny as a hotbed for spreading misinformation, isn’t a member of the C2PA initiative and seemingly offers no alternative. But X owner Elon Musk does appear willing to get behind it. “That sounds like a good idea, we should probably do it,” Musk said when pitched by Parsons at the 2023 AI Safety Summit. “Some way of authenticating would be good.”
Even if, by some miracle, we were to wake up tomorrow in a tech landscape where every platform, camera, and creative application supported the C2PA standard, denialism is a potent, pervasive, and potentially insurmountable obstacle. Providing people with documented, evidence-based information won’t help if they just discount it. Misinformation can even be utterly baseless, as seen by how readily Trump supporters believed accusations about Harris supposedly faking her rally crowds, despite widespread evidence proving otherwise. Some people will just believe what they want to believe.
But a cryptographic labeling system is likely the best approach we currently have to reliably identify authentic, manipulated, and artificially generated content at scale. Alternative pattern-analysis methods, such as online AI detection services, are notoriously unreliable. “Detection is probabilistic at best — we do not believe that you will get a detection mechanism where you can upload any image, video, or digital content and get 99.99 percent accuracy in real-time and at scale,” Ibrahim says. “And while watermarking can be robust and highly effective, in our view it isn’t interoperable.”
No system is perfect, though, and even more robust options like the C2PA standard can only do so much. Image metadata can be easily stripped simply by taking a screenshot, for example — for which there is currently no solution — and its effectiveness is otherwise dictated by how many platforms and products support it.
“None of it is a panacea,” Ibrahim says. “It will mitigate the downside risk, but bad actors will always be there using generative tools to try and deceive people.”



Missing Scissors Cause 36 Flight Cancellations In Japan

An anonymous reader quotes a report from The Register: Thirty-six flights were cancelled at Japan’s New Chitose airport on Saturday after a pair of scissors went missing. Japanese media report that retail outlets at the airport — which serves the regional city of Chitose on Japan’s northernmost island, Hokkaido — are required to store scissors in a locker. When staff need to cut something, they withdraw the scissors and then replace them after they’re done snipping. But last Saturday, an unnamed retailer at the airport was unable to find a pair of scissors. A lengthy search ensued, during which security checks for incoming passengers were paused for at least two hours.

Chaos ensued as queues expanded, passengers were denied entry, and airport authorities scrambled to determine whether the scissors had been swiped by somebody with malicious intent. The incident saw over 200 flights delayed, and 36 cancelled altogether. The mess meant some artists didn’t appear at a music festival. Happily, the scissors were eventually found — in the very same shop from which they had gone missing, and not in the hands of someone nefarious. But it took time for authorities to verify the scissors were the missing cutters and not another misplaced pair.

Read more of this story at Slashdot.



Ring’s new video doorbell offers premium features for an entry-level price

The new Ring Battery Doorbell is available to pre-order now, with shipping due to start in early September.

Ring has launched a new entry-level battery-operated video doorbell with features like full-height video that you’d previously only find in much more expensive models. 

In terms of specs, the new Ring Battery Doorbell is similar to the Ring Battery Doorbell Plus, which launched last year. That includes head-to-toe HD video with a 150-degree by 150-degree field of view so you can get a proper view of the person at your door. The wide field of view also makes it easier to see packages left on the ground, which is particularly useful if you have a Ring Protect subscription with Package Alerts.

There’s color night vision so you can see who has visited after dark, and like all other Ring doorbells, it gives you a live video feed through the Ring mobile app, two-way talk, and motion detection with alerts. This supports custom motion zones, but it’s worth noting that this feature is not the same as the radar-powered 3D motion detection offered by the Ring Battery Doorbell Pro, which also lets you see where people have been with an aerial view of your home.

The new Ring Battery Doorbell offers a much wider field of view than other entry-level devices (Image credit: Ring)

Spot the difference

One feature that’s new for the Ring Battery Doorbell is a specially designed quick-release mount, which makes it easier to take down the device for charging. Push the doorbell into the mount to install it, then use the push-pin tool (included in the box) when you need to remove it.

The biggest difference between the new doorbell and existing models, however, is the price. The Battery Doorbell Plus retails at $149.99 / £129.99 (about AU$230), but the new Battery Doorbell is only $99.99 (about £80 / AU$150).

The Ring Battery Doorbell is available to pre-order now direct from Ring and from Amazon in the US, and will begin shipping on September 4. Official pricing and shipping details for other territories have not yet been confirmed, but hopefully it will be available outside the US soon.

You might also like…

Ring Battery Doorbell Pro delivers serious smart security features minus the wiring
How to install a Ring doorbell: a step by step guide
How to change your Ring doorbell sound


The Impact of Major Acquisitions on Australia’s Gaming Tech Ecosystem

Australia’s gaming industry is burgeoning, but some aspects may cause concern across a wider range of people. According to figures released by the Australian Game Developer Survey (AGDS) at the end of 2023, revenues for the industry are up a […]
The post The Impact of Major Acquisitions on Australia’s Gaming Tech Ecosystem first appeared on Tech Startups.



Get Your Hands on a Dyson V15 Detect, One of Our Favorite Vacuums, With Up to $180 Off

Power around your house and clean those floors with ease with this cordless vacuum.



Samsung wants you to try the Galaxy Z Fold 6 experience using two phones

You can now try the Galaxy Z Fold experience at home… as long as you have two phones lying around.

Samsung is offering all Android users the chance to try out the Galaxy Z Fold 6 experience – or at least as close as you can get with two regular smartphones. 

Thanks to a new update, foldable-curious Android users can get a sense of the Galaxy Z Fold 6’s large inner screen by pairing two regular ‘slab’ phones through the Try Galaxy app.

Each phone then acts as if it were half of the Z Fold 6’s 7.6-inch inner screen, allowing for large-scale video playback and multitasking.

How to install Try Galaxy

(Image credit: Future)

Try Galaxy isn’t available on the Google Play Store. To try the Fold Experience feature for yourself, first head to the Try Galaxy website and scan the QR code to install the app on the two phones you intend to use.

Once the app has installed, swipe across to the second home screen and tap the Fold Experience icon. The app will walk you through assigning each phone as the left or right side of the screen.

You can then pair the two phones with a provided code.

Try Galaxy simulates Samsung’s One UI, a variant of Android, providing users with the chance to try out exclusive apps and AI demonstrations, including Circle to Search and Live Translate.

Try before you buy

In our Samsung Galaxy Z Fold 6 review we found it to be Samsung’s best foldable yet, and the new Try Galaxy update might just be as close as you can get to the Z Fold 6 without seeing it in person.

As SamMobile reports, the update also added support for new products unveiled at Samsung’s Unpacked event, which took place in Paris in July. These include the all-new Galaxy Ring wearable, Galaxy Watch 7, Galaxy Watch Ultra, and Galaxy Buds 3.

If foldable phones pique your curiosity, be sure to take a look at TechRadar’s guide to the best foldable phones. Or keep it strictly Samsung with our guide to the best Samsung phones.

You might also like

Samsung Galaxy Z Fold 6 Slim may switch to a titanium frame
The Samsung Galaxy S25 Ultra could have a bigger screen to match the iPhone 16 Pro Max
It looks as though the Samsung Galaxy S24 FE is edging closer to launching


AMD explains its AI PC strategy

Over the past few years, the concept of “AI PCs” has gone from sounding like a desperate attempt to revive the computer industry, to something that could actually change the way we live with our PCs. To recap, an AI PC is any system running a CPU that’s equipped with a neural processing unit (NPU), which is specially designed for AI workloads. NPUs have been around for years in mobile hardware, but AMD was the first company to bring them to x86 PCs with the Ryzen Pro 7040 chips.
Now with its Ryzen AI 300 chips, AMD is making its biggest push yet for AI PCs — something that could pay off in the future as we see more AI-driven features like Microsoft’s Recall. (Which, it’s worth noting, has also been dogged with privacy concerns and subsequently delayed.) To get a better sense of how AMD is approaching the AI PC era, I chatted with Rakesh Anigundi, the Ryzen AI product lead, and Jason Banta, CVP and GM of Client OEM. You can listen to the full interview on the Engadget Podcast.

My most pressing question: How does AMD plan to get developers onboard with building AI-powered features? NPUs aren’t exactly a selling point if nobody is making apps that use them, after all. Anigundi said he was well aware that developers broadly “just want things to work,” so the company built a strategy around three pillars: A robust software stack; performant hardware; and bringing in open-source solutions.
“We are of the philosophy that we don’t want to invent standards, but follow the standards,” Anigundi said. “That’s why we are really double clicking on ONNX, which is a cross platform framework to extract the maximum performance out of our system. This is very closely aligned with how we are working with Microsoft, enabling their next generation of experiences and also OEMs. And on the other side, where there’s a lot of innovation happening with the smaller ISVs [independent software vendors], this strategy works out very well as well.”
He points to AMD’s recently launched Amuse 2.0 beta as one way the company is showing off the AI capabilities of its hardware. It’s a simple program for generating AI images, and it runs entirely on your NPU-equipped device, with no need to reach out to OpenAI’s DALL·E or Google’s Gemini in the cloud.
AMD
AMD’s Banta reiterated the need for a great tool set and software stack, but he pointed out that the company also works closely with partners like Microsoft on prototype hardware to ensure the quality of the customer experience. “[Consumers] can have all the hardware, they can have all the tools, they can have all the foundational models, but making that end customer experience great requires a lot of direct one to one time between us and those ISV partners.”
In this case, Banta is also referring to AMD’s relationship with Microsoft when it comes to building Copilot+ experiences for its systems. While we’ve seen a handful of AI features on the first batch of Qualcomm Snapdragon-powered Copilot+ machines, like the new Surface Pro and Surface Laptop, they’re not available yet on Copilot+ systems running x86 chips from AMD and Intel.
“We’re making that experience perfect,” Banta said. At this point, you can consider Ryzen AI 300 machines to be “Copilot+ ready,” but not yet fully Copilot+ capable. (As I mentioned in my Surface Pro review, Microsoft’s current AI features are fairly basic, and that likely won’t change until Recall is officially released.)
As for those rumors around AMD developing an Arm-based CPU, the company’s executives, naturally, didn’t reveal much. “Arm is a close partner of AMD’s,” Banta said. “We work together on a number of solutions across our roadmaps… As far as [the] overall CPU roadmap, I can’t really talk about what’s coming around the corner.” But given that the same rumor points to NVIDIA also developing its own Arm chip, and considering the astounding performance we’ve seen from Apple and Qualcomm’s latest mobile chips, it wouldn’t be too surprising to see AMD go down the same Arm-paved road. This article originally appeared on Engadget at https://www.engadget.com/computing/amd-explains-its-ai-pc-strategy-123004804.html?src=rss

Over the past few years, the concept of “AI PCs” has gone from sounding like a desperate attempt to revive the computer industry, to something that could actually change the way we live with our PCs. To recap, an AI PC is any system running a CPU that’s equipped with a neural processing unit (NPU), which is specially designed for AI workloads. NPUs have been around for years in mobile hardware, but AMD was the first company to bring them to x86 PCs with the Ryzen Pro 7040 chips.

Now with its Ryzen AI 300 chips, AMD is making its biggest push yet for AI PCs — something that could pay off in the future as we see more AI-driven features like Microsoft’s Recall. (Which, it’s worth noting, has also been dogged with privacy concerns and subsequently delayed.) To get a better sense of how AMD is approaching the AI PC era, I chatted with Ryzen AI lead Rakesh Anigundi, the Ryzen AI product lead and Jason Banta, CVP and GM of Client OEM. You can listen to the full interview on the Engadget Podcast.

My most pressing question: How does AMD plan to get developers on board with building AI-powered features? NPUs aren’t exactly a selling point if nobody is making apps that use them, after all. Anigundi said he was well aware that developers broadly “just want things to work,” so the company built a strategy around three pillars: a robust software stack, performant hardware, and open-source solutions.

“We are of the philosophy that we don’t want to invent standards, but follow the standards,” Anigundi said. “That’s why we are really double clicking on ONNX, which is a cross platform framework to extract the maximum performance out of our system. This is very closely aligned with how we are working with Microsoft, enabling their next generation of experiences and also OEMs. And on the other side, where there’s a lot of innovation happening with the smaller ISVs [independent software vendors], this strategy works out very well as well.”
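In practice, targeting ONNX means apps run the same model file everywhere and let ONNX Runtime pick the best available backend at load time. The sketch below shows one common pattern for that: prefer an NPU execution provider and fall back to the CPU. It is a minimal illustration, not AMD's actual code; the provider names follow ONNX Runtime conventions (AMD's Ryzen AI NPU is exposed via the Vitis AI execution provider), and the model path is hypothetical.

```python
# Minimal sketch: choose ONNX Runtime execution providers, NPU first,
# CPU as a guaranteed fallback. Provider names are assumptions based on
# ONNX Runtime conventions, not confirmed by the article.

CPU_FALLBACK = "CPUExecutionProvider"

def pick_providers(preferred, available):
    """Keep the preferred providers that are actually installed,
    preserving preference order; fall back to CPU if none match."""
    chosen = [p for p in preferred if p in available]
    return chosen or [CPU_FALLBACK]

# In a real app (assuming the onnxruntime package is installed):
#   import onnxruntime as ort
#   providers = pick_providers(
#       ["VitisAIExecutionProvider", CPU_FALLBACK],  # NPU first
#       ort.get_available_providers(),
#   )
#   session = ort.InferenceSession("model.onnx", providers=providers)
```

The same ONNX file then runs on an NPU-equipped Ryzen AI machine or a plain CPU box without code changes, which is the cross-platform benefit Anigundi describes.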

He pointed to AMD’s recently launched Amuse 2.0 beta as one way the company is showing off the AI capabilities of its hardware. It’s a simple program for generating AI images, and it runs entirely on your NPU-equipped device, with no need to reach out to OpenAI’s DALL-E or Google’s Gemini in the cloud.

AMD’s Banta reiterated the need for a great tool set and software stack, but he pointed out that the company also works closely with partners like Microsoft on prototype hardware to ensure the quality of the customer experience. “[Consumers] can have all the hardware, they can have all the tools, they can have all the foundational models, but making that end customer experience great requires a lot of direct one to one time between us and those ISV partners.”

In this case, Banta is also referring to AMD’s relationship with Microsoft when it comes to building Copilot+ experiences for its systems. While we’ve seen a handful of AI features on the first batch of Qualcomm Snapdragon-powered Copilot+ machines, like the new Surface Pro and Surface Laptop, they’re not available yet on Copilot+ systems running x86 chips from AMD and Intel.

“We’re making that experience perfect,” Banta said. At this point, you can consider Ryzen AI 300 machines to be “Copilot+ ready,” but not yet fully Copilot+ capable. (As I mentioned in my Surface Pro review, Microsoft’s current AI features are fairly basic, and that likely won’t change until Recall is officially released.)

As for those rumors around AMD developing an Arm-based CPU, the company’s executives, naturally, didn’t reveal much. “Arm is a close partner of AMD’s,” Banta said. “We work together on a number of solutions across our roadmaps… As far as [the] overall CPU roadmap, I can’t really talk about what’s coming around the corner.” But given that the same rumor points to NVIDIA also developing its own Arm chip, and considering the astounding performance we’ve seen from Apple and Qualcomm’s latest mobile chips, it wouldn’t be too surprising to see AMD go down the same Arm-paved road. 

This article originally appeared on Engadget at https://www.engadget.com/computing/amd-explains-its-ai-pc-strategy-123004804.html?src=rss

UK ‘silent hangar’ to battle-ready aircraft, vehicles amid Russian GPS assault

In March, a plane transporting UK defence minister Grant Shapps to Poland had its GPS signal jammed as it flew near Russian territory. While the disruption forced the plane to use alternative ways to navigate for over half an hour, the British aircraft was most likely not the intended target. Russia regularly jams satellite signals to disrupt enemy equipment — from drones to tanks. These attacks often spill over to other GPS users in the area, including commercial aircraft. The UK government is responding to this rising threat by building a massive anti-jamming facility in Wiltshire, it announced today. The…

This story continues at The Next Web

