Month: November 2023

Meta sues FTC, hoping to block ban on monetizing kids’ Facebook data

Accused of violating kids’ privacy, Facebook owner challenges FTC authority.

(Image credit: Getty Images | Chesnot)

Meta sued the Federal Trade Commission yesterday, challenging the agency’s authority to impose new privacy obligations on the social media firm.

The complaint stems from the FTC’s May 2023 allegation that Meta-owned Facebook violated a 2020 privacy settlement and the Children’s Online Privacy Protection Act. The FTC proposed changes to the 2020 privacy order that would, among other things, prohibit Facebook from monetizing data it collects from users under 18.

Meta’s lawsuit against the FTC challenges what it calls “the structurally unconstitutional authority exercised by the FTC through its Commissioners in an administrative reopening proceeding against Meta.” It was filed against the FTC, Chair Lina Khan, and other commissioners in US District Court for the District of Columbia. Meta is seeking a preliminary injunction to stop the FTC proceeding pending resolution of the lawsuit.


How to Get Peacock Premium for Free – CNET

With a free subscription, you’ll have full access to everything on the app — like new Hallmark movies and sports — if you’re eligible.


Apple Adds New Features to Final Cut Pro, iMovie, Motion, Compressor, and Logic Pro

Apple today released updates for its Final Cut Pro, iMovie, Motion, Compressor, and Logic Pro software, introducing new features and optimizations. Apple announced the Final Cut Pro updates for iPad and Mac earlier this month and has now launched them.

Final Cut Pro on the Mac has been updated with organizational refinements like automatic timeline scrolling, which keeps clips in view during playback; the view can be adjusted using keyboard shortcuts or the Zoom option.

The organization of the timeline is viewable at a glance, and distinct colors make it easier to differentiate clips by assigned role. Apple also added tools for cleaning up complex timeline sections and for fine-tuning edits by combining overlapping connected clips into a single connected storyline.

On Apple silicon Macs, exporting projects in H.264 and HEVC is faster than before as Final Cut Pro is able to send video segments to available media engines for simultaneous processing.
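
Apple hasn’t published how Final Cut Pro schedules work across its media engines, but the underlying idea (split an export into segments, encode them simultaneously, stitch the results) can be sketched in Python. Everything below is an illustrative assumption rather than Apple’s implementation: it assumes ffmpeg is installed and picks an arbitrary 30-second chunk size.

```python
# Sketch of segment-parallel encoding, the same principle Final Cut Pro
# applies across media engines. All parameters are illustrative guesses.
import subprocess
from concurrent.futures import ProcessPoolExecutor

CHUNK = 30  # seconds per segment (arbitrary choice for the sketch)

def encode_segment(job):
    src, start, out = job
    # Encode one slice of the source to HEVC, independently of the others.
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-t", str(CHUNK),
         "-i", src, "-c:v", "libx265", "-an", out],
        check=True, capture_output=True,
    )
    return out

def parallel_export(src: str, duration: int) -> list[str]:
    jobs = [(src, start, f"seg_{i:04d}.mp4")
            for i, start in enumerate(range(0, duration, CHUNK))]
    # One worker per CPU core, standing in for the chip's media engines.
    # The finished segments would then be stitched back together, e.g.
    # with ffmpeg's concat demuxer.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(encode_segment, jobs))
```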

On the iPad, the updated version of Final Cut Pro adds voiceover capabilities that let creators record narration and other audio directly into the timeline. In pro camera mode, stabilization now smooths shaky footage, and there are new options for combining connected clips. Editing is sped up by new keyboard shortcuts for voiceover and clip grouping, plus new color-grading presets and titles.

iMovie and Compressor gain the same export optimizations on M1 Max, M1 Ultra, M2 Max, M2 Ultra, and M3 Max machines. Compressor also includes new options for creating stereoscopic packages for the iTunes Store, and support for JSON and XML when batch exporting submissions using the command line.

Motion gains improvements to the Object Tracker on Macs with Apple silicon, while Logic Pro receives bug fixes.

Final Cut Pro for Mac is priced at $300, and Final Cut Pro for iPad is priced at $4.99 per month or $49 per year. Compressor and Motion for Mac are priced at $50 each, Logic Pro is $200, and iMovie is free.

This article, “Apple Adds New Features to Final Cut Pro, iMovie, Motion, Compressor, and Logic Pro” first appeared on MacRumors.com

Can digital watermarking protect us from generative AI?

The Biden White House recently issued its latest executive order designed to establish a guiding framework for generative artificial intelligence development — including content authentication and using digital watermarks to indicate when digital assets made by the federal government are computer generated. Here’s how it and similar copy protection technologies might help content creators more securely authenticate their online works in an age of generative AI misinformation.

A quick history of watermarking

Analog watermarking techniques were first developed in Italy in 1282. Papermakers would implant thin wires into the paper mold, creating almost imperceptibly thinner areas of the sheet that became apparent when held up to a light. Analog watermarks were used not only to authenticate where and how a company’s products were produced, but also to pass concealed, encoded messages. By the 18th century, the technology had spread to government use as a means of preventing currency counterfeiting. Color watermarking techniques, which sandwich dyed materials between layers of paper, were developed around the same period.

Though the term “digital watermarking” wasn’t coined until 1992, the technology behind it was first patented by the Muzak Corporation in 1954. The system it built, and used until the company was sold in the 1980s, identified music owned by Muzak by using a “notch filter” to block the audio signal at 1 kHz in specific bursts, like Morse code, to store identification information.
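
A rough, toy reconstruction of that scheme in Python: bits are written by notching out a narrow band around 1 kHz in chosen quarter-second slots, then read back by checking how much 1 kHz energy each slot retains. The slot length, filter width and thresholding here are guesses for illustration; the 1954 patent’s actual parameters differ.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

RATE = 44_100        # samples per second
SLOT = RATE // 4     # each bit occupies a quarter-second slot

def embed_bits(audio, bits):
    # Design a narrow band-stop filter centered on 1 kHz.
    b, a = iirnotch(w0=1000.0, Q=30.0, fs=RATE)
    out = audio.astype(float).copy()
    for i, bit in enumerate(bits):
        if bit:  # a "1" = notch this slot; a "0" = leave it untouched
            seg = slice(i * SLOT, (i + 1) * SLOT)
            out[seg] = filtfilt(b, a, out[seg])
    return out

def read_bits(audio, n_bits):
    # Toy detector: correlate each slot against a 1 kHz tone; a notched
    # slot retains less 1 kHz energy. Assumes the message mixes 0s and 1s.
    t = np.arange(SLOT) / RATE
    probe = np.sin(2 * np.pi * 1000 * t)
    levels = [abs(np.dot(audio[i * SLOT:(i + 1) * SLOT], probe))
              for i in range(n_bits)]
    threshold = np.median(levels)
    return [1 if lvl < threshold else 0 for lvl in levels]
```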

Advertisement monitoring and audience measurement firms like the Nielsen Company have long used watermarking techniques to tag the audio tracks of television shows to track what American households are watching. These steganographic methods have made their way into the modern Blu-ray standard (the Cinavia system), as well as into government applications like authenticating driver’s licenses, national currencies and other sensitive documents. The Digimarc Corporation, for example, has developed a watermark for packaging that prints a product’s barcode nearly invisibly all over the box, allowing any digital scanner in line of sight to read it. It has also been used in applications ranging from brand anti-counterfeiting to more efficient materials recycling.

The here and now

Modern digital watermarking operates on the same principles, imperceptibly embedding additional information into a piece of content (be it image, video or audio) using special encoding software. These watermarks are easily read by machines but are largely invisible to human users. The practice differs from existing cryptographic protections like product keys or software protection dongles in that watermarks don’t actively prevent the unauthorized alteration or duplication of a piece of content; rather, they provide a record of where the content originated or who the copyright holder is.
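
As a minimal sketch of the principle, the classic least-significant-bit technique below hides a message in the bottom bit of each channel value: invisible to the eye, yet trivially machine-readable. Production watermarks such as Digimarc’s are far more robust to compression, cropping and screenshots; this only demonstrates the embed-and-read idea.

```python
import numpy as np
from PIL import Image

def embed(image_path, message, out_path):
    pixels = np.array(Image.open(image_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.reshape(-1)                 # a view into the pixel array
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    Image.fromarray(pixels).save(out_path, "PNG")  # lossless, keeps the bits

def extract(image_path, n_bytes):
    flat = np.array(Image.open(image_path).convert("RGB")).reshape(-1)
    return np.packbits(flat[:n_bytes * 8] & 1).tobytes()

# embed("photo.png", b"(c) 2023 Jane Doe", "marked.png")  # looks identical
# extract("marked.png", 17)                               # -> b"(c) 2023 Jane Doe"
```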

The system is not perfect, however. “There is nothing, literally nothing, to protect copyrighted works from being trained on [by generative AI models], except the unverifiable, unenforceable word of AI companies,” Dr. Ben Zhao, Neubauer Professor of Computer Science at University of Chicago, told Engadget via email.

“There are no existing cryptographic or regulatory methods to protect copyrighted works — none,” he said. “Opt-out lists have been made a mockery by stability.ai (they changed the model name to SDXL to ignore everyone who signed up to opt out of SD 3.0), and Facebook/Meta, who responded to users on their recent opt-out list with a message that said ‘you cannot prove you were already trained into our model, therefore you cannot opt out.’”

Zhao says that while the White House’s executive order is “ambitious and covers tremendous ground,” plans laid out to date by the White House have lacked much in the way of “technical details on how it would actually achieve the goals it set.”

He notes that “there are plenty of companies who are under no regulatory or legal pressure to bother watermarking their genAI output. Voluntary measures do not work in an adversarial setting where the stakeholders are incentivized to avoid or bypass regulations and oversight.”

“Like it or not, commercial companies are designed to make money, and it is in their best interests to avoid regulations,” he added.

We could also very easily see the next presidential administration come into office and dismantle Biden’s executive order and all of the federal infrastructure that went into implementing it, since an executive order lacks the constitutional standing of congressional legislation. But don’t count on the House and Senate doing anything about the issue either.

“Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” Anu Bradford, a law professor at Columbia University, told MIT Tech Review. So far, enforcement mechanisms for these watermarking schemes have been generally limited to pinky swears by the industry’s major players.

How Content Credentials work

With the wheels of government turning so slowly, industry alternatives are proving necessary. Microsoft, the New York Times, CBC/Radio-Canada and the BBC began Project Origin in 2019 to protect the integrity of content, regardless of the platform on which it’s consumed. At the same time, Adobe and its partners launched the Content Authenticity Initiative (CAI), approaching the issue from the creator’s perspective. Eventually CAI and Project Origin combined their efforts to create the Coalition for Content Provenance and Authenticity (C2PA). From this coalition of coalitions came Content Credentials (“CR” for short), which Adobe announced at its Max event in 2021. 

CR attaches additional information to an image, in the form of a cryptographically secure manifest, whenever it is exported or downloaded. The manifest pulls data from the image or video header — the creator’s information, where and when it was taken, what device took it, whether generative AI systems like DALL-E or Stable Diffusion were used, and what edits have been made since — allowing websites to check that information against the provenance claims made in the manifest. When combined with watermarking technology, the result is a unique authentication method that cannot be easily stripped the way EXIF data and other metadata (i.e., the technical details automatically added by the software or device that took the image) can be when uploaded to social media sites, on account of the cryptographic file signing. Not unlike blockchain technology!
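
The real C2PA manifest format is considerably more involved, but the core pattern (hash the exact pixels, attach provenance claims, sign the bundle) fits in a few lines. The field names and claims below are invented for illustration; only the mechanism mirrors the standard.

```python
import json
from hashlib import sha256
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(image_bytes, claims, key):
    manifest = {
        "content_hash": sha256(image_bytes).hexdigest(),  # binds claims to pixels
        "claims": claims,  # creator, capture device, edits, genAI use, ...
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}

key = Ed25519PrivateKey.generate()
signed = make_manifest(
    b"...raw image bytes...",
    {"creator": "Jane Doe", "generative_ai": "none", "edits": ["crop"]},
    key,
)
# A verifier recomputes sha256 over the file it received and checks the
# signature against the publisher's public key; any mismatch means the
# pixels or the claims were tampered with after signing.
```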

Metadata doesn’t typically survive common workflows as content is shuffled around the internet because, Digimarc Chief Product Officer Ken Sickles explained to Engadget, many online systems weren’t built to support or read it and so simply ignore the data.

“The analogy that we’ve used in the past is one of an envelope,” Digimarc Chief Technology Officer Tony Rodriguez told Engadget. Like an envelope, the valuable content that you want to send is placed inside, “and that’s where the watermark sits. It’s actually part of the pixels, the audio, of whatever that media is. Metadata, all that other information, is being written on the outside of the envelope.”

Should someone manage to remove the watermark (not that difficult, it turns out: just screenshot the image and crop out the icon), the credentials can be reattached through Verify, which runs machine vision algorithms against an uploaded image to find matches in its repository. If the uploaded image can be identified, the credentials are reapplied. If users encounter the image in the wild, they can check its credentials by clicking on the CR icon to pull up the full manifest, verify the information for themselves and make a more informed decision about what online content to trust.
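
Digimarc hasn’t said exactly how Verify’s matching works; a common building block for this kind of find-it-again lookup is a perceptual hash, sketched below. Unlike a cryptographic hash, visually similar images produce nearly identical fingerprints, so a screenshot or recompression still lands within a few bits of the stored original. The 8x8 grid and 10-bit threshold are illustrative choices.

```python
import numpy as np
from PIL import Image

def average_hash(path, size=8):
    gray = np.array(Image.open(path).convert("L").resize((size, size)))
    bits = (gray > gray.mean()).flatten()   # 64 brighter/darker-than-average bits
    return int.from_bytes(np.packbits(bits).tobytes(), "big")

def distance(a, b):
    return bin(a ^ b).count("1")            # how many of the 64 bits differ

# A Verify-style repository would map fingerprints to stored manifests and
# treat anything within, say, 10 differing bits as the same picture:
# if distance(average_hash("upload.png"), stored_fingerprint) < 10:
#     ...reattach the stored Content Credentials...
```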

Sickles envisions these authentication systems operating in coordinated layers, like a home security system that pairs locks and deadbolts with cameras and motion sensors to increase its coverage. “That’s the beauty of Content Credentials and watermarks together,” Sickles said. “They become a much, much stronger system as a basis for authenticity and understanding provenance around an image than they would individually.” Digimarc freely distributes its watermark detection tool to generative AI developers, and is integrating the Content Credentials standard into its existing Validate online copy protection platform.

In practice, we’re already seeing the standard incorporated into commercial products like the Leica M11-P, which automatically affixes a CR credential to images as they’re taken. The New York Times has explored its use in journalistic endeavors, Reuters employed it for its ambitious 76 Days feature, and Microsoft has added it to Bing Image Creator and its Bing AI chatbot as well. Sony is reportedly working to incorporate the standard into its Alpha 9 III digital cameras, with enabling firmware updates for the Alpha 1 and Alpha 7S III models arriving in 2024. CR is also available in Adobe’s expansive suite of photo and video editing tools, including Illustrator, Adobe Express, Stock and Behance. The company’s own generative AI, Firefly, will automatically include non-personally identifiable information in a CR for some features like Generative Fill (essentially noting that the generative feature was used, but not by whom) but will otherwise be opt-in.

That said, the C2PA standard and front-end Content Credentials are barely out of development and currently exceedingly difficult to find on social media. “I think it really comes down to the wide-scale adoption of these technologies and where it’s adopted; both from a perspective of attaching the content credentials and inserting the watermark to link them,” Sickles said.

Nightshade: The CR alternative that’s deadly to databases

Some security researchers have had enough of waiting around for laws to be written or industry standards to take root, and have instead taken copy protection into their own hands. Teams from the University of Chicago’s SAND Lab, for example, have developed a pair of downright nasty copy protection systems for use specifically against generative AIs.

Zhao and his team developed Glaze, a system for creators that disrupts a generative AI’s style mimicry (by exploiting the concept of adversarial examples). It changes the pixels in a given artwork in a way that is undetectable to the human eye but appears radically different to a machine vision system. When a generative AI system is trained on these “glazed” images, it becomes unable to exactly replicate the intended style of art — cubism becomes cartoony, abstract styles are transformed into anime. This could prove a boon especially to well-known and often-imitated artists, in keeping their signature styles commercially safe.
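
Glaze’s actual optimization is more sophisticated and targets the feature extractors of image generators, but the adversarial-example primitive it builds on can be shown in a few lines of PyTorch: nudge every pixel a tiny, bounded amount along the gradient that most changes a model’s output. The untrained stand-in model below exists only to keep the sketch self-contained.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))  # untrained stand-in
image = torch.rand(1, 3, 64, 64, requires_grad=True)             # the "artwork"
target = torch.tensor([7])                                        # arbitrary label

loss = nn.functional.cross_entropy(model(image), target)
loss.backward()                      # gradient of the loss w.r.t. each pixel

epsilon = 4 / 255                    # tiny per-pixel budget, below visibility
cloaked = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
# `cloaked` looks the same to a person, but its features have shifted for
# the model: the primitive that style-cloaking tools like Glaze build upon.
```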

While Glaze focuses on preventative action, deflecting the efforts of illicit data scrapers, SAND Lab’s newest tool is wholeheartedly punitive. Dubbed Nightshade, the system subtly changes the pixels of a given image, but instead of confusing the models trained on it, as Glaze does, the poisoned image corrupts the training database it’s ingested into wholesale, forcing developers to go back through and manually remove each damaging image to resolve the issue — otherwise the system will simply retrain on the bad data and suffer the same issues again.

The tool is meant as a “last resort” for content creators but cannot be used as a vector of attack. “This is the equivalent of putting hot sauce in your lunch because someone keeps stealing it out of the fridge,” Zhao argued.

Zhao has little sympathy for the owners of models that Nightshade damages. “The companies who intentionally bypass opt-out lists and do-not-scrape directives know what they are doing,” he said. “There is no ‘accidental’ download and training on data. It takes a lot of work and full intent to take someone’s content, download it and train on it.”

This article originally appeared on Engadget at https://www.engadget.com/can-digital-watermarking-protect-us-from-generative-ai-184542396.html?src=rss

Read More 

Over 75% of Web3 Games ‘Failed’ in Last Five Years

Web3 research and analytics firm CoinGecko: Around 2,127 web3 games have failed in the five years since the GameFi niche emerged, representing 75.5% of the 2,817 web3 games launched. In other words, 3 out of every 4 web3 games have become inactive. The average annual failure rate for web3 games from 2018 to 2023 has been 80.8%, based on the number of web3 games that failed compared to the number launched.
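
CoinGecko’s headline ratio checks out:

```python
failed, launched = 2127, 2817
print(f"{failed / launched:.1%}")  # -> 75.5%, i.e. roughly 3 in every 4 games
```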

Read more of this story at Slashdot.

Meta’s “overpriced” ad-free subscriptions make privacy a “luxury good”: EU suit

Meta’s terms for data collection are still too vague, consumer groups allege.

(Image credit: NurPhoto)

Backlash over Meta’s ad-free subscription model in the European Union has begun just one month into its launch.

On Thursday, Europe’s largest consumer group, the European Consumer Organization (BEUC), filed a complaint with the network of consumer protection authorities. In a press release, BEUC alleges that Meta’s subscription fees for ad-free access to Facebook and Instagram are so unreasonably high that they breach laws designed to protect user privacy as a fundamental right.

“Meta has been rolling out changes to its service in the EU in November 2023, which require Facebook and Instagram users to either consent to the processing of their data for advertising purposes by the company or pay in order not to be shown advertisements,” BEUC’s press release said. “The tech giant’s pay-or-consent approach is unfair and must be stopped.”

Need to Clean a Scorched Cast-Iron Pan? Use This Common Pantry Staple – CNET

Put down the soap and back away slowly. The secret to safely cleaning a cast-iron skillet with a scorched surface or caked-on foods is sitting in your cupboard.

YouTube Music brings personalized album art to its 2023 Recap

YouTube Music users who have seen their Spotify- and Apple Music-using friends share their listening stats from this year can now join the party. YouTube Music Recap is now live and you can access it from the 2023 Recap page in the app. You’ll be able to see your top artists, songs, moods, genres, albums, playlists and more from 2023. There’s also the option to view your Recap in the main YouTube app, along with some other new features for 2023.

This year, you’ll be able to add custom album art. YouTube will create this using your top song and moods from the year, as well as your energy score. The platform will mash together colors, vibes and visuals to create a representation of your year in music.

(Image credit: YouTube Music)

YouTube says another feature will match your mood with your top songs of the year. You might see, for instance, the percentages of songs you listened to that are classed as upbeat, fun, dancey or chill. Last but not least, you can use snaps from Google Photos to create a customized visual that sums up your year in music (and perhaps your year in travel too).

This article originally appeared on Engadget at https://www.engadget.com/youtube-music-brings-personalized-album-art-to-its-2023-recap-182904330.html?src=rss
