Apple updates Logic Pro with new sounds and search features
Apple Logic Pro for Mac 11.1 with Quantec Room Simulator | Image: Apple
Apple today announced some minor updates to Logic Pro for both the Mac and the iPad, including the ability to search for plug-ins and sources and the addition of more analog-simulating sounds.
In Logic Pro for Mac 11.1 and Logic Pro for iPad 2.1, you can now reorder channel strips and plug-ins in the mixer and plug-in windows to make it easier to organize the layout of an audio mix.
As for the new sounds, Apple added a library of analog synthesizer samples called Modular Melodies, akin to the Modular Rhythms pack already found in Logic.
A more exciting sonic addition is the new Quantec Room Simulator (QRS) plug-in, which emulates the vintage digital reverb hardware of the same name, found in professional recording studios all over the world. Apple has acquired the technology for the classic QRS model and the later YardStick models to integrate into this software.
Image: Apple
I wish the QRS plug-in looked like the real-life reverb unit
Specific to Logic Pro for Mac, you are now able to share a song to the Mac’s Voice Memos app — which could be a great feature when Voice Memos gets its multitrack option on the iPhone in iOS 18.2.
Added to the iPad version of Logic Pro is the ability to add your own local third-party sample folders to the browser window, making it easier to bring external audio files into tracks and sampler plug-ins.
These upgrades are small for current Logic users, but overall they make the digital audio workstation easier to use and add to its plethora of useful tools at no additional cost. Logic Pro for Mac 11.1 and Logic Pro for iPad 2.1 are available to download today.
Apple launches Final Cut Pro 11 with even more AI features
It’s been 25 years since the first Final Cut Pro was announced.
More than a decade after the launch of Final Cut Pro X, Apple’s video editing software is taking a step forward. The app is now being updated to Final Cut Pro 11, after dropping the number from its name for the past few years. The update includes new AI masking tools, the ability to generate captions directly in your timeline, spatial video editing features, and a set of workflow improvements. The new version is free for existing users and a $299 one-time purchase for new users. Final Cut Pro for iPad and Final Cut Camera are also getting some updates today.
I’ve spent the last week testing out these new features, and many of them are great improvements. I’ve been particularly impressed by the speed and accuracy of one new feature coming to the desktop: Magnetic Mask. With one click, you can easily isolate a subject, like a person, from the background and apply different color adjustments to that part of the footage.
I tested Magnetic Mask in various scenarios, like static talking head videos and fast-moving snowboarding footage. In each scenario, Final Cut Pro did a very good job of isolating the subjects. But don’t expect a pixel-perfect mask each time. I still had to jump in and do a few smaller adjustments to help it out. You can either manually fine-tune your mask with a brush or add or remove tracking points and let Final Cut Pro analyze the footage.
One thing that was impressive is that it automatically detected my flapping backpack straps.
I was impressed by the speed of the whole process. Granted, these were fairly short clips (about 45 seconds each), but each mask took less than a minute on my four-year-old 10-core M1 Pro MacBook Pro — a lot less time than the tedious and exhausting process of manually rotoscoping in After Effects.
I did notice that analysis slowed down significantly once I started screen recording my process. This feature will work on Intel-based Macs as well.
I am an avid user of Adobe’s Premiere Pro, but features like these always make me want to give Final Cut Pro another shot. I may not be left behind for long, though: Adobe announced a similar feature for Premiere earlier this year. DaVinci Resolve also already has a similar feature called Magic Mask.
The next new highlight is the ability to autogenerate captions in your timeline. Final Cut Pro does this using an Apple-trained language model, and the whole process takes place locally on-device without sending information to the cloud. The process is fast but not always accurate, and it often misspells common words. It fumbled proper nouns like “The Verge” and even common nouns like “machine,” which it rendered as “macine.” Those are just a few of many examples. There’s also no way to stylize your captions if you were hoping to add them to your TikToks. For that, you’ll need to look into some third-party plug-ins.
It is a good update, but I wish Apple went a step further and added text-based editing, which lets you edit videos solely by using text instead of on the timeline itself. Text-based editing in Premiere has helped me immensely when working on longer documentaries or sit-down interviews, and I wish it were possible in Final Cut Pro 11.
Other changes include the ability to edit spatial videos for the Vision Pro and some new keyboard shortcuts. My favorite is Option + Arrow Up / Arrow Down to move clips between layers. It’s the little things!
Final Cut Pro for iPad is also getting a few new updates. The AI-enabled “enhance light and color” tool that was initially released in Final Cut Pro 10.8 for the Mac is making its way to the iPad app. It is the fastest way to improve the color, contrast, and overall tonality of your footage.
The AI-enhanced light and color tool originally came out for Final Cut Pro for Mac but has made its way to the iPad version in this update.
In a few tests that I ran, the tool did a really good job of getting me started on my coloring process. It cleans up the overall exposure nicely and adds very subtle stylistic color choices. For far less subtle color grades, Apple is expanding the number of presets available in the app. In addition to those presets, Apple is also adding new modular transitions and new songs to its soundtrack library.
If you’re editing with the Apple Pencil Pro, you’ll finally be able to use all those new brushes released alongside the M4 iPad Pro, along with features like tilt recognition and haptic feedback. Haptic feedback works particularly well on the iPad, and I’m enjoying it more than I thought I would. It really makes the editing process feel a lot more tactile. In fact, I wish more gestures had some haptic feedback.
Lastly, there are some minor but significant workflow improvements. You can now resize the height of your clips in your timeline by using the pinch gesture, you can edit in 120fps timelines, and the picture-in-picture mode is dynamic. Apple also mentioned there are new keyboard shortcuts, but I’ve only found one: Render Entire Timeline.
I’m glad to see more frequent updates coming to the iPad version of Final Cut Pro, but there are still features that are desperately needed to make this app worth the $4.99-a-month subscription. At the top of my wish list are things like custom LUTs, better file management, and some of the other AI-powered features that have already made their way to the desktop version. Since my initial review, I’ve mostly started using DaVinci Resolve on the iPad, which continues to impress me with how similar it is to its desktop equivalent.
The trifecta of updates ends with Final Cut Camera, which can now film HEVC files in Apple Log — no need to stick with storage-hungry ProRes files anymore. HEVC Log capture works both for standalone capture and as part of a Live Multicam session. Final Cut Camera will also include LUT previews during recording, meaning you can monitor your exposure and color while filming in Apple Log.
Final Cut Camera can shoot in 120fps in Apple Log and has a new leveler to help you frame up your shots.
To make sure your framing is correct and aligned, Apple is adding a new level indicator to the app. The leveler includes tilt and roll indicators, plus crosshairs for your top-down and bottom-up shots.
The new AI features and workflow improvements mark significant steps forward for content creators, but they don’t address some of the community’s requests to fully compete with the likes of DaVinci and Adobe. I’d still like to see text-based editing, more robust coloring options, and custom captions. It will be interesting to see if these new updates convert any new users. Magnetic Mask alone could be enough of a reason to switch.
OpenAI reportedly plans to launch an AI agent early next year
Illustration by Cath Virginia / The Verge | Photos by Getty Images
OpenAI is preparing to release an autonomous AI agent that can control computers and perform tasks independently, code-named “Operator.” The company plans to debut it as a research preview and developer tool in January, according to Bloomberg.
This move intensifies the competition among tech giants developing AI agents: Anthropic recently introduced its “computer use” capability, while Google is reportedly preparing its own version for a December release. The timing of Operator’s eventual consumer release remains under wraps, but its development signals a pivotal shift toward AI systems that can actively engage with computer interfaces rather than just process text and images.
All the leading AI companies have promised autonomous AI agents, and OpenAI has hyped up the possibility recently. In a Reddit “Ask Me Anything” forum a few weeks ago, OpenAI CEO Sam Altman said “we will have better and better models,” but “I think the thing that will feel like the next giant breakthrough will be agents.” At an OpenAI press event ahead of the company’s annual Dev Day last month, chief product officer Kevin Weil said: “I think 2025 is going to be the year that agentic systems finally hit the mainstream.”
AI labs face mounting pressure to monetize their costly models, especially as incremental improvements may not justify higher prices for users. The hope is that autonomous agents are the next breakthrough product — a ChatGPT-scale innovation that validates the massive investment in AI development.
Sonos revenue falls in the aftermath of company’s messy app debacle
Image: Cath Virginia / The Verge
Sonos is still trying to climb out from the hole it dug itself earlier this year by recklessly shipping an overhauled mobile app well before the software was actually ready. Today, just a couple weeks after the release of its latest hardware products — the Arc Ultra and Sub 4 — Sonos reported its fiscal Q4 2024 earnings. And the damage done by the app debacle is clear.
Revenue was down 8 percent year over year, which Sonos attributed to “softer demand due to challenging market conditions and challenges resulting from our recent app rollout.” During the quarter, the company sank $4 million into unspecified “app recovery investments.” (Sonos previously estimated it could spend up to $30 million to resolve all of the trouble that has stemmed from the rebuilt app.)
“To date, we have released 16 updates and restored 90 percent of missing features,” the company wrote in its earnings presentation. “Moving forward, we’ll alternate between major and minor releases. This will allow us to maintain our momentum of making improvements while also ensuring adequate beta testing.”
CEO Patrick Spence has taken accountability for the app situation, and last month, Sonos announced multiple commitments that it believes will prevent another colossal misstep like this from happening again. Some aspects of the plan are focused on more rigorous testing and greater transparency — both inside the company and out. But others, like executives potentially losing out on their annual bonuses, have been mocked by customers as meaningless, half-hearted measures.
“The Sonos flywheel remains strong, as evidenced by the fact that the number of new products per home increased in fiscal 2024,” Spence said in today’s press release. The company also reported its “all-time highest annual market share” in home theater, another positive sign at a time when morale among Sonos employees has taken a serious hit.
The rebuilt app is in a better place now, which you’d hope would be the case after several months of bug fixes and performance enhancements. The mood within Sonos community spaces like the company’s subreddit has also improved, with less of the vitriol that felt non-stop (understandably so) from late spring through the early fall.
As far as hardware is concerned, Sonos seems to be getting back on track. Early reviews of the Arc Ultra have been largely positive. (Yes, I’ll have one coming in the near future.) One early bug with the new soundbar affected Trueplay tuning and, for some customers, resulted in lackluster bass response from a paired subwoofer. Sonos just rectified this issue with a software update that went out earlier today.
But some of the company’s most loyal customers are still wary, their trust in the brand frayed. Sonos’ next major new product is rumored to be a video streaming box. I’m still flummoxed as to how the company plans to stand out from competitors in that space. But hopefully there won’t be another major controversy to derail the product, as was the case with the Sonos Ace headphones.
Pixel phones will be able to detect and report malicious apps in real time
Google versus the bad guys. | Illustration: Alex Castro / The Verge
Google is beefing up its malware detection with new protections designed to suss out ever-sneakier bad actors.
Android’s Google Play Protect service is getting an update called live threat detection, which seeks out potentially harmful apps on your phone by analyzing app behavior and alerts you in real time if something looks fishy. The update was first announced at Google I/O earlier this year and is available now on Pixel 6 and newer phones. It should come to non-Pixel Android phones from Lenovo, OnePlus, Nothing, and Oppo, among others, “in the coming months.”
Live threat detection targets particularly hard-to-spot malware apps that hide their intentions well. Rather than just scanning apps for malicious code when you download them, Play Protect will keep looking for signs of suspicious app behavior even after they’re on your phone. This can help it spot malware that remains dormant at first and later starts engaging in malicious activity. This detection takes place on-device using an Android privacy infrastructure called Private Compute Core to help keep user data secure, and users will get real-time alerts to take action if needed.
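To make the distinction concrete, here is a minimal Python sketch of the difference between scanning an app once at install time and continuously watching its behavior afterward. This is purely a conceptual illustration: every name, signature, and behavior string in it is hypothetical, and it bears no relation to how Play Protect is actually implemented.

```python
from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    code_signature: str  # what a one-time install scan inspects
    behavior_log: list = field(default_factory=list)  # what live detection watches

# Illustrative stand-ins for real threat intelligence:
KNOWN_BAD_SIGNATURES = {"sig-malware-123"}
SUSPICIOUS_BEHAVIORS = {"read_sms_in_background", "overlay_other_apps"}

def install_time_scan(app: App) -> bool:
    """One-time check at download; misses malware that is dormant at install."""
    return app.code_signature in KNOWN_BAD_SIGNATURES

def live_threat_check(app: App) -> bool:
    """Ongoing check; flags apps whose runtime behavior later turns suspicious."""
    return any(b in SUSPICIOUS_BEHAVIORS for b in app.behavior_log)

# An app that looks clean at install, then misbehaves later:
app = App("flashlight", code_signature="sig-clean")
assert install_time_scan(app) is False  # passes the download scan

app.behavior_log.append("read_sms_in_background")  # dormant malware wakes up
assert live_threat_check(app) is True  # caught by continuous monitoring
```

The point of the sketch is simply that the second check runs against a log that keeps growing after installation, which is why it can catch malware that deliberately stays quiet until after the initial scan.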
Google is rolling out another security feature today, too: scam call detection. Also announced at I/O, this feature uses on-device AI to analyze phone calls and looks for signs that the caller is a scammer. If it spots suspicious conversational patterns or requests typical of scam attempts, it will flag the user and encourage them to end the call. It’s only available to members of the Phone by Google app’s beta program with a Pixel 6 or later (as of this morning that program appears to be full) and will roll out to more Android phones in the future.
The Wall Street Journal is testing AI article summaries
Image: The Verge
The Wall Street Journal is experimenting with AI-generated article summaries that appear at the top of its news stories. The summaries appear as a “Key Points” box with bullets summarizing the piece. The Verge spotted the test on a story about Trump’s plans for the Department of Education, and the Journal confirmed it’s trialing the feature to see how readers respond.
The “Key Points” box has a message explaining that an “artificial intelligence tool created this summary” and that the summary was checked by an editor. The box also points to a page about how the WSJ and Dow Jones Newswires use AI tools.
Screenshot by Jay Peters / The Verge
The AI-generated “Key Points” from this WSJ article.
“We are always assessing new technologies and methods of storytelling to provide more value to our subscribers,” Taneth Evans, head of digital at the WSJ, says in a statement to The Verge. “To that end, we are currently running a series of A/B tests to understand our users’ needs with regards to summarization. The newsroom does this hand-in-hand with colleagues in technology and while speaking with readers at every step of the way. We also disclose how we leverage artificial intelligence tools to support our journalism whenever it’s used.”
AI summaries have been spreading across news sites and platforms. USA Today owner Gannett has also experimented with adding AI-generated summaries to its articles — it’s even using a similar “Key Points” format. Apps like Particle summarize articles using AI, too. Personally, I’d recommend reading full articles when you can in case the AI tool hallucinates something that’s incorrect.
Mark Zuckerberg just dropped a single with T-Pain
Image: Cath Virginia / The Verge; Getty Images
If you want to preserve the existing version of Lil Jon’s “Get Low” in your brain, maybe don’t listen to this cover T-Pain made with Mark Zuckerberg — excuse me, Z-Pain. Their version transforms the “Get Low” I got down to at all my school dances into a song with a much slower tempo complete with an acoustic guitar.
Trust me, I may be scarred for life after hearing Zuckerberg’s autotuned voice serenade me with “‘Til the sweat drop down my balls.” If you get halfway through the song, you’ll also hear a cameo from T-Pain.
Apparently, Zuckerberg made the song for his wife, Priscilla. “‘Get Low’ was playing when I first met Priscilla at a college party, so every year we listen to it on our dating anniversary,” Zuckerberg wrote on Instagram. “This year I worked with @tpain on our own version of this lyrical masterpiece. Sound on for the track and also available on Spotify.”
He also attempted to include the song in his post, but it doesn’t seem to be working, whether it’s due to a copyright strike or some other issue. Maybe that’s for the best?
GOG’s new preservation program intends to keep classic games playable ‘forever’
PC game platform GOG has launched a new preservation program dedicated to keeping beloved older games playable, “now and in the future.”
“If a game is part of the Preservation Program, it means that we commit our own resources to maintaining its compatibility with modern and future systems,” the announcement blog reads.
The program is launching with 100 games including Diablo, System Shock 2, and Resident Evil 1-3, with GOG planning to add more titles in the coming months. Games featured in the program will come with a number of perks. GOG says that when you buy a game from the program, you can:
“expect it to work on current and future popular PC configurations”
“be sure that this version is the best and most complete available anywhere, including compatibility, manuals, and other bonus content, but also DLCs and even features that are missing in other editions”
“access GOG’s Tech Support if you encounter technical issues with running the game”
“as with all titles in our catalog, always keep access to their offline installers, granting you the power to safeguard them how you want”
GOG says that its goal is to acquire “all games no longer updated or maintained by their original publisher,” no matter their age. A recent study from the Video Game History Foundation found that 87 percent of games released before 2010 can no longer be played. The problem is exacerbated by the trend of delisting classic games from storefronts or shutting down entire storefronts altogether. Even new games are subject to inaccessibility, with Sony’s Concord going permanently offline mere weeks after its launch earlier this year.
With the launch of this preservation program, GOG is one of the only major game platforms to directly acknowledge the grim reality of video game preservation while taking significant action to keep older games accessible.
“Games shaped us,” reads the announcement. “Being able to play them is an essential part of reconnecting with ourselves. They must stay accessible, playable, and alive.”
GM offers free nighttime charging to Chevy EV owners in Texas
Image: Reliant
General Motors is teaming up with Reliant Energy to offer free nighttime charging to some Chevy electric vehicle owners in Texas.
Chevy owners who enroll in Reliant’s EV charging plan will receive free nighttime charging through monthly bill credits that offset charges incurred between 11PM and 6AM, the companies said. Customers must also designate an EV to receive the charging credit through GM Energy’s Smart Charging Portal. (GM Energy is the automaker’s home energy subsidiary, and Reliant is a subsidiary of NRG Energy.)
The new plan is the latest promotion to discount charging costs for EV owners, as automakers pile on perks in the hopes of winning over skeptical consumers. Ford recently announced a similar deal in Texas, partnering with one of Texas’ largest electric providers to give some customers free EV charging at home. It also has a promotion in place to provide free home chargers to all new EV buyers through the end of the year.
Chevy owners who enroll in Reliant’s EV charging plan will receive free nighttime charging through monthly bill credits
GM and Reliant claim the energy for nighttime EV charging will be powered by renewable sources through the purchase of renewable energy certificates (RECs), a popular method among private players to burnish their environmental bona fides. In the corporate world, a company purchases a REC when it wants to claim that something is being powered with 100 percent renewable energy — even when it is still being powered by fossil fuels.
But RECs are often used to mask more polluting behavior. A recent study of 115 major companies that use Renewable Energy Certificates found that many of them overstated the environmental benefits.
Spotify will start paying creators for popular videos
Cath Virginia / The Verge
Spotify is going all in on video.
The company will soon begin paying creators based on how much engagement their videos receive from paid subscribers. Automated ad breaks in videos will also be turned off for paid Spotify subscribers to encourage more consumption. Both of these changes go into effect starting January 2nd, 2025 in the US, UK, Australia, and Canada.
Paying video creators directly based on engagement puts Spotify on more of a collision course with YouTube, which is also leaning into podcasts and already pays its creators billions a year in shared ad revenue. “We can provide an experience for your audience that is superior to any other platform,” CEO Daniel Ek said onstage Wednesday at a Spotify creator event in Los Angeles.
Since Spotify made video podcasts widely available in 2022, consumption of the format has skyrocketed, with the number of video creators on the platform more than doubling each year. There are now over 300,000 video podcasts on Spotify, up from 250,000 in late June, and “video consumption hours have grown faster than audio-only consumption hours year-over-year,” according to company spokesperson Grey Munford.
Creators will be able to access their payout details in a hub called Spotify for Creators, which will also help them determine if they’re eligible for video payments and offer more advanced analytics along with the ability to upload short, vertical video clips.
There’s uncertainty about how much Spotify plans to pay video creators, however. The company isn’t explaining exactly how it calculates video payouts, though Munford said creators will be able to see their breakdowns in the Spotify for Creators hub.