verge-rss

Bluesky now lets you send DMs


Image: Bluesky

Bluesky will now let you send a direct message to other users. For now, you can only send messages containing text, but Bluesky is working on adding support for images and videos.

To send a DM, select the chat icon at the bottom of your screen on mobile or hit the chat bubble in your sidebar on desktop. Once you create a new chat, search for the user you want to talk to, write your message, and hit send. Just keep in mind that DMs on Bluesky aren’t end-to-end encrypted just yet, but the platform plans on supporting it in the future.

Now available: DMs! Start a private conversation with a friend directly on Bluesky within the Chat tab.

Update to the latest version of the app (1.83) or refresh on desktop to start chatting!

— Bluesky (@bsky.app) May 22, 2024

Bluesky lets anyone you follow message you by default. However, you can change this by heading to the platform’s settings menu and choosing “Everyone,” “No one,” or “Users I follow.” You can also choose whether to enable or disable message notification sounds.

Bluesky will let you report DMs and block users from within your messages. Blocked users won’t be able to DM you, but muted ones can. The platform notes that the Bluesky moderation team “may need to open your DMs to investigate broader patterns of abuse, such as spam or coordinated harassment,” but “access is extremely limited and tracked internally.”

Bluesky has been gradually adding new features since it dropped its invite system in February. The company started letting users host their own servers and even open-sourced its content moderation tool. There are some other features on the roadmap, too, including 90-second videos, improved anti-harassment tools, group DMs, and more.


iRobot halts its Roomba subscription service as it gets a new CEO


The iRobot Select subscription plan that launched with the Roomba j7 Plus has been suspended. | Photo by Jennifer Pattison Tuohy / The Verge

iRobot has ended its robot vacuum subscription service iRobot Select, which got you a high-end Roomba for less money upfront. But along with free replacement parts and accessories and the option to upgrade every three years, the program also enabled iRobot to shut down the vacuum remotely if you stopped paying.

iRobot Select launched in 2021 along with the j7 Plus Roomba, offering that robot with its sleek auto-empty station to subscribers. According to a statement from iRobot provided to TechHive, the company suspended new subscriptions as part of its organizational restructuring. While you can no longer sign up for the service, existing subscribers will keep their benefits for now.

With Roombas frequently on sale, the value of the iRobot Select plan today was tepid at best. At launch, iRobot Select cost $29 a month with a $99 activation fee for a vacuum that was $850 to buy outright. But that activation fee eventually went up to $199 while Roomba prices went down. And while iRobot introduced a fixed two-year plan with a lower fee, it came with stiff penalties if you tried to bail early.

According to iRobot, all current subscribers to the fixed plan are being moved to the month-to-month plan when their term ends. What will happen to those customers’ vacuums should the company discontinue the program entirely is unclear. Here’s hoping iRobot doesn’t just reach out and deactivate them.

The move comes on the heels of iRobot announcing a new CEO, Gary Cohen, who will lead a company that’s in financial trouble after Amazon’s failed purchase attempt and layoffs that impacted over 350 employees. An executive with a track record of turning around troubled companies, Cohen has had stints at Timex, Energizer, and Playtex following almost two decades at Gillette. He has quite a big mess to clean up at the company. While iRobot was one of the first robot vacuum makers, its dominant market share has dwindled thanks to competitive pressure from Chinese manufacturers that churn out dozens of new robots every month.


Political ads could require AI-generated content disclosures soon


Illustration by Cath Virginia / The Verge | Photos from Getty Images

The chair of the Federal Communications Commission introduced a proposal Wednesday that could require political advertisers to disclose when they use AI-generated content in radio and TV ads.

If the proposal is implemented, the FCC will seek comment on whether to require on-air and written disclosure of AI-generated content in political ads and will propose to apply these disclosure requirements to certain mediums. In a press release, the FCC notes that the disclosure requirements wouldn’t prohibit such content but would instead require political advertisers to be transparent about their use of AI.

“As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used,” FCC Chair Jessica Rosenworcel said in a statement.

But as The Associated Press notes, any disclosure regulations wouldn’t apply to streaming services.

Still, this is part of a broader effort to regulate the use of AI in political communications. In February, the FCC banned the use of AI-generated voices in robocalls. The ruling came a month after some New Hampshire residents received a robocall telling them not to vote in the state’s presidential primary. The voice in the robocall, which sounded like President Joe Biden’s, was AI-generated. The two Texas-based companies behind the call had previously been accused of illegal robocalls, according to the FCC.

Major political players have also begun using AI in their ads. The Republican National Committee released an AI-generated Biden attack ad last year that depicted a dystopian future awaiting us if Biden was reelected. That ad featured a disclosure: “An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024.” And Never Back Down, the super PAC associated with former Republican presidential hopeful Ron DeSantis, released an ad that used AI to mimic former President Donald Trump’s voice last July. And in March, the Democratic National Committee made a bizarre AI-generated parody of a Lara Trump song.

The Federal Election Commission has also attempted to crack down on AI. Last August, in response to a petition filed by the advocacy group Public Citizen, the FEC decided to consider regulating the use of AI-generated content in political ads. More recently, Sens. Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) introduced a bipartisan bill in March that would require disclaimers on political ads that include AI-generated images, audio, or video, “except when AI is used only for minor alterations, such as color editing, cropping, resizing, and other immaterial uses.” The Senate Rules Committee advanced the Klobuchar and Murkowski bill, called the AI Transparency in Elections Act, and two other AI-related bills on Wednesday.


You can get rid of AI Overviews in Google Search


Not everyone is a fan of AI Overviews. | Screenshot: Emma Roth / The Verge

If you’ve searched for something on Google lately, you might’ve noticed a wall of text that appears before the actual search results. This feature, called AI Overviews, offers an AI-generated answer to certain queries. But it also pushes your list of links further down the page, which makes it a bit annoying to scroll past when you want to do your own research — and it will get even more annoying once Google starts stuffing ads into it.

But even though Google doesn’t let you disable the feature, there are a few ways around it.

One of the best ways to “turn off” the feature is to reconfigure your browser’s default search engine options. The website tenbluelinks offers instructions on how to do this in Chrome on Android, iOS, Windows, and Mac as well as in Firefox on Windows and Mac.

Screenshot: Emma Roth / The Verge
You can block AI Overviews by configuring your default search engine URL.

In Chrome on Windows or Mac, head to Settings > Search engine > Manage search engines and site search.
From there, select Add next to the “Site search” section.
Add a nickname for your AI-less version of Google in the “Name” section, add a shortcut, and paste in this URL: {google:baseURL}search?q=%s&udm=14.
When you’re done, hit Save or Add. Then select the three dots next to the entry and choose Make default.

The next time you conduct a search, you should no longer see AI Overviews.
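
For anyone who would rather script this than click through browser menus, here is a minimal Python sketch that builds a results URL using the same udm=14 parameter as the custom search engine entry above. The function name and example query are illustrative, not part of any official API:

```python
import urllib.parse


def web_only_search_url(query: str) -> str:
    """Build a Google results URL that opens the plain "Web" view.

    The udm=14 parameter is the same one used in the custom search
    engine URL above; it returns the classic list of links without
    the AI Overviews block.
    """
    params = urllib.parse.urlencode({"q": query, "udm": "14"})
    return f"https://www.google.com/search?{params}"


if __name__ == "__main__":
    # Open the printed URL in any browser to see the AI-free results page.
    print(web_only_search_url("how to disable ai overviews"))
```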

Screenshot: Emma Roth / The Verge
Or you can try setting a filter in uBlock.

There are some other simple ways to avoid AI Overviews, too, including with uBlock Origin. One Reddit user found that you can avoid the feature by downloading the uBlock extension for your browser. You then head to the settings menu, go to the My filters section, paste google.com##.GcKpu into the list, and select Apply changes.

You can also try Bye Bye, Google AI, a Chrome extension created by Tom’s Hardware’s Avram Piltch. The extension uses CSS to hide AI Overviews by default, but you can also customize it to remove Google’s discussions section, shopping blocks, sponsored links, and more. Tedium also lists some solutions for getting rid of AI Overviews, including in Safari.

And finally, if you just want to avoid AI Overviews occasionally, you can do your search, then select More > Web to get an AI Overviews-less (and ad-less) list of sites.

Any of these methods should give you a break from AI search results — at least until Google rolls out an “off” button.


UK court calls man claiming to have invented Bitcoin a liar


Judge Mellor ruled that Craig Wright (pictured) lied “extensively and repeatedly” about being Satoshi Nakamoto. | Photo by Dan Kitwood / Getty Images

Australian computer scientist Craig Wright lied “extensively and repeatedly” to courts and committed forgery “on a grand scale” in efforts to falsely claim that he invented Bitcoin, a judge at London’s High Court ruled on Monday.

Wright has long claimed to be “Satoshi Nakamoto” — the pseudonym used by the author of Bitcoin’s 2008 foundational white paper. Very little is known about the mysterious cryptocurrency creator, though they’re widely presumed to be the largest holder of Bitcoin, controlling an estimated 1.1 million BTC (worth roughly $77 billion at the time of writing).

Satoshi’s identity was connected to Wright in reports published by Gizmodo and Wired in 2015, though the latter publication noted that some of the clues that suggested Wright was the elusive Bitcoin creator were planted by Wright himself. Later reports from Wired, Vice, Forbes, and other outlets also found inconsistencies in the evidence pointing to Wright being the creator of Bitcoin.

Wright spent the following years aggressively attempting to prove that he is Satoshi: testifying on several occasions that he wrote the currency’s original white paper, challenging people developing Bitcoin-related projects, and filing defamation lawsuits against those who accused him of lying. Wright’s latest legal battle was instigated by the Crypto Open Patent Alliance (COPA), a nonprofit organization that sought to stop him from allegedly threatening developers by disproving his claim to Satoshi’s identity.

ICYMI: Our trial to prove, once and for all, that Craig Wright is not Satoshi Nakamoto kicked off this week. Here’s a recap of the start of trial: https://t.co/xq64GmdmhD

— COPA (@opencryptoorg) February 7, 2024

COPA rejected Wright’s offer to settle the case in January, alleging the deal contained “loopholes that would allow him to sue people all over again.”

The case concluded on March 14th after a six-week trial, with London High Court Judge James Mellor ruling that Wright didn’t create Bitcoin and is not Satoshi. “In both his written evidence and in days of oral evidence under cross-examination, I am entirely satisfied that Dr Wright lied to the court extensively and repeatedly,” said Judge Mellor in the 231-page ruling released this week. “In my judgment, he is not nearly as clever as he thinks he is.”

In his ruling, Judge Mellor said:

Having considered all the evidence and submissions presented to me during the Trial, I reached the conclusion the evidence was overwhelming. At that point, I made certain declarations (because I was satisfied they are useful and are necessary to do justice between the parties), as follows:
First, that Dr Wright is not the author of the Bitcoin White Paper.
Second, Dr Wright is not the person who adopted or operated under the pseudonym Satoshi Nakamoto in the period between 2008 and 2011.
Third, Dr Wright is not the person who created the Bitcoin system.
Fourth, Dr Wright is not the author of the initial versions of the Bitcoin Software.

Most of the forged evidence was “clumsy,” according to Mellor, who said that Wright frequently resorted to laying the blame on other (often unidentified) people or “what can only be described as technobabble delivered by him in the witness box” when his lies were exposed. “I tried to identify whether there was any reliable evidence to support Dr Wright’s claim and concluded there was none,” said Mellor, concluding that “the case that Dr Wright is not Satoshi Nakamoto is overwhelming.”

The matter of injunctive relief will be argued in a future hearing. Wright says he will appeal against the ruling “on the matter of the identity issue.”


You can now install Windows 11’s next big update early


Image: Microsoft

Microsoft is releasing its next Windows 11 update to its Release Preview ring today. It’s the final ring of testing before the 24H2 update will be available to all Windows 11 users. The update includes HDR background support, energy saver, Sudo for Windows, Rust in the Windows kernel, and more.

Microsoft will officially roll the update out through Windows Update later this year, but you can grab the final version through the Windows Insider option inside the Windows Update section of Windows 11. Just follow the prompts to sign up for the Windows Insider Program, and make sure you select Release Preview.

Windows 11 24H2 includes HDR background support, which allows you to use JXR images as your wallpaper background. If you use multiple monitors, Windows 11 will adapt the wallpaper on a per-display basis.

Microsoft is also reworking how its energy saving modes work in this latest update. A new Energy Saver mode reduces energy consumption by reducing system performance to save on battery life on laptops. It will also work with regular desktop PCs, so you can reduce the amount of power a gaming rig uses.

Image: Microsoft
Sudo in Windows 11.

Windows 11 24H2 also includes a built-in sudo command designed for developers. Microsoft is using sudo inside Windows to let developers run elevated tools directly from an unelevated console session. You can configure the sudo command to run in one of three modes: in a new window, with input disabled, or inline. The inline mode is the closest to how sudo works on Linux, while the other two modes are more locked down.
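
As a rough illustration of how the inline mode behaves, here is a small, hypothetical Python sketch that shells out to the new sudo command. It assumes you are on a 24H2 machine with Sudo for Windows enabled and set to inline mode; netstat -ab stands in for any tool that normally needs elevation:

```python
import subprocess

# Hypothetical example: invoke an elevated tool through Sudo for Windows.
# Assumes the sudo feature is enabled and configured in "inline" mode,
# so the elevated process shares the current console session (a UAC
# prompt will still appear). "netstat -ab" is used only because listing
# owning executables normally requires elevation.
result = subprocess.run(
    ["sudo", "netstat", "-ab"],
    capture_output=True,
    text=True,
)

print("exit code:", result.returncode)
print(result.stdout[:500])  # show the first part of the elevated output
```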

Microsoft is also adding the ability to create 7-zip and TAR archives in File Explorer with this update, alongside a scrollable view of the Quick Settings flyout that appears above the taskbar. The AI-powered Voice Clarity feature is also part of 24H2 and is no longer limited to devices with neural processing unit (NPU) chips. Voice Clarity removes background noise when you’re on a call or recording audio.

The final tweaks in Windows 11 24H2 will impact the built-in Copilot assistant. Copilot is now a dedicated button on the right-hand side of the system tray, and it now understands more commands for changing settings, turning on Windows features, and more.

This update also includes all of the new AI-powered features that Microsoft unveiled earlier this week, but you’ll need a new Copilot Plus PC to use them. Copilot Plus PCs won’t be available until June 18th, starting at $999.


Google Pay will start showing credit card benefits at checkout


Hmm, should I get cash back or more travel points? | Image: Google

Google is adding the ability to see what shopping rewards and savings options you have on your saved credit cards as you check out using Google Pay. The feature is out today on select cards from American Express and Capital One, and it’s coming to more cards in the future.

You can see the benefits during checkout when prompted to enter a card number. A drop-down menu appears showing your card options along with a new description of benefits, like the percentage of cash back you get or what your point multiplier is for certain categories like travel. For now, this feature is only available when using Chrome on the desktop.

GIF: Google
Now, use fingerprint or other biometrics on mobile to fully fill out credit card information.

Additionally, Google is making it easier to fill in your card details (including security code) on any online store at checkout, so you have one less reason to reach for your physical card. Autofill of your full credit card details works on Android using the device’s biometrics or a PIN.

Lastly, Google Pay’s buy now, pay later (BNPL) support is expanding to more websites. The feature lets you choose a third-party financial provider, either Affirm or Zip, to pay over multiple installments. Take heed, though. BNPL can be problematic if you miss a payment.


Lawmakers debate ending Section 230 in order to save it


Image: Cath Virginia / The Verge

A pair of legislators have a plan to save Section 230: kill it so that Congress is forced to come up with a better version.

That was the topic of discussion at a hearing on Wednesday in the House Energy and Commerce subcommittee on communications and technology. It came on the heels of the committee leaders’ proposal for the sunset, which they announced in a Wall Street Journal op-ed last week. E&C Chair Cathy McMorris Rodgers (R-WA) and Ranking Member Frank Pallone (D-NJ) want to give Congress 18 months to come up with a new framework to replace Section 230 or risk losing it entirely. The idea is to force their colleagues to do something to change the law that’s been the subject of bipartisan ire for years.

“Big Tech lobbied to kill them every time. These companies left us with no other option.”

“Our goal is not for Section 230 to disappear,” McMorris Rodgers said at the hearing. “But the reality is that nearly 25 bills to amend Section 230 have been introduced over the last two Congresses. Many of these were good-faith attempts to reform the law, and Big Tech lobbied to kill them every time. These companies left us with no other option.”

Section 230 of the Communications Decency Act is the law that protects social media platforms from being held responsible for what their users post. It’s also what enables the platforms to moderate content on their services how they see fit, without fearing that doing so will land them in a lengthy legal dispute.

While industry players say this is essential for how the internet operates — keeping the most abhorrent content off of mainstream services while allowing for mostly open conversations and giving smaller platforms a shot at existence without being drowned in legal fees that larger platforms are able to shoulder — many policymakers have soured on the law as tech companies have grown in power.

Republicans and Democrats often have very different ideas of how exactly the law should change. Republicans who support Section 230 reform often want platforms to have fewer protections for their content moderation decisions to combat what they see as censorship of conservative views, while Democrats who support reform tend to want platforms to moderate or remove more content, such as disinformation. These days, however, both sides seem open to changes that could further protect children on the internet, as proposals like the Kids Online Safety Act have gained steam.

“I do really worry that’s the unintended consequence here”

Wednesday’s hearing showcased both sides of the 230 discussion, inviting experts engaged in advocacy both for and against reform to field questions about how far Congress should go in making changes. Kate Tummarello, executive director of startup advocacy group Engine (which has received funding from Google), shared her experience seeking out reproductive care information in online communities while experiencing a pregnancy loss two years ago. Following the Dobbs v. Jackson Women’s Health Organization decision that overturned Roe v. Wade, Tummarello said the same online communities she turned to for emotional and practical support shrank, and she saw women express fear of posting about seeking reproductive care online.

“I don’t think anyone’s intending to repeal 230 to get at women like me,” Tummarello said. “But I do really worry that’s the unintended consequence here. And I don’t want the women who are dealing with this today and in the future to not have the resources I had, to have the community that I was able to lean on. Because, again, to me, it was life-saving.”

But victims’ rights attorney Carrie Goldberg and Organization for Social Media Safety CEO Marc Berkman argued throughout the hearing that fears that repealing Section 230 would lead to a tsunami of lawsuits were overblown. “When we’re talking about content removal … there still has to be a cause of action,” Goldberg said. She added that it can still take years to go through the courts. “There’s not going to be some sort of mythical rush to the courthouse by millions of people because there has to be an injury,” she said.

But Tummarello said that even without successful legal cases, stripping 230 protection could make it easier to pressure tech companies into removing lawful speech. “Absent 230, it’s not that the platforms would be held liable for the speech, it’s that the platforms could very easily be pressured into removing speech people don’t like,” she said.

Lawmakers want to hear how Section 230 will collide with generative AI

Lawmakers were also curious how Section 230 would apply to content created by generative AI and whether that should update their understanding of it. Goldberg said, “Section 230 right now is going to be used by all these companies as a reason to not be held liable” in the early days of the technology. “We’re really early in the roll-out here and we’ve already seen extremely concerning examples,” Berkman said, pointing to Snapchat’s AI bot, which came under fire for engaging in mature conversations when the platform hosts many minors. Tummarello said startups are also using AI to find and moderate harmful content.

Even though many members seemed open to some level of 230 reform, not everyone appeared on board with the proposed sunset of the protections — and it’s not yet clear if the proposal to sunset 230 can gain wide support. McMorris Rodgers would need to bring the proposal to a vote before the full committee for it to have a shot of reaching the floor under normal operating procedure.

“This is where I start to have a problem,” said Rep. Jay Obernolte (R-CA). “It seems like the premise of repealing 230 is that the world would be a better place if we just all sued each other more often.” Obernolte said that rather than “rely on the indirect route of the threat of being sued,” Congress should outlaw the specific acts they don’t want to occur. “This is something that we can solve a different way than just expanding liability.”


Big Tech thinks it can plant trees better than everyone else


An area of dense primary forest in the Loango National Park, Gabon, on Wednesday, Oct. 12, 2022.  | Photo: Getty Images

Some of the biggest names in tech are joining forces to try something that many before them have failed to do: use trees to cancel out their greenhouse gas emissions. Google, Meta, Microsoft, and Salesforce are creating the Symbiosis Coalition as an effort to support “nature-based” projects aimed at taking carbon dioxide out of the atmosphere.

It’s a tactic companies have used for decades to try to offset their greenhouse gas emissions by planting trees, which take in and store carbon dioxide through photosynthesis. The hope is that paying to restore forests will amplify that process, ostensibly counteracting companies’ carbon footprint. It sounds simple enough on paper. However, a growing body of evidence has shown that this strategy fails time after time.

A growing body of evidence has shown that this strategy fails time after time

The Symbiosis Coalition seems to think it can turn things around. Together, the companies have committed to purchasing credits from “high-impact, science-based restoration projects” representing up to 20 million tons of captured carbon dioxide by 2030. They say they’ll vet projects for quality control, aiming to drum up demand for carbon credits that have earned a bad rap because so many carbon offset initiatives have fallen flat in the past.

In one recent example, a study of 26 carbon offset projects across six countries published in the journal Science last year found that few of them succeeded in stopping deforestation. Whatever climate benefits the projects were purported to have were overblown by as much as 300 percent. A separate investigation into one of the world’s leading carbon registries found that 90 percent of its rainforest offsets turned out to be “phantom credits” that likely didn’t represent real-world reductions in greenhouse gas emissions. And a 2022 report by nonprofit watchdog Carbon Market Watch determined that carbon offset credits offered by major European airlines were similarly linked to faulty forestry projects.

A big part of the problem is that it’s difficult to measure just how much carbon dioxide a tree or forest has absorbed, which has led to projects exaggerating how much good they do for the climate. Planting trees is also a tricky endeavor — if they don’t live for hundreds of years, they just wind up releasing all the carbon they’ve stored. Planting the wrong trees in the wrong place, creating tree farms instead of forests, can also harm the local environment. In 2020, Salesforce CEO Marc Benioff backed a World Economic Forum plan to plant a trillion trees — although the research undergirding the effort was quickly criticized by dozens of scientists for grossly overestimating the potential environmental benefits.

Salesforce, Google, Meta, and Microsoft are confident they can keep history from repeating itself

Nevertheless, Salesforce, Google, Meta, and Microsoft are confident they can keep history from repeating itself. To try to accomplish that, they worked alongside independent experts to establish strict criteria for forestry projects. Symbiosis also says in a press release that it’ll “involve and compensate Indigenous Peoples and local communities” to work toward “equitable outcomes.” And while it’s starting with forestry projects, Symbiosis says that, over time, it’ll incorporate other strategies, like sequestering carbon dioxide in soil.

“Nature-based projects are complex and challenging to get right and haven’t always lived up to their intended impact,” Symbiosis executive director Julia Strong said in an email to The Verge. “Symbiosis aims to address challenges around nature-based project integrity to date by setting a high-quality bar that builds on best in class market standards and the latest science, data, and best practice.”

The coalition is modeled after a similar initiative called Frontier, which Stripe, Alphabet, Meta, Shopify, and McKinsey launched in 2022 to support new technologies that take carbon dioxide out of the atmosphere. So far, Frontier has contracted more than 510,000 tons of carbon removal but has delivered only around 1,700 tons of captured carbon.

Both Symbiosis and Frontier are aimed at facilitating deals between carbon removal projects and companies that want to pay for their services. Eventually, Symbiosis hopes more companies beyond its founders will hop on board.

For perspective, all of these efforts still add up to a small fraction of the emissions these companies produce. The 20 million metric tons of nature-based carbon dioxide removal that Symbiosis committed to is just slightly more than the 15.4 million metric tons of carbon dioxide Microsoft alone produced in its last fiscal year.

To be sure, safeguarding the world’s forests does a lot of good for the planet. But exploiting them in the name of fighting climate change hasn’t been a safe bet. Raising the stakes, Big Tech’s greenhouse gas emissions are growing with the rise of energy-hungry AI tools. If companies are serious about taking on climate change, they’ll still have to rein in the amount of pollution they produce in the first place. Even successful forest projects can’t do all the dirty work for them.


Marvel’s Vision-focused Disney Plus series is coming in 2026


Photo: Marvel Studios

The MCU’s Scarlet Witch may be dead, but Marvel’s keeping more of WandaVision’s story going with yet another spinoff, this time focused on Paul Bettany’s Vision.

Variety reports that Star Trek: Picard executive producer Terry Matalas has signed on to showrun a currently unnamed Disney Plus Marvel series revolving around Vision (Bettany), the synthezoid Avenger who died in Infinity War and was subsequently resurrected in WandaVision. Back in 2022, Deadline reported that WandaVision showrunner Jac Schaeffer was working on a project titled Vision Quest that would detail Vision’s journey to recover his lost memories. But Schaeffer’s energies wound up being channeled into Agatha All Along, which, like the new Vision show, will pick up on plots stemming from WandaVision.

Currently, no other cast members for the new show have been announced, but Marvel plans for it to hit Disney Plus sometime in 2026.
