verge-rss

The August smart lock finally gets a fingerprint option

The wireless Yale Keypad Touch provides fingerprint and key code access for the new Yale Approach lock and several August smart locks. | Image: Yale

August makes excellent smart locks, but careful observers may have noticed that the company hasn’t released any new hardware in four years. Owners of August locks may be relieved to know that a new wireless keypad finally brings them the option of fingerprint unlocking.
The Yale Keypad Touch is a fingerprint-enabled wireless keypad for Yale and August retrofit locks that works with the fourth-gen August Wi-Fi Smart Lock, third-gen August Smart Lock Pro, third-gen August Smart Lock, and the new Yale Approach Lock with Wi-Fi. In addition to fingerprint and code unlock, it offers one-touch locking on your way out. It’s available now for $109.99 at ShopYaleHome.com and August.com.
That’s a steep upgrade for those who already have an August lock, though it might be worth it for that touch-to-unlock ease. For those considering a new lock, the new Keypad Touch comes bundled with Yale’s Approach lock for $229.99 or the August Wi-Fi Smart Lock for $249.99 (just $50 more than the lock alone). There’s also a keypad-only version for $69.99.

Image: August
The newest August smart locks work with the new Yale keypad with a built-in fingerprint reader.

Retrofit locks like the August only replace the interior portion of your existing lock, leaving the front unchanged, so you can still use your key. They are great options for renters or people with fancy door hardware they don’t want to swap out. But the design means they don’t have room for features like keypads or fingerprint readers. The solution is a wireless keypad that connects to the lock over Bluetooth.

August already has an optional Bluetooth keypad without a fingerprint reader, but this is the first time biometric access has been an option. I’ve tested many locks, and I find a fingerprint reader the easiest way to unlock a smart door. The new Yale Keypad and Keypad Touch also have a longer Bluetooth range than the existing August keypads — up to 30 feet — giving you more placement options.
If you’re wondering why Yale is making keypads for August locks, it’s because Yale and August have been sister brands since 2017, when they were both purchased by Assa Abloy; both were later sold to Fortune Brands. But over the last few years, while Yale has integrated much of August’s technology — including its automatic unlocking feature and its app — and released a slew of new products, there hasn’t been any innovation from August.
Once a competitor to Ring, the startup’s video doorbell aspirations are long gone, as are its founders, Yves Béhar and Jason Johnson. But the last lock they launched — August Wi-Fi Smart Lock (fourth-gen) — is still a great lock and remains my pick for the best retrofit smart lock. And this new option to add a fingerprint reader just made it even better.


There’s an electric salt spoon that adds umami flavor

A chunky electric spoon for your chunky low-sodium foods. | Image: Kirin

If you’re cutting salt out of your diet, either for medical reasons or just trying to be healthy, low-sodium foods can be a letdown. But don’t despair — Japanese company Kirin claims to have a solution in its Electric Salt Spoon, which uses electrodes to electrify your tongue to give you a little salty shock.
The idea, as Reuters puts it, is that it passes a small electric current to “concentrate sodium ion molecules on the tongue,” enhancing salty flavor. It’s like techno-umami. The company says the goal is to get people to eat healthier by letting them eat low-sodium food without being sad about how unsalty it is.

Image: Kirin

Kirin partnered with Professor Homei Miyashita from Japan’s Meiji University School of Science and Technology to test the tech in a set of chopsticks that were attached via wire to a wrist-worn battery pack. The company claims the chopsticks increased salty taste by as much as 50 percent. Longtime readers of The Verge may recall that, a couple of years ago, Miyashita explored a concept for a lickable TV that would let you taste the stuff you see in your stories, a concept I’m still struggling to come to terms with.
According to a ChatGPT translation of Kirin’s safety precautions for the Electric Salt Spoon (PDF), certain people shouldn’t use it, including those who use implanted medical devices like pacemakers or wearables like heart rate monitors, have metal allergies, have facial nerve issues, suffer from bleeding disorders, are currently undergoing dental treatment, or might be pregnant. It recommends talking to a doctor if you have conditions like febrile diseases, severe cognitive impairments, or malignant tumors. That’s a disappointing list since it probably applies to many of the people this spoon would benefit most.
The spoon is only going out in a limited-run batch of 200 at first and selling for 19,800 yen (that’s about $127). But Kirin will start selling to overseas markets next year and reportedly hopes to sell to a million people in the next five years. The company’s biggest business is beer (ever had a Kirin Ichiban?), but according to Reuters, it’s also moving into healthcare. (Ah, the old alcohol-to-healthcare pipeline.)


Square Enix will let Kingdom Hearts cook on Steam

Image: Square Enix

I’ve been having these weird thoughts lately… like, should I go back and replay the Kingdom Hearts series or not? Fortunately, Square Enix has made this decision trivial with the announcement that it’s releasing the series on Steam on June 13th. To celebrate, and perhaps destabilize an entire generation of adults for whom the first Kingdom Hearts trailer was a transformative experience (aka, me), a trailer was released with a brand-new recording of Hikaru Utada’s “Simple and Clean.”

The trailer was executed very intelligently, showing snippets of the Kingdom Hearts games in chronological order, suggesting to potential customers (and reminding lapsed fans like me) in what order the games should be played. Steam will have three Kingdom Hearts games for sale: Kingdom Hearts -HD 1.5+2.5 ReMIX- (yes those dashes are officially part of the name), Kingdom Hearts HD 2.8 Final Chapter Prologue, and Kingdom Hearts III + Re Mind DLC. To make things easier, there will also be a bundle available, Kingdom Hearts Integrum Masterpiece, that combines all three games for one price.
Those who purchase Kingdom Hearts III + Re Mind DLC will also get a special keyblade, “Dead of Night,” which looks pretty slick in Steam Logo blue and grey. Each of the three games is itself a bundle of a bunch of KH games spanning platforms and console generations. You can find a full breakdown of which game is in what bundle here or on each game’s Steam page.

Image: Square Enix
That Steam-exclusive keyblade looks pretty sick.

Kingdom Hearts suddenly launching on Steam isn’t that surprising. The games have already been available on PC via the Epic Games Store for a few years now, with Steam being the next logical choice. There’s also the news that Square Enix plans to bring more of its games to more platforms. Throughout its more than 20-year run, the Kingdom Hearts series has been predominantly on PlayStation consoles and handhelds, with two smaller games making a brief appearance on the Nintendo DS. It was only in 2020 that Kingdom Hearts finally came to Xbox, with Nintendo offering an abysmal cloud version of the games in 2022.
There’s also the fact that these games are releasing a week before Summer Game Fest, with Geoff Keighley posting about the game, possibly suggesting that Kingdom Hearts IV might make an appearance. Getting a new crop of players into the series right before sharing more info on the next entry seems like a shrewd idea.

It is exceedingly hilarious that even though this Steam trailer lists the games in chronological order, the games themselves are not bundled that way. If you want to play the way the trailer lays it all out, you’ll have to start by watching (because it’s a movie, not a game) Back Cover on Kingdom Hearts HD 2.8, then jump to Kingdom Hearts HD 1.5 to play Birth By Sleep before jumping back to KH2.8 to play Birth by Sleep – A Fragmentary Passage. Simple right?
Speaking of simple, Hikaru Utada’s new version of “Simple and Clean” moved me to tears. I was one of those teenagers who wasn’t the biggest Disney fan but was into Japanese pop, visual kei (think of a Japanese version of glam metal), and Japanese rock. Hearing that new take on such a beloved song made me feel both 16 years old and 36 years old but in a good way. This thing that I have loved, which has inspired the kinds of emotions that led me to my job today, has grown up with me, but not so much that it is unrecognizable to my younger self. And when I go back and play these simple and clean games — some of them again and some of them for the first time — I hope I can face my fears from previous disappointment and be reminded of the sanctuary I loved so much.


Microsoft’s big bet on building a new type of AI computer

Recall, one of Windows’ new AI features, on a Surface Laptop. | Photo: Allison Johnson / The Verge

Microsoft’s new Windows on Arm push is a milestone moment. It’s taken nearly five years to get to the point where the software maker is confident it has the upper hand over the MacBook Air on performance, battery life, and app compatibility.
But behind the scenes, Microsoft has been working on something even bigger. While Microsoft has spent years working toward an Arm transition that will now play out throughout the summer and beyond, it’s the AI overhaul at the heart of this new generation of “Copilot Plus PCs” that could fundamentally change how we use Windows on a daily basis.
I recently got the chance to hear from the leaders behind Microsoft’s big AI push, and they made it clear that this isn’t just a silicon change. Microsoft has rearchitected Windows 11 to run dozens of AI models in the background, and they believe it can reshape the experience of using a PC…

This story is exclusively for subscribers of Notepad, our newsletter uncovering Microsoft’s era-defining bets in AI, gaming, and computing.

Microsoft Build 2024: everything announced

Microsoft CEO Satya Nadella at Build 2024. | Screenshot: YouTube

Microsoft had a lot to say about Windows and AI — and a little to say about custom emoji — during the Build 2024 keynote. The company, like just about everyone else in the industry, is charging hard at cramming AI into every nook and cranny it can find. That means Copilot watching your screen to help you play Minecraft or giving you AI agent co-workers.
The whole event was over two hours long, but you can catch the highlights below.

Microsoft wants to put AI agents to work

Image: Microsoft

Microsoft says Copilot AI agents can soon be used as something like virtual employees that businesses can use for menial tasks like monitoring emails, carrying out a series of automated tasks, helping with employee onboarding, or doing data entry, all without being prompted to do so. The company says the new Copilot abilities won’t take over jobs — just the boring parts. (Isn’t “data entry” a whole job description for some people?) The new capability will hit Copilot Studio in preview later this year.
Microsoft goes mini-multimodal
The company rolled out Phi-3-vision, a new version of the Phi-3 AI model it announced in April. It’s multimodal and can read text and look at pictures, but it’s a small language model that’s compact enough to work on a mobile device. Image analysis is one of the big use cases that AI companies have been pushing, and smartphones are about as ideal a place to use them as anywhere. Phi-3-vision is part of Microsoft’s Phi-3 family of models that the company announced in April and is available in preview now.
Microsoft Edge can translate YouTube videos while you’re watching them
Microsoft’s Edge browser is getting an AI-powered real-time video translation feature that can dub videos from sites like YouTube, LinkedIn, Reuters, and Coursera. The feature works with a handful of languages, offering translation from Spanish to English or vice versa — or from English to German, Hindi, Italian, and Russian. Microsoft says the feature is “coming soon” and that more languages and video platforms will be added in the future.
Custom emoji for Microsoft Teams
Get ready for some disco parrots and cutouts of your teammates in Microsoft Teams: the company is adding support for custom emoji in its Slack competitor. Like in Slack, admins can limit who is allowed to add emoji, and custom emoji won’t be visible outside of your organization’s domain. They’re coming in July.
A tiny Snapdragon PC

Image: Qualcomm

Qualcomm’s roughly Mac Mini-sized $899 Snapdragon Dev Kit for Windows has a Snapdragon X Elite chip inside. It also has 32GB of RAM, a 512GB SSD, and plenty of ports, though it’s not clear if just anyone can buy it.
Microsoft File Explorer as a Git repository

Image: Microsoft

You’ll be able to use Microsoft’s File Explorer to keep track of your coding projects soon, as the company is integrating Git into the file system browser. The company says developers will be able to keep track of file status, commit messages, and their current branch from within File Explorer. Also, the app now supports 7-zip and TAR compression natively.
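File Explorer’s Git view is Microsoft’s own integration, but the information it surfaces — current branch, latest commit message, per-file status — can already be pulled from any repository with plain git commands. Here is a minimal sketch of those same queries in Python; the repository path is a hypothetical example, and it assumes git is installed and on your PATH:

    import subprocess

    def git(repo, *args):
        # Run a git command inside the given repository and return its trimmed output.
        return subprocess.run(
            ["git", "-C", repo, *args],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    repo = "C:/projects/my-app"  # hypothetical path, swap in a real repo

    branch = git(repo, "branch", "--show-current")    # current branch name
    last_msg = git(repo, "log", "-1", "--pretty=%s")  # latest commit message
    status = git(repo, "status", "--porcelain")       # per-file status codes

    print(f"On {branch}; last commit: {last_msg}")
    print(status or "working tree clean")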
Windows adds AI-powered clipboard features in PowerToys

Image: Microsoft

Microsoft’s new Advanced Paste feature is available now as part of the PowerToys suite for Windows 11, giving you the ability to convert the contents of your clipboard as you go. You’ll be able to trigger the Advanced Paste menu by pressing Windows Key + Shift + V and, from there, convert your paste to formats like plaintext, markdown, or JSON, using further keyboard shortcuts. You can also convert by typing into the prompt box, which has other capabilities like altering or summarizing the text before you paste it. The catch: you’ll need an OpenAI API key and credits in your OpenAI account for the AI part.


Windows now has AI-powered copy and paste

Illustration: The Verge

Microsoft is adding a new Advanced Paste feature to PowerToys for Windows 11 that can convert your clipboard content on the fly with the power of AI. The new feature can help people speed up their workflows by doing things like copying code in one language and pasting it in another, although its best tricks require OpenAI API credits.
Advanced Paste is included in PowerToys version 0.81 and, once enabled, can be activated with a special key command: Windows Key + Shift + V. That opens an Advanced Paste text window that offers paste conversion options including plaintext, markdown, and JSON.
If you enable Paste with AI in the Advanced Paste settings, you’ll also see an OpenAI prompt where you can enter the conversion you want — summarized text, translations, generated code, a rewrite from casual to professional style, Yoda syntax, or whatever you can think to ask for.

Image: Microsoft
Advanced Paste window.

There are prerequisites to use the feature, though. You’ll need to add an OpenAI API key in PowerToys, and you’ll need to buy credits for your OpenAI account if you don’t have any. (API credits are different from a paid ChatGPT account.)
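PowerToys wraps all of this in its own UI, but the AI step is essentially a round trip to OpenAI’s API with your clipboard text and an instruction. A rough sketch of that idea using the openai and pyperclip Python packages — this is an illustration of the concept, not PowerToys’ actual code, and the model name and prompt are assumptions:

    import pyperclip                 # pip install pyperclip
    from openai import OpenAI        # pip install openai; needs OPENAI_API_KEY set

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    clipboard_text = pyperclip.paste()  # grab whatever is currently on the clipboard

    # Ask the model to rewrite the clipboard contents; the instruction can be
    # anything Advanced Paste-style: "convert to JSON", "summarize", "make formal"...
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model with API credits works
        messages=[
            {"role": "system", "content": "Rewrite the user's text as valid JSON."},
            {"role": "user", "content": clipboard_text},
        ],
    )

    pyperclip.copy(response.choices[0].message.content)  # put the converted text back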


Microsoft is bringing ‘Windows Volumetric Apps’ to Meta Quest headsets

Screenshot by Sean Hollister / The Verge

You can already beam your flat Windows desktop and its VR games onto your Meta Quest headset — but what if Windows could send HoloLens-like 3D apps and digital objects to the headset, too?
At Build, Microsoft has just announced “Windows Volumetric Apps on Meta Quest,” a way to “extend Windows apps into 3D space.”
Details are slim, but the company showed off a digital exploded 3D view of an Xbox controller from the perspective of a Meta Quest 3 headset, a digital object you could manipulate with your hands — and says it took its software partner Creo a single day to bring that interactive visualization to Quest.

Screenshot: Microsoft

Screenshot: Microsoft

Microsoft says devs can sign up for the developer preview today, which’ll give you access to an unnamed “volumetric API.” The form you’ll fill out makes this sound like early days:
Microsoft is looking for developers that produce or provide plug-ins for 3D Windows desktop applications or customers that work with 3D applications on Windows desktop applications who are interested in extending those applications into 3D content with mixed reality.

It’s only been a few months since Microsoft ditched its previous Windows Mixed Reality initiative, which relied on an array of Windows PC partners to build wired headsets that users would plug directly into a PC. In April, Microsoft partnered with Meta on a limited-run Xbox-themed version of the Meta Quest, and it introduced Office apps in Quest VR and Xbox Cloud Gaming in Quest VR last December.
Meanwhile, other PC makers have begun licensing Meta’s Quest operating system for their own upcoming headsets. A more direct partnership between LG and Meta is reportedly on the rocks.


Elon Musk’s xAI is working on making Grok multimodal

Illustration by Kristen Radtke / The Verge; Getty Images

Elon Musk’s AI company, xAI, is making progress on adding multimodal inputs to its Grok chatbot, according to public developer documents. What this means is that, soon, users may be able to upload photos to Grok and receive text-based answers.
This was first teased in a blog post from xAI last month, which said Grok-1.5V will offer “multimodal models in a number of domains.” The latest update to the developer documents appears to show progress on shipping a new model.
In the developer documents, a sample Python script demonstrates how developers can use the xAI software development kit library to generate a response based on both text and images. This script reads an image file, sets up a text prompt, and uses the xAI SDK to generate a response.
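The developer documents contain the authoritative script; as a hedged approximation of the flow they describe (read an image file, pair it with a text prompt, request a response), the skeleton below uses only the Python standard library, since the xAI SDK’s real class and method names aren’t reproduced here. The model name and the commented-out client call are placeholders, and it assumes a local photo.jpg exists:

    import base64
    from pathlib import Path

    # Read a local image and base64-encode it, the usual way multimodal APIs
    # accept image input alongside text.
    image_b64 = base64.b64encode(Path("photo.jpg").read_bytes()).decode("ascii")

    # A combined text-plus-image request in a generic "chat messages" shape.
    request = {
        "model": "grok-vision",  # placeholder name, not a confirmed model id
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image", "data": image_b64},
            ],
        }],
    }

    # The real sample hands a payload like this to the xAI SDK; the call below
    # is purely illustrative and not a documented API.
    # response = xai_client.sample(request)
    # print(response.text)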

Image: xAI

This is a big update for Grok, which xAI first released in November 2023 and is available to users who pay for the X Premium Plus subscription. The last update was Grok 1.5 in March, which came with improved reasoning capabilities.
The model is trained “on a variety of text data from publicly available sources from the Internet up to Q3 2023 and data sets reviewed and curated by … human reviewers,” according to a blog post from X. Grok-1 was not trained on X data (including public X posts), the blog added. However, Grok does have “real-time knowledge of the world,” including posts on X.
xAI, founded by Elon Musk in March 2023, is relatively new in the AI field and trails behind competitors such as OpenAI’s ChatGPT. However, according to a blog post from xAI, its Grok 1.5 model is closing the gap with GPT-4 on various benchmarks that span a wide range of grade school to high school competition problems. It’s important to note that benchmarks for large language models are often criticized because the models can perform well on benchmarks if those benchmarks are included in their training data. It’s sort of like memorizing test answers, rather than actually learning the material.
Multimodal conversational chatbots seem to be the next frontier for AI, with multiple advancements announced at Google I/O and OpenAI releasing GPT-4o, so Grok lacking multimodal capabilities has put it behind the curve — until now.


Microsoft’s new Windows Copilot Runtime aims to win over AI developers

Photo: Allison Johnson / The Verge

Microsoft launched a range of Copilot Plus PCs yesterday that includes new AI features built directly into Windows 11. Behind the scenes, the company now has more than 40 AI models running on Windows 11 thanks to a new Windows Copilot Runtime that will also allow developers to use these models for their apps.
At Microsoft Build today, the company is providing a lot more details about exactly how this Windows Copilot Runtime works. The runtime includes a library of APIs that developers can tap into for their own apps, with AI frameworks and toolchains that are designed for developers to ship their own on-device models on Windows.
“Windows Copilot Library consists of ready-to-use AI APIs like Studio Effects, Live Captions Translations, OCR, Recall with User Activity, and Phi Silica, which will be available to developers in June,” explains Windows and Surface chief Pavan Davuluri.

Image: Microsoft
The new Windows Copilot Runtime.

Developers will be able to use the Windows Copilot Library to integrate things like Studio Effects, filters, portrait blur, and other features into their apps. Meta is adding Windows Studio Effects to WhatsApp, so you’ll get features like background blur and eye contact during video calls. Even Live Captions and the new AI-powered translation feature can be used by developers with little to no code.
Microsoft demonstrated its Recall AI feature yesterday, allowing Copilot Plus PCs to document and store everything that you do on your PC so you can recall memories and search through a timeline. This is all powered by a new Windows Semantic Index that stores this data locally, and Microsoft plans to allow developers to build something similar.
“We will make this capability available for developers with Vector Embeddings API to build their own vector store and RAG within their applications and with their app data,” says Davuluri.
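Microsoft hasn’t published that Vector Embeddings API yet, so any code is speculative, but the underlying pattern is the familiar one: embed chunks of app data, store the vectors, and at query time retrieve the nearest chunks to ground a prompt (the “RAG” part). A minimal numpy sketch of that retrieval step, with embed() standing in as a toy placeholder for whatever embedding model the API eventually exposes:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder: a real implementation would call an embedding model.
        # A hash-seeded toy vector keeps this sketch self-contained and runnable.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(64)
        return v / np.linalg.norm(v)

    # "Vector store": app data chunks alongside their embeddings.
    chunks = ["User opened invoice.pdf", "Draft email to Sam", "Meeting notes for Q3"]
    vectors = np.stack([embed(c) for c in chunks])

    def retrieve(query: str, k: int = 2) -> list[str]:
        # Cosine similarity between the query and every stored vector
        # (all vectors are unit-length, so a dot product is enough).
        scores = vectors @ embed(query)
        return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

    # The retrieved chunks would then be stitched into the model prompt.
    print(retrieve("what was I writing to Sam?"))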

Photo: Allison Johnson / The Verge

Developers will also be able to improve Windows’ new Recall feature by adding contextual information to their apps that feeds into the database powering this feature. “This integration helps users pick up where they left off in your app, improving app engagement and users’ seamless flow between Windows and your app,” says Davuluri.
All of these improvements inside Windows for developers are the very early building blocks for more AI-powered apps on top of its new Arm-powered systems and the NPUs coming from AMD and Intel soon. While Microsoft is building the platform for developers to create AI apps for Windows, it’s now banking on this being an important part of the next decade of Windows development. Onstage at Build today, Davuluri stood in front of a slide that read “Windows is the most open platform for AI,” signaling just how important this moment is for Microsoft.


Where to preorder the Sonos Ace headphones ahead of June 5th

The stylish Sonos Ace come in black and white. | Photo by Chris Welch / The Verge

At long last, the eagerly anticipated Sonos Ace have arrived. On Tuesday, the company announced that its first pair of wireless headphones will be available on June 5th. Even better, you can already preorder them ahead of launch for $449.

We’re still testing the Sonos Ace, but in our limited time playing around with them during a recent demo session, we came away impressed. The plush noise-canceling headphones are exceptionally comfortable to wear, with magnetic ear pads made of vegan leather and a memory foam headband. Plus, if you own a Sonos soundbar, you can quickly transfer TV audio from the soundbar to the headphones with a push of a button, allowing for a more immersive audio experience nobody else can hear.
Of course, we have yet to see how their overall sound, ANC, and transparency modes stack up against rival headphones from Bose, Sony, and Apple over extended listening sessions. We’ll be publishing our full review in the coming days, but in the meantime, here’s how to ensure you’re one of the first to get your hands on Sonos’ forthcoming headphones.
Where to preorder the Sonos Ace
The Sonos Ace are available to preorder from Sonos and Best Buy in black or white for $449 ahead of their June 5th release date. You can also preorder them at Amazon and B&H Photo starting on May 28th or sign up to receive release updates from either retailer now.

As mentioned, we need more time to form a solid opinion about the Sonos Ace, but so far, we’re impressed by their capabilities and design. With just a click of a button, you can sync them with the Sonos Arc and — soon — Sonos’ other soundbars, including the Sonos Ray and the second-gen Sonos Beam. Combine that with their support for head-tracking spatial audio, and the headphones essentially let you create a private cinematic listening experience.

Granted, it’s a shame they can’t play music over Wi-Fi and you can’t group the Ace with Sonos speakers, but at least the headphones have other things going for them. They’re stylish and exceptionally comfortable to wear, for example, and we loved how much attention Sonos paid to detail. Sonos even includes a pouch for the USB-C and headphone cables that attaches magnetically inside the carrying case. The company also added a fingerprint-resistant coating to reduce smudges, along with physical buttons for an intuitive experience, making for what seems so far like a well-rounded pair of headphones.

