verge-rss

Google’s Gemini video search makes factual error in demo

Google made a lot of noise about its Gemini AI taking over search at its I/O conference today, but one of its flashiest demos was once again marked by the ever-present fatal flaw of every large language model to date: confidently making up the wrong answer.

During a sizzle reel for “Search in the Gemini era,” Google demoed video search, which allows you to search by speaking over a video clip. The example is a video of a stuck film advance lever on a film camera with the query “why is the lever not moving all the way,” which Gemini recognizes, offering some suggestions for a fix. Very impressive!

You know, just get in there and nudge the shutter a little bit.

The only problem is that the answers it comes up with are, on the whole, hilariously bad, and Google literally highlighted the suggestion to “open the back door and gently remove the film,” which is perhaps the worst thing you can do in this situation. (If you’re not familiar with how film cameras work, opening the door in this way in anything but a totally dark room would expose your film to light, ruining any photos you’ve taken.)

The highlight really does it here.

This is the second time Google has produced a slick asset showing one of its own AI products getting an answer wrong — last year, the Bard chatbot confidently lied about the James Webb Space Telescope being the first to photograph a planet outside our Solar System. Whoops!


Blink and you missed it: Google has a new pair of prototype AR glasses

I thought Google killed its augmented reality glasses after they couldn’t live up to the promise of real-time translation. I thought Project Iris was vaporware after the company shed its AR leaders and downsized the division that was reportedly facing internal turmoil.

But we may have written off Google’s glasses too soon — because Google just revealed a new prototype pair in a blink-and-you-missed-it moment at Google I/O.

In the heat of the moment, I thought the Googler simply donned a pair of normal glasses before pulling out their smartphone at 1:30 in the Project Astra demo video below. But no, those frames are thick.

There’s even a little picture-in-picture moment where you can see them wearing the glasses. Here it is zoomed in:

Screenshot by Sean Hollister / The Verge
Google’s new AI / AR glasses prototype.

Once the glasses are on, the Googler is using them, not their Pixel phone, to ask questions and get answers hands-free.

Screenshot by Sean Hollister / The Verge
Is this a mockup, or might the glasses have a display that can present translucent text?

Are these just repurposed Project Iris prototypes? They seem similar enough, but the ones that Google prominently showed off in its 2022 video had flat nose bridges. The new nose bridge is curved.

Image: Google
Flat nose bridge.

Image: Google
Another flat nose bridge.

Google didn’t tell my colleague David anything about these glasses in our Project Astra interview, but the company doesn’t exactly seem to be hiding them, either. In the YouTube description for the Project Astra video, Google says the second chunk of the demo is running on “a prototype glasses device.”

With Meta’s Ray-Ban smart glasses proving to be the best AI wearable so far, beating out Rabbit and Humane for utility despite having no screen, it’s not surprising that Google might want to revive its glasses ambitions.

In mid-2022, Google got far enough with its previous prototypes that it planned to start testing them in public — you’ll let us know if you see these ones in your neighborhood, right?


Watch this screaming, rainbow-clad musician demo Google’s AI DJ

Well, that’s one way to kick off a developer-focused tech event. | Image: Google

Developer conferences aren’t exactly known for having an energetic, party-like atmosphere, but thankfully, that didn’t stop Google’s latest hype man. The company’s I/O event this year was kicked off by Marc Rebillet — an artist known in online spaces for pairing improvised electronic tracks with amusing (and typically loud) vocals. He also wears a lot of robes.

“If you have no idea who I am, I would expect that,” said Rebillet. He introduced himself as an improvisational musician who “makes stuff up.”

That made him a good fit to demo the DJ mode that Google recently added to its generative AI text-to-music tool, MusicFX. Back in February, Google DeepMind’s Adam Roberts described the feature as an “infinite AI jam that you control.”

Rebillet’s onstage demonstration was an entertaining showcase of its capabilities. He typed in simple prompts like viola, 808 hip hop beat, and chiptunes, with the AI music tool then generating a synced-up track that incorporated all of these styles. With audience direction, Rebillet then fed the AI prompts for a track containing Persian tar, djembe, and flamenco guitar and somehow made a pretty compelling tune, overlaying improvised vocals that joked 9:30AM was “too early” to be hosting such an event.

Users are presented with a mixer-style interface that spits out music based on text prompts, layering them together and syncing the resulting track. The music can be changed in real time by adding additional prompts to the mix. You can try it out yourself now over on Google’s AI Test Kitchen. MusicFX is still in development after being introduced last year.

Rebillet has over 2 million followers on both YouTube and TikTok, where he’s best known for his viral “Night Time Bitch” sound clip and songs in which he screams at people to get out of bed while wearing a bathrobe. That nugget of context may explain why he opened I/O by clambering out of a giant coffee mug, yelling for all the “silly little nerds” to wake up, and then firing rainbow-hued robes that say “Loop Daddy” on the back into the crowd.

Bring him back next year, Google. I can’t recall the last time a tech event felt this energizing.


The US moves to stop buying uranium from Russia and start producing it at home

Barrels stored at the Energy Fuels White Mesa Mill uranium production facility in Blanding, Utah, on June 12th, 2023.  | Photo: Getty Images

President Joe Biden signed a new law that bars the US from importing uranium from Russia in the hopes of jumpstarting domestic mining to fuel nuclear reactors. The law also unlocks $2.7 billion in federal funding to shore up that domestic supply chain, which Congress previously approved pending limits on imports from Russia.

Russia has historically been one of the biggest suppliers of uranium to the US and other countries. When the US banned Russian coal, oil, and gas imports in response to the war in Ukraine in 2022, it excluded uranium from its sanctions, which showed how much the US depends on foreign imports of uranium, particularly from Russia and its allies.

Since then, there’s been a bipartisan push to kickstart domestic uranium mining and processing. For the Biden administration, uranium plays a key role in meeting US climate goals by pairing renewables like solar and wind with more consistent electricity generation from nuclear reactors. But while nuclear energy might help the US reduce its greenhouse gas emissions, it also stokes conflicts with communities impacted by uranium production and nuclear waste.

“Our nation’s clean energy future will not rely on Russian imports.”

“Our nation’s clean energy future will not rely on Russian imports,” Secretary of Energy Jennifer Granholm said in a press release. “We are making investments to build out a secure nuclear fuel supply chain here in the United States.”

Russian state nuclear energy company Rosatom supplies around 20 percent of America’s enriched uranium. Russia also dominates the world’s commercial supply of more highly enriched uranium, used to fuel next-generation nuclear reactors.

Domestic uranium production has been next to nothing in the US since 2020. But that’s changing. Three mines in Arizona and Utah started churning out the material in December, with growing interest in nuclear energy as an alternative to fossil fuels pushing up uranium prices.

That includes a mine near the Grand Canyon called Pinyon Plain that the Havasupai Tribe and environmental advocates have opposed for years over risks to the surrounding land, sacred sites, and water supply. The US is still cleaning up hundreds of abandoned Cold War-era uranium mines on Navajo Nation land that have been linked to cancer and other illnesses.

Nuclear energy proponents are confident there’s been enough technological advancement to prevent previous disasters, although communities along the uranium supply chain are still on edge.

The Biden administration created a new national monument near the Grand Canyon that prevents uranium mining across vast swathes of land within its borders, a move that state lawmakers are fighting. Pinyon Plain was already grandfathered in with an existing permitted claim and will still be allowed to dig up uranium within the new monument’s borders.


Google I/O 2024: everything announced

Image: Google

Google I/O just ended — and it was packed with AI announcements. As expected, the event focused heavily on Google’s Gemini AI models, along with the ways they’re being integrated into apps like Workspace and Chrome.

If you didn’t get to tune in to the event live, you can catch up with all the latest from Google in the roundup below.

Google Lens now lets you search by recording a video

Google Lens already lets you search for something based on images, but now Google’s taking things a step further with the ability to search with a video. That means you can take a video of something you want to search for, ask a question during the video, and Google’s AI will attempt to pull up relevant answers from the web.

Google’s flagship Gemini model gets faster and more capable

Google has introduced a new AI model to its lineup: Gemini 1.5 Flash. The new multimodal model is just as powerful as Gemini 1.5 Pro, but it’s optimized for “narrow, high-frequency, low-latency tasks.” That makes it better at generating fast responses. Google also made some changes to Gemini 1.5 that it says will improve its ability to translate, reason, and code. Also, Google says it has doubled Gemini 1.5 Pro’s context window (how much information it can take in) from 1 million to 2 million tokens.
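
For a rough sense of scale, here's a back-of-envelope sketch of what a 2 million-token window holds. It assumes the common rules of thumb of roughly four characters per token and five characters per English word; neither figure is Gemini-specific, so treat the numbers as order-of-magnitude only:

```python
# Back-of-envelope: rough capacity of a 2 million-token context window.
# Assumes ~4 characters per token and ~5 characters per English word
# (including the trailing space) -- common rules of thumb, not
# Gemini-specific figures.
CHARS_PER_TOKEN = 4
CHARS_PER_WORD = 5

tokens = 2_000_000
chars = tokens * CHARS_PER_TOKEN   # ~8 million characters
words = chars // CHARS_PER_WORD    # ~1.6 million words

print(f"{tokens:,} tokens ≈ {chars:,} chars ≈ {words:,} words")
```

At roughly 100,000 words per paperback, that works out to somewhere around 15 novels' worth of text in a single prompt, which is why the doubled window matters more for feeding in entire codebases or long transcripts than for everyday queries.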
Gemini joins users in Workspace

Google is rolling its latest mainstream language model, Gemini 1.5 Pro, into the sidebar for Docs, Sheets, Slides, Drive, and Gmail. When it rolls out to paid subscribers next month, it will turn into more of a general-purpose assistant within Workspace that can fetch info from any and all of the content from your Drive, no matter where you are. It will also be able to do things for you, like write emails that incorporate info from a document you’re currently looking at or remind you later to respond to an email you’re perusing. Some early testers already have access to these features, but Google says it’s rolling it out to all paid Gemini subscribers next month.
Project Astra is Google’s Star Trek AI future


Google’s Project Astra is a multimodal AI assistant that the company hopes will become a do-everything virtual assistant that can watch and understand what it sees through your device’s camera, remember where your things are, and do things for you. It’s powering many of the most impressive demos from I/O this year, and the company’s aim for it is to be an honest-to-goodness AI agent that can not only talk to you but actually do things on your behalf.
Veo brings Sora-style video generation to creators

Image: Google
A sample of Veo’s output.

Google’s answer to OpenAI’s Sora is a new generative AI model that can output 1080p video based on text, image, and video-based prompts. Videos can be produced in a variety of styles, like aerial shots or time lapses, and can be tweaked with more prompts. The company is already offering Veo to some creators for use in YouTube videos but is also pitching it to Hollywood for use in films.
Gems bring custom chatbot creation to Gemini

Whether you need a yoga bestie or calculus tutor, in the coming months you’ll be able to customize Gemini, saving time when you have specific ways you interact with Gemini again and again. We’re calling these Gems. #GoogleIO pic.twitter.com/YQOHsUbMWE

— Google (@Google) May 14, 2024

Google is rolling out a custom chatbot creator called Gems. Just like OpenAI’s GPTs, Gems lets users give instructions to Gemini to customize how it will respond and what it specializes in. If you want it to be a positive and insistent running coach with daily motivations and running plans — aka my worst nightmare — you’ll be able to do that soon (if you’re a Gemini Advanced subscriber).
Gemini becomes a better conversation partner

The new Gemini Live feature aims to make voice chats with Gemini feel more natural. The chatbot’s voice will be updated with some extra personality, and users will be able to interrupt it mid-sentence or ask it to watch through their smartphone camera and give information about what it sees in real time. Gemini is also getting new integrations that let it update or draw info from Google Calendar, Tasks, and Keep, using multimodal features to do so (like adding details from a flyer to your personal calendar).

Circle to Search can help solve math problems now

Image: Google

If you’re on an Android phone or tablet, you can now circle a math problem on your screen and get help solving it. Google’s AI won’t solve the problem for you — so it won’t help students cheat on their homework — but it will break it down into steps that should make it easier to complete.

Google Search gets an AI overhaul

Image: Google

Google will roll out “AI Overviews” — formerly known as “Search Generative Experience,” a mouthful — to everyone in the US this week. Now, a “specialized” Gemini model will design and populate results pages with summarized answers from the web (similar to what you see in AI search tools like Perplexity or Arc Search).

Android gets AI-powered scam detection

Using on-device Gemini Nano AI smarts, Google says Android phones will be able to help you avoid scam calls by looking out for red flags, like common scammer conversation patterns, and then popping up real-time warnings like the one above. The company promises to offer more details on the feature later in the year.

Android devices are about to get smarter AI

Image: Google.

Google says that Gemini will soon be able to let users ask questions about videos on-screen, and it will answer based on automatic captions. For paid Gemini Advanced users, it can also ingest PDFs and offer information. Those and other multimodal updates for Gemini on Android are coming over the next few months.

Google Chrome is getting an AI assistant

Google announced that it’s adding Gemini Nano, the lightweight version of its Gemini model, to Chrome on desktop. The built-in assistant will use on-device AI to help you generate text for social media posts, product reviews, and more from directly within Google Chrome.

Google upgrades its SynthID AI watermarking

Google says it’s expanding what SynthID can do — the company says it will embed watermarking into content created with its new Veo video generator and that it can now also detect AI-generated videos.


The Blink Mini 2 security cam is an even better value now that it’s on sale for $30

The new Blink Mini 2 adds weather resistance so you can use it indoors and outdoors. | Photo by Jennifer Pattison Tuohy / The Verge

If you’ve got vacation plans for Memorial Day weekend, a security camera can help you keep an eye on valuables back home so you can really relax during your time off. And right now, one of the least expensive options on the market is even more affordable, with the new Blink Mini 2 selling for just $29.99 ($10 off) at Amazon, Best Buy, and The Home Depot. If you want to keep an eye on packages outdoors, you can also buy the wired camera from Amazon with a weather-resistant adapter for just $39.98 ($10 off).

The Blink Mini 2 is an excellent upgrade over its predecessor. It’s still a very basic 1080p camera, but it now offers better low-light performance, a wider field of view, and USB-C support. Most notably, the camera now features IP65 weatherproofing, which means you can now use it outdoors when combined with the Blink Weather Resistant Power Adapter.

At the same time, the second-gen Blink Mini remains a great indoor camera with motion alerts, two-way audio, and other basic features. It’s relatively tiny — meaning you can fit it just about anywhere or mount it to your wall or ceiling — and it remains relatively cheap to add cloud storage ($3 a month). The optional Blink subscription fee also grants you access to extra features like the new person detection setting. It’s a shame it doesn’t offer sound detection or compatibility with smart home platforms other than Amazon Alexa, but all in all, it’s a good way to add some extra security to your home if you’re on a budget.

Read our Blink Mini 2 review.

A few more deals worth checking out

You can buy the 10th-gen iPad at Amazon with Wi-Fi and 64GB of storage for $334, which is $15 less than the tablet’s new starting price. The new M2-powered iPad Air might be faster, slightly bigger, and offer support for the Apple Pencil Pro, but Apple’s entry-level tablet is still capable of handling almost everything the latest Air can. Add in USB-C charging and the fact that the front-facing camera is now positioned on the long edge of the screen, and you have a great entertainment device most people should be happy with. Read our review.

A new Nintendo Switch is expected to arrive sometime in 2025, but it’s anybody’s guess when exactly. Until then, the best option remains the Nintendo Switch OLED, which is on sale at Woot in its black-and-white configuration for $314.99 ($35 off) through May 18th. The OLED model sports an improved seven-inch OLED display that makes it even better for handheld gaming than the standard Switch, along with a sturdier kickstand. Read our review.

Speaking of Nintendo, Woot is selling the N Edition of 8BitDo’s Retro Mechanical Keyboard with a 90-day warranty for $69.99 ($30 off), an all-time low. Although it’s not an official Nintendo accessory, the mechanical board mimics the original NES controller’s style and even offers a pair of programmable “Super Buttons.” It’s also a good, customizable option with clicky, hot-swappable switches and support for Bluetooth, USB-C, and 2.4GHz wireless via an included dongle.

Google’s invisible AI watermark will help identify generative text and video

Illustration by Haein Jeong / The Verge

Among Google’s swath of new AI models and tools announced today, the company is also expanding its AI content watermarking and detection technology to work across two new mediums.

Google DeepMind CEO Demis Hassabis took the stage for the first time at the Google I/O developer conference on Tuesday to talk not only about the team’s new AI tools, like the Veo video generator, but also about the newly upgraded SynthID watermark imprinting system, which can now mark AI-generated video as well as AI-generated text.

Watermarking AI-generated content will matter more and more as the technology grows more prevalent, especially when AI is used for malicious purposes. It’s already been used to spread political misinformation, claim someone said something they didn’t, and create nonconsensual sexual content.

SynthID was announced last August and started as a tool to imprint AI imagery in a way that humans can’t visually decipher but the system can detect. The approach differs from other aspiring watermarking standards like C2PA, which adds cryptographic metadata to AI-generated content.

Google had also enabled SynthID to inject inaudible watermarks into AI-generated music made with DeepMind’s Lyria model. SynthID is just one of several AI safeguards in development to combat misuse of the tech, safeguards that the Biden administration is directing federal agencies to build guidelines around.
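Google hasn’t published SynthID’s exact algorithm here, but the core idea — a statistical bias invisible to readers yet measurable by a detector holding a key — can be illustrated with a toy “green list” text-watermarking scheme. Everything below (the key, the 50/50 vocabulary split, the function names) is illustrative, not SynthID itself:

```python
import hashlib
import random

KEY = "secret-key"  # shared between the watermarker and the detector

def green_set(key: str, position: int, vocab: list[str]) -> set[str]:
    """Deterministically mark half the vocabulary as 'green' for this position."""
    seed = int(hashlib.sha256(f"{key}:{position}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: len(vocab) // 2])

def detect(key: str, tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens in their position's green set.

    Hovers near 0.5 for ordinary text; approaches 1.0 when the generator
    was nudged to prefer green tokens, revealing the watermark.
    """
    hits = sum(1 for i, tok in enumerate(tokens) if tok in green_set(key, i, vocab))
    return hits / len(tokens)
```

A generator that consistently favors green tokens leaves no visible trace in the text, but `detect` scores it far above the ~0.5 baseline — which is also why this approach, unlike C2PA, needs no attached metadata that could be stripped.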

Google will let you create personalized AI chatbots

Image: The Verge

Google is adding a bunch of new features to its Gemini AI, and one of the most powerful is a personalization option called “Gems” that allows users to create custom versions of the Gemini assistant with varying personalities.

Gems lets you create iterations of chatbots that can help you with certain tasks and retain specific characteristics, kind of like making your own bot in Character.AI, the service that lets you talk to virtualized versions of popular characters and celebrities or even a fake psychiatrist. Google says you can make Gemini your gym buddy, sous-chef, coding partner, creative writing guide, or anything else you can dream up. Gems feels similar to OpenAI’s GPT Store, which lets you make customized ChatGPT chatbots.

You can set up a Gem by telling Gemini what to do and how to respond. For instance, you can tell it to be your running coach, provide you with a daily run schedule, and sound upbeat and motivating. Then, in one click, Gemini will make a Gem for you as you’ve described. The Gems feature will be available “soon” to Gemini Advanced subscribers.

Gemini is about to get better at understanding what’s on your phone screen

The dream of contextual search is alive. | Photo by Amelia Holowaty Krales / The Verge

Google is updating Gemini on Android to let its AI better tap into what’s on your screen. The update should allow Gemini to lean into one of its best use cases: helping you make sense of a limited set of data as you go about your day.

If you set Gemini as the default assistant on your Android phone, it can already summarize or answer questions about a webpage or a screenshot. Soon, it’ll also be able to tell if there’s a video on your screen and prompt you to ask questions about it. Gemini uses the video’s automatic captions to find answers — something you could already get it to do in a more roundabout way.

Gemini will also take a similar cue if you’re looking at a PDF, but there’s a catch: you’ll need access to Google’s paid version, Gemini Advanced, to use it. That’s because the feature ingests the entire PDF, so it requires the long context window available to Gemini Advanced subscribers. But once it has taken the PDF on board, you’ve basically turned it into an expert on whatever that topic is — maybe that’s your dishwasher owner’s manual or your local curbside recycling guidelines. Gemini Advanced is part of the $20-per-month Google One AI Premium plan.

Image: Google.
Ask Gemini about a video (left) or turn it into a pickleball expert (right).

There’s one more minor update, too — you’ll soon be able to drag and drop images generated by Gemini into whatever you’re working on without having to jump between apps. Just long-press an image in the Gemini overlay and drag it into a chat or an email. Altogether, the net effect is that Gemini feels less like a Thing You Have To Go Get and more like something that’s integrated seamlessly with the rest of the system.

It’s also a reminder that Google has been pursuing this dream of context-aware search for well over a decade — remember Google Now? I think it’s a notable step forward; the best use of the Gemini assistant I’ve come across is asking it to remember a dinner recipe so I can ask it questions as I’m moving around the kitchen cooking. It sounds simple, but it feels a lot more practically useful to me than quizzing an AI on the entirety of the internet’s knowledge.

Google’s Gemini on Android updates will be rolling out to “hundreds of millions of devices over the next few months,” and more contextual features are in the works.

Google’s Circle to Search will help you with your math homework

Google might tell you f = ma, or something. I haven’t been to school for a long time. | Image: Google

Google is enhancing Android’s Circle to Search — the feature that lets you literally circle something on your Android phone’s screen to search it on Google — with a new ability to generate instructions on how to solve school math and physics problems.

Using an Android phone or tablet, students can now use Circle to Search to get AI assistance on math word problems from their homework. The feature will help unpack the problem and list what the student needs to do to get the correct answer. According to Google, it won’t actually do the homework for you — it’ll only help you approach the problem.

Over the past year, the use of AI tools like ChatGPT has become a hot topic in education, with plenty of concern over how students can and will use them to get work done quickly. Google, however, is explicitly positioning this as a feature to support education, potentially sidestepping some of the concerns about AI doing all of the work for students.

GIF: Google
Circle to Search on a math problem gives you step-by-step instructions without giving away the final answer.

Later this year, Circle to Search will also gain the ability to solve complex math equations involving formulas, diagrams, graphs, and more. Google is using LearnLM, its new AI model fine-tuned for learning, to power the new Circle to Search abilities.

Circle to Search first launched on Samsung’s Galaxy S24 series in January and came to the Pixel 8 and 8 Pro later the same month. It’s one of the star new features of Android, and although iOS users can’t yet circle their math homework for help, anything is possible.
