
Google is building Gemini Nano AI right into Chrome

Image: The Verge

Google is building its Gemini AI into Chrome on desktop. During its I/O event on Tuesday, Google announced that Chrome 126 will use Gemini Nano to power on-device AI features, such as text generation.

Gemini Nano is the lightweight large language model Google introduced to the Pixel 8 Pro last year — and, later, the Pixel 8. To get Gemini Nano on Chrome, Google says it tweaked the model and optimized the browser to “load the model quickly.”

The integration will let you do things like generate product reviews, social media posts, and other blurbs directly within Chrome. Microsoft similarly added its AI assistant Copilot to Edge last year, letting you ask questions and summarize the information on your screen. Unlike Gemini Nano in Chrome, Copilot in Edge doesn’t run locally on your device.

Google also announced that it will make Gemini available in Chrome DevTools, which developers use to debug and tune their apps. Gemini can provide explanations for error messages, as well as suggestions on how to fix coding issues.
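Google hasn't published a stable API for this yet, but pre-release Chrome builds around this time exposed an experimental session-based surface for prompting the on-device model. The sketch below shows what calling it from a page might look like; the names `window.ai`, `createTextSession`, and `prompt` are assumptions based on that experimental preview and may change, so the code feature-detects and degrades gracefully where the API is absent.

```javascript
// Hedged sketch of on-device text generation in the browser.
// window.ai, createTextSession, and prompt are assumed names from
// Chrome's experimental preview, not a stable API.
async function draftBlurb(topic) {
  // Feature-detect: the surface only exists in experimental builds.
  const ai = globalThis.window?.ai;
  if (!ai?.createTextSession) {
    // Graceful fallback when the on-device model isn't available.
    return `[on-device model unavailable] draft about: ${topic}`;
  }
  const session = await ai.createTextSession();
  try {
    // The model runs locally, so no network round trip is needed.
    return await session.prompt(`Write a short product blurb about ${topic}.`);
  } finally {
    // Release the session if the API provides a destroy method.
    session.destroy?.();
  }
}
```

Because the check happens at runtime, the same function can ship today and simply start using the local model once the browser supports it.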


Google’s Gemini can build an entire vacation itinerary ‘in a matter of seconds’

Illustration: The Verge

Gemini, Google’s ChatGPT competitor, is getting new trip planning capabilities, the company announced today at its I/O developer conference.

Based on the user’s prompt, the AI model will now research publicly available information as well as tap into specific details like flight times and hotel bookings to work up a custom, multiday vacation itinerary “in a matter of seconds.”

In a briefing with reporters, Google VP and general manager of Gemini, Sissie Hsiao, said that manually planning a trip could “take me hours, days, maybe even weeks.” But with the help of Gemini, the process could be nearly instantaneous. And it will be “dynamic,” meaning it can be tweaked and adjusted through prompts and other requests thanks to Gemini’s new trip planning user interface.

Google is billing its AI model as offering something more than other chatbots by combining publicly available information with personal details that could only be found in someone’s inbox, for example.

According to the company, an example of a prompt could be something like “My family and I are going to Miami for Labor Day. My son loves art and my husband really wants fresh seafood. Can you pull my flight and hotel info from Gmail and help me plan the weekend?”

Gemini would then build its itinerary based on flight and hotel details included in a user’s email. The model will also tap into Google Maps to find nearby restaurants and cultural destinations and will filter out choices based on specific prompts, like dietary restrictions or things to avoid. Google says the new trip planning capabilities will be coming to Gemini Advanced in the coming months.

Of course, Google isn’t the first company to view trip planning as a fertile ground for AI chatbots. Expedia, Airbnb, Kayak, and others are investing in building out their own AI-powered trip planners in the hopes of steering customers away from search engines like Google.


Android is getting an AI-powered scam call detection feature

Illustration by Alex Castro / The Verge

Google is working on new protections to help prevent Android users from falling victim to phone scams. During its I/O developer conference on Tuesday, Google announced that it’s testing a new call monitoring feature that will warn users if the person they’re talking to is likely attempting to scam them and encourage them to end such calls.

Google says the feature utilizes Gemini Nano — a reduced version of the company’s Gemini large language model for Android devices that can run locally and offline — to look for fraudulent language and other conversation patterns typically associated with scams. Users will then receive real-time alerts during calls where these red flags are present.

Some examples of what could trigger these alerts include calls from “bank representatives” who make requests that real banks are unlikely to make, such as asking for personal information like your passwords or card PINs, requesting payments via gift cards, or asking users to urgently transfer money to them. These new protections are entirely on-device, so the conversations monitored by Gemini Nano will remain private, according to Google.

Image: Google
Here’s an example of the notification that users will receive during suspicious calls, giving the option to either continue the call or swiftly end it.

There’s no word on when the scam detection feature will be available, but Google says users will need to opt in to utilize it and that it’ll share more information “later this year.”

While scam calls may seem easily detectable to some after years of awareness campaigns and accessible guidance on how to avoid them, there’s always a risk of getting caught out. A report from the Global Anti-Scam Alliance last October found that 1 in 4 people globally had lost money to scams or identity theft over the prior 12-month period, losing over $1 trillion during that time.

So, while the pool of people who might find such tech useful is vast, device compatibility could limit its reach. Gemini Nano is currently supported only on the Google Pixel 8 Pro and the Samsung Galaxy S24 series, according to its developer support page.


Google targets filmmakers with Veo, its new generative AI video model

Veo demos show that Google’s AI-generated video capabilities have come a long way. | Image: Google

It’s been three months since OpenAI demoed its captivating text-to-video AI, Sora, and now Google is trying to steal some of that spotlight. Announced during its I/O developer conference on Tuesday, Google says Veo — its latest generative AI video model — can generate “high-quality” 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles.

Veo has “an advanced understanding of natural language,” according to Google’s press release, enabling the model to understand cinematic terms like “timelapse” or “aerial shots of a landscape.” Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are “more consistent and coherent,” depicting more realistic movement for people, animals, and objects throughout shots.

Image: Google
Here are a few examples, but ignore the low resolution if you can — we had to compress the demo videos into GIFs.

Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes.

As is the case with many of these AI model previews, most folks hoping to try Veo out themselves will likely have to wait a while. Google says it’s inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives and will build on these collaborations to ensure “creators have a voice” in how Google’s AI technologies are developed.

Image: Google
You can see here how the sun correctly reappears behind the horse and how the light softly shines through its tail.

Some Veo features will also be made available to “select creators in the coming weeks” in a private preview inside VideoFX, with a waitlist open for an early chance to try it out. Otherwise, Google is also planning to add some of its capabilities to YouTube Shorts “in the future.”

This is one of several video generation models that Google has produced over the last few years, from Phenaki and Imagen Video — which produced crude, often distorted video clips — to the Lumiere model it showcased in January of this year. The latter was one of the most impressive models we’d seen before Sora was announced in February, with Google saying Veo is even more capable of understanding what’s in a video, simulating real-world physics, rendering high-definition outputs, and more.

Meanwhile, OpenAI is already pitching Sora to Hollywood and planning to release it to the public later this year, having previously teased back in March that it could be ready in “a few months.” The company is also already looking to incorporate audio into Sora and may make the model available directly within video editing applications like Adobe’s Premiere Pro. Given Veo is also being pitched as a tool for filmmakers, OpenAI’s head start could make it harder for Google’s project to compete.


Gemini will integrate with Calendar, Tasks, and Keep

Illustration by Alex Castro / The Verge

Part of Google’s vision at its I/O developer conference this year has involved transforming its Gemini AI chatbot into more of a digital assistant that specializes in dealing with our day-to-day tedium. One of the new ways it’s doing that is by integrating Gemini with Google Calendar, Tasks, and Keep.

The new integrations, which Google says are coming “soon,” build on the extensions the company added to Bard last year and carried over to its Gemini chatbot. To use them, you just ask Gemini to do something involving one of the services it can tap into, like summarizing your emails in Gmail for the day, or type an @ symbol into Gemini’s text box to bring up a list of extensions like Google Docs or Drive.

The Gemini chatbot also supports uploading images on the web or in the Google smartphone app, so one example Google offered is taking a picture of a list of school events and having Gemini add them to your personal Google Calendar. That sounds great for me, a parent who would love to just point my phone at a piece of paper and say, “Hey dingus, add all of this to my kid’s calendar” instead of relying on my ADHD-addled brain to remember to do it manually.

Another example Google offered is having Gemini add items from a recipe to a Google Keep shopping list. It’s not a total black box, though — you’ll have a chance to check Gemini’s work before it carries out what you’re asking for, according to head of Gemini Sissie Hsiao during a briefing The Verge attended ahead of the launch.

The new integrations could have far less friction than dealing with Google Assistant or Siri, assuming it gets all the information right. Sure, you can ask other assistants to add calendar events, but you never know if you’re saying the right incantation to get all of the information — dates, times, locations — in the right place on the right calendar.

We won’t know if the Gemini chatbot and its multimodal AI contemporaries that are starting to trickle out now will do a better job until they’re in front of us. But boy, would I rather be able to take pictures of a piece of paper than forget it’s my kid’s turn to bring snacks to school because I got distracted while I was updating my calendar.


Google Photos is getting its own ‘Ask Photos’ assistant this summer

Image: Google

Google Photos already has impressive search capabilities, but Google is using Gemini to dial those powers up to the next level. During today’s I/O keynote, CEO Sundar Pichai announced “Ask Photos,” a new feature coming this summer that should make the service much smarter when it comes to understanding what it is you’re looking for, using artificial intelligence to connect the dots for more sophisticated requests.

Pichai asked the app, “What’s my license plate number again?” Currently, searching for a license plate requires scrolling through photos of many different cars. But here, Google Photos was smart enough to figure out which vehicle was the intended one — based on location, how many times it has appeared in photos through the years, and other data — and came back with the actual number in a text response along with an image verifying it.

Ask Photos, a new feature coming to @GooglePhotos, makes it easier to search across your photos and videos with the help of Gemini models. It goes beyond simple search to understand context and answer more complex questions. #GoogleIO pic.twitter.com/OsYXZLo5S1

— Google (@Google) May 14, 2024

“Ask Photos can also help you search your memories in a deeper way,” he said, telling the app to “show me how Lucia’s swimming has progressed.” Gemini then collected a wide net of photos that summarized years’ worth of a child’s swimming lessons.

Pichai said Ask Photos will roll out to Google Photos sometime this summer — “with more capabilities to come.” To illustrate just how essential the service has become for millions of people, Google’s CEO said that Photos, which launched “almost 9 years ago,” now receives 6 billion uploads (a mix of photos and videos) daily. That’s a whole lot of memories on Google’s servers.


Google I/O 2024 live blog: it’s AI time

Image: Google

The future of Gemini, Search, Android, and more.

Welcome to Google I/O 2024. Google is kicking off its annual developers conference in Mountain View, California, today at 10AM PT / 1PM ET — and, as usual, we’re bringing it to you with a world-class Verge live blog.

What will we witness? One thing’s for sure: Sundar will say “AI.”

While we argued that last year’s Google developers conference was all about artificial intelligence, the AI arms race has only accelerated since then. 2024 is the year that Google reorganized vast swaths of the company around AI, realigning its Search, Android, and hardware teams toward that end. Meanwhile, it has become a $2 trillion company and gone all in on its own suite of Gemini AI models to compete with OpenAI and others.

But it’s high time to see what Google’s AI can meaningfully do for you, particularly as OpenAI sets its sights on competing in search and bringing the Her sci-fi vision of voice assistants to life.

Will Google’s rival “Pixie” assistant for Android phones appear for the first time today? Is this it? Guess you’ll follow along to find out!



Amazon follows Fallout with live-action Tomb Raider show

Image: Crystal Dynamics

In addition to announcing a second season of Mr. & Mrs. Smith and a fifth season of The Boys, Amazon is adding another new series to its television lineup: a live-action Tomb Raider series. The show will be produced as a collaboration between Crystal Dynamics and Amazon MGM Studios, with Phoebe Waller-Bridge serving as writer and executive producer, confirming rumors reported last year.

The Tomb Raider series is no stranger to live-action adaptations. In the 2000s, Angelina Jolie starred as the badass grave robber in Lara Croft: Tomb Raider and its sequel, The Cradle of Life. At the time, both movies were financial if not critical successes but are viewed more favorably today, especially when compared to contemporaries like Doom, Silent Hill, and Max Payne. In 2018, a reboot of the Tomb Raider movie franchise was produced to follow up the reboot of the game series, this time with Ex Machina star Alicia Vikander in the lead.

With the Fallout show hitting over 65 million viewers, Amazon has proven it can make a pretty good video game adaptation. Plus, the streamer is no stranger to successful adaptations of other mediums with The Boys, Gen V, and Invincible, so a Tomb Raider series seems like it could be a good fit. Phoebe Waller-Bridge, notable for her work on Fleabag and Killing Eve, also seems to be a fan of the series, sharing in Amazon’s press release, “Lara Croft means a lot to me, as she does to many, and I can’t wait to go on this adventure. Bats ‘n all.” I’m not sure if bats were a pain in the ass in the reboots as they were in the original trilogy, but at least, as an original trilogy fan myself, I feel like the series is in capable hands. She also has hands-on tomb-raiding experience, so to speak, starring alongside Harrison Ford in the latest Indiana Jones movie.

The streamer is producing this new show in tandem with Amazon Games publishing the next entry in the Tomb Raider game series. That means if the show turns out to be good, fans likely won’t have to wait too long for a new game, unlike all the poor players jonesing for a new Fallout entry.


Jeopardy! is making its first-ever streaming spinoff for Prime Video

Image: Sony Pictures

The classic competitive game show Jeopardy! is getting a spinoff series on Prime Video. Sony Pictures Television is producing Pop Culture Jeopardy!, which turns the classic academic quiz show with three challengers into a team-based trivia game covering music, movies, culture, sports, celebrities, entertainment, and more.

The new series marks the first time the company's game show division has expanded the Jeopardy! brand to a streaming platform with a new show. Years ago, the company let viewers binge the main show on Hulu.

While the series sounds like a casual side mission for the franchise, Sony Pictures Television's president of game shows, Suzanne Prete, says it will "be a nail biter" for fans and that teams will compete "at the highest level." Pop Culture Jeopardy! is backed by producer Michael Davies, who previously worked on Who Wants to Be a Millionaire. That show was a huge success when it premiered in the US, which gives this new Jeopardy! series some promise.

The mainline Jeopardy! series had several guest hosts after Alex Trebek's death in 2020 but has now finally settled on Ken Jennings. A host for Pop Culture Jeopardy! has yet to be announced, but here's hoping the process isn't quite as long or dramatic.



Ancient trees show how hot summers have gotten

The annual rings on the trunk of a conifer tree. | Photo: Getty Images

A summer marked by deadly heatwaves across Asia, Europe, and North America last year turns out to have been the hottest in the Northern Hemisphere in at least 2,000 years, according to a new study published in the journal Nature.

Officially, 2023 went down in the history books as the hottest year on record for the planet, but those records only go back to 1850. To see how drastically the climate has changed over millennia, the authors of the new paper studied ancient tree rings to gauge fluctuations in temperature over the centuries.

The results show us how extreme the weather is becoming. And while temperatures have reached unprecedented peaks, they're also a warning of what's to come unless policymakers do more to turn down the heat.

"Personally, I'm not surprised, but I'm worried," Jan Esper, lead author of the study and a professor of climatology at Johannes Gutenberg University, said in a briefing with reporters. "The longer we wait, the more expensive it will be and the more difficult it will be to mitigate or even stop [global warming]."

For this study, Esper and his colleagues were limited to data they could collect from the Northern Hemisphere outside of tropical regions. Most of the oldest meteorological stations, dating as far back as the mid- to late 1800s, are located in the Northern Hemisphere, and of those, 45 of 58 are in Europe. To look further back in time and across a broader area, they relied on tree rings from the wood archives of archaeologists.

The cross section of a tree can tell us about its life and the world in which it lived. Many trees add a layer of light-colored "earlywood" each spring and a layer of dark "latewood" each summer. Counting up the rings shows the tree's age, and thicker rings can indicate a warmer year in trees that time their growing seasons with changes in temperature.

This is a treasure trove of data in cooler climates with defined seasons. But unfortunately, again, it's found mostly in the Northern Hemisphere. There's a dearth of this data in more arid and tropical regions in the Southern Hemisphere, where there might be fewer trees or trees that don't share the same growing patterns.

Working with what they had, the researchers found that land temperatures in the summer of 2023 in the Northern Hemisphere were 2.2 degrees Celsius higher than the average temperature between the years 1 and 1890. On paper, that might look like a small difference, but for life on Earth, it's a significant shift.

It's a steeper rise in temperature than the goal set out in the landmark Paris agreement, which strives to stop global temperatures from climbing more than 1.5 to 2 degrees Celsius above where they were before the Industrial Revolution.

Two degrees Celsius of global warming would be enough to shift 13 percent of Earth's ecosystems to a new biome, according to the Intergovernmental Panel on Climate Change. Much of the Amazon rainforest is in danger of becoming a savannah, for example. Coral reefs would decline by 99 percent, and nearly 40 percent of the world's population could experience severe heatwaves at least once every five years.

We saw a deadly taste of that already last year, with record-breaking heatwaves across Europe, North America, and China that would have been "extremely rare or even impossible without human-caused warming," according to an international collaboration of researchers called World Weather Attribution.

It was a particularly sweltering year in part because an El Niño climate pattern dealt a double whammy alongside climate change in 2023. El Niño hasn't ended yet, so that combo is already expected to make this summer another scorcher. However, if countries around the world can transition to clean energy by 2050, meeting the goals of the Paris accord would stop climate change in its tracks.

"I am not concerned about myself because I'm too old, but I have two children and there are many other children out there. And for them [global warming is] really dangerous," Esper said. "So we should do as much as possible as soon as possible."

