verge-rss

This flying robot needs a hug

EPFL researchers drew inspiration from bats and owls to design a flying robot that crash-lands with style. | Image: Nature

Instead of hunting for a runway, researchers have built an unmanned aerial vehicle (UAV) that can land by crashing into trees or poles and wrapping its grippy wings around them to prevent a fall. It’s an unorthodox approach, but one that could make it easier to position surveillance or inspection equipment in hard-to-reach areas.

The UAV, which its designers from the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology Lausanne (EPFL) have called the PercHug, is yet another robot built to emulate a behavior seen in nature: bats and owls using their wings to both fly and climb or perch onto trees.

The PercHug robot is designed with dual-purpose hinged wings that remain rigid and outstretched so the UAV can fly but become flexible when a tension wire is released.

Image: EPFL
A curved nose on the front of the UAV helps reposition it to grab onto a structure using its flexible wings.

As explained in a recently published paper in the journal Nature, the lightweight 550-gram UAV features an “upturned nose design” that causes the craft to reorient itself vertically when it begins to fall after a crash. The impact of the crash also releases the tension wire, causing the UAV’s spring-loaded wings to wrap around the structure and remain perched on it — most of the time.

Even with fishing hooks added to the outer segment of the wings to improve the grip, the PercHug UAV successfully remained attached to a tree or a pole only 73 percent of the time. And that was during tests where the UAV made impact after a short, gentle glide.

The unique approach of repurposing the wings for a safe landing through perching eliminates the need for additional landing mechanisms. That allows the UAV to be built lighter, which could expand its payload capacity and extend how far it can fly before targeting a tree.

A higher success rate will be needed before expensive equipment like sensors or cameras can be deployed by the UAV. The researchers also plan to expand the UAV’s capabilities with avionics and control surfaces, as in its current form it’s just a manually launched glider with no steering capabilities. They also want to come up with a way for it to unperch and take to the skies again on its own.

How to get a transcript for a YouTube video

Illustration by Samar Haddad / The Verge

While I edit and write “how-to” articles, I sometimes find myself in need of how-to directions myself. But when I do a search for guidance on, say, a difficult crochet stitch, it often helps to have a blow-by-blow text explanation that I can have handy without having to sit and watch a video over and over again.

For that, there’s YouTube’s automatically generated transcript. A transcript is not only useful when you want to view a text scroll of what’s being said while you’re watching the video, but it is also handy as a reference point afterward. For example, if you don’t have the time or patience to sit through an entire video, then you can simply glance through the transcript to get the gist of the subject and save it for future reference.

Here’s how to find and use YouTube’s transcript feature:

Open the video you want to view. In the description below the video, look for the …more link and select it.

Screenshot: YouTube

After clicking on the …more link, you’ll find a button labeled Show transcript.

Look at the end of the description (scroll down if you need to) for the Transcript heading and click on the Show transcript button.
You’ll be taken back to the top of the page. On the top right, there will now be a Transcript box with the entire transcript of the video, scrolling along with the audio.
The transcript will be timestamped, but if you find the timestamps distracting, you can get rid of them on the desktop app by selecting the three dots on the top right of the transcript and clicking on Toggle timestamps.

Screenshot: YouTube
The transcript, with timestamps, will appear on the side of the YouTube video.

If you want to use the text elsewhere, just use your cursor to highlight the part(s) you want to save, copy it, and paste it. (Note: I usually use either Paste and match style or Paste without formatting in order to avoid any ads or other elements that may be copied along with the transcription.)
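
If you copy the transcript while the timestamps are still showing, a short script can strip them out afterward. Here is a minimal Python sketch; it assumes each timestamp lands on its own line, which is how the desktop transcript panel copied for me, so adjust the pattern if your pasted text looks different:

    import re

    def strip_timestamps(raw: str) -> str:
        # Keep only lines that are not bare M:SS or H:MM:SS timestamps.
        return "\n".join(
            line for line in raw.splitlines()
            if not re.fullmatch(r"\d{1,2}(:\d{2}){1,2}", line.strip())
        )

    # Example: paste the copied transcript into raw and print the cleaned text.
    raw = "0:00\nwelcome back to the channel\n0:04\ntoday we tackle a tricky stitch"
    print(strip_timestamps(raw))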

You can also create a transcript using YouTube’s mobile app — but while the directions for finding the transcript are about the same, you can’t remove the timestamps. I was also unable to copy / paste the text (if there is a way to do that, please let me know in the comments).

One more note: as anyone who has used YouTube’s automatically generated closed captions or transcripts already knows, the text can sometimes be inaccurate to the point of hilarity. If you’re looking for a more accurate transcript, you could try Google’s Recorder app, which is built into its Pixel phones but available for other Android phones. I compared the two, and while the accuracy of the text was approximately the same, Recorder adds actual sentence structure — you know, periods and capital letters and such — making it far easier to read.

There are also a number of third-party apps out there for both Android and iOS devices that you can try out, although most are not free.
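
And if you’d rather skip the browser entirely, a transcript can also be pulled programmatically. The sketch below leans on the third-party youtube-transcript-api Python package rather than anything official from YouTube; the package name, the get_transcript call, and the returned fields are assumptions based on how that library has commonly worked, so treat this as a starting point and check its current documentation:

    # pip install youtube-transcript-api  (third-party package, assumed)
    from youtube_transcript_api import YouTubeTranscriptApi

    def fetch_transcript_text(video_id: str) -> str:
        # Each entry is expected to look like {"text": ..., "start": ..., "duration": ...}.
        entries = YouTubeTranscriptApi.get_transcript(video_id)
        return " ".join(entry["text"] for entry in entries)

    # "dQw4w9WgXcQ" is just a placeholder ID; swap in the ID from the video's URL.
    print(fetch_transcript_text("dQw4w9WgXcQ"))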

The ‘godmother of AI’ has a new startup already worth $1 billion

Photo by Kimberly White/Getty Images for WIRED

Fei-Fei Li, the renowned computer scientist known as the “godmother of AI,” has created a startup dubbed World Labs. In just four months, it’s already valued at more than $1 billion, the Financial Times reported.

World Labs hopes to use human-like processing of visual data to make AI capable of advanced reasoning, Reuters reported in May. The research to make it human-like, much like what ChatGPT is trying to do with generative AI, is still ongoing.

Li is best known for her contributions to computer vision, a branch of AI dedicated to helping machines interpret and comprehend visual information. She also spearheaded the development of ImageNet, an extensive visual database used for visual object recognition research. Li headed AI at Google Cloud from 2017 to 2018 and currently advises the White House task force on AI.

“[World Labs] is developing a model that understands the three-dimensional physical world; essentially the dimensions of objects, where things are and what they do,” an anonymous venture capitalist with knowledge of Li’s work told the Financial Times.

The startup has had two funding rounds, the latest of which was about $100 million, and it’s backed by Andreessen Horowitz and the AI fund Radical Ventures (which Li joined as a partner last year). Li founded World Labs while on partial leave from Stanford, where she co-directs the university’s Human-Centered AI Institute.

In a TED Talk in April, Li further explained the field of research her startup will work on advancing, which involves algorithms capable of realistically extrapolating images and text into three-dimensional environments and acting on those predictions, using a concept known as “spatial intelligence.” This could bolster work in various fields such as robotics, augmented reality, virtual reality, and computer vision. If these capabilities continue to advance in the ambitious ways Li plans, they have the potential to transform industries like healthcare and manufacturing.

The investment in World Labs reflects a trend where venture capitalists eagerly align themselves with ambitious AI companies, spurred by the surprise success of OpenAI’s ChatGPT, which helped the company rapidly achieve a valuation exceeding $80 billion.

A custom sticker printer sent a pro-Trump mass SMS and enraged its clients

Image: Alex Parkin / The Verge; Photos by Jabin Botsford/The Washington Post via Getty Images

The fallout from the assassination attempt on former President Donald Trump at a rally over the weekend is seeping into even the most tangential circles — in this case, the print-on-demand custom sticker industry.

This week, Sticker Mule, a popular option for companies, creators, event organizers, and others wanting branded merchandise, shared a long message expressing support for Trump.

“People are terrified to admit they support Trump. I’ve been scared myself,” cofounder Anthony Constantino wrote in a post on the social media platform X. “The more people realize that kind-hearted, compassionate people support Trump, the sooner the hate will end.”

The company also sent an email and text message to customers containing much of the same language — but didn’t miss a chance to promote its products, plugging a sale it’s running on shirts this week. Sticker Mule didn’t immediately respond to a request for comment.

Donald Trump was shot. I don’t care what your political views are but the hate for Trump and his supporters has gone too far. People are terrified to admit they support Trump. I’ve been scared myself. Americans shouldn’t live in fear. I support Trump. Many at Sticker Mule… pic.twitter.com/mydQpM8GVV — Sticker Mule (@stickermule) July 14, 2024

Customers did not like that, to put it lightly. Many have vowed to stop ordering stickers from the company; others questioned why Constantino thought it appropriate to blast his customer email list with political endorsements. But the far right has embraced Sticker Mule — and this isn’t the first time the company has tried to lean into politics for online engagement.

For the last couple of years, Sticker Mule has shared a steady stream of posts on X aligned with the right, though none as plainly political as endorsing a far-right candidate on the official company account. In 2022, the company was deep in the release of the so-called Twitter Files, a series of select internal documents that Elon Musk gave right-wing pundits access to. The Twitter Files saga led to sustained and intense harassment of former Twitter employees, including Yoel Roth, former head of trust and safety, whom Sticker Mule also went after.

How was this guy in charge of Trust & Safety at Twitter? pic.twitter.com/JcItzDNjit — Sticker Mule (@stickermule) December 10, 2022

The company has also regularly posted in support of Musk, retweeting his posts and thanking him for fixing hate on the platform. (Reports indicate hate speech has soared on X since Musk’s takeover.) Not to miss an opportunity to profit, Sticker Mule gloated on X about its boost in sales following pro-Musk posts: “Our gift to everyone is to show that Internet haters are totally powerless. Don’t fear them, live happy! Our sales exploded after we got attacked for praising ELON MUSK,” the company posted in 2022.

Which is all to say that though the Trump endorsement seems random and sudden, it’s actually the oldest trick in the book for companies wanting to drum up business and loyalty from a specific audience. Much of the anger directed at Sticker Mule questioned why the company would torch a portion of its business to make a political statement. But it’s actually the opposite: Sticker Mule, like many other companies, has been chasing the money all along. It’s not yet clear whether this time will be any different.

Spotify launches a new voice and language for its AI DJ

Premium Spotify subscribers can switch between DJ’s English and Spanish voice options. | Image: Spotify

Spotify is launching a Spanish-language version of its “AI DJ” feature. The new Spanish-language voice has been provided by Spotify’s senior music editor, Olivia “Livi” Quiroz Roa, with users able to switch between hearing Livi or Xavier “X” Jernigan, another Spotify staffer who voices the English-speaking DJ variant released in February 2023.

The feature uses an AI-generated voice to create radio-like commentary that provides users with additional context and information about the song they’re hearing, which Spotify says makes listeners less likely to skip to another track. This commentary is paired with an endless playlist of personalized music recommendations that change based on user activity, such as when users switch to different genres or artists.

The Spanish-speaking DJ will be available to Premium users in select Latin American and Spanish markets, alongside regions where the English-speaking DJ feature has already launched. This follows recent price increases to Spotify Premium subscriptions for both US and international users, which Spotify said would allow the company to “continue to invest in and innovate on our product features.”

PSA: Samsung’s Galaxy Buds 3 Pro have some very delicate ear tips

Photo by Chris Welch / The Verge

Samsung’s new Galaxy Buds 3 Pro underwent a major redesign, making them look much more like AirPods and any number of other stemmed earbuds. So far, my impressions have been quite positive — full review coming soon — but even before the earbuds are widely released on July 24th, early buyers have been running into a frustrating issue.

Apparently, the ear tips are quite fragile and prone to breaking if you’re not careful. Reddit is already starting to light up with examples:

“How are you supposed to remove the tips?”
“Rant: Buds 3 Pro Eartips”

The Zuyoni Tech YouTube channel also fell victim to the issue.

GIF: Zuyoni Tech
Not the ideal first impression.

With the switch to an AirPods-like design, Samsung also adopted a similar mechanism for attaching the tips: they latch onto the earbud housing with a proprietary hard plastic ring that melds into the soft silicone you put in your ears. But some people are finding that the silicone can rip away when they attempt to remove the tips, leaving the plastic bit attached to the buds.

Except for occasional cleaning, most people don’t remove ear tips often after deciding on the right size. But if you tear them while testing the fit of Samsung’s three included sets, that would certainly be annoying. I haven’t had any trouble with my review pair so far, but there are ample reports from people who haven’t been as lucky.

This page on Samsung’s Korean help site warns that the ear tips could tear if handled forcefully, and there’s a recommended way of removing them. You’re supposed to flip out the ear tip and then gently tug it off. The company cautions against involving any fingernails in the process.

GIF: Samsung
Flip out and tug. Don’t twist or use a fingernail.

I’ve reached out to Samsung for comment on the situation. Hopefully, we’re just looking at some unfortunate early accidents that come with the learning curve of this new design — and not a case where the company has cheaped out on materials. It should only be a matter of time before Comply and other third-party ear tip manufacturers offer their own alternatives to Samsung’s stock tips. But for now, you’ll want to be careful since it’s not so easy to replace these proprietary tips.

Robots will head to the depths to scan the Titanic

The wrecked bow of the Titanic. | Image: NOAA / Institute for Exploration / University of Rhode Island

A pair of remotely operated submersibles (ROVs) is heading down to the wreckage of the Titanic this week to conduct digital 3D scans and take high-resolution images of the ship’s remains. Orchestrated by RMS Titanic, Inc., the expedition is the company’s first to visit the ship since 2010.

RMS Titanic’s seven-person crew arrived at the Titanic’s coordinates last night, according to the company’s most recent Instagram update.

This is also the first attempt of any kind to reach the ship since the OceanGate Titan submersible imploded on its way to the wreck last year, killing five people. (Incredibly, that event didn’t ruin the submersible industry.) However, this particular mission is about preservation and study rather than taking very wealthy people to look at the wreckage.

RMS Titanic will compare its new scans to those taken during its 2010 expedition to document deterioration and “determine the impact of the oceans and other expeditions on the site,” the company writes. It also hopes to discover new marine life or debris field areas and find new deterioration that could offer access to the inside of the ship.

To that end, the ROVs are equipped with “a custom-built structured array of high-resolution cameras and custom lighting” that can capture 65K imagery, the company wrote in April. The cameras are “the highest resolution camera systems ever deployed at the site,” Marine Imaging Technologies founder Evan Kovacs told Oceanographic.

The company also collects artifacts. Indiana Jones may take issue with this, but the company doesn’t believe its collection belongs in a museum. Instead, it shows the artifacts at its permanent exhibits in Las Vegas, Nevada, and Orlando, Florida, as well as in worldwide touring exhibits. According to the site, RMS Titanic “believes it is in the best interest of the public to provide artifacts for display all over the world.”

US eyeing new rules to keep Chinese software out of cars

Photo by Jia Tianyong / China News Service / VCG via Getty Images

The US Commerce Department is expected to issue new rules in August placing limits on vehicle software that comes from China, according to Reuters.

The new rules come as the Biden administration ramps up its scrutiny of Chinese auto imports in an effort to prevent the country from flooding the market with cheap electric vehicles.

While speaking at a forum in Colorado, Alan Estevez, who serves as under secretary of commerce for industry and security, said that the department would propose rules that would require certain vehicle software be made in the US or by its trade partners. The rules would pertain to “key driver components of the vehicle that manage the software and manage the data around that car,” Estevez said, according to Reuters.

Such action would mirror trade restrictions placed against companies like Huawei over national security concerns that the telecom giant could be exploited by the Chinese government for espionage.

The rules would stem from an investigation launched earlier this year by the Commerce Department into connected vehicle software produced in China and other nations that are considered antagonistic to the US.

The probe focused on “connected vehicles,” a broad term that can be applied to any car with internet access. It was meant to address concerns that technology like cameras, sensors, and onboard computers could be exploited by foreign adversaries to collect sensitive data about US citizens and infrastructure.

China has previously accused the US of repeatedly abusing “the concept of national security” to wrongfully target Chinese companies and impede competition in global markets.

The new rules could end up mirroring similar provisions in the federal EV tax credits, which prohibit the credit from being applied to vehicles with battery components made in China. The administration has also proposed steep tariffs on Chinese vehicles in an effort to make them too expensive to sell in the US.

Canon’s new pro cameras have eye-controlled autofocus and stacked sensors

Canon EOS R5 Mark II | Image: Canon

Canon’s new EOS R5 Mark II and EOS R1 cameras both have better autofocus performance than their predecessors, with computations powered by a new Digic X processor. That includes a Digic Accelerator that is behind a new Dual Pixel Intelligent AF autofocus system capable of body, joint, and head area estimation, as well as focusing on people other than the subject.

Both cameras share much-needed upgrades, like backside-illuminated sensors with faster readouts and Canon’s unique eye-controlled autofocus, which was previously only available in the R3. Also, one of the most requested features for filmmakers, support for the Canon Log 2 color profile, is now included.

Image: Canon
R5 Mark II

The R5 Mark II is Canon’s sweet spot for people shooting both photos and video, upgrading the full-frame mirrorless and 8K video-shooting R5 that launched in 2020. The Mark II includes a new stacked sensor (still 45 megapixels) and a faster Digic X processor, all helping it achieve a quicker 30fps capturing speed compared to 20fps in the previous model.

In its first impressions of the new camera, PetaPixel said the R5 II “promises a more compelling blend of image quality and speed” than Sony’s A7R V, despite the latter having more megapixels. The publication also praised the R5 II’s video abilities with “a waveform monitor, zebra display, four-channel audio, and HDR video.”

Image: Canon
The all-new Canon EOS R1.

Canon’s other new release is the all-new R1, which is intended as the new go-to camera for photojournalists who prioritize performance. It includes a stacked 24.2-megapixel full-frame sensor, along with the Digic X processor, helping it achieve 40fps image shooting speeds and giving it better resistance to rolling shutter distortions. Despite the focus on still images, it can still record 4K footage and even 6K RAW video.

Both cameras are already available to preorder on Canon’s website. The R5 Mark II will launch in August for $4,299.00 (body only) or $5,399.00 with an RF 24-105mm F4 L IS USM kit lens. Canon is launching the R1 this fall for $6,299.00 and currently shows an estimated arrival date of November 26th.

Microsoft’s Designer app arrives on iOS and Android with AI editing and creation

Image: Microsoft

Microsoft’s AI-powered Designer app is coming out of preview today for both iOS and Android users. Microsoft Designer lets you use templates to create custom images, stickers, greeting cards, invitations, and more. Designer can also use AI to edit images and restyle them or create collages of images.

Originally available on the web or through Microsoft Edge, Designer has been in preview for nearly a year. It’s now generally available to anyone with a personal Microsoft account and as a free app for Windows, iOS, and Android. The mobile app includes the ability to create images and edit them on the go.

Image: Microsoft
The Microsoft Designer editor view.

Microsoft Designer includes the usual text prompt for generating images, but there is also a big selection of templates you can use to create things like greeting cards, social media posts, icons, wallpapers, coloring book pages, and much more. Designer also includes an avatar creator, which it prompts you to use on the mobile version of the app.

You can also use Designer to edit images with AI, allowing you to restyle existing images or frame them with decorative AI-generated borders. Designer also lets you edit and remove backgrounds, remove people or objects from images, and add text and branding to images.

While Microsoft Designer is available as a standalone app today, Microsoft has also been making Designer available through Copilot in apps like Word and PowerPoint. Copilot Pro subscribers can create images and designs right within Word and PowerPoint, and Microsoft is adding a new banner image generator for Word documents soon.

Image: Microsoft
Microsoft has an AI avatar mode in Designer.

Windows Insiders will also get access to Designer within the Photos app on Windows 11 today, with features like erasing objects, removing backgrounds, auto-cropping, and filters all available directly in Photos. Microsoft had been testing sending images from Photos to Designer, but it’s now integrating the features into Photos so you don’t have to leave the app. Similar features are also coming to Microsoft Edge soon.

Microsoft Designer launches out of preview with 15 free daily boosts that can be used to create or edit AI-powered images and designs. “Boosts are automatically used whenever you’re creating or editing images or designs both in the Designer app and where Designer is integrated across Microsoft apps,” says Sumit Chauhan, corporate vice president of Microsoft’s Office product group. “You can upgrade to a Copilot Pro subscription to receive 100 boosts per day.”
