Month: May 2024

ElevenLabs’ AI generator makes explosions or other sound effects with just a prompt

Cath Virginia / The Verge | Photos by Getty Images

ElevenLabs already offers AI-generated versions of human voices and music. Now, it will let people create sound effects for podcasts, movies, or games, too. The new Sound Effects tool can generate up to 22 seconds of audio from a user prompt, works alongside the company’s voice and music platform, and gives users at least four downloadable clip options.
The company says it worked with the stock media platform Shutterstock to build a library and train its model on its audio clips. Shutterstock has licensed its content libraries to many AI companies, including OpenAI, Meta, and Google.

pic.twitter.com/pxrJy3BPOB — ElevenLabs (@elevenlabsio) May 31, 2024

Sound Effects is free to use, but only paid tiers get commercial licenses for the generated audio clips, while free users “must attribute ElevenLabs by including ‘elevenlabs.io’ in the title.” ElevenLabs users have a set character limit when writing prompts, with free users getting 10,000 characters per month. For Sound Effects, ElevenLabs says on its FAQ page that it deducts 40 characters per second from that allotment if users set the audio clip duration themselves; if they use the default duration, each prompt request is charged a flat 200 characters.
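That pricing rule is easy to work through: a quick sketch in Python (the helper function below is our illustration of the FAQ’s description, not part of any ElevenLabs API).

```python
def sound_effect_cost(duration_seconds=None):
    """Estimate the character-count cost of one Sound Effects request.

    Per ElevenLabs' FAQ: setting a duration yourself costs 40 characters
    per second of audio; leaving the default duration costs a flat 200.
    """
    if duration_seconds is None:
        return 200  # default duration: flat charge
    return 40 * duration_seconds

# A free user's 10,000 monthly characters cover 50 default-duration prompts...
print(10_000 // sound_effect_cost())    # 50
# ...or 11 prompts at the maximum 22-second duration (880 characters each).
print(10_000 // sound_effect_cost(22))  # 11
```

So specifying any duration over five seconds costs more than accepting the default, since 40 characters per second passes 200 at the five-second mark.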
Sound effect libraries for creators, filmmakers, and video game developers already exist, but they can be expensive or have trouble surfacing just the right type of sound. ElevenLabs says in its blog post that it designed Sound Effects “to generate rich and immersive soundscapes quickly, affordably and at scale.”
Other AI developers are also developing their own text-to-sound generators. Stability AI released Stable Audio last year, which creates audio clips of music and sound effects, and Meta’s AudioCraft models generate natural sound (think background noises like wind or traffic).

Startups Weekly: Musk raises $6B for AI and the fintech dominoes are falling

Welcome to Startups Weekly — Haje’s weekly recap of everything you can’t miss from the world of startups. Sign up here to get it in your inbox every Friday. In a twist that shocks absolutely no one and thrills pyromaniacs who love seeing money burn, Elon Musk’s newest venture, xAI, has snagged a casual $6 […]
© 2024 TechCrunch. All rights reserved. For personal use only.

Meta’s AI is summarizing some bizarre Facebook comment sections

Illustration by Nick Barclay / The Verge

If you’ve been on the Facebook app lately, you might’ve seen Meta’s AI inject itself into the comment section with summaries of what people say. Given how wild Facebook comment sections often become, it’s not hard to imagine how ridiculous some of these summaries turn out. (This isn’t the first time Meta’s AI has appeared in the comment section, by the way: 404 Media spotted it pretending to be a parent in a Facebook group.)
After seeing screenshots of the feature shared on Threads and Reddit, I decided to check the comment sections on my Facebook app. I found the AI summaries popping up on many of the posts I checked — unhinged responses and all. One AI summary on a post about a store closure said, “Some commenters attribute the closure to the store ‘going woke’ or having poor selection, while others point to the rise of online shopping.”

Image: The Verge

Another Facebook post from Vice about Mexican street wrestlers prompted a comment section summary that said some people were “less impressed” with the performance and referred to it as a “moronic way of panhandling.” The AI also picked up some of the more lighthearted jokes people made about a bobcat sighting in a Florida town. “Some admired the sighting, with one commenter hoping the bobcat remembered sunscreen.”
It’s still not clear how Meta chooses which posts to display comment summaries on, and the company didn’t immediately respond to The Verge’s request for comment.
Either way, the summaries really don’t include anything that I found useful (unless you love vague notions about what random people have to say) — but they could help you identify posts where the comment section has gotten too toxic to bother scrolling.
The AI summaries have also prompted privacy concerns, as Meta is feeding user comments into its AI system to generate them. Over the past week or so, many Facebook and Instagram users in the European Union and the UK received a notification informing them that Meta will train its AI on their content. (Data protection laws in both regions require Meta to disclose this information.) Although Meta will let these users object to having their data used to train AI, the process isn’t that simple, and the company has rejected some users’ requests.
Here in the US, Meta’s privacy policy page says the company uses “information shared on Meta’s Products and services” to train AI, including posts, photos, and captions. Meta lets you submit a request to correct or delete personal information used to train its AI models, but it only applies to information from a third party. Everything else seems to be fair game.

Japan’s Push To Make All Research Open Access is Taking Shape

The Japanese government is pushing ahead with a plan to make Japan’s publicly funded research output free to read. From a report: In June, the science ministry will assign funding to universities to build the infrastructure needed to make research papers free to read on a national scale. The move follows the ministry’s announcement in February that researchers who receive government funding will be required to make their papers freely available to read on institutional repositories from January 2025. The Japanese plan “is expected to enhance the long-term traceability of research information, facilitate secondary research and promote collaboration,” says Kazuki Ide, a health-sciences and public-policy scholar at Osaka University in Suita, Japan, who has written about open access in Japan.

The nation is one of the first Asian countries to make notable advances towards making more research open access (OA) and among the first countries in the world to forge a nationwide plan for OA. The plan follows in the footsteps of the influential Plan S, introduced six years ago by a group of research funders in the United States and Europe known as cOAlition S, to accelerate the move to OA publishing. The United States also implemented an OA mandate in 2022 that requires all research funded by US taxpayers to be freely available from 2026. When the Ministry of Education, Culture, Sports, Science and Technology (MEXT) announced Japan’s pivot to OA in February, it also said that it would invest around $63 million to standardize institutional repositories — websites dedicated to hosting scientific papers, their underlying data and other materials — ensuring that there will be a mechanism for making research in Japan open.

Read more of this story at Slashdot.
