Month: May 2024
Perplexity will research and write reports
Illustration by Cath Virginia / The Verge | Photos by Getty Images
AI search platform Perplexity is launching a new feature called Pages that will generate a customizable webpage based on user prompts. The new feature feels like a one-stop shop for making a school report since Perplexity does the research and writing for you.
Pages taps Perplexity’s AI search models to find information and then creates what I can loosely call a research presentation that can be published and shared with others. In a blog post, Perplexity says it designed Pages to help educators, researchers, and “hobbyists” share their knowledge.
Users type out what their report is about or what they want to know in the prompt box. They can gear the writing more toward beginners, expert readers, or a more general audience. Perplexity searches for information, then begins writing the page by breaking down the information into sections, citing some sources, and then adding visuals. Users can make the page as detailed or concise as they want, and they can also change the images Perplexity uses. However, you can’t edit the text it generates; you have to write another prompt to fix any mistakes.
I tried out Pages ahead of time to see how it works. Pages is not geared toward people like me who already have an avenue to share our knowledge. But it doesn’t seem geared toward researchers or teachers, either. I wanted to see how it can break down complex topics and if it can help with the difficult task of presenting dense information to different audiences.
Among other topics, I asked Perplexity’s Pages to generate a page on the “convergence of quantum computing and artificial intelligence and its impact on society” across the three audience types. The main difference between audiences seems to be the jargon in the written text and the kind of website it takes data from. Each generated report pulls from different sources, including introductory blog posts like this one from IBM. It also cited Wikipedia, which drove the student report vibe home.
Screenshot: The Verge
One of the pages Perplexity generated for me.
The Perplexity-generated page did a passable job of explaining the basics of quantum computing and how AI fits into the technology. But the “research” didn’t go as deep as I could have if I were writing the presentation myself. The more advanced version didn’t even really talk about “the convergence of quantum computing and AI.” Instead, it found blog posts about quantum inflection points (the point at which quantum technologies become commercially viable), a topic not at all related to what I asked it to write about.
Then, I asked Pages to write a report about myself, mainly because the information there is easily verifiable. But it only took information from my personal website and an article about me on my high school’s website — not from other public, easily accessible sources like my author page on The Verge. It also sometimes elaborated on things that had nothing to do with me. For example, I began my journalism career during the 2008 financial crisis. Instead of talking about the pieces I wrote about mass layoffs, Perplexity explained the beginnings of the financial crisis.
Pages does the surface-level googling and writing for you, but it isn’t research. Perplexity claims that Pages will help educators develop “comprehensive” study guides for their students and help researchers create detailed reports on their findings. Yet I could not upload a research paper for it to summarize, and I couldn’t edit the text it generated, two things I believe users who want to make the most of Pages would appreciate.
I do see one potential user for Pages, and it isn’t one Perplexity called out: students rushing to put out an assignment. Pages may improve in the future. Right now, it’s a way to get easy, possibly correct surface-level information into a presentation that doesn’t really teach anything.
Pages will be available to all Perplexity users, and the company says it’s slowly rolling it out to its free, Pro, and Enterprise users.
Get Up to $500 Before Your Next Paycheck With Chime MyPay – CNET
Bridge the gap between paydays with Chime’s new cash advance service.
Framework Boosts Its 13-inch Laptop With New CPUs, Lower Prices, and Better Screens
Framework, a company known for its modular laptops, has announced a fourth round of iterative updates and upgrade options for its Framework Laptop 13. The upgrades include motherboards and pre-built laptops featuring new Intel Meteor Lake Core Ultra processors with Intel Arc dedicated GPUs, lower prices for AMD Ryzen 7000 and 13th-gen Intel editions, and a new display with a higher resolution and refresh rate.
The Core Ultra boards come with three CPU options, with prices starting at $899 for a pre-built or DIY model. Upgrading from an older Intel Framework board requires an upgrade to DDR5 RAM, and Framework charges $40 for every 8GB of DDR5-5600, which is above market rates. The new 13.5-inch display has a resolution of 2880×1920, a 120 Hz refresh rate, and costs $130 more than the standard display.
Read more of this story at Slashdot.
Spotify is refunding Car Thing owners before bricking their devices
Photo by Ashley Carman / The Verge
Spotify is now offering refunds to people who purchased its $90 Car Thing dashboard accessory. While the streaming company unceremoniously discontinued its first and only hardware product in 2022, just five months after its release, it recently announced plans to deactivate all remaining Car Things on December 4th, 2024. Users looking to get their money back will need to reach out to Spotify support and provide proof of purchase.
The slight concession comes after a group of Car Thing customers filed a class action lawsuit with the Southern District of New York against Spotify over their short-lived and now doomed accessories.
Car Thing was a simple external screen and remote control for the Spotify app on your phone, most attractive to drivers whose cars lack Android Auto or Apple CarPlay. While the device was fairly simple and likely a sales failure for Spotify, it found new life once it went on fire sale, as crafty users repurposed their Car Things for non-car places like home desk setups and even their keyboards.
Spotify’s slight attempt to make it up to Car Thing owners comes after a rough year that saw its music streaming service raise prices in 2023, reports of further increases coming later in 2024, and a round of layoffs that affected 17 percent of its staff and subsequently cut off some popular music discovery tools. While a refund is better than nothing, Spotify is still making its customers jump through hoops by digging up years-old receipts (and, according to Reddit posts, breaking up the refund across multiple payments). Refunds or not, there will still be many useless hunks of e-waste left behind.
Tech giants form AI group to counter Nvidia with new interconnect standard
“Ultra Accelerator Link” will connect high-performance GPUs and servers.
On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia’s proprietary NVLink interconnect technology, which links together multiple servers that power today’s AI applications like ChatGPT.
The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architecture—in parallel. But one GPU often isn’t enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.
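As a rough sketch of why this matters (using NumPy on a CPU as a stand-in for GPU kernels), the forward pass of a single neural-network layer boils down to one matrix multiplication, and it is exactly this operation that GPUs, and the interconnects between them, exist to parallelize at massive scale. The layer sizes below are arbitrary illustrative values:

```python
import numpy as np

# Toy dense layer: y = x @ W + b
# On a GPU, this single matmul decomposes into thousands of
# multiply-accumulate operations that all run in parallel.
batch, d_in, d_out = 32, 512, 256
rng = np.random.default_rng(0)

x = rng.standard_normal((batch, d_in))   # input activations
W = rng.standard_normal((d_in, d_out))   # layer weights
b = np.zeros(d_out)                      # bias

y = x @ W + b                            # one layer's forward pass
print(y.shape)                           # (32, 256)
```

When a model's weights no longer fit on one accelerator, that same multiplication gets sharded across many chips, which is when the speed of the link between them (NVLink today, UALink if the group succeeds) becomes the bottleneck.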
This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia’s proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.
Google will roll out Chrome’s new extension spec next week
Image: The Verge
Google is making way for Manifest V3, the Chrome extension specification that could change the way ad blockers work. The company says it will begin phasing out the old system on the Chrome Beta, Dev, and Canary channels starting on June 3rd.
If you’re on any of these channels, you may see a warning message on your extension management page that says Google will soon end support for extensions running on Manifest V2. The extensions will still work, but Google says it will disable them on your browser in the “coming months” before removing the ability to use them completely. The stable version of Chrome will eventually get these changes, with a full rollout set for the beginning of 2025.
Google’s long-delayed transition to Manifest V3 has faced pushback over concerns it could limit the effectiveness of ad blockers. However, Google has since attempted to address developers’ main concerns by adding support for user scripts and increasing the number of rulesets for the declarativeNetRequest API used by ad blocking extensions. According to Google, Manifest V3 will help improve the security of extensions, as it removes support for remotely hosted code.
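For context on the API at the center of the dispute: under Manifest V3, ad blockers declare blocking rules as static JSON that Chrome evaluates natively, instead of inspecting each network request in JavaScript. A minimal hypothetical declarativeNetRequest ruleset might look like the following; the rule ID and URL filter are illustrative placeholders:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

The extension’s manifest.json references a file like this under the "declarative_net_request" key, which is why the cap on the number of rulesets Google allows has been such a sticking point for ad blocker developers.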
Google says 85 percent of “actively maintained” extensions in the Chrome Web Store have already created Manifest V3 versions, including some of the most popular ad blockers, like AdBlock, Adblock Plus, uBlock, and AdGuard.
More Advanced AI Siri Functionality Not Coming to iOS 18 Until 2025
Apple is planning a major AI overhaul for Siri in iOS 18, and Bloomberg’s Mark Gurman says that the update will let Siri control all individual features in apps for the first time, expanding the range of functions the personal assistant can perform.
Siri will be able to do things like open specific documents, move a note from one folder to another, delete an email, summarize an article, email a web link, and open a particular news site in Apple News. Apple plans to use AI to analyze what people are doing on their devices, automatically enabling Siri features.
To make this happen, Apple engineers had to rearchitect Siri’s underlying software around large language models (LLMs), the same technology behind chatbots like ChatGPT. Apple has been working on a deal with OpenAI to integrate ChatGPT technology into iOS 18, and it is also in talks with Google about incorporating Gemini, but the Siri functionality likely relies on Apple’s own LLM work.
At launch, the new Siri functionality will be limited to Apple apps, and Siri will only be able to respond to one command at a time. Eventually, Apple wants Siri to be able to respond to multiple commands, such as capturing a photo and then sending it to someone in a message.
While the Siri features will be introduced at WWDC 2024, Apple reportedly does not plan to launch them in September when iOS 18 sees an initial release. Instead, Siri will be overhauled in a future iOS 18 update that’s set to be introduced in 2025.
Basic AI tasks in iOS 18 will be processed on device, but more advanced capabilities will rely on Apple’s cloud servers. Gurman previously said that Apple would power all of the initial iOS 18 features on-device without relying on cloud technology in order to preserve privacy, but rumors have shifted in recent weeks. Part of Apple’s new Siri technology will include code for determining whether a request can be processed on device or requires Apple’s servers. On-device iOS 18 AI capabilities will largely require an iPhone 15 Pro or later to work, and an M1 or later for iPadOS 18 and macOS 15.
According to The Information, Apple’s AI servers will be powered by M2 Ultra and M4 chips, with Apple planning to use the Secure Enclave “to help isolate the data being processed on its servers so that it can’t be seen by the wider system or Apple.” Gurman says that Apple will also provide customers with an “intelligent report” that explains how information is kept safe.
We’ll hear all about the AI functionality coming to Siri in just over 10 days. WWDC 2024 is set to begin on Monday, June 10.

This article, “More Advanced AI Siri Functionality Not Coming to iOS 18 Until 2025” first appeared on MacRumors.com
Key misinformation “superspreaders” on Twitter: Older women
Some of our fellow citizens seem to voluntarily do the work of spreading fake news.
Misinformation is not a new problem, but there are plenty of indications that the advent of social media has made things worse. Academic researchers have responded by trying to understand the scope of the problem: identifying the most misinformation-filled social media networks, tracking organized government efforts to spread false information, and even pinpointing prominent individuals who are sources of misinformation.
All of that’s potentially valuable data. But it skips over another major contributor: average individuals who, for one reason or another, seem inspired to spread misinformation. A study released today looks at a large panel of Twitter accounts associated with US-based voters (the work was done back when X was still Twitter). It identifies a small group of misinformation superspreaders who represent just 0.3 percent of the accounts but are responsible for sharing 80 percent of the links to fake news sites.
While you might expect these to be young, Internet-savvy individuals who automate their sharing, it turns out this population tends to be older, female, and very, very prone to clicking the “retweet” button.