verge-rss

Google dominates online ads, says antitrust trial witness, but publishers are feeling ‘stuck’

Image: Cath Virginia / The Verge, Getty Images

Google’s tool that lets publishers sell ad space on their websites is ubiquitous, but that’s largely a testament to how hard it is for customers to get out of it, one former publishing executive testified in federal court on Tuesday.
“I felt like they were holding us hostage,” said Stephanie Layser, a former programmatic advertising executive at News Corp (which owns brands like The Wall Street Journal and the New York Post) who now works at AWS. Layser was testifying as a government witness in the Justice Department’s second antitrust case against Google, which is accusing the company of monopolizing the markets for ad tech tools and illegally tying together two of its products.
Layser was one of three witnesses the court heard from on Tuesday, covering perspectives from the publisher side, the advertiser side, and inside of Google. Through their testimony, the government is attempting to paint a picture of a company that exerts so much control over the markets for ad tech tools that customers don’t walk away, even in the face of unfavorable changes. That’s because, according to the government, Google has protected its monopoly power, preventing adequate alternatives and true competition from emerging. Google, for its part, says the government is punishing it for success and trying to force it to deal with rivals on more favorable terms.
Layser felt trapped by a change Google rolled out in 2019 under what it called unified pricing rules (UPR), which prevented publishers from setting higher floor prices just for Google’s ad exchange, AdX. With UPR, Layser said, it was still possible to set different floors for other exchanges within each of their own systems, but not for Google’s. Publishers might want to set a higher floor price for AdX to encourage more competition during ad auctions, in the hope that bids would come in above the bare minimum they were willing to accept, she said.
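To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of how per-exchange price floors filter bids before an ad server picks a winner. It models a simplified first-price auction, not Google’s actual logic, and the non-AdX exchange names and all dollar figures are invented for illustration.

```python
# Simplified illustration of per-exchange price floors, not Google's real auction.
# All exchange names besides AdX and all CPM figures here are hypothetical.

def run_auction(bids, floors):
    """Return the winning (exchange, bid) after applying each exchange's floor."""
    eligible = {ex: bid for ex, bid in bids.items() if bid >= floors.get(ex, 0.0)}
    if not eligible:
        return None  # no bid cleared its floor; the impression goes unsold here
    winner = max(eligible, key=eligible.get)
    return winner, eligible[winner]

bids = {"AdX": 2.10, "ExchangeB": 2.00, "ExchangeC": 1.90}  # $ CPM offers

# Before unified pricing rules: a publisher could set a higher floor just for AdX,
# so AdX had to clear a tougher bar than everyone else.
per_exchange_floors = {"AdX": 2.50, "ExchangeB": 1.50, "ExchangeC": 1.50}
print(run_auction(bids, per_exchange_floors))  # ('ExchangeB', 2.0): AdX is filtered out

# Under UPR: a single floor applies to every exchange in the ad server's auction,
# so AdX can no longer be singled out.
unified_floor = {ex: 1.50 for ex in bids}
print(run_auction(bids, unified_floor))        # ('AdX', 2.1): AdX wins outright
```

The testimony is about the second case: once the same floor applies everywhere, the lever publishers had used to make other exchanges compete against AdX goes away.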
When Google introduced UPR, Layser set up a meeting with Google executives to express her concerns and said she believed the program was “in the best interest of Google and not in the best interest of their customers.” She didn’t recall how Google responded but said that “nothing changed,” and the program was implemented.
Despite her grievances, Layser said switching to a different tool was not a viable option. That’s because using Google’s publisher ad server, known at the time as DoubleClick for Publishers (DFP) and today as Google Ad Manager, was the only way to access the large base of Google advertiser demand with real-time prices — which is important in a system where computer-run ad auctions happen in milliseconds.
Layser even helped put together an analysis at News Corp considering the pros and cons of switching to another publisher ad server, AppNexus (later bought by Microsoft and rebranded as Xandr), but determined the risk of losing revenue without the same access to Google Ads demand was too great.
The decision didn’t really have to do with the quality or price of Google’s product, however, Layser testified. “DFP is a 25- to 30-year-old piece of technology. It’s slow and clunky,” she told the court. Google also gave News Corp less insight into its transactions than it could have gotten with AppNexus, Layser said. She “begged” Google for what she called “log-level data” but never got it. And because of DFP’s limitations, Layser said she was unable to take on projects she felt could maximize revenue. “I couldn’t innovate,” she said. “I felt stuck.”
“DFP is a 25- to 30-year-old piece of technology. It’s slow and clunky.”
Despite DFP’s supposed drawbacks, the Department of Justice alleges the tool has nearly 90 percent market share in the US. Layser, who previously consulted for upward of 70 publishers, said she could think of “maybe three publications out of hundreds that don’t use DFP.” Because of its near universality, she said there are “legions” of publishing professionals who have only ever worked with the Google tool in their whole careers.
During cross-examination, Google’s attorneys pointed out that News Corp believed itself to be competitive with Google in some areas, underscoring Google’s claim that the DOJ is trying to force it to deal with rivals. In the analysis about switching to AppNexus, News Corp wrote that because it owns a media business, it was unlikely to have aligned interests with Google long term.
Later in the day, the court heard from Jay Friedman, CEO of the Goodway Group, who shed light on the advertiser side of the market. Friedman testified that Google’s AdX has been the only exchange his company has not been able to negotiate fees with, even though its rate is higher than others. “We were told it wasn’t an option,” he said.
Then, the court heard a prerecorded deposition from Eisar Lipkovitz, a former VP of engineering for display and video ads at Google. Lipkovitz said he still has “PTSD” from his time at Google and expressed frustration with colleagues who disagreed with his view of how the tools should work or moved too slowly on projects.
Lipkovitz said he recognized a potential conflict of interest in the way DFP and AdX were integrated, and he described those in the company who denied it as making “self-interested arguments.” Still, he attributed the lack of alternatives to Google’s DFP to the difficulty of running such a product. “It’s a business that nobody wants,” he said.


Adobe previews its upcoming text-to-video generative AI tools

Adobe’s new text-to-video and image-to-video AI features will be available in beta later this year. | Image: Adobe

Adobe has teased some of its upcoming generative AI video tools, including a new feature that can produce video clips from still images. This latest preview builds on the in-development Firefly video model that the software giant demonstrated in April, which is set to power AI video and audio editing features across Adobe’s Creative Cloud applications.
The new promotional teaser shows footage produced by Firefly’s text-to-video capabilities, which Adobe announced (but didn’t demonstrate) earlier this year. The tool allows users to generate video clips from text descriptions and adjust the results using a variety of “camera controls” that simulate camera angles, motion, and shooting distance. Adobe also demonstrated an image-to-video feature for the Firefly video model that can generate clips from specific reference images. Adobe suggests this could be useful for making additional B-roll footage or patching gaps in production timelines.

Image: Adobe
Adobe’s new AI video tool will allow users to choose preset filming styles to emulate, alongside describing their desired footage.

If the example footage is any indication of the final release, the generated video quality looks on par with what we’ve seen from OpenAI’s Sora model so far, which Adobe is also “exploring” as a third-party integration for its Premiere Pro video software. Duration is limited, though, with Adobe’s VP of generative AI, Alexandru Costin, telling The Verge that videos produced by the text-to-video and image-to-video features have a maximum length of five seconds.
One advantage Adobe’s own model may have over Sora is its promise that Firefly is “commercially safe” because it’s trained on openly licensed, public domain, and Adobe Stock content, which could ease some concerns about copyright infringement.

GIF: Adobe
Here’s an AI-generated example clip of realistic camera footage produced by Adobe’s Firefly video model.

The text-to-video and image-to-video features will both initially be available in beta as a standalone Firefly application sometime later this year. Adobe says the new Firefly video model will eventually be integrated into its Creative Cloud, Experience Cloud, and Adobe Express applications.
The company also showed off some additional clips of the upcoming “Generative Extend” feature for Premiere Pro that can extend the length of existing video footage, similar to Photoshop’s Generative Expand tool for image backgrounds. Adobe says this will also be arriving on an unspecified date “later this year.”


Nuro is branching out into robotaxis and personally owned autonomous vehicles

Nuro wants to put its technology into a range of vehicle types. | Image: Nuro

Nuro, the delivery robot company created by veterans of Google’s self-driving car project, is taking the bold — and risky — step to expand its business model to include robotaxis and personally owned autonomous vehicles.
The California-based company, which currently operates a small fleet of delivery vehicles in California and Texas, doesn’t plan to build the vehicles itself. Instead, it will license its autonomous driving technology to outside companies, including car companies that want to use it for advanced driver-assist systems (ADAS) and rideshare operators for robotaxis.
Nuro uses hardware from major companies like Nvidia and Arm to power the Nuro Driver, the company’s branding for the hardware and software stack behind its autonomous delivery vehicles. While the software runs on technology from Nvidia and Arm, Nuro’s powertrain — electric motors and batteries — is developed by China’s BYD. The fact that Chinese-made EVs will likely face steep new tariffs under the Biden administration could be seen as a factor in Nuro’s decision to move into new, less trade-dependent territory. (A spokesperson for the company said that tariffs didn’t play a role in its decision-making.)
Nuro will tailor its Driver product to meet the specific use case of the licensing company, whether it’s a fully autonomous robotaxi or a partially autonomous ADAS feature. The company will also sell an AI platform of developer tools “to support AI development and validation for the Nuro Driver.”

Image: Nuro

It’s a risky step given the thorny regulatory requirements surrounding driverless vehicles that carry human passengers. Nuro is one of the few companies to have received an exemption from federal vehicle safety rules to deploy vehicles without certain controls, like sideview mirrors. This is partly due to the fact that the company has only delivered groceries and other household items in its self-driving vehicles; now, it’s proposing to deliver humans as well.
Andrew Clare, Nuro’s chief technology officer, said the reason for the business model shake-up was twofold: first, the company’s self-driving technology has improved to the point where Nuro now believes it can handle a broader range of tasks beyond just delivery.
“Our tech has gotten to the point where we believe very firmly that it is ready for more applications,” Clare said.
Nuro has only delivered groceries; now, it’s proposing to deliver humans as well
Second, Clare said that when Nuro first launched eight years ago, there weren’t any car companies seriously planning to manufacture fully driverless, Level 4-capable vehicles. If Nuro wanted to go that route, it would have had to build the hardware itself, and that would likely have been too pricey for an independent company without a well-capitalized backer.
“Fast forward eight years, and there are now multiple OEMs who have recently been announcing that they are starting to create these platforms, either for mobility services or for consumer vehicles,” Clare said.
Nuro’s status as a “commercially independent” company that’s not owned by a major tech company gives it a leg up in conversations with potential partners, Clare said. Other major AV operators, like Waymo (owned by Alphabet), Cruise (General Motors), and Zoox (Amazon), can’t make similar claims.
“They’re owned by big mothership companies,” he said. “That makes us a very strong partner for both mobility companies and OEMs.”
The news comes at a perilous time for autonomous vehicle developers. Companies are facing new questions about safety following several incidents in which people were injured by driverless vehicles. Outside investment has dwindled as deployment timelines have stretched further into the future. And surveys suggest that the public remains deeply skeptical about self-driving cars.
“Our tech has gotten to the point where we believe very firmly that it is ready for more applications.”
Nuro was founded in 2016 by Dave Ferguson and Jiajun Zhu, two veterans of the Google self-driving car project that would go on to become Waymo. It is one of the few companies operating fully driverless vehicles — that is, vehicles without safety drivers behind the wheel — on public roads today.
Nuro’s current fleet of vehicles, which operates in California and Texas, has traveled over 1 million miles autonomously without any major safety incidents, Clare said. That includes a mix of R1 and R2 vehicles as well as a fleet of Toyota Highlanders retrofitted with autonomous driving hardware. Nuro is also building a facility in Nevada, where it will manufacture its next-generation vehicles.
Nuro was forced to put its commercial expansion on pause and delay production of its R3 vehicle last year as it dealt with rising costs, and it announced a restructuring that cut 30 percent of its workforce.
It is also the first AV operator to receive a special exemption from certain federal safety requirements and was the first to charge money for its driverless deliveries in California.
Clare said Nuro is in a more “financially strong” position than it was two years ago. “We have multiple years of runway,” he said. “We certainly went through the painful period of restructuring two years ago, but that really put us in a very stable place financially.”
Clare insisted that the idea that people could someday own their own Level 4 autonomous vehicles — a controversial opinion in the world of AVs — was not a question of if but when. Some auto manufacturers are making plans to produce their own Level 4 vehicles for personal use, even as some experts insist that the safety and liability concerns remain too vast.
But Nuro thinks its own technology could help facilitate that shift, even if it’s still years away from reality.
“While it may not be available on the market today, it is coming,” Clare said. “It is simply a matter of time that it’s coming.”


Will California flip the AI industry on its head?

Image: Cath Virginia / The Verge, Getty Images

SB 1047 aims to regulate AI, and the AI industry is out to stop it.

Artificial intelligence is moving quickly. It’s now able to mimic humans convincingly enough to fuel massive phone scams or spin up nonconsensual deepfake imagery of celebrities to be used in harassment campaigns. The urgency to regulate this technology has never been more critical — so, that’s what California, home to many of AI’s biggest players, is trying to do with a bill known as SB 1047.
SB 1047, which passed the California State Assembly and Senate in late August, is now on the desk of California Governor Gavin Newsom — who will determine the fate of the bill. While the EU and some other governments have been hammering out AI regulation for years now, SB 1047 would be the strictest framework in the US so far. Critics have painted a nearly apocalyptic picture of its impact, calling it a threat to startups, open source developers, and academics. Supporters call it a necessary guardrail for a potentially dangerous technology — and a corrective to years of under-regulation. Either way, the fight in California could upend AI as we know it, and both sides are coming out in force.
AI’s power players are battling California — and each other
The original version of SB 1047 was bold and ambitious. Introduced by state Senator Scott Wiener as the California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, it set out to tightly regulate advanced AI models trained with enough computing power to rival today’s largest AI systems (a threshold of 10^26 operations). The bill required developers of these frontier models to conduct thorough safety testing, including third-party evaluations, and certify that their models posed no significant risk to humanity. Developers also had to implement a “kill switch” to shut down rogue models and report safety incidents to a newly established regulatory agency. They could face lawsuits from the attorney general for catastrophic safety failures. If they lied about safety, developers could even face perjury charges, which carry the threat of prison (though that’s extremely rare in practice).
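For a rough sense of what that compute threshold means, here is a hedged back-of-the-envelope sketch in Python. The per-chip throughput, utilization, cluster size, and run length are assumptions chosen for illustration, not figures from the bill or from any real training run, and the bill’s other criteria are ignored.

```python
# Back-of-the-envelope estimate of whether a training run crosses the
# 10^26-operation threshold in the original SB 1047. All hardware numbers
# below are illustrative assumptions, not measured or official figures.

THRESHOLD_OPS = 1e26  # total training operations named in the original bill

def total_training_ops(num_chips, ops_per_sec_per_chip, utilization, days):
    """Total operations = chips * sustained ops/sec per chip * seconds of training."""
    seconds = days * 24 * 3600
    return num_chips * ops_per_sec_per_chip * utilization * seconds

# Assume a 10,000-accelerator cluster, ~1e15 ops/sec per chip at 40% utilization,
# training for 90 days.
run = total_training_ops(num_chips=10_000, ops_per_sec_per_chip=1e15,
                         utilization=0.4, days=90)
print(f"{run:.2e} operations -> covered? {run >= THRESHOLD_OPS}")
# ~3.11e25 operations: under the threshold; a run several times larger would cross it.
```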
California’s legislators are in a uniquely powerful position to regulate AI. The country’s most populous state is home to many leading AI companies, including OpenAI, which publicly opposed the bill, and Anthropic, which hesitated to support it before amendments. SB 1047 also seeks to regulate any model offered in California’s market, giving the bill an impact far beyond the state’s borders.
Unsurprisingly, significant parts of the tech industry revolted. At a Y Combinator event regarding AI regulation that I attended in late July, I spoke with Andrew Ng, cofounder of Coursera and founder of Google Brain, who talked about his plans to protest SB 1047 in the streets of San Francisco. Ng made a surprise appearance onstage later, criticizing the bill for its potential harm to academics and open source developers as Wiener looked on with his team.
“When someone trains a large language model…that’s a technology. When someone puts them into a medical device or into a social media feed or into a chatbot or uses that to generate political deepfakes or non-consensual deepfake porn, those are applications,” Ng said onstage. “And the risk of AI is not a function. It doesn’t depend on the technology — it depends on the application.”
Critics like Ng worry SB 1047 could slow progress, often invoking fears that it could impede the lead the US has against adversarial nations like China and Russia. Representatives Zoe Lofgren and Nancy Pelosi and California’s Chamber of Commerce worry that the bill is far too focused on fictional versions of catastrophic AI, and AI pioneer Fei-Fei Li warned in a Fortune column that SB 1047 would “harm our budding AI ecosystem.” That’s also a pressure point for FTC Chair Lina Khan, who’s concerned about federal regulation stifling innovation in open source AI communities.
Onstage at the YC event, Khan emphasized that open source is a proven driver of innovation, attracting hundreds of billions in venture capital to fuel startups. “We’re thinking about what open source should mean in the context of AI, both for you all as innovators but also for us as law enforcers,” Khan said. “The definition of open source in the context of software does not neatly translate into the context of AI.” Both innovators and regulators, she said, are still navigating how to define, and protect, open-source AI in the context of regulation.
A weakened SB 1047 is better than nothing
The result of the criticism was a significantly softer second draft of SB 1047, which passed out of committee on August 15th. In the new SB 1047, the proposed regulatory agency has been removed, and the attorney general can no longer sue developers for major safety incidents. Instead of submitting safety certifications under the threat of perjury, developers now only need to provide public “statements” about their safety practices, with no criminal liability. Additionally, entities spending less than $10 million on fine-tuning a model are not considered developers under the bill, offering protection to small startups and open source developers.
Still, that doesn’t mean the bill isn’t worth passing, according to supporters. Even in its weakened form, if SB 1047 “causes even one AI company to think through its actions, or to take the alignment of AI models to human values more seriously, it will be to the good,” wrote Gary Marcus, emeritus professor of psychology and neural science at NYU. It will still offer critical safety protections and whistleblower shields, which some may argue is better than nothing.

This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill. For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk…
— Elon Musk (@elonmusk) August 26, 2024

Anthropic CEO Dario Amodei said the bill was “substantially improved, to the point where we believe its benefits likely outweigh its costs” after the amendments. In a statement in support of SB 1047 reported by Axios, 120 current and former employees of OpenAI, Anthropic, Google’s DeepMind, and Meta said they “believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.”
“It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks,” the statement said.
Meanwhile, many detractors haven’t changed their position. “The edits are window dressing,” Andreessen Horowitz general partner Martin Casado posted. “They don’t address the real issues or criticisms of the bill.”
There’s also OpenAI’s chief strategy officer, Jason Kwon, who said in a letter to Newsom and Wiener that “SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere.”
“Given those risks, we must protect America’s AI edge with a set of federal policies — rather than state ones — that can provide clarity and certainty for AI labs and developers while also preserving public safety,” Kwon wrote.
Newsom’s political tightrope
Though this heavily amended version of SB 1047 has made it to Newsom’s desk, he’s been noticeably quiet about it. Regulating technology has always involved a degree of political maneuvering, and Newsom’s tight-lipped approach to such controversial legislation signals as much. He may not want to rock the boat with technologists just ahead of a presidential election.
Many influential tech executives are also major donors to political campaigns, and in California, home to some of the world’s largest tech companies, these executives are deeply connected to the state’s politics. Venture capital firm Andreessen Horowitz has even enlisted Jason Kinney, a close friend of Governor Newsom and a Democratic operative, to lobby against the bill. For a politician, pushing for tech regulation could mean losing millions in campaign contributions. For someone like Newsom, who has clear presidential ambitions, that’s a level of support he can’t afford to jeopardize.
What’s more, the rift between Silicon Valley and Democrats has grown, especially after Andreessen Horowitz’s cofounders voiced support for Donald Trump. The firm’s strong opposition to SB 1047 means if Newsom signs it into law, the divide could widen, making it harder for Democrats to regain Silicon Valley’s backing.
So, it comes down to Newsom, who’s under intense pressure from the world’s most powerful tech companies and fellow politicians like Pelosi. While lawmakers have been working to strike a delicate balance between regulation and innovation for decades, AI is nebulous and unprecedented, and a lot of the old rules don’t seem to apply. For now, Newsom has until the end of September to make a decision that could upend the AI industry as we know it.


RIP XOXO

Hugs and kisses! | Image: Cody Peterson (Each Other)

Though I’d heard this XOXO festival would be the last one, many people I spoke to seemed not to believe it. I was told by previous attendees that festival organizers Andy Baio and Andy McMillan — affectionately called “the Andys” — “always say that.” But from the festival’s beginning, it also seemed clear that the Andys didn’t plan to do this forever.
Anyway, this year’s XOXO felt like an Irish wake to me. It was like we had all gathered over the body of a specific period on the internet to pay our respects.
XOXO began in 2012, born on the crowdfunding platform Kickstarter, where Baio worked. The basic idea was to celebrate “disruptive creativity” — that is, to take all the artists who make a living online and bring them together with technologists. Kickstarter was part of this: a place for people to fund their creative projects without having to, say, pitch VCs or impress an A&R guy. At the time, the idea was that the internet would make it possible for people to make a living without the compromises made by corporate culture. My former colleague Casey Newton attended in 2014 and wrote of the festival, “It’s a place where the ideas are dangerous, where culture matters, and where art, not commerce, lies at the center of everything.”
“There just became a greater and greater understanding over time that platforms are not your friend.”
Ten years after Casey’s visit, I attended for the first time. The festival, held at Revolution Hall in Portland, Oregon, appeared to have been stripped to the minimum viable product. It was shorter than previous iterations, and the murals, rental drones, rock concerts, and other goodies from a decade ago were nowhere to be seen. But then 2024 is a worse time for independent creators than 2014 was.
“There just became a greater and greater understanding over time that platforms are not your friend,” Baio says, in an interview after the festival. “They are your partners but they are uncomfortable partners, and the more you rely on them, the more at risk you are, that they are going to change or shift in some uncomfortable way.”
Those shifts affected XOXO, too. The festival had scaled down because there were fewer sponsors. The tech companies that had been important for the creator economy stopped spending money on independent events like XOXO. Instead, they focused on their own events, which they could control. “In the last five years they’ve cut their, I assume, marketing budgets,” Baio says. “They’ve tightened their belts.”
Still, it was basically a party. There were large outdoor tents, tabletop games, two days’ worth of programming and hangouts, karaoke — The Verge’s Sarah Jeong did “Enter Sandman” — and plenty of food and drink. Darius Kazemi, an internet artist, attended every festival except the first and told me that this final one was his favorite. “I do think smaller events are better, generally,” Kazemi says. “They’re more productive in terms of making good conversation and emotional connections with people, that kind of thing.”
XOXO is a gathering of the terminally online, many of whom met each other on Twitter
Plus, the single track of talks meant that all attendees were focused on the same things. On Friday, there was an “Indie Media Circus,” featuring talks by 404 Media, Casey, now of Platformer, and Ryan Broderick of Garbage Day. An “Art and Code” section featured the work of indie artists, such as Julia Evans of Wizard Zines, Teresa Ibarra of “Analyzing my text messages with my ex-boyfriend,” and Shelby Wilson of The HTML Review.
The evenings featured new and upcoming video games such as Time Flies — a standout among my friends — Despelote, and XOXO tradition Johann Sebastian Joust, a no-graphics game that involves moving in time to the Brandenburg Concertos. There was a tabletop evening as well, which I missed because I was at a party thrown by The Verge, where, once again, I got drunk with Casey.
If this all seems pretty dorky, that’s right. XOXO is a gathering of the terminally online, many of whom met each other on Twitter. One recurring punchline throughout the two days of talks was that whenever someone wanted to evoke platform degradation, a photo of Elon Musk would flash up in their slides. “What difficulties have increased for us in the last five years?” says McMillan. “It’s all stuff to do with fucking Elon.”
“Well, not all of it,” Baio says.
XOXO originally came into being as a response to the commodification of festivals that had once been about oddballs
“It certainly hasn’t helped matters,” McMillan says.
“It’s so agonizing to have something that is like the connective thread between a community go away,” Baio says.
Early on, XOXO was referred to as a “meeting of the mutuals,” as in people who followed each other on Twitter. But when Musk took over the platform and began shredding it, many users peeled off to Bluesky, Mastodon, and “dark social” spaces on Slack and Discord.
XOXO originally came into being as a response to the commodification of festivals that had once been about oddballs — like South By Southwest. Gradually, these events had been swamped with marketing types, pushing out the weirdos who’d made the festivals interesting in the first place. Attendance at this year’s XOXO was capped at 1,000 paying attendees, and there was a lottery system for getting in. But to even make it into the lottery, you had to fill out a questionnaire that the Andys reviewed. They prioritized the people who would make the festival interesting.
Even the name is a way of selecting for attendees
After the first year, “all these people showed up in our inbox and were like, ‘How do we do some like, stealth marketing activation, whatever bullshit,’” Baio says. He stressed that the point of the lottery was not to judge whether people were cool enough to come — “we’re two of the least cool people on the planet, sorry” — but rather, whether they were members of the community that the festival was built around. “Anyone who is stupid enough to say, ‘I love crypto, it’s my entire being, I want to come here and talk about crypto a whole bunch,’ okay, great, you’re going to hate it,” Baio says. “You’re not going to get prioritized in the lottery quite so much.”
Even the name is a way of selecting for attendees. If you’re the kind of person who gets turned off by a festival named, functionally, “hugs and kisses,” you aren’t going to apply.
When XOXO began, Cards Against Humanity had emerged as a megahit from a Kickstarter campaign. But as time wore on, the challenges of trying to make a living as an indie creator increasingly became a festival focus. In 2014, Kazemi’s talk about winning the creative lottery was one of the festival’s breakout hits. In it, Kazemi spoofed the archetype of talks given by successful creative people and suggested it was more important to continue rigorously creating (that is, “buying more lottery tickets”) than trying to strategize around how to pick the right numbers.
In his most recent talk, Kazemi revisited his 2014 themes. He’d quit his job, moved to Portland, and begun living the indie dream. Except, it turned out, living the indie dream just meant different problems. Kazemi described becoming a landlord as part of staying afloat and also noted that his output of creative projects had declined relative to 10 years ago. Other creators make other compromises — podcasters doing ad reads for less-than-savory companies, for instance — in order to continue making things.
“We were like, ‘I think we have one more left in us.’”
The Andys told me that they’d planned to make 2020 the last festival — but their plans were interrupted by covid-19. “We did make the decision in 2019,” says McMillan. “We were like, ‘I think we have one more left in us.’” This final festival, five years after the last one, was attending to unfinished business. But the Andys want you to know: XOXO is over. “We are not coming back next year,” McMillan says. “That was the end of XO.”
People are still making independent projects, using resources only the internet can provide. Erin Kissane, for instance, talked about processing covid data with the Covid Tracking Project. Molly White discussed “Web3 is Going Just Great,” the timeline of various crypto crises. Kazemi’s work at Tiny Subversions has involved a fork of Mastodon and teaching people how to run their own social media sites.
It wouldn’t surprise me — or for that matter, the Andys — if this group of people were to create spinoff get-togethers from connections made at XOXO; it’s a tightly knit group. “I’ve been thinking a lot about Darius, like his talk asks, ‘What’s next? What are we going to do next?’” McMillan says. He doesn’t have an answer, and he doesn’t expect to be responsible for whatever it is. “That is important to think about, and answering that question in the not-too-distant future will be important.”

Hugs and kisses! | Image: Cody Peterson (Each Other)

Though I’d heard this XOXO festival would be the last one, many people I spoke to seemed not to believe it. I was told by previous attendees that festival organizers Andy Baio and Andy McMillan — affectionately called “the Andys” — “always say that.” But from the festival’s beginning, it also seemed clear that the Andys didn’t plan to do this forever.

Anyway, this year’s XOXO felt like an Irish wake to me. It was like we had all gathered over the body of a specific period on the internet to pay our respects.

XOXO began in 2012, born on the crowdfunding platform Kickstarter, where Baio worked. The basic idea was to celebrate “disruptive creativity” — that is, to take all the artists who make a living online and bring them together with technologists. Kickstarter was part of this: a place for people to fund their creative projects without having to, say, pitch VCs or impress an A&R guy. At the time, the idea was that the internet would make it possible for people to make a living without the compromises made by corporate culture. My former colleague Casey Newton attended in 2014 and wrote of the festival, “It’s a place where the ideas are dangerous, where culture matters, and where art, not commerce, lies at the center of everything.”

“There just became a greater and greater understanding over time that platforms are not your friend.”

Ten years after Casey’s visit, I attended for the first time. The festival, held at Revolution Hall in Portland, Oregon, appeared to have been stripped to the minimum viable product. It was shorter than previous iterations, and the murals, rental drones, rock concerts, and other goodies from a decade ago were nowhere to be seen. But then 2024 is a worse time for independent creators than 2014 was.

“There just became a greater and greater understanding over time that platforms are not your friend,” Baio says, in an interview after the festival. “They are your partners but they are uncomfortable partners, and the more you rely on them, the more at risk you are, that they are going to change or shift in some uncomfortable way.”

Those shifts affected XOXO, too. The festival had scaled down because there were fewer sponsors. The tech companies that had been important for the creator economy stopped spending money on independent events like XOXO. Instead, they focused on their own events, which they could control. “In the last five years they’ve cut their, I assume, marketing budgets,” Baio says. “They’ve tightened their belts.”

Still, it was basically a party. There were large outdoor tents, tabletop games, two days’ worth of programming and hangouts, karaoke — The Verge’s Sarah Jeong did “Enter Sandman” — and plenty of food and drink. Darius Kazemi, an internet artist, attended every festival except the first and told me that this final one was his favorite. “I do think smaller events are better, generally,” Kazemi says. “They’re more productive in terms of making good conversation and emotional connections with people, that kind of thing.”

XOXO is a gathering of the terminally online, many of whom met each other on Twitter

Plus, the single track of talks meant that all attendees were focused on the same things. On Friday, there was an “Indie Media Circus,” featuring talks by 404 Media; Casey, now of Platformer; and Ryan Broderick of Garbage Day. An “Art and Code” section featured the work of indie artists, such as Julia Evans of Wizard Zines, Teresa Ibarra of “Analyzing my text messages with my ex-boyfriend,” and Shelby Wilson of The HTML Review.

The evenings featured new and upcoming video games such as Time Flies — a standout among my friends — Despelote, and XOXO tradition Johann Sebastian Joust, a no-graphics game that involves moving in time to the Brandenburg Concertos. There was a tabletop evening as well, which I missed because I was at a party thrown by The Verge, where, once again, I got drunk with Casey.

If this all seems pretty dorky, that’s right. XOXO is a gathering of the terminally online, many of whom met each other on Twitter. One recurring punchline throughout the two days of talks was that whenever someone wanted to evoke platform degradation, a photo of Elon Musk would flash up in their slides. “What difficulties have increased for us in the last five years?” says McMillan. “It’s all stuff to do with fucking Elon.”

“Well, not all of it,” Baio says.

XOXO originally came into being as a response to the commodification of festivals that had once been about oddballs

“It certainly hasn’t helped matters,” McMillan says.

“It’s so agonizing to have something that is like the connective thread between a community go away,” Baio says.

Early on, XOXO was referred to as a “meeting of the mutuals,” as in people who followed each other on Twitter. But when Musk took over the platform and began shredding it, many users peeled off to Bluesky, Mastodon, and “dark social” spaces on Slack and Discord.

XOXO originally came into being as a response to the commodification of festivals that had once been about oddballs — like South By Southwest. Gradually, these events had been swamped with marketing types, pushing out the weirdos who’d made the festivals interesting in the first place. Attendance at this year’s XOXO was capped at 1,000 paying attendees, and there was a lottery system for getting in. But to even make it into the lottery, you had to fill out a questionnaire that the Andys reviewed. They prioritized the people who would make the festival interesting.

Even the name is a way of selecting for attendees

After the first year, “all these people showed up in our inbox and were like, ‘How do we do some like, stealth marketing activation, whatever bullshit,’” Baio says. He stressed that the point of the lottery was not to judge whether people were cool enough to come — “we’re two of the least cool people on the planet, sorry” — but rather, whether they were members of the community that the festival was built around. “Anyone who is stupid enough to say, ‘I love crypto, it’s my entire being, I want to come here and talk about crypto a whole bunch,’ okay, great, you’re going to hate it,” Baio says. “You’re not going to get prioritized in the lottery quite so much.”

Even the name is a way of selecting for attendees. If you’re the kind of person who gets turned off by a festival named, functionally, “hugs and kisses,” you aren’t going to apply.

When XOXO began, Cards Against Humanity had emerged as a megahit from a Kickstarter campaign. But as time wore on, the challenges of trying to make a living as an indie creator increasingly became a festival focus. In 2014, Kazemi’s talk about winning the creative lottery was one of the festival’s breakout hits. In it, Kazemi spoofed the archetype of talks given by successful creative people and suggested it was more important to continue rigorously creating (that is, “buying more lottery tickets”) than trying to strategize around how to pick the right numbers.

In his most recent talk, Kazemi revisited his 2014 themes. He’d quit his job, moved to Portland, and begun living the indie dream. Except, it turned out, living the indie dream just meant different problems. Kazemi described becoming a landlord as part of staying afloat and also noted that his output of creative projects had declined relative to 10 years ago. Other creators make other compromises — podcasters doing ad reads for less-than-savory companies, for instance — in order to continue making things.

“We were like, ‘I think we have one more left in us.’”

The Andys told me that they’d planned to make 2020 the last festival — but their plans were interrupted by covid-19. “We did make the decision in 2019,” says McMillan. “We were like, ‘I think we have one more left in us.’” This final festival, five years after the last one, was attending to unfinished business. But the Andys want you to know: XOXO is over. “We are not coming back next year,” McMillan says. “That was the end of XO.”

People are still making independent projects, using resources as only the internet can. Erin Kissane, for instance, talked about processing covid data with the Covid Tracking Project. Molly White discussed “Web3 is Going Just Great,” the timeline of various crypto crises. Kazemi’s work at Tiny Subversions has involved a fork of Mastodon and teaching people how to run their own social media sites.

It wouldn’t surprise me — or for that matter, the Andys — if this group of people were to create spinoff get-togethers from connections made at XOXO; it’s a tightly knit group. “I’ve been thinking a lot about Darius, like his talk asks, ‘What’s next? What are we going to do next?’” McMillan says. He doesn’t have an answer, and he doesn’t expect to be responsible for whatever it is. “That is important to think about, and answering that question in the not-too-distant future will be important.”

Read More 

Ikea’s smart home hub now supports Matter

The Dirigera hub and Ikea’s Home smart app. | Image: Ikea

Two years after Ikea announced support, the Dirigera hub can now be updated to act as a bridge between Ikea’s smart home devices and Matter-enabled systems. The software update builds upon early beta support by letting Ikea’s entire lineup of Zigbee-based smart home devices — like lights, blinds, controllers, air purifiers, and sensors — communicate with Matter-enabled devices from any company.

Bridging support for existing devices to Matter is a small but significant step in Ikea’s plan to fully support the new smart home protocol. It’s the same tepid approach Philips Hue has taken but hopefully with better results. Other companies like Aqara have more fully embraced the standard by launching native Matter devices that don’t require bridges to do the protocol translation.

Ikea’s smart home hubs have always featured custom integrations that have allowed its products to communicate with setups from Apple, Google, and Amazon. “With Matter, we are expanding these possibilities even further,” says David Granath, range manager at Ikea of Sweden. “Combining our decades of life at home expertise with innovative technology, we believe we are uniquely positioned to lower the barriers and enable a smarter everyday life for more people.”

Matter support for Dirigera is available in every location where the hub is sold.

Read More 

Here’s a closer look at the Huawei Mate XT triple-screen foldable

The display creases are prominent at certain angles but barely noticeable when looking at the Huawei Mate XT straight on. | Screenshot: Fixed Focus Digital

Now that Huawei has officially launched the Mate XT Ultimate Design in China, demonstration videos are popping up online that reveal what the world’s first dual-folding, triple-screen phone looks like in real-world conditions.

Several unboxing videos show that the Huawei Mate XT is shipped in its unfolded position, which is just 3.6mm (around 0.14 inches) thick at its thinnest point — making it slimmer than Google’s 5.1mm (around 0.2 inch) Pixel 9 Pro Fold. When fully folded, the Mate XT measures in at 12.8mm (around 0.5 inches), a smidge thicker than the Pixel 9 Pro Fold’s 10.1mm (around 0.39 inches) and the 12.1mm Samsung Galaxy Z Fold 6.

A hands-on review from Gizmodo China shows that the display can be used to view multiple app windows simultaneously, which can be resized into custom configurations. The Mate XT also ships with a specialized phone case that protects the device when folded and doubles as a stand to position it upright when both folded and unfolded.

A small loading wheel can briefly be seen in some videos when the phone transitions from single- to dual-screen mode, but the display seems responsive enough when switching between horizontal and vertical orientations. One noticeable gripe, visible in a video shared by Android Authority, is that the display creases on both hinges are very visible at certain angles and in certain lighting, which some may find disappointing — though not surprising given the current state of foldable displays. They seem hardly noticeable when viewed straight on, however.

Read More 

Taylor Swift endorses Kamala Harris in response to fake AI Trump endorsement

Photo by Noam Galai/TAS24/Getty Images for TAS Rights Management

Taylor Swift said on Tuesday that she plans to vote for Vice President Kamala Harris in November’s presidential election — and that AI-generated images of her circulating online pushed her, in part, to make her support public.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation,” Swift wrote in an Instagram post. “It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”

Her post references an incident in late August in which Trump shared a collection of images to Truth Social intended to show support for his presidential campaign. Some of the photos depict “Swifties for Trump,” and another, obviously AI-generated, shows Swift herself in an Uncle Sam-style pose with text reading, “Taylor wants YOU to vote for Donald Trump.” The former president captioned the post, “I accept!”

This wasn’t the first time AI images of Swift were circulated on social media. Earlier this year, nonconsensual sexualized images of her made using AI were shared on X. That incident prompted the White House to call for legislation to “deal” with the issue.

In her endorsement post, Swift also mentioned LGBTQ rights, reproductive care, and IVF as specific issues she cares about. She also directed fans to her Instagram story, where she added a link to register to vote.

Read More 

Donald Trump goes all in on viral anti-immigrant lie

Image: Cath Virginia / The Verge; Getty Images

Less than 30 minutes into the presidential debate, former president Donald Trump brought up a viral racist lie about Haitian migrants in Springfield, Ohio — and repeated it after fact-checkers asserted that it wasn’t true.

“In Springfield, they’re eating the dogs — the people that came in — they’re eating the cats, they’re eating the pets of the people that live there,” Trump said in response to a question about why he asked Republican legislators to vote against a bipartisan border security bill. After Trump finished his tirade, ABC News moderator David Muir clarified that Springfield’s city manager told ABC reports of migrants eating pets were false — but Trump repeated the lie. “People on television are saying, ‘My dog was taken and being used for food,’” Trump interjected.

Trump falsely accuses migrants of eating dogs and pets in Springfield, Ohio, then fights the moderators when he gets fact-checked.

“I’ve seen people on television.” pic.twitter.com/u5vRymVgEm

— nikki mccann ramírez (@NikkiMcR) September 11, 2024

Trump’s resistance to fact-checking shouldn’t come as a surprise by this point. In fact, his campaign has fully leaned into the claim, which took off on right-wing social media over the weekend and has since been mainstreamed by the likes of Elon Musk and Sen. Ted Cruz (R-TX).

On Tuesday, vice presidential candidate JD Vance claimed his office had “received many inquiries from actual residents of Springfield” regarding their pets being eaten, contradicting statements from Springfield police and city officials that they had received no such complaints. Though Vance acknowledged the possibility that “all these rumors will turn out to be false,” he nonetheless encouraged supporters to continue spreading them. “In short, don’t let the crybabies in the media dissuade you, fellow patriots,” Vance posted on X. “Keep the cat memes flowing.”

In the days since the Springfield rumor went viral, Trump’s supporters and campaign surrogates have embraced it, posting AI-generated images depicting Trump as a champion of America’s pets. The Republican Party of Arizona unveiled a dozen billboards in the Phoenix area referencing the meme, urging Arizonans to “eat less kittens” and vote Republican.

THE ARE LIVE!

Catch our newest billboard across 12 locations in the Phoenix metro area! https://t.co/bCv6TsBJr3 pic.twitter.com/xkRlfE9AJf

— Republican Party of Arizona (@AZGOP) September 10, 2024

These memes have become a visual shorthand for Trump and his supporters’ belief in the white supremacist great replacement theory. And rather than acknowledging the falsehood at the heart of the rumor about Haitians in Springfield, Trump’s supporters have suggested that the media’s focus on fact-checking the viral lie obscures the “replacement” of Americans in Springfield with Haitian migrants.

Some good points here https://t.co/2zoPTjxlkC

— Elon Musk (@elonmusk) September 10, 2024

Trump, the Republican Party’s standard-bearer, isn’t bothering to obfuscate the baseless claims by tying them to locals’ broader concerns about immigrants. Instead, he’s going for the baldest version of the lie.

Read More 

Flipper Zero 1.0 firmware update supercharges the hacking handheld

This little gadget keeps getting better and better. | Image: Flipper Devices Inc.

One of our favorite hacking gadgets, the Flipper Zero, received its first major firmware update today. It includes a bunch of features the developers have spent the last three years stabilizing, including a big battery life boost that actually arrived with a previous update. While many of the features aren’t technically new, the entire package should make this gadget feel like a supercharged version of its former self.

One of the most notable updates since launch solves one of the developers’ biggest problems: the device’s internal flash memory originally limited how many features they could add. In the past, new features were built into the firmware itself, but they eventually exceeded what the memory could handle. Last year, the device added an app store and let you run new apps from the microSD card instead.

JavaScript is now supported, so coding your own apps could be easier. The NFC subsystem has been rewritten from the ground up, supports more card types, and can read cards faster. Transferring data via Bluetooth with Android devices is also faster, and the speed of firmware updates has increased by 40 percent, according to the developers.
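
For a sense of what one of these scripts could look like, here is a minimal sketch. It is written in TypeScript only so the assumed runtime surface can be typed; the print() and delay() globals, and the idea that the compiled plain-JavaScript file is copied to the microSD card and launched from the device, are assumptions to check against Flipper’s official JavaScript documentation rather than a confirmed API.

```typescript
// Minimal sketch of a Flipper Zero user script, typed for illustration.
// The declarations below are assumptions about globals the device's
// JavaScript runtime provides; verify them against the official docs.
declare function print(...args: unknown[]): void; // assumed: prints a line to the on-device console
declare function delay(ms: number): void;         // assumed: blocks for the given number of milliseconds

// Count down on screen, pausing one second between lines.
for (let i = 3; i > 0; i--) {
  print("Launching in", i);
  delay(1000);
}
print("Hello from a user script!");
```

Stripping the declare lines (or running the file through tsc) yields the plain JavaScript the device would actually execute.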

The new firmware also includes more IR protocols, so if you already use the Flipper Zero as a universal remote control, you can now use it with more TVs, ACs, audio systems, and projectors. If you need a longer transmission range, the Flipper Zero supports external infrared modules, too. Here’s an explainer video:

You can also use the Flipper Zero to listen to analog walkie-talkies, and the developers say its sub-GHz radio now supports 89 different radio protocols in all. If the built-in antenna isn’t sensitive enough, you can now also connect it to an external sub-GHz module to get a better one.

You can find more details and a link to the firmware update here.

Read More 
