daring-rss

The Talk Show: ‘150 Million Calculator Apps’

Quinn Nelson, esteemed host of Snazzy Labs, returns to the show to recap the highlights of WWDC: Apple Intelligence, platform updates, and the latest salvos from the EC regarding Apple’s compliance with the DMA.

Sponsored by:

Trade Coffee: Enjoy 30% off your first month of coffee.
Squarespace: Make your next move. Use code talkshow for 10% off your first order.

Wavelength Is Shutting Down at the End of July

Wavelength:

We’re sad to announce that we’re shutting down Wavelength. We’re
so grateful to our users and community — you’ve been amazing.

On July 31st we’ll turn off our servers, which means that you’ll
no longer be able to sign in, create a group, or send messages.
You will continue to have access to your message history as long
as you keep the app installed on your device, but we recommend
saving or copying anything important out of the app as soon as
you can.

Your Wavelength account data will be deleted from our servers at
the time of the shutdown. Rest assured that we will not retain,
sell, or transfer any user information, and that your messages
remain end-to-end encrypted and secure.

You may recall I’ve been an advisor to the team at Wavelength for a little over a year, so I knew this announcement was coming. It’s a bummer, personally, at two levels. First, just knowing the team, particularly cofounders Richard Henry and Marc Bodnick, both of whom I now consider friends. They tried to crack the “privacy-minded social network” nut before with Telepath, and with Wavelength got even closer to pulling it off. So much work went into it, and so much of it was so good.

Second, though, is a more selfish reason: I’m an active participant in a bunch of active, vibrant groups on Wavelength. I’m going to miss them. The groups I’m most active in on Wavelength have a higher signal-to-noise ratio than any social networking platform I’ve seen in ages. I’d have to go back to the heyday of Usenet and email group mailing lists, literally decades ago, or the very early years of Twitter, to find anything with such a high level of discourse.

But the simple truth is that while Wavelength has been far from a failure, it’s also far from a breakout hit. It’d be an easy decision to shut it down if it were a flop. It was a hard decision to shut it down because it wasn’t. But a social platform really needs to be a breakout hit to succeed, and Wavelength just wasn’t on a path to become one.

So: time to move on. Until the plug gets pulled at the end of next month, though, I’ll still be there.

Microsoft Edge Has an ‘Enhanced Security’ Mode That Disables the JIT

Sergiu Gatlan, writing for Bleeping Computer in 2021 (thanks to Kevin van Haaren):

Microsoft has announced that the Edge Vulnerability Research team
is experimenting with a new feature dubbed “Super Duper Secure
Mode” and designed to bring security improvements without
significant performance losses. When enabled, the new Microsoft
Edge Super Duper Secure Mode will remove Just-In-Time Compilation
(JIT) from the V8 processing pipeline, reducing the attack surface
threat actors can use to hack into Edge users’ systems.

Based on CVE (Common Vulnerabilities and Exposures) data
collected since 2019, around 45% of vulnerabilities found in the
V8 JavaScript and WebAssembly engine were related to the JIT
engine, more than half of all “in the wild” Chrome exploits
abusing JIT bugs.

“Super Duper Secure Mode” was a funner name, but they settled on “Enhanced Security Mode”.

This is why Apple considers BrowserEngineKit — which is complex and requires a special entitlement with stringent requirements to use — necessary for complying with the DMA’s mandate to allow third-party browser engines. JITs are inherently vulnerable. It’s not about known bugs — it’s the unknown bugs.

The anti-WebKit peanut gallery responded to my piece on JITs yesterday with a collective response along the lines of “Who’s to say WebKit’s JIT is any more secure than Chrome’s or Gecko’s?” That’s not really the point, but the answer is: Apple is to say. iOS is their platform and they’ve decided that it’s better for the platform to reduce the attack surface to a single browser engine, WebKit, the one they themselves control. And Apple isn’t saying WebKit as a whole, or its JavaScript JIT compiler in particular, is more secure than Chrome or Gecko. They’re saying, implicitly, that it’s safer to have just one that they themselves are fully responsible for. And that the safest way to comply with the DMA’s mandate to allow third-party rendering engines is via a stringent framework like BrowserEngineKit.

You might think it would be just fine for iOS to work just like MacOS, where you can install whatever software you want. But Apple, expressly, does not. iOS is designed to be significantly more secure than MacOS.

Reuters: Amazon Is Considering $5 Monthly Charge for Improved Alexa

Greg Bensinger, reporting for Reuters:

Amazon is planning a major revamp of its decade-old money-losing
Alexa service to include a conversational generative AI with two
tiers of service and has considered a monthly fee of around $5 to
access the superior version, according to people with direct
knowledge of the company’s plans.

Known internally as “Banyan,” a reference to the sprawling ficus
trees, the project would represent the first major overhaul of the
voice assistant since it was introduced in 2014 along with the
Echo line of speakers. Amazon has dubbed the new voice assistant
“Remarkable Alexa,” the people said.

A bit of a role reversal here. Apple, which is not known for giving away much for free, isn’t charging users for Apple Intelligence, including ChatGPT integration. Amazon, which is known for ruthlessly pursuing low prices, is, according to this report, looking to charge for an LLM-powered version of Alexa. Maybe that new version of Alexa really is that good? But I sort of think that if they gate this new Alexa behind a paywall, it will just be added to the existing package for Prime.

Speaking of Alexa, though, I’m reminded that Apple’s WWDC announcements didn’t include anything about bringing the new Apple-Intelligence-powered Siri to devices like HomePods or Apple Watches. Let’s say you have an iPhone 15 Pro or buy a new iPhone 16 this fall. What happens when you talk to Siri through your Apple Watch? Do you get the new Apple Intelligence Siri, because your watch is paired to your iPhone, which meets the device requirements for Apple Intelligence? Or do you get old dumb Siri on your Watch and only get new Siri when talking directly to your iPhone?

Gurman Just Pantsed the WSJ on Their Report About Apple and Meta Working on an AI Deal

Salvador Rodriguez, Aaron Tilley, and Miles Kruppa, reporting for The Wall Street Journal Sunday morning (News+):

In its hustle to catch up on AI, Apple has been talking with a
longtime rival: Meta. Facebook’s parent has held discussions with
Apple about integrating Meta Platforms’ generative AI model into
Apple Intelligence, the recently announced AI system for iPhones
and other devices, according to people familiar with the matter.

This didn’t make much sense, given Tim Cook’s strident condemnation of Meta and Mark Zuckerberg. E.g. this interview with Kara Swisher, which, though it was six years ago, doesn’t leave much room for a strange bedfellows partnership today: “Asked by Swisher what he would do if he were in Zuckerberg’s position, Cook said pointedly: ‘I wouldn’t be in this situation.’” Cook and Apple’s entire problem with Meta is their approach to privacy and monetizing through targeted advertising based on user profiles. Apple is trying to convince customers that Apple’s approach to AI is completely private and trustworthy; a partnership with Meta would run counter to that. And, quite frankly, Meta’s AI technology is not enviable.

Now here’s Mark Gurman, reporting for Bloomberg yesterday evening (News+):

Apple Inc. rejected overtures by Meta Platforms Inc. to integrate
the social networking company’s AI chatbot into the iPhone months
ago, according to people with knowledge of the matter.

The two companies aren’t in discussions about using Meta’s Llama
chatbot in an AI partnership and only held brief talks in March,
said the people, who asked not to be identified because the
situation is private. The dialogue about a partnership didn’t
reach any formal stage, and Apple has no active plans to integrate
Llama. […]

Apple decided not to move forward with formal Meta discussions in
part because it doesn’t see that company’s privacy practices as
stringent enough, according to the people. Apple has spent years
criticizing Meta’s technology, and integrating Llama into the
iPhone would have been a stark about-face.

Spokespeople for Apple and Meta declined to comment. The Wall
Street Journal reported on Sunday that the two companies
were in talks about an AI partnership.

Delicious, right down to the fact that Bloomberg’s link on “reported on Sunday” points not to the Journal but to Bloomberg’s own regurgitation of the WSJ’s report.

European Commission Dings Apple Over Anti-Steering Provisions in App Store, and Opens New Investigations Into Core Technology Fee, Sideloading Protections, and the Eligibility Requirements to Offer an Alternative Marketplace

The European Commission:

Today, the European Commission has informed Apple of its
preliminary view that its App Store rules are in breach of the
Digital Markets Act (DMA), as they prevent app developers from
freely steering consumers to alternative channels for offers and
content.

I think what they’re saying here is that Apple’s current compliance offering, where developers can remain exclusively in the App Store in the EU under the existing terms or choose the new terms that allow for linking out to the web, isn’t going to pass muster. The EC wants all apps to be able to freely — as in free of charge freely — link out to the web for purchases, regardless of whether they’re from the App Store, an alternative marketplace, or directly sideloaded.

The Commission will investigate whether these new contractual
requirements for third-party app developers and app stores breach
Article 6(4) of the DMA and notably the necessity and
proportionality requirements provided therein. This includes:

1. Apple’s Core Technology Fee, under which developers of
third-party app stores and third-party apps must pay a €0.50
fee per installed app. The Commission will investigate whether
Apple has demonstrated that the fee structure that it has
imposed, as part of the new business terms, and in particular
the Core Technology Fee, effectively complies with the DMA.

No word on how it doesn’t comply, just that they don’t like it.

2. Apple’s multi-step user journey to download and install
alternative app stores or apps on iPhones. The Commission will
investigate whether the steps that a user has to undertake to
successfully complete the download and installation of
alternative app stores or apps, as well as the various
information screens displayed by Apple to the user, comply with
the DMA.

This sounds like they’re going to insist that Apple make installing sideloaded apps and alternative stores a no-hassle experience. What critics see is Apple putting up obstacles to installing marketplaces or sideloaded apps just to be a dick about it and discouraging their use to keep users in the App Store. What I see are reasonable warnings for potentially dangerous software. We’ll see how that goes.

Perhaps where the EC will wind up is making app store choice like web browser choice. Force Apple to present each user with a screen listing all available app marketplaces in their country in random order, of which Apple’s own App Store is but one, just like Safari in the default browser choice screen.

3. The eligibility requirements for developers related to the
ability to offer alternative app stores or directly distribute
apps from the web on iPhones. The Commission will investigate
whether these requirements, such as the ‘membership of good
standing’ in the Apple Developer Program, that app developers
have to meet in order to be able to benefit from alternative
distribution provided for in the DMA comply with the DMA.

I’m not sure what this is about, given that Apple relented on allowing even Epic Games to open a store. Maybe the financial requirements?

In parallel, the Commission will continue undertaking preliminary
investigative steps outside of the scope of the present
investigation, in particular with respect to the checks and
reviews put in place by Apple to validate apps and alternative app
stores to be sideloaded.

This pretty clearly is about Apple using notarization as a review for anything other than egregious bugs or security vulnerabilities. I complain as much as anyone about the aspects of the DMA that are vague (or downright inscrutable), but this aspect seems clear-cut. It’s a bit baffling why Apple seemingly sees notarization as an opportunity for content/purpose review, like with last week’s brouhaha over the UTM SE PC emulator. Refusing to notarize an emulator that uses a JIT is something Apple ought to be able to defend under the DMA’s exceptions pertaining to device security; refusing to notarize an emulator that doesn’t use a JIT seems clearly forbidden by the DMA.

★ Apple Disables WebKit’s JIT in Lockdown Mode, Offering a Hint Why BrowserEngineKit Is Complex and Restricted

To put it in Steven Sinofsky’s terms, gatekeeping is a fundamental aspect of Apple’s brand promise with iOS.

Last week I mentioned Apple’s prohibition on JITs — just-in-time compilers — in the context of their rejection of UTM SE, an open source PC emulator. Apple’s prohibition on JITs, on security grounds, is a side issue regarding UTM SE, because UTM SE is the version of UTM that doesn’t use a JIT. But because it doesn’t use a JIT, it’s so slow that the UTM team doesn’t consider it worth fighting with Apple over its rejection.

On that no-JITs prohibition, though, it’s worth noting that Apple even disables its own trusted JIT in WebKit when you enable Lockdown Mode, which Apple now describes as “an optional, extreme protection that’s designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats. Most people are never targeted by attacks of this nature.” Apple previously described Lockdown Mode as protection for those targeted by “private companies developing state-sponsored mercenary spyware”, but has recently dropped the “state-sponsored” language.

Here’s how Apple describes Lockdown Mode’s effect on web browsing:

Web browsing – Certain complex web technologies are blocked, which
might cause some websites to load more slowly or not operate
correctly. In addition, web fonts might not be displayed, and
images might be replaced with a missing image icon.

JavaScriptCore’s JIT compiler is one of those “complex web technologies”. Alexis Lours did some benchmarking two years ago, when iOS 16 was in beta, to gauge the effect of disabling the JIT on JavaScript performance (and he also determined a long list of other WebKit features that get disabled in Lockdown Mode, a list I wish Apple would publish and keep up to date). Lours ran several benchmarks, but I suspect Speedometer is most relevant to real-world usage:

Speedometer aims to benchmark real world applications by emulating
page action on multiple frameworks. This should allow us to get a
decent idea of the performance drop in JavaScript heavy
frameworks.

A 65% drop in performance, while this is still a heavy hit on
performance, compared to a 95% drop, this shifts the value from a
no-go to a compromise worth considering for people seeking the
extra privacy.
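
To make those percentages concrete: on a throughput benchmark like Speedometer, a 95 percent drop means scores fall to one-twentieth of baseline, a 20× slowdown. A 65 percent drop leaves about a third of baseline performance, roughly a 2.9× slowdown. That’s the difference between unusable and a tolerable trade-off for high-risk users.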

This brings me to BrowserEngineKit, a new framework Apple created specifically for compliance with the EU’s DMA, which requires gatekeeping platforms to allow for third-party browser engines. Apple has permitted third-party browsers on iOS for over a decade, but requires all browsers to use the system’s WebKit rendering engine. One take on Apple’s longstanding prohibition against third-party rendering engines is that they’re protecting their own interests with Safari. More or less that they’re just being dicks about it. But there really is a security angle to it. JavaScript engines run much faster with JIT compilation, but JITs inherently pose security challenges. There’s a whole section in the BrowserEngineKit docs specifically about JIT compilation.

As I see it Apple had three choices, broadly speaking, for complying with the third-party browser engine mandate in the DMA:

Disallow third-party browser engines from using JITs. This would clearly be deemed malicious by anyone who actually wants to see Chromium- or Gecko-based browsers on iOS. JavaScript execution would be somewhere between 65 and 90 percent slower compared to WebKit.

Allow third-party browser engines in the EU to just use JIT compilation freely without restrictions. This would open iOS devices running such browsers to security vulnerabilities. The message to users would be, effectively, “If you use one of these browsers you’re on your own.”

Create something like BrowserEngineKit, which adds complexity in the name of allowing for JIT compilation (and other potentially insecure technologies) in a safer way, and limit the use of BrowserEngineKit only to trusted web browser developers.

Apple went with choice 3, and I doubt they gave serious consideration to anything else. Disallowing third-party rendering engines from using JITs wasn’t going to fly, and allowing them to run willy-nilly would be insecure. The use of BrowserEngineKit also requires a special entitlement:

Apple will provide authorized developers access to technologies
within the system that enable critical functionality and help
developers offer high-performance modern browser engines. These
technologies include just-in-time compilation, multiprocess
support, and more.

However, as browser engines are constantly exposed to untrusted
and potentially malicious content and have visibility of sensitive
user data, they are one of the most common attack vectors for bad
actors. To help keep users safe online, Apple will only authorize
developers to implement alternative browser engines after meeting
specific criteria and who commit to a number of ongoing privacy
and security requirements, including timely security updates to
address emerging threats and vulnerabilities.

BrowserEngineKit isn’t easy, but I genuinely don’t think any good solution would be. Browsers don’t need a special entitlement or complex framework to run on MacOS, true, but iOS is not MacOS. To put it in Steven Sinofsky’s terms, gatekeeping is a fundamental aspect of Apple’s brand promise with iOS.

★ WWDC 2024: Apple Intelligence

Apple is focusing on what it can do that no one else can on Apple devices, and not really even trying to compete against ChatGPT *et al* for world-knowledge context. They’re focusing on unique differentiation, and eschewing commoditization.

An oft-told story is that back in 2009 — two years after Dropbox debuted, two years before Apple unveiled iCloud — Steve Jobs invited Dropbox cofounders Drew Houston and Arash Ferdowsi to Cupertino to pitch them on selling the company to Apple. Dropbox, Jobs told them, was “a feature, not a product”.

It’s easy today to forget just how revolutionary a product Dropbox was. A simple installation on your Mac and boom, you had a folder that synced between every Mac you used — automatically, reliably, and quickly. At the time Dropbox had a big sign in its headquarters that read, simply, “It Just Works”, and they delivered on that ideal — at a time when no other sync service did. Jobs, of course, was trying to convince Houston and Ferdowsi to sell, but that doesn’t mean he was wrong that, ultimately, it was a feature, not a product. A tremendously useful feature, but a feature nonetheless.

Leading up to WWDC last week, I’d been thinking that this same description applies, in spades, to LLM generative AI. Fantastically useful, downright amazing at times, but features. Not products. Or at least not broadly universal products. Chatbots are products, of course. People pay for access to the best of them, or for extended use of them. But people pay for Dropbox too.

Chatbots can be useful. There are people doing amazing work through them. But they’re akin to the terminal and command-line tools. Most people just don’t think like that.

What Apple unveiled last week with Apple Intelligence wasn’t so much new products, but new features — a slew of them — for existing products, powered by generative AI.

Safari? Better now, with generative AI page summaries. Messages? More fun, with Genmoji. Notes and Mail and Pages (and any other app that uses the system text frameworks)? Better now, with proofreading and rewriting tools built-in. Photos? Even better recommendations for memories, and automatic categorization of photos into smart collections. Siri? That frustrating, dumb-as-a-rock son of a bitch, Siri? Maybe, actually, pretty useful and kind of smart now. These aren’t new apps or new products. They’re the most used, most important apps Apple makes, the core apps that define the Apple platforms ecosystem, and Apple is using generative AI to make them better and more useful — without, in any way, rendering them unfamiliar.1

We had a lot of questions about Apple’s generative AI strategy heading into WWDC. Now that we have the answers, it all looks very obvious, and mostly straightforward. First, their models are almost entirely based on personal context, by way of an on-device semantic index. In broad strokes, this on-device semantic index can be thought of as a next-generation Spotlight. Apple is focusing on what it can do that no one else can on Apple devices, and not really even trying to compete against ChatGPT et al for world-knowledge context. They’re focusing on unique differentiation, and eschewing commoditization.

Second, they’re doing both on-device processing, for smaller/simpler tasks, and cloud processing (under the name Private Cloud Compute) for more complex tasks. All of this is entirely Apple’s own work: the models, the servers (based on Apple silicon), the entire software stack running on the servers, and the data centers where the servers reside. This is an enormous amount of work, and seemingly puts the lie to reports that Apple executives only became interested in generative AI 18 months ago. And if they did accomplish all this in just 18 months, that’s a remarkable achievement.

Anyone can make a chatbot. (And, seemingly, everyone is — searching for “chatbot” in the App Store is about as useful as searching for “game”.) Apple, conspicuously, has not made one. Benedict Evans keenly observes:

To begin, then: Apple has built an LLM with no chatbot. Apple has
built its own foundation models, which (on the benchmarks
it published) are comparable to anything else on the market, but
there’s nowhere that you can plug a raw prompt directly into the
model and get a raw output back – there are always sets of buttons
and options shaping what you ask, and that’s presented to the user
in different ways for different features. In most of these
features, there’s no visible bot at all. You don’t ask a question
and get a response: instead, your emails are prioritised, or you
press ‘summarise’ and a summary appears. You can type a request
into Siri (and Siri itself is only one of the many features using
Apple’s models), but even then you don’t get raw model output
back: you get GUI. The LLM is abstracted away as an API call.

Instead Apple is doing what no one else can do: integrating generative AI into the frameworks in iOS and MacOS used by developers to create native apps. Apps built on the system APIs and frameworks will gain generative AI features for free, both in the sense that the features come automatically when the app is running on a device that meets the minimum specs to qualify for Apple Intelligence, and in the sense that Apple isn’t charging developers or users to utilize these features.
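
As a minimal sketch of what “for free” means in practice (assuming a device that qualifies for Apple Intelligence; there is no AI-specific API anywhere in this code): a stock UITextView picks up the system-wide writing tools automatically, because they ship in the shared text frameworks:

    import UIKit

    // A plain text view, with no Apple Intelligence code anywhere. On an
    // eligible device running iOS 18, the standard edit menu surfaces the
    // system proofreading and rewriting tools for this view automatically.
    final class NotesViewController: UIViewController {
        private let textView = UITextView()

        override func viewDidLoad() {
            super.viewDidLoad()
            textView.frame = view.bounds
            textView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
            textView.font = .preferredFont(forTextStyle: .body)
            textView.isEditable = true
            view.addSubview(textView)
        }
    }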

Apple’s keynote presentation was exceedingly well-structured and paced. But nevertheless it was widely misunderstood, I suspect because expectations were so wrong. Those who believed going in that Apple was far behind the state of the art in generative AI technology wrongly saw the keynote’s coda — the announcement of a partnership with OpenAI to integrate their latest model, GPT-4o, as an optional “world knowledge” layer sitting atop Apple’s own homegrown Apple Intelligence — as an indication that most or even all of the cool features Apple revealed were in fact powered by OpenAI. Quite the opposite. Almost nothing Apple showed in the keynote was from ChatGPT.

What I see as the main takeaways:

Apple continues to build machine learning and generative AI features across its core platforms, iOS and MacOS. They’ve been adding such features for years, and announced many new ones this year. Nothing Apple announced in the entire first hour of the keynote was part of “Apple Intelligence”. Math Notes (freeform handwritten or typed mathematics, in Apple Notes and the Calculator app, which is finally coming to iPadOS) is coming to all devices running iOS 18 and MacOS 15 Sequoia. Smart Script — the new personalized handwriting feature when using Apple Pencil, which aims to improve the legibility of your handwriting as you write, and simulates your handwriting when pasting text or generating answers in Math Notes — is coming to all iPads with an A14 or better chip. Inbox categorization and smart message summaries are coming to Apple Mail on all devices. Safari web page summaries are coming to all devices. Better background clipping (“greenscreening”) for videoconferencing. None of these features are under the “Apple Intelligence” umbrella. They’re for everyone with devices eligible for this year’s OS releases.

The minimum device specs for Apple Intelligence are understandable, but regrettable, particularly the fact that the only current iPhones that are eligible are the iPhone 15 Pro and Pro Max. Even the only-nine-month-old iPhone 15 models don’t make the cut. When I asked John Giannandrea (along with Craig Federighi and Greg Joswiak) about this on stage at The Talk Show Live last week, his answer was simple: lesser devices aren’t fast enough to provide a good experience. That’s the Apple way: better not to offer the feature at all than offer it with a bad (slow) experience. A-series chips before last year’s A17 Pro don’t have enough RAM and don’t have powerful enough Neural Engines. But by the time Apple Intelligence features actually become available — even in beta form (they are not enabled in the current developer OS betas) — the iPhone 15 Pro will surely be joined by all iPhone 16 models, both Pro and non-pro. Apple Intelligence is skating to where the puck is going to be in a few years, not where it is now.

Surely Apple is also being persnickety with the device requirements to lessen the load on its cloud compute servers. And if this pushes more people to upgrade to a new iPhone this year, I doubt Tim Cook is going to see that as a problem.

One question I’ve been asked repeatedly is why devices that don’t qualify for Apple Intelligence can’t just do everything via Private Cloud Compute. Everyone understands that if a device isn’t fast or powerful enough for on-device processing, that’s that. But why can’t older iPhones (or in the case of the non-pro iPhones 15, new iPhones with two-year-old chips) simply use Private Cloud Compute for everything? From what I gather, that just isn’t how Apple Intelligence is designed to work. The models that run on-device are entirely different models than the ones that run in the cloud, and one of those on-device models is the heuristic that determines which tasks can execute with on-device processing and which require Private Cloud Compute or ChatGPT. But, see also the previous item in this list — surely Apple has scaling concerns as well. As things stand, with only devices using M-series chips or the A17 or later eligible, Apple is going to be on the hook for an enormous amount of server-side computation with Private Cloud Compute. They’d be on the hook for multiples of that scale if they enabled Apple Intelligence for older iPhones, with those older iPhones doing none of the processing on-device. The on-device processing component of Apple Intelligence isn’t just nice-to-have, it’s a keystone to the entire thing.
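
A toy sketch of that division of labor, in Swift. Every name and number here is invented for illustration; Apple’s real router is a model, not a threshold, and none of this is actual API:

    // Hypothetical, for illustration only. The point: the router itself
    // runs on-device, so a phone that can't run the local models can't
    // simply forward everything to Private Cloud Compute.
    enum ExecutionTarget {
        case onDevice             // small local model handles the task
        case privateCloudCompute  // heavier model on Apple silicon servers
    }

    struct AIRequest {
        let complexityScore: Double  // pretend output of a local classifier
    }

    func route(_ request: AIRequest) -> ExecutionTarget {
        request.complexityScore < 0.5 ? .onDevice : .privateCloudCompute
    }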

Apple could have skipped, or simply delayed announcing until the fall, the entire OpenAI partnership, and they still would have had an impressive array of generative AI features with broad, practical appeal. And clearly they would have gotten a lot more credit for their achievements in the aftermath of the keynote. I remain skeptical that integrating ChatGPT (and any future world-knowledge LLM chatbot partners) at the OS level will bring any significant practical advantage to users versus just using the chatbot apps from the makers of those LLMs. But perhaps removing a few steps, and eliminating the need to choose, download, and sign up for a third-party chatbot, will expose such features to many more users than are using them currently. But I can’t help but feel that integrating these third-party chatbots in the OSes is at least as much a services-revenue play as a user-experience play.

The most unheralded aspect of Apple Intelligence is that the data centers Apple is building for Private Cloud Compute are not only carbon neutral, but are operating entirely on renewable energy sources. That’s extraordinary, and I believe unique in the entire industry. But it’s gone largely un-remarked-upon — because Apple itself did not mention this during the WWDC keynote. Craig Federighi first mentioned it in a post-keynote interview with Justine Ezarik, and he reiterated it on stage with me at The Talk Show Live From WWDC. In hindsight, I wish I’d asked, on stage, why Apple did not even mention this during the keynote, let alone trumpet it. I suspect the real answer is that Apple felt like they couldn’t brag about their own data centers running entirely on renewable energy during the same event in which they announced a partnership with OpenAI, whose data centers can make no such claims. OpenAI’s carbon footprint is a secret, and experts suspect it’s bad. It’s unseemly to throw your own partner under the bus, but that takes Apple Intelligence’s proclaimed carbon neutrality off the table as a marketing point. Yet another reason why I feel Apple might have been better off not announcing this partnership last week.

If you don’t want or don’t trust Apple Intelligence (or just not yet), you’ll be able to turn it off. And you’ll have to opt in to using the integrated ChatGPT feature, and, each time Apple Intelligence decides to send you to ChatGPT to handle a task, you’ll have to explicitly allow it. As currently designed, no one is going to accidentally interact with, let alone expose personal information to, ChatGPT. If anything I suspect the more common complaint will come from people who wish to use ChatGPT without confirmation each time. At present there’s no “Always allow” option, but some people are going to want one.

At a technical level Apple is using indirection to anonymize devices from ChatGPT. OpenAI will never see your IP address or precise location. At a policy level, OpenAI has agreed not to store user data, nor use data for training purposes, unless users have signed into a ChatGPT account. If you want to use Apple Intelligence but not ChatGPT, you can. If you want to use ChatGPT anonymously, you can. And if you do want ChatGPT to keep a history of your interactions, you can do that too, by signing in to your account. Users are entirely in control, as they should be.

VisionOS 2 is not getting any Apple Intelligence features, despite the fact that the Vision Pro has an M2 chip. One reason is that VisionOS remains a dripping-wet new platform — Apple is still busy building the fundamentals, like rearranging and organizing apps in the Home view. VisionOS 2 isn’t even getting features like Math Notes, which, as I mentioned above, isn’t even under the Apple Intelligence umbrella. But another reason is that, according to well-informed little birdies, Vision Pro is already making significant use of the M2’s Neural Engine to supplement the R1 chip for real-time processing purposes — occlusion and object detection, things like that. With M-series-equipped Macs and iPads, the Neural Engine is basically sitting there, fully available for Apple Intelligence features. With the Vision Pro, it’s already being used.

“Apple Intelligence” is not one thing or one model. Or even two models — local and cloud. It’s an umbrella for dozens of models, some of them very specific. One of the best, potentially, is a new model that will allow Siri to answer technical support questions about Apple products and services. This model has been trained on Apple’s own copious Knowledge Base of support documentation. You can’t say “nobody reads the documentation” any more — Siri is reading it. Apple’s platforms are so rich and deep, but most users’ knowledge of them is shallow; getting correct answers from Siri to specific how-to questions could be a game-changer. AI-generated slop is polluting web search results for technical help; Apple is using targeted AI trained on its own documentation to avoid the need to search the web in the first place. Technical documentation isn’t sexy, but exposing it all through natural language queries could be one of the sleeper hits of this year’s announcements.

Xcode is the one product where Apple was clearly behind on generative AI features. It was behind on LLM-backed code completion/suggestion/help last year. Apple introduced two generative AI features in Xcode 16, and they exemplify the local/cloud distinction in Apple Intelligence in general. Predictive code completion runs locally, on your Mac. Swift Assist is more profound, answering natural language questions and providing entire solutions in working Swift code, and runs entirely in Private Cloud Compute.

Take It All With a Grain of Salt

Lastly, it is essential to note that we haven’t been able to try any of these Apple Intelligence features yet. None of them are yet available in the developer OS betas, and none are slated to be available, even in beta, until “later this summer”. I witnessed multiple live demos of some of these features last week, during press briefings at Apple Park after the keynote. Demos I witnessed included the writing tools (“make this email sound more professional”) and Xcode code completion and Swift Assist. But those demos were conducted by Apple employees; we in the media were not able to try them ourselves.

It all looks very impressive, and almost all these features seem very practical. But it’s all very, very early. None of it counts as real until we’re able to use it ourselves. We don’t know how well it works. We don’t know how well it scales.

If generative AI weren’t seen as essential — both in terms of consumer marketing and investor confidence — I think much, if not most, of what Apple unveiled in “Apple Intelligence” wouldn’t even have been announced until next year’s WWDC, not last week’s WWDC. Again, none of the features in “Apple Intelligence” are even available in beta yet, and I think all or most of them will be available only under a “beta” label until next year.

It’s good to see Apple hustling, though. I continue to believe it’s incorrect to see Apple as “behind”, overall, on generative AI. But clearly they are feeling tremendous competitive pressure on this front, which is good for them, and great for us.

Image Playground is a new app, and thus definitely counts as a product, but at the moment I’m seeing it as the least interesting part of Apple Intelligence, if only because it’s offering something a dozen other products offer, and it doesn’t seem to do a particularly interesting job of it. ↩︎

Kolide by 1Password

My thanks to Kolide by 1Password for sponsoring last week at DF. The September 2023 MGM hack is one of the most notorious ransomware attacks in recent years. Journalists and cybersecurity experts rushed to report on the broken slot machines, angry hotel guests, and the fateful phishing call to MGM’s help desk that started it all.

But while it’s true that MGM’s help desk needed better ways of verifying employee identity, there’s another factor that should have stopped the hackers in their tracks. That’s where you should focus your attention. In fact, if you just focus your vision, you’ll find you’re already staring at the security story the pros have been missing.

It’s the device you’re reading this on.

To read more about what they learned after researching the MGM hack — like how hacker groups get their names, the worrying gaps in MGM’s security, and why device trust is the real core of the story — check out the Kolide by 1Password blog.

★ Training Large Language Models on the Public Web

The whole point of the public web is that it’s there to learn from — even if the learner isn’t human. Is there a single LLM that was *not* trained on the public web? To my knowledge there is not, and a model that is ignorant of all information available on the public web would be, well, pretty ignorant of the world.

Yesterday, quoting Anthropic’s announcement of their impressive new model, Claude 3.5 Sonnet, I wrote:

Also, from the bottom of the post, this interesting nugget:

One of the core constitutional principles that guides our AI model
development is privacy. We do not train our generative models on
user-submitted data unless a user gives us explicit permission to
do so. To date we have not used any customer or user-submitted
data to train our generative models.

Even Apple can’t say that.

It now seems clear that I misread Anthropic’s statement. I wrongly interpreted “user-submitted data” as including everything on the public web. That’s not true. Here is Anthropic’s FAQ on training data:

Large language models such as Claude need to be “trained” on text
so that they can learn the patterns and connections between words.
This training is important so that the model performs effectively
and safely.

While it is not our intention to “train” our models on personal
data specifically, training data for our large language models,
like others, can include web-based data that may contain publicly
available personal data. We train our models using data from three
sources:

Publicly available information via the Internet
Datasets that we license from third party businesses
Data that our users or crowd workers provide

We take steps to minimize the privacy impact on individuals
through the training process. We operate under strict policies and
guidelines for instance that we do not access password protected
pages or bypass CAPTCHA controls. We undertake due diligence on
the data that we license. And we encourage our users not to use
our products and services to process personal data. Additionally,
our models are trained to respect privacy: one of our
constitutional “principles” at the heart of Claude, based on the
Universal Declaration of Human Rights, is to choose the response
that is most respectful of everyone’s privacy, independence,
reputation, family, property rights, and rights of association.

Here is Apple, from its announcement last week of their on-device and server foundation models:

We train our foundation models on licensed data, including data
selected to enhance specific features, as well as publicly
available data collected by our web-crawler, AppleBot. Web
publishers have the option to opt out of the use of their
web content for Apple Intelligence training with a data usage
control.

We never use our users’ private personal data or user interactions
when training our foundation models, and we apply filters to
remove personally identifiable information like social security
and credit card numbers that are publicly available on the
Internet. We also filter profanity and other low-quality content
to prevent its inclusion in the training corpus. In addition to
filtering, we perform data extraction, deduplication, and the
application of a model-based classifier to identify high quality
documents.

This puts Apple in the same boat as Anthropic in terms of using public pages on the web as training sources. Some writers and creators object to this — including Federico Viticci, whose piece on MacStories I linked to with my “Even Apple can’t say that” comment yesterday. Dan Moren wrote a good introduction to blocking these crawling bots with robots.txt directives.
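
For reference, the opt-out mechanics are just a few lines of robots.txt per crawler. Applebot-Extended and GPTBot are the documented tokens for Apple’s AI-training opt-out and OpenAI’s training crawler, respectively (Applebot-Extended leaves Applebot’s search indexing alone); treat any such list as a snapshot, since new bots appear constantly:

    # Opt out of Apple Intelligence training; Applebot's search
    # indexing is unaffected.
    User-agent: Applebot-Extended
    Disallow: /

    # OpenAI's training crawler.
    User-agent: GPTBot
    Disallow: /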

The best argument against Apple’s use of public web pages for model training is that they trained first, and only issued the instructions for blocking Applebot from AI training after announcing Apple Intelligence last week. Apple should clarify whether they plan to re-index the public data they used for training before Apple Intelligence ships in beta this summer. Clearly, a website that bans Applebot-Extended shouldn’t have its data in Apple’s training corpus simply because Applebot crawled it before Apple Intelligence was even announced. It’s fair for public data to be excluded on an opt-out basis, rather than included on an opt-in one, but Apple trained its models on the public web before it allowed for opting out.

But other than that chicken/egg opt-out issue, I don’t object to this. The whole point of the public web is that it’s there to learn from — even if the learner isn’t human. Is there a single LLM that was not trained on the public web? To my knowledge there is not, and a model that is ignorant of all information available on the public web would be, well, pretty ignorant of the world. To me the standards for LLMs should be similar to those we hold people to. You’re free to learn from anything I publish, but not free to plagiarize it. If you quote it, attribute and link to the source. That’s my standard for AI bots as well. So at the moment, my robots.txt file bans just one: Perplexity.
