daring-rss

‘The Apple Jonathan: A Very 1980s Concept Computer That Never Shipped’

Stephen Hackett, writing at 512 Pixels:

The backbone of the system would need to accept modules from Apple
and other companies, letting users build what they needed in terms
of functionality, as D’Agostino writes:

(Fitch) designed a simple hardware “backbone” carrying basic
operations and I/O on which the user could add a series of “book”
modules, carrying hardware for running Apple II, Mac, UNIX and
DOS software, plus other modules with disk drives or networking
capabilities.

This meant that every user could have their own unique Jonathan
setup, pulling together various software platforms, storage
devices, and hardware capabilities into their own personalized
system. Imagining what would have been required for all this to
work together gives me a headache. In addition to the shared
backbone interface, there would need to be software written to
make an almost-endless number of configurations work smoothly for
the most demanding of users. It was all very ambitious, but
perhaps a little too far-fetched.

I’d go further than “never shipped” and describe this as a concept that never could have shipped. It was a pipe dream. The concepts sure did look cool though.

 ★ 

Stephen Hackett, writing at 512 Pixels:

The backbone of the system would need to accept modules from Apple
and other companies, letting users build what they needed in terms
of functionality, as D’Agostino writes:

(Fitch) designed a simple hardware “backbone” carrying basic
operations and I/O on which the user could add a series of “book”
modules, carrying hardware for running Apple II, Mac, UNIX and
DOS software, plus other modules with disk drives or networking
capabilities.

This meant that every user could have their own unique Jonathan
setup, pulling together various software platforms, storage
devices, and hardware capabilities into their own personalized
system. Imagining what would have been required for all this to
work together gives me a headache. In addition to the shared
backbone interface, there would need to be software written to
make an almost-endless number of configurations work smoothly for
the most demanding of users. It was all very ambitions, but
perhaps a little too far-fetched.

I’d go further than “never shipped” and describe this is a concept that never could have shipped. It was a pipe dream. The concepts sure did look cool though.

Read More 

Meta to Start Licensing Quest’s Horizon OS to Third-Party OEMs

Alex Heath, The Verge:

Meta has started licensing the operating system for its Quest
headset to other hardware makers, starting with Lenovo and
Asus. It’s also making a limited-run, gaming-focused Quest with
Xbox.

On the theme of opening up, Meta is also pushing for more ways to
discover alternative app stores. It’s making its experimental App
Lab store more prominent and even inviting Google to bring the
Play Store to its operating system, which is now called Horizon
OS. In a blog post, Meta additionally said that it’s
working on a spatial framework for developers to more easily port
their mobile apps to Horizon OS. […]

Zuckerberg has been clear that he wants his company to be a more
open platform than Apple’s. Here, he’s firmly positioning Meta’s
Horizon OS as the Android alternative to Apple’s Vision Pro. Given
how Android was more of a reaction to the iPhone, an analogy he’d
probably prefer is how Microsoft built the early PC market by
licensing Windows.

It definitely seems more like Windows than Android — there’s no word that Horizon OS is going open source. But we have an answer regarding what Zuckerberg meant when he positioned the Quest platform — now the Horizon OS platform, I suppose — as the “open” alternative to Apple’s VisionOS.

 ★ 

‘LLM in a Flash: Efficient Large Language Model Inference With Limited Memory’ (PDF)

Re: my previous item on LLMs being RAM-hungry while iPhones are relatively low on RAM, this certainly isn’t news to Apple. Back in December, a team of eight researchers from Apple published this paper, which states in its abstract:

This paper tackles the challenge of efficiently running LLMs that
exceed the available DRAM capacity by storing the model parameters
in flash memory, but bringing them on demand to DRAM. Our method
involves constructing an inference cost model that takes into
account the characteristics of flash memory, guiding us to
optimize in two critical areas: reducing the volume of data
transferred from flash and reading data in larger, more contiguous
chunks. Within this hardware-informed framework, we introduce two
principal techniques. First, “windowing” strategically reduces
data transfer by reusing previously activated neurons, and second,
“row-column bundling”, tailored to the sequential data access
strengths of flash memory, increases the size of data chunks read
from flash memory. These methods collectively enable running
models up to twice the size of the available DRAM, with a 4-5× and
20-25× increase in inference speed compared to naive loading
approaches in CPU and GPU, respectively. Our integration of
sparsity awareness, context-adaptive loading, and a
hardware-oriented design paves the way for effective inference of
LLMs on devices with limited memory.
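
The gist of the “windowing” idea is a cache: keep the weight rows that recent tokens actually activated resident in DRAM, and page everything else in from flash in contiguous row-sized reads only when it’s needed. Here’s a minimal, greatly simplified sketch of that caching pattern in Swift — not the paper’s implementation, and the type and parameter names (FlashBackedWeights, rowBytes, budget, and so on) are hypothetical:

    import Foundation

    // A toy illustration (not the paper's code) of keeping a sliding "window"
    // of recently used weight rows in DRAM while the full matrix lives in
    // flash, read back on demand in contiguous row-sized chunks.
    struct FlashBackedWeights {
        let fileHandle: FileHandle
        let rowBytes: Int               // bytes per weight row (hypothetical)
        let budget: Int                 // max rows kept resident in DRAM
        private var resident: [Int: Data] = [:]
        private var lru: [Int] = []     // least-recently-used order

        init(url: URL, rowBytes: Int, budget: Int) throws {
            self.fileHandle = try FileHandle(forReadingFrom: url)
            self.rowBytes = rowBytes
            self.budget = budget
        }

        // Return one row, serving it from DRAM if a recent token already
        // activated it, otherwise paging it in from flash.
        mutating func row(_ index: Int) throws -> Data {
            if let cached = resident[index] {
                touch(index)
                return cached                       // hit: no flash traffic
            }
            try fileHandle.seek(toOffset: UInt64(index * rowBytes))
            let data = try fileHandle.read(upToCount: rowBytes) ?? Data()
            resident[index] = data
            touch(index)
            evictIfNeeded()
            return data
        }

        private mutating func touch(_ index: Int) {
            lru.removeAll { $0 == index }
            lru.append(index)
        }

        private mutating func evictIfNeeded() {
            while resident.count > budget, let oldest = lru.first {
                lru.removeFirst()
                resident.removeValue(forKey: oldest)
            }
        }
    }

The paper’s actual techniques are far more involved — sparsity prediction, row-column bundling to enlarge each read — but the memory math is the same: only the working set has to fit in DRAM.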

 ★ 

On-Device AI Craves RAM

Ron Amadeo, writing for Ars Technica on a purported leak of a trio of Pixel 9 phones:

Rozetked says (through translation) that the phone is “similar in
size to the iPhone 15 Pro.” It runs a Tensor G4 SoC, of
course, and — here’s a noteworthy spec — has a whopping 16GB of
RAM according to the bootloader screen. The Pixel 8 Pro tops out
at 12GB.

Anything could change between prototype and product, especially
for RAM, which is usually scaled up and down in various phone
tiers. A jump in RAM is something we were expecting though. As
part of Google’s new AI-focused era, it wants generative AI models
turned on 24/7 for some use cases. Google said as much in a recent
in-house podcast, pointing to some features like a new
version of Smart Reply built right into the keyboard, which
“requires the models to be RAM-resident” — in other words, loaded
all the time. Google’s desire to keep generative AI models in
memory means less RAM for your operating system to actually do
operating system things, and one solution to that is to just add
more RAM. So how much RAM is enough? At one point Google said the
smaller Pixel 8’s 8GB of RAM was too much of a “hardware
limitation” for this approach. Google PR also recently
told us the company still hasn’t enabled generative AI smart reply
on Pixel 8 Pro by default with its 12GB of RAM, so expect these
RAM numbers to start shooting up.

That last link is to a story positing that Google’s Gemini Nano runs on the Pixel 8 Pro but not the regular Pixel because the Pro has more RAM (12 vs. 8 GB).

Comparing iPhone RAM to Android RAM has never been apples-to-apples (same goes for battery capacity), but still, it’s hard not to wonder whether Apple’s on-device AI plans are hamstrung by the relatively stingy amounts of RAM on iPhones. Here’s a list from 9to5Mac showing the RAM in each iPhone going back to the original (which had just 128 MB!). iOS 17 supports models dating back to 2018’s iPhone XS and XR (4 and 3 GB of RAM, respectively). If iOS 18 drops those models, the new baseline will be the iPhones 11 and 11 Pro, which all sport 4 GB. The most RAM on any iPhones to date is the 8 GB in the 15 Pro models, but 8 GB is what Google deemed insufficient for the Pixel 8 to run Gemini Nano.
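
For a rough sense of why RAM-resident models are such a squeeze, here’s some back-of-the-envelope arithmetic — the parameter count and quantization levels are illustrative assumptions, not Apple’s or Google’s actual figures:

    import Foundation

    // Approximate DRAM footprint of a model held permanently in memory:
    // parameters × bits-per-parameter ÷ 8, expressed in GB.
    func residentFootprintGB(parameters: Double, bitsPerParameter: Double) -> Double {
        (parameters * bitsPerParameter / 8.0) / 1_073_741_824.0
    }

    // A ~3B-parameter model (roughly the Gemini Nano class) at a few
    // hypothetical quantization levels:
    for bits in [16.0, 8.0, 4.0] {
        let gb = residentFootprintGB(parameters: 3.25e9, bitsPerParameter: bits)
        print(String(format: "%2.0f-bit: %.1f GB resident", bits, gb))
    }
    // Prints roughly: 16-bit: 6.1 GB, 8-bit: 3.0 GB, 4-bit: 1.5 GB — and that's
    // before the KV cache, the OS, and every other app competing for a 4–8 GB phone.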

Might some iOS 18 on-device AI features be limited to newer models with more RAM?

 ★ 

★ Making a Mountain Out of Molehill-Sized M4 News

“The entire Mac product line is set for annual speed-bump Apple silicon updates” is, as far as I can tell, the actual story. Not “Mac sales are in the tank and Apple is overhauling the whole product line to change its focus to AI.”

Here’s how I see the current state of Apple’s Macintosh hardware lineup, three-and-a-half years into the Apple silicon era and midway through the M3 generation. Apple is a company that in many ways is built around an annual schedule. WWDC comes every June. New iPhones (along with Watches) come every September. The new OS releases (which are announced and previewed at WWDC) ship later each year in the fall. Many Apple products are not on an annual schedule — the iPad, to name the most prominent example — but the OSes are, and the iPhones and Watches are. All things considered, I think Apple would like to have more of its products on an annual cycle. This predictable regularity is one of the hallmarks of Tim Cook’s era as CEO.

Launching a new PC architecture is difficult (to say the least). And the M1 launched at the end of 2020, the most tumultuous and disruptive year for the world since World War II. Then, M2 models seemed late — that’s the only logical explanation for the M2 MacBook Pros not shipping until January 2023, but their M3 successors shipping just 10 months later. Just last month Apple shipped the M3 MacBook Airs. It feels to me, as a longtime observer of the company, that with the M3 generation, Apple has started to hit its intended stride. The M1 and M2 generations were like an airplane taking off — a bit rocky and rough. Turbulence is to be expected. But with the M3, Apple silicon hit cruising altitude. The seatbelt light is now off, and new M-series chips are seemingly being developed on the same annual schedule as the iPhone’s A-series chips. (Note that the M3 family uses the same 3nm fabrication process as the A17 Pro.)

If Apple wants to refresh Macs with new generations of M-series chips annually — and I suspect they do — the schedule we’re seeing with the M3 generation makes sense: MacBook Pros in the fall, MacBook Airs a few months later, pro desktops in the spring. Last year the M3 update to the iMac — a product that skipped the entire M2 generation — shipped alongside the MacBook Pros, but I could see that happening alongside the non-pro MacBook Airs in future years. Because the iMacs skipped the M2 generation, they were overdue. That leaves the spring or even early summer for the high-performance Mac Studio and Mac Pro, and the surprisingly-pro-in-many-use-cases Mac Mini.

So I expect we’ll see M3-generation updates to the Mac Studio, Mac Pro, and Mac Mini either in May (alongside the universally-expected lineup of new iPads) or (more likely) at WWDC in June. And then, if all things go according to Apple’s plans, I expect to see M4-generation MacBook Pros in November, M4 MacBook Airs next February or March, and the desktop models just before or at WWDC 2025, 14 months from now. Lather, rinse, repeat, every 12 months for years to come.

And how do we expect the M4 chips to evolve? Everything tends to get incrementally faster between generations: CPU, GPU, I/O, Neural Engine. But it’s the GPU where Apple silicon lags Nvidia’s state-of-the-art in sheer performance, and it’s GPU performance that’s essential for AI model training, so it’s natural to expect GPU improvements to be an area of focus. Intel-based Mac Pros were configurable with up to 1.5 TB of RAM, but the M2 Ultra Mac Pro maxes out at 192 GB of RAM. Increasing the maximum amount of RAM in high-end configurations is an obvious improvement that Apple’s chip designers should be focused on. So we’ll probably see incremental (15 percent-ish) gains in CPU performance, greater gains in GPU and Neural Engine performance, and perhaps higher capacity for RAM.

No need to follow the rumor mill or to hear any leaks from insiders in Cupertino. The above summary can all be gleaned just by paying attention to Apple’s patterns and industry-wide trends.

That brings us to a report by Mark Gurman at Bloomberg last week: “Apple Plans to Overhaul Entire Mac Line With AI-Focused M4 Chips”. Overhauled line-up! AI-focused chips! Big news!

The company, which released its first Macs with M3 chips five
months ago, is already nearing production of the next generation — the M4 processor — according to people with knowledge of the
matter. The new chip will come in at least three main varieties,
and Apple is looking to update every Mac model with it, said the
people, who asked not to be identified because the plans haven’t
been announced.

The new Macs are underway at a critical time. After peaking in
2022, Mac sales fell 27% in the last fiscal year, which ended in
September. In the holiday period, revenue from the computer line
was flat. Apple attempted to breathe new life into the Mac
business with an M3-focused launch event last October, but those
chips didn’t bring major performance improvements over the M2 from
the prior year.

It is true that Mac sales were down considerably last year, but Gurman is painting that as the result of the M3 generation being a meh upgrade compared to M2. But (a) the M3 chips only started shipping in the most recently reported quarter; (b) they’re a fine upgrade compared to the M2 chips. The real problem is that laptop sales shot up considerably during COVID, with so many people working from home and kids “going to school” via Zoom from home. MacBook sales were pulled forward, so a dip seemed inevitable, no matter how good the M2 and M3 offerings were. And Apple silicon was so good right out of the gate that most people who own M1 Macs — any M1 Mac, including the base MacBook Air or Mac Mini — have little reason to consider upgrading yet.

Apple also is playing catch-up in AI, where it’s seen as a laggard
to Microsoft Corp., Alphabet Inc.’s Google and other tech peers.
The new chips are part of a broader push to weave AI capabilities
into all its products.

Back in 2007, Joe Biden dropped a zinger during a presidential primary debate: “Rudy Giuliani, there’s only three things he mentions in a sentence: a noun, a verb, and 9/11.” This year, product rumors need only three things: a noun, a verb, and “AI”.

Apple shares climbed 4.3% to $175.04 on Thursday in New York, the
biggest single-day gain in 11 months. They had been down 13% this
year through Wednesday’s close.

Bloomberg gonna Bloomberg.

Apple is aiming to release the updated computers beginning late
this year and extending into early next year. There will be new
iMacs, a low-end 14-inch MacBook Pro, high-end 14-inch and 16-inch
MacBook Pros, and Mac minis — all with M4 chips. But the
company’s plans could change. An Apple spokesperson declined to
comment. […]

The move will mark a quick refresh schedule for the iMac and
MacBook Pro, as both lines were just updated in October. The Mac
mini was last upgraded in January 2023.

Apple is then planning to follow up with more M4 Macs throughout
2025. That includes updates to the 13-inch and 15-inch MacBook Air
by the spring, the Mac Studio around the middle of the year, and
the Mac Pro later in 2025. The MacBook Air received the M3 chip
last month, while the Mac Studio and Mac Pro were updated with M2
processors last year.

If all this pans out, it is indeed news, but the news is that Apple has successfully gotten the entire Mac hardware lineup onto an annual upgrade cycle. Whereas Gurman is framing the news as a reactionary response by Apple, “overhauling” the hardware lineup very shortly after a supposedly tepid reaction to the M3 generation of Macs that, at this writing, still hasn’t completed rolling out.

The M4 chip line includes an entry-level version dubbed Donan,
more powerful models named Brava and a top-end processor codenamed
Hidra. The company is planning to highlight the AI processing
capabilities of the components and how they’ll integrate with the
next version of macOS, which will be announced in June at Apple’s
annual developer conference.

I’m sure Apple will have much to say about “AI” at WWDC this June, and that M4-based Macs will execute AI features faster and more efficiently than previous chips, but that’s what new chips do for everything. They’re faster.

That they will provide Apple with AI-related performance to brag about just means they’ll have faster GPUs and bigger Neural Engines, which is exactly how Apple silicon has been evolving year-over-year for nearly 15 years, dating back to the original iPad in 2010. No one is postulating that M4-based Macs will offer AI features that require M4 chips.

The Mac Pro remains the lowest-selling model in the company’s
computer lineup, but it has a vocal fan base. After some customers
complained about the specifications of Apple’s in-house chips, the
company is looking to beef up that machine next year. […] As
part of the upgrades, Apple is considering allowing its
highest-end Mac desktops to support as much as a half-terabyte of
memory. The current Mac Studio and Mac Pro top out at 192
gigabytes — far less capacity than on Apple’s previous Mac Pro,
which used an Intel Corp. processor. The earlier machine worked
with off-the-shelf memory that could be added later and handle as
much as 1.5 terabytes. With Apple’s in-house chips, the memory is
more deeply integrated into the main processor, making it harder
to add more.

Raising the memory ceiling from 192 GB to 512 GB is also news, but surely is the natural progression of the platform, not a response to criticism of the M2 Mac Pro being little more than a Mac Studio with more options for I/O expansion. No one knows better than Apple that the first-generation Apple silicon Mac Pros are a bit of a disappointment. Raising the ceiling to 512 GB would be a significant improvement over the M2 Ultra, but would still offer just one-third the RAM ceiling of the 2019 Intel-based Mac Pro — a nice upgrade for Apple silicon, yet still not enough for the highest of high-end computing needs.

Anyway, “the entire Mac product line is set for annual speed-bump Apple silicon updates” is, as far as I can tell, the actual story. Not “Mac sales are in the tank and Apple is overhauling the whole product line to change its focus to AI.”

★ Face the Critic: Ian Betteridge Edition

To sum up my stance: Tracking is wrong when it’s done without consent, and when users have no idea what’s being tracked or how it’s being used. Tracking is fine when it’s done with consent, and users know what’s being tracked and how it’s being used.

Ian Betteridge, quoting yours truly on non-consensual tracking back in 2020 and then my piece yesterday on the EDPB issuing an opinion against Meta’s “Pay or OK” model in the EU:

I wonder what happened to turn John’s attitude from “no action
Apple can take against the tracking industry is too strong” to
defending Facebook’s “right” to choose how it invades people’s
privacy? Or is he suggesting that a private company is entitled to
defend people’s privacy, but governments are not?

I’ve seen a bit of pushback along this line recently, more or less asking: How come I was against Meta’s tracking but now seem for it? I don’t see any contradiction or change in my position though. The only thing I’d change in the 2020 piece Betteridge quotes is this sentence, which Betteridge emphasizes: “No action Apple can take against the tracking industry is too strong.” I should have inserted an adjective before “tracking” — it’s non-consensual tracking I object to, especially tracking that’s downright surreptitious. Not tracking in and of itself.

That’s why I remain a staunch supporter of Apple’s App Tracking Transparency, and consider it a success. Apple didn’t ban the use of the IDFA for cross-app tracking, and they were correct not to. They simply now require consent. If I had believed that all tracking was ipso facto wrong, I’d have been opposed to ATT on the grounds that it offers the “Allow” choice.
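
For anyone who hasn’t seen it from the developer side, the consent gate is a literal API call. Here’s a minimal sketch of the ATT flow — the surrounding function is hypothetical, but ATTrackingManager and the IDFA lookup are the real frameworks, and the app must also declare an NSUserTrackingUsageDescription string in its Info.plist for the prompt to appear:

    import AppTrackingTransparency
    import AdSupport

    // Ask the user for tracking permission; the system shows the
    // "Ask App Not to Track / Allow" alert the first time this runs.
    func requestTrackingConsent() {
        ATTrackingManager.requestTrackingAuthorization { status in
            switch status {
            case .authorized:
                // The user tapped Allow: the IDFA is available for cross-app ads.
                let idfa = ASIdentifierManager.shared().advertisingIdentifier
                print("Tracking allowed, IDFA: \(idfa)")
            case .denied, .restricted, .notDetermined:
                // No consent: the IDFA comes back zeroed out.
                print("Tracking not allowed")
            @unknown default:
                print("Unknown authorization status")
            }
        }
    }

Apple didn’t take the identifier away; it just made the “Allow” tap a prerequisite for getting a meaningful one.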

Also from 2020, I quoted Steve Jobs on privacy:

Privacy means people know what they’re signing up for, in plain
English, and repeatedly. That’s what it means. I’m an optimist,
I believe people are smart. And some people want to share more
data than other people do. Ask them. Ask them every time. Make
them tell you to stop asking them if they get tired of your
asking them. Let them know precisely what you’re going to do
with their data.

That’s what ATT does. And that’s what Meta’s “Pay or OK” model in the EU does. It offers users a clear, fair choice: Use Facebook and Instagram free of charge with targeted ads, or pay a reasonable monthly fee for an ad-free experience. No less than Margrethe Vestager herself, back in 2018, was keen on this idea:

My concern is more about whether we get the right choices. I would
like to have a Facebook in which I pay a fee each month, but I
would have no tracking and advertising and the full benefits of
privacy. It is a provoking thought after all the Facebook scandal.
This market is not being explored.

Now Meta is “exploring” that market, but the European Commission doesn’t like the results, because it turns out that when given the clear choice, the overwhelming majority of EU denizens prefer to use Meta’s platforms free-of-charge with targeted ads.

The best aspects of the EU’s digital privacy laws are those that give people the right to know what data is being collected, where it’s being stored, who it’s being shared with, etc. That’s all fantastic. But the worst aspect is the paternalism. The EU is correct to require that users provide consent before being tracked across properties. And Apple is correct for protecting unique device IDFA identifiers behind a mandatory “Ask App Not to Track / Allow” consent alert. But Jobs was right too: people are smart, and they can — and should be allowed to — make their own decisions. And many people are more comfortable with sharing data than others. The privacy zealots leading this crusade in the EU do not think people are smart, and do not think they should be trusted to make these decisions for themselves.

I don’t like Meta as a company. If a corporation can be smarmy, Meta is that. And they’ve done a lot of creepy stuff over the years, and for a long while clearly acted as though they were entitled to track whatever they could get away with technically. I suspect they thought that if they asked for consent, or made clear what and how they tracked, that users would revolt. But it turns out billions of people who enjoy Meta’s platforms are fine with the deal.

It’s obviously the case that for some people, Meta’s past transgressions are unforgivable. That’s each person’s decision to make for themselves. Me, I believe in mercy. Again, I still don’t really like the company, by and large. But Threads is pretty good. And sometimes, when I occasionally check in, Instagram can still make me smile. It’s very clear what I’m sharing with Meta when I use those apps, and I’m fine with that. If you’re not, don’t use them. (I’ve still never created a Facebook blue app account, and still feel like I haven’t missed out on a damn thing.)

To sum up my stance: Tracking is wrong when it’s done without consent, and when users have no idea what’s being tracked or how it’s being used. Tracking is fine when it’s done with consent, and users know what’s being tracked and how it’s being used. Privacy doesn’t mean never being tracked. It means never being tracked without clear consent. I think Meta is now largely, if not entirely, on the right side of this.

It’s paternalistic — infantilizing even — to believe that government bureaucrats should take these decisions out of the hands of EU citizens. Me, I trust people to decide for themselves. The current European Commission regime is clearly of the belief that all tracking is wrong, regardless of consent. That’s a radical belief that is not representative of the public. The government’s proper role is to ensure people can make an informed choice, and that they have control over their own data. That’s what I thought four years ago, and it’s what I think now.

Might Meta Go Pay-Only in the EU?

Eric Seufert, writing at Mobile Dev Memo, spitballing how Meta might respond if the EU accepts the recommendation from the EDPB that their “Pay or OK” model is illegal:

Charge a nominal fee for the ad-supported versions of Facebook and Instagram

Meta could introduce a small fee to use the ad-supported versions
of Facebook and Instagram, rendering them as completely paid
products in the EU. By eliminating its free tier, Meta should
theoretically sidestep the conditions proposed in the EDPB’s
opinion, since the elimination of a free tier supported by
personalized advertising renders the Pay or Okay restrictions
irrelevant.

As frequent MDM Podcast guest Mikołaj Barczentewicz
points out in this blog post, both Netflix and Disney+
target ads behaviorally in their paid, ad-supported tiers. Meta
could point to these products as examples of this pricing model
being invoked: all options are paid, but the cheapest option is
subsidized by behaviorally-supported ads. Of course, the EDPB has
given itself latitude with its definition of “large online
platform” to only litigate specific instances of commercial
strategy.

I didn’t think of this when I spitballed my own ideas for how Meta might respond. Maybe they offer two tiers: €1/month with targeted ads, or €6/month without ads. Maybe they even make the fee for the ad tier truly nominal, say €1/year? The problem with this might be that too few people are willing to pay anything at all for social networking. Because it’s always been free-of-charge, people (not unreasonably!) now think it ought to forever remain free-of-charge.

Regarding the “just exit the EU” option, Seufert writes:

I don’t believe that Meta will respond by exiting the EU market
altogether — at least not in the near term. Per above: the EU is
10% of (what I understand to be) Facebook’s global advertising
revenue, and GDPR fines aren’t as significant as those incurred
under the DMA. The maximum fine under the GDPR is 4% of
annual worldwide turnover, whereas the maximum fine under the
DMA is 20% of annual worldwide turnover. While I do
believe the EU regulatory regime’s intransigence will influence a
scaled, US-domiciled tech company to exit the EU market in the
medium term, my sense is that Meta won’t take that course of
action in immediate response to this decision.

That 10 percent figure is big but not indispensable. And it’s not much bigger than Apple’s 7 percent figure for App Store revenue from the EU. The EU is indeed a big and important market, but it’s nowhere near as big or important as the European Commissioners think.

 ★ 

China Orders Apple to Remove WhatsApp, Threads, Signal, and Telegram From Chinese App Store

Aaron Tilley, Liza Lin, and Jeff Horwitz, reporting for The Wall Street Journal (News+):

Meta Platforms’ WhatsApp and Threads as well as messaging
platforms Signal and Telegram were taken off the Chinese App Store
Friday. Apple said it was told to remove certain apps because of
national security concerns, without specifying which.

“We are obligated to follow the laws in the countries where we
operate, even when we disagree,” an Apple spokesperson said.

These messaging apps, which allow users to exchange messages and
share files individually and in large groups, combined have around
three billion users globally. They can only be accessed in China
through virtual private networks that take users outside China’s
Great Firewall, but are still commonly used.

I’m surprised any of these apps had been available in China until now. Two questions:

1. Are these apps still on the iPhones of Chinese people who already had them installed? I don’t recall Apple ever using the kill switch that revokes the developer signing for already-installed copies of apps pulled from the App Store. E.g., iGBA, the rip-off Nintendo emulator that briefly rocketed to the top of the charts last weekend — pulled from the App Store early this week, but if you installed it while it was available, you can still use it.

2. Do Android phones in China offer sideloading?

 ★ 

The Best Simple USB-C Microphone: Audio-Technica’s ATR2100x-USB

Joe Fabisevich asked a common question on Threads:

What’s the go to simple USB-C podcast mic that sounds good, but
doesn’t have to be top of the line or super expensive? Think more
“I need to sound professional on a podcast or two”, not “I make my
money by recording podcasts.”

My answer: Audio-Technica’s ATR2100x-USB. It costs just $50 at Amazon. (That’s an affiliate link that will make me rich if you buy through it.) Spend an extra $4 and get a foam windscreen cap. I mean just look at it — it literally looks like the microphone emoji: 🎤. You can just plug it into a USB-C port and it sounds great. No need for an XLR interface, but it does support XLR if you ever have the need.

My main podcasting microphone remains the $260 Shure BETA 87A Supercardioid Condenser, which I connect to a $180 SSL 2 audio interface. But that stays in my podcast cave in the basement. I keep the ATR2100x in my carry-on suitcase, so it’s with me whenever I’m away from home. My Dithering co-host Ben Thompson uses the same ATR2100x when he’s away from home, too. It’s a great simple mic and a fantastic value at just $50.

 ★ 

Google Reorg Puts Android, Chrome, Photos and More Under Leadership of Devices Team

David Pierce, writing for The Verge:

Google CEO Sundar Pichai announced substantial internal
reorganizations on Thursday, including the creation of a new team
called “Platforms and Devices” that will oversee all of Google’s
Pixel products, all of Android, Chrome, ChromeOS, Photos, and
more. The team will be run by Rick Osterloh, who was previously
the SVP of devices and services, overseeing all of Google’s
hardware efforts. Hiroshi Lockheimer, the longtime head of
Android, Chrome, and ChromeOS, will be taking on other projects
inside of Google and Alphabet.

This is a huge change for Google, and it likely won’t be the last
one. There’s only one reason for all of it, Osterloh says: AI.
“This is not a secret, right?” he says. Consolidating teams “helps
us to be able to do full-stack innovation when that’s necessary,”
Osterloh says.

I’m sure this is about AI, but I think it’s also about getting the company’s shit together and forming a cohesive strategy for integration with their consumer devices. Lost amid the schadenfreude surrounding the near-universal panning of Humane’s AI Pin is the question of, well, what are the device form factors we need for AI-driven features? I would argue, strenuously, that the phone is the natural AI device. It already has: always-on networking, cameras, a screen, microphones, and speakers. Everyone owns one and almost everyone takes theirs with them almost everywhere they go.

Putting all of Android under a new division led by the guy in charge of Pixel devices since 2016 says to me that Google sees AI not primarily as a way to make Android better, in general, but to make Pixel devices better, specifically. Best-in-class AI, only on Pixels, could be the sort of differentiation that actually results in Pixels gaining traction.

 ★ 
