Mac Virtualization in MacOS 15 Sequoia Now Supports Logging In to iCloud
Andrew Cunningham, writing at Ars Technica:
But up until now, you haven’t been able to sign into iCloud using
macOS on a VM. This made the feature less useful for developers or
users hoping to test iCloud features in macOS, or whose apps rely
on some kind of syncing with iCloud, or people who just wanted
easy access to their iCloud data from within a VM.
This limitation is going away in macOS 15 Sequoia, according to
developer documentation that Apple released yesterday. As
long as your host operating system is macOS 15 or newer and your
guest operating system is macOS 15 or newer, VMs will now be able
to sign into and use iCloud and other Apple ID-related services
just as they would when running directly on the hardware.
Nice change. Makes me wonder if this is related to Apple’s use of virtualization to allow security researchers to inspect the OS images for its Private Cloud Compute servers for Apple Intelligence.
(Via Dan Moren.)
Arm, Qualcomm Legal Battle Might Disrupt ‘AI PCs’
Max A. Cherney:
The British company, which is majority-owned by Japan’s SoftBank
Group, sued Qualcomm in 2022 for failing to negotiate a new
license after it acquired a new company. The suit
revolves around technology that Qualcomm, a designer of mobile
chips, acquired from a business called Nuvia that was founded by
Apple chip engineers and which it purchased in 2021 for $1.4
billion.
Arm builds the intellectual property and designs that it sells to
companies such as Apple and Qualcomm, which they use to make
chips. Nuvia had plans to design server chips based on
Arm licenses, but after the acquisition closed, Qualcomm
reassigned its remaining team to develop a laptop processor, which
is now being used in Microsoft’s latest AI PC, called Copilot+.
Arm said the current design planned for Microsoft’s Copilot+
laptops is a direct technical descendant of Nuvia’s chip. Arm said
it had cancelled the license for these chips.
My initial reaction when I see reports of legal disputes like this is “Eh, they’ll settle.” But look at the Apple-Masimo dispute over blood oxygen sensors — that’s still dragging on as we head into summer.
Casey Newton: ‘Apple’s AI Moment Arrives’
Casey Newton, Platformer:
The question now is how polished those features will feel at release. Will the new, more natural Siri deliver on its now 13-year-old promise of serving as a valuable digital assistant? Or will it quickly find itself in a Google-esque scenario where it’s telling anyone who asks to eat rocks?
Impossible to answer at this point, given that none (or almost none?) of the Apple Intelligence features or the ChatGPT integration are enabled in the developer betas. But it feels like the answer is yes, Apple’s new AI features will err on the side of caution, at the risk of feeling pedestrian, rather than turning the “Wow” dial to its maximum setting and delivering glue-on-pizza recipes.
Sandwich Launches Theater for Vision Pro (and Will Livestream The Talk Show Tomorrow)
Zac Hall, 9to5Mac:
Earlier this year, Sandwich Vision introduced its first-ever app
with the debut of Television. The app lets you watch
content on a range of virtual TV sets that you can pin in your
real-world environment through Vision Pro.
Television supports viewing your own video files as well as
content from YouTube. You can even watch Television with friends
synchronously over spatial FaceTime on Apple Vision Pro.
Sometimes, though, you just want to enjoy a film in a proper movie
theater setting. What if you could do that for every movie? Enter
Theater: the new Apple Vision Pro app that transports you
to the perfect venue for movies.
Theater will let you experience the theatrical cinema release
feeling (even if the original Star Wars film isn’t showing at
your local movie chain). Want to watch a movie at the same time
with friends or family who can’t be together in person? Spatial
FaceTime makes that possible in Theater.
You know the immersive theater environments in Apple’s own TV app and Disney’s VisionOS app? Theater is like that, but for any video. It’s like watching YouTube on a 100-foot screen from the best seat in a cinema. I’ve been testing it, and it’s so great. I love it. And:
Sandwich is collaborating with the duo at SpatialGen, Michael Butterfield and Zachary Handshoe. See their expertise on display as they produce the first-ever stereoscopic livestream of The Talk Show Live.
The studio is also collaborating with SpatialGen to livestream
John Gruber’s The Talk Show Live in stereoscopically-captured 3D
video using high-end cameras and lenses. […]
“I started to think ‘what if John’s audience that can’t be at the
California Theater could join us anyway?’ That’s when I pitched
the idea to my co-developer, the genius Andy Roth,” Adam
[Lisagor] says. “He loved it, he found SpatialGen, and I pitched
them the idea. And we had roughly 8 weeks to make this happen, and
I can’t believe it all came together.”
Live-streaming an event and making it look good in realtime is
hard enough. But doing it in 3D video? That’s new territory,
especially considering Apple Vision Pro was just previewed at last
year’s WWDC and launched in the United States in February.
“Gruber was fascinated by the idea but a little skeptical it
could work — it just seemed too ambitious,” Lisagor adds. “The
world’s first livestreamed 3D video event? In an immersive
theater environment? Admittedly seems like a pipe dream. But
nope, it’s real.”
To be clear, the exclusive way to watch the livestream will be through Theater on Vision Pro. Murphy’s Law willing, it should be pretty cool.
Kolide
My thanks to Kolide for sponsoring DF last week. Kolide’s Shadow IT report found that 47% of companies let unmanaged devices access their resources, and authenticate via credentials alone.
Even with phishing-resistant MFA, it’s frighteningly easy for bad actors to impersonate end users — in the case of the MGM hack, all it took was a call to the help desk. What could have prevented that attack (and so many others) was an un-spoofable form of authentication for the device itself.
That’s what you get with Kolide’s device trust solution: a chance to verify that a device is both known and secure before it authenticates. Kolide’s agent looks at hundreds of device properties; their competitors look at only a handful. What’s more, Kolide’s user-first, privacy-respecting approach means you can put it on machines outside MDM: contractor devices, mobile phones, and even Linux machines.
Without a device trust solution, all the security in the world is just security theater. But Kolide can help close the gaps.
MKBHD Visits Apple’s iPhone Stress-Testing Lab
Fascinating stuff. I could watch the super slo-mo footage of iPhones being dropped onto various surfaces for an hour. Also, an interesting brief interview with John Ternus on the tension between making devices more durable vs. making them more easily repairable.
★ Gurman’s Epic Pre-WWDC Leak Report
More regarding Gurman’s Friday-before-WWDC report at Bloomberg. But before I start quoting, man, his report reads as though he’s gotten the notes from someone who’s already watched Monday’s keynote. I sort of think that’s what happened, given how much of this no one had reported before today. Bloomberg’s headline even boldly asserts “Here’s Everything Apple Plans to Show at Its AI-Focused WWDC Event”. I’m only aware of one feature for one platform that isn’t in his report, but it’s not a jaw-dropper so I wouldn’t be surprised if it was simply beneath his threshold for newsworthiness. Don’t follow the link to Bloomberg and don’t continue reading this post if you don’t want to see a bunch of spoilers, several of which weren’t even rumors until Gurman dropped this. It’s astonishing how much of what we supposedly know about Apple’s WWDC keynote announcements is entirely from Gurman. If he switched to a different beat we’d be almost entirely in the dark; as it stands, he’s seemingly spoiled most of it.
First, he says yes, Apple’s going to do a chatbot, powered by OpenAI:
The company’s new AI system will be called Apple Intelligence, and
it will come to new versions of the iPhone, iPad and Mac operating
systems, according to people familiar with the plans. There also
will be a partnership with OpenAI that powers a ChatGPT-like
chatbot. And the tech giant is preparing to show new software for
the Vision Pro headset, Apple Watch and TV platforms.
A question Gurman’s report doesn’t answer is where this chatbot will be. Is it going to be a new app — a dedicated AI chatbot app? What would that app be called? “Siri”? Or will it live within Spotlight, a system-level UI you dip in and out of temporarily, not an app? Spotlight works today because you more or less can only ask one thing at a time; a chat app is something with persistence, that you can Command-Tab to and from.
Or would Apple make Siri a persona you can chat with in Messages? I don’t think Apple would put it in Messages, but if they do, will we be able to include it in group chats? That seems like fun on the surface (and it is, in Wavelength) but a privacy problem on deeper thought. When I’m talking to Siri one-on-one I expect Siri to know about me. If Siri/Apple AI/whatever-it’s-going-to-be-called were in a group Messages chat it would have to be private, which would make it a different Siri/Apple AI/whatever than you get in a one-on-one context.
There are a lot of questions even if the answer is that it’s a new standalone app. Will the conversations sync between devices? If so, how does that jibe with on-device processing? If I start a chatbot conversation on my Mac can I continue it on my iPhone? How does that work if the conversation pertains to, say, files or data that’s only on my Mac? Or vice-versa, if it pertains to content in an app that’s only on my iPhone? On-device processing raises questions that don’t exist with cloud-only processing.
One feature that will likely get a lot of attention among Gen
Z — and perhaps the rest of the population — will be
AI-created emoji. This will use AI to create custom emoji
characters on the fly that represent phrases or words as
they’re being typed. That means there will be many more
options than the ones in the standardized emoji library that
has long been built into the iPhone.
This sounds like Memoji, but for anything? Will it be exclusive to Messages or something system-wide, in the emoji picker?
The Messages app is getting some non-AI tweaks, including a change
to the effects feature — the thing that lets you send fireworks
and other visual elements to the people you’re texting. Users will
now be able to trigger an effect with individual words, rather
than the entire message. There will be new colorful icons for
Tapbacks, which let you quickly respond to a message with a heart,
exclamation point or other character (they’re currently gray). And
users will have the ability to Tapback a message with any emoji.
There’s another frequently requested feature coming as well: the
ability to schedule a message to be sent later.
Not sure what the difference is between “colorful Tapbacks” and “Tapback a message with any emoji”, but this one gets a legit finally.
Safari in macOS 15, codenamed Glow, is getting some changes, but
it seems unlikely that Apple is going to unveil its own ad blocker — something that’s been reported as a possibility. Advertisers
already pushed back heavily against Apple’s App Tracking
Transparency, or ATT, in iOS 14 a few years ago, and the company
doesn’t need another privacy-related headache.
Built-in ad blocking in Safari wouldn’t be a privacy headache; blocking ads can only increase privacy. It would be an antitrust/regulatory headache. The argument from ATT opponents is that it steers advertisers toward purchasing ads in the App Store, where the ATT rules don’t apply. Apple doesn’t track what users do within apps, of course — which is the legitimate privacy issue ATT attempts to address — but as the operator of the App Store, it does know which apps a user owns and uses. So Apple can, say, recommend game C because you play games A and B, even if A, B, and C all come from different developers.
More From Gurman’s Epic Pre-WWDC Leak Report
More from Gurman’s Friday-before-WWDC report at Bloomberg. But before I start quoting, man, his report reads as though he’s gotten the notes from someone who’s already watched Monday’s keynote. I sort of think that’s what happened, given how much of this no one had reported before today. Bloomberg’s headline even boldly asserts “Here’s Everything Apple Plans to Show at Its AI-Focused WWDC Event”. Don’t follow the link and don’t continue reading this post if you don’t want to see a bunch of spoilers, several of which weren’t even rumors until Gurman dropped this. It’s astonishing how much of what we supposedly know about Apple’s WWDC keynote announcements is entirely from Gurman. If he switched to a different beat we’d be almost entirely in the dark; as it stands, he’s seemingly spoiled most of it.
First, he says yes, Apple’s going to do a chatbot, powered by OpenAI:
The company’s new AI system will be called Apple Intelligence, and
it will come to new versions of the iPhone, iPad and Mac operating
systems, according to people familiar with the plans. There also
will be a partnership with OpenAI that powers a ChatGPT-like
chatbot. And the tech giant is preparing to show new software for
the Vision Pro headset, Apple Watch and TV platforms.
A question Gurman’s report doesn’t answer is where this chatbot will be. Is it going to be a new app — a dedicated AI chatbot app? What would that app be called? “Siri”? Or will it live within Spotlight, a system-level UI you dip in and out of temporarily, not an app? Spotlight works today because you more or less can only ask one thing at a time; a chat app is something with persistence, that you can Command-Tab to and from.
Or would Apple make Siri a persona you can chat with in Messages? I don’t think Apple would put it in Messages, but if they do, will we be able to include it in group chats? That seems like fun on the surface (and it is, in Wavelength) but a privacy problem on deeper thought. When I’m talking to Siri one-on-one I expect Siri to know about me. If Siri were in a group Messages chat it would have to be private.
There are a lot of questions even if the answer is that it’s a new standalone app. Will the conversations sync between devices? If so, how does that jibe with on-device processing? If I start a chatbot conversation on my Mac can I continue it on my iPhone? How does that work if the conversation pertains to files or data that’s only on my Mac? On-device processing raises questions that don’t exist with cloud-only processing.
One feature that will likely get a lot of attention among Gen
Z — and perhaps the rest of the population — will be
AI-created emoji. This will use AI to create custom emoji
characters on the fly that represent phrases or words as
they’re being typed. That means there will be many more
options than the ones in the standardized emoji library that
has long been built into the iPhone.
Will this be like Memoji — a feature of Messages — not the OS? I’m guessing yes. So you won’t be able to send these emoji through, say, WhatsApp, Signal, or even email. It kind of makes sense. To be cross-platform it either needs to be part of the Unicode spec (which isn’t even possible for on-the-fly custom emoji) or would have to be rendered as an image attachment. And we can paste whatever images we want anywhere we want already. What makes emoji (and Memoji) special is that you don’t treat them like images, you treat them like text.
The Messages app is getting some non-AI tweaks, including a change
to the effects feature — the thing that lets you send fireworks
and other visual elements to the people you’re texting. Users will
now be able to trigger an effect with individual words, rather
than the entire message. There will be new colorful icons for
Tapbacks, which let you quickly respond to a message with a heart,
exclamation point or other character (they’re currently gray). And
users will have the ability to Tapback a message with any emoji.
There’s another frequently requested feature coming as well: the
ability to schedule a message to be sent later.
Not sure what the difference is between “colorful Tapbacks” and “Tapback a message with any emoji”, but this one gets a legit finally.
Safari in macOS 15, codenamed Glow, is getting some changes, but
it seems unlikely that Apple is going to unveil its own ad blocker — something that’s been reported as a possibility. Advertisers
already pushed back heavily against Apple’s App Tracking
Transparency, or ATT, in iOS 14 a few years ago, and the company
doesn’t need another privacy-related headache.
Built-in ad blocking in Safari wouldn’t be a privacy headache; blocking ads can only increase privacy. It would be an antitrust/regulatory headache. The argument from ATT opponents is that it steers advertisers toward purchasing ads in the App Store, where the ATT rules don’t apply. Apple doesn’t track what users do within apps, of course — which is the legitimate privacy issue ATT attempts to address — but as the operator of the App Store, it does know which apps a user owns and uses. So Apple can, say, recommend game C because you play games A and B, even if A, B, and C all come from different developers.
‘Apple Intelligence’
Daniel Jalkut, writing on his Bitsplitting blog one year ago:
Which leads me to my somewhat far-fetched prediction for WWDC:
Apple will talk about AI, but they won’t once utter the letters
“AI”. They will allude to a major new initiative, under way for
years within the company. The benefits of this project will make
it obvious that it is meant to serve as an answer to comparable
efforts being made by OpenAI, Microsoft, Google, and Facebook.
During the crescendo to announcing its name, the letters “A” and
“I” will be on all of our lips, and then they’ll drop the
proverbial mic: “We’re calling it Apple Intelligence.” Get it?
Apple often follows the herd in terms of what they focus their
efforts on, but rarely fall into line using the same tired jargon
as the rest of the industry. Apple Intelligence will allow Apple
to make it crystal clear to the entire world that they’re taking
“AI” seriously, without stooping to the level of treating it as a
commodity technology. They do this kind of thing all the time with
names like Airport, Airplay, and Airtags. These marketing terms
represent underlying technologies that Apple embraces and
extends. Giving them unique names makes them easier to sell, but
also gives Apple freedom to blur the lines on exactly what the
technology should or shouldn’t be capable of.
Was a decent prediction a year ago, but looking even better now. Mark Gurman, today:
The company’s new AI system will be called Apple Intelligence, and
it will come to new versions of the iPhone, iPad and Mac operating
systems, according to people familiar with the plans. There also
will be a partnership with OpenAI that powers a ChatGPT-like
chatbot. And the tech giant is preparing to show new software for
the Vision Pro headset, Apple Watch and TV platforms.
While we are guessing names, my prediction is they call the new Siri “Siri AI”. I don’t think they’ll abandon the Siri brand, but I think they need a name to say “This is an all-new Siri that is way better and more useful and definitely not so frustratingly dumb.” And what Apple likes to do with names is append adjectives. MacBook Pro. M3 Max. AirPort Extreme (RIP). “Siri” = old Siri; “Siri AI” = new Siri, and when you’re talking to it, you still just address it as “Siri”. That’s my guess. Otherwise I think they just stick with no-adjective “Siri” and swear up and down that it’s actually going to be good this year.
NYT: ‘What Ukraine Has Lost During Russia’s Invasion’
Marco Hernandez, Jeffrey Gettleman, Finbarr O’Reilly, and Tim Wallace, with reporting and imagery for The New York Times:
Few countries since World War II have experienced this level of
devastation. But it’s been impossible for anybody to see more than
glimpses of it. It’s too vast. Every battle, every bombing, every
missile strike, every house burned down, has left its mark across
multiple front lines, back and forth over more than two years.
This is the first comprehensive picture of where the Ukraine war
has been fought and the totality of the destruction. Using
detailed analysis of years of satellite data, we developed a
record of each town, each street, each building that has been
blown apart.
The scale is hard to comprehend. More buildings have been
destroyed in Ukraine than if every building in Manhattan were to
be leveled four times over. Parts of Ukraine hundreds of miles
apart look like Dresden or London after World War II, or Gaza
after half a year of bombardment.
To produce these estimates, The New York Times worked with two
leading remote sensing scientists, Corey Scher of the City
University of New York Graduate Center and Jamon Van Den Hoek of
Oregon State University, to analyze data from radar
satellites that can detect small changes in the built
environment.
A staggering, sobering work of journalism and data visualization.