Month: September 2024

I’m a Camera Nerd, but Something Is Strange About the iPhone 16’s Camera Control Button

Commentary: Was it supposed to be the Visual Intelligence button instead? There’s a lot of unrealized potential in Apple’s new button that goes well beyond the camera.

Read More 

Welcome to Meta’s future, where everyone wears cameras

See that little circle? That’s a camera. | Photo by Vjeran Pavic / The Verge

All around Meta’s Menlo Park campus, cameras stared at me. I’m not talking about security cameras or my fellow reporters’ DSLRs. I’m not even talking about smartphones. I mean Ray-Ban and Meta’s smart glasses, which Meta hopes we’ll all — one day, in some form — wear.

I visited Meta for this year’s Connect conference, where just about every hardware product involved cameras. They’re on the Ray-Ban Meta smart glasses that got a software update, the new Quest 3S virtual reality headset, and Meta’s prototype Orion AR glasses. Orion is what Meta calls a “time machine”: a functioning example of what full-fledged AR could look like, years before it will be consumer-ready.

But on Meta’s campus, at least, the Ray-Bans were already everywhere. It was a different kind of time machine: a glimpse into CEO Mark Zuckerberg’s future world where glasses are the new phones.

I’m conflicted about it.

Photo by Vjeran Pavic / The Verge
The Ray-Ban Meta smart glasses.

Meta really wants to put cameras on your face. The glasses, which follow 2021’s Ray-Ban Stories, are apparently making inroads on that front, as Zuckerberg told The Verge that sales are going “very well.” They aren’t full-fledged AR glasses since they have no screen to display information, though they’re becoming more powerful with AI features. But they’re perfect for what the whole Meta empire is built on: encouraging people to share their lives online.

The glasses come in a variety of classic Ray-Ban styles, but for now, it’s obvious users aren’t just wearing glasses. As I wandered the campus, I spotted the telltale signs on person after person: two prominent circle cutouts at the edges of their glasses, one for a 12MP ultrawide camera and the other for an indicator light.

This light flashes when a user is taking photos and videos, and it’s generally visible even in sunlight. In theory, that should have put my mind at ease: if the light wasn’t on, I could trust nobody was capturing footage of me tucking into some lunch before my meetings.

But as I talked with people around campus, I was always slightly on edge. I found myself keenly aware of those circles, checking to see if somebody was filming me when I wasn’t paying attention. The mere potential of a recording would distract me from conversations, inserting a low hum of background anxiety.

When I put a pair on for myself, the situation changed

Then, when I put a pair on for myself, the situation suddenly changed. As a potential target of recording, I’d been hesitant, worried I might be photographed or filmed as a byproduct of making polite eye contact. With the glasses on my own face, though, I felt that I should be recording more. There’s something really compelling about the experience of a camera right at the level of your eyes. By just pressing a button on the glasses, I could take a photo or video of anything I was seeing at exactly the angle I was seeing it. No awkward fumble of pulling out my phone and hoping the moment lasted. There might be no better way to share my reality with other people.

Meta’s smart glasses have been around for a few years now, and I’m hardly the first person — or even the first person at The Verge — to be impressed by them. But this was the first time I’d seen these glasses not as early adopter tech, but as a ubiquitous product like a phone or smartwatch. I got a hint of how this seamless recording would work at scale, and the prospect is both exciting and terrifying.

The camera phone was a revolution in its own right, and we’re still grappling with its social effects. Nearly anyone can now document police brutality or capture a fleeting funny moment, but also take creepshots and post them online or (a far lesser offense, to be clear) annoy people at concerts. What will happen when even the minimal friction of pulling a phone out drops away, and billions of people can immediately snap a picture of anything they see?

Personally, I can see how incredibly useful this would be to capture candid photos of my new baby, who is already starting to recognize when a phone is taking a picture of her. But it’s not hard to imagine far more malicious uses. Sure, you might think that we all got used to everyone pointing their phone cameras at everything, but I’m not exactly sure that’s a good thing; I don’t like that there’s a possibility I end up in somebody’s TikTok just because I stepped outside the house. (The rise of sophisticated facial recognition makes the risks even greater.) With ubiquitous glasses-equipped cameras, I feel like there’s an even greater possibility that my face shows up somewhere on the internet without my permission.

There are also clear risks to integrating cameras into what is, for many people, a nonnegotiable vision aid. If you already wear glasses and switch to prescription smart glasses, you’ll either have to carry a low-tech backup or accept that they’ll stay on in some potentially very awkward places, like a public bathroom. The current Ray-Ban Meta glasses are largely sunglasses, so they’re probably not most people’s primary set. But you can get them with clear and transition lenses, and I bet Meta would like to market them more as everyday specs.

Of course, there’s no guarantee most people will buy them. The Ray-Ban Meta glasses are pretty good gadgets now, but I was at Meta’s campus meeting Meta employees to preview Meta hardware for a Meta event. It’s not surprising Meta’s latest hardware was commonplace, and it doesn’t necessarily tell us much about what people outside that world want.

Camera glasses have been just over the horizon for years now. Remember how magical I said it is to take pictures of what’s right in front of your eyes? My former colleague Sean O’Kane relayed almost the exact same experience with Snap Spectacles back in 2016.

But Meta is the first company to make a credible play for mainstream acceptance. The glasses are a lot of fun — and that’s what scares me a little.

Read More 

Why Index Ventures is bulking up its investment team in NYC

While online discourse would make it seem that venture has retreated to the Bay Area, with San Francisco being the most important place to build a startup, Index Ventures is looking to bulk up its New York-based investing team. The firm is currently looking to hire another New York-based investor with plans to add three […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Read More 

The price of ChatGPT Plus could more than double in the next five years

The current subscription price of $20 a month is tipped to climb as high as $44 by 2029.

If you want the smartest AI and the most features from ChatGPT, you can currently pay $20 (about £15 / AU$29) a month to OpenAI for ChatGPT Plus – but that subscription fee could more than double over the next five years.

That’s according to a report from the New York Times, apparently based on internal company documents from OpenAI. The first price bump, an extra $2 a month, is tipped to be coming before the end of 2024.

After that, we can expect OpenAI to “aggressively raise” the cost of ChatGPT Plus to $44 (about £33 / AU$64) a month by 2029. There are currently around 10 million people paying for ChatGPT Plus, the NYT report says.

We’re now well used to digital subscription services bumping up their prices every so often, and it seems ChatGPT Plus is going to be no different in that regard – though there may well be new features and new AI models alongside the price increases.

‘Burning through piles of money’

Advanced Voice is a new ChatGPT feature (Image credit: OpenAI)

The main reason for the projected price hike for ChatGPT Plus is clear: powering the AI service costs a lot. The new report says OpenAI is “burning through piles of money”, and is on course to lose $5 billion over the course of 2024.

While monthly revenue is now up to $300 million, and annual sales are expected to be $3.7 billion this year (and $11.6 billion next year), OpenAI remains very firmly in the red, according to the documents the NYT obtained.

Behind the scenes, OpenAI is looking to raise around $7 billion in outside funding, which would value the company at a cool $150 billion – around the level of a Goldman Sachs, an Uber, or an AT&T. There continues to be internal turbulence, though, with three executives quitting the company in the last week.

None of this necessarily affects end users though – at least not until the price rises start. In recent days, OpenAI has pushed out improvements to the GPT-4o mini model, and rolled out the chatbot’s Advanced Voice mode to more users.

You might also like

OpenAI CEO Sam Altman predicts AI superintelligence
7 new (and improved) things you can do with ChatGPT-4o
OpenAI says ChatGPT messaging first was a bug

Read More 

Opinion: How to design a US data privacy law

Op-ed: Why you should care about the GDPR, and how the US could develop a better version.

(credit: akinbostanci/Getty Images)

Nick Dedeke is an associate teaching professor at Northeastern University, Boston. His research interests include digital transformation strategies, ethics, and privacy. His research has been published in IEEE Engineering Management Review, IEEE Spectrum, and the Journal of Business Ethics. He holds a PhD in Industrial Engineering from the University of Kaiserslautern-Landau, Germany.

The opinions in this piece do not necessarily reflect the views of Ars Technica.

In an earlier article, I discussed a few of the flaws in Europe’s flagship data privacy law, the General Data Protection Regulation (GDPR). Building on that critique, I would now like to go further, proposing specifications for developing a robust privacy protection regime in the US.

Writers must overcome several hurdles to have a chance at persuading readers about possible flaws in the GDPR. First, some readers are skeptical of any piece criticizing the GDPR because they believe the law is still too young to evaluate. Second, some suspect that authors criticizing the GDPR might be covert supporters of Big Tech’s anti-GDPR agenda. (I can assure readers that I have never worked, and do not now work, to support any agenda of Big Tech companies.)

Read More 
