Month: September 2023

Pitch Deck Teardown: Point.me’s $10M Series A deck

I like a good airline points flight just as much as the fuzzy feeling you get from knowing you’ve “beaten the system,” but does Point.me solve a “real” problem?


Flare Blockchain is now supported on Arkham Intelligence Platform

It’s exactly three months since we last covered Flare Network, a Layer 1 oracle network that gives developers access to the broadest range of decentralized data at scale. Back then, Flare announced it was teaming up with aiPX to launch […]
The post Flare Blockchain is now supported on Arkham Intelligence Platform first appeared on TechStartups.


How good is the ANC in Bowers & Wilkins’ luxe headphones? I wore them at a live gig to find out

To mark the release of the new burgundy Bowers & Wilkins PX8 headphones, I got to try them at a James Blake gig.

Bowers & Wilkins is known for its luxury high-end headphones that look as stunning as they sound – we rate its flagship over-ear offering, the Bowers & Wilkins PX8, as the best headphones for design. The one downside is that they only come in black and tan Nappa leather finishes. Not anymore. 

There’s now a brand-new color up for grabs, which B&W is calling “Royal Burgundy,” and for good reason: it’s a beautiful deep tone, complemented by new gold detailing on the single-cast aluminum headband arm, which extends to the outside of the diamond-cut earcups.

(Image credit: Bowers & Wilkins )

Aside from the new finish, the PX8’s hardware is unchanged, meaning it still has the “oodles of detail, agility and expanse on offer, all underpinned by a gloriously weighty bass” that we fell in love with in our Bowers & Wilkins PX8 review. However, the team at Bowers & Wilkins has upgraded the firmware inside the PX8 range at the same time as releasing this new color.

The acoustic tuning update is claimed to “ensure even more detail, giving an unparalleled combination of ultra-fast response plus exceptionally low distortion throughout the frequency range,” according to B&W. The company even goes so far as to say that the PX8 now offers the best sound quality its headphones have ever delivered.

Hands on with the new burgundy Bowers & Wilkins PX8  

(Image credit: Future)

I was lucky enough to try out the newly enhanced burgundy Bowers & Wilkins PX8 headphones the day they were released, but I wasn’t in the best environment for testing fine sonic detail. You see, I was at the Tate Modern, an art gallery in London, and while you might think a gallery is a great, quiet space, it wasn’t on this occasion. 

Down in the Vaults of the former Bankside Power Station, Bowers & Wilkins hosted a live gig turned art exhibition with James Blake to celebrate his new album Playing Robots into Heaven. In it, there were two video installations (you can see one of these in the video below) that each had a row of PX8 headphones in front of them. 

The first thing you notice about them is their minimalist design. They have an understated profile from afar, but as you get close there’s an undeniable premium flair, immediately noticeable in the wide earcups that house angled drivers. Even in the dark, the gold detailing of the new color reflected the lights in a way that was simply stunning.

Maybe it’s because I’m used to seeing art at the Tate, but the PX8 headphones really did look like sonic sculptures hanging beside each other. Of course, I was able to pick a pair up and try them on – unlike the giant ear/shell-shaped speaker that was in the middle of the room (or most art in general). The first thing that’s hard to miss is just how soft the Nappa leather makes them. They’re extremely comfortable and easy to adjust to fit the size of your head – the headband arm has a smooth feel as you slide it into place.

People wearing the Bowers & Wilkins PX8 headphones at the Tate Modern.   (Image credit: Future)

But what really stood out to me was the active noise cancellation. We said in our PX8 review that we thought it was pretty average, so with the new firmware installed, this event was the perfect chance to put it through its paces: behind me was the famous Bowers & Wilkins Sound System, which hardly ever gets let out. 

Built more than five years ago as a passion project among sound engineers at the company and said to cost a staggering amount – I’ve been told that the company won’t part with it or make another one even after being offered an obscene amount for it – the system is truly something unique and special.

It’s not a great picture of it, but the Bowers & Wilkins Sound System is truly something special.  (Image credit: Future)

The system is said to be capable of reaching 120dB, so I was basically standing in the middle of a concert. Switching from Transparency mode to Active Noise Cancellation, I heard the voices and music around me fade away until the bass was completely gone. While I could still hear some high frequencies leaking through, everything else was drowned out. 

The ANC was extremely effective considering the environment I was in. Let me tell you, it’s a strange experience, feeling bass vibrations without actually hearing them. But the noise cancellation was effective enough to make that happen. 

I don’t normally wear ear plugs to a concert, even though I probably should, considering the ringing in my ears afterwards – a sign of ear damage that, with prolonged exposure to high decibel levels, can lead to hearing loss. But this experiment has given me the idea that I could at least protect my hearing between sessions with a pair of the best noise cancelling headphones.
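For a rough sense of why concert-level sound is worth protecting against (this back-of-the-envelope calculation is mine, not from B&W): occupational guidelines such as NIOSH’s recommend at most 8 hours at 85 dB, with the safe exposure time halving for every 3 dB above that. A quick sketch:

```python
def safe_exposure_seconds(level_db, rel_db=85.0, rel_hours=8.0, exchange_db=3.0):
    """NIOSH-style limit: 8 hours at 85 dB, halving for every +3 dB."""
    return rel_hours * 3600 / (2 ** ((level_db - rel_db) / exchange_db))

print(round(safe_exposure_seconds(120)))  # roughly 9 seconds at 120 dB
```

At the Sound System’s claimed 120 dB, unprotected safe exposure comes out to mere seconds, which is why the idea of noise cancelling headphones as stand-in earplugs is appealing.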

You might also like

B&W’s James Bond 007 Edition wireless headphones are a view to (a) kill for
Bowers & Wilkins PX8 vs Focal Bathys: which high-end wireless over-ears are best?
I always wanted B&W 600 speakers and these four new models are surprisingly affordable


Analogue’s limited edition Pockets are delightful and frustrating

The life of a retro gamer is one fraught with delight and frustration. There’s the unique feeling of waiting years while someone develops a new game for your vintage console of choice in their spare time; the delight, when it lands, makes it all worth it. Conversely, watching someone snipe your eBay bid for a super rare game you’ve been seeking for years? That’s frustrating. No one appears to understand this yo-yo of emotions better than the team at Analogue — makers of some of the most desirable modern retro consoles around.
When I say Analogue understands this, I mean it’s perfected the art of inducing both ends of that emotional spectrum. The very existence of the company shows it understands the passion retro lovers feel about gaming history. But almost two years after the release of the (delightful) Pocket handheld, we’re still (frustrated) waiting for key accessories and consoles to reliably be in stock. Meanwhile, the company just unveiled some seriously delightful limited editions. (Good luck actually buying one — frustrating.) They really have this retro gaming thing down to a tee, and fans have noticed.
Analogue
The Pocket’s announcement sent a wave of delight through the retro gaming community. That was in October 2019, with an estimated release date of “2020.” Eagle-eyed readers will have already noticed that the company missed that broad target by almost a year. That’s a minor frustration, but one that only served to fuel the desire for what is, arguably, Analogue’s most complicated and refined product. Almost immediately, the company reopened orders along with a mild bump in price and — depending on how quick you were — a potential two-year window for it to ship.
As of this month, most of those orders have finally been fulfilled — but not without sprawling Reddit mega-threads of people comparing shipping statuses, order numbers and total days since ordering (props to the 600+ crew). The recent glow in the dark (GITD) limited edition itself caused a bit of a stir (or, in some cases, contempt) as the lucky few who were able to secure one saw it ship out immediately with no wait at all — including the one Analogue supplied for the images in this story.
Things got a bit meta when Analogue quickly unveiled another series of limited editions: this time, the saliva-inducing transparent colors that every gaming handheld deserves. People who had jumped on the GITD Pocket found themselves with buyer’s remorse; had they known the other editions were coming, they would rather have tried for one of those. Some folks are buying the limited editions simply because they want a Pocket, leaving fewer for those who actively wanted them. A delightfully frustrating situation for all involved.
Reddit / MrFixter
The Glow in the Dark Analogue Pocket looks fantastic, though (we’re sure the transparent ones will, too). And it’s another sign of Analogue’s hard-line approach to retro purism. The Pocket is a clear reference to the Game Boy Pocket, which had one little-known, hyper-rare limited edition given out at a gaming competition. You guessed it: it was glow in the dark — the only official Nintendo console ever to come in the luminous material. Cruelly, the Game Boy Pocket didn’t have a backlight, so the effect was hard to enjoy during play time.
Analogue’s version, of course, can totally be played in the dark, and doing so is positively encouraged. “Glow in the dark is amazing — when was the last time you’ve seen a proper consumer electronic fully glow in the dark?” Christopher Taber, founder and CEO of the company, told Engadget. And according to Taber, the design involved creating an entirely new material. “We spent a few months getting the color and unique starry, chalky texture. Multiple different plastics to allow that to only be shown when it’s glowing — when not glowing it has the perfect green, pure.” Taber’s enthusiasm appears to be matched by Pocket fans, as all the units sold out in under two minutes. (Though Taber didn’t specify how many were available when asked.)
Unsurprisingly, and to the chagrin of, well, everyone, plenty of GITD editions have found their way into the hands of resellers.
Now that the shipping of actual Pockets seems to be mostly caught up, I asked whether there’d be stock for the holidays, to which Taber confirmed there would be. Which just leaves those cartridge adapters, and that’s a whole other situation, one that’s changed a fair bit since launch.
Photo by James Trew / Engadget
The whole selling point of the Pocket was that it could natively play original Game Boy cartridges (including Color and Advance titles), plus Atari Lynx and Game Gear carts via an adapter. Later, TurboGrafx-16/PC Engine and Neo Geo Pocket adapters were also confirmed to be in development. At launch, the Game Gear accessory was ready to go, but there’s been a long wait for the others.
Analogue initially communicated that they should be available in Q3 this year, but Taber said they were “still on track to be shipped out by the end of the year.” (FWIW, an archived version of this page showed Q3 until at least the day before we asked for confirmation; Google has since cached a newer version.) But the real change is that the Pocket can play games from far more systems than it could at launch, including some of the ones for which there are adapters.
The Pocket doesn’t emulate games so much as it reprograms itself to “become” the system you want to play. It does this via a field-programmable gate array (FPGA) and, more specifically, “cores” that, in lay terms, mimic each system — it’s what sets the Pocket apart from most other retro handhelds, which emulate in software.
Since launch, cores have been made available for a number of consoles, including the NES, SNES, Genesis/Mega Drive, Neo Geo and TurboGrafx-16. To play games from these systems, no adapter is required, but it does mean dabbling in the murky world of ROMs. To what extent this diminishes the appetite for the adapters is unclear (the Atari Lynx and Neo Geo Pocket remain the systems with adapters but no community-created cores).
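The core idea can be loosely sketched in software (a toy analogy of my own, not Analogue’s actual openFPGA architecture; every name below is illustrative): rather than one emulator interpreting many systems, the device loads a core and behaves as that system until a different core is loaded.

```python
# Toy analogy of the FPGA "core" model: loading a core reconfigures the
# device, which then *is* that system until another core is loaded.

class Core:
    """Describes one target system's behavior (illustrative only)."""
    def __init__(self, name, step):
        self.name = name
        self.step = step  # per-cycle behavior of the mimicked hardware

def gb_step(state):   # stand-in for Game Boy logic
    state["ticks"] += 1

def nes_step(state):  # stand-in for NES logic
    state["ticks"] += 2

class Handheld:
    def __init__(self):
        self.core = None
        self.state = {"ticks": 0}

    def load_core(self, core):
        # Analogous to reconfiguring the FPGA: fresh state, new identity.
        self.core = core
        self.state = {"ticks": 0}

    def run(self, cycles):
        for _ in range(cycles):
            self.core.step(self.state)
        return self.state["ticks"]

pocket = Handheld()
pocket.load_core(Core("GB", gb_step))
print(pocket.run(3))   # 3
pocket.load_core(Core("NES", nes_step))
print(pocket.run(3))   # 6
```

The real distinction, of course, is that an FPGA reproduces the original hardware’s logic at the gate level rather than interpreting it in software, which is why core-based playback behaves more like the original console than typical emulation.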
Analogue’s Transparent Limited Edition Pockets go on sale today at 11AM Eastern. This article originally appeared on Engadget at https://www.engadget.com/analogues-limited-edition-pockets-are-delightful-and-frustrating-140012471.html?src=rss


AMD CEO Lisa Su on the AI revolution and competing with Nvidia

Photo illustration by Alex Parkin / The Verge

At this year’s Code Conference, the CEO of one of the world’s largest computer chip companies discusses competing with Nvidia’s leading GPU, AI regulation, and the global supply chain.

Today, we’re bringing you something a little different. The Code Conference was this week, and we had a great time talking live onstage with all of our guests. We’ll be sharing a lot of these conversations here in the coming days, and the first one we’re sharing is my chat with Dr. Lisa Su, the CEO of AMD.
Lisa and I spoke for half an hour, and we covered an incredible number of topics, especially about AI and the chip supply chain. These past few years have seen a global chip shortage, exacerbated by the pandemic, and now, coming out of it, there’s suddenly another big spike in demand thanks to everyone wanting to run AI models. The balance of supply and demand is overall in a pretty good place right now, Lisa told us, with the notable exception of these high-end GPUs powering all of the large AI models that everyone’s running.

The hottest GPU in the game is Nvidia’s H100 chip. But AMD is working to compete with a new chip Lisa told us about called the MI300 that should be as fast as the H100. There’s also a lot of work being done in software to make it so that developers can move easily between Nvidia and AMD. So we got into that.
You’ll also hear Lisa talk about what companies are doing to increase manufacturing capacity. The CHIPS and Science Act that recently passed is a great step toward building chip manufacturing here in the United States, but Lisa told us it takes a long time to bring up that supply. So I wanted to know how AMD is looking to diversify this supply chain and make sure it has enough capacity to meet all of this new demand.
Finally, Lisa answered questions from the amazing Code audience and talked a lot about how much AMD is using AI inside the company right now. It’s more than you think, although Lisa did say AI is not going to be designing chips all by itself anytime soon.
Okay, Dr. Lisa Su, CEO of AMD. Here we go.

This transcript has been lightly edited for length and clarity.
Lisa Su: Hello, hello. Nice to see you.
Nilay Patel: Nice to see you.
Thank you for having me.
I have a ton to talk about — 500 cards’ worth of questions. We’re going to be here all night. But let’s start with something exciting. AMD made some news today in the AI market. What’s going on?
Well, I can say, first of all, the theme of this whole conference, AI, is the theme of everything in tech these days. And when we look at all of the opportunities for computing to really advance AI, that’s really what we’re working on. So yes, today, we did have an announcement this morning from a company, a startup called Lamini, a great company that we’ve been working with, some of the top researchers in large language models.
And the key for everyone is, when I talk to CEOs, people are all asking, “I know I need to pay attention to AI. I know I need to do something. But what do I do? It’s so complicated. There are so many different factors.” And with these foundational models like Llama, which are great foundational models, many enterprises actually want to customize those models with their own data and ensure that you can do that in your private environment and for your application. And that’s what Lamini does.
They actually customize models, fine-tune models for enterprises, and they operate on AMD GPUs. And so that was a cool thing. And we spent a bit of time with them, quite a bit of time with them, really optimizing the software and the applications to make it as easy as possible to develop these enterprise, fine-tuned models.
I want to talk about that software in depth. I think it’s very interesting where we’re abstracting the different levels of software development away from the hardware. But I want to come back to that.
I want to begin broadly with the chip market. We’re exiting a period of pretty incredible constraint in chips across every process node. Where do you think we are now?
It’s interesting. I’ve been in the semiconductor business for, I don’t know, the last 30 years, and for the longest time, people didn’t really even understand what semiconductors were or where they fit in the overall supply chain and where they were necessary in applications. I think the last few years, especially with the pandemic-driven demand and everything that we’re doing with AI, people now are really focused on semiconductors.
I think there has been a tremendous cycle. One, a cycle where we needed a lot more chips than we had, and then a cycle where we had too many of some. But at the end of the day, I think the fact is semiconductors are essential to so many applications. And particularly for us, what we’re focused on are the most complex, the highest performance, the bleeding edge of semiconductors. And I would say that there’s tremendous growth in the market.
What do you think the bottleneck is now? Is it the cutting edge? Is it at the older process nodes, which is what we were hearing in the middle of the chip shortage?
I think the industry as a whole has really come together as an ecosystem to put a lot of capacity on for the purposes of ensuring that we do satisfy overall demand. So in general, I would say that the supply / demand balance is in a pretty good place, with perhaps the exception of GPUs. If you need GPUs for large language model training and inference, they’re probably tight right now. A little bit tight.
Lisa’s got some in the back if you need some.
But look, the truth is we absolutely are putting a tremendous amount of effort into getting the entire supply chain ramped up. These are some of the most complex devices in the world — hundreds of billions of transistors, lots of advanced technology. But absolutely ramping up supply overall.
The CHIPS and Science Act passed last year, a massive investment in this country in fabs. AMD is obviously the largest fabless semiconductor company in the world. Has that had a noticeable effect yet, or are we still waiting for that to come to fruition?
I do think that if you look at the CHIPS and Science Act and what it’s doing for the semiconductor industry in the United States, it’s really a fantastic thing. I have to say, hats off to Gina Raimondo and everything that the Commerce Department is doing with industry. These are long lead time things. The semiconductor ecosystem in the US needed to be built five years ago. It is expanding now, especially at the leading edge, but it’s going to take some time.
So I don’t know that we feel the effects right now. But one of the things that we always believe is the more you invest over the longer term, you’ll see those effects. So I’m excited about onshore capacity. I’m also really excited about some of the investments in our national research infrastructure because that’s also extremely important for long-term semiconductor strength and leadership.
AMD’s results speak for themselves. You’re selling a lot more chips than you were a few years ago. Where have you found that supply? Are you still relying on TSMC while you wait for these new fabs to come up?
Again, when you look at the business that we’re in, it’s pushing the bleeding edge of technology. So we’re always on the most advanced node and trying to get the next big innovation out there. And there’s a combination of both process technology, manufacturing, design, design systems. We are very happy with our partnership with TSMC. They are the best in the world with advanced and leading-edge technologies.
They’re it, right? Can you diversify away from them?
I think the key is geographical diversity, Nilay. So when you think about geographical diversity, and by the way, this is true no matter what. Nobody wants to be in the same place because there are just natural risks that happen. And that’s where the CHIPS and Science Act has actually been helpful because there are now significant numbers of manufacturing plants being built in the US. They’re actually going to start production over the next number of quarters, and we will be active in having some of our manufacturing here in the United States.
I talked to Intel CEO Pat Gelsinger when he broke ground in Ohio. They’re trying to become a foundry. He said very confidently to me, “I would love to have an AMD logo on the side of one of these fabs.” How close is he to making that a reality?
Well, I would say this. I would say that from onshore manufacturing, we are certainly looking at lots and lots of opportunities. I think Pat has a very ambitious plan, and I think that’s there. I think we always look at who are the best manufacturing partners, and what’s most important to us is someone who’s really dedicated to the bleeding edge of technology.
Is there a competitor in the market to TSMC on that front?
There’s always competition in the market. TSMC is certainly very good. Samsung is certainly making a lot of investments. You mentioned Intel. I think there are some activities in Japan as well to bring up advanced manufacturing. So there are lots of different options.
Last question on this thread, and then I do want to talk to you about AI. There has been a lot of noise recently about Huawei. They put out a seven-nanometer chip. This is either an earth-shattering geopolitical event or it’s bullshit. What do you think it is?
Let’s see. I don’t know that I would call it an earth-shattering geopolitical event. Look, I think there’s no question that technology is considered a national security importance. And from a US standpoint, I think we want to ensure that we keep that lead. Again, I think the US government has spent a lot of time on this aspect.
The way I look at these things is we are a global company. China’s an important market for us. We do sell to China more consumer-related goods versus other things, and there’s an opportunity there for us to really have a balanced approach into how we deal with some of these geopolitical matters.
Do you think that there was more supply available at TSMC because Huawei got kicked out of the game?
I think TSMC has put a tremendous amount of supply on the table. I mean, if you think about the CapEx that’s happened over the last three or four years, it’s there because we all need more chips. And when we need more chips, the investment is there. Now chips are more expensive as a result, and that’s part of the ecosystem that we’ve built out.
Let’s talk about that part of it. So you mentioned GPUs are constrained. The Nvidia H100, there’s effectively a black market for access to these chips. You have some chips, you’re coming out with some new ones. You just announced Lamini’s training fully on your chips. Have you seen opportunity to disrupt this market because Nvidia supply is so constrained?
I would take a step back, Nilay, and just talk about what’s happening in the AI market because it’s incredible what’s happening. If you think about the technology trends that we’ve seen over the last 10 or 20 years — whether you’re talking about the internet or the mobile phone revolution or how PCs have changed things — AI is 10 times, 100 times, more than that in terms of how it’s impacting everything that we do.
So if you talk about enterprise productivity, if you talk about personal productivity or society, what we can do from a productivity standpoint, it’s that big. So the fact that there’s a shortage of GPUs, I think it’s not surprising because people recognize how important the technology is. Now, we’re in such the early innings of how AI and especially generative AI is coming to market that I view this as a 10-year cycle that we’re talking about, not how many GPUs can you get in the next two to four quarters.
We are excited about our road map. I think with high-performance computing, I would call generative AI the killer app for high-performance computing. You need more and more and more. And as good as today’s large language model is, it can still get better if you continue to increase the training performance and the inference performance.
And so that’s what we do. We build the most complex chips. We do have a new one coming out. It’s called MI300 if you want the code name there, and it’s going to be fantastic. It’s targeted at large language model training as well as large language model inference. Do we see opportunity? Yes. We see significant opportunity, and it’s not just in one place. The idea that the cloud guys are the only users is not true. There’s going to be a lot of enterprise AI. A lot of startups have tremendous VC backing around AI as well. And so we see opportunity across all those spaces.
So MI300?
MI300, you got it.
Performance-wise, is this going to be competitive with the H100 or exceed the H100?
It is definitely going to be competitive for training workloads, and in the AI market, there’s no one-size-fits-all as it relates to chips. There are some that are going to be exceptional for training. There are some that are going to be exceptional for inference, and that depends on how you put it together.
What we’ve done with MI300 is we’ve built an exceptional product for inference, especially large language model inference. So when we look going forward, much of what work is done right now is companies training and deciding what their models are going to be. But going forward, we actually think inference is going to be a larger market, and that plays well into some of what we’ve designed MI300 for.
If you look at what Wall Street thinks Nvidia’s moat is, it’s CUDA, it’s the proprietary software stack, it’s the long-running relationships with developers. You have ROCm, which is a little different. Do you think that’s a moat you can overcome with better products or with a more open approach? How are you going about attacking that?
I’m not a believer in moats when the market is moving as fast as it is. When you think about moats, it’s more mature markets where people are not really wanting to change things a lot. When you look at generative AI, it’s moving at an incredible pace. The progress that we’re making in a few months might’ve taken a few years in a regular development environment. And for software in particular, our approach is an open one.
There’s actually a dichotomy. If you look at people who have developed software over the last five, seven, or eight years, they’ve tended to use, let’s call it, more hardware-specific software. It was convenient. There weren’t that many choices out there, and so that’s what people did. Going forward, what you actually find is everyone’s looking for the ability to build hardware-agnostic software because, frankly, people want choice. People want to use their older infrastructure. People want to ensure that they’re able to move from one infrastructure to another. And so they’re building on these higher levels of software, things like PyTorch, for example, which tend to provide that hardware-agnostic capability.
So I do think the next 10 years are going to be different from the last 10 as it relates to how do you develop within AI. And I think we’re seeing that across the industry and the ecosystem. And the benefit of an open approach is that there’s no one company that has all of the ideas. So the more we’re able to bring the ecosystem together, we get to take advantage of all of those really, really smart developers who want to accelerate AI learning.
PyTorch is a big deal, right? This is the framework that all these models are actually coded in. I talk to a bunch of cloud CEOs. They don’t love their dependency on Nvidia, just as nobody loves being dependent on any one vendor. Is this a place where you can go work with those cloud providers and say, “We’re going to optimize our chips for PyTorch and not CUDA,” and developers can just run on PyTorch and pick whichever is best optimized?
That’s exactly it. So if you think about what PyTorch is trying to do — and it really is trying to be that sort of hardware-agnostic layer — one of the major milestones we’ve come up with is that with PyTorch 2.0, AMD was qualified on day one. And what that means is anybody who runs CUDA on PyTorch right now, it will run on AMD out of the box because we’ve done the work there. And frankly, it’ll run on other hardware as well.
But our goal is “may the best chip win.” And the way you do that is to make the software much more seamless. And it’s PyTorch, but it’s also Jax. It’s also some of the tools that OpenAI is bringing in with Triton. There are lots of different tools and frameworks that people are bringing forward that are hardware-agnostic. There are a bunch of people who are also doing “build your own” types of things. So I do think this is the wave of the future for AI software.
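A minimal sketch of what that hardware-agnostic layer looks like in practice (this snippet is illustrative, not from the interview): ROCm builds of PyTorch expose the same `torch.cuda` namespace, backed by HIP, so the same device-selection code runs unchanged on an Nvidia GPU, an AMD GPU, or a CPU-only machine.

```python
# Illustrative sketch: vendor-agnostic PyTorch code. On an Nvidia box
# torch.cuda.is_available() reflects CUDA; on an AMD box with a ROCm
# build of PyTorch, the same call works because ROCm reuses the
# torch.cuda API surface. Falls back to CPU when no GPU is present.
import torch

def pick_device() -> torch.device:
    # Identical call on CUDA and ROCm builds; no vendor branching needed.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()
x = torch.randn(4, 8, device=device)
w = torch.randn(8, 2, device=device)
y = x @ w  # same code path regardless of vendor
print(y.shape)  # prints torch.Size([4, 2])
```

Frameworks like PyTorch sit above this, so model code written against the framework rather than a vendor SDK is what makes the “may the best chip win” approach possible.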
Are you building custom chips for any of these companies?
We have the capability of building custom chips. And the way I think about it is the time to build custom chips is actually when you get very high-volume applications going forward. So I do believe there will be custom chips over the next number of years. The other piece that’s also interesting is you need all different types of engines for AI. So we spend a lot of time talking about big GPUs because that’s what’s needed for training large language models. But you’re also going to see ASICs for some… let’s call it, more narrow applications. You’re also going to see AI in client chips. So I’m pretty excited about that as well in terms of just how broad AI will be incorporated into chips across all of the market segments.
I’ve got Kevin Scott, CTO of Microsoft, here tomorrow. So I’ll ask you this question so I can chase him down with it. If, say, Microsoft wanted to diversify Azure and put more AMD in there and be invisible to customers, is that possible right now?
Well, first of all, I love Kevin Scott. He’s a great guy, and we have a tremendous partnership with Microsoft across both the cloud as well as the Windows environment. I think you should ask him the question. But I think if you were to ask him or if you were to ask a bunch of other cloud manufacturers, they would say it’s absolutely possible. Yes, it takes work. It takes work that we each have to put in, but it’s much less work than you might have imagined because people are actually writing code at the higher-level frameworks. And we believe that this is the wave of the future for AI programming.
Let me connect this to an end-user application just for a second. We’re talking about things that are very much raising the cost curve: a lot of smart people doing a lot of work to develop for really high-end GPUs on the cutting-edge process nodes. Everything’s just getting more expensive, and you see how the consumer applications are expensive: $25 a month, $30 a seat for Microsoft Office with Copilot. When do you come down the cost curve that brings those consumer prices down?
It’s a great, great question. I do believe that the value that you get with gen AI in terms of productivity will absolutely be proven out. So yes, the cost of these infrastructures is high right now, but the productivity that you get on the other side is also exciting. We’re deploying AI internally within AMD, and it’s such a high priority because, if I can get chips out faster, that’s huge productivity.
Do you trust it? Do you have your people checking the work that AI is doing, or do you trust it?
Sure. Look, we’re all experimenting, right? We’re in the very, very early stages of building the tools and the infrastructure so that we can deploy. But the fact is it saves us time — whether we’re designing chips, testing chips, or validating chips — it saves us time, and time is money in our world.
But back to your question about when do you get to the other side of the curve. I think that’s why it’s so important to think about AI broadly and not just in the cloud. So if you think about how the ecosystem will look a few years from now, you would imagine a place where, yes, you have the cloud infrastructures training these largest foundational models, but you’re also going to have a bunch of AI at the edge. And whether it’s in your PC or it’s in your phone, you’re going to be able to do local AI. And there, it is cheaper, it is faster, and it is actually more private when you do that. And so, that’s this idea of AI everywhere and how it can really enhance the way we’re deploying.
That brings me to open source and, honestly, to the idea of how we will regulate this. So there’s a White House meeting, everyone participates, great. Everyone’s very proud of each other. You think about how you will actually enforce AI regulation. And it’s okay, you can probably tell AWS or Azure not to run certain work streams. “Don’t do these things.” And that seems fine. Can you tell AMD to not let certain things happen on the chips for somebody running an open-source model on Linux on their laptop?
I think it is something that we all take very seriously. The technology has so much upside in terms of what it can do from a productivity and a discovery standpoint, but there’s also safety in AI. And I do think that, as large companies, we have a responsibility. If you think about the two things around data privacy as well as just overall ensuring as these models are developed that they’re developed to the best of our ability without too much bias. We’re going to make mistakes. The industry as a whole will not be perfect here. But I think there is clarity around its importance and that we need to do it together and that there needs to be a public / private partnership to make it happen.
I can’t remember anyone’s name, so I’d be a horrible politician. But let’s pretend I’m a regulator. I’m going to do it. And I say, “Boy, I really don’t want these kids using any model to develop chemical weapons. And I need to figure out where to land that enforcement.” I can definitely tell Azure, “Don’t do that.” But a kid with an AMD chip in a Dell laptop running Linux, I have no mechanism of enforcement except to tell you to make the chip not do it. Would you accept that regulation?
I don’t think there’s a silver bullet. It’s not, “I can make the chip not do it.” It’s “I can make the combination of the chip and the model and have some safeguards in place.” And we’re absolutely willing to be at that table to help that happen.
You would accept that kind of regulation, that the chip will be constrained?
Yes, I would accept an opportunity for us to look at what are the safeguards that we would need to put in place.
I think this is going to be one of the most complicated… I don’t think we expect our chips to be limited in what we can do, and it feels like this is a question we have to ask and answer.
Let me say again, it’s not the chip by itself. Because in general, chips have broad capability. It’s the chips plus the software and the models. Particularly on the model side, what you do in terms of safeguards.
We could start lining up for questions. I’ve just got a couple more for you. You’re in the PS5; you’re in the Xbox. There’s a view of the world that says cloud gaming is the future of all things. That might be great for you because you’ll be in their data centers, too. But do you see that shift underway? Is that for real, or are we still doing console generations?
It’s so interesting. Gaming is everywhere. Gaming is everywhere in every form factor. There’s been this long conversation about: is this the end of console gaming? And I don’t see it. I see PC gaming strong, I see console gaming strong, and I see cloud gaming also having legs. And they all need similar types of technology, but they obviously use it in different ways.
Audience Q&A
Nilay Patel: Please introduce yourself.
Alan Lee: Hi, Lisa. Alan Lee, Analog Devices. One and a half years after the Xilinx acquisition, how do you see adaptive computing playing out in AI?
Lisa Su: First of all, it’s nice to see you, Alan. The Xilinx acquisition was one we completed about 18 months ago, and it was a fantastic acquisition. It brought a lot of high-performance IP, including adaptive computing IP. And the AI engines in particular, engines optimized for data flow architectures, are one of the things we were able to bring in as part of Xilinx. That’s actually the IP that is now going into PCs.
And so we see significant IP usage there. And together, as we go forward, I have this belief that there’s no one computer that is the right one. You actually need the right computing for the right applications. So whether it’s CPUs or GPUs or FPGAs or adaptive SoCs, you need all of those. And that’s the ecosystem that we’re bringing together.
NP: This tall gentleman over here.
Casey Newton: Hi, Casey Newton from Platformer. I wanted to return to Nilay’s question about regulation. It’s sad to say, but someday somebody might try to acquire a bunch of your GPUs for the express purpose of doing harm — training a large language model for that purpose. And so I wonder what sort of regulations, if any, you think the government should place around who gets access to large numbers of GPUs and what size training runs they’re allowed to do.
LS: That’s a good question. I don’t think we know the answer to that yet, particularly in terms of how to regulate. Our goal, again, is to follow all of the export controls that are out there, because GPUs are export controlled. There are the biggest GPUs and then the next level down. I think the key, as I said, is that it’s a combination of both chip and model development. And we’re active at those tables, talking about how to do those things. I think we want to ensure that we are very protective of the highest-performing GPUs. But also, it’s an important market where lots of people want access.
Daniel Vestergaard: Hi, I’m Daniel from DR [Danmarks Radio]. To return to something you talked about earlier because everyone here is thinking about implementing AI in their internal workflows — and it’s just so interesting to hear about your thoughts because you have access to the chips and deep machine learning knowledge. Can you specify a bit, what are you using AI internally for in the chip-making process? Because this might point us in the right direction.
LS: Thanks for the question. I think every business is looking at how to implement AI. So for us, for example, there are the engineering functions and the non-engineering: sales, marketing, data analytics, lead generation. Those are all places where AI can be very useful. On the engineering side, we look at it in terms of how can we build chips faster. So they help us with design, they help us with test generation, they help us with manufacturing diagnostics.
Back to Nilay’s question, do I trust it to build a chip with no humans involved? No, of course not. We have lots of engineers. I think copilot functions in particular are actually fairly easy to adopt. Pure generative AI, we need to check and make sure that it works. But it’s a learning process. And the key, I would say, is there’s lots of experimentation, and fast cycles of learning are important. So we actually have dedicated teams that are spending their time looking at how we bring AI into our company development processes as fast as possible.
Jay Peters: Hi, Jay Peters with The Verge. Apple seems to be making a much bigger push in how its devices, and particularly its M-series chips, are really good for AAA gaming. Are you worried about Apple on that front at all?
NP: They told me the iPhone 15 Pro is the world’s best game console. And that’s why it’s “Pro.” It’s a very confusing situation.
LS: I don’t know about that. I would say, look, as I said earlier, gaming is such an important application when you think about entertainment and what we’re doing with it. I always think about all competition. But from my standpoint, it’s how do we get… It’s not just the hardware; it’s really how do we get the gaming ecosystem. People want to be able to take their games wherever and play with their friends and on different platforms. Those are options that we have with the gaming ecosystem today. We’re going to continue to push the envelope on the highest-performing PCs and console chips. And I think we’re going to be pretty good.
NP: I have one more for you. If you listen to Decoder, you know I love asking people about decisions. Chip CEOs have to make the longest-range decisions of basically anybody I can think of. What’s the longest-term bet you’re making right now?
LS: We are definitely designing for the five-plus-year cycle. I talked to you today about MI300. We made some of those architectural decisions four or five years ago. And the thought process there was, “Hey, where’s the world going? What kind of computing do you need?” Being very ambitious in our goals and what we were trying to do. So we’re pretty excited about what we’re building for the next five years.
NP: What’s a bet you’re making right now?
LS: We’re betting on what the next big thing in AI is.
NP: Okay. Thank you, Lisa.
LS: Alright.
NP: I did my best.

Photo illustration by Alex Parkin / The Verge

At this year’s Code Conference, the CEO of one of the world’s largest computer chip companies discusses competing with Nvidia’s leading GPU, AI regulation, and the global supply chain.

Today, we’re bringing you something a little different. The Code Conference was this week, and we had a great time talking live onstage with all of our guests. We’ll be sharing a lot of these conversations here in the coming days, and the first one we’re sharing is my chat with Dr. Lisa Su, the CEO of AMD.

Lisa and I spoke for half an hour, and we covered an incredible number of topics, especially about AI and the chip supply chain. These past few years have seen a global chip shortage, exacerbated by the pandemic, and now, coming out of it, there’s suddenly another big spike in demand thanks to everyone wanting to run AI models. The balance of supply and demand is overall in a pretty good place right now, Lisa told us, with the notable exception of these high-end GPUs powering all of the large AI models that everyone’s running.

The hottest GPU in the game is Nvidia’s H100 chip. But AMD is working to compete with a new chip Lisa told us about called the MI300 that should be as fast as the H100. There’s also a lot of work being done in software to make it so that developers can move easily between Nvidia and AMD. So we got into that.

You’ll also hear Lisa talk about what companies are doing to increase manufacturing capacity. The CHIPS and Science Act that recently passed is a great step toward building chip manufacturing here in the United States, but Lisa told us it takes a long time to bring up that supply. So I wanted to know how AMD is looking to diversify this supply chain and make sure it has enough capacity to meet all of this new demand.

Finally, Lisa answered questions from the amazing Code audience and talked a lot about how much AMD is using AI inside the company right now. It’s more than you think, although Lisa did say AI is not going to be designing chips all by itself anytime soon.

Okay, Dr. Lisa Su, CEO of AMD. Here we go.

This transcript has been lightly edited for length and clarity.

Lisa Su: Hello, hello. Nice to see you.

Nilay Patel: Nice to see you.

Thank you for having me.

I have a ton to talk about — 500 cards’ worth of questions. We’re going to be here all night. But let’s start with something exciting. AMD made some news today in the AI market. What’s going on?

Well, I can say, first of all, the theme of this whole conference, AI, is the theme of everything in tech these days. And when we look at all of the opportunities for computing to really advance AI, that’s really what we’re working on. So yes, today, we did have an announcement this morning about a startup called Lamini, a great company we’ve been working with that has some of the top researchers in large language models.

And the key for everyone is, when I talk to CEOs, people are all asking, “I know I need to pay attention to AI. I know I need to do something. But what do I do? It’s so complicated. There are so many different factors.” And with these foundational models like Llama, which are great foundational models, many enterprises actually want to customize those models with their own data and ensure that you can do that in your private environment and for your application. And that’s what Lamini does.

They actually customize models, fine-tune models for enterprises, and they operate on AMD GPUs. And so that was a cool thing. And we spent quite a bit of time with them, really optimizing the software and the applications to make it as easy as possible to develop these enterprise fine-tuned models.

I want to talk about that software in depth. I think it’s very interesting where we’re abstracting the different levels of software development away from the hardware. But I want to come back to that.

I want to begin broadly with the chip market. We’re exiting a period of pretty incredible constraint in chips across every process node. Where do you think we are now?

It’s interesting. I’ve been in the semiconductor business for, I don’t know, the last 30 years, and for the longest time, people didn’t really even understand what semiconductors were or where they fit in the overall supply chain and where they were necessary in applications. I think the last few years, especially with the pandemic-driven demand and everything that we’re doing with AI, people now are really focused on semiconductors.

I think there has been a tremendous cycle. One, a cycle where we needed a lot more chips than we had, and then a cycle where we had too many of some. But at the end of the day, I think the fact is semiconductors are essential to so many applications. And particularly for us, what we’re focused on are the most complex, the highest performance, the bleeding edge of semiconductors. And I would say that there’s tremendous growth in the market.

What do you think the bottleneck is now? Is it the cutting edge? Is it at the older process nodes, which is what we were hearing in the middle of the chip shortage?

I think the industry as a whole has really come together as an ecosystem to put a lot of capacity on for the purposes of ensuring that we do satisfy overall demand. So in general, I would say that the supply / demand balance is in a pretty good place, with perhaps the exception of GPUs. If you need GPUs for large language model training and inference, they’re probably tight right now. A little bit tight.

Lisa’s got some in the back if you need some.

But look, the truth is we absolutely are putting a tremendous amount of effort into getting the entire supply chain ramped up. These are some of the most complex devices in the world — hundreds of billions of transistors, lots of advanced technology. But we are absolutely ramping up supply overall.

The CHIPS and Science Act passed last year, a massive investment in this country in fabs. AMD is obviously one of the largest fabless semiconductor companies in the world. Has that had a noticeable effect yet, or are we still waiting for that to come to fruition?

I do think that if you look at the CHIPS and Science Act and what it’s doing for the semiconductor industry in the United States, it’s really a fantastic thing. I have to say, hats off to Gina Raimondo and everything that the Commerce Department is doing with industry. These are long lead time things. The semiconductor ecosystem in the US needed to be built five years ago. It is expanding now, especially at the leading edge, but it’s going to take some time.

So I don’t know that we feel the effects right now. But one of the things that we always believe is the more you invest over the longer term, you’ll see those effects. So I’m excited about onshore capacity. I’m also really excited about some of the investments in our national research infrastructure because that’s also extremely important for long-term semiconductor strength and leadership.

AMD’s results speak for themselves. You’re selling a lot more chips than you were a few years ago. Where have you found that supply? Are you still relying on TSMC while you wait for these new fabs to come up?

Again, when you look at the business that we’re in, it’s pushing the bleeding edge of technology. So we’re always on the most advanced node and trying to get the next big innovation out there. And there’s a combination of both process technology, manufacturing, design, design systems. We are very happy with our partnership with TSMC. They are the best in the world with advanced and leading-edge technologies.

They’re it, right? Can you diversify away from them?

I think the key is geographical diversity, Nilay. And by the way, this is true no matter what: nobody wants to be in the same place because there are just natural risks that happen. And that’s where the CHIPS and Science Act has actually been helpful, because there are now significant numbers of manufacturing plants being built in the US. They’re actually going to start production over the next number of quarters, and we will be active in having some of our manufacturing here in the United States.

I talked to Intel CEO Pat Gelsinger when he broke ground in Ohio. They’re trying to become a foundry. He said very confidently to me, “I would love to have an AMD logo on the side of one of these fabs.” How close is he to making that a reality?

Well, I would say this. I would say that from onshore manufacturing, we are certainly looking at lots and lots of opportunities. I think Pat has a very ambitious plan, and I think that’s there. I think we always look at who are the best manufacturing partners, and what’s most important to us is someone who’s really dedicated to the bleeding edge of technology.

Is there a competitor in the market to TSMC on that front?

There’s always competition in the market. TSMC is certainly very good. Samsung is certainly making a lot of investments. You mentioned Intel. I think there are some activities in Japan as well to bring up advanced manufacturing. So there are lots of different options.

Last question on this thread, and then I do want to talk to you about AI. There has been a lot of noise recently about Huawei. They put out a seven-nanometer chip. This is either an earth-shattering geopolitical event or it’s bullshit. What do you think it is?

Let’s see. I don’t know that I would call it an earth-shattering geopolitical event. Look, I think there’s no question that technology is considered a matter of national security importance. And from a US standpoint, I think we want to ensure that we keep that lead. Again, I think the US government has spent a lot of time on this aspect.

The way I look at these things is we are a global company. China’s an important market for us. We do sell to China more consumer-related goods versus other things, and there’s an opportunity there for us to really have a balanced approach into how we deal with some of these geopolitical matters.

Do you think that there was more supply available at TSMC because Huawei got kicked out of the game?

I think TSMC has put a tremendous amount of supply on the table. I mean, if you think about the CapEx that’s happened over the last three or four years, it’s there because we all need more chips. And when we need more chips, the investment is there. Now chips are more expensive as a result, and that’s part of the ecosystem that we’ve built out.

Let’s talk about that part of it. So you mentioned GPUs are constrained. The Nvidia H100, there’s effectively a black market for access to these chips. You have some chips, you’re coming out with some new ones. You just announced Lamini’s training fully on your chips. Have you seen opportunity to disrupt this market because Nvidia supply is so constrained?

I would take a step back, Nilay, and just talk about what’s happening in the AI market because it’s incredible what’s happening. If you think about the technology trends that we’ve seen over the last 10 or 20 years — whether you’re talking about the internet or the mobile phone revolution or how PCs have changed things — AI is 10 times, 100 times, more than that in terms of how it’s impacting everything that we do.

So if you talk about enterprise productivity, if you talk about personal productivity or society, what we can do from a productivity standpoint, it’s that big. So the fact that there’s a shortage of GPUs, I think it’s not surprising because people recognize how important the technology is. Now, we’re in such the early innings of how AI and especially generative AI is coming to market that I view this as a 10-year cycle that we’re talking about, not how many GPUs can you get in the next two to four quarters.

We are excited about our road map. I think with high-performance computing, I would call generative AI the killer app for high-performance computing. You need more and more and more. And as good as today’s large language model is, it can still get better if you continue to increase the training performance and the inference performance.

And so that’s what we do. We build the most complex chips. We do have a new one coming out. It’s called MI300 if you want the code name there, and it’s going to be fantastic. It’s targeted at large language model training as well as large language model inference. Do we see opportunity? Yes. We see significant opportunity, and it’s not just in one place. The idea of the cloud guys are the only users, that’s not true. There’s going to be a lot of enterprise AI. A lot of startups have tremendous VC backing around AI as well. And so we see opportunity across all those spaces.

So MI300?

MI300, you got it.

Performance-wise, is this going to be competitive with the H100 or exceed the H100?

It is definitely going to be competitive from training workloads, and in the AI market, there’s no one-size-fits-all as it relates to chips. There are some that are going to be exceptional for training. There are some that are going to be exceptional for inference, and that depends on how you put it together.

What we’ve done with MI300 is we’ve built an exceptional product for inference, especially large language model inference. So when we look going forward, much of what work is done right now is companies training and deciding what their models are going to be. But going forward, we actually think inference is going to be a larger market, and that plays well into some of what we’ve designed MI300 for.

If you look at what Wall Street thinks Nvidia’s mode is, it’s CUDA, it’s the proprietary software stack, it’s the long-running relationships with developers. You have ROCm, which is a little different. Do you think that that’s a moat that you can overcome with better products or with a more open approach? How are you going about attacking that?

I’m not a believer in moats when the market is moving as fast as it is. When you think about moats, it’s more mature markets where people are not really wanting to change things a lot. When you look at generative AI, it’s moving at an incredible pace. The progress that we’re making in a few months in a regular development environment might’ve taken a few years. And software in particular, our approach is an open software approach.

There’s actually a dichotomy. If you look at people who have developed software over the last five, seven, or eight years, they’ve tended to use… let’s call it, more hardware-specific software. It was convenient. There weren’t that many choices out there, and so that’s what people did. When you look at going forward, actually what you find is everyone’s looking for the ability to build hardware-agnostic software because people want choice. Frankly, people want choice. People want to use their older infrastructure. People want to ensure that they’re able to move from one infrastructure to another infrastructure. And so they’re building on these higher levels of software. Things like PyTorch, for example, which tends to be that hardware-agnostic capability.

So I do think the next 10 years are going to be different from the last 10 as it relates to how do you develop within AI. And I think we’re seeing that across the industry and the ecosystem. And the benefit of an open approach is that there’s no one company that has all of the ideas. So the more we’re able to bring the ecosystem together, we get to take advantage of all of those really, really smart developers who want to accelerate AI learning.

PyTorch is a big deal, right? This is the language that all these models are actually coded in. I talk to a bunch of cloud CEOs. They don’t love their dependency on Nvidia as much as anybody doesn’t love being dependent on any one vendor. Is this a place where you can go work with those cloud providers and say, “We’re going to optimize our chips for PyTorch and not CUDA,” and developers can just run on PyTorch and pick whichever is best optimized?

That’s exactly it. So if you think about what PyTorch is trying to do — and it really is trying to be that sort of hardware-agnostic layer — one of the major milestones that we’ve come up with is on PyTorch 2.0, AMD was qualified on day one. And what that means is anybody who runs CUDA on PyTorch right now, it will run on AMD out of the box because we’ve done the work there. And frankly, it’ll run on other hardware as well.
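The "runs on AMD out of the box" claim rests on how PyTorch abstracts the accelerator: ROCm builds of PyTorch expose AMD GPUs through the same `torch.cuda` namespace and `"cuda"` device string that Nvidia builds use, so idiomatic device-agnostic code needs no changes. A minimal sketch (the toy linear model is hypothetical, standing in for any real workload):

```python
# Hardware-agnostic PyTorch sketch. On a ROCm build of PyTorch,
# torch.cuda.is_available() returns True and the "cuda" device string
# maps to AMD's HIP runtime, so this same script runs unchanged on
# Nvidia GPUs, AMD GPUs, or (as a fallback) the CPU.
import torch

def pick_device() -> torch.device:
    """Prefer whatever accelerator the local PyTorch build exposes."""
    if torch.cuda.is_available():  # true on both CUDA and ROCm builds
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(4, 2).to(device)      # toy model for illustration
x = torch.randn(8, 4, device=device)          # batch of 8 inputs
y = model(x)
print(y.shape)  # torch.Size([8, 2]) on any backend
```

Because the backend is resolved at runtime, the choice of vendor becomes a deployment decision rather than a code change — which is the point Su is making.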

But our goal is “may the best chip win.” And the way you do that is to make the software much more seamless. And it’s PyTorch, but it’s also Jax. It’s also some of the tools that OpenAI is bringing in with Triton. There are lots of different tools and frameworks that people are bringing forward that are hardware-agnostic. There are a bunch of people who are also doing “build your own” types of things. So I do think this is the wave of the future for AI software.

Are you building custom chips for any of these companies?

We have the capability of building custom chips. And the way I think about it is the time to build custom chips is actually when you get very high volume applications going forward. So I do believe there will be custom chips over the next number of years. The other piece that’s also interesting is you need all different types of engines for AI. So we spend a lot of time talking about big GPUs because that’s what’s needed for training large language models. But you’re also going to see ASICs for some… let’s call it, more narrow applications. You’re also going to see AI in client chips. So I’m pretty excited about that as well in terms of just how broad AI will be incorporated into chips across all of the market segments.

I’ve got Kevin Scott, CTO of Microsoft, here tomorrow. So I’ll ask you this question so I can chase him down with it. If, say, Microsoft wanted to diversify Azure and put more AMD in there and be invisible to customers, is that possible right now?

Well, first of all, I love Kevin Scott. He’s a great guy, and we have a tremendous partnership with Microsoft across both the cloud as well as the Windows environment. I think you should ask him the question. But I think if you were to ask him or if you were to ask a bunch of other cloud manufacturers, they would say it’s absolutely possible. Yes, it takes work. It takes work that we each have to put in, but it’s much less work than you might have imagined because people are actually writing code at the higher-level frameworks. And we believe that this is the wave of the future for AI programming.

Let me connect this to an end-user application just for a second. We’re talking about things that are very much raising the cost curve: a lot of smart people doing a lot of work to develop for really high-end GPUs on the cutting-edge process nodes. Everything’s just getting more expensive, and you see how the consumer applications are expensive: $25 a month, $30 a seat for Microsoft Office with Copilot. When do you come down the cost curve that brings those consumer prices down?

It’s a great, great question. I do believe that the value that you get with gen AI in terms of productivity will absolutely be proven out. So yes, the cost of these infrastructures is high right now, but the productivity that you get on the other side is also exciting. We’re deploying AI internally within AMD, and it’s such a high priority because, if I can get chips out faster, that’s huge productivity.

Do you trust it? Do you have your people checking the work that AI is doing, or do you trust it?

Sure. Look, we’re all experimenting, right? We’re in the very, very early stages of building the tools and the infrastructure so that we can deploy. But the fact is it saves us time — whether we’re designing chips, testing chips, or validating chips — it saves us time, and time is money in our world.

But back to your question about when do you get to the other side of the curve. I think that’s why it’s so important to think about AI broadly and not just in the cloud. So if you think about how the ecosystem will look a few years from now, you would imagine a place where, yes, you have the cloud infrastructures training these largest foundational models, but you’re also going to have a bunch of AI at the edge. And whether it’s in your PC or it’s in your phone, you’re going to be able to do local AI. And there, it is cheaper, it is faster, and it is actually more private when you do that. And so, that’s this idea of AI everywhere and how it can really enhance the way we’re deploying.

That brings me to open source and, honestly, to the idea of how we will regulate this. So there’s a White House meeting, everyone participates, great. Everyone’s very proud of each other. You think about how you will actually enforce AI regulation. And it’s okay, you can probably tell AWS or Azure not to run certain work streams. “Don’t do these things.” And that seems fine. Can you tell AMD to not let certain things happen on the chips for somebody running an open-source model on Linux on their laptop?

I think it is something that we all take very seriously. The technology has so much upside in terms of what it can do from a productivity and a discovery standpoint, but there’s also safety in AI. And I do think that, as large companies, we have a responsibility. If you think about the two things around data privacy as well as just overall ensuring as these models are developed that they’re developed to the best of our ability without too much bias. We’re going to make mistakes. The industry as a whole will not be perfect here. But I think there is clarity around its importance and that we need to do it together and that there needs to be a public / private partnership to make it happen.

I can’t remember anyone’s name, so I’d be a horrible politician. But let’s pretend I’m a regulator. I’m going to do it. And I say, “Boy, I really don’t want these kids using any model to develop chemical weapons. And I need to figure out where to land that enforcement.” I can definitely tell Azure, “Don’t do that.” But a kid with an AMD chip in a Dell laptop running Linux, I have no mechanism of enforcement except to tell you to make the chip not do it. Would you accept that regulation?

I don’t think there’s a silver bullet. It’s not, “I can make the chip not do it.” It’s “I can make the combination of the chip and the model and have some safeguards in place.” And we’re absolutely willing to be at that table to help that happen.

You would accept that kind of regulation, that the chip will be constrained?

Yes, I would accept an opportunity for us to look at what are the safeguards that we would need to put in place.

I think this is going to be one of the most complicated… I don’t think we expect our chips to be limited in what they can do, and it feels like this is a question we have to ask and answer.

Let me say again, it’s not the chip by itself. Because in general, chips have broad capability. It’s the chips plus the software and the models. Particularly on the model side, what you do in terms of safeguards.

We could start lining up for questions. I’ve just got a couple more for you. You’re in the PS5; you’re in the Xbox. There’s a view of the world that says cloud gaming is the future of all things. That might be great for you because you’ll be in their data centers, too. But do you see that shift underway? Is that for real, or are we still doing console generations?

It’s so interesting. Gaming is everywhere. Gaming is everywhere in every form factor. There’s been this long conversation about: is this the end of console gaming? And I don’t see it. I see PC gaming strong, I see console gaming strong, and I see cloud gaming also having legs. And they all need similar types of technology, but they obviously use it in different ways.

Audience Q&A

Nilay Patel: Please introduce yourself.

Alan Lee: Hi, Lisa. Alan Lee, Analog Devices. One and a half years after the Xilinx acquisition, how do you see adaptive computing playing out in AI?

Lisa Su: First of all, it’s nice to see you, Alan. I think, first of all, the Xilinx acquisition was an acquisition we completed about 18 months ago — fantastic acquisition. Brought a lot of high-performance IP with adaptive computing IP. And I do see that particularly on these AI engines, engines that are optimized for data flow architectures, that’s one of the things that we were able to bring in as part of Xilinx. That’s actually the IP that is now going into PCs.

And so we see significant IP usage there. And together, as we go forward, I have this belief that there’s no one computer that is the right one. You actually need the right computing for the right applications. So whether it’s CPUs or GPUs or FPGAs or adaptive SoCs, you need all of those. And that’s the ecosystem that we’re bringing together.

NP: This tall gentleman over here.

Casey Newton: Hi, Casey Newton from Platformer. I wanted to return to Nilay’s question about regulation. Someday, it’s sad to say, but somebody might try to acquire a bunch of your GPUs for the express purpose of doing harm — training a large language model for that purpose. And so I wonder what sort of regulations, if any, do you think government should place around who gets access to large numbers of GPUs and what size training runs they’re allowed to do.

LS: That’s a good question. I don’t think we know the answer to that, particularly in terms of how to regulate. Our goal is, again, within all of the export controls that are out there, because GPUs are export controlled, that we follow those regulations. There are the biggest and the next level of GPUs that are there. I think the key is, again, as I said, it’s a combination of both chip and model development that really comes about. And we’re active at those tables and talking about how to do those things. I think we want to ensure that we are very protective of the highest-performing GPUs. But also, it’s an important market where lots of people want access.

Daniel Vestergaard: Hi, I’m Daniel from DR [Danmarks Radio]. To return to something you talked about earlier because everyone here is thinking about implementing AI in their internal workflows — and it’s just so interesting to hear about your thoughts because you have access to the chips and deep machine learning knowledge. Can you specify a bit, what are you using AI internally for in the chip-making process? Because this might point us in the right direction.

LS: Thanks for the question. I think every business is looking at how to implement AI. So for us, for example, there are the engineering functions and the non-engineering: sales, marketing, data analytics, lead generation. Those are all places where AI can be very useful. On the engineering side, we look at it in terms of how can we build chips faster. So they help us with design, they help us with test generation, they help us with manufacturing diagnostics.

Back to Nilay’s question, do I trust it to build a chip with no humans involved? No, of course not. We have lots of engineers. I think copilot functions in particular are actually fairly easy to adopt. Pure generative AI, we need to check and make sure that it works. But it’s a learning process. And the key, I would say, is there’s lots of experimentation, and fast cycles of learning are important. So we actually have dedicated teams that are spending their time looking at how we bring AI into our company development processes as fast as possible.

Jay Peters: Hi, Jay Peters with The Verge. Apple seems to be making a much bigger push in how its devices, and particularly its M-series chips, are really good for AAA gaming. Are you worried about Apple on that front at all?

NP: They told me the iPhone 15 Pro is the world’s best game console. And that’s why it’s “Pro.” It’s a very confusing situation.

LS: I don’t know about that. I would say, look, as I said earlier, gaming is such an important application when you think about entertainment and what we’re doing with it. I always think about all competition. But from my standpoint, it’s how do we get… It’s not just the hardware; it’s really how do we get the gaming ecosystem. People want to be able to take their games wherever and play with their friends and on different platforms. Those are options that we have with the gaming ecosystem today. We’re going to continue to push the envelope on the highest-performing PCs and console chips. And I think we’re going to be pretty good.

NP: I have one more for you. If you listen to Decoder, you know I love asking people about decisions. Chip CEOs have to make the longest-range decisions of basically anybody I can think of. What’s the longest-term bet you’re making right now?

LS: We are definitely designing for the five-plus-year cycle. I talked to you today about MI300. We made some of those architectural decisions four or five years ago. And the thought process there was, “Hey, where’s the world going? What kind of computing do you need?” Being very ambitious in our goals and what we were trying to do. So we’re pretty excited about what we’re building for the next five years.

NP: What’s a bet you’re making right now?

LS: We’re betting on what the next big thing in AI is.

NP: Okay. Thank you, Lisa.

LS: Alright.

NP: I did my best.

Read More 

7 new movies and TV shows on Netflix, Max, Prime Video and more this weekend (September 29)

From highly anticipated superhero spin-offs to new Netflix thrillers, there’s plenty to watch this weekend.

September has proven a relatively quiet month for big-name theatrical releases, but the likes of Netflix, Max, and Prime Video continue to plug the gap with new movies and TV shows to stream this weekend.

Leading the charge is Gen V on Prime Video, which further expands the canon of Amazon’s The Boys series by taking place in the same universe. Reptile, Django, and The Wonderful Story of Henry Sugar also debut on Netflix this week, with the latter marking the first of four Wes Anderson-directed short films heading to the streamer in the coming days.

Below, you’ll find our pick of the best new movies and TV shows to stream on Netflix, Prime Video, Max, and more this weekend.

Gen V (Prime Video)

Gen V, Amazon’s second The Boys spin-off after last year’s animated series The Boys Presents: Diabolical, begins streaming on Prime Video this week.

Set before The Boys season 4 – which is now well into its post-production phase – Gen V explores the lives of hormonal (read: teenage) Supes who are competing for the top ranking at America’s only college for young adult superheroes. Jessie T Usher and Claudia Doumit are among those making cameos from the main series, but Gen V also introduces plenty of new foul-mouthed faces including Jaz Sinclair, Patrick Schwarzenegger, and Lizze Broadway.

In our Gen V review, we said that the show “brings The Hunger Games and Murder, She Wrote to The Boys universe [in a way that] works superbly,” so expect this one to make our list of the best Prime Video shows very soon.

Episodes 1 through 3 are available to stream on Prime Video. New episodes arrive weekly until the finale on November 3.

The Wonderful Story of Henry Sugar (Netflix) 

The first of four Wes Anderson-directed, Roald Dahl-inspired short films headed to Netflix this week is The Wonderful Story of Henry Sugar.

Starring Benedict Cumberbatch as Dahl’s miserly bachelor Henry Sugar, this 40-minute production follows the titular character’s quest to master an extraordinary skill in order to cheat at gambling games. Ralph Fiennes, Dev Patel, Rupert Friend, Ben Kingsley, and Richard Ayoade count among the film’s stacked and largely male supporting cast.

The second and third of Anderson’s new Dahl adaptations, The Swan and The Ratcatcher, are also now streaming on Netflix. The fourth, titled Poison, is set to premiere on September 30. Fans of The Grand Budapest Hotel and Fantastic Mr. Fox won’t want to miss these.

Now available to stream on Netflix.

Reptile (Netflix) 

Murder mystery fans, listen up: Reptile, a gritty new thriller starring Benicio Del Toro, is now streaming on Netflix.

This Grant Singer-directed movie follows a small-town New England detective (played by Del Toro) whose investigation into the brutal murder of a young real estate agent begins to dismantle the illusions in his own life. Justin Timberlake and Alicia Silverstone also star.

Unfortunately, despite its intriguing premise, Reptile has been slammed by critics, so we’ve taken the liberty of recommending three better Netflix thrillers to watch instead.

Now available to stream on Netflix.

Flora and Son (Apple TV Plus) 

Music-based movies are few and far between these days, so props to Apple for bringing us Flora and Son on Apple TV Plus this weekend.

This Ireland-set tale – from Sing Street director John Carney – follows Flora (Eve Hewson), a young mother who seeks out guitar lessons from a Los Angeles-based teacher (Joseph Gordon-Levitt) in a bid to connect with her 14-year-old son.

Critics have described Flora and Son as a “sweary, big-hearted musical comedy drama,” suggesting it could be among the best Apple TV Plus movies of 2023.

Now available to stream on Apple TV Plus.

Django (Netflix) 

Having already aired in much of the world earlier this year, Django comes to Netflix in the US this weekend.

Loosely based on the cult movie by Sergio Corbucci, this French and Italian-produced (but English-language) series tells the story of a man seeking revenge who ends up fighting for something greater. Matthias Schoenaerts, Noomi Rapace, and Nicholas Pinnock all star in Django, which critics labeled a “defiantly gruesome western” following its premiere in March 2023.

Now available to stream on Netflix.

Who Killed Jill Dando? (Netflix) 

The first of this week’s two documentary picks is Who Killed Jill Dando? on Netflix.

This feature-length production delves deep into the murder of British broadcasting legend Jill Dando who, in 1999, was killed by a single bullet on her doorstep in broad daylight. Across three parts – and in typical Netflix style – Who Killed Jill Dando? pieces together the mystery using insight from friends, family, journalists, investigators, and lawyers. Expect it to join our best Netflix documentaries list soon.

Now available to stream on Netflix.

Savior Complex (Max) 

If you’re after another unsettling documentary story this weekend, look no further than Savior Complex on Max.

This three-part docuseries examines the case of Renee Bach, a young American missionary accused of causing the deaths of vulnerable Ugandan children by dangerously treating them despite having no medical training.

Critics have described Savior Complex as “a fascinating melting pot of colonialism, charity, outrage, class, race, privilege, and naivete,” so this is bound to be an interesting watch for those who can stomach its harrowing reality. One to add to our best Max documentaries guide, for sure.

Now available to stream on Max.

Didn’t see anything you like? Take a look at our guides to new Netflix movies, new Disney Plus movies, new Prime Video movies and new HBO Max movies.

Read More 

The makers of MOVEit have patched another major security flaw

The WS_FTP Server product was found to be vulnerable in multiple ways, with two flaws being labeled as critical.

The company behind the now-famous (for all the wrong reasons) MOVEit managed file transfer software has warned its clients that a different product, WS_FTP Server, also carries a couple of high-severity flaws that can be exploited in attacks.

In an advisory, Progress said WS_FTP carried eight vulnerabilities, two of which were labeled as critical. One is tracked as CVE-2023-40044 (severity rating 10/10), while the other is tracked as CVE-2023-42657 (9.9/10). These vulnerabilities allow threat actors to run a range of malicious activities, including remote code execution.

“Attackers could also escape the context of the WS_FTP Server file structure and perform the same level of operations (delete, rename, rmdir, mkdir) on file and folder locations on the underlying operating system,” Progress said in the advisory. 

Patching the flaw

The worst part is that these flaws don’t even require user interaction. The company adds: “We have addressed the vulnerabilities above and the Progress WS_FTP team strongly recommends performing an upgrade.”

“We do recommend upgrading to the highest version, which is 8.8.2. Upgrading to a patched release, using the full installer, is the only way to remediate this issue. There will be an outage to the system while the upgrade is running.”

For those who cannot patch right away, or don’t really use the service, there is also a way to remove and disable the vulnerable WS_FTP Server Ad Hoc Transfer Module; the details can be found in Progress’s advisory.

Progress is the company behind MOVEit, a managed file transfer solution that was compromised by the Clop ransomware group, resulting in a major data theft affecting more than 2,000 organizations so far.

As for WS_FTP Server, we don’t know if the flaws were used by any hackers in the meantime, but the product was being used by “thousands” of IT teams, according to Progress.

Via BleepingComputer 

More from TechRadar Pro

Millions of newborn child registry data entries stolen by another MOVEit hack
Here’s a list of the best firewalls today
These are the best malware removal tools right now

Read More 

Sony’s cheapest OLED TV is now available – here’s what you’re getting for your money

The A75L is Sony’s entry-level OLED in its 2023 TV range and offers many of the same specs as the Sony A80L, but will we see big discounts on it this Black Friday?

Sony has released its entry-level 2023 OLED, the A75L. It’s available in the US (no official UK or Australian release has been confirmed yet) in just two sizes – 55 and 65 inches – priced at $1,599.99 and $1,999.99, respectively. According to DigitalTrends, orders are expected to ship in early October.

At that price, the Sony A75L sits roughly $500 cheaper than the Sony A80L. However, at the time of writing, Sony’s official US website has cut the A80L’s price across all four sizes (55, 65, 77 and 83 inches). The discount brings the 55-inch set down to $1,599 – the same price as the 55-inch A75L.

You’re still getting most of the same features on the A75L as you would on the A80L, a TV that sits amongst the best OLED TVs available, but it remains to be seen if this A80L price cut is permanent or not.  

Sony A75L OLED 4K TV vs A80L: how do they compare?  

The A75L supports the same HDR formats as the A80L (Dolby Vision, HDR10 and HLG) and has two HDMI 2.1 ports with 4K 120Hz gaming support and VRR. It even features the same processor as the A80L – the Sony Cognitive Processor XR – along with the same game bar system to enhance gaming sessions.

There are some differences between the two TVs, though, the main one being the speaker system. The A75L features two actuators and two bass-reflex speakers with Acoustic Surface Audio, whereas the Sony A80L, one of the best TVs for sound, features three actuators, two down-firing subwoofers and Acoustic Surface Audio+, giving it a wider soundstage and more powerful bass than the A75L. However, with that $500 saving, you could pair one of the best soundbars with the A75L to compensate for the likely difference in sound quality.

The other differences are the stand – fixed on the A75L, adjustable three-way on the A80L – and the microphone: the A75L doesn’t have a built-in mic, so there’s no hands-free Google Assistant control without the remote, whereas on the Sony A80L there is.

If you are looking for the absolute best in the Sony OLED range, you’ll be looking at the Sony A95L, which features a QD-OLED panel that gives outstanding picture quality. But you’ll be paying $2,799 for the 55-inch, which is $1,200 more than the new Sony A75L, and for people on a strict budget, this is a huge difference. 

Why Sony has made the 55-inch A75L the same price as the A80L at the time of writing – and the 65-inch A75L more expensive than its A80L counterpart – is a mystery. Hopefully, this signals that the A75L will feature in this year’s Black Friday deals, as Sony has left room for a discount. If it isn’t discounted, it should be.

You might also like

Sony’s 2023 TV lineup
LG B3: the dark horse OLED of 2023 – rival to the Sony A75L?
Sony’s new 98-inch TV

Read More 
