Meta looking to use exotic, custom silicon in its data centers for machine learning and AI – yet another indication that Nvidia’s most formidable rivals are developing their own alternatives to its hardware
Meta is advertising for engineers to build machine learning accelerators as part of its plans to move away from Nvidia.
We previously reported that Meta Platforms, the parent company of Facebook, plans to deploy its own custom-designed artificial intelligence chips, codenamed Artemis, in its data centers this year, while continuing to use Nvidia H100 GPUs alongside them, at least for the foreseeable future.
However, The Register now reports that job advertisements for ASIC engineers with expertise in architecture, design, and testing have been spotted in Bangalore, India, and Sunnyvale, California, a further sign of Meta’s intention to develop its own AI hardware.
The job descriptions suggest that Meta is seeking professionals to “help architect state-of-the-art machine learning accelerators” and to design complex SoCs and IPs for datacenter applications. Some of these roles were initially posted on LinkedIn in late December 2023 and re-posted two weeks ago, with the Sunnyvale roles offering salaries nearing $200,000.
Artificial general intelligence
While the exact nature of Meta’s project remains unspecified, it’s likely linked to the company’s previously announced Meta Training and Inference Accelerator (MTIA), set to launch later this year.
Meta’s ambitions also extend to artificial general intelligence, a venture that might necessitate specialized silicon.
With demand for AI hardware soaring and Nvidia struggling to keep up, developing its own silicon is a strategic move that spares Meta from having to compete with rivals for hardware in a super-hot market.
The Register reports that the Indian government will likely welcome Meta’s decision to advertise in Bangalore, as the nation seeks to become a significant player in the global semiconductor industry.
In addition, Microsoft is rumored to be reducing its dependence on Nvidia by developing a server networking card to optimize machine-learning workload performance. The trend is clear: Nvidia’s most formidable rivals are looking for ways to become less reliant on its massively in-demand hardware.
More from TechRadar Pro
Meta has done something that will get Nvidia and AMD very, very worried
Meta set to use own AI chips in its servers in 2024
Intel has a new rival to Nvidia’s uber-popular H100 AI GPU