Arm launches its own CPU, with Meta as first customer

For more than 35 years, Arm Holdings has licensed its instruction sets to the world’s biggest chipmakers and collected royalties on every processor made with its designs. Now the U.K.-based company is making physical silicon of its own for the first time.
Arm CEO Rene Haas unveiled his company’s first in-house chip on Tuesday at an event in San Francisco. Arm is calling the new data center central processing unit the AGI CPU. It’s a long-anticipated move that marks a major change for the so-called Switzerland of chip firms as it enters into fresh competition with its customers.
Meta is the first to sign on, as the social media company builds out multiple gigawatts of AI data centers and plans to shell out up to $135 billion on capital expenditures this year. In February, Meta secured a huge number of chips from both Nvidia and Advanced Micro Devices.
“In today’s world, you really only have a couple of players,” Meta software engineer Paul Saab, who has worked on the Arm chip project since its start in 2023, told CNBC in an interview. “This adds yet another player to the ecosystem for us.”
Saab added that the Arm deal “allows a lot more flexibility in our software stack and in our supply chain.”
Terms of the agreement weren’t disclosed. For Arm, the deal marks a major win and a stamp of approval from one of the most valuable companies in the world.
“Let’s say they get 5% of Meta’s $115 to $135 billion capex going into the future,” said chip analyst Patrick Moorhead of Moor Insights. “That is a game changer on the top line for them.”
It’s also the latest sign that CPUs are seeing a resurgence in demand. Nvidia, which has established itself as the leader in AI graphics processing units, recently told CNBC that CPUs are “becoming the bottleneck” as agentic AI changes compute needs. Futurum Group calls it a “quiet supply crisis,” predicting the CPU market growth rate could exceed GPU growth by 2028.
While GPUs are ideal for training and running AI models because their thousands of cores can perform many operations simultaneously, CPUs have a smaller number of powerful cores running sequential general-purpose tasks. Agentic AI requires a lot of general compute power, with large amounts of data moving around across multiple agents.
At Nvidia’s annual GTC conference last week, CEO Jensen Huang unveiled an entire rack filled only with Vera CPUs. At the Arm event on Tuesday, Huang appeared in a recorded statement congratulating Arm on its new CPU.
Top leaders from Google, Amazon, Microsoft, Oracle, Broadcom, Micron, Samsung, SK Hynix and Marvell also appeared in the video. Arm told CNBC about 50 partners signaled support ahead of the launch.
“It’s a $1 trillion market, and what we’re seeing over and over again is actually our partners coming out and understanding and realizing this is actually great for the industry,” Mohamed Awad, Arm’s cloud AI head, told CNBC in an interview.
CNBC got an exclusive first look at Arm’s new chip lab where it’s preparing the new CPU for full production later this year.
Arm’s head of cloud AI Mohamed Awad gave CNBC’s Katie Tarasov a tour of the chip lab in Austin, Texas, where the company created its first in-house chip, the AGI CPU, on Friday, March 6, 2026.
Erin Black | CNBC
‘Create the chip that you want’
Arm spent $71 million and about 18 months building three new lab rooms at its campus in Austin, Texas, where a once-tiny team has grown to over 1,000 people. Inside, engineers “bring up” the chips by putting them through multiple rounds of testing once they come off the factory line.
Like nearly all fabless AI chipmakers, Arm has its CPU manufactured at Taiwan Semiconductor Manufacturing Company‘s fabrication plants. Built on TSMC’s 3-nanometer node, the chip is made entirely in Taiwan for now. TSMC has a 3nm fab coming to Arizona soon, and Awad said Arm would “love to manufacture here. It really comes down to what our customers are ultimately looking for.”
Arm is best known as the leading architecture for mobile chips in almost every smartphone. It got into data center chips in 2018 with the launch of its Neoverse platform. Amazon took Neoverse mainstream in its first custom processor, Graviton, and Google and Microsoft now also base their AI chips on Arm.
“If Arm didn’t exist, then all of those companies who have their own processors wouldn’t be able to create their own,” Moorhead said.
Still, most server chips are built on traditional x86 architecture used by Intel and AMD. Moorhead calls x86 “tried and true” and said it can “run pretty much anything.”
The benefit of Arm architecture is it’s “super efficient” because the designs are more customizable, Moorhead said. “You can just create the chip that you want with nothing else.”
Awad told CNBC the Arm team “ruthlessly optimized” its new AGI CPU for artificial general intelligence — hence the name. Up to 64 of the new CPUs, a total of about 8,700 cores, can fit in a single air-cooled rack. It’s a dense configuration Arm is betting will appeal to power-constrained data center customers around the world.
“You can get two times the performance-per-watt than you can from an x86 rack,” Awad said. “That means twice as much performance in the same footprint, in the same power.”
Meta’s Saab said wattage is “a very scarce resource.”
“If you have a best in class CPU that’s giving you the best performance-per-watt that you possibly can, that opens up more wattage for other parts of your infrastructure,” he said.
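The density and efficiency figures above can be sanity-checked with some back-of-envelope arithmetic. Note that the per-CPU core count below is inferred from the rack totals quoted in this article, not stated by Arm:

```python
# Figures as quoted above: up to 64 AGI CPUs per air-cooled rack,
# totaling about 8,700 cores.
cpus_per_rack = 64
cores_per_rack = 8700

# Inferred (not officially stated): roughly 136 cores per CPU.
cores_per_cpu = round(cores_per_rack / cpus_per_rack)
print(f"~{cores_per_cpu} cores per CPU")

# Awad's claim: 2x performance-per-watt versus an x86 rack,
# i.e., the same rack power budget delivers twice the throughput.
x86_perf_per_rack = 1.0               # normalized x86 baseline
arm_perf_per_rack = 2.0 * x86_perf_per_rack
print(f"Relative performance at equal power: {arm_perf_per_rack:.1f}x")
```

At roughly 136 cores per chip, the design sits well above typical x86 server parts, which is consistent with the density pitch to power-constrained data center operators.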
Meta’s 5-gigawatt Hyperion data center under construction in Richland Parish, Louisiana, Jan. 9, 2026.
Courtesy of Meta
‘Available to the whole world’
Meta has a big need for efficiency as it builds out massive AI data centers across Louisiana, Ohio and Indiana. The company is also reportedly looking to lease space at the giant Stargate site in Texas, where OpenAI and Oracle scrapped plans to expand up to 10GW of capacity.
Meta’s AI spending spree comes after its Llama 4 model was not well received by developers last year.
“They got behind,” Moorhead said. “They also recognized we don’t have enough compute power to do what we need to do.”
In addition to securing processors from Nvidia and AMD, Meta unveiled four new chips in March within its own line of Meta Training and Inference Accelerators that it’s been making since 2023. Now, it’s adding CPUs from Arm to the mix.
“It was meant to basically be a full replacement, drop-in replacement, for our current compute CPUs and be transparent to our developers,” Saab said.
Saab was at Facebook in 2011, when the company launched the Open Compute Project, a consortium that now has hundreds of member companies, including Arm and Nvidia, committed to open hardware designs that help reduce data center energy consumption and costs.
“The first conversations we had with Arm were, ‘Hey, if we build this, we don’t want to keep this only within the company,'” Saab said. “We’re not like a chip company that’s trying to build sales channels to sell chips. We wanted it to be available to the whole world.”
Arm wouldn’t disclose pricing for the CPU, but Moorhead predicts it will be in the thousands of dollars.
Awad told CNBC it would be “competitively priced,” with an aim to serve as an option for companies that can’t afford to make their own in-house processor.
“You have to have 1,000 engineers, a $500 million budget to go and create it,” Moorhead said. “So there’s definitely a market need.”
Watch: Inside Arm’s $71 million chip lab where it’s making its first ever CPU