Qualcomm (QCOM) shares soared more than 20% Monday as the company announced it is diving into the data center with the launch of its new AI200 and AI250 chips and rack-scale server offerings.
The move puts Qualcomm into direct competition with the likes of Nvidia (NVDA) and AMD (AMD), as the company seeks to stake its claim to a portion of the multibillion-dollar data center market.
Available beginning in 2026, the AI200 is both the name of Qualcomm’s individual AI accelerator and the full server rack it slots into, complete with a Qualcomm CPU. The AI250 is Qualcomm’s next-generation AI accelerator and server coming in 2027. A third chip and server are scheduled for 2028.
Qualcomm says it will follow this annual cadence moving forward.
The company says that its AI200 and AI250 chips take advantage of its custom Hexagon NPU, or neural processing unit. The company has rolled out NPUs in its Windows PC chips and is taking the teachings from those processors and scaling them up for the data center.
Qualcomm is also touting its servers’ total cost of ownership as a key benefit, thanks to their low power consumption. The chips, the company explains, are specifically designed for AI inference, or the process of running AI models. In other words, customers won’t use them to train new AI models.
Total cost of ownership has become a major metric for data center builders, as they try to contain the dizzying costs associated with constructing and running their enormous server farms.
The key difference between the AI200 and AI250, Qualcomm explained, is that the AI250 will offer 10x the memory bandwidth of the AI200.
Customers won’t necessarily have to purchase Qualcomm’s servers to access the company’s chips. According to Durga Malladi, senior vice president and GM for technology planning, edge solutions, and data centers at Qualcomm, they’ll be able to choose either individual chips, portions of the company’s server offerings, or the entire setup.
And those customers, Malladi said, could include the likes of Nvidia and AMD, making the companies both rivals and potential partners.
This isn’t Qualcomm’s first run at the data center market. In 2017, the company announced it was building the Qualcomm Centriq 2400 platform with Microsoft (MSFT), but the venture quickly fell apart due to tough competition from Intel and AMD and broader corporate issues, including a range of lawsuits eating into the company’s focus.
Qualcomm also currently offers its own AI 100 Ultra card, though that’s designed as a drop-in card for off-the-shelf servers. The AI200 and AI250 are meant to live in dedicated AI systems.