Tech heavyweights form industry group to develop next-generation AI chip interconnect components

Tech giants including Intel, Google, Microsoft, and Meta are forming a new industry group called the Ultra Accelerator Link (UALink) Promoter Group to steer the development of the components that link AI accelerator chips in data centers. Announced recently, the group also counts AMD, Hewlett Packard Enterprise, Broadcom, and Cisco among its members — though, notably, Arm is not yet part of it. The group’s mission is to establish a new industry standard for connecting the AI accelerator chips found in a growing number of servers, from GPUs to custom-designed silicon, to speed up the training, fine-tuning, and running of AI models.

Forrest Norrod, AMD’s General Manager of data center solutions, emphasized the need for an open standard that allows rapid innovation and is not hindered by any single company’s interests. According to Norrod, the industry requires a flexible standard that enables multiple companies to contribute value to the ecosystem. This initiative aims to facilitate the development and integration of AI technologies swiftly and collaboratively.

The first version of the proposed standard, UALink 1.0, will enable the connection of up to 1,024 AI accelerators — currently limited to GPUs — across a single computing “pod,” which the group defines as one or several racks in a server. Based on open standards like AMD’s Infinity Fabric, UALink 1.0 is designed to allow direct loads and stores between the memory attached to AI accelerators, thereby enhancing speed and reducing data transfer latency compared to existing interconnect specifications.

The UALink Promoter Group plans to establish a consortium, the UALink Consortium, by Q3 to oversee the ongoing development of the UALink specification. The first version of the UALink spec will be made available around the same time to companies that join the consortium, with an updated higher-bandwidth version, UALink 1.1, expected to be released by Q4 2024. According to Norrod, the first UALink products are anticipated to hit the market within the next few years.

Nvidia, a dominant player in the AI accelerator market with an estimated 80% to 95% market share, is conspicuously absent from the list of UALink Promoter Group members. Nvidia declined to comment on the story, but its absence is likely due to its proprietary interconnect technology for linking GPUs within data centers. Given Nvidia’s significant market influence and robust financial performance, the company might not see the strategic advantage of supporting a standard based on competing technologies.

In Nvidia’s most recent fiscal quarter (Q1 2025), its data center sales — which include sales of its AI chips — rose more than 400% year over year. That growth has put Nvidia on track to surpass Apple as the world’s second-most-valuable company. Given such a strong market position, Nvidia’s reluctance to join the UALink initiative is understandable.

Amazon Web Services (AWS) is another major player not currently involved with UALink. AWS might be adopting a “wait and see” approach as it develops its own in-house accelerator hardware solutions. With a dominant position in the cloud services market and reliance on Nvidia GPUs, AWS may not perceive immediate strategic benefits in opposing Nvidia or adopting the UALink standard.

The primary beneficiaries of UALink appear to be companies like Microsoft, Meta, and Google, which have spent heavily on Nvidia GPUs to power their cloud services and train their AI models, and which are keen to reduce their dependence on a vendor they view as overly dominant in the AI hardware ecosystem. Gartner projects that the value of AI accelerators used in servers will reach $21 billion this year and grow to $33 billion by 2028, while overall AI chip revenue is expected to hit $33.4 billion by 2025. This growth underscores the value of flexible, open standards like UALink in supporting the expanding AI market.

Google, Amazon, and Microsoft have all developed their own custom AI chips — Google’s TPUs and Axion, Amazon’s various AI chip families, and Microsoft’s Maia and Cobalt — and Meta is refining its own lineup of accelerators. Additionally, Microsoft and OpenAI reportedly plan to spend at least $100 billion on a supercomputer for AI model training that will use future versions of the Cobalt and Maia chips. Those chips will need an efficient interconnect solution, which UALink aims to provide.

In conclusion, the establishment of the UALink Promoter Group signifies a significant step towards creating an open and collaborative standard for AI accelerator interconnects. This initiative will likely drive innovation and competition in the AI hardware space, benefiting companies looking to enhance their data center capabilities and AI model performance.

