Dell has added an AMD-powered server that is specifically made to support large language models (LLMs) to its portfolio of high-performance computing solutions for AI workloads. With this new addition, customers will have even more options when it comes to AI infrastructure, complementing Dell’s current Nvidia-powered options.
The newest server, the Dell PowerEdge XE9680, includes eight AMD Instinct MI300X accelerators. Together they deliver more than 21 petaFLOPS of performance and 1.5TB of high-bandwidth memory (HBM3), making the system especially well suited to companies that want to train and run their own internal LLMs.
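As a rough sanity check on those aggregate figures, the per-accelerator numbers multiply out as sketched below. The per-device values (192GB of HBM3 and roughly 2.6 petaFLOPS of peak low-precision throughput per MI300X) are assumptions for illustration and are not stated in the article:

```python
# Back-of-the-envelope aggregation for an 8-accelerator PowerEdge XE9680.
# Per-accelerator figures are assumed for illustration, not taken from the article.
NUM_ACCELERATORS = 8
HBM3_PER_ACCELERATOR_GB = 192      # assumed MI300X HBM3 capacity
PEAK_PFLOPS_PER_ACCELERATOR = 2.6  # assumed low-precision peak throughput

total_hbm_tb = NUM_ACCELERATORS * HBM3_PER_ACCELERATOR_GB / 1024
total_pflops = NUM_ACCELERATORS * PEAK_PFLOPS_PER_ACCELERATOR

print(f"Total HBM3: {total_hbm_tb:.1f} TB")          # 1.5 TB
print(f"Total peak: {total_pflops:.1f} petaFLOPS")   # 20.8 petaFLOPS
```

Eight accelerators at these assumed specs land at 1.5TB of HBM3 and close to the roughly 21 petaFLOPS the article quotes.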
Scalability is one of the main advantages of the new Dell offering. Customers can scale their deployed systems using AMD's Global Memory Interconnect (xGMI) standard, and a Dell PowerSwitch Z9664F-ON can connect the AMD GPUs over an Ethernet-based AI fabric. This release follows Dell's earlier unveiling of a system equipped with Nvidia H100 GPUs.
A key component of the company's strategy is a new standard, Dell Validated Design for Generative AI with AMD, which helps enterprises run their own LLM hardware and networking architecture. The standard emphasizes AMD ROCm-powered AI frameworks, an open-source suite of development toolkits, drivers, and APIs compatible with AMD Instinct accelerators. These frameworks support popular AI tools such as PyTorch, TensorFlow, and OpenAI Triton, which run natively on the PowerEdge XE9680 equipped with AMD accelerators.
Dell has taken a different tack from Nvidia in favoring standards-based networking, as evidenced by its participation in the Ultra Ethernet Consortium (UEC). In contrast to Nvidia, AMD and Dell back open Ethernet for AI, allowing switches from various vendors to interoperate within the same system. Dell's strategy encourages companies to adopt an open approach spanning the storage, compute, and fabric components needed to power internal generative AI models.
The new hardware and services that make up Dell's latest AI initiative are expected to be available in the first half of next year. The launch is part of Dell's broader effort to offer flexible, powerful solutions for companies adopting generative AI and other demanding computing workloads.