Security and computing power, both in focus: with SIXUNITED ESA24V3-P + AMD behind it, you won't go wrong choosing this enterprise-grade AI server.
Release time:
2026-01-16
As large AI models are trained at scale, data centers expand capacity, and demand for intelligent computing power grows explosively, computing equipment has become the core engine driving the digital economy. With the global memory market strained by a supply-demand imbalance and soaring prices, AI servers offering high performance, high scalability, and high reliability have become an urgent necessity for businesses and institutions.

The SIXUNITED ESA24V3-P 4U high-end AI server, closely aligned with industry trends, leverages four core advantages—ultra-high computing power, innovative cooling, flexible scalability, and robust security management—to provide powerful computational support for diverse scenarios such as cloud computing, HPC, artificial intelligence, and virtualization.
A Breakthrough in Computing Power: Building a Powerful Performance Foundation for AI Workloads
Computing power is the core competitive advantage of an AI server, and the ESA24V3-P achieves a comprehensive performance leap in its hardware configuration. The product supports up to two AMD EPYC 9004/9005 series processors, covering models from Genoa and Bergamo to Turin Classic and Turin Dense, each with a TDP of up to 500W. Built on the Zen 4/Zen 5 architectures, it delivers up to 192 high-density cores, with memory speeds reaching 6400 MT/s. Its 24 DIMM slots readily meet the ultra-high memory-bandwidth demands of AI training.

In heterogeneous computing, the ESA24V3-P also delivers outstanding performance. The server supports up to 10 dual-fan GPU cards, each on a PCIe 5.0 x16 interface with a maximum TDP of 600W, well suited to high-performance graphics cards such as the RTX 6000. A flexible combination of 13 standard PCIe 5.0 slots and 1 OCP 3.0 network card slot provides ample connectivity for storage devices, network interfaces, and other peripherals. Twelve tri-mode drive bays supporting SAS/SATA/NVMe balance the needs of high-capacity mechanical hard drives and high-speed solid-state drives, allowing computing power and storage capacity to be upgraded in step. Together, this provides a stable, reliable performance foundation for compute-intensive workloads such as AI model training and big data analytics.
Innovative Thermal Management Architecture: Solving the Temperature Control Challenges of High-Density Deployment
For AI servers carrying multiple high-power processors and GPUs, cooling efficiency directly determines stability and service life. The ESA24V3-P adopts an innovative layered, isolated cooling design that addresses the thermal challenges of high-density computing at the root. Twelve hot-swappable, high-performance 6056 fans are arranged in a dual-layer vertical configuration: the upper layer is dedicated to cooling the graphics cards, while the lower layer handles the CPU area, creating independent airflow paths for the GPUs and CPUs. Under heavy load, the ESA24V3-P maintains stable temperatures so that both GPU and CPU can run at peak performance for extended periods. Data centers therefore need not worry about performance degradation or thermal failure when running large-scale AI models or high-volume inference tasks, which significantly improves the overall availability of the computing cluster.

The optimized power supply design and the advanced thermal management system complement each other, providing dual assurance for stable hardware operation. The CRPS 3+1 redundant power supply solution offers 3000W or 3200W power options, meeting the demands of high-power hardware while allowing failed units to be hot-swapped. Combined with precise temperature control, the ESA24V3-P operates reliably across an ambient range of 5°C to 35°C. Even in the high-density deployments typical of data centers, it sustains high performance over long periods, significantly reducing the risk of thermal throttling and overheating failures.
Flexible Scalability and Compatibility: Adaptable to Diverse Business Scenarios
Amid the wave of digital transformation, users in different industries and of different sizes have markedly different computing power requirements, making modular design and flexible scalability key criteria for judging how practical an AI server is. The ESA24V3-P adopts a highly modular architecture that supports two topology modes, CPU-GPU direct attach and CPU-GPU switched connection, to meet the performance needs of different applications. Direct attach delivers low-latency data transfer, significantly improving the efficiency of latency-sensitive applications such as real-time inference, while the switched mode uses load balancing to maximize CPU utilization and uplink bandwidth, comfortably handling large-scale training jobs in which many GPUs work in concert.
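As a practical illustration (not a vendor tool), the minimal Python sketch below shells out to the standard `nvidia-smi topo -m` command to print how each GPU attaches to the CPUs and to its peers, which is one way to see on a deployed system whether GPUs sit behind a PCIe switch (entries such as PIX/PXB) or hang directly off a CPU root complex (PHB/NODE). It assumes a Linux host with the NVIDIA driver installed.

```python
# Minimal sketch: print the GPU interconnect / CPU-affinity matrix on a Linux
# host with NVIDIA drivers installed. Matrix entries such as PIX/PXB indicate
# a path through a PCIe switch, while PHB/NODE indicate direct attachment to
# a CPU root complex. Nothing here is specific to the ESA24V3-P.
import shutil
import subprocess
import sys


def show_gpu_topology() -> None:
    if shutil.which("nvidia-smi") is None:
        sys.exit("nvidia-smi not found; install the NVIDIA driver first.")
    # `nvidia-smi topo -m` prints a matrix describing how each pair of GPUs,
    # and each GPU and its CPU NUMA node, are connected.
    result = subprocess.run(
        ["nvidia-smi", "topo", "-m"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)


if __name__ == "__main__":
    show_gpu_topology()
```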


Excellent hardware compatibility further expands the product’s application boundaries. The ESA24V3-P natively supports dual-fan GPU cards and, thanks to its optimized cooling and power design, works smoothly with the mainstream high-performance graphics cards on the market. Its rich PCIe configuration options and storage expansion schemes let users tailor hardware combinations to actual business needs, whether that means building HPC clusters for research institutions, deploying private clouds for enterprises, or constructing large-model AI training platforms for internet companies. The ESA24V3-P delivers customized computing power solutions that truly enable “on-demand configuration and efficient empowerment.”
Secure Intelligent Management: Building a Robust Defense Line for Enterprise-Level Applications
Enterprise-level servers need not only powerful performance but also comprehensive security protection and convenient management. The ESA24V3-P meets a professional, enterprise-grade standard in both security and operations & maintenance. A built-in TPM 2.0 security chip and chassis intrusion detection provide hardware-level assurance of data integrity and confidentiality, effectively guarding against malicious intrusion and the risk of data leakage.
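For readers who want to confirm from the operating system that such a security chip is exposed, the short Python sketch below checks for a TPM device through standard Linux kernel interfaces. The sysfs and device paths are generic kernel conventions rather than anything ESA24V3-P-specific, and the snippet is a minimal sketch, not an official verification tool.

```python
# Minimal sketch: check whether the Linux kernel exposes a TPM device.
# Paths below are standard kernel interfaces, not vendor-specific ones.
from pathlib import Path


def check_tpm() -> None:
    tpm = Path("/sys/class/tpm/tpm0")
    if not tpm.exists():
        print("No TPM device exposed by the kernel.")
        return
    # On recent kernels the major spec version (1 or 2) is reported here;
    # fall back gracefully if the attribute is absent.
    version_file = tpm / "tpm_version_major"
    version = version_file.read_text().strip() if version_file.exists() else "unknown"
    print(f"TPM found at {tpm}, spec major version: {version}")
    # /dev/tpmrm0 is the resource-managed device node used by TPM 2.0 tooling.
    print("Resource-managed node present:", Path("/dev/tpmrm0").exists())


if __name__ == "__main__":
    check_tpm()
```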

In terms of operations and maintenance management, the ESA24V3-P provides an open out-of-band management platform that fully supports a variety of mainstream management protocols, including IPMI 2.0, Redfish, and SNMP. Administrators can use the remote monitoring system to keep track of the server’s operational status in real time, enabling functions such as hardware failure alerts, remote power on/off, and configuration updates—thereby significantly reducing data center O&M costs. The 4U standard chassis design strikes a balance between deployment flexibility and space utilization, allowing seamless integration into existing data center cabinet architectures and providing enterprise-level users with a stable, secure, and easy-to-manage computing infrastructure.
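As a concrete illustration of the out-of-band interfaces listed above, the minimal Python sketch below queries a system resource over Redfish and issues a standard power-on action. The BMC address, credentials, and the use of the `requests` library are assumptions made for the example; the endpoints follow the generic DMTF Redfish schema rather than any ESA24V3-P-specific API.

```python
# Minimal sketch: read power state and health, and trigger a power-on,
# through a BMC's standard Redfish service.
import requests

BMC = "https://192.0.2.10"      # placeholder BMC address
AUTH = ("admin", "password")    # placeholder credentials
VERIFY = False                  # many BMCs ship with self-signed certificates


def first_system_path() -> str:
    """Return the @odata.id of the first member of the Systems collection."""
    collection = requests.get(f"{BMC}/redfish/v1/Systems",
                              auth=AUTH, verify=VERIFY).json()
    return collection["Members"][0]["@odata.id"]


def show_status(path: str) -> None:
    system = requests.get(f"{BMC}{path}", auth=AUTH, verify=VERIFY).json()
    print("Power state:", system.get("PowerState"))
    print("Health     :", system.get("Status", {}).get("Health"))


def power_on(path: str) -> None:
    # Standard ComputerSystem.Reset action; "On", "ForceOff", and
    # "GracefulRestart" are among the ResetType values defined by the schema.
    requests.post(f"{BMC}{path}/Actions/ComputerSystem.Reset",
                  json={"ResetType": "On"}, auth=AUTH, verify=VERIFY)


if __name__ == "__main__":
    path = first_system_path()
    show_status(path)
```

The same status and power actions are what an IPMI- or SNMP-based monitoring stack would surface; Redfish is shown here simply because its HTTP/JSON interface is the easiest to sketch.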
Embrace faster speeds.
Unleash greater potential.
Build a more stable foundation.
Select ESA24V3-P.
Let computing power accelerate the future.