Rebellions has launched its ATOM™-Max Server, designed to power AI inference efficiently and at scale. This new server leverages advanced chiplets and high-bandwidth interconnects to deliver peta-scale performance for large AI models, including Mixture-of-Experts (MoE) architectures like Llama4 Maverick 400B and Qwen3 235B. The ATOM™-Max Server aims to significantly reduce the energy consumption associated with AI inference, addressing the growing demand for sustainable AI solutions.
The introduction of the ATOM™-Max Server marks a notable advance in AI inference hardware. By focusing on peta-scale performance and energy efficiency, particularly for MoE models, Rebellions is addressing a critical bottleneck in deploying large-scale AI. This could lower operational costs for AI deployments, enable broader adoption of advanced AI models, and set new benchmarks for sustainable AI infrastructure, strengthening the company's competitive position in the AI hardware market.
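The MoE bottleneck mentioned above comes from how these models work: each token activates only a small subset of expert subnetworks, so compute scales with the number of experts selected rather than the total parameter count, while interconnect bandwidth dominates cost because each token's activations must reach whichever experts the gating network picks, potentially on another chip. The following is a minimal top-k routing sketch in NumPy; all names, shapes, and the gating scheme are illustrative assumptions, not Rebellions' or any model's actual implementation.

```python
import numpy as np

def moe_route(token_embeddings, gate_weights, experts, top_k=2):
    """Route each token to its top-k experts and combine their outputs.

    Toy Mixture-of-Experts inference: only top_k experts run per token,
    so per-token compute scales with top_k, not with the total number
    of experts. Names and shapes here are illustrative only.
    """
    # Gate scores: (num_tokens, num_experts)
    logits = token_embeddings @ gate_weights
    # Softmax over experts to get routing probabilities
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)

    outputs = np.zeros_like(token_embeddings)
    for t, token in enumerate(token_embeddings):
        # Indices of the top_k highest-scoring experts for this token
        top = np.argsort(probs[t])[-top_k:]
        weight_sum = probs[t, top].sum()
        for e in top:
            # Each expert is a small feed-forward function; on real MoE
            # hardware, experts may live on different chips, so this is
            # the hop where interconnect bandwidth matters.
            outputs[t] += (probs[t, e] / weight_sum) * experts[e](token)
    return outputs

# Toy usage: 4 tokens, 8 experts, hidden size 16
rng = np.random.default_rng(0)
hidden, num_experts = 16, 8
tokens = rng.normal(size=(4, hidden))
gate = rng.normal(size=(hidden, num_experts))
expert_mats = [rng.normal(size=(hidden, hidden)) / np.sqrt(hidden)
               for _ in range(num_experts)]
experts = [(lambda W: (lambda x: np.maximum(x @ W, 0)))(W) for W in expert_mats]
print(moe_route(tokens, gate, experts).shape)  # (4, 16)
```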
- Rebellions launched the ATOM™-Max Server for AI inference.
- Features peta-scale performance and high energy efficiency.
- Supports large MoE models like Llama4 Maverick 400B.
- Aims to reduce energy consumption in AI deployments.

While the company is based in South Korea, the ATOM™-Max Server is designed for global AI inference needs, targeting data centers and AI research institutions worldwide. Its focus on energy efficiency is particularly relevant given increasing global concerns about the environmental impact of AI.