FOXCONN TO BUILD TAIWAN’S FASTEST AI SUPERCOMPUTER WITH NVIDIA BLACKWELL
Using NVIDIA’s GB200 NVL72 platform, the manufacturing giant’s supercomputer will transform AI research, healthcare, smart factories, robotics and smart city innovations.
NVIDIA and Foxconn are building Taiwan’s largest supercomputer, marking a milestone in the island’s AI advancement.
The project, Hon Hai Kaohsiung Super Computing Center, revealed recently at Hon Hai Tech Day, will be built around NVIDIA’s groundbreaking Blackwell architecture and feature the GB200 NVL72 platform, which includes a total of 64 racks and 4,608 Tensor Core GPUs.
With expected AI performance of over 90 exaflops, the machine would easily rank as the fastest in Taiwan.
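As a rough sanity check, the quoted totals line up: 64 racks of 72 GPUs each give 4,608 GPUs, and NVIDIA's published specification of roughly 1.44 exaflops of FP4 AI compute per GB200 NVL72 rack (a figure from NVIDIA's spec sheets, not stated in this article) puts the aggregate near the 90-exaflop mark:

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumption (not from the article): roughly 1.44 exaflops of FP4 AI
# compute per GB200 NVL72 rack, per NVIDIA's published specifications.

RACKS = 64                 # racks planned for the Hon Hai Kaohsiung Super Computing Center
GPUS_PER_RACK = 72         # Blackwell GPUs per GB200 NVL72 rack
EXAFLOPS_PER_RACK = 1.44   # approx. FP4 AI performance per rack (assumed spec)

total_gpus = RACKS * GPUS_PER_RACK
total_exaflops = RACKS * EXAFLOPS_PER_RACK

print(f"Total GPUs:       {total_gpus}")                    # 4608, matching the article
print(f"Approx. AI perf.: {total_exaflops:.0f} exaflops")   # ~92, i.e. "over 90 exaflops"
```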
Foxconn plans to use the supercomputer, once operational, to power breakthroughs in cancer research, large language model development and smart city innovations, positioning Taiwan as a global leader in AI-driven industries.
Foxconn’s “three-platform strategy” focuses on smart manufacturing, smart cities and electric vehicles. The new supercomputer will play a pivotal role in supporting Foxconn’s ongoing efforts in digital twins, robotic automation and smart urban infrastructure, bringing AI-assisted services to urban areas like Kaohsiung.
Construction has started on the new supercomputer, which will be housed in Kaohsiung, Taiwan. The first phase is expected to be operational by mid-2025, with full deployment targeted for 2026.
The project will integrate NVIDIA technologies such as the NVIDIA Omniverse and Isaac robotics platforms, applying AI and digital twins to help transform manufacturing processes.
The GB200 NVL72 is a state-of-the-art data center platform optimized for AI and accelerated computing. Each rack features 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs connected via NVIDIA's NVLink technology, delivering 130 TB/s of bandwidth.
NVIDIA NVLink Switch allows the 72-GPU system to function as a single, unified GPU. This makes it ideal for training large AI models and executing complex inference tasks in real time on trillion-parameter models.
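The article does not say which software stack Foxconn will run, but as an illustration of what "functioning as a single, unified GPU" means in practice, here is a minimal sketch of data-parallel training, assuming a standard PyTorch and NCCL stack launched with torchrun (a hypothetical setup, not confirmed by the article); NCCL routes the gradient all-reduce over NVLink/NVSwitch, so each process simply sees fast peer GPUs:

```python
# Minimal sketch of data-parallel training across the GPUs of one NVLink domain.
# Assumptions (not from the article): a PyTorch + NCCL stack, launched with
# torchrun, e.g.:  torchrun --nproc_per_node=72 train_sketch.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A stand-in model; a real large language model would go here.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(8, 4096, device=local_rank)
        loss = model(x).pow(2).mean()
        loss.backward()          # gradient all-reduce runs over NVLink via NCCL
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```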
www.nvidia.com