- Nvidia's H800 was launched in March 2023 and is a cut-down version of the H100
- It is also significantly slower than Nvidia's H200 and AMD's Instinct range
- These artificial constraints have forced DeepSeek's engineers to innovate
It was widely assumed that the United States would remain unchallenged as the global AI superpower, particularly after President Donald Trump’s recent announcement of Project Stargate - a $500 billion initiative to bolster AI infrastructure across the US. This week, however, saw a seismic shift with the arrival of China’s DeepSeek. Developed at a fraction of the cost of its American rivals, DeepSeek came out swinging, seemingly from nowhere, and made such an impact that it wiped $1 trillion from the market value of US tech stocks, with Nvidia the biggest casualty.
Anything developed in China tends to be shrouded in secrecy, but a technical paper published a few days before the chatbot stunned AI watchers does give us some insight into the technology that drives the Chinese equivalent of ChatGPT.
In 2022, the US blocked exports of advanced Nvidia GPUs to China to tighten control over critical AI technology, and it has since imposed further restrictions, but evidently that hasn’t stopped DeepSeek. According to the paper, the company trained its V3 model on a cluster of 2,048 Nvidia H800 GPUs - crippled versions of the H100.
Training on the cheap
The H800 launched in March 2023 to comply with US export restrictions on sales to China, and features 80GB of HBM3 memory with 2TB/s of bandwidth.
It lags behind the newer H200, which offers 141GB of HBM3e memory and 4.8TB/s of bandwidth, and AMD’s Instinct MI325X, which outpaces both with 256GB of HBM3e memory and 6TB/s of bandwidth.
Each node in the cluster DeepSeek trained on houses 8 GPUs connected by NVLink and NVSwitch for intra-node communication, while InfiniBand interconnects handle communication between nodes. The H800 has lower NVLink bandwidth than the H100, which naturally affects multi-GPU communication performance.
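To put that in perspective, here is a rough back-of-envelope sketch (not taken from DeepSeek's paper) of how an idealized ring all-reduce across the eight GPUs in a node scales with NVLink bandwidth. The ~900GB/s (H100) and ~400GB/s (H800) per-GPU figures and the 1GiB gradient bucket are illustrative assumptions:

```python
# Back-of-envelope sketch: how reduced NVLink bandwidth slows intra-node
# collectives. The ~900 GB/s (H100) and ~400 GB/s (H800) NVLink figures and
# the 1 GiB gradient bucket are illustrative assumptions, not DeepSeek's numbers.

def ring_allreduce_seconds(payload_bytes: float, n_gpus: int, bw_bytes_per_s: float) -> float:
    """Idealized ring all-reduce: each GPU sends and receives 2*(n-1)/n of the payload."""
    traffic = 2 * (n_gpus - 1) / n_gpus * payload_bytes
    return traffic / bw_bytes_per_s

GIB = 1024 ** 3
bucket = 1 * GIB          # assumed gradient bucket per sync
gpus_per_node = 8         # node size described in the DeepSeek-V3 paper

for name, bw in [("H100 (~900 GB/s NVLink)", 900e9), ("H800 (~400 GB/s NVLink)", 400e9)]:
    t = ring_allreduce_seconds(bucket, gpus_per_node, bw)
    print(f"{name}: {t * 1000:.1f} ms per 1 GiB all-reduce")
```

On these assumed numbers, halving the interconnect bandwidth roughly doubles the time each gradient sync spends on the wire - the sort of overhead DeepSeek's training pipeline had to engineer around.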
DeepSeek-V3 required a total of 2.79 million GPU-hours for pretraining and fine-tuning on 14.8 trillion tokens, using a combination of pipeline and data parallelism, memory optimizations, and innovative quantization techniques.
The Next Platform, which has done a deep dive into how DeepSeek works, says “At the cost of $2 per GPU hour – we have no idea if that is actually the prevailing price in China – then it cost a mere $5.58 million to train V3.”
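Those headline numbers are easy to sanity-check. The short sketch below (again, just an illustration) works back from the 2.79 million GPU-hours and The Next Platform's assumed $2 per GPU-hour rate:

```python
# Sanity-check of the quoted figures: 2.79M GPU-hours on a 2,048-GPU cluster
# at an assumed $2 per GPU-hour (The Next Platform's figure, not a confirmed price).

gpu_hours = 2.79e6
cluster_gpus = 2048
price_per_gpu_hour = 2.0  # USD, assumed

wall_clock_days = gpu_hours / cluster_gpus / 24
total_cost = gpu_hours * price_per_gpu_hour

print(f"Wall-clock time: ~{wall_clock_days:.0f} days")        # roughly 57 days
print(f"Estimated cost:  ~${total_cost / 1e6:.2f} million")   # ~$5.58 million
```

That works out to roughly 57 days of wall-clock time on the 2,048-GPU cluster, and the $5.58 million figure quoted above.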
You might also like
- Check out our list of the best AI tools around today
- “This is a wake-up call” - the DeepSeek disruption: 10 experts weigh in
- DeepSeek forced to pause new signups following large scale cyberattack