Microsoft and OpenAI may be hatching plans for a $100 billion supercomputer to accelerate their artificial intelligence models.
According to insiders cited by The Information, Microsoft and OpenAI are planning to build a $100 billion supercomputer called “Stargate” to massively accelerate the development of OpenAI’s AI models.
Microsoft and OpenAI executives are forging plans for a data center with a supercomputer made up of millions of specialized server processors to accelerate OpenAI’s AI development, according to three people who took part in confidential talks.
The project, code-named “Stargate,” could cost as much as $100 billion, according to one person who has spoken with OpenAI CEO Sam Altman about it and another who has seen some of Microsoft’s initial cost estimates.
Cerebras held an AI Day, and in spite of the concurrently running GTC, there wasn’t an empty seat in the house.
As we have noted, Cerebras Systems is one of the very few startups that is actually getting serious traction in training AI, at least with a handful of clients. The company just introduced the third generation of its Wafer-Scale Engine, a monster of a chip that can outperform racks of GPUs, along with a partnership with Qualcomm, the edge-AI leader, covering custom training and go-to-market collaboration. Here are a few takeaways from the AI Day event. Lots of images from Cerebras, but they tell the story quite well! We will cover the challenges this bold startup still faces in the Conclusions at the end.
The third-generation wafer-scale engine, the new WSE-3, and the system in which it runs, the CS-3, are an engineering marvel. While Cerebras likes to compare it to a single GPU chip, that's really not the point; the point is to simplify scaling. Why cut a wafer into individual chips, package each with HBM, put the packages on boards, connect them to CPUs over a fabric, and then tie them all back together with networking chips and cables? That's a lot of complexity, and it leads to a lot of programming to distribute the workload via various forms of parallelism and then reassemble the pieces into a supercomputer. Cerebras thinks it has a better idea.
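To make that contrast concrete, here is a minimal illustrative sketch (plain NumPy, with a Python list standing in for discrete accelerators; this is not Cerebras code) of the shard-and-gather choreography a conventional multi-GPU tensor-parallel layer requires, versus the single operation a large unified device can run:

```python
import numpy as np

# Toy "tensor parallelism": one layer's weight matrix is split
# column-wise across N discrete devices, so every forward pass needs
# an explicit scatter of work and a gather of partial results.
N_DEVICES = 4
x = np.random.randn(8, 512)      # activations (batch, features)
W = np.random.randn(512, 1024)   # the full weight matrix

# Shard the weights: each "device" owns a slice of the columns.
shards = np.split(W, N_DEVICES, axis=1)

# Each device computes its partial output independently...
partials = [x @ w_shard for w_shard in shards]

# ...and the results must be gathered (over real networking hardware
# in practice) before the next layer can run.
y_parallel = np.concatenate(partials, axis=1)

# On a single large device, the same layer is just one operation:
# no sharding, no collective communication, no gather.
y_single = x @ W
assert np.allclose(y_parallel, y_single)
```

The arithmetic is identical either way; what the wafer-scale approach removes is the partitioning and communication bookkeeping around it.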
Hot on the heels of two massive announcements last year, Nvidia and Cerebras showed yet again last week that the pace of computing is still accelerating.
The first CS-2-based Condor Galaxy AI supercomputers went online in late 2023, and Cerebras is already unveiling their successor, the CS-3, built on the newly launched Wafer Scale Engine 3, an update to the WSE-2 that moves to 5 nm fabrication and boasts a staggering 900,000 AI-optimized cores with sparse-compute support. The CS-3 also incorporates Qualcomm AI 100 Ultra processors to speed up inference.
Note: sparse compute is an optimization that exploits the fact that multiplying by zero always yields zero, so the hardware can skip any multiply-accumulate in which one operand is zero. With sparse data sets like neural networks, where many weights and activations are zero, this can deliver a huge speedup.
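As a rough illustration of the idea (a software analogy, not how the WSE-3 hardware actually implements sparsity), the sketch below skips every multiply whose weight operand is zero and still gets the same answer:

```python
import numpy as np

def dense_dot(weights, activations):
    # Baseline: multiply every pair, even when a weight is zero.
    return sum(w * a for w, a in zip(weights, activations))

def sparse_dot(weights, activations):
    # Sparse compute: skip every term where the weight is zero,
    # since w * a == 0 contributes nothing to the sum.
    return sum(w * a for w, a in zip(weights, activations) if w != 0.0)

rng = np.random.default_rng(0)
w = rng.standard_normal(1_000)
w[rng.random(1_000) < 0.9] = 0.0   # 90% sparsity, common after pruning
a = rng.standard_normal(1_000)

# Same result, roughly 10x fewer multiply-accumulate operations.
assert np.isclose(dense_dot(w, a), sparse_dot(w, a))
```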
Third Generation 5 nm Wafer Scale Engine (WSE-3) Powers Industry’s Most Scalable AI Supercomputers, Up To 256 exaFLOPs via 2048 Nodes.
SUNNYVALE, CALIFORNIA – March 13, 2024 – Cerebras Systems, the pioneer in accelerating generative AI, has doubled down on its existing world record for the fastest AI chip with the introduction of the Wafer Scale Engine 3. The WSE-3 delivers twice the performance of the previous record-holder, the Cerebras WSE-2, at the same power draw and for the same price. Purpose-built for training the industry’s largest AI models, the 5nm-based, 4-trillion-transistor WSE-3 powers the Cerebras CS-3 AI supercomputer, delivering 125 petaflops of peak AI performance through 900,000 AI-optimized compute cores.
Key Specs:
- 4 trillion transistors on a 5 nm process
- 900,000 AI-optimized compute cores
- 125 petaflops of peak AI performance per CS-3 system
- Same power draw and price as the WSE-2, at twice the performance
- Clusters scale to 2,048 CS-3 nodes, for up to 256 exaFLOPS
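The headline cluster figure follows directly from the per-system number; a quick sanity check of the quoted math:

```python
# Sanity check: 2,048 CS-3 nodes at 125 petaflops each
# gives the quoted cluster-level peak.
PFLOPS_PER_CS3 = 125
NODES = 2048

cluster_pflops = PFLOPS_PER_CS3 * NODES   # 256,000 petaflops
cluster_eflops = cluster_pflops / 1000    # 256 exaFLOPS
print(f"{cluster_eflops:.0f} exaFLOPS")   # -> 256 exaFLOPS
```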
Diamond is the hardest material known. However, another form of carbon has been predicted to be even harder than diamond. The challenge is how to create it on Earth.
The eight-atom body-centered cubic (BC8) crystal is a distinct carbon phase: not diamond, but very similar. BC8 is predicted to be a stronger material, exhibiting 30% greater resistance to compression than diamond, and is believed to exist in the centers of carbon-rich exoplanets. If BC8 could be recovered under ambient conditions, it could be classified as a super-diamond.
This crystalline high-pressure phase of carbon is theoretically predicted to be the most stable phase of carbon under pressures surpassing 10 million atmospheres.
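For concreteness: resistance to compression is quantified by the bulk modulus, and diamond's is roughly 443 GPa, so the predicted 30% gain and the 10-million-atmosphere stability threshold translate as follows (a back-of-the-envelope estimate layered on the article's figures, not measured values for BC8):

```python
# Back-of-the-envelope numbers behind the BC8 claims (illustrative).
DIAMOND_BULK_MODULUS_GPA = 443   # diamond's resistance to compression
bc8_estimate = DIAMOND_BULK_MODULUS_GPA * 1.30
print(f"Estimated BC8 bulk modulus: ~{bc8_estimate:.0f} GPa")  # ~576 GPa

# "10 million atmospheres" expressed in SI units:
ATM_IN_PA = 101_325
threshold_tpa = 10e6 * ATM_IN_PA / 1e12
print(f"Stability threshold: ~{threshold_tpa:.2f} TPa")        # ~1.01 TPa
```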
Understanding how a thermonuclear flame spreads across the surface of a neutron star—and what that spreading can tell us about the relationship between the neutron star’s mass and its radius—can also reveal a lot about the star’s composition.
While Elon Musk says Tesla is trying to build an AI supercomputer, his companies are spending billions of dollars on Nvidia hardware.
Utilizing high-resolution three-dimensional radiation hydrodynamics simulations and a detailed supernova physics model run on supercomputers, a research team led by Dr. Ke-Jung Chen from the Institute of Astronomy and Astrophysics, Academia Sinica (ASIAA) has revealed that the physical properties of the first galaxies are critically determined by the masses of the first stars. Their study is published in The Astrophysical Journal.
Simulations of an elusive carbon molecule that leaves diamonds in the dust for hardness may pave the way to creating it in a lab.
Known as the eight-atom body-centered cubic (BC8) phase, the configuration is expected to be up to 30 percent more resistant to compression than diamond – the hardest known stable material on Earth.
Physicists from the US and Sweden ran quantum-accurate molecular-dynamics simulations on a supercomputer to see how diamond behaved under high pressure as temperatures rose to levels that ought to make it unstable, revealing new clues about the conditions that could push the carbon atoms in diamond into the unusual structure.
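The study's quantum-accurate, machine-learned interatomic potential is far beyond a short snippet, but the molecular-dynamics loop it drives is conceptually simple. Below is a minimal classical sketch (velocity-Verlet integration with a generic Lennard-Jones potential, purely illustrative and unrelated to the actual carbon model the researchers used):

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces; a simple stand-in for the
    study's quantum-accurate machine-learned carbon potential."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r = np.linalg.norm(r_vec)
            # -dU/dr for U = 4*eps*((sigma/r)**12 - (sigma/r)**6)
            f_mag = 24 * eps * (2 * (sigma / r)**12 - (sigma / r)**6) / r
            f = f_mag * r_vec / r
            forces[i] += f
            forces[j] -= f
    return forces

# 16 atoms on a small planar lattice, then velocity-Verlet time steps:
side = np.arange(4) * 1.5
pos = np.array([[x, y, 0.75] for x in side for y in side], dtype=float)
vel = np.zeros_like(pos)
dt, mass = 0.001, 1.0

f = lj_forces(pos)
for step in range(100):
    pos += vel * dt + 0.5 * (f / mass) * dt**2   # update positions
    f_new = lj_forces(pos)
    vel += 0.5 * (f + f_new) / mass * dt         # update velocities
    f = f_new
```

Production codes replace the toy potential with one fitted to quantum calculations, add thermostats and barostats to impose the extreme temperatures and pressures of interest, and scale to billions of atoms, which is where the supercomputer comes in.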