This design can run large neural networks more efficiently than clusters of GPUs networked together. But manufacturing and operating such a chip is a challenge, requiring new methods of etching silicon features, a design with built-in redundancy to work around manufacturing defects, and a new water-cooling system to keep the huge chip cool.
To build a WSE-2 cluster that can run record-sized AI models, Cerebras had to solve another engineering challenge: how to move data on and off the chip efficiently. Conventional chips have their own onboard memory, but Cerebras developed an off-chip memory cartridge called MemoryX, along with software that keeps part of the neural network in that off-chip memory and streams onto the silicon only what each calculation needs. It also built a hardware-and-software system called SwarmX to connect everything together.
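Cerebras has not published the internals of MemoryX or SwarmX, but the weight-streaming idea described above can be illustrated with a minimal Python sketch. Everything here is hypothetical: `weight_store` stands in for an off-chip memory cartridge, and `stream_weights` stands in for the software that moves one layer's parameters onto the chip at a time while activations stay on-chip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical off-chip weight store (a MemoryX stand-in): all layer
# weights live here rather than in the processor's onboard memory.
weight_store = {f"layer{i}": rng.standard_normal((64, 64)) * 0.1
                for i in range(4)}

def stream_weights(layer_name):
    """Fetch one layer's weights from the off-chip store on demand."""
    return weight_store[layer_name]

def forward(x):
    """Forward pass in which only activations stay 'on chip';
    weights are streamed in per layer and discarded after use."""
    for i in range(4):
        w = stream_weights(f"layer{i}")  # stream this layer on-chip
        x = np.maximum(x @ w, 0.0)       # compute (ReLU layer), then drop w
    return x

out = forward(rng.standard_normal((1, 64)))
print(out.shape)  # (1, 64)
```

The point of the sketch is that the chip's memory footprint is bounded by one layer's weights plus the activations, so the model size is limited by the off-chip store rather than by on-chip memory.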
Demler said it is not yet clear how large a market the cluster will have, especially since some potential customers have already designed their own, more specialized chips in-house. The chip's real-world performance, in terms of speed, efficiency, and cost, also remains unclear: to date, Cerebras has not released any benchmark results.
“There is a lot of impressive engineering in the new MemoryX and SwarmX technology,” Demler said. “But, like the processor, it is highly specialized; it only makes sense for training the very largest models.”
So far, Cerebras chips have been adopted by laboratories that need supercomputing capabilities. Early customers include Argonne National Laboratory and Lawrence Livermore National Laboratory, pharmaceutical companies GlaxoSmithKline and AstraZeneca, and what Feldman calls “military intelligence” organizations.
This suggests that Cerebras chips can be used for more than powering neural networks: the computations these laboratories run involve similarly massive parallel mathematical operations. “And they are always hungry for more computing power,” Demler said, adding that the chip could conceivably become important to the future of supercomputing.
David Kanter, an analyst with Real World Technologies and executive director of MLCommons, an organization that measures the performance of different AI algorithms and hardware, said he generally believes larger AI models will appear on the market in the future. “I tend to believe in data-centric machine learning, so we need larger data sets to be able to build larger models with more parameters,” Kanter said.