To run an AI model, computers must constantly shift vast amounts of data between separate memory and logic chips, a process that chokes performance. To solve this, Cerebras Systems in 2019 engineered a dinner plate–sized chip—the largest ever—that embeds both memory and logic. “People thought we were mad hatters,” says Andrew Feldman, Cerebras’s CEO and co-founder, given the huge technical hurdles. In March, the company released a third generation of the chip, the record-fast Wafer-Scale Engine 3 (WSE-3), which can train models 10 times bigger than OpenAI’s GPT-4 and will power the Condor Galaxy 3, a supercomputer under construction in Texas.