Luminous Computing, which is developing a light-based AI accelerator chip, raises $105 million
According to some experts, the growth in computing power needed to develop future AI systems could hit a wall with conventional chip technologies. While startups like Cerebras claim to be developing hardware that can efficiently handle next-generation systems from a power-consumption perspective, researchers fear that highly sophisticated AI systems will become the exclusive domain of corporations and governments with the necessary resources.
One solution that has been proposed is photonic chips, which use light to send signals, rather than the electricity used by conventional processors. Photonic chips could, in theory, lead to better performance because light produces less heat than electricity, can travel faster, and is less sensitive to temperature changes and electromagnetic fields.
Lightmatter, LightOn, Celestial AI, Intel, and the Japanese company NTT are among the companies developing photonic technologies. So is Luminous Computing, which today announced it has raised $105 million in a Series A round with participation from investors including Microsoft co-founder Bill Gates, Gigafund, 8090 Partners, Neo, Third Kind Venture Capital, Alumni Ventures Group, Strawberry Creek Ventures, Horsley Bridge, and Modern Venture Partners, among others. (Post-money, Luminous’ valuation is between $200 million and $300 million.)
“It’s an incredible time to be a part of the AI industry,” Marcus Gomez, CEO and co-founder of Luminous, said in a statement. “AI has become superhuman. We can interact with computers in natural language and ask them to write a piece of code or even an essay, and the result will be better than most humans could provide. Frustratingly, we have the software to solve monumental, game-changing problems that humans can’t even begin to solve. We just don’t have the hardware that can run these algorithms.”
Luminous was founded in 2018 by Michael Gao, CEO Marcus Gomez, and Mitchell Nahmias. Nahmias’ research at Princeton became the cornerstone of Luminous’ hardware. Gomez, who founded a fashion tech startup called Swan, was previously a research scientist at Tinder and spent time working on artificial intelligence and search software at Google. Gao is the CEO of AlphaSheets, a data analysis platform for enterprise customers.
“Over the past decade, the demand for AI computation has increased nearly 10,000 times. Ten years ago, the largest models had 10 million parameters and could be trained in 1-2 hours on a single GPU; today, the largest models have over 10 trillion parameters and can take up to a year to train on tens of thousands of machines,” Gomez told VentureBeat via email. “Unfortunately, we’re at an impasse: the hardware just hasn’t kept up. The large AI models that exist today are notoriously difficult and expensive to train because the underlying hardware just isn’t fast enough. Training large AI models is mostly relegated to [big tech companies], because most companies can’t even afford to rent the necessary equipment. Worse still, even for [big tech companies], hardware scaling has slowed so much that increasing model size is almost intractable. The stagnation of AI progress is coming fast.”
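A rough back-of-envelope calculation puts Gomez’s figures in perspective. The machine count and training durations below are illustrative assumptions drawn loosely from his quote, not disclosed specifications:

```python
# Back-of-envelope on the scaling figures Gomez cites. The "20,000 machines
# for a full year" numbers are assumptions standing in for "tens of thousands
# of machines" and "up to a year" -- not exact, disclosed values.

params_then = 10e6   # ~10 million parameters a decade ago
params_now = 10e12   # ~10 trillion parameters today
param_growth = params_now / params_then

gpu_hours_then = 1 * 1.5              # one GPU for ~1.5 hours
gpu_hours_now = 20_000 * 365 * 24     # assumed fleet, running for a year
compute_growth = gpu_hours_now / gpu_hours_then

print(f"parameter growth: {param_growth:.0e}x")    # ~1e+06x
print(f"GPU-hour growth:  {compute_growth:.0e}x")  # ~1e+08x
```

Even under these loose assumptions, raw training compute has grown far faster than any single chip’s throughput, which is the impasse Gomez describes.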
“In traditional hardware, data is sent over electrical cables, which consume more power and carry less data the longer they are. In contrast, Luminous hardware uses light channels, which can send more data between chips, and the data links don’t degrade much with distance. This allows us to feed processing chips from a much larger memory space and to scale AI algorithms across more processors far more easily.”
“Using… proprietary silicon photonics technology, [we’ve] designed a new computing architecture that scales dramatically more efficiently, allowing users to train models that are 100x to 1,000x larger in manageable timeframes, at dramatically reduced costs, and with a dramatically simpler programming model,” Gomez said. “In other words, [we’ve] designed a computer that makes training AI algorithms faster, cheaper, and easier.”
While Luminous keeps the exact technical specifications of its hardware under wraps, Nahmias published a scientific paper in January 2020 comparing the performance of photonic and electronic hardware in AI systems on multiply-accumulate (MAC) operations. Nahmias and his co-authors found that photonic hardware significantly outperformed electronic hardware in terms of computational energy, speed, and density.
“If you look at where modern AI computers are bottlenecked, it’s first and foremost communication, at all scales – between chips, between boards, and between racks in the data center. If you can’t solve the communication bottleneck, you have to live on these terrible trade-off curves,” Gomez added. “Luminous uses its silicon photonics technology to directly solve the communication bottleneck at each scale of the hierarchy, and when [we] say solve, [we] mean solve: [we’re] increasing bandwidth by 10 to 100 times at each distance scale.”
Photonic chips have drawbacks that need to be addressed if the technology is to reach the mainstream. They are physically larger than their electronic counterparts and difficult to mass-produce, in part due to the immaturity of photonic chip factories. Additionally, photonic architectures still rely heavily on electronic control circuitry, which can create bottlenecks.
“For large applications, including AI, machine learning, and large-scale analytics, power dissipation across many components is expected to be high – an order of magnitude higher than current systems,” wrote Nicole Hemsoth of The Next Platform in a January 2021 review of photonics technologies. “We are probably at least five years to a decade away [from] computing based on silicon photonics.”
But pre-revenue Luminous – which has more than 90 employees – says it has produced working prototypes of its chips, and the company aims to ship dev kits to customers within 24 months. The latest funding round brings Luminous’ total capital raised to $115 million and will be used primarily to double the size of the engineering team, develop Luminous chips and software, and prepare for production “at commercial scale,” Gomez said.
“Luminous’ initial target customers are hyperscalers building their own data centers to run their own machine learning algorithms,” Gomez continued. “Luminous’ computer has the memory, compute, and bandwidth to train these very large algorithms, and it’s designed from the ground up with the AI user in mind… For users who rely on large AI models to generate their core revenue, we remove the bottlenecks preventing them from developing their models, and we eliminate thousands of hours otherwise spent on programming complexity and engineering overhead.”
Nahmias added, “Luminous is at the dawn of a phase transition in the optics industry, away from pluggable transceivers and toward integrating optics directly into systems. Many companies, including Broadcom, Cisco, and Intel, have built their own co-packaged optics to put into the switches that form the backbone of data center communication. This can significantly reduce the power and cost of data center links while increasing their performance. However, the idea of including optical interconnects inside a computer system and designing the system from the ground up around them is a relatively new concept in the industry, and one that is at the heart of what Luminous is building.”