The New Moore’s Law: Why Optical Computing Could Redefine Scaling for AI
For decades, Moore’s Law (the doubling of transistor counts roughly every two years) has been the heartbeat of the tech industry. But as AI models grow to hundreds of billions of parameters, electrons are hitting a physical wall. We are facing a “paradox of scale”: our demand for compute is effectively unbounded, but our power grid and silicon efficiency are not.
In a new featured article for All About Circuits, Lumai’s Head of Product, Phillip Burr, explores why the next frontier of AI won’t be built with more transistors, but with light.
Beyond the Limits of Silicon
With data centers projected to consume an estimated 12% of the US electricity supply by 2028, the industry is searching for a more sustainable path. Optical computing offers a radical shift by replacing electrons with photons to perform the heavy lifting of AI: matrix multiplication.
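To see why matrix multiplication is “the heavy lifting,” consider a rough FLOP count for a single transformer feed-forward block. The sketch below uses illustrative sizes (our numbers, not any particular model’s) to show how thoroughly the matmuls dominate the element-wise non-linearity between them.

```python
# Rough FLOP budget for one transformer feed-forward block.
# Sizes are illustrative only, not tied to any specific model.
d_model, d_ff, tokens = 8192, 32768, 4096

# Two dense projections: each matmul costs ~2 * m * n * k FLOPs.
matmul_flops = 2 * tokens * d_model * d_ff * 2   # up-projection + down-projection

# The non-linearity between them touches each activation once.
nonlinear_flops = tokens * d_ff

print(f"matmul FLOPs:     {matmul_flops:.2e}")
print(f"non-linear FLOPs: {nonlinear_flops:.2e}")
print(f"matmul share:     {matmul_flops / (matmul_flops + nonlinear_flops):.4%}")
```

On these numbers the matrix multiplications account for well over 99.99% of the arithmetic, which is exactly the work an optical core would take over.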
Why 3D Optics Wins
While many are looking at “integrated photonics” (using waveguides on a flat chip), Lumai is pioneering a 3D (free-space) optical approach. By moving light through three-dimensional space rather than confined channels, we unlock:
- True Parallelism: Processing an entire massive matrix in a single pass (see the sketch after this list).
- Zero-Heat Computation: Photons propagating through free space dissipate none of the resistive heat that plagues electrons moving through silicon.
- Quadratic Scaling: Unlike silicon, where bigger chips become less efficient, 3D optical systems become more efficient as they grow: doubling the width of the optical aperture quadruples the multiply-accumulates performed per pass, while the energy cost grows far more slowly.
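Here is a minimal numerical sketch of that single-pass parallelism, assuming an idealized, noise-free system; the encoding and variable names are ours for illustration, not Lumai’s design. The input vector becomes light intensities, the weight matrix becomes per-pixel modulator transmission, and each detector physically sums one row.

```python
import numpy as np

# Idealized model of a free-space optical matrix-vector multiply.
# Assumptions (ours, for illustration): non-negative intensities,
# no noise, perfect optics. Real systems handle signed values and
# noise with calibration and differential encoding.
rng = np.random.default_rng(0)

n = 1024                          # beams in / detectors out
x = rng.random(n)                 # input vector -> light intensities
W = rng.random((n, n))            # weight matrix -> modulator transmission

# Fan the input out across the modulator plane: every element of x
# illuminates a full column of W simultaneously (the "single pass").
modulated = W * x                 # element-wise attenuation, all n^2 at once

# A lens focuses each row onto one detector, which physically sums it.
y_optical = modulated.sum(axis=1)

# All n^2 multiply-accumulates happened in one pass; check vs. digital matmul.
assert np.allclose(y_optical, W @ x)
print(f"{n * n:,} MACs in a single optical pass")
```

Note the quadratic-scaling argument falls out of this picture: widening the aperture grows the number of simultaneous multiply-accumulates as n², while the number of sources and detectors grows only as n.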
A New Scaling Law
The most exciting takeaway? The physics of light provides a new trajectory for performance. While electronic scaling is plateauing, optical scaling is just beginning. By combining optical cores for math-heavy tasks with digital electronics for control and non-linear operations, we can achieve up to 50x performance improvements at a fraction of the power.
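As a sketch of that division of labor, the toy layer below routes the matrix math through a simulated optical core and keeps the non-linearity in digital electronics. The optical_matvec helper is hypothetical, a stand-in for real optical hardware rather than any actual API.

```python
import numpy as np

def optical_matvec(W, x):
    """Hypothetical stand-in for an optical core: in hardware, this
    matmul would happen in a single pass through free-space optics."""
    return W @ x

def hybrid_layer(W, x):
    # Math-heavy part: offloaded to the (simulated) optical core.
    y = optical_matvec(W, x)
    # Control and non-linearity: kept in digital electronics,
    # where element-wise operations are cheap.
    return np.maximum(y, 0.0)     # ReLU

rng = np.random.default_rng(1)
W = rng.standard_normal((512, 512))
x = rng.standard_normal(512)
print(hybrid_layer(W, x)[:4])
```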
“Progress will no longer be measured by transistor counts, but by how effectively we harness the physics of light.”
Read the full technical deep dive on All About Circuits: “The New Moore’s Law: Why Optical Computing Could Redefine Scaling for AI”