In 1976, Julian Laderman discovered something remarkable: you can multiply two 3×3 matrices using only 23 multiplications instead of the obvious 27. Fifty years later, no one has done better.
We're trying to change that.
Why Matrix Multiplication Matters
Matrix multiplication is everywhere. Every time you:
- Train a neural network
- Render 3D graphics
- Compress an image
- Run a physics simulation
...you're multiplying matrices. Billions of times per second, across millions of devices worldwide.
The naive way to multiply two n×n matrices requires n³ multiplications. For 3×3 matrices, that's 27. But here's the thing: multiplications are the expensive part. They cost more than additions in computational resources, energy, and time, and when a small scheme like this is applied recursively to blocks of a larger matrix, the multiplication count alone determines the asymptotic running time.
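For concreteness, here is a small Python sketch (ours, not from the paper) of the schoolbook method, instrumented to count multiplications:

```python
def naive_matmul(A, B):
    """Schoolbook product of two 3x3 matrices, counting scalar multiplications."""
    n = 3
    C = [[0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # one scalar multiplication
                mults += 1
    return C, mults

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]
C, mults = naive_matmul(A, B)
print(mults)  # 27 = 3^3
```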
So computer scientists have spent decades asking: what's the minimum number of multiplications actually required?
The Gap That Won't Close
For 3×3 matrices, we know:
| Bound | Multiplications | Year | Due to |
|---|---|---|---|
| Naive | 27 | — | (schoolbook) |
| Upper bound | 23 | 1976 | Laderman |
| Lower bound | 19 | 1999 | Bläser |
Laderman's upper bound has stood for fifty years, and the gap between 19 and 23 remains wide open.
DeepMind's AlphaTensor made headlines in 2022 by finding new algorithms for 4×4 and larger matrices (some of them only in modular arithmetic). But for 3×3? Even their AI couldn't beat Laderman's 23.
Blankline Research decided to investigate why.
What We Discovered
Our team spent months analyzing Laderman's algorithm, trying every approach we could think of to shave off even a single multiplication. Here's what we found:
Discovery 1: The Four Anchors
Laderman's 23-term algorithm has a hidden structure. Four of those terms—we call them "anchors"—compute single, isolated products:
- P20: A[1,0] × B[0,2] → C[1,2]
- P21: A[1,2] × B[2,1] → C[1,1]
- P22: A[2,0] × B[0,1] → C[2,1]
- P23: A[2,2] × B[2,2] → C[2,2]
These four products have no overlap whatsoever: each reads a different entry of A, reads a different entry of B, and writes a different position in C.
Mathematically, this means they're orthogonal—and orthogonal rank-1 tensors can't be compressed. You need exactly 4 terms to compute these 4 products. No fewer.
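One way to make the disjointness concrete: write each anchor as an (A-entry, B-entry, C-entry) triple and check that no coordinate repeats. A minimal Python sketch, using the paper's P20–P23 labels:

```python
# Each anchor as ((A row, A col), (B row, B col), (C row, C col)).
anchors = {
    "P20": ((1, 0), (0, 2), (1, 2)),
    "P21": ((1, 2), (2, 1), (1, 1)),
    "P22": ((2, 0), (0, 1), (2, 1)),
    "P23": ((2, 2), (2, 2), (2, 2)),
}

for slot, name in enumerate(["A entries", "B entries", "C entries"]):
    entries = [triple[slot] for triple in anchors.values()]
    assert len(set(entries)) == len(entries), f"overlap in {name}"
    print(name, entries)
```

Because the four terms touch pairwise-distinct basis entries in each factor, they are pairwise orthogonal as rank-1 tensors.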
This was our first barrier.
Discovery 2: The Routing Problem
We found "super-efficient" compound structures that looked promising. One compound could cover multiple anchor products at once. Three compounds could cover all 27 required products.
We thought we had a breakthrough.
Then we tried to make it work, and hit what we call the w-vector routing problem.
Here's the issue: when a single term produces multiple products, they all share the same "routing vector" that determines where results go. But if those products need to go to different output positions, you get a contradiction:
- Product A needs: w[4] = 1, w[5] = 0
- Product B needs: w[5] = 1, w[4] = 0
Same term. Same w-vector. Impossible.
Coverage ≠ validity. Just because you can produce the right products doesn't mean you can route them correctly.
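To see the conflict in code: each bilinear term computes one scalar product and distributes it into vec(C) through its w-vector, so two products that demand opposite w entries cannot share a term. A toy sketch (positions 4 and 5 are the flattened C indices from the example above):

```python
# Each bilinear term has one routing vector w over vec(C); every product
# the term is supposed to deliver imposes its own constraints on w.
requirements = [
    {4: 1, 5: 0},  # product A: land in C-position 4, stay out of 5
    {5: 1, 4: 0},  # product B: land in C-position 5, stay out of 4
]

w = {}
for req in requirements:
    for idx, val in req.items():
        if idx in w and w[idx] != val:
            print(f"w[{idx}] must be both {w[idx]} and {val}: impossible")
        else:
            w[idx] = val
```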
Discovery 3: Laderman is Locally Optimal
We used SMT solvers (the same technology used to verify hardware and find bugs in software) to ask: can we eliminate any single term from Laderman's algorithm by adjusting the others?
The answer: UNSAT (unsatisfiable) for all 23 terms.
Laderman's algorithm is locally optimal. You can't improve it by tweaking one piece at a time.
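For readers curious what such a query looks like, here is a minimal sketch using the Z3 solver's Python bindings (the z3-solver package). It poses the unrestricted form of the question: can 22 real-coefficient bilinear terms reproduce the 3×3 multiplication tensor? The actual experiments constrain the search around Laderman's coefficients; this full query is far too hard for a solver to decide as written.

```python
from z3 import Real, Solver

R = 22  # can 22 bilinear terms reproduce 3x3 matrix multiplication?

# Coefficient variables: term t reads u[t].vec(A) and v[t].vec(B),
# and routes the product into vec(C) through w[t].
u = [[Real(f"u_{t}_{i}") for i in range(9)] for t in range(R)]
v = [[Real(f"v_{t}_{j}") for j in range(9)] for t in range(R)]
w = [[Real(f"w_{t}_{k}") for k in range(9)] for t in range(R)]

s = Solver()
for i in range(9):          # A entry (i // 3, i % 3)
    for j in range(9):      # B entry (j // 3, j % 3)
        for k in range(9):  # C entry (k // 3, k % 3)
            # Matrix-multiplication tensor: 1 iff the product
            # A[i0,i1] * B[j0,j1] contributes to C[k0,k1].
            target = int(i % 3 == j // 3 and k // 3 == i // 3 and k % 3 == j % 3)
            s.add(sum(u[t][i] * v[t][j] * w[t][k] for t in range(R)) == target)

print(s.check())  # sat would mean rank <= 22; do not expect an answer soon
```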
Discovery 4: Signs Don't Help
Laderman's algorithm uses mixed positive and negative coefficients. Maybe we just had the signs wrong?
We tested all 65,536 possible sign combinations for our most promising structure.
Every single one gave the exact same error: 2.449.
The barrier isn't about signs. It's structural.
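The enumeration pattern itself is easy to reproduce. In the sketch below, U, V, W are random stand-ins for the fixed 16-term structure under test (the real one comes from the compound search and isn't reproduced here); only the sign-handling pattern is the point:

```python
import itertools
import numpy as np

# 9x9x9 matrix-multiplication tensor: T[i, j, k] = 1 iff
# A[i//3, i%3] * B[j//3, j%3] contributes to C[k//3, k%3].
T = np.zeros((9, 9, 9))
for i, j, k in itertools.product(range(9), repeat=3):
    if i % 3 == j // 3 and k // 3 == i // 3 and k % 3 == j % 3:
        T[i, j, k] = 1.0

# Stand-ins for the fixed 16-term structure (hypothetical placeholders).
rng = np.random.default_rng(0)
U, V, W = (rng.integers(0, 2, size=(16, 9)).astype(float) for _ in range(3))

# Precompute the 16 rank-1 terms, then sweep all 2^16 sign patterns.
terms = np.stack([np.einsum("i,j,k->ijk", a, b, c) for a, b, c in zip(U, V, W)])
best = min(np.linalg.norm(np.tensordot(np.array(signs), terms, axes=1) - T)
           for signs in itertools.product((1.0, -1.0), repeat=16))
print(best)  # Frobenius error of the best of 65,536 sign patterns
```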
What This Means
Our research establishes precise barriers that explain why naive approaches to rank-22 fail:
- The Anchor Barrier: Four products are fundamentally irreducible
- The Routing Barrier: Compound terms can't route products to different destinations
- The Local Optimality Barrier: You can't incrementally improve Laderman
These aren't just failed attempts—they're proofs of impossibility for entire families of approaches.
What We're Doing Next
This is an active research initiative. Our team is pursuing several directions:
Alternative Schemes
Laderman's isn't the only rank-23 algorithm. More than 17,000 distinct rank-23 decompositions are known. Maybe one of them can be reduced where Laderman's can't. We're systematically analyzing their structures.
Border Rank
What if we allow approximate decompositions that become exact in a limit? Border rank techniques have succeeded where exact methods failed. The border rank of 3×3 multiplication is at least 17, tantalizingly below the exact lower bound of 19: an approximate algorithm could, in principle, use fewer products than any exact one.
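A classical toy example (not specific to matrix multiplication) shows how the limit works: the tensor x⊗x⊗y + x⊗y⊗x + y⊗x⊗x has rank 3, yet two rank-1 terms approximate it to any precision. A quick numeric check:

```python
import numpy as np

def outer3(a, b, c):
    return np.einsum("i,j,k->ijk", a, b, c)

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
T = outer3(x, x, y) + outer3(x, y, x) + outer3(y, x, x)  # rank 3

for eps in (1e-1, 1e-2, 1e-3):
    # Two rank-1 terms whose combination converges to T as eps -> 0.
    approx = (outer3(x + eps * y, x + eps * y, x + eps * y)
              - outer3(x, x, x)) / eps
    print(eps, np.linalg.norm(approx - T))  # error shrinks linearly in eps
```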
Algebraic Geometry
The set of rank-r tensors forms an algebraic variety. Secant varieties and their dimensions encode deep information about achievable ranks. We're exploring whether geometric methods can reveal structure invisible to computational search.
Machine Learning at Scale
AlphaTensor used reinforcement learning to discover new algorithms. But it was trained on a broad range of matrix sizes. What happens with a model focused exclusively on 3×3, with computational resources dedicated to this single problem?
The Bigger Picture
Why does this matter beyond academic curiosity?
Theoretical impact: The tensor rank of matrix multiplication is connected to fundamental questions in complexity theory. Understanding 3×3 deeply may unlock insights for all sizes.
Practical impact: Even a single multiplication saved for 3×3 matrices would propagate through countless applications. At the scale of modern computing, small improvements compound enormously.
Methodological impact: The barriers we've identified—orthogonal structures, routing problems, local optima—may appear in other optimization problems. Understanding them here teaches us something general.
Follow Our Progress
This research is ongoing. We expect to publish updates later this year as we explore alternative approaches.
Technical Paper: On the Tensor Rank of 3×3 Matrix Multiplication: Barriers and Open Problems
DOI: 10.5281/zenodo.18364905
For collaboration inquiries: research@blankline.org
Key Takeaways
- Laderman's 1976 algorithm (23 multiplications for 3×3 matrices) remains unbeaten after 50 years
- The gap between the lower bound (19) and upper bound (23) is a major open problem
- We identified structural barriers: orthogonal anchors, routing conflicts, local optimality
- These barriers prove that many natural approaches cannot work
- Blankline Research continues to investigate alternative methods
- Updates expected Q4 2026
Blankline Research is dedicated to solving hard problems in computer science and mathematics. This is the first public report from our computational complexity research initiative.