In complexity theory, there is a particular kind of result that arrives quietly, gets courteously cited, and then sits inside the field like a tiny stone in a shoe. The new lower bounds for MCSP against branching programs and one-tape Turing machines are results of exactly that kind.
They do not claim to be breakthroughs. They practically go out of their way not to. And yet, reading the proofs, it is hard to ignore how close the authors come to the edge of something much larger.
| Item | Detail |
|---|---|
| Topic | One-tape Turing machine and branching program lower bounds for the Minimum Circuit Size Problem |
| Field | Computational complexity theory |
| Central problem | MCSP[s(n)] — deciding whether a Boolean function has a circuit of size at most s(n) |
| Related problem | MKTP, the time-bounded Kolmogorov complexity sibling of MCSP |
| Why it matters | Hardness magnification: even a modestly superlinear lower bound would imply P ≠ NP |
| Models studied | One-tape Turing machines with read-only input tape; non-deterministic, co-non-deterministic, and parity branching programs |
| Key technique revived | The Nečiporuk method, dating back to 1966 |
| Pseudorandom tools used | Forbes–Kelley generators, Viola’s combinatorial-rectangle work |
| Earlier landmark | McKay, Murray, and Williams, STOC 2019 |
| Paper venue | Theory of Computing Systems / ECCC preprint |
| Significance | First non-trivial lower bounds for MCSP and MKTP in these models |
The Minimum Circuit Size Problem has always been an uncomfortable guest at the complexity theory table. It sounds simple: given a truth table, decide whether some small circuit computes it. Yet no one has managed to prove it NP-complete, or to rule that out. It sat there, obstinate, for decades. Then, in 2019, a wave of hardness magnification papers appeared, and MCSP began to look less like a curiosity and more like a hinge. McKay, Murray, and Williams showed, in essence, that proving even a barely superlinear lower bound for MCSP on a one-tape Turing machine running in time N^1.01 would prove P ≠ NP. Statements like that keep people up at night.
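To make the problem statement concrete, here is a toy brute-force decider for MCSP[s]. This is a hypothetical sketch, not anything from the paper: it assumes a gate basis of fan-in-2 AND/OR/XOR with free negation of the output, which is one convention among several, and its running time is exponential in the size parameter s.

```python
from itertools import product

def mcsp_brute_force(truth_table, n, s):
    """Toy MCSP[s] decider: is there a circuit with at most s fan-in-2
    gates (AND / OR / XOR, output may be negated) computing f?
    truth_table is an int whose bit j equals f on the j-th input.
    Illustrative only: runtime is exponential in s."""
    N = 1 << n                 # truth table length, N = 2^n
    mask = (1 << N) - 1
    # Truth tables of the input variables x_1..x_n, packed as N-bit ints.
    base = []
    for i in range(n):
        tt = 0
        for j in range(N):
            if (j >> i) & 1:
                tt |= 1 << j
        base.append(tt)
    target = truth_table & mask

    def search(wires, gates_left):
        # Free output negation: accept the target or its complement.
        if target in wires or (target ^ mask) in wires:
            return True
        if gates_left == 0:
            return False
        # Try every new gate over two existing wires.
        for a, b in product(wires, repeat=2):
            for g in (a & b, a | b, a ^ b):
                if g not in wires and search(wires + [g], gates_left - 1):
                    return True
        return False

    return search(base, s)
```

For n = 2 the truth table 0b1000 is x1 AND x2, so one gate suffices but zero gates do not. The input length to MCSP is N = 2^n, which is why the lower bounds in the paper are stated in terms of N rather than n.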
That is precisely the regime the new paper targets. It shows that no randomized two-sided-error one-tape machine can decide MCSP in time N^1.99, for a constant strictly above the magnification threshold. The remaining gap is not large. If you are the kind of person who feels the universe should reward effort proportionately, it is frustratingly small. But the authors' candor about that gap gives the work a sense of groundedness rather than triumph.

The second result, applying the Nečiporuk method to MKTP, the time-bounded Kolmogorov complexity cousin of MCSP, is the more remarkable in a craftsmanlike way. The Nečiporuk method is old. It dates to the mid-1960s, and for a long time the consensus seemed to be that it had been exhausted. The authors credit Rahul Santhanam for the observation that it could still bite here. There is something satisfying about that: a technique older than most of the researchers now using it, dragged back into the room and asked to do one more job.
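For readers who have not met it, the method's general shape can be sketched as follows; this is the textbook form for deterministic branching programs (the exact constants, and the adaptations to the parity and non-deterministic variants the paper handles, differ). Partition the input variables into disjoint blocks V_1, ..., V_k, and let c_i(f) be the number of distinct subfunctions of f on V_i obtained by fixing all variables outside V_i. Then the branching program size of f satisfies:

```latex
% Nečiporuk bound, textbook form for deterministic branching programs
\mathrm{BP}(f) \;\ge\; \Omega\!\left( \sum_{i=1}^{k} \frac{\log c_i(f)}{\log \log c_i(f)} \right)
```

Since each c_i(f) is at most doubly exponential in |V_i|, the method can certify at most roughly n^2 / log^2 n for branching programs, which is why "matching the best known bound for an explicit function" is the natural ceiling here rather than a modest outcome.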
The branching program results show the same pattern. Each variant (non-deterministic, co-non-deterministic, and parity) gets a lower bound that nearly matches the best known for any explicit function. Not better; matching. In a field where reaching the state of the art on a problem no one could previously touch at all counts as progress, that is the right framing.
Beneath the technical apparatus lies the real question, the one no one quite asks aloud at workshops. If hardness magnification is real and these lower bounds keep inching upward, some minor improvement will eventually tip into a historic one. Or the tipping may never come: the magnification frontier may turn out to be a barrier in its own right, much as natural proofs did. Watching this corner of the field, you get the sense that everyone knows how thin the ice is and has chosen to skate carefully anyway.
For now, the paper does what good complexity theory papers do. It raises a number, sharpens a technique, thanks the people who pointed to the right reference, and leaves the door open.
