Secret sharing is one of those concepts that seems almost too elegant to be practical, yet it has been operating quietly for more than 40 years beneath the security architecture of government key-management systems, blockchains, and banks. The field has been working in Adi Shamir’s shadow ever since he sketched the idea out in 1979, and reading the most recent literature gives the impression that researchers are still completing a sentence he began.
The most recent significant advance is the paper on robust secret sharing against rushing adversaries by Pasin Manurangsi, Akshayaram Srinivasan, and Prashant Nalini Vasudevan, which appeared on the Cryptology ePrint Archive in late 2019 and eventually made its way to CRYPTO 2020. The problem they addressed is surprisingly easy to state. You divide a secret into n shares. An adversary corrupts just under half of the shareholders and can tamper with their shares. Worse, this adversary is “rushing”—it gets to see the honest parties’ shares before deciding how to manipulate its own. Can the secret still be recovered? And if so, how small can each share be?
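The baseline here is Shamir’s original 1979 scheme, which is worth seeing concretely. Below is a minimal sketch of plain, non-robust Shamir secret sharing over a prime field (the field size and function names are illustrative, not from the paper): it shows the splitting and reconstruction that the robust schemes build on, but offers no protection against a shareholder who hands back a tampered share.

```python
import random

PRIME = 2**61 - 1  # illustrative prime field; real deployments pick parameters carefully

def split(secret, n, t):
    """Split `secret` into n shares; any t+1 shares reconstruct it.

    Shares are points (x, p(x)) on a random degree-t polynomial whose
    constant term is the secret, with all arithmetic mod PRIME.
    """
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t)]
    def p(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, p(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, n=7, t=3)
assert reconstruct(shares[:4]) == 123456789  # any t+1 honest shares suffice
```

Against a rushing adversary this breaks silently: a corrupted party can wait to see the honest shares, submit a consistent-looking forged point, and steer interpolation to the wrong secret. Closing that gap without bloating the shares is exactly what the robust schemes discussed below are after.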
| Field | Detail |
|---|---|
| Paper Title | Nearly Optimal Robust Secret Sharing against Rushing Adversaries |
| Authors | Pasin Manurangsi, Akshayaram Srinivasan, Prashant Nalini Vasudevan |
| Year Published | 2019 (revised 2020) |
| Venue | CRYPTO 2020 |
| Related Work | Cheraghchi (2019), Designs, Codes and Cryptography |
| Research Area | Foundations of Cryptography |
| Adversary Model | Rushing, adaptive, up to t < n/2 corruptions |
| Share Size | m + O(κ) bits, near-optimal |
| Reconstruction | Polynomial-time |
| Predecessors | Shamir (1979), Bishop–Pastro–Rajaraman–Wichs (2016), Fehr–Yuan (EUROCRYPT 2019) |
| License | Creative Commons CC BY |
For years, the best known solutions sat uncomfortably far from ideal. In 2016, Bishop, Pastro, Rajaraman, and Wichs came close, but only against adversaries that were not rushing—like building a lock that only works if the burglar agrees not to examine the keyhole first. At EUROCRYPT 2019, Fehr and Yuan raised the bar by handling rushing adversaries, but at a price: the tighter variant of their scheme had reconstruction times that drifted into super-polynomial territory, and their share sizes carried an awkward extra factor. Theoretically useful. On a real machine, less so.
Manurangsi and his colleagues constructed a scheme that achieves all three properties at once: security against rushing adversaries, shares of size m + O(κ) bits, and polynomial-time reconstruction. For years, the field had been unable to combine those three properties. The trick, hidden in the technical sections, is a polynomial-time algorithm for a problem on semi-random graphs—the kind of mathematical object that keeps turning up unexpectedly in hard combinatorial problems. It is the sort of result that doesn’t make headlines; it rearranges the furniture in a room where cryptographers have been working for a long time.

It’s difficult to ignore how much of this work builds on Mahdi Cheraghchi’s 2019 paper in Designs, Codes and Cryptography, which showed that robustness could be pushed to any constant fraction below one half by simply adding an information-theoretic authentication tag to Shamir’s scheme. Drawing on list-decoding algorithms for Reed-Solomon codes, that paper made a point practitioners sometimes overlook: existing systems are frequently more resilient than people realize, if someone takes the time to add the right checksum.
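The authentication-tag idea can be illustrated with the classic one-time polynomial MAC over a prime field. To be clear, this is a toy sketch of the general mechanism, not Cheraghchi’s actual construction, and every name and parameter here is illustrative: each Shamir share would carry a tag like the one below, so tampered shares can be detected and discarded before reconstruction.

```python
import secrets

P = 2**61 - 1  # illustrative prime; a real scheme sizes this for security parameter kappa

def mac_key():
    """One-time key (a, b) for the information-theoretic MAC: tag = a*m + b (mod P)."""
    return (secrets.randbelow(P), secrets.randbelow(P))

def tag(key, share_value):
    a, b = key
    return (a * share_value + b) % P

def verify(key, share_value, t):
    return tag(key, share_value) == t

# A dealer would attach a tag to each Shamir share; at reconstruction time,
# shares whose tags fail verification are thrown out before interpolating.
key = mac_key()
share_value = 123456789
t = tag(key, share_value)
assert verify(key, share_value, t)          # an honest share always passes
assert not verify(key, share_value + 1, t)  # tampering is caught, except with probability 1/P
```

The guarantee is information-theoretic, not computational: an adversary who changes m to m′ passes verification only if a·(m − m′) equals the change in tag, which pins down the unknown key component a and so succeeds with probability exactly 1/P, no matter how much computing power it has.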
It’s unclear when any of these improvements will reach deployed cryptography. Robust secret sharing tends to live in places most users never see: cryptocurrency exchange custody systems, multi-party computation backends, threshold signing setups. The improvements matter over time and at scale rather than in a single dramatic moment. Still, there is something quietly satisfying about watching the field’s boundaries tighten year after year, paper by paper. A problem Shamir opened in 1979 is closer to being settled than it has ever been, and the people closing it are mostly just doing patient math.
