Home Page

This page contains links to the latest (and most complete) revision of each work.


Abstract: Motivated by the structural analogies between point lattices and linear error-correcting codes, and by the mature theory on locally testable codes, we initiate a systematic study of local testing for membership in lattices.

Testing membership in lattices is also motivated in practice by applications to integer programming, error detection in lattice-based communication, and cryptography. Apart from establishing the conceptual foundations of lattice testing, our results include the following: 1. We demonstrate upper and lower bounds on the query complexity of local testing for the well-known family of code formula lattices. Furthermore, we instantiate our results with code formula lattices constructed from Reed-Muller codes, and obtain nearly tight bounds. 2. We show that in order to achieve low query complexity, it is sufficient to design one-sided non-adaptive canonical tests. This result is akin to, and based on, an analogous result for error-correcting codes due to Ben-Sasson et al. (SIAM J. Computing 35(1), pp. 1-21).
Abstract: AC0-MOD2 circuits are AC0 circuits augmented with a layer of parity gates just above the input layer. We study AC0-MOD2 circuit lower bounds for computing the Boolean Inner Product function. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity.

We give the first superlinear lower bound for the Boolean Inner Product function against AC0-MOD2 of depth four or greater. Indeed, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an Ω~(n^2) lower bound for the special case of depth-4 AC0-MOD2. Our proof of the depth-4 lower bound employs a new "moment-matching" inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we prove an optimal bound on the maximum difference between two discrete distributions' values at 0, given that their first d moments match.
Abstract: We prove that a known approach to improving Shamir's celebrated secret sharing scheme, namely adding an information-theoretic authentication tag to the secret, can make it robust for $n$ parties against any collusion of size $\delta n$, for any constant $\delta \in (0, 1/2)$.
This result holds in the so-called "non-rushing" model in which the $n$ shares are submitted simultaneously for reconstruction. We thus obtain an efficient and robust secret sharing scheme in this model that is essentially optimal in all parameters, including the share size, which is $k(1+o(1)) + O(\kappa)$, where $k$ is the secret length and $\kappa$ is the security parameter. Like Shamir's scheme, in this modified scheme any set of more than $\delta n$ honest parties can efficiently recover the secret. Using algebraic geometry codes instead of Reed-Solomon codes, we decrease the share length to a constant (depending only on $\delta$) while the number of shares $n$ can grow independently. In this case, when $n$ is large enough, the scheme satisfies the "threshold" requirement in an approximate sense; i.e., any set of $\delta n(1+\rho)$ honest parties, for arbitrarily small $\rho > 0$, can efficiently reconstruct the secret.
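A minimal sketch of the authenticate-then-share idea, assuming toy parameters: the snippet below combines Shamir's scheme over a prime field with a one-time polynomial MAC. It is not the paper's construction (it ignores the robustness analysis and share-size optimizations), and the field size, threshold, and the choice to share the MAC key alongside the secret are all illustrative.

```python
# Toy sketch only: Shamir sharing over a prime field, with a one-time
# polynomial MAC (tag = a*s + b) attached so reconstruction can detect
# naive tampering. Parameters are illustrative, not the paper's.
import random

P = 2**61 - 1  # a Mersenne prime, chosen here for convenience

def share(value, n, t):
    """Split `value` into n Shamir shares; any t+1 of them reconstruct it."""
    coeffs = [value] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

s = 42                                    # the secret
a, b = random.randrange(1, P), random.randrange(P)
tag = (a * s + b) % P                     # information-theoretic MAC
n, t = 7, 3
shared = {name: share(v, n, t) for name, v in
          [("s", s), ("a", a), ("b", b), ("tag", tag)]}

rec = {name: reconstruct(ss[: t + 1]) for name, ss in shared.items()}
assert (rec["a"] * rec["s"] + rec["b"]) % P == rec["tag"], "tampering detected"
print("recovered secret:", rec["s"])
```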

Abstract: For every fixed constant α > 0, we design an algorithm for computing the k-sparse Walsh-Hadamard transform of an N-dimensional vector x ∈ R^N in time k^{1+α} (log N)^{O(1)}. Specifically, the algorithm is given query access to x and computes a k-sparse x' ∈ R^N satisfying ||x' - DHT(x)||_1 < c ||DHT(x) - H_k(DHT(x))||_1 for an absolute constant c > 0, where DHT(x) is the transform of x and H_k(DHT(x)) is its best k-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to x (i.e., all queries are determined and performed in parallel when the algorithm starts).

An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive L1/L1 compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time k^{1+α} (log N)^{O(1)} (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, Strauss (Allerton 2008).

Our methods use linear lossless condensers in a black-box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to k (log N)^{O(1)} reconstruction time with a reduced exponent in the poly-logarithmic factor).
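As a concrete (and deliberately naive) reference point for the guarantee above, the sketch below computes the full transform in time O(N log N) and keeps the k largest coefficients; the paper's contribution is achieving comparable output in time polynomial in k and log N. Sizes and names are illustrative.

```python
# Naive baseline, not the sublinear-time algorithm: dense fast
# Walsh-Hadamard transform followed by hard thresholding to k terms.
import numpy as np

def fwht(v):
    """Fast Walsh-Hadamard transform (unnormalized); len(v) a power of 2."""
    v = v.astype(float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v

N, k = 1024, 10
x = np.random.randn(N)
y = fwht(x)                        # DHT(x), computed densely
top = np.argsort(np.abs(y))[-k:]   # support of the k largest coefficients
x_k = np.zeros(N)
x_k[top] = y[top]                  # best k-sparse approximation H_k(DHT(x))
print("L1 tail mass ||DHT(x) - H_k||_1:", np.abs(y - x_k).sum())
```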
Abstract: Non-malleable coding, introduced by Dziembowski, Pietrzak and Wichs (ICS 2010), aims to protect the integrity of information against tampering attacks in situations where error detection is impossible. Intuitively, information encoded by a non-malleable code either decodes to the original message or, in the presence of any tampering, to an unrelated message. Non-malleable coding is possible against any class of adversaries of bounded size. In particular, Dziembowski et al. show that such codes exist and may achieve positive rates for any class of tampering functions of size at most exp(2^{αn}), for any constant α < 1. However, this result is existential and has thus attracted a great deal of subsequent research on explicit constructions of non-malleable codes against natural classes of adversaries.

In this work, we consider constructions of coding schemes against two well-studied classes of tampering functions; namely, bit-wise tampering functions (where the adversary tampers each bit of the encoding independently) and the much more general class of split-state adversaries (where two independent adversaries arbitrarily tamper each half of the encoded sequence). We obtain the following results for these models.

1. For bit-tampering adversaries, we obtain explicit and efficiently encodable and decodable non-malleable codes of length $n$ achieving rate 1-o(1) and error (also known as "exact security") exp(-Ω~(n^{1/7})). Alternatively, it is possible to improve the error to exp(-Ω~(n)) at the cost of making the construction Monte Carlo with success probability 1-exp(-Ω(n)) (while still allowing a compact description of the code). Previously, the best known construction of bit-tampering coding schemes was due to Dziembowski et al. (ICS 2010), which is a Monte Carlo construction achieving rate close to 0.1887.

2. We initiate the study of seedless non-malleable extractors as a natural variation of the notion of non-malleable extractors introduced by Dodis and Wichs (STOC 2009). We show that the construction of non-malleable codes for the split-state model reduces to the construction of non-malleable two-source extractors. We prove a general result on the existence of seedless non-malleable extractors, which implies that codes obtained from our reduction can achieve rates arbitrarily close to 1/5 and exponentially small error. In a separate recent work, the authors show that the optimal rate in this model is 1/2. Currently, the best known explicit construction of split-state coding schemes is due to Aggarwal, Dodis and Lovett (ECCC TR13-081), which only achieves vanishing (polynomially small) rate.
Abstract: Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs (ICS 2010), encode a message s in a manner so that tampering the codeword causes the decoder to either output s or a message that is independent of s. While this is an impossible goal to achieve against unrestricted tampering functions, rather surprisingly non-malleable coding becomes possible against every fixed family F of tampering functions that is not too large (for instance, when |F| < exp(2^{αn}) for some α < 1, where n is the number of bits in a codeword). In this work, we study the "capacity of non-malleable coding" and establish optimal bounds on the achievable rate as a function of the family size, answering an open problem from Dziembowski et al. (ICS 2010). Specifically,
  1. We prove that for every family F with |F| < exp(2^{αn}), there exist non-malleable codes against F with rate arbitrarily close to 1-α (this is achieved w.h.p. by a randomized construction).
  2. We show the existence of families of size exp(poly(n) 2^{αn}) against which there is no non-malleable code of rate 1-α (in fact, this is the case w.h.p. for a random family of this size).
  3. We also show that 1-α is the best achievable rate for the family of functions that are only allowed to tamper the first αn bits of the codeword, a case of special interest.
As a corollary, this implies that the capacity of non-malleable coding in the split-state model (where the tampering function acts independently but arbitrarily on the two halves of the codeword, a model which has received some attention recently) equals 1/2.
We also give an efficient Monte Carlo construction of codes of rate close to 1, with polynomial-time encoding and decoding, that are non-malleable against any family F of size exp(n^c) for any fixed c > 0; in particular, against tampering functions computable by, say, cubic-size circuits.
Abstract: Locally decodable codes (LDCs) are error correcting codes with the extra property that it is sufficient to read just a small number of positions of a possibly corrupted codeword in order to recover any one position of the input. To achieve this, it is necessary to use randomness in the decoding procedures. We refer to the probability of returning the correct answer as the correctness of the decoding algorithm.

Thus far, the study of LDCs has focused on the question of the tradeoff between their length and the query complexity of the decoders. Another natural question is the largest possible correctness, as a function of the amount of codeword corruption and the number of queries, regardless of the length of the codewords. Goldreich et al. (Computational Complexity 15(3), 2006) observed that for a given number of queries and fraction of errors, the correctness probability cannot be arbitrarily close to 1. However, the quantitative dependence between the largest possible correctness and the amount of corruption has not been established before.

We present several bounds on the largest possible correctness for LDCs, as a function of the amount of corruption tolerated and the number of queries used, regardless of the length of the code. Our bounds are close to tight. We also investigate the relationship between the amount of corruption tolerated by an LDC and its minimum distance as an error correcting code.
Even though intuitively the two notions are expected to be related, we demonstrate that in general this is not the case. However, we show a close relationship between minimum distance and
amount of corruption tolerated for linear codes over arbitrary finite fields, and for binary nonlinear codes. We use these results to strengthen the known bounds on the largest possible amount of corruption that can be tolerated by LDCs (with any nontrivial correctness better than random guessing) regardless of the query complexity or the length of the code.
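A classical instance of the correctness/corruption trade-off in question is the 2-query local decoder for the Hadamard code, whose correctness is at least 1 - 2δ against a δ fraction of corruption, by a union bound over the two queries. A small empirical check with toy parameters:

```python
# 2-query local decoding of the Hadamard code: to recover x_i, query the
# word at a random point a and at a + e_i, then XOR the two answers.
import itertools, random

m = 8
x = [random.randint(0, 1) for _ in range(m)]            # message bits
points = list(itertools.product([0, 1], repeat=m))      # all of {0,1}^m
word = {a: sum(xi * ai for xi, ai in zip(x, a)) % 2 for a in points}

delta = 0.05                                            # corruption rate
for a in random.sample(points, int(delta * len(points))):
    word[a] ^= 1                                        # adversarial flips

def decode_bit(i):
    a = random.choice(points)
    b = tuple(v ^ (1 if j == i else 0) for j, v in enumerate(a))
    return word[a] ^ word[b]

trials = 20000
hits = sum(decode_bit(0) == x[0] for _ in range(trials))
print(f"empirical correctness {hits / trials:.3f}, union bound {1 - 2 * delta}")
```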
Abstract: We prove that a random linear code over F_q, with probability arbitrarily close to 1, is list decodable at radius 1-1/q-ε with list size L = O(1/ε^2) and rate R = Ω_q(ε^2/log^3(1/ε)). Up to the polylogarithmic factor in 1/ε and constant factors depending on q, this matches the lower bound L = Ω_q(1/ε^2) for the list size and upper bound R = O_q(ε^2) for the rate. Previously only existence (and not abundance) of such codes was known for the special case q = 2 (Guruswami, Håstad, Sudan and Zuckerman, 2002).

In order to obtain our result, we employ a relaxed version of the well known Johnson bound on list decoding that translates the average Hamming distance between codewords to list decoding guarantees. We furthermore prove that the desired average-distance guarantees hold for a code provided that a natural complex matrix encoding the codewords satisfies the Restricted Isometry Property with respect to the Euclidean norm (RIP-2). For the case of random binary linear codes, this matrix coincides with a random submatrix of the Hadamard-Walsh transform matrix that is well studied in the compressed sensing literature.

Finally, we improve the analysis of Rudelson and Vershynin (2008) on the number of random frequency samples required for exact reconstruction of k-sparse signals of length N. Specifically, we improve the number of samples from O(k log(N) log^2(k) (log k + log log N)) to O(k log(N) log^3 k). The proof involves bounding the expected supremum of a related Gaussian process by using an improved analysis of the metric defined by the process. This improvement is crucial for our application in list decoding.
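The RIP-2 connection above can be illustrated numerically. The sketch below, with arbitrary toy dimensions and uniformly random row samples rather than the structured analysis of the paper, measures how well a normalized row submatrix of the Hadamard-Walsh matrix preserves the Euclidean norm of sparse vectors:

```python
# Empirical norm preservation of a random row submatrix of the
# Hadamard-Walsh matrix on k-sparse vectors (a proxy for RIP-2).
import numpy as np
from scipy.linalg import hadamard

N, q, k = 256, 80, 5                    # dimension, rows kept, sparsity
H = hadamard(N) / np.sqrt(N)            # orthonormal Hadamard-Walsh matrix
rows = np.random.choice(N, q, replace=False)
A = H[rows] * np.sqrt(N / q)            # rescale so E||Ax||^2 = ||x||^2

ratios = []
for _ in range(1000):
    x = np.zeros(N)
    supp = np.random.choice(N, k, replace=False)
    x[supp] = np.random.randn(k)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
print(f"norm ratio over sparse vectors: [{min(ratios):.3f}, {max(ratios):.3f}]")
```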
    • Mahdi Cheraghchi, Adam R. Klivans, Pravesh Kothari, Homin K. Lee. Submodular Functions are Noise Stable. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), 2012. arXiv:1106.0518.
Abstract: We show that all non-negative submodular functions have high noise-stability. As a consequence, we obtain a polynomial-time learning algorithm for this class with respect to any product distribution on {-1,1}^n (for any constant accuracy parameter ε). Our algorithm also succeeds in the agnostic setting. Previous work on learning submodular functions required either query access or strong assumptions about the types of submodular functions to be learned (and did not hold in the agnostic setting). Additionally, we give simple algorithms that efficiently release differentially private answers to all Boolean conjunctions and to all halfspaces with constant average error, subsuming and improving recent work due to Gupta, Hardt, Roth and Ullman (STOC 2011).
Abstract: We review connections between coding-theoretic objects and sparse learning problems. In particular, we show how seemingly different combinatorial objects such as error-correcting codes, combinatorial designs, spherical codes, compressed sensing matrices and group testing designs can be obtained from one another. The reductions enable one to translate upper and lower bounds on the parameters attainable by one object to another. We survey some of the well-known reductions in a unified presentation, and bring some existing gaps to attention. New reductions are also introduced; in particular, we bring up the notion of minimum "L-wise distance" of codes and show that this notion closely captures the combinatorial structure of RIP-2 matrices. Moreover, we show how this weaker variation of the minimum distance is related to combinatorial list-decoding properties of codes.
Abstract: The rapid development of derandomization theory, which is a fundamental area in theoretical computer science, has recently led to many surprising applications outside its initial intention. We will review some recent such developments related to combinatorial group testing. In its most basic setting, the aim of group testing is to identify a set of "positive" individuals in a population of items by taking groups of items and asking whether there is a positive in each group.
In particular, we will discuss explicit constructions of optimal or nearly-optimal group testing schemes using "randomness-conducting" functions. Among such developments are constructions of error-correcting group testing schemes using randomness extractors and condensers, as well as threshold group testing schemes from lossless condensers.
Abstract: We study constraint satisfaction problems with constraints defined by homogeneous linear threshold predicates. Specifically, we consider the standard optimization problem Max-CSP(P), where the objective is to satisfy as many constraints of the same type P on a number of Boolean variables as possible. We assume that the constraints are defined by a fixed linear threshold predicate P: {-1,+1}^n → {-1,+1} of the form P(x_1, ..., x_n) = sgn(w_1 x_1 + ... + w_n x_n), for positive integer weights w_i (i = 1, ..., n). Our focus is on a range of threshold predicates for which the problem does not become approximation resistant, and we study the approximation curve of this class of problems. For the special case of the majority function with w_1 = ... = w_n = 1 and n odd, we obtain almost-matching approximability and hardness results that can be summarized as follows:
  • Approximation: Using linear programming, we design a polynomial-time algorithm that, given a (1 - δ/(n+1))-satisfiable instance for any δ < 1, satisfies a 1/2 + (1-δ) Ω(1/sqrt(n)) fraction of the constraints.
  • Hardness: Assuming the Unique Games Conjecture (Khot, STOC 2002), for any ε > 0 and δ ∈ [0,1], given a (1 - δ/(n+1) - ε)-satisfiable instance it is NP-hard to satisfy a 1/2 + (1-δ) Ω(1/sqrt(n)) + ε fraction of the constraints.
We extend the above results to a more general class of "majority-like" predicates and obtain parallel results for them. Loosely speaking, this class of predicates can be defined using weights w_i that are generally small. Towards this, we introduce the notion of Chow-robustness, which might be of independent interest.
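As a sanity check on the 1/2 baseline these bounds revolve around: Majority of odd arity is an odd function, so a uniformly random assignment satisfies each constraint with probability exactly 1/2. A toy simulation, with arbitrary instance sizes:

```python
# Random-assignment baseline for Max-CSP(Majority) with odd arity.
import random

def majority(vals):                # sign of the sum; never 0 for odd arity
    return 1 if sum(vals) > 0 else -1

n_vars, arity, n_cons = 50, 5, 20000
constraints = [[(v, random.choice([-1, 1]))        # (variable, literal sign)
                for v in random.sample(range(n_vars), arity)]
               for _ in range(n_cons)]

assignment = [random.choice([-1, 1]) for _ in range(n_vars)]
sat = sum(majority([sign * assignment[v] for v, sign in c]) == 1
          for c in constraints)
print(f"satisfied fraction: {sat / n_cons:.3f} (expected about 0.5)")
```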

Abstract: The basic goal in combinatorial group testing is to identify a set of up to d defective items within a large population of size n >> d using a pooling strategy. Namely, the items can be grouped together in pools, and a single measurement reveals whether there are one or more defectives in the pool. The threshold model is a generalization of this idea where a measurement returns positive if the number of defectives in the pool passes a fixed threshold u, negative if this number is below a fixed lower threshold ℓ < u, and may behave arbitrarily otherwise. We study non-adaptive threshold group testing (in a possibly noisy setting) and show that, for this problem, O(d^{g+2} (log d) log(n/d)) measurements (where g := u - ℓ) suffice to identify the defectives, and also present almost matching lower bounds. This significantly improves the previously known (non-constructive) upper bound O(d^{u+1} log(n/d)). Moreover, we obtain a framework for explicit construction of measurement schemes using lossless condensers. The number of measurements resulting from this scheme is ideally bounded by O(d^{g+3} (log d) log n). Using state-of-the-art constructions of lossless condensers, however, we come up with explicit testing schemes with O(d^{g+3} (log d) quasipoly(log n)) and O(d^{g+3+β} poly(log n)) measurements, for arbitrary constant β > 0.
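A minimal simulation of the threshold measurement model, assuming random pools in place of the paper's condenser-based designs; the thresholds, pool sizes, and counts below are arbitrary:

```python
# Threshold group testing measurement: positive at or above hi (the paper's
# u), negative below lo (the paper's ℓ), arbitrary in the gap g = u - ℓ.
import random

def threshold_test(pool, defectives, lo, hi):
    c = len(pool & defectives)
    if c >= hi:
        return 1
    if c < lo:
        return 0
    return random.choice([0, 1])   # gap behavior is adversarial in the model

n, d, lo, hi = 1000, 10, 1, 3      # gap g = hi - lo = 2
defectives = set(random.sample(range(n), d))
pools = [set(random.sample(range(n), 40)) for _ in range(200)]
outcomes = [threshold_test(p, defectives, lo, hi) for p in pools]
print(sum(outcomes), "of", len(pools), "tests came back positive")
```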
Abstract: Non-adaptive group testing involves grouping arbitrary subsets of n items into different pools. Each pool is then tested and defective items are identified. A fundamental question involves minimizing the number of pools required to identify at most d defective items. Motivated by applications in network tomography, sensor networks and infection propagation we formulate group testing problems on graphs. Unlike conventional group testing problems each group here must conform to the constraints imposed by a graph. For instance, items can be associated with vertices and each pool is any set of nodes that must be path connected. In this paper we associate a test with a random walk. In this context conventional group testing corresponds to the special case of a complete graph on n vertices.
For interesting classes of graphs we arrive at a rather surprising result, namely, that the number of tests required to identify d defective items is substantially similar to that required in conventional group testing problems, where no such constraints on pooling are imposed. Specifically, if T(n) corresponds to the mixing time of the graph G, we show that with m = O(d^2 T^2(n) log(n/d)) non-adaptive tests, one can identify the defective items. Consequently, for the Erdős-Rényi random graph G(n,p), as well as expander graphs with constant spectral gap, it follows that m = O(d^2 log^3 n) non-adaptive tests are sufficient to identify d defective items. We next consider a specific scenario that arises in network tomography and show that m = O(d^3 log^3 n) non-adaptive tests are sufficient to identify d defective items. We also consider noisy counterparts of the graph-constrained group testing problem and develop parallel results for these cases.
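The random-walk pooling idea is easy to sketch. The graph below is a toy cycle with long-range chords, and the fixed walk length stands in for the mixing-time bound T(n) from the paper; both are illustrative choices:

```python
# Pools generated by random walks: each pool is connected by construction.
import random

n = 200
adj = {v: [(v - 1) % n, (v + 1) % n, (v + n // 2) % n] for v in range(n)}

def random_walk_pool(start, length):
    pool, v = {start}, start
    for _ in range(length):
        v = random.choice(adj[v])
        pool.add(v)
    return pool

defectives = set(random.sample(range(n), 3))
pool = random_walk_pool(random.randrange(n), length=30)
print("pool size:", len(pool), "| test outcome:", int(bool(pool & defectives)))
```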
Abstract: Detection of defective members of large populations has been widely studied in the statistics community under the name "group testing", a problem that dates back to World War II when it was suggested for syphilis screening. There the main interest is to identify a small number of infected people among a large population using collective samples. In viral epidemics, one way to acquire collective samples is by sending agents inside the population. While in classical group testing it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in this work we assume that the decoder possesses only partial knowledge about the sampling process. This assumption is justified by the fact that in a viral sickness, there is a chance that an agent remains healthy despite having contact with an infected person. Therefore, the reconstruction method has to cope with two different types of uncertainty; namely, identification of the infected population and the partially unknown sampling procedure.
In this work, by using a natural probabilistic model for "viral infections", we design non-adaptive sampling procedures that allow successful identification of the infected population with overwhelming probability 1-o(1). We propose both probabilistic and explicit design procedures that require a "small" number of agents to single out the infected individuals. More precisely, for a contamination probability p, the number of agents required by the probabilistic and explicit designs for identification of up to k infected members is bounded by m = O(k^2 (log n)/p^3) and m = O(k^2 (log^2 n)/p^3), respectively. In both cases, a simple decoder is able to successfully identify the infected population in time O(mn).
Abstract: We study combinatorial group testing schemes for learning d-sparse Boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we give a general framework for construction of highly noise-resilient group testing schemes using randomness condensers. Simple randomized instantiations of this construction give non-adaptive measurement schemes, with m = O(d log n) measurements, that allow efficient reconstruction of d-sparse vectors up to O(d) false positives even in the presence of δm false positives and Ω(m/d) false negatives within the measurement outcomes, for any constant δ < 1. None of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit (and incomparable) constructions, in particular one matching the randomized trade-off but using m = O(d^{1+o(1)} log n) measurements. We also obtain explicit constructions that allow fast reconstruction in time poly(m), which would be sublinear in n for sufficiently sparse vectors.
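The disjunctive measurement model with noisy outcomes, together with a thresholded variant of the naive decoder, can be simulated as below; the random pools, noise level, and tolerance are arbitrary stand-ins for the condenser-based designs of the paper:

```python
# Noisy disjunctive measurements: each outcome is the OR of the pool, with
# a small fraction of outcomes flipped. The decoder keeps an item unless
# it appears in more than `tol` negative tests.
import random

n, d, m = 500, 5, 120
defectives = set(random.sample(range(n), d))
pools = [set(random.sample(range(n), n // d)) for _ in range(m)]
clean = [int(bool(p & defectives)) for p in pools]
noisy = [y ^ (random.random() < 0.05) for y in clean]   # ~5% flipped outcomes

def decode(tol):
    keep = set()
    for j in range(n):
        negatives = sum(1 for p, y in zip(pools, noisy) if j in p and y == 0)
        if negatives <= tol:
            keep.add(j)
    return keep

est = decode(tol=2)
print("missed defectives:", len(defectives - est),
      "| false positives:", len(est - defectives))
```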
Abstract: We give a general framework for construction of small ensembles of capacity-achieving linear codes for a wide range of (not necessarily memoryless) discrete symmetric channels, and in particular, the binary erasure and symmetric channels. The main tool used in our constructions is the notion of randomness extractors and lossless condensers, which are regarded as central tools in theoretical computer science. Like random codes, the resulting ensembles preserve their capacity-achieving properties under any change of basis. Our methods can potentially lead to polynomial-sized ensembles; however, using known explicit constructions of randomness conductors we obtain specific ensembles whose size is as small as quasipolynomial in the block length. By applying our construction to Justesen's concatenation scheme (Justesen, 1972) we obtain explicit capacity-achieving codes for the BEC (resp., BSC) with almost linear time encoding and almost linear time (resp., quadratic time) decoding and exponentially small error probability. The explicit code for the BEC is defined and capacity achieving for every block length.
Abstract: A wiretap protocol is a pair of randomized encoding and decoding functions such that knowledge of a bounded fraction of the encoding of a message reveals essentially no information about the message, while knowledge of the entire encoding reveals the message using the decoder.  In this paper we study the notion of efficiently invertible extractors and show that a wiretap protocol can be constructed from such an extractor.  We will then construct invertible extractors for symbol-fixing, affine, and general sources and apply them to create wiretap protocols with asymptotically optimal trade-offs between their rate (ratio of the length of the message versus its encoding) and resilience (ratio of the observed positions of the encoding and the length of the encoding).  We will then apply our results to create wiretap protocols for challenging communication problems, such as active intruders who change portions of the encoding, network coding, and intruders observing arbitrary boolean functions of the encoding.
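A minimal instance of the encode-by-inverting idea is linear coset coding in the style of Ozarow and Wyner's wiretap II scheme, where the "extractor" is a linear map x ↦ Gx over GF(2) and encoding samples a uniformly random preimage of the message. The paper's invertible extractors are considerably more general, so treat this purely as a sketch:

```python
# Coset coding: Ext(x) = G x over GF(2); encode(m) samples a random x with
# G x = m, which is easy when G = [I | B] is in systematic form.
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 12
B = rng.integers(0, 2, (k, n - k))
G = np.concatenate([np.eye(k, dtype=int), B], axis=1)  # systematic k x n map

def encode(m):
    r = rng.integers(0, 2, n - k)           # fresh randomness per message
    head = (m - B @ r) % 2                  # solve G [head; r] = m over GF(2)
    return np.concatenate([head, r])

def decode(x):
    return (G @ x) % 2

m = np.array([1, 0, 1, 1])
x = encode(m)
assert np.array_equal(decode(x), m)
print("message:", m, "-> codeword:", x)
```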

Abstract: This paper studies the stability of some reconstruction algorithms for compressed sensing in terms of the bit precision. Considering the fact that practical digital systems deal with discretized signals, we motivate the importance of the total number of accurate bits needed from the measurement outcomes in addition to the number of measurements. It is shown that if one uses a 2k × n Vandermonde matrix with roots on the unit circle as the measurement matrix, O(L + k log(n/k)) bits of precision per measurement are sufficient to reconstruct a k-sparse signal in R^n with dynamic range (i.e., the absolute ratio between the largest and the smallest nonzero coefficients) at most 2^L within L bits of precision, hence identifying its correct support. Finally, we obtain an upper bound on the total number of required bits when the measurement matrix satisfies a restricted isometry property, which is in particular the case for random Fourier and Gaussian matrices. For very sparse signals, the upper bound on the number of required bits for Vandermonde matrices is shown to be better than this general upper bound.
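Constructing the measurement matrix in question is straightforward; the sketch below uses the n-th roots of unity as one natural choice of unit-circle nodes (any distinct roots on the circle would fit the description above):

```python
# A 2k x n Vandermonde measurement matrix with nodes on the unit circle.
import numpy as np

def vandermonde_unit_circle(k, n):
    nodes = np.exp(2j * np.pi * np.arange(n) / n)      # n-th roots of unity
    return np.vander(nodes, 2 * k, increasing=True).T  # shape (2k, n)

n, k = 32, 4
A = vandermonde_unit_circle(k, n)
x = np.zeros(n)
x[[3, 10, 17, 29]] = [1.0, -2.0, 0.5, 3.0]             # a k-sparse signal
y = A @ x                                              # 2k complex measurements
print("measurement vector shape:", y.shape)
```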
Abstract: We consider the problem of uniform sampling of points on an algebraic variety.  Specifically, we develop a randomized algorithm that, given a small set of multivariate polynomials over a sufficiently large finite field, produces a common zero of the polynomials almost uniformly at random. The statistical distance between the output distribution of the algorithm and the uniform distribution on the set of common zeros is polynomially small in the field size, and the running time of the algorithm is polynomial in the description of the polynomials and their degrees provided that the number of the polynomials is a constant.
Abstract: We outline a procedure for using pseudorandom generators to construct binary codes with good properties, assuming the existence of sufficiently hard functions. Specifically, we give a polynomial-time algorithm which, for all integers n and k, constructs polynomially many linear codes of block length n and dimension k, most of which achieve the Gilbert-Varshamov bound. The success of the procedure relies on the assumption that the exponential-time class E := DTIME[2^{O(n)}] is not contained in the sub-exponential space class DSPACE[2^{o(n)}]. The methods used in this paper are by now standard within computational complexity theory, and the main contribution of this note is observing that they are relevant to the construction of optimal codes. We attempt to make this note self-contained, and describe the relevant results and proofs from the theory of pseudorandomness in some detail.
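For reference, the binary Gilbert-Varshamov bound guarantees rate R ≥ 1 - H2(δ) at relative distance δ, which is the benchmark the construction aims for; a quick tabulation:

```python
# Gilbert-Varshamov rate lower bound for binary codes: R >= 1 - H2(delta).
import math

def H2(p):
    """Binary entropy function."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for delta in (0.05, 0.1, 0.2, 0.3):
    print(f"delta = {delta}: GV rate >= {1 - H2(delta):.4f}")
```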

Abstract: The rigidity function of a matrix is defined as the minimum number of its entries that need to be changed in order to reduce the rank of the matrix to below a given parameter. Proving a strong enough lower bound on the rigidity of a matrix implies a nontrivial lower bound on the complexity of any linear circuit computing the set of linear forms associated with it. However, although it is known that most matrices are rigid enough, no explicit construction of a rigid family of matrices is known. In this survey report we review the concept of rigidity and some of its interesting variations, as well as several notable related results. We also show the existence of highly rigid matrices constructed by evaluation of bivariate polynomials over finite fields.
  • Theses:
Abstract: Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions.  Towards this goal, numerous fundamental notions have been developed to provide a unified framework for approaching various derandomization problems and to improve our general understanding of the power of randomness in computation.  Two important classes of such tools are pseudorandom generators and randomness  extractors. Pseudorandom generators transform a short, purely random, sequence into a much longer sequence that looks random, while extractors transform a weak source of randomness into a perfectly random one (or one with much better qualities, in which case the transformation is called a randomness condenser).

In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions. We utilize randomness extractors to construct efficient and information-theoretically optimal communication protocols for this model.

Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as for the threshold model where a query returns positive if the number of defectives passes a certain threshold.

Next, we use randomness condensers and extractors to design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. Finally, we consider the problem of explicit construction of error-correcting codes on the Gilbert-Varshamov bound and extend the original idea of Nisan and Wigderson to obtain a small ensemble of codes, most of which achieve the bound, under suitable computational hardness assumptions.

Abstract: Error correcting codes are combinatorial objects that allow reliable recovery of information in presence of errors by cleverly augmenting the original information with a certain amount of redundancy.
The availability of efficient means of error detection is considered a fundamental criterion for error-correcting codes. Locally testable codes are families of error-correcting codes that are highly robust against transmission errors and in addition provide super-efficient (sublinear-time) probabilistic algorithms for error detection. In particular, the error detection algorithm probes the received sequence at only a small (or even constant) number of locations.
There seems to be a trade-off between the amount of redundancy and the number of probes in the error detection procedure of locally testable codes. Even though the best current constructions allow the redundancy to be reduced to a nearly linear amount, it is not clear whether it can be further reduced to linear while preserving a constant number of probes.
We study the formal notion of locally testable codes and survey several major results in this area. We also investigate closely related concepts, and in particular, polynomial low-degree tests and probabilistically checkable proofs.
(implementation in ANSI C++, platform independent but needs OpenGL and glut)

Last updated: March 2017
