Complexity Theory
Joachim von zur Gathen
Universität Bonn, Germany
Oded Goldreich
Weizmann Institute of Science, Rehovot, Israel
Madhu Sudan
The Stata Center, Cambridge, United States
Abstract
The workshop Complexity Theory was organized by Joachim von zur Gathen (Universität Bonn), Oded Goldreich (Weizmann Institute), and Madhu Sudan (MIT). The workshop was held on June 24–30, 2007, and was attended by approximately 50 participants spanning a wide range of interests within the field of Computational Complexity. The plenary program, attended by all participants, featured eight long lectures as well as short (10-minute) reports by almost all participants. In addition, extensive interaction took place in smaller groups.
The Oberwolfach Meeting on Complexity Theory is marked by a long tradition and a continuous transformation. Originally starting with a focus on algebraic and Boolean complexity, the meeting has continuously evolved to cover a wide variety of areas, most of which were not even in existence at the time of the first meeting (in 1972). While inviting many of the most prominent researchers in the field, the organizers try to identify and invite a fair number of promising young researchers.
Computational complexity (a.k.a. complexity theory) is a central field of computer science with a remarkable list of celebrated achievements as well as a vibrant research activity. The field is concerned with the study of the intrinsic complexity of computational tasks, and this study tends to aim at generality: it focuses on natural computational resources, and considers the effect of limiting these resources on the class of problems that can be solved. Computational complexity is related to and has substantial interaction with other areas of mathematics such as number theory, algebra, combinatorics, coding theory, and optimization.
The workshop focused on several sub-areas of complexity theory and its nature may be best illustrated by a brief survey of some of the meeting's highlights.
Connections to the Theory of Error-Correcting Codes. The interplay between coding theory and complexity theory first emerged in the context of “hardness amplification” (almost two decades ago), while other connections are less than a decade old (e.g., the connections to probabilistic checking of proofs and to the extraction of pure randomness). Several applications of the known connections were presented at the current meeting, and in addition a new connection to algebraic complexity was presented.
While previous applications of the aforementioned connections went in the direction from coding theory to complexity theory, a recent result reported by Venkat Guruswami goes in the opposite direction. This work, by Guruswami and his graduate student (Rudra), resolves a decades-old central problem in coding theory by presenting an explicit error-correcting code over a constant-size alphabet that approaches the capacity bound (under worst-case errors, using list decoding).
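For orientation, the capacity bound in question is the standard list-decoding capacity theorem (a textbook recap in our notation, not the talk's): writing R for the rate of the code and rho for the fraction of worst-case errors,

\[
  \rho_{\mathrm{list}}(R) \;=\; 1 - R ,
\]

meaning that no code of rate R can be list decoded with small lists beyond a (1-R)-fraction of errors, while the codes presented correct a (1 - R - \varepsilon)-fraction of errors for every fixed \varepsilon > 0, with alphabet size and list size depending only on \varepsilon.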
Extracting randomness. Extracting almost-perfect randomness from weak sources of (imperfect) randomness is crucial for the actual use of randomized procedures. Typical analyses of randomized procedures assume that the procedures have access to a perfect random source. However, in reality one only has access to sources of weak randomness (e.g., having constant entropy rate). Indeed, the problem has attracted a lot of attention in the last couple of decades.
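In standard notation (a recap, not the speakers' formalism), the quality of a weak source X ranging over n-bit strings is measured by its min-entropy and entropy rate:

\[
  H_\infty(X) \;=\; \min_{x}\,\log_2 \frac{1}{\Pr[X = x]} ,
  \qquad
  \text{entropy rate of } X \;=\; \frac{H_\infty(X)}{n} ,
\]

so a source of constant entropy rate \delta is only guaranteed to place probability at most 2^{-\delta n} on each string, which is far from being uniformly distributed.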
In the meeting, Chris Umans presented recent work with Guruswami and Vadhan, which applies recent algebraic and coding-theoretic techniques to the construction of (single-source) randomness extractors. This construction meets (and actually improves upon) the best known parameters for the problem (which are almost optimal), but does so by a relatively simple construction rather than by a complex combination of numerous constructs (as done in prior work). Furthermore, the new work introduces improved constructions of an intermediate primitive (called a randomness condenser), which is of independent interest.
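For orientation, the two objects mentioned can be recalled as follows (standard definitions in our notation):

\[
  \mathrm{Ext}\colon \{0,1\}^{n} \times \{0,1\}^{d} \to \{0,1\}^{m}
  \ \text{ is a } (k,\varepsilon)\text{-extractor if }\
  H_\infty(X) \ge k \;\Longrightarrow\; \mathrm{Ext}(X, U_d) \approx_{\varepsilon} U_m ,
\]

where U_d denotes the (short) uniformly random seed, U_m the uniform distribution on the output length, and \approx_\varepsilon statistical distance at most \varepsilon. A condenser is defined analogously, except that its output is only required to be \varepsilon-close to some distribution of min-entropy k' (typically of higher entropy rate than the input), rather than close to uniform.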
While single-source randomness extractors must utilize an auxiliary random seed (which may be very short), some applications do not allow for such a seed. In this case, extraction from several (e.g., two) independent sources of weak randomness is called for. An important step in this direction was made by Anup Rao, and was presented by him at the meeting.
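In this seedless setting, the standard requirement (again in our notation) reads:

\[
  \mathrm{Ext}\colon \{0,1\}^{n} \times \{0,1\}^{n} \to \{0,1\}^{m} ,
  \qquad
  H_\infty(X) \ge k,\ \ H_\infty(Y) \ge k,\ \ X, Y \ \text{independent}
  \;\Longrightarrow\;
  \mathrm{Ext}(X, Y) \approx_{\varepsilon} U_m .
\]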
Algebraic complexity and modular polynomial composition. An important task in algebraic computation is modular polynomial composition; that is, given three univariate polynomials f, g, and h, one is required to obtain the coefficients of the polynomial f(g) mod h. This task has many applications, most notably as an ingredient in algorithms for polynomial factorization. The previously best algorithm was presented 30 years ago and uses about n^1.7 arithmetic operations, where n denotes the maximum degree of the polynomials. In the meeting, Chris Umans presented significant progress on this celebrated open problem in the form of an almost linear-time algorithm that works over fields of small characteristic. This major progress on a purely algebraic problem is essentially based on methods that were introduced into coding theory by Guruswami and Rudra, and then applied in complexity theory in the context of randomness extractors (see the foregoing paragraphs). All three results, which are major achievements in their respective areas, were presented at the meeting.
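To make the task concrete, here is a minimal sketch of modular composition over a prime field Z_p via the naive Horner-style route (illustration only; this is not the algorithm presented at the meeting, and the helper routines are ad hoc):

# Naive modular composition f(g) mod h over Z_p (coefficients listed from
# the constant term upward). Requires Python 3.8+ for pow(x, -1, p).

def poly_mul(a, b, p):
    """Schoolbook product of two polynomials over Z_p."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return res

def poly_mod(a, h, p):
    """Remainder of a modulo h over Z_p (h is assumed non-constant)."""
    a, dh = a[:], len(h) - 1
    inv_lead = pow(h[-1], -1, p)
    while len(a) - 1 >= dh:
        if a[-1] == 0:
            a.pop()
            continue
        c = (a[-1] * inv_lead) % p
        shift = len(a) - 1 - dh
        for i, hi in enumerate(h):
            a[shift + i] = (a[shift + i] - c * hi) % p
        a.pop()
    return a or [0]

def modular_composition(f, g, h, p):
    """Compute f(g) mod h by Horner's rule: deg(f) multiplications mod h."""
    acc = [f[-1] % p]
    for coeff in reversed(f[:-1]):
        acc = poly_mod(poly_mul(acc, g, p), h, p)
        acc[0] = (acc[0] + coeff) % p
    return acc

# Example over Z_5: f = x^2 + 1, g = x + 2, h = x^2 + x + 1.
# f(g) = x^2 + 4x (mod 5), and reducing mod h gives 3x + 4.
print(modular_composition([1, 0, 1], [2, 1], [1, 1, 1], 5))   # -> [4, 3]

Even this naive route makes the cost structure clear: there are deg(f) Horner steps, each involving one polynomial multiplication and one reduction modulo h, so the straightforward cost is quadratic or worse in n.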
Cryptography and Zero-Knowledge. Zero-knowledge proofs are fascinating concepts and extremely useful constructs. Their fascinating nature is due to their seemingly contradictory definition, which mandates that they be convincing and yet yield nothing beyond the validity of the assertion being proved. Their applicability in the domain of cryptography is vast; they are typically used to force malicious parties to behave according to a predetermined protocol. In addition to their direct applicability in cryptography, zero-knowledge proofs serve as a good benchmark for the study of various problems regarding cryptographic protocols. Zero-knowledge proofs come in many flavors, and it is of great theoretical and practical importance to investigate the relationships among them.
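In standard terms (a recap of the usual definition, not the talk's specific formalism), the requirement of yielding nothing is captured by simulation: for a language L with prover P,

\[
  \forall\ \text{feasible } V^{*}\ \ \exists\ \text{feasible } S\ \ \forall x \in L:\quad
  S(x) \;\approx\; \mathrm{view}_{V^{*}}\bigl(P(x) \leftrightarrow V^{*}(x)\bigr) ,
\]

where \approx denotes computational indistinguishability in the computational (general) case and statistical closeness in the statistical case.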
A central problem in this area, which has been open since 1986, refers to the gap between the known results regarding two dual notions: the notion of general zero-knowledge proofs (in which the secrecy condition holds with respect to feasible adversaries) and the notion of statistical zero-knowledge arguments (in which the soundness condition holds with respect to feasible adversaries). This gap was bridged in a recent work of Salil Vadhan, jointly with his graduate students (Nguyen and Ong), and was presented by Vadhan in this meeting.
A problem related to both cryptography and coding theory is that of constructing private information retrieval schemes and/or locally decodable codes. In the context of error-correcting codes, such schemes should allow the recovery of any bit of the original message based on a constant number (e.g., three) of probes to the corrupted codeword. For more than a decade it was believed that the length of such codewords must be (weakly) exponential in the length of the message. In the meeting, Sergey Yekhanin (a PhD student) presented his recent result, which refutes this belief.
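The standard formalization (our notation, given here only for orientation) is that of a q-query locally decodable code: a code C mapping k-bit messages to N-symbol codewords such that, for every message x, every index i, and every word w within relative distance \delta of C(x),

\[
  \Pr\bigl[\, D^{w}(i) = x_i \,\bigr] \;\ge\; \tfrac{1}{2} + \varepsilon ,
\]

where the randomized decoder D reads at most q (e.g., three) positions of w. The belief mentioned above was that, for constant q, the length N must be of the form 2^{k^{\Omega(1)}}.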
Delegating your work to untrusted entities. Needless to say, it is nice to delegate your work to others, but what if you don't trust the others? The very definition of a proof system refers to such a possibility: the hard task of finding a proof is delegated to the outside, while you make sure that the proof is valid by performing the easier task of verification. However, facilitating verification may mean making the task of finding adequate proofs even harder. In the context of program checking this phenomenon is explicitly disallowed: wishing to solve some problem, you may use an untrusted program that supposedly solves this problem (but not a program that solves more complex problems). Needless to say, the aim is to allow the delegator, called a checker, to use significantly fewer resources than any program that correctly solves the problem. In the meeting, Shafi Goldwasser presented a recent result that achieves this goal for a natural complexity measure (circuit depth) and for a wide class of problems (i.e., NC).
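As a toy illustration of this flavor of delegation-with-verification (a classical textbook example, unrelated to the construction presented at the meeting), Freivalds' test lets a delegator check a claimed matrix product with quadratic work, far less than the cost of recomputing the product:

import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically check whether A*B == C using O(n^2) work per trial."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]                  # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False      # the untrusted answer C is certainly wrong
    return True               # a wrong C survives each trial with probability at most 1/2

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds_check(A, B, [[19, 22], [43, 50]]))   # True  (correct product)
print(freivalds_check(A, B, [[19, 22], [43, 51]]))   # False (with high probability)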
A characterization of testable graph properties. The area of property testing is concerned with promise problems that call for distinguishing objects that have a predetermined property from objects that are “far” from any object having this property. The focus is on sub-linear time algorithms that probe the given object at a few (randomly selected) locations. In some cases, one may perform the task using a number of probes that depends only on the proximity parameter (and is independent of the size of the object). In the meeting, Noga Alon presented a recent result that characterizes the class of graph properties (where graphs are represented by their adjacency matrices) for which such a phenomenon holds.
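In standard terms (our notation), a tester for a graph property \Pi in the adjacency-matrix model is a randomized algorithm that, given a proximity parameter \varepsilon and query access to the adjacency matrix of an n-vertex graph G, satisfies

\[
  G \in \Pi \;\Longrightarrow\; \Pr[\text{accept}] \ge \tfrac{2}{3} ,
  \qquad
  G \ \varepsilon\text{-far from } \Pi \;\Longrightarrow\; \Pr[\text{reject}] \ge \tfrac{2}{3} ,
\]

where \varepsilon-far means that more than \varepsilon n^2 adjacency-matrix entries must be modified to obtain a graph in \Pi; the phenomenon characterized is that of testers whose query complexity is bounded by a function of \varepsilon alone.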
The rest of this report. This report contains extended abstracts of the eight plenary lectures as well as abstracts of forty short reports.
Cite this article
Joachim von zur Gathen, Oded Goldreich, Madhu Sudan, Complexity Theory. Oberwolfach Rep. 4 (2007), no. 3, pp. 1793–1864
DOI 10.4171/OWR/2007/31