Applications of Algebraic Geometry

Abstracts and Talk Materials:

Complexity, Coding, and Communications

April 16-20, 2007

Saugata Basu (Georgia Institute of Technology) http://www.math.gatech.edu/~saugata/

On the number of homotopy types of fibres of a definable map

I will describe some results giving a single exponential upper bound on the number of possible homotopy types of the fibres of a Pfaffian map, in terms of the format of its graph. In particular, we show that if a semi-algebraic set S ⊂ ℝ^(m+n) is defined by a Boolean formula with s polynomials of degrees less than d, and π: ℝ^(m+n) → ℝ^n is the projection onto a subspace, then the number of different homotopy types of fibres of π does not exceed (2^m s n d)^(O(nm)). All previously known bounds were doubly exponential. As applications of our main results we prove single exponential bounds on the number of homotopy types of semi-algebraic sets defined by polynomials having a fixed number of monomials in their support (we need to fix only the number of monomials, not the support set itself), as well as by polynomials with bounded additive complexity. We also prove single exponential upper bounds on the radii of balls guaranteeing local contractibility for semi-algebraic sets defined by polynomials with integer coefficients. (Joint work with N. Vorobjov.)

Daniel J. Bernstein (University of Illinois)

The tangent FFT

My goal in this talk is to advertise an algorithm found by James Van Buskirk, the first improvement in more than thirty years in the exact complexity of the discrete Fourier transform over the reals. The previous speed record was held by the split-radix FFT, announced by Yavne in 1968 and widely understood since the early 1980s. The split-radix FFT uses 4n lg n - 6n + 8 operations over the reals for a size-n complex DFT when n is a large power of 2, and therefore (12+o(1)) n lg n operations for a complex cyclic convolution of length n. Bruun's real-factor FFT also uses (12+o(1)) n lg n operations. An analysis by Johnson and Frigo shows that Van Buskirk's new algorithm uses only (34/3+o(1)) n lg n operations.
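
As a rough numerical illustration of the counts quoted above, the script below tabulates the exact split-radix DFT count and the two leading-order convolution costs for a few powers of 2; it simply evaluates the formulas from the abstract (dropping the o(1) terms), so the convolution columns are asymptotic approximations only.

# Operation counts quoted above for size-n complex DFTs/convolutions, n a power of 2.
# The o(1) terms are dropped, so the last two columns are leading-order approximations.
from math import log2

def split_radix_dft_ops(n):
    # Exact split-radix count for a size-n complex DFT (n a power of 2).
    return 4 * n * log2(n) - 6 * n + 8

def split_radix_convolution_ops(n):
    # Leading term: three size-n transforms at ~4 n lg n each.
    return 12 * n * log2(n)

def tangent_fft_convolution_ops(n):
    # Leading term from the Johnson-Frigo analysis of Van Buskirk's algorithm.
    return (34 / 3) * n * log2(n)

print(f"{'n':>8} {'split-radix DFT':>16} {'12 n lg n':>14} {'(34/3) n lg n':>14}")
for k in range(4, 21, 4):
    n = 2 ** k
    print(f"{n:>8} {split_radix_dft_ops(n):>16.0f} "
          f"{split_radix_convolution_ops(n):>14.0f} {tangent_fft_convolution_ops(n):>14.0f}")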

Stanislav Bulygin (TU Kaiserslautern) http://www.mathematik.uni-kl.de/~bulygin/

Decoding linear codes via solving systems of polynomial equations

This is joint work of the presenter with Ruud Pellikaan (Technical University of Eindhoven, Netherlands). We investigate the problem of decoding linear codes, in particular decoding up to half the minimum distance. We propose a method based on solving systems of polynomial equations over a finite field. Such a method was already applied quite successfully to cyclic codes; here we move on to the general case. We concentrate on solving the systems that emerge with the use of Gröbner bases. The poster presents the theoretical framework, the main result (the system we want to solve has a unique solution, etc.), and also some results on complexity estimation together with experimental results.

Philip Robert Busse (University of Kentucky)

List decoding BCH codes

List decoding Reed-Solomon codes via an interpolation and root-finding algorithm was pioneered by Madhu Sudan in the mid-to-late 1990s. In 2006, Kwankyu Lee and Michael O'Sullivan developed a Gröbner-basis-based implementation of the interpolation algorithm, which they showed was an efficient generalization of the Berlekamp-Massey algorithm. We propose to apply these ideas to list decoding BCH codes by presenting BCH codes as subfield subcodes of generalized Reed-Solomon codes and to explore the resulting algorithm. In particular, we seek ways to optimize the new algorithm and to compare it with the Berlekamp-Massey algorithm.

Peter Bürgisser (Universität Paderborn) http://math-www.uni-paderborn.de/agpb/members.html

The probability that a slight perturbation of a numerical analysis problem is difficult

The running time of many iterative numerical algorithms is dominated by the condition number of the input, a quantity measuring the sensitivity of the solution with regard to small perturbations of the input. Examples are iterative methods of linear algebra, interior-point methods of linear and convex optimization, as well as homotopy methods for solving systems of polynomial equations.
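
As a minimal numerical illustration of what a condition number measures (unrelated to the smoothed-analysis results of the talk), the sketch below perturbs the right-hand side of an ill-conditioned 2×2 linear system and compares the observed error amplification with the condition number; the matrix and perturbation size are arbitrary choices.

# Toy illustration: the condition number bounds how much a small relative
# perturbation of the data can be amplified in the solution.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])          # nearly singular, hence ill-conditioned
b = np.array([2.0, 2.0001])

x = np.linalg.solve(A, b)              # unperturbed solution (close to [1, 1])

rng = np.random.default_rng(0)
db = 1e-6 * rng.standard_normal(2)     # tiny perturbation of the right-hand side
x_pert = np.linalg.solve(A, b + db)

rel_err_x = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
rel_err_b = np.linalg.norm(db) / np.linalg.norm(b)
print("condition number kappa(A):", np.linalg.cond(A))
print("observed error amplification:", rel_err_x / rel_err_b)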

Spielman and Teng introduced in 2001 the seminal concept of smoothed analysis, which arguably blends the best of both worst-case and average-case analysis of algorithms. This led to a much better understanding of the success of the simplex method in practice.

We present a general theorem providing smoothed analysis estimates for conic condition numbers of problems of numerical analysis. Our probability estimates depend only on geometric invariants of the corresponding sets of ill-posed inputs. Several applications to linear and polynomial equation solving show that the estimates obtained in this way are easy to derive and quite accurate. The main theorem is based on a volume estimate of ε-tubular neighborhoods around a real algebraic subvariety of a sphere, intersected with a disk of radius σ. Besides ε and σ, this bound depends only on the dimension of the sphere and on the degree of the defining equations.

This is joint work with Felipe Cucker and Martin Lotz.

Jennifer Chayes (Microsoft Research) http://research.microsoft.com/~chayes/

Math matters - IMA public lecture: Epidemics in technological and social networks: The downside of six degrees of separation

During the past decade, complex networks have become increasingly important in communication and information technology. Vast, self-engineered networks, like the Internet, the World Wide Web, and Instant Messaging Networks, have facilitated the flow of information, and served as media for social and economic interaction. In social networks, the ease of information flow goes by many names: the "small world" phenomenon, the "Kevin Bacon phenomenon," and "six degrees of separation"—the claim that any two people on earth can be connected through a chain of acquaintances with at most five intermediaries. Unfortunately, many of the properties that facilitate information transmission also facilitate the spread of viruses in both technological and social networks. Dr. Chayes uses simple mathematical models to explain these epidemics and to examine strategies for their containment.

Iwan Duursma (University of Illinois at Urbana-Champaign)

Strongly self-orthogonal codes for secure computation

The well-known Shamir secret sharing scheme uses polynomial interpolation to recover a shared secret. The scheme and its application to secure computation generalize to algebraic-curve-based schemes (Chen-Cramer 2006). For secure computation against an active adversary a scheme needs to be strongly multiplicative. We show that this can be achieved by using what we call strongly self-orthogonal codes. (Joint work with various authors)
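
For background on the interpolation-based scheme mentioned above, here is a minimal sketch of Shamir secret sharing over a prime field; the prime, threshold, and secret below are illustrative choices, and the algebraic-curve-based and strongly multiplicative variants discussed in the talk are not reflected.

# Minimal sketch of Shamir secret sharing over F_p: the secret is the constant
# term of a random degree-(t-1) polynomial; any t shares recover it by
# Lagrange interpolation at 0. Parameters below are illustrative only.
import random

p = 2**31 - 1          # a prime; work in the field F_p
t, n = 3, 5            # threshold t, number of shares n
secret = 123456789

coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
f = lambda x: sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
shares = [(x, f(x)) for x in range(1, n + 1)]

def recover(subset):
    # Lagrange interpolation at x = 0, all arithmetic mod p.
    s = 0
    for i, (xi, yi) in enumerate(subset):
        num, den = 1, 1
        for j, (xj, _) in enumerate(subset):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        s = (s + yi * num * pow(den, -1, p)) % p
    return s

print(recover(shares[:t]) == secret)   # any t shares suffice -> True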

Heide Gluesing-Luerssen (University of Kentucky) http://www.ms.uky.edu/~heidegl/

The weight adjacency matrix of a convolutional code

Convolutional codes can be described by linear input-state-output systems. This gives rise to a state transition graph and an associated weight adjacency matrix. The latter counts in a detailed way the weights occurring in the code. After discussing some uniqueness issues we will present a MacWilliams identity theorem for convolutional codes and their duals in terms of the weight adjacency matrix. Furthermore, we will discuss isometries for convolutional codes and their effect on the weight adjacency matrix. It will be shown that for a particular class of codes the weight adjacency matrix forms a complete invariant under monomial equivalence, that is, under permutation and rescaling of the codeword coordinates.
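
To make the object concrete, here is a toy computation (not taken from the talk) of the weight adjacency matrix for the memory-one, rate-1/2 encoder with generator matrix G(D) = [1, 1+D]; the choice of encoder and the convention of indexing entries by (current state, next state) are illustrative assumptions.

# Toy example: weight adjacency matrix of the rate-1/2 convolutional encoder
# G(D) = [1, 1+D] (memory 1, state = previous input bit). Entry (s, s') collects
# W^(Hamming weight of the two output bits) over transitions from state s to s'.
from sympy import symbols, zeros

W = symbols('W')
Lam = zeros(2, 2)
for s in (0, 1):            # current state = previous input bit
    for u in (0, 1):        # current input bit
        out = (u, u ^ s)    # outputs of [1, 1+D]
        s_next = u          # next state
        Lam[s, s_next] += W ** sum(out)

print(Lam)   # Matrix([[1, W**2], [W, W]])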

Venkat Guruswami (University of Washington) http://www.cs.washington.edu/homes/venkat/

List decoding with optimal data rate

The fundamental trade-off in coding theory is the one between the rate of the code (a measure of the amount of redundancy introduced) and the number of errors that can be corrected. In this talk, I will describe an explicit construction of codes that achieves the optimal trade-off between these parameters, for a worst-case noise model where the channel can corrupt the codeword arbitrarily subject to a bound on the total number of errors.

Formally, for every 0 < R < 1 and eps > 0, we present an explicit construction of codes of rate R (which encode Rn message symbols into n codeword symbols) that can be list decoded in polynomial time from up to a fraction (1-R-eps) of errors. Clearly, one needs at least Rn correct symbols in the received word to have any hope of identifying the Rn message symbols, so recovering from (R+eps)n correct symbols is information-theoretically best possible. Our codes are simple to describe: they are certain *folded* Reed-Solomon codes, which are just Reed-Solomon codes, but viewed as a code over a larger alphabet by a careful bundling of codeword symbols.
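
To illustrate the bundling just described (the encoding side only, not the list-decoding algorithm), the sketch below evaluates a message polynomial at consecutive powers of a generator of F_q^* and groups m consecutive evaluations into one symbol over the larger alphabet; the field size q = 17, generator gamma = 3, folding parameter m = 4, and message are arbitrary illustrative choices.

# Sketch of folded Reed-Solomon encoding over F_q, q prime: evaluate the message
# polynomial at gamma^0, gamma^1, ..., gamma^(n-1) (gamma generating F_q^*), then
# bundle m consecutive evaluations into one symbol of the larger alphabet.
q = 17                      # a prime field, for illustration
gamma = 3                   # 3 generates F_17^* (order 16)
m = 4                       # folding parameter
message = [2, 5, 7]         # coefficients of the message polynomial, degree < k

def evaluate(poly, x):
    return sum(c * pow(x, i, q) for i, c in enumerate(poly)) % q

n = q - 1                                        # code length before folding
codeword = [evaluate(message, pow(gamma, i, q)) for i in range(n)]
folded = [tuple(codeword[i:i + m]) for i in range(0, n, m)]   # n/m bundled symbols

print(codeword)
print(folded)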

The talk will introduce the problem of list decoding, and then give a peek into the algebraic ideas and constructions that lead to the above result. Time permitting, I will also describe some challenging questions concerning algebraic-geometric codes that, if resolved, could improve the decoding complexity as one approaches the optimal trade-off.

Based on joint work with Atri Rudra (UW).

Leonid Gurvits (Los Alamos National Laboratory)

Polynomial time algorithms to approximate mixed volumes within a simply exponential factor

Let K = (K1, …, Kn) be an n-tuple of convex compact subsets of the Euclidean space ℝ^n, and let V(·) be the Euclidean volume in ℝ^n. The Minkowski polynomial VK is defined as VK(λ1, …, λn) = V(λ1K1 + … + λnKn), and the mixed volume V(K1, …, Kn) as

V(K1, …, Kn) = ∂^n/(∂λ1 … ∂λn) VK(λ1, …, λn).
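
As a toy check of this definition (not part of the talk's results), for two axis-parallel boxes in ℝ^2 the Minkowski polynomial can be expanded symbolically and the mixed partial derivative read off; for K1 = [0, a1] × [0, b1] and K2 = [0, a2] × [0, b2] it equals a1 b2 + a2 b1.

# Toy check of the definition above for two axis-parallel boxes in R^2:
# V_K(l1, l2) = (l1*a1 + l2*a2) * (l1*b1 + l2*b2), and the mixed volume is the
# mixed partial derivative d^2 V_K / (dl1 dl2) = a1*b2 + a2*b1.
from sympy import symbols, diff, expand

l1, l2, a1, a2, b1, b2 = symbols('l1 l2 a1 a2 b1 b2', positive=True)
V_K = (l1 * a1 + l2 * a2) * (l1 * b1 + l2 * b2)   # volume of l1*K1 + l2*K2
mixed = diff(V_K, l1, l2)
print(expand(mixed))    # a1*b2 + a2*b1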

The mixed volume is one of the backbones of convexity theory. After the BKH theorem, the mixed volume (and its generalizations) became crucially important in computational algebraic geometry.

We present in this talk randomized and deterministic algorithms to approximate the mixed volume of well-presented convex compact sets. Our main result is a poly-time randomized algorithm which approximates V(K1, …, Kn) with multiplicative error e^n, and with better rates if the affine dimensions of most of the sets Ki are small. Because of the famous Bárány-Füredi lower bound, even such a rate is not achievable by a poly-time deterministic oracle algorithm.

Our approach is based on a particular convex relaxation of log V(K1, …, Kn) via geometric programming. We prove mixed-volume analogues of the Van der Waerden and Schrijver/Valiant conjectures on the permanent. These results, interesting in their own right, allow us to "justify" the above-mentioned convex relaxation, which is solved using the ellipsoid method and a randomized poly-time algorithm for approximating the volume of a convex set.

Pascal Koiran (École Normale Supérieure de Lyon) http://perso.ens-lyon.fr/pascal.koiran/

Decision versus evaluation in algebraic complexity theory

Two main categories of problems are studied in algebraic complexity theory: evaluation problems and decision problems. A typical example of an evaluation problem is the evaluation of the permanent of a matrix. Such problems can be studied within Valiant's framework.

Deciding whether a multivariate polynomial has a real root is a typical example of a decision problem. This problem is NP-complete in the Blum-Shub-Smale model of computation over the real numbers.

In this talk I will present a transfer theorem which shows that if certain evaluation problems are easy, then other decision problems (including the above-mentioned NP-complete problem) are easy too.

Therefore, to show that P is different from NP over the real numbers, one should first be able to show that certain evaluation problems are hard.

V. Lakshmibai (Northeastern University) http://www.math.neu.edu/~lakshmibai/mathindex.html

Ubiquity of Schubert varieties

Hodge gave a basis for the homogeneous coordinate rings of the Grassmannian and its Schubert varieties in terms of "standard monomials" in the Plücker coordinates. This classical work of Hodge was extended to the generalized flag variety and its Schubert varieties by Lakshmibai-Littelmann-Musili-Seshadri. We shall give a survey talk on this generalization and will also relate many important algebraic varieties to Schubert varieties.

Joseph M. Landsberg (Texas A & M University) http://www.math.tamu.edu/~jml/

Geometry and the complexity of matrix multiplication

I'll give geometric formulations and interpretations of some of the main results regarding the complexity of matrix multiplication. I'll also explain the relevance of this geometry for areas such as algebraic statistics and quantum computing.

Grégoire Lecerf (Université Versailles/Saint Quentin-en-Yvelines)

New recombination techniques for polynomial factorization algorithms based on Hensel lifting

I will present new deterministic and probabilistic recombination algorithms to compute the irreducible factorization of bivariate polynomials via the classical Hensel lifting technique, and for the dense representation model. In bi-degree (dx, dy) with dy ≤ dx, and whatever the characteristic of the base field is, these algorithms only require the precision of the lifting to be dx+1. The cost of the deterministic recombination algorithm is sub-quadratic in dx dy, and the probabilistic version is faster by a factor of dy. At the end, I will explain how these algorithms can be adapted to the computation of the absolute factorization, and how they extend to more than 2 variables.
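
The recombination algorithm itself is beyond a short sketch, but the task it addresses (computing the irreducible factorization of a dense bivariate polynomial) can at least be illustrated with a general-purpose routine; sympy's factor below is not the Hensel-lifting algorithm of the talk, and the polynomial is an arbitrary example built from known factors.

# Illustration of the task only: irreducible factorization of a bivariate
# polynomial. sympy.factor is a general-purpose routine, not the
# Hensel-lifting/recombination algorithm described in the talk.
from sympy import symbols, factor, expand

x, y = symbols('x y')
f = expand((x**2 + x*y + 1) * (x*y**2 - y + 2))   # built from known irreducible factors
print(factor(f))    # recovers the two factors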

Kwankyu Lee (Korea Institute for Advanced Study (KIAS))

A complexity-reduced interpolation algorithm for soft-decision decoding of Reed-Solomon codes

Soon after Lee and O'Sullivan proposed a new interpolation algorithm for algebraic soft-decision decoding of Reed-Solomon codes, there were several attempts to apply a coordinate transformation technique to the new algorithm, with a remarkable complexity-reducing effect. We propose a conceptually simple way of applying the transformation technique to the interpolation algorithm.

Susan Margulies (University of California)

NP, coNP and the Nullstellensatz: lower bounds for stable set and graph coloring Nullstellensätze

Systems of polynomial equations over the complex numbers can be used to characterize NP-Complete graph-theoretic decision problems. From the point of view of computer algebra and symbolic computation, these are interesting polynomial systems because they are provably hard: solving them is as hard as solving the underlying NP-Complete problem. Furthermore, unless NP = coNP, there must exist infinitely many instances of these infeasible systems whose Hilbert Nullstellensatz certificates grow with respect to the underlying graphs.
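
As a small illustration of such encodings (a standard one, not specifically the stable set or coloring systems analyzed in the talk), 3-colorability of a graph can be written as the polynomial system x_i^3 = 1 for every vertex i and x_i^2 + x_i x_j + x_j^2 = 0 for every edge {i, j}; the system has no complex solution exactly when the graph is not 3-colorable, in which case the reduced Gröbner basis of the ideal is {1}.

# Standard polynomial encoding of 3-colorability: vertex i gets a variable x_i
# with x_i^3 = 1 (colors = cube roots of unity), and each edge {i, j} forces
# different colors via x_i^2 + x_i*x_j + x_j^2 = 0. The system has no complex
# solution iff the graph is not 3-colorable, in which case 1 lies in the ideal.
from sympy import symbols, groebner

def coloring_system(num_vertices, edges):
    xs = symbols(f'x0:{num_vertices}')
    polys = [x**3 - 1 for x in xs]
    polys += [xs[i]**2 + xs[i]*xs[j] + xs[j]**2 for i, j in edges]
    return polys, xs

triangle = [(0, 1), (1, 2), (0, 2)]                       # 3-colorable
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]  # not 3-colorable

for name, n, edges in [("triangle", 3, triangle), ("K4", 4, k4)]:
    polys, xs = coloring_system(n, edges)
    gb = groebner(polys, *xs, order='grevlex')
    print(name, "infeasible (GB == {1}):", gb.exprs == [1])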

Klaus Meer (Syddansk Universitet (University of Southern Denmark)) http://www.imada.sdu.dk/~meer/

The word problem for a class of groups with infinite presentation: A computationally universal problem in the BSS model

Joint work with Martin Ziegler.

The word problem for discrete groups is well-known to be undecidable by a Turing Machine; more precisely, it is reducible both to and from, and thus equivalent to, the discrete Halting Problem. The present work introduces and studies a real extension of the word problem for a certain class of groups which are presented as quotients of a free group by a normal subgroup. The main result establishes the word problem for such groups to be computationally equivalent to the Halting Problem for BSS machines over the reals. It thus provides the first non-trivial example of a concrete problem that is computationally universal for the BSS model.

Luis Miguel Pardo (University of Cantabria) http://personales.unican.es/pardol/

On Smale's 17th problem: a probabilistic solution in average polynomial time

In this talk I will discuss several conceptual aspects leading to a probabilistic positive solution of the following problem proposed by S. Smale: "Can a zero of n complex polynomial equations in n unknowns be found approximately, on the average, in polynomial time with a uniform algorithm?"

J. Maurice Rojas (Texas A & M University) http://www.math.tamu.edu/~rojas/

Descartes rule for complete fields and arbitrary codimension

We discuss an extension of Descartes' Rule (for univariate polynomials over the real numbers) to arbitrary k x n polynomial systems over a field L which is either the real numbers or the p-adic rationals. Over R, our extension fails for certain systems, but we can characterize precise regions (in the coefficient space) where our extension holds. Over the p-adic rationals, our extension holds over even larger regions in the coefficient space. We then conclude with a conjecture on what optimal fewnomial estimates should look like in general.
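
For reference, the classical univariate rule being extended here bounds the number of positive real roots by the number of sign changes in the coefficient sequence (and agrees with it modulo 2); the quick check below uses an arbitrary example polynomial.

# Classical Descartes' rule in one variable: the number of positive real roots
# is at most the number of sign changes in the coefficient sequence, with the
# same parity. Example polynomial chosen arbitrarily.
import numpy as np

coeffs = [1, -3, -1, 3]        # x^3 - 3x^2 - x + 3 = (x-3)(x-1)(x+1)

def sign_changes(cs):
    signs = [c for c in cs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

roots = np.roots(coeffs)
positive_real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
print("sign changes:", sign_changes(coeffs))           # 2
print("positive real roots:", sorted(positive_real))   # approximately [1.0, 3.0]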

J. Maurice Rojas (Texas A & M University) http://www.math.tamu.edu/~rojas/

A critical radius for low complexity

Just as the number of real roots of a real univariate quadratic depends on the sign of the discriminant, the topological behavior of real zero sets depends on (more general) A-discriminant variety complements. More recently, in numerical linear algebra (and nonlinear work of Shub, Smale, Beltran, Pardo, and other authors), the relationship between the numerical behavior of zero sets and distance to the discriminant variety has been clarified.

In this talk, we review some of the connections between A-discriminants, the topology of real algebraic sets, and the complexity of solving polynomial systems. In particular, we show that outside a ball of sufficiently large radius (in the coefficient space), one can assert the following with high probability:

  1. a new upper bound on the number of real roots of a fewnomial system, significantly improving Khovanskii's famous result, and
  2. the truth of a formerly broken conjecture of Itenberg and Roy.

Our main results are joint work in progress with Martin Avendano. We also discuss a connection to a generalization of Smale's 17th Problem. No background in algebraic geometry is assumed.

Diego Ruano (Universität Kaiserslautern) http://www.mathematik.uni-kl.de/~ruano/

Metric structure of linear codes

We use the study of bilinear forms over a finite field to give a decomposition of linear codes similar to the one in [D. Ruano: On the structure of generalized toric codes, arXiv:cs.IT/0611010, 2006] for generalized toric codes. This decomposition, called the geometric decomposition of a linear code, can be obtained in a constructive way; it allows one to express the dual of a linear code easily and gives a new paradigm for linear codes. The proofs for characteristic 2 are different, but they are developed in parallel.

Meera Sitharam (University of Florida)

Nonextendibility of mutually unbiased bases

Two orthonormal bases of a finite-dimensional Hilbert space are mutually unbiased if, for every pair of vectors, one from each basis, the modulus of the standard inner product is the same. The problem of determining bounds on the maximum number of such bases (MUBs) for a fixed dimension has been an important open problem for 20 years and has been shown to arise in different guises in many areas of mathematics. The most common application of MUBs is found in quantum cryptography, where MUBs are the quantum states used in many protocols.
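
For dimension d = 2 the eigenbases of the Pauli Z, X, and Y operators form three mutually unbiased bases, the maximum possible d + 1; the routine check below verifies the defining condition |⟨e_i, f_j⟩|^2 = 1/d and is an illustration only, not part of the new results.

# In dimension d = 2, the eigenbases of Pauli Z, X, Y are mutually unbiased:
# every cross-basis inner product has squared modulus 1/d = 1/2.
import numpy as np
from itertools import combinations

s = 1 / np.sqrt(2)
Z_basis = np.array([[1, 0], [0, 1]], dtype=complex)            # columns |0>, |1>
X_basis = np.array([[s, s], [s, -s]], dtype=complex)           # |+>, |->
Y_basis = np.array([[s, s], [1j * s, -1j * s]], dtype=complex) # |+i>, |-i>

for B1, B2 in combinations([Z_basis, X_basis, Y_basis], 2):
    overlaps = np.abs(B1.conj().T @ B2) ** 2
    assert np.allclose(overlaps, 0.5)
print("all three pairs are mutually unbiased")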

Here we consider two questions and provide preliminary results and conjectures:
  1. When is a given set of MUBs non-extendible? That is, characterize mutually unbiased bases M1, M2, …, Mk for which there is no basis M unbiased to all of M1, …, Mk.
  2. Characterize families of bases in which non-extendibility of a set of MUBs implies maximality, and determine whether there are natural families of bases with this property. Within such a family, a greedy method for constructing MUBs would guarantee a maximal-cardinality MUB collection.

This is joint work with Oscar Boykin and Mohamad Tarifi at the University of Florida.

Milind Sohoni (Indian Institute of Technology)

Geometric complexity theory (GCT)

We will outline a possible approach to outstanding computational complexity theory (CCT) questions. The approach relies on a faithful mapping of these questions to questions in geometric invariant theory (GIT) and representation theory. We begin with a construction of Valiant and show its connection to the Orbit Closure and Membership problems in GIT. Proofs of non-existence of algorithms translate to the existence of obstructions. This immediately leads to the important notion of stability in invariant theory. For stable points, we show that obstructions are easily constructed.

We then outline proofs of the stability of forms such as the determinant and the permanent, which are important in CCT. We next define our notion of partially stable points. We pose our problems as (i) understanding the orbit closure for stable points with distinctive stabilizers, and (ii) extending these results to partially stable points. We finish with an outline of our current attempts at these two problems, and also relate them to the work of other researchers.

This is joint work with Ketan Mulmuley.

Frank Sottile (Texas A & M University) http://www.math.tamu.edu/~sottile/

New fewnomial upper bounds

In 1980, Askold Khovanskii established his fewnomial bound for the number of real solutions to a system of polynomials, showing that the complexity of the set of real solutions to a system of polynomials depends upon the number of monomials and not on the degree. The bound in this fundamental finiteness result of real algebraic geometry is, however, believed to be unrealistically large.

I will describe joint work with Frédéric Bihan on a new fewnomial bound which is substantially lower than Khovanskii's bound and asymptotically optimal. This bound is obtained by first reducing a given system to a Gale system, and then bounding the number of solutions to a Gale system. Like Khovanskii's bound, this bound is the product of an exponential function and a polynomial in the dimension, with the exponents in both terms depending upon the number of monomials. In our bound, the exponents are smaller than in Khovanskii's.

John Voight (University of Minnesota) http://magma.maths.usyd.edu.au/~voight/

Computing zeta functions of sparse, nondegenerate hypersurfaces

We discuss the efficient computation of the zeta function of a hypersurface X defined over a finite field k. The zeta function Z(X,T) of X is the exponential generating series for the number of points on X defined over k and its finite extensions; Z(X,T) is a rational function in T. Zeta functions arise naturally in coding theory, arithmetic geometry, complexity theory, and many other areas. We consider the situation when X is affine, toric, or projective, when its defining equation f is nondegenerate (a condition which implies that X is smooth and which holds outside of a codimension 1 subset), and when f is sparse (consisting of few monomials). In this situation, one can use Dwork cohomology to give a very efficient method of computing Z(X,T). We exhibit this method and provide several examples.
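
For reference, the standard definition behind this description, with N_r denoting the number of points of X over the degree-r extension of k = F_q, is

Z(X,T) = \exp\Big( \sum_{r \ge 1} N_r \, \frac{T^r}{r} \Big).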

Judy L. Walker (University of Nebraska) http://www.math.unl.edu/~jwalker/

Pseudocodeword connections

The term "pseudocodeword" is used in relationship to decoding linear block codes in at least three different ways. In linear programming decoding (introduced by Feldman), any vertex of the fundamental polytope is called a pseudocodeword. In a rigorous formulation (as first proposed by Wiberg) of min-sum or sum-product decoding, any valid configuration of any computation tree is called a pseudocodeword. And, using a more intuitive interpretation of iterative message-passing decoding, any valid configuration on any finite cover of the Tanner graph corresponds to a pseudocodeword (this is made precise through Graph Cover Decoding, proposed by Vontobel and Koetter). Though some justification of the triple-use of the term has been given, these three notions of pseudocodeword are indeed distinct. In this talk, we will describe the connections and disconnections involved.

Second Chances and Closing Remarks

Second Chances and Open Problem Session
