An Exact Toric Resultant-Based RUR Approach for Solving Polynomial Systems


Presentation Transcript


  1. An Exact Toric Resultant-Based RUR Approach for Solving Polynomial Systems Koji Ouchi, John Keyser, J. Maurice Rojas Departments of Computer Science and Mathematics, Texas A&M University AMS Meeting 2004

  2. Outline • Rational Univariate Reduction (RUR) • Complexity Analysis • Exact RUR • Comparison with Other Work • Conclusion / Future Work

  3. Outline • Rational Univariate Reduction (RUR) • Complexity Analysis • Exact RUR • Comparison with Other Work • Conclusion / Future Work

  4. Rational Univariate Reduction • Problem: Solve a system of n polynomials f1, …, fn in n variables X1, …, Xn with coefficients in ℚ • Reduce the system to n + 1 univariate polynomials h, h1, …, hn with coefficients in ℚ s.t. if q is a root of h then (h1(q), …, hn(q)) is a solution to the system
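
To make the reduction concrete, here is a minimal sketch (Python/SymPy) of how an already-computed RUR is consumed. The tiny system and its RUR are hand-picked toy data for illustration, not output of the algorithm in this talk.

```python
# Toy illustration: for the system f1 = X1**2 - 2, f2 = X2 - X1, one valid RUR is
#   h(T) = T**2 - 2,  h1(T) = T,  h2(T) = T,
# so every root q of h yields the solution (h1(q), h2(q)) of the system.
from sympy import symbols, roots

T = symbols('T')
h  = T**2 - 2
h1 = T
h2 = T

for q in roots(h, T):                        # the two roots +/- sqrt(2)
    print((h1.subs(T, q), h2.subs(T, q)))    # (sqrt(2), sqrt(2)) and (-sqrt(2), -sqrt(2))
```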

  5. RUR via Toric Resultant • Notation • u = (u0, u1, …, un) indeterminates • f0 = u0 + u1X1 + … + unXn • Ai = Supp(fi), i = 0, 1, …, n ∴ A0 = {o, e1, …, en}, where ei is the i-th standard basis vector

  6. Toric Perturbation • Toric Generalized Characteristic Polynomial Let f1*, …, fn* be n polynomials in n variables X1, …, Xn with coefficients in ℚ and Supp(fi*) ⊆ Ai = Supp(fi), i = 1, …, n, that have only finitely many common solutions in (ℂ \ {0})^n Define TGCP(u, Y) = Res(A0, A1, …, An)(f0, f1 - Y f1*, …, fn - Y fn*)

  7. Toric Perturbation • Toric Perturbation [Rojas 99] Define Pert(u) to be the non-zero coefficient of the lowest degree term (in Y ) of TGCP(u, Y ) • Pert(u) is well-defined • A version of “perturbations” [D’Andrea and Emiris 01, 03]
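
As a sanity check on these definitions, the following SymPy sketch runs the construction in the simplest case n = 1, where the toric resultant of (f0, f1) is the ordinary Sylvester resultant; the particular f1 and f1* below are made-up examples. For this nondegenerate input the Y^0 coefficient is already nonzero, so Pert(u) coincides with the plain resultant; the perturbation only kicks in for degenerate inputs.

```python
from sympy import symbols, resultant, Poly, factor

X1, u0, u1, Y = symbols('X1 u0 u1 Y')

f0  = u0 + u1*X1               # the extra linear form with indeterminate coefficients
f1  = (X1 - 2)*(X1 - 3)        # toy input polynomial with roots 2 and 3
f1s = X1**2 - 2                # toy perturbation f1*, Supp(f1*) ⊆ Supp(f1)

# TGCP(u, Y) = Res(f0, f1 - Y*f1*); for n = 1 this is the Sylvester resultant in X1
tgcp = resultant(f0, f1 - Y*f1s, X1)

# Pert(u) = the nonzero coefficient of the lowest-degree term of TGCP in Y
coeffs = Poly(tgcp, Y).all_coeffs()[::-1]      # constant term first
pert = next(c for c in coeffs if c != 0)
print(factor(pert))            # (u0 + 2*u1)*(u0 + 3*u1): one linear factor per root
```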

  8. Toric Perturbation • Toric Perturbation • If (ζ1, …, ζn) ∈ (ℂ \ {0})^n is an isolated root of the input system f1, …, fn, then the linear form u0 + u1ζ1 + … + unζn divides Pert(u) • Pert(u) splits completely into linear factors over ℂ • For every irreducible component of the zero set of the input system, there is at least one corresponding factor of Pert(u)

  9. Computing RUR • Step 1: Compute Mixed Volumes • Step 2: Construct a Resultant Matrix • Step 3: Compute h • Step 4: Compute h1, …, hn

  10. 1. Mixed Volumes 2. Resultant Matrix 3. h 4. h1, …, hn Computing RUR • Step 1: Compute Mixed Volumes Use Emiris's algorithm [Emiris and Canny 95, 01] to compute MV-i = MV(A0, A1, …, Ai-1, Ai+1, …, An), i = 0, 1, …, n • Uses linear programming • #P on a Turing machine
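
In two variables the quantity being computed is easy to illustrate: MV(A1, A2) = area(Q1 + Q2) - area(Q1) - area(Q2), where Qi is the convex hull of Ai. The sketch below evaluates that inclusion-exclusion formula directly with exact rational arithmetic; it only illustrates the definition for n = 2 and is not Emiris's linear-programming algorithm.

```python
from fractions import Fraction
from itertools import product

def cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0] - o[0])*(b[1] - o[1]) - (a[1] - o[1])*(b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    hull = convex_hull(points)
    if len(hull) < 3:
        return Fraction(0)
    s = Fraction(0)
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        s += Fraction(x1)*y2 - Fraction(x2)*y1     # shoelace formula
    return abs(s) / 2

def mixed_volume_2d(A, B):
    # MV(A, B) = area(conv(A) + conv(B)) - area(conv(A)) - area(conv(B))
    minkowski = [(a[0] + b[0], a[1] + b[1]) for a, b in product(A, B)]
    return hull_area(minkowski) - hull_area(A) - hull_area(B)

# Supports of two dense degree-2 polynomials in X1, X2 (two generic conics)
conic = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (0, 2)]
print(mixed_volume_2d(conic, conic))   # 4, matching the Bezout number for two conics
```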

  11. 1. Mixed Volumes 2. Resultant Matrix 3. h 4. h1, …, hn Computing RUR • Step 2: Construct a Resultant Matrix Use Emiris's algorithm [Emiris and Canny 95] to construct a matrix whose maximal minor is some multiple of the toric resultant • Rows and columns are labeled by the exponents in A0, A1, …, An • Increment rows and columns until a non-vanishing maximal minor is found

  12. 1. Mixed Volumes 2. Resultant Matrix 3. h 4. h1, …, hn Computing RUR • Step 2: Construct a Resultant Matrix (Cont.) • [Pedersen and Sturmfels 93] deg_fi Res(A0, A1, …, An)(f0, f1, …, fn) = MV-i, i = 0, 1, …, n, i.e., the degree of the toric resultant in the coefficients of fi is the mixed volume MV-i

  13. 1. Mixed Volumes 2. Resultant Matrix 3. h 4. h1, …, hn Computing RUR • Step 2: Construct a Resultant Matrix (Cont.) • Degeneracies have been removed by perturbation • The size of the matrix must be at least Σ i = 0, 1, …, n MV-i • # of rows labeled by the exponents in Ai ≥ MV-i, i = 0, 1, …, n • # of rows labeled by the exponents in A0 = MV-0 ∴ deg_f0 D = MV-0, where D is the maximal minor

  14. 1. Mixed Volumes 2. Resultant Matrix 3. h 4. h1, …, hn Computing RUR • Step 3: Compute h(T) h(T) = Pert(T, u1, …, un) for some fixed values of u1, …, un • Assign values to u1, …, un • Evaluate Pert(u0, u1, …, un) at deg h(T) = MV-0 distinct values of u0 and interpolate
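
The evaluate-and-interpolate step can be sketched as follows, treating the specialized Pert as a black box that returns exact rational values; the toy black box below just hides a known quadratic, whereas in the actual algorithm each value comes from a maximal minor of the resultant matrix. Sampling at deg h + 1 rational points determines h exactly over ℚ.

```python
from sympy import symbols, interpolate, Rational, expand

T = symbols('T')

def pert_at(u0):
    # stand-in black box for Pert(u0, u1, ..., un) with u1, ..., un fixed;
    # here it secretly equals (u0 + 2)*(u0 + 3)
    return (u0 + 2)*(u0 + 3)

deg_h = 2                                   # plays the role of MV-0 in the talk's notation
samples = [(Rational(k), pert_at(Rational(k))) for k in range(deg_h + 1)]
h = expand(interpolate(samples, T))         # exact interpolation over Q
print(h)                                    # T**2 + 5*T + 6
```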

  15. 1. Mixed Volumes 2. Resultant Matrix 3. h 4. h1, …, hn Computing RUR • Step 4: Compute h1(T), …, hn(T) Computation of every hi involves • Evaluating Pert(u) and interpolating • Univariate polynomial operations • Euclidean algorithm for GCD • First subresultant [Gonzalez-Vega 91]
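
For intuition about what the hi encode, here is a small SymPy sketch of one textbook way to extract coordinate polynomials from Pert when h is squarefree, via partial derivatives and inversion modulo h. The talk's actual route goes through GCDs and the first subresultant [Gonzalez-Vega 91]; the toy Pert below is built directly from two known solutions rather than from a resultant matrix.

```python
from sympy import symbols, diff, rem, invert, expand

u0, u1, u2, T = symbols('u0 u1 u2 T')

# toy Pert(u) for the two solutions zeta = (2, 3) and (5, 7)
pert = (u0 + 2*u1 + 3*u2)*(u0 + 5*u1 + 7*u2)

c = {u1: 1, u2: 1}                       # fixed generic values of u1, ..., un
h = expand(pert.subs(c).subs(u0, T))     # h(T) = (T + 5)*(T + 12)

d0 = diff(pert, u0).subs(c).subs(u0, T)
d1 = diff(pert, u1).subs(c).subs(u0, T)

# h1(T) = (dPert/du1)*(dPert/du0)^(-1) mod h(T); at each root of h its value is
# the X1-coordinate of the corresponding solution (h2 comes from dPert/du2 likewise)
h1 = rem(expand(d1*invert(d0, h)), h, T)
print(h1.subs(T, -5), h1.subs(T, -12))   # 2 and 5
```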

  16. Computing RUR • All the steps can be implemented exactly • The coefficients of h, h1, …, hn can be computed exactly, to full precision

  17. Outline • Rational Univariate Reduction (RUR) • Complexity Analysis • Exact RUR • Comparison with Other Work • Conclusion / Future Work

  18. Complexity Analysis • Notation • O˜( ): polylog factors are ignored • ω: the exponent of matrix arithmetic, i.e., Gaussian elimination of an m-dimensional matrix requires O(m^ω) operations

  19. Complexity Analysis • Quantities • MA := MV-0 = deg h(T) • RA := Σ i = 0, 1, …, n MV-i, the size of the optimal resultant matrix • SA := the size of the maximal minor • SA = O(n^(1/2) e^n RA)

  20. Complexity Analysis • # of Arithmetic Operations • Evaluate Res(A0, A1, …, An): O˜(SA^(1+ω)) • Evaluate Pert(u): O˜(SA^(1+ω)) • Compute h: O˜(MA SA^(1+ω)) • Compute every hi: O˜(MA SA^(1+ω)) • Compute RUR for fixed u1, …, un: O˜(n MA SA^(1+ω)) • Compute RUR: O˜(n^3 MA^3 SA^(1+ω))

  21. Complexity Analysis • Bit Complexity • The logarithmic height of h, h1, …, hn is bounded by some polynomial in SA [Rojas 00] and in RA [Sombra] • The bit complexity is single exponential in n

  22. Complexity Analysis • A great speedup would be achieved if we could compute a "small" matrix whose determinant is exactly the resultant, but no such general method is known • Resultant matrices • Sylvester-Dixon [Chtcherba and Kapur] • Corner-cutting [Goldman and Zhang 00] • Tate resolution [Khetan 03, 04]

  23. Khetan's Method • Khetan's method gives a matrix whose determinant is the resultant of unmixed systems when n = 2 or 3 (or bigger?) [Khetan 03, 04] • Let B = A0 ∪ A1 ∪ … ∪ An. Then computing the RUR requires n^3 MA^3 RB^(1+ω) arithmetic operations

  24. Outline • Rational Univariate Reduction (RUR) • Complexity Analysis • Exact RUR • Comparison with Other Work • Conclusion / Future Work

  25. ERUR • A non-square system is converted to a square system • Solutions in ℂ^n are computed by adding the origin o to the supports • In both cases, post-processing requires exact computation over the points of the RUR

  26. ERUR • Exact Sign • Given an expression e, tell whether or not e(h1(q), …, hn(q)) = 0 • Use an (extended) root-bound approach • Use Aberth's method [Aberth 73] to numerically compute an approximation to a root of h to any precision
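
The numerical half of this test can be sketched with mpmath: refine a root approximation until the candidate value is either safely away from zero or below a separation threshold. Both thresholds below are placeholders; in ERUR they are replaced by certified (extended) root bounds, which is what turns the loop into an exact decision.

```python
import mpmath

def is_zero_at_root(h_coeffs, hs, expr, root_index=0,
                    zero_bound=mpmath.mpf('1e-30'), max_prec=4096):
    """Heuristic sketch: does expr(h1(q), ..., hn(q)) vanish at the root_index-th
    root q of h?  zero_bound stands in for an exact root bound."""
    prec = 64
    while prec <= max_prec:
        mpmath.mp.prec = prec
        # numerical polynomial roots (mpmath refines with Durand-Kerner;
        # the talk uses Aberth's method for the same purpose)
        q = mpmath.polyroots(h_coeffs, extraprec=prec)[root_index]
        val = expr(*[hi(q) for hi in hs])
        if abs(val) > mpmath.mpf(2)**(-prec // 2):
            return False               # comfortably nonzero at this precision
        if abs(val) < zero_bound:
            return True                # below the placeholder separation bound
        prec *= 2                      # undecided: refine and try again
    raise RuntimeError("undecided within this sketch's precision cap")

# toy RUR: h(T) = T**2 - 2, h1(T) = T, h2(T) = T; does X1*X2 - 2 vanish there?
print(is_zero_at_root([1, 0, -2],
                      [lambda q: q, lambda q: q],
                      lambda x1, x2: x1*x2 - 2))   # True
```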

  27. Applications of ERUR • Real Roots • Given a system of polynomial equations, list all the real roots of the system • Positive-Dimensional Components • Given a system of polynomial equations, tell whether or not the zero set of the system has a positive-dimensional component

  28. Outline • Rational Univariate Reduction (RUR) • Complexity Analysis • Exact RUR • Comparison with Other Work • Conclusion / Future Work

  29. The Other RUR • GB+RS • [Rouillier 99, 04] • Kronecker / Newton • [Giusti, Lecerf and Salvy 01] • [Jeronimo, Krick, Sabia and Sombra 04]

  30. The Other RUR • GB+RS [Rouillier 99, 04] • Computes the exact RUR for the real solutions of a 0-dimensional system • GB computes the Gröbner basis • Gröbner basis computation is EXPSPACE-complete (double exponential in n) on a Turing machine [Mayr and Meyer 98]

  31. The Other RUR • Kronecker / Newton • [Giusti, Lecerf and Salvy 01] • Kronecker in Magma • [Jeronimo, Krick, Sabia and Sombra 04] • BPP on a BSS machine over ℚ

  32. Outline • Rational Univariate Reduction (RUR) • Complexity Analysis • Exact RUR • Comparison with Other Work • Conclusion / Future Work

  33. Implementation • ERUR • The algorithms adapt naturally to an exact implementation • Strong at handling degeneracies • Needs more optimizations and faster algorithms

  34. Conclusion • Deterministic algorithm • Handles degeneracies by perturbation • The total degree of Pert(u) is RA • Uses the incremental matrix construction algorithm • Currently the most efficient, starting from a matrix of size RA • The exponential factor in the complexity comes from the size of the resultant matrix

  35. Future Work • Faster toric resultant algorithms • Smaller resultant matrices • Take advantage of the sparseness of the matrices [Emiris and Pan 97] • Faster univariate polynomial operations • Use rational functions for h1, …, hn

  36. Thank you for listening! • Contact • Koji Ouchi, kouchi@cs.tamu.edu • John Keyser, keyser@cs.tamu.edu • Maurice Rojas, rojas@math.tamu.edu • Visit our web page • http://research.cs.tamu.edu/keyser/geom/ERUR/ Thank you
