
Computing the Rational Univariate Reduction by Sparse Resultants


Presentation Transcript


  1. Computing the Rational Univariate Reduction by Sparse Resultants • Koji Ouchi, John Keyser, J. Maurice Rojas • Department of Computer Science and Department of Mathematics, Texas A&M University • ACA 2004

  2. Outline • What is Rational Univariate Reduction? • Computing RUR by Sparse Resultants • Complexity Analysis • Exact Implementation

  3. Rational Univariate Reduction • Problem: Solve a system of n polynomials f1, …, fn in n variables X1, …, Xn with coefficients in the field K • Reduce the system to n + 1 univariate polynomials h, h1, …, hn with coefficients in K such that if q is a root of h, then (h1(q), …, hn(q)) is a solution of the system
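
For example (a small hand-worked instance, not output of the algorithm described below): for f1 = x^2 + y^2 − 1 and f2 = x − y, one RUR is h(T) = 2T^2 − 1 with h1(T) = h2(T) = T. A quick numerical check that roots of h map to solutions of the system:

```python
import math

# Hand-worked example system: f1 = x^2 + y^2 - 1, f2 = x - y.
f1 = lambda x, y: x**2 + y**2 - 1
f2 = lambda x, y: x - y

# Candidate RUR: h(T) = 2T^2 - 1, with coordinate polynomials h1(T) = h2(T) = T.
h_roots = [math.sqrt(0.5), -math.sqrt(0.5)]   # the two roots of h
h1 = lambda t: t
h2 = lambda t: t

# Every root q of h must yield a solution (h1(q), h2(q)) of the system.
for q in h_roots:
    x, y = h1(q), h2(q)
    assert abs(f1(x, y)) < 1e-12 and abs(f2(x, y)) < 1e-12
print("all roots of h map to solutions of the system")
```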

  4. RUR via Sparse Resultant • Notation • ei: the i-th standard basis vector • A0 = {o, e1, …, en}, where o is the origin • u0, u1, …, un: indeterminates • Ai = Supp(fi): the support of fi • K̄: the algebraic closure of K

  5. Toric Perturbation • Toric Generalized Characteristic Polynomial • Let f1*, …, fn* be n polynomials in n variables X1, …, Xn with coefficients in K and Supp(fi*) ⊆ Ai = Supp(fi), i = 1, …, n, that have only finitely many solutions in (K̄ \ {0})^n • Define TGCP(u, Y) = Res(A0, A1, …, An)(∑a∈A0 ua X^a, f1 − Y f1*, …, fn − Y fn*)

  6. Toric Perturbation • Toric Perturbation [Rojas 99] • Define Pert(u) to be the non-zero coefficient of the lowest-degree term (in Y) of TGCP(u, Y) • Pert(u) is well-defined • A version of the “projective operator” technique [Rojas 98, D’Andrea and Emiris 03]

  7. Toric Perturbation • Toric Perturbation • If (ζ1, …, ζn) ∈ (K̄ \ {0})^n is an isolated root of the input system f1, …, fn, then u0 + u1 ζ1 + ⋯ + un ζn divides Pert(u) • Pert(u) splits completely into linear factors over K̄. For every irreducible component of the zero set of the input system in (K̄ \ {0})^n, there is at least one corresponding linear factor of Pert(u)

  8. Computing RUR • Step 1: Compute Pert(u) • Use the Canny–Emiris sparse resultant algorithm [Canny and Emiris 93, 95, 00] to construct the Newton matrix, whose determinant is some multiple of the resultant • Evaluate the resultant at distinct values of u and interpolate
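
Steps 1 and 2 follow an evaluate-and-interpolate pattern: a determinant is evaluated at many rational points, and the polynomial is recovered by interpolation in one variable at a time. A minimal univariate sketch over exact rationals, where `black_box` is a hypothetical stand-in for the (much more expensive) resultant evaluation:

```python
from fractions import Fraction

def lagrange_interpolate(points):
    """Exact Lagrange interpolation: given (x_i, y_i) with distinct x_i,
    return coefficients c[0..d] (low-to-high) of the unique polynomial
    of degree <= d passing through all the points."""
    d = len(points) - 1
    coeffs = [Fraction(0)] * (d + 1)
    for i, (xi, yi) in enumerate(points):
        # Build the i-th Lagrange basis polynomial, then scale by y_i.
        basis = [Fraction(1)]          # the constant polynomial 1
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            # Multiply basis by (x - xj).
            new = [Fraction(0)] * (len(basis) + 1)
            for k, c in enumerate(basis):
                new[k + 1] += c
                new[k] -= c * xj
            basis = new
            denom *= (xi - xj)
        for k, c in enumerate(basis):
            coeffs[k] += yi * c / denom
    return coeffs

# Hypothetical black box standing in for a determinant/resultant
# evaluation: p(u) = 3u^2 - u + 5, degree 2, so 3 sample points suffice.
black_box = lambda u: Fraction(3) * u * u - u + 5
pts = [(Fraction(u), black_box(Fraction(u))) for u in range(3)]
print(lagrange_interpolate(pts))   # -> [Fraction(5, 1), Fraction(-1, 1), Fraction(3, 1)]
```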

  9. Computing RUR • Step 2: Compute h(T) • Set h(T) = Pert(T, u1, …, un) for some fixed values of u1, …, un • Evaluate Pert(u) at distinct values of u0 and interpolate

  10. Computing RUR • Step 3: Compute h1(T), …, hn(T) • Computation of hi involves: evaluating Pert(u), interpolating, and some univariate polynomial operations
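
The “univariate polynomial operations” here are ordinary exact polynomial arithmetic over the rationals. A generic sketch (not the authors’ specific routine) of division with remainder, with coefficient lists ordered low-to-high:

```python
from fractions import Fraction

def polydiv(num, den):
    """Exact division with remainder for univariate polynomials over Q.
    Coefficient lists are low-to-high: [c0, c1, ...] means c0 + c1*x + ...
    Assumes den has a non-zero leading coefficient."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = [Fraction(0)] * max(1, len(num) - len(den) + 1)
    rem = num[:]
    while len(rem) >= len(den) and any(rem):
        shift = len(rem) - len(den)
        factor = rem[-1] / den[-1]      # cancel the leading term of rem
        quot[shift] = factor
        for i, c in enumerate(den):
            rem[shift + i] -= factor * c
        while rem and rem[-1] == 0:     # drop trailing zero coefficients
            rem.pop()
    return quot, rem

# (T^2 - 1) divided by (T - 1): quotient T + 1, remainder 0.
q, r = polydiv([-1, 0, 1], [-1, 1])
print(q, r)   # quotient T + 1, remainder 0
```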

  11. Complexity Analysis • Count the number of arithmetic operations • Notation • O˜( ): polylogarithmic factors are ignored • Gaussian elimination of an m × m matrix requires O(m^ω) operations, where ω is the matrix multiplication exponent
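
Gaussian elimination over exact rationals is the basic linear-algebra primitive behind these counts; the classical algorithm uses O(m^3) field operations for an m × m matrix (fast matrix multiplication lowers the exponent). A minimal determinant sketch:

```python
from fractions import Fraction

def det_gauss(rows):
    """Determinant of a square matrix by Gaussian elimination with
    exact rational arithmetic: O(m^3) field operations for an m x m matrix."""
    a = [[Fraction(x) for x in row] for row in rows]
    m = len(a)
    det = Fraction(1)
    for col in range(m):
        # Find a non-zero pivot at or below the diagonal.
        piv = next((r for r in range(col, m) if a[r][col] != 0), None)
        if piv is None:
            return Fraction(0)          # singular matrix
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            det = -det                  # a row swap flips the sign
        det *= a[col][col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
    return det

print(det_gauss([[2, 1], [1, 3]]))   # -> 5
```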

  12. Complexity Analysis • Quantities • MA: the mixed volume MV(A1, …, An) of the convex hulls of A1, …, An • RA: MV(A1, …, An) + ∑i=1,…,n MV(A0, A1, …, Ai−1, Ai+1, …, An), the total degree of the sparse resultant • SA: the dimension of the Newton matrix; possibly exponentially bigger than RA
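
For n = 2 the mixed volume has a simple inclusion-exclusion form, MV(Q1, Q2) = Area(Q1 + Q2) − Area(Q1) − Area(Q2), where Qi is the convex hull of Ai. A sketch that computes it directly from the supports (convex hull, Minkowski sum, shoelace area), checked on two dense quadrics, where MV equals the Bézout number 2 · 2 = 4:

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    """Convex hull via Andrew's monotone chain; returns vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def area2(poly):
    """Twice the area of a polygon (shoelace); an integer for lattice points."""
    n = len(poly)
    if n < 3:
        return 0
    return abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1] for i in range(n)))

def mixed_volume(A1, A2):
    """MV(A1, A2) = Area(Q1 + Q2) - Area(Q1) - Area(Q2), Qi = conv(Ai).
    Doubled areas keep everything in integers; halve at the end."""
    msum = [(p[0] + q[0], p[1] + q[1]) for p in A1 for q in A2]
    s = area2(hull(msum)) - area2(hull(A1)) - area2(hull(A2))
    return s // 2

# Support of a dense quadric in x, y: lattice points of 2 * unit simplex.
quad = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
print(mixed_volume(quad, quad))   # -> 4, matching the Bezout bound 2*2
```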

  13. Complexity Analysis • [Emiris and Canny 95] • Evaluating Res(A0, A1, …, An)(∑a∈A0 ua X^a, f1, …, fn) requires O˜(n RA SA), or O˜(SA^(1+ε)) if char K = 0

  14. Complexity Analysis • [Rojas 99] • Evaluating Pert(u) requires O˜(n RA^2 SA), or O˜(SA^(1+ε)) if char K = 0

  15. Complexity Analysis • Computing h(T) requires O˜(n MA RA^2 SA), or O˜(MA SA^(1+ε)) if char K = 0

  16. Complexity Analysis • Computing every hi(T) requires O˜(n MA RA^2 SA), or O˜(MA SA^(1+ε)) if char K = 0

  17. Complexity Analysis • Computing the RUR h(T), h1(T), …, hn(T) for fixed u1, …, un requires O˜(n^2 MA RA^2 SA), or O˜(n MA SA^(1+ε)) if char K = 0

  18. Complexity Analysis • Derandomizing the choice of u1, …, un • Computing the RUR h(T), h1(T), …, hn(T) requires O˜(n^4 MA^3 RA^2 SA), or O˜(n^3 MA^3 SA^(1+ε)) if char K = 0

  19. Complexity Analysis

  20. Complexity Analysis • A great speedup could be achieved if we could compute a “small” Newton matrix whose determinant is exactly the resultant • No such method is known

  21. Khetan’s Method • Khetan’s method gives a Newton matrix whose determinant is exactly the resultant of an unmixed system when n = 2 or 3 [Khetan 03, 04] • Let B = A0 ∪ A1 ∪ ⋯ ∪ An • Then computing the RUR requires n^3 MA^3 RB^(1+ε) arithmetic operations

  22. ERUR: Implementation • Current implementation • The coefficients are rational numbers • Uses the sparse resultant algorithm [Canny and Emiris 93, 95, 00] to construct the Newton matrix • All the coefficients of the RUR h, h1, …, hn are exact

  23. ERUR • A non-square system is converted to a square system • Solutions in K̄^n (i.e., with possibly zero coordinates) are computed by adding the origin o to the supports

  24. ERUR • Exact Sign • Given an expression e, tell whether or not e(h1(q), …, hn(q)) = 0 • Uses an (extended) root-bound approach • Uses Aberth’s method [Aberth 73] to numerically approximate a root of h to any desired precision
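
A minimal, uncertified sketch of the Aberth–Ehrlich iteration mentioned above; the actual ERUR implementation controls precision via root bounds, which this toy version omits, and the circle of starting points is a common but non-certified choice:

```python
import cmath

def aberth(coeffs, tol=1e-12, max_iter=100):
    """Aberth-Ehrlich simultaneous refinement of all roots of a univariate
    polynomial; coeffs are low-to-high, so coeffs[k] multiplies z**k."""
    d = len(coeffs) - 1
    p = lambda z: sum(c * z**k for k, c in enumerate(coeffs))
    dp = lambda z: sum(k * c * z**(k - 1) for k, c in enumerate(coeffs) if k)
    # Distinct starting points on the unit circle, offset to avoid symmetry traps.
    zs = [cmath.exp(2j * cmath.pi * (k + 0.5) / d) for k in range(d)]
    for _ in range(max_iter):
        new, converged = [], True
        for i, z in enumerate(zs):
            w = p(z) / dp(z)                                  # Newton correction
            s = sum(1 / (z - zs[j]) for j in range(d) if j != i)
            delta = w / (1 - w * s)                           # Aberth correction
            if abs(delta) > tol:
                converged = False
            new.append(z - delta)
        zs = new
        if converged:
            break
    return zs

# h(T) = T^3 - 1: the three cube roots of unity.
roots = aberth([-1, 0, 0, 1])
assert all(abs(r**3 - 1) < 1e-9 for r in roots)
```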

  25. Applications of ERUR • Real Roots • Given a system of polynomial equations, list all the real roots of the system • Positive-Dimensional Components • Given a system of polynomial equations, tell whether or not the zero set of the system has a positive-dimensional component

  26. Applications of ERUR • Presented in today’s last talk in Session 3, “Applying Computer Algebra Techniques for Exact Boundary Evaluation”, 4:30 – 5:00 pm

  27. Other RUR Methods • GB+RS [Rouillier 99, 04] • Computes the exact RUR for the real solutions of a 0-dimensional system • GB computes a Gröbner basis • [Giusti, Lecerf and Salvy 01] • An iterative method

  28. Conclusion • ERUR • Strong at handling degeneracies • Needs more optimization and faster algorithms

  29. Future Work • RUR • Faster sparse resultant algorithms • Take advantage of the sparseness of the matrices [Emiris and Pan 97] • Faster univariate polynomial operations

  30. Thank you for listening! • Contact • Koji Ouchi, kouchi@cs.tamu.edu • John Keyser, keyser@cs.tamu.edu • Maurice Rojas, rojas@math.tamu.edu • Visit our web page: http://research.cs.tamu.edu/keyser/geom/erur/
