
Escaping Local Minima In Logic Synthesis


Presentation Transcript


  1. Escaping Local Minima In Logic Synthesis. Eugene Goldberg, Cadence Berkeley Labs (USA). IWLS-2007, San Diego, USA. This paper is available at http://eigold.tripod.com/papers/iwls-2007-locmin.pdf. The full version is at http://eigold.tripod.com/papers/loc_min.pdf.

  2. Summary • Motivation • Logic Synthesis Preserving common Specification (LSPS) • Recent developments in LSPS • LSPS from optimization point of view • Horizontal and vertical optimization • Why should it work?

  3. Motivation Usually, a logic optimization algorithm monotonically minimizes a cost function (there are a few exceptions, though they are not very successful). Such an algorithm quickly gets stuck in a local minimum. Unfortunately, local minima of hard optimization problems (like logic synthesis) can be arbitrarily deep. This means that to get out of such a local minimum one has to make an unbounded number of local transformations that increase the cost function. In this paper, we consider LSPS as a way to address the problem of escaping local minima in logic synthesis.

  4. Logic Synthesis Preserving common Specification (LSPS) Let N be a single-output combinational circuit to be optimized. Let the specification Spec(N) of N be N1,…,Nk. (Spec(N) is just a partition of N into subcircuits N1,…,Nk.) LSPS is to produce a new circuit N* by replacing each subcircuit Ni with an optimized toggle equivalent counterpart N*i. Multi-output subcircuits M(p1,…,pk) and M*(p1,…,pk) are called toggle equivalent if M(p′) ≠ M(p″) ⇔ M*(p′) ≠ M*(p″) for every pair of input assignments p′, p″. Since toggle equivalence of single-output subcircuits means functional equivalence (modulo negation), N and N* are functionally equivalent (modulo negation). The definition of toggle equivalence can be easily extended to the case when M and M* have different input variables but there is a one-to-one mapping between "allowed" input assignments.
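
The definition can be stated operationally: M and M* must toggle on exactly the same pairs of input assignments. The sketch below (a minimal illustration in Python; the function handles M, M_star and the parameter num_inputs are our own hypothetical names, and brute-force enumeration stands in for the SAT/BDD-style reasoning an actual LSPS procedure would use) just restates the definition as a check.

```python
from itertools import combinations, product

def toggle_equivalent(M, M_star, num_inputs):
    """Return True if M and M* toggle on exactly the same pairs of
    input assignments (exhaustive enumeration, for illustration only)."""
    assignments = list(product([0, 1], repeat=num_inputs))
    for p1, p2 in combinations(assignments, 2):
        if (M(p1) != M(p2)) != (M_star(p1) != M_star(p2)):
            return False  # one circuit distinguishes p1, p2; the other does not
    return True

# Negating outputs preserves toggling, so a circuit and its output-negated
# version are toggle equivalent.
f = lambda p: (p[0] & p[1], p[0] | p[1])
g = lambda p: (1 - (p[0] & p[1]), 1 - (p[0] | p[1]))
print(toggle_equivalent(f, g, 2))  # True
```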

  5. Example: square(x) < 100 ⇔ abs(x) < 10. [Figure: circuit N computes square(x) < 100 as subcircuit N1 (square(x), outputs y1…y2n, inputs x1…xn) feeding subcircuit N2 (y < 100, output z); circuit N* computes abs(x) < 10 as subcircuit N*1 (abs(x), outputs y*1…y*n) feeding subcircuit N*2 (y* < 10, output z).] Subcircuit N1 is toggle equivalent to N*1. Subcircuit N2 is toggle equivalent to N*2 (under "allowable" input assignments).
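
A minimal sketch of why this example works, using an integer-level model of the two circuits rather than their gate-level implementations (our illustration, not code from the paper): square(x) and abs(x) toggle on exactly the same pairs of inputs, and the composed circuits compute the same predicate.

```python
# Integer-level model: N1 = square(x), N*1 = abs(x), N2 = (y < 100), N*2 = (y* < 10).
xs = range(-15, 16)
for x1 in xs:
    for x2 in xs:
        # N1 and N*1 toggle on exactly the same pairs of inputs
        assert (x1 * x1 != x2 * x2) == (abs(x1) != abs(x2))
for x in xs:
    # The composed circuits N and N* compute the same predicate
    assert (x * x < 100) == (abs(x) < 10)
```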

  6. Recent Developments in LSPS • Better parametrization of Spec(N): Let N1,…,Nk be a specification Spec(N) of N. The complexity of LSPS is linear in the number k of subcircuits and exponential in the granularity of Spec(N) (the latter is the size of the largest subcircuit Ni). It can be shown that the complexity of LSPS is exponential "only" in the width of Spec(N). (The latter is determined by the largest width of Ni and the largest number of outputs among the Ni.) • Toggle implication instead of toggle equivalence: Let N1,…,Nk be a specification of N. One can build an optimized circuit N* from N by replacing subcircuit Ni, i = 1,…,k−1, with a subcircuit N*i whose toggling is implied by Ni. (That is, N*i toggles every time Ni does; the opposite may not be true.) Toggle equivalence is a special case of toggle implication.
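
A sketch of toggle implication in the same brute-force style as the toggle-equivalence check above (again with hypothetical function handles, for illustration only):

```python
from itertools import combinations, product

def toggle_implied(N_i, N_star_i, num_inputs):
    """Return True if N*_i toggles every time N_i does (toggle implication)."""
    assignments = list(product([0, 1], repeat=num_inputs))
    for p1, p2 in combinations(assignments, 2):
        if N_i(p1) != N_i(p2) and N_star_i(p1) == N_star_i(p2):
            return False  # N_i distinguishes p1, p2 but N*_i does not
    return True

# Toggle equivalence is the special case where the implication holds both ways:
# toggle_implied(A, B, n) and toggle_implied(B, A, n).
```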

  7. Recent Developments in LSPS (continued) • Formulation of the key procedure of LSPS: The key procedure of LSPS is to replace a subcircuit Ni of Spec(N) with a subcircuit N*i that is toggle equivalent to (or toggle implied by) Ni. Such a procedure has been formulated and tested in practice. • Finding a good specification: Finding a good specification Spec(N) of a circuit of large width by an efficient algorithm is, most probably, infeasible. However, we showed that a narrow circuit N has a trivial specification Spec(N), which is a cascade of subcircuits of N. So, for narrow circuits, there may exist practical algorithms for finding good specifications.

  8. LSPS from optimization point of view Let N1,…,Nk be a specification of N. Let N* be an optimized circuit built by LSPS. Let |N*i| < |Ni| for every i = 1,…,k. (That is, LSPS managed to replace each subcircuit Ni with a smaller counterpart N*i.) Then, obviously, |N*| < |N|. Replacing Ni with, say, a toggle equivalent subcircuit N*i is not an equivalent transformation. To make it equivalent one needs to add a re-encoder R*i such that R*i(N*i) is functionally equivalent to Ni. However, even though |N*i| < |Ni|, it may be the case that |N*i| + |R*i| > |Ni|. So from the viewpoint of "traditional" logic synthesis performing equivalent transformations, LSPS may make steps that temporarily increase the cost function.
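
To make the re-encoder concrete, here is a sketch on the square/abs example from slide 5, again as an integer-level model (the names N1, N1_star, R1_star are ours): the re-encoder that turns the abs(x) encoding back into the square(x) encoding is simply squaring, and composing it with N*1 recovers N1 exactly.

```python
# N1 computes square(x); its toggle-equivalent replacement N*1 computes abs(x).
# The re-encoder R*1 maps the new output encoding back to the old one,
# so that R*1(N*1(x)) is functionally equivalent to N1(x).
N1      = lambda x: x * x
N1_star = lambda x: abs(x)
R1_star = lambda y_star: y_star * y_star

assert all(R1_star(N1_star(x)) == N1(x) for x in range(-20, 21))
# In a traditional, equivalence-preserving flow the re-encoder must be kept,
# and |N*1| + |R*1| may exceed |N1|: a temporary increase of the cost function.
```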

  9. LSPS in Terms of Equivalent Transformations [Figure: the LSPS steps expressed as equivalent transformations on the square/abs example: subcircuit N*1 = abs(x) is followed by a re-encoder R*1, and subcircuit N*2 = y* < 10 by a re-encoder R*2, so that each re-encoded subcircuit is functionally equivalent to the original N1 = square(x) and N2 = y < 100.]

  10. Escaping Local Minima by LSPS Let N1,…,Nk be a specification of N. Let N be trapped in a local minimum. Since LSPS makes moves that increase the cost function, it can get N out of this minimum. Intuitively, the depth of the minimum LSPS can escape depends on the granularity (or width) of Spec(N). The larger the granularity of Spec(N), the deeper the minima that can be escaped (but the complexity of such an escape is exponential in the granularity of Spec(N)). In particular, if Spec(N) consists of N itself, it is within the power of LSPS to build an optimal circuit N* and so escape an arbitrarily deep minimum. (Of course, the complexity of such an escape is, in general, prohibitively high.)

  11. Vertical Optimization Vertical optimization is to redistribute complexity between topologically dependent subcircuits. (Outputs of subcircuit N1 feed inputs of subcircuit N2.) [Figure: the circuits N (square(x) feeding y < 100) and N* (abs(x) feeding y* < 10) from slide 5.]

  12. Horizontal Optimization Horizontal optimization is to redistribute complexity between topologically independent subcircuits (such as subcircuits N1 and N2).

  13. Why Should It Work? • Vertical optimization: When optimizing the expression square(x) < 100, LSPS works well because N has only one output and so loses a lot of information. When replacing N1 (implementing square(x)) with N*1 (implementing abs(x)), LSPS runs up a large "re-encoding debt". However, since N2 loses information, this debt is not paid "in full". These arguments can be applied to any circuit with a global or local loss of information. • Horizontal optimization: Suppose subcircuits Ni and Nj of Spec(N) implement combinational blocks that tightly interact with each other. That is, when the outputs of Ni change, the outputs of Nj most likely change too, and vice versa. So Ni and Nj are "almost" toggle equivalent. Then Ni and Nj can be replaced with toggle equivalent counterparts N*i and N*j that share a great deal of logic.
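
One way to make the "almost toggle equivalent" intuition quantitative (our illustration, not a measure defined in the paper) is to count the fraction of input-assignment pairs on which two subcircuits toggle together:

```python
from itertools import combinations, product

def toggle_agreement(N_i, N_j, num_inputs):
    """Fraction of input-assignment pairs on which N_i and N_j toggle together.
    A value of 1.0 means the two subcircuits are toggle equivalent."""
    assignments = list(product([0, 1], repeat=num_inputs))
    pairs = list(combinations(assignments, 2))
    agree = sum((N_i(p1) != N_i(p2)) == (N_j(p1) != N_j(p2)) for p1, p2 in pairs)
    return agree / len(pairs)
```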

  14. Conclusions • We show that logic synthesis preserving specification (LSPS) can be viewed as a method for escaping local minima. • The depth of the minimum that can be escaped by LSPS depends on the granularity of the specification. • We introduce the two basic "modes" of LSPS (vertical and horizontal optimization) and discuss ways to make them successful.
