NSPT: Getting More from Lattice Perturbation Theory

Learn about NSPT, a tool for assessing convergence properties and truncation errors of lattice perturbation theory. This article focuses on renormalization constants for quark bilinears in lattice QCD.


Presentation Transcript


  1. NSPT@apeNEXT. F. Di Renzo (1) and L. Scorzato (2), in collaboration with C. Torrero (3). (1) University of Parma and INFN Parma – MI11; (2) ECT* Trento and INFN Parma – MI11; (3) University of Bielefeld.

  2. NSPT: a tool for getting more from Lattice Perturbation Theory
  Despite the fact that in PT the Lattice is in principle a regulator like any other, in practice it is a very awkward one. As a matter of fact, the Lattice is mainly intended as a non-perturbative regulator. Still, LPT is something you cannot actually live without!
  >> In many (traditional) playgrounds LPT has often been replaced by non-perturbative methods: renormalization constants, Symanzik improvement coefficients, ...
  >> The key point: LPT is substantially more involved than other (perturbative) regulators. LPT is really cumbersome, and (diagrammatic) computations are usually 1 loop; 2 loops are really hard and 3 loops almost unfeasible.
  >> On top of all this, LPT converges badly and one often tries to make use of Boosted PT (Parisi, Lepage & Mackenzie). This should be carefully assessed.
  >> With NSPT we can compute to HIGH loop orders! We can assess the convergence properties and the truncation errors of the series. In this respect we think LPT should not necessarily be regarded as a second choice.
  >> In the following we will mainly focus on renormalization constants of quark bilinears.

  3. Outline
  • We saw some motivations ...
  • Some technical details: just a flavour of what NSPT is and what the computational demands are.
  • A little bit on the status of renormalization constants for Lattice QCD: quark bilinears for the WW (Wilson gauge and Wilson fermions) action.
  • Current Lattice QCD projects may be interested in different combinations of gauge/fermion actions: take into account the Symanzik gauge action and Clover fermions as well!
  • apeNEXT can do the job (production of the first configurations has just started).

  4. From Stochastic Quantization to NSPT
  NSPT comes almost for free from the framework of Stochastic Quantization (Parisi and Wu, 1980). From the latter, both a non-perturbative alternative to standard Monte Carlo and a new version of Perturbation Theory were originally developed; NSPT in a sense interpolates between the two.
  Given a field theory, Stochastic Quantization basically amounts to giving the field an extra degree of freedom, to be thought of as a stochastic time t, in which an evolution takes place according to the Langevin equation
  ∂φ(x,t)/∂t = − δS/δφ(x,t) + η(x,t) .
  Here η is a Gaussian noise, from which the stochastic nature of the equation originates. Now, the main assertion is very simply stated: asymptotically in the stochastic time, noise averages converge to path-integral expectation values,
  ⟨O[φ(x,t)]⟩_η → ⟨O[φ(x)]⟩ as t → ∞ .
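  A minimal sketch of the non-perturbative side of this statement, for a zero-dimensional toy "theory" with S(φ) = m²φ²/2 + λφ⁴ (all names and parameter values here are illustrative, not taken from the slides): a long Euler-discretized Langevin history reproduces the path-integral average.

  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>

  static double gauss(void) {                       /* standard normal (Box-Muller) */
      double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
      double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
      return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
  }

  int main(void) {
      const double m2 = 1.0, lambda = 0.1, eps = 0.01;    /* illustrative values */
      const long nsteps = 2000000, ntherm = 10000;
      double phi = 0.0, sum = 0.0;
      long nmeas = 0;
      for (long t = 0; t < nsteps; ++t) {
          double dS = m2 * phi + 4.0 * lambda * phi * phi * phi;  /* dS/dphi */
          phi += -eps * dS + sqrt(2.0 * eps) * gauss();           /* Euler Langevin step */
          if (t > ntherm) { sum += phi * phi; ++nmeas; }
      }
      printf("<phi^2> over stochastic time = %f\n", sum / nmeas); /* ~ path-integral <phi^2> */
      return 0;
  }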

  5. (Numerical) Stochastic Perturbation Theory
  Since the solution of the Langevin equation depends on the coupling constant of the theory, look for the solution as a power expansion,
  φ(x,t) = Σ_k g^k φ^(k)(x,t) .
  If you insert this expansion into the Langevin equation, the latter gets translated into a hierarchy of equations, one for each order, each depending only on lower orders. Observables are expanded as well, and from Stochastic Quantization's main assertion we get their power expansions order by order. Just to gain some insight (bosonic theory with quartic interaction): you can solve by iteration!
  [Diagrams omitted: the propagator expanded in powers of λ, i.e. the free propagator plus O(λ) and O(λ²) corrections.]
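  Carrying the same zero-dimensional toy model one step further gives a complete, if minimal, NSPT sketch (again S = m²φ²/2 + λφ⁴; names, truncation order and parameter values are illustrative): the field becomes the vector of its orders, the noise drives only order 0, and time averages of the expanded observable give its perturbative coefficients directly.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <math.h>

  #define NORD 5                          /* keep orders lambda^0 ... lambda^(NORD-1) */

  static double gauss(void) {             /* standard normal (Box-Muller) */
      double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
      double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
      return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
  }

  int main(void) {
      const double m2 = 1.0, eps = 0.01;
      const long nsteps = 2000000, ntherm = 10000;
      double phi[NORD] = {0.0}, phi2[NORD] = {0.0};
      long nmeas = 0;
      for (long t = 0; t < nsteps; ++t) {
          /* order-by-order cube: (phi^3)_n = sum_{i+j+k=n} phi_i phi_j phi_k */
          double cube[NORD] = {0.0}, next[NORD];
          for (int i = 0; i < NORD; ++i)
              for (int j = 0; i + j < NORD; ++j)
                  for (int k = 0; i + j + k < NORD; ++k)
                      cube[i + j + k] += phi[i] * phi[j] * phi[k];
          /* one Euler step per order: the interaction feeds order n from order n-1,
             the noise enters only at order 0 */
          for (int n = 0; n < NORD; ++n) {
              double drift = -m2 * phi[n] - 4.0 * (n > 0 ? cube[n - 1] : 0.0);
              next[n] = phi[n] + eps * drift + (n == 0 ? sqrt(2.0 * eps) * gauss() : 0.0);
          }
          memcpy(phi, next, sizeof phi);
          if (t > ntherm) {               /* accumulate <phi^2> order by order */
              for (int n = 0; n < NORD; ++n)
                  for (int i = 0; i <= n; ++i)
                      phi2[n] += phi[i] * phi[n - i];
              ++nmeas;
          }
      }
      for (int n = 0; n < NORD; ++n)      /* coefficients of <phi^2> = sum_n c_n lambda^n */
          printf("order lambda^%d: %f\n", n, phi2[n] / nmeas);
      return 0;
  }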

  6. Numerical Stochastic Perturbation Theory
  NSPT (Di Renzo, Marchesini, Onofri 94) simply amounts to the numerical integration of these equations on a computer! For LGT this means integrating the Langevin equation for the link variables (Batrouni et al. 85), formulated in terms of a Lie derivative and discretized with an Euler scheme; everything should be understood as a series expansion, i.e. one has to plug the perturbative expansion of every link into the update (written out below).
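  Schematically, in standard NSPT notation (the normalizations here are an assumption of this write-up, not transcribed from the slide), the Euler-discretized Langevin update for the links reads

  U_\mu(x; t+\varepsilon) = e^{-F_\mu(x,t)}\, U_\mu(x; t) , \qquad
  F_\mu(x,t) = \varepsilon\, \nabla_{x,\mu} S[U] + \sqrt{\varepsilon}\, \eta_\mu(x,t) ,

  with the perturbative expansion plugged into every link,

  U_\mu(x,t) = 1 + \sum_{k \ge 1} \beta^{-k/2}\, U^{(k)}_\mu(x,t) .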

  7. Numerical Stochastic Perturbation Theory
  NSPT is not so bad to put on a computer! In particular on a parallel one (the APE family) ...
  • From fields to collections of fields: storage grows like the order n.
  • From scalar operations to order-by-order operations: the FPU work grows like n² (a sketch of the basic data structure follows this slide).
  • Not too bad from the parallelism point of view!
  • '94-'00 - APE100 - quenched LQCD (now on PCs! Now also with Faddeev-Popov, but no ghosts!)
  • '00-now - APEmille - unquenched LQCD (WW action): the Dirac matrix is easy to invert (it is PT, after all!)
  • '07-... - apeNEXT - we have the resources to undertake a systematic investigation of different actions ...
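  A minimal sketch (illustrative C, not the apeNEXT code; all type and function names are invented for this example) of the NSPT data structure and of its dominant operation, the order-by-order product of two truncated matrix-valued series:

  #include <complex.h>

  #define NORD 8                                    /* truncation order of the series */
  typedef struct { double complex m[3][3]; } mat3;  /* one SU(3)-like matrix */
  typedef struct { mat3 ord[NORD]; } series;        /* U = sum_{k<NORD} g^k U_k */

  static void mul_acc(mat3 *c, const mat3 *a, const mat3 *b) {   /* c += a*b */
      for (int i = 0; i < 3; ++i)
          for (int j = 0; j < 3; ++j)
              for (int k = 0; k < 3; ++k)
                  c->m[i][j] += a->m[i][k] * b->m[k][j];
  }

  /* (a*b)_n = sum_{i+j=n} a_i * b_j : about NORD^2/2 matrix multiplies per product,
     which is the "order n^2" growth of the FPU work quoted above. */
  void series_mul(series *c, const series *a, const series *b) {
      *c = (series){0};
      for (int i = 0; i < NORD; ++i)
          for (int j = 0; i + j < NORD; ++j)
              mul_acc(&c->ord[i + j], &a->ord[i], &b->ord[j]);
  }

  Every site and direction carries one such series, so the structure parallelizes exactly as the underlying field does.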

  8. Renormalization constants: our state of the art
  Despite the fact that there is no theoretical obstacle to computing logarithmically divergent renormalization constants in PT, on the lattice one usually tries to compute them non-perturbatively. Popular (intermediate) schemes are RI'-MOM (Rome group) and SF (ALPHA Collaboration). We work in the RI'-MOM scheme: compute quark bilinear operators between (off-shell, momentum p) quark states, amputate to get the vertex functions Γ, and project onto the tree-level structure. The renormalization conditions (written out below) then fix the Z's, with the quark-field renormalization constant Z_q defined from the quark propagator. One wants to work at zero quark mass in order to get a mass-independent scheme.
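  In standard RI'-MOM conventions (a reconstruction; the precise normalizations are an assumption of this write-up), amputation, projection and the renormalization conditions read

  \Gamma_O(p) = S^{-1}(p)\, G_O(p)\, S^{-1}(p) , \qquad
  O_\Gamma(p) = \mathrm{Tr}\big[ \hat P_O\, \Gamma_O(p) \big] ,

  Z_O\, Z_q^{-1}\, O_\Gamma(p) \Big|_{p^2=\mu^2} = 1 , \qquad
  Z_q = -\,\frac{i}{12}\, \frac{\mathrm{Tr}\big[ \gamma_\mu p_\mu\, S^{-1}(p) \big]}{p^2} \Bigg|_{p^2=\mu^2} .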

  9. Computation of Renormalization Constants
  We compute everything in PT. Usually the divergent parts (anomalous dimensions) are "easy", while fixing the finite parts is hard; in our approach it is just the other way around! We take the γ's for granted (see J. Gracey (2003): 3 loops!). We know which form to expect for a generic coefficient at loop order L (schematically written below): we take small values of the (lattice) momentum and fit "hypercubic-symmetric" Taylor expansions to extract the finite parts we want to get.
  • Wilson gauge – Wilson fermion (WW) action on 32^4 and 16^4 lattices.
  • Gauge fixed to Landau (no anomalous dimension for the quark field at 1-loop level).
  • nf = 0 (both 32^4 and 16^4); nf = 2, 3, 4 (32^4).
  • The relevant mass counterterm (Wilson fermions) is plugged in, in order to stay at zero quark mass.
  • RI'-MOM is an infinite-volume scheme, while we have to perform finite-volume computations! Care must be taken of this (crucial) aspect when dealing with logs.
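  Schematically (an illustration of the fitting ansatz; the conventions are assumptions of this write-up, not taken from the slide), the loop-L coefficient of a log-divergent Z is expected to look like

  z_L(\hat p a) = \sum_{j=1}^{L} c^{(L)}_{j}\, \log^{j}\!\big( \hat p^2 a^2 \big)
                \; + \; c^{(L)}_{0}
                \; + \; \text{hypercubic Taylor terms in } \hat p a ,

  where the log coefficients c^{(L)}_{j>0} are fixed by the known anomalous dimensions, c^{(L)}_{0} is the finite part one is after, and the hypercubic-invariant Taylor terms (built out of invariants like \sum_\mu (\hat p_\mu a)^2 and \sum_\mu (\hat p_\mu a)^4) are fitted and extrapolated away.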

  10. Computation of Renormalization Constants
  Always keep in mind the master formula ... and the definition of Zq!
  • The O(p) are the quantities to be actually computed. They are built out of convenient inversions of the Dirac operator on sources (we work everything out in momentum space!).
  • If one computes ratios of O's one obtains ratios of Z's, in which in particular Zq cancels out. Convenient ratios are finite.
  • Zv (Za) can be computed by taking convenient ratios of Ov (Oa) and S^-1, thus eliminating Zq. They can also be computed by taking ratios of Ov (Oa) and the corresponding conserved currents.
  • Zs (Zp) requires subtracting logs in order to obtain finite quantities. This needs care.
  • Once one is left with finite quantities, one can extrapolate to zero the irrelevant terms, which go away with powers of pa (these powers comply with hypercubic symmetry); a sketch of such an extrapolation follows this slide.
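  A minimal sketch of that last step (illustrative only; a real analysis keeps several hypercubic invariants and propagates errors): fit the leading irrelevant term in (pa)^2 and read off the pa -> 0 value.

  /* Fit O(p) = c0 + c1*(pa)^2 by least squares (normal equations); c0 is the
     extrapolated value.  x[] holds (pa)^2 for each momentum, y[] the measured
     finite combination at that momentum. */
  #include <stddef.h>

  void fit_pa2(const double x[], const double y[], size_t n, double *c0, double *c1) {
      double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
      for (size_t i = 0; i < n; ++i) {
          sx  += x[i];        sy  += y[i];
          sxx += x[i] * x[i]; sxy += x[i] * y[i];
      }
      double det = (double)n * sxx - sx * sx;   /* needs at least two distinct momenta */
      *c1 = ((double)n * sxy - sx * sy) / det;
      *c0 = (sxx * sy - sx * sxy) / det;        /* the pa -> 0 extrapolation */
  }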

  11. Zp/Zs and Zv/Za
  Ratios of bilinear Z's are finite and safe to compute! Good nf dependence.

  12. Za and Zv

  13. Resumming Zp/Zs (to 4 loops!)
  One can compare to non-perturbative results from SPQcdR. We can now quote actual numbers (for Za and Zv see the next slide). We resum the nf = 2 results at β = 5.8 using different coupling definitions (a sketch of this resummation step follows this slide): Zp/Zs = 0.77(1).
  • Results are less and less dependent on the order at fixed scheme, and less and less dependent on the scheme at higher and higher orders. Zp/Zs and Zs/Zp come out quite accurately as the inverse of each other.
  • Compare to the SPQcdR result Zp/Zs = 0.75(3).
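  A minimal sketch of what "resumming with different coupling definitions" amounts to in practice; the coefficients, the plaquette value and the choice of couplings below are placeholders, not the actual WW-action results.

  /* Truncated resummation Z(x) = 1 + sum_{L=1..nloop} c[L] x^L, evaluated with
     different definitions of the expansion parameter x: the bare lattice coupling
     versus a "boosted" coupling obtained by dividing by the measured plaquette. */
  #include <stdio.h>

  static double resum(const double c[], int nloop, double x) {
      double z = 1.0, xn = 1.0;
      for (int L = 1; L <= nloop; ++L) { xn *= x; z += c[L] * xn; }
      return z;
  }

  int main(void) {
      const double c[5] = {0.0, 0.10, 0.02, 0.005, 0.001};  /* placeholder coefficients, c[0] unused */
      const double beta = 5.8, plaq = 0.57;                  /* plaquette value is illustrative */
      const double x_bare  = 6.0 / beta;                     /* ~ g^2 for the Wilson gauge action */
      const double x_boost = x_bare / plaq;                  /* boosted coupling */
      for (int n = 1; n <= 4; ++n)                           /* check stability order by order */
          printf("%d loops: bare %.4f  boosted %.4f\n", n, resum(c, n, x_bare), resum(c, n, x_boost));
      return 0;
  }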

  14. Resumming Za and Zv (to 4 loops!)
  One can compare to non-perturbative results from SPQcdR. We obtain Za = 0.79(1) and Zv = 0.70(1).
  • SPQcdR result: Za = 0.76(1) and Zv = 0.66(2).
  • Keep in mind the chiral extrapolation!

  15. Renormalization constants: what next?
  These days Lattice QCD has great opportunities to really perform first-principles computations. There is nevertheless a variety of options as for the choice of the action.
  • On top of the Wils/Wils (Wilson-gauge/Wilson-fermion) action we want to take into account the possible combinations of:
  • Wilson gauge action
  • tree-level Symanzik gauge action
  • (unimproved) Wilson fermion action
  • (Wilson improved) Clover fermion action
  • Results will also apply to twisted mass (renormalization conditions in the massless limit).
  • Remember: nf enters as a parameter and one would like to fit the nf dependence.
  • APEmille (some work has started on Clover) is not enough (~10 months for a given nf).

  16. apeNEXT can do the job!
  Why, in the end, was NSPT quite efficient?
  • You do not have to store fields but collections of fields, on which the most intensive FPU operations are order-by-order multiplications (remember the observation on parallelism!).
  • This is a situation in which there is reasonable hope to perform well on a designed-to-number-crunch machine: keep the register file and the pipelines busy! (In Parma they would say "fitto come il rudo", packed like rubbish ...)
  • This was traditionally quite easy on APE100 and APEmille (program memory and data memory are not the same).
  • The APEmille code was not so brilliant on apeNEXT ...
  • ... but we can optimize a little bit. For example we can make use of prefetching queues. We also have sofan at hand. OK, then the cost for one nf is ~2 months.

  17.
  ff7db6 | break pipe
         | !! first staple down
         | Faux0[0] = Umu[link_c+o4]
         | Faux = Faux0[o4bis]^+
         | Faux0[0] = Umu[link_c+o5]^+
         | Faux = AUmultU11(Faux,Faux0[0])
         | Faux = AUmultU11(Faux,Umu[link_c+o6])
         | F = F + Faux
  1831 74 % C: 1343 F: 0 M: 0 X: 2 L: 0 I: 4 IQO: 4/4 275/17 110/2

  ff84dd | break pipe
         | !! second staple up
         | Faux0[0] = Umu[link_c+o8]^+
         | Faux = AUmultU11(Umu[link_c+o7],Faux0[0])
         | Faux0[0] = Umu[link_c+o9]^+
         | Faux = AUmultU11(Faux,Faux0[0])
         | F = F + Faux
  1807 75 % C: 1353 F: 0 M: 0 X: 2 L: 0 I: 4 IQO: 3/3 275/17 110/2
  ...
  ffc75d | do i = 0, Sp_Vm
         | Faux = logU(Umu[j1])
         | Faux = Faux - MomNull[Dir]
         | Faux = stricToA(Faux)
         | Umu[j1] = expA(Faux)
         | j1 = j1 + 1
  3172 92 % C: 2855 F: 0 M: 0 X: 31 L: 10 I: 32 IQO: 2/2 381/39 111/9
  ...
  ffb599 | break pipe
         | !! Enforcing unitary constraint ...
         | Faux = logU(Faux)
         | Faux = stricToA(Faux)
         | MomNull[Dir] = MomNull[Dir] + Faux
         | Umu[link_c] = expA(Faux)
  3240 90 % C: 2856 F: 0 M: 0 X: 31 L: 10 I: 31 IQO: 1/1 325/37 230/18

  18. Here is an example taken from the bulk computations (going from a power-expanded Aμ field to a power-expanded Uμ field):

  /AUseries_fact -> "expA" "(" AUseries_expr^a ")" {
    temporary AUseries res, aux, auxR
    temporary su3 a_
    temporary complex jnk, jnk1
    res = SetUpA
    aux = a
    res = res + aux
    jnk = complex_1
    jnk1 = complex_1
    /for n = 2 to ordine {
      auxR = SetUpA
      queue = a.AU.[0]
      /for i = 1 to (ordine-n) {
        a_ = queue
        /for j = (n-1) to (ordine-i) {
          auxR.AU.[i+j-1] = auxR.AU.[i+j-1] + a_ * aux.AU.[j-1]
        }
        queue = a.AU.[i]
      }
      a_ = queue
      auxR.AU.[ordine-1] = auxR.AU.[ordine-1] + a_ * aux.AU.[n-2]
      jnk1 = jnk1 + complex_1
      jnk = jnk/jnk1
      /for i = (n-1) to (ordine-1) {
        res.AU.[i] = res.AU.[i] + jnk * auxR.AU.[i]
      }
      aux = auxR
    }
    res.U0 = (1.0,0.0)
    rreturn res
  }
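  For readers not fluent in the TAO-like syntax above, here is a C sketch of the same truncated exponential (it reuses the invented series/mat3 types and series_mul() from the earlier sketch; keeping the unit part inside ord[0], rather than in a separate U0 field as in the slide's code, is a convention of this sketch).

  /* U = exp(A) = 1 + A + A^2/2! + ... for a power-expanded algebra field A with
     no order-0 part: A^n starts at order g^n, so the sum stops by itself at the
     truncation order NORD-1. */
  static void series_scale_add(series *res, double s, const series *a) {  /* res += s*a */
      for (int k = 0; k < NORD; ++k)
          for (int i = 0; i < 3; ++i)
              for (int j = 0; j < 3; ++j)
                  res->ord[k].m[i][j] += s * a->ord[k].m[i][j];
  }

  series series_exp(const series *a) {
      series res = {0}, term = *a, tmp, scaled;
      for (int i = 0; i < 3; ++i) res.ord[0].m[i][i] = 1.0;   /* unit matrix at order 0 */
      series_scale_add(&res, 1.0, &term);                     /* + A */
      for (int n = 2; n < NORD; ++n) {
          series_mul(&tmp, &term, a);                         /* tmp = term * A = A^n/(n-1)! */
          scaled = (series){0};
          series_scale_add(&scaled, 1.0 / n, &tmp);           /* scaled = A^n/n! */
          series_scale_add(&res, 1.0, &scaled);
          term = scaled;
      }
      return res;
  }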

  19. Conclusions
  • NSPT is by now quite a mature technique. Computations in many different frameworks can be (and actually are) undertaken.
  • More results for renormalization constants are to come for different actions: on top of Wils-Wils, also Wils-Clov, Sym-Wils and Sym-Clov. apeNEXT can manage the job!
  • Other developments are possible ... (expansions in the chemical potential?)
  • So, if you want ... stay tuned!
