
# Computational Methods in Physics PHYS 3437






### Computational Methods in Physics PHYS 3437

Dr Rob Thacker

Dept of Astronomy & Physics (MM-301C)

• Use MC techniques to simulate a random walk

• A few slides on random number generators

• Consider a single molecule suspended in air, on average how far will it travel in 1s?

• Naïve calculation: mv²/2 = (5/2)kT for a diatomic gas, hence v = √(5kT/m)

• k = 1.38×10⁻²³ J K⁻¹, take T = 300 K, and take the mass to be that of N₂ ≈ 28mₚ = 4.68×10⁻²⁶ kg

• This implies a velocity of v ≈ 665 m s⁻¹

• So does a molecule move 2/3 km on average in 1s?

• No – collisions ensure that the motion changes direction an enormous number of times per second

• Result – average molecule moves < 1 m in a second

• The path of a particle that is frequently interrupted by collisions is called a random walk

• Gas diffusion can be approximated on the atomic scale by modelling many thousands of particles undergoing random walks

• We do not need to know any specific details about the collisions

• just that they occur randomly

• After each collision the new direction is random

• We’ll just look at the properties of one random walk in this lecture though

• After the collision the incident particle i is scattered in a random direction

• The scattering directions should be equally distributed on the surface of a sphere

• This is not the same as picking θ and φ uniformly at random!

[Figure: incident particle i scattered into a random direction described by polar angle θ and azimuthal angle φ]

• The left-hand result is from choosing φ ∈ [0, 2π] and θ ∈ [0, π] uniformly at random

• Notice that points are more clustered around the pole

• If one uses spherical polar coordinates to describe the surface, the area element on the sphere is dΩ = sin θ dθ dφ

• Think about the geometry near the poles – when θ = 0 the φ coordinate is completely degenerate

• Since dΩ contains a factor of sin θ we cannot take uniform distributions of φ & θ

• However, since −d(cos θ) = sin θ dθ, we can write |dΩ| = d(cos θ) dφ

• This means we take a uniform distribution of φ where φ ∈ [0, 2π]

• We also take a uniform distribution of cos θ ∈ [−1, 1]

[Figure: area element on the sphere, with sides dθ and sin θ dφ, giving dΩ = sin θ dθ dφ]
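The recipe above – φ uniform on [0, 2π) and cos θ uniform on [−1, 1] – can be sketched in Python (a sketch only; the lecture itself implies Fortran):

```python
import math
import random

def random_direction(rng=random):
    """Sample a unit vector uniformly over the sphere:
    phi uniform on [0, 2*pi), cos(theta) uniform on [-1, 1]."""
    phi = rng.uniform(0.0, 2.0 * math.pi)
    cos_theta = rng.uniform(-1.0, 1.0)
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    return (sin_theta * math.cos(phi),
            sin_theta * math.sin(phi),
            cos_theta)
```

Sampling θ uniformly instead would cluster points near the poles, exactly as the left-hand plot shows.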


• Let σ be the cross-sectional area of each particle

• Let n be the number of particles per unit volume, and consider a volume of square cross section l² (side length l)

• What distance λ is required so that the accumulated cross section of the particles is ½l²?

• i.e. half of the time the incident particle will collide with another particle after having travelled λ


• Given the distance λ and cross-sectional area l², the total number of particles in the volume λl² is nλl²

• Assuming the particles are distributed on a “lattice” that has no overlaps, the total accumulated cross section of all these particles is just σnλl²

• Need to find λ such that σnλl² = l²/2

• Simple rearrangement gives λ = 1/(2σn)

• Note that the factor of ½ is a result of our assuming that there are no overlaps

• If we think about particles arranged on a lattice, then there are a number of possible paths with an infinite path length between collisions

• Of course this is vanishingly unlikely to ever occur in a gas – the particles will be distributed randomly

• The random distribution will have a number of overlaps, which means the factor of proportionality is 1/√2 rather than ½

• Hence λ = 1/(√2 σn)

• Given this expression, we plug in σ = (10⁻⁹)² m² = 10⁻¹⁸ m²

• n ≈ N_A/(22.4 L) = 6.02×10²³ / 0.0224 m³ ≈ 2.7×10²⁵ m⁻³

• We find that λ ≈ 2.6×10⁻⁸ m

• Diameter of an N₂ molecule ~ 3.1×10⁻¹⁰ m

• This is roughly 83 molecular diameters – so not too large!
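Plugging the slide's numbers into λ = 1/(√2 σn) takes only a few lines (a Python sketch using the slide's order-of-magnitude cross section):

```python
import math

# Numbers from the slides: sigma is an order-of-magnitude cross section,
# n is the number density of an ideal gas at STP (N_A per 22.4 litres)
sigma = (1.0e-9) ** 2                       # cross section, m^2
n = 6.02e23 / 0.0224                        # ~2.7e25 m^-3
lam = 1.0 / (math.sqrt(2.0) * sigma * n)    # mean free path, m
print(lam)  # ~2.6e-8 m
```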

• Given this information we can now put together a Monte Carlo random walk routine

• Choose f[0,2p) and cos q [-1,1] at random

• Let the particle move l, go to 1

Need to keep track of particles original position (i.e. (0,0,0)) current position, and the number of collisions N

Implementation hint: leaves distances in units of l. That way you can scale results after the simulation is completed for different l.

• We perform m experiments, and allow up to N = 1000 collisions (for example)

• Typically we take m ~ 500 or more

• For each N we calculate the average distance <d>_RMS/λ that the particle has migrated from the origin

• The average is taken over the m paths

• <d>_RMS is given by <d>_RMS = √( (1/m) Σᵢ₌₁ᵐ dᵢ² )

• Plot up <d>_RMS/λ and look at the results

Simulating that many collisions is incredibly difficult right now, although the US has computers capable of almost 10¹⁵ operations per second
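The numerical experiment described above might be sketched as follows (Python rather than Fortran; distances are kept in units of λ, as the implementation hint suggests):

```python
import math
import random

def rms_distance(n_steps, m_walks, rng):
    """Average RMS distance from the origin (in units of lambda) over
    m_walks random walks, each of n_steps isotropic unit-length steps."""
    total_sq = 0.0
    for _ in range(m_walks):
        x = y = z = 0.0
        for _ in range(n_steps):
            # New random direction: phi uniform, cos(theta) uniform
            phi = rng.uniform(0.0, 2.0 * math.pi)
            ct = rng.uniform(-1.0, 1.0)
            st = math.sqrt(1.0 - ct * ct)
            x += st * math.cos(phi)
            y += st * math.sin(phi)
            z += ct
        total_sq += x * x + y * y + z * z
    return math.sqrt(total_sq / m_walks)
```

For N = 100 collisions and m = 500 paths, <d>_RMS/λ comes out close to √100 = 10.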

• Consider a random walk in 1d: you can go forwards or back

• Each step in building up the path is given by a random variable xᵢ, which is either +1 or −1

• Clearly the average value of xᵢ is zero: <xᵢ> = ((+1) + (−1))/2 = 0

• The average value of xᵢ² is 1: <xᵢ²> = (1 + 1)/2 = 1

• The path endpoint d is given by d = x₁ + x₂ + … + x_N = Σᵢ xᵢ

• The mean of the square of the endpoint distance is then <d²> = <(Σᵢ xᵢ)²> = Σᵢ <xᵢ²> + Σᵢ≠ⱼ <xᵢxⱼ> = N

• The cross terms <xᵢxⱼ>, i ≠ j, average to zero: the four equally likely products sum as (−1)(1) + (−1)(−1) + (1)(1) + (1)(−1) = 0

• Hence the RMS distance is <d>_RMS = √<d²> = √N
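A quick numerical check of the √N law for the 1-d walk (Python sketch):

```python
import math
import random

def rms_1d(n_steps, m_walks, rng):
    """RMS endpoint distance over m_walks one-dimensional walks of
    n_steps, each step +1 or -1 with equal probability."""
    total_sq = 0
    for _ in range(m_walks):
        d = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total_sq += d * d
    return math.sqrt(total_sq / m_walks)
```

With N = 400 steps the RMS endpoint should converge on √400 = 20 as the number of walks grows.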

• Truly Random

• Exhibiting true randomness – values cannot be predicted in any way

• Pseudorandom

• Appearance of randomness but having a deterministic pattern

• Period of pattern can be exceptionally long

• Must produce “good” approximation to a random deviate

• Quasi(sub)-random

• Having a set of non-random numbers in a randomized order

• Useful when we want a series of points to have certain properties

• e.g. optimal space filling, like Sobol, Halton sequences
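As an illustration of a quasi-random sequence (not code from the lecture), here is a sketch of the van der Corput construction underlying the Halton sequence:

```python
def halton(index, base):
    """Return element `index` (1-based) of the van der Corput sequence
    in the given base; pairing bases 2 and 3 gives the 2-d Halton sequence."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base                   # descend one digit of the radix expansion
        result += f * (index % base)
        index //= base
    return result

# Successive base-2 points fill [0,1) far more evenly than random draws
points = [halton(i, 2) for i in (1, 2, 3, 4)]
```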

• Linux operating system example: “/dev/random”

• Gathers information from hardware within the computer through the operating system – “noise”

• creates an “entropy pool” that is used to create the random numbers

• Avoids tracking things such as network traffic that can be manipulated by outsiders

• Was designed to be a true random generator but recent research (March 2006) has shown it has a few weaknesses

• Other inputs for random information:

• Readings from a Geiger counter – based on a quantum process, completely unpredictable in theory

• Thermal or quantum-mechanical noise, amplified to provide a random voltage source.

Humour: LavaRnd

SGI created a random generator based upon random bits extracted from images of the erupting blobs inside six Lava Lite lamps

The project is no longer functioning

A new project, http://www.lavarnd.org/, uses the same idea but relies on noise in the CCD of digital cameras

• Operate from USB or Serial connections

• Standard mode is to deliver one byte of data at a time

• Current models pass the “DIEHARD” battery of tests

• Very fast generation

• Prices range anywhere from ~$100 to more than $1,000 per unit

• Linear Congruential Generators

• Based on a simple iterative process

• Lagged Fibonacci Generators

• Utilize a similar concept to the Fibonacci sequence

• Shift Register Generators

• Similar in principle to LFGs

• Combined Generators

• Can be designed to offer best of combined methods

• A Linear Congruential Generator (LCG) is defined as

Ij+1 = (Ij a + c) mod m

Where –

Ij – current number [I0 – seed]

Ij+1 – next number

a - multiplier

c - increment

m – modulus

• The period is clearly (at most) the value of the modulus, m.

(mod m = the remainder after dividing by m)
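A minimal LCG sketch (Python; the tiny constants a = 5, c = 3, m = 16 are chosen for illustration only – they satisfy the full-period conditions, so all 16 residues appear before the sequence repeats):

```python
from itertools import islice

def lcg(seed, a, c, m):
    """Yield the LCG sequence I_{j+1} = (a*I_j + c) mod m."""
    i = seed
    while True:
        i = (a * i + c) % m
        yield i

# Toy parameters: the period equals the modulus, so one period of the
# generator visits every residue 0..15 exactly once
seq = list(islice(lcg(0, 5, 3, 16), 16))
```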

• Really due to Lewis, Goodman & Miller (1968) – the algorithm was extensively reviewed in a later paper by Park and Miller

• By choosing the modulus and multiplier carefully, LCGs of the form Iⱼ₊₁ = a·Iⱼ mod m can be made to behave very well

• This type of form cannot use 0 as the initial seed

• L, G & M suggest choosing

• a = 7⁵ = 16807 and m = 2³¹ − 1

• Implementing it in a code is straightforward apart from one very important issue

• The multiplication of Ij by a will frequently produce an overflow

• Rather than using a 64bit multiplication we can use “Schrage’s method” to avoid overflow (see slide after summary)

• RANDU is an infamous LCG that was widespread on IBM mainframes in the 1960s

• Definition: Iⱼ₊₁ = (65539·Iⱼ) mod 2³¹

• Straightforward algebra shows that

Iⱼ₊₂ = (6·Iⱼ₊₁ − 9·Iⱼ) mod 2³¹

• So consecutive triples fall on a series of planes in 3d (just 15 of them)

• Many scientific papers were invalidated due to the poor properties of RANDU
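The three-term relation is easy to verify directly (Python sketch of RANDU):

```python
def randu(seed, count):
    """Generate `count` values of RANDU: I_{j+1} = 65539*I_j mod 2**31."""
    out = []
    i = seed
    for _ in range(count):
        i = (65539 * i) % 2**31
        out.append(i)
    return out

seq = randu(1, 1000)

# Every consecutive triple satisfies I_{j+2} = (6*I_{j+1} - 9*I_j) mod 2**31,
# which is why RANDU's points collapse onto planes in 3d
for a0, a1, a2 in zip(seq, seq[1:], seq[2:]):
    assert a2 == (6 * a1 - 9 * a0) % 2**31
```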

• Sometimes you just need a variable to “slightly” randomize things on a very quick timescale

• LCG generators provide a good way of doing this provided we choose m,a,c appropriately

• But we want to avoid overflow, which constrains variable values

• NR gives a list of suitable values for m,a,c

• Example FORTRAN code for a [0,1) deviate:

```fortran
j = mod(j*ia + ic, im)
ran = float(j)/float(im)
```

• Even faster generators are possible if we first check that integer overflow is a well-behaved process

• i.e. on overflow only the low-order bits survive, so the arithmetic is effectively carried out mod 2³²

• Then we can use something like i = 1664525*i + 1013904223
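In Python integers never overflow, so the 32-bit wrap-around that the trick relies on has to be emulated with a mask (a sketch, not the lecture's code):

```python
MASK = 0xFFFFFFFF  # keep only the low-order 32 bits, i.e. arithmetic mod 2**32

def quick_rand(i):
    """One step of the 'quick and dirty' generator i = 1664525*i + 1013904223,
    relying on well-behaved 32-bit overflow (here emulated by masking)."""
    return (1664525 * i + 1013904223) & MASK
```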

• LFGs are based on the concept of the Fibonacci sequence & are closely related to MRGs (multiple recursive generators)

• Iₙ = Iₙ₋₁ + Iₙ₋₂

• 1, 1, 2, 3, 5, 8, …

• The idea can be generalized to any operator (as opposed to addition) and to any pair of previous values n−j, n−k (as opposed to n−1 and n−2)

• Currently popular LFGs include additive LFGs based upon the following formula

• Iₙ = (Iₙ₋ⱼ + Iₙ₋ₖ) mod 2^m, where 0 < j < k

• j and k are called “lags” – the current value is determined by the values from j and k places ago
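A sketch of an additive LFG in Python (the naive seeding here is for illustration only – real implementations seed the k-value history carefully, e.g. with an LCG):

```python
from collections import deque
from itertools import islice

def lagged_fib(seed_values, j, k, m=2**32):
    """Additive lagged Fibonacci generator:
    I_n = (I_{n-j} + I_{n-k}) mod m, with 0 < j < k.
    seed_values must supply the k most recent values."""
    assert 0 < j < k and len(seed_values) == k
    buf = deque(seed_values, maxlen=k)  # buf[-1] is I_{n-1}, buf[-k] is I_{n-k}
    while True:
        new = (buf[-j] + buf[-k]) % m
        buf.append(new)
        yield new

# With lags j=1, k=2 and seeds [1, 1] this reproduces the Fibonacci
# sequence itself (mod m)
demo = list(islice(lagged_fib([1, 1], 1, 2), 5))
```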

• You may well come across codes containing these routines, so it is worthwhile knowing what they are

• Ran0 – Park-Miller LCG

• Minimal generator

• Fails statistical tests for N > a few ×10⁷ calls – much less than the period

• Ran1 – Park-Miller plus a Bays-Durham shuffle routine

• Tries to avoid “low-order” serial correlations

• Improves statistical performance significantly

• Ran2 – Longer-period generator (~10¹⁸) based upon L’Ecuyer’s method of combining sequences (with shuffle)

• Believed to produce “perfect” random numbers

• $1000 waiting if you can prove it doesn’t

• Ran3 – Knuth’s subtractive method

• Very robust & fast

• Ran4 – Data encryption standard based generator

Mersenne Twister

• The Mersenne Twister by Matsumoto & Nishimura (1998) is a comparatively new algorithm with many useful properties

• An enormous period, 2¹⁹⁹³⁷ − 1 (~10⁶⁰⁰⁰) – significantly larger than that of other generators

• Provides equidistribution to at least 623-dimensions, and probably higher

• The algorithm can be implemented efficiently on RISC hardware

• it avoids multiplication and division

• Requires little memory

• 624 words needed for the working area

• Its name derives from the fact that period length is chosen to be a Mersenne prime

• The GSL library includes an implementation of MT

• The MT home page contains C and FORTRAN implementations, as well as 32-bit and 64-bit versions
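Python's standard library is itself a convenient MT19937 reference implementation, which makes the "624 words of working area" easy to see (a sketch, not from the lecture):

```python
import random

# CPython's default generator is an implementation of the Mersenne Twister
rng = random.Random(1234)

# The internal state holds 624 32-bit words plus a position index,
# matching the working-area size quoted above
version, internal_state, gauss_next = rng.getstate()
print(len(internal_state))  # 625 = 624 state words + index
```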

• Monte Carlo methods can be applied to simulate physical systems that undergo essentially random interactions

• Can show with numerical experiments that random walk distance is proportional to √N where N is the number of collisions

• Hardware alternatives are available if you really need true random numbers

• LCGs are useful for generating quick and dirty sequences, but are outclassed by more modern algorithms

• Use the Mersenne-Twister if you want really good random numbers and don’t have any time constraints

• Next lecture: introduction to simple parallel programming

Schrage’s Method

• Suppose we have generated a value Iⱼ = z, and our next step is to create the product az and then take az mod m

• If m = aq for some integer q, then we see immediately that az mod aq = a(z mod q)

• Thus we can do the modulo step before we take the product az and avoid overflow

• Schrage’s method is exactly this idea extended to allow m = aq + r, with q = ⌊m/a⌋ and r = m mod a

• The resulting identity (z ≥ 0, valid provided r < q) is: az mod m = a(z mod q) − r⌊z/q⌋, with m added to the result if it is negative

• For the Park-Miller example we take a = 16807 and m = 2³¹ − 1 = 2147483647, giving q = 127773 and r = 2836
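The identity is easy to check against direct modular multiplication (a Python sketch; in Python the direct product cannot overflow, so the point here is verifying the algebra rather than avoiding overflow):

```python
def schrage_mul(a, z, m):
    """Compute (a*z) mod m without large intermediates, via Schrage's
    decomposition m = a*q + r with q = m // a, r = m % a (needs r < q)."""
    q, r = divmod(m, a)
    t = a * (z % q) - r * (z // q)   # lies in (-m, m) when r < q
    return t if t >= 0 else t + m

# Park-Miller 'minimal standard' constants (q = 127773, r = 2836)
A, M = 16807, 2147483647
```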