Lecture 13. Fluctuations. Fluctuations of macroscopic variables. Correlation functions. Response and Fluctuation. Density correlation function. Theory of random processes. Spectral analysis of fluctuations: the Wiener-Khintchine theorem. The Nyquist theorem.
We have so far considered systems in equilibrium, for which we computed statistical averages of various physical quantities. Nevertheless, deviations from, or fluctuations about, these mean values do occur. Though they are generally small, a study of these fluctuations is of great physical interest for several reasons.
It provides a natural framework for understanding a class of physical phenomena which come under the common heading of "Brownian motion"; these phenomena relate properties such as the mobility of a fluid system, its coefficient of diffusion, etc., with temperature through the so-called Einstein relations. The mechanism of Brownian motion is vital in formulating, and in a certain sense solving, problems as to how "a given physical system, which is not in a state of equilibrium, finally approaches a state of equilibrium", while "a physical system, which is already in a state of equilibrium, persists in that state".
The deviation Δx of a quantity x from its average value ⟨x⟩ is defined as

Δx ≡ x − ⟨x⟩
At the same time, a study of the "frequency spectrum" of fluctuations, which is related to the time-dependent correlation function through the fundamental theorem of Wiener and Khintchine, is of considerable value in assessing the "noise" met with in electrical circuits as well as in the transmission of electromagnetic signals.
We note that

⟨Δx⟩ = ⟨x − ⟨x⟩⟩ = ⟨x⟩ − ⟨x⟩ = 0
We look to the mean square deviation for the first rough measure of the fluctuation:

⟨(Δx)²⟩ = ⟨(x − ⟨x⟩)²⟩ = ⟨x²⟩ − ⟨x⟩²
We usually work with the mean square deviation, although it is sometimes necessary to consider also the mean fourth deviation. This occurs, for example, in considering nuclear resonance line shape in liquids. One refers to ⟨xⁿ⟩ as the n-th moment of the distribution.
Consider the distribution g(x) dx, which gives the number of systems in dx at x. In principle the distribution g(x) can be determined from a knowledge of all the moments, but in practice this connection is not always of help. The theorem is usually proved as follows: we take the Fourier transform of the distribution,

u(t) = ∫ g(x) exp(itx) dx
Now it is obvious on differentiating u(t) that

dⁿu/dtⁿ |ₜ₌₀ = iⁿ ∫ xⁿ g(x) dx = iⁿ ⟨xⁿ⟩

for g(x) normalized to unity.
Thus if u(t) is an analytic function, the moments give us all the information needed to construct the Taylor series expansion of u(t); the inverse Fourier transform of u(t) then gives g(x), as required. However, the higher moments are really needed to use this theorem, and they are sometimes hard to calculate. The function u(t) is called the characteristic function of the distribution.
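As a quick numerical illustration (an addition, not part of the lecture), the moment relation above can be checked by estimating the characteristic function from samples and differentiating it at t = 0; the distribution, sample size, and step size below are arbitrary choices.

```python
import numpy as np

# Sketch: verify that derivatives of the characteristic function
# u(t) = <exp(itx)> at t = 0 reproduce the moments <x^n>.
# Distribution: x uniform on [-1, 1], so <x^2> = 1/3 exactly.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1_000_000)

def u(t):
    """Empirical characteristic function <exp(itx)>."""
    return np.mean(np.exp(1j * t * x))

h = 1e-3
# u''(0) = i^2 <x^2> = -<x^2>, estimated by a central finite difference
second_deriv = (u(h) - 2 * u(0.0) + u(-h)) / h**2
print(-second_deriv.real)  # close to <x^2> = 1/3
```

Because the same samples are used at all three points t = −h, 0, h, the Monte Carlo noise largely cancels in the difference, so even a modest sample gives the second moment accurately.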
Energy Fluctuations in a Canonical Ensemble
When a system is in thermal equilibrium with a reservoir, the temperature τ of the system is defined to be equal to the temperature τ of the reservoir, and it has strictly no meaning to ask questions about the temperature fluctuation. The energy of the system will, however, fluctuate as energy is exchanged with the reservoir. For a canonical ensemble we have

⟨E⟩ = Σᵢ Eᵢ exp(βEᵢ) / Σᵢ exp(βEᵢ) = ∂(ln Z)/∂β, Z = Σᵢ exp(βEᵢ)
where β = −1/τ, with τ = k_BT. Now

⟨(ΔE)²⟩ = ⟨E²⟩ − ⟨E⟩² = ∂²(ln Z)/∂β² = ∂⟨E⟩/∂β = τ² ∂⟨E⟩/∂τ
Now the heat capacity at constant values of the external parameters is given by

C_V = ∂⟨E⟩/∂T = (1/k_B) ∂⟨E⟩/∂τ = ⟨(ΔE)²⟩ / (k_B T²)

so that ⟨(ΔE)²⟩ = k_B T² C_V.
Here C_V refers to the heat capacity at the actual volume of the system. The fractional fluctuation in energy is defined by

F ≡ ⟨(ΔE)²⟩^(1/2) / ⟨E⟩ = (k_B T² C_V)^(1/2) / ⟨E⟩
We note then that the act of defining the temperature of a system by bringing it into contact with a heat reservoir leads to an uncertainty in the value of the energy. A system in thermal equilibrium with a heat reservoir does not have a precisely constant energy. Ordinary thermodynamics is useful only so long as the fractional fluctuation in energy is small.
For a perfect monatomic gas, for example, we have ⟨E⟩ = (3/2)N k_B T and C_V = (3/2)N k_B, so that

F = (k_B T² · (3/2)N k_B)^(1/2) / ((3/2)N k_B T) = (2/3N)^(1/2)
For N = 10²², F ≈ 10⁻¹¹, which is negligibly small.
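This estimate is immediate to evaluate; the sketch below (an illustrative addition) just applies the result F = (2/3N)^(1/2) from above.

```python
import math

# Sketch: fractional energy fluctuation of a perfect monatomic gas,
# F = sqrt(<(dE)^2>) / <E> = sqrt(2/(3N)), evaluated for N = 1e22.
def fractional_fluctuation(n_atoms):
    return math.sqrt(2.0 / (3.0 * n_atoms))

print(fractional_fluctuation(1e22))  # about 8e-12, i.e. of order 10^-11
```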
Consider now a solid at low temperatures. According to the Debye law, the heat capacity of a dielectric solid for T ≪ Θ_D is

C_V ≈ (12π⁴/5) N k_B (T/Θ_D)³

with corresponding energy ⟨E⟩ ≈ (3π⁴/5) N k_B T (T/Θ_D)³.
Suppose that T = 10⁻² K, Θ_D = 200 K, and N ≈ 10¹⁶ for a particle 0.01 cm on a side. Then F is of order 10⁻²,
which is not inappreciable. At very low temperatures thermodynamics fails for a fine particle, in the sense that we cannot know E and T simultaneously to reasonable accuracy. At 10⁻⁵ K the fractional fluctuation in energy is of the order of unity for a dielectric particle of volume 1 cm³.
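As an illustrative check (not from the original slides), the numbers quoted above can be combined using the standard low-temperature Debye results ⟨E⟩ = (3π⁴/5)Nk_BT(T/Θ_D)³ and C_V = (12π⁴/5)Nk_B(T/Θ_D)³ in F = (k_BT²C_V)^(1/2)/⟨E⟩:

```python
import math

# Sketch: fractional energy fluctuation for a Debye solid at T << Theta_D,
# F = sqrt(k T^2 C_V) / E, with the low-temperature Debye expressions.
# Parameter values follow the text: T = 1e-2 K, Theta_D = 200 K, N = 1e16.
def debye_fractional_fluctuation(N, T, theta):
    c_over_k = (12 * math.pi**4 / 5) * N * (T / theta) ** 3   # C_V / k_B
    e_over_kT = (3 * math.pi**4 / 5) * N * (T / theta) ** 3   # E / (k_B T)
    return math.sqrt(c_over_k) / e_over_kT

print(debye_fractional_fluctuation(1e16, 1e-2, 200.0))  # about 7e-3
```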
Concentration Fluctuations in a Grand Canonical Ensemble
We have the grand partition function

𝒵 = Σ_N Σᵢ exp[(Nμ − Eᵢ)/τ]

from which we may calculate

⟨N⟩ = τ ∂(ln 𝒵)/∂μ and ⟨(ΔN)²⟩ = ⟨N²⟩ − ⟨N⟩² = τ² ∂²(ln 𝒵)/∂μ² = τ ∂⟨N⟩/∂μ
Perfect Classical Gas
From an earlier result for the classical perfect gas, ⟨N⟩ ∝ exp(μ/τ), so that ∂⟨N⟩/∂μ = ⟨N⟩/τ, and using (13.23)

⟨(ΔN)²⟩ = τ ∂⟨N⟩/∂μ = ⟨N⟩
The fractional fluctuation is given by

⟨(ΔN)²⟩^(1/2) / ⟨N⟩ = 1/⟨N⟩^(1/2)
A stochastic or random variable is a quantity with a definite range of values, each one of which, depending on chance, can be attained with a definite probability. A stochastic variable is defined by the set of values it can assume and by the probability distribution over this set.
Thus the number of points on a die that is tossed is a stochastic variable with six values, each having the probability 1/6.
The sum of a large number of independent stochastic variables is itself a stochastic variable. There exists a very important theorem known as the central limit theorem, which says that under very general conditions the distribution of the sum tends toward a normal (Gaussian) distribution law as the number of terms is increased. The theorem may be stated rigorously as follows:
Let x₁, x₂, …, xₙ be independent stochastic variables with their means equal to 0, possessing absolute moments m₂₊δ(i) of order 2+δ, where δ is some number > 0. If, denoting by Bₙ the mean square fluctuation of the sum x₁ + x₂ + … + xₙ, the quotient

rₙ = [Σᵢ m₂₊δ(i)] / Bₙ^(1+δ/2)

tends to zero as n → ∞, then the probability of the inequality

(x₁ + x₂ + … + xₙ)/Bₙ^(1/2) ≤ t

tends uniformly to the limit

(1/√(2π)) ∫₋∞ᵗ exp(−u²/2) du
For a distribution f(x), the absolute moment of order β is defined as

m_β = ∫₋∞^∞ |x|^β f(x) dx
Almost all the probability distributions f(x) of stochastic variables x of interest to us in physical problems will satisfy the requirements of the central limit theorem. Let us consider several examples.
The variable x is distributed uniformly between ±1. Then f(x) = 1/2 for −1 ≤ x ≤ 1, and f(x) = 0 otherwise, so that ⟨x⟩ = 0 but ⟨x²⟩ ≠ 0. The absolute moment of order 3 exists:

m₃ = ∫₋₁¹ |x|³ (1/2) dx = 1/4
The mean square fluctuation is

⟨(Δx)²⟩ = ⟨x²⟩ − ⟨x⟩² = 1/3
If there are n independent variables xᵢ, it is easy to see that the mean square fluctuation Bₙ of their sum (under the same distribution) is

Bₙ = n⟨x²⟩ = n/3
Thus (for δ = 1) we have for (13.28) the result

rₙ = (n/4) / (n/3)^(3/2) = (3^(3/2)/4) n^(−1/2)
which does tend to zero as n → ∞. Therefore the central limit theorem holds for this example.
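The convergence in this example is easy to see numerically; the sketch below (an illustrative addition, with arbitrary choices of n and sample count) normalizes the sum by √Bₙ = √(n/3) and compares one cumulative probability with its Gaussian value:

```python
import numpy as np

# Sketch: central limit theorem for x_i uniform on [-1, 1].
# B_n = n<x^2> = n/3; the normalized sum S_n / sqrt(B_n) should be
# close to a standard Gaussian for large n.
rng = np.random.default_rng(2)
n, trials = 100, 100_000
x = rng.uniform(-1.0, 1.0, (trials, n))
s = x.sum(axis=1) / np.sqrt(n / 3.0)   # normalize by sqrt(B_n)

# Compare P(S < 1) with the standard-Gaussian value 0.8413
p = np.mean(s < 1.0)
print(p)
```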
The variable x is a normal variable with standard deviation σ; that is, it is distributed according to the Gaussian distribution

f(x) = (1/(σ√(2π))) exp(−x²/2σ²)
where σ² is the mean square deviation; σ is called the standard deviation. The absolute moment of order 3 exists:

m₃ = ∫₋∞^∞ |x|³ f(x) dx = 2√(2/π) σ³
The mean square fluctuation is

⟨(Δx)²⟩ = ⟨x²⟩ = σ²
If there are n independent variables xᵢ, then Bₙ = nσ², and (for δ = 1)

rₙ = n · 2√(2/π) σ³ / (nσ²)^(3/2) = 2√(2/π) n^(−1/2)
which approaches 0 as n → ∞. Therefore the central limit theorem applies to this example. A Gaussian random process is one for which all the basic distribution functions f(xᵢ) are Gaussian distributions.
The variable x has a Lorentzian distribution:

f(x) = (1/π) γ/(x² + γ²)
The absolute moment of order β is proportional to

∫₀^∞ x^β/(x² + γ²) dx
But this integral does not converge for β ≥ 1, and thus not for β = 2+δ, δ > 0. We see that the central limit theorem does not apply to a Lorentzian distribution.
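The failure is dramatic in simulation: the sample mean of n Cauchy (Lorentzian) variables has the same Lorentzian distribution as a single sample, so its spread never narrows. A short sketch (an illustrative addition, with arbitrary sample counts):

```python
import numpy as np

# Sketch: the central limit theorem fails for a Lorentzian (Cauchy)
# distribution. The mean of n standard-Cauchy samples is itself
# standard Cauchy, so averaging does not narrow the distribution.
rng = np.random.default_rng(3)
iqrs = []
for n in (1, 100, 1000):
    means = rng.standard_cauchy((10_000, n)).mean(axis=1)
    q25, q75 = np.percentile(means, [25, 75])
    iqrs.append(q75 - q25)
    print(n, q75 - q25)  # interquartile range stays near 2 for every n
```

The interquartile range is used instead of the standard deviation because the Cauchy distribution has no finite variance at all.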
Random Process or Stochastic Process
By a random process or stochastic process x(t) we mean a process in which the variable x does not depend in a completely definite way on the independent variable t, which may denote the time. In observations on the different systems of a representative ensemble we find different functions x(t). All we can do is to study certain probability distributions; we cannot obtain the functions x(t) themselves for the members of the ensemble. In Figure 13.1 one can see a sketch of a possible x(t) for one system.
p₂(x₁,t₁; x₂,t₂) dx₁dx₂ = probability of finding x in (x₁, x₁+dx₁) at time t₁, and in the range (x₂, x₂+dx₂) at time t₂
The plot might, for example, be an oscillogram of the thermal noise current x(t) ≡ I(t) obtained from the output of a filter when a thermal noise voltage is applied to the input.
We can determine, for example,

p₁(x,t) dx = probability of finding x in the range (x, x+dx) at time t
If we had an actual oscillogram record covering a long period of time we might construct an ensemble by cutting the record up into strips of equal length T and mounting them one over the other, as in Figure 13.2.
The probabilities p₁ and p₂ will be found from the ensemble. Proceeding similarly we can form p₃, p₄, …. The whole set of probability distributions pₙ (n = 1, 2, …) may be necessary to describe the random process completely.
Figure 13.2 Recordings of x(t) versus t for three systems of an ensemble, as simulated by taking three intervals of duration T from a single long recording. Time averages are taken in a horizontal direction in such a display; ensemble averages are taken in a vertical direction.
In many important cases p₂ contains all the information we need. When this is true the random process is called a Markoff process. A stationary random process is one for which the joint probability distributions pₙ are invariant under a displacement of the origin of time. We assume in all our further discussion that we are dealing with stationary Markoff processes.
It is useful to introduce the conditional probability P₂(x₁,0 | x₂,t) dx₂ for the probability that, given x₁ at time 0, one finds x in dx₂ at x₂ a time t later.
Then it is obvious that

p₂(x₁,0; x₂,t) = p₁(x₁) P₂(x₁,0 | x₂,t)
The Wiener-Khintchine theorem states a relationship between two important characteristics of a random process: the power spectrum of the process and the correlation function of the process.
Suppose we develop one of the records in Fig. 13.2 of x(t) for 0 < t < T in a Fourier series:

x(t) = Σₙ [aₙ cos(2πfₙt) + bₙ sin(2πfₙt)], n = 1, 2, …
where fn=n/T. We assume that <x(t)>=0, where the angular parentheses <> denote time average; because the average is assumed zero there is no constant term in the Fourier series.
The Fourier coefficients are highly variable from one record of duration T to another. For many types of noise the aₙ, bₙ have Gaussian distributions. When this is true the process (13.47) is said to be a Gaussian random process.
Let us now imagine that x(t) is an electric current flowing through unit resistance. The instantaneous power dissipation is x²(t). Each Fourier component will contribute to the total power dissipation. The power in the n-th component is

Pₙ = [aₙ cos(2πfₙt) + bₙ sin(2πfₙt)]²
We do not consider cross-product terms in the power of the form

[aₙ cos(2πfₙt) + bₙ sin(2πfₙt)][aₘ cos(2πfₘt) + bₘ sin(2πfₘt)]

because for n ≠ m the time average of such terms will be zero. The time average of Pₙ is

⟨Pₙ⟩ = ½(aₙ² + bₙ²)

since ⟨cos²⟩ = ⟨sin²⟩ = ½ and ⟨cos · sin⟩ = 0.
We now turn to ensemble averages, denoted here by a bar over the quantity. As we mentioned above, every record in Fig. 13.2 runs in time from 0 to T. We consider an ensemble average to be an average over a large set of independent records. For a random process the ensemble averages of aₙ and bₙ vanish, while

(ensemble average of aₙ²) = (ensemble average of bₙ²) = σₙ²
where for a Gaussian random process σₙ is just the standard deviation, as in example 13b.
Thus from (13.49) the ensemble average of the time average power dissipation associated with the n-th component of x(t) is

(ensemble average of ⟨Pₙ⟩) = ½(σₙ² + σₙ²) = σₙ²
We define the power spectrum or spectral density G(f) of the random process as the ensemble average of the time average of the power dissipation in unit resistance per unit frequency bandwidth. If Δf = 1/T is equal to the separation between two adjacent frequencies, then

G(fₙ) Δf = σₙ²
Now by (13.51), (13.52) and (13.53)

(ensemble average of ⟨x²(t)⟩) = Σₙ σₙ² = Σₙ G(fₙ) Δf → ∫₀^∞ G(f) df

where ⟨ ⟩ denotes the average over the time t; without changing the result we may take an ensemble average of the time average. The integral of the power spectrum over all frequencies thus gives the ensemble average total power.
Let us consider now the correlation function

C(τ) ≡ ⟨x(t) x(t+τ)⟩

where the average is over the time t; this is the autocorrelation function. Substituting the Fourier series and using ⟨cos(2πfₙt) cos(2πfₙ(t+τ))⟩ = ⟨sin(2πfₙt) sin(2πfₙ(t+τ))⟩ = ½ cos(2πfₙτ), with the cross terms averaging to zero, we find after ensemble averaging

C(τ) = Σₙ σₙ² cos(2πfₙτ) → ∫₀^∞ G(f) cos(2πfτ) df

Thus the correlation function is the Fourier cosine transform of the power spectrum.
Using the inverse Fourier transform we can write

G(f) = 4 ∫₀^∞ C(τ) cos(2πfτ) dτ
This, together with (13.62), is the Wiener-Khintchine theorem. It has an obvious physical content. The correlation function tells us essentially how rapidly the random process is changing.
Suppose the correlation function has the exponential form

C(τ) = ⟨x²⟩ exp(−|τ|/τ_c)

Then we may say that τ_c is a measure of the time the system exists without changing its state, as measured by x(t), by more than e⁻¹; τ_c in this case has the meaning of a correlation time. We then expect physically that frequencies much higher than 1/τ_c will not be represented in an important way in the power spectrum. Now if C(τ) is given by (13.64), the Wiener-Khintchine theorem tells us that

G(f) = 4⟨x²⟩ ∫₀^∞ exp(−τ/τ_c) cos(2πfτ) dτ = 4⟨x²⟩ τ_c / [1 + (2πf τ_c)²]
Thus, as shown in Fig. 13.3, the power spectrum is flat (on a log frequency scale) out to 2πf ≈ 1/τ_c, and then decreases as 1/f² at high frequencies. Note that the noise spectrum for the exponential correlation function is "white" out to the cutoff f_c ≈ 1/(2πτ_c).
Figure 13.3 Plot of spectral density versus log₁₀(2πf) for an exponential correlation function with τ_c = 10⁻⁴ s.
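This transform pair can be checked numerically; the sketch below (an illustrative addition, with ⟨x²⟩ and τ_c set to arbitrary values) integrates the exponential correlation function and compares the result with the Lorentzian spectrum quoted above.

```python
import numpy as np

# Sketch: numerical check of the Wiener-Khintchine pair for an
# exponential correlation function C(tau) = <x^2> exp(-tau/tau_c).
# Its cosine transform should equal the Lorentzian spectrum
# G(f) = 4 <x^2> tau_c / (1 + (2 pi f tau_c)^2).
x2, tau_c = 1.0, 1e-4                         # assumed <x^2> and correlation time
tau = np.linspace(0.0, 50 * tau_c, 200_001)   # e^{-50} makes truncation negligible
dt = tau[1] - tau[0]
c = x2 * np.exp(-tau / tau_c)

def trapezoid(y, h):
    """Composite trapezoid rule on a uniform grid."""
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

results = []
for f in (0.0, 1e3, 1e4, 1e5):
    g_num = 4.0 * trapezoid(c * np.cos(2 * np.pi * f * tau), dt)
    g_exact = 4.0 * x2 * tau_c / (1.0 + (2 * np.pi * f * tau_c) ** 2)
    results.append((g_num, g_exact))
    print(f, g_num, g_exact)
```

At f = 0 both give 4⟨x²⟩τ_c, and well above 1/(2πτ_c) both fall off as 1/f², reproducing the "white then 1/f²" shape of Fig. 13.3.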
The Nyquist Theorem
The Nyquist theorem is of great importance in experimental physics and in electronics. The theorem gives a quantitative expression for the thermal noise generated by a system in thermal equilibrium and is therefore needed in any estimate of the limiting signal-to-noise ratio of experimental set-ups. In the original form the Nyquist theorem states that the mean square voltage across a resistor of resistance R in thermal equilibrium at temperature T is given by

⟨V²⟩ = 4R k_B T Δf
where Δf is the frequency bandwidth within which the voltage fluctuations are measured; all Fourier components outside the given range are ignored. Recalling the definition of the spectral density G(f), we may write the Nyquist result as

G(f) = 4R k_B T
This is not strictly the power density, which would be G(f)/R.
Figure 13.4 The noise generator produces a power spectrum G(f) = 4R k_B T. If the filter passes unit frequency range, the resistance R′ will absorb power k_B T; R′ is matched to R.
The maximum thermal noise power per unit frequency range delivered by a resistor to a matched load is G(f)/4R = k_B T; the factor of 4 enters where it does because the power delivered to the load R′ is

P = ⟨I²⟩ R′ = ⟨V²⟩ R′/(R + R′)²

which at match (R′ = R) is ⟨V²⟩/4R.
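For a feel of the magnitudes involved, the sketch below (an illustrative addition; resistance, temperature, and bandwidth are assumed values typical of a laboratory measurement) evaluates the rms Nyquist noise voltage:

```python
import math

# Sketch: rms Johnson-Nyquist noise voltage, <V^2> = 4 R k T df,
# for an assumed 1 Mohm resistor at 300 K over a 10 kHz bandwidth.
k_B = 1.380649e-23        # Boltzmann constant, J/K
R, T, df = 1.0e6, 300.0, 1.0e4
v_rms = math.sqrt(4.0 * k_B * T * R * df)
print(v_rms)  # about 1.3e-5 V, i.e. roughly 13 microvolts
```

Noise at this level sets the floor for any voltage measurement across such a resistor, which is why the theorem matters for signal-to-noise estimates.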
We will derive the Nyquist theorem in two ways: first, following the original transmission line derivation, and, second, using a microscopic argument.
Transmission line derivation
Figure 13.5 Transmission line of length l with matched terminations.
Consider, as in Figure 13.5, a lossless transmission line of length l and characteristic impedance Z_c = R terminated at each end by a resistance R. The line is therefore matched at each end, in the sense that all energy traveling down the line will be absorbed without reflection in the appropriate resistance.
The entire circuit is maintained at temperature T. In analogy to the argument on black-body radiation (Lecture 8), the transmission line has two electromagnetic modes (one propagating in each direction) in each frequency interval

δf = c′/l
where c′ is the propagation velocity on the line. Each mode has energy

ℏω / [exp(ℏω/k_BT) − 1]
in equilibrium. We are usually concerned here with the classical limit ℏω ≪ k_BT, so that the thermal energy on the line in the frequency range Δf is

(2l/c′) k_B T Δf
The rate at which energy comes off the line in one direction is

k_B T Δf

since the energy (l/c′) k_B T Δf traveling in one direction leaves the line in the transit time l/c′.
Because the terminal impedance is matched to the line, the power coming off the line at one end is absorbed in the terminal impedance R at that end. In equilibrium the load must emit energy at the same rate. The power input to the load is

⟨I²⟩ R = k_B T Δf
But V = I(2R), since the total resistance around the circuit is 2R, so that

⟨V²⟩ = 4R k_B T Δf
which is the Nyquist theorem.
We consider a resistance R with N electrons per unit volume, length l, cross-sectional area A, and carrier relaxation time τ_c. We treat the electrons as Maxwellian, but it can be shown that the noise voltage is independent of such details, involving only the value of the resistance, regardless of the details of the mechanisms contributing to the resistance.
First note that

V = IR = jAR = (N e u̅) A R

where V is the voltage, I the current, j the current density, and u̅ the average (or drift) velocity component of the electrons down the resistor. Observing that NAl is the total number of electrons in the specimen,

V = (Re/l)(NAl) u̅ = (Re/l) Σᵢ uᵢ

summed over all electrons. Thus

V = Σᵢ Vᵢ, with Vᵢ = (Re/l) uᵢ
where uᵢ and Vᵢ are random variables. The spectral density G(f) has the property that in the range Δf

⟨V²⟩ = G(f) Δf

and, because the electrons move independently, G(f) is NAl times the spectral density g₁(f) of the contribution Vᵢ of a single electron.
By equipartition, ½m⟨u²⟩ = ½k_BT, so that ⟨u²⟩ = k_BT/m (m is the mass of the electron, u the random velocity component of an electron).
We suppose that the velocity correlation function of a single electron may be written as

⟨uᵢ(0) uᵢ(τ)⟩ = ⟨u²⟩ exp(−|τ|/τ_c)
Then, from the Wiener-Khintchine theorem, we have for one electron

g₁(f) = (Re/l)² · 4⟨u²⟩ τ_c / [1 + (2πf τ_c)²]
Usually in metals at room temperature τ_c < 10⁻¹³ s, so from dc through the microwave range ωτ_c ≪ 1 and the frequency dependence may be neglected. We recall that ⟨u²⟩ = k_BT/m.
Thus in the frequency range Δf

⟨V²⟩ = G(f) Δf = NAl (Re/l)² · 4(k_BT/m) τ_c Δf = 4R k_B T Δf
Here we have used the relation

R = l/(σA)

from the theory of conductivity, and also the elementary relation

σ = N e² τ_c / m

where σ is the electrical conductivity.
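The cancellation that turns the microscopic expression into 4Rk_BT can be checked numerically; the sketch below (an illustrative addition; the copper-like parameter values are assumptions) confirms that the ratio is exactly 1 once R = lm/(NAe²τ_c) is used.

```python
# Sketch: check that the microscopic spectral density
# G = N A l (R e / l)^2 * 4 (k T / m) tau_c
# reduces to the Nyquist form 4 R k T when R = l m / (N A e^2 tau_c).
k_B = 1.380649e-23        # J/K
e = 1.602e-19             # electron charge, C
m = 9.109e-31             # electron mass, kg
N = 8.5e28                # electrons per m^3 (copper-like, assumed)
A, l = 1e-6, 1e-2         # cross-section m^2 and length m (assumed)
tau_c, T = 2.5e-14, 300.0 # relaxation time s and temperature K (assumed)

R = l * m / (N * A * e**2 * tau_c)   # from R = l/(sigma A), sigma = N e^2 tau_c / m
G_micro = N * A * l * (R * e / l)**2 * 4.0 * (k_B * T / m) * tau_c
G_nyquist = 4.0 * R * k_B * T
print(G_micro / G_nyquist)  # ratio is 1, exactly in the algebra
```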
The simplest way to make (13.85) plausible is to solve the drift velocity equation

m (du/dt) + m u/τ_c = e E
so that in the steady state (ω = 0, or for ωτ_c ≪ 1) we have

u = e E τ_c / m
giving for the mobility (drift velocity per unit electric field)

μ = u/E = e τ_c / m
Then we have for the electrical conductivity

σ = N e μ = N e² τ_c / m
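As a closing numerical sketch (an illustrative addition; the copper-like carrier density and relaxation time are assumed values), these Drude-type formulas give a conductivity of the right order for a good metal:

```python
# Sketch: Drude-type mobility and conductivity,
# mu = e tau_c / m and sigma = N e mu = N e^2 tau_c / m,
# evaluated for assumed copper-like parameters.
e = 1.602e-19             # electron charge, C
m = 9.109e-31             # electron mass, kg
N = 8.5e28                # electrons per m^3 (assumed)
tau_c = 2.5e-14           # relaxation time, s (assumed)

mu = e * tau_c / m        # mobility, m^2 V^-1 s^-1
sigma = N * e * mu        # conductivity, S/m
print(mu, sigma)          # sigma comes out near 6e7 S/m, as for copper
```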