The fluctuation–dissipation theorem (FDT) or fluctuation–dissipation relation (FDR) is a powerful tool in statistical physics for predicting the behavior of systems that obey detailed balance. Given that a system obeys detailed balance, the theorem is a general proof that thermodynamic fluctuations in a physical variable predict the response quantified by the admittance or impedance of the same physical variable (like voltage, temperature difference, etc.), and vice versa. The fluctuation–dissipation theorem applies both to classical and quantum mechanical systems.
The fluctuation–dissipation theorem was proven by Herbert Callen and Theodore Welton in 1951 ^{[1]} and expanded by Ryogo Kubo. There are antecedents to the general theorem, including Einstein's explanation of Brownian motion ^{[2]} during his annus mirabilis and Harry Nyquist's explanation in 1928 of Johnson noise in electrical resistors.^{[3]}
The fluctuation–dissipation theorem says that when there is a process that dissipates energy, turning it into heat (e.g., friction), there is a reverse process related to thermal fluctuations. This is best understood by considering some examples:
The fluctuation–dissipation theorem is a general result of statistical thermodynamics that quantifies the relation between the fluctuations in a system that obeys detailed balance and the response of the system to applied perturbations.
For example, Albert Einstein noted in his 1905 paper on Brownian motion that the same random forces that cause the erratic motion of a particle in Brownian motion would also cause drag if the particle were pulled through the fluid. In other words, the fluctuation of the particle at rest has the same origin as the dissipative frictional force against which one must do work if one tries to perturb the system in a particular direction.
From this observation Einstein was able to use statistical mechanics to derive the Einstein–Smoluchowski relation

D = μk_{B}T,

which connects the diffusion constant D and the particle mobility μ, the ratio of the particle's terminal drift velocity to an applied force. Here k_{B} is the Boltzmann constant, and T is the absolute temperature.
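As a numerical illustration, the relation D = μk_{B}T can be evaluated for a micron-sized bead in water, taking the mobility from Stokes drag. The bead radius and viscosity below are illustrative assumptions, not values from the text:

```python
# Sketch: Einstein–Smoluchowski relation D = mu * kB * T.
# Bead radius and water viscosity are illustrative assumptions.
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0           # absolute temperature, K
ETA = 8.9e-4        # dynamic viscosity of water near 25 C, Pa*s (assumed)
RADIUS = 0.5e-6     # bead radius, m (assumed)

# Mobility mu = terminal drift velocity per unit force, from Stokes drag.
mu = 1.0 / (6.0 * math.pi * ETA * RADIUS)

# Einstein–Smoluchowski relation.
D = mu * KB * T

print(f"mobility  mu = {mu:.3e} m/(N*s)")
print(f"diffusion  D = {D:.3e} m^2/s")
```

For these assumed values the diffusion constant comes out to roughly 0.5 μm²/s, the familiar order of magnitude for a micron-sized particle in water at room temperature.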
In 1928, John B. Johnson discovered and Harry Nyquist explained Johnson–Nyquist noise. With no applied current, the mean-square voltage depends on the resistance R, the temperature T, and the bandwidth Δν over which the voltage is measured:

⟨V^{2}⟩ = 4Rk_{B}TΔν.
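A short numerical sketch of this formula, using illustrative values (a 1 kΩ resistor at 300 K over a 10 kHz bandwidth; these numbers are assumptions, not from the text):

```python
# Sketch: RMS Johnson-Nyquist noise voltage, <V^2> = 4 R kB T * bandwidth.
# Resistance, temperature, and bandwidth are illustrative assumptions.
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(resistance, temperature, bandwidth):
    """RMS open-circuit thermal noise voltage across a resistor."""
    return math.sqrt(4.0 * resistance * KB * temperature * bandwidth)

# 1 kOhm resistor at 300 K measured over a 10 kHz bandwidth (assumed).
vrms = johnson_noise_vrms(1e3, 300.0, 1e4)
print(f"V_rms = {vrms * 1e9:.0f} nV")  # on the order of 0.4 microvolts
```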
The fluctuation–dissipation theorem can be formulated in many ways; one particularly useful form is the following:^{[citation needed]}
Let x(t) be an observable of a dynamical system with Hamiltonian H_{0}(x) subject to thermal fluctuations. The observable x(t) will fluctuate around its mean value ⟨x⟩_{0} with fluctuations characterized by a power spectrum S_{x}(ω). Suppose that we can switch on a time-varying, spatially constant field f(t) which alters the Hamiltonian to H(x) = H_{0}(x) − f(t)x. The response of the observable x(t) to a time-dependent field f(t) is characterized to first order by the susceptibility or linear response function χ(t) of the system

⟨x(t)⟩ = ⟨x⟩_{0} + ∫_{−∞}^{t} dτ f(τ) χ(t − τ),
where the perturbation is adiabatically (very slowly) switched on at τ = −∞.
The fluctuation–dissipation theorem relates the two-sided power spectrum (i.e. both positive and negative frequencies) of x to the imaginary part of the Fourier transform χ̂(ω) = ∫_{0}^{∞} dt χ(t) e^{iωt} of the susceptibility χ(t):

S_{x}(ω) = (2k_{B}T/ω) Im χ̂(ω).

The left-hand side describes fluctuations in x; the right-hand side is closely related to the energy dissipated by the system when pumped by an oscillatory field f(t) = F sin(ωt + φ).
This is the classical form of the theorem; quantum fluctuations are taken into account by replacing 2k_{B}T/ω with ℏ coth(ℏω/2k_{B}T) (whose limit for ℏ → 0 is 2k_{B}T/ω). A proof can be found by means of the LSZ reduction, an identity from quantum field theory.^{[citation needed]}
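The classical form can be checked numerically in a model where both sides are known in closed form. The sketch below assumes an overdamped Brownian particle in a harmonic trap (the Ornstein–Uhlenbeck model, γ dx/dt = −kx + f(t) + noise) with illustrative parameters; it is a consistency check, not a proof:

```python
# Sketch: checking S_x(w) = (2 kB T / w) * Im chi(w) for an overdamped
# particle in a harmonic trap (Ornstein-Uhlenbeck model). All parameter
# values are illustrative assumptions; units are chosen so that kB*T = 1.
import numpy as np

kBT = 1.0      # thermal energy (assumed units)
k = 2.0        # trap stiffness (assumed)
gamma = 0.5    # friction coefficient (assumed)

w = np.linspace(0.1, 50.0, 500)  # angular frequency grid

# Susceptibility chi(t) = (1/gamma) exp(-k t / gamma) for t > 0,
# Fourier-transformed with the e^{+i w t} convention used in the text.
chi = 1.0 / (k - 1j * gamma * w)

# Left-hand side: two-sided power spectrum of x, known analytically from
# the exponential autocorrelation A(t) = (kBT/k) exp(-k|t|/gamma).
S_x = 2.0 * kBT * gamma / (k**2 + gamma**2 * w**2)

# Right-hand side of the FDT.
rhs = (2.0 * kBT / w) * chi.imag

print("max relative deviation:", np.max(np.abs(S_x - rhs) / S_x))
```

For this model the two sides agree identically, up to floating-point rounding.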
The fluctuation–dissipation theorem can be generalized in a straightforward way to the case of space-dependent fields, to the case of several variables, or to a quantum-mechanical setting.^{[1]}
We derive the fluctuation–dissipation theorem in the form given above, using the same notation. Consider the following test case: the field f has been on for infinite time and is switched off at t = 0:

f(t) = f_{0} θ(−t),

where θ(t) is the Heaviside step function. We can express the expectation value of x by the probability distribution W(x, 0) and the transition probability P(x′, t | x, 0):

⟨x(t)⟩ = ∫ dx′ ∫ dx x′ P(x′, t | x, 0) W(x, 0).
The probability distribution function W(x, 0) is an equilibrium distribution and hence given by the Boltzmann distribution for the Hamiltonian H(x) = H_{0}(x) − f_{0}x:

W(x, 0) = exp(−βH(x)) / ∫ dx′ exp(−βH(x′)),

where β = 1/(k_{B}T). For a weak field βf_{0}x ≪ 1, we can expand the right-hand side:

W(x, 0) ≈ W_{0}(x)(1 + βf_{0}x),
here W_{0}(x) is the equilibrium distribution in the absence of a field. Plugging this approximation into the formula for ⟨x(t)⟩ yields

⟨x(t)⟩ = ⟨x⟩_{0} + βf_{0}A(t),    (*)
where A(t) is the autocorrelation function of x in the absence of a field:

A(t) = ⟨x(t)x(0)⟩_{0}.
Note that in the absence of a field the system is invariant under time shifts. We can rewrite ⟨x(t)⟩ using the susceptibility of the system and hence find, with the above equation (*),

⟨x(t)⟩ = ⟨x⟩_{0} + f_{0} ∫_{t}^{∞} dτ χ(τ) = ⟨x⟩_{0} + βf_{0}A(t).
Consequently,

A(t) = k_{B}T ∫_{t}^{∞} dτ χ(τ).    (**)
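Equation (**) can be spot-checked numerically in a model where both A(t) and χ(t) are known in closed form. The sketch below assumes the overdamped harmonic-trap (Ornstein–Uhlenbeck) model with illustrative parameters:

```python
# Sketch: spot-check of equation (**), A(t) = kB T * integral_t^inf chi(tau) dtau,
# for the overdamped harmonic-trap (Ornstein-Uhlenbeck) model, where both
# sides are known exactly. Parameter values are illustrative assumptions.
import numpy as np

kBT, k, gamma = 1.0, 2.0, 0.5   # assumed units and parameters

def chi(tau):
    """Response function of the overdamped trap: chi(t) = (1/gamma) e^{-k t/gamma}."""
    return np.exp(-k * tau / gamma) / gamma

def autocorr(t):
    """Equilibrium autocorrelation A(t) = <x(t) x(0)>_0 = (kBT/k) e^{-k t/gamma}."""
    return (kBT / k) * np.exp(-k * t / gamma)

t = 0.7                                   # test time (arbitrary choice)
tau = np.linspace(t, t + 40.0, 200_001)   # truncate the infinite upper limit
y = chi(tau)
integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(tau))  # trapezoid rule
rhs = kBT * integral

print(autocorr(t), rhs)  # the two sides should agree closely
```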
To make a statement about frequency dependence, it is necessary to take the Fourier transform of equation (**). By integrating by parts (using dA/dt = −k_{B}T χ(t) for t > 0), it is possible to show that

∫_{0}^{∞} dt A(t) e^{iωt} = (i/ω)A(0) − (ik_{B}T/ω) χ̂(ω).

Since A(t) is real and symmetric, it follows that

Â(ω) = 2 Re ∫_{0}^{∞} dt A(t) e^{iωt} = (2k_{B}T/ω) Im χ̂(ω).
Finally, for stationary processes, the Wiener–Khinchin theorem states that the two-sided spectral density is equal to the Fourier transform of the autocorrelation function:

S_{x}(ω) = Â(ω).

Therefore, it follows that

S_{x}(ω) = (2k_{B}T/ω) Im χ̂(ω).
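As an end-to-end sanity check of this result (a sketch under assumed parameters, not part of the derivation), one can simulate an overdamped particle in a harmonic trap, estimate its power spectrum from the trajectory, and compare with the FDT prediction:

```python
# Sketch: end-to-end check of S_x(w) = (2 kB T / w) Im chi(w) against a
# simulated overdamped particle in a harmonic trap (Langevin dynamics,
# gamma dx/dt = -k x + thermal noise). All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
kBT, k, gamma = 1.0, 2.0, 0.5
dt, nseg, seglen = 0.01, 400, 1024

theta = k / gamma                  # relaxation rate of the trap
var = kBT / k                      # equilibrium variance of x
a = np.exp(-theta * dt)
sd = np.sqrt(var * (1.0 - a * a))  # noise scale for the exact OU update

# Simulate the sampled Ornstein-Uhlenbeck process (exact in distribution).
n = nseg * seglen
xi = rng.standard_normal(n)
x = np.empty(n)
x[0] = rng.normal(0.0, np.sqrt(var))
for i in range(1, n):
    x[i] = a * x[i - 1] + sd * xi[i]

# Averaged periodogram: estimate of the two-sided power spectrum S_x(w).
segs = x.reshape(nseg, seglen)
spec = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) * dt / seglen
w = 2.0 * np.pi * np.fft.rfftfreq(seglen, d=dt)

# FDT prediction, using chi(w) = 1 / (k - i gamma w) for this model.
w1 = w[1:]
predicted = (2.0 * kBT / w1) * (1.0 / (k - 1j * gamma * w1)).imag

ratio = spec[1:20] / predicted[:19]   # compare at low frequencies
print("spectrum / FDT prediction:", np.round(ratio, 2))
```

With 400 averaged segments the statistical scatter per frequency bin is about 5%, so the estimated spectrum should track the FDT prediction closely at frequencies well below the Nyquist frequency.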
While the fluctuation–dissipation theorem provides a general relation between fluctuations and dissipation in systems obeying detailed balance, the comparison of fluctuations to dissipation is more complex when detailed balance is violated. Glassy systems are not equilibrated and only slowly approach their equilibrium state. This slow approach to equilibrium (unlike a non-equilibrium steady state) is synonymous with the violation of detailed balance; such systems must therefore be studied over large timescales while they slowly move toward equilibrium.
In the mid1990s, in the study of dynamics of spin glass models, a generalization of the fluctuation–dissipation theorem was discovered^{[citation needed]} that holds for asymptotic nonstationary states, where the temperature appearing in the equilibrium relation is substituted by an effective temperature with a nontrivial dependence on the time scales. This relation is proposed to hold in glassy systems beyond the models for which it was initially found.
The Rényi entropy as well as the von Neumann entropy in quantum physics are not observables, since they depend nonlinearly on the density matrix. Recently, Ansari and Nazarov proved an exact correspondence that reveals the physical meaning of the Rényi entropy flow in time. This correspondence is similar in spirit to the fluctuation–dissipation theorem and allows the measurement of quantum entropy using the full counting statistics (FCS) of energy transfers.^{[4]}^{[5]}^{[6]}