# Edgeworth series

The Gram–Charlier A series (named in honor of Jørgen Pedersen Gram and Carl Charlier), and the Edgeworth series (named in honor of Francis Ysidro Edgeworth) are series that approximate a probability distribution in terms of its cumulants.[1] The two series are identical; only the arrangement of terms (and thus the accuracy of truncating the series) differs.[2] The key idea of these expansions is to write the characteristic function of the distribution whose probability density function f is to be approximated in terms of the characteristic function of a distribution with known and suitable properties, and to recover f through the inverse Fourier transform.

## Gram–Charlier A series

We examine a continuous random variable. Let ${\displaystyle {\hat {f}}}$ be the characteristic function of its distribution whose density function is f, and ${\displaystyle \kappa _{r}}$ its cumulants. We expand in terms of a known distribution with probability density function ψ, characteristic function ${\displaystyle {\hat {\psi }}}$, and cumulants ${\displaystyle \gamma _{r}}$. The density ψ is generally chosen to be that of the normal distribution, but other choices are possible as well. By the definition of the cumulants, we have (see Wallace, 1958)[3]

${\displaystyle {\hat {f}}(t)=\exp \left[\sum _{r=1}^{\infty }\kappa _{r}{\frac {(it)^{r}}{r!}}\right]}$ and
${\displaystyle {\hat {\psi }}(t)=\exp \left[\sum _{r=1}^{\infty }\gamma _{r}{\frac {(it)^{r}}{r!}}\right],}$

which gives the following formal identity:

${\displaystyle {\hat {f}}(t)=\exp \left[\sum _{r=1}^{\infty }(\kappa _{r}-\gamma _{r}){\frac {(it)^{r}}{r!}}\right]{\hat {\psi }}(t)\,.}$

By the properties of the Fourier transform, ${\displaystyle (it)^{r}{\hat {\psi }}(t)}$ is the Fourier transform of ${\displaystyle (-1)^{r}[D^{r}\psi ](-x)}$, where D is the differential operator with respect to x. Thus, after replacing ${\displaystyle x}$ with ${\displaystyle -x}$ on both sides of the equation, we find for f the formal expansion

${\displaystyle f(x)=\exp \left[\sum _{r=1}^{\infty }(\kappa _{r}-\gamma _{r}){\frac {(-D)^{r}}{r!}}\right]\psi (x)\,.}$

If ψ is chosen as the normal density

${\displaystyle \phi (x)={\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left[-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right]}$

with mean and variance as given by f, that is, mean ${\displaystyle \mu =\kappa _{1}}$ and variance ${\displaystyle \sigma ^{2}=\kappa _{2}}$, then the expansion becomes

${\displaystyle f(x)=\exp \left[\sum _{r=3}^{\infty }\kappa _{r}{\frac {(-D)^{r}}{r!}}\right]\phi (x),}$

since ${\displaystyle \gamma _{r}=0}$ for all r > 2, as higher cumulants of the normal distribution are 0. By expanding the exponential and collecting terms according to the order of the derivatives, we arrive at the Gram–Charlier A series. Such an expansion can be written compactly in terms of Bell polynomials as

${\displaystyle \exp \left[\sum _{r=3}^{\infty }\kappa _{r}{\frac {(-D)^{r}}{r!}}\right]=\sum _{n=0}^{\infty }B_{n}(0,0,\kappa _{3},\ldots ,\kappa _{n}){\frac {(-D)^{n}}{n!}}.}$
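As a quick sketch (not part of the original text), the complete Bell polynomials appearing here can be evaluated with the standard recurrence ${\displaystyle B_{n+1}(x_{1},\ldots ,x_{n+1})=\sum _{i=0}^{n}{\binom {n}{i}}B_{n-i}(x_{1},\ldots ,x_{n-i})\,x_{i+1}}$; with the first two arguments set to zero, only partitions into parts of size at least 3 survive. The cumulant values below are hypothetical, chosen only for illustration.

```python
from math import comb

def complete_bell(xs):
    """Complete Bell polynomials B_0..B_n evaluated at xs = [x_1, ..., x_n],
    via the recurrence B_{m+1} = sum_i C(m, i) * B_{m-i} * x_{i+1}."""
    B = [1.0]  # B_0 = 1
    for m in range(len(xs)):
        B.append(sum(comb(m, i) * B[m - i] * xs[i] for i in range(m + 1)))
    return B

# With x_1 = x_2 = 0, only partitions into parts >= 3 survive:
# B_3 = x_3, B_4 = x_4, B_5 = x_5, B_6 = x_6 + 10 x_3^2, ...
kappa = [0.0, 0.0, 2.0, 6.0, 24.0, 120.0]  # hypothetical cumulants kappa_1..kappa_6
B = complete_bell(kappa)
print(B[3], B[4], B[6])  # 2.0 6.0 160.0  (160 = 120 + 10 * 2^2)
```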

Since the n-th derivative of the Gaussian function ${\displaystyle \phi }$ is given in terms of the probabilists' Hermite polynomials ${\displaystyle He_{n}}$ as

${\displaystyle \phi ^{(n)}(x)={\frac {(-1)^{n}}{\sigma ^{n}}}He_{n}\left({\frac {x-\mu }{\sigma }}\right)\phi (x),}$

this gives us the final expression of the Gram–Charlier A series as

${\displaystyle f(x)=\phi (x)\sum _{n=0}^{\infty }{\frac {1}{n!\sigma ^{n}}}B_{n}(0,0,\kappa _{3},\ldots ,\kappa _{n})He_{n}\left({\frac {x-\mu }{\sigma }}\right).}$
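The Hermite-derivative identity used above can be spot-checked numerically. The sketch below (our own; not from the source) compares a central-finite-difference third derivative of ${\displaystyle \phi }$ with ${\displaystyle (-1)^{3}\sigma ^{-3}He_{3}(z)\phi (x)}$ at an arbitrary point.

```python
from math import exp, pi, sqrt

def phi(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2)."""
    z = (x - mu) / sigma
    return exp(-0.5 * z * z) / (sqrt(2 * pi) * sigma)

def he3(z):
    """Probabilists' Hermite polynomial He_3."""
    return z**3 - 3 * z

def d3(f, x, h=1e-3):
    """Central finite difference for the third derivative, O(h^2) accurate."""
    return (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)

mu, sigma, x = 1.0, 2.0, 0.5
z = (x - mu) / sigma
lhs = d3(lambda t: phi(t, mu, sigma), x)
rhs = (-1)**3 / sigma**3 * he3(z) * phi(x, mu, sigma)
print(abs(lhs - rhs) < 1e-6)  # True
```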

Integrating the series gives us the cumulative distribution function

${\displaystyle F(x)=\int _{-\infty }^{x}f(u)du=\Phi (x)-\phi (x)\sum _{n=3}^{\infty }{\frac {1}{n!\sigma ^{n-1}}}B_{n}(0,0,\kappa _{3},\ldots ,\kappa _{n})He_{n-1}\left({\frac {x-\mu }{\sigma }}\right),}$

where ${\displaystyle \Phi }$ is the CDF of the normal distribution with mean ${\displaystyle \mu }$ and variance ${\displaystyle \sigma ^{2}}$.

If we include only the first two correction terms to the normal distribution, we obtain

${\displaystyle f(x)\approx {\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left[-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right]\left[1+{\frac {\kappa _{3}}{3!\sigma ^{3}}}He_{3}\left({\frac {x-\mu }{\sigma }}\right)+{\frac {\kappa _{4}}{4!\sigma ^{4}}}He_{4}\left({\frac {x-\mu }{\sigma }}\right)\right]\,,}$

with ${\displaystyle He_{3}(x)=x^{3}-3x}$ and ${\displaystyle He_{4}(x)=x^{4}-6x^{2}+3}$.
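As a sketch, this two-term truncation is straightforward to implement (function and variable names below are our own):

```python
from math import exp, pi, sqrt

def gram_charlier_pdf(x, mu, sigma, k3, k4):
    """Two-term Gram-Charlier A approximation of a density with mean mu,
    variance sigma^2, and third/fourth cumulants k3, k4."""
    z = (x - mu) / sigma
    he3 = z**3 - 3*z
    he4 = z**4 - 6*z**2 + 3
    normal = exp(-0.5 * z * z) / (sqrt(2 * pi) * sigma)
    return normal * (1 + k3 / (6 * sigma**3) * he3 + k4 / (24 * sigma**4) * he4)

# Sanity check: with k3 = k4 = 0 the correction vanishes and we recover
# the normal density, e.g. 1/sqrt(2*pi) at the origin for mu=0, sigma=1.
print(gram_charlier_pdf(0.0, 0.0, 1.0, 0.0, 0.0))  # 0.3989...
```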

Note that this expression is not guaranteed to be positive, and is therefore not a valid probability distribution. The Gram–Charlier A series diverges in many cases of interest—it converges only if ${\displaystyle f(x)}$ falls off faster than ${\displaystyle \exp(-x^{2}/4)}$ at infinity (Cramér 1957). When it does not converge, the series is also not a true asymptotic expansion, because it is not possible to estimate the error of the expansion. For this reason, the Edgeworth series (see next section) is generally preferred over the Gram–Charlier A series.

## The Edgeworth series

Edgeworth developed a similar expansion as an improvement to the central limit theorem.[4] The advantage of the Edgeworth series is that the error is controlled, so that it is a true asymptotic expansion.

Let ${\displaystyle \{Z_{i}\}}$ be a sequence of independent and identically distributed random variables with mean ${\displaystyle \mu }$ and variance ${\displaystyle \sigma ^{2}}$, and let ${\displaystyle X_{n}}$ be their standardized sums:

${\displaystyle X_{n}={\frac {1}{\sqrt {n}}}\sum _{i=1}^{n}{\frac {Z_{i}-\mu }{\sigma }}.}$

Let ${\displaystyle F_{n}}$ denote the cumulative distribution functions of the variables ${\displaystyle X_{n}}$. Then by the central limit theorem,

${\displaystyle \lim _{n\to \infty }F_{n}(x)=\Phi (x)\equiv \int _{-\infty }^{x}{\tfrac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}q^{2}}dq}$

for every ${\displaystyle x}$, as long as the mean and variance are finite.

Now assume that, in addition to having mean ${\displaystyle \mu }$ and variance ${\displaystyle \sigma ^{2}}$, the i.i.d. random variables ${\displaystyle Z_{i}}$ have higher cumulants ${\displaystyle \kappa _{r}}$. From the additivity and homogeneity properties of cumulants, the cumulants of ${\displaystyle X_{n}}$ in terms of the cumulants of ${\displaystyle Z_{i}}$ are for ${\displaystyle r\geq 2}$,

${\displaystyle \kappa _{r}^{F_{n}}={\frac {n\kappa _{r}}{\sigma ^{r}n^{r/2}}}={\frac {\lambda _{r}}{n^{r/2-1}}}\quad \mathrm {where} \quad \lambda _{r}={\frac {\kappa _{r}}{\sigma ^{r}}}.}$
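The additivity underlying this scaling can be verified with exact arithmetic. The sketch below (our own; it assumes the standard facts that Exp(1) has raw moments ${\displaystyle m_{r}=r!}$ and cumulants ${\displaystyle \kappa _{r}=(r-1)!}$) computes the cumulants of a sum of i.i.d. copies from its moments and confirms that the sum has cumulants ${\displaystyle n\kappa _{r}}$; homogeneity then gives the displayed formula.

```python
from fractions import Fraction
from math import comb, factorial

def raw_moments_of_sum(m, n):
    """Raw moments of a sum of n i.i.d. variables, from the raw moments
    m[r] = E[Z^r] (with m[0] = 1), via E[(A+B)^r] = sum C(r,i) E[A^i] E[B^(r-i)]."""
    s = [Fraction(1)] + [Fraction(0)] * (len(m) - 1)  # moments of the constant 0
    for _ in range(n):
        s = [sum(comb(r, i) * s[i] * m[r - i] for i in range(r + 1))
             for r in range(len(m))]
    return s

def cumulants_from_moments(m):
    """kappa_r via kappa_j = m_j - sum_{i<j} C(j-1, i-1) kappa_i m_{j-i}."""
    k = [Fraction(0)] * len(m)
    for j in range(1, len(m)):
        k[j] = m[j] - sum(comb(j - 1, i - 1) * k[i] * m[j - i] for i in range(1, j))
    return k

# Exp(1): raw moments r!, cumulants (r-1)!  (assumed standard facts).
R, n = 6, 4
moments = [Fraction(factorial(r)) for r in range(R + 1)]
ks = cumulants_from_moments(raw_moments_of_sum(moments, n))
# Additivity: the sum has cumulants n * kappa_r; homogeneity then gives
# kappa_r^{F_n} = n * kappa_r / (sigma * sqrt(n))^r = lambda_r / n^(r/2 - 1).
assert all(ks[r] == n * factorial(r - 1) for r in range(2, R + 1))
print([int(k) for k in ks[1:]])  # [4, 4, 8, 24, 96, 480]
```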

If we expand in terms of the standard normal distribution, that is, if we set

${\displaystyle \phi (x)={\frac {1}{\sqrt {2\pi }}}\exp(-{\tfrac {1}{2}}x^{2})}$

then the cumulant differences in the formal expression of the characteristic function ${\displaystyle {\hat {f}}_{n}(t)}$ of ${\displaystyle F_{n}}$ are

${\displaystyle \kappa _{1}^{F_{n}}-\gamma _{1}=0,}$
${\displaystyle \kappa _{2}^{F_{n}}-\gamma _{2}=0,}$
${\displaystyle \kappa _{r}^{F_{n}}-\gamma _{r}={\frac {\lambda _{r}}{n^{r/2-1}}};\qquad r\geq 3.}$

The Gram–Charlier A series for the density function of ${\displaystyle X_{n}}$ is now

${\displaystyle f_{n}(x)=\phi (x)\sum _{r=0}^{\infty }{\frac {1}{r!}}B_{r}\left(0,0,{\frac {\lambda _{3}}{n^{1/2}}},\ldots ,{\frac {\lambda _{r}}{n^{r/2-1}}}\right)He_{r}(x).}$

The Edgeworth series is developed similarly to the Gram–Charlier A series, only that now terms are collected according to powers of ${\displaystyle n}$. The coefficient of the ${\displaystyle n^{-m/2}}$ term can be obtained by collecting the monomials of the Bell polynomials corresponding to the integer partitions of m. Thus, we have the characteristic function as

${\displaystyle {\hat {f}}_{n}(t)=\left[1+\sum _{j=1}^{\infty }{\frac {P_{j}(it)}{n^{j/2}}}\right]\exp(-t^{2}/2)\,,}$

where ${\displaystyle P_{j}(x)}$ is a polynomial of degree ${\displaystyle 3j}$. Again, after inverse Fourier transform, the density function ${\displaystyle f_{n}}$ follows as

${\displaystyle f_{n}(x)=\phi (x)+\sum _{j=1}^{\infty }{\frac {P_{j}(-D)}{n^{j/2}}}\phi (x)\,.}$

Likewise, integrating the series, we obtain the distribution function

${\displaystyle F_{n}(x)=\Phi (x)+\sum _{j=1}^{\infty }{\frac {1}{n^{j/2}}}{\frac {P_{j}(-D)}{D}}\phi (x)\,.}$

We can explicitly write the polynomial ${\displaystyle P_{m}(-D)}$ as

${\displaystyle P_{m}(-D)=\sum \prod _{i}{\frac {1}{k_{i}!}}\left({\frac {\lambda _{l_{i}}}{l_{i}!}}\right)^{k_{i}}(-D)^{s},}$

where the summation is over all the integer partitions of m such that ${\displaystyle \sum _{i}ik_{i}=m}$ and ${\displaystyle l_{i}=i+2}$ and ${\displaystyle s=\sum _{i}k_{i}l_{i}.}$

For example, if m = 3, then there are three integer partitions of this number: 1 + 1 + 1, 1 + 2, and 3. As such, we need to examine three cases:

• ${\displaystyle 1+1+1=1\cdot k_{1}}$, so we have ${\displaystyle k_{1}=3}$, ${\displaystyle l_{1}=3}$, and ${\displaystyle s=9}$.
• ${\displaystyle 1+2=1\cdot k_{1}+2\cdot k_{2}}$, so we have ${\displaystyle k_{1}=1}$, ${\displaystyle k_{2}=1}$, ${\displaystyle l_{1}=3}$, ${\displaystyle l_{2}=4}$, and ${\displaystyle s=7}$.
• ${\displaystyle 3=3\cdot k_{3}}$, so we have ${\displaystyle k_{3}=1}$, ${\displaystyle l_{3}=5}$, and ${\displaystyle s=5}$.

Thus, the required polynomial is

${\displaystyle {\begin{aligned}P_{3}(-D)&={\frac {1}{3!}}\left({\frac {\lambda _{3}}{3!}}\right)^{3}(-D)^{9}+{\frac {1}{1!1!}}\left({\frac {\lambda _{3}}{3!}}\right)\left({\frac {\lambda _{4}}{4!}}\right)(-D)^{7}+{\frac {1}{1!}}\left({\frac {\lambda _{5}}{5!}}\right)(-D)^{5}\\&={\frac {\lambda _{3}^{3}}{1296}}(-D)^{9}+{\frac {\lambda _{3}\lambda _{4}}{144}}(-D)^{7}+{\frac {\lambda _{5}}{120}}(-D)^{5}.\end{aligned}}}$
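This partition bookkeeping is easy to automate. The sketch below (our own; function names are not from the source) enumerates the integer partitions of m and reproduces the coefficients of ${\displaystyle P_{3}(-D)}$.

```python
from fractions import Fraction
from math import factorial

def partitions(m, max_part=None):
    """Yield the integer partitions of m as {part: multiplicity} dicts."""
    if max_part is None:
        max_part = m
    if m == 0:
        yield {}
        return
    for p in range(min(m, max_part), 0, -1):
        for rest in partitions(m - p, p):
            out = dict(rest)
            out[p] = out.get(p, 0) + 1
            yield out

def pm_terms(m):
    """Terms of P_m(-D): for each partition {i: k_i}, return the rational
    coefficient prod_i (1/k_i!)(1/l_i!)^{k_i} (to be multiplied by
    prod lambda_{l_i}^{k_i}), the lambda exponents {l_i: k_i}, and the
    derivative order s = sum_i k_i * l_i, with l_i = i + 2."""
    terms = []
    for part in partitions(m):
        coeff, lams, s = Fraction(1), {}, 0
        for i, k in part.items():
            l = i + 2
            coeff *= Fraction(1, factorial(k)) * Fraction(1, factorial(l)) ** k
            lams[l] = k
            s += k * l
        terms.append((coeff, lams, s))
    return terms

# m = 3 reproduces P_3(-D): coefficients 1/120, 1/144, 1/1296 on D^5, D^7, D^9.
print(sorted((s, c) for c, _, s in pm_terms(3)))
# [(5, Fraction(1, 120)), (7, Fraction(1, 144)), (9, Fraction(1, 1296))]
```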

The first five terms of the expansion are[5]

${\displaystyle {\begin{aligned}f_{n}(x)&=\phi (x)\\&\quad -{\frac {1}{n^{\frac {1}{2}}}}\left({\tfrac {1}{6}}\lambda _{3}\,\phi ^{(3)}(x)\right)\\&\quad +{\frac {1}{n}}\left({\tfrac {1}{24}}\lambda _{4}\,\phi ^{(4)}(x)+{\tfrac {1}{72}}\lambda _{3}^{2}\,\phi ^{(6)}(x)\right)\\&\quad -{\frac {1}{n^{\frac {3}{2}}}}\left({\tfrac {1}{120}}\lambda _{5}\,\phi ^{(5)}(x)+{\tfrac {1}{144}}\lambda _{3}\lambda _{4}\,\phi ^{(7)}(x)+{\tfrac {1}{1296}}\lambda _{3}^{3}\,\phi ^{(9)}(x)\right)\\&\quad +{\frac {1}{n^{2}}}\left({\tfrac {1}{720}}\lambda _{6}\,\phi ^{(6)}(x)+\left({\tfrac {1}{1152}}\lambda _{4}^{2}+{\tfrac {1}{720}}\lambda _{3}\lambda _{5}\right)\phi ^{(8)}(x)+{\tfrac {1}{1728}}\lambda _{3}^{2}\lambda _{4}\,\phi ^{(10)}(x)+{\tfrac {1}{31104}}\lambda _{3}^{4}\,\phi ^{(12)}(x)\right)\\&\quad +O\left(n^{-{\frac {5}{2}}}\right).\end{aligned}}}$

Here, ${\displaystyle \phi ^{(j)}(x)}$ is the j-th derivative of ${\displaystyle \phi }$ at the point x. Recalling that the derivatives of the density of the normal distribution are related to the normal density by ${\displaystyle \phi ^{(n)}(x)=(-1)^{n}He_{n}(x)\phi (x)}$ (where ${\displaystyle He_{n}}$ is the Hermite polynomial of order n), this explains the alternative representations in terms of the density function. Blinnikov and Moessner (1998) have given a simple algorithm to calculate higher-order terms of the expansion.

Note that in the case of lattice distributions (which take only discrete values), the Edgeworth expansion must be adjusted to account for the discontinuous jumps between lattice points.[6]

## Illustration: density of the sample mean of three ${\displaystyle \chi ^{2}}$

*Density of the sample mean of three ${\displaystyle \chi ^{2}}$ variables, comparing the true density, the normal approximation, and two Edgeworth expansions.*

Take ${\displaystyle X_{i}\sim \chi ^{2}(k=2)\qquad i=1,2,3}$ and the sample mean ${\displaystyle {\bar {X}}={\frac {1}{3}}\sum _{i=1}^{3}X_{i}}$.

We can use several distributions for ${\displaystyle {\bar {X}}}$:

• The exact distribution, which follows a gamma distribution: ${\displaystyle {\bar {X}}\sim \mathrm {Gamma} \left(\alpha =n\cdot k/2,\theta =2/n\right)=\mathrm {Gamma} \left(\alpha =3,\theta =2/3\right)}$
• The asymptotic normal distribution: ${\displaystyle {\bar {X}}{\xrightarrow {n\to \infty }}N(k,2\cdot k/n)=N(2,4/3)}$
• Two Edgeworth expansions, of degrees 2 and 3
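A sketch of this comparison at a few points, using the degree-2 Edgeworth expansion. It assumes the standard chi-squared cumulants ${\displaystyle \kappa _{r}=2^{r-1}(r-1)!\,k}$, which are not stated above; the function names are our own.

```python
from math import exp, gamma, pi, sqrt

k, n = 2, 3                          # chi2 degrees of freedom, sample size
mu, sigma = float(k), sqrt(2.0 * k)  # mean and sd of one chi2(k) variable
# Standardized cumulants lambda_r = kappa_r / sigma^r, with kappa_r = 2^(r-1) (r-1)! k:
lam3 = 8 * k / (2 * k) ** 1.5        # = 2 for k = 2
lam4 = 48 * k / (2 * k) ** 2         # = 6 for k = 2

def exact_pdf(x):
    """Density of the sample mean: Gamma(alpha = n*k/2, theta = 2/n)."""
    a, th = n * k / 2.0, 2.0 / n
    return x ** (a - 1) * exp(-x / th) / (gamma(a) * th ** a)

def std_normal_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2 * pi)

def normal_pdf(x):
    """Asymptotic N(k, 2k/n) density of the sample mean."""
    z = (x - mu) / (sigma / sqrt(n))
    return std_normal_pdf(z) * sqrt(n) / sigma

def edgeworth_pdf(x):
    """Degree-2 Edgeworth density of the sample mean (terms up to 1/n)."""
    z = (x - mu) / (sigma / sqrt(n))
    he3 = z**3 - 3*z
    he4 = z**4 - 6*z**2 + 3
    he6 = z**6 - 15*z**4 + 45*z**2 - 15
    f = std_normal_pdf(z) * (1 + lam3 * he3 / (6 * sqrt(n))
                             + (lam4 * he4 / 24 + lam3**2 * he6 / 72) / n)
    return f * sqrt(n) / sigma  # back-transform from X_n to the sample mean

for x in (1.0, 2.0):
    print(x, exact_pdf(x), normal_pdf(x), edgeworth_pdf(x))
```

At both points the Edgeworth value is markedly closer to the exact gamma density than the plain normal approximation.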

## Disadvantages of the Edgeworth expansion

Edgeworth expansions can suffer from a few issues:

• They are not guaranteed to be a proper probability distribution, as:
  • the density need not integrate to 1;
  • probabilities can be negative.
• They can be inaccurate, especially in the tails, for two main reasons:
  • they are obtained from a Taylor-type expansion around the mean;
  • they guarantee (asymptotically) an absolute error, not a relative one. This is an issue when one wants to approximate very small quantities, for which the absolute error may be small but the relative error large.