Phorgy Phynance

Archive for the ‘Stable Distributions’ Category

More fun with maximum likelihood estimation

A while ago, I wrote a post

Fun with maximum likelihood estimation

where I jotted down some notes. I ended the post with the following:

Note: The first time I worked through this exercise, I thought it was cute, but I would never compute \mu and \sigma^2 as above so the maximum likelihood estimation, as presented, is not meaningful to me. Hence, this is just a warm up for what comes next. Stay tuned…

Well, it has been over a year, and I'm trying to get a friend interested in MLE for a side project we might work on together, so I thought it would be good to revisit it now.

To briefly review, the probability of observing N independent samples X\in\mathbb{R}^N, with each observation measured to within a small interval of width \Delta x, may be approximated by

\begin{aligned} P(X|\theta) = \prod_{i = 1}^N P(x_i|\theta) = \left(\Delta x\right)^N \prod_{i=1}^N \rho(x_i|\theta),\end{aligned}

where \rho(x|\theta) is a probability density and \theta represents the parameters we are trying to estimate. The key observation becomes clear after a slight change in perspective.

If we take the Nth root of the above probability (and divide by \Delta x), we obtain the geometric mean of the individual densities, i.e.

\begin{aligned} \langle \rho(X|\theta)\rangle_{\text{geom}} = \prod_{i=1}^N \left[\rho(x_i|\theta)\right]^{1/N}.\end{aligned}
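Since the Nth root is a monotone transformation and \left(\Delta x\right)^N does not depend on \theta, maximizing this geometric mean over \theta is equivalent to maximizing the original probability, i.e.

\begin{aligned} \arg\max_\theta \langle \rho(X|\theta)\rangle_{\text{geom}} = \arg\max_\theta P(X|\theta).\end{aligned}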

In computing the geometric mean above, each sample is given the same weight, i.e. 1/N. However, we may have reason to weight some samples more heavily than others, e.g. if we are studying samples from a time series, we may want to weight the more recent data more heavily. This inspired me to replace 1/N with an arbitrary weight w_i satisfying

\begin{aligned} w_i\ge 0,\quad\text{and}\quad \sum_{i=1}^N w_i = 1.\end{aligned}

With no apologies for abusing terminology, I’ll refer to this as the likelihood function

\begin{aligned} \mathcal{L}(\theta) = \prod_{i=1}^N \rho(x_i|\theta)^{w_i}.\end{aligned}
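As a minimal numerical sketch of this definition (the return series, the normal density, and the exponential decay factor below are placeholder assumptions for illustration, not anything taken from the actual analysis):

import numpy as np
from scipy.stats import norm

# Placeholder data: pretend these are daily returns, oldest first.
x = np.array([-0.012, 0.004, 0.007, -0.021, 0.010, 0.003])
N = len(x)

# Hypothetical exponentially decaying weights (most recent sample heaviest),
# normalized so that w_i >= 0 and sum(w_i) = 1.
lam = 0.94                      # assumed decay factor, purely illustrative
w = lam ** np.arange(N)[::-1]   # oldest sample gets the smallest weight
w /= w.sum()

# Weighted likelihood L(theta) = prod_i rho(x_i|theta)^{w_i},
# here with a candidate normal density rho(x|mu, sigma).
mu, sigma = 0.0, 0.01
L = np.prod(norm.pdf(x, loc=mu, scale=sigma) ** w)

# With equal weights w_i = 1/N this is just the geometric mean of the densities.
L_equal = np.prod(norm.pdf(x, loc=mu, scale=sigma) ** (1.0 / N))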

Replacing w_i with 1/N would result in the same parameter estimation as the traditional maximum likelihood method.

It is often more convenient to work with the log likelihood, which has an even more intuitive expression

\begin{aligned}\log\mathcal{L}(\theta) = \sum_{i=1}^N w_i \log \rho(x_i|\theta),\end{aligned}

i.e. the log likelihood is simply the weighted (arithmetic) average of the log densities.
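For example, if \rho(x|\theta) is a normal density with \theta = (\mu, \sigma^2), setting the derivatives of the weighted log likelihood to zero gives the closed-form estimates

\begin{aligned} \hat{\mu} = \sum_{i=1}^N w_i x_i,\qquad \hat{\sigma}^2 = \sum_{i=1}^N w_i \left(x_i - \hat{\mu}\right)^2,\end{aligned}

i.e. the weighted sample mean and variance, which reduce to the familiar maximum likelihood estimates when w_i = 1/N.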

I use this approach to estimate stable density parameters for time series analysis that better captures risk in the tails. For instance, I used this technique when generating the charts in a post from back in 2009:

80 Years of Daily S&P 500 Value-at-Risk Estimates

which was subsequently picked up by Felix Salmon of Reuters in

How has VaR changed over time?

and Tracy Alloway of Financial Times in

On baseline VaR

If I find a spare moment, which is rare these days, I'd like to update that analysis and expand it to other markets. A lot has happened since August 2009. Other markets I'd like to look at include other equity markets as well as fixed income. Because they can cleanly model skew, stable distributions are particularly well suited to analyzing fixed income returns.
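For a stable density there is no closed form, so the weighted log likelihood has to be maximized numerically. The sketch below is only an illustration of that idea, not the code behind the charts: it leans on SciPy's levy_stable density (which is evaluated numerically and can be slow), and the starting values, the bounds on \alpha and \beta, and the floor on the density are assumptions chosen for the example.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import levy_stable

def weighted_stable_fit(x, w):
    """Maximize the weighted log likelihood for a stable density.

    x : observations (e.g. daily returns); w : weights with w_i >= 0, sum(w) = 1.
    Returns estimates of (alpha, beta, loc, scale).
    """
    def neg_weighted_loglik(theta):
        alpha, beta, loc, scale = theta
        pdf = levy_stable.pdf(x, alpha, beta, loc=loc, scale=scale)
        # Floor the density to avoid log(0) in the tails of a poor candidate fit.
        return -np.sum(w * np.log(np.maximum(pdf, 1e-300)))

    # Illustrative starting point and bounds (alpha kept away from its lower limit).
    theta0 = [1.7, 0.0, np.average(x, weights=w), np.std(x)]
    bounds = [(1.1, 2.0), (-1.0, 1.0), (None, None), (1e-8, None)]
    result = minimize(neg_weighted_loglik, theta0, method="L-BFGS-B", bounds=bounds)
    return result.x

# Usage sketch, e.g. with the exponentially decaying weights from the earlier example:
# alpha, beta, loc, scale = weighted_stable_fit(x, w)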

Daily S&P 500 Value-at-Risk Estimates

A few people have commented on the methodology used to produce the charts in my last post. Keep in mind, I threw those together quickly for Felix, based on charts already prepared for a seminar at UCLA. If you want to see what I actually look at on a regular basis, I put together the following chart:

[Chart: daily S&P 500 VaR, 10-year history, linear scale, annotated]

This is the 99%, 1-day VaR using a weighting scheme that places more weight on the most recent data.

Again, note the divergence between the two charts in recent months. Risk systems (like most third party vendors) based on normal distributions are likely indicating that risk continues to decrease. However, the stable distribution indicates the opposite, i.e. risk has begun increasing again.
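To connect the fitted parameters to the plotted number, here is a hedged sketch of the VaR step; the decay factor is a placeholder assumption rather than the weighting scheme actually used, and the normal quantile is included only to mirror the comparison with normal-based risk systems.

import numpy as np
from scipy.stats import levy_stable, norm

def recency_weights(n, lam=0.99):
    # Hypothetical exponentially decaying weights over n returns (oldest first),
    # normalized to sum to 1; lam is an assumed decay factor.
    w = lam ** np.arange(n)[::-1]
    return w / w.sum()

def var_99_stable(alpha, beta, loc, scale):
    # 99%, 1-day VaR: the loss at the 1st percentile of the fitted stable
    # return distribution, reported as a positive number.
    return -levy_stable.ppf(0.01, alpha, beta, loc=loc, scale=scale)

def var_99_normal(mu, sigma):
    # The same quantile under a fitted normal distribution, for comparison.
    return -norm.ppf(0.01, loc=mu, scale=sigma)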

Written by Eric

August 8, 2009 at 9:49 am