Phorgy Phynance

Archive for the ‘Modeling’ Category

Discrete Stochastic Calculus

leave a comment »

This post is part of a series

In the previous post of this series, we found that when Cartesian coordinates are placed on a binary tree, the commutation relations are given by

  • [dx,x] = \frac{(\Delta x)^2}{\Delta t} dt
  • [dt,t] = \Delta t dt
  • [dx,t] = [dt,x] = \Delta t dx.

There are two distinct classes of discrete calculus depending on the relation between \Delta x and \Delta t.

Discrete Exterior Calculus

If we set \Delta x = \Delta t, the commutation relations reduce to

  • [dx,x] = \Delta t dt
  • [dt,t] = \Delta t dt
  • [dx,t] = [dt,x] = \Delta t dx

and in the continuum limit, i.e.  \Delta t\to 0, reduce to

  • [dx,x] = 0
  • [dt,t] = 0
  • [dx,t] = [dt,x] = 0.

In other words, when \Delta x = \Delta t, the commutation relations vanish in the continuum limit and the discrete calculus converges to the exterior calculus of differential forms.

Because of this, the discrete calculus on a binary tree with \Delta x = \Delta t will be referred to as the discrete exterior calculus.

Discrete Stochastic Calculus

If we instead set (\Delta x)^2 = \Delta t, the commutation relations reduce to

  • [dx,x] = dt
  • [dt,t] = \Delta t dt
  • [dx,t] = [dt,x] = \Delta t dx

and in the continuum limit, i.e.  \Delta t\to 0, reduce to

  • [dx,x] = dt
  • [dt,t] = 0
  • [dx,t] = [dt,x] = 0.

In this case, all commutation relations vanish in the continuum limit except [dx,x] = dt.

In the paper:

I demonstrate how the continuum limit of the commutation relations gives rise to (a noncommutative version of) stochastic calculus, where dx plays the role of a Brownian motion.

Because of this, the discrete calculus on a binary tree with (\Delta x)^2 = \Delta t will be referred to as the discrete stochastic calculus.
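The scaling (\Delta x)^2 = \Delta t is exactly the scaling of a simple random walk that converges to Brownian motion. As a quick numerical sanity check (a Python sketch of my own, not taken from the paper), simulating binary-tree paths with step size \Delta x = \sqrt{\Delta t} gives a terminal variance equal to the elapsed time, the defining property of Brownian motion:

```python
import random

def walk_variance(n_paths, n_steps, dt, seed=0):
    """Simulate binary-tree paths with step size dx = sqrt(dt),
    i.e. (dx)^2 = dt, and return the sample variance of x at the
    final time T = n_steps * dt."""
    rng = random.Random(seed)
    dx = dt ** 0.5
    finals = []
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += dx if rng.random() < 0.5 else -dx  # up/down move on the tree
        finals.append(x)
    mean = sum(finals) / n_paths
    return sum((v - mean) ** 2 for v in finals) / n_paths

# Var[x(T)] should approach T = 1.0 here, as it would for Brownian motion.
var = walk_variance(20000, 100, 0.01)
```

Had we instead used \Delta x = \Delta t, the terminal spread would shrink to zero with \Delta t, which is the deterministic (exterior calculus) regime.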

To date, discrete stochastic calculus has found robust applications in mathematical finance and fluid dynamics. For instance, the application of discrete stochastic calculus to Black-Scholes option pricing was presented in

and the application to fluid dynamics was presented in

Both of these subjects will be addressed in more detail as part of this series of articles.

It should be noted that discrete calculus and its special cases of discrete exterior calculus and discrete stochastic calculus represent a new framework for numerical modeling. We are not taking continuum models built on continuum calculus and constructing finite approximations. Instead, we are building a robust mathematical framework that has finiteness built in from the outset. The resulting numerical models are not approximations, but exact models developed within a finite numerical framework. The framework itself converges to the continuum theory, so any numerical model built within it will automatically converge to its continuum counterpart (if such a thing is desired).

Discrete calculus provides a kind of meta algorithm. It is an algorithm for generating algorithms.


Written by Eric

August 25, 2012 at 12:38 pm

Network Theory and Discrete Calculus – Notation Revisited

with 2 comments

This post is part of a series

As stated in the Introduction to this series, one of my goals is to follow along with John Baez’ series and reformulate things in the language of discrete calculus. Along the way, I’m coming across operations that I haven’t used in any of my prior applications of discrete calculus to mathematical finance and field theories. For instance, in The Discrete Master Equation, I introduced a boundary operator

\begin{aligned} \partial \mathbf{e}^{i,j} = \mathbf{e}^j-\mathbf{e}^i.\end{aligned}

Although I hope the reason I call this a boundary operator is obvious, it would be more precise to call it something like a graph divergence. To see why, consider the boundary of an arbitrary discrete 1-form

\begin{aligned}\partial \alpha = \sum_{i,j} \alpha_{i,j} \left(\mathbf{e}^j - \mathbf{e}^i\right) = \sum_i \left[ \sum_j \left(\alpha_{j,i} - \alpha_{i,j}\right)\right] \mathbf{e}^i.\end{aligned}

A hint of sloppy notation has already crept in here, but we can see that the boundary of a discrete 1-form at a node i is the sum of coefficients flowing into node i minus the sum of coefficients flowing out of node i. This is what you would expect of a divergence operator, but divergence depends on a metric. This operator does not, hence it is topological in nature. It is tempting to call this a topological divergence, but I think graph divergence is a better choice for reasons to be seen later.

One reason the above notation is a bit sloppy is because in the summations, we should really keep track of what directed edges are actually present in the directed graph. Until now, simply setting

\mathbf{e}^{i,j} = 0

if there is no directed edge from node i to node j was sufficient. Not anymore.

Also, in the applications where I’ve used discrete calculus so far, there has always been only a single directed edge connecting any two nodes. When applying discrete calculus to electrical circuits, as John has started doing in his series, we obviously would like to consider elements that are in parallel.

I tend to get hung up on notation and have thought about the best way to deal with this. My solution is not perfect and I’m open to suggestions, but what I settled on is to introduce a summation not only over nodes, but also over the directed edges connecting those nodes. Here it is for an arbitrary discrete 1-form

\begin{aligned}\alpha = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j},\end{aligned}

where [i,j] is the set of all directed edges from node i to node j. I’m not 100% enamored with it, but it is handy for performing calculations and doesn’t make me think too much.

For example, with this new notation, the boundary operator is much clearer

\begin{aligned} \partial \alpha &= \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \left(\mathbf{e}^{j}-\mathbf{e}^i\right) \\ &= \sum_i \left[\sum_j \left( \sum_{\epsilon\in[j,i]} \alpha_{j,i}^{\epsilon} - \sum_{\epsilon\in[i,j]} \alpha_{i,j}^{\epsilon} \right)\right]\mathbf{e}^i.\end{aligned}

As before, this says the graph divergence of \alpha at the node i is the sum of all coefficients flowing into node i minus the sum of all coefficients flowing out of node i. Moreover, for any node j there can be one or more (or zero) directed edges from j into i.
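To make the bookkeeping concrete, here is a small Python sketch (the names are my own illustration, not from the series) of the graph divergence with parallel edges allowed, where a discrete 1-form is stored as a map from labelled directed edges (i, j, \epsilon) to coefficients:

```python
def graph_divergence(alpha):
    """Boundary (graph divergence) of a discrete 1-form.

    alpha maps labelled directed edges (i, j, eps) to coefficients;
    parallel edges between the same pair of nodes carry distinct labels.
    Returns a dict mapping each node to the sum of coefficients flowing
    in minus the sum of coefficients flowing out."""
    div = {}
    for (i, j, _eps), a in alpha.items():
        div[j] = div.get(j, 0.0) + a  # coefficient a flows into node j
        div[i] = div.get(i, 0.0) - a  # coefficient a flows out of node i
    return div

# Two parallel edges from node 0 to node 1, and one edge back:
alpha = {(0, 1, 'a'): 2.0, (0, 1, 'b'): 3.0, (1, 0, 'c'): 1.0}
div = graph_divergence(alpha)
```

Here div comes out as {1: 4.0, 0: -4.0}, and the values always sum to zero over all nodes, reflecting the fact that this operator is topological rather than metric.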

Written by Eric

November 19, 2011 at 11:27 pm

Network Theory and Discrete Calculus – The Discrete Master Equation

with 6 comments

This post is a follow up to

Network Theory and Discrete Calculus – Introduction

To give the result first, the master equation can be expressed in terms of discrete calculus simply as

\partial(\psi P) = 0,

where \psi is a discrete 0-form representing the states of a Markov chain (at all times), P is a discrete 1-form representing transition probabilities, and \partial is the boundary operator, i.e. a kind of graph divergence.

The rest of this post explains the terms in this discrete master equation and how it works.

The State-Time Graph

When working with a finite (or countable) number of states, there is nothing new in considering states \psi_i to be associated to nodes and the transition probabilities P_{i,j} to be associated to directed edges of a bi-directed graph. A simple 2-state example is given below

The directed graphs we work with for discrete stochastic calculus are slightly different and could be referred to as “state-time” graphs, which are supposed to make you think of “space-time”. A state i at time t is considered a different node than the state i at time t+1. An example 2-state, 2-time directed graph is illustrated below:

There are four directed edges in this state-time graph, which will be labelled

* \mathbf{e}^{(i,t)(i,t+1)}
* \mathbf{e}^{(i,t)(j,t+1)}
* \mathbf{e}^{(j,t)(i,t+1)}
* \mathbf{e}^{(j,t)(j,t+1)}

For N states, the state-time graph will look similar but with more states appended horizontally.

The Discrete Master Equation

A discrete 0-form representing the states at all times can be expressed as

\psi = \sum_i \sum_t \psi_i^t \mathbf{e}^{(i,t)}

and a discrete 1-form representing the transition probabilities can be expressed as

P = \sum_{i,j} \sum_t P_{i,j}^t \mathbf{e}^{(i,t)(j,t+1)}.

The product of the 0-form \psi and the 1-form P is given by

\psi P = \sum_{i,j} \sum_t \psi_i^t P_{i,j}^t \mathbf{e}^{(i,t)(j,t+1)}.

The boundary of a directed edge is given by

\partial \mathbf{e}^{(i,t)(j,t+1)} = \mathbf{e}^{(j,t+1)} - \mathbf{e}^{(i,t)}.

Now for some gymnastics, we can compute

\begin{aligned} \partial(\psi P)  &= \sum_{i,j} \sum_t \psi_i^t P_{i,j}^t \left[\mathbf{e}^{(j,t+1)} - \mathbf{e}^{(i,t)}\right] \\  &= \sum_{i,j} \sum_t \left[\psi_j^t P_{j,i}^t \mathbf{e}^{(i,t+1)} - \psi_i^t P_{i,j}^t \mathbf{e}^{(i,t)}\right] \\  &= \sum_i \sum_t \left[\sum_j \left(\psi_j^t P_{j,i}^t - \psi_i^{t+1} P_{i,j}^{t+1}\right)\right] \mathbf{e}^{(i,t+1)}.  \end{aligned}

This is zero only when the last term in brackets is zero, i.e.

\sum_j \left(\psi_j^t P_{j,i}^t - \psi_i^{t+1} P_{i,j}^{t+1}\right) = 0

or, rearranging,

\psi_i^{t+1} \sum_j P_{i,j}^{t+1} = \sum_j \psi_j^t P_{j,i}^t.

Since P is right stochastic, we have

\sum_j P_{i,j}^{t+1} = 1

so that

\psi_i^{t+1} = \sum_j \psi_j^t P_{j,i}^t.

In other words, when P is right stochastic and \partial(\psi P) = 0, we get the usual master equation from stochastic mechanics

\partial(\psi P) = 0\implies \psi_i^{t+1} = \sum_j \psi_j^t P_{j,i}^t.
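As a concrete check of this final implication, a short Python sketch (a hypothetical 2-state chain of my own choosing, not from the post) iterates \psi_i^{t+1} = \sum_j \psi_j^t P_{j,i}^t and confirms that a right-stochastic P conserves total probability:

```python
def master_step(psi, P):
    """One step of the discrete master equation:
    psi[i] at time t+1  =  sum_j psi[j] * P[j][i],
    where P is right stochastic (each row of P sums to 1)."""
    n = len(psi)
    return [sum(psi[j] * P[j][i] for j in range(n)) for i in range(n)]

# A hypothetical 2-state chain; both row sums are 1.
P = [[0.9, 0.1],
     [0.4, 0.6]]
psi = [1.0, 0.0]
for _ in range(3):
    psi = master_step(psi, P)
```

After three steps psi = [0.825, 0.175] with sum(psi) still exactly 1; iterating further, psi approaches the stationary distribution [0.8, 0.2] of this particular chain.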

Parting Thoughts

The master equation says that a boundary vanishes. This makes me wonder about homology, gauge transformations, sources, etc. For example, since

\partial(\psi P) = 0,

does this imply

\psi P = \partial F

for some discrete 2-form F?

If G is a discrete 2-form whose boundary does not vanish, then

\psi P + \partial G

gives the same dynamics because \partial^2 = 0. This would be a kind of gauge transformation.

There are several directions to take this from here, but that is about all the energy I have for now. More to come…

Written by Eric

October 29, 2011 at 10:04 pm

Leverage Causes Fat Tails and Clustered Volatility

leave a comment »

Doyne Farmer is awesome. I first ran into him back in 2001 (or maybe 2002) at the University of Chicago where he was giving a talk on order book dynamics with some awesome videos from the order book of the London Stock Exchange. He has another recent paper that also looks very interesting:

Leverage Causes Fat Tails and Clustered Volatility
Stefan Thurner, J. Doyne Farmer, John Geanakoplos
(Submitted on 11 Aug 2009 (v1), last revised 10 Jan 2010 (this version, v2))

We build a simple model of leveraged asset purchases with margin calls. Investment funds use what is perhaps the most basic financial strategy, called “value investing”, i.e. systematically attempting to buy underpriced assets. When funds do not borrow, the price fluctuations of the asset are normally distributed and uncorrelated across time. All this changes when the funds are allowed to leverage, i.e. borrow from a bank, to purchase more assets than their wealth would otherwise permit. During good times competition drives investors to funds that use more leverage, because they have higher profits. As leverage increases price fluctuations become heavy tailed and display clustered volatility, similar to what is observed in real markets. Previous explanations of fat tails and clustered volatility depended on “irrational behavior”, such as trend following. Here instead this comes from the fact that leverage limits cause funds to sell into a falling market: A prudent bank makes itself locally safer by putting a limit to leverage, so when a fund exceeds its leverage limit, it must partially repay its loan by selling the asset. Unfortunately this sometimes happens to all the funds simultaneously when the price is already falling. The resulting nonlinear feedback amplifies large downward price movements. At the extreme this causes crashes, but the effect is seen at every time scale, producing a power law of price disturbances. A standard (supposedly more sophisticated) risk control policy in which individual banks base leverage limits on volatility causes leverage to rise during periods of low volatility, and to contract more quickly when volatility gets high, making these extreme fluctuations even worse.

I completely agree with this idea. In fact, I discussed this concept with Francis Longstaff at the last advisory board meeting of UCLA’s financial engineering program. Back in December, I spent the majority of a flight back to Hong Kong from Europe doodling a bunch of math trying to express the idea in formulas, but didn’t come up with anything worth writing home about. But it seems like they make some good progress in this paper.

Basically, financial firms of all stripes have performance targets. In a period of decreasing volatility (as we experienced preceding the crisis), asset returns tend to decrease as well. To compensate, firms tend to move further out along the risk spectrum and/or increase leverage to maintain a given return level. The dynamic here is that leverage tends to increase as volatility decreases. However, the increased leverage raises the chance of a tail event occurring, as we experienced.

On first glance, this paper captures a lot of the dynamics I’ve been wanting to see written down somewhere. Hopefully this gets some attention.


Written by Eric

April 5, 2011 at 11:12 am

Discrete stochastic calculus and commutators

leave a comment »

This post is in response to a question from Tim van Beek over at the Azimuth Project blog hosted by my friend John Baez regarding my paper

The basic fact needed to address the question is that we have a set of 0-dimensional objects \mathbf{e}^\kappa and 1-dimensional objects \mathbf{e}^{\kappa\lambda} that obey the following geometrically motivated multiplication rules:

  1. \mathbf{e}^\kappa \mathbf{e}^\lambda = \delta_{\kappa,\lambda} \mathbf{e}^\kappa
  2. \mathbf{e}^{\kappa\lambda} \mathbf{e}^\mu = \delta_{\lambda,\mu}  \mathbf{e}^{\kappa\lambda}
  3. \mathbf{e}^\mu \mathbf{e}^{\kappa\lambda} = \delta_{\mu,\kappa} \mathbf{e}^{\kappa\lambda}

To see the geometrical meaning of these multiplications, it might help to consider discrete 0-forms

f = \sum_\kappa f(\kappa) \mathbf{e}^\kappa

and

g = \sum_\lambda g(\lambda) \mathbf{e}^\lambda.

Multiplication 1.) above implies

f g = \sum_\kappa f(\kappa) g(\kappa) \mathbf{e}^\kappa,

which is completely reminiscent of multiplication of functions where we think of f(\kappa) as the value of the function at the “node” \mathbf{e}^\kappa.

Multiplications 2.) and 3.) introduce new concepts whose geometrical interpretation is nonetheless quite intuitive. To see this, consider

f \mathbf{e}^{\kappa\lambda} = f(\kappa) \mathbf{e}^{\kappa\lambda}

and

\mathbf{e}^{\kappa\lambda} f = f(\lambda) \mathbf{e}^{\kappa\lambda}.

Multiplying the function f on the left of the “directed edge” \mathbf{e}^{\kappa\lambda} projects out the value of the function at the beginning of the edge, and multiplying on the right projects out the value at the end. Hence, products of functions and edges do not commute.
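These rules are precisely the multiplication rules of matrix units: representing the node \mathbf{e}^\kappa as the diagonal matrix unit E_{\kappa\kappa} and the edge \mathbf{e}^{\kappa\lambda} as E_{\kappa\lambda} reproduces all three products. A quick Python sketch of my own (plain nested lists, no libraries):

```python
def unit(n, k, l):
    """The n x n matrix unit with a single 1 in row k, column l.
    unit(n, k, k) plays the role of the node e^k, and unit(n, k, l)
    with k != l the directed edge e^{kl}."""
    m = [[0] * n for _ in range(n)]
    m[k][l] = 1
    return m

def mul(a, b):
    """Ordinary matrix multiplication."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 3
e0, e1 = unit(n, 0, 0), unit(n, 1, 1)   # nodes
e01 = unit(n, 0, 1)                     # directed edge 0 -> 1
zero = [[0] * n for _ in range(n)]
```

With these definitions, mul(e0, e1) == zero and mul(e0, e0) == e0 (rule 1), mul(e01, e1) == e01 and mul(e01, e0) == zero (rule 2), and mul(e0, e01) == e01 while mul(e1, e01) == zero (rule 3).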

In the paper, it was shown that the exterior derivative of a discrete 0-form is given by

df = \sum_{\kappa,\lambda} \left[f(\lambda) - f(\kappa)\right] \mathbf{e}^{\kappa\lambda}.

Here, we show that this may be expressed as the commutator with the “graph operator”

\mathbf{G} = \sum_{\kappa,\lambda} \mathbf{e}^{\kappa\lambda}.

The result is quite simple and follows directly from \mathbf{G} and the multiplication rules, i.e.

f\mathbf{G} = \sum_{\kappa,\lambda} f(\kappa) \mathbf{e}^{\kappa\lambda}

and

\mathbf{G} f = \sum_{\kappa,\lambda} f(\lambda) \mathbf{e}^{\kappa\lambda},

so that

[\mathbf{G},f] = \sum_{\kappa,\lambda} \left[f(\lambda) - f(\kappa)\right] \mathbf{e}^{\kappa\lambda},

i.e.

df = [\mathbf{G},f].
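In matrix terms, \mathbf{G} is just the adjacency matrix of the directed graph and a 0-form f acts as the diagonal matrix \mathrm{diag}(f). A small Python sketch (my own illustration, not from the paper) evaluates the commutator entrywise and recovers the coefficient f(\lambda) - f(\kappa) on each directed edge:

```python
def exterior_d(adj, f):
    """Exterior derivative via the commutator df = [G, f].

    adj is the adjacency matrix of the directed graph (the graph
    operator G) and f is a list of node values, acting as diag(f).
    Entry (k, l) of G f - f G is (G f)[k][l] - (f G)[k][l]
    = adj[k][l] * f[l] - f[k] * adj[k][l]."""
    n = len(adj)
    return [[adj[k][l] * f[l] - f[k] * adj[k][l] for l in range(n)]
            for k in range(n)]

# A directed triangle 0 -> 1 -> 2 -> 0 and a sample 0-form f.
adj = [[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]]
f = [5.0, 7.0, 2.0]
df = exterior_d(adj, f)
```

For the triangle, df[0][1] = f[1] - f[0] = 2.0 on the edge 0 -> 1, and df of a constant 0-form vanishes identically, as one would expect of a derivative.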

Written by Eric

October 27, 2010 at 3:47 pm

Quants R Us

leave a comment »


It’s been a while since I’ve been able to find the time to post anything. Work is completely insane, but I’m loving it 🙂

I’ve been in a “tool” building frenzy lately. When you’re building quant models, one of the best uses of them is sensitivity analysis, i.e. trying to see how sensitive securities are to various parameters. After fiddling with one too many Excel charts, I decided to automate a lot of the sensitivity analysis we’re doing.

Building tools to automate mundane tasks is one thing, but I also enjoy throwing fancy looking interfaces on my tools. This is motivated in large part by Emanuel Derman. Both in his book and at conferences where I’ve heard him speak, he often emphasizes that some of the biggest impacts he made early in his quant career were in building nice user-friendly interfaces. Stuff that even the traders can figure out 😉

Building tools requires a bit of effort up front and when you are barraged by demands that need to be done yesterday, you can either succumb to the “easy” way and just crank out results as fast as you can (knowing full well you will have to repeat the same process next week), or double your working hours to get the tools working asap. I chose the latter, which explains my hiatus.

Things are working quite nicely now and it was pleasing to watch the younger quants admiring the analysis tool I built 🙂 There is no doubt in my mind the extra effort it took to build some useful tools, will be paid back tenfold (or more!) in time.

Written by Eric

October 9, 2007 at 8:00 pm

Back to the grind

with one comment

Well, the days of sitting around reading financial blogs in my boxers has finally come to an end 😦 [Too much information?]

I started my new job today. I’ve got mixed emotions about it so far. There are certainly aspects of my old job I’m going to miss. It was a lot more plush, e.g. I had my own office with a view and personal administrative assistant. I got to chat with super smart investors and economists daily. Now, I’m back to cube land (and I couldn’t find a post-it note to save my life). I will certainly miss the exposure I had at my old place, but I’m also optimistic about the new gig. Although I’m more of a pure “quant” now (just as everyone is learning to hate quants!), I’ve already started talking with our economists at the new place. I’m certainly less of an alien from another planet (the way I described being a quant at my old place) and I’m surrounded by other PhDs who are well versed in finance kung fu. It should be interesting.

One of the first technical obstacles I’m facing is that the weapon of choice here is SAS. I’m a big Matlab fan for quick prototyping of models and the unintuitive nature of SAS is a bit daunting at the moment. After about 15 minutes of reading the Little SAS Book, I put in a request for Matlab 🙂

I’ll probably also try to migrate to C#. From what I can tell, learning C# will be a nice investment that I suspect will pay off down the road.

Written by Eric

August 13, 2007 at 8:10 pm

Posted in Matlab, Modeling, Personal, SAS