Archive for the ‘Modeling’ Category
This post is part of a series
In the previous post of this series, we found that when Cartesian coordinates are placed on a binary tree, the commutative relations are given by
There are two distinct classes of discrete calculus depending on the relation between and .
Discrete Exterior Calculus
If we set , the commutative relations reduce to
and in the continuum limit, i.e. , reduce to
In other words, when , the commutative relations vanish in the continuum limit and the discrete calculus converges to the exterior calculus of differential forms.
Because of this, the discrete calculus on a binary tree with will be referred to as the discrete exterior calculus.
Discrete Stochastic Calculus
If instead of , we set , the commutative relations reduce to
and in the continuum limit, i.e. , reduce to
In this case, all commutative relations vanish in the continuum limit except .
In the paper:
I demonstrate how the continuum limit of the commutative relations gives rise to (a noncommutative version of) stochastic calculus, where plays the role of a Brownian motion.
Because of this, the discrete calculus on a binary tree with will be referred to as the discrete stochastic calculus.
To date, discrete stochastic calculus has found robust applications in mathematical finance and fluid dynamics. For instance, the application of discrete stochastic calculus to Black-Scholes option pricing was presented in
and the application to fluid dynamics was presented in
Both of these subjects will be addressed in more detail as part of this series of articles.
It should be noted that discrete calculus and its special cases of discrete exterior calculus and discrete stochastic calculus represent a new framework for numerical modeling. We are not taking continuum models built on continuum calculus and constructing finite approximations. Instead, we are building a robust mathematical framework that has finiteness built in from the outset. The resulting numerical models are not approximations, but exact models developed within a finite numerical framework. The framework itself converges to the continuum versions so that any numerical models built within this framework will automatically converge to the continuum versions (if such a thing is desired).
Discrete calculus provides a kind of meta algorithm. It is an algorithm for generating algorithms.
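As a toy illustration of this exactness (my own sketch, not taken from the papers): on a binary tree where each step moves by ±√Δt, the variance of the walk at time t equals t at every resolution, so the discrete model reproduces the Brownian statistic exactly rather than approximately.

```python
def binary_tree_variance(t=1.0, n_steps=4):
    """Enumerate all 2**n_steps equally likely paths of a binary-tree
    walk with step size dx = sqrt(dt), and return E[x(t)^2]."""
    dt = t / n_steps
    dx = dt ** 0.5
    total = 0.0
    for bits in range(2 ** n_steps):
        x = 0.0
        for k in range(n_steps):
            x += dx if (bits >> k) & 1 else -dx
        total += x * x
    return total / 2 ** n_steps

# The variance equals t at every resolution -- exact, not approximate.
for n in (1, 2, 8):
    print(binary_tree_variance(1.0, n))
```

No limit is needed to make the discrete model correct; refining the tree only adds resolution.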
This post is part of a series
As stated in the Introduction to this series, one of my goals is to follow along with John Baez’ series and reformulate things in the language of discrete calculus. Along the way, I’m coming across operations that I haven’t used in any of my prior applications of discrete calculus to mathematical finance and field theories. For instance, in The Discrete Master Equation, I introduced a boundary operator
Although I hope the reason I call this a boundary operator is obvious, it would be more precise to call it something like a graph divergence. To see why, consider the boundary of an arbitrary discrete 1-form
A hint of sloppy notation has already crept in here, but we can see that the boundary of a discrete 1-form at a node is the sum of coefficients flowing into node minus the sum of coefficients flowing out of node . This is what you would expect of a divergence operator, but divergence depends on a metric. This operator does not, hence it is topological in nature. It is tempting to call this a topological divergence, but I think graph divergence is a better choice for reasons to be seen later.
One reason the above notation is a bit sloppy is because in the summations, we should really keep track of what directed edges are actually present in the directed graph. Until now, simply setting
if there is no directed edge from node to node was sufficient. Not anymore.
Also, in the applications where I’ve used discrete calculus so far, there has only ever been a single directed edge connecting any two nodes. When applying discrete calculus to electrical circuits, as John has started doing in his series, we obviously would like to consider elements that are in parallel.
I tend to get hung up on notation and have thought about the best way to deal with this. My solution is not perfect and I’m open to suggestions, but what I settled on is to introduce a summation not only over nodes, but also over the directed edges connecting those nodes. Here it is for an arbitrary discrete 1-form
where is the set of all directed edges from node to node . I’m not 100% enamored with it, but it is handy for performing calculations and doesn’t make me think too much.
For example, with this new notation, the boundary operator is much clearer
As before, this says the graph divergence of at the node is the sum of all coefficients flowing into node minus the sum of all coefficients flowing out of node . Moreover, for any node there can be one or more (or zero) directed edges from into .
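Here is a minimal sketch of this graph divergence in code. The representation is just one convenient choice of mine, not part of the formalism: a discrete 1-form is stored as a list of (source, target, coefficient) triples, so parallel edges are simply repeated triples.

```python
from collections import defaultdict

def graph_divergence(edges):
    """Boundary (graph divergence) of a discrete 1-form.

    `edges` is a list of (i, j, w) triples, one per directed edge
    from node i to node j with coefficient w.  The divergence at a
    node is the sum of coefficients flowing in minus the sum
    flowing out.  No metric is involved -- only the graph.
    """
    div = defaultdict(float)
    for i, j, w in edges:
        div[j] += w  # w flows into j
        div[i] -= w  # w flows out of i
    return dict(div)

# two parallel edges from 'a' to 'b', and one edge back
edges = [("a", "b", 2.0), ("a", "b", 3.0), ("b", "a", 1.0)]
print(graph_divergence(edges))  # {'b': 4.0, 'a': -4.0}
```

Note that the parallel edges contribute independently, which is exactly what the edge-summation notation is meant to track.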
This post is a follow-up to
To give the result first, the master equation can be expressed in terms of discrete calculus simply as
where is a discrete 0-form representing the states of a Markov chain (at all times), is a discrete 1-form representing transition probabilities, and is the boundary operator, i.e. a kind of graph divergence.
The rest of this post explains the terms in this discrete master equation and how it works.
The State-Time Graph
When working with a finite (or countable) number of states, there is nothing new in associating states with nodes and transition probabilities with the directed edges of a bi-directed graph. A simple 2-state example is given below
The directed graphs we work with for discrete stochastic calculus are slightly different and could be referred to as “state-time” graphs, which are supposed to make you think of “space-time”. A state at time is considered a different node than the state at time . An example 2-state, 2-time directed graph is illustrated below:
There are four directed edges in this state-time graph, which will be labelled
For states, the state-time graph will look similar but with more states appended horizontally.
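A quick sketch of how one might enumerate such a state-time graph (my own convenience representation, not notation from the posts: nodes as (state, time) pairs, with every state at time t connected forward to every state at time t+1):

```python
def state_time_graph(n_states, n_times):
    """Build the nodes and directed edges of a state-time graph.

    Nodes are (state, time) pairs.  Each state at time t has a
    directed edge to every state at time t+1, so transitions only
    point forward in time.
    """
    nodes = [(s, t) for t in range(n_times) for s in range(n_states)]
    edges = [((s, t), (s2, t + 1))
             for t in range(n_times - 1)
             for s in range(n_states)
             for s2 in range(n_states)]
    return nodes, edges

nodes, edges = state_time_graph(2, 2)
print(len(nodes), len(edges))  # 4 4
```

The 2-state, 2-time case reproduces the four directed edges described above; for N states and T times there are N² edges per time step.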
The Discrete Master Equation
A discrete 0-form representing the states at all times can be expressed as
and a discrete 1-form representing the transition probabilities can be expressed as
The product of the 0-form and the 1-form is given by
The boundary of a directed edge is given by
Now for some gymnastics, we can compute
This is zero only when the last term in brackets is zero, i.e.
Since is right stochastic, we have
In other words, when is right stochastic and , we get the usual master equation from stochastic mechanics
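To make the update concrete, here is a minimal numerical sketch (my own toy example, assuming a right-stochastic transition matrix `omega` with `omega[i][j]` the probability of moving from state i to state j):

```python
def master_step(p, omega):
    """One step of the discrete master equation:
    p'(j) = sum_i p(i) * omega[i][j],
    where omega is right stochastic (each row sums to 1)."""
    n = len(p)
    return [sum(p[i] * omega[i][j] for i in range(n)) for j in range(n)]

omega = [[0.9, 0.1],
         [0.4, 0.6]]
p = [1.0, 0.0]  # start certain of state 0
for _ in range(3):
    p = master_step(p, omega)
print(p, sum(p))  # total probability stays 1
```

Because each row of the transition 1-form sums to one, total probability is conserved at every step, which is the content of the vanishing boundary.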
The master equation is a boundary. This makes me wonder about homology, gauge transformations, sources, etc. For example, since
does this imply
for some discrete 2-form ?
If is a discrete 2-form whose boundary does not vanish, then
gives the same dynamics, since the boundary of a boundary vanishes. This would be a kind of gauge transformation.
There are several directions to take this from here, but that is about all the energy I have for now. More to come…
Doyne Farmer is awesome. I first ran into him back in 2001 (or maybe 2002) at the University of Chicago where he was giving a talk on order book dynamics with some awesome videos from the order book of the London Stock Exchange. He has another recent paper that also looks very interesting:
Leverage Causes Fat Tails and Clustered Volatility
Stefan Thurner, J. Doyne Farmer, John Geanakoplos
(Submitted on 11 Aug 2009 (v1), last revised 10 Jan 2010 (this version, v2))
We build a simple model of leveraged asset purchases with margin calls. Investment funds use what is perhaps the most basic financial strategy, called “value investing”, i.e. systematically attempting to buy underpriced assets. When funds do not borrow, the price fluctuations of the asset are normally distributed and uncorrelated across time. All this changes when the funds are allowed to leverage, i.e. borrow from a bank, to purchase more assets than their wealth would otherwise permit. During good times competition drives investors to funds that use more leverage, because they have higher profits. As leverage increases price fluctuations become heavy tailed and display clustered volatility, similar to what is observed in real markets. Previous explanations of fat tails and clustered volatility depended on “irrational behavior”, such as trend following. Here instead this comes from the fact that leverage limits cause funds to sell into a falling market: A prudent bank makes itself locally safer by putting a limit to leverage, so when a fund exceeds its leverage limit, it must partially repay its loan by selling the asset. Unfortunately this sometimes happens to all the funds simultaneously when the price is already falling. The resulting nonlinear feedback amplifies large downward price movements. At the extreme this causes crashes, but the effect is seen at every time scale, producing a power law of price disturbances. A standard (supposedly more sophisticated) risk control policy in which individual banks base leverage limits on volatility causes leverage to rise during periods of low volatility, and to contract more quickly when volatility gets high, making these extreme fluctuations even worse.
I completely agree with this idea. In fact, I discussed this concept with Francis Longstaff at the last advisory board meeting of UCLA’s financial engineering program. Back in December, I spent the majority of a flight back to Hong Kong from Europe doodling a bunch of math trying to express the idea in formulas, but didn’t come up with anything worth writing home about. But it seems like they make some good progress in this paper.
Basically, financial firms of all stripes have performance targets. In a period of decreasing volatility (as we saw in the run-up to the crisis), asset returns tend to decrease as well. To compensate, firms tend to move further out along the risk spectrum and/or increase leverage to maintain a given return level. The dynamic here is that leverage tends to increase as volatility decreases. However, the increased leverage raises the chance of a tail event, as we subsequently experienced.
On first glance, this paper captures a lot of the dynamics I’ve been wanting to see written down somewhere. Hopefully this gets some attention.
The basic fact needed to address the question is that we have a set of 0-dimensional objects and a set of 1-dimensional objects that obey the following geometrically motivated multiplication rules:
To see the geometrical meaning of these multiplications, it might help to consider discrete 0-forms
Multiplication 1.) above implies
which is completely reminiscent of multiplication of functions where we think of as the value of the function at the “node”
Multiplications 2.) and 3.) introduce new concepts, but their geometrical interpretation is quite intuitive. To see this, consider
Multiplying the function on the left of the “directed edge” projects out the value of the function at the beginning of the edge, and multiplying on the right projects out the value at the end. Hence, products of functions and edges do not commute.
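This noncommutativity is easy to see in code. In this sketch (my own representation, for illustration), a 0-form is a dict from nodes to values and a 1-form is a dict from directed edges to coefficients:

```python
def left_mul(f, omega):
    # (f * e_ij) = f(i) e_ij : project the value at the edge's start
    return {(i, j): f[i] * w for (i, j), w in omega.items()}

def right_mul(omega, f):
    # (e_ij * f) = f(j) e_ij : project the value at the edge's end
    return {(i, j): w * f[j] for (i, j), w in omega.items()}

f = {"a": 2.0, "b": 5.0}
omega = {("a", "b"): 1.0}
print(left_mul(f, omega))   # {('a', 'b'): 2.0}
print(right_mul(omega, f))  # {('a', 'b'): 5.0}
```

The two products pick out different endpoint values on the same edge, so f and the edge do not commute unless f takes the same value at both ends.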
In the paper, it was shown that the exterior derivative of a discrete 0-form is given by
Here, we show that this may be expressed as the commutator with the “graph operator”
The result is quite simple and follows directly from and the multiplication rules, i.e.
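In matrix language (one convenient representation, chosen here for illustration: the graph operator as an adjacency matrix and a 0-form as a diagonal matrix), the commutator reproduces the edge-by-edge differences f(j) − f(i):

```python
def commutator_derivative(G, f):
    """df = [G, f] = G f - f G, with f stored as its diagonal values.
    Entry (i, j) comes out as (f[j] - f[i]) * G[i][j], which is the
    exterior derivative of the 0-form f, edge by edge."""
    n = len(f)
    return [[G[i][j] * (f[j] - f[i]) for j in range(n)] for i in range(n)]

# graph operator (adjacency matrix) of a 3-node directed path 0 -> 1 -> 2
G = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
f = [1.0, 4.0, 9.0]
df = commutator_derivative(G, f)
print(df[0][1], df[1][2])  # 3.0 5.0
```

The nonzero entries sit exactly on the directed edges of the graph, with coefficient equal to the difference of f across each edge, as the commutator formula requires.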
It’s been a while since I’ve been able to find the time to post anything. Work is completely insane, but I’m loving it 🙂
I’ve been in a “tool” building frenzy lately. When you’re building quant models, one of their best uses is sensitivity analysis, i.e. seeing how sensitive securities are to various parameters. After fiddling with one too many Excel charts, I decided to automate a lot of the sensitivity analysis we’re doing.
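For the curious, the core of such a sensitivity tool is just bump-and-reprice. Here is a toy sketch; the quadratic “pricer” is entirely made up and stands in for a real pricing model, so every name and number here is hypothetical:

```python
def sensitivity(price_fn, params, name, bump=1e-4):
    """Central-difference sensitivity of price_fn to one parameter:
    bump the parameter up and down, reprice, and difference."""
    up = dict(params); up[name] += bump
    dn = dict(params); dn[name] -= bump
    return (price_fn(up) - price_fn(dn)) / (2 * bump)

# toy stand-in for a real pricing model (hypothetical)
def toy_price(p):
    return p["spot"] ** 2 + 3.0 * p["vol"]

params = {"spot": 10.0, "vol": 0.2}
print(sensitivity(toy_price, params, "spot"))  # ~20.0
print(sensitivity(toy_price, params, "vol"))   # ~3.0
```

Wrapping this loop over a grid of parameters and charting the results is essentially what the tool automates.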
Building tools to automate mundane tasks is one thing, but I also enjoy throwing fancy looking interfaces on my tools. This is motivated in large part by Emanuel Derman. Both in his book and at conferences where I’ve heard him speak, he often emphasizes that some of the biggest impacts he made early in his quant career were in building nice user-friendly interfaces. Stuff that even the traders can figure out 😉
Building tools requires a bit of effort up front and when you are barraged by demands that need to be done yesterday, you can either succumb to the “easy” way and just crank out results as fast as you can (knowing full well you will have to repeat the same process next week), or double your working hours to get the tools working asap. I chose the latter, which explains my hiatus.
Things are working quite nicely now and it was pleasing to watch the younger quants admiring the analysis tool I built 🙂 There is no doubt in my mind that the extra effort it took to build some useful tools will be paid back tenfold (or more!) in time.
Well, the days of sitting around reading financial blogs in my boxers have finally come to an end 😦 [Too much information?]
I started my new job today. I’ve got mixed emotions about it so far. There are certainly aspects of my old job I’m going to miss. It was a lot more plush, e.g. I had my own office with a view and personal administrative assistant. I got to chat with super smart investors and economists daily. Now, I’m back to cube land (and I couldn’t find a post-it note to save my life). I will certainly miss the exposure I had at my old place, but I’m also optimistic about the new gig. Although I’m more of a pure “quant” now (just as everyone is learning to hate quants!), I’ve already started talking with our economists at the new place. I’m certainly less of an alien from another planet (the way I described being a quant at my old place) and I’m surrounded by other PhDs who are well versed in finance kung fu. It should be interesting.
One of the first technical obstacles I’m facing is that the weapon of choice here is SAS. I’m a big Matlab fan for quick prototyping of models and the unintuitive nature of SAS is a bit daunting at the moment. After about 15 minutes of reading the Little SAS Book, I put in a request for Matlab 🙂
I’ll probably also try to migrate to C#. From what I can tell, learning C# will be a nice investment that I suspect will pay off down the road.