This post is part of a series
As stated in the Introduction to this series, one of my goals is to follow along with John Baez’ series and reformulate things in the language of discrete calculus. Along the way, I’m coming across operations that I haven’t used in any of my prior applications of discrete calculus to mathematical finance and field theories. For instance, in The Discrete Master Equation, I introduced a boundary operator.
Although I hope the reason I call this a boundary operator is obvious, it would be more precise to call it something like a graph divergence. To see why, consider how it acts on an arbitrary discrete 1-form.
A hint of sloppy notation has already crept in here, but we can see that the boundary of a discrete 1-form at a node is the sum of the coefficients flowing into that node minus the sum of the coefficients flowing out of it. This is what you would expect of a divergence operator, but divergence depends on a metric. This operator does not, hence it is topological in nature. It is tempting to call this a topological divergence, but I think graph divergence is a better choice for reasons to be seen later.
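To make the operation concrete, here is a minimal Python sketch. The representation of a 1-form as a dictionary of edge coefficients (and the function name) is my own choice for illustration, not notation from this series:

```python
def graph_divergence(one_form, nodes):
    """Boundary (graph divergence) of a discrete 1-form.

    `one_form` maps each directed edge (source, target) to its coefficient.
    The value at a node is the sum of coefficients flowing in minus the
    sum of coefficients flowing out -- no metric is involved.
    """
    div = {n: 0.0 for n in nodes}
    for (src, tgt), coeff in one_form.items():
        div[tgt] += coeff  # coefficient flows into `tgt`
        div[src] -= coeff  # coefficient flows out of `src`
    return div

# Two nodes with one directed edge each way
alpha = {("a", "b"): 1.0, ("b", "a"): 3.0}
print(graph_divergence(alpha, ["a", "b"]))  # {'a': 2.0, 'b': -2.0}
```

Note that the values always sum to zero over all nodes, since each coefficient enters once with each sign, which is consistent with the operator being purely topological.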
One reason the above notation is a bit sloppy is that in the summations we should really keep track of which directed edges are actually present in the directed graph. Until now, simply setting a coefficient to zero whenever there is no directed edge from one node to another was sufficient. Not anymore.
Also, in the applications where I’ve used discrete calculus so far, there has only ever been a single directed edge connecting any two nodes. When applying discrete calculus to electrical circuits, as John has started doing in his series, we obviously would like to consider elements that are in parallel.
I tend to get hung up on notation and have thought about the best way to deal with this. My solution is not perfect and I’m open to suggestions, but what I settled on is to introduce a summation not only over nodes, but also over the directed edges connecting those nodes. Here it is for an arbitrary discrete 1-form, where the inner sum runs over the set of all directed edges from one node to the other. I’m not 100% enamored with it, but it is handy for performing calculations and doesn’t make me think too much.
For example, with this new notation, the boundary operator is much clearer.
As before, this says the graph divergence of a discrete 1-form at a node is the sum of all coefficients flowing into that node minus the sum of all coefficients flowing out of it. Moreover, for any pair of nodes there can be zero, one, or many directed edges from one into the other.
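The same divergence can be sketched for multigraphs. Here is one possible Python representation (my own, purely illustrative): each node pair maps to the list of coefficients of its parallel directed edges, and the divergence sums over every edge in each set.

```python
from collections import defaultdict

def multigraph_divergence(edge_sets):
    """Graph divergence when parallel directed edges are allowed.

    `edge_sets` maps a node pair (i, j) to a list of coefficients, one per
    directed edge from i to j.  The divergence at a node sums over every
    such edge: inflows count positively, outflows negatively.
    """
    div = defaultdict(float)
    for (i, j), coeffs in edge_sets.items():
        for c in coeffs:   # sum over each parallel edge from i to j
            div[j] += c    # into node j
            div[i] -= c    # out of node i
    return dict(div)

# Two parallel edges from "a" to "b" (think: circuit elements in parallel)
alpha = {("a", "b"): [1.0, 2.0], ("b", "a"): [4.0]}
print(multigraph_divergence(alpha))  # a: +1.0, b: -1.0
```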
This post is a follow up to
To give the result first, the master equation can be expressed very simply in terms of discrete calculus: the boundary of the product of a discrete 0-form representing the states of a Markov chain (at all times) and a discrete 1-form representing transition probabilities vanishes, where the boundary operator is the kind of graph divergence discussed earlier.
The rest of this post explains the terms in this discrete master equation and how it works.
The State-Time Graph
When working with a finite (or countable) number of states, there is nothing new in considering states to be associated to nodes and the transition probabilities to be associated to directed edges of a bi-directed graph. A simple 2-state example is given below
The directed graphs we work with for discrete stochastic calculus are slightly different and could be referred to as “state-time” graphs, which are supposed to make you think of “space-time”. A state at one time is considered a different node than the same state at the next time. An example 2-state, 2-time directed graph is illustrated below:
There are four directed edges in this state-time graph, one for each allowed transition between the two times.
For a larger number of states, the state-time graph will look similar, but with more states appended horizontally.
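As a small sanity check, the directed edges of a state-time graph can be enumerated in a few lines of Python. This sketch (function name mine) assumes every state at one time connects to every state at the next, with no parallel edges:

```python
def state_time_edges(num_states, num_times):
    """Directed edges of a state-time graph.

    Node (s, t) is state s at time t; every state at time t has a
    directed edge to every state at time t + 1.
    """
    return [((i, t), (j, t + 1))
            for t in range(num_times - 1)
            for i in range(num_states)
            for j in range(num_states)]

# The 2-state, 2-time example above has four directed edges.
print(len(state_time_edges(2, 2)))  # 4
```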
The Discrete Master Equation
A discrete 0-form representing the states at all times can be expressed as a sum over the nodes of the state-time graph, and a discrete 1-form representing the transition probabilities can be expressed as a sum over its directed edges.
The product of the 0-form and the 1-form is again a discrete 1-form: each directed edge carries the state at its source node times the transition probability along that edge.
The boundary of a directed edge is its target node minus its source node.
Now for some gymnastics, we can compute the boundary of this product.
This boundary is zero only when the last term in brackets vanishes.
Since the transition matrix is right stochastic, its rows sum to one.
In other words, when the transition matrix is right stochastic and the boundary of the product vanishes, we get the usual master equation from stochastic mechanics.
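The equivalence can be checked numerically. In this Python sketch (the names psi and P are mine, and I assume the product assigns the edge (i, t) → (j, t+1) the coefficient psi_i(t) * P_ij), a 2-state chain is evolved by the usual master equation and the graph divergence is verified to vanish at every interior node of the state-time graph:

```python
# Right-stochastic transition matrix: each row sums to 1.
P = [[0.9, 0.1],
     [0.4, 0.6]]

# Evolve the state distribution by the usual master equation.
psi = [[0.5, 0.5]]
T = 3
for t in range(T):
    psi.append([sum(psi[t][i] * P[i][j] for i in range(2)) for j in range(2)])

def divergence(j, t):
    """Graph divergence at node (j, t) of the 1-form whose coefficient on
    the edge (i, t) -> (j, t + 1) is psi_i(t) * P_ij."""
    inflow = sum(psi[t - 1][i] * P[i][j] for i in range(2)) if t > 0 else 0.0
    outflow = sum(psi[t][j] * P[j][k] for k in range(2)) if t < T else 0.0
    return inflow - outflow

# At every interior node the divergence vanishes: the dynamics satisfy
# the master equation exactly when this boundary is zero.
assert all(abs(divergence(j, t)) < 1e-12 for t in range(1, T) for j in range(2))
```

The only nonzero divergences sit at the initial and final time slices, which play the role of sources and sinks for probability flowing through the state-time graph.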
The master equation is the statement that a boundary vanishes. This makes me wonder about homology, gauge transformations, sources, etc. For example, since the boundary of the product vanishes, does this imply that the product is itself the boundary of some discrete 2-form?
If we take a discrete 2-form whose boundary does not vanish, then adding that boundary to the product gives the same dynamics, because the boundary of a boundary is zero. This would be a kind of gauge transformation.
There are several directions to take this from here, but that is about all the energy I have for now. More to come…
I’ve enjoyed applying discrete calculus to various problems since Urs Schreiber and I wrote our paper together back in 2004
Shortly after that, I wrote an informal paper applying the theory to finance in
From there I wrote up some private notes laying the foundations for applying a higher-dimensional version of discrete calculus to interest rate models. However, life intervened: I went to work on Wall Street, followed by various career twists leading me to Hong Kong, where I am today. The research has lain fairly dormant since then.
I started picking this up again recently when my friend, John Baez, effectively changed careers and started the Azimuth Project. In particular, I’ve recently developed a discrete Burgers equation with corresponding discrete Cole-Hopf transformation, which is summarized – including numerical simulation results – on the Azimuth Forum here:
Motivated by these results, I started looking at a reformulation of the Navier-Stokes equation in
This is still a work-in-progress, but sorting this out is a necessary step to writing down the discrete Navier-Stokes equation.
Even more recently, John began a series of very interesting Azimuth Blog posts on network theory. I knew that network theory and discrete calculus should link up naturally, but it took a while to see the connection. It finally clicked one night as I lay in bed half asleep in one of those rare “Eureka!” moments. I wrote up the details in
There is much more to be said about the connection between network theory and discrete calculus. I intend to write a series of subsequent posts in parallel with John’s, highlighting how his work with Brendan Fong can be presented in terms of discrete calculus.
Doyne Farmer is awesome. I first ran into him back in 2001 (or maybe 2002) at the University of Chicago where he was giving a talk on order book dynamics with some awesome videos from the order book of the London Stock Exchange. He has another recent paper that also looks very interesting:
Leverage Causes Fat Tails and Clustered Volatility
Stefan Thurner, J. Doyne Farmer, John Geanakoplos
(Submitted on 11 Aug 2009 (v1), last revised 10 Jan 2010 (this version, v2))
We build a simple model of leveraged asset purchases with margin calls. Investment funds use what is perhaps the most basic financial strategy, called “value investing”, i.e. systematically attempting to buy underpriced assets. When funds do not borrow, the price fluctuations of the asset are normally distributed and uncorrelated across time. All this changes when the funds are allowed to leverage, i.e. borrow from a bank, to purchase more assets than their wealth would otherwise permit. During good times competition drives investors to funds that use more leverage, because they have higher profits. As leverage increases price fluctuations become heavy tailed and display clustered volatility, similar to what is observed in real markets. Previous explanations of fat tails and clustered volatility depended on “irrational behavior”, such as trend following. Here instead this comes from the fact that leverage limits cause funds to sell into a falling market: A prudent bank makes itself locally safer by putting a limit to leverage, so when a fund exceeds its leverage limit, it must partially repay its loan by selling the asset. Unfortunately this sometimes happens to all the funds simultaneously when the price is already falling. The resulting nonlinear feedback amplifies large downward price movements. At the extreme this causes crashes, but the effect is seen at every time scale, producing a power law of price disturbances. A standard (supposedly more sophisticated) risk control policy in which individual banks base leverage limits on volatility causes leverage to rise during periods of low volatility, and to contract more quickly when volatility gets high, making these extreme fluctuations even worse.
I completely agree with this idea. In fact, I discussed this concept with Francis Longstaff at the last advisory board meeting of UCLA’s financial engineering program. Back in December, I spent the majority of a flight back to Hong Kong from Europe doodling a bunch of math trying to express the idea in formulas, but didn’t come up with anything worth writing home about. It seems, though, that they made some good progress in this paper.
Basically, financial firms of all stripes have performance targets. In a period of decreasing volatility (as we were in preceding the crisis), asset returns tend to decrease as well. To compensate, firms tend to move further out along the risk spectrum and/or increase leverage to maintain a given return level. The dynamic here is that leverage tends to increase as volatility decreases. However, the increased leverage increases the chance of a tail event occurring, as we experienced.
On first glance, this paper captures a lot of the dynamics I’ve been wanting to see written down somewhere. Hopefully this gets some attention.
The following is a fun little exercise that most statistics students have probably worked out as a homework assignment at some point, but since I have found myself rederiving it a few times over the years, I decided to write this post for the record to save me some time the next time this comes up.
Given a probability density $p$, we can approximate the probability of a sample falling within a small region of width $dx$ around the value $x$ by $p(x)\,dx$.
Similarly, the probability of observing $N$ independent samples $x_1, \dots, x_N$ is approximated by $\prod_{i=1}^N p(x_i)\,dx$.
In the case of a normal distribution, the density is parameterized by two parameters $\mu$ and $\sigma$ and we have
$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right].$$
The probability of observing the given samples is then approximated by
$$\mathcal{P} = \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{N} \exp\left[-\sum_{i=1}^N \frac{(x_i-\mu)^2}{2\sigma^2}\right](dx)^N.$$
The idea behind maximum likelihood estimation is that the parameters should be chosen so that the probability of observing the given samples is maximized. Writing $\mathcal{P}$ for this approximate probability, the maximum occurs when the differential vanishes, i.e.
$$d\mathcal{P} = \frac{\partial \mathcal{P}}{\partial\mu}\,d\mu + \frac{\partial \mathcal{P}}{\partial\sigma}\,d\sigma = 0.$$
This, in turn, vanishes only when both components vanish, i.e.
$$\frac{\partial \mathcal{P}}{\partial\mu} = 0 \quad\text{and}\quad \frac{\partial \mathcal{P}}{\partial\sigma} = 0.$$
The first component is given by
$$\frac{\partial \mathcal{P}}{\partial\mu} = \frac{\mathcal{P}}{\sigma^2} \sum_{i=1}^N (x_i - \mu)$$
and vanishes when
$$\bar\mu = \frac{1}{N}\sum_{i=1}^N x_i.$$
The second component is given by
$$\frac{\partial \mathcal{P}}{\partial\sigma} = \frac{\mathcal{P}}{\sigma}\left[\frac{1}{\sigma^2}\sum_{i=1}^N (x_i-\mu)^2 - N\right]$$
and vanishes when
$$\bar\sigma^2 = \frac{1}{N}\sum_{i=1}^N (x_i - \bar\mu)^2.$$
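The two estimators can be checked numerically. This Python sketch (function name mine) computes the maximum likelihood mean and standard deviation of a synthetic normal sample:

```python
import math
import random

def normal_mle(xs):
    """Maximum likelihood estimates for a normal sample: the sample mean,
    and the square root of the 1/N (not 1/(N-1)) variance."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, math.sqrt(var)

random.seed(0)
sample = [random.gauss(10.0, 2.0) for _ in range(100_000)]
mu_hat, sigma_hat = normal_mle(sample)
print(mu_hat, sigma_hat)  # close to 10 and 2 for a large sample
```

Note the 1/N in the variance: the maximum likelihood estimator is biased, which is one reason the sample variance is usually computed with 1/(N−1) instead.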
Note: The first time I worked through this exercise, I thought it was cute, but I would never compute the mean and standard deviation as above, so maximum likelihood estimation, as presented, is not meaningful to me. Hence, this is just a warm-up for what comes next. Stay tuned…
During my days at Capital Group, we had an opportunity to bring Paul Harrison in to give a presentation at an internal research conference. This was prior to the crisis when people were still feeling good and were still confident that the Fed had conquered the business cycle. They even called it the Great Moderation.1
Paul was great. I truly enjoyed his presentation and he was very gracious with his time afterward putting up with my questions. I remember he was examining the term structure of credit spreads and the information that can be extracted from it. His presentation and subsequent question and answer session had a lasting impression on the way I look at fixed income markets.
However, one thing about it always seemed odd to me. Paul was the head of the Capital Markets Section at the Federal Reserve. That wouldn’t be so odd in itself, but what I found odd about it was that they actually managed a risky fixed income portfolio. Why was the Federal Reserve managing a portfolio of corporate bonds? Talk about conflict of interest. Not to mention the potential for insider information.
Soon afterward, Paul left the Fed to work at a big investment bank and I left Capital. I hadn’t put much thought into it since then, but the “odd” feelings came rushing back when I read the following article from Zero Hedge:
Please go have a look.
Although quantitative easing is quite a different animal than the fixed income portfolio at the Capital Markets Section, the market risk is the same. The mark-to-market losses the Fed is now exposed to are astronomical. The sad thing is, I have very little confidence they even understand those risks. Bernanke is quite confident he can raise rates, but that is exactly the thing he should fear most. What is going to happen to yields in the US when he gets his wish and China floats the RMB (which I expect them to do within 4 years)?
It is difficult to inflate your way out of debt obligations when the Fed is the largest holder of US treasuries. When the RMB floats, China will take a hit on their USD holdings, but that will be hedged to a large extent by their improved purchasing power, which will only accelerate their evolution to a consumer economy. The US will once again increase manufacturing as promised but at the huge cost of quality of life as prices of all imports skyrocket.
Be careful what you wish for. It is sad for me to watch my country deteriorate like this at the hands of Bernanke and Geithner. History books will not be kind to either of them.
1: You don’t hear that term very often anymore. I remember debating colleagues about the concept and telling them that when history books were written, they’d look at the period of the Great Moderation as the most irresponsible period of monetary policy in history, but that is another story.