Phorgy Phynance

Network Theory and Discrete Calculus – Notation Revisited


This post is part of a series

As stated in the Introduction to this series, one of my goals is to follow along with John Baez's series and reformulate things in the language of discrete calculus. Along the way, I'm coming across operations that I haven't used in any of my prior applications of discrete calculus to mathematical finance and field theories. For instance, in The Discrete Master Equation, I introduced a boundary operator

\begin{aligned} \partial \mathbf{e}^{i,j} = \mathbf{e}^j-\mathbf{e}^i.\end{aligned}

Although I hope the reason I call this a boundary operator is obvious, it would be more precise to call it something like a graph divergence. To see why, consider the boundary of an arbitrary discrete 1-form

\begin{aligned}\partial \alpha = \sum_{i,j} \alpha_{i,j} \left(\mathbf{e}^j - \mathbf{e}^i\right) = \sum_i \left[ \sum_j \left(\alpha_{j,i} - \alpha_{i,j}\right)\right] \mathbf{e}^i.\end{aligned}

A hint of sloppy notation has already crept in here, but we can see that the boundary of a discrete 1-form at a node i is the sum of coefficients flowing into node i minus the sum of coefficients flowing out of node i. This is what you would expect of a divergence operator, but divergence depends on a metric. This operator does not; hence it is topological in nature. It is tempting to call this a topological divergence, but I think graph divergence is a better choice for reasons to be seen later.
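To make the computation concrete, here is a minimal Python sketch (my own illustration, not code from the original posts), storing a discrete 1-form as a dict that maps each directed edge (i, j) to its coefficient:

```python
# A minimal sketch: a discrete 1-form on a directed graph, stored as a dict
# mapping a directed edge (i, j) to its coefficient alpha_{i,j}. Only edges
# actually present in the graph appear as keys.

from collections import defaultdict

def boundary(alpha):
    """Graph divergence of a discrete 1-form, returned as a discrete 0-form:
    at each node, inflow of coefficients minus outflow."""
    div = defaultdict(float)
    for (i, j), a in alpha.items():
        div[j] += a  # alpha_{i,j} * e^j: flows into node j
        div[i] -= a  # alpha_{i,j} * (-e^i): flows out of node i
    return dict(div)

# Example: coefficient 2.0 on the edge 0 -> 1 and 1.0 on the edge 1 -> 2
print(boundary({(0, 1): 2.0, (1, 2): 1.0}))
# {1: -1.0 + 2.0 = 1.0, 0: -2.0, 2: 1.0}
```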

One reason the above notation is a bit sloppy is that, in the summations, we should really keep track of which directed edges are actually present in the directed graph. Until now, simply setting

\mathbf{e}^{i,j} = 0

if there is no directed edge from node i to node j was sufficient. Not anymore.

Also, in the applications of discrete calculus I've considered so far, there has always been only a single directed edge connecting any two nodes. When applying discrete calculus to electrical circuits, as John has started doing in his series, we obviously would like to consider elements that are in parallel.

I tend to get hung up on notation and have thought about the best way to deal with this. My solution is not perfect and I'm open to suggestions, but what I settled on is to introduce a summation not only over nodes, but also over the directed edges connecting those nodes. Here it is for an arbitrary discrete 1-form

\begin{aligned}\alpha = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j},\end{aligned}

where [i,j] is the set of all directed edges from node i to node j. I'm not 100% enamored with it, but it is handy for performing calculations and doesn't make me think too much.

For example, with this new notation, the boundary operator is much clearer

\begin{aligned} \partial \alpha &= \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \left(\mathbf{e}^{j}-\mathbf{e}^i\right) \\ &= \sum_i \left[\sum_j \left( \sum_{\epsilon\in[j,i]} \alpha_{j,i}^{\epsilon} - \sum_{\epsilon\in[i,j]} \alpha_{i,j}^{\epsilon} \right)\right]\mathbf{e}^i.\end{aligned}

As before, this says that the graph divergence of \alpha at node i is the sum of all coefficients flowing into node i minus the sum of all coefficients flowing out of node i. Moreover, for any node j there can be zero, one, or more directed edges from j into i.
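The earlier sketch extends directly to parallel edges (again my own illustration; the edge labels are hypothetical): an edge is now keyed (i, j, eps), with eps labelling the elements of the set [i,j] of directed edges from node i to node j.

```python
# Graph divergence of a discrete 1-form on a directed multigraph: edges are
# keyed (i, j, eps), where eps labels parallel edges from node i to node j.

from collections import defaultdict

def boundary_multi(alpha):
    """Graph divergence, now allowing parallel directed edges."""
    div = defaultdict(float)
    for (i, j, eps), a in alpha.items():
        div[j] += a  # inflow at node j
        div[i] -= a  # outflow at node i
    return dict(div)

# Two circuit elements in parallel: two directed edges from node 0 to node 1
print(boundary_multi({(0, 1, 'r1'): 2.0, (0, 1, 'r2'): 3.0}))
# {1: 5.0, 0: -5.0}
```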

Written by Eric

November 19, 2011 at 11:27 pm

Network Theory and Discrete Calculus – The Discrete Master Equation


This post is a follow up to

Network Theory and Discrete Calculus – Introduction

To give the result first, the master equation can be expressed in terms of discrete calculus simply as

\partial(\psi P) = 0,

where \psi is a discrete 0-form representing the states of a Markov chain (at all times), P is a discrete 1-form representing transition probabilities, and \partial is the boundary operator, i.e. a kind of graph divergence.

The rest of this post explains the terms in this discrete master equation and how it works.

The State-Time Graph

When working with a finite (or countable) number of states, there is nothing new in considering states \psi_i to be associated to nodes and the transition probabilities P_{i,j} to be associated to directed edges of a bi-directed graph. A simple 2-state example is given below

The directed graphs we work with for discrete stochastic calculus are slightly different and could be referred to as “state-time” graphs, which are supposed to make you think of “space-time”. A state i at time t is considered a different node than the state i at time t+1. An example 2-state, 2-time directed graph is illustrated below:

There are four directed edges in this state-time graph, which will be labelled

* \mathbf{e}^{(i,t)(i,t+1)}
* \mathbf{e}^{(i,t)(j,t+1)}
* \mathbf{e}^{(j,t)(i,t+1)}
* \mathbf{e}^{(j,t)(j,t+1)}

For N states, the state-time graph will look similar but with more states appended horizontally.
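As a bookkeeping aid, here is a small Python sketch (a hypothetical helper of my own, not from the post) that enumerates the directed edges of such a state-time graph:

```python
# Enumerate the directed edges ((i, t), (j, t+1)) of a complete state-time
# graph with num_states states and num_steps transition steps.

def state_time_edges(num_states, num_steps):
    return [((i, t), (j, t + 1))
            for t in range(num_steps)
            for i in range(num_states)
            for j in range(num_states)]

# The 2-state, 2-time example above: four directed edges from t=0 to t=1
for edge in state_time_edges(2, 1):
    print(edge)
# ((0, 0), (0, 1)), ((0, 0), (1, 1)), ((1, 0), (0, 1)), ((1, 0), (1, 1))
```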

The Discrete Master Equation

A discrete 0-form representing the states at all times can be expressed as

\psi = \sum_i \sum_t \psi_i^t \mathbf{e}^{(i,t)}

and a discrete 1-form representing the transition probabilities can be expressed as

P = \sum_{i,j} \sum_t P_{i,j}^t \mathbf{e}^{(i,t)(j,t+1)}.

The product of the 0-form \psi and the 1-form P is given by

\psi P = \sum_{i,j} \sum_t \psi_i^t P_{i,j}^t \mathbf{e}^{(i,t)(j,t+1)}.

The boundary of a directed edge is given by

\partial \mathbf{e}^{(i,t)(j,t+1)} = \mathbf{e}^{(j,t+1)} - \mathbf{e}^{(i,t)}.

Now for some gymnastics, we can compute

\begin{aligned} \partial(\psi P)  &= \sum_{i,j} \sum_t \psi_i^t P_{i,j}^t \left[\mathbf{e}^{(j,t+1)} - \mathbf{e}^{(i,t)}\right] \\  &= \sum_{i,j} \sum_t \left[\psi_j^t P_{j,i}^t \mathbf{e}^{(i,t+1)} - \psi_i^t P_{i,j}^t \mathbf{e}^{(i,t)}\right] \\  &= \sum_i \sum_t \left[\sum_j \left(\psi_j^t P_{j,i}^t - \psi_i^{t+1} P_{i,j}^{t+1}\right)\right] \mathbf{e}^{(i,t+1)}.  \end{aligned}

This is zero only when the term in brackets vanishes for every node i and time t, i.e.

\sum_j \left(\psi_j^t P_{j,i}^t - \psi_i^{t+1} P_{i,j}^{t+1}\right) = 0

or

\psi_i^{t+1} \sum_j P_{i,j}^{t+1} = \sum_j \psi_j^t P_{j,i}^t.

Since P is right stochastic, we have

\sum_j P_{i,j}^{t+1} = 1

so that

\psi_i^{t+1} = \sum_j \psi_j^t P_{j,i}^t.

In other words, when P is right stochastic and \partial(\psi P) = 0, we get the usual master equation from stochastic mechanics

\partial(\psi P) = 0\implies \psi_i^{t+1} = \sum_j \psi_j^t P_{j,i}^t.
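To check the algebra numerically, here is a minimal sketch (my own addition; the random matrices, seed, and dimensions are arbitrary assumptions, not from the post): evolve \psi by the master equation and confirm that the coefficient of each interior \mathbf{e}^{(i,t+1)} in \partial(\psi P) vanishes.

```python
# Minimal numerical check: with right-stochastic P and psi evolved by the
# master equation, the boundary of psi*P should vanish at interior nodes.

import numpy as np

T, N = 5, 2                        # time steps and states (arbitrary)
rng = np.random.default_rng(0)

# Right-stochastic transition matrices P[t][i, j]: each row sums to one
P = rng.random((T, N, N))
P /= P.sum(axis=2, keepdims=True)

# Master equation: psi_i^{t+1} = sum_j psi_j^t P_{j,i}^t
psi = np.empty((T + 1, N))
psi[0] = [0.3, 0.7]
for t in range(T):
    psi[t + 1] = psi[t] @ P[t]

# Coefficient of e^{(i,t+1)} in boundary(psi*P):
#   sum_j ( psi_j^t P_{j,i}^t - psi_i^{t+1} P_{i,j}^{t+1} )
for t in range(T - 1):
    coeff = psi[t] @ P[t] - psi[t + 1] * P[t + 1].sum(axis=1)
    assert np.allclose(coeff, 0.0)

print("partial(psi P) = 0 at all interior state-time nodes")
```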

Parting Thoughts

The master equation says that a boundary vanishes. This makes me wonder about homology, gauge transformations, sources, etc. For example, since

\partial(\psi P) = 0,

does this imply

\psi P = \partial F

for some discrete 2-form F?

If G is a discrete 2-form whose boundary does not vanish, then

\psi P + \partial G

gives the same dynamics because \partial^2 = 0. This would be a kind of gauge transformation.

There are several directions to take this from here, but that is about all the energy I have for now. More to come…

Written by Eric

October 29, 2011 at 10:04 pm

Network Theory and Discrete Calculus – Introduction


I’ve enjoyed applying discrete calculus to various problems since Urs Schreiber and I wrote our paper together back in 2004

Discrete differential geometry on causal graphs

Shortly after that, I wrote an informal paper applying the theory to finance in

Financial modeling using discrete stochastic calculus

From there I wrote up some private notes laying the foundations for applying a higher-dimensional version of discrete calculus to interest rate models. However, life intervened: I went to work on Wall Street, followed by various career twists that led me to Hong Kong, where I am today. The research has lain fairly dormant since then.

I started picking this up again when my friend John Baez effectively changed careers and started the Azimuth Project. In particular, I've recently developed a discrete Burgers equation with a corresponding discrete Cole-Hopf transformation, which is summarized – including numerical simulation results – on the Azimuth Forum here:

Discrete Burgers equation revisited

Motivated by these results, I started looking at a reformulation of the Navier-Stokes equation in

Towards Navier-Stokes from noncommutative geometry

This is still a work-in-progress, but sorting this out is a necessary step to writing down the discrete Navier-Stokes equation.

Even more recently, John began a series of very interesting Azimuth Blog posts on network theory. I knew that network theory and discrete calculus should link up naturally, but it took a while to see the connection. It finally clicked one night as I lay in bed half asleep in one of those rare “Eureka!” moments. I wrote up the details in

Discrete stochastic mechanics

There is much more to be said about the connection between network theory and discrete calculus. I intend to write a series of subsequent posts in parallel with John's, highlighting how his work with Brendan Fong can be presented in terms of discrete calculus.

Written by Eric

October 28, 2011 at 9:12 am

Leverage Causes Fat Tails and Clustered Volatility


Doyne Farmer is awesome. I first ran into him back in 2001 (or maybe 2002) at the University of Chicago, where he was giving a talk on order book dynamics with some awesome videos from the order book of the London Stock Exchange. He has a recent paper that also looks very interesting:

Leverage Causes Fat Tails and Clustered Volatility
Stefan Thurner, J. Doyne Farmer, John Geanakoplos
(Submitted on 11 Aug 2009 (v1), last revised 10 Jan 2010 (this version, v2))

We build a simple model of leveraged asset purchases with margin calls. Investment funds use what is perhaps the most basic financial strategy, called “value investing”, i.e. systematically attempting to buy underpriced assets. When funds do not borrow, the price fluctuations of the asset are normally distributed and uncorrelated across time. All this changes when the funds are allowed to leverage, i.e. borrow from a bank, to purchase more assets than their wealth would otherwise permit. During good times competition drives investors to funds that use more leverage, because they have higher profits. As leverage increases price fluctuations become heavy tailed and display clustered volatility, similar to what is observed in real markets. Previous explanations of fat tails and clustered volatility depended on “irrational behavior”, such as trend following. Here instead this comes from the fact that leverage limits cause funds to sell into a falling market: A prudent bank makes itself locally safer by putting a limit to leverage, so when a fund exceeds its leverage limit, it must partially repay its loan by selling the asset. Unfortunately this sometimes happens to all the funds simultaneously when the price is already falling. The resulting nonlinear feedback amplifies large downward price movements. At the extreme this causes crashes, but the effect is seen at every time scale, producing a power law of price disturbances. A standard (supposedly more sophisticated) risk control policy in which individual banks base leverage limits on volatility causes leverage to rise during periods of low volatility, and to contract more quickly when volatility gets high, making these extreme fluctuations even worse.

I completely agree with this idea. In fact, I discussed the concept with Francis Longstaff at the last advisory board meeting of UCLA's financial engineering program. Back in December, I spent the majority of a flight back to Hong Kong from Europe doodling a bunch of math trying to express the idea in formulas, but didn't come up with anything worth writing home about. It seems they make some good progress in this paper, though.

Basically, financial firms of all stripes have performance targets. In a period of decreasing volatility (as we saw in the period preceding the crisis), asset returns tend to decrease as well. To compensate, firms tend to move further out along the risk spectrum and/or increase leverage to maintain a given return level. The dynamic here is that leverage tends to increase as volatility decreases. However, the increased leverage increases the chance of a tail event occurring, as we experienced.

At first glance, this paper captures a lot of the dynamics I've been wanting to see written down somewhere. Hopefully it gets some attention.

 

Written by Eric

April 5, 2011 at 11:12 am

Fun with maximum likelihood estimation


The following is a fun little exercise that most statistics students have probably worked out as a homework assignment at some point. Since I have found myself rederiving it a few times over the years, I decided to write this post for the record, to save me some time the next time it comes up.

Given a probability density \rho(x), we can approximate the probability of a sample falling within a region \Delta x around the value x_i\in\mathbb{R} by

P(x_i) = \rho(x_i)\Delta x.

Similarly, the probability of observing N independent samples X\in\mathbb{R}^N is approximated by

P(X) = \prod_{i = 1}^N P(x_i) = \left(\Delta x\right)^N \prod_{i=1}^N \rho(x_i).

In the case of a normal distribution, the density is parameterized by \mu and \nu, and we have

\rho(x|\mu,\nu) = \frac{1}{\sqrt{\pi\nu}} \exp\left[-\frac{(x-\mu)^2}{\nu}\right].

The probability of observing the given samples is then approximated by

P(X|\mu,\nu) =   \left(\frac{\Delta x}{\sqrt{\pi}}\right)^N \nu^{-N/2} \exp\left[-\frac{1}{\nu} \sum_{i=1}^N (x_i - \mu)^2\right].

The idea behind maximum likelihood estimation is that the parameters should be chosen so that the probability of observing the given samples is maximized. At a maximum, the differential vanishes, i.e.

dP(X|\mu,\nu) = \frac{\partial P(X|\mu,\nu)}{\partial \mu} d\mu + \frac{\partial P(X|\mu,\nu)}{\partial \nu} d\nu = 0.

Since d\mu and d\nu are independent, this vanishes only when both components vanish, i.e.

\frac{\partial P(X|\mu,\nu)}{\partial \mu} = \frac{\partial P(X|\mu,\nu)}{\partial \nu} = 0.

The first component is given by

\frac{\partial P(X|\mu,\nu)}{\partial \mu} = P(X|\mu,\nu) \left[\frac{2}{\nu} \sum_{i=1}^N (x_i - \mu)\right]

and vanishes when

\mu = \frac{1}{N} \sum_{i = 1}^N x_i.

The second component is given by

\frac{\partial P(X|\mu,\nu)}{\partial \nu} = P(X|\mu,\nu) \left[-\frac{N}{2\nu} + \frac{1}{\nu^2} \sum_{i=1}^N (x_i - \mu)^2\right]

and vanishes when

\sigma^2 = \frac{1}{N} \sum_{i=1}^N (x_i-\mu)^2,

where \nu = 2\sigma^2.
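As a numerical sanity check (my addition, not part of the original exercise; the seed and sample size are arbitrary), we can maximize the log-likelihood in (\mu, \nu) directly and compare with the closed-form estimates:

```python
# Maximize the log-likelihood numerically and compare with the closed-form
# sample estimates, using the parameterization above with nu = 2*sigma^2.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
x = rng.normal(loc=1.5, scale=2.0, size=10_000)

def neg_log_likelihood(params):
    mu, nu = params
    # -log prod_i (pi*nu)^(-1/2) exp(-(x_i - mu)^2 / nu), constants dropped
    return 0.5 * len(x) * np.log(np.pi * nu) + np.sum((x - mu) ** 2) / nu

res = minimize(neg_log_likelihood, x0=[0.0, 1.0],
               bounds=[(None, None), (1e-9, None)])
mu_hat, nu_hat = res.x

print(mu_hat, x.mean())      # numerical mu vs sample mean
print(nu_hat / 2, x.var())   # nu/2 = sigma^2 vs sample variance (ddof=0)
```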

Note: The first time I worked through this exercise, I thought it was cute, but I would never compute \mu and \sigma^2 as above, so maximum likelihood estimation, as presented, is not meaningful to me. Hence, this is just a warm-up for what comes next. Stay tuned…

Written by Eric

January 2, 2011 at 10:56 pm

Market Risk at the Federal Reserve – History books will not be kind


During my days at Capital Group, we had an opportunity to bring Paul Harrison in to give a presentation at an internal research conference. This was prior to the crisis, when people were still feeling good and were still confident that the Fed had conquered the business cycle. They even called it the Great Moderation.[1]

Paul was great. I truly enjoyed his presentation and he was very gracious with his time afterward putting up with my questions. I remember he was examining the term structure of credit spreads and the information that can be extracted from it. His presentation and subsequent question and answer session had a lasting impression on the way I look at fixed income markets.

However, one thing about it always seemed odd to me. Paul was the head of the Capital Markets Section at the Federal Reserve. That wouldn't be so odd in itself; what I found odd was that the section actually managed a risky fixed income portfolio. Why was the Federal Reserve managing a portfolio of corporate bonds? Talk about conflict of interest. Not to mention the potential for insider information.

Soon afterward, Paul left the Fed to work at a big investment bank and I left Capital. I hadn’t put much thought into it since then, but the “odd” feelings came rushing back when I read the following article from Zero Hedge:

Federal Reserve Loses $2.4 Billion In Taxpayer Money In Most Recent QE2 POMO Interval

Please go have a look.

Although quantitative easing is quite a different animal from the fixed income portfolio at the Capital Markets Section, the market risk is the same. The mark-to-market losses the Fed is now exposed to are astronomical. The sad thing is, I have very little confidence they even understand those risks. Bernanke is quite confident he can raise rates, but that is exactly the thing he should fear most. What is going to happen to yields in the US when they get their wish and China floats the RMB (which I expect them to do within 4 years)?

It is difficult to inflate your way out of debt obligations when the Fed is the largest holder of US treasuries. When the RMB floats, China will take a hit on their USD holdings, but that will be hedged to a large extent by their improved purchasing power, which will only accelerate their evolution to a consumer economy. The US will once again increase manufacturing as promised, but at a huge cost to quality of life as prices of all imports skyrocket.

Be careful what you wish for. It is sad for me to watch my country deteriorate like this at the hands of Bernanke and Geithner. History books will not be kind to either of them.

[1] You don't hear that term very often anymore. I remember debating colleagues about the concept and telling them that when history books were written, they'd look at the period of the Great Moderation as the most irresponsible period of monetary policy in history. But that is another story.

Written by Eric

December 11, 2010 at 11:05 am

Quantitative Easing Explained


Written by Eric

November 13, 2010 at 1:12 pm
