Phorgy Phynance

Archive for the ‘Directed Graphs’ Category

Network Theory and Discrete Calculus – Graph Divergence and Graph Laplacian

with one comment

This post is part of a series

Another Note on Notation

In a previous post, I introduced a slightly generalized notation in order to deal with directed graphs having multiple directed edges between any two nodes, e.g. parallel elements in electrical networks. However, the revised notation makes some simple calculations look more cumbersome. This is an example of what my adviser called the conservation of frustration. For example, the coboundary of a basis 0-form \mathbf{e}^i is now given by:

\begin{aligned} d\mathbf{e}^i = \sum_{j} \left( \sum_{\epsilon\in[j,i]}\mathbf{e}_\epsilon^{j,i} - \sum_{\epsilon\in[i,j]} \mathbf{e}_\epsilon^{i,j}\right).\end{aligned}

Applied to a general discrete 0-form, this becomes

\begin{aligned} d\phi = \sum_{i,j} {\left(\phi_j-\phi_i\right) \sum_{\epsilon\in[i,j]} \mathbf{e}_\epsilon^{i,j}} .\end{aligned}

To re-simplify the notation while maintaining the advantages of the new generalized notation, we can define

\begin{aligned} \mathbf{e}^{i,j} = \sum_{\epsilon\in[i,j]} \mathbf{e}_\epsilon^{i,j} \end{aligned}

and we’re back to

\begin{aligned} d\mathbf{e}^i = \sum_{j} \left(\mathbf{e}^{j,i} - \mathbf{e}^{i,j}\right)\end{aligned} and \begin{aligned} d\phi = \sum_{i,j} \left(\phi_j-\phi_i\right) \mathbf{e}^{i,j} \end{aligned}

as before. Furthermore, we have

\partial\mathbf{e}^{i,j} = N_{i,j} \left(\mathbf{e}^j - \mathbf{e}^i\right),

where N_{i,j} is the number of directed edges from node i to node j.
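These formulas are easy to check numerically. Below is a minimal Python sketch, where a directed multigraph is encoded as a list of edge keys (i, j, ε) so that parallel edges stay distinct; the toy graph and function names are my own illustrative choices, not notation from the series.

```python
from collections import defaultdict

# Toy directed multigraph: N_{0,1} = 2 parallel edges from node 0 to node 1,
# plus a single edge back from 1 to 0. Keys (i, j, eps) keep parallels distinct.
edges = [(0, 1, 0), (0, 1, 1), (1, 0, 0)]

def d(phi):
    """Coboundary of a 0-form: coefficient (phi_j - phi_i) on each edge in [i,j]."""
    return {(i, j, eps): phi[j] - phi[i] for (i, j, eps) in edges}

def boundary(alpha):
    """Boundary of a 1-form: each edge contributes +alpha at its head, -alpha at its tail."""
    out = defaultdict(float)
    for (i, j, eps), a in alpha.items():
        out[j] += a
        out[i] -= a
    return dict(out)

# The unit 1-form supported on [0,1] has boundary N_{0,1} (e^1 - e^0) = 2 (e^1 - e^0):
unit = {(0, 1, 0): 1.0, (0, 1, 1): 1.0}
print(boundary(unit))  # {1: 2.0, 0: -2.0}
```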

Trace and Inner Products

Given a discrete 0-form \phi, we define its trace via

\begin{aligned} tr_0(\phi) = \sum_i \phi_i. \end{aligned}

Similarly, given a discrete 1-form \alpha, its trace is given by

\begin{aligned} tr_1(\alpha) = \sum_{i,j} {\sum_{\epsilon\in[i,j]} \alpha^\epsilon_{i,j}} .\end{aligned}

With the trace, we can define the inner product of discrete 0-forms via

\langle \phi,\psi\rangle_0 = tr_0(\phi\psi)

and the inner product of discrete 1-forms via

\langle \alpha,\beta\rangle_1 = tr_1(\alpha\circ\beta),

where \alpha\circ\beta is the edge product.

Graph Divergence

The graph divergence was introduced here as a boundary operator, but the relation to divergence was mentioned here.

With the inner products defined above, a simple calculation shows

\langle \partial\alpha,\phi\rangle_0 = \langle \alpha, d\phi\rangle_1

so the graph divergence is the adjoint of the coboundary.
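The adjointness identity can be verified numerically on a small multigraph; here is a quick Python sketch with a made-up four-edge graph and random coefficients (all names are illustrative):

```python
import random

# Small multigraph with a parallel pair of edges from 0 to 1; keys are (i, j, eps).
edges = [(0, 1, 0), (0, 1, 1), (1, 2, 0), (2, 0, 0)]
nodes = [0, 1, 2]
random.seed(1)
phi = {i: random.random() for i in nodes}      # discrete 0-form
alpha = {e: random.random() for e in edges}    # discrete 1-form

# Coboundary of phi and boundary (graph divergence) of alpha.
d_phi = {(i, j, eps): phi[j] - phi[i] for (i, j, eps) in edges}
div = {i: 0.0 for i in nodes}
for (i, j, eps), a in alpha.items():
    div[j] += a
    div[i] -= a

inner0 = sum(div[i] * phi[i] for i in nodes)      # <boundary(alpha), phi>_0
inner1 = sum(alpha[e] * d_phi[e] for e in edges)  # <alpha, d(phi)>_1
assert abs(inner0 - inner1) < 1e-12
```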

Relating this to algebraic topology: there, you would typically have a coboundary operator acting on cochains and a boundary operator acting on chains. With discrete calculus, we have both d and \partial acting on discrete forms.

Graph Laplacian

The graph Laplacian of a discrete 0-form \phi is given by

\begin{aligned} \partial d\phi = -\sum_{i,j} \left(\phi_j - \phi_i\right) \left(N_{i,j} + N_{j,i}\right) \mathbf{e}^i. \end{aligned}
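As a sanity check, this closed form can be compared against a direct computation of \partial d\phi; the following Python sketch uses a toy multigraph with N_{0,1} = 2 (the graph and values are made up for illustration):

```python
# Toy multigraph: two parallel edges 0 -> 1, one edge 1 -> 0, one edge 1 -> 2.
edges = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 2, 0)]
nodes = [0, 1, 2]
N = {}  # N[(i, j)] = number of directed edges from i to j
for (i, j, _) in edges:
    N[(i, j)] = N.get((i, j), 0) + 1

phi = {0: 2.0, 1: 5.0, 2: -1.0}

# Direct computation: boundary of the coboundary.
d_phi = {(i, j, eps): phi[j] - phi[i] for (i, j, eps) in edges}
lap = {i: 0.0 for i in nodes}
for (i, j, eps), a in d_phi.items():
    lap[j] += a
    lap[i] -= a

# Closed form: the coefficient of e^i is -sum_j (phi_j - phi_i)(N_{i,j} + N_{j,i}).
closed = {i: -sum((phi[j] - phi[i]) * (N.get((i, j), 0) + N.get((j, i), 0))
                  for j in nodes)
          for i in nodes}
assert lap == closed
```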

More generally, we could define a graph Laplace-Beltrami operator

d\partial + \partial d.

Graph Dirac Operator

The graph Dirac operator is essentially the “square root” of the graph Laplace-Beltrami operator. Since d^2 = 0 and \partial^2 = 0, we have

d\partial + \partial d = (d+\partial)^2

so a graph Dirac operator is given by

d + \partial.


Written by Eric

December 4, 2011 at 1:26 pm

Network Theory and Discrete Calculus – Edge Algebra

with one comment

This post is part of a series

In my last post, I noted that in following John Baez’ series, I’m finding the need to introduce operators that I haven’t previously used in any applications. In this post, I will introduce another. It turns out that we could get away without introducing this concept, but I think it helps motivate some things I will talk about later.

In all previous applications, the important algebra was a noncommutative graded differential algebra. The grading means that the degree of elements add when you multiply them together. For example, the product of two nodes (degree 0) is a node (degree 0+0), the product of a node (degree 0) and a directed edge (degree 1) is a directed edge (degree 0+1), and the product of a directed edge (degree 1) with another directed edge is a directed surface (degree 1+1).

Note the algebra of nodes is a commutative sub-algebra of the full noncommutative graded algebra.

There is another related commutative edge algebra with corresponding edge product.

The edge product is similar to the product of nodes in that it is a projection given by

\mathbf{e}_\epsilon^{i,j} \circ \mathbf{e}_{\epsilon'}^{k,l} = \delta_{\epsilon,\epsilon'} \delta_{i,k} \delta_{j,l} \mathbf{e}_\epsilon^{i,j}.

It is a projection because for an arbitrary discrete 1-form

\begin{aligned}\alpha = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j},\end{aligned}

we have

\mathbf{e}_\epsilon^{i,j} \circ \alpha = \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j}

and, in particular,

\mathbf{e}_\epsilon^{i,j} \circ \mathbf{e}_\epsilon^{i,j} = \mathbf{e}_\epsilon^{i,j}.

The product of two discrete 1-forms is

\begin{aligned}\alpha\circ\beta = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \beta_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j}\end{aligned}.
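A short Python sketch of the edge product and its projection property (the edge keys and coefficient values are made up for illustration):

```python
# Three edges keyed (i, j, eps); two of them are parallel.
edges = [(0, 1, 0), (0, 1, 1), (1, 2, 0)]
alpha = {(0, 1, 0): 2.0, (0, 1, 1): -1.0, (1, 2, 0): 3.0}
beta = {(0, 1, 0): 0.5, (0, 1, 1): 4.0, (1, 2, 0): 1.0}

def edge_product(a, b):
    """Componentwise product: matching edges multiply, everything else vanishes."""
    return {e: a[e] * b[e] for e in edges}

assert edge_product(alpha, beta) == {(0, 1, 0): 1.0, (0, 1, 1): -4.0, (1, 2, 0): 3.0}

# Projection property: the basis 1-form on edge (0, 1, 0) picks out that coefficient.
basis = {e: (1.0 if e == (0, 1, 0) else 0.0) for e in edges}
proj = edge_product(basis, alpha)
assert proj[(0, 1, 0)] == 2.0
assert all(proj[e] == 0.0 for e in edges if e != (0, 1, 0))
```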

I have not yet come across an application where the full edge algebra is needed. When the product does arise, one of the discrete 1-forms is usually the coboundary of a discrete 0-form, i.e.

\alpha\circ d\phi.

When this is the case, the edge product can be expressed as a (graded) commutator in the noncommutative graded algebra, i.e.

\alpha\circ d\phi = [\alpha,\phi].

An example of this will be seen when we examine electrical circuits.

Written by Eric

November 20, 2011 at 12:21 pm

Network Theory and Discrete Calculus – Notation Revisited

with 2 comments

This post is part of a series

As stated in the Introduction to this series, one of my goals is to follow along with John Baez’ series and reformulate things in the language of discrete calculus. Along the way, I’m coming across operations that I haven’t used in any of my prior applications of discrete calculus to mathematical finance and field theories. For instance, in The Discrete Master Equation, I introduced a boundary operator

\begin{aligned} \partial \mathbf{e}^{i,j} = \mathbf{e}^j-\mathbf{e}^i.\end{aligned}

Although I hope the reason I call this a boundary operator is obvious, it would be more precise to call it something like graph divergence. To see why, consider the boundary of an arbitrary discrete 1-form

\begin{aligned}\partial \alpha = \sum_{i,j} \alpha_{i,j} \left(\mathbf{e}^j - \mathbf{e}^i\right) = \sum_i \left[ \sum_j \left(\alpha_{j,i} - \alpha_{i,j}\right)\right] \mathbf{e}^i.\end{aligned}

A hint of sloppy notation has already crept in here, but we can see that the boundary of a discrete 1-form at a node i is the sum of coefficients flowing into node i minus the sum of coefficients flowing out of node i. This is what you would expect of a divergence operator, but divergence depends on a metric. This operator does not, hence it is topological in nature. It is tempting to call this a topological divergence, but I think graph divergence is a better choice for reasons to be seen later.

One reason the above notation is a bit sloppy is because in the summations, we should really keep track of what directed edges are actually present in the directed graph. Until now, simply setting

\mathbf{e}^{i,j} = 0

if there is no directed edge from node i to node j was sufficient. Not anymore.

Also, for applications I’ve used discrete calculus so far, there has always only been a single directed edge connecting any two nodes. When applying discrete calculus to electrical circuits, as John has started doing in his series, we obviously would like to consider elements that are in parallel.

I tend to get hung up on notation and have thought about the best way to deal with this. My solution is not perfect and I’m open to suggestions, but what I settled on is to introduce a summation not only over nodes, but also over the directed edges connecting those nodes. Here it is for an arbitrary discrete 1-form

\begin{aligned}\alpha = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j},\end{aligned}

where [i,j] is the set of all directed edges from node i to node j. I’m not 100% enamored with it, but it is handy for performing calculations and doesn’t make me think too much.

For example, with this new notation, the boundary operator is much clearer

\begin{aligned} \partial \alpha &= \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \left(\mathbf{e}^{j}-\mathbf{e}^i\right) \\ &= \sum_i \left[\sum_j \left( \sum_{\epsilon\in[j,i]} \alpha_{j,i}^{\epsilon} - \sum_{\epsilon\in[i,j]} \alpha_{i,j}^{\epsilon} \right)\right]\mathbf{e}^i.\end{aligned}

As before, this says the graph divergence of \alpha at the node i is the sum of all coefficients flowing into node i minus the sum of all coefficients flowing out of node i. Moreover, for any node j there can be one or more (or zero) directed edges from j into i.

Written by Eric

November 19, 2011 at 11:27 pm

Network Theory and Discrete Calculus – The Discrete Master Equation

with 6 comments

This post is a follow up to

Network Theory and Discrete Calculus – Introduction

To give the result first, the master equation can be expressed in terms of discrete calculus simply as

\partial(\psi P) = 0,

where \psi is a discrete 0-form representing the states of a Markov chain (at all times), P is a discrete 1-form representing transition probabilities, and \partial is the boundary operator, i.e. a kind of graph divergence.

The rest of this post explains the terms in this discrete master equation and how it works.

The State-Time Graph

When working with a finite (or countable) number of states, there is nothing new in considering states \psi_i to be associated to nodes and the transition probabilities P_{i,j} to be associated to directed edges of a bi-directed graph. A simple 2-state example is given below

The directed graphs we work with for discrete stochastic calculus are slightly different and could be referred to as “state-time” graphs, which are supposed to make you think of “space-time”. A state i at time t is considered a different node than the state i at time t+1. An example 2-state, 2-time directed graph is illustrated below:

There are four directed edges in this state-time graph, which will be labelled

* \mathbf{e}^{(i,t)(i,t+1)}
* \mathbf{e}^{(i,t)(j,t+1)}
* \mathbf{e}^{(j,t)(i,t+1)}
* \mathbf{e}^{(j,t)(j,t+1)}

For N states, the state-time graph will look similar but with more states appended horizontally.

The Discrete Master Equation

A discrete 0-form representing the states at all times can be expressed as

\psi = \sum_i \sum_t \psi_i^t \mathbf{e}^{(i,t)}

and a discrete 1-form representing the transition probabilities can be expressed as

P = \sum_{i,j} \sum_t P_{i,j}^t \mathbf{e}^{(i,t)(j,t+1)}.

The product of the 0-form \psi and the 1-form P is given by

\psi P = \sum_{i,j} \sum_t \psi_i^t P_{i,j}^t \mathbf{e}^{(i,t)(j,t+1)}.

The boundary of a directed edge is given by

\partial \mathbf{e}^{(i,t)(j,t+1)} = \mathbf{e}^{(j,t+1)} - \mathbf{e}^{(i,t)}.

Now for some gymnastics, we can compute

\begin{aligned} \partial(\psi P)  &= \sum_{i,j} \sum_t \psi_i^t P_{i,j}^t \left[\mathbf{e}^{(j,t+1)} - \mathbf{e}^{(i,t)}\right] \\  &= \sum_{i,j} \sum_t \left[\psi_j^t P_{j,i}^t \mathbf{e}^{(i,t+1)} - \psi_i^t P_{i,j}^t \mathbf{e}^{(i,t)}\right] \\  &= \sum_i \sum_t \left[\sum_j \left(\psi_j^t P_{j,i}^t - \psi_i^{t+1} P_{i,j}^{t+1}\right)\right] \mathbf{e}^{(i,t+1)}.  \end{aligned}

This is zero only when the last term in brackets is zero, i.e.

\sum_j \left(\psi_j^t P_{j,i}^t - \psi_i^{t+1} P_{i,j}^{t+1}\right) = 0

or, equivalently,

\psi_i^{t+1} \sum_j P_{i,j}^{t+1} = \sum_j \psi_j^t P_{j,i}^t.

Since P is right stochastic, we have

\sum_j P_{i,j}^{t+1} = 1

so that

\psi_i^{t+1} = \sum_j \psi_j^t P_{j,i}^t.

In other words, when P is right stochastic and \partial(\psi P) = 0, we get the usual master equation from stochastic mechanics

\partial(\psi P) = 0\implies \psi_i^{t+1} = \sum_j \psi_j^t P_{j,i}^t.
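The resulting update \psi_i^{t+1} = \sum_j \psi_j^t P_{j,i}^t can be checked with a small Python sketch; the 2-state transition matrix below is made up for illustration:

```python
# Right-stochastic transition matrix for a 2-state chain (rows sum to 1);
# the numbers are illustrative.
P = [[0.9, 0.1],
     [0.4, 0.6]]
psi = [1.0, 0.0]  # start in state 0 at t = 0

def step(psi, P):
    """Master equation update: psi_i^{t+1} = sum_j psi_j^t P_{j,i}^t."""
    n = len(psi)
    return [sum(psi[j] * P[j][i] for j in range(n)) for i in range(n)]

for _ in range(3):
    psi = step(psi, P)

# Total probability is conserved at every step.
assert abs(sum(psi) - 1.0) < 1e-12
```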

Parting Thoughts

The master equation is a boundary. This makes me wonder about homology, gauge transformations, sources, etc. For example, since

\partial(\psi P) = 0,

does this imply

\psi P = \partial F

for some discrete 2-form F?

If G is a discrete 2-form whose boundary does not vanish, then

\psi P + \partial G

gives the same dynamics because \partial^2 = 0. This would be a kind of gauge transformation.

There are several directions to take this from here, but that is about all the energy I have for now. More to come…

Written by Eric

October 29, 2011 at 10:04 pm

Network Theory and Discrete Calculus – Introduction

with 2 comments

I’ve enjoyed applying discrete calculus to various problems since Urs Schreiber and I wrote our paper together back in 2004

Discrete differential geometry on causal graphs

Shortly after that, I wrote an informal paper applying the theory to finance in

Financial modeling using discrete stochastic calculus

From there I wrote up some private notes laying the foundations for applying a higher-dimensional version of discrete calculus to interest rate models. However, life intervened: I went to work on Wall Street, followed by various career twists that led me to Hong Kong, where I am today. The research has lain fairly dormant since then.

I started picking this up again recently when my friend, John Baez, effectively changed careers and started the Azimuth Project. In particular, I’ve recently developed a discrete Burgers equation with corresponding discrete Cole-Hopf transformation, which is summarized – including numerical simulation results – on the Azimuth Forum here:

Discrete Burgers equation revisited

Motivated by these results, I started looking at a reformulation of the Navier-Stokes equation in

Towards Navier-Stokes from noncommutative geometry

This is still a work-in-progress, but sorting this out is a necessary step to writing down the discrete Navier-Stokes equation.

Even more recently, John began a series of very interesting Azimuth Blog posts on network theory. I knew that network theory and discrete calculus should link up together naturally, but it took a while to see the connection. It finally clicked one night as I lay in bed half asleep in one of those rare “Eureka!” moments. I wrote up the details in

Discrete stochastic mechanics

There is much more to be said about the connection between network theory and discrete calculus. I intend to write a series of subsequent posts in parallel to John’s highlighting how his work with Brendan Fong can be presented in terms of discrete calculus.

Written by Eric

October 28, 2011 at 9:12 am

Currency exchange rates, directed graphs, and quivers

with 4 comments

I always enjoy working with currency exchange rates because it gives me excuses to draw directed graphs. For example, an exchange rate EURUSD = 1.4 can be thought of as a directed edge from EUR to USD, as shown below.

A fact of life when you work in finance is that you rarely have any say in the data available to you for analysis. When working with three currencies, e.g. USD, EUR, and KRW, it would be great if someone handed you a complete FX Matrix with all exchange rates filled in like

\text{FX Matrix} = \begin{bmatrix} \text{EUREUR} & \text{EURUSD} & \text{EURKRW} \\ \text{USDEUR} & \text{USDUSD} & \text{USDKRW} \\ \text{KRWEUR} & \text{KRWUSD} & \text{KRWKRW} \end{bmatrix}

Instead, what is usually available is a list of exchange rates such as

\text{FX Vector} = \begin{bmatrix} \text{EURUSD} \\ \text{USDKRW} \end{bmatrix}

With a little patience, you can usually figure out all the cross terms, e.g. it is obvious that from the given data we have

\text{EURKRW} = \text{EURUSD}\times\text{USDKRW},

but we’d like to find a systematic way to do this that can be programmed.

So the thing I want to talk about here is this: starting with a list of exchange rates, i.e. FX Vector, we want to fill in as much of the FX Matrix as possible systematically. It turns out this process is identical to constructing the free category on a quiver, i.e. a directed graph.

In other words, we want to reinterpret the exchange rate as a morphism

\text{EURUSD}: \text{EUR}\to\text{USD}.

Then, “filling in” the FX Matrix means finding all compositions of morphisms, e.g.

\text{EURKRW} = \text{USDKRW}\circ\text{EURUSD}.

How would you do this? Here is what I did…

First, you take the given FX Vector and form an adjacency matrix. In the example above with EUR, USD, and KRW, the adjacency matrix looks like

A = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{bmatrix},

but in general there will be more zeros.

We can also form an initial FX Matrix

(\text{FX Matrix})_1 = \begin{bmatrix} 1 & \text{EURUSD} & 0 \\ (\text{EURUSD})^{-1} & 1 & \text{USDKRW} \\ 0 & (\text{USDKRW})^{-1} & 1 \end{bmatrix}

This FX Matrix can be thought of as the currency exchanges that can occur in a single transaction.

An interesting property of the adjacency matrix is that when you raise it to the nth power, the matrix entries represent the number of paths between any two objects in the quiver that require n steps, i.e.

A^n = \begin{bmatrix} \#(\text{EUREUR})_n & \#(\text{EURUSD})_n & \#(\text{EURKRW})_n \\ \#(\text{USDEUR})_n & \#(\text{USDUSD})_n & \#(\text{USDKRW})_n \\ \#(\text{KRWEUR})_n & \#(\text{KRWUSD})_n & \#(\text{KRWKRW})_n \end{bmatrix}

where \#(\text{EURUSD})_n is the number of paths of length n or the number of ways to convert EUR to USD in n transactions.
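This path-counting property is easy to verify for the three-currency example; here is a small Python sketch using nothing beyond the adjacency matrix above (the `matmul` helper is ordinary matrix multiplication):

```python
# Adjacency matrix for {EUR, USD, KRW} with self-loops on the diagonal.
A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]

def matmul(X, Y):
    """Ordinary matrix product of two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)
print(A2)  # [[2, 2, 1], [2, 3, 2], [1, 2, 2]]

# Exactly one length-2 walk from EUR (index 0) to KRW (index 2): via USD.
assert A2[0][2] == 1
```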

To fill out the FX Matrix, we incrementally take powers of A and pick out entries whose value is 1. Since it is getting late and I’m running out of energy, I’ll just write the procedure in Matlab-style pseudocode:

FXMatrix = zeros(size(A));
for n = 1:NumCurrencies
    An = A^n; Pn = FXMatrix1^n;
    FXMatrix(An == 1) = Pn(An == 1);
end

Although the explanation may seem a bit arduous, the final procedure is quite simple and easy to program.
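For completeness, here is a Python rendering of the same procedure on the three-currency example; the rate values (EURUSD = 1.4, USDKRW = 1300) and all variable names are made up for illustration, with zeros marking exchanges not yet filled in:

```python
# Single-transaction rate matrix (FX Matrix)_1; 0.0 marks "no direct exchange".
eurusd, usdkrw = 1.4, 1300.0
FXMatrix1 = [[1.0, eurusd, 0.0],
             [1.0 / eurusd, 1.0, usdkrw],
             [0.0, 1.0 / usdkrw, 1.0]]
A = [[1 if x != 0.0 else 0 for x in row] for row in FXMatrix1]  # adjacency matrix

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n_cur = len(A)
FXMatrix = [[0.0] * n_cur for _ in range(n_cur)]
An, Pn = A, FXMatrix1  # A^n and (FX Matrix)_1^n for n = 1, 2, ...
for _ in range(n_cur):
    for i in range(n_cur):
        for j in range(n_cur):
            # A unique n-step path means the rate product in Pn is unambiguous.
            if An[i][j] == 1 and FXMatrix[i][j] == 0.0:
                FXMatrix[i][j] = Pn[i][j]
    An, Pn = matmul(An, A), matmul(Pn, FXMatrix1)

# The EUR -> KRW cross rate gets filled in on the second pass.
assert FXMatrix[0][2] == eurusd * usdkrw
```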

Can you find a better way?

As a side note, this provides an algorithm for generating a free category from a directed graph, i.e. it gives the left adjoint to the forgetful functor U:\text{Cat}\to \text{Grph}.

Written by Eric

March 1, 2010 at 1:30 am