Phorgy Phynance

Archive for the ‘John Baez’ Category

Network Theory and Discrete Calculus – Noether’s Theorem


This post is part of a series

As stated in the Introduction, one of the motivations for this series is to work in parallel with John Baez’ series on network theory to highlight some applications of discrete calculus. In this post, I reformulate some of the material in Part 11 pertaining to Noether’s theorem.

The State-Time Graph

The directed graphs associated with discrete stochastic mechanics are described in the post The Discrete Master Equation, where the simple state-time graph example below was presented

Conceptually, the thing to keep in mind is that any transition from one state to another requires a time step. Therefore a transition from node i to node j is more precisely a transition from node (i,t) to node (j,t+1).

On a state-time graph, a discrete 0-form can be written as

\begin{aligned} \psi = \sum_{i,t} \psi^t_i \mathbf{e}^{(i,t)}.\end{aligned}

and a discrete 1-form can be written as

\begin{aligned} P = \sum_{i,j,t} \sum_{\epsilon\in[i,j]} P^{\epsilon,t}_{i,j} \mathbf{e}^{(i,t)(j,t+1)}_\epsilon.\end{aligned}

The Master Equation

The master equation for discrete stochastic mechanics can be expressed simply as

\partial(\psi P) = 0,

where \psi is a discrete 0-form representing the state at all times with

\begin{aligned} 0\le \psi_{i}^t \le 1 \quad\text{and}\quad \sum_{i} \psi_{i}^t = 1 \end{aligned}

and P is a discrete 1-form representing transition probabilities with

\begin{aligned} 0\le P_{i,j}^{\epsilon,t} \le 1 \quad\text{and}\quad \sum_{j} \sum_{\epsilon\in[i,j]} P_{i,j}^{\epsilon,t} = 1 \end{aligned}

for all t. When expanded into components, the master equation becomes

\begin{aligned} \psi_j^{t+1} = \sum_i \sum_{\epsilon\in[i,j]} \psi_i^{t} P_{i,j}^{\epsilon,t}. \end{aligned}
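In code, the master equation is just a row-vector/matrix product. Below is a minimal sketch with a hypothetical 3-state example, assuming a single directed edge between each pair of states so the edge label \epsilon can be suppressed:

```python
import numpy as np

# Hypothetical 3-state example: psi_t is the state at time t and
# P_t[i, j] is the probability of a transition from state i to state j.
psi_t = np.array([0.5, 0.3, 0.2])
P_t = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 0.8, 0.2],
    [0.1, 0.0, 0.9],
])

# Master equation in components: psi_j^{t+1} = sum_i psi_i^t P_{i,j}^t
psi_next = psi_t @ P_t

# Total probability is preserved because each row of P_t sums to 1.
assert np.isclose(psi_next.sum(), 1.0)
print(psi_next)  # approximately [0.47, 0.29, 0.24]
```

Each row of P_t summing to 1 is exactly what guarantees \sum_j \psi_j^{t+1} = 1.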

Observables and Expectations

A general discrete 0-form on a state-time graph is defined over all states and all time. However, occasionally, we would like to consider a discrete 0-form defined over all states at a specific point in time. To facilitate this in a component-free manner, denote

\begin{aligned} 1^t = \sum_i \mathbf{e}^{(i,t)} \end{aligned}

so the identity can be expressed as

\begin{aligned} 1 = \sum_t 1^t.\end{aligned}

The discrete 0-form 1^t is a projection: multiplying by it restricts a general discrete 0-form to one defined only at time t. For instance, given a discrete 0-form \psi, let

\begin{aligned} \psi^t = 1^t \psi = \sum_i \psi_i^t \mathbf{e}^{(i,t)}\end{aligned}

so that

\begin{aligned} \psi = \sum_t \psi^t.\end{aligned}

In discrete stochastic mechanics, an observable is nothing more than a discrete 0-form

\begin{aligned} O = \sum_t O^t = \sum_{i,t} O_i^t \mathbf{e}^{(i,t)}.\end{aligned}

The expectation of an observable O^t with respect to a state \psi is given by

\langle O^t\rangle = tr_0(O^t \psi) = \sum_i O_i^t \psi_i^t

where tr_0 was defined in a previous post. Note: O^t \psi = O^t \psi^t.
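As a quick numerical illustration of the expectation formula, here is a minimal sketch with hypothetical values for \psi^t and O^t:

```python
import numpy as np

# Hypothetical state and observable at a fixed time t.
psi_t = np.array([0.47, 0.29, 0.24])
O_t = np.array([1.0, 2.0, 3.0])

# <O^t> = tr_0(O^t psi^t) = sum_i O_i^t psi_i^t
expectation = np.dot(O_t, psi_t)
print(expectation)  # 1 * 0.47 + 2 * 0.29 + 3 * 0.24 = 1.77
```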

Some Commutators

In preparation for the discrete Noether’s theorem, note that

\begin{aligned} { [P,O] = \sum_{i,j,t} \sum_{\epsilon\in[i,j]} (O_j^{t+1} - O_i^t) P_{i,j}^{\epsilon,t} \mathbf{e}^{(i,t)(j,t+1)}_\epsilon. } \end{aligned}


\begin{aligned} { [[P,O],O] = \sum_{i,j,t} \sum_{\epsilon\in[i,j]} (O_j^{t+1} - O_i^t)^2 P_{i,j}^{\epsilon,t} \mathbf{e}^{(i,t)(j,t+1)}_\epsilon. } \end{aligned}

Since the transition probabilities P_{i,j}^{\epsilon,t} are nonnegative, these commutators vanish if and only if

P_{i,j}^{\epsilon,t} \ne 0 \implies O_j^{t+1} = O_i^t.

This implies [P,O] = 0 if and only if O is constant on each connected component of the state-time graph.

Constant Expectations

In this section, we determine the conditions under which the expectation of an observable O is constant in time, i.e.

\langle O^{t+1}\rangle = \langle O^{t} \rangle

for all t. This is a fairly straightforward application of the discrete master equation, i.e.

\begin{aligned} \langle O^{t+1}\rangle &= \sum_{j} \psi_j^{t+1} O_j^{t+1} \\ &= \sum_{i} {\psi_i^{t} \sum_j {\sum_{\epsilon\in[i,j]} { P_{i,j}^{\epsilon,t} O_j^{t+1}}}}\end{aligned}

indicating that, if this is to hold for every state \psi, the condition we're looking for is

\begin{aligned} O_i^{t} = \sum_j {\sum_{\epsilon\in[i,j]} { P_{i,j}^{\epsilon,t} O_j^{t+1}. }}\end{aligned}
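A minimal numerical check of this condition, assuming single edges so the \epsilon sum collapses: a constant observable satisfies it automatically, because the transition probabilities out of each state sum to 1.

```python
import numpy as np

# Hypothetical transition probabilities at time t (each row sums to 1).
P_t = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 0.8, 0.2],
    [0.1, 0.0, 0.9],
])

# A constant observable at time t+1 satisfies the condition automatically:
# O_i^t = sum_j P_{i,j}^t O_j^{t+1} = 2 * sum_j P_{i,j}^t = 2.
O_next = np.array([2.0, 2.0, 2.0])
O_t_required = P_t @ O_next
print(O_t_required)  # each entry equals 2, up to rounding
```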

Noether’s Theorem

In this section, we demonstrate that when both \langle O^t\rangle and \langle (O^t)^2\rangle are constant in time, this implies

tr_1\left( [[P,O],O] \right) = 0

which, in turn, implies [P,O] = 0, since every term in the trace is nonnegative. To do this, we first expand

\begin{aligned} tr_1([[P,O],O]) = \sum_{i,j,t} \sum_{\epsilon\in[i,j]} (O_j^{t+1} - O_i^t)^2 P_{i,j}^{\epsilon,t}. \end{aligned}

The condition for this trace to vanish is the same as the condition for the commutators above to vanish, i.e.

P_{i,j}^{\epsilon,t} \ne 0 \implies O_j^{t+1} = O_i^t.

Expanding the trace further results in

\begin{aligned} tr_1([[P,O],O]) = \sum_{i,j,t} \sum_{\epsilon\in[i,j]} P_{i,j}^{\epsilon,t} {(O_j^{t+1})}^2 - 2 O_i^t (P_{i,j}^{\epsilon,t} O_j^{t+1}) + (O_i^t)^2 P_{i,j}^{\epsilon,t}.\end{aligned}

Summing over j and \epsilon when \langle O^t\rangle and \langle (O^t)^2\rangle are constants results in

\begin{aligned} \text{1st Term + 2nd Term} = -\sum_{i,t} (O_i^t)^2,\end{aligned}

while summing j and \epsilon in the third term results in

\begin{aligned} \text{3rd Term} = \sum_{i,t} (O_i^t)^2 \end{aligned}

by definition of the transition 1-form. Consequently, when \langle O^t\rangle and \langle (O^t)^2\rangle are constants, it follows that

tr_1([[P,O],O]) =0.

Finally, this implies [P,O] = 0 if and only if \langle O^t\rangle and \langle (O^t)^2\rangle are constant in time.
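The theorem can be checked numerically. In the sketch below (hypothetical, time-independent P and O, single edges), O is constant on each connected component, so the edge condition O_j^{t+1} = O_i^t holds wherever P is nonzero, and both \langle O\rangle and \langle O^2\rangle are conserved:

```python
import numpy as np

# Hypothetical, time-independent transitions with two disconnected
# components: states {0, 1} and states {2, 3}.
P = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.4, 0.6, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.2, 0.8],
])

# O is constant on each component, so O_j = O_i on every edge with
# nonzero transition probability, i.e. [P, O] = 0.
O = np.array([1.0, 1.0, 5.0, 5.0])

psi = np.array([0.25, 0.25, 0.25, 0.25])
for _ in range(10):
    e1, e2 = np.dot(O, psi), np.dot(O**2, psi)
    psi = psi @ P
    # Both <O> and <O^2> are conserved by the master equation.
    assert np.isclose(np.dot(O, psi), e1)
    assert np.isclose(np.dot(O**2, psi), e2)
```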

Written by Eric

December 25, 2011 at 9:09 am

Network Theory and Discrete Calculus – Electrical Networks


This post is part of a series

Basic Equations

In Part 16 of John Baez’ series on Network Theory, he discussed electrical networks. On the day he published his article (November 4), I wrote down the following in my notebook

G\circ dV = [G,V] = I and \partial I = 0.

The first equation is essentially the discrete calculus version of Ohm’s Law, where

\begin{aligned} G = \sum_{i,j} \sum_{\epsilon\in[i,j]} G_{i,j}^\epsilon \mathbf{e}^{i,j}_\epsilon \end{aligned}

is a discrete 1-form representing conductance,

\begin{aligned} V = \sum_i V_i \mathbf{e}^i \end{aligned}

is a discrete 0-form representing voltage, and

\begin{aligned} I = \sum_{i,j} \sum_{\epsilon\in[i,j]} I_{i,j}^\epsilon \mathbf{e}^{i,j}_\epsilon \end{aligned}

is a discrete 1-form representing current. In components, the first equation becomes

G_{i,j}^\epsilon \left(V_j - V_i\right) = I^\epsilon_{i,j}.
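In code, this discrete Ohm's Law is a one-line map over edges. A minimal sketch with hypothetical conductances and voltages (single edges, so \epsilon is dropped):

```python
# Hypothetical network: each edge is (i, j, conductance), V holds node voltages.
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 0.5)]
V = [10.0, 6.0, 0.0]

# Discrete Ohm's Law in components: I_{i,j} = G_{i,j} (V_j - V_i)
currents = {(i, j): g * (V[j] - V[i]) for i, j, g in edges}
print(currents)  # {(0, 1): -8.0, (1, 2): -6.0, (0, 2): -5.0}
```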

The second equation is a charge conservation law which simply says

I_{*,i} = I_{i,*},

where


\begin{aligned} I_{*,i} = \sum_j \sum_{\epsilon\in[j,i]} I^\epsilon_{j,i}\end{aligned}

is the sum of all currents into node i and

\begin{aligned} I_{i,*} = \sum_j \sum_{\epsilon\in[i,j]} I^\epsilon_{i,j}\end{aligned}

is the sum of all currents out of node i. This is more general than it may first appear. The reason is that directed graphs are naturally about spacetime, so the currents here are more like 4-dimensional currents of special relativity. The equation

\partial I = 0

is related to the corresponding Maxwell’s equation

d^\dagger j = 0,

where d^\dagger is the adjoint exterior derivative and j is the 4-current 1-form

j = j_x dx + j_y dy + j_z dz + \rho dt.

This also implies the discrete Ohm’s Law appearing above is 4-dimensional and actually a bit more general than the usual Ohm’s Law.
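The conservation law \partial I = 0 is easy to check on a small example. The sketch below (hypothetical data) computes the graph divergence, inflow minus outflow, at each node; a steady loop current satisfies the law exactly:

```python
# The graph divergence (boundary) of a current 1-form: at each node,
# sum of currents in minus sum of currents out.
def divergence(currents, num_nodes):
    div = [0.0] * num_nodes
    for (i, j), current in currents.items():
        div[j] += current  # flows into node j
        div[i] -= current  # flows out of node i
    return div

# A steady current circulating around a 3-node loop is conserved at every node.
currents = {(0, 1): 2.0, (1, 2): 2.0, (2, 0): 2.0}
print(divergence(currents, 3))  # [0.0, 0.0, 0.0]
```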

Some Thoughts

I’ve been thinking about this off and on since then as time allows, but questions seem to be growing exponentially.

For one, the equation

[G,V] = GV - VG = I

is curious because it implies that [G,\cdot] is a derivation, i.e.

[G,V_1 V_2] = [G,V_1] V_2 + V_1 [G, V_2].
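Indeed, this follows from the usual add-and-subtract manipulation, using nothing more than associativity:

\begin{aligned} [G, V_1 V_2] &= G V_1 V_2 - V_1 V_2 G \\ &= (G V_1 - V_1 G) V_2 + V_1 (G V_2 - V_2 G) \\ &= [G,V_1] V_2 + V_1 [G,V_2]. \end{aligned}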

Further, though purely by coincidence, in my paper with Urs we introduced the graph operator

\begin{aligned} \mathbf{G} = \sum_{i,j} \sum_{\epsilon\in[i,j]} \mathbf{e}_\epsilon^{i,j}\end{aligned}

and showed, for any directed graph and any discrete 0-form \phi, that

d\phi = [\mathbf{G},\phi].

Is it possible that G and \mathbf{G} are related?

I think they are. This brings thoughts of spin networks and Penrose, but I’ll try to refrain from speculating too much beyond mentioning it.

If they were related, this would mean that the discrete Ohm’s Law above simplifies even further to

dV = I

and hence

\partial d V = 0.

In components, with N_{i,j} denoting the number of directed edges from node i to node j, the above becomes

\begin{aligned} \sum_j \left(V_j - V_i\right) \left(N_{i,j} + N_{j,i} \right) = 0.\end{aligned}

This expresses an effective conductance in terms of the total number of directed edges connecting the two nodes in either direction, i.e.

G^*_{i,j} = N_{i,j} + N_{j,i}.

If the G^\epsilon_{i,j}'s appearing in the conductance 1-form G are themselves effective conductances resulting from multiple more fundamental directed edges, then we do in fact have

G = \mathbf{G}.

Applications from here can go in any number of directions, so stay tuned!

Written by Eric

December 10, 2011 at 9:23 pm

Network Theory and Discrete Calculus – Edge Algebra


This post is part of a series

In my last post, I noted that in following John Baez’ series, I’m finding the need to introduce operators that I haven’t previously used in any applications. In this post, I will introduce another. It turns out that we could get away without introducing this concept, but I think it helps motivate some things I will talk about later.

In all previous applications, the important algebra was a noncommutative graded differential algebra. The grading means that the degree of elements add when you multiply them together. For example, the product of two nodes (degree 0) is a node (degree 0+0), the product of a node (degree 0) and a directed edge (degree 1) is a directed edge (degree 0+1), and the product of a directed edge (degree 1) with another directed edge is a directed surface (degree 1+1).

Note the algebra of nodes is a commutative sub-algebra of the full noncommutative graded algebra.

There is another related commutative edge algebra with corresponding edge product.

The edge product is similar to the product of nodes in that it is a projection given by

\mathbf{e}_\epsilon^{i,j} \circ \mathbf{e}_{\epsilon'}^{k,l} = \delta_{\epsilon,\epsilon'} \delta_{i,k} \delta_{j,l} \mathbf{e}_\epsilon^{i,j}.

It is a projection because for an arbitrary discrete 1-form

\begin{aligned}\alpha = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j},\end{aligned}

we have

\mathbf{e}_\epsilon^{i,j} \circ \alpha = \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j}

and


\mathbf{e}_\epsilon^{i,j} \circ \mathbf{e}_\epsilon^{i,j} = \mathbf{e}_\epsilon^{i,j}.

The product of two discrete 1-forms is

\begin{aligned}\alpha\circ\beta = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \beta_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j}\end{aligned}.
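A minimal sketch of the edge product, storing a discrete 1-form as a dictionary keyed by (i, j, \epsilon) (hypothetical coefficients):

```python
# Discrete 1-forms stored as dictionaries keyed by (i, j, epsilon).
def edge_product(alpha, beta):
    """Coefficient-wise product, supported on edges common to both 1-forms."""
    return {e: alpha[e] * beta[e] for e in alpha if e in beta}

alpha = {(0, 1, 0): 2.0, (1, 2, 0): 3.0}
beta = {(0, 1, 0): 0.5, (2, 3, 0): 4.0}
print(edge_product(alpha, beta))  # {(0, 1, 0): 1.0}
```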

I have not yet come across an application where the full edge algebra is needed. When the product does arise, one of the discrete 1-forms is usually the coboundary of a discrete 0-form, i.e.

\alpha\circ d\phi.

When this is the case, the edge product can be expressed as a (graded) commutator in the noncommutative graded algebra, i.e.

\alpha\circ d\phi = [\alpha,\phi].

An example of this will be seen when we examine electrical circuits.
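In the meantime, the identity \alpha\circ d\phi = [\alpha,\phi] can be verified numerically on a small hypothetical graph (single edges, so the \epsilon label is dropped):

```python
# phi is a discrete 0-form on nodes; alpha is a discrete 1-form on edges.
phi = {0: 1.0, 1: 4.0, 2: 2.0}
alpha = {(0, 1): 2.0, (1, 2): 3.0}

# d phi has edge components phi_j - phi_i.
d_phi = {(i, j): phi[j] - phi[i] for (i, j) in alpha}

# Left side: the edge product alpha o d phi, coefficient-wise on edges.
lhs = {e: alpha[e] * d_phi[e] for e in alpha}

# Right side: the commutator [alpha, phi], with components alpha_{i,j} (phi_j - phi_i).
rhs = {(i, j): a * (phi[j] - phi[i]) for (i, j), a in alpha.items()}

assert lhs == rhs
print(lhs)  # {(0, 1): 6.0, (1, 2): -6.0}
```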

Written by Eric

November 20, 2011 at 12:21 pm

Network Theory and Discrete Calculus – Notation Revisited


This post is part of a series

As stated in the Introduction to this series, one of my goals is to follow along with John Baez’ series and reformulate things in the language of discrete calculus. Along the way, I’m coming across operations that I haven’t used in any of my prior applications of discrete calculus to mathematical finance and field theories. For instance, in the The Discrete Master Equation, I introduced a boundary operator

\begin{aligned} \partial \mathbf{e}^{i,j} = \mathbf{e}^j-\mathbf{e}^i.\end{aligned}

Although I hope the reason I call this a boundary operator is obvious, it would be more precise to call it something like a graph divergence. To see why, consider the boundary of an arbitrary discrete 1-form

\begin{aligned}\partial \alpha = \sum_{i,j} \alpha_{i,j} \left(\mathbf{e}^j - \mathbf{e}^i\right) = \sum_i \left[ \sum_j \left(\alpha_{j,i} - \alpha_{i,j}\right)\right] \mathbf{e}^i.\end{aligned}

A hint of sloppy notation has already crept in here, but we can see that the boundary of a discrete 1-form at a node i is the sum of coefficients flowing into node i minus the sum of coefficients flowing out of node i. This is what you would expect of a divergence operator, but divergence depends on a metric. This operator does not, hence it is topological in nature. It is tempting to call this a topological divergence, but I think graph divergence is a better choice for reasons to be seen later.

One reason the above notation is a bit sloppy is because in the summations, we should really keep track of what directed edges are actually present in the directed graph. Until now, simply setting

\mathbf{e}^{i,j} = 0

if there is no directed edge from node i to node j was sufficient. Not anymore.

Also, in the applications where I've used discrete calculus so far, there has always been only a single directed edge connecting any two nodes. When applying discrete calculus to electrical circuits, as John has started doing in his series, we obviously would like to consider elements that are in parallel.

I tend to get hung up on notation and have thought about the best way to deal with this. My solution is not perfect and I'm open to suggestions, but what I settled on is to introduce a summation not only over nodes, but also over the directed edges connecting those nodes. Here it is for an arbitrary discrete 1-form

\begin{aligned}\alpha = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j},\end{aligned}

where [i,j] is the set of all directed edges from node i to node j. I'm not 100% enamored with it, but it is handy for performing calculations and doesn't make me think too much.

For example, with this new notation, the boundary operator is much clearer

\begin{aligned} \partial \alpha &= \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \left(\mathbf{e}^{j}-\mathbf{e}^i\right) \\ &= \sum_i \left[\sum_j \left( \sum_{\epsilon\in[j,i]} \alpha_{j,i}^{\epsilon} - \sum_{\epsilon\in[i,j]} \alpha_{i,j}^{\epsilon} \right)\right]\mathbf{e}^i.\end{aligned}

As before, this says the graph divergence of \alpha at the node i is the sum of all coefficients flowing into node i minus the sum of all coefficients flowing out of node i. Moreover, for any node j there can be any number of directed edges (including none) from j into i.
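A minimal sketch of this graph divergence, with a 1-form stored as a dictionary keyed by (i, j, \epsilon) so parallel edges are handled naturally (hypothetical coefficients):

```python
# A discrete 1-form on a multigraph: keys are (i, j, epsilon), where epsilon
# labels one of possibly several directed edges from node i to node j.
alpha = {(0, 1, 0): 1.0, (0, 1, 1): 2.0, (1, 0, 0): 0.5}

def graph_divergence(alpha, num_nodes):
    """Sum of coefficients flowing into each node minus those flowing out."""
    div = [0.0] * num_nodes
    for (i, j, epsilon), a in alpha.items():
        div[j] += a  # flows into node j
        div[i] -= a  # flows out of node i
    return div

print(graph_divergence(alpha, 2))  # [-2.5, 2.5]
```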

Written by Eric

November 19, 2011 at 11:27 pm

Network Theory and Discrete Calculus – Introduction


I’ve enjoyed applying discrete calculus to various problems since Urs Schreiber and I wrote our paper together back in 2004

Discrete differential geometry on causal graphs

Shortly after that, I wrote an informal paper applying the theory to finance in

Financial modeling using discrete stochastic calculus

From there I wrote up some private notes laying the foundations for applying a higher-dimensional version of discrete calculus to interest rate models. However, life intervened; I went to work on Wall Street, followed by various career twists leading me to Hong Kong, where I am today. The research has lain fairly dormant since then.

I started picking this up again recently when my friend, John Baez, effectively changed careers and started the Azimuth Project. In particular, I’ve recently developed a discrete Burgers equation with corresponding discrete Cole-Hopf transformation, which is summarized – including numerical simulation results – on the Azimuth Forum here:

Discrete Burgers equation revisited

Motivated by these results, I started looking at a reformulation of the Navier-Stokes equation in

Towards Navier-Stokes from noncommutative geometry

This is still a work-in-progress, but sorting this out is a necessary step to writing down the discrete Navier-Stokes equation.

Even more recently, John began a series of very interesting Azimuth Blog posts on network theory. I knew that network theory and discrete calculus should link up together naturally, but it took a while to see the connection. It finally clicked one night as I lay in bed half asleep in one of those rare "Eureka!" moments. I wrote up the details in

Discrete stochastic mechanics

There is much more to be said about the connection between network theory and discrete calculus. I intend to write a series of subsequent posts in parallel to John’s highlighting how his work with Brendan Fong can be presented in terms of discrete calculus.

Written by Eric

October 28, 2011 at 9:12 am

