# Phorgy Phynance

## Discrete Stochastic Calculus

This post is part of a series

In the previous post of this series, we found that when Cartesian coordinates are placed on a binary tree, the commutation relations are given by

• $[dx,x] = \frac{(\Delta x)^2}{\Delta t} dt$
• $[dt,t] = \Delta t dt$
• $[dx,t] = [dt,x] = \Delta t dx.$

There are two distinct classes of discrete calculus depending on the relation between $\Delta x$ and $\Delta t$.

### Discrete Exterior Calculus

If we set $\Delta x = \Delta t$, the commutation relations reduce to

• $[dx,x] = \Delta t dt$
• $[dt,t] = \Delta t dt$
• $[dx,t] = [dt,x] = \Delta t dx$

and in the continuum limit, i.e. $\Delta t\to 0$, these further reduce to

• $[dx,x] = 0$
• $[dt,t] = 0$
• $[dx,t] = [dt,x] = 0.$

In other words, when $\Delta x = \Delta t$, the commutation relations vanish in the continuum limit and the discrete calculus converges to the exterior calculus of differential forms.

Because of this, the discrete calculus on a binary tree with $\Delta x = \Delta t$ will be referred to as the discrete exterior calculus.

### Discrete Stochastic Calculus

If instead of $\Delta x = \Delta t$, we set $(\Delta x)^2 = \Delta t$, the commutation relations reduce to

• $[dx,x] = dt$
• $[dt,t] = \Delta t dt$
• $[dx,t] = [dt,x] = \Delta t dx$

and in the continuum limit, i.e. $\Delta t\to 0$, these further reduce to

• $[dx,x] = dt$
• $[dt,t] = 0$
• $[dx,t] = [dt,x] = 0.$

In this case, all commutation relations vanish in the continuum limit except $[dx,x] = dt$.

In the paper:

• #### Noncommutative Geometry and Stochastic Calculus: Applications in Mathematical Finance

I demonstrate how the continuum limit of the commutation relations gives rise to (a noncommutative version of) stochastic calculus, where $dx$ plays the role of a Brownian motion.

Because of this, the discrete calculus on a binary tree with $(\Delta x)^2 = \Delta t$ will be referred to as the discrete stochastic calculus.

To date, discrete stochastic calculus has found robust applications in mathematical finance and fluid dynamics. For instance, the application of discrete stochastic calculus to Black-Scholes option pricing was presented in

• #### Financial Modelling Using Discrete Stochastic Calculus

and the application to fluid dynamics was presented in

• #### Discrete Burgers Equation Revisited

Both of these subjects will be addressed in more detail as part of this series of articles.

It should be noted that discrete calculus, together with its special cases of discrete exterior calculus and discrete stochastic calculus, represents a new framework for numerical modeling. We are not taking continuum models built on continuum calculus and constructing finite approximations. Instead, we are building a robust mathematical framework that has finiteness built in from the outset. The resulting numerical models are not approximations, but exact models developed within a finite numerical framework. The framework itself converges to its continuum counterpart, so any numerical model built within this framework will automatically converge to the corresponding continuum model (if such a thing is desired).

Discrete calculus provides a kind of meta algorithm. It is an algorithm for generating algorithms.

Written by Eric

August 25, 2012 at 12:38 pm

## Network Theory and Discrete Calculus – Coordinates

This post is part of a series

When the binary tree was presented in the context of discrete calculus, the following small section of the tree was illustrated to establish the way the nodes are labelled

Two sets of coordinates were introduced on the binary tree:

1. Cartesian coordinates $(x,t)$
2. Graph coordinates $(u^+,u^-)$

The following illustrates the binary tree when we zoom out a bit:

Cartesian coordinates were defined such that

• $x(i\pm 1,j+1) = x(i,j) \pm \Delta x,$ and
• $t(i\pm 1,j+1) = t(i,j) + \Delta t$

resulting in

• $dx = \Delta x (\mathbf{e}^+ - \mathbf{e}^-),$ and
• $dt = \Delta t (\mathbf{e}^+ + \mathbf{e}^-).$

Although Cartesian coordinates often help to relate discrete calculus to continuum calculus, the expressions are often not the most natural to work with when performing computations. One reason for this can be understood by overlaying the Cartesian coordinate lines onto the binary tree.

On the other hand, graph coordinates are defined such that

• $u^+(i+1,j+1) = u^+(i,j) + \Delta u^+,$
• $u^+(i-1,j+1) = u^+(i,j),$
• $u^-(i+1,j+1) = u^-(i,j),$
• $u^-(i-1,j+1) = u^-(i,j) + \Delta u^-$

resulting in

• $du^+ = \Delta u^+ \mathbf{e}^+,$ and
• $du^- = \Delta u^- \mathbf{e}^-.$

Computations are often cleaner when using graph coordinates. One reason for this can be understood by overlaying the graph coordinate lines onto the binary tree.

For instance, the commutation relations in graph coordinates are given by

• $[du^\pm,u^\pm] = \Delta u^\pm du^\pm$
• $[du^\pm,u^\mp] = 0$

whereas the commutation relations for Cartesian coordinates are given by

• $[dx,x] = \frac{(\Delta x)^2}{\Delta t} dt$
• $[dt,t] = \Delta t dt$
• $[dx,t] = [dt,x] = \Delta t dx.$

The cross commutation relations between the two sets of coordinates are given by

• $[du^\pm,x] = [dx,u^\pm] = \pm\Delta x du^\pm$
• $[du^\pm,t] = [dt,u^\pm] = \Delta t du^\pm$

As a final note, for any discrete 0-form $\phi$, the above indicates that

$d\phi = \frac{1}{\Delta t} [dt,\phi].$
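This identity is easy to check numerically. Below is a minimal sketch (the encoding of discrete forms as Python functions is my own convention, not from the post): a 1-form is stored by its two left components, and a 0-form multiplies a basis edge by its value at the source node from the left and at the target node from the right.

```python
# A hedged sketch: represent a 0-form as a function on nodes (i, j) of the
# binary tree, and a 1-form by its two left components, i.e. the
# coefficients of e^+ (edge to (i+1, j+1)) and e^- (edge to (i-1, j+1)).

dt_step = 0.25  # Delta t

def phi(i, j):
    # an arbitrary discrete 0-form used for testing
    return (i + 2 * j) ** 2 + 3 * i

def d(f):
    """Exterior derivative: left components (coefficient of e^+, of e^-)."""
    plus = lambda i, j: f(i + 1, j + 1) - f(i, j)
    minus = lambda i, j: f(i - 1, j + 1) - f(i, j)
    return plus, minus

def commutator_dt(f, Dt):
    """Left components of [dt, f] = dt*f - f*dt, using dt = Dt*(e^+ + e^-).
    A 0-form multiplies a basis edge e^{(i,j),(k,j+1)} by its value at the
    target node from the right and at the source node from the left."""
    plus = lambda i, j: Dt * f(i + 1, j + 1) - f(i, j) * Dt
    minus = lambda i, j: Dt * f(i - 1, j + 1) - f(i, j) * Dt
    return plus, minus

# Check d(phi) = (1/Delta t) [dt, phi] at a few nodes.
dplus, dminus = d(phi)
cplus, cminus = commutator_dt(phi, dt_step)
for (i, j) in [(0, 0), (1, 1), (-2, 4)]:
    assert abs(dplus(i, j) - cplus(i, j) / dt_step) < 1e-12
    assert abs(dminus(i, j) - cminus(i, j) / dt_step) < 1e-12
```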

Written by Eric

February 19, 2012 at 2:17 pm

## Network Theory and Discrete Calculus – Differentiation Rules

This post is part of a series

In the previous post, we introduced discrete calculus on a binary tree. In particular, we introduced two sets of basis 1-forms we’ll refer to as

1. Graph bases $\{du^+,du^-\}$ and
2. Coordinate bases $\{dt,dx\}$

related by

$du^\pm = \Delta u^\pm \mathbf{e}^\pm = \frac{\Delta u^\pm}{2\Delta t} dt \pm \frac{\Delta u^\pm}{2\Delta x} dx$

and saw that discrete 1-forms can be expressed in either left- or right-component form

$\alpha = \overleftarrow{\alpha_t} dt + \overleftarrow{\alpha_x} dx = dt \overrightarrow{\alpha_t} + dx \overrightarrow{\alpha_x},$

where, in general, the left- and right-component 0-forms do not coincide, i.e.

$\overleftarrow{\alpha_t} \ne \overrightarrow{\alpha_t}$ and $\overleftarrow{\alpha_x} \ne \overrightarrow{\alpha_x}$

due to the noncommutativity of discrete 0-forms and discrete 1-forms.

### Product Rule

Recall the exterior derivative of a discrete 0-form $f$ may be expressed in left-component form as

$df = \overleftarrow{\partial_t f} dt + \overleftarrow{\partial_x f} dx$

where

\begin{aligned} \overleftarrow{\partial_t f} = \sum_{(i,j)} \left[\frac{f(i+1,j+1)+f(i-1,j+1) -2f(i,j)}{2\Delta t}\right] \mathbf{e}^{(i,j)} \end{aligned}

and

\begin{aligned} \overleftarrow{\partial_x f} = \sum_{(i,j)} \left[\frac{f(i+1,j+1) -f(i-1,j+1)}{2\Delta x}\right] \mathbf{e}^{(i,j)} \end{aligned}

Although the product rule is satisfied, i.e.

$d(fg) = (df)g + f(dg),$

note the discrete 0-form $g$ is on the right of the discrete 1-form $df$ and the discrete 0-form $f$ is on the left of the discrete 1-form $dg$. Attempting to express the product rule in left components, we find

\begin{aligned} d(fg) &= \overleftarrow{\partial_t (fg)} dt + \overleftarrow{\partial_x (fg)} dx \\ &= (\overleftarrow{\partial_t f} dt + \overleftarrow{\partial_x f} dx )g + f(\overleftarrow{\partial_t g} dt + \overleftarrow{\partial_x g} dx).\end{aligned}

In order to move $g$ to the left of the coordinate bases above, we need to know the commutation relations

$[dt,g]$ and $[dx,g].$

These commutation relations may be determined by noting that for any two discrete 0-forms $f$ and $g$, we have

$[df,g] = [dg,f].$

Therefore,

\begin{aligned} { [dt,g] }&= [dg,t] \\ &= \overleftarrow{\partial_t g} [dt,t] + \overleftarrow{\partial_x g} [dx,t] \\ &= \Delta t \overleftarrow{\partial_t g} dt + \Delta t \overleftarrow{\partial_x g} dx\end{aligned}

and

\begin{aligned} { [dx,g] }&= [dg,x] \\ &= \overleftarrow{\partial_t g} [dt,x] + \overleftarrow{\partial_x g} [dx,x] \\ &= \Delta t \overleftarrow{\partial_t g} dx + \frac{(\Delta x)^2}{\Delta t} \overleftarrow{\partial_x g} dt,\end{aligned}

where use has been made of the coordinate commutation relations in the previous post.

Putting everything together, we find the product rule above implies the left components satisfy

$\overleftarrow{\partial_t (fg)} = (\overleftarrow{\partial_t f}) g + f(\overleftarrow{\partial_t g})+ \Delta t (\overleftarrow{\partial_t f})(\overleftarrow{\partial_t g}) + \frac{(\Delta x)^2}{\Delta t} (\overleftarrow{\partial_x f})( \overleftarrow{\partial_x g})$

and

$\overleftarrow{\partial_x (fg)} = (\overleftarrow{\partial_x f}) g + f(\overleftarrow{\partial_x g})+ \Delta t (\overleftarrow{\partial_x f})(\overleftarrow{\partial_t g}) + \Delta t (\overleftarrow{\partial_t f})( \overleftarrow{\partial_x g})$
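These two component identities can be verified directly on random grid data. The following sketch (the array layout and names are mine) implements the left partials from the difference formulas given earlier in the post and checks both identities with NumPy.

```python
# A hedged numerical check of the discrete product rule components.
import numpy as np

rng = np.random.default_rng(0)
dx_step, dt_step = 0.1, 0.05

# Sample f and g on a grid of node labels (i, j); only values at (i±1, j+1)
# relative to (i, j) are needed, so a dense array is a convenient stand-in.
f = rng.normal(size=(8, 8))
g = rng.normal(size=(8, 8))

def partial_t(h):
    # left d_t at interior nodes: (h(i+1,j+1) + h(i-1,j+1) - 2h(i,j)) / (2 Dt)
    return (h[2:, 1:] + h[:-2, 1:] - 2 * h[1:-1, :-1]) / (2 * dt_step)

def partial_x(h):
    # left d_x at interior nodes: (h(i+1,j+1) - h(i-1,j+1)) / (2 Dx)
    return (h[2:, 1:] - h[:-2, 1:]) / (2 * dx_step)

fg = f * g  # 0-forms commute, so their product is pointwise
f0, g0 = f[1:-1, :-1], g[1:-1, :-1]  # values at the node (i, j) itself

lhs_t = partial_t(fg)
rhs_t = (partial_t(f) * g0 + f0 * partial_t(g)
         + dt_step * partial_t(f) * partial_t(g)
         + (dx_step ** 2 / dt_step) * partial_x(f) * partial_x(g))
assert np.allclose(lhs_t, rhs_t)

lhs_x = partial_x(fg)
rhs_x = (partial_x(f) * g0 + f0 * partial_x(g)
         + dt_step * (partial_x(f) * partial_t(g) + partial_t(f) * partial_x(g)))
assert np.allclose(lhs_x, rhs_x)
```

Note the correction terms proportional to $\Delta t$ and $(\Delta x)^2/\Delta t$ are exactly what the commutation relations contribute; they are not numerical error.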

### Change of Variables

Change of variables is straightforward, yet it has many applications, so it is worth writing down here for future reference.

Let $S$ be a discrete 0-form with

$dS = \overleftarrow{\partial_t S} dt + \overleftarrow{\partial_x S} dx.$

If $\overleftarrow{\partial_x S}$ is invertible, we can rewrite this as

$dx = \frac{1}{\overleftarrow{\partial_x S}} (dS - \overleftarrow{\partial_t S} dt).$

Given any other discrete 0-form $V$, we have

$dV = \left.\overleftarrow{\partial_t V}\right|_x dt + \overleftarrow{\partial_x V} dx = \left(\left.\overleftarrow{\partial_t V}\right|_x - \frac{\overleftarrow{\partial_x V}}{\overleftarrow{\partial_x S}} \left.\overleftarrow{\partial_t S}\right|_x\right) dt + \frac{\overleftarrow{\partial_x V}}{\overleftarrow{\partial_x S}} dS.$

From this, we can read off the discrete chain rules

$\overleftarrow{\partial_x V} = \overleftarrow{\partial_S V} \overleftarrow{\partial_x S}$

and

$\left.\overleftarrow{\partial_t V}\right|_S = \left.\overleftarrow{\partial_t V} \right|_x-\overleftarrow{\partial_S V} \left.\overleftarrow{\partial_t S}\right|_x.$

Written by Eric

January 8, 2012 at 11:20 pm

## Network Theory and Discrete Calculus – The Binary Tree

This post is part of a series

So far in this series we’ve touched on a few applications of discrete calculus, but these were still at a fairly high level of abstraction. In this post, we lay some foundations for some very concrete applications that will allow us to actually start calculating things.

### The Binary Tree

A particularly nice directed graph with many applications is the binary tree – a portion of which is illustrated below:

A node in the binary tree is labelled $(i,j)$, where the first integer $i$ denotes the “spatial” position, i.e. its location at a given time, and the second integer $j$ denotes the “temporal” position.

### Discrete Forms

A general discrete 0-form on a binary tree is written as usual as

\begin{aligned} \psi = \sum_{(i,j)} \psi(i,j) \mathbf{e}^{(i,j)},\end{aligned}

where the sum is only over nodes of the binary tree and not over all integers. For instance, if $(i,j)$ is in the binary tree, then $(i+1,j)$ and $(i,j+1)$ are not.

Due to the special nature of the binary tree, a general discrete 1-form may also be reduced to a single sum over nodes, but in two distinct ways. First, we can group edges directed away from a given node. Second, we can group edges directed toward a given node.

In the first case, we can write

\begin{aligned} \alpha = \sum_{(i,j)} \left [ \overleftarrow{\alpha_+}(i,j) \mathbf{e}^{(i,j),(i+1,j+1)} + \overleftarrow{\alpha_-}(i,j) \mathbf{e}^{(i,j),(i-1,j+1)} \right] \end{aligned}

which is referred to as the left-component form and in the second case, we can write

\begin{aligned} \alpha = \sum_{(i,j)} \left [ \overrightarrow{\alpha_+}(i,j) \mathbf{e}^{(i-1,j-1),(i,j)} + \overrightarrow{\alpha_-}(i,j) \mathbf{e}^{(i+1,j-1),(i,j)} \right] \end{aligned}

which is referred to as the right-component form. These are two equivalent ways of expressing the same general discrete 1-form with

$\overleftarrow{\alpha_+}(i,j) = \overrightarrow{\alpha_+}(i+1,j+1)$

and

$\overleftarrow{\alpha_-}(i,j) = \overrightarrow{\alpha_-}(i-1,j+1) .$

To see why these are referred to as left- and right-component forms, denote

$\mathbf{e}^{(i,j),(i+1,j+1)} = \overleftarrow{\mathbf{e}^+}(i,j) = \overrightarrow{\mathbf{e}^+}(i+1,j+1)$

and

$\mathbf{e}^{(i,j),(i-1,j+1)} = \overleftarrow{\mathbf{e}^-}(i,j) = \overrightarrow{\mathbf{e}^-}(i-1,j+1)$

and define a pair of basis 1-forms

\begin{aligned} \mathbf{e}^+ = \sum_{(i,j)} \overleftarrow{\mathbf{e}^+}(i,j) = \sum_{(i,j)} \overrightarrow{\mathbf{e}^+}(i,j) \end{aligned}

and

\begin{aligned} \mathbf{e}^- = \sum_{(i,j)} \overleftarrow{\mathbf{e}^-}(i,j) = \sum_{(i,j)} \overrightarrow{\mathbf{e}^-}(i,j). \end{aligned}

Next, we can define left- and right-component 0-forms

\begin{aligned} \overleftarrow{\alpha_\pm} = \sum_{(i,j)} \overleftarrow{\alpha_\pm}(i,j) \mathbf{e}^{(i,j)}\end{aligned}

and

\begin{aligned} \overrightarrow{\alpha_\pm} = \sum_{(i,j)} \overrightarrow{\alpha_\pm}(i,j) \mathbf{e}^{(i,j)}\end{aligned}

respectively so that a discrete 1-form may be expressed in left-component form as

\begin{aligned} \alpha = \overleftarrow{\alpha_+} \mathbf{e}^+ + \overleftarrow{\alpha_-} \mathbf{e}^- = \sum_{(i,j)} \left[\overleftarrow{\alpha_+}(i,j) \overleftarrow{\mathbf{e}^+}(i,j) + \overleftarrow{\alpha_-}(i,j) \overleftarrow{\mathbf{e}^-}(i,j)\right] \end{aligned}

or equivalently in right-component form as

\begin{aligned} \alpha = \mathbf{e}^+ \overrightarrow{\alpha_+} + \mathbf{e}^- \overrightarrow{\alpha_-} = \sum_{(i,j)} \left[\overrightarrow{\mathbf{e}^+}(i,j) \overrightarrow{\alpha_+}(i,j) + \overrightarrow{\mathbf{e}^-}(i,j) \overrightarrow{\alpha_-}(i,j)\right] \end{aligned}

In other words, the left- and right-component forms of the bases allow us to express a general discrete 1-form in terms of left- or right-component discrete 0-forms.

### Differentials

The exterior derivative of a general discrete 0-form $\psi$ on a binary tree is given in left-component form as

\begin{aligned} d\psi = \sum_{(i,j)} \left[\psi(i+1,j+1)-\psi(i,j)\right] \mathbf{e}^{(i,j),(i+1,j+1)} + \left[\psi(i-1,j+1)-\psi(i,j)\right] \mathbf{e}^{(i,j),(i-1,j+1)}. \end{aligned}

From this, we can read off the left-components which we’ll denote as

\begin{aligned} \overleftarrow{\partial_+ \psi} = \sum_{(i,j)} \left[\psi(i+1,j+1)-\psi(i,j)\right] \mathbf{e}^{(i,j)}\end{aligned}

and

\begin{aligned} \overleftarrow{\partial_- \psi} = \sum_{(i,j)} \left[\psi(i-1,j+1)-\psi(i,j)\right] \mathbf{e}^{(i,j)}\end{aligned}

so that

$d\psi = \overleftarrow{\partial_+ \psi} \mathbf{e}^+ + \overleftarrow{\partial_- \psi} \mathbf{e}^-.$

Similarly, the right-components are given by

\begin{aligned} \overrightarrow{\partial_+ \psi} = \sum_{(i,j)} \left[\psi(i,j)-\psi(i-1,j-1)\right] \mathbf{e}^{(i,j)}\end{aligned}

and

\begin{aligned} \overrightarrow{\partial_- \psi} = \sum_{(i,j)} \left[\psi(i,j)-\psi(i+1,j-1)\right] \mathbf{e}^{(i,j)}\end{aligned}

so that

$d\psi = \mathbf{e}^+ \overrightarrow{\partial_+ \psi} + \mathbf{e}^- \overrightarrow{\partial_- \psi}.$
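Since the left and right components describe the same 1-form, merely grouped at source or target nodes, they must agree after the index shift given earlier. A quick check (the conventions are mine):

```python
# A minimal sketch: the left and right components of d(psi) are the same
# edge coefficients, read off at the source node (left) or target node
# (right) of each directed edge.

def psi(i, j):
    return i ** 3 - 2 * i * j + 5 * j

left_plus   = lambda i, j: psi(i + 1, j + 1) - psi(i, j)
left_minus  = lambda i, j: psi(i - 1, j + 1) - psi(i, j)
right_plus  = lambda i, j: psi(i, j) - psi(i - 1, j - 1)
right_minus = lambda i, j: psi(i, j) - psi(i + 1, j - 1)

for (i, j) in [(0, 0), (3, 1), (-1, 2)]:
    # same coefficient on the edge (i,j) -> (i+1, j+1) ...
    assert left_plus(i, j) == right_plus(i + 1, j + 1)
    # ... and on the edge (i,j) -> (i-1, j+1)
    assert left_minus(i, j) == right_minus(i - 1, j + 1)
```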

### Noncommutative Coordinates

Although, strictly speaking, coordinates (other than node labels) are not necessary for performing computations in discrete calculus, it is helpful when comparing to continuum calculus to introduce coordinate 0-forms to the binary tree

\begin{aligned} x = \sum_{(i,j)} x(i,j) \mathbf{e}^{(i,j)} = \sum_{(i,j)} i \Delta x \mathbf{e}^{(i,j)}, \end{aligned}

where $\Delta x$ is the spatial distance between endpoints of a directed edge at successive time steps, and

\begin{aligned} t = \sum_{(i,j)} t(i,j) \mathbf{e}^{(i,j)} = \sum_{(i,j)} j \Delta t \mathbf{e}^{(i,j)}, \end{aligned}

where $\Delta t$ is the temporal spacing between successive temporal nodes.

In this special case, we have

\begin{aligned} \overleftarrow{\partial_+ x} = \overrightarrow{\partial_+ x} = -\overleftarrow{\partial_- x} = -\overrightarrow{\partial_- x} = \Delta x \sum_{(i,j)} \mathbf{e}^{(i,j)}\end{aligned}

and

\begin{aligned} \overleftarrow{\partial_+ t} = \overrightarrow{\partial_+ t} = \overleftarrow{\partial_- t} = \overrightarrow{\partial_- t} = \Delta t \sum_{(i,j)} \mathbf{e}^{(i,j)}\end{aligned}

so that

$dx = \Delta x (\mathbf{e}^+ - \mathbf{e}^-)$

and

$dt = \Delta t (\mathbf{e}^+ + \mathbf{e}^-).$

These relations can be inverted resulting in

$\mathbf{e}^+ = \frac{1}{2\Delta t} dt + \frac{1}{2\Delta x} dx$

and

$\mathbf{e}^- = \frac{1}{2\Delta t} dt - \frac{1}{2\Delta x} dx$

so that

\begin{aligned} d\psi &= \overleftarrow{\partial_+ \psi} \mathbf{e}^+ + \overleftarrow{\partial_- \psi} \mathbf{e}^- \\ &= \overleftarrow{\partial_t \psi} dt + \overleftarrow{\partial_x \psi} dx \end{aligned}

where

\begin{aligned} \overleftarrow{\partial_t \psi} &= \frac{\overleftarrow{\partial_+ \psi} + \overleftarrow{\partial_- \psi}}{2\Delta t} \\ &= \sum_{(i,j)} \left[\frac{\psi(i+1,j+1)+\psi(i-1,j+1) -2\psi(i,j)}{2\Delta t}\right] \mathbf{e}^{(i,j)} \end{aligned}

and

\begin{aligned} \overleftarrow{\partial_x \psi} &= \frac{\overleftarrow{\partial_+ \psi} - \overleftarrow{\partial_- \psi}}{2\Delta x} \\ &= \sum_{(i,j)} \left[\frac{\psi(i+1,j+1) -\psi(i-1,j+1)}{2\Delta x}\right] \mathbf{e}^{(i,j)} \end{aligned}

and the discrete calculus begins to resemble the continuum calculus.

### Commutation Relations

The coordinates $x$ and $t$ were referred to as “noncommutative” above because although discrete 0-forms commute, i.e.

$x t = t x,$

in general, discrete 0-forms and discrete 1-forms do not commute. A straightforward computation results in the following commutation relations

$[\mathbf{e}^\pm,x] = \pm\Delta x \mathbf{e}^\pm$

and

$[\mathbf{e}^\pm,t] = \Delta t \mathbf{e}^\pm$

from which it follows that

$[dx,x] = \frac{(\Delta x)^2}{\Delta t} dt$

$[dx,t] = [dt,x] = \Delta t dx$

$[dt,t] = \Delta t dt.$
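These relations can be checked mechanically. In the sketch below (the representation is my own choice, not from the post), a 1-form is stored by its left components, and multiplying a basis edge by a 0-form picks up the value at the source node from the left and at the target node from the right.

```python
# A hedged sketch verifying [dx, x] = ((Delta x)^2 / Delta t) dt and
# [dt, x] = Delta t dx on the binary tree.
dx_step, dt_step = 0.2, 0.1

x = lambda i, j: i * dx_step
t = lambda i, j: j * dt_step

def commutator(aplus, aminus, f):
    """Left components of [alpha, f] = alpha*f - f*alpha, where alpha has
    left components (aplus, aminus) on the edges to (i±1, j+1)."""
    cp = lambda i, j: aplus(i, j) * f(i + 1, j + 1) - f(i, j) * aplus(i, j)
    cm = lambda i, j: aminus(i, j) * f(i - 1, j + 1) - f(i, j) * aminus(i, j)
    return cp, cm

# dx = Delta x (e^+ - e^-), dt = Delta t (e^+ + e^-)
dxp, dxm = (lambda i, j: dx_step), (lambda i, j: -dx_step)
dtp, dtm = (lambda i, j: dt_step), (lambda i, j: dt_step)

cp, cm = commutator(dxp, dxm, x)
for (i, j) in [(0, 0), (2, 3), (-1, 5)]:
    # [dx, x] = (Delta x)^2 (e^+ + e^-) = ((Delta x)^2 / Delta t) dt
    assert abs(cp(i, j) - dx_step ** 2) < 1e-12
    assert abs(cm(i, j) - dx_step ** 2) < 1e-12

cp, cm = commutator(dtp, dtm, x)
for (i, j) in [(0, 0), (2, 3)]:
    # [dt, x] = Delta t Delta x (e^+ - e^-) = Delta t dx
    assert abs(cp(i, j) - dt_step * dx_step) < 1e-12
    assert abs(cm(i, j) + dt_step * dx_step) < 1e-12
```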

From here, there are two continuum limits one could consider that lead to different calculi. In the first, we could set

$\Delta x = c\Delta t, \Delta t\to 0.$

In this case, all commutation relations vanish and the continuum limit is also a commutative limit, i.e. coordinates and their differentials commute in the limit and we're left with the usual deterministic continuum calculus.

In the second limit, we could set

$(\Delta x)^2 = \Delta t, \Delta t\to 0.$

In this case, the second and third commutation relations vanish, but the first one remains

$[dx,x] = dt.$

This limit gives rise to stochastic calculus (or a very close cousin). Motivated by this, the discrete calculus on a binary tree when setting

$(\Delta x)^2 = \Delta t$

but keeping $\Delta t$ finite is referred to as discrete stochastic calculus.

Written by Eric

December 30, 2011 at 11:48 pm

## Network Theory and Discrete Calculus – Noether’s Theorem

This post is part of a series

As stated in the Introduction, one of the motivations for this series is to work in parallel with John Baez’ series on network theory to highlight some applications of discrete calculus. In this post, I reformulate some of the material in Part 11 pertaining to Noether’s theorem.

### The State-Time Graph

The directed graphs associated with discrete stochastic mechanics are described in the post The Discrete Master Equation, where the simple state-time graph example below was presented

Conceptually, the thing to keep in mind is that any transition from one state to another requires a time step. Therefore a transition from node $i$ to node $j$ is more precisely a transition from node $(i,t)$ to node $(j,t+1)$.

On a state-time graph, a discrete 0-form can be written as

\begin{aligned} \psi = \sum_{i,t} \psi^t_i \mathbf{e}^{(i,t)}.\end{aligned}

and a discrete 1-form can be written as

\begin{aligned} P = \sum_{i,j,t} \sum_{\epsilon\in[i,j]} P^{\epsilon,t}_{i,j} \mathbf{e}^{(i,t)(j,t+1)}_\epsilon.\end{aligned}

### The Master Equation

The master equation for discrete stochastic mechanics can be expressed simply as

$\partial(\psi P) = 0,$

where $\psi$ is a discrete 0-form representing the state at all times with

\begin{aligned} 0\le \psi_{i}^t \le 1 \quad\text{and}\quad \sum_{i} \psi_{i}^t = 1 \end{aligned}

and $P$ is a discrete 1-form representing transition probabilities with

\begin{aligned} 0\le P_{i,j}^{\epsilon,t} \le 1 \quad\text{and}\quad \sum_{j} \sum_{\epsilon\in[i,j]} P_{i,j}^{\epsilon,t} = 1 \end{aligned}

for all $t$. When expanded into components, the master equation becomes

\begin{aligned} \psi_j^{t+1} = \sum_i \sum_{\epsilon\in[i,j]} \psi_i^{t} P_{i,j}^{\epsilon,t}. \end{aligned}
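As a small worked example (the numbers, and the simplification to a single directed edge between states so the $\epsilon$ label can be dropped, are mine), the component master equation is just a row-stochastic matrix acting on the state, and normalization is preserved automatically:

```python
# A hedged sketch: evolve a state with psi^{t+1}_j = sum_i psi^t_i P_{i,j}
# and check that normalization is preserved because each row of P sums to 1.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])   # time-independent transition probabilities
psi = np.array([1.0, 0.0, 0.0])   # start in state 0

for _ in range(50):
    psi = psi @ P                 # psi^{t+1}_j = sum_i psi^t_i P_{i,j}
    assert abs(psi.sum() - 1.0) < 1e-12
```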

### Observables and Expectations

A general discrete 0-form on a state-time graph is defined over all states and all time. However, occasionally, we would like to consider a discrete 0-form defined over all states at a specific point in time. To facilitate this in a component-free manner, denote

\begin{aligned} 1^t = \sum_i \mathbf{e}^{(i,t)} \end{aligned}

so the identity can be expressed as

\begin{aligned} 1 = \sum_t 1^t.\end{aligned}

The discrete 0-form $1^t$ is a projection that projects a general discrete 0-form to a discrete 0-form defined only at time $t$. For instance, given a discrete 0-form $\psi$, let

\begin{aligned} \psi^t = 1^t \psi = \sum_i \psi_i^t \mathbf{e}^{(i,t)}\end{aligned}

so that

\begin{aligned} \psi = \sum_t \psi^t.\end{aligned}

In discrete stochastic mechanics, an observable is nothing more than a discrete 0-form

\begin{aligned} O = \sum_t O^t = \sum_{i,t} O_i^t \mathbf{e}^{(i,t)}.\end{aligned}

The expectation of an observable $O^t$ with respect to a state $\psi$ is given by

$\langle O^t\rangle = tr_0(O^t \psi) = \sum_i O_i^t \psi_i^t$

where $tr_0$ was defined in a previous post. Note: $O^t \psi = O^t \psi^t.$

### Some Commutators

In preparation for the discrete Noether’s theorem, note that

\begin{aligned} { [P,O] = \sum_{i,j,t} \sum_{\epsilon\in[i,j]} (O_j^{t+1} - O_i^t) P_{i,j}^{\epsilon,t} \mathbf{e}^{(i,t)(j,t+1)}_\epsilon. } \end{aligned}

and

\begin{aligned} { [[P,O],O] = \sum_{i,j,t} \sum_{\epsilon\in[i,j]} (O_j^{t+1} - O_i^t)^2 P_{i,j}^{\epsilon,t} \mathbf{e}^{(i,t)(j,t+1)}_\epsilon. } \end{aligned}

For these commutators to vanish, we must have

$P_{i,j}^{\epsilon,t} \ne 0 \implies O_j^{t+1} = O_i^t.$

This implies $[P,O] = 0$ if and only if $O$ is constant on each connected component of the state-time graph.

### Constant Expectations

In this section, we determine the conditions under which the expectation of an observable $O$ is constant in time, i.e.

$\langle O^{t+1}\rangle = \langle O^{t} \rangle$

for all $t$. This is a fairly straightforward application of the discrete master equation, i.e.

\begin{aligned} \langle O^{t+1}\rangle &= \sum_{j} \psi_j^{t+1} O_j^{t+1} \\ &= \sum_{i} {\psi_i^{t} \sum_j {\sum_{\epsilon\in[i,j]} { P_{i,j}^{\epsilon,t} O_j^{t+1}}}}\end{aligned}

indicating the condition we’re looking for is

\begin{aligned} O_i^{t} = \sum_j {\sum_{\epsilon\in[i,j]} { P_{i,j}^{\epsilon,t} O_j^{t+1}. }}\end{aligned}

### Noether’s Theorem

In this section, we demonstrate that when both $\langle O^t\rangle$ and $\langle (O^t)^2\rangle$ are constant in time, this implies

$tr_1\left( [[P,O],O] \right) = 0$

which, in turn, implies $[P,O] = 0$. To do this, we first expand

\begin{aligned} tr_1([[P,O],O]) = \sum_{i,j,t} \sum_{\epsilon\in[i,j]} (O_j^{t+1} - O_i^t)^2 P_{i,j}^{\epsilon,t}. \end{aligned}

The condition for this trace to vanish is the same as the condition for the commutators above to vanish, i.e.

$P_{i,j}^{\epsilon,t} \ne 0 \implies O_j^{t+1} = O_i^t.$

Expanding the trace further results in

\begin{aligned} tr_1([[P,O],O]) = \sum_{i,j,t} \sum_{\epsilon\in[i,j]} P_{i,j}^{\epsilon,t} {(O_j^{t+1})}^2 - 2 O_i^t (P_{i,j}^{\epsilon,t} O_j^{t+1}) + (O_i^t)^2 P_{i,j}^{\epsilon,t}.\end{aligned}

Summing over $j$ and $\epsilon$ when $\langle O^t\rangle$ and $\langle (O^t)^2\rangle$ are constants results in

\begin{aligned} \text{1st Term + 2nd Term} = -\sum_{i,t} (O_i^t)^2,\end{aligned}

while summing $j$ and $\epsilon$ in the third term results in

\begin{aligned} \text{3rd Term} = \sum_{i,t} (O_i^t)^2 \end{aligned}

by definition of the transition 1-form. Consequently, when $\langle O^t\rangle$ and $\langle (O^t)^2\rangle$ are constants, it follows that

$tr_1([[P,O],O]) =0.$

Finally, this implies $[P,O] = 0$ if and only if $\langle O^t\rangle$ and $\langle (O^t)^2\rangle$ are constant in time.
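A concrete illustration (the example is mine, not from the post): on a state-time graph with two connected components that $P$ never mixes, an observable constant on each component commutes with $P$, and both $\langle O^t\rangle$ and $\langle (O^t)^2\rangle$ stay constant under the master equation.

```python
# A hedged sketch of the discrete Noether's theorem: states {0,1} and {2,3}
# form two components the transition 1-form never mixes, and O is constant
# on each component, so [P, O] = 0 and the expectations are conserved.
import numpy as np

P = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.4, 0.6, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.2, 0.8]])
O = np.array([1.0, 1.0, -2.0, -2.0])   # constant on each component

# [P, O] in components: (O_j - O_i) P_{i,j}; vanishes edge by edge here.
comm = (O[None, :] - O[:, None]) * P
assert np.allclose(comm, 0.0)

psi = np.array([0.3, 0.1, 0.4, 0.2])
e1, e2 = psi @ O, psi @ O ** 2        # <O^t> and <(O^t)^2> at t = 0
for _ in range(20):
    psi = psi @ P
    assert abs(psi @ O - e1) < 1e-12
    assert abs(psi @ O ** 2 - e2) < 1e-12
```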

Written by Eric

December 25, 2011 at 9:09 am

## Network Theory and Discrete Calculus – Quantized Conductance

This post is part of a series

### The Graph Operator

In my last post, I mentioned the graph operator

\begin{aligned} \mathbf{G} = \sum_{i,j} \sum_{\epsilon\in[i,j]} \mathbf{e}_\epsilon^{i,j}\end{aligned}

and the fact the exterior derivative of a discrete 0-form can be expressed as a commutator

\begin{aligned} dV = [\mathbf{G},V] = \sum_{i,j} (V_j - V_i) \mathbf{e}^{i,j}, \end{aligned}

where

\begin{aligned} \mathbf{e}^{i,j} = \sum_{\epsilon\in[i,j]} \mathbf{e}^{i,j}_\epsilon. \end{aligned}

I then let myself speculate that the graph conductance 1-form

\begin{aligned} G = \sum_{i,j} \sum_{\epsilon\in[i,j]} G_{i,j}^\epsilon \mathbf{e}^{i,j}_\epsilon \end{aligned}

could be nothing more than the graph operator. In this post, I hope to explain a bit more how that might work.

### Graph Conductance

Recall that the discrete Ohm’s Law

$[G,V] = I$

gives the total current

\begin{aligned} I = \sum_{i,j} \sum_{\epsilon\in[i,j]} I^\epsilon_{i,j} \mathbf{e}^{i,j}_{\epsilon} = \sum_{i,j} (V_j-V_i) \sum_{\epsilon\in[i,j]} G^\epsilon_{i,j} \mathbf{e}^{i,j}_{\epsilon}. \end{aligned}

If we did not need to probe the current in any one of the individual parallel directed edges, it would be tempting to replace them with a single effective directed edge representing the total current flowing through them, i.e.

\begin{aligned} \sum_{\epsilon\in[i,j]} I^\epsilon_{i,j} \mathbf{e}^{i,j}_\epsilon \implies I_{i,j} \mathbf{e}^{i,j} , \end{aligned}

where

\begin{aligned} I_{i,j} = \sum_{\epsilon\in[i,j]} I^\epsilon_{i,j}.\end{aligned}

In doing so, we could also replace the conductances with a single effective conductance

\begin{aligned} \sum_{\epsilon\in[i,j]} G^\epsilon_{i,j} \mathbf{e}^{i,j}_\epsilon \implies G_{i,j} \mathbf{e}^{i,j} , \end{aligned}

where

\begin{aligned} G_{i,j} = \sum_{\epsilon\in[i,j]} G^\epsilon_{i,j}.\end{aligned}
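As a quick sanity check (the numbers are made up), the effective quantities defined above still satisfy the discrete Ohm's Law, since both branch currents and branch conductances simply add:

```python
# A hedged sketch: parallel branch currents and conductances both add, and
# the effective quantities still satisfy I_{ij} = G_{ij} (V_j - V_i).
branch_G = [2.0, 3.0, 5.0]          # conductances of parallel edges i -> j
Vi, Vj = 1.5, 0.5
branch_I = [g * (Vj - Vi) for g in branch_G]

G_eff = sum(branch_G)               # effective conductance of the bundle
I_eff = sum(branch_I)               # total current through the bundle
assert abs(I_eff - G_eff * (Vj - Vi)) < 1e-12
```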

### Equivalence

Could it be that $\mathbf{G} = G$?

Let $P[i,j]$ denote a partition of the set $[i,j]$ of directed edges from node $i$ to node $j$ and express the graph operator as

\begin{aligned} \mathbf{G} = \sum_{i,j} \sum_{\epsilon\in[i,j]} \mathbf{e}_\epsilon^{i,j} = \sum_{i,j} \sum_{E\in P[i,j]} N^E_{i,j} \mathbf{e}_E^{i,j},\end{aligned}

where

\begin{aligned} N^E_{i,j} \mathbf{e}_E^{i,j} = \sum_{\epsilon\in E} \mathbf{e}^{i,j}_\epsilon \end{aligned}

and $N^E_{i,j}$ is the number of directed edges in the subset $E$. This would only make sense if we were not going to probe into any single directed edge within any element of the partition.

Comparing this to the conductance

\begin{aligned} G = \sum_{i,j} \sum_{\epsilon\in[i,j]} G_{i,j}^\epsilon \mathbf{e}_\epsilon^{i,j}\end{aligned}

we see that the graph conductance can be interpreted as the graph operator where each directed edge of our electrical network is actually composed of a number $G^\epsilon_{i,j}$ of fundamental directed edges, i.e. conductance simply counts the number of sub-paths within each directed edge.

### Thoughts

As before, thinking about this (as time allows) raises more questions than answers. For example, if the above makes any sense and is in any way related to nature, this would imply a fundamental unit of conductance and that conductance should be quantized, i.e. come in integer multiples of the fundamental unit. For completely unrelated (?) reasons, conductance is observed to be quantized due to the waveguide like nature of small, e.g. nano, wires and the fundamental unit of conductance is given by

$G_0 = \frac{2 e^2}{h},$

where $e$ is the electron charge and $h$ is Planck's constant.

This also makes me think of the geometric origin of inhomogeneous media. In vacuum, I would expect there to be just a single directed edge connecting any two nodes. Hence, I would expect $G^\epsilon_{i,j} = G_0$ in vacuum. In the presence of matter, e.g. components of an electrical network, there should be bunches of directed edges between any two nodes.

Written by Eric

December 17, 2011 at 10:24 pm

## Network Theory and Discrete Calculus – Electrical Networks

This post is part of a series

### Basic Equations

In Part 16 of John Baez’ series on Network Theory, he discussed electrical networks. On the day he published his article (November 4), I wrote down the following in my notebook

$G\circ dV = [G,V] = I$ and $\partial I = 0.$

The first equation is essentially the discrete calculus version of Ohm’s Law, where

\begin{aligned} G = \sum_{i,j} \sum_{\epsilon\in[i,j]} G_{i,j}^\epsilon \mathbf{e}^{i,j}_\epsilon \end{aligned}

is a discrete 1-form representing conductance,

\begin{aligned} V = \sum_i V_i \mathbf{e}^i \end{aligned}

is a discrete 0-form representing voltage, and

\begin{aligned} I = \sum_{i,j} \sum_{\epsilon\in[i,j]} I_{i,j}^\epsilon \mathbf{e}^{i,j}_\epsilon \end{aligned}

is a discrete 1-form representing current.

In components, this becomes

$G_{i,j}^\epsilon \left(V_j - V_i\right) = I^\epsilon_{i,j}.$

The second equation is a charge conservation law which simply says

$I_{*,i} = I_{i,*},$

where

\begin{aligned} I_{*,i} = \sum_j \sum_{\epsilon\in[j,i]} I^\epsilon_{j,i}\end{aligned}

is the sum of all currents into node $i$ and

\begin{aligned} I_{i,*} = \sum_j \sum_{\epsilon\in[i,j]} I^\epsilon_{i,j}\end{aligned}

is the sum of all currents out of node $i$. This is more general than it may first appear. The reason is that directed graphs are naturally about spacetime, so the currents here are more like 4-dimensional currents of special relativity. The equation

$\partial I = 0$

is related to the corresponding Maxwell’s equation

$d^\dagger j = 0,$

where $d^\dagger$ is the adjoint exterior derivative and $j$ is the 4-current 1-form

$j = j_x dx + j_y dy + j_z dz + \rho dt.$

This also implies the discrete Ohm’s Law appearing above is 4-dimensional and actually a bit more general than the usual Ohm’s Law.
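To see the two equations working together, here is a hedged end-to-end sketch (the network and solution method are my own construction, not from the post): fix voltages at two boundary nodes, solve the discrete Laplace equation at the interior nodes, and verify that the resulting currents satisfy $I_{*,i} = I_{i,*}$ there.

```python
# A hedged sketch: a 4-node network with directed conductances, solved via
# the weighted graph Laplacian, then checked against charge conservation.
import numpy as np

n = 4
G = np.zeros((n, n))                # G[i, j] = conductance of edge i -> j
for i, j, g in [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (1, 3, 0.5)]:
    G[i, j] = g

W = G + G.T                         # conductance is direction-symmetric here
L = np.diag(W.sum(axis=1)) - W      # weighted graph Laplacian

V = np.zeros(n)
V[0], V[3] = 1.0, 0.0               # boundary (source/sink) voltages
interior = [1, 2]
# Solve L[int, int] V_int = -L[int, bnd] V_bnd for the interior voltages.
A = L[np.ix_(interior, interior)]
b = -L[np.ix_(interior, [0, 3])] @ V[[0, 3]]
V[interior] = np.linalg.solve(A, b)

I = G * (V[None, :] - V[:, None])   # Ohm's law: I[i, j] = G[i, j](V_j - V_i)
for i in interior:                  # current in equals current out
    assert abs(I[:, i].sum() - I[i, :].sum()) < 1e-9
```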

### Some Thoughts

I’ve been thinking about this off and on since then as time allows, but questions seem to be growing exponentially.

For one, the equation

$[G,V] = GV - VG = I$

is curious because it implies that $[G,\cdot]$ is a derivation, i.e.

$[G,V_1 V_2] = [G,V_1] V_2 + V_1 [G, V_2].$

Further, though perhaps by pure coincidence, in my paper with Urs we introduced the graph operator

\begin{aligned} \mathbf{G} = \sum_{i,j} \sum_{\epsilon\in[i,j]} \mathbf{e}_\epsilon^{i,j}\end{aligned}

and showed, for any directed graph and any discrete 0-form $\phi$, that

$d\phi = [\mathbf{G},\phi].$

Is it possible that $G$ and $\mathbf{G}$ are related?

I think they are. This brings thoughts of spin networks and Penrose, but I’ll try to refrain from speculating too much beyond mentioning it.

If they were related, this would mean that the discrete Ohm’s Law above simplifies even further to

$dV = I$

and

$\partial d V = 0.$

In components, the above becomes

\begin{aligned} \sum_j \left(V_j - V_i\right) \left(N_{i,j} + N_{j,i} \right) = 0.\end{aligned}

This expresses an effective conductance in terms of the total number of directed edges connecting the two nodes in either direction, i.e.

$G^*_{i,j} = N_{i,j} + N_{j,i}.$

If the $G^\epsilon_{i,j}$'s appearing in the conductance 1-form $G$ are themselves effective conductances resulting from multiple more fundamental directed edges, then we do in fact have

$G = \mathbf{G}.$

Applications from here can go in any number of directions, so stay tuned!

Written by Eric

December 10, 2011 at 9:23 pm

## Network Theory and Discrete Calculus – Graph Divergence and Graph Laplacian

This post is part of a series

### Another Note on Notation

In a previous post, I introduced a slightly generalized notation in order to deal with directed graphs with multiple directed edges between any two nodes, e.g. parallel elements in electrical networks. However, the revised notation now makes some simpler calculations look more cumbersome. This is an example of what my adviser called the conservation of frustration. For example, the coboundary is now given by:

\begin{aligned} d\mathbf{e}^i = \sum_{j} \left( \sum_{\epsilon\in[j,i]}\mathbf{e}_\epsilon^{j,i} - \sum_{\epsilon\in[i,j]} \mathbf{e}_\epsilon^{i,j}\right).\end{aligned}

Applied to a general discrete 0-form, this becomes

\begin{aligned} d\phi = \sum_{i,j} {\left(\phi_j-\phi_i\right) \sum_{\epsilon\in[i,j]} \mathbf{e}_\epsilon^{i,j}} .\end{aligned}

To re-simplify the notation while maintaining the advantages of the new generalized notation, we can define

\begin{aligned} \mathbf{e}^{i,j} = \sum_{\epsilon\in[i,j]} \mathbf{e}_\epsilon^{i,j} \end{aligned}

and we’re back to

\begin{aligned} d\mathbf{e}^i = \sum_{j} \left(\mathbf{e}^{j,i} - \mathbf{e}^{i,j}\right)\end{aligned} and \begin{aligned} d\phi = \sum_{i,j} \left(\phi_j-\phi_i\right) \mathbf{e}^{i,j} \end{aligned}

as before. Furthermore, we have

$\partial\mathbf{e}^{i,j} = N_{i,j} \left(\mathbf{e}^j - \mathbf{e}^i\right),$

where $N_{i,j}$ is the number of directed edges from node $i$ to node $j$.
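As a sanity check, the coboundary and boundary above are easy to compute for a small made-up multigraph. Here is a Python sketch where the edge multiplicities $N_{i,j}$ are stored in a dictionary (the graph and the values of $\phi$ are hypothetical):

```python
from collections import defaultdict

# Multigraph as edge multiplicities: N[(i, j)] = number of directed edges i -> j.
N = {(0, 1): 2, (1, 2): 1, (2, 0): 1}

def coboundary(phi):
    """d(phi) in the collapsed basis e^{i,j}: coefficient (phi_j - phi_i)."""
    return {(i, j): phi[j] - phi[i] for (i, j) in N}

def boundary_edge(i, j):
    """Boundary of e^{i,j} = N_{i,j} (e^j - e^i), as node coefficients."""
    b = defaultdict(int)
    b[j] += N[(i, j)]
    b[i] -= N[(i, j)]
    return dict(b)

phi = {0: 1.0, 1: 3.0, 2: 0.0}
print(coboundary(phi))      # {(0, 1): 2.0, (1, 2): -3.0, (2, 0): 1.0}
print(boundary_edge(0, 1))  # {1: 2, 0: -2}
```

Note that in the collapsed notation the coefficient $(\phi_j - \phi_i)$ sits on $\mathbf{e}^{i,j}$, i.e. on all $N_{i,j}$ parallel edges at once.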

### Trace and Inner Products

Given a discrete 0-form $\phi$, we define its trace via

\begin{aligned} tr_0(\phi) = \sum_i \phi_i. \end{aligned}

Similarly, given a discrete 1-form $\alpha$, its trace is given by

\begin{aligned} tr_1(\alpha) = \sum_{i,j} {\sum_{\epsilon\in[i,j]} \alpha^\epsilon_{i,j}} .\end{aligned}

With the trace, we can define the inner product of discrete 0-forms via

$\langle \phi,\psi\rangle_0 = tr_0(\phi\psi)$

and the inner product of discrete 1-forms via

$\langle \alpha,\beta\rangle_1 = tr_1(\alpha\circ\beta),$

where $\alpha\circ\beta$ is the edge product.

### Graph Divergence

The graph divergence was introduced here as a boundary operator, but the relation to divergence was mentioned here.

With the inner products defined above, a simple calculation shows

$\langle \partial\alpha,\phi\rangle_0 = \langle \alpha, d\phi\rangle_1$

so the graph divergence is the adjoint of the coboundary.
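The adjointness is easy to verify numerically. Here is a sketch on a small made-up multigraph, with parallel edges listed separately (one coefficient per directed edge $\epsilon$) and random coefficients:

```python
import random

random.seed(1)
# Directed multigraph: repeated pairs are parallel edges (distinct epsilon).
edges = [(0, 1), (0, 1), (1, 2), (2, 0), (1, 0)]
nodes = {0, 1, 2}

phi   = {i: random.random() for i in nodes}   # discrete 0-form
alpha = [random.random() for _ in edges]      # discrete 1-form, one coefficient per edge

# Graph divergence (boundary): inflow minus outflow at each node.
div = {i: 0.0 for i in nodes}
for (i, j), a in zip(edges, alpha):
    div[j] += a
    div[i] -= a

lhs = sum(div[i] * phi[i] for i in nodes)                            # <da, phi>_0
rhs = sum(a * (phi[j] - phi[i]) for (i, j), a in zip(edges, alpha))  # <a, d phi>_1
assert abs(lhs - rhs) < 1e-12
```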

In algebraic topology, one typically has a coboundary operator acting on cochains and a boundary operator acting on chains. In discrete calculus, both $d$ and $\partial$ act on the same discrete forms.

### Graph Laplacian

The graph Laplacian of a discrete 0-form $\phi$ is given by

\begin{aligned} \partial d\phi = -\sum_{i,j} \left(\phi_j - \phi_i\right) \left(N_{i,j} + N_{j,i}\right) \mathbf{e}^i. \end{aligned}
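This closed form can be checked against a direct edge-by-edge computation of $\partial d\phi$; here is a small Python sketch (the graph and the values of $\phi$ are made up for illustration):

```python
# Multigraph edge multiplicities (hypothetical example).
N = {(0, 1): 2, (1, 2): 1, (2, 0): 1}
nodes = [0, 1, 2]
phi = {0: 1.0, 1: 3.0, 2: 0.0}

# Edge-by-edge: d(phi) puts (phi_j - phi_i) on each of the N_ij parallel
# edges; the boundary then sends each edge to e^j - e^i.
lap = {i: 0.0 for i in nodes}
for (i, j), n in N.items():
    for _ in range(n):
        c = phi[j] - phi[i]
        lap[j] += c
        lap[i] -= c

# Closed form: (dd phi)_i = -sum_j (phi_j - phi_i)(N_ij + N_ji).
closed = {i: -sum((phi[j] - phi[i]) * (N.get((i, j), 0) + N.get((j, i), 0))
                  for j in nodes)
          for i in nodes}
assert lap == closed
```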

More generally, we could define a graph Laplace-Beltrami operator

$d\partial + \partial d.$

### Graph Dirac Operator

The graph Dirac operator is essentially the “square root” of the graph Laplace-Beltrami operator. Since $d^2 = 0$ and $\partial^2 = 0$, we have

$d\partial + \partial d = (d+\partial)^2$

so a graph Dirac operator is given by

$d + \partial.$
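Restricted to degrees 0 and 1 (so that $d^2 = 0$ and $\partial^2 = 0$ hold trivially), this is easy to check in matrix form. A NumPy sketch on a made-up multigraph, taking $\partial = d^T$ in this basis, which is just the adjointness above:

```python
import numpy as np

# Hypothetical multigraph: repeated pairs are parallel directed edges.
edges = [(0, 1), (0, 1), (1, 2), (2, 0)]
n_nodes, n_edges = 3, len(edges)

# Matrix for d on 0-forms (one row per edge); the boundary on 1-forms is
# its transpose in this basis.
d = np.zeros((n_edges, n_nodes))
for e, (i, j) in enumerate(edges):
    d[e, j], d[e, i] = 1.0, -1.0
bdry = d.T

# Graph Dirac operator on the direct sum (0-forms + 1-forms).
D = np.block([[np.zeros((n_nodes, n_nodes)), bdry],
              [d, np.zeros((n_edges, n_edges))]])

# D^2 is block diagonal: the Laplacian on 0-forms, d(bdry) on 1-forms.
D2 = D @ D
assert np.allclose(D2[:n_nodes, :n_nodes], bdry @ d)
assert np.allclose(D2[n_nodes:, n_nodes:], d @ bdry)
assert np.allclose(D2[:n_nodes, n_nodes:], 0)  # off-diagonal blocks vanish
```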

Written by Eric

December 4, 2011 at 1:26 pm

## Network Theory and Discrete Calculus – Edge Algebra


This post is part of a series

In my last post, I noted that in following John Baez’ series, I’m finding the need to introduce operators that I haven’t previously used in any applications. In this post, I will introduce another. It turns out that we could get away without introducing this concept, but I think it helps motivate some things I will talk about later.

In all previous applications, the important algebra was a noncommutative graded differential algebra. The grading means that the degree of elements add when you multiply them together. For example, the product of two nodes (degree 0) is a node (degree 0+0), the product of a node (degree 0) and a directed edge (degree 1) is a directed edge (degree 0+1), and the product of a directed edge (degree 1) with another directed edge is a directed surface (degree 1+1).

Note the algebra of nodes is a commutative sub-algebra of the full noncommutative graded algebra.

There is another related commutative edge algebra with corresponding edge product.

The edge product is similar to the product of nodes in that it is a projection given by

$\mathbf{e}_\epsilon^{i,j} \circ \mathbf{e}_{\epsilon'}^{k,l} = \delta_{\epsilon,\epsilon'} \delta_{i,k} \delta_{j,l} \mathbf{e}_\epsilon^{i,j}.$

It is a projection because for an arbitrary discrete 1-form

\begin{aligned}\alpha = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j},\end{aligned}

we have

$\mathbf{e}_\epsilon^{i,j} \circ \alpha = \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j}$

and

$\mathbf{e}_\epsilon^{i,j} \circ \mathbf{e}_\epsilon^{i,j} = \mathbf{e}_\epsilon^{i,j}.$

The product of two discrete 1-forms is

\begin{aligned}\alpha\circ\beta = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \beta_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j}\end{aligned}.

I have not yet come across an application where the full edge algebra is needed. When the product does arise, one of the discrete 1-forms is usually the coboundary of a discrete 0-form, i.e.

$\alpha\circ d\phi.$

When this is the case, the edge product can be expressed as a (graded) commutator in the noncommutative graded algebra, i.e.

$\alpha\circ d\phi = [\alpha,\phi].$

An example of this will be seen when we examine electrical circuits.
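In the meantime, the identity $\alpha\circ d\phi = [\alpha,\phi]$ can be checked componentwise, using the bimodule structure $(\alpha\phi)^\epsilon_{i,j} = \alpha^\epsilon_{i,j}\phi_j$ and $(\phi\alpha)^\epsilon_{i,j} = \phi_i\alpha^\epsilon_{i,j}$. A quick sketch (graph and coefficients made up):

```python
import random

random.seed(2)
edges = [(0, 1), (0, 1), (1, 2), (2, 0)]   # repeated pair = parallel edges
phi = {0: 1.0, 1: 3.0, 2: 0.0}
alpha = [random.random() for _ in edges]

# Edge product: (alpha o d phi)^eps_{ij} = alpha^eps_{ij} (phi_j - phi_i).
edge_prod = [a * (phi[j] - phi[i]) for (i, j), a in zip(edges, alpha)]

# Commutator: (alpha phi - phi alpha)^eps_{ij} = alpha^eps_{ij} phi_j - phi_i alpha^eps_{ij}.
commutator = [a * phi[j] - phi[i] * a for (i, j), a in zip(edges, alpha)]

assert all(abs(x - y) < 1e-12 for x, y in zip(edge_prod, commutator))
```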

Written by Eric

November 20, 2011 at 12:21 pm

## Network Theory and Discrete Calculus – Notation Revisited

This post is part of a series

As stated in the Introduction to this series, one of my goals is to follow along with John Baez' series and reformulate things in the language of discrete calculus. Along the way, I'm coming across operations that I haven't used in any of my prior applications of discrete calculus to mathematical finance and field theories. For instance, in The Discrete Master Equation, I introduced a boundary operator

\begin{aligned} \partial \mathbf{e}^{i,j} = \mathbf{e}^j-\mathbf{e}^i.\end{aligned}

Although I hope the reason I call this a boundary operator is obvious, it would be more precise to call it something like a graph divergence. To see why, consider the boundary of an arbitrary discrete 1-form

\begin{aligned}\partial \alpha = \sum_{i,j} \alpha_{i,j} \left(\mathbf{e}^j - \mathbf{e}^i\right) = \sum_i \left[ \sum_j \left(\alpha_{j,i} - \alpha_{i,j}\right)\right] \mathbf{e}^i.\end{aligned}

A hint of sloppy notation has already crept in here, but we can see that the boundary of a discrete 1-form at a node $i$ is the sum of coefficients flowing into node $i$ minus the sum of coefficients flowing out of node $i$. This is what you would expect of a divergence operator, but divergence depends on a metric. This operator does not, hence it is topological in nature. It is tempting to call this a topological divergence, but I think graph divergence is a better choice for reasons to be seen later.

One reason the above notation is a bit sloppy is because in the summations, we should really keep track of what directed edges are actually present in the directed graph. Until now, simply setting

$\mathbf{e}^{i,j} = 0$

if there is no directed edge from node $i$ to node $j$ was sufficient. Not anymore.

Also, in the applications where I've used discrete calculus so far, there has always been only a single directed edge connecting any two nodes. When applying discrete calculus to electrical circuits, as John has started doing in his series, we obviously would like to consider elements that are in parallel.

I tend to get hung up on notation and have thought about the best way to deal with this. My solution is not perfect and I'm open to suggestions, but what I settled on is to introduce a summation not only over nodes, but also over the directed edges connecting those nodes. Here it is for an arbitrary discrete 1-form

\begin{aligned}\alpha = \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \mathbf{e}_\epsilon^{i,j},\end{aligned}

where $[i,j]$ is the set of all directed edges from node $i$ to node $j$. I'm not 100% enamored with it, but it is handy for performing calculations and doesn't make me think too much.

For example, with this new notation, the boundary operator is much clearer

\begin{aligned} \partial \alpha &= \sum_{i,j} \sum_{\epsilon\in [i,j]} \alpha_{i,j}^{\epsilon} \left(\mathbf{e}^{j}-\mathbf{e}^i\right) \\ &= \sum_i \left[\sum_j \left( \sum_{\epsilon\in[j,i]} \alpha_{j,i}^{\epsilon} - \sum_{\epsilon\in[i,j]} \alpha_{i,j}^{\epsilon} \right)\right]\mathbf{e}^i.\end{aligned}

As before, this says the graph divergence of $\alpha$ at the node $i$ is the sum of all coefficients flowing into node $i$ minus the sum of all coefficients flowing out of node $i$. Moreover, for any node $j$ there can be one or more (or zero) directed edges from $j$ into $i$.
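Computationally, the graph divergence is just a signed tally over the edge list; here is a minimal sketch (edges and coefficients are hypothetical, with parallel edges listed separately):

```python
from collections import defaultdict

# Parallel edges are listed separately, one coefficient per directed edge.
edges = [(0, 1), (0, 1), (1, 2), (2, 0), (1, 0)]
alpha = [2.0, 1.0, 4.0, 3.0, 5.0]

# (boundary of alpha)_i = (sum of coefficients into i) - (sum out of i)
div = defaultdict(float)
for (i, j), a in zip(edges, alpha):
    div[j] += a   # edge flows into j
    div[i] -= a   # edge flows out of i

print(dict(div))  # {1: -6.0, 0: 5.0, 2: 1.0}
```

As a consistency check, the coefficients always sum to zero: everything that flows out of one node flows into another.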

Written by Eric

November 19, 2011 at 11:27 pm