# Phorgy Phynance

## Discrete Black-Scholes Model

This post is part of a series

In the previous post, we found discrete geometric Brownian motion

$dS = \mu S dt + \sigma S dx$

has the closed-form solution

$\displaystyle S(i,j) = S(0,0) R^{\frac{j+i}{2}} L^{\frac{j-i}{2}}$

where

$R = 1+\mu\Delta t+\sigma\Delta x\quad\text{and}\quad L = 1+\mu\Delta t-\sigma\Delta x$

In this post, I revisit some results I presented back in 2004.

### Risk-Free Bond Price

The price of a risk-free bond will be modeled as

$dB = r B dt$

which has the closed-form solution

$\boxed{B(i,j) = B(0,0) (1+r\Delta t)^j.}$

### Stock Price

The price of a stock will be modeled as a geometric Brownian motion

$dS = \mu S dt + \sigma S dx.$

However, we will be working in a risk-neutral measure meaning the drift will be the risk-free rate so that

$dS = r S dt + \sigma S d\tilde x,$

where

$\displaystyle d\tilde x = dx - \frac{\mu-r}{\sigma} dt.$

In what follows, we assume risk-neutral measure and write simply

$dS = r S dt + \sigma S dx$

with the tilde dropped, so

$\displaystyle \boxed{S(i,j) = S(0,0) (1+r\Delta t+\sigma\Delta x)^{\frac{j+i}{2}} (1+r\Delta t-\sigma\Delta x)^{\frac{j-i}{2}}.}$
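As a quick illustration (my own sketch, not from the original post; the parameter values are arbitrary), the boxed closed form can be evaluated directly at any lattice node:

```python
# Evaluate the closed-form lattice solution
#   S(i, j) = S(0,0) * R^((j+i)/2) * L^((j-i)/2)
# with R = 1 + r*dt + sigma*dx and L = 1 + r*dt - sigma*dx,
# using the binary-tree convention dt = dx**2 from this series.

def stock_price(i, j, S0=100.0, r=0.05, sigma=0.2, dx=0.01):
    """Risk-neutral stock price at node (i, j) of the binomial lattice."""
    dt = dx ** 2
    R = 1 + r * dt + sigma * dx   # up factor
    L = 1 + r * dt - sigma * dx   # down factor
    assert (i + j) % 2 == 0, "reachable nodes have i and j of equal parity"
    return S0 * R ** ((j + i) // 2) * L ** ((j - i) // 2)

print(stock_price(0, 0))   # 100.0 at the root
print(stock_price(1, 1))   # one up-move: 100 * R
```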

### Self Financing

A portfolio $\Pi$ can be constructed consisting of an option $V$, a stock $S$, and a risk-free bond $B$, i.e.

$\displaystyle \Pi = \alpha V + \delta S + \beta B,$

where $\alpha,\delta,\beta$ are the units held in the respective securities.

The Black-Scholes model assumes the portfolio is self-financing, i.e. to increase the units of one security, another security needs to be sold, so that no money flows into or out of the portfolio. A self-financing portfolio has a simple expression in discrete stochastic calculus

$(d\alpha) V + (d\delta) S + (d\beta) B = 0,$

i.e. the total change in value of the portfolio due to trading activity is zero.

Consequently, if $\Pi$ is self-financing, then

$\boxed{d\Pi = \alpha dV + \delta dS + \beta dB.}$

### Hedging Strategy

The final ingredient of the Black-Scholes model is that there is a no-arbitrage hedging strategy $\delta$ that eliminates the “risk” of the portfolio so that

$d\Pi = r\Pi dt.$

For the risky $dx$ component of $d\Pi$ to vanish, the $\mathbf{e}^+$ component must equal the $\mathbf{e}^-$ component.

Expanding$d\Pi$and comparing components gives the hedging strategy

$\displaystyle \boxed{\delta(j) = -\alpha(j) \frac{V(i+1,j+1)-V(i-1,j+1)}{S(i+1,j+1)-S(i-1,j+1)}.}$

### Discrete Black-Scholes Equation

Inserting the hedging strategy above into

$d\Pi = r\Pi dt = \alpha dV + \delta dS + \beta dB$

and expanding results in the discrete Black-Scholes equation

$\displaystyle \boxed{V(i,j) = \frac{1}{1+r\Delta t}\left[V(i+1,j+1) p^+(j) + V(i-1,j+1) p^-(j)\right],}$

where

$\displaystyle p^+(j) = \frac{(1+r\Delta t) S(i,j) - S(i-1,j+1)}{S(i+1,j+1) - S(i-1,j+1)}$

and$p^-(j) = 1-p^+(j).$

The discrete Black-Scholes equation is essentially the Cox-Ross-Rubinstein model.

### Simplifications

Since we have the closed-form expression for $S$, several simplifications result. For instance, we have

$S(i+1,j+1) - S(i-1,j+1) = 2\sigma\Delta x S(i,j)$

and

$(1+r\Delta t) S(i,j) - S(i-1,j+1) = \sigma\Delta x S(i,j)$

so that the hedging strategy simplifies to

$\displaystyle \boxed{\delta(j) = -\alpha(j)\frac{V(i+1,j+1) - V(i-1,j+1)}{2\sigma\Delta x S(i,j)}}$

and $p^+(j) = p^-(j) = 1/2$, so that

$\displaystyle \boxed{V(i,j) = \frac{V(i+1,j+1)+V(i-1,j+1)}{2(1+r\Delta t)}.}$
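To see the simplified recursion in action, here is a minimal backward-induction pricer (a sketch, not code from the original post; the European call payoff, the maturity, and the parameter values are my own choices):

```python
# Price a European call on the binomial lattice using the simplified
# discrete Black-Scholes recursion
#   V(i,j) = [V(i+1,j+1) + V(i-1,j+1)] / (2 * (1 + r*dt)),
# with dt = dx**2 and risk-neutral factors R, L as in this series.

def price_european_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=0.25, n=1000):
    dx = (T / n) ** 0.5              # n steps of size dt = dx**2 cover [0, T]
    dt = dx ** 2
    R = 1 + r * dt + sigma * dx      # up factor
    L = 1 + r * dt - sigma * dx      # down factor
    # terminal stock prices S(i, n) for i = -n, -n+2, ..., n
    S = [S0 * R ** ((n + i) // 2) * L ** ((n - i) // 2)
         for i in range(-n, n + 1, 2)]
    V = [max(s - K, 0.0) for s in S]           # call payoff at expiry
    for _ in range(n):                          # roll back to node (0, 0)
        V = [(V[k] + V[k + 1]) / (2 * (1 + r * dt)) for k in range(len(V) - 1)]
    return V[0]

price = price_european_call()
print(round(price, 2))   # close to the continuum Black-Scholes value here
```

Because both risk-neutral probabilities are 1/2, the hedging ratio never enters the rollback; it is only needed if one actually wants to replicate the option.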

Written by Eric

December 6, 2018 at 12:41 pm

## Discrete Stochastic Differential Equations


This post is part of a series

In this post, I begin looking at discrete stochastic differential equations (DSDEs).

Recall that in the continuum, stochastic differential equations (SDEs) are of the form

$dS = \mu(t,S) dt + \sigma(t,S) dx.$

When transcribing continuum models to the discrete setting, the first challenge is to determine how to write down the DSDEs. Due to the noncommutative nature of discrete stochastic calculus,

$dS = dt \mu(t,S) + dx \sigma(t,S)$

with 0-forms on the right is a different model than above with 0-forms on the left. In fact, we could consider linear combinations such as

$dS=\kappa\left[\mu(t,S)dt+\sigma(t,S) dx\right]+(1-\kappa)\left[dt\mu(t,S)+dx\sigma(t,S)\right].$

Consider the continuum special case of geometric Brownian motion with constant coefficients

$dS = \mu S dt + \sigma S dx$

with known solution

$\displaystyle S(t,x) = S(0,0) \exp\left[\left(\mu - \frac{\sigma^2}{2}\right) t + \sigma x\right].$

With the above solution and switching to the context of noncommutative stochastic calculus, the differential expressed in left components is given by

\begin{aligned} dS&=\left(\partial_t S+\frac{1}{2}\partial_{xx}S\right)dt+\left(\partial_x S\right)dx\\&=\mu S dt+\sigma S dx\end{aligned}

as expected. In contrast, expressing the model in right-component form results in

\begin{aligned} dS&=dt\left(\partial_t S-\frac{1}{2}\partial_{xx}S\right)+dx\left(\partial_x S\right)\\&=dt\,(\mu-\sigma^2)S+dx\,\sigma S,\end{aligned}

which, although correct and equivalent to the above, is not the way we would expect the continuum model to be transcribed. With this as our motivation, we will take the left-component form

$dS = \mu(t,S) dt + \sigma(t,S) dx$

to be the way we transcribe continuum models.

## Discrete Geometric Brownian Motion

To solve discrete geometric Brownian motion, first express it in terms of the bases $\mathbf{e}^+$ and $\mathbf{e}^-$:

\begin{aligned}dS &= \mu S dt + \sigma S dx\\&=\mu S\Delta t(\mathbf{e}^++\mathbf{e}^-) + \sigma S\Delta x(\mathbf{e}^+-\mathbf{e}^-)\\&=(\mu\Delta t + \sigma\Delta x) S\mathbf{e}^+ + (\mu\Delta t - \sigma \Delta x)S\mathbf{e}^-\end{aligned}.

The update expressions can be written immediately as

$S(i+1,j+1) = \left[1+\mu\Delta t(j)+\sigma\Delta x(j)\right] S(i,j)$

and

$S(i-1,j+1) = \left[1+\mu\Delta t(j)-\sigma\Delta x(j)\right] S(i,j)$

so that the solution at any node can be expressed as

$\displaystyle \boxed{S(i,j) = S(0,0)R^{\frac{j+i}{2}} L^{\frac{j-i}{2}}}$

where

$R=\left[1+\mu\Delta t(j)+\sigma\Delta x(j)\right]$

and

$L=\left[1+\mu\Delta t(j)-\sigma\Delta x(j)\right].$

The continuum solution can be written in a similar form as

$S(i,j) = S(0,0) R_c^{\frac{j+i}{2}} L_c^{\frac{j-i}{2}},$

where

\begin{aligned} R_c &= \exp\left[\left(\mu-\frac{\sigma^2}{2}\right)\Delta t + \sigma\Delta x\right]\\&= 1+\left(\mu-\frac{\sigma^2}{2}\right)\Delta t + \sigma\Delta x + \frac{1}{2}\left[\sigma^2(\Delta x)^2+O\left((\Delta x)^3\right)\right]\\&=R + O((\Delta x)^3)\end{aligned}

and

\begin{aligned} L_c &= \exp\left[\left(\mu-\frac{\sigma^2}{2}\right)\Delta t - \sigma\Delta x\right]\\&= 1+\left(\mu-\frac{\sigma^2}{2}\right)\Delta t - \sigma\Delta x + \frac{1}{2}\left[\sigma^2(\Delta x)^2+O\left((\Delta x)^3\right)\right]\\&=L + O((\Delta x)^3).\end{aligned}
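As a sanity check (mine, with arbitrary constant coefficients), the closed form can be compared against directly iterating the update expressions; only the number of up- and down-moves matters, not their order:

```python
# Iterate S(i±1, j+1) = (1 + mu*dt ± sigma*dx) S(i, j) along a path and
# compare against the closed form S(0,0) * R^u * L^d, where u and d are
# the number of up- and down-moves (so j = u + d and i = u - d).

mu, sigma, dx = 0.08, 0.25, 0.02
dt = dx ** 2
R = 1 + mu * dt + sigma * dx
L = 1 + mu * dt - sigma * dx

path = [+1, -1, -1, +1, +1]        # an arbitrary sequence of moves
S = 1.0
for move in path:
    S *= R if move > 0 else L

u, d = path.count(+1), path.count(-1)
assert abs(S - 1.0 * R ** u * L ** d) < 1e-12
print(S)
```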

## Discrete Ornstein–Uhlenbeck Process

Consider the discrete Ornstein-Uhlenbeck process

\begin{aligned} dS &= \theta (\mu - S) dt + \sigma dx \\&=\left[\theta(\mu-S)\Delta t+\sigma\Delta x\right]\mathbf{e}^+ +\left[\theta(\mu-S)\Delta t-\sigma\Delta x\right]\mathbf{e}^-\end{aligned}

with update expressions

$S(i+1,j+1) = S(i,j) \left[1-\theta\Delta t(j)\right]+\theta\mu\Delta t(j)+\sigma\Delta x(j)$

and

$S(i-1,j+1) = S(i,j) \left[1-\theta\Delta t(j)\right]+\theta\mu\Delta t(j)-\sigma\Delta x(j).$

Let

$R_j(S(i,j)) = S(i,j)\left[1-\theta\Delta t(j)\right]+\theta\mu\Delta t(j)+\sigma\Delta x(j)$

and

$L_j(S(i,j)) = S(i,j)\left[1-\theta\Delta t(j)\right]+\theta\mu\Delta t(j)-\sigma\Delta x(j).$

For consistency, we require

$L_{j+1} \circ R_j = R_{j+1} \circ L_j.$

Imposing this consistency condition gives constraints on the grid spacing, i.e.

$\displaystyle \Delta x(j+1) = \frac{-1+\sqrt{1+4\theta\left[\Delta x(j)\right]^2}}{2\theta\Delta x(j)}$

implying dynamic grid spacing.
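A few iterations of this recursion (my own sketch; $\theta$ and the initial spacing are arbitrary choices) show the spacing contracting slowly, consistent with the small-$\Delta x$ expansion $\Delta x(j+1) \approx \Delta x(j)\left[1-\theta(\Delta x(j))^2\right]$:

```python
# Iterate the grid-spacing constraint for the discrete OU process:
#   dx(j+1) = (-1 + sqrt(1 + 4*theta*dx(j)**2)) / (2*theta*dx(j))

from math import sqrt

theta = 0.5
dx = [0.1]                                    # arbitrary initial spacing dx(0)
for j in range(5):
    d = dx[-1]
    dx.append((-1.0 + sqrt(1.0 + 4.0 * theta * d ** 2)) / (2.0 * theta * d))

print(dx)   # slowly decreasing, positive spacings
```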

## General Strategy

Armed with the above examples, we can develop a general strategy for solving discrete stochastic differential equations defined by the general process

\begin{aligned} dS&=\mu(t,S) dt+\sigma(t,S) dx\\&=\left[\mu(t,S)\Delta t+\sigma(t,S)\Delta x\right]\mathbf{e}^+ +\left[\mu(t,S)\Delta t-\sigma(t,S)\Delta x\right]\mathbf{e}^-.\end{aligned}

Write down the update expressions simply as

\begin{aligned} S(i+1,j+1)&=R_j(S(i,j))\\&=S(i,j)+\mu\left[j,S(i,j)\right]\Delta t(j)+\sigma\left[j,S(i,j)\right]\Delta x(j)\end{aligned}

and

\begin{aligned} S(i-1,j+1)&=L_j(S(i,j))\\&=S(i,j)+\mu\left[j,S(i,j)\right]\Delta t(j)-\sigma\left[j,S(i,j)\right]\Delta x(j).\end{aligned}

Finally, impose the consistency condition

$L_{j+1} \circ R_j = R_{j+1} \circ L_j.$

Written by Eric

December 5, 2018 at 6:11 pm

## Dynamic Grids

This post is part of a series

In the previous post of this series, I defined discrete stochastic calculus as the discrete calculus on a binary tree with $\Delta t = (\Delta x)^2$ so that the coordinate basis 1-forms $dx$ and $dt$ satisfy the commutation relations

• $[dx,x] = dt$
• $[dt,t] = \Delta t dt$
• $[dx,t] = [dt,x] = \Delta t dx.$

A major assumption there was that the grid spacings $\Delta t$ and $\Delta x$ are constants. It turns out that to solve some discrete stochastic differential equations, we need to relax this assumption. In this post, I generalize discrete stochastic calculus to dynamic grids.

Recall the binary tree introduced earlier in this series.

We define the coordinates $x$ and $t$ as discrete 0-forms

$\displaystyle x = \sum_{(i,j)} x(i,j) \mathbf{e}^{(i,j)}\quad\text{and}\quad t = \sum_{(i,j)} t(i,j) \mathbf{e}^{(i,j)},$

where we set

$x(i\pm1,j+1) = x(i,j) \pm \Delta x(j)$

and

$t(i\pm1,j+1) = t(i,j) + \Delta t(j),$

i.e. the grid spacings are fixed for a given time, but are allowed to change in time. Therefore,

$dx = \Delta x (\mathbf{e}^+ - \mathbf{e}^-)$

$dt = \Delta t (\mathbf{e}^+ + \mathbf{e}^-)$

where

$\displaystyle \Delta x = \sum_{(i,j)} \Delta x(j) \mathbf{e}^{(i,j)}\quad\text{and}\quad\Delta t = \sum_{(i,j)} \Delta t(j) \mathbf{e}^{(i,j)}.$

Since $\Delta x$ and $\Delta t$ are no longer constant 0-forms, they do not commute with $\mathbf{e}^+$ and $\mathbf{e}^-.$

Straightforward calculations show the generalized coordinates satisfy the commutation relations

$[\mathbf{e}^\pm,x] = \pm \Delta x\mathbf{e}^\pm\quad\text{and}\quad[\mathbf{e}^\pm,t] = \Delta t\mathbf{e}^\pm.$

From there, it is a simple exercise to demonstrate that

$[dx,x] = (\Delta t)^{-1} (\Delta x)^2 dt$

$[dx,t] = [dt,x] = \Delta t dx$

$[dt,t] = \Delta t dt,$

where

$\displaystyle (\Delta t)^{-1} = \sum_{(i,j)} \frac{1}{\Delta t(j)} \mathbf{e}^{(i,j)}.$

Setting

$\Delta t = (\Delta x)^2$

results in the commutation relations

$[dx,x] = dt$

$[dx,t] = [dt,x] = \Delta t dx$

$[dt,t] = \Delta t dt.$

In the continuum limit$\Delta t \to 0$, these reduce to

$[dx,x] = dt$

$[dx,t] = [dt,x] = 0$

$[dt,t] = 0$

giving the noncommutative stochastic calculus.

When the grid spacings $\Delta x(j)$ and $\Delta t(j)$ are constant, the above reduces to the previously considered case.

In subsequent posts, I will adopt this more general definition of discrete stochastic calculus on dynamic grids.

Written by Eric

December 4, 2018 at 9:42 pm

## Wave Equation

In the finite-difference method, there is a fairly remarkable property of the (1+1)-dimensional wave equation

$\partial_t^2 \phi - c^2 \partial_x^2\phi = 0.$

Replacing $\partial_t^2$ and $\partial_x^2$ with central-difference approximations, the value at node $(i,j+1)$ may be computed as

$\displaystyle \boxed{\phi(i,j+1) = \kappa\phi(i+1,j) + \kappa\phi(i-1,j) + 2(1-\kappa)\phi(i,j) - \phi(i,j-1)}$

where $\phi(i,j)$ is notation for $\phi(i\Delta x,j\Delta t)$ and $\kappa = \frac{(c\Delta t)^2}{(\Delta x)^2}.$

The stencil for updating $\phi(i,j+1)$ is illustrated below.

However, when

$\Delta t = \frac{1}{c}\Delta x$

then $\kappa=1$ and the $\phi(i,j)$ term drops out, so the update expression reduces to

$\displaystyle \boxed{\phi(i,j+1) = \phi(i+1,j) - \phi(i,j-1) + \phi(i-1,j)}$

and does not depend on any physical constants.

This time step is known as the “magic time step” because the above represents an exact solution to the (1+1)-dimensional wave equation, i.e. for given initial conditions, the wave will propagate exactly to within machine precision.

The stencil for the (1+1)-dimensional wave equation when using the magic time steps reduces to the following:

This stencil is more reminiscent of a binary tree.
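The exactness of the magic time step is easy to demonstrate numerically. The following sketch (my own; it assumes unit wave speed so the magic step gives $\Delta t = \Delta x$, and a right-moving Gaussian pulse chosen for illustration) propagates a wave with the reduced update and checks that the result is a pure shift of the initial profile:

```python
# With the magic time step, a right-moving profile f satisfies
# phi(i, j) = f(i - j) exactly under the update
#   phi(i, j+1) = phi(i+1, j) - phi(i, j-1) + phi(i-1, j).

import math

n = 200
f = [math.exp(-((i - 50) / 5.0) ** 2) for i in range(n)]   # initial pulse

prev = [f[min(i + 1, n - 1)] for i in range(n)]   # phi(i, -1) = f(i + 1)
curr = f[:]                                       # phi(i,  0) = f(i)
steps = 80
for _ in range(steps):
    nxt = [0.0] * n                               # boundaries held at zero
    for i in range(1, n - 1):
        nxt[i] = curr[i + 1] - prev[i] + curr[i - 1]
    prev, curr = curr, nxt

# exact solution: the initial pulse shifted right by `steps` nodes
err = max(abs(curr[i] - f[i - steps]) for i in range(steps + 1, n - 1))
print(err)   # zero up to floating point
```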

## Heat Equation

Next consider a finite-difference approximation to the (1+1)-dimensional heat equation

$\partial_t\phi - \frac{\sigma^2}{2} \partial_x^2\phi = 0$

using a forward-difference approximation for $\partial_t$ and a central-difference approximation for $\partial_x^2.$ The resulting update equation is given by

$\displaystyle \boxed{\phi(i,j+1) = \frac{\kappa}{2} \phi(i+1,j) + \frac{\kappa}{2} \phi(i-1,j) + (1-\kappa)\phi(i,j)}$

where $\kappa = \frac{\sigma^2\Delta t}{(\Delta x)^2}.$

The stencil for this update expression is given by

Similar to the wave equation, when setting

$\Delta t = \frac{1}{\sigma^2} (\Delta x)^2$

then $\kappa=1$ and the $\phi(i,j)$ term drops out, so the update expression reduces to

$\displaystyle \boxed{\phi(i,j+1) = \frac{1}{2}\left[\phi(i+1,j) + \phi(i-1,j)\right]}$

and does not depend on any physical constants. The stencil also reduces to

This stencil is reminiscent of a binary tree.

The above time step for the heat equation is also “magic”.

The reduced update expression has a closed-form (Green's function) solution given by the normalized binomial coefficient, i.e. if $\phi(0,0) = 1$ then

$\displaystyle \phi(i,j) = \frac{1}{2^{j}} {j\choose \frac{j+i}{2}}$

for $j+i$ even, and $\phi(i,j) = 0$ otherwise.
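This is easy to verify by iteration (my own check; note the convention that $i$ runs over $-j,\dots,j$ in steps of two, so the number of up-moves at node $(i,j)$ is $(j+i)/2$):

```python
# Iterate phi(i, j+1) = [phi(i+1, j) + phi(i-1, j)] / 2 from a unit impulse
# and compare with the binomial closed form: phi(i, j) = C(j, (j+i)/2) / 2**j
# on reachable nodes (i + j even), and 0 elsewhere.

from math import comb

n = 10                      # number of time steps
center = n + 1              # array index of i = 0 (padding avoids the edges)
phi = [0.0] * (2 * n + 3)
phi[center] = 1.0           # phi(0, 0) = 1
for j in range(n):
    nxt = [0.0] * len(phi)
    for k in range(1, len(phi) - 1):
        nxt[k] = 0.5 * (phi[k + 1] + phi[k - 1])
    phi = nxt

for i in range(-n, n + 1):
    expected = comb(n, (n + i) // 2) / 2 ** n if (n + i) % 2 == 0 else 0.0
    assert abs(phi[center + i] - expected) < 1e-12

print(phi[center])   # C(10, 5) / 2**10 = 0.24609375
```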

Written by Eric

November 24, 2018 at 1:22 am

## Noncommutative Geometry

In this section, I briefly review some elementary concepts of noncommutative geometry on commutative algebras, primarily to collect some notation to be used later on. For a deeper look into the topic, I highly recommend the largely self-contained review articles.

Consider an (n+1)-dimensional differential graded algebra

$\displaystyle \Omega = \bigoplus_{i=0}^n \Omega^i,\quad d:\Omega^i\to\Omega^{i+1}$

over a commutative algebra of 0-forms $\Omega^0$ with basis 1-forms $d x^0, \dots, d x^n\in\Omega^1.$ The core concept here lies in the fact that 0-forms and 1-forms do not commute, so any 1-form can be written equivalently in either left- or right-component form

$\displaystyle \alpha = \overleftarrow{\alpha_\mu} d x^\mu = d x^\mu \overrightarrow{\alpha_\mu}$

respectively. In particular, the differential of a 0-form can be written as

$\displaystyle d f = \left(\overleftarrow{\partial_\mu} f\right) d x^\mu = d x^\mu \left(\overrightarrow{\partial_\mu} f\right),$

where $\overleftarrow{\partial_\mu}: \Omega^0 \to \Omega^0$ and $\overrightarrow{\partial_\mu}: \Omega^0 \to \Omega^0$ are not necessarily partial derivatives.

The basis 1-forms satisfy the commutation relations

$\displaystyle [d x^\mu,x^\nu] = \overleftarrow{C^{\mu\nu}_{\lambda}} d x^\lambda = d x^\lambda\overrightarrow{C^{\mu\nu}_{\lambda}}.$

In this post, I assume $\overleftarrow{C^{\mu\nu}_{\lambda}}$ and $\overrightarrow{C^{\mu\nu}_{\lambda}}$ are constant 0-forms so that $\overleftarrow{C^{\mu\nu}_{\lambda}} = \overrightarrow{C^{\mu\nu}_{\lambda}}$, and write both simply as $C^{\mu\nu}_{\lambda}.$ Note, however, that constant coefficients are not a restriction of the general formalism.

Since $[d x^\mu, x^\nu] = [d x^\nu, x^\mu]$, it follows that

$\displaystyle C^{\mu\nu}_{\lambda} = C^{\nu\mu}_{\lambda},$

i.e. $C^{\mu\nu}_{\lambda}$ is symmetric in the upper indices.

More generally,

$\left[d f,g\right] = \left[d g,f\right] = C^{\kappa\lambda}_{\mu}\left(\overleftarrow{\partial_\kappa} f\right)\left(\overleftarrow{\partial_\lambda} g\right) d x^\mu = d x^\mu C^{\kappa\lambda}_{\mu}\left(\overrightarrow{\partial_\kappa} f\right)\left(\overrightarrow{\partial_\lambda} g\right)$

so that the product rule results in

\begin{aligned} d(fg) &= (d f)g + f(d g) \\ &= g(d f) + f(d g) + [d f,g] \\ &= \left[ \left(\overleftarrow{\partial_\mu} f\right) g + f\left(\overleftarrow{\partial_\mu} g\right) + C^{\kappa\lambda}_{\mu} \left(\overleftarrow{\partial_\lambda} f\right)\left(\overleftarrow{\partial_\kappa} g\right) \right] d x^\mu \end{aligned}

and

$\displaystyle \overleftarrow{\partial_\mu}\left(f g\right) = \left(\overleftarrow{\partial_\mu} f\right) g + f\left(\overleftarrow{\partial_\mu} g\right) + C^{\kappa\lambda}_{\mu} \left(\overleftarrow{\partial_\lambda} f\right)\left(\overleftarrow{\partial_\kappa} g\right) .$

Similarly,

$\displaystyle \overrightarrow{\partial_\mu}\left(f g\right) = \left(\overrightarrow{\partial_\mu} f\right) g + f\left(\overrightarrow{\partial_\mu} g\right) - C^{\kappa\lambda}_{\mu}\left(\overrightarrow{\partial_\lambda} f\right)\left(\overrightarrow{\partial_\kappa} g\right) .$

Everything above this point holds for any commutative associative algebra $\Omega^0$ with no assumption of smoothness. However, if $\Omega^0$ consists of smooth 0-forms on $\mathbb{R}^{n+1}$, the left- and right-partial derivatives can be expressed as

$\displaystyle \overleftarrow{\partial_\mu} = \partial_\mu + \sum_{r=2}^{\infty} \frac{1}{r!} C^{i_1i_2}_{j_1}C^{i_3j_1}_{j_2}\dots C^{i_rj_{r-2}}_{\mu} \partial_{i_1}\dots\partial_{i_r}$

and

$\displaystyle \overrightarrow{\partial_\mu} = \partial_\mu + \sum_{r=2}^{\infty} \frac{(-1)^{r-1}}{r!} C^{i_1i_2}_{j_1}C^{i_3j_1}_{j_2}\dots C^{i_rj_{r-2}}_{\mu} \partial_{i_1}\dots\partial_{i_r}.$

## Stochastic Calculus

In an earlier paper, a reformulation of stochastic calculus in terms of noncommutative geometry was presented, where $\Omega^0$ consists of smooth 0-forms on $\mathbb{R}^2.$ In this (1+1)-dimensional case, we set $x^0 = t$ and $x^1 = x$ with commutation relations

$\displaystyle [dx,t] = [dt,x] = [dt,t] = 0$ and $\displaystyle [dx,x] = \sigma^2 dt.$

In other words

$C^{xx}_t = \sigma^2$

and all other coefficients vanish. In this case, we have

$\overleftarrow{\partial_x} = \overrightarrow{\partial_x} = \partial_x,$

where $\partial_x$ is the partial derivative with respect to $x$, and

$\overleftarrow{\partial_t} = \partial_t + \frac{\sigma^2}{2} \partial^2_x,$

where $\partial_t$ is the partial derivative with respect to $t$, so that

\begin{aligned} df &= \left(\partial_t f + \frac{\sigma^2}{2}\partial^2_x f\right) dt + \left(\partial_x f\right) dx \\ &= dt\left(\partial_t f + \frac{\sigma^2}{2}\partial^2_x f\right) + dx\left(\partial_x f\right) - [dx,\partial_x f] \\ &= dt\left(\partial_t f + \frac{\sigma^2}{2}\partial^2_x f\right) + dx\left(\partial_x f\right) - \sigma^2 dt\left(\partial^2_x f\right) \\ &= dt\left(\partial_t f - \frac{\sigma^2}{2}\partial^2_x f\right) + dx\left(\partial_x f\right). \end{aligned}

and we see that

$\overrightarrow{\partial_t} = \partial_t - \frac{\sigma^2}{2} \partial^2_x.$

The Ito formula of stochastic calculus emerges naturally from the left components of the differential $df$, and the heat equation emerges when $df$ is expressed in right components.

## Burgers Equation

In an earlier paper, the (1+1)-dimensional Burgers equation was derived from noncommutative geometry as a zero-curvature condition on $\mathbb{R}^2$. In a previous post, I corrected a minor sign error in that article and indicated a relation to the Navier-Stokes equation. A simplified derivation is reproduced here.

In this section, let

$\displaystyle [dx,t] = [dt,x] = [dt,t] = 0$ and $\displaystyle [dx,x] = \eta dt,$

and

$A = dx \left(k u\right)$

denote a connection 1-form, where $k$ is a constant and $u$ is a 0-form, so that

\begin{aligned} dA &= -dx \left(kdu\right) \\ &= dt dx \left[k \left(\partial_t u - \frac{\eta}{2} \partial^2_x u\right)\right] \end{aligned}

and

\begin{aligned} AA &= k^2 dx\left(u dx\right) u \\ &= k^2 dx\left(dx u - [dx, u]\right) u \\ &= dt dx \left[k^2\eta\left(\partial_x u\right) u\right]. \end{aligned}

Putting the above together and setting$k = \frac{1}{\eta}$results in the curvature 2-form

$F = dA + AA = dt dx \frac{1}{\eta} \left[ \partial_t u - \frac{\eta}{2} \partial^2_x u + \left(\partial_x u\right) u \right].$

If the curvature 2-form vanishes, we have

$\boxed{\partial_t u - \frac{\eta}{2} \partial^2_x u + \left(\partial_x u\right) u = 0}$

which is the (1+1)-dimensional Burgers equation.

Furthermore, if the curvature 2-form vanishes, it implies we can write the connection 1-form as a pure gauge

$A = \phi^{-1} d\phi = -\left(d\phi^{-1}\right) \phi = -\left(d\psi\right)\psi^{-1},$

where $\psi = \phi^{-1}$. Expanding the differential results in

$A = -\left[dt \left(\partial_t\psi - \frac{\eta}{2}\partial^2_x\psi\right) + dx \left(\partial_x\psi\right)\right] \psi^{-1}$

which implies

$\boxed{u = -\eta\left(\partial_x\psi\right)\psi^{-1}\quad\text{and}\quad \partial_t \psi - \frac{\eta}{2}\partial^2_x\psi = 0.}$

The above relations are referred to as the Cole-Hopf transformation.
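The Cole-Hopf relations can be sanity-checked numerically. In this sketch (mine, standard library only; the particular heat-equation solution $\psi$ and the evaluation point are arbitrary choices), the transformed field $u$ is plugged into Burgers equation and the residual is estimated with central finite differences:

```python
# Take psi solving the heat equation  d_t psi = (eta/2) d_xx psi, set
# u = -eta * (d_x psi) / psi (Cole-Hopf), and check the Burgers residual
#   d_t u - (eta/2) d_xx u + u d_x u  ~  0.

from math import exp, cos, sin

eta, k = 1.5, 2.0

def psi(t, x):
    # a separable heat-equation solution: 1 + exp(-eta*k**2*t/2) * cos(k*x)
    return 1.0 + exp(-(eta / 2.0) * k ** 2 * t) * cos(k * x)

def u(t, x):
    dpsi_dx = -exp(-(eta / 2.0) * k ** 2 * t) * k * sin(k * x)
    return -eta * dpsi_dx / psi(t, x)

h, t0, x0 = 1e-4, 0.3, 0.7
du_dt = (u(t0 + h, x0) - u(t0 - h, x0)) / (2 * h)
du_dx = (u(t0, x0 + h) - u(t0, x0 - h)) / (2 * h)
d2u_dx2 = (u(t0, x0 + h) - 2 * u(t0, x0) + u(t0, x0 - h)) / h ** 2
residual = du_dt - (eta / 2.0) * d2u_dx2 + u(t0, x0) * du_dx
print(abs(residual))   # small: limited only by finite-difference error
```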

This was trivially extended to (3+1) dimensions in another post.

Remark: I personally find it mind-boggling that Burgers equation pops out like this and that the Cole-Hopf transformation becomes almost a tautology.

## Navier-Stokes Equation

In this section, consider a (3+1)-dimensional noncommutative geometry with $x^0 = t$, where implied sums run over only the spatial dimensions $x^\mu$, $\mu = 1,2,3$, and

$\displaystyle [dx^\mu,t] = [dt,x^\mu] = [dt,t] = 0$ and $\displaystyle [dx^\mu,x^\nu] = \eta g^{\mu\nu} dt$

so that

$df = dt \left(\partial_t f - \frac{\eta}{2} \nabla^2 f\right) + dx^\mu \left(\partial_\mu f\right)$

where

$\nabla^2 f = g^{\mu\nu} \partial_\mu\partial_\nu f.$

In analogy with Burgers equation, consider a 1-form connection

$A = \frac{1}{\eta} \left(-dt p + dx^\mu u_\mu\right)$

where $\eta$ is constant and $p,u_\mu\in\Omega^0$, so that

$dA = dt dx^\mu \frac{1}{\eta}\left(\partial_t u_\mu - \frac{\eta}{2}\nabla^2 u_\mu + \partial_\mu p\right) + dx^\mu dx^\nu \frac{1}{\eta}\left(\partial_\mu u_\nu\right)$

and

$AA = dt dx^\mu \frac{1}{\eta^2}\left[\eta\left(u\cdot\nabla\right)u_\mu\right]$

where

$\left(u\cdot\nabla\right) = g^{\kappa\lambda} u_\kappa \partial_\lambda.$

Putting the above together results in the curvature 2-form

\begin{aligned} F &= dA + AA \\ &= dt dx^\mu \frac{1}{\eta} \left[\partial_t u_\mu + \left(u\cdot\nabla\right)u_\mu - \frac{\eta}{2}\nabla^2 u_\mu + \partial_\mu p\right] + dx^\mu dx^\nu \frac{1}{\eta} \left[\frac{1}{2}\left(\partial_\mu u_\nu - \partial_\nu u_\mu\right)\right]. \end{aligned}

Unlike the case of Burgers equation, the curvature 2-form does not always vanish, so we express it in the suggestive form

$F = \frac{1}{\eta} \left(dt E + B\right)$

where $E\in\Omega^1$ and $B\in\Omega^2$, so that

$\boxed{\partial_t u_\mu + \left(u\cdot\nabla\right)u_\mu - \frac{\eta}{2}\nabla^2 u_\mu + \partial_\mu p = E_\mu}$

and

$\boxed{\frac{1}{2}\left(\partial_\mu u_\nu - \partial_\nu u_\mu\right) = B_{\mu\nu}.}$

The above expressions form the Navier-Stokes equations for a Newtonian fluid, where $E$ is an external force and $B$ is the vorticity.

In the special case of no external force ($E=0$) and no vorticity ($B = 0$), the curvature 2-form vanishes, again implying the connection 1-form may be expressed as a pure gauge

$A = \frac{1}{\eta} \left(-dt p + dx^\mu u_\mu\right) = -\left(d\phi\right) \phi^{-1}$

so that

$\boxed{u_\mu = -\eta\left(\partial_\mu\phi\right)\phi^{-1}\quad\text{and}\quad \partial_t\phi - \frac{\eta}{2}\nabla^2\phi - \frac{1}{\eta}p\phi = 0}$

which is a Cole-Hopf-like transformation for the vorticity-free Navier-Stokes equation.

Written by Eric

November 19, 2018 at 11:15 pm

## Weighted Likelihood for Time-Varying Gaussian Parameter Estimation

In a previous article, we presented a weighted likelihood technique for estimating parameters $\theta$ of a probability density function $\rho(x|\theta)$. The motivation is that, for time series, we may wish to weigh more recent data more heavily. In this article, we will apply the technique to a simple Gaussian density

$\rho(x|\mu,\nu) = \frac{1}{\sqrt{\pi\nu}} \exp\left[-\frac{(x-\mu)^2}{\nu}\right].$

In this case, the log likelihood is given by

\begin{aligned} \log\mathcal{L}(\mu,\nu) &= \sum_{i=1}^N w_i \log\rho(x_i|\mu,\nu) \\ &= -\frac{1}{2} \log\pi\nu - \frac{1}{\nu} \sum_{i=1}^N w_i \left(x_i - \mu\right)^2, \end{aligned}

where the weights are assumed normalized so that $\sum_{i=1}^N w_i = 1.$

Recall that the maximum likelihood occurs when

\begin{aligned} \frac{\partial}{\partial\mu} \log\mathcal{L}(\mu,\nu) = \frac{\partial}{\partial\nu} \log\mathcal{L}(\mu,\nu) = 0. \end{aligned}

A simple calculation demonstrates that this occurs when

\begin{aligned} \mu = \sum_{i=1}^N w_i x_i \end{aligned}

and

\begin{aligned} \sigma^2 = \sum_{i=1}^N w_i \left(x_i - \mu\right)^2, \end{aligned}

where $\sigma^2 = \nu/2$.

Introducing a weighted expectation operator for a random variable $X$ with samples $x_i$ given by

\begin{aligned} E_w(X) = \sum_{i=1}^N w_i x_i, \end{aligned}

the Gaussian parameters may be expressed in a familiar form via

$\mu = E_w(X)$

and

$\sigma^2 = E_w(X^2) - \left[E_w(X)\right]^2.$

This simple result justifies the use of weighted expectations for time varying Gaussian parameter estimation. As we will see, this is also useful for coding financial time series analysis.
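As a concrete sketch (mine; the exponential-decay weighting scheme and all parameter values are illustrative, not prescribed by the post), the estimators look like this in plain Python:

```python
# Time-varying Gaussian parameter estimation with normalized weights:
#   mu = E_w(X),  sigma^2 = E_w(X^2) - E_w(X)^2,
# using exponentially decaying weights (newest observation weighted most).

import random

random.seed(42)
xs = [random.gauss(0.0, 1.0) for _ in range(500)]    # oldest sample first

lam = 0.99                                           # decay factor per step
w = [lam ** (len(xs) - 1 - i) for i in range(len(xs))]
total = sum(w)
w = [wi / total for wi in w]                         # normalize: sum w_i = 1

mu = sum(wi * xi for wi, xi in zip(w, xs))           # E_w(X)
var = sum(wi * xi ** 2 for wi, xi in zip(w, xs)) - mu ** 2
print(mu, var)   # should be near the true values 0 and 1
```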

Written by Eric

February 3, 2013 at 4:33 pm

## 60 GHz Wireless – A Reality Check

The wireless revolution has been fascinating to watch. Radio (and micro) waves are transforming the way we live our lives. However, I’m increasingly seeing indications the hype may be getting ahead of itself and we’re beginning to have inflated expectations (c/o the hype cycle) about wireless broadband. In this post, I’d like to revisit some of my prior posts on the subject in light of something that has recently come to my attention: 60 GHz wireless.

### Wavelength Matters

As I outlined in Physics of Wireless Broadband, the most important property that determines the propagation characteristics of radio (and micro) waves is its wavelength. Technical news and marketing materials about wireless broadband refer to frequency, but there is a simple translation to wavelength (in cm) given by:

\begin{aligned} \lambda(cm) = \frac{30}{f(GHz)}. \end{aligned}

Ages ago when I generated those SAR images, cell phones operated at 900 MHz (0.9 GHz) corresponding to a wavelength of about 33 cm. More recent 3G and 4G wireless devices operate at higher carrier frequencies up to 2.5 GHz corresponding to a shorter wavelength of 12 cm. Earlier this month, the FCC announced plans to release bandwidth at 5 GHz (6 cm).
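The conversion above is trivial to script (a Python sketch of the rule of thumb):

```python
def wavelength_cm(f_ghz: float) -> float:
    """Free-space wavelength in cm for a carrier frequency in GHz."""
    return 30.0 / f_ghz   # lambda = c / f, with c ~ 30 cm*GHz

print(wavelength_cm(0.9))    # ~33 cm (2G)
print(wavelength_cm(2.5))    # 12 cm (3G/4G)
print(wavelength_cm(60.0))   # 0.5 cm, i.e. 5 mm
```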

This frequency creep is partially due to the issues related to the ultimate wireless bandwidth speed limit I outlined, but is also driven by a slight misconception that can be found on Wikipedia:

A key characteristic of bandwidth is that a band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum.

Although this is true from a pure information theoretic perspective, when it comes to wireless broadband, the transmission of information is not determined by Shannon alone. One must also consider Maxwell and there are far fewer people in the world that understand the latter than the former.

The propagation characteristics of 2G radio waves at 900 MHz (33 cm) are already quite different from 3G/4G microwaves at 2.5 GHz (12 cm), not to mention the newly announced 5 GHz (6 cm). That is why I was more than a little surprised to learn that organizations are seriously promoting 60 GHz WiFi. Plugging 60 GHz into our formula gives a wavelength of just 5 mm. This is important for three reasons: 1) directionality, 2) penetration, and 3) diffraction.

### Directionality

As I mentioned in Physics of Wireless Broadband, in order for an antenna to broadcast fairly uniformly in all directions, the antenna length should not be much more than half the carrier wavelength. At 60 GHz, this means the antenna should not be much larger than 2.5 mm. This is not feasible due to the small amount of energy transmitted/received by such a tiny antenna.

Consequently, the antenna would end up being very directional, i.e. it will have preferred directions for transmission/reception, and you’ll need to aim your wireless device toward the router. With the possible exception of being in an empty anechoic chamber, the idea that you’ll be able to carry around a wireless device operating at 60 GHz and maintain a good connection is wishful thinking to say the least.

### Penetration

If directionality weren’t an issue, the transmission characteristics of 60 GHz microwaves alone should dampen any hopes for gigabit wireless at this frequency. Although the physics of transmission is complicated, as a general rule of thumb, the depth at which electromagnetic waves penetrate material is related to wavelength. Early 2G (33 cm) and more recent 3G/4G (12 cm) do a decent job of penetrating walls and doors, etc.

At 60 GHz (5 mm), the signal would be severely challenged to penetrate a paperback novel much less chairs, tables, or cubical walls. As a result, to receive full signal strength, 60 GHz wireless requires direct unobstructed line of sight between the device and router.

### Photon Torpedoes vs. Molasses

The more interesting aspects of wireless signal propagation are diffraction and reflection, both of which can be understood via Huygens' beautiful principle and both of which depend on wavelength. Wireless signals do a reasonably good job of oozing around obstacles if the wavelength is long compared to the size of the obstacle, i.e. at low frequencies. Wireless signal propagation is much better at lower frequencies because the signal can penetrate walls and doors, and for those obstacles that cannot be penetrated, you still might receive a signal because the signal can ooze around corners.

As the frequency of the signal increases, the wave stops behaving like molasses oozing around and through obstacles, and begins acting more like photon torpedoes bouncing around the room like particles and shadowing begins to occur. At 60 GHz, shadowing would be severe and communication would depend on direct line of sight or indirect line of sight via reflections. However, it is important to keep in mind that each time the signal bounces off an obstacle, the strength is significantly weakened.

### What Does it all Mean?

The idea that we can increase wireless broadband speeds simply by increasing the available bandwidth indefinitely is flawed because you must also consider the propagation characteristics of the carrier frequency. There is only a finite amount of spectrum available that has reasonable directionality, penetration, and diffraction characteristics. This unavoidable inherent physical limitation will lead us eventually to the ultimate wireless broadband speed limit. There is no amount of engineering that can defeat Heisenberg.

There are ways to obtain high bandwidth wireless signals, but you must sacrifice directionality. The extreme would be direct line of sight laser beam communications. Two routers can certainly communicate at gigabit speeds and beyond if they are connected by laser beams. Of course, there can be no obstacles between the routers or the signal will be lost. I can almost imagine a future-esque Star Wars-like communication system where individual mobile devices are, in fact, tracked with laser beams, but I don’t see that ever becoming a practical reality.

We still have some time before we reach this ultimate wireless broadband limit, but to not begin preparing for it now would be irresponsible. The only future-proof technology is fiber optics. Communities should avoid the temptation to forgo fiber plans in favor of wireless, because those who do so will soon bump into this wireless broadband limit and need to roll out fiber anyway.

Written by Eric

January 21, 2013 at 9:15 am