John Baez: Circuit Theory

This is a draft of a never-completed paper by John Baez. Much of this material, though not the part on cohomology, later found its way into subsequent work.

Abstract

There is a dagger-compact category whose morphisms are equivalence classes of electrical circuits made of linear resistors. To construct this category, we begin by recalling work going back to Weyl which expresses Kirchhoff’s laws and Ohm’s law in terms of chains and cochains on a graph. We show that a ‘lumped’ circuit made of linear resistors—that is, a circuit of this sort treated as a ‘black box’ whose inside workings we cannot see—amounts mathematically to a Dirichlet form: a finite-dimensional real vector space with a chosen basis and a quadratic form obeying some conditions. There are rules for composing and tensoring Dirichlet forms, which correspond to the operations of composing circuits in series and setting circuits side by side. However, these rules do not give a category, because the would-be identity morphisms are made of wires with zero resistance, which fall outside the Dirichlet form framework. The most elegant solution is to treat Dirichlet forms as a special case of Lagrangian correspondences. This leads to a dagger-compact category of electrical circuits that can include wires with zero resistance.

Introduction

Basic Concepts

The concept of an electrical circuit made of linear resistors is well-known in electrical engineering, but we need to formalize it with more precision than usual. The basic idea is that an electrical circuit is a graph whose edges are labelled by positive real numbers called ‘resistances’, and whose set of vertices is equipped with two distinguished subsets: the ‘inputs’ and the ‘outputs’.

Circuits

All graphs in this paper will be directed. So, define a graph to be a pair of functions $s,t : E \to V$ where $E$ and $V$ are finite sets. We call elements of $E$ edges and elements of $V$ vertices. We say that the edge $e \in E$ has source $s(e)$ and target $t(e)$, and we also say that $e$ is an edge from $s(e)$ to $t(e)$.

Define an open graph to be a graph where the set of vertices is equipped with subsets $V_-$ and $V_+$, called inputs and outputs. We do not require that $V_-$ and $V_+$ are disjoint. Often the difference between inputs and outputs will not matter, so we define $\partial V = V_- \cup V_+$, and call elements of this set terminals.

Define a circuit (made of linear resistors) to be an open graph together with a function called the resistance

$$R : E \to (0,+\infty)$$

assigning to each edge a positive real number. We will use $\Gamma$ to stand for a circuit:

$$\Gamma = \left(s,t : E \to V,\; V_{\pm},\; R: E \to (0,+\infty) \right)$$

Suppose we have another circuit

$$\Gamma' = \left(s',t' : E' \to V',\; V'_{\pm},\; R': E' \to (0,+\infty) \right)$$

Then there is an obvious notion of a map of circuits $f : \Gamma \to \Gamma'$. Such a map consists of a function sending vertices to vertices and a function sending edges to edges, both called $f$:

$$f : V \to V'$$
$$f : E \to E'$$

which preserve sources and targets, inputs and outputs, and resistances:

$$s'(f(e)) = f(s(e)), \qquad t'(f(e)) = f(t(e))$$
$$v \in V_+ \implies f(v) \in V'_+$$
$$v \in V_- \implies f(v) \in V'_-$$
$$R'(f(e)) = R(e)$$

This definition makes circuits into the objects of a category.
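
To make the bookkeeping concrete, here is a minimal sketch of how a circuit could be represented in code; the class name and fields are illustrative choices of ours, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class Circuit:
    """A finite directed graph with resistances and chosen terminals."""
    vertices: list       # the finite set V
    source: list         # s(e) for each edge e, as indices into `vertices`
    target: list         # t(e) for each edge e
    resistance: list     # R(e) > 0 for each edge e
    inputs: set          # V_-  (a subset of vertex indices)
    outputs: set         # V_+  (a subset of vertex indices)

    def terminals(self):
        """The boundary: the union of inputs and outputs."""
        return self.inputs | self.outputs

# Example: two resistors in series, a --R1-- m --R2-- b,
# with a as the only input and b as the only output.
series = Circuit(
    vertices=['a', 'm', 'b'],
    source=[0, 1], target=[1, 2],
    resistance=[1.0, 2.0],
    inputs={0}, outputs={2},
)
assert series.terminals() == {0, 2}
```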

Given any circuit $\Gamma$, there are three other circuits we can build from it. They are all rather trivial, since they have no edges, only vertices. Nonetheless they are very important in what follows.

First, we have a circuit $\Gamma_+$ whose set of vertices is $V_+$ and whose set of edges is empty. We call this the output of $\Gamma$. There is an obvious map of circuits

$$\iota_+ : \Gamma_+ \to \Gamma$$

coming from the inclusion $V_+ \hookrightarrow V$ and the inclusion $\emptyset \hookrightarrow E$.

Similarly, there is a circuit $\Gamma_-$, called the input of $\Gamma$, whose set of vertices is $V_-$ and whose set of edges is empty. There is an obvious map

$$\iota_- : \Gamma_- \to \Gamma$$

Finally, there is a circuit $\partial \Gamma$ whose set of vertices is $\partial V$ and whose set of edges is empty. We call this the boundary of $\Gamma$. Yet again there is an obvious map

$$\iota : \partial \Gamma \to \Gamma$$

Chain Complexes from Circuits

In 1923, Hermann Weyl published a paper in Spanish which described electrical circuits in terms of the homology and cohomology of graphs (W). In this approach, Kirchhoff’s voltage and current laws simply say that voltage is a 1-coboundary and current is a 1-cycle. Furthermore, the electrical resistances labelling edges of the graph put an inner product on the space of 1-chains, allowing us to identify them with 1-cochains. Ohm’s law then says that, under this identification, the voltage may be identified with the current.

In the late 1960’s and early 1970’s, these ideas were further developed by authors including Paul Slepian (Sl), G. E. Ching (C), J. P. Roth (R) and Stephen Smale (Sm). By now they are well-known. The textbook by Bamberg and Sternberg (BS) uses electrical circuits to motivate homology, cohomology and the beginnings of Hodge theory. The text by Gross and Kotiuga (GK) uses chain and cochain complexes to tackle a wide variety of problems in electromagnetism. What follows is a terse review of the basics.

Any circuit $\Gamma$ determines a chain complex of real vector spaces, $C_*(\Gamma)$. As we shall see, a 1-chain in this complex can be used to describe the electrical current flowing through wires (that is, edges) of our circuit.

In fact, $C_*(\Gamma)$ is just the usual chain complex associated to a graph. So, it has only two nonzero terms:

$$C_0(\Gamma) = \mathbb{R}^V$$
$$C_1(\Gamma) = \mathbb{R}^E$$

with differential

$$\partial : C_1(\Gamma) \to C_0(\Gamma)$$

given by

$$\partial(e) = t(e) - s(e)$$

We can make $C_*(\Gamma)$ a chain complex of finite-dimensional real Hilbert spaces, since the resistance $R : E \to (0,+\infty)$ defines an inner product on $C_1(\Gamma)$ by

$$\langle e, e' \rangle = R(e)\, \delta_{e,e'}$$

and there is also an inner product on $C_0(\Gamma)$ for which the vertices form an orthonormal basis:

$$\langle v, v' \rangle = \delta_{v,v'}$$
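
Concretely, in the bases given by edges and vertices, $\partial$ is the $|V| \times |E|$ incidence matrix of the graph, and the inner product on $C_1(\Gamma)$ is the diagonal matrix of resistances. A small numerical sketch (the variable names are ours, not the paper's):

```python
import numpy as np

# The series circuit a --R1-- m --R2-- b: vertices 0,1,2 and edges 0,1.
source = [0, 1]
target = [1, 2]
R = np.array([1.0, 2.0])
n_vertices, n_edges = 3, 2

# Boundary operator on 1-chains: partial(e) = t(e) - s(e).
partial = np.zeros((n_vertices, n_edges))
for e, (s, t) in enumerate(zip(source, target)):
    partial[s, e] -= 1.0
    partial[t, e] += 1.0

# Inner products: <e,e'> = R(e) delta_{e,e'} on C_1, identity on C_0.
inner_C1 = np.diag(R)
inner_C0 = np.eye(n_vertices)

# The 1-chain I = e_0 + e_1 (a unit current along the series path):
I = np.array([1.0, 1.0])
print(partial @ I)        # [-1.  0.  1.] : supported on the terminals a and b
print(I @ inner_C1 @ I)   # 3.0 : the squared norm of I, namely R1 + R2
```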

Cochain Complexes from Circuits

The dual of the chain complex $C_*(\Gamma)$ is a cochain complex of finite-dimensional real Hilbert spaces, $C^*(\Gamma)$. As we shall see, a 1-cochain in this complex can be used to describe the voltage across wires of our circuit.

We call the differential in this cochain complex $d$. It is given by

$$(d\phi)(e) = \phi(t(e)) - \phi(s(e))$$

But since any real Hilbert space is equipped with a canonical isomorphism to its dual, we get isomorphisms

$$r: C_0(\Gamma) \to C^0(\Gamma)$$
$$r: C_1(\Gamma) \to C^1(\Gamma)$$

Explicitly, these are given by:

(1) $$a(\beta) = \langle r(a), \beta \rangle$$

where $a \in C_i(\Gamma)$, $\beta \in C^i(\Gamma)$, and $a(\beta)$ denotes the evaluation of the cochain $\beta$ on the chain $a$.

Using these isomorphisms, we can transfer the differential $\partial$ on $C_*(\Gamma)$ to a differential on $C^*(\Gamma)$, which we call

$$d^\dagger : C^1(\Gamma) \to C^0(\Gamma)$$

In other words, we define $d^\dagger$ so that this diagram commutes:

$$\begin{array}{ccc} C_0(\Gamma) & \stackrel{\partial}{\leftarrow} & C_1(\Gamma) \\ r\downarrow & & \downarrow r \\ C^0(\Gamma) & \stackrel{d^\dagger}{\leftarrow} & C^1(\Gamma) \end{array}$$

or in other words:

(2) $$d^\dagger r = r \partial$$

We use the dagger notation because $d^\dagger$ really is the Hilbert space adjoint of $d$:

(3) $$\langle d^\dagger \alpha, \beta \rangle = \langle \alpha, d \beta \rangle$$

for all $\alpha \in C^1(\Gamma)$, $\beta \in C^0(\Gamma)$. This follows immediately from (1) and (2) if we choose $a$ with $r a = \alpha$:

$$\begin{array}{ccl} \langle d^\dagger \alpha, \beta \rangle &=& \langle d^\dagger r a, \beta \rangle \\ &=& \langle r \partial a , \beta \rangle \\ &=& (\partial a)(\beta) \\ &=& a(d \beta) \\ &=& \langle r a, d \beta \rangle \\ &=& \langle \alpha, d \beta \rangle \end{array}$$
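
In matrix terms (identifying cochains with column vectors via the dual bases), $d$ is the transpose of $\partial$, the isomorphism $r$ is the identity on 0-(co)chains and the diagonal matrix of resistances on 1-(co)chains, and $d^\dagger = r\,\partial\,r^{-1}$. Here is a short numerical check of Equation (3), continuing the series-circuit example; the variable names are ours.

```python
import numpy as np

# Same series circuit as before: vertices 0,1,2; edges 0:(0->1), 1:(1->2).
partial = np.array([[-1.0,  0.0],
                    [ 1.0, -1.0],
                    [ 0.0,  1.0]])
R = np.array([1.0, 2.0])

d = partial.T                                  # (d phi)(e) = phi(t(e)) - phi(s(e))
r0 = np.eye(3)                                 # r on C_0: vertices are orthonormal
r1 = np.diag(R)                                # r on C_1: <e,e'> = R(e) delta_{e,e'}
d_dagger = r0 @ partial @ np.linalg.inv(r1)    # defined by  d_dagger r = r partial

# The inner products on cochains are the ones making r unitary: the usual dot
# product in degree 0, and alpha . diag(1/R) . alpha' in degree 1.
alpha = np.array([2.0, -1.0])                  # a 1-cochain
beta = np.array([1.0, 0.0, 3.0])               # a 0-cochain
lhs = (d_dagger @ alpha) @ beta
rhs = alpha @ np.linalg.inv(r1) @ (d @ beta)
assert np.isclose(lhs, rhs)                    # <d^dagger alpha, beta> = <alpha, d beta>
```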

The inclusion of circuits

$$\iota : \partial \Gamma \to \Gamma$$

gives an inclusion of chain complexes

$$\iota_* : C_*(\partial \Gamma) \to C_*(\Gamma)$$

and then, by taking duals, a map of cochain complexes $\iota^* : C^*(\Gamma) \to C^*(\partial \Gamma)$. Henceforth we call this map

$$p: C^*(\Gamma) \to C^*(\partial \Gamma)$$

This map is zero on 1-cochains, and on 0-cochains it simply amounts to restricting a function on the set of vertices $V$ to a function on the set of terminals.

Since we have cochain complexes of finite-dimensional real Hilbert spaces, we can also take the Hilbert space adjoint to get a map $p^\dagger : C^*(\partial \Gamma) \to C^*(\Gamma)$. We write this map as

$$i: C^*(\partial \Gamma) \to C^*(\Gamma)$$

This map is zero on 1-cochains, and on 0-cochains it simply amounts to extending a function on the set of terminals to a function on the set of vertices that is zero on the vertices that are not terminals.

The following standard facts will come in handy:

Proposition

If the maps $i, r, p$ and $\iota_*$ are defined as above, then

(4) $$i r = r \iota_*$$
(5) $$p i = 1$$

and

(6) $$(\ker p)^\perp = \mathrm{im}\, i$$
Proof

Equations (4) and (5) can be checked directly on basis vectors. For (4): both $i r$ and $r \iota_*$ send a terminal $v \in \partial V$, viewed as a basis element of $C_0(\partial \Gamma)$, to the corresponding dual basis element of $C^0(\Gamma)$, since the vertices form orthonormal bases and $r$ identifies each vertex with its dual basis vector. For (5): extending a function on $\partial V$ by zero and then restricting it back to $\partial V$ gives back the original function.

Since $i = p^\dagger$, Equation (6) follows from a general fact about a linear map $T$ between finite-dimensional Hilbert spaces: $(\ker T)^\perp = \mathrm{im}\, T^\dagger$.

Kirchhoff’s Laws

Given a circuit, we shall focus on two quantities: a 1-chain $I \in C_1(\Gamma)$ called the current and a 1-cochain $V \in C^1(\Gamma)$ called the voltage. In 1847, Gustav Kirchhoff formulated two laws governing these quantities.

We say Kirchhoff’s voltage law holds if

$$V = d \phi$$

for some $\phi \in C^0(\Gamma)$ called the potential. If Kirchhoff’s voltage law holds for some voltage $V$, the potential $\phi$ is hardly ever unique. But we can say exactly how much it fails to be unique: given $\phi_1, \phi_2 \in C^0(\Gamma)$, we have $d \phi_1 = d \phi_2$ if and only if their difference is constant on each connected component of the graph $\Gamma$.

We say Kirchhoff’s current law holds if

(7) $$\partial I = \iota_* J$$

for some $J \in C_0(\partial \Gamma)$, called the boundary current. This says that the total current flowing in or out of any vertex is zero unless that vertex is a terminal. If Kirchhoff’s current law holds for $I$, the boundary current $J$ is unique, since $\iota_* : C_0(\partial \Gamma) \to C_0(\Gamma)$ is one-to-one.
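
For instance, in the series circuit used above, with terminals $a$ and $b$ and interior vertex $m$, a unit current along both edges satisfies Kirchhoff's current law: $\partial I$ is supported on the terminals. A sketch, with our own variable names:

```python
import numpy as np

# Series circuit a --R1-- m --R2-- b; terminals are a (index 0) and b (index 2).
partial = np.array([[-1.0,  0.0],
                    [ 1.0, -1.0],
                    [ 0.0,  1.0]])
terminals = [0, 2]

I = np.array([1.0, 1.0])      # one unit of current along each edge
dI = partial @ I              # the 0-chain  partial I
print(dI)                     # [-1.  0.  1.]

# Kirchhoff's current law: partial I vanishes at non-terminal vertices.
interior = [v for v in range(3) if v not in terminals]
assert np.allclose(dI[interior], 0.0)

# The boundary current J is just partial I read off at the terminals.
J = dI[terminals]
print(J)                      # [-1.  1.]
```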

Ohm’s Law

In 1827 Georg Ohm published a book which included a relation between the voltage and current for circuits made of resistors (O). At the time, the critical reception was harsh: one contemporary called Ohm’s work “a web of naked fancies, which can never find the semblance of support from even the most superficial of observations”, and the German Minister of Education said that a professor who preached such heresies was unworthy to teach science (D,H). However, a simplified version of his relation is now widely used under the name of “Ohm’s law”.

As we have seen, the resistance lets us define an inner product on the vector space $C_1(\Gamma)$, which gives an isomorphism $r: C_1(\Gamma) \to C^1(\Gamma)$ as defined in (1). We say Ohm’s law holds if the voltage $V$ and current $I$ are related as follows:

(8) $$V = r I$$

This allows us to express $I$ in terms of $V$:

$$I = r^{-1} V$$

Kirchhoff’s voltage law then lets us write $I$ in terms of $\phi$:

$$I = r^{-1} d \phi$$

Given this, what does Kirchhoff’s current law say in terms of $\phi$? The answer is this:

Proposition

Kirchhoff’s current law holds for $I = r^{-1} d \phi$ if and only if

(9) $$d^\dagger d \phi = i \chi$$

for some $\chi \in C^0(\partial \Gamma)$. Moreover, in this case we can take $\chi$ to be given by

(10) $$\chi = r J$$

where $J$ is the boundary current given by Kirchhoff’s current law.

Proof

Assume Kirchhoff’s current law: $\partial I = \iota_* J$ for some $J$. Then we have

(11) $$d^\dagger d \phi = d^\dagger V = d^\dagger r I = r \partial I = r \iota_* J = i r J$$

Here the first step uses Kirchhoff’s voltage law, the second uses Ohm’s law, the third uses (2), the fourth uses Kirchhoff’s current law, and the last step uses (4). Thus $d^\dagger d \phi = i \chi$ if we take $\chi = r J$.

Conversely, suppose $d^\dagger d \phi = i \chi$. Then, taking $J = r^{-1} \chi$, the same sort of reasoning run in reverse shows that $\partial I = \iota_* J$, so Kirchhoff’s current law holds for $I = r^{-1} d \phi$.
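
In coordinates, the operator $d^\dagger d$ is the graph Laplacian weighted by the conductances $1/R(e)$: as a matrix it equals $\partial \, \mathrm{diag}(1/R)\, \partial^T$. Here is a sketch (variable names ours) checking Equation (9) on the series circuit: the Laplacian of the potential vanishes at the interior vertex exactly when the potential takes the value dictated by the usual voltage-divider rule.

```python
import numpy as np

# Series circuit a --R1-- m --R2-- b; R1 = 1, R2 = 2; terminals a, b.
partial = np.array([[-1.0,  0.0],
                    [ 1.0, -1.0],
                    [ 0.0,  1.0]])
R = np.array([1.0, 2.0])
L = partial @ np.diag(1.0 / R) @ partial.T    # the Laplacian  d^dagger d
terminals, interior = [0, 2], [1]

# Potential with phi(a) = 1, phi(b) = 0 and the voltage-divider value at m:
# phi(m) = R2 / (R1 + R2) = 2/3.
phi = np.array([1.0, 2.0 / 3.0, 0.0])

laplacian_phi = L @ phi
assert np.allclose(laplacian_phi[interior], 0.0)   # Laplace's equation holds inside
print(laplacian_phi[terminals])                    # chi = r J at the terminals: [ 1/3, -1/3 ]
```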

The Principle of Minimum Power

In this section we always assume Kirchhoff’s voltage law and Ohm’s law.

Given a circuit $\Gamma$ with voltage $V$ and current $I$, the power dissipated by the circuit is defined to be

$$P = V(I)$$

where we are pairing the 1-chain $I$ and the 1-cochain $V$ to get a real number. Ohm’s law allows us to rewrite $I$ as $r^{-1} V$, so the power can be expressed in terms of the voltage:

$$P = V(r^{-1} V) = \langle V, V \rangle$$

Kirchhoff’s voltage law allows us to write $V$ as $d \phi$, so the power can also be expressed in terms of the potential:

$$P = \langle d \phi, d \phi \rangle$$

This expression lets us formulate the ‘principle of minimum power’, which gives us information about the potential $\phi$ given its restriction to the boundary of $\Gamma$. This restriction is an element of $C^0(\partial \Gamma)$, and in general we call any element of this space a boundary potential.

Definition

We say a potential $\phi \in C^0(\Gamma)$ obeys the principle of minimum power for a boundary potential $\psi \in C^0(\partial \Gamma)$ if $\phi$ minimizes the power $\langle d \phi, d \phi \rangle$ subject to the constraint that $p \phi = \psi$.

Proposition

A potential $\phi$ obeys the principle of minimum power for some boundary potential $\psi$ if and only if $I = r^{-1} d \phi$ obeys Kirchhoff’s current law.

Proof

If $\phi$ obeys the principle of minimum power for some boundary potential $\psi$, then for any $\phi' \in C^0(\Gamma)$ with $p \phi' = 0$ we must have

$$\left. \frac{d}{d t} \langle d(\phi + t \phi'), d(\phi + t \phi') \rangle \right|_{t = 0} = 0$$

or in other words:

$$\langle d \phi', d \phi \rangle = 0$$

or

$$\langle \phi' , d^\dagger d \phi \rangle = 0$$

This means that $d^\dagger d \phi \in (\ker p)^\perp$, so by (6) we have

$$d^\dagger d \phi = i \chi$$

for some $\chi \in C^0(\partial \Gamma)$. By the previous Proposition, this equation implies Kirchhoff’s current law for $I = r^{-1} d \phi$. Conversely, Kirchhoff’s current law for $I$ implies the above equation and thus, running the above calculation backwards,

$$\left. \frac{d}{d t} \langle d(\phi + t \phi'), d(\phi + t \phi') \rangle \right|_{t = 0} = 0$$

It follows that $\phi$ is a critical point for the power as a function on potentials satisfying the constraint $p \phi = \psi$. But since the power is a nonnegative quadratic form, $\phi$ must minimize the power among such potentials.
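
For example, consider the circuit with two terminals $a$ and $b$, one interior vertex $m$, and two edges: one from $a$ to $m$ with resistance $R_1$ and one from $m$ to $b$ with resistance $R_2$. Fixing the boundary potential $\psi = p \phi$, the power is

$$\langle d\phi, d\phi \rangle = \frac{(\phi_m - \psi_a)^2}{R_1} + \frac{(\psi_b - \phi_m)^2}{R_2}$$

and setting the derivative with respect to the interior value $\phi_m$ equal to zero, which is precisely Kirchhoff's current law at $m$, gives

$$\phi_m = \frac{R_2 \psi_a + R_1 \psi_b}{R_1 + R_2}, \qquad \langle d\phi, d\phi \rangle = \frac{(\psi_a - \psi_b)^2}{R_1 + R_2},$$

the familiar rule for resistors in series.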

The Dirichlet problem

We have seen that a potential $\phi$ gives a solution of all three basic equations governing electric circuits made from linear resistors (Kirchhoff’s voltage law, Kirchhoff’s current law and Ohm’s law) if and only if this equation holds:

(12) $$d^\dagger d \phi = i \chi$$

Our next task is to solve this equation. But first, some remarks are in order.

The operator

$$d^\dagger d : C^0(\Gamma) \to C^0(\Gamma)$$

acts as a discrete analogue of the Laplacian for the graph $\Gamma$, so we call this operator the Laplacian of $\Gamma$. Equation (12) is thus a version of Laplace’s equation with boundary conditions. It says the Laplacian of the potential $\phi \in C^0(\Gamma)$ equals zero except on the boundary of $\Gamma$, where it equals $\chi$.

We could try to solve for $\phi$ given $\chi$. However, we prefer a slightly different approach, which emphasizes the role of the boundary potential $\psi = p \phi$. After all, we have seen that $\phi$ solves Equation (12) for some $\chi$ if and only if $\phi$ obeys the principle of minimum power for some boundary potential $\psi$. We call the problem of finding a potential $\phi$ that minimizes the power for a fixed value of $\psi = p \phi$ a discrete version of the Dirichlet problem.

As we shall see, this version of the Dirichlet problem always has a solution. However, the solution is not necessarily unique. If we take a solution $\phi$ and add to it some $\alpha \in C^0(\Gamma)$ with $d \alpha = 0$ and $p \alpha = 0$, we clearly get another solution. It should be intuitively clear that such an $\alpha$ is a function on the vertices of $\Gamma$ that is constant on each connected component and vanishes on the boundary of $\Gamma$. To make this precise we need some standard concepts from graph theory:

Definition

Given two vertices $v, w$ of a graph $\Gamma$, a path from $v$ to $w$ is a finite sequence of vertices $v = v_0, v_1, \dots , v_n = w$ and edges $e_1, \dots , e_n$ such that for each $1 \le i \le n$, either $e_i$ is an edge from $v_{i-1}$ to $v_i$, or an edge from $v_i$ to $v_{i-1}$.

Definition

A subset $S$ of the vertices of a graph $\Gamma$ is connected if for each pair of vertices in $S$, there is a path from one to the other.

Definition

A connected component of a graph $\Gamma$ is a maximal connected subset of the vertices of $\Gamma$.

In the theory of directed graphs, one usually distinguishes ‘strong’ connectedness, where paths must follow the edge directions, from ‘weak’ connectedness; since our paths may traverse edges in either direction, the last two definitions correspond to weak connectedness. However, we never consider any other sort of connectedness, so we omit the qualifier.

Definition

A connected component of $\Gamma$ touches the boundary if it contains a vertex in $\partial \Gamma$.

It is easy to see that $\alpha \in C^0(\Gamma)$ obeys $d \alpha = 0$ if and only if it is constant on each connected component of $\Gamma$. If moreover $p \alpha = 0$, then $\alpha$ must vanish on all connected components touching the boundary.

With these preliminaries in hand, we can solve the Dirichlet problem:

Proposition

For any boundary potential $\psi \in C^0(\partial \Gamma)$ there exists a potential $\phi$ obeying the principle of minimum power for $\psi$. If we also demand that $\phi$ vanish on every connected component of $\Gamma$ not touching the boundary, then $\phi$ is unique, and depends linearly on $\psi$.

Proof

For existence, note that a nonnegative quadratic form restricted to an affine subspace of a real vector space must reach a minimum somewhere on this subspace. So, because the power $\langle d \phi, d \phi \rangle$ defines a nonnegative quadratic form on the space $C^0(\Gamma)$, for any $\psi \in C^0(\partial \Gamma)$ the power must reach a minimum somewhere on the affine subspace

$$X = \{ \phi : p \phi = \psi \} .$$

For uniqueness, suppose that $\phi, \phi' \in X$ both minimize the power. Let

$$\alpha = \phi' - \phi .$$

Then $p \alpha = 0$, so $\phi + t \alpha$ lies in $X$ for all $t \in \mathbb{R}$. Thus, the function

$$P(t) = \langle d(\phi + t \alpha), d(\phi + t \alpha) \rangle$$

attains its minimum value both at $t = 0$ and at $t = 1$. Since this function is smooth, we must have $P'(0) = 0$. Since

$$P(t) = \langle d \phi, d \phi \rangle + 2 t \langle d \phi, d \alpha \rangle + t^2 \langle d \alpha, d \alpha \rangle,$$

it follows that $\langle d \phi, d \alpha \rangle = 0$. Thus

$$P(t) = \langle d \phi, d \phi \rangle + t^2 \langle d \alpha, d \alpha \rangle .$$

Since this function takes on the same value at $t = 0$ and $t = 1$, we must have $d \alpha = 0$. This implies that $\alpha$ is constant on each connected component of $\Gamma$. Furthermore, since $p \alpha = 0$, $\alpha$ vanishes on each connected component of $\Gamma$ touching the boundary.

Thus, if we demand that both $\phi$ and $\phi'$ vanish on every connected component of $\Gamma$ that does not touch the boundary, $\alpha = \phi' - \phi$ vanishes on every connected component of $\Gamma$. It follows that $\phi = \phi'$, giving the desired uniqueness.

To prove that $\phi$ depends linearly on $\psi$, suppose that for $i = 1,2$ the potential $\phi_i$ obeys the principle of minimum power for $\psi_i$ and vanishes on every component of $\Gamma$ not touching the boundary. Then by the propositions above, we have

$$d^\dagger d \phi_i = i \chi_i$$

for some $\chi_i \in C^0(\partial \Gamma)$. It follows that for any real numbers $c_1$ and $c_2$, the potential $\phi = c_1 \phi_1 + c_2 \phi_2$ obeys

$$d^\dagger d \phi = i \chi$$

where $\chi = c_1 \chi_1 + c_2 \chi_2$. By another application of the propositions above, it follows that $\phi$ obeys the principle of minimum power for some boundary potential $\psi$. But since

$$p \phi = p(c_1 \phi_1 + c_2 \phi_2) = c_1 \psi_1 + c_2 \psi_2 ,$$

we must have $\psi = c_1 \psi_1 + c_2 \psi_2$. Moreover, $\phi = c_1 \phi_1 + c_2 \phi_2$ vanishes on every component not touching the boundary, since each $\phi_i$ does, so it is the unique such potential for this boundary potential. It follows that $\phi$ depends linearly on $\psi$.

Note from the proof of the above proposition that:

Proposition

Suppose $\psi \in C^0(\partial \Gamma)$ and $\phi$ is a potential obeying the principle of minimum power for $\psi$. Then $\phi'$ obeys the principle of minimum power for $\psi$ if and only if the difference $\phi' - \phi$ is constant on every connected component of $\Gamma$ and vanishes on every connected component touching the boundary of $\Gamma$.

Bamberg and Sternberg (BS) describe another way to solve the Dirichlet problem, going back to Weyl (W).
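
The direct approach above can also be carried out numerically: splitting the Laplacian $d^\dagger d$ into blocks indexed by terminal and interior vertices, the Dirichlet problem reduces to a linear system for the interior values. Here is a sketch along these lines (not taken from the draft; the names are ours), assuming for simplicity that every connected component touches the boundary, so the interior block is invertible.

```python
import numpy as np

def solve_dirichlet(partial, R, terminals, psi):
    """Minimize <d phi, d phi> subject to phi = psi on the terminals.

    Assumes every connected component of the graph contains a terminal,
    so the interior block of the Laplacian is invertible.
    """
    n = partial.shape[0]
    L = partial @ np.diag(1.0 / np.asarray(R)) @ partial.T   # Laplacian d^dagger d
    interior = [v for v in range(n) if v not in terminals]

    phi = np.zeros(n)
    phi[terminals] = psi
    if interior:
        # Laplace's equation at interior vertices:
        #   L[int, int] phi_int = -L[int, term] psi
        A = L[np.ix_(interior, interior)]
        b = -L[np.ix_(interior, terminals)] @ psi
        phi[interior] = np.linalg.solve(A, b)
    return phi

# Series circuit a --R1-- m --R2-- b with psi(a) = 1, psi(b) = 0:
partial = np.array([[-1.0, 0.0], [1.0, -1.0], [0.0, 1.0]])
phi = solve_dirichlet(partial, [1.0, 2.0], [0, 2], np.array([1.0, 0.0]))
print(phi)    # [1.  0.6666...  0.] -- the voltage-divider potential at m
```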

Lumped Circuits

In this section we always assume that the principle of minimum power holds, as well as Kirchhoff’s voltage law and Ohm’s law.

Under these circumstances, we shall see that the boundary potential determines the boundary current. A ‘lumped circuit’ is an equivalence class of circuits, where two are considered equivalent when the boundary current is the same function of the boundary potential. The idea is that the boundary current and boundary potential are all that can be observed ‘from outside’, i.e. by making measurements at the terminals. Restricting our attention to what can be observed by making measurements at the terminals amounts to treating a circuit as a ‘black box’: that is, treating its interior as hidden from view. So, two circuits give the same lumped circuit when they behave the same as ‘black boxes’.

First let us check that the boundary current is a function of the boundary potential. For this we introduce an important quadratic form on the space of boundary potentials:

Definition

For any $\psi \in C^0(\partial \Gamma)$, let

$$Q(\psi) = \frac{1}{2} \inf_{\{\phi \,:\, p \phi = \psi\}} \; \langle d \phi, d \phi \rangle$$

Since $\langle d \phi, d \phi \rangle$ defines a nonnegative quadratic form on the finite-dimensional vector space $C^0(\Gamma)$ and the constraint $p \phi = \psi$ picks out an affine subspace of this space, the infimum above is actually attained. One can check that $Q(\psi)$ is a nonnegative quadratic form on $C^0(\partial \Gamma)$.

Up to a factor of $\frac{1}{2}$, $Q(\psi)$ is just the power dissipated by the circuit when the boundary potential is $\psi$, thanks to the principle of minimum power. The factor of $\frac{1}{2}$ simplifies the next proposition, which uses $Q$ to compute the boundary current as a function of the boundary potential.

Since $Q$ is a smooth real-valued function on $C^0(\partial \Gamma)$, its differential $d Q$ at any given point $\psi \in C^0(\partial \Gamma)$ defines an element of the dual space $C_0(\partial \Gamma)$, which we denote by $d Q_\psi$. In fact, this element is equal to the boundary current $J$ corresponding to the boundary potential $\psi$:

Proposition

Suppose $\psi \in C^0(\partial \Gamma)$. Suppose $\phi$ is any potential minimizing the power $\langle d \phi , d \phi \rangle$ subject to the constraint $p \phi = \psi$. Let $V = d \phi$ be the corresponding voltage, $I = r^{-1} V$ the current, and $\partial I = \iota_* J$ where $J$ is the corresponding boundary current. Then

(13) $$d Q_\psi = J .$$
Proof

Note first that while there may be several choices of $\phi$ minimizing the power subject to the constraint that $p \phi = \psi$, the previous Proposition says that the difference between any two choices vanishes on all components touching the boundary of $\Gamma$. Thus, these two choices give the same value for $J$. So, with no loss of generality we may assume $\phi$ is the unique choice that vanishes on all components not touching the boundary. By the Proposition on the Dirichlet problem, there is a linear operator

$$f: C^0(\partial \Gamma) \to C^0(\Gamma)$$

sending $\psi \in C^0(\partial \Gamma)$ to this choice of $\phi$, and then

$$Q(\psi) = \frac{1}{2} \langle d f \psi, d f \psi \rangle .$$

Given any $\psi' \in C^0(\partial \Gamma)$, we thus have

$$\begin{array}{ccl} d Q_\psi (\psi') &=& \left. \frac{d}{d t} Q(\psi + t \psi') \right|_{t = 0} \\ &=& \frac{1}{2} \left. \frac{d}{d t} \langle d f(\psi + t \psi'), d f (\psi + t \psi') \rangle \right|_{t = 0} \\ &=& \langle d f \psi, d f \psi' \rangle \\ &=& \langle d^\dagger d f \psi , f \psi' \rangle \\ &=& \langle i r J , f \psi' \rangle \end{array}$$

where in the last step we use Equation (11). Since $i^\dagger = p$, we obtain

$$\begin{array}{ccl} d Q_\psi (\psi') &=& \langle r J, p f \psi' \rangle \\ &=& \langle r J , \psi' \rangle \\ &=& J(\psi') \end{array}$$

where in the last step we use Equation (1). It follows that $d Q_\psi = J$.
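
For the series circuit considered earlier, for instance, the quadratic form on boundary potentials is

$$Q(\psi) = \frac{(\psi_a - \psi_b)^2}{2(R_1 + R_2)},$$

so

$$d Q_\psi = \frac{\psi_a - \psi_b}{R_1 + R_2}\, a + \frac{\psi_b - \psi_a}{R_1 + R_2}\, b = J,$$

the boundary current one expects for a pair of resistors in series carrying the current $(\psi_a - \psi_b)/(R_1 + R_2)$.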

Categories of Circuits

In this section we define a category of circuits, and also a category of lumped circuits. Both of these are dagger-compact categories.

There is a category where objects are finite sets of points, and a morphism $f : S \to T$ is an equivalence class of circuits

$$\Gamma = \left(s,t : E \to V,\; V_{\pm},\; R: E \to (0,+\infty) \right)$$

equipped with bijections

$$i: S \to V_- , \qquad j: T \to V_+ .$$

The equivalence relation is as follows: $(\Gamma, i, j)$ is equivalent to $(\Gamma', i', j')$ if there is an isomorphism of circuits $f : \Gamma \to \Gamma'$ such that

$$f i = i' , \qquad f j = j'.$$

The composition of circuits is given by pushout of cospans…

This category is symmetric monoidal, and in fact a dagger-compact category. WHY???

Dirichlet forms

We have seen that a lumped circuit is completely specified by the vector space $C^0(\partial \Gamma)$ along with its distinguished basis and the quadratic form $Q$. Now we describe which quadratic forms can arise this way. They are known as ‘Dirichlet forms’, and they admit a number of equivalent characterizations. We start with the simplest.

Given a finite set $S$, let $\mathbb{R}^S$ be the vector space of functions $\psi: S \to \mathbb{R}$. A Dirichlet form on $S$ will be a certain sort of quadratic form on $\mathbb{R}^S$:

Definition

Given a finite set $S$, a Dirichlet form on $S$ is a quadratic form $Q: \mathbb{R}^S \to \mathbb{R}$ given by the formula

$$Q(\psi) = \sum_{i,j} c_{i j} (\psi_i - \psi_j)^2$$

for some nonnegative real numbers $c_{i j}$.

Note that we may assume without loss of generality that $c_{i i} = 0$ and $c_{i j} = c_{j i}$; we do this henceforth. Any Dirichlet form is nonnegative: $Q(\psi) \ge 0$ for all $\psi \in \mathbb{R}^S$. However, not all nonnegative quadratic forms are Dirichlet forms. For example, if $S = \{1, 2\}$:

$$Q(\psi) = (\psi_1 + \psi_2)^2$$

is not a Dirichlet form, since it does not vanish on constant functions.
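
Equivalently, a quadratic form $Q(\psi) = \psi^T L \psi$ on $\mathbb{R}^S$ is a Dirichlet form exactly when $L$ can be taken symmetric with nonpositive off-diagonal entries and zero row sums, i.e. a weighted graph Laplacian. A small sketch of this translation, with names of our choosing:

```python
import numpy as np

def dirichlet_matrix(c):
    """Matrix L with  Q(psi) = psi^T L psi  for Q(psi) = sum_ij c_ij (psi_i - psi_j)^2.

    Expects c symmetric with zero diagonal and nonnegative entries.
    """
    c = np.asarray(c, dtype=float)
    L = -2.0 * c
    np.fill_diagonal(L, 2.0 * c.sum(axis=1))
    return L

# Q(psi) = (psi_1 - psi_2)^2  on  S = {1, 2}:  c_12 = c_21 = 1/2.
c = np.array([[0.0, 0.5], [0.5, 0.0]])
L = dirichlet_matrix(c)
psi = np.array([3.0, -1.0])
assert np.isclose(psi @ L @ psi, (psi[0] - psi[1]) ** 2)
assert np.allclose(L.sum(axis=1), 0.0)     # the form vanishes on constants

# The non-example (psi_1 + psi_2)^2 corresponds to L = [[1, 1], [1, 1]]:
# its off-diagonal entries are positive and it does not kill constants,
# so it cannot arise from any choice of nonnegative c_ij.
```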

In fact, the concept of Dirichlet form is vastly more general: such quadratic forms are studied not just on finite-dimensional vector spaces $\mathbb{R}^S$ but on $L^2$ of any measure space. When this measure space is just a finite set, the concept of Dirichlet form reduces to the definition above. For a thorough introduction to Dirichlet forms, see the text by Fukushima (F). For a fun tour of the underlying ideas, see the paper by Doyle and Snell (DS).

We will not really need any other characterizations of Dirichlet forms, but they do help illuminate the concept:

Proposition

Given a finite set $S$ and a quadratic form $Q : \mathbb{R}^S \to \mathbb{R}$, the following are equivalent:

  1. $Q$ is a Dirichlet form.

  2. $Q(\phi) \le Q(\psi)$ whenever $|\phi_i - \phi_j| \le |\psi_i - \psi_j|$ for all $i, j$.

  3. $Q(\phi) = 0$ whenever $\phi_i$ is independent of $i$, and $Q$ obeys the Markov property: $Q(\psi) \le Q(\phi)$ when $\psi_i = \min (\phi_i, 1)$.

Proof

See Fukushima (F).

A Category of Lumped Circuits

We begin with a naive attempt to construct a category where the morphisms are lumped circuits. This naive attempt doesn’t quite work, because it doesn’t include identity morphisms. However, it points in the right direction.

Given finite sets $S$ and $T$, let $S+T$ denote their disjoint union. Let $D(S,T)$ be the set of Dirichlet forms on $\mathbb{R}^{S + T}$. There is a way to compose these Dirichlet forms:

$$\circ : D(T,U) \times D(S,T) \to D(S,U)$$

defined as follows. Given $Q \in D(S,T)$ and $R \in D(T,U)$, let

$$(R \circ Q)(\gamma, \alpha) = \inf_{\beta \in \mathbb{R}^T} \left( Q(\beta, \alpha) + R(\gamma, \beta) \right)$$

where $\alpha \in \mathbb{R}^S$, $\beta \in \mathbb{R}^T$ and $\gamma \in \mathbb{R}^U$, and we write a form in $D(X,Y)$ as a function of its $\mathbb{R}^Y$ variables followed by its $\mathbb{R}^X$ variables. Moreover, this composition is associative:

$$(P \circ Q) \circ R = P \circ (Q \circ R)$$
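
Since each Dirichlet form is a nonnegative quadratic form, the infimum over the middle variables can be computed by a Schur complement. Here is a sketch (our own, not from the draft) that represents forms by symmetric matrices and composes them by minimizing over the shared $\mathbb{R}^T$ block; a pseudoinverse handles the degenerate directions.

```python
import numpy as np

def compose(Q, R, nS, nT, nU):
    """Compose Dirichlet forms given as symmetric PSD matrices.

    Q is a form on R^(S+T) with variables ordered (S, T); R is a form on
    R^(T+U) with variables ordered (T, U).  Returns the matrix of R o Q on
    R^(S+U) (variables ordered (S, U)), obtained by minimizing Q + R over
    the shared R^T variables.
    """
    # Build the combined form on variables ordered (S, U, T).
    n = nS + nU + nT
    M = np.zeros((n, n))
    s, u, t = slice(0, nS), slice(nS, nS + nU), slice(nS + nU, n)
    M[s, s] += Q[:nS, :nS];  M[s, t] += Q[:nS, nS:];  M[t, s] += Q[nS:, :nS]
    M[t, t] += Q[nS:, nS:] + R[:nT, :nT]
    M[u, u] += R[nT:, nT:];  M[u, t] += R[nT:, :nT];  M[t, u] += R[:nT, nT:]

    # Partial minimization over the T block is a (generalized) Schur complement.
    A, B, C = M[:nS + nU, :nS + nU], M[:nS + nU, t], M[t, t]
    return A - B @ np.linalg.pinv(C) @ B.T

# Two single-resistor circuits in series: each has the Dirichlet form
# (psi_1 - psi_2)^2 / (2 R) on its two terminals.
def resistor(Rval):
    return np.array([[1.0, -1.0], [-1.0, 1.0]]) / (2.0 * Rval)

L = compose(resistor(1.0), resistor(2.0), nS=1, nT=1, nU=1)
print(L)   # (psi_a - psi_b)^2 / (2 (R1 + R2)):  [[ 1/6, -1/6], [-1/6, 1/6]]
```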

However, there is typically no Dirichlet form $1_S \in D(S,S)$ playing the role of the identity for this composition. A ‘category without identity morphisms’ is called a semicategory, so we see:

Proposition

There is a semicategory where:

  • the objects are finite sets,

  • a morphism from $S$ to $T$ is a Dirichlet form $Q \in D(S,T)$,

  • composition of morphisms is given by

$$(R \circ Q)(\gamma, \alpha) = \inf_{\beta \in \mathbb{R}^T} \left( Q(\beta, \alpha) + R(\gamma, \beta) \right) .$$

We would like to make this into a category. The easy way is to formally adjoin identity morphisms; this trick works for any semicategory. This amounts to introducing some circuits that contain wires with zero resistance. However, we obtain a better category if we include more morphisms: more circuits having wires with zero resistance.

References

  • S. Abramsky and B. Coecke, A categorical semantics of quantum protocols, in Proceedings of the 19th IEEE conference on Logic in Computer Science (LiCS’04), IEEE Computer Science Press, ????, 2004. Also available as http://arxiv.org/abs/quant-ph/0402130.
  • P. Bamberg and S. Sternberg, A Course of Mathematics for Students of Physics 2, Chap. 12: The theory of electrical circuits, Cambridge University, Cambridge, 1982.
  • G. E. Ching, Topological concepts in networks; an application of homology theory to network analysis, Proc. 11th. Midwest Conference on Circuit Theory, University of Notre Dame, 1968, pp. 165-175.
  • B. Davies, A web of naked fancies?, Phys. Educ. 15 (1980), 57-61.
  • M. Fukushima, Dirichlet Forms and Markov Processes, North-Holland, Amsterdam, 1980.
  • P. W. Gross and P. R. Kotiuga, Electromagnetic Theory and Computation: A Topological Approach, Cambridge University Press, 2004.
  • I. B. Hart, Makers of Science, Oxford U. Press, London, 1923, p. 243.
  • P. Katis, N. Sabadini, R. F. C. Walters, On the algebra of systems with feedback and boundary, Rendiconti del Circolo Matematico di Palermo Serie II, Suppl. 63 (2000), 123–156.

  • J. Kigami, Analysis on Fractals, Cambridge U. Press. First 60 pages available at http://www-an.acs.i.kyoto-u.ac.jp/~kigami/AOF.pdf.

  • Z.-M. Ma and M. Röckner, Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, Springer, Berlin, 1991.

  • G. Ohm, Die Galvanische Kette, Mathematisch Bearbeitet, T. H. Riemann, Berlin, 1827. Also available at http://www.ohm-hochschule.de/bib/textarchiv/Ohm.Die_galvanische_Kette.pdf.

  • J. P. Roth, Existence and uniqueness of solutions to electrical network problems via homology sequences, Mathematical Aspects of Electrical Network Theory, SIAM-AMS Proceedings III, 1971, pp. 113-118.

  • C. Sabot, Existence and uniqueness of diffusions on finitely ramified self-similar fractals, Section 1: Dirichlet forms on finite sets and electrical networks, Annales Scientifiques de l’École Normale Supérieure, Sér. 4, 30 (1997), 605-673. Also available at http://www.numdam.org/numdam-bin/item?id=ASENS_1997_4_30_5_605_0.
  • C. Sabot, Electrical networks, symplectic reductions, and application to the renormalization map of self-similar lattices, Proc. Sympos. Pure Math. 72 (2004), 155-205. Also available as arXiv:math-ph/0304015.
  • P. Selinger, Dagger compact closed categories and completely positive maps, in Proceedings of the 3rd International Workshop on Quantum Programming Languages (QPL 2005), ENTCS 170 (2007), 139–163. Also available at http://www.mscs.dal.ca/~selinger/papers/dagger.pdf.

  • P. Slepian, Mathematical Foundations of Network Analysis, Springer, Berlin, 1968.

  • S. Smale, On the mathematical foundations of electrical network theory, J. Diff. Geom. 7 (1972), 193-210.
