11. The Dirichlet Green’s Function as an Inverse of $\nabla^2$: Basis States

Michael Fowler, University of Virginia

Definition

Recall the Dirichlet Green’s function $G_D(\mathbf{r},\mathbf{r}')$ is defined to be the potential $\phi(\mathbf{r})$ from a point unit charge at $\mathbf{r}'$ in a volume $V$ bounded by grounded conducting surfaces $S$:

$$\nabla^2 G_D(\mathbf{r},\mathbf{r}') = -4\pi\delta(\mathbf{r}-\mathbf{r}'),$$

with $G_D(\mathbf{r},\mathbf{r}') = 0$ for $\mathbf{r}$ on $S$, $\mathbf{r}'$ in $V$. (Following Jackson (1.31), we're leaving the $1/4\pi\varepsilon_0$ out of our definition; it just clutters up the math and can be restored at the end.)

Point Charge in Empty Space

Let’s start with the simplest case of no boundaries at all (but our function going to zero at infinity): a point charge in empty space.

As we discussed in the Math Bootcamp lectures, the delta function is essentially the continuum equivalent of the identity.  Loosely speaking, then, the operator $\nabla^2$ operating on $G_D$ gives unity, suggesting that $G_D$ is an inverse of the differential operator $\nabla^2$ (apart from sign).

So how do we solve the equation?  As we discussed in the Bootcamp, differential operators are expressed very simply in Fourier transform space, in particular $\nabla^2 \to -k^2$, and in empty space $G_D$ is a function only of $\mathbf{r}-\mathbf{r}'$, not of $\mathbf{r}, \mathbf{r}'$ separately. So, Fourier transforming the differential equation, we find

$$G_D(\mathbf{k}) = \frac{4\pi}{k^2}.$$

We’ve already discussed the Fourier transform of the function on the right; going back into configuration space, we get:

$$G_D(\mathbf{r},\mathbf{r}') = \frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}.$$

Of course, we knew this already: it’s just a point charge. So are we just going around in circles?

But this approach suggests how we can handle the much more complicated case where there are boundaries.
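For readers who like to see a formula run, here is a minimal numerical sketch (my own illustration, not part of the lecture) of exactly this Fourier-space inversion: plant a discretized point source on a periodic grid, divide by $-k^2$, and check the $1/r$ falloff. The grid size and box length are arbitrary choices.

```python
import numpy as np

# Hedged sketch: invert nabla^2 in Fourier space on a periodic grid.
# nabla^2 G = -4 pi delta  becomes  -k^2 G(k) = -4 pi rho(k).
N, L = 64, 10.0                 # grid points per side, box length (arbitrary)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2

rho = np.zeros((N, N, N))
rho[N // 2, N // 2, N // 2] = 1.0 / dx**3     # discretized delta function

K2[0, 0, 0] = 1.0                             # dummy value; mode zeroed below
G_k = 4 * np.pi * np.fft.fftn(rho) / K2
G_k[0, 0, 0] = 0.0                            # k = 0: neutralizing background
G = np.real(np.fft.ifftn(G_k))

# Dropping the k = 0 mode shifts G by a constant, so compare differences:
G1 = G[N // 2 + 4, N // 2, N // 2]            # at r = 4 dx from the source
G2 = G[N // 2 + 8, N // 2, N // 2]            # at r = 8 dx
print(G1 - G2, 1 / (4 * dx) - 1 / (8 * dx))   # agree to a few percent
```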

Analyzing and Generalizing the Method

We solved the problem by going to the Fourier transform space: that is, expressing both sides as integrals over plane waves.

Now, the crucial point: these plane wave states are the eigenstates of the differential operator in empty space!

That is to say, we have transformed to a representation in which $\nabla^2$ is diagonal, and it’s pretty easy to find the inverse of a diagonal operator: it’s another diagonal operator, with each diagonal term replaced by its inverse (see the paragraph below if you’re a bit rusty on this).

We can use this same strategy in a space with boundaries and boundary conditions: we need to find the eigenstates of $\nabla^2$ in this space satisfying the given boundary conditions, then express $\nabla^2$ as a matrix between these eigenstates. It will be diagonal, of course, and so invertible, giving us $G_D$.  The hard part in practice is finding the eigenstates.

Math Reminder: How to Invert an Operator Using the Eigenvectors

You can skip this if it looks familiar from quantum, or some other course.

To understand the inverse of an operator, we'll begin with a simple set of operators: matrices operating on vectors in a finite-dimensional vector space.  We'll denote the vectors in the space using the ket notation of Dirac, $|i\rangle$, in the present case just a column vector with $n$ elements. An operator here is an $n\times n$ matrix $M$ (and if we're doing quantum mechanics, it's Hermitian).  How do we find the inverse of such an operator?

The procedure is to find its eigenvectors $|\lambda_i\rangle$, the vectors it leaves invariant in direction:

$$M|\lambda_i\rangle = \lambda_i|\lambda_i\rangle.$$

We'll assume there are $n$ such vectors, that they're linearly independent, and that they span the space. (This is usually the case for physical operators; we can deal with exceptions later.)  Any vector in the space can therefore be written $|\alpha\rangle = \alpha_i|\lambda_i\rangle$ (dummy suffix notation: a repeated suffix implies summation), and using this basis the $n\times n$ matrix operator $M$ is just diagonal, having elements $\lambda_1, \lambda_2, \dots, \lambda_n$.

The unit matrix, the one that, operating on a vector, leaves it completely unchanged, is of course the diagonal $1, 1, \dots, 1$.

So the inverse of our operator in this basis is just the diagonal $1/\lambda_1, 1/\lambda_2, \dots, 1/\lambda_n$:

$$M=\begin{pmatrix}\lambda_1 & 0 & 0 & 0\\ 0 & \lambda_2 & 0 & 0\\ 0 & 0 & \lambda_3 & 0\\ 0 & 0 & 0 & \lambda_4\end{pmatrix}, \qquad M^{-1}=\begin{pmatrix}\lambda_1^{-1} & 0 & 0 & 0\\ 0 & \lambda_2^{-1} & 0 & 0\\ 0 & 0 & \lambda_3^{-1} & 0\\ 0 & 0 & 0 & \lambda_4^{-1}\end{pmatrix}.$$

Writing the Operator Using Bras and Kets

There's a neater way to put all this, a way that generalizes to the cases we're interested in. We've used the ket vector notation $|\,\rangle$ to denote a column vector.  Dirac invented this notation; an $n$-component row vector he called a bra, and wrote it $\langle\,|$.  The point is that the inner product (with the standard definition of matrix multiplication) of these two vectors, $\langle\lambda|\mu\rangle$, is a "bracket", bra-ket: just a number.

If our basis vectors are all normalized and orthogonal (the standard approach), then

$$\langle\lambda_i|\lambda_j\rangle = \delta_{ij}.$$

What about the outer product $|\mu\rangle\langle\lambda|$?  This is a column vector multiplied by a row vector, so, with the standard definition of matrix multiplication (the element $ij$ of the matrix product is the inner product of the $i$th row of the first matrix with the $j$th column of the second), it is an $n\times n$ matrix.

What does it look like?  Consider $|\lambda_1\rangle\langle\lambda_1|$: it has 1 in the 11 position, and all the other elements are zero.

$$\langle 1|1\rangle = \begin{pmatrix}1 & 0 & 0 & 0\end{pmatrix}\begin{pmatrix}1\\ 0\\ 0\\ 0\end{pmatrix} = 1, \qquad |1\rangle\langle 1| = \begin{pmatrix}1\\ 0\\ 0\\ 0\end{pmatrix}\begin{pmatrix}1 & 0 & 0 & 0\end{pmatrix} = \begin{pmatrix}1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix}.$$

Now we see that the matrix $M$ can be written

$$M = \sum_{i=1}^n \lambda_i\,|\lambda_i\rangle\langle\lambda_i|.$$

With the orthogonality and normalization of the basis set, this representation certainly gives the right answer on all the basis vectors, so by linearity it gives the correct answer for any other vector in the space.  

Evidently the inverse is

$$M^{-1} = \sum_{i=1}^n \frac{1}{\lambda_i}\,|\lambda_i\rangle\langle\lambda_i|.$$
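As a concrete check, here is a short numerical sketch (mine, with an arbitrary test matrix, not part of the lecture) of this spectral construction of the inverse:

```python
import numpy as np

# Build M^{-1} = sum_i (1/lambda_i) |lambda_i><lambda_i| for a random
# symmetric matrix, and verify it really is the inverse.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = A + A.T + 8 * np.eye(4)            # symmetric, safely invertible

vals, vecs = np.linalg.eigh(M)         # columns of vecs are the |lambda_i>
Minv = sum((1 / lam) * np.outer(v, v) for lam, v in zip(vals, vecs.T))

print(np.allclose(Minv @ M, np.eye(4)))   # True
```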

Inverting a Differential Operator with Boundary Conditions: a One-Dimensional Example

Now for the more ambitious goal of finding the inverse of the $\nabla^2$ operator subject to boundary conditions.

The key is to restrict the space of functions on which the operator operates to those satisfying the boundary conditions.  That means we must find a basis set of eigenfunctions of the operator in this subspace.  (Any solution of our problem is of course in this subspace.)

As a warm-up, to get the idea, we'll begin with the simplest possible model that has the right ingredients: the one-dimensional operator $d^2/dx^2$ acting on the space of (twice differentiable!) functions $f(x)$, with $x$ in the interval $(0,1)$ and $f(x) = 0$ at those boundary points. (Actually, there’s a simpler way to solve this simple system, just direct integration, as we’ll discuss shortly, but here we want to show the general method. Both approaches are necessary for many problems.)

Following the strategy laid out for the n×n  matrix above, we first find the eigenfunctions for this operator, subject to the given boundary conditions.

The normalized eigenfunctions are

$$|n\rangle = \sqrt{2}\,\sin n\pi x.$$

These are real functions, and the bracket of two of them is

$$\langle m|n\rangle = \int_0^1 2\sin m\pi x\,\sin n\pi x\,dx = \delta_{mn}.$$
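If you want to check this numerically, a midpoint-rule integration does it in a couple of lines (my own aside, not part of the lecture):

```python
import numpy as np

# Orthonormality of sqrt(2) sin(n pi x) on (0,1), by the midpoint rule.
xs = (np.arange(100_000) + 0.5) / 100_000
bracket = lambda m, n: np.mean(2 * np.sin(m * np.pi * xs) * np.sin(n * np.pi * xs))
print(round(bracket(3, 3), 6), round(bracket(3, 5), 6))   # 1.0, 0.0
```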

Writing This in Terms of Dirac Localized Kets

This can be written more neatly, purely in terms of bras and kets, by bringing in localized kets, following Dirac: delta functions labeled by position,

$$|x_0\rangle = \delta(x-x_0), \quad (?)$$

but this is pretty sloppy: it’s only meaningful inside an integral, and the delta function is not a normalizable wave function in the usual way. (You can check this by defining it as the limit of a sequence of functions, say equal to zero for $|x| > \varepsilon/2$ and equal to $1/\varepsilon$ for $|x| \le \varepsilon/2$: the norm diverges as $\varepsilon \to 0$.)

Anyway, the expression $\langle x_0|n\rangle$ is defined by

$$\langle x_0|n\rangle = \int_0^1 \delta(x-x_0)\,\sqrt{2}\,\sin n\pi x\,dx = \sqrt{2}\,\sin n\pi x_0.$$

Dirac defined “normalization” of these local kets by

$$\langle x|x'\rangle = \delta(x-x'),$$

the continuum analog of $\langle m|n\rangle = \delta_{mn}$, and, as we’ll see, this leads to a consistent formalism.

So the unit matrix, which can be written in terms of our set of basis functions as

$$I = \sum_{n=1}^\infty |n\rangle\langle n|$$

can equally be written

$$I = \int_0^1 |x\rangle\langle x|\,dx.$$

(OK, this is mathematically pretty dubious: it’s not even a countably infinite basis! But if we’re careful, it is well-defined when the localized bras and kets appear in brackets in an expression, in the present case with our basis functions, or in integrals, which is the same thing.)

To check for consistency:

$$|n\rangle = I|n\rangle = \int_0^1 |x\rangle\langle x|n\rangle\,dx = \int_0^1 \sqrt{2}\,\sin n\pi x\,|x\rangle\,dx,$$

and

$$|x\rangle = I|x\rangle = \sum_{n=1}^\infty |n\rangle\langle n|x\rangle = \sum_{n=1}^\infty \sqrt{2}\,\sin n\pi x\,|n\rangle.$$

(So defining the position ket in this way puts it in our subspace of functions that are zero at the boundaries.)

Hence:

$$\delta(x-x') = \langle x|x'\rangle = \sum_{n=1}^\infty \langle x|n\rangle\langle n|x'\rangle = 2\sum_n \sin n\pi x\,\sin n\pi x'.$$

So the operator $I = \sum_{n=1}^\infty |n\rangle\langle n|$ is equivalent to the “unit matrix” in the space of continuous functions defined on the unit interval and equal to zero at the boundaries.
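It’s instructive (my own numerical aside, not part of the lecture) to watch the partial sums of this completeness relation sharpen into a delta function:

```python
import numpy as np

# Partial sums of 2 sum_n sin(n pi x) sin(n pi x'): the diagonal value
# grows like N (the developing delta spike), while off the diagonal the
# partial sums merely oscillate around zero.
def S(N, x, xp):
    n = np.arange(1, N + 1)
    return 2 * np.sum(np.sin(n * np.pi * x) * np.sin(n * np.pi * xp))

for N in (10, 100, 1000):
    print(N, S(N, 0.4, 0.4), S(N, 0.4, 0.7))
```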

We're finally ready to find the Green's function for the differential operator $d^2/dx^2$ in this space.

The differential operator is of course diagonal in the $|n\rangle$ basis (they're its eigenstates), with eigenvalues $-n^2\pi^2$.

That is, it can be written

$$\nabla^2 = -\sum_n n^2\pi^2\,|n\rangle\langle n|,$$

(so that $\langle x|\nabla^2|x'\rangle = -\sum_n n^2\pi^2\,\langle x|n\rangle\langle n|x'\rangle = -2\sum_n n^2\pi^2\,\sin n\pi x\,\sin n\pi x'$).
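As a sanity check on this spectral representation (again my own aside), apply it to a test function whose second derivative we know, say $f(x) = x(1-x)$ with $-f'' = 2$:

```python
import numpy as np

# Apply -d^2/dx^2 = sum_n (n pi)^2 |n><n| to f(x) = x(1-x); since
# f'' = -2, the sum should converge to 2 at any interior point.
f = lambda x: x * (1 - x)
xs = (np.arange(20_000) + 0.5) / 20_000        # midpoint grid for <n|f>
x0, total = 0.3, 0.0
for n in range(1, 101):
    c_n = np.mean(np.sqrt(2) * np.sin(n * np.pi * xs) * f(xs))   # <n|f>
    total += (n * np.pi) ** 2 * c_n * np.sqrt(2) * np.sin(n * np.pi * x0)
print(total)   # ~2, converging slowly (like a square-wave Fourier series)
```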

The unit matrix is

$$I = \sum_{n=1}^\infty |n\rangle\langle n|, \qquad \langle x|I|x'\rangle = 2\sum_n \sin n\pi x\,\sin n\pi x'.$$

It follows that the Dirichlet Green's function, the inverse of the differential operator (apart from the sign, as noted earlier), must be

$$G_D(x,x') = \sum_n \frac{\langle x|n\rangle\langle n|x'\rangle}{n^2\pi^2} = 2\sum_n \frac{\sin n\pi x\,\sin n\pi x'}{\pi^2 n^2}.$$

Exercise:  Check that this is indeed the solution to the equation $\dfrac{d^2}{dx^2}G_D(x,x') = -\delta(x-x')$.

So we’ve formally solved the problem, but it’s not exactly clear from this expression what the Green’s function looks like!

Picturing the Dirichlet Green’s Function

However, a little thought about the original differential equation makes it clear.

As a function of $x$ for given $x'$, away from the point $x'$, $G_D(x,x')$ must be just a straight line: the solution of $d^2f/dx^2 = 0$.

$G_D(x,x')$ goes to zero at the boundaries $x = 0, x = 1$, and has unit discontinuity in slope at $x = x'$. (Differentiating a slope discontinuity gives a step discontinuity; the next differentiation gives the delta function.)

The neat way to write it is in terms of

$$x_< = \min(x, x'), \qquad x_> = \max(x, x').$$

In terms of these variables, the expression is simple:

$$G_D(x,x') = x_<\left(1-x_>\right).$$

Exercise:  Convince yourself that this is correct, in particular the change in slope, and that it is symmetric in $x, x'$, as a Green’s function must be.
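A quick independent check (my own sketch, with an arbitrary grid size): discretize $-d^2/dx^2$ with Dirichlet boundaries as a finite-difference matrix, invert it numerically, and compare with $x_<(1-x_>)$. In one dimension the finite-difference Green's function happens to agree with the continuum one exactly at the grid points.

```python
import numpy as np

# Finite-difference -d^2/dx^2 on (0,1) with f = 0 at both ends.
N = 200
h = 1.0 / N
x = np.arange(1, N) * h                        # interior grid points
A = (2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h**2
K = np.linalg.inv(A) / h                       # discrete Green's function

i, j = 49, 139                                 # x = 0.25, x' = 0.70
print(K[i, j], min(x[i], x[j]) * (1 - max(x[i], x[j])))   # both 0.075
```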

For three-dimensional problems, we’ll often find it best to use the complete-set-of-states representation for angle-type variables, and direct integration for radial variables.

Two Different Representations of the Green’s Function

It’s worth thinking a little more about the two different expressions for the Green’s function.

A:  $\displaystyle\ \ G_D(x,x') = 2\sum_n \frac{\sin n\pi x\,\sin n\pi x'}{\pi^2 n^2},$

B:  $\displaystyle\ \ G_D(x,x') = x_<\left(1-x_>\right).$

Expression A is a sum over all eigenstates of the differential operator, to be expected since it comes from integrating the delta function, which, being infinitely sharp, has contributions from all momentum/energy states. And it’s difficult to visualize.

Expression B apparently involves only zero-energy eigenstates of the operator: one satisfying the $x = 0$ boundary condition, the other the $x = 1$ boundary condition, and, in contrast to A, it is very easy to see.  But how can these two expressions be equivalent? The point is that B has a slope discontinuity; this generates the delta function when the differential operator is applied, and it means that if you take a Fourier transform of B as a function of $x$ for fixed $x'$, there will be contributions from all energies.
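Their equivalence is easy to confirm numerically (my own aside; the truncation $N$ is an arbitrary choice):

```python
import numpy as np

# Representation A, truncated, versus the closed form B.
def G_A(x, xp, N=2000):
    n = np.arange(1, N + 1)
    return 2 * np.sum(np.sin(n * np.pi * x) * np.sin(n * np.pi * xp)
                      / (np.pi**2 * n**2))

def G_B(x, xp):
    return min(x, xp) * (1 - max(x, xp))

print(G_A(0.3, 0.8), G_B(0.3, 0.8))   # agree to O(1/N)
```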

These two representations are a recurring theme (especially in homework questions).

Physics

Does this one-dimensional model have any electrostatic realization?  Yes: it’s equivalent to an infinite uniform sheet of charge between two infinite parallel grounded conductors, which will have negative charge induced on them, giving a linear potential drop from the charged sheet to the plates.

Exercise: for a non-central sheet of charge, what are the charges induced? 

A mechanical one-dimensional realization would be a taut string pushed out of line by a sharp object into a V shape (in general with arms of different lengths, of course). The two-dimensional generalization would be a rubber sheet attached to, say, the four sides of a square frame, pushed up at one point.  Everywhere else the rubber would relax to the minimum-energy configuration, given (in linear approximation) by the displacement $z$ satisfying the two-dimensional $\nabla^2 z = 0$.

Point Charge Inside a Grounded Cubical Box: Induced Surface Charge

The obvious three-dimensional generalization of the one-dimensional delta function representation presented above, for a 3D box with the six walls at $x, y, z = 0, 1$, is

$$\delta(\mathbf{r}-\mathbf{r}') = \delta(x-x')\,\delta(y-y')\,\delta(z-z') = 8\sum_{n,m,\ell} \sin n\pi x\,\sin n\pi x'\,\sin m\pi y\,\sin m\pi y'\,\sin \ell\pi z\,\sin \ell\pi z'.$$

Using the same argument as that above for the one-dimensional case, the three-dimensional Dirichlet Green’s function inside a cubical box with all walls at zero potential is:

$$G_D(\mathbf{r},\mathbf{r}') = 8\sum_{n,m,\ell} \frac{\sin n\pi x\,\sin n\pi x'\,\sin m\pi y\,\sin m\pi y'\,\sin \ell\pi z\,\sin \ell\pi z'}{\pi^2\left(n^2+m^2+\ell^2\right)}.$$

It’s straightforward to check that

$$\nabla^2 G_D(\mathbf{r},\mathbf{r}') = -\delta(\mathbf{r}-\mathbf{r}').$$
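One way to see this numerically (my own hedged sketch, with an arbitrary truncation): near the source, the box Green’s function must reduce to the free-space form $1/4\pi|\mathbf{r}-\mathbf{r}'|$ appropriate to $\nabla^2 G = -\delta$; the walls only add a smooth image-charge correction.

```python
import numpy as np

# Truncated triple sum for the box Green's function.
def G_box(r, rp, N=120):
    n = np.arange(1, N + 1)
    fx = np.sin(n * np.pi * r[0]) * np.sin(n * np.pi * rp[0])
    fy = np.sin(n * np.pi * r[1]) * np.sin(n * np.pi * rp[1])
    fz = np.sin(n * np.pi * r[2]) * np.sin(n * np.pi * rp[2])
    n2 = n**2
    d = np.pi**2 * (n2[:, None, None] + n2[None, :, None] + n2[None, None, :])
    return 8 * np.sum(fx[:, None, None] * fy[None, :, None]
                      * fz[None, None, :] / d)

r = np.array([0.5, 0.5, 0.5])
rp = np.array([0.55, 0.5, 0.5])
# The Coulomb part dominates at short distance; the smooth (negative)
# difference is the potential of the induced wall charges.
print(G_box(r, rp), 1 / (4 * np.pi * np.linalg.norm(r - rp)))
```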

So what can we do with this Green’s function?  Well, if we put a charge at any point $(x, y, z)$ inside a grounded conducting cubical box, this gives the potential at any other point $(x', y', z')$, and, in particular, by differentiating this expression in the normal direction at the walls, we can find the charge induced on the walls (remember they’re held at zero potential, meaning grounded, so charge will flow into these walls when the point charge is put inside the box).

For example, the surface charge density induced at the point $(x', y', 1)$ on the top face is

$$\sigma(x', y') = 8\sum_{n,m,\ell} \frac{\sin n\pi x\,\sin m\pi y\,\sin \ell\pi z\,\sin n\pi x'\,\sin m\pi y'\;\ell\cos \ell\pi}{\pi\left(n^2+m^2+\ell^2\right)}.$$
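A hedged numerical check of this formula (my own aside, with an arbitrary truncation): put the unit charge at the center of the box and integrate $\sigma$ over the top face. By symmetry each face carries one sixth of the total induced charge, so the integral should approach $-1/6$; this is exactly the ingredient that gives the $V/6$ of the closing exercise.

```python
import numpy as np

# Integrate sigma over the top face for a unit charge at the center.
# The integral of sin(n pi x') over the face is (1 - cos(n pi))/(n pi).
def Q_top(N):
    n = np.arange(1, N + 1)
    I_n = (1 - np.cos(n * np.pi)) / (n * np.pi)
    fx = np.sin(n * np.pi * 0.5) * I_n
    fy = np.sin(n * np.pi * 0.5) * I_n
    fz = np.sin(n * np.pi * 0.5) * n * np.cos(n * np.pi)
    n2 = n**2
    d = np.pi * (n2[:, None, None] + n2[None, :, None] + n2[None, None, :])
    return 8 * np.sum(fx[:, None, None] * fy[None, :, None]
                      * fz[None, None, :] / d)

for N in (50, 100, 150):
    print(N, Q_top(N))          # creeps toward -1/6 = -0.1667 (slowly)
```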

Potential Inside a Cubical (Nonconducting) Box with Given Potential at Walls

Recall now one of the results of the Reciprocation Theorem: a point charge at $\mathbf{r}$ inside a closed grounded container induces a surface charge density $\sigma(\mathbf{r}')$. From the theorem, if we have an identical empty container with specified potentials $\phi(\mathbf{r}')$ at the walls (so it’s not in general a connected conductor), the potential at the corresponding point $\mathbf{r}$ inside is

$$\phi(\mathbf{r}) = -\varepsilon_0\oint_S \phi(\mathbf{r}')\,\sigma(\mathbf{r}')\,da = -\varepsilon_0\oint_S \phi(\mathbf{r}')\,\frac{\partial G_D(\mathbf{r},\mathbf{r}')}{\partial n}\,da.$$

We can take five of the six walls to be at zero potential, then use linearity to add six such results.

Assume the potential is $\phi(x', y')$ on the face at $z = 1$, $0 \le x', y' \le 1$, and $\phi = 0$ on the other faces. Then at any point inside the cube,

$$\phi(\mathbf{r}) = -\varepsilon_0\oint_S \phi(\mathbf{r}')\,\sigma(\mathbf{r}')\,da = -\varepsilon_0\oint_S \phi(\mathbf{r}')\,\frac{\partial G_D(\mathbf{r},\mathbf{r}')}{\partial n}\,da = -8\varepsilon_0\int_{\text{top face}} \phi(x', y') \sum_{n,m,\ell} \frac{\sin n\pi x\,\sin m\pi y\,\sin \ell\pi z\,\sin n\pi x'\,\sin m\pi y'\;\ell\cos \ell\pi}{\pi\left(n^2+m^2+\ell^2\right)}\,dx'\,dy'.$$

In the next lecture, we will analyze this same problem with a rather different approach, and find a different-looking (but of course equivalent) answer. In fact, the two approaches are equivalent to the two forms of the Green's function discussed earlier in this lecture for the simple one-dimensional system.

Exercise:  Suppose the six faces of a cube are conductors, but insulated from each other. Suppose the top face is held at potential $V$, the others grounded. Use the Reciprocation Theorem to prove the potential at the center of the cube is $V/6$.  Does this approach also work for a dodecahedron?