39  Solving Maxwell's Equations: Green’s Functions, Jefimenko

    Michael Fowler, UVa

The Story So Far...

We have established that the magnetic and electric fields can be expressed in terms of a vector potential $\vec{A}$ and a scalar potential $\varphi$:

$$ \vec{B} = \nabla\times\vec{A}, $$

$$ \vec{E} = -\nabla\varphi - \frac{\partial\vec{A}}{\partial t}, $$

and that Maxwell's equations are equivalent to linear differential equations for these potentials, having as source terms the charge and current distributions:

$$ \nabla^2\varphi - \frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2} = -\frac{\rho}{\varepsilon_0}, \qquad \nabla^2\vec{A} - \frac{1}{c^2}\frac{\partial^2\vec{A}}{\partial t^2} = -\mu_0\vec{j}. $$

Here the potentials have been chosen to satisfy the Lorenz gauge condition $\nabla\cdot\vec{A} + \frac{1}{c^2}\frac{\partial\varphi}{\partial t} = 0$ (this can always be done).

Green's Function for a Single-Frequency Point Source

Clearly these equations for $\varphi, \vec{A}$ have plane-wave solutions in empty space.  But we need a full solution, including regions with nonzero charge and current densities.

The basic strategy is to Fourier transform with respect to time only, then solve the resulting spatial equation for a particular frequency.  This is of course an important case in the real world: an oscillating charge/current is an antenna, and the signal is basically a single frequency, with information transmitted by modulating the amplitude or the frequency, or by digitizing.

Since the equations are linear, any solution can then be found by a superposition of single-frequency terms.

Writing the Fourier transform (with respect to time only) pair

$$ \varphi(\vec{r},t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \varphi(\vec{r},\omega)\, e^{-i\omega t}\, d\omega, \qquad \varphi(\vec{r},\omega) = \int_{-\infty}^{\infty} \varphi(\vec{r},t)\, e^{i\omega t}\, dt, $$

the spatial differential equation for $\varphi(\vec{r},\omega)$ is the Fourier transform of

$$ \nabla^2\varphi - \frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2} = -\frac{\rho}{\varepsilon_0}, $$

that is,

$$ \left(\nabla^2 + k^2\right)\varphi(\vec{r},\omega) = -\frac{\rho(\vec{r},\omega)}{\varepsilon_0}, \qquad k \equiv \omega/c. $$

So, for a charge distribution oscillating at $\omega$, the actual time-dependent potential is the real part of $\varphi(\vec{r},\omega)e^{-i\omega t}$.  The time-independent function $\varphi(\vec{r},\omega)$ will be complex, because there will almost always be phase differences between the oscillations at different points in space.
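(A quick numerical sanity check of these transform conventions; this is a sketch of my own, not part of the original notes: transform a test signal forward with $e^{+i\omega t}$, back with $(1/2\pi)e^{-i\omega t}$, and confirm the round trip.)

```python
# Numerical sanity check (my sketch, not from the notes) of the Fourier
# conventions above: forward transform with e^{+i omega t}, inverse with
# (1/2 pi) e^{-i omega t}; the round trip should reproduce the signal.
import numpy as np

t = np.linspace(-10.0, 10.0, 1501)            # time grid
w = np.linspace(-20.0, 20.0, 1501)            # frequency grid
phi_t = np.exp(-t**2) * np.cos(3.0 * t)       # test "potential" at one point r

# phi(omega) = int phi(t) e^{+i omega t} dt
phi_w = np.trapz(phi_t * np.exp(1j * np.outer(w, t)), t, axis=1)

# phi(t) = (1/2 pi) int phi(omega) e^{-i omega t} domega
phi_rt = np.trapz(phi_w * np.exp(-1j * np.outer(t, w)), w, axis=1) / (2 * np.pi)

print("max round-trip error:", np.max(np.abs(phi_rt.real - phi_t)))  # small
```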

As we discovered in electrostatics, the way to solve equations of this type is to introduce Green's functions.  Recall that the Green's function is essentially the inverse of a differential operator, meaning purely formally that we want to "solve" the above equation by writing

$$ \varphi(\vec{r},\omega) = -\frac{1}{\nabla^2+k^2}\,\frac{\rho(\vec{r},\omega)}{\varepsilon_0}. $$

Again, we have to figure out how to interpret this formal expression.

Since this is a linear theory, we only have to solve the equation for a point charge, $\rho(\vec{r}) = \delta(\vec{r})$; then we can get the general result by integrating our solution over the actual charge distribution.

That's where the Green's function comes in: recall it's defined by (with appropriate boundary conditions)

$$ \left(\nabla^2 + k^2\right) G_k(\vec{r},\vec{r}\,') = -4\pi\,\delta(\vec{r}-\vec{r}\,'). $$

For now, we'll take the case of charges and currents in otherwise empty space, so no finite boundaries.

With no boundaries, $G_k$ is a function of $\vec{R} = \vec{r}-\vec{r}\,'$, and in fact only of $R = |\vec{R}|$, since there is no preferred direction.

Now, from electrostatics, Poisson's equation $\nabla^2\psi = -4\pi\,\delta(\vec{r}-\vec{r}\,')$ has the solution

$$ \psi = \frac{1}{|\vec{r}-\vec{r}\,'|} = \frac{1}{R}. $$

Our Green's function $G_k(R)$ must look a lot like this in the limit $R\to 0$, because close enough to the origin the $k^2$ term becomes negligible: think of the delta function on the right-hand side as the limit of a very sharp peak; the rapid change in slope ensures that, close in, the $\nabla^2$ term is far more important than the $k^2$ term.

So we try a solution of the form $G_k(R) = u(R)/R$, with $u(R)\to 1$ as $R\to 0$.  For $R>0$ the differential equation becomes

$$ \frac{d^2u(R)}{dR^2} = -k^2\,u(R), $$

and the solutions are

$$ G_k^{\pm}(R) = \frac{e^{\pm ikR}}{R}. $$
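This is easy to verify away from the origin (a sympy sketch of my own, not in the original notes), using the radial form of the Laplacian:

```python
# Quick sympy check (my sketch, not from the notes) that G = exp(+-ikR)/R
# solves (lap + k^2) G = 0 for R > 0, using the radial Laplacian
# lap f(R) = (1/R) d^2(R f)/dR^2 for a spherically symmetric f.
import sympy as sp

R, k = sp.symbols('R k', positive=True)
for sign in (1, -1):
    G = sp.exp(sign * sp.I * k * R) / R
    lap_G = sp.diff(R * G, R, 2) / R          # radial Laplacian of G
    assert sp.simplify(lap_G + k**2 * G) == 0
print("(lap + k^2) G = 0 away from the origin, for both signs")
```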

Remember this solution is for a particular time Fourier component $\omega = ck$: to go back from frequency to time, recall $\varphi(\vec{r},t) = \frac{1}{2\pi}\int\varphi(\vec{r},\omega)e^{-i\omega t}\,d\omega$, and since our solution has a single frequency component, to find the time-dependent single-frequency Green's function we just multiply by $e^{-i\omega t}$, so

$$ G_k^{\pm}(R,t) = \frac{e^{\pm ikR - i\omega t}}{R} = \frac{e^{\pm i(\omega/c)R - i\omega t}}{R}. $$

This describes spherical waves: the choice of sign $G_k^{+}(R,t)$ represents a wave going out from the origin; the negative sign represents a wave going into the origin.  When light is scattered by a molecule, the ingoing light can be represented in terms of ingoing spherical waves, so the minus sign is relevant initially; the subsequent outgoing wave of course has the positive sign.

Green's Function for a Point Source in Both Space and Time

To get the full picture of potentials generated by time dependent charge and current distributions, we need the Green’s function with a delta function source in both space and time:

$$ \left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) G(\vec{r},t;\vec{r}\,',t') = -4\pi\,\delta(\vec{r}-\vec{r}\,')\,\delta(t-t'). $$

(This corresponds to charge appearing for a moment at a single point: not a physical situation by itself, but the potential from moving charges can be constructed from an integral over such functions.)

Again, we'll take the case of no finite boundaries, so

$$ G(\vec{r},t;\vec{r}\,',t') = G(R,\tau), \qquad \vec{R} = \vec{r}-\vec{r}\,', \quad \tau = t-t'. $$

The equation is solved by noting that the delta function pulse in time has a Fourier transform in which all frequencies appear equally:

$$ \delta(\tau) = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, e^{-i\omega\tau}, $$

and therefore (with no finite boundaries) we can just integrate over the contributions from each frequency, using the single-frequency Green's function we found above,

$$ G^{\pm}(R,\tau) = \int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\, G^{\pm}_{k=\omega/c}(R,\tau) = \int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\, \frac{e^{\pm ikR - i\omega\tau}}{R}, \qquad k = \omega/c. $$

The integral over $\omega$ is just the delta-function representation above with $\tau$ shifted, that is,

$$ G^{\pm}(R,\tau) = \frac{1}{R}\,\delta\!\left(\tau \mp \frac{R}{c}\right). $$
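To see the delta function emerge numerically (my own illustration, not from the notes), truncate the frequency integral at $\pm\Omega$: the result is a peak of width $\sim c/\Omega$ centered on $\tau = R/c$, sharpening as the cutoff grows:

```python
# Numerical illustration (mine, not from the notes): the truncated integral
#   (1/2 pi R) int_{-W}^{W} exp(i omega (R/c - tau)) domega
# approaches (1/R) delta(tau - R/c) as the cutoff W -> infinity.
import numpy as np

c, R = 1.0, 2.0                          # units with c = 1
tau = np.linspace(0.0, 4.0, 801)         # time after the source flash

for W in (10.0, 40.0, 160.0):
    w = np.linspace(-W, W, 4001)
    integrand = np.exp(1j * np.outer(R / c - tau, w))   # e^{i w (R/c - tau)}
    G = np.trapz(integrand, w, axis=1).real / (2 * np.pi * R)
    print(f"cutoff {W:6.1f}: peak at tau = {tau[np.argmax(G)]:.3f}"
          f"  (expect R/c = {R/c:.3f}), peak height {G.max():.2f}")
```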

The choice $G^{+}(R,\tau) = \frac{1}{R}\,\delta\!\left(\tau - \frac{R}{c}\right)$ is called the retarded Green's function: the scalar potential equation $\nabla^2\varphi - \frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2} = -\rho/\varepsilon_0$ is formally solved with this Green's function to give

$$ \varphi(\vec{r},t) = \frac{1}{4\pi\varepsilon_0}\int d^3r'\!\int dt'\; G^{+}(\vec{r},t;\vec{r}\,',t')\,\rho(\vec{r}\,',t') $$

and, with the simple form of the Green's function we've found, this can be written

$$ \varphi(\vec{r},t) = \frac{1}{4\pi\varepsilon_0}\int d^3r'\; \frac{\left[\rho(\vec{r}\,',t')\right]_{\text{ret}}}{|\vec{r}-\vec{r}\,'|}, $$

where the "ret" (for retarded) means evaluated at the time $t' = t - |\vec{r}-\vec{r}\,'|/c$, such that a light signal emitted from $\vec{r}\,'$ at $t'$ would reach $\vec{r}$ at time $t$.

So the solutions to the wave equations

$$ \nabla^2\varphi - \frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2} = -\frac{\rho}{\varepsilon_0}, \qquad \nabla^2\vec{A} - \frac{1}{c^2}\frac{\partial^2\vec{A}}{\partial t^2} = -\mu_0\vec{j} $$

are

$$ \varphi(\vec{r},t) = \frac{1}{4\pi\varepsilon_0}\int d^3r'\, \frac{\rho(\vec{r}\,',t_{\text{ret}})}{R}, \qquad \vec{A}(\vec{r},t) = \frac{\mu_0}{4\pi}\int d^3r'\, \frac{\vec{j}(\vec{r}\,',t_{\text{ret}})}{R}. $$
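Here is a brute-force numerical sketch of the retarded potential formula (my own example, not from the notes; the source, two point charges $\pm q_0\cos\omega t'$ forming a crude oscillating dipole, is made up), with each charge evaluated at its own retarded time:

```python
# Brute-force retarded scalar potential (my sketch; the two-charge oscillating
# dipole is a made-up test source):
#   phi(r, t) = (1/4 pi eps0) sum_i q_i(t - R_i/c) / R_i.
import numpy as np

eps0, c = 8.854e-12, 3.0e8
omega = 2 * np.pi * 1.0e8                      # 100 MHz
q0 = 1.0e-9                                    # 1 nC amplitude

src = np.array([[0.0, 0.0, +0.25],             # +q0 cos(omega t') here
                [0.0, 0.0, -0.25]])            # -q0 cos(omega t') here
sgn = np.array([+1.0, -1.0])

def phi(r, t):
    Rvec = r - src                             # vectors from each source to r
    R = np.linalg.norm(Rvec, axis=1)
    t_ret = t - R / c                          # each charge has its own t_ret
    return np.sum(sgn * q0 * np.cos(omega * t_ret) / R) / (4 * np.pi * eps0)

r = np.array([0.0, 0.0, 30.0])                 # field point on the dipole axis
for t in np.linspace(0.0, 1.0e-8, 5):
    print(f"t = {t:.2e} s   phi = {phi(r, t):+.4e} V")
```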

Recall that we are working in the Lorenz gauge,

$$ \nabla\cdot\vec{A} + \frac{1}{c^2}\frac{\partial\varphi}{\partial t} = 0. $$

So, given the time and space charge and current distribution, we can find the scalar and vector potentials. 

Recall that in the static case, we found similar expressions for the electric field as an integral over the charge distribution,

$$ \vec{E}(\vec{r}) = \frac{1}{4\pi\varepsilon_0}\int d^3r'\, \rho(\vec{r}\,')\,\frac{\hat{R}}{R^2}, $$

and the Biot-Savart law for the magnetic field in terms of the current distribution.  You might think that for the time-dependent case these would generalize as the potentials did, by just putting in the retarded times.  But they don't: essentially because the fields come from differentiating the potentials, and differentiating a retarded time corresponding to a moving source brings in extra spatial contributions.  It can get very messy.

*Deriving General Equations for the Fields from any Charge and Current Distribution

The general equations for the fields in terms of sources are cumbersome and not often used; real problems are usually simpler.  In Jackson and Griffiths, the general equations are called Jefimenko's equations, from Jefimenko's 1966 text, although they were in fact first written down in 1912 by Schott, a student of J.J. Thomson.

The real point of going over these equations is to show explicitly that Maxwell's equations can be solved to give a complete description of the electric and magnetic fields generated by any time-dependent flow of electric charge. 

Also, they make a conceptual/linguistic point: we write down Maxwell's equations, then we usually say "a changing electric field generates a magnetic field", and vice versa.  But strictly speaking, this is not what happens; it's an invalid "cause and effect" sequence.  If there is a changing electric field at some space-time point, it's because at a previous light-separated point (meaning retarded point, in the sense used above) some charge was moving, so at that point there was a current, and that's where the magnetic field is coming from.  Perhaps we should say "a changing electric field implies a magnetic field", but in fact we'll probably continue with the sloppy language; it works fine as long as we get the equations right.

Here's how Jackson derives the general equations. He focusses on deriving equations with the wave-like form on the left-hand side, for the components of the electric and magnetic fields.  You can get these equations directly from Maxwell's equations, using the potentials in intermediate steps, but ending with source terms (meaning the right-hand side of the equation) that are just charge density, current density and their derivatives.

To construct the equation for $\nabla^2\vec{E} - (1/c^2)\ddot{\vec{E}}$, and similarly for $\vec{B}$, we use

$$ \vec{E} = -\nabla\varphi - \dot{\vec{A}}, \qquad \vec{B} = \nabla\times\vec{A} $$

to find

$$ \nabla^2\vec{E} - \frac{1}{c^2}\ddot{\vec{E}} = -\nabla\left(\nabla^2\varphi\right) - \nabla^2\dot{\vec{A}} + \frac{1}{c^2}\left(\nabla\ddot{\varphi} + \dddot{\vec{A}}\right) = -\nabla\left(\nabla^2\varphi - \frac{1}{c^2}\ddot{\varphi}\right) - \frac{\partial}{\partial t}\left(\nabla^2\vec{A} - \frac{1}{c^2}\ddot{\vec{A}}\right). $$

But we're in the Lorenz gauge, so $\nabla^2\vec{A} - \frac{1}{c^2}\ddot{\vec{A}} = -\mu_0\vec{j}$, and the $\vec{A}$ terms on the right-hand side add to $\mu_0\dot{\vec{j}}$; then from $\nabla^2\varphi - \frac{1}{c^2}\ddot{\varphi} = -\rho/\varepsilon_0$, the $\varphi$ terms add to $\nabla\rho/\varepsilon_0$.

Therefore,

$$ \nabla^2\vec{E} - \frac{1}{c^2}\ddot{\vec{E}} = \frac{1}{\varepsilon_0}\left(\nabla\rho + \frac{1}{c^2}\frac{\partial\vec{j}}{\partial t}\right). $$
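The regrouping above is pure operator algebra, valid for any potentials before the gauge condition is used, and can be checked symbolically (a sympy sketch of my own, not part of the notes; the $\vec{B}$ equation below regroups the same way):

```python
# Symbolic check (my sketch) that, for ANY phi and A, defining
# E = -grad(phi) - dA/dt gives
#   lap E - (1/c^2) d^2E/dt^2
#     = -grad(lap phi - (1/c^2) phi_tt) - d/dt (lap A - (1/c^2) A_tt),
# the regrouping used before substituting the Lorenz-gauge wave equations.
import sympy as sp

x, y, z, t, c = sp.symbols('x y z t c')
X = (x, y, z)
phi = sp.Function('phi')(x, y, z, t)
A = [sp.Function('A%d' % i)(x, y, z, t) for i in range(3)]

lap = lambda f: sum(sp.diff(f, xi, 2) for xi in X)

E = [-sp.diff(phi, xi) - sp.diff(Ai, t) for xi, Ai in zip(X, A)]
wave_phi = lap(phi) - sp.diff(phi, t, 2) / c**2
wave_A = [lap(Ai) - sp.diff(Ai, t, 2) / c**2 for Ai in A]

for i, xi in enumerate(X):
    lhs = lap(E[i]) - sp.diff(E[i], t, 2) / c**2
    rhs = -sp.diff(wave_phi, xi) - sp.diff(wave_A[i], t)
    assert sp.simplify(lhs - rhs) == 0
print("operator regrouping verified componentwise")
```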

The corresponding equation for the magnetic field,

$$ \nabla^2\vec{B} - \frac{1}{c^2}\frac{\partial^2\vec{B}}{\partial t^2} = -\mu_0\,\nabla\times\vec{j}, $$

follows immediately on applying $\nabla\times$ to the equation for $\vec{A}$.

We can solve these wave-type equations using the Green's function, just as we did for the potentials, to get

$$ \vec{E}(\vec{r},t) = -\frac{1}{4\pi\varepsilon_0}\int d^3r'\, \frac{1}{R}\left[\nabla'\rho + \frac{1}{c^2}\frac{\partial\vec{j}}{\partial t}\right]_{\text{ret}}, \qquad \vec{B}(\vec{r},t) = \frac{\mu_0}{4\pi}\int d^3r'\, \frac{1}{R}\left[\nabla'\times\vec{j}\,\right]_{\text{ret}}. $$

But this is where things get complicated!  Take that first term, $[\nabla'\rho]_{\text{ret}}$.  We're differentiating the charge density $\rho(\vec{r}\,')$ with respect to $\vec{r}\,'$, but it's inside the retarded bracket: so we're finding the difference in $\rho$ between $\vec{r}\,'+d\vec{r}\,'$ and $\vec{r}\,'$, but if these two points are at different distances from the point $\vec{r}$ at which we're finding the field (the left-hand side of the above equation), then the "ret" requirement means that in finding this derivative we're making these measurements at slightly different times, so, if $\rho$ is also varying in time, that gives a contribution.  To separate out this effect, imagine $\rho(\vec{r}\,')$ is uniform in space over some region, but increasing in time (for example, a charged gas that's being compressed).  For this gas, $\rho$ is constant in the region, so $\rho(\vec{r}\,'+d\vec{r}\,') - \rho(\vec{r}\,') = 0$ if they're measured simultaneously, but inside the ret they're not: if $\vec{r}\,'+d\vec{r}\,'$ is a distance $dR$ further from the field point than $\vec{r}\,'$, we'll be sensing it a time $dt = dR/c$ earlier, so the naive gradient of the retarded density, $\nabla'[\rho]_{\text{ret}}$, picks up a contribution $\dot{\rho}/c$ along $\hat{R}$ (the unit vector from source toward field point) that is not present in the true $[\nabla'\rho]_{\text{ret}}$.  That is,

$$ \left[\nabla'\rho\right]_{\text{ret}} = \nabla'\left[\rho\right]_{\text{ret}} - \frac{\hat{R}}{c}\left[\dot{\rho}\right]_{\text{ret}}. $$

Another way of seeing this, following Jackson, is to write $\left[f(\vec{r}\,',t')\right]_{\text{ret}} = f(\vec{r}\,', t - R/c)$, then realize that any differentiation with respect to the source coordinate $\vec{r}\,'$ must include a term from differentiating $R = |\vec{r}-\vec{r}\,'|$.  Thus,

$$ \left[\nabla'\times\vec{j}\,\right]_{\text{ret}} = \nabla'\times\left[\vec{j}\,\right]_{\text{ret}} + \left[\frac{\partial\vec{j}}{\partial t}\right]_{\text{ret}}\times\nabla'\!\left(t - \frac{R}{c}\right) = \nabla'\times\left[\vec{j}\,\right]_{\text{ret}} + \frac{1}{c}\left[\frac{\partial\vec{j}}{\partial t}\right]_{\text{ret}}\times\hat{R}. $$

Differentiation with respect to $t$ is not a problem, since it involves sequential sampling at the same point $\vec{r}\,'$.
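These retarded-derivative identities are easy to spot-check symbolically with a concrete test function (my own sympy sketch, not from the notes; the test density is made up).  Here is the gradient version; the curl identity works the same way component by component:

```python
# Spot check (my sketch, one concrete test density) of
#   [grad' rho]_ret = grad'([rho]_ret) - (Rhat/c) [drho/dt']_ret,
# where primes are source coordinates and Rhat points from r' to r.
import sympy as sp

x, y, z, xp, yp, zp, t, tp, c = sp.symbols('x y z xp yp zp t tp c', real=True)
R = sp.sqrt((x - xp)**2 + (y - yp)**2 + (z - zp)**2)
Rhat = [(x - xp) / R, (y - yp) / R, (z - zp) / R]
ret = lambda f: f.subs(tp, t - R / c)        # evaluate at the retarded time

rho = sp.exp(-(xp**2 + yp**2 + zp**2)) * sp.sin(tp)   # any test function works

for i, s in enumerate((xp, yp, zp)):
    lhs = ret(sp.diff(rho, s))                                 # [grad' rho]_ret
    rhs = sp.diff(ret(rho), s) - (Rhat[i] / c) * ret(sp.diff(rho, tp))
    assert sp.simplify(lhs - rhs) == 0
print("retarded-gradient identity checked")
```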

Putting these expressions into the formulas for the fields gives

$$ \vec{E}(\vec{r},t) = \frac{1}{4\pi\varepsilon_0}\int d^3r' \left\{ \frac{\hat{R}}{R^2}\left[\rho(\vec{r}\,',t')\right]_{\text{ret}} + \frac{\hat{R}}{cR}\left[\frac{\partial\rho(\vec{r}\,',t')}{\partial t'}\right]_{\text{ret}} - \frac{1}{c^2R}\left[\frac{\partial\vec{j}(\vec{r}\,',t')}{\partial t'}\right]_{\text{ret}} \right\}, $$

$$ \vec{B}(\vec{r},t) = \frac{\mu_0}{4\pi}\int d^3r' \left\{ \left[\vec{j}(\vec{r}\,',t')\right]_{\text{ret}}\times\frac{\hat{R}}{R^2} + \left[\frac{\partial\vec{j}(\vec{r}\,',t')}{\partial t'}\right]_{\text{ret}}\times\frac{\hat{R}}{cR} \right\}. $$
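As a concrete (if crude) illustration, here is a minimal numerical sketch of my own, not from the notes: a single oscillating current cell, with the charge-density terms set to zero purely to exercise the structure of the formulas (a physical source would also pile up charge at the ends, supplying the $\rho$ and $\dot{\rho}$ terms via charge conservation):

```python
# Minimal numerical sketch (mine, not from the notes) of Jefimenko's
# equations for one small source cell with an oscillating current along z.
# rho and drho/dt are zero for this test source, so only the current terms
# of the E expression survive.
import numpy as np

eps0 = 8.854e-12
mu0 = 4 * np.pi * 1e-7
c = 1.0 / np.sqrt(eps0 * mu0)
omega = 2 * np.pi * 1e8                         # 100 MHz
dV = 1e-6                                       # cell volume, m^3

def fields(r, t):
    R = np.linalg.norm(r)                       # source cell at the origin
    Rhat = r / R
    tr = t - R / c                              # retarded time for this cell
    j = np.array([0.0, 0.0, 1.0]) * np.cos(omega * tr)        # A/m^2
    jdot = np.array([0.0, 0.0, -omega]) * np.sin(omega * tr)
    E = (1.0 / (4 * np.pi * eps0)) * (-jdot / (c**2 * R)) * dV
    B = (mu0 / (4 * np.pi)) * (np.cross(j, Rhat) / R**2
                               + np.cross(jdot, Rhat) / (c * R)) * dV
    return E, B

E, B = fields(np.array([50.0, 0.0, 0.0]), 1e-7)   # field point far out on x
print("|E|/(c|B|) =", np.linalg.norm(E) / (c * np.linalg.norm(B)))  # ~1 far out
```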

To check that these equations make sense, look first at the static limit: they yield the Coulomb electrostatic law and the Biot-Savart magnetic law.  Now note that there are terms decreasing with distance as $1/R$ and others as $1/R^2$; the latter dominate close in.  These "near field" terms are in fact the same as the static terms, except for the time delay correction.

Assuming the charge and current distribution is confined to a finite volume, far away these $1/R^2$ terms become negligible.  Notice that the remaining $1/R$ terms only come from changing charge and current densities.  They are electromagnetic radiation.  But they don't have quite the right form: as we'll see in more detail later (and as hopefully you already know), in an electromagnetic wave the electric field, the magnetic field and the direction of propagation are all perpendicular to each other.  The above expression for the magnetic field is perpendicular to the direction of propagation, but the second term in the electric field is along that direction.  It turns out that this term is in fact canceled by a contribution from the third term, but this is a tricky calculation (see Kirk McDonald, or the Fourier treatment in Panofsky and Phillips).

(Note: actually, the field components parallel to the direction of propagation, although they fall off by an extra factor of $1/R$ relative to the perpendicular components, play one important role: as we'll discuss later, a system can radiate angular momentum (atoms making a transition usually do!), and thinking of that in terms of the Poynting vector, it is immediately evident that we need that parallel component.  The extra $1/R$ is compensated in the angular momentum by the $\vec{r}\times\vec{p}$ factor.)

The bottom line is that the electric far field is

$$ \vec{E}_{\text{radiation}}(\vec{r},t) = \frac{1}{4\pi\varepsilon_0}\int d^3r'\, \frac{1}{c^2R}\left(\left[\frac{\partial\vec{j}}{\partial t}\right]_{\text{ret}}\times\hat{R}\right)\times\hat{R}, $$

and since $\varepsilon_0 c^2 = 1/\mu_0$, this matches the magnetic field term by term: $\vec{E}_{\text{radiation}} = c\,\vec{B}_{\text{radiation}}\times\hat{R}$, so the two far fields are mutually perpendicular, transverse to the direction of propagation, and related by $|\vec{E}| = c|\vec{B}|$.
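The vector algebra behind that statement is easily checked numerically (my own sketch, not from the notes): with $\vec{a}$ standing in for $[\partial\vec{j}/\partial t]_{\text{ret}}$, the combination $(\vec{a}\times\hat{R})\times\hat{R}$ is always perpendicular to $\hat{R}$ and has the same magnitude as $\vec{a}\times\hat{R}$:

```python
# Check (my sketch) of the far-field vector structure: E ~ (a x Rhat) x Rhat
# is transverse to Rhat and matches the magnitude of B ~ a x Rhat, so the
# radiation fields satisfy E perpendicular to Rhat and |E| = c|B|.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    a = rng.normal(size=3)                  # stands for [dj/dt]_ret
    Rhat = rng.normal(size=3)
    Rhat /= np.linalg.norm(Rhat)
    b = np.cross(a, Rhat)                   # ~ B_rad direction
    e = np.cross(b, Rhat)                   # ~ E_rad direction
    assert abs(np.dot(e, Rhat)) < 1e-12                        # transverse
    assert abs(np.linalg.norm(e) - np.linalg.norm(b)) < 1e-12  # |E| = c|B|
print("far field: E transverse, |E| = c|B| (term by term)")
```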

Griffiths gets the same equations by a slightly different route: he begins by writing the fields in terms of the potentials, then feeds in the expressions for the potentials in terms of the Green's functions, then changes the source terms back to fields.

Remarkably, Feynman gives the results in Volume I of his Lectures, but, just to show how wonderful nature is, he doesn't derive them.

Kirk McDonald gives a full analysis of why, despite appearances, the far field has the standard wave form, with no component of the electric field in the direction of propagation.  This was done in terms of Fourier transforms by Panofsky and Phillips.