39 Solving Maxwell's Equations: Green’s Functions, Jefimenko
The Story So Far...
We have established that the magnetic and electric fields can be expressed in terms of a vector potential $\vec{A}$ and a scalar potential $\varphi$:
$$\vec{B} = \vec{\nabla}\times\vec{A}, \qquad \vec{E} = -\vec{\nabla}\varphi - \frac{\partial \vec{A}}{\partial t},$$
and that Maxwell's equations are equivalent to linear differential equations for these potentials, having as source terms the charge and current distributions:
$$\nabla^2\varphi - \frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2} = -\rho/\varepsilon_0, \qquad \nabla^2\vec{A} - \frac{1}{c^2}\frac{\partial^2\vec{A}}{\partial t^2} = -\mu_0\vec{j}.$$
Here the potentials have been chosen to satisfy the Lorenz gauge condition
$$\vec{\nabla}\cdot\vec{A} + \frac{1}{c^2}\frac{\partial\varphi}{\partial t} = 0$$
(this can always be done).
Green's Function for a Single-Frequency Point Source
Clearly these equations for $\varphi, \vec{A}$ have plane wave solutions in empty space. But we need a full solution, including regions with nonzero charge and current densities.
The basic strategy is to Fourier transform with respect to time only, then solve the resulting spatial equation for a particular frequency. This is of course an important case in the real world: an oscillating charge/current is an antenna, and the signal is basically a single frequency, with information transmitted by modulating the amplitude or the frequency, or by digitizing.
Since the equations are linear, any solution can then be found by a superposition of single-frequency terms.
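To see why the single-frequency case is such a useful building block, here is a minimal numpy sketch (my own illustration, not from the notes; the 100 kHz carrier and 1 kHz modulation are arbitrary choices) showing that an amplitude-modulated carrier is a superposition of frequencies tightly clustered around the carrier:

```python
import numpy as np

# Toy AM signal: a carrier whose amplitude is modulated by a slow tone.
fs, T = 1.0e6, 0.02                    # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
carrier, audio = 100e3, 1e3            # assumed 100 kHz carrier, 1 kHz tone

signal = (1 + 0.5 * np.cos(2 * np.pi * audio * t)) * np.cos(2 * np.pi * carrier * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# All the spectral weight sits at the carrier and its two sidebands
for f in freqs[spectrum > 0.1 * spectrum.max()]:
    print(f'{f:.0f} Hz')               # -> 99000, 100000, 101000 Hz
```

The power is concentrated at the carrier and the sidebands at carrier $\pm$ 1 kHz, so solving the single-frequency problem and superposing really does cover the realistic antenna signal.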
Writing the Fourier transform (with respect to time only) pair
$$\varphi(\vec{r},t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\varphi_\omega(\vec{r})\,e^{-i\omega t}\,d\omega, \qquad \varphi_\omega(\vec{r}) = \int_{-\infty}^{\infty}\varphi(\vec{r},t)\,e^{i\omega t}\,dt,$$
the spatial differential equation for $\varphi_\omega(\vec{r})$ is the Fourier transform of
$$\nabla^2\varphi - \frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2} = -\rho/\varepsilon_0,$$
that is,
$$\left(\nabla^2 + k^2\right)\varphi_\omega(\vec{r}) = -\rho_\omega(\vec{r})/\varepsilon_0, \qquad k = \omega/c.$$
So, for a charge distribution oscillating at $\omega$, the actual time-dependent potential is the real part of $\varphi_\omega(\vec{r})e^{-i\omega t}$. The time-independent function $\varphi_\omega(\vec{r})$ will be complex, because there will almost always be phase differences between the oscillations at different points in space.
As we discovered in electrostatics, the way to solve equations of this type is to introduce Green’s functions. Recall that the Green's function is essentially the inverse of a differential operator, meaning, purely formally, that we want to "solve" the above equation by writing
$$\varphi_\omega(\vec{r}) = -\left(\nabla^2 + k^2\right)^{-1}\rho_\omega(\vec{r})/\varepsilon_0.$$
Again, we have to figure out how to interpret this formal expression.
Since this is a linear theory, we only have to solve the equation for a point charge, then we can get the general result by integrating our solution over the actual charge distribution.
That's where the Green's function comes in: recall it's defined by (with appropriate boundary conditions)
$$\left(\nabla^2 + k^2\right)G_\omega(\vec{r},\vec{r}') = -\delta^3(\vec{r}-\vec{r}'),$$
so that
$$\varphi_\omega(\vec{r}) = \frac{1}{\varepsilon_0}\int G_\omega(\vec{r},\vec{r}')\,\rho_\omega(\vec{r}')\,d^3r'.$$
For now, we'll take the case of charges and currents in otherwise empty space, so no finite boundaries.
With no boundaries, $G_\omega(\vec{r},\vec{r}')$ is a function of $\vec{r}-\vec{r}'$, and in fact only of the distance $r = |\vec{r}-\vec{r}'|$, since there is no preferred direction.
Now, from electrostatics, Poisson’s equation $\nabla^2\varphi = -\rho/\varepsilon_0$ has the solution
$$\varphi(\vec{r}) = \frac{1}{4\pi\varepsilon_0}\int\frac{\rho(\vec{r}')}{|\vec{r}-\vec{r}'|}\,d^3r',$$
so the Green's function for the static case is $1/4\pi|\vec{r}-\vec{r}'|$.
Our Green’s function must look a lot like this in the limit $r \to 0$, because close enough to the origin the $k^2$ term becomes negligible: think of the delta function on the right-hand side as the limit of a very sharp peak; the rapid change in slope ensures that, close in, the $\nabla^2 G$ is far more important than the $k^2 G$.
So we try a solution of the form $G_\omega(r) = f(r)/4\pi r$, with $f(r) \to 1$ as $r \to 0$. The differential equation becomes (for $r > 0$)
$$\frac{d^2 f(r)}{dr^2} + k^2 f(r) = 0,$$
and the solutions are $f(r) = e^{\pm ikr}$, that is,
$$G_\omega^\pm(r) = \frac{e^{\pm ikr}}{4\pi r}.$$
Remember this solution is for a particular time Fourier component $e^{-i\omega t}$: to go back from frequency to time, recall $k = \omega/c$, and since our solution has a single frequency component, to find the time-dependent single-frequency Green's function we just multiply by $e^{-i\omega t}$, so
$$G_\omega^\pm(r,t) = \frac{e^{\pm ikr}}{4\pi r}\,e^{-i\omega t} = \frac{1}{4\pi r}\,e^{-i\omega(t \mp r/c)}.$$
This describes spherical waves: the choice of the $+$ sign represents a wave going out from the origin; the $-$ sign represents a wave going into the origin. When light is scattered by a molecule, the ingoing light can be represented in terms of ingoing spherical waves, so the minus sign is relevant initially; the subsequent outgoing wave of course has the positive sign.
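As a quick consistency check, here is a short sympy sketch (mine, not part of the notes) verifying that the outgoing solution $e^{ikr}/4\pi r$ satisfies the homogeneous Helmholtz equation away from the origin, where the delta function doesn't contribute:

```python
import sympy as sp

r, k = sp.symbols('r k', positive=True)

# Outgoing spherical wave Green's function, G(r) = e^{ikr} / (4 pi r)
G = sp.exp(sp.I * k * r) / (4 * sp.pi * r)

# For a spherically symmetric function, nabla^2 G = (1/r) d^2(r G)/dr^2
laplacian_G = sp.diff(r * G, r, 2) / r

# Away from the origin, (nabla^2 + k^2) G should vanish
print(sp.simplify(laplacian_G + k**2 * G))   # -> 0
```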
Green's Function for a Point Source in Both Space and Time
To get the full picture of potentials generated by time-dependent charge and current distributions, we need the Green’s function with a delta function source in both space and time:
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)G(\vec{r},t;\vec{r}',t') = -\delta^3(\vec{r}-\vec{r}')\,\delta(t-t').$$
(This corresponds to charge appearing for a moment at the origin: not a physical situation by itself, but the potential from moving charges can be constructed from an integral over such functions.)
Again, we'll take the case of no finite boundaries, so $G = G(\vec{r}-\vec{r}', t-t')$.
The equation is solved by noting that the delta function pulse in time has a Fourier transform in which all frequencies appear equally:
$$\delta(t-t') = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-i\omega(t-t')}\,d\omega,$$
and therefore (with no finite boundaries) we can just integrate over the contributions from each frequency, using the single-frequency Green's function we found above:
$$G^\pm(\vec{r}-\vec{r}',t-t') = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{\pm ik|\vec{r}-\vec{r}'|}}{4\pi|\vec{r}-\vec{r}'|}\,e^{-i\omega(t-t')}\,d\omega, \qquad k = \omega/c.$$
The integral over $\omega$ is the delta function, that is,
$$G^\pm(\vec{r}-\vec{r}',t-t') = \frac{1}{4\pi|\vec{r}-\vec{r}'|}\,\delta\!\left(t-t' \mp \frac{|\vec{r}-\vec{r}'|}{c}\right).$$
The choice $G^+$ is called the retarded Green’s function: the scalar potential equation is formally solved with this Green’s function to give
$$\varphi(\vec{r},t) = \frac{1}{\varepsilon_0}\int G^+(\vec{r}-\vec{r}',t-t')\,\rho(\vec{r}',t')\,d^3r'\,dt',$$
and, with the simple form of the Green’s function we’ve found, this can be written
$$\varphi(\vec{r},t) = \frac{1}{4\pi\varepsilon_0}\int\frac{\left[\rho(\vec{r}',t')\right]_{\text{ret}}}{|\vec{r}-\vec{r}'|}\,d^3r',$$
where the “ret” for retarded means at the time $t' = t - |\vec{r}-\vec{r}'|/c$, that is, such that a light signal emitted from $\vec{r}'$ at time $t'$ would reach $\vec{r}$ at time $t$.
So the solutions to the wave equations
$$\nabla^2\varphi - \frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2} = -\rho/\varepsilon_0, \qquad \nabla^2\vec{A} - \frac{1}{c^2}\frac{\partial^2\vec{A}}{\partial t^2} = -\mu_0\vec{j}$$
are
$$\varphi(\vec{r},t) = \frac{1}{4\pi\varepsilon_0}\int\frac{\left[\rho(\vec{r}',t')\right]_{\text{ret}}}{|\vec{r}-\vec{r}'|}\,d^3r', \qquad \vec{A}(\vec{r},t) = \frac{\mu_0}{4\pi}\int\frac{\left[\vec{j}(\vec{r}',t')\right]_{\text{ret}}}{|\vec{r}-\vec{r}'|}\,d^3r'.$$
Recall that we are working in the Lorenz gauge, $\vec{\nabla}\cdot\vec{A} + \frac{1}{c^2}\frac{\partial\varphi}{\partial t} = 0$.
So, given the time and space charge and current distribution, we can find the scalar and vector potentials.
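As a concrete illustration, here is a minimal numpy sketch (my own toy setup, not from the notes; the source positions, charge amplitude and frequency are all arbitrary assumptions) that evaluates the retarded scalar potential of two point charges oscillating out of phase, sampling each charge at its own retarded time $t - R/c$:

```python
import numpy as np

eps0, c = 8.854e-12, 3.0e8
omega = 2 * np.pi * 1e8                      # assumed 100 MHz oscillation

sources = [                                  # (fixed position, charge vs. time)
    (np.array([0.0, 0.0,  0.05]), lambda t:  1e-9 * np.cos(omega * t)),
    (np.array([0.0, 0.0, -0.05]), lambda t: -1e-9 * np.cos(omega * t)),
]

def phi_retarded(r, t):
    """phi(r,t) = (1/4 pi eps0) * sum_i q_i(t - R_i/c) / R_i."""
    total = 0.0
    for r_src, q_of_t in sources:
        R = np.linalg.norm(r - r_src)        # distance to this source
        total += q_of_t(t - R / c) / (4 * np.pi * eps0 * R)  # retarded time
    return total

# Potential 10 m up the z axis, at t = 100 ns
print(phi_retarded(np.array([0.0, 0.0, 10.0]), 1e-7))
```

Each source is sampled at a different retarded time, which is the whole content of the $\left[\;\right]_{\text{ret}}$ notation.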
Recall that in the static case, we found similar expressions for the electric field as an integral over the charge distribution,
$$\vec{E}(\vec{r}) = \frac{1}{4\pi\varepsilon_0}\int\rho(\vec{r}')\,\frac{\vec{r}-\vec{r}'}{|\vec{r}-\vec{r}'|^3}\,d^3r',$$
and the Biot-Savart law for the magnetic field in terms of the current distribution. You might think that for the time-dependent case these would generalize as the potentials did, by just putting in the retarded times. But they don't: essentially because the fields come from differentiating the potentials, and differentiating the retarded time of a moving source brings in extra spatial contributions. It can get very messy.
*Deriving General Equations for the Fields from Any Charge and Current Distribution
The general equations for the fields in terms of sources are cumbersome and not often used: real problems are usually simpler. In Jackson and Griffiths, the general equations are called Jefimenko's equations, from Jefimenko's 1966 text, although they were in fact first written down in 1912 by Schott, a student of J.J. Thomson.
The real point of going over these equations is to show explicitly that Maxwell's equations can be solved to give a complete description of the electric and magnetic fields generated by any time-dependent flow of electric charge.
Also, they make a conceptual/linguistic point: we write down Maxwell’s equations, then we usually say “a changing electric field generates a magnetic field”, and vice versa. But strictly speaking, this is not what happens; it’s an invalid “cause and effect” sequence. If there is a changing electric field at some space-time point, it’s because at a previous light-separated point (meaning retarded point, in the sense used above) some charge was moving, so at that point there was a current, and that’s where the magnetic field is coming from. Perhaps we should say “a changing electric field implies a magnetic field”, but in fact we’ll probably continue with the sloppy language; it works fine, as long as we get the equations right.
Here's how Jackson derives the general equations. He focusses on deriving equations with the wave-like form $\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)$ on the left-hand side, for the components of the electric and magnetic fields. You can get these equations directly from Maxwell's equations, using the potentials in intermediate steps, but ending with source terms (meaning the right-hand side of the equation) that are just charge density, current density and their derivatives.
To construct the equations for $\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\vec{E}$, and similarly for $\vec{B}$, we use
$$\vec{E} = -\vec{\nabla}\varphi - \frac{\partial\vec{A}}{\partial t}$$
to find
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\vec{E} = -\vec{\nabla}\left[\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\varphi\right] - \frac{\partial}{\partial t}\left[\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\vec{A}\right],$$
but we're in the Lorenz gauge, so the potentials satisfy the wave equations above: the $\varphi$ term on the right-hand side adds $-\vec{\nabla}\left(-\rho/\varepsilon_0\right) = \vec{\nabla}\rho/\varepsilon_0$, then from $\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\vec{A} = -\mu_0\vec{j}$ the $\vec{A}$ term adds $\mu_0\,\partial\vec{j}/\partial t$.
Therefore,
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\vec{E} = \frac{1}{\varepsilon_0}\left(\vec{\nabla}\rho + \frac{1}{c^2}\frac{\partial\vec{j}}{\partial t}\right).$$
The corresponding equation for the magnetic field,
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\vec{B} = -\mu_0\vec{\nabla}\times\vec{j},$$
follows immediately on applying $\vec{\nabla}\times$ to the equation for $\vec{A}$, since $\vec{B} = \vec{\nabla}\times\vec{A}$.
We can solve these wave-type equations using the retarded Green's function, just as we did for the potentials, to get
$$\vec{E}(\vec{r},t) = -\frac{1}{4\pi\varepsilon_0}\int d^3r'\,\frac{1}{|\vec{r}-\vec{r}'|}\left[\vec{\nabla}'\rho(\vec{r}',t') + \frac{1}{c^2}\frac{\partial\vec{j}(\vec{r}',t')}{\partial t'}\right]_{\text{ret}},$$
$$\vec{B}(\vec{r},t) = \frac{\mu_0}{4\pi}\int d^3r'\,\frac{1}{|\vec{r}-\vec{r}'|}\left[\vec{\nabla}'\times\vec{j}(\vec{r}',t')\right]_{\text{ret}}.$$
But this is where things get complicated! Take that first term, $\left[\vec{\nabla}'\rho\right]_{\text{ret}}$. We're differentiating the charge density with respect to $\vec{r}'$, but it's inside the retarded bracket: so, we're finding the difference in $\rho$ between two neighboring points $\vec{r}'_1$ and $\vec{r}'_2$, but if these two points are at different distances from the point $\vec{r}$ at which we're finding the field (left-hand side of above equation) then the "ret" requirement will mean that in finding this derivative we're making these measurements at slightly different times, so, if $\rho$ is also varying in time, that gives a contribution. To separate out this effect, imagine $\rho$ is uniform in space over some region, but increasing in time (for example, a charged gas that's being compressed). For this gas, $\vec{\nabla}'\rho = 0$: $\rho$ is constant in the region, so the two measurements agree if they're made simultaneously, but inside the $\left[\;\right]_{\text{ret}}$ they're not: if $\vec{r}'_2$ is a distance $\Delta r$ radially further out (from $\vec{r}$) than $\vec{r}'_1$, we'll be sensing it a time $\Delta r/c$ earlier, so we will find a contribution in the radial direction to the gradient. That is,
$$\vec{\nabla}'\left[\rho\right]_{\text{ret}} = \left[\vec{\nabla}'\rho\right]_{\text{ret}} + \frac{\hat{R}}{c}\left[\frac{\partial\rho}{\partial t'}\right]_{\text{ret}}, \qquad \hat{R} = \frac{\vec{r}-\vec{r}'}{|\vec{r}-\vec{r}'|}.$$
Another way of seeing this, following Jackson, is to write $\left[\rho\right]_{\text{ret}} = \rho\left(\vec{r}', t - |\vec{r}-\vec{r}'|/c\right)$, then realize that any differentiation with respect to $\vec{r}'$ must include a term from differentiating the retarded time $t' = t - |\vec{r}-\vec{r}'|/c$. Thus
$$\vec{\nabla}'\left[\rho\right]_{\text{ret}} = \left[\vec{\nabla}'\rho\right]_{\text{ret}} + \frac{\hat{R}}{c}\left[\frac{\partial\rho}{\partial t'}\right]_{\text{ret}}.$$
Differentiation with respect to $t$ is not a problem, since it involves sequential sampling at the same point $\vec{r}'$:
$$\frac{\partial}{\partial t}\left[\rho\right]_{\text{ret}} = \left[\frac{\partial\rho}{\partial t'}\right]_{\text{ret}}.$$
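Since this bracket manipulation is easy to get wrong, here is a small sympy check (my own, with an arbitrary assumed test density $\rho(z',t') = e^{-az'^2}\sin\omega t'$) of the gradient identity in one dimension, where $\hat{R}$ reduces to $+1$ for a field point beyond the source:

```python
import sympy as sp

# Check: grad'[rho]_ret = [grad' rho]_ret + (Rhat/c)[d rho/dt']_ret
# in 1D, field point at z, source at zp < z, so R = z - zp, Rhat = +1.
z, zp, t, c, a, w = sp.symbols('z zp t c a w', positive=True)

def rho(zs, ts):
    """A concrete, smooth test density rho(z', t') (assumed form)."""
    return sp.exp(-a * zs**2) * sp.sin(w * ts)

t_ret = t - (z - zp) / c                    # retarded time

lhs = sp.diff(rho(zp, t_ret), zp)           # grad' of the retarded bracket

# [grad' rho]_ret: space derivative first, then evaluate at t_ret
s = sp.Symbol('s')
grad_rho_ret = sp.diff(rho(s, t_ret), s).subs(s, zp)
# [d rho/dt']_ret: time derivative first, then evaluate at t_ret
u = sp.Symbol('u')
drho_dt_ret = sp.diff(rho(zp, u), u).subs(u, t_ret)

rhs = grad_rho_ret + drho_dt_ret / c        # Rhat = +1 on this side

print(sp.simplify(lhs - rhs))               # -> 0
```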
Putting these expressions into the formulas for the fields gives
$$\vec{E}(\vec{r},t) = \frac{1}{4\pi\varepsilon_0}\int d^3r'\left\{\frac{\hat{R}}{R^2}\left[\rho(\vec{r}',t')\right]_{\text{ret}} + \frac{\hat{R}}{cR}\left[\frac{\partial\rho(\vec{r}',t')}{\partial t'}\right]_{\text{ret}} - \frac{1}{c^2R}\left[\frac{\partial\vec{j}(\vec{r}',t')}{\partial t'}\right]_{\text{ret}}\right\},$$
$$\vec{B}(\vec{r},t) = \frac{\mu_0}{4\pi}\int d^3r'\left\{\left[\vec{j}(\vec{r}',t')\right]_{\text{ret}}\times\frac{\hat{R}}{R^2} + \left[\frac{\partial\vec{j}(\vec{r}',t')}{\partial t'}\right]_{\text{ret}}\times\frac{\hat{R}}{cR}\right\},$$
where $R = |\vec{r}-\vec{r}'|$.
To check that these equations make sense, look first at the static limit: they yield the Coulomb electrostatic law and the Biot-Savart magnetic law. Now note that there are terms decreasing with distance as $1/R$ and others as $1/R^2$; the latter dominate close in. These "near field" terms are in fact the same as the static terms, except for the time delay correction.
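To make the static check explicit (a one-line verification, spelled out here rather than in the original text): setting all time derivatives to zero, the retarded brackets become trivial and the equations above reduce to
$$\vec{E}(\vec{r}) = \frac{1}{4\pi\varepsilon_0}\int d^3r'\,\rho(\vec{r}')\,\frac{\hat{R}}{R^2}, \qquad \vec{B}(\vec{r}) = \frac{\mu_0}{4\pi}\int d^3r'\,\frac{\vec{j}(\vec{r}')\times\hat{R}}{R^2},$$
which are exactly Coulomb's law and the Biot-Savart law.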
Assuming the charge and current distribution is confined to a finite volume, far away the $1/R^2$ terms become negligible. Notice that the remaining $1/R$ terms only come from changing charge and current densities. They are electromagnetic radiation. But they don’t have quite the right form: as we'll see in more detail later (and as, hopefully, you already know) in an electromagnetic wave, the electric field, the magnetic field and the direction of propagation are all perpendicular to each other. The above expression for the magnetic field is perpendicular to the direction of propagation, but the second term in the electric field is along that direction. It turns out that this term is in fact canceled by a contribution from the third term, but this is a tricky calculation (see Kirk McDonald, or the Fourier treatment in Panofsky and Phillips).
(Note: Actually, the field components parallel to the direction of propagation, although they are decreasing by a factor $1/R$ relative to the perpendicular components, play one important role: as we'll discuss later, a system can radiate angular momentum (atoms making a transition usually do!), and thinking of that in terms of the Poynting vector it is immediately evident we need that parallel component. The extra factor of $1/R$ is compensated in the angular momentum by the moment arm factor $R$.)
The bottom line is that the electric far field is
$$\vec{E}(\vec{r},t) = \frac{\mu_0}{4\pi}\int d^3r'\,\frac{\left(\left[\dfrac{\partial\vec{j}(\vec{r}',t')}{\partial t'}\right]_{\text{ret}}\times\hat{R}\right)\times\hat{R}}{R},$$
and since this is just $c\vec{B}\times\hat{R}$, it matches the magnetic far field found above: $\vec{E}$, $\vec{B}$ and $\hat{R}$ form the standard mutually perpendicular triad.
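A quick numeric sanity check (my own, with a random direction and a random stand-in vector for $\left[\partial\vec{j}/\partial t'\right]_{\text{ret}}$) that this far-field triad really is mutually perpendicular:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=3)                 # stands in for [dj/dt']_ret
Rhat = rng.normal(size=3)
Rhat /= np.linalg.norm(Rhat)           # random propagation direction

B_dir = np.cross(a, Rhat)              # direction of the far-field B
E_dir = np.cross(B_dir, Rhat)          # E ~ c B x Rhat = (a x Rhat) x Rhat

print(np.dot(E_dir, Rhat))             # ~ 0: E transverse
print(np.dot(B_dir, Rhat))             # ~ 0: B transverse
print(np.dot(E_dir, B_dir))            # ~ 0: E perpendicular to B
```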
Griffiths gets the same equations by a slightly different route: he begins by writing the fields in terms of the potentials, then feeds in the expressions for the potentials in terms of the Green's functions, then changes the source terms back to fields.
Remarkably, Feynman gives the results in Volume I of his Lectures, but just to say how wonderful nature is; he doesn't derive them.
Kirk McDonald gives a full analysis of why, despite appearances, the far field has the standard wave form, with no component of the electric field in the direction of propagation. This was done in terms of Fourier transforms by Panofsky and Phillips.