5. More Math: Helmholtz' Theorem

Introductory Remarks: Two Different Kinds of Vector Fields and Potentials

In the first part of this course, we'll study electrostatics: the many mathematical techniques for analyzing the $\mathbf{E}$ field arising from a time-independent electric charge density,

$$\nabla\cdot\mathbf{E} = \rho/\varepsilon_0.$$

The field $\mathbf{E}$ is conservative, meaning it can be written as the gradient of a scalar potential, $\mathbf{E} = -\nabla\varphi$.  Consequently

$$\nabla\times\mathbf{E} = 0,$$

since $\nabla\times\nabla$ of anything is zero.
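If you want to see this identity at work, here is a quick numerical check in Python; the sample potential is arbitrary, chosen purely for illustration:

```python
import math

# Quick numerical sanity check (illustration only): the curl of a gradient
# vanishes.  The potential phi below is an arbitrary sample function.

def phi(x, y, z):
    return x * y**2 + math.sin(x * z)

def grad_phi(x, y, z):
    # analytic gradient of phi
    return (y**2 + z * math.cos(x * z),
            2 * x * y,
            x * math.cos(x * z))

def curl(F, x, y, z, h=1e-4):
    # central-difference curl of the vector field F at (x, y, z)
    def d(i, j):  # partial of component i with respect to coordinate j
        p, m = [x, y, z], [x, y, z]
        p[j] += h
        m[j] -= h
        return (F(*p)[i] - F(*m)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

residual = max(abs(t) for t in curl(grad_phi, 0.7, -0.3, 1.2))
print(residual)  # ~0 (finite-difference truncation error only)
```

The residual is at the level of the finite-difference truncation error, not exactly zero.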

Secondly, we’ll move on to the magnetic field B .  Unlike electric charge, there is no magnetic charge, so

$$\nabla\cdot\mathbf{B} = 0.$$

This does not, of course, mean that $\mathbf{B} = 0$ everywhere.  We know that a steady current going down a wire generates a magnetic field circling around it.  This circular picture makes clear that $\mathbf{B}$ can't be written as the gradient of some scalar potential: it's not conservative.  If we had a magnetic charge, a monopole, and put it on a circular racetrack around a current, it would accelerate indefinitely (we'd have to maintain the current, of course; that's where the energy is coming from).  In fact, as we'll discuss in a few lectures,

$$\nabla\times\mathbf{B} = \mu_0\mathbf{J},$$

where $\mathbf{J}$ is the current density.

So we see that $\mathbf{E}$ and $\mathbf{B}$ are kind of opposites as far as div and curl are concerned.

There is a way of representing a magnetic field with a potential; it's a vector potential $\mathbf{A}$, and the equation is

$$\mathbf{B} = \nabla\times\mathbf{A}.$$

Notice this automatically excludes the presence of monopoles, since $\nabla\cdot\mathbf{B} = \nabla\cdot(\nabla\times\mathbf{A}) = 0.$
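The companion identity can be checked numerically the same way: the divergence of a curl vanishes. The vector potential below is again an arbitrary sample, purely for illustration:

```python
import math

# Companion check (illustration only): the divergence of a curl vanishes,
# so B = curl A automatically has div B = 0.  A is an arbitrary sample.

def A(x, y, z):
    return (y * z**2, x**2 * z, math.sin(x * y))

def partial(F, i, j, x, y, z, h=1e-3):
    # central-difference partial of component i of F with respect to coordinate j
    p, m = [x, y, z], [x, y, z]
    p[j] += h
    m[j] -= h
    return (F(*p)[i] - F(*m)[i]) / (2 * h)

def B(x, y, z):
    # B = curl A, itself computed by finite differences
    return (partial(A, 2, 1, x, y, z) - partial(A, 1, 2, x, y, z),
            partial(A, 0, 2, x, y, z) - partial(A, 2, 0, x, y, z),
            partial(A, 1, 0, x, y, z) - partial(A, 0, 1, x, y, z))

div_B = sum(partial(B, i, i, 0.4, 1.1, -0.8) for i in range(3))
print(abs(div_B))  # ~0: the monopole density vanishes
```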

Helmholtz' Theorem: Any Vector Field is a Sum of Fields of These Two Kinds

Now we’re ready for Helmholtz’ theorem:

Any reasonably well-behaved vector field (and they all are in physics) can be written as a sum of two fields, one the gradient of a scalar potential, the other the curl of a vector potential.

That is, for any field $\mathbf{F}(\mathbf{r})$ we can find a scalar potential $\varphi(\mathbf{r})$ and a vector potential $\mathbf{A}(\mathbf{r})$ such that

$$\mathbf{F}(\mathbf{r}) = -\nabla\varphi(\mathbf{r}) + \nabla\times\mathbf{A}(\mathbf{r}),$$

and furthermore, for time-dependent fields $\mathbf{F}(\mathbf{r}, t)$, both potentials are in general necessary.

It certainly isn’t obvious that any field can be represented as a sum like this, or why it might be useful.  But you’ll see that it is.

It's Easier to Understand in k-Space

Remarkably, on going to Fourier transform space, it all becomes clear. So without further ado, we write

$$\mathbf{F}(\mathbf{r}) = \int \mathbf{F}(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}\, \frac{d^3k}{(2\pi)^3}.$$

We'll assume that our integrals are sufficiently convergent that we can differentiate with respect to $\mathbf{r}$ under the integral sign, so for example

$$\nabla\cdot\mathbf{F}(\mathbf{r}) = \int i\mathbf{k}\cdot\mathbf{F}(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}\, \frac{d^3k}{(2\pi)^3}.$$


Similarly, for the curl of a vector field,

$$\nabla\times\mathbf{A}(\mathbf{r}) = \int i\mathbf{k}\times\mathbf{A}(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}\, \frac{d^3k}{(2\pi)^3},$$

and for the gradient of a scalar field,

$$\nabla\varphi(\mathbf{r}) = \int i\mathbf{k}\,\varphi(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}\, \frac{d^3k}{(2\pi)^3}.$$
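All three rules boil down to the fact that differentiating the plane wave $e^{i\mathbf{k}\cdot\mathbf{r}}$ pulls down a factor $i\mathbf{k}$. A quick numerical check for a single mode (the values of $\mathbf{k}$ and $\mathbf{r}$ below are arbitrary, for illustration only):

```python
import cmath

# One-mode check (illustration only; k and r are arbitrary): differentiating
# the plane wave exp(i k.r) brings down a factor i k, here for d/dx.

k = (1.3, -0.7, 2.1)
r = (0.4, 0.9, -0.2)
h = 1e-6

def plane_wave(x, y, z):
    return cmath.exp(1j * (k[0] * x + k[1] * y + k[2] * z))

num = (plane_wave(r[0] + h, r[1], r[2]) - plane_wave(r[0] - h, r[1], r[2])) / (2 * h)
exact = 1j * k[0] * plane_wave(*r)
print(abs(num - exact))  # ~0
```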

Consider now the Helmholtz expression for an arbitrary field:

$$\mathbf{F}(\mathbf{r}) = -\nabla\varphi(\mathbf{r}) + \nabla\times\mathbf{A}(\mathbf{r}).$$

The $\mathbf{k}$ Fourier component is

$$\mathbf{F}(\mathbf{k}) = -i\mathbf{k}\,\varphi(\mathbf{k}) + i\mathbf{k}\times\mathbf{A}(\mathbf{k}),$$

and it's evident that at each point in $\mathbf{k}$ space we can of course write any vector $\mathbf{F}(\mathbf{k})$ as a sum of components parallel to and perpendicular to $\mathbf{k}$,

$$\mathbf{F}(\mathbf{k}) = \mathbf{F}_\parallel(\mathbf{k}) + \mathbf{F}_\perp(\mathbf{k}),$$

where (as you should check!)

$$\mathbf{F}_\parallel(\mathbf{k}) = \frac{\mathbf{k}\left(\mathbf{k}\cdot\mathbf{F}(\mathbf{k})\right)}{k^2}, \qquad \mathbf{F}_\perp(\mathbf{k}) = \frac{\left(\mathbf{k}\times\mathbf{F}(\mathbf{k})\right)\times\mathbf{k}}{k^2}.$$

Comparing this with $\mathbf{F}(\mathbf{k}) = -i\mathbf{k}\,\varphi(\mathbf{k}) + i\mathbf{k}\times\mathbf{A}(\mathbf{k})$ gives

$$\varphi(\mathbf{k}) = \frac{i\,\mathbf{k}\cdot\mathbf{F}(\mathbf{k})}{k^2}, \qquad \mathbf{A}(\mathbf{k}) = \frac{i\,\mathbf{k}\times\mathbf{F}(\mathbf{k})}{k^2}.$$
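These formulas are easy to check numerically for a single Fourier component: plugging $\varphi(\mathbf{k})$ and $\mathbf{A}(\mathbf{k})$ back into $-i\mathbf{k}\varphi + i\mathbf{k}\times\mathbf{A}$ should reproduce $\mathbf{F}(\mathbf{k})$ exactly. In the sketch below, the values of $\mathbf{k}$ and $\mathbf{F}(\mathbf{k})$ are arbitrary:

```python
# Check the k-space potentials: with phi(k) = i k.F/k^2 and
# A(k) = i (k x F)/k^2, we should recover F = -i k phi + i k x A.
# The sample vectors below are arbitrary, chosen only for illustration.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def scale(c, a):
    return tuple(c * x for x in a)

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

k = (1.0, -2.0, 0.5)                  # arbitrary wavevector
F = (3.0 + 1.0j, -1.0 + 0.5j, 2.0 - 2.0j)  # one Fourier component of F
k2 = dot(k, k)

phi = 1j * dot(k, F) / k2             # scalar potential in k-space
A = scale(1j / k2, cross(k, F))       # vector potential in k-space

recon = add(scale(-1j * phi, k), scale(1j, cross(k, A)))
err = max(abs(x - y) for x, y in zip(recon, F))
print(err)  # ~0: the decomposition reproduces F exactly
```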

So, this is the proof of Helmholtz' theorem, although admittedly it's not very clear what it looks like in ordinary space, as opposed to $\mathbf{k}$-space.

To gain some insight, let's look at the terms one at a time. Suppose first we just have a longitudinal field (meaning that in momentum space the field is parallel to the momentum vector, so the field is the gradient of a potential; think electrostatics):

$$\mathbf{F}(\mathbf{k}) = \frac{\mathbf{k}\cdot\mathbf{F}(\mathbf{k})}{k^2}\,\mathbf{k}.$$

Now $i\mathbf{k}\cdot\mathbf{F}(\mathbf{k})$ is the Fourier transform of $\nabla\cdot\mathbf{F}$, so for an electrostatic field, with $\nabla\cdot\mathbf{E} = \rho/\varepsilon_0$, we have

$$\mathbf{E}(\mathbf{k}) = -i\mathbf{k}\,\frac{\rho(\mathbf{k})}{\varepsilon_0}\,\frac{1}{k^2}.$$

To get back into position space, we know that a factor $i\mathbf{k}$ just translates to $\nabla$ acting on the rest of the expression, which we can evaluate using the convolution theorem,

$$\int f(\mathbf{k})\,g(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}\,\frac{d^3k}{(2\pi)^3} = \int d^3r'\, f(\mathbf{r}')\, g(\mathbf{r}-\mathbf{r}').$$
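The discrete analogue of this theorem is easy to verify: the inverse DFT of a product of DFTs is a circular convolution. A small self-contained check (the sample sequences are arbitrary, illustration only):

```python
import cmath

# Discrete, 1D, periodic analogue of the convolution theorem (illustration
# only): the inverse DFT of f(k) g(k) equals the circular convolution of
# f and g.  The sequences below are arbitrary.

N = 8
f = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0, 3.0, -2.0]
g = [0.5, 0.0, 1.0, 2.0, -1.0, 1.5, 0.0, 0.25]

def dft(a):
    return [sum(a[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(A):
    return [sum(A[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

lhs = idft([a * b for a, b in zip(dft(f), dft(g))])
rhs = [sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)]
err = max(abs(a - b) for a, b in zip(lhs, rhs))
print(err)  # ~0
```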

It's a standard result (and in the previous lecture!) that the Fourier transform of $1/k^2$ is $1/4\pi r$, so Fourier transforming the equation for $\mathbf{E}(\mathbf{k})$ yields the familiar result

$$\mathbf{E}(\mathbf{r}) = -\nabla\varphi = -\frac{1}{4\pi\varepsilon_0}\,\nabla\int d^3r'\,\frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}.$$

We've expressed the longitudinal field as the gradient of a function which is itself an integral over the divergence of the original field. The point is that in electrostatics, if we know the divergence (the charge distribution), then from that we can find the field everywhere.

Analogously, for the transverse field, we express the field as the curl of an expression which is an integral over the curl of the original field; in magnetostatics, that would be the current density. The derivation is almost identical to that given above, and is left as an exercise.

Position Space Derivation

Note: this is the derivation in many texts.  I prefer the one above; I think the theorem is most naturally expressed in $\mathbf{k}$ space.  I include the following just for more vector calculus practice.

We use $\nabla^2 \dfrac{1}{|\mathbf{r}-\mathbf{r}'|} = -4\pi\delta(\mathbf{r}-\mathbf{r}')$ and $\nabla\times(\nabla\times{}) = \nabla(\nabla\cdot{}) - \nabla^2$.  (You need to know both these important identities; prove the second using $\varepsilon_{ijk}\varepsilon_{ilm} = \delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl}$.)
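The $\varepsilon$-$\delta$ identity itself can be confirmed by brute force over all index values; the little script below is just such a check (illustration only):

```python
# Brute-force check of eps_ijk eps_ilm = delta_jl delta_km - delta_jm delta_kl
# over all index values 0..2 (with the implied sum over i).

def eps(i, j, k):
    # Levi-Civita symbol for indices in 0..2
    return (i - j) * (j - k) * (k - i) // 2

def delta(a, b):
    return 1 if a == b else 0

ok = all(
    sum(eps(i, j, k) * eps(i, l, m) for i in range(3))
    == delta(j, l) * delta(k, m) - delta(j, m) * delta(k, l)
    for j in range(3) for k in range(3)
    for l in range(3) for m in range(3)
)
print(ok)  # True
```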

In the derivation below, we include surface terms explicitly; these will be useful in discussing conductors later.

First, write

$$\mathbf{F}(\mathbf{r}) = \int d^3r'\,\mathbf{F}(\mathbf{r}')\,\delta(\mathbf{r}-\mathbf{r}') = -\frac{1}{4\pi}\int d^3r'\,\mathbf{F}(\mathbf{r}')\,\nabla^2\frac{1}{|\mathbf{r}-\mathbf{r}'|} = -\frac{1}{4\pi}\,\nabla^2\int d^3r'\,\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|},$$

where we moved $\nabla^2$ to the outside in the last expression because it acts only on $\mathbf{r}$, not on $\mathbf{r}'$.

Now we use the identity $\nabla\times(\nabla\times{}) = \nabla(\nabla\cdot{}) - \nabla^2$ (the transform from Fourier space of $k^2\mathbf{F} = \mathbf{k}(\mathbf{k}\cdot\mathbf{F}) - \mathbf{k}\times(\mathbf{k}\times\mathbf{F})$) to write

$$\mathbf{F}(\mathbf{r}) = \frac{1}{4\pi}\,\nabla\times\left(\nabla\times\int d^3r'\,\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\right) - \frac{1}{4\pi}\,\nabla\int d^3r'\,\mathbf{F}(\mathbf{r}')\cdot\nabla\frac{1}{|\mathbf{r}-\mathbf{r}'|},$$

where in the last term we used (remembering $\nabla$ acts only on $\mathbf{r}$, not on $\mathbf{r}'$)

$$\nabla\cdot\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|} = \mathbf{F}(\mathbf{r}')\cdot\nabla\frac{1}{|\mathbf{r}-\mathbf{r}'|}.$$
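Incidentally, the Fourier-space identity $k^2\mathbf{F} = \mathbf{k}(\mathbf{k}\cdot\mathbf{F}) - \mathbf{k}\times(\mathbf{k}\times\mathbf{F})$ quoted above is just the BAC-CAB rule, and is easy to verify numerically (the sample vectors below are arbitrary):

```python
# Numerical check of the k-space form of curl curl = grad div - del squared:
# k^2 F = k (k.F) - k x (k x F).  Sample vectors are arbitrary.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

k = (1.5, -0.5, 2.0)
F = (0.3, 1.0, -2.0)

lhs = tuple(dot(k, k) * f for f in F)
kkF = tuple(dot(k, F) * ki for ki in k)
rhs = tuple(a - b for a, b in zip(kkF, cross(k, cross(k, F))))
err = max(abs(a - b) for a, b in zip(lhs, rhs))
print(err)  # ~0 (rounding only)
```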

If you stare at the full expression for F r , you'll see that it's already in the form we want — it's a curl of some vector minus the grad of some scalar. 

Since $\nabla\dfrac{1}{|\mathbf{r}-\mathbf{r}'|} = -\nabla'\dfrac{1}{|\mathbf{r}-\mathbf{r}'|}$, where the prime denotes differentiation with respect to $\mathbf{r}'$, the last term can be further rearranged, then integrated by parts, using the divergence theorem to get a surface integral:

$$\begin{aligned}
\frac{1}{4\pi}\int d^3r'\,\mathbf{F}(\mathbf{r}')\cdot\nabla\frac{1}{|\mathbf{r}-\mathbf{r}'|}
&= -\frac{1}{4\pi}\int d^3r'\,\mathbf{F}(\mathbf{r}')\cdot\nabla'\frac{1}{|\mathbf{r}-\mathbf{r}'|} \\
&= \frac{1}{4\pi}\int d^3r'\,\frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|} - \frac{1}{4\pi}\int d^3r'\,\nabla'\cdot\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|} \\
&= \frac{1}{4\pi}\int d^3r'\,\frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|} - \frac{1}{4\pi}\oint_S \frac{\mathbf{F}(\mathbf{r}')\cdot\hat{\mathbf{n}}}{|\mathbf{r}-\mathbf{r}'|}\,dS' \\
&\equiv \varphi(\mathbf{r}),
\end{aligned}$$

this last line being our definition of $\varphi(\mathbf{r})$.

Looking back at our expression for $\mathbf{F}(\mathbf{r})$, we see this term contributes $-\nabla\varphi(\mathbf{r})$.

(If $\mathbf{F}(\mathbf{r})$ were a purely electrostatic field, notice this is the expression for the potential from volume and surface charge.)

To make progress interpreting the $\nabla\times$ term, we need another theorem. We just used the divergence theorem: for a vector field $\mathbf{G}(\mathbf{r})$, $\int_V \nabla\cdot\mathbf{G}\,dV = \oint_S \mathbf{G}\cdot\hat{\mathbf{n}}\,dS$.  If we now replace $\mathbf{G}(\mathbf{r})$ with $\mathbf{G}(\mathbf{r})\times\mathbf{C}$, where $\mathbf{C}$ is some constant vector, we find a new theorem:

$$\int_V \nabla\times\mathbf{G}\,dV = \oint_S \hat{\mathbf{n}}\times\mathbf{G}\,dS.$$

Exercise:  Check this.  (If something's true for an arbitrary constant vector $\mathbf{C}$, the $\mathbf{C}$ can be dropped, leaving a vector equation.)
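In case you'd like to compare with your own check, here is one route; everything rests on $\mathbf{C}$ being constant:

$$\nabla\cdot(\mathbf{G}\times\mathbf{C}) = \mathbf{C}\cdot(\nabla\times\mathbf{G}) - \mathbf{G}\cdot(\nabla\times\mathbf{C}) = \mathbf{C}\cdot(\nabla\times\mathbf{G}),$$

while on the surface side the scalar triple product gives $(\mathbf{G}\times\mathbf{C})\cdot\hat{\mathbf{n}} = \mathbf{C}\cdot(\hat{\mathbf{n}}\times\mathbf{G})$, so the divergence theorem applied to $\mathbf{G}\times\mathbf{C}$ reads

$$\mathbf{C}\cdot\int_V \nabla\times\mathbf{G}\,dV = \mathbf{C}\cdot\oint_S \hat{\mathbf{n}}\times\mathbf{G}\,dS,$$

and since $\mathbf{C}$ is arbitrary, the stated vector equation follows.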

Now, using in the third line $\nabla'\times\dfrac{\mathbf{F}}{|\mathbf{r}-\mathbf{r}'|} = \nabla'\dfrac{1}{|\mathbf{r}-\mathbf{r}'|}\times\mathbf{F} + \dfrac{1}{|\mathbf{r}-\mathbf{r}'|}\,\nabla'\times\mathbf{F}$,

$$\begin{aligned}
\frac{1}{4\pi}\,\nabla\times\int \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r'
&= \frac{1}{4\pi}\int \nabla\frac{1}{|\mathbf{r}-\mathbf{r}'|}\times\mathbf{F}(\mathbf{r}')\,d^3r' \\
&= -\frac{1}{4\pi}\int \nabla'\frac{1}{|\mathbf{r}-\mathbf{r}'|}\times\mathbf{F}(\mathbf{r}')\,d^3r' \\
&= \frac{1}{4\pi}\int \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r' - \frac{1}{4\pi}\int \nabla'\times\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r' \\
&= \frac{1}{4\pi}\int \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r' + \frac{1}{4\pi}\oint_S \frac{\mathbf{F}(\mathbf{r}')\times\hat{\mathbf{n}}}{|\mathbf{r}-\mathbf{r}'|}\,dS' \\
&\equiv \mathbf{A}(\mathbf{r}),
\end{aligned}$$

again, this last line defines the vector field $\mathbf{A}(\mathbf{r})$.

Putting the results for $\varphi(\mathbf{r})$, $\mathbf{A}(\mathbf{r})$ together, we have the general vector field in the desired form,

$$\mathbf{F}(\mathbf{r}) = \nabla\times\mathbf{A}(\mathbf{r}) - \nabla\varphi(\mathbf{r}).$$

A common situation is one where the integrands fall off sufficiently quickly at large distances that, if we integrate over all space, we can ignore the distant surface contributions, to find (with no finite surfaces present)

$$\mathbf{F}(\mathbf{r}) = \nabla\times\mathbf{A}(\mathbf{r}) - \nabla\varphi(\mathbf{r}) = \frac{1}{4\pi}\,\nabla\times\int \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r' - \frac{1}{4\pi}\,\nabla\int d^3r'\,\frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}.$$

If there are surfaces, we simply add back in the surface terms from the earlier equations, representing surface charge and current densities.