Welcome back. Over the last few weeks, we have studied what I hope are some fascinating results in mathematics. We have developed the theory behind multivariable calculus and applied it to describe well-behaved fluids. We have also put the classic Riemann integral on rigorous footing and proven the second Fundamental Theorem of Calculus. This week, we will continue our journey by discussing Maxwell’s equations, which are unequivocally the most important equations in electricity and magnetism. We will then zoom in on one of Maxwell’s equations, which on its own is called Gauss’ law, and relate that equation to a more general partial differential equation called Poisson’s equation. Lastly, we will go over how to solve Poisson’s equation using eigenfunctions of the Laplacian operator.

When we study physical phenomena, we frequently must deal with electric fields, denoted by **E**(x,y,z,t), and magnetic fields, denoted by **B**(x,y,z,t). These are vector fields that assign an electric field vector and a magnetic field vector to each point in space. Electricity and magnetism is a rich topic that we sadly do not have much time to discuss in detail. The most important fact for us to know right now is that in 1865, the Scottish mathematician James Clerk Maxwell published four equations that tell us how electric fields and magnetic fields behave. The first of these equations appears as follows:

∇ • **B** = 0.

Because the divergence of the magnetic field is zero everywhere, **B** cannot have any sources or sinks. Physically speaking, this means that there cannot be any magnetic monopoles: every magnet has both a north pole and a south pole.

The second of Maxwell’s equations is

∇ • **E** = ρ/ε_{0}.

This is Gauss’ law, and it tells us that the divergence of the electric field is equal to the charge per unit volume, denoted by ρ, divided by the permittivity of free space, a constant denoted by ε_{0}. Using the Divergence Theorem, we can show that the flux of **E** through any closed surface is precisely the total charge enclosed by that surface divided by ε_{0}:

∫∫_{S, closed} **E** • d**S** = Q_{inside}/ε_{0}.
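We can sanity-check this flux relationship numerically for a single point charge at the origin, whose field is radial with magnitude q/(4πε_{0}r^{2}). The charge q, sphere radius R, and grid resolution below are all made-up illustrative values:

```python
import numpy as np

eps0 = 8.8541878128e-12   # permittivity of free space (F/m)
q = 1e-9                  # hypothetical point charge at the origin (C)
R = 0.5                   # radius of the closed spherical surface (m)

# On a sphere of radius R centered on the charge, the field is purely
# radial, so E . dS = E_r * R^2 sin(theta) dtheta dphi.
E_r = q / (4 * np.pi * eps0 * R**2)

# Midpoint rule in theta; the integrand is independent of phi, so the
# phi integral simply contributes a factor of 2 pi.
n = 2000
theta = (np.arange(n) + 0.5) * np.pi / n
flux = E_r * R**2 * np.sum(np.sin(theta)) * (np.pi / n) * (2 * np.pi)

print(flux * eps0 / q)  # Gauss' law predicts flux = q/eps0, so this ≈ 1
```

The computed flux matches Q_{inside}/ε_{0} to within the quadrature error, and (as the law predicts) the answer does not depend on the radius R we choose.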

Maxwell’s remaining equations relate electric fields to magnetic fields. Faraday’s law tells us that the local rotation in the electric field is equal to the negative time derivative of the magnetic field:

∇ x **E** = -∂**B**/∂t.

Lastly, the Maxwell-Ampere Law states that the local rotation of the magnetic field is equal to a constant µ_{0} times the current density **J**, plus a term arising from the *displacement current*: the time derivative of the electric field divided by the speed of light squared (recall that 1/c^{2} = µ_{0}ε_{0}):

∇ x **B** = µ_{0} **J** + (1/c^{2}) ∂**E**/∂t.

We will explore Maxwell’s equations in much greater detail in a future blog post. For now, let’s direct our attention to Gauss’ law (∇ • **E** = ρ/ε_{0}). It can often be useful to express an electric field in terms of a scalar *electric potential*, φ, where

**E** = -∇φ.

This is very similar to how we expressed a velocity vector field in terms of a scalar potential when we were discussing gradients and potential flow.
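As a concrete illustration of **E** = -∇φ, we can take the textbook potential of a point charge, φ = q/(4πε_{0}r), and verify symbolically that its negative gradient reproduces the familiar inverse-square field, and that this field is divergence-free away from the origin (where ρ = 0). This example is an assumption chosen for illustration, not something derived in the post:

```python
import sympy as sp

x, y, z, q, eps0 = sp.symbols("x y z q epsilon_0", positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Electric potential of a point charge at the origin (standard example).
phi = q / (4 * sp.pi * eps0 * r)

# E = -grad(phi), component by component.
E = [-sp.diff(phi, v) for v in (x, y, z)]

# The magnitude should simplify to q / (4 pi eps0 r^2).
magnitude = sp.sqrt(sum(c**2 for c in E))
print(sp.simplify(magnitude - q / (4 * sp.pi * eps0 * r**2)))  # 0

# Away from the origin there is no charge, so Gauss' law gives div(E) = 0.
div_E = sum(sp.diff(Ei, v) for Ei, v in zip(E, (x, y, z)))
print(sp.simplify(div_E))  # 0
```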

What we will do now is plug “-∇φ” in for “**E**” in Gauss’ law (and move the negative sign to the other side of the equation) to obtain:

∇•∇φ = ∆φ = -ρ/ε_{0}.

We can now see that the Laplacian of the electric potential φ must be equal to some other function f(x,y,z,t) = -ρ/ε_{0} (recall that the charge density ρ can change as we move from one region of space to another). This is precisely an example of Poisson’s equation, which has the following general formulation:

∆u = f.

To provide a flavor of how to solve Poisson’s equation, we can study an example problem in two-dimensional space. So suppose we have a mystery function u = u(x,y). We know that ∆u = f(x,y) for some known function f. Moreover, u(x,y) is defined on a bounded subset G of two-dimensional space and we know that u(x,y) = 0 for all points (x,y) on ∂G (recall that the symbol ∂G represents the boundary of the region G).

We will construct our final solution using building blocks, and as it turns out, these building blocks are *eigenfunctions of the Laplacian* that are subject to the same *boundary conditions* as our target function u(x,y). Mathematically speaking, we have the following, where each u_{m,n} is an eigenfunction and -λ_{m,n} is that eigenfunction’s corresponding *eigenvalue*:

∆u_{m,n} = -λ_{m,n} u_{m,n} and

u_{m,n}(x,y) = 0 for every point (x,y) that lies on ∂G.

Since our region G is two-dimensional, we can index our eigenfunctions u_{m,n} using a two-dimensional array of positive integers. Hence, we use subscripts “m,n” when talking about our eigenfunctions and their eigenvalues. (Additionally, we can note that ∆u = -λu is sometimes called the *Helmholtz equation*.)
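To make this concrete, suppose (purely for illustration) that G is the unit square [0,1] × [0,1]. Then the functions sin(mπx)sin(nπy) satisfy both requirements, with eigenvalues λ_{m,n} = (m^{2} + n^{2})π^{2}. A quick symbolic check:

```python
import sympy as sp

x, y = sp.symbols("x y")
m, n = 3, 2  # any pair of positive integers works here

# Candidate eigenfunction on the unit square and its claimed eigenvalue.
u_mn = sp.sin(m * sp.pi * x) * sp.sin(n * sp.pi * y)
lam_mn = (m**2 + n**2) * sp.pi**2

# Verify the Helmholtz equation: Laplacian(u) = -lambda * u ...
laplacian = sp.diff(u_mn, x, 2) + sp.diff(u_mn, y, 2)
print(sp.simplify(laplacian + lam_mn * u_mn))  # 0

# ... and the boundary condition u = 0 on all four sides of the square.
print([u_mn.subs(x, 0), u_mn.subs(x, 1), u_mn.subs(y, 0), u_mn.subs(y, 1)])
```

Each choice of (m, n) gives a different building block, which is exactly why a two-dimensional array of indices appears.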

If our region G is nice enough, we can find specific formulas for u_{m,n} using a method called *separation of variables*. Fully explaining separation of variables would require a separate blog post, so for now let us suppose we have already found the u_{m,n}. Since the negative Laplacian (-∆), subject to these boundary conditions, is a *symmetric and positive-definite operator* (more on this in the future), the set of all eigenfunctions u_{m,n} forms an *orthogonal basis* in which we can express the solution to our problem, u(x,y). In plain English, this means that we can write the following, where each a_{m,n} is a constant that depends on the positive integers m and n:

u(x,y) = ∑ a_{m,n} u_{m,n}(x,y).

Since we already know all of the u_{m,n}’s, our task now is to determine the coefficients a_{m,n}. To this end, we will also write f(x,y) in terms of our eigenfunctions as follows:

f(x,y) = ∑ c_{m,n} u_{m,n} where

c_{m,n} = (∫∫_{G} f(x,y) u_{m,n}(x,y) dA) / ( ∫∫_{G} [u_{m,n}(x,y)]^{2} dA).

To find c_{m,n}, we take f(x,y) and “project it” onto the eigenfunction u_{m,n}. To formulate this projection, we first multiply f and u_{m,n} together and integrate the resulting function over G. We then divide this *inner product* between f and u_{m,n} by the square of the “length” of u_{m,n}, which is given by ∫∫_{G} [u_{m,n}(x,y)]^{2} dA. This process is analogous to finding the projection of one vector onto another using dot products.
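We can watch this projection formula in action on the unit square (again an illustrative assumption, with an arbitrary grid size N). If f happens to be one of the eigenfunctions, orthogonality forces its projection onto that eigenfunction to be 1 and its projection onto any other eigenfunction to be 0:

```python
import numpy as np

# Unit-square eigenfunctions sin(m pi x) sin(n pi y); N is an arbitrary
# choice of grid resolution for the double integrals.
N = 400
s = (np.arange(N) + 0.5) / N          # midpoint grid on [0, 1]
X, Y = np.meshgrid(s, s, indexing="ij")
dA = (1 / N) ** 2

def eig(m, n):
    return np.sin(m * np.pi * X) * np.sin(n * np.pi * Y)

# Pretend f happens to equal the (3, 2) eigenfunction exactly.
f = eig(3, 2)

def coeff(m, n):
    """Project f onto u_{m,n}: inner product divided by squared length."""
    phi = eig(m, n)
    return np.sum(f * phi) * dA / (np.sum(phi ** 2) * dA)

print(coeff(3, 2), coeff(1, 1))
```

The first coefficient comes out to 1 and the second to (numerically) 0, mirroring how projecting a vector onto itself gives 1 while projecting it onto an orthogonal vector gives 0.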

Now, let’s plug u = ∑ a_{m,n} u_{m,n}(x,y) and f = ∑ c_{m,n} u_{m,n} into Poisson’s equation to obtain:

∆(∑ a_{m,n} u_{m,n}) = ∑ c_{m,n} u_{m,n}.

Assuming that our sums are all well-behaved, we can slide the ∆ operation inside the sum on the left to get:

∑ a_{m,n} ∆u_{m,n} = ∑ c_{m,n} u_{m,n}.

But by construction, ∆u_{m,n} = -λ_{m,n} u_{m,n}, so we obtain:

∑ -a_{m,n} λ_{m,n} u_{m,n} = ∑ c_{m,n} u_{m,n}.

From here, we can match the coefficients of each eigenfunction on both sides (a step justified by the orthogonality of the u_{m,n}) to find:

a_{m,n} = -c_{m,n} / λ_{m,n}.

Since we theoretically know what c_{m,n} and λ_{m,n} are, we can declare victory!

u(x,y) = ∑ (-c_{m,n} / λ_{m,n}) u_{m,n} satisfies ∆u = f and u = 0 on ∂G.
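The entire recipe can be tested end to end on the unit square (still an illustrative assumption, with made-up values for N and the test problem) by *manufacturing* a problem whose answer we know in advance: pick a solution w that vanishes on ∂G, set f = ∆w, and check that the eigenfunction expansion recovers w:

```python
import numpy as np

# Midpoint grid on the unit square; N is an arbitrary resolution choice.
N = 200
s = (np.arange(N) + 0.5) / N
X, Y = np.meshgrid(s, s, indexing="ij")

def eig(m, n):
    return np.sin(m * np.pi * X) * np.sin(n * np.pi * Y)

def lam(m, n):
    return (m ** 2 + n ** 2) * np.pi ** 2

# Manufactured problem: a known solution w (zero on the boundary)
# and its corresponding right-hand side f = Laplacian(w).
w = eig(1, 1) + 0.5 * eig(2, 3)
f = -lam(1, 1) * eig(1, 1) - 0.5 * lam(2, 3) * eig(2, 3)

# Eigenfunction expansion: compute c_{m,n} by projection, then set
# a_{m,n} = -c_{m,n} / lambda_{m,n} and sum up the building blocks.
u = np.zeros_like(f)
for m in range(1, 11):
    for n in range(1, 11):
        phi = eig(m, n)
        c = np.sum(f * phi) / np.sum(phi ** 2)
        u += (-c / lam(m, n)) * phi

print(np.max(np.abs(u - w)))  # maximum error; should be tiny
```

Only the (1,1) and (2,3) coefficients survive the projection, and the reconstructed u agrees with w to machine precision.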

To summarize, we started out with a partial differential equation that had a certain boundary condition. Then, we expressed our solution in terms of building blocks that turned out to be eigenfunctions of ∆ subject to our boundary condition. From there, we were able to express f(x,y) in terms of those eigenfunctions and plug our results into Poisson’s equation to solve for a_{m,n} and reach our solution. Using a similar procedure, we can solve higher-dimensional versions of Poisson’s equation; we simply need more indices to enumerate the eigenvalues and eigenfunctions.

Next time, I hope to discuss some measure theory and the Lebesgue integral, which is a different and more flexible way to integrate functions. Until then, take care.
