Welcome back. Over the last few weeks, we have studied what I hope are some fascinating results in mathematics. We have developed the theory behind multivariable calculus and applied it to describe well-behaved fluids. We have also put the classic Riemann integral on rigorous footing and proven the second Fundamental Theorem of Calculus. This week, we will continue our journey by discussing Maxwell’s equations, which are unequivocally the most important equations in electricity and magnetism. We will then zoom in on one of Maxwell’s equations, which on its own is called Gauss’ law, and relate that equation to a more general partial differential equation called Poisson’s equation. Lastly, we will go over how to solve Poisson’s equation using eigenfunctions of the Laplacian operator.
When we study physical phenomena, we frequently must deal with electric fields, denoted by E(x,y,z,t), and magnetic fields, denoted by B(x,y,z,t). These are vector fields that assign an electric field vector and a magnetic field vector to each point in space. Electricity and magnetism is a rich topic that we sadly do not have much time to discuss in detail. The most important fact for us to know right now is that in 1865, the Scottish physicist James Clerk Maxwell developed four equations that tell us how electric and magnetic fields behave. The first of these equations appears as follows:
∇ • B = 0.
Because the divergence of the magnetic field is zero everywhere, B cannot have any sources or sinks. Physically speaking, this means that there cannot be any magnetic monopoles: every magnet has both a north pole and a south pole.
The second of Maxwell’s equations is
∇ • E = ρ/ε0.
This is Gauss’ law, and it tells us that the divergence of the electric field is equal to the charge per unit volume, denoted by ρ, divided by the permittivity of free space, a constant denoted by ε0. Using the Divergence Theorem, we can show that the closed surface integral of E is precisely equal to the total charge enclosed by the surface we are integrating over, divided by ε0:
∫∫S, closed E • dS = Qinside/ε0.
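To make the integral form concrete, here is a small numerical sketch (the values of q and R are illustrative, not from the post): for a point charge at the origin, the flux of E through a sphere comes out to q/ε0 no matter which radius R we pick.

```python
import math

eps0 = 8.8541878128e-12   # permittivity of free space (F/m)
q = 1e-9                  # illustrative 1 nC point charge
R = 0.5                   # sphere radius in meters; any R > 0 gives the same flux

# Coulomb field magnitude on the sphere; the field is purely radial, so the
# integrand E . dS reduces to E * dS.
E = q / (4 * math.pi * eps0 * R**2)

# Midpoint-rule surface integral in spherical coordinates. The area element
# is dS = R^2 sin(theta) dtheta dphi; by azimuthal symmetry, the phi
# integral just contributes a factor of 2*pi.
N = 200
dtheta = math.pi / N
flux = sum(E * R**2 * math.sin((i + 0.5) * dtheta) * dtheta * 2 * math.pi
           for i in range(N))

print(flux, q / eps0)  # the two numbers agree
```

Changing R rescales E by 1/R² and the surface area by R², so the flux is unchanged, which is exactly what Gauss’ law promises.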
Maxwell’s remaining equations relate electric fields to magnetic fields. Faraday’s law tells us that the local rotation in the electric field is equal to the negative time derivative of the magnetic field:
∇ × E = -∂B/∂t.
Lastly, the Maxwell-Ampere law states that the local rotation of the magnetic field is equal to a constant µ0 times the current density J, plus a term proportional to the time derivative of the electric field that arises from the so-called displacement current. Since c² = 1/(µ0 ε0), this last term can be written either as (1/c²) ∂E/∂t or as µ0 ε0 ∂E/∂t:
∇ × B = µ0 J + (1/c²) ∂E/∂t.
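As a quick consistency check on the constants (using their standard SI values), the prefactor 1/c² really is the same number as µ0 ε0:

```python
import math

eps0 = 8.8541878128e-12   # permittivity of free space (F/m)
mu0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)
c = 299_792_458.0         # speed of light (m/s, exact by definition)

# c^2 = 1 / (mu0 * eps0), so these two quantities should match:
print(1 / c**2, mu0 * eps0)  # agree to many significant figures
```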
We will explore Maxwell’s equations in much greater detail in a future blog post. For now, let’s direct our attention to Gauss’ law (∇ • E = ρ/ε0). It can often be useful to express an electric field in terms of a scalar electric potential, φ, where
E = -∇φ.
This is very similar to how we expressed a velocity vector field in terms of a scalar potential when we were discussing gradients and potential flow.
What we will do now is plug “-∇φ” in for “E” in Gauss’ law (and move the negative sign to the other side of the equation) to obtain:
∇•∇φ = ∆φ = -ρ/ε0.
We can now see that the Laplacian of the electric potential φ must be equal to some other function f(x,y,z,t) = -ρ/ε0 (recall that the charge density ρ can change as we move from one region of space to another). This is precisely an example of Poisson’s equation, which has the following general formulation:
∆u = f.
To provide a flavor of how to solve Poisson’s equation, we can study an example problem in two-dimensional space. So suppose we have a mystery function u = u(x,y). We know that ∆u = f(x,y) for some known function f. Moreover, u(x,y) is defined on a bounded subset G of two-dimensional space and we know that u(x,y) = 0 for all points (x,y) on ∂G (recall that the symbol ∂G represents the boundary of the region G).
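As an aside, we can sanity-check a problem of exactly this type on a computer before building any machinery. The sketch below uses a standard finite-difference Jacobi iteration (a different technique from the eigenfunction method this post develops), takes G to be the unit square, and chooses f so that the exact solution is known to be u = sin(πx) sin(πy):

```python
import math

N = 25                    # number of interior grid points per side
h = 1.0 / (N + 1)         # grid spacing on the unit square

def f(x, y):
    # Source chosen so the exact solution is u = sin(pi x) sin(pi y).
    return -2 * math.pi**2 * math.sin(math.pi * x) * math.sin(math.pi * y)

# Precompute h^2 * f on the grid (indices include the boundary).
fh2 = [[h**2 * f(i * h, j * h) for j in range(N + 2)] for i in range(N + 2)]

# u includes the boundary rows/columns, which stay pinned at 0 — this is
# our Dirichlet condition u = 0 on the boundary of G.
u = [[0.0] * (N + 2) for _ in range(N + 2)]
for _ in range(3000):     # plain Jacobi sweeps
    new = [row[:] for row in u]
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                + u[i][j - 1] + u[i][j + 1]
                                - fh2[i][j])
    u = new

mid = (N + 1) // 2        # index of the grid point at the center
approx = u[mid][mid]
exact = math.sin(math.pi * mid * h) ** 2  # exact solution at that point
print(approx, exact)      # agree up to discretization error
```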
We will construct our final solution using building blocks, and as it turns out, these building blocks are eigenfunctions of the Laplacian that are subject to the same boundary conditions as our target function u(x,y). Mathematically speaking, we have the following, where each um,n is an eigenfunction and -λm,n is that eigenfunction’s corresponding eigenvalue:
∆um,n = -λm,n um,n and
um,n(x,y) = 0 for every point (x,y) that lies on ∂G.
Since our region G is two-dimensional, we can index our eigenfunctions using a pair of positive integers. Hence, we use subscripts “m,n” when talking about our eigenfunctions and their eigenvalues. (Additionally, we can note that ∆u = -λu is sometimes called the Helmholtz equation.)
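For concreteness: on the unit square, the classic separated eigenfunctions are um,n(x,y) = sin(mπx) sin(nπy) with λm,n = (m² + n²)π² (a standard textbook result, assumed here rather than derived). A quick numerical spot check:

```python
import math

# Eigenfunction u_{m,n}(x, y) = sin(m pi x) sin(n pi y) on the unit square,
# with eigenvalue lambda_{m,n} = (m^2 + n^2) pi^2.
m, n = 1, 2
lam = (m**2 + n**2) * math.pi**2

def u(x, y):
    return math.sin(m * math.pi * x) * math.sin(n * math.pi * y)

def laplacian(g, x, y, h=1e-4):
    """Second-order central-difference approximation of g_xx + g_yy."""
    return ((g(x + h, y) - 2 * g(x, y) + g(x - h, y)) / h**2
            + (g(x, y + h) - 2 * g(x, y) + g(x, y - h)) / h**2)

x0, y0 = 0.3, 0.7
print(laplacian(u, x0, y0), -lam * u(x0, y0))  # the two values agree

# Boundary condition: u_{m,n} vanishes on all four edges of the square.
print(u(0.0, 0.4), u(1.0, 0.4), u(0.4, 0.0), u(0.4, 1.0))
```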
If our region G is nice enough, we can find specific formulas for the um,n using a method called separation of variables. Fully explaining separation of variables would require a separate blog post, so for now let us suppose we have found what the um,n are. Since the Laplacian (∆) is a symmetric and positive-definite operator (more on this in the future), the set of all eigenfunctions um,n forms an orthogonal basis in which we can express the solution to our problem, u(x,y). In plain English, this means that we can write the following, where each am,n is a constant that depends on the values of the positive integers m and n:
u(x,y) = ∑ am,n um,n(x,y).
Since we already know all of the um,n’s, our task now is to determine the am,n’s. To this end, we will also write f(x,y) in terms of our eigenfunctions as follows:
f(x,y) = ∑ cm,n um,n where
cm,n = (∫∫G f(x,y) um,n(x,y) dA) / (∫∫G [um,n(x,y)]² dA).
To find cm,n, we take f(x,y) and “project it” onto the eigenfunction um,n. To formulate this projection, we first multiply f and um,n together and integrate the resulting function over G. We then divide this inner product between f and um,n by the square of the “length” of um,n, which is given by ∫∫G [um,n(x,y)]² dA. This process is analogous to finding the projection of one vector onto another using dot products.
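Here is a small numerical sketch of the projection, taking G to be the unit square with the standard sine eigenfunctions (so u1,1 = sin(πx) sin(πy)) and the simplest possible source, f = 1. For this choice, c1,1 works out to 16/π²:

```python
import math

def u11(x, y):
    # Lowest sine eigenfunction on the unit square (a standard choice,
    # assumed here rather than derived in the post).
    return math.sin(math.pi * x) * math.sin(math.pi * y)

def integrate(g, N=400):
    """Midpoint-rule double integral of g over the unit square."""
    h = 1.0 / N
    return sum(g((i + 0.5) * h, (j + 0.5) * h)
               for i in range(N) for j in range(N)) * h * h

f = lambda x, y: 1.0  # the simplest possible source term

# c_{1,1} = (integral of f * u11) / (integral of u11^2), as in the
# projection formula above.
c11 = (integrate(lambda x, y: f(x, y) * u11(x, y))
       / integrate(lambda x, y: u11(x, y) ** 2))
print(c11, 16 / math.pi**2)  # the numbers agree
```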
Now, let’s plug u = ∑ am,n um,n(x,y) and f = ∑ cm,n um,n into Poisson’s equation to obtain:
∆(∑ am,n um,n) = ∑ cm,n um,n.
Assuming that our sums are all well-behaved, we can slide the ∆ operation inside the sum on the left to get:
∑ am,n ∆um,n = ∑ cm,n um,n.
But by construction, ∆um,n = -λm,n um,n, so we obtain:
∑ -am,n λm,n um,n = ∑ cm,n um,n.
From here, since the eigenfunctions um,n are linearly independent, we can equate the coefficients of each um,n on both sides (that is, -am,n λm,n = cm,n) to find:
am,n = -cm,n / λm,n.
Since we theoretically know what cm,n and λm,n are (and each λm,n is strictly positive under our boundary condition, so the division is safe), we can declare victory!
u(x,y) = ∑ (-cm,n / λm,n) um,n satisfies ∆u = f and u = 0 on ∂G.
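As an end-to-end sanity check of this recipe, consider a source with only a single mode, so that the series has exactly one term. On the unit square, take f = -2π² sin(πx) sin(πy): then c1,1 = -2π², every other cm,n vanishes, λ1,1 = 2π², and so a1,1 = -c1,1/λ1,1 = 1:

```python
import math

def u(x, y):
    # The one-term series solution: a_{1,1} * u_{1,1} with a_{1,1} = 1.
    return math.sin(math.pi * x) * math.sin(math.pi * y)

def f(x, y):
    # Single-mode source: c_{1,1} = -2 pi^2, all other coefficients are 0.
    return -2 * math.pi**2 * u(x, y)

def laplacian(g, x, y, h=1e-4):
    """Second-order central-difference approximation of g_xx + g_yy."""
    return ((g(x + h, y) - 2 * g(x, y) + g(x - h, y)) / h**2
            + (g(x, y + h) - 2 * g(x, y) + g(x, y - h)) / h**2)

x0, y0 = 0.37, 0.81                      # arbitrary interior point
print(laplacian(u, x0, y0), f(x0, y0))   # Delta u = f holds
print(u(0.0, 0.81), u(0.37, 1.0))        # boundary values vanish
```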
To summarize, we started out with a partial differential equation that had a certain boundary condition. Then, we expressed our solution in terms of building blocks that turned out to be eigenfunctions of ∆ subject to our boundary condition. From there, we were able to express f(x,y) in terms of those eigenfunctions and plug our results into Poisson’s equation to solve for am,n and reach our solution. Using a similar procedure, we can solve higher-dimensional versions of Poisson’s equation; we would simply need more indices to enumerate the eigenvalues and eigenfunctions.
Next time, I hope to discuss some measure theory and the Lebesgue integral, which is a different and more flexible way to integrate functions. Until then, take care.