Welcome back. Integration by parts is a very useful technique that usually shows up in introductory calculus courses. It allows us to efficiently integrate the product of two functions by transforming a difficult integral into an easier one. When working with a single variable, the integration by parts formula appears as follows:
∫[a,b] g(x) (df/dx) dx = g(b)f(b) – g(a)f(a) – ∫[a,b] f(x) (dg/dx) dx.
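As a quick sanity check (the specific functions and interval here are just an illustration of mine), we can verify the formula in sympy for g(x) = x and f(x) = sin(x) on [0, π]:

# Check the one-dimensional integration by parts formula for one concrete choice.
# The choices g(x) = x and f(x) = sin(x) on [0, pi] are arbitrary illustrations.
import sympy as sp

x = sp.symbols('x')
a, b = 0, sp.pi
g = x
f = sp.sin(x)

lhs = sp.integrate(g * sp.diff(f, x), (x, a, b))
rhs = (g.subs(x, b) * f.subs(x, b) - g.subs(x, a) * f.subs(x, a)
       - sp.integrate(f * sp.diff(g, x), (x, a, b)))

print(sp.simplify(lhs - rhs))  # prints 0, so both sides agree (both equal -2 here)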
Essentially, we are exchanging an integral of “g df/dx” for an integral of “f dg/dx” which, if we choose f and g in a clever fashion, makes life easier. In this blog post, we will (informally) derive the higher-dimensional analogue of integration by parts and leverage that formula to uncover some interesting properties of harmonic functions. In case a reminder is needed, we say that a function u(x) from ℝⁿ to ℝ is harmonic if ∇•∇u = ∆u = 0.
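For a concrete example (one of my own, checked with sympy), u(x, y) = x² – y² is harmonic on all of ℝ²:

# Verify that u(x, y) = x^2 - y^2 satisfies Laplace's equation.
import sympy as sp

x, y = sp.symbols('x y')
u = x**2 - y**2
print(sp.diff(u, x, 2) + sp.diff(u, y, 2))  # prints 0, i.e. Delta u = 0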
Suppose that we have a scalar function g(x) from an open bounded subset, G, of ℝⁿ to ℝ and a vector field F(x) that assigns an n-dimensional vector to every point in the domain G. If we wish to find the divergence of the product gF, we can make use of the following identity:
∇ • (g F) = ∇g • F + g (∇ • F).
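If you would rather not grind through the partial derivatives by hand, here is a quick symbolic check of the identity; the particular g and F below are arbitrary smooth choices of mine, written in sympy:

# Symbolic check of the product rule for the divergence in R^3.
import sympy as sp

x, y, z = sp.symbols('x y z')
g = x**2 * sp.sin(y) + z                      # an arbitrary scalar function g(x, y, z)
F = sp.Matrix([y*z, sp.exp(x), x*y*z])        # an arbitrary vector field F(x, y, z)

def div(V):
    # Divergence of a 3-component vector field
    return sp.diff(V[0], x) + sp.diff(V[1], y) + sp.diff(V[2], z)

grad_g = sp.Matrix([sp.diff(g, v) for v in (x, y, z)])

lhs = div(g * F)
rhs = grad_g.dot(F) + g * div(F)
print(sp.simplify(lhs - rhs))                 # prints 0, so the identity holds for this choice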
Note here that both the left- and right-hand sides of the equation are scalars. Our next step will be to integrate both sides of the equation over the domain G:
∫G ∇ • (g F) = ∫G (∇g • F) + ∫G g (∇ • F).
All of these integrals are n-dimensional, so there technically should be n integral signs in front of each term. However, to keep the notation light, we will use a single integral sign to denote what really should be iterated integrals. For similar reasons, we also omit the “dx₁ … dxₙ” that should follow each integral. Applying the Divergence Theorem (which holds in ℝⁿ for any n, not just in ℝ³), we find that the left-hand side is equivalent to the following (n−1)-dimensional integral, where n is the outward unit normal vector to ∂G, the boundary of G:
∫G ∇ • (g F) = ∫∂G g (F • n).
Substituting this equality into the original integral equation and doing a little bit of algebra yields the generalized integration by parts formula:
∫G g (∇ • F) = ∫∂G g (F • n) – ∫G (∇g • F) .
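To make this formula a little more concrete, here is a small sympy test on a case of my own choosing, with G taken to be the unit square [0, 1]² so that the boundary integral splits into four edge integrals:

# Check the generalized integration by parts formula on G = [0, 1]^2.
# The functions g and F are arbitrary test choices.
import sympy as sp

x, y = sp.symbols('x y')
g = x * y**2
F = sp.Matrix([sp.sin(x*y), x + y])

div_F  = sp.diff(F[0], x) + sp.diff(F[1], y)
grad_g = sp.Matrix([sp.diff(g, x), sp.diff(g, y)])

# Left-hand side: integral of g (div F) over the square
lhs = sp.integrate(sp.integrate(g * div_F, (x, 0, 1)), (y, 0, 1))

# Boundary term: g (F . n) integrated over the four edges, with outward normals
boundary = (  sp.integrate((g * F[0]).subs(x, 1), (y, 0, 1))      # right edge,  n = ( 1,  0)
            - sp.integrate((g * F[0]).subs(x, 0), (y, 0, 1))      # left edge,   n = (-1,  0)
            + sp.integrate((g * F[1]).subs(y, 1), (x, 0, 1))      # top edge,    n = ( 0,  1)
            - sp.integrate((g * F[1]).subs(y, 0), (x, 0, 1)))     # bottom edge, n = ( 0, -1)

# Remaining term: integral of grad g . F over the square
vol = sp.integrate(sp.integrate(grad_g.dot(F), (x, 0, 1)), (y, 0, 1))

print(sp.simplify(lhs - (boundary - vol)))    # prints 0, matching the formula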
Now suppose we have a scalar function u(x) from G to ℝ that satisfies the following properties: ∆u = 0 inside G and u(x) = 0 for all x that lie on ∂G. What we will do is let g = u, F = ∇u, and apply integration by parts:
∫G u ∆u = ∫∂G u (∇u • n) – ∫G (∇u • ∇u).
Since ∆u = 0 over the entirety of G, the integral on the left is equal to zero. Likewise, since u = 0 on ∂G, the first integral on the right-hand-side is equal to zero. Thus, we have:
0 = ∫G (∇u • ∇u) = ∫G |∇u|².
Recall that ∇u • ∇u is precisely equal to the square of the magnitude of the vector ∇u. Since the integrand |∇u|² is non-negative (and continuous), the only way its integral over G can be zero is if |∇u| = 0 everywhere in G, or in other words, u(x) is constant on G (on each connected piece of G, to be precise). Since u is continuous up to the boundary and u = 0 on ∂G, that constant must be zero, so u is identically zero over G. This gives rise to the curious theorem that if a harmonic function vanishes on the boundary of a domain, then it must vanish identically over the entire domain.
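The theorem is also easy to see numerically. The sketch below (a discretization of my own, using Jacobi iteration on a 50 × 50 grid over the unit square) approximately solves Laplace’s equation with zero boundary data, starting from random interior values, and the iterates decay toward the zero function:

# Numerical illustration: a discrete harmonic function with zero boundary data is zero.
import numpy as np

n = 50
rng = np.random.default_rng(0)
u = rng.random((n, n))                          # random initial interior values
u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0   # u = 0 on the boundary of the square

for _ in range(20000):                          # Jacobi sweeps: replace each interior value
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +   # by the average of its
                            u[1:-1, :-2] + u[1:-1, 2:])    # four neighbours

print(np.abs(u).max())                          # a tiny number: u has decayed to (numerically) zero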
We can apply this result, in turn, to prove the uniqueness of solutions to Laplace’s equation. Suppose that u₁ and u₂ are two solutions of the boundary-value problem: ∆u = 0 over G and u = f on ∂G for some given function f. Then let v = u₁ – u₂ and notice that ∆v = ∆u₁ – ∆u₂ = 0 over G and v = f – f = 0 on ∂G. Thus, by the previous theorem, v = 0 over the entirety of G, and so u₁ = u₂ throughout G. In other words, the solution to the boundary-value problem is unique. When analyzing problems in mathematics, it can be incredibly useful to know whether or not a problem has a unique solution. Therefore, this uniqueness result may be of considerable value to us when we analyze PDEs further in the future.
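Uniqueness can be illustrated with the same discretization: two Jacobi runs with the same boundary data f (a made-up choice below) but different random starting guesses converge to the same discrete solution:

# Numerical illustration of uniqueness for the discrete Dirichlet problem.
import numpy as np

def solve_laplace(seed, n=50, sweeps=20000):
    rng = np.random.default_rng(seed)
    u = rng.random((n, n))                      # random starting guess in the interior
    s = np.linspace(0.0, 1.0, n)
    u[0, :]  = np.sin(np.pi * s)                # boundary data f (an arbitrary choice,
    u[-1, :] = 0.0                              # the same for every run)
    u[:, 0]  = 0.0
    u[:, -1] = s * (1.0 - s)
    for _ in range(sweeps):                     # same Jacobi iteration as before
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u

u1 = solve_laplace(seed=0)
u2 = solve_laplace(seed=1)
print(np.abs(u1 - u2).max())                    # essentially zero: both runs give the same solution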
Next week, we will either analyze Burgers’ equation or do some physics and explore the Euler-Lagrange equation. Until then, I wish all of my readers the best in their studies.