Welcome back. This week, I wanted to discuss the Laplace transform, which is an incredibly useful tool for solving linear ordinary and partial differential equations. The primary reason for its utility is that it converts differential equations in terms of one unknown function into algebraic equations in terms of another unknown function that can be solved relatively quickly. While I wanted to explore this practical value, I also wanted to dive a little deeper into how the Laplace transform is derived. In my own differential equations courses, the meaning of the Laplace transform was left relatively opaque, so I hope that this blog provides some clarity for readers who are learning about the transform for the first time.
To begin, let’s consider a time signal ƒ(t). Using the Fourier transform (discussed in Blog #20), we can represent this signal in the frequency domain by the function
F(ω) = ∫ℝ exp(-iωt) ƒ(t) dt.
This is all well and good, but the class of functions that can actually be Fourier transformed is relatively limited. In order for the integral above to converge, ƒ(t) must be absolutely integrable, which roughly means that it has to decay to zero as t approaches ±∞. A few other functions, such as sin(t) or cos(t), can be Fourier transformed in a generalized sense, but evaluating the integral involves some messy tricks with Dirac delta functions. Therefore, it is natural to seek another transform that can convert a broader class of signals from the time domain into the frequency domain. This more powerful transform is the Laplace transform.
The idea is to take ƒ(t), pre-multiply it by an exponentially decaying function exp(-σt), and then take the Fourier transform of the result. For a sufficiently large σ and an exponentially bounded ƒ, the product ƒ(t) exp(-σt) will indeed decay to zero as t approaches +∞. The only issue is that ƒ(t) exp(-σt) would blow up as t tends toward -∞. To get around this issue, we chop the graph of ƒ(t) exp(-σt) off at t = 0 and keep only the portion where t ≥ 0. This is equivalent to multiplying ƒ(t) exp(-σt) by the unit step function u(t), and then taking the Fourier transform to produce the function
F(σ + iω) = ∫ℝ exp(-iωt) exp(-σt) ƒ(t) u(t) dt.
We will call this F(σ + iω) the Laplace transform of ƒ(t). To make the notation cleaner, we will also let s = σ + iω and write
F(s) = ∫[0,+∞) exp(-st) ƒ(t) dt.
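As a quick sanity check on this formula, here is a small Python sketch that numerically evaluates the integral at a real value of s and compares the result with a known closed-form transform. (The helper name laplace_at is just something I am making up here, and the choices ƒ(t) = cos(t) and s = 1 are purely illustrative; the Laplace transform of cos(t) is s/(s² + 1), so both printed values should be 0.5.)

```python
import numpy as np
from scipy.integrate import quad

def laplace_at(f, s):
    """Numerically evaluate F(s) = integral_0^inf exp(-s*t) * f(t) dt for a real s."""
    value, _ = quad(lambda t: np.exp(-s * t) * f(t), 0.0, np.inf)
    return value

# Illustrative example: f(t) = cos(t), whose Laplace transform is s / (s^2 + 1).
s = 1.0
print(laplace_at(np.cos, s))   # ~0.5, from the numerical integral
print(s / (s**2 + 1.0))        # 0.5, from the closed form
```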
This definition of F(s) is the one that is frequently seen in textbooks. Since s is a complex number, F(s) is a complex function: both its inputs and its outputs have real and imaginary components, so it would take four dimensions to completely visualize its graph. Since humans cannot see in four dimensions, it is common to plot |F(s)| over the 2D complex plane and let color denote the phase of F(s). When reading these graphs, one rule of thumb is that |F(s)| becomes very large at the dominant sinusoidal frequencies and exponential growth/decay rates in the original signal ƒ(t). For example, the Laplace transform of exp(at) is (s - a)⁻¹, which blows up when s is equal to the exponential growth rate a. Meanwhile, the Laplace transform of sin(t) is (s² + 1)⁻¹, which blows up at s = ±i. This correctly indicates that sin(t) is made of a pure sine wave of frequency 1, exactly as we knew from the beginning.
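To give a feel for what these plots look like, here is a rough Matplotlib sketch that displays log|F(s)| over a patch of the complex plane for F(s) = 1/(s² + 1), the transform of sin(t); the two bright spikes sit exactly at the poles s = ±i. (For simplicity I am only plotting the magnitude here, not the phase coloring.)

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample the complex plane s = sigma + i*omega on a grid.
sigma = np.linspace(-2, 2, 400)
omega = np.linspace(-2, 2, 400)
S = sigma[None, :] + 1j * omega[:, None]

# Laplace transform of sin(t): F(s) = 1/(s^2 + 1), with poles at s = +/- i.
F = 1.0 / (S**2 + 1.0)

# Plot log|F(s)|; the poles at s = +/- i appear as two bright spikes.
plt.pcolormesh(sigma, omega, np.log10(np.abs(F)), shading="auto")
plt.colorbar(label="log10 |F(s)|")
plt.xlabel("Re(s) = σ")
plt.ylabel("Im(s) = ω")
plt.title("Magnitude of the Laplace transform of sin(t)")
plt.show()
```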
In differential equations, the most commonly used identity is the formula for the Laplace transform of the derivative of a signal:
∫[0,+∞) exp(-st) (dƒ/dt) dt = s F(s) - ƒ(0),
where F(s) is the Laplace transform of ƒ. This result follows from integration by parts, and is precisely what converts differentiation to algebra. From a deeper perspective, it also comes from the fact that the exponential function is an eigenfunction of the first derivative operator. Perhaps in a future blog, we will explore this idea further.
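To make the "differentiation becomes algebra" point concrete, here is a short SymPy sketch. It first verifies the derivative rule for ƒ(t) = exp(at), and then uses that rule to solve the initial value problem y′ + 2y = 0 with y(0) = 1, an example I am picking purely for demonstration.

```python
import sympy as sp

t, s, a = sp.symbols("t s a", positive=True)

# 1) Verify the derivative rule L{f'} = s*F(s) - f(0), here for f(t) = exp(a*t).
f = sp.exp(a * t)
F = sp.laplace_transform(f, t, s, noconds=True)
dF = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
print(sp.simplify(dF - (s * F - f.subs(t, 0))))   # prints 0

# 2) Solve y' + 2y = 0, y(0) = 1: the rule turns the ODE into (s*Y - 1) + 2*Y = 0.
Y = sp.symbols("Y")
Ys = sp.solve(sp.Eq(s * Y - 1 + 2 * Y, 0), Y)[0]  # Y(s) = 1/(s + 2)
y = sp.inverse_laplace_transform(Ys, s, t)
print(Ys, y)   # 1/(s + 2) and exp(-2*t) (for t >= 0)
```

The key step is the middle one: after transforming, the unknown Y(s) appears only algebraically, so solving for it is just rearranging a linear equation, and the inverse transform then recovers y(t).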
For now, I hope this blog was informative and helped to demystify the Laplace transform. Next week, we will explore another topic that I hope will be informative and fascinating. Until then, take care.