# Continuous-Time LTI Systems - The Convolution Integral

This is a continuation of the previous tutorial, **discrete-time LTI systems - the convolution sum**.

In analogy with the results derived and discussed in the *discrete-time LTI systems and convolution sum tutorial*, the goal of this tutorial is to obtain a complete characterization of a continuous-time LTI system in terms of its unit impulse response.

In discrete time, the key to our developing the convolution sum was the sifting property of the discrete-time unit impulse — that is, the mathematical representation of a signal as the superposition of scaled and shifted unit impulse functions. Intuitively, then, we can think of the discrete-time system as responding to a sequence of individual impulses.

In continuous time, of course, we do not have a discrete sequence of input values. Nevertheless, as we discussed in the *unit impulse and unit step functions tutorial*, if we think of the unit impulse as the idealization of a pulse which is so short that its duration is inconsequential for any real, physical system, we can develop a representation for arbitrary continuous-time signals in terms of these idealized pulses with vanishingly small duration, or equivalently, impulses.

This representation is developed in the next subsection, and, following that, we will proceed very much as in the *discrete-time LTI systems and convolution sum tutorial* to develop the convolution integral representation for continuous-time LTI systems.

## 1. The Representation of Continuous-Time Signals in Terms of Impulses

To develop the continuous-time counterpart of the discrete-time sifting property in eq. (2.2) in the *discrete-time LTI systems and convolution sum tutorial*, we begin by considering a pulse or "staircase" approximation, \(\hat{x}(t)\), to a continuous-time signal \(x(t)\), as illustrated in Figure 2.12(a).

In a manner similar to that employed in the discrete-time case, this approximation can be expressed as a linear combination of delayed pulses, as illustrated in Figure 2.12(a)-(e).

If we define

\[\tag{2.24}\delta_\Delta(t)=\begin{cases}\frac{1}{\Delta},\qquad0\le{t}\lt\Delta\\0,\qquad\quad\text{otherwise}\end{cases}\]

then, since \(\delta_\Delta(t)\Delta\) has unit amplitude, we have the expression

\[\tag{2.25}\hat{x}(t)=\sum_{k=-\infty}^{\infty}x(k\Delta)\delta_\Delta(t-k\Delta)\Delta\]

From Figure 2.12, we see that, as in the discrete-time case [refer to eq. (2.2) in the *discrete-time LTI systems and convolution sum tutorial*], for any value of \(t\), only one term in the summation on the right-hand side of eq. (2.25) is nonzero.
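Equation (2.25) is easy to experiment with numerically. The sketch below (a Gaussian test signal and the specific values of \(\Delta\) are illustrative assumptions) uses the fact that, for each \(t\), only the term with \(k=\lfloor t/\Delta\rfloor\) survives, so \(\hat{x}(t)=x(k\Delta)\); the approximation error shrinks as \(\Delta\) decreases.

```python
import numpy as np

def staircase(x, t, delta):
    """Staircase approximation of eq. (2.25):
    x_hat(t) = sum_k x(k*delta) * delta_Delta(t - k*delta) * delta.
    For each t only the term with k = floor(t/delta) is nonzero, and there
    delta_Delta(t - k*delta) * delta = 1, so x_hat(t) = x(k*delta)."""
    k = np.floor(t / delta)              # index of the single nonzero term
    return x(k * delta)

x = lambda t: np.exp(-t**2)              # smooth test signal (an assumption)
t = np.linspace(-3.0, 3.0, 1201)

err_coarse = np.max(np.abs(staircase(x, t, 0.5) - x(t)))
err_fine = np.max(np.abs(staircase(x, t, 0.01) - x(t)))
# the staircase tracks x(t) more closely as delta shrinks
```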

As we let \(\Delta\) approach 0, the approximation \(\hat{x}(t)\) becomes better and better, and in the limit equals \(x(t)\). Therefore,

\[\tag{2.26}x(t)=\lim_{\Delta\rightarrow0}\sum_{k=-\infty}^{\infty}x(k\Delta)\delta_\Delta(t-k\Delta)\Delta\]

Also, as \(\Delta\rightarrow0\), the summation in eq. (2.26) approaches an integral. This can be seen by considering the graphical interpretation of the equation, illustrated in Figure 2.13.

Here, we have illustrated the signals \(x(\tau)\), \(\delta_\Delta(t-\tau)\), and their product. We have also indicated a shaded region whose area approaches the area under \(x(\tau)\delta_\Delta(t-\tau)\) as \(\Delta\rightarrow0\).

Note that the shaded region has an area equal to \(x(m\Delta)\) where \(t-\Delta\lt{m\Delta}\lt{t}\). Furthermore, for this value of \(t\), only the term with \(k=m\) is nonzero in the summation in eq. (2.26), and thus, the right-hand side of this equation also equals \(x(m\Delta)\).

Consequently, it follows from eq. (2.26) and from the preceding argument that \(x(t)\) equals the limit as \(\Delta\rightarrow0\) of the area under \(x(\tau)\delta_\Delta(t-\tau)\).

Moreover, from eq. (1.74) [refer to the *unit impulse and unit step functions tutorial*], we know that the limit as \(\Delta\rightarrow0\) of \(\delta_\Delta(t)\) is the unit impulse function \(\delta(t)\).

Consequently,

\[\tag{2.27}x(t)=\displaystyle\int\limits_{-\infty}^{\infty}x(\tau)\delta(t-\tau)\text{d}\tau\]

As in discrete time, we refer to eq. (2.27) as the **sifting property** of the continuous-time impulse.

We note that, for the specific example of \(x(t)=u(t)\), eq. (2.27) becomes

\[\tag{2.28}u(t)=\displaystyle\int\limits_{-\infty}^{\infty}u(\tau)\delta(t-\tau)\text{d}\tau=\int\limits_0^{\infty}\delta(t-\tau)\text{d}\tau\]

since \(u(\tau)=0\) for \(\tau\lt0\) and \(u(\tau)=1\) for \(\tau\gt0\). Eq. (2.28) is identical to eq. (1.75) [refer to the *unit impulse and unit step functions tutorial*].

Once again, eq. (2.27) should be viewed as an idealization in the sense that, for \(\Delta\) "small enough," the approximation of \(x(t)\) in eq. (2.25) is essentially exact for any practical purpose. Equation (2.27) then simply represents an idealization of eq. (2.25) by taking \(\Delta\) to be vanishingly small.
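This idealization can be checked numerically: replacing \(\delta(t)\) in eq. (2.27) with the finite-width pulse \(\delta_\Delta\) of eq. (2.24) turns the sifting integral into an average of \(x\) over an interval of width \(\Delta\), which converges to \(x(t)\) as \(\Delta\rightarrow0\). The test signal below is an arbitrary choice for illustration.

```python
import numpy as np

def sift(x, t, delta):
    """Approximate the sifting integral of eq. (2.27) with the pulse
    delta_Delta of eq. (2.24) in place of delta(t). Since
    delta_Delta(t - tau) = 1/delta for t - delta < tau <= t, the integral
    reduces to the average of x over (t - delta, t]."""
    tau = np.linspace(t - delta, t, 1001)
    return np.mean(x(tau))

x = lambda t: np.cos(t)                  # arbitrary smooth test signal

approx = sift(x, 1.0, 1e-4)              # approaches x(1.0) as delta -> 0
```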

Note also that we could have derived eq. (2.27) directly by using several of the basic properties of the unit impulse that we derived in the *unit impulse and unit step functions tutorial*. Let's take a look at Figure 2.14.

Specifically, as illustrated in Figure 2.14(b), the signal \(\delta(t-\tau)\) (viewed as a function of \(\tau\) with \(t\) fixed) is a unit impulse located at \(\tau=t\). Thus, as shown in Figure 2.14(c), the signal \(x(\tau)\delta(t-\tau)\) (once again viewed as a function of \(\tau\)) equals \(x(t)\delta(t-\tau)\) [i.e., it is a scaled impulse at \(\tau=t\) with an area equal to the value of \(x(t)\)].

Consequently, the integral of this signal from \(\tau=-\infty\) to \(\tau=+\infty\) equals \(x(t)\); that is

\[\displaystyle\int\limits_{-\infty}^{+\infty}x(\tau)\delta(t-\tau)\text{d}\tau=\int\limits_{-\infty}^{+\infty}x(t)\delta(t-\tau)\text{d}\tau=x(t)\int\limits_{-\infty}^{+\infty}\delta(t-\tau)\text{d}\tau=x(t)\]

Although this derivation follows directly from the *unit impulse and unit step functions tutorial*, we have included the derivation given in eqs. (2.24) — (2.27) to stress the similarities with the discrete-time case and, in particular, to emphasize the interpretation of eq. (2.27) as representing the signal \(x(t)\) as a "sum" (more precisely, an integral) of weighted, shifted impulses.

## 2. The Continuous-Time Unit Impulse Response and the Convolution Integral Representation of LTI Systems

As in the discrete-time case, the representation developed in the preceding section provides us with a way in which to view an arbitrary continuous-time signal as the superposition of scaled and shifted pulses.

In particular, the approximate representation in eq. (2.25) represents the signal \(\hat{x}(t)\) as a sum of scaled and shifted versions of the basic pulse signal \(\delta_\Delta(t)\).

Consequently, the response \(\hat{y}(t)\) of a linear system to this signal will be the superposition of the responses to the scaled and shifted versions of \(\delta_\Delta(t)\).

Specifically, let us define \(\hat{h}_{k\Delta}(t)\) as the response of an LTI system to the input \(\delta_\Delta(t-k\Delta)\). Then, from eq. (2.25) and the superposition property, for continuous-time linear systems, we see that

\[\tag{2.29}\hat{y}(t)=\sum_{k=-\infty}^{+\infty}x(k\Delta)\hat{h}_{k\Delta}(t)\Delta\]

The interpretation of eq. (2.29) is similar to that for eq. (2.3) in discrete time [refer to the *discrete-time LTI systems and convolution sum tutorial*].

In particular, consider Figure 2.15, which is the continuous-time counterpart of Figure 2.2 [refer to the *discrete-time LTI systems and convolution sum tutorial*].

In Figure 2.15(a) we have depicted the input \(x(t)\) and its approximation \(\hat{x}(t)\), while in Figure 2.15(b)-(d), we have shown the responses of the system to three of the weighted pulses in the expression for \(\hat{x}(t)\). Then the output \(\hat{y}(t)\) corresponding to \(\hat{x}(t)\) is the superposition of all of these responses, as indicated in Figure 2.15(e).
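The superposition in eq. (2.29) can be carried out explicitly for a concrete system. In the sketch below, the system and input are assumptions chosen so that everything is available in closed form: the impulse response is \(h(t)=e^{-t}u(t)\), the input is \(x(t)=e^{-t}u(t)\), and the exact output is \(y(t)=te^{-t}\). The response to a single pulse \(\delta_\Delta(t)\) is computed analytically, and the weighted, shifted copies are summed exactly as in eq. (2.29).

```python
import numpy as np

Delta = 0.1                              # pulse width (an assumption)
t = np.arange(0.0, 5.0, 1e-3)

def h_pulse(t):
    # Exact response of the assumed system h(t) = e^{-t}u(t) to delta_Delta(t):
    # (1/Delta) * integral_0^{min(t, Delta)} e^{-(t-s)} ds for t > 0.
    return np.where(t > 0,
                    np.exp(-t) * (np.exp(np.minimum(t, Delta)) - 1.0) / Delta,
                    0.0)

x = lambda s: np.exp(-s) * (s >= 0)      # assumed input x(t) = e^{-t}u(t)

# eq. (2.29): y_hat(t) = sum_k x(k*Delta) * h_pulse(t - k*Delta) * Delta
y_hat = sum(x(k * Delta) * h_pulse(t - k * Delta) * Delta for k in range(51))

y_exact = t * np.exp(-t)                 # exact convolution of the exponentials
err = np.max(np.abs(y_hat - y_exact))    # small for Delta = 0.1
```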

What remains, then, is to consider what happens as \(\Delta\) becomes vanishingly small — i.e., as \(\Delta\rightarrow0\). In particular, with \(x(t)\) as expressed in eq. (2.26), \(\hat{x}(t)\) becomes an increasingly good approximation to \(x(t)\), and in fact, the two coincide as \(\Delta\rightarrow0\).

Consequently, the response to \(\hat{x}(t)\), namely, \(\hat{y}(t)\) in eq. (2.29), must converge to \(y(t)\), the response to the actual input \(x(t)\), as illustrated in Figure 2.15(f).

Furthermore, as we have said, for \(\Delta\) "small enough," the duration of the pulse \(\delta_\Delta(t-k\Delta)\) is of no significance, in that, as far as the system is concerned, the response to this pulse is essentially the same as the response to a unit impulse at the same point in time.

That is, since the pulse \(\delta_\Delta(t-k\Delta)\) corresponds to a shifted unit impulse as \(\Delta\rightarrow0\), the response \(\hat{h}_{k\Delta}(t)\) to this input pulse becomes the response to an impulse in the limit.

Therefore, if we let \(h_\tau(t)\) denote the response at time \(t\) to a unit impulse \(\delta(t-\tau)\) located at time \(\tau\), then

\[\tag{2.30}y(t)=\lim_{\Delta\rightarrow0}\sum_{k=-\infty}^{+\infty}x(k\Delta)\hat{h}_{k\Delta}(t)\Delta\]

As \(\Delta\rightarrow0\), the summation on the right-hand side becomes an integral, as can be seen graphically in Figure 2.16. Specifically, in Figure 2.16 the shaded rectangle represents one term in the summation on the right-hand side of eq. (2.30) and as \(\Delta\rightarrow0\) the summation approaches the area under \(x(\tau)h_\tau(t)\) viewed as a function of \(\tau\). Therefore,

\[\tag{2.31}y(t)=\displaystyle\int\limits_{-\infty}^{+\infty}x(\tau)h_\tau(t)\text{d}\tau\]

The interpretation of eq. (2.31) is analogous to the one for eq. (2.29). As we showed in part 1 above, any input \(x(t)\) can be represented as

\[x(t)=\displaystyle\int\limits_{-\infty}^{+\infty}x(\tau)\delta(t-\tau)\text{d}\tau\]

That is, we can intuitively think of \(x(t)\) as a "sum" of weighted shifted impulses, where the weight on the impulse \(\delta(t-\tau)\) is \(x(\tau)\text{d}\tau\).

With this interpretation, eq. (2.31) represents the superposition of the responses to each of these inputs, and by linearity, the weight on the response \(h_\tau(t)\) to the shifted impulse \(\delta(t-\tau)\) is also \(x(\tau)\text{d}\tau\).

Equation (2.31) represents the general form of the response of a linear system in continuous time.

If, in addition to being linear, the system is also time invariant, then \(h_\tau(t)=h_0(t-\tau)\); i.e., the response of an LTI system to the unit impulse \(\delta(t-\tau)\), which is shifted by \(\tau\) seconds from the origin, is a similarly shifted version of the response to the unit impulse function \(\delta(t)\).

Again, for notational convenience, we will drop the subscript and define the **unit impulse response** \(h(t)\) as

\[\tag{2.32}h(t)=h_0(t)\]

i.e., \(h(t)\) is the response to \(\delta(t)\).

In this case, eq. (2.31) becomes

\[\tag{2.33}y(t)=\displaystyle\int\limits_{-\infty}^{+\infty}x(\tau)h(t-\tau)\text{d}\tau\]

Equation (2.33), referred to as the **convolution integral** or the **superposition integral**, is the continuous-time counterpart to the convolution sum of eq. (2.6) [refer to the *discrete-time LTI systems and convolution sum tutorial*] and corresponds to the representation of a continuous-time LTI system in terms of its response to a unit impulse.

The convolution of two signals \(x(t)\) and \(h(t)\) will be represented symbolically as

\[\tag{2.34}y(t)=x(t)*h(t)\]

While we have chosen to use the same symbol \(*\) to denote both discrete-time and continuous-time convolution, the context will generally be sufficient to distinguish the two cases.
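Numerically, the convolution integral of eq. (2.33) can be approximated by the Riemann sum of eq. (2.30): sample both signals with step \(\Delta\), form the discrete convolution, and scale by \(\Delta\). As a quick sanity check (the rectangular signals are an arbitrary choice), two unit-height, unit-width pulses should convolve to a triangle of height 1 peaking at \(t=1\).

```python
import numpy as np

delta = 1e-3
t = np.arange(0.0, 2.0, delta)

x = (t < 1.0).astype(float)              # unit pulse on [0, 1)
h = (t < 1.0).astype(float)

# discrete convolution scaled by delta approximates the integral of eq. (2.33)
y = np.convolve(x, h)[: len(t)] * delta

peak = y.max()                           # close to 1.0, attained near t = 1
```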

As in discrete time, we see that a continuous-time LTI system is completely characterized by its impulse response — i.e., by its response to a single elementary signal, the unit impulse \(\delta(t)\).

In the next tutorial, we explore the implications of this as we examine a number of the properties of convolution and of LTI systems in both continuous time and discrete time.

The procedure for evaluating the convolution integral is quite similar to that for its discrete-time counterpart, the convolution sum.

Specifically, in eq. (2.33) we see that, for any value of \(t\), the output \(y(t)\) is a weighted integral of the input, where the weight on \(x(\tau)\) is \(h(t-\tau)\). To evaluate this integral for a specific value of \(t\), we first obtain the signal \(h(t-\tau)\) (regarded as a function of \(\tau\) with \(t\) fixed) from \(h(\tau)\) by a reflection about the origin and a shift to the right by \(t\) if \(t\gt0\) or a shift to the left by \(|t|\) for \(t\lt0\). We next multiply together the signals \(x(\tau)\) and \(h(t-\tau)\), and \(y(t)\) is obtained by integrating the resulting product from \(\tau=-\infty\) to \(\tau=+\infty\).

To illustrate the evaluation of the convolution integral, let us consider several examples.

**Example 2.6**

Let \(x(t)\) be the input to an LTI system with unit impulse response \(h(t)\), where

\[x(t)=e^{-at}u(t),\qquad{a\gt0}\]

and

\[h(t)=u(t)\]

In Figure 2.17, we have depicted the functions \(h(\tau)\), \(x(\tau)\), and \(h(t-\tau)\) for a negative value of \(t\) and for a positive value of \(t\).

From this figure, we see that for \(t\lt0\), the product of \(x(\tau)\) and \(h(t-\tau)\) is zero, and consequently, \(y(t)\) is zero.

For \(t\gt0\),

\[x(\tau)h(t-\tau)=\begin{cases}e^{-a\tau},\qquad0\lt\tau\lt{t}\\0,\qquad\quad\text{otherwise}\end{cases}\]

From this expression, we can compute \(y(t)\) for \(t\gt0\):

\[\begin{align}y(t)=\displaystyle\int\limits_0^te^{-a\tau}\text{d}\tau&=\left.-\frac{1}{a}e^{-a\tau}\right\vert_0^t\\&=\frac{1}{a}(1-e^{-at})\end{align}\]

Thus, for all \(t\), \(y(t)\) is

\[y(t)=\frac{1}{a}(1-e^{-at})u(t)\]

which is shown in Figure 2.18.

**Example 2.7**

Consider the convolution of the following two signals:

\[x(t)=\begin{cases}1,\qquad0\lt{t}\lt{T}\\0,\qquad\text{otherwise}\end{cases}\]

\[h(t)=\begin{cases}t,\qquad0\lt{t}\lt{2T}\\0,\qquad\text{otherwise}\end{cases}\]

As in Example 2.4 for discrete-time convolution [refer to the *discrete-time LTI systems and convolution sum tutorial*], it is convenient to consider the evaluation of \(y(t)\) in separate intervals.

In Figure 2.19, we have sketched \(x(\tau)\) and have illustrated \(h(t-\tau)\) in each of the intervals of interest.

For \(t\lt0\) and for \(t\gt3T\), \(x(\tau)h(t-\tau)=0\) for all values of \(\tau\), and consequently, \(y(t)=0\).

For the other intervals, the product \(x(\tau)h(t-\tau)\) is as indicated in Figure 2.20. Thus, for these three intervals, the integration can be carried out graphically, with the result that

\[y(t)=\begin{cases}0,&t\lt0\\\frac{1}{2}t^2,&0\lt{t}\lt{T}\\Tt-\frac{1}{2}T^2,&T\lt{t}\lt2T\\-\frac{1}{2}t^2+Tt+\frac{3}{2}T^2,&2T\lt{t}\lt3T\\0,&3T\lt{t}\end{cases}\]

which is depicted in Figure 2.21.
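As a cross-check on this piecewise result, the sketch below (with the arbitrary choice \(T=1\)) compares a Riemann-sum convolution against the closed form at one point in each nonzero interval.

```python
import numpy as np

T = 1.0                                  # arbitrary choice for the check
delta = 1e-3
t = np.arange(0.0, 4 * T, delta)

x = ((t > 0) & (t < T)).astype(float)            # unit pulse on (0, T)
h = np.where((t > 0) & (t < 2 * T), t, 0.0)      # ramp on (0, 2T)

y = np.convolve(x, h)[: len(t)] * delta          # Riemann-sum convolution

def y_exact(tc):
    """Closed-form result from Example 2.7."""
    if tc <= 0 or tc >= 3 * T:
        return 0.0
    if tc < T:
        return 0.5 * tc**2
    if tc < 2 * T:
        return T * tc - 0.5 * T**2
    return -0.5 * tc**2 + T * tc + 1.5 * T**2

# sample one point per nonzero interval
samples = {tc: y[int(round(tc / delta))] for tc in (0.5, 1.5, 2.5)}
```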

**Example 2.8**

Let \(y(t)\) denote the convolution of the following two signals:

\[\tag{2.35}x(t)=e^{2t}u(-t)\]

\[\tag{2.36}h(t)=u(t-3)\]

The signals \(x(\tau)\) and \(h(t-\tau)\) are plotted as functions of \(\tau\) in Figure 2.22(a). We first observe that these two signals have regions of nonzero overlap, regardless of the value of \(t\).

When \(t-3\le0\), the product of \(x(\tau)\) and \(h(t-\tau)\) is nonzero for \(-\infty\lt\tau\lt{t-3}\), and the convolution integral becomes

\[\tag{2.37}y(t)=\displaystyle\int\limits_{-\infty}^{t-3}e^{2\tau}\text{d}\tau=\frac{1}{2}e^{2(t-3)}\]

For \(t-3\ge0\), the product \(x(\tau)h(t-\tau)\) is nonzero for \(-\infty\lt\tau\lt0\), so that the convolution integral is

\[\tag{2.38}y(t)=\displaystyle\int\limits_{-\infty}^{0}e^{2\tau}\text{d}\tau=\frac{1}{2}\]

The resulting signal \(y(t)\) is plotted in Figure 2.22(b).
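The two expressions in eqs. (2.37) and (2.38) are easy to confirm by direct numerical integration over \(\tau\) (a sketch; the finite window \([-20, 0]\) is an assumption that captures the tail of \(e^{2\tau}\) to within \(e^{-40}\)).

```python
import numpy as np

tau = np.linspace(-20.0, 0.0, 400001)
dtau = tau[1] - tau[0]

def y(t):
    # x(tau) = e^{2 tau} for tau < 0;  h(t - tau) = u(t - tau - 3),
    # which is nonzero only for tau <= t - 3
    integrand = np.exp(2 * tau) * (t - tau >= 3)
    return np.sum(integrand) * dtau

y_neg = y(1.0)   # eq. (2.37) predicts (1/2) e^{2(1-3)} = (1/2) e^{-4}
y_pos = y(5.0)   # eq. (2.38) predicts 1/2
```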

As these examples and those presented in the *discrete-time LTI systems and convolution sum tutorial* illustrate, the graphical interpretation of continuous-time and discrete-time convolution is of considerable value in visualizing the evaluation of convolution integrals and sums.

The next tutorial discusses **properties of linear time-invariant (LTI) systems**.