
Discrete-Time Linear Time-Invariant (LTI) Systems - The Convolution Sum

This is a continuation from the previous tutorial - continuous-time and discrete-time systems.

 

Introduction

In previous tutorials we introduced and discussed a number of basic system properties. Two of these, linearity and time invariance, play a fundamental role in signal and system analysis for two major reasons.

First, many physical processes possess these properties and thus can be modeled as linear time-invariant (LTI) systems. In addition, LTI systems can be analyzed in considerable detail, providing both insight into their properties and a set of powerful tools that form the core of signal and system analysis.

Our goal is to develop an understanding of these properties and tools and to provide an introduction to several of the very important applications in which the tools are used.

In this tutorial, we begin the development by deriving and examining a fundamental and extremely useful representation for LTI systems and by introducing an important class of these systems.

One of the primary reasons LTI systems are amenable to analysis is that any such system possesses the superposition property described in the previous tutorial. As a consequence, if we can represent the input to an LTI system in terms of a linear combination of a set of basic signals, we can then use superposition to compute the output of the system in terms of its responses to these basic signals.

As we will see in the following sections, one of the important characteristics of the unit impulse, both in discrete time and in continuous time, is that very general signals can be represented as linear combinations of delayed impulses.

This fact, together with the properties of superposition and time invariance, will allow us to develop a complete characterization of any LTI system in terms of its response to a unit impulse.

Such a representation, referred to as the convolution sum in the discrete-time case and the convolution integral in continuous time, provides considerable analytical convenience in dealing with LTI systems.

Following our development of the convolution sum and the convolution integral we use these characterizations to examine some of the other properties of LTI systems. We then consider the class of continuous-time systems described by linear constant-coefficient differential equations and its discrete-time counterpart, the class of systems described by linear constant-coefficient difference equations.

We will return to examine these two very important classes of systems on a number of occasions in subsequent tutorials.

Finally, we will take another look at the continuous-time unit impulse function and a number of other signals that are closely related to it in order to provide some additional insight into these idealized signals and, in particular, to their use and interpretation in the context of analyzing LTI systems.

 

1. The Representation of Discrete-Time Signals in Terms of Impulses

The key idea in visualizing how the discrete-time unit impulse can be used to construct any discrete-time signal is to think of a discrete-time signal as a sequence of individual impulses.

To see how this intuitive picture can be turned into a mathematical representation, consider the signal \(x[n]\) depicted in Figure 2.1(a).

 

Figure 2.1.  Decomposition of a discrete-time signal into a weighted sum of shifted impulses.

 

In the remaining parts of this figure, we have depicted five time-shifted, scaled unit impulse sequences, where the scaling on each impulse equals the value of \(x[n]\) at the particular instant the unit sample occurs.

For example,

\[x[-1]\delta[n+1]=\begin{cases}x[-1],\qquad{n=-1}\\0,\qquad\qquad{n\ne-1}\end{cases}\]

\[x[0]\delta[n]=\begin{cases}x[0],\qquad{n=0}\\0,\qquad\qquad{n\ne0}\end{cases}\]

\[x[1]\delta[n-1]=\begin{cases}x[1],\qquad{n=1}\\0,\qquad\qquad{n\ne1}\end{cases}\]

Therefore, the sum of the five sequences in the figure equals \(x[n]\) for \(-2\le{n}\le2\). More generally, by including additional shifted, scaled impulses, we can write

\[\tag{2.1}\begin{align}x[n]=\ldots&+x[-3]\delta[n+3]+x[-2]\delta[n+2]+x[-1]\delta[n+1]+x[0]\delta[n]\\&+x[1]\delta[n-1]+x[2]\delta[n-2]+x[3]\delta[n-3]+\ldots\end{align}\]

For any value of \(n\), only one of the terms on the right-hand side of eq. (2.1) is nonzero, and the scaling associated with that term is precisely \(x[n]\).

Writing this summation in a more compact form, we have

\[\tag{2.2}x[n]=\sum_{k=-\infty}^{+\infty}x[k]\delta[n-k]\]

This corresponds to the representation of an arbitrary sequence as a linear combination of shifted unit impulses \(\delta[n-k]\), where the weights in this linear combination are \(x[k]\).
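Eq. (2.2) is easy to check numerically. The following is a minimal sketch in Python; the sample values and index range are illustrative, not those of Figure 2.1:

```python
def delta(n):
    """Discrete-time unit impulse: delta[n] = 1 at n = 0, else 0."""
    return 1 if n == 0 else 0

# A hypothetical signal x[n], nonzero only for -2 <= n <= 2.
x = {-2: 1.0, -1: -0.5, 0: 2.0, 1: 0.5, 2: -1.0}

def x_of(n):
    return x.get(n, 0.0)

def x_rebuilt(n, k_min=-2, k_max=2):
    """Right-hand side of eq. (2.2): sum over k of x[k] * delta[n - k]."""
    return sum(x_of(k) * delta(n - k) for k in range(k_min, k_max + 1))

# For every n, the weighted sum of shifted impulses reproduces x[n].
for n in range(-4, 5):
    assert x_rebuilt(n) == x_of(n)
```

Only the \(k=n\) term survives in each sum, which is precisely the sifting behavior described below.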

As an example, consider \(x[n]=u[n]\), the unit step. In this case, since \(u[k]=0\) for \(k\lt0\) and \(u[k]=1\) for \(k\ge0\), eq. (2.2) becomes

\[u[n]=\sum_{k=0}^{+\infty}\delta[n-k]\]

which is identical to the expression we derived in the unit impulse and unit step functions tutorial. [See eq. (1.67).]

Equation (2.2) is called the sifting property of the discrete-time unit impulse. Because the sequence \(\delta[n-k]\) is nonzero only when \(k=n\), the summation on the right-hand side of eq. (2.2) "sifts" through the sequence of values \(x[k]\) and preserves only the value corresponding to \(k=n\).

In the next subsection, we will exploit this representation of discrete-time signals in order to develop the convolution-sum representation for a discrete-time LTI system.

 

 

2. The Discrete-Time Unit Impulse Response and the Convolution-Sum Representation of LTI Systems

The importance of the sifting property of eqs. (2.1) and (2.2) lies in the fact that it represents \(x[n]\) as a superposition of scaled versions of a very simple set of elementary functions, namely, shifted unit impulses \(\delta[n-k]\), each of which is nonzero (with value 1) at a single point in time specified by the corresponding value of \(k\).

The response of a linear system to \(x[n]\) will be the superposition of the scaled responses of the system to each of these shifted impulses.

Moreover, the property of time invariance tells us that the responses of a time-invariant system to the time-shifted unit impulses are simply time-shifted versions of one another.

The convolution-sum representation for discrete-time systems that are both linear and time invariant results from putting these two basic facts together.

More specifically, consider the response of a linear (but possibly time-varying) system to an arbitrary input \(x[n]\). We can represent the input through eq. (2.2) as a linear combination of shifted unit impulses.

Let \(h_k[n]\) denote the response of the linear system to the shifted unit impulse \(\delta[n-k]\). Then, from the superposition property for a linear system [refer to eqs. (1.123) and (1.124) in the continuous-time and discrete-time systems tutorial], the response \(y[n]\) of the linear system to the input \(x[n]\) in eq. (2.2) is simply the weighted linear combination of these basic responses.

That is, with the input \(x[n]\) to a linear system expressed in the form of eq. (2.2), the output \(y[n]\) can be expressed as

\[\tag{2.3}y[n]=\sum_{k=-\infty}^{+\infty}x[k]h_k[n]\]

Thus, according to eq. (2.3), if we know the response of a linear system to the set of shifted unit impulses, we can construct the response to an arbitrary input.

An interpretation of eq. (2.3) is illustrated in Figure 2.2.

The signal \(x[n]\) is applied as the input to a linear system whose responses \(h_{-1}[n]\), \(h_0[n]\), and \(h_1[n]\) to the signals \(\delta[n+1]\), \(\delta[n]\), and \(\delta[n-1]\), respectively, are depicted in Figure 2.2(b).

Since \(x[n]\) can be written as a linear combination of \(\delta[n+1]\), \(\delta[n]\), and \(\delta[n-1]\), superposition allows us to write the response to \(x[n]\) as a linear combination of the responses to the individual shifted impulses.

The individual shifted and scaled impulses that constitute \(x[n]\) are illustrated on the left-hand side of Figure 2.2(c), while the responses to these component signals are pictured on the right-hand side.

In Figure 2.2(d) we have depicted the actual input \(x[n]\), which is the sum of the components on the left side of Figure 2.2(c) and the actual output \(y[n]\), which, by superposition, is the sum of the components on the right side of Figure 2.2(c).

Thus, the response at time \(n\) of a linear system is simply the superposition of the responses due to the input value at each point in time.

 

Figure 2.2.  Graphical interpretation of the response of a discrete-time linear system as expressed in eq. (2.3).

 

In general, of course, the responses \(h_k[n]\) need not be related to each other for different values of \(k\). However, if the linear system is also time invariant, then these responses to time-shifted unit impulses are all time-shifted versions of each other.

Specifically, since \(\delta[n-k]\) is a time-shifted version of \(\delta[n]\), the response \(h_k[n]\) is a time-shifted version of \(h_0[n]\); i.e.,

\[\tag{2.4}h_k[n]=h_0[n-k]\]

For notational convenience, we will drop the subscript on \(h_0[n]\) and define the unit impulse (sample) response

\[\tag{2.5}h[n]=h_0[n]\]

That is, \(h[n]\) is the output of the LTI system when \(\delta[n]\) is the input. Then for an LTI system, eq. (2.3) becomes

\[\tag{2.6}y[n]=\sum_{k=-\infty}^{+\infty}x[k]h[n-k]\]

This result is referred to as the convolution sum or superposition sum, and the operation on the right-hand side of eq. (2.6) is known as the convolution of the sequences \(x[n]\) and \(h[n]\).

We will represent the operation of convolution symbolically as

\[\tag{2.7}y[n]=x[n]*h[n]\]

Note that eq. (2.6) expresses the response of an LTI system to an arbitrary input in terms of the system's response to the unit impulse. From this, we see that an LTI system is completely characterized by its response to a single signal, namely, its response to the unit impulse.

The interpretation of eq. (2.6) is similar to the one we gave for eq. (2.3), where, in the case of an LTI system, the response due to the input \(x[k]\) applied at time \(k\) is \(x[k]h[n-k]\); i.e., it is a shifted and scaled version (an "echo") of \(h[n]\). As before, the actual output is the superposition of all these responses.
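The convolution sum of eq. (2.6) translates directly into code. Below is a brute-force sketch for finite-length sequences, represented as dicts mapping index to value (absent indices read as zero); the helper name `conv_sum` is ours, not from the text:

```python
def conv_sum(x, h):
    """Convolution sum, eq. (2.6): y[n] = sum over k of x[k] * h[n - k]."""
    y = {}
    for k, xk in x.items():
        for m, hm in h.items():   # each sample h[m] lands at output index n = k + m
            y[k + m] = y.get(k + m, 0.0) + xk * hm
    return y

# Sanity check: convolving with the unit impulse leaves x unchanged,
# since delta[n] is the identity for convolution.
x = {0: 1.0, 1: -2.0, 3: 0.5}
delta = {0: 1.0}
assert conv_sum(x, delta) == x
```

The inner loop makes the "echo" interpretation explicit: each input sample \(x[k]\) deposits a scaled, shifted copy of \(h\) into the output.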

 

Example 2.1

Consider an LTI system with impulse response \(h[n]\) and input \(x[n]\), as illustrated in Figure 2.3(a). For this case, since only \(x[0]\) and \(x[1]\) are nonzero, eq. (2.6) simplifies to the expression

\[\tag{2.8}y[n]=x[0]h[n-0]+x[1]h[n-1]=0.5h[n]+2h[n-1]\]

The sequences \(0.5h[n]\) and \(2h[n-1]\) are the two echoes of the impulse response needed for the superposition involved in generating \(y[n]\). These echoes are displayed in Figure 2.3(b). By summing the two echoes for each value of \(n\), we obtain \(y[n]\), which is shown in Figure 2.3(c).
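Eq. (2.8) can be evaluated directly once \(h[n]\) is specified. The figure is not reproduced here, so the sketch below assumes, for illustration, \(h[n]=1\) for \(0\le{n}\le2\) and zero otherwise, which is consistent with the output values computed in Example 2.2:

```python
def h(n):
    # Assumed impulse response: h[n] = 1 for 0 <= n <= 2 (not given
    # numerically in the text; inferred for illustration).
    return 1.0 if 0 <= n <= 2 else 0.0

def y(n):
    # Eq. (2.8): superposition of two scaled, shifted echoes of h[n].
    return 0.5 * h(n) + 2.0 * h(n - 1)

assert [y(n) for n in range(0, 4)] == [0.5, 2.5, 2.5, 2.0]
```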

 

Figure 2.3.  (a) The impulse response \(h[n]\) of an LTI system and an input \(x[n]\) to the system;  (b) the responses or "echoes," \(0.5h[n]\) and \(2h[n-1]\), to the nonzero values of the input, namely, \(x[0]=0.5\) and \(x[1]=2\);  (c) the overall response \(y[n]\), which is the sum of the echoes in (b).

 

By considering the effect of the superposition sum on each individual output sample, we obtain another very useful way to visualize the calculation of \(y[n]\) using the convolution sum.

In particular, consider the evaluation of the output value at some specific time \(n\). A particularly convenient way of displaying this calculation graphically begins with the two signals \(x[k]\) and \(h[n-k]\) viewed as functions of \(k\).

Multiplying these two functions, we obtain a sequence \(g[k]=x[k]h[n-k]\), which, at each time \(k\), represents the contribution of \(x[k]\) to the output at time \(n\). We conclude that summing all the samples of the sequence \(g[k]\) yields the output value at the selected time \(n\).

Thus, to calculate \(y[n]\) for all values of \(n\) requires repeating this procedure for each value of \(n\). Fortunately, changing the value of \(n\) has a very simple graphical interpretation for the two signals \(x[k]\) and \(h[n-k]\), viewed as functions of \(k\).

The following examples illustrate this and the use of the aforementioned viewpoint in evaluating convolution sums.

 

Example 2.2

Let us consider again the convolution problem encountered in Example 2.1. The sequence \(x[k]\) is shown in Figure 2.4(a), while the sequence \(h[n-k]\), for \(n\) fixed and viewed as a function of \(k\), is shown in Figure 2.4(b) for several different values of \(n\).

In sketching these sequences, we have used the fact that \(h[n-k]\) (viewed as a function of \(k\) with \(n\) fixed) is a time-reversed and shifted version of the impulse response \(h[k]\). In particular, as \(k\) increases, the argument \(n-k\) decreases, explaining the need to perform a time reversal of \(h[k]\).

Knowing this, then in order to sketch the signal \(h[n-k]\), we need only determine its value for some particular value of \(k\). For example, the argument \(n-k\) will equal 0 at the value \(k=n\). Thus, if we sketch the signal \(h[-k]\), we can obtain the signal \(h[n-k]\) simply by shifting to the right (by \(n\)) if \(n\) is positive or to the left if \(n\) is negative.

The results for our example, for \(n\lt0\), \(n=0,1,2,3,\) and \(n\gt3\), are shown in Figure 2.4(b).

 

Figure 2.4.  Interpretation of eq. (2.6) for the signals \(h[n]\) and \(x[n]\) in Figure 2.3;  (a) the signal \(x[k]\) and (b) the signal \(h[n-k]\) (as a function of \(k\) with \(n\) fixed) for several values of \(n\) (\(n\lt0; n=0,1,2,3;n\gt3\)). Each of these signals is obtained by reflection and shifting of the unit impulse response \(h[k]\). The response \(y[n]\) for each value of \(n\) is obtained by multiplying the signals \(x[k]\) and \(h[n-k]\) in (a) and (b) and then summing the products over all values of \(k\). The calculation for this example is carried out in detail in Example 2.2.

 

Having sketched \(x[k]\) and \(h[n-k]\) for any particular value of \(n\), we multiply these two signals and sum over all values of \(k\).

For our example, for \(n\lt0\), we see from Figure 2.4 that \(x[k]h[n-k]=0\) for all \(k\), since the nonzero values of \(x[k]\) and \(h[n-k]\) do not overlap. Consequently, \(y[n]=0\) for \(n\lt0\).

For \(n=0\), since the product of the sequence \(x[k]\) with the sequence \(h[0-k]\) has only one nonzero sample with the value 0.5, we conclude that

\[\tag{2.9}y[0]=\sum_{k=-\infty}^{\infty}x[k]h[0-k]=0.5\]

The product of the sequence \(x[k]\) with the sequence \(h[1-k]\) has two nonzero samples, which may be summed to obtain

\[\tag{2.10}y[1]=\sum_{k=-\infty}^{\infty}x[k]h[1-k]=0.5+2.0=2.5\]

Similarly,

\[\tag{2.11}y[2]=\sum_{k=-\infty}^{\infty}x[k]h[2-k]=0.5+2.0=2.5\]

and

\[\tag{2.12}y[3]=\sum_{k=-\infty}^{\infty}x[k]h[3-k]=2.0\]

Finally, for \(n\gt3\), the product \(x[k]h[n-k]\) is zero for all \(k\), from which we conclude that \(y[n]=0\) for \(n\gt3\).

The resulting output values agree with those obtained in Example 2.1.
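The flip-and-shift procedure of Example 2.2 can be sketched one output sample at a time: for each \(n\), form \(g[k]=x[k]h[n-k]\) and sum over \(k\). As in the previous sketch, \(h[n]=1\) for \(0\le{n}\le2\) is an assumption inferred from the stated output values:

```python
def x(k):
    # Input from Example 2.1: x[0] = 0.5, x[1] = 2, zero elsewhere.
    return {0: 0.5, 1: 2.0}.get(k, 0.0)

def h(k):
    # Assumed impulse response (inferred, not given numerically in the text).
    return 1.0 if 0 <= k <= 2 else 0.0

def y(n, k_min=-10, k_max=10):
    # Flip-and-shift: sum g[k] = x[k] * h[n - k] over a range wide
    # enough to cover all nonzero terms.
    return sum(x(k) * h(n - k) for k in range(k_min, k_max + 1))

assert y(-1) == 0.0   # n < 0: no overlap
assert y(0) == 0.5    # eq. (2.9)
assert y(1) == 2.5    # eq. (2.10)
assert y(2) == 2.5    # eq. (2.11)
assert y(3) == 2.0    # eq. (2.12)
assert y(4) == 0.0    # n > 3: no overlap
```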

 

Example 2.3

Consider an input \(x[n]\) and a unit impulse response \(h[n]\) given by

\[x[n]=\alpha^nu[n]\]

\[h[n]=u[n]\]

with \(0\lt\alpha\lt1\).

These signals are illustrated in Figure 2.5. Also, to help us in visualizing and calculating the convolution of signals, in Figure 2.6 we have depicted the signal \(x[k]\) followed by \(h[-k]\), \(h[-1-k]\), and \(h[1-k]\) (that is, \(h[n-k]\) for \(n=0,-1,\) and \(+1\)) and, finally, \(h[n-k]\) for an arbitrary positive value of \(n\) and an arbitrary negative value of \(n\).

 

Figure 2.5. The signals \(x[n]\) and \(h[n]\) in Example 2.3.

 

 

Figure 2.6.  Graphical interpretation of the calculation of the convolution sum for Example 2.3.

 

From this figure, we note that for \(n\lt0\), there is no overlap between the nonzero points in \(x[k]\) and \(h[n-k]\). Thus, for \(n\lt0\), \(x[k]h[n-k]=0\) for all values of \(k\), and hence, from eq. (2.6), we see that \(y[n]=0\), \(n\lt0\).

For \(n\ge0\),

\[x[k]h[n-k]=\begin{cases}\alpha^k,\qquad0\le{k}\le{n}\\0,\qquad\text{otherwise}\end{cases}\]

Thus, for \(n\ge0\),

\[y[n]=\sum_{k=0}^{n}\alpha^k\]

and we can write this as

\[\tag{2.13}y[n]=\sum_{k=0}^{n}\alpha^k=\frac{1-\alpha^{n+1}}{1-\alpha}\qquad\text{for }n\ge0\]

Thus, for all \(n\),

\[y[n]=\left(\frac{1-\alpha^{n+1}}{1-\alpha}\right)u[n]\]

The signal \(y[n]\) is sketched in Figure 2.7.
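The closed form of eq. (2.13) is easy to verify numerically against a direct evaluation of the convolution sum; the value \(\alpha=0.5\) below is an arbitrary choice in \(0\lt\alpha\lt1\):

```python
a = 0.5  # arbitrary alpha with 0 < alpha < 1

def y_brute(n):
    # Direct evaluation of eq. (2.6): the overlap of x[k] = a^k u[k]
    # and h[n-k] = u[n-k] is 0 <= k <= n.
    return sum(a ** k for k in range(0, n + 1)) if n >= 0 else 0.0

def y_closed(n):
    # Closed form from eq. (2.13), valid for n >= 0.
    return (1 - a ** (n + 1)) / (1 - a) if n >= 0 else 0.0

for n in range(-3, 20):
    assert abs(y_brute(n) - y_closed(n)) < 1e-12
```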

 

Figure 2.7. Output for Example 2.3.

 

The operation of convolution is sometimes described in terms of "sliding" the sequence \(h[n-k]\) past \(x[k]\).

For example, suppose we have evaluated \(y[n]\) for some particular value of \(n\), say, \(n=n_0\). That is, we have sketched the signal \(h[n_0-k]\), multiplied it by the signal \(x[k]\), and summed the result over all values of \(k\).

To evaluate \(y[n]\) at the next value of \(n\) — i.e., \(n=n_0+1\) — we need to sketch the signal \(h[(n_0+1)-k]\). However, we can do this simply by taking the signal \(h[n_0-k]\) and shifting it to the right by one point.

For each successive value of \(n\), we continue this process of shifting \(h[n-k]\) to the right by one point, multiplying by \(x[k]\), and summing the result over \(k\).

 

Example 2.4

As a further example, consider the two sequences

\[x[n]=\begin{cases}1,\qquad0\le{n}\le4\\0,\qquad\text{otherwise}\end{cases}\]

and

\[h[n]=\begin{cases}\alpha^n,\qquad0\le{n}\le6\\0,\qquad\text{otherwise}\end{cases}\]

These signals are depicted in Figure 2.8 for a value of \(\alpha\gt1\). In order to calculate the convolution of the two signals, it is convenient to consider five separate intervals for \(n\). This is illustrated in Figure 2.9.

 

Figure 2.8.  The signals to be convolved in Example 2.4.

 

 

Figure 2.9.  Graphical interpretation of the convolution performed in Example 2.4.

 

Interval 1.  For \(n\lt0\), there is no overlap between the nonzero portions of \(x[k]\) and \(h[n-k]\), and consequently, \(y[n]=0\).

Interval 2.  For \(0\le{n}\le4\),

\[x[k]h[n-k]=\begin{cases}\alpha^{n-k},\qquad0\le{k}\le{n}\\0,\qquad\text{otherwise}\end{cases}\]

Thus, in this interval,

\[\tag{2.14}y[n]=\sum_{k=0}^n\alpha^{n-k}\]

We can evaluate this sum using the finite sum formula, eq. (2.13). Specifically, changing the variable of summation in eq. (2.14) from \(k\) to \(r=n-k\), we obtain

\[y[n]=\sum_{r=0}^n\alpha^r=\frac{1-\alpha^{n+1}}{1-\alpha}\]

Interval 3.  For \(n\gt4\) but \(n-6\le0\) (i.e., \(4\lt{n}\le6\)),

\[x[k]h[n-k]=\begin{cases}\alpha^{n-k},\qquad0\le{k}\le4\\0,\qquad\text{otherwise}\end{cases}\]

Thus, in this interval,

\[\tag{2.15}y[n]=\sum_{k=0}^4\alpha^{n-k}\]

Once again, we can use the geometric sum formula in eq. (2.13) to evaluate eq. (2.15). Specifically, factoring out the constant factor \(\alpha^n\) from the summation in eq. (2.15) yields

\[\tag{2.16}y[n]=\alpha^n\sum_{k=0}^4(\alpha^{-1})^k=\alpha^n\frac{1-(\alpha^{-1})^5}{1-\alpha^{-1}}=\frac{\alpha^{n-4}-\alpha^{n+1}}{1-\alpha}\]

Interval 4.  For \(n\gt6\) but \(n-6\le4\) (i.e., for \(6\lt{n}\le10\)),

\[x[k]h[n-k]=\begin{cases}\alpha^{n-k},\qquad(n-6)\le{k}\le4\\0,\qquad\text{otherwise}\end{cases}\]

so that

\[y[n]=\sum_{k=n-6}^4\alpha^{n-k}\]

We can again use eq. (2.13) to evaluate this summation. Letting \(r=k-n+6\), we obtain

\[y[n]=\sum_{r=0}^{10-n}\alpha^{6-r}=\alpha^6\sum_{r=0}^{10-n}(\alpha^{-1})^r=\alpha^6\frac{1-\alpha^{n-11}}{1-\alpha^{-1}}=\frac{\alpha^{n-4}-\alpha^7}{1-\alpha}\]

Interval 5.  For \(n-6\gt4\), or equivalently, \(n\gt10\), there is no overlap between the nonzero portions of \(x[k]\) and \(h[n-k]\), and hence,

\[y[n]=0\]

 

Summarizing, then, we obtain

\[y[n]=\begin{cases}0,\qquad\qquad\qquad{n\lt0}\\\frac{1-\alpha^{n+1}}{1-\alpha},\qquad\qquad0\le{n}\le4\\\frac{\alpha^{n-4}-\alpha^{n+1}}{1-\alpha},\quad\qquad4\lt{n}\le6\\\frac{\alpha^{n-4}-\alpha^7}{1-\alpha},\quad\quad\qquad6\lt{n}\le10\\0,\qquad\qquad\qquad10\lt{n}\end{cases}\]

which is pictured in Figure 2.10.
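The five-interval result above can be checked against a direct evaluation of the convolution sum; \(\alpha=2\) below is an arbitrary choice with \(\alpha\gt1\), as in Figure 2.8:

```python
alpha = 2.0  # arbitrary choice with alpha > 1

def x(n):
    return 1.0 if 0 <= n <= 4 else 0.0

def h(n):
    return alpha ** n if 0 <= n <= 6 else 0.0

def y_brute(n):
    # x[k] is nonzero only for 0 <= k <= 4, so the sum is finite.
    return sum(x(k) * h(n - k) for k in range(0, 5))

def y_piecewise(n):
    # The five intervals derived in Example 2.4.
    if n < 0 or n > 10:
        return 0.0
    if n <= 4:
        return (1 - alpha ** (n + 1)) / (1 - alpha)
    if n <= 6:
        return (alpha ** (n - 4) - alpha ** (n + 1)) / (1 - alpha)
    return (alpha ** (n - 4) - alpha ** 7) / (1 - alpha)

for n in range(-3, 15):
    assert abs(y_brute(n) - y_piecewise(n)) < 1e-9
```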

 

Figure 2.10   Result of performing the convolution in Example 2.4.

 

 

Example 2.5

Consider an LTI system with input \(x[n]\) and unit impulse response \(h[n]\) specified as follows:

\[\tag{2.17}x[n]=2^nu[-n]\]

\[\tag{2.18}h[n]=u[n]\]

The sequences \(x[k]\) and \(h[n-k]\) are plotted as functions of \(k\) in Figure 2.11(a). Note that \(x[k]\) is zero for \(k\gt0\) and \(h[n-k]\) is zero for \(k\gt{n}\).

We also observe that, regardless of the value of \(n\), the sequence \(x[k]h[n-k]\) always has nonzero samples along the \(k\)-axis.

When \(n\ge0\), \(x[k]h[n-k]\) has nonzero samples in the interval \(k\le0\). It follows that, for \(n\ge0\),

\[\tag{2.19}y[n]=\sum_{k=-\infty}^0x[k]h[n-k]=\sum_{k=-\infty}^02^k\]

To evaluate the infinite sum in eq. (2.19), we may use the infinite sum formula,

\[\tag{2.20}\sum_{k=0}^\infty\alpha^k=\frac{1}{1-\alpha},\qquad0\lt|\alpha|\lt1\]

Changing the variable of summation in eq. (2.19) from \(k\) to \(r=-k\), we obtain

\[\tag{2.21}\sum_{k=-\infty}^02^k=\sum_{r=0}^\infty\left(\frac{1}{2}\right)^r=\frac{1}{1-(1/2)}=2\]

Thus, \(y[n]\) takes on a constant value of \(2\) for \(n\ge0\).

When \(n\lt0\), \(x[k]h[n-k]\) has nonzero samples for \(k\le{n}\). It follows that, for \(n\lt0\),

\[\tag{2.22}y[n]=\sum_{k=-\infty}^nx[k]h[n-k]=\sum_{k=-\infty}^n2^k\]

By performing a change of variable \(l=-k\) and then \(m=l+n\), we can again make use of the infinite sum formula, eq. (2.20), to evaluate the sum in eq. (2.22). The result is the following for \(n\lt0\):

\[\tag{2.23}y[n]=\sum_{l=-n}^{\infty}\left(\frac{1}{2}\right)^l=\sum_{m=0}^{\infty}\left(\frac{1}{2}\right)^{m-n}=\left(\frac{1}{2}\right)^{-n}\sum_{m=0}^{\infty}\left(\frac{1}{2}\right)^m=2^n\cdot2=2^{n+1}\]

The complete sequence of \(y[n]\) is sketched in Figure 2.11(b).
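The two closed-form branches, \(y[n]=2\) for \(n\ge0\) and \(y[n]=2^{n+1}\) for \(n\lt0\), can be verified against a truncated evaluation of the infinite sum; the truncation point \(k=-60\) below is ours, chosen so the omitted tail is negligible:

```python
def x(k):
    # x[k] = 2^k u[-k]: nonzero only for k <= 0.
    return 2.0 ** k if k <= 0 else 0.0

def y_brute(n, k_min=-60):
    # Overlap: k <= 0 (x[k] nonzero) and k <= n (h[n-k] nonzero),
    # so the sum runs up to min(0, n). The lower limit truncates the
    # infinite sum; the discarded tail is about 2^-60.
    upper = min(0, n)
    return sum(x(k) for k in range(k_min, upper + 1))

def y_exact(n):
    # Eqs. (2.21) and (2.23).
    return 2.0 if n >= 0 else 2.0 ** (n + 1)

for n in range(-10, 10):
    assert abs(y_brute(n) - y_exact(n)) < 1e-12
```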

 

Figure 2.11  (a) The sequences \(x[k]\) and \(h[n-k]\) for the convolution problem considered in Example 2.5;  (b) the resulting output signal \(y[n]\).

 

These examples illustrate the usefulness of visualizing the calculation of the convolution sum graphically. Moreover, in addition to providing a useful way in which to calculate the response of an LTI system, the convolution sum also provides an extremely useful representation for LTI systems that allows us to examine their properties in great detail.

In particular, in later tutorials we will describe some of the properties of convolution and will also examine some of the system properties introduced in the previous tutorials in order to see how these properties can be characterized for LTI systems.

 

The next tutorial discusses continuous-time LTI systems and the convolution integral.

