# Continuous-Time and Discrete-Time Systems

This is a continuation from the previous tutorial - **the unit impulse and unit step functions**.

Physical systems in the broadest sense are an interconnection of components, devices, or subsystems.

In contexts ranging from signal processing and communications to electromechanical motors, automotive vehicles, and chemical-processing plants, a system can be viewed as a process in which input signals are transformed by the system or cause the system to respond in some way, resulting in other signals as outputs.

For example, a high-fidelity system takes a recorded audio signal and generates a reproduction of that signal. If the hi-fi system has tone controls, we can change the tonal quality of the reproduced signal.

Similarly, the circuit in Figure 1.1 can be viewed as a system with input voltage \(V_\text{s}(t)\) and output voltage \(V_\text{c}(t)\), while the automobile in Figure 1.2 can be thought of as a system with input equal to the force \(f(t)\) and output equal to the velocity \(v(t)\) of the vehicle. An image-enhancement system transforms an input image into an output image that has some desired properties, such as improved contrast.

A **continuous-time system** is a system in which continuous-time input signals are applied and result in continuous-time output signals. Such a system will be represented pictorially as in Figure 1.41(a), where \(x(t)\) is the input and \(y(t)\) is the output.

Alternatively, we will often represent the input-output relation of a continuous-time system by the notation

\[\tag{1.78}x(t)\rightarrow{y(t)}\]

Similarly, a **discrete-time system** - that is, a system that transforms discrete-time inputs into discrete-time outputs - will be depicted as in Figure 1.41(b) and will sometimes be represented symbolically as

\[\tag{1.79}x[n]\rightarrow{y[n]}\]

In most of our tutorials, we will treat discrete-time systems and continuous-time systems separately but in parallel. In a later tutorial series, we will bring continuous-time and discrete-time systems together through the concept of sampling, and we will develop some insights into the use of discrete-time systems to process continuous-time signals that have been sampled.

## 1. Simple Examples of Systems

One of the most important motivations for the development of general tools for analyzing and designing systems is that systems from many different applications have very similar mathematical descriptions. To illustrate this, we begin with a few simple examples.

**Example 1.8**

Consider the RC circuit depicted in Figure 1.1.

If we regard \(v_\text{s}(t)\) as the input signal and \(v_\text{c}(t)\) as the output signal, then we can use simple circuit analysis to derive an equation describing the relationship between the input and output.

Specifically, from Ohm's law, the current \(i(t)\) through the resistor is proportional (with proportionality constant \(1/R\)) to the voltage drop across the resistor; i.e.,

\[\tag{1.80}i(t)=\frac{v_\text{s}(t)-v_\text{c}(t)}{R}\]

Similarly, using the defining constitutive relation for a capacitor, we can relate \(i(t)\) to the rate of change with time of the voltage across the capacitor:

\[\tag{1.81}i(t)=C\frac{\text{d}v_\text{c}(t)}{\text{d}t}\]

Equating the right-hand sides of eqs. (1.80) and (1.81), we obtain a differential equation describing the relationship between the input \(v_\text{s}(t)\) and the output \(v_\text{c}(t)\):

\[\tag{1.82}\frac{\text{d}v_\text{c}(t)}{\text{d}t}+\frac{1}{RC}v_\text{c}(t)=\frac{1}{RC}v_\text{s}(t)\]

**Example 1.9**

Consider Figure 1.2, in which we regard the force \(f(t)\) as the input and the velocity \(v(t)\) as the output. If we let \(m\) denote the mass of the automobile and \(\rho v(t)\) the retarding force due to friction, then equating acceleration - i.e., the time derivative of velocity - with net force divided by mass, we obtain

\[\tag{1.83}\frac{\text{d}v(t)}{\text{d}t}=\frac{1}{m}[f(t)-\rho{v(t)}]\]

i.e.,

\[\tag{1.84}\frac{\text{d}v(t)}{\text{d}t}+\frac{\rho}{m}v(t)=\frac{1}{m}f(t)\]

Examining and comparing eqs. (1.82) and (1.84) in the above examples, we see that the input-output relationships captured in these two equations for these two very different physical systems are basically the same. In particular, they are both examples of first-order linear differential equations of the form

\[\tag{1.85}\frac{\text{d}y(t)}{\text{d}t}+ay(t)=bx(t)\]

where \(x(t)\) is the input, \(y(t)\) is the output, and \(a\) and \(b\) are constants. This is one very simple example of the fact that, by developing methods for analyzing general classes of systems such as that represented by eq. (1.85), we will be able to use them in a wide variety of applications.
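Since eq. (1.85) recurs throughout, a quick numerical experiment can build intuition for it. The sketch below is illustrative only: the values \(a = b = 1\), the step size, and the unit-step input are assumptions of mine, not from the text. It integrates the equation with a forward-Euler loop and shows the output settling toward the steady-state value \(b/a\).

```python
# Forward-Euler simulation of dy/dt + a*y = b*x for a step input.
# The values of a, b, dt, and the input are illustrative assumptions.
a, b = 1.0, 1.0
dt = 0.001          # integration step
y = 0.0             # initial rest condition y(0) = 0
for _ in range(20000):          # simulate 20 seconds
    x = 1.0                     # unit-step input: x(t) = 1 for t >= 0
    y += dt * (b * x - a * y)   # dy/dt = b*x - a*y

print(round(y, 3))  # approaches the steady state b/a = 1.0
```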

**Example 1.10**

As a simple example of a discrete-time system, consider a simple model for the balance in a bank account from month to month. Specifically, let \(y[n]\) denote the balance at the end of the \(n\)th month, and suppose that \(y[n]\) evolves from month to month according to the equation

\[\tag{1.86}y[n]=1.01y[n-1]+x[n]\]

or equivalently,

\[\tag{1.87}y[n]-1.01y[n-1]=x[n]\]

where \(x[n]\) represents the net deposit (i.e., deposits minus withdrawals) during the \(n\)th month and the term \(1.01y[n-1]\) models the fact that we accrue 1% interest each month.
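The recursion in eq. (1.86) is easy to run directly. In this sketch, the deposit schedule and starting balance are illustrative assumptions; the code simply iterates \(y[n] = 1.01y[n-1] + x[n]\).

```python
# Month-to-month balance from eq. (1.86): y[n] = 1.01*y[n-1] + x[n].
# The deposit schedule below is an illustrative assumption.
def balance(deposits, y_init=0.0):
    """Return the list of month-end balances for a list of net deposits."""
    y = y_init
    out = []
    for x in deposits:
        y = 1.01 * y + x   # 1% monthly interest plus this month's net deposit
        out.append(y)
    return out

# A single deposit of 100, then two months of compounding with no activity.
print([round(b, 2) for b in balance([100.0, 0.0, 0.0])])
```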

**Example 1.11**

As a second example, consider a simple digital simulation of the differential equation in eq. (1.84) in which we resolve time into discrete intervals of length \(\Delta\) and approximate \(\text{d}v(t)/\text{d}t\) at \(t=n\Delta\) by the first backward difference, i.e.,

\[\frac{v(n\Delta)-v((n-1)\Delta)}{\Delta}\]

In this case, if we let \(v[n]=v(n\Delta)\) and \(f[n]=f(n\Delta)\), we obtain the following discrete-time model relating the sampled signals \(f[n]\) and \(v[n]\):

\[\tag{1.88}v[n]-\frac{m}{(m+\rho\Delta)}v[n-1]=\frac{\Delta}{(m+\rho\Delta)}f[n]\]
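Equation (1.88) can also be iterated directly. In the sketch below, the mass, friction coefficient, step size \(\Delta\), and constant applied force are illustrative assumptions; the recursion settles at the fixed point where \(v[n] = v[n-1]\), namely \(v = F/\rho\).

```python
# Discrete-time simulation of eq. (1.88):
#   v[n] = m/(m + rho*dlt) * v[n-1] + dlt/(m + rho*dlt) * f[n]
# The values of m, rho, dlt, and the force F are illustrative assumptions.
m, rho, dlt = 1000.0, 50.0, 0.1    # mass (kg), friction coefficient, step (s)
F = 500.0                          # constant applied force (N)
v = 0.0                            # start at rest
for _ in range(5000):              # 500 simulated seconds
    v = (m / (m + rho * dlt)) * v + (dlt / (m + rho * dlt)) * F

print(round(v, 3))  # settles at F/rho = 10.0, where v[n] == v[n-1]
```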

Comparing eqs. (1.87) and (1.88), we see that they are both examples of the same general first-order linear difference equation, namely,

\[\tag{1.89}y[n]+ay[n-1]=bx[n]\]

As the preceding examples suggest, the mathematical descriptions of systems from a wide variety of applications frequently have a great deal in common, and it is this fact that provides considerable motivation for the development of broadly applicable tools for signal and system analysis.

The key to doing this successfully is identifying classes of systems that have two important characteristics:

- The systems in this class have properties and structures that we can exploit to gain insight into their behavior and to develop effective tools for their analysis.
- Many systems of practical importance can be accurately modeled using systems in this class.

It is on the first of these characteristics that most of our tutorials focus, as we develop tools for a particular class of systems referred to as linear, time-invariant systems. We will introduce the properties that characterize this class, as well as a number of other very important basic system properties.

The second characteristic mentioned is of obvious importance for any system analysis technique to be of value in practice. It is a well-established fact that a wide range of physical systems (including those in Examples 1.8-1.10) can be well modeled within the class of systems on which we focus in our tutorials.

However, a critical point is that any model used in describing or analyzing a physical system represents an idealization of that system, and thus, any resulting analysis is only as good as the model itself.

For example, the simple linear model of a resistor in eq. (1.80) and that of a capacitor in eq. (1.81) are idealizations. However, these idealizations are quite accurate for real resistors and capacitors in many applications, and thus, analyses employing such idealizations provide useful results and conclusions, as long as the voltages and currents remain within the operating conditions under which these simple linear models are valid.

Similarly, the use of a linear retarding force to represent frictional effects in eq. (1.83) is an approximation with a range of validity.

Consequently, it is important to remember that an essential component of engineering practice in using the methods we develop here consists of identifying the range of validity of the assumptions that have gone into a model and ensuring that any analysis or design based on that model does not violate those assumptions.

## 2. Interconnections of Systems

An important idea that we will use is the concept of the interconnection of systems. Many real systems are built as interconnections of several subsystems.

One example is an audio system, which involves the interconnection of a radio receiver, compact disc player, or tape deck with an amplifier and one or more speakers.

Another is a digitally controlled aircraft, which is an interconnection of the aircraft, described by its equations of motion and the aerodynamic forces affecting it; the sensors, which measure various aircraft variables such as accelerations, rotation rates, and heading; a digital autopilot, which responds to the measured variables and to command inputs from the pilot (e.g., the desired course, altitude, and speed); and the aircraft's actuators, which respond to inputs provided by the autopilot in order to use the aircraft control surfaces (rudder, tail, ailerons) to change the aerodynamic forces on the aircraft.

By viewing such a system as an interconnection of its components, we can use our understanding of the component systems and of how they are interconnected in order to analyze the operation and behavior of the overall system.

In addition, by describing a system in terms of an interconnection of simpler subsystems, we may in fact be able to define useful ways in which to synthesize complex systems out of simpler, basic building blocks.

While one can construct a variety of system interconnections, there are several basic ones that are frequently encountered.

A **series** or **cascade interconnection** of two systems is illustrated in Figure 1.42(a). Diagrams such as this are referred to as block diagrams. Here, the output of System 1 is the input to System 2, and the overall system transforms an input by processing it first by System 1 and then by System 2.

An example of a series interconnection is a radio receiver followed by an amplifier. Similarly, one can define a series interconnection of three or more systems.

A **parallel interconnection** of two systems is illustrated in Figure 1.42(b). Here, the same input signal is applied to Systems 1 and 2. The symbol "\(\oplus\)" in the figure denotes addition, so that the output of the parallel interconnection is the sum of the outputs of Systems 1 and 2.

An example of a parallel interconnection is a simple audio system with several microphones feeding into a single amplifier and speaker system.

In addition to the simple parallel interconnection in Figure 1.42(b), we can define parallel interconnections of more than two systems, and we can combine both cascade and parallel interconnections to obtain more complicated interconnections. An example of such an interconnection is given in Figure 1.42(c).

Another important type of system interconnection is a **feedback interconnection**, an example of which is illustrated in Figure 1.43.

Here, the output of System 1 is the input to System 2, while the output of System 2 is fed back and added to the external input to produce the actual input to System 1.

Feedback systems arise in a wide variety of applications. For example, a cruise control system on an automobile senses the vehicle's velocity and adjusts the fuel flow in order to keep the speed at the desired level.

Similarly, a digitally controlled aircraft is most naturally thought of as a feedback system in which differences between actual and desired speed, heading, or altitude are fed back through the autopilot in order to correct these discrepancies.

Also, electrical circuits are often usefully viewed as containing feedback interconnections. As an example, consider the circuit depicted in Figure 1.44(a). As indicated in Figure 1.44(b), this system can be viewed as the feedback interconnection of the two circuit elements.

## 3. Basic System Properties

In this section we introduce and discuss a number of basic properties of continuous-time and discrete-time systems. These properties have important physical interpretations and relatively simple mathematical descriptions using the signals and systems language that we have begun to develop.

### 3.1 Systems with and without Memory

A system is said to be **memoryless** if its output for each value of the independent variable at a given time depends only on the input at that same time.

For example, the system specified by the relationship

\[\tag{1.90}y[n]=(2x[n]-x^2[n])^2\]

is memoryless, as the value of \(y[n]\) at any particular time \(n_0\) depends only on the value of \(x[n]\) at that time.

Similarly, a resistor is a memoryless system; with the input \(x(t)\) taken as the current and with the voltage taken as the output \(y(t)\), the input-output relationship of a resistor is

\[\tag{1.91}y(t)=Rx(t)\]

where \(R\) is the resistance.

One particularly simple memoryless system is the **identity system**, whose output is identical to its input. That is, the input-output relationship for the continuous-time identity system is

\[y(t)=x(t)\]

and the corresponding relationship in discrete time is

\[y[n]=x[n]\]

An example of a discrete-time system with memory is an **accumulator** or *summer*,

\[\tag{1.92}y[n]=\sum_{k=-\infty}^nx[k]\]

and a second example is a *delay*,

\[\tag{1.93}y[n]=x[n-1]\]

A capacitor is an example of a continuous-time system with memory, since if the input is taken to be the current and the output is the voltage, then

\[\tag{1.94}y(t)=\frac{1}{C}\displaystyle\int\limits_{-\infty}^tx(\tau)\text{d}\tau\]

where \(C\) is the capacitance.

Roughly speaking, the concept of memory in a system corresponds to the presence of a mechanism in the system that retains or stores information about input values at times other than the current time.

For example, the delay in eq. (1.93) must retain or store the preceding value of the input. Similarly, the accumulator in eq. (1.92) must "remember" or store information about past inputs.

In particular, the accumulator computes the running sum of all inputs up to the current time, and thus, at each instant of time, the accumulator must add the current input value to the preceding value of the running sum.

In other words, the relationship between the input and output of an accumulator can be described as

\[\tag{1.95}y[n]=\sum_{k=-\infty}^{n-1}x[k]+x[n]\]

or equivalently,

\[\tag{1.96}y[n]=y[n-1]+x[n]\]

Represented in the latter way, to obtain the output at the current time \(n\), the accumulator must remember the running sum of previous input values, which is exactly the preceding value of the accumulator output.
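The equivalence of eqs. (1.92) and (1.96) can be checked on a short sequence. The input below is an illustrative assumption (taken to be zero before \(n = 0\)); both forms produce the same running sum.

```python
# Two equivalent views of the accumulator: the running sum of eq. (1.92)
# and the recursion y[n] = y[n-1] + x[n] of eq. (1.96).
# The input sequence is an illustrative assumption (zero before n = 0).
x = [3, -1, 4, 1, 5]

# Direct running sums.
running = [sum(x[:n + 1]) for n in range(len(x))]

# Recursive form: remember only the previous output.
y, recursive = 0, []
for xn in x:
    y = y + xn
    recursive.append(y)

print(running, recursive)  # the two lists agree
```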

In many physical systems, memory is directly associated with the storage of energy.

For example, the capacitor in eq. (1.94) stores energy by accumulating electrical charge, represented as the integral of the current. Thus, the simple RC circuit in Example 1.8 and Figure 1.1 has memory physically stored in the capacitor. Similarly, the automobile in Figure 1.2 has memory stored in its kinetic energy.

In discrete-time systems implemented with computers or digital microprocessors, memory is typically directly associated with storage registers that retain values between clock pulses.

While the concept of memory in a system would typically suggest storing past input and output values, our formal definition also leads to our referring to a system as having memory if the current output is dependent on **future** values of the input and output.

While systems having this dependence on future values might at first seem unnatural, they in fact form an important class of systems, as we discuss further in Section 3.3.

### 3.2 Invertibility and Inverse Systems

A system is said to be **invertible** if distinct inputs lead to distinct outputs.

As illustrated in Figure 1.45(a) for the discrete-time case, if a system is invertible, then an **inverse system** exists that, when cascaded with the original system, yields an output \(w[n]\) equal to the input \(x[n]\) to the first system.

Thus, the series interconnection in Figure 1.45(a) has an overall input-output relationship which is the same as that for the identity system.

An example of an invertible continuous-time system is

\[\tag{1.97}y(t)=2x(t)\]

for which the inverse system is

\[\tag{1.98}w(t)=\frac{1}{2}y(t)\]

This example is illustrated in Figure 1.45(b).

Another example of an invertible system is the accumulator of eq. (1.92). For this system, the difference between two successive values of the output is precisely the last input value. Therefore, in this case, the inverse system is

\[\tag{1.99}w[n]=y[n]-y[n-1]\]

as illustrated in Figure 1.45(c).
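This inverse relationship is easy to verify numerically. The sketch below, with an illustrative input assumed zero before \(n = 0\), accumulates a sequence and then recovers it exactly with the first difference of eq. (1.99).

```python
# The first difference w[n] = y[n] - y[n-1] of eq. (1.99) undoes the
# accumulator of eq. (1.92).  The input sequence is an illustrative
# assumption, taken to be zero before n = 0 (so y[-1] = 0).
x = [2, 7, -3, 5]

# Accumulator: running sums of x.
y = []
total = 0
for xn in x:
    total += xn
    y.append(total)

# Inverse system: the first difference recovers x exactly.
w = [y[0]] + [y[n] - y[n - 1] for n in range(1, len(y))]
print(w)  # equal to the original input x
```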

Examples of noninvertible systems are

\[\tag{1.100}y[n]=0\]

that is, the system that produces the zero output sequence for any input sequence, and

\[\tag{1.101}y(t)=x^2(t)\]

in which case we cannot determine the sign of the input from knowledge of the output.

The concept of invertibility is important in many contexts. One example arises in systems for encoding used in a wide variety of communications applications.

In such a system, a signal that we wish to transmit is first applied as the input to a system known as an encoder. There are many reasons for doing this, ranging from the desire to encrypt the original message for secure or private communication to the objective of providing some redundancy in the signal (for example, by adding what are known as parity bits) so that any errors that occur in transmission can be detected and, possibly, corrected.

For **lossless** coding, the input to the encoder must be exactly recoverable from the output; i.e., the encoder must be invertible.

### 3.3 Causality

A system is **causal** if the output at any time depends on values of the input at only the present and past times. Such a system is often referred to as being *nonanticipative*, as the system output does not anticipate future values of the input.

Consequently, if two inputs to a causal system are identical up to some point in time \(t_0\) or \(n_0\), the corresponding outputs must also be equal up to this same time.

The RC circuit of Figure 1.1 is causal, since the capacitor voltage responds only to the present and past values of the source voltage. Similarly, the motion of an automobile is causal, since it does not anticipate future actions of the driver.

The systems described in eqs. (1.92) - (1.94) are also causal, but the systems defined by

\[\tag{1.102}y[n]=x[n]-x[n+1]\]

and

\[\tag{1.103}y(t)=x(t+1)\]

are not.

All memoryless systems are causal, since the output responds only to the current value of the input.

Although causal systems are of great importance, they do not by any means constitute the only systems that are of practical significance.

For example, causality is not often an essential constraint in applications in which the independent variable is not time, such as in image processing. Furthermore, in processing data that have been recorded previously, as often happens with speech, geophysical, or meteorological signals, to name a few, we are by no means constrained to causal processing.

As another example, in many applications, including historical stock market analysis and demographic studies, we may be interested in determining a slowly varying trend in data that also contain high-frequency fluctuations about that trend. In this case, a commonly used approach is to average data over an interval in order to smooth out the fluctuations and keep only the trend.

An example of a noncausal averaging system is

\[\tag{1.104}y[n]=\frac{1}{2M+1}\sum_{k=-M}^{+M}x[n-k]\]
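A minimal sketch of the averaging system in eq. (1.104): the input sequence and the choice \(M = 1\) are illustrative assumptions, and samples outside the observed range are treated as zero.

```python
# Noncausal moving average of eq. (1.104): each output averages the
# 2M+1 input samples centered on n, so it uses future samples as well.
# The input list and M are illustrative; samples outside the list count as zero.
def centered_average(x, M):
    out = []
    for n in range(len(x)):
        window = [x[k] for k in range(n - M, n + M + 1) if 0 <= k < len(x)]
        out.append(sum(window) / (2 * M + 1))
    return out

# A fluctuating sequence is smoothed toward its trend.
print(centered_average([1, 5, 1, 5, 1], 1))
```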

**Example 1.12**

When checking the causality of a system, it is important to look carefully at the input-output relation. To illustrate some of the issues involved in doing this, we will check the causality of two particular systems.

The first system is defined by

\[\tag{1.105}y[n]=x[-n]\]

Note that the output \(y[n_0]\) at a positive time \(n_0\) depends only on the value of the input signal \(x[-n_0]\) at time \((-n_0)\), which is negative and therefore in the past of \(n_0\).

We may be tempted to conclude at this point that the given system is causal. However, we should always be careful to check the input-output relation for **all** times.

In particular, for \(n\lt0\), e.g., \(n=-4\), we see that \(y[-4]=x[4]\), so that the output at this time depends on a future value of the input. Hence, the system is not causal.

It is also important to distinguish carefully the effects of the input from those of any other functions used in the definition of the system. For example, consider the system

\[\tag{1.106}y(t)=x(t)\cos(t+1)\]

In this system, the output at any time \(t\) equals the input at that same time multiplied by a number that varies with time. Specifically, we can rewrite eq. (1.106) as

\[y(t)=x(t)g(t)\]

where \(g(t)\) is a time-varying function, namely \(g(t)=\cos(t+1)\). Thus, only the current value of the input \(x(t)\) influences the current value of the output \(y(t)\), and we conclude that this system is causal (and, in fact, memoryless).

### 3.4 Stability

**Stability** is another important system property. Informally, a stable system is one in which small inputs lead to responses that do not diverge.

For example, consider the pendulum in Figure 1.46(a), in which the input is the applied force \(x(t)\) and the output is the angular deviation \(y(t)\) from the vertical. In this case, gravity applies a restoring force that tends to return the pendulum to the vertical position, and frictional losses due to drag tend to slow it down.

Consequently, if a small force \(x(t)\) is applied, the resulting deflection from vertical will also be small.

In contrast, for the inverted pendulum in Figure 1.46(b), the effect of gravity is to apply a force that tends to increase the deviation from vertical. Thus, a small applied force leads to a large deflection from vertical, causing the pendulum to topple over despite any retarding forces due to friction.

The system in Figure 1.46(a) is an example of a stable system, while that in Figure 1.46(b) is unstable.

Models for chain reactions or for population growth with unlimited food supplies and no predators are examples of unstable systems, since the system response grows without bound in response to small inputs.

Another example of an unstable system is the model for a bank account balance in eq. (1.86), since if an initial deposit is made (i.e., \(x[0]\) = a positive amount) and there are no subsequent withdrawals, then that deposit will grow each month without bound, because of the compounding effect of interest payments.

There are also numerous examples of stable systems. Stability of physical systems generally results from the presence of mechanisms that dissipate energy.

For example, assuming positive component values in the simple RC circuit of Example 1.8, the resistor dissipates energy and this circuit is a stable system. The system in Example 1.9 is also stable because of the dissipation of energy through friction.

The preceding examples provide us with an intuitive understanding of the concept of stability.

*More formally, if the input to a stable system is bounded (i.e., if its magnitude does not grow without bound), then the output must also be bounded and therefore cannot diverge.*

This is the definition of stability that we will use throughout our tutorials.

For example, consider applying a constant force \(f(t)=F\) to the automobile in Figure 1.2, with the vehicle initially at rest. In this case the velocity of the car will increase, but not without bound, since the retarding frictional force also increases with velocity.

In fact, the velocity will continue to increase until the frictional force exactly balances the applied force; so, from eq. (1.84), we see that this terminal velocity value \(V\) must satisfy

\[\tag{1.107}\frac{\rho}{m}V=\frac{1}{m}F\]

i.e.,

\[\tag{1.108}V=\frac{F}{\rho}\]

As another example, consider the discrete-time system defined by eq. (1.104), and suppose that the input \(x[n]\) is bounded in magnitude by some number, say, \(B\), for all values of \(n\). Then the largest possible magnitude for \(y[n]\) is also \(B\), because \(y[n]\) is the average of a finite set of values of the input. Therefore, \(y[n]\) is bounded and the system is stable.

On the other hand, consider the accumulator described by eq. (1.92). Unlike the system in eq. (1.104), this system sums all of the past values of the input rather than just a finite set of values, and the system is unstable, since the sum can grow continually even if \(x[n]\) is bounded. For example, if the input to the accumulator is a unit step \(u[n]\), the output will be

\[y[n]=\sum_{k=-\infty}^nu[k]=(n+1)u[n]\]

That is, \(y[0]=1\), \(y[1]=2\), \(y[2]=3\), and so on, and \(y[n]\) grows without bound.
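A quick check of this claim in code; the function below computes the accumulator output for the unit-step input directly from the sum in eq. (1.92).

```python
# The accumulator fed the bounded unit step u[n] produces the unbounded
# ramp y[n] = (n+1)u[n]; a minimal numeric check of that claim.
def accumulator_output(n):
    """y[n] for a unit-step input: the sum of u[k] over k <= n."""
    return sum(1 for k in range(0, n + 1)) if n >= 0 else 0

print([accumulator_output(n) for n in range(5)])  # 1, 2, 3, 4, 5: grows without bound
```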

**Example 1.13**

If we suspect that a system is unstable, then a useful strategy to verify this is to look for a **specific** bounded input that leads to an unbounded output. Finding one such example enables us to conclude that the given system is unstable.

If such an example does not exist or is difficult to find, we must check for stability by using a method that does not utilize specific examples of input signals.

To illustrate this approach, let us check the stability of two systems,

\[\tag{1.109}S_1:y(t)=tx(t)\]

and

\[\tag{1.110}S_2:y(t)=e^{x(t)}\]

In seeking a specific counterexample in order to disprove stability, we might try simple bounded inputs such as a constant or a unit step. For system \(S_1\) in eq. (1.109), a constant input \(x(t)=1\) yields \(y(t)=t\), which is unbounded, since no matter what finite constant we pick, \(|y(t)|\) will exceed that constant for some \(t\). We conclude that system \(S_1\) is unstable.

For system \(S_2\), which happens to be stable, we would be unable to find a bounded input that results in an unbounded output. So we proceed to verify that all bounded inputs result in bounded outputs.

Specifically, let \(B\) be an arbitrary positive number, and let \(x(t)\) be an arbitrary signal bounded by \(B\); that is, we are making no assumption about \(x(t)\), except that

\[\tag{1.111}|x(t)|\lt{B}\]

or

\[\tag{1.112}-B\lt{x(t)}\lt{B}\]

for all \(t\).

Using the definition of \(S_2\) in eq. (1.110), we then see that if \(x(t)\) satisfies eq. (1.111), then \(y(t)\) must satisfy

\[\tag{1.113}e^{-B}\lt|y(t)|\lt{e^B}\]

We conclude that if any input to \(S_2\) is bounded by an arbitrary positive number \(B\), the corresponding output is guaranteed to be bounded by \(e^{B}\). Thus, \(S_2\) is stable.
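A numeric sanity check of the bound in eq. (1.113); the particular bound \(B\) and the sample input values are illustrative assumptions.

```python
# For any |x(t)| < B, the output of S2, y(t) = exp(x(t)), stays strictly
# inside (e^-B, e^B).  B and the sample inputs are illustrative assumptions.
import math

B = 2.0
samples = [-1.99, -0.5, 0.0, 1.3, 1.99]   # bounded test values, |x| < B
outputs = [math.exp(x) for x in samples]

assert all(math.exp(-B) < y < math.exp(B) for y in outputs)
print(round(min(outputs), 4), round(max(outputs), 4))
```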

The system properties and concepts that we have introduced so far in this section are of great importance, and we will examine some of these in more detail later.

There remain, however, two additional properties - time invariance and linearity - that play a particularly central role in subsequent tutorials, and in the remainder of this section we introduce and provide initial discussions of these two very important concepts.

### 3.5 Time Invariance

Conceptually, a system is time invariant if the behavior and characteristics of the system are fixed over time.

For example, the RC circuit of Figure 1.1 is time invariant if the resistance and capacitance values \(R\) and \(C\) are constant over time: We would expect to get the same results from an experiment with this circuit today as we would if we ran the identical experiment tomorrow. On the other hand, if the values of \(R\) and \(C\) are changed or fluctuate over time, then we would expect the results of our experiment to depend on the time at which we run it.

Similarly, if the frictional coefficient \(\rho\) and mass \(m\) of the automobile in Figure 1.2 are constant, we would expect the vehicle to respond in the same way regardless of when we drive it. On the other hand, if we load the auto's trunk with heavy suitcases one day, thus increasing \(m\), we would expect the car to behave differently than at other times when it is not so heavily loaded.

The property of time invariance can be described very simply in terms of the signals and systems language that we have introduced.

Specifically, a system is time invariant if a time shift in the input signal results in an identical time shift in the output signal. That is, if \(y[n]\) is the output of a discrete-time, time-invariant system when \(x[n]\) is the input, then \(y[n-n_0]\) is the output when \(x[n-n_0]\) is applied. In continuous time with \(y(t)\) the output corresponding to the input \(x(t)\), a time-invariant system will have \(y(t-t_0)\) as the output when \(x(t-t_0)\) is the input.

To see how to determine whether a system is time invariant or not, and to gain some insight into this property, consider the following examples:

**Example 1.14**

Consider the continuous-time system defined by

\[\tag{1.114}y(t)=\sin[x(t)]\]

To check that this system is time invariant, we must determine whether the time-invariance property holds for *any* input and *any* time shift \(t_0\). Thus, let \(x_1(t)\) be an arbitrary input to this system, and let

\[\tag{1.115}y_1(t)=\sin[x_1(t)]\]

be the corresponding output. Then, consider a second input obtained by shifting \(x_1(t)\) in time:

\[\tag{1.116}x_2(t)=x_1(t-t_0)\]

The output corresponding to this input is

\[\tag{1.117}y_2(t)=\sin[x_2(t)]=\sin[x_1(t-t_0)]\]

Similarly, from eq. (1.115),

\[\tag{1.118}y_1(t-t_0)=\sin[x_1(t-t_0)]\]

Comparing eqs. (1.117) and (1.118), we see that \(y_2(t)=y_1(t-t_0)\), and therefore, this system is time invariant.
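The same check can be carried out numerically on a sampled grid. In the sketch below, the grid spacing, the shift \(t_0\), and the particular input \(x_1(t)\) are illustrative assumptions; shifting the input shifts the output by exactly the same amount.

```python
# Numeric illustration of Example 1.14: for y(t) = sin(x(t)), shifting the
# input by t0 shifts the output by t0.  The grid, shift, and input signal
# are illustrative assumptions (the input is zero before t = 0).
import math

dt, t0_steps = 0.01, 50                               # grid spacing; t0 = 0.5
x1 = [math.cos(3 * n * dt) for n in range(1000)]      # arbitrary test input
y1 = [math.sin(v) for v in x1]                        # response to x1

x2 = [0.0] * t0_steps + x1[:-t0_steps]                # x2(t) = x1(t - t0)
y2 = [math.sin(v) for v in x2]                        # response to x2

# y2 agrees sample for sample with y1 delayed by the same shift.
shifted_y1 = [0.0] * t0_steps + y1[:-t0_steps]
print(max(abs(a - b) for a, b in zip(y2, shifted_y1)))  # 0.0
```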

**Example 1.15**

As a second example, consider the discrete-time system

\[\tag{1.119}y[n]=nx[n]\]

This is a time-varying system, a fact that can be verified using the same formal procedure as that used in the preceding example.

However, when a system is suspected of being time varying, an approach to showing this that is often very useful is to seek a counterexample - i.e., to use our intuition to find an input signal for which the condition of time invariance is violated.

In particular, the system in this example represents a system with a time-varying gain. For example, if we know that the current input value is \(1\) (\(x[n]=1\)), we cannot determine the current output value \(y[n]\) without knowing the current time \(n\).

Consequently, consider the input signal \(x_1[n]=\delta[n]\), which yields an output \(y_1[n]\) that is identically \(0\) (since \(n\delta[n]=0\)). However, the input \(x_2[n]=\delta[n-1]\) yields the output \(y_2[n]=n\delta[n-1]=\delta[n-1]\). Thus, while \(x_2[n]\) is a shifted version of \(x_1[n]\), \(y_2[n]\) is **not** a shifted version of \(y_1[n]\).
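The counterexample can be confirmed in a few lines; the finite window of sample indices is an illustrative assumption.

```python
# Example 1.15 in code: for y[n] = n*x[n], the impulse delta[n] gives the
# zero output, while delta[n-1] gives delta[n-1] back, so a shifted input
# does not produce a shifted output.  The window length is illustrative.
N = 8  # finite window of sample indices 0..N-1

def system(x):
    """y[n] = n * x[n] over the window."""
    return [n * xn for n, xn in enumerate(x)]

delta = [1] + [0] * (N - 1)            # delta[n]
delta_shift = [0, 1] + [0] * (N - 2)   # delta[n-1]

y1 = system(delta)          # all zeros, since n*delta[n] = 0
y2 = system(delta_shift)    # equals delta[n-1]
print(y1, y2)               # y2 is not a shifted copy of y1
```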

While the system in the preceding example has a time-varying gain and as a result is a time-varying system, the system in eq. (1.97) has a constant gain and, in fact, is time invariant.

Other examples of time-invariant systems are given by eqs. (1.91) - (1.104). The following example illustrates a time-varying system.

**Example 1.16**

Consider the system

\[\tag{1.120}y(t)=x(2t)\]

This system represents a time scaling. That is, \(y(t)\) is a time-compressed (by a factor of 2) version of \(x(t)\). Intuitively, then, any time shift in the input will also be compressed by a factor of 2, and it is for this reason that the system is not time invariant.

To demonstrate this by counterexample, consider the input \(x_1(t)\) shown in Figure 1.47(a) and the resulting output \(y_1(t)\) depicted in Figure 1.47(b). If we then shift the input by 2 - i.e., consider \(x_2(t)=x_1(t-2)\), as shown in Figure 1.47(c) - we obtain the resulting output \(y_2(t)=x_2(2t)\) shown in Figure 1.47(d). Comparing Figures 1.47(d) and (e), we see that \(y_2(t)\ne{y_1(t-2)}\), so that the system is not time invariant.

In fact, \(y_2(t)=y_1(t-1)\), so that the output time shift is only half as big as it should be for time invariance, due to the time compression imparted by the system.
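This relationship can also be checked numerically. In the sketch below, \(x_1(t)\) is taken as a unit pulse on \(-1\le t<1\) (an assumed stand-in for the signal of Figure 1.47(a)), sampled on a uniform grid:

```python
def x1(t):
    # Assumed input, standing in for Figure 1.47(a): a unit pulse on [-1, 1)
    return 1.0 if -1.0 <= t < 1.0 else 0.0

def apply_system(x, t):
    # The time-scaling system of eq. (1.120): y(t) = x(2t)
    return x(2 * t)

def x2(t):
    # Input shifted by 2: x2(t) = x1(t - 2)
    return x1(t - 2)

ts = [k / 2 for k in range(-8, 9)]  # sample grid on [-4, 4]
y2 = [apply_system(x2, t) for t in ts]
y1_shifted_by_2 = [apply_system(x1, t - 2) for t in ts]  # y1(t - 2)
y1_shifted_by_1 = [apply_system(x1, t - 1) for t in ts]  # y1(t - 1)

print(y2 == y1_shifted_by_2)  # False: the system is not time invariant
print(y2 == y1_shifted_by_1)  # True:  y2(t) = y1(t - 1)
```

The input shift of 2 produces an output shift of only 1, exactly as the compression argument predicts.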

### 3.6 Linearity

A ** linear system**, in continuous time or discrete time, is a system that possesses the important property of superposition: If an input consists of the weighted sum of several signals, then the output is the superposition - that is, the weighted sum - of the responses of the system to each of those signals.

More precisely, let \(y_1(t)\) be the response of a continuous-time system to an input \(x_1(t)\), and let \(y_2(t)\) be the output corresponding to the input \(x_2(t)\). Then the system is linear if:

- The response to \(x_1(t)+x_2(t)\) is \(y_1(t)+y_2(t)\).
- The response to \(ax_1(t)\) is \(ay_1(t)\), where \(a\) is any complex constant.

The first of these two properties is known as the ** additivity** property; the second is known as the *scaling* or *homogeneity* property.

Although we have written this description using continuous-time signals, the same definition holds in discrete time. The systems specified by eqs. (1.91)-(1.100), (1.102)-(1.104), and (1.119) are linear, while those defined by eqs. (1.101) and (1.114) are nonlinear. Note that a system can be linear without being time invariant, as in eq. (1.119), and it can be time invariant without being linear, as in eqs. (1.101) and (1.114).

The two properties defining a linear system can be combined into a single statement:

\[\tag{1.121}\text{continuous time:}\qquad{a}x_1(t)+bx_2(t)\rightarrow{a}y_1(t)+by_2(t)\]

\[\tag{1.122}\text{discrete time:}\qquad{a}x_1[n]+bx_2[n]\rightarrow{a}y_1[n]+by_2[n]\]

Here, \(a\) and \(b\) are any complex constants.

Furthermore, it is straightforward to show from the definition of linearity that if \(x_k[n]\), \(k=1,2,3,\ldots\), are a set of inputs to a discrete-time linear system with corresponding outputs \(y_k[n]\), \(k=1,2,3,\ldots\), then the response to a linear combination of these inputs given by

\[\tag{1.123}x[n]=\sum_ka_kx_k[n]=a_1x_1[n]+a_2x_2[n]+a_3x_3[n]+\ldots\]

is

\[\tag{1.124}y[n]=\sum_ka_ky_k[n]=a_1y_1[n]+a_2y_2[n]+a_3y_3[n]+\ldots\]

This very important fact is known as the ** superposition property**, which holds for linear systems in both continuous and discrete time.
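As a numerical illustration of eqs. (1.123) and (1.124), the sketch below applies superposition to the first-difference system \(y[n]=x[n]-x[n-1]\), an assumed linear system chosen only for this check (it is not one of the numbered systems above):

```python
def first_difference(x):
    # Assumed linear system for illustration: y[n] = x[n] - x[n-1],
    # with x[-1] taken as 0 for a finite-length sequence
    return [xi - (x[i - 1] if i > 0 else 0) for i, xi in enumerate(x)]

x1 = [1, 2, 3, 4]
x2 = [0, 1, 0, -1]
a1, a2 = 2, -3

# Response to the linear combination a1*x1 + a2*x2 ...
combo_in = [a1 * u + a2 * v for u, v in zip(x1, x2)]
lhs = first_difference(combo_in)

# ... equals the same combination of the individual responses (eq. 1.124)
y1, y2 = first_difference(x1), first_difference(x2)
rhs = [a1 * u + a2 * v for u, v in zip(y1, y2)]

print(lhs == rhs)  # True: superposition holds
```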

A direct consequence of the superposition property is that, for linear systems, an input which is zero for all time results in an output which is zero for all time. For example, if \(x[n]\rightarrow{y}[n]\), then the homogeneity property tells us that

\[\tag{1.125}0=0\cdot{x[n]}\rightarrow0\cdot{y[n]}=0\]

In the following examples we illustrate how the linearity of a given system can be checked by directly applying the definition of linearity.

**Example 1.17**

Consider a system \(S\) whose input \(x(t)\) and output \(y(t)\) are related by

\[y(t)=tx(t)\]

To determine whether or not \(S\) is linear, we consider two arbitrary inputs \(x_1(t)\) and \(x_2(t)\).

\[x_1(t)\rightarrow{y_1}(t)=tx_1(t)\]

\[x_2(t)\rightarrow{y_2}(t)=tx_2(t)\]

Let \(x_3(t)\) be a linear combination of \(x_1(t)\) and \(x_2(t)\). That is,

\[x_3(t)=ax_1(t)+bx_2(t)\]

where \(a\) and \(b\) are arbitrary scalars. If \(x_3(t)\) is the input to \(S\), then the corresponding output may be expressed as

\[\begin{align}y_3(t)&=tx_3(t)\\&=t(ax_1(t)+bx_2(t))\\&=atx_1(t)+btx_2(t)\\&=ay_1(t)+by_2(t)\end{align}\]

We conclude that the system \(S\) is linear.
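The algebra above can be spot-checked numerically; in the sketch below, the test inputs \(x_1(t)=t^2\) and \(x_2(t)=3t+1\) and the scalars are assumptions chosen only for the check:

```python
def S(x, t):
    # The system of Example 1.17: y(t) = t * x(t)
    return t * x(t)

x1 = lambda t: t ** 2     # assumed test input
x2 = lambda t: 3 * t + 1  # assumed test input
a, b = 2.0, -0.5          # arbitrary scalars

x3 = lambda s: a * x1(s) + b * x2(s)
for t in [-1.0, 0.0, 0.5, 2.0]:
    # eq. (1.121): the response to a*x1 + b*x2 is a*y1 + b*y2
    assert S(x3, t) == a * S(x1, t) + b * S(x2, t)

print("superposition holds at all sampled times")
```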

**Example 1.18**

Let us apply the linearity-checking procedure of the previous example to another system \(S\) whose input \(x(t)\) and output \(y(t)\) are related by

\[y(t)=x^2(t)\]

Defining \(x_1(t)\), \(x_2(t)\), and \(x_3(t)\) as in the previous example, we have

\[x_1(t)\rightarrow{y_1}(t)=x_1^2(t)\]

\[x_2(t)\rightarrow{y_2}(t)=x_2^2(t)\]

and

\[\begin{align}x_3(t)\rightarrow{y_3}(t)&=x_3^2(t)\\&=(ax_1(t)+bx_2(t))^2\\&=a^2x_1^2(t)+b^2x_2^2(t)+2abx_1(t)x_2(t)\\&=a^2y_1(t)+b^2y_2(t)+2abx_1(t)x_2(t)\end{align}\]

Clearly, we can specify \(x_1(t)\), \(x_2(t)\), \(a\), and \(b\) such that \(y_3(t)\) is not the same as \(ay_1(t)+by_2(t)\).

For example, if \(x_1(t)=1\), \(x_2(t)=0\), \(a=2\), and \(b=0\), then \(y_3(t)=(2x_1(t))^2=4\), but \(2y_1(t)=2(x_1(t))^2=2\). We conclude that the system \(S\) is not linear.
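The same numbers plug in directly; a minimal sketch:

```python
def S(x, t):
    # The system of Example 1.18: y(t) = x(t) ** 2
    return x(t) ** 2

x1 = lambda t: 1.0  # constant inputs, as chosen in the text
x2 = lambda t: 0.0
a, b = 2, 0

x3 = lambda s: a * x1(s) + b * x2(s)
y3 = S(x3, 0.0)                                # (2 * 1)^2 = 4
ay1_plus_by2 = a * S(x1, 0.0) + b * S(x2, 0.0) # 2 * 1^2 + 0 = 2

print(y3, ay1_plus_by2)  # 4.0 2.0
```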

**Example 1.19**

In checking the linearity of a system, it is important to remember that the system must satisfy both the additivity and homogeneity properties and that the signals, as well as any scaling constants, are allowed to be complex. To emphasize the importance of these points, consider the system specified by

\[\tag{1.126}y[n]=\mathcal{Re}\{x[n]\}\]

This system is additive; however, it does not satisfy the homogeneity property, as we now demonstrate.

Let

\[\tag{1.127}x_1[n]=r[n]+js[n]\]

be an arbitrary complex input with real and imaginary parts \(r[n]\) and \(s[n]\), respectively, so that the corresponding output is

\[\tag{1.128}y_1[n]=r[n]\]

Now, consider scaling \(x_1[n]\) by a complex number, for example, \(a=j\); i.e., consider the input

\[\tag{1.129}\begin{align}x_2[n]&=jx_1[n]=j(r[n]+js[n])\\&=-s[n]+jr[n]\end{align}\]

The output corresponding to \(x_2[n]\) is

\[\tag{1.130}y_2[n]=\mathcal{Re}\{x_2[n]\}=-s[n]\]

which is not equal to the scaled version of \(y_1[n]\),

\[\tag{1.131}ay_1[n]=jr[n]\]

We conclude that the system violates the homogeneity property and hence is not linear.
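This failure of homogeneity is easy to reproduce numerically; the sequences \(r[n]\) and \(s[n]\) below are assumed values chosen only for the check:

```python
def S(x):
    # The system of eq. (1.126): y[n] = Re{x[n]}
    return [v.real for v in x]

r = [1.0, 2.0, 3.0]   # assumed real part r[n]
s = [0.5, -3.0, 4.0]  # assumed imaginary part s[n]
x1 = [ri + 1j * si for ri, si in zip(r, s)]

a = 1j                        # a complex scaling constant
x2 = [a * v for v in x1]      # x2[n] = j*x1[n] = -s[n] + j*r[n]

y2 = S(x2)                    # equals -s[n]  (eq. 1.130)
ay1 = [a * v for v in S(x1)]  # equals j*r[n] (eq. 1.131)

print(y2)   # [-0.5, 3.0, -4.0]
print(ay1)  # [1j, 2j, 3j]
```

Scaling the input by \(j\) does not scale the output by \(j\): the real sequence \(-s[n]\) is not \(jr[n]\).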

**Example 1.20**

Consider the system

\[\tag{1.132}y[n]=2x[n]+3\]

This system is not linear, as can be verified in several ways.

For example, the system violates the additivity property: If \(x_1[n]=2\) and \(x_2[n]=3\), then

\[\tag{1.133}x_1[n]\rightarrow{y_1}[n]=2x_1[n]+3=7\]

\[\tag{1.134}x_2[n]\rightarrow{y_2}[n]=2x_2[n]+3=9\]

However, the response to \(x_3[n]=x_1[n]+x_2[n]\) is

\[\tag{1.135}y_3[n]=2[x_1[n]+x_2[n]]+3=13\]

which does not equal \(y_1[n]+y_2[n]=16\).

Alternatively, since \(y[n]=3\) if \(x[n]=0\), we see that the system violates the "zero-in/zero-out" property of linear systems given in eq. (1.125).
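Both failures reproduce directly in a few lines:

```python
def S(x):
    # The system of eq. (1.132): y[n] = 2*x[n] + 3
    return 2 * x + 3

x1, x2 = 2, 3          # the constant input values of eqs. (1.133)-(1.134)
y1, y2 = S(x1), S(x2)  # 7 and 9

y3 = S(x1 + x2)        # 13, as in eq. (1.135)
print(y3, y1 + y2)     # 13 16 -> additivity fails
print(S(0))            # 3     -> zero input does not give zero output
```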

It may seem surprising that the system in the above example is nonlinear, since eq. (1.132) is a linear equation. On the other hand, as depicted in Figure 1.48, the output of this system can be represented as the sum of the output of a linear system and another signal equal to the ** zero-input response** of the system.

For the system in eq. (1.132), the linear system is

\[x[n]\rightarrow2x[n]\]

and the zero-input response is

\[y_0[n]=3\]

There are, in fact, large classes of systems in both continuous and discrete time that can be represented as in Figure 1.48 - i.e., for which the overall system output consists of the superposition of the response of a linear system with a zero-input response.

Such systems correspond to the class of ** incrementally linear systems** - i.e., systems in continuous or discrete time that respond linearly to *changes* in the input.

In other words, the ** difference** between the responses to any two inputs to an incrementally linear system is a linear (i.e., additive and homogeneous) function of the *difference* between the two inputs.

For example, if \(x_1[n]\) and \(x_2[n]\) are two inputs to the system specified by eq. (1.132), and if \(y_1[n]\) and \(y_2[n]\) are the corresponding outputs, then

\[\tag{1.136}y_1[n]-y_2[n]=2x_1[n]+3-\{2x_2[n]+3\}=2\{x_1[n]-x_2[n]\}\]
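Eq. (1.136) can be confirmed numerically for arbitrary input pairs (the sequences below are assumed test values):

```python
def S(x):
    # The incrementally linear system of eq. (1.132): y[n] = 2*x[n] + 3
    return [2 * v + 3 for v in x]

x1 = [0, 1, 2, 3]   # assumed test inputs
x2 = [5, -1, 0, 2]

y_diff = [u - v for u, v in zip(S(x1), S(x2))]       # y1[n] - y2[n]
x_diff_resp = [2 * (u - v) for u, v in zip(x1, x2)]  # 2*(x1[n] - x2[n])

print(y_diff == x_diff_resp)  # True: the offset 3 cancels in the difference
```

The constant zero-input response cancels when responses are subtracted, leaving the purely linear gain of 2 acting on the input difference.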

## 4. Summary

In this tutorial, we have developed a number of basic concepts related to continuous-time and discrete-time signals and systems. We have presented both an intuitive picture of what signals and systems are through several examples and a mathematical representation for signals and systems.

Specifically, we introduced graphical and mathematical representations of signals and used these representations in performing transformations of the independent variable.

We also defined and examined several basic signals, both in continuous time and in discrete time. These included complex exponential signals, sinusoidal signals, and unit impulse and step functions. In addition, we investigated the concept of periodicity for continuous-time and discrete-time signals.

In developing some of the elementary ideas related to systems, we introduced block diagrams to facilitate our discussions concerning the interconnection of systems, and we defined a number of important properties of systems, including causality, stability, time invariance, and linearity.

The primary focus will be on the class of linear, time-invariant (LTI) systems, both in continuous time and in discrete time. These systems play a particularly important role in system analysis and design, in part due to the fact that many systems encountered in nature can be successfully modeled as linear and time invariant.

Furthermore, as we shall see in later tutorials, the properties of linearity and time invariance allow us to analyze in detail the behavior of LTI systems.

The next tutorial will introduce ** discrete-time linear time-invariant (LTI) systems and convolution sum**.