
Continuous-Time and Discrete-Time Signals

Introduction

The intuitive notions of signals and systems arise in a rich variety of contexts. Moreover, there is an analytical framework (that is, a language for describing signals and systems, and an extremely powerful set of tools for analyzing them) that applies equally well to problems in many fields.

We begin our development of the analytical framework for signals and systems by introducing their mathematical description and representations. We build on this foundation in order to develop and describe additional concepts and methods that add considerably both to our understanding of signals and systems and to our ability to analyze and solve problems involving signals and systems that arise in a broad array of applications.

 

 

1.  Examples and Mathematical Representation

Signals may describe a wide variety of physical phenomena. Although signals can be represented in many ways, in all cases the information in a signal is contained in a pattern of variations of some form.

For example, consider the simple circuit in Figure 1.1. In this case, the patterns of variation over time in the source and capacitor voltages, \(v_\text{s}\) and \(v_\text{c}\), are examples of signals.

Figure 1.1   A simple RC circuit with source voltage \(v_\text{s}\) and capacitor voltage \(v_\text{c}\).

 

Similarly, as depicted in Figure 1.2, the variations over time of the applied force \(f\) and the resulting automobile velocity \(v\) are signals.

 

Figure 1.2  An automobile responding to an applied force \(f\) from the engine and to a retarding frictional force \(\rho{v}\) proportional to the automobile's velocity \(v\).

 

As another example, consider the human vocal mechanism, which produces speech by creating fluctuations in acoustic pressure. Figure 1.3 is an illustration of a recording of such a speech signal, obtained by using a microphone to sense variations in acoustic pressure, which are then converted into an electrical signal.

As can be seen in the figure, different sounds correspond to different patterns in the variations of acoustic pressure, and the human vocal system produces intelligible speech by generating particular sequences of these patterns.

 

Figure 1.3   Example of a recording of speech. The signal represents acoustic pressure variations as a function of time for the spoken words "should we chase." The top line of the figure corresponds to the word "should," the second line to the word "we," and the last two lines to the word "chase." (We have indicated the approximate beginnings and endings of each successive sound in each word.)

 

Alternatively, for the monochromatic picture shown in Figure 1.4, it is the pattern of variations in brightness across the image that is important.

 

 
Figure 1.4   A monochromatic picture.

 

Signals are represented mathematically as functions of one or more independent variables. For example, a speech signal can be represented mathematically by acoustic pressure as a function of time, and a picture can be represented by brightness as a function of two spatial variables.

In our tutorials, we focus our attention on signals involving a single independent variable. For convenience, we will generally refer to the independent variable as time, although it may not in fact represent time in specific applications.

For example, in geophysics, signals representing variations with depth of physical quantities such as density, porosity, and electrical resistivity are used to study the structure of the earth. Also, knowledge of the variations of air pressure, temperature, and wind speed with altitude is extremely important in meteorological investigations.

Figure 1.5 depicts a typical example of annual average vertical wind profile as a function of height. The measured variations of wind speed with height are used in examining weather patterns, as well as wind conditions that may affect an aircraft during final approach and landing.

 

Figure 1.5  Typical annual vertical wind profile.

 

Throughout our tutorials we will be considering two basic types of signals: continuous-time signals and discrete-time signals.

In the case of continuous-time signals the independent variable is continuous, and thus these signals are defined for a continuum of values of the independent variable.

A speech signal as a function of time and atmospheric pressure as a function of altitude are examples of continuous-time signals.

On the other hand,  discrete-time signals are defined only at discrete times, and consequently, for these signals, the independent variable takes on only a discrete set of values.

The weekly Dow-Jones stock market index, as illustrated in Figure 1.6, is an example of a discrete-time signal.

Other examples of discrete-time signals can be found in demographic studies in which various attributes, such as average budget, crime rate, or pounds of fish caught, are tabulated against such discrete variables as family size, total population, or type of fishing vessel, respectively.

 

Figure 1.6   An example of a discrete-time signal: The weekly Dow-Jones stock market index from January 5, 1929, to January 4, 1930.

 

To distinguish between continuous-time and discrete-time signals, we will use the symbol \(t\) to denote the continuous-time independent variable and \(n\) to denote the discrete-time independent variable.

In addition, for continuous-time signals we will enclose the independent variable in parentheses ( · ), whereas for discrete-time signals we will use brackets [ · ] to enclose the independent variable.

We will also have frequent occasions when it will be useful to represent signals graphically. Illustrations of a continuous-time signal \(x(t)\) and a discrete-time signal \(x[n]\) are shown in Figure 1.7.

It is important to note that the discrete-time signal \(x[n]\) is defined only for integer values of the independent variable. Our choice of graphical representation for \(x[n]\) emphasizes this fact, and for further emphasis we will on occasion refer to \(x[n]\) as a discrete-time sequence.

 

Figure 1.7  Graphical representations of (a) continuous-time and (b) discrete-time signals.

 

A discrete-time signal \(x[n]\) may represent a phenomenon for which the independent variable is inherently discrete. Signals such as demographic data are examples of this.

On the other hand, a very important class of discrete-time signals arises from the sampling of continuous-time signals. In this case, the discrete-time signal \(x[n]\) represents successive samples of an underlying phenomenon for which the independent variable is continuous.
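As a minimal sketch of this idea, the following Python snippet forms a discrete-time sequence by sampling a continuous-time signal at uniformly spaced instants \(t=nT\). The particular signal and sampling period here are illustrative choices, not taken from the text:

```python
import math

def sample(x_c, T, n_values):
    """Sample a continuous-time signal x_c(t) at t = n*T,
    producing the discrete-time sequence x[n] = x_c(nT)."""
    return [x_c(n * T) for n in n_values]

# Illustrative example: sample a 1 Hz cosine every 0.25 seconds.
x_c = lambda t: math.cos(2 * math.pi * t)
x = sample(x_c, T=0.25, n_values=range(5))
```

The resulting list is defined only at the integer indices \(n\), mirroring the fact that \(x[n]\) has no value between samples.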

Because of their speed, computational power, and flexibility, modern digital processors are used to implement many practical systems, ranging from digital autopilots to digital audio systems.

Such systems require the use of discrete-time sequences representing sampled versions of continuous-time signals, e.g., aircraft position, velocity, and heading for an autopilot, or speech and music for an audio system.

Also, pictures in newspapers actually consist of a very fine grid of points, and each of these points represents a sample of the brightness of the corresponding point in the original image.

No matter what the source of the data, however, the signal \(x[n]\) is defined only for integer values of \(n\). It makes no more sense to refer to the \(3\frac{1}{2}\)th sample of a digital speech signal than it does to refer to the average budget for a family with \(2\frac{1}{2}\) family members.

Throughout our tutorials we will treat discrete-time signals and continuous-time signals separately but in parallel, so that we can draw on insights developed in one setting to aid our understanding of another.

In later tutorials we will return to the question of sampling, and in that context we will bring continuous-time and discrete-time concepts together in order to examine the relationship between a continuous-time signal and a discrete-time signal obtained from it by sampling.

 

 

2. Signal Energy and Power

From the range of examples provided so far, we see that signals may represent a broad variety of phenomena. In many, but not all, applications, the signals we consider are directly related to physical quantities capturing power and energy in a physical system.

For example, if \(v(t)\) and \(i(t)\) are, respectively, the voltage and current across a resistor with resistance \(R\), then the instantaneous power is

\[\tag{1.1}p(t)=v(t)i(t)=\frac{1}{R}v^2(t)\]

The total energy expended over the time interval \(t_1\le{t}\le{t_2}\) is

\[\tag{1.2}\displaystyle\int\limits_{t_1}^{t_2}p(t)\text{d}t=\displaystyle\int\limits_{t_1}^{t_2}\frac{1}{R}v^2(t)\text{d}t\]

and the average power over this time interval is

\[\tag{1.3}\frac{1}{t_2-t_1}\displaystyle\int\limits_{t_1}^{t_2}p(t)\text{d}t=\frac{1}{t_2-t_1}\displaystyle\int\limits_{t_1}^{t_2}\frac{1}{R}v^2(t)\text{d}t\]
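Equations (1.1) through (1.3) can be checked numerically. The sketch below approximates the energy integral with a midpoint Riemann sum; the resistance and voltage values are illustrative assumptions, not taken from the text:

```python
def energy_and_avg_power(p, t1, t2, steps=100000):
    """Numerically approximate the total energy of eq. (1.2) and the
    average power of eq. (1.3) from instantaneous power p(t)."""
    dt = (t2 - t1) / steps
    # midpoint Riemann sum for the integral of p(t) over [t1, t2]
    E = sum(p(t1 + (k + 0.5) * dt) for k in range(steps)) * dt
    P_avg = E / (t2 - t1)
    return E, P_avg

# Illustrative values: R = 2 ohms, v(t) = 3 volts (constant).
R = 2.0
v = lambda t: 3.0
p = lambda t: v(t) ** 2 / R          # eq. (1.1): p(t) = (1/R) v^2(t)
E, P_avg = energy_and_avg_power(p, 0.0, 4.0)
```

For this constant voltage, the energy over four seconds is \(4\times4.5=18\) joules and the average power is \(4.5\) watts, as the sum confirms.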

Similarly, for the automobile depicted in Figure 1.2, the instantaneous power dissipated through friction is \(p(t)=\rho{v^2(t)}\), and we can then define the total energy and average power over a time interval in the same way as in eqs. (1.2) and (1.3). 

 

With simple physical examples such as these as motivation, it is a common and worthwhile convention to use similar terminology for power and energy for any continuous-time signal \(x(t)\) or any discrete-time signal \(x[n]\).

Moreover, as we will see shortly, we will frequently find it convenient to consider signals that take on complex values.

In this case, the total energy over the time interval \(t_1\le{t}\le{t_2}\) in a continuous-time signal \(x(t)\) is defined as

\[\tag{1.4}\displaystyle\int\limits_{t_1}^{t_2}|x(t)|^2\text{d}t\]

where \(|x|\) denotes the magnitude of the (possibly complex) number \(x\). The time-averaged power is obtained by dividing eq. (1.4) by the length, \(t_2-t_1\), of the time interval.
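As a quick sanity check on the magnitude-squared quantity appearing in eq. (1.4), note that for a complex value \(x=a+jb\) we have \(|x|^2=a^2+b^2\); in Python, `abs` returns the magnitude of a complex number directly:

```python
# |x|^2 for a complex sample value: |3 + 4j|^2 = 3^2 + 4^2 = 25.
x = 3 + 4j
mag_sq = abs(x) ** 2
```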

Similarly, the total energy in a discrete-time signal \(x[n]\) over the time interval \(n_1\le{n}\le{n_2}\) is defined as

\[\tag{1.5}\sum_{n=n_1}^{n_2}|x[n]|^2\]

and dividing by the number of points in the interval, \(n_2-n_1+1\), yields the average power over the interval.
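A direct translation of eq. (1.5) and the associated average power into code might look as follows; the alternating sequence used to exercise it is a hypothetical example:

```python
def discrete_energy_and_power(x, n1, n2):
    """Total energy of x[n] over n1 <= n <= n2 (eq. 1.5),
    and the average power obtained by dividing by n2 - n1 + 1."""
    E = sum(abs(x(n)) ** 2 for n in range(n1, n2 + 1))
    return E, E / (n2 - n1 + 1)

# Hypothetical sequence: x[n] = (-1)^n, so |x[n]|^2 = 1 at every n.
x = lambda n: (-1) ** n
E, P = discrete_energy_and_power(x, 0, 9)   # 10 points in the interval
```

Here every term contributes 1, so the energy over the 10-point interval is 10 and the average power is 1.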

It is important to remember that the terms "power" and "energy" are used here independently of whether the quantities in eqs. (1.4) and (1.5) actually are related to physical energy.

Even if such a relationship does exist, eqs. (1.4) and (1.5) may have the wrong dimensions and scalings. For example, comparing eqs. (1.2) and (1.4), we see that if \(x(t)\) represents the voltage across a resistor, then eq. (1.4) must be divided by the resistance (measured, for example, in ohms) to obtain units of physical energy.

Nevertheless, we will find it convenient to use these terms in a general fashion.

Furthermore, in many systems we will be interested in examining power and energy in signals over an infinite time interval, i.e., for \(-\infty\lt{t}\lt+\infty\) or for \(-\infty\lt{n}\lt+\infty\).

In these cases, we define the total energy as limits of eqs. (1.4) and (1.5) as the time interval increases without bound. That is, in continuous time,

\[\tag{1.6}E_\infty\triangleq\lim_{T\rightarrow\infty}\displaystyle\int\limits_{-T}^{T}|x(t)|^2\text{d}t=\displaystyle\int\limits_{-\infty}^\infty|x(t)|^2\text{d}t\]

and in discrete time,

\[\tag{1.7}E_\infty\triangleq\lim_{N\rightarrow{\infty}}\sum_{n=-N}^{+N}|x[n]|^2=\sum_{n=-\infty}^{+\infty}|x[n]|^2\]

Note that for some signals the integral in eq. (1.6) or sum in eq. (1.7) might not converge, e.g., if \(x(t)\) or \(x[n]\) equals a nonzero constant value for all time. Such signals have infinite energy, while signals with \(E_\infty\lt\infty\) have finite energy.
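The convergence (or not) of the sum in eq. (1.7) can be seen by computing partial sums for growing \(N\). Both example sequences below are illustrative choices: a one-sided decaying exponential whose energy converges to \(\sum_{n\ge0}(1/4)^n=4/3\), and a constant whose partial energy grows without bound:

```python
def partial_energy(x, N):
    """Partial sum of eq. (1.7): sum of |x[n]|^2 over -N <= n <= N."""
    return sum(abs(x(n)) ** 2 for n in range(-N, N + 1))

# Finite energy: x[n] = (1/2)^n for n >= 0, zero otherwise.
decaying = lambda n: 0.5 ** n if n >= 0 else 0.0
# Infinite energy: x[n] = 1 for all n; the partial sums keep growing.
constant = lambda n: 1.0
```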

In an analogous fashion, we can define the time-averaged power over an infinite interval as

\[\tag{1.8}P_\infty\triangleq\lim_{T\rightarrow\infty}\frac{1}{2T}\displaystyle\int\limits_{-T}^{T}|x(t)|^2\text{d}t\]

and

\[\tag{1.9}P_\infty\triangleq\lim_{N\rightarrow\infty}\frac{1}{2N+1}\sum_{n=-N}^{+N}|x[n]|^2\]

in continuous time and discrete time, respectively.

 

With these definitions, we can identify three important classes of signals.

(1). The first of these is the class of signals with finite total energy, i.e., those signals for which \(E_\infty\lt\infty\). Such a signal must have zero average power, since in the continuous time case, for example, we see from eq. (1.8) that

\[\tag{1.10}P_\infty=\lim_{T\rightarrow\infty}\frac{E_\infty}{2T}=0\]

An example of a finite-energy signal is a signal that takes on the value \(1\) for \(0\le{t}\le1\) and \(0\) otherwise. In this case, \(E_\infty=1\) and \(P_\infty=0\).

 

(2). A second class of signals consists of those with finite average power \(P_\infty\). From what we have just seen, if \(P_\infty\gt0\), then, of necessity, \(E_\infty=\infty\). This, of course, makes sense, since if there is a nonzero average energy per unit time (i.e., nonzero power), then integrating or summing this over an infinite time interval yields an infinite amount of energy. For example, the constant signal \(x[n]=4\) has infinite energy, but average power \(P_\infty=16\).
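The claim that \(x[n]=4\) has \(P_\infty=16\) follows directly from eq. (1.9): every term of the sum is \(16\), and dividing by the \(2N+1\) points leaves \(16\) for every \(N\). A short check:

```python
def avg_power(x, N):
    """Finite-N version of eq. (1.9): average power of x[n] over -N <= n <= N."""
    return sum(abs(x(n)) ** 2 for n in range(-N, N + 1)) / (2 * N + 1)

P = avg_power(lambda n: 4, 1000)   # equals 16 regardless of N
```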

 

(3). There are also signals for which neither \(P_\infty\) nor \(E_\infty\) are finite. A simple example is the signal \(x(t)=t\).

 

We will encounter other examples of signals in each of these classes in later tutorials.

 

 

The next tutorial introduces transformations of the independent variable

 

 
