# Transformations of the Independent Variable

This is a continuation from the previous tutorial - **continuous-time and discrete-time signals**.

A central concept in signal and system analysis is that of the transformation of a signal.

For example, in an aircraft control system, signals corresponding to the actions of the pilot are transformed by electrical and mechanical systems into changes in aircraft thrust or the positions of aircraft control surfaces such as the rudder or ailerons, which in turn are transformed through the dynamics and kinematics of the vehicle into changes in aircraft velocity and heading.

Also, in a high-fidelity audio system, an input signal representing music as recorded on a cassette or compact disc is modified in order to enhance desirable characteristics, to remove recording noise, or to balance the several components of the signal (e.g., treble and bass).

In this tutorial, we focus on a very limited but important class of elementary signal transformations that involve simple modification of the independent variable, i.e., the time axis.

As we will see in this and subsequent tutorials, these elementary transformations allow us to introduce several basic properties of signals and systems.

In later tutorials, we will find that they also play an important role in defining and characterizing far richer and important classes of systems.

## 1. Examples of Transformations of the Independent Variable

**Time Shift**

A simple and very important example of transforming the independent variable of a signal is a **time shift**.

A time shift in discrete time is illustrated in Figure 1.8, in which we have two signals \(x[n]\) and \(x[n-n_0]\) that are identical in shape, but that are displaced or shifted relative to each other.

We will also encounter time shifts in continuous time, as illustrated in Figure 1.9, in which \(x(t-t_0)\) represents a delayed (if \(t_0\) is positive) or advanced (if \(t_0\) is negative) version of \(x(t)\).

Signals that are related in this fashion arise in applications such as radar, sonar, and seismic signal processing, in which several receivers at different locations observe a signal being transmitted through a medium (water, rock, air, etc.). In this case, the difference in propagation time from the point of origin of the transmitted signal to any two receivers results in a time shift between the signals at the two receivers.
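The discrete-time shift can be sketched in a few lines of code. This is a minimal illustration with hypothetical sample values (not taken from Figure 1.8); the point is only that \(x[n-n_0]\) reproduces each value of \(x[n]\), \(n_0\) samples later.

```python
# A hypothetical finite-support discrete-time signal x[n]
# (zero outside the listed samples; the values are assumed).
def x(n):
    samples = {0: 1.0, 1: 2.0, 2: 1.0}
    return samples.get(n, 0.0)

def x_shifted(n, n0=2):
    """x[n - n0]: the same shape, delayed by n0 samples when n0 > 0."""
    return x(n - n0)

# The value x[1] reappears in x[n - 2] at n = 3:
print(x_shifted(3), x(1))  # 2.0 2.0
```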

**Time Reversal**

A second basic transformation of the time axis is that of **time reversal**.

For example, as illustrated in Figure 1.10, the signal \(x[-n]\) is obtained from the signal \(x[n]\) by a reflection about \(n=0\) (i.e., by reversing the signal).

Similarly, as depicted in Figure 1.11, the signal \(x(-t)\) is obtained from the signal \(x(t)\) by a reflection about \(t=0\). Thus, if \(x(t)\) represents an audio tape recording, then \(x(-t)\) is the same tape recording played backward.

**Time Scaling**

Another transformation is that of **time scaling**.

In Figure 1.12 we have illustrated three signals, \(x(t)\), \(x(2t)\), and \(x(t/2)\), that are related by linear scale changes in the independent variable. If we again think of the example of \(x(t)\) as a tape recording, then \(x(2t)\) is that recording played at twice the speed, and \(x(t/2)\) is the recording played at half-speed.

It is often of interest to determine the effect of transforming the independent variable of a given signal \(x(t)\) to obtain a signal of the form \(x(\alpha{t}+\beta)\), where \(\alpha\) and \(\beta\) are given numbers.

Such a transformation of the independent variable preserves the shape of \(x(t)\), except that the resulting signal may be linearly stretched if \(|\alpha|\lt1\), linearly compressed if \(|\alpha|\gt1\), reversed in time if \(\alpha\lt0\), and shifted in time if \(\beta\) is nonzero. This is illustrated in the following set of examples.

**Example 1.1**

Given the signal \(x(t)\) shown in Figure 1.13(a), the signal \(x(t+1)\) corresponds to an advance (shift to the left) by one unit along the \(t\) axis as illustrated in Figure 1.13(b).

Specifically, we note that the value of \(x(t)\) at \(t=t_0\) occurs in \(x(t+1)\) at \(t=t_0-1\). For example, the value of \(x(t)\) at \(t=1\) is found in \(x(t+1)\) at \(t=1-1=0\). Also, since \(x(t)\) is zero for \(t\lt0\), we have \(x(t+1)\) zero for \(t\lt-1\). Similarly, since \(x(t)\) is zero for \(t\gt2\), \(x(t+1)\) is zero for \(t\gt1\).
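These observations can be verified with a placeholder signal. The exact shape of Figure 1.13(a) is not reproduced here, so the ramp-plus-constant shape below is an assumption; only the support \([0, 2]\) is stated in the text.

```python
def x(t):
    """Placeholder for the signal of Figure 1.13(a); the shape is assumed,
    only the support [0, 2] is stated in the text."""
    if 0.0 <= t <= 1.0:
        return t        # assumed rising ramp
    if 1.0 < t <= 2.0:
        return 1.0      # assumed constant segment
    return 0.0

def x_adv(t):
    """x(t + 1): an advance (shift to the left) by one unit."""
    return x(t + 1)

# The value of x at t0 = 1 appears in x(t + 1) at t0 - 1 = 0:
print(x_adv(0.0), x(1.0))      # 1.0 1.0
# The support moves from [0, 2] to [-1, 1]:
print(x_adv(-1.5), x_adv(1.5))  # 0.0 0.0
```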

Let us also consider the signal \(x(-t+1)\), which may be obtained by replacing \(t\) with \(-t\) in \(x(t+1)\). That is, \(x(-t+1)\) is the time-reversed version of \(x(t+1)\). Thus, \(x(-t+1)\) may be obtained graphically by reflecting \(x(t+1)\) about \(t=0\), as shown in Figure 1.13(c).

**Example 1.2**

Given the signal \(x(t)\), shown in Figure 1.13(a), the signal \(x(\frac{3}{2}t)\) corresponds to a linear compression of \(x(t)\) by a factor of \(\frac{2}{3}\) as illustrated in Figure 1.13(d).

Specifically, we note that the value of \(x(t)\) at \(t=t_0\) occurs in \(x(\frac{3}{2}t)\) at \(t=\frac{2}{3}t_0\). For example, the value of \(x(t)\) at \(t=1\) is found in \(x(\frac{3}{2}t)\) at \(t=\frac{2}{3}(1)=\frac{2}{3}\). Also, since \(x(t)\) is zero for \(t\lt0\), we have \(x(\frac{3}{2}t)\) zero for \(t\lt0\). Similarly, since \(x(t)\) is zero for \(t\gt2\), \(x(\frac{3}{2}t)\) is zero for \(t\gt\frac{4}{3}\).
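The compression can be checked the same way, again with an assumed shape for the signal of Figure 1.13(a) (only its support \([0, 2]\) is stated).

```python
def x(t):
    """Assumed placeholder shape with the stated support [0, 2]."""
    if 0.0 <= t <= 1.0:
        return t
    if 1.0 < t <= 2.0:
        return 1.0
    return 0.0

def x_comp(t):
    """x(3t/2): a linear compression of x."""
    return x(1.5 * t)

# The value of x at t0 = 1.5 appears in x(3t/2) at (2/3)*t0 = 1:
print(x_comp(1.0), x(1.5))  # 1.0 1.0
# The support shrinks from [0, 2] to [0, 4/3]:
print(x_comp(1.4))          # 0.0
```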

**Example 1.3**

Suppose that we would like to determine the effect of transforming the independent variable of a given signal, \(x(t)\), to obtain a signal of the form \(x(\alpha{t}+\beta)\), where \(\alpha\) and \(\beta\) are given numbers.

A systematic approach to doing this is to first delay or advance \(x(t)\) in accordance with the value of \(\beta\), and then to perform time scaling and/or time reversal on the resulting signal in accordance with the value of \(\alpha\). The delayed or advanced signal is linearly stretched if \(|\alpha|\lt1\), linearly compressed if \(|\alpha|\gt1\), and reversed in time if \(\alpha\lt0\).

To illustrate this approach, let us show how \(x(\frac{3}{2}t+1)\) may be determined for the signal \(x(t)\) shown in Figure 1.13(a). Since \(\beta=1\), we first advance (shift to the left) \(x(t)\) by \(1\), as shown in Figure 1.13(b). Since \(\alpha=\frac{3}{2}\) is positive and greater than \(1\), we then linearly compress the shifted signal of Figure 1.13(b) by a factor of \(\frac{2}{3}\) to obtain the signal shown in Figure 1.13(e); no time reversal is needed.
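The shift-then-scale procedure can be confirmed against direct substitution. The shape of \(x(t)\) below is an assumed placeholder for Figure 1.13(a); the check itself, \(x(\frac{3}{2}t+1)\) obtained by first shifting and then scaling, holds for any shape.

```python
def x(t):
    """Assumed shape for Figure 1.13(a); only the support [0, 2] is stated."""
    if 0.0 <= t <= 1.0:
        return t
    if 1.0 < t <= 2.0:
        return 1.0
    return 0.0

def step1(t):
    return x(t + 1)           # first shift left by beta = 1

def step2(t):
    return step1(1.5 * t)     # then compress by alpha = 3/2

def direct(t):
    return x(1.5 * t + 1)     # x(3t/2 + 1) evaluated directly

# Shift-then-scale agrees with direct substitution at every test point:
for t in (-1.0, -0.5, 0.0, 0.25, 0.5, 1.0):
    assert step2(t) == direct(t)
print(direct(0.0))  # 1.0, i.e. the value of x at t0 = 1
```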

In addition to their use in representing physical phenomena such as the time shift in a sonar signal and the speeding up or reversal of an audiotape, transformations of the independent variable are extremely useful in signal and system analysis.

In later tutorials, we will use transformations of the independent variable to introduce and analyze the properties of systems. These transformations are also important in defining and examining some important properties of signals.

## 2. Periodic Signals

An important class of signals that we will encounter frequently throughout our tutorials is the class of **periodic signals**.

A periodic continuous-time signal \(x(t)\) has the property that there is a positive value of \(T\) for which

\[\tag{1.11}x(t)=x(t+T)\]

for all values of \(t\).

In other words, a periodic signal has the property that it is unchanged by a time shift of \(T\). In this case, we say that \(x(t)\) is **periodic with period \(T\)**.

Periodic continuous-time signals arise in a variety of contexts. For example, the natural responses of systems in which energy is conserved, such as ideal LC circuits without resistive energy dissipation and ideal mechanical systems without frictional losses, are periodic and, in fact, are composed of some of the basic periodic signals that we will introduce in later tutorials.

An example of a periodic continuous-time signal is given in Figure 1.14. From the figure or from eq. (1.11), we can readily deduce that if \(x(t)\) is periodic with period \(T\), then \(x(t)=x(t+mT)\) for all \(t\) and for any integer \(m\). Thus, \(x(t)\) is also periodic with period \(2T\), \(3T\), \(4T\), ....
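A quick numerical check of this fact, using \(\sin(t)\) as a familiar signal with period \(T = 2\pi\): shifting by any integer multiple \(mT\) leaves it unchanged (up to floating-point tolerance).

```python
import math

# sin(t) is periodic with T = 2*pi; verify x(t) = x(t + m*T) for several m.
T = 2 * math.pi
t = 0.7  # arbitrary test point

for m in (1, 2, 3, -1):
    assert math.isclose(math.sin(t), math.sin(t + m * T), abs_tol=1e-9)
print("periodic with period T, 2T, 3T, ...")
```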

The **fundamental period** \(T_0\) of \(x(t)\) is the smallest positive value of \(T\) for which eq. (1.11) holds. This definition of the fundamental period works, except if \(x(t)\) is a constant. In this case the fundamental period is undefined, since \(x(t)\) is periodic for *any* choice of \(T\) (so there is no smallest positive value).

A signal \(x(t)\) that is not periodic will be referred to as an **aperiodic** signal.

Periodic signals are defined analogously in discrete time. Specifically, a discrete-time signal \(x[n]\) is periodic with period \(N\), where \(N\) is a positive integer, if it is unchanged by a time shift of \(N\), i.e., if

\[\tag{1.12}x[n]=x[n+N]\]

for all values of \(n\).

If eq. (1.12) holds, then \(x[n]\) is also periodic with period \(2N\), \(3N\), .... The **fundamental period** \(N_0\) is the smallest positive value of \(N\) for which eq. (1.12) holds.

An example of a discrete-time periodic signal with fundamental period \(N_0=3\) is shown in Figure 1.15.
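A discrete-time periodic signal with \(N_0=3\) can be sketched as a repeating three-sample pattern. The sample values below are hypothetical (Figure 1.15 is not reproduced here); the code confirms that \(N=3\) and its multiples satisfy eq. (1.12), while \(N=1,2\) do not, so the fundamental period is 3.

```python
# A hypothetical periodic sequence with fundamental period 3
# (the sample values are assumed; Figure 1.15 is not reproduced here).
pattern = [1.0, 0.0, -1.0]

def x(n):
    return pattern[n % 3]   # Python's % keeps this valid for negative n too

def is_period(N):
    return all(x(n) == x(n + N) for n in range(-20, 20))

print(is_period(3), is_period(6))  # True True  (3 and any multiple work)
print(is_period(1), is_period(2))  # False False (so N0 = 3)
```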

**Example 1.4**

Let us illustrate the type of problem solving that may be required in determining whether or not a given signal is periodic. The signal whose periodicity we wish to check is given by

\[\tag{1.13}x(t)=\begin{cases}\cos(t)\qquad\text{if }t\lt0\\\sin(t)\qquad\text{if }t\ge0\end{cases}\]

From trigonometry, we know that \(\cos(t+2\pi)=\cos(t)\) and \(\sin(t+2\pi)=\sin(t)\). Thus, considering \(t\ge0\) and \(t\lt0\) separately, we see that \(x(t)\) does repeat itself over every interval of length \(2\pi\).

However, as illustrated in Figure 1.16, \(x(t)\) also has a discontinuity at the time origin that does not recur at any other time. Since every feature in the shape of a periodic signal must recur periodically, we conclude that the signal \(x(t)\) is not periodic.
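This reasoning can be made concrete: each branch of eq. (1.13) repeats over intervals of length \(2\pi\), yet just to the left of the origin \(x\) is near \(\cos(0)=1\), while one period later it is near \(\sin(2\pi)=0\), so \(x(t)\neq x(t+2\pi)\) there.

```python
import math

def x(t):
    """The signal of eq. (1.13)."""
    return math.cos(t) if t < 0 else math.sin(t)

# Each branch repeats over intervals of length 2*pi ...
assert math.isclose(x(5.0), x(5.0 + 2 * math.pi), abs_tol=1e-9)
assert math.isclose(x(-5.0), x(-5.0 - 2 * math.pi), abs_tol=1e-9)

# ... but the jump at t = 0 does not recur: just left of the origin x is
# near 1 (cosine branch), while one period later it is near 0 (sine branch).
eps = 1e-6
print(abs(x(-eps) - x(-eps + 2 * math.pi)) > 0.9)  # True, so not periodic
```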

## 3. Even and Odd Signals

Another set of useful properties of signals relates to their symmetry under time reversal. A signal \(x(t)\) or \(x[n]\) is referred to as an **even** signal if it is identical to its time-reversed counterpart, i.e., to its reflection about the origin.

In continuous time a signal is even if

\[\tag{1.14}x(-t)=x(t)\]

while a discrete-time signal is even if

\[\tag{1.15}x[-n]=x[n]\]

A continuous-time signal is referred to as **odd** if

\[\tag{1.16}x(-t)=-x(t)\]

while a discrete-time signal is odd if

\[\tag{1.17}x[-n]=-x[n]\]

An odd signal must necessarily be \(0\) at \(t=0\) or \(n=0\), since eqs. (1.16) and (1.17) require that \(x(0)=-x(0)\) and \(x[0]=-x[0]\).

Examples of even and odd continuous-time signals are shown in Figure 1.17.

*An important fact is that any signal can be broken into a sum of two signals, one of which is even and one of which is odd.*

To see this, consider the signal

\[\tag{1.18}\mathcal{Ev}\left\{x(t)\right\}=\frac{1}{2}[x(t)+x(-t)]\]

which is referred to as the **even part** of \(x(t)\).

Similarly, the **odd part** of \(x(t)\) is given by

\[\tag{1.19}\mathcal{Od}\left\{x(t)\right\}=\frac{1}{2}[x(t)-x(-t)]\]

It is a simple exercise to check that the even part is in fact even, that the odd part is odd, and that \(x(t)\) is the sum of the two.
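That exercise can be carried out numerically. The signal below is a hypothetical example chosen to be neither even nor odd; eqs. (1.18) and (1.19) then produce its even part \(t^2+1\) and odd part \(t^3\), which sum back to the original.

```python
def x(t):
    """A hypothetical signal that is neither even nor odd."""
    return t ** 3 + t ** 2 + 1.0

def ev(t):
    """Even part, eq. (1.18)."""
    return 0.5 * (x(t) + x(-t))

def od(t):
    """Odd part, eq. (1.19)."""
    return 0.5 * (x(t) - x(-t))

for t in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert ev(-t) == ev(t)            # the even part is even
    assert od(-t) == -od(t)           # the odd part is odd
    assert ev(t) + od(t) == x(t)      # and they sum back to x
print(ev(2.0), od(2.0))  # 5.0 8.0, i.e. t**2 + 1 and t**3 at t = 2
```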

Exactly analogous definitions hold in the discrete-time case. An example of the even-odd decomposition of a discrete-time signal is given in Figure 1.18.

The next tutorial discusses **exponential and sinusoidal signals** in detail.