# Fourier Series Representation of Continuous-Time Periodic Signals

This is a continuation of the previous tutorial, **the response of LTI systems to complex exponentials**.

## 1. Linear Combinations of Harmonically Related Complex Exponentials

As defined in the *exponential and sinusoidal signals tutorial*, a signal is periodic if, for some positive value of \(T\),

\[\tag{3.21}x(t)=x(t+T)\qquad\text{for all }t\]

The fundamental period of \(x(t)\) is the minimum positive, nonzero value of \(T\) for which eq. (3.21) is satisfied, and the value \(\omega_0=2\pi/T\) is referred to as the fundamental frequency.

In the *exponential and sinusoidal signals tutorial* we also introduced two basic periodic signals, the sinusoidal signal

\[\tag{3.22}x(t)=\cos\omega_0t\]

and the periodic complex exponential

\[\tag{3.23}x(t)=e^{j\omega_0t}\]

Both of these signals are periodic with fundamental frequency \(\omega_0\) and fundamental period \(T=2\pi/\omega_0\).

Associated with the signal in eq. (3.23) is the set of **harmonically related** complex exponentials

\[\tag{3.24}\phi_k(t)=e^{jk\omega_0t}=e^{jk(2\pi/T)t},\qquad{k=}0,\pm1,\pm2,\ldots\]

Each of these signals has a fundamental frequency that is a multiple of \(\omega_0\), and therefore, each is periodic with period \(T\) (although for \(|k|\ge2\), the fundamental period of \(\phi_k(t)\) is a fraction of \(T\)).

Thus, a linear combination of harmonically related complex exponentials of the form

\[\tag{3.25}x(t)=\sum_{k=-\infty}^{+\infty}a_ke^{jk\omega_0t}=\sum_{k=-\infty}^{+\infty}a_ke^{jk(2\pi/T)t}\]

is also periodic with period \(T\).

- In eq. (3.25), the term for \(k=0\) is a constant.
- The terms for \(k=+1\) and \(k=-1\) both have fundamental frequency equal to \(\omega_0\) and are collectively referred to as the *first harmonic components* or the *fundamental components*.
- The two terms for \(k=+2\) and \(k=-2\) are periodic with half the period (or, equivalently, twice the frequency) of the fundamental components and are referred to as the *second harmonic components*.
- More generally, the components for \(k=+N\) and \(k=-N\) are referred to as the \(N\)th harmonic components.

The representation of a periodic signal in the form of eq. (3.25) is referred to as the **Fourier series** representation. Before developing the properties of this representation, let us consider an example.

**Example 3.2**

Consider a periodic signal \(x(t)\), with fundamental frequency \(2\pi\), that is expressed in the form of eq. (3.25) as

\[\tag{3.26}x(t)=\sum_{k=-3}^{+3}a_ke^{jk2\pi{t}}\]

where

\[\begin{align}a_0&=1\\a_1&=a_{-1}=\frac{1}{4}\\a_2&=a_{-2}=\frac{1}{2}\\a_3&=a_{-3}=\frac{1}{3}\end{align}\]

Rewriting eq. (3.26) and collecting each of the harmonic components which have the same fundamental frequency, we obtain

\[\tag{3.27}\begin{align}x(t)&=1+\frac{1}{4}(e^{j2\pi{t}}+e^{-j2\pi{t}})+\frac{1}{2}(e^{j4\pi{t}}+e^{-j4\pi{t}})\\&\quad+\frac{1}{3}(e^{j6\pi{t}}+e^{-j6\pi{t}})\end{align}\]

Equivalently, using Euler's relation, we can write \(x(t)\) in the form

\[\tag{3.28}x(t)=1+\frac{1}{2}\cos2\pi{t}+\cos4\pi{t}+\frac{2}{3}\cos6\pi{t}\]

In Figure 3.4, we illustrate graphically how the signal \(x(t)\) is built up from its harmonic components.
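As a quick numerical check (a Python sketch; the sample count is an arbitrary choice), we can verify that the exponential form of eq. (3.26) and the cosine form of eq. (3.28) describe the same signal:

```python
import numpy as np

# Fourier series coefficients from Example 3.2 (fundamental frequency 2*pi).
a = {0: 1.0, 1: 0.25, -1: 0.25, 2: 0.5, -2: 0.5, 3: 1/3, -3: 1/3}

t = np.linspace(0, 1, 1000)  # one period, since T = 2*pi/omega_0 = 1

# Synthesis via eq. (3.26): sum of complex exponentials.
x_exp = sum(ak * np.exp(1j * k * 2 * np.pi * t) for k, ak in a.items())

# Closed cosine form of eq. (3.28).
x_cos = (1 + 0.5 * np.cos(2 * np.pi * t) + np.cos(4 * np.pi * t)
         + (2 / 3) * np.cos(6 * np.pi * t))

# The two expressions agree to machine precision, and the sum is purely real.
assert np.allclose(x_exp.real, x_cos)
assert np.allclose(x_exp.imag, 0)
```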

Equation (3.28) is an example of an alternative form for the Fourier series of real periodic signals.

Specifically, suppose that \(x(t)\) is real and can be represented in the form of eq. (3.25). Then, since \(x^*(t)=x(t)\), we obtain

\[x(t)=\sum_{k=-\infty}^{+\infty}a_k^*e^{-jk\omega_0t}\]

Replacing \(k\) by \(-k\) in the summation, we have

\[x(t)=\sum_{k=-\infty}^{+\infty}a_{-k}^*e^{jk\omega_0t}\]

which, by comparison with eq. (3.25), requires that \(a_k=a_{-k}^*\), or equivalently, that

\[\tag{3.29}a_k^*=a_{-k}\]

Note that this is the case in Example 3.2, where the \(a_k\)'s are in fact real and \(a_k=a_{-k}\).

To derive the alternative forms of the Fourier series, we first rearrange the summation in eq. (3.25) as

\[x(t)=a_0+\sum_{k=1}^{\infty}[a_ke^{jk\omega_0t}+a_{-k}e^{-jk\omega_0t}]\]

Substituting \(a_k^*\) for \(a_{-k}\) from eq. (3.29), we obtain

\[x(t)=a_0+\sum_{k=1}^{\infty}[a_ke^{jk\omega_0t}+a_k^*e^{-jk\omega_0t}]\]

Since the two terms inside the summation are complex conjugates of each other, this can be expressed as

\[\tag{3.30}x(t)=a_0+\sum_{k=1}^{\infty}2\mathcal{Re}\{a_ke^{jk\omega_0t}\}\]

If \(a_k\) is expressed in polar form as

\[a_k=A_ke^{j\theta_k}\]

then eq. (3.30) becomes

\[x(t)=a_0+\sum_{k=1}^{\infty}2\mathcal{Re}\{A_ke^{j(k\omega_0t+\theta_k)}\}\]

That is,

\[\tag{3.31}x(t)=a_0+2\sum_{k=1}^{\infty}A_k\cos(k\omega_0t+\theta_k)\]

Equation (3.31) is one commonly encountered form for the Fourier series of real periodic signals in continuous time.

Another form is obtained by writing \(a_k\) in rectangular form as

\[a_k=B_k+jC_k\]

where \(B_k\) and \(C_k\) are both real. With this expression for \(a_k\), eq. (3.30) takes the form

\[\tag{3.32}x(t)=a_0+2\sum_{k=1}^{\infty}[B_k\cos{k\omega_0t}-C_k\sin{k\omega_0t}]\]

In Example 3.2 the \(a_k\)'s are all real, so that \(a_k=A_k=B_k\), and therefore, both representations, eqs. (3.31) and (3.32), reduce to the same form, eq. (3.28).

Thus, for real periodic functions, the Fourier series in terms of complex exponentials, as given in eq. (3.25), is mathematically equivalent to either of the two forms in eqs. (3.31) and (3.32) that use trigonometric functions.
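This equivalence can be checked numerically. The sketch below (with illustrative choices: random coefficients and \(T=1\)) builds a real signal from conjugate-symmetric coefficients and compares the exponential form of eq. (3.25) with the amplitude-phase form of eq. (3.31):

```python
import numpy as np

rng = np.random.default_rng(0)
omega0 = 2 * np.pi  # illustrative fundamental frequency (T = 1)
N = 4

# Random complex coefficients for k = 1..N; imposing conjugate symmetry
# a_{-k} = a_k* (eq. 3.29) guarantees a real signal.
a0 = rng.standard_normal()
a_pos = rng.standard_normal(N) + 1j * rng.standard_normal(N)

t = np.linspace(0, 1, 500)

# Exponential form, eq. (3.25).
x_exp = a0 + sum(a_pos[k - 1] * np.exp(1j * k * omega0 * t)
                 + np.conj(a_pos[k - 1]) * np.exp(-1j * k * omega0 * t)
                 for k in range(1, N + 1))

# Amplitude-phase form, eq. (3.31), with a_k = A_k * exp(j*theta_k).
A = np.abs(a_pos)
theta = np.angle(a_pos)
x_trig = a0 + 2 * sum(A[k - 1] * np.cos(k * omega0 * t + theta[k - 1])
                      for k in range(1, N + 1))

assert np.allclose(x_exp.imag, 0, atol=1e-12)  # the signal is real
assert np.allclose(x_exp.real, x_trig)         # the two forms coincide
```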

Although the latter two are common forms for Fourier series, the complex exponential form of eq. (3.25) is particularly convenient for our purposes, so we will use that form almost exclusively.

Equation (3.29) illustrates one of many properties associated with Fourier series. These properties are often quite useful in gaining insight and for computational purposes, and in later tutorials we collect together the most important of them.

In later tutorials, we also will develop the majority of the properties within the broader context of the Fourier transform.

## 2. Determination of the Fourier Series Representation of a Continuous-Time Periodic Signal

Assuming that a given periodic signal can be represented with the series of eq. (3.25), we need a procedure for determining the coefficients \(a_k\).

Multiplying both sides of eq. (3.25) by \(e^{-jn\omega_0t}\) we obtain

\[\tag{3.33}x(t)e^{-jn\omega_0t}=\sum_{k=-\infty}^{+\infty}a_ke^{jk\omega_0t}e^{-jn\omega_0t}\]

Integrating both sides from 0 to \(T=2\pi/\omega_0\), we have

\[\int\limits_0^Tx(t)e^{-jn\omega_0t}\text{d}t=\int\limits_0^T\sum_{k=-\infty}^{+\infty}a_ke^{jk\omega_0t}e^{-jn\omega_0t}\text{d}t\]

Here, \(T\) is the fundamental period of \(x(t)\), and consequently, we are integrating over one period. Interchanging the order of integration and summation yields

\[\tag{3.34}\int\limits_0^Tx(t)e^{-jn\omega_0t}\text{d}t=\sum_{k=-\infty}^{+\infty}a_k\left[\int\limits_0^Te^{j(k-n)\omega_0t}\text{d}t\right]\]

The evaluation of the bracketed integral is straightforward. Rewriting this integral using Euler's formula, we obtain

\[\tag{3.35}\int\limits_0^Te^{j(k-n)\omega_0t}\text{d}t=\int\limits_0^T[\cos(k-n)\omega_0t]\text{d}t+j\int\limits_0^T[\sin(k-n)\omega_0t]\text{d}t\]

For \(k\ne{n}\), \(\cos(k-n)\omega_0t\) and \(\sin(k-n)\omega_0t\) are periodic sinusoids with fundamental period \(T/|k-n|\). Therefore, in eq. (3.35), we are integrating over an interval (of length \(T\)) that is an integral number of periods of these signals. Since the integral may be viewed as measuring the total area under the functions over the interval, we see that for \(k\ne{n}\), both of the integrals on the right-hand side of eq. (3.35) are zero.

For \(k=n\), the integrand on the left-hand side of eq. (3.35) equals 1, and thus, the integral equals \(T\). In summary, we then have

\[\int\limits_0^Te^{j(k-n)\omega_0t}\text{d}t=\begin{cases}T,\qquad{k=n}\\0,\qquad{k\ne{n}}\end{cases}\]

and consequently, the right-hand side of eq. (3.34) reduces to \(Ta_n\). Therefore,

\[\tag{3.36}a_n=\frac{1}{T}\int\limits_0^Tx(t)e^{-jn\omega_0t}\text{d}t\]

which provides the equation for determining the coefficients.
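The orthogonality relation underlying this derivation is easy to confirm numerically. In the sketch below (\(T=2\) is an arbitrary choice), a rectangle-rule sum over one period approximates the integral of \(e^{j(k-n)\omega_0t}\):

```python
import numpy as np

T = 2.0                       # illustrative period
omega0 = 2 * np.pi / T
Ns = 4096
t = np.arange(Ns) * (T / Ns)  # uniform samples over one period
dt = T / Ns

for k in range(-3, 4):
    for n in range(-3, 4):
        # Rectangle-rule approximation of the integral over one period.
        val = np.sum(np.exp(1j * (k - n) * omega0 * t)) * dt
        expected = T if k == n else 0.0
        # T when k = n, and 0 otherwise, to numerical precision.
        assert abs(val - expected) < 1e-9
```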

Furthermore, note that in evaluating eq. (3.35), the only fact that we used concerning the interval of integration was that we were integrating over an interval of length \(T\), which is an integral number of periods of \(\cos(k-n)\omega_0t\) and \(\sin(k-n)\omega_0t\). Therefore, we will obtain the same result if we integrate over any interval of length \(T\).

That is, if we denote integration over **any** interval of length \(T\) by \(\int_T\), we have

\[\int_Te^{j(k-n)\omega_0t}\text{d}t=\begin{cases}T,\qquad{k=n}\\0,\qquad{k\ne{n}}\end{cases}\]

and consequently,

\[\tag{3.37}a_n=\frac{1}{T}\int_Tx(t)e^{-jn\omega_0t}\text{d}t\]

To summarize, if \(x(t)\) has a Fourier series representation [i.e., if it can be expressed as a linear combination of harmonically related complex exponentials in the form of eq. (3.25)], then the coefficients are given by eq. (3.37).

This pair of equations, then, defines the Fourier series of a periodic continuous-time signal:

\[\tag{3.38}x(t)=\sum_{k=-\infty}^{+\infty}a_ke^{jk\omega_0t}=\sum_{k=-\infty}^{+\infty}a_ke^{jk(2\pi/T)t}\]

\[\tag{3.39}a_k=\frac{1}{T}\int_Tx(t)e^{-jk\omega_0t}\text{d}t=\frac{1}{T}\int_Tx(t)e^{-jk(2\pi/T)t}\text{d}t\]

Here, we have written equivalent expressions for the Fourier series in terms of the fundamental frequency \(\omega_0\) and the fundamental period \(T\).

Equation (3.38) is referred to as the **synthesis** equation and eq. (3.39) as the **analysis** equation.

The set of coefficients \(\{a_k\}\) are often called the **Fourier series coefficients** or the **spectral coefficients** of \(x(t)\). These complex coefficients measure the portion of the signal \(x(t)\) that is at each harmonic of the fundamental component.

The coefficient \(a_0\) is the dc or constant component of \(x(t)\) and is given by eq. (3.39) with \(k=0\). That is,

\[\tag{3.40}a_0=\frac{1}{T}\int_Tx(t)\text{d}t\]

which is simply the average value of \(x(t)\) over one period.
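The analysis and synthesis equations can be exercised together in a short numerical sketch (the test signal and grid size are illustrative choices): compute the \(a_k\) from eq. (3.39) by a rectangle-rule sum over one period, then rebuild the signal from eq. (3.38):

```python
import numpy as np

T = 1.0
omega0 = 2 * np.pi / T
Ns = 2048
t = np.arange(Ns) * (T / Ns)  # uniform samples over one period
dt = T / Ns

# An arbitrary smooth periodic test signal (chosen for illustration).
x = 1.5 + np.cos(omega0 * t) - 0.5 * np.sin(3 * omega0 * t)

# Analysis, eq. (3.39): a_k = (1/T) * integral of x(t) e^{-jk w0 t} over T.
K = 5
a = {k: np.sum(x * np.exp(-1j * k * omega0 * t)) * dt / T
     for k in range(-K, K + 1)}

# Synthesis, eq. (3.38): rebuild the signal from its coefficients.
x_hat = sum(ak * np.exp(1j * k * omega0 * t) for k, ak in a.items())

assert np.allclose(x_hat.real, x)          # reconstruction matches
assert np.isclose(a[0].real, np.mean(x))   # a_0 is the average value, eq. (3.40)
```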

Equations (3.38) and (3.39) were known to both Euler and Lagrange in the middle of the 18th century. However, they discarded this line of analysis without having examined the question of how large a class of periodic signals could, in fact, be represented in such a fashion.

Before we turn to this question in the next section, let us illustrate the continuous-time Fourier series by means of a few examples.

**Example 3.3**

Consider the signal

\[x(t)=\sin\omega_0t\]

whose fundamental frequency is \(\omega_0\).

One approach to determining the Fourier series coefficients for this signal is to apply eq. (3.39). For this simple case, however, it is easier to expand the sinusoidal signal as a linear combination of complex exponentials and identify the Fourier series coefficients by inspection.

Specifically, we can express \(\sin\omega_0t\) as

\[\sin\omega_0t=\frac{1}{2j}e^{j\omega_0t}-\frac{1}{2j}e^{-j\omega_0t}\]

Comparing the right-hand sides of this equation and eq. (3.38), we obtain

\[\begin{align}a_1&=\frac{1}{2j},\qquad{a_{-1}}=-\frac{1}{2j}\\a_k&=0,\qquad{k\ne+1}\text{ or }-1\end{align}\]

**Example 3.4**

Let

\[x(t)=1+\sin\omega_0t+2\cos\omega_0t+\cos\left(2\omega_0t+\frac{\pi}{4}\right)\]

which has fundamental frequency \(\omega_0\).

As with Example 3.3, we can again expand \(x(t)\) directly in terms of complex exponentials, so that

\[x(t)=1+\frac{1}{2j}[e^{j\omega_0t}-e^{-j\omega_0t}]+[e^{j\omega_0t}+e^{-j\omega_0t}]+\frac{1}{2}[e^{j(2\omega_0t+\pi/4)}+e^{-j(2\omega_0t+\pi/4)}]\]

Collecting terms, we obtain

\[x(t)=1+\left(1+\frac{1}{2j}\right)e^{j\omega_0t}+\left(1-\frac{1}{2j}\right)e^{-j\omega_0t}+\left(\frac{1}{2}e^{j(\pi/4)}\right)e^{j2\omega_0t}+\left(\frac{1}{2}e^{-j(\pi/4)}\right)e^{-j2\omega_0t}\]

Thus, the Fourier series coefficients for this example are

\[\begin{align}a_0&=1\\a_1&=\left(1+\frac{1}{2j}\right)=1-\frac{1}{2}j\\a_{-1}&=\left(1-\frac{1}{2j}\right)=1+\frac{1}{2}j\\a_2&=\frac{1}{2}e^{j(\pi/4)}=\frac{\sqrt{2}}{4}(1+j)\\a_{-2}&=\frac{1}{2}e^{-j(\pi/4)}=\frac{\sqrt{2}}{4}(1-j)\\a_k&=0,\qquad|k|\gt2\end{align}\]

In Figure 3.5, we show a bar graph of the magnitude and phase of \(a_k\).
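These coefficients can be double-checked by evaluating the analysis equation (3.39) numerically (a sketch, with \(T=1\) chosen for illustration):

```python
import numpy as np

T = 1.0
omega0 = 2 * np.pi / T
Ns = 4096
t = np.arange(Ns) * (T / Ns)
dt = T / Ns

# The signal of Example 3.4.
x = (1 + np.sin(omega0 * t) + 2 * np.cos(omega0 * t)
     + np.cos(2 * omega0 * t + np.pi / 4))

def coeff(k):
    # Eq. (3.39) evaluated by a rectangle-rule sum over one period.
    return np.sum(x * np.exp(-1j * k * omega0 * t)) * dt / T

assert np.isclose(coeff(0), 1)
assert np.isclose(coeff(1), 1 - 0.5j)
assert np.isclose(coeff(-1), 1 + 0.5j)
assert np.isclose(coeff(2), (np.sqrt(2) / 4) * (1 + 1j))
assert np.isclose(coeff(-2), (np.sqrt(2) / 4) * (1 - 1j))
```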

**Example 3.5**

The periodic square wave, sketched in Figure 3.6 and defined over one period as

\[\tag{3.41}x(t)=\begin{cases}1,\qquad|t|\lt{T_1}\\0,\qquad{T_1}\lt|t|\lt{T/2}\end{cases}\]

is a signal that we will encounter a number of times throughout our tutorials. This signal is periodic with fundamental period \(T\) and fundamental frequency \(\omega_0=2\pi/T\).

To determine the Fourier series coefficients for \(x(t)\), we use eq. (3.39). Because of the symmetry of \(x(t)\) about \(t=0\), it is convenient to choose \(-T/2\le{t}\lt{T/2}\) as the interval over which the integration is performed, although any interval of length \(T\) is equally valid and thus will lead to the same result.

Using these limits of integration and substituting from eq. (3.41), we have first, for \(k=0\),

\[\tag{3.42}a_0=\frac{1}{T}\int_{-T_1}^{T_1}\text{d}t=\frac{2T_1}{T}\]

As mentioned previously, \(a_0\) is interpreted to be the average value of \(x(t)\), which in this case equals the fraction of each period during which \(x(t)=1\).

For \(k\ne0\), we obtain

\[a_k=\frac{1}{T}\int_{-T_1}^{T_1}e^{-jk\omega_0t}\text{d}t=\left.-\frac{1}{jk\omega_0T}e^{-jk\omega_0t}\right|_{-T_1}^{T_1}\]

which we may rewrite as

\[\tag{3.43}a_k=\frac{2}{k\omega_0T}\left[\frac{e^{jk\omega_0T_1}-e^{-jk\omega_0T_1}}{2j}\right]\]

Noting that the term in brackets is \(\sin(k\omega_0T_1)\), we can express the coefficients \(a_k\) as

\[\tag{3.44}a_k=\frac{2\sin(k\omega_0T_1)}{k\omega_0T}=\frac{\sin(k\omega_0T_1)}{k\pi},\qquad{k\ne0}\]

where we have used the fact that \(\omega_0T=2\pi\).

Figure 3.7 is a bar graph of the Fourier series coefficients for this example. In particular, the coefficients are plotted for a fixed value of \(T_1\) and several values of \(T\).

For this specific example, the Fourier coefficients are real, and consequently, they can be depicted graphically with only a single graph. More generally, of course, the Fourier coefficients are complex, so that two graphs, corresponding to the real and imaginary parts, or magnitude and phase, of each coefficient, would be required.

For \(T=4T_1\), \(x(t)\) is a square wave that is unity for half the period and zero for half the period. In this case, \(\omega_0T_1=\pi/2\), and from eq. (3.44),

\[\tag{3.45}a_k=\frac{\sin(\pi{k}/2)}{k\pi},\qquad{k\ne0}\]

while

\[\tag{3.46}a_0=\frac{1}{2}\]

From eq. (3.45), \(a_k=0\) for \(k\) even and nonzero. Also, \(\sin(\pi{k}/2)\) alternates between \(\pm1\) for successive odd values of \(k\). Therefore,

\[\begin{align}a_1&=a_{-1}=\frac{1}{\pi}\\a_3&=a_{-3}=-\frac{1}{3\pi}\\a_5&=a_{-5}=\frac{1}{5\pi}\\&\quad\vdots\end{align}\]
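A short numerical sketch (with \(T_1=1\), \(T=4T_1\), and an illustrative sample count) confirms that the closed form of eq. (3.45) agrees with direct evaluation of the analysis equation (3.39):

```python
import numpy as np

T1 = 1.0
T = 4 * T1                   # the case T = 4*T1 of eq. (3.45)
omega0 = 2 * np.pi / T
Ns = 100000
t = np.arange(Ns) * (T / Ns) - T / 2   # one period, centered at t = 0
dt = T / Ns

x = (np.abs(t) < T1).astype(float)     # square wave of eq. (3.41)

for k in range(1, 8):
    # Rectangle-rule evaluation of eq. (3.39)...
    ak = np.sum(x * np.exp(-1j * k * omega0 * t)) * dt / T
    # ...against the closed form of eq. (3.45).
    closed_form = np.sin(np.pi * k / 2) / (k * np.pi)
    assert abs(ak - closed_form) < 1e-3

a0 = np.sum(x) * dt / T
assert abs(a0 - 0.5) < 1e-3            # eq. (3.46)
```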

## 3. Convergence of the Fourier Series

Although Euler and Lagrange would have been happy with the results of Examples 3.3 and 3.4, they would have objected to Example 3.5, since \(x(t)\) is discontinuous while each of its harmonic components is continuous.

Fourier, on the other hand, considered the same example and maintained that the Fourier series representation of the square wave is valid. In fact, Fourier maintained that **any** periodic signal could be represented by a Fourier series. Although this is not quite true, it *is* true that Fourier series can be used to represent an extremely large class of periodic signals, including the square wave and all other periodic signals with which we will be concerned in our tutorials and which are of interest in practice.

To gain an understanding of the square-wave example and, more generally, of the question of the validity of Fourier series representations, let us examine the problem of approximating a given periodic signal \(x(t)\) by a linear combination of a finite number of harmonically related complex exponentials; that is, by a finite series of the form

\[\tag{3.47}x_N(t)=\sum_{k=-N}^{N}a_ke^{jk\omega_0t}\]

Let \(e_N(t)\) denote the approximation error; that is,

\[\tag{3.48}e_N(t)=x(t)-x_N(t)=x(t)-\sum_{k=-N}^{N}a_ke^{jk\omega_0t}\]

In order to determine how good any particular approximation is, we need to specify a quantitative measure of the size of the approximation error. The criterion that we will use is the energy in the error over one period:

\[\tag{3.49}E_N=\int_T|e_N(t)|^2\text{d}t\]

The particular choice for the coefficients in eq. (3.47) that minimize the energy in the error is

\[\tag{3.50}a_k=\frac{1}{T}\int_Tx(t)e^{-jk\omega_0t}\text{d}t\]

Comparing eqs. (3.50) and (3.39), we see that eq. (3.50) is identical to the expression used to determine the Fourier series coefficients.

Thus, if \(x(t)\) has a Fourier series representation, the best approximation using only a finite number of harmonically related complex exponentials is obtained by truncating the Fourier series to the desired number of terms.

As \(N\) increases, new terms are added and \(E_N\) decreases. If, in fact, \(x(t)\) has a Fourier series representation, then the limit of \(E_N\) as \(N\rightarrow\infty\) is zero.
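For the square wave of Example 3.5, this decrease of \(E_N\) can be observed directly (a sketch; the grid size and the particular values of \(N\) are arbitrary choices):

```python
import numpy as np

T1, T = 1.0, 4.0
omega0 = 2 * np.pi / T
Ns = 20000
t = np.arange(Ns) * (T / Ns) - T / 2
dt = T / Ns
x = (np.abs(t) < T1).astype(float)     # square wave of eq. (3.41)

def a(k):
    # Square-wave coefficients for T = 4*T1, eqs. (3.45)-(3.46).
    return 0.5 if k == 0 else np.sin(np.pi * k / 2) / (k * np.pi)

def E(N):
    # Energy in the error over one period, eq. (3.49), with x_N from eq. (3.47).
    xN = sum(a(k) * np.exp(1j * k * omega0 * t) for k in range(-N, N + 1))
    return np.sum(np.abs(x - xN) ** 2) * dt

energies = [E(N) for N in (1, 3, 9, 27, 81)]
# E_N shrinks as more harmonics are included.
assert all(e1 > e2 for e1, e2 in zip(energies, energies[1:]))
```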

**Let us turn now to the question of when a periodic signal \(x(t)\) does in fact have a Fourier series representation.**

Of course, for any signal, we can attempt to obtain a set of Fourier coefficients through the use of eq. (3.39). However, in some cases, the integral in eq. (3.39) may diverge; that is, the value obtained for some of the \(a_k\) may be infinite. Moreover, even if all of the coefficients obtained from eq. (3.39) are finite, when these coefficients are substituted into the synthesis equation (3.38), the resulting infinite series may not converge to the original signal \(x(t)\).

Fortunately, there are no convergence difficulties for large classes of periodic signals.

For example, every continuous periodic signal has a Fourier series representation for which the energy \(E_N\) in the approximation error approaches 0 as \(N\) goes to \(\infty\).

This is also true for many discontinuous signals. Since we will find it very useful to include discontinuous signals such as square waves in our discussions, it is worthwhile to investigate the issue of convergence in a bit more detail.

Specifically, there are two somewhat different classes of conditions that a periodic signal can satisfy to guarantee that it can be represented by a Fourier series. In discussing these, we will not attempt to provide a complete mathematical justification; more rigorous treatments can be found in many texts on Fourier analysis.

**1. First class of conditions**

One class of periodic signals that are representable through the Fourier series is those signals which have finite energy over a single period, i.e., signals for which

\[\tag{3.51}\int_T|x(t)|^2\text{d}t\lt\infty\]

When this condition is satisfied, we are guaranteed that the coefficients \(a_k\) obtained from eq. (3.39) are finite.

Furthermore, let \(x_N(t)\) be the approximation to \(x(t)\) obtained by using these coefficients for \(|k|\le{N}\):

\[\tag{3.52}x_N(t)=\sum_{k=-N}^{+N}a_ke^{jk\omega_0t}\]

Then we are guaranteed that the energy \(E_N\) in the approximation error, as defined in eq. (3.49), converges to 0 as we add more and more terms, i.e., as \(N\rightarrow\infty\).

That is, if we define

\[\tag{3.53}e(t)=x(t)-\sum_{k=-\infty}^{+\infty}a_ke^{jk\omega_0t}\]

then

\[\tag{3.54}\int_T|e(t)|^2\text{d}t=0\]

As we will see in an example at the end of this section, eq. (3.54) does **not** imply that the signal \(x(t)\) and its Fourier series representation

\[\tag{3.55}\sum_{k=-\infty}^{+\infty}a_ke^{jk\omega_0t}\]

are equal at every value of \(t\). What it does say is that there is no energy in their difference.

The type of convergence guaranteed when \(x(t)\) has finite energy over a single period is quite useful.

In this case eq. (3.54) states that the difference between \(x(t)\) and its Fourier series representation has zero energy. Since physical systems respond to signal energy, from this perspective \(x(t)\) and its Fourier series representation are indistinguishable.

Because most of the periodic signals that we consider do have finite energy over a single period, they have Fourier series representations.

**2. Second class of conditions**

Moreover, an alternative set of conditions, developed by P. L. Dirichlet and also satisfied by essentially all of the signals with which we will be concerned, guarantees that \(x(t)\) **equals** its Fourier series representation, except at isolated values of \(t\) for which \(x(t)\) is discontinuous. At these values, the infinite series of eq. (3.55) converges to the average of the values on either side of the discontinuity.

The Dirichlet conditions are as follows:

**Condition 1.**

Over any period, \(x(t)\) must be **absolutely integrable**; that is,

\[\tag{3.56}\int_T|x(t)|\text{d}t\lt\infty\]

As with square integrability, this guarantees that each coefficient \(a_k\) will be finite, since

\[|a_k|\le\frac{1}{T}\int_T|x(t)e^{-jk\omega_0t}|\text{d}t=\frac{1}{T}\int_T|x(t)|\text{d}t\]

So if

\[\frac{1}{T}\int_T|x(t)|\text{d}t\lt\infty\]

then

\[|a_k|\lt\infty\]

A periodic signal that violates the first Dirichlet condition is

\[x(t)=\frac{1}{t},\qquad0\lt{t}\le1;\]

that is, \(x(t)\) is periodic with period 1. This signal is illustrated in Figure 3.8(a).

**Condition 2.**

In any finite interval of time, \(x(t)\) is of bounded variation; that is, there are no more than a finite number of maxima and minima during any single period of the signal.

An example of a function that meets Condition 1 but not Condition 2 is

\[\tag{3.57}x(t)=\sin\left(\frac{2\pi}{t}\right),\qquad0\lt{t}\le1\]

as illustrated in Figure 3.8(b). For this function, which is periodic with \(T=1\),

\[\int_0^1|x(t)|\text{d}t\lt1\]

The function has, however, an infinite number of maxima and minima in the interval.
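That the integral is indeed finite (and less than 1) can be checked with a crude midpoint-rule estimate (a sketch; the sample count is arbitrary, and the rule under-resolves the oscillations near \(t=0\), where the integrand is in any case bounded by 1):

```python
import numpy as np

# Midpoint-rule estimate of the integral of |sin(2*pi/t)| over (0, 1].
n = 1_000_000
t = (np.arange(n) + 0.5) / n          # midpoints of n equal subintervals
val = np.mean(np.abs(np.sin(2 * np.pi / t)))

# The signal satisfies Condition 1: it is absolutely integrable over a period.
assert val < 1
```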

**Condition 3.**

In any finite interval of time, there are only a finite number of discontinuities. Furthermore, each of these discontinuities is finite.

An example of a function that violates Condition 3 is illustrated in Figure 3.8(c). The signal, of period \(T=8\), is composed of an infinite number of sections, each of which is half the height and half the width of the previous section.

Thus, the area under one period of the function is clearly less than 8. However, there are an infinite number of discontinuities in each period, thereby violating Condition 3.

As can be seen from the examples given in Figure 3.8, signals that do not satisfy the Dirichlet conditions are generally pathological in nature and consequently do not typically arise in practical contexts. For this reason, the question of the convergence of Fourier series will not play a particularly significant role in the remainder of our tutorials.

For a periodic signal that has no discontinuities, the Fourier series representation converges and equals the original signal at every value of \(t\).

For a periodic signal with a finite number of discontinuities in each period, the Fourier series representation equals the signal everywhere except at the isolated points of discontinuity, at which the series converges to the average value of the signal on either side of the discontinuity. In this case the difference between the original signal and its Fourier series representation contains no energy, and consequently, the two signals can be thought of as being the same for all practical purposes.

Specifically, since the signals differ only at isolated points, the integrals of both signals over any interval are identical. For this reason, the two signals behave identically under convolution and consequently are identical from the standpoint of the analysis of LTI systems.

**Gibbs Phenomenon**

To gain some additional understanding of how the Fourier series converges for a periodic signal with discontinuities, let us return to the example of a square wave.

In particular, in 1898, an American physicist, Albert Michelson, constructed a harmonic analyzer, a device that, for any periodic signal \(x(t)\), would compute the truncated Fourier series approximation of eq. (3.52) for values of \(N\) up to 80.

Michelson tested his device on many functions, with the expected result that \(x_N(t)\) looked very much like \(x(t)\). However, when he tried the square wave, he obtained an important and, to him, very surprising result.

Michelson was concerned about the behavior he observed and thought that his device might have had a defect. He wrote about the problem to the famous mathematical physicist Josiah Gibbs, who investigated it and reported his explanation in 1899.

What Michelson had observed is illustrated in Figure 3.9, where we have shown \(x_N(t)\) for several values of \(N\) for \(x(t)\), a symmetric square wave (\(T=4T_1\)). In each case, the partial sum is superimposed on the original square wave.

Since the square wave satisfies the Dirichlet conditions, the limit as \(N\rightarrow\infty\) of \(x_N(t)\) at the discontinuities should be the average of the values on either side of the discontinuity. We see from the figure that this is in fact the case, since for any \(N\), \(x_N(t)\) has exactly that value at the discontinuities.

Furthermore, for any other value of \(t\), say, \(t=t_1\), we are guaranteed that

\[\lim_{N\rightarrow\infty}x_N(t_1)=x(t_1)\]

Therefore, the squared error in the Fourier series representation of the square wave has zero area, as in eqs. (3.53) and (3.54).

For this example, the interesting effect that Michelson observed is that the behavior of the partial sum in the vicinity of the discontinuity exhibits ripples and that the peak amplitude of these ripples does not seem to decrease with increasing \(N\). Gibbs showed that this is in fact the case.

Specifically, for a discontinuity of unity height, the partial sum exhibits a maximum value of 1.09 (i.e., an overshoot of 9% of the height of the discontinuity), no matter how large \(N\) becomes.

One must be careful to interpret this correctly, however. As stated before, for any **fixed** value of \(t\), say, \(t=t_1\), the partial sums will converge to the correct value, and at the discontinuity they will converge to one-half the sum of the values of the signal on either side of the discontinuity.

However, the closer \(t_1\) is chosen to the point of discontinuity, the larger \(N\) must be in order to reduce the error below a specified amount. Thus, as \(N\) increases, the ripples in the partial sums become compressed toward the discontinuity, but for any finite value of \(N\), the peak amplitude of the ripples remains constant. This behavior has come to be known as the Gibbs phenomenon.

The implication is that the truncated Fourier series approximation \(x_N(t)\) of a discontinuous signal \(x(t)\) will in general exhibit high-frequency ripples and overshoot \(x(t)\) near the discontinuities. If such an approximation is used in practice, a large enough value of \(N\) should be chosen so as to guarantee that the total energy in these ripples is insignificant.

In the limit, of course, we know that the energy in the approximation error vanishes and that the Fourier series representation of a discontinuous signal such as the square wave converges.
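The Gibbs phenomenon is easy to reproduce numerically. The sketch below evaluates partial sums of the symmetric square wave (\(T=4T_1\), coefficients from eq. (3.45)) near the discontinuity at \(t=T_1\) and checks that the peak overshoot stays near 9% of the unit jump as \(N\) grows:

```python
import numpy as np

T1, T = 1.0, 4.0
omega0 = 2 * np.pi / T
# Dense time grid around the discontinuity at t = T1 = 1.
t = np.linspace(0.5, 1.5, 40001)

def xN(N):
    # Partial sum x_N(t), eq. (3.52), using the real coefficients of eq. (3.45)
    # in the trigonometric form x_N(t) = a_0 + sum 2*a_k*cos(k*w0*t).
    s = np.full_like(t, 0.5)
    for k in range(1, N + 1):
        s += 2 * np.sin(np.pi * k / 2) / (k * np.pi) * np.cos(k * omega0 * t)
    return s

# The ripples compress toward the discontinuity as N grows, but their peak
# amplitude (about 1.09 for a unit jump) does not decrease.
for N in (20, 100, 500):
    peak = xN(N).max()
    assert 1.05 < peak < 1.10
```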

The next tutorial discusses the **properties of continuous-time Fourier series**.