With the advent of optical amplifiers, fiber losses can be compensated by inserting amplifiers periodically along a long-haul fiber link as shown below.
At the same time, the effects of fiber dispersion (group-velocity dispersion, or GVD) can be reduced by using dispersion management. Since neither the fiber loss nor the GVD is then a limiting factor, one may ask how many in-line amplifiers can be cascaded in series and what limits the total link length. This topic will be covered in another tutorial. Here we focus on the factors that limit the performance of amplified fiber links and provide a few design guidelines. This tutorial also outlines the progress realized in the development of terrestrial and undersea lightwave systems since 1980, when the first system was installed.
1. Performance-Limiting Factors
The most important consideration in designing a periodically amplified fiber link is related to the nonlinear effects occurring inside all optical fibers. For single-channel lightwave systems, the dominant nonlinear phenomenon that limits the system performance is self-phase modulation (SPM). When optoelectronic regenerators are used, the SPM effects accumulate only over one repeater spacing (typically < 100 km) and are of little concern if the launch power satisfies the condition

φNL = γ Pin NA Leff << 1,

where γ is the nonlinear parameter, Pin is the launch power, NA is the number of cascaded amplifiers, and Leff is the effective length of each fiber section. With NA = 1, this amounts to the condition Pin << 22 mW. In contrast, the SPM effects accumulate over long lengths (~1000 km) when in-line amplifiers are used periodically for loss compensation. A rough estimate of the limitation imposed by SPM is again obtained from this condition, which predicts that the peak power should be below 2.2 mW for 10 cascaded amplifiers when the nonlinear parameter γ = 2 W⁻¹/km. The condition on the average power depends on the modulation format and the shape of optical pulses. It is nonetheless clear that the average power should be reduced to below 1 mW for the SPM effects to remain negligible in a lightwave system designed to operate over a distance of more than 1000 km. The limiting value of the average power also depends on the type of fiber in which the light propagates, through the effective core area Aeff. The SPM effects are most dominant inside dispersion-compensating fibers, for which Aeff is typically close to 20 μm².
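These power limits follow from simple arithmetic; the sketch below reproduces them, assuming a fiber loss of 0.2 dB/km to estimate the effective length (that loss value is an assumption, not stated above):

```python
import math

# SPM limit: require the accumulated nonlinear phase shift
# phi_NL = gamma * Pin * NA * Leff to stay well below 1.
gamma = 2.0                           # nonlinear parameter (W^-1/km)
alpha = 0.2 * math.log(10) / 10       # assumed 0.2 dB/km loss, in km^-1
L_eff = 1.0 / alpha                   # effective length of a long span, ~21.7 km

def max_launch_power(n_amps):
    """Launch power (W) at which phi_NL reaches 1 for n_amps amplifiers."""
    return 1.0 / (gamma * n_amps * L_eff)

print(f"NA = 1 : Pin << {1e3 * max_launch_power(1):.0f} mW")
print(f"NA = 10: Pin << {1e3 * max_launch_power(10):.1f} mW")
```

With these assumed values the limits come out near 23 mW and 2.3 mW, consistent with the ~22 mW and ~2.2 mW figures quoted above.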
The foregoing discussion of the SPM-induced limitations is too simplistic to be accurate because it completely ignores the role of fiber dispersion. In fact, as the dispersive and nonlinear effects act on the optical signal simultaneously, their mutual interplay becomes quite important. The effect of SPM on pulses propagating inside an optical fiber can be included by using the nonlinear Schrödinger (NLS) equation,

∂A/∂z + (iβ2/2) ∂²A/∂T² + (α/2)A = iγ|A|²A,

where A(z, T) is the slowly varying pulse envelope and fiber losses are included through the α term. This term can also include periodic amplification of the signal by treating α as a function of z. The NLS equation is used routinely for designing modern lightwave systems.
Because of its nonlinear nature, the NLS equation must in general be solved numerically. A numerical approach is indeed adopted for quantifying the impact of SPM on the performance of long-haul lightwave systems. The use of a large-effective-area fiber (LEAF) helps by reducing the nonlinear parameter γ, defined as γ = 2πn2/(λAeff). Appropriate chirping of input pulses can also be beneficial for reducing the SPM effects. This feature has led to the adoption of a new modulation format known as the chirped RZ (CRZ) format. Numerical simulations show that, in general, the launch power must be optimized to a value that depends on many design parameters, such as the bit rate, the total link length, and the amplifier spacing. In one study, the optimum launch power was found to be about 1 mW for a 5-Gb/s signal transmitted over 9000 km with 40-km amplifier spacing.
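A common numerical approach is the split-step Fourier method, which alternates between a dispersive step (applied in the frequency domain) and a nonlinear SPM step (applied in the time domain). The sketch below propagates a Gaussian pulse over 100 km under the NLS equation; all parameter values are illustrative assumptions, not taken from the experiments cited here:

```python
import numpy as np

# Split-step Fourier solution of the NLS equation
#   dA/dz + (i*beta2/2) d^2A/dT^2 + (alpha/2) A = i*gamma*|A|^2 A
beta2 = -21.7e-27                      # GVD (s^2/m), anomalous regime near 1.55 um
gamma = 2e-3                           # nonlinear parameter (W^-1 m^-1)
alpha = 0.2 * np.log(10) / 10 / 1e3    # 0.2 dB/km expressed in m^-1

T = np.linspace(-200e-12, 200e-12, 2048, endpoint=False)  # time window (s)
dT = T[1] - T[0]
w = 2 * np.pi * np.fft.fftfreq(T.size, dT)                # angular frequency grid

A = np.sqrt(1e-3) * np.exp(-T**2 / (2 * (20e-12)**2))     # 1-mW-peak Gaussian pulse

dz, steps = 100.0, 1000                                   # 100-m steps, 100 km total
for _ in range(steps):
    # dispersion + loss applied exactly in the frequency domain
    A = np.fft.ifft(np.fft.fft(A) * np.exp((1j * beta2 / 2 * w**2 - alpha / 2) * dz))
    # SPM: pure phase rotation proportional to the local power
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)

print(f"peak power after 100 km: {np.abs(A).max()**2:.2e} W")
```

The pulse emerges broadened by dispersion and attenuated by 20 dB; in a periodically amplified link, the loss term would be replaced by a z-dependent gain/loss profile as described above.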
The combined effects of GVD and SPM also depend on the sign of the dispersion parameter β2. In the case of anomalous dispersion (β2 < 0), the nonlinear phenomenon of modulation instability can affect the system performance drastically. This problem can be overcome by using a combination of fibers with normal and anomalous GVD such that the average dispersion over the entire fiber link is "normal". However, a new kind of modulation instability, referred to as sideband instability, can occur in both the normal and anomalous GVD regions. It has its origin in the periodic variation of the signal power along the fiber link when equally spaced optical amplifiers are used to compensate for fiber losses. Since the quantity γ|A|² in the NLS equation above is then a periodic function of z, the resulting nonlinear-index grating can initiate a four-wave-mixing process that generates sidebands in the signal spectrum. This instability can be avoided by making the amplifier spacing nonuniform.
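The sideband frequencies can be estimated from the phase-matching condition between the signal and the amplifier-period index grating, (β2/2)Ω² = 2πm/LA for integer m (this is the standard phase-matching result; the parameter values below are assumptions chosen for illustration):

```python
import math

beta2 = 21.7e-27       # |beta2| (s^2/m), assumed dispersion magnitude
L_A = 50e3             # amplifier spacing (m), assumed

def sideband_offset_hz(m):
    """Offset of the m-th instability sideband from the carrier (Hz)."""
    return math.sqrt(4 * math.pi * m / (beta2 * L_A)) / (2 * math.pi)

for m in (1, 2, 3):
    print(f"m = {m}: sideband offset ~ {sideband_offset_hz(m) / 1e9:.1f} GHz")
```

For these values the lowest sidebands fall in the tens of GHz, close enough to the signal spectrum to matter; making the amplifier spacing nonuniform destroys the strict periodicity and hence the phase matching.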
Another factor that plays a crucial role is the noise added by optical amplifiers. Similar to the case of electronic amplifiers, the noise of optical amplifiers is quantified through an amplifier noise figure Fn. The nonlinear interaction between the amplified spontaneous emission and the signal can lead to a large spectral broadening through nonlinear phenomena such as cross-phase modulation and four-wave mixing. Because the noise has a much larger bandwidth than the signal, its impact can be reduced by using optical filters. Numerical simulations indeed show a considerable improvement when optical filters are used after every in-line amplifier.
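The accumulation of amplified spontaneous emission can be gauged with a simple per-channel OSNR model, taking P_ASE = 2 nsp hν (G − 1) Bref per amplifier with Fn ≈ 2 nsp; the gain, noise figure, and launch power below are assumed values for illustration:

```python
import math

h = 6.626e-34          # Planck constant (J s)
nu = 193.4e12          # optical carrier frequency near 1.55 um (Hz)
B_ref = 12.5e9         # 0.1-nm reference bandwidth (Hz)

def osnr_db(p_launch, gain_db, noise_figure_db, n_amps):
    """OSNR (dB) after a chain of identical amplifiers, ASE in both polarizations."""
    G = 10 ** (gain_db / 10)
    n_sp = 10 ** (noise_figure_db / 10) / 2      # Fn ~ 2*n_sp for high gain
    p_ase = n_amps * 2 * n_sp * h * nu * (G - 1) * B_ref
    return 10 * math.log10(p_launch / p_ase)

# 1 mW launched, 20-dB gain spans (100 km at 0.2 dB/km), 5-dB noise figure
print(f"after 10 amplifiers: {osnr_db(1e-3, 20, 5, 10):.1f} dB")
print(f"after 50 amplifiers: {osnr_db(1e-3, 20, 5, 50):.1f} dB")
```

Each tenfold increase in the number of amplifiers costs 10 dB of OSNR, which is why noise rather than loss ultimately limits the link length; filtering after each amplifier removes the out-of-band ASE before it can interact nonlinearly with the signal.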
The polarization effects that are totally negligible in traditional "nonamplified" lightwave systems become of concern for long-haul systems with in-line amplifiers. In addition to PMD, optical amplifiers can also induce polarization-dependent gain and loss. Although the PMD effects must be considered during system design, their impact depends on design parameters such as the bit rate and the transmission distance. For bit rates as high as 10 Gb/s, the PMD effects can be reduced to an acceptable level with a proper design. However, PMD becomes of major concern for 40-Gb/s systems, for which the bit slot is only 25 ps wide. The use of a PMD-compensation technique is often necessary at such high bit rates.
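Why PMD bites at 40 Gb/s but not at 10 Gb/s follows from the square-root scaling of the mean differential group delay, ⟨ΔT⟩ = D_PMD √L, compared against the bit slot (the PMD parameter below is an assumed value typical of modern fiber):

```python
import math

D_pmd = 0.1            # assumed PMD parameter, ps/sqrt(km)
L = 1000.0             # link length (km)
mean_dgd = D_pmd * math.sqrt(L)      # mean differential group delay (ps)

for bit_rate_gbps in (10, 40):
    bit_slot = 1000.0 / bit_rate_gbps          # bit slot (ps)
    frac = mean_dgd / bit_slot
    print(f"{bit_rate_gbps} Gb/s: bit slot {bit_slot:.0f} ps, <DGD> is {frac:.1%} of it")
```

At 10 Gb/s the mean DGD is a few percent of the 100-ps slot, but for the same fiber it exceeds 10% of the 25-ps slot at 40 Gb/s, which is why PMD compensation becomes necessary at the higher bit rate.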
The fourth generation of lightwave systems began in 1995, when lightwave systems employing amplifiers first became available commercially. Of course, laboratory demonstrations began as early as 1989. Many experiments used a recirculating fiber loop to demonstrate system feasibility, as it was not practical to use long lengths of fiber in a laboratory setting. Already in 1991, an experiment showed the possibility of data transmission over 21,000 km at 2.5 Gb/s, and over 14,300 km at 5 Gb/s, by using the recirculating-loop configuration. In a system trial carried out in 1995 by using actual submarine cables and repeaters, a 5.3-Gb/s signal was transmitted over 11,300 km with an amplifier spacing of 60 km. This system trial led to the deployment of a commercial transpacific cable (TPC-5) that began operating in 1996.
The bit rate of fourth-generation systems was extended to 10 Gb/s beginning in 1992. As early as 1995, a 10-Gb/s signal was transmitted over 6480 km with 90-km amplifier spacing. With a further increase in the distance, the SNR decreased below the value needed to maintain the BER below 10⁻⁹. One may think that the performance should improve by operating close to the zero-dispersion wavelength of the fiber. However, an experiment performed under such conditions achieved a distance of only 6000 km at 10 Gb/s even with 40-km amplifier spacing, and the situation became worse when the RZ modulation format was used. Starting in 1999, the single-channel bit rate was pushed toward 40 Gb/s in several experiments, and by 2002 such systems became available commercially. The design of 40-Gb/s lightwave systems requires the use of several new ideas, including the CRZ format, dispersion management with GVD-slope compensation, and distributed Raman amplification. Even then, the combined effects of higher-order dispersion, PMD, and SPM degrade the system performance considerably at a bit rate of 40 Gb/s.
2. Terrestrial Lightwave Systems
An important application of fiber-optic communication links is for enhancing the capacity of telecommunication networks worldwide. Indeed, it is this application that started the field of optical fiber communications in 1977 and has propelled it since then by demanding systems with higher and higher capacities. Here we focus on the status of commercial systems by considering the terrestrial and undersea systems separately.
After a successful Chicago field trial in 1977, terrestrial lightwave systems became available commercially beginning in 1980. The table below lists the operating characteristics of several terrestrial systems developed since then.
The first-generation systems operated near 0.85 μm and used multimode graded-index fibers as the transmission medium. As seen in the figure below, the BL product of such systems is limited to 2 (Gb/s)-km.
A commercial lightwave system (FT-3C) operating at 90 Mb/s with a repeater spacing of about 12 km realized a BL product of nearly 1 (Gb/s)-km; it is shown by a filled circle in the figure above. The operating wavelength moved to 1.3 μm in second-generation lightwave systems to take advantage of the low fiber losses and low dispersion near this wavelength. The BL product of 1.3-μm lightwave systems is limited to about 100 (Gb/s)-km when a multimode semiconductor laser is used inside the transmitter. In 1987, a commercial 1.3-μm lightwave system provided data transmission at 1.7 Gb/s with a repeater spacing of about 45 km. A filled circle in the figure above shows that this system operates quite close to the dispersion limit.
The third generation of lightwave systems became available commercially in 1991. Such systems operate near 1.55 μm at bit rates in excess of 2 Gb/s, typically at 2.488 Gb/s, corresponding to the OC-48 level of the SONET, or the STM-16 level of the SDH, specifications. The switch to the 1.55-μm wavelength helps to increase the loss-limited transmission distance to more than 100 km because of fiber losses of less than 0.25 dB/km in this wavelength region. However, the repeater spacing was limited to below 100 km because of the high GVD of standard telecommunication fibers. In fact, the deployment of third-generation lightwave systems was possible only after the development of distributed feedback (DFB) semiconductor lasers, which reduce the impact of fiber dispersion by reducing the source spectral width to below 100 MHz.
The fourth generation of lightwave systems appeared around 1996. Such systems operate in the 1.55-μm region at a bit rate as high as 40 Gb/s by using dispersion-shifted fibers in combination with optical amplifiers. However, more than 50 million kilometers of the standard telecommunication fiber is already installed in the worldwide telephone network. Economic reasons dictate that the fourth generation of lightwave systems make use of this existing base. Two approaches are being used to solve the dispersion problem. First, several dispersion-management schemes make it possible to extend the bit rate to 10 Gb/s while maintaining an amplifier spacing of up to 100 km. Second, several 10-Gb/s signals can be transmitted simultaneously by using the WDM technique. Moreover, if the WDM technique is combined with dispersion management, the total transmission distance can approach several thousand kilometers provided that fiber losses are compensated periodically by using optical amplifiers. Such WDM lightwave systems were deployed commercially worldwide beginning in 1996 and allowed a system capacity of 1.6 Tb/s by 2000 for the 160-channel commercial WDM systems.
The fifth generation of lightwave systems began to emerge around 2001. The bit rate of each channel in this generation of WDM systems is 40 Gb/s (corresponding to the STM-256 or OC-768 level). Several new techniques developed in recent years make it possible to transmit a 40-Gb/s optical signal over long distances. New dispersion-shifted fibers have been developed with smaller PMD levels. Their use in combination with tunable dispersion-compensating techniques can compensate the GVD for all channels simultaneously. The use of Raman amplification helps to reduce the noise and improves the SNR at the receiver. The use of a forward-error-correction (FEC) technique helps to increase the transmission distance by reducing the required SNR. The number of WDM channels can be increased by using the L and S bands located on the long- and short-wavelength sides of the conventional C band occupying the 1530-1570-nm spectral region. In a 2001 experiment, 77 channels, each operating at 42.7 Gb/s, were transmitted over 1200 km by using the C and L bands simultaneously, resulting in a 3-Tb/s capacity. In another 2001 experiment, the system capacity was extended to 10.2 Tb/s by transmitting 256 channels over 100 km at 42.7 Gb/s per channel using only the C and L bands, resulting in a spectral efficiency of 1.28 (b/s)/Hz. The bit rate was 42.7 Gb/s in both of these experiments because of the overhead associated with the FEC technique.
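The 42.7-Gb/s line rate follows directly from the FEC overhead; assuming the widely used RS(255,239) code (the experiments quote only the overhead, so the specific code is an assumption here), each 239 payload bytes expand to 255 coded bytes:

```python
payload_rate = 40e9                        # OC-768/STM-256 payload (bit/s)
line_rate = payload_rate * 255 / 239       # RS(255,239) expansion, ~6.7% overhead
print(f"line rate: {line_rate / 1e9:.2f} Gb/s")
```

The result, about 42.68 Gb/s, rounds to the 42.7 Gb/s quoted in both experiments.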
Starting in 2002, the research focus shifted toward advanced modulation formats in which information is coded using optical phase rather than amplitude of the carrier wave. This approach has led to considerable improvements in the spectral efficiency of WDM systems. In a 2007 experiment, 25.6-Tb/s transmission was realized over 240 km of optical fibers using 160 WDM channels that spanned both the C and L bands with 50-GHz channel spacing. Each channel contained two polarization-multiplexed 85.4-Gb/s signals coded with the DQPSK format, resulting in a spectral efficiency of 3.2 (b/s)/Hz. By 2010, transmission at a total bit rate of 69.1 Tb/s was demonstrated over 240 km of fiber using 432 WDM channels, each operating at 171 Gb/s with a 7% FEC overhead.
3. Undersea Lightwave Systems
Undersea or submarine transmission systems are used for intercontinental communications and are capable of providing a network spanning the whole earth. Reliability is of major concern for such systems as repairs are expensive. Generally, undersea systems are designed for a 25-year service life, with at most three failures during operation. The figure below shows the multitude of undersea systems deployed worldwide.
The table below lists several high-capacity undersea fiber-optic cable systems installed after the year 2000. Most of them transport multiple WDM channels, each operating at 10 Gb/s, and employ several fiber pairs within each cable to further enhance the system capacity to beyond 1 Tb/s.
The first undersea fiber-optic cable (TAT-8) was a second-generation system. It was installed in 1988 in the Atlantic Ocean, with a repeater spacing of up to 70 km, and transported a single channel at a bit rate of 280 Mb/s. The system design was on the conservative side, mainly to ensure reliability. The same technology was used for the first transpacific lightwave system (TPC-3), which became operational in 1989. By 1990 the third-generation lightwave systems had been developed. The TAT-9 submarine system used this technology in 1991; it was designed to operate near 1.55 μm at a bit rate of 560 Mb/s with a repeater spacing of about 80 km. The increasing traffic across the Atlantic Ocean led to the deployment of the TAT-10 and TAT-11 lightwave systems by 1993 with the same technology.
The advent of optical amplifiers prompted their use in the next generation of undersea systems. The TAT-12 cable, installed in 1995, employed optical amplifiers in place of optoelectronic regenerators and operated at a bit rate of 5.3 Gb/s with an amplifier spacing of about 50 km. The bit rate was slightly larger than the STM-32 bit rate of 5 Gb/s because of the overhead associated with the forward-error-correction technique. The design of such lightwave systems becomes quite complex because of the cumulative effects of fiber dispersion and nonlinearity, which must be controlled over long distances. The transmitter power and the dispersion profile along the link must be optimized to combat such effects.
A second category of undersea lightwave systems requires repeaterless transmission over several hundred kilometers. Such systems are used for interisland communication or for looping a shoreline such that the signal is regenerated on shore periodically after a few hundred kilometers of undersea transmission. The dispersive and nonlinear effects are of less concern for such systems than for transoceanic lightwave systems, but fiber losses become a major issue. The reason is easily appreciated by noting that the cable loss exceeds 100 dB over a distance of 500 km even under the best operating conditions. In the 1990s, several laboratory experiments demonstrated repeaterless transmission at 2.5 Gb/s over more than 500 km by using two in-line amplifiers that were pumped remotely from the transmitter and receiver ends with high-power pump lasers. Another amplifier at the transmitter boosted the launched power to close to 100 mW.
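The loss-budget arithmetic behind the 100-dB figure, and the reason remote pumping is unavoidable, takes only a few lines (the 0.2-dB/km best-case loss is an assumed value consistent with the text):

```python
loss_per_km = 0.2                      # assumed best-case cable loss (dB/km)
span_km = 500.0
total_loss_db = loss_per_km * span_km  # 100 dB over 500 km

launch_w = 100e-3                      # ~100-mW boosted launch power
received_w = launch_w * 10 ** (-total_loss_db / 10)
print(f"total loss: {total_loss_db:.0f} dB")
print(f"received power without in-line gain: {received_w:.1e} W")
```

Even a 100-mW launch arrives as only ~10 pW after 500 km, far below any receiver sensitivity, hence the remotely pumped in-line amplifiers.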
Such high input powers exceed the threshold level for stimulated Brillouin scattering (SBS). The suppression of SBS is often realized by modulating the phase of the optical carrier such that the carrier linewidth is broadened to 200 MHz or more from its initial value of <10 MHz. Directly modulated DFB lasers can also be used for this purpose. In a 1996 experiment, a 2.5-Gb/s signal was transmitted over 465 km by direct modulation of a DFB laser. Chirping of the modulated signal broadened the spectrum enough that an external phase modulator was not required, provided that the launched power was kept below 100 mW. The bit rate of repeaterless undersea systems can be increased to 10 Gb/s by employing the same techniques used at 2.5 Gb/s. In a 1996 experiment, a 10-Gb/s signal was transmitted over 442 km by using two remotely pumped in-line amplifiers. Two external modulators were used, one for SBS suppression and another for signal generation. In a 1998 experiment, a 40-Gb/s signal was transmitted over 240 km using the RZ format and an alternating polarization format.
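How much the carrier linewidth matters can be estimated from the standard approximate SBS threshold formula, P_th ≈ 21 Aeff/(gB Leff) × (1 + Δν_source/ΔνB); every parameter value below is a typical assumed number, not one taken from the cited experiments:

```python
g_B = 5e-11            # Brillouin gain coefficient (m/W), typical assumed value
A_eff = 80e-12         # effective core area (m^2), standard single-mode fiber
L_eff = 21.7e3         # effective length (m) for a long, low-loss span
dnu_B = 20e6           # Brillouin gain bandwidth (Hz)

def sbs_threshold_mw(source_linewidth_hz):
    """Approximate SBS threshold (mW) for a given source linewidth."""
    return 1e3 * 21 * A_eff / (g_B * L_eff) * (1 + source_linewidth_hz / dnu_B)

print(f"narrow-line carrier (<10 MHz): ~{sbs_threshold_mw(0):.1f} mW")
print(f"broadened to 200 MHz:          ~{sbs_threshold_mw(200e6):.0f} mW")
```

A narrow-line carrier reaches the SBS threshold at only a few milliwatts, so the ~100-mW launch powers mentioned above are far beyond it; broadening the linewidth to 200 MHz raises the threshold by roughly an order of magnitude, which is the purpose of the phase modulation (or laser chirp) described here.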
The use of the WDM technique in combination with optical amplifiers, dispersion management, and error correction has revolutionized the design of submarine fiber-optic systems. In 1998, a submarine cable known as AC-1 was deployed across the Atlantic Ocean with a capacity of 80 Gb/s using the WDM technology. An identically designed system (PC-1) crossed the Pacific Ocean. The use of dense WDM, in combination with multiple fiber pairs per cable, resulted in systems with large capacities. By 2001, several systems with a capacity of >1 Tb/s became operational across the Atlantic Ocean. These systems employ a ring configuration and cross the Atlantic Ocean twice to ensure fault tolerance. The VSNL Transatlantic submarine system can achieve a total capacity of 2.56 Tb/s and spans a total distance of 13,000 km. Another system, known as Apollo, is capable of carrying traffic at speeds of up to 3.2 Tb/s by transmitting 80 channels (each operating at 10 Gb/s) over 4 fiber pairs.
The pace slowed down after 2001 with the bursting of the "telecom bubble." However, the development of undersea systems has continued within industrial laboratories. In a 2003 experiment, transmission over 9400 km of 40 channels (each operating at 42.7 Gb/s with a 70-GHz channel spacing) was realized using phase modulation (with the DPSK format), FEC coding, and distributed Raman amplification. By 2009, another experiment transmitted 72 channels, each operating at 100 Gb/s, over 10,000 km using the QPSK modulation format with digital processing in a coherent receiver. On the commercial side, a field trial was carried out as early as 2004 in which 96 channels at 10 Gb/s were transmitted successfully over a distance of 13,000 km. As seen in the table above, several new transoceanic systems have been deployed worldwide in recent years. A few others, such as the Europe-India Gateway, were in various stages of completion in 2010.