
RATE-ADAPTABLE OPTICAL TRANSMISSION AND ELASTIC OPTICAL NETWORKS

This is a continuation from the previous tutorial - Optical performance monitoring for fiber-optic communication networks

 

 

1. INTRODUCTION

Fiber-optic systems are now present in multiple segments of the network, as depicted in Figure 1. These segments are defined by the different requirements and functionalities associated with each of them.

This leads to a variety of technology choices; for instance, the core segment relies on coherent transponder technology, while the access segment, with passive optical networks \(\text{(PONs)}\), uses low-cost transponders with noncoherent detection.

Though different technologies are used, each segment faces the need for higher capacity in order to support traffic growth as well as changing customer usage in the new era of cloud networking and connected devices. Due to the limited available bandwidth in an optical fiber, a higher data rate translates into a need for higher spectral efficiency.

This can drastically reduce the reach of optical signals and requires the use of more optoelectronic \(\text{(OEO)}\) regeneration resources. As a result, traditional optical networks with a fixed transmission rate do not scale well and are, therefore, not economically viable. To meet these challenges, interest has grown in scalable, reconfigurable, and sustainable solutions for future optical networks, known as elastic optical networks \(\text{(EONs)}\). This elastic concept is applicable to the various segments, and some examples of applications are shown later in this chapter.

 

 

FIGURE 1.  Overview of optical network segments.

 

 

FIGURE 2.  (a) Traditional fixed networks with a 50 GHz frequency grid; (b) elastic networks with flex-grid using variable channel spacing and symbol rate.

 

We now focus on core networks, unless specified otherwise. Traditional fixed networks divide the spectrum into a set of parallel channels with a fixed channel spacing of 50 or 100 GHz. In addition, the modulation format and symbol rate (or equivalently the spectrum occupancy) are fixed for a selected generation and are generally associated with a data rate such as legacy 10 Gb/s or current 100 Gb/s.

In the latter case, a 100 Gb/s transmission classically uses a polarization division multiplexed \(\text{(PDM)}\) \(\text{QPSK}\) modulation with a baud rate of 28 Gbaud and a hard-decision forward error correction \(\text{(FEC)}\) with 7% overhead.

In contrast, \(\text{EONs}\) are capable of tuning one or several transmission parameters to adjust their data rate and their reliability to the connection characteristics, such as lightpath distance, transmission impairments, and capacity demand.

As shown in Figure 2, one of the most attractive handles for elastic networking is the possibility to use a flexible frequency spacing between channels, also called flex-grid, together with a tunable symbol rate.

This allows the spectrum usage to be optimized and the spectral efficiency to be increased. Other options of key interest are the ability to adjust the modulation format and/or \(\text{FEC}\) overhead to (dynamically) match the data rate to the transmission quality.

Needless to say, elasticity is empowered by software-defined devices such as transponders, where the advent of coherent detection with digital signal processing and high-speed digital-to-analog converter \(\text{(DAC)}\) and analog-to-digital converter \(\text{(ADC)}\) elements has drastically enhanced programmability.

In an \(\text{EON}\), the software-defined devices are under the supervision of a management tool with a local controller to adapt the different elements of the network in a harmonized fashion after, for instance, the selection of flex-grid and modulation format. The local controller may also be run by a control plane for more automated processes.

Such an elasticity concept is very common and successful in other telecommunication industries, such as wireless and copper-based fixed access networks, but is a newcomer in optical transport.

To summarize, elasticity allows both a more dynamic resource management and operation as close as possible to physical limits; the benefits are one or a combination of the following: \(\text{(i)}\) increased capacity, \(\text{(ii)}\) cost reduction, \(\text{(iii)}\) reduced power consumption, and \(\text{(iv)}\) enhanced scalability. These benefits are assessed in a number of scenarios later in this chapter.

 

History of Elastic Optical Networks

Optical networks have a fourfold heterogeneity in terms of \(\text{(i)}\) connection length, from a few hundred to several thousands of kilometers, \(\text{(ii)}\) deployed infrastructures, with the coexistence of terminal or transmission equipment from different generations (different data rates, generations and/or types of fiber, amplifier type or technology), \(\text{(iii)}\) capacity demand, from tenths of a Gb/s to tens of Gb/s, and \(\text{(iv)}\) connection duration, from hour-long (or shorter) to quasi-permanent connections.

This has generated interest in rate-adaptive networks.

The connection length and capacity demand heterogeneities were first addressed by mixed-line-rate networks, in which low-bit-rate transponders are provisioned for the longest lightpaths while higher-bit-rate transponders are used for shorter distances.

However, managing multiple types of devices (one for each data rate) is cumbersome for the operators. Committed transponders are provisioned for given traffic and network conditions; hence, mixed-line-rate networks suffer from a lack of flexibility and scalability when traffic and connections evolve.

To provide a competitive solution, \(\text{EONs}\) were introduced in 2008, where a single transponder device is able to deliver multiple bit rates according to connection needs. This is also known as a universal transponder. Initially, the transponder was made tunable by delivering “just-enough” spectrum to each connection demand, with the ability to increase or decrease the bit rate dynamically.

This concept was based on \(\text{OFDM}\) technology and the network architecture called \(\text{SLICE}\). Next, the ability to select between different modulation formats has been shown to be well coupled with a distance adaptation according to the physical impairments.

Indeed, higher-order modulations achieve high data rate but require a good signal-to-noise ratio \(\text{(SNR)}\); hence, \(\text{16QAM}\) and even more \(\text{64QAM}\) are best suited for short distances. In addition, the FEC overhead may also be dynamically adjusted to match the channel quality and to avoid the over-provisioning of margins. The larger the \(\text{FEC}\) overhead, the better the transmission reliability. 

However, when an elastic transponder is not configured for short-reach, high-capacity operation, it does not run at its maximum data rate, and part of the available capacity may appear to be wasted.

To address this issue, the concept of multiflow transponders (also known as sliceable transponders) was introduced in 2011. The aim is to transmit from one source node to one or multiple destination nodes by having a (dynamic) sharing of bandwidth. It is worth noting that different modulations, \(\text{FEC}\) overheads, and baud rates may be selected for each of the flows.

This multiflow concept is especially relevant for very high bit rate transponders, such as 400G or \(1T\), in order to benefit from cost savings. Indeed, an operator's network roll-out will most likely use these devices at a low bit rate with multiple destinations during the first years of deployment, before upgrading them to a higher bit rate, and hence a lower number of destinations, as traffic grows.

In the remainder of this chapter, we discuss the changes in technology option or design to support the elastic concept in an optical network. Next, we introduce in detail some practical aspects of the elastic transponder and aggregation devices and show a first green application of elasticity.

Leveraging dimensioning and resource allocation tools as well as techno-economic studies, we dedicate the next section to the foreseen short- to medium-term opportunities in core networks before opening on longer-term opportunities with elastic burst optical networks.

 

 

2. KEY BUILDING BLOCKS

As previously mentioned, \(\text{EONs}\) would bring some benefits in terms of cost, capacity, and energy consumption, but making them real requires evolutions of hardware, software as well as control plane, as illustrated by Figure 3. 

The hardware elements contain three major blocks – depicted from the optical layer to the \(\text{IP}\) router layer: \(\text{(i)}\) optical cross-connects \(\text{(OXCs)}\) with the optional capability to handle flexible channel spacing and spectrum allocation so as to support flex-grid scenarios, \(\text{(ii)}\) transponders for which the choice of an appropriate modulation, symbol rate, and \(\text{FEC}\) overhead is critical to achieve a good efficiency, \(\text{(iii)}\) elastic aggregation interfaces that have the ability to deliver a variable bit rate by switching off lanes and/or by sending a variable bit rate per lane; this is in complement to existing grooming and aggregation features using digital capabilities of the optical transport network \(\text{(OTN)}\).

Indeed, the \(\text{OTN}\) comprises an optical layer and a digital layer: the optical channel payload unit \(\text{(OPU)}\), the optical channel digital unit \(\text{(ODU)}\), which encapsulates the data, and the optical channel transport unit \(\text{(OTU)}\), corresponding to the resulting line rate. \(\text{OTN}\) may encompass flexible functionalities such as switching, multiplexing, and inverse multiplexing. The \(\text{OTN}\) layers are depicted in Figure 4.

 

 

FIGURE 3.  High-level architecture of optical networks with hardware and software challenges.

 

 

FIGURE 4.  OTN hierarchy. The payload in the OTUk for \(k=\{1,2,3,\text{or}\;4\}\) is approximately 2.5, 10, 40, and 100 Gb/s respectively.

The software modules also need to evolve as the hardware elements become more flexible. In particular, during the planning phase it is of high importance to understand the impairments along the candidate lightpaths in order to be able to select the right transmission parameters (e.g., data rate, bandwidth, path characteristics) that allow the digital client to be carried with a satisfactory bit error ratio \(\text{(BER)}\).

This allows the optimization of the elastic system design on a point-to-point lightpath. At the network level, the development of novel impairment-aware algorithms for routing that include rate selection and spectrum allocation is essential to estimate the number of optoelectronic interfaces to be provisioned and optimize the cost of the whole network deployment.

In addition, online optimization can be seen as an extension of planning tools with the new capability of handling dynamic management of the network connections.

In this context of dynamic behavior (e.g., accommodating new connection requests or traffic variations), the control plane would also need some extensions.

Specific implementation issues and challenges of the building blocks are further described in the following sections. 

 

Optical Cross-Connect 

\(\text{OXCs}\), made essentially of optical amplifiers and interconnected wavelength-selective switches \(\text{(WSS)}\), are capable of multiplexing and demultiplexing signals in both the wavelength and space domains. The incoming signals from direction \(d\) at wavelength \(w\) can be switched to direction \(d\)′ potentially at another wavelength \(w\)′. \(\text{OXCs}\) are important in a mesh network to improve its efficiency and capacity through better grooming capabilities, improved network reliability, and scalability.

Deployed \(\text{WSS}\) are switching devices that filter the spectrum based on a 50 or 100 GHz \(\text{ITU}\) grid (hence creating penalties if the signal is larger than allowed) while the new \(\text{WSS}\) generation (already commercially available) supports a dynamic bandwidth allocation.

The spectrum can now be switched in steps of about 12.5 GHz. The predominant technology supporting this fine bandwidth granularity is liquid crystal on silicon \(\text{(LCoS)}\).

The accommodation of flex-grid technology (for standard flexible grid definition) with the new \(\text{WSS}\) generation allows the spectrum slots to be concatenated so as to create much larger spectrum chunks with no filtering within it. It thus offers a good spectral efficiency if the channels are compactly packed.

In addition, \(\text{EONs}\) relying on the flex-grid handle may need a mechanism, also known as spectrum defragmentation, which is capable of removing unused small-frequency blocks by moving the central frequency of established connections. This permits the generation of a spectrum block that is sufficiently large to accommodate a new demand.

 

Elastic Transponder

Transponders are capable of sending the optical signals on the fiber media channel as well as performing the signal processing both at the emission and reception. When a transponder changes one or several of its transmission parameters, the robustness to channel impairments also changes. Therefore, we present hereafter a few examples of trade-offs between data rate and optical reach. 

Modulation Formats   It is well known that the distance to be covered is highly dependent upon the selected modulation. This is because higher-order formats reduce the minimum distance between constellation points, which reduces the resilience to channel impairments.

For instance, going from a \(\text{PDM}\)-\(\text{QPSK}\) up to a \(\text{PDM}\)-\(16\text{QAM}\) transmission doubles the data rate at the cost of an optical reach divided by a factor of 5. To alleviate this steep trade-off, the flexible transponder may support additional modulation formats and in particular more complex formats such as recently investigated four-dimensional coded modulations making use of set partitioning \(\text{(SP)}\).

Set partitioning comes from Ungerboeck, where the idea is to partition the constellation points into smaller subsets that increase the minimum Euclidean distance with respect to the original constellation, at the cost of a decrease in the resulting data rate.

This type of coded modulation exhibits the advantage of reusing some \(\text{DSP}\) algorithms designed for classical \(\text{8QAM}\), \(16\text{QAM}\), and other formats: for instance, 8-\(\text{SP}\)-\(16\text{QAM}\) utilizes half the symbols of the \(16\text{QAM}\) constellation, thus reducing the data rate by 25%; it is roughly equivalent to \(\text{8QAM}\) in terms of performance and data rate while requiring the same \(\text{DSP}\) as \(\text{16QAM}\).

Alternatively, time-domain hybrid-\(\text{QAM}\) offers a very fine granularity of spectral efficiency by construction. Indeed, the principle is to split the frame into multiple time slots, each slot being filled from the set of \(x\)-\(\text{QAM}\) modulations so as to allocate a variable portion of low- and high-order QAM (e.g., 2 out of 3 slots filled with \(\text{32QAM}\) and 1 slot filled with \(\text{64QAM}\) results in a spectral efficiency of 5.33 bit/symbol).
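As a quick illustration of this granularity, the short sketch below (a hypothetical helper, not from the chapter) computes the spectral efficiency of a hybrid-QAM frame from its per-slot modulation orders.

```python
# Minimal sketch: bits per symbol of a time-domain hybrid-QAM frame,
# obtained by averaging log2(M) over the x-QAM order used in each time slot.
from math import log2

def hybrid_qam_bits_per_symbol(slot_orders):
    """slot_orders: list of the x-QAM order used in each time slot of the frame."""
    return sum(log2(m) for m in slot_orders) / len(slot_orders)

# Example from the text: 2 slots of 32QAM and 1 slot of 64QAM -> 5.33 bit/symbol
print(round(hybrid_qam_bits_per_symbol([32, 32, 64]), 2))
```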

Figure 5 presents a set of candidate modulation formats with a fine incremental data rate. It can be seen that the optical reach progressively decreases as the data rate grows; a transponder with this set of modulations is therefore suitable for a wide variety of field conditions.

 

 

FIGURE 5.  Example of trade-off between data rate and distance for standard modulations with dual polarization.

 

 

FIGURE 6.  Reach estimation for different symbol rates in a 50 \(\text{GHz}\) grid.

 

Symbol Rate   The variation in symbol rate (equivalently called baud rate) changes the spectrum occupancy. Yet, in a nondispersion-managed system, varying the symbol rate of a \(\text{PDM}\)-\(\text{QPSK}\) transmission within a fixed 50 GHz channel spacing only weakly changes the optical reach.

This is illustrated in Figure 6, where the reach is estimated at optimal power based on split-step Fourier method numerical simulations. It can be seen that a lower symbol rate is more tolerant to amplified spontaneous emission \(\text{(ASE)}\) noise, while a higher symbol rate shows less nonlinear degradation; interestingly, the latter almost cancels out the variation in sensitivity to \(\text{ASE}\) noise.

Forward Error Correction (FEC)   The \(\text{FEC}\) adds parity bits to the initial information bits in order to improve the reliability of the transmission, but at the cost of a lower effective throughput.

This improvement is also called the \(\text{FEC}\) coding gain, which is equivalent to an increase in \(\text{SNR}\) for achieving the same target \(\text{BER}\) performance.

 

 

FIGURE 7.  Impact of the number of decoding iterations on the optical reach.

 

Playing on the coding gain, either by varying the number of decoding iterations in a soft-decision \(\text{FEC}\) or directly by adapting the overhead of the \(\text{FEC}\), results in the ability to trade-off capacity versus distance and power-consumption. 

As evidenced in Figure 7, the optical reach depends on the number of \(\text{LDPC}\) decoding iterations. This was computed for an \(\text{LDPC}\) (23,125, 20,000) concatenated with a Reed–Solomon of 7% overhead (the hard \(\text{FEC}\) defined in the \(\text{OTN}\) standard).

The optical reach was obtained for \(\text{PDM}\)-\(\text{QPSK}\) transmission at 100 Gb/s with a 16 \(\text{dB}\) back-to-back \(\text{OSNR}\) sensitivity (in 0.1 nm), over 100 km-long spans with 22 \(\text{dB}\) loss, separated by optical amplifiers with a 6 \(\text{dB}\) noise figure.
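The reach trend of Figure 7 can be related to a simple OSNR budget. The sketch below (not from the chapter) estimates how many of the quoted 100 km spans can be crossed before the OSNR falls below the 16 dB sensitivity; the 0 dBm per-channel launch power, the absence of margins and penalties, and the standard −58 dBm ASE reference power in 12.5 GHz are assumptions for illustration only.

```python
# Rough OSNR link budget: OSNR(dB, 0.1 nm) ~= P_ch + 58 - span_loss - NF - 10*log10(N_spans)
import math

P_CH_DBM = 0.0       # assumed per-channel launch power (illustrative)
SPAN_LOSS_DB = 22.0  # from the text
NF_DB = 6.0          # amplifier noise figure, from the text
OSNR_SENS_DB = 16.0  # required OSNR in 0.1 nm, from the text

def osnr_after(n_spans):
    """OSNR (dB, in 0.1 nm) after n identical amplified spans, no margins."""
    return P_CH_DBM + 58.0 - SPAN_LOSS_DB - NF_DB - 10.0 * math.log10(n_spans)

max_spans = max(n for n in range(1, 200) if osnr_after(n) >= OSNR_SENS_DB)
print(max_spans, "spans ->", 100 * max_spans, "km, OSNR =", round(osnr_after(max_spans), 1), "dB")
```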

 

Elastic Aggregation

Aggregation interconnects several ports of an \(\text{IP}\) router to a \(\text{WDM}\) fiber pair. Today’s metro/core optical networks are optimized for (relatively) static conditions; in the event of client traffic changes (e.g., bandwidth, new demand) unforeseen in the planning process, transport \(\text{WDM}\) signals may operate well below their maximum capacity in some portions of the network, while demonstrating insufficient capacity in others. With the introduction of elastic \(\text{WDM}\) transmission with flexible data rates, the transport side can become more responsive to data rate requirements from the client side, thus bringing additional opportunities.

Two major setups of the \(\text{IP}\) core network on top of the optical \(\text{WDM}\) transport can be distinguished: 

• \(\text{IP}\) over \(\text{WDM}\) – a two-layer network directly interconnecting the \(\text{IP}\) layer (core routers) with the \(\text{WDM}\) layer. The flexible switching/grooming capabilities are in the \(\text{IP}\) layer while the \(\text{WDM}\) signals are just carried within an \(\text{OTUk}\) \((k=1,2,3,\;\text{or}\;4)\). At an intermediate node, the transit traffic that needs subwavelength flexibility is processed at the \(\text{IP}\) layer.

• \(\text{IP}\) over \(\text{OTN}\) over \(\text{WDM}\) – a three-layer network relying on an additional intermediate \(\text{OTN}\) layer. The aim is to reduce the number of expensive router ports and to offload the transit traffic from the \(\text{IP}\) routers. 

The \(\text{OTN}\) layer possesses digital capabilities such as \(\text{ODU}\) multiplexing, which can be seen as a first level of adaptation to the client data rate. The goal of \(\text{ODU}\) multiplexing is to aggregate lower-order \(\text{ODUs}\) into a higher-order \(\text{ODU}\).

In addition, more flexibility has been introduced in \(\text{ODU}\) by varying the size of the aggregate packet flow mapped into a resizable \(\text{ODUflex}\) \(\text{(GFP)}\), whose goal is to fill the gaps in the \(\text{ODU}\) hierarchy. However, the \(\text{OTU}\) part is still not specified to support flexibility. Thus, \(\text{ODUflex}\) has to be carried within an \(\text{OTUk}(k=1\cdots4)\), which means that the \(\text{WDM}\) transport bit rate is still not adaptable.

In both scenarios, the incoming traffic from the client interface is sent in a parallel fashion over \(a\) lanes transporting \(b\;\text{Gb/s}\) each, so as to reach a total bit rate of \(a\times b\;\text{Gb/s}\). As an illustrative example, the standard \(\text{100GbE}\) signal defines a transmission of 4 lanes of 25 \(\text{Gb/s}\) each.

Therefore, from a fixed input bit rate, the elastic aggregation is able to deliver a variable output bit rate. To this end, two candidate options are available, which can possibly be used in combination (a short toy illustration follows the list below).

  • The number of active lanes: lanes are able to switch between on and off states. This operation should be done without losing any data.
  • The bit rate per lane: lanes are able to vary the bit rate at a given granularity (e.g., 1 \(\text{Gb/s)}\). To this end, an electric module should process the input flows to generate flexible \(\text{OTU}\) frames.
  • The combination of the two previous modes – this offers the most flexible and promising approach, but also results in the highest complexity.
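As a toy numerical illustration of these handles (the function name is hypothetical, not from the chapter), the output bit rate is simply the sum of the rates of the active lanes, so it can be lowered either by switching lanes off or by reducing the per-lane rate.

```python
# Elastic aggregation toy model: total output rate from per-lane rates and an on/off mask.
def aggregate_rate(lane_rates_gbps, active):
    """Sum the rates of the lanes that are switched on."""
    return sum(r for r, on in zip(lane_rates_gbps, active) if on)

# 100GbE-like client: 4 lanes of 25 Gb/s each
print(aggregate_rate([25, 25, 25, 25], [True] * 4))                 # 100 Gb/s
# Handle 1: switch one lane off
print(aggregate_rate([25, 25, 25, 25], [True, True, True, False]))  # 75 Gb/s
# Handle 2: keep all lanes on, lower the per-lane rate (1 Gb/s granularity)
print(aggregate_rate([18, 18, 18, 18], [True] * 4))                 # 72 Gb/s
```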

 

Performance Prediction 

Performance predictions are of utmost importance in elastic networks for routing and management systems in order to optimize the overall network functionality. For nondispersion-managed systems, a system model has been presented and validated both numerically and experimentally for coherent optical links, in which nonlinear Kerr effects and linear amplified spontaneous emission \(\text{(ASE)}\) noise are both well approximated by an additive white Gaussian noise \(\text{(AWGN)}\) model.

The combination of this Gaussian noise modeling of signal distortions with coherent detection and possibly matched transmitter and receiver filters means that the system impact of such distortions is quite accurately captured by the total noise variance, or more precisely by the signal-to-noise ratio.

Generic Multi-Impairments BER Prediction Models   As a result, some simple and accurate expressions to estimate the \(\text{BER}\) before error correction (for \(\text{BER}\) below \(10^{-2}\)) can be derived regardless of the modulation format:

\[\tag{1}\text{BER}\approx\frac{x}{2}\,\text{erfc}\left(\frac{d_\text{min}}{2\sqrt{\bar{P}}}\sqrt{p\,\text{SNR}}\right)\]

where \(x\) is a scaling factor representing the modulation constellation, the bit mapping (such as Gray coding), and the average number of bit errors per symbol error, and \(d_\text{min}\) is the minimum Euclidean distance between the symbols of the noiseless constellation.

Finally, \(\text{SNR}\) is the electrical signal-to-noise ratio, equal to the ratio of the average channel power per polarization \(\bar{P}/p\) (with \(p\) modulated polarizations) to the total electrical noise variance per received polarization. \(\text{SNR}\) can be related to the optical signal-to-noise ratio (OSNR) expressed within a reference bandwidth \(B_\text{ref}\) (typically 12.5 GHz) by:

\[\tag{2}\text{SNR}=\frac{1}{p}\frac{B_\text{ref}}{B_\text{elec}}\text{OSNR}_{B_\text{ref}}\]

where \(B_\text{elec}\) is the receiver electrical noise-equivalent bandwidth (typically half the symbol-rate if Nyquist matched filters are used), such that (1) can be written in a generic way, also applicable for legacy intensity-modulation direct detection \(\text{(IMDD)}\) systems

\[\tag{3}\begin{align}\text{BER}\approx\frac{x}{2}\,\text{erfc}\left(\sqrt{\eta\,\text{OSNR}_{B_\text{ref}}}\right)\approx\frac{x}{2}\,\text{erfc}\left(\frac{Q'}{\sqrt{2}}\sqrt{\frac{\text{OSNR}_{B_\text{ref}}\,B_\text{ref}}{B_\text{elec}}}\right)\\\text{with}\;Q'=\frac{d_\text{min}}{\sqrt{2\bar{P}}},\quad Q'_\text{PDM}=\frac{Q'_\text{single polarization}}{\sqrt{2}},\quad\text{and}\;\eta=\frac{Q'^2B_\text{ref}}{2B_\text{elec}}\end{align}\]

\(Q'\) can be seen as a geometrical eye aperture, a signature of the modulation format. For instance, \(Q'=(\sqrt{P_1}-\sqrt{P_0})/\sqrt{2\bar{P}}\) for legacy \(\text{IMDD}\) formats, while for \(\text{PDM}\)-\(\text{QPSK}\) with coherent detection at baud rate \(R=28\) Gbaud and Nyquist matched filters, \(Q'=1/\sqrt{2}\) and \(B_\text{elec}=R/2\), leading to \(\eta=0.22\), in line with experiments.

Equation 3 can be generalized to the case of a nonideal transceiver and of signal impairments over a typical transmission link by extending the sources of Gaussian noise at play, leading to a total equivalent OSNR: transmitter imperfections can be captured by an \(\text{SNR}_\text{TRx}\) term, the amplified spontaneous emission noise of the optical amplifiers by an \(\text{OSNR}_\text{ASE}\) term, the interplay of the nonlinear Kerr effect and group-velocity dispersion by an \(\text{SNR}_\text{NL}\) term, and in-band and out-of-band crosstalk due to the nonideal rejection of other signals throughout the optical network, or distortions stemming from signal filtering, by an additional \(\text{SNR}_\text{X}\) term, such that Equation 3 becomes:

\[\tag{4}\text{BER}\approx\frac{x}{2}\,\text{erfc}\left(\sqrt{\frac{\eta}{\frac{1}{\text{OSNR}_\text{ASE}}+\frac{1}{\text{SNR}_\text{TRx}}+\frac{1}{\text{SNR}_\text{NL}}+\frac{1}{\text{SNR}_\text{X}}}}\right)\]

For 28 Gbaud \(\text{PDM}\)-\(\text{QPSK}\), an \(\text{SNR}_\text{TRx}\) equal to 23.5 \(\text{dB}\) within 0.1 nm, associated with an effective \(\eta=0.2\), yielded a very good fit with experimental data. In the context of an \(\text{EON}\), \(\eta\) changes from one format/baud rate to another, as do the \(\text{SNR}\) terms.
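A minimal sketch of Equation (4) is given below, assuming \(x=1\) for Gray-mapped PDM-QPSK (the chapter keeps \(x\) generic) and using purely illustrative values for the ASE, nonlinear, and crosstalk terms; only the 23.5 dB \(\text{SNR}_\text{TRx}\) and the effective \(\eta=0.2\) come from the text.

```python
# Multi-impairment pre-FEC BER prediction per Eq. (4): inverse SNR contributions add up.
import math
from scipy.special import erfc

def db_to_lin(x_db):
    return 10.0 ** (x_db / 10.0)

def ber_multi_impairment(osnr_ase_db, snr_trx_db, snr_nl_db, snr_x_db, eta=0.2, x=1.0):
    """Pre-FEC BER from Eq. (4); x = 1 is an assumption for Gray-mapped PDM-QPSK."""
    inv_total = sum(1.0 / db_to_lin(t) for t in (osnr_ase_db, snr_trx_db, snr_nl_db, snr_x_db))
    return 0.5 * x * erfc(math.sqrt(eta / inv_total))

# SNR_TRx = 23.5 dB and eta = 0.2 from the text; the other terms are illustrative only.
print(ber_multi_impairment(osnr_ase_db=17.0, snr_trx_db=23.5, snr_nl_db=22.0, snr_x_db=30.0))
```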

Generic Nonlinear Models for Elastic Optical Networks   Besides the amplifier noise, which is straightforward to model, the nonlinear noise is usually the second dominant effect that limits system reach. Its modeling is considerably simplified by the use of high modulation rates, complex \(\text{2D}\)-\(\text{4D}\) formats, and the absence of inline dispersion management.

The resulting signal distortions can be considered as an additive Gaussian noise whose variance, normalized to the received signal power, is proportional to the square of the fiber input power per channel (thus leading to simple scaling rules and tools to predict penalties or to set powers). Hence, perturbative propagation theories of the nonlinear Schrödinger equation, and even simplified closed-form expressions of the nonlinear noise variance, now appear quite predictive, in contrast to past 10 Gb/s \(\text{IMDD}\) systems.

Most models accounting for the Kerr effect stem from perturbative approaches and share a few assumptions: each span is considered as a source of additive Gaussian noise, with contributions stemming from intra-channel and inter-channel nonlinear effects. This enables the computation of an end-to-end total noise variance (or power spectral density), or the separation of contributions from the different spans or channels, depending on the needs.

First Example: Computing Nonlinear Noise in a Cumulative Way   The span-to-span accumulation of nonlinear noise can be written as follows: considering that the total noise \(n\) is the sum of the contributions \(n_k\) of each span \(k\), the total noise variance after \(N\) spans becomes:

\[\tag{5}\frac{\bar{P}}{\text{SNR}_\text{NL}}=\text{var}(n)=\text{var}\left(\sum^N_{k=1}n_k\right)=\sum^N_{k=1}\text{var}(n_k)+2\sum^N_{k=1}\sum^{k-1}_{k'=1}\text{Re}\left(\text{cov}(n_k,n_{k'})\right)\]

Fortunately, it has been shown both numerically and experimentally  that the covariance terms can be neglected in \(\text{WDM}\) dispersion unmanaged systems, such that the total nonlinear noise variance can be accurately modeled by a sum of single-span variances. Without loss of generality,  this single-span variance can be written (at least for inter-channel nonlinearities) as:

\[\tag{6}\text{var}(n_k)\propto\bar{P}_\text{RX}\,P_{\text{in},k}^2\,\gamma_k^2\,f(D_{\text{in},k},\text{format},\text{fiber type})\]

with \(P_{\text{in},k}\) the span input power per channel, \(\gamma_k\) the span nonlinear coefficient, and \(f\) a function of the modulation format, the fiber type (local dispersion, attenuation, length), and \(D_{\text{in},k}\), the cumulated dispersion at the input of span \(k\). It is also possible to decouple this equation into summed contributions from different channels in a pump-probe approach, considering that the noise variance contribution stemming from a neighbouring

 

 

FIGURE 8.  Nonlinear noise variance generated on a single span as a function of the chromatic dispersion at span input. Simulations of nine 50 GHz spaced 100 Gbit/s \(\text{PDM}\)-\(\text{QPSK}\) channels.

 

channel spaced by \(\Delta f\) from the impaired channel is proportional to \(1/\Delta f\) with good accuracy.

A typical evolution of \(f\) with \(D_\text{in}\) is depicted in Figure 8, after numerical simulation of the propagation of nine 50 GHz-spaced 112 Gb/s \(\text{NRZ}\)-\(\text{PDM}\)-\(\text{QPSK}\) channels over 100 km-long fiber. \(f\) is an increasing function of the absolute value of input cumulated dispersion, with a rather linear evolution (in \(dB\times dB\) scale) for dispersion values up to 5–10 ns/nm and a saturation for higher dispersion values, leading to a plateau roughly 6 \(\text{dB}\) higher than for zero input dispersion.

Such an evolution can be correlated to the increase of signal peak-to-average power ratio with input cumulated dispersion. This generic approach has been shown accurate for single-fiber type as well as mixed-fiber type terrestrial and submarine systems for various modulation formats \(\text{(BPSK, QPSK, 16QAM)}\), provided an experimental determination of the function \(f\).

Besides, it explains the impact of fiber ordering in a mixed-fiber-type system. The quasi-linear dependence of \(f\) (with positive slope \(\varepsilon\;dB/dB\)) in logarithmic scale for low cumulated dispersions captures the supra-linear evolution of the variance with the number of spans \(N\) (proportional to \(N^{1+\varepsilon}\)) observed for terrestrial systems, while for very high input cumulated dispersion, the saturation of \(f\) leads to an effective \(\varepsilon=0\) and a linear increase of the total noise variance with distance, also in agreement with experiments.

Finally, the simplicity of this generic model makes it very attractive for path allocation in elastic networking, and the cumulative nature of the model renders it particularly well adapted to a distributed control plane.
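A minimal sketch of this cumulative model is shown below; the variance is normalized to the received power (so the \(\bar{P}_\text{RX}\) factor of Eq. (6) drops out), the function \(f\) is replaced by a toy saturating curve loosely mimicking Figure 8 (growth in dB with input dispersion, plateau about 6 dB above the zero-dispersion value), and all numerical constants are illustrative assumptions rather than calibrated values.

```python
# Cumulative nonlinear noise per Eqs. (5)-(6), neglecting span-to-span covariance terms.
import math

def f_toy(d_in_ns_nm, slope=0.35, plateau_db=6.0):
    """Toy normalized f(D_in): f(0) = 1, saturating ~6 dB higher at large |D_in|."""
    if abs(d_in_ns_nm) < 0.1:
        return 1.0
    rise_db = min(plateau_db, slope * 10.0 * math.log10(abs(d_in_ns_nm) / 0.1))
    return 10.0 ** (rise_db / 10.0)

def snr_nl_db(n_spans, p_in_mw=1.0, gamma=1.3, d_span_ns_nm=1.7, var_ref=1e-4):
    """SNR_NL after n_spans, from the sum of single-span variances (Eq. 5, no covariance)."""
    var_total = sum(var_ref * p_in_mw**2 * gamma**2 * f_toy(k * d_span_ns_nm)
                    for k in range(n_spans))   # k * d_span = cumulated dispersion at span input
    return -10.0 * math.log10(var_total)       # variance is normalized to received power

for n in (5, 10, 20):
    print(n, "spans -> SNR_NL ~", round(snr_nl_db(n), 1), "dB")
```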

Second Example: End-to-End Computation of Nonlinear Noise   End-to-end models are interesting since they do not rely a priori on an in-depth experimental calibration of the function \(f\), even though a calibrated scaling factor is ultimately recommended (one per format).

They allow seamlessly changing fiber dispersion, spectral efficiency, modulation format, and amplification type (localized, distributed), and make it possible to build quite accurate \(\text{BER}\) or system reach estimation tools, essentially for homogeneous long-reach systems.

In the \(\text{GN}\) model, the derivation of the function \(f\) is straightforward, but the nonlinear noise variance is independent of the span input cumulated dispersion and even of the modulation format; thus, such models fail to accurately capture the supra-linear evolution of the variance with distance or the impact of fiber ordering observed for terrestrial distances. The main origin of such limitations for the \(\text{GN}\) model lies in the simplified modeling of the input signal as a stationary Gaussian random process.

A more advanced input signal model considering the third and fourth moments of the equivalent random process (which depend on the modulation scheme) has been proposed, leading to the so-called extended Gaussian noise \(\text{(EGN)}\) model.

The first validations suggest that this modeling successfully overcomes the abovementioned limitations, at least as far as inter-channel nonlinearities are concerned, albeit at the expense of computing time. A closed-form expression of the variance has been proposed that accounts for the modulation format, but not for the fiber input cumulated dispersion.

In the context of \(\text{EONs}\) with a wide variety of possibly coexisting modulation formats, symbol rates, channel spacing, the use of such models, with the necessary calibrations, becomes paramount. More details on those models can be found in this tutorial ANALYTICAL MODELING OF THE IMPACT OF FIBER NON-LINEAR PROPAGATION ON COHERENT SYSTEMS AND NETWORKS. 

 

Resource Allocation Tools 

Resource allocation tools are used by network providers and operators for computing the number of resources necessary to transport a given traffic, while ensuring its quality of service and minimizing as much as possible the whole network cost.

Depending on their application, resource allocation tools are classified into two categories: off-line and on-line. Off-line tools address the network planning phases, occurring initially to deploy the resources necessary to transport the forecast traffic (“greenfield”) and later on during the network life to cope with planned changes of the network state, such as network upgrades (set-up/tear-down of optical connections) or maintenance operations.

On-line tools apply to unpredictable dynamic provisioning of optoelectronic resources during the network life, either to cope with changes in the network state (e.g., upon network failures) or to set-up on-demand services; on-line tools are implemented in control planes. The main differences between these two categories of tools lie in the computational time constraints and on the distance of the algorithm output from the optimal solution; complex route and resource allocation problems are decomposed in several subparts at the expense of the global solution accuracy.

Off-line tools have to ensure the transport of the forecast traffic at a minimal cost; this operation can require milliseconds to minutes per connection. Off-line algorithms can be solved either with mathematical optimization methods, such as integer linear programming \(\text{(ILP)}\) and mixed \(\text{ILP}\) \(\text{(MILP)}\), whose advantage is the optimality of the provided solution for the solved (sub)problem at the expense of high computation times; or with heuristic approaches, which ensure low computation times but return suboptimal solutions; faster computation times typically stem from simplifications that move the solution further from the optimum.

Meta-heuristic solutions strike a balance between computation time and solution optimality. \(\text{(M)ILP}\) and meta-heuristic solutions are not suitable for the implementation of on-line algorithms because of their time constraints, and suboptimal heuristics are preferred.

Thanks to the advances in the physical layer technologies triggered by the introduction of flex-grid \(\text{WSS}\) and elastic transponders, it will be possible to select the rate of an optical connection according to the network state by choosing the best combination among: modulation format, coding rate, and spectrum width.

This problem is known as routing, data-rate and wavelength assignment \(\text{(RDWA)}\) if fixed grid networks are considered, and as routing, modulation level and spectrum allocation \(\text{(RMLSA)}\) in flex-grid scenarios. Consequently, in the \(\text{RMLSA}\) case, traditional resource allocation tools, which are based on routing and wavelength assignment \(\text{(RWA)}\) for fixed grids and for a limited set of data-rate devices with given physical properties, have to evolve into routing and spectrum assignment \(\text{(RSA)}\) routines accounting for the variable frequency slot occupancy of a channel and its physical properties depending on the chosen channel configuration; taking into account the physical constraints at the optical layer during the routing phase is known as Impairment-Aware \(\text{RWA/RSA}\). \(\text{RMLSA}\) accounts for the physical impairments acting on the path before choosing transmission channel parameters, such as modulation format, coding rate and symbol rate.
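At the heart of any RSA routine lies the joint check of spectrum contiguity (adjacent slots) and spectrum continuity (the same slots free on every link of the path). A bare-bones first-fit sketch of this check is shown below; the occupancy data are invented for illustration and real tools would add impairment awareness and modulation/baud-rate selection on top of it.

```python
# First-fit spectrum assignment sketch: find n_slots contiguous slots that are free on
# every link of the candidate path (True in a link list means "slot in use").
def first_fit(path_links, n_slots):
    n_total = len(path_links[0])
    for start in range(n_total - n_slots + 1):
        window = range(start, start + n_slots)
        if all(not link[s] for link in path_links for s in window):
            return start          # index of the first slot of the assigned block
    return None                   # blocked: no common contiguous block on all links

# Two-link path, 10 slots of 12.5 GHz each; a 3-slot (37.5 GHz) demand fits at slot 4.
link_a = [True, True, False, False, False, False, False, True, True, False]
link_b = [False, True, True, True, False, False, False, False, True, False]
print(first_fit([link_a, link_b], 3))  # -> 4
```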

The complexity of resource allocation tools is not limited to the introduction of a flex-grid and to the choice of the channel configuration; it also depends on additional dimensions to be considered while computing the number of resources and the total network cost, such as traffic grooming (i.e., how to aggregate multiple low-rate flows into larger traffic units by means of electronic processing), network resiliency, blocking probability, and energy efficiency.

All such additional dimensions increase the number of network configurations to be explored, increasing the complexity of the resource allocation tools. The complexity of \(\text{RSA}\) algorithms has been investigated when traffic grooming and/or regeneration and/or variable modulation formats and baud rates are taken into account, as well as the network and traffic conditions under which the resource utilization savings are worth the additional complexity.

To reduce the complexity of these tools, the resource computation procedure is decomposed into several subproblems at the expense of the solution optimality; the \(\text{RMLSA}\) problem is decomposed into \(\text{RML}\) and \(\text{SA}\) problems, and a sequential heuristic combined with an appropriate ordering gives solutions close to the \(\text{ILP}\) ones in low running times.

Table 1 shows a summary of the main approaches of resource assignment tools for EONs based on fixed and flex-grids; the symbol “+” (“&”) means that two successive phases are performed separately (jointly, respectively).

Each resource assignment approach can be solved with mathematical formulations such as \(\text{(M)ILP}\), meta-heuristics or heuristics as a function of their application (off-line or on-line resolution).

We recall that, independently of the adopted solution method, decomposing a problem into several subproblems provides suboptimal solutions with respect to a problem formulation considering all phases jointly.

In dynamic \(\text{EONs}\), connections having different bandwidth occupation are continuously set-up and released, such that an optimal resource allocation cannot be

TABLE 1.  Main approaches for resource assignment for fixed- and flex-grid elastic networks, their application scopes and solution methods 

 

performed for each change of the network state. For dynamic scenarios, a main concern is spectrum fragmentation, which occurs when empty spectrum slots are not contiguous within a network link or over adjacent links as a result of several set-ups and tear-downs of dynamic connections.

Such unused slots cannot be used by future connections and their presence worsens both the performance (increase of blocking ratio) and the resource utilization (spectrum and spectrum converters) efficiency of the network. The spectrum fragmentation is similar to the wavelength fragmentation in legacy \(\text{RWA}\) problems, but now the availability of not only a single slot, but that of a group of contiguous slots has to be guaranteed along the path. 

Figure 9 illustrates an example of demand blocking due to spectrum fragmentation: a certain number of connections with different bandwidth requirements are established over a given network link (Figure 9(a)). Later in the network life, one of these connections is released, freeing up the corresponding spectrum (Figure 9(b)). Then, a new connection requiring six slots has to be set up; this connection is blocked even though six slots are free on the link, as those slots are not contiguous.
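The blocking condition of Figure 9 can be checked with a few lines of code; the slot occupancy pattern below is invented for illustration, since the figure defines no exact slot indices.

```python
# Fragmentation check: the link has six free slots in total, but the largest contiguous
# free block is only three slots wide, so a 6-slot demand is blocked.
def largest_free_block(link):
    best = run = 0
    for used in link:
        run = 0 if used else run + 1
        best = max(best, run)
    return best

link = [False, False, False, True, True, False, False, True, False, True]
print(sum(not s for s in link), "free slots, largest contiguous block =", largest_free_block(link))
print("6-slot demand accepted?", largest_free_block(link) >= 6)  # False -> blocked
```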

In the literature, many works aim to solve this issue through spectrum defragmentation, either while setting up a new request or by reconfiguring existing paths periodically or according to a fragmentation threshold.

The maximum common larger segment (MCLS) approach tries to maximize the number of consecutive unused slots on the frequency axis. An entropy state associated with a link, a path, and the network has also been introduced; this entropy describes the fragmentation state of each of these entities by assessing the percentage of empty slots before and after a path set-up; in this manner, the spectrum allocation minimizing the entropy of the whole network is selected.

Concerning dynamic spectrum defragmentation, a set of existing connections has to be reconfigured by a rerouting and/or spectral reallocation performed sequentially or at the same time. This operation is generally time consuming and is not suitable because service disruption can arise during the reconfiguration phase. Some defragmentation methods have been proposed and demonstrated in research laboratories.

Such methods mainly rely on the “make-before-break” paradigm, whereby an additional path is set-up before switching the signal that has to be reallocated, or on push-pull defragmentation (also known as wavelength sweeping), whereby the spectrum of a channel is shifted on free contiguous slots belonging to the same path with no service interruption. Similarly, hop-tuning techniques appear as an extension of push-pull by allowing channel spectrum shifts to noncontiguous slots.

A qualitative comparison of the various defragmentation techniques is available in the literature. Another way to cope with the fragmentation issue is to add spectrum converters along the path; this solution seems the most straightforward, but it involves expensive optoelectronic devices, making elastic networking less cost-attractive.

 

 

FIGURE 9.  Example of spectrum fragmentation on a given link for a dynamic scenario.

 

Control Plane for Flexible Optical Networks 

The rise of cloud computing services is creating more and more dynamic traffic requests, where on-demand bandwidth at Internet speeds has to be set up. Thanks to elastic/flexible optical solutions, it becomes possible to adapt the optical resources to these dynamic bandwidth services.

The dynamic set-up of on-demand service is no longer compatible with manual interventions, hence automatic processes and resource orchestration become mandatory across multiple network layers (from \(\text{IP}\) to optical) and between diverse vendors and operators. 

This orchestration and its automatic reconfigurations need an advanced control framework in order to guarantee the update of the network state information and the ability to route connections whenever a reconfiguration is demanded. Presently, the implementation of such a framework relies on two main approaches: the first extends the existing generalized multiprotocol label switching \(\text{(GMPLS)}\) and path computation element \(\text{(PCE)}\) based protocols, while the second opts for the newly proposed software-defined network \(\text{(SDN)}\)/OpenFlow \(\text{(OF)}\) architecture.

GMPLS/PCE-Based Control Plane   To support elasticity and to guarantee the establishment and maintenance of connections, control plane enhancements consist of extensions of the \(\text{GMPLS}\) routing and signaling protocols [56], that is, open shortest path first \(\text{(OSPF)}\) and resource reservation protocol with traffic engineering \(\text{(RSVP}\)-\(\text{TE)}\).

The former protocol, also known as the routing protocol, disseminates the network state information (e.g., frequency slot utilization) required by \(\text{RDWA}\)/\(\text{RMLSA}\) on-line algorithms, whereas the latter enables the establishment/tear-down of resources from a source node to a destination node and provides the acknowledgement of resource reservation.

\(\text{GMPLS}\) abstracts the network in logical representations of each of its layers (one layer per used switching capability, e.g. one for \(\text{IP}\) and another for \(\text{WDM)}\).

The path computation can be performed either in a centralized manner, by means of the \(\text{PCE}\), or in a distributed way. Although the use of a \(\text{PCE}\) is computationally more powerful than the distributed solution, it reduces the scalability for larger network sizes; to cope with this problem, several \(\text{PCEs}\) can be placed in the network, providing a logically centralized system with a specified set of protocols, called the PCE protocol \(\text{(PCEP)}\).

Extensions of the control plane protocols to support elasticity are currently being discussed in standards bodies. More specifically, the \(\text{OSPF}\) protocol now needs to disseminate not only the number of free wavelengths but also information about free spectral slots, along with the value of the slot width.

In a flex-grid, the reservation of a channel through \(\text{RSVP}\)-\(\text{TE}\) requires specifying the nominal central frequency of the channel and its spectral slot width. There are examples of protocol extensions required in flex-grid networks where resource allocation and path establishment are performed in both centralized and distributed manners, whether the routing and spectrum allocation are performed jointly or separately.

Besides standards, many methods have been proposed in the scientific literature to aggregate information about the spectral occupancy.

 

 

FIGURE 10.  Representation of current fixed-grid (a), where each channel is equally spaced and occupies a unique 50 GHz-slot of the grid, and of the proposed flex-grid (b), where the grid is made of 12.5 GHz slots and a channel occupies a variable number of slots depending on its spectral occupation.

A redefinition of the \(\text{RSVP}\)-\(\text{TE}\) protocol is proposed to specify the total spectrum occupancy of a channel by specifying its extremity frequencies, as indicated in Figure 10. 

Figure 10(a) provides an example of the current 50 GHz fixed-grid network where each channel occupies a slot of the grid and is equally spaced; the spectral occupation of a channel is indicated by the number of the occupied slots. In a flex-grid network, the slot width is no longer equal to the channel spacing or to the spectrum occupied by a channel. Indeed, a channel can occupy several slots and the channel spacing depends on the width of two contiguous channels as shown in Figure 10(b).

As a result it becomes mandatory to specify the total number of occupied slots and where they are placed; this is solved by indicating the start and end slots associated to each channel.

Moreover, a channel can be realized with a single carrier or with multiple carriers; hence, the forwarded information must also specify the number of subcarriers, their symbol rate, and their modulation format.

SDN Architecture   \(\text{SDN}\) is based on the concept of decoupling the data and control planes. It relies on the abstraction of the underlying network infrastructure, which can be used by applications and network services as a virtual entity. This abstraction enables the coexistence of various network slices relying on diverse transport technologies, domains, and control protocols.

\(\text{SDN}\) is based on a centralized entity, named \(\text{SDN}\) controller, which implements the configuration of the underlying network devices. To this purpose, the best suited protocol for the \(\text{SDN}\) architecture is OpenFlow \(\text{(OF)}\). 

OpenFlow is a vendor and technology agnostic protocol that relies on traffic flow tables, enabling software/user-defined flow-based routing, control and management in the \(\text{SDN}\) controller, outside the data path. Indeed any data-plane entity is connected to external software controllers managing the network operating system \(\text{(netOS)}\), which sends the operation messages to be executed by the selected data-plane. 

The \(\text{SDN}\) network paradigm facilitates the packet-optical integration (also known as packet-optical integration convergence, PAC.C) as packet-switches and routers can operate jointly with optical transport elements, which are circuit-based. The packet-optical integration will facilitate the provision of services fitted to specific application requirements (e.g., on demand bandwidth services) at the optical layer.

Current \(\text{SDN}\) cannot fully manage the optical layer, which is based on circuit- and wavelength-based architectures, although protocol extensions are widely proposed. Today, to control heterogeneous technology domains, the OpenFlow protocol has to be able to interact directly with the existing \(\text{GMPLS}\)/\(\text{PCE}\) control plane. This is possible by reusing the existing \(\text{GMPLS}\) encodings and link state protocol and by introducing an additional circuit flow table for circuit provisioning in \(\text{SDN}\) for the optical transport layer. The result is a distributed \(\text{GMPLS}\) control plane under the centralized management performed by the \(\text{SDN}\) controller, exploited in various metro/core border nodes.

For \(\text{EONs}\), both \(\text{GMPLS}\)/\(\text{PCE}\) and \(\text{SDN}\) control plane approaches are investigated, extended and compared, but a complete comparison study is not yet available because problems like network scalability and inter-operability are not taken into account.

To conclude, \(\text{SDN}\) combined with \(\text{EONs}\) is a promising solution enabling network virtualization for delivering user-adapted services; elastic devices can be seen as virtual and programmable devices (e.g., sliceable transponders), and the \(\text{SDN}\) controller allows the orchestration of a heterogeneous network.

 

 

3. PRACTICAL CONSIDERATIONS FOR ELASTIC WDM TRANSMISSION 

We outline the key required technology concepts with an emphasis on proposed solutions built upon the existing standard coherent (Nyquist) \(\text{WDM}\) transmission techniques. However, the concepts are also largely applicable to an orthogonal frequency division multiplexing \(\text{(OFDM)}\) transmission.

 

Flexible Transponder Architecture

Numerous elastic transponder designs have been proposed, notably based on single-carrier technologies or \(\text{OFDM}\). We chose to focus on implementations associated with 100 Gb/s \(\text{PDM}\)-\(\text{QPSK}\) coherent detection since this technology can be made adaptive in various ways with little additional complexity. 

Before depicting the various adaptations on the hardware design, Figure 11 briefly recalls the architecture of a 100 Gb/s \(\text{PDM}\)-\(\text{QPSK}\) transceiver.

The 100 Gb/s transport frame can carry a single 100 Gb/s client signal (e.g., 100GbE or OTU4) or ten 10 Gb/s client signals (e.g., 10GbE or \(\text{OTU2)}\). This transport frame includes FEC encoding to mitigate physical impairments arising along the lightpath.

The encoded bit stream is mapped onto four electrical 28 Gb/s lanes going to the optical module, including the 100 Gb/s payload plus additional framing and \(\text{FEC}\) overhead. Each signal, represented by I1, \(\text{Q1}\), \(\text{I2}\), and \(\text{Q2}\) on the

 

 

FIGURE 11.  Architecture of the 100 Gb/s transponder that will be made elastic.

block diagram, is then independently modulated with binary phase shift keying \(\text{(BPSK)}\) modulation thanks to Mach–Zehnder modulators. The resulting four BPSK signals are then combined through phase shifters and polarization beam combiners so as to produce a 28 Gbaud \(\text{PDM}\)-\(\text{QPSK}\) signal.

After transmission, the signal is detected by means of a coherent receiver: it is first combined with a local oscillator within a polarization-diversity \(90^\circ\) hybrid mixer, whose outputs allow the linear sampling of the in-phase and quadrature components of the optical signal along two arbitrary orthogonal polarizations. The entire optical field (amplitude, phase, and polarization states) can thus be reconstructed.

A clock frequency is then extracted from the digital signal before going to the \(\text{DSP}\) stage. The \(\text{DSP}\) allows a wide range of techniques to compensate for signal distortions such as chromatic dispersion and polarization mode dispersion \(\text{(PMD)}\), and also enables the recovery of an estimate of the carrier frequency and phase. Then, a symbol-to-bit decision stage is performed before decoding the \(\text{FEC}\).

Due to the introduction of elasticity, the signal processing of one or multiple blocks needs to be adjusted depending upon the level of flexibility one wants to achieve. Elasticity is indeed enabled either by modulation format, symbol rate, channel spacing, \(\text{FEC}\) adaptation, or a combination of these options.

• Modulation format adaptation. The emitter side can straightforwardly support various modulations by adjusting the sequences feeding the four modulators (for in-phase and in-quadrature of both polarizations).

For instance, dual polarization \(\text{BPSK}\) is generated by \(I1=Q1\) and \(I2=Q2\), while \(\text{SP}\)-\(\text{QPSK}\) is generated by \(Q2=I1\bigoplus Q1\bigoplus I2\), where \(\bigoplus\) denotes the \(\text{XOR}\) operator (a minimal sketch of these mappings is given after this list).

At the receiver, much of the \(\text{DSP}\) processing can be reused between different modulations and in particular in the chromatic dispersion block which accounts for a very large part of the total \(\text{DSP}\).

Higher-order modulations such as \(\text{8QAM}\) and \(\text{16QAM}\) require \(\text{DAC}\) modules prior to the Mach–Zehnder modulators in order to generate the multi-level \(I\), \(Q\) signals inherent to dual-polarization modulations carrying more than 4 bits per symbol.

• Symbol rate adaptation. The generation of variable-bandwidth signals requires a tunable clock; alternatively, it can be emulated with a fixed clock in combination with symbol repetition at the transmitter and decimation at the receiver.

The former has the advantage of offering a large flexibility thanks to the tunable clock while the latter does not require a new phase, bit or frame synchronization after a symbol rate change but is less flexible since repetition/decimation uses an integer scaling factor.

• Channel spacing adaptation. The challenge for the transceiver is rather limited to the need for a tunable laser (potentially fully tunable if over the entire \(\text{C}\) or \(L\) band) and local oscillator. However, all filtering elements such as optical filters and \(\text{WSS}\) must be compatible with the nonstandard \(\text{ITU}\) grid.

If such optical filters are not sharp enough or if the chosen channel spacing is very tight to reach high spectral efficiency, additional constraints can be put on the transceiver. In the former case, the emitter with the addition of a pre-compensation block can indeed help to compensate for physical impairments, while the receiver can enhance the equalization. On the other hand, the latter case can rely on Nyquist \(\text{WDM}\) pulse shaping.

• \(\text{FEC}\) adaptation. By varying the overhead and hence the reliability and coding gain, the bit rate can be adjusted to match the transmission impairments and distance. Both the coding and decoding blocks should be adapted to reflect the overhead change. The other \(\text{DSP}\) blocks can be reused as such.
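As referenced in the modulation format item above, the sketch below illustrates the quoted lane-mapping constraints (a hypothetical helper, not the transponder's actual firmware); only the bit-level relations between the four electrical lanes are modeled, with pulse shaping, framing, and FEC out of scope.

```python
# Lane mapping for modulation format adaptation: constrain the four lanes (I1, Q1, I2, Q2)
# to produce DP-BPSK or SP-QPSK instead of PDM-QPSK, as described in the text.
import random

def lanes(format_name, n_symbols, rng=random.Random(0)):
    out = []
    for _ in range(n_symbols):
        i1, q1, i2, q2 = (rng.randint(0, 1) for _ in range(4))
        if format_name == "DP-BPSK":          # I1 = Q1 and I2 = Q2 (2 info bits/symbol)
            q1, q2 = i1, i2
        elif format_name == "SP-QPSK":        # Q2 = I1 xor Q1 xor I2 (3 info bits/symbol)
            q2 = i1 ^ q1 ^ i2
        # "PDM-QPSK": all four bits independent (4 info bits/symbol)
        out.append((i1, q1, i2, q2))
    return out

for fmt in ("PDM-QPSK", "SP-QPSK", "DP-BPSK"):
    print(fmt, lanes(fmt, 3))
```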

 

Example of a Real-Time Energy-Proportional Prototype

This section considers the realization of an energy-efficient optical point-to-point transmission with elastic-enabled hardware elements as a proof-of-concept example. A muxponder combines the functions of aggregation and transponder. We present a real-time muxponder prototype that is able to follow the traffic fluctuations in order to reduce its energy-consumption during low traffic periods. 

The experimental setup consists of a fully equipped demonstration built upon an original elastic device capable of aggregating and disaggregating partially-filled 10GbE Ethernet clients, and of a real-time coherent elastic transponder with symbol-rate adaptation, as depicted in Figure 12. 

The elastic aggregation unit has ten classical (fixed) 10GbE interfaces with the client side (left of the module) filled by random traffic but where we can control the ratio of useful frames (i.e., representative of real traffic) to dummy frames (added to reach the 10 Gb/s Ethernet nominal data rate).

At the output of the aggregation unit (right of the module), the interface is elastic with a variable bit rate per output lane so as to deliver the proper bit rate to the transponder unit. We use a voltage controlled oscillator to generate a centralized clock and distribute it all across the aggregation module and the transponder. The transponder is a typical \(\text{PDM}\)-\(\text{QPSK}\) with 7% \(\text{FEC}\) overhead (using Reed–Solomon), which is fed with 4 lanes corresponding to in-phase and in-quadrature of the two polarizations.

The transponder adapts its data rate by tuning the symbol rate from 1 to 7.5 Gbaud, due to hardware limitations. This translates into a maximum bit rate of 30 Gb/s with \(\text{PDM}\)-\(\text{QPSK}\).

 

 

FIGURE 12.  High-level muxponder prototype architecture.

 

 

 

FIGURE 13.  Impact of the symbol rate on the power consumption

We measure the energy consumption of the real-time \(\text{DSP}\), which includes the sampling of data in field programmable gate arrays \(\text{(FPGAs)}\) where polarization demultiplexing is performed with 9-tap finite impulse response filters, arranged in a butterfly structure and updated by the constant modulus algorithm. Carrier frequency and phase estimation are performed using the Viterbi algorithm. The measurement was performed by direct inspection of the voltage and current supplied to the receiver board and is shown in Figure 13.

A linear relationship is demonstrated between power consumption and actually transported traffic. This is typical of the power consumption of logic gates versus clock rate when the gate voltage is held constant for all clock rates. Overall, we found a 41% reduction in power consumption between the highest investigated bit rate (30 Gb/s) and the lowest (7 Gb/s).
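This linear behaviour can be captured by a simple first-order model, power = static floor + slope × transported rate. The coefficients below are made-up values chosen only to illustrate the shape of the curve; they are not the measured data.

```python
# Illustrative model (assumption, not the measured data): logic power scales
# linearly with clock frequency at fixed voltage, on top of a static floor.
def board_power_w(bit_rate_gbps, p_static_w, w_per_gbps):
    return p_static_w + w_per_gbps * bit_rate_gbps

# Hypothetical split chosen only to land in the range of the reported saving.
p30 = board_power_w(30, p_static_w=14.0, w_per_gbps=0.5)
p7 = board_power_w(7, p_static_w=14.0, w_per_gbps=0.5)
print(round(1 - p7 / p30, 2))   # ~0.40 with these made-up coefficients
```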

Although these results are qualitatively very promising, quantitatively they are specific to our hardware implementation and to the technological choices made for our receiver.

For example, our implementation is based on \(\text{FPGAs}\), whereas higher-bit-rate transponders are typically based on application-specific integrated circuits. Research in the area of energy-proportional optical network elements is still relatively young, and several improvements to our proposal can be imagined, for example, the use of dynamic voltage and frequency scaling.

 

 

4. OPPORTUNITIES FOR ELASTIC TECHNOLOGIES IN CORE NETWORKS

\(\text{EONs}\) result in more cost- and energy-efficient solutions because resources are used only where and when they are needed. The opportunities offered by elastic networks are multiple and depend on the improvements brought by both hardware and software solutions, as well as by the control plane.

Usually, such modifications are introduced gradually by network operators, following the trade-off between the near-term cost impact of the newly adopted solution and the expected benefits for future network evolutions. In the following, we quantify the various opportunities enabled by the adoption of \(\text{EONs}\) by investigating network-level dimensioning with the resource allocation tools described in the section Resource Allocation Tools.

 

More Cost-Efficient Networks 

\(\text{EONs}\) allow significant cost savings, depending on the network scenario. In the short term, rate-adaptive devices will likely be deployed in legacy fixed-grid networks. In this context, the capability of matching the modulation format to the connection capacity and to the distance it has to cover increases the network capacity by approximately 30%.

Today, to fit the capacity of a connection to the capacity required by a demand, multiple types of devices are deployed, each carrying a specific rate and with a specific design. The use of a single type of rate-tunable technology to handle all types of connections simplifies the design of the network (e.g., the same dispersion map can be used for all links) and allows the sharing of resources in dynamic networking scenarios.

Indeed, elastic devices are able to configure their data rate to meet planned (e.g., upgrades) or unexpected (e.g., failures) evolutions of the network conditions: when a new connection has to be established, any available elastic optoelectronic interface can be used (upon unplanned failure recovery or dynamic demand set-up) or reconfigured (upon upgrade or pre-computed failure recovery).

In restorable networks (with online rerouting of the connection and new lightpath establishment after planned and/or unplanned failures, as opposed to protected networks where protection lightpaths are always lit), it has been demonstrated that the use of elastic interfaces limits the over-provisioning of spare resources, as these resources are no longer associated with a specific rate and can be shared among failed connections regardless of their rate.

The number of spare resources in an elastic network is reduced by 30–70% with respect to mixed-data-rate scenarios (where a specific device exists for each data rate), yielding savings of up to 37%.

In case of a network upgrade, the use of rate-adaptive resources ensures a higher resilience of the deployed resources to traffic evolution and allows the deployment of regenerators to be postponed; thanks to this capability, when diverse network upgrades are considered, the cumulative cost of the network over the whole period is up to 18% lower for elastic scenarios than for mixed-line-rate scenarios and 34% lower than for single-rate scenarios.

Though flexible equipment may cost more than conventional \(\text{WDM}\) equipment, fast price erosion over time can be expected with large-scale production. For a flexible network to remain more cost-attractive than a fixed one, the maximum additional cost of a sliceable transponder with respect to a fixed one is estimated to lie between 20% and 60% of the cost of a fixed transponder.

Flex-grid brings further economic benefits to elastic networks. Cost savings can be evaluated in terms of equipment but also in terms of saved spectrum. Savings have also been demonstrated for resilient networks, both with pre-planned and with online restoration.

 

More Energy-Efficient Networks

Elastic networks are also attractive for their eco-sustainability, which has become increasingly important due to environmental awareness and the pressure to reduce the operational expenditure of network operators; due to the introduction of high-capacity devices that are more and more power hungry; and due to the rising cost of energy.

The capability of elastic interfaces to tune the data rate to the carried capacity minimizes the energy per bit, as just enough power is used. Data rate adaptation can be provided by two different methods: \(\text{(i)}\) by adjusting the modulation format, thereby increasing the optical reach when less complex modulation formats are used and skipping regenerators that become unnecessary; and \(\text{(ii)}\) by adapting the symbol rate of the connection, thereby reducing the energy consumption when lower symbol rates are transmitted, because the energy consumption of electronic devices (such as the \(\text{DSP}\), line cards and framer/deframer) decreases proportionally to their clock frequency, as considered and shown in the Example of a Real-Time Energy-Proportional Prototype above. Figure 14 depicts an example of these two rate-adaptation methods allowing energy savings.
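The following sketch illustrates how these two handles could be combined when configuring a connection: among the formats whose reach covers the path, it selects the one requiring the lowest symbol rate, and hence the lowest \(\text{DSP}\) clock. The reach values, the maximum symbol rate and the selection rule are illustrative assumptions, not figures from the text.

```python
# Sketch (assumptions throughout): pick an operating point that carries a
# demand over a given distance with the lowest symbol rate (lowest DSP clock).
FORMATS = {            # bits/symbol per polarization -> assumed reach (km)
    "PDM-QPSK":  (2, 3000),
    "PDM-8QAM":  (3, 1500),
    "PDM-16QAM": (4,  800),
}

def operating_point(demand_gbps, distance_km, max_baud=32.0, fec_oh=0.07):
    """Return (format, symbol rate in Gbaud) or None if no format fits."""
    best = None
    for fmt, (bits, reach) in FORMATS.items():
        if reach < distance_km:
            continue                     # would require extra regenerators
        baud = demand_gbps * (1 + fec_oh) / (2 * bits)   # 2 polarizations
        if baud <= max_baud and (best is None or baud < best[1]):
            best = (fmt, baud)
    return best

print(operating_point(100, 700))    # short path: 16QAM at the lowest baud
print(operating_point(100, 2000))   # long path: only QPSK reaches, higher baud
```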

This dynamic capability of changing the connection data rate can be used for greening the network in static operation and for adapting the energy consumption of each single connection to its daily and weekly traffic variations.

As an example, in a European-sized network with unprotected traffic, following daily traffic variations brings up to 20% energy savings when using modulation format adaptation, and up to 24% when symbol rate adaptation is implemented; gains reach 32% when both are implemented simultaneously. Energy savings have also been demonstrated by setting to a low-power mode the spare devices planned to be used only for failure recovery.

 

Filtering Issues and Superchannel Solution

Most advantages associated with flex-grid exist under the assumption that filters with finer spectrum granularity have sharp profiles and do not affect the transmission performance of the optical channels.

Many of the previously cited studies have shown the benefit of adopting narrower channel spacings when comparing the overall extra capacity of flex-grid networks with that of fixed-grid ones (33% when 37.5 GHz-grid systems replace legacy 50 GHz-grid systems).
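A quick back-of-the-envelope check of that 33% figure (the ~4.8 THz usable C-band and the 0.1 roll-off are assumptions introduced here for illustration):

```python
# Worked check: a 32 Gbaud Nyquist-shaped channel occupies roughly
# symbol_rate * (1 + roll_off) = 35.2 GHz, so it fits a 37.5 GHz slot;
# the same band then holds one third more channels than on the 50 GHz grid.
c_band_ghz = 4800
occupied = 32 * (1 + 0.1)
print(occupied <= 37.5)          # True: the channel fits the narrower slot
print(c_band_ghz // 50)          # 96 channels on the 50 GHz grid
print(int(c_band_ghz // 37.5))   # 128 channels on the 37.5 GHz grid (+33%)
```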

With commercially available flex-grid filters, the \(\text{OSNR}\) penalties induced by tight optical filtering when individual channels are steered are not negligible. The total network throughput of a transparent network featuring 37.5 GHz channel spacing can be compared with that of a 50 GHz-grid network.

It is shown that the penalties due to tight filtering reduce the optical reach of transparent signals proportionally to the number of traversed nodes. It follows that, under the assumption of a constant cost per transported bit, the expected 33% gain brought by narrowing the channel spacing from 50 down to 37.5 GHz strongly depends on network size: it can actually be halved for nation-wide networks or vanish for larger ones.

To cope with the filtering-induced penalties and improve the spectral efficiency while keeping the cost per bit as low as possible, some transmission paradigms have to change.

 

 

FIGURE 14.  Representation of two methods allowing energy savings thanks to the reconfiguration of elastic transponders: by means of modulation format adaptation (a) and symbol rate adaptation (b).

 

 

FIGURE 15.   Example of the presence of superchannels in the network. The connection is composed of one or more subcarriers that are inserted, transported and dropped together.

 

A solution to this problem consists in adding a guard band between narrower channels (or, equivalently, a half guard band at each channel extremity) to the channel bandwidth, so as to mitigate the penalties due to both the filtering functions and the crosstalk from partially blocked adjacent channels, at the expense of overall spectral efficiency.

To improve the spectral efficiency and cope with filtering issues at the same time, superchannel routing may be adopted. A superchannel consists of a group of adjacent optical subcarriers, as shown in Figure 15, that propagate together along the same optical path: they are inserted, transported, and extracted together.

All subcarriers belonging to the same superchannel have the same modulation format and symbol rate, which are adapted to the physical degradations occurring along the path. The number of subcarriers depends on the capacity of the connection and on the spectral resources available along the selected path. 
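As a rough illustration of how a superchannel could be dimensioned, the sketch below derives the number of subcarriers and the occupied flex-grid slots from the demand and the per-subcarrier format and symbol rate. The guard band, the 12.5 GHz slot granularity and the packing rule are assumptions.

```python
# Sketch (illustrative assumptions): size a superchannel for a given demand.
# Subcarriers are packed at ~symbol-rate spacing with one half guard band at
# each superchannel edge; the spectrum is then rounded to 12.5 GHz slots.
import math

def superchannel(demand_gbps, bits_per_symbol, symbol_rate_gbaud,
                 fec_oh=0.07, guard_ghz=12.5, slot_ghz=12.5):
    per_subcarrier = symbol_rate_gbaud * bits_per_symbol * 2 / (1 + fec_oh)
    n_sc = math.ceil(demand_gbps / per_subcarrier)
    width = n_sc * symbol_rate_gbaud + guard_ghz
    n_slots = math.ceil(width / slot_ghz)      # flex-grid 12.5 GHz slots
    return n_sc, n_slots * slot_ghz

# 400 Gb/s over a path that supports PDM-16QAM at 32 Gbaud:
print(superchannel(400, 4, 32))   # (2, 87.5): 2 subcarriers, 87.5 GHz of spectrum
```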

The flexibility and reconfiguration of high-capacity superchannels could also make them prime candidates for inter-data center transmissions, which require a tremendous amount of bandwidth.

 

 

5. LONG-TERM OPPORTUNITIES

Burst Mode Elasticity 

We have so far considered elastic networking solutions aimed at circuit-switched core networks, where enhanced flexibility is most needed and where the high cost of coherent technology, compounded by the equipment cost overhead incurred by elasticity, is compatible with the overall network cost.

In the segments closer to the end user – access and metro networks – and inside datacenter networks (see Figure 1), the tremendous bandwidth made available by optical fibers is not (yet) fully leveraged, so optimizing the spectrum using flex-grid technologies is of little interest today and in the near term.

However, as capacity requirements grow (Bell Labs predicts a 560% traffic increase in the metro segment alone), coherent technology and elasticity could eventually be needed to optimize equipment utilization, and thus be deployed in access, metro and datacenter networks. This section shows how elasticity can increase the capacity made available to end users.

In core networks, due to the aforementioned fundamental trade-off between reach (in terms of distance or number of nodes) and data rate, either several transponders are installed on the network nodes, or \(\text{OEO}\) regeneration is used, so that any node can communicate with any other node. In access, metro and datacenter networks, the cost of the terminating equipment such as the transponders is crucial and the cost of deploying several transponders per node or of deploying regenerators can be prohibitive.

Thus the same transponder should send different data flows to different destinations, or receive different data flows from different sources, through time-sharing of the channel capacity. A transponder may have to be reconfigured many times per second, or even per millisecond, so that communication between any source and any destination is possible at virtually any time.

Such fast reconfiguration capability is referred to as “burst mode,” where a burst typically consists of a stream of data going from a source to a destination for a limited duration, ranging from a few tens of nanoseconds to a few seconds depending on the implementation. Microsecond-scale bursts are typically considered.

Burst-mode coherent reception is challenging because not only the clock but also the polarization state must be recovered for each burst, and the \(\text{DSP}\) algorithms must be reset at each burst and converge within a duration that is much shorter than the burst itself, in order to minimize the overhead due to burst-mode reception. Additional details about coherent burst-mode reception can be found in the literature.
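The resulting efficiency loss can be approximated as the ratio between the convergence (preamble) time and the total burst duration, as in the toy calculation below; the numerical values are assumptions chosen only to show the order of magnitude.

```python
# Sketch (assumed numbers): capacity lost to burst-mode DSP convergence is
# roughly preamble_time / (preamble_time + payload_time), which is why
# convergence must be much faster than the burst itself.
def burst_overhead(preamble_ns: float, payload_ns: float) -> float:
    return preamble_ns / (preamble_ns + payload_ns)

print(round(burst_overhead(100, 1000), 3))   # 100 ns preamble, 1 us burst -> ~9.1%
```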

Coherent transmission enables rate-adaptation and permits a single transponder to communicate with both close nodes (at a high data rate) and further nodes (at a lower data rate).

In the segments considered here, elasticity is typically implemented by varying the modulation format for each burst (potentially every microsecond) based on the distance or number of nodes between source and destination.

Observe nonetheless that, when energy efficiency is paramount, symbol rate elasticity may be used as a complement to modulation format elasticity, as explained in the previous sections.

The concept of flex-rate burst-mode networking is illustrated in Figure 16, which depicts how one node (\(A\)) communicates with all other nodes. Node \(A\) sends data to node \(D\) via nodes \(B\) and \(C\), and to node \(G\) via node \(F\). Node \(B\) is sufficiently close to node \(A\) that \(A\) can send high-rate, \(\text{PDM}\)-\(\text{16QAM}\)-modulated bursts to \(B\); node \(C\) is slightly further away, so a lower modulation format (e.g., \(\text{PDM}\)-\(\text{8QAM}\)) is used, while an even lower modulation format (e.g., \(\text{PDM}\)-\(\text{QPSK}\)) is needed for node \(A\) to send bursts to node \(D\). Unlike with circuit switching, with optical burst switching node \(A\) can use the same transponder, through time-sharing, to send data to nodes \(B\), \(C\) and \(D\) without requiring nodes \(B\) and \(C\) to perform optoelectronic conversions.

In the following sections we review two examples of flex-rate burst-mode networks and show their potential benefits: a coherent \(\text{PON}\), and a ring-based metro or datacenter network.

 

 

FIGURE 16.  Burst-mode elastic network.

 

Elastic Passive Optical Networks 

\(\text{PONs}\) can be viewed as a star-shaped optical burst switching network where the node located in the operator’s central office, the Optical Line Terminal \(\text{(OLT)}\), and \(N\) nodes located at the customers’ premises, the Optical Network Units \(\text{(ONU)}\), communicate via a \(1:N\) optical splitter. Today’s \(\text{PONs}\) use noncoherent modulation formats at data rates up to 10 Gb/s both downstream and upstream. Although coherent modulation is currently prohibitively expensive for a deployment in the highly cost-sensitive access segment, the combination of technological advances in coherent transmission equipment and of the demand growth could lead to the introduction of the coherent technology in \(\text{PONs}\) in the medium to long term. 

In a \(\text{PON}\), the distance between the operator’s central office and the end users varies widely: the data rate in the whole \(\text{PON}\) is driven by the data rate achievable by the \(\text{ONU}\) that is most distant from the \(\text{OLT}\); closer \(\text{ONUs}\) have transmission margins that are not exploited.

In a flex-rate coherent \(\text{PON}\) (flex-\(\text{PON}\)), terminals adjust their rate depending on the distance to be covered in order to exploit the aforementioned margins, as shown in Figure 17. To ensure fairness across all users, that is, that all \(\text{ONUs}\) experience, on average, the same service (in terms of sending or receiving data rate) irrespective of their physical distance to the \(\text{OLT}\), a generalized proportional-fair scheduler can be used that allocates more sending time to the more distant users experiencing more impairments.

Assuming 256 \(\text{ONUs}\) per \(\text{OLT}\) with a maximum \(\text{OLT}\)-\(\text{ONU}\) distance of 20 km, a standard, fixed-rate coherent \(\text{PON}\) would operate at 75 Gb/s (14 Gbaud signals with a \(\text{PDM}\)-\(\text{8QAM}\) modulation and a standard 12% hard decision \(\text{FEC)}\) corresponding to an effective throughput of 292 Mb/s per user, while flex-\(\text{PON}\) with possible dual polarization modulations up to \(\text{16QAM}\) would deliver on average 318 Mb/s to each user, a 9% premium over fixed-rate \(\text{PON}\). When the \(\text{OLT}\)-\(\text{ONU}\) distances are more widely spread, flex-\(\text{PON}\)

 

 

FIGURE 17.   Flexible passive access optical network (flex-PON).

becomes significantly more efficient than a fixed-rate \(\text{PON}\); for instance, if the most distant user is now 50 km away from the \(\text{OLT}\), the average gain jumps to 120% (220 Mb/s per user instead of 100 Mb/s). Since those gains are per-wavelength, they do not change if several wavelengths are used in the \(\text{PON}\). 
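The gain mechanism can be captured with a small sketch: under equal-throughput time sharing, each \(\text{ONU}\) receives the harmonic mean of the individual achievable rates divided by the number of \(\text{ONUs}\), whereas a fixed-rate \(\text{PON}\) is locked to the rate of the most distant \(\text{ONU}\). The four-ONU rate values below are hypothetical and much simplified with respect to the 256-ONU figures quoted above.

```python
# Sketch of the fairness argument (hypothetical rates, not the quoted PON data).
# With equal per-user throughput T and time shares w_i = T / r_i summing to 1,
# T = 1 / sum(1 / r_i), i.e. the harmonic mean of the rates divided by N.
def per_user_throughput(rates_gbps):
    n = len(rates_gbps)
    fixed = min(rates_gbps) / n                      # locked to the worst ONU
    flex = 1.0 / sum(1.0 / r for r in rates_gbps)    # equal-throughput sharing
    return fixed, flex

# Hypothetical 4-ONU example: three close ONUs at 150 Gb/s, one far at 75 Gb/s.
fixed, flex = per_user_throughput([150, 150, 150, 75])
print(round(fixed, 1), round(flex, 1))   # 18.8 vs 30.0 Gb/s per user
```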

 

Metro and Datacenter Networks 

An even more forward-looking application of burst-mode elasticity is the intra-datacenter interconnection network, that is, a network within a datacenter. \(\text{WDM}\) optical technologies such as optical burst switching may find a promising application in the datacenter segment to improve scalability, for instance in terms of energy consumption or cabling complexity.

Although this section focuses on datacenters, the same technology could be used for large scale metro networks. 

Datacenters are currently built with electronic switches and many point-to-point copper or fiber interconnections: a typical datacenter consists of servers set into racks; within each rack the servers are connected to a Top of Rack \(\text{(ToR)}\) Ethernet switch, connected in turn to a hierarchy of electronic switches following for instance the Folded Clos topology shown in Figure 18.

 

 

FIGURE 18.  Folded Clos-based datacenter network.

 

 

FIGURE 19.  Optical burst switching ring-based datacenter network.

 

Consider the following alternative topology. ToRs are connected to large optical burst switching nodes, arranged into a ring network, as shown in Figure 19. In datacenters, propagation distances are so short that they cause few physical-layer impairments; however, signals may traverse many nodes, and hence many optical filters located within those nodes, resulting in signal distortion due to filter concatenation.

Experimental measurements for an implementation of an optical burst switching node show that \(\text{PDM}\)-\(\text{QPSK}\) signals can cross 120 nodes, whereas \(\text{PDM}\)-\(\text{16QAM}\) signals are limited to a reach of only 20 nodes. If, for instance, a ring of 88 nodes is deployed, then the transponder capacity is limited to \(\text{PDM}\)-\(\text{QPSK}\) signals with an effective throughput of 100 Gb/s (after \(\text{FEC}\)) per wavelength, in order to ensure that each node is reachable from any other node.

With elastic-rate transponders capable of rates from \(\text{PDM}\)-\(\text{QPSK}\) (100 Gb/s) to \(\text{PDM}\)-\(\text{32QAM}\) (250 Gb/s), each node can adjust its sending rate on a per-burst basis to the number of nodes to be crossed to reach a destination, resulting in an average operating rate of 156 Gb/s, a 56% premium over fixed-rate transponders.
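The averaging behind such a figure can be sketched as follows. Only the \(\text{QPSK}\) and \(\text{16QAM}\) node reaches are quoted above; the reaches assumed for \(\text{PDM}\)-\(\text{8QAM}\) and \(\text{PDM}\)-\(\text{32QAM}\), and the unidirectional-ring assumption, are illustrative, so the computed average is indicative rather than the quoted 156 Gb/s.

```python
# Sketch reproducing the spirit of the ring example (assumed reach values
# except the quoted QPSK/16QAM ones; unidirectional ring assumed).
RATES = [                    # (net Gb/s, assumed max node traversals)
    (250, 10),   # PDM-32QAM (assumed)
    (200, 20),   # PDM-16QAM (reach quoted in the text)
    (150, 55),   # PDM-8QAM  (assumed)
    (100, 120),  # PDM-QPSK  (reach quoted in the text)
]

def rate_for_hops(hops):
    for rate, max_hops in RATES:
        if hops <= max_hops:
            return rate
    raise ValueError("destination unreachable without regeneration")

n_nodes = 88
hops = range(1, n_nodes)                     # all other nodes around the ring
avg = sum(rate_for_hops(h) for h in hops) / len(hops)
print(round(avg))                            # average per-burst rate in Gb/s
```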

This of course comes at the cost of deploying more complex transponders that are \(\text{PDM}\)-\(\text{32QAM}\) capable. The same applies to metro networks, where ToRs are replaced with \(\text{PON}\) \(\text{OLTs}\) or \(\text{DSLAMs}\). 

To better grasp the benefits of burst-mode coherent transmission and elasticity within a datacenter, consider the following example. With the standard 10 Gb/s, single-wavelength-per-fiber technology, a datacenter with 140,000 servers, each equipped with a gigabit Ethernet network adapter, and a \(10:1\) oversubscription ratio (i.e., some links are under-provisioned by a factor of 10) would require around 30,000 linecards (and cables), each operating at 10 Gb/s, to interconnect the large switches above the ToR level.

The same datacenter would require only 88 linecards (and cables) each operating at up to 250 Gb/s if it was built using a coherent optical burst switching ring. Although no real datacenter will ever be built using a single ring due to protection issues, this small example gives a good sense of what could be achieved using the elastic burst-mode coherent technology.

 

 

 

 

 

 

 

 

