
What are Data Centers?

Data Centers house critical computing resources in a controlled environment and under centralized management, enabling enterprises to operate around the clock or according to their business needs.

These computing resources include:

  • Mainframes
  • Web and application servers
  • File and printer servers
  • Messaging servers
  • Application software and the operating systems that run them
  • Storage subsystems
  • Network Infrastructure (IP or Storage-Area Network (SAN))

Applications range from internal financial and human resources systems to external e-commerce and business-to-business applications.

Additionally, a number of servers support network operations and network-based applications.

Network operation applications include:

  • Network Time Protocol (NTP)
  • TN3270
  • FTP
  • Domain Name System (DNS)
  • Dynamic Host Configuration Protocol (DHCP)
  • Simple Network Management Protocol (SNMP)
  • TFTP
  • Network File System (NFS)

Network-based applications include:

  • IP telephony
  • Video streaming over IP
  • IP video conferencing
  • and so on

Virtually every enterprise has one or more Data Centers. Some have evolved rapidly to accommodate various enterprise application environments using distinct operating systems and hardware platforms. The evolution has resulted in complex and disparate environments that are expensive to manage and maintain.

In addition to the application environment, the supporting network infrastructure might not have changed fast enough to be flexible in accommodating ongoing redundancy, scalability, security, and management requirements.

A Data Center network design lacking in any of these areas risks not being able to sustain the expected service level agreements (SLAs). Data Center downtime, service degradation, or the inability to roll out new services implies that SLAs are not met, which leads to a loss of access to critical resources and a quantifiable impact on normal business operation. The impact could be as simple as increased response time or as severe as loss of data.

 

>> Data Center Goals

The benefits provided by a Data Center map to traditional business-oriented goals: support for business operations around the clock (resiliency), a lower total cost of operation and maintenance needed to sustain the business function (total cost of ownership), and rapid deployment of applications and consolidation of computing resources (flexibility).

These business goals generate a number of information technology (IT) initiatives, including:

  • Business continuance
  • Increased security in the Data Center
  • Application, server, and Data Center consolidation
  • Integration of applications, whether client/server, multitier (n-tier), or web services-related applications
  • Storage consolidation

These IT initiatives are a combination of addressing short-term problems and establishing a long-term strategic direction, all of which require an architectural approach to avoid unnecessary instability if the Data Center network is not flexible enough to accommodate future changes.

The design criteria are:

  • Availability
  • Scalability
  • Security
  • Performance
  • Manageability

These design criteria are applied to these distinct functional areas of a Data Center network:

  • Infrastructure services – Routing, switching, and server-farm architecture
  • Application services – Load balancing, Secure Sockets Layer (SSL) offloading, and caching
  • Security services – Packet filtering and inspection, intrusion detection, and intrusion prevention
  • Storage services – SAN architecture, Fibre Channel switching, backup, and archival
  • Business continuance – SAN extension, site selection, and Data Center interconnectivity

 

>> Data Center Facilities

Because Data Centers house critical computing resources, enterprises must make special arrangements with respect to both the facilities that house the equipment and the personnel required for a 24-by-7 operation.

These facilities are likely to support a high concentration of server resources and network infrastructure. The demands posed by these resources, coupled with the business criticality of the applications, create the need to address the following areas:

  • Power capacity
  • Cooling capacity
  • Cabling
  • Temperature and humidity controls
  • Fire and smoke systems
  • Physical security: restricted access and surveillance systems
  • Rack space and raised floors

 

>> Roles of Data Centers in the Enterprise

Figure 1-1 presents the different building blocks used in the typical enterprise network and illustrates the location of the Data Center within that architecture.

The building blocks of this typical enterprise network include:

  • Campus network
  • Private WAN
  • Remote access
  • Internet server farm
  • Extranet server farm
  • Intranet server farm

[Figure 1-1: Building blocks of a typical enterprise network]

 

Data Centers typically house many components that support the infrastructure building blocks, such as the core switches of the campus network or the edge routers of the private WAN.

Data Center designs can include any or all of the building blocks in Figure 1-1, including any or all server farm types. Each type of server farm can be a separate physical entity, depending on the business requirements of the enterprise.

For example, a company might build a single Data Center and share all resources, such as servers, firewalls, routers, switches, and so on. Another company might require that the three server farms be physically separated with no shared equipment.

Enterprise applications typically focus on one of the following major business areas:

  • Customer relationship management (CRM)
  • Enterprise Resource Planning (ERP)
  • Supply chain management (SCM)
  • Sales force automation (SFA)
  • Order processing
  • E-commerce

 

>> Roles of Data Centers in the Service Provider Environment

Data Centers in service provider (SP) environments, known as Internet Data Centers (IDCs), differ from Data Centers in enterprise environments in that they are a source of revenue, supporting collocated server farms for enterprise customers.

The SP Data Center is a service-oriented environment built to house, or host, an enterprise customer’s application environment under tightly controlled SLAs for uptime and availability. Enterprises also build IDCs when the sole reason for the Data Center is to support Internet-facing applications.

The IDCs are separate from the SP internal Data Centers that support the SP's own business application environments.

Whether built for internal-facing or collocated applications, application environments follow specific application architectural models such as the classic client/server or the n-tier model.

 

>> The Client/Server Model and Its Evolution

The classic client/server model describes the communication between an application and a user through the use of a server and a client. The classic client/server model consists of the following:

  • A thick client that provides a graphical user interface (GUI) on top of an application or business logic where some processing occurs
  • A server where the remaining business logic resides

Thick client is an expression referring to the complexity of the business logic (software) required on the client side and the hardware necessary to support it.

A thick client is then the portion of the application code running on the client's computer that is responsible for retrieving data from the server and presenting it to the user. The thick-client code requires a fair amount of processing capacity and resources to run, in addition to the management overhead caused by loading and maintaining it on the client base.

The server side is a single server running the presentation, application, and database code that uses multiple internal processes to communicate information across these distinct functions.

The exchange of information between client and server is mostly data because the thick client performs local presentation functions so that the end user can interact with the application using a local user interface.

Client/server applications are still widely used, yet the client and server use proprietary interfaces and message formats that different applications cannot easily share.
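
To make the division of labor concrete, the following minimal Python sketch mimics the model: the server owns the data and business logic, and the thick client fetches raw data over a proprietary exchange and performs the presentation locally. The port and message format are invented for illustration.

  import socket
  import threading
  import time

  HOST, PORT = "127.0.0.1", 9000   # hypothetical address and port

  def server():
      # The server side holds the remaining business logic: it answers data queries.
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind((HOST, PORT))
          srv.listen(1)
          conn, _ = srv.accept()
          with conn:
              query = conn.recv(1024).decode()               # proprietary message format
              conn.sendall(("result-for:" + query).encode())

  def thick_client():
      # The thick client retrieves data and handles all presentation locally.
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
          cli.connect((HOST, PORT))
          cli.sendall(b"account-balance")
          data = cli.recv(1024).decode()
          print("[local GUI would render]", data)            # presentation on the client

  threading.Thread(target=server, daemon=True).start()
  time.sleep(0.2)                                            # let the server start listening
  thick_client()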

Part a of Figure 1-2 shows the client/server model.

[Figure 1-2: Evolution from the classic client/server model to the n-tier model]

The most fundamental changes to the thick client and single-server model started when web-based applications first appeared.

Web-based applications rely on more standard interfaces and message formats, making applications easier to share. HTML and HTTP provide a standard framework that allows generic clients such as web browsers to communicate with generic applications, as long as those applications use web servers for the presentation function.

HTML describes how the client should render the data; HTTP is the transport protocol used to carry HTML data. Microsoft Internet Explorer is an example of a client (web browser); Apache and Microsoft Internet Information Server (IIS) are examples of web servers.
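
As a rough illustration of this standard framework, the short Python sketch below serves an HTML document over HTTP; any generic web browser can render it without application-specific client code. The page content and port are invented for illustration.

  from http.server import BaseHTTPRequestHandler, HTTPServer

  PAGE = b"<html><body><h1>Quarterly Report</h1><p>Revenue is up.</p></body></html>"

  class Handler(BaseHTTPRequestHandler):
      def do_GET(self):
          # HTTP carries the payload; HTML tells the generic client how to render it.
          self.send_response(200)
          self.send_header("Content-Type", "text/html")
          self.end_headers()
          self.wfile.write(PAGE)

  if __name__ == "__main__":
      HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()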

The migration from the classic client/server to a web-based architecture implies the use of thin clients (web browsers), web servers, application servers, and database servers.

The web browser interacts with web servers and application servers, and the web servers interact with application servers and database servers. These distinct functions supported by the servers are referred to as tiers, which, together with the client tier, make up the n-tier model.

 

>> The n-Tier Model

Part b of Figure 1-2 shows the n-tier model. Figure 1-2 presents the evolution from the classic client/server model to the n-tier model.

The client/server model uses the thick client with its own business logic and GUI to interact with a server that provides the counterpart business logic and database functions on the same physical device.

The n-tier model uses a thin client (a web browser) to access the data in many different ways. The server side of the n-tier model is divided into distinct functional areas that include the web, application, and database servers.

The n-tier model relies on a standard web architecture where the web browser formats and presents the information retrieved from the web server. The server side in the web architecture consists of multiple and distinct servers that are functionally separate. The n-tier model can be the client and a web server; or the client, the web server, and an application server; or the client, web, application, and database servers. This model is more scalable and manageable, and even though it is more complex than the classic client/server model, it enables application environments to evolve toward distributed computing environments.

The n-tier model marks a significant step in the evolution of distributed computing from the classic client/server model. It provides a mechanism to increase the performance and maintainability of client/server applications while simplifying the control and management of the application code.
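
The tiering is easy to caricature in code. In the minimal Python sketch below, each tier is a separate function; in a real deployment, each would run on a separate server reachable only by the tier above it. The data and interfaces are invented for illustration.

  # Toy n-tier request path: each tier is a distinct function that could run
  # on a separate physical server and network segment.

  DATABASE = {"42": {"name": "Widget", "stock": 17}}       # back-end tier (sample data)

  def database_tier(key):
      return DATABASE.get(key)                             # data access only

  def application_tier(product_id):
      record = database_tier(product_id)                   # business logic calls the back end
      if record is None:
          return {"error": "not found"}
      return {"product": record["name"], "available": record["stock"] > 0}

  def web_tier(http_path):
      # Presentation: turn the application result into HTML for the thin client.
      result = application_tier(http_path.rsplit("/", 1)[-1])
      return "<html><body>" + str(result) + "</body></html>"

  print(web_tier("/products/42"))                          # a thin client (browser) request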

Figure 1-3 introduces the n-tier model and maps each tier to a partial list of currently available technologies at each tier.

[Figure 1-3: The n-tier model mapped to currently available technologies at each tier]

Notice that the client-facing servers provide the interface to access the business logic at the application tier. Although some applications provide a non-web-based front end, current trends indicate the process of “web-transforming” business applications is well underway.

This process implies that the front end relies on a web-based interface facing the users, which interacts with a middle layer of applications that obtain data from the back-end systems.

These middle-tier applications and the back-end database systems are distinct pieces of logic that perform specific functions. The logical separation of front-end, application, and back-end functions has enabled their physical separation. The implication is that the web and application servers, as well as the application and database servers, no longer have to coexist on the same physical server. This separation increases the scalability of the services and eases the management of large-scale server farms. From a network perspective, these groups of servers performing distinct functions could also be physically separated into different network segments for security and manageability reasons.

 

>> Multitier Architecture Application Environment

Multitier architectures refer to the Data Center server farms supporting applications that provide a logical and physical separation between various application functions, such as web, application, and database (the n-tier model).

The network architecture is then dictated by the requirements of the applications in use and their specific availability, scalability, security, and management goals. For each server-side tier, there is a one-to-one mapping to a network segment that supports the specific application function and its requirements. Because the resulting network segments are closely aligned with the tiered applications, they are described in reference to the different application tiers.

Figure 1-4 presents the mapping from the n-tier model to the supporting network segments used in a multitier design.

[Figure 1-4: Mapping from the n-tier model to the network segments of a multitier design]

The web server tier is mapped to the front-end segment, the business logic to the application segment, and the database tier to the back-end segment.

Notice that all the segments supporting the server farm connect to the access layer switches, which in a multitier architecture are different access switches supporting the various server functions.
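
Expressed as data, that one-to-one mapping might look like the following Python sketch; the VLAN IDs and subnets are invented for illustration.

  # Hypothetical tier-to-segment mapping for a multitier design.
  SEGMENTS = {
      "front-end":   {"tier": "web servers",         "vlan": 10, "subnet": "10.10.10.0/24"},
      "application": {"tier": "application servers", "vlan": 20, "subnet": "10.10.20.0/24"},
      "back-end":    {"tier": "database servers",    "vlan": 30, "subnet": "10.10.30.0/24"},
  }

  for name, seg in SEGMENTS.items():
      print(f"{name:12s} VLAN {seg['vlan']:3d}  {seg['subnet']:15s}  <- {seg['tier']}")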

As application architectures evolve and depart from multitier environments, a network is still required to support the interaction between the communicating entities.

 

>> Data Center Architecture

The enterprise Data Center architecture includes many functional areas, as presented earlier in Figure 1-1.

The focus of this section is the architecture of a generic enterprise Data Center connected to the Internet and supporting an intranet server farm.

Other types of server farms follow the same architecture used for intranet server farms yet with different scalability, security, and management requirements.

Figure 1-5 introduces the topology of the Data Center architecture.

[Figure 1-5: Topology of the enterprise Data Center architecture]

Figure 1-5 shows a fully redundant enterprise Data Center supporting the following areas:

  • No single point of failure – redundant components
  • Redundant Data Centers

The core connectivity functions supported by Data Centers are Internet Edge connectivity, campus connectivity, and server-farm connectivity, as presented by Figure 1-5.

Internet Edge

The Internet Edge provides the connectivity from the enterprise to the Internet and its associated redundancy and security functions, as follows:

  • Redundant connections to different service providers
  • External and internal routing through the External Border Gateway Protocol (EBGP) and the Internal Border Gateway Protocol (IBGP)
  • Edge security to control access from the Internet
  • Control for access to the Internet from the enterprise clients

Campus Core Switches

The campus core switches provide connectivity between the Internet Edge, the intranet server farms, the campus network, and the private WAN.

The core switches physically connect to the devices that provide access to other major network areas, such as the private WAN edge routers, the server-farm aggregation switches, and campus distribution switches.

Network Layers of the Server Farm

As depicted in Figure 1-6, the following are the network layers of the server farm:

  • Aggregation layer
  • Access layer
      — Front-end segment
      — Application segment
      — Back-end segment
  • Storage layer
  • Data Center transport layer

Some of these layers depend on the specific implementation of the n-tier model or the requirements for Data Center-to-Data Center connectivity, which implies that they might not exist in every Data Center implementation.

Although some of these layers might be optional in the Data Center architecture, they represent the trend in continuing to build highly available and scalable enterprise Data Centers.

This trend specifically applies to the storage and Data Center transport layers supporting storage consolidation, backup and archival consolidation, high-speed mirroring or clustering between remote server farms, and so on.

 

>> Aggregation Layer

[Figure 1-6: Network layers of the server farm]

The aggregation layer is the aggregation point for devices that provide services to all server farms. These devices are multilayer switches, firewalls, load balancers, and other devices that typically support services across all servers.

The multilayer switches are referred to as aggregation switches because of the aggregation function they perform. Service devices are shared by all server farms. Specific server farms are likely to span multiple access switches for redundancy, thus making the aggregation switches the logical connection point for service devices, instead of the access switches.

As depicted in Figure 1-6, the aggregation switches provide basic infrastructure services and connectivity for other service devices. The aggregation layer is analogous to the traditional distribution layer in the campus network in its Layer 3 and Layer 2 functionality.

The aggregation switches support the traditional switching of packets at Layer 3 and Layer 2 in addition to the protocols and features to support Layer 3 and Layer 2 connectivity.

 

>> Access Layer

The access layer provides Layer 2 connectivity and Layer 2 features to the server farm. Because each server function in a multitier server farm could be located on different access switches on different segments, the following sections explain the details of each segment.

1. Front-End Segment

The front-end segment consists of Layer 2 switches, security devices or features, and the front-end server farms.

The front-end segment is analogous to the traditional access layer of the hierarchical campus network design and provides the same functionality.

The access switches are connected to the aggregation switches in the manner depicted in Figure 1-6.

The front-end server farms typically include FTP, Telnet, TN3270 (mainframe terminals), Simple Mail Transport Protocol (SMTP), web servers, DNS servers, and other business application servers, in addition to network-based application servers such as IP television (IPTV) broadcast servers and IP telephony call managers that are not placed at the aggregation layer because of port density or other design requirements.

The specific network features required in the front-end segment depend on the servers and their functions. For example, if a network supports video streaming over IP, it might require multicast, or if it supports Voice over IP (VoIP), quality of service (QoS) must be enabled.

The need for Layer 2 adjacency is the result of Network Address Translation (NAT) and other header rewrite functions performed by load balancers or firewalls on traffic destined to the server farm. The return traffic must be processed by the same device that performed the header rewrite operations.
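
A toy model of that state in Python (all addresses invented): the device that rewrites the header is the only one holding the translation entry needed to undo the rewrite on the way back, which is why the return traffic must pass through it.

  # The translation state lives only on the device that rewrote the header.
  nat_table = {}   # (client, virtual_ip) -> chosen real server

  def forward(client, virtual_ip):
      real_server = "10.10.10.11"                  # the load balancer picks a server
      nat_table[(client, virtual_ip)] = real_server
      return real_server                           # packet leaves with a rewritten destination

  def reverse(client, virtual_ip):
      # Without this entry, the reply could not be rewritten back to the
      # virtual IP, and the client would discard it.
      return nat_table[(client, virtual_ip)]

  chosen = forward("192.0.2.7", "203.0.113.80")
  assert reverse("192.0.2.7", "203.0.113.80") == chosen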

Layer 2 connectivity is also required between servers that use clustering for high availability or that must communicate on the same subnet. This requirement implies that multiple access switches supporting front-end servers can support the same set of VLANs to provide Layer 2 adjacency between them.

Security features include Address Resolution Protocol (ARP) inspection, broadcast suppression, private VLANs, and others that are enabled to counteract Layer 2 attacks.

Security devices include network-based intrusion detection systems (IDSs) and host-based IDSs to monitor and detect intruders and prevent vulnerabilities from being exploited. In general, infrastructure components such as the Layer 2 switches provide intelligent network services that enable front-end servers to provide their functions.

Note that the front-end servers are typically taxed in their I/O and CPU capabilities. For I/O, this strain is a direct result of serving content to the end users; for CPU, it is the result of the connection rate and the number of concurrent connections that must be processed.

Scaling mechanisms for front-end servers typically include adding more servers with identical content and then equally distributing the load they receive using load balancers.

Load balancers distribute the load (or load balance) based on Layer 4 or Layer 5 information. Layer 4 is widely used for front-end servers to sustain a high connection rate without necessarily overwhelming the servers.
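
A Layer 4 decision can be as simple as hashing the connection's 4-tuple, as in the Python sketch below; the balancer never has to look past the TCP header. The server addresses are invented for illustration.

  import hashlib

  SERVERS = ["10.10.10.11", "10.10.10.12", "10.10.10.13"]   # hypothetical web farm

  def pick_server(src_ip, src_port, dst_ip, dst_port):
      # Layer 4 decision: only the TCP/IP 4-tuple is examined, so the choice
      # is cheap and sustains a high connection rate.
      key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
      digest = int(hashlib.md5(key).hexdigest(), 16)
      return SERVERS[digest % len(SERVERS)]

  print(pick_server("192.0.2.7", 51312, "203.0.113.80", 80))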

Scaling mechanisms for web servers also include the use of SSL offloaders and Reverse Proxy Caching (RPC).
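
In spirit, an SSL offloader is a TLS-terminating proxy. The single-request Python sketch below decrypts on behalf of the web server and forwards plaintext HTTP to it, so the server's CPU is freed from cryptographic work. The certificate files and backend address are invented for illustration, and a real offloader would relay many connections concurrently.

  import socket
  import ssl

  BACKEND = ("10.10.10.11", 80)                    # hypothetical web server

  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")

  with socket.create_server(("0.0.0.0", 443)) as listener:
      with ctx.wrap_socket(listener, server_side=True) as tls_listener:
          conn, _ = tls_listener.accept()          # TLS handshake and decryption happen here
          request = conn.recv(65536)               # plaintext HTTP after decryption
          with socket.create_connection(BACKEND) as backend:
              backend.sendall(request)             # forward in the clear to the web server
              conn.sendall(backend.recv(65536))    # relay one response back, re-encrypted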

 

2. Application Segment

The application segment uses the same network infrastructure components as the front-end segment and, in addition, houses the application servers.

The features required by the application segment are almost identical to those needed in the front-end segment, albeit with additional security.

This segment relies strictly on Layer 2 connectivity, yet it requires additional security that reflects how much protection the application servers need, because they have direct access to the database systems.

Depending on the security policies, this segment uses firewalls between web and application servers, IDSs, and host IDSs. Like the front-end segment, the application segment infrastructure must support intelligent network services as a direct result of the functions provided by the application services.

Application servers run a portion of the software used by business applications and provide the communication logic between the front end and the back end, which is typically referred to as middleware or business logic.

Application servers translate user requests to commands that the back-end database systems understand. Increasing the security at this segment focuses on controlling the protocols used between the front-end servers and the application servers to avoid trust exploitation and attacks that exploit known application vulnerabilities.

Figure 1-7 introduces the front-end, application, and back-end segments in a logical topology.

[Figure 1-7: Front-end, application, and back-end segments in a logical topology]

Note that the application servers are typically CPU-stressed because they need to support the business logic. Scaling mechanisms for application servers also include load balancers. Load balancers can select the right application server based on Layer 5 information.

Deep packet inspection on load balancers allows the partitioning of application server farms by content. For example, requests could be directed to a dedicated server farm based on the scripting language (.cgi, .jsp, and so on). This arrangement allows application administrators to control and manage the server behavior more efficiently.
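
A content rule of that kind reduces to a simple match on Layer 5 information, as in the Python sketch below; the farm addresses are invented for illustration.

  # Hypothetical content-based partitioning: inspect the URL and steer the
  # request to a server farm dedicated to that content type.
  FARMS = {
      ".jsp": ["10.10.20.11", "10.10.20.12"],   # farm tuned for Java application pages
      ".cgi": ["10.10.20.21"],                  # farm tuned for CGI scripts
  }
  DEFAULT_FARM = ["10.10.20.31"]

  def select_farm(url_path):
      for extension, farm in FARMS.items():
          if url_path.endswith(extension):
              return farm
      return DEFAULT_FARM

  print(select_farm("/store/checkout.jsp"))     # -> Java farm
  print(select_farm("/images/logo.png"))        # -> default farm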

 

3. Back-End Segment

The back-end segment is the same as the previous two segments except that it supports the connectivity to database servers. The back-end segment features are almost identical to those of the application segment, yet the security considerations are more stringent and aim at protecting the data, critical or not.

The hardware supporting the database systems ranges from medium-sized servers to high-end servers, some with direct locally attached storage and others using disk arrays attached to a SAN.

When the storage is separated, the database server is connected to both the Ethernet switch and the SAN. The connection to the SAN is through a Fibre Channel interface. Figure 1-8 presents the back-end segment in reference to the storage layers. Notice the connections from the database server to the back-end segment and the storage layer.

Note that in other connectivity alternatives, the security requirements do not call for physical separation between the different server tiers.

 

>> Storage Layer

The storage layer consists of the storage infrastructure, such as Fibre Channel switches and routers that support Small Computer System Interface (SCSI) over IP (iSCSI) or Fibre Channel over IP (FCIP). Storage network devices provide the connectivity to servers, storage devices such as disk subsystems, and tape subsystems.

SAN environments in Data Centers commonly use Fibre Channel to connect servers to the storage devices and to transmit SCSI commands between them. Storage networks allow the transport of SCSI commands over the network. This transport is possible over the Fibre Channel infrastructure or over IP using FCIP and iSCSI.

FCIP and iSCSI are the emerging Internet Engineering Task Force (IETF) standards that enable SCSI access and connectivity over IP.

The network used by these storage devices is referred to as a SAN. The Data Center is the location where the consolidation of applications, servers, and storage occurs and where the highest concentration of servers is likely, and thus where SANs are located. The current trends in server and storage consolidation are the result of the need for increased efficiency in the application environments and for lower costs of operation.

Data Center environments are expected to support high-speed communication between servers and storage and between storage devices. These high-speed environments require block-level access to the information supported by SAN technology.

There are also requirements to support file-level access specifically for applications that use Network Attached Storage (NAS) technology. Figure 1-8 introduces the storage layer and the typical elements of single and distributed Data Center environments.

[Figure 1-8: The storage layer and typical elements of single and distributed Data Center environments]

Figure 1-8 shows a number of database servers as well as tape and disk arrays connected to the Fibre Channel switches.

Servers connected to the Fibre Channel switches are typically critical servers and are always dual-homed. Other common alternatives to increase availability include mirroring, replication, and clustering between database systems or storage devices.

These alternatives typically require the data to be housed in multiple facilities, thus lowering the likelihood of a site failure preventing normal systems operation.

Site failures are recovered by replicas of the data at different sites, thus creating the need for distributed Data Centers and distributed server farms and the obvious transport technologies to enable communication between them.

 

>> Data Center Transport Layer

The Data Center transport layer includes the transport technologies required for the following purposes:

  • Communication between distributed Data Centers for rerouting client-to-server traffic
  • Communication between distributed server farms located in distributed Data Centers for the purposes of remote mirroring, replication, or clustering

Transport technologies must support a wide range of requirements for bandwidth and latency depending on the traffic profiles, which imply a number of media types ranging from Ethernet to Fibre Channel.

For user-to-server communication, the possible technologies include Frame Relay, ATM, DS channels in the form of T1/E1 circuits, Metro Ethernet, and SONET.

For server-to-server and storage-to-storage communication, the technologies required are dictated by the server media types and the transport technologies that support them transparently. For example, as depicted in Figure 1-8, storage devices use Fibre Channel and Enterprise Systems Connectivity (ESCON), which should be supported by the metro optical transport infrastructure between the distributed server farms.

If ATM and Gigabit Ethernet (GE) are used between distributed server farms, the metro optical transport could consolidate the use of fiber more efficiently. For example, instead of having dedicated fiber for ESCON, GE, and ATM, the metro optical technology could transport them concurrently.

The likely transport technologies are dark fiber, coarse wavelength division multiplexing (CWDM), and dense wavelength division multiplexing (DWDM), which offer transparent connectivity (Layer 1 transport) between distributed Data Centers for media types such as GE, Fibre Channel, ESCON, and fiber connectivity (FICON).

