This page uses content from Wikipedia and is licensed under CC BY-SA.

Data center

An operations engineer overseeing a network operations control room of a data center

A data center (American English)[1] [2] or data centre (British English)[3] is a dedicated space used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup components and infrastructure for power supply, data communications connections, environmental controls (e.g. air conditioning, fire suppression) and various security devices. A large data center is an industrial-scale operation using as much electricity as a small town.[4][5]


NASA mission control computer room circa 1962

Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center.[6][7] Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power, and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes.[6][8] Basic design-guidelines for controlling access to the computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The advent of Unix from the early 1970s led to the subsequent proliferation of freely available, Unix-compatible operating systems such as Linux, running on inexpensive PC hardware, during the 1990s. These machines were called "servers", as timesharing operating systems like Unix rely heavily on the client-server model to facilitate sharing unique resources between multiple users. The availability of inexpensive networking equipment, coupled with new standards for network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center", as applied to specially designed computer rooms, started to gain popular recognition about this time.[6][9]

The boom of data centers came during the dot-com bubble of 1997–2000.[10][11] Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provided commercial clients with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward private data centers, and were adopted largely because of their practical results. Data centers for cloud computing are called cloud data centers (CDCs), but nowadays the distinction between these terms has almost disappeared; all are increasingly subsumed under the term "data center".

With an increase in the uptake of cloud computing, business and government organizations scrutinize data centers to a higher degree in areas such as security, availability, environmental impact and adherence to standards. Standards documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data-center design. Well-known operational metrics for data-center availability can serve to evaluate the commercial impact of a disruption. Development continues in operational practice, and also in environmentally-friendly data-center design. Data centers are typically expensive to build and maintain.[10][12]

Requirements for modern data centers

Racks of telecommunications equipment in part of a data center

Modernization and data center transformation enhance performance and energy efficiency.[13]

IT operations are a crucial aspect of most organizational operations around the world. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of the mechanical cooling and power systems (including emergency backup power generators) serving the data center, along with redundant fiber optic connections.

The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers[14] specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.[15]

Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces,[16] provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:

  • Operate and manage a carrier's telecommunication network
  • Provide data center based applications directly to the carrier's customers
  • Provide hosted applications for a third party to provide services to their customers
  • Provide a combination of these and similar data center applications

Effective data center operation requires a balanced investment in both the facility and the housed equipment. The first step is to establish a baseline facility environment suitable for equipment installation. Standardization and modularity can yield savings and efficiencies in the design and construction of telecommunications data centers, both initially and over time.

Organizations are experiencing rapid IT growth but their data centers are aging. Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old.[13] Gartner, another research company, says data centers older than seven years are obsolete.[17] The growth in data (163 zettabytes by 2025[18]) is one factor driving the need for data centers to modernize.

In May 2011, data center research organization Uptime Institute reported that 36 percent of the large companies it surveyed expect to exhaust IT capacity within the next 18 months.[19]

Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach.[20] The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.

  • Standardization/consolidation: Reducing the number of data centers[21][22] and avoiding server sprawl[23] (both physical and virtual)[24] often includes replacing aging data center equipment,[25] and is aided by standardization.[26]
  • Virtualization: IT virtualization technologies help to lower capital and operational expenses[27] and reduce energy consumption.[28] Virtualization technologies are also used to create virtual desktops, which can then be hosted in data centers and rented out on a subscription basis.[29] Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations would be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.[30]
  • Automation: Automating tasks such as provisioning, configuration, patching, release management and compliance is needed, not least given the shortage of skilled IT workers.[26]
  • Security: Protection of virtual systems is integrated with the existing security of physical infrastructures.[31]

Machine room

The term "Machine Room" is at times used to refer to the large room within a Data Center where the actual Central Processing Unit is located; this may be separate from where high-speed printers are located. Air conditioning is most important in the machine room.[32][33][34]

Aside from air conditioning, there must be monitoring equipment, one type of which is to detect water prior to flood-level situations.[35] One company, Dorlen Products, has for several decades[36] had share-of-mind with its Water Alert product.[37] As of 2018, the company had two competing manufacturers (Invetex, Hydro-Temp) and three competing distributors (Longden, Northeast Flooring,[38] Slayton[39]).

Raised floor

A raised floor standards guide named GR-2930 was developed by Telcordia Technologies, a subsidiary of Ericsson.[40]

Although the first raised floor computer room was made by IBM in 1956,[41] and raised floors have "been around since the 1960s,"[42] it was in the 1970s that it became common for computer centers to use them, thereby allowing cool air to circulate more efficiently.[43][44]

The first purpose of the raised floor was to allow access for wiring.[41]

Lights out

The "lights-out" data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.[45][46]

Data center levels and tiers

The two organizations in the United States that publish data center standards are the Telecommunications Industry Association (TIA) and the Uptime Institute.

Telecommunications Industry Association

The Telecommunications Industry Association's TIA-942 standard for data centers, published in 2005 and updated four times since, defined four infrastructure levels.[47]

  • Level 1 - basically a server room, following basic guidelines
  • Level 4 - designed to host the most mission critical computer systems, with fully redundant subsystems and the ability to operate continuously for an indefinite period during primary power outages.

Uptime Institute - Data Center Tier Standards

Four tiers are defined by the Uptime Institute; the sketch after the list shows how each tier's downtime figure follows from its availability percentage:

  • Tier I:[48] lacks redundant IT equipment, with 99.671% availability, maximum of 1729 minutes annual downtime
  • Tier II: adds redundant infrastructure - 99.741% availability (1361 minutes)
  • Tier III: adds more data paths, duplicate equipment, and that all IT equipment must be dual-powered (99.982%, 95 minutes)
  • Tier IV: all cooling equipment is independently dual-powered; adds Fault-tolerance (99.995%, 26 minutes)
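
These downtime figures follow arithmetically from the availability percentages. A minimal sketch in Python (the language is an arbitrary choice for illustration; the tier percentages come from the list above):

    # Annual downtime implied by an availability percentage,
    # based on a 365-day year (365 * 24 * 60 = 525,600 minutes).
    MINUTES_PER_YEAR = 365 * 24 * 60

    def annual_downtime_minutes(availability_percent: float) -> float:
        """Minutes per year during which the site may be down."""
        return (1 - availability_percent / 100) * MINUTES_PER_YEAR

    for tier, availability in [("I", 99.671), ("II", 99.741),
                               ("III", 99.982), ("IV", 99.995)]:
        print(f"Tier {tier}: {availability}% -> "
              f"{annual_downtime_minutes(availability):.0f} min/year")
    # Prints 1729, 1361, 95 and 26 minutes, matching the list above.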

Data center design

The field of data center design has been growing for decades in various directions:

  • Major data centers did not appear in India until the late 1990s.[49]
  • A 65-story data center has already been proposed.[50]
  • The number of data centers, as of 2016, had grown beyond 3 million USA-wide, and to more than triple that number worldwide.[10]

Local building codes may govern the minimum ceiling heights and other parameters. Some of the considerations in the design of data centers are:

A typical server rack, commonly seen in colocation
  • Size - one room of a building, one or more floors, or an entire building; a data center can hold 1,000 or more servers[51]
  • Space, power, cooling, and costs in the data center.[52]
CRAC Air Handler
  • Mechanical engineering infrastructure - heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization.[53]
  • Electrical engineering infrastructure design - utility service planning; distribution, switching and bypass from power sources; uninterruptible power source (UPS) systems; and more.[53][54]

Design criteria and tradeoffs

  • Availability expectations: Cost of avoiding downtime should not exceed the cost of downtime itself[55]
  • Site selection: Location factors include proximity to power grids, telecommunications infrastructure, networking services, transportation lines and emergency services. Others are flight paths, neighboring uses, geological risks and climate (associated with cooling costs).[56]
    • Available power is often the hardest factor to change.

High availability

Various metrics exist for measuring the data-availability that results from data-center availability beyond 95% uptime, with the top of the scale counting how many "nines" can be placed after "99%".[57]

Modularity and flexibility

Cabinet aisle in a data center

Modularity and flexibility are key elements in allowing for a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.[58]

A modular data center may consist of data center equipment contained within shipping containers or similar portable containers.[59] Components of the data center can be prefabricated and standardized which facilitates moving if needed.[60]

Environmental control

The physical environment of a data center is rigorously controlled. Air conditioning is used to control the temperature and humidity in the data center; indirect cooling, such as using outside air, is also increasingly being implemented.[61][62]

Electrical power

A bank of batteries in a large data center, used to provide power until diesel generators can start

Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel / gas turbine generators.[63]

To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
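
As a rough illustrative sketch of N+1 sizing (the 900 kW load and 250 kW module capacity below are hypothetical figures, not from any cited source), a design provisions enough units to carry the full load plus one spare:

    import math

    def units_for_n_plus_1(load_kw: float, unit_capacity_kw: float) -> int:
        """N+1 sizing: N units sufficient for the load, plus one spare,
        so that any single unit can fail without dropping the load."""
        n = math.ceil(load_kw / unit_capacity_kw)
        return n + 1

    # Hypothetical example: a 900 kW critical load on 250 kW UPS modules
    # needs N = 4 modules, so an N+1 design installs 5.
    print(units_for_n_plus_1(900, 250))  # 5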

Low-voltage cable routing

Data cabling is typically routed through overhead cable trays in modern data centers.[64] But some[who?] still recommend under-raised-floor cabling for security reasons, and to allow for the addition of cooling systems above the racks if such an enhancement becomes necessary. Smaller or less expensive data centers without raised flooring may use anti-static tiles as a flooring surface. Computer cabinets are often organized into a hot aisle arrangement to maximize airflow efficiency.

Fire protection

FM200 Fire Suppression Tanks

Data centers feature fire protection systems, including passive and active design elements, as well as the implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage.

Two water-based options are:[65]

  • sprinkler
  • mist

A waterless alternative is a clean agent gaseous fire suppression system, which offers some of the benefits of chemical suppression.


Physical security also plays a large role in data centers. Physical access to the site is usually restricted to selected personnel, with controls including a layered security system often starting with fencing, bollards and mantraps.[66] Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of fingerprint-recognition mantraps is becoming commonplace.

Documenting access is required by some data protection regulations. To do so, some organizations use access control systems that provide a logging report of accesses. Logging can occur at the main entrance, at the entrances to mechanical rooms and white spaces, as well as at the equipment cabinets. Modern access control at the cabinet allows for integration with intelligent power distribution units so that the locks can be powered and networked through the same appliance.[67]

Energy use

Energy use is a central issue for data centers. Power draw for data centers ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building.[68] For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center.[69] By 2012, the cost of power for the data center was expected to exceed the cost of the original capital investment.[70]
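
The scale of these costs follows from straightforward arithmetic, since a data center draws power around the clock. In this minimal sketch, the 10 MW draw and the $0.10/kWh tariff are assumptions chosen purely for illustration:

    HOURS_PER_YEAR = 365 * 24  # 8,760

    def annual_energy_mwh(average_draw_mw: float) -> float:
        """Energy used per year (MWh) at a constant average draw (MW)."""
        return average_draw_mw * HOURS_PER_YEAR

    def annual_power_cost_usd(average_draw_mw: float, usd_per_kwh: float) -> float:
        """Yearly electricity bill; the tariff is an assumed input."""
        return annual_energy_mwh(average_draw_mw) * 1000 * usd_per_kwh

    # A hypothetical 10 MW facility at an assumed $0.10/kWh:
    print(annual_energy_mwh(10))            # 87600 (MWh/year)
    print(annual_power_cost_usd(10, 0.10))  # 8760000.0 (USD/year)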

According to a Greenpeace study, in 2012, data centers represented 21% of the electricity consumed by the IT sector, which was about 382 billion kWh a year.[71] U.S. data centers use more than 90 billion kWh of electricity a year. Global data centers used roughly 416 TWh in 2016, nearly 40% more than the entire United Kingdom's consumption.[72]

Greenhouse gas emissions

In 2007 the entire information and communication technologies (ICT) sector was estimated to be responsible for roughly 2% of global carbon emissions, with data centers accounting for 14% of the ICT footprint.[73] The US EPA estimates that servers and data centers were responsible for up to 1.5% of total US electricity consumption,[74] or roughly 0.5% of US GHG emissions,[75] for 2007. Under a business-as-usual scenario, greenhouse gas emissions from data centers were projected to more than double from 2007 levels by 2020.[73]

Siting is one of the factors that affect the energy consumption and environmental effects of a data center. In areas where the climate favors cooling and ample renewable electricity is available, the environmental effects will be more moderate. Countries with such favorable conditions, including Canada,[76] Finland,[77] Sweden,[78] Norway[79] and Switzerland,[80] are trying to attract cloud computing data centers.

An 18-month investigation by scholars at Rice University's Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore concluded that data center-related emissions would more than triple by 2020.[81]

Energy efficiency

The most commonly used metric to determine the energy efficiency of a data center is power usage effectiveness, or PUE. This simple ratio is the total power entering the data center divided by the power used by the IT equipment.

Total facility power consists of power used by IT equipment plus any overhead power consumed by anything that is not considered a computing or data communication device (i.e. cooling, lighting, etc.). An ideal PUE is 1.0 for the hypothetical situation of zero overhead power. The average data center in the US has a PUE of 2.0,[74] meaning that the facility uses two watts of total power (overhead + IT equipment) for every watt delivered to IT equipment. State-of-the-art data center energy efficiency is estimated to be roughly 1.2.[82] Some large data center operators like Microsoft and Yahoo! have published projections of PUE for facilities in development; Google publishes quarterly actual efficiency performance from data centers in operation.[83]
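
Restating the ratio in code makes the arithmetic concrete; this minimal sketch only re-derives the averages quoted above, not measurements from any particular facility:

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power usage effectiveness: total facility power / IT power."""
        return total_facility_kw / it_equipment_kw

    # The cited U.S. average, PUE 2.0: two watts drawn in total for
    # every watt delivered to IT equipment.
    print(pue(1000, 500))  # 2.0
    # State-of-the-art, roughly 1.2: only 0.2 W of overhead (cooling,
    # lighting, distribution losses) per watt of IT load.
    print(pue(600, 500))   # 1.2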

The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities.[84] The United States passed the Energy Efficiency Improvement Act of 2015, which requires federal facilities, including data centers, to operate more efficiently. In 2014, California enacted Title 24 of the California Code of Regulations, which mandates that every newly constructed data center must have some form of airflow containment in place as a measure to optimize energy efficiency.

The European Union has a similar initiative: the EU Code of Conduct for Data Centres.[85]

Energy use analysis

Often, the first step toward curbing energy use in a data center is to understand how energy is being used in the data center. Multiple types of analysis exist to measure data center energy use. Aspects measured include not just energy used by IT equipment itself, but also by the data center facility equipment, such as chillers and fans.[86] Recent research has shown the substantial amount of energy that could be conserved by optimizing IT refresh rates and increasing server utilization.[87]

Power and cooling analysis

Power is the largest recurring cost to the user of a data center.[88] A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures.[89] A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity.[90] Cooling is the second-largest power consumer after the servers themselves: cooling energy ranges from about 10% of total energy consumption in the most efficient data centers up to 45% in standard air-cooled data centers.

Energy efficiency analysis

An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's power use effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics.[91] However, the limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise.[92]

Computational fluid dynamics (CFD) analysis

This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center, predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption, using numerical modeling.[93] By predicting the effects of these environmental conditions, CFD analysis can be used to predict the impact of high-density racks mixed with low-density racks,[94] as well as the onward impact on cooling resources of poor infrastructure management practices and of AC failure or AC shutdown for scheduled maintenance.

Thermal zone mapping

Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center.[95]

This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units.
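
As a toy sketch of that classification step (all sensor readings and the 27 °C threshold below are invented example values; real thermal zone mapping builds a full three-dimensional model from many sensors):

    # Classify rack positions into hot and cool zones from spot
    # temperature readings (all values are assumed example data).
    sensor_temps_c = {
        ("row A", "rack 1"): 22.5,
        ("row A", "rack 2"): 24.0,
        ("row B", "rack 1"): 29.5,
        ("row B", "rack 2"): 31.0,
    }
    HOT_THRESHOLD_C = 27.0  # assumed cutoff for this example

    for (row, rack), temp in sorted(sensor_temps_c.items()):
        zone = "hot" if temp >= HOT_THRESHOLD_C else "cool"
        print(f"{row} {rack}: {temp:.1f} C -> {zone} zone")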

Green data centers

This water-cooled data center in the Port of Strasbourg, France, claims to be green.

Data centers use a lot of power, consumed by two main usages: the power required to run the actual equipment and the power required to cool the equipment. The first category is addressed by designing computers and storage systems that are increasingly power-efficient.[5] To bring down cooling costs, data center designers try to use natural ways to cool the equipment.

Many data centers are located near good fiber connectivity, power grid connections and concentrations of people to manage the equipment, but there are also circumstances where a data center can be miles away from its users and does not need much local management. Examples of this are the "mass" data centers of companies like Google or Facebook: these are built around many standardized servers and storage arrays, and the actual users of the systems are located all around the world. After the initial build of a data center, the staff numbers required to keep it running are often relatively low, especially for data centers that provide mass storage or computing power and do not need to be near population centers. Data centers in Arctic locations, where outside air provides all the cooling, are becoming more popular as cooling and electricity are the two main variable cost components.[96]

Energy reuse

The practice of cooling data centers is a topic of discussion. It is very difficult to reuse the heat that comes from air-cooled data centers. For this reason, data center infrastructures are more often equipped with heat pumps.[97] An alternative to heat pumps is the adoption of liquid cooling throughout a data center. Different liquid cooling techniques are mixed and matched to allow for a fully liquid-cooled infrastructure that captures all heat in water. The liquid technologies are categorized into three main groups: indirect liquid cooling (water-cooled racks), direct liquid cooling (direct-to-chip cooling) and total liquid cooling (complete immersion in liquid). This combination of technologies allows the creation of a thermal cascade, as part of temperature chaining scenarios, to create high-temperature water outputs from the data center.

Network infrastructure

An example of "rack mounted" servers

Communications in data centers today are most often based on networks running the IP protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world,[98] connected according to the data center network architecture. Redundancy of the Internet connection is often provided by using two or more upstream service providers (see multihoming).

Some of the servers at the data center are used for running the basic Internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers.

Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, and so on. Also common are monitoring systems for the network and some of the applications. Additional off-site monitoring systems are also typical, in case of a failure of communications inside the data center.

Software/data backup

Non-mutually exclusive options for backup are:

  • Onsite
  • Offsite

Onsite is traditional,[99] and one major advantage is immediate availability.

Offsite backup storage

Data backup techniques include keeping an encrypted copy of the data offsite. Methods used for transporting data are:[100]

  • having the customer write the data to a physical medium, such as magnetic tape, and then transporting the tape elsewhere
  • directly transferring the data to another site during the backup, using appropriate links
  • uploading the data "into the cloud"[101]


For quick deployment or disaster recovery, several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in very short time.

References

  1. ^ "An Oregon Mill Town Learns to Love Facebook and Apple". The New York Times. March 6, 2018.
  2. ^ "data center .. buildings and equipment
  3. ^ "Google announces London cloud computing data centre". July 13, 2017.
  4. ^ James Glanz (September 22, 2012). "Power, Pollution and the Internet". The New York Times. Retrieved 2012-09-25.
  5. ^ a b Mittal, Sparsh. "Power Management Techniques for Data Centers: A Survey".
  6. ^ a b c Angela Bartels (August 31, 2011). "Data Center Evolution: 1960 to 2000".
  7. ^ Old large computer rooms that housed machines like the U.S. Army's ENIAC, which were developed pre-1960 (1945), are now referred to as "data centers."
  8. ^ Until the early 1960s, it was primarily the government that used computers, which were large mainframes housed in rooms that today we call data centers.
  9. ^ In the 1990s, minicomputers, now called servers, were housed in the old computer rooms (now called data centers). "Server rooms" were built within company walls, co-located with low-cost networking equipment.
  10. ^ a b c Cynthia Harvey (July 10, 2017). "Data Center". Datamation.
  11. ^ There was considerable construction of Data Centers during the early 2000s, in the period of expanding dot-com businesses.
  12. ^ Cloud computing was supposed to be less expensive, yet ...
  13. ^ a b "Mukhar, Nicholas. "HP Updates Data Center Transformation Solutions," August 17, 2011".
  14. ^ "TIA-942 Certified Data Centers - Consultants - Auditors -".
  15. ^ "Archived copy". Archived from the original on 2011-11-06. Retrieved 2011-11-07.
  16. ^ "GR-3160 - Telecommunications Data Center - Telcordia".
  17. ^ "Sperling, Ed. "Next-Generation Data Centers," Forbes, March 15. 2010". Retrieved 2013-08-30.
  18. ^ "IDC white paper, sponsored by Seagate" (PDF).
  19. ^ Niccolai, James. "Data Centers Turn to Outsourcing to Meet Capacity Needs".
  20. ^ "Tang, Helen. "Three Signs it's time to transform your data center," August 3, 2010, Data Center Knowledge".
  21. ^ "the Era of Great Data Center Consolidation". Fortune. February 16, 2017. 'Friends don't let friends build data centers,' said Charles Phillips, chief executive officer of Infor, a business software maker
  22. ^ "This Wave of Data Center Consolidation is Different from the First One". February 8, 2018.
  23. ^ "12 New Year's resolutions for your data".
  24. ^ "Stop Virtual Server Sprawl".
  25. ^ "Top reasons to upgrade vintage data centers" (PDF).
  26. ^ a b Miller, Rich. "Complexity: Growing Data Center Challenge," Data Center Knowledge, May 16, 2007
  27. ^ Sims, David. "Carousel's Expert Walks Through Major Benefits of Virtualization," TMC Net, July 6, 2010
  28. ^ Delahunty, Stephen (August 15, 2011). "The New urgency for Server Virtualization". InformationWeek. Archived from the original on 2012-04-02.
  29. ^ "HVD: the cloud's silver lining" (PDF). Intrinsic Technology. Archived from the original (PDF) on 2012-10-02. Retrieved 2012-08-30.
  30. ^ "Gartner: Virtualization Disrupts Server Vendors". 2 December 2008.
  31. ^ "Ritter, Ted. Nemertes Research, "Securing the Data-Center Transformation Aligning Security and Data-Center Dynamics"".
  32. ^ "Data Center and Server Room Standards". CRAC (Computer Room Air Conditioner) Units: ... kit used ... to support ... Data Center Machine Room Floor.
  33. ^ "computers in machine room". ... machine room is ...
  34. ^ "IST Machine Room Uninterrupted Power Project". Our two Computer Room Air Conditioners (CRACs) ... providing redundant ...
  35. ^ (In this arena, only six companies were noted by Thomas, a financial data publisher) "Computer Room Flooring Water Detectors Suppliers". Thomas Publishing Company.
  36. ^ "How to Design A Computer Room". Computerworld. June 7, 1982. p. 120. Dorlen Products (Continued from Page 107) ... Liebert ...
  37. ^ URL [] - manufacturer name: Dorlen Products
  38. ^ (both of which focus on raised floors; this is not their main business)
  39. ^ a soup-to-nuts distributor/service company
  40. ^ "GR-2930 - NEBS: Raised Floor Requirements".
  41. ^ a b "Data Center Raised Floor History" (PDF).
  42. ^ "Tips for Ordering Replacement Raised Floor Tiles".
  43. ^ Hwaiyu Geng (2014). Data Center Handbook. ISBN 1118436636.
  44. ^ Steven Spinazzola (2005). "HVAC: The Challenge And Benefits of Under Floor Air Distribution Systems".
  45. ^ Kasacavage, Victor (2002). Complete book of remote access: connectivity and security. The Auerbach Best Practices Series. CRC Press. p. 227. ISBN 0-8493-1253-1.
  46. ^ Burkey, Roxanne E.; Breakfield, Charles V. (2000). Designing a total data solution: technology, implementation and deployment. Auerbach Best Practices. CRC Press. p. 24. ISBN 0-8493-0893-3.
  47. ^ "Telecommunications Infrastructure Standard for Data Centers". 2005-04-12. Retrieved 2017-02-28.
  48. ^ []
  49. ^ "About Data Center". (ESDS Pvt. Ltd.).
  50. ^ Patrick Thibodeau (April 12, 2016). "Envisioning a 65-story data center". Computerworld.
  51. ^ "Google Container Datacenter Tour (video)".
  52. ^ "Romonet Offers Predictive Modeling Tool For Data Center Planning". 29 June 2011.
  53. ^ a b "BICSI News Magazine - May/June 2010".
  54. ^ "Hedging Your Data Center Power".
  55. ^ Clark, Jeffrey. "The Price of Data Center Availability—How much availability do you need?", Oct. 12, 2011, The Data Center Journal "Archived copy". Archived from the original on 2011-12-03. Retrieved 2012-02-08.
  56. ^ "Five tips on selecting a data center location".
  57. ^ "IBM zEnterprise EC12 Business Value Video".
  58. ^ Niles, Susan. "Standardization and Modularity in Data Center Physical Infrastructure," 2011, Schneider Electric, page 4. "Archived copy" (PDF). Archived from the original (PDF) on 2012-04-16. Retrieved 2012-02-08.
  59. ^ "Strategies for the Containerized Data Center". 8 September 2011.
  60. ^ Niccolai, James. "HP says prefab data center cuts costs in half".
  61. ^ "tw telecom and NYSERDA Announce Co-location Expansion". Reuters. 2009-09-14.
  62. ^ "Air to air combat - indirect air cooling wars".
  63. ^ Detailed explanation of UPS topologies "EVALUATING THE ECONOMIC IMPACT OF UPS TECHNOLOGY" (PDF). Archived from the original (PDF) on 2010-11-22.
  64. ^ "Cable tray systems support cables' journey through the data center".
  65. ^ "Data Center Fire Suppression Systems: What Facility Managers Should Consider".
  66. ^ Sarah D. Scalet (2005-11-01). "19 Ways to Build Physical Security Into a Data Center". Retrieved 2013-08-30.
  67. ^ Systems and methods for controlling an electronic lock for a remote device, 2016-08-01, retrieved 2018-04-25
  68. ^ "Data Center Energy Consumption Trends". U.S. Department of Energy. Retrieved 2010-06-10.
  69. ^ J. Koomey, C. Belady, M. Patterson, A. Santos, K.D. Lange: Assessing Trends Over Time in Performance, Costs, and Energy Use for Servers Released on the web August 17th, 2009.
  70. ^ "Quick Start Guide to Increase Data Center Energy Efficiency" (PDF). U.S. Department of Energy. Archived from the original (PDF) on 2010-11-22. Retrieved 2010-06-10.
  72. ^ Danilak, Radoslav. "Why Energy Is A Big And Rapidly Growing Problem For Data Centers". Forbes. Retrieved 2018-07-06.
  73. ^ a b "Smart 2020: Enabling the low carbon economy in the information age" (PDF). The Climate Group for the Global e-Sustainability Initiative. Archived from the original (PDF) on 2011-07-28. Retrieved 2008-05-11.
  74. ^ a b "Report to Congress on Server and Data Center Energy Efficiency" (PDF). U.S. Environmental Protection Agency ENERGY STAR Program.
  75. ^ A calculation of data center electricity burden cited in the Report to Congress on Server and Data Center Energy Efficiency and electricity generation contributions to green house gas emissions published by the EPA in the Greenhouse Gas Emissions Inventory Report. Retrieved 2010-06-08.
  76. ^ Canada Called Prime Real Estate for Massive Data Computers - Globe & Mail Retrieved June 29, 2011.
  77. ^ Finland - First Choice for Siting Your Cloud Computing Data Center. Retrieved 4 August 2010.
  78. ^ "Stockholm sets sights on data center customers". Archived from the original on 19 August 2010. Retrieved 4 August 2010.
  79. ^ In a world of rapidly increasing carbon emissions from the ICT industry, Norway offers a sustainable solution Retrieved 1 March 2016.
  80. ^ Swiss Carbon-Neutral Servers Hit the Cloud. Retrieved 4 August 2010.
  81. ^ Katrice R. Jalbuena (October 15, 2010). "Green business news". EcoSeed. Archived from the original on 2016-06-18. Retrieved 2010-11-11.
  82. ^ "Data Center Energy Forecast" (PDF). Silicon Valley Leadership Group.
  83. ^ "Efficiency: How we do it – Data centers". Google. Retrieved 2015-01-19.
  84. ^ Commentary on introduction of Energy Star for Data Centers "Introducing EPA ENERGY STAR for Data Centers". Jack Pouchet. 2010-09-27. Archived from the original (Web site) on 2010-09-25. Retrieved 2010-09-27.
  85. ^ "EU Code of Conduct for Data Centres". Retrieved 2013-08-30.
  86. ^ "UNICOM Global :: Home" (PDF).
  87. ^ Bashroush, Rabih (2018). "A Comprehensive Reasoning Framework for Hardware Refresh in Data Centres". IEEE Transactions on Sustainable Computing.
  88. ^ Cosmano, Joe (2009), Choosing a Data Center (PDF), Disaster Recovery Journal, retrieved 2012-07-21
  89. ^ "HP's Green Data Center Portfolio Keeps Growing - InternetNews".
  90. ^ Inc. staff (2010), How to Choose a Data Center, retrieved 2012-07-21
  91. ^ "Siranosian, Kathryn. "HP Shows Companies How to Integrate Energy Management and Carbon Reduction," TriplePundit, April 5, 2011".
  92. ^ Bashroush, Rabih; Woods, Eoin (2017). "Architectural Principles for Energy-Aware Internet-Scale Applications". IEEE Software. 34 (3).
  93. ^ Bullock, Michael. "Computation Fluid Dynamics - Hot topic at Data Center World," Transitional Data Services, March 18, 2010. Archived January 3, 2012, at the Wayback Machine.
  94. ^ "Bouley, Dennis (editor). "Impact of Virtualization on Data Center Physical Infrastructure," The Green grid, 2010" (PDF).
  95. ^ "HP Thermal Zone Mapping plots data center hot spots".
  96. ^ "Fjord-cooled DC in Norway claims to be greenest". Retrieved 23 December 2011.
  97. ^ "Data Center Cooling with Heat Recovery" (PDF). January 23, 2017.
  98. ^ Noormohammadpour, Mohammad; Raghavendra, Cauligi (16 July 2018). "Datacenter Traffic Control: Understanding Techniques and Tradeoffs". Communications Surveys & Tutorials, IEEE. 20 (2): 1492-1525.
  99. ^ "Protecting Data Without Blowing The Budget, Part 1: Onsite Backup". Forbes. October 4, 2018.
  100. ^ "Iron Mountain vs Amazon Glacier: Total Cost Analysis" (PDF).
  101. ^ "Iron Mountain introduces cloud backup and management service". September 14, 2017.

External links