The art of physical, outer perimeter security for a Data Center

A problem I once faced was designing physical security for a data center. Documents covering DC security tend to focus on guards, patrols, and CCTV systems, plus the technology-heavy inner layers such as swipe cards and firewalls.

Physical security is not quite that simple, however. For DCs on large sites, where the land area is at least ten times the building footprint, everything becomes much easier, and meeting the basic Tier 3 criteria of TIA-942 at a macro level is feasible.

But for DCs on smaller plots, the problem is completely different. The requirements for fence setback, fence height, perimeter lighting, and so on are relatively vague, and sometimes absent even from standard training courses. I found the following article valuable and am sharing it here.


When information security professionals think of perimeter security, firewalls, SSL VPN, RADIUS servers, and other technical controls immediately come to mind.  However, guarding the physical perimeter is just as important.

During the past weeks, I’ve written a series of articles that describe various components of an effective physical security strategy.  In this final article in the series, we’ll look closely at best practices for constructing the initial barrier to physical access to your information assets: the outer perimeter.

Components of a physical perimeter

Having served several years in the military police, I know the concept of a physical perimeter has two meanings.  However, we'll skip the combat definition, with its automatic weapons placement and final protective lines, and focus on facility security.  (At least I hope your information asset physical security isn't that strict, Department of Defense facilities excluded…)

The outer perimeter of a facility is its first line of defense.  It can consist of two types of barriers: natural and structural.  According to the United States Army’s Physical Security Field Manual, FM 3-19.30 (2001, p. 4-1):

  • Natural protective barriers are mountains and deserts, cliffs and ditches, water obstacles, or other terrain features that are difficult to traverse.
  • Structural protective barriers are man-made devices (such as fences, walls, floors, roofs, grills, bars, roadblocks, signs, or other construction) used to restrict, channel, or impede progress.

In other words, if you can use the terrain, do so.  Otherwise, you have to spend a little money and build your own obstructions.

The most common type of structural outer perimeter barrier is the venerable chain-link fence.  However, it isn’t good enough to simply throw up a fence and call it a day.  Instead, your fence, a preventive device, should be supported by one or more additional prevention and detection controls.  The number of controls you implement and to what extent are dependent upon the risks your organization faces.

Fence basics

A fence is both a psychological and a physical barrier.  The psychology comes into play when casual passers-by encounter it.  It tells them that the area on the other side is off-limits, and the owner would probably rather they didn’t walk across the property.  A fence or wall of three to four feet is good enough for this.

For those who are intent on getting to your data center or other collection of information assets, fence height should be about seven feet.  See Figure A.  For facilities with high risk concerns, a top guard is usually added.  The top guard consists of three to four strands of barbed wire spaced about six inches apart and extends outward at a 45 degree angle.  The total height, including fence and top guard, should reach eight feet.

Figure A

Fence installation

Installing a perimeter fence requires some planning.  See Figure B.  Set the poles in concrete and ensure the links are pulled tight.  The links should form squares with sides of about two inches.  The fence should not leave more than a two-inch gap between its lower edge and the ground.

Figure B

Figure C depicts other considerations regarding fence placement.  First, identify any culverts, ditches, or objects that cause an opening beneath the fence.  Remember the two-inch rule above: there should be no gaps greater than two inches below the edge of the fence.  When any opening under the fence, whether enclosed (as with the culvert in our example) or open, exceeds 96 square inches in area, it should be blocked (FM 3-19.30, p. 4-5).  This is a good rule of thumb.  However, use common sense: if you think a hole is big enough for a person to defeat your fence, block it.  Figures D and E (MIL-HDBK-1013/10, 1993, p. 15) show two methods.

Figure C
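As a quick illustration, the two rules of thumb above (the two-inch ground gap and the 96-square-inch opening limit) can be sketched as simple checks. This is illustrative code only, not part of FM 3-19.30:

```python
# Hypothetical helper illustrating the FM 3-19.30 rules of thumb quoted above:
# openings under or through the fence larger than 96 sq in should be blocked,
# and the gap below the fence edge should not exceed 2 inches.

MAX_OPENING_SQ_IN = 96
MAX_GROUND_GAP_IN = 2

def opening_needs_blocking(width_in: float, height_in: float) -> bool:
    """Return True if an opening (e.g., a culvert) exceeds 96 square inches."""
    return width_in * height_in > MAX_OPENING_SQ_IN

def ground_gap_ok(gap_in: float) -> bool:
    """Return True if the gap between the fence edge and the ground is within 2 inches."""
    return gap_in <= MAX_GROUND_GAP_IN

# A 12 in x 10 in culvert (120 sq in) exceeds the limit and should be blocked.
print(opening_needs_blocking(12, 10))  # True
print(ground_gap_ok(1.5))              # True
```

As the article notes, treat the numbers as a starting point: a hole small enough to pass this check may still be worth blocking if a person could defeat the fence through it.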

Clear the area on both sides of the fence to provide a clear view of future intruders.  The recommended clearances, as shown in Figure C, are:

  • 50 feet between the fence and any internal natural or man-made obstructions.
  • 20 feet between the fence and any external natural or man-made obstructions.

Natural obstructions include trees and high weeds or grass.

Figure D

Figure E

Supporting controls

Vehicle Barriers

When vehicular intrusions are a concern, support the fence and gate opening with bollards or other obstructions, as depicted in Figure F (FM 3-19.30, p. 3-4).

Figure F


Lighting

Lighting is a critical piece of perimeter security.  It works as a deterrent and helps human controls (roving guards, monitored cameras, first responders to alarms, etc.) detect intruders.  Lighting standards are fairly simple:

  • Provide sufficient light for the detection controls used
  • Position lighting to “blind” intruders and keep security personnel in shadows
  • Provide extra lighting for gates, areas of shadow, or probable ingress routes, as shown in Figure C.

A general rule to start with is to mount lights at a height of about eight feet, providing illumination of about two foot-candles.

Intrusion detection controls

As with our technical controls, we make the assumption that if someone wants to get through our perimeter, they will.  So we need to supplement our fence with intrusion detection technology.

Use of detection technology must be coupled with a documented and practiced response process.

The final word

The field of physical security is broad and is often a dedicated career path.  So the information here is not intended to make you an expert.  However, organizations are increasingly integrating computer and physical security under one manager.

The need for information security professionals to understand physical controls is great enough that the most popular certifications, such as CISSP, require some knowledge of the topic.  Don’t be left behind.

Finally, many of the controls discussed in this article are too extreme for many organizations.  However, it's always better to understand all your options.

About Tom Olzak

Tom is a security researcher for the InfoSec Institute and an IT professional with over 30 years of experience. He has written three books, Just Enough Security, Microsoft Virtualization, and Enterprise Security: A Practitioner’s Guide (to be publish…


10 “must haves” your data center needs to be successful

The evolution of the data center may transform it into a very different environment thanks to the advent of new technologies such as cloud computing and virtualization. However, there will always be certain essential elements required by any data center to operate smoothly and successfully.  These elements will apply whether your data center is the size of a walk-in closet or an airplane hangar – or perhaps even on a floating barge, which rumors indicate Google is building:

Figure A

 Credit: Wikimedia Commons

1. Environmental controls

A standardized and predictable environment is the cornerstone of any quality data center.  It’s not just about keeping things cool and maintaining appropriate humidity levels (according to Wikipedia, the recommended temperature range is 61-75 degrees Fahrenheit/16-24 degrees Celsius and 40-55% humidity). You also have to factor in fire suppression, air flow and power distribution.  One company I worked at was so serious about ensuring their data center remained as pristine as possible that it mandated no cardboard boxes could be stored in that room. The theory behind this was that cardboard particles could enter the airstream and potentially pollute the servers thanks to the distribution mechanism which brought cooler air to the front of the racks. That might be extreme but it illustrates the importance of the concept.
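As a sketch of how such environmental thresholds might be checked in practice, using the temperature and humidity ranges cited above (the function itself is a made-up illustration, not a product API):

```python
# Minimal environmental range check. The ranges are the ones cited in the
# article (61-75 F / 16-24 C, 40-55% relative humidity); the check itself
# is an illustrative sketch, not a real monitoring product.

TEMP_RANGE_F = (61.0, 75.0)
HUMIDITY_RANGE_PCT = (40.0, 55.0)

def check_environment(temp_f: float, humidity_pct: float) -> list[str]:
    """Return a list of out-of-range warnings (an empty list means all clear)."""
    warnings = []
    if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
        warnings.append(f"temperature {temp_f} F outside {TEMP_RANGE_F}")
    if not HUMIDITY_RANGE_PCT[0] <= humidity_pct <= HUMIDITY_RANGE_PCT[1]:
        warnings.append(f"humidity {humidity_pct}% outside {HUMIDITY_RANGE_PCT}")
    return warnings

print(check_environment(72, 45))   # [] -- within both ranges
print(check_environment(80, 30))   # two warnings
```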

2. Security

It goes without saying (but I'm going to say it anyhow) that physical security is a foundation of a reliable data center. Keeping your systems under lock and key and providing entry only to authorized personnel goes hand in hand with permitting only the necessary access to servers, applications and data over the network. It's safe to say that the most valuable assets of any company (other than people, of course) reside in the data center. Small-time thieves will go after laptops or personal cell phones. Professionals will target the data center. Door locks can be overcome, so I recommend alarms as well. Of course, alarms can also be fallible, so think about your next measure: locking the server racks? Backup power for your security system? Hiring security guards? It depends on your security needs, but keep in mind that “security is a journey, not a destination.”

3. Accountability

Speaking as a system administrator, I can attest that most IT people are professional and trustworthy.  However, that doesn’t negate the need for accountability in the data center to track the interactions people have with it. Data centers should log entry details via badge access (and I recommend that these logs are held by someone outside of IT such as the Security department, or that copies of the information are kept in multiple hands such as the IT Director and VP). Visitors should sign in and sign out and remain under supervision at all times. Auditing of network/application/file resources should be turned on. Last but not least, every system should have an identified owner, whether it is a server, a router, a data center chiller, or an alarm system.

4. Policies

Every process involved with the data center should have a policy behind it to help keep the environment maintained and managed. You need policies for system access and usage (for instance, only database administrators have full control to the SQL server). You should have policies for data retention – how long do you store backups? Do you keep them off-site and if so when do these expire? The same concept applies to installing new systems, checking for obsolete devices/services, and removal of old equipment – for instance, wiping server hard drives and donating or recycling the hardware.
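A retention policy like the one described can be reduced to a simple check. The retention periods below are invented examples for illustration, not recommendations:

```python
# Illustrative sketch of a data-retention policy check: given a backup's
# creation date and its class, decide whether it has passed its retention
# window and is due for expiry. The periods here are made-up examples.

from datetime import date, timedelta

RETENTION = {
    "daily": timedelta(days=30),
    "monthly": timedelta(days=365),
    "yearly": timedelta(days=7 * 365),
}

def backup_expired(kind: str, created: date, today: date) -> bool:
    """Return True if a backup of this kind is past its retention window."""
    return today - created > RETENTION[kind]

print(backup_expired("daily", date(2014, 1, 1), date(2014, 3, 15)))   # True
print(backup_expired("yearly", date(2013, 1, 1), date(2014, 1, 1)))   # False
```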

5. Redundancy

 Credit: Wikimedia Commons

The first car I ever owned was a blue Ford Pinto. My parents paid $400 for it and at the time, gas was a buck a gallon, so I drove everywhere. It had a spare tire which came in handy quite often. I'm telling you this not to wax nostalgic but to make a point: even my old breakdown-prone car had redundancy. Your data center is probably much shinier, more expensive, and highly critical, so you need more than a spare tire to ensure it stays healthy. You need at least two of everything that your business requires to stay afloat, whether this applies to mail servers, ISPs, data fiber links, or voice over IP (VoIP) phone system VMs. Three or more wouldn't hurt in many scenarios either!

It’s not just redundant components that are important but also the process to test and make sure they work reliably – such as scheduled failover drills and research into new methodologies.

6. Monitoring

Monitoring of all systems for uptime and health will bring tremendous proactive value but that’s just the beginning. You also need to monitor how much bandwidth is in use, as well as energy, storage, physical rack space, and anything else which is a “commodity” provided by your data center.

There are free tools such as Nagios for nuts-and-bolts monitoring, and more elaborate solutions such as Dranetz for power measurement. Alerting when outages occur or thresholds are crossed is part of the process – and make sure to arrange a failsafe for your alerts so they are independent of the data center (for instance, if your email server is on a VMware ESX host which has died, another system should monitor for this and be able to send out notifications).
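The failsafe idea can be sketched roughly as follows. The host name is hypothetical, and this stands in for, rather than replaces, a real monitoring tool:

```python
# Sketch of an out-of-band watchdog: a machine independent of the data center
# pings a monitored host and decides when an alert should fire through a
# separate channel. Host name and threshold are invented for illustration.

import subprocess

FAILURE_THRESHOLD = 3  # consecutive failed checks before alerting

def host_up(host: str) -> bool:
    """One ICMP ping (Linux ping flags); True if the host answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        capture_output=True,
    )
    return result.returncode == 0

def should_alert(check_history: list[bool]) -> bool:
    """True once the last FAILURE_THRESHOLD checks have all failed."""
    recent = check_history[-FAILURE_THRESHOLD:]
    return len(recent) == FAILURE_THRESHOLD and not any(recent)

# Three consecutive failed checks -> alert via a channel outside the DC.
print(should_alert([True, False, False, False]))  # True
```

Requiring several consecutive failures avoids paging someone over a single dropped ping.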

7. Scalability

So your company needs 25 servers today for an array of tasks including virtualization, redundancy, file services, email, databases, and analytics? What might you need next month, next year, or in the next decade? Make sure you have an appropriately sized data center with sufficient expansion capacity to increase power, network, physical space, and storage.  If your data center needs are going to grow – and if your company is profitable I can guarantee this is the case – today is the day to start planning.

Planning for scalability isn’t something you stop, either; it’s an ongoing process. Smart companies actively track and report on this concept. I’ve seen references in these reports to “the next rivet to pop” which identifies a gap in a critical area of scalability that must be met (e.g., lack of physical rack space) as soon as possible.

8. Change management

You might argue that change management falls under the “Policies” section, a point with some merit. However, I would respond that it is both a policy and a philosophy. Proper guidelines for change management ensure that nothing occurs in your data center which hasn't been planned, scheduled, discussed, and agreed upon, along with providing backout steps or a Plan “B.” Whether it's bringing new systems to life or burying old ones, the lifecycle of every element of your data center must accord with your change management outlook.

9. Organization

I’ve never known an IT pro who wasn’t pressed for time. Rollout of new systems can result in some corners being cut due to panic over missed deadlines – and these corners invariably seem to include making the environment nice and neat.

A successful system implementation doesn't just mean plugging it in and turning it on; it also includes integrating devices into the data center via standardized and supportable methods. Your server racks should be clean and laid out in a logical fashion (production systems in one rack, test systems in another). Your cables should be the appropriate length and run through cabling guides rather than haphazardly draped. Which do you think is easier to troubleshoot and support: a data center that looks like this:

 Credit: Wikimedia Commons


 Credit: Wikimedia Commons

10. Documentation

The final piece of the puzzle is appropriate, helpful, and timely documentation – another ball which can easily be dropped during an implementation if you don’t follow strict procedures. It’s not enough to just throw together a diagram of your switch layout and which server is plugged in where; your change management guidelines should mandate that documentation is kept relevant and available to all appropriate personnel as the details evolve – which they always do.

Not to sound morbid, but I live by the “hit by a bus” rule. If I’m hit by a bus tomorrow, one less thing for everyone to worry about is whether my work or personal documentation is up to date, since I spend time each week making sure all changes and adjustments are logged accordingly. On a less melodramatic note, if I decide to switch jobs I don’t want to spend two weeks straight in a frantic braindump of everything my systems do.

The whole ball of wax

The great thing about these concepts is that they are completely hardware/software agnostic.  Whether your data center contains servers running Linux, Windows or other operating systems, or is just a collection of network switches and a mainframe, hopefully these will be of use to you and your organization.

To tie it all together, think of your IT environment as a wheel, with the data center as the hub and these ten concepts as the surrounding “tire”:

 Credit: Wikimedia Commons

Devoting time and energy to each component will ensure the wheels of your organization turn smoothly.  After all, that’s the goal of your data center, right?

Know your data center monitoring system

You can’t depend on a building system to run the data center. Implement a BMS and a DCIM tool to monitor and predict system changes, tighten security and more.

Nearly every building today, new and old, has a building management or automation system (BMS or BAS) to monitor the major power, cooling and lighting systems. Building management systems are robust, built on standardized software platforms and communications protocols.

A BMS monitors and controls the total building infrastructure, primarily those systems that use the most energy. For example, the BMS senses temperatures on each floor — sometimes in every room — and adjusts the heating and cooling output as necessary.

The BMS usually monitors all the equipment in the central utility plant: chillers, air handlers, cooling towers, pumps, water temperatures and flow rates and power draws. Automation systems shut off lights at night, control window shades as the sun angle changes, and track and react to several other conditions. Regardless of control sophistication, the BMS’ most important function is to raise alarms if something goes out of pre-set limits or fails.

Are these the same things we want from our DCIM?

There is no single standard of data center infrastructure management (DCIM); it can be as simple as a monitor on cabinet power strips, or as sophisticated as a granular, all-inclusive data center monitoring system.

The BMS is a facilities tool that also deals with systems, so why do we need a separate DCIM system as well? DCIM provides more detailed information than BMS, and helps the data center manager run the wide range of critical systems under their care.

DCIM and BMS are not mutually exclusive; they should be complementary. Some equipment in the data center should be monitored by the BMS. When choosing a DCIM tool, ensure it can interface with the BMS.

There are three fundamental differences between BMS and DCIM.

BMS monitors major parameters of major systems, and raises alarms if something fails. Although you can see trends that portend a problem, predictive analysis is not BMS’ purpose.

If the building air conditioning fails, it’s uncomfortable, but if the data center air conditioning fails, it’s catastrophic. That’s one example of why DCIM provides trend information and the monitoring data to invoke preventive maintenance before something critical fails. Prediction requires the accumulation, storage and analysis of an enormous amount of data — data that would overwhelm a BMS. Turning the mass of data from all the monitored devices into useful information can prevent a serious crash.

The BMS uses different connectivity protocols than IT. Most common to BMS are DeviceNet, XML, BACnet, LonWorks and Modbus, whereas IT uses mainly Internet Protocol (IP). Monitoring the data center with BMS would require the system to have a communications interface or adapter for every IP connection.

Data center devices handle large quantities of data points – often 256, a common binary number. The cumulative input from every device in the facility would overwhelm a BMS, in terms of both data point interfaces and data reduction and analysis tasks. DCIM software accumulates those thousands of pieces of information from IP-based data streams and distills them into usable information.
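The data-reduction role described above can be illustrated with a toy example; device names and readings are fabricated:

```python
# Toy illustration of DCIM-style data reduction: distill a stream of raw
# per-device readings into a compact min/avg/max summary that a human (or a
# BMS) can actually use. Device names and readings are invented.

from statistics import mean

def summarize(readings: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    """Reduce raw per-device readings to min/avg/max per device."""
    return {
        device: {"min": min(vals), "avg": round(mean(vals), 1), "max": max(vals)}
        for device, vals in readings.items()
    }

raw = {"pdu-rack-07": [3.1, 3.3, 3.2], "crac-02": [18.5, 19.0, 18.7]}
print(summarize(raw))
```

A real DCIM system does this continuously across thousands of IP-addressable devices and stores the history for trend analysis.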

Only major alarms and primary data should be segmented by DCIM and re-transmitted to the BMS. The rest is of little use in running the building.
DCIM will do many things a BMS won’t

DCIM is an evolving field, and not every DCIM product does all of these things, but these are the general areas DCIM handles and BMS products do not:

Electrical phase balancing: The output of every large uninterruptible power supply (UPS), as well as the branch circuit distribution to many data center cabinets, is three-phase. In order to realize maximum capacity from each circuit and the UPS, equalize the current draws on each phase. All UPS systems — and many power distribution units (ePDUs, iPDUs, CDUs, etc.) — have built-in monitoring, but it’s inefficient to run from cabinet-to-cabinet and device-to-device to balance power.

If the data center uses “smart” PDUs with IP-addressable interfaces, the data center monitoring system can track the power draws on each phase in each cabinet, as well as at each juncture in the power chain. Users can calculate a balanced scheme before making actual power changes in cabinets. The BMS looks only at the incoming power to the UPS, which is insufficient for this important and ongoing task.
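The balancing arithmetic itself is straightforward. Here is a hedged sketch that computes each phase's deviation from the three-phase mean, with invented readings standing in for smart-PDU data:

```python
# Sketch of the phase-balancing arithmetic described above: given current
# draw per phase (amps) aggregated from smart PDUs, compute each phase's
# percent deviation from the mean to spot imbalance. Readings are invented.

def phase_imbalance_pct(amps: dict[str, float]) -> dict[str, float]:
    """Percent deviation of each phase's current from the three-phase mean."""
    avg = sum(amps.values()) / len(amps)
    return {phase: round(100 * (a - avg) / avg, 1) for phase, a in amps.items()}

readings = {"A": 42.0, "B": 30.0, "C": 36.0}   # amps per phase, hypothetical
print(phase_imbalance_pct(readings))
# Phase A draws well above the mean -- a candidate for moving loads to phase B.
```

Running this across every cabinet and every juncture in the power chain, rather than device by device, is exactly the legwork the DCIM tool saves.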

Rack and cabinet temperature/humidity monitoring: The BMS monitors some representative point in the room and alarms if this point hits a significant out-of-range condition, but that’s not enough for a good monitoring system in the data center. Temperatures vary significantly from the top to bottom of a cabinet and across the breadth of a facility. With higher inlet temperatures and denser cabinets becoming the norm, comprehensive temperature information matters when deciding where to install a new piece of equipment, or during an air conditioner maintenance or failure period.

Most “smart” PDUs have temperature and humidity probe accessories to monitor critical points on cabinets and in the room via the same IP port that transmits power use. Even minimal DCIM packages can turn this additional data load into useful information.

Cabinet security: The building security system — tied into the BMS or not — observes data center entry and exit, but rarely anything else. It is becoming more common for data centers to house equipment from different owners, such as in colocation facilities, or to have cabinets with restricted access and equip those cabinets with cipher locks. Remote-monitored locks are available, and many can be connected through intelligent power strips. A DCIM tool can be configured to track security information so only the data center manager or other authorized parties access it.

Inventory control: Some of the more robust DCIM software packages track IT hardware — sometimes with the help of radio frequency identification tags. This is useful in a large facility where assets are regularly added, replaced and moved.

Gartner Names Schneider, Emerson, CA, Nlyte DCIM Leaders

Gartner has released its first Magic Quadrant (MQ) report on Data Center Infrastructure Management, laying out the market and positions for several DCIM providers across the four quadrants of leaders, challengers, visionaries and niche players.

While there is a lot of interest in DCIM, it’s difficult for customers to determine where to start. Understanding where a DCIM provider’s strengths are is a good thing in an often-confusing market.

The report adds some clarity to the market, analyzing strengths and weaknesses for 17 players. Gartner isn’t the first to tackle the fairly young DCIM space. Its competitors 451 Research and TechNavio both have taken a stab at defining and segmenting the space.

Gartner defines the DCIM market as one encompassing tools that monitor, measure, manage and control data center resources and energy consumption of both IT and facility components. The market research house forecasts that by 2017, DCIM tools will be deployed in more than 60 percent of larger data centers in North America.

Providers often offer different pieces of the overall infrastructure management picture and use different and complicated pricing models. All vendors in the MQ must offer a portfolio of IT-related and facilities infrastructure components rather than one specific component. All included vendors must enable monitoring down to the rack level at minimum. Building management systems are not included.

The four companies in the Leaders Quadrant – those proven to be leaders in technology and capable of executing well – are Schneider Electric, Emerson Network Power, CA Technologies and Nlyte Software. All but Nlyte are major vendors that offer several other products and services outside of DCIM, putting Nlyte, a San Mateo, California-based startup, in the company of heavyweights.

Here is Gartner’s first ever Magic Quadrant for DCIM vendors:

Gartner DCIM Magic Quadrant 2014

IO, the Arizona data center provider best known for its modular data centers, was named a visionary in the report for the IO.OS software it developed to manage its customers’ data center deployments.

“We are very pleased with the findings articulated in the Gartner Magic Quadrant for DCIM,” said Bill Slessman, CTO of IO. “IO customers have trusted the IO.OS to intelligently control their data centers since 2012.”

The other three quadrants are for challengers, visionaries and niche players, and it’s not a bad thing to be listed in any portion of the MQ. Challengers stand to threaten leaders; visionaries stand to change the market, and niche players focus on certain functions above others, though a narrow focus can limit their ability to outperform leaders. Being listed in the MQ is a win in itself.

DCIM value, according to Gartner:

  • Enable continuous optimization of data center power, cooling and space
  • Integrate IT and facilities management
  • Help to achieve greater efficiency
  • Model and simulate the data center for “what if” scenarios
  • Show how resources and assets are interrelated

About the Author

Jason Verge is an Editor/Industry Analyst on the Data Center Knowledge team with a strong background in the data center and Web hosting industries. In the past he’s covered all things Internet Infrastructure, including cloud (IaaS, PaaS and SaaS), mass market hosting, managed hosting, enterprise IT spending trends and M&A. He writes about a range of topics at DCK, with an emphasis on cloud hosting.

Data center hot-aisle/cold-aisle containment how-tos


Though data center hot-aisle/cold-aisle containment is not yet the status quo, it has quickly become a design option every facility should consider.

Server and chip vendors are packing more compute power into smaller envelopes, causing sharp rises in data center energy densities. Ten years ago, most data centers ran 500 watts to 1 kilowatt (kW) per rack or cabinet. Today, densities can reach 20 kW per rack and beyond, and most expect the number to continue to increase.
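For context, the jump those figures represent is simple watts-per-area division; the rack footprint below is an assumed illustrative value, not a standard:

```python
# Density arithmetic for the figures cited above. The rack footprint is an
# assumed illustrative value; the kW-per-rack numbers are the article's.

SQ_FT_PER_RACK = 6.0  # assumed footprint of one rack, for illustration only

def watts_per_sq_ft(kw_per_rack: float) -> float:
    """Convert a per-rack load in kW to watts per square foot of floor."""
    return kw_per_rack * 1000 / SQ_FT_PER_RACK

print(watts_per_sq_ft(1))    # roughly 167 W/sq ft, the decade-old figure
print(watts_per_sq_ft(20))   # roughly 3,333 W/sq ft at today's high densities
```

A twenty-fold rise in heat per square foot is what makes controlling airflow, rather than simply adding cooling capacity, so attractive.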

Data center hot-aisle/cold-aisle containment can better control where hot and cold air goes so that a data center’s cooling system runs more efficiently. And the method has gained traction: according to last year’s “Data Center Decisions 2009” survey of data center managers, almost half had already implemented the technology or planned to. But there are several considerations and questions that data center managers should ask themselves:

  • Is containment right for you?
  • Should you do hot-aisle containment or cold-aisle containment?
  • Should you do it yourself or buy vendor products?
  • What about fire code issues?
  • How do you measure whether containment actually worked as hoped?

Do you need hot/cold aisle containment?

First, a data center manager needs to decide whether hot-aisle/cold-aisle containment is a good fit for his facility. Dean Nelson, the senior director of global data center strategy at eBay Inc., said it’s not a question for his company, which already uses the method.

But as Bill Tschudi, an engineer at Lawrence Berkeley National Laboratory who has researched the topic, said, it’s all about taking the right steps to get there.

“You can do it progressively,” he said. “Make sure you’re in a good hot-aisle/cold-aisle arrangement and that openings are blocked off. You don’t want openings in racks and through the floors.”

These hot- and cold-aisle best design practices are key precursors to containment, because when they’re done incorrectly, containment will likely fail to work as expected.

Containment might not be worth it in lower-density data centers because there is less chance for the hot and cold air to mix in a traditional hot-aisle/cold-aisle design.

“I think the ROI in low-density environments probably won’t be there,” Nelson said. “The cost of implementing curtains or whatever would exceed how much you would save.”

But that threshold is low. Data centers with densities as low as 2 kW per rack should consider hot-aisle/cold-aisle containment, Nelson said. He suggests calling the utility company, or other data center companies, who will perform free data center assessments. In some cases, the utility will then offer a rebate if a data center decides to implement containment. Utilities have handed out millions of dollars to data centers for implementing energy efficient designs.

Hot aisle containment or cold aisle containment?

Next up for data center managers is deciding whether to contain the hot or the cold aisle. On this score, opinions vary. For example, American Power Conversion Corp. (APC) sells a prepackaged hot-aisle containment product. Liebert Corp. sells cold-aisle containment. Not surprisingly, both APC and Liebert argue that their approach is best.

Containing the hot aisle means you can turn the rest of your data center into the cold aisle, as long as there is containment everywhere. That is how data center colocation company Advanced Data Centers built its Sacramento, Calif., facility, which the U.S. Green Building Council has pre-certified for Leadership in Energy and Environmental Design (or LEED) Platinum status in energy efficiency.

“We’re just pressurizing the entire space with cool air where the cabinets are located,” said Bob Seese, the president of Advanced Data Centers. “The room is considered the cold aisle.”

One concern with this approach is that the contained hot aisle might get too hot for the IT equipment and uncomfortable for people working in the space. Nelson, however, said that as long as there’s good airflow and the air is swiftly exhausted from the space, overheating shouldn’t be a problem.

Containing the cold aisle means you may more easily use containment in certain sections of a data center rather than implementing containment everywhere. But it also requires finding a way to channel the hot air back to the computer room air conditioners (CRACs) or contending with a data center that is hotter than normal.

Cold-aisle containment proponents cite the flexibility of their approach. Cold aisle can be used for raised-floor and overhead cooling environments. Cold-aisle advocates also say that containing the cold aisle means you can better control the flow and volume of cool air entering the front of the servers.

Then, of course, data centers could contain both the hot and cold aisles.

Do-it-yourself methods vs. prepackaged vendor products

There are many ways to accomplish data center containment. If a company wants, it can hire APC, Liebert, Wright Line LLC or another vendor to install a prepackaged product.

Hiring a vendor may bring peace of mind to a data center manager who wants accountability should containment fail to work as advertised.

“They’re good if you want someone to come in and do the work,” Nelson said. “You can hire them.”

But these offerings come at a price. Homegrown methods of containment are often cheaper and, if done correctly, are just as effective as vendor-provided approaches. Nelson and Tschudi said they prefer do-it-yourself methods because of the lower cost.

If a data center staff does undertake data center containment strategies themselves, there are various options. Some data centers have installed thick plastic curtains, which can hang from the ceiling to the top of the racks or on the end of a row of racks, or both. In addition, a data center can build something like a roof over the cold aisles or simply extend the heights of the racks by installing sheet metal or some other product on top of the cabinets. All these structures prevent hot and cold air from mixing, making the cooling system more efficient.

Fire code issues with hot/cold aisle containment

Almost every fire marshal is different, so getting a marshal involved early in the process is important. A data center manager must know what the local fire code requires and design accordingly, as hot-aisle/cold-aisle containment can raise fire-code issues.

“The earlier you get them involved, the better,” Tschudi said.

A fire marshal will want to ensure that the data center has sprinkler coverage throughout. So if a data center has plastic curtains isolating the aisles, they may need fusible links that melt at high temperatures so the curtains fall to the floor and the sprinklers reach everywhere. In designs with roofs over the aisles, this may require a sprinkler head under the roof.

“We made sure we could adapt to whatever the fire marshal required,” Seese said.

Measuring hot/cold containment efficacy

It’s also crucial to determine whether containment has worked; otherwise, there’s no justification for the project.

Containment benefits can reverberate throughout a data center. If hot and cold air cannot mix, the air conditioners don’t have to work as hard to get cool air to the front of servers. That can mean the ability to raise the temperature in the room and ramp down air handlers with variable speed drive fans. That in turn could make it worthwhile to install an air-side or water-side economizer. Because the data center can run warmer, an economizer can be used to get free cooling for longer periods of the year.

Experts suggest taking a baseline measurement of a data center’s power usage effectiveness (PUE), which compares total facility power with the power used by the IT equipment.

Nelson said that one of eBay’s data centers had a power usage effectiveness rating of more than 2, which is close to average. After installing containment in his data center, eBay got the number down to 1.78.

“It was an overall 20% reduction in cooling costs, and it paid for itself well within a year,” he said. “It is really the lowest-hanging fruit that anyone with a data center should be looking at.”
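
The PUE arithmetic behind eBay's numbers can be sketched in a few lines. This is an illustrative helper, not any vendor's tool; the kilowatt loads below are hypothetical, chosen only to reproduce the ratios quoted above:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    1.0 is the theoretical ideal, where every watt reaches the IT equipment.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical loads matching the ratios in the text:
before = pue(2000.0, 1000.0)  # 2.0 -- close to the average cited above
after = pue(1780.0, 1000.0)   # 1.78 -- eBay's post-containment figure
print(before, after)  # → 2.0 1.78
```

Because only the IT share of power is "productive," driving the numerator down while the denominator stays fixed is exactly what containment achieves.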

Source:

Making Big Cuts in Data Center Energy Use

The energy used by our nation’s servers and data centers is significant. In a 2007 report, the Environmental Protection Agency estimated that this sector consumed about 61 billion kilowatt-hours (kWh), accounting for 1.5 percent of total U.S. electricity consumption. While the 2006 energy use for servers and data centers was more than double the electricity consumed for this purpose in 2000, recent work by RMI Senior Fellow Jonathan Koomey, a researcher and consulting professor at Stanford University, found that this rapid growth slowed because of the economic recession. At the same time, the economic climate led data center owner/operators to focus on improving energy efficiency of their existing facilities.

So how much room for improvement is there within this sector? The National Snow and Ice Data Center in Boulder, Colorado, achieved a reduction of more than 90 percent in its energy use in a recent remodeling (case study below). More broadly, Koomey’s study indicates that typical data centers have a PUE (power usage effectiveness, the ratio of total facility energy to IT equipment energy) between 1.83 and 1.92. If all losses were eliminated, the PUE would be 1.0. Impossible to get close to that value, right? A survey following a 2011 conference of information infrastructure professionals asked, “…what data center efficiency level will be considered average over the next five years?”

More than 20 percent of the respondents expected average PUE to be within the 1.4 to 1.5 range, and 54 percent were optimistic that the efficiency of facilities would improve to realize PUE in the 1.2 to 1.3 range.
Further, consider this: Google’s average PUE for its data centers is only 1.14. Even more impressive, Google’s PUE calculations include transmission and distribution from the electric utility. Google has developed its own efficient server level construction, optimized power distribution, and utilized many strategies to drastically reduce cooling energy consumption, including a unique approach for cooling in a hot and humid climate using recycled water.


For every unit of IT power produced, energy is used to cool and light the rooms that house the servers. Additionally, energy is lost due to inefficient power supplies, idling servers, unnecessary processes, and bloatware (pre-installed programs that aren’t needed or wanted). In fact, about 65 percent of the energy used in a data center or server room goes to space cooling and electrical (transformer, UPS, distribution, etc.) losses. Several efficiency strategies can reduce this.

For more information on best practices on designing low energy data centers, refer to this Best Practices Guide from the Federal Energy Management Program.


About half of the energy use in data centers goes to cooling and dehumidification, which poses huge opportunities for savings. First, focus on reducing the cooling loads in the space. After the load has been reduced through passive measures and smart design, select the most efficient and appropriate technologies to meet the remaining loads. Reducing loads is often the cheapest and most effective way to save energy; thus, we will focus on those strategies here.

Cooling loads in data centers can be reduced a number of ways: more efficient servers and power supplies, virtualization, and consolidation into hot and cold aisles. In its simplest form, hot aisle/cold aisle design involves lining up server racks in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. In more sophisticated designs, a containment system (anything from plastic sheeting to commercial products with variable fans) can be used to isolate the aisles and prevent hot and cold air from mixing.

But one of the simplest ways to save energy in a data center is simply to raise the temperature. It’s a myth that data centers must be kept cold for optimum equipment performance. You can raise the cold aisle setpoint of a data center to 80°F or higher, significantly reducing energy use while still conforming with both the American Society of Heating, Refrigerating, and Air Conditioning Engineers’ (ASHRAE) recommendations and most IT equipment manufacturers’ specs. In 2004, ASHRAE Technical Committee 9.9 (TC 9.9) standardized temperature (68 to 77°F) and humidity guidelines for data centers. In 2008, TC 9.9 widened the temperature range (64.4 to 80.6°F), enabling an increasing number of locations throughout the world to operate with more hours of economizer usage.

For even more energy savings, refer to ASHRAE’s 2011 Thermal Guidelines for Data Processing Environments, which presents an even wider range of allowable temperatures within certain classes of server equipment.
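
The TC 9.9 ranges quoted above lend themselves to a simple setpoint check. This is a toy sketch; the function and its band labels are illustrative, not taken from ASHRAE:

```python
def ashrae_band(temp_f: float) -> str:
    """Classify a cold-aisle setpoint against the TC 9.9 ranges quoted above.

    2004 recommended range: 68 to 77 F; 2008 widened range: 64.4 to 80.6 F.
    """
    if 68.0 <= temp_f <= 77.0:
        return "within 2004 recommended range"
    if 64.4 <= temp_f <= 80.6:
        return "within 2008 widened range"
    return "outside both ranges -- check equipment specs"

print(ashrae_band(72.0))  # → within 2004 recommended range
print(ashrae_band(80.0))  # → within 2008 widened range
```

A setpoint of 80°F falls inside the 2008 envelope but outside the older recommended band, which is precisely why the widened guidelines unlocked so many extra economizer hours.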


Just up the road from RMI’s office in Boulder, The National Snow and Ice Data Center is running around the clock to provide 120 terabytes of scientific data to researchers across the globe. Cooling the server room used to require over 300,000 kWh of energy per year, enough to power 34 homes. The data center was recently redesigned with all major equipment sourced within 20 miles of the site. The redesign resulted in a reduction of more than 90 percent in the energy used for cooling. The new Coolerado system, basically a superefficient indirect evaporative cooler that capitalizes on a patented heat and mass exchanger, uses only 2,560 kWh/year.

Before the engineers from RMH Group could use the Coolerado in lieu of compressor-based air conditioning, they had to drastically reduce the cooling loads. They accomplished this with the following strategies:

  • Less stringent temperature and humidity setpoints for the server room—this design meets the ASHRAE Allowable Class 1 Computing Environment setpoints (see Figure 2)
  • Airside economizers (enabled to run far more often within the expanded temperature ranges)
  • Virtualization of servers
  • Rearrangement and consolidation into hot and cold aisles

The remaining energy required for cooling and for powering the servers is offset by the energy produced from the onsite 50.4 kW solar PV system. In addition to producing clean energy onsite, the battery backup system provides added security in the case of a power outage.

Rick Osbaugh, the lead design engineer from RMH Group, cites three key enabling factors that allowed such huge energy savings:

  • A Neighborly Inspiration: The initial collaboration between NREL and NASA on utilizing a technology never used on a data center was the starting point of the design process. This collaboration was born from two neighbors living off the grid in Idaho Springs—but in this case, these neighbors also happened to be researchers at NREL and NASA.
  • Motivated Client: In this case, the client, as well as the entire NSIDC staff, wanted to set an example for the industry, and pushed the engineers to work out an aggressive low-energy solution. In order to minimize downtime, the staff members at the NSIDC all pitched in to help ensure that the entire retrofit was done in only 90 hours.
  • Taking Risks: Finally, the right team was assembled to implement a design that pushes the envelope. The owner and engineer were willing to assume risks associated with something never done before.


In 2011, Mortenson Construction completed an 85,000-square-foot data center expansion for a top five search engine company in Washington state. This scalable, modular system supports a 6 MW critical IT load and has a PUE of only 1.08! This level of efficiency was possible because of a virtual design process that utilized extensive 3D modeling coupled with an innovative cooling strategy. Referred to as “computing coops,” the pre-engineered metal buildings incorporate many of the same free-air cooling concepts chicken coops utilize: outside air is drawn in through the sides of the building, passed across the servers, and then exhausted as hot air through the cupola, creating a chimney effect.

With a tight construction schedule (only eight months), the design team created an ultraefficient data center while also saving over $5 million compared to the original project budget.

A special thanks to Rick Osbaugh of the RMH Group, and Hansel Bradley of Mortenson Construction for contributing content for this article.





Some notes on the Data Centre Power Better Practice Guide

ICT Efficiency
Reducing the power needed by the ICT equipment (the productive use) is often the most effective way to maximise power efficiency. Reducing the ICT equipment’s power load means less overhead power is needed; for example, less heat is generated, so less cooling is needed.

Actions that reduce the power needed by ICT equipment include:

  • Virtualisation – moving workloads from dedicated ICT equipment (including servers, storage and networks) to shared ICT equipment can reduce the amount of power required by 10% to 40%.
  • Decommissioning – disused ICT equipment is often left powered on; identifying, decommissioning and removing it eliminates wasted power.
  • Modernising – the latest models of ICT hardware use much less power for equivalent performance. Gartner advises that server power requirements have dropped by two thirds over the past two generations.
  • Consolidation – Physical and logical consolidation projects can rationalise the data centre ICT equipment.
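
As a rough sketch of two of the levers above, using the percentages quoted in the text (the function names and input loads are hypothetical examples):

```python
def virtualisation_savings(load_kw: float, fraction: float = 0.25) -> float:
    """Virtualisation saves roughly 10-40% of ICT load; 25% is a midpoint guess."""
    if not 0.10 <= fraction <= 0.40:
        raise ValueError("fraction outside the 10-40% range quoted in the text")
    return load_kw * fraction

def modernising_savings(load_kw: float) -> float:
    """Per the Gartner figure above: power drops by two thirds over two generations."""
    return load_kw * 2.0 / 3.0

print(virtualisation_savings(100.0))  # → 25.0 (kW saved on a 100 kW load)
print(modernising_savings(90.0))      # → 60.0 (kW saved on a 90 kW load)
```

Because every saved ICT kilowatt also shrinks the cooling overhead, the real facility-level saving is larger than these numbers alone suggest.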

Cooling Efficiency

The cooling systems are usually the major source of overhead power consumption, and so there is usually value in making cooling more efficient. There is a wide range of data centre cooling technology, which provides agencies with great flexibility about investing in an optimum solution.

Common techniques to minimise power use include:

  • Free air cooling brings cooler outside air into the data centre through dust and particle filters. In most Australian cities free air cooling can be used over 50 per cent of the time, and in Canberra over 80 per cent of the time.
  • Hot or cold aisle containment aligns all the ICT equipment in the racks so that cold air arrives on one side of the rack and leaves on the other. This means that the chilled air produced by the cooling system is delivered to the ICT equipment without mixing with the warmer exhaust air.
  • Raising the data centre temperature exploits the capability of modern ICT equipment to operate reliably at higher temperatures. Data centres can now operate at between 23 and 28 degrees Celsius, rather than the traditional 18 to 21 degrees Celsius. Operating at higher temperatures means much less power is needed for cooling, and free air cooling becomes even more effective. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) publishes guidance on maintaining optimum air temperatures in data centres.
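
A back-of-the-envelope way to estimate free-air-cooling availability from hourly outdoor temperatures, assuming a raised room setpoint and a small approach margin (the margin value and the sample data are invented for illustration):

```python
def free_cooling_fraction(hourly_temps_c, setpoint_c=26.0, approach_c=2.0):
    """Fraction of hours when filtered outside air alone could cool the room.

    An hour qualifies when the outdoor temperature sits at least `approach_c`
    degrees below the room setpoint, leaving headroom for heat pickup.
    """
    usable = sum(1 for t in hourly_temps_c if t <= setpoint_c - approach_c)
    return usable / len(hourly_temps_c)

# Toy year of 8,760 hours: half at 15 C, half at 30 C.
temps = [15.0] * 4380 + [30.0] * 4380
print(free_cooling_fraction(temps))  # → 0.5
```

Raising the setpoint from 21°C to 26°C shifts more real-world hours below the threshold, which is why the two techniques in the list reinforce each other.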




Agencies should also evaluate the environmental impact of cooling solutions. The environmental impact of cooling systems is typically excessive water use; some cooling systems also use hazardous chemicals.

The investment case for cooling systems is quite different from that of ICT equipment. The asset life is usually 7 to 15 years. During the life of the cooling systems, the ICT equipment can be expected to change between two and five times. The amount of cooling required will vary significantly as the ICT equipment changes. This variability means that agencies should seek cooling solutions that can adjust as the demand for cooling rises and falls.

iSpace brings Data Center technology into its training curriculum

GD&TĐ – The strategic-partnership signing ceremony between the iSpace College of Information Technology and DataCenter Services, aimed at bringing Data Center technology into the training program for students, took place this morning (April 2).

The strategic-partnership signing ceremony between the iSpace College of Information Technology and DataCenter Services

iSpace is considered the first school in the country to teach an in-depth Data Center technology curriculum to IT students and to promote the development of the Data Center specialist profession in Vietnam.

Nguyễn Hoàng Anh, rector of iSpace, said that Data Center technology knowledge will be built into the school's general curriculum for final-year students and will be a mandatory graduation requirement. Beyond learning how to keep an organization's information safe and secure, students will gain the knowledge and skills needed to work in data centers running the most modern technology. According to the curriculum design (developed by DataCenter Services' senior experts together with iSpace), final-year students will study for about 2-3 months and receive a certificate of completion upon graduation. They will also be taught the latest practices for managing and operating data centers.

Value of Monitoring Capabilities

“Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.”
H. James Harrington

Consider using the SolarWinds Orion solution to manage your IT systems, and apply the ITIL framework when building your company's IT service quality management processes.

SolarWinds for DC