The 10 best ways to visually represent IT data

Make a habit of using these tools when managing data center information. I've become accustomed to Camtasia, MS Visio, MS PowerPoint, and Excel, and have found them genuinely useful for information management.

The right chart, image, or diagram can be invaluable in clarifying and conveying IT information. The trick is finding the best tool to illustrate the specific concept or type of data you’re representing.

In all areas of IT, there are a number of situations where certain ways of presenting data, configuration details, or a sequence of events work best. We often tend to rely on one tool for everything because we’re familiar with it, but that isn’t always the best approach. Here is my top 10 list of the most effective ways to visually represent IT data.


1: Network connectivity — Microsoft Visio

I’ll admit that one of my nicknames over the years has been “Kid Visio.” Visio is a capable tool for documenting network connectivity. It’s not the right tool for documenting the configuration, but it does a good job of outlining the logical layout. From a top-down perspective, I feel Visio does this best. Figure A shows a sample network diagram that clearly shows the logical layout of the network.

Figure A

2: Application layout and architecture — Microsoft Visio

Let’s face it: Applications can get complex today. Virtual machines, replicated databases, firewall configurations, virtual IP addresses, mobile applications, and more make documenting an application flow no easy task. Again, I’ve found Visio to be the tool that reigns supreme. In the example shown in Figure B, many complicated aspects of the infrastructure are represented visually in one flow. While it doesn’t address the details of aspects such as the database replication, it is a good springboard to those other areas of key content.

Figure B

3: Free disk space — Pie charts

I’m not really a fan of pie charts, but they do the trick for representing free space on a disk. This can be Windows drives as well as critical volumes, such as a VMware VMFS datastore or a drive on a storage area network (SAN). The pie chart is a veteran at representing free space, and in the example shown in Figure C, you can see its effectiveness for this application. But take a pie chart with a grain of salt: we need to see how much drive space is used as well as how much free space remains, not just the proportion between them.

Figure C
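As a quick sketch of where those two slices come from (Python's standard shutil module here; any language with a disk-usage call would do), the used/free percentages behind a chart like Figure C can be computed like this:

```python
import shutil

def disk_pie_slices(path="/"):
    """Return labeled percentages for a used-vs-free pie chart."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return [
        ("Used", round(usage.used / usage.total * 100, 1)),
        ("Free", round(usage.free / usage.total * 100, 1)),
    ]

# The slices feed straight into any charting tool, e.g. matplotlib:
# labels, sizes = zip(*disk_pie_slices("/"))
# plt.pie(sizes, labels=labels, autopct="%1.1f%%")
```

Labeling both slices with their actual values keeps the chart honest about absolute capacity, not just proportion.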

4: Year-over-year performance tracking — Excel 3D bar charts

For tracking performance year over year for a moving target, I find that the 3D bar charts within Excel do a good job of showing the progress. It doesn’t have to be year over year, either; it can represent quarterly assessments or even a comparison of something, such as different offices. In my work experience, I created a simple 3D bar chart within Excel that looked something like the one in Figure D to track progress moving to virtual machines from physical servers.

Figure D
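The underlying data for a chart like Figure D is just a count per category per year. A minimal sketch, with hypothetical server counts, of how virtualization progress might be tallied before it goes into Excel:

```python
# Hypothetical year-over-year counts of physical vs. virtual servers
servers = {
    2008: {"physical": 120, "virtual": 30},
    2009: {"physical": 95,  "virtual": 80},
    2010: {"physical": 60,  "virtual": 140},
}

def virtualization_pct(year):
    """Percent of the estate running as virtual machines in a given year."""
    counts = servers[year]
    total = counts["physical"] + counts["virtual"]
    return round(counts["virtual"] / total * 100, 1)

# Each year's pair of counts becomes one group of bars in the chart.
for year in sorted(servers):
    print(year, virtualization_pct(year))
```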

5: Consumption compared to other like entities — Excel bubble charts

Quickly visualizing the consumption in proportion to other like consumers is easy with the bubble chart. One common example is representing the number of servers (or PCs) in a given location, which the bubble chart in Figure E does well. But it’s important to note that there is a significant limitation with the bubble chart: It assumes that all items are equal consumers. A good example would be 100 file servers compared to 100 Oracle database servers. In most situations, the file servers require much less maintenance and resources than the database servers. Nonetheless, the bubble chart is effective in displaying numbers by category.

Figure E
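One way to compensate for the "all consumers are equal" limitation is to weight each count by a rough cost-of-maintenance factor before sizing the bubbles. A sketch, with hypothetical weights:

```python
# Hypothetical maintenance weights: a database server costs far more to
# run than a file server, so it should count for more in the bubble size.
WEIGHTS = {"file": 1.0, "web": 1.5, "database": 4.0}

def bubble_size(counts):
    """Weighted size for one location's bubble, given {server_type: count}."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# 100 file servers and 100 database servers are not "200 equal servers":
columbus = {"file": 100, "database": 100}
print(bubble_size(columbus))  # 100*1.0 + 100*4.0 = 500.0
```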

6: Performance reporting — Line graphs

The line graph is a good way to represent direct consumption. A number of tools utilize the line graph for this function, including the VMware vSphere Client, shown in Figure F. But the line graph also has a limitation: If the tool displaying the consumption does any normalization of data, there may be missing highs or lows. To be fair, when there is so much data to manage, normalization of performance data is a common occurrence.

Figure F
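The normalization effect is easy to demonstrate: roll the same samples up with an average and a short spike disappears; roll them up with max and it survives. A small illustration:

```python
def rollup(samples, window, agg):
    """Collapse fixed-size windows of samples with an aggregate function."""
    return [agg(samples[i:i + window]) for i in range(0, len(samples), window)]

# One short CPU spike in otherwise flat data:
cpu = [10, 10, 10, 95, 10, 10, 10, 10]

averaged = rollup(cpu, 4, lambda w: sum(w) / len(w))  # [31.25, 10.0] - spike gone
peaks    = rollup(cpu, 4, max)                        # [95, 10] - spike kept
```

This is why a tool that averages older performance data into larger intervals can show a calm line graph for a week that actually contained a serious spike.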

7: Step-by-step procedures — Camtasia Studio

When it comes to showing something onscreen, the de facto standard for recording the activity for replay is Camtasia Studio (Figure G). Camtasia has all the features you would want, including voice overlay and easy uploads to popular sites such as YouTube. This is a good way to practice a presentation and deliver solid emphasis without having to reinvent the wheel every time. I’ve also used Camtasia a number of times for prerecording demos to play during live presentations. Pausing the recording to explain an important point or field a question isn’t as distracting as interrupting a live demo. Even if I am giving a live demo, a Camtasia recording is a nice backup or “emergency demo,” if I need it.

Figure G

8: Topics in outline form — Microsoft PowerPoint

There are a number of strategies for creating and delivering PowerPoint presentations (and presentations in general). But PowerPoint is especially useful for creating an outline that can be conversationally discussed (Figure H). I’ve learned a few tricks over the years: Never have a presentation go longer than 59 minutes and 59 seconds; don’t cover more than three main topics per slide; and make the outline focus primarily on the problem, which you can then backfill with the solution.

Figure H

9: Customized maps — Microsoft Visio

Visio has map stencil objects (Figure I) you can use to document all kinds of things, such as assigning territories within a business and mapping out network and datacenter connectivity. You can download the map stencils from Microsoft (click the Find Shapes Online option). A U.S. stencil and a world stencil are available for modern Visio versions.

Figure I

10: Specific data sets — Webdesigner Depot

This awesome resource has a number of links to tools that provide specific visualizations of things such as Internet trending topics and the Internet as a whole, as well as images of an event or even the history of science. Figure J shows a good way to visualize current events on the Internet using Web Trend Map 4. The popular Infographic series is also a great resource that will inspire new ways to present data in an interpretable manner.

Figure J

About Rick Vanover

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.



10 stupid things people do in their Data Centers

Small missteps can turn into huge problems in the data center — and that can mean big trouble for your organization (and for you).

We’ve all done it — made that stupid mistake and hoped nobody saw it, prayed that it wouldn’t have an adverse effect on the systems or the network. And it’s usually okay, so long as the mistake didn’t happen in the data center. It’s one thing to let your inner knucklehead come out around end user desktop machines. But when you’re in the server room, that knucklehead needs to be kept in check. Whether you’re setting up the data center or managing it, you must always use the utmost caution.

Well, you know what they say about the best laid plans… Eventually you will slip up. But knowing about some of the more common mistakes can help you avoid them.

1: Cable gaffes

You know the old adage — measure twice, cut once. How many times have you visited a data center to see cables everywhere? On the floor, hanging down from drop ceilings, looped over server racks and desks. This should simply not happen. Cable layout should be given the care it needs. Not only is a tangle of cables a safety hazard, it is also a disaster waiting to happen. Someone gets tangled up and goes down — you run the risk of a lawsuit AND data loss, all because someone was too lazy to measure cable runs or take the time to zip-tie some Cat5.

2: Drink disasters

I know, this might seem crazy, but I’ve witnessed it first hand too many times. Admins (or other IT staff) enter the data center, drink in hand, and spill that drink onto (or into) a piece of equipment. In a split second, that equipment goes from life to death with no chance for you to save it. Every data center should have a highly visible sign that says, “No drink or food allowed. Period.” This policy must be enforced with zero tolerance or exception. Even covered drinks should be banned.

3: Electricity failures

This applies to nearly any electricity problem: accidentally shutting off power, lack of battery backups, no generator, pulling too much power from a single source. Electricity is the lifeblood of your data center; without it, your data center is nothing. At the same time, electricity is your worst enemy. If you do not design for your electrical needs in a way that prevents failures, your data center begins its life at a disadvantage. Make sure all circuit breakers (and any other switch that could cause an accidental power loss) have covers and that your fire alarms and cutoff switches are not located where they might tempt pranksters.
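On the "too much power from a single source" point, the arithmetic is simple enough to script. A sketch, using the common rule of thumb (from the US National Electrical Code) that continuous loads should stay under 80% of a breaker's rating; the voltage, breaker size, and wattages below are illustrative:

```python
def circuit_ok(device_watts, volts=120, breaker_amps=20, derate=0.8):
    """True if the total draw stays within the derated circuit capacity.

    The 0.8 derating reflects the rule of thumb that continuous loads
    should not exceed 80% of a breaker's rating.
    """
    capacity = volts * breaker_amps * derate  # usable watts on the circuit
    return sum(device_watts) <= capacity

print(circuit_ok([450, 450, 600]))       # 1500 W on a 1920 W budget -> True
print(circuit_ok([450, 450, 600, 600]))  # 2100 W -> False, split the load
```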

4: Security blunders

How many keys to your data center have you given out? Do you have a spreadsheet listing every name associated with every key? If not, why not? If you aren’t keeping track of who has access to the data center, you might as well open up the door and say, “Come steal my data!” And what about that time you propped the exit door open so you could carry in all of those blades and cabling? How long was that open door left unattended? Or what about when you gave out the security code to the intern or the delivery man to make your job easier? See where this is going?

5: Pigpen foibles

When you step into the data center, what is your first impression? Would you bring the CEO of the company into that data center and say, “This is the empire your money has paid for”? Or would you need a day’s notice before letting the chairman of the board lay eyes on your work?

6: Documentation dereliction

How exactly did you map out that network? What are the domain credentials, and which server does what? If you’re about to head out for vacation and you’ve neglected to document your data center, your second in command might have a bit of drama on his or her hands. Or worse, you’ve forgotten the domain admin credentials yourself. I know, I know — fat chance. But there’s this guy named Murphy. He has this law. You know how it goes. If you’re not documenting your data center, eventually the fates will decide it’s time to deal you a dirty hand and you will have a tangled mess to sift through.

7: Desktop fun

How many times have you caught yourself or IT staff using one of the machines in the data center as a desktop? Unless that machine is a Linux or Mac desktop, one time is all it takes to send something like the sexy.exe virus running rampant through your data center. Yes, an end user can do the same thing. But why risk having that problem originate in the heart of your network topology? Sure, it’d be cool to host a LAN party in your data center and invite all your buds for a round of CoD or WoW. Just don’t.

8: Forgotten commitments

When was the last time you actually visited your data center? Or did you just “set it and forget it”? Do you think that because you can remote into your data center everything is okay? Shame on you. That data center needs a regular visit. It doesn’t need to be an all-day tour. Just stop by to check batteries, temperature, cabling, etc. If you fail to give the data center the face time it needs, you could wind up with a disaster on your hands.

9: Tourist traps

You’re proud of your data center — so much so that you want to show it off to the outside world. So you bring in the press; you allow tours to walk through and take in its utter awesomeness. But then one of those tourists gets a bit too curious and down goes the network. You’ve spent hundreds of thousands of dollars on that data center (or maybe just tens of thousands — or even just thousands). You can’t risk letting the prying eyes and fingers of the public gain access to the tenth wonder of the world.

10: Midnight massacre

Don’t deny it: You’ve spent all-nighters locked in your data center. Whether it was a server rebuild or a downed data network, you’ve sucked down enough caffeine that you’re absolutely sure you’re awake enough to do your job and do it right. Famous. Last. Words. If you’ve already spent nine or 10 hours at work, the last thing you need to do is spend another five or 10 trying to fix something. Most likely you’ll break more things than you fix. If you have third-shift staff members, let them take care of the problem. Or solve the issue in shifts. Don’t try to be a hero and lock yourself in the data center for “however long it takes.” Be smart.


How many data centers are enough for one banking system? :)

Analysts to Discuss Infrastructure and Operations Transformation at Gartner Data Center Summit 2013, November 25-26 in London and Gartner Data Center Conference 2013, December 9-12 in Las Vegas

Most global organizations have too many data centers in too many countries, according to Gartner, Inc. Gartner said that in order for enterprises to save costs and optimize service delivery, they need a twin data center topology for each continent of major business activity.

“It’s a fact that most global organizations run too many data centers in too many countries. This is normally the result of business expansion, either organically or through acquisition over many years,” said Rakesh Kumar, research vice president at Gartner. “While the logic of business growth makes sense, having too many data centers results in excessive capital and operational costs, an overly complex architecture and, in many cases, a lack of business-IT agility.”

Many companies have stated that having too many data centers inhibits their ability to respond quickly to business changes. This is because of too many organizational layers signing off on decisions, and because solutions designed for one data center may have to be completely redesigned for another site. Moreover, because of the significant cost involved (often hundreds of millions of dollars) and the possible savings through a more streamlined architecture, there is a huge financial incentive to change the topology to that of a dual data center.

For most organizations, it will mean two sites each for North America, South America, Europe, Africa and the Asia/Pacific region.

Although many global organizations will typically own all of the sites, in some cases it makes sense to use a hosted site that provides the physical building, power and cooling, while the global organization owns the IT assets. This has been the case for many organizations entering regions such as India and China. In other cases, a service management contract may be appropriate where no assets are owned and a third party will provide IT services through two data centers in a region. The details of how the sites are owned and managed are important but should not be confused with topology of the data center architecture.

“The twin data center topology provides many benefits, such as allowing for an adequate level of disaster recovery. This can be through an active/active configuration where each data center splits the production and development work and can fail over the load of the other site in the event of a disaster,” said Mr. Kumar. “However, this presupposes a synchronous copy of data and, so, a physical separation of about 60 to 100 miles. This may be too risky for certain industries, such as banking and government security, and so a third site may be required.”
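The 60-to-100-mile figure follows from propagation delay: a synchronous write cannot complete until the remote copy acknowledges it, so every mile of separation adds latency to every transaction. A back-of-the-envelope sketch (assuming light travels through fiber at roughly two-thirds of its vacuum speed; real paths add routing detours and equipment latency on top):

```python
def round_trip_ms(distance_miles, fiber_fraction_of_c=0.67):
    """Rough round-trip propagation delay over fiber, in milliseconds.

    This is a floor, not an estimate: actual fiber routes are longer
    than the straight-line distance, and switching gear adds more.
    """
    c_miles_per_ms = 186.0  # speed of light: ~186,000 miles per second
    one_way = distance_miles / (c_miles_per_ms * fiber_fraction_of_c)
    return 2 * one_way

# ~1.6 ms added to every synchronous write at the 100-mile limit:
print(round(round_trip_ms(100), 2))
```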

The twin-site approach also allows the central IT organization to better manage data center operations because the number of sites is limited and each is of a significant size, and so will be able to negotiate well with suppliers and attract good skills.

A further benefit of the twin data center approach is that it allows companies to have a streamlined approach to business expansion. As the business grows, it is well-understood that increased IT needs will come from existing sites, so remote sites are closed as part of the initial acquisition process and not deferred because of complexity, lack of decision making or organizational politics.

While a twin data center topology per continent of major business activity is an ideal model, and should be pursued, there will always be some variation. Disaster recovery and business continuity driven by industry-specific compliance reasons (banking) or local specific risk assessment (risk of problems in a major city) may force the need to have a third site much farther away. For example, many banking clients will augment a twin data center strategy with one or more remote data centers that will typically be smaller and/or lower level. These additional data centers act as a data repository with some production capabilities, and as a final defense in the event of catastrophe.

Other variations exist where cultural or language differences exist. For example, many global companies will have a specific data center in China even though they may have a regional hub in Singapore. Another example is where closing down a site in one country and moving everything to another site is too difficult. Further variations come about because the cost of changing from a multi-data center topology to a twin data center topology per major geography is too much, or the resulting cost structure actually does not justify the change.

“While these variations are logical and need to be incorporated into the decision process, they should be viewed as exceptions to the ideal model of a twin data center topology per continent of major business activity, rather than an accepted IT expansion cost,” said Mr. Kumar. “By adopting this dual center approach wherever possible, the whole growth strategy will incorporate a belief system that will help to create an optimum data center topology.”

Additional information is available in the report “Save Costs and Optimize Service Delivery by Limiting Your Data Center Topology to Two per Continent,” which can be found on the Gartner website.

About Gartner Data Center Conferences

Gartner analysts will take a deeper look at the outlook for the data center market at the Gartner Data Center Summit 2013, November 25-26 in London, and the Gartner Data Center Conference 2013, taking place December 9-12 in Las Vegas. More information on both events is available on the Gartner website.


Information from the Gartner Data Center 2013 events will be shared on Twitter using #GartnerDC.


A Guide to Physical Security for Data Centers

July 26, 2012

The aim of physical data center security is largely the same worldwide, barring any local regulatory restrictions: keep out the people you don’t want in your building, and if they do make it in, identify them as soon as possible (ideally while also containing them to one section of the building). The old adage of network security specialists — that “security is like an onion” (it makes you cry!) because you need to build it in layers out from the area you’re trying to protect — applies just as much to the physical security of a data center.

There are plenty of resources to guide you through the process of designing a highly secure data center that will focus on building a “gold standard” facility capable of hosting the most sensitive government data. For the majority of companies, however, this approach will be overkill and will end up costing millions to implement.

When looking at physical security for a new or existing data center, you first need to perform a basic risk assessment of the data and equipment that the facility will hold, using the usual impact-versus-likelihood scale (i.e., the impact of a breach of the data center versus the likelihood of that breach actually happening). This assessment should then serve as the basis for how far you go with physical security. It is impossible to counter every potential threat you could face, and this is where identification of a breach, then containment, comes in. Equally, ask yourself whether you are really likely to face someone trying to blast their way in through the walls with explosives!
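The impact-versus-likelihood scale is often reduced to a simple multiplied score. A minimal sketch, where the 1-5 scales and the band thresholds are illustrative choices, not a standard:

```python
def risk_score(impact, likelihood):
    """Impact-versus-likelihood score, each rated on an illustrative 1-5 scale."""
    assert 1 <= impact <= 5 and 1 <= likelihood <= 5
    return impact * likelihood

def risk_band(score):
    """Map a score (1-25) to a priority band; thresholds are arbitrary examples."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Determined intruder at a banking DC: severe impact, quite likely -> high.
print(risk_band(risk_score(5, 4)))
# Explosives through the wall: severe impact, very unlikely -> low.
print(risk_band(risk_score(5, 1)))
```

The "high" band is where you spend on prevention; the "low" band is where identification and containment are usually enough.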

There are a few basic principles that I feel any data center build should follow, however:

  • Low-key appearance: Especially in a populated area, you don’t want to be advertising to everyone that you are running a data center. Avoid any signage that references “data center” and try to keep the exterior of the building as nondescript as possible so that it blends in with the other premises in the area.
  • Avoid windows: There shouldn’t be windows directly onto the data floor, and any glazing required should open onto common areas and offices. Use laminate glass where possible, but otherwise make sure windows are double-glazed and shatter resistant.
  • Limit entry points: Access to the building needs to be controlled. Having a single point of entry for visitors and contractors, along with a loading bay for deliveries, allows you to funnel everyone through one location where they can be identified. Loading-bay access should be controlled from security or reception, ideally with the shutter motors completely powered down (so the shutters can’t be opened manually either). Your security personnel should only open the doors when a pre-notified delivery is arriving (i.e., one where security has been informed of the time/date and the delivery is correctly labelled with any internal references). Of course, all loading-bay activity should also be monitored by CCTV.
  • Anti-passback and man-traps: Tailgating (following someone through a door before it closes) is one of the main ways that an unauthorized visitor will gain access to your facility. By implementing man-traps that only allow one person through at a time, you force visitors to be identified before allowing access. And anti-passback means that if someone tailgates into a building, it’s much harder for them to leave.
  • Hinges on the inside: A common mistake when repurposing an older building is upgrading the locks on doors and windows but leaving the hinges on the outside of the building. This makes it really easy for someone to pop the pins out and just take the door off its hinges (negating the effect of that expensive lock you put on it!).
  • Plenty of cameras: CCTV cameras are a good deterrent for an opportunist and cover one of the main principles of security, which is identification (both of a security breach occurring and the perpetrator). At a minimum you should have full pan, tilt and zoom cameras on the perimeter of your building, along with fixed CCTV cameras covering building and data floor entrances/exits. All footage should be stored digitally and archived offsite, ideally in real time, so that you have a copy if the DVR is taken during a breach.
  • Make fire doors exit only (and install alarms on them): Fire doors are a requirement for health and safety, but you should make sure they only open outward and have active alarms at all times. Alarms need to sound if fire doors are opened at any time and should indicate, via the alarm panel, which door has been opened; it could just be someone going out for a cigarette, but it could also be someone trying to make a quick escape or loading up a van! On the subject of alarms, all doors need to have alarms set to go off if they are left open for too long, and your system should be linked to your local police force, who can respond when certain conditions are met.
  • Door control: You need granular control over which visitors can access certain parts of your facility. The easiest way to do this is through proximity access card readers (lately, biometrics have become more common) on the doors; these readers should trigger a maglock to open. This way you can specify through the access control software which doors can be opened by any individual card. It also provides an auditable log of visitors trying to access those doors (ideally tied in with CCTV footage), and by using maglocks, there are no tumblers to lock pick, or numerical keypads to copy.
  • Parking lot entry control: Access to the facility compound, usually a parking lot, needs to be strictly controlled either with gated entry that can be opened remotely by your reception/security once the driver has been identified, or with retractable bollards. The idea of this measure is to not only prevent unauthorized visitors from just driving into your parking lot and having a look around, but also to prevent anyone from coming straight into the lot with the intention of ramming the building for access. You can also make effective use of landscaping to assist with security by having your building set back from the road, and by using a winding route into the parking lot, you can limit the speed of any vehicles. And large boulders make effective barriers while also looking nice!
  • Permanent security staff: Many facilities are manned with contract staff from a security company. These personnel are suitable for the majority of situations, but if you have particularly sensitive data or equipment, you will want to consider hiring your security staff permanently. Contract staff can be swapped on short notice (illness being the main cause), which is both a plus and a minus: it creates the opportunity for someone to impersonate your contracted security to gain access. You are also at more risk with a security guard who doesn’t know your site and probably isn’t familiar with your processes.
  • Test, test and test again: No matter how simple or complex your security system, it will be useless if you don’t test it regularly (both systems and staff) to make sure it works as expected. You need to make sure alarms are working, CCTV cameras are functioning, door controls work, staff understands how visitors are identified and, most importantly, no one has access privileges that they shouldn’t have. It is common for a disgruntled employee who has been fired to still have access to a building, or for a visitor to leave with a proximity access card that is never canceled; you need to make sure your HR and security policies cover removing access as soon as possible. It’s only by regular testing and auditing of your security systems that any gaps will be identified before someone can take advantage of them.
  • Don’t forget the layers: Last, all security systems should be layered on each other. This ensures that anyone trying to access your “core” (in most cases the data floor) has passed through multiple checks and controls; the idea is that if one check fails, the next will work.
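The anti-passback rule in the list above boils down to a small piece of state: a card must be seen leaving a zone before it may enter again. A minimal sketch of that logic (real access-control systems add timeouts, alerts, and multi-zone tracking on top):

```python
class AntiPassback:
    """Anti-passback state for one secured zone.

    A tailgater's card (never logged in) cannot badge out, and a card
    handed back to an accomplice cannot badge in a second time.
    """

    def __init__(self):
        self.inside = set()  # card IDs currently inside the zone

    def enter(self, card_id):
        if card_id in self.inside:
            return False  # already inside: passback attempt, deny and alert
        self.inside.add(card_id)
        return True

    def leave(self, card_id):
        if card_id not in self.inside:
            return False  # never badged in (tailgated?), deny and alert
        self.inside.remove(card_id)
        return True
```

Tied to CCTV and an audit log, every denied event here is exactly the kind of anomaly worth reviewing.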

The general rule is that anyone entering the most secure part of the data center will have been authenticated at least four times:

1. At the outer door or parking entrance. Don’t forget you’ll need a way for visitors to contact the front desk.

2. At the inner door that separates the visitors from the general building staff. This will be where identification or biometrics are checked to issue a proximity card for building access.

3. At the entrance to the data floor. Usually, this is the layer that has the strongest “positive control,” meaning no tailgating is allowed through this check. Access should only be through a proximity access card and all access should be monitored by CCTV. So this will generally be one of the following:

  • A floor-to-ceiling turnstile. If someone tries to sneak in behind an authorized visitor, the door gently revolves in the reverse direction. (In case of a fire, the walls of the turnstile flatten to allow quick egress.)
  • A man-trap. Provides alternate access for equipment and for persons with disabilities. This consists of two separate doors with an airlock in between. Only one door can be opened at a time and authentication is needed for both doors.

4. At the door to an individual server cabinet. Racks should have lockable front and rear doors that use a three-digit combination lock as a minimum. This is a final check, once someone has access to the data floor, to ensure they only access authorized equipment.
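Those four layers behave as an all-or-nothing chain: failing any one check stops the visitor at that layer. A sketch, where the check functions are hypothetical stand-ins for the real controls:

```python
# The four authentication layers, in order. Each check is a hypothetical
# stand-in for a real control (intercom, ID desk, prox reader, cabinet lock).
LAYERS = [
    ("outer door / parking",            lambda v: v["expected_visitor"]),
    ("inner door (ID check)",           lambda v: v["id_verified"]),
    ("data floor (positive control)",   lambda v: v["prox_card_valid"]),
    ("server cabinet",                  lambda v: v["cabinet_code_known"]),
]

def can_reach_cabinet(visitor):
    """True only if the visitor passes every layer, in order."""
    return all(check(visitor) for _, check in LAYERS)

# A pre-notified courier is identified but never gets past the data floor:
courier = {"expected_visitor": True, "id_verified": True,
           "prox_card_valid": False, "cabinet_code_known": False}
print(can_reach_cabinet(courier))  # False
```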

The above isn’t an exhaustive list but should cover the basics of what you need to consider when building or retrofitting a data center. It’s also a useful checklist for auditing your colocation provider if you don’t run your own facility.

In the end, however, all physical security comes down to managing risks, along with the balance of “CIA” (confidentiality, integrity and access). It’s easy to create a highly secure building that is very confidential and has very high integrity of information stored within: you just encase the whole thing in a yard of concrete once it’s built! But this defeats the purpose of access, so you need a balance between the three to ensure that reasonable risks are mitigated and to work within your budget—everything comes down to how much money you have to spend.

About the Author

David Barker is technical director of 4D Data Centres. David (26) founded the company in 1999 at age 14. Since then he has masterminded 4D’s development into the full-fledged colocation and connectivity provider that it is today. As technical director, David is responsible for the ongoing strategic overview of 4D Data Centres’ IT and physical infrastructure. Working closely with the head of IT server administration and head of network infrastructure, David also leads any major technical change-management projects that the company undertakes.

The art of physical, outer perimeter security for a Data Center

A problem I once faced was designing physical security for a data center. Documents on DC security tend to dig deep into guard-based security (sentries, patrols, CCTV systems) and the technology-heavy inner layers (swipe cards, firewalls, and so on).

Physical security is not that simple, however. For DCs on large sites, where the plot is 10 or more times the built footprint of the DC, everything becomes much easier, and meeting the basic Tier 3 criteria of TIA-942 at a macro level is feasible.

But for DCs on smaller plots, the problem is completely different. The requirements for fence setback, fence height, perimeter lighting, and so on are relatively vague; there are often no standards even in formal training courses. I found the following article useful and wanted to share it.


When information security professionals think of perimeter security, firewalls, SSL VPN, RADIUS servers, and other technical controls immediately come to mind.  However, guarding the physical perimeter is just as important.

During the past weeks, I’ve written a series of articles that describe various components of an effective physical security strategy.  In this final article in the series, we’ll look closely at best practices for constructing the initial barrier to physical access to your information assets: the outer perimeter.

Components of a physical perimeter

Having served for several years in the military police, I know the concept of a physical perimeter has two meanings.  However, we’ll skip the combat definition, with its automatic weapons placement and final protective lines, and focus on facility security.  (At least I hope your information asset physical security isn’t that strict, Department of Defense facilities excluded…)

The outer perimeter of a facility is its first line of defense.  It can consist of two types of barriers: natural and structural.  According to the United States Army’s Physical Security Field Manual, FM 3-19.30 (2001, p. 4-1):

  • Natural protective barriers are mountains and deserts, cliffs and ditches, water obstacles, or other terrain features that are difficult to traverse.
  • Structural protective barriers are man-made devices (such as fences, walls, floors, roofs, grills, bars, roadblocks, signs, or other construction) used to restrict, channel, or impede progress.

In other words, if you can use the terrain, do so.  Otherwise, you have to spend a little money and build your own obstructions.

The most common type of structural outer perimeter barrier is the venerable chain-link fence.  However, it isn’t good enough to simply throw up a fence and call it a day.  Instead, your fence, a preventive device, should be supported by one or more additional prevention and detection controls.  The number of controls you implement and to what extent are dependent upon the risks your organization faces.

Fence basics

A fence is both a psychological and a physical barrier.  The psychology comes into play when casual passers-by encounter it.  It tells them that the area on the other side is off-limits, and the owner would probably rather they didn’t walk across the property.  A fence or wall of three to four feet is good enough for this.

For those who are intent on getting to your data center or other collection of information assets, fence height should be about seven feet.  See Figure A.  For facilities with high risk concerns, a top guard is usually added.  The top guard consists of three to four strands of barbed wire spaced about six inches apart and extends outward at a 45 degree angle.  The total height, including fence and top guard, should reach eight feet.

Figure A

Fence installation

Installing a perimeter fence requires some planning.  See Figure B.  Set the poles in concrete and ensure the links are pulled tight.  The links should form squares with sides of about two inches.  The fence should not leave more than a two-inch gap between its lower edge and the ground.

Figure B

Figure C depicts other considerations regarding fence placement.  First, identify any culverts, ditches, or objects that create an opening beneath the fence.  Remember the two-inch rule above: there should be no gap greater than two inches below the bottom edge of the fence.  When any opening under the fence (whether enclosed, as with the culvert in our example, or open) exceeds 96 square inches in area, it should be blocked (FM 3-19.30, p. 4-5).  This is a good rule of thumb, but use common sense: if you think a hole is big enough for a person to defeat your fence, block it.  Figures D and E (MIL-HDBK-1013/10, 1993, p. 15) show two methods.

Figure C
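The two-inch and 96-square-inch rules above are easy to codify. Here is a minimal sketch (measurements in inches, thresholds taken from the FM 3-19.30 figures cited above):

```python
def opening_needs_blocking(width_in: float, height_in: float) -> bool:
    """Flag an opening under or through a fence line per the rules above.

    Any gap taller than 2 inches below the fence edge, or any opening
    whose area exceeds 96 square inches, should be blocked.
    """
    return height_in > 2 or (width_in * height_in) > 96

# A 12" x 10" culvert opening (120 sq in) must be blocked:
print(opening_needs_blocking(12, 10))   # True
# A 1.5"-high ground gap spanning 40" (60 sq in) is within tolerance:
print(opening_needs_blocking(40, 1.5))  # False
```

As the article notes, treat this as a rule of thumb: a hole that a person could use to defeat the fence should be blocked regardless of what the arithmetic says.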

Clear the area on both sides of the fence to provide an unobstructed view of potential intruders.  The recommended clearances, as shown in Figure C, are:

  • 50 feet between the fence and any internal natural or man-made obstructions.
  • 20 feet between the fence and any external natural or man-made obstructions.

Natural obstructions include trees and high weeds or grass.

Figure D

Figure E

Supporting controls

Vehicle Barriers

When vehicular intrusions are a concern, support the fence and gate opening with bollards or other obstructions, as depicted in Figure F (FM 3-19.30, p. 3-4).

Figure F


Lighting

Lighting is a critical piece of perimeter security.  It works as a deterrent and helps human controls (roving guards, monitored cameras, first responders to alarms, etc.) detect intruders.  Lighting standards are pretty simple:

  • Provide sufficient light for the detection controls used
  • Position lighting to “blind” intruders and keep security personnel in shadows
  • Provide extra lighting for gates, areas of shadow, or probable ingress routes, as shown in Figure C.

A general rule to start with is to position lights providing two foot-candles of illumination at a mounting height of about eight feet.

Intrusion detection controls

As with our technical controls, we make the assumption that if someone wants to get through our perimeter, they will.  So we need to supplement our fence with intrusion detection technology.

Use of detection technology must be coupled with a documented and practiced response process.

The final word

The field of physical security is broad and is often a dedicated career path.  So the information here is not intended to make you an expert.  However, organizations are increasingly integrating computer and physical security under one manager.

The need for information security professionals to understand physical controls is great enough that the most popular certifications, such as CISSP, require some knowledge of the topic.  Don’t be left behind.

Finally, many of the controls discussed in this article may be too extreme for some organizations.  However, it’s always better to understand all your options.

About Tom Olzak

Tom is a security researcher for the InfoSec Institute and an IT professional with over 30 years of experience. He has written three books, Just Enough Security, Microsoft Virtualization, and Enterprise Security: A Practitioner’s Guide (to be publish…


10 “must haves” your data center needs to be successful

The evolution of the data center may transform it into a very different environment thanks to the advent of new technologies such as cloud computing and virtualization. However, there will always be certain essential elements required by any data center to operate smoothly and successfully.  These elements will apply whether your data center is the size of a walk-in closet or an airplane hangar – or perhaps even on a floating barge, which rumors indicate Google is building:

Figure A

 Credit: Wikimedia Commons

1. Environmental controls

A standardized and predictable environment is the cornerstone of any quality data center.  It’s not just about keeping things cool and maintaining appropriate humidity levels (according to Wikipedia, the recommended temperature range is 61-75 degrees Fahrenheit/16-24 degrees Celsius and 40-55% humidity). You also have to factor in fire suppression, air flow and power distribution.  One company I worked at was so serious about ensuring their data center remained as pristine as possible that it mandated no cardboard boxes could be stored in that room. The theory behind this was that cardboard particles could enter the airstream and potentially pollute the servers thanks to the distribution mechanism which brought cooler air to the front of the racks. That might be extreme but it illustrates the importance of the concept.
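The recommended ranges above are straightforward to check programmatically. A minimal sketch, using the Wikipedia figures cited in the paragraph (61-75 °F, 40-55% relative humidity):

```python
# Recommended operating ranges cited above (Wikipedia figures).
TEMP_RANGE_F = (61, 75)      # degrees Fahrenheit
HUMIDITY_RANGE = (40, 55)    # percent relative humidity

def environment_ok(temp_f: float, humidity_pct: float) -> list[str]:
    """Return a list of out-of-range warnings; an empty list means OK."""
    warnings = []
    if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
        warnings.append(f"temperature {temp_f}F outside {TEMP_RANGE_F}")
    if not HUMIDITY_RANGE[0] <= humidity_pct <= HUMIDITY_RANGE[1]:
        warnings.append(f"humidity {humidity_pct}% outside {HUMIDITY_RANGE}")
    return warnings

print(environment_ok(72, 45))  # [] -- within range
print(environment_ok(80, 30))  # two warnings
```

In practice a check like this would be fed by the data center's sensor network rather than manual readings, but the threshold logic is the same.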

2. Security

It goes without saying (but I’m going to say it anyhow) that physical security is a foundation of a reliable data center. Keeping your systems under lock and key and providing entry only to authorized personnel goes hand in hand with permitting only the necessary access to servers, applications and data over the network. It’s safe to say that the most valuable assets of any company (other than people, of course) reside in the data center. Small-time thieves will go after laptops or personal cell phones. Professionals will target the data center. Door locks can be overcome, so I recommend alarms as well. Of course, alarms can also be fallible so think about your next measure: locking the server racks? Backup power for your security system? Hiring security guards? It depends on your security needs, but keep in mind that “security is a journey, not a destination.”

3. Accountability

Speaking as a system administrator, I can attest that most IT people are professional and trustworthy.  However, that doesn’t negate the need for accountability in the data center to track the interactions people have with it. Data centers should log entry details via badge access (and I recommend that these logs are held by someone outside of IT such as the Security department, or that copies of the information are kept in multiple hands such as the IT Director and VP). Visitors should sign in and sign out and remain under supervision at all times. Auditing of network/application/file resources should be turned on. Last but not least, every system should have an identified owner, whether it is a server, a router, a data center chiller, or an alarm system.

4. Policies

Every process involved with the data center should have a policy behind it to help keep the environment maintained and managed. You need policies for system access and usage (for instance, only database administrators have full control to the SQL server). You should have policies for data retention – how long do you store backups? Do you keep them off-site and if so when do these expire? The same concept applies to installing new systems, checking for obsolete devices/services, and removal of old equipment – for instance, wiping server hard drives and donating or recycling the hardware.

5. Redundancy

 Credit: Wikimedia Commons

The first car I ever owned was a blue Ford Pinto. My parents paid $400 for it and at the time, gas was a buck a gallon, so I drove everywhere. It had a spare tire which came in handy quite often. I’m telling you this not to wax nostalgic but to make a point: even my old breakdown-prone car had redundancy. Your data center is probably much shinier, more expensive, and highly critical, so you need more than a spare tire to ensure it stays healthy. You need at least two of everything that your business requires to stay afloat, whether this applies to mail servers, ISPs, data fiber links, or voice over IP (VOIP) phone system VMs. Three or more wouldn’t hurt in many scenarios either!

It’s not just redundant components that are important but also the process to test and make sure they work reliably – such as scheduled failover drills and research into new methodologies.

6. Monitoring

Monitoring of all systems for uptime and health will bring tremendous proactive value but that’s just the beginning. You also need to monitor how much bandwidth is in use, as well as energy, storage, physical rack space, and anything else which is a “commodity” provided by your data center.

There are free tools such as Nagios for the nuts-and-bolts monitoring and more elaborate solutions such as Dranetz for power measurement. Alerting when outages occur or thresholds are crossed is part of the process – and make sure to arrange a failsafe for your alerts so they are independent of the data center (for instance, if your email server is on a VMware ESX host which is dead, another system should monitor for this and have the ability to send out notifications).
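The out-of-band failsafe described above can be as simple as a TCP reachability check run from a machine outside the data center. A minimal sketch (the host name `mail.example.com` is hypothetical):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """TCP-connect check; False means the service is down or unreachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# The watchdog itself runs OUTSIDE the data center, so it can still raise
# an alert through an out-of-band channel (SMS gateway, pager) when the
# in-house mail server that normally sends alerts is itself down.
if not is_reachable("mail.example.com", 25):
    print("ALERT: mail server unreachable; notify on-call out-of-band")
```

Dedicated tools do this with more polish, but the point stands: the thing that tells you the data center is down must not live in the data center.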

7. Scalability

So your company needs 25 servers today for an array of tasks including virtualization, redundancy, file services, email, databases, and analytics? What might you need next month, next year, or in the next decade? Make sure you have an appropriately sized data center with sufficient expansion capacity to increase power, network, physical space, and storage.  If your data center needs are going to grow – and if your company is profitable I can guarantee this is the case – today is the day to start planning.

Planning for scalability isn’t something you stop, either; it’s an ongoing process. Smart companies actively track and report on this concept. I’ve seen references in these reports to “the next rivet to pop” which identifies a gap in a critical area of scalability that must be met (e.g., lack of physical rack space) as soon as possible.

8. Change management

You might argue that change management falls under the “Policies” section, and that point has some merit. However, I would respond that it is both a policy and a philosophy. Proper guidelines for change management ensure that nothing occurs in your data center which hasn’t been planned, scheduled, discussed and agreed upon along with providing backout steps or a Plan “B.” Whether it’s bringing new systems to life or burying old ones, the lifecycle of all elements of your data center must fall in accordance with your change management outlook.

9. Organization

I’ve never known an IT pro who wasn’t pressed for time. Rollout of new systems can result in some corners being cut due to panic over missed deadlines – and these corners invariably seem to include making the environment nice and neat.

A successful system implementation doesn’t just mean plugging it in and turning it on; it also includes integrating devices into the data center via standardized and supportable methods. Your server racks should be clean and laid out in a logical fashion (production systems in one rack, test systems in another). Your cables should be the appropriate length and run through cabling guides rather than haphazardly draped. Which do you think is easier to troubleshoot and support: a data center that looks like this:

 Credit: Wikimedia Commons

Or one that looks like this?

 Credit: Wikimedia Commons

10. Documentation

The final piece of the puzzle is appropriate, helpful, and timely documentation – another ball which can easily be dropped during an implementation if you don’t follow strict procedures. It’s not enough to just throw together a diagram of your switch layout and which server is plugged in where; your change management guidelines should mandate that documentation is kept relevant and available to all appropriate personnel as the details evolve – which they always do.

Not to sound morbid, but I live by the “hit by a bus” rule. If I’m hit by a bus tomorrow, one less thing for everyone to worry about is whether my work or personal documentation is up to date, since I spend time each week making sure all changes and adjustments are logged accordingly. On a less melodramatic note, if I decide to switch jobs I don’t want to spend two weeks straight in a frantic braindump of everything my systems do.

The whole ball of wax

The great thing about these concepts is that they are completely hardware/software agnostic.  Whether your data center contains servers running Linux, Windows or other operating systems, or is just a collection of network switches and a mainframe, hopefully these will be of use to you and your organization.

To tie it all together, think of your IT environment as a wheel, with the data center as the hub and these ten concepts as the surrounding “tire”:

 Credit: Wikimedia Commons

Devoting time and energy to each component will ensure the wheels of your organization turn smoothly.  After all, that’s the goal of your data center, right?

Know your data center monitoring system

You can’t depend on a building system to run the data center. Implement a BMS and a DCIM tool to monitor and predict system changes, tighten security and more.

Nearly every building today, new and old, has a building management or automation system (BMS or BAS) to monitor the major power, cooling and lighting systems. Building management systems are robust, comprised of standardized software platforms and communications protocols.

A BMS monitors and controls the total building infrastructure, primarily those systems that use the most energy. For example, the BMS senses temperatures on each floor — sometimes in every room — and adjusts the heating and cooling output as necessary.

The BMS usually monitors all the equipment in the central utility plant: chillers, air handlers, cooling towers, pumps, water temperatures and flow rates and power draws. Automation systems shut off lights at night, control window shades as the sun angle changes, and track and react to several other conditions. Regardless of control sophistication, the BMS’ most important function is to raise alarms if something goes out of pre-set limits or fails.

Are these the same things we want from our DCIM?

There is no single standard of data center infrastructure management (DCIM); it can be as simple as a monitor on cabinet power strips, or as sophisticated as a granular, all-inclusive data center monitoring system.

The BMS is a facilities tool that also deals with systems, so why do we need a separate DCIM system as well? DCIM provides more detailed information than BMS, and helps the data center manager run the wide range of critical systems under their care.

DCIM and BMS are not mutually exclusive; they should be complementary. Some equipment in the data center should be monitored by the BMS. When choosing a DCIM tool, ensure it can interface with the BMS.

There are three fundamental differences between BMS and DCIM.

BMS monitors major parameters of major systems, and raises alarms if something fails. Although you can see trends that portend a problem, predictive analysis is not BMS’ purpose.

If the building air conditioning fails, it’s uncomfortable, but if the data center air conditioning fails, it’s catastrophic. That’s one example of why DCIM provides trend information and the monitoring data to invoke preventive maintenance before something critical fails. Prediction requires the accumulation, storage and analysis of an enormous amount of data — data that would overwhelm a BMS. Turning the mass of data from all the monitored devices into useful information can prevent a serious crash.

The BMS uses different connectivity protocols than IT. Most common to BMS are DeviceNet, XML, BACnet, LonWorks and Modbus, whereas IT uses mainly Internet Protocol (IP). Monitoring the data center with BMS would require the system to have a communications interface or adapter for every IP connection.

Data center devices handle large quantities of data points – often 256 per device, a common binary round number. The cumulative input from every device in the facility would overwhelm a BMS in terms of both data point interfaces and data reduction and analysis tasks. DCIM software accumulates those thousands of pieces of information from IP-based data streams and distills them into usable information.

Only major alarms and primary data should be selected by the DCIM and re-transmitted to the BMS. The rest is of little use in running the building.

DCIM will do many things a BMS won’t

DCIM is an evolving field, and not every DCIM product does all of these things, but these are the general areas DCIM handles and BMS products do not:

Electrical phase balancing: The output of every large uninterruptible power supply (UPS), as well as the branch circuit distribution to many data center cabinets, is three-phase. In order to realize maximum capacity from each circuit and the UPS, equalize the current draws on each phase. All UPS systems — and many power distribution units (ePDUs, iPDUs, CDUs, etc.) — have built-in monitoring, but it’s inefficient to run from cabinet-to-cabinet and device-to-device to balance power.

If the data center uses “smart” PDUs with IP-addressable interfaces, the data center monitoring system can track the power draws on each phase in each cabinet, as well as at each juncture in the power chain. Users can calculate a balanced scheme before making actual power changes in cabinets. The BMS looks only at the incoming power to the UPS, which is insufficient for this important and ongoing task.
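The balancing calculation a DCIM tool performs can be sketched from per-cabinet phase readings. The PDU readings below are illustrative, not from any real facility:

```python
from collections import defaultdict

# Hypothetical readings reported by IP-addressable "smart" PDUs:
# (cabinet, phase, amps) at each branch circuit.
readings = [
    ("cab-01", "A", 12.4), ("cab-02", "B", 8.1),  ("cab-03", "C", 15.0),
    ("cab-04", "A", 9.9),  ("cab-05", "B", 13.2), ("cab-06", "C", 4.8),
]

# Sum the current draw on each of the three phases.
totals = defaultdict(float)
for _cabinet, phase, amps in readings:
    totals[phase] += amps

# Imbalance: worst per-phase deviation from the average, as a percentage.
avg = sum(totals.values()) / 3
imbalance_pct = max(abs(totals[p] - avg) / avg for p in "ABC") * 100
print({p: round(totals[p], 1) for p in "ABC"}, f"imbalance {imbalance_pct:.1f}%")
```

A DCIM product does the same aggregation continuously across every juncture in the power chain, letting users plan a balanced assignment before touching any cabinet wiring.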

Rack and cabinet temperature/humidity monitoring: The BMS monitors some representative point in the room and alarms if this point hits a significant out-of-range condition, but that’s not enough for a good monitoring system in the data center. Temperatures vary significantly from the top to bottom of a cabinet and across the breadth of a facility. With higher inlet temperatures and denser cabinets becoming the norm, comprehensive temperature information matters when deciding where to install a new piece of equipment, or during an air conditioner maintenance or failure period.

Most “smart” PDUs have temperature and humidity probe accessories to monitor critical points on cabinets and in the room via the same IP port that transmits power use. Even minimal DCIM packages can turn this additional data load into useful information.

Cabinet security: The building security system — tied into the BMS or not — observes data center entry and exit, but rarely anything else. It is becoming more common for data centers to house equipment from different owners, such as in colocation facilities, or to have cabinets with restricted access and equip those cabinets with cipher locks. Remote-monitored locks are available, and many can be connected through intelligent power strips. A DCIM tool can be configured to track security information so only the data center manager or other authorized parties access it.

Inventory control: Some of the more robust DCIM software packages track IT hardware — sometimes with the help of radio frequency identification tags. This is useful in a large facility where assets are regularly added, replaced and moved.


Gartner Names Schneider, Emerson, CA, Nlyte DCIM Leaders

Gartner has released its first Magic Quadrant (MQ) report on Data Center Infrastructure Management, laying out the market and positions for several DCIM providers across the four quadrants of leaders, challengers, visionaries and niche players.

While there is a lot of interest in DCIM, it’s difficult for customers to determine where to start. Understanding where a DCIM provider’s strengths are is a good thing in an often-confusing market.

The report adds some clarity to the market, analyzing strengths and weaknesses for 17 players. Gartner isn’t the first to tackle the fairly young DCIM space. Its competitors 451 Research and TechNavio both have taken a stab at defining and segmenting the space.

Gartner defines the DCIM market as one that encompasses tools that monitor, measure, manage and control data center resources and energy consumption of IT and facility components. The market research house forecasts that by 2017 DCIM tools will be deployed in more than 60 percent of larger data centers in North America.

Providers often offer different pieces of the overall infrastructure management picture and use different and complicated pricing models. All vendors in the MQ must offer a portfolio of IT-related and facilities infrastructure components rather than one specific component. All included vendors must enable monitoring down to the rack level at minimum. Building management systems are not included.

The four companies in the Leaders Quadrant – those proven to be leaders in technology and capable of executing well — are Schneider Electric, Emerson Network Power, CA Technologies and Nlyte Software. All but Nlyte are major vendors that offer several other products and services outside of DCIM, putting Nlyte, a San Mateo, California-based startup, in the company of heavyweights.

Here is Gartner’s first ever Magic Quadrant for DCIM vendors:

Gartner DCIM Magic Quadrant 2014

IO, the Arizona data center provider best known for its modular data centers, was named a visionary in the report for the IO.OS software it developed to manage its customers’ data center deployments.

“We are very pleased with the findings articulated in the Gartner Magic Quadrant for DCIM,” said Bill Slessman, CTO of IO. “IO customers have trusted the IO.OS to intelligently control their data centers since 2012.”

The other three quadrants are for challengers, visionaries and niche players, and it’s not a bad thing to be listed in any portion of the MQ. Challengers stand to threaten leaders; visionaries stand to change the market, and niche players focus on certain functions above others, though a narrow focus can limit their ability to outperform leaders. Being listed in the MQ is a win in itself.

DCIM value, according to Gartner:

  • Enable continuous optimization of data center power, cooling and space
  • Integrate IT and facilities management
  • Help to achieve greater efficiency
  • Model and simulate the data center for “what if” scenarios
  • Show how resources and assets are interrelated

About the Author

Jason Verge is an Editor/Industry Analyst on the Data Center Knowledge team with a strong background in the data center and Web hosting industries. In the past he’s covered all things Internet Infrastructure, including cloud (IaaS, PaaS and SaaS), mass market hosting, managed hosting, enterprise IT spending trends and M&A. He writes about a range of topics at DCK, with an emphasis on cloud hosting.

Data center hot-aisle/cold-aisle containment how-tos


Though data center hot-aisle/cold-aisle containment is not yet the status quo, it has quickly become a design option every facility should consider.

Server and chip vendors packing more compute power into smaller envelopes has caused sharp rises in data center energy densities. Ten years ago, most data centers ran 500 watts to 1 kilowatt (kW) per rack or cabinet. Today densities can get to 20 kW per rack and beyond, and most expect the number to continue to increase.

Data center hot-aisle/cold-aisle containment can better control where hot and cold air goes so that a data center’s cooling system runs more efficiently. And the method has gained traction. According to the “Data Center Decisions 2009” survey of data center managers, almost half had already implemented the technology or planned to within the year. But there are several considerations, and various questions that data center managers should ask themselves:

  • Is containment right for you?
  • Should you do hot-aisle containment or cold-aisle containment?
  • Should you do it yourself or buy vendor products?
  • What about fire code issues?
  • How do you measure whether containment actually worked as hoped?

Do you need hot/cold aisle containment?

First, a data center manager needs to decide whether hot-aisle/cold-aisle containment is a good fit for his facility. Dean Nelson, the senior director of global data center strategy at eBay Inc., said it’s not a question for his company, which already uses the method.


But as Bill Tschudi, an engineer at Lawrence Berkeley National Laboratory who has done research on the topic, said, it’s all about taking the right steps to get there.

“You can do it progressively,” he said. “Make sure you’re in a good hot-aisle/cold-aisle arrangement and that openings are blocked off. You don’t want openings in racks and through the floors.”

These hot- and cold-aisle best design practices are key precursors to containment, because when they’re done incorrectly, containment will likely fail to work as expected.

Containment might not be worth it in lower-density data centers because there is less chance for the hot and cold air to mix in a traditional hot-aisle/cold-aisle design.

“I think the ROI in low-density environments probably won’t be there,” Nelson said. “The cost of implementing curtains or whatever would exceed how much you would save.”

But that threshold is low. Data centers with densities as low as 2 kW per rack should consider hot-aisle/cold-aisle containment, Nelson said. He suggests calling the utility company, or other data center companies, who will perform free data center assessments. In some cases, the utility will then offer a rebate if a data center decides to implement containment. Utilities have handed out millions of dollars to data centers for implementing energy efficient designs.

Hot aisle containment or cold aisle containment?

Next up for data center managers is deciding whether to contain the hot or the cold aisle. On this score, opinions vary. For example, American Power Conversion Corp. (APC) sells a pre-packaged hot-aisle containment product. Liebert Corp. sells cold-aisle containment. Not surprisingly, both APC and Liebert argue that their solution is best.


Containing the hot aisle means you can turn the rest of your data center into the cold aisle, as long as there is containment everywhere. That is how data center colocation company Advanced Data Centers built its Sacramento, Calif., facility, which the U.S. Green Building Council has pre-certified for Leadership in Energy and Environmental Design (or LEED) Platinum status in energy efficiency.

“We’re just pressurizing the entire space with cool air where the cabinets are located,” said Bob Seese, the president of Advanced Data Centers. “The room is considered the cold aisle.”

This approach raises concerns that the contained hot aisle might get too hot for the IT equipment and too uncomfortable for people to work in. Nelson, however, said that as long as there’s good airflow and the air is being swiftly exhausted from the space, overheating shouldn’t be a problem.

Containing the cold aisle means you may more easily use containment in certain sections of a data center rather than implementing containment everywhere. But it also requires finding a way to channel the hot air back to the computer room air conditioners (CRACs) or contending with a data center that is hotter than normal.

Cold-aisle containment proponents cite the flexibility of their approach. Cold aisle can be used for raised-floor and overhead cooling environments. Cold-aisle advocates also say that containing the cold aisle means you can better control the flow and volume of cool air entering the front of the servers.

Then, of course, data centers could contain both the hot and cold aisles.

Do-it-yourself methods vs. prepackaged vendor products

There are many ways to accomplish data center containment. If a company wants, it can hire APC, Liebert, Wright Line LLC or another vendor to install a prepackaged product.


This may bring peace of mind to a data center manager who wants accountability should containment fail to work as advertised.

“They’re good if you want someone to come in and do the work,” Nelson said. “You can hire them.”

But these offerings come at a price. Homegrown methods of containment are often cheaper and, if done correctly, are just as effective as vendor-provided approaches. Nelson and Tschudi said they prefer do-it-yourself methods because of the lower cost.

If a data center staff does undertake data center containment strategies themselves, there are various options. Some data centers have installed thick plastic curtains, which can hang from the ceiling to the top of the racks or on the end of a row of racks, or both. In addition, a data center can build something like a roof over the cold aisles or simply extend the heights of the racks by installing sheet metal or some other product on top of the cabinets. All these structures prevent hot and cold air from mixing, making the cooling system more efficient.

Fire code issues with hot/cold aisle containment

Almost every fire marshal is different, so getting a marshal involved early in the process is important. A data center manager must know what the local fire code requires and design accordingly, as hot-aisle/cold-aisle containment can stoke fire-code issues.

“The earlier you get them involved, the better,” Tschudi said.

A fire marshal will want to ensure that the data center has sprinkler coverage throughout. So if a data center has plastic curtains isolating the aisles, they may need fusible links that melt at high temperatures so the curtains fall to the floor and the sprinklers reach everywhere. In designs with roofs over the aisles, this may require a sprinkler head under the roof.

“We made sure we could adapt to whatever the fire marshal required,” Seese said.

Measuring hot/cold containment efficacy

It’s also crucial to determine whether containment has worked; otherwise, there’s no justification for the project.

Containment benefits can reverberate throughout a data center. If hot and cold air cannot mix, the air conditioners don’t have to work as hard to get cool air to the front of servers. That can mean the ability to raise the temperature in the room and ramp down air handlers with variable speed drive fans. That in turn could make it worthwhile to install an air-side or water-side economizer. Because the data center can run warmer, an economizer can be used to get free cooling for longer periods of the year.

Experts suggest taking a baseline measurement of a data center’s power usage effectiveness (PUE), which compares total facility power with the power used by the IT equipment.

Nelson said that one of eBay’s data centers had a power usage effectiveness rating of more than 2, which is close to average. After installing containment, eBay got the number down to 1.78.

“It was an overall 20% reduction in cooling costs, and it paid for itself well within a year,” he said. “It is really the lowest-hanging fruit that anyone with a data center should be looking at.”
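The PUE arithmetic behind Nelson’s numbers is simple to sketch. The loads below are hypothetical (the article gives only the PUE values and the rough 20% cooling-cost figure, not eBay’s actual wattage):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Assume a 1,000 kW IT load. At PUE 2.0 the facility draws 2,000 kW total;
# at PUE 1.78 the same IT load draws only 1,780 kW.
it_kw = 1000.0
before = it_kw * 2.0    # total facility power before containment, kW
after = it_kw * 1.78    # total facility power after containment, kW

# Fraction of the non-IT overhead (cooling, lighting, losses) eliminated
overhead_saved = (before - after) / (before - it_kw)

print(f"PUE before: {pue(before, it_kw):.2f}")
print(f"PUE after:  {pue(after, it_kw):.2f}")
print(f"Overhead reduced by {overhead_saved:.0%}")
```

The ~22% overhead reduction this yields is in line with the “overall 20% reduction in cooling costs” Nelson reported.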

Making Big Cuts in Data Center Energy Use

The energy used by our nation’s servers and data centers is significant. In a 2007 report, the Environmental Protection Agency estimated that in 2006 this sector consumed about 61 billion kilowatt-hours (kWh), accounting for 1.5 percent of total U.S. electricity consumption. While the 2006 energy use for servers and data centers was more than double the electricity consumed for this purpose in 2000, recent work by RMI Senior Fellow Jonathan Koomey, a researcher and consulting professor at Stanford University, found that this rapid growth slowed because of the economic recession. At the same time, the economic climate led data center owner/operators to focus on improving the energy efficiency of their existing facilities.

So how much room for improvement is there within this sector? The National Snow and Ice Data Center in Boulder, Colorado, achieved a reduction of more than 90 percent in its energy use in a recent remodeling (case study below). More broadly, Koomey’s study indicates that typical data centers have a PUE (see sidebar) between 1.83 and 1.92. If all losses were eliminated, the PUE would be 1.0. Impossible to get close to that value, right? A survey following a 2011 conference of information infrastructure professionals asked, “…what data center efficiency level will be considered average over the next five years?”

More than 20 percent of the respondents expected average PUE to be within the 1.4 to 1.5 range, and 54 percent were optimistic that the efficiency of facilities would improve to realize PUE in the 1.2 to 1.3 range.

Further, consider this: Google’s average PUE for its data centers is only 1.14. Even more impressive, Google’s PUE calculations include transmission and distribution losses from the electric utility. Google has developed its own efficient server-level construction, optimized power distribution, and utilized many strategies to drastically reduce cooling energy consumption, including a unique approach for cooling in a hot and humid climate using recycled water.


For every unit of IT power produced, energy is used to cool and light the rooms that house the servers. Additionally, energy is lost due to inefficient power supplies, idling servers, unnecessary processes, and bloatware (pre-installed programs that aren’t needed or wanted). In fact, about 65 percent of the energy used in a data center or server room goes to space cooling and electrical (transformer, UPS, distribution, etc.) losses. Several efficiency strategies can reduce this.

For more information on best practices on designing low energy data centers, refer to this Best Practices Guide from the Federal Energy Management Program.


About half of the energy use in data centers goes to cooling and dehumidification, which poses huge opportunities for savings. First, focus on reducing the cooling loads in the space. After the load has been reduced through passive measures and smart design, select the most efficient and appropriate technologies to meet the remaining loads. Reducing loads is often the cheapest and most effective way to save energy; thus, we will focus on those strategies here.

Cooling loads in data centers can be reduced a number of ways: more efficient servers and power supplies, virtualization, and consolidation into hot and cold aisles. In its simplest form, hot aisle/cold aisle design involves lining up server racks in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. In more sophisticated designs, a containment system (anything from plastic sheeting to commercial products with variable fans) can be used to isolate the aisles and prevent hot and cold air from mixing.

But one of the simplest ways to save energy in a data center is simply to raise the temperature. It’s a myth that data centers must be kept cold for optimum equipment performance. You can raise the cold aisle setpoint of a data center to 80°F or higher, significantly reducing energy use while still conforming with both the American Society of Heating, Refrigerating, and Air Conditioning Engineers’ (ASHRAE) recommendations and most IT equipment manufacturers’ specs. In 2004, ASHRAE Technical Committee 9.9 (TC 9.9) standardized temperature (68 to 77°F) and humidity guidelines for data centers. In 2008, TC 9.9 widened the temperature range (64.4 to 80.6°F), enabling an increasing number of locations throughout the world to operate with more hours of economizer usage.

For even more energy savings, refer to ASHRAE’s 2011 Thermal Guidelines for Data Processing Environments, which presents an even wider range of allowable temperatures within certain classes of server equipment.
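A setpoint check against these envelopes is straightforward. This minimal sketch uses only the two ranges quoted above (2004: 68–77°F; 2008: 64.4–80.6°F); the range labels are my own:

```python
# Temperature envelopes quoted in the article, in °F
ASHRAE_RANGES_F = {
    "2004 guidelines": (68.0, 77.0),
    "2008 guidelines": (64.4, 80.6),
}

def within_range(setpoint_f: float, low: float, high: float) -> bool:
    """True if a cold-aisle setpoint falls inside the given envelope."""
    return low <= setpoint_f <= high

setpoint = 80.0  # example cold-aisle setpoint, °F
for name, (low, high) in ASHRAE_RANGES_F.items():
    status = "OK" if within_range(setpoint, low, high) else "out of range"
    print(f"{setpoint}°F vs {name} ({low}-{high}°F): {status}")
```

An 80°F cold aisle falls outside the original 2004 envelope but inside the widened 2008 one, which is the article’s point about the newer guidelines enabling warmer operation.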


Just up the road from RMI’s office in Boulder, the National Snow and Ice Data Center is running around the clock to provide 120 terabytes of scientific data to researchers across the globe. Cooling the server room used to require over 300,000 kWh of energy per year, enough to power 34 homes. The data center was recently redesigned with all major equipment sourced within 20 miles of the site. The redesign resulted in a reduction of more than 90 percent in the energy used for cooling. The new Coolerado system, basically a superefficient indirect evaporative cooler that capitalizes on a patented heat and mass exchanger, uses only 2,560 kWh/year.
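The figures quoted for NSIDC let you check the “more than 90 percent” claim directly; only the kWh numbers and the 34-home comparison come from the article, the household figure is just the implied average:

```python
# Figures from the article: cooling formerly used over 300,000 kWh/year
# (enough to power 34 homes); the new Coolerado system uses 2,560 kWh/year.
old_kwh = 300_000
new_kwh = 2_560

reduction = (old_kwh - new_kwh) / old_kwh   # fraction of cooling energy saved
kwh_per_home = old_kwh / 34                 # implied average household use

print(f"Cooling energy reduced by {reduction:.1%}")
print(f"Implied household consumption: {kwh_per_home:,.0f} kWh/year")
```

The actual reduction works out to roughly 99 percent, comfortably clearing the article’s “more than 90 percent.”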

Before the engineers from RMH Group could use the Coolerado in lieu of compressor-based air conditioning, they had to drastically reduce the cooling loads. They accomplished this with the following strategies:

  • Less stringent temperature and humidity setpoints for the server room—this design meets the ASHRAE Allowable Class 1 Computing Environment setpoints (see Figure 2)
  • Airside economizers (enabled to run far more often within the expanded temperature ranges)
  • Virtualization of servers
  • Rearrangement and consolidation into hot and cold aisles

The remaining energy that is required for cooling and to power the servers is offset with the energy produced from the onsite 50.4 kW solar PV system. In addition to producing clean energy onsite, the battery backup system provides added security in the event of a power outage.

Rick Osbaugh, the lead design engineer from RMH Group, cites three key enabling factors that allowed such huge energy savings:

  • A Neighborly Inspiration: The initial collaboration between NREL and NASA on utilizing a technology never used on a data center was the starting point of the design process. This collaboration was born from two neighbors living off the grid in Idaho Springs—but in this case, these neighbors also happened to be researchers at NREL and NASA.
  • Motivated Client: In this case, the client, as well as the entire NSIDC staff, wanted to set an example for the industry, and pushed the engineers to work out an aggressive low-energy solution. In order to minimize downtime, the staff members at the NSIDC all pitched in to help ensure that the entire retrofit was done in only 90 hours.
  • Taking Risks: Finally, the right team was assembled to implement a design that pushes the envelope. The owner and engineer were willing to assume risks associated with something never done before.


In 2011, Mortenson Construction completed an 85,000-square-foot data center expansion for a top five search engine company in Washington state. This scalable, modular system supports a 6 MW critical IT load and has a PUE of only 1.08! This level of efficiency was possible because of a virtual design process that utilized extensive 3D modeling coupled with an innovative cooling strategy. Referred to as “computing coops,” the pre-engineered metal buildings incorporate many of the same free-air cooling concepts chicken coops utilize, bringing outside air in through the sides of the building, across the servers, and then exhausting hot air through the cupola, creating a chimney effect.
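To make that 1.08 figure concrete, the PUE definition gives the total facility draw directly from the 6 MW IT load stated in the article:

```python
# Figures from the article: 6 MW critical IT load at PUE 1.08.
it_load_mw = 6.0
pue = 1.08

total_mw = it_load_mw * pue           # total facility power, MW
overhead_mw = total_mw - it_load_mw   # power for cooling, lighting, losses

print(f"Total facility power: {total_mw:.2f} MW")
print(f"Non-IT overhead:      {overhead_mw:.2f} MW")
```

Less than half a megawatt of overhead on a 6 MW IT load, versus the 2+ MW a typical PUE-2 facility would spend, is what makes this design stand out.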

With a tight construction schedule (only eight months), the design team created an ultraefficient data center while also saving over $5 million compared to the original project budget.

A special thanks to Rick Osbaugh of the RMH Group, and Hansel Bradley of Mortenson Construction for contributing content for this article.