How to calculate capacitor bank capacity

How to calculate the capacitor bank capacity needed to raise the power factor (cos φ) and reduce reactive-power penalties. You don't have to be an electrician, but knowing the numbers helps you avoid being overcharged.

The capacitor sizing formula

To select a capacitor bank for a load, you need the load's real power (P) and its power factor (cos φ):
Suppose the load's real power is P.
The power factor before compensation is cos φ1 → φ1 → tan φ1 (before compensation, cos φ1 is small and tan φ1 is large).
The power factor after compensation is cos φ2 → φ2 → tan φ2 (after compensation, cos φ2 is large and tan φ2 is small).
The reactive power to be compensated is Qb = P(tan φ1 − tan φ2).
From the required reactive power, select a suitable capacitor bank from the supplier's catalog.

Suppose the load power is P = 100 kW.
Power factor before compensation: cos φ1 = 0.75 → tan φ1 = 0.88.
Power factor after compensation: cos φ2 = 0.95 → tan φ2 = 0.33.
The reactive power to be compensated is Qb = P(tan φ1 − tan φ2):
Qb = 100 × (0.88 − 0.33) = 55 kVAr

From this figure, choose capacitors from the manufacturer's catalogue. Suppose 10 kVAr units are available: to fully compensate the load you need 6 of them, for a total reactive power of 6 × 10 = 60 kVAr.
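The two-step calculation above (required kVAr, then number of catalogue units) is easy to script. A minimal sketch in Python, using exact trigonometry rather than the rounded tan values from the worked example:

```python
from math import acos, tan

def required_kvar(p_kw, cos_phi1, cos_phi2):
    """Reactive power (kVAr) needed to raise the power factor
    from cos_phi1 to cos_phi2: Qb = P * (tan(phi1) - tan(phi2))."""
    return p_kw * (tan(acos(cos_phi1)) - tan(acos(cos_phi2)))

def units_needed(q_kvar, unit_kvar):
    """Smallest whole number of capacitor units covering q_kvar."""
    return -(-q_kvar // unit_kvar)  # ceiling division

qb = required_kvar(100, 0.75, 0.95)
print(round(qb, 1))                 # 55.3 (the text rounds tan values, giving 55)
print(int(units_needed(qb, 10)))    # 6 units of 10 kVAr
```

The exact result (55.3 kVAr) differs slightly from the hand calculation (55 kVAr) only because the text rounds tan φ1 and tan φ2 to two decimals.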


Lookup table for the required capacitor capacity

Calculating the required compensation by formula is time-consuming and needs a calculator with arccos and tan functions. To speed things up, a lookup table of compensation coefficients is commonly used instead.

In that case, apply the formula: Qb = P × k

where k is the compensation coefficient read from the table below: the left-hand column is the power factor before compensation (cos φ1) and the top row is the target power factor (cos φ2).

cosφ1\cosφ2 0.88 0.89 0.90 0.91 0.92 0.93 0.94 0.95 0.96 0.97 0.98 0.99 1.00
0.50 1.19 1.22 1.25 1.28 1.31 1.34 1.37 1.40 1.44 1.48 1.53 1.59 1.73
0.51 1.15 1.17 1.20 1.23 1.26 1.29 1.32 1.36 1.39 1.44 1.48 1.54 1.69
0.52 1.10 1.13 1.16 1.19 1.22 1.25 1.28 1.31 1.35 1.39 1.44 1.50 1.64
0.53 1.06 1.09 1.12 1.14 1.17 1.20 1.24 1.27 1.31 1.35 1.40 1.46 1.60
0.54 1.02 1.05 1.07 1.10 1.13 1.16 1.20 1.23 1.27 1.31 1.36 1.42 1.56
0.55 0.98 1.01 1.03 1.06 1.09 1.12 1.16 1.19 1.23 1.27 1.32 1.38 1.52
0.56 0.94 0.97 1.00 1.02 1.05 1.08 1.12 1.15 1.19 1.23 1.28 1.34 1.48
0.57 0.90 0.93 0.96 0.99 1.02 1.05 1.08 1.11 1.15 1.19 1.24 1.30 1.44
0.58 0.86 0.89 0.92 0.95 0.98 1.01 1.04 1.08 1.11 1.15 1.20 1.26 1.40
0.59 0.83 0.86 0.88 0.91 0.94 0.97 1.01 1.04 1.08 1.12 1.17 1.23 1.37
0.60 0.79 0.82 0.85 0.88 0.91 0.94 0.97 1.00 1.04 1.08 1.13 1.19 1.33
0.61 0.76 0.79 0.81 0.84 0.87 0.90 0.94 0.97 1.01 1.05 1.10 1.16 1.30
0.62 0.73 0.75 0.78 0.81 0.84 0.87 0.90 0.94 0.97 1.01 1.06 1.12 1.27
0.63 0.69 0.72 0.75 0.78 0.81 0.84 0.87 0.90 0.94 0.98 1.03 1.09 1.23
0.64 0.66 0.69 0.72 0.74 0.77 0.81 0.84 0.87 0.91 0.95 1.00 1.06 1.20
0.65 0.63 0.66 0.68 0.71 0.74 0.77 0.81 0.84 0.88 0.92 0.97 1.03 1.17
0.66 0.60 0.63 0.65 0.68 0.71 0.74 0.78 0.81 0.85 0.89 0.94 1.00 1.14
0.67 0.57 0.60 0.62 0.65 0.68 0.71 0.75 0.78 0.82 0.86 0.90 0.97 1.11
0.68 0.54 0.57 0.59 0.62 0.65 0.68 0.72 0.75 0.79 0.83 0.88 0.94 1.08
0.69 0.51 0.54 0.56 0.59 0.62 0.65 0.69 0.72 0.76 0.80 0.85 0.91 1.05
0.70 0.48 0.51 0.54 0.56 0.59 0.62 0.66 0.69 0.73 0.77 0.82 0.88 1.02
0.71 0.45 0.48 0.51 0.54 0.57 0.60 0.63 0.66 0.70 0.74 0.79 0.85 0.99
0.72 0.42 0.45 0.48 0.51 0.54 0.57 0.60 0.64 0.67 0.71 0.76 0.82 0.96
0.73 0.40 0.42 0.45 0.48 0.51 0.54 0.57 0.61 0.64 0.69 0.73 0.79 0.94
0.74 0.37 0.40 0.42 0.45 0.48 0.51 0.55 0.58 0.62 0.66 0.71 0.77 0.91
0.75 0.34 0.37 0.40 0.43 0.46 0.49 0.52 0.55 0.59 0.63 0.68 0.74 0.88
0.76 0.32 0.34 0.37 0.40 0.43 0.46 0.49 0.53 0.56 0.60 0.65 0.71 0.86
0.77 0.29 0.32 0.34 0.37 0.40 0.43 0.47 0.50 0.54 0.58 0.63 0.69 0.83
0.78 0.26 0.29 0.32 0.35 0.38 0.41 0.44 0.47 0.51 0.55 0.60 0.66 0.80
0.79 0.24 0.26 0.29 0.32 0.35 0.38 0.41 0.45 0.48 0.53 0.57 0.63 0.78
0.80 0.21 0.24 0.27 0.29 0.32 0.35 0.39 0.42 0.46 0.50 0.55 0.61 0.75
0.81 0.18 0.21 0.24 0.27 0.30 0.33 0.36 0.40 0.43 0.47 0.52 0.58 0.72
0.82 0.16 0.19 0.21 0.24 0.27 0.30 0.34 0.37 0.41 0.45 0.49 0.56 0.70
0.83 0.13 0.16 0.19 0.22 0.25 0.28 0.31 0.34 0.38 0.42 0.47 0.53 0.67
0.84 0.11 0.13 0.16 0.19 0.22 0.25 0.28 0.32 0.35 0.40 0.44 0.50 0.65
0.85 0.08 0.11 0.14 0.16 0.19 0.22 0.26 0.29 0.33 0.37 0.42 0.48 0.62
0.86 0.05 0.08 0.11 0.14 0.17 0.20 0.23 0.26 0.30 0.34 0.39 0.45 0.59
0.87 0.03 0.05 0.08 0.11 0.14 0.17 0.20 0.24 0.28 0.32 0.36 0.42 0.57
0.88 0.00 0.03 0.06 0.08 0.11 0.14 0.18 0.21 0.25 0.29 0.34 0.40 0.54


Example:

For the problem above, with cos φ1 = 0.75 and cos φ2 = 0.95, follow the 0.75 row across to the 0.95 column: they meet at the cell k = 0.55. With k = 0.55, Qb = 100 × 0.55 = 55 kVAr, the same result as the formula.
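Each cell of the table is simply tan φ1 − tan φ2 rounded to two decimals, so the table can be regenerated (or spot-checked) in a few lines:

```python
from math import acos, tan

def k_factor(cos_phi1, cos_phi2):
    """Compensation coefficient k = tan(phi1) - tan(phi2);
    this is what each cell of the lookup table contains."""
    return tan(acos(cos_phi1)) - tan(acos(cos_phi2))

k = round(k_factor(0.75, 0.95), 2)
print(k)                    # 0.55, matching the table cell
print(round(100 * k, 1))    # Qb = 55.0 kVAr for the 100 kW example
```

The corner values check out too: for cos φ1 = 0.50 raised to 1.00, k_factor gives 1.73, matching the top-right cell of the table.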

A Guide to Physical Security for Data Centers

July 26, 2012

The aim of physical data center security is largely the same worldwide, barring any local regulatory restrictions: that is, to keep out the people you don’t want in your building, and if they do make it in, then identify them as soon as possible (ideally also keeping them contained to a section of the building). The old adage of network security specialists, that “security is like an onion” (it makes you cry!) because you need to have it in layers built up from the area you’re trying to protect, applies just as much for the physical security of a data center.

There are plenty of resources to guide you through the process of designing a highly secure data center that will focus on building a “gold standard” facility capable of hosting the most sensitive government data. For the majority of companies, however, this approach will be overkill and will end up costing millions to implement.

When looking at physical security for a new or existing data center, you first need to perform a basic risk assessment of the data and equipment that the facility will hold according to the usual impact-versus-likelihood scale (i.e., the impact of a breach of the data center versus the likelihood of that breach actually happening). This assessment should then serve as the basis of how far you go with the physical security. It is impossible to counter all potential threats you could face, and this is where identification of a breach, then containment, comes in. By the same token, you need to ask yourself if you are likely to face someone trying to blast their way in through the walls with explosives!

There are a few basic principles that I feel any data center build should follow, however:

  • Low-key appearance: Especially in a populated area, you don’t want to be advertising to everyone that you are running a data center. Avoid any signage that references “data center” and try to keep the exterior of the building as nondescript as possible so that it blends in with the other premises in the area.
  • Avoid windows: There shouldn’t be windows directly onto the data floor, and any glazing required should open onto common areas and offices. Use laminate glass where possible, but otherwise make sure windows are double-glazed and shatter resistant.
  • Limit entry points: Access to the building needs to be controlled. Having a single point of entry for visitors and contacts along with a loading bay for deliveries allows you to funnel all visitors through one location where they can be identified. Loading-bay access should be controlled from security or reception, ideally with the shutter motors completely powered down (so they can’t be opened manually either). Your security personnel should only open the doors when a pre-notified delivery is arriving (i.e., one where security has been informed of the time/date and the delivery is correctly labelled with any internal references). Of course all loading-bay activity should also be monitored by CCTV.
  • Anti-passback and man-traps: Tailgating (following someone through a door before it closes) is one of the main ways that an unauthorized visitor will gain access to your facility. By implementing man-traps that only allow one person through at a time, you force visitors to be identified before allowing access. And anti-passback means that if someone tailgates into a building, it’s much harder for them to leave.
  • Hinges on the inside: A common mistake when repurposing an older building is upgrading the locks on doors and windows but leaving the hinges on the outside of the building. This makes it really easy for someone to pop the pins out and just take the door off its hinges (negating the effect of that expensive lock you put on it!).
  • Plenty of cameras: CCTV cameras are a good deterrent for an opportunist and cover one of the main principles of security, which is identification (both of a security breach occurring and the perpetrator). At a minimum you should have full pan, tilt and zoom cameras on the perimeter of your building, along with fixed CCTV cameras covering building and data floor entrances/exits. All footage should be stored digitally and archived offsite, ideally in real time, so that you have a copy if the DVR is taken during a breach.
  • Make fire doors exit only (and install alarms on them): Fire doors are a requirement for health and safety, but you should make sure they only open outward and have active alarms at all times. Alarms need to sound if fire doors are opened at any time and should indicate, via the alarm panel, which door has been opened; it could just be someone going out for a cigarette, but it could also be someone trying to make a quick escape or loading up a van! On the subject of alarms, all doors need to have alarms and be set to go off if they are left open for too long, and your system should be linked to your local police force, who can respond when certain conditions are met.
  • Door control: You need granular control over which visitors can access certain parts of your facility. The easiest way to do this is through proximity access card readers (lately, biometrics have become more common) on the doors; these readers should trigger a maglock to open. This way you can specify through the access control software which doors can be opened by any individual card. It also provides an auditable log of visitors trying to access those doors (ideally tied in with CCTV footage), and by using maglocks, there are no tumblers to lock pick, or numerical keypads to copy.
  • Parking lot entry control: Access to the facility compound, usually a parking lot, needs to be strictly controlled either with gated entry that can be opened remotely by your reception/security once the driver has been identified, or with retractable bollards. The idea of this measure is to not only prevent unauthorized visitors from just driving into your parking lot and having a look around, but also to prevent anyone from coming straight into the lot with the intention of ramming the building for access. You can also make effective use of landscaping to assist with security by having your building set back from the road, and by using a winding route into the parking lot, you can limit the speed of any vehicles. And large boulders make effective barriers while also looking nice!
  • Permanent security staff: Many facilities are manned with contract staff from a security company. These personnel are suitable for the majority of situations, but if you have particularly sensitive data or equipment, you will want to consider hiring your security staff permanently. Contract staff can be changed on short notice (illness being the main cause), which is both a plus and a minus: it creates the opportunity for someone to impersonate your contracted security to gain access. You are also at more risk with a security guard who doesn't know your site and probably isn't familiar with your processes.
  • Test, test and test again: No matter how simple or complex your security system, it will be useless if you don’t test it regularly (both systems and staff) to make sure it works as expected. You need to make sure alarms are working, CCTV cameras are functioning, door controls work, staff understands how visitors are identified and, most importantly, no one has access privileges that they shouldn’t have. It is common for a disgruntled employee who has been fired to still have access to a building, or for a visitor to leave with a proximity access card that is never canceled; you need to make sure your HR and security policies cover removing access as soon as possible. It’s only by regular testing and auditing of your security systems that any gaps will be identified before someone can take advantage of them.
  • Don’t forget the layers: Last, all security systems should be layered on each other. This ensures that anyone trying to access your “core” (in most cases the data floor) has passed through multiple checks and controls; the idea is that if one check fails, the next will work.
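The auditing point above (no one retains access they shouldn't have) boils down to a set comparison between live badges and people who should hold them. A minimal sketch, with hypothetical names standing in for exports from your access-control system and HR roster:

```python
# Hypothetical data: in practice these sets come from your access-control
# system export and your HR/visitor-management records.
badge_holders = {"alice", "bob", "carol", "visitor-0412"}
active_staff = {"alice", "bob"}
authorized_visitors = {"visitor-0412"}

# Anyone with a live badge who is neither current staff nor a known
# visitor should have their access revoked immediately.
stale = badge_holders - active_staff - authorized_visitors
for badge in sorted(stale):
    print(f"REVOKE: {badge}")   # prints "REVOKE: carol"
```

Running this diff on a schedule (and after every termination) is exactly the kind of regular testing the bullet above calls for.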

The general rule is that anyone entering the most secure part of the data center will have been authenticated at least four times:

1. At the outer door or parking entrance. Don’t forget you’ll need a way for visitors to contact the front desk.

2. At the inner door that separates the visitors from the general building staff. This will be where identification or biometrics are checked to issue a proximity card for building access.

3. At the entrance to the data floor. Usually, this is the layer that has the strongest “positive control,” meaning no tailgating is allowed through this check. Access should only be through a proximity access card and all access should be monitored by CCTV. So this will generally be one of the following:

  • A floor-to-ceiling turnstile. If someone tries to sneak in behind an authorized visitor, the door gently revolves in the reverse direction. (In case of a fire, the walls of the turnstile flatten to allow quick egress.)
  • A man-trap. Provides alternate access for equipment and for persons with disabilities. This consists of two separate doors with an airlock in between. Only one door can be opened at a time and authentication is needed for both doors.

4. At the door to an individual server cabinet. Racks should have lockable front and rear doors that use a three-digit combination lock as a minimum. This is a final check, once someone has access to the data floor, to ensure they only access authorized equipment.

The above isn’t an exhaustive list but should cover the basics of what you need to consider when building or retrofitting a data center. It’s also a useful checklist for auditing your colocation provider if you don’t run your own facility.

In the end, however, all physical security comes down to managing risks, along with the balance of “CIA” (confidentiality, integrity and access). It’s easy to create a highly secure building that is very confidential and has very high integrity of information stored within: you just encase the whole thing in a yard of concrete once it’s built! But this defeats the purpose of access, so you need a balance between the three to ensure that reasonable risks are mitigated and to work within your budget—everything comes down to how much money you have to spend.

About the Author

David Barker is technical director of 4D Data Centres. David (26) founded the company in 1999 at age 14. Since then he has masterminded 4D’s development into the full-fledged colocation and connectivity provider that it is today. As technical director, David is responsible for the ongoing strategic overview of 4D Data Centres’ IT and physical infrastructure. Working closely with the head of IT server administration and head of network infrastructure, David also leads any major technical change-management projects that the company undertakes.

The art of physical, outer perimeter security for a Data Center

A problem I once faced was designing physical security for a data center. Documents on DC security tend to dive straight into the "guards, patrols and CCTV" side, plus the inner, technology-heavy layers (swipe cards, firewalls and so on).

Physical security is not just that, however. For DCs on large sites, where the plot is at least 10 times the built footprint of the DC, everything becomes much simpler, and meeting the basic Tier 3 criteria of TIA-942 at a macro level is realistic.

For DCs on smaller plots, the problem is completely different. The requirements for fence setback, fence height, perimeter lighting and so on are fairly vague, and are sometimes not covered at all in the standard training courses. I found the following article useful and am sharing it here.


When information security professionals think of perimeter security, firewalls, SSL VPN, RADIUS servers, and other technical controls immediately come to mind.  However, guarding the physical perimeter is just as important.

During the past weeks, I’ve written a series of articles that describe various components of an effective physical security strategy.  In this final article in the series, we’ll look closely at best practices for constructing the initial barrier to physical access to your information assets: the outer perimeter.

Components of a physical perimeter

Having served for several years in the military police, I know the concept of a physical perimeter has two meanings.  However, we'll skip the combat definition, with its automatic weapons placement and final protective lines, and focus on facility security.  (At least I hope your information asset physical security isn't that strict, Department of Defense facilities excluded...)

The outer perimeter of a facility is its first line of defense.  It can consist of two types of barriers: natural and structural.  According to the United States Army’s Physical Security Field Manual, FM 3-19.30 (2001, p. 4-1):

  • Natural protective barriers are mountains and deserts, cliffs and ditches, water obstacles, or other terrain features that are difficult to traverse.
  • Structural protective barriers are man-made devices (such as fences, walls, floors, roofs, grills, bars, roadblocks, signs, or other construction) used to restrict, channel, or impede progress.

In other words, if you can use the terrain, do so.  Otherwise, you have to spend a little money and build your own obstructions.

The most common type of structural outer perimeter barrier is the venerable chain-link fence.  However, it isn’t good enough to simply throw up a fence and call it a day.  Instead, your fence, a preventive device, should be supported by one or more additional prevention and detection controls.  The number of controls you implement and to what extent are dependent upon the risks your organization faces.

Fence basics

A fence is both a psychological and a physical barrier.  The psychology comes into play when casual passers-by encounter it.  It tells them that the area on the other side is off-limits, and the owner would probably rather they didn’t walk across the property.  A fence or wall of three to four feet is good enough for this.

For those who are intent on getting to your data center or other collection of information assets, fence height should be about seven feet.  See Figure A.  For facilities with high risk concerns, a top guard is usually added.  The top guard consists of three to four strands of barbed wire spaced about six inches apart and extends outward at a 45 degree angle.  The total height, including fence and top guard, should reach eight feet.

Figure A

Fence installation

Installing a perimeter fence requires some planning.  See Figure B.  Set the poles in concrete and ensure the links are pulled tight.  The links should form squares with sides of about two inches.  The fence should not leave more than a two-inch gap between its lower edge and the ground.

Figure B

Figure C depicts other considerations regarding fence placement.  First, identify any culverts, ditches, or objects that cause an opening beneath the fence.  Remember the two-inch rule above: there should be no gaps greater than two inches below the edge of the fence.  When any opening under the fence, whether enclosed as with the culvert in our example or open, exceeds an area greater than 96 square inches, it should be blocked (FM 3-19.30, p. 4-5).  This is a good rule of thumb.  However, use common sense: if you think a hole is big enough for a person to defeat your fence, block it.  Figures D and E (MIL-HDBK-1013/10, 1993, p. 15) show two methods.

Figure C

Clear the area on both sides of the fence to provide a clear view of future intruders.  The recommended clearances, as shown in Figure C, are:

  • 50 feet between the fence and any internal natural or man-made obstructions.
  • 20 feet between the fence and any external natural or man-made obstructions.

Natural obstructions include trees and high weeds or grass.
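The numeric rules above (two-inch gaps, 96-square-inch openings, 50/20-foot clear zones) lend themselves to a simple survey checklist. A minimal sketch, assuming measurements are taken in inches and feet as the manual specifies:

```python
def fence_opening_ok(width_in, height_in):
    """An opening under or through the fence must be blocked once its
    area exceeds 96 sq in (FM 3-19.30, p. 4-5)."""
    return width_in * height_in <= 96

def clearance_ok(internal_ft, external_ft):
    """Recommended clear zones: 50 ft inside the fence line,
    20 ft outside it, free of natural or man-made obstructions."""
    return internal_ft >= 50 and external_ft >= 20

print(fence_opening_ok(12, 8))    # True: 96 sq in, right at the limit
print(fence_opening_ok(12, 10))   # False: 120 sq in, must be blocked
print(clearance_ok(50, 20))       # True
```

Walking the perimeter with a checklist like this after storms or landscaping work keeps the two-inch and 96-square-inch rules from silently eroding.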

Figure D

Figure E

Supporting controls

Vehicle Barriers

When vehicular intrusions are a concern, support the fence and gate opening with bollards or other obstructions, as depicted in Figure F (FM 3-19.30, p. 3-4).

Figure F


Lighting

Lighting is a critical piece of perimeter security.  It works as a deterrent and helps human controls (roving guards, monitored cameras, first responders to alarms, etc.) detect intruders.  Lighting standards are pretty simple:

  • Provide sufficient light for the detection controls used
  • Position lighting to “blind” intruders and keep security personnel in shadows
  • Provide extra lighting for gates, areas of shadow, or probable ingress routes, as shown in Figure C.

A general rule to start with is to position lights providing about two foot-candles of illumination, mounted at a height of about eight feet.

Intrusion detection controls

As with our technical controls, we make the assumption that if someone wants to get through our perimeter, they will.  So we need to supplement our fence with intrusion detection technology.

Use of detection technology must be coupled with a documented and practiced response process.

The final word

The field of physical security is broad and is often a dedicated career path.  So the information here is not intended to make you an expert.  However, organizations are increasingly integrating computer and physical security under one manager.

The need for information security professionals to understand physical controls is great enough that the most popular certifications, such as CISSP, require some knowledge of the topic.  Don’t be left behind.

Finally, many of the controls discussed in this article are too extreme for many organizations.  However, it's always better to understand all your options.

About Tom Olzak

Tom is a security researcher for the InfoSec Institute and an IT professional with over 30 years of experience. He has written three books, Just Enough Security, Microsoft Virtualization, and Enterprise Security: A Practitioner’s Guide (to be publish…


10 “must haves” your data center needs to be successful

The evolution of the data center may transform it into a very different environment thanks to the advent of new technologies such as cloud computing and virtualization. However, there will always be certain essential elements required by any data center to operate smoothly and successfully.  These elements will apply whether your data center is the size of a walk-in closet or an airplane hangar – or perhaps even on a floating barge, which rumors indicate Google is building:

Figure A

 Credit: Wikimedia Commons

1. Environmental controls

A standardized and predictable environment is the cornerstone of any quality data center.  It’s not just about keeping things cool and maintaining appropriate humidity levels (according to Wikipedia, the recommended temperature range is 61-75 degrees Fahrenheit/16-24 degrees Celsius and 40-55% humidity). You also have to factor in fire suppression, air flow and power distribution.  One company I worked at was so serious about ensuring their data center remained as pristine as possible that it mandated no cardboard boxes could be stored in that room. The theory behind this was that cardboard particles could enter the airstream and potentially pollute the servers thanks to the distribution mechanism which brought cooler air to the front of the racks. That might be extreme but it illustrates the importance of the concept.
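The ranges quoted above are easy to turn into an automated sanity check on sensor readings. A minimal sketch; the thresholds are just the figures cited in the text, so tune them to your own design targets:

```python
def env_in_range(temp_f, humidity_pct):
    """Check one reading against the band quoted in the text:
    61-75 F (16-24 C) and 40-55% relative humidity.
    Thresholds are illustrative, not a design standard."""
    return 61 <= temp_f <= 75 and 40 <= humidity_pct <= 55

print(env_in_range(68, 45))   # True: comfortably inside the band
print(env_in_range(80, 45))   # False: too hot
print(env_in_range(68, 30))   # False: too dry (static risk)
```

A check like this belongs in whatever polls your environmental sensors, with alerting when a reading drifts out of band.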

2. Security

It goes without saying (but I’m going to say it anyhow) that physical security is a foundation of a reliable data center. Keeping your systems under lock and key and providing entry only to authorized personnel goes hand in hand with permitting only the necessary access to servers, applications and data over the network. It’s safe to say that the most valuable assets of any company (other than people, of course) reside in the data center. Small-time thieves will go after laptops or personal cell phones. Professionals will target the data center. Door locks can be overcome, so I recommend alarms as well. Of course, alarms can also be fallible so think about your next measure: locking the server racks? Backup power for your security system? Hiring security guards? It depends on your security needs, but keep in mind that “security is a journey, not a destination.”

3. Accountability

Speaking as a system administrator, I can attest that most IT people are professional and trustworthy.  However, that doesn’t negate the need for accountability in the data center to track the interactions people have with it. Data centers should log entry details via badge access (and I recommend that these logs are held by someone outside of IT such as the Security department, or that copies of the information are kept in multiple hands such as the IT Director and VP). Visitors should sign in and sign out and remain under supervision at all times. Auditing of network/application/file resources should be turned on. Last but not least, every system should have an identified owner, whether it is a server, a router, a data center chiller, or an alarm system.

4. Policies

Every process involved with the data center should have a policy behind it to help keep the environment maintained and managed. You need policies for system access and usage (for instance, only database administrators have full control to the SQL server). You should have policies for data retention – how long do you store backups? Do you keep them off-site and if so when do these expire? The same concept applies to installing new systems, checking for obsolete devices/services, and removal of old equipment – for instance, wiping server hard drives and donating or recycling the hardware.

5. Redundancy

 Credit: Wikimedia Commons

The first car I ever owned was a blue Ford Pinto. My parents paid $400 for it and at the time, gas was a buck a gallon, so I drove everywhere. It had a spare tire which came in handy quite often. I’m telling you this not to wax nostalgic but to make a point: even my old breakdown-prone car had redundancy. Your data center is probably much shinier, more expensive, and highly critical, so you need more than a spare tire to ensure it stays healthy. You need at least two of everything that your business requires to stay afloat, whether this applies to mail servers, ISPs, data fiber links, or voice over IP (VOIP) phone system VMs. Three or more wouldn’t hurt in many scenarios either!

It’s not just redundant components that are important but also the process to test and make sure they work reliably – such as scheduled failover drills and research into new methodologies.

6. Monitoring

Monitoring of all systems for uptime and health will bring tremendous proactive value but that’s just the beginning. You also need to monitor how much bandwidth is in use, as well as energy, storage, physical rack space, and anything else which is a “commodity” provided by your data center.

There are free tools such as Nagios for the nuts-and-bolts monitoring and more elaborate solutions such as Dranetz for power measurement. Alerts when outages or low thresholds occur are part of the process – and make sure to arrange a failsafe for your alerts so they are independent of the data center (for instance, if your email server is on a VMware ESX host which is dead, another system should monitor for this and have the ability to send out notifications).
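The failsafe idea above (a watcher that lives outside the data center it watches) can be as simple as a TCP liveness probe. A minimal sketch; the hostnames are hypothetical placeholders, and real deployments would hand alerts to SMS or a pager service rather than print them:

```python
import socket

def reachable(host, port, timeout=3.0):
    """Cheap liveness probe: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical hosts: the watcher itself runs OUTSIDE the data center,
# so an alert can still go out when the whole room (or its mail server) is down.
checks = {"mail.example.com": 25, "esx-host.example.com": 443}
for host, port in checks.items():
    if not reachable(host, port, timeout=1.0):
        print(f"ALERT: {host}:{port} unreachable")  # hand off to SMS/pager here
```

The point is independence: this script must not share the power feed, network uplink, or mail relay of the systems it is checking.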

7. Scalability

So your company needs 25 servers today for an array of tasks including virtualization, redundancy, file services, email, databases, and analytics? What might you need next month, next year, or in the next decade? Make sure you have the appropriate sized data center with sufficient expansion capacity to increase power, network, physical space, and storage.  If your data center needs are going to grow – and if your company is profitable I can guarantee this is the case – today is the day to start planning.

Planning for scalability isn’t something you stop, either; it’s an ongoing process. Smart companies actively track and report on this concept. I’ve seen references in these reports to “the next rivet to pop,” which identifies a gap in a critical area of scalability (e.g., lack of physical rack space) that must be addressed as soon as possible.

8. Change management

You might argue that change management falls under the “Policies” section, and that has some merit. However, I would respond that it is both a policy and a philosophy. Proper change management guidelines ensure that nothing happens in your data center which hasn’t been planned, scheduled, discussed and agreed upon, with backout steps or a Plan “B” in place. Whether it’s bringing new systems to life or burying old ones, the lifecycle of all elements of your data center must fall in accordance with your change management outlook.

9. Organization

I’ve never known an IT pro who wasn’t pressed for time. Rollout of new systems can result in some corners being cut due to panic over missed deadlines – and these corners invariably seem to include making the environment nice and neat.

A successful system implementation doesn’t just mean plugging it in and turning it on; it also includes integrating devices into the data center via standardized and supportable methods. Your server racks should be clean and laid out in a logical fashion (production systems in one rack, test systems in another). Your cables should be the appropriate length and run through cabling guides rather than haphazardly draped. Which do you think is easier to troubleshoot and support: a data center that looks like this:

 Credit: Wikimedia Commons


 Credit: Wikimedia Commons

10. Documentation

The final piece of the puzzle is appropriate, helpful, and timely documentation – another ball which can easily be dropped during an implementation if you don’t follow strict procedures. It’s not enough to just throw together a diagram of your switch layout and which server is plugged in where; your change management guidelines should mandate that documentation is kept relevant and available to all appropriate personnel as the details evolve – which they always do.

Not to sound morbid, but I live by the “hit by a bus” rule. If I’m hit by a bus tomorrow, one less thing for everyone to worry about is whether my work or personal documentation is up to date, since I spend time each week making sure all changes and adjustments are logged accordingly. On a less melodramatic note, if I decide to switch jobs I don’t want to spend two weeks straight in a frantic braindump of everything my systems do.

The whole ball of wax

The great thing about these concepts is that they are completely hardware/software agnostic.  Whether your data center contains servers running Linux, Windows or other operating systems, or is just a collection of network switches and a mainframe, hopefully these will be of use to you and your organization.

To tie it all together, think of your IT environment as a wheel, with the data center as the hub and these ten concepts as the surrounding “tire”:

 Credit: Wikimedia Commons

Devoting time and energy to each component will ensure the wheels of your organization turn smoothly.  After all, that’s the goal of your data center, right?

Know your data center monitoring system

You can’t depend on a building system to run the data center. Implement a BMS and a DCIM tool to monitor and predict system changes, tighten security and more.

Nearly every building today, new and old, has a building management or automation system (BMS or BAS) to monitor the major power, cooling and lighting systems. Building management systems are robust, built on standardized software platforms and communications protocols.

A BMS monitors and controls the total building infrastructure, primarily those systems that use the most energy. For example, the BMS senses temperatures on each floor — sometimes in every room — and adjusts the heating and cooling output as necessary.

The BMS usually monitors all the equipment in the central utility plant: chillers, air handlers, cooling towers, pumps, water temperatures and flow rates and power draws. Automation systems shut off lights at night, control window shades as the sun angle changes, and track and react to several other conditions. Regardless of control sophistication, the BMS’ most important function is to raise alarms if something goes out of pre-set limits or fails.

Are these the same things we want from our DCIM?

There is no single standard of data center infrastructure management (DCIM); it can be as simple as a monitor on cabinet power strips, or as sophisticated as a granular, all-inclusive data center monitoring system.

The BMS is a facilities tool that also deals with systems, so why do we need a separate DCIM system as well? DCIM provides more detailed information than BMS, and helps the data center manager run the wide range of critical systems under their care.

DCIM and BMS are not mutually exclusive; they should be complementary. Some equipment in the data center should be monitored by the BMS. When choosing a DCIM tool, ensure it can interface with the BMS.

There are three fundamental differences between BMS and DCIM.

BMS monitors major parameters of major systems, and raises alarms if something fails. Although you can see trends that portend a problem, predictive analysis is not BMS’ purpose.

If the building air conditioning fails, it’s uncomfortable, but if the data center air conditioning fails, it’s catastrophic. That’s one example of why DCIM provides trend information and the monitoring data to invoke preventive maintenance before something critical fails. Prediction requires the accumulation, storage and analysis of an enormous amount of data — data that would overwhelm a BMS. Turning the mass of data from all the monitored devices into useful information can prevent a serious crash.

The BMS uses different connectivity protocols than IT. Most common to BMS are DeviceNet, XML, BACnet, LonWorks and Modbus, whereas IT uses mainly Internet Protocol (IP). Monitoring the data center with BMS would require the system to have a communications interface or adapter for every IP connection.

Data center devices handle large quantities of data points — often 256 per device, a common binary quantity. The cumulative input from every device in the facility would overwhelm a BMS in terms of both data point interfaces and data reduction and analysis tasks. DCIM software accumulates those thousands of pieces of information from IP-based data streams and distills them into usable information.
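The distill-and-forward step described above can be sketched in a few lines. The device names, readings, and alarm limits below are purely hypothetical stand-ins for the IP-based streams a DCIM tool would poll:

```python
from statistics import mean

# Hypothetical per-device readings (device -> recent data points),
# standing in for the IP-based streams a DCIM tool polls.
readings = {
    "pdu-a1": [3.1, 3.3, 3.2],    # amps per sample
    "pdu-a2": [7.9, 8.4, 8.1],
    "crac-1": [22.5, 22.7, 22.6], # supply air temperature, Celsius
}

# Invented out-of-limit thresholds for the example.
LIMITS = {"pdu-a1": 16.0, "pdu-a2": 8.0, "crac-1": 27.0}

# Distill many raw points into one summary value per device...
summary = {dev: round(mean(vals), 2) for dev, vals in readings.items()}

# ...and forward only out-of-limit conditions, the way the text says
# only major alarms should be re-transmitted to the BMS.
alarms = [dev for dev, vals in readings.items() if max(vals) > LIMITS[dev]]

print(summary)  # condensed view for the operator
print(alarms)   # only these would reach the BMS
```

The point of the sketch is the shape of the pipeline, not the numbers: thousands of raw samples collapse into a handful of summaries, and only threshold violations propagate upward.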

Only major alarms and primary data should be segmented by DCIM and re-transmitted to the BMS. The rest is of little use in running the building.
DCIM will do many things a BMS won’t

DCIM is an evolving field, and not every DCIM product does all of these things, but these are the general areas DCIM handles and BMS products do not:

Electrical phase balancing: The output of every large uninterruptible power supply (UPS), as well as the branch circuit distribution to many data center cabinets, is three-phase. In order to realize maximum capacity from each circuit and the UPS, equalize the current draws on each phase. All UPS systems — and many power distribution units (ePDUs, iPDUs, CDUs, etc.) — have built-in monitoring, but it’s inefficient to run from cabinet-to-cabinet and device-to-device to balance power.

If the data center uses “smart” PDUs with IP-addressable interfaces, the data center monitoring system can track the power draws on each phase in each cabinet, as well as at each juncture in the power chain. Users can calculate a balanced scheme before making actual power changes in cabinets. The BMS looks only at the incoming power to the UPS, which is insufficient for this important and ongoing task.
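As a rough illustration of the balancing arithmetic a DCIM tool performs across those cabinet-level readings (the cabinet names and current draws below are invented for the example):

```python
# Hypothetical per-cabinet, per-phase current draws (amps), as smart PDUs
# with IP-addressable interfaces might report them.
cabinets = {
    "cab-01": {"L1": 12.0, "L2": 4.0, "L3": 8.0},
    "cab-02": {"L1": 3.0,  "L2": 9.0, "L3": 6.0},
}

# Aggregate each phase across the power chain.
totals = {"L1": 0.0, "L2": 0.0, "L3": 0.0}
for draws in cabinets.values():
    for phase, amps in draws.items():
        totals[phase] += amps

# One common imbalance metric: maximum deviation from the mean phase
# current, expressed as a percentage of the mean.
avg = sum(totals.values()) / 3
imbalance_pct = max(abs(a - avg) for a in totals.values()) / avg * 100

print(totals)                    # {'L1': 15.0, 'L2': 13.0, 'L3': 14.0}
print(round(imbalance_pct, 1))   # 7.1
```

With per-cabinet data like this, a balanced scheme can be calculated on paper before anyone touches a plug — exactly the task the text says the BMS, which sees only the UPS input, cannot support.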

Rack and cabinet temperature/humidity monitoring: The BMS monitors some representative point in the room and alarms if this point hits a significant out-of-range condition, but that’s not enough for a good monitoring system in the data center. Temperatures vary significantly from the top to bottom of a cabinet and across the breadth of a facility. With higher inlet temperatures and denser cabinets becoming the norm, comprehensive temperature information matters when deciding where to install a new piece of equipment, or during an air conditioner maintenance or failure period.

Most “smart” PDUs have temperature and humidity probe accessories to monitor critical points on cabinets and in the room via the same IP port that transmits power use. Even minimal DCIM packages can turn this additional data load into useful information.
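A minimal sketch of what alerting on that probe data might look like, using hypothetical cabinet readings and assuming the ASHRAE 2008 recommended inlet envelope of 64.4 to 80.6°F:

```python
# Allowed inlet temperature envelope (deg F); assumption based on the
# ASHRAE 2008 recommended range.
ALLOWED = (64.4, 80.6)

# Hypothetical inlet temperatures at different cabinet positions, as
# temperature probes on smart PDUs might report them over IP.
inlet_temps = {
    ("cab-01", "top"): 82.1,
    ("cab-01", "bottom"): 71.3,
    ("cab-02", "top"): 78.8,
    ("cab-02", "bottom"): 68.0,
}

# Flag every probe outside the envelope, keeping its position so the
# operator knows whether the problem is at the top or bottom of a cabinet.
out_of_range = [
    (cab, pos, t) for (cab, pos), t in inlet_temps.items()
    if not (ALLOWED[0] <= t <= ALLOWED[1])
]
print(out_of_range)  # [('cab-01', 'top', 82.1)]
```

Note that only the top-of-cabinet probe trips the alarm here, which is the point of the paragraph above: a single representative room sensor would have missed it.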

Cabinet security: The building security system — tied into the BMS or not — observes data center entry and exit, but rarely anything else. It is becoming more common for data centers to house equipment from different owners, such as in colocation facilities, or to have cabinets with restricted access and equip those cabinets with cipher locks. Remote-monitored locks are available, and many can be connected through intelligent power strips. A DCIM tool can be configured to track security information so only the data center manager or other authorized parties access it.

Inventory control: Some of the more robust DCIM software packages track IT hardware — sometimes with the help of radio frequency identification tags. This is useful in a large facility where assets are regularly added, replaced and moved.

Source from: http://searchdatacenter.techtarget.com/tip/Know-your-data-center-monitoring-system

Data center hot-aisle/cold-aisle containment how-tos


Though data center hot-aisle/cold-aisle containment is not yet the status quo, it has quickly become a design option every facility should consider.

Server and chip vendors packing more compute power into smaller envelopes has caused sharp rises in data center energy densities. Ten years ago, most data centers ran 500 watts to 1 kilowatt (kW) per rack or cabinet. Today densities can get to 20 kW per rack and beyond, and most expect the number to continue to increase.

Data center hot-aisle/cold-aisle containment can better control where hot and cold air goes so that a data center’s cooling system runs more efficiently. And the method has gained traction: according to SearchDataCenter.com’s “Data Center Decisions 2009” survey of data center managers, almost half had already implemented the technology or planned to. But there are several considerations, and various questions that data center managers should ask themselves:

  • Is containment right for you?
  • Should you do hot-aisle containment or cold-aisle containment?
  • Should you do it yourself or buy vendor products?
  • What about fire code issues?
  • How do you measure whether containment actually worked as hoped?

Do you need hot/cold aisle containment?

First, a data center manager needs to decide whether hot-aisle/cold-aisle containment is a good fit for his facility. Dean Nelson, the senior director of global data center strategy at eBay Inc., said it’s not a question for his company, which already uses the method.


 But as Bill Tschudi, an engineer at Lawrence Berkeley National Laboratory who has done research on the topic, said, it’s all about taking the right steps to get there.

“You can do it progressively,” he said. “Make sure you’re in a good hot-aisle/cold-aisle arrangement and that openings are blocked off. You don’t want openings in racks and through the floors.”

These hot- and cold-aisle best design practices are key precursors to containment, because when they’re done incorrectly, containment will likely fail to work as expected.

Containment might not be worth it in lower-density data centers because there is less chance for the hot and cold air to mix in a traditional hot-aisle/cold-aisle design.

“I think the ROI in low-density environments probably won’t be there,” Nelson said. “The cost of implementing curtains or whatever would exceed how much you would save.”

But that threshold is low. Data centers with densities as low as 2 kW per rack should consider hot-aisle/cold-aisle containment, Nelson said. He suggests calling the utility company, or other data center companies, who will perform free data center assessments. In some cases, the utility will then offer a rebate if a data center decides to implement containment. Utilities have handed out millions of dollars to data centers for implementing energy efficient designs.

Hot aisle containment or cold aisle containment?

Next up for data center managers is deciding whether to contain the hot or the cold aisle. On this score, opinions vary. For example, American Power Conversion Corp. (APC) sells a pre-packaged hot-aisle containment product. Liebert Corp. sells cold-aisle containment. Not surprisingly, both APC and Liebert argue that their solution is best.


Containing the hot aisle means you can turn the rest of your data center into the cold aisle, as long as there is containment everywhere. That is how data center colocation company Advanced Data Centers built its Sacramento, Calif., facility, which the U.S. Green Building Council has pre-certified for Leadership in Energy and Environmental Design (or LEED) Platinum status in energy efficiency.

“We’re just pressurizing the entire space with cool air where the cabinets are located,” said Bob Seese, the president of Advanced Data Centers. “The room is considered the cold aisle.”

This approach raises concerns that the contained hot aisle might get too hot for the IT equipment and too uncomfortable for people working in the space. Nelson, however, said that as long as there’s good airflow and the air is being swiftly exhausted from the space, overheating shouldn’t be a problem.

Containing the cold aisle means you may more easily use containment in certain sections of a data center rather than implementing containment everywhere. But it also requires finding a way to channel the hot air back to the computer room air conditioners (CRACs) or contending with a data center that is hotter than normal.

Cold-aisle containment proponents cite the flexibility of their approach. Cold aisle can be used for raised-floor and overhead cooling environments. Cold-aisle advocates also say that containing the cold aisle means you can better control the flow and volume of cool air entering the front of the servers.

Then, of course, data centers could contain both the hot and cold aisles.

Do-it-yourself methods vs. prepackaged vendor products

There are many ways to accomplish data center containment. If a company wants, it can hire APC, Liebert, Wright Line LLC or another vendor to install a prepackaged product.

A turnkey installation may bring peace of mind to a data center manager who wants accountability should containment fail to work as advertised.

“They’re good if you want someone to come in and do the work,” Nelson said. “You can hire them.”

But these offerings come at a price. Homegrown methods of containment are often cheaper and, if done correctly, are just as effective as vendor-provided approaches. Nelson and Tschudi said they prefer do-it-yourself methods because of the lower cost.

If a data center staff does undertake data center containment strategies themselves, there are various options. Some data centers have installed thick plastic curtains, which can hang from the ceiling to the top of the racks or on the end of a row of racks, or both. In addition, a data center can build something like a roof over the cold aisles or simply extend the heights of the racks by installing sheet metal or some other product on top of the cabinets. All these structures prevent hot and cold air from mixing, making the cooling system more efficient.

Fire code issues with hot/cold aisle containment

Almost every fire marshal is different, so getting a marshal involved early in the process is important. A data center manager must know what the local fire code requires and design accordingly, as hot-aisle/cold-aisle containment can stoke fire-code issues.

“The earlier you get them involved, the better,” Tschudi said.

A fire marshal will want to ensure that the data center has sprinkler coverage throughout. So if a data center has plastic curtains isolating the aisles, they may need fusible links that melt at high temperatures so the curtains fall to the floor and the sprinklers reach everywhere. In designs with roofs over the aisles, this may require a sprinkler head under the roof.

“We made sure we could adapt to whatever the fire marshal required,” Seese said.

Measuring hot/cold containment efficacy

It’s also crucial to determine whether containment has worked; otherwise, there’s no justification for the project.


Containment benefits can reverberate throughout a data center. If hot and cold air cannot mix, the air conditioners don’t have to work as hard to get cool air to the front of servers. That can mean the ability to raise the temperature in the room and ramp down air handlers with variable speed drive fans. That in turn could make it worthwhile to install an air-side or water-side economizer. Because the data center can run warmer, an economizer can be used to get free cooling for longer periods of the year.

Experts suggest taking a baseline measurement of the data center’s power usage effectiveness (PUE), which compares total facility power with the power used by the IT equipment.

Nelson said that one of eBay’s data centers had a power usage effectiveness rating of more than 2, which is close to average. After installing containment in his data center, eBay got the number down to 1.78.

“It was an overall 20% reduction in cooling costs, and it paid for itself well within a year,” he said. “It is really the lowest-hanging fruit that anyone with a data center should be looking at.”
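The arithmetic behind those numbers is simple enough to sketch. Only the PUE ratings of 2.0 and 1.78 come from the text; the 1 MW IT load below is an assumption made to give the ratios something to multiply against:

```python
def pue(total_facility_kw, it_load_kw):
    """Power usage effectiveness: total facility power / IT power."""
    return total_facility_kw / it_load_kw

# Assumed IT load; the 2.0 and 1.78 ratings are from the eBay example.
it_kw = 1000.0
before = pue(2000.0, it_kw)  # PUE of 2.0 before containment

# Non-IT overhead (cooling, power losses, lighting) is PUE minus 1,
# scaled by the IT load.
before_overhead_kw = (2.00 - 1.0) * it_kw
after_overhead_kw = (1.78 - 1.0) * it_kw

saving_pct = (before_overhead_kw - after_overhead_kw) / before_overhead_kw * 100
print(round(saving_pct, 1))  # ~22% cut in overhead, in line with the quoted 20%
```

The overhead reduction lands near the quoted 20 percent figure; the small difference is expected, since the article’s 20 percent refers to cooling costs specifically rather than all non-IT overhead.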

source from: http://searchdatacenter.techtarget.com/news/1379116/Data-center-hot-aisle-cold-aisle-containment-how-tos

Making Big Cuts in Data Center Energy Use

The energy used by our nation’s servers and data centers is significant. In a 2007 report, the Environmental Protection Agency estimated that this sector consumed about 61 billion kilowatt-hours (kWh), accounting for 1.5 percent of total U.S. electricity consumption. While the 2006 energy use for servers and data centers was more than double the electricity consumed for this purpose in 2000, recent work by RMI Senior Fellow Jonathan Koomey, a researcher and consulting professor at Stanford University, found that this rapid growth slowed because of the economic recession. At the same time, the economic climate led data center owner/operators to focus on improving energy efficiency of their existing facilities.

So how much room for improvement is there within this sector? The National Snow and Ice Data Center in Boulder, Colorado, achieved a reduction of more than 90 percent in its energy use in a recent remodeling (case study below). More broadly, Koomey’s study indicates that typical data centers have a PUE (see sidebar) between 1.83 and 1.92. If all losses were eliminated, the PUE would be 1.0. Impossible to get close to that value, right? A survey following a 2011 conference of information infrastructure professionals asked, “…what data center efficiency level will be considered average over the next five years?”

More than 20 percent of the respondents expected average PUE to be within the 1.4 to 1.5 range, and 54 percent were optimistic that the efficiency of facilities would improve to realize PUE in the 1.2 to 1.3 range.
Further, consider this: Google’s average PUE across its data centers is only 1.14. Even more impressive, Google’s PUE calculations include transmission and distribution losses from the electric utility. Google has developed its own efficient server-level designs, optimized power distribution, and employed many strategies to drastically reduce cooling energy consumption, including a unique approach to cooling in a hot and humid climate using recycled water.


For every unit of IT power produced, energy is used to cool and light the rooms that house the servers. Additionally, energy is lost due to inefficient power supplies, idling servers, unnecessary processes, and bloatware (pre-installed programs that aren’t needed or wanted). In fact, about 65 percent of the energy used in a data center or server room goes to space cooling and electrical (transformer, UPS, distribution, etc.) losses. Several efficiency strategies can reduce this.

For more information on best practices on designing low energy data centers, refer to this Best Practices Guide from the Federal Energy Management Program.


About half of the energy use in data centers goes to cooling and dehumidification, which poses huge opportunities for savings. First, focus on reducing the cooling loads in the space. After the load has been reduced through passive measures and smart design, select the most efficient and appropriate technologies to meet the remaining loads. Reducing loads is often the cheapest and most effective way to save energy; thus, we will focus on those strategies here.

Cooling loads in data centers can be reduced a number of ways: more efficient servers and power supplies, virtualization, and consolidation into hot and cold aisles. In its simplest form, hot aisle/cold aisle design involves lining up server racks in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. In more sophisticated designs, a containment system (anything from plastic sheeting to commercial products with variable fans) can be used to isolate the aisles and prevent hot and cold air from mixing.

But one of the simplest ways to save energy in a data center is simply to raise the temperature. It’s a myth that data centers must be kept cold for optimum equipment performance. You can raise the cold aisle setpoint of a data center to 80°F or higher, significantly reducing energy use while still conforming with both the American Society of Heating, Refrigerating, and Air Conditioning Engineers’ (ASHRAE) recommendations and most IT equipment manufacturers’ specs. In 2004, ASHRAE Technical Committee 9.9 (TC 9.9) standardized temperature (68 to 77°F) and humidity guidelines for data centers. In 2008, TC 9.9 widened the temperature range (64.4 to 80.6°F), enabling an increasing number of locations throughout the world to operate with more hours of economizer usage.
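A toy calculation of what the widened envelope buys in economizer hours, using an invented day of hourly outdoor temperatures. This compares dry-bulb temperature only; a real assessment would also have to account for humidity:

```python
# Invented hourly outdoor dry-bulb temperatures (deg F) for one day.
outdoor = [58, 56, 55, 57, 60, 63, 67, 70, 74, 78, 81, 83,
           84, 83, 81, 78, 75, 72, 68, 65, 62, 60, 59, 58]

# Hours in which outside air alone is cool enough to serve the load,
# under the 2004 envelope (up to 77 F) vs the 2008 envelope (up to 80.6 F).
hours_2004 = sum(1 for t in outdoor if t <= 77)
hours_2008 = sum(1 for t in outdoor if t <= 80.6)

print(hours_2004, hours_2008)  # 17 19
```

Even on this made-up day, the widened range adds two free-cooling hours; over a year, and in milder climates, the effect compounds into the substantial economizer gains the text describes.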

For even more energy savings, refer to ASHRAE’s 2011 Thermal Guidelines for Data Processing Environments, which presents an even wider range of allowable temperatures within certain classes of server equipment.


Just up the road from RMI’s office in Boulder, The National Snow and Ice Data Center is running around the clock to provide 120 terabytes of scientific data to researchers across the globe. Cooling the server room used to require over 300,000 kWh of energy per year, enough to power 34 homes. The data center was recently redesigned with all major equipment sourced within 20 miles of the site. The redesign resulted in a reduction of more than 90 percent in the energy used for cooling. The new Coolerado system, essentially a superefficient indirect evaporative cooler built around a patented heat and mass exchanger, uses only 2,560 kWh/year.

Before the engineers from RMH Group could use the Coolerado in lieu of compressor-based air conditioning, they had to drastically reduce the cooling loads. They accomplished this with the following strategies:

  • Less stringent temperature and humidity setpoints for the server room—this design meets the ASHRAE Allowable Class 1 Computing Environment setpoints (see Figure 2)
  • Airside economizers (enabled to run far more often within the expanded temperature ranges)
  • Virtualization of servers
  • Rearrangement and consolidation into hot and cold aisles

The remaining energy required for cooling and powering the servers is offset by the energy produced by the onsite 50.4 kW solar PV system. In addition to producing clean energy onsite, the battery backup system provides added security in the case of a power outage.

Rick Osbaugh, the lead design engineer from RMH Group, cites three key enabling factors that allowed such huge energy savings:

  • A Neighborly Inspiration: The initial collaboration between NREL and NASA on utilizing a technology never used on a data center was the starting point of the design process. This collaboration was born from two neighbors living off the grid in Idaho Springs—but in this case, these neighbors also happened to be researchers at NREL and NASA.
  • Motivated Client: In this case, the client, as well as the entire NSIDC staff, wanted to set an example for the industry, and pushed the engineers to work out an aggressive low-energy solution. In order to minimize downtime, the staff members at the NSIDC all pitched in to help ensure that the entire retrofit was done in only 90 hours.
  • Taking Risks: Finally, the right team was assembled to implement a design that pushes the envelope. The owner and engineer were willing to assume risks associated with something never done before.


In 2011, Mortenson Construction completed an 85,000-square foot data center expansion for a top five search engine company in Washington state. This scalable, modular system supports a 6 MW critical IT load and has a PUE of only 1.08! This level of efficiency was possible because of a virtual design process that utilized extensive 3D modeling coupled with an innovative cooling strategy. Referred to as “computing coops,” the pre-engineered metal buildings incorporate many of the same free-air cooling concepts chicken coops utilize by bringing outside air through the sides of the building through the servers, and then exhausting hot air through the cupola, creating a chimney effect.

With a tight construction schedule (only eight months), the design team created an ultraefficient data center while also saving over $5 million compared to the original project budget.

A special thanks to Rick Osbaugh of the RMH Group, and Hansel Bradley of Mortenson Construction for contributing content for this article.




from: http://blog.rmi.org/blog_making_big_cuts_in_data_center_energy_use

iSpace brings Data Center technology into its training curriculum

GD&TĐ – The signing ceremony for a strategic partnership between the iSpace College of Information Technology (iSpace) and the company DataCenter Services, aimed at bringing Data Center technology into the training curriculum for students, took place this morning (April 2).

The strategic partnership signing ceremony between the iSpace College of Information Technology (iSpace) and DataCenter Services

iSpace is considered the first school in the country to bring an in-depth Data Center technology curriculum into instruction for IT students, and to chart a course for the Data Center specialist profession in Vietnam.

Mr. Nguyễn Hoàng Anh, rector of the iSpace College of Information Technology, said that Data Center technology will be built into the school’s general curriculum for final-year students and will be a mandatory graduation requirement. Beyond the knowledge needed to keep an organization’s or business’s information safe and secure, students will gain the additional knowledge and skills required to work in data centers running the most modern technology.

Under the curriculum design (developed by DataCenter Services’ senior experts in cooperation with iSpace), final-year students will study for two to three months and receive a certificate of completion when they graduate. They will also be equipped with the latest, most modern knowledge in managing and operating Data Centers.