Relocation Life Cycle 6: An Actual Customer Call
By Larry Smith, President, ABR Consulting Group, Inc.


The draft budget below is the result of an actual call from a visitor to our website.  We produced it after a 5-minute phone call, without ever seeing the data center.  The $720,000 budget is a realistic figure for planning and relocating a large data center containing IBM mainframes and peripheral equipment.  About 70% of the cost is directly related to acquiring special cables and components for the IBM system, voice and data cabling of the computer room, and the fees charged by IBM and other large vendors to relocate their own equipment.  By comparison, planning and relocating a 4,000-6,000 sq.ft. data center containing only servers, routers, etc. would run approximately $200,000, with half the cost going to cabling, components and relocation expense.  The remainder is for planning and project management.

This draft budget was prepared quickly and without seeing the site, so the final figure could vary dramatically in either direction; we must see the site in order to provide an accurate budget.  The value of the exercise is that it puts a detailed list of the items involved in planning and relocating a data center in front of the customer.

Have a data center project on the horizon?  Need help in finalizing your project budget?  The one thing you absolutely do not want to do is come in too low (see our newsletter on this subject).  ABR can help.  We invite you to go to our contact page (Contact Us) and either send us an email or call us directly.



On Thursday, July 5, 2001, we received a call from the operations manager of a data center in the Midwest, who left a voice mail that went something like this:  “This is (name withheld) from (name of organization withheld) and I need to know how much it’s going to cost to move my data center.  Could someone from your office come out here today or tomorrow to meet with us?”

I made contact with this individual a short time later and had a very brief 5-minute conversation.  In that conversation, I learned that she managed a large data center containing an IBM mainframe and other IBM systems.  I asked for the size of the data center, and she didn’t know.  I told her that I was not able to travel immediately but that I would have something for her by the next morning.  I got her email address, and the conversation ended.

Equipped with vast experience in planning and relocating IBM-based data centers, I made the following assumptions and prepared the draft budget below:

  1. We were recently involved with two IBM-based data centers in similar organizations (same industry) and simply estimated the size of the data center to be approximately 10,000 sq.ft. for budgeting purposes.
  2. Learning that this was a very Blue shop, we were certain that IBM Global Services was hovering around this project somewhere and that the customer probably had a quote from IBM in hand.  We suspect that sticker shock prompted the management of this organization to seek alternate solutions.
  3. A project of this size and scope will need 5,000-6,000 hours to plan and manage from beginning to end.  Note that the entire 6,000 hour total is over and above the normal day-to-day workload.
  4. Of the 6,000 hour total, we would suggest 2,500 hours for outside consulting and project management resources (such as ABR).  The remaining 3,500 hours will come by increasing the workload of existing staff.


The draft budget below was emailed by 10:00 a.m. PST the following day (Friday, July 6, 2001).  Two hours later, I contacted the caller to discuss it.  First, she was quite surprised that we could provide such a good estimate of her data center inventory without seeing the room and without her naming a single piece of equipment in it.  Second, her organization had indeed asked for a quote from IBM Global Services and was attempting to find a less expensive solution.  Third, judging from her questions and comments, our draft quote had to be much lower than what she saw from IBM.  She did not reveal IBM’s quote, but we know they cannot beat ABR’s price, given that we both bid on the same specification and that they bid using their normal labor rates.  Our labor costs are up to $100/hr. less, depending on the resource category.  Plus, our quotes include all consulting and project management to relocate every piece of IT equipment in your data center.  The key word is EVERYTHING.  Our competition excludes many items that you will find in the draft budget below.

Draft Relocation Budget

Our immediate objective was to answer her question “How much does it cost to move my data center?”  The following draft budget is very similar to what the caller saw on her email 24 hours after her initial call.  It has been modified slightly for viewing by our website visitors.


Thank you so much for contacting the ABR Consulting Group, Inc. with regard to relocating your data center. It sounds like you need to come up with something quickly, so let me be brief, make an enormous number of assumptions based on our five-minute phone call, and provide you with a number. I’ll call you shortly and we can modify the assumptions as necessary.

Assumptions: (All of this is a pure guess based on previous experience)

1.  You have a data center that is approximately 10,000 square feet in size.

2.  The following is included in your construction budget and is not needed here:

  • Building construction (including the raised floor)
  • UPS/generator, switchgear, etc.
  • Voice/data cabling for the general building (all but the data center)
  • Underfloor electrical (not final placement, however)
  • All HVAC, fire safety, security, etc.

3.  The following is not included as part of the construction, and you will need to budget for it:

  • All voice/data cabling to equipment, racks, cabinets and remainder of computer room
  • All pre-wiring for the PBX
  • Bus & Tag cables for approx. 40 channels
  • 20 data cabinets
  • Furniture for server lab area
  • KVM systems for 80 servers
  • Planning for relocation of all IT equipment
  • Final identification of electrical receptacles and their locations
4.  Equipment – Mainframe & Peripherals
  • One 2-3 cabinet IBM 9672 RXX.  If you have an IBM 3090, we need to talk.
  • 4-6 strings of IBM (or equivalent) DASD with controllers
  • 10,000-15,000 tape cartridge systems and a tape storage area
  • 1 IBM 3745/46
  • 4-6 IBM 3274 controllers
  • 20 operating and network consoles
  • Possible IBM 9700 printer
  • Tables, chairs, bookcases, storage cabinets, etc.
5.  Equipment – Mid Frames
  • 6-8 large Sun systems, DEC 7000 with Storage Works, etc.

6.  Equipment – Servers

  • 40-60 NT servers
  • 80-100 Unix servers
  • 80 CRTs
  • 60% of these servers are in cabinets.  40% are on lab-type furniture systems
7.  Equipment – Network
  • 2-3 routers
  • 4-6 switches (Cisco 6509, 5000)
  • 4 cabinets full of modems and other communications equipment
  • 8 relay racks full of other network equipment
8.  Voice/Data Circuits
  • 60 dial-in circuits with rotor system for students, etc.
  • 15 T1 lines from other buildings and to web
9.  Workstations for Staff
  • 40 workstations for staff


In planning the budget, note that for a 12,000 to 15,000 sq.ft. data center with IBM mainframes, you will need approximately 2,500 hours of consulting and project management assistance to design all equipment layouts, produce the entire equipment migration plan and be onsite to supervise the entire migration event. Note that the labor rates that we quote below are our labor rates. IBM Global Services rates are about $180/hr.-$225/hr. You will also need to budget for components that are not normally part of the construction budget.  They are included here.
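The rate difference above is worth quantifying. The back-of-the-envelope sketch below compares consulting cost at ABR’s $125/hr. rate against the quoted IBM Global Services range of $180-$225/hr., over the 2,500 hours of outside assistance recommended for a project of this size. (This is a pure illustration; the actual draft budget below blends $125/hr. and $90/hr. resource categories.)

```python
# Illustrative comparison of outside consulting cost at the rates quoted
# in this article. Figures are planning-level estimates only.
HOURS = 2500           # suggested outside consulting and project management
ABR_RATE = 125         # $/hr.
IBM_LOW, IBM_HIGH = 180, 225   # $/hr., IBM Global Services range quoted above

abr_cost = HOURS * ABR_RATE
ibm_low_cost = HOURS * IBM_LOW
ibm_high_cost = HOURS * IBM_HIGH

print(f"ABR:  ${abr_cost:,}")                                   # $312,500
print(f"IBM:  ${ibm_low_cost:,} - ${ibm_high_cost:,}")          # $450,000 - $562,500
print(f"Difference: ${ibm_low_cost - abr_cost:,} - ${ibm_high_cost - abr_cost:,}")
```

At these rates the spread on labor alone runs from $137,500 to $250,000 on a 2,500-hour engagement.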

Completing the Data Center and Pre-Move Installs

Note:   This section does not include costs for new network hardware

1. Voice/data cabling for computer room (includes 12 relay racks) $    45,000
2. 10 KVMs $    24,000
3. 20 new data cabinets $    44,000
4. Shelves for cabinets $      6,000
5. New bus & tag cables (plenum-rated) (includes installation) $    50,000
6. New LIC cables for 3745/46 (plenum-rated) (includes install) $    12,000
7. New ESCON cables (includes installation) $    16,000
8. New RS232 and V.35 cables (includes installation) $      8,000
9. New furniture systems for NOC, servers, etc. $    40,000
10. Power strips $      6,000
11. 3X74 controller racks $      2,000
12. Baluns, patch cords, coax cables, etc. $      8,000
13. Additional PBX cards, components, etc. $    16,000
14. Seed modems, CSUs, etc. $    24,000
15. Contingency $    20,000
Total Components $  321,000*

*  This amount can increase dramatically if you must acquire “seed” equipment to be
pre-installed to reduce downtime (i.e., the data center must move in 12 hours but
certain operations must be online within 4-6 hours).  I do not detect a serious
need here unless you have an IBM 3495 tape system, which takes 8-10 days
to relocate.

Contracted Relocation Expense

We are assuming that you are relocating to either another floor in the same building or to another building in your multi-building campus.

Note:  The costs below do not include re-configuring your fiber/copper backbone
to other buildings in your campus should you move to a different building.
We can make this estimate but we need to see the site.

1. Contracting with IBM to relocate all IBM equipment $    60,000 **
2. Contracting with other vendors to relocate the large, free-standing equipment $    18,000
3. Relocate the tape library $    10,000
4. Relocate servers, printers, CRTs, etc. in computer room $    20,000
5. Relocate staff PC workstations $      8,000
6. Mover expense $      6,000
Total Relocation Expense $  122,000 ***

**  This projected expense does not include IBM’s special equipment replacement
insurance, which guarantees replacement equipment within a specified number
of hours should your equipment be damaged during the move.

*** The relocation expense could be as low as $50,000, depending on actual inventory.

Consulting and Project Management

For more detail on consulting and project management, see our article on Equipment Planning and Migration.

1.  Early Project Planning and Management

Working with the architect and engineers.  Equipment layouts, drawings, equipment inventory, coordinating activities for final construction and pre-install. Includes cabling RFP.  Includes PBX inventory.

Consulting & Project Mgt 500 hrs $125/hr. $    62,500
Other Technical Labor 300 hrs $90/hr $    27,000
Sub Total Consulting & Proj. Mgt. $    89,500
2.  Equipment Migration Planning

Complete planning for the teardown, movement and reinstall of all equipment.  Includes drawings, project plan, data circuit cutovers, team meetings, vendor meetings and other activities.

Consulting & Project Management 900 hrs $125/hr. $  112,500
Other Technical Labor 500 hrs $90/hr. $    45,000
Sub-Total Equip. Migration & Plan $   157,500
3.  Actual Move Events

Onsite presence to manage all technical relocation events

Consulting & Project Management 100 hrs. $125/hr. $    12,500
Other Technical Labor 200 hrs. $90/hr. $    18,000
Sub-Total Actual Move Events $    30,500


Total Cost for Consulting and Project Management $   277,500

TOTAL COSTS FOR PROJECT                                 $ 720,500
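The arithmetic in the three sections above can be cross-checked with a short script; each subtotal and the project total reconcile exactly:

```python
# Cross-check of the draft budget above (all amounts in dollars).
components = [45_000, 24_000, 44_000, 6_000, 50_000, 12_000, 16_000,
              8_000, 40_000, 6_000, 2_000, 8_000, 16_000, 24_000, 20_000]
relocation = [60_000, 18_000, 10_000, 20_000, 8_000, 6_000]
consulting = [500 * 125 + 300 * 90,    # early planning:        89,500
              900 * 125 + 500 * 90,    # migration planning:   157,500
              100 * 125 + 200 * 90]    # actual move events:    30,500

assert sum(components) == 321_000      # Total Components
assert sum(relocation) == 122_000      # Total Relocation Expense
assert sum(consulting) == 277_500      # Total Consulting & Project Mgt.

print(f"Total: ${sum(components) + sum(relocation) + sum(consulting):,}")
# prints "Total: $720,500"
```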

This is the estimated expense that will be needed to relocate your entire data center. I’ve assumed that you have a 10,000 sq.ft. data center and lots of equipment.  We have only three exclusions:

  1. This budget does not include any expense for any type of software engineering.
  2. This budget does not include any expense for network or server “seed” equipment.
  3. This budget does not include any expense for re-engineering your copper and fiber voice/data backbone cabling system as a result of relocating to another building on your multi-building campus.

Many customers overlook two major areas of expense: (1) the cost of preparing the new computer room for the move (customer fit-up) and (2) the cost of the extra consulting and project management. These are huge expenses. If I have overestimated the size of your computer room and operations, this number can be reduced significantly, but you won’t escape most of it.

Thank you once again for contacting the ABR Consulting Group, Inc.

IT Service Management

Service Portfolio vs Service Catalog: 5 Reasons You Should Know the Differences

At first glance, the service portfolio and service catalog almost seem like the same thing. After all, both contain details of IT services. However, there are important differences when you’re talking about service portfolio vs. service catalog.

[Image: two hammers.] To the casual observer, these may look similar, but use the wrong one for the job, and the differences become obvious.

A service portfolio is an overarching document used to manage the life cycles of all services: those no longer offered, those currently offered, and those in the pipeline. The service portfolio is a living historical record of service-related activities.

A service catalog, on the other hand, details the currently active IT services and may include information on those that will be deployed soon. The service catalog is an “outward-facing” document for your end users.

To use an analogy, suppose you’re an architect. Your portfolio contains examples of work you have completed for your clients, work representative of what you’re doing now, and information about where you want to take your expertise in the future. If you as an architect were to create the equivalent of the “service catalog,” it would contain information about exact services you provide, how the services are performed, how long they take to complete, and how much you charge.
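The relationship between the two documents can be sketched in a few lines of code: the portfolio tracks every service across its life cycle, and the catalog is simply the user-facing view of the active (and soon-to-launch) subset. Service names and fields here are hypothetical examples, not ITIL-mandated schema:

```python
# A minimal sketch of the portfolio/catalog relationship described above.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    status: str  # "pipeline", "active", or "retired"

# The portfolio spans the whole life cycle, including retired services.
portfolio = [
    Service("Password reset", "active"),
    Service("Remote desktop support", "pipeline"),
    Service("Fax gateway", "retired"),
]

# The catalog is derived from the portfolio, not maintained separately:
# active services plus those about to be deployed.
catalog = [s.name for s in portfolio if s.status in ("active", "pipeline")]
print(catalog)  # ['Password reset', 'Remote desktop support']
```

Deriving the catalog from the portfolio, rather than keeping two independent lists, keeps the outward-facing view consistent with the life-cycle record.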

There are several reasons you should understand the service portfolio vs service catalog differences. Here are 5 of them.

1. To Remain Consistent with ITIL Framework

This is a matter of good corporate IT hygiene. When you bring in a new IT service manager, collaborate with another company on an IT initiative, bring in a consultant, or take on the task of creating a service catalog and portfolio, knowing the difference between the service portfolio and the service catalog keeps everyone on the same page and makes communication easier.

2. To Prioritize Your Efforts

There are varying opinions on which should come first: the service catalog or the service portfolio. The choice may depend on many factors, including how well-documented past IT services were and what your resources allow. The service catalog is a more focused document, and many people think that this is where your initial efforts should be focused, followed by use of the information in the service catalog as a springboard to creating a service portfolio. The “right” answer about which to tackle first depends on your particular organization’s priorities and resources.

3. To Know Where to Place Your “Marketing” Efforts

The service portfolio is usually an internal document that the IT help desk and management use to gain a historical overview of IT services, assess what worked and what didn’t, and try to lay out long-term plans. It doesn’t “market” services, per se. Your service catalog, however, being an outward-facing document primarily directed at end users, really is like a catalog: here is a service you may be interested in, what this service does, how it’s done, and how long you can expect it to take. It should be written with less “IT-speak” so that end-users understand and appreciate it.

4. To View ITSM Both Long Term and Short Term

Service portfolio vs. service catalog is also about long-term versus short-term. The service portfolio gives the long view and helps you determine how to play the long game, with fewer specifics. Technology changes so rapidly that trying to nail down specific future services using just the information in your service portfolio may be an exercise in futility. Your service catalog, on the other hand, is about here and now, and the near future.

5. To Prepare End Users for Upcoming Changes

Just as your local game store gives you release dates so you’ll know when to expect an anticipated product, your service catalog can tell end users: “Our social help desk app is scheduled to launch September 1” (or whatever). Service catalog users generally have less interest in long-term plans with unknown effects (like when your new data center is expected to be complete), and are more interested in finding out things like, “When does the help desk integration with Salesforce Chatter go live?” or “When will the IT help desk start using remote desktop support so I don’t have to wait for someone to show up or walk me through a fix?”

The service portfolio and service catalog are both important, living documents that make planning and delivery of IT services better. Samanage, a leading cloud IT service management software provider, gives you the tools you need for creating and managing your IT service catalog and developing a service portfolio that can help your organization map out where it’s been and where it needs to go.


6 notable data center trends for 2015

According to Emerson Network Power, one of the six trends influencing the decisions of data center designers, operators and managers is cloud computing. IT services company Emerson Network Power (Emerson) has identified six data center trends that are becoming increasingly important in 2015.

In Emerson Network Power’s assessment, as data center operators look for ways to meet market demands quickly and efficiently, six trends are becoming increasingly important and are shaping the decisions of data center designers, operators and managers: the age of the cloud; integration reaches a new scale; convergence goes macro; software paves the way for more software; the edge matters more; and security becomes the new availability.

The age of the cloud

Cloud computing is already established in the data center ecosystem, as most organizations use some form of software as a service (SaaS). The cloud is now poised to expand and become a tool for innovation.

Forward-looking organizations are combining cloud-based services such as analytics, collaboration and communications to better understand their customers and bring new products and services to market faster. As a result, more and more organizations will manage hybrid environments in which on-premises IT resources are supplemented by the strategic use of cloud and colocation services to improve optimization, resilience and flexibility.

For their part, to thrive in an increasingly competitive environment, cloud service providers must demonstrate the ability to scale quickly while consistently meeting service-level agreements. Cloud providers will drive innovation across the industry as they adopt highly reliable technologies and practices at the lowest possible cost.

Integration reaches a new scale

Integrated systems were developed to help organizations deploy and scale applications faster while reducing risk and cost. With rapid change in many markets driven by innovation, digitization and mobility, the need for speed through integration and convergence is greater than ever. As a result, integration and convergence have expanded beyond the IT stack to the systems that support that stack.

Most notably, data center facilities are now being designed and built from prefabricated, integrated modules. This new approach to facility development has allowed organizations such as Facebook to build fully customized, high-performance data centers about 30% faster than with traditional construction processes. Combining rapid deployment, inherent scalability and excellent performance, this approach is becoming an attractive option for supporting additional IT capacity.

Convergence goes macro

Technology systems are not the only things undergoing convergence. The telecommunications and IT industries are moving closer together as voice and data services are increasingly consumed on the same devices. In fact, more than half of the participants in the Data Center 2025 project predict that at least 60% of telecommunications network infrastructure will have become data centers by 2025, and 79% believe that at least half of telecom companies will offer colocation infrastructure as part of their networks. This convergence is leading to greater standardization of the technologies used to support voice and data services.

Software paves the way for more software

Virtualization has been one of the most important trends in the data center industry over the past 20 years, and it will continue to drive predictable change as it expands from computing into networking and storage. Managing the physical layer will be one of the main challenges of this virtual revolution. Most organizations lack the visibility needed to manage virtual and physical systems in a coordinated way, and that gap must be closed to pave the way for the software-defined data center.

Data center infrastructure management (DCIM) solutions have emerged to close this gap, and early DCIM adopters are demonstrating its value: data centers using DCIM recover from outages 85% faster than those that do not, according to the 2013 data center outage study conducted by the Ponemon Institute.

The edge matters more

After years of consolidation and centralization, IT organizations are shifting their attention to the edge of the network to improve interactions with customers and applications. As organizations increase their use of analytics, location-based services and personalized content, the network edge becomes ever more important in creating competitive advantage. Seizing this opportunity requires standardized, intelligent infrastructure with high availability, deployed close to users. In the first decade of this century, many organizations struggled to keep up with computing demand; businesses that fail to grasp edge-related network issues will not be able to keep pace with the explosive growth of network traffic.

Security becomes the new availability

When it comes to risk reduction, data center managers have long focused on preventing downtime. Downtime remains a risk, but a new threat has emerged in the form of cyber security. When one of the most serious security breaches of the past 18 months originated in an HVAC system, data center managers and IT security professionals took notice.

Data center and facility managers will have to work with IT security teams to audit the technologies and software embedded in data center equipment, ensure they are secure, and evaluate the security practices of contractors and service providers with access to that equipment.


Small updates meet big data center requirements

Not all IT infrastructure projects require a large budget and lengthy schedule. These relatively inexpensive updates boost performance and reliability.

IT leaders constantly dream up ways to meet data center requirements for performance and efficiency, but time and money always seem to quash grand plans.

Not every IT infrastructure project needs to be a time-consuming, capital-intensive, paradigm-shifting corporate initiative. Quick and easy updates significantly benefit data center facilities and IT performance, and act as a training ground for new employees.

1. Upgrade server hardware

Strategic memory and local disk upgrades give servers quick and easy performance or capacity boosts.

Memory is a limiting resource in virtualization, and servers rarely come with a full complement onboard. Inventory unused slots and add memory to assist existing VMs or accommodate future server consolidation.

Solid state drives (SSD) are a local disk storage upgrade for strategic servers. SSDs improve I/O and lower latency, ideal for workloads sensitive to storage bandwidth. SSDs can accelerate performance if a server’s workloads rely on disk caching. Rather than rip and replace all the disk drives, add an SSD to a server’s local storage to clear bottlenecks and stop errors.

Server firmware upgrades are fast and free, but also disruptive. Only perform them to fix specific problems like hardware or operating system support. Check your asset management inventory and get a list of the current server models and firmware versions, and then check the server vendors’ download sites for updates. Ascertain via the details or release notes whether the update actually solves a problem for you. Peripheral interface and adapter devices also have firmware that may need updates.


Memory and disk upgrades pose downtime (unless hot plugging) and re-racking issues. “RAM upgrades are cheap and effective, but … it’s not exactly an ‘in place’ upgrade,” said Pete Sclafani, COO and co-founder of 6connect, a network automation solutions provider in San Francisco. Perform memory and SSD upgrades during scheduled server downtime.

Disk capacity is expensive, and you can forestall major capacity additions by removing unnecessary content or migrating data to lower storage tiers. For example, temporary directories flood with unneeded data, so clear out /tmp and c:/temp directories in servers and storage subsystems.

Try a zero byte reclaim for thin storage deployments. “Write zeros to all allocated but unused space,” said Tim Noble, director of IT operations at ReachIPS, a cloud platform provider in Anaheim Hills, Calif. A zero byte reclaim of the server’s allocated, never needed storage frees up space on the array.

2. Redo cables

As network bandwidth reaches 10 Gigabit Ethernet (GigE), 25 GigE and faster, aging Category (Cat) 5 and 5e copper cabling infrastructure for 1 GigE is unable to cope with the new data center requirements.

In some cases, the right hardware is in place for higher bandwidth networks, but the cabling is not. “People tend to forget that when the physical network gear is upgraded, your cabling may not be taking full advantage,” Sclafani said.

Don’t rip out aging cabling all at once; Ethernet cabling is fully backward-compatible. Make relatively small, incremental investments in faster cables as time and money allow. Servers will remain on 10 GigE for the foreseeable future, so focus on network backbones, especially Ethernet-based iSCSI and Fibre Channel over Ethernet storage arrays. For example, Cat 6 cables can support 10 GigE to 55 meters while Cat 6a and Cat 7 cables can handle 10 GigE to 100 meters, without requiring new network adapters, switches or other components.
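The reach figures above lend themselves to a simple lookup when triaging which runs need replacement. The distances below are the article’s planning figures; treat the sketch as guidance, not a substitute for certifying each cable run:

```python
# Quick check: can an existing copper run carry 10 GigE at its length?
# Reach values (meters) are the planning figures quoted in the article.
REACH_10GIGE_M = {"cat6": 55, "cat6a": 100, "cat7": 100}

def supports_10gige(category: str, run_length_m: float) -> bool:
    """True if the cable category is rated for 10 GigE at this run length."""
    max_m = REACH_10GIGE_M.get(category.lower())
    return max_m is not None and run_length_m <= max_m

print(supports_10gige("Cat6", 40))    # True  - within Cat 6's 55 m reach
print(supports_10gige("Cat6", 70))    # False - needs Cat 6a or better
print(supports_10gige("Cat6a", 90))   # True
```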

Long distances — and 40 GigE+ Ethernet bandwidths — need expensive optical fiber media and specialized skills to splice and integrate, which entail a formal capital upgrade project.

Differentiate the new cables from older twisted-pair lines with colored jackets or another labeling scheme, and duplicate the markings or labels clearly on patch panels.

3. Add sensors

If you can’t measure it, you can’t manage it. Data center infrastructure management (DCIM) tools monitor the electrical and environmental behaviors of complex facilities.

DCIM requires a proliferation of sensors placed strategically around the data center. These tools may trigger automated responses to situational events, such as migrating workloads when a server becomes too hot, or sounding an alert when moisture suggests a cooling loop leak. Missing or inadequate sensors can leave input gaps.

What are you missing?

Temperature sensors locate hot spots within racks and rows.
Humidity sensors warn of excessively dry air or damaging condensation levels.
Moisture (liquid) sensors are essential when chilled water circulates in heat exchangers or rack doors.
Power monitors track energy use in real time.
Air flow sensors ensure that fans are running and filters are unclogged.
Motion detectors spot unauthorized intruders and trigger security alerts and cameras.
Smoke/fire sensors protect valuable assets and lives.
RFID tags help automate hardware inventory control.
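The way DCIM tooling turns these sensor feeds into alerts can be sketched as a simple threshold check; the sensor names and limits below are hypothetical examples, not values from any particular DCIM product:

```python
# Minimal sketch of threshold-based alerting over DCIM sensor readings.
# Sensor names and thresholds are illustrative assumptions.
THRESHOLDS = {
    "rack_temp_c":  (10.0, 35.0),    # hot-spot detection within racks
    "humidity_pct": (20.0, 80.0),    # too dry / condensation risk
    "airflow_cfm":  (100.0, None),   # fan failure or clogged filter
}

def check(readings: dict) -> list:
    """Return alert strings for readings outside their (low, high) limits."""
    alerts = []
    for name, value in readings.items():
        low, high = THRESHOLDS.get(name, (None, None))
        if low is not None and value < low:
            alerts.append(f"{name} low: {value}")
        if high is not None and value > high:
            alerts.append(f"{name} high: {value}")
    return alerts

print(check({"rack_temp_c": 41.5, "humidity_pct": 45.0, "airflow_cfm": 80.0}))
# -> ['rack_temp_c high: 41.5', 'airflow_cfm low: 80.0']
```

A real DCIM system would attach actions to each alert (migrate workloads, sound an alarm, trigger cameras), but the gating logic is this simple: no sensor, no reading, no alert.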

“Data center monitoring tends to be the last addition to the budget and the first to get axed when project timelines go sideways,” Sclafani said. “Your sensors and instrumentation probably have room for improvement.”

New sensors are quick and non-invasive installs, done in small increments to keep cost and time commitments minimal.

4. Boost data security

OS and application security updates might seem obvious, but these low-level tasks get postponed by day-to-day firefighting and complex data center projects.

Check system inventory reports and patch each server with the latest available security updates, Noble said. “This will be easier if you have automation tools like Puppet,” he added. “But even a large number of servers can be patched pretty quickly if there is a concerted effort.”
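The "check inventory, then patch" workflow Noble describes reduces to comparing each server's reported patch level against the latest available one and producing a worklist. Hostnames and patch levels below are hypothetical; in practice this data would come from your asset management or configuration tool rather than a hand-written dict:

```python
# Sketch: derive a patching worklist from a system inventory report.
# Inventory data is a hypothetical example (patch levels as YYYY-MM strings).
inventory = {
    "web01": "2016-03",
    "web02": "2016-07",
    "db01":  "2016-01",
}
LATEST = "2016-07"   # latest available security update level

# YYYY-MM strings compare correctly as plain strings.
needs_patching = sorted(h for h, level in inventory.items() if level < LATEST)
print(needs_patching)  # ['db01', 'web01']
```

A tool like Puppet would then apply the updates to that list; even without automation, the worklist makes a concerted manual effort tractable.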

Hypervisor updates, such as moving to VMware vSphere 6, are rarer and might be delayed by testing. Check the hardware and software inventory of your virtualized servers to verify that they support the new requirements, and finish lab testing so the new features can move to production. “You might also simply update the VMware Tools on all of your hosts to [your current ESXi version],” Noble said.

Look for other security enhancements: Check and fix file permissions, scour Active Directory user accounts for old or inaccurate entries and so on. These activities pose little risk to operating services.

5. Check and improve processes

Modern data centers are process-driven — policies and procedures reduce errors and ensure consistent results regardless of who performs the work. As more IT departments move beyond script-based automation (such as PowerShell) to embrace sophisticated workflow automation tools, it’s easy to forget the actual steps and why they’re there. Roles and priorities change, opening strategic opportunities to review, streamline and optimize workflows.

“Find an operational task, map it out and see how you can make it more efficient,” Sclafani said. “You get extra points if you also ask your internal or external customers for input on processes [to] optimize.”

Perform a fire drill to verify that existing infrastructure works as expected. This is particularly important with disaster recovery (DR) and resilient systems such as server clusters. Test server failover in active/passive clusters or simulate the loss of a server in active/active configurations.

“If you have a DR site, a weekend of maintenance in one data center might be a good time to test operations in your alternate data center,” Noble said. Unacceptable service disruptions indicate additional remediation work to meet the data center’s requirements before real trouble strikes.

About the author:
Stephen J. Bigelow is a senior technology editor at TechTarget, covering data center and virtualization technologies. He has acquired many CompTIA certifications in his more than two decades writing about the IT industry.



TS IT Rack Training Video

This video presentation provides a comprehensive overview of the TS IT Rack – the new industry standard in data centre rack technology. A fusion of rack and accessories, it provides flexibility in the design of rack architecture, with fast assembly and tool-free installation. Gain an understanding of TS IT features and benefits, TS IT varieties, fitting accessories, mounting power distribution units or PSM busbars, and cable management.

6 steps to better data centers

Review existing data centers for improvement opportunities like power consumption and effective heating and cooling.

The management of data storage and processing is part of every business, and the need for data centers and IT facilities is common across nearly all business types. Data centers provide centralized IT systems, with power, cooling, and operational requirements above and beyond typical design parameters. This high density of power and cooling drives the need for continuous improvement; the goal of any system design or redesign should be to optimize the performance of existing equipment and to prioritize replacement and reorganization of outdated systems.

This article provides a number of steps for evaluating an existing facility and proposes targeted improvements for reducing energy use and CO2 emissions.

Why improve performance of an existing data center? There are several reasons.

Operational enhancement: Improving the performance of data center systems will offer great benefits to the bottom line and allow for greater flexibility in future expansion:

  • Decreased operating and energy costs
  • Decreased greenhouse gas emissions, critical in anticipation of a future carbon economy.

Increased reliability and resilience: Continuity of supply and zero downtime in IT services result in greater resilience for the business. Improving resilience and reliability results in increased accessibility and facility use, and provides for adaptability into the future.

Consider how critical the data center applications and services are to an operation: What will it cost if no one can send e-mail, access an electronic funds transfer system, or use Web applications? How will other aspects of the business be affected if the facility fails?

Greater system dynamics: Assessment of an existing facility will lead to increased integration of all system components. Increasing data processing potential cannot be considered without understanding the implications on cooling and power demand, and the management systems behind the processes. All aspects of the data center system must be looked at holistically to achieve the greatest results.

Review and improve

Compared to similar-sized office spaces, data center facilities typically consume 35 to 50 times the amount of energy in normal operation and contribute CO2 into our environment. Power demand for IT equipment greater than 100W/sq ft is not uncommon, and as we move into the future, the requirement for data storage and transfer capability is only going to rise.

Whether the driver for improvements is overloaded servers, programmed budget, or corporate energy-saving policy, an analysis of the energy use and system management will have benefits for the business. The assessment process should be to first understand where energy is being used and how the system currently operates; then to identify where supply systems, infrastructure, and management of the facility can be optimized.

1: Review computer rack use

Levels of data storage and frequency of application use fluctuate in a data center as users turn on computers and access e-mail, the Internet, and local servers. The supporting power and cooling systems, however, are typically sized with no diversity in IT demand, so their full capacity is rarely required.

Figure 3 illustrates typical server activity across a normal office week. At different times of the day each server may encounter near-maximum use, but for the majority of time, utilization of racks may be only at 10% to 20%.

Low server utilization results in inefficient, redundant power consumption for the facility. For many server and rack combinations, the power consumed at 50% utilization is similar to that consumed at 100%. For racks at or above 3 kW, this wasted energy can be substantial once the associated cooling and other facility loads are also considered. To improve the system's energy use, a higher level of utilization should be achieved in fewer racks.

Consolidating servers allows multiple applications and their data to be hosted on fewer racks, eliminating much of this "redundant" power draw. Physical consolidation works even better when combined with virtualization software, which separates the computer hardware (servers) from the software running on it, removing the physical bond between an application and a dedicated server. Applied effectively, virtualization markedly improves utilization rates.
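The economics of consolidation follow directly from the observation that idle servers still draw most of their peak power. A minimal sketch, using assumed idle and peak draw figures (the 70% idle fraction and utilization levels below are illustrative, not from the article):

```python
# Illustrative sketch (assumed figures): why consolidating lightly loaded
# servers saves power when idle draw is close to full-load draw.

def rack_power_kw(utilization, idle_kw=2.1, max_kw=3.0):
    """Assume a 3 kW rack that draws 70% of peak even when idle,
    with power rising linearly to max_kw at 100% utilization."""
    return idle_kw + (max_kw - idle_kw) * utilization

# Ten racks each running at 15% utilization...
before = 10 * rack_power_kw(0.15)

# ...consolidated onto three racks at 50% utilization each.
after = 3 * rack_power_kw(0.50)

print(f"Before consolidation: {before:.2f} kW")
print(f"After consolidation:  {after:.2f} kW")
print(f"IT power saved:       {before - after:.2f} kW (plus the matching cooling load)")
```

Under these assumptions, the same workload drops from roughly 22 kW to under 8 kW of IT draw, and every kilowatt saved also avoids a kilowatt of cooling.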

2: Review power consumption, supply

Reducing power consumed by a facility requires an understanding of where and how the energy is being used and supplied. There are many possibilities for inefficiency, which need to be known to improve the data center energy statistics.

Power that enters a data center can be divided into two components:

  • IT equipment (servers for data storage, processing, and applications)
  • Supporting infrastructure like cooling, UPS and switchgear, power distribution units (PDU), lighting, and others.

Figure 6 provides an example of the split for power demand across a facility. For this example, 45% of total data center power is utilized by supporting infrastructure and therefore not used for the core data processing applications. If a facility is operating at 100 W/sq ft IT power demand, energy used for the supporting infrastructure alone would result in an additional 80 W/sq ft of energy, energy costs, and the associated CO2 emissions.

To compare performance of one data center’s power usage to another’s, a useful metric is the power usage effectiveness, or PUE. This provides a ratio of total facility power to the IT equipment power:

PUE = total facility power / IT equipment power

The optimal use of power in a data center is achieved as the PUE approaches 1. Studies show that on average, data centers have a PUE of 2.0 to 2.5, with goals of 1.5 and even 1.1 for state-of-the-art facilities. For example, at the same IT load, a facility with a PUE nearing 3.0 will consume more than twice the power of a facility operating at a PUE of around 1.3.
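The metric can be computed directly from the definition above. A short sketch using the article's own example split (45% of total power to supporting infrastructure, so IT equipment draws the remaining 55%; the 1,000 kW facility size is an assumed round number):

```python
# PUE per the definition in the text: total facility power / IT equipment power.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# Assumed 1,000 kW facility where IT equipment draws 550 kW (55% of total,
# matching the 45% infrastructure split in the example above).
example = pue(1000, 550)

# Comparing the two facilities from the text at the same IT load:
it_kw = 550
poor = 3.0 * it_kw   # total power at PUE 3.0
good = 1.3 * it_kw   # total power at PUE 1.3

print(f"Example PUE: {example:.2f}")
print(f"A PUE 3.0 facility draws {poor / good:.1f}x the power of a PUE 1.3 facility")
```

With a 45% infrastructure share, the PUE works out to about 1.8, squarely inside the 2.0-to-2.5 average range quoted above once real-world inefficiencies are added.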

Effective metering of a data center should be implemented to accurately understand the inputs and outputs of the facility. Accurate data measurement will allow continuous monitoring of the PUE and also allow effective segregation of power used by the data center from other facilities in the building.

To improve the total system efficiency and PUE of a site, the first step is to reduce the demand for power. Table 1 highlights the strategies for demand reduction with a basic description of each.

Following the reduction of power consumption, the second step toward improving the facility's efficiency and performance is to improve the supply of power. Power supply for data center systems typically relies on many components, each with an associated efficiency of transmission (or generation). As power is transferred from the grid through the UPS and PDUs to the racks, the losses at each stage compound, so every component matters to the overall system efficiency.
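Because the stages are in series, their efficiencies multiply rather than add. A minimal sketch with assumed per-component efficiency figures (the percentages below are plausible illustrations, not manufacturer data):

```python
# Supply-chain efficiencies compound: sketch with assumed component figures.

efficiencies = {
    "transformer/switchgear": 0.98,
    "UPS": 0.92,
    "PDU": 0.97,
    "rack-level distribution": 0.99,
}

overall = 1.0
for component, eta in efficiencies.items():
    overall *= eta  # each stage multiplies onto the chain

print(f"Overall supply efficiency: {overall:.1%}")
```

Even with each stage above 90% efficient, the chain as a whole delivers only about 87% of the input power to the racks, which is why the article stresses reviewing every component, not just the worst one.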

A review of the manufacturer’s operational data will highlight the supply equipment’s efficiency, but it is important to note that as the equipment increases in age, the efficiency will decrease.

An effective power supply system should ensure that supply is always available, even in the event of equipment failure. Resilience of the supply system is determined by the level of redundancy in place, and the limitations of single-points-of-failure. The Uptime Institute’s four-tier classification system should be consulted, with the most suitable level selected for the site.

In most locations, reduction of demand from grid supply will result in higher efficiency and reduced greenhouse gas emissions. The constant cooling and electrical load required for the site can provide an advantage in the implementation of a centralized energy hub, possibly using a cogeneration/trigeneration system, which can use the waste heat from the production of electrical power to provide cooling via an absorption chiller.

3: Review room heat gains

As is the case with power consumption, any heat gains not directly due to IT server equipment represent an additional energy cost that must be minimized. Often, reductions in unnecessary heat gains can be implemented at little cost, resulting in short payback periods from energy savings.

Computing systems use incoming energy and transform this into heat. For server racks, every 1 kW of power generally requires 1 kW of cooling; this equates to very large heat loads, typically in the range of 100 W/sq ft and larger. These average heat loads are rarely distributed evenly across the room, allowing excessive hot spots to form.
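The 1 kW of cooling per 1 kW of power rule makes sizing the cooling load a matter of arithmetic. A quick sketch with assumed room figures (rack count, average draw, and floor area below are hypothetical):

```python
# Quick check of the 1 kW power -> 1 kW cooling rule of thumb from the text,
# using assumed room figures.

racks = 40
avg_rack_kw = 3.0     # assumed average rack draw
room_sq_ft = 1200     # assumed white-space floor area

it_heat_kw = racks * avg_rack_kw     # essentially all IT power becomes heat
cooling_required_kw = it_heat_kw     # 1 kW of cooling per 1 kW of power
heat_density_w_per_sqft = it_heat_kw * 1000 / room_sq_ft

print(f"Cooling required: {cooling_required_kw:.0f} kW")
print(f"Average heat density: {heat_density_w_per_sqft:.0f} W/sq ft")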

The layout of the racks in the data center must be investigated and any excessively hot zones identified. Isolated hot spots can result in over- or under-cooling and need to be managed. For a typical system with room-wide control, any excessively hot servers should be evenly spaced out by physically or virtually moving the servers (Figure 8). If the room’s control provides rack-level monitoring and targeted cooling, then isolated hot spots may be less of an issue.

Room construction

A data center does not have the same aesthetic and stimulating requirements as an office space. It should be constructed with materials offering the greatest insulation against transmission of heat, both internally and externally.

Solar heat gains through windows should be eliminated, and any gaps that allow unnecessary infiltration/exfiltration need to be sealed.

Switchgear, UPS, other heat gains

Associated electrical supply and distribution equipment in the space will add to the heat gain in the room, due to transmission losses and inefficiency in the units. Selection of any new equipment needs to take this heat gain into account, and placement should be managed to minimize infringement on core IT systems for cooling.

4: Review cooling system

Data center cooling is as critical to the facility’s operation as the main power supply. The excessive heat loads provided by server racks will result in room and equipment temperature rising above critical levels in minutes, upon failure of a cooling system.

Ventilation and cooling equipment

Ventilation is required in the data center space for the following reasons only:

  • Provide positive air pressure and replace exhausted air
  • Allow minimum outside airflow rates for maintenance personnel, as per ASHRAE 62.1
  • Smoke extraction in the event of fire.

Ventilation rates in the facility do not need to exceed the minimum requirement and should be reduced if they do, to avoid unnecessary treatment of excess makeup air.

The performance of the cooling system largely determines the total facility's energy consumption and CO2 emissions. There are various configurations of ventilation and cooling equipment, with many different types of systems for room control. Improvements to an existing facility may be restricted by the greater building's infrastructure and the room's location. Recent changes to ASHRAE 90.1 now include minimum efficiency requirements for all computer room air conditioning units, providing a baseline for equipment performance.

After reviewing the heat gains from the facility (step 3), the required cooling for the room will be evident.

  • Can the existing cooling system meet this demand or does it need to be upgraded?
  • Is cooling provided by chilled water or direct expansion (DX)? Chilled water will typically offer greater efficiency but is restricted by location and plant space for chillers.
  • Does the site’s climate allow energy savings from economizers and indirect cooling?
  • What type of heat removal system is used in the data center space? Can it effectively remove the heat from the servers and provide conditioned air to the front of the racks as required?

Effective cooling: removing server heat

Mixing of hot and cold air should be minimized as much as possible. There should be a clear path for cold air to flow to servers, with minimal intersection of the hot return air. The most effective separation of hot and cold air will depend on the type of air distribution system installed.

Underfloor supply systems rely on a raised floor with computer room air conditioning (CRAC) located around the perimeter of the room. Conditioned air is supplied to the racks via floor-mounted grilles; the air passes through the racks, then returns to the CRAC units at high level. To minimize interaction of the hot return air with cold supply air, a hot and cold aisle configuration will provide the most effective layout. The hot air should be drawn to CRAC units located in line with the hot aisles with minimal contact with the cold supply air at the rack front.

An in-row or in-rack cooling system provides localized supply and will also benefit from hot and cold aisle configuration. Airflow mixing is less likely because this type of system will supply conditioned air directly to the front of the rack and draw hot air from the rear of the rack. If the system does not use an enclosed rack, the implementation of hot aisle and cold aisle containment will ensure that airflows do not mix.

For data center facilities that are created in existing office buildings and fitouts, the cooling system may not be stand-alone but rather rely on central air handling unit systems, wall-mounted split air conditioners, or even exhaust fans. These nonspecific cooling systems typically will not offer the same efficiency as a dedicated CRAC system or in-row cooling, and will be limited in the potential for improvement.

To optimize this type of cooling system and ensure that conditioned air is delivered to the rack inlet, consider the following:

  • Racks should be arranged into hot and cold aisles.
  • Air distribution units should be placed in line with the cold aisles.

Reduce short circuiting

Improving airflow through racks and reducing opportunities for “short circuiting” of conditioned air into hot aisles enables better control of server temperature.

  • Provide blanking plates at any empty sections of server cabinets to prevent direct mixing of hot and cold air.
  • Use server racks with a large open area for cold air intake at the front, with a clear path for the hot air to draw through at the rear.
  • Cable penetrations should be positioned to minimize obstruction of supply air passing through the racks. Any penetrations in raised floor systems should be sealed with brushes or pillows.
  • Use cable trays and cable ties to manage cabling so that it does not impinge on effective airflow.

Associated equipment

Any pumps or fans for cooling in the data center should be as efficient as possible. Installation of variable speed drives (VSD) will reduce the power consumed by the electric motors when operating at part load. If large numbers of VSDs are selected, a harmonics analysis is recommended for the site’s power supply.
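The payoff from VSDs follows from the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales with the cube of speed. A minimal sketch, assuming a hypothetical 10 kW fan motor:

```python
# Why VSDs pay off at part load: by the fan affinity laws, airflow scales
# linearly with fan speed while power scales with the cube of speed.
# Sketch with an assumed 10 kW fan motor.

def fan_power_kw(speed_fraction, full_speed_kw=10.0):
    """Ideal cube-law fan power at a given fraction of full speed."""
    return full_speed_kw * speed_fraction ** 3

# Running at 70% speed (roughly 70% airflow) needs only ~34% of full power.
part_load = fan_power_kw(0.70)
print(f"Power at 70% speed: {part_load:.2f} kW")
```

Real motors and drives add their own losses, so actual savings will be somewhat smaller than the ideal cube law suggests, but the part-load benefit remains large.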

Temperature, humidity control

Without appropriate control, data center equipment performance will suffer whenever room conditions fall outside the equipment's tolerance. However, studies have shown that the tolerance of data communication equipment is greater than that proposed for offices and human comfort.

According to ASHRAE, the ideal inlet conditions for IT equipment are:

  • Dry bulb temperature: 64.4 to 80.6 F (18 to 27 C)
  • Dew point: 41.9 to 59 F (5.5 to 15 C).

Temperature and humidity sensors need to be placed effectively around the room to actively measure the real conditions and adjust the cooling supply accordingly. Optimal placement for measurement and monitoring points is at the front of the rack, to actively measure the inlet condition.

CRAC units and in-row coolers can be controlled with various sensor locations, setpoints, and strategies. Control strategies for regulation of cooling load and fan speed can be based on air conditions entering or leaving the unit.

  • Generally, supply air control systems will allow higher air temperatures in the room, resulting in improved efficiency for the cooling system.
  • Improvements in CRAC unit fans also allow for reductions in energy use as fans can be cycled up and down in response to underfloor static pressure monitoring, and therefore reduce power demand.
  • Ensure effective communication from all sensors and equipment with the building management system (BMS) for monitoring and analysis.
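A monitoring loop built on front-of-rack sensors reduces to a simple range check against the ASHRAE envelope quoted above. A minimal sketch; the sensor names and readings are hypothetical:

```python
# Sketch of a rack-inlet check against the ASHRAE envelope quoted above
# (dry bulb 64.4-80.6 F, dew point 41.9-59 F). Sensor names are hypothetical.

ASHRAE_DRY_BULB_F = (64.4, 80.6)
ASHRAE_DEW_POINT_F = (41.9, 59.0)

def inlet_ok(dry_bulb_f, dew_point_f):
    """True when the rack-inlet reading is inside the recommended envelope."""
    lo_db, hi_db = ASHRAE_DRY_BULB_F
    lo_dp, hi_dp = ASHRAE_DEW_POINT_F
    return lo_db <= dry_bulb_f <= hi_db and lo_dp <= dew_point_f <= hi_dp

# Hypothetical readings from front-of-rack sensors (dry bulb, dew point):
readings = {"rack-01": (72.5, 50.0), "rack-02": (84.2, 50.0)}  # rack-02 runs hot

for rack, (db, dp) in readings.items():
    status = "OK" if inlet_ok(db, dp) else "ALERT"
    print(f"{rack}: {db:.1f} F dry bulb, {dp:.1f} F dew point -> {status}")
```

In a real deployment these checks would feed the BMS rather than print to a console, and alerting thresholds would typically sit inside the envelope to allow reaction time.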

5: Optimize monitoring and maintenance

Designing an energy-efficient data center is not the final step for ensuring an efficient system. A data center may have been designed to operate within tight boundaries and conditions. During construction, commissioning, and hand-over, inefficiencies in power distribution, heat gains, and cooling provision can easily arise from poor training and ineffective communication of design intent.

Studies of existing facilities have shown that many data centers have faults and alarms that the facility's management is not aware of. Effective monitoring allows operation of the facility to be optimized, providing ideal room conditions and IT equipment use.

Rack-centric monitoring

Traditionally, control of the data center’s heat loads (cooling requirement) and power supply has been largely centralized, with the BMS connecting the components (Figure 12).

This type of system allows easy response for any center-wide faults and alarms but minimizes the control and management of individual equipment. To improve the efficiency of the data center, management of temperature and power distribution should lead toward a rack-centric approach, with sensors and meters at each rack, maximizing the operator’s ability to analyze how the system is performing on a micro scale.

Systematic checking and maintenance

The power usage, heat loads, and IT equipment should be reviewed on a regular basis to ensure the data center is operating as designed. The prevention of future failures in either the power or cooling systems will save the business large amounts of time and money.

6: Implement

This article has identified a range of performance measures and improvements to lead toward greater system efficiency.

The level of improvement that an existing facility is capable of depends on a number of factors, including its ability to finance, the total space available, expectations of future growth, and the age of the existing system. The assessment and improvement process should weigh these factors when deciding how far to go. Table 2 highlights the options for each aspect of assessment and ranks each by ease and cost of implementation (low to high).

Hallett is a mechanical engineer with Arup. He has been involved in the design and review of a number of significant data center projects, with energy and environmental footprint reduction a major part of the design process. Hallett’s experience includes design of greenfield sites and refurbishments, using modeling and simulation to optimize energy use and cooling applications.


ASHRAE. 2007. Ventilation for Acceptable Indoor Air Quality. ANSI/ASHRAE Standard 62.1-2007.

ASHRAE. 2009. Thermal Guidelines for Data Processing Environments, Second Edition. ASHRAE Technical Committee 9.9.

ASHRAE. 2010. Energy Standard for Buildings Except Low-Rise Residential Buildings. ANSI/ASHRAE/IESNA Standard 90.1-2010.

Dunlap, K. 2006. Cooling Audit for Identifying Potential Cooling Problems in Data Centers. APC White Paper #40.

Ebbers, M., A. Galea, M. Schaefer, and M.T.D. Khiem. 2008. The Green Data Center: Steps for the Journey. IBM Redpaper.

Emerson Network Power. 2009. Energy Logic: Reducing Data Center Energy Consumption by Creating Savings That Cascade Across Systems. White Paper.

Green Grid. 2008. Green Grid Data Center Efficiency Metrics: PUE and DCiE. White Paper #6.


Selecting a Data Center Consultant

There are things we know to be right, and yet we often still fail to follow through on them. The first step is to actually “know” them.

If you’re planning to build a new data center facility (or retrofit an existing structure for use as a data center), you should seriously consider hiring a data center consultant. Like any important decision, however, you must prepare yourself to choose a good consultant by first studying your own company’s needs as well as the qualifications, skills and characteristics of a data center professional. The more your company relies on IT resources to function, the more critical is the success of your data center project. Professional assistance can quickly provide returns on your investment by saving you frustration, additional costs from errors and inefficient implementations, and even hazardous situations that can threaten equipment and personnel.

The first step in selecting a data center consultant is evaluating your company’s needs. The consultant can help you to some extent in this regard, but no one knows your company better than you do. The more information you gather and evaluate beforehand, the less you’ll have to pay your consultant for work that you could have done just as well. The second step is evaluating the candidates in light of your needs and their levels of expertise. And finally, of course, you need to make that final decision—perhaps the most difficult part of the process.

You Need a Data Center Consultant

If you’re planning a data center project—meaning more than just a server room or other small IT implementation—then you need to hire a data center consultant. And if your company needs its own data center, then it cannot afford a facility that isn’t designed and built properly to maintain consistent uptime. Richard Einhorn, worldwide director of Critical Facilities Services at HP, summarizes this situation as follows: “Data center programs impact a multitude of business, financial, operational, technology, and facilities stakeholders. This can create complexity as each group may have different objectives and requirements. Additionally, the time, capital commitments and margin for error when building or updating a data center can be staggering. Given the tremendous complexities and risks (downtime, loss of revenues, jobs and reputation), the goal is to get it right the first time.”

Einhorn also notes the effects of regulations. To pass muster with local governments regarding your new facility, “certain portions of a data center project have to be performed by a licensed consultant.” Aside from the obvious difficulties that can arise from running afoul of local regulations, failing to meet the various codes can leave your company open to insurance, liability and other consequences.

In addition, good data center consultants know the contours and pitfalls associated with data center construction and do not suffer from the difficulties of the learning curve. Unless your company can afford to make mistakes that may lead to failure to meet the budget, down time later on and other operational difficulties, you should hire a consultant. Einhorn states, “A data center consultant can help address the complexities mentioned above while reducing risks associated with strategic direction, design and building of data centers. Data center consultants have the unique ability to provide a truly integrated delivery framework across all involved disciplines, guiding clients through complex, pivotal decisions and keeping the program moving on schedule and within budget.”

Know Your Company—The First Step

If you know precisely what your company needs, you will be in the best position to select a qualified consultant, help create a good design and produce a data center that will serve your company well for many years. Dr. Mickey Zandi, managing director at SunGard Availability Services, states, “A company should have a clear picture of the end state of its data center project before beginning the search for a consultant. The company should perform detailed inventory and asset tracking, as well as develop an interdependency matrix. This will help companies identify all their applications and the interrelations of the applications.” Einhorn notes that although an organization may rely on the consultant to do some of this discovery work, it must be at least somewhat prepared beforehand: “Many data center consultants are trained to help instigate these discussions to develop an integrated and strategic data center plan that aligns IT with business goals and objectives. However, the company should have a preliminary discussion to help outline the goals of the data center prior to searching for a consultant.”

In addition to determining your current needs, also investigate your company’s growth potential. A data center that will suffice now may be entirely insufficient a few years from now. A data center consultant can help you design a facility that will take into account growth, saving you the hassle of building a new facility or expanding your existing one every few years. Also, consider whether your company may wish to use the cloud, focus on green infrastructure or employ other strategies. Your consultant can help you integrate these strategies into your design.

What Should You Expect?

“To be most successful, a data center facility consultant must be able to provide end-to-end consulting throughout the data center program. An experienced data center facility consultant should oversee all elements of the data center lifecycle including design, construction and project management as well as be able to manage or partner with general contractors for the build element,” according to Einhorn. In other words, the data center consultant should provide more than just IT direction. In particular, Zandi notes that “a data center consultant should be able to address the key areas of power and cooling. Companies’ power consumption needs often change, and the consultant should be aware of how a power scheme can be shifted to make all options available. The consultant should also be able to provide advice on advanced cooling and in-row cooling options, because these will impact the architecture and layout of the data center.” Because power distribution, cooling, and IT infrastructure are so interdependent in the data center, your consultant should be able to address all these areas in a unified manner.

Beyond these fundamental expectations of a consultant, your project may require additional services. Einhorn notes a number of offerings that, either as a whole or piecemeal, can benefit a data center project: program management; review or development of a master plan; business case development; preparation for executive presentation; and data center commissioning, migration and transition to operations, to name a few. Of course, these services are supplemental to the more basic design and development services. When interviewing data center consultants, you may learn about these or other additional services that can benefit your project.

Data Center Consultants: A Checklist

The following are a number of considerations you should investigate when searching for a data center consultant. The weight you give to each of these factors will, of course, depend on your particular situation: your industry, intended use of your data center, budget, and so on. For most companies, however, each of these factors should have some bearing on their choices.

Experience—This should be the most important factor in your choice. An experienced data center consultancy will know many helpful strategies to save you time, trouble and money, and it will also be aware of the many pitfalls that can ensnare your project and lead to schedule lapses, unforeseen expenses, and down time later on. “There is no substitute for experience. Companies should consider vendors’ past experience, and ask for references for equivalent projects where they have successfully guided other clients through similar efforts,” said Einhorn.

Expertise—“When selecting a consultant, it is important to know their area of expertise; an example of expertise can be data center design and architecture. A company can validate the consultant’s expertise and history through their references, methodology, and number of competencies the team is engaged in,” notes Zandi. Your data center will not be the same as another company’s data center, nor should it be the same. Depending on your industry, budget and other requirements, you will likely want to focus on certain aspects of your facility more than others. So, take careful note of what prospective consultants are best at; a consultancy whose expertise overlaps your focus areas may be an excellent candidate.

Certifications/Licenses—Some of the work in your data center will at a minimum require oversight by licensed professionals. According to Einhorn, “For the facility design in most countries, a licensed professional engineer is required to stamp drawings. It is also recommended that a third-party consultant do a peer review on any design, given the criticality of these projects.” Zandi also notes that “it is important for the structure of the data center facility to have certified engineers on staff for power and cooling as well as a mechanical engineer.” Certifications and licenses can be beneficial both in showing that the consultant has achieved a certain level of expertise and professionalism as judged by an outside party, and in providing you with some protection in the event that an accident or other incident occurs later on in your data center’s life. Although licenses and certifications are not fool-proof means of judging a professional (of any sort), they do offer you some benefits in your search.

Codes—Like it or not, the local government in your new data center’s locality will be sticking its nose in your business. Building codes can be complex and difficult to understand, and even if you do understand them, your building inspector may say something entirely different. Your consultant, therefore, should be able to deal with both the code in the particular locality and the inspectors that will be judging the facility. Zandi states that “a company should be knowledgeable about the region and its code requirements; this will allow for selecting a consultant who can survey and assist with the codes.” In other words, part of the responsibility is yours to know a little about the code and the region you’ll be building in so that you can choose a consultant that is competent in this matter.

Budget—The price tag of a product or service is always a consideration (and especially so in difficult economic times). Part of any decision is balancing a number of considerations, and seldom is any option ideal in all respects. Unfortunately, the best data center consultant may be out of reach simply because that service provider is too expensive. But when beginning the process of searching for a consultant, you should have some idea of what you can afford (such as a percentage of your total project budget) to pay for the consultant’s fees.

Needless to say, the cost you should expect to pay is difficult to estimate for a generic data center, but some range can be pinned down. Your project may deviate from any estimate depending on the details of the implementation, of course. Einhorn says, “While pricing may vary according to location and implementation, typically the costs for data center consultants (strategists and planners, construction managers, architects, mechanical, electrical, civil, landscape, and acoustical engineers) can total about 10–15% of the overall ‘soft’ costs of a project. As data centers are complex facilities from a mechanical and electrical (power and cooling) perspective, the mechanical and electrical engineering consulting effort can be approximately 4–7% of that total.” Other services that are increasingly employed by companies when designing and building their data centers may add another one or two percent to the project cost.

Zandi notes the likely range for a consultant as a percentage of the project price tag: “The cost of hiring a data center consultant depends on the depth and detail of the overall project. If a consultant is brought on for discovery and design and will manage the project’s implementation, the cost could be up to 25 percent of the project as a whole. If a consultant is only on board for design review or peer review, it may only be 10 percent of the overall project cost.” Here, knowing what you need for your project will greatly assist you in determining what you should expect to pay for a consultant and how that will fit into your budget.
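To make those percentages concrete, the fee band can be computed directly from a project budget. A minimal sketch in Python (the $2,000,000 project figure is purely hypothetical, and the function name is our own):

```python
def consultant_fee_range(project_cost, low_pct=0.10, high_pct=0.25):
    """Consultant-fee band for a project, using the 10-25% range
    quoted above as defaults."""
    return (project_cost * low_pct, project_cost * high_pct)

# Hypothetical $2,000,000 data center project:
low, high = consultant_fee_range(2_000_000)
print(f"${low:,.0f} to ${high:,.0f}")  # $200,000 to $500,000
```

Swapping in Einhorn's 4-7% mechanical/electrical figures as `low_pct` and `high_pct` gives the engineering-only slice the same way.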

Professionalism—Participation in professional organizations and publication of documents may also indicate a good candidate. “It is also very important to do research on consultants by seeking out white papers and journal articles written by the individuals or organizations, as well as examples of their leadership in developing industry standards such as the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) chapters, the European Union Code of Conduct for Data Centers and LEED standards for data centers.”

Of course, each of these six areas may encompass a variety of sub-areas that you need to consider. For example, you may want to implement a data center with very low impact on the environment: in such a case, you want a consultant that focuses on green issues and is aware of the technologies and strategies that will help you realize this goal. This consideration would fall under the rubric of expertise. Within each category, you should develop your own list of items that are important to your company’s project. Discuss them with candidates; those that are able to address your concerns will be better candidates.

Ultimately, however, you will need to make that final decision. By applying the considerations above, that decision should be a choice between just a few good candidates rather than a random drawing from a giant pool of unknowns.


Einhorn summarizes a general philosophy for seeking a good consultant: “The ideal data center consultant (or team of consultants) will truly understand how to plan, design and implement IT and/or data center programs typically led by a very senior executive consultant and supported by other consultants who are experts within the strategy, operations, risk, technology and facilities disciplines.” Data center projects are complex and expensive, and your business cannot afford to do it wrong. Hiring a data center consultant at somewhere between 10% and 25% of your total project cost may seem too expensive, but the risks and costs of not doing so can quickly eclipse this amount. You need a data center consultant; your best bet, then, is to take the time to prepare by evaluating your company’s requirements and then interview prospective candidates in light of the above considerations. If you know what you need and what services prospective consultants can provide, you will be able to select the one that is best for your company’s data center project.

Article originally published June 2011


Positive effects of applying technology to IT service delivery

A competitive environment leaves no room for delay, waste, or bad decisions…

Technology changes at a dizzying pace, and the integration between IT components grows ever more complex, constantly expanding in scope toward a full matrix of dependencies.

The speed at which new services must be rolled out, and with it the demands on IT infrastructure such as servers and applications, requires a disciplined approach to managing both system resources and human resources…

Requirements and needs are everywhere; so what should the people responsible for making and executing decisions actually do?

The answer is to apply "technology" to the service-delivery process itself. This needs to happen at every level, from processes up through modern technologies across the IT domain: "digitize" the IT estate as thoroughly as possible so that it can deliver true "information convergence."

A vivid example:

Implement moves and changes 70% faster with the Trellis platform


What are IP protection ratings (IP54, IP55, IP64, IP65)?

IP (Ingress Protection) ratings are defined by the IEC and specify the degree to which electrical equipment is protected against the ingress of dust and water; IP54, IP55, IP64, and IP65 are typical examples.


If you regularly prepare cost estimates for projects, you will come across equipment specified at, say, IP54, while the only product available on the market is rated IP55. Can you substitute it?

If you manufacture electrical switchboards and a customer asks you to build a cabinet to, say, IP44, you will not dare accept the order unless you understand what IP44 actually requires.

Understanding IP protection ratings lets you resolve both of these problems.


An IP rating such as IP54 consists of "IP" followed by two digits. The first digit (5) indicates the degree of protection against the ingress of solid objects and dust; the second digit (4) indicates the degree of protection against the ingress of water.


First digit (protection against solid objects and dust):

1: Protection against solid objects larger than 50 mm in diameter, e.g. a hand accidentally touching internal parts.

2: Protection against medium-sized objects larger than 12 mm, such as a finger or similar objects (diameter over 12 mm, length over 80 mm).

3: Protection against solid objects larger than 2.5 mm, preventing tools, wires, and similar items with a diameter or thickness over 2.5 mm from reaching internal parts.

4: Protection against solid objects larger than 1.0 mm, preventing fine tools or thin wires with a diameter or thickness over 1.0 mm from reaching internal parts.

5: Dust protected. Ingress of solid objects is fully prevented; dust ingress is not completely prevented, but dust that does enter will not interfere with the normal operation of the equipment.

6: Dust tight. Complete protection against the ingress of objects and dust.


Second digit (protection against water):

0: No protection.

1: Protection against vertically falling drops of water (e.g. rain without wind); dripping water does not affect the operation of the equipment.

2: Protection against vertically dripping water when the enclosure is tilted up to 15 degrees from its normal position.

3: Protection against water sprayed at angles up to 60 degrees from vertical (e.g. rain driven by strong wind).

4: Protection against water splashing from any direction.

5: Protection against low-pressure water jets from a nozzle in any direction.

6: Protection against powerful water jets; equipment can be mounted on a ship's deck and withstand heavy seas.

7: Protection against the effects of short-term immersion in water at low pressure.

8: The equipment can operate normally during continuous immersion at a specified water pressure, with no harmful ingress of water.
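The substitution question posed at the start (can an IP55 product stand in for an IP54 requirement?) reduces to comparing the two digits independently: the replacement must score at least as high on both dust and water protection. A small Python sketch of that rule (the function names are our own; note that water digits 7 and 8 cover immersion rather than jets, so for those the simple numeric comparison is only an approximation):

```python
def parse_ip(rating):
    """Split a two-digit code such as 'IP54' into (dust, water) digits."""
    if not rating.upper().startswith("IP") or len(rating) != 4:
        raise ValueError(f"not a two-digit IP code: {rating!r}")
    return int(rating[2]), int(rating[3])

def ip_satisfies(available, required):
    """True when 'available' protects at least as well as 'required'
    against both dust (first digit) and water (second digit)."""
    a_dust, a_water = parse_ip(available)
    r_dust, r_water = parse_ip(required)
    return a_dust >= r_dust and a_water >= r_water

print(ip_satisfies("IP55", "IP54"))  # True: IP55 can replace IP54
print(ip_satisfies("IP54", "IP65"))  # False: both digits fall short
```

The same check answers the switchboard question: an IP54 cabinet design also satisfies an IP44 specification, since both digits meet or exceed the requirement.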




A guide to selecting conductors and busbars per IEC 60439

Selecting the cross-section of wires, cables, and busbars is an important and routine task in electrical work. Everyone approaches it a little differently, but three approaches are common:

  • Selection by calculation
  • Selection by experience
  • Selection according to standards

Selection according to standards is by far the most widely used. Why? Because the standards themselves are built from calculation combined with field experience, and choosing by standard also keeps the design and the construction consistent with codes that already exist.

According to IEC 60439, current ratings and conductor cross-sections up to 400 A are selected from Table 8 of IEC 60439-1:

Range of rated current 1) (A)    Conductor cross-sectional area 2) 3)
                                 mm2        AWG/MCM
  0 to   8                        1.0         18
  8 to  12                        1.5         16
 12 to  15                        2.5         14
 15 to  20                        2.5         12
 20 to  25                        4.0         10
 25 to  32                        6.0         10
 32 to  50                       10            8
 50 to  65                       16            6
 65 to  85                       25            4
 85 to 100                       35            3
100 to 115                       35            2
115 to 130                       50            1
130 to 150                       50            0
150 to 175                       70           00
175 to 200                       95          000
200 to 225                       95         0000
225 to 250                      120          250
250 to 275                      150          300
275 to 300                      185          350
300 to 350                      185          400
350 to 400                      240          500
1) The value of the rated current shall be greater than the first value in the first column and less than or equal to the second value in that column.
2) For convenience of testing and with the manufacturer's consent, smaller conductors than those given for a stated rated current may be used.
3) Either of the two conductors specified for a given rated current range may be used.
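Table 8 amounts to a band lookup: walk down the rows until the rated current is no greater than the band's upper limit, then take that row's cross-section. A sketch of that lookup in Python (metric column only; the data is transcribed from the table above, and the names are our own):

```python
# (upper limit of rated-current band in A, cross-section in mm2),
# transcribed from Table 8 of IEC 60439-1 as reproduced above.
TABLE_8 = [
    (8, 1.0), (12, 1.5), (15, 2.5), (20, 2.5), (25, 4.0),
    (32, 6.0), (50, 10), (65, 16), (85, 25), (100, 35),
    (115, 35), (130, 50), (150, 50), (175, 70), (200, 95),
    (225, 95), (250, 120), (275, 150), (300, 185), (350, 185),
    (400, 240),
]

def conductor_mm2(rated_current):
    """Cross-section (mm2) for a rated current up to 400 A: the current
    must be greater than the previous band's limit and less than or
    equal to this band's limit (footnote 1 of the table)."""
    for upper_limit, mm2 in TABLE_8:
        if rated_current <= upper_limit:
            return mm2
    raise ValueError("above 400 A; use Table 9 instead")

print(conductor_mm2(63))   # 16, from the 50-65 A band
```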

Current ratings and conductor/busbar cross-sections from 400 A to 3150 A are selected from Table 9 of IEC 60439-1:

Test conductors
Rated current (A)   Range of rated current 1) (A)   Cables: qty × cross-section 3) (mm2)   Copper bars 2): qty × dimensions 3) (mm)
  500                 400 to  500                    2 × 150 (16)                           2 ×  30 × 5  (15)
  630                 500 to  630                    2 × 185 (18)                           2 ×  40 × 5  (15)
  800                 630 to  800                    2 × 240 (21)                           2 ×  50 × 5  (17)
1 000                 800 to 1000                    -                                      2 ×  60 × 5  (19)
1 250               1 000 to 1250                    -                                      2 ×  80 × 5  (20)
1 600               1 250 to 1600                    -                                      2 × 100 × 5  (23)
2 000               1 600 to 2000                    -                                      3 × 100 × 5  (20)
2 500               2 000 to 2500                    -                                      4 × 100 × 5  (21)
3 150               2 500 to 3150                    -                                      3 × 100 × 10 (23)
1) The value of the current shall be greater than the first value and less than or equal to the second value.
2) Bars are assumed to be arranged with their long faces vertical. Arrangements with long faces horizontal may be used if specified by the manufacturer.
3) Values in brackets are estimated temperature rises (in kelvins) of the test conductors given for reference.

The cross-section of the PE conductor is selected from the following table (S is the phase-conductor cross-section):

Cross-sectional area of phase conductor S (mm2)   Minimum cross-sectional area of protective conductor (PE, PEN) Sp (mm2)
S ≤ 16                                            S
16 < S ≤ 35                                       16
35 < S ≤ 400                                      S/2
400 < S ≤ 800                                     200
800 < S                                           S/4
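The PE-sizing table is simply a piecewise function of the phase cross-section S, which makes it easy to encode. A minimal sketch (the result may still need rounding up to the next standard conductor size):

```python
def pe_cross_section(s):
    """Minimum PE/PEN cross-section (mm2) for phase cross-section s (mm2),
    per the table above."""
    if s <= 16:
        return s
    if s <= 35:
        return 16
    if s <= 400:
        return s / 2
    if s <= 800:
        return 200
    return s / 4

print(pe_cross_section(95))   # 47.5 mm2; round up to a standard size in practice
```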

An important practical note: busbar selection for a switchboard also depends on the dimensions of the MCCB terminal pads. The busbar width is usually chosen to match the MCCB terminal width, and the thickness is then chosen so the bar meets the standard ampacity tables. Typical MCCB terminal widths are:

  • Frame size 63 A, 100 A: 17 mm
  • Frame size 200 A: 22.5 mm
  • Frame size 400 A: 30 mm
  • Frame size 800 A: 41 mm
  • Frame size 1200 A: 44 mm
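The width rule above can be kept as a simple lookup keyed on frame size. The figures below are the typical terminal widths just listed and will vary between manufacturers, so treat them as planning values only (names are our own):

```python
# Typical MCCB terminal widths (mm) by frame size (A), from the list above.
MCCB_TERMINAL_WIDTH_MM = {63: 17, 100: 17, 200: 22.5, 400: 30, 800: 41, 1200: 44}

def busbar_width_mm(frame_size_a):
    """Busbar width chosen to match the MCCB terminal width; the bar's
    thickness is then picked separately from the ampacity tables."""
    try:
        return MCCB_TERMINAL_WIDTH_MM[frame_size_a]
    except KeyError:
        raise ValueError(f"no terminal-width figure for a {frame_size_a} A frame")

print(busbar_width_mm(400))  # 30
```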

source :