Best Practices for Data Center Relocation and Migration

Article by Nilesh Rane

The DC consolidation and migration journey is a rocky one, with challenges ranging from operational disruption to business loss. Mandar Kulkarni, Senior Vice President, Netmagic Solutions, shares some best practices to follow for a successful DC relocation and migration project. DC migration and consolidation is an uncomfortable truth for most data center managers and CIOs.

An optimally functioning data center is business critical. But chances are that your organization's data center is inadequate in some way: it may be running out of capacity, falling short of compute requirements, operationally exorbitant, outdated, or simply not keeping up with the growth of the organization.

According to a recently published data center survey report, over 30% of organizations across the globe plan to migrate or expand their data centers within the next three years. Most DCs in India are over five to seven years old; they are not designed for today's power and cooling needs, are running out of space or performance, and have a total cost of ownership that is almost surpassing the growth in business revenues.

An unplanned DC relocation and migration exercise, done without the help of experts, runs into risky waters, with consequences ranging from cost overruns and downtime to business loss or a complete blackout. Here are some best practices to ensure that a DC migration project is successful.

Best Practices For DC Migration

The solution to mitigating the challenges of DC relocation and migration is fairly simple: create a design and migration plan that keeps all the common pitfalls in mind and builds contingencies for them. Some of the best practices for a successful DC migration are as follows:

Start at the very beginning

Start the migration process as you would build a data center. Approach the migration exercise so that the new DC is planned for at least two lifecycles of infrastructure.

Identify and detail the starting point

It is important to do a comprehensive review of the current DC. Identify and document your organization's technology and business requirements, priorities, and processes. Then do a detailed review of the costs involved in the various methods of DC migration and consolidation.

Design the migration strategy

It is important to establish acceptable business downtime, determine hardware, application, and other technology requirements, and prioritize business processes. Identify at least two migration methodologies and create a plan for each. Bring all vendors and utility providers into the migration strategy and take them along.

Plan the layout – space planning

It is important to plan the new DC layout before you plan the migration. Think about white space, creating enough to allow for future growth. Plan the space judiciously, and take the help of DC architects to design this part successfully.

Plan the DC migration

Put the relocation design into an action plan: a detailed floor plan, a responsibility chart and checklists, migration priorities, mapped interdependencies, and so on. Take inputs for the plan from telecom and power providers, technology vendors, and specialists.

Inventory everything

Start with a detailed inventory of everything: from applications to business needs, to infrastructure including each cable and device, to the network including every link and port. It all needs to go into a database similar to a configuration management database (CMDB).
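As a minimal sketch, an inventory of this kind can be modeled as records with explicit dependency links, so that everything an asset relies on can be pulled into the same move group. The field names, asset IDs, and `move_group` helper below are illustrative assumptions, not a standard CMDB schema:

```python
from dataclasses import dataclass, field

@dataclass
class InventoryItem:
    """One record in a CMDB-style migration inventory (fields are assumptions)."""
    asset_id: str
    category: str                 # e.g. "server", "cable", "application"
    location: str                 # rack/row in the current DC
    depends_on: list = field(default_factory=list)  # asset_ids this item needs

# Hypothetical sample inventory
inventory = {
    "app-billing": InventoryItem("app-billing", "application", "n/a",
                                 depends_on=["srv-db01", "sw-core1"]),
    "srv-db01": InventoryItem("srv-db01", "server", "rack A3"),
    "sw-core1": InventoryItem("sw-core1", "switch", "rack A1"),
}

def move_group(asset_id: str) -> set:
    """Everything that must move with (or before) a given asset."""
    group, stack = set(), [asset_id]
    while stack:
        current = stack.pop()
        if current not in group:
            group.add(current)
            stack.extend(inventory[current].depends_on)
    return group
```

Walking the dependency links this way is what lets the migration plan answer "what else breaks if this rack moves?" before anything is unplugged.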

Create a baseline

It is critical to know the current DC's performance and TCO. In short, know your DC well before migration, with a clear understanding of all its aspects. Create a baseline for your DC so that it is easy to measure and tune performance and efficiency post migration.

Identify and create a risk management plan

Organizations should simply assume that things will go wrong and create an adequate contingency plan. Detailing and drafting a fully documented risk mitigation and management plan is essential: identify the risks, then assess, classify, and prioritize them for mitigation.

Take users and business owners along

It is important to inform all users of the migration plan, from end users to support teams and business owners. The key is to plan down to the minutest detail and walk through the plan thoroughly. Make sure to bring all the critical people to the planning events: facilities staff, project management teams, and so on.

Identify the right time for migration

It is important to select the right time for migration: avoid month-end and year-end periods, and avoid coinciding with public events such as elections and festivals.

Logistics arrangements

The logistics arrangements need some looking into: who is going to pack, number, and label all equipment; who is going to move the equipment to the destination; is there a backup vehicle in case of a breakdown; is there a need for armed guards during transportation of the equipment; and so on.

Upgrading systems during the migration

Old servers, switches, and storage devices that are out of warranty, or considered at risk when subjected to the strains and stresses of migration, should be identified and considered for replacement during planning. The migration is also an opportunity to reduce the overall footprint through consolidation, improving the reliability, performance, and efficiency of your DC. It is a popular practice to use a data center move to consolidate the DC through virtualization.

Do pre and post migration testing

It is important to create a baseline of infrastructure, network, and application behavior before executing the migration plan, so that you know exactly how things worked. Document the tests and repeat them after the migration: a full-fledged success plan.

Rely on experience

DC relocation and migration is not a regular enough occurrence for any single IT professional to have substantial experience with it. It is highly recommended to entrust the DC relocation and migration exercise to an experienced organization with proven capabilities.

Consider Experts

If it is only a physical move of the data center from one location to another, you should consider a reputable third party to support the move: a professional IT mover who will use specialized packing materials and handling. Otherwise, it is recommended to use a professional DC provider with expertise in DC migration and relocation; these establishments have proven data center relocation methodologies and best practices that they can leverage for better results and success.

Contingency planning

Finally, even superior planning cannot offset unexpected failures. Contingency planning is critical even after the migration plan has taken all the common pitfalls into consideration. Planning for a failure is better than running from pillar to post when it occurs.

Standby Equipment

If equipment is damaged during transportation or does not function at the destination, it leads to delays or disruptions in setting up the new DC. It is important for the DC migration expert helping you to have standby equipment for cases such as these.


Insurance

It is important to insure all equipment in case any major disaster occurs during the migration process. If you are using a professional data center provider, add insurance to the checklist of requirements.

Identify and Plan for External Dependencies

It is critical to identify all external dependencies, such as network service providers, and confirm their availability at the destination.

In Conclusion

In today's dynamically changing marketplace and unpredictable economic climate, it is critical that data centers facilitate current business operations as well as provide for the future growth of the business. Following these best practices will help ensure the success of a DC relocation and migration, and is a good way to prevent disaster.


How do you define data center size, density?

With shifts in the scale and density of data centers, one industry organization is drawing up ways to standardize how we talk about data center size and power needs.

There are plenty of metrics to measure data center footprint and power and cooling needs. AFCOM, the data center managers’ association, thinks it’s time to pare that down.

“You’ll hear people say ‘I have a very dense data center’ or ‘We have a small data center’ and that doesn’t really mean anything or relate to specific numbers,” said Tom Roberts, AFCOM president.

The association’s Data Center Institute think tank worked with data center designers, operators and vendors to qualify the terms for data center size and density, presented in the free paper, Data Center Standards.

AFCOM describes data center size by compute space, and density by measured peak kilowatt (kW) load.

To the extreme

AFCOM segments data center density into four categories: low (up to 4 kW per rack), medium (5 kW to 8 kW), high (9 kW to 15 kW) and extreme (16 kW or more per rack, on average).
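These bands lend themselves to a simple lookup. The sketch below is one way to implement it; how values falling exactly on or between the published band edges are classified is an assumption made here, since the article quotes the ranges only loosely:

```python
def density_category(avg_kw_per_rack: float) -> str:
    """Map average per-rack power draw onto AFCOM's four density bands.
    Boundary handling is an assumption for this sketch."""
    if avg_kw_per_rack <= 4:
        return "low"
    if avg_kw_per_rack <= 8:
        return "medium"
    if avg_kw_per_rack <= 15:
        return "high"
    return "extreme"
```

By this mapping, the 3 kW current average cited later in the article would still be low density, while a 15 kW supercomputing rack sits at the top of the high band.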

The focus on density is timely. Colocation contracts revolve more around power today than they did five years ago, when the conversation was about space, said John Sheputis, president, Infomart Data Centers, a U.S. colocation space provider.

Server consolidation — via virtualization and processor evolution — increases data center density per square foot. There are fewer cabinets and fewer power supplies to manage, with less fiber to run — all good things from an IT operations point of view, Sheputis said. But these trends change the understanding of high and low density.

Cosentry, a colocation provider headquartered in Omaha, Neb., tracks average power draw per cabinet in its facilities to baseline server space designs.

“Ten years ago, average power draw per cabinet was probably 700 to 800 watts,” said Jason Black, VP of data center services at Cosentry. “Five years ago, it was 1.5 kW. Now, 3 kW. On current trend, we’ll see five or six kilowatt average power draw in five years.”

Infomart experienced this firsthand when merging its Dallas operations with Fortune Data Centers' Hillsboro, Ore., and San Jose operations, and acquiring a former AOL data center in Ashburn, Va.

“The energy density of older data centers is two to three times lower than in newer data centers,” Sheputis said, adding that standards for energy density change greatly in a short time.

This was evident comparing the older Ashburn facility to the state-of-the-art facility in Dallas. Ashburn will undergo a renovation, not just for space but for higher-density operations, before opening in 2015.

AFCOM plans to aggregate similar baseline tracking and comparison data for a broad swath of data centers by standardizing size and density terminology.

Devil in the density details

Although AFCOM’s categories classify the total density of the data center, the devil for planning that space is in the details.

The same square footage that previously held 2 kW mixed cabinets now has a row of 8 kW servers, a set of storage arrays consuming 4 kW each, and low-power network and peripheral cabinets. A supercomputing island in one part of the data center handles big data processing at 15 kW per rack, while the other racks use only 3 kW or 4 kW each. Facility planning isn’t just about aggregate power and cooling needs, but also the layout of IT systems using the space.

Square footage discussions are still useful, Black said. But the most important thing is how many rack location units are available in a given space.

AFCOM therefore segments data center sizes, from mini (room for up to 10 racks) through mega (room for more than 9,000 racks), in combination with the density measurements above that yield power demand information.

“Watts per square foot is a flawed standard for today’s workloads,” Cosentry’s Black said.

Rack location units is a term that’s evolved recently to help estimate utilization in a given room footprint, or estimate capacity. It takes into account the cabinet footprint and hot and cold aisle allowances. But not every IT organization can discuss their data center needs by this metric.

“In many cases, the art of managing physical space has been dished off to IT people with expertise in other areas, like storage and network,” Black said. “Most people are sub-optimized in the data center and don’t know best practices.”

In an on-premises data center, perhaps clarity around power and density doesn’t matter as much. The power bill comes out of the facilities budget, and as long as cooling keeps up with the hottest cabinet in the room, your terminology is unimportant. But today, on-premises facilities face end of life or major upgrades, power usage effectiveness comes under executive-level (and executive branch) scrutiny, and many companies plan the move into a colocation facility. Suddenly, IT leaders need to know how to communicate effectively about the space, power and cooling that important workloads require.

AFCOM’s intent is for a data center manager to be able to measure compute space, designed density and current power draw, and say that they run, for example, a small-size data center designed for low density, currently operating at medium density at 52% of rack yield.
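The example summary above reduces to a couple of small calculations. The function names and the rack-yield formula below are assumptions (the article does not define how yield is computed), and only the mini and mega size endpoints are quoted in the text:

```python
def rack_yield_pct(racks_in_use: int, rack_locations: int) -> float:
    """Share of available rack location units currently occupied."""
    return 100.0 * racks_in_use / rack_locations

def size_category(rack_locations: int) -> str:
    """AFCOM size bands. Only the endpoints (mini: up to 10 racks,
    mega: more than 9,000) appear in the article, so intermediate
    bands are left unnamed here."""
    if rack_locations <= 10:
        return "mini"
    if rack_locations > 9000:
        return "mega"
    return "intermediate (band names not listed in the article)"
```

A room with 100 rack location units and 52 occupied racks, for instance, would be operating at 52% of rack yield.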



Proper Data Center Staffing is Key to Reliable Operations

The care and feeding of a data center
By Richard F. Van Loo

Managing and operating a data center comprises a wide variety of activities, including the maintenance of all the equipment and systems in the data center, housekeeping, training, and capacity management for space, power, and cooling. These functions have one requirement in common: the need for trained personnel. As a result, an ineffective staffing model can impair overall availability.

The Tier Standard: Operational Sustainability outlines behaviors and risks that reduce the ability of a data center to meet its business objectives over the long term. According to the Standard, the three elements of Operational Sustainability are Management and Operations, Building Characteristics, and Site Location (see Figure 1).

Figure 1. According to Tier Standard: Operational Sustainability, the three elements of Operational Sustainability are Management and Operations, Building Characteristics, and Site Location.

Management and Operations comprises behaviors associated with:

• Staffing and organization

• Maintenance

• Training

• Planning, coordination, and management

• Operating conditions

Building Characteristics examines behaviors associated with:

• Pre-Operations

• Building features

• Infrastructure

Site Location addresses site risks due to:

• Natural disasters

• Human disasters

Management and Operations includes the behaviors that are most easily changed and have the greatest effect on the day-to-day operations of data centers. All the Management and Operations behaviors are important to the successful and reliable operation of a data center, but staffing provides the foundation for all the others.

Data center staffing encompasses the three main groups that support the data center: Facility, IT, and Security Operations. Facility operations staff addresses management, building operations, and engineering and administrative support. Shift presence, maintenance, and vendor support are the areas that support the daily activities that can affect data center availability.

The Tier Standard: Operational Sustainability breaks Staffing into three categories:

• Staffing. The number of personnel needed to meet the workload requirements for specific maintenance activities and shift presence.

• Qualifications. The licenses, experience, and technical training required to properly maintain and operate the installed infrastructure.

• Organization. The reporting chain for escalating issues or concerns, with roles and responsibilities defined for each group.

In order to be fully effective, an enterprise must have the proper number of qualified personnel, organized correctly. Uptime Institute Tier Certification of Operational Sustainability and Management & Operations Stamp of Approval assessments repeatedly show that many data centers are less than fully effective because their staffing plans do not address all three categories.

The first step in developing a staffing plan is to determine the overall headcount. Figure 2 can assist in determining the number of personnel required.

Figure 2. Factors that go into calculating staffing requirements

The initial steps address how to determine the total number of hours required for maintenance activities and shift presence. Maintenance hours include activities such as:

• Preventive maintenance

• Corrective maintenance

• Vendor support

• Project support

• Tenant work orders

The number of hours for all these activities must be determined for the year and attributed to each trade.

The data center must determine what level of shift presence is required to support its business objectives. As uptime objectives increase, so do shift presence requirements. Besides deciding whether personnel are needed on site 24 x 7 or at some lesser level, the data center operator must also decide what level of technical expertise or trade is needed; this may result in two or three people on site for each shift. These decisions make it possible to determine the number of people and hours required to support shift presence for the year. Activities performed on shift include conducting rounds, monitoring the building management system (BMS), operating equipment, and responding to alarms. These duties do not typically require all the hours allotted to a shift, so other maintenance activities can be assigned during the shift, reducing the overall number of staffing hours required.

Once the total number of hours required by trade for maintenance and shift presence has been determined, divide it by the number of productive hours (hours per person per year available to perform work) to get the required number of personnel for each trade. The results will be fractional numbers that can be addressed by overtime (less than 10% overtime is advised), contracting, or rounding up.
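The arithmetic above can be sketched in a few lines. The function name, the return fields, and the interpretation of the 10% overtime guideline as a share of the rounded-down staff's hours are assumptions made for illustration:

```python
import math

def staff_for_trade(annual_hours: float, productive_hours: float) -> dict:
    """Headcount for one trade: annual maintenance plus shift-presence
    hours divided by productive hours per person per year."""
    exact = annual_hours / productive_hours
    floor = math.floor(exact)
    # Fraction of a person left over if you round down; the article
    # advises covering it with overtime only when overtime stays
    # under 10% (interpreted here relative to the remaining staff).
    overtime_share = (exact - floor) / floor if floor else float("inf")
    return {
        "exact": exact,
        "round_up": math.ceil(exact),
        "overtime_ok": overtime_share < 0.10,
    }
```

For example, 3,300 annual hours for one trade against 1,600 productive hours per person works out to about 2.06 people: close enough to cover with modest overtime, or round up to three. The figures are illustrative, not from the article.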

Qualification Levels
Data center personnel also need to be technically qualified to perform their assigned activities. As the Tier level or complexity of the data center increases, so do the qualification levels required of the technicians. They all need the licenses required for their trades and job descriptions, as well as appropriate experience with data center operations. A lack of qualified personnel results in:

• Maintenance being performed incorrectly

• Poor quality of work

• Higher incidence of human error

• Inability to react and correct data center issues

Organized for Response
A properly organized data center staff understands the reporting chain of each organization, along with their individual roles and responsibilities. To aid that understanding, an organization chart showing the reporting chain and interfaces between Facilities, IT, and Security should be readily available and identify backups for key positions in case a primary contact is unavailable.

Impacts to Operations
The following examples from three actual operational data centers show how staffing inefficiencies may affect data center availability.

The first data center had two to three personnel per shift covering the data center 24 x 7, one of the larger staff counts that Uptime Institute typically sees. Further investigation revealed that only two individuals on the entire data center staff were qualified to operate and maintain equipment. All other staff had primary functions in other, non-critical support areas. As a result, personnel unfamiliar with the critical data center systems were performing shift presence activities. Although maintenance functions were being performed, anything discovered during rounds required additional personnel to be called in, increasing the response time before an incident could be addressed.

The second data center had very qualified personnel; however, the overall head count was low. This resulted in overtime rates far exceeding the advised 10% limit. The personnel were showing signs of fatigue that could result in increased errors during maintenance activities and rounds.

The third data center relied solely on a call-in method to respond to incidents or abnormalities. Qualified technicians performed maintenance two or three days a week, but no personnel were assigned to perform shift rounds. On-site security staff monitored alarms and called in maintenance technicians to respond to them. The data center was relying on the redundancy of its systems and components to cover the time it took technicians to respond and return the data center to normal operations after an incident.

Assessment Findings
Although these examples show deficiencies in individual data centers, many data centers are less than optimally staffed. In order to be fully effective in a management and operations behavior, the organization must be Proactive, Practiced, and Informed. Data centers may have the right number of personnel (Proactive), but they may not be qualified to perform the required maintenance or shift presence functions (Practiced), or they may not have well-defined roles and responsibilities to identify which group is responsible for certain activities (Informed).

Figure 3 shows the percentage of data centers that were found to have ineffective behaviors in the areas of staffing, qualifications, and organization.

Figure 3. Ineffective behaviors in the areas of staffing, qualifications, and organization.

Staffing (appropriate number of personnel) is found to be inadequate in only 7% of data centers assessed. However, personnel qualifications are found to be inadequate in twice as many data centers, and the way the data center is organized is found to be ineffective even more often. Although these percentages are not very high, staffing affects all data center management. Staffing shortcomings are found to affect maintenance, planning, coordination, and load management activities.

The effects of staffing inadequacies show up most often in data center operations. According to the Uptime Institute Abnormal Incident Reports (AIRs) database, the root cause of 39% of data center incidents falls into the operational area (see Figure 4). The causes can be attributed to human error stemming from fatigue, lack of knowledge of a system, failure to follow proper procedure, and so on. The right, qualified staff could prevent many of these types of incidents.

Figure 4. According to the Uptime Institute Abnormal Incident Reports (AIRs) database, the root cause of 39% of data center incidents falls into the operational area.

Adopting the proven Start with the End in Mind methodology provides the opportunity to justify the operations staff early in the planning cycle by clearly defining service levels and the staff required to support the business. Having those discussions with the business and correlating them to the cost of downtime should help management understand the returns on this investment.

Staffing 24 x 7
When developing an operations team to support a data center, the first and most crucial decision to make is to determine how often personnel need to be available on site. Shift presence duties can include a number of things, including facility rounds and inspections, alarm response, vendor and guest escorts, and procedure development. This decision must be made by weighing a variety of factors, including criticality of the facility to the business, complexity of the systems supporting the data center, and, of course, cost.

For business objectives that are critical enough to require Tier III or IV facilities, Uptime Institute recommends a minimum of one to two qualified operators on site 24 hours per day, 7 days per week, 365 days per year (24 x 7). Some facilities feel that having operators on site only during normal business hours is adequate, but they are running at a higher risk the rest of the time. Even with outstanding on-call and escalation procedures, emergencies may intensify quickly in the time it takes an operator to get to the site.

Increased automation within critical facilities causes some to believe it appropriate to operate as a “Lights Out” facility. However, there is an increased risk to the facility any time there is not a qualified operator on site to react to an emergency. While a highly automated building may be able to make a correction autonomously from a single fault, those single faults often cascade and require a human operator to step in and make a correction.

The value of having qualified personnel on site is reflected in Figure 5, which shows the percentage of data center saves (incident avoidance) based on the AIRs database.

Figure 5. The percentage of data center saves (incident avoidance) based on the AIRs database

Equipment redundancy is the largest single category of saves, at 38%. However, saves from staff performing proper maintenance and from on-site technicians detecting problems before they became incidents totaled 42%.

Justifying Qualified Staff
The cost of having qualified staff operating and maintaining a data center is typically one of the largest, if not the largest, expense in a data center operating budget. Because of this, it is often a target for budget reduction. Communicating the risk to continuous operations may be the best way to fight off staffing cuts when budget cuts are proposed. Documenting the specific maintenance activities that will no longer be performed or the availability of personnel to monitor and respond to events can support the importance of maintaining staffing levels.

Cutting budget in this way will ultimately prove counterproductive, result in ineffective staffing, and waste initial efforts to design and plan for the operation of a highly available and reliable data center. Properly staffing, and maintaining the appropriate staffing, can reduce the number and severity of incidents. In addition, appropriate staffing helps the facility operate as designed, ensuring planned reliability and energy use levels.


Fibre Channel over IP: What it is and what it’s used for

Fibre Channel over IP bundles Fibre Channel frames into IP packets and can be a cost-effective way to link remote fabrics where no dark fibre exists between sites.

What is FCIP, and what is it used for?

Fibre Channel over IP, or FCIP, is a tunnelling protocol used to connect Fibre Channel (FC) switches over an IP network, enabling interconnection of remote locations. From the fabric view, an FCIP link is an inter-switch link (ISL) that transports FC control and data frames between switches.

FCIP routers link SANs to enable data to traverse fabrics without the need to merge fabrics. FCIP as an ISL between Fibre Channel SANs makes sense in situations such as:

  • Where two sites are connected by existing IP-based networks but not dark fibre.
  • Where IP networking is preferred because of cost or the distance exceeds the FC limit of 500 kilometres.
  • Where the duration or lead time of the requirement does not enable dark fibre to be installed.

FCIP ISLs have inherent performance, reliability, data integrity and manageability limitations compared with native FC ISLs. Reliability measured in percentage of uptime is on average higher for SAN fabrics than for IP networks. Network delays and packet loss may create bottlenecks in IP networks. FCIP troubleshooting and performance analysis requires evaluating the whole data path from FC fabric, IP LAN and WAN networks, which can make it more complex to manage than other extension options.

Protocol conversion from FC to FCIP can impact achievable performance unless the IP LAN and WAN are optimally configured, and large FC frames are likely to fragment into two Ethernet packets. The default maximum transmission unit (MTU) size for Ethernet is 1,500 bytes, while the maximum Fibre Channel frame size is 2,172 bytes, including FC headers. So a review of the IP network's support for jumbo frames is important if sustained gigabit throughput is required. To determine the optimum MTU size for the network, review the IP WAN header overheads of network services such as VPN and MPLS.
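The fragmentation point can be checked with simple arithmetic. The function below is a rough sketch; it deliberately ignores TCP/IP and FCIP encapsulation overhead (which shrinks the usable payload per packet further), and the 9,000-byte jumbo-frame MTU in the usage note is a common value assumed here, not one stated in the article:

```python
import math

def ethernet_packets_per_fc_frame(fc_frame_bytes: int = 2172,
                                  mtu_bytes: int = 1500) -> int:
    """Rough count of Ethernet packets needed to carry one FC frame
    over FCIP, ignoring encapsulation overhead."""
    return math.ceil(fc_frame_bytes / mtu_bytes)
```

At the default 1,500-byte MTU, a maximum-size 2,172-byte FC frame needs two packets; with a 9,000-byte jumbo MTU it fits in one, which is why jumbo-frame support matters for sustained throughput.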

FCIP is typically deployed for long-haul applications that are not business-critical and do not need especially high performance.

Applications for FCIP include:

  • Remote data asynchronous replication to a secondary site.
  • Centralised SAN backup and archiving, although tape writes can fail if packets are dropped.
  • Data migration between sites, as part of a data centre migration or consolidation project.



A data center migration checklist to mitigate risk

Alleviate the pain points for IT upgrades, consolidation or mergers with this 10-step data center migration checklist.

Major IT changes are inevitable, but businesses can mitigate their inherent risk by following a proper data center migration checklist.

Data centers consist of complicated, densely populated racks of hardware running all kinds of software, connected by oodles of cabling. So, when the firm plans to migrate an application, a business group or perhaps the entire IT infrastructure to a new platform, it can cause a panic. A migration means sifting through the complex web of connected devices, applications, cooling systems and cables to map out all interdependencies, then planning and executing on a data center migration project plan with minimal disruptions.

Here is a data center migration checklist in 10 easy steps.

1. Understand why you’re migrating

Businesses have different reasons for migrating to a new system, and those motives alter the potential challenges IT will face during migration. Perhaps market success caused explosive growth that rendered the current data center facility obsolete: More processing power is needed. Perhaps the company wants to save costs: Data center consolidation and right-sizing by combining systems will lower licensing and operational expenses.

Mergers and acquisitions often drive a data center migration project: The two groups must become one cohesive organization. Regulatory requirements also spark change: A corporation will revamp its data center to shore up backup, archiving, data management and security.

2. Map out a clear plan

The success or failure of a migration project depends on how well the IT department completes its due diligence. Ask the right questions long before you touch any data center system. “Generally, companies start 18 months ahead,” said Tim Schutt, vice president at Transitional Data Services (TDS), a technology consulting company based in Westborough, Mass.

Create a data center migration project plan that identifies the steps in the process, as well as the key resources needed. Define the scope and size of the project, and then examine key limiting factors, such as system availability and security. Set a migration budget and get the organization’s approval. Finally, account for future system requirements, and leave enough capacity in the new systems to support future growth.

3. Get everyone on board

How will the change affect other departments within the organization? Individual stakeholders view the data center migration uniquely because they concentrate only on how a move affects their daily operations.

The CFO views the project as a cost cutter. The data center manager perceives it as a logistical nightmare — one giant, multiyear checklist of actions with hazards lurking everywhere. The systems administrators view it as a technical challenge. The business units might envision outages that will threaten their performance.

First, it is incumbent on the data center manager to understand those different viewpoints within and outside the IT team. Spend time in the various departments. Early in the process, make these employees aware of the changes that are coming. As the migration unfolds, pull in different departments’ executives for planning; make sure their voice is heard. This will encourage non-IT personnel to support the project and work with your team to solve any problems.

4. Complete an inventory

IT departments often support systems that aren’t officially on the books; data center resources enter the organization through the front and back doors. Before beginning a migration project, the IT shop must identify all of its components. That means — especially in larger companies — finding servers hidden under employees’ desks and applications that have been running in departmental stealth mode for years.

Once all the secret and known IT assets are accounted for, the IT team must map their complex set of interdependencies. “The biggest challenge is figuring out the dependencies among all of the different elements,” said Aaron Cox, practice manager at Forsythe Technology Inc., a management consulting and technology services provider in Skokie, Ill. “You don’t want to change one system and knock another one offline.”

Identify all the hardware, software, network equipment, storage devices, air and cooling systems, power equipment, and data involved in the move. Then pinpoint the location of each of these data center elements, determine where each will move and estimate how long that process will take.
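The dependency-mapping step lends itself to a directed graph: once each asset's dependencies are recorded, a topological sort yields an order in which nothing is moved before the systems it relies on. The asset names below are hypothetical, a minimal sketch rather than a real inventory:

```python
from graphlib import TopologicalSorter

# Hypothetical inventory: each asset maps to the assets it depends on.
dependencies = {
    "crm_app": {"db_server", "auth_service"},
    "auth_service": {"db_server"},
    "db_server": {"san_storage"},
    "san_storage": set(),
}

# static_order() yields every asset after its dependencies, so bringing
# systems up at the new site in this order never starts a system before
# the things it needs; shutting down at the old site uses the reverse.
move_order = list(TopologicalSorter(dependencies).static_order())
print(move_order)
```

A cycle in the mapping raises `graphlib.CycleError`, which in this context is itself useful information: it flags a mutual dependency that must be untangled before the move.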

5. Set a downtime limit

Businesses today are intolerant of long service disruptions, aka downtime. Everyone expects their systems to be available 24/7. But saying you can’t afford downtime is not the same as being able to afford continuous uptime: keeping systems up during a migration adds to the project’s cost, and truly eliminating downtime would require a duplicate data center, which is not practical.

IT needs to work with business units to identify times to take department and company applications offline. If they look closely enough, departments can find windows when the migration would least hinder their operation. “[For example,] a department may have a backup window when their systems are down,” TDS’ Schutt explained.
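Finding a window that suits every department reduces to intersecting their tolerable-outage intervals: take the latest start and the earliest end. The departments and times below are invented for illustration:

```python
from datetime import datetime

# Hypothetical per-department windows (start, end) when each can
# tolerate an outage; the goal is the overlap acceptable to all.
windows = [
    (datetime(2024, 3, 2, 22, 0), datetime(2024, 3, 3, 6, 0)),  # finance backup window
    (datetime(2024, 3, 2, 23, 0), datetime(2024, 3, 3, 8, 0)),  # HR
    (datetime(2024, 3, 2, 20, 0), datetime(2024, 3, 3, 5, 0)),  # operations
]

start = max(w[0] for w in windows)  # latest start wins
end = min(w[1] for w in windows)    # earliest end wins
if start < end:
    print(f"Shared outage window: {start} to {end}")
else:
    print("No common window; stagger the migration per department")
```

If the intersection comes up empty, that is the cue to migrate department by department rather than in one event.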

6. Develop a strong contingency plan

Problems will arise during the migration, and they will influence system availability. The challenge is to figure out the data center migration risks ahead of time, and determine how they will affect the company’s plans and which steps can lessen their impact. The success or failure of contingency plans stems from the strength or weakness of the initial audit. For example, if a firm has a complete picture of its local and wireless area networks, the IT team knows where to place backup communication lines to keep information flowing despite downtime on a major circuit.

Include interim equipment and backup systems in the contingency plan wherever necessary. Determine ahead of time how much the business is willing to spend on such devices and what will happen with them after the data center migration. Ideally, the extras will become part of the IT device pool and get used as various components age out or break down.

7. Sweat the small stuff

IT departments often have a broad understanding about what needs to happen in a data center migration project. Unfortunately, they slip up on the little things. Employees get sick — some will be out during the move. Do you have the staffing levels to continue the project? Equipment will be damaged during the move. Do you have spares? Do you have the right packing supplies for delicate items?

When data storage supplier Carbonite Inc. moved its data center, it even made allowances for the traffic in Boston. “Traffic can get quite heavy during certain times,” said Brion L’Heureux, director of data center operations. The company worked with law enforcement to avoid traffic jams and accidents as equipment moved from one location to the other.

Even the most fastidious planners cannot account for every possible obstacle. During its move, Carbonite’s fire alarm sounded, which left the staff out on the sidewalk. Building slack into the schedule for unexpected snags like these allowed the company to complete the migration on time.

8. Take baby steps, not giant leaps

Data center migrations typically occur in stages. First, the new system is deployed and tested. The data center staff verifies that the servers, racks, power circuits and storage all operate. Then, network connections are installed and tested. And finally, the IT team tests its backup systems and the change is made.

Once the new systems are deployed, the focus shifts to the existing system. Many companies make a dry run, testing a few elements to be sure their plan is achievable. Typically, a company will get the new systems up and running and operate the old and new equipment in tandem for some time, allowing IT to roll back a change if a significant problem arises.
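The staged approach above amounts to a run-and-verify loop: each stage must pass its check before the next begins, and a failure triggers rollback while the old equipment is still in service. The stage names and always-true checks below are illustrative placeholders; in practice each check would be a real acceptance test:

```python
# Run migration stages in order; a failed verification stops the
# cutover and rolls back completed stages, since the old equipment
# is still running in tandem and can keep serving traffic.
def run_migration(stages):
    completed = []
    for name, check in stages:
        if not check():
            for done in reversed(completed):
                print(f"rolling back: {done}")
            return completed, False
        completed.append(name)
        print(f"verified: {name}")
    return completed, True

# Illustrative stages mirroring the sequence described above.
stages = [
    ("deploy and test racks, power, storage", lambda: True),
    ("install and test network connections", lambda: True),
    ("verify backups and cut over", lambda: True),
]
completed, success = run_migration(stages)
```

The value of the structure is less the code than the discipline: every stage has an explicit, testable exit condition before the next one starts.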

9. Don’t forget about the old equipment

Companies undertaking data center migration projects end up with a lot of old equipment that cannot simply be thrown away. Firms must create a detailed decommissioning and disposal plan that accounts for local health, safety and electronic-waste regulations. In many cases, the systems will be repurposed in some way.

Since confidential corporate data sat on the drives and in memory, IT organizations must ensure that information is wiped clean, so no one else can access it.

10. Update business processes

It is imperative that the data center manager updates processes, procedures and documentation once the migration is complete. The new system will not function as the old one did, so staff need time to familiarize themselves. Hold a training session or sessions shortly after the migration to ensure the staff doesn’t revert to old, familiar processes that don’t suit the new data center setup.

Given businesses’ reliance on IT systems and the number of things that can go wrong, migrations cause IT managers a great deal of consternation. With a data center migration checklist and game plan, managers can lessen the likelihood of problems and deal with those that do arise without getting off track.

About the author:
Paul Korzeniowski is a freelance writer who specializes in data center issues. He has been writing about technology for two decades, is based in Sudbury, Mass., and can be reached at

An Actual Customer Call
By Larry Smith, President, ABR Consulting Group, Inc.


The draft budget below is the result of an actual call by a person visiting our website.  We produced this draft budget as the result of a 5-minute phone call and not seeing the data center.  The $720,000 budget is very real for planning and relocating a large data center containing IBM mainframes and peripheral equipment.  70% of the costs are directly related to acquiring special cables and components for the IBM system, voice and data cabling of the computer room and the costs for IBM and other large vendors to relocate their own equipment.  As a comparison, planning and relocating a 4,000-6,000 sq.ft. data center containing only servers, routers, etc. would be approximately $200,000 with half the cost being for cabling, components and relocation expense.  The remainder is for planning and project management.

This draft budget was done quickly and without seeing the site.  Ultimately, it could vary either way dramatically.  We must see the site in order to provide an accurate budget.  The importance of the exercise is to get a detailed list of what items are involved in planning and relocating a data center in front of the customer.

Have a data center project on the horizon?  Need help in finalizing your project budget?  The one thing you absolutely do not want to do is come in too low (see our newsletter on this subject).  ABR can help.  We invite you to go to our contact page (Contact Us) and either send us an email or call us directly.



On Thursday, July 5, 2001, we received a call from an operations manager of a data center in the Midwest that went something like this on our voice mail:  “This is (name withheld) from (name of organization withheld) and I need to know how much it’s going to cost to move my data center. Could someone from your office come out here today or tomorrow to meet with us?”

I made contact with this individual a short time later and had a very brief 5-minute conversation.  In that conversation, I learned that this individual was a manager of a large data center containing an IBM mainframe and other IBM systems.  I asked for the size of the data center and she didn’t know.  I told her that I was not able to travel immediately but I would have something for her by the next morning.  I was able to get her email ID and the conversation ended.

Equipped with vast experience in planning and relocating IBM-based data centers, I made the following assumptions and prepared the draft budget below:

  1. We were recently involved with two IBM-based data centers in similar organizations (same industry) and simply estimated the size of the data center to be approximately 10,000 sq.ft. for budgeting purposes.
  2. Learning that this was a very Blue shop, we were certain that IBM Global Services was hovering around this project somewhere and that the customer probably had a quote from IBM in hand.  We think that the sticker shock prompted the management of this organization to seek alternate solutions.
  3. A project of this size and scope will need 5,000-6,000 hours to plan and manage from beginning to end.  Note that the entire 6,000 hour total is over and above the normal day-to-day workload.
  4. Of the 6,000 hour total, we would suggest 2,500 hours for outside consulting and project management resources (such as ABR).  The remaining 3,500 hours will come by increasing the workload of existing staff.


The draft budget below was emailed by 10:00am PST the following day (Friday, July 6, 2001).  Two hours later, I contacted the caller to discuss it.  First, she was quite surprised that we could provide such a good estimate of her data center inventory without seeing the room and without her naming one piece of equipment in it.  Second, her organization had indeed asked for a quote from IBM Global Services and was attempting to find a less expensive solution.  Third, judging from her questions and comments, our draft quote had to be much lower than what she saw from IBM.  She did not reveal IBM’s quote, but we know they cannot underbid ABR given that we both bid on the same specification and that they bid using their normal labor rates.  Our labor costs are up to $100/hr. less depending on the resource category.  Plus, our quotes include all consulting and project management to relocate every piece of IT equipment in your data center.  The key word is EVERYTHING.  Our competition excludes many items that you will find in the draft budget below.

Draft Relocation Budget

Our immediate objective was to answer her question “How much does it cost to move my data center?”  The following draft budget is very similar to what the caller saw on her email 24 hours after her initial call.  It has been modified slightly for viewing by our website visitors.


Thank you so much for contacting the ABR Consulting Group, Inc. with regard to relocating your data center. It sounds like you need to come up with something quickly, so let me be brief, make an enormous number of assumptions based on our five-minute phone call, and provide you with a number. I’ll call you shortly and we can modify the assumptions as necessary.

Assumptions: (All of this is a pure guess based on previous experience)

1.  You have a data center that is approximately 10,000 square feet in size.

2.  The following is included in your construction budget and is not needed here:

  • Building construction (including the raised floor)
  • UPS/Generator, switchgear, etc.
  • Voice/data cabling for the general building (all but the data center)
  • Underfloor electrical (not final placement, however)
  • All HVAC, fire safety, security, etc.

3.  The following is not included as part of the construction and you will need to budget for it:

  • All voice/data cabling to equipment, racks, cabinets and remainder of computer room
  • All pre-wiring for the PBX
  • Bus & Tag cables for approx. 40 channels
  • 20 data cabinets
  • Furniture for server lab area
  • KVM systems for 80 servers
  • Planning for relocation of all IT equipment
  • Final identification of electrical receptacles and their locations

4.  Equipment – Mainframe & Peripherals

  • One 2-3 cabinet IBM 9672 RXX.  If you have an IBM 3090, we need to talk.
  • 4-6 strings of IBM (or equivalent) DASD with controllers
  • 10,000-15,000 tape cartridges and a tape storage area
  • 1 IBM 3745/46
  • 4-6 IBM 3274 controllers
  • 20 operating and network consoles
  • Possible IBM 9700 printer
  • Tables, chairs, bookcases, storage cabinets, etc.

5.  Equipment – Mid Frames

  • 6-8 large Sun systems, DEC 7000 with StorageWorks, etc.

6.  Equipment – Servers

  • 40-60 NT servers
  • 80-100 Unix servers
  • 80 CRTs
  • 60% of these servers are in cabinets; 40% are on lab-type furniture systems

7.  Equipment – Network

  • 2-3 routers
  • 4-6 switches (Cisco 6509, 5000)
  • 4 cabinets full of modems and other communications equipment
  • 8 relay racks full of other network equipment

8.  Voice/Data Circuits

  • 60 dial-in circuits with rotor system for students, etc.
  • 15 T1 lines from other buildings and to the web

9.  Workstations for Staff

  • 40 workstations for staff


In planning the budget, note that for a 12,000 to 15,000 sq.ft. data center with IBM mainframes, you will need approximately 2,500 hours of consulting and project management assistance to design all equipment layouts, produce the entire equipment migration plan and be onsite to supervise the entire migration event. Note that the labor rates that we quote below are our labor rates. IBM Global Services rates are about $180/hr.-$225/hr. You will also need to budget for components that are not normally part of the construction budget.  They are included here.

Completing the Data Center and Pre-Move Installs

Note:   This section does not include costs for new network hardware

1. Voice/data cabling for computer room (includes 12 relay racks) $    45,000
2. 10 KVMs $    24,000
3. 20 new data cabinets $    44,000
4. Shelves for cabinets $      6,000
5. New bus & tag cables (plenum-rated) (includes installation) $    50,000
6. New LIC cables for 3745/46 (plenum-rated) (includes install) $    12,000
7. New ESCON cables (includes installation) $    16,000
8. New RS232 and V.35 cables (includes installation) $      8,000
9. New furniture systems for NOC, servers, etc. $    40,000
10. Power strips $      6,000
11. 3X74 Controller racks $      2,000
12. Baluns, patch cords, coax cables, etc. $      8,000
13. Additional PBX cards, components, etc. $    16,000
14. Seed modems, CSUs, etc. $    24,000
15. Contingency $    20,000
Total Components $  321,000*

*  This amount can increase dramatically if you must acquire “seed” equipment to be pre-installed to reduce downtime (i.e. the data center must move in 12 hours but certain operations must be online within 4-6 hours).  I do not detect a serious need here unless you have an IBM 3495 tape system, which takes 8-10 days to relocate.

Contracted Relocation Expense

We are assuming that you are relocating to either another floor in the same building or to another building in your multi-building campus.

Note:  The costs below do not include re-configuring your fiber/copper backbone to other buildings on your campus should you move to a different building.  We can make this estimate, but we need to see the site.

1. Contracting with IBM to relocate all IBM equipment $    60,000 **
2. Contracting with other vendors to relocate the large, free-standing equipment $    18,000
3. Relocate the tape library $    10,000
4. Relocate servers, printers, CRTs, etc. in computer room $    20,000
5. Relocate staff PC workstations $      8,000
6. Mover expense $      6,000
Total Relocation Expense $  122,000 ***

**  This projected expense does not include IBM’s special equipment replacement
insurance that guarantees replacement equipment within a specified amount
of hours should your equipment be damaged during the move.

*** The relocation expense could be as low as $50,000 depending on actual inventory

Consulting and Project Management

For more detail on consulting and project management, see our article on Equipment Planning and Migration

1.  Early Project Planning and Management

Working with the architect and engineers.  Equipment layouts, drawings, equipment inventory, coordinating activities for final construction and pre-install. Includes cabling RFP.  Includes PBX inventory.

Consulting & Project Mgt 500 hrs $125/hr. $    62,500
Other Technical Labor 300 hrs $90/hr $    27,000
Sub Total Consulting & Proj. Mgt. $    89,500
2.  Equipment Migration Planning

Complete planning for the teardown, movement and reinstall of all equipment.  Includes drawings, project plan, data circuit cutovers, team meetings, vendor meetings and other activities

Consulting & Project Management 900 hrs $125/hr. $  112,500
Other Technical Labor 500 hrs $90/hr. $    45,000
Sub-Total Equip. Migration & Plan $   157,500
3.  Actual Move Events

Onsite presence to manage all technical relocation events

Consulting & Project Management 100 hrs. $125/hr. $    12,500
Other Technical Labor 200 hrs. $90/hr. $    18,000
Sub-Total Actual Move Events $    30,500


Total Cost for Consulting and Project Management $   277,500

TOTAL COSTS FOR PROJECT                                 $ 720,500
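As a sanity check, the budget's three sections can be re-added from the line items printed above; the figures below are taken directly from the draft budget:

```python
# Re-add the draft budget's three sections to confirm the printed totals.
components = [45_000, 24_000, 44_000, 6_000, 50_000, 12_000, 16_000, 8_000,
              40_000, 6_000, 2_000, 8_000, 16_000, 24_000, 20_000]
relocation = [60_000, 18_000, 10_000, 20_000, 8_000, 6_000]
consulting = [500 * 125 + 300 * 90,   # early planning:      $89,500
              900 * 125 + 500 * 90,   # migration planning:  $157,500
              100 * 125 + 200 * 90]   # actual move events:  $30,500

total = sum(components) + sum(relocation) + sum(consulting)
print(sum(components), sum(relocation), sum(consulting), total)
# 321000 122000 277500 720500
```

The three sub-totals and the $720,500 grand total check out against the tables above.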

This is the estimated expense that will be needed to relocate your entire data center. I’ve assumed that you have a 10,000 sq.ft. data center and lots of equipment.  We have only three exclusions:

  1. This budget does not include any expense for any type of software engineering.
  2. This budget does not include any expense for network or server “seed” equipment.
  3. This budget does not include any expense for re-engineering your copper and fiber voice/data backbone cabling system as a result of relocating to another building on your multi-building campus.

Many customers overlook two major areas of expense: (1) the cost of preparing the new computer room for the move (customer fit-up) and (2) the cost of the extra consulting and project management. These are huge expenses. If I have overestimated the size of your computer room and operations, this number can be reduced significantly, but you won’t escape most of it.

Thank you once again for contacting the ABR Consulting Group, Inc.