Memorize PMP Processes – The Easy Way

In this article I will try to give some insight into how to memorize the PMP processes and how to relate each one to its corresponding Process Group and Knowledge Area.

We will be using the famous Project Management Process Groups and Knowledge Areas mapping table from the PMBOK® Guide, shown below, to demonstrate how we can easily relate each process to its Process Group and Knowledge Area.

As we all know, according to the PMBOK® Guide 5th Edition there are 47 project management processes, scattered among 5 Process Groups and 10 Knowledge Areas:

  • Initiating: 2 processes
  • Planning: 24 processes
  • Executing: 8 processes
  • Monitoring & Controlling: 11 processes
  • Closing: 2 processes

So what is the easiest way to relate each process to its Process Group? Clearly, the Initiating and Closing Process Groups are not the ones causing confusion here, since between them they have only 4 processes, which leaves us with three Process Groups to focus on. You don’t have to memorize all 47 process names; you just need to focus on a few flash words, such as “Plan“, “Estimate“, “Perform“, “Develop“, “Control“, and “Validate“. We will discuss each of them below.

  • Plan:
    • 11 processes have this word in their names:
      • Develop Project Management Plan: Integration Management Knowledge Area.
      • Plan Scope Management: Scope Management Knowledge Area.
      • Plan Schedule Management: Time Management Knowledge Area.
      • Plan Cost Management: Cost Management Knowledge Area.
      • Plan Quality Management: Quality Management Knowledge Area.
      • Plan Human Resource Management: Human Resource Management Knowledge Area.
      • Plan Communications Management: Communications Management Knowledge Area.
      • Plan Risk Management: Risk Management Knowledge Area.
      • Plan Risk Responses: Risk Management Knowledge Area.
      • Plan Procurement Management: Procurement Management Knowledge Area.
      • Plan Stakeholder Management: Stakeholder Management Knowledge Area.

All of the processes mentioned above belong to the Planning Process Group, so if a question states that you are in a process whose name contains the word “Plan” and then asks what you should do next, you know that you are in the Planning phase of the project.

  • Estimate:
    • 3 processes have this word in their names:
      • Estimate Activity Resources: Time Management Knowledge Area.
      • Estimate Activity Durations: Time Management Knowledge Area.
      • Estimate Costs: Cost Management Knowledge Area.

All of the processes mentioned above belong to the Planning Process Group.

  • Perform:
    • 4 processes have this word in their names:
      • Perform Integrated Change Control: Integration Management Knowledge Area.
      • Perform Quality Assurance: Quality Management Knowledge Area.
      • Perform Qualitative Risk Analysis: Risk Management Knowledge Area.
      • Perform Quantitative Risk Analysis: Risk Management Knowledge Area.

The first one belongs to the Monitoring & Controlling Process Group because its name also contains the word “Control”; “Perform Quality Assurance” goes under the Executing Process Group; and the last two belong to the Planning Process Group.

  • Develop:
    • 4 processes have this word in their names:
      • Develop Project Charter: Integration Management Knowledge Area.
      • Develop Project Management Plan: Integration Management Knowledge Area.
      • Develop Schedule: Time Management Knowledge Area.
      • Develop Project Team: Human Resource Management Knowledge Area.

All of the processes mentioned above belong to the Planning Process Group, except for “Develop Project Team”, which belongs to the Executing Process Group, and “Develop Project Charter”, which belongs to the Initiating Process Group.

  • Control:
    • 10 processes have this word in their names:
      • Monitor & Control Project Work: Integration Management Knowledge Area.
      • Perform Integrated Change Control: Integration Management Knowledge Area.
      • Control Scope: Scope Management Knowledge Area.
      • Control Schedule: Time Management Knowledge Area.
      • Control Costs: Cost Management Knowledge Area.
      • Control Quality: Quality Management Knowledge Area.
      • Control Communications: Communications Management Knowledge Area.
      • Control Risks: Risk Management Knowledge Area.
      • Control Procurements: Procurement Management Knowledge Area.
      • Control Stakeholder Engagement: Stakeholder Management Knowledge Area.

All of the processes mentioned above belong to the Monitoring & Controlling Process Group. The good news is that 10 of the 11 processes in Monitoring & Controlling have the word “Control” in their names 🙂.

  • Validate:
    • Only 1 process has this word in its name:
      • Validate Scope: Scope Management Knowledge Area. This process belongs to the Monitoring & Controlling Process Group.
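
To pull the heuristic together, here is a minimal sketch in Python, purely as a memorization aid: the default mapping and the exceptions below simply restate the lists above.

```python
# Flash-word heuristic: the default Process Group implied by each flash word,
# plus the handful of exceptions called out in the text above.
DEFAULT_GROUP = {
    "Plan": "Planning",
    "Estimate": "Planning",
    "Develop": "Planning",
    "Perform": "Planning",
    "Control": "Monitoring & Controlling",
    "Validate": "Monitoring & Controlling",
}

EXCEPTIONS = {
    "Perform Integrated Change Control": "Monitoring & Controlling",
    "Perform Quality Assurance": "Executing",
    "Develop Project Charter": "Initiating",
    "Develop Project Team": "Executing",
}

def process_group(name: str) -> str:
    """Guess a PMBOK 5th Edition process's Process Group from its flash word."""
    if name in EXCEPTIONS:
        return EXCEPTIONS[name]
    for word, group in DEFAULT_GROUP.items():  # checked in insertion order
        if word in name:
            return group
    return "no flash word - memorize this one separately"

print(process_group("Plan Risk Responses"))        # Planning
print(process_group("Perform Quality Assurance"))  # Executing
print(process_group("Validate Scope"))             # Monitoring & Controlling
```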

Project Management Process Groups and Knowledge Areas Mapping

So the advice here is that you don’t have to memorize each and every process together with the Process Group and Knowledge Area it belongs to; you just need to focus on the flash words mentioned above. I hope you find this post useful and that it helps you pass your PMP exam.

Your feedback is highly appreciated.

References: PMBOK® Guide 5th Edition by PMI

Source from: http://wafimohtaseb.com/2012/05/27/how-to-memorize-pmp-processes


Proper Data Center Staffing is Key to Reliable Operations

The care and feeding of a data center
By Richard F. Van Loo

Managing and operating a data center comprises a wide variety of activities, including the maintenance of all the equipment and systems in the data center, housekeeping, training, and capacity management for space, power, and cooling. These functions have one requirement in common: the need for trained personnel. As a result, an ineffective staffing model can impair overall availability.

The Tier Standard: Operational Sustainability outlines behaviors and risks that reduce the ability of a data center to meet its business objectives over the long term. According to the Standard, the three elements of Operational Sustainability are Management and Operations, Building Characteristics, and Site Location (see Figure 1).

Figure 1. According to Tier Standard: Operational Sustainability, the three elements of Operational Sustainability are Management and Operations, Building Characteristics, and Site Location.

Management and Operations comprises behaviors associated with:

• Staffing and organization

• Maintenance

• Training

• Planning, coordination, and management

• Operating conditions

Building Characteristics examines behaviors associated with:

• Pre-Operations

• Building features

• Infrastructure

Site Location addresses site risks due to:

• Natural disasters

• Human disasters

Management and Operations includes the behaviors that are most easily changed and have the greatest effect on the day-to-day operations of data centers. All the Management and Operations behaviors are important to the successful and reliable operation of a data center, but staffing provides the foundation for all the others.

Staffing
Data center staffing encompasses the three main groups that support the data center: Facility, IT, and Security Operations. Facility operations staff addresses management, building operations, and engineering and administrative support. Shift presence, maintenance, and vendor support are the areas of daily activity that can affect data center availability.

The Tier Standard: Operational Sustainability breaks Staffing into three categories:

• Staffing. The number of personnel needed to meet the workload requirements for specific maintenance
activities and shift presence.

• Qualifications. The licenses, experience, and technical training required to properly maintain and
operate the installed infrastructure.

• Organization. The reporting chain for escalating issues or concerns, with roles and responsibilities
defined for each group.

In order to be fully effective, an enterprise must have the proper number of qualified personnel, organized correctly. Uptime Institute Tier Certification of Operational Sustainability and Management & Operations Stamp of Approval assessments repeatedly show that many data centers are less than fully effective because their staffing plan does not address all three categories.

Headcount
The first step in developing a staffing plan is to determine the overall headcount. Figure 2 can assist in determining the number of personnel required.

Figure 2. Factors that go into calculating staffing requirements

The initial steps address how to determine the total number of hours required for maintenance activities and shift presence. Maintenance hours include activities such as:

• Preventive maintenance

• Corrective maintenance

• Vendor support

• Project support

• Tenant work orders

The number of hours for all these activities must be determined for the year and attributed to each trade.

Next, the data center must determine what level of shift presence is required to support its business objective. As uptime objectives increase, so do staffing presence requirements. Besides deciding whether personnel are needed on site 24 x 7 or at some lesser level, the data center operator must also decide what level of technical expertise or trade is needed; this may result in two or three people on site for each shift. These decisions make it possible to determine the number of people and hours required to support shift presence for the year. Activities performed on shift include conducting rounds, monitoring the building management system (BMS), operating equipment, and responding to alarms. These jobs do not typically require all the hours allotted to a shift, so other maintenance activities can be assigned during that shift, reducing the overall number of staffing hours required.

Once the total number of hours required by trade for maintenance and shift presence has been determined, divide it by the number of productive hours (hours/person/year available to perform work) to get the required number of personnel for each trade. The results will be fractional numbers, which can be addressed by overtime (less than 10% overtime is advised), contracting, or rounding up.
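
As a rough illustration of this arithmetic, the sketch below (Python) converts annual hours per trade into headcount. The workload figures and the 1,800 productive hours per person are invented example inputs, not recommended values:

```python
# Illustrative headcount arithmetic: annual required hours per trade divided by
# productive hours per person per year gives a fractional FTE count. The
# remainder can be covered by overtime (kept under ~10%), contracting, or
# rounding up to the next full position.
import math

PRODUCTIVE_HOURS_PER_PERSON = 1800   # example: hours/person/year available to work

required_hours = {                   # example annual hours: maintenance + shift presence
    "electrical": 4300,
    "mechanical": 3700,
    "controls":   2000,
}

for trade, hours in required_hours.items():
    fte = hours / PRODUCTIVE_HOURS_PER_PERSON
    staff = math.floor(fte)
    overtime = (fte - staff) / staff if staff else float("inf")
    if staff and overtime <= 0.10:   # shortfall small enough to absorb as overtime
        plan = f"{staff} staff plus {overtime:.0%} overtime"
    else:
        plan = f"round up to {math.ceil(fte)} staff (or contract the remainder)"
    print(f"{trade}: {fte:.2f} FTE -> {plan}")
```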

Qualification Levels
Data center personnel also need to be technically qualified to perform their assigned activities. As the Tier level or complexity of the data center increases, the qualification levels for the technicians also increase. They all need to have the required licenses for their trades and job description as well as the appropriate experience with data center operations. Lack of qualified personnel results in:

• Maintenance being performed incorrectly

• Poor quality of work

• Higher incidence of human error

• Inability to react and correct data center issues

Organized for Response
A properly organized data center staff understands the reporting chain of each organization, along with their individual roles and responsibilities. To aid that understanding, an organization chart showing the reporting chain and interfaces between Facilities, IT, and Security should be readily available and identify backups for key positions in case a primary contact is unavailable.

Impacts to Operations
The following examples from three actual operational data centers show how staffing inefficiencies may affect data center availability.

The first data center had two to three personnel per shift covering the data center 24 x 7, which is one of the larger staff counts that Uptime Institute typically sees. Further investigation revealed that only two individuals on the entire data center staff were qualified to operate and maintain equipment. All other staff had primary functions in other non-critical support areas. As a result, personnel unfamiliar with the critical data center systems were performing shift presence activities. Although maintenance functions were being done, if anything was discovered during rounds, additional personnel had to be called in, increasing the response time before the incident could be addressed.

The second data center had very qualified personnel; however, the overall head count was low. This resulted in overtime rates far exceeding the advised 10% limit. The personnel were showing signs of fatigue that could result in increased errors during maintenance activities and rounds.

The third data center relied solely on a call-in method to respond to any incidents or abnormalities. Qualified technicians performed maintenance two or three days a week. No personnel were assigned to perform shift rounds. On-site Security staff monitored alarms and had to call in maintenance technicians to respond to them. The data center was relying on the redundancy of systems and components to cover the time it took for technicians to respond and return the data center to normal operations after an incident.

Assessment Findings
Although these examples show deficiencies in individual data centers, many data centers are less than optimally staffed. In order to be fully effective in a management and operations behavior, the organization must be Proactive, Practiced, and Informed. Data centers may have the right number of personnel (Proactive), but they may not be qualified to perform the required maintenance or shift presence functions (Practiced), or they may not have well-defined roles and responsibilities to identify which group is responsible for certain activities (Informed).

Figure 3 shows the percentage of data centers that were found to have ineffective behaviors in the areas of staffing, qualifications, and organization.

Figure 3. Ineffective behaviors in the areas of staffing, qualifications, and organization.

Staffing (appropriate number of personnel) is found to be inadequate in only 7% of data centers assessed. However, personnel qualifications are found to be inadequate in twice as many data centers, and the way the data center is organized is found to be ineffective even more often. Although these percentages are not very high, staffing affects all data center management. Staffing shortcomings are found to affect maintenance, planning, coordination, and load management activities.

The effects of staffing inadequacies show up most often in data center operations. According to the Uptime Institute Abnormal Incident Reports (AIRs) database, the root cause of 39% of data center incidents falls into the operational area (see Figure 4). The causes can be attributed to human error stemming from fatigue, lack of knowledge of a system, failure to follow proper procedures, and so on. The right, qualified staff could potentially prevent many of these types of incidents.

Figure 4. According to the Uptime Institute Abnormal Incident Reports (AIRs) database, the root cause of 39% of data center incidents falls into the operational area.

Adopting the proven Start with the End in Mind methodology provides the opportunity to justify the operations staff early in the planning cycle by clearly defining service levels and the staff required to support the business. Having those discussions with the business and correlating them to the cost of downtime should help management understand the returns on this investment.

Staffing 24 x 7
When developing an operations team to support a data center, the first and most crucial decision to make is to determine how often personnel need to be available on site. Shift presence duties can include a number of things, including facility rounds and inspections, alarm response, vendor and guest escorts, and procedure development. This decision must be made by weighing a variety of factors, including criticality of the facility to the business, complexity of the systems supporting the data center, and, of course, cost.

For business objectives that are critical enough to require Tier III or IV facilities, Uptime Institute recommends a minimum of one to two qualified operators on site 24 hours per day, 7 days per week, 365 days per year (24 x 7). Some facilities feel that having operators on site only during normal business hours is adequate, but they are running at a higher risk the rest of the time. Even with outstanding on-call and escalation procedures, emergencies may intensify quickly in the time it takes an operator to get to the site.

Increased automation within critical facilities causes some to believe it appropriate to operate as a “Lights Out” facility. However, there is an increased risk to the facility any time there is not a qualified operator on site to react to an emergency. While a highly automated building may be able to make a correction autonomously from a single fault, those single faults often cascade and require a human operator to step in and make a correction.

The value of having qualified personnel on site is reflected in Figure 5, which shows the percentage of data center saves (incident avoidance) based on the AIRs database.

Figure 5. The percentage of data center saves (incident avoidance) based on the AIRs database

Equipment redundancy is the largest single category of saves at 38%. However, saves from staff performing proper maintenance and from on-site technicians detecting problems before they became incidents totaled 42%.

Justifying Qualified Staff
The cost of having qualified staff operating and maintaining a data center is typically one of the largest, if not the largest, expense in a data center operating budget. Because of this, it is often a target for budget reduction. Communicating the risk to continuous operations may be the best way to fight off staffing cuts when budget cuts are proposed. Documenting the specific maintenance activities that will no longer be performed or the availability of personnel to monitor and respond to events can support the importance of maintaining staffing levels.

Cutting budget in this way will ultimately prove counterproductive, result in ineffective staffing, and waste initial efforts to design and plan for the operation of a highly available and reliable data center. Properly staffing, and maintaining the appropriate staffing, can reduce the number and severity of incidents. In addition, appropriate staffing helps the facility operate as designed, ensuring planned reliability and energy use levels.

Source link: https://journal.uptimeinstitute.com/data-center-staffing

Fibre Channel over IP: What it is and what it’s used for

Fibre Channel over IP bundles Fibre Channel frames into IP packets and can be a cost-effective way to link remote fabrics where no dark fibre exists between sites.

What is FCIP, and what is it used for?

Fibre Channel over IP, or FCIP, is a tunnelling protocol used to connect Fibre Channel (FC) switches over an IP network, enabling interconnection of remote locations. From the fabric view, an FCIP link is an inter-switch link (ISL) that transports FC control and data frames between switches.

FCIP routers link SANs to enable data to traverse fabrics without the need to merge fabrics. FCIP as an ISL between Fibre Channel SANs makes sense in situations such as:

  • Where two sites are connected by existing IP-based networks but not dark fibre.
  • Where IP networking is preferred because of cost, or where the distance exceeds the FC limit of 500 kilometres.
  • Where the duration or lead time of the requirement does not enable dark fibre to be installed.

FCIP ISLs have inherent performance, reliability, data integrity and manageability limitations compared with native FC ISLs. Reliability measured in percentage of uptime is on average higher for SAN fabrics than for IP networks. Network delays and packet loss may create bottlenecks in IP networks. FCIP troubleshooting and performance analysis requires evaluating the whole data path from FC fabric, IP LAN and WAN networks, which can make it more complex to manage than other extension options.

Protocol conversion from FC to FCIP can impact the performance that is achieved unless the IP LAN and WAN are optimally configured, and large FC frames are likely to fragment into two Ethernet packets. The default maximum transmission unit (MTU) size for Ethernet is 1,500 bytes, and the maximum Fibre Channel frame size is 2,172 bytes, including FC headers. So, a review of the IP network’s support for jumbo frames is important if sustained gigabit throughput is required. To determine the optimum MTU size for the network, you should review IP WAN header overheads for network resources such as the VPN and MPLS.
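
To make the fragmentation arithmetic concrete, here is a small Python sketch. It deliberately ignores IP, TCP, and FCIP encapsulation headers, which in practice reduce the usable payload per packet and strengthen the case for jumbo frames:

```python
# How many Ethernet packets carry one full-size Fibre Channel frame?
import math

FC_FRAME_BYTES = 2172            # maximum FC frame size, including FC headers

for mtu in (1500, 9000):         # default Ethernet MTU vs. a common jumbo MTU
    packets = math.ceil(FC_FRAME_BYTES / mtu)
    print(f"MTU {mtu:>4}: {packets} Ethernet packet(s) per FC frame")

# MTU 1500: 2 Ethernet packet(s) per FC frame -> every full-size frame fragments
# MTU 9000: 1 Ethernet packet(s) per FC frame -> no fragmentation
```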

FCIP is typically deployed for long-haul applications that are not business-critical and do not need especially high performance.

Applications for FCIP include:

  • Remote data asynchronous replication to a secondary site.
  • Centralised SAN backup and archiving, although tape writes can fail if packets are dropped.
  • Data migration between sites, as part of a data centre migration or consolidation project.

source: http://www.computerweekly.com/answer/Fibre-Channel-over-IP-What-it-is-and-what-its-used-for


A data center migration checklist to mitigate risk

Alleviate the pain points for IT upgrades, consolidation or mergers with this 10-step data center migration checklist.

Major IT changes are inevitable, but businesses can mitigate their inherent risk by following a proper data center migration checklist.

Data centers consist of complicated, densely populated racks of hardware running all kinds of software, connected by oodles of cabling. So, when the firm plans to migrate an application, a business group or perhaps the entire IT infrastructure to a new platform, it can cause a panic. A migration means sifting through the complex web of connected devices, applications, cooling systems and cables to map out all interdependencies, then planning and executing on a data center migration project plan with minimal disruptions.

Here is a data center migration checklist in 10 easy steps.

1. Understand why you’re migrating

Businesses have different reasons for migrating to a new system, and those motives alter the potential challenges IT will face during migration. Perhaps market success caused explosive growth that rendered the current data center facility obsolete: More processing power is needed. Perhaps the company wants to save costs: Data center consolidation and right-sizing by combining systems will lower licensing and operational expenses.

Mergers and acquisitions often drive a data center migration project: The two groups must become one cohesive organization. Regulatory requirements also spark change: A corporation will revamp its data center to shore up backup, archiving, data management and security.

2. Map out a clear plan

The success or failure of a migration project depends on how well the IT department completes its due diligence. Ask the right questions long before you touch any data center system. “Generally, companies start 18 months ahead,” said Tim Schutt, vice president at Transitional Data Services (TDS), a technology consulting company based in Westborough, Mass.

Create a data center migration project plan that identifies the steps in the process, as well as the key resources needed. Define the scope and size of the project, and then examine key limiting factors, such as system availability and security. Set a migration budget and get the organization’s approval. Finally, account for future system requirements, and leave enough capacity in the new systems to support future growth.

3. Get everyone on board

How will the change affect other departments within the organization? Individual stakeholders view the data center migration uniquely because they concentrate only on how a move affects their daily operations.

The CFO views the project as a cost cutter. The data center manager perceives it as a logistical nightmare — one giant, multiyear checklist of actions with hazards lurking everywhere. The systems administrators view it as a technical challenge. The business units might envision outages that will threaten their performance.

First, it is incumbent on the data center manager to understand those different viewpoints within and outside the IT team. Spend time in the various departments. Early in the process, make these employees aware of the changes that are coming. As the migration unfolds, pull in different departments’ executives for planning; make sure their voice is heard. This will encourage non-IT personnel to support the project and work with your team to solve any problems.

4. Complete an inventory

IT departments often support systems that aren’t officially on the books; data center resources enter the organization through the front and back doors. Before beginning a migration project, the IT shop must identify all of its components. That means — especially in larger companies — finding servers hidden under employees’ desks and applications that have been running in departmental stealth mode for years.

Once all the secret and known IT assets are accounted for, the IT team must map their complex set of interdependencies. “The biggest challenge is figuring out the dependencies among all of the different elements,” said Aaron Cox, practice manager at Forsythe Technology Inc., a management consulting and technology services provider in Skokie, Ill. “You don’t want to change one system and knock another one offline.”

Identify all the hardware, software, network equipment, storage devices, air and cooling systems, power equipment, and data involved in the move. Then pinpoint the location of each of these data center elements, determine where each will move and estimate how long that process will take.
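
One way to act on such an inventory, sketched below in Python, is to treat it as a dependency graph and derive a safe move order with a topological sort. The system names and dependencies here are invented for illustration:

```python
# Toy dependency graph: each system lists the systems it depends on.
# A topological sort yields an order in which each system is moved only
# after everything it depends on is already in place at the new site.
from graphlib import TopologicalSorter   # Python 3.9+

dependencies = {                          # invented example inventory
    "web-frontend":  {"app-server"},
    "app-server":    {"database", "auth-service"},
    "auth-service":  {"database"},
    "database":      {"storage-array"},
    "storage-array": set(),
}

move_order = list(TopologicalSorter(dependencies).static_order())
print(move_order)
# ['storage-array', 'database', 'auth-service', 'app-server', 'web-frontend']
```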

5. Set a downtime limit

Businesses today are intolerant of long service disruptions, aka downtime. Everyone expects their systems to be available 24/7. But saying you can’t afford downtime isn’t the same as saying you can afford uptime. Keeping systems up during a migration adds to the project’s cost. To truly eliminate any downtime, you would need a duplicate data center, which is not practical.

IT needs to work with business units to identify times to take department and company applications offline. If they look closely enough, departments can find windows when the migration would least hinder their operation. “[For example,] a department may have a backup window when their systems are down,” TDS’ Schutt explained.

6. Develop a strong contingency plan

Problems will arise during the migration, and they will influence system availability. The challenge is to figure out the data center migration risks ahead of time, and determine how they will affect the company’s plans and which steps can lessen their impact. The success or failure of contingency plans stems from the strength or weakness of the initial audit. For example, if a firm has a complete picture of its local and wireless area networks, the IT team knows where to place backup communication lines to keep information flowing despite downtime on a major circuit.

Include interim equipment and backup systems in the contingency plan wherever necessary. Determine ahead of time how much the business is willing to spend on such devices and what will happen with them after the data center migration. Ideally, the extras will become part of the IT device pool and get used as various components age out or break down.

7. Sweat the small stuff

IT departments often have a broad understanding about what needs to happen in a data center migration project. Unfortunately, they slip up on the little things. Employees get sick — some will be out during the move. Do you have the staffing levels to continue the project? Equipment will be damaged during the move. Do you have spares? Do you have the right packing supplies for delicate items?

When data storage supplier Carbonite Inc. moved its data center, it even made allowances for the traffic in Boston. “Traffic can get quite heavy during certain times,” said Brion L’Heureux, director of data center operations. The company worked with law enforcement to avoid traffic jams and accidents as equipment moved from one location to the other.

Even the most fastidious planners cannot account for every possible obstacle. During its move, Carbonite’s fire alarm sounded, which left the staff out on the sidewalk. Factoring in some unexpected snags like these allowed the company to complete the migration on schedule.

8. Take baby steps, not giant leaps

Data center migrations typically occur in stages. First, the new system is deployed and tested. The data center staff verifies that the servers, racks, power circuits and storage all operate. Then, network connections are installed and tested. And finally, the IT team tests its backup systems and the change is made.

Once the new systems are deployed, the focus shifts to the existing system. Many companies make a dry run, testing a few elements to be sure their plan is achievable. Typically, a company will get the new systems up and running and operate the old and new equipment in tandem for some time, allowing IT to roll back a change if a significant problem arises.

9. Don’t forget about the old equipment

Companies undertaking data center migration projects end up with a lot of old equipment that cannot simply be thrown away. Firms must create a detailed decommissioning and rebuilding plan that accounts for local health and safety procedures around electronic waste. In many cases, the systems will be repurposed in some way.

Since confidential corporate data sits on the drives and in memory, IT organizations must ensure that information is wiped clean so no one else can access it.

10. Update business processes

It is imperative that the data center manager updates processes, procedures and documentation once the migration is complete. The new system will not function as the old one did, so staff need time to familiarize themselves. Hold a training session or sessions shortly after the migration to ensure the staff doesn’t revert to old, familiar processes that don’t suit the new data center setup.

Given businesses’ reliance on IT systems and the number of possible problems that could arise, migrations cause IT managers a great deal of consternation. With a data center migration checklist and game plan, managers can lessen the likelihood of problems arising and, when they do, can deal with problems without getting off track.

About the author:
Paul Korzeniowski is a freelance writer who specializes in data center issues. He has been writing about technology for two decades, is based in Sudbury, Mass., and can be reached at paulkorzen@aol.com.

source from: http://searchdatacenter.techtarget.com/tip/A-data-center-migration-checklist-to-mitigate-risk

TOGAF™ 9 and ITIL® Two Frameworks Whitepaper


TOGAF and ITIL are both frameworks that follow a process approach. Both are based on best practice and are supported by a large community of users. However, whereas TOGAF is focused on Enterprise Architecture, ITIL focuses on Service Management. Over the years of their development, both frameworks have steadily broadened their domains, from IT toward business processes, and in their latest versions they appear to have entered each other’s territory. In this paper we try to explain that the question is not whether these models describe similar processes and one has to choose between them. It is more important that the people who are concerned with Service Management understand TOGAF, and that Enterprise Architects understand ITIL, because in most large companies worldwide both will be used side by side. As most IT architects probably have more knowledge of TOGAF than of ITIL, and most IT Service Managers the reverse, this white paper will help them see and understand how these two frameworks are interrelated. Perhaps even more important is how the ‘other’ framework can enhance the value of your ‘own’ framework.

Although these frameworks describe areas of common interest, they do not necessarily do so from the same perspective. Basically, ITIL was developed to support Service Management, and TOGAF was developed to support organizations in the development of Enterprise Architecture. The focus of ITIL is therefore on services, whereas TOGAF is focused on architecture. However, since services have become part of fast-changing organizations, predicting what will be needed tomorrow is of growing interest to the people who deliver these services. Conversely, architecture has changed from a rather static design discipline to an organization-encompassing discipline, and is only useful if the rest of the organization uses it to keep all developments aligned with each other.

Small updates meet big data center requirements

Not all IT infrastructure projects require a large budget and lengthy schedule. These relatively inexpensive updates boost performance and reliability.

IT leaders constantly dream up ways to meet data center requirements for performance and efficiency, but time and money always seem to quash grand plans.

Not every IT infrastructure project needs to be a time-consuming, capital-intensive, paradigm-shifting corporate initiative. Quick and easy updates significantly benefit data center facilities and IT performance, and act as a training ground for new employees.

1. Upgrade server hardware

Strategic memory and local disk upgrades give servers quick and easy performance or capacity boosts.

Memory is a limiting resource in virtualization, and servers rarely come with a full complement onboard. Inventory unused slots and add memory to assist existing VMs or accommodate future server consolidation.

Solid-state drives (SSDs) are a local disk storage upgrade for strategic servers. SSDs improve I/O and lower latency, which is ideal for workloads sensitive to storage bandwidth. SSDs can accelerate performance if a server’s workloads rely on disk caching. Rather than rip and replace all the disk drives, add an SSD to a server’s local storage to clear bottlenecks and stop errors.

Server firmware upgrades are fast and free, but also disruptive. Only perform them to fix specific problems, such as hardware or operating system support issues. Check your asset management inventory to get a list of the current server models and firmware versions, and then check the server vendors’ download sites for updates. Ascertain via the details or release notes whether the update actually solves a problem for you. Peripheral interface and adapter devices also have firmware that may need updates.


Memory and disk upgrades pose downtime (unless hot plugging) and re-racking issues. “RAM upgrades are cheap and effective, but … it’s not exactly an ‘in place’ upgrade,” said Pete Sclafani, COO and co-founder of 6connect, a network automation solutions provider in San Francisco. Perform memory and SSD upgrades during scheduled server downtime.

Disk capacity is expensive, and you can forestall major capacity additions by removing unnecessary content or migrating data to lower storage tiers. For example, temporary directories flood with unneeded data, so clear out /tmp and c:/temp directories in servers and storage subsystems.
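
As a minimal housekeeping sketch (Python), the snippet below deletes temp files untouched for a week; the directory and age threshold are example choices, and you should confirm what is safe to delete in your environment before running anything like this:

```python
# Delete files in a temp directory that have not been modified in N days.
# Run during a maintenance window after confirming nothing live uses the files.
import os
import time

TEMP_DIR = "/tmp"            # or r"C:\temp" on Windows
MAX_AGE_DAYS = 7             # example retention threshold
cutoff = time.time() - MAX_AGE_DAYS * 86400

for root, _dirs, files in os.walk(TEMP_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                print("removed", path)
        except OSError:
            pass             # file vanished or is in use; skip it
```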

Try a zero byte reclaim for thin storage deployments. “Write zeros to all allocated but unused space,” said Tim Noble, director of IT operations at ReachIPS, a cloud platform provider in Anaheim Hills, Calif. A zero byte reclaim of the server’s allocated, never needed storage frees up space on the array.
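
The quote describes the zero-fill technique for thin-provisioned storage. Below is a bare-bones sketch (Python, Unix-only via os.statvfs); the mount point and the 90% fill limit are example assumptions, and in practice a vendor-supplied reclaim utility is usually the safer choice:

```python
# Zero-fill reclaim for thin-provisioned storage: write zeros over allocated-
# but-unused space, then delete the file. The array can then detect the zeroed
# blocks and return them to the free pool (the mechanism varies by vendor).
import os

MOUNT = "/data"                     # example mount on the thin-provisioned volume
ZERO_FILE = os.path.join(MOUNT, "zerofill.tmp")
CHUNK = 1024 * 1024                 # write 1 MiB of zeros at a time

vfs = os.statvfs(MOUNT)
free_bytes = vfs.f_bavail * vfs.f_frsize
to_write = int(free_bytes * 0.9)    # leave headroom; never fill the volume completely

with open(ZERO_FILE, "wb") as f:
    written = 0
    while written < to_write:
        f.write(b"\0" * CHUNK)
        written += CHUNK

os.remove(ZERO_FILE)                # the zeroed blocks are now reclaimable
```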

2. Redo cables

As network bandwidth reaches 10 Gigabit Ethernet (GigE), 25 GigE and faster, aging Category (Cat) 5 and 5e copper cabling infrastructure for 1 GigE is unable to cope with the new data center requirements.

In some cases, the right hardware is in place for higher bandwidth networks, but the cabling is not. “People tend to forget that when the physical network gear is upgraded, your cabling may not be taking full advantage,” Sclafani said.

Don’t rip out aging cabling all at once; Ethernet cabling is fully backward-compatible. Make relatively small, incremental investments in faster cables as time and money allow. Servers will remain on 1 GigE for the foreseeable future, so focus on network backbones, especially Ethernet-based iSCSI and Fibre Channel over Ethernet storage arrays. For example, Cat 6 cables can support 10 GigE to 55 meters, while Cat 6a and Cat 7 cables can handle 10 GigE to 100 meters, without requiring new network adapters, switches or other components.

Long distances — and 40 GigE+ Ethernet bandwidths — need expensive optical fiber media and specialized skills to splice and integrate, which entail a formal capital upgrade project.

Differentiate the new cables from older twisted-pair lines with colored jackets or another labeling scheme. Duplicate the markings or labels clearly on patch panels.

3. Add sensors

If you can’t measure it, you can’t manage it. Data center infrastructure management (DCIM) tools monitor the electrical and environmental behaviors of complex facilities.

DCIM requires a proliferation of sensors placed strategically around the data center. These tools may trigger automated responses to situational events, such as migrating workloads when a server becomes too hot, or sounding an alert when moisture suggests a cooling loop leak. Missing or inadequate sensors can leave input gaps.

What are you missing?

• Temperature sensors locate hot spots within racks and rows.
• Humidity sensors warn of excessively dry air or damaging condensation levels.
• Moisture (liquid) sensors are essential when chilled water circulates in heat exchangers or rack doors.
• Power monitors track energy use in real time.
• Air flow sensors ensure that fans are running and filters are unclogged.
• Motion detectors spot unauthorized intruders and trigger security alerts and cameras.
• Smoke/fire sensors protect valuable assets and lives.
• RFID tags help automate hardware inventory control.

“Data center monitoring tends to be the last addition to the budget and the first to get axed when project timelines go sideways,” Sclafani said. “Your sensors and instrumentation probably have room for improvement.”

New sensors are quick and non-invasive installs, done in small increments to keep cost and time commitments minimal.
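
To give a flavor of what DCIM tooling does with those inputs, here is a toy threshold check in Python; the sensor names, readings, and limits are invented placeholders, and real tools layer trending, correlation, and automated responses on top:

```python
# Toy threshold check over sensor readings: compare each reading against its
# limits and emit an alert for anything out of range.
THRESHOLDS = {                        # invented example limits (low, high)
    "rack12_temp_c":     (18.0, 27.0),
    "room_humidity_pct": (40.0, 60.0),
}

readings = {                          # invented example sensor readings
    "rack12_temp_c":     29.5,
    "room_humidity_pct": 52.0,
}

for sensor, value in readings.items():
    low, high = THRESHOLDS[sensor]
    if not low <= value <= high:      # out of range -> raise an alert
        print(f"ALERT: {sensor}={value} outside [{low}, {high}]")
```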

4. Boost data security

OS and application security updates might seem obvious, but these low-level tasks get postponed by day-to-day firefighting and complex data center projects.

Check system inventory reports and patch each server with the latest available security updates, Noble said. “This will be easier if you have automation tools like Puppet,” he added. “But even a large number of servers can be patched pretty quickly if there is a concerted effort.”

Hypervisor updates, such as moving to VMware vSphere 6, are rarer and might be delayed by testing. Check the hardware and software inventory of your virtualized servers to verify that they support the new requirements, and finish lab testing so the new features can move to production. “You might also simply update the VMware Tools on all of your hosts to [your current ESXi version],” Noble said.

Look for other security enhancements: check and fix file permissions, scour Active Directory user accounts for old or inaccurate entries, and so on. These activities pose little risk to operating services.
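
For the file-permissions item, here is a small Unix-oriented Python sketch that flags world-writable files under a directory tree; the scan root is an example, and your own security policy defines what actually counts as a bad permission:

```python
# Flag world-writable files under a directory tree -- a common quick win when
# auditing file permissions. (Unix permission bits; adapt for Windows ACLs.)
import os
import stat

SCAN_ROOT = "/srv/app"               # example directory to audit

for root, _dirs, files in os.walk(SCAN_ROOT):
    for name in files:
        path = os.path.join(root, name)
        try:
            mode = os.stat(path).st_mode
        except OSError:
            continue                 # unreadable or vanished; skip
        if mode & stat.S_IWOTH:      # "others" have write permission
            print(f"world-writable: {path} ({stat.filemode(mode)})")
```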

5. Check and improve processes

Modern data centers are process-driven — policies and procedures reduce errors and ensure consistent results regardless of who performs the work. As more IT departments move beyond script-based automation (such as PowerShell) to embrace sophisticated workflow automation tools, it’s easy to forget the actual steps and why they’re there. Roles and priorities change, opening strategic opportunities to review, streamline and optimize workflows.

“Find an operational task, map it out and see how you can make it more efficient,” Sclafani said. “You get extra points if you also ask your internal or external customers for input on processes [to] optimize.”

Perform a fire drill to verify that existing infrastructure works as expected. This is particularly important with disaster recovery (DR) and resilient systems such as server clusters. Test server failover in active/passive clusters or simulate the loss of a server in active/active configurations.

“If you have a DR site, a weekend of maintenance in one data center might be a good time to test operations in your alternate data center,” Noble said. Unacceptable service disruptions indicate additional remediation work to meet the data center’s requirements before real trouble strikes.

About the author:
Stephen J. Bigelow is a senior technology editor at TechTarget, covering data center and virtualization technologies. He has acquired many CompTIA certifications in his more than two decades writing about the IT industry.

Source: http://searchdatacenter.techtarget.com/tip/Small-updates-meet-big-data-center-requirements


How Can I Use ITIL to Improve IT Services?


When it comes to management of IT services, the question “How can we improve ‘x’?” is often asked. The “x” can represent a multitude of project facets (services offered, productivity, organizational planning, etc.), and one of the best methods for process identification and improvement is the IT Infrastructure Library (ITIL), a framework for information management that focuses on continuous improvement of business processes. There are five core process areas in the ITIL framework:

1. Service Strategy

2. Service Design

3. Service Transition

4. Service Operations Process

5. Continual Service Improvement

Each process is iterative, and process outputs serve as inputs for subsequent process areas. This framework allows organizations to integrate business and service strategies; monitor, measure, and optimize performance; optimize and reduce costs; and manage knowledge and risks effectively.

Service Strategy

As new technologies emerge, it’s important for us to understand how these tools fit into the overall service strategy of our project deliverables. ITIL’s Service Strategy process area allows organizations to identify their business objectives and customer needs, manage service portfolios, and answer the question “Why are we doing ‘x’?” rather than “How do we do ‘x’?”. Artifacts of the Service Strategy process area lay the groundwork for all subsequent core process areas (e.g., the service portfolio, vision and mission, patterns of business activity and demand forecasts, and financial information and budgets).

Service Design

Once a service strategy has been determined, it is important that those services are designed as efficiently as possible to reduce the need to improve them over their lifecycle; this is where the Service Design process area takes center stage. This process area reduces total cost of ownership (TCO) and improves service quality, consistency, and performance. Service Design covers any requirements for new or changed services, management information systems and tools, technology and management architectures, measurement methods and metrics, and the processes required to support the service being offered. Artifacts of the Service Design process area include service design packages, financial reports, SLAs/OLAs, and achievements against key performance indicators (KPIs).

Service Transition

The Service Transition process area allows service providers to plan and manage changes efficiently and effectively, manage risks to new, changed, or retired services, ensure that knowledge transfer occurs, and set performance expectations. Artifacts of this process area include a change schedule, feedback to other lifecycle stages, and information supplied to the service knowledge management system (SKMS).

Service Operations Process

The Service Operations process area provides an opportunity to see the benefits of each of the previous process areas in action. It covers the coordination and carrying out of activities and processes at the agreed-upon levels for your business users and customers. Management of events, incidents, problems, and access are all covered under the processes of Service Operations. Artifacts generated as part of this process area include operational requirements, financial reports, and operational performance data and service records.

Continual Service Improvement

Continual Service Improvement (CSI) differs from the other four process areas because it involves incremental or large-scale improvements across a service lifecycle. CSI involves more than just measuring current performance. It incorporates understanding of what to measure, why it is being measured, and what the successful outcome should be. All processes should have clearly defined objectives and actionable measurements, which will lead to actionable improvements. Artifacts generated as part of this process area include change requests for improvement implementation, service improvement plans, updates to the SKMS, achievement of metrics against KPIs, and service reports and dashboards.

Implementing an ITIL framework of iterative, continuing processes can help service provider organizations improve their “x” in a defined and measured manner. By following the processes of strategy, design, transition, operations, and CSI, organizations can reduce inefficiency and deliver greater value to their customers.