Top 10 data center operating procedures

Every data center needs to define its policies, procedures, and operational processes.

An ideal set of documentation goes beyond technical details about application configuration and notification matrices.

These top 10 areas should be part of your data center’s standard operating procedures manuals.

    1. Change control. In addition to defining the formal change control process, include a roster of change control board members and forms for change control requests, plans and logs.
    2. Facilities. Injury prevention program information is a good idea, as well as documentation regarding power and cooling emergency shut-off processes; fire suppression system information; unsafe condition reporting forms; new employee safety training information, logs and attendance records; illness or injury reporting forms; and visitor policies.
    3. Human resources. Include policies regarding technology training, as well as acceptable use policies, working hours and shift schedules, workplace violence policies, employee emergency contact update forms, vacation schedules, and anti-harassment and discrimination policies.
    4. Security. This is a critical area for most organizations. Getting all staff access to the security policies of your organization is half the battle. An IT organization should implement policies regarding third-party or customer system access, security violations, auditing, classification of sensitive resources, confidentiality, physical security, passwords, information control, encryption and system access controls.
    5. Templates. Providing templates for regularly used documentation types makes it easier to accurately capture the data you need in a format familiar to your staff. Templates to consider include policies, processes, logs, user guides and test/report forms.
    6. Crisis management. Having a crisis response scripted out in advance goes a long way toward reducing the stress of a bad situation. Consider including crisis management documentation around definitions; a roster of crisis response team members; crisis planning; an escalation and notification matrix; a crisis checklist; guidelines for communications; situation update forms, policies, and processes; and post-mortem processes and policies.
    7. Deployment. Repeatable processes are the key to speedy and successful workload deployments. Provide your staff with activation checklists, installation procedures, deployment plans, location of server baseline loads or images, revision history of past loads or images and activation testing processes.
    8. Materials management. Controlling your inventory of IT equipment pays off. Consider including these items in your organization’s documentation library: policies governing requesting, ordering, receiving and use of equipment for testing; procedures for handling, storing, inventorying, and securing hardware and software; and forms for requesting and borrowing hardware for testing.
    9. Internal communications. Interactions with other divisions and departments within your organization may be straightforward, but it is almost always helpful to provide a contact list of all employees in each department, with their work phone numbers and e-mail addresses. Keep a list of services and functions provided by each department, and scenarios in which it may be necessary to contact these other departments for assistance.
    10. Engineering standards. Testing, reviewing and implementing new technology in the data center is important for every organization. Consider adding these items to your organization’s standard operating procedures manuals: new technology request forms, technology evaluation forms and reports, descriptions of standards, testing processes, standards review and change processes, and test equipment policies.
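As one concrete example of the templates and forms called for above, a change control request can be sketched as a simple record that collects board approvals. This is a minimal illustration; the field names and quorum rule are assumptions, not a standard form:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRequest:
    """Hypothetical change control request form; fields are illustrative."""
    request_id: str
    requester: str
    description: str
    submitted: date
    approvals: list = field(default_factory=list)

    def approve(self, board_member: str) -> None:
        """Record a change control board member's sign-off once."""
        if board_member not in self.approvals:
            self.approvals.append(board_member)

    def is_approved(self, quorum: int = 2) -> bool:
        """Assume a request is approved once a quorum of board members sign off."""
        return len(self.approvals) >= quorum

cr = ChangeRequest("CR-001", "jdoe", "Replace PDU in rack 12", date(2024, 1, 15))
cr.approve("alice")
cr.approve("bob")
print(cr.is_approved())  # True
```

A template like this doubles as the change log entry once the request is closed out.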

About the author
Kackie Cohen is a Silicon Valley-based consultant providing data center planning and operations management to government and private sector clients. Kackie is the author of Windows 2000 Routing and Remote Access Service and co-author of Windows XP Networking.

ITIL and Security Management

ITIL and Security Management Overview

David McPhee

What is ITIL?

For the purpose of this chapter, the focus is on how information security management works within the Information Technology Infrastructure Library (ITIL).

The Information Technology Infrastructure Library (ITIL) is a framework of best practices. The concepts within ITIL support information technology services delivery organizations with the planning of consistent, documented, and repeatable or customized processes that improve service delivery to the business. The ITIL framework consists of the following IT processes: Service Support (Service Desk, Incident Management, Problem Management, Change Management, Configuration Management, and Release Management) and Services Delivery (Service Level Management, Capacity Management, Availability Management, Financial Management and IT Service Continuity Management).

History of ITIL

The ITIL concept emerged in the 1980s, when the British government determined that the level of IT service quality provided to them was not sufficient. The Central Computer and Telecommunications Agency (CCTA), now called the Office of Government Commerce (OGC), was tasked with developing a framework for efficient and financially responsible use of IT resources within the British government and the private sector.

ITIL Overview
Figure 1. ITIL Overview.

The earliest version of ITIL was originally called GITIM, for Government Information Technology Infrastructure Management. It was obviously very different from the current ITIL, but conceptually very similar, focusing on service support and delivery.

Large companies and government agencies in Europe adopted the framework very quickly in the early 1990s. ITIL spread widely and was used in both government and non-government organizations. As it grew in popularity, both in the UK and across the world, IT itself changed and evolved, and so did ITIL.

What Is Security Management?

Security management details the process of planning and managing a defined level of security for information and IT services, including all aspects associated with reaction to security incidents. It also includes the assessment and management of risks and vulnerabilities, and the implementation of cost-justifiable countermeasures.

Security management is the process of managing a defined level of security on information and IT services, including managing the reaction to security incidents. The importance of information security has increased dramatically because of the opening of internal networks to customers and business partners, the move towards electronic commerce, and the increasing use of public networks such as the Internet and intranets. The widespread use of information and information processing, as well as the increasing dependency of process results on information, requires structured and organized protection of information.


Service Support Overview
Service support describes the processes associated with the day-to day support and maintenance activities associated with the provision of IT services: Service Desk, Incident Management, Problem Management, Change Management, Configuration Management, and Release Management.

  • Service Desk: This function is the single point of contact between the end users and IT Service Management.
  • Incident Management: Best practices for resolving incidents (any event that causes an interruption to, or a reduction in, the quality of an IT service) and quickly restoring IT services.
  • Problem Management: Best practices for identifying the underlying causes of IT incidents in order to prevent future recurrences. These practices seek to proactively prevent incidents and problems.
  • Change Management: Best practices for standardizing and authorizing the controlled implementation of IT changes. These practices ensure that changes are implemented with minimum adverse impact on IT services, and that they are traceable.
  • Configuration Management: Best practices for controlling production configurations; for example, standardization, status monitoring, and asset identification. By identifying, controlling, maintaining and verifying the items that make up an organization’s IT infrastructure, these practices ensure that there is a logical model of the infrastructure.
  • Release Management: Best practices for the release of hardware and software. These practices ensure that only tested and correct versions of authorized software and hardware are provided to IT customers.
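The incident definition above is usually paired with a priority derived from impact and urgency. A minimal sketch follows, assuming a typical three-by-three priority matrix; the exact values are illustrative, since real organizations define their own:

```python
# Hypothetical ITIL-style priority matrix: (impact, urgency) -> priority,
# where 1 is most critical and 5 is routine/planning work.
PRIORITY = {
    ("high", "high"): 1,
    ("high", "medium"): 2,
    ("high", "low"): 3,
    ("medium", "high"): 2,
    ("medium", "medium"): 3,
    ("medium", "low"): 4,
    ("low", "high"): 3,
    ("low", "medium"): 4,
    ("low", "low"): 5,
}

def incident_priority(impact: str, urgency: str) -> int:
    """Derive an incident priority from its impact and urgency ratings."""
    return PRIORITY[(impact.lower(), urgency.lower())]

print(incident_priority("high", "high"))  # 1
```

Driving priority from a published matrix keeps triage decisions consistent across service desk staff.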

Service Support Details

Service Desk
The objective of the service desk is to be a single point of contact for customers who need assistance with incidents, problems, and questions, and to provide an interface for other activities related to IT and ITIL services.

Service desk diagram
Figure 2. Service desk diagram.

Benefits of Implementing a Service Desk

  • Increased first call resolution
  • Skill-based support
  • Rapid service restoration
  • Improved incident response time
  • Improved tracking of service quality
  • Improved recognition of trends and incidents
  • Improved employee satisfaction

Processes Utilized by the Service Desk

  • Workflow and procedures diagrams
  • Roles and responsibilities
  • Training evaluation sheets and skill set assessments
  • Implemented metrics and continuous improvement procedures

Incident Management
The objective of incident management is to minimize disruption to the business by restoring service operations to agreed levels as quickly as possible and to maximize the availability of IT services. It can also help protect the integrity and confidentiality of information by identifying the root cause of a problem.

Activities of an Incident Management Process

  • Incident detection and recording
  • Classification and initial support
  • Investigation and diagnosis
  • Resolution and recovery
  • Incident closure
  • Incident ownership, monitoring, tracking and communication
  • Repeatable Process

With a formal incident management practice, IT quality will improve by ensuring ticket quality, standardizing ticket ownership, and providing a clear understanding of ticket types, while decreasing the number of unreported or misreported incidents.
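The staged ticket lifecycle described above can be sketched as a small state machine. The state names and allowed transitions here are illustrative assumptions, chosen to mirror the activities listed earlier:

```python
# Hypothetical ticket states following the activity list above:
# detection/recording -> classification -> investigation -> resolution -> closure.
ALLOWED = {
    "new": {"classified"},
    "classified": {"investigating"},
    "investigating": {"resolved"},
    "resolved": {"closed"},
    "closed": set(),
}

class IncidentTicket:
    def __init__(self, ticket_id: str):
        self.ticket_id = ticket_id
        self.state = "new"
        self.history = ["new"]  # ownership/tracking: keep the full trail

    def advance(self, new_state: str) -> None:
        """Move the ticket forward, rejecting out-of-order transitions."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

t = IncidentTicket("INC-42")
t.advance("classified")
print(t.state)  # classified
```

Enforcing transitions in code is one way a ticketing tool makes the process repeatable rather than ad hoc.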

Incident management ticket owner workflow diagram
Figure 3. Incident management ticket owner workflow diagram.

Problem Management
The objective of problem management is to resolve the root cause of incidents, thereby minimizing the adverse impact of incidents and problems on the business and preventing the recurrence of incidents related to these errors. A 'problem' is an unknown underlying cause of one or more incidents, and a 'known error' is a problem that has been successfully diagnosed and for which a work-around has been identified. The outcome of a known error is a request for change (RFC).

Problem management diagram overview
Figure 4. Problem management diagram overview.

A problem is a condition often identified as a result of multiple Incidents that exhibit common symptoms. Problems can also be identified from a single significant incident, indicative of a single error, for which the cause is unknown, but for which the impact is significant.

A known error is a condition identified by successful diagnosis of the root cause of a problem, and the subsequent development of a work-around.

An RFC is a proposal for a change to the IT infrastructure or environment.
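The relationships just defined, from problem to known error to RFC, can be sketched as a small data model. The field names and the shape of the RFC record are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Problem:
    """A problem links one or more incidents to an initially unknown cause."""
    problem_id: str
    incident_ids: list
    root_cause: Optional[str] = None
    workaround: Optional[str] = None

    @property
    def is_known_error(self) -> bool:
        # A known error = root cause diagnosed + work-around identified.
        return self.root_cause is not None and self.workaround is not None

    def raise_rfc(self) -> dict:
        """A known error's outcome is a request for change (RFC)."""
        if not self.is_known_error:
            raise ValueError("diagnose root cause and work-around first")
        return {"rfc_for": self.problem_id, "change": f"fix: {self.root_cause}"}

p = Problem("PRB-1", ["INC-7", "INC-8"])
p.root_cause, p.workaround = "faulty NIC firmware", "fail over to standby"
print(p.raise_rfc()["rfc_for"])  # PRB-1
```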

Incident Management and Problem Management: What’s the Difference?
Incidents and service requests are formally managed through a staged process to conclusion. This process is referred to as the “incident management lifecycle.” The objective of the incident management lifecycle is to restore the service as quickly as possible to meet service level agreements (SLAs). The process is primarily aimed at the user level.

Problem management deals with resolving the underlying cause of one or more incidents. The focus of problem management is to resolve the root cause of errors and to find permanent solutions. Although every effort will be made to resolve the problem as quickly as possible this process is focused on the resolution of the problem rather than the speed of the resolution. This process deals at the enterprise level.

Change Management
Change management ensures that all areas follow a standardized process when implementing change into a production environment. Change is defined as any adjustment, enhancement, or maintenance to a production business application, system software, system hardware, communications network, or operational facility.

Benefits of Change Management

  • Planning change
  • Impact analysis
  • Change approval
  • Managing and implementing change
  • Increased formalization and compliance
  • Post change review
  • Better alignment of IT infrastructure to business requirements
  • Efficient and prompt handling of all changes
  • Fewer changes to be backed out
  • Greater ability to handle a large volume of change
  • Increased user productivity

Configuration Management
Configuration management is the implementation of a configuration management database (CMDB) that contains details of the organization’s elements that are used in the provision and management of its IT services. The main activities of configuration management are:

  • Planning: Planning and defining the scope, objectives, policy and process of the CMDB.
  • Identification: Selecting and identifying the configuration structures and items within the scope of your IT infrastructure.
  • Configuration control: Ensuring that only authorized and identifiable configuration items are accepted and recorded in the CMDB throughout its lifecycle.
  • Status accounting: Keeping track of the status of components throughout the entire lifecycle of configuration items.
  • Verification and audit: Auditing after the implementation of configuration management to verify that the correct information is recorded in the CMDB, followed by scheduled audits to ensure the CMDB is kept up-to-date.
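The control, status accounting, and audit activities above can be sketched in a toy CMDB. The structure, status values, and method names are assumptions for illustration, not a product API:

```python
class CMDB:
    """Toy configuration management database for illustration only."""

    def __init__(self):
        self._items = {}

    def record(self, ci_id: str, ci_type: str, authorized: bool = True) -> None:
        # Configuration control: only authorized, identifiable CIs are accepted.
        if not authorized:
            raise ValueError(f"{ci_id} is not an authorized configuration item")
        self._items[ci_id] = {"type": ci_type, "status": "in service"}

    def set_status(self, ci_id: str, status: str) -> None:
        # Status accounting: track each CI through its lifecycle.
        self._items[ci_id]["status"] = status

    def audit(self, expected_ids) -> list:
        # Verification and audit: report CIs that are missing from the CMDB.
        return sorted(set(expected_ids) - set(self._items))

db = CMDB()
db.record("srv-01", "server")
print(db.audit(["srv-01", "srv-02"]))  # ['srv-02']
```

An audit like this is what catches the stolen, moved, or misplaced items discussed in the next section.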

Configuration Management and Information Security
Without the definition of all configuration items used to provide an organization's IT services, it can be very difficult to identify which items are used for which services. This could result in critical configuration items being stolen, moved or misplaced, affecting the availability of the services dependent on them. It could also result in unauthorized items being used in the provision of IT services.

Benefits of Configuration Management

  • Reduced cost to implement, manage, and support the infrastructure
  • Decreased incident and problem resolution times
  • Improved management of software licensing and compliance
  • Consistent, automated processes for infrastructure mapping
  • Increased ability to identify and comply with architecture and standards requirements
  • Incident troubleshooting
  • Usage trending
  • Change evaluation
  • Financial chargeback and asset lifecycle management
  • Service Level Agreement (SLA) and software license negotiations

Release Management
Release management is used for platform-independent and automated distribution of software and hardware, including license controls, across the entire IT infrastructure. Proper software and hardware control ensures the availability of licensed, tested, and version-certified software and hardware that will function correctly with the available hardware. Quality control during the development and implementation of new hardware and software is also the responsibility of release management. This helps ensure that all software can be optimized to meet the demands of the business processes.

Benefits of Release Management

  • Ability to plan resource requirements in advance
  • Provides a structured approach, leading to an efficient and effective process
  • Changes are bundled together in a release, minimizing the impact on the user
  • Helps to verify correct usability and functionality before release by testing
  • Control the distribution and installation of changes to IT systems
  • Design and implement procedures for the distribution and installation of changes to IT systems
  • Effectively communicate and manage expectations of the customer during the planning and rollout of new releases

The focus of release management is the protection of the live environment and its services through the use of formal procedures and checks.

Release Categories
A release consists of the new or changed software or hardware required to implement an approved change.

  • Major software releases and hardware upgrades, normally containing large areas of new functionality, some of which may make intervening fixes to problems redundant. A major upgrade or release usually supersedes all preceding minor upgrades, releases and emergency fixes
  • Minor software releases and hardware upgrades, normally containing small enhancements and fixes, some of which may have already been issued as emergency fixes. A minor upgrade or release usually supersedes all preceding emergency fixes.
  • Emergency software and hardware fixes, normally containing the corrections to a small number of known problems
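The supersession rules above can be sketched as a small function over a chronological release history. The category names and tuple representation are illustrative:

```python
def active_releases(history):
    """Apply the supersession rules to a chronological release history.

    history: list of (version, category) tuples, where category is one of
    "major", "minor", or "emergency". A major release supersedes everything
    before it; a minor release supersedes earlier emergency fixes.
    """
    active = []
    for version, category in history:
        if category == "major":
            active = []  # supersedes all preceding releases and fixes
        elif category == "minor":
            active = [r for r in active if r[1] != "emergency"]
        active.append((version, category))
    return active

print(active_releases([("1.0", "major"), ("1.0.1", "emergency"),
                       ("1.1", "minor"), ("2.0", "major")]))
# [('2.0', 'major')]
```

Tracking supersession this way tells the release manager which fixes are still actually deployed in the live environment.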

Release management overview
Figure 5. Release management overview.

Releases can be divided based on the release unit into:

  • Delta Release is a release of only the part of the software that has been changed; for example, security patches that fix bugs in the software.
  • Full Release means that the entire software program is released again; for example, an entire version of an application.
  • Packaged Release is a combination of many changes; for example, an operating system image containing the applications as well.

Service Delivery Overview

Service delivery is the discipline that ensures IT infrastructure is provided at the right time, in the right volume, and at the right price, and that IT is used in the most efficient manner. It involves analysis and decisions to balance capacity at a production or service point with demand from customers. It also covers the processes required for the planning and delivery of quality IT services, and looks at the longer-term processes associated with improving the quality of IT services delivered.

  • Service Level Management: Service level management (SLM) is responsible for negotiating and agreeing service requirements and expected service characteristics with the customer.
  • Capacity Management: Capacity management is responsible for ensuring that IT processing and storage capacity provision match the evolving demands of the business in a cost-effective and timely manner.
  • Availability Management: Availability management is responsible for optimizing availability.
  • Financial Management: The objective of financial management for IT services is to provide cost-effective stewardship of the IT assets and the financial resources used in providing IT services.
  • IT Service Continuity Management: Service continuity is responsible for ensuring that the available IT service continuity options are understood and the most appropriate solution is chosen in support of the business requirements.

Service Level Management
The object of service level management (SLM) is to maintain and gradually improve business aligned IT service quality, through a constant cycle of agreeing, monitoring, reporting and reviewing IT service achievements and through instigating actions to eradicate unacceptable levels of service.

SLM is responsible for ensuring that service targets are documented and agreed in SLAs, and it monitors and reviews the actual service levels achieved against their SLA targets. SLM should also proactively improve all service levels within the imposed cost constraints. SLM is the process that manages and improves the agreed level of service between two parties: the provider and the receiver of a service.

SLM is responsible for negotiating and agreeing service requirements and expected service characteristics with the customer, and for measuring and reporting the service levels actually achieved against targets, the resources required, and the cost of service provision. SLM is also responsible for continuously improving service levels in line with business processes through a service improvement program (SIP); coordinating other service management and support functions, including third-party suppliers; reviewing SLAs to meet changed business needs or resolving major service issues; and producing, reviewing and maintaining the service catalogue.

Benefits of Implementing Service Level Management

  • Implementing the service level management process enables both the customer and the IT services provider to have a clear understanding of the expected level of delivered services and their associated costs for the organization, by documenting these goals into formal agreements.
  • Service level management can be used as a basis for charging for services, and can demonstrate to customers the value they are receiving from the Service Desk.
  • It also assists the service desk with managing external supplier relationships, and introduces the possibility of negotiating improved services or reduced costs.
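The core SLM loop of comparing achieved service levels against documented SLA targets can be sketched as follows. The metric names and figures are invented for the example:

```python
def sla_report(targets: dict, achieved: dict) -> dict:
    """Return per-metric pass/fail; higher achieved values meet the target.

    Assumes every metric is "higher is better" (e.g. availability %);
    a real report would also handle "lower is better" metrics like
    response time.
    """
    return {metric: achieved.get(metric, 0.0) >= target
            for metric, target in targets.items()}

# Illustrative targets and one month's achieved figures.
targets = {"availability_pct": 99.9, "first_call_resolution_pct": 70.0}
achieved = {"availability_pct": 99.95, "first_call_resolution_pct": 65.0}
print(sla_report(targets, achieved))
# {'availability_pct': True, 'first_call_resolution_pct': False}
```

A failing metric in such a report is exactly what triggers the "instigating actions to eradicate unacceptable levels of service" described above.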

Capacity Management
Capacity management is responsible for ensuring that IT processing and storage capacity provisioning match the evolving demands of the business in a cost effective and timely manner. The process includes monitoring the performance and the throughput of the IT services and supporting IT components, tuning activities to make efficient use of resources, understanding the current demands for IT resources and deriving forecasts for future requirements, influencing the demand for resource in conjunction with other Service Management processes, and producing a capacity plan predicting the IT resources needed to achieve agreed service levels.

Capacity management has three main areas of responsibility. The first of these is business capacity management (BCM), which is responsible for ensuring that the future business requirements for IT services are considered, planned and implemented in a timely fashion. These future requirements will come from business plans outlining new services, improvements and growth in existing services, development plans, etc. This requires knowledge of existing service levels and SLAs, future service levels and SLRs, the business and capacity plans, modeling techniques (analytical, simulation, trending and baselining), and application sizing methods.

The second main area of responsibility is service capacity management (SCM), which focuses on managing the performance of the IT services provided to the customers. It is responsible for monitoring and measuring services as detailed in SLAs, and for collecting, recording, analyzing and reporting on data. This requires knowledge of service levels and SLAs, systems, networks, service throughput and performance, monitoring, measurement, analysis, tuning and demand management.

The third and final main area of responsibility is resource capacity management (RCM), which focuses on management of the components of the IT infrastructure and ensures that all finite resources within the IT infrastructure are monitored and measured, and that collected data is recorded, analyzed and reported. This requires knowledge of the current technology and its utilization, future or alternative technologies, and the resilience of systems and services.

Capacity Management Processes:

  • Performance monitoring
  • Workload monitoring
  • Application sizing
  • Resource forecasting
  • Demand forecasting
  • Modeling

From these processes come the results of capacity management: the capacity plan itself, forecasts, tuning data and service level management guidelines.
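Trending, one of the modeling techniques mentioned above, can be sketched with an ordinary least-squares trend over utilization samples. The data points are invented for illustration:

```python
def linear_trend(samples):
    """Fit y = a + b*x by ordinary least squares over (x, y) samples."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical storage utilization (%) over four months, growing steadily.
a, b = linear_trend([(0, 60), (1, 65), (2, 70), (3, 75)])
months_to_full = (100 - a) / b  # months until 100% utilization at this rate
print(round(months_to_full, 1))  # 8.0
```

A forecast like this is the raw input to the capacity plan: it tells the business when to buy, not just that it will eventually need to.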

Availability Management
Availability management is concerned with the design, implementation, measurement and management of IT services to ensure the stated business requirements for availability are consistently met. Availability management requires an understanding of the reasons why IT service failures occur and the time taken to resume service. Incident management and problem management provide a key input to ensure the appropriate corrective actions are being implemented.

  • Availability is the ability of an IT component to perform at an agreed level over a period of time.
  • Reliability is the ability of an IT component to perform at an agreed level at described conditions.
  • Maintainability is the ability of an IT Component to remain in, or be restored to an operational state.
  • Serviceability is the ability for an external supplier to maintain the availability of a component or function under a third party contract
  • Resilience is a measure of freedom from operational failure and a method of keeping services reliable. One popular method of resilience is redundancy.
  • Security refers to the confidentiality, integrity, and availability of the data associated with a service.
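A common availability calculation consistent with these definitions uses mean time between failures (MTBF) and mean time to restore (MTTR). Treating availability as MTBF / (MTBF + MTTR) is a standard simplification, not something taken from this text:

```python
def availability_pct(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability = MTBF / (MTBF + MTTR), expressed as a percentage.

    MTBF reflects reliability; MTTR reflects maintainability. Improving
    either one raises the availability figure.
    """
    return 100.0 * mtbf_hours / (mtbf_hours + mttr_hours)

# A component failing on average every 999 hours and taking 1 hour to
# restore achieves "three nines" availability.
print(round(availability_pct(999.0, 1.0), 1))  # 99.9
```

The formula makes the earlier definitions concrete: reliability and maintainability are the two levers availability management can pull, and resilience (redundancy) effectively multiplies MTBF.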

Security is an essential part of availability management, whose primary focus is ensuring that the IT infrastructure continues to be available for the provision of IT services.

Some of the elements mentioned earlier are the products of performing risk analysis to identify how reliable elements are and how many problems have been caused as a result of system failure.

The risk analysis also recommends controls to improve availability of IT infrastructure such as development standards, testing, physical security and the right skills in the right place at the right time.

Financial Management
Financial management for IT services is an integral part of service management. It provides the essential management information to ensure that services are run efficiently, economically and cost effectively. An effective financial management system will assist in the management and reduction of overall long term costs, and identify the actual cost of services. This provisioning provides accurate and vital financial information to assist in decision making, identify the value of IT services, enable the calculation of TCO and ROI.

The practice of financial management enables the service manager to identify the amount being spent on security counter measures in the provision of the IT services. The amount being spent on these counter measures needs to be balanced with the risks and the potential losses that the service could incur as identified during a business impact assessment (BIA) and risk assessment. Management of these costs will ultimately reflect on the cost of providing the IT services, and potentially what is charged in the recovery of those costs.

Service Continuity Management
The objective of IT service continuity management is to support the overall business continuity management process by ensuring that the required IT technical and services facilities can be recovered within required and agreed business timescales.

IT service continuity management is concerned with managing an organization’s ability to continue to provide a pre-determined and agreed level of IT services to support the minimum business requirements, following an interruption to the business. This includes ensuring business survival by reducing the impact of a disaster or major failure, reducing the vulnerability and risk to the business by effective risk analysis and risk management, preventing the loss of customer and user confidence, and producing IT recovery plans that are integrated with and fully support the organization’s overall business continuity plan.

IT service continuity is responsible for ensuring that the available IT service continuity options are understood and the most appropriate solution is chosen in support of the business requirements. It is also responsible for identifying roles and responsibilities and making sure these are endorsed and communicated from a senior level to ensure respect and commitment for the process. Finally, IT service continuity is responsible for guaranteeing that the IT recovery plans and the business continuity plans are aligned, and are regularly reviewed, revised and tested.

The Security Management Process

Security management provides a framework to capture the occurrence of security-related incidents and limit the impact of security breaches. The activities within the security management process must be revised continuously in order to stay up-to-date and effective. Security management is a continuous process and can be compared to Deming's quality circle (Plan, Do, Check and Act).

Security image diagram
Figure 6. Security image diagram.

The inputs are the requirements formed by the clients. The requirements are translated into security services and security quality that need to be provided in the security section of the service level agreements. As the figure shows, there are arrows going both ways: from the client to the SLA and from the SLA to the client, and from the SLA to the plan sub-process and from the plan sub-process to the SLA. This means that both the client and the plan sub-process provide input to the SLA, and the SLA is an input for both the client and the process. The provider then develops the security plans for his organization. These security plans contain the security policies and the operational level agreements. The security plans (Plan) are then implemented (Do), and the implementation is then evaluated (Check). After the evaluation, both the plans and the implementation of the plan are maintained (Act).

The first activity in the security management process is the "control" sub-process. The control sub-process organizes and manages the security management process itself. It defines the processes, the allocation of responsibility, the policy statements and the management framework.

The security management framework defines the sub-processes for the development of security plans, the implementation of the security plans, the evaluation and how the results of the evaluations are translated into action plans.

The plan sub-process contains activities that in cooperation with the service level management lead to the information security section in the SLA. The plan sub-process contains activities that are related to the underpinning contracts which are specific for information security.

In the plan sub-process, the goals formulated in the SLA are specified in the form of operational level agreements (OLA). These OLAs can be defined as security plans for a specific internal organization entity of the service provider.

Besides the input of the SLA, the plan sub-process also works with the policy statements of the service provider itself. As said earlier these policy statements are defined in the control sub-process.

The operational level agreements for information security are setup and implemented based on the ITIL process. This means that there has to be cooperation with other ITIL processes. For example, if the security management wishes to change the IT infrastructure in order to achieve maximum security, these changes will only be done through the change management process. The security management will deliver the input (request for change) for this change. The change manager is responsible for the change management process itself.

The implementation sub-process makes sure that all measures, as specified in the plans, are properly implemented. During the implementation sub-process no (new) measures are defined or changed. The definition or change of measures will take place in the plan sub-process in cooperation with the change management process.

The evaluation of the implementation and the plans is very important. The evaluation is necessary to measure the success of the implementation and the security plans. The evaluation is also very important for the clients and possibly third parties. The results of the evaluation sub-process are used to maintain the agreed measures and the implementation itself. Evaluation results can lead to new requirements and so lead to a request for change. The request for change is then defined and it is then sent to the change management process.

Security must also be maintained. Because of changes in the IT infrastructure and in the organization itself, security risks are bound to change over time. Maintaining security concerns both the security section of the service level agreements and the more detailed security plans.

Maintenance is based on the results of the evaluation sub-process and on insight into the changing risks. These activities only produce proposals, which either serve as inputs for the plan sub-process and go through the whole cycle, or which can be taken up in the maintenance of the service level agreements. In both cases, the proposals can lead to activities in the action plan; the actual changes are carried out by the change management process.

The maintenance sub-process starts with maintenance of the service level agreements and of the operational level agreements; these two activities take place in no particular order. If a change is required, the request for change activity takes place, and once it is concluded, the reporting activity starts. If no change is required, the reporting activity starts directly after the first two activities.
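The ordering described above can be expressed as a small sketch (the function name and step labels are invented for illustration):

```python
# Sketch of the maintenance sub-process ordering: SLA and OLA maintenance
# happen first (in no particular order), the request-for-change step runs
# only when a change is needed, and reporting always runs last.

def maintenance_subprocess(change_needed: bool) -> list:
    steps = ["maintain SLAs", "maintain OLAs"]  # order between these two is arbitrary
    if change_needed:
        steps.append("request for change")
    steps.append("reporting")
    return steps

print(maintenance_subprocess(True))
print(maintenance_subprocess(False))
```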

About the Author
From Information Security Management Handbook, Sixth Edition, Volume 2, edited by Harold F. Tipton and Micki Krause. New York: Auerbach Publications, 2008.


Process Owner, Process Manager or Process Engineer?

While they might appear much the same at first glance, these roles are actually very different

People who are just getting started with ITIL (or, more broadly, ITSM) often stumble over the differences between a Process Owner and a Process Manager and, to a lesser extent, a Process Engineer.

These are different roles, with different skill sets and expectations, but there is some overlap. Often, especially in smaller organizations, these roles are all served by a single person. Even then, it is important to know the different objectives of each role so we can be in the right frame of mind when working to promote, create, edit, or report on a process.

Process Owner

In general, the Process Owner is the ultimate authority on what the process should help the company accomplish. The Process Owner ensures the process supports company policies; represents and promotes the process to the business, IT leadership, and other process owners; continuously verifies that the process is still fit for purpose and use; and, finally, manages any exceptions that may occur.

Overall Accountability and Responsibility:

  • Overall design
  • Ensuring the process delivers business value
  • Ensuring compliance with any and all related Policies
  • Process role definitions
  • Identification of Critical Success Factors and Key Performance Indicators
  • Process advocacy and ensuring proper training is conducted
  • Process integration with other processes
  • Continual Process Improvement efforts
  • Managing process exceptions

As you can see, the Process Owner is really the process champion. Typically the person filling this role is at a higher level of leadership, to help ensure the process gets the protection and attention it deserves.

The Process Owner will be the main driving force behind the process’s creation, any value the process produces (including its acceptance and compliance within the organization), and any improvements. It is therefore crucial that the Process Owner really understands the organization, its goals, and its culture. This is not about reading a book and trying to implement a book version of a process, but about understanding how to create a process that will deliver the most value for this particular organization.

General Skills and Knowledge needed:

  • Company and IT Department goals and objectives
  • IT Department organizational structure and culture
  • Ability to create a collaborative environment and deliver a consensus agreement with key IT personnel
  • Authority to manage exceptions as required
  • ITIL Foundation is recommended
  • ITIL Service Design and Continual Service Improvement could be helpful

Level of Authority in the Organization

  • Director
  • Senior Manager

Process Manager

The Process Manager is more operational than the Process Owner. You may have multiple Process Managers but you will only ever have a single Process Owner.

You can have a Process Manager for different regions or different groups within your IT department. Think of IT Service Continuity with an ITSC Process Manager for each of your data centers, or Change Management with a different Change Process Manager for applications versus infrastructure. The Process Owner will define the roles as appropriate for the organizational structure and culture (see above). The Process Manager is there to manage the day-to-day execution of the process. The Process Manager should also serve as the first line for any process escalation; he or she should be very familiar with the ins and outs of the process and will be able to determine the appropriate path, or whether to involve the ultimate authority – the Process Owner.

Overall Accountability and Responsibility:

  • Ensuring the process is executed appropriately at each described step
  • Ensuring the appropriate inputs/outputs are being produced
  • Guiding process practitioners (those moving through the process) appropriately
  • Producing and monitoring process KPI reports

The Process Manager is key to the day-to-day operation of the process. Without a good and helpful Process Manager, it won’t matter how well a process was designed and promoted by the Process Owner; the process will flounder in the rough seas of day-to-day IT execution.
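As an illustration of the KPI-reporting responsibility, here is a hedged sketch of a Process Manager computing a single change management KPI. The KPI (“percentage of changes completed on time”), the record layout, and the data are all invented for this example.

```python
# Hypothetical change records; "on_time" marks whether the change
# completed on schedule.
changes = [
    {"id": "CHG-1", "on_time": True},
    {"id": "CHG-2", "on_time": False},
    {"id": "CHG-3", "on_time": True},
    {"id": "CHG-4", "on_time": True},
]

def on_time_kpi(records) -> float:
    """Return the percentage of changes completed on schedule."""
    return 100.0 * sum(r["on_time"] for r in records) / len(records)

print(f"On-time change rate: {on_time_kpi(changes):.1f}%")  # 75.0%
```

In practice the records would come from the organization’s ticketing tool rather than a hand-written list, but the monitoring loop is the same: pull the data, compute the agreed KPIs, and compare them against the targets the Process Owner defined.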

General Skills and Knowledge needed:

  • In-depth knowledge of the process workflow and process CSFs/KPIs
  • Ability and authority to accept/reject all inputs/outputs related to the process
  • Ability to successfully explain and guide people through the process and handle any low-level process issues
  • ITIL Foundation is recommended
  • ITIL Intermediate in an area that covers their particular process could be helpful

Level of Authority in the Organization

  • Mid Level Manager
  • First Line Manager
  • Supervisor

Process Engineer

The Process Engineer is likely to have substantial business analysis and technical writing skills and knowledge. This person needs to be able to take the Process Owner’s vision and intent for the process and create a process document that is functionally usable by Process Managers and Process Practitioners. Another useful role of the Process Engineer is to help ensure that each process in the enterprise is written in a common manner, for consistency in approach and method.

Overall Accountability and Responsibility:

  • Understanding the Process Owner’s vision and intent
  • Documenting the process in a usable and readable manner
    • Organized
    • Simple
    • Unambiguous
    • Ensuring flow charts match text
    • Ensuring processes are documented in a common manner across the enterprise

General Skills and Knowledge needed:

  • Ability to capture process requirements and translate them into a process document
  • Ability to write well
  • Ability to create effective work flow diagrams
  • ITIL Foundation could be helpful

Level of Authority in the Organization

  • Individual Contributor

As you can see, a Process Engineer can be quite helpful in ensuring that the vision of the Process Owner is translated into a functional process document.


It is possible for a single person to fill all three roles effectively, but more likely the person will be more effective at one of these roles and less so at the others. If your organization cannot fill the three roles separately with people possessing the appropriate skills, it is still advisable to use a separate Process Engineer across the enterprise. A Process Engineer can work on several processes at once and will always be helpful in any process improvement effort. A Process Owner can also function as a Process Manager without much issue, given an appropriate scope and demand.

Free tools for ITSM – supporting IT Service Management for zero tool cost

Any application or computer program that enables you to run one or more IT Service Management processes is considered to be an ITSM tool. As with any application or program, there are a great number of both commercial and free tools for ITSM. In a small IT organization, parts of IT Service Management can be done by using office tools, such as spreadsheets, databases and word processing applications. However, managing larger amounts of data over time, with flexibility and consistency, requires specialized tools for the task at hand, regardless of organization size. Here is a list of the most common open source (free) ITSM tools:

Free ITSM software

Help Desk and Ticketing

  • RT: Request tracker – RT is an “issue tracking system which thousands of organizations use for bug tracking, help desk ticketing, customer service, workflow processes, change management, network operations, and even more…”
  • SpiceWorks – Spiceworks’ free app will allow you to easily manage your daily projects and user requests – all from one spot. And if you’re a help desk pro, you’ll still be amazed at how painless Spiceworks is to get up and running.
  • Triage – The web-based application will provide interfaces for handling tickets with notes and solutions, full-text search indexing, and allowing for plug-ins which can generate tickets from external sources (i.e. Asterisk, OpenNMS, Nagios, IMAP, POP3, etc.).
  • FreeHelpDesk – FreeHelpDesk is a feature-rich help desk system designed from the ground up to meet the demands of help desk staff and their users. It is a web-based system that can accept new calls from your users directly into the system. Calls can be tracked and searched to enable faster response times.
  • OSTicket – Easily manage, organize, and streamline your customer service and drastically improve your customer’s experience – all with one simple, easy-to-use (and free) system.
  • OTRS Help Desk – OTRS Help Desk software provides the tools needed to deliver superior service to your customers. Build stronger, longer-lasting relationships and gain a solid competitive edge with the proven functionality of OTRS.

If you need more information about Help Desk, Service Desk and Call Center distinction, follow this great blog post: Service Desk: Single point of contact.

Inventory and Configuration Management Database (CMDB)

  • i-doIT – Open Source IT Documentation and CMDB.
  • OCS Inventory NG – Open Computers and Software Inventory Next Generation is a technical management solution of IT assets. It uses small client software that has to be installed on every machine, and a server that aggregates information about those machines. It can be used for software deployment as well.

Learn more on ITIL V3 Change Management – at the heart of Service Management.

Service Monitoring

  • Nagios – Achieve instant awareness of IT infrastructure problems, so downtime doesn’t adversely affect your business. Nagios offers complete monitoring and alerting for servers, switches, applications, and services.
  • Icinga – is an enterprise-grade open source monitoring system which keeps watch over networks and any conceivable network resource, notifies the user of errors and recoveries and generates performance data for reporting. Scalable and extensible, Icinga can monitor large, complex environments across dispersed locations. Icinga is a branch of Nagios and is backward compatible.
  • Zabbix – is the open source availability and performance monitoring solution. Zabbix offers advanced monitoring, alerting, and visualization features today which are missing in other monitoring systems – even some of the best commercial ones.
  • GroundWork – monitors your entire datacenter and collects all its information in one place, helping to make better sense of your IT environment performance and availability data.

Service Management

  • OTRS:ITSM – is a scalable, high-performance, enterprise-grade IT Service Management (ITSM) software that couples the best practices of the IT Infrastructure Library (ITIL v3). The OTRS IT Service Management software is a powerful set of tools for managing complex IT administration processes, reducing business risk and ensuring high service quality.
  • iTop – written in a simple, popular programming language (PHP) that can be customized in an instant, iTop was developed to let you choose the modules you are interested in. If you just want a CMDB, you just get a CMDB. If you need to deal with all ITIL processes, you can get all ITIL modules covered by iTop. Adding a module is a question of minutes.
  • Project Open (]Project Open[) – is a modular open source project and service management tool with a focus on finance and knowledge management. “]po[ ITSM” is a special configuration of ]po[ designed to address the specific needs of IT departments and IT service providers, according to ITIL V3 best practices.

Learn more on IT Service Management in general.

Note: Product descriptions have been given by their respective developers, and are to be used for informational purposes only. As they are all free to download and use, take your time to try them before implementing.

Free does not always equal zero cost

There are many free ITSM tools available for you to download, install, and use, but you don’t get any support or help implementing the tool itself or its processes. Open source ITSM tools generally have nice communities built around the tools, so there might be some help available if you get stuck, but don’t expect instant answers or solutions.

Companies that offer free ITSM tools generate their revenue by offering a) hosting and cloud services for the tool, b) consulting and help with implementation, c) support once the tool has been implemented, and d) sometimes additional features that must be purchased separately. It’s important to remember that all of these things will be up to you: finding a resource to run the software (a server), having the know-how to install and configure it, using it, teaching others to use it, and supporting the software itself if needed.

Where to start

If no ITSM tools are implemented in your organization, the best way to start would be with tools for processes that revolve around IT operations and are most visible to end users. These include Incident and Service Management (Help Desk / Service Desk), Configuration Management, Change Management, and some sort of Service Monitoring tool.

Make a list of products that may interest you, along with some criteria to help you decide:

  • Installation requirements (OS, resources, web based, etc.)
  • Modules available (incident management, configuration management, change management, etc.)
  • Whether the modules are aligned with best practices such as ITIL (read more on: How to implement ITIL and information about other ITSM Standards and Frameworks)
  • Whether support is available (community based or commercial)
  • Additional features, such as a self-service portal and/or e-mail integration
  • How confident you feel about being able to implement it
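One lightweight way to act on such a list is a weighted scoring matrix. The sketch below uses placeholder tool names, criteria, and weights; they are illustrations, not recommendations.

```python
# Hypothetical evaluation criteria with weights that sum to 1.0.
criteria = {"modules": 0.4, "support": 0.3, "ease_of_install": 0.3}

# Placeholder tools scored 1-5 against each criterion.
tools = {
    "Tool A": {"modules": 4, "support": 3, "ease_of_install": 5},
    "Tool B": {"modules": 5, "support": 4, "ease_of_install": 2},
}

def score(tool_scores: dict) -> float:
    """Weighted sum of a tool's criterion scores."""
    return sum(criteria[c] * tool_scores[c] for c in criteria)

ranked = sorted(tools, key=lambda t: score(tools[t]), reverse=True)
print(ranked)  # best candidate first
```

Adjusting the weights to reflect your own priorities (for example, weighting support heavily if you have no in-house administrators) is the whole point of the exercise.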

Author: Neven Zitek


Data Center Migrations & Consolidations

Business Challenge:

A major academic health system in Philadelphia required migration services consisting of relocating nearly 600 servers and related technology equipment from the primary data center to the new “designated data center”. ComSource needed to develop a data center migration plan that allowed the move to occur in phases, enabling the IT staff to concentrate on one critical factor at a time and minimizing the danger of excessive downtime at any point during the move.

The Solution:

ComSource, in conjunction with our migration partner, worked closely with the client’s project management office, attending pre-move meetings and planning sessions to develop a “playbook” based on a selected “move event” approach and timeline. Key to the plan’s success was our move methodology, which was based on application priorities and dependencies. Once these “logical” dependencies were determined, a hardware (physical) dependency check was performed. This helped group the servers and identify which ones needed an asset-swap, parallel, forklift, or similar approach. The data migration took place during 13 move events, utilizing one truck per move. ComSource also provided relocation of the customer’s IT equipment, including packing and crating, loading, transporting, unloading, and uncrating. As part of this data center migration it was imperative that the customer’s manufacturer hardware agreements and warranty coverage remained valid throughout the relocation. ComSource provided consulting services that included suggesting best practices for relocating, inventorying, communicating, change management, and other measures associated with the data center migration.


ComSource successfully completed the move of nearly 600 servers and supporting technology infrastructure over a 13-weekend period, on schedule, working within the customer’s timeline and budget and with minimal downtime to ongoing business operations.

Business Continuity, Recovery Services & Co-Location

Business Challenge:

A rapidly growing U.S.-based retail corporation maintained a production data center in New York State. While they had always cut daily incremental backup tapes and weekly “fulls” and sent them offsite to a secure location, they never had a contracted warm or hot site facility from which to recover their critical applications in the event of a disaster at the main production facility. Several years earlier, the lack of a contract with a hot site recovery facility never seemed a major issue for a small retail company, just a potential minor inconvenience. As the company doubled and tripled in size, a more effective and comprehensive business recovery plan became critical.

The Solution:

The ComSource sales and support team went to work immediately. First and foremost, ComSource and their business recovery experts worked with the company to examine all workloads and applications, with the intent of prioritizing which applications absolutely needed to be up and running in hours versus days in the event of a “disaster”. The company’s key applications were hosted on IBM’s Power family with OS/400 applications as well as on several Dell x86 servers. Once ComSource and their recovery expert team completed a full audit of all hardware platforms, all critical and non-critical applications, and all current backup and recovery infrastructure, they jointly selected one of ComSource’s elite recovery site locations in northern Georgia as the hot site facility. In this case, the long-time ComSource affiliate, a true “Best in Class” disaster recovery services organization, provided the customer with the best overall top-to-bottom recovery option: a secure facility, redundant components, extensive equipment inventory, and staff expertise across all of the end-user platforms.


Dedicated platforms were selected and deployed, and processes were implemented to ensure that, in the event of a disaster at the production facility, this rapidly growing retail organization could recover its mission-critical applications quickly and efficiently. This long-time valued ComSource customer has continued to utilize this premier disaster recovery organization and has performed many complete recovery tests over several years. The end user’s executive team can now sleep at night knowing that, in the event of almost any disaster, the company can bring up and run all selected applications in a timely fashion, with a highly skilled support team working closely with them throughout the recovery process.

Information Technology in the Healthcare Sector

Business Challenge:

A 528-bed tertiary care facility in western New York needed to successfully implement an EMR solution. ComSource, along with our partner affiliate, competed with top IT healthcare solution providers and consultants to win this major project that required significant pre-implementation planning, management and support to help deploy the mission critical EPIC software.

The Solution:

Due to timeline sensitivity and federal mandates, ComSource and our partner affiliate were tasked to successfully implement the EPIC software by providing the key services listed below:

  • Planning and implementation pre-planning
  • Systems analysis
  • Change management
  • Screen/report design
  • Tailoring/configuration
  • Integration testing
  • Training
  • Activation planning
  • Post implementation review


ComSource was able to assist this tertiary care facility in achieving their targeted deadlines and obtaining full funding of the project. The facility realized significant cost savings by choosing our ComSource partner affiliate over other alternative IT healthcare systems integrators. This successful EPIC implementation helped the client attain meaningful use objectives in a cost effective manner. In addition, the doctors and hospitals were able to report required quality measures that demonstrate outcomes, such as:

  • Improved process efficiencies
  • Maximized use of human resources
  • Improved “return on investment” on the technology purchase
  • Employee satisfaction
  • Physician satisfaction
  • Improved clinical quality outcomes
  • Increased case flow
  • Improved profitability
  • Improved patient care and safety

Mobile Technology and Logistical Solutions

Business Challenge:

A leading freight and logistics provider needed to reduce their use of paper through the full delivery cycle, improve their customers’ online view time for payment on deliveries, and increase efficiency among drivers, IT support staff, and employees completing back-office procedures. The partially paper-based system being used by this company was creating inefficiencies such as data loss, lack of quality control, and wasted driver time.

The Solution:

ComSource and our partner affiliate coordinated with all levels of the corporate structure to create a new solution. This interactive process allowed the employees to see how the new processes directly affected their jobs and incorporated their requested features in the new system. A complete mobility solution was implemented to allow the company to automate their entire delivery and collection process in real time. Key elements of this solution include:

  • Drivers were able to scan items both within and outside the depot.
  • Consignments were manifested electronically.
  • “Sign-on glass” allowed the company to collect proof of delivery as well as accept and complete pickups in the field.
  • Information was instantly transferred to back office systems which increased functionality for staff in regard to schedules,
    deliveries, collections and depot operations.
  • Handheld remote mobile hardware and software assets allow support staff to access the device to assist the courier
    as needed. If a device is stolen it can be wiped of any sensitive customer information or corporate data remotely.


This provider benefited from the new mobility solution in the following ways:

  • Significant cost savings through ongoing maintenance, processing infrastructure, “rate of return” and equipment repairs.
  • Improved speed and efficiencies receiving deliveries, creating invoices and meeting the increasing demands of customers.

Information Technology Assessments

Business Challenge:

A nationally recognized retail corporation selected ComSource as a “checks and balances” partner to evaluate the performance of their current network and propose an architectural strategy that was both redundant and secure while requiring less maintenance. This fast-growing retail company needed to ensure that their environment could support their current rate of growth.

The Solution:

ComSource assessed the current network design with an onsite CCIE engineer and an array of tools. The network design areas assessed were:

  • IP Addressing Strategy
  • VLAN Strategy
  • Access Layer Switching Strategy
  • Distribution Layer Switching Strategy
  • Core Layer Switching Strategy
  • Wide Area Network Strategy
  • Internet Access Strategy

The infrastructure areas assessed were:

  • Cabling Infrastructure Strategy
  • System Security Strategy
  • Production Network Management Strategy

These assessments led to recommendations from our CCIE engineer, including:

  • Compressing large image files instead of just adding bandwidth.
  • Using MPLS for larger sites.
  • Utilizing QoS with VPNs.
  • Establishing manual router IDs in OSPF using loopbacks, for stability.
  • Increasing the MTU size on remote sites to cut down on TCP fragmentation.
  • Filtering with a dedicated firewall.
  • Eliminating single points of failure and simplifying cabling by collapsing all switches within the data center,
    excluding top-of-rack switches, into 2 core switches.
  • Deploying a network management solution to take configuration backups of all devices at regular intervals
    and push out mass configuration changes.


At the end of the assessment the customer had a clear road map as to how their network should continue to grow effectively in concert with their rapidly growing business enterprise. Strategic implementation of the recommended solutions increased throughput, functionality and security in conjunction with the expanding company.

3rd Party Maintenance and Support, Non OEM

Business Challenge:

A Fortune 1500 privately held cosmetic company was tasked by senior management executives to reduce costs in their data center. Knowing that IT maintenance contracts are subject to frequent annual price increases, often associated with renewals, this company reached out to ComSource for strategies on maintenance cost reduction.

The Solution:

ComSource, along with our trusted and recognized 3rd party maintenance service provider, looked at two corporate data center locations for this cosmetic company that had expiring IBM and Dell maintenance contracts, and was able to help the company save over 40% on support in the first twelve months. Due to the cost savings from just one year of using 3rd party maintenance with ComSource, this company expanded their portfolio: they not only renewed the contracts for the same IBM and Dell equipment but also added additional IBM, Dell, and Brocade equipment to the existing contracts. The service levels provided to this cosmetic company were: a 3rd party maintenance coordinator to track expiration dates and adds/deletes, 7x24x365 hardware maintenance, local service depots, call-home, and an online portal for asset management and incident tracking. This online portal allows our customers to see contracts in place with our 3rd party maintenance provider across all platforms and gives them the ability to upload maintenance contracts held with other maintenance providers as well.


ComSource and our 3rd party maintenance provider allow our customers to show a cost savings across multiple platforms and all major manufacturers. Where a typical OEM increases maintenance costs, we are able to decrease (or maintain at a lower price point) the costs as the equipment ages. We work with our customers to keep the equipment on the floor instead of trying to “end of life” the equipment as so many OEMs tend to do. In this specific case, utilizing our 3rd party maintenance solution, this cosmetic company saved approximately 40% on their maintenance contract costs across their expanded IT portfolio.

Tier 3 data center specifications checklist

This section of our two-part series on tier 3 data center specifications deals with the power supply aspects.

A data center is among the most critical parts of a business, so an organization needs to ensure the highest possible availability for it. Building a data center according to tier 3 data center specifications ensures a certain assured level of availability or uptime.

A data center built according to tier 3 data center specifications should satisfy two key requirements: redundancy and concurrent maintainability. It requires at least N+1 redundancy as well as concurrent maintainability for all power and cooling components and distribution systems. A component’s unavailability due to failure (or maintenance) should not affect the infrastructure’s normal functioning.

These specifications have to be met only on the power, cooling, and building infrastructure fronts, up to the server rack level. Tier 3 data center specifications do not specify requirements at the IT architecture level. By following the steps below, your data center’s power supply infrastructure can meet the tier 3 data center specifications.

Stage 1: Power supply from utility service provider

The Uptime Institute regards electricity from utility service providers as an unreliable source of power. Therefore, tier 3 data center specifications require that the data center should have diesel generators as a backup for the utility power supply.

An automatic transfer switch (ATS) automatically switches over to the backup generator if the utility power supply goes down. While many organizations have just a single ATS connecting a backup generator and the utility power supply, the tier 3 data center specifications mandate two ATSs connected in parallel to ensure redundancy and concurrent maintainability. The specifications, however, don’t call for the two ATSs to be fed by different utility service providers.

Stage 2: Backup generators

Tier 3 data center specifications require the diesel generators to have a minimum of 12 hours of fuel supply in reserve. Redundancy can be achieved by having two tanks, each with 12 hours of fuel. In this case, concurrent maintainability can be ensured using two or more fuel pipes for the tanks. Fuel pipes can then be maintained without affecting the flow of fuel to the generators.
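One way to read this requirement is as a simple worst-case check: with the largest tank out of service, the remaining tanks must still hold at least 12 hours of fuel. The sketch below is an illustration of that reading, not an Uptime Institute formula.

```python
# Worst-case fuel reserve check: losing any one tank (for failure or
# maintenance) must still leave the required runtime. Illustrative only.

def meets_fuel_requirement(tank_hours, required_hours=12):
    if len(tank_hours) < 2:
        return False  # a single tank provides no redundancy
    # worst case: the largest tank is out of service
    return sum(tank_hours) - max(tank_hours) >= required_hours

print(meets_fuel_requirement([12, 12]))  # True: losing one tank leaves 12 hours
print(meets_fuel_requirement([6, 6]))    # False: losing one tank leaves only 6 hours
```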

Stage 3: Power distribution panel

The power distribution panel distributes power to the IT load (such as servers and network equipment) via the UPS. It also provides power for non-IT loads (air conditioning and other infrastructure systems).

Redundancy and concurrent maintainability can be achieved using separate power distribution panels for each ATS. This is because connecting two ATSs to one panel would necessitate bringing down both ATS units during panel maintenance or replacement. In addition, the tier 3 data center specifications require two or more power lines between each ATS and its power distribution panel to ensure redundancy and concurrent maintainability. Similarly, each power distribution panel and UPS should also have two or more lines between them for the same purpose.

Stage 4: UPS

Power from the distribution panel is used by the UPS units and supplied to the power distribution boxes for server racks as well as network infrastructure. For example, if 20 KVA of UPS capacity is required for a data center, redundancy can be achieved by deploying two 20 KVA UPS units or four 7 KVA UPS units. Redundancy can even be achieved with five 5 KVA UPS units.
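The sizing examples above all pass the same test: with any one UPS unit failed or under maintenance, the remaining units must still cover the required load. A small sketch of that check (the function name is invented for illustration):

```python
# N+1 check for identical UPS units: the capacity remaining after losing
# one unit must still meet the required IT load.

def is_redundant(unit_kva: float, units: int, load_kva: float) -> bool:
    return (units - 1) * unit_kva >= load_kva

# The configurations from the example above, for a 20 KVA load:
print(is_redundant(20, 2, 20))  # True  (two 20 KVA units)
print(is_redundant(7, 4, 20))   # True  (four 7 KVA units leave 21 KVA)
print(is_redundant(5, 5, 20))   # True  (five 5 KVA units leave exactly 20 KVA)
print(is_redundant(5, 4, 20))   # False (four 5 KVA units leave only 15 KVA)
```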

The tier 3 data center specifications require that each UPS be connected to just a single distribution box for redundancy and concurrent maintainability. This ensures that only a single power distribution circuit goes down in case of a UPS failure or maintenance.

Stage 5: Server racks

Each server rack must have two power distribution boxes in order to conform to tier 3 data center specifications. The servers in each rack should have dual power supplies so that they can connect to both power distribution boxes.

A static switch can be used for devices that lack dual power supplies. The switch takes input from both power distribution boxes and provides a single output. In the event of a failure, it can transfer from one power distribution box to the other within a few milliseconds.

About the author: Mahalingam Ramasamy is the managing director of 4T technology consulting, a company specializing in data center design, implementation and certification. He is an accredited tier designer (ATD) from The Uptime Institute, USA, and the first from India to earn this certification.

Redundancy: N+1, N+2 vs. 2N vs. 2N+1

A typical engineering definition of redundancy is: "the duplication of critical components or functions of a system with the intention of increasing reliability of the system, usually in the form of a backup or fail-safe." For data centers, redundancy focuses on how much extra or spare power the facility can offer its customers as backup during a power outage. Unexpected power outages are by far the most common cause of data center downtime.*


Photo Courtesy of the Ponemon Institute

According to the Ponemon Institute's 2013 Study on Data Center Outages ("downtime" being a four-letter word in the data center industry), which surveyed 584 individuals in U.S. organizations with some responsibility for data center operations, from the rank and file to the C-level, 85% of participants reported that their organizations experienced a loss of primary utility power in the past 24 months. Of that 85%, 91% reported that their organizations had an unplanned outage. In other words, most data centers experienced downtime in the last 24 months. During these outages, respondents averaged two complete data center shutdowns, with an average downtime of 91 minutes per failure.

The full study also discusses the implementation and impact of DCIM (Data Center Infrastructure Management), and how it was used to fix or correct the root causes of the outages.*

The most common outages are due to weather, but they can also result from simple equipment failure or even a power line accidentally cut by a backhoe. Whatever the cause, an unplanned outage can cost a company a great deal of money, especially if its revenues depend on internet sales.

For example, if you're Amazon and you go down, you lose a mind-blowing amount of money: an estimated $1,104 in sales for every second of downtime. The "average" U.S. data center loses $138,000 for one hour of data center downtime. At Ponemon's 91-minute average annual downtime, that works out to roughly $209,000 in losses per year for each organization depending on the data center.
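The arithmetic above can be reproduced directly; the dollar figures are the ones cited in the article.

```python
# Reproducing the downtime-cost arithmetic: average loss per hour of
# downtime multiplied by the average annual downtime in hours.

HOURLY_COST_USD = 138_000   # average loss per hour of data center downtime
AVG_DOWNTIME_MIN = 91       # Ponemon's average annual downtime, in minutes

annual_loss = HOURLY_COST_USD * (AVG_DOWNTIME_MIN / 60)
print(f"${annual_loss:,.0f}")  # -> $209,300
```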

What does this all mean? Downtime matters, and downtime prevention matters, so redundancy matters.

Preferably, large businesses and corporations have their servers set up in either Tier 3 or Tier 4 data centers, because these offer a sufficient amount of redundancy in case of an unforeseen power outage. With this in mind, not all data centers' redundant power systems are created equal: some offer N+1, 2N, or 2N+1 redundancy.

What’s the Difference Between N+1, 2N and 2N+1?

The simple way to look at N+1 is to think of it in terms of throwing a birthday party for your child or yourself, because who doesn't love cupcakes? Say you have ten guests and need ten cupcakes, but just in case that "unexpected" guest shows up, you order eleven. "N" represents the exact number of cupcakes you need, and the extra cupcake is the +1. Therefore you have N+1 cupcakes for the party. In the world of data centers, an N+1 system, also called parallel redundancy, is a safeguard to ensure that an uninterruptible power supply (UPS) system is always available. N stands for the number of UPS modules required to supply adequate power to the essential connected systems, plus one more: eleven cupcakes for ten people, and less chance of downtime.

Although an N+1 system contains redundant equipment, it is not a fully redundant system. It can still fail because it runs on common circuitry or feeds at one or more points, rather than on two completely separate feeds.

Back to the birthday party! If you plan a party with 2N redundancy in place, you would have the ten cupcakes you need for the ten guests, plus an additional ten: 20 cupcakes. 2N is simply two times, or double, the amount you need. At a data center, a 2N system contains double the amount of equipment needed, run separately with no single points of failure. 2N systems are far more reliable than N+1 systems because they offer full redundancy that can be maintained on a regular basis without losing power to downstream systems. In the event of an extended power outage, a 2N system will still keep things up and running. Some data centers offer 2N+1, which is double the amount needed plus one extra piece of equipment, so back at the party you'll have 21 cupcakes: 2 per guest and 3 for you!
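The cupcake math generalizes to any number of equal-size units. A small sketch of the unit counts under each scheme, where n units are enough to carry the base load ("N") on their own:

```python
# Total units installed under each redundancy scheme, where n identical
# units are required to carry the base load ("N") by themselves.

def units_required(n: int, scheme: str) -> int:
    """Unit count for a scheme: 'N+1', 'N+2', '2N', or '2N+1'."""
    return {"N+1": n + 1, "N+2": n + 2, "2N": 2 * n, "2N+1": 2 * n + 1}[scheme]

# The cupcake numbers: ten guests, one cupcake ("unit") each.
for scheme in ("N+1", "2N", "2N+1"):
    print(scheme, units_required(10, scheme))  # N+1 11, 2N 20, 2N+1 21
```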

For more information on redundancy (N+1, 2N, and 2N+1, and the differences between them), as well as the different tier levels offered by data centers around the world, visit or call us at (877) 406-2248.

*Sources: Ponemon Institute 2013 Study on Data Center Outages, sponsored by Emerson Network Power. This link will take you to the entire study, which is also an interesting read on how data center employees view their data center's structure and superiors.

82 commandments for living that everyone should read

The 82 commandments of Alejandro Jodorowsky, the Chilean filmmaker, writer, composer, and psychotherapist, read like passages from the Bible or from Buddhist scripture.

Alejandro Jodorowsky

These commandments for living are meant to help us "change our habits, conquer laziness, and become ethical…"

Here are Alejandro Jodorowsky's 82 commandments:

1. Give your attention to yourself. Always be aware of what you think, understand, feel, desire, and do.

2. Always finish what you have begun.

3. Whatever you do, do it as well as you possibly can.

4. Do not become attached to anything that can destroy you over time.

5. Quietly multiply your generosity.

6. Treat everyone as if they were family.

7. Put in order what you have damaged.

8. Learn to receive, and say thank you when you are given a gift.

9. Do not define yourself.

10. Do not lie or steal, for you would be lying to yourself and stealing from yourself.

11. Help your neighbors, but do not make them dependent on you.

12. Do not encourage others to imitate you.

13. Make plans for your work and carry them out.

14. Do not take up too much space.

15. Do not make useless sounds or movements.

16. If you lack faith, pretend you have it.

17. Do not allow yourself to be impressed by strong personalities.

18. Do not regard anyone or anything as your possession.

19. Share fairly.

20. Do not seduce.

21. Sleep and eat as little as possible.

22. Do not speak of your personal problems.

23. Do not judge or criticize when you do not understand most of the factors involved.

24. Do not form useless friendships.

25. Do not chase fashions.

26. Do not sell yourself.

27. Respect the contracts you have signed.

28. Be on time.

29. Never envy anyone's luck or success.

30. Say "no" more often.

31. Do not think about the profits your work will bring.

32. Never threaten anyone.

33. Keep your promises.

34. In any discussion, put yourself in the other person's place.

35. Admit that anyone may be better than you.

36. Do not eliminate; transform.

37. Conquer your fears.

38. Help others save themselves.

39. Master your aversions.

40. Do not react to what others say about you, whether praise or criticism.

41. Transform your pride into self-respect.

42. Transform your anger into creativity.

43. Transform your greed into respect for beauty.

44. Transform your envy into admiration for the values of others.

45. Transform your hatred into charity.

46. Neither praise nor insult yourself.

47. Treat what does not belong to you as if it did not belong to you.

48. Do not complain.

49. Develop your imagination.

50. Never give orders just for the satisfaction of being obeyed.

51. Pay for the services that serve you.

52. Do not proselytize your work or your ideas.

53. Do not try to make others feel emotions toward you such as pity, admiration, sympathy, or complicity.

54. Do not try to distinguish yourself by your appearance.

55. Never contradict; instead, be silent.

56. Do not get into debt. Earn money and pay immediately.

57. If you offend someone, ask for forgiveness. If you offend someone publicly, apologize publicly.

58. When you realize you have said something wrong, do not cling to it out of pride; retract it immediately.

59. Never defend your old ideas simply because you were the one who voiced them.

60. Do not keep useless objects.

61. Do not adorn yourself with exotic ideas.

62. Do not have your photo taken with famous people.

63. Do not make excuses for yourself; call your lawyer.

64. Never define yourself by what you possess.

65. Never speak of yourself without allowing that you may change.

66. Accept that nothing belongs to you.

67. When someone asks your opinion about something or someone, speak only of their qualities.

68. When you fall ill, regard your illness as your teacher, not as something to be hated.

69. Look directly, and do not hide yourself.

70. Do not forget your death, but do not let it take over your life.

71. Wherever you live, always find a place where you can express your devotion.

72. When you perform a service, keep your efforts inconspicuous.

73. If you decide to work to help others, do it with joy.

74. If you are hesitating between doing something and not doing it, take the risk and do it.

75. Do not try to be everything to your partner; accept that there are things you cannot give him or her that others can.

76. When someone is speaking to an attentive audience, do not contradict them and steal their audience.

77. Live on the money you earn.

78. Never brag about your love affairs.

79. Never dress up your weaknesses.

80. Never visit someone just to kill time.

81. Obtain things in order to share them.

82. If you are meditating and a demon appears, make the demon meditate too.

Source: Nguyễn Thảo (via Open Culture)

Survey: UPS Issues Are Top Cause of Outages

This chart shows the perception gap between the executive suite and data center staff on key issues (click for larger version).

Problems with UPS equipment and configuration are the most frequently cited cause of data center outages, according to a survey of more than 450 data center professionals. The survey by the Ponemon Institute, which was sponsored by Emerson Network Power, also highlights a disconnect between data center staff and the executive suite about uptime readiness.

The National Survey on Data Center Outages polled 453 individuals in U.S. organizations who have responsibility for data center operations; they were asked about the frequency and root causes of unplanned data center outages, as well as corporate efforts to avert downtime. Ninety-five percent of participants reported an unplanned data center outage in the past two years, with most citing inadequate practices and investments as factors in the downtime.

Here are the most frequently cited causes for downtime:

  • UPS battery failure (65 percent)
  • Exceeding UPS capacity (53 percent)
  • Accidental emergency power off (EPO)/human error (51 percent)
  • UPS equipment failure (49 percent)

There were signs that the ongoing focus on cost containment was being felt in the data center. Fifty-nine percent of respondents agreed with the statement that “the risk of an unplanned outage increased as a result of cost constraints inside our data center.”

“As computing demands and energy costs continue to rise amidst shrinking IT budgets, companies are seeking tactics – like cutting energy consumption – to cut costs inside the data center,” said Peter Panfil, vice president and general manager, Emerson Network Power’s AC Power business in North America. “This has led to an increased risk of unplanned downtime, with companies not fully realizing the impact these outages have on their operations.”

Perception Gap
The focus on UPS issues isn’t unexpected, given the role of uninterruptible power supplies in data center power infrastructure. It’s also consistent with Emerson’s position as a leading vendor of UPS equipment. But the survey by Ponemon, which is known for its surveys on security and privacy, also points to a perception gap between senior-level and rank-and-file respondents regarding data center outages.

Sixty percent of senior-level respondents feel senior management fully supports efforts to prevent and manage unplanned outages, compared to just 40 percent of supervisor-level employees and below. Senior-level and rank-and-file respondents also disagreed about how frequently their facilities experience downtime: 56 percent of senior executives believe unplanned outages are infrequent, while just 45 percent of rank-and-file respondents agreed with the same statement.

“When you consider that downtime can potentially cost data centers thousands of dollars per minute, our survey shows a serious disconnect between senior-level employees and those in the data center trenches,” said Larry Ponemon, Ph.D., chairman and founder of the Ponemon Institute. “This sets up a challenge for data center management to justify to senior leadership the need to implement data center systems and best practices that increase availability and ensure the functioning of mission-critical applications. It’s imperative that these two groups be on the same page in terms of the severity of the problem and potential solutions.”



Data Center Generators

Generators are key to data center reliability. Supplementing a battery-based uninterruptible power supply (UPS) with an emergency generator should be considered by all data center operators. The question has become increasingly important as superstorms such as Hurricane Sandy in the northeastern United States have knocked out utility power stations and downed many power lines, resulting in days or weeks of utility power loss.


Beyond disaster protection, a backup generator's role in providing power is important when utility providers plan summer rolling blackouts and brownouts and data center operators see reduced utility service reliability. In a rolling blackout, power to industrial facilities is often shut down first. New data center managers should check the utility contract to see whether the data center is subject to such disconnects.

Studies show generators played a role in between 45 and 65 percent of outages in data centers with an N+1 configuration (one spare backup generator). According to Steve Fairfax, president of MTechnology, “Generators are the most critical systems in the data center.” Mr. Fairfax was the keynote speaker at the 2011 7×24 Exchange Fall Conference in Phoenix, Arizona.

What Should You Consider Before Generator Deployment?

  • Generator Classification / Type. A data center design engineer and the client should determine whether the generator will be classified as an Optional Standby power source, a Code Required Standby power source, or an Emergency backup generator that also provides standby power to the data center.

  • Generator Size. When sizing a generator, it is critical to consider the total current IT power load as well as the expected growth of that load. Supporting infrastructure (i.e., UPS load) requirements must also be considered. The generator should be sized by an engineer using specialized sizing software.
  • Fuel Type. The most common generator types are diesel and gas. There are pros and cons to both: diesel fuel deliveries can become an issue during a natural disaster, and gas line feeds can likewise be impacted. Making the right choice for your data center generator depends on several factors. The fuel type should be determined based on local environmental issues (e.g., Long Island primarily uses natural gas to protect the water aquifer under the island), availability, and the required size of the standby/emergency generator.
  • Deployment Location. Where will the generator be installed? Is it an interior or an exterior installation? An exterior installation requires the addition of an enclosure, which may be a simple weatherproof type, or local building codes may require a sound-attenuated enclosure. An interior installation will usually require some form of vibration isolation and sound attenuation between the generator and the building structure.
  • Exhaust and Emissions Requirements. Today, most generator installations must meet the new Tier 4 exhaust emissions standards. This may depend on the location of the installation (i.e., city, suburban, or rural).

  • Required Run-time. The run-time for the generator system needs to be determined so the fuel source can be sized (i.e. the volume of diesel or the natural gas delivery capacity to satisfy run time requirements).
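The sizing and run-time considerations above can be sketched as a first-pass estimate. The growth, infrastructure, loading, and burn-rate figures below are illustrative assumptions; as the text notes, actual sizing should be done by an engineer with specialized sizing software and vendor data.

```python
# First-pass generator sizing and fuel-volume estimate. All percentage
# factors and the burn rate are illustrative assumptions, not vendor data.

def estimate_generator_kw(it_load_kw: float,
                          growth_factor: float = 0.30,  # expected IT load growth
                          infra_factor: float = 0.50,   # UPS losses, cooling, etc.
                          max_loading: float = 0.80) -> float:
    """Rated kW needed so the generator runs at no more than max_loading."""
    future_it = it_load_kw * (1 + growth_factor)
    total_load = future_it * (1 + infra_factor)
    return total_load / max_loading

def diesel_volume_liters(runtime_hours: float,
                         burn_rate_lph: float,
                         safety_margin: float = 0.10) -> float:
    """Tank volume for the required run-time plus a safety margin."""
    return runtime_hours * burn_rate_lph * (1 + safety_margin)

# Example: 200 kW of IT load today; 12 hours of run-time at ~160 L/h.
print(round(estimate_generator_kw(200)))     # rated kW, before rounding to a standard size
print(round(diesel_volume_liters(12, 160)))  # liters of diesel
```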


What Should You Consider During Generator Deployment?

  • Commissioning. The commissioning of the generator system is essentially the load testing of the installation, plus the documentation trail for equipment selection, the shop drawing approval process, shipping documentation, and the receiving and rigging of the equipment into place. This process should also include the construction documents for the installation project.
  • Load Testing. Typically, a generator system is required to run at full load for at least four (4) hours. It must also demonstrate that it can handle step load changes from 25% of its rated kilowatt capacity to 100%. The best way to load test is with a non-linear load bank whose power factor matches the generator specification; typically, a non-linear load bank with a power factor between 75% and 85% is used.
  • Servicing. The generator(s) should be serviced after the load test and commissioning are completed, prior to release for use.
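The step-load requirement above can be sketched as a simple test schedule. The 25% increments are an assumption for illustration; the actual acceptance steps come from the project specification.

```python
# Step-load schedule for a load-bank acceptance test: the generator is
# stepped from 25% to 100% of its rated kW. Step fractions are an
# assumed example, not a requirement from any particular specification.

def step_load_sequence(rated_kw: float,
                       steps=(0.25, 0.50, 0.75, 1.00)) -> list[float]:
    """kW applied at each step of the load-bank test."""
    return [rated_kw * s for s in steps]

print(step_load_sequence(800))  # [200.0, 400.0, 600.0, 800.0]
```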


What Should You Consider After Generator Deployment?

  • Service Agreement. The generator owner should have a service agreement with the local generator manufacturer’s representative.
  • Preventative Maintenance. Preventative maintenance should be performed at least twice a year. Most generator owners who consider their generator installation critical to their business execute a quarterly maintenance program.
  • Monitoring. A building monitoring system should be employed to provide immediate alerts if the generator and ATS systems suffer a failure, or become active because the normal power source has failed. The normal power source is typically from the electric utility company, but it could be an internal feeder breaker inside the facility that has opened and caused an ATS to start the generator(s) in an effort to provide standby power.
  • Regular Testing. The generator should be tested weekly for proper starting, and it should be load tested monthly or quarterly to determine that it will carry the critical load plus the required standby load and any emergency loads that it is intended to support.
  • Maintenance. The generator manufacturer or a third-party maintenance organization will notify the generator owner when important maintenance milestones are reached, such as minor rebuilds and major overhauls. Run hours generally determine when these milestones are reached, but other factors related to the operational characteristics of the generator(s) also help determine what needs to be done and when.

PTS Data Center Solutions provides generator sets for power ratings from 150 kW to 2 MW. We can develop the necessary calculations to properly size your requirement and help you with generator selection, procurement, site preparation, rigging, commissioning, and regular maintenance of your generator.
