Why Become a Project Manager

Why has project management become a sought-after career path? Project management has evolved significantly over time, and as the nature of industry has changed, the role of the project manager has become more important.

Nowadays, advances in technology, the changing emphasis on environmental impacts and the evolution of industry have made projects a fundamental aspect of the workplace. Whether it’s a small project to identify a new way of working, a process amendment within a team or a multi-billion dollar infrastructure programme, project management plays a key role.

Good project managers are sought after across industries. Pay rates are often incredibly high, especially when project managers can also bring industry experience. Furthermore, demand for project managers is global, and demand for their transferable skills is high. Although project workers are often on temporary contracts, the nature of the work means they should have little problem finding a new role when a project comes to an end.
What is the added value of having a Project Manager?

As industries and trading environments change there are risks to organisations. Quite often the project acts as a tool helping an organisation to evolve through changes and discover a new way of doing things. The activities that manage change processes, the launch of new products and services or the integration of new processes are themselves full of risk. The project manager adds value by ensuring that these risks are kept to a minimum.

By allowing the project manager to plan effectively and identify risks up front, the organisation is able to find the steps it needs to take through the project. Highlighting goals and setting a critical path will allow the organisation to clearly see how it will get to where it wants to be. The project manager is there to ensure that such risks do not materialise and that changes are integrated effectively.


10 reasons to become a Project Manager

1. Demand
Project management is a skill in demand. As businesses shape themselves for the future, more of their operations are being driven through projects. Something that is fashionable today may not be relevant next week, and projects are how organisations keep pace.

2. Money
For good project managers there are ample opportunities available in the market and plenty of money on offer. A top quality project manager in the UK with industry specific experience alongside project management skills can potentially demand a salary of £75k and above.

3. Prospects
Not only do project managers earn great salaries, but there is scope for growth. No matter how big a project you are working on, there are always bigger projects coming up with more responsibility. Career-wise, the sky is the limit for top-class project managers.

4. People
As a project manager, one of the main aspects of your role will be to look after and manage various stakeholders. Whether it’s other project staff, company directors, community stakeholders or suppliers, you’ll be working around people all the time.

5. Travel
Global travel is definitely a bonus for project managers. As well as the UK, Europe and the States, there is huge demand for project managers in Brazil, Russia, India and China (BRIC), as well as demand down under and across the developing world.

6. Sectors
For many project managers, sector specialisms are not important. For others, particularly in sectors such as IT, technology and construction, industry knowledge is incredibly useful and can help you find a niche for yourself.

7. Leadership and Management
If you are the sort of person who naturally takes on leadership and management positions within a group, project management will help you to further these skills, as you will be responsible for driving the performance of the other members of the project team.

8. Results Driven
If you enjoy seeing the tangible outputs of your work, then project management is for you. The project, with its strict time, cost and outcome parameters, will very clearly allow you to see the fruits of your labour.

9. Responsibility
Project management is not for the faint-hearted. From the start you will be a figurehead of the project and handed responsibility. If you thrive under pressure, then project management is for you.

10. Change
No two days are the same in project management. Each day you’ll face different challenges. If you are someone who enjoys change, this is the role for you.

10 reasons why businesses should be thankful for a Project Manager

1. Results Orientated
The most important aspect of the project manager is the focus that they bring to the role. They will make goals tangible and add processes that ensure projects are delivered on time and to budget.

2. Understand Deadlines
If you need something done by a certain date, ask a project manager. They are specialists in timelines and understand which tasks matter most to ensure the job gets done.

3. Financially Astute
As budgets play such a fundamental role in project management you can trust your project manager to keep a keen eye on the numbers. No more overspend when the project manager is in charge.

4. People
One of the fundamental skills of a project manager is to engage people to work towards the shared goal of the project. As managers they can do the job of motivating staff for you.

5. Great Communicators
Not only are project managers great with staff, they’re also great with stakeholders. When a project is particularly controversial project managers will manage relationships with community stakeholders on your behalf.

6. Risk Managers
A great project manager will go through the project with a fine-tooth comb and identify any risks to ensure the project runs smoothly.

7. Team Players
When they have to, the project manager will knuckle down and help the team. It’s this kind of attitude that helps to earn respect.

8. Hard Workers
Project managers know that sometimes, especially as a project draws to a close, they may have to put in those important extra hours. Project managers are often the most hardworking people you’ll meet.

9. Flexible
The nature of the project manager’s role means that flexibility is the norm. Whether it’s travel, start times or other challenges, you can rely on the project manager to be flexible.

10. Change Masters
Finally, as change is such an important aspect of projects, project managers thoroughly understand the importance of change. This knowledge could be vital in a business.

Project Management is a dynamic and exciting career. If you are looking for a different challenge every day then Project Management could well be the career for you.

Source url: https://courses.telegraph.co.uk/article-details/2/why-become-a-project-manager/

Ground rules to set you up for success

Project management today has many challenges, the most important being people management. Managing people is not an easy task, and conflicts crop up among people every day.

So what is the solution? 

It’s simple – effective utilization of team ground rules.

Ground rules are policies and guidelines which a group establishes consciously to help individual members decide how to act. To be effective, ground rules must be clear, consistent, agreed-to, and followed.  Team ground rules define a behavioral model which addresses how individuals treat each other, communicate, participate, cooperate, and support each other in joint activities.

A team should create and adopt written ground rules in the project planning stage. They should be added to and revised as and when required. Every project has a unique team and functional structure. Ground rules need to be defined considering project organization in detail. A few factors to be considered are:

– Team location: Location of the team is essential in defining ground rules. A combination of stationary and virtual teams would require additional ground rules.

– Team ethnicity: Consider the ethnicity of the team members and add a few ground rules for effective teamwork.

– Project duration: Ground rules are important for any project, irrespective of the length of the project. Consider the project’s length when deciding how urgently they need to be implemented.

– Team skills and expertise: Team members should have a mix of skills and expertise in the domain to ensure the success of a project.

Project meeting

  • Be on time for all team meetings.
  • Team leader must create and disseminate agendas for each team meeting.
  • Team leader must create and disseminate minutes after each team meeting.
  • Attend the full duration of all team meetings unless there is an emergency.
  • Avoid informal/social talk during team meetings.
  • Build in brief informal/social talk time before or after team meetings.
  • Be patient with alternative viewpoints, different kinds of learners, writers, & speakers.
  • No responsibilities to be assigned unless the person who is being assigned the responsibility accepts it. If a person to be given a responsibility is not at the meeting, the team leader must review that assignment or action item with the person before the responsibility is designated.
  • Set aside a regular weekly meeting time that’s kept open by all members from week to week. Keep the meeting schedule flexible, arranging meetings as needed and based on availability.

Project decisions

  • Require consensus on all major team decisions. Avoid apathetic/passive decision making (e.g., “whatever you all think is right”).

Project delivery

  • Inform team leader if unable to complete work on time.
  • Seek reader/listener feedback before handing in all deliverables.
  • Set deadlines for each deliverable in advance of the due date to allow for collaborative revisions.

Team attitude and culture

  • Rotate responsibilities so each person gets experience with several aspects regardless of quality or qualifications.
  • Make criticisms constructive with suggestions for improvement and non-judgmental language.
  • Confront issues directly and promptly.
  • Promptly relay all interpersonal concerns/conflicts to team leaders.
  • Keep a positive attitude toward the team, individual members, projects and course.
  • Take initiative by offering ideas and volunteering for tasks.
  • Play an equal role in  the team by contributing equally to every task.
  • Be honest with any team member who is not pulling her/his weight.
  • Help one another with difficult or time consuming deliverables.
  • Ask for help from the team or other resources if “stuck” or falling behind.
  • Treat each other with respect.
  • Accept responsibility and accountability along with the authority given.

About the Author

Mahendra Gupta is a PMP and ISEB certified IT Consultant based in the United Kingdom, with more than 12 years of experience in Business System Analysis and IT Project Management across a wide range of projects within the Banking and Trust business sector.

source link: https://www.simplilearn.com/rules-to-set-you-up-for-success-article

Data Migration Project Checklist: A Template for Effective Data Migration Planning

Data Migration Checklist: The Definitive Guide to Planning Your Next Data Migration

Coming up with a data migration checklist for your data migration project is one of the most challenging tasks, particularly for the uninitiated.

To help, I’ve compiled a list of ‘must-do’ activities that I’ve found to be essential to successful migrations.

It’s not a definitive list; you will almost certainly need to add more points, but it’s a great starting point.

Please critique it, extend it using the comments below, share it, but above all use it to ensure that you are fully prepared for the challenging road ahead.

TIP: Data quality plays a pivotal role in this checklist, so be sure to check out Data Quality Pro, our sister site with the largest collection of hands-on tutorials, data quality guides and expert support for data quality on the internet.


Phase 1: Pre-Migration Planning

Have you assessed the viability of your migration with a pre-migration impact assessment?

Most data migration projects go barreling headlong into the main project without considering whether the migration is viable, how long it will take, what technology it will require and what dangers lie ahead.

It is advisable to perform a pre-migration impact assessment to verify the cost and likely outcome of the migration. The later you plan on doing this, the greater the risk, so score accordingly.

Have you based project estimates on guesswork or a more accurate assessment?

Don’t worry, you’re not alone, most projects are based on previous project estimates at best or optimistic guesswork at worst.

Once again, your pre-migration impact assessment should provide a far more accurate analysis of cost and resource requirements, so if you have tight deadlines, a complex migration and limited resources, make sure you perform a migration impact assessment as soon as possible.

Have you made the business and IT communities aware of their involvement?

It makes perfect sense to inform the relevant data stakeholders and technical teams of their forthcoming commitments before the migration kicks off.

It can be very difficult to drag a subject matter expert out of their day job for a 2-3 hour analysis session once a week if their seniors are not on board. Plus, by identifying what resources are required in advance, you will eliminate the risk of having gaps in your legacy or target skillset.

In addition, there are numerous aspects of the migration that require business sign-off and commitment. Get in front of sponsors and stakeholders well in advance and ensure they understand AND agree to what their involvement will be.

Have you formally agreed the security restrictions for your project?

I have wonderful memories of one migration where we thought everything was in place, so we kicked off the project and were then promptly shut down on the very first day.

We had assumed that the security measures we had agreed with the client project manager were sufficient; however, we did not reckon on the corporate security team getting in on the action and demanding a far more stringent set of controls, which caused 8 weeks of project delay.

Don’t make the same mistake: obtain a formal agreement from the relevant security governance teams in advance. Simply putting your head in the sand and hoping you won’t get caught out is unprofessional and highly risky given the recent losses of data in many organisations.

Have you identified your key data migration project resources and when they are required?

Don’t start your project hoping that Jobserve.com will magically provision those missing resources you need.

I met a company several months ago that decided they did not require a lead data migration analyst because the “project plan was so well defined”. Suffice it to say, they are now heading for trouble as the project spins out of control, so make sure you understand precisely what roles are required on a data migration.

Also ensure you have a plan for bringing those roles into the project at the right time.

For example, there is a tendency to launch a project with a full contingent of developers armed with tools and raring to go. This is both costly and unnecessary. A small team of data migration, data quality and business analysts can perform the bulk of the migration discovery and mapping well before the developers get involved, often creating a far more successful migration.

So the lesson is to understand the key migration activities and dependencies then plan to have the right resources available when required.

Have you determined the optimal project delivery structure?

Data migrations do not suit a waterfall approach, yet the vast majority of data migration plans I have witnessed nearly always resemble a classic waterfall design.

Agile, iterative project planning with highly focused delivery drops is far more effective, so ensure that your overall plan is flexible enough to cope with the likely change events that will occur.

In addition, does your project plan have sufficient contingency? 84% of migrations fail or experience delay; are you confident that yours won’t suffer the same consequences?

Ensure you have sufficient capacity in your plan to cope with the highly likely occurrence of delay.

Do you have a well-defined set of job descriptions so each member will understand their role?

Project initiation will be coming at you like a freight train soon so ensure that all your resources know what is expected of them.

If you don’t have an accurate set of tasks and responsibilities already defined, it means that you don’t know what your team is expected to deliver and in what order. Clearly not an ideal situation.

Map out the sequence of tasks, deliverables and dependencies you expect to be required and then assign roles to each activity. Check your resource list: do you have the right resources to complete those tasks?

This is an area that most projects struggle with, so clearly understanding what your resources need to accomplish will help you be fully prepared for the project initiation phase.

Have you created a structured task workflow so each member will understand what tasks are expected and in which sequence?

This is an extension of the previous point but is extremely important.

Most project plans will have some vague drop dates or timelines indicating when the business or technical teams require a specific release or activity to be completed.

What this will not show you is the precise workflow that will get you to those points. Ideally this needs to be defined before project inception so that there is no confusion as you move into the initiation phase.

It will also help you identify gaps in your resourcing model where the necessary skills or budgets are lacking.

Have you created the appropriate training documentation and designed a training plan?

Data migration projects typically require a lot of additional tools and project support platforms to function smoothly.

Ensure that all your training materials and education tools are tested and in place prior to project inception.

Ideally you would want all the resources to be fully trained in advance of the project, but if this isn’t possible, at least ensure that training and education are factored into the plan.

Do you have a configuration management policy and software in place?

Data migration projects create a lot of resource materials. Profiling results, data quality issues, mapping specifications, interface specifications – the list is endless.

Ensure that you have a well-defined and tested configuration management approach in place before project inception. You don’t want to be stumbling through project initiation trying to make things work, so test everything in advance and create the necessary training materials.

Have you planned for a secure, collaborative working environment to be in place?

If your project is likely to involve 3rd parties and cross-organisational support it pays to use a dedicated product for managing all the communications, materials, planning and coordination on the project.

It will also make your project run smoother if this is configured and ready prior to project initiation.

Have you created an agreed set of data migration policy documents?

How will project staff be expected to handle data securely? Who will be responsible for signing off data quality rules? What escalation procedures will be in place?

There is a multitude of different policies required for a typical migration to run smoothly; it pays to agree these in advance of the migration so that the project initiation phase runs effortlessly.

Phase 2: Project Initiation

Have you created a stakeholder communication plan and stakeholder register?

During this phase you need to formalise how each stakeholder will be informed. We may well have created an overall policy beforehand but now we need to instantiate it with each individual stakeholder.

Don’t create an anxiety gap in your project: determine what level of reporting you will deliver for each type of stakeholder and get agreement with them on the format and frequency. Dropping them an email six months into the project to say you’re headed for an 8-week delay will not win you any favours.

Communicating with stakeholders obviously assumes you know who they are and how to contact them! Record all the stakeholder types and individuals who will require contact throughout the project.

Have you tweaked and published your project policies?

Now is the time to get your policies completed and circulated across the team and new recruits.

Any policies that define how the business will be involved during the project also need to be circulated and signed off.

Don’t assume that everyone knows what is expected of them; get people used to learning about and signing off project policies early in the lifecycle.

Have you created a high-level first-cut project plan?

If you have followed best practice and implemented a pre-migration impact assessment, you should have a reasonable level of detail for your project plan. If not, then simply complete as much as possible with an agreed caveat that the data will drive the project. I would still recommend carrying out a migration impact assessment during the initiation phase, irrespective of the analysis activities which will take place in the next phase.

You cannot create accurate timelines for your project plan until you have analysed the data.

For example, simply creating an arbitrary 8-week window for “data cleansing activities” is meaningless if the data is found to be truly abysmal. It is also vital that you understand the dependencies in a data migration project: you can’t code the mappings until you have discovered the relationships, and you can’t do that until the analysis and discovery phase has completed.

Also, don’t simply rely on a carbon copy of a previous data migration project plan; your plan will be dictated by the conditions found on the ground and the wider programme commitments that your particular project carries.

Have you set up your project collaboration platform?

This should ideally have been created before project initiation, but if it hasn’t, now is the time to get it in place.

There are some great examples of these tools listed over at our sister community site here:

5 Simple Techniques To Differentiate Your Data Quality Service

Have you created your standard project documents?

During this phase you must create your typical project documentation such as risk register, issue register, acceptance criteria, project controls, job descriptions, project progress report, change management report, RACI etc.

They do not need to be complete but they do need to be formalised with a process that everyone is aware of.

Have you defined and formalised your 3rd Party supplier agreements and requirements?

Project initiation is a great starting point to determine what additional expertise is required.

Don’t leave assumptions when engaging with external resources; there should be clear instructions on exactly what needs to be delivered. Don’t leave this too late.

Have you scheduled your next phase tasks adequately?

At this phase you should be meticulously planning your next-phase activities, so ensure that the business and IT communities are aware of the workshops they will be involved in.

Have you resolved any security issues and gained approved access to the legacy datasets?

Don’t assume that because your project has been signed off you will automatically be granted access to the data.

Get approvals from security representatives (before this phase if possible) and consult with IT on how you will be able to analyse the legacy and source systems without impacting the business. A full extract of data on a secure, independent analysis platform is the best option, but you may have to compromise.

It is advisable to create a security policy for the project so that everyone is aware of their responsibilities and the professional approach you will be taking on the project.

Have you defined the hardware and software requirements for the later phases?

What machines will the team run on? What software will they need? What licenses will you require at each phase? It sounds obvious, but not for one recent project manager who completely forgot to put the order in and had to watch 7 members of his team sit idly by as the purchase order crawled through procurement. Don’t make the same mistake: look at each phase of the project and determine what will be required.

Model re-engineering tools? Data quality profiling tools? Data cleansing tools? Project management software? Presentation software? Reporting software? Issue tracking software? ETL tools?

You will also need to determine what operating systems, hardware and licensing is required to build your analysis, test, QA and production servers. It can often take weeks to procure this kind of equipment so you ideally need to have done this even before project initiation.

Phase 3: Landscape Analysis

Have you created a detailed data dictionary?

A data dictionary can mean many things to many people, but it is advisable to create a simple catalogue of all the information you have retrieved on the data under assessment. Make this tool easy to search and accessible, but with role-based security in place where required. A project wiki is a useful tool in this respect.
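As a rough illustration only (not part of the original checklist), here is a minimal Python sketch of how such a searchable catalogue might be structured; the field names and the sample entry are assumptions made for the example:

```python
# Minimal sketch of a data dictionary catalogue (illustrative assumptions only).
from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    system: str                 # legacy or target system the element lives in
    table: str                  # physical table or file name
    attribute: str              # column / field name
    description: str            # business meaning gathered during analysis
    data_type: str = ""         # e.g. VARCHAR(100), DATE
    owner: str = ""             # data steward responsible for the element
    tags: list = field(default_factory=list)   # keywords to make it searchable

def search(catalogue, term):
    """Very simple keyword search across the catalogue."""
    term = term.lower()
    return [e for e in catalogue
            if term in e.attribute.lower()
            or term in e.description.lower()
            or any(term in t.lower() for t in e.tags)]

catalogue = [
    DictionaryEntry("CRM_LEGACY", "CUST", "CUST_NM", "Customer legal name",
                    "VARCHAR(100)", "Sales Ops", ["customer", "name"]),
]
print(search(catalogue, "customer"))
```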

Have you created a high-level source to target mapping specification?

At this stage you will not have a complete source-to-target specification but you should have identified the high-level objects and relationships that will be linked during the migration. These will be further analysed in the later design phase.

Have you determined high-level volumetrics and created a high-level scoping report?

It is important that you do not fall foul of the load-rate bottleneck problem, so to prevent this situation, ensure that you fully assess the scope and volume of data to be migrated.

Focus on pruning data that is historical or surplus to requirements. Create a final scoping report detailing what will be in scope for the migration and get the business to sign this off.
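To make the load-rate point concrete, here is a small, illustrative volumetrics sanity check; the object names, row counts, load rate and cut-over window are all made-up assumptions:

```python
# Illustrative volumetrics check: will the in-scope data fit the migration window?
volumetrics = {               # object -> estimated rows in scope after pruning
    "customers": 2_000_000,
    "accounts": 5_500_000,
    "transactions": 120_000_000,
}
load_rate_rows_per_hour = 4_000_000   # measured or assumed target load rate
migration_window_hours = 48           # agreed cut-over window

total_rows = sum(volumetrics.values())
estimated_hours = total_rows / load_rate_rows_per_hour

print(f"Total rows in scope : {total_rows:,}")
print(f"Estimated load time : {estimated_hours:.1f} h (window = {migration_window_hours} h)")
if estimated_hours > migration_window_hours:
    print("WARNING: load-rate bottleneck - prune more data or improve throughput")
```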

Has the risk management process been shared with the team and have they updated the risk register?

There will be many risks discovered during this phase, so make it easy for risks to be recorded. Create a simple online form where anyone can add risks during their analysis; you can filter them out later, but for now we need to gather as many as possible and see where any major issues are coming from.

Have you created a data quality management process and impact report?

If you’ve been following our online coaching calls you will know that without a robust data quality rules management process your project will almost certainly fail or experience delays.

Understand the concept of data quality rules discovery, management and resolution so you deliver a migration that is fit for purpose.

The data quality process is not a one-off effort; it will continue throughout the project, but at this phase we are concerned with discovering the impact of the data so decisions can be made that could affect project timescales, deliverables, budget, resourcing etc.
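For illustration only, the sketch below shows one very simple way a data quality rules register might be run against legacy records to feed an impact report; the rules and sample records are assumptions, not part of the source checklist:

```python
# Minimal sketch of a data quality rules register applied to legacy records.
records = [
    {"id": 1, "email": "a@example.com", "dob": "1975-01-15"},
    {"id": 2, "email": "", "dob": ""},
]

rules = {
    "email_present": lambda r: bool(r["email"]),
    "dob_not_blank": lambda r: bool(r["dob"]),
}

def run_rules(records, rules):
    """Return (record id, failed rule) pairs for the impact report."""
    failures = []
    for r in records:
        for name, check in rules.items():
            if not check(r):
                failures.append((r["id"], name))
    return failures

for rec_id, rule in run_rules(records, rules):
    print(f"record {rec_id} failed rule '{rule}'")
```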

Have you created and shared a first-cut system retirement strategy?

Now is the time to begin warming up the business to the fact that their beloved systems will be decommissioned post-migration. Ensure that they are briefed on the aims of the project and start the process of discovering what is required to terminate the legacy systems. Better to approach this now than to leave it until later in the project when politics may prevent progress.

Have you created conceptual/logical/physical and common models?

These models are incredibly important for communicating and defining the structure of the legacy and target environments.

The reason we have so many modelling layers is so that we understand all aspects of the migration, from the deeply technical through to how the business community runs operations today and how it wishes to run operations in the future. We will be discussing the project with various business and IT groups, so the different models help us convey meaning to the appropriate community.

Creating conceptual and logical models also helps us to identify gaps in thinking or design between the source and target environments far earlier in the project, so we can make corrections to the solution design.

Have you refined your project estimates?

Most projects start with some vague notion of how long each phase will take. Use your landscape analysis phase to determine the likely timescales based on data quality, complexity, resources available, technology constraints and a host of other factors that will help you determine how to estimate the project timelines.

Phase 4: Solution Design

Have you created a detailed mapping design specification?

By the end of this phase you should have a thorough specification of how the source and target objects will be mapped, down to attribute level. This needs to be at a sufficient level to be passed to a developer for implementation in a data migration tool.

Note that we do not progress immediately into build following landscape analysis. It is far more cost-effective to map out the migration using specifications as opposed to coding, which can prove expensive and more complex to re-design if issues are discovered.
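As a hedged example of what “attribute level” can mean in practice, here is a small sketch of a mapping specification that a developer could iterate over; the source/target names, transformations and rules are invented for the illustration:

```python
# Illustrative attribute-level mapping specification entries (assumed names).
mapping_spec = [
    {
        "source": "CRM_LEGACY.CUST.CUST_NM",
        "target": "CRM_NEW.customer.full_name",
        "transformation": "trim and title-case",
        "dq_rule": "must not be blank",
    },
    {
        "source": "CRM_LEGACY.CUST.DOB",
        "target": "CRM_NEW.customer.date_of_birth",
        "transformation": "convert DD/MM/YYYY to ISO 8601",
        "dq_rule": "must be a valid past date",
    },
]

# A developer implementing the migration tool can iterate over the spec:
for m in mapping_spec:
    print(f"{m['source']:35} -> {m['target']:35} [{m['transformation']}]")
```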

Have you created an interface design specification?

At the end of this stage you should have a firm design for any interfaces that are required to extract the data from your legacy systems or to load the data into the target systems. For example, some migrations require change data capture functionality, so this needs to be designed and prototyped during this phase.

Have you created a data quality management specification?

This will define how you plan to manage the various data quality issues discovered during the landscape analysis phase. These may fall into certain categories such as:

  • Ignore
  • Cleanse in source
  • Cleanse in staging process
  • Cleanse in-flight using coding logic
  • Cleanse on target

The following article by John Platten of Vivamex gives a better understanding of how to manage cleansing requirements: Cleanse Prioritisation for Data Migration Projects – Easy as ABC?
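Purely as an illustration of the categorisation idea (not taken from the article), here is a tiny sketch that tags each discovered data quality issue with one of the handling categories listed above; the issue names and choices are assumptions:

```python
# Sketch: tagging each data quality issue with a handling category.
CATEGORIES = ("ignore", "cleanse in source", "cleanse in staging",
              "cleanse in-flight", "cleanse on target")

dq_issues = [
    {"issue": "blank customer email", "category": "cleanse in source"},
    {"issue": "legacy date format", "category": "cleanse in-flight"},
    {"issue": "obsolete product codes", "category": "ignore"},
]

for issue in dq_issues:
    assert issue["category"] in CATEGORIES, f"unknown category: {issue}"
    print(f"{issue['issue']:25} -> {issue['category']}")
```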

Have you defined your production hardware requirements?

At this stage you should have a much firmer idea of what technology will be required in the production environment.

The volumetrics and interface throughput performance should be known so you should be able to specify the appropriate equipment, RAID configurations, operating system etc.

Have you agreed the service level agreements for the migration?

At this phase it is advisable to agree with the business sponsors what your migration will deliver, by when and to what quality.

Quality, cost and time are variables that need to be agreed upon prior to the build phase so ensure that your sponsors are aware of the design limitations of the migration and exactly what that will mean to the business services they plan to launch on the target platform.

Phase 5: Build & Test

Has your build team documented the migration logic?

The team managing the migration execution may not be the team responsible for coding the migration logic.

It is therefore essential that the transformations and rules that were used to map the legacy and target environments are accurately published. This will allow the execution team to analyse the root-cause of any subsequent issues discovered.

Have you tested the migration with a mirror of the live environment?

It is advisable to test the migration with data from the production environment, not a smaller sample set. If you limit your test data to a sample, you will almost certainly run into conditions within the live data that cause defects in your migration at runtime.

Have you developed an independent migration validation engine?

Many projects base the success of a migration on how many “fall-outs” they witness during the process. This is typically where an item of data cannot be migrated due to some constraint or rule violation in the target or transformation data stores. They then go on to resolve these fall-outs and, when no more loading issues are found, carry out some basic volumetric testing.

“We had 10,000 customers in our legacy system and we now have 10,000 customers in our target, job done”.

We recently took a call from a community member based in Oman. Their hospital had subcontracted a data migration to a company that had since completed the project. Several months after the migration project, they discovered that many thousands of patients now had incomplete records, missing attributes and generally sub-standard data quality.

It is advisable to devise a solution that will independently assess the success of the execution phase. Do not rely on the reports and stats coming back from your migration tool as a basis for how successful the migration was.

I advise clients to vet the migration independently, using a completely different supplier where budgets permit. Once the migration project has officially terminated and those specialist resources have left for new projects, it can be incredibly difficult to resolve serious issues, so start to build a method of validating the migration during this phase. Don’t leave it until project execution; it will be too late.
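To show the spirit of independent validation, here is a minimal, illustrative sketch that compares key attributes between legacy and target extracts rather than relying on row counts alone; the extracts and key fields are assumptions:

```python
# Sketch of an independent validation check across legacy and target extracts.
legacy = {
    101: {"name": "A. Patel", "dob": "1975-01-15", "postcode": "SW1A 1AA"},
    102: {"name": "B. Jones", "dob": "1980-06-02", "postcode": "M1 2AB"},
}
target = {
    101: {"name": "A. Patel", "dob": "1975-01-15", "postcode": "SW1A 1AA"},
    102: {"name": "B. Jones", "dob": None, "postcode": "M1 2AB"},
}

def validate(legacy, target, fields=("name", "dob", "postcode")):
    """Report records missing from the target or with mismatched attributes."""
    issues = []
    for key, src in legacy.items():
        tgt = target.get(key)
        if tgt is None:
            issues.append((key, "missing in target"))
            continue
        for f in fields:
            if src[f] != tgt[f]:
                issues.append((key, f"mismatch in '{f}': {src[f]!r} vs {tgt[f]!r}"))
    return issues

for key, problem in validate(legacy, target):
    print(f"record {key}: {problem}")
```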

Have you defined your reporting strategy and associated technology?

Following on from the previous point, you need to create a robust reporting strategy so that the various roles involved in the project execution can see progress in a format that suits them.

For example, a migration manager may wish to see daily statistics, a migration operator will need to see runtime statistics and a business sponsor may wish to see weekly performance etc.

If you have created service level agreements for migration success, these need to be incorporated into the reporting strategy so that you can track and verify progress against each SLA.
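As a simple, hypothetical illustration of reporting against agreed SLAs (the thresholds and daily figures below are invented), a sketch like the following could drive a daily status line for each audience:

```python
# Sketch: reporting daily migration statistics against assumed SLA thresholds.
slas = {
    "min_load_success_rate": 0.995,   # share of records loaded without fall-out
    "max_defects_open": 50,           # open data quality defects allowed
}

daily_stats = {"records_attempted": 1_200_000,
               "records_loaded": 1_193_500,
               "defects_open": 62}

success_rate = daily_stats["records_loaded"] / daily_stats["records_attempted"]

report = {
    "load_success_rate": (round(success_rate, 4),
                          success_rate >= slas["min_load_success_rate"]),
    "defects_open": (daily_stats["defects_open"],
                     daily_stats["defects_open"] <= slas["max_defects_open"]),
}
for metric, (value, ok) in report.items():
    print(f"{metric:20} = {value}  ->  {'OK' if ok else 'SLA BREACH'}")
```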

Have you defined an ongoing data quality monitoring solution?

Data quality is continuous, and it should certainly not cease when the migration has been delivered, as there can be a range of insidious, previously undetected data defects lurking in the migrated data.

In addition, the new users of the system may well introduce errors through inexperience so plan for this now by building an ongoing data quality monitoring environment for the target platform.

A useful tool here is any data quality product that allows you to create specific data quality rules, possesses matching functionality and also has a dashboard element.

Have you created a migration fallback policy?

What if the migration fails? How will you rollback? What needs to be done to facilitate this?

Hope for the best but plan for the worst-case scenario, which is a failed migration. This can often be incredibly complex and require cross-organisation support, so plan well in advance of execution.

Have you confirmed your legacy decommission strategy?

By now you should have a clear approach, with full agreement, of how you will decommission the legacy environment following the migration execution.

Have you completed any relevant execution training?

The team running the execution phase may differ from the team on the build phase. It goes without saying that migration execution can be complex, so ensure that the relevant training materials are planned for and delivered by the end of this phase.

Have you obtained sign-off for anticipated data quality levels in the target?

It is rare that all data defects can be resolved but at this stage you should certainly know what they are and what impact they will cause.

The data is not your responsibility, however; it belongs to the business. Ensure they sign off any anticipated issues so that they are fully aware of the limitations the data presents.

Have you defined the data migration execution strategy?

Some migrations can take a few hours, some can run into years.

You will need to create a very detailed plan for how the migration execution will take place. This will include sections such as: what data will be moved, who will sign off each phase, what tests will be carried out, what data quality levels are anticipated, when the business will be able to use the data and what transition measures need to be taken.

This can become quite a considerable activity so as ever, plan well in advance.

Have you created a gap-analysis process for measuring actual vs estimated progress?

This is particularly appropriate on larger scale migrations.

If you have indicated to the business that you will be executing the migration over an 8-week period and that specific deliverables will be created, you can then map that out in an Excel chart with time points and anticipated volumetrics.

As your migration executes you can then chart actual vs estimated so you can identify any gaps.
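A minimal sketch of that actual-vs-estimated tracking, assuming invented weekly cumulative volumes, might look like this:

```python
# Sketch: actual vs estimated migration progress over an 8-week execution.
planned = [5, 12, 20, 30, 42, 55, 70, 85]   # millions of rows, cumulative per week
actual = [4, 10, 17, 26]                    # weeks completed so far

for week, plan in enumerate(planned, start=1):
    if week <= len(actual):
        gap = actual[week - 1] - plan
        print(f"week {week}: planned {plan}M, actual {actual[week - 1]}M, gap {gap:+}M")
    else:
        print(f"week {week}: planned {plan}M, actual pending")
```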

Phase 6: Execute & Validate

Have you kept an accurate log of SLA progress?

You will need to demonstrate to the business sponsors and independent auditors that your migration has been compliant. How you do this varies, but if you have agreed SLAs in advance, these need to be reported against.

Have you independently validated the migration?

We have already covered this, but it is worth stressing again that you cannot rely on your migration architecture to validate the migration. An independent process must be used to ensure that the migration has delivered the data to a sufficient quality level to support the target services.

Phase 7: Decommission & Monitor

Have you completed your system retirement validation?

There will typically be a number of pre-conditions that need to be met before a system can be terminated.

Ensure that these are fully documented and agreed (this should have been done earlier) so you can begin confirming that the migration has met these conditions.

Have you handed over ownership of the data quality monitoring environment?

Close down your project by passing over the process and technology adopted to measure data quality during the project.

Please note that this list is not exhaustive; there are many more activities that could be added here, but it should provide you with a reasonable starting point.

You may also find that many of these activities are not required for your type of migration but are included for completeness. As ever, your migration is unique, so it will require specific actions that are not on this list.

source from: http://datamigrationpro.com/data-migration-checklist-planner


Uptime vs. TIA-942: Introduction, why this series of articles?

Published on May 8, 2017

Edward van Leent
Chairman & CEO at EPI Group of Companies

Article 1 | Uptime vs. TIA-942: Introduction, why this series of articles?

During a recent one-month tour throughout the USA and Asia, I had the pleasure of meeting numerous data centre owners/operators, consultants and end-users to talk about data centre trends and the challenges they are facing. During those conversations, we also discussed quality benchmarks for data centre facilities, including the various standards and guidelines.

I started spotting a clear trend: there is a lot of misperception about data centre facilities benchmarking in relation to ANSI/TIA-942 vs. Uptime. Some of those misperceptions are based on outdated information, as some customers haven’t kept up with developments in that space; others are based on deception created by parties not representing the facts truthfully, whether through ignorance or intentionally for commercial reasons.

It was clear to me that the market needs to be updated on what is happening, including the true facts of the matter. That’s what brought me to the idea of writing a few articles on this subject to ensure the market gets appropriate, fact-based and up-to-date information. In a series of articles I will address a variety of aspects, and I hope this will contribute to a clearer, fact-based picture of the current situation and answer any questions you might have. If you have any suggestions for topics to be covered, please feel free to drop me a note at edward@epi-ap.com.

Article 2 | Uptime vs. TIA-942: A short history

Before getting into the details of Uptime vs. TIA-942, I thought it would be a good idea to provide a bit of background so that some of the matters that will be discussed in upcoming articles can be seen in the light of the bigger scheme of things.

Uptime (UTI) came up with a data centre classification scheme based on four (4) different levels which, as probably all readers of this article know, are indicated by the term “Tier”. It was first released in 1995 under the title “Tier Classifications Define Site Infrastructure Performance”. In 2005, this title was updated to “Tier Standard Topology”, also referred to as TST.

In the early 2000s, the TR42 committee of TIA decided to create a telecommunications standard for data centres. UTI and TIA got in touch with each other, and UTI gave TIA the legal right to use the Tier philosophy it had developed for inclusion into what ultimately became the ANSI/TIA-942 (TIA-942) standard. There were a few key differences, such as that TIA-942 did not only address the Electrical and Mechanical aspects defined at a high level in the TST, but also included many other factors in two additional sections, Architectural and Telecommunication (I will expand more on some of the key (technical) differences in another article). Both UTI and TIA were using the term Tier to indicate the four different levels of design principles. UTI was, and still is, using Roman numerals (I, II, III, IV), whereas TIA was using Arabic numerals (1, 2, 3, 4).

TIA released the ANSI/TIA-942 standard in 2005. The standard very quickly became popular for a variety of reasons. This was amplified when a number of organizations started to perform conformity assessments based on ANSI/TIA-942, which clearly created a much more competitive environment in the marketplace, where previously UTI had been pretty much the sole player. There was also some level of confusion in the market when organizations talked about having a Tier-X data centre without providing a reference as to whether this claim was based on UTI-TST or ANSI/TIA-942. These issues slowly became more and more of an irritation point, and in 2013 UTI approached TIA with the request that TIA drop the term ‘Tier’ from the ANSI/TIA-942 standard.

TIA, being a non-profit organization, had no issue with that, and it was mutually agreed that TIA would strike the term ‘Tier’ from the ANSI/TIA-942 standard and replace it with the term Rated/Rating in the 2014 version of the Standard. In an upcoming article, I will discuss in more detail the rights to use the terms Tier and Rated/Rating, as there are unfortunately some misperceptions about the legal rights with respect to the usage of the term ‘Tier’.

The above episode essentially ended the relationship between UTI and TIA, and each party is now working individually on the current and future versions of its own independent documents.


Article 3 | Uptime vs. TIA-942: Standard or guideline?

There have been many debates on the internet about this topic, including confusion about its relation to codes and arguments about using a capital letter for the term Standard. I think it is good to go back to one of the first definitions (as far back as 1667), which defined a Standard as ‘a specified principle, example or measure used for comparison to a level of quality or attainment’. A guideline was defined as ‘a non-specific rule or principle that provides direction to action, behaviour or outcome’. These definitions of course still leave some level of interpretation, even to the point that some would argue that both terms can be used for the very same thing. I would argue that a Standard has a few important factors:

  1. Standards are developed by an accredited SDO (Standards Development Organization). This title is awarded by any of the three key members of the WSC (World Standards Cooperation) or by their regional or national members who have been given the authority to accredit SDOs. At a regional level you have, for example, CEN, the European standards body issuing EN standards. At the country level you have, for example, ANSI in the USA, BSI in the UK, SPRING in Singapore etc. Virtually every country in the world has its own.
  2. The development of the Standard follows a transparent development process as laid down by the organization which governs the SDO’s development efforts. This typically includes key points such as that the process should be documented and available to others, that the members involved should be balanced etc.
  3. SDOs are typically non-profit organizations.
  4. SDOs do not perform audits, nor do they provide certification.
  5. All requirements of the standard are transparent, i.e. ALL requirements are available to those who wish to have insight into the standard and, before you even ask the question: NO, this does not mean that the standard should be available for free.
  6. The Standard must be reviewed on a regular basis, not exceeding 5 years. The outcome of that review will be one of three options: reaffirm, revise or withdraw.
  7. The intellectual property (IP) extends only to the standard itself and not to its use. This means that others than the SDO can use the material for various purposes such as using it for developing a service or product that uses the IP of the Standard.

There is a variation on the above, typically called de-facto or semi-standards, which are defined as specifications that are accepted by virtue of their relatively widespread usage.

So how can one make sure that a standard is a real Standard? One can review it from a “legal” perspective, or one could just apply the following logic:

  1. First of all, a real Standard will bear the prefix of the organization that accredited the SDO. For example, the long description of TIA-942 is ANSI/TIA-942, which means that ANSI is overseeing TIA as an SDO to ensure that whatever it develops follows due process. Just to be clear, ANSI does not validate the content of the standard, as this rests with the SDO and its technical committee of SMEs (Subject Matter Experts).
  2. A real Standard (typically) has a numeral indicator, e.g. ISO-9001, TIA-942.
  3. A real Standard is a document which provides a clear description of all audit criteria.

Coming back to the main question and based on the explanation provided, I believe it is very clear, and hard to argue otherwise, that ANSI/TIA-942 is a real Standard. UTI-TST is not a Standard but a guideline. At best, and with a fair amount of imagination, you could consider calling it a de-facto standard, but anything beyond that statement is clearly a misrepresentation of the facts and of the intent of how WSC and its members would define and recognize an SDO and a Standard.

Article 4 | Uptime vs. TIA-942: What is within the scope?

One of the key differences between UTI:TST and ANSI/TIA-942 is the scope. For the TST topology guideline of UTI, the scope is very clear, as it only covers the mechanical and electrical infrastructure. This is often seen as inadequate by data centre owners. As one data centre consultant once said to me, “you could build a data centre in a wooden hut next to a railroad track and a nuclear power plant, with no fire suppression and the doors wide open, and still be a Tier-IV data centre based on UTI:TST”. As ridiculous as it might sound, the reality is that nobody could argue with this consultant, as UTI:TST only covers electrical and mechanical, full stop. Although electrical and mechanical systems are very important, it doesn’t make any sense to ignore all other aspects that would contribute to a reliable, secure and safe data centre.

For ANSI/TIA-942 the situation is slightly more complicated. Officially the standard is called “Telecommunications Infrastructure Standard for Data Centers”. There are a number of annexes in the ANSI/TIA-942 which describe additional criteria such as site location, building construction, electrical and mechanical infrastructure, physical security, safety, fire detection and suppression etc. So, one could easily conclude that ANSI/TIA-942 clearly covers all aspects of a data center. So what is the issue?

There is a theoretical and a practical side to this. Let’s start with the theoretical side first. The standard indicates in the introduction that the 8 annexes are not part of the requirements of the standard, and as such the annexes are marked ‘informative’. However, a few sentences later it states: “It is intended for use by designers who need a comprehensive understanding of the data center design, including the facility planning, the cabling system, and the network design”. This indicates that the Technical Committee which put the standard together clearly intended to cover the whole data centre and not just the network infrastructure alone. Furthermore, the standard also states that “Failsafe power, environmental controls and fire suppression, and system redundancy and security are also common requirements to facilities that serve both the private and public domain”. In addition, Annex-F states: “This Standard includes four ratings relating to various levels of resiliency of the data center facility infrastructure”. Given the continual references to the relation between telecommunications and facilities infrastructure, it is hard to argue that the standard should not be taken as an overall design standard rather than one for telecommunications alone.

Then we have the practical side of the matter, which is that any data centre which takes ANSI/TIA-942 as its reference point does so by referring to Tier/Rating levels. I have never seen any data centre declaring conformity to ANSI/TIA-942 while ignoring all the annexes, as by rights one could then just pull the approved network cables in the right way and forget about all other aspects such as electrical and mechanical systems. The reality is that data centre operators/owners who use the ANSI/TIA-942 standard as their reference point are using its full content, including the annexes and rating systems.

So, the conclusion is very simple. No matter how much confusion some parties try to throw into the mix, the reality is that data centre designers/operators/owners take the full document as their reference for designing and building a reliable, secure, efficient and safe data centre. Anybody who says that ANSI/TIA-942 is only used for telecommunications is either ignoring what is happening in the real world or is oblivious to the facts of how ANSI/TIA-942 is written and used.

Article 5 | Uptime vs. TIA-942: Outcome based or checklist or can it be both?

In this article in the series about Uptime vs. TIA-942, I will address a statement often used in favour of Uptime over TIA-942. Consultants favouring Uptime typically argue that they are not using a checklist but are assessing designs based on the desired outcome. The claim is that ANSI/TIA-942 is not flexible and prevents innovation in designs because it uses a checklist, i.e. a tick-in-the-box approach. So, let’s examine the true facts behind these statements.

Checklists based:

First of all, UTI does have a checklist. However, it is an internal checklist used by their own engineers to go through designs in a systematic way. This checklist is not shared with the general public, even though it would be helpful for everybody to have it in order to get a better understanding of the details of the UTI demonstration/test criteria. This goes back to one of my previous articles about what real standards are, i.e. open and transparent.

ANSI/TIA-942 is a combination of descriptions of what needs to be achieved to meet defined rating levels and supplemental annexes that provide guidance on how to achieve this. However, make no mistake: purely applying the table of Annex-F as a checklist for conformity, without considering the rest of the standard, will give you an ugly surprise during an audit, as the table is a supporting element to the standard; it is not intended to be a complete checklist for all requirements of the standard. This is a classic mistake of inexperienced consultants/auditors who offer consulting/audit services, proudly pull out a copy of the table, put a tick in every box and then declare a site to conform to ANSI/TIA-942. These consultants/auditors have clearly not understood the standard and/or do not understand how audits should be conducted. Unfortunately, at EPI we have seen data centre owners in “tears” when, during an audit, we found major non-conformities which had been overlooked by these kinds of consultants. Be careful whom you choose for consulting and audit engagements and make sure they apply ANSI/TIA-942 appropriately.

Outcome based:

This applies to both UTI and ANSI/TIA-942. Don’t forget that ultimately the description of what constitutes a Tier I-II-III-IV is exactly the same as what ANSI/TIA-942 describes as Rated 1-2-3-4. For example, UTI:TST defines Tier-III as a Concurrently Maintainable (CM) data centre, similar to ANSI/TIA-942 defining a Rated-3 facility as a Concurrently Maintainable data centre. However, from the previous article you have learned that UTI:TST only covers Electrical and Mechanical (cooling), whereas ANSI/TIA-942 also includes requirements for Telecommunications to meet the CM requirement. A key difference is of course that ANSI/TIA-942 provides much more transparency and guidance on how this could be achieved, by giving clear indications of what one should or could do to achieve it.

Here is an example of how it works in the real world, which is very different from what is portrayed by consultants favouring Uptime. ANSI/TIA-942 states that for a Rated-3 data centre there should be two utility feeds, which can come from a single substation. What if you have only one utility feed? You could still meet Rated-3 if you can prove that you meet the overarching requirement of being CM. So, if you have generators and you can prove that during planned maintenance you can switch to the generators, then you could still meet the Rated-3 requirements. Of course, there will be a number of other criteria to address to ensure that the generator is capable of continuously supporting the load over an extended period of time, but in essence you certainly can meet Rated-3 despite not having followed the exact wording in the table that says you need two utility feeds. Inexperienced consultants/auditors do not understand this; certified consultants/auditors will, so make sure you put your design work in capable hands. Consultants favouring Uptime will often try to scare the customer with “you will never be able to comply with ANSI/TIA-942 because the table tells you that you must have XYZ”. It is somewhat hilarious to see the same type of consultants declare that the annexes are not part of the ANSI/TIA-942 standard, yet try to scare a customer about not meeting the items listed in the very same table they say is not part of the standard.

So, coming back to the question “outcome based or checklist, or can it be both?”: UTI is outcome based and does not provide practical guidance on how to achieve the outcome. ANSI/TIA-942 is also outcome based but provides guidance by means of clear descriptions in various annexes and a supplemental table for you to use. Those with more advanced technical skills can still use this flexibility to implement the design differently, as long as they meet the outcome objectives and the guidance of the annexes. Therefore, there is absolutely no truth to the statement that ANSI/TIA-942 hinders innovation when designing data centres.

Feel free to share this series of articles with other LinkedIn groups, friends and other social media.

In my next article, I will address the often-heard misconception of “Uptime is easy, TIA is hard”



Article 6 | Uptime vs. TIA-942: Uptime certification is easy, ANSI/TIA-942 certification is difficult

Following up on my previous article, we will now have a closer look at a statement which consultants favouring Uptime tend to throw at data centre operators/owners to try to convince them to go for Uptime certification instead of ANSI/TIA-942 certification. One of their favourite statements is “Uptime certification is easier to achieve compared to TIA-942”.

When hearing this, the first thought that comes to my mind is what my late father always said, ‘If something is too easy to achieve, it is probably not worth it’. Having said that, I don’t think that Uptime certification is all that easy to comply with. So why is it that such statements are being made?

At the core of such statements is, of course, a commercial interest: trying to scare data centre owners/operators away from pursuing ANSI/TIA-942 certification. So, what are their justifications, and are they true or false? The arguments usually brought up in those conversations are:

  1. UTI:TST only reviews electrical and mechanical (cooling) systems whereas ANSI/TIA-942 is too complicated as it covers everything including telecommunications, physical security etc.
  2. ANSI/TIA-942 is prescriptive and has many strict requirements in the Table which are hard to implement
  3. If you fail to meet one of the ANSI/TIA-942 requirements then you cannot get certified

Let’s have a look at each of these statements one by one to decipher the truth:


Argument-1: UTI:TST only reviews electrical and mechanical

There are two items to be examined here:

  1. The scope of the audit, in this case being electrical and mechanical
  2. The difficulty in meeting the criteria for the defined scope

As for the scope, yes, it is true that the smaller the scope, the less will be assessed and therefore the fewer issues that might potentially be discovered. But to me that sounds like saying you have a safe car because your seatbelts are certified, while you didn’t look at the tires, the structural strength of the car and other factors that have an impact on overall safety. It is similar for a data centre: you can review only the electrical and mechanical systems, as UTI:TST does, but if you don’t review the network, physical security and other factors then you still have a very large risk at hand from an overall data centre reliability perspective. So, as a business manager running an enterprise or commercial data centre, or as a user of a commercial data centre, would you be happy to know that a certificate does not cover all aspects that potentially pose a business risk to you? A broader scope, like that of ANSI/TIA-942, will ensure that all potential physical risks are evaluated.

As for the difficulty of meeting the criteria for the defined scope, UTI is certainly not easier than ANSI/TIA-942 in this respect. In fact, some of the UTI requirements are considered to be more difficult; an example is the requirement for prime generators, whereas ANSI/TIA-942 allows standby generators. Another example: UTI is more stringent on ambient conditions, as it looks at the most extreme condition over a 20-year history. These two facts alone have left many engineers (and business owners) baffled, and one often wonders why go to these extremes as ultimately it adds greatly to the cost. ANSI/TIA-942 is in that sense more practical, yet it still allows you to go to these extremes if you wish to do so and are willing to pay the incremental cost. This gives the business the option to choose a well-balanced risk vs. investment model.


Argument-2: ANSI/TIA-942 is prescriptive and has many strict requirements in the Table which are hard to implement

This argument is baseless and is aimed at those who do not understand how audits really work. As indicated in one of my previous articles, “Outcome based or checklist or can it be both?”, we made it clear that the table supports the overarching requirement for each rating level. So, if something does not meet the exact description in the table, it does not mean that you do not meet the requirement of the standard. Read the article I wrote about this subject here: “Outcome based or checklist or can it be both?”


Argument-3: If you fail to meet one of the ANSI/TIA-942 requirements then you cannot get certified

This argument pretty much follows the same “logic” as the previous statement. It is simply not true that you cannot get certified if you miss out on meeting a particular description in the table. Furthermore, in auditing based on ISO, there are Cat-1 and Cat-2 non-conformities. I will explain the difference in a future article, but for now it is sufficient to say that if a site has one (or multiple) Cat-2 non-conformities, that does not automatically mean the site cannot be certified.


The conclusion is that UTI:TST is portrayed as easier based on its narrow scope, but that leaves business owners at risk of having an incomplete assessment of all the important factors that make up a reliable data centre infrastructure. If you compare the same scope for UTI:TST vs. ANSI/TIA-942, both have the same overarching goals, such as concurrent maintainability and fault tolerance, whereby UTI can in fact turn out to be more costly due to some requirements which some data centre operators/owners consider to be overkill.

In our next article, I will address the usage of the term Tier and Rating. Stay tuned.


Top 10 data center operating procedures

Every data center needs to define its policies, procedures, and operational processes.

An ideal set of documentation goes beyond technical details about application configuration and notification matrices.

These top 10 areas should be part of your data center’s standard operating procedures manuals.

    1. Change control. In addition to defining the formal change control process, include a roster of change control board members and forms for change control requests, plans and logs (a minimal sketch of such a request record appears after this list).
    2. Facilities. Injury prevention program information is a good idea, as well as documentation regarding power and cooling emergency shut-off processes; fire suppression system information; unsafe condition reporting forms; new employee safety training information, logs and attendance records; illness or injury reporting forms; and visitor policies.
    3. Human resources. Include policies regarding technology training, as well as acceptable use policies, working hours and shift schedules, workplace violence policies, employee emergency contact update forms, vacation schedules, and anti-harassment and discrimination policies.
    4. Security. This is a critical area for most organizations. Getting all staff access to the security policies of your organization is half the battle. An IT organization should implement policies regarding third-party or customer system access, security violations, auditing, classification of sensitive resources, confidentiality, physical security, passwords, information control, encryption and system access controls.
    5. Templates. Providing templates for regularly used documentation types makes it easier to accurately capture the data you need in a format familiar to your staff. Templates to consider include policies, processes, logs, user guides and test/report forms.
    6. Crisis management. Having a crisis response scripted out in advance goes a long way toward reducing the stress of a bad situation. Consider including crisis management documentation around definitions; a roster of crisis response team members; crisis planning; an escalation and notification matrix; a crisis checklist; guidelines for communications; situation update forms, policies, and processes; and post-mortem processes and policies.
    7. Deployment. Repeatable processes are the key to speedy and successful workload deployments. Provide your staff with activation checklists, installation procedures, deployment plans, location of server baseline loads or images, revision history of past loads or images and activation testing processes.
    8. Materials management. Controlling your inventory of IT equipment pays off. Consider including these items in your organization’s documentation library: policies governing requesting, ordering, receiving and use of equipment for testing; procedures for handling, storing, inventorying, and securing hardware and software; and forms for requesting and borrowing hardware for testing.
    9. Internal communications. Interactions with other divisions and departments within your organization may be straightforward, but it is almost always helpful to provide a contact list of all employees in each department, with their work phone numbers and e-mail addresses. Keep a list of services and functions provided by each department, and scenarios in which it may be necessary to contact these other departments for assistance.
    10. Engineering standards. Testing, reviewing and implementing new technology in the data center is important for every organization. Consider adding these items to your organization’s standard operating procedures manuals: new technology request forms, technology evaluation forms and reports, descriptions of standards, testing processes, standards review and change processes, and test equipment policies.
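
For item 1 above, here is a minimal sketch of the fields a change control request form might capture. It is only an illustration; the class, field names and status values are assumptions, not taken from any particular framework, and should be adapted to your own change control process.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ChangeRequest:
    """Illustrative change control request record (field names are hypothetical)."""
    request_id: str                  # identifier assigned by the change coordinator
    requested_by: str                # person or team raising the change
    description: str                 # what is being changed and why
    systems_affected: List[str]      # servers, applications or facilities touched
    risk_assessment: str             # expected impact and likelihood of failure
    rollback_plan: str               # how to back the change out if it fails
    scheduled_window: date           # agreed implementation date
    approvals: List[str] = field(default_factory=list)  # change control board sign-offs
    status: str = "submitted"        # submitted -> approved -> implemented -> reviewed

# Example: a request to replace a power distribution unit in one rack
req = ChangeRequest(
    request_id="CR-0042",
    requested_by="Facilities",
    description="Replace failing PDU in rack 12",
    systems_affected=["rack-12"],
    risk_assessment="Low - rack has redundant A/B power feeds",
    rollback_plan="Re-install the original PDU",
    scheduled_window=date(2024, 6, 1),
)
req.approvals.append("Change Control Board")
req.status = "approved"
print(req.request_id, req.status)
```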

About the author
Kackie Cohen is a Silicon Valley-based consultant providing data center planning and operations management to government and private sector clients. Kackie is the author of Windows 2000 Routing and Remote Access Service and co-author of Windows XP Networking.

source from: http://searchdatacenter.techtarget.com/tip/Top-10-data-center-operating-procedures

ITIL and Security Management

ITIL and Security Management Overview

David McPhee

What is ITIL?

For the purpose of this chapter, the focus is on how information security management works within the Information Technology Infrastructure Library (ITIL).

The Information Technology Infrastructure Library (ITIL) is a framework of best practices. The concepts within ITIL support information technology services delivery organizations with the planning of consistent, documented, and repeatable or customized processes that improve service delivery to the business. The ITIL framework consists of the following IT processes: Service Support (Service Desk, Incident Management, Problem Management, Change Management, Configuration Management, and Release Management) and Services Delivery (Service Level Management, Capacity Management, Availability Management, Financial Management and IT Service Continuity Management).

History of ITIL

The ITIL concept emerged in the 1980s, when the British government determined that the level of IT service quality provided to them was not sufficient. The Central Computer and Telecommunications Agency (CCTA), now called the Office of Government Commerce (OGC), was tasked with developing a framework for efficient and financially responsible use of IT resources within the British government and the private sector.

Figure 1. ITIL Overview.

The earliest version of ITIL was originally called GITIM, Government Information Technology Infrastructure Management. Obviously this was very different from the current ITIL, but conceptually very similar, focusing on service support and delivery.

Large companies and government agencies in Europe adopted the framework very quickly in the early 1990s. ITIL was spreading far and wide, and was used in both government and non-government organizations. As it grew in popularity, both in the UK and across the world, IT itself changed and evolved, and so did ITIL.

What Is Security Management?

Security management details the process of planning and managing a defined level of security for information and IT services, including all aspects associated with reaction to security Incidents. It also includes the assessment and management of risks and vulnerabilities, and the implementation of cost justifiable countermeasures.

Security management is the process of managing a defined level of security for information and IT services, including managing the reaction to security incidents. The importance of information security has increased dramatically because of the opening of internal networks to customers and business partners, the move towards electronic commerce, and the increasing use of public networks such as the Internet and intranets. The widespread use of information and information processing, as well as the increasing dependency of process results on information, requires structural and organized protection of information.


Service Support Overview
Service support describes the processes associated with the day-to-day support and maintenance activities associated with the provision of IT services: Service Desk, Incident Management, Problem Management, Change Management, Configuration Management, and Release Management.

  • Service Desk: This function is the single point of contact between the end users and IT Service Management.
  • Incident Management: Best practices for resolving incidents (any event that causes an interruption to, or a reduction in, the quality of an IT service) and quickly restoring IT services.
  • Problem Management: Best practices for identifying the underlying causes of IT incidents in order to prevent future recurrences. These practices seek to proactively prevent incidents and problems.
  • Change Management: Best practices for standardizing and authorizing the controlled implementation of IT changes. These practices ensure that changes are implemented with minimum adverse impact on IT services, and that they are traceable.
  • Configuration Management: Best practices for controlling production configurations; for example, standardization, status monitoring, and asset identification. By identifying, controlling, maintaining and verifying the items that make up an organization’s IT infrastructure, these practices ensure that there is a logical model of the infrastructure.
  • Release Management: Best practices for the release of hardware and software. These practices ensure that only tested and correct versions of authorized software and hardware are provided to IT customers.

Service Support Details

Service Desk
The objective of the service desk is to be a single point of contact for customers who need assistance with incidents, problems, questions, and to provide an interface for other activities related to IT and ITIL services.

Figure 2. Service desk diagram.

Benefits of Implementing a Service Desk

  • Increased first call resolution
  • Skill based support
  • Rapidly restore service
  • Improved incident response time
  • Quick service restoration
  • Improved tracking of service quality
  • Improved recognition of trends and incidents
  • Improved employee satisfaction

Processes Utilized by the Service Desk

  • Workflow and procedures diagrams
  • Roles and responsibilities
  • Training evaluation sheets and skill set assessments
  • Implemented metrics and continuous improvement procedures

Incident Management
The objective of incident management is to minimize disruption to the business by restoring service operations to agreed levels as quickly as possible, to ensure that the availability of IT services is maximized, and to protect the integrity and confidentiality of information by identifying the root cause of a problem.

Benefits of an Incident Management Process

  • Incident detection and recording
  • Classification and initial support
  • Investigation and diagnosis
  • Resolution and recovery
  • Incident closure
  • Incident ownership, monitoring, tracking and communication
  • Repeatable Process

With a formal incident management practice, IT quality will improve by ensuring ticket quality, standardizing ticket ownership, and providing a clear understanding of ticket types, while decreasing the number of unreported or misreported incidents.
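
As a rough illustration of the lifecycle activities listed above, the sketch below models an incident ticket whose status moves through those stages while recording ownership and history. The class, stage and field names are assumptions made purely for illustration.

```python
# Minimal sketch of an incident ticket moving through the lifecycle activities
# listed above; class, stage and field names are illustrative assumptions.
class IncidentTicket:
    STAGES = ["detected", "recorded", "classified", "investigated", "resolved", "closed"]

    def __init__(self, ticket_id, description, owner):
        self.ticket_id = ticket_id
        self.description = description
        self.owner = owner                      # a single owner supports ticket ownership
        self.stage = self.STAGES[0]
        self.history = [(self.stage, owner)]    # supports monitoring and tracking

    def advance(self, actor):
        """Move the ticket to the next lifecycle stage and record who did it."""
        next_index = self.STAGES.index(self.stage) + 1
        if next_index >= len(self.STAGES):
            raise ValueError("ticket is already closed")
        self.stage = self.STAGES[next_index]
        self.history.append((self.stage, actor))

ticket = IncidentTicket("INC-1001", "E-mail service degraded", owner="service.desk")
ticket.advance("level1.analyst")   # recorded
ticket.advance("level1.analyst")   # classified
print(ticket.stage, ticket.history)
```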

Figure 3. Incident management ticket owner workflow diagram.

Problem Management
The objective of problem management is to resolve the root cause of incidents, to minimize the adverse impact of incidents and problems on the business, and to prevent recurrence of incidents related to these errors. A ‘problem’ is an unknown underlying cause of one or more incidents, and a ‘known error’ is a problem that has been successfully diagnosed and for which a work-around has been identified. The outcome of a known error is a request for change (RFC).

Figure 4. Problem management diagram overview.

A problem is a condition often identified as a result of multiple Incidents that exhibit common symptoms. Problems can also be identified from a single significant incident, indicative of a single error, for which the cause is unknown, but for which the impact is significant.

A known error is a condition identified by successful diagnosis of the root cause of a problem, and the subsequent development of a work-around.

An RFC is a proposal for a change to the IT infrastructure or environment.
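
The chain described above can be pictured as a small data model: several incidents point to one problem; once diagnosed, the problem yields a known error with a work-around, and the known error in turn raises an RFC. The sketch below is a simplified illustration with all class and field names assumed.

```python
from dataclasses import dataclass
from typing import List, Optional

# Simplified illustration of the incident -> problem -> known error -> RFC chain;
# all class and field names are assumptions for this sketch.
@dataclass
class Incident:
    incident_id: str
    symptom: str

@dataclass
class KnownError:
    root_cause: str
    workaround: str

@dataclass
class RequestForChange:
    rfc_id: str
    proposed_change: str

@dataclass
class Problem:
    problem_id: str
    incidents: List[Incident]                 # incidents sharing common symptoms
    known_error: Optional[KnownError] = None  # set once the root cause is diagnosed
    rfc: Optional[RequestForChange] = None    # raised to remove the error permanently

problem = Problem(
    problem_id="PRB-17",
    incidents=[Incident("INC-1001", "E-mail slow"), Incident("INC-1005", "E-mail slow")],
)
problem.known_error = KnownError("Undersized mail store volume", "Archive old mailboxes nightly")
problem.rfc = RequestForChange("RFC-88", "Extend mail store volume by 500 GB")
print(problem.problem_id, problem.rfc.rfc_id)
```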

Incident Management and Problem Management: What’s the Difference?
Incidents and service requests are formally managed through a staged process to conclusion. This process is referred to as the “incident management lifecycle.” The objective of the incident management lifecycle is to restore the service as quickly as possible to meet service level agreements (SLAs). The process is primarily aimed at the user level.

Problem management deals with resolving the underlying cause of one or more incidents. The focus of problem management is to resolve the root cause of errors and to find permanent solutions. Although every effort will be made to resolve the problem as quickly as possible this process is focused on the resolution of the problem rather than the speed of the resolution. This process deals at the enterprise level.

Change Management
Change management ensures that all areas follow a standardized process when implementing change into a production environment. Change is defined as any adjustment, enhancement, or maintenance to a production business application, system software, system hardware, communications network, or operational facility.

Benefits of Change Management

  • Planning change
  • Impact analysis
  • Change approval
  • Managing and implementing change
  • Increase formalization and compliance
  • Post change review
  • Better alignment of IT infrastructure to business requirements
  • Efficient and prompt handling of all changes
  • Fewer changes to be backed out
  • Greater ability to handle a large volume of change
  • Increased user productivity

Configuration Management
Configuration management is the implementation of a configuration management database (CMDB) that contains details of the organization’s elements that are used in the provision and management of its IT services (a minimal sketch of a CMDB record follows the list below). The main activities of configuration management are:

  • Planning: Planning and defining the scope, objectives, policy and process of the CMDB.
  • Identification: Selecting and identifying the configuration structures and items within the scope of your IT infrastructure.
  • Configuration control: Ensuring that only authorized and identifiable configuration items are accepted and recorded in the CMDB throughout its lifecycle.
  • Status accounting: Keeping track of the status of components throughout the entire lifecycle of configuration items.
  • Verification and audit: Auditing after the implementation of configuration management to verify that the correct information is recorded in the CMDB, followed by scheduled audits to ensure the CMDB is kept up-to-date.
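
To make this concrete, the sketch below shows the kind of record a very small CMDB might hold for a configuration item, including the relationships that let you trace which items support which services. Everything here (class, field and relationship names) is a simplifying assumption, not a prescribed CMDB schema.

```python
# Minimal sketch of configuration items and their relationships in a toy CMDB;
# class, field and relationship names are assumptions, not a prescribed schema.
class ConfigurationItem:
    def __init__(self, ci_id, ci_type, owner, status="authorized"):
        self.ci_id = ci_id
        self.ci_type = ci_type          # e.g. "server", "application", "service"
        self.owner = owner
        self.status = status            # status accounting over the CI lifecycle
        self.supports = []              # IDs of CIs or services this item supports

cmdb = {}

def register(ci):
    """Configuration control: only record identifiable, authorized items."""
    if ci.status != "authorized":
        raise ValueError(f"{ci.ci_id} is not authorized for the CMDB")
    cmdb[ci.ci_id] = ci

db_server = ConfigurationItem("SRV-001", "server", owner="infrastructure")
erp_app = ConfigurationItem("APP-ERP", "application", owner="finance-IT")
db_server.supports.append("APP-ERP")    # the ERP application depends on this server

register(db_server)
register(erp_app)

# Verification/audit style question: which items support the ERP application?
print([ci.ci_id for ci in cmdb.values() if "APP-ERP" in ci.supports])
```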

Configuration Management and Information Security
Without a definition of all the configuration items that are used to provide an organization’s IT services, it can be very difficult to identify which items are used for which services. This could result in critical configuration items being stolen, moved or misplaced, affecting the availability of the services dependent on them. It could also result in unauthorized items being used in the provision of IT services.

Benefits of Configuration Management

  • Reduced cost to implement, manage, and support the infrastructure
  • Decreased incident and problem resolution times
  • Improved management of software licensing and compliance
  • Consistent, automated processes for infrastructure mapping
  • Increased ability to identify and comply with architecture and standards requirements
  • Incident troubleshooting
  • Usage trending
  • Change evaluation
  • Financial chargeback and asset lifecycle management
  • Service Level Agreement (SLA) and software license negotiations

Release Management
Release Management is used for platform-independent and automated distribution of software and hardware, including license controls, across the entire IT infrastructure. Proper software and hardware control ensures the availability of licensed, tested, and version-certified software and hardware, which will function correctly with the available hardware. Quality control during the development and implementation of new hardware and software is also the responsibility of Release Management. This helps guarantee that all software can be optimized to meet the demands of the business processes.

Benefits of Release Management

  • Ability to plan resource requirements in advance
  • Provides a structured approach, leading to an efficient and effective process
  • Changes are bundled together in a release, minimizing the impact on the user
  • Helps to verify correct usability and functionality before release by testing
  • Control the distribution and installation of changes to IT systems
  • Design and implement procedures for the distribution and installation of changes to IT systems
  • Effectively communicate and manage expectations of the customer during the planning and rollout of new releases

The focus of release management is the protection of the live environment and its services through the use of formal procedures and checks.

Release Categories
A release consists of the new or changed software or hardware required to implement an approved change.

  • Major software releases and hardware upgrades, normally containing large areas of new functionality, some of which may make intervening fixes to problems redundant. A major upgrade or release usually supersedes all preceding minor upgrades, releases and emergency fixes
  • Minor software releases and hardware upgrades, normally containing small enhancements and fixes, some of which may have already been issued as emergency fixes. A minor upgrade or release usually supersedes all preceding emergency fixes.
  • Emergency software and hardware fixes, normally containing the corrections to a small number of known problems

Figure 5. Release management overview.

Releases can be divided, based on the release unit, into the following (a small sketch follows this list):

  • Delta Release is a release of only that part of the software which has been changed. For example, security patches to plug bugs in a piece of software.
  • Full Release means that the entire software program will be released again. For example, an entire version of an application.
  • Packaged Release is a combination of many changes: for example, an operating system image containing the applications as well.
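
As a small illustration of the difference between a delta and a full release, the sketch below compares the file manifests of two versions and returns either only the changed files (delta) or everything (full). The manifest format and function name are assumptions made for illustration.

```python
# Illustrative sketch: choosing the contents of a delta vs. a full release by
# comparing two version manifests (filename -> content hash). Names are assumed.
def release_contents(old_manifest, new_manifest, release_unit="delta"):
    if release_unit == "full":
        return sorted(new_manifest)                      # ship every file again
    # delta: only files that are new or whose hash has changed
    return sorted(
        name for name, digest in new_manifest.items()
        if old_manifest.get(name) != digest
    )

v1 = {"app.exe": "a1", "help.html": "b2", "config.xml": "c3"}
v2 = {"app.exe": "a9", "help.html": "b2", "config.xml": "c3", "patch.dll": "d4"}

print(release_contents(v1, v2, "delta"))  # ['app.exe', 'patch.dll']
print(release_contents(v1, v2, "full"))   # every file in v2
```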

Service Delivery Overview

Service delivery is the discipline that ensures IT infrastructure is provided at the right time, in the right volume, at the right price, and that IT is used in the most efficient manner. This involves analysis and decisions to balance capacity at a production or service point with demand from customers. It also covers the processes required for the planning and delivery of quality IT services, and looks at the longer-term processes associated with improving the quality of IT services delivered.

  • Service Level Management: Service level management (SLM) is responsible for negotiating and agreeing to service requirements and expected service characteristics with the customer
  • Capacity Management: Capacity management is responsible for ensuring that IT processing and storage capacity provision match the evolving demands of the business in a cost effective and timely manner
  • Availability Management: Availability management is responsible for optimizing availability
  • Financial Management: The object of financial management for IT services is to provide cost effective stewardship of the IT assets and the financial resources used in providing IT services.
  • IT Service Continuity Management: Service continuity is responsible for ensuring that the available IT Service Continuity options are understood and the most appropriate solution is chosen in support of the business requirements

Service Level Management
The objective of service level management (SLM) is to maintain and gradually improve business-aligned IT service quality, through a constant cycle of agreeing, monitoring, reporting and reviewing IT service achievements, and through instigating actions to eradicate unacceptable levels of service.

SLM is responsible for ensuring that the service targets are documented and agreed in SLAs and monitors and reviews the actual service levels achieved against their SLA targets. SLM should also be trying to proactively improve all service levels within the imposed cost constraints. SLM is the process that manages and improves agreed level of service between two parties, the provider and the receiver of a service.

SLM is responsible for negotiating and agreeing service requirements and expected service characteristics with the customer, and for measuring and reporting the service levels actually being achieved against target, the resources required and the cost of service provision. SLM is also responsible for continuously improving service levels in line with business processes through a service improvement programme (SIP), coordinating other service management and support functions (including third-party suppliers), reviewing SLAs to meet changed business needs or resolve major service issues, and producing, reviewing and maintaining the Service Catalogue.
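
As a simple illustration of measuring achieved service levels against SLA targets, the sketch below checks what percentage of incidents were resolved within the agreed time and compares that with the target documented in the SLA. The target, threshold and incident figures are invented for this example.

```python
# Illustrative sketch: comparing achieved service levels against an SLA target.
# The SLA target, resolution threshold and incident data are made-up examples.
sla_target_percent = 95.0      # "95% of priority-2 incidents resolved within 8 hours"
resolution_limit_hours = 8.0

# Resolution times (in hours) for last month's priority-2 incidents
resolution_times = [2.5, 7.0, 4.0, 9.5, 6.0, 3.2, 8.5, 1.0, 5.5, 7.9]

within_target = sum(1 for t in resolution_times if t <= resolution_limit_hours)
achieved_percent = 100.0 * within_target / len(resolution_times)

print(f"Achieved: {achieved_percent:.1f}% (target {sla_target_percent}%)")
if achieved_percent < sla_target_percent:
    print("SLA breached - raise in the monthly service review and consider a SIP action")
```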

Benefits of Implementing Service Level Management

  • Implementing the service level management process enables both the customer and the IT services provider to have a clear understanding of the expected level of delivered services and their associated costs for the organization, by documenting these goals into formal agreements.
  • Service level management can be used as a basis for charging for services, and can demonstrate to customers the value they are receiving from the Service Desk.
  • It also assists the service desk with managing external supplier relationships, and introduces the possibility of negotiating improved services or reduced costs.

Capacity Management
Capacity management is responsible for ensuring that IT processing and storage capacity provisioning match the evolving demands of the business in a cost effective and timely manner. The process includes monitoring the performance and the throughput of the IT services and supporting IT components, tuning activities to make efficient use of resources, understanding the current demands for IT resources and deriving forecasts for future requirements, influencing the demand for resource in conjunction with other Service Management processes, and producing a capacity plan predicting the IT resources needed to achieve agreed service levels.

Capacity management has three main areas of responsibility. The first of these is business capacity management (BCM), which is responsible for ensuring that the future business requirements for IT services are considered, planned and implemented in a timely fashion. These future requirements will come from business plans outlining new services, improvements and growth in existing services, development plans, etc. This requires knowledge of existing service levels and SLAs, future service levels and SLRs, the business and capacity plans, modeling techniques (analytical, simulation, trending and baselining), and application sizing methods.

The second main area of responsibility is service capacity management (SCM), which focuses on managing the performance of the IT services provided to the customers. It is responsible for monitoring and measuring services, as detailed in SLAs, and for collecting, recording, analyzing and reporting on data. This requires knowledge of service levels and SLAs, systems, networks, service throughput and performance, monitoring, measurement, analysis, tuning and demand management.

The third and final main area of responsibility is resource capacity management (RCM), which focuses on the management of the components of the IT infrastructure and ensures that all finite resources within the IT infrastructure are monitored and measured, and that the collected data is recorded, analyzed and reported. This requires knowledge of the current technology and its utilization, future or alternative technologies, and the resilience of systems and services.

Capacity Management Processes:

  • Performance monitoring
  • Workload monitoring
  • Application sizing
  • Resource forecasting
  • Demand forecasting
  • Modeling

From these processes come the results of capacity management, these being the capacity plan itself, forecasts, tuning data and Service Level Management guidelines.
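
To illustrate the forecasting side of capacity management, the sketch below fits a simple straight-line trend to past storage usage and estimates when the installed capacity would be exhausted. The figures are invented, and a real capacity plan would use proper workload modeling rather than this naive trend.

```python
# Naive capacity-forecasting sketch: fit a straight line (least squares) to past
# monthly storage usage and estimate when installed capacity will be exhausted.
# All figures are invented for illustration.
usage_tb = [40.0, 42.5, 45.1, 47.8, 50.2, 53.0]   # terabytes used, last six months
installed_capacity_tb = 80.0

n = len(usage_tb)
months = list(range(n))
mean_x = sum(months) / n
mean_y = sum(usage_tb) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, usage_tb)) / \
        sum((x - mean_x) ** 2 for x in months)
intercept = mean_y - slope * mean_x

# Months (from the start of the series) until the trend crosses installed capacity
months_to_full = (installed_capacity_tb - intercept) / slope
print(f"Growth is roughly {slope:.2f} TB/month; capacity reached after ~{months_to_full:.0f} months")
```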

Availability Management
Availability management is concerned with the design, implementation, measurement and management of IT services to ensure that the stated business requirements for availability are consistently met. Availability management requires an understanding of the reasons why IT service failures occur and the time taken to resume service. Incident management and problem management provide key input to ensure the appropriate corrective actions are being implemented.

  • Availability is the ability of an IT component to perform at an agreed level over a period of time (a simple worked calculation follows this list).
  • Reliability is the ability of an IT component to perform at an agreed level at described conditions.
  • Maintainability is the ability of an IT Component to remain in, or be restored to an operational state.
  • Serviceability is the ability for an external supplier to maintain the availability of a component or function under a third party contract
  • Resilience is a measure of freedom from operational failure and a method of keeping services reliable. One popular method of resilience is redundancy.
  • Security refers to the confidentiality, integrity, and availability of the data associated with a service.
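
A common way to express the availability defined above is as the percentage of agreed service time during which the service was actually available; reliability and maintainability are often summarized by mean time between failures (MTBF) and mean time to restore service (MTTR). The sketch below works through this arithmetic with invented numbers.

```python
# Worked availability arithmetic with invented numbers.
agreed_service_time_hours = 24 * 30      # one month of 24x7 agreed service time
downtime_hours = 4.5                     # total downtime in the month
failures = 3                             # number of service failures

availability_percent = 100.0 * (agreed_service_time_hours - downtime_hours) / agreed_service_time_hours
mtbf_hours = (agreed_service_time_hours - downtime_hours) / failures   # mean time between failures
mttr_hours = downtime_hours / failures                                 # mean time to restore service

print(f"Availability: {availability_percent:.2f}%")   # ~99.38%
print(f"MTBF: {mtbf_hours:.1f} h, MTTR: {mttr_hours:.1f} h")
```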

Availability Management
Security is an essential part of availability management, the primary focus of which is ensuring that the IT infrastructure continues to be available for the provision of IT services.

Some of the elements mentioned earlier are the products of performing risk analysis to identify how reliable elements are and how many problems have been caused as a result of system failure.

The risk analysis also recommends controls to improve availability of IT infrastructure such as development standards, testing, physical security and the right skills in the right place at the right time.

Financial Management
Financial management for IT services is an integral part of service management. It provides the essential management information to ensure that services are run efficiently, economically and cost-effectively. An effective financial management system will assist in the management and reduction of overall long-term costs and identify the actual cost of services. It provides accurate and vital financial information to assist in decision making, identify the value of IT services, and enable the calculation of total cost of ownership (TCO) and return on investment (ROI).

The practice of financial management enables the service manager to identify the amount being spent on security counter measures in the provision of the IT services. The amount being spent on these counter measures needs to be balanced with the risks and the potential losses that the service could incur as identified during a business impact assessment (BIA) and risk assessment. Management of these costs will ultimately reflect on the cost of providing the IT services, and potentially what is charged in the recovery of those costs.
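
As a small illustration of the kind of arithmetic financial management supports, the sketch below allocates the monthly cost of a shared service across business units in proportion to usage, and computes a simple return on investment figure for a security counter-measure. All figures and names are invented.

```python
# Illustrative cost-allocation and ROI arithmetic; all figures are invented.
monthly_service_cost = 12000.0                               # total cost of running the service
usage_by_unit = {"Sales": 400, "Finance": 250, "HR": 150}    # e.g. mailbox counts per unit

total_usage = sum(usage_by_unit.values())
for unit, usage in usage_by_unit.items():
    charge = monthly_service_cost * usage / total_usage
    print(f"{unit}: charged {charge:.2f} per month")

# Simple ROI for a security counter-measure identified in a BIA / risk assessment
countermeasure_cost = 20000.0        # annual cost of the counter-measure
expected_loss_avoided = 35000.0      # annualized loss it is expected to prevent
roi_percent = 100.0 * (expected_loss_avoided - countermeasure_cost) / countermeasure_cost
print(f"ROI of the counter-measure: {roi_percent:.0f}%")   # 75%
```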

Service Continuity Management
The objective of IT service continuity management is to support the overall business continuity management process by ensuring that the required IT technical and service facilities can be recovered within required and agreed business time-scales.

IT service continuity management is concerned with managing an organization’s ability to continue to provide a pre-determined and agreed level of IT services to support the minimum business requirements, following an interruption to the business. This includes ensuring business survival by reducing the impact of a disaster or major failure, reducing the vulnerability and risk to the business by effective risk analysis and risk management, preventing the loss of customer and user confidence, and producing IT recovery plans that are integrated with and fully support the organization’s overall business continuity plan.

IT service continuity is responsible for ensuring that the available IT service continuity options are understood and the most appropriate solution is chosen in support of the business requirements. It is also responsible for identifying roles and responsibilities and making sure these are endorsed and communicated from a senior level to ensure respect and commitment for the process. Finally, IT service continuity is responsible for guaranteeing that the IT recovery plans and the business continuity plans are aligned, and are regularly reviewed, revised and tested.

The Security Management Process

Security management provides a framework to capture the occurrence of security-related incidents and limit the impact of security breaches. The activities within the security management process must be revised continuously in order to stay up to date and effective. Security management is a continuous process and can be compared to Deming’s quality circle (Plan, Do, Check and Act).

Figure 6. Security image diagram.

The inputs are the requirements formed by the clients. The requirements are translated into security services and security quality that need to be provided in the security section of the service level agreements. As you can see in the diagram, the arrows go both ways: from the client to the SLA and from the SLA to the client, and from the SLA to the plan sub-process and from the plan sub-process to the SLA. This means that both the client and the plan sub-process provide inputs to the SLA, and that the SLA is an input for both the client and the process. The provider then develops the security plans for their organization. These security plans contain the security policies and the operational level agreements. The security plans (Plan) are then implemented (Do), and the implementation is then evaluated (Check). After the evaluation, both the plans and the implementation of the plans are maintained (Act).

The first activity in the security management process is the “control” sub-process. The control sub-process organizes and manages the security management process itself. It defines the processes, the allocation of responsibility, the policy statements and the management framework.

The security management framework defines the sub-processes for the development of security plans, the implementation of the security plans, the evaluation and how the results of the evaluations are translated into action plans.

The plan sub-process contains activities that in cooperation with the service level management lead to the information security section in the SLA. The plan sub-process contains activities that are related to the underpinning contracts which are specific for information security.

In the plan sub-process, the goals formulated in the SLA are specified in the form of operational level agreements (OLA). These OLAs can be defined as security plans for a specific internal organization entity of the service provider.

Besides the input of the SLA, the plan sub-process also works with the policy statements of the service provider itself. As said earlier these policy statements are defined in the control sub-process.

The operational level agreements for information security are setup and implemented based on the ITIL process. This means that there has to be cooperation with other ITIL processes. For example, if the security management wishes to change the IT infrastructure in order to achieve maximum security, these changes will only be done through the change management process. The security management will deliver the input (request for change) for this change. The change manager is responsible for the change management process itself.

The implementation sub-process makes sure that all measures, as specified in the plans, are properly implemented. During the implementation sub-process no (new) measures are defined or changed. The definition or change of measures will take place in the plan sub-process in cooperation with the change management process.

The evaluation of the implementation and the plans is very important. The evaluation is necessary to measure the success of the implementation and the security plans. The evaluation is also very important for the clients and possibly third parties. The results of the evaluation sub-process are used to maintain the agreed measures and the implementation itself. Evaluation results can lead to new requirements and so lead to a request for change. The request for change is then defined and it is then sent to the change management process.

It is necessary for the security to be maintained. Because of changes in the IT infrastructure and changes in the organization itself, security risks are bound to change over time. The maintenance of the security concerns both the maintenance of the security section of the service level agreements and the more detailed security plans.

The maintenance is based on the results of the evaluation sub-process and insight into the changing risks. These activities will only produce proposals. The proposals serve as inputs for the plan sub-process and go through the whole cycle, or the proposals can be taken up in the maintenance of the service level agreements. In both cases the proposals could lead to activities in the action plan. The actual changes will be carried out by the change management process.

The maintenance sub-process starts with the maintenance of the service level agreements and the maintenance of the operational level agreements. These activities take place in no particular order. If there is a request for a change, the request for change activity takes place, and once it is concluded the reporting activity starts. If there is no request for a change, then the reporting activity starts directly after the first two activities.

About the Author
From Information Security Management Handbook, Sixth Edition, Volume 2, edited by Harold F. Tipton and Micki Krause. New York: Auerbach Publications, 2008.


Process Owner, Process Manager or Process Engineer

Process Owner, Process Manager or Process Engineer?

While they might appear much the same at first glance, these roles are actually very different

Many times, people who are just getting started with ITIL (or, more broadly speaking, ITSM) stumble over the differences between a Process Owner and a Process Manager and, to a lesser extent, a Process Engineer.

These are different roles, with different skill sets and expectations but there are some overlaps. Often, especially in smaller organizations, these roles are all served by a single person. Even in that case, it is important to know the different objectives of each role so we can ensure we are in the right frame of mind when working to either promote, create, edit, or report on a process.

Process Owner

In general, the Process Owner is the ultimate authority on what the process should help the company accomplish. The Process Owner ensures the process supports company policies; represents and promotes the process to the business, IT leadership and other process owners; continuously verifies the process is still fit for purpose and use; and, finally, manages any and all exceptions that may occur.

Overall Accountability and Responsibility:

  • Overall design
  • Ensuring the process delivers business value
  • Ensures compliance with any and all related Policies
  • Process role definitions
  • Identification of Critical Success Factors and Key Performance Indicators
  • Process advocacy and ensuring proper training is conducted
  • Process integration with other processes
  • Continual Process Improvement efforts
  • Managing process exceptions

As you can see, the Process Owner is really the process champion. Typically the person filling this role sits at a higher level of leadership, to help ensure the process gets the protection and attention it deserves.

The Process Owner will be the main driving force behind the process creation, any value the process produces, including acceptance of and compliance with it within the organization, and also any improvements. It is therefore crucial that the Process Owner really understands the organization, its goals and its culture. This is not about reading a book and trying to implement a book version of a process, but about really understanding how to create a process that will deliver the most value for this particular organization.

General Skills and Knowledge needed:

  • Company and IT Department goals and objectives
  • IT Department organizational structure and culture
  • Ability to create a collaborative environment and deliver a consensus agreement with key IT personnel
  • Authority to manage exceptions as required.
  • ITIL Foundation is recommended
  • ITIL Service Design and Continual Service Improvement could be helpful

Level of Authority in the Organization

  • Director
  • Senior Manager

Process Manager

The Process Manager is more operational than the Process Owner. You may have multiple Process Managers but you will only ever have a single Process Owner.

You can have a Process Manager for different regions or different groups within your IT Department. Think of IT Service Continuity with an ITSC Process Manager for each of your data centers, or Change Management having a different Change Process Manager for Applications versus Infrastructure. The Process Owner will define the roles as appropriate for the organizational structure and culture (see above). The Process Manager is there to manage the day-to-day execution of the process. The Process Manager should also serve as the first line for any process escalation; they should be very familiar with the ins and outs of the process and will be able to determine the appropriate path, or whether they need to involve the ultimate authority – the Process Owner.

Overall Accountability and Responsibility:

  • Ensuring the process is executed appropriately at each described step
  • Ensuring the appropriate inputs/outputs are being produced
  • Guiding process practitioners (those moving through the process) appropriately
  • Producing and monitoring process KPI reports

The Process Manager is key to the day-to-day operations of the process. Without a good and helpful Process Manager it won’t matter how well a process was designed and promoted by the Process Owner; the process will flounder in the rough seas of day-to-day IT execution.

General Skills and Knowledge needed:

  • In depth knowledge of the process workflow and process CSF/KPI’s
  • Ability and authority to accept/reject all inputs/outputs related to the process
  • Ability to successfully explain and guide people through the process and handle any low-level process issues
  • ITIL Foundation is recommended
  • ITIL Intermediate in an area that covers their particular process could be helpful

Level of Authority in the Organization

  • Mid Level Manager
  • First Line Manager
  • Supervisor

Process Engineer

The Process Engineer is likely to have many Business Analyst and Technical Writer skills and knowledge. This person needs to be able to take the Process Owner’s vision and intent for the process and actually create the process document that will be functionally usable by Process Managers and Process Practitioners. Another useful role of the Process Engineer is to help ensure that each process in the enterprise is written in a common manner, ensuring consistency in approach and method.

Overall Accountability and Responsibility:

  • Understanding the Process Owner’s vision and intent
  • Documenting the process in a usable and readable manner
    • Organized
    • Simple
    • Unambiguous
    • Ensuring flow charts match text
    • Ensuring processes are documented in a common manner across the enterprise

General Skills and Knowledge needed:

  • Ability to capture process requirements and translate them into a process document
  • Ability to write well
  • Ability to create effective work flow diagrams
  • ITIL Foundation could be helpful

Level of Authority in the Organization

  • Individual Contributor

As you can see a Process Engineer can be quite helpful in ensuring that the vision of the Process Owner is translated into a functional process document.


It is possible for a single person to perform all three roles effectively, but more likely the person will be more effective at one of these roles and less so at the others. If your organization is such that the three roles cannot be filled separately with people possessing the appropriate skills, it is still advisable that a separate Process Engineer be utilized across the enterprise. A Process Engineer can work on several processes at once and will always be helpful for any process improvement efforts. A Process Owner can also function as a Process Manager without much issue, given an appropriate scope and demand.

Source : http://www.theitsmreview.com/2013/03/process/

Free tools for ITSM – supporting IT Service Management for zero tool cost

Any application or computer program that enables you to run one or more IT Service Management processes is considered to be an ITSM tool. As with any application or program, there are a great number of both commercial and free tools for ITSM. In a small IT organization, parts of IT Service Management can be done by using office tools, such as spreadsheets, databases and word processing applications. However, managing larger amounts of data over time, with flexibility and consistency, requires specialized tools for the task at hand, regardless of organization size. Here is a list of the most common open source (free) ITSM tools:

Free ITSM software

Help Desk and Ticketing

  • RT: Request tracker – RT is an “issue tracking system which thousands of organizations use for bug tracking, help desk ticketing, customer service, workflow processes, change management, network operations, and even more…”
  • SpiceWorks – Spiceworks’ free app will allow you to easily manage your daily projects and user requests – all from one spot. And if you’re a help desk pro, you’ll still be amazed at how painless Spiceworks is to get up and running.
  • Triage – The web-based application will provide interfaces for handling tickets with notes and solutions, full-text search indexing, and allowing for plug-ins which can generate tickets from external sources (i.e. Asterisk, OpenNMS, Nagios, IMAP, POP3, etc.).
  • FreeHelpDesk – FreeHelpDesk is a feature-rich help desk system designed from the ground up to meet the demands of help desk staff and their users. It is a web-based system that can accept new calls from your users directly into the system. Calls can be tracked and searched to enable faster response times.
  • OSTicket – Easily manage, organize, and streamline your customer service and drastically improve your customer’s experience – all with one simple, easy-to-use (and free) system.
  • OTRS Help Desk – OTRS Help Desk software provides the tools needed to deliver superior service to your customers. Build stronger, longer-lasting relationships and gain a solid competitive edge with the proven functionality of OTRS.

If you need more information about Help Desk, Service Desk and Call Center distinction, follow this great blog post: Service Desk: Single point of contact.

Inventory and Configuration Management Database (CMDB)

  • i-doIT – Open Source IT Documentation and CMDB.
  • OCS Inventory NG – Open Computers and Software Inventory Next Generation is a technical management solution of IT assets. It uses small client software that has to be installed on every machine, and a server that aggregates information about those machines. It can be used for software deployment as well.

Learn more on ITIL V3 Change Management – at the heart of Service Management.

Service Monitoring

  • Nagios – Achieve instant awareness of IT infrastructure problems, so downtime doesn’t adversely affect your business. Nagios offers complete monitoring and alerting for servers, switches, applications, and services.
  • Icinga – is an enterprise-grade open source monitoring system which keeps watch over networks and any conceivable network resource, notifies the user of errors and recoveries and generates performance data for reporting. Scalable and extensible, Icinga can monitor large, complex environments across dispersed locations. Icinga is a branch of Nagios and is backward compatible.
  • Zabbix – is the open source availability and performance monitoring solution. Zabbix offers advanced monitoring, alerting, and visualization features today which are missing in other monitoring systems – even some of the best commercial ones.
  • GroundWork – monitors your entire datacenter and collects all its information in one place, helping to make better sense of your IT environment performance and availability data.

Service Management

  • OTRS:ITSM – is a scalable, high-performance, enterprise-grade IT Service Management (ITSM) software that couples the best practices of the IT Infrastructure Library (ITIL v3). The OTRS IT Service Management software is a powerful set of tools for managing complex IT administration processes, reducing business risk and ensuring high service quality.
  • iTop – written in a simple, popular programming language (PHP) that can be customized in an instant, iTop was developed to let you choose the modules you are interested in. If you just want a CMDB, you just get a CMDB. If you need to deal with all ITIL processes, you can get all ITIL modules covered by iTop. Adding a module is a question of minutes.
  • Project Open (]Project Open[) – is a modular open source project and service management tool with a focus on finance and knowledge management. “]po[ ITSM” is a special configuration of ]po[ designed to address the specific needs of IT departments and IT service providers, according to ITIL V3 best practices.

Learn more on IT Service Management in general.

Note: Product descriptions have been given by their respective developers, and are to be used for informational purposes only. As they are all free to download and use, take your time to try them before implementing.

Free does not always equal zero cost

There are many free ITSM tools available for you to download, install, and use, but you don’t get any support or help implementing the tool itself or its processes. Open source ITSM tools generally have nice communities built around the tools, so there might be some help available if you get stuck, but don’t expect instant answers or solutions.

Companies that offer free ITSM tools generate their revenue by offering a) hosting and cloud services for the tool, b) consulting and help with implementation, c) support once the tool has been implemented, and d) sometimes additional features that have to be purchased separately. It’s important to remember that all of these things will be up to you: finding a resource to run the software (a server), having the know-how to install and configure it, using it, teaching others to use it, and supporting the software itself if needed.

Where to start

If there aren’t any kinds of ITSM tools implemented in your organization, then the best way to start would be tools for processes that revolve around IT Operations, and are most visible to end users. These include Incident and Service Management (Help Desk / Service Desk), Configuration Management, Change Management, and some sort of Service Monitoring tools.

Make a list of products that may interest you, and some criteria which will help you decide: installation requirements (OS, resources, web-based, etc.), modules available (incident management, configuration management, change management, etc.), whether the modules are aligned with best practices such as ITIL (read more on: How to implement ITIL and information about other ITSM Standards and Frameworks), whether support is available (community-based or commercial), additional features such as a self-service portal and/or e-mail integration, and how confident you feel about being able to implement it. A simple way to compare candidates against such criteria is sketched below.
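
One lightweight way to turn that list of criteria into a decision is a simple weighted scoring sheet: give each criterion a weight, score each candidate tool against it, and compare the totals. The sketch below is only an illustration; the tool names, weights and scores are placeholders you would replace with your own evaluation.

```python
# Illustrative weighted scoring for ITSM tool selection; the criteria weights,
# candidate names and scores below are placeholders, not recommendations.
criteria_weights = {
    "installation_fit": 3,         # OS, resources, web-based, etc.
    "modules_needed": 5,           # incident, configuration, change management, ...
    "itil_alignment": 4,
    "support_available": 3,        # community-based or commercial
    "extra_features": 2,           # self-service portal, e-mail integration, ...
    "confidence_to_implement": 4,
}

# Scores from 1 (poor) to 5 (excellent) per candidate - example values only
candidates = {
    "Tool A": {"installation_fit": 4, "modules_needed": 3, "itil_alignment": 3,
               "support_available": 4, "extra_features": 2, "confidence_to_implement": 5},
    "Tool B": {"installation_fit": 3, "modules_needed": 5, "itil_alignment": 5,
               "support_available": 3, "extra_features": 4, "confidence_to_implement": 3},
}

for name, scores in candidates.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name}: {total}")
```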

Author: Neven Zitek

Source: http://advisera.com/20000academy/knowledgebase/free-tools-for-itsm/