Evaluating Training

At Spearhead we are passionate about ensuring that every training course we deliver has an impact on your future success. Here we share a few of the models and methodologies that have shaped how training evaluation is carried out.

1. Kirkpatrick's Model of Training Evaluation

Professor Donald Kirkpatrick first published the ideas behind the four-level model in 1959, later consolidating them in his 1994 book “Evaluating Training Programs”. Although the model is now several decades old, it is still the most widely used by training professionals and is considered to be the industry standard.

The four levels are:

  • Level one – Reaction. To what extent did the delegates react favourably to the training course?
  • Level two – Learning. To what degree did the delegates acquire the intended knowledge, skills and attitudes set out in the course objectives as a result of their participation in the training?
  • Level three – Behaviour. To what degree did the delegates apply what they learned on the course once back at work?
  • Level four – Results. To what degree did targeted outcomes occur as a result of the training course and the subsequent work-based reinforcement of the training?

2. The CIRO Model

The CIRO model was developed by Warr, Bird and Rackham and published in 1970 in their book “Evaluation of Management Training”. CIRO stands for context, input, reaction and outcome. The key difference between the CIRO and Kirkpatrick models is that CIRO focuses on measurements taken both before and after the training has been carried out.

One criticism of this model is that it does not take behaviour into account. Some practitioners therefore feel that it is more suited to management-focused training programmes than to those designed for people working at lower levels in the organisation.

Context: This involves identifying and evaluating training needs by collecting information about performance deficiencies and, based on these, setting training objectives, which may be at three levels:

  1. The ultimate objective: The particular organisational deficiency that the training programme will eliminate.
  2. The intermediate objectives: The changes to employees’ work behaviours necessary if the ultimate objective is to be achieved.
  3. The immediate objectives: The new knowledge, skills or attitudes that employees need to acquire in order to change their behaviour and so achieve the intermediate objectives.

Input: This is about analysing the effectiveness of the training courses in terms of their design, planning, management and delivery. It also involves analysing the organisational resources available and determining how these can be best used to achieve the desired objectives.

Reaction: This is about analysing the reactions of the delegates to the training in order to make improvements. This feedback is inherently subjective, so it needs to be collected in as systematic and objective a way as possible.

Outcome: Outcomes are evaluated in terms of what actually happened as a result of the training. They can be measured at any or all of the following four levels, depending on the purpose of the evaluation and on the resources available.

  • The learner level
  • The workplace level
  • The team or department level
  • The business level


3. Phillips’ Evaluation Model

Building on Kirkpatrick’s model, Dr. Jack Phillips added a fifth level which gives a practical way to measure the return on investment (ROI) of a training initiative. ROI can be calculated by following a seven-step process:

  • Step 1. Collect pre-programme data on performance and/or skill levels
  • Step 2. Collect post-programme data on performance and/or skill levels
  • Step 3. Isolate the effects of the training from other positive and negative influences on performance
  • Step 4. Convert the data into a monetary value (i.e. how much the change is actually worth to the organisation)
  • Step 5. Calculate the costs of delivering the training programme
  • Step 6. Calculate ROI (programme benefits in £ divided by programme costs in £) – see the worked example after this list
  • Step 7. Identify and list the intangible benefits
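
As a simple worked sketch using purely hypothetical figures: suppose a programme produces benefits valued at £60,000 and costs £20,000 to deliver. Applying the formula in Step 6:

  ROI = programme benefits ÷ programme costs = £60,000 ÷ £20,000 = 3, i.e. £3 of benefit for every £1 spent.

Phillips also expresses ROI as a percentage based on the net benefit: here the net benefit is £40,000 (£60,000 of benefits less £20,000 of costs), giving an ROI of £40,000 ÷ £20,000 × 100 = 200%.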

Step 7 is important because Phillips recognised that some training outcomes cannot easily be converted into a monetary value. For example, trying to put a monetary value on outcomes such as a less stressful working environment or improved employee satisfaction can be extremely difficult. Indeed, trying too hard to attach a business value to these intangible benefits may call into question the credibility of the entire evaluation effort!

Phillips recommended that these “soft” measures be reported as intangible benefits alongside the “hard” business improvement outcomes, such as increased sales, reduced defects and time savings.

4. The Success Case Method

This methodology was developed by Professor Robert Brinkerhoff. Its purpose is to understand the factors that make some delegates highly successful in implementing the training after the course, and the reasons why others are not successful. These findings are then fed back into the programme in order to improve the training and/or the pre- and post-training support.

The evaluation starts with a survey, completed by the delegates after the training has been delivered, to identify those who were the most and least successful in applying it. A random sample from each of these two groups is then selected for an in-depth interview with the evaluator, in order to discover:

  • Exactly what they used from the training, and how and when they used it
  • What results they actually accomplished
  • How valuable the results were (in £)
  • What environmental factors enabled them to apply the training and get the results
  • If they were unsuccessful, they are asked why they were unable to use or benefit from the training: what got in the way, what factors stopped them from being successful, and so on

If you would like help with the evaluation of training for your company, please get in touch.