How effective is your HSE training?
It’s important to evaluate your health and safety training because it’s easy to assume that once you’ve provided training to workers, you can pat yourself on the back and consider the job done. But if you do, you’ve put the cart before the horse. If your goal is to deliver effective training that changes your workers’ behaviour and skills on the job, then you need to confirm that the training actually worked. The standard way to do this is to conduct a post-training evaluation.
This is done at four different levels:
Level 1: Your employees’ reaction to training
Did the employees like the training? Did they feel like they learned? You can find this out by:
- observing the employees during training
- asking their opinions
- handing out surveys
You can hand out paper-based surveys after training, but you may get better results if the survey is anonymous.
Level 2: Your employees’ actual learning
The assessments you conducted during the training should evaluate the employees’ actual learning of the objectives. This might include simple tests for knowledge issues, or case studies, job simulations, or hands-on exercises for skills and attitudes.
Level 3: Your employees’ post-training job behaviour
Are the workers taking the new knowledge/skills/attitudes from training and applying them at work where it counts? Observing the employees’ on-the-job work behaviour will determine this, as will other performance-based metrics.
Level 4: Quantifiable business results
Did the training achieve the desired business goal (e.g. were workplace incidents reduced)?
Where do you start?
There is one dominant model used to evaluate training effectiveness. It is the Kirkpatrick Model, which is built around a four-step process, in which each step (or level) adds precision, but also requires more time-consuming analysis and greater cost.
Step 1: Evaluating reactions
This measures how your participants value the training and determines whether they were engaged, and believe they can apply what they learned. Evaluation tools include end-of-course surveys that collect feedback as to whether participants are satisfied with the training, and whether they believe the training is effective.
Step 2: Evaluating learning
Level two measures whether participants actually acquired the knowledge and skills the training was intended to deliver.
At this step, your evaluation tools will include:
- pre- and post-tests, and quizzes
- observation of learners (e.g. did an employee execute a particular skill effectively?)
- successful completion of activities
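As a rough illustration of the pre- and post-test comparison above, the sketch below computes the average score gain across a group of learners. All names, scores, and the scoring scale are hypothetical:

```python
# Hypothetical sketch: comparing paired pre- and post-test scores to
# gauge learning at level two. Scores are illustrative percentages.

def average_gain(pre_scores, post_scores):
    """Return the mean score improvement across paired pre/post tests."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

pre = [55, 60, 48, 72]    # scores before training
post = [80, 85, 70, 90]   # scores after training

print(f"Average gain: {average_gain(pre, post):.1f} points")  # 22.5 points
```

A positive average gain suggests learning occurred, though you would normally also look at the spread of individual gains, not just the mean.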
Step 3: Evaluating behaviour
Here the measures assess whether training has had a positive effect on job performance (transfer of applied learning). This level calls for a cost-benefit decision, because evaluating on-the-job behaviour can be resource-intensive and time-consuming. Consider performing a ‘level three’ evaluation for safety skills where errors carry high consequences, and where you want to be sure that safety skills and performance transfer to the job.
Your evaluation tools will include:
- work observation
- focus groups
- interviews with workers and management
Step 4: Evaluating results
At this level, you will measure whether the training is achieving results. To do this, ask yourself such questions as:
- is the training improving safety performance?
- has training resulted in better quality?
- is there increased productivity?
- have sales increased?
- has customer service improved?
The challenge here is that many factors influence performance, so it is difficult to attribute improved performance to training alone.
Here your evaluation tools will include:
- measuring the reduction in the number, or severity, of incidents or accidents compared to the organisation’s performance (or contract goals)
- measuring the reduction in total recordable cases (TRC)
- measuring the reduction in the DART rate (days away, restricted, or transferred).
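The TRC and DART metrics above are typically normalised as incidence rates per 100 full-time workers per year, using the standard OSHA-style formula: cases × 200,000 ÷ total hours worked (200,000 being 100 workers × 40 hours × 50 weeks). A minimal sketch, with made-up case counts and hours:

```python
# Illustrative calculation of an OSHA-style incidence rate, as used for
# TRC and DART metrics. All case counts and hours below are invented.

HOURS_BASE = 200_000  # 100 full-time workers x 40 hours x 50 weeks

def incidence_rate(cases, hours_worked):
    """Cases per 100 full-time workers per year."""
    return cases * HOURS_BASE / hours_worked

# Hypothetical before/after comparison for a site with 400,000 hours worked
trc_before = incidence_rate(12, 400_000)  # 6.0
trc_after = incidence_rate(8, 400_000)    # 4.0
print(f"TRC rate before: {trc_before:.1f}, after: {trc_after:.1f}")
```

Comparing the rate before and after training (rather than raw case counts) controls for changes in headcount or hours worked, though, as noted above, other factors can still drive the change.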