Human reliability analysis

The concept of Human Reliability Analysis (HRA) reflects an understanding that people and systems are not error-proof, and that improving reliability requires an understanding of how and why errors occur, which in turn leads to better mitigation strategies. Essentially, HRA aims to quantify the likelihood of human error for a given task. HRA can assist in identifying vulnerabilities within a task, and may provide guidance on how to improve reliability for that task.

A number of HRA techniques have been developed for use in a variety of industries, many of them freely available. A review of HRA methods has been published by the U.K. Health and Safety Executive. Generally, HRA tools calculate the probability of error for a particular type of task, while taking into account the influence of performance shaping factors. Quantitative techniques refer to databases of human tasks and associated error rates to calculate an average error probability for a particular task. Qualitative techniques guide a group of experts through a structured discussion to develop an estimate of failure probability, given specific information and assumptions about tasks and conditions.
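The database lookup at the heart of a quantitative technique can be sketched in a few lines. Note that the task types and nominal error probabilities below are purely illustrative placeholders, not values drawn from any published HRA database:

```python
# Minimal sketch of a quantitative HRA lookup.
# Task types and nominal human error probabilities (HEPs) are
# illustrative placeholders only, not values from a published database.
NOMINAL_HEP = {
    "routine_simple_action": 0.001,
    "procedure_following": 0.003,
    "diagnosis_under_time_pressure": 0.01,
}

def nominal_error_probability(task_type: str) -> float:
    """Return the average (nominal) error probability for a task type."""
    try:
        return NOMINAL_HEP[task_type]
    except KeyError:
        raise ValueError(f"No nominal HEP recorded for task type: {task_type}")
```

In practice the granularity of the task types, and the error rates attached to them, vary considerably between published methods.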

The basic process

A hierarchical task analysis is conducted on critical activities (i.e. activities with the potential to cause a hazardous event), starting with the identification of individual tasks and steps within an activity. Potential errors associated with specific steps are then highlighted, often through the use of keyword prompts identifying possible error mechanisms (e.g. step skipped, right action on wrong object, wrong action on right object, transposed digits). Once the possible error mechanisms have been identified, associated error probabilities can be estimated.

A typical quantitative approach first identifies the nominal error rate for the task type. Task types vary between tools and may be very specific or quite general. The influence of relevant performance shaping factors is then calculated for the task; these factors may increase or decrease the likelihood of error for the task in question. The resulting overall error probability reflects the average error rate for the task type, adjusted for the influence of relevant situational factors. Once potential sources of error have been identified, actions can be developed to minimise or mitigate their impact and improve the reliability of human performance within the task.
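The quantification step described above can be illustrated as follows. This sketch follows the general shape of multiplier-based methods such as SPAR-H, in which a nominal error probability is adjusted by performance shaping factor multipliers; the factor names and multiplier values here are assumptions for illustration only, not values from NUREG/CR-6883, and the published method's additional adjustment for multiple negative factors is omitted:

```python
from math import prod

def adjusted_hep(nominal_hep: float, psf_multipliers: dict) -> float:
    """Adjust a nominal human error probability (HEP) by performance
    shaping factor (PSF) multipliers. Multipliers above 1 increase the
    likelihood of error; multipliers below 1 decrease it.
    All values here are illustrative, not taken from a published method."""
    composite = prod(psf_multipliers.values())
    hep = nominal_hep * composite
    return min(hep, 1.0)  # a probability cannot exceed 1

# Hypothetical task: nominal HEP of 0.001, degraded by time pressure
# and poor procedures, improved by good ergonomics.
psfs = {"available_time": 10, "procedures": 5, "ergonomics": 0.5}
# adjusted_hep(0.001, psfs) -> 0.001 * 10 * 5 * 0.5 = 0.025
```

The cap at 1.0 matters because a handful of strongly negative multipliers can otherwise push the arithmetic above a valid probability, which is one reason published methods apply a correction when several negative factors combine.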

Image - The SPAR-H Human Reliability Analysis Method (NUREG/CR-6883, INL/EXT-05-00509)

U.S. Nuclear Regulatory Commission (2005). The SPAR-H Human Reliability Analysis Method (nrc.gov).

Note that this summary provides a basic overview of a quantitative approach to Human Reliability Analysis. Variations exist between published HRA methods. Organisations seeking to implement HRA should choose the method best suited to their needs and capabilities.

Limitations of HRA data

As with any approach to risk quantification, the data generated during an HRA should be interpreted with an understanding of the limitations of the particular methodology in use. In general, the following points may assist in determining the applicability of HRA data:

  • The quality of the analysts can influence the validity of the HRA findings. Consider how long they have worked as analysts, their experience with the method in question, their experience with the industry, and their knowledge of the organisation and the particular workplace(s)

  • The assumptions made by the analysts will have a significant bearing on their findings. Assumptions should be articulated within the final report and taken into consideration when reviewing the findings 

  • The type of task subject to the analysis can influence the validity of findings. Low level tasks that are highly routine and carried out in well controlled environments tend to be better understood, and therefore probabilities can be more accurate. However, probabilities may be less accurate for tasks that are carried out in more complex environments or for tasks which require interpretation of multiple and contradictory sources of data

  • The data sources used to calculate probabilities may not be relevant when analysing tasks associated with the use of very new technology. It is possible that the types and frequencies of errors associated with the use of new technology may vary significantly from those contained within the databases referenced during the analysis

  • The impact of organisational context is generally not well represented in most quantitative HRA methods. An additional qualitative approach may be required to improve accuracy of probabilities and predictions where organisational context is likely to have a significant impact on error

  • The relevance of performance shaping factors assessed within the analysis should be considered, along with the potential impact of other factors that are not part of the analysis methodology used. Additionally, the interaction between performance shaping factors does not appear to be represented in most methods, so consideration should be given as to how such interactions can be incorporated into the overall analysis

  • Consider a combination of quantitative and qualitative methods as a way of facilitating more robust and reliable analysis.

In addition to the strengths and limitations of various analysis methods, consideration should also be given to the ways in which the analysis findings will be applied. Calculation of error probabilities alone is likely to be of limited use, particularly when considering the factors impacting the validity of such calculations. In particular, caution should be exercised if using error probabilities as cut-off points (e.g. our error probability must be below x to perform this task).

A proactive approach

Useful information is generated throughout the HRA process and, if applied appropriately, can provide valuable guidance to organisations. HRA information can be used in a number of proactive approaches, including:

  • during front end engineering and detailed design to identify design options that are likely to reduce opportunities for error

  • during system modifications and other relevant changes to determine whether the changes are likely to impact error in either direction

  • to assist in the identification of commonly experienced performance shaping factors, enabling the organisation to address these at a tertiary level

  • during incident investigations to identify contributing latent conditions

  • at a workplace level, identifying and modifying those performance shaping factors which contribute to error.