System-induced error approach


The structure of this book is based on a model of human error, its causes, and its role in accidents that is represented by Figures 1.4 and 1.5. This perspective is called the system-induced error approach. Up to now, only certain  [c.12]


The overlap between the error tendencies circle and the error-inducing environment circle represents the likelihood that an error would occur. However, given appropriate conditions, recovery from an error is highly likely. Recovery may arise either if the person making the error detects it before its consequences (accidents, product loss, degraded quality) occur, or if the system as a whole is made insensitive to individual human errors and supports error recovery. These aspects of the system-induced error approach are represented as the third circle in Figure 1.5. Thus, the dark area in the center of the model represents the likelihood of unrecovered errors with significant consequences. At least two major influences can be controlled by the organization to reduce the likelihood of error. The first of these is the design of the system to reduce the mismatch between the demands of the job and the capabilities of the worker to respond to these demands. This area can be addressed by modifying or improving performance-influencing factors that either reduce the levels of demand, or provide greater capability for the humans (e.g., through better job design, training, procedures, team organization). The other area that will have a major impact on error is that of organizational culture. This issue is discussed in Chapter 8.  [c.13]
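The three-circle model above can be read as a simple probabilistic relationship: an unrecovered error requires both that an error occurs (the overlap of error tendencies and the error-inducing environment) and that recovery fails. A minimal sketch of that reading (the function name and all numbers here are illustrative assumptions, not taken from the book):

```python
def unrecovered_error_likelihood(p_error: float, p_recovery: float) -> float:
    """Toy reading of the three-circle model: the dark central area
    corresponds to an error occurring AND recovery failing."""
    return p_error * (1.0 - p_recovery)

# Illustrative numbers only: improving performance-influencing factors
# lowers p_error; error-tolerant system design raises p_recovery.
baseline = unrecovered_error_likelihood(p_error=0.10, p_recovery=0.50)
improved = unrecovered_error_likelihood(p_error=0.05, p_recovery=0.90)
```

The sketch makes the book's point concrete: the organization can shrink the dark central area by attacking either factor, reducing the error likelihood through better PIFs or raising the recovery likelihood through error-tolerant design.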

The system-induced error approach can be restated in an alternative form as an accident causation model (see Figure 1.4). This shows how error-inducing conditions in the form of inadequate PIFs interact with error tendencies to  [c.13]



This chapter has provided an overview of the book and has described its underlying philosophy, the system-induced error approach (abbreviated to the systems approach in subsequent chapters). The essence of the systems approach is to move away from the traditional blame and punishment approach to human error, to one which seeks to understand and remedy its underlying causes.  [c.19]

I.3. The System-Induced Error Approach  [c.256]

As described in Chapters 1 and 2, the system-induced error approach comprises the following elements:  [c.256]


I.4. Implications of the System-Induced Error Approach for Data Collection  [c.257]

FIGURE 1.5. System-Induced Error Approach.  [c.404]

The control of human error at the most fundamental level also needs to consider the impact of management policy and organizational culture. The concepts introduced in Chapter 1, particularly the system-induced error approach, have emphasized the need to go beyond the direct causes of errors (for example, overload, poor procedures, or poor workplace design) to consider the underlying organizational policies that give rise to these conditions. Failures at the policy level which give rise to negative performance-influencing factors at the operational level are examples of the latent management failures discussed in Chapter 1 and in Section 2.2.2.  [c.85]

The approaches described so far tackle the problem of error in three ways: first, by trying to encourage safe behavior (the traditional safety approach); second, by designing the system to ensure that there is a match between human capabilities and system demands (the human factors engineering approach); and third, by understanding the underlying causes of errors, so that error-inducing conditions can be eliminated at their source (the cognitive modeling approach). These strategies provide a technical basis for the control of human error at the level of the individual worker or operating team.  [c.85]

Because of the emphasis on modeling accident causation, data collection systems based on the system-induced error approach are likely to modify their data collection strategies over time. Thus, as evidence accumulates that the existing causal categories are inadequate to account for the accidents and near misses that are reported, the data collection philosophy will be modified, and a new accident causation model developed. This, in turn, will be modified on the basis of subsequent evidence.  [c.259]

An examination of the application of IM to MD shows very good numerical properties (e.g., energy conservation and stability) for moderate timesteps, larger than those usable with Verlet [62, 41]. However, integrator-induced resonance artifacts limit the application of this approach to larger integration stepsizes. Essentially, resonance occurs at special timesteps that are related in a complex way (via the stepwise propagation transformation) to the various timescales of the motion [63, 64, 6, 65]. At those timesteps, a concerted effect stemming from one component of the motion (e.g., heating of a bond-stretch vibrational mode) leads to very large energetic fluctuations or instability (e.g., bond rupture). Thus, resonance problems lead in general to erratic, rather than systematic, error patterns as a function of timestep. They are also method and system dependent [63, 66, 67], occur for both implicit and explicit schemes (e.g., Verlet and IM [63, 62]), and depend strongly on the fastest frequency in the system and possibly on the coupling strength to other vibrational modes.  [c.241]
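The dependence of integration error on the timestep relative to the fastest frequency can be illustrated with a toy sketch. This uses explicit velocity Verlet on a single harmonic oscillator, not the implicit midpoint (IM) scheme discussed above, and it shows only the simplest effect, the hard stability limit at omega·dt = 2, rather than the subtler resonance artifacts of the text; all names and parameter values are illustrative assumptions:

```python
def verlet_energy_drift(omega: float, dt: float, steps: int = 100) -> float:
    """Velocity-Verlet integration of a unit-mass harmonic oscillator
    x'' = -omega^2 x, starting from x=1, v=0.
    Returns the maximum relative deviation of the total energy."""
    x, v = 1.0, 0.0
    e0 = 0.5 * (v * v + (omega * x) ** 2)  # initial energy
    max_dev = 0.0
    for _ in range(steps):
        a = -omega * omega * x
        x += v * dt + 0.5 * a * dt * dt      # position update
        a_new = -omega * omega * x
        v += 0.5 * (a + a_new) * dt          # velocity update
        e = 0.5 * (v * v + (omega * x) ** 2)
        max_dev = max(max_dev, abs(e - e0) / e0)
    return max_dev

# Well below the stability limit (omega*dt < 2) the energy error stays
# small and bounded; just beyond it, the trajectory blows up.
small = verlet_energy_drift(omega=1.0, dt=0.1, steps=1000)  # bounded
large = verlet_energy_drift(omega=1.0, dt=2.1, steps=100)   # unstable
```

The sharp transition in behavior as the timestep crosses a frequency-dependent threshold is the simplest analogue of the point made above: integration error does not grow smoothly with stepsize but depends critically on how the timestep relates to the fastest motions in the system.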

Source: Guidelines for Preventing Human Error in Process Safety (1994)