What the Data Shows About Why “Human Error” Ends Investigations Too Early in Post-Secondary Science Labs
Brian P. Collins, PhD, and James Palcik
Safer STEM
After a lab accident, everyone wants the same thing: an answer.
An explanation that fits in a report.
A cause that feels stable.
A reason the incident can be closed.
Often, that explanation arrives in two words:
Human error.
In a meta-analysis of 213 documented college and university laboratory and technical shop accidents, the phrase appeared again and again, across disciplines, institutions, tools, and levels of severity. And almost every time, it marked the same precise moment:
The moment the investigation stopped asking critical questions.
A Familiar Explanation, Repeated
“Human error” appeared across incidents involving:
- Table saws, planers, and jointers
- Rotating sanders and grinders
- Material handling and adjustment tasks
- Routine setup, transition, and cleanup activities
Each incident made sense on its own.
An operator misjudged.
A hand moved too close.
A guard was disengaged briefly.
But when the cases were examined together, a pattern emerged.
Injuries clustered during adjustments, not primary operations.
Guards were present, but “temporarily” bypassed.
Tasks were described as routine, familiar, and ordinary.
Time pressure hovered in the background, unnamed.
The tools changed.
The spaces changed.
The people changed.
The explanation stayed the same.
When a Label Becomes a “Full Stop”
Calling something “human error” feels decisive. It offers clarity and closure. It suggests an obvious solution: remind people to be careful.
But in the science of safety, human error is not a root cause. It is a description of an outcome, a signal that points toward the conditions that made the action likely, not an explanation that stands on its own. In an instructional space, those conditions usually amount to a gap in awareness, supervision, or controls.
This distinction is explicit in OSHA accident investigation guidance, which emphasizes identifying system and process contributors rather than stopping at individual actions (OSHA, Accident Investigation Guide).
Across the dataset, incidents labeled as human error repeatedly shared deeper, overlooked contributors:
- Gloves worn near rotating equipment, increasing entanglement risk
- PPE that complied with policy but did not match the task being performed
- Procedures that described ideal work, not real transitions
- Supervision that existed on paper, but not at the moment of choice
Nothing here was mysterious.
Nothing required perfect foresight.
The explanation wasn’t wrong.
It was simply too small.
Why Organizations Like the Phrase
From an organizational psychology perspective, labels do important work. They simplify. They reduce uncertainty. They allow systems to keep moving.
“Human error” shifts attention away from design and toward individuals. Once it appears in a report, certain questions tend to vanish:
- Why did this action feel reasonable at the time?
- What cues suggested the task had become safer than it was?
- How did routine or familiarity narrow attention?
- What tradeoffs was the individual implicitly making?
These are uncomfortable questions. They implicate scheduling, layout, incentives, and norms. They also happen to be the questions most closely linked to prevention.
As Sidney Dekker’s work on human factors makes clear, accidents rarely result from a lack of care. More often, they emerge from care applied under constraints (Dekker, 2014, The Field Guide to Understanding Human Error).
What We Miss When We Stop Too Early
When investigations ended at “human error,” the data showed that institutions often missed contributors well within their control:
- Task sequences that encouraged unsafe hand placement during setup
- Equipment layouts that obscured hazards once guards were disengaged
- Informal norms that contradicted written standard operating procedures
- End-of-period or end-of-shift time pressure that compressed attention during shutdown
None of these fit neatly under a single label.
All of them are addressed by the hierarchy of controls emphasized by NIOSH and adopted across OSHA-aligned safety programs.
This is the difference between training people to be careful and designing systems that make care easier.
A Safer Way to End an Investigation
The most effective investigations in the dataset did not deny human error. They refused to treat it as an endpoint.
Instead of concluding, “The individual made a mistake,”
they asked, “What made this mistake easy to make, even for a capable, trained person?”
That shift reframes responsibility:
- From blame to design
- From reminders to controls
- From closure to learning
It aligns with NFPA 45 laboratory fire and life safety standards, ACS laboratory safety guidance, and nationally accepted professional safety practices in task-based hazard analysis.
A Reflection for University Safety Leadership
“Human error” is a description.
It is rarely an explanation, and never a conclusion.
When institutions allow it to be the final line in an investigation, they trade insight for efficiency. They close the report and leave the system unchanged.
Safer organizations do something different. They treat errors as invitations to look outward, not inward. They ask not who failed, but what succeeded in making failure likely.
Labels bring comfort.
Questions bring change.
Brief Note on Method
This article draws on a meta-analysis of 213 documented college and university laboratory and instructional workspace accidents across multiple institutions. Rather than evaluating incidents individually, the analysis examined recurring language and causal framing across reports, particularly where investigations concluded despite repeated system-level conditions appearing in similar incidents.
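The cross-report tally described above can be sketched in a few lines of code. This is a minimal illustration only: the sample report texts, the phrase list, and the counting approach are all assumptions invented for the example, not the study's actual dataset or coding scheme.

```python
from collections import Counter

# Hypothetical investigation-report conclusions, for illustration only.
# These are NOT drawn from the 213-incident dataset discussed above.
reports = [
    "Cause: human error. Operator's hand moved too close during adjustment.",
    "Root cause identified as human error; guard temporarily disengaged.",
    "Equipment layout obscured the blade during cleanup; human error cited.",
    "Time pressure at end of shift; incident attributed to operator misjudgment.",
]

# Causal framings to tally across reports. This phrase list is an
# assumption chosen for the sketch, not the study's coding scheme.
framings = ["human error", "operator misjudgment", "time pressure", "guard"]

counts = Counter()
for text in reports:
    lowered = text.lower()
    for phrase in framings:
        if phrase in lowered:
            counts[phrase] += 1

# Print each framing with how many of the sample reports mention it.
for phrase, n in counts.most_common():
    print(f"{phrase}: {n} of {len(reports)} reports")
```

Even on a toy sample like this, the dominant framing surfaces immediately; the actual analysis applied the same idea at scale, looking for cases where the "human error" label recurred despite similar system-level conditions appearing across incidents.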
About This Research and Safer STEM
Safer STEM conducts ongoing analysis of laboratory and instructional workspace incidents across post-secondary education to better understand how risk emerges in real teaching and research environments. This article is part of a continuing series that examines patterns across incidents, with the goal of supporting system-level learning, safer design, and more effective prevention.
In parallel with this research, Safer STEM works with colleges and universities to translate these insights into practice through Safety-First AI tools, customized training, and consulting focused on laboratory and instructional workspace safety. Post-secondary institutions interested in engaging further can connect with Safer STEM to explore approaches to hazard recognition, task-based risk analysis, and alignment with established safety standards.