
From blame culture to learning culture: a practical transition guide


There is a pattern that appears in almost every serious workplace incident investigation. When something goes wrong, the initial response is to find the person responsible — the driver who was speeding, the worker who bypassed the guard, the supervisor who missed the hazard. Someone gets retrained, disciplined, or moved on. A corrective action gets filed. And the organisation moves forward, satisfied that the problem has been addressed.


Except, often, it has not. The same incident happens again six months later. Different person. Same conditions.


This is the central problem with blame culture in workplace safety: it addresses the symptom (the individual who was last in contact with the hazard) while leaving the underlying conditions intact. It creates an organisation that is very good at identifying who was involved in incidents, and much less good at understanding why incidents occur.


The alternative — a learning culture — is better understood intellectually than it is implemented in practice. Most organisations know they should encourage near-miss reporting, conduct root cause analysis rather than person-finding, and create an environment where workers feel safe raising concerns. What they often struggle with is how to actually make that shift, especially in facilities where blame has been the default response for years.


This post attempts to be genuinely practical about that transition.




Why blame feels rational (and why it is not)


Before looking at the path forward, it is worth being honest about why blame culture persists. It is not usually a cynical choice. It persists because it is cognitively satisfying and socially expected, and because it often genuinely feels like accountability.


When something goes wrong, assigning fault to an individual satisfies the intuition that someone must be responsible. It feels like justice. It produces a clear action (the individual is retrained, disciplined, or removed). And it is far easier than confronting the more uncomfortable possibility that the system itself produced the outcome.


Safety researchers borrow a term from social psychology for this: the "fundamental attribution error", the tendency to explain others' behaviour through character or competence rather than through the context in which they were operating. Professor Sidney Dekker, one of the most widely cited thinkers in safety science, puts it this way: people make decisions that seem entirely rational to them in the moment, given their immediate knowledge and context. Understanding what happened requires understanding that context, not just identifying who was present.


Dekker's foundational work, developed in aviation safety and later applied to industry and healthcare, demonstrated that a punitive response to incidents does not just fail to prevent recurrence. It actively makes organisations less safe by pushing risk underground. When workers expect blame, near-miss reporting declines. Events that are never formally reported cannot be analysed, and conditions that would have served as warning signs go unexamined. The paradox Dekker identifies is that punitive cultures raise error rates precisely by making mistakes invisible: what cannot be seen cannot be fixed.


James Reason, whose Swiss Cheese Model of accident causation shaped modern safety thinking in the 1990s, made the same underlying argument: serious incidents are almost never the product of a single human error. They are the product of multiple, layered weaknesses — gaps in barriers that normally prevent harm from reaching an outcome. Removing one human from the system does not close those gaps. It removes a person and leaves the gaps open for the next one.
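

To see why layered weaknesses matter more than any single act, it helps to make the model's arithmetic explicit. The sketch below is a deliberately simplified illustration, not Reason's own formalism: it assumes each barrier fails independently, with invented probabilities, so that harm occurs only when every layer fails at once.

```python
# Illustrative sketch of the Swiss Cheese Model's core arithmetic.
# The probabilities are invented for demonstration; real barriers are
# neither fully independent nor this easy to quantify.

def p_harm(barrier_failure_probs):
    """Probability that a hazard passes through every barrier,
    assuming (simplistically) that barriers fail independently."""
    p = 1.0
    for p_fail in barrier_failure_probs:
        p *= p_fail
    return p

# Four layers of defence: machine guard, procedure, supervision, PPE.
barriers = [0.05, 0.10, 0.20, 0.30]
print(f"All barriers in place: {p_harm(barriers):.6f}")      # ~0.0003

# Remove one layer (say, a bypassed guard) and the risk jumps 20x,
# regardless of which individual happens to be present that day.
print(f"One barrier removed:   {p_harm(barriers[1:]):.6f}")  # ~0.006
```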






What high reliability organisations figured out first


The clearest evidence that learning cultures produce better safety outcomes comes not from theory but from industries that had no choice but to get this right.


Commercial aviation is the most studied example. Following a series of disasters in the 1970s and early 1980s — many of which were traced to communication failures rather than technical faults — the industry developed Crew Resource Management (CRM) training, which explicitly addressed the problem of junior crew members being unable to challenge senior pilots even when they saw danger. The same period saw the development of confidential near-miss reporting systems that gave pilots a safe channel to disclose safety events without fear of career consequences.


The National Transportation Safety Board, as Dekker and others have noted, investigates crashes to understand systemic contributors and prevent recurrence — not to punish pilots. That learning orientation is a significant part of why commercial aviation achieved the extraordinary safety record it now holds. The rate of fatal accidents in commercial air travel has fallen by orders of magnitude over the past five decades, even as the number of flights has grown dramatically.


Researchers Karl Weick and Kathleen Sutcliffe, studying nuclear power plants, naval aircraft carriers, and air traffic control systems through the 1990s and 2000s, identified these organisations as "High Reliability Organisations" (HROs): entities that operate in genuinely hazardous environments while maintaining extraordinarily low rates of serious failure. One of Weick and Sutcliffe's five defining principles of HROs is "preoccupation with failure": not anxiety, but the practice of treating near misses and weak signals as valuable information rather than as proof that the system is working. In HROs, bad news is welcomed, because it arrives while something can still be done about it.


The industrial workplace is not a nuclear aircraft carrier. But the underlying principles translate directly. Organisations that treat near misses as free information about latent risk, rather than as embarrassments to be minimised, are building the informational foundation that learning cultures depend on.






The just culture framework


The safety science literature has converged on a concept called "just culture" as the operating model for organisations trying to move away from blame without abandoning accountability entirely.


Just culture, developed from James Reason's work and extended by David Marx and Sidney Dekker, makes a critical distinction between three types of behaviour that look similar on the surface but require very different responses (a brief sketch of the triage logic follows the list):


Human error — an honest mistake in a complex system. The appropriate response is to support the individual and examine the conditions that made the error likely.


At-risk behaviour — a shortcut or workaround that has become normalised, often because of time pressure, unclear procedure, or the sense that "we always do it this way." The appropriate response is coaching and system redesign.


Reckless behaviour — conscious disregard of known, serious risk. This warrants accountability. Just culture is not a blanket amnesty. It distinguishes between mistakes made by people operating in good faith within a system, and choices made in defiance of it.
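

To make the triage concrete, here is a minimal sketch of the decision logic, using the three categories above. The category names follow the just culture literature; the response wording is an illustration for this post, not a substitute for a real policy.

```python
# A minimal sketch of just culture triage. The categories follow
# Reason, Marx, and Dekker; the responses are simplified illustrations.
from enum import Enum, auto

class Behaviour(Enum):
    HUMAN_ERROR = auto()  # honest mistake made in good faith
    AT_RISK = auto()      # normalised shortcut or workaround
    RECKLESS = auto()     # conscious disregard of known, serious risk

def respond(behaviour: Behaviour) -> str:
    if behaviour is Behaviour.HUMAN_ERROR:
        return "Support the individual; examine the conditions that made the error likely."
    if behaviour is Behaviour.AT_RISK:
        return "Coach the individual; redesign the system that normalised the shortcut."
    return "Apply proportionate accountability; just culture is not a blanket amnesty."

print(respond(Behaviour.AT_RISK))
```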


The practical value of this framework is that it gives organisations a principled basis for responding consistently to safety events. Without it, the default tends toward either punishing all errors (which drives underreporting) or attempting to absolve all errors (which abandons accountability and loses the trust of workers who expect fairness). Both extremes undermine the information environment that learning requires.


Under New Zealand's Health and Safety at Work Act 2015 (HSWA), and the equivalent model WHS laws in Australia, PCBUs (persons conducting a business or undertaking) have an obligation to proactively identify and manage risk. A learning culture is not just a nice-to-have for organisational performance: it is part of what it means to take that obligation seriously. An organisation that systematically suppresses near-miss reporting is, in effect, choosing not to know about the conditions that are most likely to produce harm.




Amy Edmondson and the psychological safety problem


Alongside just culture, Harvard Business School professor Amy Edmondson's research on psychological safety provides the other major theoretical pillar for understanding why learning cultures succeed or fail.


Edmondson's original research, conducted with hospital teams in the 1990s, produced a counterintuitive finding: the highest-performing teams reported more errors, not fewer. The explanation was not that they were less competent. They were simply more willing to talk about mistakes. In psychologically unsafe teams, errors were hidden. In psychologically safe teams, they were discussed, examined, and learned from. The difference was not in error rates. It was in detection rates.


Psychological safety, as Edmondson defines it, is the shared belief that a team is safe for interpersonal risk-taking: that raising a concern, admitting a mistake, or challenging a practice will not be met with humiliation, punishment, or dismissal. It is not the same as comfort or permissiveness. High psychological safety can coexist with high standards — in fact, Edmondson's research demonstrates that psychological safety and accountability reinforce each other when both are present.


For industrial workplaces, this has a concrete implication. A facility's near-miss reporting rate is not just a metric. It is a measure of psychological safety. When workers report frequently, they are signalling that they believe reporting is safe and useful. When reporting is low, the most likely explanation is not that the workplace is unusually hazard-free. It is that workers have learned — through accumulated experience of how reports are received — that raising concerns carries a cost they are not willing to pay.
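

One practical way to take that seriously is to trend the reporting rate like any other operational metric. The sketch below is illustrative only: the per-200,000-hour normalisation borrows the convention used for recordable injury rates, and the figures and threshold are invented.

```python
# Illustrative: treat near-miss reporting rate as a leading indicator.
# The 200,000-hour normalisation mirrors the convention used for
# recordable injury rates; all figures here are invented.

def reporting_rate(reports: int, hours_worked: float) -> float:
    """Near-miss reports per 200,000 hours worked."""
    return reports / hours_worked * 200_000

monthly = [  # (near-miss reports, hours worked) for four months
    (42, 35_000), (38, 34_000), (21, 35_500), (12, 36_000),
]

rates = [reporting_rate(r, h) for r, h in monthly]
for month, rate in enumerate(rates, start=1):
    print(f"Month {month}: {rate:.0f} reports per 200k hours")

# A sustained fall in the rate is a prompt to ask how reports are being
# received, not evidence that the workplace has become less hazardous.
if rates[-1] < 0.5 * max(rates):
    print("Reporting rate has more than halved: review the response to reports.")
```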


Google's Project Aristotle, which studied 180 teams over two years from 2012, found psychological safety to be the strongest predictor of team effectiveness across all factors studied — ranking ahead of team composition, individual performance, and leadership style. The finding has been replicated across industries and contexts. It is not specific to healthcare or technology. It is a property of how human teams function under conditions of uncertainty and interdependence: precisely the conditions that describe most high-risk workplaces.




What the transition actually looks like


Understanding the theory is one thing. Making the shift in a facility that has operated under blame norms for years is a different challenge. A few things tend to matter most in practice.


The first incident after you announce the change is the real signal. Organisations can announce a commitment to learning culture and just culture, run workshops, and update policies. But the first time something goes wrong after that announcement, the response will be watched by every worker who was paying attention. If blame returns in that moment — if the investigation focuses on who rather than why, if someone is disciplined without a genuine systemic analysis — the announcement is effectively cancelled. Learning culture is not built through statements. It is built through consistent responses over time, especially under pressure.


Language matters more than most organisations realise. Edmondson notes that calling something an "accident" rather than an "error," or a "study" rather than an "investigation," changes how people engage with the process. This is not wordsmithing. Language shapes the cognitive frame that determines whether people see an event review as a learning process or an accountability hearing. Organisations making the shift often benefit from explicitly redesigning the language of their incident response processes.


Accountability does not disappear — it changes form. One of the most common concerns about moving away from blame culture is that it will reduce accountability. The evidence suggests the opposite is true when just culture is properly implemented. In a learning culture, accountability is systemic and forward-looking: what will change so that this does not happen again, and who is responsible for making that change? Leaders become accountable for the quality of the system. Workers become accountable for engaging honestly with safety processes. In other words, accountability shifts from a backward-looking search for culpability to a forward-looking commitment to improvement.


You need data that is independent of self-report. One of the structural challenges of building a learning culture is that it depends heavily on workers reporting what they observe — and the reporting system is exactly what blame culture has damaged. Recovery takes time, and in the interim, organisations are often working with incomplete information about what is actually happening at ground level.


This is where continuous monitoring technology changes the dynamic materially. When a computer vision AI system like inviol is running across a facility's highest-risk areas, it generates an objective event record that does not depend on self-report. Near misses that would never have been logged in a blame culture become part of the data set: visible, time-stamped, available for review. This creates a baseline of factual information that supports genuine root cause analysis rather than retrospective reconstruction.
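

To make "objective event record" concrete, the sketch below shows the minimum such a record might contain. The field names are hypothetical illustrations for this post, not inviol's actual data model.

```python
# Hypothetical sketch of a monitored safety event record. Field names
# are invented for illustration; they do not describe inviol's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SafetyEvent:
    timestamp: datetime                 # when the event was detected
    zone: str                           # monitored area of the facility
    event_type: str                     # what was observed, not who did it
    severity: str                       # e.g. "near_miss"
    clip_ref: str                       # pointer to face-blurred footage
    contributing_conditions: list[str] = field(default_factory=list)

event = SafetyEvent(
    timestamp=datetime.now(timezone.utc),
    zone="loading dock 3",
    event_type="pedestrian inside vehicle exclusion zone",
    severity="near_miss",
    clip_ref="clips/0093",
    contributing_conditions=["obstructed walkway", "shift-change congestion"],
)
print(f"{event.event_type} at {event.zone} ({event.severity})")
```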


Critically, this also means the coaching conversation that follows a safety event can be specific and concrete, rather than generic and presumptive. A supervisor reviewing actual footage with a worker — with faces blurred, focused on the system conditions that produced the event — is having a fundamentally different conversation from one reconstructed from memory and influenced by the need to assign responsibility. The former is a learning conversation. The latter, in a blame culture, too often becomes a performance review.


Over time, the data from continuous monitoring provides the leading indicator visibility that a mature learning culture depends on. inviol's customers see an average 67% reduction in risk and a 42% reduction in incidents over three years. That kind of sustained improvement is not achieved through blame cycles. It is achieved by an organisation that is systematically learning from what its data reveals about how work actually happens.


To understand how inviol supports the transition from reactive to learning-oriented safety management, book a demo.






The long view


The shift from blame culture to learning culture is not a project with a completion date. It is a reorientation of how an organisation relates to failure — and that reorientation happens incrementally, through hundreds of small decisions about how to respond when things go wrong.


What makes it worth pursuing, beyond the moral case for treating workers with fairness, is that it genuinely works. The evidence from aviation, healthcare, nuclear power, and increasingly from industrial workplaces is consistent: organisations that create the conditions for honest reporting and genuine learning reduce serious harm at a rate that blame cultures simply cannot match.


Blame feels like action. Learning is action.




Frequently Asked Questions


What is the difference between a blame culture and a learning culture in workplace safety?


A blame culture focuses on identifying who was responsible when something goes wrong. A learning culture focuses on understanding why — the conditions, system factors, and latent weaknesses that produced the event. Blame cultures tend to suppress near-miss reporting and leave underlying risk conditions intact. Learning cultures treat incident and near-miss data as valuable information for improving the system.


What is "just culture" and how does it relate to accountability?


Just culture is a framework developed from the work of safety researchers James Reason and Sidney Dekker. It distinguishes between human error (mistakes made in good faith in a complex system), at-risk behaviour (normalised shortcuts that warrant coaching and system redesign), and reckless behaviour (conscious disregard of known risk, which warrants accountability). Just culture is not blame-free — it is accountability that is proportionate and forward-looking, focused on preventing recurrence rather than assigning punishment.


Why do near-miss reporting rates matter so much for safety culture?


Near-miss reporting is a leading indicator: it provides data about risk before harm occurs. Research across aviation, healthcare, and industrial workplaces consistently shows that organisations with high near-miss reporting rates have better safety outcomes. Low reporting is typically a sign of psychological unsafety: workers have learned that raising concerns carries a cost. When near misses stop appearing in the reporting system, the risk has not disappeared; it has simply stopped being visible.


How does psychological safety relate to workplace safety performance?


Amy Edmondson's research at Harvard Business School established that psychological safety — the shared belief that a team is safe for interpersonal risk-taking — is a strong predictor of team effectiveness and learning behaviour. Teams with high psychological safety report errors and near misses more openly, discuss problems candidly, and improve faster. Google's Project Aristotle identified psychological safety as the top predictor of team performance across 180 teams. In industrial workplaces, this translates directly: facilities where workers feel safe raising concerns have better incident data and better long-term outcomes.


How can technology support a transition from blame to learning culture?


Continuous monitoring with computer vision AI creates an objective, independent record of safety events — one that does not depend on self-report. This means safety conversations can be grounded in specific, factual footage rather than recollection and assumption. It separates the identification of what happened from the question of who was at fault, supporting root cause analysis and coaching conversations that focus on conditions rather than culpability.


 
 