What trucking and warehouse safety can learn from aviation
- Nov 18, 2025
- 8 min read
Commercial aviation has roughly doubled in safety every decade since the late 1960s. An MIT study found that the risk of a fatality from commercial air travel fell from 1 per 350,000 passenger boardings in 1968-1977 to 1 per 13.7 million in 2018-2022. IATA data shows accident rates declining from 3.72 per million flight sectors in 2005 to 1.32 in 2025. The FAA reports that US commercial aviation fatalities decreased by 95% over two decades.
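As a quick sanity check on that trajectory, the MIT figures above imply roughly one doubling of safety per decade (a sketch only; the two date ranges are treated as about five decades apart):

```python
import math

# Fatality risk per passenger boarding, per the MIT study cited above.
risk_1968_1977 = 1 / 350_000
risk_2018_2022 = 1 / 13_700_000

# Overall improvement factor across roughly five decades.
improvement = risk_1968_1977 / risk_2018_2022  # ~39x

# Number of halvings of risk, spread over ~5 decades.
doublings = math.log2(improvement)  # ~5.3
per_decade = doublings / 5          # ~1.06, i.e. "roughly doubled every decade"

print(f"{improvement:.0f}x safer overall, {per_decade:.2f} doublings per decade")
```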
Now compare that trajectory to warehousing and trucking. In the US, the transportation and warehousing sector consistently records one of the highest fatal injury rates of any industry. Forklift incidents alone account for roughly 85 fatalities and thousands of serious injuries each year. Globally, the pattern is similar: WorkSafe NZ and Safe Work Australia report persistent workplace vehicle and pedestrian interaction fatalities across the logistics sector.
Both industries move heavy equipment at speed in complex environments. Both have well-established regulatory frameworks. Both employ highly trained operators. Yet one industry has achieved extraordinary, sustained safety improvement, and the other has largely plateaued.
The difference isn't equipment or regulation. It's culture, data, and the systems that connect them.
Where warehousing and trucking are stuck
Most warehouse and trucking safety programmes are built on a compliance-plus-investigation model. You define the rules, train people on the rules, audit compliance, and investigate when something goes wrong. This framework is necessary. But it's fundamentally reactive, and it has a ceiling.
The investigation model is particularly limiting. In warehousing, the cycle typically looks like this: an incident occurs, an investigation is conducted, a root cause is identified (often "human error" or "failure to follow procedure"), corrective actions are assigned, and the investigation is filed. The learning, such as it is, stays within the investigation report. Maybe it gets shared in a toolbox talk. Maybe it doesn't.
The structural problem is that this model only generates learning when something goes wrong. The near miss that nobody reported, the exclusion zone breach that happened at 3am, the forklift-pedestrian interaction that was close but not close enough to trigger a report: these events are invisible to the system. They happen, they create risk, and the organisation learns nothing from them.
Aviation had exactly this problem 40 years ago. What it did next is instructive.

Lesson 1: report everything, blame nothing
The single most important shift in aviation safety culture was the move from a punitive reporting environment to a "just culture." Under just culture principles, individuals are not blamed for honest errors. They are encouraged (and in many cases required) to report mistakes, near misses, and anomalies without fear of punishment. The line is drawn clearly: honest errors are protected, reckless violations are not.
This wasn't a soft, feel-good initiative. It was a structural decision based on a hard-nosed calculation: the safety value of the information people would report if they felt safe doing so vastly outweighed the disciplinary value of punishing them for mistakes.
The results were transformative. Voluntary reporting programmes, like the FAA's Aviation Safety Action Program (ASAP), generated enormous volumes of data about events that would previously have gone unreported. That data revealed systemic patterns: equipment design issues, procedural ambiguities, environmental factors, and organisational pressures that no amount of incident investigation could have surfaced, because the precursors were being reported and addressed before they ever became incidents.
The warehouse equivalent of this shift is moving from a culture where near misses are underreported (because reporting feels like snitching or admitting fault) to one where every safety event is captured automatically and used constructively. Computer vision AI achieves this without depending on human willingness to report. The system detects events continuously and objectively, regardless of whether anyone chooses to fill in a form. But the cultural principle is the same: the data exists to learn from, not to punish with.
Lesson 2: just culture, the foundation aviation built first
Just culture isn't simply "don't blame people." It's a structured framework with a clear distinction between errors (unintentional, understandable given the context), at-risk behaviour (where the risk wasn't recognised or was mistakenly judged as acceptable), and reckless behaviour (where the risk was consciously disregarded). Each category gets a different response: errors are consoled, at-risk behaviour is coached, and reckless behaviour is disciplined.
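The three-tier model above can be sketched as a simple mapping. The category names follow the framework described here; the function and response labels are illustrative, not drawn from any specific just culture standard:

```python
from enum import Enum

class Behaviour(Enum):
    ERROR = "error"        # unintentional, understandable given the context
    AT_RISK = "at_risk"    # risk not recognised, or misjudged as acceptable
    RECKLESS = "reckless"  # risk consciously disregarded

# Each category maps to a distinct organisational response.
RESPONSES = {
    Behaviour.ERROR: "console",      # support the person, fix the system
    Behaviour.AT_RISK: "coach",      # a constructive conversation, not discipline
    Behaviour.RECKLESS: "discipline",
}

def respond(behaviour: Behaviour) -> str:
    """Return the just-culture response for a classified behaviour."""
    return RESPONSES[behaviour]

print(respond(Behaviour.AT_RISK))  # coach
```

The point of making the mapping explicit is consistency: the same classification always gets the same response, which is what builds the trust described below.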
This framework matters because it creates the trust that makes reporting work. If workers believe that reporting a near miss will lead to a coaching conversation rather than a disciplinary one, they report more. If they believe the data is being used to improve conditions rather than to catch people out, they engage with the system rather than resisting it.
This is exactly how a coaching-first approach to safety technology should work. When a safety event is detected by computer vision AI, the footage is automatically face-blurred. The event becomes a coaching clip, not an evidence file. The supervisor uses it in a constructive conversation: "Here's what happened in our zone yesterday. What do we think caused it, and what could we do differently?" That's just culture in action, applied to warehousing with the data infrastructure to support it.

Lesson 3: continuous data, not periodic audits
Aviation doesn't assess safety through periodic inspections. It monitors continuously. Flight data recorders capture every parameter of every flight. Cockpit voice recorders preserve communications. Automated systems flag anomalies in real time. The data flows into centralised analysis programmes (like the FAA's ASIAS) that aggregate patterns across the entire industry.
This continuous data model is what allows aviation to identify systemic risks before they cause accidents. A pattern of go-around events at a particular airport, a trend in altitude deviations during a specific approach procedure, a cluster of bird strike reports in a geographic area: these patterns only become visible when you have continuous, high-volume data aggregated across a wide base.
Warehousing has traditionally relied on the opposite model: periodic safety walks, scheduled audits, and reactive incident reports. These tools capture a fraction of what actually happens. The safety walk that covers 30 minutes of a 24-hour operation. The audit that checks compliance against a checklist but can't detect the near miss that happened between visits.
Computer vision AI brings the aviation data model to warehousing. Every connected camera is the equivalent of a flight data recorder: capturing safety events continuously, tagging them with timestamps and locations, and feeding them into a reporting system that aggregates patterns across the facility. The heatmap is the warehouse equivalent of the aviation industry's systemic risk analysis: a visual representation of where events concentrate, how they change over time, and whether interventions are working.
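A minimal sketch of that aggregation step, assuming detected events arrive as (x, y) floor coordinates from the camera system (the event format and the 5-metre grid cell are illustrative assumptions):

```python
from collections import Counter

def build_heatmap(events, cell_size=5.0):
    """Bin safety events into floor-grid cells so hotspots stand out.

    events: iterable of (x_metres, y_metres) floor coordinates.
    Returns a Counter mapping (cell_x, cell_y) -> event count.
    """
    heatmap = Counter()
    for x, y in events:
        cell = (int(x // cell_size), int(y // cell_size))
        heatmap[cell] += 1
    return heatmap

# Example: three events cluster near one intersection, one elsewhere.
events = [(12.1, 33.4), (13.7, 31.0), (11.9, 34.8), (52.0, 8.3)]
hotspots = build_heatmap(events)
print(hotspots.most_common(1))  # [((2, 6), 3)]
```

Run over weeks of continuous data rather than a handful of points, the same binning is what turns thousands of individual events into the systemic picture aviation relies on.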

Lesson 4: learn from what goes right, not just what goes wrong
One of the more recent evolutions in aviation safety thinking is the concept of "Safety-II," which argues that safety management should study how things go right (the vast majority of the time), not just how they go wrong (the rare exceptions). The idea is that understanding the adaptations, workarounds, and judgement calls that keep operations safe under varying conditions is as valuable as understanding the failures.
This is a powerful reframe for warehousing. Traditional safety programmes focus almost exclusively on what went wrong: the incident, the near miss, the violation. But the data from computer vision AI also shows what goes right. The intersection where near misses dropped after a traffic flow change. The shift that consistently has the lowest event density. The zone where coaching conversations led to a measurable improvement in pedestrian-vehicle separation distances.
Studying success is just as important as studying failure, because it tells you what to replicate. If one shift consistently outperforms the others, the leading indicator data can help you understand why, and share that understanding across the operation.
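One way to sketch that comparison: normalise event counts by operating hours so shifts of different lengths compare fairly. The shift names and numbers here are hypothetical:

```python
def event_density(events_by_shift, hours_by_shift):
    """Events per operating hour for each shift, lowest (best) first."""
    rates = {
        shift: events_by_shift[shift] / hours_by_shift[shift]
        for shift in events_by_shift
    }
    return sorted(rates.items(), key=lambda item: item[1])

# Hypothetical week of detected safety events per shift.
events = {"day": 42, "swing": 35, "night": 18}
hours = {"day": 56, "swing": 56, "night": 40}

ranking = event_density(events, hours)
print(ranking[0])  # the shift worth studying for what it does right
```

The lowest-density shift is the Safety-II starting point: not "what did the others do wrong" but "what is this shift doing right that we can replicate".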
Lesson 5: safety is everyone's job, not just the safety team's
In aviation, safety is not the exclusive domain of the safety department. Pilots, cabin crew, air traffic controllers, maintenance engineers, dispatchers, and ground crew all have defined roles in the safety system. The Safety Management System (SMS) mandated by ICAO requires every level of the organisation to participate in hazard identification, risk assessment, and safety assurance.
In many warehouses, safety is still primarily the EHS team's responsibility. Operations manages throughput. EHS manages safety. The two functions share the same space but operate in parallel.
The aviation model suggests a different structure: one where safety data is shared across functions, where operations managers use heatmaps to inform layout decisions, where supervisors own the coaching conversations, and where site leaders review safety trends as part of their operational performance meetings, not as a separate compliance exercise.
inviol customers who achieve the strongest results are the ones who adopt this cross-functional model. The heatmap review happens in the ops meeting. The coaching clips are used by shift supervisors, not just safety managers. The data becomes a shared language that both safety and operations teams speak fluently.
The gap is closing
Aviation's safety transformation didn't happen overnight. It took decades of institutional commitment, regulatory evolution, cultural change, and technology investment. The warehouse industry doesn't need to replicate that entire journey from scratch, because the principles have already been proven and the technology to apply them now exists.
Just culture can be implemented through a coaching-first approach to safety events, supported by face-blurred video that focuses on situations rather than individuals. Continuous data collection is now possible through computer vision AI connected to existing CCTV infrastructure. Systemic risk analysis is delivered through heatmaps and leading indicator dashboards. Cross-functional safety ownership is enabled by shared data platforms that give operations and EHS teams the same visibility.
The warehouse industry won't achieve aviation's safety record in a year. But the organisations that adopt these principles now will be the ones that look back in five years and see a fundamentally different trajectory.
If you're ready to bring continuous safety data and a coaching-first culture to your operation, book a demo and we'll show you how it works.
Frequently Asked Questions
Why is aviation so much safer than warehousing?
Aviation's safety improvement is driven by five cultural and systemic factors that warehousing has been slower to adopt: a just culture that encourages reporting without blame, continuous data collection (not periodic audits), systemic risk analysis that identifies patterns across the entire operation, learning from what goes right as well as what goes wrong, and cross-functional safety ownership where every role has a defined part in the safety system. These are cultural and data infrastructure choices, not inherent differences between the industries.
What is just culture and how does it apply to warehouse safety?
Just culture is a framework that distinguishes between honest errors (consoled), at-risk behaviour (coached), and reckless behaviour (disciplined). It creates the trust that makes safety reporting work, because workers know that honest mistakes will be met with support, not punishment. In warehousing, just culture principles can be applied through coaching-first responses to AI-detected safety events, using face-blurred video clips that focus on the situation rather than the individual.
How is computer vision AI similar to a flight data recorder?
Both systems capture safety-relevant data continuously and objectively, without relying on human observation or voluntary reporting. A flight data recorder captures every parameter of every flight. Computer vision AI captures every detectable safety event across every connected camera. Both generate the high-volume, continuous data sets needed to identify systemic patterns and measure whether interventions are working.
What is Safety-II and how does it apply to warehousing?
Safety-II is an approach that studies how things go right (the vast majority of the time) rather than focusing exclusively on failures. In warehousing, computer vision AI data shows both the events that indicate risk and the patterns that indicate success: the shifts with the lowest event density, the zones where coaching led to measurable improvement, the operational changes that reduced near misses. Studying these successes helps organisations understand what to replicate across their operation.
Can warehouse safety really achieve aviation-level improvement?
Aviation has roughly doubled in safety every decade for over 50 years. While warehousing faces different operational constraints, the principles that drove aviation's improvement (just culture, continuous data, systemic analysis, cross-functional ownership) are directly transferable. The technology to apply these principles in a warehouse setting (computer vision AI, coaching platforms, leading indicator dashboards) now exists and is being deployed by organisations that are already seeing measurable results.