Scaling safety across 50+ sites: a logistics provider's journey

  • Aug 17, 2025
  • 8 min read

Updated: Apr 14

This post describes the typical multi-site scaling journey we see with logistics and distribution customers. Specific details reflect common patterns across deployments rather than a single named customer.


Managing safety at one warehouse is hard enough. Managing it consistently across 50, 80, or 100+ sites is a different kind of problem entirely. Each facility has its own layout, traffic patterns, workforce, shift structure, and risk profile. What works at a 10,000 sqm urban DC doesn't automatically work at a 50,000 sqm regional distribution hub.


For logistics providers operating at that scale, the safety challenge isn't a lack of commitment. It's a lack of consistency and visibility. The head of safety can't be at every site, and the data they receive from each site is only as good as the local team's willingness and ability to report it.


This is the scaling problem that computer vision AI was built to solve. Here's what the journey typically looks like.




The consistency problem


Every multi-site safety leader we talk to describes some version of the same challenge. They have a safety framework that looks good on paper. The policies are written, the training is developed, and the expectations are clear. But the execution varies wildly from site to site.


IOSH's Duncan Spencer puts it well: every site in an organisation will have a different level of maturity when it comes to safety management. Some sites have strong local safety champions, experienced supervisors, and a culture where near misses get reported. Others rely on a single safety officer who's stretched across multiple responsibilities, with a workforce that sees safety as something that happens at the annual refresher training.


The result is data that's inconsistent and hard to compare. Site A reports 15 near misses per month because they have an active reporting culture. Site B reports two, not because it's safer, but because nobody is reporting. The corporate safety team can't tell the difference from the numbers alone, and without that visibility, resources get allocated based on reported data rather than actual risk.


This is the fundamental problem: you can't manage what you can't see, and across 50+ sites, manual reporting systems don't give you a consistent picture of what's actually happening on every floor, every shift, every day.





[Image: Multiple warehouse locations or network concept]

Why traditional scaling fails


The instinctive response to the consistency problem is to throw more people at it. Hire more regional safety managers. Increase audit frequency. Deploy more safety officers to the sites that are underperforming.


That works up to a point, but it doesn't scale economically. A safety manager visiting 10 sites on a rotating schedule might see each site once a month for a few hours. During that visit, they capture a snapshot of performance. For the remaining 29 days, the site operates unobserved by the corporate safety function.


Audit programmes face the same constraint. A quarterly audit tells you how a site was performing on the day of the audit. It doesn't tell you what happened on the night shift last Tuesday, or whether the loading dock was congested during the peak delivery window, or whether forklift speeding increased after the experienced supervisor went on leave.


The data gap between audits is where risk lives. And the wider you scale, the wider those gaps become.




The starting point: prove the model at one site


The logistics providers we work with almost always start with a single site. They pick their highest-risk facility (or their highest-volume one, or the one where the local management team is most willing to pilot new technology) and deploy inviol there as a proof of concept.


inviol connects to a selection of existing CCTV cameras at the pilot site, focused on the highest-risk areas: forklift traffic lanes, loading dock approaches, pedestrian crossings, and exclusion zones. No new cameras required. The on-premise processing unit is installed on-site, and the system starts detecting events from day one.


Within the first two weeks, the pilot site typically has data that nobody had before: the actual frequency of near misses by zone and shift, the real speeding patterns, the exclusion zone compliance rates, and the heatmap showing where risk concentrates.
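As a rough illustration of what that baseline looks like as data, here is a minimal sketch of aggregating detected events by zone and shift. The event records, zone names, and field names are hypothetical, not inviol's actual data model:

```python
from collections import Counter

# Hypothetical detected-event records from a pilot site (illustrative only;
# field names and zone labels are assumptions, not inviol's schema).
events = [
    {"zone": "loading_dock", "shift": "night", "type": "near_miss"},
    {"zone": "loading_dock", "shift": "day",   "type": "speeding"},
    {"zone": "traffic_lane", "shift": "night", "type": "near_miss"},
    {"zone": "loading_dock", "shift": "night", "type": "near_miss"},
]

# Frequency of near misses by (zone, shift) -- the kind of baseline
# a pilot produces that manual reporting rarely captures.
near_miss_freq = Counter(
    (e["zone"], e["shift"]) for e in events if e["type"] == "near_miss"
)
print(near_miss_freq.most_common())  # loading dock on nights tops the list
```

Even this toy aggregation shows why the pilot data is persuasive: risk concentrates in specific zones on specific shifts, and that concentration is invisible in monthly incident counts.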


This pilot data serves two purposes. It demonstrates the technology works in the specific operational environment. And it produces the metrics the corporate safety team needs to build the business case for scaling: here's what we found at site one, here's what it would cost to deploy across the network, and here's the projected risk reduction based on what we've already seen.




What changes when you can see every site


The transformation happens when the technology moves from one site to many. Once inviol is deployed across 10, 20, 50+ sites, the corporate safety team gains something they've never had before: a consistent, objective, real-time picture of safety performance across the entire network.


Every site is measured the same way. The AI applies the same detection criteria to every connected camera at every facility. A near miss at site 12 is classified the same way as a near miss at site 47. A speeding event in Auckland is measured against the same threshold as a speeding event in Melbourne. The data is comparable because the measurement is consistent, regardless of the local team's reporting culture or maturity level.


This changes the corporate safety conversation fundamentally. Instead of relying on self-reported data that varies in quality and completeness, the head of safety can now see genuine cross-site comparisons: which sites have the highest near-miss density, which shifts carry the most risk, which zones across the entire network generate the most events, and (critically) which sites are improving and which aren't.


Dakota Software's research on multi-site EHS highlights a key insight: if one site's culture is vastly better than others, analyse what that group is doing and apply its lessons across the network. Computer vision AI makes that analysis possible, because you can identify your best-performing sites based on objective data and then study what they're doing differently in terms of coaching frequency, supervisor behaviour, traffic management, and process design.





[Image: Safety dashboard showing multiple sites]

How the rollout typically works


Based on what we consistently see across multi-site logistics deployments, the scaling journey follows a predictable pattern.


Phase one is the pilot (one to three sites, four to eight weeks). Deploy at the highest-risk or most willing site. Validate the technology, calibrate detection thresholds, and gather baseline data. The coaching platform is introduced to the local supervisors, who begin using detected events for team coaching sessions.


Phase two is the first wave (five to fifteen sites, two to four months). Roll out to the next tier of priority sites based on risk profile, incident history, or strategic importance. The corporate safety team begins receiving cross-site dashboards and can start making meaningful comparisons. Best practices from the pilot site are shared with new sites during onboarding.


Phase three is full network deployment (all remaining sites, ongoing). The rollout extends across the entire network. By this stage, the corporate safety team has a mature understanding of what the data looks like, what interventions work, and how to use cross-site benchmarking to drive improvement at underperforming locations.


The deployment itself is fast at each individual site, typically measured in days rather than weeks, because inviol works with existing CCTV infrastructure and only needs a selection of cameras connected. The limiting factor in scaling is usually not the technology but the change management: ensuring that each site's supervisors understand the coaching workflow, that the workforce has been briefed on what the system does (and, critically, what it doesn't do, like identifying individuals), and that local management is committed to using the data for coaching rather than policing.





[Image: Forklift operating in a busy warehouse]

The metrics that matter at scale


Once you're operating across dozens of sites, the metrics that matter shift from individual site performance to network-level patterns.


Site-to-site benchmarking. Which sites have the lowest event rates per camera-hour? Which have the fastest time from detection to coaching session? Ranking sites against each other (constructively, not punitively) creates positive pressure for improvement and helps the corporate team identify where to focus attention.
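A normalised rate like events per camera-hour is what makes the ranking fair across sites of different sizes. A minimal sketch, with hypothetical site names and numbers:

```python
# Hypothetical per-site totals; site names and figures are illustrative
# assumptions, not real deployment data.
sites = {
    "site_12": {"events": 48, "camera_hours": 12_000},
    "site_47": {"events": 30, "camera_hours": 6_000},
    "site_03": {"events": 9,  "camera_hours": 9_000},
}

def events_per_camera_hour(stats):
    # Normalising by camera-hours lets a 10-camera site be compared
    # fairly with a 40-camera site.
    return stats["events"] / stats["camera_hours"]

# Lowest rate first: constructive benchmarking, not a league table of blame.
ranking = sorted(sites, key=lambda name: events_per_camera_hour(sites[name]))
print(ranking)  # site_03 first, with the lowest event rate
```

The raw counts alone would mislead here: site_12 generates the most events in absolute terms, but per camera-hour it sits in the middle of the pack.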


Trend convergence. As coaching programmes take effect across the network, you should see site-level event rates converging toward a lower mean. Sites that started with high event rates improve faster than sites that were already performing well, and the overall network risk level drops. inviol customers see an average 67% risk reduction across deployments, with the highest-risk sites often seeing the most dramatic improvements.
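Convergence is checkable with two numbers per month: the mean site event rate and the site-to-site spread, both of which should fall over time. A sketch with invented monthly figures (not customer data):

```python
from statistics import mean, stdev

# Hypothetical monthly event rates (events per 1,000 camera-hours) for
# four sites over three months; all numbers are illustrative only.
monthly_rates = [
    [9.0, 4.0, 7.5, 3.5],  # month 1: wide spread, high mean
    [6.5, 3.8, 5.5, 3.4],  # month 2
    [4.2, 3.5, 4.0, 3.3],  # month 3: tighter spread, lower mean
]

for i, rates in enumerate(monthly_rates, start=1):
    print(f"month {i}: mean={mean(rates):.2f}, spread={stdev(rates):.2f}")

# Convergence means both statistics drop: the network improves overall,
# and the worst sites close the gap on the best.
assert mean(monthly_rates[-1]) < mean(monthly_rates[0])
assert stdev(monthly_rates[-1]) < stdev(monthly_rates[0])
```

Tracking the spread alongside the mean is the point: a falling mean with a widening spread would signal that only some sites are improving.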


Intervention effectiveness. When a process change is implemented at one site (say, a traffic flow redesign at the loading dock), the data shows whether it worked within days. If it did, the same change can be rolled out across other sites with similar layouts, accelerating improvement across the network.


Operational insights. At scale, the data reveals network-wide patterns that individual sites can't see. If near misses spike during certain delivery windows across multiple DCs, that's a scheduling problem that can be addressed centrally. If a particular forklift model generates more speeding events than others across the fleet, that's a procurement insight. The heatmap and reporting data doesn't just improve safety at scale. It improves operations at scale.




The cultural shift


The most important outcome of multi-site scaling isn't the data or the dashboards. It's the cultural shift that happens when every site in the network is operating with the same visibility, the same coaching tools, and the same expectations.


Sites that previously operated in isolation (each with its own safety culture, its own reporting norms, and its own sense of how well it was performing) are now part of a connected network where performance is transparent. That transparency creates accountability, but more importantly, it creates opportunities for learning. The site in Christchurch that figured out how to reduce loading dock near misses by 70% can share its approach with the site in Sydney that's still struggling with the same problem.


This is what a genuinely proactive, multi-site safety culture looks like. Not a corporate policy document that each site interprets differently, but a shared system of detection, coaching, and continuous improvement that operates consistently everywhere, every shift, every day.




Is your network ready to scale?


If you're managing safety across multiple sites and relying on self-reported data, periodic audits, and regional safety manager visits, you have visibility gaps. You might not know which sites are genuinely performing well and which are simply underreporting. You might not know where risk concentrates across your network or whether your interventions are working consistently.


Computer vision AI closes those gaps, not by replacing your safety team but by giving them consistent, objective, real-time data from every site. The technology deploys quickly, works with existing infrastructure, and produces the cross-site comparisons that make network-level safety management possible.


The logistics providers that get this right don't just reduce risk across their network. They build the kind of safety programme that becomes a competitive advantage: lower incident rates, lower insurance costs, better regulatory positioning, and a workforce that knows the organisation is genuinely invested in their wellbeing.


Running a multi-site operation and want to see what cross-site safety data looks like? Book a demo and we'll show you how inviol scales from one site to many, with the dashboards, coaching workflows, and benchmarking tools that make network-level safety management practical.




Frequently Asked Questions


How do you maintain consistent safety standards across multiple sites?


The key is consistent measurement. Computer vision AI applies the same detection criteria to every connected camera at every site, so a near miss at one facility is classified identically to a near miss at another. This removes the variability of self-reported data and gives the corporate safety team a genuine, comparable picture of performance across the network.


How quickly can AI safety monitoring be deployed across multiple sites?


Individual site deployments typically take days, not weeks, because the system works with existing CCTV cameras. Most multi-site rollouts follow a phased approach: a pilot at one to three sites, a first wave of five to fifteen, then full network deployment. The limiting factor is usually change management (supervisor training, workforce communication) rather than the technology itself.


What are the biggest challenges in scaling safety across 50+ sites?


The main challenges are inconsistent data (sites with strong reporting cultures look different from those with weak ones), limited visibility between audits, uneven safety maturity across locations, and the inability to compare sites fairly when each uses different measurement methods. AI monitoring addresses all of these by providing a single, consistent data source across the entire network.


How does cross-site benchmarking improve safety?


When every site is measured the same way, the corporate safety team can identify best-performing sites, study what they're doing differently, and apply those practices across the network. Constructive benchmarking creates positive pressure for improvement and helps concentrate resources where they'll have the greatest impact.


Can the same safety data that improves safety also improve logistics operations?


Yes. At network scale, the data reveals patterns that individual sites can't see: delivery window congestion, fleet-wide equipment issues, and traffic flow problems common across similar facility layouts. These insights drive operational improvements alongside the safety gains, such as better throughput, less damage, and more efficient scheduling.

