The industrial cybersecurity market has, over the last decade, been shaped almost entirely by software vendors. That has produced a market full of dashboards, visibility products, and detection platforms — and a real shortage of programs that produce durable operational outcomes.
This paper argues that for most data centers and heavy manufacturers, a service-first engagement model produces better results than a platform-first engagement model — and offers a framework for evaluating providers accordingly.
Executive summary
Over 2024 and 2025, platform-led OT programs at organizations we reviewed shared a consistent pattern: substantial six- and seven-figure spend on visibility platforms, followed by 18 to 30 months of partial deployment, partial tuning, and fragmented ownership. The platforms themselves performed as advertised. The programs they were meant to produce did not.
The root cause is not product quality. It is a misalignment between what OT security programs actually require — deep facility knowledge, integrator coordination, iterative architecture work, hands-on testing, and sustained oversight — and what platform vendors are commercially incentivized to deliver, which is software licenses.
A service-first model inverts the relationship. The engagement is scoped around the facility's resilience goals. Tools are selected in service of those goals, owned by the customer, and maintained by people who understand the facility. The vendor's job is not to sell a platform; the vendor's job is to make the customer's operation measurably safer.
Nobody has ever restored production at a manufacturing plant by showing an executive a dashboard. Production gets restored by people who understand the network, the controls, and the recovery order — and who have practiced.
How we got here: the rise of the OT visibility platform
Beginning around 2016, a cohort of OT-specific security platforms emerged. They solved a real problem: traditional IT security tooling could not passively understand industrial protocols, and plant floors had become genuinely opaque to security teams.
The commercial model that emerged around these platforms, however, borrowed heavily from enterprise IT security sales. Large annual license fees. Multi-year contracts. ROI arguments built around dashboards and asset counts rather than production outcomes. Implementation time measured in quarters.
For a Fortune 100 chemical company with a dedicated OT security team, this model works — they have the internal capacity to translate platform output into program work. For a 600-person manufacturer or a 40-MW data center, the same model produces software that nobody fully owns, alerts that nobody fully investigates, and architecture gaps that nobody fully closes.
What OT security programs actually require
Three things, in sequence:
- Architecture. A segmented, defensible network. A clear Purdue-aligned zoning model. Controlled conduits between zones. Asset inventories that match reality, not aspiration.
- Process. Patch cycles that account for change windows. Vendor remote-access procedures that don't require firewalls to be disabled. Response playbooks that specify when to pull cables and when to call the integrator.
- Practice. Operators who have actually walked through a ransomware event on their floor. Maintenance teams who know what a compromised HMI looks like. Leaders who have made the tough call between uptime and isolation under exercise conditions.
None of these are platform deliverables. All of them are work. A visibility platform can support the work — it is useful, sometimes essential — but it cannot replace the work, and it cannot be the first purchase.
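To make the architecture item concrete: a zoning model and its approved conduits can be written down as data and checked mechanically, long before any platform is purchased. The sketch below is a minimal illustration, not a prescription; the zone names, subnets, and conduits are assumptions standing in for a real facility's design.

```python
# A minimal sketch of a Purdue-aligned zone model with approved conduits.
# Zone names, subnets, and conduits are illustrative assumptions only.
from ipaddress import ip_address, ip_network

ZONES = {
    "enterprise":  ip_network("10.1.0.0/16"),   # Level 4/5: business systems
    "dmz":         ip_network("10.10.0.0/24"),  # Level 3.5: industrial DMZ
    "supervisory": ip_network("10.20.0.0/24"),  # Level 3: historians, engineering
    "control":     ip_network("10.30.0.0/24"),  # Level 1/2: PLCs, HMIs
}

# Approved conduits: (source zone, destination zone, TCP port).
# Anything not listed should be blocked at the zone-boundary firewall.
CONDUITS = {
    ("enterprise", "dmz", 443),       # business access to replicated historian data
    ("dmz", "supervisory", 443),      # patch and AV distribution into Level 3
    ("supervisory", "control", 502),  # SCADA polling of Modbus/TCP devices
}

def zone_of(addr: str):
    """Return the zone a host belongs to, or None if it is unassigned."""
    ip = ip_address(addr)
    for name, net in ZONES.items():
        if ip in net:
            return name
    return None

def flow_allowed(src: str, dst: str, dport: int) -> bool:
    """True only if the observed flow matches an approved conduit."""
    return (zone_of(src), zone_of(dst), dport) in CONDUITS

# A direct enterprise-to-control connection is flagged; the approved path is not.
print(flow_allowed("10.1.4.20", "10.30.0.15", 502))   # False
print(flow_allowed("10.20.0.5", "10.30.0.15", 502))   # True
```

The point is not the script. The point is that the conduit list exists, has been reviewed with operations, and becomes the source of truth that firewall rules and detection rules are later built from.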
The service-first engagement, sequenced
Phase 1: Discovery (2 weeks)
Walk the facility. Meet the people. Read the one-line diagrams. Understand the production dependency chain. Identify the systems whose failure would halt production, and the systems whose compromise would create physical risk. No tooling is purchased in this phase. No visibility platform is deployed. The output is a facility-specific risk narrative and a short list of the decisions that will shape the remainder of the program.
Phase 2: Assessment (4 weeks)
Technical evaluation of the existing state. Passive network analysis. Targeted active testing in approved windows. Physical walkthroughs — cameras, access control, rack-level physical security. Configuration review of firewalls, switches, historians, engineering workstations. The output is a prioritized findings report with remediation sequencing tied to operational constraints, not a generic risk register.
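As one illustration of what the passive portion of that analysis can look like, the sketch below summarizes a capture taken at a SPAN port by well-known industrial protocol ports. It assumes scapy is available and that a capture file exists; the filename and the port-to-protocol map are illustrative and far from exhaustive.

```python
# A minimal sketch: summarize an OT network capture by industrial protocol.
# Assumes scapy is installed; the capture filename below is hypothetical.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

# A few well-known industrial protocol ports (not exhaustive).
OT_PORTS = {
    502: "Modbus/TCP",
    102: "S7comm / ISO-TSAP",
    20000: "DNP3",
    44818: "EtherNet/IP (CIP)",
    4840: "OPC UA",
}

def summarize(pcap_path: str) -> Counter:
    """Count (src, dst, protocol) conversations seen in the capture."""
    talkers = Counter()
    for pkt in rdpcap(pcap_path):
        if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
            continue
        proto = OT_PORTS.get(pkt[TCP].dport)
        if proto:
            talkers[(pkt[IP].src, pkt[IP].dst, proto)] += 1
    return talkers

if __name__ == "__main__":
    for (src, dst, proto), count in summarize("span_capture.pcap").most_common(20):
        print(f"{src:>15} -> {dst:<15} {proto:<20} {count} packets")
```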
Phase 3: Remediation (8 weeks, typically)
Execute. Segment the networks. Close the conduits. Harden the workstations. Deploy the visibility tooling, if it is warranted, with a deployment plan that accounts for the production schedule. Write the playbooks. Train the people. Every change is staged, reviewed with operations, and verified against a production-safe test pattern.
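To illustrate what "close the conduits" can mean at a zone boundary, the sketch below turns an approved-conduit list into default-deny rules. It uses iptables syntax for readability; a real facility will target whatever firewall actually sits at the boundary, and the subnets, ports, and purposes shown are assumptions.

```python
# A minimal sketch: emit default-deny zone-boundary rules from an approved-conduit list.
# Subnets, ports, and purposes are illustrative; iptables syntax is for readability only.

# (source subnet, destination subnet, TCP port, purpose)
APPROVED_CONDUITS = [
    ("10.10.0.0/24", "10.20.0.0/24", 443, "patch distribution into Level 3"),
    ("10.20.0.0/24", "10.30.0.0/24", 502, "SCADA polling of Modbus/TCP devices"),
]

def boundary_rules(conduits):
    """Allow only the approved conduits across the boundary, then drop everything else."""
    rules = []
    for src, dst, port, purpose in conduits:
        rules.append(
            f"iptables -A FORWARD -s {src} -d {dst} -p tcp --dport {port} "
            f"-m conntrack --ctstate NEW -j ACCEPT  # {purpose}"
        )
    # Return traffic for established sessions, then default deny.
    rules.append("iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT")
    rules.append("iptables -A FORWARD -j DROP")
    return rules

for rule in boundary_rules(APPROVED_CONDUITS):
    print(rule)
```

Every rule carries the conduit's stated purpose, so the next engineer to read the ruleset knows why it exists and whether it still should.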
Phase 4: Transfer (ongoing)
Hand the program to the customer. Retain an advisory relationship. Run quarterly reviews. Refresh the threat model. Exercise the playbooks. The goal is not a permanent dependency. The goal is a customer with a program they own and understand.
Evaluating providers
When you are considering a provider, whether ours or anyone else's, these are the questions we suggest asking:
- What is your first deliverable? If the answer is "our platform, deployed," be skeptical.
- Who owns the tools you recommend at the end of the engagement — you, or us? The correct answer is the customer.
- Can you walk us through a specific past engagement's phase-by-phase timeline and outcomes?
- What are your consultants' backgrounds — IT security, or OT engineering?
- How do you measure success? Look for operational metrics: detection-to-isolation time, mean-time-to-safe-state, audit pass rates. Not dashboard adoption rates.
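For readers who want to see what tracking those operational metrics involves, here is a minimal sketch that computes detection-to-isolation time and time-to-safe-state from exercise records. The field names and timestamps are illustrative assumptions, not data from a real engagement.

```python
# A minimal sketch: compute two operational metrics from exercise records.
# Field names and timestamps below are illustrative assumptions.
from datetime import datetime
from statistics import mean

EXERCISES = [
    # Each record: when the event was detected, when the affected cell was
    # isolated, and when operations confirmed a safe state.
    {"detected": "2025-03-04 09:12", "isolated": "2025-03-04 09:58", "safe_state": "2025-03-04 11:30"},
    {"detected": "2025-06-17 14:05", "isolated": "2025-06-17 14:31", "safe_state": "2025-06-17 15:20"},
]

def minutes_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

detect_to_isolate = [minutes_between(e["detected"], e["isolated"]) for e in EXERCISES]
time_to_safe = [minutes_between(e["detected"], e["safe_state"]) for e in EXERCISES]

print(f"Mean detection-to-isolation: {mean(detect_to_isolate):.0f} minutes")
print(f"Mean time-to-safe-state:     {mean(time_to_safe):.0f} minutes")
```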
The role of platforms in a service-first program
We are not anti-platform. Visibility tooling, SIEMs, and OT-specific detection products are legitimate components of a mature OT program. The question is sequencing and ownership.
In a service-first engagement, tools are selected late, after the architecture and process work has defined what the facility actually needs to detect. They are often smaller, cheaper, and more tightly scoped than what a platform-led sales cycle would have produced. And they are owned by the customer — the customer's credentials, the customer's detection rules, the customer's renewal decision.
Conclusion
The industrial cybersecurity market has over-invested in software and under-invested in the specialized human work that makes facilities genuinely safer. For most Pacific Northwest data centers and manufacturers, a consulting-led engagement model — scoped to specific operational outcomes, delivered by people who can be on the floor the following Tuesday — produces more resilience per dollar than a platform-first alternative.
If you are evaluating an industrial cybersecurity program for your facility, we would rather have a 30-minute conversation about your specific operation than send you a generic capabilities deck. Get in touch.
This article was written by the Cascadia OT Security practice, which advises Pacific Northwest data centers and manufacturers on industrial cybersecurity. For engagement inquiries, reach our practice team.