Consider a 400-employee food processor in the Willamette Valley that detects anomalous activity on its domain controller and calls for help roughly 13 hours later. Within 3 hours of arrival, production can be safely isolated. Within 9 days, the facility can be back at pre-event capacity. This playbook documents how we would approach the event, and the architecture changes we would recommend afterward, using a composite that reflects the manufacturers we are built to serve.
This is not a specific client engagement. The facility profile, timeline, and technical choices below are illustrative and written to show how our practice sequences an IT-to-OT ransomware response in a Pacific Northwest food & beverage environment.
Background
The manufacturer operates a single primary facility producing refrigerated dairy products. Annual revenue is approximately $180M. The facility runs 3 shifts, 7 days a week, during peak season. The OT footprint includes roughly 60 PLCs across filling, packaging, pasteurization, and CIP stations, coordinated by a SCADA system that feeds a historian, plus a plant-floor MES.
Prior to the event, the security program was typical for a company of this size: an MSP-managed IT environment, modest EDR coverage on corporate endpoints, no OT-specific security tooling, and no formal segmentation between corporate IT and plant-floor OT. The OT network and corporate IT shared a single flat VLAN structure with inter-VLAN routing enforced by an edge firewall that had not been reviewed in roughly 18 months.
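Eighteen months without a ruleset review is long enough for over-broad permits to accumulate unnoticed. As a minimal sketch of the kind of audit we would run first, assuming the ruleset can be exported to CSV (the column names and file path below are illustrative assumptions, not any particular vendor's format):

```python
import csv
from datetime import datetime, timedelta

# Hypothetical CSV export of edge firewall rules; column names are
# assumptions, not a specific vendor's export format.
RULES_CSV = "edge_fw_rules.csv"  # columns: name, src, dst, service, action, last_reviewed
STALE_AFTER = timedelta(days=365)

def audit(path: str) -> None:
    today = datetime.now()
    with open(path, newline="") as f:
        for rule in csv.DictReader(f):
            findings = []
            # "any-any" permits are what let a flat IT/OT network route freely.
            if rule["action"] == "permit" and "any" in (rule["src"], rule["dst"]):
                findings.append("over-broad permit")
            reviewed = datetime.strptime(rule["last_reviewed"], "%Y-%m-%d")
            if today - reviewed > STALE_AFTER:
                findings.append("not reviewed in >1 year")
            if findings:
                print(f"{rule['name']}: {', '.join(findings)}")

if __name__ == "__main__":
    audit(RULES_CSV)
```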
Scenario timeline
Day 0, 14:12 — Initial detection
In the scenario, the manufacturer's MSP flags anomalous authentication activity against the primary domain controller. Investigation reveals active lateral movement consistent with the pre-encryption phase of a known ransomware operator.
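What "anomalous authentication activity" means in practice depends on the toolchain, but one of the simpler lateral-movement signals is a single account fanning out to many distinct hosts in a short window. The sketch below illustrates the idea against pre-parsed logon records; the field names, sample data, and thresholds are all assumptions to be tuned per environment.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative records: successful logon events exported from the domain
# controller, already parsed into dicts. Field names are assumptions.
events = [
    {"ts": "2024-05-06 13:58:02", "account": "svc-backup", "host": "HIST01"},
    {"ts": "2024-05-06 13:59:11", "account": "svc-backup", "host": "EWS02"},
    {"ts": "2024-05-06 14:01:47", "account": "svc-backup", "host": "FS01"},
    {"ts": "2024-05-06 14:03:05", "account": "jsmith", "host": "WS114"},
]

WINDOW = timedelta(minutes=30)
MAX_DISTINCT_HOSTS = 2  # service accounts often need documented exceptions

def flag_fanout(events):
    """Flag accounts touching many distinct hosts inside a short window."""
    by_account = defaultdict(list)
    for e in events:
        by_account[e["account"]].append(
            (datetime.strptime(e["ts"], "%Y-%m-%d %H:%M:%S"), e["host"])
        )
    for account, logons in by_account.items():
        logons.sort()
        for i, (start, _) in enumerate(logons):
            hosts = {h for t, h in logons[i:] if t - start <= WINDOW}
            if len(hosts) > MAX_DISTINCT_HOSTS:
                print(f"ALERT {account}: {len(hosts)} hosts in {WINDOW}: {sorted(hosts)}")
                break

flag_fanout(events)
```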
Day 1, 03:40 — Customer calls us
The MSP has suspended the affected accounts but cannot assess OT impact. The VP of Operations reaches our after-hours line. A three-person response team would be dispatched. The plant is still running on the overnight shift at this point; operators are unaware.
Day 1, 08:00 — On-site assessment begins
Our team arrives at the facility at 08:00. Initial focus is two questions: (1) has OT been reached, and (2) is production currently safe? Passive analysis of the plant network would look for known malicious indicators, and would also map reachability from the compromised IT segment into the historian and engineering workstations, a path that must be closed regardless of current compromise state.
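That reachability mapping can be done entirely passively from a capture taken off a plant SPAN port, without sending a single packet onto the plant network. A minimal sketch using Scapy against an offline pcap; the subnet assignments and capture filename are illustrative assumptions.

```python
from ipaddress import ip_address, ip_network
from scapy.all import rdpcap, IP, TCP  # pip install scapy

# Subnets are illustrative assumptions for a flat network like this one.
IT_NET = ip_network("10.10.0.0/16")
OT_NET = ip_network("10.20.0.0/16")

def map_it_to_ot(pcap_path: str):
    """Passively list IT->OT conversations observed in an offline capture."""
    flows = set()
    for pkt in rdpcap(pcap_path):
        if IP not in pkt:
            continue
        src, dst = ip_address(pkt[IP].src), ip_address(pkt[IP].dst)
        if src in IT_NET and dst in OT_NET:
            dport = pkt[TCP].dport if TCP in pkt else None
            flows.add((str(src), str(dst), dport))
    return sorted(flows, key=str)

for src, dst, dport in map_it_to_ot("plant_span_port.pcap"):
    print(f"{src} -> {dst}" + (f" tcp/{dport}" if dport else ""))
```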
Day 1, 10:30 — Controlled isolation
With plant operations' consent, a pre-agreed isolation sequence is executed: the historian is disconnected from corporate, an emergency ACL is applied to the edge firewall cutting IT-to-OT routing, and vendor remote access is administratively suspended. Production continues. Plant MES and HMI functions operate normally in isolated mode. The only observable impact is a loss of near-real-time dashboards in the corporate office.
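An emergency ACL would be verified, not assumed. A small check like the one sketched below, run from a corporate-side machine, confirms that the previously open IT-to-OT paths are actually dark. Hostnames are placeholders; the ports are common ones (1433 for a SQL-backed historian, 3389 RDP to an engineering workstation, 502 Modbus/TCP, 44818 EtherNet/IP).

```python
import socket

# Placeholder targets: historian DB, an engineering workstation,
# and common OT protocol ports on a filler-line PLC.
CHECKS = [
    ("historian.plant.local", 1433),
    ("ews01.plant.local", 3389),
    ("plc-fill-01.plant.local", 502),
    ("plc-fill-01.plant.local", 44818),
]

def confirm_blocked(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if the path is blocked (connect fails or times out)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # connection succeeded: path still open
    except OSError:
        return True

for host, port in CHECKS:
    status = "BLOCKED (good)" if confirm_blocked(host, port) else "STILL OPEN (fix ACL)"
    print(f"{host}:{port} -> {status}")
```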
Days 2–4 — Forensics and containment
We would partner with the MSP and a cooperating IR firm to trace the intrusion. A typical finding in this scenario: initial access 21 days earlier via a phishing event that established persistence on a single user workstation. From there, credential harvesting, escalation through a misconfigured service account, and finally access to the domain controller. The historian server may have been accessed but not encrypted, a common finding when the attacker has not yet staged OT-specific payloads.
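A "misconfigured service account" in this context usually means an account with a service principal name, an old password, and more group membership than its job requires, the combination Kerberoasting abuses. One way to enumerate candidates during remediation is an LDAP query like the sketch below, using the ldap3 library; the server name, base DN, and credentials are placeholders.

```python
from ldap3 import Server, Connection, ALL, SUBTREE  # pip install ldap3

# Placeholders; point these at the recovered domain with a read-only account.
SERVER = "dc01.corp.example.local"
BASE_DN = "dc=corp,dc=example,dc=local"

server = Server(SERVER, get_info=ALL)
conn = Connection(server, user="CORP\\auditor", password="...", auto_bind=True)

# Accounts with an SPN are Kerberoastable; after an intrusion like this,
# review each one's password age and group membership.
conn.search(
    BASE_DN,
    "(&(objectClass=user)(servicePrincipalName=*))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "servicePrincipalName", "pwdLastSet", "memberOf"],
)

for entry in conn.entries:
    print(entry.sAMAccountName, entry.pwdLastSet)
    for group in entry.memberOf:
        print("  member of:", group)
```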
Days 5–7 — Targeted remediation
IT is rebuilt. OT remains in isolated mode throughout. We would replace the historian server (cleanly rebuilt from backup, then reconnected to OT only after the OT network's security baseline is verified). Engineering workstations are imaged and redeployed with hardened baselines.
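"Verified against a baseline" should mean a mechanical comparison, not a visual one. One illustrative approach: write a manifest of file hashes when the golden image is built, then re-check it on every rebuilt machine. The manifest format and directory below are assumptions.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest written when the golden image was built:
# { "relative/path": "sha256hex", ... }
MANIFEST = Path("golden_manifest.json")
ROOT = Path("C:/ProgramData/PlantApps")  # illustrative directory to verify

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

expected = json.loads(MANIFEST.read_text())
for rel, want in expected.items():
    target = ROOT / rel
    if not target.exists():
        print(f"MISSING  {rel}")
    elif sha256(target) != want:
        print(f"MODIFIED {rel}")
print("baseline check complete")
```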
Days 8–9 — Controlled reconnection
A new firewall ruleset is deployed enforcing a formal IT/OT boundary. Only the specific flows required for MES/ERP integration are permitted. Vendor remote access is restored through a new jump host with MFA and session recording. The historian is reconnected. Production dashboards return to the corporate office. The facility returns to pre-event operating state on day 9.
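"Only the specific flows required" stays honest longest when the allowlist is itself a reviewable artifact, with a justification attached to every permitted flow. A minimal sketch of that idea, with illustrative hosts and ports; anything not listed is denied by default.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str   # corporate-side host
    dst: str   # plant-side host
    port: int  # TCP destination port
    why: str   # every permitted flow carries a justification

# Illustrative allowlist for the new IT/OT boundary; everything else is denied.
ALLOWED = {
    Flow("erp01.corp.local", "mes01.plant.local", 443, "MES order download"),
    Flow("hist-dmz.corp.local", "historian.plant.local", 1433, "historian replication"),
    Flow("jump01.dmz.local", "ews01.plant.local", 3389, "vendor access via jump host"),
}

def permitted(src: str, dst: str, port: int) -> bool:
    """Default deny: a flow is allowed only if explicitly listed."""
    return any(f.src == src and f.dst == dst and f.port == port for f in ALLOWED)

# Example: a stray workstation trying to reach a PLC is denied.
print(permitted("ws114.corp.local", "plc-fill-01.plant.local", 502))  # False
print(permitted("erp01.corp.local", "mes01.plant.local", 443))        # True
```

Keeping the allowlist as data also gives subsequent firewall reviews something concrete to diff against the running ruleset.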
What would prevent a worse outcome
- Early MSP detection. Anomalous authentication flagged before encryption payloads deploy.
- An edge firewall already in place, even an under-maintained one. It allows isolation in minutes rather than hours.
- Plant leadership engaging immediately. The choice to call at 03:40 rather than wait for morning is the highest-leverage decision in the scenario.
- A plant that can run without corporate connectivity. MES and HMI functions run locally. This is not universally true in manufacturing, and it is the single largest factor in saving production days.
Architecture we would recommend afterward
- Formal IT/OT segmentation with a defined zone-and-conduit model
- Historian deployed in a DMZ with one-way data flows to corporate
- Jump host for all vendor remote access, with MFA and session recording
- Hardened baselines for engineering workstations, enforced by configuration management
- Offline, immutable backups of historian data with a tested restore procedure
- A written OT incident response playbook, specifically the isolate-or-continue decision tree (sketched after this list)
- Twice-yearly tabletop exercise with plant operations, IT, and the MSP
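The isolate-or-continue decision tree is worth writing down before an incident, because nobody reasons well from scratch at 03:40. A simplified sketch of how that tree might be encoded; the questions and their ordering are illustrative of the approach, not a complete playbook.

```python
def isolate_or_continue(
    ot_compromise_confirmed: bool,
    it_to_ot_path_open: bool,
    plant_runs_standalone: bool,
    safety_systems_independent: bool,
) -> str:
    """Simplified isolate-or-continue decision tree for an IT-side intrusion."""
    if ot_compromise_confirmed:
        # Malicious activity on the plant floor itself: controlled shutdown.
        return "STOP: execute safe-shutdown procedure, then isolate"
    if not safety_systems_independent:
        # If safety systems share the at-risk network, do not gamble.
        return "STOP: cannot guarantee safe operation while exposed"
    if it_to_ot_path_open and plant_runs_standalone:
        # The Day 1 call in this scenario: cut the conduit, keep producing.
        return "ISOLATE: sever IT/OT conduits, continue production in island mode"
    if it_to_ot_path_open:
        return "ISOLATE: sever conduits; expect partial production impact"
    return "CONTINUE: monitor, verify boundary, prepare isolation sequence"

# The scenario above: IT compromised, OT clean, plant can island.
print(isolate_or_continue(False, True, True, True))
```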
Expected posture twelve months out
With the architecture above in place, a facility of this profile would typically see no production-impacting security events over the following year, tabletop exercises running on schedule, and improved cyber insurance terms, as insurers increasingly credit formal IT/OT segmentation as a compensating control.
If you operate a Pacific Northwest manufacturing facility and would like to walk through what a similar response would look like in your environment — before you need it — contact us.
This article was written by the Cascadia OT Security practice, which advises Pacific Northwest data centers and manufacturers on industrial cybersecurity. For engagement inquiries, reach our practice team.