Cloud infrastructure is attractive for manufacturing: scalable compute, managed databases, and built-in redundancy without capital expense. But OT workloads have fundamentally different requirements than IT workloads. Real-time control, latency-sensitive communication, and devices with 20-year lifespans do not fit the cloud's latency and availability model. The question is not whether to move OT to the cloud, but which specific workloads benefit from cloud infrastructure and how to architect them securely.
The most successful cloud OT deployments we see follow a pattern: time-insensitive data processing, analytics, and reporting run in the cloud; real-time control and safety-critical functions run on-premises. Data flows one way (from production to cloud) with careful validation at the boundary. This pattern scales and remains operationally manageable.
Cloud Workloads That Work
Historian data warehouses run extremely well in cloud infrastructure. Production data flows to cloud databases where it can be analyzed at scale, combined with external data sources (weather, commodity prices, energy markets), and made available to enterprise systems without burdening the production network. Latency is irrelevant—data can be minutes or hours old and still support decision-making.
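Because latency is irrelevant here, facilities typically roll raw historian samples up into periodic aggregates before export rather than streaming every sample. A minimal sketch of that roll-up step (the `hourly_averages` helper and the sample rows are illustrative, not a real historian API):

```python
from collections import defaultdict
from datetime import datetime

def hourly_averages(rows):
    """Roll raw historian samples up into hourly means before export.
    The cloud warehouse needs trends for analytics, not every raw sample."""
    buckets = defaultdict(list)
    for ts, value in rows:
        # Truncate each timestamp to the top of its hour.
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets[hour].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in sorted(buckets.items())}

# Hypothetical pressure readings from a facility historian.
rows = [
    (datetime(2024, 6, 1, 8, 5), 10.0),
    (datetime(2024, 6, 1, 8, 35), 14.0),
    (datetime(2024, 6, 1, 9, 10), 20.0),
]
summary = hourly_averages(rows)
```

A batch like `summary` can then be shipped to the cloud warehouse on whatever cadence the connection allows, since decisions built on this data tolerate hours of staleness.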
Predictive maintenance models benefit from cloud compute. Time-series data from production sensors is processed in cloud machine learning pipelines. Anomaly detection, asset degradation prediction, and optimal maintenance scheduling run asynchronously and send recommendations back to the facility. This is not real-time control; it is analytics that improve long-term efficiency.
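The asynchronous character of this analytics work is easy to see in even the simplest anomaly detector. A sketch using a trailing-window z-score test (the `detect_anomalies` function, window size, and threshold are illustrative choices, not a specific product's algorithm):

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the mean of the trailing `window` samples."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady sensor values with a single injected spike at index 25.
data = [50.0 + 0.1 * (i % 5) for i in range(40)]
data[25] = 75.0
spikes = detect_anomalies(data)
```

Nothing about this pipeline is real-time: it can run hourly in a cloud job against exported data, and the output is a maintenance recommendation for a person, not a command to a device.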
Cloud Patterns for Manufacturing
- Historian Aggregation: Facility historian servers collect production data locally. Data is periodically exported (hourly, daily) to cloud data warehouses. Cloud systems handle long-term retention, analytics, and reporting. The facility historian is the system of record; the cloud is an analytics extension.
- One-Way Data Transfer: Data flows from facility to cloud, but commands and configurations do not flow the reverse direction. If cloud systems need to influence production behavior, they do so by generating recommended actions (reduce pressure, change recipe) that operators implement, not through direct control.
- Authentication at the Boundary: Data transfer between facility and cloud is authenticated and encrypted. The cloud application validates that data comes from authorized production facilities and rejects anything anomalous. This prevents unauthorized parties from corrupting your analytics or exfiltrating your data.
- Latency-Tolerant Operations Only: Real-time control, safety systems, and hard real-time protocols must stay on-premises. If your control logic requires sub-100ms response, cloud is not appropriate—packet loss and latency jitter on internet connections will break it.
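The boundary-authentication pattern above can be sketched as a cloud-side validator: check the facility is known, check a message signature, then sanity-check the values themselves. Everything here is an assumption for illustration, including the facility name, the per-facility shared key (which in practice would live in a secrets manager, not source code), and the `pressure_psi` range check:

```python
import hashlib
import hmac
import json

# Hypothetical per-facility shared keys for upload authentication.
FACILITY_KEYS = {"plant-example": b"example-shared-secret"}

def validate_upload(facility_id, payload, signature):
    """Accept a historian export only if it comes from an authorized
    facility, carries a valid HMAC-SHA256 signature, and passes a
    basic plausibility check at the boundary."""
    key = FACILITY_KEYS.get(facility_id)
    if key is None:
        raise PermissionError(f"unknown facility: {facility_id}")
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("signature mismatch")
    record = json.loads(payload)
    # Reject obviously anomalous values before they reach analytics.
    if not 0 <= record.get("pressure_psi", -1) <= 5000:
        raise ValueError("pressure out of plausible range")
    return record

payload = json.dumps({"pressure_psi": 120.5, "ts": "2024-06-01T00:00:00Z"}).encode()
sig = hmac.new(FACILITY_KEYS["plant-example"], payload, hashlib.sha256).hexdigest()
record = validate_upload("plant-example", payload, sig)
```

Note what the validator does not do: it never sends anything back toward the facility. The one-way pattern is preserved because the cloud side only accepts or rejects.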
Architecture Anti-Patterns
Do not implement cloud-hosted PLCs or cloud-based safety systems where commands flow from cloud back to production devices in real time. This pattern depends on consistent low-latency internet connectivity, which is unreliable in practice. A network outage causes production to halt, and a cloud service outage disables your plant.
Do not push all configuration and secrets management to cloud repositories and pull them at boot time. Production devices need to start up and operate correctly even if cloud connectivity is down. Configuration should be pushed from cloud to local caches periodically, not pulled on demand.
If you're planning cloud infrastructure for manufacturing data, reach out to discuss architecture for your environment.
This article was written by the Cascadia OT Security practice, which advises Pacific Northwest data centers and manufacturers on industrial cybersecurity. For engagement inquiries, reach our practice team.