
Deployment Topology

Jope.SMB runs across two hosts: the Plant IPC (Console + Historian) on Windows, and a dedicated Inference Host on the plant LAN running the Python Inference Server. This page covers prerequisites, installation layout, process management, network configuration, backup, and upgrade paths for both hosts.

Target Environment

Plant IPC (Console + Historian)

| Item    | Requirement |
| ------- | ----------- |
| OS      | Windows 10 21H2+ or Windows 11 21H2+ |
| CPU     | 4+ cores @ 2.5 GHz (6+ cores recommended) |
| RAM     | 16 GB |
| Disk    | 500 GB SSD (historian 7-year retention: ~100-200 GB / year) |
| Network | Isolated plant VLAN; no direct internet access |

Inference Host (dedicated, default)

| Item           | Requirement |
| -------------- | ----------- |
| OS             | Linux (recommended — Ubuntu 22.04 LTS / Rocky 9), Windows Server 2022, or container host (Docker / Podman) |
| CPU            | 8+ cores @ 3.0 GHz |
| RAM            | 32 GB (64 GB if models > 500 MB or GPU buffering) |
| GPU (optional) | NVIDIA with CUDA 12+ if GPU-accelerated inference is used |
| Disk           | 200 GB SSD |
| Network        | Plant VLAN; accepts TCP from the Plant IPC on ports 5555, 5556 |

Runtime Prerequisites

Plant IPC (Windows)

| Component                   | Version                  | Purpose |
| --------------------------- | ------------------------ | ------- |
| .NET Desktop Runtime 8      | 8.0.x (latest LTS patch) | Runs Jope.SMB Console |
| TimescaleDB                 | 2.15+ (PostgreSQL 15+)   | Historian |
| Visual C++ Redistributable  | 2015-2022                | Native dependencies |

Inference Host

| Component                | Version                                         | Purpose |
| ------------------------ | ----------------------------------------------- | ------- |
| Python                   | 3.11.x                                          | Runs Inference Server |
| Process manager          | systemd (Linux, recommended) or Docker / Podman | Run Inference as daemon with auto-restart |
| CUDA Toolkit (optional)  | 12.x                                            | GPU-accelerated inference |

Installation Layout

Plant IPC (Windows)

C:\Jope\
├── SMB\                          ← Console binaries
│   ├── Jope.SMB.WPF.exe
│   ├── config\
│   │   ├── app.json              ← endpoints, theme, language
│   │   └── devices.json          ← device COM port assignments
│   └── logs\
└── Historian\                    ← TimescaleDB data dir (may be on D: for capacity)
    └── data\

Inference Host (Linux systemd · default)

/opt/jope-inference/
├── venv/                         ← isolated Python environment
├── main.py
├── models/                       ← .joblib files + metadata
├── config/
│   └── server.toml               ← endpoint binding, model paths
└── logs/

/etc/systemd/system/
└── jope-inference.service        ← systemd unit
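The shipped schema of server.toml is not reproduced here; as an illustrative sketch (every key name below is an assumption), it needs to cover the two endpoint bindings and the model directory referenced above:

```toml
# /opt/jope-inference/config/server.toml (key names are illustrative,
# not the shipped schema)
[server]
zmq_bind  = "tcp://0.0.0.0:5555"    # ZMQ request socket for the Console
http_bind = "0.0.0.0:5556"          # REST endpoint (/health etc.)

[models]
dir = "/opt/jope-inference/models"  # .joblib files + metadata

[logging]
dir   = "/opt/jope-inference/logs"
level = "INFO"
```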

Inference Host (Docker · alternative)

jope-inference/
├── Dockerfile
├── docker-compose.yml ← single-service compose
├── models/ ← mounted as volume
└── config/ ← mounted as volume
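For the Docker alternative, the single-service compose file can look like the sketch below. The build context and mount targets are assumptions; the ports and the unless-stopped restart policy follow the Process Management section of this page:

```yaml
# docker-compose.yml (sketch; build context and mount paths are assumptions)
services:
  jope-inference:
    build: .
    restart: unless-stopped
    ports:
      - "5555:5555"   # ZMQ
      - "5556:5556"   # REST
    volumes:
      - ./models:/opt/jope-inference/models:ro
      - ./config:/opt/jope-inference/config:ro
```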

Process Management

Plant IPC · Service Registration

| Service Name      | Process                                    | Start Mode | Depends On |
| ----------------- | ------------------------------------------ | ---------- | ---------- |
| postgresql-x64-15 | PostgreSQL + TimescaleDB (Windows Service) | Auto       | —          |

The Console is NOT a service — it runs interactively under the logged-in operator session. This is intentional: electronic signature dialogs must be visible to the user, and a session-tied UI is a compliance expectation.

Inference Host · daemon

  • Linux (recommended): systemctl enable --now jope-inference.service. Unit file uses Restart=always, User=jope, and sets WorkingDirectory=/opt/jope-inference.
  • Docker: docker compose up -d — restart policy unless-stopped.

The daemon exposes ZMQ on :5555 and HTTP on :5556 to the plant LAN.
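A minimal unit file matching the settings named above could look like the following sketch (Description, RestartSec, and the network-online ordering are illustrative additions):

```ini
# /etc/systemd/system/jope-inference.service (sketch)
[Unit]
Description=Jope.SMB Python Inference Server
After=network-online.target
Wants=network-online.target

[Service]
User=jope
WorkingDirectory=/opt/jope-inference
ExecStart=/opt/jope-inference/venv/bin/python main.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```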

Network Configuration

| Endpoint         | Port | Host                           | Notes |
| ---------------- | ---- | ------------------------------ | ----- |
| Inference ZMQ    | 5555 | Inference Host                 | TCP bound to plant LAN; configurable in Console app.json |
| Inference REST   | 5556 | Inference Host                 | HTTP/1.1; JSON |
| Historian        | 5432 | Plant IPC (loopback preferred) | PostgreSQL default |
| Raman gateway    | 502  | Plant LAN                      | Modbus TCP |
| Operator Console | —    | Plant IPC                      | No listening port; outbound only |

Firewall Rules

Inference Host (Linux iptables / firewalld, or Windows Firewall):

  • Allow inbound 5555 + 5556 only from Plant IPC's address
  • Block all other inbound from plant VLAN
  • Block all outbound to internet (plant is air-gapped)

Plant IPC (Windows Firewall):

  • Allow outbound to Inference Host ports 5555 + 5556
  • Block all inbound from plant VLAN
  • Block all outbound to internet
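A plain TCP connect test from the Plant IPC side is enough to verify these rules behave as intended. The sketch below uses only the Python standard library; the host name in the comments is a placeholder, not a real address:

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example (hypothetical host name):
#   port_open("inference-host", 5555)  -> should be True from the Plant IPC
#   port_open("inference-host", 22)    -> should be False if the firewall is correct
```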

Deployment Layout

Two hosts on the plant LAN:

Plant IPC (Windows)                  Inference Host (Linux recommended)
├── Jope.SMB Console                 ├── Inference Server (daemon)
└── TimescaleDB (service)            └── Model Registry (.joblib files)
          │                                        ▲
          └────────────── plant LAN ───────────────┘
                   ZMQ :5555 + REST :5556
  • Plant IPC — Windows; runs Console interactively + Historian as a service. Owns hardware I/O.
  • Inference Host — Linux + systemd (recommended) or Docker / Podman. Runs the Python Inference Server as a long-lived daemon. Owns its own CPU / RAM / optional GPU budget.

Why the separation:

  • Plant IPC can focus on real-time hardware polling + operator UI + compliance writes without Python GIL / training CPU spikes interfering
  • Inference Host can be sized independently (more RAM for larger models, optional GPU for parallel inference)
  • Training jobs (which can saturate CPU for minutes) don't impact Console responsiveness
  • Python ML tooling runs natively on Linux; no need to force Windows compatibility for libraries
  • Easier to scale / replace the Inference Host over time as models evolve

Latency over the plant LAN is ~1-2 ms, well within the predict p95 ≤ 20 ms target.
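To check that p95 figure against a live deployment, the percentile can be measured client-side. A minimal sketch, where `request` is a stand-in for the real ZMQ predict call:

```python
import statistics
import time
from typing import Callable


def p95_latency_ms(request: Callable[[], None], n: int = 200) -> float:
    """Call `request` n times and return the 95th-percentile latency in ms."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        request()
        samples.append((time.perf_counter() - t0) * 1000.0)
    # quantiles(n=20) yields 19 cut points in 5% steps; index 18 is the 95th
    return statistics.quantiles(samples, n=20)[18]
```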

Backup

| Scope                                | Frequency   | Target                        | Retention                                  | Encryption |
| ------------------------------------ | ----------- | ----------------------------- | ------------------------------------------ | ---------- |
| Historian (full)                     | Daily 02:00 | NAS \\nas01\backup\jope-smb\  | 90 days rolling + monthly archive (1 year) | AES-256 at rest |
| Historian (WAL)                      | Continuous  | Same                          | With full backups                          | AES-256 |
| Config (C:\Jope\SMB\config\)         | Weekly      | Same                          | 90 days                                    | AES-256 |
| Models (/opt/jope-inference/models/) | On change   | Same                          | Unlimited (small)                          | Signed packages |

Restore requires dual approval per Compliance Mapping · 4-Eyes.

Upgrade Path

Every component has a versioned upgrade script:

  1. Announce maintenance window — Console shows banner 24 h in advance
  2. Gracefully stop batch — operator finishes current run; no forced interrupt
  3. Take full backup + WAL snapshot
  4. Stop services (Inference, then Console — leave Historian running for queries)
  5. Run upgrade installer
  6. Apply DB migrations — TimescaleDB schema updates are versioned; down scripts exist
  7. Verify — smoke tests in Jope.SMB.Core.Tests; device connectivity checks
  8. Restart services + Console
  9. Audit event: SystemUpgraded with {old_version, new_version, operator, timestamp}
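The SystemUpgraded event in step 9 can be emitted as a single structured JSON line. A sketch, where the function name is illustrative rather than the shipped API:

```python
import json
from datetime import datetime, timezone


def system_upgraded_event(old_version: str, new_version: str, operator: str) -> str:
    """Build the SystemUpgraded audit event as one JSON line."""
    event = {
        "event": "SystemUpgraded",
        "old_version": old_version,
        "new_version": new_version,
        "operator": operator,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(event, separators=(",", ":"))
```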

Rollback: restore from the pre-upgrade backup; dual approval required.

Monitoring

  • Windows Event Log: every service writes structured JSON lines
  • Operator Console · Status Bar: Inference heartbeat, Historian connectivity, free disk
  • Historian self-query: SELECT COUNT(*) FROM audit_events WHERE ts > now() - interval '24 hours' — must grow monotonically
  • Optional — plant SCADA can poll /health on Inference Server for plant-wide dashboards
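The /health poll in the last bullet needs nothing beyond the standard library. The endpoint path follows the bullet above; treating any HTTP 200 as healthy is an assumption about the response contract:

```python
import urllib.request


def inference_healthy(base_url: str, timeout: float = 3.0) -> bool:
    """Poll the Inference Server /health endpoint; True on HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection errors, timeouts, and HTTP errors
        return False


# Example (placeholder address):
#   inference_healthy("http://inference-host:5556")
```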

Installer Responsibility Split

| Item                                                                       | Owner |
| -------------------------------------------------------------------------- | ----- |
| Installer packaging (Plant IPC MSI + Inference Host image / systemd unit)  | Jope engineering |
| Plant OS provisioning                                                      | Plant IT |
| User account creation (dev + service)                                      | Plant IT |
| First-boot validation                                                      | Jope engineering + Plant QA |
| Ongoing patching (Windows Update)                                          | Plant IT, announced maintenance windows |
| Jope.SMB version upgrades                                                  | Jope engineering, dual approval by Plant QA |