High-Level Topology
This page shows the full runtime topology — where each software process lives, what hardware it talks to, which protocols connect them, and where off-site actors sit relative to the Plant IPC.
Diagram
Inference Server always runs on its own dedicated plant-LAN host. This isolates the inference workload, leaves Plant IPC focused on hardware I/O + compliance, and gives Python / model retraining its own CPU / RAM / (optional) GPU budget.
Components
Operator Console · Jope.SMB
- Process: WPF .NET 8 interactive Windows application
- Responsibilities: Batch control FSM · operator UI · audit trail writes · electronic signature · historian writes · hardware I/O
- State: Stateful — holds batch state machine, user session, in-memory alarm queue, recipe cache
- Built on: Jope.Core (Transport · Compliance · Identity) + Jope.UI (Shell · Login · Signature dialog · Theming · i18n)
Inference Server · Inference Host (dedicated)
- Process: Python 3.11 running as a long-lived daemon on a dedicated plant-LAN host
- OS: Linux (recommended — systemd service) or Docker / Podman container
- Responsibilities: Raman spectrum → concentration prediction (PLS + Ridge) · model registry · training jobs · health endpoint
- State: Stateless per request — no batch memory. Model loaded once from disk registry at startup; hot-swappable at runtime.
- Channels: ZMQ REP for `predict`, HTTP REST for `/model/*` and `/training/*`
- Why dedicated host: isolates inference workload from real-time hardware I/O, allows larger models / parallel inference / optional GPU acceleration, keeps training jobs from interfering with Console responsiveness, and Linux is the native environment for Python ML tooling
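The stateless request loop above can be sketched as follows. This is a minimal illustration, not the production server: the flat coefficient vector in `predict` stands in for the real PLS + Ridge pipeline loaded from the `.joblib` registry, and the message envelope is assumed (the authoritative wire contract is on the ZMQ Integration Protocol page).

```python
import json

def predict(spectrum):
    # Placeholder for the PLS + Ridge pipeline loaded from the .joblib
    # registry at startup; a flat coefficient vector stands in for the model.
    coef = [1e-4] * len(spectrum)
    return sum(x * c for x, c in zip(spectrum, coef))

def serve(endpoint="tcp://*:5555"):
    import zmq  # imported here so predict() stays testable without pyzmq
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REP)
    sock.bind(endpoint)
    while True:  # stateless: each request carries everything predict() needs
        req = json.loads(sock.recv())
        conc = predict(req["spectrum"])
        sock.send_string(json.dumps({"concentration": conc}))
```

Because no batch state lives in the process, hot-swapping a model is just replacing whatever object `predict` dispatches to; in-flight REQ-REP exchanges are unaffected.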
Historian · TimescaleDB
- Process: TimescaleDB 2.x on the Plant IPC (same machine as Console)
- Content: all Raman spectra · all predictions · all signatures · all alarms · all audit events · all batch records
- Retention: 7 years (FDA requirement)
- Access: Console writes directly; Inference Server may read for optional model QC audits
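The Console's direct writes are ordinary parameterized inserts over the TimescaleDB TCP connection. A sketch of building one prediction row, with a hypothetical table and column layout (the real schema belongs to the Console's compliance layer):

```python
from datetime import datetime, timezone

def prediction_row(batch_id, probe, concentration, model_version):
    # Illustrative table/column names; returns (sql, params) for a
    # parameterized insert so values are never string-interpolated.
    return (
        "INSERT INTO predictions (ts, batch_id, probe, concentration, model_version) "
        "VALUES (%s, %s, %s, %s, %s)",
        (datetime.now(timezone.utc), batch_id, probe, concentration, model_version),
    )
```

At runtime the Console executes this through its database driver (Npgsql on the .NET side); the Inference Server's optional QC reads use the same connection type, read-only.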
Hardware Cluster
| Device | Role | Primary Protocol |
|---|---|---|
| RS2000 Raman Spectrometer | 2048-wavenumber spectrum acquisition, dual probe | Modbus RS485 (gateway to Ethernet) |
| NP7000 / EPP / P3700 Pumps | Feed / eluent / recycle flow control | Serial ASCII 16-byte / binary 0x88 / binary 0xFF |
| SKS S3612 rotary valve | 12-port inlet/outlet zone switching | ASCII 12-char over RS232/485 |
| SKS S3203 switching valve | 3-port feed routing | ASCII 4-char |
| APAX pneumatic valves (PV301-306) | On/off gas/liquid isolation | APAX module |
| NU3000 / UV1000D UV Detectors | Auxiliary absorbance readings | Serial ASCII 16-byte |
See Hardware Interfaces (coming soon) for device-level register maps.
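As a flavor of what "Serial ASCII 16-byte" means for the pump family, here is a frame-builder sketch. The framing (space padding, trailing CR) is hypothetical; the real 16-byte layouts, and the binary `0x88` / `0xFF` variants, are device-specific and defined in the register maps.

```python
def build_frame(command: str, length: int = 16) -> bytes:
    # Hypothetical fixed-width ASCII framing: payload padded with spaces,
    # terminated by CR. Consult the device register maps for real layouts.
    payload = command.encode("ascii")
    if len(payload) > length - 1:
        raise ValueError("command too long for frame")
    return payload.ljust(length - 1, b" ") + b"\r"
```

The resulting bytes would be written to the port with pyserial (`serial.Serial(port, baudrate, timeout=...)`), with baud rate and parity per device.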
Off-Site Actor · AI Research
- Location: Academic / research partner dev machine (off plant network)
- Workflow: Trains PLS + Ridge models on historical Raman + HPLC-labeled data, exports a `.joblib` model file + metadata JSON
- Delivery: Signed + encrypted package transferred to plant (sneaker-net or secure transfer). No direct network link to Plant IPC — keeps production air-gap intact.
Communication Channels
| From | To | Protocol | Direction | Purpose |
|---|---|---|---|---|
| Raman | Console | Modbus RS485 | → | Spectrum read |
| Console | Pumps | Serial ASCII / Binary | ↔ | Flow setpoint · status |
| Console | Valves | Serial / APAX | ↔ | Position command · feedback |
| Console | UV Detectors | Serial ASCII | ← | Absorbance read |
| Console | Inference Server | ZMQ REQ-REP | ↔ | predict · hot path ≤ 20 ms |
| Console | Inference Server | HTTP REST | ↔ | Model load · training trigger · health (cold) |
| Console | Historian | TCP (TimescaleDB) | → | Writes: spectra, predictions, audit, batch |
| Inference | Historian | TCP (TimescaleDB) | ← (optional) | Model QC audits, read-only |
| Off-site Dev | Plant | Signed package | → | .joblib model delivery |
See ZMQ Integration Protocol for the Console ↔ Inference wire contract.
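On the Console side, the ≤ 20 ms hot-path budget implies a hard deadline on the REQ-REP round trip. A sketch of that pattern (the JSON envelope is an assumption; the authoritative shape is in the ZMQ Integration Protocol):

```python
import json

def encode_request(spectrum):
    # Illustrative wire envelope for a predict call.
    return json.dumps({"spectrum": list(spectrum)}).encode()

def request_prediction(endpoint, spectrum, timeout_ms=20):
    import zmq  # lazy import keeps the envelope helper testable without pyzmq
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REQ)
    sock.setsockopt(zmq.LINGER, 0)  # don't block process exit on close
    sock.connect(endpoint)
    try:
        sock.send(encode_request(spectrum))
        if sock.poll(timeout_ms):  # hard deadline for the hot path
            return json.loads(sock.recv())
        return None  # deadline missed: caller raises an alarm / falls back
    finally:
        # a REQ socket that missed its reply is stuck mid-state-machine;
        # discard it rather than reuse it
        sock.close()
```

Discarding the socket on timeout matters: a ZMQ REQ socket enforces strict send/recv alternation, so a missed reply would otherwise wedge the next request.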
Deployment
Inference Host is always a separate machine on the plant LAN. The Console reaches it via TCP using the endpoint address in its config file (example: `tcp://10.0.1.42:5555`).
Full deployment steps (installer, process manager, backup, upgrade) are in Deployment Topology.
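For the process-manager piece, running the Inference Server as a systemd service on the dedicated Linux host might look like the sketch below. Unit name, user, and paths are all illustrative assumptions, not the shipped configuration.

```ini
# /etc/systemd/system/jope-inference.service  (illustrative name and paths)
[Unit]
Description=Jope Inference Server (PLS + Ridge)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=jope
WorkingDirectory=/opt/jope-inference
ExecStart=/opt/jope-inference/venv/bin/python -m inference_server
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enabled with `systemctl enable --now jope-inference`, the daemon restarts automatically on failure, which matters because the Console's hot path degrades to its fallback whenever the predict endpoint stops answering.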