High-Level Topology

This page shows the full runtime topology — where each software process lives, what hardware it talks to, which protocols connect them, and where off-site actors sit relative to the Plant IPC.

Diagram

Layout

The Inference Server always runs on its own dedicated plant-LAN host. This isolates the inference workload, keeps the Plant IPC focused on hardware I/O + compliance, and gives Python / model retraining its own CPU / RAM / (optional) GPU budget.

Components

Operator Console · Jope.SMB

  • Process: WPF .NET 8 interactive Windows application
  • Responsibilities: Batch control FSM · operator UI · audit trail writes · electronic signature · historian writes · hardware I/O
  • State: Stateful — holds batch state machine, user session, in-memory alarm queue, recipe cache
  • Built on: Jope.Core (Transport · Compliance · Identity) + Jope.UI (Shell · Login · Signature dialog · Theming · i18n)

Inference Server · Inference Host (dedicated)

  • Process: Python 3.11 running as a long-lived daemon on a dedicated plant-LAN host
  • OS: Linux (recommended — systemd service) or Docker / Podman container
  • Responsibilities: Raman spectrum → concentration prediction (PLS + Ridge) · model registry · training jobs · health endpoint
  • State: Stateless per request — no batch memory. Model loaded once from disk registry at startup; hot-swappable at runtime.
  • Channels: ZMQ REP for predict, HTTP REST for /model/* and /training/*
  • Why dedicated host: isolates the inference workload from real-time hardware I/O; allows larger models, parallel inference, and optional GPU acceleration; keeps training jobs from interfering with Console responsiveness; and Linux is the native environment for Python ML tooling
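The load-once / hot-swap behavior described above can be sketched as follows. This is a minimal illustration using only the standard library: the class name, the JSON metadata layout, and the registry directory structure are assumptions, and the real server would deserialize the accompanying .joblib estimator rather than the placeholder used here.

```python
import json
import threading
from pathlib import Path

class ModelRegistry:
    """Illustrative load-once / hot-swap model holder (names are hypothetical)."""

    def __init__(self, registry_dir: str):
        self._dir = Path(registry_dir)
        self._lock = threading.Lock()
        self._model = None
        self._meta = {}

    def load(self, version: str) -> dict:
        # Read metadata JSON for the requested version; the real server would
        # also deserialize the .joblib model file stored alongside it.
        meta = json.loads((self._dir / f"{version}.json").read_text())
        with self._lock:
            # Atomic swap: in-flight predicts see the old model or the new
            # one, never a half-loaded state.
            self._meta = meta
            self._model = meta["version"]  # placeholder for the estimator object
        return meta

    def active_version(self) -> str:
        with self._lock:
            return self._meta.get("version", "none")
```

Because the swap happens under a lock after the new model is fully loaded, a hot swap never blocks the ZMQ predict loop for longer than the pointer exchange itself.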

Historian · TimescaleDB

  • Process: TimescaleDB 2.x on the Plant IPC (same machine as Console)
  • Content: all Raman spectra · all predictions · all signatures · all alarms · all audit events · all batch records
  • Retention: 7 years (FDA requirement)
  • Access: Console writes directly; Inference Server may read for optional model QC audits

Hardware Cluster

| Device | Role | Primary Protocol |
| --- | --- | --- |
| RS2000 Raman Spectrometer | 2048-wavenumber spectrum acquisition, dual probe | Modbus RS485 (gateway to Ethernet) |
| NP7000 / EPP / P3700 Pumps | Feed / eluent / recycle flow control | Serial ASCII 16-byte / binary 0x88 / binary 0xFF |
| SKS S3612 rotary valve | 12-port inlet/outlet zone switching | ASCII 12-char over RS232/485 |
| SKS S3203 switching valve | 3-port feed routing | ASCII 4-char |
| APAX pneumatic valves (PV301-306) | On/off gas/liquid isolation | APAX module |
| NU3000 / UV1000D UV Detectors | Auxiliary absorbance readings | Serial ASCII 16-byte |

See Hardware Interfaces (coming soon) for device-level register maps.
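As a rough illustration of what a fixed-width serial ASCII frame (like the 16-byte pump and detector commands above) typically looks like: the actual field layouts belong to the device register maps, and the padding/terminator scheme below is purely an assumption, not any of these devices' real protocols.

```python
def build_frame(cmd: str, value: str, length: int = 16) -> bytes:
    """Pad an ASCII command to a fixed frame length.

    Purely illustrative: the real NP7000 / NU3000 16-byte layouts live in
    the device register maps and are NOT reproduced here.
    """
    body = f"{cmd}{value}"
    if len(body) > length - 1:
        raise ValueError("command too long for frame")
    # Fixed-width ASCII, space-padded, CR-terminated -- a common serial idiom.
    return body.ljust(length - 1).encode("ascii") + b"\r"
```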

Off-Site Actor · AI Research

  • Location: Academic / research partner dev machine (off plant network)
  • Workflow: Trains PLS + Ridge models on historical Raman + HPLC-labeled data, exports .joblib model file + metadata JSON
  • Delivery: Signed + encrypted package transferred to plant (sneaker-net or secure transfer). No direct network link to Plant IPC — keeps production air-gap intact.
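The verify-before-import step on the plant side can be sketched as below. The HMAC-SHA256 scheme and shared-key handling are assumptions for illustration; the plant's actual signing and encryption mechanism is not specified in this page.

```python
import hashlib
import hmac

def verify_package(package: bytes, signature_hex: str, shared_key: bytes) -> bool:
    """Illustrative integrity check for a delivered model package.

    Shows only the verify-before-import step; key management and the
    encryption layer are out of scope here.
    """
    expected = hmac.new(shared_key, package, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature_hex)
```

Any package failing this check would be rejected before the .joblib file ever reaches the Inference Host's model registry.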

Communication Channels

| From | To | Protocol | Direction | Purpose |
| --- | --- | --- | --- | --- |
| Raman | Console | Modbus RS485 | → | Spectrum read |
| Console | Pumps | Serial ASCII / Binary | → | Flow setpoint · status |
| Console | Valves | Serial / APAX | → | Position command · feedback |
| Console | UV Detectors | Serial ASCII | → | Absorbance read |
| Console | Inference Server | ZMQ REQ-REP | → | predict · hot path ≤ 20 ms |
| Console | Inference Server | HTTP REST | → | Model load · training trigger · health (cold) |
| Console | Historian | TCP (TimescaleDB) | → | Writes: spectra, predictions, audit, batch |
| Inference | Historian | TCP (TimescaleDB) | ← (optional) | Model QC audits, read-only |
| Off-site Dev | Plant | Signed package | → | .joblib model delivery |

See ZMQ Integration Protocol for the Console ↔ Inference wire contract.
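As a loose illustration of what the predict exchange might carry, the helpers below build and parse a JSON envelope. Every field name here is a placeholder; the authoritative schema is the ZMQ Integration Protocol document.

```python
import json

def build_predict_request(batch_id: str, spectrum: list) -> bytes:
    """Serialize a predict request (field names are placeholders)."""
    return json.dumps(
        {"op": "predict", "batch_id": batch_id, "spectrum": spectrum}
    ).encode("utf-8")

def parse_predict_reply(raw: bytes) -> dict:
    """Decode a reply and surface server-side errors as exceptions."""
    reply = json.loads(raw.decode("utf-8"))
    if reply.get("status") != "ok":
        raise RuntimeError(f"predict failed: {reply.get('error')}")
    return reply
```

In a REQ-REP pattern the Console would send the request bytes, block (with a timeout sized to the ≤ 20 ms hot-path budget), and parse whatever the REP socket returns.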

Deployment

Inference Host is always a separate machine on the plant LAN. Console reaches it via TCP using the endpoint address in its config file (example: tcp://10.0.1.42:5555).
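A minimal sketch of validating that endpoint string on the Console side (the function name is illustrative; only the tcp://host:port shape comes from the config example above):

```python
from urllib.parse import urlparse

def parse_endpoint(endpoint: str):
    """Validate a ZMQ TCP endpoint of the form tcp://host:port."""
    parts = urlparse(endpoint)
    if parts.scheme != "tcp" or parts.port is None:
        raise ValueError(f"expected tcp://host:port, got {endpoint!r}")
    return parts.hostname, parts.port
```

Failing fast on a malformed endpoint at startup is cheaper than diagnosing a hung REQ socket later.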

Full deployment steps (installer, process manager, backup, upgrade) are in Deployment Topology.