Connecting ZeroClaw to the Plant Floor - OPC-UA, Kepware, and Cheese Vat Monitoring
This is Part 3 of the ZeroClaw on the OnLogic FR201 series. In Part 1, we deployed ZeroClaw as a hardened static binary with systemd. In Part 2, we showed how to run a local model for air-gapped environments. Now it’s time to actually connect the agent to the plant floor and do something useful with it.
We’re going to build a small Rust OPC-UA client that subscribes to process data from Kepware, stores it locally on the FR201, and then uses ZeroClaw to detect anomalies in cheese vat cycle times and generate real-time alerts. We’ll also show how to extend this into a shift summary report. The entire data pipeline, from PLC tag to natural-language alert, runs on one device with no cloud dependency for data collection.
The Use Case: Cheese Vat Time Loss
If you’ve spent any time in a cheese plant, you know that vat cycle consistency is everything. Every vat in a make room should, in theory, follow the same recipe with roughly the same timing. But in practice, fill times creep up because a pump is wearing out, cook times vary because a steam valve is sluggish, and pump-over times drift because nobody noticed a partially closed butterfly valve.
These time discrepancies between vats add up fast. A vat that takes 8 minutes longer to fill than its neighbors might not trigger a traditional alarm (it’s still “filling”), but over the course of a shift across 20 vats, that’s 160 minutes, more than two and a half hours of lost throughput.
The problem is that traditional SCADA alarms are binary: either a value is above a threshold or it isn’t. They don’t compare vats to each other, they don’t trend cycle times over shifts, and they definitely don’t tell you why something might be off. That’s where ZeroClaw comes in.
What We’re Building
The architecture is straightforward:
- Kepware reads tags from the PLCs controlling the cheese vats (Allen-Bradley, Siemens, whatever you’re running).
- A Rust OPC-UA client (cross-compiled and running as a systemd service on the FR201) subscribes to those tags and logs timestamped data to a local SQLite database.
- ZeroClaw periodically queries that database, compares cycle times across vats, and generates alerts when it detects meaningful deviations.
Three static binaries, one SQLite file, zero external dependencies.
Kepware Tag Structure
This guide assumes you already have KEPServerEX running and connected to your PLC(s). We won’t walk through Kepware installation or PLC driver setup, as that varies heavily depending on your hardware. What matters is the tag structure the OPC-UA client will subscribe to.
For this example, we’ll assume a channel and device structure like this:
```
Channel: CheeseVats
  Device: MakeRoom1
    Tag Group: Vat01
      Vat01.CookTemp      (Float, °F)
      Vat01.CookTime      (Float, minutes)
      Vat01.FillTime      (Float, minutes)
      Vat01.PumpOverTime  (Float, minutes)
      Vat01.BatchActive   (Boolean)
      Vat01.BatchID       (String)
    Tag Group: Vat02
      Vat02.CookTemp
      Vat02.CookTime
      Vat02.FillTime
      Vat02.PumpOverTime
      Vat02.BatchActive
      Vat02.BatchID
    ... (repeat for each vat)
```

The OPC-UA node IDs for these tags in Kepware follow the pattern:
```
ns=2;s=CheeseVats.MakeRoom1.Vat01.CookTemp
ns=2;s=CheeseVats.MakeRoom1.Vat01.CookTime
ns=2;s=CheeseVats.MakeRoom1.Vat01.FillTime
...
```

OPC-UA Endpoint Configuration
In the KEPServerEX Configuration, under OPC UA in the project tree, verify the following:
- Endpoint URL: `opc.tcp://<KEPWARE_IP>:49320` (default port for KEPServerEX)
- Security Policy: For an isolated plant network, `None` is acceptable. For anything crossing network boundaries, use `Basic256Sha256` with `Sign & Encrypt`.
- Allow Anonymous Login: Enable this for initial testing, then switch to username/password authentication for production.
Make note of the endpoint URL. The Rust OPC-UA client will need it.
Building the OPC-UA Data Collector
Staying consistent with Parts 1 and 2, we’ll cross-compile this on our workstation and deploy a static binary to the FR201. No Python, no Node, no runtime dependencies on the target device.
Project Setup
On your workstation, create a new Rust project:
```bash
cargo new zeroclaw-opcua-collector
cd zeroclaw-opcua-collector
```

Edit `Cargo.toml`:
```toml
[package]
name = "zeroclaw-opcua-collector"
version = "0.1.0"
edition = "2021"

[dependencies]
opcua = { version = "0.14", features = ["client"] }
tokio = { version = "1", features = ["full"] }
rusqlite = { version = "0.34", features = ["bundled"] }
chrono = "0.4"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
clap = { version = "4", features = ["derive"] }
log = "0.4"
env_logger = "0.11"
```

The `rusqlite` `bundled` feature is important. It compiles SQLite from source and statically links it into our binary, so we don’t need SQLite installed on the FR201.
Configure Cross-Compilation
Just like in Part 1, create or update `.cargo/config.toml`:
```toml
[target.aarch64-unknown-linux-musl]
linker = "aarch64-linux-musl-gcc"
```

The Data Collector Source
Create `src/main.rs`:
```rust
use chrono::Utc;
use clap::Parser;
use log::{error, info, warn};
use opcua::client::prelude::*;
use opcua::sync::RwLock;
use rusqlite::Connection;
use std::path::PathBuf;
use std::sync::Arc;

#[derive(Parser, Debug)]
#[command(name = "zeroclaw-opcua-collector")]
#[command(about = "OPC-UA data collector for cheese vat monitoring")]
struct Args {
    /// Kepware OPC-UA endpoint URL
    #[arg(long, default_value = "opc.tcp://192.168.1.100:49320")]
    endpoint: String,

    /// Path to the SQLite database file
    #[arg(long, default_value = "/var/lib/zeroclaw/vatdata.db")]
    database: PathBuf,

    /// Number of vats to monitor
    #[arg(long, default_value_t = 8)]
    vat_count: u32,

    /// Kepware channel.device prefix
    #[arg(long, default_value = "CheeseVats.MakeRoom1")]
    prefix: String,

    /// Subscription polling interval in milliseconds
    #[arg(long, default_value_t = 1000)]
    poll_interval: u64,
}

/// Represents a single data point from a vat
#[derive(Debug)]
struct VatReading {
    timestamp: String,
    vat_id: String,
    tag_name: String,
    value: f64,
}

fn init_database(db_path: &PathBuf) -> rusqlite::Result<Connection> {
    let conn = Connection::open(db_path)?;

    conn.execute_batch(
        "
        CREATE TABLE IF NOT EXISTS vat_readings (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            timestamp TEXT NOT NULL,
            vat_id TEXT NOT NULL,
            tag_name TEXT NOT NULL,
            value REAL NOT NULL
        );

        CREATE TABLE IF NOT EXISTS batch_events (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            timestamp TEXT NOT NULL,
            vat_id TEXT NOT NULL,
            batch_id TEXT,
            event_type TEXT NOT NULL,
            fill_time REAL,
            cook_time REAL,
            cook_temp REAL,
            pump_over_time REAL
        );

        CREATE INDEX IF NOT EXISTS idx_readings_timestamp ON vat_readings(timestamp);
        CREATE INDEX IF NOT EXISTS idx_readings_vat ON vat_readings(vat_id, tag_name);
        CREATE INDEX IF NOT EXISTS idx_batch_events_timestamp ON batch_events(timestamp);
        ",
    )?;

    Ok(conn)
}

/// Build the list of OPC-UA node IDs for all vats
fn build_node_ids(prefix: &str, vat_count: u32) -> Vec<(String, String, String)> {
    let tags = ["CookTemp", "CookTime", "FillTime", "PumpOverTime", "BatchActive"];
    let mut nodes = Vec::new();

    for vat_num in 1..=vat_count {
        let vat_id = format!("Vat{:02}", vat_num);
        for tag in &tags {
            let node_id = format!("{}.{}.{}", prefix, vat_id, tag);
            nodes.push((node_id, vat_id.clone(), tag.to_string()));
        }
    }

    nodes
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    env_logger::init();
    let args = Args::parse();

    info!("Initializing database at {:?}", args.database);
    let db = Arc::new(RwLock::new(init_database(&args.database)?));

    info!("Building OPC-UA client");
    let mut client = ClientBuilder::new()
        .application_name("ZeroClaw OPC-UA Collector")
        .application_uri("urn:zeroclaw:opcua:collector")
        .trust_server_certs(true)
        .session_retry_limit(10)
        .client()?;

    let endpoint: EndpointDescription = (
        args.endpoint.as_str(),
        "None",
        MessageSecurityMode::None,
        UserTokenPolicy::anonymous(),
    )
        .into();

    info!("Connecting to Kepware at {}", args.endpoint);
    let (session, event_loop) = client
        .new_session_from_endpoint(endpoint, IdentityToken::Anonymous)
        .await?;

    let handle = event_loop.spawn();

    session.wait_for_connection().await;
    info!("Connected to Kepware OPC-UA server");

    // Build the node list for all vats
    let nodes = build_node_ids(&args.prefix, args.vat_count);

    // Track batch state per vat for detecting batch start/end
    let batch_state: Arc<RwLock<std::collections::HashMap<String, bool>>> =
        Arc::new(RwLock::new(std::collections::HashMap::new()));

    // Track latest values per vat for batch event recording
    let latest_values: Arc<RwLock<std::collections::HashMap<String, std::collections::HashMap<String, f64>>>> =
        Arc::new(RwLock::new(std::collections::HashMap::new()));

    let db_clone = db.clone();
    let batch_state_clone = batch_state.clone();
    let latest_values_clone = latest_values.clone();

    // Create a subscription for data changes
    let subscription_id = session
        .create_subscription(
            std::time::Duration::from_millis(args.poll_interval),
            10,   // lifetime count
            30,   // max keepalive count
            0,    // max notifications per publish
            0,    // priority
            true, // publishing enabled
            DataChangeCallback::new(move |items| {
                for item in items.iter() {
                    if let Some(ref value) = item.value().value {
                        let node_id = item.item_to_monitor().node_id.to_string();

                        // Parse out the vat_id and tag_name from the node ID
                        // Expected format: ns=2;s=CheeseVats.MakeRoom1.Vat01.CookTemp
                        if let Some(identifier) = node_id.split(";s=").nth(1) {
                            let parts: Vec<&str> = identifier.split('.').collect();
                            if parts.len() >= 4 {
                                let vat_id = parts[2].to_string();
                                let tag_name = parts[3].to_string();

                                // Handle BatchActive state transitions
                                if tag_name == "BatchActive" {
                                    let is_active = value.as_bool().unwrap_or(false);
                                    let mut states = batch_state_clone.write();
                                    let prev_active =
                                        states.get(&vat_id).copied().unwrap_or(false);

                                    if !prev_active && is_active {
                                        info!("{}: Batch started", vat_id);
                                    } else if prev_active && !is_active {
                                        // Batch ended: record the batch event
                                        info!("{}: Batch completed", vat_id);
                                        let values = latest_values_clone.read();
                                        if let Some(vat_values) = values.get(&vat_id) {
                                            let db = db_clone.read();
                                            if let Err(e) = db.execute(
                                                "INSERT INTO batch_events
                                                 (timestamp, vat_id, batch_id, event_type,
                                                  fill_time, cook_time, cook_temp, pump_over_time)
                                                 VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8)",
                                                rusqlite::params![
                                                    Utc::now().to_rfc3339(),
                                                    &vat_id,
                                                    "",
                                                    "batch_complete",
                                                    vat_values.get("FillTime").unwrap_or(&0.0),
                                                    vat_values.get("CookTime").unwrap_or(&0.0),
                                                    vat_values.get("CookTemp").unwrap_or(&0.0),
                                                    vat_values.get("PumpOverTime").unwrap_or(&0.0),
                                                ],
                                            ) {
                                                error!("Failed to insert batch event: {}", e);
                                            }
                                        }
                                    }
                                    states.insert(vat_id.clone(), is_active);
                                } else {
                                    // Store the numeric reading
                                    let float_val = value.as_f64().unwrap_or(0.0);

                                    // Update latest values for batch event tracking
                                    let mut values = latest_values_clone.write();
                                    values
                                        .entry(vat_id.clone())
                                        .or_insert_with(std::collections::HashMap::new)
                                        .insert(tag_name.clone(), float_val);

                                    // Write to the time-series table
                                    let db = db_clone.read();
                                    if let Err(e) = db.execute(
                                        "INSERT INTO vat_readings (timestamp, vat_id, tag_name, value)
                                         VALUES (?1, ?2, ?3, ?4)",
                                        rusqlite::params![
                                            Utc::now().to_rfc3339(),
                                            &vat_id,
                                            &tag_name,
                                            float_val,
                                        ],
                                    ) {
                                        error!("Failed to insert reading: {}", e);
                                    }
                                }
                            }
                        }
                    }
                }
            }),
        )
        .await?;

    // Create monitored items for all vat tags
    let items_to_create: Vec<MonitoredItemCreateRequest> = nodes
        .iter()
        .map(|(node_id, _, _)| NodeId::new(2, node_id.as_str()).into())
        .collect();

    let results = session
        .create_monitored_items(subscription_id, TimestampsToReturn::Both, items_to_create)
        .await?;

    let success_count = results.iter().filter(|r| r.status_code.is_good()).count();
    let fail_count = results.len() - success_count;
    info!(
        "Subscribed to {} tags ({} successful, {} failed)",
        results.len(),
        success_count,
        fail_count
    );

    if fail_count > 0 {
        warn!(
            "Some tags failed to subscribe. Check that Kepware tags match the expected structure."
        );
    }

    info!("Data collection running. Press Ctrl+C to stop.");

    // Keep running until the session ends or we get a signal
    let _ = handle.await;

    Ok(())
}
```

This is a fair amount of code, so let’s break down what’s happening:
Database schema. We create two tables: vat_readings stores every data point as a time-series record, and batch_events captures the summary of each completed batch (fill time, cook time, cook temp, pump-over time). The batch events table is what ZeroClaw will primarily query for anomaly detection, since it gives us a clean row per vat per batch.
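As a concrete illustration, the per-vat comparison ZeroClaw is asked to make reduces to a simple aggregation over the most recent batch rows. The agent composes its own SQL at analysis time, so this is just a sketch of the kind of query it might run:

```sql
-- Per-vat cycle-time averages over the 20 most recent completed batches
SELECT vat_id,
       ROUND(AVG(fill_time), 1)      AS avg_fill_min,
       ROUND(AVG(cook_time), 1)      AS avg_cook_min,
       ROUND(AVG(pump_over_time), 1) AS avg_pump_over_min
FROM (SELECT * FROM batch_events
      WHERE event_type = 'batch_complete'
      ORDER BY timestamp DESC
      LIMIT 20)
GROUP BY vat_id
ORDER BY vat_id;
```

Because each batch is one row, this stays cheap even as the table grows, which is exactly why we snapshot per-batch summaries instead of making the agent chew through raw time-series data.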
OPC-UA subscription. Instead of polling tags on a timer, we set up an OPC-UA subscription with a data change callback. Kepware will push updates to us whenever a tag value changes. This is more efficient and gives us better time resolution than polling.
Batch state tracking. We watch the BatchActive boolean for each vat. When it transitions from false to true, a batch has started. When it transitions from true to false, the batch is complete and we snapshot all the cycle times into the batch_events table. This gives ZeroClaw clean, per-batch records to analyze.
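The rising/falling-edge detection is the heart of that tracking, so it’s worth seeing in isolation. Here is a minimal, self-contained sketch of the same logic; the names `classify` and `BatchEvent` are illustrative helpers, not part of the collector above:

```rust
use std::collections::HashMap;

/// Classify a BatchActive sample as a rising edge (batch start),
/// a falling edge (batch complete), or no transition.
#[derive(Debug, PartialEq)]
enum BatchEvent {
    Started,
    Completed,
    NoChange,
}

fn classify(states: &mut HashMap<String, bool>, vat_id: &str, is_active: bool) -> BatchEvent {
    // insert() returns the previous value; an unseen vat counts as inactive.
    let prev = states.insert(vat_id.to_string(), is_active).unwrap_or(false);
    match (prev, is_active) {
        (false, true) => BatchEvent::Started,
        (true, false) => BatchEvent::Completed,
        _ => BatchEvent::NoChange,
    }
}

fn main() {
    let mut states = HashMap::new();
    // idle -> active -> active -> idle yields exactly one Started
    // and one Completed, no matter how often the tag republishes.
    assert_eq!(classify(&mut states, "Vat01", true), BatchEvent::Started);
    assert_eq!(classify(&mut states, "Vat01", true), BatchEvent::NoChange);
    assert_eq!(classify(&mut states, "Vat01", false), BatchEvent::Completed);
    println!("edge detection ok");
}
```

The important property is idempotence: repeated `true` samples from the subscription don’t create duplicate batch records, because only the transition matters.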
Build and Deploy
```bash
cargo build --release --target aarch64-unknown-linux-musl
```

Verify and copy to the FR201:
```bash
file target/aarch64-unknown-linux-musl/release/zeroclaw-opcua-collector
# Should output: ELF 64-bit LSB executable, ARM aarch64

rsync -avz target/aarch64-unknown-linux-musl/release/zeroclaw-opcua-collector \
    claw@<IP_ADDRESS>:/home/claw/
```

Install on the FR201
SSH into the FR201 and set it up following the same pattern from Part 1:
```bash
sudo mkdir -p /opt/zeroclaw
sudo cp zeroclaw-opcua-collector /opt/zeroclaw/
sudo chown -R zeroclaw:zeroclaw /opt/zeroclaw
sudo chmod 755 /opt/zeroclaw/zeroclaw-opcua-collector
sudo ln -sf /opt/zeroclaw/zeroclaw-opcua-collector /usr/local/bin/zeroclaw-opcua-collector
```

Create the Systemd Service
```bash
sudo nano /etc/systemd/system/zeroclaw-opcua-collector.service
```

```ini
[Unit]
Description=ZeroClaw OPC-UA Data Collector
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=zeroclaw
Group=zeroclaw

ExecStart=/usr/local/bin/zeroclaw-opcua-collector \
    --endpoint opc.tcp://192.168.1.100:49320 \
    --database /var/lib/zeroclaw/vatdata.db \
    --vat-count 8 \
    --prefix CheeseVats.MakeRoom1 \
    --poll-interval 1000

WorkingDirectory=/var/lib/zeroclaw

Restart=on-failure
RestartSec=5

StandardOutput=journal
StandardError=journal

# Hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/zeroclaw
RestrictNamespaces=true
RestrictRealtime=true
LockPersonality=true

[Install]
WantedBy=multi-user.target
```

Update the `--endpoint` URL to match your Kepware server’s IP address and port. If you have more or fewer than 8 vats, adjust `--vat-count` accordingly.
Enable and start:
```bash
sudo systemctl daemon-reload
sudo systemctl enable zeroclaw-opcua-collector
sudo systemctl start zeroclaw-opcua-collector
```

Verify it’s running and connected:
```bash
journalctl -u zeroclaw-opcua-collector -n 50 --no-pager
```

You should see log messages indicating a successful connection to Kepware and the number of tags subscribed. If you see subscription failures, double-check that your Kepware tag names match the expected structure.
Wiring Up ZeroClaw for Anomaly Detection
Now we have process data flowing into a local SQLite database on the FR201. The next step is telling ZeroClaw to periodically analyze that data and alert when something looks off.
The ZeroClaw Agent Prompt
Create an agent prompt file that tells ZeroClaw what to do with the data. This is where the magic happens. Instead of writing complex statistical analysis code, we describe what we want in natural language and let the model figure out the analysis.
```bash
sudo -u zeroclaw nano /var/lib/zeroclaw/vat-monitor-prompt.md
```

```markdown
You are a cheese plant process analyst monitoring vat cycle times on a make line.
You have access to a SQLite database at /var/lib/zeroclaw/vatdata.db.

The database has two tables:

**vat_readings**: Time-series data with columns (timestamp, vat_id, tag_name, value).
Tag names are: CookTemp, CookTime, FillTime, PumpOverTime.

**batch_events**: One row per completed batch with columns
(timestamp, vat_id, batch_id, event_type, fill_time, cook_time, cook_temp, pump_over_time).

Your job is to analyze the most recent batch events and identify time loss
discrepancies. Specifically:

1. Query the last 20 completed batch events.
2. For each cycle time metric (fill_time, cook_time, pump_over_time), calculate
   the average across all vats and identify any vat that deviates by more than
   15% from the group average.
3. For cook_temp, flag any batch where the temperature deviated more than 2°F
   from the recipe target of 102°F.
4. If you detect anomalies, generate a concise alert that includes:
   - Which vat(s) are affected
   - Which metric(s) are off and by how much
   - The trend direction (getting worse, stable, improving) by comparing to the
     previous shift's data if available
   - A plain-language hypothesis about what might be causing it

If no anomalies are detected, respond with a brief "All vats operating within
normal parameters" message with the key averages.

Keep responses under 200 words. Plant operators will read these, so skip jargon
and be direct.
```

Setting Up the Monitoring Schedule
ZeroClaw can run the analysis on a schedule using a simple cron-style approach. Create a timer that triggers the analysis every 15 minutes:
```bash
sudo nano /etc/systemd/system/zeroclaw-vat-monitor.service
```

```ini
[Unit]
Description=ZeroClaw Vat Monitor Analysis
After=zeroclaw.service zeroclaw-opcua-collector.service

[Service]
Type=oneshot
User=zeroclaw
Group=zeroclaw

ExecStart=/usr/local/bin/zeroclaw agent \
    --prompt-file /var/lib/zeroclaw/vat-monitor-prompt.md \
    --output /var/lib/zeroclaw/alerts/latest-alert.json

WorkingDirectory=/var/lib/zeroclaw

# Hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/zeroclaw
RestrictNamespaces=true
RestrictRealtime=true
LockPersonality=true
```

Now create the timer:
```bash
sudo nano /etc/systemd/system/zeroclaw-vat-monitor.timer
```

```ini
[Unit]
Description=Run ZeroClaw vat analysis every 15 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=15min
Persistent=true

[Install]
WantedBy=timers.target
```

Create the alerts directory and enable the timer:

```bash
sudo -u zeroclaw mkdir -p /var/lib/zeroclaw/alerts
sudo systemctl daemon-reload
sudo systemctl enable zeroclaw-vat-monitor.timer
sudo systemctl start zeroclaw-vat-monitor.timer
```

Verify the timer is scheduled:

```bash
systemctl list-timers | grep zeroclaw
```

You can also trigger an analysis manually at any time:

```bash
sudo systemctl start zeroclaw-vat-monitor.service
journalctl -u zeroclaw-vat-monitor -n 100 --no-pager
```

What the Output Looks Like
When ZeroClaw detects an anomaly, the alert might look something like this:
Vat 04 fill time is running 22% above the line average (14.3 min vs. 11.7 min avg). This has been trending upward over the last 3 batches. Vat 04’s pump-over time is also elevated at 18% above average. The combination of slow fill and slow pump-over suggests a flow restriction on Vat 04’s supply line. Check the inlet butterfly valve and CIP spray ball for obstruction.
All other vats are within normal parameters. Cook temps across the line are averaging 101.8°F, within the 102°F ± 2°F window.
Compare that to what a traditional SCADA alarm would give you: “VAT04 FILL TIME HIGH.” No context, no comparison, no suggested cause.
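The percentages in that alert are nothing exotic; they are just the relative deviation from the line average that the prompt asks for. As a sanity check, here is the arithmetic in plain Rust, using the numbers from the sample alert:

```rust
/// Percent deviation of one vat's metric from the line average,
/// mirroring the 15% rule in the agent prompt.
fn percent_deviation(value: f64, line_avg: f64) -> f64 {
    (value - line_avg) / line_avg * 100.0
}

fn main() {
    // From the sample alert: Vat 04 fills in 14.3 min against
    // an 11.7 min line average.
    let dev = percent_deviation(14.3, 11.7);
    assert!((dev - 22.2).abs() < 0.1); // ~22% above average
    assert!(dev > 15.0); // exceeds the 15% alert threshold
    println!("Vat 04 fill time deviation: {:.1}%", dev);
}
```

The model does this comparison across every vat and metric, which is the part a threshold-based alarm never does.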
Extending to Shift Summaries
The anomaly detection runs every 15 minutes, but you might also want a comprehensive shift summary. This is easy to add as a second agent prompt and timer.
Create the shift summary prompt:
```bash
sudo -u zeroclaw nano /var/lib/zeroclaw/shift-summary-prompt.md
```

```markdown
You are a cheese plant process analyst. Generate an end-of-shift summary
for the make line.

Query the batch_events table for all batches completed in the last 8 hours.

Your summary should include:

1. **Total batches completed** across all vats.
2. **Average cycle times** (fill, cook, pump-over) for the shift, compared to
   the previous shift if data is available.
3. **Best and worst performing vats** by total cycle time (fill + cook + pump-over).
4. **Any vats that had repeated anomalies** during the shift (multiple batches
   outside the 15% deviation threshold).
5. **Cook temperature consistency** — average, min, max across all batches.
6. **Estimated time loss** — sum up the excess time for any vat/metric
   combinations that were above the line average, and express it in total
   minutes lost.

Format the summary as a brief report that a shift supervisor could read in
under 2 minutes. Use plain language, not technical jargon. Include specific
numbers.
```

Create the shift summary service and timer:
```bash
sudo nano /etc/systemd/system/zeroclaw-shift-summary.service
```

```ini
[Unit]
Description=ZeroClaw Shift Summary Report
After=zeroclaw.service zeroclaw-opcua-collector.service

[Service]
Type=oneshot
User=zeroclaw
Group=zeroclaw

# %-specifiers like %i only expand in template units (name@instance.service),
# so use the shell to timestamp each report file instead.
ExecStart=/bin/sh -c '/usr/local/bin/zeroclaw agent --prompt-file /var/lib/zeroclaw/shift-summary-prompt.md --output /var/lib/zeroclaw/reports/shift-summary-$(date +%%Y%%m%%d-%%H%%M).json'

WorkingDirectory=/var/lib/zeroclaw

# Hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/zeroclaw
RestrictNamespaces=true
RestrictRealtime=true
LockPersonality=true
```

```bash
sudo nano /etc/systemd/system/zeroclaw-shift-summary.timer
```

```ini
[Unit]
Description=Run ZeroClaw shift summary at shift changes

[Timer]
# Assuming 8-hour shifts at 6:00, 14:00, 22:00
OnCalendar=*-*-* 06:00:00
OnCalendar=*-*-* 14:00:00
OnCalendar=*-*-* 22:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

```bash
sudo -u zeroclaw mkdir -p /var/lib/zeroclaw/reports
sudo systemctl daemon-reload
sudo systemctl enable zeroclaw-shift-summary.timer
sudo systemctl start zeroclaw-shift-summary.timer
```

Monitoring the Full Stack
You now have three systemd services and two timers running on the FR201. Here’s how to check on everything at once:
```bash
# Service status
systemctl is-active zeroclaw zeroclaw-opcua-collector

# Timer status
systemctl list-timers | grep zeroclaw

# Recent logs from all services
journalctl -u zeroclaw -u zeroclaw-opcua-collector -u zeroclaw-vat-monitor \
    -u zeroclaw-shift-summary --since "1 hour ago" --no-pager

# Database size (keep an eye on this)
ls -lh /var/lib/zeroclaw/vatdata.db

# Row counts
sudo -u zeroclaw sqlite3 /var/lib/zeroclaw/vatdata.db \
    "SELECT 'readings: ' || COUNT(*) FROM vat_readings
     UNION ALL
     SELECT 'batch_events: ' || COUNT(*) FROM batch_events;"
```

Database Maintenance
The vat_readings table will grow continuously. For a production deployment, you’ll want to prune old data. A simple cron job or systemd timer can handle this:
```bash
# Keep 7 days of time-series data
sudo -u zeroclaw sqlite3 /var/lib/zeroclaw/vatdata.db \
    "DELETE FROM vat_readings WHERE timestamp < datetime('now', '-7 days');"

# Keep 30 days of batch events
sudo -u zeroclaw sqlite3 /var/lib/zeroclaw/vatdata.db \
    "DELETE FROM batch_events WHERE timestamp < datetime('now', '-30 days');"
```
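To run this pruning automatically, the same oneshot-service-plus-timer pattern used for the monitors works here too. A sketch only: the unit name `zeroclaw-db-prune` and the `/usr/bin/sqlite3` path are assumptions, so adjust both for your system:

```ini
# /etc/systemd/system/zeroclaw-db-prune.service  (hypothetical unit name)
[Unit]
Description=Prune old ZeroClaw vat data

[Service]
Type=oneshot
User=zeroclaw
ExecStart=/usr/bin/sqlite3 /var/lib/zeroclaw/vatdata.db "DELETE FROM vat_readings WHERE timestamp < datetime('now', '-7 days');"
ExecStart=/usr/bin/sqlite3 /var/lib/zeroclaw/vatdata.db "DELETE FROM batch_events WHERE timestamp < datetime('now', '-30 days');"
ExecStart=/usr/bin/sqlite3 /var/lib/zeroclaw/vatdata.db "VACUUM;"
```

```ini
# /etc/systemd/system/zeroclaw-db-prune.timer  (hypothetical unit name)
[Unit]
Description=Run ZeroClaw database pruning daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `sudo systemctl enable --now zeroclaw-db-prune.timer`. Multiple `ExecStart=` lines are allowed because the service is `Type=oneshot`, and they run in order, so the `VACUUM` happens after both deletes.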
```bash
# Reclaim disk space
sudo -u zeroclaw sqlite3 /var/lib/zeroclaw/vatdata.db "VACUUM;"
```

The Full Picture
Let’s step back and look at what we’ve built across all three parts of this series:
Part 1 gave us a hardened, production-ready edge device running ZeroClaw as a static binary with SSH key auth, systemd hardening, and no runtime dependencies.
Part 2 showed that the same device can run a local model for air-gapped environments, with the same security model and deployment approach.
Part 3 connected it to the real world. A Rust OPC-UA client pulls live process data from Kepware, stores it in a local SQLite database, and ZeroClaw analyzes it every 15 minutes to catch time loss discrepancies that traditional SCADA alarms would miss. At shift change, it generates a summary report a supervisor can read in under two minutes.
The entire stack on the FR201 is three static binaries, a SQLite database, and a couple of prompt files. No Docker, no Python, no Node, no npm, no pip. If the network goes down, the data collector keeps logging locally. If the device loses power, everything starts back up automatically because of systemd and the FR201’s auto power-on.
For an OT developer looking to bring AI into a manufacturing environment, this is about as lean and secure as it gets. And the best part is that the most powerful piece of the system, the analysis logic, is defined in plain English in a markdown file that anyone on the team can read and modify.