Systems Craft
Decision Lab
Small, production-minded demos for the choices that shape reliable data and machine learning systems: latency budgets, pipeline failures, and serving-architecture tradeoffs, plus statistical modeling workflows where syntax, diagrams, and diagnostics need to agree.
Allocate a request budget across feature fetch, scoring, policy checks, and response shaping to see where tail latency quietly steals reliability.
p95 request path
163ms total against a 180ms budget
Remote joins dominate p95.
Batching is underused.
Rules are serialized.
Payload shaping is stable.
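The budget arithmetic behind this card can be sketched as follows. The per-stage p95 figures are hypothetical stand-ins chosen to match the 163ms total, not measurements from a real system; note that summing stage p95s overstates the true end-to-end p95, since tails rarely align, so this is a conservative worst-case check.

```python
# Hypothetical p95 contribution of each request-path stage, in milliseconds.
# The stage names mirror the demo; the numbers are illustrative assumptions.
STAGE_P95_MS = {
    "feature_fetch": 92,     # remote joins dominate the tail
    "scoring": 34,
    "policy_checks": 27,     # rules evaluated serially, so they add up
    "response_shaping": 10,  # stable, well-bounded payload work
}

BUDGET_MS = 180

def budget_report(stages, budget):
    """Sum per-stage p95s against the budget and flag the dominant stage."""
    total = sum(stages.values())
    return {
        "total_ms": total,
        "headroom_ms": budget - total,
        "dominant_stage": max(stages, key=stages.get),
        "within_budget": total <= budget,
    }

report = budget_report(STAGE_P95_MS, BUDGET_MS)
print(report)
```

With these numbers, feature fetch alone consumes more than half the budget, which is why batching remote joins is the first lever to pull.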
Walk a production data symptom backward through freshness, schema, feature, and publishing checks until the operational root cause is visible.
Symptom: Claims score dropped 18% after a nightly release.
Root cause: A categorical field shipped with a renamed level before the feature contract was updated.
Repair: Block publish on schema-contract mismatch and replay the affected feature window.
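A minimal sketch of the publish-blocking check the repair step describes. The contract shape, the field name `claim_type`, and the renamed level (`auto_glass` in place of `glass`) are all hypothetical, chosen only to illustrate the mismatch.

```python
# Hypothetical feature contract: the categorical levels each field may ship.
CONTRACT = {
    "claim_type": {"collision", "glass", "theft", "weather"},
}

def contract_violations(contract, observed_levels):
    """Return, per field, the levels seen in the data but absent from the contract."""
    violations = {}
    for field, allowed in contract.items():
        unexpected = observed_levels.get(field, set()) - allowed
        if unexpected:
            violations[field] = sorted(unexpected)
    return violations

# The nightly release renamed a level before the contract was updated.
observed = {"claim_type": {"collision", "auto_glass", "theft", "weather"}}

violations = contract_violations(CONTRACT, observed)
if violations:
    # Block the publish; replay the affected feature window once the
    # contract and the producer agree again.
    print(f"publish blocked: {violations}")
```

The point of the check is ordering: the mismatch is caught before publish, not discovered downstream as a silent score drop.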
Compare batch, streaming, and low-latency serving modes by freshness, cost, auditability, and failure tolerance before picking an implementation.
Streaming: Best when decisions need fresh features without request-time joins.
Freshness: Seconds
Cost: Medium
Failure mode: Lag
Audit trail: Medium
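One way to make this comparison mechanical is a weighted scorecard. The modes, the 1-to-3 criterion scores, and the weights below are illustrative assumptions, not recommendations; cost and failure tolerance are scored as favorability, so higher is always better.

```python
# Illustrative 1-3 scores per serving mode on each criterion (higher is better).
MODES = {
    "batch":       {"freshness": 1, "cost": 3, "auditability": 3, "failure_tolerance": 3},
    "streaming":   {"freshness": 2, "cost": 2, "auditability": 2, "failure_tolerance": 2},
    "low_latency": {"freshness": 3, "cost": 1, "auditability": 1, "failure_tolerance": 1},
}

def pick_mode(modes, weights):
    """Return the mode with the highest weighted criterion score."""
    def score(criteria):
        return sum(weights[c] * v for c, v in criteria.items())
    return max(modes, key=lambda m: score(modes[m]))

# A workload that prizes freshness above all else.
fresh_first = {"freshness": 5, "cost": 1, "auditability": 1, "failure_tolerance": 1}
print(pick_mode(MODES, fresh_first))

# A regulated workload where the audit trail dominates.
audit_first = {"freshness": 1, "cost": 2, "auditability": 5, "failure_tolerance": 2}
print(pick_mode(MODES, audit_first))
```

The value of the exercise is less the winner than the weights: writing them down forces the team to state which failure it can actually tolerate.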
Translate lavaan syntax into a path diagram, then inspect how measurement loadings, structural paths, and fit diagnostics support an SEM workflow.
PoliticalDemocracy SEM
Build `ind60`, `dem60`, and `dem65` from indicators before reading the structural paths.
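The measurement and structural part of this model, in lavaan syntax, follows the well-known PoliticalDemocracy example from the lavaan tutorial (the residual covariances in the full example are omitted here for brevity). The Python below only parses that syntax to list the latents in definition order, as a stand-in for the first step of the diagram walkthrough.

```python
# Measurement model (`=~`) and regressions (`~`) for PoliticalDemocracy,
# in lavaan syntax; residual covariances are omitted for brevity.
MODEL = """
ind60 =~ x1 + x2 + x3
dem60 =~ y1 + y2 + y3 + y4
dem65 =~ y5 + y6 + y7 + y8
dem60 ~ ind60
dem65 ~ ind60 + dem60
"""

def latent_variables(model):
    """Return latents in definition order: the left-hand side of each `=~` line."""
    latents = []
    for line in model.splitlines():
        if "=~" in line:
            latents.append(line.split("=~")[0].strip())
    return latents

print(latent_variables(MODEL))
```

Reading the `=~` lines first mirrors the workflow above: the three latents must be built from their indicators before the two structural regressions mean anything.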