Service

The production pcs-service binary: TOML config, factory registry, HTTP control plane, and built-in standalone & Raft cluster runners.

Overview

The service feature ships a runnable binary along with a small framework for plugging custom systems, components, sources, and sinks into a TOML-driven service. Enabling it implies the io, distributed, distributed-raft, and tracing features.

Two top-level modes:

standalone: a single-node runner; the whole pipeline lives in one process.
cluster: a Raft-replicated multi-node runner (requires the service-cluster feature).
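The mode key in the service TOML selects between them; for a cluster deployment it looks like the line below. This is only a sketch: it assumes cluster membership itself is managed through the cluster CLI commands shown later rather than through additional keys in this file.

mode = "cluster"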

Install & build

cargo install --path crates/pcs-service --features service-cluster
# or, build in place without installing:
cargo build --release --bin pcs-service --features service-cluster

TOML configuration

A ServiceConfig describes the mode, registered components and resources, the pipeline DAG, and optional sources / sinks. The control plane ([http]) is always available.

mode = "standalone"

[http]
bind = "0.0.0.0:8080"

[[components]]
name = "Trade"

[[components.schema]]
name = "id"
type = "u64"

[[components.schema]]
name = "amount"
type = "f64"

[[components.schema]]
name = "currency"
type = "utf8"

[[components.schema]]
name = "usd_amount"
type = "f64"

[[pipelines]]
name = "enrich"

[[pipelines.systems]]
name = "ValidateTradeSystem"

[[pipelines.systems]]
name = "EnrichTradeSystem"

[pipelines.systems.config]
fx_provider = "ecb_daily"

[[pipelines.sources]]
component = "Trade"
type = "parquet"
path = "/data/in/trades.parquet"

[[pipelines.sinks]]
component = "Trade"
type = "parquet"
path = "/data/out/trades.parquet"

Plugging in your own types

The TOML refers to systems, components, sources, and sinks by name. To map names to your own implementations, register factories on the ServiceBuilder before calling build().

use pcs_service::service::{ServiceBuilder, ServiceConfig};

let config: ServiceConfig =
    toml::from_str(&std::fs::read_to_string("service.toml")?)?;

let service = ServiceBuilder::from_config(config)
    .register_component_factory("Trade", Box::new(TradeFactory))
    .register_system_factory("ValidateTradeSystem", Box::new(ValidateFactory))
    .register_system_factory("EnrichTradeSystem", Box::new(EnrichFactory))
    .register_source_factory("parquet", Box::new(ParquetSourceFactory))
    .register_sink_factory("parquet", Box::new(ParquetSinkFactory))
    .build()?;

service.run().await?;

Each factory takes a toml::Value — the config field from the TOML — and returns a boxed instance of the corresponding trait. Built-in factories for Parquet, CSV, and JSON Lines are registered automatically.
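As a rough sketch of what a custom system factory can look like: the SystemFactory and System trait names, their paths, and the create signature below are illustrative assumptions, not the crate's actual API; check the pcs_service docs for the real signatures.

use pcs_service::service::{System, SystemFactory}; // assumed paths

// Hypothetical system backing the "EnrichTradeSystem" pipeline step.
struct EnrichTradeSystem {
    fx_provider: String,
}
// (impl System for EnrichTradeSystem elided)

struct EnrichFactory;

impl SystemFactory for EnrichFactory {
    // Assumed signature: the service hands over the [pipelines.systems.config]
    // table as a toml::Value and expects a boxed system instance back.
    fn create(&self, config: toml::Value) -> Result<Box<dyn System>, Box<dyn std::error::Error>> {
        let fx_provider = config
            .get("fx_provider")
            .and_then(|v| v.as_str())
            .unwrap_or("ecb_daily")
            .to_owned();
        Ok(Box::new(EnrichTradeSystem { fx_provider }))
    }
}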

CLI

# Single-node:
pcs-service serve --config service.toml

# Validate config without starting:
pcs-service validate --config service.toml

# Inspect a running instance:
pcs-service status --server http://localhost:8080

# Cluster lifecycle (requires service-cluster feature):
pcs-service cluster init   --config cluster.toml --node-id 1
pcs-service cluster join   --server http://leader:8081 --node-id 2
pcs-service cluster leave  --server http://leader:8081 --node-id 2
pcs-service cluster status --server http://leader:8081

HTTP control plane

Endpoint        Returns
GET /health     200 if the process is up; cheap liveness probe.
GET /ready      200 once the service has loaded config and registered factories. Use as Kubernetes readiness probe.
GET /metrics    Prometheus exposition format. Pipeline durations, retry counts, claim lifecycles, Raft term/leader.
GET /status     JSON snapshot of pipeline state, last RunStats, claim ledger, current Raft membership.
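
For example, against the standalone config above (control plane bound to port 8080), the endpoints can be exercised with plain curl:

# Liveness and readiness probes
curl -fsS http://localhost:8080/health
curl -fsS http://localhost:8080/ready

# Metrics scrape and JSON status snapshot
curl -s http://localhost:8080/metrics
curl -s http://localhost:8080/status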

Deploying a cluster

A 3-node Raft cluster is the typical production target. Start the leader with cluster init, then have followers cluster join against the leader's control-plane URL. Each node needs a unique --node-id and a control-plane address the other nodes can reach; a minimal bring-up sequence is sketched below.
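
A three-node bring-up using the cluster subcommands from the CLI section; the hostnames, ports, and cluster.toml path are placeholders for your own environment:

# Node 1: bootstrap the cluster as the initial leader
pcs-service cluster init --config cluster.toml --node-id 1

# Nodes 2 and 3: join through the leader's control-plane URL
pcs-service cluster join --server http://leader:8081 --node-id 2
pcs-service cluster join --server http://leader:8081 --node-id 3

# Confirm membership and leadership
pcs-service cluster status --server http://leader:8081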

See the operations guide for full deployment steps, capacity planning, and runbooks for common failure scenarios.

Next steps

Distributed
Distributed Runner: the runtime sitting underneath the service; claims, checkpoints, and Raft replication.

Observability
Tracing: span-level instrumentation that lights up automatically when the service starts.