Compare commits


4 Commits

Author SHA1 Message Date
118f265862 docs: add HIPAA-ready positioning for architecture and pitch
- clarify BEDS is compliance-friendly architecture, not certification
- add healthcare/regulated deployment framing to visual brief
- document controls BEDS provides vs contract implementation responsibilities
- update wiki index description for regulated-deployment positioning
2026-04-12 08:24:12 -07:00
75163fb520 feat: add preflight environment checklist script
scripts/preflight.sh checks all required dependencies:
- Rust/Cargo build toolchain
- RabbitMQ (installed + reachable + management plugin)
- MongoDB (installed + reachable)
- MariaDB (installed + reachable)
- Apache + PHP + php-mongodb (observer tool requirements)

Modes:
  --check    report only, no installs
  (default)  check + apt install missing packages (Debian/Ubuntu)

Idempotent, no sudo prompts during check-only mode.
Detection uses dpkg/systemctl rather than PATH-dependent command -v.
2026-04-10 18:21:47 -07:00
a648e784ce docs: add prerequisites section, fix project structure and status table
- Add Prerequisites section: Rust, RabbitMQ, MongoDB, MariaDB, Apache+PHP
- Document Apache+PHP as required for observer tool with install commands
- Explain rationale: observer must be independent of BEDS binary
- Update project structure tree to reflect unified dispatcher + new modules
- Fix status table: reflect completed dispatcher, template registry, retry/DLQ,
  resident runtime, logger store, dispatch traits; mark PHP observer as Next
2026-04-10 18:11:44 -07:00
3c54635924 docs: comprehensive architecture delta record for hardening phase
Catalogs all architectural changes from resident runtime implementation:
- Runtime model: daemon-like process with coordinated shutdown
- Broker dispatch: shutdown operation integration
- Logger persistence: explicit IPL logging to MongoDB with root GUID lineage
- Developer diagnostics: chain tracing and web-based observability
- Config system: trace_on and logger_admin controls
- Observability utility: modern log_dumper web UI (replaces legacy PHP dumper)
- Operational safety: dev-only purge-on-IPL controls

Files modified: 13 (src/main.rs, brokers/*, config/*, bin/log_dumper.rs, Cargo.*, wiki/*)
Dependencies added: axum, chrono, uuid

See wiki/12-architecture-deltas.md for full details.
2026-04-10 17:12:01 -07:00
6 changed files with 578 additions and 17 deletions


@@ -18,6 +18,38 @@ This is not a greenfield project. The architecture is proven. The Rust rewrite e
---
## Prerequisites
These services must be installed and running before BEDS will start:
| Dependency | Purpose | Notes |
|---|---|---|
| [Rust (stable)](https://rustup.rs) | Build toolchain | `rustup update stable` |
| [RabbitMQ](https://www.rabbitmq.com/docs/install-debian) | AMQP broker | Management plugin recommended: `rabbitmq-plugins enable rabbitmq_management` |
| [MongoDB](https://www.mongodb.com/docs/manual/installation/) | Logger store (`msLogs`) | Tested against 6.x / 7.x |
| [MariaDB](https://mariadb.org/download/) | Primary relational store | Tested against 10.x / 11.x |
| [Apache + PHP](https://httpd.apache.org) | Observer tool (`log_dumper`) | See below |
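Before running the full preflight script, the reachability of the three backing services in the table above can be spot-checked with the same bash `/dev/tcp` probe the script uses. This is a minimal sketch assuming the default ports; adjust hosts/ports for your environment:

```shell
# Probe the default ports of the services listed above.
check_port() {
  local name="$1" host="$2" port="$3"
  if timeout 2 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${name}: reachable at ${host}:${port}"
  else
    echo "${name}: NOT reachable at ${host}:${port}"
  fi
}
check_port RabbitMQ localhost 5672
check_port MongoDB  localhost 27017
check_port MariaDB  localhost 3306
```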
### Observer Tool — Apache + PHP
The log observer (`utilities/log_dumper/`) is a standalone PHP/Apache application, intentionally **independent of the BEDS binary**. When BEDS is crashing, you still need a working observer. A Rust binary compiled from the same project is useless in that scenario.
```bash
# Debian/Ubuntu
apt install apache2 php libapache2-mod-php php-mongodb
systemctl enable apache2
```
Once installed, copy or symlink `utilities/log_dumper/` into your Apache docroot:
```bash
ln -s /path/to/rustybeds/utilities/log_dumper /var/www/html/beds-logs
```
The observer requires only a running MongoDB instance — no BEDS process needed.
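A broken or dangling symlink is an easy failure mode here. This sketch (using the example link path from the step above) confirms the link resolves to a real directory:

```shell
# Confirm the observer symlink resolves to a real directory.
check_observer_link() {
  local link="$1"
  if [[ -d "$link" ]]; then   # -d follows symlinks
    echo "observer linked: ${link} -> $(readlink -f "$link")"
  else
    echo "observer link missing or broken: ${link}"
  fi
}
check_observer_link /var/www/html/beds-logs
```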
---
## Architecture
BEDS is **AMQP-first**. No component in the application layer ever touches a database directly. Every data operation flows through a message broker. This is not a constraint — it is the product.
@@ -84,17 +116,20 @@ Every node runs the same binary. Configuration determines what it does.
```
rustybeds/
├── src/
│   ├── bin/
│   │   └── log_dumper.rs        # (deprecated — replaced by utilities/log_dumper/)
│   ├── brokers/
│   │   ├── mod.rs               # Pool manager — spawn_dispatcher_pool()
│   │   ├── dispatcher.rs        # Unified AMQP consumer task (replaces r/w_broker)
│   │   ├── logger_store.rs      # MongoDB logger persistence + chain fetch
│   │   ├── payload.rs           # BrokerPayload — AMQP message envelope struct
│   │   └── error.rs             # BrokerError type
│   ├── config/
│   │   ├── mod.rs               # Loader — load() and load_from() for testability
│   │   └── structs.rs           # Typed config structs (serde Deserialize)
│   ├── core/
│   │   ├── mod.rs               # NamasteCore trait — unified CRUD interface
│   │   └── dispatch.rs          # Dispatch boundary traits: DomainClass, SchemaLayer, BaseIoAdapter
│   ├── services/
│   │   ├── mod.rs               # Groups external service transport modules
│   │   ├── amqp/
@@ -105,9 +140,11 @@ rustybeds/
│   │   │   └── mod.rs           # validate_all() — TCP reachability
│   │   └── mariadb/
│   │       └── mod.rs           # validate_all() — master/secondary pattern
│   ├── template_registry/
│   │   └── mod.rs               # REC template registry — load, validate, runtime snapshot
│   ├── lib.rs                   # Public API surface for integration test harness
│   ├── logging.rs               # tracing + journald + console mirror init
│   └── main.rs                  # IPL sequence + resident runtime loop + coordinated shutdown
├── config/
│   ├── beds.toml                # Base config — checked in, no credentials
│   ├── env_dev.toml             # Dev overrides — gitignored
@@ -116,8 +153,10 @@ rustybeds/
├── templates/
│   ├── example_rec.toml         # Canonical self-documenting REC template
│   └── mst_logger_rec.toml      # Logger collection template (msLogs)
├── utilities/
│   └── log_dumper/              # Standalone PHP/Apache observer (independent of BEDS)
├── tests/
│   ├── broker_pool_test.rs      # Dispatcher pool integration tests
│   ├── common/mod.rs            # Shared test helpers — load_test_config()
│   └── fixtures/
│       └── beds_test.toml       # Canonical test config fixture
@@ -158,13 +197,18 @@ The `config` crate deep-merges these at startup. Only keys present in the env fi
| Unit test scaffolding + config fixture pattern | Done |
| MongoDB reachability validation | Done |
| MariaDB reachability validation | Done |
| Unified dispatcher pool (replaces r/w broker split) | Done |
| BrokerPayload — AMQP message envelope struct | Done |
| REC template registry (load, validate, runtime snapshot) | Done |
| Retry / DLQ queue topology | Done |
| Resident runtime loop + coordinated shutdown command | Done |
| MongoDB logger store — IPL persistence + chain fetch | Done |
| Dispatch boundary traits (DomainClass, SchemaLayer, BaseIoAdapter) | Done |
| Observer tool (PHP/Apache log_dumper) | Next |
| RegistryDispatchResolver — class instantiation | Next |
| Database adapters (MariaDB, MongoDB) | Planned |
| AMQP publish / consume (full round-trip) | Planned |
| Broker task supervision / respawn | Planned |
| Config schema validation at startup | Planned |
| AI database object generation | Phase 2 |
---

scripts/preflight.sh — new executable file (296 lines)

@@ -0,0 +1,296 @@
#!/usr/bin/env bash
# =============================================================================
# preflight.sh — BEDS environment preflight checklist
#
# Checks that all required services and tools are present and reachable.
# Installs anything missing (Debian/Ubuntu only).
# Idempotent — safe to run multiple times.
#
# Usage:
# ./scripts/preflight.sh # check + install missing
# ./scripts/preflight.sh --check # check only, no installs
#
# Requirements: bash 5+, sudo access (for installs)
# Supported: Debian 11+ / Ubuntu 22.04+
# =============================================================================
set -euo pipefail
# --- config ------------------------------------------------------------------
RABBITMQ_HOST="${RABBITMQ_HOST:-localhost}"
RABBITMQ_PORT="${RABBITMQ_PORT:-5672}"
MONGO_HOST="${MONGO_HOST:-localhost}"
MONGO_PORT="${MONGO_PORT:-27017}"
MARIADB_HOST="${MARIADB_HOST:-localhost}"
MARIADB_PORT="${MARIADB_PORT:-3306}"
CHECK_ONLY=false
if [[ "${1:-}" == "--check" ]]; then
  CHECK_ONLY=true
fi
# --- output helpers ----------------------------------------------------------
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
BOLD='\033[1m'
RESET='\033[0m'
PASS=0
FAIL=0
WARN=0
pass() { echo -e " ${GREEN}[PASS]${RESET} $1"; PASS=$(( PASS + 1 )); }
fail() { echo -e " ${RED}[FAIL]${RESET} $1"; FAIL=$(( FAIL + 1 )); }
warn() { echo -e " ${YELLOW}[WARN]${RESET} $1"; WARN=$(( WARN + 1 )); }
info() { echo -e " ${CYAN} ${RESET} $1"; }
section() { echo; echo -e "${BOLD}$1${RESET}"; echo "$(printf '%.0s-' {1..60})"; }
# --- distro check ------------------------------------------------------------
detect_distro() {
  if [[ -f /etc/os-release ]]; then
    source /etc/os-release
    DISTRO_ID="${ID:-unknown}"
    DISTRO_LIKE="${ID_LIKE:-}"
  else
    DISTRO_ID="unknown"
    DISTRO_LIKE=""
  fi
}
is_debian_family() {
  [[ "$DISTRO_ID" == "debian" || "$DISTRO_ID" == "ubuntu" || "$DISTRO_LIKE" == *"debian"* ]]
}
apt_install() {
  if $CHECK_ONLY; then
    fail "$1 — not installed (run without --check to install)"
    return
  fi
  info "Installing $*..."
  sudo apt-get install -y "$@" 2>&1 | tail -3
}
# Check via systemd (more reliable than searching sbin paths)
svc_installed() {
  systemctl cat "$1" &>/dev/null
}
svc_active() {
  systemctl is-active --quiet "$1" 2>/dev/null
}
# Check via dpkg (fast, no PATH issues)
pkg_installed() {
  dpkg -s "$1" &>/dev/null && dpkg -s "$1" | grep -q 'Status: install ok installed'
}
# --- tcp reachability --------------------------------------------------------
tcp_check() {
  local host="$1" port="$2"
  timeout 2 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null
}
# =============================================================================
section "BEDS Preflight Checklist"
echo " Mode: $( $CHECK_ONLY && echo 'check only' || echo 'check + install')"
detect_distro
echo " Distro: ${DISTRO_ID:-unknown} ${VERSION_ID:-}"
echo
# =============================================================================
section "1. Build Toolchain"
if command -v rustc &>/dev/null; then
  RUST_VER=$(rustc --version)
  pass "Rust — $RUST_VER"
else
  fail "Rust — not found"
  info "Install: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh"
  info "(rustup is not managed by apt — install manually)"
fi
if command -v cargo &>/dev/null; then
  CARGO_VER=$(cargo --version)
  pass "Cargo — $CARGO_VER"
else
  fail "Cargo — not found (install Rust via rustup)"
fi
# =============================================================================
section "2. RabbitMQ"
if pkg_installed rabbitmq-server; then
  pass "RabbitMQ — installed"
else
  if $CHECK_ONLY; then
    fail "RabbitMQ — not installed"
  else
    if is_debian_family; then
      info "Adding RabbitMQ apt repository..."
      curl -fsSL https://packagecloud.io/rabbitmq/rabbitmq-server/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/rabbitmq-archive-keyring.gpg 2>/dev/null
      echo "deb [signed-by=/usr/share/keyrings/rabbitmq-archive-keyring.gpg] https://packagecloud.io/rabbitmq/rabbitmq-server/debian/ $(source /etc/os-release && echo $VERSION_CODENAME) main" \
        | sudo tee /etc/apt/sources.list.d/rabbitmq.list > /dev/null
      sudo apt-get update -qq
      apt_install rabbitmq-server
    else
      fail "RabbitMQ — not installed; automatic install only supported on Debian/Ubuntu"
      info "See: https://www.rabbitmq.com/docs/install-debian"
    fi
  fi
fi
if tcp_check "$RABBITMQ_HOST" "$RABBITMQ_PORT"; then
  pass "RabbitMQ reachable at ${RABBITMQ_HOST}:${RABBITMQ_PORT}"
else
  fail "RabbitMQ not reachable at ${RABBITMQ_HOST}:${RABBITMQ_PORT}"
  info "Start: sudo systemctl start rabbitmq-server"
fi
if svc_active rabbitmq-server; then
  # Check management plugin via the HTTP API (no sudo needed)
  if tcp_check localhost 15672; then
    pass "RabbitMQ management plugin enabled (port 15672 open)"
  else
    warn "RabbitMQ management plugin not enabled"
    info "Enable: sudo rabbitmq-plugins enable rabbitmq_management"
    info " (provides the admin UI at http://localhost:15672)"
  fi
fi
# =============================================================================
section "3. MongoDB"
if svc_installed mongod; then
  MONGO_VER=$(mongod --version 2>/dev/null | head -1 || echo 'installed')
  pass "MongoDB — $MONGO_VER"
else
  if $CHECK_ONLY; then
    fail "MongoDB — not installed"
  else
    if is_debian_family; then
      info "Adding MongoDB apt repository..."
      curl -fsSL https://www.mongodb.org/static/pgp/server-8.0.asc | sudo gpg --dearmor -o /usr/share/keyrings/mongodb-server-8.0.gpg 2>/dev/null
      echo "deb [arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-8.0.gpg] https://repo.mongodb.org/apt/debian bookworm/mongodb-org/8.0 main" \
        | sudo tee /etc/apt/sources.list.d/mongodb-org-8.0.list > /dev/null
      sudo apt-get update -qq
      apt_install mongodb-org
    else
      fail "MongoDB — not installed; automatic install only supported on Debian/Ubuntu"
      info "See: https://www.mongodb.org/docs/manual/installation/"
    fi
  fi
fi
if tcp_check "$MONGO_HOST" "$MONGO_PORT"; then
  pass "MongoDB reachable at ${MONGO_HOST}:${MONGO_PORT}"
else
  fail "MongoDB not reachable at ${MONGO_HOST}:${MONGO_PORT}"
  info "Start: sudo systemctl start mongod"
fi
# =============================================================================
section "4. MariaDB"
if svc_installed mariadb; then
  pass "MariaDB — installed"
else
  if $CHECK_ONLY; then
    fail "MariaDB — not installed"
  else
    if is_debian_family; then
      apt_install mariadb-server
    else
      fail "MariaDB — not installed; automatic install only supported on Debian/Ubuntu"
      info "See: https://mariadb.org/download/"
    fi
  fi
fi
if tcp_check "$MARIADB_HOST" "$MARIADB_PORT"; then
  pass "MariaDB reachable at ${MARIADB_HOST}:${MARIADB_PORT}"
else
  fail "MariaDB not reachable at ${MARIADB_HOST}:${MARIADB_PORT}"
  info "Start: sudo systemctl start mariadb"
fi
# =============================================================================
section "5. Apache + PHP (Observer Tool)"
if svc_installed apache2; then
  APACHE_VER=$(apache2 -v 2>/dev/null | head -1 | awk '{print $3}' || echo 'installed')
  pass "Apache — $APACHE_VER"
else
  if $CHECK_ONLY; then
    fail "Apache — not installed"
  else
    if is_debian_family; then
      apt_install apache2
    else
      fail "Apache — not installed; automatic install only supported on Debian/Ubuntu"
    fi
  fi
fi
if tcp_check localhost 80; then
  pass "Apache reachable at localhost:80"
else
  fail "Apache not reachable at localhost:80"
  info "Start: sudo systemctl start apache2"
fi
if pkg_installed php; then
  PHP_VER=$(php --version 2>/dev/null | head -1 || echo 'installed')
  pass "PHP — $PHP_VER"
else
  if $CHECK_ONLY; then
    fail "PHP — not installed"
  else
    if is_debian_family; then
      apt_install php libapache2-mod-php php-mongodb php-cli
      sudo systemctl restart apache2
    else
      fail "PHP — not installed; automatic install only supported on Debian/Ubuntu"
    fi
  fi
fi
# Check php-mongodb extension specifically
if pkg_installed php-mongodb; then
  pass "PHP mongodb extension — installed"
else
  warn "PHP mongodb extension — not installed"
  info "Install: sudo apt install php-mongodb && sudo systemctl restart apache2"
fi
# =============================================================================
section "Summary"
TOTAL=$(( PASS + FAIL + WARN ))
echo
echo -e " Checked : $TOTAL"
echo -e " ${GREEN}Passed : $PASS${RESET}"
if [[ $WARN -gt 0 ]]; then
  echo -e " ${YELLOW}Warnings: $WARN${RESET}"
fi
if [[ $FAIL -gt 0 ]]; then
  echo -e " ${RED}Failed : $FAIL${RESET}"
  echo
  if $CHECK_ONLY; then
    echo -e " Run ${BOLD}./scripts/preflight.sh${RESET} (without --check) to install missing dependencies."
  else
    echo -e " Some checks failed. Review output above and resolve before running BEDS."
  fi
  exit 1
else
  echo
  echo -e " ${GREEN}${BOLD}All checks passed. Environment is ready.${RESET}"
  if ! $CHECK_ONLY; then
    echo
    echo -e " Next steps:"
    echo -e " 1. Copy config template: cp config/beds.toml.example config/beds.toml"
    echo -e " 2. Edit credentials: \$EDITOR config/env_dev.toml"
    echo -e " 3. Build and run: cargo run"
  fi
fi


@@ -158,3 +158,27 @@ BEDS has no node types in code. All nodes run the same binary. The configuration
- Whether this node is in production mode (fatal IPL failures) or development mode (non-fatal)
Changing a node's role means changing its config file and restarting. No code changes. No redeployment.
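One plausible shape for such a role change follows. The key names here are hypothetical illustrations of the behaviors described above (node identity, production vs development IPL failure handling); consult the typed config structs for the real schema:

```toml
# env_dev.toml — hypothetical sketch; key names are NOT confirmed by the source
[node]
node_id = "writer-01"   # configured node name/role
production = false      # dev mode: IPL failures are non-fatal
```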
## Compliance-Oriented Deployment Pattern
BEDS can be used as the architecture baseline for regulated workloads (including healthcare contracts) because it enforces transport and execution boundaries by design.
Important framing:
- BEDS is not a compliance certification
- BEDS is a control-friendly runtime architecture
- Final compliance posture depends on infrastructure, policy, and operational practice
Why this architecture helps:
- AMQP-first messaging centralizes request flow and transport governance
- Template-driven dispatch limits ad hoc query behavior in application code
- Class -> Schema -> Base I/O layering keeps business logic separate from storage concerns
- Event lineage enables request-chain reconstruction for audits and incident review
What must be added in deployment for HIPAA-class programs:
- At-rest encryption and key lifecycle controls
- Strong service identity and TLS policy in production
- Access governance, least privilege, and operator accountability
- Log retention, backup/restore validation, and incident response processes


@@ -111,6 +111,42 @@ This pattern supports both:
- Better compliance posture from centralized message handling
- Strong foundation for future AI-assisted data object generation
## HIPAA Contract Positioning
Use this framing in proposals and executive briefings:
- BEDS is not marketed as "HIPAA certified software"
- BEDS is an architecture baseline that makes HIPAA-ready implementations practical
- Compliance outcomes come from deployment controls, policy, and operations on top of BEDS
### Why BEDS Helps HIPAA Programs
- AMQP-first transport centralizes ingress, routing, and logging controls
- Template/class boundaries isolate domain data paths and reduce ad hoc access patterns
- Event lineage supports investigation and audit workflows with traceable parent/child chains
- Config-driven node roles support separation of duties and segmented runtime deployment
### What BEDS Provides vs What the Contract Team Must Provide
BEDS provides:
- Controlled transport path for data in transit
- Deterministic routing and broker-level operational guardrails
- Structured telemetry and lineage-ready diagnostics
- Layer boundaries that reduce accidental direct data access
Contract implementation must provide:
- Encryption at rest and key management practices
- TLS and identity policy enforcement in production
- Least-privilege access model and workforce controls
- Retention, backup, incident response, and evidence collection procedures
- BAA/legal governance and organizational compliance program artifacts
### One-Line Sales Statement
"BEDS gives healthcare and regulated teams a compliance-friendly architecture spine; HIPAA compliance is achieved by deploying that spine with required security and operational controls."
## Visual Blueprint (for Diagram or Image Generation)
Use this structure when creating architecture visuals:


@@ -0,0 +1,161 @@
# Architecture Deltas — Recent Hardening Phase
This document catalogs all architectural and design changes made in the recent hardening phase.
## Runtime Model: Daemon-Like Resident Process
**Status**: Completed in `src/main.rs`
**Change**: Converted from startup-IPL-then-exit to a resident, coordinated-shutdown runtime.
**Details**:
- IPL loads config, validates services, initializes broker pools, then enters an event loop.
- Loop waits for either:
  - Global shutdown signal (broadcast from dispatcher when AMQP `shutdown` command received).
  - User interrupt (Ctrl+C).
- On signal, loop cleanly shuts down Tokio tasks and exits with status code 0.
**Why**: Aligns with operational daemon expectations (systemd, orchestrators). Ensures graceful lifecycle rather than abrupt termination. Supports hot-reload/redeployment workflows.
---
## Broker Dispatch: Unified Consumer with Shutdown Semantics
**Status**: Completed in `src/brokers/dispatcher.rs` and `src/brokers/mod.rs`
**Change**: Integrated shutdown command handling into the unified dispatcher consumer.
**Details**:
- Dispatcher pool now receives a global `shutdown_tx` channel at spawn time.
- Each dispatcher consumer listens for AMQP `shutdown` operation.
- On `shutdown`: acknowledge the message, broadcast shutdown signal to all peers, and exit cleanly.
- All dispatchers also listen on the global shutdown channel and exit if signaled externally.
**Why**: Enables coordinated, multi-node shutdown without forceful process kill. Aligns with AMQP message semantics (shutdown is a standard operation, not a runtime hack).
---
## Logger: Explicit IPL Persistence to MongoDB
**Status**: Completed in `src/main.rs` and `src/brokers/logger_store.rs`
**Change**: IPL startup/failure events now explicitly persisted to `msLogs` collection with structured context.
**Details**:
- Root GUID generated at IPL start; all startup events tagged with this root ID.
- Structured log entries include:
  - `root_event_id`: chains all startup events to a single root.
  - `timestamp`: human-readable ISO 8601 format.
  - `node_id`: configured node name/role.
  - `event_type`: IPL phase (e.g., "ipl_start", "service_validated", "broker_pool_spawned", "ipl_complete").
  - `message`: human-readable summary.
  - `metadata`: optional structured context (validation results, latency, etc.).
- If IPL fails, the failure event is logged to Mongo on a best-effort basis before the process exits.
- After a successful IPL, showcase log entries are emitted at each level (INFO, WARN, ERROR) for visibility.
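Put together, a persisted `msLogs` startup event might look like the following. Values are illustrative only, and the `metadata` contents are an assumption:

```json
{
  "root_event_id": "2b9c1a7e-0000-0000-0000-000000000000",
  "timestamp": "2026-04-10T17:12:01-07:00",
  "node_id": "dev-node-01",
  "event_type": "ipl_start",
  "message": "IPL started",
  "metadata": { "config_source": "env_dev.toml" }
}
```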
**Why**: Startup is traditionally hardest to debug (logs often lost). Persistent, queryable startup context enables post-mortem analysis of deployment/initialization issues. Root GUID enables chain-crawl diagnostics across distributed startup events.
---
## Developer Diagnostics: Root GUID Lineage and Chain Tracing
**Status**: Completed in `src/brokers/logger_store.rs` and `src/bin/log_dumper.rs`
**Change**: Added root GUID-based event chain tracing and query layer.
**Details**:
- `logger_store::fetch_chain(root_event_id, limit)`: retrieve all events tagged with a root ID, sorted by timestamp.
- `logger_store::fetch_root_record(root_event_id)`: retrieve the initiating root event.
- `log_dumper` web UI exposes:
  - Root GUID input field to query and visualize entire event chain.
  - Single-record view at `/record?root_event_id=...` to inspect individual startup context.
  - Arrow-trigger UX for expanding compact row summaries without constant page reload.
**Why**: Enables developers to rapidly correlate events across a single startup sequence or transaction. Reduces manual log sifting. Scales from single node to multi-node deployments.
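The single-record route above can also be scripted against. In this sketch, `LOG_DUMPER_URL` is a hypothetical environment variable and the default port is a guess (the listen address is not documented here); only the `/record` route and `root_event_id` parameter come from the text:

```shell
# Build the single-record URL for a given root GUID.
record_url() {
  local base="${LOG_DUMPER_URL:-http://localhost:3000}"  # port is an assumption
  printf '%s/record?root_event_id=%s\n' "$base" "$1"
}
record_url "2b9c1a7e-0000-0000-0000-000000000000"
```

Usage would then be something like `curl -fsS "$(record_url "$ROOT_GUID")"` to pull one startup record from a shell session.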
---
## Configuration: Trace-On and Logger Admin Controls
**Status**: Completed in `src/config/structs.rs` and `config/env_dev.toml`
**Change**: Added two new config namespaces for developer and administrative control.
**Details**:
### `[runtime.trace_on]`
- Boolean flag (default: false in production, true in `env_dev.toml`).
- When true, logs method entry/exit at TRACE level for all broker consumers and core trait implementations.
- Lets developers narrow down causality in complex message flows without hand-instrumenting code.
### `[logger_admin]`
- `purge_on_ipl` (boolean, default: false): on successful IPL, automatically purge named collections before startup logging begins.
- `purge_collections` (array of strings): list of collection names to purge (e.g., `["msLogs", "msErrors"]`).
- Enables clean dev iteration: each `cargo run` in dev automatically resets logger state.
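A sketch of how these two namespaces might appear in `env_dev.toml`. The key names come from the descriptions above, but the exact nesting is inferred; check `src/config/structs.rs` for the authoritative shape:

```toml
# env_dev.toml — dev overrides (inferred layout)
[runtime]
trace_on = true          # TRACE-level entry/exit logging for consumers and core traits

[logger_admin]
purge_on_ipl = true                        # honored only on non-production nodes
purge_collections = ["msLogs", "msErrors"] # collections reset before startup logging
```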
**Why**: Reduces friction in dev loops. Trace-on avoids printf debugging. Purge-on-IPL ensures each test iteration starts fresh without manual `mongo` CLI cleanup.
---
## Observability Utility: Modern Logger Reader (log_dumper)
**Status**: Completed in `src/bin/log_dumper.rs`
**Change**: Built a modern Rust equivalent to legacy PHP `utilities/dumper.php` for browsing `msLogs`.
**Details**:
- **Web UI** (Axum):
  - Dashboard route `/` with seed-write action, quick filter by level/node, root GUID chain input.
  - Compact row layout: timestamp | level | node | message snippet | arrow (expand).
  - Single-record view `/record?root_event_id=...` showing full event context.
  - Arrow-trigger expansion shows full message without full-page refresh.
- **Features**:
  - Human-readable timestamps (ISO 8601 formatted).
  - Seed-write to create test events and validate logger pipeline.
  - Root chain traversal via GUID input.
  - Dev-centric UX: minimal clicks, maximum information density.
**Why**: Centralizes all observability into a single web interface. Replaces CLI-based manual querying. Makes startup diagnostics visible to entire team without MongoDB knowledge.
---
## Operational Safety: Dev-Only Purge Controls
**Status**: Completed in `src/main.rs` and config system
**Change**: Added dev-only purge logic to reset logger collections on IPL in non-production environments.
**Details**:
- IPL checks `config.logger_admin.purge_on_ipl` flag.
- If true and node is not production, purges collections listed in `config.logger_admin.purge_collections` before logging startup events.
- Prevents accidental production data loss (flag only honored in non-prod node roles).
- `env_dev.toml` enables this by default for frictionless dev iteration.
**Why**: Closes dev/prod gap. Enables safe, repeatable testing without manual intervention. Prevents stale logger state from polluting diagnostics.
---
## Commit Summary
This hardening phase encompasses:
1. **Runtime lifecycle**: Daemon model, coordinated shutdown, graceful exit.
2. **Broker semantics**: Shutdown operation integration, channel-based signaling.
3. **Logging infrastructure**: Persistent IPL events, root GUID lineage, structured context.
4. **Developer experience**: Trace control, purge controls, web-based observability.
5. **Configuration**: New `trace_on` and `logger_admin` namespaces.
6. **Tooling**: Modern Rust observability utility replacing legacy PHP dumper.
**Files Changed**:
- `src/main.rs`: resident runtime loop, IPL logging, shutdown coordination, trace control.
- `src/brokers/dispatcher.rs`: shutdown operation handling, global shutdown listening.
- `src/brokers/mod.rs`: dispatcher pool accepts shutdown channels.
- `src/brokers/logger_store.rs`: root GUID chain fetch operations, structured logging helpers.
- `src/config/structs.rs`: `trace_on`, `logger_admin` config types.
- `src/bin/log_dumper.rs`: new modern observability utility (Axum web UI).
- `config/env_dev.toml`: dev overrides enabling trace/purge controls.
- `Cargo.toml` / `Cargo.lock`: added `axum`, `chrono`, `uuid` dependencies.
- Wiki updates: `Home.md`, `04-ipl.md`, `06-queue-topology.md`, `10-modernization-roadmap.md`, new `11-beds-architecture-visual-brief.md`.
**Next Phase**: Autoscaling heuristics, metric collection, and cross-node coordinator election (deferred).


@@ -18,7 +18,7 @@ If you are reading this as a new contributor, start here and read in order. The
- [IPL — Initial Program Load](04-ipl.md) — The bootstrap sequence, step by step, and why order matters
- [Configuration System](05-configuration.md) — Layered TOML, environment files, topology options
- [Modernization Roadmap](10-modernization-roadmap.md) — POC-first execution sequence and modernization requirements
- [Architecture Visual Brief](11-beds-architecture-visual-brief.md) — Leadership-facing architecture narrative, diagram prompts, and regulated-deployment positioning
### Messaging
- [Queue Topology](06-queue-topology.md) — AMQP exchanges, queues, routing keys, and the broker model