
BEDS — Back End Data System

A Rust rewrite of a battle-tested PHP microservice backend framework with 952 days of continuous production uptime, 40,000+ transactions per second on a single node, and ~200ms round-trip latency from Baja California to West Virginia across dozens of concurrent database fanout calls.

This is not a greenfield project. The architecture is proven. The Rust rewrite exists to go further.


Why Rust

| Problem in PHP | Solution in Rust |
| --- | --- |
| Memory leaks required planned broker death via SIGCHLD | Tokio tasks don't leak — broker pool is permanent |
| Runtime dependency on the PHP interpreter | Single compiled binary, zero runtime deps |
| Source code exposed on deployment | Compiled — IP protected |
| Throughput ceiling on a single node | 5–10× improvement expected |
| No AI layer | Phase 2: AI-driven database object generation |

Architecture

BEDS is AMQP-first. No component in the application layer ever touches a database directly. Every data operation flows through a message broker. This is not a constraint — it is the product.

Client Request
      │
      ▼
  RabbitMQ / AMQP
      │
      ▼
  Broker Pool  (Tokio async tasks — one per broker type)
      │
      ▼
  Factory  (template name → adapter dispatch)
      │
      ▼
  NamasteCore Trait  (unified CRUD interface)
      │
      ▼
  Database Adapter  (MySQL · MongoDB)
      │
      ▼
  DBA-owned Schema  (views · stored procedures · functions)

The application layer never writes SQL. Ever.
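The NamasteCore trait itself is still planned (see Status below). As a minimal sketch of the template-driven pattern, a data-domain struct might implement it roughly as follows; the method names, the `Record` alias, and the `MsLogs` example are assumptions for illustration, not the actual API:

```rust
use std::collections::HashMap;

// Illustrative stand-in for a typed row; the real adapter layer would
// marshal AMQP payloads instead of string maps.
type Record = HashMap<String, String>;

// Hypothetical shape of the unified CRUD interface.
trait NamasteCore {
    // Template name the factory dispatches on, e.g. "mst_logger_rec".
    fn template(&self) -> &'static str;

    fn create(&self, record: Record) -> Result<u64, String>;
    fn read(&self, id: u64) -> Result<Option<Record>, String>;
    fn update(&self, id: u64, record: Record) -> Result<(), String>;
    fn delete(&self, id: u64) -> Result<(), String>;
}

// Adding a data domain means adding one struct like this, in one file.
struct MsLogs;

impl NamasteCore for MsLogs {
    fn template(&self) -> &'static str { "mst_logger_rec" }
    fn create(&self, _record: Record) -> Result<u64, String> { Ok(1) }
    fn read(&self, _id: u64) -> Result<Option<Record>, String> { Ok(None) }
    fn update(&self, _id: u64, _record: Record) -> Result<(), String> { Ok(()) }
    fn delete(&self, _id: u64) -> Result<(), String> { Ok(()) }
}
```

The factory's job is then a lookup from template name to the matching trait object; no caller ever sees SQL or a driver handle.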


Core Principles

  1. AMQP-first — all data access flows through the broker layer, no exceptions
  2. Database agnostic — MySQL/MariaDB and MongoDB behind a unified trait; no DB-specific logic leaks upward
  3. DBA-owned schema — all data access goes through named database objects; the application calls template names, not queries
  4. Template-driven CRUD — each data domain is a struct implementing NamasteCore; adding a domain means adding one file
  5. Config-driven nodes — all nodes run the same binary; role is determined entirely by startup config

Service Nodes

| Node | Role | Brokers |
| --- | --- | --- |
| appServer | Primary application | rBroker, wBroker, mBroker |
| admin | Administration & observability | adminBrokerIn/Out, adminLogsBroker, adminSyslogBroker, adminGraphBroker |
| segundo | Warehouse / cool storage | whBroker, cBroker |
| tercero | User & session management | uBroker, sBroker |

Every node runs the same binary. Configuration determines what it does.
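The config-driven role pattern can be sketched like this; the function and the string matching are illustrative (real roles come from the typed config structs), while the broker names are those from the table above:

```rust
// Hypothetical sketch: one binary, and the set of brokers a node starts
// is decided purely by the role read from config at startup.
fn brokers_for(role: &str) -> Vec<&'static str> {
    match role {
        "appServer" => vec!["rBroker", "wBroker", "mBroker"],
        "admin" => vec![
            "adminBrokerIn", "adminBrokerOut",
            "adminLogsBroker", "adminSyslogBroker", "adminGraphBroker",
        ],
        "segundo" => vec!["whBroker", "cBroker"],
        "tercero" => vec!["uBroker", "sBroker"],
        other => panic!("unknown node role in config: {other}"),
    }
}
```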


Scaling Model

  • Horizontal — increase broker instance count in config when a node has headroom
  • Vertical — add nodes to the broker pool when a node saturates
  • Hot-swap — nodes can be replaced without touching application code; the broker pool absorbs the transition

Project Structure

rustybeds/
├── src/
│   ├── config/
│   │   ├── mod.rs          # Loader — load() and load_from() for testability
│   │   └── structs.rs      # Typed config structs (serde Deserialize)
│   ├── amqp.rs             # RabbitMQ transport — validate(), future channel/queue ops
│   ├── mariadb.rs          # MariaDB transport — validate_all(), future adapter ops
│   ├── mongo.rs            # MongoDB transport — validate_all(), future adapter ops
│   ├── lib.rs              # Public API surface for integration test harness
│   ├── logging.rs          # tracing + journald init
│   └── main.rs             # ipl() sequence + main()
├── config/
│   ├── beds.toml           # Base config — checked in, no credentials
│   ├── env_dev.toml        # Dev overrides — gitignored
│   ├── env_qa.toml         # QA overrides — gitignored
│   └── env_prod.toml       # Prod overrides — gitignored
├── templates/
│   ├── example_rec.toml    # Canonical self-documenting REC template
│   └── mst_logger_rec.toml # Logger collection template (msLogs)
├── tests/
│   ├── common/mod.rs       # Shared test helpers — load_test_config()
│   └── fixtures/
│       └── beds_test.toml  # Canonical test config fixture
└── Cargo.toml
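The ipl() sequence in main.rs validates each transport in order before the node comes up. The sketch below illustrates env-aware error handling; the step list, the Env enum, and the fatal/non-fatal rules shown (MariaDB master fatal, secondary a warning, everything fatal in prod) are assumptions for illustration, not the actual implementation:

```rust
#[derive(PartialEq)]
enum Env { Dev, Qa, Prod }

// Hypothetical IPL sketch; stubbed results stand in for the real
// validators in amqp.rs, mongo.rs, and mariadb.rs.
fn ipl(env: Env) -> Result<(), String> {
    // (step name, fatal even outside prod, validation result)
    let steps: Vec<(&str, bool, Result<(), String>)> = vec![
        ("amqp::validate", true, Ok(())),
        ("mongo::validate_all", true, Ok(())),
        ("mariadb master", true, Ok(())),
        ("mariadb secondary", false, Err("unreachable".into())),
    ];
    for (name, fatal, result) in steps {
        if let Err(e) = result {
            if fatal || env == Env::Prod {
                return Err(format!("{name} failed: {e}"));
            }
            eprintln!("warn: {name} failed (non-fatal): {e}");
        }
    }
    Ok(())
}
```

Under these assumed rules, a dead secondary logs a warning in dev but refuses to boot in prod.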

Configuration

Layered TOML system. beds.toml holds production-safe defaults and is checked into version control. An env override file (env_{BEDS_ENV}.toml) is layered on top and is never committed. Set BEDS_ENV to dev, qa, or prod — defaults to dev.

```toml
# beds.toml — base
[broker_services.app_server]
host = "prod-broker.internal"
port = 5672

# env_dev.toml — local override (gitignored)
[broker_services.app_server]
host = "localhost"
pass = "your-dev-password"
```

The config crate deep-merges these at startup. Only keys present in the env file are overridden — everything else inherits from the base.
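The merge rule can be sketched over a simplified value type; this hand-rolled enum is an illustrative stand-in for toml::Value, not the actual loader code:

```rust
use std::collections::BTreeMap;

// Simplified config value: scalars plus nested tables.
#[derive(Clone, Debug, PartialEq)]
enum Value {
    Str(String),
    Int(i64),
    Table(BTreeMap<String, Value>),
}

// Deep merge: tables merge recursively, key by key; any scalar present
// in the overlay replaces the base value; keys absent from the overlay
// are left untouched and so inherit from the base.
fn deep_merge(base: &mut Value, overlay: Value) {
    match (base, overlay) {
        (Value::Table(b), Value::Table(o)) => {
            for (k, v) in o {
                match b.get_mut(&k) {
                    Some(existing) => deep_merge(existing, v),
                    None => { b.insert(k, v); }
                }
            }
        }
        (slot, v) => *slot = v, // overlay scalar wins
    }
}
```

Applied to the example above, the dev override replaces host while port still comes from beds.toml.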


Status

| Component | Status |
| --- | --- |
| Config loading (layered TOML + env select) | Done |
| Structured logging (journald + console mirror) | Done |
| IPL sequence with env-aware error handling | Done |
| RabbitMQ reachability validation | Done |
| Unit test scaffolding + config fixture pattern | Done |
| MongoDB reachability validation | Done |
| MariaDB reachability validation | Done |
| Shared filesystem validation | Next |
| AMQP channel / queue declaration | Planned |
| Broker pool (Tokio tasks) | Planned |
| NamasteCore trait | Planned |
| Database adapters (MariaDB, MongoDB) | Planned |
| Factory dispatch | Planned |
| AI database object generation | Phase 2 |

Developer Wiki

Full framework documentation lives in wiki/:

  • Origin story — PHP Namaste history, production record, why Rust
  • Architecture overview — full system diagram, all layers explained
  • The four nodes — appServer, admin, segundo, tercero with real-world context
  • IPL sequence — every step documented with rationale for ordering
  • Configuration system — layering, env selection, adding new sections
  • Queue topology — exchanges, routing keys, broker bindings, vhost isolation
  • Template system — REC/REL, TLA convention, cache map, warehousing
  • Event lineage — compound event IDs, parent/child tracking, msLogs schema
  • Glossary


Performance Baseline

The PHP predecessor achieved:

  • 40,000+ tp/s on a single node
  • ~200ms round-trip Baja California → West Virginia with dozens of concurrent DB fanout calls per transaction
  • 952 days continuous production uptime without error

The Rust rewrite must meet or exceed all three numbers. Benchmarks run before any architectural change touching the broker pool or factory dispatch path.


Author

gramps@llamachile.shop