# BEDS — Back End Data System
A Rust rewrite of a battle-tested PHP microservice backend framework with 952 days of continuous production uptime, 40,000+ transactions per second on a single node, and ~200ms round-trip latency from Baja California to West Virginia across dozens of concurrent database fanout calls.
This is not a greenfield project. The architecture is proven. The Rust rewrite exists to go further.
## Why Rust
| Problem in PHP | Solution in Rust |
|---|---|
| Memory leaks required planned broker death via SIGCHLD | Tokio tasks don't leak — broker pool is permanent |
| Runtime dependency on PHP interpreter | Single compiled binary, zero runtime deps |
| Source code exposed on deployment | Compiled — IP protected |
| Throughput ceiling on single node | 5–10x improvement expected |
| No AI layer | Phase 2: AI-driven database object generation |
## Architecture
BEDS is AMQP-first. No component in the application layer ever touches a database directly. Every data operation flows through a message broker. This is not a constraint — it is the product.
```
Client Request
      │
      ▼
RabbitMQ / AMQP
      │
      ▼
Broker Pool (Tokio async tasks — one per broker type)
      │
      ▼
Factory (template name → adapter dispatch)
      │
      ▼
NamasteCore Trait (unified CRUD interface)
      │
      ▼
Database Adapter (MySQL · MongoDB)
      │
      ▼
DBA-owned Schema (views · stored procedures · functions)
```
The application layer never writes SQL. Ever.
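The factory step in the diagram above can be sketched as a plain lookup from template name to adapter. Everything below (the function names, the `Adapter` signature, the error type) is illustrative, not the actual BEDS API:

```rust
use std::collections::HashMap;

// Hypothetical adapter signature: a handler that performs one data
// operation for a given payload. The real adapters differ.
type Adapter = fn(&str) -> Result<String, String>;

fn mariadb_adapter(payload: &str) -> Result<String, String> {
    // In the real system this would invoke a named, DBA-owned database
    // object, never raw SQL composed in the application layer.
    Ok(format!("mariadb handled: {payload}"))
}

fn mongo_adapter(payload: &str) -> Result<String, String> {
    Ok(format!("mongo handled: {payload}"))
}

// Factory: template name → adapter. Unknown names are an error, so no
// request can fall through to an ad-hoc query path.
fn dispatch(
    templates: &HashMap<&str, Adapter>,
    name: &str,
    payload: &str,
) -> Result<String, String> {
    templates
        .get(name)
        .ok_or_else(|| format!("unknown template: {name}"))
        .and_then(|adapter| adapter(payload))
}

fn main() {
    let mut templates: HashMap<&str, Adapter> = HashMap::new();
    templates.insert("example_rec", mariadb_adapter);
    templates.insert("mst_logger_rec", mongo_adapter);

    println!("{}", dispatch(&templates, "example_rec", "{\"id\": 1}").unwrap());
    assert!(dispatch(&templates, "no_such_template", "{}").is_err());
}
```

The point of the lookup is that the set of reachable database operations is closed: it is exactly the set of registered template names.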
## Core Principles
- AMQP-first — all data access flows through the broker layer, no exceptions
- Database agnostic — MySQL/MariaDB and MongoDB behind a unified trait; no DB-specific logic leaks upward
- DBA-owned schema — all data access goes through named database objects; the application calls template names, not queries
- Template-driven CRUD — each data domain is a struct implementing NamasteCore; adding a domain means adding one file
- Config-driven nodes — all nodes run the same binary; role is determined entirely by startup config
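The trait itself is not shown in this README, so the sketch below is an assumption throughout: only the trait name, the template-name convention, and the one-file-per-domain rule come from the text; every method name and type is illustrative.

```rust
// Hypothetical sketch of a unified CRUD trait. The real NamasteCore
// method signatures and error types may differ.
trait NamasteCore {
    // The template name this domain maps to (a DBA-owned object).
    fn template(&self) -> &'static str;
    fn create(&self, record: &str) -> Result<String, String>;
    fn read(&self, id: u64) -> Result<String, String>;
    fn update(&self, id: u64, record: &str) -> Result<(), String>;
    fn delete(&self, id: u64) -> Result<(), String>;
}

// Adding a data domain means adding one file: one struct, one impl.
struct LoggerRec;

impl NamasteCore for LoggerRec {
    fn template(&self) -> &'static str { "mst_logger_rec" }
    fn create(&self, record: &str) -> Result<String, String> {
        Ok(format!("created via {}: {record}", self.template()))
    }
    fn read(&self, id: u64) -> Result<String, String> {
        Ok(format!("read {id} via {}", self.template()))
    }
    fn update(&self, _id: u64, _record: &str) -> Result<(), String> { Ok(()) }
    fn delete(&self, _id: u64) -> Result<(), String> { Ok(()) }
}

fn main() {
    let domain = LoggerRec;
    println!("{}", domain.create("{\"level\": \"info\"}").unwrap());
}
```

Because every domain exposes the same surface, the factory can dispatch to any of them without knowing which database sits underneath.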
## Service Nodes

| Node | Role | Brokers |
|---|---|---|
| appServer | Primary application | rBroker, wBroker, mBroker |
| admin | Administration & observability | adminBrokerIn/Out, adminLogsBroker, adminSyslogBroker, adminGraphBroker |
| segundo | Warehouse / cool storage | whBroker, cBroker |
| tercero | User & session management | uBroker, sBroker |
Every node runs the same binary. Configuration determines what it does.
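The same-binary, role-from-config rule can be sketched with a plain enum. The role names and broker lists come from the table above; the function names and the shape of the parsing are illustrative, not the actual config structs in src/config/structs.rs:

```rust
// Hypothetical sketch: one binary, role chosen entirely by config.
#[derive(Debug, PartialEq)]
enum NodeRole { AppServer, Admin, Segundo, Tercero }

fn parse_role(value: &str) -> Result<NodeRole, String> {
    match value {
        "appServer" => Ok(NodeRole::AppServer),
        "admin" => Ok(NodeRole::Admin),
        "segundo" => Ok(NodeRole::Segundo),
        "tercero" => Ok(NodeRole::Tercero),
        other => Err(format!("unknown node role: {other}")),
    }
}

// Which brokers a node starts follows from its role alone.
fn brokers_for(role: &NodeRole) -> Vec<&'static str> {
    match role {
        NodeRole::AppServer => vec!["rBroker", "wBroker", "mBroker"],
        NodeRole::Admin => vec![
            "adminBrokerIn/Out", "adminLogsBroker",
            "adminSyslogBroker", "adminGraphBroker",
        ],
        NodeRole::Segundo => vec!["whBroker", "cBroker"],
        NodeRole::Tercero => vec!["uBroker", "sBroker"],
    }
}

fn main() {
    let role = parse_role("segundo").unwrap();
    println!("{:?} starts {:?}", role, brokers_for(&role));
}
```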
## Scaling Model
- Horizontal — increase broker instance count in config when a node has headroom
- Vertical — add nodes to the broker pool when a node saturates
- Hot-swap — nodes can be replaced without touching application code; the broker pool absorbs the transition
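A self-contained sketch of the horizontal rule: the worker count comes from config, and raising it simply spawns more identical consumers. Plain std threads and an mpsc channel stand in here for Tokio tasks and AMQP so the example runs without dependencies; the real broker pool is async:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Spawn `broker_count` identical workers draining one shared job queue.
// Scaling horizontally is just a bigger `broker_count` in config.
fn run_pool(broker_count: usize, jobs: Vec<String>) -> Vec<String> {
    let (job_tx, job_rx) = mpsc::channel::<String>();
    let job_rx = Arc::new(Mutex::new(job_rx));
    let (done_tx, done_rx) = mpsc::channel::<String>();

    let total = jobs.len();
    for job in jobs {
        job_tx.send(job).unwrap();
    }
    drop(job_tx); // close the queue so workers can drain and exit

    let mut handles = Vec::new();
    for id in 0..broker_count {
        let rx = Arc::clone(&job_rx);
        let tx = done_tx.clone();
        handles.push(thread::spawn(move || loop {
            // Hold the lock only long enough to pull one job.
            let job = { rx.lock().unwrap().recv() };
            match job {
                Ok(j) => tx.send(format!("broker-{id} handled {j}")).unwrap(),
                Err(_) => break, // queue drained and closed
            }
        }));
    }
    drop(done_tx); // keep only the workers' clones alive

    let results: Vec<String> = done_rx.iter().collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(results.len(), total);
    results
}

fn main() {
    let jobs: Vec<String> = (0..8).map(|i| format!("msg-{i}")).collect();
    let results = run_pool(3, jobs);
    println!("{} jobs handled", results.len());
}
```

The hot-swap property falls out of the same shape: because workers only ever see the queue, a worker can be stopped and replaced without any producer noticing.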
## Project Structure
```
rustybeds/
├── src/
│   ├── config/
│   │   ├── mod.rs            # Loader — load() and load_from() for testability
│   │   └── structs.rs        # Typed config structs (serde Deserialize)
│   ├── services/
│   │   ├── mod.rs            # Groups external service transport modules
│   │   ├── amqp/
│   │   │   ├── mod.rs        # validate() — TCP reachability pre-flight
│   │   │   ├── connection.rs # AmqpConnection — auth + exchange declare
│   │   │   └── error.rs      # AmqpError type
│   │   ├── mongo/
│   │   │   └── mod.rs        # validate_all() — TCP reachability
│   │   └── mariadb/
│   │       └── mod.rs        # validate_all() — master/secondary pattern
│   ├── lib.rs                # Public API surface for integration test harness
│   ├── logging.rs            # tracing + journald init
│   └── main.rs               # async ipl() sequence + #[tokio::main] main()
├── config/
│   ├── beds.toml             # Base config — checked in, no credentials
│   ├── env_dev.toml          # Dev overrides — gitignored
│   ├── env_qa.toml           # QA overrides — gitignored
│   └── env_prod.toml         # Prod overrides — gitignored
├── templates/
│   ├── example_rec.toml      # Canonical self-documenting REC template
│   └── mst_logger_rec.toml   # Logger collection template (msLogs)
├── tests/
│   ├── common/mod.rs         # Shared test helpers — load_test_config()
│   └── fixtures/
│       └── beds_test.toml    # Canonical test config fixture
└── Cargo.toml
```
## Configuration
Layered TOML system. beds.toml holds production-safe defaults and is checked into version control. An env override file (env_{BEDS_ENV}.toml) is layered on top and is never committed. Set BEDS_ENV to dev, qa, or prod — defaults to dev.
```toml
# beds.toml — base
[broker_services.app_server]
host = "prod-broker.internal"
port = 5672
```

```toml
# env_dev.toml — local override (gitignored)
[broker_services.app_server]
host = "localhost"
pass = "your-dev-password"
```
The `config` crate deep-merges these at startup. Only keys present in the env file are overridden — everything else inherits from the base.
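The merge rule can be illustrated with a small std-only sketch; the real loader delegates to the `config` crate, so the `Value` type and `merge` function here are stand-ins, not BEDS code:

```rust
use std::collections::BTreeMap;

// Minimal stand-in for a TOML value tree, enough to show the merge rule.
#[derive(Clone, Debug, PartialEq)]
enum Value {
    Str(String),
    Int(i64),
    Table(BTreeMap<String, Value>),
}

// Deep merge: keys present in `overlay` win; tables merge recursively;
// every key absent from the overlay inherits from `base`.
fn merge(base: &Value, overlay: &Value) -> Value {
    match (base, overlay) {
        (Value::Table(b), Value::Table(o)) => {
            let mut out = b.clone();
            for (k, v) in o {
                let merged = match b.get(k) {
                    Some(existing) => merge(existing, v),
                    None => v.clone(),
                };
                out.insert(k.clone(), merged);
            }
            Value::Table(out)
        }
        // A non-table overlay value replaces the base value outright.
        (_, v) => v.clone(),
    }
}

fn main() {
    // beds.toml fragment: host + port for app_server.
    let base = Value::Table(BTreeMap::from([(
        "app_server".to_string(),
        Value::Table(BTreeMap::from([
            ("host".to_string(), Value::Str("prod-broker.internal".into())),
            ("port".to_string(), Value::Int(5672)),
        ])),
    )]));
    // env_dev.toml fragment: only host is overridden.
    let overlay = Value::Table(BTreeMap::from([(
        "app_server".to_string(),
        Value::Table(BTreeMap::from([(
            "host".to_string(),
            Value::Str("localhost".into()),
        )])),
    )]));

    println!("{:?}", merge(&base, &overlay));
}
```

With this rule, env_dev.toml replaces `host` while `port = 5672` survives from beds.toml, which is exactly the inheritance behavior described above.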
## Status
| Component | Status |
|---|---|
| Config loading (layered TOML + env select) | Done |
| Structured logging (journald + console mirror) | Done |
| IPL sequence with env-aware error handling | Done |
| RabbitMQ reachability validation | Done |
| RabbitMQ authentication + exchange declaration | Done |
| Unit test scaffolding + config fixture pattern | Done |
| MongoDB reachability validation | Done |
| MariaDB reachability validation | Done |
| Broker pool (Tokio tasks) + queue declaration | Next |
| AMQP publish / consume | Planned |
| NamasteCore trait | Planned |
| Database adapters (MariaDB, MongoDB) | Planned |
| Factory dispatch | Planned |
| AI database object generation | Phase 2 |
## Developer Wiki
Full framework documentation lives in wiki/:
- Origin Story — Where BEDS came from and why it was built the way it was
- Architecture Overview — Full system design and core principles
- The Four Nodes — appServer, admin, segundo, tercero
- IPL — Initial Program Load — Bootstrap sequence, step by step
- Configuration System — Layered TOML, env files, topology options
- Queue Topology — AMQP exchanges, queues, routing keys
- Template System — REC and REL templates, TLA convention
- Event Lineage — Compound event IDs, parent/child tracking
- Glossary — Terms and abbreviations
## Performance Baseline
The PHP predecessor achieved:
- 40,000+ transactions per second on a single node
- ~200ms round-trip Baja California → West Virginia with dozens of concurrent DB fanout calls per transaction
- 952 days continuous production uptime without error
The Rust rewrite must meet or exceed all three numbers. Benchmarks run before any architectural change touching the broker pool or factory dispatch path.