BEDS — Back End Data System
A Rust rewrite of a battle-tested PHP microservice backend framework with 952 days of continuous production uptime, 40,000+ transactions per second on a single node, and ~200ms round-trip latency from Baja California to West Virginia across dozens of concurrent database fanout calls.
This is not a greenfield project. The architecture is proven. The Rust rewrite exists to go further.
Why Rust
| Problem in PHP | Solution in Rust |
|---|---|
| Memory leaks required planned broker death via SIGCHLD | Tokio tasks don't leak — broker pool is permanent |
| Runtime dependency on PHP interpreter | Single compiled binary, zero runtime deps |
| Source code exposed on deployment | Compiled — IP protected |
| Throughput ceiling on single node | 5–10x improvement expected |
| No AI layer | Phase 2: AI-driven database object generation |
Architecture
BEDS is AMQP-first. No component in the application layer ever touches a database directly. Every data operation flows through a message broker. This is not a constraint — it is the product.
Client Request
│
▼
RabbitMQ / AMQP
│
▼
Broker Pool (Tokio async tasks — one per broker type)
│
▼
Factory (template name → adapter dispatch)
│
▼
NamasteCore Trait (unified CRUD interface)
│
▼
Database Adapter (MySQL · MongoDB)
│
▼
DBA-owned Schema (views · stored procedures · functions)
The application layer never writes SQL. Ever.
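The factory step above (template name → adapter dispatch) can be sketched in a few lines. This is an illustrative assumption, not the real BEDS API: the `Backend` enum, the `dispatch` function, and the template names are all hypothetical stand-ins.

```rust
// Hypothetical sketch of factory dispatch: a DBA-owned template name
// resolves to the backend adapter that serves it. Names here are
// illustrative, not the real BEDS types.

#[derive(Debug, PartialEq)]
enum Backend {
    MySql,
    MongoDb,
}

/// Resolve a template name to an adapter. In this sketch the mapping is
/// hard-coded; BEDS would derive it from config or a template registry.
fn dispatch(template: &str) -> Option<Backend> {
    match template {
        "user_read" | "user_write" => Some(Backend::MySql),
        "session_read" => Some(Backend::MongoDb),
        _ => None, // unknown template: no adapter, never raw SQL
    }
}

fn main() {
    assert_eq!(dispatch("user_read"), Some(Backend::MySql));
    assert_eq!(dispatch("unknown"), None);
    println!("dispatch sketch ok");
}
```

The point of the indirection: the application layer only ever names a template, so swapping a domain from MySQL to MongoDB is a dispatch-table change, not an application change.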
Core Principles
- AMQP-first — all data access flows through the broker layer, no exceptions
- Database agnostic — MySQL/MariaDB and MongoDB behind a unified trait; no DB-specific logic leaks upward
- DBA-owned schema — all data access goes through named database objects; the application calls template names, not queries
- Template-driven CRUD — each data domain is a struct implementing NamasteCore; adding a domain means adding one file
- Config-driven nodes — all nodes run the same binary; role is determined entirely by startup config
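The "one struct per domain" principle can be illustrated with a minimal sketch. The method names, signatures, and the `Users` domain below are assumptions for illustration; the real NamasteCore trait is not yet published.

```rust
// Illustrative sketch of the NamasteCore idea: one trait, one struct per
// data domain. Method names and signatures are assumptions, not the
// real trait definition.

trait NamasteCore {
    /// The DBA-owned template this domain calls — never raw SQL.
    fn template(&self) -> &'static str;
    fn create(&self, payload: &str) -> String;
    fn read(&self, id: u64) -> String;
}

/// Adding a data domain means adding one struct like this, in one file.
struct Users;

impl NamasteCore for Users {
    fn template(&self) -> &'static str {
        "user_crud" // named database object, resolved by the factory
    }
    fn create(&self, payload: &str) -> String {
        // Stand-in for building the broker message for this template.
        format!("CALL {} (create, {payload})", self.template())
    }
    fn read(&self, id: u64) -> String {
        format!("CALL {} (read, {id})", self.template())
    }
}

fn main() {
    let users = Users;
    println!("{}", users.read(42));
}
```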
Service Nodes
| Node | Role | Brokers |
|---|---|---|
| appServer | Primary application | rBroker, wBroker, mBroker |
| admin | Administration & observability | adminBrokerIn/Out, adminLogsBroker, adminSyslogBroker, adminGraphBroker |
| segundo | Warehouse / cool storage | whBroker, cBroker |
| tercero | User & session management | uBroker, sBroker |
Every node runs the same binary. Configuration determines what it does.
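A role section in config might look like the fragment below. The `[node]` table and its key names are illustrative assumptions, not the actual BEDS config schema:

```toml
# Hypothetical fragment — key names are illustrative, not BEDS's schema
[node]
role = "appServer"   # same binary everywhere; role chosen at startup

[node.brokers]       # broker types this node runs, with instance counts
rBroker = 4
wBroker = 2
mBroker = 1
```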
Scaling Model
- Vertical — increase broker instance count in config when a node has headroom
- Horizontal — add nodes to the broker pool when a node saturates
- Hot-swap — nodes can be replaced without touching application code; the broker pool absorbs the transition
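Scaling by instance count can be sketched as a pool spawned from config. BEDS runs brokers as Tokio async tasks; plain threads and a hard-coded count map stand in here so the sketch stays stdlib-only, and the function names are hypothetical.

```rust
use std::collections::HashMap;
use std::thread;

// Sketch of config-driven scaling: the instance count per broker type
// comes from config, so raising throughput on a node with headroom is a
// config edit, not a code change. BEDS uses Tokio tasks; std threads
// stand in here to keep the sketch dependency-free.

fn spawn_pool(counts: &HashMap<&'static str, usize>) -> Vec<thread::JoinHandle<String>> {
    let mut handles = Vec::new();
    for (&broker, &n) in counts {
        for i in 0..n {
            // Each worker would consume from its broker's AMQP queue;
            // here it just reports readiness.
            handles.push(thread::spawn(move || format!("{broker}-{i} ready")));
        }
    }
    handles
}

fn main() {
    // Raising a count here is the scaling knob — no code change.
    let counts = HashMap::from([("rBroker", 2), ("wBroker", 1)]);
    let handles = spawn_pool(&counts);
    assert_eq!(handles.len(), 3);
    for h in handles {
        println!("{}", h.join().unwrap());
    }
}
```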
Project Structure
rustybeds/
├── src/
│ ├── config/
│ │ ├── mod.rs # Loader — layered TOML, base + env override
│ │ └── structs.rs # Typed config structs (serde Deserialize)
│ ├── logging.rs # tracing + journald init
│ └── main.rs
├── config/
│ ├── beds.toml # Base config — checked in, no credentials
│ └── env.toml # Environment overrides — gitignored
└── Cargo.toml
Configuration
Two-file layered TOML system. beds.toml contains production defaults and is checked into version control. env.toml overrides per-environment values and is never committed.
# beds.toml — base
[broker_services.app_server]
host = "prod-broker.internal"
port = 5672
# env.toml — local override
[broker_services.app_server]
host = "localhost"
pass = "your-actual-password"
The config crate deep-merges these at startup. Only keys present in env.toml are overridden — everything else inherits from base.
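The deep-merge semantics can be shown with a small sketch. A tiny `Value` enum stands in for parsed TOML here; this is an illustration of the merge rule (override keys win, everything else inherits), not the config crate's actual implementation.

```rust
use std::collections::BTreeMap;

// Sketch of the layered-config deep merge: only keys present in the
// override layer replace base values; everything else is inherited.
// A minimal Value enum stands in for parsed TOML.

#[derive(Clone, Debug, PartialEq)]
enum Value {
    Str(String),
    Table(BTreeMap<String, Value>),
}

fn deep_merge(base: &mut Value, overlay: Value) {
    match (base, overlay) {
        // Both sides are tables: merge key by key, recursing into
        // nested tables so sibling keys in the base survive.
        (Value::Table(b), Value::Table(o)) => {
            for (k, v) in o {
                match b.get_mut(&k) {
                    Some(existing) => deep_merge(existing, v),
                    None => {
                        b.insert(k, v);
                    }
                }
            }
        }
        // Any non-table overlay value simply wins.
        (b, o) => *b = o,
    }
}

fn main() {
    // Base layer: production defaults, as in beds.toml.
    let mut base = Value::Table(BTreeMap::from([
        ("host".to_string(), Value::Str("prod-broker.internal".into())),
        ("port".to_string(), Value::Str("5672".into())),
    ]));
    // Override layer: per-environment values, as in env.toml.
    let overlay = Value::Table(BTreeMap::from([
        ("host".to_string(), Value::Str("localhost".into())),
        ("pass".to_string(), Value::Str("secret".into())),
    ]));
    deep_merge(&mut base, overlay);
    // host is overridden, port is inherited, pass is added.
    println!("{base:?}");
}
```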
Status
| Component | Status |
|---|---|
| Config loading | Done |
| Structured logging (journald + console) | Done |
| Broker pool | Next |
| NamasteCore trait | Planned |
| Database adapters (MySQL, MongoDB) | Planned |
| Factory dispatch | Planned |
| AI database object generation | Phase 2 |
Performance Baseline
The PHP predecessor achieved:
- 40,000+ transactions per second on a single node
- ~200ms round-trip Baja California → West Virginia with dozens of concurrent DB fanout calls per transaction
- 952 days continuous production uptime without error
The Rust rewrite must meet or exceed all three numbers. Benchmarks run before any architectural change touching the broker pool or factory dispatch path.