- Add MariaDB (REL) IPL validation — master required, secondary non-fatal
- Add RelNodeConfig / RelInstanceConfig structs with master/secondary pattern
- Add rel_services section to beds.toml and test fixture
- Add detailed topology commentary to beds.toml covering standalone,
master/replica, Galera cluster, and multi-DB-per-node configurations
- Add developer wiki (wiki/) covering:
- Origin story — PHP Namaste history, production record, why Rust
- Architecture overview — full system diagram, all layers explained
- The four nodes — appServer, admin, segundo, tercero with real-world context
- IPL sequence — every step documented with rationale for ordering
- Configuration system — layering, env selection, adding new sections
- Queue topology — exchanges, routing keys, broker bindings, vhost isolation
- Template system — REC/REL, TLA convention, cache map, warehousing
- Event lineage — compound event IDs, parent/child tracking, msLogs schema
- Glossary
- Update README with wiki index and MariaDB status
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
# Origin Story
## The Problem That Started Everything
In 2017, a PHP backend framework called **Namaste** was built for Giving Assistant, a charitable shopping platform based in California. The business had a deceptively simple technical problem: a single application server handling thousands of concurrent user sessions, all hammering a MySQL database through a conventional ORM layer.
The ORM was the problem. Every request spawned its own database connection, held it open for the duration of the request lifecycle, and released it on completion — if it completed cleanly. Memory leaks accumulated. Under load, the connection pool was exhausted. Queries that could have been satisfied by a cached result went to the database anyway. There was no circuit breaker, no backpressure, and no way to distinguish a read that could tolerate slight staleness from a write that could not.
The standard PHP answer — throw more servers at it — was tried. It worked until it didn't. Horizontal scaling moved the bottleneck from the web tier to the database tier without solving the underlying architectural problem.
## The Design Decision
The core insight was this: **the application layer should never touch the database directly**. Not through an ORM, not through a raw PDO connection, not through any mechanism that gives the application layer visibility into the database topology.
Everything goes through a message broker. A client request hits the application layer, gets packaged as an AMQP event, and is dispatched to a queue. A broker process — completely independent of the web tier — picks it up, executes the database operation, and routes the result back. The application layer never waits on a database connection. It waits on a message.
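
The request-as-message flow can be sketched in Rust with in-process channels standing in for the AMQP queues — a minimal illustration of the pattern, not the real broker wiring. The names `DbRequest`, `dispatch`, and `fake_db` are invented for this sketch:

```rust
use std::sync::mpsc;
use std::thread;

/// A request packaged as a message. The application layer only ever sees
/// this envelope and a reply channel — never a database handle.
struct DbRequest {
    query: String,
    reply: mpsc::Sender<String>,
}

/// Stand-in for the real database call the broker would execute.
fn fake_db(query: &str) -> String {
    format!("rows for `{query}`")
}

/// Application side: package the request, dispatch it, await the reply.
fn dispatch(tx: &mpsc::Sender<DbRequest>, query: &str) -> String {
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(DbRequest { query: query.to_string(), reply: reply_tx })
        .expect("broker gone");
    reply_rx.recv().expect("broker dropped the reply")
}

fn main() {
    let (tx, rx) = mpsc::channel::<DbRequest>();

    // Broker side: independent of the "web tier", it drains the queue,
    // executes the database work, and routes the result back.
    let broker = thread::spawn(move || {
        for req in rx {
            let _ = req.reply.send(fake_db(&req.query));
        }
    });

    let answer = dispatch(&tx, "SELECT 1");
    println!("{answer}");

    drop(tx); // close the queue so the broker thread exits cleanly
    broker.join().unwrap();
}
```

The slow-database case falls out naturally: requests accumulate in the channel rather than blocking the caller at a connection pool.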
This had several consequences that turned out to be features:
**Decoupling.** The web tier and the database tier became operationally independent. A slow database didn't block the web tier — it built a queue. The queue was observable, manageable, and bounded.
**Backpressure.** The broker pool was the throttle. You could tune how many concurrent database operations ran by adjusting broker instance counts in a config file, without touching a line of code.
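
As an illustration, tuning the throttle from a config file might look like this in TOML — the section and key names below are hypothetical, not the actual `beds.toml` schema:

```toml
# Hypothetical sketch of broker-pool tuning. Key names are illustrative.
[brokers.user_data]
instances = 8    # concurrent database operations for this domain
prefetch = 1     # one message in flight per broker instance

[brokers.reporting]
instances = 2    # reporting tolerates a deeper queue, so fewer workers
prefetch = 1
```

Raising `instances` widens the throttle; lowering it deepens the queue — no code change either way.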
**Database agnosticism.** Because the application layer never called the database directly, the database could be swapped. The same broker call that hit MySQL could be routed to MongoDB by changing a template config. This wasn't theoretical — it was used in production to migrate collections from MySQL to MongoDB without application downtime.
**Planned obsolescence.** PHP worker processes leak memory. This is a known, accepted fact in PHP production operations. The conventional solution is to restart workers periodically — the infamous `SIGCHLD` dance. In Namaste, broker processes were intentionally designed to accept a kill signal, complete their in-flight work, and exit gracefully. A supervisor process immediately spawned a replacement. Memory leaks were managed by design, not fought against.
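
The drain-then-exit behavior can be modeled in Rust with a shutdown flag checked between jobs — a sketch only: real signal handling and the supervisor's respawn loop are elided, and `run_worker` is an invented name:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

/// Process jobs until a shutdown is requested. In-flight work is never
/// abandoned: the flag is only checked between jobs.
fn run_worker(shutdown: Arc<AtomicBool>, jobs: Vec<&'static str>) -> Vec<String> {
    let mut done = Vec::new();
    for job in jobs {
        if shutdown.load(Ordering::SeqCst) {
            break; // stop picking up new work; exit gracefully
        }
        done.push(format!("completed {job}"));
    }
    done
}

fn main() {
    let shutdown = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&shutdown);
    let worker = thread::spawn(move || run_worker(flag, vec!["job-1", "job-2"]));

    // A supervisor would set the flag (on a kill signal, in the PHP
    // original) and immediately spawn a replacement worker.
    shutdown.store(true, Ordering::SeqCst);
    let done = worker.join().unwrap();
    println!("{done:?}");
}
```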
## The Name
**Namaste** was an internal codename. The framework was formally called **BEDS** — Back End Data System. The name Namaste stuck in the codebase because it was the class prefix (`gaaNamasteCore`, `gacMongoDB`, etc.) and changing it would have broken too many things too early.
When the Rust rewrite began, the codebase was renamed **rustybeds** — a nod to both the language and the framework's history.
## Production History
Namaste ran in production at Giving Assistant from mid-2017. At peak it handled **40,000+ transactions per second** on a single application server node. Round-trip latency from Baja California to West Virginia — across dozens of concurrent database fanout calls per transaction — was consistently **~200 milliseconds**.
It ran for **952 days without an unplanned outage**.
The framework was later deployed in a different configuration at **Pathway Genomics** in California, where the `tercero` node handled user and session management for a patient portal. The separation of PII and PHI from user records — a compliance requirement — was implemented as a configuration choice, not a code change. The `tercero` node ran against a separate database with separate credentials, isolated by AMQP routing.
## The PHP Codebase
The PHP implementation lives in the `namaste` repository. It is the authoritative reference for BEDS architecture and should be consulted when the *intent* behind a design decision is unclear. The Rust rewrite does not copy PHP code — it reimplements the same architecture with Rust's type system, async runtime, and zero-cost abstractions.
Key reference files in the PHP codebase:
| File | What it shows |
|---|---|
| `config/namaste.xml` | Full production config structure — the gold standard for what config covers |
| `config/env.admin.xml` | Admin node env override — shows how node-specific config layering works |
| `classes/templates/gatTestMongo.class.inc` | Canonical REC template — the pattern every data domain follows |
| `common/errorCatalog.php` | Log level constants and integer values — replicated in BEDS Rust |
| `common/functions.php` | `consoleLog` format — the console output format BEDS Rust follows |
| `scripts/startBrokers.php` | Broker startup sequence — the origin of the IPL concept |
| `common/dbCatalog.php` | TLA naming convention — confirmed source of the three-letter abbreviation system |

## Why Rust
The PHP implementation worked. The decision to rewrite in Rust was not driven by a production failure — it was driven by what the framework could become:
1. **Memory leaks, eliminated.** Tokio async tasks do not leak. The `SIGCHLD` planned-obsolescence pattern becomes unnecessary.
2. **Throughput ceiling, raised.** PHP on a single process is fundamentally limited. Rust async on a multi-core machine is not. The expectation is a 5–10x throughput improvement on equivalent hardware.
3. **Single binary deployment.** No PHP interpreter, no extension dependencies, no version conflicts. One binary, copy it to the server, run it.
4. **IP protection.** A compiled binary does not expose source code on deployment.
5. **AI layer.** Phase 2 of BEDS Rust includes an AI-driven database object generation layer — a DBA describes a data domain in natural language and the AI generates the schema, stored procedures, and BEDS template. This is the primary market differentiator and was not feasible in PHP.

The architecture is proven. The Rust rewrite exists to go further.
|