Add MariaDB IPL validation, topology docs in beds.toml, and developer wiki

- Add MariaDB (REL) IPL validation — master required, secondary non-fatal
- Add RelNodeConfig / RelInstanceConfig structs with master/secondary pattern
- Add rel_services section to beds.toml and test fixture
- Add detailed topology commentary to beds.toml covering standalone,
  master/replica, Galera cluster, and multi-DB-per-node configurations
- Add developer wiki (wiki/) covering:
    - Origin story — PHP Namaste history, production record, why Rust
    - Architecture overview — full system diagram, all layers explained
    - The four nodes — appServer, admin, segundo, tercero with real-world context
    - IPL sequence — every step documented with rationale for ordering
    - Configuration system — layering, env selection, adding new sections
    - Queue topology — exchanges, routing keys, broker bindings, vhost isolation
    - Template system — REC/REL, TLA convention, cache map, warehousing
    - Event lineage — compound event IDs, parent/child tracking, msLogs schema
    - Glossary
- Update README with wiki index and MariaDB status

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-04 15:41:28 -07:00
parent 2ce87710ff
commit 2a9afe7d77
19 changed files with 1798 additions and 12 deletions

wiki/02-architecture.md

@@ -0,0 +1,160 @@
# Architecture Overview
## The Central Principle
**BEDS is AMQP-first. No component in the application layer ever touches a database directly. Ever.**
This is not a guideline. It is the architectural constraint that makes everything else possible. If you find yourself writing code that calls a database adapter directly from outside the broker layer, you are breaking the framework.
## System Diagram
```
External Client
│ HTTP / WebSocket / REST
┌─────────────┐
│ appServer │ ← your application logic lives here
│ node │
└──────┬──────┘
│ AMQP event (routing key: rec.write, rel.read, log, etc.)
┌─────────────────────────────────┐
│ RabbitMQ Broker │
│ │
│ Exchange: beds.events (topic) │
│ Exchange: beds.logs (topic) │
└──────┬──────────────────┬───────┘
│ │
▼ ▼
┌────────────┐ ┌────────────┐
│ Broker │ │ admin │
│ Pool │ │ node │
│ (Tokio │ │ │
│ tasks) │ │ logging │
└──────┬─────┘ │ auditing │
│ │ metrics │
▼ └─────┬──────┘
┌────────────┐ │
│ Factory │ ▼
│ Dispatch │ ┌────────────┐
└──────┬─────┘ │ MongoDB │
│ │ msLogs │
▼ └────────────┘
┌────────────────────────┐
│ NamasteCore Trait │
│ (unified CRUD iface) │
└──────┬─────────────────┘
├──────────────────────────┐
▼ ▼
┌────────────┐ ┌────────────┐
│ MongoDB │ │ MariaDB │
│ Adapter │ │ Adapter │
│ (REC) │ │ (REL) │
└──────┬─────┘ └──────┬─────┘
│ │
▼ ▼
┌────────────┐ ┌────────────┐
│ MongoDB │ │ MariaDB │
│ Collections│ │ Tables / │
│ │ │ Procs / │
│ │ │ Views │
└────────────┘ └────────────┘
```
## Layers
### 1. Transport Layer (AMQP)
RabbitMQ is the backbone. All inter-component communication flows through it. This includes:
- Client data requests (read, write, update, delete)
- Log events from all nodes
- Audit records
- Migration jobs
- Warehouse operations
The transport layer knows nothing about databases. It routes messages. That is all.
### 2. Broker Pool (Tokio tasks)
Each node runs a pool of async broker tasks. Each task listens on one queue, processes one message at a time, and routes the result back via AMQP. The pool size is configured per broker type in `beds.toml`.
The broker pool is the throttle for the entire system. By adjusting instance counts, you control how many concurrent database operations the node performs — without changing a line of code.
Broker tasks are supervised. A panicked task is logged and replaced. The pool does not shrink on failure.
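As a dependency-free sketch only — BEDS brokers are async Tokio tasks, but std threads stand in here so the example runs on its own, and every name is invented — the one-message-at-a-time, survive-a-panic worker loop looks roughly like this:

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Hypothetical pool: `size` workers share one queue receiver. Each worker
// takes one message at a time; a panic in the handler is caught so the
// worker survives and the pool does not shrink on failure.
fn spawn_pool(
    size: usize,
    rx: Arc<Mutex<mpsc::Receiver<String>>>,
    processed: Arc<Mutex<Vec<String>>>,
) -> Vec<thread::JoinHandle<()>> {
    (0..size)
        .map(|_| {
            let rx = Arc::clone(&rx);
            let processed = Arc::clone(&processed);
            thread::spawn(move || loop {
                // One message at a time per worker.
                let msg = match rx.lock().unwrap().recv() {
                    Ok(m) => m,
                    Err(_) => break, // channel closed: shut down cleanly
                };
                // Catching the panic replaces real supervision for brevity.
                let _ = panic::catch_unwind(AssertUnwindSafe(|| {
                    processed.lock().unwrap().push(msg);
                }));
            })
        })
        .collect()
}
```

The pool-size parameter is the throttle described above: it bounds concurrent work without touching handler code.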
### 3. Factory Dispatch
A broker task receives an event containing a template name (e.g. `"Users"`, `"Sessions"`). The factory maps that name to the correct adapter: MongoDB for REC templates, MariaDB for REL templates. The factory does not know at compile time which template will be requested; dispatch happens at runtime.
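A minimal sketch of that runtime dispatch, with an invented registry type (in BEDS the mapping would be built from template config at startup; these names are illustrative, not the real API):

```rust
use std::collections::HashMap;

// Which backing store a template resolves to.
#[derive(Clone, Copy, Debug, PartialEq)]
enum AdapterKind {
    Rec, // MongoDB document adapter
    Rel, // MariaDB relational adapter
}

// Hypothetical factory: template name -> adapter kind, resolved per message.
struct Factory {
    registry: HashMap<String, AdapterKind>,
}

impl Factory {
    /// Resolve the template name carried in an event to its adapter.
    fn dispatch(&self, template: &str) -> Option<AdapterKind> {
        self.registry.get(template).copied()
    }
}
```

Because the lookup is by name at runtime, registering a new template changes the registry contents, not the dispatch code.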
### 4. NamasteCore Trait
The unified CRUD interface. Every database adapter implements it. Every template is a struct that selects an adapter and delegates to it. The application layer calls `NamasteCore` methods — it never calls adapter methods directly.
```rust
pub trait NamasteCore {
    async fn create_record(&self, payload: &Payload) -> Result<Response, BedsError>;
    async fn fetch_records(&self, query: &Query) -> Result<Vec<Response>, BedsError>;
    async fn update_record(&self, payload: &Payload) -> Result<Response, BedsError>;
    async fn delete_record(&self, id: &str) -> Result<Response, BedsError>;
}
```
### 5. Database Adapters
Two adapters, one interface:
- **REC adapter** — MongoDB. Document store. Schema-flexible. High-throughput appends. Used for logs, events, user profiles, audit records, anything that benefits from document structure.
- **REL adapter** — MariaDB. Relational store. SQL joins, transactions, strict schema. Used for anything that benefits from referential integrity.
Adapters do not write SQL or MongoDB queries. They call named database objects — stored procedures, views, functions — that the DBA owns. The adapter layer calls the object by name and passes parameters. It does not construct queries.
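For illustration only, with invented `usp_` names: the REL adapter's job reduces to naming the DBA-owned procedure and binding its parameters, never assembling ad-hoc SQL. A sketch of that call shape:

```rust
// Hypothetical helper: build the invocation for a named stored procedure.
// The adapter supplies only the object name and the parameter count; the
// procedure body itself is owned by the DBA, not the application.
fn call_statement(proc_name: &str, arity: usize) -> String {
    let placeholders = vec!["?"; arity].join(", ");
    format!("CALL {proc_name}({placeholders})")
}
```

The actual parameters would be bound through the database driver's prepared-statement API, keeping query construction entirely out of the application layer.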
### 6. DBA-Owned Schema
The application layer never writes a query. All data access goes through named database objects. This is the separation of concerns that made Namaste maintainable across years and multiple development teams.
Adding a new data domain means:
1. DBA writes the schema (table/collection, views, stored procedures)
2. Developer writes a BEDS template (a TOML config file)
3. BEDS generates the adapter binding
Nothing else changes.
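A hypothetical REL template file, to make step 2 concrete — every section and key name below is invented to show the shape, not the real BEDS schema:

```toml
# Illustrative template config (names invented).
[template.Users]
engine      = "REL"               # selects the MariaDB adapter
create_proc = "usp_users_create"  # DBA-owned stored procedure
fetch_view  = "vw_users"          # DBA-owned view
```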
## The CALGON Pattern
Some operations cannot return an immediate result — long-running aggregations, migration jobs, warehouse operations. BEDS handles these with the **CALGON** pattern (async ticket):
1. Client submits a request
2. BEDS immediately returns a GUID ticket
3. The operation executes asynchronously
4. Client polls with the GUID to retrieve the result when ready
The client is never blocked on a long operation. The broker absorbs the work. This is the same pattern used by every major async job queue system, implemented natively in BEDS.
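The four steps above can be sketched with a plain map standing in for the broker — in BEDS the work runs asynchronously and the client polls over AMQP; here, with invented names, a direct insert plays the worker's part:

```rust
use std::collections::HashMap;

// Hypothetical ticket store: GUID -> pending (None) or finished result.
#[derive(Default)]
struct TicketStore {
    results: HashMap<String, Option<String>>,
}

impl TicketStore {
    /// Steps 1-2: accept the request and hand back the GUID ticket at once.
    fn submit(&mut self, guid: &str) -> String {
        self.results.insert(guid.to_string(), None);
        guid.to_string()
    }
    /// Step 3: the async worker records the finished result.
    fn complete(&mut self, guid: &str, result: String) {
        self.results.insert(guid.to_string(), Some(result));
    }
    /// Step 4: the client polls; `None` means "not ready yet".
    fn poll(&self, guid: &str) -> Option<&str> {
        self.results.get(guid).and_then(|r| r.as_deref())
    }
}
```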
## Event Lineage
Every BEDS event carries a compound identifier:
```
event_id  = "{node}.{env}.{guid}"  # e.g. "ms.production.a1b2c3d4..."
parent_id = ""                     # empty string if this is a root event
depth     = 0                      # levels from the root event
```
A root event (an incoming client request) has `depth=0` and no parent. Every event it spawns (database calls, log events, audit records) carries the root's `event_id` as its `parent_id` and increments `depth`. This creates a complete, queryable tree of every operation triggered by a single client request.
Event lineage is how you answer "what actually happened when request X came in?" — without distributed tracing infrastructure.
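The derivation rules can be sketched directly from the fields above — the type and function names here are invented, and GUID generation is passed in rather than implemented:

```rust
// Illustrative lineage type matching the three fields described above.
#[derive(Clone, Debug)]
struct EventId {
    event_id: String,
    parent_id: String, // empty for a root event
    depth: u32,
}

/// An incoming client request: no parent, depth 0.
fn root_event(node: &str, env: &str, guid: &str) -> EventId {
    EventId {
        event_id: format!("{node}.{env}.{guid}"),
        parent_id: String::new(),
        depth: 0,
    }
}

/// Any event spawned by another: inherits the parent's id, deepens by one.
fn child_event(parent: &EventId, node: &str, env: &str, guid: &str) -> EventId {
    EventId {
        event_id: format!("{node}.{env}.{guid}"),
        parent_id: parent.event_id.clone(), // links back to the spawning event
        depth: parent.depth + 1,
    }
}
```

Querying msLogs for all events whose lineage leads back to one root `event_id` then reconstructs the full tree for that request.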
## Configuration Drives Everything
BEDS has no node types in code. All nodes run the same binary. The configuration file determines:
- Which services this node runs (`is_local` per service)
- How many brokers of each type to spawn
- Which databases to connect to
- Whether this node is in production mode (fatal IPL failures) or development mode (non-fatal)
Changing a node's role means changing its config file and restarting. No code changes. No redeployment.
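A hypothetical fragment showing how the same binary becomes different nodes — the keys below are illustrative, not the real `beds.toml` schema:

```toml
# Illustrative node config (key names invented).
[services.rec]
is_local = true        # this node runs the REC service

[services.rel]
is_local = false       # REL work is routed to another node

[brokers.rec_write]
instances = 4          # throttle: concurrent REC writes on this node
```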