# The Four Nodes

BEDS defines four node roles. All nodes run the same binary — role is determined entirely by configuration. In a homelab or development environment, all four roles run on a single machine. In production, they typically run on separate servers.

The `is_local` flag in the env config file is the declaration: "this service runs on this physical machine." Brokers are only started for services declared as local.

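The locality gating can be sketched roughly as follows. This is an illustrative Python sketch, not the framework's actual code; the `local_services` function name is an assumption:

```python
# Illustrative sketch: only services whose is_local flag is true
# get brokers started on this machine. The config dict mirrors the
# env TOML files shown later on this page.

def local_services(env_config: dict) -> list[str]:
    """Return the service sections declared local to this machine."""
    return [name for name, section in env_config.items()
            if section.get("is_local", False)]

env_dev = {
    "app_server": {"is_local": True},
    "admin": {"is_local": True},
    "segundo": {"is_local": False},
    "tercero": {"is_local": False},
}

print(local_services(env_dev))  # ['app_server', 'admin']
```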
---

## appServer

**The primary application node.** This is where your business logic lives. In the PHP implementation this was also called "namaste" — the application layer that handled all client-facing CRUD operations.

### Responsibilities

- Receives all incoming client requests via AMQP
- Dispatches to the factory layer for database operations
- Returns results to clients

### Broker Types

| Broker | Queue | Purpose |
|---|---|---|
| `rBroker` | `rec.read`, `rel.read` | Non-destructive fetch queries |
| `wBroker` | `rec.write`, `rel.write` | Create / update / delete operations |
| `mBroker` | `rec.obj`, `rel.obj` | Migration and bulk transfer events — disabled by default |

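As a rough illustration of how an operation could map onto the queues in the table above (the function and operation names here are hypothetical; `rec` selects the document store, `rel` the relational store):

```python
# Toy mapping from (operation, store) to a queue name, following
# the broker table above: reads go to the rBroker queues, writes
# to the wBroker queues.

READ_OPS = {"fetch", "find", "count"}
WRITE_OPS = {"create", "update", "delete"}

def queue_for(op: str, store: str) -> str:
    if store not in ("rec", "rel"):
        raise ValueError(f"unknown store: {store}")
    if op in READ_OPS:
        return f"{store}.read"
    if op in WRITE_OPS:
        return f"{store}.write"
    raise ValueError(f"unknown operation: {op}")

print(queue_for("fetch", "rec"))   # rec.read
print(queue_for("update", "rel"))  # rel.write
```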
### Databases

- MongoDB: primary application document store
- MariaDB: primary relational store

### Real-world deployment note

In the Giving Assistant production deployment, appServer handled 40,000+ transactions per second on a single node. The broker pool absorbed burst traffic; the queue was the backpressure mechanism. When the database was slow, the queue grew — it did not drop requests.

---

## admin

**The administrative and observability node.** This is the most critical node in the cluster from an operations standpoint. It is the logger, the auditor, the metrics collector, and the system health monitor.

All other nodes route their log events to admin over AMQP. Admin is the single point of truth for what happened in the cluster.

### Responsibilities

- Receives and persists all log events from all nodes
- Routes log events to syslog when configured
- Records audit trails for auditable operations
- Collects and publishes performance metrics and timer data
- Handles administrative AMQP events (node management, config reloads)

### Broker Types

| Broker | Queue | Purpose |
|---|---|---|
| `adminBrokerIn` | `adm` | Inbound administrative events |
| `adminBrokerOut` | `adm` | Outbound administrative responses |
| `adminLogsBroker` | `log` | Log events from all nodes |
| `adminSyslogBroker` | `log` | Syslog routing for log events |
| `adminGraphBroker` | `log` | Metrics and graph data collection |

### Databases

- MongoDB: `msLogs` collection (log event store), audit records
- MariaDB: administrative relational data

### Important: admin is the logger

Non-admin nodes do not write logs directly to MongoDB. They publish log events to the `log` exchange over AMQP. Admin consumes them and writes to `msLogs`. This means:

- If admin is down, log events queue in RabbitMQ — they are not lost
- If MongoDB is down on admin, the queue backs up until it recovers
- No other node needs a direct MongoDB connection for logging

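The backlog-and-drain shape of this design can be illustrated with a toy in-memory stand-in for the `log` queue. RabbitMQ provides the actual durability; this sketch only demonstrates that producers never block or drop while the consumer is down:

```python
from collections import deque

# Toy stand-in for the durable `log` queue: producers always enqueue;
# the admin consumer drains only when it is up. While the consumer is
# down, events simply accumulate.

log_queue: deque = deque()

def publish_log(event: dict) -> None:
    log_queue.append(event)  # fire-and-forget from any node

def admin_drain(write_to_mslogs) -> int:
    """Consume everything queued; return the number persisted."""
    n = 0
    while log_queue:
        write_to_mslogs(log_queue.popleft())
        n += 1
    return n

# admin "down": events accumulate in the queue
for i in range(3):
    publish_log({"event_id": i, "level": "info"})
assert len(log_queue) == 3

# admin "back up": the backlog drains in order, nothing lost
persisted = []
assert admin_drain(persisted.append) == 3
print(persisted[0])  # {'event_id': 0, 'level': 'info'}
```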
This design was battle-tested: in the Namaste homelab, the admin node was run on a Raspberry Pi to deliberately stress-test the queue backlog behaviour. The Pi was slower than the appServer — logs queued during spikes and drained during lulls. Nothing dropped.

---

## segundo

**The warehousing and cool storage node.** Segundo handles the data lifecycle — moving records from HOT (live production) storage to COOL (warehoused) storage on a defined schedule.

"Segundo" is Spanish for "second" — this was the second node added to the framework after appServer, originally to handle the warehousing workload that was creating performance problems in the primary database.

### Responsibilities

- Automated warehousing — moves eligible records from HOT to COOL storage on a schedule
- On-demand warehousing — responds to explicit warehouse requests
- Manages COOL storage (warehoused data that maintains schema and indexing)
- Data migration support

### Broker Types

| Broker | Queue | Purpose |
|---|---|---|
| `whBroker` | `mig` | Warehouse operations — scheduled and on-demand |
| `cBroker` | `mig` | Consolidation broker — bulk data operations |

### Databases

- MongoDB: COOL storage document collections
- MariaDB: `beds_warehouse` — warehoused relational data

### HOT / COOL / COLD storage model

| Tier | Description | Index changes | Schema changes |
|---|---|---|---|
| HOT | Live production data | No | No |
| COOL | Warehoused, full schema preserved | Allowed | Allowed |
| COLD | Archived, reformatted (typically CSV) | N/A | N/A |
| WARM | Being restored from COLD to HOT | In progress | In progress |

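A sketch of the HOT-to-COOL eligibility decision segundo makes on its schedule. The 90-day window and the `updated_at` field name are invented for illustration; the framework's actual policy is configurable and may differ:

```python
from datetime import datetime, timedelta, timezone

# Illustrative warehousing rule: records untouched for longer than the
# retention window move from HOT to COOL. The field name and window
# below are assumptions, not the framework's actual policy.

RETENTION = timedelta(days=90)

def eligible_for_cool(record: dict, now: datetime) -> bool:
    return now - record["updated_at"] > RETENTION

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "updated_at": now - timedelta(days=200)},  # stale: move to COOL
    {"id": 2, "updated_at": now - timedelta(days=10)},   # fresh: stays HOT
]
to_warehouse = [r["id"] for r in records if eligible_for_cool(r, now)]
print(to_warehouse)  # [1]
```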
---

## tercero

**The user and session management node.** Tercero was the third node added to the framework, originally driven by a compliance requirement at Pathway Genomics in California.

"Tercero" is Spanish for "third."

### The compliance backstory

Pathway Genomics ran a patient portal for genetic test kits. Patient data included both PII (Personally Identifiable Information) and PHI (Protected Health Information under HIPAA). The compliance requirement was clear: PII and PHI must be physically separated — different databases, different credentials, different access controls.

The solution was to route all user and session data through a dedicated node (tercero) with its own MongoDB instance and MariaDB database. The appServer node never touched the user database directly. It sent AMQP events to tercero and received session tokens back.

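The event-and-token round trip can be sketched as a correlation-id exchange. This toy uses in-memory dicts in place of the AMQP queues, and all names (`session.create`, `sess-` tokens) are hypothetical:

```python
import uuid

# Toy sketch of the appServer-to-tercero round trip: appServer
# publishes a login event tagged with a correlation id, and tercero
# replies with the session token under the same id. In the real
# system RabbitMQ carries both legs; dicts stand in for the queues.

requests = {}
replies = {}

def app_server_login(username: str) -> str:
    corr_id = str(uuid.uuid4())
    requests[corr_id] = {"type": "session.create", "user": username}
    return corr_id

def tercero_consume() -> None:
    # tercero owns the user/session stores; appServer never touches them
    for corr_id, event in list(requests.items()):
        if event["type"] == "session.create":
            replies[corr_id] = {"token": f"sess-{event['user']}"}
            del requests[corr_id]

corr = app_server_login("alice")
tercero_consume()
print(replies[corr]["token"])  # sess-alice
```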
This is the canonical demonstration of BEDS' separation-of-concerns design: a compliance requirement that would have required significant application refactoring in a conventional architecture was implemented as a configuration choice.
### Responsibilities

- User record management (registration, profile updates, deactivation)
- Session management (login, logout, session validation, expiry)
- Authentication token lifecycle

### Broker Types

| Broker | Queue | Purpose |
|---|---|---|
| `uBroker` | `rec.read`, `rec.write` | User record operations |
| `sBroker` | `rec.read`, `rec.write` | Session record operations |

### Databases

- MongoDB: `msUsers` (user profiles), `msSessions` (session records)
- MariaDB: `beds_users` — relational user data where joins are needed

---

## Node Configuration in Practice

In `beds.toml`, all four nodes share the same RabbitMQ instance but connect to different queues. The env file declares which services are local to this machine:

```toml
# env_dev.toml — all four on one machine (development)
[app_server]
is_local = true

[admin]
is_local = true

[segundo]
is_local = true

[tercero]
is_local = true
```

```toml
# env_prod.toml — dedicated servers (production)
[app_server]
is_local = true  # this file lives on the appServer machine

[admin]
is_local = false  # admin runs on a separate server

[segundo]
is_local = false  # segundo runs on a separate server

[tercero]
is_local = false  # tercero runs on a separate server
```

The binary on each server reads the same `beds.toml` base config but a different env file, which tells it which role to assume.