Archive: Namaste PHP AMQP framework v1.0 (2017-2020)

952 days continuous production uptime, 40k+ tp/s single node.
Original corpo Bitbucket history not included — clean archive commit.
This commit is contained in:
2026-04-05 09:49:30 -07:00
commit 373ebc8c93
1284 changed files with 409372 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1,6 @@
# Created by .ignore support plugin (hsz.mobi)
.idea
logs/*
pids/*
docs/*
deployment/namastessh

CLAUDE.md Normal file

@@ -0,0 +1,120 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
**Namaste** is a custom PHP 7.4+ event-driven backend framework for multi-service microservice architectures. It is NOT built on Laravel or Symfony — it uses its own custom abstractions. All data access flows through AMQP message brokers backed by RabbitMQ.
## Commands
### Dependencies
```bash
cd lib && php composer.phar update -vv --prefer-dist
```
### Running Brokers
```bash
php scripts/startBrokers.php # Start all brokers defined in XML config
bash scripts/launchBrokers.sh # Wrapper script for daemon-mode launch
bash scripts/stopBrokers.sh # Stop all running brokers
```
### Code Quality
All tools are Composer-installed in `lib/vendor/bin/`:
```bash
php lib/vendor/bin/phpcs [file] # Code style
php lib/vendor/bin/phpmd [dir] text cleancode
php lib/vendor/bin/phploc [dir]
php lib/vendor/bin/phpcpd [dir]
```
### Testing
PHPUnit 6.5 is available. Stubs in `stubs/` are used for manual/ad-hoc testing:
```bash
php lib/vendor/bin/phpunit # Run unit test suite
php stubs/testMongo.php # Manual stub tests
php stubs/testUsers.php
```
### Schema Setup
```bash
php utilities/mysqlConfig.php # Initialize MySQL/MariaDB schema
php utilities/mongoConfig.php # Initialize MongoDB schema
```
### Docker
```bash
docker build . --tag=givingassistant/namaste:master
```
## Architecture
### Configuration System
- `config/namaste.xml` — base production config (service definitions, DB connections, broker counts, security settings)
- `config/env.xml` — environment-specific overrides (local DB hosts, credentials, feature flags)
- `config/env.admin.xml` — admin service overrides
- `gasConfig` class loads and merges these XML files; environment layering is how dev/staging/prod differ
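The layering can be pictured as a base file plus an override that `gasConfig` merges on top. The element names below are a hypothetical sketch (only `envName` and the Mongo ports appear elsewhere in this repo), not the actual schema:

```xml
<!-- config/namaste.xml — base production defaults (illustrative element names) -->
<application>
  <id>
    <envName>production</envName>
  </id>
  <mongo>
    <host>mongos.internal</host>
    <port>27019</port>
  </mongo>
</application>

<!-- config/env.xml — environment override merged on top by gasConfig -->
<application>
  <id>
    <envName>development</envName>
  </id>
  <mongo>
    <host>localhost</host>
    <port>27017</port>
  </mongo>
</application>
```

Environment-specific values in `env.xml` win over the base file; this layering is the only mechanism distinguishing dev/staging/prod.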
### Service Architecture (Four Services)
- **appServer** — main application server with `rBroker` (read), `wBroker` (write), `mBroker` (mail/message)
- **admin** — admin service with `adminBrokerIn`, `adminBrokerOut`, `adminLogsBroker`, `adminSyslogBroker`, `adminGraphBroker`
- **segundo** — warehouse/cool-storage with `whBroker` and `cBroker` (Consolidated Sanctions List)
- **tercero** — user management with `uBroker` and `sBroker` (sessions)
### Data Layer Pattern
```
gacFactory (factory)
└── Resolves template name → instantiates correct widget
├── gacMongoDB — MongoDB adapter (sharding + replication support)
├── gacPDO — MySQL/MariaDB adapter (master-slave replication)
└── gacDdb — DynamoDB adapter
```
All data classes extend `gaaNamasteCore` (abstract base), which defines the CRUD interface: `_createRecord`, `_fetchRecords`, `_updateRecord`, `_deleteRecord`.
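The interface implies an abstract base roughly along these lines (the method names come from the line above; parameter lists and return types are guesses, not the framework's actual signatures):

```php
<?php
// Sketch only — the real gaaNamasteCore may declare different signatures.
abstract class gaaNamasteCore
{
    abstract protected function _createRecord(array $record);
    abstract protected function _fetchRecords(array $criteria);
    abstract protected function _updateRecord(array $criteria, array $changes);
    abstract protected function _deleteRecord(array $criteria);
}
```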
### Data Templates (`classes/templates/`)
Each `.class.inc` file is a domain-specific schema class (e.g., `gatDonors`, `gatUsers`, `gatSessions`, `gatAudit`, `gatConsolidatedSanctionsList`). They extend `gaaNamasteCore` and implement schema-specific logic. Adding new data domains means creating a new template here.
### Message Flow
1. AMQP message arrives at broker
2. Broker parses metadata (sessionID, clientIP, etc.) via `gacMeta`
3. Broker calls `gacFactory::grabWidget()` with template name
4. Factory returns the appropriate database widget
5. Widget executes the CRUD operation
6. Response published back to AMQP reply queue
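The steps above can be sketched as PHP-style pseudocode; aside from `gacFactory::grabWidget()` and the CRUD method names, everything here (constructor arguments, the php-amqplib-style publish call) is an assumption:

```php
<?php
// Hypothetical broker consume callback — illustrative, not the framework's actual code.
$callback = function ($message) use ($channel) {
    // 2. Parse metadata (sessionID, clientIP, ...) from the AMQP payload
    $meta    = new gacMeta($message->body);
    $request = json_decode($message->body, true);

    // 3-4. Resolve the template name to the appropriate database widget
    $widget = gacFactory::grabWidget($request['template']); // e.g. 'gatUsers'

    // 5. Execute the requested CRUD operation (interface from gaaNamasteCore)
    $result = $widget->_fetchRecords($request['payload']);

    // 6. Publish the response back to the AMQP reply queue
    $channel->basic_publish(
        new AMQPMessage(json_encode($result)),
        '',
        $message->get('reply_to')
    );
};
```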
### Key Support Classes
| Class | Purpose |
|---|---|
| `gasConfig` | XML config loader/merger |
| `gasResourceManager` | Connection pooling, resource lifecycle |
| `gacErrorLogger` | Centralized logging |
| `gacBrokerClient` | AMQP publish/consume |
| `gacBrokerHelper` | Queue utilities |
| `gacUsers` | User CRUD, authentication, password hashing (ARGON2I) |
| `gasCache` | Memcached wrapper |
| `gacMigrations` | MongoDB ↔ MySQL data migration |
| `gasStatic` | Shared utility methods |
### Autoloading
Uses a **custom autoloader** (`autoloader.php`) — not PSR-4/Composer autoloading. All class files use `.class.inc` extension.
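The repo's `autoloader.php` is not shown in this excerpt, so the following is only a guess at the shape such a loader takes given the `.class.inc` convention (the `classes/` search path is an assumption):

```php
<?php
// Hypothetical sketch of a .class.inc autoloader — the real autoloader.php may differ.
spl_autoload_register(function (string $class): void {
    $file = __DIR__ . '/classes/' . $class . '.class.inc';
    if (is_file($file)) {
        require_once $file;
    }
});
```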
### Common/Shared Definitions
- `common/constants.php` — application-wide constants
- `common/functions.php` — global utility functions
- `common/errorCatalog.php` — error codes and messages
- `common/dbCatalog.php` — database schema definitions
- `common/cacheMaps.php` — Memcached key mappings
## Database Infrastructure
- **MongoDB**: Default port 27017 (dev), sharding via mongos at 27019 (prod). Four databases (namaste, admin, segundo, users), each with separate auth credentials.
- **MySQL/MariaDB**: Default port 3306. Master-slave replication in production. Schema in `schema/pdo/`.
- **DynamoDB**: Optional. Configured in `namaste.xml` under the DDB section.
- **Memcached**: Required for session caching and performance.
## Web Utilities (Admin/Debug)
Available at `http://namaste/utilities/`:
- `gaAdmin.php` — main admin dashboard
- `cashpeak.php` — Memcache reader/viewer
- `dumper.php` — log and metrics viewer
- `migrateData.php` — interactive data migration GUI

Ddb/.gitignore vendored Normal file

@@ -0,0 +1,5 @@
DynamoDBLocal.jar
third_party_licenses
DynamoDBLocal_lib
LICENSE.txt
shared-local-instance.db

Ddb/README.txt Normal file

@@ -0,0 +1,29 @@
README
========
For an overview of DynamoDB Local please refer to the documentation at http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tools.DynamoDBLocal.html
Release Notes
-----------------------------
2017-01-24 (1.11.86)
* Implement waiters() method in LocalDynamoDBClient
* Update aws libs to 1.11.86
* Enable WARN logging for SQLite
2016-05-17_1.0
* Bug fix for Query validation preventing primary key attributes in query filter expressions
Running DynamoDB Local
---------------------------------------------------------------
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar [options]
For more information on available options, run with the -help option:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -help

Ddb/namaste-readme.txt Normal file

@@ -0,0 +1,70 @@
To start the Ddb for namaste, at the command line enter:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -inMemory
Note that this command will run Ddb in memory only - when you exit the local db instance, no data will persist.
To run the Ddb instance saving data locally to a file, use this command:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -dbPath /some/file/path
The help file is incorrect - there is no default - if you leave the dbPath option blank, it will throw an error.
usage: java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar
[-port <port-no.>] [-inMemory] [-delayTransientStatuses]
[-dbPath <path>][-sharedDb] [-cors <allow-list>]
-cors <arg> Enable CORS support for javascript against a
specific allow-list list the domains separated
by , use '*' for public access (default is
'*')
-dbPath <path> Specify the location of your database file.
Default is the current directory.
-delayTransientStatuses When specified, DynamoDB Local will introduce
delays to hold various transient table and
index statuses so that it simulates actual
service more closely. Currently works only for
CREATING and DELETING online index statuses.
-help Display DynamoDB Local usage and options.
-inMemory When specified, DynamoDB Local will run in
memory.
-optimizeDbBeforeStartup Optimize the underlying backing store database
tables before starting up the server
-port <port-no.> Specify a port number. Default is 8000
-sharedDb When specified, DynamoDB Local will use a
single database instead of separate databases
for each credential and region. As a result,
all clients will interact with the same set of
tables, regardless of their region and
credential configuration. (Useful for
interacting with Local through the JS Shell in
addition to other SDKs)
You will also need to configure your AWS credentials, which is easily done using the "aws configure" command:
$ aws configure
AWS Access Key ID [None]: YOUR-AWS-ACCESS-KEY
AWS Secret Access Key [None]: YOUR-AWS-SECRET-KEY
Default region name [None]: us-west-2
Default output format [None]:
Start the local instance of the DynamoDB:
$ java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -dbPath /YOUR/FILE/PATH
Test the connection by listing the existing tables in the database:
$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
"TableNames": []
}
Note that aws configure creates a directory in $HOME called ".aws/" containing two files:
$ more config
[default]
region = us-west-2
$ more credentials
[default]
aws_secret_access_key = gVADzw1ZEhl0ie1/ktMW+jz/pPKpdrd7Hr+6Tt0w
aws_access_key_id = AKIAIGBWL4HXBOOFXH5A

Dockerfile Normal file

@@ -0,0 +1,110 @@
# docker build . --tag=givingassistant/namaste:master
# FROM givingassistant/base:latest
FROM ubuntu:18.04
# install PHP and required packages, config
RUN adduser --system --no-create-home --group app && \
adduser app www-data && \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
apt-get upgrade -y && \
apt-get install -y software-properties-common \
build-essential \
locales \
&& \
localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 && \
add-apt-repository ppa:ondrej/php && \
apt-get update && \
apt-get install -y php7.2 \
php7.2-common \
php7.2-bcmath \
php7.2-cli \
php7.2-curl \
php7.2-dev \
php7.2-gd \
php7.2-json \
php7.2-mbstring \
php7.2-mysql \
php7.2-opcache \
php7.2-readline \
php7.2-xml \
php7.2-memcached \
# php7.3-mongodb \
php-pear \
autoconf \
g++ \
make \
libcurl4-openssl-dev \
pkg-config \
libsasl2-dev \
libpcre3-dev \
openssl \
libssl-dev \
openssh-server \
wget \
rsync \
git \
zip \
apache2 \
mongodb \
mariadb-client \
iputils-ping \
dnsutils \
vim && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN echo "America/Los_Angeles" > /etc/timezone && \
ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime
# TODO: should we configure tzdata instead of writing directly into /etc/timezone?
ENV LANG en_US.utf8
# This needs to be in a different run command for some reason
# otherwise we are getting log directory does not exist error.
# install mongodb
RUN pecl install mongodb && \
pecl clear-cache
# PHP configuration
ADD ./deployment/phpconf.ini /etc/php/7.2/cli/conf.d/90-givva.ini
# install app requirements
RUN mkdir -p /home/app/lib
ADD lib/composer.json /home/app/lib/composer.json
RUN wget https://getcomposer.org/composer.phar && \
chmod +x composer.phar && \
mv composer.phar /usr/local/bin/composer && \
cd /home/app/lib && \
/usr/local/bin/composer update -vv --prefer-dist
# add apache config
ADD ./deployment/apache.conf /etc/apache2/sites-available/namaste.conf
# TODO should we add ssl files like we do on givingassistant/web ?
RUN a2ensite namaste && \
a2dissite 000-default && \
a2enmod ssl && \
a2enmod headers && \
a2enmod rewrite && \
a2enmod setenvif && \
a2enmod status
# Add run script
# ADD ./deployment/run_apache.sh /etc/service/httpd/run
# ADD ./deployment/run_namaste.sh /etc/my_init.d/02_namaste_start.sh
ADD ./deployment/run.sh /sbin/run.sh
# pull in source to user's home
ADD . /home/app
RUN mkdir -p /home/app/logs && \
mkdir -p /home/app/pids && \
mkdir -p /home/app/scripts/mongo && \
    chmod a+x /sbin/run.sh && \
chown app /home/app -R && \
chgrp www-data /home/app -R && \
chmod g+rwx /home/app -R ;
# Add ssh configuration
ADD ./deployment/sshd/sshd_config /etc/ssh/sshd_config
ADD ./deployment/namastessh/id_rsa.pub /root/.ssh/authorized_keys
RUN passwd -d root
WORKDIR /home/app
CMD ["/sbin/run.sh"]

Doxyfile Normal file

File diff suppressed because it is too large

README.md Normal file

@@ -0,0 +1,544 @@
namaste
=======
This is the namaste repository - a backend, AMQP-based, event-driven framework written in PHP 7.1.
Creation Date: Jun 7, 2017
Last Updated: Oct 20, 2020
Author: mike@givingassistant.org
Reference Link for MD Syntax:
* https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/user/markdown.md
Assumptions
-----------
* you've already installed git and configured the git globals
* you've pulled the source code from the Bitbucket repository
* you're installing on Ubuntu 18.04 LTS or a cloud-compatible release version
Installation Notes:
-------------------
Once you've pulled source code, and upgraded to PHP 7.1 (below), you'll need to make a few changes to the namaste environment. From the root directory in the Namaste source, enter the following commands:
* mkdir logs pids
* chmod 1777 logs pids
Some commands to run for setting up the Namaste environment:
* Install Xdebug 2.7.2 for PHP 7.0: https://xdebug.org/wizard.php
* do not forget to also configure this in PHPStorm when creating the repos!
* cd into the ./lib directory to install composer:
* curl -sS https://getcomposer.org/installer -o composer-setup.php
* validate the composer download:
* php -r "if (hash_file('SHA384', 'composer-setup.php') === '669656bab3166a7aff8a7506b8cb2d1c292f042046c5a994c43155c0be6190fa0355160742ab2e1c88d40d5be660b410') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
* Output: "Installer verified"
* Install composer:
* sudo php composer-setup.php --install-dir=/usr/local/bin --filename=composer
* on completion, composer is available globally. cd into the ./lib directory to install the packages needed for the project.
* move the composer-setup.php file into the ./lib directory
* run: php composer-setup.php -- this will create the composer.phar file in ./lib
* visit: https://packagist.org/packages/theseer/phpdox to ensure that the packages listed are version compatible and at the latest revs
* php composer.phar self-update -- updates composer package to the latest version.
* Note: composer.phar is NOT checked-in to source -- it is _your_ responsibility to keep it current!
* php composer.phar update -- installs libraries and dependencies into the lib directory
* Note: the vendor sub-directory, where composer stores its files, is not checked into source. Again, it is _your_ responsibility to ensure it is kept current.
* on completion, you should remove composer-setup.php
* Oracle Java 8
* Install the Oracle Java8 SDK (required for DynamoDB)
* instructions: http://tipsonubuntu.com/2016/07/31/install-oracle-java-8-9-ubuntu-16-04-linux-mint-18/
* Download and install AWS DynamoDB
* instructions: https://docs.aws.amazon.com/amazondynamodb/latest/gettingstartedguide/GettingStarted.Download.html
* note: the Ddb sdk is installed via composer
* Install memcached:
* apt install memcached php-memcached
PHP 7.4
-------
Don't forget to enable the Apache mod for PHP 7.4!
Here's the list of PHP packages I currently have installed:
```
mshallop@clydesdale:~$ apt list "*php*" | grep -i installed
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
libapache2-mod-php7.2/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
libapache2-mod-php7.4/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed,automatic]
php-apcu/bionic,now 5.1.18+4.0.11-1+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php-apcu-bc/bionic,now 1.0.5-1+ubuntu18.04.1+deb.sury.org+20191129 amd64 [installed,automatic]
php-common/bionic,bionic,now 2:72+ubuntu18.04.1+deb.sury.org+1 all [installed,automatic]
php-igbinary/bionic,now 3.1.0+2.0.8-2+ubuntu18.04.1+deb.sury.org+1 amd64 [installed,automatic]
php-memcached/bionic,now 3.1.4+2.2.0-1+ubuntu18.04.1+deb.sury.org+20191129 amd64 [installed]
php-mongodb/bionic,now 1.6.1-4+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php-msgpack/bionic,now 2.0.3+0.5.7-2+ubuntu18.04.1+deb.sury.org+20191129 amd64 [installed,automatic]
php-pear/bionic,bionic,now 1:1.10.8+submodules+notgz-1+ubuntu18.04.1+deb.sury.org+1 all [installed]
php-php-gettext/bionic,bionic,now 1.0.12-0.1 all [installed,automatic]
php-phpseclib/bionic,bionic,now 2.0.9-1 all [installed,automatic]
php-tcpdf/bionic,bionic,now 6.2.13+dfsg-1ubuntu1 all [installed,automatic]
php7.2/bionic,bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 all [installed]
php7.2-bcmath/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-bz2/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed,automatic]
php7.2-cli/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-common/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-curl/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-gd/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-gmp/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed,automatic]
php7.2-imap/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-json/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-mbstring/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-mysql/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-opcache/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-readline/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-xml/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.2-zip/bionic,now 7.2.27-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed,automatic]
php7.4/bionic,bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 all [installed]
php7.4-bcmath/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-bz2/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-cli/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed,automatic]
php7.4-common/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-curl/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-dev/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-gd/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-gmp/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-intl/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-json/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed,automatic]
php7.4-mbstring/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-mysql/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-opcache/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed,automatic]
php7.4-readline/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed,automatic]
php7.4-xml/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
php7.4-zip/bionic,now 7.4.2-6+ubuntu18.04.1+deb.sury.org+1 amd64 [installed]
phpmyadmin/bionic,bionic,now 4:4.6.6-5 all [installed]
pkg-php-tools/bionic,bionic,now 1.35ubuntu1 all [installed,automatic]
```
To install these tools in a single command:
```$ apt install php-memcached php-mongodb php-pear php7.4-bcmath php7.4-bz2 php7.4-curl php7.4-imap uw-mailutils php7.4-mbstring php7.4-mysql php-apcu php7.4-gd libgd-tools php7.4-xml php7.4-zip php7.4-dev php7.4-intl phpmyadmin```
Finally, install the pecl libraries for mongoDB and the XDebugger:
```$ pecl install mongodb xdebug```
RabbitMQ
--------
Install RabbitMQ:
RabbitMQ can fail to install based on your current OS version. As of this writing, we're using the following versions:
* RabbitMQ 3.7.7
* Erlang 20.2.2
You can go to this site: https://www.rabbitmq.com/download.html
and choose the appropriate package to download (Linux/Debian for Ubuntu) and then use your software installer to install the deb package.
This will install the compatible versions of both RabbitMQ and Erlang.
Or, you can install manually...
```text
$ apt install rabbitmq-server
```
Once installed, next, install the RMQ management console:
```text
$ rabbitmq-plugins enable rabbitmq_management
```
Test/confirm that you can access the console via: http://localhost:15672
Generally speaking, I've found user-management via the gui to be pretty janky - you'll be better off using the command line:
```text
$ rabbitmqctl add_user namaste 4YyrWuKH
```
This creates the namaste user and sets the password.
```text
$ rabbitmqctl set_user_tags namaste administrator
```
This sets the namaste account to administrator level.
```text
$ rabbitmqctl add_vhost mdev
```
Adds the vhost "mdev" to the configuration. (ymmv)
```text
$ rabbitmqctl set_permissions -p mdev namaste ".*" ".*" ".*"
```
This command grants the namaste user full access to the mdev vhost
Helpful Links:
* https://www.rabbitmq.com/man/rabbitmqctl.1.man.html
* https://github.com/rabbitmq/rabbitmq-server/blob/stable/docs/rabbitmq.config.example
Upgrade Erlang
--------------
**You do not need to do this step if you downloaded and installed from the deb package!**
The default version of Erlang, installed with RabbitMQ, under Ubuntu, is sub-optimal for node.js clients. If you're using node.js to connect to RabbitMQ, you should be at Erlang v. 19.x or later.
To upgrade Erlang, follow these steps:
* lsb_release -c
returns the codename of your installation -- for Ubuntu 16.04 this should come back as "xenial" (for 18.04, "bionic").
Add the following line to /etc/apt/sources.list:
* deb http://packages.erlang-solutions.com/ubuntu xenial contrib
(Change "xenial" to the appropriate codename)
Execute the following commands:
* wget http://packages.erlang-solutions.com/ubuntu/erlang_solutions.asc
* sudo apt-key add erlang_solutions.asc
Then, update the system:
* apt update
* apt upgrade
* apt dist-upgrade
Restart RabbitMQ:
* service rabbitmq-server restart
And check the console page for the server - the upper-right corner shows the versions of RabbitMQ and Erlang. Your version of Erlang should now be at 19.x or later.
(As of this writing, the Erlang version is 20.0 for Ubuntu 16.04)
php.ini tweaks:
---------------
For most resources (mongoDB, RabbitMQ, etc.), I like to re-route the logging output to a single directory. Normally, I use /home/logs, however ymmv.
```
error_log = /home/logs/php_errors.log
```
Local Variables:
```
[xdebug]
xdebug.remote_enable=1
xdebug.remote_port=9000
xdebug.remote_autostart=0
xdebug.var_display_max_depth=10
xdebug.cli_color=1
```
mongodb
-------
For mongo, you will need to install the pecl libraries containing the mongodb drivers for PHP 7.1.
```
pecl install mongodb
```
Add the following line to the file: /etc/php/7.1/cli/php.ini
```
[mongodb]
extension=mongodb.so
```
For Apache2 support, you will need to install the mongodb drivers and restart Apache:
```bash
apt install php-mongodb
...
systemctl restart apache2
```
Apache2 support is necessary for the Namaste browser utilities.
**User RBAC (Role-Based Access Controls)**
You should create a *minimum* of two user accounts for Namaste mongo when using version 3.6 or later.
The first user should be a root user.
```text
> use admin
> db.createUser({ user: "myRootUser", pwd: "myRootPassword", roles : [ { role : "root", db: "admin" } ] } )
```
Once the user is created, then you can test your authentication (while still using the admin db):
```text
> db.auth("myRootUser", "myRootPassword")
1
```
A value of 1 will be returned if the information is correct. If not, a 0 is returned.
Next, create the user accounts. As of this writing, there are two Namaste mongo collections, one on the admin service,
and one in Namaste proper. These are discrete databases and you have the option/ability to create different users for
both.
Both are defined in the XML configuration
```xml
<user>gaOwner</user>
<password>zPd7B6^Y</password>
<useAuth>1</useAuth>
<authSource>givva_namaste</authSource>
```
Note that the authSource tag uses the name of the Namaste database sans the environment. (Defined in id -> envName)
This will be prepended to the database name prior to making a resource connection request.
To create users for the Namaste and Admin databases:
```text
> use development_givva_namaste
> db.createUser( { user : "gaOwner", pwd: "zPd7B6^Y", roles: [ { role : "dbOwner", db: "development_givva_namaste" } ] } )
> use development_givva_namaste_admin
> db.createUser( { user : "gaOwner", pwd: "zPd7B6^Y", roles: [ { role : "dbOwner", db: "development_givva_namaste_admin" } ] } )
```
MariaDB
-------
As root:
```
apt install mariadb-server mariadb-client
mysql_secure_installation
```
Running mysql_secure_installation will prompt you for the root password - if this is a fresh install, you should just press enter.
Create the gaAdmin account -- if you change the name of the account user, make sure you update the env.xml configuration file also!
```mysql
CREATE USER 'gaAdmin'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'gaAdmin'@'%';
FLUSH PRIVILEGES;
```
The database name is derived from the XML file setting under application --> id --> envName which is prepended to the string "givva".
In order to execute Namaste db creation scripts, the gaAdmin user account must have all privileges, globally, minus the GRANT option.
Once the gaAdmin (or whatever you've named your user) has been created in mySQL, create the databases:
```mysql
CREATE DATABASE development_givva_namaste;
CREATE DATABASE development_givva_namaste_warehouse;
```
Note that your database name may be different depending on the environment to which you're deploying Namaste.
The supported environments are [development | qa | staging | production].
This naming convention allows us to run databases for different, non-production, environments on the same database instance.
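The derivation can be checked mechanically; `mk_db_name` below is a throwaway helper mirroring the stated rule, not part of Namaste:

```shell
# Environment name (from id -> envName) prepended to "givva_namaste"
mk_db_name() {
  echo "${1}_givva_namaste"
}

mk_db_name development   # development_givva_namaste
mk_db_name production    # production_givva_namaste
```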
**Don't forget to run the ./utilities/mysqlConfig.php program to create your tables, views, and database-objects!**
### Tips...
If the database fails to start, and you've changed the location of your datafiles to the */home* directory, check status using the following command:
```text
$ systemctl status mariadb
```
If, within the output of that command, you see the following:
```text
... [Warning] Can't create test file /home/data/mysql/master.lower-test
```
Then you'll need to edit the file located in: */lib/systemd/system/mariadb.service* and change the following line:
```text
ProtectHome=true
```
to this:
```text
ProtectHome=false
```
Next, reload the system configuration files with this command:
```text
$ systemctl daemon-reload
$ systemctl restart mariadb
```
This will allow you to start-up mariadb using the */home* directory for data storage.
mySQL Timeouts
--------------
When there is no broker activity requiring interaction with mySQL as a service, the connection in Namaste to the mysql resource may eventually time-out.
When this happens, you will need to restart the Namaste brokers (framework).
The timeout interval can be mitigated by increasing the timeout value in mySQL/MariaDB to a longer interval value. You can increase the timeout value by adding the following lines to the mariaDB configuration file:
```bash
# increase timeout values for broker connections
wait_timeout = 86400
interactive_timeout = 86400
```
Note that you will have to make these changes for every node in the mariaDB cluster.
Make sure you restart the mariaDB service in order for these changes to take effect.
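To confirm the new values took effect after the restart, query the server (standard MariaDB statements, nothing Namaste-specific):

```mysql
SHOW VARIABLES LIKE 'wait_timeout';        -- expect 86400
SHOW VARIABLES LIKE 'interactive_timeout'; -- expect 86400
```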
Apache Virtual Host for Namaste:
--------------------------------
You should edit and add this v-host configuration to your Apache configuration (/etc/apache2/sites-available/namaste.conf) so that you can access Namaste tools and utilities over HTTP.
```
# Namaste Virtual Host Configuration
<VirtualHost *:80>
ServerName namaste
ServerAlias namaste
ServerAdmin mike@givingassistant.org
DocumentRoot /home/mshallop/code/php/namaste
DirectoryIndex index.php
<Directory /home/mshallop/code/php/namaste>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Satisfy Any
Require all granted
</Directory>
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
CustomLog /home/logs/namaste_log common
ErrorLog /home/logs/namaste_error.log
CustomLog /home/logs/namaste_access.log combined
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
ServerSignature Off
</VirtualHost>
```
Local Tools
-----------
Namaste comes with a couple of brilliantly-written apps for debugging. When you've installed and aliased Namaste locally as a HTTP virtual host, you can access these tools from your host alias (in this case: namaste):
* cashpeak: memcache reader
* http://namaste/utilities/cashpeak.php
* dumper: reads namaste log and metrics log
* http://namaste/utilities/dumper.php
PHPUnit Testing
---------------
If you're using PHPStorm, you will want to make a configuration change for PHP Unit testing.
Under "Languages and Frameworks" --> PHP --> Test Frameworks
Select the "Path to phpunit.phar" option and download phpunit.phar -- for my installation, I installed the phar file into:
./lib/phpunit *(not checked-in to git)*
And was able to successfully run unit tests thereafter.
Other Recommended Tools and Packages:
-------------------------------------
GitKraken - The legendary GIT GUI client for all platforms:
* https://www.gitkraken.com/
Robomongo - (Linux) Mongo GUI Tool:
* https://robomongo.org/download
Adminer -- Database management in a single PHP File:
* https://www.adminer.org/
Version 2.0 Features
====================
DynamoDB
--------
For this framework, we're using DynamoDB - AWS's NoSQL storage service.
Review the namaste-readme.txt file in the Ddb directory for help on connecting after you've read the online documentation on AWS.
Key Points:
* DynamoDB is a NoSQL database, and is schemaless, which means that, other than the primary key attributes, you do not need to define any attributes or data types at table creation time.
* Your applications must encode binary values in base64-encoded format before sending them to DynamoDB. Upon receipt of these values, DynamoDB decodes the data into an unsigned byte array and uses that as the length of the binary attribute.
* When your application writes data to a DynamoDB table and receives an HTTP 200 response (OK), all copies of the data are updated. The data will eventually be consistent across all storage locations, usually within one second or less. DynamoDB uses eventually consistent reads, unless you specify otherwise.
* When you create a table, you specify how much provisioned throughput capacity you want to reserve for reads and writes. DynamoDB will reserve the necessary resources to meet your throughput needs while ensuring consistent, low-latency performance. You can also change your provisioned throughput settings, increasing or decreasing capacity as needed.
* DynamoDB is **not** supported under the AWS Resource API - meaning that you cannot use the object-oriented abstraction layer for coding.
Helpful Links:
* https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.API.html
* https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html
* https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BestPractices.html
* https://docs.aws.amazon.com/amazondynamodb/latest/gettingstartedguide/GettingStarted.PHP.01.html
* https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
* https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_AttributeValue.html
* https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html#Query.Pagination
* https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html
Command Line Examples:
If you want to run aws commands from the command line, first run:
* aws configure
and follow the prompts to save your access key ID and secret access key to a local .ini file.
list existing tables:
* aws dynamodb list-tables --endpoint-url http://localhost:8000
delete an existing table:
* aws dynamodb delete-table --table-name development_gaLogs --endpoint-url http://localhost:8000
list a table's attributes:
* aws dynamodb describe-table --table-name development_gaLogs_log --endpoint-url http://localhost:8000
dump a table using scan:
* aws dynamodb scan --table-name development_gaLogs_log --endpoint-url http://localhost:8000
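The Query pagination link above boils down to one loop: re-issue the query with ExclusiveStartKey set to the previous page's LastEvaluatedKey until no key comes back. A minimal sketch with a stub standing in for DynamoDbClient::query() (the stub and its page contents are hypothetical):

```php
<?php
// Stub standing in for DynamoDbClient::query(): serves two pages, then stops.
function fetchPage(?string $exclusiveStartKey): array
{
    if ($exclusiveStartKey === null) {
        return ['Items' => [['id' => 1], ['id' => 2]], 'LastEvaluatedKey' => 'p2'];
    }
    return ['Items' => [['id' => 3]]]; // final page carries no LastEvaluatedKey
}

// Standard DynamoDB pagination loop: keep issuing the query with
// ExclusiveStartKey set to the previous page's LastEvaluatedKey.
$items = [];
$startKey = null;
do {
    $page = fetchPage($startKey);
    $items = array_merge($items, $page['Items']);
    $startKey = $page['LastEvaluatedKey'] ?? null;
} while ($startKey !== null);

echo count($items) . " items fetched\n";
```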

40
ReleaseNotes.md Normal file
View File

@@ -0,0 +1,40 @@
Release Notes:
==============
**DB-168**: This release introduces AT(1) support via the *admin* service for user sessions.

---

**October 20, 2020
Author: mike@givingassistant.org
Release: 1.0**
* Release Notes!
All namaste releases will, on noteworthy changes or additions, update these release notes so that
changes can be tracked back to a JIRA ticket by the impacted features.
* New broker!
Introducing the session broker, a fire-n-forget broker for managing user sessions
* New templates!
Introducing the Failed-Session and WBList classes: Failed-Session tracks sessions that failed
to register properly. The WBList class controls user white- and black-list entries for registration and email.
* New class!
Introducing the User class: This class contains all the business-logic for user management.
* XML Update:
Removed isLocal declarations at the db/service level, replaced with top-level declarations by
service and whether or not the service is active. Also added new configuration params to the
security section for white/black listing and declared the hashing algorithm for passwords.
* gacAdminInBroker
Has been renamed to *gacWorkQueueClient* -- this is the broker client for "fire-n-forget" brokers which,
previously, was limited to the adminBrokerIn broker. With the addition of the session broker on
tercero, this class was expanded to handle comms with both brokers.
* New API Events!
Four new API events (broker events) have been introduced:
For the user broker, the events BROKER_REQUEST_VALIDATE_EMAIL and BROKER_REQUEST_REGISTER_ACCOUNT, for
the adminOutBroker we have: BROKER_REQUEST_NEW_SESSION, and the session broker event: BROKER_REQUEST_EXPIRE_SESSION.
Additionally, I added a create event to the adminIn broker for creating failed-session event records.
* Improved error handling!
Added a new general function: handleExceptionMessaging() to provide consistent error messaging
and to reduce code-bloat.
* Better Support for Dynamic Primary Keys!
Declaring a key, other than 'token', in a data template allows dynamic queries to adjust to the declared column.
NOTE: data classes that require/support auditing must use DB_TOKEN as the primary key.
* New Function: grabWidget()!
This new function handles factory-class instantiation and error checking, and simply returns the factory widget back to
the requesting client, greatly reducing code footprint.
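The grabWidget() behavior described above can be sketched in isolation — the DemoWidget class and error conventions here are hypothetical stand-ins, not the framework's actual factory:

```php
<?php
// Hypothetical widget class for the sketch; the real factory resolves
// classes from the request meta-data template.
class DemoWidget
{
    public bool $status = true;
}

// Minimal grabWidget-style helper: instantiate, verify, and either return
// the widget or append to the caller's error list and return null.
function grabWidgetSketch(string $class, array &$errorList): ?object
{
    if (!class_exists($class)) {
        $errorList[] = 'Unknown widget class: ' . $class;
        return null;
    }
    $widget = new $class();
    if (property_exists($widget, 'status') && !$widget->status) {
        $errorList[] = 'Widget failed to initialize: ' . $class;
        return null;
    }
    return $widget;
}

$errors = [];
$ok  = grabWidgetSketch('DemoWidget', $errors);   // returns a DemoWidget
$bad = grabWidgetSketch('NoSuchWidget', $errors); // returns null, logs an error
echo get_class($ok) . ' / ' . count($errors) . " error(s)\n";
```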

225
autoloader.php Normal file
View File

@@ -0,0 +1,225 @@
<?php
/**
* This autoloader takes care of class loading dynamically as well as maintaining scope
* and registration. All you have to do to use it is use the java style _import
* EX: _import('path.to.package.*');
* or the path based syntax
* EX: Autoloader::register_directory('./path/to/package/');
* multiple copies of an identically named class can exist, but only one can be
* loaded during execution, so while this can be helpful when dealing with libraries
* which have colliding namespaces, it is no panacea.
***********************************************************************************
* KEY FEATURES *
****************
* 1) auto-autoload (other than setting your class folders, you do nothing)
* 2) namespace compartmentalization (use the same class name over and over)
* 3) shorter class/folder hierarchy traversals (get to the right class faster)
* 4) allows inline catching of class load errors (no un-catchable fatals)
* 5) maximally optimized awesomeness factor (not really, but it sounds nice)
***********************************************************************************
*
* HISTORY:
* ========
* 06-15-17 mks Initial check-in
* 01-17-20 mks DB-150: PHP7.4 refactor, also got rid of the native logging which didn't work and replaced it
* with consoleLog, exception wrappers on all the things, and general code clean-up
*
*/
/** @noinspection PhpUnused */
/**
* _import()
*
* validates the input parameter.
*
* registers the class directory with the application so that future class
* instantiations are autoload'd from the registered directory.
*
* @param string $namespace - dot-separated package path; class files must follow the naming convention: *.class.inc
*/
function _import($namespace)
{
$ns_parts = explode('.', $namespace);
if(end($ns_parts) == '*'){
unset($ns_parts[key($ns_parts)]);
$directory = implode('/', $ns_parts);
Autoloader::register_directory($directory, 1);
}
}
class Autoloader {
private static array $registry = array();
private static array $file_registry = array();
private static array $file_types = array('.class.inc');
public static string $base_dir = '.';
private static bool $initialized = false;
public static bool $verbose = false;
private static string $res = 'AUTO: ';
private static bool $debug = false; // change this to true for more output verbosity
// the following mode is an experimental setting to shoehorn dummy classes
// that can mask multiple collided classes and resolve the linkage by the
// calling context, I have personal doubts this will ever be suitable for
// anything but tests, unless you have a very odd need to sandbox namespaces.
// it's way inefficient as it is an attempt to autoload *every* instantiation
// the idea is to define a new class that injects an instance of the fully path'd
// class in its own place, deletes itself and triggers removal of its own class
// definition... sexy, eh?
private static bool $class_path_verify_mode = false; //Let me reiterate: this doesn't work yet, DO NOT USE
public static function initialize() :void
{
spl_autoload_register(array('Autoloader', 'load'));
Autoloader::$base_dir = getcwd();
Autoloader::$initialized = true;
}
public static function register_directory(string $directory, int $depth=0):void
{
if (!Autoloader::$initialized) {
try {
Autoloader::initialize();
} catch (TypeError $t) {
if (static::$debug)
consoleLog(static::$res, CON_SYSTEM, sprintf(INFO_LOC, basename(__METHOD__), __FILE__));
consoleLog(static::$res, CON_ERROR, $t->getMessage());
exit();
}
}
//examine the stack to get the calling file
$stacktrace = debug_backtrace();
if (array_key_exists($depth, $stacktrace) && array_key_exists('file', $stacktrace[$depth])) {
$calling_file = $stacktrace[$depth]['file'];
if (static::$debug) {
consoleLog(static::$res, CON_SYSTEM, sprintf(INFO_LOC, basename(__METHOD__), __FILE__));
consoleLog(static::$res, CON_SYSTEM, 'Registering ' . realpath(Autoloader::$base_dir . '/' . $directory) . ' to ' . $calling_file);
}
//register the directory to the calling file
Autoloader::$file_registry[$calling_file][] = $directory;
}
//register the directory globally
if(!in_array($directory, Autoloader::$registry)) Autoloader::$registry[] = $directory;
}
public static function find_class_definition(string $directory, string $class_name) : ?string
{
foreach(Autoloader::$file_types as $type) {
//$class_path = realpath(Autoloader::$base_dir.'/'.$directory.'/'.$class_name.$type);
$class_path = realpath($directory . '/' . $class_name . $type);
if (static::$debug) {
consoleLog(static::$res, CON_SYSTEM, sprintf(INFO_LOC, basename(__METHOD__), __LINE__));
consoleLog(static::$res, CON_SYSTEM, 'Checking for class ' . $class_name . ' in directory: ' . $directory);
consoleLog(static::$res, CON_SYSTEM, 'Class Path: (' . $class_path . ')');
}
if (file_exists($class_path)) {
if (static::$debug) {
consoleLog(static::$res, CON_SYSTEM, sprintf(INFO_LOC, basename(__METHOD__), __LINE__));
consoleLog(static::$res, CON_SYSTEM, 'Found class ' . $class_name);
}
return $class_path;
}
}
return null;
}
public static function load(string $class_name, int $depth = 1) :void
{
if(!Autoloader::$initialized) Autoloader::initialize();
//get the context file we called from
$stacktrace = debug_backtrace();
$calling_file = $stacktrace[$depth]['file'] ?? null;
//attempt to load from the local context
$checked_dirs = array();
if(array_key_exists($calling_file, Autoloader::$file_registry) && is_array(Autoloader::$file_registry[$calling_file])) {
foreach(Autoloader::$file_registry[$calling_file] as $directory) {
if (static::$debug) {
consoleLog(static::$res, CON_SYSTEM, sprintf(INFO_LOC, basename(__METHOD__), __FILE__));
consoleLog(static::$res, CON_SYSTEM, 'Local Seek [' . $directory . ']');
}
try {
$definition = Autoloader::find_class_definition($directory, $class_name);
} catch (TypeError $t) {
if (static::$debug)
consoleLog(static::$res, CON_ERROR, sprintf(INFO_LOC, basename(__METHOD__), __FILE__));
consoleLog(static::$res, CON_ERROR, $t->getMessage());
exit();
}
if (!is_null($definition)) {
try {
/** @noinspection PhpIncludeInspection */
require_once($definition);
} catch (Throwable $t) {
if (static::$debug)
consoleLog(static::$res, CON_SYSTEM, sprintf(INFO_LOC, basename(__METHOD__), __FILE__));
consoleLog(static::$res, CON_ERROR, $t->getMessage());
exit();
}
return;
}
$checked_dirs[] = $directory;
}
}
//TODO: chain scopes so you have proper scope inheritance (not just local to the calling file)
// foreach depth we trim one link off the stack, then we walk through the stack. looking for scope
// attempt to load from the global context
foreach(Autoloader::$registry as $directory) {
if (static::$debug)
consoleLog(static::$res, CON_SYSTEM,'Global Seek ['.$directory.']');
try {
$definition = Autoloader::find_class_definition($directory, $class_name);
if (!is_null($definition)) {
/** @noinspection PhpIncludeInspection */
require_once($definition);
return;
}
} catch (TypeError $t) {
if (static::$debug)
consoleLog(static::$res, CON_SYSTEM, sprintf(INFO_LOC, basename(__METHOD__), __FILE__));
consoleLog(static::$res, CON_ERROR, $t->getMessage());
return;
}
}
// uh oh, we can't find the class, we're going to have to return a clean crash-dummy, so we can catch the error
consoleLog(static::$res, CON_ERROR, 'Could not find the class: ' . $class_name . ' Creating a dummy.');
try {
Autoloader::load_text(Autoloader::create_crash_dummy($class_name), $class_name);
} catch (TypeError $t) {
if (static::$debug)
consoleLog(static::$res, CON_SYSTEM, sprintf(INFO_LOC, basename(__METHOD__), __FILE__));
consoleLog(static::$res, CON_ERROR, $t->getMessage());
}
}
public static function load_text(string $class_text, ?string $class=null, ?string $namespace=null) :void
{
try {
eval('?>' . $class_text . '<?php ');
//require_once($class_text);
consoleLog(static::$res, CON_SYSTEM,'Loaded class definition ['.$class.']');
if (Autoloader::$class_path_verify_mode && $class != null && $namespace != null) {
$class_with_package_text = preg_replace('~ '.$class.'~', ' '.$namespace.'_'.$class, $class_text);
eval('?>'.$class_with_package_text.'<?php ');
if (static::$debug)
consoleLog(static::$res, CON_ERROR, 'Loaded package specific class definition ['.$class_with_package_text.']');
}
} catch (Throwable $t) {
if (static::$debug) {
consoleLog(static::$res, CON_SYSTEM, sprintf(INFO_LOC, basename(__METHOD__), __FILE__));
consoleLog(static::$res, CON_ERROR, $t->getMessage());
consoleLog(static::$res, CON_ERROR, $class_text);
}
}
}
public static function create_crash_dummy(string $class_name): string {
return '<?php
class '.$class_name.' {
function __construct($a=null, $b=null, $c=null, $d=null, $e=null, $f=null, $g=null, $h=null, $i=null, $j=null, $k=null, $l=null, $m=null, $n=null, $o=null, $p=null) {
throw new Exception("AUTOLOADER: Class '.$class_name.' not found!");
}
}
?>';
}
}
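The crash-dummy fallback above (a throwing stand-in class injected when resolution fails, so the error is catchable instead of fatal) can be exercised on its own — a minimal sketch of the same pattern, independent of the framework:

```php
<?php
// Generate a stand-in class definition, as create_crash_dummy() does,
// so a missing class produces a catchable exception instead of a fatal.
function makeCrashDummy(string $className): string
{
    return 'class ' . $className . ' {
        public function __construct() {
            throw new Exception("AUTOLOADER: Class ' . $className . ' not found!");
        }
    }';
}

// Load the dummy definition the same way load_text() does.
eval(makeCrashDummy('MissingService'));

try {
    new MissingService(); // the dummy constructor throws immediately
} catch (Exception $e) {
    echo $e->getMessage() . "\n";
}
```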

520
brokers/adminBrokerIn.php Normal file
View File

@@ -0,0 +1,520 @@
<?php
/**
* adminBrokerIn.php -- the admin-in broker client
*
* this broker is part of the administrative-services suite - and is intended to "live" on the admin instance.
*
* the primary purpose of this broker is to accept incoming system, audit or journaling events. This is a direct
* broker (fire-n-forget) that does not publish a response to the event.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 06-15-17 mks original coding
* 08-16-17 mks CORE-500: cleaned-up some IDE warnings
* 08-21-17 mks CORE-500: completed coding for systemEvents->brokerEvents tracking
* 02-06-18 mks _INF-139: PHP 7.2 exception handling
* 05-31-18 mks CORE-1011: update for new XML broker services configuration
* 10-17-18 mks DB-72: audit event coding
* 02-11-19 mks DB-100: offloaded a chunk of broker-event code into core for smaller footprint
* 09-19-19 mks DB-136: better exception handling, moved log/metric code to respective brokers
* fixed console log message where auditIn event always generating error message on success
* 07-28-20 mks DB-156: broker self-registration installed
* 09-17-20 mks DB-168: updated service registration, updated exception handling to current standard
*
*/
//use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Exception\AMQPChannelClosedException;
use PhpAmqpLib\Exception\AMQPRuntimeException;
use PhpAmqpLib\Exception\AMQPTimeoutException;
//use PhpAmqpLib\Message\AMQPMessage;
//use PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$_REDIRECT = true;
$topDir = dirname(__DIR__);
// load the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$res = 'ADMI: ';
// event management for children
$adminServiceConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_ADMIN];
$numberChildren = $adminServiceConfig[CONFIG_BROKER_INSTANCES][CONFIG_ADMIN_BROKER_IN];
$requestsPerInstance = (empty($adminServiceConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $adminServiceConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2??
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
$parentLog = new gacErrorLogger();
$errors = null;
$file = basename(__FILE__, DOT . FILE_TYPE_PHP);
$service = ENV_ADMIN;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
//declare( ticks = 1);
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
$numberChildren--;
while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
@pcntl_wexitstatus($status);
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $myRequestsPerInstance, $startingMemory, $groot;
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(0, 9);
$startingMemory = memory_get_usage(true);
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
$childGUID = rtrim($res, COLON) . UDASH . guid();
try {
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
$queue = $queueTag . BROKER_QUEUE_AI;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_ADMIN);
if (is_null($brokerConnection)) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
$childLog->fatal($hdr . ERROR_RESOURCE_404 . RESOURCE_ADMIN . COLON . BROKER_QUEUE_AI);
consoleLog($res, CON_ERROR,$hdr . ERROR_RESOURCE_404 . RESOURCE_ADMIN . COLON . BROKER_QUEUE_AI);
exit(1); // shell-script exit value for fail
}
$brokerChannel = $brokerConnection->channel();
// $brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
$brokerChannel->queue_declare($queue);
} catch (AMQPRuntimeException | AMQPTimeoutException | Throwable | TypeError $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(1);
}
// register the broker child start-up as a system-event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request) {
$startTime = gasStatic::doingTime();
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service;
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var PhpAmqpLib\Connection\AMQPStreamConnection $brokerConnection */
global $brokerConnection;
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
if (gasConfig::$settings[CONFIG_DEBUG]) {
consoleLog($res, CON_DEBUG, 'Child GUID: ' . $childGUID);
consoleLog($res,CON_DEBUG, 'root GUID: ' . $groot);
}
$requestCounter++;
$returnData = null;
$eventTimer = false;
$request = null;
$eventSuccess = false;
$conMsg = '';
$errorList = array();
$thisPid = getmypid();
$eventGUID = guid();
$ogGUID = '';
// set-up the call-back logger
$callBackLog = new gacErrorLogger($eventGUID, false);
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$event = BROKER_QUEUE_AI . '(' . ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorList)) {
for ($index = 0, $last = count($errorList); $index < $last; $index++) {
$conMsg .= $errorList[$index] . $eos;
$callBackLog->error($errorList[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$event = BROKER_QUEUE_AI . '(' . ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event = BROKER_QUEUE_AI . '(' . $request[BROKER_REQUEST] . ')';
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
// $_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$eventSuccess = true;
break;
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_AI;
$eventSuccess = true;
break;
case BROKER_REQUEST_CREATE :
$eventTimer = true;
$conMsg = '';
// validate that we have a data-template in meta
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
} elseif (!isset($request[BROKER_META_DATA][META_CLIENT]) or $request[BROKER_META_DATA][META_CLIENT] != CLIENT_SYSTEM) {
$conMsg = ERROR_BROKER_CLIENT_NOT_AUTH;
} else {
$bh = new gacBrokerHelper();
$eventSuccess = $bh->create($request, $aryRetData, $conMsg);
unset($bh);
}
break;
case BROKER_REQUEST_UPDATE :
$eventTimer = true;
$conMsg = '';
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
} elseif (!isset($request[BROKER_META_DATA][META_CLIENT]) or $request[BROKER_META_DATA][META_CLIENT] != CLIENT_SYSTEM) {
$conMsg = ERROR_BROKER_CLIENT_NOT_AUTH;
} else {
$bh = new gacBrokerHelper();
$eventSuccess = $bh->update($request, $aryRetData, $conMsg);
unset($bh);
}
break;
case BROKER_REQUEST_ADMIN_BROKER_EVENT:
$eventTimer = true; // set to true if you want to log the processing-time for an event
if (!isset($request[BROKER_DATA]) or empty($request[BROKER_DATA])) {
$msg = ERROR_DATA_MISSING_ARRAY . STRING_DATA;
$conMsg = $msg;
$callBackLog->data($msg);
} else {
if (isset($request[BROKER_META_DATA][META_EVENT_GUID])) {
$ogGUID = $request[BROKER_META_DATA][META_EVENT_GUID];
}
// disable auditing so we don't get into an infinite loop creating a new system event record
$metaCopy = $request[BROKER_META_DATA];
$metaCopy[META_AUDIT_EVENT] = 1;
$tmpObj = new gacSystemEvents($metaCopy);
if ($tmpObj->status) {
$tmpObj->_createRecord($request[BROKER_DATA]);
if ($tmpObj->status) {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_CREATE;
$eventSuccess = true;
} else {
$conMsg = FAIL_EVENT . BROKER_REQUEST_CREATE;
}
}
if (is_object($tmpObj)) $tmpObj->__destruct();
unset($tmpObj);
}
break;
// DB-72: Audit Event
case BROKER_REQUEST_ADMIN_AUDIT_CREATE :
$eventTimer = true;
$journalData = [];
$errorList = [];
$haveJournal = false;
if (!isset($request[BROKER_DATA]) or empty($request[BROKER_DATA])) {
$conMsg = ERROR_DATA_MISSING_ARRAY . STRING_DATA;
$callBackLog->data($conMsg);
} elseif (!isset($request[BROKER_DATA][SYSTEM_EVENT_DATA]) or empty($request[BROKER_DATA][SYSTEM_EVENT_DATA])) {
$conMsg = ERROR_DATA_MISSING_ARRAY . SYSTEM_EVENT_DATA;
$callBackLog->data($conMsg);
} else {
try {
// instantiate a system event object
$objSysEv = new gacSystemEvents($request[BROKER_META_DATA]);
if (!$objSysEv->status) {
$conMsg = ERROR_TEMPLATE_INSTANTIATE . TEMPLATE_CLASS_SYS_EVENTS;
$callBackLog->error($conMsg);
} else {
$objSysEv->_createRecord([$request[BROKER_DATA][SYSTEM_EVENT_DATA]], DATA_AUDT);
unset($request[BROKER_DATA][SYSTEM_EVENT_DATA]);
// grab journaling data if it exists and set a flag
if (array_key_exists(STRING_JOURNAL_DATA, $request[BROKER_DATA])) {
$journalData = $request[BROKER_DATA][STRING_JOURNAL_DATA];
unset($request[BROKER_DATA][STRING_JOURNAL_DATA]);
$haveJournal = true;
}
if (!$objSysEv->status) {
$conMsg = sprintf(ERROR_DATA_IMPORT, SYSTEM_EVENT_DATA, $objSysEv->class);
$callBackLog->data($conMsg);
} else {
/** @var gacMongoDB $objAudit */
if (is_null($objAudit = grabWidget($request[BROKER_META_DATA], '', $errorList))) {
foreach ($errorList as $error)
$callBackLog->error($error);
} else {
$systemEventToken = $objSysEv->getColumn(DB_EVENT_GUID);
$rc = $objAudit->launchAudit($request, $haveJournal, $systemEventToken, $journalData);
$conMsg = ($rc) ? SUCCESS_AUDIT_EVENT : ERROR_AUDIT_GENERIC_FAIL;
if (!$rc and count($objAudit->eventMessages)) {
consoleLog($res, CON_ERROR, ERROR_AUDIT_FAIL);
foreach ($objAudit->eventMessages as $errorMessage) {
consoleLog($res, CON_ERROR, $errorMessage);
}
} elseif (!$rc) {
consoleLog($res, CON_ERROR, ERROR_AUDIT_FAILED);
$conMsg = ERROR_AUDIT_FAILED;
} else {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . $request[BROKER_REQUEST];
}
if (is_object($objAudit)) $objAudit->__destruct();
unset($objAudit);
}
}
if (is_object($objSysEv)) $objSysEv->__destruct();
unset($objSysEv);
}
} catch (TypeError | Throwable $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
$conMsg = ERROR_TYPE_EXCEPTION;
$errorList[] = $conMsg;
consoleLog($res, CON_ERROR, $hdr . $conMsg);
consoleLog($res, CON_ERROR, $t->getMessage());
}
}
break;
case BROKER_REQUEST_NEW_SESSION :
$eventTimer = true;
if (!isset($request[BROKER_META_DATA][META_SESSION_GUID]) or empty($request[BROKER_META_DATA][META_SESSION_GUID])) {
$conMsg = sprintf(ERROR_META_FIELD_404, META_SESSION_GUID);
} elseif (!isset($request[BROKER_DATA][SYSTEM_EVENT_DURATION]) or empty($request[BROKER_DATA][SYSTEM_EVENT_DURATION])) {
$conMsg = ERROR_DATA_KEY_404 . SYSTEM_EVENT_DURATION;
} elseif (!validateGUID($request[BROKER_META_DATA][META_SESSION_GUID])) {
$conMsg = ERROR_INVALID_GUID . $request[BROKER_META_DATA][META_SESSION_GUID];
} else {
$sessionToken = $request[BROKER_META_DATA][META_SESSION_GUID];
$duration = intval($request[BROKER_DATA][SYSTEM_EVENT_DURATION]);
$rc = gasStatic::createATJob($duration, $sessionToken);
if (!is_null($rc)) {
// we successfully created the AT(1) job - update the sys-event record
$tmpObj = new gacSystemEvents($request[BROKER_META_DATA]);
if (!$tmpObj->status) {
$conMsg = ERROR_TEMPLATE_INSTANTIATE . $request[BROKER_META_DATA][META_TEMPLATE];
} else {
// fetch the system event record
$tmpObj->fetchRecordBySessionGUID($sessionToken);
if ($tmpObj->status) {
// update the system-event record with the AT results
$query = [SYSTEM_EVENT_FK_SESSION_GUID => [OPERAND_NULL => [OPERATOR_EQ => [$tmpObj->getColumn(SYSTEM_EVENT_FK_SESSION_GUID)]]]];
$update = [SYSTEM_EVENT_AT_RESULTS => $rc];
$data = [ STRING_QUERY_DATA => $query, STRING_UPDATE_DATA => $update ];
$tmpObj->_updateRecord($data);
if ($tmpObj->status) {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . $request[BROKER_REQUEST];
} else {
$callBackLog->warn(ERROR_MDB_SYS_EVENT_UPDATE);
consoleLog($res, CON_SYSTEM, ERROR_MDB_SYS_EVENT_UPDATE);
}
} else {
$callBackLog->warn(ERROR_MDB_SYS_EVENT_SAVE);
consoleLog($res, CON_SYSTEM, ERROR_MDB_SYS_EVENT_SAVE);
}
}
} else {
$callBackLog->warn(ERROR_AT_SAVE);
consoleLog($res, CON_SYSTEM, ERROR_AT_SAVE);
}
}
break;
case BROKER_REQUEST_ADMIN_CACHE_SMASH :
$eventTimer = true;
$errors = [];
if (!isset($request[BROKER_DATA]) or empty($request[BROKER_DATA])) {
$conMsg = ERROR_DATA_MISSING_ARRAY . STRING_DATA;
$callBackLog->data($conMsg);
} else {
try {
// calls the cache-smash method and passes the list of guids
if (!gasCache::smashCache($request[BROKER_DATA][STRING_DATA], $errors))
consoleLog($res, CON_ERROR, ERROR_CACHE_SMASH_FAIL_USER);
else
consoleLog($res, CON_SUCCESS, SUCCESS_CACHE_SMASH);
} catch (TypeError | Throwable $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
$conMsg = ERROR_TYPE_EXCEPTION;
$errorList[] = $conMsg;
consoleLog($res, CON_ERROR, $hdr . $conMsg);
consoleLog($res, CON_ERROR, $t->getMessage());
}
}
break;
default :
$conMsg = ERROR_BROKER_EVENT_UNKNOWN . $request[BROKER_REQUEST];
$callBackLog->warn(ERROR_BROKER_EVENT_UNKNOWN . $request[BROKER_REQUEST]);
// todo - not a supported event so log something dire
break;
}
}
if (!$eventSuccess and empty($conMsg)) {
$conMsg = ERROR_FINE_PICKLE;
}
if (!empty($conMsg)) {
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
}
// $_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
// log a system-event for the event -- unlike the other system events, we're not going to submit
// this one via a broker - which is standard but, instead, we're going to write the record out
// directly since doing otherwise would cause an infinite loop in processing.
if ($eventTime and $eventTimer) {
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($ogGUID)) $data[SYSTEM_EVENT_OGUID] = $ogGUID;
@postSystemEvent($data, $eventGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
gasCache::sysDel(($groot . UDASH . $thisPid));
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_AI, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_consume($queue, '', false, true, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (AMQPChannelClosedException | Throwable $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
break;
default : // parent
// does nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_AI));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, waking up every second to monitor its children...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$lastPid = 0;
$newPidList = null;
$result = pcntl_waitpid(0, $status); // detect any sigchld from the parent-group
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}
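The per-child request limit computed in forkMe() adds jitter so sibling children don't all recycle at the same moment; the arithmetic is easy to verify in isolation (the base value here is a stand-in for the configured NUMBER_C limit):

```php
<?php
// Each child gets base + (0..2)*10 + (0..9) requests before recycling,
// i.e. a spread of 0..29 over the configured base limit, staggering
// child restarts under steady load.
$base = 1000; // stand-in for the configured request limit (NUMBER_C)
$myRequestsPerInstance = $base + (mt_rand(0, 2) * 10) + mt_rand(0, 9);

assert($myRequestsPerInstance >= $base);
assert($myRequestsPerInstance <= $base + 29);
echo "limit for this child: $myRequestsPerInstance\n";
```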

583
brokers/adminBrokerOut.php Normal file
View File

@@ -0,0 +1,583 @@
<?php
/**
* adminBrokerOut.php
*
* This is the admin-out broker and is, currently, a placeholder for future work.
*
* This broker is intended to be used to fetch data/reports from the admin-side. As it stands today, only base
* events are supported.
*
* This broker is pretty old and requires refactoring to bring it up to standard. As it is now, it is simply a placeholder
* service.
*
* @author mike@givingassistant.org
* @version 1.0.0
*
* HISTORY:
* ========
* 06-15-17 mks original coding
* 08-24-17 mks CORE-500: broker events
* 02-06-18 mks _INF-139: migration event coding, PHP 7.2 exception handling
* 05-31-18 mks CORE-1011: update for new XML broker services configuration
* 06-07-18 mks CORE-1013: remote fetch event added
* 07-09-18 mks CORE-1017: pedigree fetch event added
* 07-28-20 mks DB-156: broker self-registration installed
*
*/
use PhpAmqpLib\Connection\AMQPStreamConnection;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Message\AMQPMessage;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$myPid = getmypid();
$_REDIRECT = true;
$topDir = dirname(__DIR__);
$thisWatcher = basename(__FILE__, '.php'); // strip the extension exactly (rtrim(..., ".php") treats its argument as a character list)
// load the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$childrenPidList = null;
$pidDir = $topDir . DIR_PIDS;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$res = 'ADMO: ';
// event management for children
$adminServiceConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_ADMIN];
$numberChildren = $adminServiceConfig[CONFIG_BROKER_INSTANCES][CONFIG_ADMIN_BROKER_OUT];
$requestsPerInstance = (empty($adminServiceConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $adminServiceConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2??
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
// create the root guid
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
/** @var gacErrorLogger $parentLog */
$parentLog = new gacErrorLogger();
$errors = null;
$file = basename(__FILE__, DOT . FILE_TYPE_PHP); // basename()'s suffix arg strips the extension exactly; rtrim() would treat it as a character list
$service = ENV_ADMIN;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
//declare( ticks = 1);
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
$numberChildren--;
while (($pid = pcntl_wait($_sig, WNOHANG)) > 0) {
@pcntl_wexitstatus($_sig);
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $myRequestsPerInstance, $groot;
// $startingMemory = memory_get_usage(true);
// todo -- when this broker becomes active, add the systemEvent for calculating memory consumption on SIGCLD
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(1, 9);
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
try {
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
// generate a child guid for the forked child...
$childGUID = rtrim($res, COLON) . UDASH . guid();
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
$queue = $queueTag . BROKER_QUEUE_AO;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_ADMIN);
if (is_null($brokerConnection)) {
$parentLog->fatal(ERROR_RESOURCE_404 . RESOURCE_ADMIN);
consoleLog($res, CON_ERROR, ERROR_RESOURCE_404 . RESOURCE_ADMIN);
exit(1); // shell-script exit value for fail
}
$brokerChannel = $brokerConnection->channel();
// params: queue name, passive, durable, exclusive, auto-delete
$brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
} catch (PhpAmqpLib\Exception\AMQPRuntimeException | Throwable | TypeError $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(1);
}
// register the child-spawn event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request)
{
$startTime = gasStatic::doingTime();
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service;
$errorList = array();
$requestCounter++;
$aryRetData = null;
$retData = null;
$request = null;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$eventSuccess = false;
$conMsg = '';
$eventGUID = guid();
$thisPid = getmypid();
$eventTimer = false; // certain events will toggle to true to log timer recording for the broker event
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
// set-up the call-back logger
/** @var gacErrorLogger $callBackLog */
$callBackLog = new gacErrorLogger($eventGUID);
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$aryRetData = buildReturnPayload([false, STATE_FAIL, null, $msg, null]);
$event = BROKER_QUEUE_AO . '(' . ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorList)) {
for ($index = 0, $last = count($errorList); $index < $last; $index++) {
$conMsg .= $errorList[$index] . $eos;
$callBackLog->error($errorList[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, $msg, null, null]);
$event = BROKER_QUEUE_AO . '(' . ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event = BROKER_QUEUE_AO . '(' . $request[BROKER_REQUEST] . ')';
if (is_null($request)) consoleLog($res, CON_ERROR, ERROR_BROKER_REQUEST_BAD . BROKER_REQUEST);
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
$_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, BROKER_REQUEST_SHUTDOWN, null]);
$eventSuccess = true;
break;
// test broker responsiveness
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_AO;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, (SUCCESS_PING . BROKER_QUEUE_AO), null]);
$eventSuccess = true;
break;
case BROKER_REQUEST_PEDIGREE :
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_PEDIGREE;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, gasConfig::getPedigree()]);
$eventSuccess = true;
break;
case BROKER_REQUEST_ADMIN_MWH_EVENT_CREATE :
$eventTimer = true;
$errors = array();
if (!isset($request[BROKER_META_DATA][META_CLIENT])) {
$msg = ERROR_META_CLIENT_404;
$conMsg = $msg;
$callBackLog->data($msg);
} elseif ($request[BROKER_META_DATA][META_CLIENT] != CLIENT_SYSTEM) {
$msg = ERROR_BROKER_CLIENT_NOT_AUTH . COLON . $request[BROKER_META_DATA][META_CLIENT];
$conMsg = $msg;
$callBackLog->data($msg);
} elseif ($request[BROKER_META_DATA][META_TEMPLATE] != TEMPLATE_CLASS_MIGRATIONS
and $request[BROKER_META_DATA][META_TEMPLATE] != TEMPLATE_CLASS_WAREHOUSE) {
$msg = ERROR_TEMPLATE_WRONG;
$conMsg = $msg;
$callBackLog->error($msg);
} else {
/** @var gacMongoDB $widget */
if (is_null($widget = grabWidget($request[BROKER_META_DATA], '', $errorList))) {
foreach ($errorList as $error)
$callBackLog->error($error);
} else {
$widget->_createRecord($request[BROKER_DATA]);
if (!$widget->status) {
$msg = FAIL_EVENT . BROKER_REQUEST_ADMIN_MWH_EVENT_CREATE;
$conMsg = $msg;
$callBackLog->error($msg);
$aryRetData = buildReturnPayload([false, $widget->state, $widget->eventMessages, null]);
} else {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_ADMIN_MWH_EVENT_CREATE;
$aryRetData = buildReturnPayload([true, $widget->state, $widget->eventMessages, $widget->getData()]);
}
if (is_object($widget)) $widget->__destruct();
unset($widget);
}
}
break;
case BROKER_REQUEST_ADMIN_MWH_EVENT_FETCH :
$eventTimer = true;
// try { // if debugging, turn on the exception trapper
$errors = [];
if (empty($request[BROKER_DATA]) or !is_array($request[BROKER_DATA])) {
$msg = ERROR_DATA_MISSING_ARRAY . BROKER_DATA;
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_DATA_ERROR, $msg]);
} elseif (!array_key_exists(META_TEMPLATE, $request[BROKER_META_DATA])
or ($request[BROKER_META_DATA][META_TEMPLATE] != TEMPLATE_CLASS_MIGRATIONS
and $request[BROKER_META_DATA][META_TEMPLATE] != TEMPLATE_CLASS_WAREHOUSE)) {
$msg = sprintf(ERROR_TEMPLATE_BAD, TEMPLATE_CLASS_MIGRATIONS);
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, $msg, null]);
} else {
/** @var gacMongoDB $widget */
if (is_null($widget = grabWidget($request[BROKER_META_DATA], '', $errorList))) {
foreach ($errorList as $error)
$callBackLog->error($error);
} else {
$widget->_fetchRecords($request[BROKER_DATA]);
if (!$widget->status) {
// fetch failed
$conMsg = FAIL_EVENT . BROKER_REQUEST_ADMIN_MWH_EVENT_FETCH;
$aryRetData = buildReturnPayload([false, $widget->state, $widget->eventMessages, null]);
} else {
// update successful
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_ADMIN_MWH_EVENT_FETCH;
// ReturnPayload - a query that returns no data will show success - eval the return
// state and return either a "no data found" message or the query
// results to the calling client
$rp = ($widget->state == STATE_NOT_FOUND) ? $widget->eventMessages : $widget->queryResults;
$aryRetData = buildReturnPayload([true, $widget->state, $rp, $widget->getData()]);
}
if (is_object($widget)) $widget->__destruct();
unset($widget);
}
}
// } catch (Throwable | TypeError $t) {
// $eLine = $t->getLine();
// $eFile = $t->getFile();
// $eMsg = $t->getMessage();
// $msg = $eFile . COLON_NS . $eLine . COLON . $eMsg;
// consoleLog($res, CON_ERROR, $msg);
// $conMsg = $hdrConE . FAIL_EVENT . BROKER_REQUEST_ADMIN_MIGRATE_FETCH_EVENT;
// $aryRetData = buildReturnPayload([false, STATE_FAIL, $eMsg, null]);
// }
break;
case BROKER_REQUEST_REMOTE_FETCH :
$eventTimer = true;
$errors = array();
/** @var gacMongoDB $widget */
if (is_null($widget = grabWidget($request[BROKER_META_DATA], '', $errorList))) {
foreach ($errorList as $error)
$callBackLog->error($error);
} else {
$widget->_fetchRecords($request[BROKER_DATA]);
if ($widget->status) {
$eventSuccess = true;
$widget->eventMessages[] = STRING_REC_COUNT_RET . $widget->recordsReturned;
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_FETCH;
$queryMeta = [
STRING_REC_COUNT_RET => $widget->recordsReturned,
STRING_REC_COUNT_TOT => $widget->recordsInCollection
];
// recordsInQuery is a PDO thing so let's see if it exists in the class object
if (isset($widget->recordsInQuery) and $widget->recordsInQuery) {
$queryMeta[STRING_REC_COUNT_QUERY] = $widget->recordsInQuery;
}
if (isset($request[BROKER_META_DATA][META_DONUT_FILTER]) and $request[BROKER_META_DATA][META_DONUT_FILTER] == 1) {
$queryResults = $widget->getData();
} elseif ($widget->useCache or (isset($request[BROKER_META_DATA][META_DO_CACHE]) and $request[BROKER_META_DATA][META_DO_CACHE])) {
$queryResults = $widget->getCK();
} else {
$queryResults = $widget->getData();
}
$retData = [STRING_QUERY_RESULTS => $queryResults, STRING_QUERY_DATA => $queryMeta];
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, $widget->eventMessages, $retData]);
} else {
$conMsg = FAIL_EVENT . BROKER_REQUEST_FETCH;
$aryRetData = buildReturnPayload([false, $widget->state, $widget->eventMessages, null]);
}
if (is_object($widget)) $widget->__destruct();
unset($widget);
}
break;
case BROKER_REQUEST_AUDIT_RESTORE :
$eventTimer = true;
$errors = [];
if (!isset($request[BROKER_DATA]) or empty($request[BROKER_DATA])) {
$msg = ERROR_DATA_MISSING_ARRAY . STRING_DATA;
$conMsg = $msg;
$callBackLog->data($msg);
$errors[] = $msg;
} elseif (!is_array($request[BROKER_DATA])) {
$msg = ERROR_DATA_ARRAY_NOT_ARRAY . STRING_DATA;
$conMsg = $msg;
$callBackLog->data($msg);
$errors[] = $msg;
} elseif (!isset($request[BROKER_DATA][STRING_KEY])) {
$msg = ERROR_ARRAY_KEY_404 . BROKER_DATA . COLON . STRING_KEY;
$conMsg = $msg;
$callBackLog->data($msg);
$errors[] = $msg;
} else {
$key = $request[BROKER_DATA][STRING_KEY];
if (!validateGUID($key)) {
$msg = ERROR_INVALID_GUID . $key;
$conMsg = $msg;
$callBackLog->data($msg);
$errors[] = $msg;
} else {
// inject a key into the metaPayload to flag all requests as audit requests
$request[BROKER_META_DATA][META_AUDIT_EVENT] = 1;
/** @var gacMongoDB $widget */
if (is_null($widget = grabWidget($request[BROKER_META_DATA], $key, $errorList))) {
foreach ($errorList as $error)
$callBackLog->error($error);
} else {
try {
$data = []; // for the return payload containing original and changed records
$rc = $widget->restoreAuditRecord($data);
switch ($rc) {
case true :
$eventSuccess = true;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, $widget->eventMessages, SUCCESS_DB_RECORD_RESTORED]);
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_AUDIT_RESTORE;
break;
case false :
default :
if (!empty($errors)) $widget->eventMessages = array_merge($widget->eventMessages, $errors);
$conMsg = FAIL_EVENT . BROKER_REQUEST_AUDIT_RESTORE;
$aryRetData = buildReturnPayload([false, STATE_FAIL, $widget->eventMessages, null]);
break;
}
} catch (TypeError | Throwable $t) {
$msg = ERROR_TYPE_EXCEPTION . $t->getMessage();
$conMsg = $msg;
$callBackLog->error($msg);
}
if (is_object($widget)) $widget->__destruct();
unset($widget);
}
}
}
break;
case BROKER_REQUEST_ADMIN_MWH_EVENT_UPDATE :
$eventTimer = true;
$errors = array();
if (!isset($request[BROKER_DATA]) or empty($request[BROKER_DATA])) {
$msg = ERROR_DATA_MISSING_ARRAY . STRING_DATA;
$conMsg = $msg;
$callBackLog->data($msg);
} elseif (!isset($request[BROKER_META_DATA][META_CLIENT])) {
$msg = ERROR_META_CLIENT_404;
$conMsg = $msg;
$callBackLog->data($msg);
} elseif ($request[BROKER_META_DATA][META_CLIENT] != CLIENT_SYSTEM) {
$msg = ERROR_BROKER_CLIENT_NOT_AUTH . COLON . $request[BROKER_META_DATA][META_CLIENT];
$conMsg = $msg;
$callBackLog->data($msg);
} elseif ($request[BROKER_META_DATA][META_TEMPLATE] != TEMPLATE_CLASS_MIGRATIONS
and $request[BROKER_META_DATA][META_TEMPLATE] != TEMPLATE_CLASS_WAREHOUSE) {
$msg = ERROR_TEMPLATE_WRONG;
$conMsg = $msg;
$callBackLog->error($msg);
} else {
/** @var gacMongoDB $widget */
if (is_null($widget = grabWidget($request[BROKER_META_DATA], '', $errorList))) {
foreach ($errorList as $error)
$callBackLog->error($error);
} else {
$widget->_updateRecord($request[BROKER_DATA]);
if (!$widget->status) {
$msg = FAIL_EVENT . BROKER_REQUEST_ADMIN_MWH_EVENT_UPDATE;
$conMsg = $msg;
$callBackLog->error($msg);
$aryRetData = buildReturnPayload([false, $widget->state, $widget->eventMessages, null]);
} else {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_ADMIN_MWH_EVENT_UPDATE;
$aryRetData = buildReturnPayload([true, $widget->state, $widget->eventMessages, $widget->getData()]);
}
if (is_object($widget)) $widget->__destruct();
unset($widget);
}
}
break;
}
unset($aryRetData[PAYLOAD_CM]);
}
// ensure we have a return-payload and a console message
if (empty($aryRetData)) {
$msg = ERROR_NO_RET_DATA . '-' . __FILE__ . '-' . $request[BROKER_REQUEST];
$conMsg = BROKER_QUEUE_AO . ' - ' . $msg;
$errorList = (empty($errors)) ? null : $errors;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, $msg, $errorList]);
} elseif ($eventSuccess and empty($conMsg)) {
$callBackLog->warn(ERROR_NO_CON_MSG);
$conMsg = $request[BROKER_REQUEST] . ' - ' . STATE_SUCCESS;
}
// prepare the return payload...
/** @noinspection PhpUndefinedMethodInspection */
$msg = new AMQPMessage(gzcompress(json_encode($aryRetData)), array(BROKER_CORRELATION_ID => $_request->get(BROKER_CORRELATION_ID)));
try {
/** @noinspection PhpUndefinedMethodInspection */
$_request->delivery_info[BROKER_CHANNEL]->basic_publish($msg, '', $_request->get(BROKER_REPLY_TO));
} catch (PhpAmqpLib\Exception\AMQPTimeoutException |
PhpAmqpLib\Exception\AMQPRuntimeException |
Throwable $e) {
$logMsg = ERROR_BROKER_EXCEPTION . $e->getMessage();
$callBackLog->fatal($logMsg);
consoleLog($res, CON_ERROR, $logMsg);
}
// if the event processing failed, reject the message, otherwise ack removing it from the queue
if (!$eventSuccess) {
$_request->delivery_info[BROKER_CHANNEL]->basic_reject($_request->delivery_info[BROKER_DELIVERY_TAG], false);
} else {
$_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
}
unset($msg);
if (!empty($conMsg)) {
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
}
// publish event metrics if we've toggled the switch on
if ($eventTimer) {
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($childGUID)) $data[SYSTEM_EVENT_OGUID] = $childGUID;
@postSystemEvent($data, $childGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_AO, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_qos(null, 1, null);
$brokerChannel->basic_consume($queue, '', false, false, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (Throwable $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
break;
case 1 : // parent
// does nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_AO));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, waking up every second to monitor its children...
// when a child dies, its death rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$lastPid = 0;
$newPidList = null;
$result = pcntl_waitpid(0, $status); // detect any sigchld from the parent-group
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}
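The reply path above publishes `gzcompress(json_encode($aryRetData))` keyed by the request's correlation id, so any consumer of the reply queue must reverse both steps. A minimal round-trip sketch, assuming only PHP's standard zlib and JSON extensions (the function names and payload keys are illustrative, not the framework's `PAYLOAD_*` constants):

```php
<?php
// The out-brokers publish replies as gzcompress(json_encode($payload)); see the
// AMQPMessage construction in the callback above. A client reverses both steps.
function encodeReply(array $payload): string
{
    return gzcompress(json_encode($payload));
}

function decodeReply(string $body): ?array
{
    $json = @gzuncompress($body);       // false (warning suppressed) if not gz data
    if ($json === false) {
        return null;
    }
    $payload = json_decode($json, true);
    return is_array($payload) ? $payload : null;
}
```

Compressing before publish keeps large query results small on the wire; the trade-off is that every client must know the body is compressed, since AMQP itself carries no content-encoding hint here.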


@@ -0,0 +1,352 @@
<?php
/**
 * adminGraphBroker.php -- the Graph (Grafana) broker client
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 09-16-19 mks DB-113: original coding
* 07-28-20 mks DB-156: broker self-registration installed
*
*/
//use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Exception\AMQPRuntimeException;
//use PhpAmqpLib\Message\AMQPMessage;
//use PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$_REDIRECT = true;
$topDir = dirname(__DIR__);
// load the lite version of the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$res = 'GRPH: ';
// event management for children
$graphServiceConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_ADMIN];
$numberChildren = $graphServiceConfig[CONFIG_BROKER_INSTANCES][CONFIG_GRAPH_BROKER];
$requestsPerInstance = (empty($graphServiceConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $graphServiceConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2??
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
$parentLog = new gacErrorLogger();
$errors = null;
$file = basename(__FILE__, DOT . FILE_TYPE_PHP); // basename()'s suffix arg strips the extension exactly; rtrim() would treat it as a character list
$service = ENV_ADMIN;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
//declare( ticks = 1);
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
$numberChildren--;
while (($pid = pcntl_wait($_sig, WNOHANG)) > 0) {
@pcntl_wexitstatus($_sig);
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $myRequestsPerInstance, $startingMemory, $groot;
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(0, 9);
$startingMemory = memory_get_usage(true);
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
$childGUID = rtrim($res, COLON) . UDASH . guid();
try {
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
$queue = $queueTag . BROKER_QUEUE_GRAPHS;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_ADMIN);
if (is_null($brokerConnection)) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
$childLog->fatal($hdr . ERROR_RESOURCE_404 . RESOURCE_ADMIN . COLON . BROKER_QUEUE_GRAPHS);
consoleLog($res, CON_ERROR, $hdr . ERROR_RESOURCE_404 . RESOURCE_ADMIN . COLON . BROKER_QUEUE_GRAPHS);
exit(SHELL_FAILURE); // shell-script exit value for fail
}
// declare the channel...
$brokerChannel = $brokerConnection->channel();
// declare the topic exchange for topic-logging
$brokerChannel->exchange_declare(EXCHANGE_NAME_TOPIC_LOGS, EXCHANGE_TYPE_TOPIC, false, false, false);
// declare the channel queue and create the queue name
list($queueName, ,) = $brokerChannel->queue_declare($queue);
// this broker handles all messages passed to the topic_logs exchange
$brokerChannel->queue_bind($queueName, EXCHANGE_NAME_TOPIC_LOGS, EXCHANGE_QUEUE_BINDING_METRICS);
} catch (AMQPRuntimeException | Throwable $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(SHELL_FAILURE);
}
// register the broker child start-up as a system-event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request) {
$startTime = gasStatic::doingTime();
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service;
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var PhpAmqpLib\Connection\AMQPStreamConnection $brokerConnection */
global $brokerConnection;
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
if (gasConfig::$settings[CONFIG_DEBUG]) {
consoleLog($res, CON_DEBUG, 'Child GUID: ' . $childGUID);
consoleLog($res, CON_DEBUG, 'root GUID: ' . $groot);
}
$requestCounter++;
$returnData = null;
$eventTimer = false;
$request = null;
$eventSuccess = false;
$conMsg = '';
$errorList = array();
$thisPid = getmypid();
$eventGUID = guid();
$ogGUID = '';
// set-up the call-back logger
$callBackLog = new gacErrorLogger($eventGUID, false);
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$event = BROKER_QUEUE_GRAPHS . '(' . ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorList)) {
for ($index = 0, $last = count($errorList); $index < $last; $index++) {
$conMsg .= $errorList[$index] . $eos;
$callBackLog->error($errorList[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$event = BROKER_QUEUE_GRAPHS . '(' . ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event = BROKER_QUEUE_GRAPHS . '(' . $request[BROKER_REQUEST] . ')';
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
// $_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$eventSuccess = true;
break;
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_GRAPHS;
$eventSuccess = true;
break;
// standard query-metrics event, also handled in parallel by the LogsBroker:
case BROKER_REQUEST_MET :
if (empty($request[BROKER_DATA])) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
$msg = ERROR_DATA_404;
consoleLog($res, CON_ERROR, $hdr . $msg);
$conMsg = FAIL_EVENT . BROKER_REQUEST_MET;
} else {
$meta = $request[BROKER_META_DATA];
$meta[META_TEMPLATE] = TEMPLATE_CLASS_GRAPHS;
$meta[META_SKIP_AUDIT] = 1;
$meta[META_CLIENT] = CLIENT_SYSTEM;
try {
/** @var gacMongoDB $objGraphs */
if (is_null($objGraphs = grabWidget($meta, '', $errorList))) { // use the overridden $meta built above, not the raw request meta-data
foreach ($errorList as $error)
$callBackLog->error($error);
} else {
if ($objGraphs->processMetricsForGraph($request[BROKER_DATA])) {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_MET . INFO_GRAPH_VERSION;
} else {
$conMsg = FAIL_EVENT . BROKER_REQUEST_MET;
}
}
if (is_object($objGraphs)) $objGraphs->__destruct();
unset($objGraphs);
} catch (Throwable | TypeError $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
$conMsg = ERROR_EXCEPTION;
}
}
break;
default :
$conMsg = ERROR_BROKER_EVENT_UNKNOWN . $request[BROKER_REQUEST];
$callBackLog->warn(ERROR_BROKER_EVENT_UNKNOWN . $request[BROKER_REQUEST]);
// todo - not a supported event so log something dire
break;
}
}
if (!$eventSuccess and empty($conMsg)) {
$conMsg = ERROR_FINE_PICKLE;
}
if (!empty($conMsg)) {
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
}
// $_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
// log a system-event for this event -- unlike the other system events, we're not going to
// submit this one via a broker (the standard path); instead, we write the record out
// directly, since routing it through a broker would cause an infinite processing loop.
if ($eventTime and $eventTimer) {
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($ogGUID)) $data[SYSTEM_EVENT_OGUID] = $ogGUID;
@postSystemEvent($data, $eventGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
gasCache::sysDel(($groot . UDASH . $thisPid));
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_GRAPHS, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_consume($queue, '', false, true, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (Throwable | TypeError $t) {
$hdr = sprintf(INFO_LOC, basename(__FILE__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
break;
case 1 : // parent
// does nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_GRAPHS));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, waiting on its children...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$result = pcntl_waitpid(0, $status); // detect any sigchld from the parent-group
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}

342
brokers/adminLogsBroker.php Normal file
View File

@@ -0,0 +1,342 @@
<?php
/**
* adminLogsBroker.php -- the admin logs broker client
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 09-16-19 mks DB-113: original coding
* 07-28-20 mks DB-156: broker self-registration installed
*
*/
//use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Exception\AMQPRuntimeException;
//use PhpAmqpLib\Message\AMQPMessage;
//use PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$_REDIRECT = true;
$topDir = dirname(__DIR__);
// load the lite version of the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$res = 'LOGS: ';
// event management for children
$syslogServiceConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_ADMIN];
$numberChildren = $syslogServiceConfig[CONFIG_BROKER_INSTANCES][CONFIG_LOG_BROKER];
$requestsPerInstance = (empty($syslogServiceConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $syslogServiceConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2??
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
$parentLog = new gacErrorLogger();
$errors = null;
$file = rtrim(basename(__FILE__), DOT . FILE_TYPE_PHP);
$service = ENV_ADMIN;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
//declare( ticks = 1);
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
$numberChildren--;
while (($pid = pcntl_wait($_sig, WNOHANG)) > 0) {
@pcntl_wexitstatus($_sig);
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $myRequestsPerInstance, $startingMemory, $groot;
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(0, 9);
$startingMemory = memory_get_usage(true);
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
$childGUID = rtrim($res, COLON) . UDASH . guid();
try {
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
// $exchange = BROKER_EXCHANGE_A1;
$queue = $queueTag . BROKER_QUEUE_LOGS;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_ADMIN);
if (is_null($brokerConnection)) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
$childLog->fatal($hdr . ERROR_RESOURCE_404 . RESOURCE_ADMIN . COLON . BROKER_QUEUE_LOGS);
consoleLog($res, CON_ERROR, $hdr . ERROR_RESOURCE_404 . RESOURCE_ADMIN . COLON . BROKER_QUEUE_LOGS);
exit(SHELL_FAILURE); // shell-script exit value for fail
}
// declare the channel...
$brokerChannel = $brokerConnection->channel();
// declare the topic exchange for topic-logging
$brokerChannel->exchange_declare(EXCHANGE_NAME_TOPIC_LOGS, EXCHANGE_TYPE_TOPIC, false, false, false);
// declare the channel queue and create the queue name
list($queueName, ,) = $brokerChannel->queue_declare($queue);
// this broker handles all messages passed to the topic_logs exchange (LOGS and METRICS)
$brokerChannel->queue_bind($queueName, EXCHANGE_NAME_TOPIC_LOGS, EXCHANGE_QUEUE_BINDING_ALL);
} catch (AMQPRuntimeException | Throwable $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(SHELL_FAILURE);
}
// register the broker child start-up as a system-event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request) {
$startTime = gasStatic::doingTime();
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service;
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var PhpAmqpLib\Connection\AMQPStreamConnection $brokerConnection */
global $brokerConnection;
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
if (gasConfig::$settings[CONFIG_DEBUG]) {
consoleLog($res, CON_DEBUG, 'Child GUID: ' . $childGUID);
consoleLog($res, CON_DEBUG, 'root GUID: ' . $groot);
}
$requestCounter++;
$returnData = null;
$eventTimer = false;
$request = null;
$eventSuccess = false;
$conMsg = '';
$errorList = array();
$thisPid = getmypid();
$eventGUID = guid();
$ogGUID = '';
// set-up the call-back (relative to the broker) logger
$callBackLog = new gacErrorLogger($eventGUID, false);
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$event = BROKER_QUEUE_LOGS . '(' . ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorList)) {
for ($index = 0, $last = count($errorList); $index < $last; $index++) {
$conMsg .= $errorList[$index] . $eos;
$callBackLog->error($errorList[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$event = BROKER_QUEUE_LOGS . '(' . ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event = BROKER_QUEUE_LOGS . '(' . $request[BROKER_REQUEST] . ')';
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
// /** @noinspection PhpUndefinedFieldInspection PhpUndefinedMethodInspection */
// $_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$eventSuccess = true;
break;
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_LOGS;
$eventSuccess = true;
break;
case BROKER_REQUEST_LOG :
$callBackLog->errStack = $request[BROKER_DATA];
$callBackLog->writeLogMessage();
if ($callBackLog->status) {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_LOG;
$eventSuccess = true;
} else {
$conMsg = basename(__FILE__) . COLON_NS . __LINE__ . COLON . FAIL_EVENT . BROKER_REQUEST_LOG;
}
break;
case BROKER_REQUEST_MET :
$callBackLog->errStack = $request[BROKER_DATA];
$callBackLog->writeLogMessage(true);
if ($callBackLog->status) {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_MET;
$eventSuccess = true;
} else {
$conMsg = basename(__FILE__) . COLON_NS . __LINE__ . COLON . FAIL_EVENT . BROKER_REQUEST_MET;
}
break;
default :
$conMsg = ERROR_BROKER_EVENT_UNKNOWN . $request[BROKER_REQUEST];
$callBackLog->warn(ERROR_BROKER_EVENT_UNKNOWN . $request[BROKER_REQUEST]);
// todo - not a supported event so log something dire
break;
}
}
if (!$eventSuccess and empty($conMsg)) {
$conMsg = ERROR_FINE_PICKLE;
}
if (!empty($conMsg)) {
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
}
// /** @noinspection PhpUndefinedFieldInspection PhpUndefinedMethodInspection */
// $_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
// log a system-event for this broker event -- unlike other system events, we don't submit
// this one via a broker (the standard path); instead, we write the record out
// directly, since doing otherwise would cause an infinite processing loop.
if ($eventTime and $eventTimer) {
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($ogGUID)) $data[SYSTEM_EVENT_OGUID] = $ogGUID;
@postSystemEvent($data, $eventGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
gasCache::sysDel(($groot . UDASH . $thisPid));
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_LOGS, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_consume($queue, '', false, true, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__FILE__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
break;
case 1 : // parent
// does nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_SYSLOG));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, waiting on its children...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$result = pcntl_waitpid(0, $status); // detect any sigchld from the parent-group
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}

335
brokers/adminSyslogBroker.php Normal file
View File

@@ -0,0 +1,335 @@
<?php
/**
* adminSyslogBroker.php -- the syslog broker client
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 09-16-19 mks DB-113: original coding
* 07-28-20 mks DB-156: broker self-registration installed
*
*/
//use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Exception\AMQPRuntimeException;
//use PhpAmqpLib\Message\AMQPMessage;
//use PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$_REDIRECT = true;
$topDir = dirname(__DIR__);
// load the lite version of the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$res = 'SYSL: ';
// event management for children
$syslogServiceConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_ADMIN];
$numberChildren = $syslogServiceConfig[CONFIG_BROKER_INSTANCES][CONFIG_SYSLOG_BROKER];
$requestsPerInstance = (empty($syslogServiceConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $syslogServiceConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2??
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
$parentLog = new gacErrorLogger();
$errors = null;
$file = rtrim(basename(__FILE__), DOT . FILE_TYPE_PHP);
$service = ENV_ADMIN;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
//declare( ticks = 1);
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
$numberChildren--;
while (($pid = pcntl_wait($_sig, WNOHANG)) > 0) {
@pcntl_wexitstatus($_sig);
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $myRequestsPerInstance, $startingMemory, $groot;
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(0, 9);
$startingMemory = memory_get_usage(true);
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
$childGUID = rtrim($res, COLON) . UDASH . guid();
try {
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
$queue = $queueTag . BROKER_QUEUE_SYSLOG;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_ADMIN);
if (is_null($brokerConnection)) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
$childLog->fatal($hdr . ERROR_RESOURCE_404 . RESOURCE_ADMIN . COLON . BROKER_QUEUE_SYSLOG);
consoleLog($res, CON_ERROR, $hdr . ERROR_RESOURCE_404 . RESOURCE_ADMIN . COLON . BROKER_QUEUE_SYSLOG);
exit(SHELL_FAILURE); // shell-script exit value for fail
}
// declare the channel...
$brokerChannel = $brokerConnection->channel();
// declare the topic exchange for topic-logging
$brokerChannel->exchange_declare(EXCHANGE_NAME_TOPIC_LOGS, EXCHANGE_TYPE_TOPIC, false, false, false);
// declare the channel queue and create the queue name
list($queueName, ,) = $brokerChannel->queue_declare($queue);
// this broker handles all messages passed to the topic_logs exchange
$brokerChannel->queue_bind($queueName, EXCHANGE_NAME_TOPIC_LOGS, EXCHANGE_QUEUE_BINDING_LOGS);
} catch (AMQPRuntimeException | Throwable $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(SHELL_FAILURE);
}
// register the broker child start-up as a system-event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request) {
$startTime = gasStatic::doingTime();
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service;
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var PhpAmqpLib\Connection\AMQPStreamConnection $brokerConnection */
global $brokerConnection;
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
if (gasConfig::$settings[CONFIG_DEBUG]) {
consoleLog($res, CON_DEBUG, 'Child GUID: ' . $childGUID);
consoleLog($res, CON_DEBUG, 'root GUID: ' . $groot);
}
$requestCounter++;
$returnData = null;
$eventTimer = false;
$request = null;
$eventSuccess = false;
$conMsg = '';
$errorList = array();
$thisPid = getmypid();
$eventGUID = guid();
$ogGUID = '';
// set-up the call-back logger
$callBackLog = new gacErrorLogger($eventGUID, false);
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$event = BROKER_QUEUE_SYSLOG . '(' . ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorList)) {
for ($index = 0, $last = count($errorList); $index < $last; $index++) {
$conMsg .= $errorList[$index] . $eos;
$callBackLog->error($errorList[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$event = BROKER_QUEUE_SYSLOG . '(' . ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event = BROKER_QUEUE_SYSLOG . '(' . $request[BROKER_REQUEST] . ')';
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
// $_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$eventSuccess = true;
break;
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_SYSLOG;
$eventSuccess = true;
break;
// syslog processing happens here
case BROKER_REQUEST_LOG :
try {
$namaste = '[' . CONFIG_ID_NODE_NAMASTE . '] ';
$sysLogError = gasStatic::getSysLogError($request[BROKER_DATA][0][(LOG_VALUE . COLLECTION_MONGO_LOGS_EXT)]);
if (!syslog($sysLogError, $namaste . $request[BROKER_DATA][0][ERROR_MESSAGE . COLLECTION_MONGO_LOGS_EXT])) {
consoleLog($res, CON_ERROR, ERROR_SYSLOG);
$conMsg = FAIL_EVENT . ERROR_SYSLOG;
} else {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_SYSLOG;
$eventSuccess = true;
}
} catch (Throwable | TypeError $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
consoleLog($res, CON_ERROR, $hdr . $t->getMessage());
}
break;
default :
$conMsg = ERROR_BROKER_EVENT_UNKNOWN . $request[BROKER_REQUEST];
$callBackLog->warn(ERROR_BROKER_EVENT_UNKNOWN . $request[BROKER_REQUEST]);
// todo - not a supported event so log something dire
break;
}
}
if (!$eventSuccess and empty($conMsg)) {
$conMsg = ERROR_FINE_PICKLE;
}
if (!empty($conMsg)) {
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
}
// $_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
// log a system-event for this broker event -- unlike other system events, we don't submit
// this one via a broker (the standard path); instead, we write the record out
// directly, since doing otherwise would cause an infinite processing loop.
if ($eventTime and $eventTimer) {
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($ogGUID)) $data[SYSTEM_EVENT_OGUID] = $ogGUID;
@postSystemEvent($data, $eventGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
gasCache::sysDel(($groot . UDASH . $thisPid));
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_SYSLOG, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_consume($queue, '', false, true, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__FILE__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
break;
case 1 : // parent
// does nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_SYSLOG));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, waiting on its children...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$result = pcntl_waitpid(0, $status); // detect any sigchld from the parent-group
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}

381
brokers/brokerTemplate.txt Normal file
View File

@@ -0,0 +1,381 @@
<?php
/**
* brokerTemplate.txt
*
* This is the template file for brokers. It holds all the PHP code to create a new broker client/service - all you
* need to do is configure the broker instance to be unique to all the other already-existing brokers, and to add
* the event handlers.
*
* Speaking of, the template comes with the default event handlers for ping and shutdown. Comments, such as this one,
* are added to key places in the code to alert you to lines that should be modified and suggestions, where possible
* for the range of inputs available.
*
* Please observe the Namaste coding standards when adding comments and comment blocks as your comments will be used
* to create Namaste system documentation for other programmers.
*
* @author mike@givingassistant.org <--- todo change/update to your email
* @version 1.0 <--- todo in what namaste version did this module first appear?
*
* HISTORY:
* ========
* 01-28-20 mks DB-144: original coding
*
*/
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Connection\AMQPStreamConnection;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Channel\AMQPChannel;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Message\AMQPMessage;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$myPid = getmypid();
$_REDIRECT = true;
$topDir = dirname(__DIR__);
$thisWatcher = basename(__FILE__);
$thisWatcher = rtrim($thisWatcher, ".php");
// load the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$res = 'XXXX: '; // todo <-- change this 4-char field to something unique for the console log
// todo ------------------------------------------------------------------------------------------------------------------------------------
// todo -- if your broker requires its own configuration section (this example uses the migration section), then
// todo -- you'll need to add a relevant section to the XML configuration -- otherwise, delete this section
// before we do anything, ensure we have a "migration" section in the configuration
if (!array_key_exists(CONFIG_MIGRATION, gasConfig::$settings) // todo <--- change CONFIG_MIGRATION
or empty(gasConfig::$settings[CONFIG_MIGRATION]) // todo <--- change CONFIG_MIGRATION
or !is_array(gasConfig::$settings[CONFIG_MIGRATION])) { // todo <--- change CONFIG_MIGRATION
// XML config for migration is not loaded or is empty or malformed - exit immediately
consoleLog($res, CON_SYSTEM, ERROR_CONFIG_RESOURCE_404 . STRING_MIGRATION_CONFIG); // todo <--- change STRING_MIGRATION_CONFIG
exit(1);
}
// todo ------------------------------------------------------------------------------------------------------------------------------------
$childrenPidList = null;
$pidDir = $topDir . DIR_PIDS;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
// event management for children
$appServerConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_APPSERVER]; // todo <--- change CONFIG_BROKER_APPSERVER
$numberChildren = $appServerConfig[CONFIG_BROKER_INSTANCES][CONFIG_BROKER_M_BROKER]; // todo <--- change CONFIG_BROKER_M_BROKER
$requestsPerInstance = (empty($appServerConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $appServerConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2??
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
// create the root guid
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
/** @var gacErrorLogger $parentLog */
$parentLog = new gacErrorLogger();
// todo - validate the broker environment as declared in the XML config
// get the location where the broker is supposed to run
$brokerLocation = ENV_PRIME; // todo <--- change the environment
if (!empty($argv) and !empty($argv[1])) {
$brokerLocation = $argv[1];
}
$errors = null;
$file = rtrim(basename(__FILE__), DOT . FILE_TYPE_PHP);
$service = ENV_ADMIN;
if (!registerService($service)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
// declare( ticks = 1);
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
$numberChildren--;
while (($pid = pcntl_wait($_sig, WNOHANG)) > 0) {
@pcntl_wexitstatus($_sig);
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $startingMemory, $myRequestsPerInstance, $groot;
$startingMemory = memory_get_usage(true);
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(1, 9);
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
// generate a child guid for the forked child...
$childGUID = rtrim($res, COLON) . UDASH . guid();
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
$queue = $queueTag . BROKER_QUEUE_TBD; // todo <--- change BROKER_QUEUE_TBD
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_BROKER);
if (is_null($brokerConnection)) {
$childLog->fatal(ERROR_RESOURCE_404 . RESOURCE_BROKER);
consoleLog($res, CON_ERROR, ERROR_RESOURCE_404 . RESOURCE_BROKER);
exit(1); // shell-script exit value for fail
}
$brokerChannel = $brokerConnection->channel();
try {
// params: queue name, passive, durable, exclusive, auto-delete
$brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
} catch (PhpAmqpLib\Exception\AMQPRuntimeException | Throwable $e) {
$childLog->fatal($e->getMessage());
consoleLog($res, CON_ERROR, ERROR_BROKER_QUEUE_DECLARE . $queue);
exit(1);
}
// register the child-spawn event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postBrokerEvent($data, $childGUID, $childLog);
// todo -- add a broker name to this event so we know which broker is registering
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request)
{
$startTime = gasStatic::doingTime();
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var AMQPStreamConnection $brokerConnection */
global $brokerConnection;
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service;
$_request[STRING_SERVICE] = $service;
$event = BROKER_QUEUE_TBD . '('; // todo <--- change BROKER_QUEUE_TBD
$requestCounter++;
$aryRetData = null;
$retData = null;
$errorStack = [];
$request = null;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$eventSuccess = false;
$conMsg = '';
$eventGUID = guid();
$thisPid = getmypid();
$eventTimer = false; // certain events will toggle to true to log timer recording for the broker event
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
// set-up the call-back logger
/** @var gacErrorLogger $callBackLog */
$callBackLog = new gacErrorLogger($eventGUID);
try {
if (!firstPassPayloadValidation($_request, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$aryRetData = buildReturnPayload([false, STATE_FAIL, null, $msg, null]);
$event .= ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorStack)) {
for ($index = 0, $last = count($errorStack); $index < $last; $index++) {
$conMsg .= $errorStack[$index] . $eos;
$callBackLog->error($errorStack[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, $errorStack, null, null]);
$event .= ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event .= $request[BROKER_REQUEST] . ')';
if (is_null($request)) {
consoleLog($res, CON_ERROR, ERROR_REQUEST_404);
}
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
/** @noinspection PhpUndefinedFieldInspection PhpUndefinedMethodInspection */
$_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, BROKER_REQUEST_SHUTDOWN, null]);
$eventSuccess = true;
break;
// test broker responsiveness
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_TBD; // todo <--- change BROKER_QUEUE_TBD
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, (SUCCESS_PING . BROKER_QUEUE_TBD), null]); // todo <--- change BROKER_QUEUE_TBD
$eventSuccess = true;
break;
// todo <--- your events for this broker start here
default :
$msg = ERROR_EVENT_404 . $request[BROKER_REQUEST];
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_DOES_NOT_EXIST, $msg, null]);
break;
}
}
} catch (Throwable $t) {
consoleLog($res, CON_SYSTEM, $t->getMessage());
$callBackLog->fatal($t->getMessage());
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, $t->getMessage(), $errorStack]);
}
// ensure we have a return-payload and a console message
if (empty($aryRetData)) {
$msg = ERROR_NO_RET_DATA . '-' . __FILE__ . '-' . $request[BROKER_REQUEST];
$conMsg = BROKER_QUEUE_TBD . ' - ' . $msg; // todo <--- change BROKER_QUEUE_TBD
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, null, $msg, null]);
} elseif ($eventSuccess and empty($conMsg)) {
$callBackLog->warn(ERROR_NO_CON_MSG);
$conMsg = $request[BROKER_REQUEST] . ' - ' . STATE_SUCCESS;
}
// prepare the return payload...
/** @noinspection PhpUndefinedMethodInspection */
$msg = new AMQPMessage(gzcompress(json_encode($aryRetData)), array(BROKER_CORRELATION_ID => $_request->get(BROKER_CORRELATION_ID)));
try {
/** @noinspection PhpUndefinedMethodInspection */
$_request->delivery_info[BROKER_CHANNEL]->basic_publish($msg, '', $_request->get(BROKER_REPLY_TO));
/** @noinspection PhpUndefinedMethodInspection */
$_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
} catch (PhpAmqpLib\Exception\AMQPTimeoutException |
PhpAmqpLib\Exception\AMQPRuntimeException |
Throwable $e) {
$logMsg = ERROR_BROKER_EXCEPTION . $e->getMessage();
$callBackLog->fatal($logMsg);
consoleLog($res, CON_ERROR, $logMsg);
}
// if the event processing failed, reject the message, otherwise ack removing it from the queue
// todo: core-452: publish the event payload to the sysEvent broker to capture the failed event
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
unset($msg);
// publish event metrics if we've toggled the switch on
if ($eventTimer) {
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($childGUID)) $data[SYSTEM_EVENT_OGUID] = $childGUID;
@postBrokerEvent($data, $childGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postBrokerEvent($data, $eventGUID, $callBackLog);
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_TBD, $thisPid, $myRequestsPerInstance)); // todo <--- change BROKER_QUEUE_TBD
$brokerChannel->basic_qos(null, 1, null); // prefetch-size, prefetch-count (one unacked message at a time), global
$brokerChannel->basic_consume($queue, '', false, false, false, false, $callback); // queue, consumer-tag, no-local, no-ack, exclusive, no-wait, callback
while (count($brokerChannel->callbacks)) {
$brokerChannel->wait();
}
break;
default : // parent (pcntl_fork() returns the child's pid here, not 1)
// does nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_TBD)); // todo <--- change BROKER_QUEUE_TBD
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postBrokerEvent($data, $groot, $parentLog);
// the parent process continues to run, blocking in pcntl_waitpid() until a child exits...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$lastPid = 0;
$newPidList = null;
$result = pcntl_waitpid(0, $status); // block until any child in our process group exits
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}

489
brokers/cBroker.php Normal file

@@ -0,0 +1,489 @@
<?php
/**
* cBroker.php
*
* cBroker is the CONS broker (consolidated access list): a list provided by the US Treasury Department of all
* individuals and entities who have been blocked from receiving payment for reasons of national security.
*
* This is a segundo broker and provides the following functionality:
*
* 1. Event to upload the CONS XML file
* 2. Event to process the CONS XML file (stores the XML file into a mongo collection)
* 3. Queries against the CONS XML file (which are really queries against the collection created above)
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-07-20 mks DB-180: Original coding
*
*/
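// ---------------------------------------------------------------------------------
// Hedged usage sketch (illustrative only, not part of this broker): clients talk to
// this broker through the php-amqplib RPC pattern implemented below -- publish a
// gzcompress(json_encode(...)) payload carrying a reply_to queue and a
// correlation_id, then consume and gzuncompress the JSON response. The connection
// parameters and $consQueueName are assumptions; BROKER_REQUEST and
// BROKER_REQUEST_PING are the constants this file actually reads.
//
//   $conn = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
//   $channel = $conn->channel();
//   list($replyQueue, ,) = $channel->queue_declare('', false, false, true, false);
//   $payload = gzcompress(json_encode([BROKER_REQUEST => BROKER_REQUEST_PING]));
//   $message = new AMQPMessage($payload, ['correlation_id' => uniqid('', true), 'reply_to' => $replyQueue]);
//   $channel->basic_publish($message, '', $consQueueName); // $consQueueName: hypothetical, resolved from the XML config
// ---------------------------------------------------------------------------------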
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Message\AMQPMessage;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$myPid = getmypid();
$_REDIRECT = true;
$topDir = dirname(__DIR__);
$thisWatcher = basename(__FILE__, '.php'); // basename()'s suffix argument strips ".php" cleanly; rtrim() would treat it as a character mask
// load the framework environment
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$res = 'CONS: '; // CONSolidated access list
$childrenPidList = null;
$pidDir = $topDir . DIR_PIDS;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
// event management for children
$serviceSettings = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_SEGUNDO];
$numberChildren = $serviceSettings[CONFIG_BROKER_INSTANCES][CONFIG_BROKER_C_BROKER];
$requestsPerInstance = (empty($serviceSettings[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $serviceSettings[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2??
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
// create the root guid
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_NUM_CHILD, substr(basename(__FILE__), 0, -4), $numberChildren));
/** @var gacErrorLogger $parentLog */
$parentLog = new gacErrorLogger();
// todo - validate the broker environment as declared in the XML config
// get the location where the broker is supposed to run
$brokerLocation = ENV_SEGUNDO;
if (!empty($argv) and !empty($argv[1])) {
$brokerLocation = $argv[1];
}
$errors = null;
$file = basename(__FILE__, DOT . FILE_TYPE_PHP); // strip the ".php" suffix (rtrim() would treat it as a character mask)
$service = ENV_SEGUNDO;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
// SIGCHLD delivery can coalesce, so reap every exited child and decrement the
// count once per reaped pid; use a dedicated $status variable rather than
// overwriting the signal number that was passed in
while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
@pcntl_wexitstatus($status);
$numberChildren--;
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $startingMemory, $myRequestsPerInstance, $groot, $file;
$startingMemory = memory_get_usage(true);
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(1, 9);
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
try {
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
// generate a child guid for the forked child...
$childGUID = rtrim($res, COLON) . UDASH . guid();
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
$queue = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG] . BROKER_QUEUE_C;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_SEGUNDO);
if (is_null($brokerConnection)) {
$childLog->fatal(ERROR_RESOURCE_404 . RESOURCE_SEGUNDO . COLON . BROKER_QUEUE_C);
consoleLog($res, CON_ERROR, ERROR_RESOURCE_404 . RESOURCE_SEGUNDO . COLON . BROKER_QUEUE_C);
exit(1); // shell-script exit value for fail
}
$brokerChannel = $brokerConnection->channel();
// params: queue name, passive, durable, exclusive, auto-delete
$brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
} catch (PhpAmqpLib\Exception\AMQPRuntimeException | Throwable $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(1);
}
// register the child-spawn event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request)
{
$startTime = gasStatic::doingTime();
$postNormalResponse = true;
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var AMQPStreamConnection $brokerConnection */
global $brokerConnection;
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service, $file;
$event = BROKER_QUEUE_C . '(';
$requestCounter++;
$aryRetData = null;
$retData = null;
$errorStack = [];
$request = null;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$eventSuccess = false;
$conMsg = '';
$eventGUID = guid();
$thisPid = getmypid();
$eventTimer = false; // certain events will toggle to true to log timer recording for the broker event
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
// set-up the call-back logger
/** @var gacErrorLogger $callBackLog */
$callBackLog = new gacErrorLogger($eventGUID);
try {
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$aryRetData = buildReturnPayload([false, STATE_FAIL, null, $msg, null]);
$event .= ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorStack)) {
for ($index = 0, $last = count($errorStack); $index < $last; $index++) {
$conMsg .= $errorStack[$index] . $eos;
$callBackLog->error($errorStack[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, $errorStack, null, null]);
$event .= ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event .= $request[BROKER_REQUEST] . ')';
if (is_null($request)) {
consoleLog($res, CON_ERROR, ERROR_REQUEST_404);
}
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
$_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, BROKER_REQUEST_SHUTDOWN, null]);
$eventSuccess = true;
break;
// test broker responsiveness
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_C;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, (SUCCESS_PING . BROKER_QUEUE_C), null]);
$eventSuccess = true;
break;
case BROKER_REQUEST_PEDIGREE :
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_PEDIGREE;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, gasConfig::getPedigree()]);
$eventSuccess = true;
break;
case BROKER_REQUEST_WAREHOUSE :
$eventSuccess = false;
$eventTimer = false;
$objMigrate = new gacMigrations($request[BROKER_DATA], $request[BROKER_META_DATA], EVENT_WAREHOUSE);
if (!$objMigrate->status) {
$conMsg = FAIL_EVENT . BROKER_REQUEST_WAREHOUSE;
$aryRetData = buildReturnPayload([false, $objMigrate->state, $objMigrate->errorStack, null]);
} else {
$guid = $objMigrate->objWarehouseMeta->getColumn(DB_TOKEN);
// validate return guid
if (!validateGUID($guid)) {
$conMsg = ERROR_EVENT . BROKER_REQUEST_WAREHOUSE;
$aryRetData = buildReturnPayload([ false, FAIL_EVENT, $objMigrate->errorStack, ERROR_BROKER_REQUEST_FAILED]);
} else {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_WAREHOUSE;
$aryRetData = buildReturnPayload([true, SUCCESS_EVENT, $objMigrate->errorStack, $guid]);
$eventSuccess = true;
}
// send the guid back to the calling client now so we can resume the warehousing...
postResponse($aryRetData, $_request, $callBackLog, $res);
$postNormalResponse = false;
// dive back into the objMigration class and perform the warehouse request
if (!$objMigrate->whData()) {
$conMsg = FAIL_EVENT . BROKER_REQUEST_WAREHOUSE;
} else {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_WAREHOUSE;
$eventSuccess = true;
}
}
break;
case BROKER_REQUEST_CONS :
$lastUpdated = '';
$recordCount = 0;
$eventTimer = true;
$errors = [];
$request[BROKER_META_DATA][META_LIMIT_OVERRIDE] = 1;
/** @var gacMongoDB $tmpObj */
if (is_null($tmpObj = grabWidget($request[BROKER_META_DATA], '', $errors))) {
// failed to instantiate the CONS data class object
foreach ($errors as $error)
$callBackLog->error($error);
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_WARNING, $errors, null]);
} else {
if (is_null($newFileName = $tmpObj->template->saveCONSList($request[BROKER_DATA], $errors))) {
// failed to save the CONS list to the tmp directory
foreach ($errors as $error)
$callBackLog->error($error);
$aryRetData = buildReturnPayload([false, STATE_FAIL, $errors, null]);
} else {
if (is_null($aryData = $tmpObj->template->processCONSList($newFileName, $errors, $lastUpdated, $recordCount))) {
// failed to process the xml file into a data structure
foreach ($errors as $error)
$callBackLog->error($error);
$aryRetData = buildReturnPayload([false, STATE_FAIL, $errors, null]);
} else {
$tmpObj->_createRecord($aryData, DATA_CONS);
if (!$tmpObj->status) {
// failed to save CONS data structure to mongo table
if (empty($tmpObj->eventMessages))
$tmpObj->eventMessages[] = ERROR_SAVE_XML_FILE . STRING_DBR;
$aryRetData = buildReturnPayload([false, STATE_FAIL, $tmpObj->eventMessages, null]);
} else {
if ($tmpObj->count != $recordCount) {
$hdr = sprintf(INFO_LOC, basename(__FILE__), __LINE__);
$msg = sprintf(ERROR_DATA_RECORD_COUNT, $recordCount, $tmpObj->count);
$callBackLog->error($hdr . $msg);
} else {
$msg = SUCCESS_DB_UPSERT_COUNT . $recordCount;
$eventSuccess = true;
$retData = [
STRING_REC_COUNT_INSERTED => $recordCount,
STRING_GENERATED_DATE => $lastUpdated
];
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, $retData]);
echo 'FQFN: ' . $newFileName . PHP_EOL; // todo: delete me
$tmpObj->template->cleanUp($newFileName);
}
}
}
}
}
if (is_object($tmpObj)) $tmpObj->__destruct();
unset($tmpObj);
break;
case BROKER_REQUEST_REMOTE_FETCH :
$eventTimer = true;
$errors = [];
/** @var gacMongoDB $tmpObj */
if (is_null($tmpObj = grabWidget($request[BROKER_META_DATA], '', $errors))) {
foreach ($errors as $error)
$callBackLog->error($error);
} else {
// todo -- this is WRONG - use the core call: remoteFetchRequest() instead!
$tmpObj->_fetchRecords($request[BROKER_DATA]);
if ($tmpObj->status) {
$eventSuccess = true;
$tmpObj->eventMessages[] = STRING_REC_COUNT_RET . $tmpObj->recordsReturned;
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_FETCH;
$queryMeta = [
STRING_REC_COUNT_RET => $tmpObj->recordsReturned,
STRING_REC_COUNT_TOT => $tmpObj->recordsInCollection
];
// recordsInQuery is a PDO thing so let's see if it exists in the class object
if (isset($tmpObj->recordsInQuery) and $tmpObj->recordsInQuery) {
$queryMeta[STRING_REC_COUNT_QUERY] = $tmpObj->recordsInQuery;
}
if (isset($request[BROKER_META_DATA][META_DONUT_FILTER]) and $request[BROKER_META_DATA][META_DONUT_FILTER] == 1) {
$queryResults = $tmpObj->getData();
} elseif ($tmpObj->useCache or (isset($request[BROKER_META_DATA][META_DO_CACHE]) and $request[BROKER_META_DATA][META_DO_CACHE])) {
// todo - this is supposed to return the list of cache keys, or the single reference cache key - fix!
$queryResults = $tmpObj->cacheMap;
} else {
$queryResults = $tmpObj->getData();
}
$retData = [STRING_QUERY_RESULTS => $queryResults, STRING_QUERY_DATA => $queryMeta];
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, $tmpObj->eventMessages, $retData]);
} else {
$conMsg = FAIL_EVENT . BROKER_REQUEST_FETCH;
$aryRetData = buildReturnPayload([false, $tmpObj->state, $tmpObj->eventMessages, null]);
}
if (is_object($tmpObj)) $tmpObj->__destruct();
unset($tmpObj);
}
break;
default :
$msg = ERROR_EVENT_404 . $request[BROKER_REQUEST];
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_DOES_NOT_EXIST, $msg, null]);
break;
}
}
} catch (Throwable $t) { // Throwable already covers TypeError
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, $t->getMessage(), $errorStack]);
}
// ensure we have a return-payload and a console message
if (empty($aryRetData) and $postNormalResponse) {
$msg = ERROR_NO_RET_DATA . '-' . __FILE__ . '-' . $request[BROKER_REQUEST];
$conMsg = BROKER_QUEUE_C . ' - ' . $msg;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, null, $msg, null]);
} elseif ($eventSuccess and empty($conMsg)) {
$callBackLog->warn(ERROR_NO_CON_MSG);
$conMsg = $request[BROKER_REQUEST] . ' - ' . STATE_SUCCESS;
}
// prepare and send the return payload if we've not already sent it...
if ($postNormalResponse)
postResponse($aryRetData, $_request, $callBackLog, $res);
// if the event processing failed, reject the message, otherwise ack removing it from the queue
// todo: core-452: publish the event payload to the sysEvent broker to capture the failed event
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
unset($msg);
// publish event metrics if we've toggled the switch on
if ($eventTimer) {
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($childGUID)) $data[SYSTEM_EVENT_OGUID] = $childGUID;
@postSystemEvent($data, $childGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_C, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_qos(null, 1, null); // prefetch-size, prefetch-count (one unacked message at a time), global
$brokerChannel->basic_consume($queue, '', false, false, false, false, $callback); // queue, consumer-tag, no-local, no-ack, exclusive, no-wait, callback
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (Throwable $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
break;
default : // parent (pcntl_fork() returns the child's pid here, not 1)
// does nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_C));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, blocking in pcntl_waitpid() until a child exits...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$lastPid = 0;
$newPidList = null;
$result = pcntl_waitpid(0, $status); // block until any child in our process group exits
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}

420
brokers/mBroker.php Normal file

@@ -0,0 +1,420 @@
<?php
/**
* mBroker.php
*
* This is the migration broker, a rare-use broker that should not normally be spun-up unless you are planning
* a data migration. It runs off the main application server and would therefore be "visible" to anyone with
* access to the Namaste service, so keep it inactive and unavailable the rest of the time.
*
* This broker is used to pull data from a remote mongo or mysql database and import the entire table/collection into
* the "local" mysql or mongo database.
*
* You can, as of this writing:
*
* migrate mysql --> mongo
* migrate mongo --> mysql
*
* To do so, the migration section of the XML file must be populated with the data defining the source resource (URI,
* port, authentication, database name, table or collection name). The remote service must be available and accessible
* to the Namaste service.
*
* Secondly, the destination table must contain a "migration" section in the template file. The migration data in the
* template maps the source data to the new data source.
*
* This is an RPC broker - however, the migration event exposes no data and no schema to the calling client. The
* response payload is limited to a boolean status, a count of the number of records that were transferred,
* and the total amount of time taken to complete the migration.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 01-19-18 mks INF-139: Original coding
* 02-08-18 mks INF-139: Added migration events, PHP 7.2 exception handling
* 05-31-18 mks CORE-1011: update for new XML broker services configuration
* 10-04-18 mks DB-43: Support for migration requests coming from the awesome web app
* 07-28-20 mks DB-156: broker self-registration installed
*
*/
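// ---------------------------------------------------------------------------------
// Hedged sketch (illustrative only): per the header above, a migration response is
// limited to a boolean status plus transfer metrics. After gzuncompress() and
// json_decode() on the client side it would resemble the following; the field names
// and values here are assumptions -- the real keys come from buildReturnPayload():
//
//   {
//     "status": true,
//     "recordsTransferred": 125000,
//     "elapsed": "00:04:12"
//   }
// ---------------------------------------------------------------------------------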
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Message\AMQPMessage;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$myPid = getmypid();
$_REDIRECT = true;
$topDir = dirname(__DIR__);
$thisWatcher = basename(__FILE__, '.php'); // basename()'s suffix argument strips ".php" cleanly; rtrim() would treat it as a character mask
$file = basename(__FILE__);
// load the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$res = 'MIGB: ';
// before we do anything, ensure we have a "migration" section in the configuration
if (!array_key_exists(CONFIG_MIGRATION, gasConfig::$settings)
or empty(gasConfig::$settings[CONFIG_MIGRATION])
or !is_array(gasConfig::$settings[CONFIG_MIGRATION])) {
// XML config for migration is not loaded or is empty or malformed - exit immediately
consoleLog($res, CON_SYSTEM, ERROR_CONFIG_RESOURCE_404 . STRING_MIGRATION_CONFIG);
exit(1);
}
$childrenPidList = null;
$pidDir = $topDir . DIR_PIDS;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
// event management for children
$appServerConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_APPSERVER];
$numberChildren = $appServerConfig[CONFIG_BROKER_INSTANCES][CONFIG_BROKER_M_BROKER];
$requestsPerInstance = (empty($appServerConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $appServerConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2??
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
// create the root guid
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
/** @var gacErrorLogger $parentLog */
$parentLog = new gacErrorLogger();
// todo - validate the broker environment as declared in the XML config
// get the location where the broker is supposed to run
$brokerLocation = ENV_APPSERVER;
if (!empty($argv) and !empty($argv[1])) {
$brokerLocation = $argv[1];
}
$errors = null;
$file = basename(__FILE__, DOT . FILE_TYPE_PHP); // strip the ".php" suffix (rtrim() would treat it as a character mask)
$service = CONFIG_BROKER_APPSERVER;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
//declare( ticks = 1);
function sigHandler($_sig) {
    global $numberChildren;
    switch ($_sig) {
        case SIGCHLD :
            $numberChildren--;
            // reap every exited child without blocking; pcntl_wait() writes the
            // raw exit status into $status by reference (don't clobber $_sig)
            $status = 0;
            while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
                @pcntl_wexitstatus($status);
            }
            break;
    }
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $startingMemory, $myRequestsPerInstance, $groot, $file;
$startingMemory = memory_get_usage(true);
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(1, 9); // 1-29 of jitter so sibling children don't all recycle at once
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
try {
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
// generate a child guid for the forked child...
$childGUID = rtrim($res, COLON) . UDASH . guid();
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
$queue = $queueTag . BROKER_QUEUE_M;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_BROKER);
if (is_null($brokerConnection)) {
$childLog->fatal(ERROR_RESOURCE_404 . RESOURCE_BROKER);
consoleLog($res, CON_ERROR, ERROR_RESOURCE_404 . RESOURCE_BROKER);
exit(1); // shell-script exit value for fail
}
$brokerChannel = $brokerConnection->channel();
// params: queue name, passive, durable, exclusive, auto-delete
$brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
} catch (PhpAmqpLib\Exception\AMQPRuntimeException | Throwable $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(1);
}
// register the child-spawn event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
// todo -- add a broker name to this event so we know which broker is registering
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request)
{
$startTime = gasStatic::doingTime();
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var AMQPStreamConnection $brokerConnection */
global $brokerConnection;
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service;
$event = BROKER_QUEUE_M . '(';
$requestCounter++;
$aryRetData = null;
$retData = null;
$errorStack = [];
$request = null;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$eventSuccess = false;
$conMsg = '';
$eventGUID = guid();
$thisPid = getmypid();
$eventTimer = false; // certain events will toggle to true to log timer recording for the broker event
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
// set-up the call-back logger
/** @var gacErrorLogger $callBackLog */
$callBackLog = new gacErrorLogger($eventGUID);
try {
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$aryRetData = buildReturnPayload([false, STATE_FAIL, null, $msg, null]);
$event .= ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorStack)) {
for ($index = 0, $last = count($errorStack); $index < $last; $index++) {
$conMsg .= $errorStack[$index] . $eos;
$callBackLog->error($errorStack[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, $errorStack, null, null]);
$event .= ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event .= $request[BROKER_REQUEST] . ')';
if (is_null($request)) {
consoleLog($res, CON_ERROR, ERROR_REQUEST_404);
}
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
$_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info['consumer_tag']); // basic_cancel() takes the consumer tag, not the delivery tag
$conMsg = SUCCESS_SHUTDOWN;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, BROKER_REQUEST_SHUTDOWN, null]);
$eventSuccess = true;
break;
// test broker responsiveness
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_M;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, (SUCCESS_PING . BROKER_QUEUE_M), null]);
$eventSuccess = true;
break;
// BROKER_REQUEST_MIGRATION event starts the migration process
case BROKER_REQUEST_MIGRATION :
// check to see if this is a web-request (with replacement XML)
if (isset($request[BROKER_META_DATA][BROKER_XML_DATA]) and is_array($request[BROKER_META_DATA][BROKER_XML_DATA])) {
// store the old XML settings
// $oldMigCfg = gasConfig::$settings[CONFIG_MIGRATION];
// replace with the new XML configuration validated by the migration web app
// gasConfig::$settings[CONFIG_MIGRATION] = $request[BROKER_META_DATA][BROKER_XML_DATA];
// unset($request[BROKER_META_DATA][BROKER_XML_DATA]);
// set a meta field to indicate that the origin was the migration web app
// $request[BROKER_META_DATA][META_MIGRATION_WEB_APP] = true;
consoleLog($res, CON_SYSTEM, INFO_MIGRATION_XML_OVERRIDE);
}
// process the request
$objMigrate = new gacMigrations($request[BROKER_DATA], $request[BROKER_META_DATA]);
// reset XML back to original settings
if (isset($oldMigCfg)) gasConfig::$settings[CONFIG_MIGRATION] = $oldMigCfg;
if (!$objMigrate->status) {
// migration process did not complete or even failed to launch
$conMsg = FAIL_EVENT . BROKER_REQUEST_MIGRATION;
$aryRetData = buildReturnPayload([false, $objMigrate->state, $objMigrate->errorStack, null]);
} else {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_MIGRATION;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, $objMigrate->errorStack, $objMigrate->migrationReport]);
$eventSuccess = true;
}
break;
default :
$msg = ERROR_EVENT_404 . $request[BROKER_REQUEST];
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_DOES_NOT_EXIST, $msg, null]);
break;
}
}
} catch (Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__FILE__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, $t->getMessage(), $errorStack]);
}
// ensure we have a return-payload and a console message
if (empty($aryRetData)) {
$msg = ERROR_NO_RET_DATA . '-' . __FILE__ . '-' . ($request[BROKER_REQUEST] ?? STRING_UNKNOWN); // $request is null when first-pass validation failed
$conMsg = BROKER_QUEUE_M . ' - ' . $msg;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, null, $msg, null]);
} elseif ($eventSuccess and empty($conMsg)) {
$callBackLog->warn(ERROR_NO_CON_MSG);
$conMsg = $request[BROKER_REQUEST] . ' - ' . STATE_SUCCESS;
}
// prepare the return payload...
/** @noinspection PhpUndefinedMethodInspection */
$msg = new AMQPMessage(gzcompress(json_encode($aryRetData)), array(BROKER_CORRELATION_ID => $_request->get(BROKER_CORRELATION_ID)));
try {
/** @noinspection PhpUndefinedMethodInspection */
$_request->delivery_info[BROKER_CHANNEL]->basic_publish($msg, '', $_request->get(BROKER_REPLY_TO));
$_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
} catch (PhpAmqpLib\Exception\AMQPTimeoutException | PhpAmqpLib\Exception\AMQPRuntimeException | Throwable | TypeError $t) {
$hdr = sprintf(INFO_LOC, basename(__FILE__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
// if the event processing failed, reject the message, otherwise ack removing it from the queue
// todo: core-452: publish the event payload to the sysEvent broker to capture the failed event
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT,$requestCounter, $myRequestsPerInstance));
unset($msg);
// publish event metrics if we've toggled the switch on
if ($eventTimer) {
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($childGUID)) $data[SYSTEM_EVENT_OGUID] = $childGUID;
@postSystemEvent($data, $childGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_M, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_qos(null, 1, null);
$brokerChannel->basic_consume($queue, '', false, false, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__FILE__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
break;
default : // parent: pcntl_fork() returns the child's PID here (never 1), so match it with default
// does nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_M));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, waking-up every second to monitor its children...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$lastPid = 0;
$newPidList = null;
$result = pcntl_waitpid(0, $status); // detect any sigchld from the parent-group
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}
575
brokers/rBroker.php Normal file
View File
@@ -0,0 +1,575 @@
<?php
/**
* readBroker (rBroker.php) -- persistent (daemon) PHP application program
*
 * This is a forking broker: upon execution, the broker program iteratively starts up a specific number of
 * child processes (XML config) and then, as the parent process, monitors each child. On a child's death,
 * the signal is trapped via a custom replacement signal handler and a replacement child is spawned.
*
* Children are only allowed to execute a finite number of broker events (XML config) before they self-terminate and
* are re-incarnated by the parent. This is to mitigate the memory leaks inherent in PHP as PHP applications were
* never intended to be used as TSR programs.
*
* NOTES:
* ------
* - only the parent PID is written to the PID directory. This feature is for a monitoring program that will restart
* the parent broker, but will not monitor/restart the children; only the parent daemon may re-incarnate new children
* - custom signal handler for trapping SIGCLD and updating the global child counter
 * - signals sent to a child (other than SIGKILL) are diverted to the shutDown event so that RMQ resources are freed
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-14-17 mks original coding
* 08-24-17 mks CORE-500: broker events
* 03-14-18 mks CORE-833: fetch event tests for recordsInQuery class member being set before including in the
* return-data payload b/c recordsInQuery is only a PDO thing
* 05-31-18 mks CORE-1011: update for new XML broker services configuration
* 01-03-19 mks DB-78: fixed bug in fetch event where pre-supplied event-GUID value was being over-written
* 01-29-20 mks DB-145: router code for tercero requests added to default section in $callback method
* 04-03-20 mks ECI-107: added sub-collection fetch, exception trapping on wait(), IDE directives cleaned-up
* 07-28-20 mks DB-156: broker self-registration installed
*
*/
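/*
 * Illustrative client-side sketch (hedged; not part of this broker). Per the
 * tercero branch below, callers reach these daemons over RPC by publishing a
 * gzip-compressed JSON envelope through gacBrokerClient and blocking on the
 * reply queue. The second constructor argument is a code-location tag; the
 * value here is a placeholder:
 *
 *   $bc = new gacBrokerClient(BROKER_QUEUE_R, 'example@0');
 *   $reply = json_decode(gzuncompress($bc->call(gzcompress(json_encode(
 *       [BROKER_REQUEST => BROKER_REQUEST_PING]
 *   )))), true);
 *   // $reply[PAYLOAD_STATUS] holds the boolean outcome; PAYLOAD_RESULTS the data
 */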
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Exception\AMQPChannelClosedException;
use PhpAmqpLib\Exception\AMQPInvalidArgumentException;
use PhpAmqpLib\Exception\AMQPRuntimeException;
use PhpAmqpLib\Exception\AMQPTimeoutException;
use PhpAmqpLib\Message\AMQPMessage;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$myPid = getmypid();
$_REDIRECT = true; // all output to logfile
$topDir = dirname(__DIR__);
$thisWatcher = basename(__FILE__, '.php'); // basename() strips the suffix; rtrim(".php") would strip a trailing character *set*
// load the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$childrenPidList = null; // contains list of the pids of the children spawned by this watcher
$pidDir = $topDir . DIR_PIDS;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$res = 'RBRK: ';
$appServerConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_APPSERVER];
$numberChildren = $appServerConfig[CONFIG_BROKER_INSTANCES][CONFIG_BROKER_R_BROKER];
$requestsPerInstance = (empty($appServerConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $appServerConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren;
$runningBrokers = $numberChildren;
$myRequestsPerInstance = 0;
$startingMemory = 0;
$file = basename(__FILE__);
// create the root guid
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
/** @var gacErrorLogger $parentLog */
$parentLog = new gacErrorLogger();
// get the location where the broker is supposed to run
$brokerLocation = ENV_APPSERVER;
if (!empty($argv) and !empty($argv[1])) {
$brokerLocation = $argv[1];
}
$errors = null;
$file = basename(__FILE__, DOT . FILE_TYPE_PHP); // suffix-strip; rtrim() would remove a trailing character set
$service = CONFIG_BROKER_APPSERVER;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
///////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death
///////////////////////////////////////////////////////////////////////////////
//declare( ticks = 1);
function sigHandler($_sig) {
    global $numberChildren;
    switch ($_sig) {
        case SIGCHLD :
            $numberChildren--;
            // reap every exited child without blocking; pcntl_wait() writes the
            // raw exit status into $status by reference (don't clobber $_sig)
            $status = 0;
            while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
                @pcntl_wexitstatus($status);
            }
            break;
    }
}
pcntl_signal(SIGCLD, 'sigHandler');
//////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event
//////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $myRequestsPerInstance, $startingMemory, $groot, $file;
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(1, 9);
$startingMemory = memory_get_usage(true);
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error!!!
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
// remove the signal handlers in the child code
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
try {
// set-up the child error logger
$childLog = new gacErrorLogger();
// generate a child guid for the forked child...
$childGUID = rtrim($res, COLON) . UDASH . guid();
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
// ---- broker code begins ---- //
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
//$exchange = BROKER_EXCHANGE_RO;
$queue = $queueTag . BROKER_QUEUE_R;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_BROKER);
if (is_null($brokerConnection)) {
$childLog->fatal(ERROR_RESOURCE_404 . RESOURCE_BROKER);
consoleLog($res, CON_ERROR, ERROR_RESOURCE_404 . RESOURCE_BROKER);
exit(1); // non-zero: shell-script exit value for fail
}
/** @var AMQPChannel $brokerChannel */
$brokerChannel = $brokerConnection->channel();
// set up the RPC queue for RO service
// params: queue name, passive, durable, exclusive, auto-delete
//$brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
$brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
} catch (PhpAmqpLib\Exception\AMQPRuntimeException | Throwable | TypeError $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(1); // non-zero: shell-script exit value for fail
}
// register the child-spawn event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request)
{
$startTime = gasStatic::doingTime();
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var AMQPStreamConnection $brokerConnection */
global $brokerConnection;
global $requestCounter, $groot, $res, $eos, $myRequestsPerInstance, $startingMemory, $service;
$requestCounter++;
$aryRetData = null;
$retData = null;
$request = null;
$errorList = [];
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$eventSuccess = false;
$conMsg = '';
$eventGUID = guid();
$thisPid = getmypid();
$eventTimer = false; // certain events will toggle to true to log timer recording for the broker event
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
// set-up the callBack log; logger object for the callback function
$callBackLog = new gacErrorLogger($eventGUID);
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_FAIL, $msg, null, null]);
$callBackLog->info($msg);
$event = BROKER_QUEUE_R . '(' . ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorList)) {
if (count($errorList) == 0) {
$callBackLog->error(ERROR_DATA_META_REJECTED . STRING_UNKNOWN);
$errorList[] = ERROR_DATA_META_REJECTED . STRING_UNKNOWN;
$conMsg = FAIL_EVENT . $request[BROKER_REQUEST];
} else {
for ($index = 0, $last = count($errorList); $index < $last; $index++) {
$conMsg .= $errorList[$index] . $eos;
$callBackLog->error($errorList[$index]);
}
$conMsg = rtrim($conMsg, $eos);
}
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, $errorList, null]);
$event = BROKER_QUEUE_R . '(' . ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event = BROKER_QUEUE_R . '(' . $request[BROKER_REQUEST] . ')';
if (is_null($request)) consoleLog($res, CON_ERROR, ERROR_BROKER_REQUEST_404);
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
$_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info['consumer_tag']); // basic_cancel() takes the consumer tag, not the delivery tag
$conMsg = SUCCESS_SHUTDOWN . BROKER_QUEUE_R;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, BROKER_REQUEST_SHUTDOWN]);
$eventSuccess = true; // keep in step with the STATE_SUCCESS payload above
break;
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_R;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, SUCCESS_PING . BROKER_QUEUE_R]);
$eventSuccess = true;
break;
// request class schema map
case BROKER_REQUEST_SCHEMA :
$eventTimer = true;
if (empty($request[BROKER_META_DATA]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_DATA_404;
$aryRetData = buildReturnPayload([false, STATE_DATA_ERROR, null, ERROR_DATA_404]);
} else {
$obj = null;
$errorList = array();
$processObject = false;
// instantiate the new template class and return a schema report...
try {
// can't instantiate remote-service objects in production, so we'll inject a skip
// directive for the env-check...
$obj = new gacFactory($request[BROKER_META_DATA], FACTORY_EVENT_SCHEMA_REQUEST, '', $errorList);
$processObject = true;
} catch (TypeError $e) {
$callBackLog->mirror = true;
$callBackLog->warn($e->getMessage());
$callBackLog->mirror = false;
$conMsg = ERROR_EXCEPTION;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, [$e->getMessage()], null]); // use the caught exception's message, not the stale $msg
}
if ($processObject) {
if (!$obj->status) {
$msg = ERROR_CLASS_SCHEMA_404 . COLON . $request[BROKER_META_DATA][META_TEMPLATE];
$conMsg = $msg;
$callBackLog->error($msg);
$errorList[] = $msg;
if (!empty($obj->eventMessages)) $errorList = array_merge($errorList, $obj->eventMessages);
$aryRetData = buildReturnPayload([false, STATE_TEMPLATE_ERROR, $errorList, null]);
} else {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . $request[BROKER_REQUEST];
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, $obj->schema]);
}
}
if (is_object($obj)) $obj->__destruct();
unset($obj);
}
break;
case BROKER_REQUEST_FETCH :
$eventTimer = true;
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, ERROR_TEMPLATE_FILE_404, BROKER_REQUEST_FETCH]);
} else {
// invoke the broker-helper to execute the fetch request
$bh = new gacBrokerHelper();
$eventSuccess = $bh->fetch($request, $aryRetData, $conMsg);
unset($bh);
}
break;
case BROKER_REQUEST_SUBC_FETCH :
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_DATA_ERROR, ERROR_TEMPLATE_FILE_404, BROKER_REQUEST_SUBC_FETCH]);
} elseif (!isset($request[BROKER_DATA][STRING_SUBC_COL]) or empty($request[BROKER_DATA][STRING_SUBC_COL])) {
$conMsg = ERROR_DATA_KEY_404 . STRING_SUBC_COL;
$aryRetData = buildReturnPayload([false, STATE_DATA_ERROR, $conMsg, BROKER_REQUEST_SUBC_FETCH]);
} elseif (!isset($request[BROKER_DATA][STRING_SUBC_DATA]) or empty($request[BROKER_DATA][STRING_SUBC_DATA])) {
$conMsg = ERROR_DATA_KEY_404 . STRING_SUBC_DATA;
$aryRetData = buildReturnPayload([false, STATE_DATA_ERROR, $conMsg, BROKER_REQUEST_SUBC_FETCH]);
} else {
$errors = [];
/** @var gacMongoDB $objClass */
if (is_null($objClass = grabWidget($request[BROKER_META_DATA], '', $errors))) {
foreach ($errors as $error)
$callBackLog->error($error);
} else {
// todo -- for now, sub-collection fetches are only allowed on appServer
// check that this is a mongo object, exit if it is not
if ($objClass->schema != TEMPLATE_DB_MONGO) {
$conMsg = ERROR_SCHEMA_MISMATCH . $request[BROKER_META_DATA][META_TEMPLATE];
$errors[] = $conMsg;
$errors[] = INFO_SCHEMA . $objClass->schema;
$aryRetData = buildReturnPayload([false, STATE_FAIL, $errors, null]);
} else {
$objClass->fetchSubCollectionRecord($request[BROKER_DATA]);
if (!$objClass->status) {
$conMsg = ERROR_SUBC_FETCH;
$aryRetData = buildReturnPayload([ false, $objClass->state, $objClass->eventMessages, null]);
} else {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_SUBC_FETCH;
$queryMeta = [
STRING_REC_COUNT_RET => $objClass->recordsReturned,
STRING_REC_COUNT_QUERY => $objClass->recordsInQuery,
STRING_REC_COUNT_TOT => $objClass->recordsInCollection
];
if ($objClass->state == STATE_NOT_FOUND and $objClass->count == 0) {
$retData = [STRING_QUERY_RESULTS => null, STRING_QUERY_DATA => $queryMeta];
} else {
// cacheMapping call
if (!gasCache::mapOutboundPayload($objClass, $errors)) {
$queryResults = $objClass->getData();
} else {
// cache mapping succeeded - return the cache key
$queryResults = $objClass->getCK();
}
$retData = [STRING_QUERY_RESULTS => $queryResults, STRING_QUERY_DATA => $queryMeta];
}
$aryRetData = buildReturnPayload([true, $objClass->state, $objClass->eventMessages, $retData]);
}
}
if (is_object($objClass)) $objClass->__destruct();
unset($objClass);
}
}
break;
case BROKER_REQUEST_QUERY_COUNT :
$eventTimer = true;
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, ERROR_TEMPLATE_FILE_404, BROKER_REQUEST_QUERY_COUNT]);
} else {
$errors = [];
/** @var gacMongoDB $objClass */
if (is_null($objClass = grabWidget($request[BROKER_META_DATA], '', $errors))) {
foreach ($errors as $error)
$callBackLog->error($error);
} else {
if (!$objClass->_getQC($request[BROKER_DATA])) {
$conMsg = FAIL_EVENT . BROKER_REQUEST_QUERY_COUNT;
$aryRetData = buildReturnPayload([ false, $objClass->state, $objClass->eventMessages, null]);
} else {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_QUERY_COUNT;
$aryRetData = buildReturnPayload([ true, STATE_SUCCESS, $objClass->eventMessages, [$objClass->recordsInQuery, $objClass->recordsInCollection ] ] );
}
if (is_object($objClass)) $objClass->__destruct();
unset($objClass);
}
}
break;
case BROKER_REQUEST_TERCERO :
$eventTimer = true;
// just as a reminder, we don't check for the existence of META_TEMPLATE in the validateMetaData()
// function because not all events require it - hence the seemingly repetitive check in the event code.
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, ERROR_TEMPLATE_FILE_404, null]);
} elseif (!isset($request[OLD_REQUEST]) or empty($request[OLD_REQUEST])) {
$conMsg = ERROR_REQUEST_404 . COLON . OLD_REQUEST;
$aryRetData = buildReturnPayload([false, STATE_DATA_ERROR, $conMsg, null]);
} else {
// this is a request for tercero - replace the event, instantiate a tercero client,
// and cross-service publish the request and return the response back to the caller
$bc = new gacBrokerClient(BROKER_QUEUE_U, sprintf(INFO_LOC, basename(__FILE__), __LINE__));
if (!$bc->status) {
$conMsg = ERROR_BROKER_CLIENT_DECLARE . BROKER_QUEUE_U;
$aryRetData = buildReturnPayload([ false, STATE_FRAMEWORK_WARNING, $conMsg, null]);
} else {
$request[BROKER_REQUEST] = $request[OLD_REQUEST];
$aryRetData = json_decode(gzuncompress($bc->call(gzcompress(json_encode($request)))),true);
if ($aryRetData[PAYLOAD_STATUS]) {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . $request[OLD_REQUEST] . ' for ' . BROKER_TERCERO;
} else {
$conMsg = FAIL_EVENT . $request[OLD_REQUEST] . ' for ' . BROKER_TERCERO;
}
}
if (is_object($bc)) $bc->__destruct();
unset($bc);
}
break;
default :
// check for user template in meta payload and, if exists, publish the request to the user
// and pass the return payload back to the requesting client
if (isset($request[BROKER_META_DATA][META_TEMPLATE]) and $request[BROKER_META_DATA][META_TEMPLATE] == TEMPLATE_CLASS_USERS) {
$ubc = new gacBrokerClient(BROKER_QUEUE_U, basename(__FILE__) . AT . __LINE__);
if (!$ubc->status) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_BROKER_CLIENT_DECLARE . BROKER_QUEUE_U;
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, null, $msg]);
} else {
$response = $ubc->call(gzcompress(json_encode($request))); // call() expects the compressed JSON envelope, as in the tercero branch
$response = json_decode(gzuncompress($response), true);
$aryRetData = buildReturnPayload([$response[PAYLOAD_STATUS], $response[PAYLOAD_STATE], $response[PAYLOAD_DIAGNOSTICS], $response[PAYLOAD_RESULTS]]);
if ($response[PAYLOAD_STATUS]) {
$conMsg = SUCCESS_EVENT;
$eventSuccess = true;
} else $conMsg = FAIL_EVENT;
$conMsg .= $request[BROKER_REQUEST];
if (is_object($ubc)) $ubc->__destruct();
unset($ubc);
}
} else {
$msg = ERROR_EVENT_404 . $request[BROKER_REQUEST];
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_DOES_NOT_EXIST, null, $msg]);
}
break;
}
}
// make doubly-damn sure we have a return-payload and a console message
if (empty($aryRetData)) {
$msg = ERROR_NO_RET_DATA;
$conMsg = BROKER_QUEUE_R . ' - ' . $msg;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, null, $msg, null]);
} elseif ($eventSuccess and empty($conMsg)) {
$callBackLog->warn(ERROR_NO_CON_MSG);
$conMsg = $request[BROKER_REQUEST] . ' - ' . STATE_SUCCESS;
}
// prepare the return payload...
// $eventSuccess = false;
try {
/** @noinspection PhpUndefinedMethodInspection */
$msg = new AMQPMessage(gzcompress(json_encode($aryRetData)), array(BROKER_CORRELATION_ID => $_request->get(BROKER_CORRELATION_ID)));
/** @noinspection PhpUndefinedMethodInspection */
$_request->delivery_info[BROKER_CHANNEL]->basic_publish($msg, '', $_request->get(BROKER_REPLY_TO));
$_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
} catch (AMQPTimeoutException | TypeError | AMQPRuntimeException | Throwable $e) {
$m = $e->getMessage();
$callBackLog->fatal($m);
consoleLog($res, CON_ERROR, $m);
}
// if the event processing failed, we want to publish the failed event to the admin queue
// if (!$eventSuccess) {
// todo - CORE-452 - publish the event(payload) to the admin queue to capture the failed event
// }
unset($msg);
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT,$requestCounter, $myRequestsPerInstance));
// publish event metrics if we've toggled the switch on
if ($eventTimer) {
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($childGUID)) $data[SYSTEM_EVENT_OGUID] = $childGUID;
@postSystemEvent($data, $childGUID, $callBackLog);
}
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->disconnect(); // changed from close(), which does not exist on this connection wrapper
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_R, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_qos(null, 1, null);
$brokerChannel->basic_consume($queue, '', false, false, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (AMQPChannelClosedException | AMQPInvalidArgumentException | AMQPRuntimeException | Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__FILE__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
// ---- broker code ends ---- //
break;
case 1 : // parent
// do nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_R));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, blocking until any child in its process group dies...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$lastPid = 0;
$newPidList = null;
$result = pcntl_waitpid(0, $status); // detect any sigchld from the parent-group
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}

381
brokers/sBroker.php Normal file
View File

@@ -0,0 +1,381 @@
<?php
/**
* sBroker.php -- the tercero session (one-way) broker
*
 * The session broker is a "system" broker designed to do one thing: handle requests from the admin
 * service to expire existing sessions on tercero.
 *
 * This broker was created so as not to overburden the user-broker with administrative requests. It is a
 * "fire-and-forget" broker, meaning that no response is published (returned) to the calling client.
 *
 * If there is an error in processing, we communicate that error back to admin by publishing a system event.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 10-02-20 mks DB-168: original coding
*
*
*/
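/*
 * Example (hypothetical, for illustration only): a caller publishes the same compressed-JSON
 * envelope this broker consumes. The constants are the framework's own; the session GUID and
 * the meta-data contents are placeholders.
 *
 *   $request = [
 *       BROKER_REQUEST   => BROKER_REQUEST_EXPIRE_SESSION,
 *       BROKER_DATA      => [STRING_GUID_KEY => $sessionGUID],
 *       BROKER_META_DATA => $meta // must name the sessions data-template for grabWidget()
 *   ];
 *   $bc = new gacWorkQueueClient(basename(__FILE__) . AT . __LINE__, $queueTag . BROKER_QUEUE_S);
 *   if ($bc->status) $bc->call(gzcompress(json_encode($request)));
 */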
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Exception\AMQPChannelClosedException;
use PhpAmqpLib\Exception\AMQPRuntimeException;
use PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$_REDIRECT = true;
$topDir = dirname(__DIR__);
// load the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$res = 'SESS: ';
// event management for children
$appServerConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_TERCERO];
$numberChildren = $appServerConfig[CONFIG_BROKER_INSTANCES][CONFIG_SESSION_BROKER];
$requestsPerInstance = (empty($appServerConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $appServerConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2?? (Yes, but only if prod)
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
$parentLog = new gacErrorLogger();
$errors = null;
$file = basename(__FILE__, DOT . FILE_TYPE_PHP); // basename() strips the exact suffix; rtrim() would treat it as a character list
$service = ENV_TERCERO;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
//declare( ticks = 1);
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
$numberChildren--;
// reap any exited children without blocking so no zombies linger
while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
@pcntl_wexitstatus($status);
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $myRequestsPerInstance, $startingMemory, $groot;
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(0, 9); // jitter the limit so sibling children don't all recycle at once
$startingMemory = memory_get_usage(true);
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
$childGUID = rtrim($res, COLON) . UDASH . guid();
try {
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
$queue = $queueTag . BROKER_QUEUE_S;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_TERCERO);
if (is_null($brokerConnection)) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
$childLog->fatal($hdr . ERROR_RESOURCE_404 . RESOURCE_TERCERO . COLON . BROKER_QUEUE_S);
consoleLog($res, CON_ERROR, $hdr . ERROR_RESOURCE_404 . RESOURCE_TERCERO . COLON . BROKER_QUEUE_S);
exit(1); // shell-script exit value for fail
}
$brokerChannel = $brokerConnection->channel();
// $brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
$brokerChannel->queue_declare($queue);
} catch (AMQPRuntimeException | AMQPTimeoutException | Throwable $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(1);
}
// register the broker child start-up as a system-event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request) {
$startTime = gasStatic::doingTime();
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service;
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var PhpAmqpLib\Connection\AMQPStreamConnection $brokerConnection */
global $brokerConnection;
$file = basename(__FILE__);
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
if (gasConfig::$settings[CONFIG_DEBUG]) {
consoleLog($res, CON_DEBUG, 'Child GUID: ' . $childGUID);
consoleLog($res, CON_DEBUG, 'root GUID: ' . $groot);
}
$requestCounter++;
$returnData = null;
$eventTimer = false;
$request = null;
$eventSuccess = false;
$conMsg = '';
$errorList = array();
$thisPid = getmypid();
$eventGUID = guid();
$ogGUID = '';
/** @var gacMongoDB $obj */
$obj = null;
// set-up the call-back logger
$callBackLog = new gacErrorLogger($eventGUID, false);
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$event = BROKER_QUEUE_S . '(' . ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorList)) {
for ($index = 0, $last = count($errorList); $index < $last; $index++) {
$conMsg .= $errorList[$index] . $eos;
$callBackLog->error($errorList[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$event = BROKER_QUEUE_S . '(' . ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event = BROKER_QUEUE_S . '(' . $request[BROKER_REQUEST] . ')';
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
// $_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$eventSuccess = true;
break;
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_S;
$eventSuccess = true;
break;
case BROKER_REQUEST_EXPIRE_SESSION :
$errors = [];
/** @var gacMongoDB $obj */
if (!is_null($obj = grabWidget($request[BROKER_META_DATA], '', $errors))) {
// we have widget - update the session record
/** @var gatSessions $template */
$template = $obj->template;
$bc = new gacWorkQueueClient($file . AT . __LINE__, BROKER_QUEUE_AI);
if (!$bc->status) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, ERROR_BROKER_CLIENT_DECLARE . BROKER_QUEUE_AI, $foo, true);
$conMsg = FAIL_EVENT . $request[BROKER_REQUEST];
} elseif (!is_null($payload = $template->buildExpireSessionPayload($request[BROKER_DATA], $errors))) {
$obj->_updateRecord($payload);
if ($obj->status) {
// publish a request back to admin to expire the system-event record indicating success in closing the session
// (because both of these queues are type f-n-f...)
$bc->call($template->buildCloseSysEventPayload($request[BROKER_DATA][STRING_GUID_KEY]));
// successful session update - console log message and done (no return to client)
$conMsg = SUCCESS_EVENT . $request[BROKER_REQUEST];
$eventSuccess = true;
} else {
// create failed-session record for failure to update session record
if (count($errors))
foreach ($errors as $error)
$callBackLog->error($error);
$conMsg = FAIL_EVENT . $request[BROKER_REQUEST];
$data = [
MONGO_FAILED_EVENT_GUID => $request[BROKER_DATA][STRING_GUID_KEY],
MONGO_FAILED_EVENT_NAME => $request[BROKER_REQUEST],
MONGO_FAILED_EVENT_DESC => basename(__FILE__) . AT . __LINE__,
MONGO_FAILED_EVENT_SEV => ERROR_WARN
];
$meta = [
META_TEMPLATE => TEMPLATE_CLASS_FAILED_SESSIONS,
META_CLIENT => CLIENT_SYSTEM,
META_DO_CACHE => 0,
META_SESSION_ID => $request[BROKER_DATA][STRING_GUID_KEY],
];
$request = [
BROKER_REQUEST => BROKER_REQUEST_CREATE,
BROKER_DATA => [$data],
BROKER_META_DATA => $meta
];
$bc->call(gzcompress(json_encode($request)));
if (is_object($bc)) $bc->__destruct();
unset($bc);
}
} else {
// we somehow failed to build the data payload based on the request data
if (count($errors))
foreach ($errors as $error)
$callBackLog->error($error);
$conMsg = FAIL_EVENT . $request[BROKER_REQUEST];
}
} else {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$conMsg = ERROR_TEMPLATE_INSTANTIATE . $request[BROKER_META_DATA][META_TEMPLATE];
$obj->eventMessages[] = $conMsg;
$callBackLog->warn($hdr . $conMsg);
}
break;
default :
$conMsg = ERROR_BROKER_EVENT_UNKNOWN . $request[BROKER_REQUEST];
$callBackLog->warn(ERROR_BROKER_EVENT_UNKNOWN . $request[BROKER_REQUEST]);
break;
}
}
if (!$eventSuccess and empty($conMsg)) {
$conMsg = ERROR_FINE_PICKLE;
}
if (!empty($conMsg)) {
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
}
// $_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
// log a system-event for this event -- unlike the other system events, we don't submit this one
// via a broker (the standard path); instead, we write the record out directly, since doing
// otherwise would cause an infinite processing loop.
if ($eventTime and $eventTimer) {
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($ogGUID)) $data[SYSTEM_EVENT_OGUID] = $ogGUID;
@postSystemEvent($data, $eventGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
gasCache::sysDel(($groot . UDASH . $thisPid));
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_S, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_consume($queue, '', false, true, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (AMQPChannelClosedException | Throwable $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
break;
case 1 : // parent
// does nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_S));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, blocking until one of its children dies...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$lastPid = 0;
$newPidList = null;
$result = pcntl_waitpid(0, $status); // detect any sigchld from the parent-group
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}

485
brokers/uBroker.php Normal file
View File

@@ -0,0 +1,485 @@
<?php
/**
* uBroker.php -- user broker for tercero service
*
 * The user broker lives on tercero and handles all user API requests, which are initially published to
 * appServer and then forwarded here to this broker.
*
* API calls are fully documented.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 01-28-20 mks DB-144: original coding
* 07-28-20 mks DB-156: broker self-registration installed
*
*/
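/*
 * Reply contract (sketch): every request is answered with a gzip-compressed JSON payload built by
 * buildReturnPayload() in the order [status, state, diagnostics, results], published to the caller's
 * reply_to queue under its correlation_id. A hypothetical caller would decode a reply with:
 *
 *   $reply = json_decode(gzuncompress($msg->body), true);
 */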
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Message\AMQPMessage;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$myPid = getmypid();
$_REDIRECT = true;
$topDir = dirname(__DIR__);
$thisWatcher = basename(__FILE__, '.php'); // basename() strips the exact suffix; rtrim() with a character list would mis-strip names ending in 'p' or 'h'
$file = basename(__FILE__);
// load the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$res = 'USRB: '; // USRB: USeR Broker
$childrenPidList = null;
$pidDir = $topDir . DIR_PIDS;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
// event management for children
$appServerConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_TERCERO];
$numberChildren = $appServerConfig[CONFIG_BROKER_INSTANCES][CONFIG_USER_BROKER];
$requestsPerInstance = (empty($appServerConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $appServerConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2??
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
// create the root guid
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
/** @var gacErrorLogger $parentLog */
$parentLog = new gacErrorLogger();
// todo - validate the broker environment as declared in the XML config
// get the location where the broker is supposed to run
$brokerLocation = ENV_TERCERO;
if (!empty($argv) and !empty($argv[1])) {
$brokerLocation = $argv[1];
}
$errors = null;
$file = basename(__FILE__, DOT . FILE_TYPE_PHP); // basename() strips the exact suffix; rtrim() would treat it as a character list
$service = ENV_TERCERO;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
//declare( ticks = 1);
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
$numberChildren--;
// reap any exited children without blocking so no zombies linger
while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
@pcntl_wexitstatus($status);
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $startingMemory, $myRequestsPerInstance, $groot, $file;
$startingMemory = memory_get_usage(true);
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(1, 9); // jitter the limit so sibling children don't all recycle at once
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
try {
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
// generate a child guid for the forked child...
$childGUID = rtrim($res, COLON) . UDASH . guid();
// toss the childGUID onto the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
$queue = $queueTag . BROKER_QUEUE_U;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_TERCERO);
if (is_null($brokerConnection)) {
$childLog->fatal(ERROR_RESOURCE_404 . RESOURCE_BROKER);
consoleLog($res, CON_ERROR, ERROR_RESOURCE_404 . RESOURCE_BROKER);
exit(1); // shell-script exit value for fail
}
$brokerChannel = $brokerConnection->channel();
// params: queue name, passive, durable, exclusive, auto-delete
$brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
} catch (PhpAmqpLib\Exception\AMQPRuntimeException | Throwable $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(1);
}
// register the child-spawn event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
// todo -- add a broker name to this event so we know which broker is registering
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request)
{
$startTime = gasStatic::doingTime();
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var AMQPStreamConnection $brokerConnection */
global $brokerConnection, $file;
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service;
$event = BROKER_QUEUE_U . '(';
$requestCounter++;
$aryRetData = null;
$retData = null;
$errorStack = [];
$request = null;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$eventSuccess = false;
$conMsg = '';
$eventGUID = guid();
$thisPid = getmypid();
$eventTimer = false; // certain events will toggle to true to log timer recording for the broker event
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
// set-up the call-back logger
/** @var gacErrorLogger $callBackLog */
$callBackLog = new gacErrorLogger($eventGUID);
try {
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$aryRetData = buildReturnPayload([false, STATE_FAIL, null, $msg, null]);
$event .= ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorStack)) {
for ($index = 0, $last = count($errorStack); $index < $last; $index++) {
$conMsg .= $errorStack[$index] . $eos;
$callBackLog->error($errorStack[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, $errorStack, null, null]);
$event .= ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event .= $request[BROKER_REQUEST] . ')';
if (is_null($request)) {
consoleLog($res, CON_ERROR, ERROR_REQUEST_404);
}
switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
$_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, BROKER_REQUEST_SHUTDOWN, null]);
$eventSuccess = true;
break;
// test broker responsiveness
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_U;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, (SUCCESS_PING . BROKER_QUEUE_U), null]);
$eventSuccess = true;
break;
// your events for this broker start here
case BROKER_REQUEST_CREATE :
$eventTimer = true;
$msg = '';
// validate that we have a data-template in meta
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, ERROR_TEMPLATE_FILE_404, BROKER_REQUEST_CREATE]);
} else {
$bh = new gacBrokerHelper();
$eventSuccess = $bh->create($request, $aryRetData, $msg);
unset($bh);
}
break;
case BROKER_REQUEST_FETCH :
$eventTimer = true;
$conMsg = '';
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, ERROR_TEMPLATE_FILE_404, BROKER_REQUEST_FETCH]);
} else {
// invoke the broker-helper to execute the fetch request
$bh = new gacBrokerHelper();
$eventSuccess = $bh->fetch($request, $aryRetData, $conMsg);
unset($bh);
}
break;
case BROKER_REQUEST_UPDATE :
$eventTimer = true;
$conMsg = '';
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_FAIL, ERROR_TEMPLATE_FILE_404, null]);
} else {
$bh = new gacBrokerHelper();
$eventSuccess = $bh->update($request, $aryRetData, $conMsg);
unset($bh);
}
break;
case BROKER_REQUEST_DELETE :
$eventTimer = true;
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_FAIL, ERROR_TEMPLATE_FILE_404, null]);
} else {
$bh = new gacBrokerHelper();
$eventSuccess = $bh->delete($request, $aryRetData, $conMsg);
unset($bh);
}
break;
case BROKER_REQUEST_VALIDATE_EMAIL :
$eventTimer = true;
$obj = null;
try {
$obj = new gacUsers($request[BROKER_META_DATA]);
if (!$obj->status) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = ERROR_TEMPLATE_INSTANTIATE . TEMPLATE_CLASS_USERS;
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, null, $msg]);
$callBackLog->warn($hdr . $msg);
} else {
$obj->validateUserEmail($request[BROKER_DATA][STRING_QUERY_DATA][USER_PII_EMAIL . $obj->ext]);
if ($obj->status) {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_VALIDATE_EMAIL;
$aryRetData = buildReturnPayload([true, $obj->state, null, null]);
} else {
$conMsg = FAIL_EVENT . BROKER_REQUEST_VALIDATE_EMAIL;
$aryRetData = buildReturnPayload([false, $obj->state, null, null]);
}
}
} catch (Throwable | TypeError $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
if (is_object($obj)) {
$obj->eventMessages[] = ERROR_EXCEPTION;
$aryRetData = buildReturnPayload([ false, STATE_FRAMEWORK_FAIL, $obj->eventMessages, null]);
} else {
$aryRetData = buildReturnPayload([ false, STATE_FRAMEWORK_FAIL, ERROR_EXCEPTION, null]);
}
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
if (is_object($obj)) $obj->__destruct();
unset($obj);
break;
case BROKER_REQUEST_REGISTER_ACCOUNT :
$eventTimer = true;
try {
$obj = new gacUsers($request[BROKER_META_DATA]);
if (!$obj->status) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = ERROR_TEMPLATE_INSTANTIATE . $request[BROKER_META_DATA][META_TEMPLATE];
$conMsg = $msg;
$callBackLog->error($hdr . $msg);
$aryRetData = buildReturnPayload([false, $obj->state, $obj->eventMessages, null]);
} else {
$obj->registerNewUser($request);
if ($obj->status) {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . $request[BROKER_REQUEST];
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, [STRING_USER => $obj->userGUID, STRING_SESSION => $obj->sessionGUID]]);
} else {
$conMsg = ERROR_USER_REG_FAIL;
$aryRetData = buildReturnPayload([false, $obj->state, $obj->eventMessages, null]);
}
}
if (is_object($obj)) $obj->__destruct();
unset($obj);
} catch (Throwable | TypeError $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
break;
// events for this broker end here
default :
$msg = ERROR_EVENT_404 . $request[BROKER_REQUEST];
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_DOES_NOT_EXIST, $msg, null]);
break;
}
}
} catch (Throwable $t) {
consoleLog($res, CON_SYSTEM, $t->getMessage());
$callBackLog->fatal($t->getMessage());
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, $t->getMessage(), $errorStack]);
}
// ensure we have a return-payload and a console message
if (empty($aryRetData)) {
$msg = ERROR_NO_RET_DATA . '-' . __FILE__ . '-' . $request[BROKER_REQUEST];
$conMsg = BROKER_QUEUE_U . ' - ' . $msg;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, null, $msg, null]);
} elseif ($eventSuccess and empty($conMsg)) {
$callBackLog->warn(ERROR_NO_CON_MSG . STRING_FOR . $request[BROKER_REQUEST]);
$conMsg = $request[BROKER_REQUEST] . ' - ' . STATE_SUCCESS;
} elseif (!$eventSuccess and empty($conMsg)) {
$conMsg = $request[BROKER_REQUEST] . ' - ' . STATE_FAIL;
}
// prepare the return payload...
/** @noinspection PhpUndefinedMethodInspection */
$msg = new AMQPMessage(gzcompress(json_encode($aryRetData)), array(BROKER_CORRELATION_ID => $_request->get(BROKER_CORRELATION_ID)));
try {
/** @noinspection PhpUndefinedMethodInspection */
$_request->delivery_info[BROKER_CHANNEL]->basic_publish($msg, '', $_request->get(BROKER_REPLY_TO));
$_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
} catch (PhpAmqpLib\Exception\AMQPTimeoutException | PhpAmqpLib\Exception\AMQPRuntimeException | Throwable $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
// note: the message is ack'd above regardless of event success (no reject path yet)
// todo: core-452: publish the event payload to the sysEvent broker to capture the failed event
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
unset($msg);
// publish event metrics if we've toggled the switch on
if ($eventTimer) {
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($childGUID)) $data[SYSTEM_EVENT_OGUID] = $childGUID;
@postSystemEvent($data, $childGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_U, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_qos(null, 1, null);
$brokerChannel->basic_consume($queue, '', false, false, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (TypeError | Throwable $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
break;
case 1 : // parent
// does nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_U));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, waking up every second to monitor its children...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$lastPid = 0;
$newPidList = null;
$result = pcntl_waitpid(0, $status); // block until any child in our process group exits
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}

598
brokers/wBroker.php Normal file
View File

@@ -0,0 +1,598 @@
<?php
/**
* wBroker.php -- write broker
*
* the write broker handles all destructive CRUD events for all of the framework classes.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-15-17 mks original coding
* 08-22-17 mks CORE-500: publishing broker events as system events to ADMIN service
* 09-11-17 mks CORE-501: new record requests: data count cannot exceed declared payload limits
* 05-31-18 mks CORE-1011: update for new XML broker services configuration
* 07-09-18 mks CORE-1017: pedigree fetch event added
* 07-10-18 mks CORE-773: replaced echo statements with consoleLog()
* 01-28-19 mks DB-107: fixed bug: if meta payload has event guid submitted, no longer overwriting it
* 01-29-20 mks DB-145: router code for tercero requests added to default section in $callback method
* 07-28-20 mks DB-156: broker self-registration installed
* 09-17-20 mks DB-168: updated service registration, updated exception handling to current standard
*
*/
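// Every request and reply on these queues travels as a gzcompress'd JSON
// envelope (see the gzcompress(json_encode(...)) / json_decode(gzuncompress(...))
// pairs throughout this file). A minimal sketch of that round trip; the helper
// names below are illustrative only and are not part of the framework:
function sketchEncodeEnvelope(array $payload): string
{
    // encode to JSON first, then compress the wire bytes
    return gzcompress(json_encode($payload));
}
function sketchDecodeEnvelope(string $wire): array
{
    // reverse order on the consuming side: decompress, then decode to an associative array
    return json_decode(gzuncompress($wire), true);
}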
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Exception\AMQPRuntimeException;
use PhpAmqpLib\Exception\AMQPTimeoutException;
use PhpAmqpLib\Message\AMQPMessage;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$myPid = getmypid();
$_REDIRECT = true;
$topDir = dirname(__DIR__);
$thisWatcher = basename(__FILE__, '.php'); // suffix form; rtrim() strips a character set and can eat trailing p/h characters
// load the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$childrenPidList = null; // contains list of the pids of the children spawned by this watcher
$pidDir = $topDir . DIR_PIDS;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$res = 'WBRK: ';
// event management
$appServerConfig = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_APPSERVER];
$numberChildren = $appServerConfig[CONFIG_BROKER_INSTANCES][CONFIG_BROKER_W_BROKER];
$requestsPerInstance = (empty($appServerConfig[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $appServerConfig[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren;
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
// create the root guid
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
/** @var gacErrorLogger $parentLog */
$parentLog = new gacErrorLogger();
// get the location where the broker is supposed to run
$brokerLocation = (!empty($argv) and !empty($argv[1])) ? $argv[1] : ENV_APPSERVER;
$errors = null;
$file = basename(__FILE__, DOT . FILE_TYPE_PHP); // suffix form avoids rtrim()'s character-set stripping
$service = CONFIG_BROKER_APPSERVER;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
///////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death
///////////////////////////////////////////////////////////////////////////////
//declare( ticks = 1);
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
$numberChildren--;
while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
@pcntl_wexitstatus($status); // use a dedicated status variable instead of clobbering the signal number
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
//////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event
//////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $myRequestsPerInstance, $startingMemory, $groot;
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(1, 9); // jitter the limit so sibling children don't all recycle at once
$startingMemory = memory_get_usage(true);
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error!!!
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
try {
// replace the signal handlers in the child code
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
// generate a child guid for the forked child...
$childGUID = rtrim($res, COLON) . UDASH . guid();
// stash the childGUID in the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
// init the child log
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
// ---- broker code begins ---- //
$queueTag = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG];
//$exchange = BROKER_EXCHANGE_WO;
$queue = $queueTag . BROKER_QUEUE_W;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_BROKER);
if (is_null($brokerConnection)) {
$childLog->fatal(ERROR_RESOURCE_404 . RESOURCE_BROKER);
consoleLog($res, CON_ERROR, ERROR_RESOURCE_404 . RESOURCE_BROKER);
exit(1); // non-zero exit: failed to acquire the broker resource
}
/** @var AMQPChannel $brokerChannel */
$brokerChannel = $brokerConnection->channel(); // params: queue name, passive, durable, exclusive, auto-delete
$brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
} catch (PhpAmqpLib\Exception\AMQPRuntimeException | Throwable | TypeError $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(1);
}
// register the child-spawn event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request)
{
$startTime = gasStatic::doingTime();
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service, $file;
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var PhpAmqpLib\Connection\AMQPStreamConnection $brokerConnection */
global $brokerConnection;
$requestCounter++;
$aryRetData = null;
$retData = null;
$request = null;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$res = 'WBRK: ';
$eventSuccess = false;
$conMsg = '';
$errorList = array();
$eventGUID = guid();
// $request[BROKER_META_DATA][META_EVENT_GUID] = $eventGUID; // inject the event guid
$thisPid = getmypid();
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
$eventTimer = false; // certain events will toggle to true to log timer recording for the broker event
// set-up the call-back logger
/** @var gacErrorLogger $callBackLog */
$callBackLog = new gacErrorLogger($eventGUID);
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($res . $msg);
$aryRetData = buildReturnPayload([false, STATE_FAIL, null, $msg, null]);
$event = BROKER_QUEUE_W . '(' . ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorList)) {
for ($index = 0, $last = count($errorList); $index < $last; $index++) {
$conMsg .= $errorList[$index] . $eos;
$callBackLog->error($errorList[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, $errorList, null, null]);
$event = BROKER_QUEUE_W . '(' . ERROR_META_VALIDATION_SECOND_PASS . ')';
} else {
$event = BROKER_QUEUE_W . '(' . $request[BROKER_REQUEST] . ')';
if (is_null($request)) {
consoleLog($res, CON_ERROR, ERROR_BROKER_REQUEST_404);
}
// DB-57: stash the broker guids in the meta for audit and journaling
$request[META_BROKER_CHILD_GUID] = $childGUID;
$request[META_BROKER_GROOT] = $groot;
switch ($request[BROKER_REQUEST]) {
// shutdown gracefully
case BROKER_REQUEST_SHUTDOWN :
$_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, BROKER_REQUEST_SHUTDOWN]);
$eventSuccess = true;
break;
// test broker responsiveness
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_W;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, SUCCESS_PING . BROKER_QUEUE_W]);
$eventSuccess = true;
break;
case BROKER_REQUEST_PEDIGREE :
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_PEDIGREE;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, gasConfig::getPedigree()]);
$eventSuccess = true;
break;
// create new record event
case BROKER_REQUEST_CREATE :
$eventTimer = true;
$conMsg = '';
// validate that we have a data-template in meta
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, ERROR_TEMPLATE_FILE_404, BROKER_REQUEST_CREATE]);
} else {
$bh = new gacBrokerHelper();
$eventSuccess = $bh->create($request, $aryRetData, $conMsg);
unset($bh);
}
break;
case BROKER_REQUEST_UPDATE :
$eventTimer = true;
$conMsg = '';
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_FAIL, ERROR_TEMPLATE_FILE_404, null]);
} else {
$bh = new gacBrokerHelper();
$eventSuccess = $bh->update($request, $aryRetData, $conMsg);
unset($bh);
}
break;
case BROKER_REQUEST_DELETE :
$eventTimer = true;
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_FAIL, ERROR_TEMPLATE_FILE_404, null]);
} else {
$bh = new gacBrokerHelper();
$eventSuccess = $bh->delete($request, $aryRetData, $conMsg);
unset($bh);
}
break;
// sub-collection events
case BROKER_REQUEST_SUBC_CREATE :
$eventTimer = true;
$errors = array();
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_FAIL, ERROR_TEMPLATE_FILE_404, null]);
} else {
// CORE-501: validate the number of incoming records
$qrl = intval(gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_QUERY_RECORD_LIMIT]);
if (count($request[BROKER_DATA][STRING_DATA]) > $qrl) {
$msg = ERROR_RECORD_LIMIT_EXCEEDED . $qrl;
$callBackLog->data($msg);
$aryRetData = buildReturnPayload([false, STATE_DATA_ERROR, $msg, null]);
} else {
/** @var gacMongoDB $objClass */
if (is_null($objClass = grabWidget($request[BROKER_META_DATA], '', $errorList))) {
foreach ($errorList as $error)
$callBackLog->error($error);
} else {
$objClass->pushSubCollectionEvent($request[BROKER_DATA]);
if ($objClass->status) {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_SUBC_CREATE;
$eventSuccess = true;
$queryResults = (!gasCache::mapOutboundPayload($objClass, $errors)) ? $objClass->getData() : $objClass->getCK();
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, $queryResults]);
} else {
$conMsg = FAIL_EVENT . BROKER_REQUEST_SUBC_CREATE;
$aryRetData = buildReturnPayload([false, $objClass->state, $objClass->eventMessages, null]);
}
if (is_object($objClass)) $objClass->__destruct();
unset($objClass);
}
}
}
break;
case BROKER_REQUEST_SUBC_DELETE :
$eventTimer = true;
$errors = array();
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_FAIL, ERROR_TEMPLATE_FILE_404, null]);
} else {
/** @var gacMongoDB $objClass */
if (is_null($objClass = grabWidget($request[BROKER_META_DATA], '', $errorList))) {
foreach ($errorList as $error)
$callBackLog->error($error);
} else {
$objClass->popSubCollection($request[BROKER_DATA]);
if ($objClass->status) {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_SUBC_DELETE;
$eventSuccess = true;
$queryResults = (!gasCache::mapOutboundPayload($objClass, $errors)) ? $objClass->getData() : $objClass->getCK();
$aryRetData = buildReturnPayload([true, $objClass->state, $objClass->eventMessages, $queryResults]);
} else {
$conMsg = FAIL_EVENT . BROKER_REQUEST_SUBC_DELETE;
$aryRetData = buildReturnPayload([false, $objClass->state, $objClass->eventMessages, null]);
}
if (is_object($objClass)) $objClass->__destruct();
unset($objClass);
}
}
break;
case BROKER_REQUEST_CALL_SP :
$eventTimer = true;
$errors = array();
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_FAIL, ERROR_TEMPLATE_FILE_404, null]);
} else {
/** @var gacPDO $objClass */
if (is_null($objClass = grabWidget($request[BROKER_META_DATA], '', $errorList))) {
foreach ($errorList as $error)
$callBackLog->error($error);
} else {
if ($objClass->schema != TEMPLATE_DB_PDO) {
$msg = sprintf(ERROR_PDO_INVALID_EVENT, $request[BROKER_REQUEST], $objClass->class);
$conMsg = $msg;
$errors[] = $msg;
$aryRetData = buildReturnPayload([false, FAIL_EVENT, $errors, null]);
$callBackLog->error($msg);
} else {
$objClass->execSP($request[BROKER_DATA]);
if ($objClass->status) {
$conMsg = SUCCESS_EVENT . $request[BROKER_REQUEST];
$eventSuccess = true;
$aryRetData = buildReturnPayload([true, $objClass->state, $objClass->eventMessages, $objClass->queryResults]);
} else {
$conMsg = FAIL_EVENT . $request[BROKER_REQUEST];
$aryRetData = buildReturnPayload([false, $objClass->state, $objClass->eventMessages, null]);
}
}
if (is_object($objClass)) $objClass->__destruct();
unset($objClass);
}
}
break;
case BROKER_REQUEST_WAREHOUSE :
$eventTimer = true;
$objClass = new gacBrokerClient(BROKER_QUEUE_WH, basename(__FILE__) . COLON_NS . __LINE__);
if (is_null($objClass) or !$objClass->status) { // null-check first to avoid a member access on null
$error = ERROR_BROKER_CLIENT_DECLARE . BROKER_QUEUE_WH . EVENT_TYPE . COLON . BROKER_REQUEST_WAREHOUSE;
$conMsg = $error;
$aryRetData = buildReturnPayload([false, STATE_RESOURCE_ERROR, $error, null]);
} else {
$response = json_decode(gzuncompress($objClass->call($_request->body)), true);
if (!$response[PAYLOAD_STATUS]) {
$conMsg = FAIL_EVENT . $request[BROKER_REQUEST];
$aryRetData = buildReturnPayload([false, $response[PAYLOAD_STATE], $response[PAYLOAD_DIAGNOSTICS], $response[PAYLOAD_RESULTS]]);
} else {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . $request[BROKER_REQUEST];
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, $response[PAYLOAD_DIAGNOSTICS], $response[PAYLOAD_RESULTS]]);
}
}
if (is_object($objClass)) $objClass->__destruct();
unset($objClass);
break;
case BROKER_REQUEST_TERCERO :
$eventTimer = true;
try {
if (!isset($request[OLD_REQUEST])) {
$conMsg = sprintf(ERROR_REQ_FIELD_404, OLD_REQUEST);
$aryRetData = buildReturnPayload([false, STATE_DATA_ERROR, $conMsg, null]);
break;
}
// just as a reminder, we don't check for the existence of META_TEMPLATE in the validateMetaData()
// function because not all events require it - hence the seemingly repetitive check in the event code.
if (!isset($request[BROKER_META_DATA][META_TEMPLATE]) or empty($request[BROKER_META_DATA][META_TEMPLATE])) {
$conMsg = ERROR_TEMPLATE_FILE_404;
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, ERROR_TEMPLATE_FILE_404, null]);
} else {
// this is a request for tercero - replace the event, instantiate a tercero client,
// and cross-service publish the request and return the response back to the caller
$request[BROKER_REQUEST] = $request[OLD_REQUEST];
unset($request[OLD_REQUEST]);
$bc = new gacBrokerClient(BROKER_QUEUE_U, sprintf(INFO_LOC, basename(__FILE__), __LINE__));
if (!$bc->status) {
$conMsg = ERROR_BROKER_CLIENT_DECLARE . BROKER_QUEUE_U;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_WARNING, $conMsg, null]);
} else {
$aryRetData = json_decode(gzuncompress($bc->call(gzcompress(json_encode($request)))), true);
if ($aryRetData[PAYLOAD_STATUS]) {
$eventSuccess = true;
$conMsg = SUCCESS_EVENT . $request[BROKER_REQUEST] . ' for ' . BROKER_TERCERO;
} else {
$conMsg = FAIL_EVENT . $request[BROKER_REQUEST] . ' for ' . BROKER_TERCERO;
}
}
if (is_object($bc)) $bc->__destruct();
unset($bc);
}
} catch (Throwable | TypeError | AMQPRuntimeException $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_WARNING, ERROR_EXCEPTION, null]);
$conMsg = ERROR_EXCEPTION;
}
break;
case BROKER_REQUEST_NULL_FIELD :
default :
// check for the user template in the meta payload and, if present, publish the request to the user broker
// and pass the return payload back to the requesting client
if (isset($request[BROKER_META_DATA][META_TEMPLATE]) and $request[BROKER_META_DATA][META_TEMPLATE] == TEMPLATE_CLASS_USERS) {
$ubc = new gacBrokerClient(BROKER_QUEUE_U, basename(__FILE__) . AT . __LINE__);
if (!$ubc->status) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_BROKER_CLIENT_DECLARE . BROKER_QUEUE_U;
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, null, $msg]);
} else {
$response = $ubc->call($request);
$response = json_decode(gzuncompress($response), true);
$aryRetData = buildReturnPayload([$response[PAYLOAD_STATUS], $response[PAYLOAD_STATE], $response[PAYLOAD_DIAGNOSTICS], $response[PAYLOAD_RESULTS]]);
if ($response[PAYLOAD_STATUS]) {
$conMsg = SUCCESS_EVENT;
$eventSuccess = true;
} else $conMsg = FAIL_EVENT;
$conMsg .= $request[BROKER_REQUEST];
if (is_object($ubc)) $ubc->__destruct();
unset($ubc);
}
} else {
$msg = ERROR_EVENT_404 . $request[BROKER_REQUEST];
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_DOES_NOT_EXIST, null, $msg]);
}
break;
}
unset($aryRetData[PAYLOAD_CM]);
if (is_null($aryRetData[PAYLOAD_DIAGNOSTICS])) unset($aryRetData[PAYLOAD_DIAGNOSTICS]);
}
// ensure we have a return-payload and a console message
if (empty($aryRetData)) {
$conMsg = BROKER_QUEUE_W . ' - ' . ERROR_NO_RET_DATA;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, null, $msg, null]);
} elseif ($eventSuccess and empty($conMsg)) {
$conMsg = SUCCESS_EVENT . $request[BROKER_REQUEST] . ' - ' . STATE_SUCCESS;
} elseif (!$eventSuccess and empty($conMsg)) {
$conMsg = FAIL_EVENT . $request[BROKER_REQUEST] . ' - ' . STATE_FAIL;
}
// prepare the return payload...
try {
/** @noinspection PhpUndefinedMethodInspection */
$msg = new AMQPMessage(gzcompress(json_encode($aryRetData)), array(BROKER_CORRELATION_ID => $_request->get(BROKER_CORRELATION_ID)));
/** @noinspection PhpUndefinedMethodInspection */
$_request->delivery_info[BROKER_CHANNEL]->basic_publish($msg, '', $_request->get(BROKER_REPLY_TO));
$_request->delivery_info[BROKER_CHANNEL]->basic_ack($_request->delivery_info[BROKER_DELIVERY_TAG]);
} catch (AMQPTimeoutException | AMQPRuntimeException | TypeError | Throwable $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
// if the event processing failed, we want to publish the failed event to the admin queue
// if (!$eventSuccess) {
// todo - CORE-452 - publish the event(payload) to the admin queue to capture the failed event
// }
unset($msg);
consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
// publish metrics if we've toggled the switch on
if ($eventTimer) {
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($childGUID)) $data[SYSTEM_EVENT_OGUID] = $childGUID;
@postSystemEvent($data, $childGUID, $callBackLog);
}
// post a broker system-event if we're recycling the broker
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
unset($msg);
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_W, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_qos(null, 1, null);
$brokerChannel->basic_consume($queue, '', false, false, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (TypeError | Throwable $t) {
$hdr = basename(__FILE__) . AT . __LINE__ . COLON;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
// ---- broker code ends ---- //
break;
case 1 : // parent
// do nothing
break;
}
return($thisPid);
}
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_W));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, watching for any child in its process group to die...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$lastPid = 0;
$newPidList = null;
$result = pcntl_waitpid(0, $status); // block until any child in our process group exits
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}

449
brokers/whBroker.php Normal file
View File

@@ -0,0 +1,449 @@
<?php
/**
* whBroker.php
*
* This is the Namaste Warehouse broker, which, by design, should run on segundo, separately from both the
* Appserver (namaste) and the admin services.
*
* Warehousing accepts requests from Namaste because we want writes to happen "locally" to this service.
*
* Warehousing is intended to support the following data transitions:
*
* COOL --> data is moved from HOT storage to COOL storage meaning that the original format of the data is preserved.
* COLD --> data is moved from HOT or COOL storage to COLD storage -- which is TBD and is expected to be a CSV type
* format using one of the AWS Storage-as-a-Service options.
* WARM --> data is moved COLD to COOL storage. (so that it can be queried)
* HOT --> data is moved COOL or COLD to HOT storage. (data de-archiving/recovery)
*
* For design notes, please refer to Jira case INF-188.
*
* This is a non-blocking RPC broker. Once a request is received from Namaste, we validate it. If the
* request passes validation and verification, we immediately return a tracking GUID to Namaste while
* starting the migration process. The client has the responsibility to monitor the wareHousing record (created in
* the ADMIN service database) progress and completion status. The GUID returned back to Namaste is the data
* wareHousing record GUID.
*
*
* @author mike@givingassistant.org
* @version 1.0.0
*
* HISTORY:
* ========
* 04-10-18 mks INF-201: Original coding (begins)
* 05-31-18 mks CORE-1011: update for new XML broker services configuration
* 06-07-18 mks CORE-1013: remote-fetch event added
* 07-09-18 mks CORE-1017: pedigree fetch event added
* 07-28-20 mks DB-156: broker self-registration installed
*
*/
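// The storage-tier transitions described above form a small directed graph:
// HOT -> {COOL, COLD}, COOL -> {COLD, HOT}, COLD -> {COOL (WARM), HOT}.
// A sketch of validating a requested move; isValidTierMove() is an
// illustrative helper only, not something this broker defines or calls:
function isValidTierMove(string $from, string $to): bool
{
    $allowed = [
        'HOT'  => ['COOL', 'COLD'],  // archive out of hot storage
        'COOL' => ['COLD', 'HOT'],   // deep-archive, or de-archive back to HOT
        'COLD' => ['COOL', 'HOT'],   // WARM (restore to queryable), or full recovery
    ];
    return in_array($to, $allowed[$from] ?? [], true);
}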
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Message\AMQPMessage;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Exception\AMQPTimeoutException;
pcntl_async_signals(true); // enable asynchronous signal handling (PHP 7.1)
$myPid = getmypid();
$_REDIRECT = true;
$topDir = dirname(__DIR__);
$thisWatcher = basename(__FILE__, '.php'); // suffix form; rtrim() strips a character set and can eat trailing p/h characters
// load the framework
@require_once($topDir . '/config/sneakerstrap.inc'); // can't be constants b/c this loads the constants
$res = 'DATW: '; // dat warehouse
$childrenPidList = null;
$pidDir = $topDir . DIR_PIDS;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
// event management for children
$whServiceSettings = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_WH];
$numberChildren = $whServiceSettings[CONFIG_BROKER_INSTANCES][CONFIG_BROKER_WH_BROKER];
$requestsPerInstance = (empty($whServiceSettings[CONFIG_BROKER_REQUEST_LIMIT])) ? NUMBER_C : $whServiceSettings[CONFIG_BROKER_REQUEST_LIMIT];
$numberChildren = ($numberChildren < 1) ? 1 : $numberChildren; // todo -- should this be = 2??
$runningBrokers = $numberChildren;
$requestCounter = 0;
$myRequestsPerInstance = 0;
$startingMemory = 0;
// create the root guid
$groot = rtrim($res, COLON) . UDASH . guid(); // root guid
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_STARTUP, substr(basename(__FILE__), 0, -4), $groot));
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_NUM_CHILD, substr(basename(__FILE__), 0, -4), $numberChildren));
/** @var gacErrorLogger $parentLog */
$parentLog = new gacErrorLogger();
// todo - validate the broker environment as declared in the XML config
// get the location where the broker is supposed to run
$brokerLocation = ENV_SEGUNDO;
if (!empty($argv) and !empty($argv[1])) {
$brokerLocation = $argv[1];
}
$errors = null;
$file = basename(__FILE__, DOT . FILE_TYPE_PHP); // suffix form avoids rtrim()'s character-set stripping
$service = ENV_SEGUNDO;
if (!validateService($service, $errors)) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = sprintf(ERROR_SERVICE_REG, $file, $service);
$parentLog->fatal($hdr . $msg);
$parentLog->__destruct();
unset($parentLog);
exit(1);
}
//////////////////////////////////////////////////////////////////////////////////
// set-up the replacement signal handler that will be called on a child's death //
//////////////////////////////////////////////////////////////////////////////////
function sigHandler($_sig) {
global $numberChildren;
switch ($_sig) {
case SIGCHLD :
$numberChildren--;
while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
@pcntl_wexitstatus($status); // use a dedicated status variable instead of clobbering the signal number
}
break;
}
}
pcntl_signal(SIGCLD, 'sigHandler');
/////////////////////////////////////////////////////////////////////////////////////////
// set-up the forking function so that it can be called initially or on a SIGCLD event //
/////////////////////////////////////////////////////////////////////////////////////////
function forkMe()
{
global $thisWatcher, $eos, $res, $parentLog, $requestsPerInstance, $startingMemory, $myRequestsPerInstance, $groot, $file;
$startingMemory = memory_get_usage(true);
$myRequestsPerInstance = $requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(1, 9); // jitter the limit so sibling children don't all recycle at once
$thisPid = pcntl_fork();
switch ($thisPid) {
case -1 : // error
$cmsg = ERROR_FORK_FAILED . $thisWatcher;
$parentLog->fatal($cmsg);
die(getDateTime() . CON_ERROR . $res . $cmsg . $eos);
break;
case 0 : // child (broker daemon)
try {
// replace the sigcld signal handler
pcntl_signal(SIGCLD, SIG_DFL);
$thisPid = getmypid();
// create the child logger object
/** @var gacErrorLogger $childLog */
$childLog = new gacErrorLogger();
// generate a child guid for the forked child...
$childGUID = rtrim($res, COLON) . UDASH . guid();
// stash the childGUID in the cache because it does not propagate down to the callback method
gasCache::sysAdd(($groot . UDASH . $thisPid), $childGUID);
$queue = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG] . BROKER_QUEUE_WH;
/** @var AMQPStreamConnection $brokerConnection */
$brokerConnection = gasResourceManager::fetchResource(RESOURCE_SEGUNDO);
if (is_null($brokerConnection)) {
$childLog->fatal(ERROR_RESOURCE_404 . RESOURCE_SEGUNDO . COLON . BROKER_QUEUE_WH);
consoleLog($res, CON_ERROR . ERROR_RESOURCE_404 . RESOURCE_SEGUNDO . COLON . BROKER_QUEUE_WH);
exit(1); // shell-script exit value for fail
}
$brokerChannel = $brokerConnection->channel();
// params: queue name, passive, durable, exclusive, auto-delete
$brokerChannel->queue_declare($queue, BROKER_QUEUE_DECLARE_PASSIVE, false, false, true);
} catch (PhpAmqpLib\Exception\AMQPRuntimeException | Throwable $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
exit(1);
}
// register the child-spawn event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_CHILD_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
SYSTEM_EVENT_KEY => SYSEV_CHILD_RPI,
SYSTEM_EVENT_VAL => $myRequestsPerInstance,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $childGUID, $childLog);
register_shutdown_function(BROKER_SHUTDOWN_FUNCTION, $brokerChannel, $brokerConnection, $res);
$callback = function($_request)
{
$startTime = gasStatic::doingTime();
$postNormalResponse = true;
/** @var AMQPChannel $brokerChannel */
global $brokerChannel;
/** @var AMQPStreamConnection $brokerConnection */
global $brokerConnection;
global $requestCounter, $res, $eos, $myRequestsPerInstance, $startingMemory, $groot, $service, $file;
$event = BROKER_QUEUE_M . '(';
$requestCounter++;
$aryRetData = null;
$retData = null;
$errorStack = [];
$request = null;
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$eventSuccess = false;
$conMsg = '';
$eventGUID = guid();
$thisPid = getmypid();
$eventTimer = false; // certain events will toggle to true to log timer recording for the broker event
$childGUID = gasCache::sysGet(($groot . UDASH . getmypid()));
// set-up the call-back logger
/** @var gacErrorLogger $callBackLog */
$callBackLog = new gacErrorLogger($eventGUID);
try {
if (!firstPassPayloadValidation($_request, $service, $msg, $request, $eventGUID)) {
$conMsg = $msg;
$callBackLog->info($msg);
$aryRetData = buildReturnPayload([false, STATE_FAIL, null, $msg, null]);
$event .= ERROR_DATA_VALIDATION_FIRST_PASS . ')';
} elseif (!validateMetaData($request, $errorStack)) {
for ($index = 0, $last = count($errorStack); $index < $last; $index++) {
$conMsg .= $errorStack[$index] . $eos;
$callBackLog->error($errorStack[$index]);
}
$conMsg = rtrim($conMsg, $eos);
$aryRetData = buildReturnPayload([false, STATE_META_ERROR, $errorStack, null, null]);
$event .= ERROR_META_VALIDATION_SECOND_PASS . ')';
                    } else {
                        // guard against a null request before dereferencing it
                        if (is_null($request)) {
                            consoleLog($res, CON_ERROR, ERROR_REQUEST_404);
                        }
                        $event .= $request[BROKER_REQUEST] . ')';
                        switch ($request[BROKER_REQUEST]) {
case BROKER_REQUEST_SHUTDOWN :
$_request->delivery_info[BROKER_CHANNEL]->basic_cancel($_request->delivery_info[BROKER_DELIVERY_TAG]);
$conMsg = SUCCESS_SHUTDOWN;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, BROKER_REQUEST_SHUTDOWN, null]);
$eventSuccess = true;
break;
// test broker responsiveness
case BROKER_REQUEST_PING :
$conMsg = SUCCESS_PING . BROKER_QUEUE_WH;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, (SUCCESS_PING . BROKER_QUEUE_WH), null]);
$eventSuccess = true;
break;
case BROKER_REQUEST_PEDIGREE :
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_PEDIGREE;
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, null, gasConfig::getPedigree()]);
$eventSuccess = true;
break;
case BROKER_REQUEST_WAREHOUSE :
$eventSuccess = false;
$eventTimer = false;
$objMigrate = new gacMigrations($request[BROKER_DATA], $request[BROKER_META_DATA], EVENT_WAREHOUSE);
if (!$objMigrate->status) {
$conMsg = FAIL_EVENT . BROKER_REQUEST_WAREHOUSE;
$aryRetData = buildReturnPayload([false, $objMigrate->state, $objMigrate->errorStack, null]);
} else {
$guid = $objMigrate->objWarehouseMeta->getColumn(DB_TOKEN);
// validate return guid
if (!validateGUID($guid)) {
$conMsg = ERROR_EVENT . BROKER_REQUEST_WAREHOUSE;
$aryRetData = buildReturnPayload([ false, FAIL_EVENT, $objMigrate->errorStack, ERROR_BROKER_REQUEST_FAILED]);
} else {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_WAREHOUSE;
$aryRetData = buildReturnPayload([true, SUCCESS_EVENT, $objMigrate->errorStack, $guid]);
$eventSuccess = true;
}
// send the guid back to the calling client now so we can resume the warehousing...
postResponse($aryRetData, $_request, $callBackLog, $res);
$postNormalResponse = false;
// dive back into the objMigration class and perform the warehouse request
if (!$objMigrate->whData()) {
$conMsg = FAIL_EVENT . BROKER_REQUEST_WAREHOUSE;
} else {
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_WAREHOUSE;
$eventSuccess = true;
}
}
break;
case BROKER_REQUEST_REMOTE_FETCH :
$eventTimer = true;
$errors = [];
/** @var gacMongoDB $tmpObj */
if (is_null($tmpObj = grabWidget($request[BROKER_META_DATA], '', $errors))) {
foreach ($errors as $error)
$callBackLog->error($error);
} else {
$tmpObj->_fetchRecords($request[BROKER_DATA]);
if ($tmpObj->status) {
$eventSuccess = true;
$tmpObj->eventMessages[] = STRING_REC_COUNT_RET . $tmpObj->recordsReturned;
$conMsg = SUCCESS_EVENT . BROKER_REQUEST_FETCH;
$queryMeta = [
STRING_REC_COUNT_RET => $tmpObj->recordsReturned,
STRING_REC_COUNT_TOT => $tmpObj->recordsInCollection
];
// recordsInQuery is a PDO thing so let's see if it exists in the class object
if (isset($tmpObj->recordsInQuery) and $tmpObj->recordsInQuery) {
$queryMeta[STRING_REC_COUNT_QUERY] = $tmpObj->recordsInQuery;
}
if (isset($request[BROKER_META_DATA][META_DONUT_FILTER]) and $request[BROKER_META_DATA][META_DONUT_FILTER] == 1) {
$queryResults = $tmpObj->getData();
} elseif ($tmpObj->useCache or (isset($request[BROKER_META_DATA][META_DO_CACHE]) and $request[BROKER_META_DATA][META_DO_CACHE])) {
// todo - this is supposed to return the list of cache keys, or the single reference cache key - fix!
$queryResults = $tmpObj->cacheMap;
} else {
$queryResults = $tmpObj->getData();
}
$retData = [STRING_QUERY_RESULTS => $queryResults, STRING_QUERY_DATA => $queryMeta];
$aryRetData = buildReturnPayload([true, STATE_SUCCESS, $tmpObj->eventMessages, $retData]);
} else {
$conMsg = FAIL_EVENT . BROKER_REQUEST_FETCH;
$aryRetData = buildReturnPayload([false, $tmpObj->state, $tmpObj->eventMessages, null]);
}
if (is_object($tmpObj)) $tmpObj->__destruct();
unset($tmpObj);
}
break;
default :
$msg = ERROR_EVENT_404 . $request[BROKER_REQUEST];
$conMsg = $msg;
$aryRetData = buildReturnPayload([false, STATE_DOES_NOT_EXIST, $msg, null]);
break;
}
}
} catch (Throwable $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, $t->getMessage(), $errorStack]);
}
// ensure we have a return-payload and a console message
if (empty($aryRetData) and $postNormalResponse) {
$msg = ERROR_NO_RET_DATA . '-' . __FILE__ . '-' . $request[BROKER_REQUEST];
$conMsg = BROKER_QUEUE_M . ' - ' . $msg;
$aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, null, $msg, null]);
} elseif ($eventSuccess and empty($conMsg)) {
$callBackLog->warn(ERROR_NO_CON_MSG);
$conMsg = $request[BROKER_REQUEST] . ' - ' . STATE_SUCCESS;
}
// prepare and send the return payload if we've not already sent it...
if ($postNormalResponse)
postResponse($aryRetData, $_request, $callBackLog, $res);
// if the event processing failed, reject the message, otherwise ack removing it from the queue
// todo: core-452: publish the event payload to the sysEvent broker to capture the failed event
                consoleLog($res, (($eventSuccess) ? CON_SUCCESS : CON_ERROR), $conMsg . sprintf(ERROR_EVENT_COUNT, $requestCounter, $myRequestsPerInstance));
unset($msg);
// publish event metrics if we've toggled the switch on
if ($eventTimer) {
// get the broker-event processing time
$eventTime = gasStatic::doingTime($startTime);
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_EVENT_TIMER,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_TIMER => $eventTime,
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_META_DATA => $request[BROKER_META_DATA],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
if (!empty($childGUID)) $data[SYSTEM_EVENT_OGUID] = $childGUID;
@postSystemEvent($data, $childGUID, $callBackLog);
}
// exit the child if we've reached the request limit
if ($requestCounter >= $myRequestsPerInstance) {
if (getmypid() == $thisPid) {
$meta = [
META_SESSION_IP => STRING_SESSION_HOME,
META_SESSION_DAEMON => 1,
META_SESSION_MISC => INFO_BROKER_RECYCLE,
META_EVENT_GUID => $eventGUID
];
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_BROKER_RECYCLE,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_BROKER_GUID => $childGUID,
DB_EVENT_GUID => $eventGUID,
SYSTEM_EVENT_START => $startingMemory,
SYSTEM_EVENT_PEAK => memory_get_peak_usage(true),
SYSTEM_EVENT_END => memory_get_usage(true),
SYSTEM_EVENT_BROKER_EVENT => $event,
SYSTEM_EVENT_COUNT => $requestCounter,
SYSTEM_EVENT_COUNT_TOTAL => $myRequestsPerInstance,
SYSTEM_EVENT_META_DATA => $meta,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__
];
@postSystemEvent($data, $eventGUID, $callBackLog);
}
consoleLog($res, CON_SYSTEM, INFO_BROKER_REQ_COUNT);
if (is_object($brokerChannel)) $brokerChannel->close();
if (is_object($brokerConnection)) $brokerConnection->close();
exit(0);
}
};
consoleLog($res, CON_SYSTEM, sprintf(INFO_BROKER_QUEUE_ESTABLISHED, BROKER_QUEUE_WH, $thisPid, $myRequestsPerInstance));
$brokerChannel->basic_qos(null, 1, null);
$brokerChannel->basic_consume($queue, '', false, false, false, false, $callback);
while (count($brokerChannel->callbacks)) {
try {
$brokerChannel->wait();
} catch (Throwable $t) {
$hdr = sprintf(INFO_LOC, $file, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
break;
case 1 : // parent
// does nothing
break;
}
return($thisPid);
}
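The per-child request cap computed at the top of `forkMe()` (`$requestsPerInstance + (mt_rand(0, 2) * 10) + mt_rand(1, 9)`) staggers the recycle points of sibling children so they never all exit and re-fork at the same moment. A minimal, dependency-free sketch of that formula (the function name is illustrative, not part of the framework):

```php
<?php
// Sketch of the recycle-cap jitter used in forkMe(): the base cap plus a
// random offset in [1, 29], so sibling brokers hit their request limits
// (and therefore recycle) at staggered counts instead of in lock-step.
function jitteredCap(int $base): int
{
    return $base + (mt_rand(0, 2) * 10) + mt_rand(1, 9);
}
```

With a base of 500, every child's cap lands somewhere in 501 to 529, so a fleet of brokers started together drifts apart after the first recycle cycle.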
for ($numBrokers = 0; $numBrokers < $runningBrokers; $numBrokers++) {
$childrenPidList[] = forkMe();
}
consoleLog($res, CON_SUCCESS, sprintf(INFO_BROKER_PARENT_STARTED, count($childrenPidList), BROKER_QUEUE_WH));
// "register" the broker instantiation event
$data = [
SYSTEM_EVENT_NAME => SYSEV_NAME_GROOT_REG,
SYSTEM_EVENT_TYPE => SYSEV_TYPE_BROKER,
SYSTEM_EVENT_BROKER_ROOT_GUID => $groot,
SYSTEM_EVENT_KEY => STRING_NUMBER_CHILDREN,
SYSTEM_EVENT_VAL => $numberChildren,
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . COLON . __LINE__,
SYSTEM_EVENT_NOTES => BROKER_SYSEV_REG . rtrim($res, ": ")
];
@postSystemEvent($data, $groot, $parentLog);
// the parent process continues to run, waking up every second to monitor its children...
// when a child dies, its death-rattle is caught and the child is replaced with a new process.
while (count($childrenPidList)) {
$lastPid = 0;
$newPidList = null;
$result = pcntl_waitpid(0, $status); // detect any sigchld from the parent-group
if (in_array($result, $childrenPidList)) {
$key = array_search($result, $childrenPidList);
array_splice($childrenPidList, $key, 1);
// process has already exited -- restart it
$childrenPidList[] = forkMe();
}
}

/**
* convertCacheMapDataToSchema() -- protected method
*
 * this method takes an input array of payload data and checks to see if the currently loaded class has cacheMapping
* set (the cacheMap element has to be an array) and uses the map to convert the data from the public (cachemap)
* to private (schema) format.
*
* method requires two input parameters:
*
 * - the payload data - which is an indexed array of associative array tuples
* - boolean toggle indicating if ALL fields are required to pass validation
*
* If the current class has cacheMapping, then we're going to spin through each tuple in the $_data parameter
* and look at each $key in the tuple -- if the $key exists as a member in the cacheMap, pull the key from cacheMap
* and store the new key and the old value in a temp array. If the key does not exist in the cacheMap then
* use the current (old) key/value pair.
*
 * After each tuple is processed, copy the new vector into a temporary matrix which will eventually be returned
* to the calling client.
*
* in all other (fail) cases, a null is returned.
*
* NOTE:
* -----
* This method is not to be confused with gasCache->buildMappedDataArray() which converts schema to cacheMap.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_data
* @param bool $_allFields
* @return null|array
*
* HISTORY:
* ========
* 07-13-17 mks CORE-464: original coding
* 02-06-18 mks _INF-139: support for disabled caching + no cache map
* 02-22-18 mks _INF-139: when cache is disabled, need to verify that submitted data included the class
* extension - if not, replace the old key with the old key + class extension
* 12-12-18 mks DB-77: fixed error in processing: when we have a journal recovery event, the restore
* query uses column literals instead of cache-mapped values. Added conditional code
* to check if the literal appears in the field list and, if so, validate that field
*/
protected function convertCacheMapDataToSchema(array $_data, bool $_allFields = false): ?array
{
$this->state = STATE_VALIDATION_ERROR;
$this->status = false;
$data = false;
$badData = false;
$loggerAvailable = (isset($this->logger) and $this->logger->available);
if (!is_array($_data)) {
$msg = basename(__METHOD__) . AT . __LINE__ . COLON . ERROR_DATA_INVALID_FORMAT;
if ($loggerAvailable)
$this->logger->data($msg);
else
consoleLog($this->res, CON_ERROR, $msg);
$this->eventMessages[] = $msg;
} elseif ($this->useCache and empty($this->cacheMap)) {
$msg = ERROR_CACHE_MAP_404 . COLON . $this->class;
if ($loggerAvailable)
$this->logger->data($msg);
else
consoleLog($this->res, CON_ERROR, $msg);
$this->eventMessages[] = $msg;
} elseif (!$this->useCache and (!isset($this->cacheMap) or empty($this->cacheMap))) {
$data = $_data[0];
foreach ($data as $key => $value) {
try {
$newKey = $this->addExtension($key);
} catch (TypeError $t) {
$msg = ERROR_TYPE_EXCEPTION . COLON . $t->getMessage();
if ($loggerAvailable)
$this->logger->error($msg);
else
consoleLog($this->res, CON_ERROR, $msg);
$this->eventMessages[] = $msg;
return null;
}
if (is_null($newKey)) return null;
if ($newKey != $key) {
$data[$newKey] = $value;
unset($data[$key]);
}
}
$this->state = STATE_SUCCESS;
$this->status = true;
return [$data];
} else {
$counter = 0;
for ($index = 0, $last = count($_data); $index < $last; $index++) {
$row = null;
foreach ($_data[$index] as $key => $value) {
$ck = array_search($key, $this->cacheMap);
if (false === $ck) {
$ck = array_key_exists($key, $this->cacheMap);
/*
* edge case - this case will be encountered in situations where we're using non-cache-mapped
* keys (e.g.: column literals) in a cached-class query where there exists a cache-map.
* Journaling saves the recovery query in literal (as opposed to cache-mapped) format... so, we
* need to accommodate the possibility where the data keys exist only in the $fieldList member
* and, if so, treat the keys as valid values once we've exhausted cache-map processing
*/
if (false === $ck) { // check to see if key is member of $fieldList
// first - see if we have an extension appended to the key - if not, add one
$newKey = $this->addExtension($key);
// then, check if the qualified key exists in the fieldList - if so, update the value of $ck
$ck = (in_array($newKey, $this->fieldList)) ? $newKey : false;
}
if (false === $ck) {
$msg = ERROR_DATA_INVALID_KEY . $key;
$this->eventMessages[] = $msg;
if ($loggerAvailable)
$this->logger->data($msg);
else
consoleLog($this->res, CON_ERROR, $msg);
if ($_allFields) $badData = true;
$ck = $key;
}
}
if (is_array($value) and !empty($this->subCollections) and array_key_exists($ck, $this->subCollections)) {
try {
$value = $this->convertCacheMapDataToSchema($value);
} catch (TypeError $t) {
$msg = ERROR_TYPE_EXCEPTION . COLON . $t->getMessage();
if ($loggerAvailable)
$this->logger->error($msg);
else
consoleLog($this->res, CON_ERROR, $msg);
return null;
}
                        if (is_null($value)) { // the recursive conversion returns null on failure, never false
                            $msg = sprintf(ERROR_SUB_C_V_NULL, $ck);
                            if ($loggerAvailable)
                                $this->logger->warn($msg);
                            else
                                consoleLog($this->res, CON_ERROR, $msg);
                            $this->eventMessages[] = $msg;
                        }
}
if (false !== $ck) $row[$ck] = $value;
}
if (!empty($row)) $data[$counter++] = $row;
}
if (($_allFields and !$badData and is_array($data)) or (!$_allFields and is_array($data))) {
$this->state = STATE_SUCCESS;
$this->status = true;
}
}
return( ($this->status) ? $data : null );
}
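As a usage illustration, the key-translation core of the method above can be reduced to a dependency-free sketch. The map shape and key names below are invented for the example; the real method also consults `$fieldList`, `$subCollections`, and the class logger:

```php
<?php
// Simplified sketch of the cacheMap-to-schema conversion described above.
// Assumes a map of schema-key => public-key; unknown keys pass through
// unchanged, mirroring the framework method's fallback behaviour.
function convertToSchema(array $rows, array $cacheMap): array
{
    $out = [];
    foreach ($rows as $tuple) {
        $converted = [];
        foreach ($tuple as $key => $value) {
            // array_search() finds the schema key whose mapped public key matches
            $schemaKey = array_search($key, $cacheMap, true);
            $converted[($schemaKey !== false) ? $schemaKey : $key] = $value;
        }
        $out[] = $converted;
    }
    return $out;
}
```

So a public tuple like `['name' => 'Ada']` with a map of `['user_name_s' => 'name']` comes back keyed as `['user_name_s' => 'Ada']`.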
// NOTE: dataScrub() wasn't deprecated, just eviscerated...
/**
* dataScrub() -- private method
*
* this method parses all of the data stored in the protected $data member and replaces keys with cleaned values
* (extensions stripped from keys) and critical values removed entirely.
*
 * $_data -- call-by-reference variable that's implicitly returned
*
* While processing the data rows, we make a recursive call back to this method if we encounter sub-arrays so
* that the sub-array keys can be stripped (method 1 only!) also.
*
* When generating the return data, for every row of data, we check each column to ensure it's not listed in the
* $hiddenColumns member array and, if it is, we remove it.
*
* For nosql-based collections, if we specify that we want the meta data, then we'll return the history
* sub-collection (aka meta data) so if meta isn't specified, the meta is dropped from the return set.
*
* There are no errors raised in this method. The success is implicitly defined in the $_data return structure.
*
* NOTE:
* =====
* Cache-Key Mapping is located in the private static method gasCache::buildMappedDataArray().
*
*
 * @author mike@givingassistant.org
* @version 1.0
*
* @param $_data
*
* HISTORY:
* ========
* 06-22-17 mks original coding
* 08-14-17 mks CORE-493: removing meta param support (DB_HISTORY no longer supported)
*
*/
private function dataScrub(array &$_data): void
{
if (empty($_data)) {
$msg = ERROR_DATA_MISSING_ARRAY . STRING_DATA;
$this->eventMessages[] = $msg;
if (isset($this->logger) and $this->logger->available)
$this->logger->error($msg);
else
consoleLog($this->res, CON_ERROR, $msg);
return;
}
/*
* if we're requesting a clean data set, and we've not requested a key-mapping, then
* clean the data "old-school" style, stripping off extensions, pulling the mongo ID
* fields, in the return data set.
*/
for ($index = 0, $last = count($_data); $index < $last; $index++) {
if (!empty($_data[$index]) and is_array($_data[$index])) {
foreach ($_data[$index] as $key => $value) {
$newKey = str_replace($this->ext, '', $key);
                if (is_array($value)) {
                    try {
                        $this->dataScrub($value);
                    } catch (TypeError $t) {
                        $msg = basename(__METHOD__) . AT . __LINE__ . COLON . ERROR_TYPE_EXCEPTION . COLON . $t->getMessage();
                        $this->eventMessages[] = $msg;
                        if (isset($this->logger) and $this->logger->available)
                            $this->logger->error($msg);
                        else
                            consoleLog($this->res, CON_ERROR, $msg);
                        return; // abort the scrub only when the recursive call fails
                    }
                    // foreach iterates a copy, so write the scrubbed sub-array back
                    $_data[$index][$key] = $value;
                }
if ($newKey != $key and !in_array($newKey, $this->hiddenColumns)) {
$_data[$index][$newKey] = $value;
unset($_data[$index][$key]);
} elseif (in_array($newKey, $this->hiddenColumns)) {
unset($_data[$index][$key]);
}
}
}
}
}
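The scrubbing behaviour documented above (strip the class extension from every key, drop hidden columns, recurse into sub-arrays) can be sketched as a standalone function. The `_s` extension and the hidden-column list are illustrative assumptions, not framework values:

```php
<?php
// Minimal sketch of dataScrub(): strip an extension suffix from each key,
// recurse into sub-arrays so their keys are stripped as well, and drop any
// column whose cleaned name appears in the hidden list.
function scrubRows(array $rows, string $ext, array $hidden): array
{
    foreach ($rows as $i => $row) {
        foreach ($row as $key => $value) {
            $newKey = str_replace($ext, '', $key);
            if (is_array($value)) {
                // recurse so sub-array keys are stripped too
                $value = scrubRows([$value], $ext, $hidden)[0];
            }
            unset($rows[$i][$key]);
            if (!in_array($newKey, $hidden, true)) {
                $rows[$i][$newKey] = $value;
            }
        }
    }
    return $rows;
}
```

Given a row `['name_s' => 'Ada', 'pwd_s' => '...']` with `pwd` hidden, the result keeps only `['name' => 'Ada']`.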
/**
* cmData() -- public function
*
* This public function is a dirty little way to stuff whatever data is defined in $_payload to replace whatever
* is stored in the protected member: $data.
*
* The only requirement is that the input parameter be an array.
*
* The purpose of this method is to store the cacheMap key(s) into the $data payload right before the broker
 * event, that generated the data payload, finishes processing and releases the class memory assigned to the object.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_payload
*
*
* HISTORY:
* ========
* 03-04-19 mks DB-116: original coding
*
*/
public function cmData(array $_payload): void
{
$this->data = $_payload;
}
/**
* dumpRecord() -- public core method
*
* Sometimes, you need to know what's in the $data payload and, since it's protected, you can't access it directly
* without going through one of the other methods that filters the payload.
*
 * This method allows you to dump a row of data from the $data array to stdout. If you don't specify a row (the
 * only input parameter), you will dump the first (0th) row in the array. However, if the array is empty,
 * we'll output a message to that effect.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param int $_row
*
*
* HISTORY:
* ========
* 10-24-18 mks DB-67: original coding
*
*
*/
public function dumpRecord(int $_row = 0): void
{
if (empty($this->data))
echo INFO_NO_DATA_IN_DATA;
else
var_export($this->data[$_row]);
}
/**
* validateStatus() -- public method
*
* Simple method that takes a single input parameter, a string, and returns a boolean value corresponding to
* whether or not the string-value is present in the validStatus member array.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_status
* @return bool
*
*
* HISTORY:
* ========
* 02-12-18 mks _INF-139: original coding
*
*/
public function validateStatus(string $_status): bool
{
return (in_array($_status, $this->validStatus));
}

<?php
/**
* this class is used when we want to publish a request to the AdminIn broker. The class wraps all of the
* RabbitMQ initialization and communication work so you don't have to. Especially useful for unit testing.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-15-17 mks original coding
* 07-05-17 mks eliminated recursive calls to the $logger entity; replaced $logger calls with console output
*
*/
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Connection\AMQPStreamConnection;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Exception\AMQPTimeoutException;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Message\AMQPMessage;
class gacAdminClientIn
{
/** @var PhpAmqpLib\Connection\AMQPStreamConnection */
private $rabbitConnection;
/** @var PhpAmqpLib\Channel\AMQPChannel */
private $rabbitChannel;
private $rabbitCorrelationID;
private $rabbitCallbackQueue;
private $queueName;
private $res = 'BACI: '; // Broker Admin Client In
public $status;
/**
* __construct() -- public method
*
* this is the constructor for the class. it requests an admin resource from the resource manager and declares
* a client-side connection to the service.
*
* there is an optional input parameter -- $_fw (from-where) that inserts a string into the queue label allowing
* easy identification of the requesting source.
*
* the method returns no values. It only sets the class' status member variable, a Boolean, on success or fail,
* accordingly.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* @param $_fw - "from where" - tweaks queue label to identify request origin
*
*
* HISTORY:
* ========
* 06-15-17 mks original coding
*
*/
public function __construct($_fw = 'AdminClientIn')
{
register_shutdown_function(array($this, STRING_DESTRUCTOR));
$this->status = false;
$this->queueName = gasResourceManager::$cfgAdmin[CONFIG_BROKER_QUEUE_TAG] . BROKER_QUEUE_AI;
        $this->rabbitConnection = gasResourceManager::fetchResource(RESOURCE_ADMIN);
if (is_null($this->rabbitConnection)) return;
$this->rabbitChannel = $this->rabbitConnection->channel();
$label = uniqid('gacAdminInClient<' . $_fw . '>:');
list($this->rabbitCallbackQueue, ,) = $this->rabbitChannel->queue_declare($label, false, false, false, false); // was: f,f,f,t
$this->status = true;
return;
}
/**
* call() -- public method
*
 * This method is invoked from outside the class and is the entry point for publishing a message request to the
 * AdminIn broker. It creates a new AMQP message, publishes it to the queue (defined in the constructor),
 * and then exits, returning true to indicate that the message was successfully published.
 *
 * Since the AdminIn broker is a fire-and-forget broker, there are no return messages to block-and-wait on.
*
* If an exception is raised by this class, then a false value will be returned.
*
 * NOTE: the true/false return values are not, in any way, a reflection of the processing success/failure on the
 * remote service. The general rule of thumb is that if we can publish the request, then we can only assume the
 * request was successfully consumed and processed.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $_data
*
* HISTORY:
* ========
* 06-15-17 mks original coding
*
*/
public function call($_data)
{
$this->rabbitCorrelationID = uniqid();
$res = false;
try {
$rabbitMessage = new AMQPMessage((string)$_data, array(BROKER_CORRELATION_ID => $this->rabbitCorrelationID, BROKER_REPLY_TO => $this->rabbitCallbackQueue));
$this->rabbitChannel->basic_publish($rabbitMessage, '', $this->queueName);
$res = true;
} catch (AMQPTimeoutException $e) {
echo getDateTime() . CON_ERROR . $this->res . ERROR_BROKER_EXCEPTION_TIMEOUT . PHP_EOL;
echo getDateTime() . CON_ERROR . $this->res . $e->getMessage() . PHP_EOL;
} catch (\PhpAmqpLib\Exception\AMQPRuntimeException $e) {
echo getDateTime() . CON_ERROR . $this->res . ERROR_BROKER_EXCEPTION_RUNTIME . PHP_EOL;
echo getDateTime() . CON_ERROR . $this->res . $e->getMessage() . PHP_EOL;
} catch (AMQPException $e) {
echo getDateTime() . CON_ERROR . $this->res . ERROR_BROKER_EXCEPTION . PHP_EOL;
echo getDateTime() . CON_ERROR . $this->res . $e->getMessage() . PHP_EOL;
} catch (Exception $e) {
echo getDateTime() . CON_ERROR . $this->res . ERROR_BROKER_EXCEPTION . PHP_EOL;
echo getDateTime() . CON_ERROR . $this->res . $e->getMessage() . PHP_EOL;
}
        $this->rabbitChannel->close();
        $this->rabbitConnection->close();
        // null the handles so the registered destructor doesn't close them a second time
        $this->rabbitChannel = null;
        $this->rabbitConnection = null;
        $this->status = $res;
}
public function __destruct()
{
// As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
//
// destructor is registered shut-down function in constructor -- so any recovery
// efforts should go in this method.
        if (is_object($this->rabbitChannel)) {
            $this->rabbitChannel->close();
            $this->rabbitConnection->close();
            // guard against a second close: the destructor is also registered
            // as a shutdown function, so it can run twice
            $this->rabbitChannel = null;
            $this->rabbitConnection = null;
        }
}
}

/**
 * getNoSQLResource() -- private method
*
* this method initializes the nosql resource by attempting to connect to the nosql service. if the connection
* attempt succeeds, then mark the resource as available. Otherwise, post an error-fatal and explicitly mark
* the service as not-available and return.
*
* NOTE:
* -----
* This resource allocation exists outside of the resource manager because the resource manager instantiates
 * this class in its constructor. Were you to request a resource from the resource manager, you'd end up in
 * a circular reference and, if the whole thing does not come to an immediate shuddering stop, it would
 * certainly blow up the first time you attempt to log an error. So, tl;dr: do not attempt to 'fix' this as
* it's not broken.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return Aws\DynamoDb\DynamoDbClient|null
*
* HISTORY:
* ========
* 06-09-17 mks original coding
*
*/
private function getNoSQLResource(): ?Aws\DynamoDb\DynamoDbClient
{
global $eos;
$options = null;
date_default_timezone_set(TIME_TIMEZONE);
if (empty($this->config)) {
echo getDateTime() . CON_ERROR . $this->res . ERROR_CONFIG_RESOURCE_404 . CONFIG_DATABASE_DDB . $eos;
return(null);
}
$noSQLConfig = $this->config[CONFIG_DATABASE_DDB_APPSERVER];
$credentials = [
STRING_KEY => $noSQLConfig[CONFIG_DATABASE_DDB_APPSERVER_KEY_ID],
STRING_SECRET => $noSQLConfig[CONFIG_DATABASE_DDB_APPSERVER_ACCESS_KEY]
];
/*
* Requests to DynamoDB are made over HTTP(S), and this does not require that you establish an upfront
* connection. When you create the client object, you are not making a connection to DynamoDB, you are just
* configuring an HTTP client that will make requests to DynamoDB.
*/
$awsConfig = new Aws\Sdk([
STRING_ENDPOINT => $noSQLConfig[CONFIG_DATABASE_DDB_APPSERVER_DSN] . ':' . $noSQLConfig[CONFIG_DATABASE_DDB_APPSERVER_PORT],
STRING_REGION => $noSQLConfig[CONFIG_DATABASE_DDB_APPSERVER_REGION],
STRING_VERSION => $noSQLConfig[CONFIG_DATABASE_DDB_APPSERVER_VERSION],
STRING_CREDS => $credentials
]);
return $awsConfig->createDynamoDb();
}
/**
* getLog() - public method
*
* getLog is the method that is used to fetch log (or Metrics) records from the mongo collection.
*
* because ddb limits queries to 1MB returns, and the paint is still wet on this schema, for now I'm
* going to limit queries for log-file fetching to just the last N records created within the last hour and
* we'll just grab up to the limit of the records returned - which should still be a significant number of
* records...
*
* This method reads the last X records created in the last hour (since this method is mainly used for
* providing HTML output to the log-reader) and wraps the data in HTML table rows intended for the logDump
* utility.
*
* todo -- pagination support? Query by error-code? Query by eventID? Query by class?
*
* @author mshallop@pathway.com
* @version 1.0
*
* @param string $_what defaults to the log template -- should be over-ridden for metrics template
* @return null|string
* @throws Exception
*
* HISTORY:
* ========
* 06-07-17 mks original coding
* 06-14-17 mks refactored for ddb
*
*/
public function getLog(string $_what = TEMPLATE_CLASS_LOGS): ?string
{
$result = null;
$returnData = null;
    $marshaler = new Marshaler(); // black-box JSON converter
$lastHour = time() - NUMBER_ONE_HOUR_SEC;
if ($_what != TEMPLATE_CLASS_LOGS and $_what != TEMPLATE_CLASS_METRICS) $_what = TEMPLATE_CLASS_LOGS;
$eav = $marshaler->marshalJson('{":ts" :' . $lastHour . '}');
$params = [
DDB_STRING_TABLE_NAME => $this->collectionName,
DDB_STRING_KEY_COND_EXPR => LOG_CREATED . ' > :ts',
DDB_STRING_EXPR_ATTR_VALS => $eav
];
try {
$result = $this->connection->query($params);
} catch (DynamoDbException $e) {
$this->errStack[] = __FILE__ . ':' . __LINE__ . ':' . __METHOD__ . ':' . $this->class . ':' .
ERROR_FATAL . ' caught cursor exception: ' . $e->getMessage();
self::throwFatal();
}
if (!is_null($result)) {
foreach ($result[DDB_STRING_ITEMS] as $row) {
$returnData .= '<div class="rowMeta">'; // note: css is defined in the utilities directory
$returnData .= '(' . $row[(DB_PKEY . $this->ext)] . ') - ';
// $returnData .= date(TIME_DATE_FORMAT, $row[(META_SESSION_DATE . self::$ext)]->sec) . ' - ';
// add error label as a span: warn/error/fatal...
$returnData .= self::getErrorLabel($row[(LOG_LEVEL . $this->ext)]);
$returnData .= ' ' . $row[(ERROR_FILE . $this->ext)] . '(' . $row[(ERROR_LINE . $this->ext)] . ')';
$cd = '';
if (!empty($row[(ERROR_CLASS . $this->ext)])) $cd = ' class[' . $row[(ERROR_CLASS . $this->ext)] . ']';
if (!empty($row[(ERROR_METHOD . $this->ext)])) $cd .= '.method(' . $row[(ERROR_METHOD . $this->ext)] . ')</div>';
$returnData .= $cd;
/*
if ($row[(ERROR_TYPE . self::$ext)] == ERROR_TRACE) {
$returnData .= '</div>';
}
*/
$returnData .= '<div class="rowData">' . htmlentities($row[(ERROR_MESSAGE . $this->ext)]);
if ($_what == TEMPLATE_CLASS_METRICS) {
$returnData .= ' - ' . $row[(DB_TIMER . $this->ext)] . ' or ';
$returnData .= ($row[(DB_TIMER . $this->ext)] * NUMBER_MS_PER_SEC) . 'ms';
}
$returnData .= '</div>';
$returnData .= '<div class="rowHist">';
foreach($row[(DB_HISTORY . $this->ext)] as $histRec) {
$returnData .= date('Y-M-d h:i:s', $histRec[META_SESSION_DATE]->sec);// . ' (';
if (!is_null($row[(LOG_EVENT_GUID . $this->ext)]))
$returnData .= ', Event ID: ' . $row[(LOG_EVENT_GUID . $this->ext)];
// $returnData .= $histRec[META_SESSION_EVENT] . ') from (';
// $returnData .= $histRec[META_SESSION_IP] . '): ';
// $returnData .= ((isset($histRec[META_SESSION_ID])) ? $histRec[META_SESSION_ID] : $histRec[META_CLIENT]) . '<br />';
}
$returnData .= '</div><br />';
}
}
return ($returnData);
}
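For reference, the parameter structure assembled above maps onto DynamoDB's Query API like this (a Python dict for illustration only; the PHP SDK's Marshaler expands the `marshalJson()` string into the same typed attribute-value map, and the table/attribute names here are hypothetical stand-ins for `$this->collectionName` and `LOG_CREATED`):

```python
import time

last_hour = int(time.time()) - 3600  # $lastHour in the PHP above

params = {
    "TableName": "logs",                        # $this->collectionName
    "KeyConditionExpression": "created > :ts",  # LOG_CREATED . ' > :ts'
    "ExpressionAttributeValues": {
        # marshalJson('{":ts" : <n>}') produces the typed form below:
        # numeric values marshal as {"N": "<stringified number>"}
        ":ts": {"N": str(last_hour)},
    },
}
```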

View File

@@ -0,0 +1,68 @@
/**
* validateMeta() -- public method
*
* This method requires one input parameter:
*
* $_meta -- a key-value paired array containing the current meta data payload
*
 * first we validate the input parameter to ensure we're working with a valid data object. If not, we
 * immediately return false and set the gacMetrics property (stopProcessing) to true. This allows us
 * to signal the gacFactory class that a processing error has occurred.
*
* Otherwise, spin through the meta data that was passed to the method and compare each key in the array to the
* list of "authorized" keys defined for the current class. If a key does not exist in the authoritative index,
* then remove that key from the input-meta data and record the event in the log file and in the gacFactory
* class eventMessages property.
*
* Method returns a boolean value that indicates if meta data was validated.
*
* Since the meta data array is passed as a call-by-reference variable, dropped fields will propagate back to the
* calling client. A list of dropped fields, if any, will be stored in the eventMessages container.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_meta
* @return bool
*
* HISTORY:
* ========
* 06-21-17 mks original coding
* 10-05-17 mks CORE-584: added validation for META_SKIP and META_LIMIT
*
*/
public function validateMeta(array &$_meta):bool
{
        if (empty($_meta)) { // the array type hint already guarantees an array; only emptiness needs checking
$this->logger->error(ERROR_DATA_META_REQUIRED);
$this->eventMessages[] = ERROR_DATA_META_REQUIRED;
return(false);
}
foreach ($_meta as $key => $value) {
if (!array_key_exists($key, $this->fields)) {
unset($_meta[$key]);
$msg = sprintf(NOTICE_META_DISCARD, $key);
$this->eventMessages[] = $msg;
if ($this->debug) $this->logger->debug($msg);
} else {
switch ($key) {
case META_SKIP :
case META_LIMIT :
if (!is_numeric($value)) {
$msg = ERROR_DATA_FIELD_DROPPED . $key;
$this->eventMessages[] = $msg;
if ($this->debug) $this->logger->debug($msg);
$msg = sprintf(ERROR_DATA_TYPE_MISMATCH_DETAILS, $key, DATA_TYPE_INTEGER, gettype($value));
$this->eventMessages[] = $msg;
if ($this->debug) $this->logger->debug($msg);
unset($_meta[$key]);
}
break;
}
}
}
return(true);
}
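The validateMeta() flow above can be sketched in a language-neutral way as follows (Python used purely for illustration; `ALLOWED_FIELDS` stands in for the class's authorized-key map, and the lowercase key names are hypothetical, not the framework's META_* constants):

```python
def _is_numeric(v):
    """Rough analogue of PHP's is_numeric()."""
    try:
        float(v)
        return True
    except (TypeError, ValueError):
        return False

ALLOWED_FIELDS = {"template", "skip", "limit", "status"}

def validate_meta(meta: dict):
    """Drop unknown keys, then type-check the paging keys, mirroring the
    unset()/eventMessages bookkeeping of the PHP method."""
    messages = []
    cleaned = {}
    for key, value in meta.items():
        if key not in ALLOWED_FIELDS:
            messages.append(f"discarded unknown meta key: {key}")
            continue
        # skip/limit must be numeric, or the key is dropped entirely
        if key in ("skip", "limit") and not _is_numeric(value):
            messages.append(f"dropped {key}: expected integer, got {type(value).__name__}")
            continue
        cleaned[key] = value
    return cleaned, messages
```

Because the PHP method takes `$_meta` by reference, the dropped keys propagate back to the caller; the sketch models that by returning the cleaned dict explicitly.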

View File

@@ -0,0 +1,55 @@
/**
* deBSON() -- private method
*
 * When fetching data, MongoDB tends to BSON-serialize all properties within a record for non-packed record
 * arrays. So this recursive method takes the current data payload of N records and traverses all the
 * records, looking for declared array fields (fieldTypes).
*
* When found, that field type is force-cast to type = array and, if the field itself is another array, such
* as a sub-collection, the method will recursively call itself.
*
* As of this time, version 1.0.0, this method is only called from the _fetchRecords() method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $_data
*
* HISTORY:
* ========
* 08-29-17 mks CORE-494: original coding
* 11-26-18 mks DB-55: added error processing if a key is not a member of the current class
*
*/
private function deBSON(&$_data)
{
if (is_array($_data) and array_key_exists(0, $_data)) {
for ($index = 0, $limit = count($_data); $index < $limit; $index++) {
foreach ($_data[$index] as $column => &$value) {
                if (isset($this->fieldTypes[$column]) and $this->fieldTypes[$column] == DATA_TYPE_ARRAY) {
if (!is_scalar($value)) {
foreach ($value as &$rec) {
if (!is_scalar($rec)) $rec = (array) $rec;
}
$this->deBSON($value);
}
$_data[$index][$column] = (array) $value;
}
}
}
} elseif (is_array($_data)) {
foreach ($_data as $key => $val) {
if (array_key_exists($key, $this->fieldTypes) and $this->fieldTypes[$key] == DATA_TYPE_ARRAY) {
if (is_array($val))
$this->deBSON($val);
$_data[$key] = (array) $val;
} elseif (!array_key_exists($key, $this->fieldTypes)) {
$msg = ERROR_DATA_FIELD_NOT_MEMBER . $key;
$this->eventMessages[] = sprintf(STUB_LOC, basename(__FILE__),__METHOD__, __LINE__) . COLON . $msg;
$this->logger->data($msg);
}
}
}
}
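The core of the deBSON() recursion can be sketched like this (Python for illustration; `SimpleNamespace` stands in for the stdClass objects the MongoDB driver returns - an assumption made for the sketch, and unlike the PHP version, which consults the template's fieldTypes map, this converts unconditionally):

```python
from types import SimpleNamespace

def unbson(value):
    """Recursively force-cast deserialized objects back into plain
    dicts/lists, leaving scalar values untouched."""
    if isinstance(value, SimpleNamespace):
        value = vars(value)  # object -> plain dict, like PHP's (array) cast
    if isinstance(value, dict):
        return {k: unbson(v) for k, v in value.items()}
    if isinstance(value, list):
        return [unbson(v) for v in value]  # recurse into sub-collections
    return value
```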

View File

@@ -0,0 +1,709 @@
<?php
/**
* BindParams -- helper class
*
* this is a helper class for the gacMySQL class, specifically for generating dynamic prepared statements.
*
* this class, when instantiated, creates storage for a prepared statement's type and values. When we want to
* create the prepared statement, we use call_user_func_array() and use the output from this method to generate
* the arguments that are normally passed in a prepared statement.
*
* using this class allows a data payload to be dynamically parsed and validated - allows a client to update
* a sub-set of a table without having to explicitly enumerate every column in the table.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* --------
* 06-29-17 mks original coding
*
*/
class BindParams {
private $values = array();
private $types = '';
/**
* add() -- public method
*
* this method accepts two parameters as input - the type of the variable and the value of the variable. In
* this instance, when I say variable, I am referring to a mysql table column.
*
 * if, for some unknown reason, type is a value not allowed, reset it to type 's', which should cover most
 * mistakes.
 *
 * $value is passed by reference to suppress a PHP warning message.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $type
* @param $value
*
* HISTORY:
* --------
* 06-29-17 mks original coding
*
*/
public function add(string $type, &$value)
{
switch ($type) {
case 'd' :
case 'i' :
case 'b' :
case 's' :
break;
default :
$type = 's';
}
$this->values[] = $value;
$this->types .= $type;
}
    public function isEmpty()
    {
        return (empty($this->values));
    }
    public function checkOrd()
    {
        return (count($this->values) == strlen($this->types));
    }
/**
* get() -- public method
*
* get() simply returns the two class variables as a string of output that's tailored to the input
* requirement of mysqli::bind_param().
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return array
*
* HISTORY:
* --------
* 06-29-17 mks Original coding
*
*/
public function get()
{
return array_merge(array($this->types), $this->values);
}
public function refValues($arr) {
$refs = array();
foreach($arr as $key => $value)
$refs[$key] = &$arr[$key];
return $refs;
}
}
class gacMySQL extends gaaNamasteCore
{
private $slaveConnection = null; // resource link to mysqli service
protected $useSlaveServer = false; // should be overridden in the class instantiation
protected $batchSize = PDO_RECORDS_PER_PAGE;
protected $mySqlTypes = array();
protected $tip = false; // indicates if a transaction is already in progress
protected $uniqueIndexes = null;
protected $compoundIndexes = null;
protected $exposedFields = null;
protected $dbEvent; // used to track the different sql events
protected $rowsAffected; // how many rows were affected by the sql query
protected $queryResult; // container to hold return payload from mysqli
protected $recordLimit;
protected $serviceReady; // boolean indicating if the mysql service is ready
// exceptions to the query-builder
public $queryOrderBy;
public $queryOrderByDirection;
public $queryGroupBy;
public $queryGroupByDirection;
public $queryLimit;
public $queryHaving;
public $mysqlMasterAvailable;
public $mysqlSlaveAvailable;
// allowable operands for mysql
public $operands = [
OPERATOR_EQ,
OPERATOR_LTE,
OPERATOR_GTE,
OPERATOR_DNE,
OPERATOR_LT,
OPERATOR_GT,
OPERATOR_NE
];
/**
* __construct -- public method
*
* constructor for the mysql data instantiation class
*
* Three input parameters are supported for the constructor:
*
* $_template: the name of the template that establishes which data class will be instantiated
* $_meta: the meta data payload as received from the broker - critical because it contains the name
* of the class template we're going to be instantiating.
* $_id: an optional parameter - if provided, we'll instantiate the class and then attempt to load
* the record referenced by the primary key value (after evaluating the id type).
*
 * Next, we're going to assign mysql resources - based on the configuration, if we're supporting slave reads, make
* the appropriate assignments so the correct resource is engaged for any particular query.
*
* Load the template properties into the class and set the class properties accordingly.
*
* Every mySQL table has two "primary keys" -- the traditional auto-incrementing integer, and a guid string.
* The best-practices effort of "id's internally, guids externally" applies to mysql structures.
*
 * When we instantiate the class and we receive an id, we have to evaluate whether we were passed a string (guid)
 * or an integer (id) and adjust the current pkey pointer appropriately so that the correct query is built
 * deeper down.
 *
 * Therefore, mysql is the first and, as of this writing, the only db instantiation class that has a floating pkey
* value/type which is established on a data fetch at run-time.
*
*
* @author mike@givingassistant.org
* @version 1.0.0
*
* @param string $_template
* @param array $_meta
* @param mixed $_id
*
* HISTORY:
* --------
* 06-29-17 mks initial coding
*
*/
public function __construct(string $_template, array $_meta, $_id = null)
{
register_shutdown_function(array($this, '__destruct'));
parent::__construct();
        if ($this->trace and $this->logger->available) {
            $this->logger->trace(STRING_ENT_METH . __METHOD__);
            if (!empty($_id) and $this->debug) {
                $this->logger->debug('received id: ' . $_id);
            }
        }
// validate the meta data payload
if (empty($_meta)) {
$this->state = STATE_META_ERROR;
$this->logger->data(ERROR_DATA_META_REQUIRED);
$this->eventMessages[] = ERROR_DATA_META_REQUIRED;
return;
} elseif (!array_key_exists(META_TEMPLATE, $_meta)) {
$this->state = STATE_META_ERROR;
$msg = ERROR_DATA_META_KEY_404 . META_TEMPLATE;
$this->logger->data($msg);
$this->eventMessages[] = $msg;
return;
}
        // load the mysql configuration (the parent constructor was already invoked above)
$this->status = false;
$this->config = gasConfig::$settings[CONFIG_DATABASE_MYSQL];
if (empty($this->config)) {
$msg = ERROR_CONFIG_RESOURCE_404 . RESOURCE_MYSQL;
$this->logger->warn($msg);
$this->eventMessages[] = $msg;
$this->state = STATE_RESOURCE_ERROR_MYSQL;
return;
}
// load the template
$this->templateName = STRING_CLASS_GAT . $_meta[META_TEMPLATE];
if (!$this->loadTemplate()) {
$this->logger->warn(ERROR_TEMPLATE_INSTANTIATE . $_meta[META_TEMPLATE]);
$this->state = STATE_TEMPLATE_ERROR;
return;
}
$this->class = $_meta[META_TEMPLATE]; // set the class to the name of the requested data class
// if we're passed an optional $_id, then evaluate which type of id we're working with and make
// the appropriate assignments.
if (!empty($_id)) {
$_id = trim($_id);
$_id = (is_numeric($_id)) ? abs(intval($_id)) : $_id;
switch (gettype($_id)) {
case DATA_TYPE_STRING :
if (validateGUID($_id)) {
if ($this->pKey != PKEY_GUID) {
$msg = sprintf(ERROR_PKEY_TYPE, DATA_TYPE_STRING);
$this->logger->error($msg);
$this->state = STATE_DATA_TYPE_ERROR;
$this->eventMessages[] = $msg;
return;
}
} else {
$msg = ERROR_INVALID_GUID . $_id;
$this->eventMessages[] = $msg;
$this->logger->error($msg);
$this->state = STATE_DATA_ERROR;
return;
}
break;
case DATA_TYPE_INTEGER :
if ($this->pKey != PKEY_ID) {
$msg = sprintf(ERROR_PKEY_TYPE, DATA_TYPE_INTEGER);
$this->logger->error($msg);
$this->eventMessages[] = $msg;
$this->state = STATE_DATA_TYPE_ERROR;
return;
}
$this->pKey = PKEY_ID;
break;
default :
$msg = sprintf(ERROR_PKEY_TYPE, gettype($_id));
$this->logger->error($msg);
$this->eventMessages[] = $msg;
$this->state = STATE_DATA_TYPE_ERROR;
return;
break;
}
}
// establish and assign mysql connections
if (gasResourceManager::$mySqlMasterAvailable) {
$this->connection = gasResourceManager::fetchResource(RESOURCE_MYSQL_MASTER);
$this->mysqlMasterAvailable = true;
if (gasResourceManager::$mySqlSlaveAvailable) {
$this->slaveConnection = gasResourceManager::fetchResource(RESOURCE_MYSQL_SLAVE);
$this->mysqlSlaveAvailable = true;
} else {
$this->mysqlSlaveAvailable = false;
}
} else {
$this->mysqlMasterAvailable = false;
$this->state = STATE_RESOURCE_ERROR_MYSQL;
return;
}
$this->queryOrderBy = null;
$this->queryGroupBy = null;
$this->queryLimit = null;
$this->queryHaving = null;
        $this->queryOrderByDirection = null;
        $this->queryGroupByDirection = null;
$this->serviceReady = true;
        // if a collection is defined, build the index reference and set the row-return limit
if ($this->collectionName != NONE) {
$this->buildIndexReference();
$this->setRowsReturnedLimit();
}
}
/**
* buildIndexReference() -- private method
*
* this method looks at the table defined in the current class instantiation and fetches the schema
* information about the table from mysql.
*
* Each returned array structure from the query looks like this:
*
* Array
* (
* [Field] => email_usr
* [Type] => varchar(50)
* [Null] => NO
* [Key] => UNI
* [Default] =>
* [Extra] =>
* )
*
* We're looking for the column 'Key' to be not-empty as this indicates that the table column is indexed
* in some way.
*
* We want to save the indexed column information in a K->V paired array so that, when we're parsing
* queries submitted to the mysql service, we can screen the query and prevent the execution of any
* query that does not use the indexed columns of the table.
*
 * The K->V associative array will be stored locally in the $fieldTypes variable (declared in the core).
*
* The Key will contain the name of the indexed column, and the Value will have the mysql type definition
* for that column.
*
* If the query execution generates a mysql error, set a WARN message and return.
* If the query executes, but no indexed columns are returned, raise a WARN message.
*
*
* @author mike@givingassistant.org
* @version 1.0.0
*
* HISTORY:
* --------
* 06-30-17 mks original coding
*
*/
private function buildIndexReference()
{
if ($this->trace) $this->logger->trace(STRING_ENT_METH . __METHOD__);
$data = null;
// generate the cache key appropriate to the class and see if we've already cached this info
// based off a previous instantiation...
$cKey = CACHE_NAMASTE_KEY . '_' . CACHE_MYSQL_TABLE_SCHEMA . '_' . $this->collectionName;
if ($data = gasCache::get($cKey)) {
$data = json_decode(gzuncompress($data), true);
} else {
$this->dbEvent = DB_EVENT_NAMASTE;
$this->strQuery = 'SHOW COLUMNS FROM ' . $this->collectionName;
$this->executeNonPreparedQuery();
if (!$this->rowsAffected) {
$this->logger->warn(ERROR_SQL_FTL_INDEXES);
$this->eventMessages[] = ERROR_SQL_FTL_INDEXES;
} else {
                foreach ($this->queryResult as $row) {
$data[] = $row;
}
}
            gasCache::add($cKey, gzcompress(json_encode($data)));
}
if (!empty($data) and is_array($data)) {
foreach ($data as $row) {
@$this->mySqlTypes[$row[MYSQL_COLUMN_FIELD]] = $row[MYSQL_COLUMN_TYPE];
if (!empty($row[MYSQL_COLUMN_KEY])) {
$this->indexes[] = $row[MYSQL_COLUMN_FIELD];
}
if (@$row[MYSQL_COLUMN_KEY] == MYSQL_INDEX_PRIMARY or @$row[MYSQL_COLUMN_KEY] == MYSQL_INDEX_UNIQUE) {
$this->uniqueIndexes[] = $row[MYSQL_COLUMN_FIELD];
}
}
}
}
/**
* setRowsReturnedLimit() -- private method
*
* this function is called in the constructor for the current table instantiation.
*
* it looks at the information_schema table to get the average_row_length (arl) value for the table.
* this is a gross calculation - the more data in the table, the more accurate the value.
*
* if we can get the arl value from the information_schema, then divide this number into the system
* constant (max_data_returned) to see if the result is less-than or equal-to the system constant for the
* number of rows returned per query...
*
* if the calculated value is smaller, then allow the system constants to remain -- if not, adjust the system
* constant for the max_rows_returned so that the total amount of data remains under the system constant
* max_data_returned.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* 07-10-17 mks original coding
*
*/
private function setRowsReturnedLimit()
{
if (gasConfig::$settings[ERROR_TRACE] and $this->logger->available) {
$this->logger->trace(STRING_ENT_METH . __METHOD__);
}
$key = PDO_DATA_DEFINITION . '_' . PDO_AVG_ROW_LEN . '_' . $this->collectionName;
$cacheData = null;
$this->dbEvent = MYSQL_EVENT_META;
if ($cacheData = gasCache::get($key)) {
$cacheData = json_decode(gzuncompress($cacheData), true);
$this->recordLimit = $cacheData[PDO_RECORDS_PER_PAGE];
} else {
$schema = gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_MYSQL][CONFIG_DATABASE_MYSQL_APPSERVER][CONFIG_DATABASE_MYSQL_MASTER][CONFIG_DATABASE_MYSQL_DB];
$this->strQuery = '-- noinspection SqlDialectInspection
SELECT AVG_ROW_LENGTH
FROM information_schema.tables
WHERE table_schema = "' . $schema . '"
AND table_name = "' . $this->collectionName . '"';
$this->recordLimit = PDO_RECORDS_PER_PAGE;
$this->executeNonPreparedQuery();
if (($this->rowsAffected === 1) and (isset($this->queryResult[0][MYSQL_AVG_ROW_LENGTH]))) {
$arl = $this->queryResult[0][MYSQL_AVG_ROW_LENGTH];
if (($arl * PDO_RECORDS_PER_PAGE) > MYSQL_MAX_DATA_RETURNED) {
$this->recordLimit = intval(MYSQL_MAX_DATA_RETURNED / $arl);
}
}
$cacheData[PDO_RECORDS_PER_PAGE] = $this->recordLimit;
            if (!gasCache::add($key, gzcompress(json_encode($cacheData)), gasCache::$cacheTTL)) {
$this->logger->warn('memcache:set failed - check log files');
}
}
}
/**
* executeNonPreparedQuery() -- private method
*
 * this is the main method to execute all NAMASTE and SELECT queries - any query that is not a prepared statement
 * will execute here.
 *
 * upon invocation, the string passed (implicitly through the member variable $strQuery) will be cleaned through
 * the common function, and then we'll evaluate the query based on the setting of the member variable $dbEvent.
 * If $dbEvent is not NAMASTE and not SELECT, then we're going to return with a WARN message requiring the client
*
* Next, parse the query and look for the "?" character - which is used as a place holder in prepared queries,
* and, if found, reject the request and return with a WARN message.
*
* Call a private method to see if the slave server is enabled and, if so, use it if the current query contains
* the SELECT keyword (meta queries will not use SELECT) and return the connection resource to a local variable.
*
* if query timers are enabled, then mark the start time and execute the query. record the end-time and log
* the query through the parent::method().
*
* Make a call to fetch the data as an associative array and post the results, along with the row count, to
* class variables.
*
 * if the query generated a mysql error, generate a WARN message and return.
*
* @author mike@givingassistant.org
* @version 1.0.0
*
* HISTORY:
* --------
* 06-30-17 mks original coding
*
*/
private function executeNonPreparedQuery()
{
if ($this->trace) $this->logger->trace(STRING_ENT_METH . __METHOD__);
$startTime = floatval(0);
if ($this->debug) {
$this->logger->debug($this->strQuery);
}
// todo: can I exec this schema command using the read-slave? Do I want to?
        /** @var mysqli $dbLink */
        $dbLink = $this->connection;
        $this->queryResult = null;
if ($this->dbEvent != DB_EVENT_NAMASTE) {
$this->strQuery = cleanQueryString($this->strQuery);
}
switch($this->dbEvent) {
case DB_EVENT_NAMASTE :
case DB_EVENT_SELECT :
break;
default :
$this->logger->error(ERROR_SQL_NOT_PREP_STMNT);
return;
}
        if (strpos($this->strQuery, '?') !== false) {
$this->logger->warn(ERROR_SQL_LOST_PREP_QUERY);
$this->logger->warn($this->strQuery);
return;
}
if ($this->useTimers) {
$startTime = gasStatic::doingTime();
}
if ($result = $dbLink->query($this->strQuery)) {
$this->rowsAffected = $result->num_rows;
if ($this->useTimers) {
$this->logger->metrics($this->strQuery, gasStatic::doingTime($startTime));
$this->logger->debug(MYSQL_ROWS_AFFECTED . $this->rowsAffected);
}
while ($row = $result->fetch_assoc()) {
$this->queryResult[] = $row;
}
} else {
            $this->logger->warn('error executing query: ' . $this->strQuery);
}
}
/**
* loadTemplate() -- private method
*
* this method is invoked by the constructor and serves to load the class template file, assimilating it into
* the current instantiation.
*
* template loads are done on the schema-instantiation level, instead of the core, because of the changes in
* the template file(s) across various schemas.
*
* the method will load the class template and set the class member variables controlled/referenced by the
* template.
*
* successful loading of the template is determined by the return (boolean) value -- on error, a log message
* will be generated so it's up to the developer to check logs on fail-returns to see why their template
* file was not correctly assimilated.
*
* The template to be loaded is first derived in the constructor (post validation that the template file
* exists) and is pulled from the member variable (also set in the constructor) within this method.
*
*
* @author mike@givingassistant.org
* @version 1.0.0
*
* @return bool
*
* HISTORY:
* ========
* 06-30-17 mks original coding
*
*/
private function loadTemplate():bool
{
if ($this->trace) $this->logger->trace(STRING_ENT_METH . __METHOD__);
try {
/** @var gatTestMySQL template */
$this->template = new $this->templateName;
} catch (Exception $e) {
$this->logger->warn($e->getMessage());
$this->state = STATE_FRAMEWORK_FAIL;
return (false);
}
if (!is_object($this->template)) {
$this->logger->warn(ERROR_FILE_404 . $this->templateName);
$this->setState(ERROR_FILE_404 . $this->templateName);
return (false);
}
if ($this->template->schema != TEMPLATE_DB_PDO) {
$this->logger->warn(ERROR_SCHEMA_MISMATCH . $this->template->schema . ERROR_STUB_EXPECTING . TEMPLATE_DB_PDO);
$this->setState(ERROR_SCHEMA_MISMATCH . $this->templateName);
return (false);
}
// transfer meta data info to current instantiation
$this->schema = $this->template->schema;
$this->collectionName = $this->template->collection;
$this->ext = $this->template->extension;
$this->useCache = $this->template->setCache;
$this->useDeletes = $this->template->setDeletes;
$this->useAuditing = $this->template->setAuditing;
$this->useJournaling = $this->template->setJournaling;
$this->allowUpdates = $this->template->setUpdates;
$this->useDetailedHistory = $this->template->setHistory;
$this->defaultStatus = $this->template->setDefaultStatus;
$this->searchStatus = $this->template->setSearchStatus;
$this->useLocking = $this->template->setLocking;
$this->useTimers = ($this->template->setTimers and gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_QUERY_TIMERS]);
$this->pKey = $this->template->setPKey;
$this->useToken = $this->template->setTokens;
$this->cacheExpiry = $this->template->cacheTimer;
if (isset($this->template->fields) and is_array($this->template->fields)) {
foreach ($this->template->fields as $key => $value) {
if ($key == DB_HISTORY) {
$this->fieldList[] = $key;
$this->fieldTypes[$key] = $value;
} else {
$this->fieldList[] = ($key . $this->ext);
$this->fieldTypes[($key . $this->ext)] = $value;
}
}
}
if (isset($this->template->indexes) and is_array($this->template->indexes)) {
foreach ($this->template->indexes as $key => $value) {
$this->indexes[] = ($key . $this->ext);
}
}
if (!is_null($this->template->cacheMap) and $this->useCache) {
foreach ($this->template->cacheMap as $key => $value) {
$this->cacheMap[($key . $this->ext)] = $value;
}
} elseif (!$this->useCache) {
$this->cacheMap = null;
if (!is_null($this->template->exposedFields)) {
$this->exposedFields = $this->template->exposedFields;
}
}
if (!is_null($this->template->uniqueIndexes)) $this->uniqueIndexes = $this->template->uniqueIndexes;
if (!is_null($this->template->compoundIndexes)) $this->compoundIndexes = $this->template->compoundIndexes;
if (!is_null($this->template->binFields)) {
foreach ($this->template->binFields as $key) {
$this->binaryFields[] = ($key . $this->ext);
}
}
if ($this->template->selfDestruct) {
unset($this->template);
}
return (true);
}
protected function _createRecord($_data)
{
}
protected function _fetchRecords($_dd, $_rd = null, $_co = true, $_skip = 0, $_limit = 0, $_sort = null)
{
}
protected function _updateRecord($_data){
}
protected function _deleteRecord($_data)
{
}
protected function _lockRecord()
{
}
protected function _releaseLock()
{
}
protected function _isLocked()
{
}
/**
* __destruct() -- public function
*
* class destructor
*
* @author mike@givingassistant.org
* @version 1.0.0
*
* HISTORY:
* ========
* 06-29-17 mks original coding
*
*/
public function __destruct()
{
        // As of PHP 5.3.10, destructors are not run on shutdown caused by fatal errors.
        //
        // the destructor is registered as a shut-down function in the constructor -- so any
        // recovery efforts should go in this method.
        // there is no destructor method defined in the core abstraction class, hence
        // there is no call to a parent destructor in this class.
}
}
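The setRowsReturnedLimit() arithmetic above boils down to the following (a Python sketch; the constant values are illustrative assumptions, not the framework's actual PDO_RECORDS_PER_PAGE / MYSQL_MAX_DATA_RETURNED values):

```python
PDO_RECORDS_PER_PAGE = 500            # assumed default rows-per-page constant
MYSQL_MAX_DATA_RETURNED = 1_000_000   # assumed byte ceiling per query

def record_limit(avg_row_length: int) -> int:
    """Shrink the page size whenever avg_row_length * default page size
    would exceed the byte ceiling; otherwise keep the system default."""
    if avg_row_length * PDO_RECORDS_PER_PAGE > MYSQL_MAX_DATA_RETURNED:
        return MYSQL_MAX_DATA_RETURNED // avg_row_length
    return PDO_RECORDS_PER_PAGE
```

Under these assumed constants, a table averaging 4,000 bytes per row would page at 250 rows instead of 500, keeping each response under the data ceiling.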

View File

@@ -0,0 +1,709 @@
<?php
/**
* BindParams -- helper class
*
* this is a helper class for the gacMySQL class, specifically for generating dynamic prepared statements.
*
* this class, when instantiated, creates storage for a prepared statement's type and values. When we want to
* create the prepared statement, we use call_user_func_array() and use the output from this method to generate
* the arguments that are normally passed in a prepared statement.
*
* using this class allows a data payload to be dynamically parsed and validated - allows a client to update
* a sub-set of a table without having to explicitly enumerate every column in the table.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* --------
* 06-29-17 mks original coding
*
*/
class BindParams {
private $values = array();
private $types = '';
/**
* add() -- public method
*
* this method accepts two parameters as input - the type of the variable and the value of the variable. In
* this instance, when I say variable, I am referring to a mysql table column.
*
* if, for some unknown reason, type is a value not allowed, reset it to type 's' which should cover-up most
* mistakes.
*
* $value as a call-by-reference to suppress a PHP warning message.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $type
* @param $value
*
* HISTORY:
* --------
* 06-29-17 mks original coding
*
*/
public function add(string $type, &$value)
{
switch ($type) {
case 'd' :
case 'i' :
case 'b' :
case 's' :
break;
default :
$type = 's';
}
$this->values[] = $value;
$this->types .= $type;
}
public function isEmpty()
{
return((empty($this->values)) ? true : false);
}
public function checkOrd()
{
return((count($this->values) == strlen($this->types)));
}
/**
* get() -- public method
*
* get() simply returns the two class variables as a string of output that's tailored to the input
* requirement of mysqli::bind_param().
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return array
*
* HISTORY:
* --------
* 06-29-17 mks Original coding
*
*/
public function get()
{
return array_merge(array($this->types), $this->values);
}
public function refValues($arr) {
$refs = array();
foreach($arr as $key => $value)
$refs[$key] = &$arr[$key];
return $refs;
}
}
class gacMySQL extends gaaNamasteCore
{
private $slaveConnection = null; // resource link to mysqli service
protected $useSlaveServer = false; // should be overridden in the class instantiation
protected $batchSize = PDO_RECORDS_PER_PAGE;
protected $mySqlTypes = array();
protected $tip = false; // indicates if a transaction is already in progress
protected $uniqueIndexes = null;
protected $compoundIndexes = null;
protected $exposedFields = null;
protected $dbEvent; // used to track the different sql events
protected $rowsAffected; // how many rows were affected by the sql query
protected $queryResult; // container to hold return payload from mysqli
protected $recordLimit;
protected $serviceReady; // boolean indicating if the mysql service is ready
// exceptions to the query-builder
public $queryOrderBy;
public $queryOrderByDirection;
public $queryGroupBy;
public $queryGroupByDirection;
public $queryLimit;
public $queryHaving;
public $mysqlMasterAvailable;
public $mysqlSlaveAvailable;
// allowable operands for mysql
public $operands = [
OPERATOR_EQ,
OPERATOR_LTE,
OPERATOR_GTE,
OPERATOR_DNE,
OPERATOR_LT,
OPERATOR_GT,
OPERATOR_NE
];
/**
* __construct -- public method
*
* constructor for the mysql data instantiation class
*
* Three input parameters are supported for the constructor:
*
* $_template: the name of the template that establishes which data class will be instantiated
* $_meta: the meta data payload as received from the broker - critical because it contains the name
* of the class template we're going to be instantiating.
* $_id: an optional parameter - if provided, we'll instantiate the class and then attempt to load
* the record referenced by the primary key value (after evaluating the id type).
*
* Next, we going to assign mysql resources - based on the configuration, if we're supporting slave reads, make
* the appropriate assignments so the correct resource is engaged for any particular query.
*
* Load the template properties into the class and set the class properties accordingly.
*
* Every mySQL table has two "primary keys" -- the traditional auto-incrementing integer, and a guid string.
* The best-practices effort of "id's internally, guids externally" applies to mysql structures.
*
* When we instantiate the class and we receive an id, we have to evaluate if we're passed a string (guid) or an
* integer (id) and adjust the current pkey pointer appropriately so that correct query is build deeper down.
*
* Therefore, mysql is the first and, of this writing, the only db instantiation class that has a floating pkey
* value/type which is established on a data fetch at run-time.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_template
* @param array $_meta
* @param mixed $_id
*
* HISTORY:
* --------
* 06-29-17 mks initial coding
*
*/
public function __construct(string $_template, array $_meta, $_id = null)
{
register_shutdown_function(array($this, '__destruct'));
parent::__construct();
if ($this->trace and $this->logger->available) {
$this->logger->trace(STRING_ENT_METH . __METHOD__);
if (!empty($_guid) and $this->debug) {
$this->logger->debug('received guid: ' . $_guid);
}
}
// validate the meta data payload
if (empty($_meta)) {
$this->state = STATE_META_ERROR;
$this->logger->data(ERROR_DATA_META_REQUIRED);
$this->eventMessages[] = ERROR_DATA_META_REQUIRED;
return;
} elseif (!array_key_exists(META_TEMPLATE, $_meta)) {
$this->state = STATE_META_ERROR;
$msg = ERROR_DATA_META_KEY_404 . META_TEMPLATE;
$this->logger->data($msg);
$this->eventMessages[] = $msg;
return;
}
// invoke the parent constructor, load the mysql configuration
parent::__construct();
$this->status = false;
$this->config = gasConfig::$settings[CONFIG_DATABASE_MYSQL];
if (empty($this->config)) {
$msg = ERROR_CONFIG_RESOURCE_404 . RESOURCE_MYSQL;
$this->logger->warn($msg);
$this->eventMessages[] = $msg;
$this->state = STATE_RESOURCE_ERROR_MYSQL;
return;
}
// load the template
$this->templateName = STRING_CLASS_GAT . $_meta[META_TEMPLATE];
if (!$this->loadTemplate()) {
$this->logger->warn(ERROR_TEMPLATE_INSTANTIATE . $_meta[META_TEMPLATE]);
$this->state = STATE_TEMPLATE_ERROR;
return;
}
$this->class = $_meta[META_TEMPLATE]; // set the class to the name of the requested data class
// if we're passed an optional $_id, then evaluate which type of id we're working with and make
// the appropriate assignments.
if (!empty($_id)) {
$_id = trim($_id);
$_id = (is_numeric($_id)) ? abs(intval($_id)) : $_id;
switch (gettype($_id)) {
case DATA_TYPE_STRING :
if (validateGUID($_id)) {
if ($this->pKey != PKEY_GUID) {
$msg = sprintf(ERROR_PKEY_TYPE, DATA_TYPE_STRING);
$this->logger->error($msg);
$this->state = STATE_DATA_TYPE_ERROR;
$this->eventMessages[] = $msg;
return;
}
} else {
$msg = ERROR_INVALID_GUID . $_id;
$this->eventMessages[] = $msg;
$this->logger->error($msg);
$this->state = STATE_DATA_ERROR;
return;
}
break;
case DATA_TYPE_INTEGER :
if ($this->pKey != PKEY_ID) {
$msg = sprintf(ERROR_PKEY_TYPE, DATA_TYPE_INTEGER);
$this->logger->error($msg);
$this->eventMessages[] = $msg;
$this->state = STATE_DATA_TYPE_ERROR;
return;
}
$this->pKey = PKEY_ID;
break;
default :
$msg = sprintf(ERROR_PKEY_TYPE, gettype($_id));
$this->logger->error($msg);
$this->eventMessages[] = $msg;
$this->state = STATE_DATA_TYPE_ERROR;
return;
}
}
// establish and assign mysql connections
if (gasResourceManager::$mySqlMasterAvailable) {
$this->connection = gasResourceManager::fetchResource(RESOURCE_MYSQL_MASTER);
$this->mysqlMasterAvailable = true;
if (gasResourceManager::$mySqlSlaveAvailable) {
$this->slaveConnection = gasResourceManager::fetchResource(RESOURCE_MYSQL_SLAVE);
$this->mysqlSlaveAvailable = true;
} else {
$this->mysqlSlaveAvailable = false;
}
} else {
$this->mysqlMasterAvailable = false;
$this->state = STATE_RESOURCE_ERROR_MYSQL;
return;
}
$this->queryOrderBy = null;
$this->queryGroupBy = null;
$this->queryLimit = null;
$this->queryHaving = null;
$this->queryGroupByDirection = null;
$this->serviceReady = true;
// if the template mapped to a real collection, build the index reference and set the row-return limit
if ($this->collectionName != NONE) {
$this->buildIndexReference();
$this->setRowsReturnedLimit();
}
}
/**
* buildIndexReference() -- private method
*
* this method looks at the table defined in the current class instantiation and fetches the schema
* information about the table from mysql.
*
* Each returned array structure from the query looks like this:
*
* Array
* (
* [Field] => email_usr
* [Type] => varchar(50)
* [Null] => NO
* [Key] => UNI
* [Default] =>
* [Extra] =>
* )
*
* We're looking for the column 'Key' to be not-empty as this indicates that the table column is indexed
* in some way.
*
* We want to save the indexed column information in a K->V paired array so that, when we're parsing
* queries submitted to the mysql service, we can screen the query and prevent the execution of any
* query that does not use the indexed columns of the table.
*
 * The K->V associative array will be stored locally in the $mySqlTypes variable (declared in the core).
*
* The Key will contain the name of the indexed column, and the Value will have the mysql type definition
* for that column.
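 *
 * As a sketch, the hypothetical column shown above would produce the following assignments:
 *
 *     $this->mySqlTypes['email_usr'] = 'varchar(50)';
 *     $this->indexes[] = 'email_usr';        // Key is non-empty
 *     $this->uniqueIndexes[] = 'email_usr';  // Key == UNI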
*
* If the query execution generates a mysql error, set a WARN message and return.
* If the query executes, but no indexed columns are returned, raise a WARN message.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* --------
* 06-30-17 mks original coding
*
*/
private function buildIndexReference()
{
if ($this->trace) $this->logger->trace(STRING_ENT_METH . __METHOD__);
$data = null;
// generate the cache key appropriate to the class and see if we've already cached this info
// based off a previous instantiation...
$cKey = CACHE_NAMASTE_KEY . '_' . CACHE_MYSQL_TABLE_SCHEMA . '_' . $this->collectionName;
if ($data = gasCache::get($cKey)) {
$data = json_decode(gzuncompress($data), true);
} else {
$this->dbEvent = DB_EVENT_NAMASTE;
$this->strQuery = 'SHOW COLUMNS FROM ' . $this->collectionName;
$this->executeNonPreparedQuery();
if (!$this->rowsAffected) {
$this->logger->warn(ERROR_SQL_FTL_INDEXES);
$this->eventMessages[] = ERROR_SQL_FTL_INDEXES;
} else {
foreach ($this->queryResult as $row) {
$data[] = $row;
}
}
gasCache::add($cKey, gzcompress(json_encode($data)));
}
if (!empty($data) and is_array($data)) {
foreach ($data as $row) {
@$this->mySqlTypes[$row[MYSQL_COLUMN_FIELD]] = $row[MYSQL_COLUMN_TYPE];
if (!empty($row[MYSQL_COLUMN_KEY])) {
$this->indexes[] = $row[MYSQL_COLUMN_FIELD];
}
if (@$row[MYSQL_COLUMN_KEY] == MYSQL_INDEX_PRIMARY or @$row[MYSQL_COLUMN_KEY] == MYSQL_INDEX_UNIQUE) {
$this->uniqueIndexes[] = $row[MYSQL_COLUMN_FIELD];
}
}
}
}
/**
* setRowsReturnedLimit() -- private method
*
* this function is called in the constructor for the current table instantiation.
*
* it looks at the information_schema table to get the average_row_length (arl) value for the table.
* this is a gross calculation - the more data in the table, the more accurate the value.
*
* if we can get the arl value from the information_schema, then divide this number into the system
* constant (max_data_returned) to see if the result is less-than or equal-to the system constant for the
* number of rows returned per query...
*
* if the calculated value is smaller, then allow the system constants to remain -- if not, adjust the system
* constant for the max_rows_returned so that the total amount of data remains under the system constant
* max_data_returned.
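 *
 * illustrative arithmetic (hypothetical values -- the real numbers come from the system constants):
 * with AVG_ROW_LENGTH = 512 bytes, PDO_RECORDS_PER_PAGE = 1000 and MYSQL_MAX_DATA_RETURNED = 262144,
 * 512 * 1000 = 512000 exceeds 262144, so the limit is lowered to intval(262144 / 512) = 512 rows.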
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* 07-10-17 mks original coding
*
*/
private function setRowsReturnedLimit()
{
if (gasConfig::$settings[ERROR_TRACE] and $this->logger->available) {
$this->logger->trace(STRING_ENT_METH . __METHOD__);
}
$key = PDO_DATA_DEFINITION . '_' . PDO_AVG_ROW_LEN . '_' . $this->collectionName;
$cacheData = null;
$this->dbEvent = MYSQL_EVENT_META;
if ($cacheData = gasCache::get($key)) {
$cacheData = json_decode(gzuncompress($cacheData), true);
$this->recordLimit = $cacheData[PDO_RECORDS_PER_PAGE];
} else {
$schema = gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_MYSQL][CONFIG_DATABASE_MYSQL_APPSERVER][CONFIG_DATABASE_MYSQL_MASTER][CONFIG_DATABASE_MYSQL_DB];
$this->strQuery = 'SELECT AVG_ROW_LENGTH
FROM information_schema.tables
WHERE table_schema = "' . $schema . '"
AND table_name = "' . $this->collectionName . '"';
$this->recordLimit = PDO_RECORDS_PER_PAGE;
$this->executeNonPreparedQuery();
if (($this->rowsAffected === 1) and (isset($this->queryResult[0][MYSQL_AVG_ROW_LENGTH]))) {
$arl = $this->queryResult[0][MYSQL_AVG_ROW_LENGTH];
if (($arl * PDO_RECORDS_PER_PAGE) > MYSQL_MAX_DATA_RETURNED) {
$this->recordLimit = intval(MYSQL_MAX_DATA_RETURNED / $arl);
}
}
$cacheData[PDO_RECORDS_PER_PAGE] = $this->recordLimit;
if (!gasCache::add($key, gzcompress(json_encode($cacheData)), gasCache::$cacheTTL)) {
$this->logger->warn('memcache:set failed - check log files');
}
}
}
/**
* executeNonPreparedQuery() -- private method
*
 * this is the main method used to execute all META and SELECT queries -- any query that is not a prepared
 * statement will execute here: essentially, namaste's internal queries.
*
 * upon invocation, the string passed (implicitly through the member variable $strQuery) will be cleaned through
 * the common function, and then we'll evaluate the query based on the setting of the member variable $dbEvent.
* If $dbEvent is not META and not SELECT, then we're going to return with a WARN message requiring the client
* to use the prepared-query method.
*
 * Next, parse the query and look for the "?" character -- the placeholder used in prepared queries --
 * and, if found, reject the request and return with a WARN message.
*
* Call a private method to see if the slave server is enabled and, if so, use it if the current query contains
* the SELECT keyword (meta queries will not use SELECT) and return the connection resource to a local variable.
*
* if query timers are enabled, then mark the start time and execute the query. record the end-time and log
* the query through the parent::method().
*
* Make a call to fetch the data as an associative array and post the results, along with the row count, to
* class variables.
*
 * if the query generated a mysql error, generate a WARN message and return.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* --------
* 06-30-17 mks original coding
*
*/
private function executeNonPreparedQuery()
{
if ($this->trace) $this->logger->trace(STRING_ENT_METH . __METHOD__);
$startTime = floatval(0);
if ($this->debug) {
$this->logger->debug($this->strQuery);
}
// todo: can I exec this schema command using the read-slave? Do I want to?
/** @var mysqli $dbLink */
$dbLink = $this->connection;
$this->queryResult = null;
if ($this->dbEvent != DB_EVENT_NAMASTE) {
$this->strQuery = cleanQueryString($this->strQuery);
}
switch($this->dbEvent) {
case DB_EVENT_NAMASTE :
case DB_EVENT_SELECT :
break;
default :
$this->logger->error(ERROR_SQL_NOT_PREP_STMNT);
return;
}
if (stripos($this->strQuery, '?') !== false) {
$this->logger->warn(ERROR_SQL_LOST_PREP_QUERY);
$this->logger->warn($this->strQuery);
return;
}
if ($this->useTimers) {
$startTime = gasStatic::doingTime();
}
if ($result = $dbLink->query($this->strQuery)) {
$this->rowsAffected = $result->num_rows;
if ($this->useTimers) {
$this->logger->metrics($this->strQuery, gasStatic::doingTime($startTime));
$this->logger->debug(MYSQL_ROWS_AFFECTED . $this->rowsAffected);
}
while ($row = $result->fetch_assoc()) {
$this->queryResult[] = $row;
}
} else {
$this->logger->warn('error executing query: ' . $this->strQuery);
}
}
/**
* loadTemplate() -- private method
*
* this method is invoked by the constructor and serves to load the class template file, assimilating it into
* the current instantiation.
*
* template loads are done on the schema-instantiation level, instead of the core, because of the changes in
* the template file(s) across various schemas.
*
* the method will load the class template and set the class member variables controlled/referenced by the
* template.
*
* successful loading of the template is determined by the return (boolean) value -- on error, a log message
* will be generated so it's up to the developer to check logs on fail-returns to see why their template
* file was not correctly assimilated.
*
* The template to be loaded is first derived in the constructor (post validation that the template file
* exists) and is pulled from the member variable (also set in the constructor) within this method.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return bool
*
* HISTORY:
* ========
* 06-30-17 mks original coding
*
*/
private function loadTemplate():bool
{
if ($this->trace) $this->logger->trace(STRING_ENT_METH . __METHOD__);
try {
/** @var gatTestMySQL $template */
$this->template = new $this->templateName;
} catch (Exception $e) {
$this->logger->warn($e->getMessage());
$this->state = STATE_FRAMEWORK_FAIL;
return (false);
}
if (!is_object($this->template)) {
$this->logger->warn(ERROR_FILE_404 . $this->templateName);
$this->setState(ERROR_FILE_404 . $this->templateName);
return (false);
}
if ($this->template->schema != TEMPLATE_DB_PDO) {
$this->logger->warn(ERROR_SCHEMA_MISMATCH . $this->template->schema . ERROR_STUB_EXPECTING . TEMPLATE_DB_PDO);
$this->setState(ERROR_SCHEMA_MISMATCH . $this->templateName);
return (false);
}
// transfer meta data info to current instantiation
$this->schema = $this->template->schema;
$this->collectionName = $this->template->collection;
$this->ext = $this->template->extension;
$this->useCache = $this->template->setCache;
$this->useDeletes = $this->template->setDeletes;
$this->useAuditing = $this->template->setAuditing;
$this->useJournaling = $this->template->setJournaling;
$this->allowUpdates = $this->template->setUpdates;
$this->useDetailedHistory = $this->template->setHistory;
$this->defaultStatus = $this->template->setDefaultStatus;
$this->searchStatus = $this->template->setSearchStatus;
$this->useLocking = $this->template->setLocking;
$this->useTimers = ($this->template->setTimers and gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_QUERY_TIMERS]);
$this->pKey = $this->template->setPKey;
$this->useToken = $this->template->setTokens;
$this->cacheExpiry = $this->template->cacheTimer;
if (isset($this->template->fields) and is_array($this->template->fields)) {
foreach ($this->template->fields as $key => $value) {
if ($key == DB_HISTORY) {
$this->fieldList[] = $key;
$this->fieldTypes[$key] = $value;
} else {
$this->fieldList[] = ($key . $this->ext);
$this->fieldTypes[($key . $this->ext)] = $value;
}
}
}
if (isset($this->template->indexes) and is_array($this->template->indexes)) {
foreach ($this->template->indexes as $key => $value) {
$this->indexes[] = ($key . $this->ext);
}
}
if (!is_null($this->template->cacheMap) and $this->useCache) {
foreach ($this->template->cacheMap as $key => $value) {
$this->cacheMap[($key . $this->ext)] = $value;
}
} elseif (!$this->useCache) {
$this->cacheMap = null;
if (!is_null($this->template->exposedFields)) {
$this->exposedFields = $this->template->exposedFields;
}
}
if (!is_null($this->template->uniqueIndexes)) $this->uniqueIndexes = $this->template->uniqueIndexes;
if (!is_null($this->template->compoundIndexes)) $this->compoundIndexes = $this->template->compoundIndexes;
if (!is_null($this->template->binFields)) {
foreach ($this->template->binFields as $key) {
$this->binaryFields[] = ($key . $this->ext);
}
}
if ($this->template->selfDestruct) {
unset($this->template);
}
return (true);
}
protected function _createRecord($_data)
{
}
protected function _fetchRecords($_dd, $_rd = null, $_co = true, $_skip = 0, $_limit = 0, $_sort = null)
{
}
protected function _updateRecord($_data)
{
}
protected function _deleteRecord($_data)
{
}
protected function _lockRecord()
{
}
protected function _releaseLock()
{
}
protected function _isLocked()
{
}
/**
* __destruct() -- public function
*
* class destructor
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-29-17 mks original coding
*
*/
public function __destruct()
{
// As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
//
// destructor is registered shut-down function in constructor -- so any recovery
// efforts should go in this method.
// chain to the parent destructor so the core abstraction can run its own cleanup.
parent::__destruct();
}
}


@@ -0,0 +1,193 @@
/**
* cacheByTokenList() -- private method
*
* This method requires a single input parameter -- that's an array of tokens in the following format:
*
* array (
* 0 =>
* array (
* 'token_tst' => '2DB9636A-C14D-F2C9-7CDA-E7808C1EA600',
* ),
* )
*
* This method is used from the update event -- when we've already completed the update successfully and the
* current class has been populated with the successful update-query results, which we wish to preserve.
*
 * The updated records, represented by the token list, have to be re-cached. So this method is going to exec
* a SELECT query to fetch the updated records for caching. This is a prepared query.
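 *
 * For the single-token example above, the generated prepared query looks roughly like this
 * (hypothetical view/table name and column extension):
 *
 *     SELECT * FROM vw_tst_table WHERE token_tst IN (?) AND status_tst != ?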
*
* Since we don't want to overwrite the results of the update query in the current class object, we're going to
* clone the object, execute the select query from that object, and transfer the results over to the original
* class before releasing the cloned object.
*
* Prior to said release, we're going to call the method to process the data members and cache the records and,
* on return, transfer the cache keys (if caching is enabled for the class) or the data.
*
* The method returns a boolean indicating success or failure for all of the operation.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_tList
* @return bool
*
* HISTORY:
* ========
* 10-31-17 mks CORE-586: original coding
*
*/
private function cacheByTokenList(array $_tList): bool
{
if (empty($_tList)) {
$this->eventMessages[] = ERROR_DATA_ARRAY_NOT_ARRAY . STRING_TOKEN;
return false;
}
// clone the current object so we don't overwrite any of the current class members & zero-out the important bits
$tObj = clone $this;
$tObj->queryVariables = null;
$tObj->strQuery = '';
$tObj->queryResults = null;
$tObj->count = 0;
$tObj->dbEvent = DB_EVENT_SELECT;
/*
* build the query to fetch the record based on the token list which looks like:
*
* array (
* 0 =>
* array (
* 'token_xxx' => '2DB9636A-C14D-F2C9-7CDA-E7808C1EA600',
* ),
* )
*/
$query = 'SELECT /* ' . basename(__FILE__) . COLON . __METHOD__ . AT . __LINE__ . ' */ ';
$query .= '* FROM ';
if (!isset($tObj->template->dbObjects[PDO_VIEWS][PDO_VIEW_BASIC . $tObj->collectionName])) {
$query .= $tObj->collectionName;
} else {
$query .= PDO_VIEW_BASIC . $tObj->collectionName;
}
$query .= ' ';
$query .= 'WHERE ' . STRING_TOKEN . $tObj->ext . ' IN (';
foreach ($_tList as $record) {
$query .= '?, ';
$tObj->queryVariables[] = $record[(STRING_TOKEN . $tObj->ext)];
}
$query = rtrim($query, ', ');
$query .= ') ';
if (!$tObj->useDeletes) {
$query .= 'AND status' . $tObj->ext . ' != ?';
$tObj->queryVariables[] = STATUS_DELETED;
}
$tObj->strQuery = $query;
try {
$tObj->executePreparedQuery();
if (!$tObj->status) {
$this->eventMessages = array_merge($this->eventMessages, $tObj->eventMessages);
return false;
}
$tObj->data = $tObj->queryResults;
if (!$tObj->returnFilteredData()) {
$this->eventMessages = array_merge($this->eventMessages, $tObj->eventMessages);
$this->eventMessages[] = ERROR_RFD_CORE_FAIL;
return false;
}
} catch (Throwable $t) {
$msg = ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
$this->eventMessages[] = $msg;
if (isset($this->logger) and $this->logger->available)
$this->logger->error($msg);
else
consoleLog($this->res, CON_ERROR, $msg);
$this->state = STATE_FRAMEWORK_WARNING;
return false;
}
// copy data from the clone to the original
$this->eventMessages = array_merge($this->eventMessages, $tObj->eventMessages);
$this->cacheKeys = $tObj->cacheKeys;
$this->data = $tObj->data;
$tObj->__destruct();
unset($tObj);
return true;
}
/**
* getCacheTokenListQuery() -- private method
*
* This method is called from the update and delete methods for when we need to generate a list of affected tokens
 * for these operations. We need to generate this list because the operation will modify the records and, if the
* records exist in cache, they should be removed.
*
* The method requires one input parameter which is the list of tokens we're going to build as a result of the
* query. As such, all of the query elements must have been built prior to invoking this method as those member
* elements are used to build this SELECT query...
*
* We'll execute the prepared select query and store the results in an array which is implicitly returned to the
* calling client.
*
 * The method itself returns a boolean to indicate success or failure in processing, since a null value
 * returned for the token list is permissible.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array|null $_tokenList
* @return boolean
*
* HISTORY:
* ========
* 10-26-17 mks CORE-586: original coding
*
*/
private function getCacheTokenListQuery(array &$_tokenList = null): bool
{
$rc = false;
$cq = 'SELECT /* ' . basename(__FILE__) . COLON . __METHOD__ . AT . __LINE__ . ' */ ';
$cq .= DB_TOKEN . $this->ext . ' ';
$cq .= 'FROM ' . $this->collectionName . ' ';
$cq .= 'WHERE ' . $this->where . ' ';
if (!is_null($this->queryOrderBy)) {
$cq .= 'ORDER BY ' . $this->queryOrderBy . ' ';
}
if (!is_null($this->queryLimit)) {
$cq .= 'LIMIT ' . $this->queryLimit;
}
try {
$this->dbEvent = DB_EVENT_NAMASTE_READ;
$this->strQuery = $cq;
$this->executePreparedQuery();
if ($this->status) {
$_tokenList = $this->queryResults;
if (empty($_tokenList)) $_tokenList = null;
$rc = true;
} else {
$this->eventMessages[] = ERROR_PDO_CQ_QUERY;
$this->state = STATE_DB_ERROR;
}
return $rc;
} catch (Throwable $t) {
$msg = ERROR_EXCEPTION . COLON . $this->strQuery;
consoleLog(RES_PDO, CON_ERROR, $msg);
$this->eventMessages[] = $msg;
$this->eventMessages[] = $t->getMessage();
if (isset($this->logger) and $this->logger->available) {
$this->logger->warn($msg);
$this->logger->warn($t->getMessage());
} else {
consoleLog($this->res, CON_ERROR, $msg);
consoleLog($this->res, CON_ERROR, $t->getMessage());
}
$this->status = false;
$this->state = STATE_DB_ERROR;
}
return false;
}


@@ -0,0 +1,204 @@
/**
* convertCacheMap() -- private static method
*
* This private method is the gateway/entry-point for cacheMapping on data payloads. The function has the following
* required input parameters:
*
* $_data -- this is the payload to be cacheMapped. This should be an indexed array of one, or more, assoc arrays
* $_dir -- string value indicating if the data is incoming (IN) or outbound (OUT)
* $_map -- this is the class-specific vector of cacheMapped settings pulled from the global cacheMap
* $_type -- this defines the data payload as either record-data or query-data
* $_errs -- this is a call-by-reference array that allows us to propagate error messages back up the stack
*
 * The function validates the contents of most of the input parameters, returning null if any param fails
 * validation, while also adding messages to the error stack and publishing a message to the error logger.
*
* Once validation is complete, we pass all of the input params to a second private function, allowing for
* recursion in that function, and hopefully get back an array (which is passed though back up to the calling
* client) that is successfully cacheMapped.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_data
* @param string $_dir
* @param array $_map
* @param string $_type
* @param array $_errs
* @return array|null
*
*
* HISTORY:
* ========
* 02-25-19 mks DB-116: original coding
*
*/
private static function convertCacheMap(array $_data, string $_dir, array $_map, string $_type, array &$_errs): ?array
{
// validate input param content -- input param type is implicitly validated by strong type decls
if ($_dir != IN and $_dir != OUT) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$msg = sprintf(ERROR_CACHE_DIRECTION, (string) $_dir);
$_errs[] = $msg;
static::$logger->data($hdr . $msg);
return null;
}
if ($_type != STRING_QUERY and $_type != STRING_DATA) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$msg = ERROR_CACHE_MAP_TYPE . $_type;
$_errs[] = $msg;
static::$logger->error($hdr . $msg);
return null;
}
if (!is_array($_data)) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$msg = $hdr . ERROR_DATA_ARRAY_NOT_ARRAY . STRING_DATA;
$_errs[] = $msg;
static::$logger->data($msg);
return null;
}
if (!is_array($_map)) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$msg = $hdr . ERROR_DATA_ARRAY_NOT_ARRAY . CACHE_MAP;
$_errs[] = $msg;
static::$logger->data($msg);
return null;
}
try {
return static::processCacheMap($_data, $_map, $_dir, $_type, $_errs);
} catch (TypeError $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$msg = $hdr . ERROR_TYPE_EXCEPTION;
$_errs[] = $msg;
static::$logger->warn($msg);
$msg = $hdr . $t->getMessage();
$_errs[] = $msg;
static::$logger->warn($msg);
return null;
}
}
/**
* processCacheMap() -- private static method with recursion
*
* This function is responsible for the cacheMapping for both incoming and outbound data payloads. It is a
 * stand-alone function because of its recursive nature -- when we encounter a sub-array within the payload,
* we must recursively call this method in order to process the sub-array (etc.).
*
* There are the following input parameters to this method, all of which are required:
*
* $_data -- this is the incoming/outgoing data payload. The invoking method has parsed, for example, the incoming
* data payload and, as an example, let's say there are three sub-arrays stored in the payload:
* STRING_QUERY_DATA, STRING_SORT_DATA and STRING_RETURN_DATA -- the invoking method will make a total of three
* calls to this method, one for each of the sub-arrays under BROKER_DATA. Note that $_data should be passed as
* an indexed array s.t. each tuple in the array is processed as a separate record.
*
* $_map -- this is the cacheMap for the targeted class. In other words, it is not the entire cacheMap but the
* named tuple for the current data class.
*
* $_dir -- this is a string value that may only be either IN or OUT (both are Namaste system constants). IN
* designates the payload as incoming while OUT designates the payload as outbound.
*
* $_type -- this is a string value, defined as either STRING_QUERY or STRING_DATA, and is verified in the
* the calling client. This value designates the type of payload to be mapped.
*
* $_es -- this is an array for the error-stack -- it's a call-by-reference parameter s.t. we can propagate any
* error messages back to the invoking client.
*
* The method returns an array which, under optimal conditions, returns a mirror of the incoming data payload
* save that the keys have been successfully cacheMapped.
*
 * Any field that fails cacheMapping (e.g.: not found) will be stored in the class static $badCacheFields and
* will be implicitly returned to the calling client for processing.
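 *
 * A minimal sketch of the incoming (IN) direction, with a hypothetical map and column name:
 *
 *     $_map[CACHE_MAP] = array('em' => 'email_usr');
 *     // an incoming column 'email_usr' is re-keyed via array_search():
 *     //     $data['em'] = $value;
 *     // a column absent from the map is parked in static::$badCacheFields instead.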
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* @param array $_data -- indexed array of data to be cacheMapped; may contain more than one record
* @param array $_map -- cacheMap for the current data class
* @param string $_dir -- indicates the direction (flow) of the data: either INcoming or OUTbound
* @param string $_type - indicates payload type: either DATA or QUERY (validated in calling client)
* @param array $_es -- call-by-reference array for returning error messages to the calling client
* @return array|null -- returns the cacheMapped array or a null on error
*
*
* HISTORY:
* ========
* 02-25-19 mks DB-116: original coding
*
*/
private static function processCacheMap(array $_data, array $_map, string $_dir, string $_type, array &$_es): ?array
{
$data = null; // container to hold the cacheMapped record
$records = null; // container to hold all the cacheMapped records
// todo -- test for subCollection array existing in the subC setting... which means you have to add it to the cacheMap data
// this is where the cache-mapping magic happens...
foreach ($_data as $record => $recordData) {
foreach ($recordData as $column => &$value) {
if ($_dir == IN) { // todo -- map off the type...
// we're cache-mapping an incoming payload
if (in_array($column, $_map[CACHE_MAP])) {
if (is_array($value) and $_map[CACHE_SUBC] != STRING_NOT_DEFINED and array_key_exists(array_search($column, $_map[CACHE_MAP]), $_map[CACHE_SUBC])) {
// note: not checking for the case where $value is an array but $newKey is not defined in the
// cached SUBC definition for the class. This loose definition for sub-collections
// permits the user to store sub-arrays without inspection/validation/mapping.
try {
// we have to process $value recursively as a sub-array
$data[array_search($column, $_map[CACHE_MAP])] = static::processCacheMap($value, $_map, $_dir, $_type, $_es);
} catch (TypeError $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$msg = $hdr . ERROR_TYPE_EXCEPTION;
$_es[] = $msg;
static::$logger->warn($msg);
$msg = $hdr . $t->getMessage();
$_es[] = $msg;
static::$logger->warn($msg);
return null;
}
} else {
$data[array_search($column, $_map[CACHE_MAP])] = $value;
}
} else {
static::$badCacheFields[$column] = $value;
}
} else {
// we're cache-mapping an outbound payload so remove the class extension from the column name
$newKey = str_replace($_map[CACHE_EXT], '', $column);
if (array_key_exists($newKey, $_map[CACHE_MAP])) {
// note: not checking for the case where $value is an array but $newKey is not defined in the
// cached SUBC definition for the class. This loose definition for sub-collections
// permits the user to store sub-arrays without inspection/validation/mapping.
if (is_array($value) and $_map[CACHE_SUBC] != STRING_NOT_DEFINED and array_key_exists($newKey, $_map[CACHE_SUBC])) {
try {
// we have to process $value recursively as a sub-array
$data[$_map[CACHE_MAP][$newKey]] = static::processCacheMap($value, $_map, $_dir, $_type,$_es);
} catch (TypeError $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$msg = $hdr . ERROR_TYPE_EXCEPTION;
$_es[] = $msg;
static::$logger->warn($msg);
$msg = $hdr . $t->getMessage();
$_es[] = $msg;
static::$logger->warn($msg);
return null;
}
} else {
$data[$_map[CACHE_MAP][$newKey]] = $value;
}
} else {
static::$badCacheFields[$newKey] = $value;
}
}
}
}
if (!empty($data)) {
$records[] = $data;
unset($data);
} // todo: else?
return $records;
}



@@ -0,0 +1,366 @@
<?php
/**
* This is a wrapper class for the AT daemon. It was plagiarized from:
*
* https://github.com/treffynnon/PHP-at-Job-Queue-Wrapper/blob/master/lib/Treffynnon/At/Wrapper.php
*
 * Because the original (author's) version uses exceptions for error reporting, I've re-tooled the original code:
 * eliminating the exception processing, making the output logging align with Namaste's logging formats, and closing
* access publicly to all methods except the intended public function.
*
* @author treffynnon@php.net Simon Holywell
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 06-07-17 mks port from original source
* 08-17-20 mks DB-168: code review
*
*/
class gacATWrapper
{
protected static string $binary = 'at'; // path to the AT binary
protected static string $addRegex = '/^job (\d+) at ([\w\d- :]+)$/'; // regexp to parse `at` job-add output
protected static array $addMap = array( // regexp mapping -> descriptive names
1 => 'job_number',
2 => 'date',
);
// regexp for fetching queue info
protected static string $queueRegex = '/^(\d+)\s+([\w\d- :]+) (\w) ([\w-]+)$/';
// capture-group mapping -> descriptive names for queue data
protected static array $queueMap = array(
1 => 'job_number',
2 => 'date',
3 => 'queue',
4 => 'user',
);
protected static string $res = 'CRON: ';
protected static string $pipeTo = '2>&1'; // redirects STDERR to STDOUT (redundant)
protected static array $atSwitches = array( // supported AT options list
'queue' => '-q',
'list_queue' => '-l',
'file' => '-f',
'remove' => '-d',
);
/**
* cmd() -- public static function
*
 * @uses self::addCommand
*
* @param $command
* @param $time
* @param null $queue
* @return mixed
*/
static public function cmd($command, $time, $queue = null)
{
return self::addCommand($command, $time, $queue);
}
/**
* @uses self::addFile
*
* @param $file
* @param $time
* @param null $queue
* @return mixed
*/
static public function file($file, $time, $queue = null)
{
return self::addFile($file, $time, $queue);
}
/**
* @uses self::listQueue
*
* @param null $queue
* @return mixed
*/
static public function lq($queue = null)
{
return self::listQueue($queue);
}
/**
* Add a job to the `at` queue
* @param string $command
* @param string $time see `man at`
* @param string $queue a-zA-Z see `man at`
* @return mixed
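 *
 * e.g. (hypothetical arguments) addCommand('/opt/backup.sh', 'now + 5 minutes', 'b') assembles:
 *
 *     echo '/opt/backup.sh' | at -q b now + 5 minutes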
*/
static private function addCommand($command, $time, $queue = null)
{
$command = self::escape($command);
$time = self::escape($time);
$exec_string = "echo '$command' | " . self::$binary;
if(null !== $queue) {
$exec_string .= ' ' . self::$atSwitches['queue'] . " {$queue[0]}";
}
$exec_string .= " $time ";
return self::addJob($exec_string);
}
/**
* Add a file job to the `at` queue
* @param string $file Full path to the file to be executed
* @param string $time see `man at`
* @param string $queue a-zA-Z see `man at`
* @return mixed
*/
static private function addFile($file, $time, $queue = null)
{
$file = self::escape($file);
$time = self::escape($time);
$exec_string = self::$binary . ' ' . self::$atSwitches['file'] . " $file";
if(null !== $queue) {
$exec_string .= ' ' . self::$atSwitches['queue'] . " {$queue[0]}";
}
$exec_string .= " $time ";
return self::addJob($exec_string);
}
/**
* Return a list of the jobs currently in the queue. If you do not specify
* a queue to look at then it will return all jobs in all queues.
* @param string $queue
 * @return array An array of Job objects
*/
static private function listQueue($queue = null)
{
$exec_string = self::$binary . ' ' . self::$atSwitches['list_queue'];
if(null !== $queue) {
$exec_string .= ' ' . self::$atSwitches['queue'] . " {$queue[0]}";
}
$result = self::exec($exec_string);
return self::transform($result, 'queue');
}
/**
* Remove a job by job number
* @param int $job_number
* @return Boolean
*/
static public function removeJob($job_number)
{
$rc = true;
if (empty($job_number)) {
$hdr = sprintf(INFO_LOC, __METHOD__, __LINE__);
consoleLog(static::$res, CON_ERROR, $hdr . RES_ATW . ERROR_DATA_INPUT_EMPTY . STRING_JOB_NUMBER);
return(false);
}
$job_number = self::escape($job_number);
// $exec_string = self::$binary . ' ' . self::$atSwitches['remove'] . " $job_number";
$exec_string = 'atrm ' . $job_number;
$output = self::exec($exec_string);
if(count($output)) {
$rc = false;
foreach ($output as $errorMessage) {
$hdr = sprintf(INFO_LOC, __METHOD__, __LINE__);
consoleLog(static::$res, CON_ERROR, $hdr . $errorMessage);
}
echo getDateTime() . CON_ERROR . RES_ATW . $output[0] . PHP_EOL;
}
return($rc);
}
/**
 * Add a job to the `at` queue and return the confirmation line from its output
* @param string $job_exec_string
* @return mixed
*/
static private function addJob($job_exec_string)
{
$output = self::exec($job_exec_string);
return (count($output) == 1) ? $output[0] : $output[1];
// $job = self::transform($output);
// if(!count($job)) {
// $logger = new gacErrorLogger();
// $logger->warn('failed to add job to the queue. Exec command: ' . $job_exec_string);
// $logger->__destruct();
// }
// return reset($job);
}
/**
* Transform the output of `at` into an array of objects
* @param array $output_array
* @param string $type Is this an add or list we are transforming?
* @return array An array of Job objects
*/
static private function transform(array $output_array, string $type = 'add'):array
{
$jobs = array();
// Get the appropriate regex class property for the type
// of `at` switch/command being run at this point in time.
$regex = $type . 'Regex';
$regex = self::$$regex;
$map = $type .'Map';
$map = self::$$map;
foreach($output_array as $line) {
$matches = array();
@preg_match($regex, $line, $matches);
if(count($matches) > count($map)) {
$jobs[] = self::mapJob($matches, $map);
}
}
return $jobs;
}
/**
* Map the details matched with the regex to descriptively named properties
* in a new Job object
* @param array $details
* @param array $map
* @return Job
*/
static private function mapJob($details, $map)
{
$Job = new Job();
foreach($details as $key => $detail) {
if(isset($map[$key])) {
$Job->{$map[$key]} = $detail; // braces required: PHP 7 parses $Job->$map[$key] as ($Job->$map)[$key]
}
}
return $Job;
}
/**
* Escape a string that will be passed to exec
* @param string $string
* @return string
*/
static private function escape($string)
{
return escapeshellcmd($string); // NOTE: for a single argument, escapeshellarg() is the stricter choice
}
/**
* Run the command via exec() and return each line of the output as an
* array
* @param string $string
* @return array Each line of output is an element in the array
*/
static private function exec($string)
{
$output = array();
$string .= ' ' . self::$pipeTo;
exec($string, $output);
return $output;
}
}
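The public facade above can be exercised as in the hypothetical sketch below. It assumes the wrapper class is named gacATWrapper (per the call in Job::remove() further down) and that the host has the `at` daemon installed; it is not runnable without them.

```php
<?php
// Hypothetical usage of the wrapper facade (class name gacATWrapper assumed,
// per Job::remove() below). Requires the `at` daemon on the host.
$job = gacATWrapper::file('/srv/scripts/nightly.php', 'now + 5 minutes', 'b');

// List every pending job and print its number and scheduled date.
foreach (gacATWrapper::lq() as $pending) {
    echo $pending->job_number, ' -> ', $pending->date()->format('d-m-Y H:i'), PHP_EOL;
}
```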
/**
 * A simple class for storing a job's details and some methods for manipulating
* it. A job model if you will.
*
* @author Simon Holywell <treffynnon@php.net>
* @version 16.11.2010
*/
class Job {
/**
* Data store for the job details
* @var array
*/
public array $data = array();
protected string $res = 'CRON: ';
protected /** @noinspection PhpMissingFieldTypeInspection */ $date;
/**
* Magic method to set a value in the $data
* property of the class
* @param string $name
* @param mixed $value
*/
public function __set($name, $value)
{
$this->data[$name] = $value;
}
/**
* Magic method to get a value in the $data property
* of the class
* @param string $name
* @return mixed
*/
public function __get($name)
{
if (isset($this->data[$name])) {
return $this->data[$name];
}
$logger = new gacErrorLogger();
$trace = debug_backtrace();
$logger->warn("Undefined property via __get(): $name in {$trace[0]['file']} on line {$trace[0]['line']}");
$logger->__destruct();
return(false);
}
/**
* Magic method to check for the existence of an
* index in the $data property of the class
* @param string $name
* @return bool
*/
public function __isset($name)
{
return isset($this->data[$name]);
}
/**
* Magic method to unset an index in the $data property
* of the class
* @param string $name
*/
public function __unset($name)
{
unset($this->data[$name]);
}
/**
* Remove this job from the queue
*/
public function remove() {
if(isset($this->job_number)) {
gacATWrapper::removeJob((int)$this->job_number);
}
}
/**
* Get a DateTime object for date and time extracted from
* the output of `at`
* @example echo $job->date()->format('d-m-Y');
* @uses DateTime
 * @return DateTime|null A PHP DateTime object, or null if the stored date cannot be parsed
*/
public function date(): ?DateTime
{
try {
return new DateTime($this->date);
} catch (Exception | TypeError $t) {
$hdr = sprintf(INFO_LOC, __METHOD__, __LINE__);
consoleLog($this->res, CON_ERROR, $hdr . $t->getMessage());
return null;
}
}
}
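The transform() pipeline earlier in this file can be illustrated standalone. The regex and map below are illustrative guesses at a typical `atq` output line, not the class's actual `$listRegex`/`$listMap` properties (which are defined outside this excerpt). Note how the full-match element at `$matches[0]` makes a successful match one element longer than the map, which is what the `count($matches) > count($map)` guard relies on.

```php
<?php
// Illustrative parse of one `atq` output line; the real $listRegex/$listMap
// class properties are not shown in this excerpt, so these are assumptions.
$listRegex = '/^(\d+)\s+(.+?\d{4})\s+(\w)\s+(\w+)$/';
$listMap   = [1 => 'job_number', 2 => 'date', 3 => 'queue', 4 => 'user'];

$line    = "1234\tMon Nov 22 10:00:00 2010\ta\tsimon";
$matches = [];
preg_match($listRegex, $line, $matches);

$job = [];
if (count($matches) > count($listMap)) {   // full match + 4 groups > 4 map keys
    foreach ($matches as $key => $detail) {
        if (isset($listMap[$key])) {
            $job[$listMap[$key]] = $detail;
        }
    }
}
// $job now maps 'job_number' => '1234', 'queue' => 'a', 'user' => 'simon', ...
```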


@@ -0,0 +1,253 @@
<?php
/**
* gacBrokerClient -- class definition
*
* this is the class for declaring a brokerClient for use in testing, or within the framework when we need to publish
* an event to another queue (that's not a logging event).
*
* This class simply abstracts the RabbitMQ processes so that you don't have to re-write all the RMQ code every
* time you want to publish a message to the queues.
*
* IMPORTANT NOTE:
* ---------------
 * Whenever you add a new queue to the pantheon, you'll need to update the constructor, adding the queue name
 * to the $validQueues member and to the switch-case statement that assigns the correct resource.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-15-17 mks original coding
 * 02-08-18 mks _INF-139: updated for migrations broker, PHPDoc variable casting for AMQP members
* 07-31-18 mks CORE-774: PHP7.2 exception handling
* 09-18-19 mks DB-136: improved error messaging and exception handling
*
*/
//use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Exception\AMQPRuntimeException;
use PhpAmqpLib\Exception\AMQPTimeoutException;
use PhpAmqpLib\Message\AMQPMessage;
class gacBrokerClient {
private ?object $rabbitConnection = null;
private ?AMQPChannel $rabbitChannel = null;
private string $rabbitCallbackQueue;
private ?string $rabbitResponse;
private string $rabbitCorrelationID;
private string $queueName;
private array $validQueues;
private gacErrorLogger $logger;
private string $res = 'CLBR: ';
public bool $status;
/**
* __construct() -- public method
*
* the constructor instantiates the class and establishes a connection to the RMQ broker.
*
* the constructor takes one input parameter:
*
* - queueName -- which queue does this instantiation wish to connect to
*
 * as of this writing, there are four queuing resources supported: broker, admin, segundo, and tercero. If more
 * are added, then they'll need to be defined (as constants) and the resource evaluation code updated.
*
* method returns a boolean indicating whether or not the resource management was successful and attempts to
* provide diagnostics via logging, cli output, or via the status member variable.
*
* queue_declare arguments:
* ------------------------
* Queue Name: this is an arbitrary name, will be used to identify the queue
* Passive: if set to true, the server will only check if the queue can be created, false will actually attempt to create the queue.
* Durable: Typically, if the server stops or crashes, all queues and messages are lost... unless we declare the queue durable, in which case the queue will persist if the server is restarted.
* Exclusive: If true, the queue can only be used by the connection that created it.
* Autodelete: if true, the queue will be deleted once it has no messages and there are no subscribers connected
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $_queueName
* @param $_tag
*
* HISTORY:
* ========
* 06-15-17 mks initial coding
* 02-08-18 mks _INF-139: support for the migration broker, fixed return statements, console log
* 04-11-18 mks _INF-188: warehousing broker support
* 01-29-20 mks DB-144: tercero support
* 12-10-20 mks DB-180: segundo cons broker support
*
*/
public function __construct(string $_queueName, $_tag = 'default')
{
register_shutdown_function(array($this, '__destruct'));
$this->status = false;
try {
$this->logger = new gacErrorLogger();
} catch (TypeError $t) {
consoleLog($this->res, CON_ERROR, ERROR_TYPE_EXCEPTION . $t->getMessage());
}
$this->queueName = $_queueName;
$this->validQueues = [
BROKER_QUEUE_R,
BROKER_QUEUE_W,
BROKER_QUEUE_AI,
BROKER_QUEUE_AO,
BROKER_QUEUE_M,
BROKER_QUEUE_WH,
BROKER_QUEUE_U,
BROKER_QUEUE_S,
BROKER_QUEUE_C
];
if (!in_array($this->queueName, $this->validQueues)) {
$this->logger->warn(ERROR_INVALID_QUEUE_NAME . $this->queueName);
} else {
switch ($this->queueName) {
case BROKER_QUEUE_R :
case BROKER_QUEUE_W :
case BROKER_QUEUE_M :
$resource = RESOURCE_BROKER;
break;
case BROKER_QUEUE_AI :
case BROKER_QUEUE_AO :
$resource = RESOURCE_ADMIN;
break;
case BROKER_QUEUE_WH :
case BROKER_QUEUE_C :
$resource = RESOURCE_SEGUNDO;
break;
case BROKER_QUEUE_U :
case BROKER_QUEUE_S :
$resource = RESOURCE_TERCERO;
break;
default :
$msg = ERROR_RESOURCE_404 . $_queueName;
$this->logger->info($msg);
consoleLog($this->res, CON_SYSTEM, $msg);
return;
}
try {
$this->rabbitConnection = gasResourceManager::fetchResource($resource);
if (is_null($this->rabbitConnection)) return;
$this->rabbitChannel = $this->rabbitConnection->channel();
$label = uniqid('gacBrokerClient<' . $_tag . '>:');
list($this->rabbitCallbackQueue, ,) = $this->rabbitChannel->queue_declare($label, false, false, false, true); // was: f, f, f, t
$this->rabbitChannel->basic_consume($this->rabbitCallbackQueue, '', false, false, false, false, array($this, BROKER_CLIENT_RESPONSE));
$this->status = true;
} catch (AMQPRuntimeException | AMQPTimeoutException | Throwable $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
consoleLog($this->res, CON_ERROR, $hdr . ERROR_THROWABLE_EXCEPTION);
consoleLog($this->res, CON_ERROR, $hdr . $t->getMessage());
}
}
return;
}
/**
* @noinspection PhpUnused
* client_response -- public method
*
* this method checks to see if the current response, based on the correlation (request) ID, is the one it's
* waiting for from the remote (vault) service.
*
* When it receives the awaited response, it stores the response into a member variable and exits.
*
*
 * @author mike@givingassistant.org
* @version 1.0
 * @param AMQPMessage $_response the reply message delivered on the callback queue
*
*
* HISTORY:
* ========
* 06-15-17 mks original coding
* 09-18-19 mks DB-136: exception wrapped this code
*
*/
public function client_response(AMQPMessage $_response)
{
try {
if ($_response->get(BROKER_CORRELATION_ID) == $this->rabbitCorrelationID) {
$this->rabbitResponse = $_response->body;
}
} catch (AMQPTimeoutException | AMQPRuntimeException | Throwable $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
consoleLog($this->res, CON_ERROR, $hdr . ERROR_EXCEPTION);
consoleLog($this->res, CON_ERROR, $hdr . $t->getMessage());
}
}
/**
* call() -- public method
*
* This method is invoked outside of the class and is the entry point for publishing a message request to the
* broker. It creates a new AMQP message and publishes it to the queue (defined in the constructor), and then
* blocks-and-waits for a response from the remote (vault) service.
*
* Publishing a message is exception trapped and will generate a log message at the warn level if tripped.
*
* The "raw" response is returned directly to the calling client and will be processed at that level.
*
* @author mike@givingassistant.org
* @version 1.0
*
 * @param mixed $_data
 * @return string|null the raw broker response, or null if no response was received
*
* HISTORY:
* ========
* 06-15-17 mks original coding
* 05-31-18 mks CORE-1011: update for new XML broker services configuration
*
*/
public function call($_data)
{
$this->rabbitResponse = null;
$this->rabbitCorrelationID = uniqid();
$rabbitMessage = new AMQPMessage((string)$_data, [BROKER_CORRELATION_ID => $this->rabbitCorrelationID, BROKER_REPLY_TO => $this->rabbitCallbackQueue]);
try {
$this->rabbitChannel->basic_publish($rabbitMessage, '', (gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG] . $this->queueName));
while (!$this->rabbitResponse) {
$this->rabbitChannel->wait();
}
} catch (AMQPTimeoutException | AMQPRuntimeException | Throwable $e) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$this->logger->fatal($hdr . ERROR_BROKER_EXCEPTION_TIMEOUT);
$this->logger->fatal($hdr . $e->getMessage());
consoleLog('_BTC: ', CON_ERROR, $hdr . $e->getMessage());
}
return ($this->rabbitResponse);
}
public function __destruct()
{
// As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
//
// destructor is registered shut-down function in constructor -- so any recovery
// efforts should go in this method.
try {
if (!is_null($this->rabbitChannel))
$this->rabbitChannel->close();
if (!is_null($this->rabbitConnection))
$this->rabbitConnection->close();
} catch (Throwable $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
consoleLog($this->res, CON_ERROR, $hdr . ERROR_THROWABLE_EXCEPTION);
consoleLog($this->res, CON_ERROR, $hdr. $t->getMessage());
}
}
}
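The constructor/call()/client_response() trio above implements the classic RPC-over-AMQP pattern. Stripped of the framework's constants, resource manager, and logging, the same pattern looks roughly like this using php-amqplib alone. The host, credentials, and the 'rpc_queue' name are placeholders, and a live RabbitMQ broker is required to run it; this is a sketch of the pattern, not the framework's actual client.

```php
<?php
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// Placeholder connection details -- substitute your broker's.
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel    = $connection->channel();

// Exclusive, server-named callback queue for replies.
[$callbackQueue, ,] = $channel->queue_declare('', false, false, true, false);

$corrId   = uniqid();
$response = null;
$channel->basic_consume($callbackQueue, '', false, true, false, false,
    function (AMQPMessage $msg) use (&$response, $corrId) {
        // Only accept the reply that matches our request's correlation ID.
        if ($msg->get('correlation_id') === $corrId) {
            $response = $msg->body;
        }
    });

$request = new AMQPMessage('{"event":"ping"}', [
    'correlation_id' => $corrId,
    'reply_to'       => $callbackQueue,
]);
$channel->basic_publish($request, '', 'rpc_queue');   // placeholder queue name

// Block until the matching reply arrives (a production client should pass
// a timeout to wait(), unlike this minimal sketch).
while ($response === null) {
    $channel->wait();
}
$channel->close();
$connection->close();
```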


@@ -0,0 +1,354 @@
<?php
/**
* Class gacBrokerHelper
*
* There are certain events that are duplicated across the different services. These events, at the broker levels, had
* their processing code/logic pulled and moved to this helper class.
*
* BrokerHelper is a vector for eliminating redundant code across the broker services.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 07-17-20 mks DB-156: original coding
*
*/
class gacBrokerHelper
{
private int $queryRecordLimit;
private string $res = 'BH : ';
public bool $status;
public string $state;
private ?gacErrorLogger $logger;
public function __construct()
{
$this->logger = new gacErrorLogger();
$this->queryRecordLimit = intval(gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_QUERY_RECORD_LIMIT]);
$this->state = STATE_INITIALIZED;
$this->status = true;
}
/**
* create() -- public method
*
* This method was taken from the write-broker's create event. This method handles the class method invocations
* for creating a new class record. There are three input parameters to this method:
*
* $_request -- this is the broker request containing all three parts (event, data and meta-data)
* $_aryRetData -- call-by-reference object which is a broker's return payload array
* $_msg -- call-by-reference string which is the generated console message
*
* This method calculates the query record limit (one of the XML configurable parameters) and returns immediately
* if the number of submitted records exceeds the XML-stated limitation.
*
* Next, we instantiate a factory object based on the current meta-data payload and we'll return immediately if we
* were unable to instantiate a factory class successfully.
*
* We copy off the widget (data class object) and invoke the create() method. On successful return, we cache-map
* the outbound payload and build the aryRetData container based on those results.
*
* The method returns a boolean indicating whether or not the record(s) were successfully created.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_request
* @param array|null $_aryRetData
* @param string $_msg
* @return bool|null
*
*
* HISTORY:
* ========
* 07-17-20 mks DB-156: original coding completed
*
*/
public function create(array $_request, ?array &$_aryRetData, string &$_msg):?bool
{
$rc = false;
$errors = [];
if (count($_request[BROKER_DATA]) > $this->queryRecordLimit) {
$_msg = ERROR_RECORD_LIMIT_EXCEEDED . $this->queryRecordLimit;
$this->logger->error($_msg);
$_aryRetData = buildReturnPayload([false, STATE_DATA_ERROR, $_msg, null]);
return $rc;
}
try {
/** @var gacMongoDB $widget */
if (is_null($widget = grabWidget($_request[BROKER_META_DATA], '', $errors))) {
foreach ($errors as $error)
$this->logger->error($error);
$_aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, ERROR_FACTORY_LOAD_BROKER . BROKER_REQUEST_CREATE, null]);
return $rc;
}
$widget->_createRecord($_request[BROKER_DATA]);
if ($widget->status) {
if (gasCache::mapOutboundPayload($widget, $errors)) {
$_aryRetData = buildReturnPayload([true, STATE_SUCCESS, $widget->queryResults, $widget->getCK()]);
$rc = true;
} else {
$_aryRetData = buildReturnPayload([false, $widget->state, $widget->eventMessages, null]);
$_msg = ERROR_CACHE_MAP_FAIL . ' tercero payload';
$this->logger->warn($_msg);
consoleLog($this->res, CON_ERROR, $_msg);
}
} else {
$widget->eventMessages[] = FAIL_EVENT . BROKER_REQUEST_CREATE;
$_aryRetData = buildReturnPayload([FALSE, $widget->state, $widget->eventMessages, null]);
}
if (is_object($widget)) $widget->__destruct();
unset($widget);
} catch (TypeError | Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $errors, true);
$_aryRetData = buildReturnPayload([false, STATE_FAIL, ERROR_EXCEPTION, null]);
}
return $rc;
}
/**
* fetch() -- public function
*
* The fetch method was culled from the read broker event processing for the same event. It allows us to access
* the fetch code from either the appServer or Tercero brokers.
*
* There are three input parameters to this method:
*
* $_request -- this is the array of event data submitted (Request, Data, Meta)
* $_aryRetData -- call-by-reference parameter which is the array returned to the calling client
* $_msg -- call by reference parameter that carries the console log message back to the calling client
*
* The method returns a boolean variable indicating if the fetch completed successfully or not. The data returned
* in the fetch operation is implicitly returned via the method's input parameters.
*
* The method instantiates a factory object which builds the schema-widget which is then used to execute
* the request query to fetch the data - the results of which are bundled-up and returned back to the
* calling client.
*
* In this implementation, I've removed support for the remoteFetchRequest() method call because validateMetaData()
* is now handling the tercero re-directs.
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* @param array $_request
* @param array|null $_aryRetData
* @param string|null $_msg
* @return bool
*
*
* HISTORY:
* ========
* 08-03-20 mks DB-157: original coding completed
*
*/
public function fetch(array $_request, ?array &$_aryRetData = null, ?string &$_msg = null):bool
{
$errors = array();
$rc = false;
try {
            // can't instantiate remote-service objects in production, so we'll inject a skip
            // directive for the env-check... todo -- qualify this!
/** @var gacMongoDB $objClass */
if (is_null($objClass = grabWidget($_request[BROKER_META_DATA], '', $errors))) {
foreach ($errors as $error)
$this->logger->error($error);
                $_aryRetData = buildReturnPayload([false, STATE_DATA_ERROR, ERROR_FACTORY_LOAD_BROKER . BROKER_REQUEST_FETCH, null]);
} else {
// CORE-1013 ---------------------------------------------------------------
$objClass->_fetchRecords($_request[BROKER_DATA]);
if ($objClass->status) {
$rc = true;
$_msg = SUCCESS_EVENT . BROKER_REQUEST_FETCH;
$queryMeta = [
STRING_REC_COUNT_RET => $objClass->recordsReturned,
STRING_REC_COUNT_QUERY => $objClass->recordsInQuery,
STRING_REC_COUNT_TOT => $objClass->recordsInCollection
];
if ($objClass->state == STATE_NOT_FOUND and $objClass->count == 0) {
$retData = [STRING_QUERY_RESULTS => null, STRING_QUERY_DATA => $queryMeta];
} else {
// cacheMapping call
if (!gasCache::mapOutboundPayload($objClass, $errors)) {
$queryResults = $objClass->getData();
} else {
// cache mapping succeeded - return the cache key
$queryResults = $objClass->getCK();
}
$retData = [STRING_QUERY_RESULTS => $queryResults, STRING_QUERY_DATA => $queryMeta];
}
$_aryRetData = buildReturnPayload([true, $objClass->state, $objClass->eventMessages, $retData]);
} else {
$_msg = FAIL_EVENT . BROKER_REQUEST_FETCH;
$_aryRetData = buildReturnPayload([false, $objClass->state, $objClass->eventMessages, null]);
}
if (is_object($objClass)) $objClass->__destruct();
unset($objClass);
}
} catch (Throwable | TypeError $t) {
            $hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $errors, true);
$_aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, ERROR_EXCEPTION, null]);
}
return $rc;
}
/**
* update() -- public function
*
* This function was populated using code from the write-broker update event. The code was moved to the BH class
* so that it could be accessed by either the appServer:wBroker or tercero:userBroker brokers to process an
* update request.
*
* The function requires three input parameters:
*
* $_request -- this is the request payload as received by the broker, post-validation processing
* $_aryRetData -- this is a call-by-reference parameter that will contain the return payload that is sent back
* to the calling client on completion
* $_msg -- this is a string, a call-by-reference parameter, that will contain the event message for the
* console log.
*
* The method itself returns a boolean value to indicate if processing successfully completed or not.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_request
* @param array|null $_aryRetData
* @param string|null $_msg
* @return bool
*
*
* HISTORY:
* ========
* 08-04-20 mks DB-157: original coding happy birthday dad!
*
*/
public function update(array $_request, ?array &$_aryRetData = null, ?string &$_msg = null):bool
{
$rc = false;
$errors = [];
$objEvent = new gacFactory($_request[BROKER_META_DATA], FACTORY_EVENT_NEW_CLASS, '', $errors);
if (!$objEvent->status) {
            $msg = ERROR_FACTORY_LOAD_BROKER . BROKER_REQUEST_UPDATE;
$_msg = $msg;
$errors[] = $msg;
$_aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, $errors, null]);
$this->logger->fatal($msg);
} else {
/** @var gacMongoDB | gacPDO $objClass */
$objClass = $objEvent->widget;
if ($objClass->schema == TEMPLATE_DB_MONGO) {
                if (isset($_request[BROKER_META_DATA][META_LIMIT]) and $_request[BROKER_META_DATA][META_LIMIT] !== 1)
                    $_request[BROKER_DATA][STRING_QUERY_OPTIONS][STRING_MULTI] = true;
                else
                    $_request[BROKER_DATA][STRING_QUERY_OPTIONS][STRING_MULTI] = false;
}
$objClass->_updateRecord($_request[BROKER_DATA]);
if ($objClass->status) {
$rc = true;
$_msg = SUCCESS_EVENT . BROKER_REQUEST_UPDATE;
try {
if ($objClass->state == STATE_NOT_FOUND) {
$_aryRetData = buildReturnPayload([false, STATE_NOT_FOUND, $objClass->eventMessages, null]);
} else {
$queryResults = (!gasCache::mapOutboundPayload($objClass, $errors)) ? $objClass->getData() : $objClass->getCK();
$_aryRetData = buildReturnPayload([true, STATE_SUCCESS, $objClass->queryResults, $queryResults]);
}
} catch (TypeError | Throwable $t) {
                    $hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = ERROR_EXCEPTION;
$this->logger->error($hdr . $msg);
$this->logger->error($t->getMessage());
$_aryRetData = buildReturnPayload([false, STATE_FAIL, $msg, null]);
$_msg = FAIL_EVENT . $hdr . $t->getMessage();
}
} else {
$_msg = FAIL_EVENT . BROKER_REQUEST_UPDATE;
$_aryRetData = buildReturnPayload([false, $objClass->state, $objClass->eventMessages, null]);
}
if (is_object($objClass)) $objClass->__destruct();
unset($objClass);
}
if (is_object($objEvent)) $objEvent->__destruct();
unset($objEvent);
return $rc;
}
/**
* delete() -- public method
*
* This method is the delete block formerly located in the write-broker, moved to the broker helper so that the
* code is equally accessible by either appServer:wBroker or tercero:uBroker brokers.
*
* There are three input parameters to this method:
*
* $_request -- this is an array containing the processed data payload from the broker event request
* $_aryRetData -- call-by-reference array that will contain the return payload to be sent back to the client
* $_msg -- call-by-reference string that will contain the console log message from processing
*
* The method returns a boolean indicating if the routine successfully processed to completion or not.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_request
* @param array|null $_aryRetData
* @param string|null $_msg
* @return bool
*
*
* HISTORY:
* ========
* 08-05-20 mks DB-157: original coding completed
*
*/
public function delete(array $_request, ?array &$_aryRetData = null, ?string &$_msg = null): bool
{
$rc = false;
$errors = [];
try {
/** @var gacMongoDB $objClass */
if (is_null($objClass = grabWidget($_request[BROKER_META_DATA], '', $errors))) {
foreach ($errors as $error)
$this->logger->error($error);
                $_aryRetData = buildReturnPayload([false, STATE_FRAMEWORK_FAIL, ERROR_FACTORY_LOAD_BROKER . BROKER_REQUEST_DELETE, null]);
} else {
$objClass->_deleteRecord($_request[BROKER_DATA]);
if ($objClass->status and $objClass->state != STATE_NOT_FOUND) {
$_msg = SUCCESS_EVENT . BROKER_REQUEST_DELETE;
$rc = true;
$_aryRetData = buildReturnPayload([true, STATE_SUCCESS, $objClass->eventMessages, [STRING_RECS_DELETED => $objClass->rowsAffected]]);
} elseif ($objClass->status and $objClass->state == STATE_NOT_FOUND) {
$_msg = FAIL_EVENT . BROKER_REQUEST_DELETE;
$_aryRetData = buildReturnPayload([true, STATE_NOT_FOUND, $objClass->eventMessages, null]);
} else {
$_msg = FAIL_EVENT . BROKER_REQUEST_DELETE;
$_aryRetData = buildReturnPayload([false, $objClass->state, $objClass->eventMessages, null]);
}
if (is_object($objClass)) $objClass->__destruct();
unset($objClass);
}
} catch (Throwable | TypeError $t) {
            $hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $errors, true);
$_aryRetData = buildReturnPayload([false, STATE_FAIL, ERROR_EXCEPTION, null]);
}
return $rc;
}
}
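Every branch above funnels its result through buildReturnPayload() with the same four-element tuple: [status, state, messages, data]. The helper itself is defined outside this excerpt, so the sketch below is an assumed minimal shape (including the key names), shown only to make the convention concrete.

```php
<?php
// Assumed minimal shape of buildReturnPayload(); the real helper and its
// key names live outside this excerpt.
function buildReturnPayloadSketch(array $parts): array
{
    [$status, $state, $messages, $data] = $parts;
    return [
        'status'   => $status,    // bool   -- did the event succeed?
        'state'    => $state,     // string -- machine state, e.g. a STATE_* constant
        'messages' => $messages,  // mixed  -- event/error messages for the client
        'data'     => $data,      // mixed  -- query results, cache key, or null
    ];
}

$payload = buildReturnPayloadSketch([true, 'SUCCESS', ['created 3 records'], 'cache:abc123']);
```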

classes/gacDdb.class.inc Normal file

@@ -0,0 +1,498 @@
<?php
/**
* some assumptions to make:
*
* -- that this class will be invoked via the gacFactory class
* -- that the gacFactory class will pre-validate the data template as being for the Ddb schema
*
*/
class gacDdb extends gaaNamasteCore
{
private $globalIndexes = null;
private $localIndexes = null;
protected $service; // defines the end-point service
/**
* gacDdb constructor.
*
* the constructor for this class requires two input parameters:
*
* $_meta -- the meta data payload (has been vetted by the factory class)
* $_guid -- an optional string containing a guid value -- if this is specified, then we're telling the class
* to instantiate the class with the designated record pre-loaded.
*
* We're going to initialize with some housekeeping chores - like loading up the configuration, initializing
* a connection to the DDB resource via the resource manager, and then loading the template.
*
* if we passed a guid into the method, validate the guid
* if cache is enabled, check to see if the record is cached and, if so, load it
* if the $data property is still null, call the method to load the record
* note that the state/status of the class will be set here
*
* return control to the calling client
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_meta
* @param string $_guid
*
* HISTORY:
* ========
* 06-21-17 mks original coding
*
*/
public function __construct(array $_meta, string $_guid = '')
{
// set-up
register_shutdown_function([$this, STRING_DESTRUCTOR]);
$this->status = false;
parent::__construct();
// load config
$this->config = gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_DDB];
if (empty($this->config)) {
$this->logger->fatal(ERROR_CONFIG_RESOURCE_404 . CONFIG_DATABASE . COLON . CONFIG_DATABASE_DDB);
$this->state = STATE_FRAMEWORK_FAIL;
return;
}
// set the class properties based off the template settings for the data
$this->templateName = STRING_CLASS_GAT . $_meta[META_TEMPLATE];
if (!$this->loadTemplate()) {
$this->logger->warn(ERROR_TEMPLATE_INSTANTIATE . $_meta[META_TEMPLATE]);
$this->state = STATE_TEMPLATE_ERROR;
return;
}
$this->class = $_meta[META_TEMPLATE]; // set the class to the name of the requested data class
// get the http resource for connecting to DynamoDB instance
if (gasResourceManager::$ddbAvailable and is_null($this->connection)) {
$this->connection = gasResourceManager::fetchResource(RESOURCE_DDB);
if (!is_object($this->connection)) {
$this->logger->fatal(ERROR_RESOURCE_DDB_404);
$this->setState(STATE_DB_ERROR);
return;
}
}
// store meta data and client identifier
$this->client = STRING_UNDEFINED; // todo: why?
if (!empty($_meta)) {
$this->metaPayload = $_meta;
if (isset($this->metaPayload[META_CLIENT])) {
$this->client = $this->metaPayload[META_CLIENT];
}
}
// store the event GUID
if (isset($this->metaPayload[META_EVENT_GUID])) {
$this->eventGUID = $this->metaPayload[META_EVENT_GUID];
} else {
$this->logger->warn(ERROR_EVENT_GUID_404);
}
// if a GUID/key was passed to the constructor, we need to fetch that record from the db
// NOTE: mBEDS code has connecting-to-remote-service (vault) code in this block
$this->data = null; // reset the data container
if (!empty($_guid) and validateGUID($_guid)) { // if we have a valid guid...
if ($this->useCache and gasResourceManager::$cacheAvailable) { // if cache is on and available
$this->data = gasCache::get($_guid); // search cache for the key
if (!is_null($this->data)) {
$this->data = json_decode(gzuncompress($this->data), true);
}
}
if (is_null($this->data)) {
$this->guidFetch($_guid);
}
} else {
$this->state = STATE_SUCCESS;
$this->status = true;
}
}
    // schema-contract method stubs -- not yet implemented for the DDB schema
    protected function _createRecord($_data)
{
}
protected function _fetchRecords($_dd, $_rd = null, $_co = true, $_skip = 0, $_limit = 0, $_sort = null)
{
}
protected function _updateRecord($_data){
}
protected function _deleteRecord($_data)
{
}
protected function _lockRecord()
{
}
protected function _releaseLock()
{
}
protected function _isLocked()
{
}
/**
* guidFetch() -- private method
*
* this method is (should only be) called from the class constructor and only then if the calling client passes
* a guid into the constructor - which the framework interprets as a request to instantiate and populate with the
* record designated by the guid - which is the primary key for the ddb table.
*
* The required input parameter is the guid for the record. Validation (that this is a guid) happens in the
* constructor or the calling client.
*
* First, we build the table name from the current environment and then build the query. The query is just a
* key-value-pair fetch but converts into something overly-complicated once we phrase it in ddb syntax. Just
* for logging and general readability, we sql-ize the ddb-query and store the string in the designated property.
*
* If the class is using timers, start the timer
* Execute the query - which returns an iterator and not a data set
* Log the timer results to the metrics table
* parse the return data set and assign it to the $data property
*
* We exception-wrap the query so we'll return the error as to why the query failed and will log accordingly.
*
* Next, if cache is enabled for the current class, cache the item after converting the data set to a json
* string.
*
* At all levels, set the state-status values accordingly - these are what should be evaluated by the calling
* client on return from the method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_guid
*
* HISTORY:
* ========
* 06-21-17 mks original coding
*
*/
private function guidFetch(string $_guid)
{
if ($this->trace) $this->logger->trace(STRING_ENT_METH . __METHOD__);
$this->status = false;
$startTime = floatval(0);
$tableName = gasConfig::$settings[CONFIG_ID][CONFIG_ID_ENV] . UDASH . $this->collectionName . $this->ext;
$query = [
DDB_STRING_CONSISTENT_READ => true,
DDB_TABLE_NAME => $tableName,
DDB_STRING_EXPR_ATTR_VALS => [
':q1' => [
$this->fieldTypes[$this->pKey . $this->ext] => $_guid
]
],
DDB_STRING_KEY_COND_EXPR => "$this->pKey$this->ext = :q1"
];
/** @noinspection SqlNoDataSourceInspection */
        $this->strQuery = 'DDB:SELECT * FROM ' . $tableName . ' WHERE ' . $this->pKey . $this->ext . ' = "' . $_guid . '"';
// todo - this was coded to see if it worked - next, build a query request array and pass to queryBuilder() and _fetchRecords()
// exec the DDB query
try {
if ($this->useTimers) $startTime = gasStatic::doingTime();
/** @var AWS\Result $result */
/** @noinspection PhpUndefinedMethodInspection */
$result = $this->connection->query($query);
if ($this->useTimers) $this->logger->metrics($this->strQuery, gasStatic::doingTime($startTime));
if ($result[DDB_STRING_COUNT] != 1) {
    $msg = sprintf(ERROR_DDB_RECORD_COUNT, $result[DDB_STRING_COUNT], 1);
    $this->eventMessages[] = $msg;
    $this->logger->error($msg);
    $this->state = STATE_DATA_ERROR;
    return;
}
// parse the return data set
/** @var array $resultData */
$resultData = $result[DDB_STRING_ITEMS];
foreach ($resultData as $item) {
    $record = [];
    foreach ($item as $key => $value) {
        foreach ($value as $type => $column) {
            $record[$key] = $column;
        }
    }
    $this->data[] = $record;
}
} catch (\Aws\Exception\AwsException $e) {
$this->eventMessages[] = $e->getMessage();
$this->logger->error($e->getMessage());
$this->state = STATE_DB_ERROR;
return;
}
// cache the item if cache is enabled for the current class
if ($this->useCache and !empty($this->data)) {
$cacheData = json_encode($this->data);
if (is_null(gasCache::add($this->getColumn($this->pKey), $cacheData))) {
$msg = ERROR_CACHE_ADD_FAIL . $this->getColumn($this->pKey);
$this->eventMessages[] = $msg;
$this->logger->error($msg);
$this->state = STATE_CACHE_ERROR;
return;
}
}
$this->event = DB_EVENT_FETCH;
$this->status = true;
$this->state = STATE_SUCCESS;
}
private function queryBuilder(array $_query)
{
if ($this->trace) $this->logger->trace(STRING_ENT_METH . __METHOD__);
$this->status = false;
$foundIndex = false;
$attribute = '';
if (empty($_query)) { // param is typed array, so only emptiness needs checking
$msg = ERROR_PARAM_404 . DDB_STRING_QUERY;
$this->eventMessages[] = $msg;
$this->logger->error($msg);
$this->state = STATE_DATA_ERROR;
return;
}
// deal with a single-key query
if (count($_query) == 1) {
// extract the key value pair
foreach ($_query as $k1 => $v1) { // attribute level
$attribute = $k1;
foreach ($v1 as $k2 => $v2) { // operand level
foreach ($v2 as $k3 => $v3) { // operator level
// check that the operand is '='
if ($k3 != OPERATOR_DDB_EQ) {
$this->eventMessages[] = ERROR_DDB_EXP_EQ_Q1;
$this->logger->data(ERROR_DDB_EXP_EQ_Q1);
$this->state = STATE_DATA_ERROR;
return;
}
if (count($v3) != 1) {
$this->eventMessages[] = ERROR_DDB_EXP_VAL_Q1;
$this->logger->data(ERROR_DDB_EXP_VAL_Q1);
$this->state = STATE_DATA_ERROR;
return;
}
$value = $v3[0]; // grab the search value
}
}
}
// we have extracted the search key attribute and the search value...need to find it in one of
// the three possible index declarations for the current table (VALIDATE THE INDEX ATTRIBUTE)
if (array_key_exists($attribute, $this->indexes) and $this->indexes[$attribute] == DDB_INDEX_HASH) {
$foundIndex = 'base';
} else {
$ca = ['globalIndexes', 'localIndexes'];
foreach ($ca as $index) {
if (is_array($this->$index)) {
foreach ($this->$index as $subIndex) {
if (array_key_exists($attribute, $subIndex) and $subIndex[$attribute] == DDB_INDEX_HASH) {
$foundIndex = true;
break;
}
}
}
if ($foundIndex) break;
}
}
if (!$foundIndex) {
$msg = sprintf(ERROR_DDB_NO_HASH_IDX, $attribute);
$this->eventMessages[] = $msg;
$this->logger->data($msg);
$this->state = STATE_DATA_ERROR;
return;
}
} elseif (count($_query) == 2) {
    // todo: query uses the hash and the range
} else {
    // todo: malformed request
}
}
/**
* loadTemplate() -- private method
*
* this method is invoked by the constructor and serves to load the class template file, assimilating it into
* the current instantiation.
*
* template loads are done on the schema-instantiation level, instead of the core, because of the changes in
* the template file(s) across various schemas.
*
* the method will load the class template and set the class member variables controlled/referenced by the
* template.
*
 * successful loading of the template is indicated by the boolean return value -- on error, a log message
 * is generated, so it is up to the developer to check the logs on a fail-return to see why the template
 * file was not correctly assimilated.
*
* The template to be loaded is first derived in the constructor (post validation that the template file
* exists) and is pulled from the member variable (also set in the constructor) within this method.
*
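 * A hypothetical minimal template sketch showing the members this loader consumes (class and field names
 * are illustrative; the real templates live with each schema):
 *
 * ```php
 * class tmpUsers
 * {
 *     public $service    = 'appServer';
 *     public $schema     = TEMPLATE_DB_DDB;
 *     public $collection = 'users';
 *     public $extension  = '_usr';
 *     public $setPKey    = 'id';
 *     public $setCache   = true;
 *     public $setTimers  = true;
 *     public $fields     = ['id' => 'S', 'name' => 'S', 'created' => 'N'];
 *     public $indexes    = ['id' => DDB_INDEX_HASH];
 * }
 * ```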
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return bool
*
* HISTORY:
* ========
* 06-20-17 mks original coding
*
*/
private function loadTemplate():bool
{
if ($this->trace) $this->logger->trace(STRING_ENT_METH . __METHOD__);
try {
$this->template = new $this->templateName();
} catch (Exception $e) {
$this->logger->warn($e->getMessage());
$this->state = STATE_FRAMEWORK_FAIL;
return (false);
}
if (!is_object($this->template)) {
$this->logger->warn(ERROR_FILE_404 . $this->templateName);
$this->setState(ERROR_FILE_404 . $this->templateName);
return (false);
}
if ($this->template->schema != TEMPLATE_DB_DDB) {
$this->logger->warn(ERROR_SCHEMA_MISMATCH . $this->template->schema . ERROR_STUB_EXPECTING . TEMPLATE_DB_DDB);
$this->setState(ERROR_SCHEMA_MISMATCH . $this->templateName);
return (false);
}
// transfer meta data info to current instantiation
$this->service = $this->template->service;
$this->schema = $this->template->schema;
$this->collectionName = $this->template->collection;
$this->ext = $this->template->extension;
$this->useCache = $this->template->setCache;
$this->useDeletes = $this->template->setDeletes;
$this->useAuditing = $this->template->setAuditing;
$this->useJournaling = $this->template->setJournaling;
$this->allowUpdates = $this->template->setUpdates;
$this->useDetailedHistory = $this->template->setHistory;
$this->defaultStatus = $this->template->setDefaultStatus;
$this->searchStatus = $this->template->setSearchStatus;
$this->useLocking = $this->template->setLocking;
$this->useTimers = ($this->template->setTimers and gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_QUERY_TIMERS]);
$this->pKey = $this->template->setPKey;
$this->useToken = $this->template->setTokens;
$this->cacheExpiry = $this->template->cacheTimer;
if (isset($this->template->fields) and is_array($this->template->fields)) {
foreach ($this->template->fields as $key => $value) {
if ($key == DB_HISTORY) {
$this->fieldList[] = $key;
$this->fieldTypes[$key] = $value;
} else {
$this->fieldList[] = ($key . $this->ext);
$this->fieldTypes[($key . $this->ext)] = $value;
}
}
}
if (isset($this->template->indexes) and is_array($this->template->indexes)) {
foreach ($this->template->indexes as $key => $value) {
$this->indexes[($key . $this->ext)] = $value; // keyed by attribute so queryBuilder() can check the index type
}
}
// todo: validate the global index data
if (isset($this->template->globalIndexes) and is_array($this->template->globalIndexes)) {
foreach ($this->template->globalIndexes as $key) {
$this->globalIndexes[] = $key;
}
}
// todo: validate the local index data
if (isset($this->template->localIndexes) and is_array($this->template->localIndexes)) {
foreach ($this->template->localIndexes as $key) {
$this->localIndexes[] = $key;
}
}
if (!is_null($this->template->cacheMap)) {
foreach ($this->template->cacheMap as $key => $value) {
$this->cacheMap[($key . $this->ext)] = $value;
}
} else {
$this->cacheMap = null;
}
if (!is_null($this->template->binFields)) {
foreach ($this->template->binFields as $key) {
$this->binaryFields[] = ($key . $this->ext);
}
}
if ($this->template->selfDestruct) {
unset($this->template);
}
return (true);
}
/**
* __destruct() -- public function
*
* class destructor
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-20-17 mks original coding
*
*/
public function __destruct()
{
// As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
//
// destructor is registered shut-down function in constructor -- so any recovery
// efforts should go in this method.
// there is no destructor method defined in the core abstraction class, hence
// there is no call to that parent destructor in this class.
parent::__destruct();
}
}

<?php
use MongoDB\Driver\WriteResult;
class gacErrorLogger {
public bool $haveErrors; // true if we have ERROR errors
public bool $haveWarnings; // true if we have WARN errors
public bool $haveFatals; // true if we have FATAL errors
public array $errStack; // the error stack array
public bool $available = false; // boolean indicating service availability
public bool $mirror = false; // mirror db log messages to console?
public bool $status; // dynamic progress indicator (false on entry, true on success)
private string $service = ''; // defines which service the current instantiation is current running on
private ?string $eventGUID = ''; // recording the broker event identifier
private bool $isMetric = false; // indicates if this is a metrics (as opposed to a log) message
private string $res = 'LOGR: '; // console-log label
private bool $publishMessage; // do we publish the message remotely or write locally?
private string $ext = ''; // class extension (_log or _met)
private string $class = ''; // class name of the current collection
private array $config; // holds copy of the pgsConfig for mongo
private ?object $connection; // mongo DB Driver connection resource
private string $dbName; // name of the mongodb
private string $collectionName = ''; // mongo table name (log end-point)
private array $validTemplates; // names of the collections
private string $env = ''; // defines the environment for ddb table names
private gacLogClient $abc; // Admin Broker Client -- pointer to the admin broker client class
public function __construct(?string $_eg = null, bool $_pm = true)
{
$this->status = false;
try {
register_shutdown_function(array($this, '__destruct'));
$this->errStack = [];
$this->haveErrors = false;
$this->haveWarnings = false;
$this->haveFatals = false;
$this->publishMessage = $_pm;
$this->eventGUID = (!is_null($_eg) and validateGUID($_eg)) ? $_eg : null;
$this->validTemplates = [TEMPLATE_CLASS_LOGS, TEMPLATE_CLASS_METRICS];
// toss a fatal if the config hasn't been instantiated...
if (!isset(gasConfig::$settings) or empty(gasConfig::$settings)) {
$this->throwFatal(__FILE__ . '(' . __LINE__ . '): ' . ERROR_CONFIG_404);
} elseif (empty(gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_MONGODB])) {
$this->throwFatal(__FILE__ . '(' . __LINE__ . '): ' . ERROR_CONFIG_RESOURCE_404 . RESOURCE_DDB);
}
// $this->config = gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_DDB];
// $this->connection = $this->getNoSQLResource(); // ddb connection
$this->config = gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_MONGODB];
$this->connection = gasResourceManager::fetchResource(RESOURCE_MONGO_MASTER, ENV_ADMIN);
$this->available = !is_null($this->connection);
$this->env = gasConfig::$settings[CONFIG_ID][CONFIG_ID_ENV];
$this->dbName = $this->env . UDASH . $this->config[CONFIG_DATABASE_MONGODB_ADMIN][CONFIG_DATABASE_MONGODB_DB_NAME];
// $this->abc = null;
$this->status = true;
} catch (TypeError | Throwable $e) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
echo getDateTime() . CON_ERROR . $this->res . $hdr . $e->getMessage();
}
}
/**
* setService() -- public method
*
 * When any class that extends core creates an embedded logger class, which they all should, the instantiating
 * class should call this function so that the logger class inherits the service defined for the current
 * class. This is validated later in isServiceLocal() when instantiating a class, to ensure that a class can
 * only be instantiated in its own service when dealing with appServer and tercero classes.
 *
 * There is a single input parameter: the defined name of the current service. We do not validate this string
 * value because that has already happened within the "parent" class.
*
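 * A hypothetical usage sketch from an instantiating class (SERVICE_APPSERVER stands in for whatever
 * constant names the local service):
 *
 * ```php
 * $this->logger = new gacErrorLogger($eventGUID);
 * if (!$this->logger->setService(SERVICE_APPSERVER)) {
 *     // service is not local (or not configured) -- a console message was already emitted
 * }
 * ```
 *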
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_service
* @return bool
*
* HISTORY:
* ========
* 11-04-20 mks DB-171: original coding
*
*/
public function setService(string $_service):bool
{
    if (isset(gasConfig::$settings[$_service])) {
        if (gasConfig::$settings[$_service][CONFIG_IS_LOCAL]) {
            $this->service = $_service;
        } else {
            $hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
            $msg = sprintf(ERROR_SERVICE_NOT_LOCAL, $_service);
            consoleLog('SSER: ', CON_ERROR, $hdr . $msg);
            return false;
        }
    } else {
        $hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
        $msg = ERROR_CONFIG_RESOURCE_404 . STRING_SERVICE;
        consoleLog('SSER: ', CON_ERROR, $hdr . $msg);
        return false;
    }
    return true;
}
/**
* throwFatal() -- private method
*
 * if the logger cannot be properly instantiated (thereby being unable to log errors in the database), the
 * last remaining option is to throw a direMessage (tm) to the console.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_msg -- the text dumped to the php-errors log
* @throws Exception
*
* HISTORY:
* ========
* 06-07-17 mks original coding
* 12-07-17 mks CORE-591: removed errStack dump b/c cached and can be recovered
* 01-22-18 mks _INF-139: updated code to php 7.2
* 03-02-18 mks CORE-680: deprecated trace logging
*
*/
private function throwFatal(string $_msg): void
{
$msg = PHP_EOL . '----------------------------------------------------------------------------------------------------' . PHP_EOL;
$msg .= ' FATAL ERRORS OCCURRED IN THE ERROR CLASS. EVALUATE, FIX, AND RE-RUN.' . PHP_EOL;
$msg .= PHP_EOL . $_msg . PHP_EOL;
$msg .= '----------------------------------------------------------------------------------------------------' . PHP_EOL;
throw new Exception($msg);
}
public function debug(string $_message): void
{
try {
self::set(ERROR_DEBUG, $_message);
} catch (Throwable $t) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
consoleLog($this->res, CON_ERROR, $msg);
}
}
public function info(string $_message): void
{
try {
self::set(ERROR_INFO, $_message);
} catch (Throwable $t) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
consoleLog($this->res, CON_ERROR, $msg);
}
}
public function data(string $_message): void
{
try {
self::set(ERROR_DATA, $_message);
} catch (Throwable $t) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
consoleLog($this->res, CON_ERROR, $msg);
}
}
public function error(string $_message): void
{
try {
self::set(ERROR_ERROR, $_message);
} catch (Throwable $t) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
consoleLog($this->res, CON_ERROR, $msg);
}
}
public function warn(string $_message): void
{
try {
self::set(ERROR_WARN, $_message);
} catch (Throwable $t) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
consoleLog($this->res, CON_ERROR, $msg);
}
}
public function fatal(string $_message): void
{
try {
self::set(ERROR_FATAL, $_message);
} catch (Throwable $t) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
consoleLog($this->res, CON_ERROR, $msg);
}
}
public function metrics(string $_message, float $_time, &$_es = null): void
{
try {
self::set(ERROR_METRICS, $_message, true, $_time, $_es);
} catch (Throwable $t) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
consoleLog($this->res, CON_ERROR, $msg);
}
}
/**
* set() -- public method
*
* this method is where most of the work gets done for the class.
*
* this method accepts a log level and a log message. if AMQP and nosql services are not available (either),
* then the message will be output to the console.
*
* otherwise, we're going to parse the log level against the configuration settings to ensure that we're
* publishing messages according to the settings and the current environment. (iow, don't publish debug level
* messages while in a production environment)
*
* RULES:
* ------
* DEBUG errors are only logged if debug mode is on
*
* we use debug_backtrace to generate the originating message data points and assemble the message data
* (and the meta data for the broker request) into a single array which is then processed and published
* to the delayed-write queue (itself a fire-and-forget queue).
*
* this design differs from previous iterations in that we're no longer trying to preserve an ongoing error
* stack - instead we're just considering the incoming message to be the only message currently in existence.
*
* The input parameters to the method are:
*
* $_level - which is the message level and a defined constant
* $_message - the text of the actual message
* $_metrics - (optional, default = false) if true, use the metrics template instead of the logs template
* $_t - (optional, default = 0), for metrics, the total query time
* $_es - (optional, default = 0), call-by-reference param to store/return the metrics error stack
*
* There are no return values, either implicitly or explicitly.
*
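 * A hypothetical shape of the single record pushed onto $errStack (field names assume a '_log' collection
 * extension; keys and values are illustrative, not the literal constants):
 *
 * ```php
 * [
 *     'file_log'    => 'appServer->gasUsers.php',
 *     'line_log'    => 212,
 *     'method_log'  => 'fetch',
 *     'class_log'   => 'gasUsers',
 *     'level_log'   => ERROR_ERROR,
 *     'value_log'   => ERROR_ERROR_VAL,
 *     'message_log' => 'record not found',
 *     'created_log' => 1605000000,
 * ]
 * ```
 *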
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_level error level
* @param string $_message error message string
* @param bool $_metrics defines the event as a metric (instead of an error) event
* @param float $_t time, in ms
* @param array $_es error stack: call by reference array stack/container
* @throws Exception
*
*
* HISTORY:
* ========
* 06-07-17 mks original coding
* 07-12-17 mks added log-level values to collection/processing
* 12-08-17 mks CORE-591: new throw-fatal processing
* 03-02-18 mks CORE-680: deprecated trace logging
* 06-08-18 mks CORE-1034: deprecating XML prodBox tag
* 07-30-18 mks CORE-774: PHP7.2 exception handling update
* 09-24-19 mks DB-136: fixed error in levelValues element assignment to correct constant
* 11-04-20 mks DB-171: added service (origin) definition to logging output
*
*/
private function set(string $_level, string $_message, bool $_metrics = false, float $_t = 0, array &$_es = null): void
{
$this->errStack = [];
$levelValues = [
ERROR_EVENT => ERROR_EVENT_VAL,
ERROR_METRICS => ERROR_METRICS_VAL,
ERROR_DEBUG => ERROR_DEBUG_VAL,
ERROR_DATA => ERROR_DATA_VAL,
ERROR_INFO => ERROR_INFO_VAL,
ERROR_ERROR => ERROR_ERROR_VAL,
ERROR_WARN => ERROR_WARN_VAL,
ERROR_FATAL => ERROR_FATAL_VAL
];
if (!array_key_exists($_level, $levelValues)) {
    // throwFatal() always throws, so there is no recovery assignment to make here
    $this->throwFatal(__FILE__ . COLON . __LINE__ . COLON . sprintf(ERROR_UNKNOWN_KEY, $_level, LOG_VALUE));
}
// the system is up - so create the error message request for publication to the DW broker
$_message = empty($_message) ? ERROR_DATA_INPUT_EMPTY : trim($_message);
switch ($_level) {
case ERROR_DEBUG :
$saveError = (gasConfig::$settings[CONFIG_DEBUG] && gasConfig::$settings[CONFIG_ID][CONFIG_ID_ENV] != ENV_PRODUCTION); // '&&', not 'and': low-precedence 'and' would bind after the assignment
break;
case ERROR_DATA :
case ERROR_INFO :
case ERROR_METRICS :
$saveError = true;
break;
case ERROR_ERROR :
$this->haveErrors = true;
$saveError = true;
break;
case ERROR_WARN :
$this->haveWarnings = true;
$saveError = true;
break;
case ERROR_FATAL :
$this->haveFatals = true;
$saveError = true;
break;
default :
$saveError = true;
$_level = ERROR_INFO;
break;
}
if (!$saveError) return;
try {
if ($_level == ERROR_METRICS) {
$this->isMetric = true;
$this->setCollection(TEMPLATE_CLASS_METRICS);
} else {
$this->setCollection();
}
} catch (Throwable $t) {
consoleLog($this->res, CON_ERROR, ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage());
}
$backTrace = debug_backtrace();
for ($index = 0; $index < 3; $index++) {
if (isset($backTrace[$index][STRING_FUNCTION]) and $backTrace[$index][STRING_FUNCTION] != STRING_SET)
break;
}
$errStack = [];
if ($_level == ERROR_METRICS) @$errStack[(LOG_EVENT . $this->ext)] = EVENT_METRICS;
@$errStack[(LOG_FILE . $this->ext)] = $this->service . ARROW . $backTrace[$index][ERROR_FILE];
@$errStack[(LOG_LINE . $this->ext)] = $backTrace[$index][ERROR_LINE];
@$errStack[(LOG_METHOD . $this->ext)] = $backTrace[$index+1][STRING_FUNCTION];
@$errStack[(LOG_CLASS . $this->ext)] = $backTrace[$index+1][ERROR_CLASS];
@$errStack[(LOG_LEVEL . $this->ext)] = $_level;
@$errStack[(LOG_VALUE . $this->ext)] = $levelValues[$_level];
@$errStack[(LOG_MESSAGE . $this->ext)] = utf8_encode($_message);
if (isset($this->eventGUID))
@$errStack[(DB_EVENT_GUID . $this->ext)] = $this->eventGUID;
if ($_metrics) {
@$errStack[(LOG_TIMER . $this->ext)] = $_t;
$_es = $backTrace;
}
$errStack[(LOG_CREATED . $this->ext)] = time();
array_push($this->errStack, $errStack);
try {
$this->writeLogMessage();
if ($this->mirror) {
consoleLog($this->res, CON_ERROR, $_message);
}
} catch (Throwable $t) {
consoleLog($this->res, CON_ERROR, ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage());
}
}
/**
 * writeLogMessage() -- public method
 *
 * this is the record-create method for saving the buffered log event(s) held in $errStack.
 *
 * There is one optional input parameter:
 *
 *     $_im -- "is metric" -- when true, the write targets the metrics collection instead of the logs
 *             collection (only set to true from the adminIn broker event call)
 *
 * if we (self) are not available, or there is nothing to write, return immediately -- the calling client
 * should assume that logging services are not available and post the necessary console message.
 *
 * when remote publishing is enabled, the error stack is merged into the cached message buffer; once the
 * buffer reaches the configured threshold (or buffering is disabled), the buffer is compressed and
 * published to the logs/metrics exchange via the admin log client.
 *
 * otherwise, the error stack is written directly into the designated mongo collection with a bulk write,
 * trapping any nosql exception raised in our fatal handler so that the (error) results are saved to the
 * console log.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param bool $_im -- is Metric -- should only be set to true from adminIn broker event call
* @throws Exception
*
* HISTORY:
* ========
* 06-07-17 mks original coding
* 06-14-17 mks refactored for dynamodb
* 07-05-17 mks CORE-463: refactored for RMQ and mongoDB
* 07-07-17 mks CORE-463: switch collections on-the-fly for adminIn Metrics event call
* 12-06-17 mks CORE-591: caching log messages based on XML config params, w/console logging on errors
* 01-22-18 mks _INF-139: fixed bug where when caching is disabled, the current error wasn't being
* added to the error stack.
* 03-01-18 mks CORE-689: removed env tag from collection name
* 07-30-18 mks CORE-774: converted to PHP7.2 typeError trapping/processing
*
*/
public function writeLogMessage(bool $_im = false)
{
// if the logger service is unavailable or we don't have a message to publish, then return
if (!$this->available or empty($this->errStack)) return;
$this->status = false;
$oe = $oc = $os = '';
$maxMsgCount = gasConfig::$settings[CONFIG_CACHE][CONFIG_CACHE_LOG_BUFFER_COUNT];
$msgBufferOn = boolval(gasConfig::$settings[CONFIG_CACHE][CONFIG_CACHE_LOG_BUFFER]);
$whichBuffer = ($this->isMetric) ? CONFIG_CACHE_METRICS_BUFFER : CONFIG_CACHE_LOG_BUFFER;
$whichBufferCounter = ($this->isMetric) ? CONFIG_CACHE_METRICS_BUFFER_COUNT : CONFIG_CACHE_LOG_BUFFER_COUNT;
// admin service is remote - either cache or publish the message
if ($this->publishMessage) {
try {
$currMsgCount = gasCache::sysGet($whichBufferCounter);
$msgBuffer = gasCache::sysGet($whichBuffer);
// remove cache data
gasCache::sysDel($whichBuffer);
gasCache::sysDel($whichBufferCounter);
} catch (Throwable $t) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
consoleLog($this->res, CON_ERROR, $msg);
return;
}
// validate the message buffer fetched from cache
if (!empty($msgBuffer) and !is_array($msgBuffer)) {
$msg = basename(__FILE__) . COLON . __LINE__ . COLON;
$msg .= sprintf(ERROR_CACHE_DATA_MALFORMED, 'array', CONFIG_CACHE_LOG_BUFFER, gettype($msgBuffer));
$msg .= PHP_EOL . 'Dump: ' . PHP_EOL;
$msg .= var_export($msgBuffer, true);
$msg .= PHP_EOL;
$this->throwFatal(basename(__METHOD__) . AT . __LINE__ . COLON . $msg);
return;
}
if (false === $msgBuffer) $msgBuffer = []; // cache miss - start a fresh buffer
$msgBuffer = [...$msgBuffer, ...$this->errStack];
$currMsgCount = count($msgBuffer);
// determine if we're going to publish or cache the message buffer
$publish = (!$msgBufferOn or $currMsgCount >= $maxMsgCount);
if ($publish) {
try {
$this->abc = new gacLogClient();
if (!$this->abc->status) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$msg = ERROR_FAILED_TO_INSTANTIATE . RESOURCE_ADMIN_CLIENT;
consoleLog($this->res, CON_ERROR, $hdr . $msg);
return;
}
} catch (Throwable $t) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
consoleLog($this->res, CON_ERROR, $msg);
return;
}
$request = [
BROKER_REQUEST => ($this->isMetric) ? BROKER_REQUEST_MET : BROKER_REQUEST_LOG,
BROKER_DATA => $msgBuffer,
BROKER_META_DATA => [
META_TEMPLATE => ($this->isMetric) ? TEMPLATE_CLASS_METRICS : TEMPLATE_CLASS_LOGS,
META_CLIENT => CLIENT_SYSTEM
]
];
if (!empty($this->eventGUID)) $request[BROKER_META_DATA][META_EVENT_GUID] = $this->eventGUID;
try {
$route = (!$this->isMetric) ? EXCHANGE_SOURCE_LOGS : EXCHANGE_SOURCE_METRICS;
$this->abc->call(gzcompress(json_encode($request)), $route);
if (is_object($this->abc)) $this->abc->__destruct();
unset($this->abc);
} catch (TypeError | Throwable $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
consoleLog($this->res, CON_ERROR, $hdr . $t->getMessage());
return;
}
$this->isMetric = false;
// if (gasConfig::$settings[CONFIG_DEBUG]) {
// $msg = sprintf(SUCCESS_CACHE_LOG_DUMP, $currMsgCount);
// consoleLog($this->res, CON_SUCCESS, $msg);
// }
} else {
    // re-cache the merged message buffer and its counter; the error stack was already
    // folded into $msgBuffer above, so merging again here would duplicate entries
try {
if (!gasCache::sysAdd($whichBuffer, $msgBuffer)) {
$this->throwFatal(basename(__METHOD__) . AT . __LINE__ . COLON . ERROR_CACHE_ADD_FAIL . $whichBuffer);
return;
}
if (!gasCache::sysAdd($whichBufferCounter, $currMsgCount)) {
$this->throwFatal(basename(__METHOD__) . AT . __LINE__ . COLON . ERROR_CACHE_ADD_FAIL . $whichBufferCounter . COLON . $currMsgCount);
return;
}
} catch (Throwable $t) {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__) . ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
consoleLog($this->res, CON_ERROR, $msg);
return;
}
}
} else {
// write the message directly to mongodb - start by checking the mongo resource
if (is_null($this->connection)) {
// if we "lost" the resource, attempt to reconnect
$this->connection = gasResourceManager::fetchResource(RESOURCE_MONGO_MASTER, ENV_ADMIN);
// if we can't reconnect, throw a fatal and exit
if (is_null($this->connection)) {
$this->throwFatal(basename(__METHOD__) . AT . __LINE__ . COLON . ERROR_MONGO_CONNECT);
}
}
try {
$record = new MongoDB\Driver\BulkWrite();
foreach ($this->errStack as $errorMessage)
$record->insert($errorMessage);
if ($_im) {
$oc = $this->collectionName;
$oe = $this->ext;
$os = $this->class;
$this->ext = COLLECTION_MONGO_METRICS_EXT;
$this->collectionName = COLLECTION_MONGO_METRICS . $this->ext;
$this->class = COLLECTION_MONGO_METRICS;
} else {
$this->ext = COLLECTION_MONGO_LOGS_EXT;
$this->collectionName = COLLECTION_MONGO_LOGS . $this->ext;
$this->class = COLLECTION_MONGO_LOGS;
}
/** @var WriteResult $result */
$result = $this->connection->executeBulkWrite($this->dbName . DOT . $this->collectionName, $record);
if ($_im) {
$this->ext = $oe;
$this->collectionName = $oc;
$this->class = $os;
}
unset($record);
$expected = count($this->errStack);
if ($result->getInsertedCount() != $expected) {
    $msg = __CLASS__ . COLON . __LINE__ . COLON;
    $msg .= sprintf(ERROR_MONGO_INSERT_COUNT, $expected, $result->getInsertedCount());
echo getDateTime() . CON_ERROR . $this->res . $msg . PHP_EOL;
// todo -- eventManager should be invoked here
} else {
$this->status = true;
}
} catch (MongoDB\Driver\Exception\BulkWriteException $e) {
$this->throwFatal(basename(__METHOD__) . AT . __LINE__ . COLON . ERROR_MONGO_EXCEPTION_BULK_WRITE . PHP_EOL . $e->getMessage());
} catch (MongoDB\Driver\Exception\InvalidArgumentException $e) {
$this->throwFatal(basename(__METHOD__) . AT . __LINE__ . COLON . ERROR_MONGO_EXCEPTION_INVALID_ARGS . PHP_EOL . $e->getMessage());
} catch (MongoDB\Driver\Exception\ConnectionException $e) {
$this->throwFatal(basename(__METHOD__) . AT . __LINE__ . COLON . ERROR_MONGO_EXCEPTION_CONNECTION . PHP_EOL .$e->getMessage());
} catch (MongoDB\Driver\Exception\RuntimeException $e) {
$this->throwFatal(basename(__METHOD__) . AT . __LINE__ . COLON . ERROR_MONGO_EXCEPTION_RUNTIME . PHP_EOL . $e->getMessage());
}
}
}
/**
* setCollection() -- private method
*
* this is a private method, always called by the writeLogMessage() method, that sets the collection destination
* for the current request. As of this writing, the logger handles all writes to both the log (pgsLogs_log) and
* metrics (pgsMetrics_met) collections. Based on the data passed in the error-message, we set the collection
* destination in this method, along with other member variables that are collection dependent.
*
* the method has a single input parameter, which defaults to a known constant, that is the destination collection.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_which
* @throws Exception
*
* HISTORY:
* ========
* 06-07-17 mks original coding
* 06-14-17 mks updated for ddb, dynamic environments
* 07-05-17 mks CORE-463: converted to mongodb
* 03-01-18 mks CORE-689: removed env tags from db name
*
*/
private function setCollection(string $_which = TEMPLATE_CLASS_LOGS): void
{
if (!in_array($_which, $this->validTemplates)) {
$this->throwFatal(ERROR_INVALID_TEMPLATE . $_which);
} else {
if ($_which == TEMPLATE_CLASS_LOGS) {
$this->ext = COLLECTION_MONGO_LOGS_EXT;
$this->collectionName = COLLECTION_MONGO_LOGS . $this->ext;
$this->class = COLLECTION_MONGO_LOGS;
} else {
$this->ext = COLLECTION_MONGO_METRICS_EXT;
$this->collectionName = COLLECTION_MONGO_METRICS . $this->ext;
$this->class = COLLECTION_MONGO_METRICS;
}
}
}
/**
* getLog() - public method
*
* getLog is the method that is used to fetch log (or Metrics) records from the mongo collection.
*
 * The method has two parameters:
 *
 * $_lines - integer count of log entries to retrieve - falls back to the system constant
 *           MONGO_LOG_MAX_LINES when missing, non-numeric, or negative
 * $_where - defines which collection to fetch from
*
* Method opens a channel/connection to mongodb and fetches N lines from the collection specified by the
* input parameter.
*
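 * A hypothetical call from a dumper/console utility:
 *
 * ```php
 * $logger = new gacErrorLogger();
 * $html = $logger->getLog(100, TEMPLATE_CLASS_LOGS);   // newest 100 log rows, rendered as HTML
 * if (is_null($html)) {
 *     consoleLog('DUMP: ', CON_ERROR, 'logging services unavailable');
 * }
 * ```
 *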
* @author mike@givingassistant.org
* @version 1.0
*
* @param int $_lines
* @param string $_where
* @return null|string
* @throws Exception
*
* HISTORY:
* ========
* 07-05-17 mks CORE-463: original coding
* 09-08-17 mks CORE-529: added event guids to the generated output for dumper
* 07-30-18 mks CORE-774: PHP7.2 Exception Compliance
*
*/
public function getLog($_lines, $_where): ?string
{
if (!in_array($_where, $this->validTemplates)) {
$msg = ERROR_MONGO_TEMPLATE_INVALID . $_where;
return($msg);
}
try {
$this->setCollection($_where);
} catch (Throwable $t) {
consoleLog($this->res, CON_ERROR, ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage());
}
if (!$this->available) {
return(null); // the calling client emits a console error message when null is returned
}
if (!is_numeric($_lines) or $_lines < 0) {
$_lines = intval(MONGO_LOG_MAX_LINES);
} else {
$_lines = intval($_lines);
}
$mongoData = null;
$returnData = null;
$cursor = null;
try {
$options = [
STRING_SORT => [(LOG_CREATED . $this->ext) => -1],
STRING_LIMIT => $_lines
];
$filter = [];
$nameSpace = $this->dbName . DOT . $this->collectionName;
$readPreference = new MongoDB\Driver\ReadPreference($this->config[CONFIG_DATABASE_MONGODB_ADMIN][CONFIG_DATABASE_MONGODB_SECONDARY_RP]);
$query = new MongoDB\Driver\Query($filter, $options);
$cursor = $this->connection->executeQuery($nameSpace, $query, $readPreference);
} catch (Throwable | TypeError $e) {
$this->throwFatal(__FILE__ . COLON . __LINE__ . COLON . ERROR_MONGO_EXCEPTION_INVALID_ARGS . PHP_EOL . $e->getMessage());
}
if (!is_null($cursor)) {
foreach ($cursor as $property) {
$property = (array) $property;
$returnData .= '<div class="rowMeta">'; // note: css is defined in the utilities directory
$returnData .= getDateTime($property[(LOG_CREATED . $this->ext)]) . ' - ';
// $returnData .= date(TIME_DATE_FORMAT, $row[(META_SESSION_DATE . self::$ext)]->sec) . ' - ';
// add error label as a span: warn/error/fatal...
try {
$returnData .= $this->getErrorLabel($property[(LOG_LEVEL . $this->ext)]); // instance call: getErrorLabel() is non-static
} catch (TypeError $t) {
consoleLog($this->res, CON_ERROR, ERROR_TYPE_EXCEPTION . COLON . $t->getMessage());
}
$returnData .= ' ' . $property[(ERROR_FILE . $this->ext)] . '(' . $property[(ERROR_LINE . $this->ext)] . ')';
$cd = '';
if (!empty($property[(ERROR_CLASS . $this->ext)])) $cd = ' class[' . $property[(ERROR_CLASS . $this->ext)] . ']';
if (!empty($property[(ERROR_METHOD . $this->ext)])) $cd .= '.method(' . $property[(ERROR_METHOD . $this->ext)] . ')</div>';
$returnData .= $cd;
$returnData .= '<div class="rowData">' . htmlentities($property[(ERROR_MESSAGE . $this->ext)]);
if ($_where == TEMPLATE_CLASS_METRICS) {
$returnData .= ' - ' . $property[(DB_TIMER . $this->ext)] . ' or ';
$returnData .= ($property[(DB_TIMER . $this->ext)] * NUMBER_MS_PER_SEC) . 'ms';
}
$returnData .= '</div>';
$returnData .= '<div class="rowHist">';
if (!empty($property[(DB_EVENT_GUID . $this->ext)])) {
$returnData .= 'Event ID: ' . $property[(DB_EVENT_GUID . $this->ext)];
}
// foreach($row[(TEMPLATE_HISTORY . $this->ext)] as $histRec) {
// $returnData .= date('Y-M-d h:i:s', $histRec[META_SESSION_DATE]->sec);// . ' (';
// if (!is_null($row[(MONGO_LOG_EVENT_GUID . $this->ext)]))
// $returnData .= ', Event ID: ' . $row[(MONGO_LOG_EVENT_GUID . $this->ext)];
// $returnData .= $histRec[META_SESSION_EVENT] . ') from (';
// $returnData .= $histRec[META_SESSION_IP] . '): ';
// $returnData .= ((isset($histRec[META_SESSION_ID])) ? $histRec[META_SESSION_ID] : $histRec[META_CLIENT_ID]) . '<br />';
// }
$returnData .= '</div><br />';
}
}
return ($returnData);
}
/**
 * getErrorLabel() - private method
*
* build an association between known error types and a color-key for output.
*
* if the error does not exist, return black.
*
* in all cases, return an HTML SPAN tag coded to the selected color.
*
* @author mike@givingassistant.org
* @version 1.0
*
 * @param string $errorType
* @return string
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
private function getErrorLabel(string $errorType): string
{
// set the error level text-color
$errorColorMap = [
ERROR_DEBUG => '#008000', // green
ERROR_METRICS => '#00FF00', // lime
ERROR_DATA => '#0000FF', // blue
ERROR_INFO => '#000080', // navy
ERROR_ERROR => '#800080', // purple
ERROR_FATAL => '#FF0000', // red
ERROR_WARN => '#F47E1C', // orange
ERROR_EVENT => '#FF00CC', // pink
];
$cssColor = (!empty($errorColorMap[$errorType])) ? $errorColorMap[$errorType] : '#000000'; // default: black
return '<span style="color:' . $cssColor . ';">' . strtoupper($errorType) . '</span>';
}
/**
 * __clone() -- private method
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 03-18-15 mks original coding
*
*/
private function __clone()
{
return null;
}
/**
* __destruct() -- public method
*
* As of PHP 5.3.10, destructors are not run on shutdowns caused by fatal errors - since the destructor is
* now registered in the constructor method, recovery and/or clean-up efforts should go into this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 03-18-15 mks original coding
* 05-11-16 mks ome-287: support for dynamic resource management
*
*/
public function __destruct()
{
//do nothing
}
}
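The read path in getLog() above boils down to a sort/limit query against a namespaced collection. Here is a standalone sketch of the option and namespace assembly; the literal values ('namaste', 'logs', '_lg', 'created') stand in for the framework's COLLECTION_*/LOG_* constants and are assumptions, not their real values:

```php
<?php
// Standalone sketch of the read options getLog() assembles before executing
// a MongoDB\Driver\Query. Literal values are illustrative assumptions.
$ext            = '_lg';
$dbName         = 'namaste';
$collectionName = 'logs' . $ext;
$lines          = 50;

$filter  = [];                            // no filter: take the newest N of everything
$options = [
    'sort'  => ['created' . $ext => -1],  // newest entries first
    'limit' => $lines,
];
$namespace = $dbName . '.' . $collectionName;

// With ext-mongodb loaded and a reachable server, the class would then run:
//   $query  = new MongoDB\Driver\Query($filter, $options);
//   $cursor = $manager->executeQuery($namespace, $query, $readPreference);
echo $namespace . PHP_EOL;  // namaste.logs_lg
```

The driver calls themselves are left as comments so the sketch runs without a server.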

1585 classes/gacFactory.class.inc Normal file (diff suppressed because it is too large)

@@ -0,0 +1,172 @@
<?php
/**
* gacLogClient -- class definition
*
* this is the class for declaring a logClient for use in testing, or within the framework when we need to publish
* a log event to another queue.
*
* This class simply abstracts the RabbitMQ processes so that you don't have to re-write all the RMQ code every
* time you want to publish a message to the log exchange.
*
* IMPORTANT NOTE:
* ---------------
* Whenever you add a qualifying queue to the pantheon, you'll need to update the constructor class, adding the queue
* name to the $validQueues member, and to the switch-case statement that assigns the correct environment.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 09-18-19 mks DB-136: original coding
* 12-04-19 mks DB-140: PHP 7.4 compliance
* 01-07-20 mks DB-150: PHP 7.4 member variable casting
*
*/
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Exception\AMQPRuntimeException;
use PhpAmqpLib\Exception\AMQPTimeoutException;
use PhpAmqpLib\Message\AMQPMessage;
class gacLogClient {
private ?object $rabbitConnection;
private AMQPChannel $rabbitChannel;
private string $res = 'BRCL: ';
// if you add to the log-exchange bindings, for either service or source, remember to update these arrays as needed
private array $validTypes = [ EXCHANGE_SOURCE_LOGS, EXCHANGE_SOURCE_METRICS, STAR ];
public bool $status;
/**
* __construct() -- public method
*
* the constructor instantiates the class and establishes a connection to the RMQ Log Exchange.
*
* other client classes, such as BrokerClient, require a queue name to be passed to the constructor because these
* clients are connecting directly to the queues to publish an event. This client, instead, connects to the
* log exchange (instead of a queue) and publishes to the exchange.
*
* Within the bindings for the exchange which are coded in the brokers for each of the exchange queues, are the
* configuration directives for how messages are routed by the exchange to the destination queues. Messages
* published to the exchange can be routed to one, several, or all of the queues bound to the exchange.
*
* That is why there isn't a string parameter passed to this class' constructor - because this client connects
* to the exchange and not to a broker specified by the input parameter.
*
* method returns an implicit boolean via a class member (status) indicating whether or not the resource management
* was successful and attempts to provide diagnostics via logging, cli output, or via the status member variable.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 09-18-19 mks DB-136: original coding
*
*/
public function __construct()
{
register_shutdown_function(array($this, '__destruct'));
$this->status = false;
try {
// fetch the AMQPStreamConnection to the Admin service
$this->rabbitConnection = gasResourceManager::fetchResource(RESOURCE_ADMIN);
if (is_null($this->rabbitConnection)) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
consoleLog($this->res, CON_ERROR, $hdr . ERROR_RESOURCE_404 . RESOURCE_ADMIN);
return;
}
$this->rabbitChannel = $this->rabbitConnection->channel();
// connect the channel to the logging exchange
$this->rabbitChannel->exchange_declare(EXCHANGE_NAME_TOPIC_LOGS, EXCHANGE_TYPE_TOPIC, false, false, false);
$this->status = true;
} catch (AMQPRuntimeException | AMQPTimeoutException | Throwable $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
consoleLog($this->res, CON_ERROR, $hdr . ERROR_THROWABLE_EXCEPTION);
consoleLog($this->res, CON_ERROR, $hdr . $t->getMessage());
}
return;
}
/**
* call() -- public method
*
 * This method is invoked outside of the class and is the entry point for publishing a message request to the
 * logging exchange. It creates a new AMQP message and publishes it to the exchange (declared in the
 * constructor). Because logging is fire-and-forget, it does not wait for a response from the remote service.
*
* There are two input parameters to this method:
*
* $_data -- this is the payload data that, untouched, is converted into an AMQPMessage class object
* $_route -- this defines how the exchange will handle the incoming message (<service>.<source> format)
*
* For exchange routing, the $_route is required and critical for delivering a successful message request to
* the logging exchange. The following values are valid routes with the default route set to STAR:
*
* EXCHANGE_SOURCE_METRICS, EXCHANGE_SOURCE_LOGS, STAR
*
* Publishing a message is exception trapped and will generate a log message at the warn level if tripped.
*
* The method returns a boolean to the calling client if the message was successfully published to the exchange.
* Since logging is fire-n-forget, we can't know if the message was accepted... if a false value is returned,
* then the client should check the console log as their request failed routing validation.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
 * @param string $_data -- the payload to be converted to an AMQPMessage class object
 * @param string $_route -- the declared route for the message request ( defaults to: * )
*
* @return bool -- indicates if the message was successfully published or not
*
*
* HISTORY:
* ========
* 09-18-19 mks DB-136: original coding
*
*/
public function call(string $_data, string $_route = STAR): bool
{
// validate the routing string for the exchange...
if (!in_array($_route, $this->validTypes)) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$msg = sprintf(ERROR_SERVICE_SOURCE_UNK, $_route);
consoleLog($this->res, CON_ERROR, $hdr . $msg);
return false;
}
// create and publish the message to the logging exchange
try {
$rabbitMessage = new AMQPMessage((string)$_data);
$this->rabbitChannel->basic_publish($rabbitMessage, EXCHANGE_NAME_TOPIC_LOGS, $_route);
consoleLog($this->res, CON_SUCCESS, SUCCESS_PUBLISHED . $_route);
} catch (AMQPTimeoutException | AMQPRuntimeException | Throwable $e) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
consoleLog($this->res, CON_ERROR, $hdr . $e->getMessage());
return false; // the publish did not reach the exchange
}
return true;
}
public function __destruct()
{
// As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
//
// destructor is registered shut-down function in constructor -- so any recovery
// efforts should go in this method.
try {
$this->rabbitChannel->close();
$this->rabbitConnection->close();
} catch (AMQPTimeoutException | AMQPRuntimeException | Throwable $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
consoleLog($this->res, CON_ERROR, $hdr . ERROR_THROWABLE_EXCEPTION);
consoleLog($this->res, CON_ERROR, $hdr . $t->getMessage());
}
}
}
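The constructor docblock above notes that the exchange bindings decide whether a published message is routed to one, several, or all of the bound queues. The matching rule a topic exchange applies to a `<service>.<source>` routing key can be illustrated offline; this is a simplified matcher, not php-amqplib code, and it omits the `#` zero-or-more-words wildcard for brevity:

```php
<?php
// Simplified illustration of RabbitMQ *topic* routing: '*' matches exactly
// one dot-separated word. NOT php-amqplib code; '#' is omitted for brevity.
function topicMatches(string $binding, string $routingKey): bool
{
    $b = explode('.', $binding);
    $k = explode('.', $routingKey);
    if (count($b) !== count($k)) {
        return false;
    }
    foreach ($b as $i => $word) {
        if ($word !== '*' && $word !== $k[$i]) {
            return false;
        }
    }
    return true;
}

// A queue bound as 'admin.logs' only sees that exact key; a queue bound
// as '*.logs' sees log traffic from every service.
var_dump(topicMatches('admin.logs', 'admin.logs'));    // bool(true)
var_dump(topicMatches('*.logs',     'vault.logs'));    // bool(true)
var_dump(topicMatches('*.logs',     'vault.metrics')); // bool(false)
```

This is why `call()` above only validates the routing string: once the key is well-formed, fan-out is entirely the exchange's job.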

346 classes/gacMeta.class.inc Normal file

@@ -0,0 +1,346 @@
<?php
/**
* Class gacMeta -- GivingAssistant Class Meta
*
* meta data is a class pseudo-template definition which defines the meta-data fields, and their respective types,
* that are added to every collection.
*
* These fields (with the exception of the status field) are usually masked from the user and are managed solely by
* the framework although some of the meta field data must be fed to the framework via the published request.
*
* by defining the meta-data as a class, we relieve ourselves of the responsibility of having to explicitly declare
* the meta data in every defined class, and changing the meta data is easier because a single origin point.
*
* NOTES:
* ------
* The meta data sub-array is referenced in all collections via the HISTORY column.
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 06-09-17 mks original coding
* 01-24-18 mks _INF-139: added BROKER_REQUEST_MIGRATION to meta skip-check
* 03-02-18 mks CORE-680: deprecated trace logging
* 05-04-18 mks _INF-188: updated meta fields for some undocumented feature fun
* 07-31-18 mks CORE-774: PHP7.2 exception handling
* 11-15-18 mks DB-63: updated with new meta fields
* 12-06-18 mks DB-55: imported validateMetaFields() from gasStatic, refactored, and deprecated the
* previous method: validateMeta()
* 12-12-18 mks DB-77: added undoc'd audit members for env and audit checks
* 02-04-19 mks DB-108: added CLIENT_UNIT to valid client list
* 04-02-19 mks DB-116: added META_AUDIT_EVENT to valid client list
* 01-07-20 mks DB-150: PHP7.4 member variable type-casting
* 03-25-20 mks PD-18: added support for sessionID
* 06-02-20 mks ECI-103: added support for API client partners
*
*/
class gacMeta {
public array $fields = [
META_TEMPLATE => DATA_TYPE_STRING,
META_DO_CACHE => DATA_TYPE_BOOL,
META_SKIP => DATA_TYPE_INTEGER,
META_LIMIT => DATA_TYPE_INTEGER,
META_LIMIT_OVERRIDE => DATA_TYPE_INTEGER,
META_SYSTEM_NOTES => DATA_TYPE_STRING,
META_CLIENT => DATA_TYPE_STRING,
META_TARGET_ENV => DATA_TYPE_STRING,
META_SESSION_IP => DATA_TYPE_STRING,
META_SESSION_ID => DATA_TYPE_STRING,
META_CLIENT_IP => DATA_TYPE_STRING,
META_EVENT_GUID => DATA_TYPE_STRING,
META_SESSION_GUID => DATA_TYPE_STRING,
META_USER_GUID => DATA_TYPE_STRING,
META_USER_INFO => DATA_TYPE_STRING,
META_BROKER_CHILD_GUID => DATA_TYPE_STRING,
META_BROKER_GROOT => DATA_TYPE_STRING,
META_SESSION_DATE => DATA_TYPE_INTEGER,
META_SESSION_EVENT => DATA_TYPE_STRING,
META_SESSION_MISC => DATA_TYPE_STRING,
META_SESSION_LOCATION => DATA_TYPE_STRING,
META_SESSION_DAEMON => DATA_TYPE_INTEGER, // todo: currently unused; wire this up
META_BROKER_SERVICE => DATA_TYPE_STRING,
META_DONUT_FILTER => DATA_TYPE_INTEGER,
META_AUDIT_EVENT => DATA_TYPE_INTEGER,
META_SKIP_AUDIT => DATA_TYPE_INTEGER,
CLIENT_AUTH_TOKEN => DATA_TYPE_STRING,
META_TLTI => DATA_TYPE_STRING
];
public array $skipChecksForMeta = [
BROKER_REQUEST_PING,
BROKER_REQUEST_SCHEMA,
BROKER_REQUEST_SHUTDOWN,
BROKER_REQUEST_LOG,
BROKER_REQUEST_MET,
BROKER_REQUEST_MIGRATION
];
public array /** @noinspection PhpUnused */ $allowedSessionStates = [
STATUS_NEW,
STATUS_PENDING,
STATUS_ACTIVE
];
public array $validClients = [
CLIENT_SYSTEM,
CLIENT_CSR,
CLIENT_UNIT,
CLIENT_AUDIT,
CLIENT_CLIENT,
CLIENT_API,
CLIENT_API_USER
];
// places (domains) where namaste lives
public array /** @noinspection PhpUnused */ $validEnvironments = [
ENV_ADMIN,
ENV_APPSERVER, // aka: namaste
ENV_SEGUNDO,
ENV_TERCERO
];
public bool $debug;
public gacErrorLogger $logger;
public array $config;
public array $eventMessages;
private string $res = 'META: ';
/**
* gacMeta constructor.
*
* sets public variables and, if explicitly requested, loads a logger class object.
*
* NOTES:
* ------
* about the XML configuration:
*
* The base-xml file configuration contains a new sub-section called: "meta"
* Within this meta header, all of the valid clients are defined.
* Within each client block, the required meta fields are listed and each meta field tag contains a boolean
* value. The boolean setting is handled differently depending on the field as follows:
*
* clientID: required for all clients
* eventGUID: required for all clients
*
* Fields that are not required assume the defaults.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
 * @param bool $_ll -- Load Logger (defaults to false)
*
*
* HISTORY:
* ========
* 06-09-17 mks original coding
*
*/
public function __construct(bool $_ll = false)
{
$this->debug = gasConfig::$settings[CONFIG_DEBUG];
$this->config = [];
$this->eventMessages = [];
if ($_ll) {
try {
$this->logger = new gacErrorLogger();
} catch (Throwable $t) {
$msg = ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
$this->eventMessages[] = $msg;
if (isset($this->logger) and $this->logger->available) {
$this->logger->error($msg);
} else {
consoleLog($this->res, CON_ERROR, $msg);
}
}
return;
}
if (!empty(gasConfig::$settings[CONFIG_META])) {
$this->config = gasConfig::$settings[CONFIG_META];
} else {
$msg = sprintf(INFO_LOC, basename(__FILE__), __LINE__, ERROR_CONFIG_404);
$this->eventMessages[] = $msg;
if (isset($this->logger) and $this->logger->available) $this->logger->fatal($msg);
consoleLog($this->res, CON_SYSTEM, $msg);
}
}
/**
 * validateMetaPayload() -- public method
 *
 * this public method validates a meta-data payload submitted via the broker.
*
* the method ultimately returns a Boolean indicating whether or not ALL meta elements passed validation. In this
* context, validation means we're validating that the meta fields are permitted and the defined types for each
* field meet requirements.
*
* There is one input parameter to the method, described as follows:
*
* $_meta - the meta data payload, an associative array (vector) of key-value pairs
*
* requirements for success:
* 1. that $_meta is an array and...
* 2. that the $_meta keys are known...
* 3. that all the $meta types are defined...
* 4. that all of meta keys pass their respective validation requirements
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_meta -- associative vector containing meta payload from the broker event
* @return bool -- true: all meta data was validated successfully, false: it was not
*
*
* HISTORY:
* ========
* 06-09-17 mks original coding
* 07-30-18 mks CORE-774: PHP7.2 exception handling
* 12-06-18 mks DB-55: pulled this code over from gasStatic to replace the local method (validateMeta -
* which has been relocated to the deprecated folder) to provide a single, consistent
* meta-data payload validation to the framework.
 * 06-02-20 mks ECI-108: support for SMAX API clients, fixed bug by eliminating $_results
*/
public function validateMetaPayload(&$_meta): bool
{
$badField = false;
try {
if (!is_array($_meta)) {
$msg = ERROR_DATA_MISSING_ARRAY . STRING_META;
$this->logger->data($msg);
$this->eventMessages[] = $msg;
return (false);
}
foreach ($_meta as $key => &$value) {
$badField = false;
if (!array_key_exists($key, $this->fields)) {
$msg = sprintf(NOTICE_META_DISCARD, $key);
$this->eventMessages[] = $msg;
$this->logger->error($msg);
unset($_meta[$key]);
} else {
switch ($key) {
case META_SESSION_GUID :
case META_USER_GUID :
case META_BROKER_CHILD_GUID :
case META_BROKER_GROOT :
case META_EVENT_GUID :
case META_SESSION_ID :
case CLIENT_AUTH_TOKEN :
if (!validateGUID($value)) {
$msg = ERROR_INVALID_GUID . $key . COLON . $value;
$this->eventMessages[] = $msg;
$this->logger->error($msg);
$badField = true;
}
break;
case META_SESSION_IP :
case META_CLIENT_IP :
if (false === (filter_var($value, FILTER_VALIDATE_IP, FILTER_FLAG_IPV4 | FILTER_FLAG_IPV6))) {
$msg = ERROR_INVALID_IP . $key . COLON . $value;
$this->eventMessages[] = $msg;
$this->logger->error($msg);
$badField = true;
}
break;
case META_SKIP :
case META_LIMIT :
case META_LIMIT_OVERRIDE :
case META_SESSION_DAEMON :
case META_DONUT_FILTER :
case META_SESSION_DATE :
case META_SKIP_AUDIT :
case META_AUDIT_EVENT :
if (!is_numeric($value)) {
// report the expected type as defined for this field
$msg = ERROR_DATA_INVALID_FORMAT . COLON . $key . ERROR_STUB_EXPECTING . $this->fields[$key];
$msg .= ERROR_STUB_RECEIVED . gettype($value);
$this->logger->error($msg);
$this->eventMessages[] = $msg;
$badField = true;
}
break;
case META_DO_CACHE :
if (!is_bool($value) and ($value != 0 and $value != 1)) {
$msg = ERROR_DATA_INVALID_FORMAT . COLON . $key . ERROR_STUB_EXPECTING . DATA_TYPE_BOOL;
$msg .= ERROR_STUB_RECEIVED . gettype($value);
$this->logger->error($msg);
$this->eventMessages[] = $msg;
$badField = true;
}
break;
case META_TEMPLATE :
case META_SYSTEM_NOTES :
case META_TARGET_ENV :
case META_USER_INFO :
case META_SESSION_EVENT :
case META_SESSION_MISC :
case META_SESSION_LOCATION :
case META_TLTI :
case META_BROKER_SERVICE :
if (!is_string($value)) {
$msg = ERROR_DATA_INVALID_FORMAT . COLON . $key . ERROR_STUB_EXPECTING . DATA_TYPE_STRING;
$msg .= ERROR_STUB_RECEIVED . gettype($value);
$this->logger->error($msg);
$this->eventMessages[] = $msg;
$badField = true;
}
break;
case META_CLIENT :
if (!in_array($value, $this->validClients)) {
$msg = ERROR_DATA_RANGE . COLON . $key . COLON . $value;
$this->eventMessages[] = $msg;
$this->logger->error($msg);
$badField = true;
}
break;
default :
$msg = sprintf(ERROR_UNK_META_TYPE, gettype($value), $key);
$this->eventMessages[] = $msg;
$this->logger->data($msg);
$badField = true;
break;
}
}
if ($badField) {
$this->logger->error(ERROR_DATA_META_REJECTED . $key);
unset($_meta[$key]);
$anyBad = true;
}
}
// success only if every field passed validation; bad fields were discarded above
return !isset($anyBad);
} catch (TypeError $t) {
consoleLog($this->res, CON_ERROR, $t->getMessage());
return false;
}
}
/**
* __destruct() -- public function
*
* class destructor
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-09-17 mks original coding
*
*/
public function __destruct()
{
// As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
//
// destructor is registered shut-down function in constructor -- so any recovery
// efforts should go in this method.
}
}
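A minimal, runnable sketch of the per-type checks validateMetaPayload() applies. The field names are illustrative (not the real META_* constant values), and the framework's validateGUID() helper is stubbed with a plain UUID regex:

```php
<?php
// Minimal sketch of validateMetaPayload()'s per-type checks. Field names are
// illustrative assumptions; validateGUID() is stubbed with a UUID regex.
function looksLikeGuid(string $v): bool
{
    return (bool) preg_match('/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i', $v);
}

$meta = [
    'sessionIP'   => '10.1.2.3',
    'sessionGUID' => '6fa459ea-ee8a-3ca4-894e-db77e160355e',
    'limit'       => 'not-a-number',  // fails the numeric check below
];

$allValid = true;
foreach ($meta as $key => $value) {
    switch ($key) {
        case 'sessionIP':
            $ok = filter_var($value, FILTER_VALIDATE_IP, FILTER_FLAG_IPV4 | FILTER_FLAG_IPV6) !== false;
            break;
        case 'sessionGUID':
            $ok = looksLikeGuid($value);
            break;
        case 'limit':
            $ok = is_numeric($value);
            break;
        default:
            $ok = false;  // unknown keys are rejected
    }
    if (!$ok) {
        unset($meta[$key]);  // discard the bad field, as the class does
        $allValid = false;
    }
}
var_dump($allValid);         // bool(false): 'limit' failed
print_r(array_keys($meta));  // the two valid fields survive
```

As in the class, invalid fields are silently discarded from the payload and the overall result reflects whether every field passed.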

File diff suppressed because it is too large

3218 classes/gacMongoDB.class.inc Normal file (diff suppressed because it is too large)

3879 classes/gacPDO.class.inc Normal file (diff suppressed because it is too large)

@@ -0,0 +1,167 @@
<?php
/**
* gacSystemEvents class
* ---------------------
* This class is for processing system events:
* -- Broker Events (instantiation, forks, events, etc.)
* -- Audit Events
* -- Journaling Events
*
 * This class instantiates its own copy of gacAdminClientIn for publishing messages to the AdminIN broker. As such,
 * this class can be safely instantiated from any Namaste environment.
 *
 * @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-16-17 mks CORE-500: initial coding
* 03-02-18 mks CORE-680: deprecated trace logging
* 08-02-18 mks CORE-774: PHP7.2 exception handling
* 01-08-20 mks DB-150: PHP7.4 member type-casting
*/
class gacSystemEvents extends gacMongoDB
{
private ?gacWorkQueueClient $aiClient;
private string $res = 'cSEV: ';
/**
* gacSystemEvents constructor
*
* The constructor takes an optional parameter:
*
* $_meta -- the meta data as received by the broker event
*
 * If the meta parameter is empty, the assumption is that this class was instantiated client-side
* respective to the admin broker and we're preparing a publish event.
*
* The responsibilities of the constructor are basically to instantiate the class for pending requests.
*
 * @author mike@givingassistant.org
* @version 1.0
*
* @param null $_meta
*
*
* HISTORY:
* ========
* 08-17-17 mks CORE-500: original coding
* 10-24-18 mks DB-57: mod for the meta data template override
*
*/
public function __construct($_meta = null)
{
if (empty($_meta)) {
$_meta = [
META_TEMPLATE => TEMPLATE_CLASS_SYS_EVENTS,
META_SESSION_DAEMON => 1
];
}
// sometimes I forget to replace the class template in the meta data that initiated the system event...
if ($_meta[META_TEMPLATE] != TEMPLATE_CLASS_SYS_EVENTS) $_meta[META_TEMPLATE] = TEMPLATE_CLASS_SYS_EVENTS;
try {
parent::__construct($_meta);
} catch (Throwable $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$msg = ERROR_THROWABLE_EXCEPTION . COLON . $t->getMessage();
@handleExceptionMessaging($hdr, $msg, $this->eventMessages, true);
$this->state = STATE_FRAMEWORK_FAIL;
$this->status = false;
return;
}
if (!$this->status) {
$msg = ERROR_FAILED_TO_INSTANTIATE . STRING_CLASS_MONGO;
if (isset($this->logger) and $this->logger->available)
$this->logger->warn($msg);
else
consoleLog($this->res, CON_ERROR, $msg);
$this->eventMessages[] = $msg;
return;
}
$this->aiClient = null;
$this->class = get_class($this);
}
/**
* fetchRecordBySessionGUID() -- public method
*
* This session-events class function requires a single input parameter:
*
* $_sessionGUID -- this is the session GUID, unique by definition, that we'll use to fetch the sys-event record
*
* The method assumes (because it was validated by the adminOut broker that called this method) that the GUID is valid.
 * We exec a schema-fetch with a query built to filter by the session GUID. On success, the entire record is
 * populated into the current data object; otherwise, error messages are recorded.
*
* The function returns type void - success or failure can be tested by the class state/status.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_sessionGUID
*
*
* HISTORY:
* ========
* 10-20-20 mks DB-168: original coding
*
*/
public function fetchRecordBySessionGUID(string $_sessionGUID):void
{
$query = [STRING_QUERY_DATA => [ SYSTEM_EVENT_FK_SESSION_GUID => [OPERAND_NULL => [OPERATOR_EQ => [$_sessionGUID]]]]];
$this->_fetchRecords($query);
if (!$this->status) {
$this->eventMessages[] = ERROR_NOSQL_FETCH;
consoleLog($this->res, CON_ERROR, sprintf(ERROR_MDB_FETCH_FAIL, TEMPLATE_CLASS_SYS_EVENTS) . SYSTEM_EVENT_FK_SESSION_GUID);
} elseif ($this->count != 1) {
$this->eventMessages[] = ERROR_FETCH;
$this->logger->warn(sprintf(ERROR_DATA_RECORD_COUNT, 1, $this->count));
$this->logger->warn(json_encode($query));
}
}
/**
 * __clone() -- private method
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 08-17-15 mks CORE-500: original coding
*
*/
private function __clone()
{
return null;
}
/**
* __destruct() -- public method
*
* As of PHP 5.3.10, destructors are not run on shutdowns caused by fatal errors - since the destructor is
* now registered in the constructor method, recovery and/or clean-up efforts should go into this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-17-15 mks CORE-500: original coding
*
*/
public function __destruct()
{
//do nothing
}
}
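fetchRecordBySessionGUID() hands _fetchRecords() a nested filter array. Its shape can be sketched with placeholder strings; the literal keys here stand in for the STRING_QUERY_DATA / SYSTEM_EVENT_FK_SESSION_GUID / OPERAND_NULL / OPERATOR_EQ constants, whose real values are assumptions:

```php
<?php
// Shape of the filter fetchRecordBySessionGUID() builds. The literal keys
// stand in for framework constants whose real values are assumptions.
$sessionGUID = '6fa459ea-ee8a-3ca4-894e-db77e160355e';

$query = [
    'queryData' => [
        'fkSessionGUID' => [
            'null' => ['eq' => [$sessionGUID]],  // equality match on the session GUID
        ],
    ],
];
echo json_encode($query) . PHP_EOL;
```

Because the session GUID is unique by definition, the method then checks that exactly one record came back ($this->count == 1) and warns otherwise.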

962 classes/gacUsers.class.inc Normal file

@@ -0,0 +1,962 @@
<?php
/**
* Class gacUsers -- public GA class
*
* This is a Namaste::Admin data class for all things User.
*
* The user data lives on Namaste's Admin service. This class encompasses all the code for managing the user entity
* including CRUD requests, and general user-type events such as login, logout, etc. The sister-table to this
* collection is also a mongo collection living on Namaste::Admin and that's the session class.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 08-17-20 mks DB-168: original coding
*
*/
class gacUsers extends gacMongoDB
{
public string $res = 'cUSR: ';
public ?array $securityConfig = null;
public ?array $emailConfig = null;
public ?array $sessionConfig = null;
public string $myToken;
public string $myAPIKey;
public string $myUserType;
/**
* gacUsers constructor.
* @param array|null $_meta
* @param string $_id
*/
public function __construct(?array $_meta = null, string $_id = '')
{
if (empty($_meta)) { // empty() already covers null
$_meta = [
META_TEMPLATE => TEMPLATE_CLASS_USERS,
META_SESSION_DAEMON => 1 // todo -- hook this up to something!
];
} elseif (!isset($_meta[META_TEMPLATE])) {
// client didn't submit a template; let's fix that for them
$_meta[META_TEMPLATE] = TEMPLATE_CLASS_USERS;
} elseif ($_meta[META_TEMPLATE] != TEMPLATE_CLASS_USERS) {
// there's a difference between unset and being set to the wrong class
// todo -- system event for a hack attempt
$_meta[META_TEMPLATE] = TEMPLATE_CLASS_USERS;
}
try {
parent::__construct($_meta, $_id);
if (!$this->isServiceLocal(ENV_TERCERO)) return;
$this->myToken = '';
$this->myAPIKey = '';
$this->myUserType = '';
$this->securityConfig = gasConfig::$settings[CONFIG_SECURITY];
$this->emailConfig = gasConfig::$settings[CONFIG_EMAIL];
$this->sessionConfig = gasConfig::$settings[CONFIG_SESSIONS];
} catch (Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$this->eventMessages[] = ERROR_EXCEPTION;
$msg = ERROR_FAILED_TO_INSTANTIATE . TEMPLATE_CLASS_USERS;
$this->logger->fatal($hdr . $msg);
consoleLog($this->res, CON_ERROR, $msg);
$this->state = STATE_FRAMEWORK_FAIL;
$this->status = false;
return;
}
}
/**
* registerNewUser() -- public method
*
* This method is called when we're adding a new user to the system. The array of request data, as processed by
* the broker, is the only input parameter.
*
* The function is of type void -- processing results are reflected in the class member settings.
*
 * The method processes the input data, ensuring that the minimally-required fields are present. It then
* performs the following validation checks and storage actions:
*
* 1. Validate the user's email address and domain
* 2. Create the user record
* 3. Create the user's session, linking the user record
* 4. Create a System Event for the timer (expiry) event on Admin
* 5. Publish an Admin request to register the session with AT(1)
*
 * This method works with three different class objects. If a system error (a failure to instantiate a class or
 * save a record) is encountered in the session or system-event classes, we copy the eventMessages stack on that
 * object over to the user object before returning control to the calling client.
*
* Again, as there are no explicit or implicit returns via the parameters, the calling client has to check the
* user-class data members for the results.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_data
*
*
* HISTORY:
* ========
* 09-14-20 mks DB-168: Original coding
*
*/
public function registerNewUser(array $_data):void
{
$error = false;
$this->state = STATE_VALIDATION_ERROR;
$this->status = false;
// we minimally need the user email and hashed password to proceed
$minFields = [ USER_PII_EMAIL . $this->ext, USER_PASSWORD . $this->ext ]; // add fields as necessary
// first, we're only going to allow the creation of one record per request...
// ...so make sure $data is an array...
if (!is_array($_data[BROKER_DATA])) {
$this->eventMessages[] = ERROR_DATA_ARRAY_NOT_ARRAY;
return;
}
// ...and that the array only has one record
if (count($_data[BROKER_DATA]) != 1) {
$this->eventMessages[] = sprintf(ERROR_DATA_ARRAY_COUNT, 1, STRING_DATA, count($_data[BROKER_DATA]));
return;
}
$data = $_data[BROKER_DATA][0];
// make sure the minimally-required fields are present:
foreach ($minFields as $field) {
if (!array_key_exists($field, $data)) {
$this->eventMessages[] = ERROR_DATA_KEY_404 . $field;
$error = true;
}
}
if ($error) return;
// email validation: proper email, wblist, email does not already exist
$this->validateUserEmail($data[USER_PII_EMAIL . $this->ext]);
if (!$this->status) return;
// grab the partnerID, if it exists
// (if no partner ID, the assumption is that we're creating an internal user)
if (array_key_exists(CLIENT_AUTH_TOKEN, $_data[BROKER_META_DATA]))
$data[USER_PARTNER_API_KEY] = $_data[BROKER_META_DATA][CLIENT_AUTH_TOKEN];
// create the new-user record and start gathering session data
$this->_createRecord([$data]);
if (!$this->status) return;
// calculate the session duration based on the XML configuration
$duration = (gasConfig::$settings[CONFIG_SESSIONS][CONFIG_SESSIONS_DURATION_DAYS])
? gasConfig::$settings[CONFIG_SESSIONS][CONFIG_SESSIONS_DURATION_DAYS] * NUMBER_ONE_DAY
: gasConfig::$settings[CONFIG_SESSIONS][CONFIG_SESSIONS_DURATION_HOURS] * NUMBER_ONE_HOUR_SEC;
// build the session-record payload
$sessionData = [
SESSION_EXPIRES => time() + $duration,
SESSION_DURATION => $duration,
SESSION_LEVEL => SESSION_LEVEL_USER,
SESSION_FK_USER => $this->getColumn(DB_TOKEN)
// todo: other fields are legacy and we need to learn what to fill them with...
];
// instantiate a new session object
$metaCopy = $this->metaPayload;
$metaCopy[META_TEMPLATE] = TEMPLATE_CLASS_SESSIONS;
$errors = [];
if (is_null($objSession = grabWidget($metaCopy, '', $errors))) {
$this->eventMessages = [...$this->eventMessages, ...$errors];
$this->state = STATE_FRAMEWORK_WARNING;
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = ERROR_TEMPLATE_INSTANTIATE . $metaCopy[META_TEMPLATE];
$this->logger->warn($hdr . $msg);
consoleLog($this->res, CON_ERROR, $msg);
return;
}
// create the session record
$objSession->_createRecord([$sessionData], DATA_USER);
if (!$objSession->status) {
$this->eventMessages = [ ...$this->eventMessages, ...$objSession->eventMessages ];
if (is_object($objSession)) $objSession->__destruct();
unset($objSession);
$this->state = STATE_FRAMEWORK_FAIL;
return;
}
// these two values will be harvested back up at the broker level and used to create the return payload
$this->sessionGUID = $objSession->getColumn(DB_TOKEN);
$this->userGUID = $this->getColumn(DB_TOKEN);
// create the system-event data and publish the event to admin (or save locally if admin is local)
$eventData = [
SYSTEM_EVENT_NAME => EVENT_NAME_SESSION_EXPIRY,
SYSTEM_EVENT_STATUS => STATUS_ACTIVE,
SYSTEM_EVENT_TYPE => EVENT_TYPE_SESSION,
SYSTEM_EVENT_FK_SESSION_GUID => $this->sessionGUID,
SYSTEM_EVENT_FK_USER_GUID => $this->userGUID,
SYSTEM_EVENT_CLASS => get_class($this),
SYSTEM_EVENT_DURATION => $duration,
SYSTEM_EVENT_BROKER_EVENT => $_data[BROKER_REQUEST],
SYSTEM_EVENT_OGUID => $metaCopy[META_EVENT_GUID],
SYSTEM_EVENT_CODE_LOC => basename(__FILE__) . AT . __LINE__,
SYSTEM_EVENT_META_DATA => $metaCopy,
SYSTEM_EVENT_NOTES => basename(__METHOD__)
];
// invoke the function to publish the system event request and register the session with AT(1) on Admin:
if (!postSystemEvent($eventData, $metaCopy[META_EVENT_GUID], $this->logger)) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
@handleExceptionMessaging($hdr, ERROR_MDB_SYS_EVENT_SAVE, $this->eventMessages, true);
}
// call admin to fetch the token for the system event record we just created
// next, register the event with the admin broker AT(1) service
$request = [
BROKER_REQUEST => BROKER_REQUEST_NEW_SESSION,
BROKER_DATA => [ SYSTEM_EVENT_DURATION => $duration ],
BROKER_META_DATA => [
META_TEMPLATE => TEMPLATE_CLASS_SYS_EVENTS,
META_SESSION_GUID => $this->sessionGUID,
META_CLIENT => CLIENT_SYSTEM
]
];
// get a copy of the record we just created
// $dataSystemEvent = $objSession->getData();
// $dataSystemEvent = $dataSystemEvent[0];
// create the broker client and publish the BROKER_REQUEST_NEW_SESSION event to AdminInBroker to register
// the session expiry with AT(1)
/** @var gacWorkQueueClient $tmpObj */
$tmpObj = new gacWorkQueueClient(basename(__METHOD__) . AT . __LINE__);
if (!$tmpObj->status) { // "new" cannot return null, so only the status flag needs checking
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = ERROR_TEMPLATE_INSTANTIATE . sprintf(ERROR_BROKER_CLIENT_INSTANTIATION, STRING_WORK_QUEUE_CLIENT);
@handleExceptionMessaging($hdr, $msg, $this->eventMessages, true);
return;
}
// fire-n-forget queue
$tmpObj->call(gzcompress(json_encode($request)));
// todo: validation email
// clean-up and return a success condition
if (is_object($objSession)) $objSession->__destruct();
if (is_object($tmpObj)) $tmpObj->__destruct();
unset($objSession, $tmpObj);
$this->state = STATE_SUCCESS;
$this->status = true;
}
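/*
 * The session-duration fallback in registerNewUser() (use the day-based XML setting
 * when present, otherwise fall back to hours) can be sketched language-neutrally.
 * The following is an illustrative Python sketch, not framework code; the dictionary
 * keys and constant values are hypothetical stand-ins for the CONFIG_* / NUMBER_*
 * constants.
 */

```python
# Illustrative sketch of the session-duration fallback: a truthy day-based
# setting wins; a zero/absent value falls through to the hour-based setting
# (mirroring the PHP ternary over gasConfig::$settings).
ONE_DAY_SEC = 86400
ONE_HOUR_SEC = 3600

def session_duration(settings: dict) -> int:
    """Return the session duration in seconds from a sessions config section."""
    sessions = settings.get("sessions", {})
    days = sessions.get("duration_days")
    if days:  # truthy check: 0 or absent falls through to hours
        return days * ONE_DAY_SEC
    return sessions.get("duration_hours", 0) * ONE_HOUR_SEC

def session_expiry(settings: dict, now: int) -> int:
    """Absolute expiry timestamp, as stored in the SESSION_EXPIRES field."""
    return now + session_duration(settings)
```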
/**
* validateUserEmail() -- public method
*
* this method is the single entry-point for all email checks.
*
* this method checks a submitted email address:
*
* 1. sanitize the email removing invalid characters
* 2. validate that the email is in the correct format
* 3. validate the email domain
* 4. validate that the email is unique
* 5. validate against the WBL list
*
* If case 1 or case 2 fails, then a STATE_VALIDATION_ERROR is returned -- the email address itself is invalid
*
* Next, check to see if the email address exists in the database. We're going to check both the primary
* and alternate email addresses.
*
* if the query comes back successful and the data count is non-zero, that's an implicit indication that the
* email is in-use. What we want to do is make an additional check on the status of the email.
*
* If the status from the query is false, then we want to check for a 404-state -- indicating that no records
* were found for that email and it's ok to use.
*
* any other state/status combination defaults to a framework warning and a check-logs diagnostic is generated.
*
* Return States:
* --------------
* STATE_SUCCESS -- email address is valid, not already in-use, and is white-listed
* STATE_NOT_WHITE_LIST -- email does not appear on the white list and white-list checks are enabled
* STATE_BLACK_LIST -- email has been black-listed
* STATE_VALIDATION_ERROR -- email address as submitted was empty or malformed
* STATE_ALREADY_EXISTS -- email address is in-use and cannot be reused
* STATE_MAIL_FAIL -- processing error within the framework
* STATE_FRAMEWORK_WARNING -- some random bad thing happened and needs investigation
*
* Note that the $_email parameter is a call-by-reference whose contents may be altered by this method.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_email -- the email to validate/check
* @param Boolean $_skipCheck -- if set to true, skip email-already-exists check (for logins)
*
* HISTORY:
* ========
* 08-18-20 mks DB-168: original coding
*
*/
public function validateUserEmail(string &$_email, bool $_skipCheck = false):void
{
$oldData = null;
$whiteList = true; // per-client WBL flags are currently hardcoded on; checkWBL() consults the system-wide config
$blackList = true;
$oldCount = 0;
$this->state = STATE_VALIDATION_ERROR;
$this->status = false;
$backup = false;
if (empty($this->metaPayload)) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$this->logger->warn($hdr . ERROR_DATA_META_404);
$this->eventMessages[] = ERROR_DATA_META_404;
$this->status = false;
$this->state = STATE_META_ERROR;
return;
}
$metaCopy = $this->metaPayload;
$metaCopy[META_TEMPLATE] = TEMPLATE_CLASS_WBL;
if ($this->count and !empty($this->data)) {
$oldData = $this->getData();
$oldCount = $this->count;
$this->data = [];
$this->count = 0;
$backup = true;
}
// ensure we have input parameters
if (empty($_email)) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = ERROR_DATA_KEY_404 . USER_PII_EMAIL;
$this->eventMessages[] = $msg;
$this->logger->data($hdr . $msg);
if ($backup) {
$this->count = $oldCount;
$this->data = $oldData;
}
return;
}
if (empty($this->emailConfig) or !is_array($this->emailConfig)) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = ERROR_CONFIG_RESOURCE_404 . RESOURCE_EMAIL;
$this->eventMessages[] = $msg;
$this->logger->data($hdr . $msg);
if ($backup) {
$this->count = $oldCount;
$this->data = $oldData;
}
return;
}
// force the user email to lowercase
$_email = trim(mb_strtolower($_email));
// validate the email and email domain
if (!checkEmailAndDomain($_email)) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = ERROR_DIAG_EMAIL_MALFORMED . COLON . $_email;
@handleExceptionMessaging($hdr, $msg, $this->eventMessages);
$this->state = STATE_MAIL_FAIL;
// restore the backed-up class data, as every other early-return path does
if ($backup) {
$this->count = $oldCount;
$this->data = $oldData;
}
return;
}
// check for duplicate emails
if (!$_skipCheck) {
$state = $this->emailSearch($_email);
switch ($state) {
case STATE_FRAMEWORK_WARNING :
case STATE_ALREADY_EXISTS :
$this->state = $state;
$this->status = false;
if ($backup) {
$this->count = $oldCount;
$this->data = $oldData;
}
return;
case STATE_DOES_NOT_EXIST :
// do nothing: optimal return
break;
default :
$this->status = false;
$msg = ERROR_UNKNOWN_STATE . $state;
$this->eventMessages[] = $msg;
$this->logger->warn($msg);
if ($backup) {
$this->count = $oldCount;
$this->data = $oldData;
}
return;
}
}
// if either whitelisting or blacklisting are enabled for the client, then check the wbl
if ($whiteList or $blackList) {
try {
$this->checkWBL($_email);
if ($this->debug) {
$this->logger->debug('email checked: ' . $_email);
$this->logger->debug('objWBL state: ' . $this->state);
$this->logger->debug('objWBL status: ' . (($this->status) ? STRING_TRUE : STRING_FALSE));
}
} catch (Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = $t->getMessage();
$this->eventMessages[] = ERROR_EXCEPTION;
@handleExceptionMessaging($hdr, $msg, $this->eventMessages);
array_pop($this->eventMessages); // drop the raw exception detail but keep eventMessages an array
return;
}
} else {
$this->state = STATE_SUCCESS;
$this->status = true;
}
// reset the current class data
if ($backup) {
$this->count = $oldCount;
$this->data = $oldData;
}
}
/**
* emailSearch() -- private method
*
* this method accepts an email address as its only input parameter and generates a query to check to
* see if the email already exists in the user collection.
*
* the method returns a state (string) determined by the following conditions:
*
* STATE_FRAMEWORK_WARNING - processing or db error
* STATE_ALREADY_EXISTS - email is already in-use in the db
* STATE_DOES_NOT_EXIST - email is not in-use (desired return)
*
* The calling client should evaluate the return state accordingly as this method does not change the class
* state/status params.
*
* The calling client should also reset the data payload as, if a record is found, then the found record will
* be added to the current data member.
*
* This method was written to reduce the code footprint of the validateUserEmail method.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_email
* @return string
*
* HISTORY:
* ========
* 08-18-20 mks DB-169: original coding
*
*/
private function emailSearch(string $_email): string
{
$return = STATE_FRAMEWORK_WARNING;
// if tercero is not a "local" service, the duplicate check cannot run here
if (!gasConfig::$settings[ENV_TERCERO][CONFIG_IS_LOCAL]) return $return;
// set-up the email query
$query = [
STRING_QUERY_DATA => [
USER_PII_EMAIL => [ OPERAND_NULL => [ OPERATOR_EQ => [ $_email ]]],
USER_PII_SECONDARY_EMAIL => [ OPERAND_NULL => [ OPERATOR_EQ => [ $_email]]],
OPERAND_OR => null
],
STRING_RETURN_DATA => [ CM_TOKEN ]
];
// query the db
$this->_fetchRecords($query);
switch ($this->status) {
case true :
if ($this->count) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = sprintf(ERROR_EMAIL_DUPLICATE, $_email);
$this->eventMessages[] = $msg;
$this->logger->data($hdr . $msg);
$return = STATE_ALREADY_EXISTS;
} elseif ($this->state == STATE_NOT_FOUND) {
return STATE_DOES_NOT_EXIST;
}
break;
case false :
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$this->logger->warn($hdr . $this->strQuery);
$this->logger->warn($hdr . ERROR_CHECK_LOGS);
$this->eventMessages[] = ERROR_CHECK_LOGS;
break;
}
return($return);
}
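/*
 * The duplicate-email query and the three-way state mapping in emailSearch() can be
 * sketched as follows. This is an illustrative Python sketch only; the field and
 * operator names are hypothetical stand-ins for the USER_PII_* / OPERAND_* constants.
 */

```python
# Sketch of emailSearch(): match the address against both the primary and the
# secondary email field, OR'd together, returning only the record token; then
# map the fetch outcome onto the three states the method can return.
def build_email_query(email: str) -> dict:
    return {
        "query": {
            "email": {"eq": [email]},
            "secondary_email": {"eq": [email]},
            "or": None,          # marker telling the query builder to OR the clauses
        },
        "return": ["token"],     # only the record token is needed for the check
    }

def email_state(status: bool, count: int) -> str:
    """Map the fetch result onto emailSearch()'s three return states."""
    if not status:
        return "framework_warning"   # query/processing error
    return "already_exists" if count else "does_not_exist"
```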
/**
* checkWBL() -- public method
*
* this method checks the submitted email and domain against the White/Black list data.
*
* there is but one input parameter required for the method:
*
* $_email -- a string containing the email to be evaluated
*
* the method begins by looping through the email - the address is processed by first evaluating the
* entire email (for a match in the WBL table) and, if not found, then we continue with the domain-part of
* the email (right-side of the '@') and continue to remove sub-domains until we get to the TLD. If we
* get to the TLD, then the user is neither black-listed nor white-listed.
*
* if we get a domain match, or if we match the entire email, then we look at the WBL record "type" (a boolean)
* to determine if the WBL record is a black (false) or white (true) listed email.
*
* The following states are assigned to the class under the following conditions:
*
* STATE_DB_ERROR -- the WBL record was found, but no value was stored in the "type_wbl" column
* -- more than one WBL record was found for the email/domain
* -- the search query failed to execute successfully
* STATE_SUCCESS -- the email/domain is white-listed
* STATE_BLACK_LIST -- the email/domain is black-listed
* STATE_NOT_FOUND -- the email/domain is neither white-listed nor black-listed, or wbl is disabled
*
* the method return is the class STATE variable which should be evaluated by the calling client upon return.
*
* Programmer's Notes:
* -------------------
* This method is a member of the user class, as opposed to the WBList class, for efficiency - we can check a user's
* email in this class without resorting to instantiating another data class (WBList) for the check.
*
* PENDING WORK:
* -------------
* todo: code the security event when a black-listed return is encountered
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_email
*
* HISTORY:
* ========
* 08-19-20 mks DB-168: original coding
*
*/
public function checkWBL(string $_email):void
{
// lvar init
$wblState = null;
$emailBits = explode(AT, $_email);
$domain = $emailBits[1];
$diminishingDomain = $domain;
$method = basename(__METHOD__);
$domainBits = explode(DOT, $domain);
$counter = 0;
$errorList = [];
$this->status = false;
$this->state = STATE_VALIDATION_ERROR;
$blEnabled = boolval(gasConfig::$settings[CONFIG_SECURITY][CONFIG_SECURITY_BANNED_LIST]);
$wlEnabled = boolval(gasConfig::$settings[CONFIG_SECURITY][CONFIG_SECURITY_RESTRICTED_LIST]);
// return immediately if wbl is disabled system-wide
if (!$blEnabled and !$wlEnabled) {
$this->state = STATE_NOT_SUPPORTED;
$this->status = true;
return;
}
$tmpMeta = $this->metaPayload;
$tmpMeta[META_TEMPLATE] = TEMPLATE_CLASS_WBL;
/** @var gacMongoDB $widget */
if (is_null($widget = grabWidget($tmpMeta, '', $errorList))) {
foreach ($errorList as $error)
$this->logger->error($error);
$this->eventMessages = [...$this->eventMessages, ...$errorList];
return;
}
/*
* we start with all of the domain. If there are sub-domains embedded in the domain,
* start by searching with the left-most sub-domain and each iteration, remove a sub-domain
* until we either find an entry in the collection, or we run out of domain.
*
* This technique allows us to validate an email submitted such as: mike@backend.engineering.givva.com
* to one of the following:
*
* 1. mike@backend.engineering.givva.com
* 2. mike@engineering.givva.com
* 3. mike@givva.com
*
*/
// first, check to see if the email USER is explicitly listed in the WBL collection:
$query = [ USER_PII_EMAIL => [ OPERAND_NULL => [ OPERATOR_EQ => [ $_email ]]]];
$widget->_fetchRecords([STRING_QUERY_DATA => $query]);
if ($widget->status and $widget->count == 1) {
// email is either white or black-listed
if (is_null($wbl = $widget->getColumn(MONGO_WBL_TYPE))) {
// edge case: there is no WBL Type setting so ... this is a system event! Why would a user be listed
// in the table but not have a setting?!? Database error...
// todo: system event for incomplete data record found in the database
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$this->state = STATE_DATA_ERROR;
$this->eventMessages[] = ERROR_GENERIC_CUSTOMER;
$this->logger->warn($hdr . ERROR_BAD_DATA_RECORD . 'wbl-type is not populated');
if (is_object($widget)) $widget->__destruct();
unset($widget);
return;
}
// ... otherwise, we found a record for the email address -- check whether it's a white or black-listed entry
$this->state = (boolval($wbl)) ? STATE_SUCCESS : STATE_BLACK_LIST;
$this->status = ($this->state == STATE_SUCCESS);
if (is_object($widget)) $widget->__destruct();
unset($widget);
return;
} elseif ($widget->status and $widget->state == STATE_NOT_FOUND) {
// record was not found - which is not a problem unless whitelisting is enabled
$this->status = true;
$this->state = ($wlEnabled) ? STATE_NOT_WHITE_LIST : STATE_SUCCESS;
if (is_object($widget)) $widget->__destruct();
unset($widget);
return;
} elseif ($widget->status and $widget->count > 1 and $widget->state != STATE_NOT_FOUND) {
// more than one record was found
$msg = sprintf(MONGO_FAILED_TOO_MANY_RECS, 1) . $widget->count;
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
@handleExceptionMessaging($hdr, $msg, $this->eventMessages, true);
$this->state = STATE_DB_ERROR;
if (is_object($widget)) $widget->__destruct();
unset($widget);
return;
} elseif (!$widget->status) {
// query fail
$this->eventMessages[] = ERROR_CHECK_LOGS;
$this->state = STATE_FAIL;
if (is_object($widget)) $widget->__destruct();
unset($widget);
return;
}
// process the email DOMAIN until we find an entry in the wbl table or we run out of domain
for ($index = count($domainBits); $index > 1; $index--) {
$query = [USER_PII_EMAIL => [OPERAND_NULL => [OPERATOR_EQ => [(AT . $diminishingDomain)]]]];
$widget->_fetchRecords([STRING_QUERY_DATA => $query]);
$wblState = $widget->getColumn(MONGO_WBL_TYPE);
if ($widget->status and $widget->count == 1) {
// a wbl record exists for the domain
if (false === boolval($wblState)) {
// domain has been explicitly black listed
$this->state = STATE_BLACK_LIST;
$this->status = true;
if (is_object($widget)) $widget->__destruct();
unset($widget);
return;
// todo -- system event
} elseif (true === boolval($wblState)) {
// domain has been explicitly white listed
$this->state = STATE_SUCCESS;
$this->status = true;
if (is_object($widget)) $widget->__destruct();
unset($widget);
return;
} elseif (is_null($wblState)) {
// record exists but type is not defined
$this->state = STATE_DATA_ERROR;
if (is_object($widget)) $widget->__destruct();
unset($widget);
return;
}
} elseif ($widget->status and $widget->state == STATE_NOT_FOUND) {
// no records returned -- shrink the domain
// strip the left-most sub-domain as a prefix (substr, not ltrim: ltrim takes a character mask, not a prefix)
$diminishingDomain = substr($diminishingDomain, strlen($domainBits[$counter++]) + 1);
}
}
// if we're done processing the domain and we land here, then check $wblState for null
if (is_null($wblState) and $wlEnabled) {
// none of the domain was found and white-listing is enabled
$this->state = STATE_NOT_WHITE_LIST;
$this->status = false;
} else {
// wl is not enabled and no wbl record was found for any of the domain
$this->state = STATE_SUCCESS;
$this->status = true;
}
if (is_object($widget)) $widget->__destruct();
unset($widget);
}
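/*
 * The diminishing-domain search in checkWBL() generates a candidate list: the full
 * email first, then the domain with one left-most sub-domain removed per iteration,
 * stopping at the registrable "name.tld". The Python sketch below illustrates that
 * candidate generation (note it is prefix removal, not a character-mask trim); it is
 * not framework code.
 */

```python
# Sketch of checkWBL()'s lookup order for mike@backend.engineering.givva.com:
# the full email, then "@backend.engineering.givva.com", then
# "@engineering.givva.com", then "@givva.com".
def wbl_candidates(email: str) -> list:
    user, domain = email.split("@", 1)
    bits = domain.split(".")
    out = [email]                       # the full email is checked first
    while len(bits) >= 2:               # stop once only "name.tld" has been tried
        out.append("@" + ".".join(bits))
        bits = bits[1:]                 # drop the left-most sub-domain (prefix removal)
    return out
```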
/**
* hashText() -- protected method
*
* This method requires a single input parameter, the text to be hashed:
*
* $_text -- string containing the text to be hashed
*
* The method will check the XML configuration for the hashing algorithm to use. If the security section was
* not properly loaded during instantiation, or if the calling client did not provide input text, then an error
* message will be generated and a null value returned.
*
* Otherwise, we'll use the password_hash() function to generate a hash of the request text. If the function
* returns a Boolean(false), then we'll generate an error message and return a null to the requesting client.
*
* Otherwise, return the hashed string.
*
* Programmer's Notes:
* -------------------
* I've marked this as protected so that it cannot be invoked (generate a hash) outside of the user instantiation
* stack. This is to limit access to the Namaste hashing algorithm to only the user class.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_text
* @return string|null
*
*
* HISTORY:
* ========
* 08-31-20 mks DB-168: original coding
*
*/
protected function hashText(string $_text):?string
{
$method = basename(__METHOD__);
if (empty($this->securityConfig)) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$msg = ERROR_CONFIG_RESOURCE_404 . CONFIG_SECURITY;
@handleExceptionMessaging($hdr, $msg, $this->eventMessages);
return null;
}
if (empty($_text)) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$msg = ERROR_DATA_404;
@handleExceptionMessaging($hdr, $msg, $this->eventMessages);
return null;
}
try {
// if set, pull the hash algorithm from namaste config, o/w set to default and call hash function
$hash = password_hash($_text, (!isset($this->securityConfig[CONFIG_SECURITY_HASH_ALGO])) ? PASSWORD_ARGON2I : $this->securityConfig[CONFIG_SECURITY_HASH_ALGO]);
if (false === $hash) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$msg = ERROR_PASSWORD_HASH_GENERATION_FAILED;
@handleExceptionMessaging($hdr, $msg, $this->eventMessages);
return null;
}
return $hash;
} catch (TypeError | Throwable $t) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $this->eventMessages);
return null;
}
}
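/*
 * hashText() delegates to PHP's password_hash() (Argon2i by default, overridable via
 * the XML security config). The Python sketch below shows the same hash-then-verify
 * contract using stdlib PBKDF2 purely because it is available without dependencies;
 * the algorithm and parameters are illustrative, not what the framework uses.
 */

```python
# Rough analog of the hashText()/password_verify() contract: hash with a random
# salt, embed the salt in the stored value, verify with a constant-time compare.
# PBKDF2-HMAC-SHA256 stands in for Argon2i here; parameters are illustrative.
import hashlib, hmac, os

def hash_text(text: str, salt: bytes = None) -> str:
    if not text:
        return None  # mirrors hashText(): empty input yields null, no hash
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", text.encode(), salt, 100_000)
    return salt.hex() + "$" + digest.hex()

def verify_text(text: str, stored: str) -> bool:
    salt_hex, digest_hex = stored.split("$", 1)
    candidate = hashlib.pbkdf2_hmac("sha256", text.encode(), bytes.fromhex(salt_hex), 100_000)
    return hmac.compare_digest(candidate.hex(), digest_hex)
```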
/**
* hashFetch() -- private method
*
* This method has two input parameters:
*
* $_searchValue - this should be either a 36-character GUID value, or an email address
* $_retData -- call-by-reference parameter that, if submitted, will contain the user's type and api-key in
* addition to the password hash and account token
*
* We'll test to see if the input value is either a GUID or an email address and will structure the query
* to match. If the searchValue is not either, then generate error messages, return a null, and exit.
*
* If the query generated an error, or returns a not found, we'll generate appropriate messaging and return
* a null value to the calling client preserving class state/status and query results data.
*
* Otherwise, we'll return the password hash to the calling client.
*
* Programmer's Notes:
* -------------------
* This function is private so as to limit access to the user table for the purposes of accessing the hash key
* to this class only.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string|null $_searchValue -- the email or guid value of the target record
* @param array|null $_retData -- call by reference param to return additional record data
* @return string|null
*
*
* HISTORY:
* ========
* 08-31-20 mks DB-168: original coding
*
*/
private function hashFetch(?string $_searchValue = null, ?array &$_retData = null):?string
{
$method = basename(__METHOD__);
// search key can be either a token or an email address - figure out which
if (empty($_searchValue)) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$msg = ERROR_PARAM_404 . STRING_TOKEN;
@handleExceptionMessaging($hdr, $msg, $this->eventMessages);
return null;
} elseif (validateGUID($_searchValue))
$searchKey = STRING_TOKEN;
elseif (false !== filter_var($_searchValue, FILTER_VALIDATE_EMAIL))
$searchKey = USER_PII_EMAIL;
else {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$msg = ERROR_DATA_INVALID_KEY . $_searchValue;
@handleExceptionMessaging($hdr, $msg, $this->eventMessages);
return null;
}
// build the query to fetch the user's hash based on the record token
$query = [
$searchKey => [ STRING_TOKEN => [ OPERAND_NULL => [ OPERATOR_EQ => [ $_searchValue]]]],
STRING_RETURN_DATA => [ USER_PASSWORD, USER_TYPE, USER_PARTNER_API_KEY, DB_STATUS ]
];
$this->_fetchRecords($query);
if (!$this->status) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$msg = sprintf(ERROR_MDB_QUERY_FAIL, STRING_SEARCH);
@handleExceptionMessaging($hdr, $msg, $this->eventMessages);
return null;
} elseif ($this->state == STATE_NOT_FOUND) {
$this->eventMessages[] = ERROR_DATA_404;
return null;
}
// class data gets reset on successful search and return
$_retData = $this->getData();
$_retData = $_retData[0];
if (isset($_retData[STRING_PASSWORD . $this->ext])) unset($_retData[STRING_PASSWORD . $this->ext]);
$hash = $this->getColumn(USER_PASSWORD);
$this->removeData();
return $hash;
}
/**
* hashCheck() -- public function
*
* This is the public function, access point, for validating a user's password hash. The method has the
* following input parameters to the method:
*
* $_searchValue -- this can be either a GUID or an email address and will be used as search key
* $_hashText -- this is the hash text as generated by the client that we'll compare to the stored hash
*
* There are no explicit parameters returned. However, we make use of the class state/status members to pass-back
* the state and status of the request which should be processed by the calling client.
* If either parameter is passed empty, then generate messaging and return.
*
* The method invokes the hashFetch() method, passing in the search-value to fetch the password hash from the
* database record and, in the same line, invokes the password_verify() function to validate the user
* submitted pre-hash value against the stored hash.
*
* Programmer's Notes:
* -------------------
* Anytime you want to add layered-validation, like ensuring that partner account belongs to a partner, you'll want
* to add those checks to this method.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_searchValue
* @param $_hashText
*
*
* HISTORY:
* ========
* 08-31-20 mks DB-168: Original coding
*
*/
public function hashCheck(string $_searchValue, $_hashText):void
{
$this->status = false;
$this->state = STATE_AUTH_ERROR;
$userData = null;
$method = basename(__METHOD__);
if (empty($_searchValue) or empty($_hashText)) {
$this->eventMessages[] = ERROR_DATA_404;
return;
}
// fetch the stored hash first; hashFetch() generates its own messaging on failure
if (is_null($hash = $this->hashFetch($_searchValue, $userData))) return;
if (password_verify($_hashText, $hash)) {
// if the hash verification was successful...
// check the account status for non-active states:
if ($userData[DB_STATUS . $this->ext] != STATUS_ACTIVE) {
switch ($userData[DB_STATUS . $this->ext]) {
case STATUS_LOCKED :
case STATUS_CLOSED :
case STATUS_SUSPENDED :
case STATUS_REVOKED :
case STATUS_INACTIVE :
// account needs CSR intervention
break;
case STATUS_ABANDONED :
// account needs to have status updated and password change forced
break;
case STATUS_PENDING :
// account needs to validate their email
break;
}
}
if (isset($userData[USER_TYPE . $this->ext]) and ($userData[USER_TYPE . $this->ext] == USER_TYPE_PARTNER)) {
// ...and we have a userType and that type is equal to "partner"...
if (isset($userData[USER_PARTNER_API_KEY . $this->ext]) and validateGUID($userData[USER_PARTNER_API_KEY . $this->ext])) {
// ...and we have a partner API key in the user record...
if (isset($this->metaPayload[CLIENT_AUTH_TOKEN])) {
// ...and we have the X-API-Key set in the meta payload...
if ($this->metaPayload[CLIENT_AUTH_TOKEN] == $userData[USER_PARTNER_API_KEY . $this->ext]) {
// ...and the meta-payload X-API-Key matches the API-Key stored in the user record...
// then we have a successful partner-user login!
$this->myAPIKey = $userData[USER_PARTNER_API_KEY . $this->ext];
$this->myUserType = $userData[USER_TYPE . $this->ext];
if (isset($userData[STRING_TOKEN . $this->ext]))
$this->myToken = $userData[STRING_TOKEN . $this->ext];
$this->state = STATE_SUCCESS;
$this->status = true;
return;
} else {
// client auth token in meta payload does not match the X-API-Key in the user record
$this->eventMessages[] = ERROR_PARTNER_API_KEY_MISMATCH;
return;
// todo security system event
}
} else {
// Event request was from a partner, but user is not associated with a partner account
$this->eventMessages[] = ERROR_PARTNER_USER_NOT_MEMBER;
// todo security system event
return;
}
} else {
// either the api-key is not set or is an invalid guid (and the user is partner'd)
if (!isset($userData[USER_PARTNER_API_KEY . $this->ext])) {
// partner key was not set in the user record
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$this->logger->warn($hdr . ERROR_PARTNER_USER_NOT_REGISTERED);
$this->eventMessages[] = ERROR_PARTNER_USER_NOT_MEMBER;
$this->state = STATE_DB_ERROR;
return;
} else {
// the partner key guid stored in the user record is bad
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$this->eventMessages[] = ERROR_PARTNER_USER_DATA;
$msg = sprintf(ERROR_PARTNER_USER_HAS_BAD_GUID, $userData[USER_PARTNER_API_KEY . $this->ext], USER_PARTNER_API_KEY);
$this->logger->warn($hdr . $msg);
$this->logger->warn(ERROR_INVALID_GUID . $userData[USER_PARTNER_API_KEY . $this->ext]);
$this->state = STATE_DATA_ERROR;
return;
}
}
} // END - check to see if user belongs to a partner
} else {
$this->eventMessages[] = ERROR_PASSWORD_MISMATCH;
}
}
}

View File

@@ -0,0 +1,169 @@
<?php
/**
* this class is used when we want to publish a request to the AdminIn broker. The class wraps all of the
* RabbitMQ initialization and communication work so you don't have to. Especially useful for unit testing.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-15-17 mks original coding
* 07-07-17 mks CORE-463: updated for async logging (appserver posting log events to admin server)
* 07-31-18 mks CORE-774: PHP7.2 exception handling
* 01-29-20 mks DB-144: PHP7.4 support
* 10-15-20 mks DB-168: renamed, supports all namaste work queues (adminBrokerIn and sBroker)
*
*/
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Exception\AMQPRuntimeException;
use PhpAmqpLib\Exception\AMQPTimeoutException;
use PhpAmqpLib\Message\AMQPMessage;
class gacWorkQueueClient
{
private object $rabbitConnection; // actually contains AMQPStreamConnection type but the API declares object|null
private ?AMQPChannel $rabbitChannel = null;
private ?string $rabbitCallbackQueue;
private ?string $rabbitResponse;
private string $rabbitCorrelationID;
private string $queueName;
public bool $status;
/**
* __construct() -- public method
*
* this is the constructor for the class. it requests an admin resource from the resource manager and declares
* a client-side connection to the service.
*
* there is an optional input parameter -- $_fw (from-where) that inserts a string into the queue label allowing
* easy identification of the requesting source.
*
* the method returns no values. It only sets the class' status member variable, a Boolean, on success or fail,
* accordingly.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* @param $_fw - "from where" - tweaks queue label to identify request origin
* @param $_which - which fire-n-forget broker queue to attach to (default is adminIn)
*
*
* HISTORY:
* ========
* 06-15-17 mks original coding
* 05-31-18 mks CORE-1011: update for new XML broker services configuration
* 09-19-19 mks DB-136: improved exception handling
* 10-02-20 mks DB-168: support for (new) session broker (also fire-n-forget), removed callback method
*
*/
public function __construct(string $_fw = __METHOD__ . AT . __LINE__, string $_which = BROKER_QUEUE_AI)
{
register_shutdown_function(array($this, STRING_DESTRUCTOR));
$this->status = false;
switch ($_which) {
case BROKER_QUEUE_AI :
$resource = RESOURCE_ADMIN;
$queue = BROKER_QUEUE_AI;
$labelClient = 'gacAdminInClient<';
break;
case BROKER_QUEUE_S :
$resource = RESOURCE_TERCERO;
$queue = BROKER_QUEUE_S;
$labelClient = 'gacSessionClient<';
break;
default :
consoleLog('cACI: ', CON_ERROR, ERROR_CONFIG_RESOURCE_404 . $_which);
return;
}
$this->queueName = gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG] . $queue;
try {
$this->rabbitConnection = gasResourceManager::fetchResource($resource);
if (is_null($this->rabbitConnection)) return;
$this->rabbitChannel = $this->rabbitConnection->channel();
$label = uniqid($labelClient . $_fw . '>:');
list($this->rabbitCallbackQueue, ,) = $this->rabbitChannel->queue_declare($label . uniqid(), false, false, false, true);
$this->rabbitChannel->basic_consume($this->rabbitCallbackQueue, '', false, true, false, false);
$this->rabbitResponse = null;
$this->status = true;
} catch (AMQPRuntimeException | AMQPTimeoutException | Throwable | TypeError $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$foo = []; // handleExceptionMessaging() takes the message sink by reference
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
/**
* call() -- public method
*
* This method is invoked outside of the class and is the entry point for publishing a message request to the
* AdminIn broker. It creates a new AMQP message and publishes it to the queue (defined in the constructor),
* and then exits, returning true to indicate that the message was successfully published.
*
* Since the AdminIN broker is a fire-n-forget broker, there are no return messages to block-and-wait on.
*
* If an exception is raised by this class, then a false value will be returned.
*
* NOTE: the true/false return values are not, in any way, a reflection of the processing success/failure on the
* remote service. The general rule of thumb is that if we can publish the request, then we can only assume that the request
* was successfully consumed and processed.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $_data
* @return bool
*
* HISTORY:
* ========
* 06-15-17 mks original coding
* 08-17-17 mks CORE-500: returning a boolean that actually indicates if we successfully published the event
* 09-19-19 mks DB-136: refactored exception handling, fixed the AMQPMessage create call
* 10-20-20 mks DB-168: better exception handling, removed channel close b/c autodelete is on
*
*/
public function call($_data): bool
{
$this->rabbitResponse = null;
$this->rabbitCorrelationID = uniqid();
$success = false;
try {
$rabbitMessage = new AMQPMessage((string)$_data);
$this->rabbitChannel->basic_publish($rabbitMessage, '', $this->queueName);
$success = true;
} catch (AMQPTimeoutException | AMQPRuntimeException | Throwable $e) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$foo = null; // error-stack placeholder for handleExceptionMessaging's by-reference parameter
@handleExceptionMessaging($hdr, $e->getMessage(), $foo, true);
}
return ($success);
}
public function __destruct()
{
// As of PHP 5.3.10, destructors are not run on shutdowns caused by fatal errors.
//
// The destructor is registered as a shutdown function in the constructor -- so any
// recovery efforts should go in this method.
try {
if (!is_null($this->rabbitChannel)) {
$this->rabbitChannel->close();
$this->rabbitConnection->close();
}
} catch (AMQPRuntimeException | AMQPTimeoutException | Throwable | TypeError $t) {
$hdr = basename(__METHOD__) . AT . __LINE__ . COLON;
$foo = null; // error-stack placeholder for handleExceptionMessaging's by-reference parameter
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
}
}
}
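The fire-and-forget contract of `call()` above can be exercised in isolation. The sketch below uses a hypothetical `FakeChannel` stand-in (not part of the framework or php-amqplib) so the pattern runs without a live RabbitMQ node; `fireAndForget()` mirrors the semantics of `call()`: a true return only means the publish succeeded, never that the remote service processed the request.

```php
<?php
// Hypothetical stand-in for an AMQP channel -- records publishes and can be
// told to fail, so the fire-and-forget contract can be tested offline.
class FakeChannel
{
    public array $published = [];
    public bool $failNext = false;

    public function basic_publish(string $body, string $exchange, string $routingKey): void
    {
        if ($this->failNext) {
            throw new RuntimeException('broker unreachable');
        }
        $this->published[] = [$exchange, $routingKey, $body];
    }
}

// Mirrors gacWorkQueueClient::call(): publish to the default exchange using the
// queue name as the routing key; swallow any throwable and report false.
function fireAndForget(FakeChannel $channel, string $queueName, string $data): bool
{
    try {
        $channel->basic_publish($data, '', $queueName);
        return true;
    } catch (Throwable $t) {
        return false; // publish failed; the remote outcome is unknowable either way
    }
}

$ch = new FakeChannel();
var_dump(fireAndForget($ch, 'gacAdminIn', '{"request":"ping"}')); // bool(true)
$ch->failNext = true;
var_dump(fireAndForget($ch, 'gacAdminIn', '{"request":"ping"}')); // bool(false)
```

As in the real client, nothing here blocks on a response: the AdminIn broker has no reply queue to consume.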

2129
classes/gasCache.class.inc Normal file

File diff suppressed because it is too large

579
classes/gasConfig.class.inc Normal file

@@ -0,0 +1,579 @@
<?php
/**
* gasConfig -- static configuration class
*
* The data framework is intended to read two configuration files of varying types to create a single structure
* containing all of the configuration options.
*
* subsequent requests/instantiations to different configuration files *overwrite* the duplicated-tag sections.
*
* this approach allows us to load a core configuration, then to modify the structure generated with a second
* configuration file.
*
* class is designed as a singleton so that only one "authoritative" config file can exist.
*
* To invoke, call the singleton function with the path/filename and the file type:
*
* $foo = gasConfig::singleton('./config/base.xml', 'xml');
*
* all configuration file names, and their file types, are defined in the global constants file.
*
* all key-value-paired data is loaded, stored, and accessed in the class's $settings member - given a key (tag,
* index, etc.), you can access/pull the data using the following:
*
* $foo = gasConfig::$settings[$key];
*
* which will either return a value or a sub-array depending on what $key indexes.
*
* KNOWN LIMITATIONS:
* ------------------
* Config files have to be supported by the corresponding PHP function that parses that config file type. As of this
* writing, only the following file types are supported:
*
* -- xml
* -- ini
* -- json
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-07-17 mks original coding
* 06-08-18 mks CORE-1035: console logging upgrade
* register $setting member function added
* 06-15-18 mks CORE-1045: deprecated CONFIG_ID_NODE XML tag
* 09-24-19 mks DB-136: deprecated getSyslog() method
* 01-08-20 mks DB-150: PHP7.4 class member type-casting
*
*/
class gasConfig
{
private static ?gasConfig $instance = null; // self-pointer
public static ?string $status = null; // used to validate successful instantiation
// private static $env = [ // defines the valid environments
// CONFIG_ID_NODE_NAMASTE,
// CONFIG_ID_NODE_ADMIN,
// CONFIG_ID_NODE_DEV
// ];
public static ?array $settings = null; // hold all the configs stuffs
const AUTO = 0; // config file format-type associations
const JSON = 2;
const PHP_INI = 4;
const XML = 16;
private static string $res = 'CNFG: '; // logger id tag
private static array $CONF_EXT_RELATION = ['json' => self::JSON, 'ini' => self::PHP_INI, 'xml' => self::XML];
/**
* __construct() -- private method
*
* constructor function for the class - determines the type of configuration file to read (if none is provided,
* then attempt to determine by the file's extension).
*
* depending on a file extension, invoke the appropriate function to parse the config file in an internal
* data structure.
*
*
* @author mshallop@pathway.com
* @version 2.1.7
*
* @param $_cFile
* @param int $_fType
*
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
private function __construct(string $_cFile, int $_fType = gasConfig::AUTO)
{
//register_shutdown_function('pgsConfig::__destruct');
self::getConfig($_cFile, $_fType);
if (gasConfig::$settings[CONFIG_DEBUG]) consoleLog(static::$res, CON_DEBUG, INFO_CONFIG_LOADED);
}
/**
* registerEnvironment() -- private static method
*
* This method has no input parameters and returns a boolean to indicate successful processing.
*
* The method requires that the XML configuration be pre-loaded prior to invocation.
*
* Method creates an array of services and assigns a boolean value to each service (associative array)
* which is then transferred to a member variable.
*
* Note that "available services" are relative to the local service and not indicative of overall service
* availability across a distributed cluster.
*
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return bool
*
*
* HISTORY:
* ========
* 06-08-18 mks CORE-1035: original coding (transferred from startBrokers.php)
* 09-08-20 mks DB-168: updated for XML service locality re-configuration
* 11-09-20 mks DB-171: update check for local service registration to also be qualified on ACTIVE setting
*
*/
private static function registerEnvironment(): bool
{
$environments = [];
// using the environment in the XML config -- generate a list of currently-active services.
foreach (gasConfig::$settings[CONFIG_REGISTERED_SERVICES][CONFIG_DATABASE_MONGODB_ADMIN_REPLSET_SET] as $service) {
if (isset(gasConfig::$settings[CONFIG_BROKER_SERVICES][$service]) and is_array(gasConfig::$settings[CONFIG_BROKER_SERVICES][$service])) {
if (isset(gasConfig::$settings[$service][CONFIG_IS_LOCAL])) {
if (isset(gasConfig::$settings[$service][CONFIG_ACTIVE])) {
$environments[$service] = boolval(gasConfig::$settings[$service][CONFIG_IS_LOCAL]) && boolval(gasConfig::$settings[$service][CONFIG_ACTIVE]);
} else {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = $hdr . sprintf(CONFIG_XML_SERVICE_SETTING, CONFIG_BROKER_SERVICES . ARROW . $service . ARROW . CONFIG_ACTIVE);
consoleLog(static::$res, CON_ERROR, $msg);
return false;
}
} else {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = $hdr . sprintf(CONFIG_XML_SERVICE_SETTING, CONFIG_BROKER_SERVICES . ARROW . $service . ARROW . CONFIG_IS_LOCAL);
consoleLog(static::$res, CON_ERROR, $msg);
return false;
}
}
}
if (gasConfig::$settings[CONFIG_ID][CONFIG_ID_ENV] == ENV_PRODUCTION) {
// if we're starting in a production environment -- this requires services (appServer, Admin, Segundo and Tercero)
// to all be started on separate instances. This block enforces that all services are discrete per instance.
if (array_sum($environments) > 1) {
$msg = CONFIG_XML_SERVICE_VIOLATION;
foreach ($environments as $key => $value) {
if ($value == 1) $msg .= $key . ', ';
}
$msg = rtrim($msg, ', ');
consoleLog(static::$res, CON_ERROR, $msg);
return false;
}
}
// save the local services to the gasConfig object
static::$settings[CONFIG_REGISTERED_SERVICES] = $environments;
return true;
}
/**
* getConfig() -- private static method
*
* reads the configuration file (passed in by $_cFile) from the DIR_CONFIG directory and merges the files into
* (or onto) the existing configuration structure ($settings) on subsequent calls to this method.
*
* supported file types:
* -- json
* -- php.ini
* -- xml
*
* if the files cannot be accessed, or if an invalid file type is given, then dump the error to stdout (logfile)
* and exit.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $_cFile -- config file path and name
* @param int $_fType -- config file type
*
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
private static function getConfig(string $_cFile, int $_fType = gasConfig::AUTO)
{
$tSettings = null; // temporary holder for file settings
if ($_fType == self::AUTO) {
$_fType = self::$CONF_EXT_RELATION[pathinfo($_cFile, PATHINFO_EXTENSION)];
}
switch ($_fType) {
case self::JSON :
$result = file_get_contents($_cFile, true);
if (!$result) {
consoleLog(static::$res, CON_ERROR, CONFIG_FTL_JSON . $_cFile);
return;
} else {
$tSettings = json_decode($result);
}
break;
case self::PHP_INI :
$result = parse_ini_file($_cFile, true);
if (!$result) {
consoleLog(static::$res, CON_ERROR, CONFIG_FTL_INI . $_cFile);
return;
} else {
$tSettings = $result;
}
break;
case self::XML :
$result = simplexml_load_file($_cFile);
if (!$result) {
consoleLog(static::$res, CON_ERROR, CONFIG_FTL_XML . $_cFile);
return;
} else {
try {
$tSettings = self::objectToArray($result);
} catch (TypeError $t) {
consoleLog(static::$res, CON_ERROR, ERROR_TYPE_EXCEPTION . COLON . $t->getMessage());
return;
}
}
break;
}
if (!is_null($tSettings)) {
if (is_null(self::$settings)) {
self::$settings = $tSettings;
} else {
self::$settings = self::recursiveArrayMerge(self::$settings, $tSettings, true);
}
try {
self::recursiveArrayPurge(self::$settings);
} catch (TypeError $t) {
consoleLog(static::$res, CON_ERROR, ERROR_TYPE_EXCEPTION . COLON . $t->getMessage());
return;
}
}
// // validate the environment
// if (!in_array(self::$settings[CONFIG_ID][CONFIG_ID_NODE], self::$env)) {
// consoleLog(static::$res, CON_ERROR, CONFIG_UNK_ENV . self::$settings[CONFIG_ID][CONFIG_ID_NODE]);
// exit(1);
// }
}
/**
* recursiveArrayMerge() -- public static method
*
* this takes the original configuration array and merges subsequent configuration files on top of it.
*
* for example, if you have a structure:
*
* $this[database][mysql] section defined:
*
* <database>
* <mysql>
* <db_hostname>localhost</db_hostname>
* <db_username>user_name</db_username>
* <db_password>user_pass</db_password>
* <db_port>3306</db_port>
* <db_database>some_database_name</db_database>
* </mysql>
*
* and you want to change the database name for your local environment, then the
* subsequent configuration file would bear an identical parent structure, and an
* identical element naming structure....changed data within the elements would be
* copied over the existing parent structure:
*
* <database>
* <mysql>
* <db_database>some_database_name</db_database>
* </mysql>
*
* with the resulting output:
* [database] => Array
* (
* [mysql] => Array
* (
* [db_hostname] => localhost
* [db_username] => user
* [db_password] => password
* [db_port] => 3306
* [db_database] => some_database_name
* )
* )
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $array1
* @param $array2
* @param bool $overwrite
* @return array
*
* HISTORY:
* ========
* 06-07-17 mks original coding
* 06-20-18 mks CORE-1045: trapped PHP fatal error -- if there's an error in the XML where you have
* re-declared a variable (to a different value), the framework IPL will crash.
* This fix will prevent the crash from happening and output a console message.
*
*/
public static function recursiveArrayMerge(array $array1, array $array2, bool $overwrite = true): array
{
foreach ($array2 as $key => $val) {
if (isset($array1[$key])) {
if (is_array($val)) {
try {
$array1[$key] = self::recursiveArrayMerge($array1[$key], $val, $overwrite);
} catch (TypeError $t) {
consoleLog(static::$res, CON_SYSTEM, CONFIG_XML_DUP_VAR . $key);
consoleLog(static::$res, CON_SYSTEM, $t->getMessage());
}
} elseif ((is_string($array1[$key]) or is_int($array1[$key])) && $overwrite) {
$array1[$key] = $val;
}
} else {
$array1[$key] = $val;
}
}
return $array1;
}
/**
* recursiveArrayPurge() -- private static method
*
* So, as it turns out, simplexml_load_file() does not ignore comments embedded into the XML... mostly... in truth,
* placeholder indices are created within the structure that are also arrays, albeit empty.
*
* So this function, which is called by the getConfig method, recursively loops through the existing $settings
* structure and removes any array with a key of "comment". This, of course, means that one cannot include this
* particular value as an index key within the xml file.
*
* The input parameter to the method is a call-by-reference value which allows us to traverse the settings
* structure recursively and retain changes to the overall array.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $ary -- initially, should be the static::$settings array, thereafter, sub-arrays from within
*
*
* HISTORY:
* ========
* 06-13-17 mks original coding
* 11-28-17 mks CORE-635: fixed bug in conditional that was causing 0th elements of XML sub-arrays
* to be discarded
*
*/
private static function recursiveArrayPurge(array &$ary)
{
foreach ($ary as $key => &$value) {
if ($key === STRING_COMMENT) {
unset($ary[$key]);
} elseif (is_array($value)) {
try {
self::recursiveArrayPurge($value);
} catch (TypeError $t) {
consoleLog(static::$res, CON_ERROR, $t->getMessage());
}
}
}
}
/**
* objectToArray() - private static method
*
* recursive function to flatten objects to a single array.
* If the object contains embedded objects, then self-invoke.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $_obj - a collection of either objects or arrays
* @return array - the flattened object
*
* HISTORY:
* ========
* 06-07-17 mks original coding
* 07-21-17 mks CORE-468: if the XML value is numeric (float or int) then return a float or int instead
* of a string. Do this by implicitly casting the value by adding 0 to the value.
*
*/
private static function objectToArray($_obj)
{
$ph = null; // placeholder
$ph = (is_object($_obj)) ? get_object_vars($_obj) : $_obj;
foreach ($ph as $key => $val) {
// todo - evaluate to see if is_array is ever true - if not, use strong typing (SimpleXMLElement)
$ph[$key] = ((is_array($val)) or (is_object($val))) ? self::objectToArray($val) : (is_numeric($val) ? $val + 0 : $val);
}
return ($ph);
}
/**
* __get() -- public method
*
* This is a magic function for accessing private data within the config class object.
* Note that magic functions are required to be public and not static.
*
* input parameter ($_section) is the string (tag) referencing the (sub)section of the configuration file
* to be returned. If $_section does not exist as an array key, or if self::$settings has yet to be set,
* then return Boolean(false) -- otherwise return the sub-scripted array as referenced by $_section.
*
* LIMITATIONS:
* ------------
* Sub-arrays cannot be deeper than one level.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $_section
* @return mixed -- the (sub)section on success, Boolean(false) otherwise
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
public function __get($_section)
{
return (is_array(self::$settings) && array_key_exists($_section, self::$settings)) ? self::$settings[$_section] : false;
}
/**
* getPedigree() -- public static function
*
* This function has no input parameters and returns an associative array.
*
* The function pulls the current configuration and returns selected parameters to the calling client. These values
* are going to be string values, except for the current version which is cast to float, and will indicate if a
* feature is enabled, disabled or, if there's a configuration error, set to an error message.
*
* It's the responsibility of the calling program to parse the return array and to compare/contrast the selected
* values.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return array
*
*
* HISTORY:
* ========
* 07-09-18 mks CORE-1017: original coding
*
*/
public static function getPedigree(): array
{
$retData[PEDIGREE_ENV] = isset(gasConfig::$settings[CONFIG_ID][CONFIG_ID_ENV]) ? gasConfig::$settings[CONFIG_ID][CONFIG_ID_ENV] : ERROR_STUB_NOTDEF;
$retData[PEDIGREE_VER] = isset(gasConfig::$settings[CONFIG_ID][CONFIG_ID_VER]) ? floatval(gasConfig::$settings[CONFIG_ID][CONFIG_ID_VER]) : ERROR_STUB_NOTDEF;
$retData[PEDIGREE_DEBUG] = isset(gasConfig::$settings[CONFIG_DEBUG]) ? ((intval(gasConfig::$settings[CONFIG_DEBUG]) == 1) ? STRING_ENABLED : STRING_DISABLED) : ERROR_STUB_NOTDEF;
$retData[PEDIGREE_SYSLOG] = STRING_ENABLED;
$retData[PEDIGREE_AUDIT] = isset(gasConfig::$settings[CONFIG_AUDIT_ON]) ? ((intval(gasConfig::$settings[CONFIG_AUDIT_ON]) == 1) ? STRING_ENABLED : STRING_DISABLED) : ERROR_STUB_NOTDEF;
$retData[PEDIGREE_JOURNAL] = isset(gasConfig::$settings[CONFIG_JOURNAL_ON]) ? ((intval(gasConfig::$settings[CONFIG_JOURNAL_ON]) == 1) ? STRING_ENABLED : STRING_DISABLED) : ERROR_STUB_NOTDEF;
$retData[PEDIGREE_SEGUNDO] = isset(gasConfig::$settings[CONFIG_BROKER_SEGUNDO]) ? ((intval(gasConfig::$settings[CONFIG_BROKER_SEGUNDO]) == 1) ? STRING_ENABLED : STRING_DISABLED) : ERROR_STUB_NOTDEF;
$retData[PEDIGREE_TERCERO] = isset(gasConfig::$settings[CONFIG_BROKER_TERCERO]) ? ((intval(gasConfig::$settings[CONFIG_BROKER_TERCERO]) == 1) ? STRING_ENABLED : STRING_DISABLED) : ERROR_STUB_NOTDEF;
$retData[PEDIGREE_QTAG] = isset(gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG]) ? gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_QUEUE_TAG] : ERROR_STUB_NOTDEF;
$retData[PEDIGREE_VHOST] = isset(gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_VHOST]) ? gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_VHOST] : ERROR_STUB_NOTDEF;
return $retData;
}
/**
* singleton() -- public static function
*
* the problem with php is that it does not support true static classes. If you add a debug output to this method
* when it's entered, you'll see it proc every time this method is called.
*
* The input parameters are the name of the configuration file and the configuration file type - which defaults
* to the "auto" config type.
*
* The output is the array structure as defined by the config file itself since it's (more or less) read-in
* as the input.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $_iniFile -- path/filename.ext to the config file
* @param int $_iniType -- extension type of the config file
* @return gasConfig -- returns the singleton instance (created on first call)
*
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
public static function singleton(string $_iniFile, int $_iniType = gasConfig::AUTO): gasConfig
{
if (static::$instance === null) {
$c = __CLASS__;
static::$instance = new $c($_iniFile, $_iniType);
}
return static::$instance;
}
/**
* addConfig() -- public static method
*
* subsequent calls to read-in additional configuration are handled by this method which invokes the private
* method already used to load the 0th-case.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param $_file
* @param int $_type
*
* HISTORY:
* ========
* 06-07-17 mks original coding
* 06-08-18 mks CORE-1035: adding service environments to $settings
*
*/
public static function addConfig(string $_file, int $_type = gasConfig::AUTO)
{
self::getConfig($_file, $_type);
// now that both env files are loaded, register the service environments:
static::$status = false;
try {
if (!self::registerEnvironment()) {
consoleLog(static::$res, CON_ERROR, ERROR_CONFIG_RESOURCE_404 . STRING_SVC_ENV);
} else {
static::$status = true;
}
} catch (TypeError $t) {
consoleLog(static::$res, CON_ERROR, ERROR_TYPE_EXCEPTION . COLON . $t->getMessage());
}
}
/**
* __clone() -- public method
*
* disallow cloning by returning an explicit null on the request
*
* @author mshallop@pathway.com
*
* @return null
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
public function __clone()
{
return(null); // disallow cloning of this class
}
}
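The layered-config behavior documented above can be sketched end to end with plain arrays. The functions below are simplified, standalone re-implementations (not the framework code): `recursiveArrayMerge` here always overwrites scalars, where the class honors an `$overwrite` flag, and the input is a literal array standing in for flattened simplexml output.

```php
<?php
// Flatten objects to arrays, coercing numeric strings to int/float via "+ 0"
// (mirrors gasConfig::objectToArray's CORE-468 behavior).
function objectToArray($obj): array
{
    $ph = is_object($obj) ? get_object_vars($obj) : $obj;
    foreach ($ph as $key => $val) {
        $ph[$key] = (is_array($val) || is_object($val))
            ? objectToArray($val)
            : (is_numeric($val) ? $val + 0 : $val);
    }
    return $ph;
}

// Merge a second config on top of the first: identical paths are descended,
// scalar leaves from the override win (simplified: no $overwrite flag).
function recursiveArrayMerge(array $a, array $b): array
{
    foreach ($b as $key => $val) {
        $a[$key] = (isset($a[$key]) && is_array($val) && is_array($a[$key]))
            ? recursiveArrayMerge($a[$key], $val)
            : $val;
    }
    return $a;
}

// Drop the empty 'comment' placeholder arrays that simplexml parsing leaves behind.
function purgeComments(array &$ary): void
{
    foreach ($ary as $key => &$value) {
        if ($key === 'comment') {
            unset($ary[$key]);
        } elseif (is_array($value)) {
            purgeComments($value);
        }
    }
}

// Base config (stands in for the flattened XML), then a local override on top.
$base = [
    'comment'  => [],
    'database' => ['mysql' => ['db_port' => '3306', 'db_database' => 'prod_db']],
];
$base = objectToArray($base);   // '3306' becomes int(3306)
purgeComments($base);           // drops the 'comment' placeholder
$merged = recursiveArrayMerge($base, ['database' => ['mysql' => ['db_database' => 'local_db']]]);

echo $merged['database']['mysql']['db_database'] . "\n"; // local_db
echo $merged['database']['mysql']['db_port'] . "\n";     // 3306
```

The override file only needs to repeat the parent structure down to the leaf it changes; everything else survives the merge untouched, which is exactly the db_database example in the recursiveArrayMerge docblock.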

File diff suppressed because it is too large

653
classes/gasStatic.class.inc Normal file

@@ -0,0 +1,653 @@
<?php
/**
* This is the gasStatic class which is a collection of one-off functions that can be called from a broker or any
* instantiation class -- code herein is class-agnostic and storing one-off's here reduces the overhead of a full
* class instantiation to accomplish the same functionality.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-09-17 mks original coding
* 08-28-17 mks CORE-494: removed buildMappedDataArray() -- method lives in the gasCache class
* 03-02-18 mks CORE-680: deprecated trace logging
* 07-30-18 mks CORE-774: PHP7.2 Exception handling
*
*/
class gasStatic
{
private static ?gasStatic $instance = null; // used to determine if already instantiated
private static string $res = 'STTK: '; // logging resource id
public static bool $debug; // debug state for diagnostic output
public static bool $available = false; // instantiation-check
private static ?array $errors; // local error stack
private static string $class; // name of the current class
private static ?gacErrorLogger $logger = null;
/**
* __construct() -- private function
*
* this is the constructor function for this singleton class. the constructor sets the class's member variables
* and calls any cache pre-loading that's required.
*
* Note that the public entry point is via the getInstance() method.
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 06-09-17 mks original coding
* 07-30-18 mks CORE-774: PHP7.2 Exception handling
*
*/
private function __construct()
{
global $eos;
static::$errors = null;
try {
static::$logger = new gacErrorLogger();
} catch (TypeError $t) {
echo getDateTime() . CON_ERROR . static::$res . $t->getMessage() . $eos;
static::$errors[] = $t->getMessage();
}
static::$class = __CLASS__;
static::$debug = (isset(gasConfig::$settings[STRING_DEBUG])) ? gasConfig::$settings[STRING_DEBUG] : false;
}
/**
* singleton() -- public static method
*
* method to instantiate the gasStatic singleton class
*
* note:
* -----
* it's the calling client's responsibility to check for the null return value in the member variable $instance.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return gasStatic
*
* HISTORY:
* ========
* 06-09-17 mks original coding
*
*/
public static function singleton()
{
if (static::$instance === null) {
$c = __CLASS__;
static::$instance = new $c();
}
return(static::$instance);
}
/**
* getInstance() -- public static method
*
* this is the singleton constructor entry point for the class. returns a resource pointer to the static or
* a null value if the instantiation failed.
*
* USAGE:
* ------
* $resourcePointer = gasStatic::getInstance();
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return mixed
*
* HISTORY:
* ========
* 06-09-17 mks original coding
*
*/
public static function getInstance()
{
if (static::$instance === null) {
$c = __CLASS__;
static::$instance = new $c();
}
return (static::$instance);
}
/**
* doingTime() -- public static method
*
* public static method that creates a timer (micro-time) value.
*
* _start is the input parameter (defaults to zero) -- if this value is zero, then the method will
* immediately return a generated timer value to the client.
*
* if _start is provided, then get a second timer value and derive the difference between the current and the
* start-time returning the total time calculated.
*
* USAGE:
* ------
* gasStatic::getInstance()->doingTime($startValue);
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param float $_start
* @return float
*
* HISTORY:
* ========
* 06-09-17 mks original coding
*
*/
public static function doingTime(float $_start = 0.0)
{
// microtime(true) already returns float seconds since the epoch; adding time()
// on top of it doubled the base and truncating $_start to int lost the microseconds.
if ($_start == 0.0) {
return (microtime(true));
}
return (round(microtime(true) - $_start, NUMBER_FP_PRECISION));
}
/**
* publishSystemEvent() -- public method
*
* This method is invoked when we want to publish a system event to the (remote) admin service. The method requires
* a single input parameter: the data ball in array format which represents one, or more, broker event records.
*
* The method is responsible for ensuring the data ball is in the correct format (an array of records), and for
* building the payload that's published to the admin service.
*
* The method returns a false on processing error, or if the packet fails to send. A true return means that the
* payload was successfully submitted to the (remote) service. It does not indicate that the record was
* successfully processed and inserted into the collection as that part of the process is black-boxed behind RMQ.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_data the record to be published remotely
* @param array $_meta the meta data for the sysEvent record to be published
* @param string $_og the original event guid
* @param array|null $_es call-by-reference parameter to contain the error stack
* @return bool
*
* HISTORY:
* ========
* 08-17-17 mks CORE-500: original coding
* 08-13-20 mks DB-168: method renamed, meta added to params, refactored, and moved to gasStatic class
*
*/
public static function publishSystemEvent(array $_data, array $_meta, string $_og = '', ?array &$_es = null): bool
{
$bc = null;
// validate that the $_data payload is not empty
if (empty($_data)) {
$msg = ERROR_DATA_ARRAY_EMPTY . COLON . STRING_DATA;
$_es[] = $msg;
return false;
}
if (empty($_meta)) {
$msg = ERROR_DATA_ARRAY_EMPTY . COLON . STRING_META;
$_es[] = $msg;
return false;
}
// validate that we're getting an array of records ($_data[0] = record1, etc.)
if (key($_data) !== 0) {
$msg = ERROR_DATA_ARRAY_NOT_ARRAY . STRING_DATA;
$_es[] = $msg;
return false;
}
try {
// instantiate an AI broker client
$bc = static::fetchAIClient($_es);
if (is_null($bc)) return false;
if (!empty($_og) and validateGUID($_og)) $_meta[META_EVENT_GUID] = $_og;
$request = [
BROKER_REQUEST => BROKER_REQUEST_ADMIN_BROKER_EVENT,
BROKER_DATA => $_data,
BROKER_META_DATA => $_meta
];
$response = $bc->call(gzcompress(json_encode($request))); // fire-and-forget publish to the AdminIn queue
if (is_object($bc)) $bc->__destruct();
unset($bc);
return ($response);
} catch (Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $_es);
if (is_object($bc)) $bc->__destruct();
unset($bc);
return false;
}
}
/**
* fetchAIClient() -- private static method
*
* This method instantiates a new AdminIN broker client and returns it to the caller.
*
* The method checks the client's status flag after instantiation and tears the client down on failure.
*
* The method returns the instantiated gacWorkQueueClient object on success, or null on failure.
*
*
* @param array|null $_es -- call-by-reference parameter containing the method's error stack output
* @return null|gacWorkQueueClient
*
* HISTORY:
* ========
* 08-17-17 mks CORE-500: original coding
* 09-10-20 mks DB-168: improved error handling, moved to static class, changed return type to obj
*
* @author mike@givingassistant.org
* @version 1.0
*
*/
private static function fetchAIClient(?array &$_es):?gacWorkQueueClient
{
$bc = null;
$foo = null;
$method = basename(__METHOD__);
try {
$bc = new gacWorkQueueClient($method . AT . __LINE__);
if (!$bc->status) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$msg = ERROR_FAILED_TO_INSTANTIATE . RESOURCE_ADMIN_CLIENT;
$_es[] = $msg;
static::$logger->error($hdr . $msg);
if (is_object($bc)) $bc->__destruct();
unset($bc);
return null;
}
return $bc;
} catch (Throwable | TypeError $t) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$_es[] = ERROR_THROWABLE_EXCEPTION;
@handleExceptionMessaging($hdr, $t->getMessage(), $foo, true);
if (is_object($bc)) $bc->__destruct();
unset($bc);
return null;
}
}
/**
* createATJob() -- public static method
*
* this is a public static method providing a single point-of-access to the create-AT-Job function. the method
* requires the following input parameters:
*
* -- duration (in seconds) from "now" when the session will expire
* -- the systemEvent token that will be passed to the PHP script, invoked by AT(1)
* -- $_script - should be one of two values; defaults to the session script for backward compatibility
*
* since this is an internal method, there's no error checking or parameter validation - this method will only
* execute if the admin service is local to the current instance.
*
* the function returns the AT output back to the calling client as a string or a null if the admin service is not
* local to the current instance or if an exception was raised during processing.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param int $_duration
* @param string $_sysEvToken
* @param string $_script
* @return null | string
*
* HISTORY:
* ========
* 08-13-20 mks DB-168: original coding
*
*/
public static function createATJob(int $_duration, string $_sysEvToken, string $_script = FILE_EOS):?string
{
if (gasConfig::$settings[ENV_ADMIN][CONFIG_IS_LOCAL]) {
$time = 'now + ' . floor($_duration / NUMBER_SECS_IN_MIN) . ' ' . STRING_SYS_MIN;
$job = '/usr/bin/php -f ' . dirname(__DIR__) . DIR_SCRIPTS . $_script . ' ' . $_sysEvToken;
return (gacATWrapper::cmd($job, $time));
} else {
try {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = ERROR_REMOTE_NOT_ADMIN;
static::$logger->warn($hdr . $msg);
consoleLog(static::$res, CON_ERROR, $msg);
} catch (TypeError | Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
static::$logger->warn($hdr . $t->getMessage());
consoleLog(static::$res, CON_ERROR, $t->getMessage());
}
return null;
}
}
/**
* convertWebMigrationRequest() -- public static method
*
* This function is called from the migration broker when we receive a migration request from the webUI.
*
* There is one input parameter for this method:
*
* $_data -- the array passed from the broker which contains the event payload data
*
* The method returns an array -- this array will contain a new configuration matrix to temporarily replace
* gasConfig::$settings[CONFIG_MIGRATION].
*
* The method generates a copy of the migration XML data and then over-writes relevant fields with data received
* via the input parameters.
*
* No validation is performed as the web-app (./utilities/migrateData.php) does all the data validation prior
* to publishing the broker event request.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* @param array $_data -- broker event payload data
* @return array -- returns a replacement migration configuration (data validated/tested in web-app!)
*
*
* HISTORY:
* ========
* 09-27-18 mks DB-43: initial coding
*
*/
public static function convertWebMigrationRequest(array $_data): array
{
$rd = null; // return data
$newMigCfg = gasConfig::$settings[CONFIG_MIGRATION]; // start with a copy of the current config
// first, set the schema
$schema = $_data[MIGRATION_SOURCE_SCHEMA];
// next, let's replace the common migration XML components that are required in an override-migration request
$newMigCfg[$schema][STRING_HOST] = $_data[STRING_HOST];
$newMigCfg[$schema][STRING_PORT] = $_data[STRING_PORT];
$newMigCfg[$schema][CONFIG_DATABASE] = $_data[CONFIG_DATABASE];
// next, if the optional components are set, bring them into the new config array
if (isset($_data[STRING_USER])) $newMigCfg[$schema][STRING_USER] = $_data[STRING_USER];
if (isset($_data[STRING_PASS])) $newMigCfg[$schema][STRING_PASS] = $_data[STRING_PASS];
// next, grab the mongo-specific components
if ($schema == CONFIG_SCHEMA_MONGO) {
$newMigCfg[$schema][STRING_AUTH_SRC] = $_data[STRING_AUTH_SRC];
$newMigCfg[$schema][CONFIG_DATABASE_MONGODB_REPLSET_NAME] = $_data[CONFIG_DATABASE_MONGODB_REPLSET_NAME];
$newMigCfg[$schema][CONFIG_DATABASE_MONGODB_REPLSET_DSN] = $_data[CONFIG_DATABASE_MONGODB_REPLSET_DSN];
}
return $newMigCfg;
}
/**
* loadValidTemplateNames() -- private method
*
* this method loads all of the file names that end with an ".inc" extension that currently reside in the
* framework's template directory.
*
* The file names are loaded into the indexed array and returned to the calling client.
*
* If we could not open the template directory, then the value for $validTemplates is set to null - which
* should be tested by the client on return.
*
* Additionally, if the template files could not be loaded, we generate both a diagnostics and a logging
* error message.
*
* There are no inputs to this method. The method returns an array of valid template names it sussed from the
* template directory, on success, or a null on failure.
*
* @author mike@givingassistant.com
* @version 1.0
*
* @return array|null
*
* HISTORY:
* ========
* 06-15-17 mks original coding
* 07-07-17 mks fixed error searching for .inc type files
* 07-30-18 mks CORE-775: PHP7.2 Exception Handling
* 09-13-18 mks DB-43: added option to strip "gat" prepend
* 06-11-20 mks ECI-164: can no longer strip the TLTI prefix from template classes
*
*/
public static function loadValidTemplateNames(): ?array
{
$validTemplates = null;
$tDir = __DIR__ . DIR_TEMPLATE;
if ($fh = opendir($tDir)) {
while (false !== ($file = readdir($fh))) {
// skip dot entries; keep only files ending with the ".inc" extension
if ($file != DOT and $file != DOTDOT and substr($file, -strlen(FILE_TYPE_INC)) == FILE_TYPE_INC) {
$validTemplates[] = preg_replace("/" . preg_quote(FILE_TEMPLATE_EXT, "/") . "$/", "", $file);
}
}
closedir($fh);
} else {
$msg = ERROR_READ_DIR . $tDir;
static::$logger->warn($msg);
}
return($validTemplates);
}
/**
* getTLTI() -- public static method
*
* This method was moved from gacFactory class to gasStatic class because the factory is not the only place where
* we need to derive a TLTI based on a meta-data payload - this method will also be called from every broker
* class when validating an incoming meta-data payload prior to the gacFactory instantiation.
*
* The method has the following input parameters:
*
* $_authToken -- this is the CLIENT_AUTH_TOKEN as received from the SMAX-request meta-data payload
* $_eventGUID -- as above, except META_EVENT_GUID
* $_payload -- call-by-reference parameter, will implicitly return the broker response payload
*
* We instantiate a read-broker client and, on success, build our query to fetch the TLTI from the SMAXAPI
* data table in appServer. We'll exec the request as a system-level request to bypass redundant validation
* checks speeding up the process.
*
* Next, we publish the request and consume the response -- if the event was successful, then we'll extract
* the tlti from the payload and explicitly return that value.
*
* In all other cases, the function returns a null value to indicate a processing error. It's the responsibility
* of the calling client to process the error which is why we implicitly return the response payload.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_authToken
* @param string $_eventGUID
* @param array|null $_payload
* @return string|null
*
*
* HISTORY:
* ========
* 06-15-20 mks ECI-164: original coding as a tweaked-import from the gacFactory class
*
*/
public static function getTLTI(string $_authToken, string $_eventGUID, ?array &$_payload = null):?string
{
// instantiate a read-broker client to fetch the TLTI for the AUTH TOKEN
if (is_null($bc = static::getBC(BROKER_QUEUE_R, sprintf(INFO_LOC, basename(__FILE__), __LINE__)))) return null;
$query = [STRING_KEY => [OPERAND_NULL => [OPERATOR_EQ => [$_authToken]]]];
$request = [
BROKER_REQUEST => BROKER_REQUEST_FETCH,
BROKER_DATA => [
STRING_QUERY_DATA => $query,
STRING_RETURN_DATA => [CM_SMAX_TLTI]
],
BROKER_META_DATA => [
META_CLIENT => CLIENT_SYSTEM,
META_DO_CACHE => false,
META_TEMPLATE => TEMPLATE_CLASS_SMAXAPI,
META_EVENT_GUID => $_eventGUID
]
];
// publish the payload request and consume the Namaste response
$response = json_decode(gzuncompress($bc->call(gzcompress(json_encode($request)))), true);
if (is_object($bc)) $bc->__destruct();
unset($bc);
$_payload = $response;
// guard against a failed decode: json_decode() returns null on malformed input
if (is_array($response) and $response[PAYLOAD_STATUS] and $response[PAYLOAD_STATE] != STATE_NOT_FOUND) {
if (isset($response[PAYLOAD_RESULTS][STRING_QUERY_RESULTS][0][SMAX_TLTI])) {
return $response[PAYLOAD_RESULTS][STRING_QUERY_RESULTS][0][SMAX_TLTI] . CHAR_T;
}
}
return null;
}
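/*
 * wire-format sketch: the broker payload above is JSON-encoded then zlib-compressed on the way
 * out, with the mirror-image decode on the way back. In isolation (illustrative variable names,
 * not the framework constants):
 *
 *      $wire    = gzcompress(json_encode($request));          // what $bc->call() receives
 *      $decoded = json_decode(gzuncompress($wire), true);     // assoc array, or null on a failed decode
 *
 * a failed decode yields null, which is why callers should test the response before indexing into it.
 */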
/**
* getBC() -- private static method
*
* This method was originally in the Factory class but was moved to the static class because the broker
* calls to validateMetaData also require deriving the TLTI based on the META_CLIENT setting.
*
* This method fetches a valid broker client connecting to the appServer read broker. Since the instantiation
* is exception wrapped, the call here is not.
*
* The method requires the following two input parameters:
*
* $_queue -- the name of the broker queue to instantiate
* $_loc -- the INFO_LOC string which is passed to rabbit so as to identify where the request originated
*
* The method returns a gacBrokerClient object or, in the case of an error raised during instantiation, a null.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_queue
* @param string $_loc
* @return gacBrokerClient|null
*
*
* HISTORY:
* ========
* 06-15-20 mks ECI-164: original coding
*
*/
private static function getBC(string $_queue, string $_loc):?gacBrokerClient
{
$file = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$bc = new gacBrokerClient($_queue, $_loc);
if (!$bc->status) {
// failed to instantiate a broker client
$hdr = sprintf(INFO_LOC, $file, __LINE__);
$msg = ERROR_BROKER_CLIENT_DECLARE . $_queue; // report the queue we actually tried to declare
consoleLog(static::$res, CON_SYSTEM, $hdr . $msg);
static::$errors[] = $hdr . $msg;
if (is_object($bc)) $bc->__destruct();
unset($bc);
return null;
}
return $bc;
}
/**
* buildPDODBName() -- public static method
*
* This method requires a single argument to the function:
*
* $_env: should be either ENV_APPSERVER, ENV_SEGUNDO or any of the other valid env names
*
* The purpose of this function is to generate the database name at run time, predicated on the current
* environment and the namaste service on which it is running.
*
* If an invalid or unsupported env is passed to the method, it will return a null. Otherwise, the method builds
* the db name and returns that to the calling client.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_env
* @return string|null
*
* HISTORY:
* ========
* 08-20-19 mks DB-
*/
public static function buildPDODBName(string $_env): ?string
{
$dbConfig = gasConfig::$settings[CONFIG_DATABASE][CONFIG_DATABASE_PDO];
// if more env's are defined to support PDO, you will have to also define them here.
switch ($_env) {
case ENV_APPSERVER :
$namastePDO = $dbConfig[CONFIG_DATABASE_PDO_APPSERVER][CONFIG_DATABASE_PDO_MASTER][CONFIG_DATABASE_PDO_DB];
break;
case ENV_SEGUNDO :
$namastePDO = $dbConfig[CONFIG_DATABASE_PDO_SEGUNDO][CONFIG_DATABASE_PDO_MASTER][CONFIG_DATABASE_PDO_DB];
break;
default :
consoleLog(static::$res, CON_ERROR, sprintf(ERROR_RESOURCE_ENV_404, STRING_PDO, $_env));
return null;
}
return gasConfig::$settings[CONFIG_ID][CONFIG_ID_ENV] . UDASH . $namastePDO;
}
/**
* getSysLogError() -- public static method
*
* This method requires one input parameter:
*
* $_error: integer value representing the namaste error
*
* Namaste errors are mapped as follows:
*
* Namaste Error Value Syslog Error
* ---------------------------------------
* DEBUG 2 LOG_DEBUG
* DATA 3 LOG_NOTICE
* INFO 4 LOG_INFO
* ERROR 5 LOG_ERR
* WARN 6 LOG_CRIT
* FATAL 7 LOG_EMERG
*
* TRACE used to be Value = 1 but has since been deprecated.
* METRICS has a Value of 0.
* EVENT has a value of -1.
*
* We create a simple matrix with the Namaste values as keys and the sysLog values as values and then map the
* input value passed in $_error via the matrix. If not found, then we return a LOG_ERR syslog code. Otherwise,
* return the mapped integer value.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param int $_error
* @return int
*
* HISTORY:
* ========
* 09-26-19 mks DB-136: original coding
*
*/
public static function getSysLogError(int $_error): int
{
$matrix = [
2 => LOG_DEBUG,
3 => LOG_NOTICE,
4 => LOG_INFO,
5 => LOG_ERR,
6 => LOG_CRIT,
7 => LOG_EMERG
];
return $matrix[$_error] ?? LOG_ERR;
}
}

View File

@@ -0,0 +1,217 @@
<?php
/**
* Class xxxLogs
*
* This is the logging class definition that records framework-generated event messages.
*
* Design Notes:
* -------------
* because this is a log whose events are processed by a FnF queue, we're not going to cache or use auditing.
* History is limited to the created event and deletes are HARD.
* Only one status is supported: ACTIVE. No updates are allowed, making record-locking unnecessary.
* To reduce overhead, we're not enabling cache timers, because doing so would recurse.
* The collection does not need GUID tokens but we are storing the passed Broker Event ID
*
* @author mike@givingassistant.org
* @version 2.1.3
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*
*/
class xxxLogs
{
public $service = CONFIG_DATABASE_DDB_APPSERVER; // defines the nosql server service configuration
public $schema = TEMPLATE_DB_DDB; // defines the storage schema for the class
public $collection = COLLECTION_MONGO_LOGS; // sets the collection (table) name
public $seqKey = COLLECTION_NOSQL_LOGS_SQK; // sets the sequence key identifier
public $extension = COLLECTION_MONGO_LOGS_EXT; // sets the extension for the collection
public $setCache = false; // set to true to cache class data
public $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public $setJournaling = false; // set to true to allow journaling
public $setUpdates = false; // set to true to allow record updates
public $setHistory = false; // set to true to enable detailed record history tracking
public $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public $setSearchStatus = STATUS_ACTIVE; // set the default search status
public $setLocking = false; // set to true to enable record locking for collection
public $setTimers = false; // set to true to enable collection query timers
public $setPKeyType = DB_TOKEN; // sets the primary key type: either ID or TOKEN
/*
* tokens are guids -- if you're using a guid as the pkey for the class, then this value should be false.
* if you're using an integer pkey, and you want a token, you have to explicitly declare
* the token fields in $fields and set this value to true.
* if you're using an integer pkey and you don't want a token, set this value to false.
*/
public $setTokens = false; // set to true: adds the idToken field functionality
public $selfDestruct = true; // set to false if the class contains methods
public $cacheTimer = 0; // number of seconds a tuple will remain in-cache
public $setEnv = ENV_ALL; // defines the env where this class can be accessed
public $setMeta = false; // defines if we'll use the meta package for history
public $fields = [
DB_PKEY => DDB_TYPE_STRING, // GUID because setPKeyType == DB_TOKEN
LOG_FILE => DDB_TYPE_STRING,
LOG_METHOD => DDB_TYPE_STRING,
LOG_LINE => DDB_TYPE_NUMBER,
LOG_CLASS => DDB_TYPE_STRING,
LOG_LEVEL => DDB_TYPE_STRING,
LOG_MESSAGE => DDB_TYPE_STRING,
LOG_STACK_TRACE => DDB_TYPE_LIST,
DB_STATUS => DDB_TYPE_STRING,
DB_HISTORY => DDB_TYPE_LIST,
LOG_IS_EVENT => DDB_TYPE_BOOLEAN,
LOG_EVENT_GUID => DDB_TYPE_STRING,
LOG_CREATED => DDB_TYPE_NUMBER
];
public $fieldTypes = [
DB_PKEY => DATA_TYPE_STRING, // guid
LOG_FILE => DATA_TYPE_STRING,
LOG_METHOD => DATA_TYPE_STRING,
LOG_LINE => DATA_TYPE_INTEGER,
LOG_CLASS => DATA_TYPE_STRING,
LOG_LEVEL => DATA_TYPE_STRING,
LOG_MESSAGE => DATA_TYPE_STRING,
LOG_STACK_TRACE => DATA_TYPE_ARRAY,
DB_STATUS => DATA_TYPE_STRING,
DB_TIMER => DATA_TYPE_DOUBLE,
DB_HISTORY => DATA_TYPE_ARRAY,
LOG_IS_EVENT => DATA_TYPE_BOOL,
LOG_EVENT_GUID => DATA_TYPE_STRING,
LOG_CREATED => DATA_TYPE_INTEGER
];
// in the ddb world, this is the primary composite key for this table
public $indexes = [ DB_PKEY => DDB_INDEX_HASH, LOG_CREATED => DDB_INDEX_RANGE ];
/*
* declaring global and local secondary indexes:
*
* Limit: 5 of each
*
* General Format:
* ---------------
* Each tuple, up to the limit, is a record that contains the following array structure:
*
* [[
* 'name' => INDEX_NAME, // REQUIRED
* 'indexes' => [ KEY_NAME => HASH {, KEY_NAME => RANGE } ], // REQUIRED
* 'projectionType' => { KEYS_ONLY | INCLUDE | ALL }, // REQUIRED
* 'nka' => { [ list of one or more non-key attributes, max 20 ] }, // REQUIRED if projection = INCLUDE
* 'throughput' => [ 'rcu' => <integer>, 'wcu' => <integer> ] // REQUIRED for GLOBAL only
* ],....];
*
* secondary index keys must use the key literals as shown above. ('name', 'indexes', 'projectionType', etc.)
*
*/
public $globalIndexes = array(
[
// this creates a partition key based on the log level (fatal, warn, debug, etc.) with a sort key
// based on the method (the class method that created the log event).
// query example: give me all fatal errors
// give me all warnings generated by the method: _fetchData()
// Since the base keys (id, date) are projected onto this index, I am (awaiting testing) assuming
// that you could also range your query based on the creation date.
STRING_NAME => 'index_log_level',
STRING_INDEXES => [ LOG_LEVEL => DDB_INDEX_HASH, LOG_METHOD => DDB_INDEX_RANGE ],
DDB_STRING_PT => DDB_PT_ALL,
STRING_THROUGHPUT => [ CONFIG_DATABASE_READ_CAPACITY_UNITS => 100, CONFIG_DATABASE_WRITE_CAPACITY_UNITS => 100 ]
],
[
// let's add a second global index: key will be the created date, and the sort will be the error level
// this will allow us to answer queries like:
// give me all errors in the last hour
// give me all fatal errors for January
STRING_NAME => 'index_log_created',
STRING_INDEXES => [ LOG_CREATED => DDB_INDEX_HASH, LOG_LEVEL => DDB_INDEX_RANGE ],
DDB_STRING_PT => DDB_PT_INCLUDE,
DDB_STRING_NON_KEY_ATTRIBUTE => [ LOG_FILE, LOG_CLASS, LOG_METHOD, LOG_LINE ],
STRING_THROUGHPUT => [ CONFIG_DATABASE_READ_CAPACITY_UNITS => 100, CONFIG_DATABASE_WRITE_CAPACITY_UNITS => 100 ]
]
);
public $localIndexes = array(
[
// create secondary index using the log-level as the range value making the assumption that the
// base index hash will be used as the local secondary hash
STRING_NAME => 'index_sec_level',
STRING_INDEXES => [ DB_PKEY => DDB_INDEX_HASH, LOG_LEVEL => DDB_INDEX_RANGE ],
DDB_STRING_PT => DDB_PT_ALL
]
);
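/*
 * query sketch (hypothetical, untested): the index_log_level GSI above supports the "give me all
 * fatal errors" query via an equality condition on the index hash key, e.g. with aws-sdk-php:
 *
 *      $result = $ddb->query([
 *          'TableName'                 => $table,
 *          'IndexName'                 => 'index_log_level',
 *          'KeyConditionExpression'    => '#lvl = :lvl',
 *          'ExpressionAttributeNames'  => ['#lvl' => 'level'],        // stand-in for LOG_LEVEL
 *          'ExpressionAttributeValues' => [':lvl' => ['S' => 'fatal']]
 *      ]);
 *
 * note that a DynamoDB Query always requires equality on the index hash key; range conditions
 * apply to the sort key only.
 */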
public $exposedFields = null; // list of fields exposed to clients
public $cacheMap = null; // k->v paired array mapping fields -> cachedField Names
public $binFields = null; // binary fields that have to be encoded
// these fields aren't used in DDB, but are used in mongo, so are here only for code-compatibility
public $uniqueIndexes = null;
public $sparseIndexes = null;
public $subCollections = null;
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
public function __construct()
{
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
public function __destruct()
{
;
}
}

View File

@@ -0,0 +1,185 @@
<?php
/**
* Class xxxMetrics
*
* This is the metrics class definition that records timer events, usually database queries.
*
* Design Notes:
* -------------
* Metrics is identical to Logs: its events are processed by a FnF queue, so we're not going to cache or use auditing.
* History is limited to the created event and deletes are HARD.
* Only one status is supported: ACTIVE. No updates are allowed, making record-locking unnecessary.
* To reduce overhead, we're not enabling cache timers, because doing so would recurse.
* The collection does not need GUID tokens but we are storing the passed session ID in the meta payload for the
* create event - which is the only history event required or logged.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-07-17 mks code complete
*
*/
class xxxMetrics
{
public $service = CONFIG_DATABASE_DDB_APPSERVER; // defines the nosql server service configuration
public $schema = TEMPLATE_DB_DDB; // defines the storage schema for the class
public $collection = COLLECTION_MONGO_METRICS; // sets the collection (table) name
public $seqKey = COLLECTION_NOSQL_METRICS_SQK; // sets the sequence key identifier
public $extension = COLLECTION_MONGO_METRICS_EXT; // sets the extension for the collection
public $setCache = false; // set to true to cache class data
public $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public $setJournaling = false; // set to true to enable journaling
public $setUpdates = false; // set to true to allow record updates
public $setHistory = false; // set to true to enable detailed record history tracking
public $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public $setSearchStatus = STATUS_ACTIVE; // set the default search status
public $setLocking = false; // set to true to enable record locking for collection
public $setTimers = false; // set to true to enable collection query timers
public $setPKeyType = DB_TOKEN; // sets the primary key type: either ID or TOKEN
/*
* tokens are guids -- if you're using a guid as the pkey for the class, then this value should be false.
* if you're using an integer pkey, and you want a token, you have to explicitly declare
* the token fields in $fields and set this value to true.
* if you're using an integer pkey and you don't want a token, set this value to false.
*/
public $setTokens = false; // set to true: adds the idToken field functionality
public $selfDestruct = true; // set to false if the class contains methods
public $cacheTimer = 0; // number of seconds a tuple will remain in-cache
public $setEnv = ENV_ALL; // defines the env where this class can be accessed
public $setMeta = false; // defines if we'll use the meta package for history
public $fields = [
DB_PKEY => DDB_TYPE_STRING, // GUID because setPKeyType == DB_TOKEN
LOG_FILE => DDB_TYPE_STRING,
LOG_METHOD => DDB_TYPE_STRING,
LOG_LINE => DDB_TYPE_NUMBER,
LOG_CLASS => DDB_TYPE_STRING,
LOG_LEVEL => DDB_TYPE_STRING,
LOG_MESSAGE => DDB_TYPE_STRING,
LOG_STACK_TRACE => DDB_TYPE_LIST,
DB_STATUS => DDB_TYPE_STRING,
DB_TIMER => DDB_TYPE_NUMBER,
DB_HISTORY => DDB_TYPE_LIST,
LOG_IS_EVENT => DDB_TYPE_BOOLEAN,
LOG_EVENT_GUID => DDB_TYPE_STRING,
LOG_CREATED => DDB_TYPE_NUMBER
];
public $fieldTypes = [
DB_PKEY => DATA_TYPE_STRING, // guid
LOG_FILE => DATA_TYPE_STRING,
LOG_METHOD => DATA_TYPE_STRING,
LOG_LINE => DATA_TYPE_INTEGER,
LOG_CLASS => DATA_TYPE_STRING,
LOG_LEVEL => DATA_TYPE_STRING,
LOG_MESSAGE => DATA_TYPE_STRING,
LOG_STACK_TRACE => DATA_TYPE_ARRAY,
DB_STATUS => DATA_TYPE_STRING,
DB_TIMER => DATA_TYPE_DOUBLE,
DB_HISTORY => DATA_TYPE_ARRAY,
LOG_IS_EVENT => DATA_TYPE_BOOL,
LOG_EVENT_GUID => DATA_TYPE_STRING,
LOG_CREATED => DATA_TYPE_INTEGER
];
// in the ddb world, this is the primary composite key for this table
public $indexes = [ DB_PKEY => DDB_INDEX_HASH, LOG_CREATED => DDB_INDEX_RANGE ];
/*
* declaring global and local secondary indexes:
*
* Limit: 5 of each
*
* General Format:
* ---------------
* Each tuple, up to the limit, is a record that contains the following array structure:
*
* [[
* 'name' => INDEX_NAME, // REQUIRED
* 'indexes' => [ KEY_NAME => HASH {, KEY_NAME => RANGE } ], // REQUIRED
* 'projectionType' => { KEYS_ONLY | INCLUDE | ALL }, // REQUIRED
* 'nka' => { [ list of one or more non-key attributes, max 20 ] }, // REQUIRED if projection = INCLUDE
* 'throughput' => [ 'rcu' => <integer>, 'wcu' => <integer> ] // REQUIRED for GLOBAL only
* ],....];
*
* secondary index keys must use the key literals as shown above. ('name', 'indexes', 'projectionType', etc.)
*
*/
public $globalIndexes = null;
public $localIndexes = null;
public $exposedFields = null; // list of fields exposed to clients
public $cacheMap = null; // k->v paired array mapping fields -> cachedField Names
public $binFields = null; // binary fields that have to be encoded
// these fields aren't used in DDB, but are used in mongo, so are here only for code-compatibility
public $uniqueIndexes = null;
public $sparseIndexes = null;
public $subCollections = null;
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
public function __construct()
{
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
public function __destruct()
{
;
}
}

View File

@@ -0,0 +1,87 @@
<?php
/**
* Class: gatTestMySQL
*
* This is the definition for a mysql/mariadb-based test class. Intended usage is for unit-testing for basic CRUD
* operations.
*
* This template should also serve as a guide, or documentation, for creating mysql/mariadb template classes.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-30-17 mks original coding
*
*/
class gatTestMySQL
{
public $version = 1;
public $schema = TEMPLATE_DB_PDO; // defines the storage schema for the class
public $collection = COLLECTION_MYSQL_TEST; // sets the collection (table) name
public $seqKey = COLLECTION_MYSQL_TEST_SQK; // sets the sequence key identifier
public $extension = COLLECTION_MYSQL_TEST_EXT; // sets the extension for the collection
public $setCache = true; // set to true to cache class data
public $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public $setJournaling = false; // set to true to allow journaling
public $setUpdates = true; // set to true to allow record updates
public $setHistory = false; // set to true to enable detailed record history tracking
public $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public $setSearchStatus = STATUS_ACTIVE; // set the default search status
public $setLocking = false; // set to true to enable record locking for collection
public $setTimers = true; // set to true to enable collection query timers
public $setPKey = DB_PKEY; // sets the primary key for the collection
public $selfDestruct = true; // set to false if this class contains methods
/*
* tokens are guids -- if you're using a guid as the pkey for the class, then this value should be false.
* if you're using an integer pkey, and you want a token, you have to explicitly declare
* the token fields in $fields and set this value to true.
* if you're using an integer pkey and you don't want a token, set this value to false.
*/
public $setTokens = false; // set to true: adds the idToken field functionality
public $cacheTimer = 0; // number of seconds a tuple will remain in-cache
public $setEnv = ENV_ALL; // defines the env where this class can be accessed
public $setMeta = false; // defines if we'll use the meta package for history
public $fields = [
DB_PKEY => DATA_TYPE_INTEGER, // pkey (integer) used internally and is REQUIRED
TEST_FIELD_TEST_STRING => DATA_TYPE_STRING,
TEST_FIELD_TEST_DOUBLE => DATA_TYPE_DOUBLE,
TEST_FIELD_TEST_INT => DATA_TYPE_INTEGER,
TEST_FIELD_TEST_BOOL => DATA_TYPE_BOOL,
TEST_FIELD_TEST_OBJECT => DATA_TYPE_OBJECT,
DB_TOKEN => DATA_TYPE_STRING // unique key (string) exposed externally and is REQUIRED
];
// cache-map constants are in ./common/cacheMaps.php
public $cacheMap = [
DB_TOKEN => CM_TST_TOKEN,
TEST_FIELD_TEST_STRING => CM_TST_FIELD_TEST_STRING,
TEST_FIELD_TEST_DOUBLE => CM_TST_FIELD_TEST_DOUBLE,
TEST_FIELD_TEST_INT => CM_TST_FIELD_TEST_INT,
TEST_FIELD_TEST_BOOL => CM_TST_FIELD_TEST_BOOL,
TEST_FIELD_TEST_OBJECT => CM_TST_FIELD_TEST_OBJ
];
// for mysql, all indexed fields are listed in this container regardless of index type. If a field appears
// here but is not a unique or compound index, then it is just a regular index.
public $indexes = [ DB_PKEY, TEST_FIELD_TEST_INT, DB_TOKEN ];
// unique indexes listed as an indexed array
public $uniqueIndexes = [ DB_TOKEN ];
// compound indexes are listed as sub-arrays:
// [ [ col-1, ..., col-n ], ..., [] ]
public $compoundIndexes = null;
// exposed fields are mutually exclusive with cacheMaps; one or the other but not both
public $exposedFields = null;
// binary fields require special handling (encoding) and have to be listed here
public $binaryFields = null;
}

View File

@@ -0,0 +1,529 @@
<?php
/**
* gatAudit Class Template -- mongo template class
*
* This is the template class for the Audit table - the auditing sub-system for Namaste.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 10-16-18 mks DB-57: original coding
* 11-13-18 mks DB-63: added template field to collection fields/schema
* 01-13-20 mks DB-150: PHP7.4 member type casting
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatAudit
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_ADMIN; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_AUDIT; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_AUDIT; // sets the collection (table) name
public ?string $whTemplate = TEMPLATE_CLASS_AUDIT; // name of the warehouse template (not collection)
public string $extension = COLLECTION_MONGO_AUDIT_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = true; // set to true is this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
// fields specific to the collection
MONGO_ID => DATA_TYPE_OBJECT, // sorting by the id is just like sorting by createdDate
AUDIT_SYS_EV_GUID => DATA_TYPE_STRING, // GUID passed from the System Event Manager
AUDIT_SESSION_GUID => DATA_TYPE_STRING, // Session GUID (pulled from meta payload)
AUDIT_SESSION_IP => DATA_TYPE_STRING, // Session IP (pulled from meta payload)
AUDIT_USER_GUID => DATA_TYPE_STRING, // User GUID (pulled from meta payload)
AUDIT_JOURNAL_GUID => DATA_TYPE_STRING, // (optional) Journal Event GUID
AUDIT_SERVICE => DATA_TYPE_STRING, // service of the collection/record under audit
AUDIT_SCHEMA => DATA_TYPE_STRING, // schema type for the accessed collection/table
AUDIT_TEMPLATE => DATA_TYPE_STRING, // the template name used to instantiate the data class
AUDIT_DB => DATA_TYPE_STRING, // name of the DB being accessed
AUDIT_COLLECTION => DATA_TYPE_STRING, // name of the collection/table being accessed
AUDIT_COLLECTION_EXT => DATA_TYPE_STRING, // Namaste extension of the targeted table (todo: not sure yet why I need/want this)
AUDIT_RECORD_TOKEN => DATA_TYPE_STRING, // record GUID
AUDIT_SNAPSHOT => DATA_TYPE_STRING, // JSON-encoded copy of the record prior to access
AUDIT_QUERY => DATA_TYPE_STRING, // copy of the query used to access the record
AUDIT_ACCESS_CLIENT => DATA_TYPE_STRING, // name of the application/client used to access the record
AUDIT_ACCESS_USER => DATA_TYPE_STRING, // name of the user accessing the record (if available)
AUDIT_USER_ROLE => DATA_TYPE_STRING, // role of the user accessing the record (if available)
AUDIT_OPERATION => DATA_TYPE_STRING, // name of the operation access the record (CRUD)
AUDIT_ACCESS_ALLOWED => DATA_TYPE_BOOL, // if the access was granted or blocked
// generic mongo constants
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, these fields cannot be updated or removed.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
// todo -- code a condition where ALL fields are protected using the * symbol (DB-58)
public ?array $protectedFields = [
MONGO_ID, AUDIT_SYS_EV_GUID, AUDIT_SESSION_GUID, AUDIT_USER_GUID, AUDIT_JOURNAL_GUID, AUDIT_SERVICE,
AUDIT_SCHEMA, AUDIT_DB, AUDIT_COLLECTION, AUDIT_COLLECTION_EXT, AUDIT_SNAPSHOT, AUDIT_QUERY,
AUDIT_ACCESS_CLIENT, AUDIT_ACCESS_USER, AUDIT_USER_ROLE, AUDIT_OPERATION, AUDIT_ACCESS_ALLOWED,
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_STATUS, DB_ACCESSED, AUDIT_RECORD_TOKEN, AUDIT_TEMPLATE
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DB_TOKEN, DB_ACCESSED, DB_EVENT_GUID, AUDIT_SYS_EV_GUID, AUDIT_USER_GUID,
AUDIT_JOURNAL_GUID, AUDIT_ACCESS_ALLOWED, AUDIT_SESSION_GUID, AUDIT_SESSION_IP, AUDIT_RECORD_TOKEN
];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column under an index property (unique/partial/ttl), then do NOT also declare it as a single-field index!
//
public ?array $singleFields = [
DB_CREATED => -1, // assuming we want LIFO
DB_ACCESSED => -1, // assuming we want LIFO
DB_EVENT_GUID => 1, // event guid should always be indexed
AUDIT_ACCESS_ALLOWED => 1,
AUDIT_SESSION_IP => 1,
AUDIT_RECORD_TOKEN => 1
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any indexed field that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as
// singleField, or compound, or unique. MongoDB will then apply the multi-key index automagically.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ 'arrayColumnName.subField1' => 1, 'arrayColumnName.subField3' => -1, ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported because partial indexes replace it
//
// If a property is not in use, then you must still declare it as a class property, but its
// value must be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword partialFilterExpression followed by a query document
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
// The above index supports queries that return a list of names (sorted DESC by last name) for people aged 62 or older.
//
//
public ?array $partialIndexes = null;
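For illustration only -- the index name, field names, and exact declaration shape below are assumptions, not part of this template -- the shell example above might be declared in a Namaste template roughly like so:

```php
// Hypothetical declaration sketch; 'ageIndex' and the field names do not
// exist in this template:
public ?array $partialIndexes = [
    'ageIndex' => [
        'lastName'  => -1,
        'firstName' => 1,
        'partialFilterExpression' => [ 'age' => [ '$gte' => 62 ] ]
    ]
];
```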
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
DB_TOKEN => 1, // DB_TOKEN should always appear
AUDIT_SESSION_GUID => 1,
AUDIT_JOURNAL_GUID => 1,
AUDIT_USER_GUID => 1,
AUDIT_SYS_EV_GUID => 1
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- index is built ASC and the record is deleted after 1 day
//
public ?array $ttlIndexes = null;
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = null;
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index; rather, it controls when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
* SubC fields do not need to be indexed.
*
*/
public ?array $subC = null;
//=================================================================================================================
// MIGRATION DECLARATIONS
// ----------------------
// Data in this section is used to handle migrations -- when we're pulling from legacy tables into the Namaste
// framework. See online doc for more info.
//=================================================================================================================
/**
* The migration map is an associative array that maps the Namaste fields (keys) to the corresponding
* (remote) legacy fields in the source table to be migrated to Namaste.
*
* For example, if we were migrating a mysql table in the legacy production database to Namaste::mongo, then
* the keys of the migration map would be the Namaste::mongo->fieldNames and the values would be the mysql
* column names in the legacy table.
*
* If there is a value which cannot be mapped to a key, then set it to null.
*
* Fields that will be dropped in the migration are not listed as values or as keys.
*
* This map will only exist in the template object and will never be imported into the class widget.
*
* This is a required field.
*
*/
public ?array $migrationMap = null;
/*
* the migrationSortKey defines the SOURCE field by which the fetch query will be sorted. ALL sort fields are
* in ASC order so all we need to list here is the name of the field -- which MUST BE IN THE SOURCE TABLE.
*
* Populating this field may require preliminary examination of the data - what we want is a field that has
* zero NULL values.
*
* This is a required field.
*
*/
public ?array $migrationSortKey = null;
/*
* The migrationStatusKey defines the status field/column in the source table -- if the user requires that
* soft-deleted records not be migrated, then this field must be set. Otherwise, set the value to null.
*
* The format is in the form of a key-value paired array. The key specifies the name of the column and the value
* specifies the "deleted" value that, if found, will cause that row from the SOURCE data to be omitted from the
* DESTINATION table.
*
* e.g.: $migrationStatusKV = [ 'some_field' => 'deleted' ]
*
* Note that both the key and the value are case-sensitive!
*
* This is an optional field.
*
*/
public ?array $migrationStatusKV = null;
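A minimal sketch of how the status KV pair could be applied during a migration fetch (the helper name and row shapes are illustrative, not the framework's actual migration code):

```php
<?php
/**
 * Drop source rows whose status column holds the "deleted" value, per a
 * $migrationStatusKV pair such as [ 'some_field' => 'deleted' ].
 * Both the key and the value are compared case-sensitively.
 */
function filterMigrationRows(array $rows, ?array $statusKV): array
{
    if ($statusKV === null) {
        return $rows;                          // soft-delete filtering not requested
    }
    $column  = array_key_first($statusKV);
    $deleted = $statusKV[$column];
    return array_values(array_filter(
        $rows,
        fn(array $row) => ($row[$column] ?? null) !== $deleted
    ));
}
```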
// The $migrationSourceSchema defines the remote schema for the source table
public ?string $migrationSourceSchema = null; // or STRING_MONGO
// The source table in the remote repos (default defined in the XML) must be declared here
public ?string $migrationSourceTable = null;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => true, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'Q', // must be either D, M, Q or Y; defaults to M
WH_OVERRIDE => true, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H, or S. Can be reset to T via meta. Default: H
WH_INDEXES => [DB_CREATED, DB_WH_CREATED],
WH_TEMPLATE => '',
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
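The constraints documented above can be sanity-checked mechanically. This is a sketch only, under the assumption that the WH_* constants resolve to the string keys stubbed here; it is not the framework's real validation:

```php
<?php
// Stand-in values for the WH_* constants (illustration only).
const WH_SUPPORTED = 'supported';
const WH_INTERVAL  = 'interval';
const WH_DELETE    = 'delete';
const WH_QUALIFIER = 'qualifier';

/** Validate a warehouse config against the rules documented above. */
function validateWareHouse(array $wh): bool
{
    if (!in_array($wh[WH_INTERVAL] ?? 'M', ['D', 'M', 'Q', 'Y'], true)) {
        return false;                          // unknown interval code
    }
    if (!in_array($wh[WH_DELETE] ?? 'H', ['H', 'S'], true)) {
        return false;                          // unknown delete mode
    }
    // if warehousing is supported, the qualifier cannot be blank
    if (($wh[WH_SUPPORTED] ?? false) && empty($wh[WH_QUALIFIER])) {
        return false;
    }
    return true;
}
```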
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 10-16-18 mks DB-57: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 10-16-18 mks DB-57: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 10-16-18 mks DB-57: original coding
*
*/
public function __destruct()
{
;
}
}

<?php
/**
* Class gatConsolidatedSanctionsList -- mongo class
*
* This class is used to store the US DHS Consolidated Sanctions list found at:
* https://home.treasury.gov/policy-issues/financial-sanctions/consolidated-sanctions-list-data-files
* This collection is populated from the file: consolidated.xml, an XML version of the Consolidated Sanctions list
*
*
* HISTORY:
* ========
* 12-03-20 mks DB-179: original coding
*
*/
class gatConsolidatedSanctionsList
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version; not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_SEGUNDO; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_CSL; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_CSL; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_CSL_EXT; // sets the extension for the collection
public bool $closedClass = false; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = false; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_DESTRUCTIVE; // set to an AUDIT_* constant (nondestructive also audits reads)
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = true; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = true; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
public array $fields = [
DB_TOKEN => DATA_TYPE_STRING, // unique key exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER, // epoch time
COLLECTION_MONGO_CSL_ADDRESS => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_ADDRESS1 => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_ADDRESS2 => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_ADDRESS3 => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_ADDR_LIST => DATA_TYPE_ARRAY,
COLLECTION_MONGO_CSL_AKA => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_AKA_LIST => DATA_TYPE_ARRAY,
COLLECTION_MONGO_CSL_CATEGORY => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_CITIZENSHIP => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_CITIZENSHIP_LIST => DATA_TYPE_ARRAY,
COLLECTION_MONGO_CSL_CITY => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_COUNTRY => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_DOB => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_DOB_LIST => DATA_TYPE_ARRAY,
COLLECTION_MONGO_CSL_FN => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_LN => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_ID => DATA_TYPE_INTEGER,
COLLECTION_MONGO_CSL_ID_COUNTRY => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_ID_LIST => DATA_TYPE_ARRAY,
COLLECTION_MONGO_CSL_ID_NUMBER => DATA_TYPE_STRING, // string, to preserve leading zeros
COLLECTION_MONGO_CSL_ID_TYPE => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_MAIN_ENTRY => DATA_TYPE_BOOL,
COLLECTION_MONGO_CSL_POB => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_POB_LIST => DATA_TYPE_ARRAY,
COLLECTION_MONGO_CSL_POSTAL_CODE => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_PRG => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_PRG_LIST => DATA_TYPE_ARRAY,
COLLECTION_MONGO_CSL_REMARKS => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_SOP => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_TYPE => DATA_TYPE_STRING,
COLLECTION_MONGO_CSL_UID => DATA_TYPE_INTEGER,
COLLECTION_MONGO_CSL_SDN_TYPE => DATA_TYPE_STRING
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, updating or removing these fields cannot be accomplished.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, DB_STATUS
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_CREATED, DB_EVENT_GUID, DB_ACCESSED, MONGO_ID, DB_STATUS
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_TOKEN, DB_CREATED, DB_STATUS, DB_EVENT_GUID, COLLECTION_MONGO_CSL_FN, COLLECTION_MONGO_CSL_LN,
COLLECTION_MONGO_CSL_AKA_LIST, COLLECTION_MONGO_CSL_DOB_LIST, COLLECTION_MONGO_CSL_ADDR_LIST,
COLLECTION_MONGO_CSL_SDN_TYPE
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = ['EntityNameIndex'];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column under an index property (unique/partial/ttl), then do NOT also declare it as a single-field index!
//
public ?array $singleFields = [
DB_TOKEN => 1,
DB_CREATED => -1,
DB_STATUS => 1,
COLLECTION_MONGO_CSL_SDN_TYPE => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = [
'EntityNameIndex' => [ COLLECTION_MONGO_CSL_SDN_TYPE => 1, COLLECTION_MONGO_CSL_LN => 1, COLLECTION_MONGO_CSL_FN => 1 ]
];
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any indexed field that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as
// singleField, or compound, or unique. MongoDB will then apply the multi-key index automagically.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ 'arrayColumnName.subField1' => 1, 'arrayColumnName.subField3' => -1, ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported because partial indexes replace it
//
// If a property is not in use, then you must still declare it as a class property, but its
// value must be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Partial indexes only add a document to the index if the document satisfies the filter
// condition specified in the query expression (expr2).
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword partialFilterExpression followed by a query document
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
// The above index supports queries that return a list of names (sorted DESC by last name) for people aged 62 or older.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
DB_TOKEN => 1 // DB_TOKEN should always appear
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- index is built ASC and the record is deleted after 1 day
//
public ?array $ttlIndexes = null;
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
DB_TOKEN => CM_TOKEN,
DB_CREATED => CM_DATE_CREATED,
DB_ACCESSED => CM_DATE_ACCESSED,
DB_STATUS => CM_STATUS,
DB_EVENT_GUID => CM_EVENT_GUID,
COLLECTION_MONGO_CSL_ADDRESS => CM_CSL_ADDR,
COLLECTION_MONGO_CSL_ADDRESS1 => CM_CSL_ADDR1,
COLLECTION_MONGO_CSL_ADDRESS2 => CM_CSL_ADDR2,
COLLECTION_MONGO_CSL_ADDRESS3 => CM_CSL_ADDR3,
COLLECTION_MONGO_CSL_ADDR_LIST => CM_CSL_ADDR_LIST,
COLLECTION_MONGO_CSL_AKA => CM_CSL_AKA,
COLLECTION_MONGO_CSL_AKA_LIST => CM_CSL_AKA_LIST,
COLLECTION_MONGO_CSL_CATEGORY => CM_CSL_CAT,
COLLECTION_MONGO_CSL_CITIZENSHIP => CM_CSL_CITIZENSHIP,
COLLECTION_MONGO_CSL_CITIZENSHIP_LIST => CM_CSL_CITIZENSHIP_LIST,
COLLECTION_MONGO_CSL_CITY => CM_CSL_CITY,
COLLECTION_MONGO_CSL_COUNTRY => CM_CSL_COUNTRY,
COLLECTION_MONGO_CSL_DOB => CM_CSL_DOB,
COLLECTION_MONGO_CSL_DOB_LIST => CM_CSL_DOB_LIST,
COLLECTION_MONGO_CSL_FN => CM_CSL_FIRST_NAME,
COLLECTION_MONGO_CSL_LN => CM_CSL_LAST_NAME,
COLLECTION_MONGO_CSL_ID => CM_CSL_ID,
COLLECTION_MONGO_CSL_ID_COUNTRY => CM_CSL_ID_COUNTRY,
COLLECTION_MONGO_CSL_ID_LIST => CM_CSL_ID_LIST,
COLLECTION_MONGO_CSL_ID_NUMBER => CM_CSL_ID_NUM,
COLLECTION_MONGO_CSL_ID_TYPE => CM_CSL_ID_TYPE,
COLLECTION_MONGO_CSL_MAIN_ENTRY => CM_CSL_MAIN_ENTRY,
COLLECTION_MONGO_CSL_POB => CM_CSL_POB,
COLLECTION_MONGO_CSL_POB_LIST => CM_CSL_POB_LIST,
COLLECTION_MONGO_CSL_POSTAL_CODE => CM_CSL_POST_CODE,
COLLECTION_MONGO_CSL_PRG => CM_CSL_PRG,
COLLECTION_MONGO_CSL_PRG_LIST => CM_CSL_PRG_LIST,
COLLECTION_MONGO_CSL_REMARKS => CM_CSL_REM,
COLLECTION_MONGO_CSL_SOP => CM_CSL_STATE_OR_PROVINCE,
COLLECTION_MONGO_CSL_TYPE => CM_CSL_TYPE,
COLLECTION_MONGO_CSL_UID => CM_CSL_UID,
COLLECTION_MONGO_CSL_SDN_TYPE => CM_CSL_SDN_TYPE
];
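How a cache map of native-column => client-label pairs might be applied when shaping a record for the client (illustrative helper with plain-string keys standing in for the constants; not the framework's actual API):

```php
<?php
/**
 * Relabel native schema columns to their client-facing names.
 * Columns absent from the map are dropped, so schema is never exposed.
 */
function applyCacheMap(array $record, array $cacheMap): array
{
    $out = [];
    foreach ($cacheMap as $native => $label) {
        if (array_key_exists($native, $record)) {
            $out[$label] = $record[$native];
        }
    }
    return $out;
}

$record = ['created' => 1607000000, 'first_name' => 'Ada', '_id' => 'raw-id'];
$map    = ['created' => 'dateCreated', 'first_name' => 'firstName'];
// applyCacheMap($record, $map) yields ['dateCreated' => 1607000000, 'firstName' => 'Ada']
```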
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as the associative array: $exposedFields. Only those fields,
* enumerated within this container, will be exposed to the client.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather to control when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = [
COLLECTION_MONGO_CSL_ADDR_LIST => [
COLLECTION_MONGO_CSL_UID,
COLLECTION_MONGO_CSL_ADDRESS1,
COLLECTION_MONGO_CSL_ADDRESS2,
COLLECTION_MONGO_CSL_ADDRESS3,
COLLECTION_MONGO_CSL_CITY,
COLLECTION_MONGO_CSL_POSTAL_CODE,
COLLECTION_MONGO_CSL_COUNTRY,
COLLECTION_MONGO_CSL_SOP
],
COLLECTION_MONGO_CSL_AKA_LIST => [
COLLECTION_MONGO_CSL_UID,
COLLECTION_MONGO_CSL_TYPE,
COLLECTION_MONGO_CSL_CATEGORY,
COLLECTION_MONGO_CSL_LN,
COLLECTION_MONGO_CSL_FN
],
COLLECTION_MONGO_CSL_ID_LIST => [
COLLECTION_MONGO_CSL_UID,
COLLECTION_MONGO_CSL_ID_TYPE,
COLLECTION_MONGO_CSL_ID_NUMBER
]
// COLLECTION_MONGO_CSL_DOB_LIST => [
// COLLECTION_MONGO_CSL_UID,
// COLLECTION_MONGO_CSL_DOB,
// COLLECTION_MONGO_CSL_MAIN_ENTRY
// ],
// COLLECTION_MONGO_CSL_POB_LIST => [
// COLLECTION_MONGO_CSL_UID,
// COLLECTION_MONGO_CSL_POB,
// COLLECTION_MONGO_CSL_MAIN_ENTRY
// ],
// COLLECTION_MONGO_CSL_PRG_LIST => [
// COLLECTION_MONGO_CSL_PRG
// ]
];
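// As a concrete illustration of the shape declared above (the literal keys below are
// stand-ins for the COLLECTION_MONGO_CSL_* constants, not framework code), here is a
// minimal sketch of a parent record carrying one sub-collection, and of an "update"
// modeled as delete + insert without touching the parent's scalar fields:

```php
<?php
// Hypothetical document shape for a parent collection with one sub-collection.
// Keys are stand-ins for the COLLECTION_MONGO_CSL_* constants used above.
$doc = [
    'uid'      => 9876,
    'lastName' => 'DOE',
    'addrList' => [                                   // sub-collection key
        ['uid' => 9876, 'city' => 'London', 'country' => 'UK'],
        ['uid' => 9876, 'city' => 'Geneva', 'country' => 'CH'],
    ],
];

// "updating" the Geneva entry = delete the old element + insert the new one;
// the parent's scalar fields (uid, lastName) are never modified in the process
$doc['addrList'] = array_values(array_filter(
    $doc['addrList'],
    fn(array $a): bool => $a['city'] !== 'Geneva'
));
$doc['addrList'][] = ['uid' => 9876, 'city' => 'Zurich', 'country' => 'CH'];
```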
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y, defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H or S. Can be reset to S via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]],
DB_STATUS => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
OPERAND_AND => null
]
];
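// A minimal sketch of how the null placeholder in the default qualifier above is
// consumed. The literal keys are stand-ins for the DB_* / OPERAND_* / OPERATOR_*
// constants -- this is an illustration of the placeholder mechanism, not the
// framework's actual warehousing code:

```php
<?php
// Stand-in keys; the real template uses framework constants. The null leaf is
// the placeholder that the client's warehouse request payload fills at run time.
$qualifier = [
    'created' => ['opNull' => ['$lt' => [null]]],
    'status'  => ['opNull' => ['$eq' => ['active']]],
    '$and'    => null,
];

// substitute the cutoff supplied in the wh request payload (epoch seconds)
$cutoff = 1577836800; // e.g. 2020-01-01 00:00:00 UTC
$qualifier['created']['opNull']['$lt'][0] = $cutoff;
```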
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* Constructor in this template not only registers the shutdown method, but also allows us to generate a custom
* GUID string during instantiation by use of the input parameters:
*
* $_getGUID - boolean, defaults to false but, if true, will generate a GUID value and store it in the class member
* $_lc - boolean, defaults to false but, if true, will generate a GUID using lower-case alpha characters
*
* If we generate a GUID on instantiation, the GUID will be stored in the class member. This allows us to both
* instantiate a session class object and generate a GUID value (the most requested post-instantiation action)
* at the same time. All the more efficient.
*
*
* HISTORY:
* ========
* 12-03-20 mks DB-179: original coding
*
* @author mike@givingassistant.org
* @version 1.0
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* processCONSList() -- public template method
*
* This template method is accessible through the factory widget->template object only.
*
* The CONS list is available from the US Treasury Department and contains a list of Individuals and Entities
* that have sanctions applied that prohibit the transfer of funds.
*
* The list, available at:
* https://home.treasury.gov/policy-issues/financial-sanctions/consolidated-sanctions-list-data-files
* and known as: consolidated.xml, contains the entire sanctions list.
*
* The format of the list is not conducive to efficient data storage, so we're going to manipulate some of the
* columnar elements, which are lists (arrays), such that we remove the superfluous and redundant sub-container
* headers so that all lists are associative arrays of indexed arrays.
*
* The function requires the following input parameters:
*
* $_file - the fqfn file containing the CONS XML list
* $_errs - a call-by-reference parameter that will return processing errors back to the calling client
* $_lastUpdated - a call-by-reference parameter that returns the date when the list was updated last
* $_recCount - a call-by-reference parameter containing the total number of records as reported by the US Treasury
*
* On successful processing, the function returns an array, the processed cons list with the header removed
* and the sub-arrays all nice and homogeneous.
*
* If there was an error raised in processing, we'll store a copy of the error in $_errs and return a null
* value back to the calling client.
*
* If an exception is raised, a null will be returned and the error(s) logged to the db and to the console.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* @param string $_file
* @param array|null $_errs
* @param string $_lastUpdated
* @param int $_recCount
* @return array|null
*
*
* HISTORY:
* ========
* 12-07-20 mks DB-180: original coding
*
*/
public function processCONSList(string $_file, ?array &$_errs, string &$_lastUpdated = '', int &$_recCount = 0): ?array
{
$method = basename(__METHOD__);
$aryRetData = null;
if (empty($_file)) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$_errs[] = $hdr . ERROR_PARAM_404 . STRING_LIST;
return null;
}
// see if we can open the file for reading
$fp = simplexml_load_file($_file);
if (false === $fp) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$_errs[] = $hdr . ERROR_OPEN_XML_FILE . $_file;
return null;
} else {
try {
if (is_null($consData = objectToArray($fp))) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
$_errs[] = $hdr . ERROR_DATA_OBJ_2_ARY_FAIL;
return null;
}
} catch (Throwable | TypeError $t) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $_errs, true);
return null;
}
}
// at this point, we have successfully loaded the CONS XML file and have stored it, as an array, in $consData
// we need to clean up the "lists" embedded as sub-arrays since the XML introduced an artificial layer
// pointing to the data: $consData['sdnEntry']['akaList']['aka'][0, ..., n]
// ^^^^^ <--- this is the layer to be removed
$records = $consData[CONS_SDN_ENTRY];
$consMeta = $consData[CONS_PUB_INFO];
$_lastUpdated = $consMeta[CONS_PUB_DATE];
$_recCount = intval($consMeta[CONS_REC_COUNT]);
// list of lists (entities) -- these are the sub-collections as opposed to sub-arrays
$lol = [COLLECTION_MONGO_CSL_AKA_LIST => COLLECTION_MONGO_CSL_AKA,
COLLECTION_MONGO_CSL_ADDR_LIST => COLLECTION_MONGO_CSL_ADDRESS,
COLLECTION_MONGO_CSL_ID_LIST => COLLECTION_MONGO_CSL_ID,
COLLECTION_MONGO_CSL_PRG_LIST => COLLECTION_MONGO_CSL_PRG,
COLLECTION_MONGO_CSL_DOB_LIST => COLLECTION_MONGO_CSL_DOB_ITEM,
COLLECTION_MONGO_CSL_POB_LIST => COLLECTION_MONGO_CSL_POB_ITEM
];
try {
foreach ($records as &$record) {
foreach ($record as $field => $fieldValue) {
if (is_array($fieldValue) and array_key_exists($field, $lol)) {
if (is_array($record[$field][$lol[$field]]) and is_numeric(key($record[$field][$lol[$field]]))) {
// sub-array has multiple elements
foreach ($record[$field][$lol[$field]] as $subRecord) {
$record[$field][] = $subRecord;
}
} else {
// sub-array has but a single element
$record[$field][] = $record[$field][$lol[$field]];
}
unset($record[$field][$lol[$field]]);
}
}
}
} catch (Throwable | TypeError $t) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $_errs, true);
return null;
}
return $records;
}
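// The header-stripping step above can be exercised in isolation. The record below
// uses literal keys ('akaList', 'aka') in place of the COLLECTION_MONGO_CSL_*
// constants to show the before/after shape of one record:

```php
<?php
// One record as parsed from consolidated.xml: the wrapper key 'aka' is the
// artificial layer the XML introduces, and is what the loop removes.
$record = [
    'uid'     => 12345,
    'akaList' => [
        'aka' => [
            ['type' => 'a.k.a.', 'lastName' => 'DOE'],
            ['type' => 'f.k.a.', 'lastName' => 'ROE'],
        ],
    ],
];
$lol = ['akaList' => 'aka']; // list-of-lists: field name => wrapper to strip

foreach ($record as $field => $fieldValue) {
    if (is_array($fieldValue) && array_key_exists($field, $lol)) {
        $inner = $record[$field][$lol[$field]];
        if (is_array($inner) && is_numeric(key($inner))) {
            foreach ($inner as $subRecord) {   // wrapper held multiple elements
                $record[$field][] = $subRecord;
            }
        } else {
            $record[$field][] = $inner;        // wrapper held a single element
        }
        unset($record[$field][$lol[$field]]);
    }
}
// $record['akaList'] is now a plain indexed array of aka entries
```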
/**
* saveCONSList() -- public template method
*
* This method has the following input parameters:
*
* $_data -- this is the broker request data array
* $_errs -- a call-by-reference parameter for returning processing errors back to the calling client
*
* The method will return a null value when there are errors in parsing the input parameters, or from saving
* the XML file to disk.
*
* Otherwise, on success, the method returns the FQFN of the saved XML file.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_data
* @param array|null $_errs
* @return string|null
*
*
* HISTORY:
* ========
* 12-09-20 mks DB-180: original programming
*
*/
public function saveCONSList(array $_data, ?array &$_errs): ?string
{
$method = basename(__METHOD__);
if (empty($_data)) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
@handleExceptionMessaging($hdr, ERROR_DATA_ARRAY_EMPTY, $_errs, true);
return null;
}
if (!is_array($_data)) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
@handleExceptionMessaging($hdr, ERROR_DATA_ARRAY_NOT_ARRAY . STRING_DATA, $_errs, true);
return null;
}
if (!array_key_exists(STRING_DATA, $_data) or empty($_data[STRING_DATA])) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
@handleExceptionMessaging($hdr, ERROR_DATA_ARRAY_EMPTY . COLON . STRING_DATA, $_errs, true);
return null;
}
try {
// extract the XML file from the data payload
$xmlFile = $_data[STRING_DATA];
// write the file to tmp storage
$guid = guid();
$fqfn = DIR_TMP . SLASH . $guid . DOT . FILE_TYPE_XML;
if (false === file_put_contents($fqfn, $xmlFile)) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
@handleExceptionMessaging($hdr, ERROR_SAVE_XML_FILE . $fqfn, $_errs, true);
return null;
}
} catch (Throwable | TypeError $t) {
$hdr = sprintf(INFO_LOC, $method, __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $_errs, true);
return null;
}
return $fqfn;
}
/**
* cleanUp() -- public template method
*
* This method has a single required input parameter: the FQFN of the original XML file. Once validated, we'll
* load the XML file into a variable and use that string as the VALUE value in the system-data table for row #2.
*
* This means that the last CONS list added to the Segundo collection has been stored (for archival and validation
* purposes) as a flat-file in a column in a mongo collection.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_file
*
*
* HISTORY:
* ========
* 12-15-20 mks DB-180: original coding
*
*/
public function cleanUp(string $_file = ''):void
{
$errors = [];
try {
if (strlen($_file) and file_exists($_file)) {
$contents = file_get_contents($_file);
if (false === $contents) {
consoleLog('CONS: ', CON_ERROR, ERROR_OPEN_XML_FILE . $_file);
return;
}
// delete the XML file
unlink($_file);
// instantiate a system-data widget
$meta = [
META_TEMPLATE => TEMPLATE_CLASS_SYS_DATA,
META_CLIENT => CLIENT_SYSTEM,
META_EVENT_GUID => guid()
];
/** @var gacMongoDB $widget */
if (is_null($widget = grabWidget($meta, '', $errors))) {
consoleLog('CONS: ', CON_ERROR, ERROR_FAILED_TO_INSTANTIATE . TEMPLATE_CLASS_SYS_DATA);
} else {
// save the XML data to the system-data table
$data = [
DATA_KEY => DATA_CONS,
DATA_VALUE => $contents,
ROW_ID => SYS_DATA_ROW_ID_CONS
];
$bc = new gacWorkQueueClient( basename(__METHOD__) . AT . __LINE__);
if (!$bc->status) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
@handleExceptionMessaging($hdr, sprintf(ERROR_BROKER_CLIENT_INSTANTIATION, BROKER_QUEUE_AI), null, true);
} else {
$payload = [
BROKER_REQUEST => BROKER_REQUEST_CREATE,
BROKER_DATA => [$data],
BROKER_META_DATA => $meta
];
if (false === $bc->call(gzcompress(json_encode($payload))))
consoleLog('CONS: ', CON_ERROR, sprintf(ERROR_MDB_QUERY_FAIL, STRING_UPSERT));
}
if (is_object($bc)) $bc->__destruct();
unset($bc);
}
}
} catch (TypeError | Throwable $t) {
$hdr = sprintf(INFO_LOC, basename(__FILE__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $errors, true);
}
if (isset($widget) and is_object($widget)) {
$widget->__destruct();
unset($widget);
}
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @return null
*
* HISTORY:
* ========
* 12-03-20 mks DB-179: original coding
*
* @version 1.0
*
* @author mike@givingassistant.org
*/
private function __clone()
{
return (null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-03-20 mks DB-179: original coding
*
*/
public function __destruct()
{
// blank
}
}

View File

@@ -0,0 +1,450 @@
<?php
/** @noinspection PhpUnused */
/**
* Class gatDonors -- mongo data-template class
*
* This template defines the donors collection, part of the integrated partnerships sub-system.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 02-06-20 mks DB-147: original coding
* 06-01-20 mks ECI-108: support for authToken
*/
class gatDonors
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_APPSERVER; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_DONORS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_DONORS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_DONORS_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NONDESTRUCTIVE; // set to AUDIT_value constant (nondestructive = reads(yes))
public bool $setJournaling = true; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = true; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = false; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
DB_TOKEN => DATA_TYPE_STRING, // unique pkey exposed externally and is REQUIRED
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER, // epoch time
DONORS_TRANS_COUNT => DATA_TYPE_INTEGER, // transaction count
DONORS_DTCC => DATA_TYPE_INTEGER, // donations to current cause
DONORS_TOTAL_DONATIONS => DATA_TYPE_DOUBLE, // dollar amount of total donations
DONORS_SDWC => DATA_TYPE_BOOL, // share data with cause
DONORS_CID => DATA_TYPE_STRING, // foreign key to somewhere
DONORS_CAUSE_TITLE => DATA_TYPE_STRING,
DONORS_UNK_FOREIGN_ID => DATA_TYPE_INTEGER // unknown generic foreign key
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, updating or removing these fields cannot be accomplished.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, MONGO_ID
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DONORS_CID, DB_STATUS, DB_TOKEN
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIRECTION> ] where <SORT_DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_CREATED => -1,
DONORS_CID => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported (superseded by partial indexes)
//
// If a property is not in-use, then you must still declare the property as a class object but the
// value of the property will be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Partial indexes only add the row to the index if the column referenced satisfies the conditions specified
// in the query condition (expr2).
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression" : { [ query ] }
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
// The above index would return a list of names (sorted DESC by last name) for people aged 62 or older.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
DB_TOKEN => 1 // DB_TOKEN should always appear
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null; // ttl indexes appear in $indexFields
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
DB_TOKEN => CM_TST_TOKEN,
DB_STATUS => CM_TST_FIELD_TEST_STATUS,
DB_EVENT_GUID => CM_TST_EVENT_GUID,
DB_CREATED => CM_TST_FIELD_TEST_CDATE,
DB_ACCESSED => CM_TST_FIELD_TEST_ADATE,
DONORS_TRANS_COUNT => CM_DONORS_TC,
DONORS_DTCC => CM_DONORS_DTCC,
DONORS_TOTAL_DONATIONS => CM_DONORS_TD,
DONORS_SDWC => CM_DONORS_SDWC,
DONORS_CID => CM_DONORS_CID,
DONORS_CAUSE_TITLE => CM_DONORS_CT,
DONORS_UNK_FOREIGN_ID => CM_DONORS_FI
];
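// The relabeling the cacheMap drives can be sketched as follows (column names and
// labels are stand-ins mirroring the DB_* => CM_* pairs above; the framework's
// actual mapping code is not reproduced here):

```php
<?php
// Stand-in map: native schema column => client-facing label. Columns absent
// from the map stay hidden, which is how the schema is kept from leaking.
$cacheMap = ['created' => 'cDate', 'status' => 'recStatus'];
$row      = ['_id' => 42, 'created' => 1600000000, 'status' => 'active'];

$exposed = [];
foreach ($row as $col => $val) {
    if (isset($cacheMap[$col])) {
        $exposed[$cacheMap[$col]] = $val;
    }
}
// $exposed now carries only the relabeled columns; '_id' was dropped
```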
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather to control when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = null;
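// Example (hypothetical constant names): the questions/answers relationship described above could be
// declared as a single sub-collection keyed by the answers field:
//
//     public ?array $subC = [
//         MONGO_QUESTION_ANSWERS => [
//             MONGO_ANSWER_TOKEN,
//             MONGO_ANSWER_TEXT
//         ]
//     ];
//
// Each of these sub-collection fields would also need an entry in $fields to define its type.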
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden.
// If set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y; defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]],
DB_STATUS => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
OPERAND_AND => null
]
];
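// Example (hypothetical): a class opting into automated quarterly warehousing, with soft deletes of the
// source rows on success, would flip the relevant flags while keeping the qualifier shape shown above:
//
//     public ?array $wareHouse = [
//         WH_SUPPORTED      => true,
//         WH_REMOTE_SUPPORT => false,
//         WH_AUTOMATED      => true,
//         WH_DYNAMIC        => false,
//         WH_INTERVAL       => 'Q',
//         WH_OVERRIDE       => false,
//         WH_DELETE         => 'S',
//         WH_QUALIFIER      => [
//             DB_CREATED  => [OPERAND_NULL => [OPERATOR_LT => [null]]],
//             OPERAND_AND => null
//         ]
//     ];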
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 02-06-20 mks DB-147: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @return null
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 02-06-20 mks DB-147: original coding
*
*/
private function __clone()
{
return (null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 02-06-20 mks DB-147: original coding
*
*/
public function __destruct()
{
;
}
}


@@ -0,0 +1,459 @@
<?php
/** @noinspection PhpUnused */
/**
* Class gatFailedSessions -- mongo class
*
* This is an admin class that tracks failed session closures. Currently, this is limited to AT(1)-based events.
* Entries in this collection allow us to trigger a batch job at night where we can have a second go at processing the
* event requests that previously failed.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-13-20 mks DB-168: Original coding
*/
class gatFailedSessions
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version; not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_TERCERO; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_FAILED_SESSIONS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_FAILED_SESSIONS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_FAILED_SESSIONS_EXT; // sets the extension for the collection
public bool $closedClass = false; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = false; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant (nondestructive = reads(yes))
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = false; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
DB_TOKEN => DATA_TYPE_STRING, // unique key exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER, // epoch time
// fields specific to systemEvents collection
MONGO_FAILED_EVENT_GUID => DATA_TYPE_STRING,
MONGO_FAILED_EVENT_NAME => DATA_TYPE_STRING,
MONGO_FAILED_EVENT_DESC => DATA_TYPE_STRING,
MONGO_FAILED_EVENT_SEV => DATA_TYPE_STRING,
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, these fields cannot be updated or removed.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, DB_STATUS
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_CREATED, DB_EVENT_GUID, DB_ACCESSED, MONGO_ID, DB_STATUS
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_TOKEN, DB_CREATED, DB_STATUS, DB_EVENT_GUID,
MONGO_FAILED_EVENT_GUID, MONGO_FAILED_EVENT_NAME, MONGO_FAILED_EVENT_SEV
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_TOKEN => 1,
DB_CREATED => -1,
DB_STATUS => 1,
DB_EVENT_GUID => 1,
MONGO_FAILED_EVENT_NAME => 1,
MONGO_FAILED_EVENT_SEV => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any indexed field that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ 'arrayColumnName.subField1' => 1, 'arrayColumnName.subField3' => -1, ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
//
// If a property is not in use, you must still declare it on the class, with its value set to null.
//
// Sparse indexes are not supported; partial indexes supersede them.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Partial indexes only add a row to the index if the referenced column satisfies the condition specified
// in the query expression (expr2).
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword partialFilterExpression : { <query> }
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
// The above index would return a list of names (sorted DESC by last name) for people aged 62 or older.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
DB_TOKEN => 1, // DB_TOKEN should always appear
DB_EVENT_GUID => 1,
MONGO_FAILED_EVENT_GUID => 1
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null;
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
DB_TOKEN => CM_TOKEN,
DB_CREATED => CM_DATE_CREATED,
DB_ACCESSED => CM_DATE_ACCESSED,
DB_STATUS => CM_STATUS,
DB_EVENT_GUID => CM_EVENT_GUID,
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as the associative array: $exposedFields. Only those fields,
* enumerated within this container, will be exposed to the client.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather to control when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (i.e.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = null;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden.
// If set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y; defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]],
DB_STATUS => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
OPERAND_AND => null
]
];
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 08-13-20 mks DB-169: original coding
*
*/
private function __clone()
{
return (null);
}
/**
* __construct() -- public method
*
* we have a constructor to register the destructor and initialize the auth token.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-13-20 mks DB-169: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-13-20 mks DB-169: original coding
*
*/
public function __destruct()
{
// move on lookie-loo....
}
}


@@ -0,0 +1,458 @@
<?php
/**
* Class gatGraphs
*
* This is the class used for feeding the Namaste graphing dashboard.
*
* Design Notes:
* -------------
* The graphs collection is designed to be "open" with respect to the data captured. Graphs was originally intended
* to be a time-series based collection. However, I wanted to expand the definition to include almost any type of
* data.
*
* The main columns are the ones labeled key and value. These are the pivotal columns -- the key defines the
* name of the data element, and the value defines the metric value.
*
* For example, if I was graphing query metrics, I would receive a key of 'queryTime' and a value of 0.089. Combined
* with the date/time field, I could submit this data as time-series. However, the key "queryTime" represents an
* arbitrary event as we don't know which query this was measured against, what schema, etc.
*
* The remaining fields, then, help narrow the scope of the event.
*
* Another example, this time working the other way, is that I want to record the number of event requests that are
* being received by the brokers.
*
* I could use a key of 'brokerEvent', with the value being the time the event took to process. Supporting values may
* include service, broker, and event so that we know which service the event was processed on, which broker on that
* service processed the event, and which event was processed. This combination of data allows us to start building
* a history of efficiency of a particular service, broker or event.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 09-29-19 mks DB-136: original coding
* 01-13-20 mks DB-150: PHP7.4 member type-casting
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatGraphs
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1;
public string $service = CONFIG_DATABASE_SERVICE_ADMIN; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_GRAPHS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_GRAPHS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_GRAPHS_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to enable journaling
public bool $setUpdates = false; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = false; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 0; // number of seconds a tuple will remain in-cache
public bool $isGA = true; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
GRAPH_KEY => DATA_TYPE_STRING, // primary value: the metric being recorded
GRAPH_VALUE => DATA_TYPE_MIXED, // primary value: the value of the metric
GRAPH_SCHEMA => DATA_TYPE_STRING, // (optional) what was the db schema?
GRAPH_SERVICE => DATA_TYPE_STRING, // (optional) what service handled the event?
GRAPH_LOCATION => DATA_TYPE_ARRAY, // array containing file:method:line info
GRAPH_FILE => DATA_TYPE_STRING, // sub-array label for the file
GRAPH_METHOD => DATA_TYPE_STRING, // sub-array label for the method
GRAPH_LINE => DATA_TYPE_INTEGER, // sub-array label for the line
GRAPH_COMMENT => DATA_TYPE_STRING, // (optional) free-form text description
GRAPH_LABEL => DATA_TYPE_STRING, // (optional) suggestion for a graph label
GRAPH_COLLECTION => DATA_TYPE_STRING, // (optional) the db collection involved
GRAPH_DBO => DATA_TYPE_STRING, // (optional) the database object involved
GRAPH_EVENT => DATA_TYPE_STRING, // (optional) the name of the broker event
GRAPH_BROKER => DATA_TYPE_STRING, // (optional) the name of the broker
GRAPH_TIMER => DATA_TYPE_DOUBLE, // (optional) timer values go here because double
GRAPH_DATE => DATA_TYPE_DATETIME, // (optional) cleartext data because grafana likes 'em
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [MONGO_ID, GRAPH_KEY, GRAPH_SERVICE, GRAPH_COLLECTION, GRAPH_DBO,
GRAPH_EVENT, GRAPH_BROKER, DB_TOKEN, DB_EVENT_GUID, DB_CREATED ];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_CREATED => -1,
DB_TOKEN => 1,
GRAPH_KEY => 1
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = [ 'graphServiceIDX', 'graphBrokerIDX' ];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = [
'graphServiceIDX' => [ GRAPH_SERVICE => 1, GRAPH_COLLECTION => 1, GRAPH_DBO => 1 ],
'graphBrokerIDX' => [ GRAPH_BROKER => 1, GRAPH_EVENT => 1 ]
];
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ 'arrayColumnName.subField1' => 1, 'arrayColumnName.subField3' => -1, ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported; partial indexes replace it
//
// If a property is not in use, you must still declare it as a class property, but its
// value must be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression" followed by a query document
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
//
// db.myTable.createIndex({ lastName : -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
// The above index supports queries that list names (sorted DESC by last name) for people aged 62 or older.
//
//
public ?array $partialIndexes = null;
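// For illustration only -- the index name, fields, and the exact array shape accepted by the
// framework's template parser are assumptions here, not confirmed framework API. A declaration
// mirroring the createIndex() shell example above might look like:
//
//     public ?array $partialIndexes = [
//         'activeByAgeIDX' => [
//             'lastName' => -1, 'firstName' => 1,
//             'partialFilterExpression' => [ 'age' => [ '$gte' => 62 ] ]
//         ]
//     ];
//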
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = null; // token is not required for system collections
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is neither a date nor an array holding at least one date value, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- the index is created ASC and records expire 1 day after the indexed date
//
public ?array $ttlIndexes = null; // ttl indexes appear in $indexFields
// cache maps are required for Namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = null; // key-value paired array of field-names mapped to cache-names
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
public ?array $regexFields = null;
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null; // lists the fields that will be sent to clients; null => all data
// cache map, if defined, will always override.
// mongo IDs are NEVER returned
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
public ?array $subC = null; // sub-collection fields must be declared here (need not be indexed)
// see gatTestMongo.class.inc for explanation & examples
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether (for dynamic event requests only) the Qualifier can be overridden. If
// set to true, then the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q, or Y; defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]],
DB_STATUS => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
OPERAND_AND => null
]
];
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 09-29-19 mks DB-136: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @return null
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 09-29-19 mks DB-136: original coding
*
*/
private function __clone()
{
return (null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 09-29-19 mks DB-136: original coding
*
*/
public function __destruct()
{
;
}
}


@@ -0,0 +1,593 @@
<?php
/**
* gatJournaling.class.inc -- mongo template class
*
* This template defines the journaling collection - a Namaste subsystem that bestows point-in-time recovery
* at the record level for supported data classes.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 10-16-18 mks DB-57: original coding
* 02-07-19 mks DB-115: JOURNAL_AUD_TOK is now a unique indexed field
* 01-13-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatJournaling
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_ADMIN; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_JOURNAL; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_JOURNAL; // sets the collection (table) name
public ?string $whTemplate = TEMPLATE_CLASS_JOURNAL; // name of the warehouse template (not collection)
public string $extension = COLLECTION_MONGO_JOURNAL_EXT;// sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = true; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
// fields specific to the collection
MONGO_ID => DATA_TYPE_OBJECT, // sorting by the id is just like sorting by createdDate
JOURNAL_SYSEV_TOK => DATA_TYPE_STRING, // systemEvent GUID
JOURNAL_AUD_TOK => DATA_TYPE_STRING, // audit GUID
JOURNAL_RECORD_GUID => DATA_TYPE_STRING, // GUID of the change record (table data in audit rec)
JOURNAL_RESTORE_QUERY => DATA_TYPE_STRING, // Namaste-derived query to restore the record
JOURNAL_HISTORY => DATA_TYPE_ARRAY, // sub-collection containing the journal-access history
JOURNAL_HISTORY_DATE_RESTORED => DATA_TYPE_STRING, // sub-collection field: date record restored
JOURNAL_HISTORY_RESTORED_EVENT_GUID => DATA_TYPE_STRING, // sub-collection field: restore request event GUID
JOURNAL_HISTORY_RESTORED_BY => DATA_TYPE_STRING, // sub-collection field: name/GUID requesting user
JOURNAL_HISTORY_RESTORED_REASON => DATA_TYPE_STRING, // sub-collection field: client supplied text
// generic mongo constants
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, these fields cannot be updated or removed by a client.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
// todo -- code a condition where ALL fields are protected using the * symbol (DB-58)
public ?array $protectedFields = [
MONGO_ID, JOURNAL_SYSEV_TOK, JOURNAL_RESTORE_QUERY, JOURNAL_HISTORY, JOURNAL_AUD_TOK,
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_STATUS, DB_ACCESSED, JOURNAL_RECORD_GUID
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DB_TOKEN, DB_ACCESSED, JOURNAL_RECORD_GUID, JOURNAL_AUD_TOK,
JOURNAL_SYSEV_TOK, DB_EVENT_GUID
];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
public ?array $singleFields = [
DB_CREATED => -1, // assuming we want LIFO
DB_ACCESSED => -1, // assuming we want LIFO
DB_EVENT_GUID => 1, // event guid should always be indexed
JOURNAL_RECORD_GUID => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ 'arrayColumnName.subField1' => 1, 'arrayColumnName.subField3' => -1, ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported; partial indexes replace it
//
// If a property is not in use, you must still declare it as a class property, but its
// value must be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression" followed by a query document
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
//
// db.myTable.createIndex({ lastName : -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
// The above index supports queries that list names (sorted DESC by last name) for people aged 62 or older.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
DB_TOKEN => 1, // DB_TOKEN should always appear
JOURNAL_SYSEV_TOK => 1,
JOURNAL_AUD_TOK => 1 // Journal -> Audit is 1:1 so this key must be unique
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is neither a date nor an array holding at least one date value, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- the index is created ASC and records expire 1 day after the indexed date
//
public ?array $ttlIndexes = null;
// cache maps are required for Namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = null;
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index; rather, it controls when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
* SubC fields do not need to be indexed.
*
*/
public ?array $subC = [
JOURNAL_HISTORY => [
JOURNAL_HISTORY_DATE_RESTORED,
JOURNAL_HISTORY_RESTORED_BY,
JOURNAL_HISTORY_RESTORED_EVENT_GUID,
JOURNAL_HISTORY_RESTORED_REASON
]
];
//=================================================================================================================
// MIGRATION DECLARATIONS
// ----------------------
// Data in this section is used to handle migrations -- when we're pulling from legacy tables into the Namaste
// framework. See online doc for more info.
//=================================================================================================================
/**
* The migration map is an associative array that maps the Namaste fields (keys) to the corresponding
* (remote) legacy fields in the source table to be migrated to Namaste.
*
* For example, if we were migrating a mysql table in the legacy production database to Namaste::mongo, then
* the keys of the migration map would be the Namaste::mongo->fieldNames and the values would be the mysql
* column names in the legacy table.
*
* If there is a value which cannot be mapped to a key, then set it to null.
*
* Fields that will be dropped in the migration are not listed as values or as keys.
*
* This map will only exist in the template object and will never be imported into the class widget.
*
* This is a required field.
*
*/
public ?array $migrationMap = null;
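// Illustration only (hypothetical legacy column names): keys are the Namaste field constants,
// values are the legacy source columns; a Namaste field with no legacy counterpart maps to null.
//
//     public ?array $migrationMap = [
//         DB_TOKEN   => 'legacy_uuid',
//         DB_CREATED => 'created_at',
//         DB_STATUS  => null
//     ];
//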
/*
* the migrationSortKey defines the SOURCE field by which the fetch query will be sorted. ALL sort fields are
* in ASC order so all we need to list here is the name of the field -- which MUST BE IN THE SOURCE TABLE.
*
* Populating this field may require preliminary examination of the data - what we want is a field that has
* zero NULL values.
*
* This is a required field.
*
*/
public ?array $migrationSortKey = null;
/*
* The migrationStatusKV defines the status field/column in the source table -- if the user requires that
* soft-deleted records not be migrated, then this field must be set. Otherwise, set the value to null.
*
* The format is in the form of a key-value paired array. The key specifies the name of the column and the value
* specifies the "deleted" value that, if found, will cause that row from the SOURCE data to be omitted from the
* DESTINATION table.
*
* e.g.: $migrationStatusKV = [ 'some_field' => 'deleted' ]
*
* Note that both the key and the value are case-sensitive!
*
* This is an optional field.
*
*/
public ?array $migrationStatusKV = null;
// The $migrationSourceSchema defines the remote schema for the source table
public ?string $migrationSourceSchema = null; // or STRING_MONGO
// The source table in the remote repos (default defined in the XML) must be declared here
public ?string $migrationSourceTable = null;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be
// warehoused, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => true, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'Q', // must be either D, M, Q or Y, defaults to M
WH_OVERRIDE => true, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H, or S. Can be reset to T via meta. Default: H
WH_INDEXES => [DB_CREATED, DB_WH_CREATED],
WH_TEMPLATE => '',
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
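/*
 * The design features above combine roughly as follows when a dynamic
 * warehousing event request arrives (hedged sketch -- the authoritative
 * validation lives in the framework's broker code, not here):
 *
 * if WH_SUPPORTED is false                       -> reject the request outright
 * else if WH_DYNAMIC is false                    -> reject (interval schedule only)
 * else if WH_OVERRIDE is true and no payload filter -> reject (valid filter required)
 * else                                           -> warehouse using WH_QUALIFIER,
 *                                                   or the client filter when overridden
 *
 */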
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 10-16-18 mks DB-57: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/** @noinspection PhpUnused */
/**
* buildJournalData() -- template method
*
 * This template method should only be called from the AdminIN broker when creating a journal record. Its sole
* purpose is to build the data payload (record) that will be saved to the mongo database. As such, there are
* four input parameters to the method:
*
* $_sysEvTok -- string value containing the system event token value
* $_audTok -- string value containing the audit event token value
* $_journalData -- indexed array containing two associative arrays containing the recovery queries and token list
* $_es -- call-by-reference array parameter used to send error messages back to the calling client
*
 * Processing errors will be raised if any of the first three input values are missing or invalid. We also compare
 * the element counts in the $_journalData array -- this array should have two keys:
 * STRING_JOURNAL_TOKEN_LIST and STRING_JOURNAL_QUERY_LIST, and both must contain the same number of elements.
*
* If errors are encountered during processing, then an error message is returned (implicitly) and a null value is
* explicitly returned to the caller.
*
* Otherwise, we return an indexed array of records that will be inserted into the journal collection.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_sysEvTok
* @param string $_audTok
* @param array $_journalData
* @param array $_es
* @return array|null
*
*
* HISTORY:
* ========
* 10-25-18 mks DB-74: original coding
*
*/
public function buildJournalData(string $_sysEvTok, string $_audTok, array $_journalData, array &$_es): ?array
{
$res = 'JRNL: ';
$data = null;
if (!validateGUID($_sysEvTok)) {
$msg = ERROR_INVALID_GUID . $_sysEvTok;
consoleLog($res, CON_SYSTEM, $msg);
$_es[] = $msg;
return $data;
}
if (!validateGUID($_audTok)) {
$msg = ERROR_INVALID_GUID . $_audTok;
consoleLog($res, CON_SYSTEM, $msg);
$_es[] = $msg;
return $data;
}
if (count($_journalData[STRING_JOURNAL_QUERY_LIST]) < 1
    || count($_journalData[STRING_JOURNAL_QUERY_LIST]) !== count($_journalData[STRING_JOURNAL_TOKEN_LIST])) {
$msg = ERROR_AUDIT_COUNT;
consoleLog($res, CON_SYSTEM, $msg);
$_es[] = $msg;
return $data;
}
for ($i = 0, $max = count($_journalData[STRING_JOURNAL_TOKEN_LIST]); $i < $max; $i++) {
$data[$i][JOURNAL_SYSEV_TOK] = $_sysEvTok;
$data[$i][JOURNAL_AUD_TOK] = $_audTok;
$data[$i][JOURNAL_RECORD_GUID] = $_journalData[STRING_JOURNAL_TOKEN_LIST][$i];
$data[$i][JOURNAL_RESTORE_QUERY] = $_journalData[STRING_JOURNAL_QUERY_LIST][$i];
}
consoleLog($res, CON_SUCCESS, sprintf(STUB_PROCESSED, $max, STRING_JOURNAL));
return $data;
}
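/*
 * Illustrative shape of the $_journalData payload expected by the method
 * above (the literal key strings are framework constants; the values shown
 * here are stand-ins):
 *
 * $_journalData = [
 *     STRING_JOURNAL_TOKEN_LIST => [ 'guid-1', 'guid-2' ],
 *     STRING_JOURNAL_QUERY_LIST => [ $restoreQuery1, $restoreQuery2 ]
 * ];
 *
 * Both lists must be non-empty and equal in length; element $i of the token
 * list is paired with element $i of the query list in the returned records.
 *
 */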
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 10-16-18 mks DB-57: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 10-16-18 mks DB-57: original coding
*
*/
public function __destruct()
{
;
}
}


@@ -0,0 +1,434 @@
<?php
/**
 * Class gatLogs
*
* This is the logging class definition that records application-generated events.
*
* Design Notes:
* -------------
 * Because this is a log, and log events are processed by a FnF queue, we do not cache or audit this class.
 * History is not recorded for this class.
 * Only one status is supported (ACTIVE) and no updates are allowed, making record-locking unnecessary.
 * Cache timers on the class are disabled because of recursion.
 * ids (auto-incrementing integers) are deprecated and replaced by the native _id.
 * The created date is stored as epoch time (integer).
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 07-06-17 mks CORE-463: code complete (refactor from ddb to mdb)
* 07-12-17 mks added log-level value column to collection for ranged searching
* 08-04-17 mks added version control, partialIndexes
* 08-11-17 mks CORE-467: indexes brought up to mongo 3.2 standards and made consistent
* 04-19-18 mks _INF-188: warehousing section added
* 01-13-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatLogs
{
public int $version = 1;
public string $service = CONFIG_DATABASE_SERVICE_ADMIN; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_LOGS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_LOGS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_LOGS_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = false; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = false; // set to true to enable collection query timers
public string $setPKey = MONGO_ID; // sets the primary key for the collection
public bool $setTokens = false; // set to true: adds the idToken field functionality
public bool $selfDestruct = true; // set to false if the class contains methods
public int $cacheTimer = 0; // number of seconds a tuple will remain in-cache
public bool $isGA = true; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_OBJECT, // sorting by the id is just like sorting by createdDate
LOG_FILE => DATA_TYPE_STRING,
LOG_METHOD => DATA_TYPE_STRING,
LOG_LINE => DATA_TYPE_INTEGER,
LOG_CLASS => DATA_TYPE_STRING,
LOG_LEVEL => DATA_TYPE_STRING,
LOG_VALUE => DATA_TYPE_INTEGER,
LOG_MESSAGE => DATA_TYPE_STRING,
DB_STATUS => DATA_TYPE_STRING,
DB_EVENT_GUID => DATA_TYPE_STRING,
LOG_CREATED => DATA_TYPE_INTEGER
];
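/*
 * A document stored under the field map above would look roughly like this
 * (illustrative values only -- the literal key names are resolved from the
 * LOG_* / DB_* framework constants):
 *
 * {
 *     "_id"      : ObjectId("..."),
 *     "file"     : "someBroker.php",
 *     "method"   : "processMessage",
 *     "line"     : 412,
 *     "class"    : "someBroker",
 *     "level"    : "ERROR",
 *     "value"    : 400,
 *     "message"  : "connection refused",
 *     "status"   : "active",
 *     "eventGuid": "...",
 *     "created"  : 1594012800
 * }
 *
 */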
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [ MONGO_ID, LOG_CREATED, LOG_VALUE, LOG_FILE, LOG_LEVEL, DB_EVENT_GUID ];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIRECTION> ] where <SORT_DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
LOG_CREATED => -1,
LOG_VALUE => 1,
LOG_FILE => 1,
LOG_LEVEL => 1,
DB_EVENT_GUID => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported because partials replace it
//
// If a property is not in-use, then you must still declare the property as a class object but the
// value of the property will be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression" : { [ query ] }
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
// The above index would return a list of names (sorted DESC by last name) for people aged 62 or older.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = null; // token is not required for system collections
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null; // ttl indexes appear in $indexFields
// cache maps are required for Namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
public ?array $regexFields = null;
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null; // lists the fields that will be sent to clients; null => all data
// cache map, if defined, will always override.
// mongo IDs are NEVER returned
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = null; // sub-collection fields must be declared here
// see gatTestMongo.class.inc for explanation & examples
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be
// warehoused, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y, defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H, or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 07-06-17 mks CORE-463: code complete (refactor from ddb to mdb)
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 07-06-17 mks CORE-463: code complete (refactor from ddb to mdb)
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 07-06-17 mks CORE-463: code complete (refactor from ddb to mdb)
*
*/
public function __destruct()
{
;
}
}


@@ -0,0 +1,433 @@
<?php /** @noinspection PhpUnused */
/**
* Class gatMetrics
*
* This is the metrics class definition that records timer events, usually database queries.
*
* Design Notes:
* -------------
 * Metrics is identical to Logs: its events are processed by a FnF queue, so we do not cache or audit this class.
 * History is not recorded for this class.
 * Only one status is supported (ACTIVE) and no updates are allowed, making record-locking unnecessary.
 * Cache timers on the class are disabled because of recursion.
 * ids (auto-incrementing integers) are deprecated and replaced by the native _id.
 * The created date is stored as epoch time (integer).
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 07-06-17 mks CORE-463: code complete (refactor from ddb to mdb)
* 08-04-17 mks added version control, partialIndexes
* 08-11-17 mks CORE-467: indexes brought up to mongo 3.2 standards and made consistent
* 04-19-18 mks _INF-188: warehousing section added
* 11-04-19 mks DB-136: Added DB_TIMER single-field index to indexFields container
* 01-13-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatMetrics
{
public int $version = 1;
public string $service = CONFIG_DATABASE_SERVICE_ADMIN; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_METRICS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_METRICS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_METRICS_EXT;// sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to enable journaling
public bool $setUpdates = false; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = false; // set to true to enable collection query timers
public string $setPKey = MONGO_ID; // sets the primary key for the collection
public bool $setTokens = false; // set to true: adds the idToken field functionality
public bool $selfDestruct = true; // set to false if the class contains methods
public int $cacheTimer = 0; // number of seconds a tuple will remain in-cache
public bool $isGA = true; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
LOG_FILE => DATA_TYPE_STRING,
LOG_METHOD => DATA_TYPE_STRING,
LOG_LINE => DATA_TYPE_INTEGER,
LOG_CLASS => DATA_TYPE_STRING,
LOG_LEVEL => DATA_TYPE_STRING,
LOG_MESSAGE => DATA_TYPE_STRING,
DB_STATUS => DATA_TYPE_STRING,
DB_TIMER => DATA_TYPE_DOUBLE,
DB_EVENT_GUID => DATA_TYPE_STRING,
LOG_CREATED => DATA_TYPE_INTEGER
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [ MONGO_ID, LOG_CREATED, LOG_LEVEL, DB_EVENT_GUID, DB_TIMER ];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
LOG_CREATED => -1,
LOG_LEVEL => 1,
DB_EVENT_GUID => 1,
DB_TIMER => -1
];
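// Conceptually (illustrative only -- actual index creation is handled by the
// framework, and the real column names behind these constants may differ),
// each entry above maps to a mongo createIndex call, e.g.:
//
//   db.collection.createIndex({ <LOG_CREATED column> : -1 })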
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported; partial indexes replace it
//
// If a property is not in use, then you must still declare the property as a class member, but the
// value of the property will be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression" : { [ query ] }
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
// The above index supports queries that select people aged 62 or older, sorted DESC by last name.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = null; // token is not required for system collections
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds date values, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null; // ttl indexes appear in $indexFields
// cache maps are required for Namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = null; // key-value paired array of field-names mapped to cache-names
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
public ?array $regexFields = null;
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null; // lists the fields that will be sent to clients; null => all data
// cache map, if defined, will always override.
// mongo IDs are NEVER returned
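// Sketch of the exposure rules above (hypothetical, not framework code;
// ignores the cacheMap override for brevity):
//
//   $out = ($this->exposedFields === null)
//        ? $record                                           // null => all data
//        : array_intersect_key($record, $this->exposedFields);
//   unset($out[MONGO_ID]);                                   // mongo IDs are never returned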
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = null;
// see gatTestMongo.class.inc for explanation & examples
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, then the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y, defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H, or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
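// Illustrative sketch of how the null placeholder above might be filled at
// request time (hypothetical payload key, not framework code):
//
//   $qualifier = $this->wareHouse[WH_QUALIFIER];
//   $qualifier[DB_CREATED][OPERAND_NULL][OPERATOR_LT] = [ $payload['cutoff'] ];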
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 07-06-17 mks CORE-463: code complete (refactor from ddb to mdb)
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 07-06-17 mks CORE-463: code complete (refactor from ddb to mdb)
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 07-06-17 mks CORE-463: code complete (refactor from ddb to mdb)
*
*/
public function __destruct()
{
;
}
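// Standalone illustration of the shutdown-registration pattern used by this
// class (not framework code). Registering the destructor as a shutdown
// function guarantees it runs even after a fatal error:
//
//   class Demo
//   {
//       public function __construct()
//       {
//           register_shutdown_function([$this, '__destruct']);
//       }
//       public function __destruct()
//       {
//           // recovery work goes here; note it may run twice on a clean
//           // shutdown (once as a shutdown function, once as a destructor)
//       }
//   }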
}

View File

@@ -0,0 +1,471 @@
<?php
/**
* gatMigrations -- mongo template class
*
 * This template is used internally to record the migration process for a source table to its destination. This table
 * records the new records created so that, in the event of failure, we can back out the newly-created records from
 * the mongo collection.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 01-30-18 mks _INF-139: original coding
* 04-19-18 mks _INF-188: warehousing section added
* 11-04-19 mks DB-136: added missing DB_EVENT_GUID index to $indexFields
* 01-13-20 mks DB-150: PHP7.4 member class type-casting
* 06-01-20 mks ECI-108: support for auth token
*/
class gatMigrations
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_ADMIN; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_MIGRATIONS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_MIGRATIONS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_MIGRATIONS_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = false; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 0; // number of seconds a tuple will remain in-cache
public bool $isGA = true; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array that defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
MWH_SOURCE_SCHEMA => DATA_TYPE_STRING, // name of the source schema
MWH_SOURCE_TABLE => DATA_TYPE_STRING, // name of the source table
MWH_DEST_SCHEMA => DATA_TYPE_STRING, // name of the destination schema
MWH_DEST_TABLE => DATA_TYPE_STRING, // name of the destination table
MWH_QUERY => DATA_TYPE_STRING, // (first) query used to migrate the data
MWH_DATE_STARTED => DATA_TYPE_INTEGER, // when the migration started (epoch time)
MWH_NUM_RECS_SOURCE => DATA_TYPE_STRING, // number of records in the source table
MWH_NUM_RECS_MOVED => DATA_TYPE_INTEGER, // number of records migrated
MWH_NUM_RECS_DROPPED => DATA_TYPE_INTEGER, // number of records that were dropped
MWH_LAST_REC_WRITTEN => DATA_TYPE_STRING, // json-encoded string of the last record written
MWH_DATE_COMPLETED => DATA_TYPE_INTEGER, // when migration completed (epoch time)
MWH_STOP_REASON => DATA_TYPE_STRING, // reason why migration failed
MWH_ERROR_CAT => DATA_TYPE_ARRAY, // array of errors
MWH_REPORT => DATA_TYPE_STRING, // stores the generated wh report
DB_TOKEN => DATA_TYPE_STRING, // unique key (GUID) exposed externally and is REQUIRED
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, these fields cannot be updated or removed.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, MONGO_ID, MWH_SOURCE_SCHEMA,
MWH_SOURCE_TABLE, MIGRATION_DEST_SCHEMA, MIGRATION_DEST_TABLE,
MWH_NUM_RECS_SOURCE, MWH_DATE_STARTED
];
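// The "best case" silent drop described above can be sketched as (hypothetical,
// not framework code):
//
//   $update = array_diff_key($update, array_flip($this->protectedFields));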
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DB_TOKEN, MWH_SOURCE_TABLE, DB_STATUS, MWH_DEST_TABLE,
MWH_DEST_SCHEMA, MWH_SOURCE_SCHEMA, DB_EVENT_GUID
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_CREATED => 1,
DB_STATUS => 1,
DB_EVENT_GUID => 1,
MWH_SOURCE_TABLE => 1,
MWH_DEST_TABLE => 1,
MWH_DEST_SCHEMA => 1,
MWH_SOURCE_SCHEMA => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported; partial indexes replace it
//
// If a property is not in use, then you must still declare the property as a class member, but the
// value of the property will be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression" : { [ query ] }
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
// The above index supports queries that select people aged 62 or older, sorted DESC by last name.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
DB_TOKEN => 1 // MONGO_TOKEN should always appear
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds date values, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null;
// cache maps are required for Namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = null; // todo -- test this setting
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather to control when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
* SubC fields do not need to be indexed.
*
*/
public ?array $subC = null;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, then the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y; defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
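The interval codes documented above lend themselves to a simple due-today check. A minimal standalone sketch -- the function name and the bare 'D'/'M'/'Q'/'Y' strings are illustrative stand-ins, not framework constants:

```php
<?php
// Sketch: decide whether an automated warehousing run is due today for a
// given interval code. 'D' = daily, 'M' = 1st of month, 'Q' = 1st of a
// quarter month, 'Y' = Jan 1st. Illustrative only -- not framework code.
function warehouseRunDue(string $interval, DateTimeImmutable $today): bool
{
    $day   = (int) $today->format('j');   // day of month, 1..31
    $month = (int) $today->format('n');   // month, 1..12
    switch ($interval) {
        case 'D': return true;
        case 'M': return $day === 1;
        case 'Q': return $day === 1 && in_array($month, [1, 4, 7, 10], true);
        case 'Y': return $day === 1 && $month === 1;
        default:  return false;           // unknown code: reject, do not guess
    }
}
```

An unknown code returns false rather than guessing, mirroring the template's reject-by-default posture.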
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 01-30-18 mks _INF-139: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 01-30-18 mks _INF-139: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 01-30-18 mks _INF-139: original coding
*
*/
public function __destruct()
{
// empty by design
}
}


@@ -0,0 +1,655 @@
<?php
/**
* gatProdRegistrations.class -- Namaste mySQL Data Template
*
* This is the template file for the Namaste mySQL version of product-registration. This template was created for the
* purpose of testing mysql->mysql data migration.
*
* There is another prod-reg template: gatProductRegistrations.class.inc -- which is a mongo data template.
*
* Once version 1.0.0 of Namaste is launched, please remember to deprecate either this, or the other, template class
* so that there is no potential for confusion.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 03-23-18 mks CORE-852: original coding
* 04-18-18 mks _INF-188: warehousing section added
* 06-12-18 mks CORE-1043: updated PDO objects to add SQL statements for table create, update
* 01-13-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatProdRegistrations
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version: not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_APPSERVER; // defines the mongo server destination
public string $schema = TEMPLATE_DB_PDO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_PROD_REGS; // defines the clear-text template class name
public string $collection = COLLECTION_PDO_PROD_REGS; // sets the collection (table) name
public ?string $whTemplate = TEMPLATE_CLASS_WHC1_PROD_REG; // name of the warehouse template (not collection)
public string $extension = COLLECTION_PDO_PROD_REGS_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = false; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if class contains methods or migration
public int $cacheTimer = 0; // number of seconds a tuple will remain in-cache
public bool $isGA = false; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
//
// Note that for PDO-type tables, the data type is more homogeneous -- i.e.: data types define the data
// type only. They do not define the actual column type in use. For example, there is no distinction made
// between a tinyInt, Int, or BigInt. As far as the framework is concerned, they're all just integers.
//
public array $fields = [
PDO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
PRG_TYPE => DATA_TYPE_STRING,
PRG_IID => DATA_TYPE_STRING,
PRG_EAV => DATA_TYPE_STRING,
PRG_PLATFORM => DATA_TYPE_STRING,
PRG_BROWSER => DATA_TYPE_STRING,
PRG_MAJOR_VERSION => DATA_TYPE_INTEGER,
PRG_MINOR_VERSION => DATA_TYPE_INTEGER,
PRG_IS_MOBILE => DATA_TYPE_INTEGER,
PRG_IS_TABLET => DATA_TYPE_INTEGER,
PRG_FIRST_SEEN => DATA_TYPE_STRING,
PRG_LAST_SEEN => DATA_TYPE_STRING,
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_STRING, // dateTime type
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_STRING // dateTime type
];
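The validation rule described above (type and membership checks, with silent drops) can be sketched as a standalone filter. The constants and function name here are illustrative stand-ins, not the framework's own:

```php
<?php
// Sketch of the insert-time validation described above: keep only fields
// that are declared in the $fields map and whose PHP type matches the
// declared data type. Unknown fields and type mismatches are silently
// dropped. The two constants are illustrative stand-ins.
const DATA_TYPE_INTEGER = 'integer';
const DATA_TYPE_STRING  = 'string';

function filterForInsert(array $fields, array $payload): array
{
    $clean = [];
    foreach ($payload as $name => $value) {
        if (!isset($fields[$name])) {
            continue;                           // membership check failed: drop
        }
        if ($fields[$name] === DATA_TYPE_INTEGER && is_int($value)) {
            $clean[$name] = $value;             // type check passed
        } elseif ($fields[$name] === DATA_TYPE_STRING && is_string($value)) {
            $clean[$name] = $value;
        }
        // any other combination is silently dropped
    }
    return $clean;
}
```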
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, updating or removing these fields cannot be accomplished.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, PDO_ID
];
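The "best case" behavior above -- silently dropping directives that touch protected fields -- can be sketched in a few lines. The function name is illustrative and the field names are plain strings standing in for the DB_* constants:

```php
<?php
// Sketch: strip any directive that touches a protected field from an
// update payload before it reaches the database layer. Illustrative only.
function stripProtected(array $protectedFields, array $update): array
{
    // array_flip turns the protected list into keys for a fast diff
    return array_diff_key($update, array_flip($protectedFields));
}
```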
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public array $indexFields = [
PDO_ID => 1,
PRG_EAV => 1,
PRG_IID => 1,
PRG_TYPE => 1,
DB_CREATED => 1,
DB_STATUS => 1, // status should only be indexed if soft-deletes are enabled (just saying)
DB_EVENT_GUID => 1, // event guid should always be indexed
DB_TOKEN => 1
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = [ 'cIdx1Test'];
// the primary key index is declared in the class properties section as $setPKey
// unique indexes are to be used when values stored in these columns have to be unique to the table. Note that
// null values are permissible in unique-index columns. Do not declare MONGO_ID here, regardless of how badly
// you may want to.
public ?array $uniqueIndexes = [ DB_TOKEN => 1 ];
// single field index declarations -- since you can have a field in more than one index (index, multi)
// the format for the single-field index declaration is a simple indexed array.
public ?array $singleFields = [
PRG_EAV, PRG_IID, PRG_TYPE, DB_EVENT_GUID
];
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $compoundIndexes = [
'cIdx1Test' => [ DB_CREATED, DB_STATUS ]
];
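The authoritative check described above can be sketched as a standalone validator over the declaration arrays. This is an assumption-laden sketch of the documented rule, not the framework's actual template loader:

```php
<?php
// Sketch: every column named in the unique, single, and compound index
// declarations must appear in $indexFields, and every named compound
// index must appear in $indexNameList, or the template fails to load.
function validateIndexDeclarations(
    array $indexFields,
    ?array $indexNameList,
    ?array $uniqueIndexes,
    ?array $singleFields,
    ?array $compoundIndexes
): array {
    $errors = [];
    foreach (array_keys($uniqueIndexes ?? []) as $col) {
        if (!isset($indexFields[$col])) {
            $errors[] = "unique index column not in indexFields: $col";
        }
    }
    foreach ($singleFields ?? [] as $col) {
        if (!isset($indexFields[$col])) {
            $errors[] = "single index column not in indexFields: $col";
        }
    }
    foreach ($compoundIndexes ?? [] as $name => $cols) {
        if (!in_array($name, $indexNameList ?? [], true)) {
            $errors[] = "compound index name not declared: $name";
        }
        foreach ($cols as $col) {
            if (!isset($indexFields[$col])) {
                $errors[] = "compound index column not in indexFields: $col";
            }
        }
    }
    return $errors;   // an empty array means the declarations are consistent
}
```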
// NOTE: foreign-key indexes are not explicitly enumerated in a template -- that relationship is defined in the
// schema for the table. Foreign-key indexes appear implicitly in the indexing declarations above.
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
PRG_TYPE => CM_PRG_TYPE,
PRG_IID => CM_PRG_IID,
PRG_EAV => CM_PRG_EAV,
PRG_PLATFORM => CM_PRG_PLATFORM,
PRG_BROWSER => CM_PRG_BROWSER,
PRG_MAJOR_VERSION => CM_PRG_MAJ_VER,
PRG_MINOR_VERSION => CM_PRG_MIN_VER,
PRG_IS_MOBILE => CM_PRG_IS_MOBILE,
PRG_IS_TABLET => CM_PRG_IS_TABLET,
PRG_FIRST_SEEN => CM_PRG_FIRST_SEEN,
PRG_LAST_SEEN => CM_PRG_LAST_SEEN,
DB_TOKEN => CM_TST_TOKEN,
DB_STATUS => CM_TST_FIELD_TEST_STATUS,
DB_EVENT_GUID => CM_TST_EVENT_GUID,
DB_CREATED => CM_TST_FIELD_TEST_CDATE,
DB_ACCESSED => CM_TST_FIELD_TEST_ADATE
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it a null.
*
*/
public ?array $exposedFields = [
PRG_TYPE => 1,
PRG_IID => 1,
PRG_EAV => 1,
PRG_PLATFORM => 1,
PRG_BROWSER => 1,
PRG_MAJOR_VERSION => 1,
PRG_MINOR_VERSION => 1,
PRG_IS_MOBILE => 1,
PRG_IS_TABLET => 1,
PRG_FIRST_SEEN => 1,
PRG_LAST_SEEN => 1,
DB_TOKEN => 1,
DB_CREATED => 1, // epoch time
DB_STATUS => 1, // record status
DB_ACCESSED => 1 // epoch time
];
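Taken together with the cache map, the exposure rules above amount to a filter-then-relabel pass over each record before it leaves the framework. A minimal sketch under that assumption; the function and field names are illustrative:

```php
<?php
// Sketch: drop any column not listed in $exposedFields, then relabel the
// survivors through $cacheMap so native schema names are never exposed
// to the client. Illustrative stand-in for the framework's behavior.
function shapeForClient(array $row, array $exposedFields, ?array $cacheMap): array
{
    $out = [];
    foreach ($row as $col => $value) {
        if (!isset($exposedFields[$col])) {
            continue;                         // e.g. the integer primary key
        }
        $label = $cacheMap[$col] ?? $col;     // fall back to the native name
        $out[$label] = $value;
    }
    return $out;
}
```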
// in PDO-land, binary fields are your basic data blobs. All binary fields require special handling and so
// need to be enumerated here as an indexed array.
public ?array $binFields = null;
// DB SQL:
// -------
// PDO SQL is stored in the template and is keyed by the current namaste version (defined in the XML file) during
// execution of the deployment script. Each version denotes a container of SQL commands that will be executed
// for the targeted version.
//
// SQL is versioned in parallel with the Namaste (XML->application->id->version) version. Each PDO_SQL
// sub-container has several fields - one of which has the version identifier. When the deployment script
// executes, the release versions are compared and, if they're an exact match, the SQL is submitted for execution.
//
// The PDO_SQL container consists of these sub-containers:
//
// PDO_SQL_VERSION --> this is a float value in the form of x.y as namaste only supports versions as a major
// and minor release number. (Patch releases are minor release increments.)
// PDO_TABLE --> string value containing the full table name.
// PDO_SQL_FC --> the FC means "first commit" -- when the table is first created, it will execute the
// SQL in this block, if it exists, and if the version number for the sub-container
// exactly matches the version number in the configuration XML.
// PDO_SQL_UPDATE --> When the sub-container PDO_SQL_VERSION value exactly matches the XML release value,
// then the ALTER-TABLE sql in this update block will be executed.
// STRING_DROP_CODE_IDX --> The boilerplate code for dropping the indexes of the table.
// STRING_DROP_CODE_DEV --> For version 1.0 only, this points to code to drop the entire table.
//
// Again, containers themselves are indexed arrays under the PDO_SQL tag. Within the container, data is stored
// as an associative array with the keys enumerated above.
//
//
// DB OBJECTS:
// -----------
// DB objects are: views, procedures, functions and events.
// All such objects assigned to a class are declared in this array under the appropriate header.
// This is a safety-feature that prevents one class (table) from invoking another class object.
// The name of the object is stored as an indexed-array under the appropriate header.
//
// The format for these structures is basically the same. Each DBO is stored in an associative array with the
// key defining the name of the object. Within each object, there are embedded associative arrays that have the
// name of the object as the key and the object definition (text) and the value:
//
// objectType => [ objectName => [ objectContent ], ... ]
//
// Each created object should also have the directive to remove its predecessor using a DROP statement.
//
// todo -- unset these objects post-instantiation so that schema is not revealed
//
// VIEWS:
// ------
// Every namaste table will have at least one view which limits the data fetched from the table. At a minimum,
// the id_{ext} field is filtered from the resulting data set via the view. Other fields can be withheld as well
// but that is something that is individually set-up for each table.
//
// The basic view has the following syntax for declaring its name:
// view_basic_{tableName_ext}
// All views start with the word "view" so as to self-identify the object, followed by the view type which,
// optimally, you should try to limit to a single, descriptive word.
//
// This label points to a sub-array containing three elements:
// STRING_VIEW ----------> this is the SQL code that defines the view as a single string value
// STRING_TYPE_LIST -----> null or an array of types that corresponds to variable markers ('?') in the sql
// STRING_DESCRIPTION ---> a string that describes the purpose of the view.
//
// At a minimum, every class definition should contain at least a basic view, as all queries that don't specify
// a named view or other DBO will default to the basic view in the FROM clause of the generated SQL.
//
// PROCEDURES:
// -----------
// For stored procedures, which are entirely optional, the array definition contains the following elements:
// STRING_PROCEDURE -------> the SQL code that defines the stored procedure as a single string value
// STRING_DROP_CODE -------> the sql code that drops the procedure (required for procedures!)
// STRING_TYPE_LIST -------> an associative array of associative arrays -- in the top level, the key is the name
// of the parameter that points to a sub-array that contains the parameter direction
// as the key, and the parameter type as the value. There should be an entry for each
// parameter to be passed to the stored procedure/function.
//
// ------------------------------------------------------
// | NOTE: IN params must precede INOUT and OUT params! |
// ------------------------------------------------------
//
// STRING_SP_EVENT_TYPE ---> Assign one of the DB_EVENT constants to this field to indicate the type of
// query the stored-procedure will execute.
// NOTE: there is not a defined PDO::PARAM constant for type float: use string.
// STRING_DESCRIPTION -----> clear-text definition of the procedure's purpose
//
// Note that all of these containers are required; empty containers should contain a null placeholder.
//
// When a stored procedure contains a join of two or more tables/views, the first table listed is considered
// to be the "owning" table and the procedure will be declared in the class template for that table, but it will
// not be duplicated in other template classes referenced in the join.
//
public ?array $dbObjects = [
PDO_SQL => [
[
PDO_VERSION => 1.0,
PDO_TABLE => 'gaProductRegistrations_prg',
PDO_SQL_FC => "
--
-- Table structure for table `gaProductRegistrations_prg`
--
CREATE TABLE `gaProductRegistrations_prg` (
`id_prg` int(10) UNSIGNED NOT NULL,
`type_prg` char(16) NOT NULL,
`iid_prg` char(64) NOT NULL,
`eav_prg` char(16) DEFAULT NULL,
`platform_prg` char(32) DEFAULT NULL,
`browser_prg` char(32) DEFAULT NULL,
`majorVersion_prg` int(11) DEFAULT NULL,
`minorVersion_prg` int(11) DEFAULT NULL,
`isMobile_prg` tinyint(3) UNSIGNED DEFAULT NULL,
`isTablet_prg` tinyint(3) UNSIGNED DEFAULT NULL,
`firstSeen_prg` datetime DEFAULT NULL,
`lastSeen_prg` datetime DEFAULT NULL,
`token_prg` char(36) NOT NULL,
`eventGUID_prg` char(36) DEFAULT NULL,
`createdDate_prg` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`lastAccessedDate_prg` datetime DEFAULT NULL,
`status_prg` varchar(25) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
",
PDO_SQL_UPDATE => "
--
-- Indexes for table `gaProductRegistrations_prg`
--
ALTER TABLE `gaProductRegistrations_prg`
ADD PRIMARY KEY (`id_prg`),
ADD UNIQUE KEY `token_prg` (`token_prg`),
ADD KEY `type_prg` (`type_prg`,`iid_prg`),
ADD KEY `createdDate_prg` (`createdDate_prg`,`lastAccessedDate_prg`,`status_prg`),
ADD KEY `iid_prg` (`iid_prg`,`eav_prg`),
ADD KEY `eventGUID_prg` (`eventGUID_prg`);
--
-- AUTO_INCREMENT for table `gaProductRegistrations_prg`
--
ALTER TABLE `gaProductRegistrations_prg`
MODIFY `id_prg` int(10) UNSIGNED NOT NULL AUTO_INCREMENT;
",
/*
* example query return:
* ---------------------
* ALTER TABLE gaTest_tst DROP INDEX gaTest_tst_createdDate_tst_status_tst_index, DROP INDEX
* gaTest_tst_lastAccessedDate_tst_index, DROP INDEX testInteger_tst, DROP INDEX
* gaTest_tst_eventGuid_tst_index, DROP INDEX testDouble_tst, DROP INDEX testString_tst;
*
* NOTE:
* -----
* The sql comment code tag (--) will be removed during mysqlConfig's run time processing
*/
STRING_DROP_CODE_IDX => "--
SELECT CONCAT('ALTER TABLE ', `Table`, ' DROP INDEX ', GROUP_CONCAT(`Index` SEPARATOR ', DROP INDEX '),';' )
FROM (
SELECT table_name AS `Table`, index_name AS `Index`
FROM information_schema.statistics
WHERE INDEX_NAME != 'PRIMARY'
AND table_schema = 'XXXDROP_DB_NAMEXXX'
AND table_name = 'XXXDROP_TABLE_NAMEXXX'
GROUP BY `Table`, `Index`) AS tmp
GROUP BY `Table`;
",
STRING_DROP_CODE_DEV => "DROP TABLE IF EXISTS gaProductRegistrations_prg;" // only executed if declared
]
],
PDO_VIEWS => [
'view_basic_gaProductRegistrations' => [
STRING_VIEW =>
"DROP VIEW IF EXISTS view_basic_gaProductRegistrations_prg;
CREATE VIEW view_basic_gaProductRegistrations_prg AS
SELECT type_prg, iid_prg, eav_prg, platform_prg, browser_prg, majorVersion_prg, minorVersion_prg,
isMobile_prg, isTablet_prg, firstSeen_prg, lastSeen_prg, eventGUID_prg, createdDate_prg,
lastAccessedDate_prg, status_prg, token_prg
FROM gaProductRegistrations_prg
WHERE status_prg <> \"DELETE\";",
STRING_TYPE_LIST => null,
STRING_DESCRIPTION => 'basic query'
],
],
PDO_PROCEDURES => [],
PDO_FUNCTIONS => [],
PDO_EVENTS => [],
PDO_TRIGGERS => []
];
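The version-matching behavior described in the DB SQL notes can be sketched with plain array keys standing in for the PDO_* constants. This is a sketch of the documented rule, not the deployment script itself:

```php
<?php
// Sketch: for each PDO_SQL container, collect the first-commit and update
// SQL only when the container's version exactly matches the release
// version taken from the XML config. Key names are illustrative.
function sqlToExecute(array $pdoSqlContainers, float $releaseVersion): array
{
    $batch = [];
    foreach ($pdoSqlContainers as $container) {
        if ($container['version'] !== $releaseVersion) {
            continue;                    // not an exact match: skip container
        }
        foreach (['firstCommit', 'update'] as $key) {
            if (!empty($container[$key])) {
                $batch[] = $container[$key];
            }
        }
    }
    return $batch;                       // statements in declaration order
}
```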
//=================================================================================================================
// MIGRATION DECLARATIONS
// ----------------------
// Data in this section is used to handle migrations -- when we're pulling from legacy tables into the Namaste
// framework. See online doc for more info.
//=================================================================================================================
/**
* The migration map is an associative array that maps the Namaste fields (keys) to the corresponding
* (remote) legacy fields in the source table to be migrated to Namaste.
*
* For example, if we were migrating a mysql table in the legacy production database to Namaste::mongo, then
* the keys of the migration map would be the Namaste::mongo->fieldNames and the values would be the mysql
* column names in the legacy table.
*
* If there is a value which cannot be mapped to a key, then set it to null.
*
* Fields that will be dropped in the migration are not listed as values or as keys.
*
* This map will only exist in the template object and will never be imported into the class widget.
*
* This is a required field.
*
*/
public ?array $migrationMap = [
PDO_ID => null, // created on insert
PRG_TYPE => 'type',
PRG_IID => 'iid',
PRG_EAV => 'eav',
PRG_PLATFORM => 'platform',
PRG_BROWSER => 'browser',
PRG_MAJOR_VERSION => 'major_version',
PRG_MINOR_VERSION => 'minor_version',
PRG_IS_MOBILE => 'is_mobile',
PRG_IS_TABLET => 'is_tablet',
PRG_FIRST_SEEN => 'first_seen',
PRG_LAST_SEEN => 'last_seen',
DB_TOKEN => null, // created on insert
DB_EVENT_GUID => null, // generated by broker event
DB_CREATED => 'kinsert_date', // epoch time
DB_STATUS => null, // record status
DB_ACCESSED => 'kupdate_date' // epoch time
];
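Applying the map above to a legacy row can be sketched as follows; fields mapped to null are skipped so the framework can generate them on insert. The function name is illustrative, and plain strings stand in for the field constants:

```php
<?php
// Sketch: copy each legacy column into its Namaste field per the
// migration map; keys mapped to null (token, event GUID, status, id)
// are left for the framework to generate on insert. Illustrative only.
function mapLegacyRow(array $migrationMap, array $legacyRow): array
{
    $namasteRow = [];
    foreach ($migrationMap as $namasteField => $legacyColumn) {
        if ($legacyColumn === null) {
            continue;                    // created by the framework on insert
        }
        $namasteRow[$namasteField] = $legacyRow[$legacyColumn] ?? null;
    }
    return $namasteRow;
}
```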
/*
* the migrationSortKey defines the SOURCE field by which the fetch query will be sorted. ALL sort fields are
* in ASC order so all we need to list here is the name of the field -- which MUST BE IN THE SOURCE TABLE.
*
* Populating this field may require preliminary examination of the data - what we want is a field that has
* zero NULL values.
*
* This is a required field.
*
*/
public ?string $migrationSortKey = 'last_seen';
/*
* The migrationStatusKey defines the status field/column in the source table -- if the user requires that
* soft-deleted records not be migrated, then this field must be set. Otherwise, set the value to null.
*
* The format is in the form of a key-value paired array. The key specifies the name of the column and the value
* specifies the "deleted" value that, if found, will cause that row from the SOURCE data to be omitted from the
* DESTINATION table.
*
* e.g.: $migrationStatusKV = [ 'some_field' => 'deleted' ]
*
* Note that both the key and the value are case-sensitive!
*
* This is an optional field.
*
*/
public ?array $migrationStatusKV = null;
// The $migrationSourceSchema defines the remote schema for the source table, and is set in the constructor
public ?string $migrationSourceSchema;
// The source table in the remote repos (default defined in the XML) must be declared here, set in the constructor
public ?string $migrationSourceTable;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => true, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => true, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => true, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y; defaults to M
WH_OVERRIDE => true, // true to allow an ad-hoc query filter or if WH_REMOTE_SUPPORT is true
WH_DELETE => 'H', // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
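The placeholder substitution described in the WH_QUALIFIER comment can be sketched with plain string keys standing in for the DB_*/OPERAND_*/OPERATOR_* constants. This is a hypothetical helper, not framework code:

```php
<?php
// Sketch: replace the null placeholder under the LT operator of the
// default qualifier with the cut-off date supplied in the dynamic
// request payload. Key names here are illustrative stand-ins.
function fillQualifier(array $qualifier, string $cutoffDate): array
{
    // createdDate => [ operand => [ 'LT' => [ null ] ] ] -- swap in the value
    $qualifier['createdDate']['operand']['LT'] = [$cutoffDate];
    return $qualifier;
}
```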
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 03-23-18 mks CORE-852: original coding
* 09-09-19 mks DB-111: initialization of migration members moved to constructor b/c IDE warnings.
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
$this->migrationSourceSchema = STRING_MYSQL; // or STRING_MONGO
$this->migrationSourceTable = 'product_registrations';
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 03-23-18 mks CORE-852: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 03-23-18 mks CORE-852: original coding
*
*/
public function __destruct()
{
// empty by design
}
}


@@ -0,0 +1,550 @@
<?php
/**
* gatProductRegistrations -- mongo template class
*
* This is the mongo template for givva.product_registrations, previously a mySQL-schema based table.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
* 01-13-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatProductRegistrations
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version: not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_APPSERVER; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_PRODUCT_REG; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_PROD_REGS; // sets the collection (table) name
public ?string $whTemplate = TEMPLATE_CLASS_WHC1_PROD_REG; // name of the warehouse template (not collection)
public string $extension = COLLECTION_MONGO_PROD_REG_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = false; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_OBJECT, // sorting by the id is just like sorting by createdDate
PRG_TYPE => DATA_TYPE_STRING,
PRG_IID => DATA_TYPE_STRING,
PRG_EAV => DATA_TYPE_STRING,
PRG_PLATFORM => DATA_TYPE_STRING,
PRG_BROWSER => DATA_TYPE_STRING,
PRG_MAJOR_VERSION => DATA_TYPE_INTEGER,
PRG_MINOR_VERSION => DATA_TYPE_INTEGER,
PRG_IS_MOBILE => DATA_TYPE_INTEGER,
PRG_IS_TABLET => DATA_TYPE_INTEGER,
PRG_FIRST_SEEN => DATA_TYPE_STRING,
PRG_LAST_SEEN => DATA_TYPE_STRING,
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, these fields cannot be updated or removed.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
PRG_IID, DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, MONGO_ID
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DB_TOKEN, DB_ACCESSED, DB_STATUS, PRG_TYPE, PRG_IID, PRG_EAV, DB_EVENT_GUID
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIRECTION> ] where <SORT_DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
PRG_EAV => 1,
PRG_IID => 1,
PRG_TYPE => 1,
DB_CREATED => 1,
DB_STATUS => 1,
DB_EVENT_GUID => 1 // event guid should always be indexed
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as a (sic) index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported; partial indexes replace it
//
// If a property is not in-use, then you must still declare the property as a class member, but set
// its value to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression" : { [ query ] }
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
// The above index covers queries listing names (sorted DESC by last name) for people aged 62 or older.
//
//
public ?array $partialIndexes = null;
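// Hypothetical illustration only -- this template does not declare a partial index, and the exact
// declaration format is an assumption based on the notes above. If soft-deleted records came to
// dominate the collection, a partial index could limit DB_CREATED scans to active records; the
// index name ('pIdx1ActiveCreated', a placeholder) would then also need to appear in $indexNameList.
//
// public ?array $partialIndexes = [
//     'pIdx1ActiveCreated' => [
//         DB_CREATED => 1,
//         'partialFilterExpression' => [ DB_STATUS => [ '$eq' => STATUS_ACTIVE ] ]
//     ]
// ];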
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
PRG_IID => 1,
DB_TOKEN => 1 // DB_TOKEN should always appear
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null;
// cache maps are required for Namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
PRG_TYPE => CM_PRG_TYPE,
PRG_IID => CM_PRG_IID,
PRG_EAV => CM_PRG_EAV,
PRG_PLATFORM => CM_PRG_PLATFORM,
PRG_BROWSER => CM_PRG_BROWSER,
PRG_MAJOR_VERSION => CM_PRG_MAJ_VER,
PRG_MINOR_VERSION => CM_PRG_MIN_VER,
PRG_IS_MOBILE => CM_PRG_IS_MOBILE,
PRG_IS_TABLET => CM_PRG_IS_TABLET,
PRG_FIRST_SEEN => CM_PRG_FIRST_SEEN,
PRG_LAST_SEEN => CM_PRG_LAST_SEEN,
DB_TOKEN => CM_TST_TOKEN,
DB_STATUS => CM_TST_FIELD_TEST_STATUS,
DB_EVENT_GUID => CM_TST_EVENT_GUID,
DB_CREATED => CM_TST_FIELD_TEST_CDATE,
DB_ACCESSED => CM_TST_FIELD_TEST_ADATE
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather controls when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
* SubC fields do not need to be indexed.
*
*/
public ?array $subC = null;
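// Hypothetical illustration only -- this template does not use sub-collections. The questions/answers
// example from the note above would be declared roughly as follows; each sub-field would also need to
// appear in $fields (for typing) and in $cacheMap. The QST_*/ANS_* constants are placeholders, not
// framework constants.
//
// public ?array $subC = [
//     QST_ANSWERS => [
//         ANS_TEXT,
//         ANS_AUTHOR,
//         ANS_CREATED
//     ]
// ];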
//=================================================================================================================
// MIGRATION DECLARATIONS
// ----------------------
// Data in this section is used to handle migrations -- when we're pulling from legacy tables into the Namaste
// framework. See online doc for more info.
//=================================================================================================================
/**
* The migration map is an associative array that maps the Namaste fields (keys) to the corresponding
* (remote) legacy fields in the source table to be migrated to Namaste.
*
* For example, if we were migrating a mysql table in the legacy production database to Namaste::mongo, then
* the keys of the migration map would be the Namaste::mongo->fieldNames and the values would be the mysql
* column names in the legacy table.
*
* If there is a value which cannot be mapped to a key, then set it to null.
*
* Fields that will be dropped in the migration are not listed as values or as keys.
*
* This map will only exist in the template object and will never be imported into the class widget.
*
* This is a required field.
*
*/
public ?array $migrationMap = [
MONGO_ID => null, // created on insert
PRG_TYPE => 'type',
PRG_IID => 'iid',
PRG_EAV => 'eav',
PRG_PLATFORM => 'platform',
PRG_BROWSER => 'browser',
PRG_MAJOR_VERSION => 'major_version',
PRG_MINOR_VERSION => 'minor_version',
PRG_IS_MOBILE => 'is_mobile',
PRG_IS_TABLET => 'is_tablet',
PRG_FIRST_SEEN => 'first_seen',
PRG_LAST_SEEN => 'last_seen',
DB_TOKEN => null, // created on insert
DB_EVENT_GUID => null, // generated by broker event
DB_CREATED => 'kinsert_date', // epoch time
DB_STATUS => null, // record status
DB_ACCESSED => 'kupdate_date' // epoch time
];
/*
* the migrationSortKey defines the SOURCE field by which the fetch query will be sorted. ALL sort fields are
* in ASC order so all we need to list here is the name of the field -- which MUST BE IN THE SOURCE TABLE.
*
* Populating this field may require preliminary examination of the data - what we want is a field that has
* zero NULL values.
*
* This is a required field.
*
*/
public ?string $migrationSortKey = 'last_seen';
/*
* The migrationStatusKey defines the status field/column in the source table -- if the user requires that
* soft-deleted records not be migrated, then this field must be set. Otherwise, set the value to null.
*
* The format is a key-value paired array. The key specifies the name of the column and the value
* specifies the "deleted" value that, if found, will cause that row from the SOURCE data to be omitted from the
* DESTINATION table.
*
* e.g.: $migrationStatusKV = [ 'some_field' => 'deleted' ]
*
* Note that both the key and the value are case-sensitive!
*
* This is an optional field.
*
*/
public ?array $migrationStatusKV = null;
// The $migrationSourceSchema defines the remote schema for the source table, and is set in the constructor
public ?string $migrationSourceSchema;
// The source table in the remote repos (default defined in the XML) must be declared here, set in the constructor
public ?string $migrationSourceTable;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating if, and only for dynamic event requests, if the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => true, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => true, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => true, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y, defaults to M
WH_OVERRIDE => true, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H, or S. Can be reset to T via meta. Default: H
WH_INDEXES => [DB_CREATED, DB_WH_CREATED],
WH_TEMPLATE => '',
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
* 09-09-19 mks DB-111: initialization of migration members moved to constructor b/c IDE warnings.
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
$this->migrationSourceSchema = STRING_MYSQL; // or STRING_MONGO
$this->migrationSourceTable = 'product_registrations';
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
public function __destruct()
{
// empty by design
}
}


@@ -0,0 +1,451 @@
<?php
/**
* gatProductSessionUsers -- mongo template class
*
* This is the mongo template for givva.product_session_users, previously a mySQL-schema based table.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
* 04-19-18 mks _INF-188: warehousing section added
* 11-04-19 mks DB-136: fixed error where indexFields was missing a member element from singleIndex
* 01-13-20 mks DB-150: PHP7.4 class member type-casting
*
*/
class gatProductSessionUsers
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_APPSERVER; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_PRODUCT_SES_USR; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_PSU; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_PSU_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = false; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_OBJECT, // sorting by the id is just like sorting by createdDate
PSU_SID => DATA_TYPE_STRING,
PSU_UID => DATA_TYPE_STRING,
PSU_FIRST_SEEN => DATA_TYPE_STRING,
PSU_LAST_SEEN => DATA_TYPE_STRING,
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, these fields cannot be updated or removed.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
PSU_SID, PSU_UID, DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, MONGO_ID
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DB_TOKEN, DB_ACCESSED, DB_STATUS, PSU_UID, PSU_SID, DB_EVENT_GUID
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = [ 'cIdx1UserSession' ];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIRECTION> ] where <SORT_DIR> = [ 1 | -1 ]
//
public ?array $singleFields = [
DB_CREATED => 1,
DB_STATUS => 1,
DB_EVENT_GUID => 1 // event guid should always be indexed
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = [
'cIdx1UserSession' => [ PSU_UID => 1, PSU_SID => 1]
];
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as a (sic) index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported; partial indexes replace it
//
// If a property is not in-use, then you must still declare the property as a class member, but set
// its value to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
//      { expr1 }, { expr2 }
// Where:
//      expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
//      AND
//      expr2 is the keyword "partialFilterExpression" followed by a query document.
//            e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
//
// db.myTable.createIndex({ lastName : -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
// The above index supports queries for names (sorted DESC by last name) of people aged 62 or older.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
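// Hedged illustration (assumed shell equivalent, not framework output): a declaration of
// [ DB_TOKEN => 1 ] corresponds roughly to:
//     db.myCollection.createIndex({ idToken : 1 }, { unique : true })
// where "idToken" and the collection name stand in for whatever the constants resolve to.
//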
public ?array $uniqueIndexes = [
DB_TOKEN => 1 // MONGO_TOKEN should always appear
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the
// lowest (i.e. earliest) date value in the array to calculate the expiration threshold. If the
// indexed field in a document is not a date, or an array holding date values, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- the index is ASC; a record expires 1 day after its indexed date value
//
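// Hedged illustration (assumed shell equivalent): [ DB_CREATED => 86400 ] would
// correspond to something like:
//     db.myCollection.createIndex({ created_tst : 1 }, { expireAfterSeconds : 86400 })
// ("created_tst" is borrowed from the partial-index example above; the real column name
// is whatever DB_CREATED resolves to.)
//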
public ?array $ttlIndexes = null;
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
PSU_SID => CM_PSU_SID,
PSU_UID => CM_PSU_UID,
PSU_FIRST_SEEN => CM_PSU_FIRST_SEEN,
PSU_LAST_SEEN => CM_PSU_LAST_SEEN,
DB_TOKEN => CM_TST_TOKEN,
DB_STATUS => CM_TST_FIELD_TEST_STATUS,
DB_EVENT_GUID => CM_TST_EVENT_GUID,
DB_CREATED => CM_TST_FIELD_TEST_CDATE,
DB_ACCESSED => CM_TST_FIELD_TEST_ADATE
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index; rather, it controls when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
* SubC fields do not need to be indexed.
*
*/
public ?array $subC = null;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, then the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y; defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H, or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
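// As a hedged sketch (column names and the status literal are illustrative assumptions,
// not schema), this qualifier corresponds conceptually to a mongo filter like:
//     { $and : [ { created_tst : { $lt : <client-supplied value> } },
//                { status_tst : "A" } ] }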
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
public function __destruct()
{
;
}
}


@@ -0,0 +1,456 @@
<?php
/**
* gatProductSessions -- mongo template class
*
* This is the mongo template for givva.product_sessions, previously a mySQL-schema based table.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
* 04-19-18 mks _INF-188: warehousing section added
* 01-13-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatProductSessions
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_APPSERVER; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_PRODUCT_SES; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_PROD_SESS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_PROD_SESS_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = false; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
PSE_SID => DATA_TYPE_STRING,
PSE_IID => DATA_TYPE_STRING,
PSE_IP => DATA_TYPE_STRING,
PSE_FIRST_SEEN => DATA_TYPE_STRING,
PSE_LAST_SEEN => DATA_TYPE_STRING,
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, updating or removing these fields cannot be accomplished.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
PSE_IID, PSE_SID, DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, MONGO_ID
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DB_TOKEN, DB_ACCESSED, DB_STATUS, PSE_IP, PSE_SID, PSE_IID, DB_EVENT_GUID
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing the database to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT_DIR> ] where <SORT_DIR> = [ 1 | -1 ]
//
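// Hedged illustration (assumed shell equivalent): each entry below, e.g. DB_CREATED => 1,
// becomes its own single-field index, roughly:
//     db.myCollection.createIndex({ created_tst : 1 })
// (collection and column names illustrative; actual names come from the constants.)
//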
public ?array $singleFields = [
PSE_IID => 1,
PSE_SID => 1,
PSE_IP => 1,
DB_CREATED => 1,
DB_STATUS => 1,
DB_EVENT_GUID => 1 // event guid should always be indexed
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
//      unique, partial and ttl
//
// If a property is not in-use, then you must still declare the property as a class member, but
// its value will be set to null.
//
// Sparse indexes are not supported; partial indexes are used in their place.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
//      { expr1 }, { expr2 }
// Where:
//      expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
//      AND
//      expr2 is the keyword "partialFilterExpression" followed by a query document.
//            e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
//
// db.myTable.createIndex({ lastName : -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
// The above index supports queries for names (sorted DESC by last name) of people aged 62 or older.
//
//
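// Following the stated format (column name as key, value a sub-array of operand and value),
// a hedged sketch of a declaration -- names illustrative only; this class declares none:
//     [ 'created_tst' => [ '$gte' => 10 ] ]
// i.e. index created_tst only for documents where created_tst >= 10.
//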
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
PSE_SID => 1,
DB_TOKEN => 1 // MONGO_TOKEN should always appear
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the
// lowest (i.e. earliest) date value in the array to calculate the expiration threshold. If the
// indexed field in a document is not a date, or an array holding date values, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- the index is ASC; a record expires 1 day after its indexed date value
//
public ?array $ttlIndexes = null;
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
PSE_SID => CM_PSE_SID,
PSE_IID => CM_PSE_IID,
PSE_IP => CM_PSE_IP,
PSE_FIRST_SEEN => CM_PSE_FIRST_SEEN,
PSE_LAST_SEEN => CM_PSE_LAST_SEEN,
DB_TOKEN => CM_TST_TOKEN,
DB_STATUS => CM_TST_FIELD_TEST_STATUS,
DB_EVENT_GUID => CM_TST_EVENT_GUID,
DB_CREATED => CM_TST_FIELD_TEST_CDATE,
DB_ACCESSED => CM_TST_FIELD_TEST_ADATE
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index; rather, it controls when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
* SubC fields do not need to be indexed.
*
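 * As a purely illustrative sketch (field names hypothetical), the questions/answers
 * example above would yield documents shaped like:
 *
 *     { question : "...", answers : [ { text : "...", votes : 3 }, ... ] }
 *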
*/
public ?array $subC = null;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, then the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y; defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H, or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
public function __destruct()
{
;
}
}


@@ -0,0 +1,490 @@
<?php
/**
* gatSMAXAPI -- template class
*
* This template definition is for the Saarvus-Maximus (SMAX) API Partner tokens repository. A partner is required to
* submit their token (GUID) value with every API request. This collection tracks those entries.
*
* HISTORY:
* ========
* 04-20-20 mks ECI-101: original coding
* 06-01-20 mks ECI-108: support for auth tokens
* 06-11-20 mks ECI-164: new field: TLTI
*
*/
/** @noinspection PhpUnused */
class gatSMAXAPI
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_APPSERVER; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_SMAXAPI; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_SMAXAPI; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_SMAXAPI_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = false; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_DESTRUCTIVE; // set to AUDIT_value constant (nondestructive = reads(yes))
public bool $setJournaling = true; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = true; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
DB_TOKEN => DATA_TYPE_STRING, // unique pkey exposed externally and is REQUIRED
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER, // epoch time
SMAX_COMPANY_NAME => DATA_TYPE_STRING, // Name of company receiving API Key
SMAX_COMPANY_CONTACT_INFO => DATA_TYPE_ARRAY, // array column, not a sub-collection
SMAX_COMPANY_CONTACT_INFO_ADDRESS1 => DATA_TYPE_STRING,
SMAX_COMPANY_CONTACT_INFO_ADDRESS2 => DATA_TYPE_STRING,
SMAX_COMPANY_CONTACT_INFO_CITY => DATA_TYPE_STRING,
SMAX_COMPANY_CONTACT_INFO_STATE => DATA_TYPE_STRING,
SMAX_COMPANY_CONTACT_INFO_ZIP => DATA_TYPE_STRING,
SMAX_COMPANY_PHONES => DATA_TYPE_OBJECT,
SMAX_COMPANY_PHONES_VOICE => DATA_TYPE_STRING,
SMAX_COMPANY_PHONES_FAX => DATA_TYPE_STRING,
SMAX_COMPANY_CONTACTS => DATA_TYPE_ARRAY, // this is a sub-collection
SMAX_COMPANY_CONTACTS_EMPLOYEE_NAME => DATA_TYPE_STRING,
SMAX_COMPANY_CONTACTS_EMPLOYEE_EMAIL => DATA_TYPE_STRING,
SMAX_COMPANY_CONTACTS_EMPLOYEE_PHONE_VOICE => DATA_TYPE_STRING,
SMAX_COMPANY_CONTACTS_EMPLOYEE_PHONE_FAX => DATA_TYPE_STRING,
SMAX_COMPANY_REGISTERED => DATA_TYPE_INTEGER,
SMAX_COMPANY_LICENSE_DURATION => DATA_TYPE_INTEGER,
SMAX_COMPANY_AUTHORIZED_BY => DATA_TYPE_STRING,
SMAX_COMPANY_INTERNAL_NOTES => DATA_TYPE_STRING,
SMAX_LICENSE_TYPE => DATA_TYPE_STRING,
SMAX_TLTI => DATA_TYPE_STRING
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, these fields cannot be updated or removed.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, MONGO_ID, SMAX_COMPANY_REGISTERED,
SMAX_COMPANY_LICENSE_DURATION, SMAX_LICENSE_TYPE, SMAX_TLTI
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DB_STATUS, DB_TOKEN, SMAX_COMPANY_NAME,
SMAX_COMPANY_AUTHORIZED_BY, SMAX_LICENSE_TYPE, SMAX_TLTI
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = [
'cIdxCompanyNameStatus', 'cIdxCompanyJWTStatus', 'cIdxCompanyLicenseType'
];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIRECTION> ] where <SORT_DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_CREATED => -1,
SMAX_COMPANY_NAME => 1,
SMAX_COMPANY_AUTHORIZED_BY => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = [
'cIdxCompanyNameStatus' => [ SMAX_COMPANY_NAME => 1, DB_STATUS => 1],
'cIdxCompanyLicenseType' => [ SMAX_LICENSE_TYPE => 1, DB_STATUS => 1]
];
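// The declaration above maps straightforwardly onto MongoDB's createIndexes command. A hedged sketch
// (the helper and the literal column names are illustrative, not part of Namaste):

```php
<?php
// Illustrative only: maps a compound-index declaration of the documented shape
// [ INDEX-NAME => [ FIELD => 1|-1, ... ]] onto the key/name documents that
// MongoDB's createIndexes command expects.
function compoundToIndexSpecs(array $compoundIndexes): array
{
    $specs = [];
    foreach ($compoundIndexes as $name => $keys) {
        $specs[] = ['key' => $keys, 'name' => $name];
    }
    return $specs;
}

// Literal column names stand in for the framework constants:
$specs = compoundToIndexSpecs([
    'cIdxCompanyNameStatus' => ['company_name' => 1, 'status' => 1],
]);
```

// Each resulting spec is the key/name document form that a createIndexes command consumes.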
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported; partial indexes supersede it
//
// If a property is not in-use, then you must still declare the property as a class object but the
// value of the property will be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// A partial index only adds a row to the index if the referenced column satisfies the conditions specified
// in the query expression (expr2).
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
//      expr2 is the keyword "partialFilterExpression" : { [ query ] }
//          e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
// The above index contains only documents for people aged 62 or older, keyed by lastName (DESC), firstName (ASC).
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
        DB_TOKEN => 1,              // DB_TOKEN should always appear
SMAX_TLTI => 1 // Two-Letter Template Identifier must be unique
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date, or an array holding date values, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
//      [ SOME_FIELD_NAME => 86400 ] --- the index is sorted ASC and the record is deleted 1 day after the indexed date
//
public ?array $ttlIndexes = null; // ttl indexes appear in $indexFields
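// The [ FIELD => seconds ] ttl format above maps onto a single-field index carrying the
// expireAfterSeconds option. A hedged sketch (helper and field names are illustrative, not Namaste API):

```php
<?php
// Illustrative only: a ttl declaration of the documented shape
// [ FIELD => seconds ] maps to a single-field ASC index carrying the
// expireAfterSeconds option that MongoDB uses to reap expired documents.
function ttlToIndexSpecs(array $ttlIndexes): array
{
    $specs = [];
    foreach ($ttlIndexes as $field => $seconds) {
        $specs[] = ['key' => [$field => 1], 'expireAfterSeconds' => $seconds];
    }
    return $specs;
}

// Records expire 1 day after the date stored in the indexed field:
$specs = ttlToIndexSpecs(['session_expires' => 86400]);
```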
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
DB_TOKEN => CM_TST_TOKEN,
DB_STATUS => CM_TST_FIELD_TEST_STATUS,
DB_EVENT_GUID => CM_TST_EVENT_GUID,
DB_CREATED => CM_TRANSACTIONS_CREATED_AT,
DB_ACCESSED => CM_TRANSACTIONS_UPDATED_AT,
SMAX_COMPANY_NAME => CM_SMAX_COMPANY_NAME,
SMAX_COMPANY_CONTACT_INFO => CM_SMAX_COMPANY_CONTACT_INFO,
SMAX_COMPANY_CONTACT_INFO_ADDRESS1 => CM_SMAX_COMPANY_ADDR1,
SMAX_COMPANY_CONTACT_INFO_ADDRESS2 => CM_SMAX_COMPANY_ADDR2,
SMAX_COMPANY_CONTACT_INFO_CITY => CM_SMAX_COMPANY_CITY,
SMAX_COMPANY_CONTACT_INFO_STATE => CM_SMAX_COMPANY_STATE,
SMAX_COMPANY_CONTACT_INFO_ZIP => CM_SMAX_COMPANY_ZIP,
SMAX_COMPANY_PHONES_VOICE => CM_SMAX_COMPANY_VOICE,
SMAX_COMPANY_PHONES_FAX => CM_SMAX_COMPANY_FAX,
SMAX_COMPANY_CONTACTS => CM_SMAX_CONTACTS,
SMAX_COMPANY_CONTACTS_EMPLOYEE_NAME => CM_SMAX_CONTACT_NAME,
SMAX_COMPANY_CONTACTS_EMPLOYEE_EMAIL => CM_SMAX_CONTACT_EMAIL,
SMAX_COMPANY_CONTACTS_EMPLOYEE_PHONE_VOICE => CM_SMAX_CONTACT_VOICE,
SMAX_COMPANY_CONTACTS_EMPLOYEE_PHONE_FAX => CM_SMAX_CONTACT_FAX,
SMAX_COMPANY_AUTHORIZED_BY => CM_SMAX_AUTH_BY,
SMAX_COMPANY_INTERNAL_NOTES => CM_SMAX_NOTES,
SMAX_LICENSE_TYPE => CM_SMAX_ACCOUNT_TYPE,
SMAX_TLTI => CM_SMAX_TLTI
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
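// The cache-map relabelling described above -- native schema names are never exposed to the client --
// can be sketched as follows (function and literal names are illustrative, not Namaste API):

```php
<?php
// Illustrative only: relabel a record through a cache map before returning it
// to the client. Columns with no mapping (e.g. the integer _id) are dropped,
// so native schema names never leak out.
function applyCacheMap(array $record, array $cacheMap): array
{
    $out = [];
    foreach ($record as $column => $value) {
        if (isset($cacheMap[$column])) {          // unmapped columns are not exposed
            $out[$cacheMap[$column]] = $value;
        }
    }
    return $out;
}

$exposed = applyCacheMap(
    ['token' => 'abc123', '_id' => 42],           // native column names
    ['token' => 'id']                             // native => client-facing label
);
```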
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather controls when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = [
SMAX_COMPANY_CONTACTS => [
SMAX_COMPANY_CONTACTS_EMPLOYEE_NAME,
SMAX_COMPANY_CONTACTS_EMPLOYEE_EMAIL,
SMAX_COMPANY_CONTACTS_EMPLOYEE_PHONE_VOICE,
SMAX_COMPANY_CONTACTS_EMPLOYEE_PHONE_FAX
]
];
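// The delete + insert update rule for sub-collections can be sketched as follows (function and
// literal key names are illustrative, not Namaste API):

```php
<?php
// Illustrative only: the "update = delete + insert" rule for sub-collection
// elements, applied to an embedded contacts array without touching any
// parent-level field. Keys are stand-ins for the framework constants.
function replaceSubElement(array $doc, string $subKey, string $matchField, string $matchValue, array $newElem): array
{
    // delete: drop the matching element(s), reindexing the array
    $doc[$subKey] = array_values(array_filter(
        $doc[$subKey],
        fn(array $e): bool => $e[$matchField] !== $matchValue
    ));
    // insert: append the replacement element
    $doc[$subKey][] = $newElem;
    return $doc;
}

$doc = [
    'company_name' => 'Acme',
    'contacts' => [
        ['name' => 'Ann', 'email' => 'ann@example.com'],
        ['name' => 'Bob', 'email' => 'bob@example.com'],
    ],
];
$doc = replaceSubElement($doc, 'contacts', 'name', 'Bob', ['name' => 'Bob', 'email' => 'bob@acme.example']);
```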
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating, for dynamic event requests only, whether the Qualifier can be overridden. If
// set to true, then the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
        WH_INTERVAL => 'M',             // must be either D, M, Q or Y; defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
        WH_DELETE => 'H',               // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]],
DB_STATUS => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
OPERAND_AND => null
]
];
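// The null placeholder in WH_QUALIFIER above is filled from the client's warehouse request payload.
// A hedged sketch (function and literal key names are illustrative, not Namaste API):

```php
<?php
// Illustrative only: replace the null placeholder under the created-date
// operator in the default qualifier with the epoch cutoff supplied by the
// client in the warehouse request payload. Other criteria are left intact.
function fillQualifierCutoff(array $qualifier, string $createdKey, int $cutoff): array
{
    foreach ($qualifier[$createdKey] as $operand => $operators) {
        foreach ($operators as $operator => $values) {
            $qualifier[$createdKey][$operand][$operator] = [$cutoff];
        }
    }
    return $qualifier;
}

// Literal keys stand in for the OPERAND_*/OPERATOR_* constants:
$qualifier = [
    'created' => ['opNull' => ['$lt' => [null]]],
    'status'  => ['opNull' => ['$eq' => ['active']]],
];
$filled = fillQualifierCutoff($qualifier, 'created', 1583884800);
```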
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 03-10-20 mks original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
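// The register_shutdown_function() pattern above guards against fatal errors, which skip destructors.
// A minimal sketch of the pattern (class name and guard flag are illustrative, not part of Namaste):

```php
<?php
// Sketch only: registering __destruct() as a shutdown function ensures cleanup
// runs even on fatal-error shutdowns, where PHP does not invoke destructors.
// Because the method can then fire twice (shutdown + normal destruction), a
// guard keeps the cleanup idempotent.
class FatalSafe
{
    public int $cleanupRuns = 0;
    private bool $cleaned = false;

    public function __construct()
    {
        register_shutdown_function([$this, '__destruct']);
    }

    public function __destruct()
    {
        if ($this->cleaned) {
            return;                 // already ran once; do nothing
        }
        $this->cleaned = true;
        $this->cleanupRuns++;       // recovery / resource release goes here
    }
}

$obj = new FatalSafe();
$obj->__destruct();                 // simulate normal destruction
$obj->__destruct();                 // simulate the shutdown-time second call
```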
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @return null
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 03-10-20 mks original coding
*
*/
private function __clone()
{
return (null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 03-10-20 mks original coding
*
*/
public function __destruct()
{
// does nothing
}
}


@@ -0,0 +1,634 @@
<?php /** @noinspection PhpUnused */
/**
* Class gatSessions -- mongo class
*
* This class is used to store user sessions (assuming that the Users table is also a mongo collection). Sessions
 * are required for all communication with Namaste and must be linked to an active user account, either
 * external or internal.
*
* Questions requiring resolution:
* --------------------------------
* -- what is the hard-expiration for a user session?
* -- can a user have more than a single session open at a time?
*
* HISTORY:
* ========
* 02-03-20 mks DB-147: initial coding
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatSessions
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version; not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_TERCERO; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_SESSIONS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_SESSIONS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_SESS_EXT; // sets the extension for the collection
public bool $closedClass = false; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = false; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant (nondestructive = reads(yes))
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = true;                                // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// non-standard template member variables
public ?string $guid = null; // internal container for a guid value on instantiation
public string $res = 'tSES: '; // resource identifier for logging
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER,
SESSION_EXPIRES => DATA_TYPE_DATETIME, // user-friendly time-stamp
SESSION_CLOSED => DATA_TYPE_STRING, // timestamp for when the session was actually closed
SESSION_DURATION => DATA_TYPE_INTEGER, // length of session in seconds
SESSION_FK_USER => DATA_TYPE_STRING, // fk-link to users.token_usr
SESSION_LEVEL => DATA_TYPE_INTEGER, // defines the session level (user, csr, etc.)
SESSION_CUSTOM_FIELD => DATA_TYPE_STRING, // user-defined KEY
SESSION_CUSTOM_VALUE => DATA_TYPE_STRING, // user-defined VALUE
SESSION_CREATED_WITH => DATA_TYPE_OBJECT, // legacy-data container for json-looking stuff
SESSION_ACTION => DATA_TYPE_STRING,
SESSION_AUTH_PROVIDER => DATA_TYPE_STRING,
DB_TOKEN => DATA_TYPE_STRING, // unique key exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, these fields cannot be updated or removed.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_CREATED, DB_ACCESSED, SESSION_FK_USER, SESSION_LEVEL, SESSION_EXPIRES,
SESSION_DURATION, MONGO_ID, DB_STATUS
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_TOKEN, SESSION_FK_USER, DB_STATUS, DB_EVENT_GUID
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = [
'cIdxSession1'
];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIRECTION> ] where <SORT_DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_TOKEN => 1,
DB_STATUS => 1,
DB_EVENT_GUID => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = [
        'cIdxSession1' => [ SESSION_FK_USER => 1, DB_STATUS => 1 ]
];
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported; partial indexes supersede it
//
// If a property is not in-use, then you must still declare the property as a class object but the
// value of the property will be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// A partial index only adds a row to the index if the referenced column satisfies the conditions specified
// in the query expression (expr2).
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
//      expr2 is the keyword "partialFilterExpression" : { [ query ] }
//          e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
// The above index contains only documents for people aged 62 or older, keyed by lastName (DESC), firstName (ASC).
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
        DB_TOKEN => 1,              // DB_TOKEN should always appear
DB_EVENT_GUID => 1
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date, or an array holding date values, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
//      [ SOME_FIELD_NAME => 86400 ] --- the index is sorted ASC and the record is deleted 1 day after the indexed date
//
public ?array $ttlIndexes = null;
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
DB_TOKEN => CM_TOKEN,
DB_CREATED => CM_DATE_CREATED,
SESSION_CLOSED => CM_DATA_CLOSED,
DB_ACCESSED => CM_DATE_ACCESSED,
DB_STATUS => CM_STATUS,
DB_EVENT_GUID => CM_EVENT_GUID,
SESSION_EXPIRES => CM_SESSION_EXPIRES,
SESSION_DURATION => CM_SESSION_DURATION,
SESSION_LEVEL => CM_SESSION_LEVEL,
SESSION_FK_USER => CM_SESSION_UID,
SESSION_CUSTOM_FIELD => CM_SESSION_CUSTOM_KEY,
SESSION_CUSTOM_VALUE => CM_SESSION_CUSTOM_VAL,
SESSION_CREATED_WITH => CM_SESSION_CW,
SESSION_ACTION => CM_SESSION_ACTION,
SESSION_AUTH_PROVIDER => CM_SESSION_AP
];
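The cache map above controls which native columns leave the service and under what labels. A minimal sketch of that projection, assuming plain string keys — the column names below are hypothetical literals standing in for the real DB_*/CM_* constants:

```php
<?php
// Sketch: relabel native columns through the cache map; columns absent
// from the map are dropped, so the schema is never exposed.
function applyCacheMap(array $record, array $cacheMap): array
{
    $exposed = [];
    foreach ($record as $column => $value) {
        if (array_key_exists($column, $cacheMap)) {
            $exposed[$cacheMap[$column]] = $value;
        }
    }
    return $exposed;
}

$row = ['token' => 'abc123', '_id' => 42];      // '_id' intentionally unmapped
$map = ['token' => 'sessionToken'];
// applyCacheMap($row, $map) => ['sessionToken' => 'abc123']
```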
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as the associative array: $exposedFields. Only those fields,
* enumerated within this container, will be exposed to the client.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather to control when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = [
SESSION_ACTION => [ SESSION_ACTION, SESSION_CREATED_WITH ]
];
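Since a sub-collection update is described above as a delete + insert that leaves the parent fields untouched, the element-level operation can be sketched as follows — an assumption for illustration only, not the framework's discrete class methods:

```php
<?php
// Sketch: replace one element of a sub-collection array (delete + insert
// at the same position); every other key of the parent doc is untouched.
function replaceSubElement(array $doc, string $subKey, int $idx, array $new): array
{
    array_splice($doc[$subKey], $idx, 1, [$new]);
    return $doc;
}

$doc = ['guid' => 'g-1', 'actions' => [['a' => 1], ['a' => 2]]];
$doc = replaceSubElement($doc, 'actions', 1, ['a' => 99]);
// $doc['actions'] is now [['a' => 1], ['a' => 99]]; 'guid' is unchanged.
```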
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
    //      Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
    //      set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
        WH_INTERVAL => 'M',                 // must be either D, M, Q or Y, defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H, or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]],
DB_STATUS => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
OPERAND_AND => null
]
];
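The interval letters defined above (D/M/Q/Y) reduce to a simple "is warehousing due today" check that a scheduler might run. A sketch under those stated definitions — the helper name is hypothetical:

```php
<?php
// Sketch: is automated warehousing due at $now for a given interval?
// D = every day, M = 1st of the month, Q = 1st of Jan/Apr/Jul/Oct,
// Y = January 1st.
function warehouseDue(string $interval, int $now): bool
{
    $day   = (int) date('j', $now);
    $month = (int) date('n', $now);
    switch ($interval) {
        case 'D': return true;
        case 'M': return $day === 1;
        case 'Q': return $day === 1 && in_array($month, [1, 4, 7, 10], true);
        case 'Y': return $day === 1 && $month === 1;
        default:  return false; // unknown interval: never fire
    }
}
```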
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* Constructor in this template not only registers the shutdown method, but also allows us to generate a custom
* GUID string during instantiation by use of the input parameters:
*
* $_getGUID - boolean, defaults to false but, if true, will generate a GUID value and store it in the class member
* $_lc - boolean, defaults to false but, if true, will generate a GUID using lower-case alpha characters
*
 * If we generate a GUID on instantiation, the GUID will be stored in the class member. This allows us to
 * instantiate a session class object and generate a GUID value (the most requested post-instantiation
 * action) at the same time. All the more efficient.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param bool $_getGUID
* @param bool $_lc
*
* HISTORY:
* ========
* 02-03-20 mks DB-147: original coding
*
*/
public function __construct(bool $_getGUID = false, bool $_lc = false)
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
if ($_getGUID) $this->guid = static::getGUID($_lc);
}
/**
* buildExpireSessionPayload() -- template function
*
* This is a "hidden function" for the session template class requiring the following input parameters:
*
 * $_data -- this is the request payload as received by the tercero broker
* $_errors -- call-by-reference container for error messaging
*
* The function will validate that $_data contains the $requiredArrayKeys, generating an error message if any
* are not present in the _data array and returning a null value to the calling client if so.
*
* Depending on the value for STRING_TOK_TYPE stored in $_data, we'll build the query against DB_TOKEN or the
* DB_EVENT_GUID.
*
* Then we just assemble the rest of the query and return the array to the calling client, presumably the sBroker.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param array $_data
* @param array|null $_errors
* @return array|null
*
* HISTORY:
* ========
* 10-02-20 mks DB-168: original coding
*
*/
public function buildExpireSessionPayload(array $_data, ?array &$_errors = null):?array
{
$requiredArrayKeys = [ STRING_GUID_KEY, STRING_TOK_TYPE ];
$missingKey = false;
try {
$logger = new gacErrorLogger();
// _data validation
foreach ($requiredArrayKeys as $requiredKey) {
if (!array_key_exists($requiredKey, $_data)) {
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = ERROR_ARRAY_KEY_404 . $requiredKey;
$_errors[] = $msg;
$logger->data($hdr . $msg);
$missingKey = true;
}
}
if ($missingKey) return null;
switch ($_data[STRING_TOK_TYPE]) {
case STRING_TOK_TYPE_EVE :
$searchDiscriminant = DB_EVENT_GUID;
break;
case STRING_TOK_TYPE_SES :
case STRING_TOK_TYPE_TOK :
$searchDiscriminant = DB_TOKEN;
break;
default :
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
$msg = ERROR_DATA_FIELD_NOT_MEMBER . $_data[STRING_TOK_TYPE];
$_errors[] = $msg;
$logger->data($hdr . $msg);
                    return null;
}
$query = [$searchDiscriminant => [ OPERAND_NULL => [ OPERATOR_EQ => [$_data[STRING_GUID_KEY]]]]];
$update = [ DB_STATUS => STATUS_EXPIRED, SESSION_CLOSED => time() ];
return [STRING_QUERY_DATA => $query, STRING_UPDATE_DATA => $update];
        } catch (Throwable $t) { // TypeError is already a Throwable
$hdr = sprintf(INFO_LOC, basename(__METHOD__), __LINE__);
@handleExceptionMessaging($hdr, $t->getMessage(), $_errors, true);
return null;
}
}
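The query arrays built above all follow the same nesting: field => [ operand => [ operator => [ value ] ] ]. A standalone sketch of that shape, with placeholder strings standing in for the real OPERAND_NULL / OPERATOR_EQ constants (which are not defined here):

```php
<?php
// Hypothetical stand-ins for the framework constants.
const OPERAND_NULL = 'opNull';
const OPERATOR_EQ  = 'opEq';

// Sketch: build the equality-filter shape used by the payload builders.
function eqFilter(string $field, $value): array
{
    return [$field => [OPERAND_NULL => [OPERATOR_EQ => [$value]]]];
}

$q = eqFilter('sessionToken', 'abc-123');
// $q = ['sessionToken' => ['opNull' => ['opEq' => ['abc-123']]]]
```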
/**
 * buildCloseSysEventPayload() -- public function
*
* This function requires a single input parameter:
*
* $_guid -- the session guid (foreign key value) in the system event record we're updating
*
 * The method takes this information and builds an update-record payload that will be returned to the adminIn
 * broker to close the system-event record that recorded the original session-expiry event.
 *
 * The function returns a string: the compressed JSON query payload to be sent immediately to the broker.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param string $_guid
 * @return string
*
* HISTORY:
* ========
* 10-23-20 mks DB-168: original coding
*
*/
public function buildCloseSysEventPayload(string $_guid):string
{
$payload = [
SYSTEM_EVENT_STATUS => STATUS_CLOSED,
DB_ACCESSED => time(),
];
$meta = [
META_TEMPLATE => TEMPLATE_CLASS_SYS_EVENTS,
META_CLIENT => CLIENT_SYSTEM,
META_DO_CACHE => 0
];
$request = [
BROKER_REQUEST => BROKER_REQUEST_UPDATE,
BROKER_DATA => [
STRING_QUERY_DATA => [ SYSTEM_EVENT_FK_SESSION_GUID => [ OPERAND_NULL => [OPERATOR_EQ => [$_guid]]]],
STRING_UPDATE_DATA => $payload
],
BROKER_META_DATA => $meta
];
return gzcompress(json_encode($request));
}
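Because the method returns gzcompress(json_encode(...)), the broker on the receiving end reverses the two steps in order. A round-trip sketch using only core PHP — the payload contents below are placeholders, not the real constant values:

```php
<?php
// Sketch: wire-format round trip for a compressed JSON broker payload.
$request = [
    'request' => 'update',
    'data'    => [
        'query'  => ['fkSessionGuid' => 'g-1'],
        'update' => ['status' => 'closed'],
    ],
];

$wire    = gzcompress(json_encode($request));      // what the builder emits
$decoded = json_decode(gzuncompress($wire), true); // what the broker does

// $decoded is structurally identical to $request.
```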
/**
* getGUID() -- public static template method
*
* This method calls the generic function method (guid()) which generates a random GUID (36-char format).
*
* The method has one input parameter:
*
* $_lc -- boolean: default set to false but, if true, converts the alpha chars in the guid string to lower-case.
*
* The method returns a string back to the calling client containing the 36-character GUID.
*
*
 * @param bool $_lc - defaults to false, submit true if you want the guid's alpha chars converted to lower-case
* @return string
*
*
* HISTORY:
* ========
* 02-04-20 mks DB-147: original coding
*
*/
public static function getGUID(bool $_lc = false):string
{
return (($_lc) ? strtolower(guid()) : guid());
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @return null
*
* HISTORY:
* ========
* 02-03-20 mks DB-147: original coding
*
* @version 1.0
*
* @author mike@givingassistant.org
*/
private function __clone()
{
return (null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 02-03-20 mks DB-147: original coding
*
*/
public function __destruct()
{
// does nothing
}
}

View File

@@ -0,0 +1,501 @@
<?php
/**
* gatSystemData -- mongo template class
*
* This is the mongo data template for the system data class. The system data class resides on the Admin service
 * and consists of a table with multiple rows. There's a row identifier for "known" rows (states, status) and
 * "generic" columns for arbitrary key-value pairs.
*
* The systemData table is read and cached during IPL.
*
* HISTORY:
* ========
* 00-00-00 mks Original coding
* 01-14-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-108: support for auth tokens
*
*/
class gatSystemData
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_ADMIN; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_SYS_DATA; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_SYS_DATA; // sets the collection (table) name
public ?string $whTemplate = null; // name of the warehouse template (not collection)
public string $extension = COLLECTION_MONGO_SYS_DATA_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = false; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
    public bool $isGA = true;                               // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_OBJECT, // sorting by the id is just like sorting by createdDate
ROW_ID => DATA_TYPE_INTEGER, // simple, imposed, identifier for well-known rows
DATA_KEY => DATA_TYPE_STRING, // label for key->value pair
DATA_VALUE => DATA_TYPE_STRING, // value for key->value pair
VALID_STATES => DATA_TYPE_ARRAY,
VALID_STATUS => DATA_TYPE_ARRAY,
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
    // will be silently dropped (best case). Either way, updating or removing these fields cannot be accomplished.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, MONGO_ID, ROW_ID
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public array $indexFields = [
MONGO_ID, ROW_ID, DATA_KEY
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIRECTION> ] where <SORT_DIR> = [ 1 | -1 ]
//
public ?array $singleFields = [DATA_KEY => 1];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
    // mongo, as of 3.4, automatically creates a multi-key index on any indexed field that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
    // In other words, if you want to apply an index to ALL of the array elements then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
    //       (sparse is not supported; partial indexes replace it)
//
// If a property is not in-use, then you must still declare the property as a class object but the
// value of the property will be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression : { [ query ] }
    //           e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
    //
    // db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
    // The above index supports queries listing names (sorted DESC by last name), indexing only people aged 62 or older.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [ROW_ID => 1];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null;
    // cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
VALID_STATUS => VALID_STATUS,
VALID_STATES => VALID_STATES,
ROW_ID => ROW_ID,
DATA_KEY => DATA_KEY,
DATA_VALUE => DATA_VALUE,
DB_TOKEN => CM_TST_TOKEN,
DB_STATUS => CM_TST_FIELD_TEST_STATUS,
DB_EVENT_GUID => CM_TST_EVENT_GUID,
DB_CREATED => CM_TST_FIELD_TEST_CDATE,
DB_ACCESSED => CM_TST_FIELD_TEST_ADATE
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather to control when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
* SubC fields do not need to be indexed.
*
*/
public ?array $subC = null;
//=================================================================================================================
// MIGRATION DECLARATIONS
// ----------------------
// Data in this section is used to handle migrations -- when we're pulling from legacy tables into the Namaste
// framework. See online doc for more info.
//=================================================================================================================
/**
* The migration map is an associative array that maps the Namaste fields (keys) to the corresponding
* (remote) legacy fields in the source table to be migrated to Namaste.
*
* For example, if we were migrating a mysql table in the legacy production database to Namaste::mongo, then
* the keys of the migration map would be the Namaste::mongo->fieldNames and the values would be the mysql
* column names in the legacy table.
*
* If there is a value which cannot be mapped to a key, then set it to null.
*
* Fields that will be dropped in the migration are not listed as values or as keys.
*
* This map will only exist in the template object and will never be imported into the class widget.
*
* This is a required field.
*
*/
public ?array $migrationMap = null;
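Applying a migration map is a straight key-for-key projection from the legacy row. A sketch following the description above — field names are hypothetical, and a null map value means the Namaste field has no legacy source:

```php
<?php
// Sketch: project a legacy row through a migration map whose keys are
// Namaste field names and whose values are legacy column names (or null).
function migrateRow(array $legacyRow, array $migrationMap): array
{
    $out = [];
    foreach ($migrationMap as $namasteField => $legacyColumn) {
        $out[$namasteField] = ($legacyColumn !== null && array_key_exists($legacyColumn, $legacyRow))
            ? $legacyRow[$legacyColumn]
            : null; // no legacy source: field starts empty
    }
    return $out;
}

$legacy = ['uname' => 'mks', 'last_seen' => 1598918400];
$map    = ['userName' => 'uname', 'accessedTs' => 'last_seen', 'token' => null];
// migrateRow($legacy, $map)
//   => ['userName' => 'mks', 'accessedTs' => 1598918400, 'token' => null]
```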
/*
* the migrationSortKey defines the SOURCE field by which the fetch query will be sorted. ALL sort fields are
* in ASC order so all we need to list here is the name of the field -- which MUST BE IN THE SOURCE TABLE.
*
* Populating this field may require preliminary examination of the data - what we want is a field that has
* zero NULL values.
*
* This is a required field.
*
*/
public ?string $migrationSortKey = 'last_seen';
/*
* The migrationStatusKey defines the status field/column in the source table -- if the user requires that
* soft-deleted records not be migrated, then this field must be set. Otherwise, set the value to null.
*
* The format is in the form of a key-value paired array. The key specifies the name of the column and the value
* specifies the "deleted" value that, if found, will cause that row from the SOURCE data to be omitted from the
* DESTINATION table.
*
* e.g.: $migrationStatusKV = [ 'some_field' => 'deleted' ]
*
* Note that both the key and the value are case-sensitive!
*
* This is an optional field.
*
*/
    public ?array $migrationStatusKV = null;
// The $migrationSourceSchema defines the remote schema for the source table
public ?string $migrationSourceSchema = ''; // or STRING_MONGO
// The source table in the remote repos (default defined in the XML) must be declared here
public ?string $migrationSourceTable = '';
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
// todo -- add an "enabled" option to this block
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y, defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H or S. Can be reset to T via meta. Default: H
WH_INDEXES => null,
WH_TEMPLATE => '',
// no default warehouse qualifier is defined for this class; if warehousing is ever enabled,
// a query filter (see the Qualifier notes above) must be supplied here.
WH_QUALIFIER => null
];
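The warehouse rules documented above (interval codes, qualifier required when warehousing is supported) could be checked by a small validator. A minimal sketch, assuming simplified WH_* constant values and no framework integration; the real Namaste checks may differ:

```php
<?php
// Hypothetical sketch: the WH_* constant values and the validator itself are
// assumptions for illustration; the framework's real checks may differ.
const WH_SUPPORTED = 'whSupported';
const WH_INTERVAL  = 'whInterval';
const WH_QUALIFIER = 'whQualifier';

function validateWarehouseConfig(array $wh): bool
{
    // interval must be one of the documented codes (defaults to monthly)
    if (!in_array($wh[WH_INTERVAL] ?? 'M', ['D', 'M', 'Q', 'Y'], true)) {
        return false;
    }
    // per the Qualifier rule above: supported classes must define a qualifier
    if (($wh[WH_SUPPORTED] ?? false) && empty($wh[WH_QUALIFIER])) {
        return false;
    }
    return true;
}
```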
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 12-20-17 mks CORE-681: original coding
*
*/
public function __destruct()
{
// empty
}
}
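The constructor/destructor pairing used by these template classes can be sketched in isolation. A hypothetical standalone demo (not framework code; the guard flag is an added assumption) of why the destructor is registered as a shutdown function:

```php
<?php
// Hypothetical standalone demo of the pattern above: because PHP does not run
// destructors after a fatal error, the destructor is also registered as a
// shutdown function, so cleanup runs on both normal and fatal termination.
class ShutdownDemo
{
    public static int $cleanups = 0;   // counts how many times cleanup ran
    private bool $done = false;

    public function __construct()
    {
        // mirrors register_shutdown_function([$this, STRING_DESTRUCTOR]) above
        register_shutdown_function([$this, '__destruct']);
    }

    public function __destruct()
    {
        // guard: the shutdown hook AND normal teardown will both call this
        if ($this->done) {
            return;
        }
        $this->done = true;
        self::$cleanups++;
    }
}
```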

View File

@@ -0,0 +1,486 @@
<?php
/**
* Class Template: gatSystemEvents
* -------------------------------
* This template defines the storage (mongo) for system-events.
*
* System events record data about routine and exception events that occur during normal processing/execution of
* the Namaste framework. As such, it is strictly an administrative table and accessible via the admin system only.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-16-17 mks CORE-500: original programming
* 04-19-18 mks _INF-188: warehousing section added
* 11-04-19 mks DB-136: added DB_STATUS field to $indexFields, removed duplicate key from $fields
* 01-14-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-108: support for auth token
* 08-14-20 mks DB-168: added event status which is binary: success | fail
*
*/
class gatSystemEvents
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1;
public string $service = CONFIG_DATABASE_SERVICE_ADMIN; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_SYS_EVENTS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_SYS_EVENTS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_SYS_EVENTS_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = false; // set to true to enable collection query timers
public string $setPKey = SYSTEM_EVENT_FK_SESSION_GUID; // sets the primary key for the collection
public bool $setTokens = false; // set to true: adds the idToken field functionality
public bool $selfDestruct = true; // set to false if the class contains methods
public int $cacheTimer = 0; // number of seconds a tuple will remain in-cache
public bool $isGA = true; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// all column names are defined as key->value pairs with the value being type specified by the given constant
// all key names should also be declared constants.
// note that key names do NOT have their extensions appended!
public array $fields = [
// generic mongo constants
MONGO_ID => DATA_TYPE_INTEGER,
DB_STATUS => DATA_TYPE_STRING, // status of this record
DB_CREATED => DATA_TYPE_INTEGER,
DB_ACCESSED => DATA_TYPE_INTEGER,
DB_EVENT_GUID => DATA_TYPE_STRING,
// fields specific to systemEvents collection
SYSTEM_EVENT_NAME => DATA_TYPE_STRING, // name of the event (pre-defined constant)
SYSTEM_EVENT_STATUS => DATA_TYPE_STRING, // status of the event, not this record
SYSTEM_EVENT_TYPE => DATA_TYPE_STRING, // event type (BROKER, TIMER, FATAL, etc.)
SYSTEM_EVENT_CLASS => DATA_TYPE_STRING, // which class, if any, generated the event
SYSTEM_EVENT_START => DATA_TYPE_INTEGER, // event starting value (memory)
SYSTEM_EVENT_END => DATA_TYPE_INTEGER, // event ending value (memory)
SYSTEM_EVENT_PEAK => DATA_TYPE_INTEGER, // event peak value (memory)
SYSTEM_EVENT_TIMER => DATA_TYPE_DOUBLE, // Spot-timer for the event, absolute/real time
SYSTEM_EVENT_AT_RESULTS => DATA_TYPE_ARRAY, // container for results of the AT(1) command
SYSTEM_EVENT_DURATION => DATA_TYPE_INTEGER, // length of time for the event implying expiry
SYSTEM_EVENT_BROKER_EVENT => DATA_TYPE_STRING, // the name of the broker event
SYSTEM_EVENT_OGUID => DATA_TYPE_STRING, // the original guid (for cross-queue events)
SYSTEM_EVENT_FK_SESSION_GUID => DATA_TYPE_STRING, // if we have a session GUID, store it here
SYSTEM_EVENT_FK_USER_GUID => DATA_TYPE_STRING, // if we have a user GUID, store it here
SYSTEM_EVENT_BROKER_GUID => DATA_TYPE_STRING, // broker (child) identifying guid
SYSTEM_EVENT_COUNT => DATA_TYPE_INTEGER, // this event is event number...
SYSTEM_EVENT_COUNT_TOTAL => DATA_TYPE_INTEGER, // ... out of this many events
SYSTEM_EVENT_BROKER_ROOT_GUID => DATA_TYPE_STRING, // broker (parent) identifying guid
SYSTEM_EVENT_NUM_EVENTS => DATA_TYPE_INTEGER, // number of discrete sub-events (child broker)
SYSTEM_EVENT_CODE_LOC => DATA_TYPE_STRING, // identifies the code-location launching the event
SYSTEM_EVENT_KEY => DATA_TYPE_STRING, // (optional) free-form key
SYSTEM_EVENT_VAL => DATA_TYPE_INTEGER, // (optional) free-form value matched to key
SYSTEM_EVENT_ERROR_STACK => DATA_TYPE_ARRAY, // (optional) the error-stack generated
SYSTEM_EVENT_META_DATA => DATA_TYPE_ARRAY, // (optional) the meta-data received for the event
SYSTEM_EVENT_NOTES => DATA_TYPE_STRING // (optional) free-form text
/*
* other candidate fields for later expansion include, but are not limited to:
*
* -- mail GUID did we generate an outbound email?
* -- AT_OUT output from AT(1) daemon on timer events
* -- Severity? classify events by their assigned severity
* -- responded if an event required intervention, when was it responded to?
* -- respondedBy if an event required intervention, who was assigned to the event?
* -- closed if an event required intervention, when was it closed?
* -- resolution if an event required intervention, what was the resolution?
*/
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public array $indexFields = [
MONGO_ID,
DB_CREATED,
DB_EVENT_GUID,
SYSTEM_EVENT_BROKER_EVENT,
SYSTEM_EVENT_BROKER_ROOT_GUID,
SYSTEM_EVENT_OGUID,
SYSTEM_EVENT_FK_SESSION_GUID,
SYSTEM_EVENT_BROKER_GUID,
SYSTEM_EVENT_TYPE,
SYSTEM_EVENT_NAME,
SYSTEM_EVENT_STATUS,
SYSTEM_EVENT_CLASS,
DB_STATUS
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = [
'cIdx1SEV',
'cIdx2GUID',
'cIdx2TYPE',
'cIdx1SESS'
];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
public ?array $singleFields = [
DB_EVENT_GUID => 1,
DB_CREATED => -1,
SYSTEM_EVENT_BROKER_EVENT => 1,
SYSTEM_EVENT_BROKER_GUID => 1,
SYSTEM_EVENT_STATUS => 1,
SYSTEM_EVENT_BROKER_ROOT_GUID => 1,
SYSTEM_EVENT_TYPE => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = [
'cIdx1SEV' => [SYSTEM_EVENT_NAME => 1, SYSTEM_EVENT_CLASS => 1 ],
'cIdx2GUID' => [ SYSTEM_EVENT_OGUID => 1, SYSTEM_EVENT_BROKER_GUID => 1 ],
'cIdx2TYPE' => [ SYSTEM_EVENT_TYPE => 1, SYSTEM_EVENT_STATUS => 1],
'cIdx1SESS' => [ SYSTEM_EVENT_FK_SESSION_GUID => 1, DB_STATUS => 1]
];
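A sketch (an assumed helper, not part of Namaste) of how a compound-index map of the form [ INDEX-NAME => [ FIELD => <1|-1>, ... ] ] could be translated into the index documents MongoDB's createIndexes command accepts; the field names below stand in for the SYSTEM_EVENT_* constants:

```php
<?php
// Assumed helper (not framework code): translate a compound-index map of the
// form [ INDEX-NAME => [ FIELD => <1|-1>, ... ] ] into the index documents
// accepted by MongoDB's createIndexes command.
function compoundIndexDocs(array $compoundIndexes): array
{
    $docs = [];
    foreach ($compoundIndexes as $name => $keys) {
        $docs[] = ['key' => $keys, 'name' => $name];
    }
    return $docs;
}

// field names below stand in for the SYSTEM_EVENT_* constants
$docs = compoundIndexDocs([
    'cIdx1SEV' => ['eventName' => 1, 'eventClass' => 1],
]);
```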
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any indexed field that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
//          sparse is not supported; partial indexes cover the same use case
//
// If a property is not in-use, then you must still declare the property as a class object but the
// value of the property will be set to null.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression" followed by a query document
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
// The above index supports queries over names (sorted DESC by last name), but only indexes people aged 62 or older.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = null; // mongo TOKEN does not appear b/c system table
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is neither a date nor an array holding at least one date value, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null; // ttl indexes appear in $indexFields
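The ttl declaration format above could be mapped onto the option shape MongoDB expects (a single-field index plus expireAfterSeconds). An assumed helper for illustration; 'sessionExpiry' is a hypothetical date field:

```php
<?php
// Assumed helper: turn a ttl declaration [ FIELD => seconds ] into the
// single-field index spec MongoDB expects (expireAfterSeconds option).
function ttlIndexSpecs(array $ttl): array
{
    $specs = [];
    foreach ($ttl as $field => $seconds) {
        $specs[] = ['key' => [$field => 1], 'expireAfterSeconds' => $seconds];
    }
    return $specs;
}

// 'sessionExpiry' is a hypothetical date field; records expire after one day
$specs = ttlIndexSpecs(['sessionExpiry' => 86400]);
```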
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = null;
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
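The exposed-field behaviour described above (keys matter, values don't) can be sketched as a simple key intersection; an assumed helper with hypothetical field names, not the framework's actual implementation:

```php
<?php
// Assumed helper: reduce a record to its exposed keys. As noted above, the
// array values are irrelevant; only the key names matter.
function exposeRecord(array $record, ?array $exposedFields): array
{
    if ($exposedFields === null) {
        return $record;    // no exposure list declared: return everything
    }
    return array_intersect_key($record, $exposedFields);
}

$out = exposeRecord(
    ['_id' => 42, 'status' => 'A', 'notes' => 'hello'],   // field names assumed
    ['status' => true, 'notes' => true]
);
```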
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather to control when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = null;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be
// warehoused, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y, defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
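The comment on WH_QUALIFIER above says the null placeholder is replaced with the value from the wh request payload. A sketch of that substitution, using an assumed, simplified qualifier shape (raw mongo operators in place of the OPERAND_*/OPERATOR_* constants):

```php
<?php
// Sketch with an assumed, simplified qualifier shape (raw mongo operators in
// place of the OPERAND_*/OPERATOR_* constants): swap the null placeholder in
// the default qualifier for the value supplied in the wh request payload.
function bindCutoff(array $qualifier, int $cutoff): array
{
    foreach ($qualifier as $field => &$clause) {
        if (!is_array($clause)) {
            continue;                 // e.g. a bare AND/OR marker
        }
        foreach ($clause as $op => &$operands) {
            if (is_array($operands)) {
                // replace every null placeholder with the client value
                $operands = array_map(fn($v) => $v ?? $cutoff, $operands);
            }
        }
        unset($operands);
    }
    unset($clause);
    return $qualifier;
}

$bound = bindCutoff(
    ['created' => ['$lt' => [null]], 'status' => ['$eq' => ['A']]],
    1598000000
);
```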
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-16-17 mks CORE-500: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 08-16-17 mks CORE-500: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-16-17 mks CORE-500: original coding
*
*/
public function __destruct()
{
// does nothing
}
}

View File

@@ -0,0 +1,545 @@
<?php
/**
* gatTestMongo() -- mongo test class
*
 * This class is used for testing mongo operations. It has its own collection in the database and is used
 * during development and in unit tests.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 07-10-17 mks original coding
* 08-04-17 mks added version control, partialIndexes
* 09-12-17 mks CORE-558: protected fields
* 04-19-18 mks _INF-188: warehousing section added
* 01-14-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-120: support for auth token
*
*/
class gatTestMongo
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_APPSERVER; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_TEST_MONGO; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_TEST; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_TEST_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = false; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NONDESTRUCTIVE; // set to AUDIT_value constant (nondestructive = reads(yes))
public bool $setJournaling = true; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = false; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array defining the field name and the data type for each field. Prior to insertion,
//         all data is validated for type and membership; data that does not satisfy these requirements is
//         silently dropped.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
TEST_FIELD_TEST_STRING => DATA_TYPE_STRING,
TEST_FIELD_TEST_DOUBLE => DATA_TYPE_DOUBLE,
TEST_FIELD_TEST_INT => DATA_TYPE_INTEGER,
TEST_FIELD_TEST_NIF => DATA_TYPE_INTEGER, // used in unit testing for testing non-indexed queries
TEST_FIELD_TEST_BOOL => DATA_TYPE_BOOL,
TEST_FIELD_TEST_OBJECT => DATA_TYPE_OBJECT,
TEST_FIELD_TEST_ARRAY => DATA_TYPE_ARRAY, // array of data (flat)
TEST_FIELD_TEST_SUBC => DATA_TYPE_ARRAY, // subCollection (array of arrays which may contain arrays)
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
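The validate-and-silently-drop behaviour described above can be sketched as a membership plus type check. An assumed sketch only; the type names here mirror gettype() output, whereas the real DATA_TYPE_* constants may encode types differently:

```php
<?php
// Assumed sketch of the validation described above: prior to insertion each
// payload entry is checked for membership and type, and silently dropped on
// failure. Type names mirror gettype() output; the real DATA_TYPE_* constants
// may encode types differently.
function filterByType(array $payload, array $fields): array
{
    $clean = [];
    foreach ($payload as $key => $value) {
        if (isset($fields[$key]) && gettype($value) === $fields[$key]) {
            $clean[$key] = $value;
        }
    }
    return $clean;
}

$clean = filterByType(
    ['testString' => 'ok', 'testInt' => 'not-an-int', 'rogue' => 1],
    ['testString' => 'string', 'testInt' => 'integer']
);
```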
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, these fields cannot be updated or removed.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, MONGO_ID
];
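The "best case" behaviour described above (silently dropping protected fields from a client update) can be sketched with a key difference. An assumed helper; the field names stand in for DB_TOKEN, DB_CREATED, MONGO_ID, etc.:

```php
<?php
// Assumed sketch of the "best case" behaviour described above: protected
// fields are silently dropped from a client update payload before storage.
function stripProtected(array $payload, ?array $protectedFields): array
{
    if ($protectedFields === null) {
        return $payload;
    }
    return array_diff_key($payload, array_flip($protectedFields));
}

// field names stand in for DB_TOKEN, DB_CREATED, MONGO_ID, etc.
$update = stripProtected(
    ['testInt' => 5, 'idToken' => 'abc', 'created' => 123],
    ['idToken', 'created', '_id']
);
```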
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_CREATED, TEST_FIELD_TEST_INT, DB_TOKEN, TEST_FIELD_TEST_BOOL, DB_EVENT_GUID,
TEST_FIELD_TEST_DOUBLE, DB_ACCESSED, TEST_FIELD_TEST_STRING, DB_STATUS, TEST_FIELD_TEST_OBJECT
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = [
'cIdx1Test',
'mIdx1Test',
];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
TEST_FIELD_TEST_BOOL => 1,
TEST_FIELD_TEST_STRING => 1,
DB_CREATED => -1,
DB_STATUS => -1,
DB_EVENT_GUID => 1 // event guid should always be indexed
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = [
'cIdx1Test' => [TEST_FIELD_TEST_INT => 1, TEST_FIELD_TEST_DOUBLE => -1 ]
];
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any indexed field that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = [ 'mIdx1Test' => [TEST_FIELD_TEST_ARRAY . DOT . TEST_FIELD_TEST_INT => 1]];
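// As an illustration only (collection and field names below are placeholders, not the
// deployed schema), the 'mIdx1Test' declaration above corresponds to a shell-level index
// creation along these lines -- mongo detects the array field and applies the multi-key
// behavior automatically:
//
//      db.myTable.createIndex({ "arrayColumnName.subField1" : 1 }, { name : "mIdx1Test" })
//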
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported; partial indexes supersede it
//
// If a property is not in-use, then you must still declare it as a class member, but its
// value will be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Partial indexes only add a row to the index if the referenced column satisfies the conditions
// specified in the query expression (expr2).
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression : { [ query ] }
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
// The above index supports queries for names (sorted DESC by last name) of people aged 62 or older.
//
//
public ?array $partialIndexes = [
// this is a really bad example, but an example nonetheless, of a partial index
[[ TEST_FIELD_TEST_OBJECT => 1], [ MONGO_STRING_PARTIAL_FE => [ DB_CREATED => [ MONGO_EXISTS => false ]]]],
[[ TEST_FIELD_TEST_OBJECT => 1], [ MONGO_STRING_PARTIAL_FE => [ DB_CREATED => [ MONGO_EXISTS => true ]]]]
];
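// For reference, the first declaration above is roughly equivalent to this shell command
// (collection and field names are placeholders for illustration only):
//
//      db.myTable.createIndex({ testObject : 1 },
//          { partialFilterExpression : { createdDate : { $exists : false }}})
//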
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
DB_TOKEN => 1 // DB_TOKEN should always appear
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date, or an array that holds a date value, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = [DB_CREATED => 86400]; // ttl indexes appear in $indexFields
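// Shell-level equivalent of the ttl declaration above (collection/field names are
// placeholders; expireAfterSeconds is the underlying MongoDB index option):
//
//      db.myTable.createIndex({ createdDate : 1 }, { expireAfterSeconds : 86400 })
//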
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
TEST_FIELD_TEST_STRING => CM_TST_FIELD_TEST_STRING,
TEST_FIELD_TEST_DOUBLE => CM_TST_FIELD_TEST_DOUBLE,
TEST_FIELD_TEST_INT => CM_TST_FIELD_TEST_INT,
TEST_FIELD_TEST_NIF => CM_TST_FIELD_TEST_NIF,
TEST_FIELD_TEST_BOOL => CM_TST_FIELD_TEST_BOOL,
TEST_FIELD_TEST_OBJECT => CM_TST_FIELD_TEST_OBJ,
TEST_FIELD_TEST_ARRAY => CM_TST_FIELD_TEST_ARY,
TEST_FIELD_TEST_SUBC => CM_TST_FIELD_TEST_SUBC,
DB_TOKEN => CM_TST_TOKEN,
DB_STATUS => CM_TST_FIELD_TEST_STATUS,
DB_EVENT_GUID => CM_TST_EVENT_GUID,
DB_CREATED => CM_TST_FIELD_TEST_CDATE,
DB_ACCESSED => CM_TST_FIELD_TEST_ADATE
];
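// A minimal sketch (NOT the framework implementation) of how a cache map is applied when
// a record is returned: native column names are swapped for their mapped labels, and any
// unmapped column is withheld so schema is never exposed.
//
//      $mapped = [];
//      foreach ($record as $column => $value) {
//          if (isset($this->cacheMap[$column])) {
//              $mapped[$this->cacheMap[$column]] = $value;
//          }
//      }
//      // $mapped now holds only the CM_* labeled fields
//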
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather controls when to use a regex operand in a query...
public ?array $regexFields = [ TEST_FIELD_TEST_STRING ];
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = [
TEST_FIELD_TEST_SUBC => [
TEST_FIELD_TEST_INT,
TEST_FIELD_TEST_DOUBLE,
TEST_FIELD_TEST_STRING,
TEST_FIELD_TEST_BOOL,
DB_TOKEN
]
];
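// For orientation, a stored parent record with the sub-collection above would be shaped
// roughly like this (field names and values are illustrative only):
//
//      {
//          _id     : ObjectId("..."),
//          ...parent fields...,
//          subC    : [
//              { testInt : 4, testDouble : 0.25, testString : "...", testBool : true,  token : "..." },
//              { testInt : 9, testDouble : 1.50, testString : "...", testBool : false, token : "..." }
//          ]
//      }
//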
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be
// warehoused, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y, defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H, or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 07-10-17 mks CORE-463: code complete (refactor from ddb to mdb)
*
*/
public function __construct()
{
$this->authToken = '79344859-5403-1556-7663-4E34D6B4CBE4'; // SMAX-API record token
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* buildTestData() -- public static method
*
* this method is used to build an array structure of random data. The method's only input parameter
* specifies the number of records to return to the calling client.
*
* The input parameter specifies how many records (indexes in the array) should be returned to the calling client
* and should be a reasonable integer between one and one-thousand (1 - 1000). If the passed value for the number
* of records is outside of this range, on either side, then it will be replaced with the range
* limit for the appropriate "side".
*
* We then spin through a loop which populates an indexed array with the elements, from the test class,
* with the appropriate extension as the sub-array key value.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param int $_records
* @return null|array
*
* HISTORY:
* ========
* 07-10-17 mks original coding
* 07-12-17 mks added object field to test data generation: a copy of the current record's data
* 07-20-17 mks moved to gatTestMongo class where it belongs (testing selfDestruct feature in templates)
* 04-15-19 mks DB-116: header re-formatted for PHP7 (type casting enforced)
*
*/
public static function buildTestData(int $_records = 0): ?array
{
if ($_records < 1) $_records = 1;
if ($_records > 1000) $_records = 1000;
$retData = null;
mt_srand();
for ($index = 0; $index < $_records; $index++) {
$sentenceCount = mt_rand(1, 20);
$retData[$index][CM_TST_FIELD_TEST_INT] = $sentenceCount;
$retData[$index][CM_TST_FIELD_TEST_NIF] = 1;
$retData[$index][CM_TST_FIELD_TEST_DOUBLE] = floatval((1 / mt_rand(1, 10000)) * 100);
$retData[$index][CM_TST_FIELD_TEST_BOOL] = (mt_rand(0, 1) === 1);
$retData[$index][CM_TST_FIELD_TEST_STRING] = lorumIpsum($sentenceCount, 0);
$retData[$index][CM_TST_FIELD_TEST_ARY] = [[
CM_TST_FIELD_TEST_INT => mt_rand(0, 100),
CM_TST_FIELD_TEST_STRING => STRING_DATA
], [
CM_TST_FIELD_TEST_INT => mt_rand(100, 200),
CM_TST_FIELD_TEST_STRING => INFO_INTERNAL_REQUEST
]];
$retData[$index][CM_TST_FIELD_TEST_OBJ] = (object) [
CM_TST_FIELD_TEST_INT => mt_rand(0, 100),
CM_TST_FIELD_TEST_STRING => STRING_DATA
];
$retData[$index][CM_TST_FIELD_TEST_SUBC][] = [
CM_TST_FIELD_TEST_INT => $sentenceCount,
CM_TST_FIELD_TEST_DOUBLE => $retData[$index][CM_TST_FIELD_TEST_DOUBLE],
CM_TST_FIELD_TEST_BOOL => $retData[$index][CM_TST_FIELD_TEST_BOOL],
CM_TST_FIELD_TEST_STRING => $retData[$index][CM_TST_FIELD_TEST_STRING]
];
}
return ($retData);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 07-10-17 mks CORE-463: code complete (refactor from ddb to mdb)
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 07-10-17 mks CORE-463: code complete (refactor from ddb to mdb)
*
*/
public function __destruct()
{
;
}
}

View File

@@ -0,0 +1,701 @@
<?php
/**
* Class gatTestPDO -- PDO Test Class
*
* This class is a test class for PDO operations. It is used for stub and unit-testing, references its own
* database table, and is for development/QA use only.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 09-13-17 mks CORE-562: original coding
* 04-19-18 mks _INF-188: warehousing section added
* 06-13-18 mks CORE-1044: making a consistent, sample, PDO template
* 01-18-19 mks DB-105: updated for audit/journaling unit testing
* 02-05-19 mks DB-107: created audit_view for cross-broker queries by the audit service
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatTestPDO
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_APPSERVER; // defines the mongo server destination
public string $schema = TEMPLATE_DB_PDO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_TEST_PDO; // defines the clear-text template class name
public string $collection = COLLECTION_PDO_TEST; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_PDO_TEST_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = false; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NONDESTRUCTIVE; // set to AUDIT_value constant
public bool $setJournaling = true; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = false; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
//
// Note that for PDO-type tables, the data type is more ... homogeneous... e.g.: data types define the data
// type only. It does not define the actual column type in-use. For example, there is no distinction made
// between a tinyInt, Int, or BigInt. As far as the framework is concerned, they're all just integers.
//
public array $fields = [
PDO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
TEST_FIELD_TEST_STRING => DATA_TYPE_STRING,
TEST_FIELD_TEST_DOUBLE => DATA_TYPE_DOUBLE,
TEST_FIELD_TEST_INT => DATA_TYPE_INTEGER,
TEST_FIELD_TEST_BOOL => DATA_TYPE_INTEGER, // BOOLs in PDO are really tinyInt(1)
TEST_FIELD_TEST_NIF => DATA_TYPE_INTEGER, // used in unit-testing -- never index this field
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_STRING, // dateTime type
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_STRING // dateTime type
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, these fields cannot be updated or removed.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, PDO_ID
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public array $indexFields = [
PDO_ID, // implicitly indexed as pkey when table is created
DB_CREATED, DB_STATUS, // compound index
DB_TOKEN, // unique index
TEST_FIELD_TEST_STRING, // for unit-testing
TEST_FIELD_TEST_INT, DB_EVENT_GUID, // single field indexes...
DB_ACCESSED, TEST_FIELD_TEST_DOUBLE
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = [ 'cIdx1Test' ];
// the primary key index is declared in the class properties section as $setPKey
// unique indexes are used when the values stored in these columns must be unique to the table. Note that
// null values are permissible in unique-index columns.
public ?array $uniqueIndexes = [ DB_TOKEN ];
// single field index declarations -- since you can have a field in more than one index (index, multi)
// the format for the single-field index declaration is a simple indexed array.
public ?array $singleFields = [
TEST_FIELD_TEST_INT, DB_ACCESSED, DB_EVENT_GUID, TEST_FIELD_TEST_DOUBLE, TEST_FIELD_TEST_STRING
];
// multi-column (or compound) indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME1, FIELD_NAME2, ..., FIELD_NAMEn ]]
// where INDEX-NAME is a unique string
//
// PDO compound-indexes are left-most indexes - if the db cannot use the entire index, it must be able to use
// one, or more, of the left-most fields in the index.
public ?array $compoundIndexes = [
'cIdx1Test' => [ DB_CREATED, DB_STATUS ]
];
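// Left-most prefix example, using this table's actual columns: the compound index on
// (createdDate_tst, status_tst) can serve a query that filters on createdDate_tst alone,
// but a query filtering only on status_tst cannot use it:
//
//      SELECT token_tst FROM gaTest_tst WHERE createdDate_tst >= '2019-01-01';  -- index usable
//      SELECT token_tst FROM gaTest_tst WHERE status_tst = 'ACTIVE';            -- no left-most match
//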
// NOTE: foreign-key indexes are not explicitly enumerated in a template -- that relationship is defined in the
// schema for the table. Foreign-key indexes appear implicitly in the indexing declarations above.
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
TEST_FIELD_TEST_STRING => CM_TST_FIELD_TEST_STRING,
TEST_FIELD_TEST_DOUBLE => CM_TST_FIELD_TEST_DOUBLE,
TEST_FIELD_TEST_INT => CM_TST_FIELD_TEST_INT,
TEST_FIELD_TEST_BOOL => CM_TST_FIELD_TEST_BOOL,
TEST_FIELD_TEST_NIF => CM_TST_FIELD_TEST_NIF,
DB_TOKEN => CM_TST_TOKEN,
DB_STATUS => CM_TST_FIELD_TEST_STATUS,
DB_EVENT_GUID => CM_TST_EVENT_GUID,
DB_CREATED => CM_TST_FIELD_TEST_CDATE,
DB_ACCESSED => CM_TST_FIELD_TEST_ADATE
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
// in PDO-land, binary fields are your basic data blobs. All binary fields require special handling and so
// need to be enumerated here as an indexed array.
public ?array $binFields = null;
// DB SQL:
// -------
// PDO SQL is stored in the template and is keyed by the current namaste version (defined in the XML file) during
// execution of the deployment script. Each version denotes a container of SQL commands that will be executed
// for the targeted version.
//
// SQL is versioned in parallel with the Namaste (XML->application->id->version) version. Each PDO_SQL
// sub-container has several fields - one of which has the version identifier. When the deployment script
// executes, the release versions are compared and, if they're an exact match, the SQL is submitted for execution.
//
// The PDO_SQL container consists of these sub-containers:
//
// PDO_SQL_VERSION --> this is a float value in the form of x.y as namaste only supports versions as a major
// and minor release number. (Patch releases are minor release increments.)
// PDO_TABLE --> string value containing the full table name.
// PDO_SQL_FC --> the FC means "first commit" -- when the table is first created, it will execute the
// SQL in this block, if it exists, and if the version number for the sub-container
// exactly matches the version number in the configuration XML.
// PDO_SQL_UPDATE --> When the sub-container PDO_SQL_VERSION value exactly matches the XML release value,
// then the ALTER-TABLE sql in this update block will be executed.
// STRING_DROP_CODE_IDX --> The boilerplate code for dropping the indexes of the table.
// STRING_DROP_CODE_DEV --> For version 1.0 only, this points to code to drop the entire table.
//
// Again, containers themselves are indexed arrays under the PDO_SQL tag. Within the container, data is stored
// as an associative array with the keys enumerated above.
//
//
// DB OBJECTS:
// -----------
// DB objects are: views, procedures, functions and events.
// All such objects assigned to a class are declared in this array under the appropriate header.
// This is a safety-feature that prevents one class (table) from invoking another class's objects.
// The name of the object is stored as an indexed-array under the appropriate header.
//
// The format for these structures is basically the same. Each DBO is stored in an associative array with the
// key defining the name of the object. Within each object, there are embedded associative arrays that have the
// name of the object as the key and the object definition (text) as the value:
//
// objectType => [ objectName => [ objectContent ], ... ]
//
// Each created object should also have the directive to remove its predecessor using a DROP statement.
//
// todo -- unset these objects post-instantiation so that schema is not revealed
//
// VIEWS:
// ------
// Every namaste table will have at least one view which limits the data fetched from the table. At a minimum,
// the id_{ext} field is filtered from the resulting data set via the view. Other fields can be withheld as well
// but that is something that is individually set-up for each table.
//
// The basic view has the following syntax for declaring its name:
// view_basic_{tableName_ext}
// All views start with the word "view" so as to self-identify the object, followed by the view type which,
// optimally, you should try to limit to a single, descriptive word.
//
// Each view label points to a sub-array containing three elements:
// STRING_VIEW ----------> this is the SQL code that defines the view as a single string value
// STRING_TYPE_LIST -----> null or an array of types that corresponds to variable markers ('?') in the sql
// STRING_DESCRIPTION ---> a string that describes the purpose of the view.
//
// At a minimum, every class definition should contain at least a basic view, as all queries that don't specify
// a named view or other DBO will default to the basic view in the FROM clause of the generated SQL.
//
// PROCEDURES:
// -----------
// For stored procedures, which are entirely optional, the array definition contains the following elements:
// STRING_PROCEDURE -------> the SQL code that defines the stored procedure as a single string value
// STRING_DROP_CODE -------> the sql code that drops the current database object
// STRING_TYPE_LIST -------> an associative array of associative arrays -- in the top level, the key is the name
// of the parameter that points to a sub-array that contains the parameter direction
// as the key, and the parameter type as the value. There should be an entry for each
// parameter to be passed to the stored procedure/function.
//
// ------------------------------------------------------
// | NOTE: IN params must precede INOUT and OUT params! |
// ------------------------------------------------------
//
// STRING_SP_EVENT_TYPE ---> Assign one of the DB_EVENT constants to this field to indicate the type of
// query the stored-procedure will execute.
// NOTE: there is not a defined PDO::PARAM constant for type float: use string.
// STRING_DESCRIPTION -----> clear-text definition of the procedure's purpose
//
// Note that all of these containers are required; empty containers should contain a null placeholder.
//
// When a stored procedure contains a join of two or more tables/views, the first table listed is considered
// to be the "owning" table and the procedure will be declared in the class template for that table, but it will
// not be duplicated in other template classes referenced in the join.
//
public ?array $dbObjects = [
PDO_SQL => [
[
PDO_VERSION => 1.0,
PDO_TABLE => 'gaTest_tst',
PDO_SQL_FC => "
--
-- Table structure for table `gaTest_tst`
--
CREATE TABLE `gaTest_tst` (
`id_tst` int(10) UNSIGNED NOT NULL,
`testString_tst` varchar(255) DEFAULT NULL,
`testDouble_tst` double DEFAULT NULL,
`testInteger_tst` int(11) DEFAULT NULL,
`testBoolean_tst` tinyint(1) UNSIGNED DEFAULT NULL,
`notIndexedField_tst` int(11) DEFAULT NULL COMMENT 'do not index this field',
`createdDate_tst` datetime DEFAULT NULL,
`lastAccessedDate_tst` datetime DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
`status_tst` varchar(32) DEFAULT NULL,
`eventGUID_tst` char(36) DEFAULT NULL,
`token_tst` char(36) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
",
PDO_SQL_UPDATE => "
--
-- Indexes for table `gaTest_tst`
--
ALTER TABLE `gaTest_tst`
ADD PRIMARY KEY (`id_tst`),
ADD UNIQUE KEY `gaTest_tst_token_tst_uindex` (`token_tst`),
ADD KEY `gaTest_tst_createdDate_tst_status_tst_index` (`createdDate_tst`,`status_tst`),
ADD KEY `gaTest_tst_eventGuid_tst_index` (`eventGUID_tst`),
ADD KEY `gaTest_tst_lastAccessedDate_tst_index` (`lastAccessedDate_tst`),
ADD KEY `testInteger_tst` (`testInteger_tst`),
ADD KEY `testDouble_tst` (`testDouble_tst`),
ADD KEY `testString_tst` (`testString_tst`(191));
--
-- AUTO_INCREMENT for dumped tables
--
--
-- AUTO_INCREMENT for table `gaTest_tst`
--
ALTER TABLE `gaTest_tst`
MODIFY `id_tst` int(10) UNSIGNED NOT NULL AUTO_INCREMENT;
",
/*
* example query return:
* ---------------------
* ALTER TABLE gaTest_tst DROP INDEX gaTest_tst_createdDate_tst_status_tst_index, DROP INDEX
* gaTest_tst_lastAccessedDate_tst_index, DROP INDEX testInteger_tst, DROP INDEX
* gaTest_tst_eventGuid_tst_index, DROP INDEX testDouble_tst, DROP INDEX testString_tst;
*
* NOTE:
* -----
* The sql comment code tag (--) will be removed during mysqlConfig's run time processing
*/
STRING_DROP_CODE_IDX => "--
SELECT CONCAT('ALTER TABLE ', `Table`, ' DROP INDEX ', GROUP_CONCAT(`Index` SEPARATOR ', DROP INDEX '),';' )
FROM (
SELECT table_name AS `Table`, index_name AS `Index`
FROM information_schema.statistics
WHERE INDEX_NAME != 'PRIMARY'
AND table_schema = 'XXXDROP_DB_NAMEXXX'
AND table_name = 'XXXDROP_TABLE_NAMEXXX'
GROUP BY `Table`, `Index`) AS tmp
GROUP BY `Table`;
",
STRING_DROP_CODE_DEV => "DROP TABLE IF EXISTS gaTest_tst;" // only executed if declared
]
],
PDO_VIEWS => [
'view_basic_gaTest_tst' => [
STRING_VIEW =>
"DROP VIEW IF EXISTS view_basic_gaTest_tst;
CREATE VIEW view_basic_gaTest_tst AS
SELECT token_tst, testString_tst, testDouble_tst, testInteger_tst, testBoolean_tst,
notIndexedField_tst, status_tst, createdDate_tst, lastAccessedDate_tst, eventGUID_tst
FROM gaTest_tst
WHERE status_tst <> 'DELETED';",
STRING_TYPE_LIST => null,
STRING_DESCRIPTION => 'basic query'
],
'view_audit_gaTest_tst' => [
STRING_VIEW =>
"DROP VIEW IF EXISTS view_audit_gaTest_tst;
CREATE VIEW view_audit_gaTest_tst AS
SELECT token_tst, testString_tst, testDouble_tst, testInteger_tst, testBoolean_tst,
notIndexedField_tst, status_tst, createdDate_tst, lastAccessedDate_tst, eventGUID_tst
FROM gaTest_tst;",
STRING_TYPE_LIST => null,
STRING_DESCRIPTION => 'query for cross-broker queries by audit micro-service'
]
],
PDO_PROCEDURES => [
'testProc0' => [
STRING_DROP_CODE_DEV => "DROP PROCEDURE IF EXISTS testProc0;",
STRING_PROCEDURE =>
"CREATE PROCEDURE testProc0()
READS SQL DATA
BEGIN
SET @sqlString = 'SELECT COUNT(*) AS recordCount FROM gaTest_tst';
PREPARE sqlString from @sqlString;
EXECUTE sqlString;
DEALLOCATE PREPARE sqlString;
END",
STRING_TYPE_LIST => null,
STRING_SP_EVENT_TYPE => DB_EVENT_SELECT,
STRING_DESCRIPTION => 'stored procedure to return row-count of the table, demos a zero-param sp'
],
'testProc1' => [
STRING_DROP_CODE_DEV => 'DROP PROCEDURE IF EXISTS testProc1;',
STRING_PROCEDURE =>
"CREATE PROCEDURE testProc1( IN targetValue INT )
READS SQL DATA
BEGIN
SET @targetVal = targetValue;
SET @sqlString = CONCAT('
SELECT testInteger_tst, count(*) as rowCount
FROM gaTest_tst
WHERE testInteger_tst is not null
GROUP BY testInteger_tst
HAVING rowCount > ', @targetVal, '
ORDER BY rowCount DESC
LIMIT 10');
PREPARE sqlStatement FROM @sqlString;
EXECUTE sqlStatement;
DEALLOCATE PREPARE sqlStatement;
END",
STRING_TYPE_LIST => [
'targetValue' => [ STRING_IN => PDO::PARAM_INT ]
],
STRING_SP_EVENT_TYPE => DB_EVENT_SELECT,
                STRING_DESCRIPTION => 'stored procedure that returns the top-10 integer-field values, by row count, for counts greater than the supplied input parameter'
],
'testProc2' => [
STRING_DROP_CODE_DEV => 'DROP PROCEDURE IF EXISTS testProc2;',
STRING_PROCEDURE =>
"CREATE PROCEDURE testProc2( IN intVal INT, OUT avgDouble FLOAT, OUT stdDevDouble FLOAT )
READS SQL DATA
BEGIN
SELECT AVG(testDouble_tst), STDDEV(testDouble_tst)
INTO avgDouble, stdDevDouble
FROM gaTest_tst
WHERE testInteger_tst = intVal;
END",
STRING_TYPE_LIST => [
'intVal' => [ STRING_IN => PDO::PARAM_INT ],
'avgDouble' => [ STRING_OUT => PDO::PARAM_STR ],
'stdDevDouble' => [ STRING_OUT => PDO::PARAM_STR]
],
STRING_SP_EVENT_TYPE => DB_EVENT_SELECT,
STRING_DESCRIPTION => 'stored procedure that calculates the avg() and stddev() for the floats with a specified integer value'
]
],
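    // Sketch (not the framework's API) of how an OUT-param procedure like testProc2 maps to raw PDO with
    // the mysql driver, which cannot bind OUT params directly -- session variables are the usual workaround:
    //
    //      $pdo->query("CALL testProc2(7, @avgD, @sdD)");
    //      $row = $pdo->query("SELECT @avgD AS avgDouble, @sdD AS stdDevDouble")->fetch(PDO::FETCH_ASSOC);
    //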
PDO_FUNCTIONS => [],
PDO_EVENTS => [],
PDO_TRIGGERS => []
];
//=================================================================================================================
// MIGRATION DECLARATIONS
// ----------------------
// Data in this section is used to handle migrations -- when we're pulling from legacy tables into the Namaste
// framework. See online doc for more info.
//=================================================================================================================
/**
* The migration map is an associative array that maps the Namaste fields (keys) to the corresponding
* (remote) legacy fields in the source table to be migrated to Namaste.
*
* For example, if we were migrating a mysql table in the legacy production database to Namaste::mongo, then
* the keys of the migration map would be the Namaste::mongo->fieldNames and the values would be the mysql
* column names in the legacy table.
*
* If there is a value which cannot be mapped to a key, then set it to null.
*
* Fields that will be dropped in the migration are not listed as values or as keys.
*
* This map will only exist in the template object and will never be imported into the class widget.
*
* This is a required field.
*
*/
public ?array $migrationMap = null;
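    // illustrative only (hypothetical legacy column names) -- a populated map pairs Namaste fields (keys)
    // with legacy source columns (values), with null for a Namaste field that has no legacy equivalent:
    //
    //      public ?array $migrationMap = [
    //          DB_TOKEN   => 'legacy_uuid',
    //          DB_CREATED => 'created_at',
    //          DB_STATUS  => null          // no legacy equivalent
    //      ];
    //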
/*
* the migrationSortKey defines the SOURCE field by which the fetch query will be sorted. ALL sort fields are
* in ASC order so all we need to list here is the name of the field -- which MUST BE IN THE SOURCE TABLE.
*
* Populating this field may require preliminary examination of the data - what we want is a field that has
* zero NULL values.
*
* This is a required field.
*
*/
public ?string $migrationSortKey = '';
/*
* The migrationStatusKey defines the status field/column in the source table -- if the user requires that
* soft-deleted records not be migrated, then this field must be set. Otherwise, set the value to null.
*
* The format is in the form of a key-value paired array. The key specifies the name of the column and the value
* specifies the "deleted" value that, if found, will cause that row from the SOURCE data to be omitted from the
* DESTINATION table.
*
* e.g.: $migrationStatusKV = [ 'some_field' => 'deleted' ]
*
* Note that both the key and the value are case-sensitive!
*
* This is an optional field.
*
*/
    public ?array $migrationStatusKV = null;
// The $migrationSourceSchema defines the remote schema for the source table, and is set in the constructor
public ?string $migrationSourceSchema;
// The source table in the remote repos (default defined in the XML) must be declared here, set in the constructor
public ?string $migrationSourceTable;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
    // Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
    // set to true, then the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
        WH_INTERVAL => 'M',             // must be either D, M, Q or Y, defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
        WH_DELETE => 'H',               // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [ OPERAND_NULL => [ OPERATOR_LT => [ null ] ] ],
DB_STATUS => [ OPERAND_NULL => [ OPERATOR_EQ => [ STATUS_ACTIVE ]]],
OPERAND_AND => null
]
];
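    // illustrative only (hypothetical epoch cutoff): once the client's wh request payload supplies the
    // missing value, the default qualifier above behaves like:
    //      createdDate < 1577836800 AND status = STATUS_ACTIVE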
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 09-13-17 mks CORE-562: original coding
* 09-09-19 mks DB-111: initialization of migration members moved to constructor b/c IDE warnings.
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
// these next two lines are so that the IDE doesn't flag the variable declarations as unused </facePalm>
$this->migrationSourceSchema = '';
$this->migrationSourceTable = '';
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/** @noinspection PhpUnused */
/**
* buildTestData() -- public static method
*
* this method is used to build an array structure of random data. There are two parameters to the method:
*
* $_records specifies the number of records to return to the calling client
* $_incomplete indicates if we want to generate a partial (not all the fields are provided) record
*
 * The $_incomplete parameter allows us to test the PDO class's ability to successfully process partial payloads
* on new record creation.
*
 * The input parameter specifies how many records (indexes in the array) should be returned to the calling client
 * and should be a reasonable integer between one and one thousand (1 - 1000). If the passed-value for the number
 * of records is outside of this range, on either side, then the passed-value will be replaced with the range
 * limit for the appropriate "side".
*
* We then spin through a loop which populates an indexed array with the elements, from the test class,
* with the appropriate extension as the sub-array key value.
*
* @author mike@givingassistant.org
* @version 1.0
*
* @param int $_records
* @param bool $_incomplete
* @return array
*
* HISTORY:
* ========
* 09-13-17 mks CORE-562: original coding
* 10-23-17 mks CORE_585: incomplete option added to skip some of the fields
* 11-06-20 mks DB-171: ensuring that the test string length cannot be > 255 (max width of table column)
*
*/
public static function buildTestData(int $_records = 1, bool $_incomplete = false): array
{
if ($_records < 1) $_records = 1;
if ($_records > 1000) $_records = 1000;
        $retData = [];
mt_srand();
for ($index = 0; $index < $_records; $index++) {
$sentenceCount = mt_rand(1, 20);
$coinToss = ($_incomplete) ? mt_rand(0, 1) : 1;
if ($coinToss) $retData[$index][CM_TST_FIELD_TEST_INT] = $sentenceCount;
$coinToss = ($_incomplete) ? mt_rand(0, 1) : 1;
if ($coinToss) $retData[$index][CM_TST_FIELD_TEST_DOUBLE] = floatval((1 / mt_rand(1, 10000)) * 100);
$coinToss = ($_incomplete) ? mt_rand(0, 1) : 1;
if ($coinToss) {
$retData[$index][CM_TST_FIELD_TEST_STRING] = lorumIpsum($sentenceCount, 0);
if (strlen($retData[$index][CM_TST_FIELD_TEST_STRING]) > 255)
                    $retData[$index][CM_TST_FIELD_TEST_STRING] = substr(
                        $retData[$index][CM_TST_FIELD_TEST_STRING],
                        0,
                        strpos(wordwrap($retData[$index][CM_TST_FIELD_TEST_STRING], 255), "\n")
                    );
}
$retData[$index][CM_TST_FIELD_TEST_BOOL] = intval(mt_rand(0,1));
}
return ($retData);
}
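    // illustrative usage from a test stub (class name hypothetical -- substitute this template's class):
    //      $rows = SomeTestTemplate::buildTestData(5, true);   // five records, some fields randomly omitted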
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 09-13-17 mks CORE-562: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 09-13-17 mks CORE-562: original coding
*
*/
public function __destruct()
{
// empty by design
}
}

<?php
/** @noinspection PhpUnused */
/**
* Class gatTransactions -- mongo collection template for Namaste
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 02-11-20 mks DB-147: original coding
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatTransactions
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_APPSERVER; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_TRANSACTIONS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_TRANSACTIONS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_TRANSACTIONS_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NONDESTRUCTIVE; // set to AUDIT_value constant (nondestructive = reads(yes))
public bool $setJournaling = true; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = true; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
    public bool $isGA = false;                              // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
DB_TOKEN => DATA_TYPE_STRING, // unique pkey exposed externally and is REQUIRED
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER, // epoch time
TRANSACTIONS_ORDER_ID => DATA_TYPE_INTEGER,
TRANSACTIONS_TYPE => DATA_TYPE_STRING,
TRANSACTIONS_DESCRIPTION => DATA_TYPE_STRING,
TRANSACTIONS_AMOUNT => DATA_TYPE_DOUBLE,
TRANSACTIONS_EVENT_DATE => DATA_TYPE_DATETIME,
TRANSACTIONS_START_DATE => DATA_TYPE_DATETIME,
TRANSACTIONS_END_DATE => DATA_TYPE_DATETIME,
TRANSACTIONS_META_DATA => DATA_TYPE_ARRAY,
TRANSACTIONS_MD_TRAVEL_END_DATE => DATA_TYPE_DATETIME,
TRANSACTIONS_MD_PARTNER => DATA_TYPE_STRING,
TRANSACTION_MD_PRODUCT_ID => DATA_TYPE_INTEGER,
TRANSACTIONS_MD_TRAVEL_START_DATE => DATA_TYPE_DATETIME,
TRANSACTIONS_MD_CUSTOMER_ID => DATA_TYPE_INTEGER,
TRANSACTIONS_MD_OFFER_NUMBER => DATA_TYPE_INTEGER,
TRANSACTIONS_MD_ENV => DATA_TYPE_STRING,
TRANSACTIONS_DEST_CITY_NAME => DATA_TYPE_STRING,
TRANSACTIONS_DEST_STATE_CODE => DATA_TYPE_STRING,
TRANSACTIONS_DEST_COUNTRY_CODE => DATA_TYPE_STRING,
TRANSACTIONS_DONOR_ID => DATA_TYPE_INTEGER,
TRANSACTIONS_CID => DATA_TYPE_STRING,
TRANSACTIONS_CAUSE_TITLE => DATA_TYPE_STRING
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
    // will be silently dropped (best case). Either way, these fields cannot be updated or removed.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, MONGO_ID
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DB_STATUS, DB_TOKEN
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIRECTION> ] where <SORT_DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_CREATED => -1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
    // mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
    // an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
    // In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
    // sparse is not supported because partial indexes supersede it
//
// If a property is not in-use, then you must still declare the property as a class object but the
// value of the property will be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
    // Partial indexes only add a row to the index if the referenced column satisfies the conditions specified
    // in the query condition (expr2).
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
    //          expr2 is the keyword partialFilterExpression : { <query> }
    //              e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
//
    // db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
    // The above index supports queries, sorted DESC by last name, for people aged 62 or older.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
        DB_TOKEN => 1 // DB_TOKEN should always appear
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null; // ttl indexes appear in $indexFields
    // cache maps are required for Namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
DB_TOKEN => CM_TST_TOKEN,
DB_STATUS => CM_TST_FIELD_TEST_STATUS,
DB_EVENT_GUID => CM_TST_EVENT_GUID,
DB_CREATED => CM_TRANSACTIONS_CREATED_AT,
DB_ACCESSED => CM_TRANSACTIONS_UPDATED_AT,
TRANSACTIONS_ORDER_ID => CM_TRANSACTIONS_ORDER_ID,
TRANSACTIONS_TYPE => CM_TRANSACTIONS_TYPE,
TRANSACTIONS_DESCRIPTION => CM_TRANSACTIONS_DESCRIPTION,
TRANSACTIONS_AMOUNT => CM_TRANSACTIONS_AMOUNT,
TRANSACTIONS_EVENT_DATE => CM_TRANSACTIONS_EVENT_DATE,
TRANSACTIONS_START_DATE => CM_TRANSACTIONS_START_DATE,
TRANSACTIONS_END_DATE => CM_TRANSACTIONS_END_DATE,
TRANSACTIONS_META_DATA => CM_TRANSACTIONS_META_DATA,
TRANSACTIONS_MD_TRAVEL_END_DATE => CM_TRANSACTIONS_MD_TRAVEL_END_DATE,
TRANSACTIONS_MD_PARTNER => CM_TRANSACTIONS_MD_PARTNER,
TRANSACTION_MD_PRODUCT_ID => CM_TRANSACTIONS_MD_PRODUCT_ID,
TRANSACTIONS_MD_TRAVEL_START_DATE => CM_TRANSACTIONS_MD_TRAVEL_START_DATE,
TRANSACTIONS_MD_CUSTOMER_ID => CM_TRANSACTIONS_MD_CUSTOMER_ID,
TRANSACTIONS_MD_OFFER_NUMBER => CM_TRANSACTIONS_MD_OFFER_NUM,
TRANSACTIONS_MD_ENV => CM_TRANSACTIONS_MD_ENV,
TRANSACTIONS_DEST_CITY_NAME => CM_TRANSACTIONS_DEST_CITY_NAME,
TRANSACTIONS_DEST_STATE_CODE => CM_TRANSACTIONS_DEST_STATE_CODE,
TRANSACTIONS_DEST_COUNTRY_CODE => CM_TRANSACTIONS_DEST_COUNTRY_CODE,
TRANSACTIONS_DONOR_ID => CM_TRANSACTIONS_DONOR_ID,
TRANSACTIONS_CID => CM_TRANSACTIONS_CID,
TRANSACTIONS_CAUSE_TITLE => CM_TRANSACTIONS_CAUSE_TITLE
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather to control when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = null;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating, for dynamic event requests only, whether the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y; defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]],
DB_STATUS => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
OPERAND_AND => null
]
];
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 02-11-20 mks DB-147: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @return null
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 02-11-20 mks DB-147: original coding
*
*/
private function __clone()
{
return (null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 02-11-20 mks DB-147: original coding
*
*/
public function __destruct()
{
}
}

View File

@@ -0,0 +1,660 @@
<?php
/** @noinspection PhpUnused */
/**
* Class gatUsers -- mongo class
*
* This class is used to store the user PII data necessary for a GA user account. This collection also stores internal
* accounts. (Administrative, CSR, etc.)
*
* In the legacy (Parse) collection, there were 78 columns, not including the proprietary ACL column. In the first
* pass of this migration, we've grouped the original entries into five sub-collections.
*
*
* HISTORY:
* ========
* 02-04-20 mks DB-147: original coding
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatUsers
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version; not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_TERCERO; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_USERS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_USERS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_USERS_EXT; // sets the extension for the collection
public bool $closedClass = false; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = false; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NONDESTRUCTIVE; // set to AUDIT_value constant (nondestructive = reads(yes))
public bool $setJournaling = true; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = true; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = false; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER,
USER_ACCOUNT_SSO => DATA_TYPE_STRING,
USER_AUTH_DATA => DATA_TYPE_OBJECT,
USER_EMAIL_VERIFIED => DATA_TYPE_BOOL,
USER_FBID => DATA_TYPE_STRING,
USER_VERIFIED_HUMAN => DATA_TYPE_BOOL,
USER_MEMBERSHIP_PLAN => DATA_TYPE_STRING,
USER_NOTES => DATA_TYPE_STRING,
USER_PARTNER_API_KEY => DATA_TYPE_STRING,
USER_PASSWORD => DATA_TYPE_STRING,
USER_PASSWORD_UPDATED => DATA_TYPE_INTEGER,
USER_PASSWORD_LAST_THREE => DATA_TYPE_ARRAY,
USER_PROMO_SIGN_UP_ID => DATA_TYPE_INTEGER,
USER_TEMP_PASSWORD => DATA_TYPE_STRING,
USER_TZ => DATA_TYPE_DOUBLE,
USER_LEAP_CONVERTED => DATA_TYPE_BOOL,
USER_USERNAME => DATA_TYPE_STRING,
USER_WEBHOOK_RETRIES => DATA_TYPE_INTEGER,
USER_TYPE => DATA_TYPE_STRING,
USER_FINANCIALS => DATA_TYPE_ARRAY,
USER_FINANCIALS_TOTAL_DONATIONS => DATA_TYPE_DOUBLE,
USER_FINANCIALS_TOTAL_EARNINGS => DATA_TYPE_DOUBLE,
USER_FINANCIALS_CASHBACK_BANK => DATA_TYPE_INTEGER,
USER_FINANCIALS_CASHBACK_DONATION => DATA_TYPE_INTEGER,
USER_FINANCIALS_CURRENT_BALANCE => DATA_TYPE_DOUBLE,
USER_FINANCIALS_EARNING_TIER => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS => DATA_TYPE_ARRAY,
USER_FINANCIALS_PAYMENTS_ADDRESS1 => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS_ADDRESS2 => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS_CITY => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS_COUNTRY => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS_FULL_NAME => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS_PLAN => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS_STATE => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS_STATUS => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS_ZIP => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS_VERIFIED => DATA_TYPE_BOOL,
USER_FINANCIALS_PAYMENTS_META => DATA_TYPE_STRING,
USER_FINANCIALS_PAYMENTS_TYPE => DATA_TYPE_STRING,
USER_FINANCIALS_PENDING_BALANCE => DATA_TYPE_DOUBLE,
USER_FINANCIALS_CUSTOMER_ID => DATA_TYPE_STRING,
USER_FINANCIALS_STRIPE_RECIPIENT_VERIFIED => DATA_TYPE_BOOL,
USER_FINANCIALS_TIN_FINGERPRINT => DATA_TYPE_STRING,
USER_FINANCIALS_TIN_TYPE => DATA_TYPE_STRING,
USER_SPORTS => DATA_TYPE_ARRAY,
USER_SPORTS_FAV_ATHLETES => DATA_TYPE_OBJECT,
USER_SPORTS_FAV_TEAMS => DATA_TYPE_OBJECT,
USER_SPORTS_FAV_SPORTS => DATA_TYPE_OBJECT,
USER_CHARITIES => DATA_TYPE_ARRAY,
USER_CHARITIES_SELECTED_CAMPAIGN => DATA_TYPE_INTEGER,
USER_CHARITIES_SELECTED_CAMPAIGN_META => DATA_TYPE_INTEGER,
USER_CHARITIES_SELECTED_CAMPAIGN_TITLE => DATA_TYPE_STRING,
USER_REFERRALS => DATA_TYPE_ARRAY,
USER_REFERRALS_EARNINGS => DATA_TYPE_INTEGER,
USER_REFERRALS_CLICKS => DATA_TYPE_INTEGER,
USER_REFERRALS_EARNINGS_PENDING => DATA_TYPE_INTEGER,
USER_REFERRALS_SIGNUPS => DATA_TYPE_INTEGER,
USER_REFERRALS_RID => DATA_TYPE_STRING,
USER_PII => DATA_TYPE_ARRAY,
USER_PII_ADDRESS => DATA_TYPE_STRING,
USER_PII_AGE_RANGE => DATA_TYPE_STRING,
USER_PII_BIRTHDAY => DATA_TYPE_DATETIME,
USER_PII_COUNTRY_CODE => DATA_TYPE_STRING,
USER_PII_EMAIL => DATA_TYPE_STRING,
USER_PII_SECONDARY_EMAIL => DATA_TYPE_STRING,
USER_PII_FNAME => DATA_TYPE_STRING,
USER_PII_GENDER => DATA_TYPE_STRING,
USER_PII_HOMETOWN => DATA_TYPE_STRING,
USER_PII_LANGUAGES => DATA_TYPE_OBJECT,
USER_PII_LNAME => DATA_TYPE_STRING,
USER_PII_LEGAL_NAME => DATA_TYPE_STRING,
USER_PII_LOCALE => DATA_TYPE_STRING,
USER_PII_LOCATION => DATA_TYPE_STRING,
DB_TOKEN => DATA_TYPE_STRING, // unique key exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, updating or removing these fields cannot be accomplished.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, DB_STATUS
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_CREATED, DB_EVENT_GUID, DB_ACCESSED, MONGO_ID, DB_STATUS
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_TOKEN, DB_CREATED, DB_STATUS, DB_EVENT_GUID, USER_PII_EMAIL, USER_PII_SECONDARY_EMAIL
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = [ 'emailsIndex', 'activePartnersIndex' ];
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_TOKEN => 1,
DB_CREATED => -1,
DB_STATUS => 1,
DB_EVENT_GUID => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = [
'emailsIndex' => [ USER_PII_EMAIL => 1, USER_PII_SECONDARY_EMAIL => 1 ],
'activePartnersIndex' => [ USER_PARTNER_API_KEY => 1, DB_STATUS => 1 ]
];
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as a (sic) index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as a
// singleField, compound, or unique index. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// (sparse is not supported; partial indexes replace it)
//
// If a property is not in use, then you must still declare the property as a class member, but the
// value of the property will be set to null.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Partial indexes only add a row to the index if the referenced column satisfies the condition specified
// in the query expression (expr2).
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword partialFilterExpression followed by a query document
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
//
// db.myTable.createIndex({ lastName : -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
// The above index indexes names (sorted DESC by last name) only for people aged 62 or older.
//
//
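// Illustrative shell query (not part of this template): to use a partial index, a query's filter must
// match a subset of the documents covered by the partialFilterExpression -- e.g. age >= 65 implies
// age >= 62, so a query like the following could be satisfied from such an index:
//      db.myTable.find({ age : { $gte : 65 } }).sort({ lastName : -1, firstName : 1 })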
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
DB_TOKEN => 1, // the token field should always appear
DB_EVENT_GUID => 1,
USER_PII_EMAIL => 1
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses the lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- the index is ascending; a record is deleted 1 day after its date value
//
public ?array $ttlIndexes = null;
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
DB_TOKEN => CM_TOKEN,
DB_CREATED => CM_DATE_CREATED,
DB_ACCESSED => CM_DATE_ACCESSED,
DB_STATUS => CM_STATUS,
DB_EVENT_GUID => CM_EVENT_GUID,
USER_ACCOUNT_SSO => CM_USER_ACCOUNT,
USER_AUTH_DATA => CM_USER_API_DATA,
USER_EMAIL_VERIFIED => CM_USER_EMAIL_VALIDATED,
USER_FBID => CM_USER_FB_KEY,
USER_VERIFIED_HUMAN => CM_USER_HUMAN_VALIDATED,
USER_MEMBERSHIP_PLAN => CM_USER_MEMBER_PLAN,
USER_NOTES => CM_USER_NOTES,
USER_PASSWORD => CM_USER_PASSWORD,
USER_PROMO_SIGN_UP_ID => CM_USER_PROMO_ID,
USER_TEMP_PASSWORD => CM_USER_TEMPORARY_PWD,
USER_TZ => CM_USER_TIMEZONE,
USER_LEAP_CONVERTED => CM_USER_LEAP_CONVERTED,
USER_USERNAME => CM_USER_NAME,
USER_WEBHOOK_RETRIES => CM_USER_WEBHOOK_ATTEMPTS,
USER_TYPE => CM_USER_TYPE,
USER_FINANCIALS => CM_USER_FINANCIALS,
USER_FINANCIALS_TOTAL_DONATIONS => CM_USER_FINANCIALS_DONATIONS,
USER_FINANCIALS_TOTAL_EARNINGS => CM_USER_FINANCIALS_EARNINGS,
USER_FINANCIALS_CASHBACK_BANK => CM_USER_FINANCIALS_CB_BANK,
USER_FINANCIALS_CASHBACK_DONATION => CM_USER_FINANCIALS_CB_DONATIONS,
USER_FINANCIALS_CURRENT_BALANCE => CM_USER_FINANCIALS_CURR_BAL,
USER_FINANCIALS_EARNING_TIER => CM_USER_FINANCIALS_EARNING_TIER,
USER_FINANCIALS_PAYMENTS => CM_USER_FINANCIALS_PAYMENTS,
USER_FINANCIALS_PAYMENTS_ADDRESS1 => CM_USER_FINANCIALS_PAYMENTS_ADDR1,
USER_FINANCIALS_PAYMENTS_ADDRESS2 => CM_USER_FINANCIALS_PAYMENTS_ADDR2,
USER_FINANCIALS_PAYMENTS_CITY => CM_USER_FINANCIALS_PAYMENTS_CITY,
USER_FINANCIALS_PAYMENTS_COUNTRY => CM_USER_FINANCIALS_PAYMENTS_COUNTRY,
USER_FINANCIALS_PAYMENTS_FULL_NAME => CM_USER_FINANCIALS_PAYMENTS_FNAME,
USER_FINANCIALS_PAYMENTS_PLAN => CM_USER_FINANCIALS_PAYMENTS_PLAN,
USER_FINANCIALS_PAYMENTS_STATE => CM_USER_FINANCIALS_PAYMENTS_STATE,
USER_FINANCIALS_PAYMENTS_STATUS => CM_USER_FINANCIALS_PAYMENTS_STATUS,
USER_FINANCIALS_PAYMENTS_ZIP => CM_USER_FINANCIALS_PAYMENTS_ZIP,
USER_FINANCIALS_PAYMENTS_VERIFIED => CM_USER_FINANCIALS_PAYMENTS_VALIDATED,
USER_FINANCIALS_PAYMENTS_META => CM_USER_FINANCIALS_PAYMENTS_METADATA,
USER_FINANCIALS_PAYMENTS_TYPE => CM_USER_FINANCIALS_PAYMENTS_TYPE,
USER_FINANCIALS_PENDING_BALANCE => CM_USER_FINANCIALS_PENDING_BALANCE,
USER_FINANCIALS_CUSTOMER_ID => CM_USER_FINANCIALS_CUSTOMER_ID,
USER_FINANCIALS_STRIPE_RECIPIENT_VERIFIED => CM_USER_FINANCIALS_STRIPE_VERIFIED,
USER_FINANCIALS_TIN_FINGERPRINT => CM_USER_FINANCIALS_TIN_IDENTIFIER,
USER_FINANCIALS_TIN_TYPE => CM_USER_FINANCIALS_TIN_TYPE,
USER_SPORTS => CM_USER_SPORTS,
USER_SPORTS_FAV_ATHLETES => CM_USER_SPORTS_FAVE_ATHLETES,
USER_SPORTS_FAV_TEAMS => CM_USER_SPORTS_FAVE_TEAMS,
USER_SPORTS_FAV_SPORTS => CM_USER_SPORTS_FAVE_SPORTS,
USER_CHARITIES => CM_USER_CHARITIES,
USER_CHARITIES_SELECTED_CAMPAIGN => CM_USER_CHARITIES_SEL_CAMPAIGN,
USER_CHARITIES_SELECTED_CAMPAIGN_META => CM_USER_CHARITIES_SEL_CAMPAIGN_META,
USER_CHARITIES_SELECTED_CAMPAIGN_TITLE => CM_USER_CHARITIES_SEL_CAMPAIGN_TITLE,
USER_REFERRALS => CM_USER_REFERRALS,
USER_REFERRALS_EARNINGS => CM_USER_REFERRALS_EARNINGS,
USER_REFERRALS_CLICKS => CM_USER_REFERRALS_CLICKS,
USER_REFERRALS_EARNINGS_PENDING => CM_USER_REFERRALS_PENDING_EARNINGS,
USER_REFERRALS_SIGNUPS => CM_USER_REFERRALS_SIGNUPS,
USER_REFERRALS_RID => CM_USER_REFERRALS_ID,
USER_PII => CM_USER_PII,
USER_PII_ADDRESS => CM_USER_PII_ADDR,
USER_PII_AGE_RANGE => CM_USER_PII_AGE_RANGE,
USER_PII_BIRTHDAY => CM_USER_PII_DOB,
USER_PII_COUNTRY_CODE => CM_USER_PII_COUNTRY_CODE,
USER_PII_EMAIL => CM_USER_PII_EMAIL,
USER_PII_SECONDARY_EMAIL => CM_USER_PII_ALT_EMAIL,
USER_PII_FNAME => CM_USER_PII_FNAME,
USER_PII_GENDER => CM_USER_PII_GENDER,
USER_PII_HOMETOWN => CM_USER_PII_HOMETOWN,
USER_PII_LANGUAGES => CM_USER_PII_LANGUAGES,
USER_PII_LNAME => CM_USER_PII_LNAME,
USER_PII_LEGAL_NAME => CM_USER_PII_LEGAL_NAME,
USER_PII_LOCALE => CM_USER_PII_LOCALE,
USER_PII_LOCATION => CM_USER_PII_LOCATION
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as the associative array: $exposedFields. Only those fields,
* enumerated within this container, will be exposed to the client.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
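// Hypothetical sketch only (this class relies on its cacheMap, so $exposedFields stays null below): a class
// without a cacheMap that wanted to expose just three columns might declare:
//      public ?array $exposedFields = [
//          DB_TOKEN       => true,   // key = native column name; value is ignored
//          DB_STATUS      => true,
//          USER_PII_EMAIL => true
//      ];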
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather controls when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = [
USER_SPORTS => [
USER_SPORTS_FAV_ATHLETES,
USER_SPORTS_FAV_TEAMS,
USER_SPORTS_FAV_SPORTS
],
USER_CHARITIES => [
USER_CHARITIES_SELECTED_CAMPAIGN,
USER_CHARITIES_SELECTED_CAMPAIGN_META,
USER_CHARITIES_SELECTED_CAMPAIGN_TITLE
],
USER_REFERRALS => [
USER_REFERRALS_EARNINGS,
USER_REFERRALS_CLICKS,
USER_REFERRALS_EARNINGS_PENDING,
USER_REFERRALS_SIGNUPS,
USER_REFERRALS_RID
],
USER_PII => [
USER_PII_ADDRESS,
USER_PII_AGE_RANGE,
USER_PII_BIRTHDAY,
USER_PII_COUNTRY_CODE,
USER_PII_EMAIL,
USER_PII_FNAME,
USER_PII_GENDER,
USER_PII_HOMETOWN,
USER_PII_LANGUAGES,
USER_PII_LNAME,
USER_PII_LEGAL_NAME,
USER_PII_LOCALE,
USER_PII_LOCATION
],
USER_FINANCIALS => [
USER_FINANCIALS_TOTAL_DONATIONS,
USER_FINANCIALS_TOTAL_EARNINGS,
USER_FINANCIALS_CASHBACK_BANK,
USER_FINANCIALS_CASHBACK_DONATION,
USER_FINANCIALS_CURRENT_BALANCE,
USER_FINANCIALS_EARNING_TIER,
USER_FINANCIALS_PENDING_BALANCE,
USER_FINANCIALS_CUSTOMER_ID,
USER_FINANCIALS_STRIPE_RECIPIENT_VERIFIED,
USER_FINANCIALS_TIN_FINGERPRINT,
USER_FINANCIALS_TIN_TYPE,
USER_FINANCIALS_PAYMENTS // <--- note that this is a sub-heading within a sub-heading |
], // |
USER_FINANCIALS_PAYMENTS => [ // <---------------------------------------------------------|
USER_FINANCIALS_PAYMENTS_ADDRESS1,
USER_FINANCIALS_PAYMENTS_ADDRESS2,
USER_FINANCIALS_PAYMENTS_CITY,
USER_FINANCIALS_PAYMENTS_COUNTRY,
USER_FINANCIALS_PAYMENTS_FULL_NAME,
USER_FINANCIALS_PAYMENTS_PLAN,
USER_FINANCIALS_PAYMENTS_STATE,
USER_FINANCIALS_PAYMENTS_STATUS,
USER_FINANCIALS_PAYMENTS_ZIP,
USER_FINANCIALS_PAYMENTS_VERIFIED,
USER_FINANCIALS_PAYMENTS_META,
USER_FINANCIALS_PAYMENTS_TYPE
]
];
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating, for dynamic event requests only, whether the Qualifier can be overridden. If
// set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or Y; defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]],
DB_STATUS => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
OPERAND_AND => null
]
];
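// Read the default qualifier above as: (DB_CREATED < <client-supplied value>) AND (DB_STATUS == STATUS_ACTIVE).
// (An assumed reading inferred from the inline comments: the OPERAND_AND entry joins the two conditions, and
// the null under DB_CREATED is the placeholder replaced with the client-supplied value.)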
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
 * The constructor initializes the auth token and registers the destructor as a shutdown function. It takes
 * no input parameters and does not generate a GUID at instantiation.
*
*
* HISTORY:
* ========
* 02-04-20 mks DB-147: original coding
*
* @author mike@givingassistant.org
* @version 1.0
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @return null
*
* HISTORY:
* ========
* 02-04-20 mks DB-147: original coding
*
* @version 1.0
*
* @author mike@givingassistant.org
*/
private function __clone()
{
return (null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 02-04-20 mks DB-147: original coding
*
*/
public function __destruct()
{
// move on lookie-loo....
}
}


@@ -0,0 +1,469 @@
<?php
/**
* Class gatWBList -- template class for Giving Assistant's White/Black List collection.
*
 * This is a tercero-class collection that records white- and black-listed email addresses for the email
 * validation process.
*
* Rules are simple and as you would guess:
* If an email is white listed then the email is always sent
* elseif the email is black listed then the email is never sent and generates a system event
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-18-20 mks DB-168: original coding
*
*/
class gatWBList
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version; not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_TERCERO; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_WBL; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_WBLIST; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_WBLIST_EXT; // sets the extension for the collection
public bool $closedClass = false; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
    public int $setAuditing = AUDIT_DESTRUCTIVE;            // set to an AUDIT_* constant (nondestructive also audits reads)
public bool $setJournaling = true; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
    public bool $isGA = true;                               // set to true if this class is a Namaste internal class
    public ?string $authToken = null;                       // if this data class is registered to a partner, you will
                                                            // need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
DB_TOKEN => DATA_TYPE_STRING, // unique key exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER, // epoch time
// fields specific to systemEvents collection
MONGO_WBL_TYPE => DATA_TYPE_BOOL, // 1 = white, 0 = black
USER_PII_EMAIL => DATA_TYPE_STRING, // email expression to be matched
MONGO_WBL_ADDED_BY => DATA_TYPE_STRING, // who added record to list
META_SYSTEM_NOTES => DATA_TYPE_STRING // notes about the add event
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
    //  will be silently dropped (best case). Either way, these fields cannot be updated or removed by a client.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, DB_STATUS
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_CREATED, DB_EVENT_GUID, DB_ACCESSED, MONGO_ID, DB_STATUS,
MONGO_WBL_TYPE, USER_PII_EMAIL, MONGO_WBL_ADDED_BY, META_SYSTEM_NOTES
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_TOKEN, DB_CREATED, DB_STATUS, DB_EVENT_GUID,
MONGO_WBL_TYPE, USER_PII_EMAIL, MONGO_WBL_ADDED_BY, 'wblType',
'wblEmail', 'wblCSR', 'wblEmailCSR'
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIRECTION> ] where <SORT_DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_TOKEN => 1,
DB_CREATED => -1,
DB_STATUS => 1,
DB_EVENT_GUID => 1,
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = [
'wblType' => [ MONGO_WBL_TYPE => 1, DB_STATUS => 1],
'wblEmail' => [ USER_PII_EMAIL => 1, DB_STATUS => 1],
'wblCSR' => [ MONGO_WBL_ADDED_BY => 1, DB_STATUS => 1],
'wblEmailCSR' => [ USER_PII_EMAIL => 1, MONGO_WBL_ADDED_BY => 1, DB_STATUS => 1]
];
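    // -----------------------------------------------------------------------------------------------------------------
    // Illustrative sketch (assumed behavior, not framework API): at load time a compound declaration
    // such as 'wblType' above corresponds to a mongo createIndex() call of the form:
    //
    //     db.wbList.createIndex(
    //         { wblType : 1, status : 1 },            // fields in declaration order
    //         { name : "wblType" }                    // array key used as the index label
    //     );
    //
    // The collection and field names shown are assumed resolved values of COLLECTION_MONGO_WBLIST,
    // MONGO_WBL_TYPE, and DB_STATUS, shown for illustration only.
    // -----------------------------------------------------------------------------------------------------------------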
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
    // mongo, as of 3.4, automatically creates a multi-key index on any field declared as an index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
    // In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ arrayColumnName.subField1 => 1, arrayColumnName.subField3 => -1 ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
    //      sparse is not supported because partial indexes supersede it
//
    // If a property is not in use, you must still declare the property on the class, but its
    // value will be set to null.
    //
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
    // Partial indexes only add a row to the index if the referenced column satisfies the conditions
// in the query condition (expr2).
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression : { [ query ] }
    //          e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
//
    //  db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
// The above index would return a list of names (sorted DESC by last name) for people aged 62 or older.
//
//
public ?array $partialIndexes = null;
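    // Hypothetical declaration sketch (this class keeps $partialIndexes = null): if this collection
    // were to index only active rows by creation date, the declaration might look like:
    //
    //     public ?array $partialIndexes = [
    //         'activeByCreated' => [
    //             DB_CREATED => 1,                                            // expr1: column + direction
    //             'partialFilterExpression' => [ DB_STATUS => STATUS_ACTIVE ] // expr2: filter
    //         ]
    //     ];
    //
    // The index name and array shape here are illustrative assumptions, not a documented contract.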
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
        DB_TOKEN => 1,          // DB_TOKEN should always appear
DB_EVENT_GUID => 1,
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
    // If the field is an array containing multiple date values, MongoDB uses the lowest
    // (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
    // field in a document is not a date, or not an array that holds date values, the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null;
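    // Hypothetical declaration sketch (this class keeps $ttlIndexes = null): to expire records one
    // hour after a Date-typed 'lastTouched' field, the declaration might look like:
    //
    //     public ?array $ttlIndexes = [ 'lastTouched' => 3600 ];
    //
    // 'lastTouched' is an illustrative field name and is not defined in this template.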
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
DB_TOKEN => CM_TOKEN,
DB_CREATED => CM_DATE_CREATED,
DB_ACCESSED => CM_DATE_ACCESSED,
DB_STATUS => CM_STATUS,
DB_EVENT_GUID => CM_EVENT_GUID,
// collection-specific fields
MONGO_WBL_TYPE => CM_WBL_TYPE,
USER_PII_EMAIL => CM_WBL_EMAIL,
MONGO_WBL_ADDED_BY => CM_WBL_ADDED_BY,
META_SYSTEM_NOTES => CM_WBL_NOTES
];
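    // Illustrative sketch of what the cache map does (assumed behavior): a stored document keyed by
    // the native column names above is re-labelled with the CM_* values before being exposed, e.g.:
    //
    //     stored:  [ DB_TOKEN => 'abc123', MONGO_WBL_TYPE => 1 ]
    //     exposed: [ CM_TOKEN => 'abc123', CM_WBL_TYPE    => 1 ]
    //
    // The token value is a made-up example; only the key substitution is the point.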
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as the associative array: $exposedFields. Only those fields,
* enumerated within this container, will be exposed to the client.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index, but rather to control when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = null;
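    // Hypothetical declaration sketch using the questions/answers example above (this class keeps
    // $subC = null); all field names are illustrative assumptions:
    //
    //     public ?array $subC = [
    //         'answers' => [
    //             'answerText',
    //             'answerCreated',
    //             'answerAuthor'
    //         ]
    //     ];
    //
    // Each sub-field would also be listed in $fields (for typing) and in the cacheMap or
    // exposed-field list if it is to be visible to the client.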
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
    //      D = Daily, M = 1st of every month, Q = 1st of every quarter, A = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
    //      Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden.
    //      If set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or A, defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
        WH_DELETE => 'H',                   // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]],
DB_STATUS => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
OPERAND_AND => null
]
];
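    // Illustrative sketch of qualifier resolution (assumed behavior): if a client's wh request
    // payload supplied an epoch cutoff of 1577836800, the null placeholder in the DB_CREATED
    // clause above would be replaced so the effective filter becomes, conceptually:
    //
    //     DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [1577836800]]],
    //     DB_STATUS  => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
    //     OPERAND_AND => null
    //
    // i.e. "created < cutoff AND status = active". The cutoff value is an example only.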
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 08-18-20 mks DB-168: original coding
*
*/
private function __clone()
{
return (null);
}
/**
* __construct() -- public method
*
     * The constructor in this template registers the shutdown method (so the destructor can be
     * used for recovery work) and initializes the authToken class member to NULL_TOKEN.
     *
     * NOTE: this constructor takes no parameters. (The $_getGUID / $_lc GUID-generation options
     * that appear in other Namaste template docblocks are not implemented by this class.)
     *
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-18-20 mks DB-168: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 08-18-20 mks DB-168: original coding
*
*/
public function __destruct()
{
// move on lookie-loo....
}
}


@@ -0,0 +1,527 @@
<?php
/**
* gatWHC1ProdRegistrations.class -- Namaste mySQL Data Template for WH Level: COOL
*
* This is the warehouse data template file for the Namaste mySQL version of product-registration.
* This template was created for the purpose of testing mysql->mysql data warehousing.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 05-08-18 mks _INF-188: original coding
* 01-15-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatWHC1ProdRegistrations
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
    // CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
    public string $service = CONFIG_DATABASE_SERVICE_SEGUNDO;       // defines the mySQL server destination
public string $schema = TEMPLATE_DB_PDO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_WHC1_PROD_REG; // defines the clear-text template class name
public string $collection = WH_COOL_PDO_PROD_REGS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_PDO_PROD_REGS_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = false; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = false; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if class contains methods or migration
public int $cacheTimer = 0; // number of seconds a tuple will remain in-cache
    public bool $isGA = true;                                       // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
//
// Note that for PDO-type tables, the data type is more ... homogeneous... e.g.: data types define the data
// type only. It does not define the actual column type in-use. For example, there is no distinction made
// between a tinyInt, Int, or BigInt. As far as the framework is concerned, they're all just integers.
//
public array $fields = [
PDO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
PRG_TYPE => DATA_TYPE_STRING,
PRG_IID => DATA_TYPE_STRING,
PRG_EAV => DATA_TYPE_STRING,
PRG_PLATFORM => DATA_TYPE_STRING,
PRG_BROWSER => DATA_TYPE_STRING,
PRG_MAJOR_VERSION => DATA_TYPE_INTEGER,
PRG_MINOR_VERSION => DATA_TYPE_INTEGER,
PRG_IS_MOBILE => DATA_TYPE_INTEGER,
PRG_IS_TABLET => DATA_TYPE_INTEGER,
PRG_FIRST_SEEN => DATA_TYPE_STRING,
PRG_LAST_SEEN => DATA_TYPE_STRING,
DB_TOKEN => DATA_TYPE_STRING, // unique key (string) exposed externally and is REQUIRED,
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_STRING, // dateTime type
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_STRING, // dateTime type
//_________________________________________________________________________
// UP TO HERE IS THE ORIGINAL DATA -- BELOW IS THE WH_CLASS-SPECIFIC DATA
//-------------------------------------------------------------------------
DB_WH_CREATED => DATA_TYPE_STRING,
DB_WH_EVENT_GUID => DATA_TYPE_STRING,
DB_WH_TOKEN => DATA_TYPE_STRING
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
    //  will be silently dropped (best case). Either way, these fields cannot be updated or removed by a client.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, PDO_ID, DB_WH_EVENT_GUID, DB_WH_CREATED, DB_WH_TOKEN
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public array $indexFields = [
DB_CREATED => 1,
DB_WH_TOKEN => 1,
DB_WH_CREATED => 1,
DB_WH_EVENT_GUID => 1
];
// the primary key index is declared in the class properties section as $setPKey
    // unique indexes are to be used when the values stored in these columns have to be unique to the table. Note that
// null values are permissible in unique-index columns. Do not declare PDO_ID here, regardless of how badly
// you may want to.
public ?array $uniqueIndexes = [ DB_WH_TOKEN, DB_WH_EVENT_GUID ];
// single field index declarations -- since you can have a field in more than one index (index, multi)
// the format for the single-field index declaration is a simple indexed array.
public ?array $singleFields = [
DB_CREATED
];
// multi-column (or compound) indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME1, FIELD_NAME2, ..., FIELD_NAMEn ]]
// where INDEX-NAME is a unique string
// unless it's for mongoDB -- mongoDB does not use index labels
//
// PDO compound-indexes are left-most indexes - if it cannot use the entire index, the db must be able to use
// one, or more, of the left-most fields in the index.
public ?array $compoundIndexes = [
'whC1PRG-I1' => [ DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN ]
];
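    // Left-most prefix illustration (column names are assumed, for illustration only): the
    // 'whC1PRG-I1' index above can satisfy queries filtering on its left-most columns, e.g.:
    //
    //     -- can use the index (left-most prefix):
    //     SELECT ... WHERE whCreated = ? AND whEventGuid = ?;
    //
    //     -- cannot use the index (skips the left-most column):
    //     SELECT ... WHERE whToken = ?;
    //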
// NOTE: foreign-key indexes are not explicitly enumerated in a template -- that relationship is defined in the
// schema for the table. Foreign-key indexes appear implicitly in the indexing declarations above.
    // cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = null;
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = [
PRG_TYPE => 1,
PRG_IID => 1,
PRG_EAV => 1,
PRG_PLATFORM => 1,
PRG_BROWSER => 1,
PRG_MAJOR_VERSION => 1,
PRG_MINOR_VERSION => 1,
PRG_IS_MOBILE => 1,
PRG_IS_TABLET => 1,
PRG_FIRST_SEEN => 1,
PRG_LAST_SEEN => 1,
DB_CREATED => 1, // epoch time
DB_STATUS => 1, // record status
DB_ACCESSED => 1, // epoch time
DB_WH_EVENT_GUID => 1,
DB_WH_CREATED => 1,
DB_WH_TOKEN => 1
];
// in PDO-land, binary fields are your basic data blobs. All binary fields require special handling and so
// need to be enumerated here as an indexed array.
public ?array $binFields = null;
// DB SQL:
// -------
// PDO SQL is stored in the template and is keyed by the current namaste version (defined in the XML file) during
// execution of the deployment script. Each version denotes a container of SQL commands that will be executed
// for the targeted version.
//
// SQL is versioned in parallel with the Namaste (XML->application->id->version) version. Each PDO_SQL
// sub-container has several fields - one of which has the version identifier. When the deployment script
// executes, the release versions are compared and, if they're an exact match, the SQL is submitted for execution.
//
// The PDO_SQL container consists of these sub-containers:
//
// PDO_SQL_VERSION --> this is a float value in the form of x.y as namaste only supports versions as a major
// and minor release number. (Patch releases are minor release increments.)
// PDO_TABLE --> string value containing the full table name.
// PDO_SQL_FC --> the FC means "first commit" -- when the table is first created, it will execute the
// SQL in this block, if it exists, and if the version number for the sub-container
// exactly matched the version number in the configuration XML.
// PDO_SQL_UPDATE --> When the sub-container PDO_SQL_VERSION value exactly matches the XML release value,
// then the ALTER-TABLE sql in this update block will be executed.
// STRING_DROP_CODE_IDX --> The boilerplate code for dropping the indexes of the table.
// STRING_DROP_CODE_DEV --> For version 1.0 only, this points to code to drop the entire table.
//
// Again, containers themselves are indexed arrays under the PDO_SQL tag. Within the container, data is stored
// as an associative array with the keys enumerated above.
//
//
// DB OBJECTS:
// -----------
// DB objects are: views, procedures, functions and events.
// All such objects assigned to a class are declared in this array under the appropriate header.
    // This is a safety feature that prevents one class (table) from invoking another class's objects.
// The name of the object is stored as an indexed-array under the appropriate header.
//
    // The format for these structures is basically the same. Each DBO is stored in an associative array with the
    // key defining the name of the object. Within each object, there are embedded associative arrays that have the
    // name of the object as the key and the object definition (text) as the value:
//
// objectType => [ objectName => [ objectContent ], ... ]
//
    // Each created object should also include the directive to remove its predecessor using a DROP statement.
//
// todo -- unset these objects post-instantiation so that schema is not revealed
//
// VIEWS:
// ------
// Every namaste table will have at least one view which limits the data fetched from the table. At a minimum,
// the id_{ext} field is filtered from the resulting data set via the view. Other fields can be withheld as well
// but that is something that is individually set-up for each table.
//
    // The basic view has the following syntax for declaring its name:
// view_basic_{tableName_ext}
// All views start with the word "view" so as to self-identify the object, followed by the view type which,
// optimally, you should try to limit to a single, descriptive word.
//
    // This label points to a sub-array containing three elements:
    //      STRING_VIEW ----------> the SQL code that defines the view as a single string value
    //      STRING_TYPE_LIST -----> null or an array of types that corresponds to variable markers ('?') in the sql
    //      STRING_DESCRIPTION ---> a string that describes the purpose of the view.
//
    // At a minimum, every class definition should contain at least a basic view, as all queries that don't specify
    // a named view or other DBO will default to the basic view in the FROM clause of the generated SQL.
//
// PROCEDURES:
// -----------
// For stored procedures, which are entirely optional, the array definition contains the following elements:
    //      STRING_PROCEDURE -------> the SQL code that defines the stored procedure as a single string value
// STRING_DROP_CODE -------> the sql code that drops the procedure (required for procedures!)
// STRING_TYPE_LIST -------> an associative array of associative arrays -- in the top level, the key is the name
// of the parameter that points to a sub-array that contains the parameter direction
// as the key, and the parameter type as the value. There should be an entry for each
// parameter to be passed to the stored procedure/function.
//
// ------------------------------------------------------
// | NOTE: IN params must precede INOUT and OUT params! |
// ------------------------------------------------------
//
// STRING_SP_EVENT_TYPE ---> Assign one of the DB_EVENT constants to this field to indicate the type of
// query the stored-procedure will execute.
// NOTE: there is not a defined PDO::PARAM constant for type float: use string.
// STRING_DESCRIPTION -----> clear-text definition of the procedure's purpose
//
// Note that all of these containers are required; empty containers should contain a null placeholder.
//
// When a stored procedure contains a join of two or more tables/views, the first table listed is considered
// to be the "owning" table and the procedure will be declared in the class template for that table, but it will
// not be duplicated in other template classes referenced in the join.
//
public ?array $dbObjects = [
PDO_SQL => [
[
PDO_VERSION => 1.0,
PDO_TABLE => 'gaCoolProductRegistrations_prg',
PDO_SQL_FC => "
--
-- Table structure for table `gaCoolProductRegistrations_prg`
--
CREATE TABLE `gaCoolProductRegistrations_prg` (
`id_prg` int(10) UNSIGNED NOT NULL,
`type_prg` char(16) NOT NULL,
`iid_prg` char(64) DEFAULT NULL,
`eav_prg` char(16) DEFAULT NULL,
`platform_prg` char(32) DEFAULT NULL,
`browser_prg` char(32) DEFAULT NULL,
`majorVersion_prg` int(11) UNSIGNED DEFAULT NULL,
`minorVersion_prg` int(11) UNSIGNED DEFAULT NULL,
`isMobile_prg` tinyint(1) DEFAULT NULL,
`isTablet_prg` tinyint(1) UNSIGNED DEFAULT NULL,
`firstSeen_prg` datetime DEFAULT NULL,
`lastSeen_prg` datetime DEFAULT NULL,
`token_prg` char(36) DEFAULT NULL,
`eventGuid_prg` char(36) DEFAULT NULL,
`createdDate_prg` datetime DEFAULT NULL COMMENT 'replaces kinsert_date',
`lastAccessedDate_prg` datetime DEFAULT NULL COMMENT 'replaces kupdate_date',
`status_prg` varchar(25) DEFAULT NULL,
`whCreated_prg` datetime DEFAULT NULL,
`whEventGuid_prg` char(36) DEFAULT NULL,
`whToken_prg` char(36) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
",
PDO_SQL_UPDATE => "
--
-- Indexes for dumped tables
--
--
-- Indexes for table `gaCoolProductRegistrations_prg`
--
ALTER TABLE `gaCoolProductRegistrations_prg`
ADD UNIQUE KEY `whToken_prg` (`whToken_prg`);
",
/*
* example query return:
* ---------------------
* ALTER TABLE gaTest_tst DROP INDEX gaTest_tst_createdDate_tst_status_tst_index, DROP INDEX
* gaTest_tst_lastAccessedDate_tst_index, DROP INDEX testInteger_tst, DROP INDEX
* gaTest_tst_eventGuid_tst_index, DROP INDEX testDouble_tst, DROP INDEX testString_tst;
*
* NOTE:
* -----
* The sql comment code tag (--) will be removed during mysqlConfig's run time processing
*/
STRING_DROP_CODE_IDX => "--
SELECT CONCAT('ALTER TABLE ', `Table`, ' DROP INDEX ', GROUP_CONCAT(`Index` SEPARATOR ', DROP INDEX '),';' )
FROM (
SELECT table_name AS `Table`, index_name AS `Index`
FROM information_schema.statistics
WHERE INDEX_NAME != 'PRIMARY'
AND table_schema = 'XXXDROP_DB_NAMEXXX'
AND table_name = 'XXXDROP_TABLE_NAMEXXX'
GROUP BY `Table`, `Index`) AS tmp
GROUP BY `Table`;
",
STRING_DROP_CODE_DEV => "DROP TABLE IF EXISTS gaCoolProductRegistrations_prg;" // only executed if declared
]
],
PDO_VIEWS => [
'view_basic_gaWHC1ProductRegistrations' => [
STRING_VIEW =>
                "DROP VIEW IF EXISTS view_basic_gaWHC1ProdRegistrations_prg;
CREATE VIEW view_basic_gaWHC1ProdRegistrations_prg AS
SELECT type_prg, iid_prg, eav_prg, platform_prg, browser_prg, majorVersion_prg, minorVersion_prg,
                     isMobile_prg, isTablet_prg, firstSeen_prg, lastSeen_prg, eventGuid_prg, createdDate_prg,
lastAccessedDate_prg, status_prg, token_prg, whCreated_prg, whEventGuid_prg, whToken_prg
FROM gaCoolProductRegistrations_prg;",
STRING_TYPE_LIST => null,
STRING_DESCRIPTION => 'basic query'
],
],
PDO_PROCEDURES => [],
PDO_FUNCTIONS => [],
PDO_EVENTS => [],
PDO_TRIGGERS => []
];
//=================================================================================================================
// MIGRATION DECLARATIONS
// ----------------------
// Data in this section is used to handle migrations -- when we're pulling from legacy tables into the Namaste
// framework. See online doc for more info.
//
// Note -- this section is not supported for WareHouse templates! (all settings should be null or empty)
//=================================================================================================================
/**
* The migration map is an associative array that maps the Namaste fields (keys) to the corresponding
* (remote) legacy fields in the source table to be migrated to Namaste.
*
* For example, if we were migrating a mysql table in the legacy production database to Namaste::mongo, then
* the keys of the migration map would be the Namaste::mongo->fieldNames and the values would be the mysql
* column names in the legacy table.
*
* If there is a value which cannot be mapped to a key, then set it to null.
*
* Fields that will be dropped in the migration are not listed as values or as keys.
*
* This map will only exist in the template object and will never be imported into the class widget.
*
* This is a required field.
*
*/
public ?array $migrationMap = [
PDO_ID => null,
PRG_TYPE => 'type',
PRG_IID => 'iid',
PRG_EAV => 'eav',
PRG_PLATFORM => 'platform',
PRG_BROWSER => 'browser',
PRG_MAJOR_VERSION => 'major_version',
PRG_MINOR_VERSION => 'minor_version',
PRG_IS_MOBILE => 'is_mobile',
PRG_IS_TABLET => 'is_tablet',
PRG_FIRST_SEEN => 'first_seen',
PRG_LAST_SEEN => 'last_seen',
DB_TOKEN => null,
DB_EVENT_GUID => null, // generated by broker event
DB_CREATED => 'kinsert_date', // epoch time
DB_STATUS => null, // record status
DB_ACCESSED => 'kupdate_date' // epoch time
];
/*
* the migrationSortKey defines the SOURCE field by which the fetch query will be sorted. ALL sort fields are
* in ASC order so all we need to list here is the name of the field -- which MUST BE IN THE SOURCE TABLE.
*
* Populating this field may require preliminary examination of the data - what we want is a field that has
* zero NULL values.
*
* This is a required field.
*
*/
public ?array $migrationSortKey = null;
/*
* The migrationStatusKey defines the status field/column in the source table -- if the user requires that
* soft-deleted records not be migrated, then this field must be set. Otherwise, set the value to null.
*
* The format is in the form of a key-value paired array. The key specifies the name of the column and the value
* specifies the "deleted" value that, if found, will cause that row from the SOURCE data to be omitted from the
* DESTINATION table.
*
* e.g.: $migrationStatusKV = [ 'some_field' => 'deleted' ]
*
* Note that both the key and the value are case-sensitive!
*
* This is an optional field.
*
*/
public ?array $migrationStatusKV = null;
// The $migrationSourceSchema defines the remote schema for the source table
public $migrationSourceSchema = ''; // or STRING_MONGO
// The source table in the remote repos (default defined in the XML) must be declared here
public $migrationSourceTable = '';
//=================================================================================================================
// WAREHOUSE DECLARATIONS ARE DISABLED FOR WAREHOUSE CLASS OBJECTS
// ----------------------------------------------------------------------------------------------------------------
public ?array $wareHouse = null;
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 03-23-18 mks CORE-852: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 03-23-18 mks CORE-852: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 03-23-18 mks CORE-852: original coding
*
*/
public function __destruct()
{
;
}
}
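
The $migrationStatusKV contract described above (key = status column name, value = the "deleted" marker, both case-sensitive) can be sketched as a small row filter. This is an illustrative, hypothetical helper, not framework code -- `filterMigrationRows` and its inputs are invented for the example:

```php
<?php
// Hypothetical sketch of how a migrationStatusKV pair would screen soft-deleted
// rows out of a migration fetch: rows whose status column equals the configured
// "deleted" value are omitted from the DESTINATION table.
function filterMigrationRows(array $rows, ?array $statusKV): array
{
    if ($statusKV === null) {
        return $rows;                                   // no status filtering requested
    }
    $column  = array_key_first($statusKV);              // e.g. 'some_field'
    $deleted = $statusKV[$column];                      // e.g. 'deleted'
    return array_values(array_filter($rows, function (array $row) use ($column, $deleted) {
        // strict comparison: both the key and the value are case-sensitive
        return ($row[$column] ?? null) !== $deleted;
    }));
}

$rows = [
    ['id' => 1, 'some_field' => 'active'],
    ['id' => 2, 'some_field' => 'deleted'],
];
$kept = filterMigrationRows($rows, ['some_field' => 'deleted']);
// $kept retains only the row with id 1
```

Passing null for the key-value pair migrates every row unchanged, matching the optional nature of the field.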


@@ -0,0 +1,452 @@
<?php
/**
* gatWarehouse -- mongo admin template class
*
* gatWarehouse is a data-definition file for warehouse meta data - event data about a data warehousing request that,
* during and following a warehouse request event, is stored in the wareHouse collection on the Namaste admin service.
*
* Again, to be crystal clear, this data class stores (progress, completion) data about the warehouse event - it has
* nothing to do with the actual warehousing data other than recording the details about the event request.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 04-12-18 mks _INF-188: original coding (version 1)
* 11-04-19 mks DB-136: added DB_EVENT_GUID to $indexFields
* 01-14-20 mks DB-150: PHP7.4 class member type-casting
* 06-01-20 mks ECI-108: support for auth token
*
*/
class gatWarehouse
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_ADMIN; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_CLASS_WAREHOUSE; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_WAREHOUSE; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_MONGO_WAREHOUSE_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = false; // set to true to cache class data
public bool $setDeletes = true; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NOT_ENABLED; // set to AUDIT_value constant
public bool $setJournaling = false; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = false; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = false; // set to false if the class contains methods
public int $cacheTimer = 0; // number of seconds a tuple will remain in-cache
    public bool $isGA = true;                       // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you will
// need to initialize this member in the constructor (hard-coded)
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
MONGO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
MWH_SOURCE_URI => DATA_TYPE_STRING, // URI of the data source, if the source is remote
MWH_SOURCE_SCHEMA => DATA_TYPE_STRING, // name of the source schema
MWH_SOURCE_TABLE => DATA_TYPE_STRING, // name of the source table
MWH_DEST_SCHEMA => DATA_TYPE_STRING, // name of the destination schema
MWH_DEST_TABLE => DATA_TYPE_STRING, // name of the destination table
MWH_QUERY => DATA_TYPE_STRING, // query used to pull the data from source
MWH_QUERY_DATA => DATA_TYPE_STRING, // json-ized string of query parameter data
MWH_DATE_STARTED => DATA_TYPE_INTEGER, // when the migration started (epoch time)
MWH_NUM_RECS_SOURCE => DATA_TYPE_STRING, // number of records in the source table
MWH_NUM_RECS_IN_QUERY => DATA_TYPE_INTEGER, // number of records in the (migration/wh) query
MWH_NUM_RECS_MOVED => DATA_TYPE_INTEGER, // number of records migrated
MWH_NUM_RECS_DROPPED => DATA_TYPE_INTEGER, // number of records that were dropped
MWH_DELETE_TYPE => DATA_TYPE_STRING, // should be hard, soft, or none
MWH_LAST_REC_WRITTEN => DATA_TYPE_STRING, // json-encoded string of the last record written
MWH_DATE_COMPLETED => DATA_TYPE_INTEGER, // when migration completed (epoch time)
MWH_STOP_REASON => DATA_TYPE_STRING, // reason why migration failed
MWH_ERROR_CAT => DATA_TYPE_ARRAY, // array of errors
DB_TOKEN => DATA_TYPE_STRING, // unique key (GUID) exposed externally and is REQUIRED
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER // epoch time
];
    // protected fields are fields that a client is unable to modify or delete. If a client submits a query that
    // updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
    // will be silently dropped (best case). Either way, updating or removing these fields cannot be accomplished.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, MONGO_ID, MWH_SOURCE_SCHEMA, MWH_SOURCE_URI,
        MWH_SOURCE_TABLE, MWH_DEST_SCHEMA, MWH_DEST_TABLE, MWH_NUM_RECS_SOURCE, MWH_DATE_STARTED
];
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DB_TOKEN, MWH_SOURCE_TABLE, DB_STATUS,
MWH_DEST_TABLE, MWH_DEST_SCHEMA, MWH_SOURCE_SCHEMA, DB_EVENT_GUID
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
    //      [ FIELD_NAME => <SORT_DIR> ]    where <SORT_DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_CREATED => 1,
DB_EVENT_GUID => 1,
MWH_SOURCE_TABLE => 1,
MWH_DEST_TABLE => 1,
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
    // mongo, as of 3.4, automatically creates a multi-key index on any indexed field that's
    // an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
    // is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
    // In other words, if you want to apply an index to ALL of the array elements, then declare the column as a
    // singleField, compound, or unique index. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
    //      [ 'someIndex' => [ 'arrayColumnName.subField1' => 1, 'arrayColumnName.subField3' => -1, ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
    // Index properties are applied to indexes. The supported properties are:
    //      unique, partial and ttl
    //
    // Sparse property types are not supported in favor of partials.
    //
    // If a property is not in-use, then you must still declare the property as a class member, but the
    // value of the property will be set to null.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
    // Format:
    //      { expr1 }, { expr2 }
    // Where:
    //      expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
    //      AND
    //      expr2 is the keyword "partialFilterExpression" : { [ query ] }
    //              e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 } } }
    //
    // db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 } } })
    // The above creates an index (last name DESC, first name ASC) that covers only people aged 62 or older.
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
        DB_TOKEN => 1                               // DB_TOKEN should always appear
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
    //      [ SOME_FIELD_NAME => 86400 ] --- record expires (is deleted) one day after the indexed date value
//
public ?array $ttlIndexes = null;
    // cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
    // still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = null;
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
    // this does not define an index, but rather controls when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
* SubC fields do not need to be indexed.
*
*/
public ?array $subC = null;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
    // This section handles the warehousing configuration for the class. If a data table is eligible to be
    // warehoused, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, Y = 1st of every year
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
    //      Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
    //      set to true, the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
        WH_INTERVAL => 'M',                 // must be either D, M, Q or Y, defaults to M
WH_QUALIFIER => null, // query filter for warehousing
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H' // must be either H, or S. Can be reset to T via meta. Default: H
];
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 04-12-18 mks _INF-139: original coding
*
*/
public function __construct()
{
$this->authToken = NULL_TOKEN;
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @author mike@givingassistant.org
* @version 1.0
*
* @return null
*
* HISTORY:
* ========
* 04-12-18 mks _INF-139: original coding
*
*/
private function __clone()
{
return(null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 04-12-18 mks _INF-139: original coding
*
*/
public function __destruct()
{
;
}
}
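
The lifecycle pattern these template classes share can be reduced to a standalone sketch: PHP does not run __destruct() on a shutdown caused by a fatal error, so the constructor registers the destructor as a shutdown function, and a private __clone() silently disallows cloning from outside the class. The class and token below are invented for illustration (the token string stands in for NULL_TOKEN), not framework code:

```php
<?php
// Illustrative sketch of the template-class lifecycle pattern: shutdown-registered
// destructor plus clone prevention. Names here are assumptions for the example only.
class TemplateLifecycleDemo
{
    public ?string $authToken = null;

    public function __construct()
    {
        // stands in for NULL_TOKEN in the real templates
        $this->authToken = '00000000-0000-0000-0000-000000000000';
        // PHP runs shutdown functions even after a fatal error, unlike __destruct()
        register_shutdown_function([$this, '__destruct']);
    }

    private function __clone()
    {
        // silently disallows cloning of the object
    }

    public function __destruct()
    {
        // recovery/cleanup work belongs here; it may run twice (shutdown + normal
        // destruction), so it should be written to be idempotent
    }
}

$demo = new TemplateLifecycleDemo();
```

Because the shutdown hook and normal object destruction can both invoke the cleanup, any recovery logic placed there should tolerate being called more than once.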


@@ -0,0 +1,457 @@
<?php
/** @noinspection PhpUnused */
/**
* Class pltDonors -- mongo data-template class
*
 * This is the template class for Priceline donors. Priceline fields, as submitted to the SMAX API, are defined
 * as the cache-mapped values such that the actual schema remains obfuscated.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-12-20 mks ECI-164: original coding
*
*/
class pltDonors
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS PROPERTIES...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
public int $version = 1; // template version - not the same as the release version
public string $service = CONFIG_DATABASE_SERVICE_APPSERVER; // defines the mongo server destination
public string $schema = TEMPLATE_DB_MONGO; // defines the storage schema for the class
public string $templateClass = TEMPLATE_PL_DONORS; // defines the clear-text template class name
public string $collection = COLLECTION_MONGO_PL_DONORS; // sets the collection (table) name
public ?string $whTemplate = null; // sets the WH(cool) collection name, null if not wh'd
public string $extension = COLLECTION_PL_DONORS_EXT; // sets the extension for the collection
public bool $closedClass = true; // set to false to allow partner instantiations
public bool $setCache = true; // set to true to cache class data
public bool $setDeletes = false; // set to true to allow HARD deletes (otherwise: SOFT)
public int $setAuditing = AUDIT_NONDESTRUCTIVE; // set to AUDIT_value constant (nondestructive = reads(yes))
public bool $setJournaling = true; // set to true to allow journaling
public bool $setUpdates = true; // set to true to allow record updates
public bool $setHistory = false; // set to true to enable detailed record history tracking
public string $setDefaultStatus = STATUS_ACTIVE; // set the default status for each record
public string $setSearchStatus = STATUS_ACTIVE; // set the default search status
public bool $setLocking = false; // set to true to enable record locking for collection
public bool $setTimers = true; // set to true to enable collection query timers
public string $setPKey = DB_TOKEN; // sets the primary key for the collection
public bool $setTokens = true; // set to true: adds the idToken field functionality
public bool $selfDestruct = true; // set to false if the class contains methods
public int $cacheTimer = 300; // number of seconds a tuple will remain in-cache
public bool $isGA = false; // set to true if this class is a Namaste internal class
public ?string $authToken = null; // if this data class is registered to a partner, you'll
// need to initialize this member in the constructor
//
//
// fields: a key-value paired array, defines the field name and the data type for each field. Prior to insertion,
// all data is validated for type and membership. Data that does not satisfy these requirements is
// silently dropped prior to insertion.
public array $fields = [
/////// NAMASTE CONSTANTS ////////////////
MONGO_ID => DATA_TYPE_INTEGER, // sorting by the id is just like sorting by createdDate
DB_TOKEN => DATA_TYPE_STRING, // unique pkey exposed externally and is REQUIRED
DB_EVENT_GUID => DATA_TYPE_STRING, // track-back identifier for broker/events
DB_CREATED => DATA_TYPE_INTEGER, // epoch time
DB_STATUS => DATA_TYPE_STRING, // record status
DB_ACCESSED => DATA_TYPE_INTEGER, // epoch time
//////////////////////////////////////////
PL_CAUSE_TITLE => DATA_TYPE_STRING,
PL_CID => DATA_TYPE_STRING,
PL_SHARE_DATA_WITH_CAUSE => DATA_TYPE_BOOL,
PL_FK => DATA_TYPE_STRING,
PL_DONATIONS_TCC => DATA_TYPE_DOUBLE,
PL_TOT_DONS => DATA_TYPE_DOUBLE,
PL_TRANS_COUNT => DATA_TYPE_INTEGER
];
// protected fields are fields that a client is unable to modify or delete. If a client submits a query that
// updates these fields, the query will be rejected (worst case) or the directive to update/delete the field
// will be silently dropped (best case). Either way, updating or removing these fields cannot be accomplished.
//
// Minimally, this array should contain the following fields:
// -- DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED
// -- the ID field (either PDO_ID or MONGO_ID)
// -- DB_WH_CREATED, DB_WH_EVENT_GUID, DB_WH_TOKEN
//
public ?array $protectedFields = [
DB_TOKEN, DB_EVENT_GUID, DB_CREATED, DB_ACCESSED, MONGO_ID
];
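The silent-drop ("best case") behavior described above can be sketched as a simple key filter. This is an illustrative sketch only -- the helper name and the literal field names below are hypothetical, not part of the framework:

```php
<?php
// Hypothetical sketch of the "best case" behavior: update directives that
// target protected fields are silently dropped from the client payload.
function stripProtectedFields(array $update, array $protectedFields): array
{
    // array_diff_key() keeps only the keys NOT present in the protected list
    return array_diff_key($update, array_flip($protectedFields));
}

$protected = ['token', 'eventGuid', 'created', 'accessed', '_id'];
$update    = ['causeTitle' => 'New Title', 'token' => 'forged', 'created' => 0];

$clean = stripProtectedFields($update, $protected);
// $clean now contains only the causeTitle directive
```

The "worst case" variant would instead reject the whole query whenever `array_intersect_key()` on the same inputs is non-empty.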
// all fields that appear in any of the index declarations must appear in this list as this is the property
// that's used in the framework as an authoritative check to qualify discriminant fields as indexes.
//
// indexes are always declared with the template column name and not the cache-map column name
//
// warehouse indexes are limited to the original record's created date and the three WH fields only
//
public array $indexFields = [
MONGO_ID, DB_CREATED, DB_STATUS, DB_TOKEN, PL_CID, PL_CAUSE_TITLE, PL_FK
];
// all index names that are explicitly declared in the indexes below must also appear in this array. If there are
// no pre-defined index names, then this field should be set to null.
//
// Note that if you're allowing mysql to generate the index names for you, and if you use a partial index (below)
// that references that randomly-generated index name, and that name does not appear in this list, then you will
// fail to load that template at run time, every time.
//
// You have been warned.
//
public ?array $indexNameList = null;
// single field index declarations -- since you can have a field in more than one index
// (MONGO_ID should NEVER be listed as it's the default single-field index.)
// the format for the single-field index declaration is the same format used for all the
// index declarations:
// [ FIELD_NAME => <SORT-DIR> ] where <SORT-DIR> = [ 1 | -1 ]
//
// NOTE: if you're going to declare a single column as a property, then do NOT also declare it as a single index!
//
public ?array $singleFields = [
DB_CREATED => -1,
PL_CID => 1,
PL_CAUSE_TITLE => 1
];
// compound indexes have format of:
// [ INDEX-NAME => [ FIELD_NAME => <SORT-DIR>, ... ]]
// where INDEX-NAME is a unique string and SORT-DIR = [1|-1]
// unless it's for mongoDB -- mongoDB does not use index labels
public ?array $compoundIndexes = null;
// multiKey indexes are indexes on fields that are arrays (not the same as sub-collections) which indexes the
// content stored in the array based on the column names.
//
// mongo, as of 3.4, automatically creates a multi-key index on any field declared as a (sic) index that's
// an array. Meaning: we don't need to explicitly create a multi-key index on an array field if that field
// is declared as a single-key, compound, or unique index.
//
// -----------------------------------------------------------------------------------------------------------------
// NOTES: if you implicitly declare a multi-key index by using the column as a compound-index field, then you
// may, at MOST, have one array within the compound index.
//
// You may NOT declare a multi-key index as a shard key.
//
// Hashed keys may NOT be multi-key.
// -----------------------------------------------------------------------------------------------------------------
//
// In other words, if you want to apply an index to ALL of the array elements, then declare the column as singleField,
// or compound, or unique. This will have the multi-key index automagically applied by mongoDB.
//
// If you want to index a subset of the array, then declare the fields to be indexed by using dot notation:
//
// [ 'someIndex' => [ 'arrayColumnName.subField1' => 1, 'arrayColumnName.subField3' => -1, ... ] ]
//
// And this will apply the multi-key index property to subField1 and subField3 only.
//
// multiKey indexes are referenced by an index name in order to remove ambiguity when parsing index-properties
// against this and other indexes that may have the same field name. In other words, index-properties that will
// be applied to a multiKey index must reference the multiKey index by the index (and not the column) name.
//
// example:
// [ 'mIdx1Test' => [ ARRAY_FIELD_NAME => <1|-1>, ... ]]
//
public ?array $multiKey = null;
/*
* Valid index-type constants are:
* MONGO_INDEX_TYPE_SINGLE
* MONGO_INDEX_TYPE_COMPOUND
* MONGO_INDEX_TYPE_MULTIKEY
*
* INDEXES NOT SUPPORTED BY NAMASTE AT THIS TIME:
* ----------------------------------------------
* geoSpatial
* text
* hashed
*
*/
// =================================================================================================================
// INDEX PROPERTIES
// ----------------
// Index properties are applied to indexes. The supported properties are:
// unique, partial and ttl
// sparse is not supported in favor of partial indexes
//
// If a property is not in-use, then you must still declare the property as a class object but the
// value of the property will be set to null.
//
// Sparse property types are not supported in favor of partials.
//
// =================================================================================================================
// Partial Indexes are supported as of MongoDB 3.2 and replace sparse indexes. Format for declaration is the
// column name as an array key, with the value being a sub-array of a mongo operand and a value, all of which is
// associated with either an existing column name or index label.
//
// If an existing column name is used, then that field must be defined (exists) in one of the above index
// declarations for single, compound, or multikey indexes.
//
// Partial indexes only add the row to the index if the column referenced satisfies the conditions specified
// in the query condition (expr2).
//
// Format:
// { expr1 }, { expr2 }
// Where:
// expr1 is an indexed column and the index direction. e.g.: { created_tst : 1 }
// AND
// expr2 is the keyword "partialFilterExpression : { [ query ] }
// e.g.: { partialFilterExpression : { integer_tst : { $gte : 10 }}}
//
// db.myTable.createIndex({ lastName: -1, firstName : 1 }, { partialFilterExpression : { age : { $gte : 62 }}})
// The above index contains only documents for people aged 62 or older, sorted DESC by last name (then ASC by first name).
//
//
public ?array $partialIndexes = null;
// unique indexes cause MongoDB to reject duplicate values for the indexed field. Unique indexes
// are functionally interchangeable with other mongo indexes.
// Format:
// [ < FIELD_NAME | INDEX-NAME > => <SORT_DIR>, ... ]
//
public ?array $uniqueIndexes = [
DB_TOKEN => 1, // DB_TOKEN should always appear
PL_FK => 1 // foreign key value should be unique because it is a key
];
// ttl indexes contain the column name and the time-to-live in seconds (e.g.: MONGO_TOKEN => 3600)
// ttl indexes can only be applied to fields that are MongoDB Date() (object) types, or an array that
// contains date values.
//
// If the field is an array, and there are multiple date values in the index, MongoDB uses lowest
// (i.e. earliest) date value in the array to calculate the expiration threshold. If the indexed
// field in a document is not a date or an array that holds a date value(s), the document will not expire.
//
// Format:
// [ SOME_FIELD_NAME => ExpireVal ]
//
// Example:
// [ SOME_FIELD_NAME => 86400 ] --- record will be sorted ASC and deleted after 1 day
//
public ?array $ttlIndexes = null; // ttl indexes appear in $indexFields
// cache maps are required for namaste service classes. Even if caching is disabled for a class, a cache map is
// still required for the class. For PDO classes, the PDO_ID is never included in the mapping, nor is MONGO_ID.
public ?array $cacheMap = [
///////// NAMASTE CONSTANTS //////////////
DB_TOKEN => CM_TST_TOKEN, //
DB_STATUS => CM_TST_FIELD_TEST_STATUS, //
DB_EVENT_GUID => CM_TST_EVENT_GUID, //
DB_CREATED => CM_TST_FIELD_TEST_CDATE, //
DB_ACCESSED => CM_TST_FIELD_TEST_ADATE, //
//////////////////////////////////////////
PL_CID => PL_CM_CID,
PL_CAUSE_TITLE => PL_CM_CAUSE_TITLE,
PL_DONATIONS_TCC => PL_CM_DTCC,
PL_FK => PL_CM_FK,
PL_SHARE_DATA_WITH_CAUSE => PL_CM_SDWC,
PL_TOT_DONS => PL_CM_TD,
PL_TRANS_COUNT => PL_CM_TC
];
/*
* if there is no cache-mapping supported for the class, and you want to limit the fields returned,
* then those fields are listed here as an associative array.
*
* NOTE: You can have caching disabled for the class and still have a cache-map -- this controls the labels
* assigned to the returned data column names exposed to the client. Schema should never be exposed.
*
* NOTE: if you do not support caching for the class and this class is one that is returned to a client,
* (some classes are limited to internal use only, like logging), then you should (at a minimum)
* exclude the primary key field (integer).
*
*
* This array is an associative array -- the key is the native column name and the value doesn't matter. The
* important thing is that the keys are the column names that you want to return back to the client.
*
* If $exposedFields is to be undefined for the class, then assign it to null.
*
*/
public ?array $exposedFields = null;
public ?array $binFields = null; // binary fields require special handling; define binary fields here
// regex fields -- within the indexFields array, which fields enable regex searches?
// this does not define an index; rather, it controls when to use a regex operand in a query...
public ?array $regexFields = null;
/*
* sub-collections represent the implementation of a 1:M relationship at the record-entity level in mongoDB.
*
* A great example of a sub-collection implementation would be a parent collection called questions and
* a sub-collection called answers.
*
* sub-collections are declared as key->value pairs where each key value is, itself, an array of field names:
*
* public $subC = [
* FIELD_ONE => [
* SUB_COLLECTION_FIELD_ONE,
* SUB_COLLECTION_FIELD_TWO,
* ...
* ],
* ...
* ];
*
* Each sub-collection field should also appear in both the fields list (to define the types), and in the
* cacheMap (if used). If you're not using a cacheMap, and you're limiting the exposed fields, then each
* sub-collection field exposed must be listed in the exposed field list. (e.g.: normal rules for exposure
* for a collection are applied the same way to a sub-collection.)
*
* Note that if a sub-Collection key is not listed in either the cacheMap or the exposed field list, then
* the entire sub-collection will be invisible to the client. If you list the sub-collection key, you can
* limit the sub-collection fields that are exposed by not listing them in either the cacheMap or the
* exposed-field lists, respectively.
*
* Sub-collections are managed within Namaste to allow the sub-collection elements to be either inserted,
* or deleted (an update is a delete + insert) without changing the parent field values and, accordingly,
* are enabled via discrete class methods.
*
*/
// sub-collection fields must be declared here (need not be indexed)
public ?array $subC = null;
//=================================================================================================================
// WAREHOUSE DECLARATIONS
// ----------------------
// This section handles the warehousing configuration for the class. If a data table is eligible to be ware-
// housed, then this section contains all the configuration information, including permissions, for the destination
// repository. Note that we need to support bi-directional flow for data.
//
// Terms/Definitions:
// ------------------
// HOT -- data is in production
// COOL -- data has been warehoused, maintains schema, but with indexing changes.
// COLD -- data has been warehoused but formatted to the destination schema, usually CSV.
// WARM -- indicates any data moving from COLD -> HOT
//
// Design Features:
// ----------------
// Supported
// This is a boolean value that indicates if the class supports warehousing. If this is set to false, then
// warehousing requests for the class will be rejected.
//
// Remote Support
// --------------
// This is a boolean value that indicates if the class will support a warehouse source outside of the Namaste
// framework. If this is set to false, and a user submits a request defining the data source as a remote
// repository, the request will be rejected.
//
// Automated
// This is a boolean value that indicates if the class allows automated warehousing, meaning that data will be
// warehoused once the qualifying condition has been met.
//
// Dynamic
// Boolean value that, if set to true, indicates that the class will accept dynamic requests. Otherwise, the
// warehousing operations will follow the interval schedule. Defaults to false.
//
// Interval
// This is a string value that tells the AT_micro-service how often to run automated warehousing on the data.
// D = Daily, M = 1st of every month, Q = 1st of every quarter, A = 1st of every year (annually)
// The default setting for this value should be monthly (M).
//
// Qualifier
// This is a query string, similar to what you would provide to Namaste for a fetch operation, that establishes
// the filter/criteria for moving data to the warehouse. If Supported is set to true, this cannot be blank.
//
// Override
// Boolean value indicating whether, for dynamic event requests only, the Qualifier can be overridden. If
// set to true, then the event request must contain a valid query filter.
//
// Delete
// This is a string value that tells Namaste what to do with the source data once successfully warehoused.
// H = hard delete, S = soft delete
// Note that this value overrides the $setDeletes setting.
//
//=================================================================================================================
public ?array $wareHouse = [
WH_SUPPORTED => false, // must be set to true for data class to support any warehousing
WH_REMOTE_SUPPORT => false, // must be set to true to import data into this class from remote source
WH_AUTOMATED => false, // must be set to true for warehousing to be automatically processed
WH_DYNAMIC => false, // must be set to true to allow non-scheduled event requests
WH_INTERVAL => 'M', // must be either D, M, Q or A, defaults to M
WH_OVERRIDE => false, // must be set to true to allow an ad-hoc query filter
WH_DELETE => 'H', // must be either H or S. Can be reset to T via meta. Default: H
// default warehouse query to grab records where the date is LT a value and the status is active:
// the null value will be replaced with the value provided by the client in the wh request payload.
WH_QUALIFIER => [
DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]],
DB_STATUS => [OPERAND_NULL => [OPERATOR_EQ => [STATUS_ACTIVE]]],
OPERAND_AND => null
]
];
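Per the comment above, the null placeholder in WH_QUALIFIER is replaced with the client-supplied cutoff from the wh request payload. A minimal sketch of that substitution, assuming hypothetical constant values and a hypothetical helper name (neither is framework code):

```php
<?php
// Hypothetical stand-ins for the framework constants used in WH_QUALIFIER.
const DB_CREATED   = 'created';
const OPERAND_NULL = 'noOperand';
const OPERATOR_LT  = '$lt';

// Replace the null placeholder on DB_CREATED with the epoch cutoff carried
// in the warehouse request payload.
function bindQualifierCutoff(array $qualifier, int $cutoff): array
{
    $qualifier[DB_CREATED][OPERAND_NULL][OPERATOR_LT] = [$cutoff];
    return $qualifier;
}

$qualifier = [DB_CREATED => [OPERAND_NULL => [OPERATOR_LT => [null]]]];
$bound     = bindQualifierCutoff($qualifier, 1577836800); // 2020-01-01 UTC
// $bound now selects records created before the supplied cutoff
```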
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// CLASS METHODS...
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* __construct() -- public method
*
* we have a constructor to register the destructor.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-12-20 mks ECI-164: Original coding
*
*/
public function __construct()
{
$this->authToken = '136EA67A-B1E2-0A4B-2BD8-EE34D39DFDE1'; // make sure this exists in the SMAX_API collection
register_shutdown_function([$this, STRING_DESTRUCTOR]);
}
/**
* __clone() -- private function
*
* Silently disallows cloning of the object
*
* @return null
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-12-20 mks ECI-164: Original coding
*
*/
private function __clone()
{
return (null);
}
/**
* __destruct() -- public function
*
* As of PHP 5.3.10 destructors are not run on shutdown caused by fatal errors.
*
* The destructor is registered as a shut-down function in the constructor -- so any recovery
* efforts should go in this method.
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-12-20 mks ECI-164: Original coding
*
*/
public function __destruct()
{
// empty method
}
}

286
common/cacheMaps.php Normal file
View File

@@ -0,0 +1,286 @@
<?php
/**
* cacheMaps.php
*
* this file defines the cache map constants for the framework.
*
* The general intent is that cached data is directly accessible to clients and, as such, has the potential to
* expose table schema. So, the purpose of the cache map is to obfuscate the original schema, either through
* omission or by changing the column name of the field.
*
* Fields have the general format of:
*
* {name}_{table_ext}
*
* The {name} is the "natural" name of the column.
*
 * The table_ext is an underscore (_) followed by a three-letter identifier, unique within the db and specific to the table.
 * This three-letter identifier is also appended to every column name within the table so as to identify field sources
 * in queries where identical names are in play, such as "id" or "token".
*
* Cache constants identify themselves as the following:
*
* CM_{table ext}_{name}
*
* Where CM implies "cache map" and {name} is the (possibly) new name of the field.
*
* ex:
* ---
* The user table/collection has a field called salt. We want to cache the field but not necessarily broadcast that
* the name of the field in our user (_usr) collection is "salt". We use the short-string "CM", short for "Cache-Map",
* as an identifier specific to, and reserved for, this purpose.
*
* So, "salt_usr" cache-mapped to "seedKey" making the constant declaration look like:
*
* const CM_USR_SALT = 'seedKey'
*
* This way, when reading the code, we can identify the constant as a cache-mapped constant, the table in which
 * the field appears, and the literal name of the column being mapped.
*
* Generally, some tables may have both an integer primary key (id) and a string unique index (guid). You should
* never expose the ID when you have a GUID value... Remember: GUIDs externally, IDs internally.
*
 * One last note - when mapping a record for caching, any field omitted from a class' cachemap will not be cached
 * when Namaste builds the cached copy of the record. This is the long way of saying that there's not a 1:1
 * relationship between a class' field list and a cache map structure.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* ========
* 06-30-17 mks original coding
* 02-03-19 mks DB-147: added session and standard constants
*
*/
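The omission behavior described above (any field not listed in the cachemap is not cached) amounts to an intersect-and-rename pass over the raw record. A minimal sketch, with a hypothetical helper name and the salt_usr example from the docblock; the field values are invented for illustration:

```php
<?php
// Hypothetical sketch: apply a cache map to a raw record. Unmapped fields
// (like the internal integer id) are dropped; mapped fields are renamed to
// their public, schema-obfuscating labels.
function applyCacheMap(array $record, array $cacheMap): array
{
    $cached = [];
    foreach ($cacheMap as $column => $publicLabel) {
        if (array_key_exists($column, $record)) {
            $cached[$publicLabel] = $record[$column];
        }
    }
    return $cached;
}

$record   = ['salt_usr' => 'abc123', 'id_usr' => 42, 'email_usr' => 'a@b.org'];
$cacheMap = ['salt_usr' => 'seedKey', 'email_usr' => 'userEmail']; // id_usr omitted on purpose

$cached = applyCacheMap($record, $cacheMap);
// $cached carries only seedKey and userEmail -- the integer id is never exposed
```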
// standard fields in every collection
const CM_DATE_CREATED = 'dateCreated';
const CM_DATA_CLOSED = 'dateClosed';
const CM_DATE_ACCESSED = 'dateLastAccessed';
const CM_EVENT_GUID = 'eventGuid';
const CM_TOKEN = 'key';
const CM_STATUS = 'status';
// test table cachemap
const CM_TST_TOKEN = 'key';
const CM_TST_EVENT_GUID = 'eventGuid';
const CM_TST_FIELD_TEST_STRING = 'strVal';
const CM_TST_FIELD_TEST_DOUBLE = 'doubleVal';
const CM_TST_FIELD_TEST_INT = 'intVal';
const CM_TST_FIELD_TEST_NIF = 'foobar';
const CM_TST_FIELD_TEST_BOOL = 'boolVal';
const CM_TST_FIELD_TEST_OBJ = 'objVal';
const CM_TST_FIELD_TEST_ARY = 'aryVal';
const CM_TST_FIELD_TEST_SUBC = 'subCField';
const CM_TST_FIELD_TEST_CDATE = 'dateCreated';
const CM_TST_FIELD_TEST_ADATE = 'dateAccessed';
const CM_TST_FIELD_TEST_STATUS = 'recStatus';
// mongo-collection: sessions
const CM_SESSION_EXPIRES = 'sessionExpires';
const CM_SESSION_DURATION = 'sessionLength';
const CM_SESSION_UID = 'sessionUserId';
const CM_SESSION_LEVEL = 'sessionLevel';
const CM_SESSION_CUSTOM_KEY = 'sessionKey';
const CM_SESSION_CUSTOM_VAL = 'sessionValue';
const CM_SESSION_CW = 'createdWith';
const CM_SESSION_ACTION = 'action';
const CM_SESSION_AP = 'authProvider';
// mongo-collection: users
const CM_USER_ACCOUNT = 'accountID';
const CM_USER_API_DATA = 'apiData';
const CM_USER_PASS_KEY = 'passwordKey';
const CM_USER_EMAIL_VALIDATED = 'emailIsValidated';
const CM_USER_FB_KEY = 'fbKey';
const CM_USER_HUMAN_VALIDATED = 'humanValidated';
const CM_USER_MEMBER_PLAN = 'memberPlan';
const CM_USER_NOTES = 'comments';
const CM_USER_PASSWORD = 'userPassword';
const CM_USER_PROMO_ID = 'promoId';
const CM_USER_TEMPORARY_PWD = 'temporaryPass';
const CM_USER_TIMEZONE = 'userTZ';
const CM_USER_LEAP_CONVERTED = 'userLeapConverted';
const CM_USER_NAME = 'userName';
const CM_USER_ALT_EMAIL = 'altEmailAddress';
const CM_USER_WEBHOOK_ATTEMPTS = 'webhookAttempts';
const CM_USER_TYPE = 'userType';
const CM_USER_FINANCIALS = 'userFinancials'; // sub-collection heading
const CM_USER_FINANCIALS_DONATIONS = 'donationsTotal';
const CM_USER_FINANCIALS_EARNINGS = 'earningsTotal';
const CM_USER_FINANCIALS_CB_BANK = 'cbBank';
const CM_USER_FINANCIALS_CB_DONATIONS = 'cbDonations';
const CM_USER_FINANCIALS_CURR_BAL = 'curBalance';
const CM_USER_FINANCIALS_EARNING_TIER = 'earningTier';
const CM_USER_FINANCIALS_PAYMENTS = 'payments'; // sub-collection heading
const CM_USER_FINANCIALS_PAYMENTS_ADDR1 = 'paymentsAddr1';
const CM_USER_FINANCIALS_PAYMENTS_ADDR2 = 'paymentAddr2';
const CM_USER_FINANCIALS_PAYMENTS_CITY = 'paymentCity';
const CM_USER_FINANCIALS_PAYMENTS_COUNTRY = 'paymentCountry';
const CM_USER_FINANCIALS_PAYMENTS_FNAME = 'paymentFName';
const CM_USER_FINANCIALS_PAYMENTS_PLAN = 'paymentPlan';
const CM_USER_FINANCIALS_PAYMENTS_STATE = 'paymentState';
const CM_USER_FINANCIALS_PAYMENTS_STATUS = 'paymentStatus';
const CM_USER_FINANCIALS_PAYMENTS_ZIP = 'paymentsZip';
const CM_USER_FINANCIALS_PAYMENTS_VALIDATED = 'paymentValidated';
const CM_USER_FINANCIALS_PAYMENTS_METADATA = 'paymentMetaData';
const CM_USER_FINANCIALS_PAYMENTS_TYPE = 'paymentType';
const CM_USER_FINANCIALS_PENDING_BALANCE = 'pendingBalance';
const CM_USER_FINANCIALS_CUSTOMER_ID = 'customerID';
const CM_USER_FINANCIALS_STRIPE_VERIFIED = 'stripeVerified';
const CM_USER_FINANCIALS_TIN_IDENTIFIER = 'tinIdentifier';
const CM_USER_FINANCIALS_TIN_TYPE = 'tinType';
const CM_USER_SPORTS = 'userSports'; // sub-collection heading
const CM_USER_SPORTS_FAVE_ATHLETES = 'faveAthletes';
const CM_USER_SPORTS_FAVE_TEAMS = 'faveTeams';
const CM_USER_SPORTS_FAVE_SPORTS = 'faveSports';
const CM_USER_CHARITIES = 'charities'; // sub-collection heading
const CM_USER_CHARITIES_SEL_CAMPAIGN = 'selectedCampaign';
const CM_USER_CHARITIES_SEL_CAMPAIGN_META = 'selectedMetaData';
const CM_USER_CHARITIES_SEL_CAMPAIGN_TITLE = 'selectedCampaignTitle';
const CM_USER_REFERRALS = 'referrals'; // sub-collection heading
const CM_USER_REFERRALS_EARNINGS = 'referralEarnings';
const CM_USER_REFERRALS_CLICKS = 'referralClicks';
const CM_USER_REFERRALS_PENDING_EARNINGS = 'pendingEarnings';
const CM_USER_REFERRALS_SIGNUPS = 'referralSignups';
const CM_USER_REFERRALS_ID = 'referralsID';
const CM_USER_PII = 'personalInformation'; // sub-collection heading
const CM_USER_PII_ADDR = 'userAddress';
const CM_USER_PII_AGE_RANGE = 'userAgeRange';
const CM_USER_PII_DOB = 'userBirthday';
const CM_USER_PII_COUNTRY_CODE = 'userCC';
const CM_USER_PII_EMAIL = 'userEmail';
const CM_USER_PII_ALT_EMAIL = 'altUserEmail';
const CM_USER_PII_FNAME = 'userFName';
const CM_USER_PII_GENDER = 'userGender';
const CM_USER_PII_HOMETOWN = 'userHometown';
const CM_USER_PII_LANGUAGES = 'userLanguages';
const CM_USER_PII_LNAME = 'userLname';
const CM_USER_PII_LEGAL_NAME = 'userLegalName';
const CM_USER_PII_LOCALE = 'userLocale';
const CM_USER_PII_LOCATION = 'userLocation';
// mongo-collection: Consolidated Sanctions List
const CM_CSL_ADDR = 'addr';
const CM_CSL_ADDR1 = 'addr1';
const CM_CSL_ADDR2 = 'addr2';
const CM_CSL_ADDR3 = 'addr3';
const CM_CSL_ADDR_LIST = 'addrList';
const CM_CSL_AKA = 'alsoKnownAs';
const CM_CSL_AKA_LIST = 'alsoKnownAsList';
const CM_CSL_CAT = 'cat';
const CM_CSL_CITIZENSHIP = 'citizenship';
const CM_CSL_CITIZENSHIP_LIST = 'citizenshipList';
const CM_CSL_CITY = 'city';
const CM_CSL_COUNTRY = 'country';
const CM_CSL_DOB = 'dob';
const CM_CSL_DOB_LIST = 'dobList';
const CM_CSL_FIRST_NAME = 'fName';
const CM_CSL_LAST_NAME = 'lName';
const CM_CSL_ID = 'id';
const CM_CSL_ID_COUNTRY = 'idCountry';
const CM_CSL_ID_LIST = 'idList';
const CM_CSL_ID_NUM = 'idNum';
const CM_CSL_ID_TYPE = 'idType';
const CM_CSL_MAIN_ENTRY = 'mainEntry';
const CM_CSL_POB = 'pob';
const CM_CSL_POB_LIST = 'pobList';
const CM_CSL_POST_CODE = 'postCode';
const CM_CSL_PRG = 'prg';
const CM_CSL_PRG_LIST = 'prgList';
const CM_CSL_REM = 'remarks';
const CM_CSL_STATE_OR_PROVINCE = 'StateOrProvince';
const CM_CSL_TYPE = 'type';
const CM_CSL_UID = 'uid';
const CM_CSL_SDN_TYPE = 'entityType';
// mongo-collection: donors
const CM_DONORS_TC = 'transactionCount';
const CM_DONORS_DTCC = 'donationsToCurrentCause';
const CM_DONORS_TD = 'totalDonations';
const CM_DONORS_SDWC = 'shareDataWithCause';
const CM_DONORS_CID = 'cid';
const CM_DONORS_CT = 'causeTitle';
const CM_DONORS_FI = 'foreignId';
// mongo-collection: wblist
const CM_WBL_TYPE = 'listType';
const CM_WBL_EMAIL = 'listEmail';
const CM_WBL_ALT_EMAIL = 'listAltEmail';
const CM_WBL_ADDED_BY = 'listAddedBy';
const CM_WBL_NOTES = 'listNotes';
// mongo-collection: transactions (these values are determined by Priceline)
const CM_TRANSACTIONS_CREATED_AT = '_created_at';
const CM_TRANSACTIONS_UPDATED_AT = '_updated_at';
const CM_TRANSACTIONS_ORDER_ID = 'orderId';
const CM_TRANSACTIONS_TYPE = 'type';
const CM_TRANSACTIONS_DESCRIPTION = 'description';
const CM_TRANSACTIONS_AMOUNT = 'amount';
const CM_TRANSACTIONS_EVENT_DATE = 'eventDate';
const CM_TRANSACTIONS_START_DATE = 'startDate';
const CM_TRANSACTIONS_END_DATE = 'endDate';
const CM_TRANSACTIONS_META_DATA = 'metadata';
const CM_TRANSACTIONS_MD_TRAVEL_END_DATE = 'travelEndDate';
const CM_TRANSACTIONS_MD_PARTNER = 'partner';
const CM_TRANSACTIONS_MD_PRODUCT_ID = 'productId';
const CM_TRANSACTIONS_MD_TRAVEL_START_DATE = 'travelStartDate';
const CM_TRANSACTIONS_MD_CUSTOMER_ID = 'custId';
const CM_TRANSACTIONS_MD_OFFER_NUM = 'offerNum';
const CM_TRANSACTIONS_MD_ENV = 'env';
const CM_TRANSACTIONS_DEST_CITY_NAME = 'destCityName';
const CM_TRANSACTIONS_DEST_STATE_CODE = 'destStateCode';
const CM_TRANSACTIONS_DEST_COUNTRY_CODE = 'destCountryCode';
const CM_TRANSACTIONS_DONOR_ID = '_donor_id';
const CM_TRANSACTIONS_CID = 'cid';
const CM_TRANSACTIONS_CAUSE_TITLE = 'causeTitle';
// mongo-collection: SMAXAPI
const CM_SMAX_COMPANY_NAME = 'name_of_company';
const CM_SMAX_COMPANY_CONTACT_INFO = 'contact_info';
const CM_SMAX_COMPANY_ADDR1 = 'address_line_1';
const CM_SMAX_COMPANY_ADDR2 = 'address_line_2';
const CM_SMAX_COMPANY_CITY = 'city';
const CM_SMAX_COMPANY_STATE = 'state';
const CM_SMAX_COMPANY_ZIP = 'zip';
const CM_SMAX_COMPANY_VOICE = 'company_voice_phone_number';
const CM_SMAX_COMPANY_FAX = 'company_fax_phone_number';
const CM_SMAX_CONTACTS = 'company_contacts';
const CM_SMAX_CONTACT_NAME = 'employee_name';
const CM_SMAX_CONTACT_EMAIL = 'employee_email';
const CM_SMAX_CONTACT_PHONES = 'employee_phones';
const CM_SMAX_CONTACT_VOICE = 'employee_voice_phone_number';
const CM_SMAX_CONTACT_FAX = 'employee_fax_phone_number';
const CM_SMAX_AUTH_BY = 'givva_employee_name';
const CM_SMAX_NOTES = 'internal_notes';
const CM_SMAX_ACCOUNT_TYPE = 'account_type';
const CM_SMAX_TLTI = 'tlti';
// mongo-collection: product-registrations
const CM_PRG_TYPE = 'type';
const CM_PRG_IID = 'installID';
const CM_PRG_EAV = 'extensionAddonVersion';
const CM_PRG_PLATFORM = 'platform';
const CM_PRG_BROWSER = 'browser';
const CM_PRG_MAJ_VER = 'majorVersion';
const CM_PRG_MIN_VER = 'minorVersion';
const CM_PRG_IS_MOBILE = 'isMobile';
const CM_PRG_IS_TABLET = 'isTablet';
const CM_PRG_FIRST_SEEN = 'firstSeen';
const CM_PRG_LAST_SEEN = 'lastSeen';
// mongo collection: product-sessions
const CM_PSE_IID = 'installID';
const CM_PSE_SID = 'sessionID';
const CM_PSE_IP = 'sessionIP';
const CM_PSE_FIRST_SEEN = 'firstSeen';
const CM_PSE_LAST_SEEN = 'lastSeen';
// mongo collection: product-session-users
const CM_PSU_UID = 'userID';
const CM_PSU_SID = 'sessionID';
const CM_PSU_FIRST_SEEN = 'firstSeen';
const CM_PSU_LAST_SEEN = 'lastSeen';

1088
common/constants.php Normal file

File diff suppressed because it is too large

711
common/dbCatalog.php Normal file
View File

@@ -0,0 +1,711 @@
<?php
/**
* this file holds all of the mongo template and config constants
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 06-07-17 mks original coding
*
*/
const TEMPLATE_DB = 'db';
const TEMPLATE_DB_AUTH = 'admin';
const TEMPLATE_DB_DDB = 'DynamoDB';
const TEMPLATE_DB_MONGO = 'mongoDB';
const TEMPLATE_DB_PDO = 'PDO';
const APPLICATION_BRAND = 'ga';
const DB_MASTER = 1;
const DB_SLAVE = 0;
const STRING_MYSQL = 'mysql';
const STRING_MONGO = 'mongo';
const STRING_PDO = 'PDO';
const STRING_SORT_ASC = 'ASC';
const STRING_SORT_DESC = 'DESC';
const STRING_WITH_ROLLUP = 'WITH ROLLUP';
const STRING_MULTI = 'multi';
const STRING_UPSERT = 'upsert';
const STRING_PROJECTION = 'projection';
const STRING_IN = 'IN';
const STRING_OUT = 'OUT';
const STRING_INOUT = 'INOUT';
const PKEY_GUID = 'guid'; // namaste pkey for external use
const PKEY_ID = 'id'; // traditional mysql pkey for internal use
// define the database service locations
const CONFIG_DATABASE_SERVICE_APPSERVER = 'appServer';
const CONFIG_DATABASE_SERVICE_ADMIN = 'admin';
const CONFIG_DATABASE_SERVICE_SEGUNDO = 'segundo';
const CONFIG_DATABASE_SERVICE_TERCERO = 'tercero';
// db history events
const DB_EVENT_SELECT = 'select';
const DB_EVENT_CREATE = 'create';
const DB_EVENT_BULK_CREATE = 'bulkCreate';
const DB_EVENT_UPDATE = 'update';
const DB_EVENT_UPSERT = 'upsert';
const DB_EVENT_BATCH_UPDATE = 'batchUpdate';
const DB_EVENT_FETCH = 'fetch';
const DB_EVENT_DELETE = 'delete';
const DB_EVENT_BATCH_DELETE = 'batchDelete';
const DB_EVENT_LOCK = 'lock';
const DB_EVENT_UNLOCK = 'releaseLock';
const DB_EVENT_WAREHOUSE = 'warehouse';
const DB_EVENT_NAMASTE_WRITE = 'internalWrite';
const DB_EVENT_NAMASTE_READ = 'internalRead';
const DB_EVENT_HK = 'houseKeeping';
const DB_EVENT_CALL_SP = 'storedProcedureCalled';
const DB_EVENT_NONE = 'none';
const DB_EVENT_SCI = 'subCollectionInsert';
const DB_EVENT_SCD = 'subCollectionDelete';
const DB_EVENT_SCF = 'subCollectionFetch';
const DB_EVENT_BI = 'batchInsert';
const DB_EVENT_MAIL = 'emailSent';
const DB_EVENT_DISABLED = 'recordDisabled';
const DB_EVENT_NULL = 'deleteField';
// DDB table decl constants
const DDB_TYPE_STRING = 'S';
const DDB_TYPE_STRING_SET = 'SS';
const DDB_TYPE_NUMBER = 'N';
const DDB_TYPE_NUMBER_SET = 'NS';
const DDB_TYPE_BINARY = 'B';
const DDB_TYPE_BINARY_SET = 'BS';
const DDB_TYPE_BOOLEAN = 'BOOL';
const DDB_TYPE_LIST = 'L';
const DDB_TYPE_MAP = 'M';
const DDB_TYPE_NULL = 'NULL';
const DDB_INDEX_HASH = 'HASH';
const DDB_INDEX_RANGE = 'RANGE';
const DDB_STRING_PROVISIONED_THROUGHPUT = 'ProvisionedThroughput';
const DDB_STRING_READ_CAPACITY_UNITS = 'ReadCapacityUnits';
const DDB_STRING_WRITE_CAPACITY_UNITS = 'WriteCapacityUnits';
const DDB_STRING_ATTRIBUTE_NAME = 'AttributeName';
const DDB_STRING_ATTRIBUTE_TYPE = 'AttributeType';
const DDB_STRING_ATTRIBUTE_VALUE_LIST = 'AttributeValueList';
const DDB_STRING_ATTRIBUTE_DEFINITIONS = 'AttributeDefinitions';
const DDB_STRING_KEY_TYPE = 'KeyType';
const DDB_STRING_TABLE_NAME = 'TableName';
const DDB_STRING_KEY_SCHEMA = 'KeySchema';
const DDB_METADATA = '@metadata';
const DDB_STATUS_CODE = 'statusCode';
const DDB_TABLE_NAMES = 'TableNames';
const DDB_TABLE_NAME = 'TableName';
const DDB_STRING_ITEM = 'Item';
const DDB_STRING_ITEMS = 'Items';
const DDB_STRING_KEY_CONDITIONS = 'KeyConditions';
const DDB_STRING_KEY_COND_EXPR = 'KeyConditionExpression';
const DDB_STRING_EXPR_ATTR_NAMES = 'ExpressionAttributeNames';
const DDB_STRING_EXPR_ATTR_VALS = 'ExpressionAttributeValues';
const DDB_STRING_CONSISTENT_READ = 'ConsistentRead';
const DDB_STRING_KEY = 'Key';
const DDB_STRING_QUERY = 'Query';
const DDB_STRING_COUNT = 'Count';
const DDB_STRING_NON_KEY_ATTRIBUTE = 'nka';
const DDB_STRING_NON_KEY_ATTRIBUTES = 'NonKeyAttributes';
const DDB_STRING_PROJECTION = 'Projection';
const DDB_STRING_PROJECTION_TYPE = 'ProjectionType';
const DDB_STRING_PT = 'projectionType';
const DDB_STRING_INDEX_NAME = 'IndexName';
const DDB_STRING_GLOBAL_SI = 'GlobalSecondaryIndexes';
const DDB_STRING_GSI = 'globalIndexes';
const DDB_STRING_LOCAL_SI = 'LocalSecondaryIndexes';
const DDB_STRING_LSI = 'localIndexes';
const DDB_PT_KEYS_ONLY = 'KEYS_ONLY';
const DDB_PT_INCLUDE = 'INCLUDE';
const DDB_PT_ALL = 'ALL';
const DDB_INDEX_TYPE_PRIMARY = 'primary';
const DDB_INDEX_TYPE_GLOBAL = 'globalSecondary';
const DDB_INDEX_TYPE_LOCAL = 'localSecondary';
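The DDB declaration constants above name the keys of the table-definition arrays handed to the AWS SDK's createTable call. A minimal standalone sketch of a hash-key table spec (the needed constants are redeclared here so the snippet runs on its own; the 'guid' attribute mirrors PKEY_GUID and 'gaTest' is illustrative):

```php
<?php
// Standalone sketch: assemble a DynamoDB table definition from the DDB_*
// key-name and type constants instead of bare strings.
const DDB_STRING_TABLE_NAME            = 'TableName';
const DDB_STRING_KEY_SCHEMA            = 'KeySchema';
const DDB_STRING_ATTRIBUTE_NAME        = 'AttributeName';
const DDB_STRING_KEY_TYPE              = 'KeyType';
const DDB_INDEX_HASH                   = 'HASH';
const DDB_STRING_ATTRIBUTE_DEFINITIONS = 'AttributeDefinitions';
const DDB_STRING_ATTRIBUTE_TYPE        = 'AttributeType';
const DDB_TYPE_STRING                  = 'S';

$tableSpec = [
    DDB_STRING_TABLE_NAME => 'gaTest',
    DDB_STRING_KEY_SCHEMA => [
        [DDB_STRING_ATTRIBUTE_NAME => 'guid', DDB_STRING_KEY_TYPE => DDB_INDEX_HASH],
    ],
    DDB_STRING_ATTRIBUTE_DEFINITIONS => [
        [DDB_STRING_ATTRIBUTE_NAME => 'guid', DDB_STRING_ATTRIBUTE_TYPE => DDB_TYPE_STRING],
    ],
];
echo json_encode($tableSpec);
```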
// mongo table names
const NOSQL_TABLE_SEQUENCE = 'sequence_seq';
// mongo constants
const MONGO_ID = '_id';
const MONGO_DSN = 'mongodb://';
const MONGO_REPL_SET = 'replicaSet';
const MONGO_REPL_SET_LIST = 'replicaSetList';
const MONGO_URI_PORT = 'uriPort';
const MONGO_DATABASE = 'database';
const MONGO_MASTER = 'master';
const MONGO_WH_MASTER = 'whMaster';
const MONGO_SEGUNDO_MASTER = 'segundoMaster';
const MONGO_ADMIN_MASTER = 'adminMaster';
const MONGO_TERCERO_MASTER = 'terceroMaster';
const MONGO_LOG_MAX_LINES = 100;
const MONGO_QUERY_BULK_CREATE = 'Mongo bulkWrite query successfully executed for %d records';
const MONGO_BULK_WRITE_RESULTS = '%d documents matched, %d documents updated, %d documents upserted, %d documents inserted, %d documents deleted';
const MONGO_EQ = "\$eq";
const MONGO_DNE = "\$ne";
const MONGO_NIN = "\$nin";
const MONGO_IN = "\$in";
const MONGO_GT = "\$gt";
const MONGO_GTE = "\$gte";
const MONGO_LTE = "\$lte";
const MONGO_LT = "\$lt";
const MONGO_REGEX = "\$regex";
const MONGO_AND = "\$and";
const MONGO_NOT = "\$not";
const MONGO_OR = "\$or";
const MONGO_NOR = "\$nor";
const MONGO_SET = "\$set";
const MONGO_PUSH = "\$push";
const MONGO_PULL = "\$pull";
const MONGO_EXISTS = "\$exists";
const MONGO_ELEMENT_MATCH = "\$elemMatch";
const MONGO_STRING_BACKGROUND = 'background';
const MONGO_STRING_UNIQUE = 'unique';
const MONGO_STRING_NAME = 'name';
const MONGO_STRING_PARTIAL_FE = 'partialFilterExpression';
const MONGO_STRING_EXPIRE_SEC = 'expireAfterSeconds';
const MONGO_STRING_CREATE_INDEX = 'createIndex';
const MONGO_STRING_COUNT = 'count';
const MONGO_STRING_QUERY = 'query';
const MONGO_STRING_ORDERED = 'ordered';
const MONGO_COLLECTION_NAME = 'collectionName';
const MONGO_START_DATE = 'startDate';
const MONGO_END_DATE = 'endDate';
// index types
const MONGO_INDEX_TYPE_SINGLE = 'single';
const MONGO_INDEX_TYPE_COMPOUND = 'compound';
const MONGO_INDEX_TYPE_MULTIKEY = 'multikey';
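The Mongo operator constants escape the `$` prefix once, at declaration, so filter documents become plain array composition. A standalone sketch (operators redeclared here; the field name and dates are illustrative):

```php
<?php
// Standalone sketch: the operator constants map directly onto MongoDB query
// keys, so a range filter is built without hand-quoting '$' operators.
const MONGO_AND = "\$and";
const MONGO_GTE = "\$gte";
const MONGO_LT  = "\$lt";

// createdDate >= 2019-01-01 AND createdDate < 2020-01-01
$filter = [
    MONGO_AND => [
        ['createdDate' => [MONGO_GTE => '2019-01-01']],
        ['createdDate' => [MONGO_LT  => '2020-01-01']],
    ],
];
echo json_encode($filter);
```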
// PDO constants
const PDO_ID = 'id';
const PDO_EVENTS = 'pdoEvents';
const PDO_PROCEDURES = 'pdoProcedures';
const PDO_FUNCTIONS = 'pdoFunctions';
const PDO_VIEWS = 'pdoViews';
const PDO_SQL = 'sql';
const PDO_SQL_FC = 'firstCommit';
const PDO_SQL_UPDATE = 'pdoUpdateStatements';
const PDO_VERSION = 'version';
const PDO_TABLE = 'pdoTableName';
const PDO_TRIGGERS = 'pdoTriggers';
const PDO_DATA_DEFINITION = 'dataDefinition';
const PDO_AVG_ROW_LEN = 'ARL';
const PDO_RECORDS_PER_PAGE = 'recordsPerPage'; // number of records
const PDO_RP_PRIMARY = 'primary';
const PDO_RP_PRIMARY_PREFERRED = 'primaryPreferred';
const PDO_RP_SECONDARY = 'secondary';
const PDO_RP_SECONDARY_PREFERRED = 'secondaryPreferred';
const PDO_RP_NEAREST = 'nearest';
const PDO_BULK_INSERT_RESULTS = '%d records inserted, %d records dropped';
const PDO_UP_DELETE_RESULTS = '%d records %sd'; // verb stem + 'd': e.g. (5, 'update') -> '5 records updated'
// PDO constants used in the templates and deployment scripts
const PDO_SQL_HEADER = 'SET FOREIGN_KEY_CHECKS=0;
SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
SET AUTOCOMMIT = 0;
SET time_zone = "+00:00";
START TRANSACTION;';
const PDO_SQL_FOOTER = 'SET FOREIGN_KEY_CHECKS = 1;
COMMIT;';
const PDO_START_TRANSACTION = 'start transaction;';
const PDO_COMMIT = 'commit;';
const PDO_ROLLBACK = 'rollback;';
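PDO_SQL_HEADER and PDO_SQL_FOOTER are intended to bracket generated schema dumps so foreign-key checks stay suspended for the whole transactional load. A standalone sketch (the header/footer bodies are abbreviated from the catalog versions; the INSERT is illustrative):

```php
<?php
// Standalone sketch: wrap a generated dump in the shared SQL header/footer.
// Header/footer are shortened here from the full catalog constants.
const PDO_SQL_HEADER = "SET FOREIGN_KEY_CHECKS=0;\nSTART TRANSACTION;";
const PDO_SQL_FOOTER = "SET FOREIGN_KEY_CHECKS = 1;\nCOMMIT;";

$dump   = "INSERT INTO gaTest (id) VALUES (1);";
$script = PDO_SQL_HEADER . "\n" . $dump . "\n" . PDO_SQL_FOOTER;
echo $script;
```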
// PDO view constants
const PDO_VIEW_BASIC = 'view_basic_';
const PDO_VIEW_AUDIT = 'view_audit_';
// PDO Operators
const PDO_EQ = '=';
const PDO_NE = '!=';
const PDO_LT = '<';
const PDO_LTE = '<=';
const PDO_GT = '>';
const PDO_GTE = '>=';
const PDO_NS = '<=>';
const PDO_NIN = 'NOT IN';
const PDO_IN = 'IN';
const PDO_NULL = 'IS NULL';
const PDO_NOT_NULL = 'IS NOT NULL';
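The PDO operator constants keep raw comparison strings out of query-builder call sites. A standalone sketch (column and placeholder names are illustrative):

```php
<?php
// Standalone sketch: compose a WHERE fragment from the operator constants.
const PDO_GTE = '>=';
const PDO_IN  = 'IN';

$where = sprintf('amount %s :minAmount AND status %s (:a, :b)', PDO_GTE, PDO_IN);
echo $where; // amount >= :minAmount AND status IN (:a, :b)
```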
// mysql/mariadb constants
const MYSQL_EVENT_META = 'meta';
const MYSQL_AVG_ROW_LENGTH = 'AVG_ROW_LENGTH';
const MYSQL_MAX_DATA_RETURNED = 410000; // max payload size in bytes
const MYSQL_COLUMN_KEY = 'Key';
const MYSQL_COLUMN_FIELD = 'Field';
const MYSQL_COLUMN_TYPE = 'Type'; // do not change case - used for mysql system table
const MYSQL_INDEX_PRIMARY = 'PRI';
const MYSQL_INDEX_UNIQUE = 'UNI';
const MYSQL_ROWS_AFFECTED = 'rows affected: ';
const MYSQL_COLUMN_NAME = 'Column_name';
// Template Classes
const TEMPLATE_CLASS_LOGS = 'Logs';
const TEMPLATE_CLASS_METRICS = 'Metrics';
const TEMPLATE_CLASS_GRAPHS = 'Graphs';
const TEMPLATE_CLASS_TEST_MONGO = 'TestMongo';
const TEMPLATE_CLASS_DONORS = 'Donors';
const TEMPLATE_CLASS_TRANSACTIONS = 'Transactions';
const TEMPLATE_CLASS_SMAXAPI = 'SMAXAPI';
const TEMPLATE_CLASS_SESSIONS = 'Sessions';
const TEMPLATE_CLASS_SYS_EVENTS = 'SystemEvents';
const TEMPLATE_CLASS_SYS_DATA = 'SystemData';
const TEMPLATE_CLASS_TEST_PDO = 'TestPDO';
const TEMPLATE_CLASS_PRODUCT_REG = 'ProductRegistrations';
const TEMPLATE_CLASS_PROD_REGS = 'ProdRegistrations'; // mysql version of the mongo class above
const TEMPLATE_CLASS_PRODUCT_SES = 'ProductSessions';
const TEMPLATE_CLASS_PRODUCT_SES_USR = 'ProductSessionUsers';
const TEMPLATE_CLASS_MIGRATIONS = 'Migrations';
const TEMPLATE_CLASS_WAREHOUSE = 'Warehouse';
const TEMPLATE_CLASS_WHC1_PROD_REG = 'WHC1ProdRegistrations';
const TEMPLATE_CLASS_AUDIT = 'Audit';
const TEMPLATE_CLASS_JOURNAL = 'Journaling';
const TEMPLATE_CLASS_USERS = 'Users';
const TEMPLATE_CLASS_FAILED_SESSIONS = 'failedSessions';
const TEMPLATE_CLASS_WBL = 'WBList';
const TEMPLATE_CLASS_CSL = 'ConsolidatedSanctionsList';
// Warehouse Collection Name Definitions (COOL Storage)
const WH_COOL_MONGO_PRODUCT_REGISTRATIONS = 'gaCoolProductRegistrations';
const WH_COOL_PDO_PROD_REGS = 'gaCoolProductRegistrations';
// mongo logs collection
const COLLECTION_MONGO_LOGS = 'gaLogs';
const COLLECTION_MONGO_LOGS_EXT = '_log';
// mongo systemEvents collection
const COLLECTION_MONGO_SYS_EVENTS = 'gaSystemEvents';
const COLLECTION_MONGO_SYS_EVENTS_EXT = '_sev';
// mongo test collection
const COLLECTION_MONGO_TEST = 'gaTest';
const COLLECTION_MONGO_TEST_EXT = '_tst';
// mongo donors collection
const COLLECTION_MONGO_DONORS = 'gaDonors';
const COLLECTION_MONGO_DONORS_EXT = '_don';
// mongo transactions collection
const COLLECTION_MONGO_TRANSACTIONS = 'gaTransactions';
const COLLECTION_MONGO_TRANSACTIONS_EXT = '_tra';
// mongo SMAXAPI collection
const COLLECTION_MONGO_SMAXAPI = 'gaSMAXAPI';
const COLLECTION_MONGO_SMAXAPI_EXT = '_api';
// mongo system data collection
const COLLECTION_MONGO_SYS_DATA = 'gaSystemData';
const COLLECTION_MONGO_SYS_DATA_EXT = '_syd';
const VALID_STATES = 'validStates';
const VALID_STATUS = 'validStatus';
const ROW_ID = 'rowID';
const DATA_KEY = 'key';
const DATA_VALUE = 'value';
// mongo sessions collection
const COLLECTION_MONGO_SESSIONS = 'gaSessions';
const COLLECTION_MONGO_SESS_EXT = '_ses';
// mongo users collection
const COLLECTION_MONGO_USERS = 'gaUsers';
const COLLECTION_MONGO_USERS_EXT = '_usr';
// mongo consolidated sanctions list
const COLLECTION_MONGO_CSL = 'gaConsolidatedSanctionsList';
const COLLECTION_MONGO_CSL_EXT = '_csl';
// mongo failed events collection
const COLLECTION_MONGO_FAILED_SESSIONS = 'gaFailedSessions';
const COLLECTION_MONGO_FAILED_SESSIONS_EXT = '_fse';
// mongo white-black list collection
const COLLECTION_MONGO_WBLIST = 'gaWhiteBlackList';
const COLLECTION_MONGO_WBLIST_EXT = '_wbl';
// mongo consolidated sanctions list collection
const COLLECTION_MONGO_CSL_ADDRESS = 'address';
const COLLECTION_MONGO_CSL_ADDRESS1 = 'address1';
const COLLECTION_MONGO_CSL_ADDRESS2 = 'address2';
const COLLECTION_MONGO_CSL_ADDRESS3 = 'address3';
const COLLECTION_MONGO_CSL_ADDR_LIST = 'addressList';
const COLLECTION_MONGO_CSL_AKA = 'aka';
const COLLECTION_MONGO_CSL_AKA_LIST = 'akaList';
const COLLECTION_MONGO_CSL_CATEGORY = 'category';
const COLLECTION_MONGO_CSL_CITIZENSHIP = 'citizenship';
const COLLECTION_MONGO_CSL_CITIZENSHIP_LIST = 'citizenshipList';
const COLLECTION_MONGO_CSL_CITY = 'city';
const COLLECTION_MONGO_CSL_COUNTRY = 'country';
const COLLECTION_MONGO_CSL_DOB = 'dateOfBirth';
const COLLECTION_MONGO_CSL_DOB_ITEM = 'dateOfBirthItem';
const COLLECTION_MONGO_CSL_DOB_LIST = 'dateOfBirthList';
const COLLECTION_MONGO_CSL_FN = 'firstName';
const COLLECTION_MONGO_CSL_ID = 'id';
const COLLECTION_MONGO_CSL_ID_COUNTRY = 'idCountry';
const COLLECTION_MONGO_CSL_ID_LIST = 'idList';
const COLLECTION_MONGO_CSL_ID_NUMBER = 'idNumber';
const COLLECTION_MONGO_CSL_ID_TYPE = 'idType';
const COLLECTION_MONGO_CSL_LN = 'lastName';
const COLLECTION_MONGO_CSL_MAIN_ENTRY = 'mainEntry';
const COLLECTION_MONGO_CSL_POB = 'placeOfBirth';
const COLLECTION_MONGO_CSL_POB_ITEM = 'placeOfBirthItem';
const COLLECTION_MONGO_CSL_POB_LIST = 'placeOfBirthList';
const COLLECTION_MONGO_CSL_POSTAL_CODE = 'postalCode';
const COLLECTION_MONGO_CSL_PRG = 'program';
const COLLECTION_MONGO_CSL_PRG_LIST = 'programList';
const COLLECTION_MONGO_CSL_REMARKS = 'remarks';
const COLLECTION_MONGO_CSL_SDN_ENTRY = 'sdnEntry';
const COLLECTION_MONGO_CSL_SDN_TYPE = 'sdnType';
const COLLECTION_MONGO_CSL_SOP = 'stateOrProvince';
const COLLECTION_MONGO_CSL_TYPE = 'type';
const COLLECTION_MONGO_CSL_UID = 'uid';
// mongo product registration collection
const COLLECTION_MONGO_PROD_REGS = 'gaProductRegistrations';
const COLLECTION_MONGO_PROD_REG_EXT = '_prg';
const PRG_TYPE = 'type';
const PRG_IID = 'iid';
const PRG_EAV = 'eav';
const PRG_PLATFORM = 'platform';
const PRG_BROWSER = 'browser';
const PRG_MAJOR_VERSION = 'majorVersion';
const PRG_MINOR_VERSION = 'minorVersion';
const PRG_IS_MOBILE = 'isMobile';
const PRG_IS_TABLET = 'isTablet';
const PRG_FIRST_SEEN = 'firstSeen';
const PRG_LAST_SEEN = 'lastSeen';
// mongo audit collection
const COLLECTION_MONGO_AUDIT = 'gaAudit';
const COLLECTION_MONGO_AUDIT_EXT = '_aud';
const AUDIT_SYS_EV_GUID = 'systemEventGUID';
const AUDIT_SESSION_GUID = 'sessionGUID';
const AUDIT_SESSION_IP = 'sessionIP';
const AUDIT_USER_GUID = 'userGUID';
const AUDIT_JOURNAL_GUID = 'journalGUID';
const AUDIT_SERVICE = 'serviceName';
const AUDIT_SCHEMA = 'schema';
const AUDIT_DB = 'dbName';
const AUDIT_TEMPLATE = 'templateName';
const AUDIT_COLLECTION = 'tableName';
const AUDIT_COLLECTION_EXT = 'collectionExtension';
const AUDIT_RECORD_TOKEN = 'recordToken';
const AUDIT_SNAPSHOT = 'recordSnapShot';
const AUDIT_QUERY = 'query';
const AUDIT_ACCESS_CLIENT = 'accessClient';
const AUDIT_ACCESS_USER = 'accessUser';
const AUDIT_USER_ROLE = 'accessUserRole';
const AUDIT_OPERATION = 'operation';
const AUDIT_ACCESS_ALLOWED = 'accessAllowed';
// mongo journaling collection
const COLLECTION_MONGO_JOURNAL = 'gaJournal';
const COLLECTION_MONGO_JOURNAL_EXT = '_jnl';
const JOURNAL_SYSEV_TOK = 'systemEventGUID';
const JOURNAL_AUD_TOK = 'auditToken';
const JOURNAL_RECORD_GUID = 'recordGUID';
const JOURNAL_RESTORE_QUERY = 'restoreQuery';
const JOURNAL_HISTORY = 'journalHistory';
const JOURNAL_HISTORY_DATE_RESTORED = 'restoredOn';
const JOURNAL_HISTORY_RESTORED_EVENT_GUID = 'restorationEventGUID';
const JOURNAL_HISTORY_RESTORED_BY = 'restoredBy';
const JOURNAL_HISTORY_RESTORED_REASON = 'restoredReason';
// mysql product registrations table (for testing migrations)
const COLLECTION_PDO_PROD_REGS = 'gaProductRegistrations';
const COLLECTION_PDO_PROD_REGS_EXT = '_prg';
// mongo product sessions collection
const COLLECTION_MONGO_PROD_SESS = 'gaProductSessions';
const COLLECTION_MONGO_PROD_SESS_EXT = '_pse';
const PSE_IID = 'iid';
const PSE_SID = 'sid';
const PSE_IP = 'ip';
const PSE_FIRST_SEEN = 'firstSeen';
const PSE_LAST_SEEN = 'lastSeen';
// mongo product session users collection
const COLLECTION_MONGO_PSU = 'gaProductSessionUsers';
const COLLECTION_MONGO_PSU_EXT = '_psu';
const PSU_SID = 'sid';
const PSU_UID = 'uid';
const PSU_FIRST_SEEN = 'firstSeen';
const PSU_LAST_SEEN = 'lastSeen';
// mongo warehouse collection (internal table)
const COLLECTION_MONGO_WAREHOUSE = 'gaWarehouse';
const COLLECTION_MONGO_WAREHOUSE_EXT = '_whd';
// mongo migrations collection (internal table)
const COLLECTION_MONGO_MIGRATIONS = 'gaMigrations';
const COLLECTION_MONGO_MIGRATIONS_EXT = '_mig';
// mongo constants for both migrations and warehousing
const MWH_SOURCE_SCHEMA = 'sourceSchema';
const MWH_SOURCE_TABLE = 'sourceTable';
const MWH_DEST_SCHEMA = 'destinationSchema';
const MWH_DEST_TABLE = 'destinationTable';
const MWH_DATE_STARTED = 'dateStarted';
const MWH_NUM_RECS_SOURCE = 'numRecsInSource';
const MWH_NUM_RECS_IN_QUERY = 'numRecsInQuery';
const MWH_NUM_RECS_MOVED = 'numRecsMigrated';
const MWH_NUM_RECS_DROPPED = 'numRecsDropped';
const MWH_LAST_REC_WRITTEN = 'lastRecordWritten';
const MWH_DATE_COMPLETED = 'dateCompleted';
const MWH_STOP_REASON = 'reasonProcessingStopped';
const MWH_ERROR_CAT = 'processingErrors';
const MWH_QUERY = 'mwhQuery';
const MWH_QUERY_DATA = 'mwhQueryData';
const MWH_DELETE_TYPE = 'deleteSourceType';
const MWH_SOURCE_URI = 'sourceURI';
const MWH_REPORT = 'resultsReport';
// mysql test collection
const COLLECTION_PDO_TEST = 'gaTest';
const COLLECTION_MYSQL_TEST_SQK = 'myPDOTest';
const COLLECTION_PDO_TEST_EXT = '_tst';
// metrics collection
const COLLECTION_MONGO_METRICS = 'gaMetrics';
const COLLECTION_MONGO_METRICS_EXT = '_met';
// graphs collection
const COLLECTION_MONGO_GRAPHS = 'gaGraphs';
const COLLECTION_MONGO_GRAPHS_EXT = '_gra';
// DB constants - same for all schemas
const DB_PKEY = 'id';
const DB_HISTORY = 'history';
const DB_STATUS = 'status';
const DB_TOKEN = 'token';
const DB_WH_TOKEN = 'whToken';
const DB_WH_EVENT_GUID = 'whEventGuid';
const DB_WH_CREATED = 'whCreated';
const DB_TIMER = 'timer';
const DB_CREATED = 'createdDate';
const DB_ACCESSED = 'lastAccessedDate';
const DB_EVENT_GUID = 'eventGUID';
// Logs/Metrics columns
const LOG_FILE = 'file';
const LOG_METHOD = 'method';
const LOG_LINE = 'line';
const LOG_CLASS = 'class';
const LOG_LEVEL = 'level';
const LOG_VALUE = 'levelValue';
const LOG_MESSAGE = 'message';
const LOG_STACK_TRACE = 'stackTrace';
const LOG_TIMER = 'timer';
const LOG_MAX_LINES = 100;
const LOG_EVENT_GUID = 'eventGUID';
const LOG_EVENT = 'event';
const LOG_CREATED = 'created';
// Graphs constants
const GRAPH_KEY = 'key';
const GRAPH_VALUE = 'value';
const GRAPH_SCHEMA = 'schema';
const GRAPH_SERVICE = 'service';
const GRAPH_LOCATION = 'location';
const GRAPH_COMMENT = 'comment';
const GRAPH_LABEL = 'label';
const GRAPH_COLLECTION = 'collection';
const GRAPH_DBO = 'dbo';
const GRAPH_EVENT = 'event';
const GRAPH_BROKER = 'broker';
const GRAPH_TIMER = 'timer';
const GRAPH_DATE = 'date';
const GRAPH_FILE = 'file';
const GRAPH_METHOD = 'method';
const GRAPH_LINE = 'line';
// SystemEvents columns
const SYSTEM_EVENT_NAME = 'eventName';
const SYSTEM_EVENT_STATUS = 'eventStatus';
const SYSTEM_EVENT_TYPE = 'eventType';
const SYSTEM_EVENT_CLASS = 'eventClass';
const SYSTEM_EVENT_START = 'eventStart';
const SYSTEM_EVENT_END = 'eventEnd';
const SYSTEM_EVENT_PEAK = 'eventPeak';
const SYSTEM_EVENT_TIMER = 'eventTimer';
const SYSTEM_EVENT_AT_RESULTS = 'atJobData';
const SYSTEM_EVENT_DURATION = 'eventDuration';
const SYSTEM_EVENT_BROKER_EVENT = 'brokerEvent';
const SYSTEM_EVENT_BROKER_GUID = 'brokerGUID';
const SYSTEM_EVENT_COUNT = 'eventCount';
const SYSTEM_EVENT_COUNT_TOTAL = 'eventCountTotal';
const SYSTEM_EVENT_OGUID = 'originalGUID';
const SYSTEM_EVENT_FK_SESSION_GUID = 'idses';
const SYSTEM_EVENT_FK_USER_GUID = 'idusr';
const SYSTEM_EVENT_BROKER_ROOT_GUID = 'brokerRootGUID';
const SYSTEM_EVENT_NUM_EVENTS = 'numberEventsProcessed';
const SYSTEM_EVENT_CODE_LOC = 'eventCodeLocation';
const SYSTEM_EVENT_ERROR_STACK = 'eventErrorStack';
const SYSTEM_EVENT_META_DATA = 'eventMetaData';
const SYSTEM_EVENT_NOTES = 'eventNotes';
const SYSTEM_EVENT_KEY = 'eventKey';
const SYSTEM_EVENT_VAL = 'eventValue';
const SYSTEM_EVENT_DATA = 'sysEvData'; // used to piggyback sysEv data in an audit payload
// warehousing data template vars (consistent across schemas)
const WH_SUPPORTED = 'supported';
const WH_REMOTE_SUPPORT = 'remoteSupport';
const WH_AUTOMATED = 'automated';
const WH_DYNAMIC = 'dynamic';
const WH_INTERVAL = 'interval';
const WH_QUALIFIER = 'qualifier';
const WH_OVERRIDE = 'override';
const WH_DELETE = 'deleteState';
const WH_INDEXES = 'whIndexes';
const WH_TEMPLATE = 'whTemplate';
// donors fields/constants
const DONORS_TRANS_COUNT = 'transactionCount';
const DONORS_DTCC = 'donationsToCurrentCause';
const DONORS_TOTAL_DONATIONS = 'totalDonations';
const DONORS_SDWC = 'shareDataWithCause';
const DONORS_CID = 'idcau';
const DONORS_CAUSE_TITLE = 'causeTitle';
const DONORS_UNK_FOREIGN_ID = 'idxxx'; // todo - change this
// transactions fields/constants
const TRANSACTIONS_ORDER_ID = 'idord';
const TRANSACTIONS_TYPE = 'type';
const TRANSACTIONS_DESCRIPTION = 'description';
const TRANSACTIONS_AMOUNT = 'amount';
const TRANSACTIONS_EVENT_DATE = 'eventDate';
const TRANSACTIONS_START_DATE = 'startDate';
const TRANSACTIONS_END_DATE = 'endDate';
const TRANSACTIONS_META_DATA = 'metaData';
const TRANSACTIONS_MD_TRAVEL_END_DATE = 'travelEndDate';
const TRANSACTIONS_MD_PARTNER = 'partner';
const TRANSACTION_MD_PRODUCT_ID = 'idprd';
const TRANSACTIONS_MD_TRAVEL_START_DATE = 'travelStartDate';
const TRANSACTIONS_MD_CUSTOMER_ID = 'idcus';
const TRANSACTIONS_MD_OFFER_NUMBER = 'offerNum';
const TRANSACTIONS_MD_ENV = 'env';
const TRANSACTIONS_DEST_CITY_NAME = 'destCityName';
const TRANSACTIONS_DEST_STATE_CODE = 'destStateCode';
const TRANSACTIONS_DEST_COUNTRY_CODE = 'destCountryCode';
const TRANSACTIONS_DONOR_ID = 'iddon';
const TRANSACTIONS_CID = 'idxxx'; // todo - change this
const TRANSACTIONS_CAUSE_TITLE = 'causeTitle';
// SMAX API fields
const SMAX_COMPANY_NAME = 'companyName';
const SMAX_COMPANY_CONTACT_INFO = 'companyContactInfo';
const SMAX_COMPANY_CONTACT_INFO_ADDRESS1 = 'companyAddress1';
const SMAX_COMPANY_CONTACT_INFO_ADDRESS2 = 'companyAddress2';
const SMAX_COMPANY_CONTACT_INFO_CITY = 'companyCity';
const SMAX_COMPANY_CONTACT_INFO_STATE = 'companyState';
const SMAX_COMPANY_CONTACT_INFO_ZIP = 'companyZIP';
const SMAX_COMPANY_PHONES = 'companyPhones';
const SMAX_COMPANY_PHONES_VOICE = 'companyPhonesVoice';
const SMAX_COMPANY_PHONES_FAX = 'companyPhonesFax';
const SMAX_COMPANY_CONTACTS = 'companyContacts';
const SMAX_COMPANY_CONTACTS_EMPLOYEE_NAME = 'employeeName';
const SMAX_COMPANY_CONTACTS_EMPLOYEE_EMAIL = 'employeeEmail';
const SMAX_COMPANY_CONTACTS_EMPLOYEE_PHONE_VOICE = 'employeePhoneVoice';
const SMAX_COMPANY_CONTACTS_EMPLOYEE_PHONE_FAX = 'employeePhoneFax';
const SMAX_COMPANY_REGISTERED = 'dateRegistered';
const SMAX_COMPANY_LICENSE_DURATION = 'licenseDuration';
const SMAX_COMPANY_AUTHORIZED_BY = 'authorizedBy';
const SMAX_COMPANY_INTERNAL_NOTES = 'internalNotes';
const SMAX_LICENSE_TYPE = 'licenseType';
const SMAX_TLTI = 'tlti'; // two letter template identifier
// API license types
const SMAX_API_LICENSE_TYPE_PAID = 'PAID';
const SMAX_API_LICENSE_TYPE_EVAL = 'EVAL';
const SMAX_API_LICENSE_TYPE_BETA = 'BETA'; // includes alpha
const SMAX_API_LICENSE_TYPE_PART = 'PARTNER';
const SMAX_API_LICENSE_TYPE_TEST = 'TEST'; // for internal testing/development
// session fields/constants
const SESSION_EXPIRES = 'sessionExpires';
const SESSION_CLOSED = 'sessionClosed';
const SESSION_DURATION = 'sessionDuration';
const SESSION_CUSTOM_FIELD = 'sessionCustomField';
const SESSION_CUSTOM_VALUE = 'sessionCustomValue';
const SESSION_FK_USER = 'idusr';
const SESSION_LEVEL = 'sessionLevel';
const SESSION_CREATED_WITH = 'createdWith';
const SESSION_ACTION = 'action';
const SESSION_AUTH_PROVIDER = 'authProvider';
// users fields/constants
const USER_ACCOUNT_SSO = 'accountSSO';
const USER_AUTH_DATA = 'authData';
const USER_EMAIL_VERIFIED = 'emailVerified';
const USER_FBID = 'fbid';
const USER_VERIFIED_HUMAN = 'humanVerified';
const USER_MEMBERSHIP_PLAN = 'membershipPlan';
const USER_NOTES = 'notes';
const USER_PASSWORD = 'password';
const USER_PASSWORD_UPDATED = 'passwordLastUpdated';
const USER_PASSWORD_LAST_THREE = 'lastThreePasswords';
const USER_PARTNER_API_KEY = 'partnerAPIKey';
const USER_PROMO_SIGN_UP_ID = 'promoSignUpId';
const USER_TEMP_PASSWORD = 'tempPassword';
const USER_TZ = 'timezone';
const USER_LEAP_CONVERTED = 'userLeapConverted';
const USER_USERNAME = 'username';
const USER_SECONDARY_EMAIL = 'secondaryEmail';
const USER_WEBHOOK_RETRIES = 'webhookRetries';
const USER_TYPE = 'userType';
const USER_FINANCIALS = 'userFinancials'; // sub-collection heading
const USER_FINANCIALS_TOTAL_DONATIONS = 'totalDonations';
const USER_FINANCIALS_TOTAL_EARNINGS = 'totalEarnings';
const USER_FINANCIALS_CASHBACK_BANK = 'cashbackBank';
const USER_FINANCIALS_CASHBACK_DONATION = 'cashbackDonations';
const USER_FINANCIALS_CURRENT_BALANCE = 'currentBalance';
const USER_FINANCIALS_EARNING_TIER = 'earningTier';
const USER_FINANCIALS_PAYMENTS = 'Payments'; // sub-collection heading
const USER_FINANCIALS_PAYMENTS_ADDRESS1 = 'paymentAddress1';
const USER_FINANCIALS_PAYMENTS_ADDRESS2 = 'paymentsAddress2';
const USER_FINANCIALS_PAYMENTS_CITY = 'paymentCity';
const USER_FINANCIALS_PAYMENTS_COUNTRY = 'paymentCountry';
const USER_FINANCIALS_PAYMENTS_FULL_NAME = 'paymentFullName';
const USER_FINANCIALS_PAYMENTS_PLAN = 'paymentPlan';
const USER_FINANCIALS_PAYMENTS_STATE = 'paymentState';
const USER_FINANCIALS_PAYMENTS_STATUS = 'paymentStatus';
const USER_FINANCIALS_PAYMENTS_ZIP = 'paymentsZip';
const USER_FINANCIALS_PAYMENTS_VERIFIED = 'paymentVerified';
const USER_FINANCIALS_PAYMENTS_META = 'paymentMeta';
const USER_FINANCIALS_PAYMENTS_TYPE = 'paymentType';
const USER_FINANCIALS_PENDING_BALANCE = 'pendingBalance';
const USER_FINANCIALS_CUSTOMER_ID = 'customerId';
const USER_FINANCIALS_STRIPE_RECIPIENT_VERIFIED = 'stripeRecipientVerified';
const USER_FINANCIALS_TIN_FINGERPRINT = 'tinFingerprint';
const USER_FINANCIALS_TIN_TYPE = 'tinType';
const USER_SPORTS = 'sports'; // sub-collection heading
const USER_SPORTS_FAV_ATHLETES = 'favoriteAthletes';
const USER_SPORTS_FAV_TEAMS = 'favoriteTeams';
const USER_SPORTS_FAV_SPORTS = 'favoriteSports';
const USER_CHARITIES = 'charities'; // sub-collection heading
const USER_CHARITIES_SELECTED_CAMPAIGN = 'selectedCampaign';
const USER_CHARITIES_SELECTED_CAMPAIGN_META = 'selectedMeta';
const USER_CHARITIES_SELECTED_CAMPAIGN_TITLE = 'selectedTitle';
const USER_REFERRALS = 'referrals'; // sub-collection heading
const USER_REFERRALS_EARNINGS = 'earnings';
const USER_REFERRALS_CLICKS = 'clicks';
const USER_REFERRALS_EARNINGS_PENDING = 'earningsPending';
const USER_REFERRALS_SIGNUPS = 'signups';
const USER_REFERRALS_RID = 'rid';
const USER_PII = 'personalInformation'; // sub-collection heading
const USER_PII_ADDRESS = 'address';
const USER_PII_AGE_RANGE = 'ageRange'; // should be derived from the next field
const USER_PII_BIRTHDAY = 'dob';
const USER_PII_COUNTRY_CODE = 'countryCode';
const USER_PII_EMAIL = 'email';
const USER_PII_SECONDARY_EMAIL = 'altEmail';
const USER_PII_FNAME = 'firstName';
const USER_PII_GENDER = 'gender';
const USER_PII_HOMETOWN = 'homeTown';
const USER_PII_LANGUAGES = 'languages';
const USER_PII_LNAME = 'lastName';
const USER_PII_LEGAL_NAME = 'legalName';
const USER_PII_LOCALE = 'locale';
const USER_PII_LOCATION = 'location';

706
common/errorCatalog.php Normal file
View File

@@ -0,0 +1,706 @@
<?php
/**
* Created by PhpStorm.
* User: mshallop
* Date: 6/7/17
* Time: 3:24 PM
*/
// error levels by string
const ERROR_DEBUG = 'debug'; // used for debug messages - will never be output outside of dev env
const ERROR_METRICS = 'timer'; // used for metrics logging only
const ERROR_DATA = 'data'; // user input or data validation error
const ERROR_INFO = 'info'; // general information, such as console messages
const ERROR_ERROR = 'error'; // general processing error
const ERROR_WARN = 'warning'; // pretty damn serious - service may not be stable
const ERROR_FATAL = 'fatal'; // damn serious - loss of user data or inability to continue processing
const ERROR_EVENT = 'event'; // event: reports timer data for top-level broker events
// error levels by value (for range searching)
const ERROR_EVENT_VAL = -1;
const ERROR_METRICS_VAL = 0;
//const ERROR_TRACE_VAL = 1;
const ERROR_DEBUG_VAL = 2;
const ERROR_DATA_VAL = 3;
const ERROR_INFO_VAL = 4;
const ERROR_ERROR_VAL = 5;
const ERROR_WARN_VAL = 6;
const ERROR_FATAL_VAL = 7;
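As the comment above notes, the *_VAL constants exist so log queries can range-filter on severity instead of matching level strings. A standalone sketch (constants redeclared; the sample entries are illustrative):

```php
<?php
// Standalone sketch: range-filter log entries by numeric severity.
const ERROR_INFO_VAL  = 4;
const ERROR_FATAL_VAL = 7;

$entries = [
    ['level' => 'debug', 'levelValue' => 2],
    ['level' => 'error', 'levelValue' => 5],
    ['level' => 'fatal', 'levelValue' => 7],
];

// keep everything at info severity or above
$serious = array_filter($entries, fn ($e) => $e['levelValue'] >= ERROR_INFO_VAL);
echo count($serious); // 2
```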
// error constants
const ERROR_FILE = 'file';
const ERROR_LINE = 'line';
const ERROR_METHOD = 'method';
const ERROR_FUNCTION = 'function';
const ERROR_CLASS = 'class';
const ERROR_TYPE = 'type';
const ERROR_MESSAGE = 'message';
// error stubs
const ERROR_STUB_EXPECTING = ', expecting: ';
const ERROR_STUB_RECEIVED = ', received: ';
const ERROR_TDE = 'TEMPLATE DEFINITION ERROR: ';
const ERROR_STUB_SET_MEMCACHED = 'SET Memcached: ';
const ERROR_STUB_RESULT_CODE = 'SET Result Code: ';
const ERROR_STUB_NOTDEF = 'not/incorrectly defined';
const ERROR_EVENT_COUNT = '(%d/%d)';
const ERROR_ENV_INVALID = 'invalid environment: ';
const ERROR_ENV = 'environment errors for a service were encountered - check log files';
const ERROR_ENV_INVALID2 = 'this request requires %s environment';
const STUB_VALIDATED = ' validated';
const STUB_PROCESSED = '%d %s records processed';
const STUB_LOC = '%s:%s@%d'; // basename(__FILE__), __METHOD__, __LINE__
const STUB_JSON_ERROR = 'json error: ';
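STUB_LOC is a sprintf template for a code location, fed exactly as its inline comment documents: basename(__FILE__), __METHOD__, __LINE__. A standalone sketch (the wrapper function name is illustrative):

```php
<?php
// Standalone sketch: format a code location with the STUB_LOC template.
const STUB_LOC = '%s:%s@%d';

function currentLocation(): string
{
    // outside a class, __METHOD__ resolves to the plain function name
    return sprintf(STUB_LOC, basename(__FILE__), __METHOD__, __LINE__);
}

echo currentLocation(); // e.g. dbCatalog.php:currentLocation@8
```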
// config (class) errors
const CONFIG_FTL_JSON = 'CONFIG bootstrap failed to load JSON config file: ';
const CONFIG_FTL_INI = 'CONFIG bootstrap failed to load INI config file: ';
const CONFIG_FTL_XML = 'CONFIG bootstrap failed to load XML config file: ';
const CONFIG_UNK_ENV = 'CONFIG bootstrap loaded an unknown environment: ';
const CONFIG_DB_NOT_ENABLED = 'DB %s has not been enabled - cannot instantiate a template for this schema';
const CONFIG_DB_ENV_NOT_ENABLED = 'DB %s has not been enabled for the %s env - cannot instantiate class: %s';
// config XML errors
const CONFIG_XML_SERVICE_404 = 'could not locate %s (service) config for %s';
const CONFIG_XML_SERVICE_SETTING = 'XML setting for %s is incorrect: ';
const CONFIG_XML_SERVICE_VIOLATION = 'Production systems require these services be started on discrete instances: ';
const CONFIG_XML_ENV_UNK = 'Configured environment: %s is not currently supported';
const CONFIG_XML_DUP_VAR = 'Potential for a duplicated XML variable exists - check declaration for var: ';
const CONFIG_XML_LOAD = 'Could not load or find the configuration XML for Namaste';
// start-up errors
const ERROR_IPL_STOP_BROKERS = 'You should now manually stop all brokers';
const ERROR_IPL_BROKER_PING = 'Broker %s did not respond to ping event';
const ERROR_IPL_CONFIG = 'A configuration (XML) mismatch was detected between the %s and %s services.';
// 404 errors
const ERROR_CONFIG_404 = 'config class not loaded';
const ERROR_CONFIG_GENERIC = 'an error was encountered accessing the framework configuration - check logs';
const ERROR_CONFIG_TYPE = 'XML Param %s is mis-configured by type: expecting %s, received: ';
const ERROR_CONFIG_RESOURCE_404 = 'could not find/load config for resource: ';
const ERROR_CLASS_404 = 'unable to load class: ';
const ERROR_SERVICE_404 = 'service not available: ';
const ERROR_SERVICE_UNK = 'db service not defined';
const ERROR_SERVICE_REG = 'service registration for %s/%s has failed - check log files';
const ERROR_SERVICE_SOURCE_UNK = '%s is not a known log type';
const ERROR_LOCAL_SERVICE_404 = 'service is not available locally to this node - check configuration';
const ERROR_DATA_404 = 'data was not found - check log files';
const ERROR_DATA_KEY_404 = 'array is missing key: ';
const ERROR_DATA_META_404 = 'meta data was not found';
const ERROR_DATA_META_KEY_404 = 'required meta data field was not found: ';
const ERROR_DATA_METHOD_404 = 'method %s failed to post return data';
const ERROR_FILE_404 = 'file not found: ';
const ERROR_META_CLIENT_404 = 'meta payload missing client declaration';
const ERROR_META_CLIENT_UNK = 'Meta Client: %s is not a recognized or valid client name';
const ERROR_META_XML_CLIENT_404 = 'the meta client text was not found in the meta xml config';
const ERROR_LIB_404 = 'library not found: ';
const ERROR_KEY_404 = 'key is not a member of this class or cache map: ';
const ERROR_PARAM_404 = 'missing or empty parameter: ';
const ERROR_REQUEST_404 = ' request is empty';
const ERROR_RESOURCE_404 = 'resource not available: ';
const ERROR_RESOURCE_ENV_404 = 'could not obtain a resource: %s for env: %s';
const ERROR_RESOURCE_DDB_404 = 'DynamoDB, as a resource, cannot be instantiated';
const ERROR_REMOTE_RESOURCE_404 = 'The remote db resource for schema: %s was lost';
const ERROR_CLASS_SCHEMA_404 = 'class schema was not loaded';
const ERROR_TEMPLATE_DIR_404 = 'could not load template list';
const ERROR_TEMPLATE_FILE_404 = 'missing template file: ';
const ERROR_TEMPLATE_FILE_NOT_AUTH = 'template file %s cannot be instantiated by Partners';
const ERROR_CLIENT_AUTH_TOKEN_404 = 'missing partner auth token required for API calls';
const ERROR_CLIENT_AUTH_TOKEN_BAD = 'Client/Partner Auth token is not a valid token value';
const ERROR_CLIENT_AUTH_TOKEN_REJ = 'Client/Partner authentication failed - check auth token value';
const ERROR_CLIENT_AUTH_TOKEN_SEARCH = 'Unsuccessful search launched on SMAXAPI with token value: ';
const ERROR_CLIENT_AUTH_TOKEN_MISMATCH = 'partner API token does not allow access to the requested data';
const ERROR_TEMPLATE_BAD = 'event requires template: %s defined in meta data';
const ERROR_TEMPLATE_WRONG = 'template declaration is incorrect for the event';
const ERROR_TEMPLATE_MISSING_FIELD = 'template %s is missing declaration for: %s';
const ERROR_TEMPLATE_SERVICE_ENV = 'template %s is not supported for %s service';
const ERROR_SERVICES_UNDEF = 'services catalog has not been defined';
const ERROR_SERVICE_NOT_LOCAL = 'The service: %s is not configured as local to this instance';
const ERROR_SERVICE_NOT_ACTIVE = 'The service: %s is not configured as active';
const ERROR_EVENT_404 = 'unknown event: ';
const ERROR_STRING_404 = 'could not find %s in target string';
const ERROR_EVENT_GUID_404 = 'meta data is missing the event guid for class: ';
const ERROR_CLASS_EXT_404 = 'class extension is not defined';
const ERROR_BIN_FIELD_404 = 'binary field is empty or missing: ';
const ERROR_ARRAY_KEY_404 = 'array missing expected key: ';
const ERROR_ARRAY_KEY_UNK = 'array key: %s is not a member';
const ERROR_GRID_MISSING_TOKENS = 'data payload of %d records was missing %d tokens';
const ERROR_SUBC_404 = 'this class does not support sub-collections';
const ERROR_SUBC_RECORD_404 = 'unable to locate the sub-collection record by GUID: ';
const ERROR_SUBC_RECORD_NO_KEY = 'could not find a sub-collection record with a GUID key';
const ERROR_SUBC_RECORD_KEY_FOUND = 'subC record key (%s) still exists in the sub-collection';
const ERROR_SUBC_KEY_404 = 'subCollection key %s was not located in the subC-cacheMap';
const ERROR_SUBC_SCALAR = 'subCollections may only contain arrays - scalars are not allowed';
const ERROR_DATA_META_EG_404 = 'unable to stat the event GUID';
const ERROR_VIEW_404 = 'view requested: %s is not registered with class: %s';
const ERROR_SP_404 = 'stored procedure: %s is not registered with class: %s';
const ERROR_PF_404 = 'Protected fields are not initialized or in an incorrect format for processing this request';
// json errors
const ERROR_JSON_NONE = 'no errors';
const ERROR_JSON_DEPTH = 'max stack depth exceeded';
const ERROR_JSON_STATE_MISMATCH = 'underflow or the modes mismatch';
const ERROR_JSON_CTRL_CHAR = 'unexpected control character found';
const ERROR_JSON_SYNTAX = 'syntax error, json malformed';
const ERROR_JSON_UTF8 = 'malformed UTF-8 chars, possible encoding error';
const ERROR_JSON_UNK = 'unknown json error';
const ERROR_JSON_DIR = 'direction (enc or dec) not specified';
const ERROR_JSON_TYPE = 'a value or type that cannot be json-decoded was given';
const ERROR_JSON_RECURSE = 'one or more recursive references in the string to be json-decoded';
const ERROR_JSON_INF_NAN = 'one or more NaN (not-a-number) or INF (infinite) values in the string to be json-decoded';
const ERROR_JSON_API = 'json returned null or false but last-error reported as none';
const ERROR_JSON_OP = 'wrong json operation requested for the data type given';
const ERROR_JSON_RET_NULL = 'json operation returned null';
const ERROR_JSON_RET_FALSE = 'json operation returned false';
const ERROR_JSON_DECODE_FAIL = 'unable to decode json string';
const ERROR_JSON_ENCODE_FAIL = 'unable to convert to json format';
const ERROR_GZIP_COMPRESS_FAIL = 'failed to successfully compress data';
const ERROR_GZIP_UNCOMPRESS_FAIL = 'failed to successfully uncompress data';
const ERROR_B64_ENCODE_FAIL = 'unable to convert to base64 format';
const ERROR_B64_DECODE_FAIL = 'could not restore from b64 format';
const ERROR_ENCODING_FAIL_GENERIC = 'encoding operation has failed - check log files';
const ERROR_FAILED_TO_INSTANTIATE = 'failed to instantiate class: ';
const ERROR_REC_INSTANTIATION_FAIL = 'class %s failed to instantiate with GUID: ';
const ERROR_BROKER_CLIENT_INSTANTIATION = 'was not able to instantiate a %s broker client';
const ERROR_METHOD_FAILED = 'failed to successfully execute method %s for class %s - check logs';
const ERROR_FACTORY_LOAD = 'could not instantiate factory class - check logs';
const ERROR_FACTORY_LOAD_BROKER = 'could not instantiate factory class from broker event: ';
const ERROR_UNKNOWN_EVENT = 'do not know how to handle this event: ';
const ERROR_UNKNOWN = 'you should not be here';
const ERROR_NO_EVENT = 'no event was provided';
const ERROR_NO_RET_DATA = 'no return data payload built in the event handler';
const ERROR_NO_DATA = 'there is no data in the class object';
const ERROR_NO_CON_MSG = 'did not prepare a console message';
const ERROR_DATA_FIELD_NOT_MEMBER = 'this field is not a class member: ';
const ERROR_DATA_WH_INSERT_FAIL = 'the WH request returned an inconsistent count for records processed:';
const ERROR_DATA_RECORD_COUNT = 'expecting %d records but received %d';
const ERROR_DATA_REC_COUNT_UNDEF = 'query record count did not return a valid value';
const ERROR_DATA_INCONSISTENT_COUNT = 'count for class data is inconsistent';
const ERROR_CACHE_DATA_CHECKSUM = 'mismatch in checksum values for cached data object: ';
// nosql and sql errors
const ERROR_NOSQL_SELECT = 'could not select DB or Collection';
const ERROR_NOSQL_CREATE = 'create nosql record failed - check log files';
const ERROR_NOSQL_UPDATE = 'update record failed - check log files';
const ERROR_NOSQL_DELETE = 'delete query failed - check log files';
const ERROR_NOSQL_FETCH = 'fetch nosql record request failed - check log files';
const ERROR_SUBC_FETCH = 'sub-collection fetch failed - check log files';
const ERROR_NOSQL_SORT = 'sort request generated an exception: ';
const ERROR_NOSQL_SCHEMA = 'could not generate sequence value: ';
const ERROR_NOSQL_BD = 'batch delete request returned errors - check logs';
const ERROR_NOSQL_BU = 'batch update request returned errors - check logs';
const ERROR_NOSQL_BC = 'batch create request returned errors - check logs';
const ERROR_DATA_QUERY_BUILD = 'query failed to build';
const ERROR_REMOTE_QUERY_FAIL = 'remote query failed to execute - check logs';
const ERROR_DATA_HAVING_BUILD = 'having clause failed to build';
const ERROR_DATA_GROUP_ORDER_BY_BUILD = 'group/order by clause failed to build';
const ERROR_DATA_ORDER_BY_INVALID_VALUE = 'order-by value is not valid: ';
// ddb errors
const ERROR_DDB_RECORD_COUNT = 'ddb query returned %d records, expected %d';
const ERROR_DDB_EXP_EQ_Q1 = 'expecting an operand of EQ for a primary key search operand';
const ERROR_DDB_EXP_VAL_Q1 = 'expecting one (and only one) value for a primary key search';
const ERROR_DDB_NO_HASH_IDX = 'the attribute: %s submitted is not an indexed field';
const ERROR_DDB_CONNECT = 'failed to connect to DDB (DynamoDB) resource';
const ERROR_DDB_QUERY = 'query failed: ';
const ERROR_DDB_INSTANTIATE = 'could not instantiate a DDB resource: ';
// mysql db errors
const ERROR_SQL_FTL_COLUMNS = 'failed to load table columns';
const ERROR_SQL_FTL_INDEXES = 'failed to load table indexes';
const ERROR_SQL_NOT_PREP_STMNT = 'this query should be executed as a prepared statement: ';
const ERROR_SQL_LOST_PREP_QUERY = 'prepared-statement query submitted to non-prepared query parser';
const ERROR_SQL_NOT_PREP_QUERY = 'expecting a prepared query statement -- this query is not one of those';
const ERROR_SQL_TEMPLATE_DBO_404 = 'SQL template is missing DB Objects declarations';
const ERROR_SQL_TEMPLATE_DBO_VER_404 = 'SQL Template %s missing version declaration within PDO_SQL block: ';
const ERROR_SQL_ENV_NOT_ENABLED = 'Env: %s is not enabled. Check your XML configuration under PDO section.';
// PDO Errors
const ERROR_PDO_ENABLED = 'PDO (mysql) support is not enabled in your current configuration';
const ERROR_PDO_CONNECT = 'failed to connect to PDO (mariaDB) resource';
const ERROR_PDO_EXCEPTION = 'query (%s) raised pdo exception';
const ERROR_PDO_PREPARE = 'query (%s) failed to build (prepare)';
const ERROR_PDO_PARSE = 'query did not properly parse: ';
const ERROR_PDO_PREPARE_2 = 'query (%s) failed build second time (prepare)';
const ERROR_PDO_INDEX_BUILD = 'failed to build class indexes or field list - check log files';
const ERROR_PDO_INDEX_DROP = 'failed to drop indexes for table: ';
const ERROR_PDO_QUERY_ELEMENT_DATA_TYPE = 'expecting array for %s -- received: %s';
const ERROR_PDO_CQ_QUERY = 'error executing pre-query for fetching query tokens';
const ERROR_PDO_SP_404 = 'could not locate stored-procedure for class %s by name: %s';
const ERROR_PDO_BIND = 'failed to bind param value: %s to query: %s, position: %d ';
const ERROR_PDO_EXEC = 'failed to execute prepared query: ';
const ERROR_PDO_FETCH = 'fetch of PDO record request failed - check logs';
const ERROR_FETCH = 'an error was raised during a record fetch - check logs';
const ERROR_PDO_INVALID_EVENT = 'event %s not supported for this class: %s';
const ERROR_PDO_RECONNECT = 'Namaste was forced to reconnect to the PDO resource';
const ERROR_PDO_RECONNECT_FAIL = 'Namaste was unable to reconnect to the named PDO resource';
const ERROR_PDO_DROPPED = 'cannot connect to the PDO database - check log files!';
const ERROR_PDO_QUERY = 'PDO query failed to execute: ';
const ERROR_PDO_QUERY_BUILD = 'PDO query failed to build or returned empty set';
const ERROR_PDO_SLAVE = 'PDO slave access is not configured (enabled) for access';
const ERROR_PDO_SLAVE_ERROR = 'PDO Slave cannot be used for this db event: ';
const ERROR_PDO_SLAVE_DROPPED = 'cannot connect to the PDO slave - check log files!';
const ERROR_PDO_COUNT_FETCH_FAIL = 'could not retrieve the query count for: ';
const ERROR_PDO_ROW_COUNT_ERROR = 'row count is incorrect: expecting %d row(s), received: %d instead';
const ERROR_PDO_FC_SQL_404 = 'missing the first-commit table sql for table: %s, for release: %s';
const ERROR_PDO_ST_FAIL = 'PDO start transaction command failed to execute';
const ERROR_PDO_ROLLBACK = 'Rollback has failed - perhaps a DDL statement was previously executed';
const ERROR_PDO_COMMIT = 'commit command has failed - check log files and PDO schema now';
const ERROR_PDO_DROP_404 = 'drop statement missing from template definition for class: ';
const ERROR_PDO_CREATE_404 = 'create statement missing from template definition for class: ';
const ERROR_PDO_UPDATE_404 = 'update statement missing from template definition for class: ';
const ERROR_PDO_UPDATE_FAIL = 'update statement in template %s has failed for release version %s';
const ERROR_PDO_FC_CREATE = 'first-commit sql failed to process';
const ERROR_PDO_DROP_DEV = 'could not drop the dev table';
const ERROR_PDO_CURRENT_TABLE = 'current table: ';
const ERROR_PDO_DROP_AI = 'could not drop AI attribute from pkey on table: ';
const ERROR_PDO_NO_TRANS = 'transaction has not been started - cannot execute query request';
const ERROR_USER_REG_FAIL = 'register new-user request has failed - check logs';
// mongo db errors
const ERROR_MONGO_CONNECT = 'failed to connect to mongo resource';
const ERROR_MONGO_EXCEPTION = 'mongo exception raised: ';
const ERROR_MONGO_EXCEPTION_CONNECTION = 'framework trapped a mongo connection exception';
const ERROR_MONGO_EXCEPTION_AUTH = 'framework trapped an authentication exception';
const ERROR_MONGO_EXCEPTION_INVALID_ARGS = 'framework trapped a mongo invalid-argument exception';
const ERROR_MONGO_EXCEPTION_BW_DECL = 'framework trapped a mongo error instantiating bulkWrite class';
const ERROR_MONGO_EXCEPTION_BW_INS = 'framework trapped a mongo error executing a bulkWrite insert()';
const ERROR_MONGO_EXCEPTION_BW_EXEC = 'framework trapped a mongo error executing a bulkWrite';
const ERROR_MONGO_EXCEPTION_RUNTIME = 'framework trapped a mongo runtime exception';
const ERROR_MONGO_EXCEPTION_RUNTIME_URI = 'framework trapped a mongo runtime exception: uri format';
const ERROR_MONGO_EXCEPTION_BULK_WRITE = 'framework trapped a mongo bulkWrite exception';
const ERROR_MONGO_EXCEPTION_EXCEPTION = 'framework trapped a generic mongo exception';
const ERROR_MONGO_NOT_ENABLED = 'mongo resource: %s not enabled - check config if this is unexpected';
const ERROR_MONGO_RESOURCE_INVALID = 'mongo resource: %s - must be master, slave, WH master, or WH slave';
const ERROR_MONGO_WH_LEVEL_INVALID = 'mongo wh level: %s - is not a valid value';
const ERROR_MONGO_WH_LEVEL_404 = 'A warehouse level must be provided for a SEGUNDO resource request';
const ERROR_MONGO_LOCATION_INVALID = 'mongo location: %s - is not supported';
const ERROR_MONGO_LOCATION_DNE = 'mongo location: %s - has no configuration!';
const ERROR_MONGO_INSERT_COUNT = 'error in inserted record count: expecting: %d, reported: %d';
const ERROR_MONGO_TEMPLATE_INVALID = 'mongo template name is invalid: ';
const ERROR_MDB_IDX_FUZZY_NOT_IDX = 'index master array missing fuzzy index element: ';
const ERROR_MDB_DIAG_INDEXES = 'there is an error in the template index(es) definition - see log files';
const ERROR_MDB_INDEX_UNDECL = 'The %s index for %s has an undeclared column or label name: %s';
const ERROR_MDB_IDX_UNIQUE_NOT_IDX = 'index master array missing unique index element: ';
const ERROR_MDB_IDX_SPARSE_NOT_IDX = 'index master array missing sparse index element: ';
const ERROR_MDB_IDX_CONFLICT = 'index appears in both sparse and unique arrays: ';
const ERROR_MDB_IDX_MULTI_TYPE = 'A multiType index can only be applied to an array.subArray field using DOT notation, %s is not valid';
const ERROR_MDB_IDX_KEY_404 = 'Declared field: %s is not a defined member of the class';
const ERROR_MDB_IDX_LABEL_404 = 'Declared field: %s has not been defined as an index label';
const ERROR_MDB_SORT_404 = 'sort key, if specified, must be a non-empty associative array';
const ERROR_MDB_SORT_ARRAY_NOT = 'sort directive is not in array format';
const ERROR_MDB_SORT_DIR_404 = 'unknown sort direction directive: ';
const ERROR_MDB_FIELD_NOT_CACHED = 'requested field: %s not in the cacheMap for class: %s';
const ERROR_MDB_CACHE_NO_PKEYS = 'pkeys exempt from cache processing - value ignored';
const ERROR_MDB_CACHE_INTVAL = 'integer value encountered - ignored: sub-arrays use integers';
const ERROR_MDB_DBOP_GOOD_CACHE_BAD = 'db operation successful but unable to update cache';
const ERROR_MDB_NOT_ENABLED = 'mongoDB has not been enabled';
const ERROR_MDB_ENV_NOT_ENABLED = 'mongoDB template: %s requires %s environment which is not currently enabled';
const ERROR_MDB_SYS_EVENT_SAVE = 'saving system event has failed to successfully complete';
const ERROR_MDB_SYS_EVENT_UPDATE = 'updating system event has failed to successfully complete';
const ERROR_MDB_INVALID_RP = 'readPreference: %s is not a valid option';
const ERROR_MDB_FETCH_COUNT_FAIL = 'command to fetch query count failed with exception';
const ERROR_MDB_FETCH_FAIL = 'failed to fetch %s record using discriminant: ';
const ERROR_MDB_FETCH_COUNT_EXCESSIVE = 'query unexpectedly returned too many records';
const ERROR_MDB_UNK_ERROR = 'a database error has prevented successful processing of your request';
const ERROR_MDB_UNK_IDX_TYPE = 'this is an unknown index type: ';
const ERROR_MDB_QUERY_FAIL = '%s query failed to execute - check logs';
// failed event error messages
const MONGO_FAILED_EVENT_BAD_GUID = 'badGUID';
const MONGO_FAILED_EVENT_INSTANTIATE = 'failed to instantiate class: ';
const MONGO_FAILED_EVENT_INVALID_EVENT_DATA = 'bad data in the systemEvent record prevented processing';
const MONGO_FAILED_EVENT_SUF = 'failed to update the status of the session record';
const MONGO_FAILED_EVENT_EUF = 'failed to update the status of the system-event record';
const MONGO_FAILED_EVENT_SMF = 'failed to update the status of the sentMail record';
const MONGO_FAILED_EVENT_VUF = 'failed to update the status of the user record';
const MONGO_FAILED_EVENT_INSTANTIATE_DESC = 'class %s failed to instantiate on guid: %s';
const MONGO_FAILED_EVENT_BAD_GUID_DESC = 'guid (%s) is invalid';
const MONGO_FAILED_EVENT_SUF_DESC = 'failed to update the session status for record: ';
const MONGO_FAILED_EVENT_SMF_DESC = 'failed to update the sentMail status for record: ';
const MONGO_FAILED_EVENT_EUF_DESC = 'failed to update the system-event status for record: ';
const MONGO_FAILED_EVENT_VUF_DESC = 'failed to update the vaultUser status for record: ';
const MONGO_FAILED_EVENT_LHF_DESC = 'failed to update the lockHistory status for record: ';
const MONGO_FAILED_EVENT_CREATE = 'created new failed event: ';
const MONGO_FAILED_TOO_MANY_RECS = 'query returned too many records; expecting %d, received: ';
const MONGO_FAILED_EVENT_WRONG = 'expecting event: %s, received: %s instead';
// session errors
const ERROR_DATA_ARRAY_ARGV_EMPTY = 'did not receive expected argv param';
const ERROR_SESSION_ID_404 = 'session ID (token) is missing from data payload';
const ERROR_SESSION_EVENT_POST = 'session event post request failed';
// email errors
const ERROR_DIAG_EMAIL_MALFORMED = 'the email submitted did not pass validation';
const ERROR_EMAIL_DUPLICATE = 'the email: %s, is not available';
// cache errors
const ERROR_CACHE_MAP_404 = 'class has required cache-mapping but is missing the cacheMap';
const ERROR_CACHE_ADD_FAIL = 'could not cache element using key: ';
const ERROR_CACHE_FETCH_FAIL = 'could not fetch cache element using key: ';
const ERROR_CACHE_DATA_MALFORMED = 'expecting type %s for cache object: %s, received: %s';
const ERROR_CACHE_RESOURCE_404 = 'could not instantiate cache resource';
const ERROR_CACHE_MAP_FAIL = 'could not process cache mapping on: ';
const ERROR_CACHE_MAP_LOAD = 'failed to load cache map for: ';
const ERROR_CACHE_OP_FAIL_ON_KEY = 'INFO: cache operation (%d) could not complete because the key %s was not found';
const ERROR_CACHE_KEY_404 = 'cache key: %s was not found in the current cache-map for class: %s';
const ERROR_CACHE_MAP_KEY_404 = 'cacheMap key: %s was not found in the current cacheMap';
const ERROR_CACHE_CKSUM_DATA = 'checksum in the array does not match checksum parameter - did you pass the correct array?';
const ERROR_CACHE_CKSUM_MISMATCH = 'checksum passed: %s does not match calculated checksum: %s for data array passed';
const ERROR_CACHE_CKSUM_FAIL = 'checksum validation failed for class: %s, guid: %s';
const ERROR_CACHE_CKSUM_404 = 'checksum not found in cache payload for class: %s on guid: %s';
const ERROR_CACHE_ROUTE_FAIL = 'Could not build cache-map tasks on event: ';
const ERROR_CACHE_DIRECTION = '%s is not a valid cache direction';
const ERROR_CACHE_MAP_TYPE = 'expecting type data or query, not: ';
const ERROR_CACHE_GENERIC_FAIL = 'cache-mapping the data payload has failed - check log files';
const ERROR_CACHE_DELETE_FAIL = 'could not delete the following key from cache: ';
const ERROR_CACHE_SMASH_FAIL_USER = 'cache-clearance request failed processing - check log files';
const ERROR_CACHE_MASH_DATA = 'input data does not seem to be keyed by guids';
const ERROR_CACHE_MASH_FAIL = 'attempt to multi-set array of cache items failed - check log files';
const ERROR_CACHE_SMASH_FAIL_SYSTEM = 'cache smash failed on query: ';
// broker errors
const ERROR_BROKER_EXCEPTION = 'caught AMQP exception: ';
const ERROR_BROKER_EXCEPTION_TIMEOUT = 'caught AMQP timeout exception';
const ERROR_BROKER_EXCEPTION_RUNTIME = 'caught AMQP runtime exception';
const ERROR_BROKER_EVENT_UNKNOWN = 'broker has rejected the event as unknown: ';
const ERROR_BROKER_RESPONSE_BAD = 'broker response is malformed';
const ERROR_BROKER_REQUEST_BAD = 'broker request is malformed - maybe missing: ';
const ERROR_BROKER_REQUEST_FAILED = 'broker request failed to process successfully';
const ERROR_BROKER_TYPE_UNDEF = 'Broker type: %s is undefined';
const ERROR_BROKER_INTERNAL_CALL = 'internal request: %s to %s broker failed - check logs';
const ERROR_BROKER_CANCEL_EXCEPTION = 'caught AMQP Basic Cancel exception';
const ERROR_BROKER_IPL_FAIL = 'Not starting broker: %s because number of children is %s';
const ERROR_BROKER_REQUEST_404 = 'Broker request is empty';
const ERROR_BROKER_RESOURCE = 'could not create a broker resource';
const ERROR_BROKER_FETCH = 'remote fetch request has failed - check logs';
const ERROR_BROKER_QUEUE_DECLARE = 'failed to declare queue: ';
const ERROR_BROKER_CLIENT_DECLARE = 'failed to instantiate broker-client class: ';
const ERROR_BROKER_CLIENT_NOT_AUTH = 'client authorization error';
// general errors
const ERROR_CHECK_LOGS = 'check log files for more information';
const ERROR_GENERIC_CUSTOMER = 'an error has been raised preventing further processing - please contact support';
const ERROR_BAD_DATA_RECORD = 'problem with data record detected: ';
const ERROR_UNKNOWN_STATE = 'an unknown and unexpected state was returned: ';
const ERROR_REQ_META_KEY_404 = 'event (%s) requires, for this client (%s), the meta field (%s)';
const ERROR_REQ_FIELD_404 = 'missing expected field: %s from the request';
const ERROR_REQ_META_KEY_404_WB = 'API request requires the partner authorization token'; // wb = white box
const ERROR_EXCEPTION = 'the framework trapped an exception preventing further processing - check logs!';
const ERROR_TYPE_EXCEPTION = 'caught type (method invocation) exception - check log files';
const ERROR_TYPE_EXCEPTION_PARSE = 'caught parse exception: ';
const ERROR_DATA_BIN_CONV_FAIL = 'binary<->string conversion failed';
const ERROR_DATA_OBJ_2_ARY_FAIL = 'object -> array conversion failed';
const ERROR_DATA_INPUT_EMPTY = 'no data was received for param: ';
const ERROR_DATA_IMPORT = 'failed to import data array (%s) into class (%s) member';
const ERROR_DATA_ADD_FAIL = 'could not add data field %s to member: ';
const ERROR_DATA_INVALID_FORMAT = 'data is not in the expected format';
const ERROR_DATA_META_KEY_EMPTY = 'meta data field was found but is not set: ';
const ERROR_DATA_META_REJECTED = 'meta data field was rejected: ';
const ERROR_DATA_META_REJECTED_FOR_CLASS = 'meta data field %s was rejected for class: %s';
const ERROR_DATA_META_REQUIRED = 'meta data is required for this operation';
const ERROR_DATA_MISSING_ARRAY = 'missing data array: ';
const ERROR_DATA_ARRAY_FAIL = 'Array %s evaluated as empty when it should contain data';
const ERROR_DATA_RANGE = 'data value is out of range';
const ERROR_DATA_TYPE_MISMATCH = 'data type mismatch detected';
const ERROR_DATA_UNPACK = 'data did not unpack correctly';
const ERROR_DATA_VALIDATION_FIRST_PASS = 'data failed first-pass validation';
const ERROR_DATA_FORCE_CAST = 'data mismatch corrected: field %s received: %s, converted to %s';
const ERROR_DATA_TYPE_MISMATCH_DETAILS = 'field: %s is expecting type: %s but is type: %s - mismatch detected';
const ERROR_DATA_FIELD_DROPPED = 'data type mismatch caused field to be dropped: ';
const ERROR_DATA_FIELD_IGNORED = 'data field: %s was not bundled with the field list';
const ERROR_DATA_OBJECT_EMPTY = 'expecting object for %s, but var has evaluated as null';
const ERROR_DATA_INVALID_CLASS_KEY = 'key %s is not valid member of class: %s';
const ERROR_DATA_INVALID_CLASS_MEMBER = 'field: %s is not a valid member of class: %s';
const ERROR_META_VALIDATION_SECOND_PASS = 'meta data failed second-pass validation';
const ERROR_EC_NA = 'static ERROR and static CONFIG objects are not available';
const ERROR_EMPTY_METHOD = 'method is devoid of code - utterly devoid';
const ERROR_FW_IPL = 'framework failed to launch - check logs';
const ERROR_GCO_NA = 'global configuration object is not available';
const ERROR_INVALID_TEMPLATE = 'template submitted is not valid: ';
const ERROR_TEMPLATE_INSTANTIATE = 'could not instantiate this template class: ';
const ERROR_TEMPLATE_EG_DECL_404 = 'template is missing the event-GUID field for class: ';
const ERROR_INVALID_GUID = 'this is not a valid GUID: ';
const ERROR_INVALID_NAMED_GUID = 'the guid (%s) for field: %s is invalid';
const ERROR_INVALID_IP = 'this is not a valid IP: ';
const ERROR_META_INVALID_FORMAT_ARRAY = 'meta data received not in array format';
const ERROR_META_FIELD_404 = 'expected key: %s was not included in meta payload';
const ERROR_SUBC_FIELDS_COUNT = 'subCollection Column count must match subCollection Value count';
const ERROR_OPEN_LOG_FILE = 'could not open the log file: ';
const ERROR_OPEN_XML_FILE = 'could not load XML file: ';
const ERROR_SAVE_XML_FILE = 'could not save XML file to: ';
const ERROR_ADMIN_NOT_ENABLED = 'The admin service is not enabled on this node - please check configuration';
const ERROR_REMOTE_NOT_ADMIN = 'request must originate from the admin service';
const ERROR_TERCERO_NOT_ENABLED = 'The tercero user service is not enabled on this node - check configuration';
const ERROR_LOCAL_NOT_ADMIN = 'request must execute on the admin server only';
const ERROR_RESOURCE_TYPE_UNDEF = 'Resource type: %s is not defined/supported';
const ERROR_RESOURCE_PDO_NOT_AVAIL = 'Resource: PDO is not available';
const ERROR_SSL_REQUIRED = 'Non-SSL connections to this service are not supported';
const ERROR_UNK_META_TYPE = 'the meta key type: %s has not been defined for key: %s';
const ERROR_UNK_STATIC_META_FIELD = 'the meta field %s defined as static, has no static definition';
const ERROR_FORK_FAILED = 'fork request failed in: ';
const ERROR_READ_DIR = 'could not open directory for reading: ';
const ERROR_FINE_PICKLE = 'an unknown, and completely unanticipated, error was raised - please contact support';
const ERROR_SOUR_PICKLE = 'an internal error has been raised preventing processing of this request - please contact support';
const ERROR_INVALID_QUEUE_NAME = 'invalid queue name: ';
const ERROR_INVALID_STATE = 'this is not a valid state: ';
const ERROR_INVALID_STATUS = 'expecting status %s, received %s';
const ERROR_SCHEMA_MISMATCH = 'wrong schema invoked for instantiation class: ';
const ERROR_SCHEMA_NOT_SUPPORTED = 'schema: %s is not supported in this method';
const ERROR_PKEY_TYPE = 'pkey type of %s is not supported';
const ERROR_PKEY_SWITCH = 'pkey for class: %s is %s - switching to %s';
const ERROR_PKEY_ID = 'could not derive the primary key from: ';
const ERROR_UNKNOWN_KEY = 'this key: %s, is not a member of the targeted array: %s';
const ERROR_DATA_INVALID_KEY = 'key is not a valid value: ';
const ERROR_SUB_C_INSERT_FAIL = 'could not add sub-collection record to class: %s - check logs';
const ERROR_SUB_C_V_NULL = 'sub-collection %s reduced to null on validation';
const ERROR_SUB_COLLECTION_NOT_MEMBER = 'sub-collection key is not a valid member: ';
const ERROR_SUB_COLLECTION_404 = 'sub-collection data was not found';
const ERROR_DATE_INVALID = 'The date submitted: %s, failed date validation';
const ERROR_DATA_ARRAY_ADD = 'failed to add an array of data into class object: ';
const ERROR_DATA_CREATE_PRE_EXISTS = 'record already exists; cannot create new record for: ';
const ERROR_DATA_ARRAY_EMPTY = 'no data was found in the array';
const ERROR_DATA_ARRAY_SLICE = 'could not extract elements from payload - payload restored to original size';
const ERROR_DATA_ARRAY_SLICE_INFO = 'Pre-Event query fetch reduced payload from %d to %d records';
const ERROR_DATA_ARRAY_COUNT = 'array count is not to spec - expecting %d elements for %s but received: %d';
const ERROR_DATA_ARRAY_COUNT_RANGE = 'array count is not within range of %d - %d';
const ERROR_DATA_ARRAY_COUNT_EXCESSIVE = 'received %s records when no more than %d records were expected';
const ERROR_DATA_ARRAY_NOT_ARRAY = 'expecting an array of records(array) for: ';
const ERROR_DATA_ARRAY_NOT_IDX = 'expecting an indexed array of: ';
const ERROR_DATA_PROCESSING = 'Processing return data payload has failed - check logs';
const ERROR_DATA_VALIDATION = 'Data has failed validation - stopping execution';
const ERROR_OPTION_INVALID = 'option: %s had invalid value: ';
const ERROR_THROWABLE_EXCEPTION = 'framework trapped a throwable';
const ERROR_UPDATES_BY_CLASS_DENIED = 'this class does not allow updating of records';
const ERROR_UPDATE_DATA_NOT_ALLOWED = 'this method: %s does not support an update-data payload';
const ERROR_UPDATE_DATA_INVALID = 'was unable to build the update-data portion of the payload';
const ERROR_UPDATE_PAYLOAD_EMPTY_POST_VALIDATION = 'Validation errors have resulted in an empty update array payload';
const ERROR_RECORD_LIMIT_EXCEEDED = 'request for number of records exceeds the query limit of: ';
const ERROR_GB_NOT_INDEXED_KEY = 'The group by key: %s is not an indexed key for the %s class';
const ERROR_GB_DISCRIMINANT = 'invalid group-by string: %s, expecting one of: %s';
const ERROR_RFD_CORE_FAIL = 'core::rfd() failed - check log files';
const ERROR_SP_PARAM_PROC_FAIL = 'processing stored procedure parameters has failed due to param count';
const ERROR_RSR_APPSERVER = 'appServer is not a valid remote service destination'; // RSR: remote service request
const ERROR_RSR_UNSUPPORTED = 'known remote broker service %s not yet supported';
const ERROR_RSR_NOT_DEF = 'unknown remote broker service (%s) - check data template';
const ERROR_DATA_SLICE = 'slicing the input array resulted in an empty payload on iteration: ';
const ERROR_CLONE_QUERY = 'clone query failed';
const ERROR_PI_TAG_404 = 'missing partialFilterExpression key in query part of partial index';
const ERROR_PI_MALO = 'partial query is malformed - missing sub-array: check query part';
const ERROR_AT_SAVE = 'failed to register event with AT(1) daemon on admin service';
// query-builder errors
const ERROR_QB_ATTRIBUTE_404 = 'The attribute submitted: %s, does not appear to be a member of class: %s';
const ERROR_QB_INVALID_OPERAND = 'This is not an acceptable operand: ';
const ERROR_QB_UNKNOWN_OPERATOR = 'This is not a known operator: ';
const ERROR_QB_VALUE_COUNT = 'The value count is incorrect. For operand %s, the value count should be: %d';
const ERROR_QB_NOT_INDEXED_KEY = 'The search key requested (%s) is not in the (%s) class index list.';
const ERROR_QB_TYPE_MISMATCH = 'The search key requires type: %s, but the value (%s) submitted has type: %s';
const ERROR_QB_ROOT_OPERANDS = 'Have detected duplicate root level join operands (%s) - check query construction';
const ERROR_QB_ROOT_OPERAND_404 = 'Detected a missing (closing?) root-level operand - check query construction';
const ERROR_QB_PF_VIOL = 'protected field violation: %s is a protected field and cannot be changed';
// migration errors
const ERROR_MIGRATION_DATA = 'data payload missing or malformed';
const ERROR_MIGRATION_DATA_FIELD = 'missing required data field: ';
const ERROR_MIGRATION_DATA_FIELD_UNK = 'the field: %s is not in the source schema table';
const ERROR_MIGRATION_DATA_FIELD_TYPE = 'expecting type %s for %s but received: ';
const ERROR_MIGRATION_SCHEMA_COLLISION = 'you cannot migrate from one schema to the same schema';
const ERROR_MIGRATION_SCHEMA_UNKNOWN = 'schema: %s is not supported at this time';
const ERROR_MIGRATION_DEL_DEPENDENCY_FIELD = 'soft-delete-migration requires field: ';
const ERROR_MIGRATION_STATUS_KEY_404 = 'status key: %s not found in mysql source schema';
const ERROR_MIGRATION_STATUS_INV = 'this is not a valid status: ';
const ERROR_MIGRATION_WIDGET_ADD_DATA = 'was unable to add mapped data to the namaste class';
const ERROR_MIGRATION_MAPPING_FAILED = 'new data failed the migration map process';
const ERROR_MIGRATION_MAP_404 = 'template does not have a migration map defined';
const ERROR_MIGRATION_REPORT = 'error encountered generating migration report!';
const ERROR_MIGRATION_CONFIG = 'unable to load migration configuration';
// WF = web form MIG = migration
const ERROR_WF_MIG_URI_404 = 'Remote URI is required';
const ERROR_WF_MIG_PORT_404 = 'Remote Port number is required';
const ERROR_WF_MIG_REMOTE_MONGO_404 = 'A remote host (URI and port) **OR** a replSet (name and list) are required (but not both)';
const ERROR_WF_MIG_REMOTE_MYSQL_404 = 'A remote host (URI and port) **AND** a username and password are all required';
const ERROR_WF_MIG_LOGIN_MISSING = 'Login is required with password or authDB';
const ERROR_WF_MIG_PWD_MISSING = 'Password is required with login or authDB';
const ERROR_WF_MIG_ADB_MISSING = 'AuthDB is required with login or password';
const ERROR_WF_MIG_LOGIN_404 = 'Logins are required for production environments';
const ERROR_WF_MIG_PWD_404 = 'Passwords are required for production environments';
const ERROR_WF_MIG_ADB_404 = 'AuthDB is required for production environments';
const ERROR_WF_MIG_TABLE_404 = 'A remote table name is required';
const ERROR_WF_MIG_REPL_500 = 'Replication sets require BOTH the replSet name and the replSet members';
const ERROR_WF_MIG_REPL_BAD = 'Could not extract replication set names from input';
const ERROR_WF_MIG_REPL_URL = 'This: %s, is not a valid hostName:portNum combination';
const ERROR_WF_MIG_REPL_NUM = 'Must be at least three items listed in a replication set';
const ERROR_WF_MIG_DATE_BAD = 'The %s date: %s, is not a valid date - please correct using the date-picker';
const ERROR_WF_MIG_DB_404 = 'A database name is required';
const ERROR_WF_MIG_REMOTE_SCHEMA = 'Unable to derive the remote schema';
const ERROR_WF_MIG_REPLSET_404 = 'Missing replication-set name';
const ERROR_WF_MIG_BRK_CFG_404 = 'Migration broker does not appear to be configured/running';
const ERROR_WF_MIG_TEMPLATE_SCHEMA_404 = 'Template selected missing remote schema declaration';
// password, user and session processing errors
const ERROR_PASSWORD_HASH_GENERATION_FAILED = 'failed to generate password hash';
const ERROR_PARTNER_API_KEY_MISMATCH = 'An API Key mismatch has prevented record access';
const ERROR_PARTNER_USER_NOT_MEMBER = 'The user requested does not exist or is not an account member';
const ERROR_PARTNER_USER_DATA = 'There is an issue with the user account - contact support';
const ERROR_PARTNER_USER_NOT_REGISTERED = 'The user partner account with key: %s does not have a Partner token';
const ERROR_PARTNER_USER_HAS_BAD_GUID = 'The user account has a bad guid: %s for %s';
const ERROR_PASSWORD_MISMATCH = 'The login or user password is incorrect';
// migration HTML webApp errors
const ERROR_HTML_MIG_FORM_ERROR = 'There is an unrecoverable error in the HTML form: ';
// warehousing errors
const ERROR_WH_CLASS_NOT_SUPPORTED = 'The data class requested, %s, does not support warehousing';
const ERROR_WH_REMOTE_SOURCE_NOT_AUTH = 'The data class requested: %s, does not support remote WH sources';
const ERROR_WH_CRON_NOT_SUPPORTED = 'The data class requested: %s, does not support automated WH requests';
const ERROR_WH_DYNAMIC_NOT_SUPPORTED = 'The data class requested: %s, does not support ad-hoc WH requests';
const ERROR_WH_CUSTOM_QUERY_NOT_AUTH = 'The data class requested: %s, does not allow custom WH queries';
const ERROR_WH_FILTER_VAL_404 = 'The WH request is missing the required query filter value';
const ERROR_WH_REMOTE_SOURCE_404 = 'The WH request is missing the remote source table name';
const ERROR_WH_REMOTE_MIG_CFG_404 = 'The WH request requires migration URI endpoint configuration (XML) data';
const ERROR_WH_REMOTE_MYSQL_CFG_404 = 'The WH request requires migration (XML) config for a mysql endpoint';
const ERROR_WH_REMOTE_MONGO_CFG_404 = 'The WH request requires migration (XML) config for a mongo endpoint';
const ERROR_WH_SCHEMA_NOT_SUPPORTED = 'Warehousing is not supported for the schema: ';
const ERROR_WH_MISSING_WIDGET = 'Lost the destination template widget';
const ERROR_WH_MISSING_WH_OBJ = 'Lost the warehouse meta widget';
const ERROR_WH_MISSING_SETTINGS = 'Lost the template warehouse destination settings';
const ERROR_WH_MISSING_BROKER_DATA = 'Lost the broker-request data payload';
const ERROR_WH_MISSING_META_DATA = 'Lost the broker request meta payload';
const ERROR_WH_MISSING_WHERE_CLAUSE = 'lost the pre-built and pre-validated where clause from the WH object';
const ERROR_WH_NOT_ENABLED = 'warehousing has not been enabled';
const ERROR_WH_DEL_REMOTE_RECS = 'unable to delete records from source that were just warehoused';
// unit testing errors
const ERROR_UT_EXPECTING_TRUE = 'expecting true response for: ';
const ERROR_UT_EXPECTING_FALSE = 'expecting a false response for: ';
const ERROR_UT_BROKER_STATUS = 'broker request reported false status';
const ERROR_UT_EXPECTING_NON_ZERO_FP = 'expecting non-zero result but got: %2.6f';
const ERROR_UT_EXPECTING_NON_ZERO_INT = 'expecting non-zero int - got: ';
const ERROR_UT_STRING_MISMATCH = 'expected %s but received %s';
const ERROR_UT_CHECKSUM = 'checksum comparison failed';
const ERROR_UT_QUERY_RETURNED_ZERO = 'query returned zero records - this may not be an error';
const ERROR_UT_STRING_MATCH = 'strings: %s and %s matched when they should be different';
const ERROR_UT_INTEGER_MISMATCH = 'expected %d but received %d for: ';
const ERROR_UT_EXCESSIVE_COUNT = 'expected count return less than %d but received %d';
const ERROR_UT_VALS_NOT_EQUAL = 'test failed: %s <> %s';
const ERROR_UT_GENERIC_FAIL = 'unit test failed: ';
const ERROR_UT_SAME_FIELD_COMPARE_FAIL = 'field: %s has different values across compared structures';
const ERROR_UT_WIDGET_404 = 'widget appears to have been lost as it has failed the is-object test';
const ERROR_UT_FIELD_404 = 'missing field: %s from %s';
const ERROR_UT_FIELD_VALUE = 'field %s: has incorrect value or value type';
const ERROR_UT_BROKER_EVENT_FAIL = 'Broker event: %s has failed with state: %s';
const ERROR_UT_CACHE_FETCH_FAIL = 'failed to retrieve record from cache';
const ERROR_UT_EMPTY_RESULTS = 'results return data is empty when it should not be';
const ERROR_UT_LOST_VARIABLE = 'stored variable has been lost: ';
const ERROR_UT_NULL_VALUE = 'received null value for: ';
const ERROR_UT_NOT_FOUND = 'fetch query returned no records for testing';
// audit/journaling errors
const ERROR_AUDIT_GENERIC_FAIL = 'Audit request has failed to complete successfully - check log files';
const ERROR_AUDIT_FAIL = 'Audit record creation has failed - event messages to follow';
const ERROR_AUDIT_FAILED = 'Audit record creation has failed with NO error messages. Well done.';
const ERROR_JOURNAL_GENERIC_FAIL = 'Journal request has failed to complete successfully - check log files';
const ERROR_JOURNAL_BUILD_FAIL = 'Was not able to build the journal record data -- check log files';
const ERROR_AUDIT_DATA_404 = 'Missing %s data from %s payload';
const ERROR_AUDIT_COUNT = 'journal data record count mismatch detected';
const ERROR_AUDIT_REC_LIST = 'failed to generate a list of records as they existed prior to the modification query';
const ERROR_AUDIT_SOURCE = 'audit data must come from either data or auditData members';
const ERROR_AUDIT_NO_SOURCE = 'source data for audit (record GUIDs) is empty';
const ERROR_JOURNAL_NOT_SUPPORTED = 'Journaling is not enabled for this class';
const ERROR_JOURNAL_REQ_BOMBED = 'Journaling recovery has failed - check log files';
const ERROR_AUDIT_CREATE = 'Failed to create the audit record - check log files';
const ERROR_SYSLOG = 'a call to syslog has failed';
// generic fail messages
const FAIL_EVENT = 'broker event failed: ';
const FAIL_CONNECT = 'unable to establish connection to: ';
const FAIL_CACHE_MAP_LOAD = 'unable to load the cacheMap - check logs';
const FAIL_CACHE_MAP_CACHE = 'unable to cache the cacheMap (set() failed)';
const FAIL_RESOURCE_LOAD = 'failed loading resource: %s for location: %s';
// notices -- not errors, but maybe of interest
const NOTICE_META_DISCARD = 'discarding meta field: %s as an unauthorized member';
// info error messages
const INFO_SLOW_QUERY_TIMERS = 'query timers: %s';
const INFO_SLOW_QUERY_TIMER_WARNINGS = 'query timer warnings: %s';
const INFO_QUERY_TIMER_VALUES = ' -- threshold: %d ms';
const INFO_DATA_RESET = 'resetting the value for %s to: ';
const INFO_GENERIC_DB_ERROR = 'a database error was raised';
const INFO_EXPOSED_FIELD_PROTECTION = 'field %s dropped because not a member of exposedFields list';
const INFO_INSERTED_FIELD = 'field %s inserted into record as null value';
const INFO_RECORD_LIMIT_OVERRIDE = 'record limit of %d records overridden by migration request to: %d records';
const INFO_QUERY_RETURNED_NO_DATA = 'query executed successfully but no records were returned';
const INFO_MIGRATION_RECORDS_MOVED = 'number of records moved (%s -> %s): %d/%d';
const INFO_TEMPLATE_CLASS_DROPPED = 'dropped the template sub-class during instantiation of the parent class';
const INFO_WH_NO_QUALIFIED_DATA = 'there are no records in the source table that satisfy the wh query';
const INFO_BROKERS_IPL = 'Starting Namaste brokers and re-routing all further output to logfile: ';
const INFO_TRX_COMMIT = 'transaction completed and committed successfully';
const INFO_TRX_ROLLBACK = 'transaction has failed and roll-back has been issued';
const INFO_PDO_DEPLOY = 'installing %s: %s';
const INFO_PDO_INDEXES_DROPPED = 'indexes dropped for table: ';
const INFO_PDO_AI_ATTR_DROPPED = 'dropped autoincrement attribute from pkey';
const INFO_PDO_NO_DEPLOY = 'Template: %s has not declared an object of type: %s';
const INFO_PDO_BAD_DEPLOY = 'Failed to install %s named: %s for %s';
const INFO_TEST_MESSAGE = 'This is a test message containing no useful content.';
const INFO_IPL_REST = 'Pausing to allow broker settling';
const INFO_IPL_BROKER_SUCCESS = '%s broker client pinged successfully';
const INFO_IPL_APPSERVER_SUCCESS = 'appserver service has successfully started!';
const INFO_IPL_SEGUNDO_SUCCESS = 'segundo service has successfully started!';
const INFO_IPL_TERCERO_SUCCESS = 'tercero service has successfully started!';
const INFO_IPL_ADMIN_SUCCESS = 'admin service has successfully started!';
const INFO_BROKER_REQ_COUNT = 'request-count limit reached - broker child cycling';
const INFO_BROKER_QUEUE_ESTABLISHED = '%s established as pid: %d for %d requests';
const INFO_BROKER_PARENT_STARTED = '%d instances of %s started';
const INFO_TEMPLATE_PROCESSING_STARTED = 'begin processing %s templates';
const INFO_PROCESSING = 'Processing: ';
const INFO_SERVICE_NOT_ENABLED = '%s service is not enabled in the current env';
const INFO_SHOULD_NOT_SEE_THIS = 'you should not ever see this error message';
const INFO_MIGRATION_XML_OVERRIDE = 'Migration XML (source) was overridden by web request';
const INFO_NO_DATA_IN_DATA = 'Data payload currently contains no records';
const INFO_NO_ERRORS = 'No errors generated in diagnostics.';
const INFO_PDO_SLAVE_SWITCH = 'Cannot use slave - switching to master';
const INFO_RECORD_NOT_FOUND = 'Query executed successfully, but no record(s) returned by query';
const INFO_EVENT_GUID_REPLACED = 'Event guid was replaced with %s in %s@%d';
const INFO_CKP_CONFIG_LOADED = 'resourceManager successfully loaded config file';
const INFO_DB_DUP_ENV_USER = 'bypass user create because %s already exists for env: ';
const INFO_NO_DIR = 'could not create file - perhaps a parent directory needs to be created?';
const INFO_PARTIAL_INDEX = 'partialIndex';
const INFO_SCHEMA = 'current class schema: ';
// checkpoint messages
const INFO_CKP_REACHED = 'Checkpoint %s@%d reached: %s';
const INFO_LOC = '[%s@%d]-> ';
// success messages
const SUCCESS_DB_DELETE = 'record successfully deleted: ';
const SUCCESS_DB_RECORD_CROSS_DELETED = '%s record based on %s record: %s, was successfully deleted';
const SUCCESS_DB_RECORD_DELETED = '%s record was successfully deleted for user guid: %s';
const SUCCESS_DB_RECORD_RESTORED = 'The requested record was successfully restored';
const SUCCESS_DB_UPDATE_COUNT = 'number of records successfully updated: ';
const SUCCESS_DB_UPSERT_COUNT = 'number of records successfully upserted: ';
const SUCCESS_TLS_CONNECT = 'successful TLS connection established to: ';
const SUCCESS_CONNECT = 'successful connection established to: ';
const SUCCESS_PIN_VALID = 'pin is valid - user has been activated';
const SUCCESS_PING = 'successfully pinged broker: ';
const SUCCESS_CONNECTED = 'successful test connection to resource: ';
const SUCCESS_SHUTDOWN = 'shutdown completed gracefully';
const SUCCESS_EVENT = 'broker event successful: ';
const SUCCESS_EVENT_404 = 'broker event processed but no data returned';
const SUCCESS_NOT_SUPPORTED = 'broker event successful but action not supported by this data class';
const SUCCESS_METHOD = 'method successful: ';
const SUCCESS_LOCK_COUNT_CLEARED = 'lock counters cleared by successful login';
const SUCCESS_RECORD_ADDED = '%s successfully added!';
const SUCCESS_ALL_SERVICES = 'all services are available';
const SUCCESS_CONNECT_CACHE = 'successfully connected and registered to cache service';
const SUCCESS_NO_ERRORS_FOUND = 'no errors were found';
const SUCCESS_SUBC_RECORD_DELETED = 'sub-collection records with guid: %s was successfully deleted';
const SUCCESS_CACHE_LOG_DUMP = 'published cached-log of %d messages to admin service';
const SUCCESS_PDO_TEMPLATE_PROCESSING = 'successfully processed all PDO templates for release version: ';
const SUCCESS_IPL_ENV_CHECK = 'Service environments successfully cross-checked.';
const SUCCESS_AUDIT_EVENT = 'Audit event successfully recorded';
const SUCCESS_CACHE_MAP = 'cacheMap successfully cached';
const SUCCESS_CACHE_SMASH = 'records successfully removed from cache';
const SUCCESS_PUBLISHED = 'successfully published log message using route: ';
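The placeholder-bearing constants above are plain printf-style format strings; a minimal sketch of how a caller might render them (the two constant values are copied verbatim from the list above, while the caller-side detail strings are hypothetical):

```php
<?php
// Sketch only: these two constant values are copied verbatim from the list above.
const ERROR_UT_INTEGER_MISMATCH = 'expected %d but received %d for: ';
const ERROR_MIGRATION_SCHEMA_UNKNOWN = 'schema: %s is not supported at this time';

// %-style constants are filled with sprintf()...
$msg = sprintf(ERROR_MIGRATION_SCHEMA_UNKNOWN, 'sqlite');

// ...while the trailing-colon constants are simply concatenated with the detail.
$msg2 = sprintf(ERROR_UT_INTEGER_MISMATCH, 5, 3) . 'recordCount';

echo $msg . PHP_EOL;   // schema: sqlite is not supported at this time
echo $msg2 . PHP_EOL;  // expected 5 but received 3 for: recordCount
```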

1735
common/functions.php Normal file

File diff suppressed because it is too large

16
common/lorumIpsum.inc Normal file

@@ -0,0 +1,16 @@
<?php
/**
* Created by PhpStorm.
* User: mshallop
* Date: 7/20/17
* Time: 6:47 AM
*/
$text = '
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec a diam lectus. Sed sit amet ipsum mauris. Maecenas congue ligula ac quam viverra nec consectetur ante hendrerit. Donec et mollis dolor. Praesent et diam eget libero egestas mattis sit amet vitae augue. Nam tincidunt congue enim, ut porta lorem lacinia consectetur. Donec ut libero sed arcu vehicula ultricies a non tortor. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean ut gravida lorem. Ut turpis felis, pulvinar a semper sed, adipiscing id dolor. Pellentesque auctor nisi id magna consequat sagittis. Curabitur dapibus enim sit amet elit pharetra tincidunt feugiat nisl imperdiet. Ut convallis libero in urna ultrices accumsan. Donec sed odio eros. Donec viverra mi quis quam pulvinar at malesuada arcu rhoncus. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. In rutrum accumsan ultricies. Mauris vitae nisi at sem facilisis semper ac in est.
Vivamus fermentum semper porta. Nunc diam velit, adipiscing ut tristique vitae, sagittis vel odio. Maecenas convallis ullamcorper ultricies. Curabitur ornare, ligula semper consectetur sagittis, nisi diam iaculis velit, id fringilla sem nunc vel mi. Nam dictum, odio nec pretium volutpat, arcu ante placerat erat, non tristique elit urna et turpis. Quisque mi metus, ornare sit amet fermentum et, tincidunt et orci. Fusce eget orci a orci congue vestibulum. Ut dolor diam, elementum et vestibulum eu, porttitor vel elit. Curabitur venenatis pulvinar tellus gravida ornare. Sed et erat faucibus nunc euismod ultricies ut id justo. Nullam cursus suscipit nisi, et ultrices justo sodales nec. Fusce venenatis facilisis lectus ac semper. Aliquam at massa ipsum. Quisque bibendum purus convallis nulla ultrices ultricies. Nullam aliquam, mi eu aliquam tincidunt, purus velit laoreet tortor, viverra pretium nisi quam vitae mi. Fusce vel volutpat elit. Nam sagittis nisi dui.
Suspendisse lectus leo, consectetur in tempor sit amet, placerat quis neque. Etiam luctus porttitor lorem, sed suscipit est rutrum non. Curabitur lobortis nisl a enim congue semper. Aenean commodo ultrices imperdiet. Vestibulum ut justo vel sapien venenatis tincidunt. Phasellus eget dolor sit amet ipsum dapibus condimentum vitae quis lectus. Aliquam ut massa in turpis dapibus convallis. Praesent elit lacus, vestibulum at malesuada et, ornare et est. Ut augue nunc, sodales ut euismod non, adipiscing vitae orci. Mauris ut placerat justo. Mauris in ultricies enim. Quisque nec est eleifend nulla ultrices egestas quis ut quam. Donec sollicitudin lectus a mauris pulvinar id aliquam urna cursus. Cras quis ligula sem, vel elementum mi. Phasellus non ullamcorper urna.
Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. In euismod ultrices facilisis. Vestibulum porta sapien adipiscing augue congue id pretium lectus molestie. Proin quis dictum nisl. Morbi id quam sapien, sed vestibulum sem. Duis elementum rutrum mauris sed convallis. Proin vestibulum magna mi. Aenean tristique hendrerit magna, ac facilisis nulla hendrerit ut. Sed non tortor sodales quam auctor elementum. Donec hendrerit nunc eget elit pharetra pulvinar. Suspendisse id tempus tortor. Aenean luctus, elit commodo laoreet commodo, justo nisi consequat massa, sed vulputate quam urna quis eros. Donec vel.';

38
common/plCatalog.php Normal file

@@ -0,0 +1,38 @@
<?php
/**
* This constants file contains all the Priceline.com schema constants and should only appear in Priceline (pl)
* templates. This file exists in both SMAX and Namaste so please update the other when you update one.
*
*
* @author mike@givingassistant.org
* @version 1.0
*
*
* HISTORY:
* ========
* 06-12-20 mks ECI-164: original coding
*
*/
// Donors Table
// ---- schema constants
const TEMPLATE_PL_DONORS = 'Donors'; // template's raw name
const COLLECTION_MONGO_PL_DONORS = 'plDonors'; // name of the mongo collection
const COLLECTION_PL_DONORS_EXT = '_don'; // name of the extension for the Donors collection
// ---- schema column names
const PL_CID = 'plCauseID';
const PL_CAUSE_TITLE = 'plCauseTitle';
const PL_DONATIONS_TCC = 'plDonationsToCurrentCause';
const PL_FK = 'plForeignId';
const PL_SHARE_DATA_WITH_CAUSE = 'plShareDataWithCause';
const PL_TOT_DONS = 'plTotalDonations';
const PL_TRANS_COUNT = 'plTransactionCount';
// ---- cache-mapped column names
const PL_CM_CAUSE_TITLE = 'causeTitle';
const PL_CM_CID = 'cid';
const PL_CM_DTCC = 'donationsToCurrentCause';
const PL_CM_FK = 'foreignId';
const PL_CM_SDWC = 'shareDataWithCause';
const PL_CM_TD = 'totalDonations';
const PL_CM_TC = 'transactionCount';
const PL_CM_SMAX_KEY = 'XAPIKEY';
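The pairing of schema column names with cache-mapped names above suggests a translation table; a hedged sketch of that idea (the `$plCacheMap` array and the record shape are invented for illustration, not taken from the framework):

```php
<?php
// Constant values copied from the catalog above; the map itself is hypothetical.
const PL_CID = 'plCauseID';
const PL_CAUSE_TITLE = 'plCauseTitle';
const PL_CM_CID = 'cid';
const PL_CM_CAUSE_TITLE = 'causeTitle';

// schema column name => cache-mapped column name
$plCacheMap = [
    PL_CID         => PL_CM_CID,
    PL_CAUSE_TITLE => PL_CM_CAUSE_TITLE,
];

// translate a stored record into its cache-mapped shape
$record = [PL_CID => 42, PL_CAUSE_TITLE => 'Clean Water Fund'];
$cached = [];
foreach ($record as $col => $val) {
    $cached[$plCacheMap[$col] ?? $col] = $val;
}
// $cached is now ['cid' => 42, 'causeTitle' => 'Clean Water Fund']
```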

1
config/.gitignore vendored Normal file

@@ -0,0 +1 @@
env*

240
config/bootstrap.inc Normal file

@@ -0,0 +1,240 @@
<?php
define('PHP_MINIMUM_VERSION', 70401);
/**
* common directives for rabbitMQ daemons to load environment configuration:
* - set directory paths
* - install the autoloader
* - install the amqp library (rabbitMQ) if required (defined in constants)
* - declare globals
* - load the common files/functions
* - initialize the singleton configuration global
*
* this file should be required by ALL rabbitMQ daemons (brokers)
*
* @author mike@givingassistant.org
* @version 1.0
*
* HISTORY:
* --------
* 06-07-17 MKS initial coding
* 06-13-17 mks reversed the config file paradigm - the default is now to load a development-level
* configuration and overwrite it with the env.xml file, if it exists.
* 05-31-18 mks CORE-1011: update for new XML broker services configuration
* 06-08-18 mks CORE-1034: deprecated prodBox XML tag
* 06-13-18 mks CORE-1048: improved diagnostic logging/start-up messages, deprecated node-check
* 06-15-18 mks CORE-1045: deprecated CONFIG_ID_NODE tag
* 07-10-18 mks CORE-773: replaced echo statements with consoleLog()
* 07-24-18 mks CORE-1097: cleaning up logging output
* 09-02-18 mks DB-43: Parse exception-handling wrappers around require statements
* 11-29-18 mks DB-51: cleaning up debug/log messages
* 05-23-19 mks DB-116: work-around for array_key_first() (PHP 7.3 only function)
* 10-12-20 mks DB-156: fixed debug check for broker services
*
*/
$eos = (isset($_SERVER['HTTP_USER_AGENT'])) ? '<br />' : PHP_EOL;
$res = 'BOOT: '; // console log identifier
function gt(): string
{
global $res;
$cs = ' [ S]';
return('[' . date("d/m/y@H:i:s", time()) . ']' . $cs . $res);
}
// add a version check for php
if (PHP_VERSION_ID < PHP_MINIMUM_VERSION) exit ('A version of PHP >= 7.4.1 is required to run Namaste.' . PHP_EOL);
$topDir = dirname( __DIR__ );
if (!file_exists($topDir . '/logs')) { // <-- using the logs literal because constants not loaded yet
echo gt() . 'ERROR - LOG DIRECTORY DOES NOT EXIST - NAMASTE CANNOT RUN.' . PHP_EOL;
echo gt() . 'Please fix this problem immediately.' . PHP_EOL . PHP_EOL;
die();
}
// set-up the log files
$logFile = $topDir . '/logs/namaste.log';
$logErrors = $topDir . '/logs/namaste_err.log';
$logs = [ $logFile, $logErrors ];
foreach ($logs as $log) {
if (!file_exists($log)) {
echo gt() . 'Creating logfile: ' . $log . PHP_EOL;
touch($log);
}
}
// redirect i/o for console logging
if (!isset($_REDIRECT)) $_REDIRECT = true;
if ($_REDIRECT) {
@fclose(STDIN);
@fclose(STDOUT);
@fclose(STDERR);
}
@$STDIN = fopen('/dev/null', 'r');
if (!@$STDOUT = fopen($logFile, 'a+b')) {
echo gt() . 'Error - unable to open ' . $logFile . ' for logging' . PHP_EOL;
exit(1);
}
if (!@$STDERR = fopen($logErrors, 'a+b')) {
if (!$STDOUT) {
echo gt() . 'Error - unable to open ' . $logErrors . ' for error logging' . PHP_EOL;
} else {
fwrite($STDOUT, gt() . 'Error - unable to open: ' . $logErrors . ' for error logging' . PHP_EOL);
}
exit(1);
}
// =-=-=-=-= from this point forward, must use fwrite(STDERR|STDOUT) for console logging =-=-=-=-=
fwrite($STDOUT, gt() . 'Loading common files...' . PHP_EOL);
// load the files stored in the common directory
foreach(glob($topDir . '/common/*.php') as $filename) {
fwrite($STDOUT, gt() . 'Loading: ' . $filename . PHP_EOL);
try {
/** @noinspection PhpIncludeInspection */
require_once($filename);
} catch (ParseError $p) {
echo gt() . 'Caught parse exception in ' . $filename . PHP_EOL;
echo gt() . $p->getMessage() . PHP_EOL;
exit(1);
}
}
// ---------------------- system constants and error messages are now available -------------------------
$classesDir = $topDir . DIR_CLASSES;
$configDir = $topDir . DIR_CONFIG;
$amqpLib = $topDir . DIR_LIB;
$templateDir = $topDir . DIR_CLASSES . DIR_TEMPLATE;
$logDir = $topDir . DIR_LOGS;
date_default_timezone_set(STRING_SYS_TZ);
fwrite($STDOUT, gt() . 'Loading framework autoloader...' . PHP_EOL);
try {
/** @noinspection PhpIncludeInspection */
require($topDir . FILE_AUTOLOADER);
} catch (ParseError $p) {
echo gt() . 'Caught parse exception in ' . $topDir . FILE_AUTOLOADER . PHP_EOL;
echo gt() . $p->getMessage() . PHP_EOL;
exit(1);
}
if(file_exists($classesDir)) {
Autoloader::register_directory($classesDir);
Autoloader::register_directory($templateDir);
}
fwrite($STDOUT, gt() . 'Loading vendor-library autoloader...' . PHP_EOL);
// php-amqplib load (v2)
if (!file_exists($amqpLib)) {
fwrite($STDERR, getDateTime() . CON_SYSTEM . $res . ERROR_LIB_404 . $amqpLib . PHP_EOL);
die();
}
try {
$loadFile = $amqpLib . '/vendor/autoload.php';
/** @noinspection PhpIncludeInspection */
require_once $loadFile;
} catch (ParseError $p) {
echo gt() . 'Caught parse exception in ' . $loadFile . PHP_EOL;
echo gt() . $p->getMessage() . PHP_EOL;
exit(1);
}
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Connection\AMQPStreamConnection;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Channel\AMQPChannel;
use /** @noinspection PhpUnusedAliasInspection */ PhpAmqpLib\Message\AMQPMessage;
// check to ensure required functions are installed and supported on the server.
// if not, then error to the console and exit.
// pcntl extension is disabled in Apache PHP but enabled in cli-PHP
if (!extension_loaded('pcntl') and !isset($_SERVER['HTTP_USER_AGENT'])) {
consoleLog($res, CON_ERROR, 'API for Process Control may be missing. See: http://www.php.net/manual/en/pcntl.installation.php');
}
if (extension_loaded('pcntl') and isset($_SERVER['HTTP_USER_AGENT'])) {
consoleLog($res, CON_ERROR, 'PCNTL extension SHOULD NOT BE LOADED IN APACHE-PHP! (SECURITY VIOLATION)');
exit(1);
}
/**
* initialize the static configuration object which is used throughout the framework
*
* Since there does not exist a way to gracefully handle errors via the error object,
* we have to output generated errors to STDOUT and let the console operator proceed
* from there with whatever diagnostics we supplied.
*
* generally speaking, we load the namaste.xml configuration -- which, by design, is
* a development-level configuration file.
*
* the env.xml file is optional -- if it exists, it will be loaded and duplicate keys
* in the namaste.xml file will be replaced by the env.xml elements.
*
*/
fwrite($STDOUT, gt() . 'Loading base configuration....' . PHP_EOL);
if (!file_exists($configDir . FILE_BASE_CONFIG)) {
fwrite($STDERR, gt() . ' [ !] COMMON - could not locate base configuration file.' . PHP_EOL);
fwrite($STDERR, gt() . ' [ !] check that ' . FILE_BASE_CONFIG . ' exists in the' . PHP_EOL);
fwrite($STDERR, gt() . ' [ !] directory: ' . $configDir . '.' . PHP_EOL);
} else {
// load the base configuration
if (!gasConfig::singleton($configDir . FILE_BASE_CONFIG, FILE_TYPE_XML)) {
fwrite($STDERR, gt() . ' [ !] COMMON - could not load base configuration file.' . PHP_EOL);
fwrite($STDERR, gt() . ' [ !] file: ' . FILE_BASE_CONFIG . PHP_EOL);
fwrite($STDERR, gt() . ' [ !] directory: ' . $configDir . PHP_EOL);
exit(1);
}
}
// make sure a base config was loaded
if (empty(gasConfig::$settings)) {
fwrite($STDERR, gt() . ' [ !] COMMON - unknown error raised loading base configuration file - program ends.' . PHP_EOL);
exit(1);
}
if (!isset(gasConfig::$settings[CONFIG_ID][CONFIG_ID_ENV])) {
fwrite($STDERR, gt() . ' [ !] COMMON - Have not defined environment status relative to an environment.' . PHP_EOL);
fwrite($STDERR, gt() . ' Check the base configuration file: ' . FILE_BASE_CONFIG . PHP_EOL);
fwrite($STDERR, gt() . ' for the xml-label: ' . CONFIG_ID_ENV . PHP_EOL);
exit(1);
}
fwrite($STDOUT, gt() . 'Loading and layering env configuration....' . PHP_EOL);
// load the environment configuration -- if an error occurs loading the env.xml file, then execution will cease.
if (file_exists($configDir . FILE_ENV_CONFIG)) {
gasConfig::addConfig($configDir . FILE_ENV_CONFIG, FILE_TYPE_XML);
if (!gasConfig::$status) {
fwrite($STDERR, gt() . ' [ !] COMMON - Unable to successfully load the ' . FILE_ENV_CONFIG . ' file.' . PHP_EOL);
exit(1);
}
}
// set the rabbitMQ debug if namaste has debug on in the XML under broker-services
define('AMQP_DEBUG', (bool) gasConfig::$settings[CONFIG_BROKER_SERVICES][CONFIG_BROKER_DEBUG]);
// deprecated via CORE-1011
//// these are the only valid node names
//fwrite($STDOUT, gt() . 'Validating service environments....' . PHP_EOL);
//$validNodes = [ CONFIG_ID_NODE_NAMASTE, CONFIG_ID_NODE_ADMIN, CONFIG_ID_NODE_DEV ];
//if (!in_array(gasConfig::$settings[CONFIG_ID][CONFIG_ID_NODE], $validNodes)) {
// $msg = sprintf(' [ X] COMMON - check the configuration: %s is not a valid node id (cfg.id.nodename)', gasConfig::$settings[CONFIG_ID][CONFIG_ID_NODE]);
// fwrite($STDERR, $msg);
// exit(1);
//}
// initialize the global objects - failures for each object are generated within the object's constructors
// and are sent to stdout...
// initialize the global Resource Manager object
fwrite($STDOUT, gt() . $res . 'Loading resource manager....' . PHP_EOL);
if (!isset(gasResourceManager::$available) or !gasResourceManager::$available) {
gasResourceManager::singleton();
if (!gasResourceManager::$IPL) {
consoleLog($res, CON_ERROR, ERROR_FW_IPL);
} else {
gasResourceManager::$available = true;
}
}
// initialize the global Memcache object
fwrite($STDOUT, gt() . $res . 'Loading memcache manager....' . PHP_EOL);
if (!isset(gasCache::$available) or !gasCache::$available) {
gasCache::singleton();
}
fwrite($STDOUT, gt() . $res . 'Loading static manager....' . PHP_EOL);
if (!isset(gasStatic::$available) or !gasStatic::$available) {
gasStatic::singleton();
}
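The base-config-plus-optional-env-overlay behavior documented in the comments above can be sketched in isolation; gasConfig's internals are not part of this file, so SimpleXML plus `array_replace_recursive()` stand in here as an assumed analogue:

```php
<?php
// Assumed analogue of gasConfig::singleton() + gasConfig::addConfig():
// load namaste.xml as the base, then let env.xml replace duplicate keys.
function loadXmlAsArray(string $xml): array
{
    // SimpleXML -> json -> array is a common trick for nested XML config
    return json_decode(json_encode(simplexml_load_string($xml)), true);
}

$base = loadXmlAsArray('<cfg><env>dev</env><brokers><count>4</count></brokers></cfg>');
$env  = loadXmlAsArray('<cfg><env>prod</env></cfg>');

// duplicate keys in the base are replaced by env elements; the rest survive
$settings = array_replace_recursive($base, $env);
// $settings['env'] === 'prod'; $settings['brokers']['count'] === '4'
```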


@@ -0,0 +1,33 @@
-----BEGIN CERTIFICATE-----
MIIFnjCCA4agAwIBAgIJAL/BxYbTTgtDMA0GCSqGSIb3DQEBCwUAMFwxCzAJBgNV
BAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRYwFAYDVQQHDA1TYW4gRnJhbmNp
c2NvMQ4wDAYDVQQKDAVnaXZ2YTEQMA4GA1UECwwHbmFtYXN0ZTAeFw0xNzEyMTgx
ODU1MDlaFw0zNzEyMTMxODU1MDlaMFwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApD
YWxpZm9ybmlhMRYwFAYDVQQHDA1TYW4gRnJhbmNpc2NvMQ4wDAYDVQQKDAVnaXZ2
YTEQMA4GA1UECwwHbmFtYXN0ZTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoC
ggIBAJ6gHIylr1TIDGk4Q1H5KaMX22v6d9f8XyttcMzMOqjMSpdokzRVFuIjLs9L
VNPA2Z0FsrUYN6TVINEf2XnwF4Ilz0OgGkhBKUzddnDDOHbIdK1Eu2J2CjBTDN+z
7BZvY4GbaHmDb5axLP0pG6jfBK3dzcDgPOxZmwgNjNYe2D4w3GBu+KIoEBkLNem2
sTgEoNgNniT2ibd6vL7l1UFyEN+yNTAVqwxLuTYHjavfQtyLtWl9hmhgEzKJxR7G
ZWAFbgfz9p1AV1mPu0+4b8GKSsFOoKLZDcwNqeGCe3tNVJfzoZptfNcFjzPWSpn6
DXrODveYgQ4hEBsvLpeNkUDWLB+TnJ9jihmi/X2LF5O1iXFFXvrfBElvn6RRQw8d
/J6jFTThzSGsxg86RlZUuuL9QJ5yvuThCMdHCveL7LHdbFbo9HsmVozaQ9NztvC1
BC/JyjiA6XZQXa7ShyNQ/JVBsiEZH8qdKcGY8N8r7Ran4kjyhULP2UYtL4uYVXuW
fjEEDHi4exvlwfV8TQRAADiL5HHICquIICJRxga4BJbBROWfOKdhA7Dbpx5GDDBT
IYEassVIP48Eb04qa67Ar87Xd24mwlPSb8k9aBNS3sHlwnGIPKtSISmXKYFJFg/2
Kx4rs8e9+/Le8zNrroxSQJ9Ex1YY8n73XTbCP4rdhrfk+j97AgMBAAGjYzBhMB0G
A1UdDgQWBBQuSkRF1EtLmdiCCwI/Fz6duzxTdzAfBgNVHSMEGDAWgBQuSkRF1EtL
mdiCCwI/Fz6duzxTdzAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAN
BgkqhkiG9w0BAQsFAAOCAgEAnIx3DzkwGN+Zw6E7hjXthmAWib14UYEf7ED+qUfV
kJTVsiOhqzQR/cWtzl0fS6vMcZngIa8OOBc1hcUuWz9ujKfLjyPHieJzTphx2c7b
qB6pr9L/vMewPRXc96L8JCkOzctOsl+C849HvdS6JsAxkFEG05Jt64iMN2ttRgX0
85BwwoXjcuHCZdIuwXkxs0psOdMTiA+ibDPuLLu+fzo2l1vTfWVnA5pcUEbND6OY
AzMNXu9u6yTJnLW5thy+KVTwaanmmXWiwQVTmceJrR/SeYX0u+kpjQyNh42Rk+R8
67x0UoK3uzWEW3+Qr09xk86Oal++0ErhsbcayT8k1C3AggcjF1Av0wEC/UE/NwJl
GEeSDrHb0Ll5bs48BxWpj671PXCTjxKSJK48iexgVRYiIl1OnrggHul0wzjQFrFa
HQradVlU8fSYNMa2taQyXPb00P+IU275TL0BioTdwkmk2bp57d9hFuKkECL2Yqcm
/zGuaWIy3tIic8I51YUdUpuj1TvpahjxxW9SlCwV5p/IDgwhaJshT2nZVoYxcTRe
x1W491gS7xPCSzS9cXMdG+7DHoxiGGudVprzLObMP5+RjArQPSgGhqnW3hFVSbz1
H+25dNeJeQ5ouWqxMD+Abl5j7OwxVEDyS7D6UCjVaE1PxM6izoEvYoKdKCoB9n7X
lpY=
-----END CERTIFICATE-----


@@ -0,0 +1,2 @@
V 321210185814Z 1000 unknown /C=US/ST=California/O=givva/OU=engineering/CN=127.0.0.1/emailAddress=mike@givingassistant.org
V 321214190348Z 1001 unknown /C=US/ST=California/O=givva/OU=namaste/CN=127.0.0.1


@@ -0,0 +1 @@
unique_subject = yes


@@ -0,0 +1 @@
unique_subject = yes


@@ -0,0 +1 @@
V 321210185814Z 1000 unknown /C=US/ST=California/O=givva/OU=engineering/CN=127.0.0.1/emailAddress=mike@givingassistant.org


@@ -0,0 +1,32 @@
-----BEGIN CERTIFICATE-----
MIIFcDCCA1igAwIBAgICEBEwDQYJKoZIhvcNAQELBQAwWDELMAkGA1UEBhMCVVMx
EzARBgNVBAgMCkNhbGlmb3JuaWExDjAMBgNVBAoMBWdpdnZhMRAwDgYDVQQLDAdu
YW1hc3RlMRIwEAYDVQQDDAkxMjcuMC4wLjEwHhcNMTcxMjE5MTcxOTE3WhcNMjcx
MjE3MTcxOTE3WjB3MQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEW
MBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEOMAwGA1UECgwFZ2l2dmExEDAOBgNVBAsM
B25hbWFzdGUxGTAXBgNVBAMMEGFnYWRvci1zcGFydGFjdXMwggEiMA0GCSqGSIb3
DQEBAQUAA4IBDwAwggEKAoIBAQDlTGZja4uwHQxh11+YkCO9DkWqE32FqaOGot+D
Tm6tfFUdepVjkAfF88sxWX0b/xjChBCK8VeXl9M7Gqes6S/Cti8pXzobYhEBb6ZF
l8gOYrqh35aHCJzLznFZ+r7PNk2r28K0QBR1xdy/48btE6Uc/mpp3/K42hJ+aP3R
LN/0AeZ+CmmblQ5H4ffgL+sJz/70lNXh+B3gStIxAlmBirObg03yKp9/UWrM/62o
oBEocl38OPCbXJRVyN+lL0SYKiTKIRnUIwpTtb/rV1N37rsT/UQQQjhl6qZ08en0
FWEDdSq4M7RZGGCFynLrYll6p8eIcQk/i36PbesjQWheIfzZAgMBAAGjggEjMIIB
HzAJBgNVHRMEAjAAMBEGCWCGSAGG+EIBAQQEAwIGQDAzBglghkgBhvhCAQ0EJhYk
T3BlblNTTCBHZW5lcmF0ZWQgU2VydmVyIENlcnRpZmljYXRlMB0GA1UdDgQWBBQG
88udlOKhIJfnCbGJ1R2EpN8tQDCBhQYDVR0jBH4wfIAUy7kWBURVvWC1a/FF/zhn
PM8waOahYKReMFwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRYw
FAYDVQQHDA1TYW4gRnJhbmNpc2NvMQ4wDAYDVQQKDAVnaXZ2YTEQMA4GA1UECwwH
bmFtYXN0ZYICEAEwDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMB
MA0GCSqGSIb3DQEBCwUAA4ICAQBi0iJleyKDl3tnXjFyEM6169Bk/+Oqqgp1zK9P
ENgub4FY9yXX8faoq8DVl1kaK/90Tr+RMuD7BNoEboDqHBmK/nTI2/ThheOCOi+t
+ljc5YBjCEcJAfeMe7KNM/N2EWVuAxC4so8lHa7vGxAPy1E6vH9jQOroZuq6XTEO
P13Oavi7Ph0yd4ZebkZZ9+F7sZyTREL26a5U4RJQfmnO+XL+7VU6G9fJg/hXhc9e
tzXkVXk2NF8LG+kkqQR5rzuEgCmv62wkZKxxbbpPEHY0IBgysSQ2U/ZjB1J6WSCO
6YVfe1aCilrk88HTOq5FYC10elCGx4UHl/BEtvn+MLIlhS1G+JHYaP4D3rRvtu2R
jNeHzjrFRHIlpLZj7pyF5capPX/WERcf00rMVvbm58s389aIOFCD2TWCk/wAr2cD
8DZQzIbXP7CaSzSAR5Etwtj8TedtYL5muAjU+EhjOM8Fq+oTLa5BtJJBRgFKssKJ
NfGF0Zxx7bngBTqcrFLQtHGzsO6FxZETMmmr2JjA97ABy9f69ezwLyZ2/leZ3GAF
UuDUdrEKhHYPAzDQ0DOHr+oIz2i11wf9fBf1qFpLQwOS2beoxcIA6ckrjmH3vATy
RieQ22ziG7hJ36YTBjjTLaDWRxe8z9S/tNHZpxYloQ292JLYTfgGoGxqQfkeRvIq
gbol5w==
-----END CERTIFICATE-----

@@ -0,0 +1,59 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA5UxmY2uLsB0MYddfmJAjvQ5FqhN9hamjhqLfg05urXxVHXqV
Y5AHxfPLMVl9G/8YwoQQivFXl5fTOxqnrOkvwrYvKV86G2IRAW+mRZfIDmK6od+W
hwicy85xWfq+zzZNq9vCtEAUdcXcv+PG7ROlHP5qad/yuNoSfmj90Szf9AHmfgpp
m5UOR+H34C/rCc/+9JTV4fgd4ErSMQJZgYqzm4NN8iqff1FqzP+tqKARKHJd/Djw
m1yUVcjfpS9EmCokyiEZ1CMKU7W/61dTd+67E/1EEEI4ZeqmdPHp9BVhA3UquDO0
WRhghcpy62JZeqfHiHEJP4t+j23rI0FoXiH82QIDAQABAoIBABoCgpLAfkXp5Z+r
mJJrt0IYvlo+f7yLs5rwGr6ARRm0wsrnPs7eZaNUtjXQ2to0I5Dc/itkmPT/KdzI
d0Cr6tkBZXQh8ytsAlXRXmECiJTpKhZ2kFKc7XxI3J7CTKagmEroULu1kRyS5yl1
Ivl9gvs4+MUtvBCv2+0u56u0lWrOHWE1yUgZm94nfeS51Rpx08QnVX38UA32Oc/w
8foJFb4mu0m5XBl4osf0TCJykbIL3y85O6t1lCWR0HcUI/DY8kTZKsCHPgbEFvsb
dMJFWigYXYApbVAAo+6DtKSfvA6ru2h5mfcXmVi4WGNtj8dQms32Zx5I6W6Qx7CN
sJAJJQkCgYEA+vLkLo8V5Bu21d23gSzzWvG33hcEcAYtJdY7Gpbivr+ZP9OJt4dv
8bq1T72E9XRH943eNbqOeewQiQ/pO0aNMdx5tPZdVtVjrujqVWG3+DKiun6vKmYi
ViCVvXpkr3OJ/HqLtSeLtvyf9f4BlDE/U7IR3TfSQEyBo96ZqxpGEycCgYEA6eny
a+ItntnjOmITPs7HoJF18k5vyVKiMTRSO9o6UZaLB7Ljpl9wppN1aMZNuYn2JvYo
Nytrj1HA8jRq6sYvIJkY+pssKikwml6P/AetjVD6eUvpghHCh9685mugjc2iyutR
i+Bur/+04lDU/H4VHMNAn8gVit2sN8t15+ghb/8CgYEApNQA+GvXLxrc/qBAtcH2
ndeCs4dezM3hvaZ278IHcM6cNAYXwMpexuGh0Zxjxmz4ECvItnWwu3hIbB5dTSfL
+eIctrXTHQPQE8S8lhQ3J/jqVaB8IVcwWm3QrMHFfFBhY8qCFRzCchCAaKzMELBA
LhMaFLljigQ2apH9URtSx6UCgYBfAhnn/d8fxUpI/WrpuN1Wd56bg4ZeFEUyjRjV
nKbRWr8vqlZSzjMYRY6Ltvf8429qldLxzZ4LgV5IQkgnAcZEjEqcB4jhuwc1vDDp
Ykj4vCpwOAgpP4Nu4maBhLeawSpdF0Vw9gCfVdInlkNcJu32V8wY2hD97Vm089v5
DM0ACwKBgDYaXb0MvtG0BX2iSfb343KscWbIvuoaB+d2mbrotRD35qd5KKTtn7XI
ZuQzobtjTMkZJHK3MfImhSgdi+BBo8P4i6nJNmoxuPfDOIvTDMZ3rsPaDxmIQhW3
5OD4abzJYtu0dC70C8sJWMIyVFrqGtUDYEKMG37n2HL1gC7zVp4n
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIIFcDCCA1igAwIBAgICEBEwDQYJKoZIhvcNAQELBQAwWDELMAkGA1UEBhMCVVMx
EzARBgNVBAgMCkNhbGlmb3JuaWExDjAMBgNVBAoMBWdpdnZhMRAwDgYDVQQLDAdu
YW1hc3RlMRIwEAYDVQQDDAkxMjcuMC4wLjEwHhcNMTcxMjE5MTcxOTE3WhcNMjcx
MjE3MTcxOTE3WjB3MQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEW
MBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEOMAwGA1UECgwFZ2l2dmExEDAOBgNVBAsM
B25hbWFzdGUxGTAXBgNVBAMMEGFnYWRvci1zcGFydGFjdXMwggEiMA0GCSqGSIb3
DQEBAQUAA4IBDwAwggEKAoIBAQDlTGZja4uwHQxh11+YkCO9DkWqE32FqaOGot+D
Tm6tfFUdepVjkAfF88sxWX0b/xjChBCK8VeXl9M7Gqes6S/Cti8pXzobYhEBb6ZF
l8gOYrqh35aHCJzLznFZ+r7PNk2r28K0QBR1xdy/48btE6Uc/mpp3/K42hJ+aP3R
LN/0AeZ+CmmblQ5H4ffgL+sJz/70lNXh+B3gStIxAlmBirObg03yKp9/UWrM/62o
oBEocl38OPCbXJRVyN+lL0SYKiTKIRnUIwpTtb/rV1N37rsT/UQQQjhl6qZ08en0
FWEDdSq4M7RZGGCFynLrYll6p8eIcQk/i36PbesjQWheIfzZAgMBAAGjggEjMIIB
HzAJBgNVHRMEAjAAMBEGCWCGSAGG+EIBAQQEAwIGQDAzBglghkgBhvhCAQ0EJhYk
T3BlblNTTCBHZW5lcmF0ZWQgU2VydmVyIENlcnRpZmljYXRlMB0GA1UdDgQWBBQG
88udlOKhIJfnCbGJ1R2EpN8tQDCBhQYDVR0jBH4wfIAUy7kWBURVvWC1a/FF/zhn
PM8waOahYKReMFwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRYw
FAYDVQQHDA1TYW4gRnJhbmNpc2NvMQ4wDAYDVQQKDAVnaXZ2YTEQMA4GA1UECwwH
bmFtYXN0ZYICEAEwDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMB
MA0GCSqGSIb3DQEBCwUAA4ICAQBi0iJleyKDl3tnXjFyEM6169Bk/+Oqqgp1zK9P
ENgub4FY9yXX8faoq8DVl1kaK/90Tr+RMuD7BNoEboDqHBmK/nTI2/ThheOCOi+t
+ljc5YBjCEcJAfeMe7KNM/N2EWVuAxC4so8lHa7vGxAPy1E6vH9jQOroZuq6XTEO
P13Oavi7Ph0yd4ZebkZZ9+F7sZyTREL26a5U4RJQfmnO+XL+7VU6G9fJg/hXhc9e
tzXkVXk2NF8LG+kkqQR5rzuEgCmv62wkZKxxbbpPEHY0IBgysSQ2U/ZjB1J6WSCO
6YVfe1aCilrk88HTOq5FYC10elCGx4UHl/BEtvn+MLIlhS1G+JHYaP4D3rRvtu2R
jNeHzjrFRHIlpLZj7pyF5capPX/WERcf00rMVvbm58s389aIOFCD2TWCk/wAr2cD
8DZQzIbXP7CaSzSAR5Etwtj8TedtYL5muAjU+EhjOM8Fq+oTLa5BtJJBRgFKssKJ
NfGF0Zxx7bngBTqcrFLQtHGzsO6FxZETMmmr2JjA97ABy9f69ezwLyZ2/leZ3GAF
UuDUdrEKhHYPAzDQ0DOHr+oIz2i11wf9fBf1qFpLQwOS2beoxcIA6ckrjmH3vATy
RieQ22ziG7hJ36YTBjjTLaDWRxe8z9S/tNHZpxYloQ292JLYTfgGoGxqQfkeRvIq
gbol5w==
-----END CERTIFICATE-----

@@ -0,0 +1,65 @@
-----BEGIN CERTIFICATE-----
MIIFljCCA36gAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwXDELMAkGA1UEBhMCVVMx
EzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDjAM
BgNVBAoMBWdpdnZhMRAwDgYDVQQLDAduYW1hc3RlMB4XDTE3MTIxODE5MDM0OFoX
DTMyMTIxNDE5MDM0OFowWDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3Ju
aWExDjAMBgNVBAoMBWdpdnZhMRAwDgYDVQQLDAduYW1hc3RlMRIwEAYDVQQDDAkx
MjcuMC4wLjEwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCv2FzJXvmo
pVHiBuhH8omCP3r9bDAcXh6l3uYwz+nPKo3BsxsLLxa/Xbn9oeWyY4h2JWBebGEU
31GsF3y0hhyXzV7TFg1SauRFmbETQaLSAvNLAofptUyVxoIPpslfvr5GGNpWp6V0
5FpyQf3uYnzvgeQMsujqUWg2L3sCx1MGALS9wrGCtA1DbyB7hfC9XE5OVc+8mqAk
mZsst/Z6ljqHXr5w9f1F/a4SYpeby+DKgzHWneLghqgEdtzJb3JshyQa6zhUrEBA
/rUyxca/jYh35pHQtfFLIakNn2l+MXQocKl8v0HjCqwp+qVHzUplBjxoXuPcObEE
kBTQPWBN1yk8nj+YGr2++rEbvnX1iaUigdGCt/zZUXWbJxVNp3t0ug539PPZ4qCM
ycRX6XtzlR+zoPxQvLmqpPhV7lNf/Ulpl52KbbmjhsrsFYXZNeXw0TjOgSVtvHxw
GDiNTfe3ICbPYhtNBKy9qlKOhX+rR+mkIfKz8rmqRUvR8fsKQs0skQcdq6rFyxBV
JNlRqZbcXVht2JqAULIrYC7Pk+kAL5Vr+A88nFy7S6sjMCgHT0u9IYNvDsXom3Xs
hduU9jg1ChLpx0XZ/PCZp8V09ue6T83XwDYI7VVr89mRTRHxlrUTiSY4qTJvKvTU
LrPpdTlKco7OZ2nl8yWaB9t0ZBQppC2CEwIDAQABo2YwZDAdBgNVHQ4EFgQUy7kW
BURVvWC1a/FF/zhnPM8waOYwHwYDVR0jBBgwFoAULkpERdRLS5nYggsCPxc+nbs8
U3cwEgYDVR0TAQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcN
AQELBQADggIBAF+Dm7QnPf9rI5LXly2wGu+31yACmkqjJ++MqQU16RU2lQu95v4h
5BOhqDOaT/tTutCRGQcomUlw9p8T8NTluXyk6vUX06fYNhLD1172hKMBAALCW/UX
DVZvmhJJTyWhineMJKSCih07o2riKW8YLvW2rKI2GezH/tNT4ENUEhpXs2vEOTUP
KmNCY7M/c60RqCxH8bpypkkfhDR+mPcQSy2bPX/aYgUf5+K8gkati3x1EgmgF+yk
r1dGjOfEQtsv+klHMoy6OYCyX6ZDkWzLioGMLi2aoJVIkX4YooTxsuLa3f90ij3X
yNgBf8FqV24lMjggqATCxzEp4ULH/b5fhaFzZR/2L1NhpMoKOsbABL91Br6664hS
ZE68uB8eegEDfR5uGbpl3FNef5p3v9Ehqz66VmyLCuBt45mbELSjtp3Qf7bdFS3z
I9LM4UxU9qUP6eZG7k7dzkmKm0U4p1n9+ijEgFRSWa7y7m/yrNVjJ+9V5dA+FPhP
2aF0aiY3twyYmG2/wKCib/56E52KpxwYaKjSDevY1GA+4jJzpm8yXucyGBkebFw+
44KnLw8GnLBuaYONHRnpnzUGSzHAbrJiD3IWPi+m/lWhhaIUNpwCXiNFmKH50og8
PdqrfzRDxPoSYCi5aCvLPEtz8F9SBPcj68P+tLHmdoj1tW6zKevTqmHz
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIFnjCCA4agAwIBAgIJAL/BxYbTTgtDMA0GCSqGSIb3DQEBCwUAMFwxCzAJBgNV
BAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRYwFAYDVQQHDA1TYW4gRnJhbmNp
c2NvMQ4wDAYDVQQKDAVnaXZ2YTEQMA4GA1UECwwHbmFtYXN0ZTAeFw0xNzEyMTgx
ODU1MDlaFw0zNzEyMTMxODU1MDlaMFwxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApD
YWxpZm9ybmlhMRYwFAYDVQQHDA1TYW4gRnJhbmNpc2NvMQ4wDAYDVQQKDAVnaXZ2
YTEQMA4GA1UECwwHbmFtYXN0ZTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoC
ggIBAJ6gHIylr1TIDGk4Q1H5KaMX22v6d9f8XyttcMzMOqjMSpdokzRVFuIjLs9L
VNPA2Z0FsrUYN6TVINEf2XnwF4Ilz0OgGkhBKUzddnDDOHbIdK1Eu2J2CjBTDN+z
7BZvY4GbaHmDb5axLP0pG6jfBK3dzcDgPOxZmwgNjNYe2D4w3GBu+KIoEBkLNem2
sTgEoNgNniT2ibd6vL7l1UFyEN+yNTAVqwxLuTYHjavfQtyLtWl9hmhgEzKJxR7G
ZWAFbgfz9p1AV1mPu0+4b8GKSsFOoKLZDcwNqeGCe3tNVJfzoZptfNcFjzPWSpn6
DXrODveYgQ4hEBsvLpeNkUDWLB+TnJ9jihmi/X2LF5O1iXFFXvrfBElvn6RRQw8d
/J6jFTThzSGsxg86RlZUuuL9QJ5yvuThCMdHCveL7LHdbFbo9HsmVozaQ9NztvC1
BC/JyjiA6XZQXa7ShyNQ/JVBsiEZH8qdKcGY8N8r7Ran4kjyhULP2UYtL4uYVXuW
fjEEDHi4exvlwfV8TQRAADiL5HHICquIICJRxga4BJbBROWfOKdhA7Dbpx5GDDBT
IYEassVIP48Eb04qa67Ar87Xd24mwlPSb8k9aBNS3sHlwnGIPKtSISmXKYFJFg/2
Kx4rs8e9+/Le8zNrroxSQJ9Ex1YY8n73XTbCP4rdhrfk+j97AgMBAAGjYzBhMB0G
A1UdDgQWBBQuSkRF1EtLmdiCCwI/Fz6duzxTdzAfBgNVHSMEGDAWgBQuSkRF1EtL
mdiCCwI/Fz6duzxTdzAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAN
BgkqhkiG9w0BAQsFAAOCAgEAnIx3DzkwGN+Zw6E7hjXthmAWib14UYEf7ED+qUfV
kJTVsiOhqzQR/cWtzl0fS6vMcZngIa8OOBc1hcUuWz9ujKfLjyPHieJzTphx2c7b
qB6pr9L/vMewPRXc96L8JCkOzctOsl+C849HvdS6JsAxkFEG05Jt64iMN2ttRgX0
85BwwoXjcuHCZdIuwXkxs0psOdMTiA+ibDPuLLu+fzo2l1vTfWVnA5pcUEbND6OY
AzMNXu9u6yTJnLW5thy+KVTwaanmmXWiwQVTmceJrR/SeYX0u+kpjQyNh42Rk+R8
67x0UoK3uzWEW3+Qr09xk86Oal++0ErhsbcayT8k1C3AggcjF1Av0wEC/UE/NwJl
GEeSDrHb0Ll5bs48BxWpj671PXCTjxKSJK48iexgVRYiIl1OnrggHul0wzjQFrFa
HQradVlU8fSYNMa2taQyXPb00P+IU275TL0BioTdwkmk2bp57d9hFuKkECL2Yqcm
/zGuaWIy3tIic8I51YUdUpuj1TvpahjxxW9SlCwV5p/IDgwhaJshT2nZVoYxcTRe
x1W491gS7xPCSzS9cXMdG+7DHoxiGGudVprzLObMP5+RjArQPSgGhqnW3hFVSbz1
H+25dNeJeQ5ouWqxMD+Abl5j7OwxVEDyS7D6UCjVaE1PxM6izoEvYoKdKCoB9n7X
lpY=
-----END CERTIFICATE-----

@@ -0,0 +1,31 @@
-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgICEA0wDQYJKoZIhvcNAQELBQAwWDELMAkGA1UEBhMCVVMx
EzARBgNVBAgMCkNhbGlmb3JuaWExDjAMBgNVBAoMBWdpdnZhMRAwDgYDVQQLDAdu
YW1hc3RlMRIwEAYDVQQDDAkxMjcuMC4wLjEwHhcNMTcxMjE4MjEzNjIwWhcNMjcx
MjE2MjEzNjIwWjByMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEW
MBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEOMAwGA1UECgwFZ2l2dmExEDAOBgNVBAsM
B25hbWFzdGUxFDASBgNVBAMMC2RlZXB0aG91Z2h0MIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEAzGjhXGiy/507TJGaBw90wmIMO1Vv5jBnjBzcYF0d/+va
UQmS5dVWPXYh3qEVuQB0yYHRyUFO/d5jWLbdiy1Kni+DUUUB0rBNi6192tOdnHyD
eT3/qc4S7OJF65Va0gCONwCvyquanDkrA4kAo7e8bxiQ159jJQHR0nDWLx1QWLjQ
bxtEVxYnbkFcaZ5R9+D0wxFwjsnwmgItSDKt1acdw4JxukY/CtYwRKqEEPoQ7RF4
fJy1SSog3H+Uf1/VYET8I4JO9CmtEJl6RzvLV9hFs5R7qU2SZv326fVrIDtsviJJ
d/18e5VkdeJKkVN+0gQ348Srujmiwe0gEaqB7q7EmQIDAQABo4IBIzCCAR8wCQYD
VR0TBAIwADARBglghkgBhvhCAQEEBAMCBkAwMwYJYIZIAYb4QgENBCYWJE9wZW5T
U0wgR2VuZXJhdGVkIFNlcnZlciBDZXJ0aWZpY2F0ZTAdBgNVHQ4EFgQUW1lHAoMo
eYNUDwtWbTym0JJuvvcwgYUGA1UdIwR+MHyAFMu5FgVEVb1gtWvxRf84ZzzPMGjm
oWCkXjBcMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UE
BwwNU2FuIEZyYW5jaXNjbzEOMAwGA1UECgwFZ2l2dmExEDAOBgNVBAsMB25hbWFz
dGWCAhABMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkq
hkiG9w0BAQsFAAOCAgEAeGe1HtR+2I2xseSv0AfZ7Hh67oJMqgBfBs0NHE9n7huK
yvMBJesOmjgGmMqKCjZJFD1gbZAMcT/M7m8BBW4vFpM8wKnjyzdGaw4/cqz7QhV0
j242Ytf8dTZlwSnLTOiUupP6W2hnUgYCw/dSaerybRp/LCKMOjoF6/JAC0FNXgYx
1dOqpF0MwQrACxDYNi4kU5tgtJUeDOxVjiBZLYHtV5uFsNHAX09w97NPsFmg9M6Q
Vt6OD+RAUBmjlOJoYEObx1pMdV10mXlhireP9RB+KpOS8yksc5u8VZtnTyxNqkJc
1f91qHTJT7EGrtRvkGVcBh2uc5CJA9wjDOHKwQYUEoMUVn3isDT+cn8B8/N88zE8
vNRiFq+X4gcV4lkYlkv7uUpF1RxkNG3ufUZP6STZsQWlG3K0zGWVSHdI0afNbc4k
py69UGC+3dshGgPCDw+YXp10IoFEHW3gtJc6D1aXFTPLC8b1NMPLmzIln/KRfA7O
hephwZ2vw9qyvOcDGfKGX5U0eiF2gqCajqwwYex0OIyEx9+IwiUFDGePhv9ra/yV
XVZkW3XmOipslZPseoZ+oYISQZDyKdCEtzp8n4Q4SXZbH/z3EO6E5W6oaP6fX3Ch
pbMZVfPhbrePZUsQiHwVHx8ZbcV+grvuRtDVd9EBnzF875BUB/wbfKFRKWtRyBc=
-----END CERTIFICATE-----

@@ -0,0 +1,58 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAzGjhXGiy/507TJGaBw90wmIMO1Vv5jBnjBzcYF0d/+vaUQmS
5dVWPXYh3qEVuQB0yYHRyUFO/d5jWLbdiy1Kni+DUUUB0rBNi6192tOdnHyDeT3/
qc4S7OJF65Va0gCONwCvyquanDkrA4kAo7e8bxiQ159jJQHR0nDWLx1QWLjQbxtE
VxYnbkFcaZ5R9+D0wxFwjsnwmgItSDKt1acdw4JxukY/CtYwRKqEEPoQ7RF4fJy1
SSog3H+Uf1/VYET8I4JO9CmtEJl6RzvLV9hFs5R7qU2SZv326fVrIDtsviJJd/18
e5VkdeJKkVN+0gQ348Srujmiwe0gEaqB7q7EmQIDAQABAoIBAB1b7wp7y0Hljm/X
9dyPvsBwnrsi8ViJmUXJm2mH1lg8wvWiv2Odea6IOiMk1d7ljuCmccBLThIuj+xd
D4L+9Vm1D1Jr6/Ab/HdUauA0Rs4EIEoYupDkFVnKwiotIIdLJyIFSjp83U8U8vWm
Bt589Gasi5k8vlvBYCauqETKHBEx8JS3RhEh0F1FDnrTWnezKnnw3mlZou6t3eaW
DQjavnPHF1c0qEciSPu8LmAcMB+B0g7jENOcYH+s5aN9v0KhLzTmyKszbNtubDkK
ktk/l3aYrfNBccFQMyrfEi+4oaKR0Vtjte/k25HGFic/GtqAW+kIwj3UQC7esOPf
SB81ShECgYEA7Sbok+XUmBkIkAlOeBuOVruLTxYu+D1adVOokJlETWdVC7mebuq4
QrwsoikqkZdFOV75FIlu6Gr6vP0X+LJHXb6aa004fF+Cxfp5ZrdLO9hNkcjlfhB8
jjPd175/5n2dm6sMycBmZUGowED2RB4X1X4OmW7uRYidht72xEjnW20CgYEA3KfM
IyxQ/nSKa/DdkEr3v83wkWfZ3inGrfY6HNScRYkJYxyWGE96T5pa9Y+D/me4ySVH
KvhSUiSD2q4qTmemDPj2pDQI4HBheghPvGUWal8fFUiqvNOOUHM8nZz/nVYZlNQe
RT/FkigF3npSPT4oNI+FaNgolF/82KHLe8O2Bl0CgYBZKnTuDs8FNPxcM7OWQz4c
bD1vyfZ1DZRyYrcRTx84Py7hzrO8HnKTXO8nNXU08nxrmsLqLtZNetO1tS+LKXTd
0Wl8CLfBQ6QGzitRLH+UC7r2omNvJ8G9MdEqagzq27YjroeLX9TgI3TQfFxbtjjd
45yXofbinAAmkrSTjpm2bQKBgQCNNQS6bZ3XeRUsRpRDxvYNVOli5CbUub9fjHdc
A+ONzEipmJ2lKReI4arcAt/hatciQiztHsTvtFZ9F4ATdNka7ChKpNIZb1GyGqeM
VNSndgAaSsqY1Hn6mgRsiRA7y+HLEIPepRT2l45J9dWzQ5fPKxmhItO1QEg7Ci+C
IJjYMQKBgQCLxayFVxp9QSyCK8qc1moJ/ZHauV4fCrBG8a3ol+CEsOh0zH5BAEf7
+/OnIAd+8u2PqhcWMhd+FEuX3dgIXIA9wN1vVQQ0n4gsUROkPOGIZ9cPeYlOLBMS
zmFZaWOV8TJ1SVIpaDKOj6upp5JlqjPkgd5NQHz/g8Y9IElCMXWHFw==
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgICEA0wDQYJKoZIhvcNAQELBQAwWDELMAkGA1UEBhMCVVMx
EzARBgNVBAgMCkNhbGlmb3JuaWExDjAMBgNVBAoMBWdpdnZhMRAwDgYDVQQLDAdu
YW1hc3RlMRIwEAYDVQQDDAkxMjcuMC4wLjEwHhcNMTcxMjE4MjEzNjIwWhcNMjcx
MjE2MjEzNjIwWjByMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEW
MBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEOMAwGA1UECgwFZ2l2dmExEDAOBgNVBAsM
B25hbWFzdGUxFDASBgNVBAMMC2RlZXB0aG91Z2h0MIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEAzGjhXGiy/507TJGaBw90wmIMO1Vv5jBnjBzcYF0d/+va
UQmS5dVWPXYh3qEVuQB0yYHRyUFO/d5jWLbdiy1Kni+DUUUB0rBNi6192tOdnHyD
eT3/qc4S7OJF65Va0gCONwCvyquanDkrA4kAo7e8bxiQ159jJQHR0nDWLx1QWLjQ
bxtEVxYnbkFcaZ5R9+D0wxFwjsnwmgItSDKt1acdw4JxukY/CtYwRKqEEPoQ7RF4
fJy1SSog3H+Uf1/VYET8I4JO9CmtEJl6RzvLV9hFs5R7qU2SZv326fVrIDtsviJJ
d/18e5VkdeJKkVN+0gQ348Srujmiwe0gEaqB7q7EmQIDAQABo4IBIzCCAR8wCQYD
VR0TBAIwADARBglghkgBhvhCAQEEBAMCBkAwMwYJYIZIAYb4QgENBCYWJE9wZW5T
U0wgR2VuZXJhdGVkIFNlcnZlciBDZXJ0aWZpY2F0ZTAdBgNVHQ4EFgQUW1lHAoMo
eYNUDwtWbTym0JJuvvcwgYUGA1UdIwR+MHyAFMu5FgVEVb1gtWvxRf84ZzzPMGjm
oWCkXjBcMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UE
BwwNU2FuIEZyYW5jaXNjbzEOMAwGA1UECgwFZ2l2dmExEDAOBgNVBAsMB25hbWFz
dGWCAhABMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkq
hkiG9w0BAQsFAAOCAgEAeGe1HtR+2I2xseSv0AfZ7Hh67oJMqgBfBs0NHE9n7huK
yvMBJesOmjgGmMqKCjZJFD1gbZAMcT/M7m8BBW4vFpM8wKnjyzdGaw4/cqz7QhV0
j242Ytf8dTZlwSnLTOiUupP6W2hnUgYCw/dSaerybRp/LCKMOjoF6/JAC0FNXgYx
1dOqpF0MwQrACxDYNi4kU5tgtJUeDOxVjiBZLYHtV5uFsNHAX09w97NPsFmg9M6Q
Vt6OD+RAUBmjlOJoYEObx1pMdV10mXlhireP9RB+KpOS8yksc5u8VZtnTyxNqkJc
1f91qHTJT7EGrtRvkGVcBh2uc5CJA9wjDOHKwQYUEoMUVn3isDT+cn8B8/N88zE8
vNRiFq+X4gcV4lkYlkv7uUpF1RxkNG3ufUZP6STZsQWlG3K0zGWVSHdI0afNbc4k
py69UGC+3dshGgPCDw+YXp10IoFEHW3gtJc6D1aXFTPLC8b1NMPLmzIln/KRfA7O
hephwZ2vw9qyvOcDGfKGX5U0eiF2gqCajqwwYex0OIyEx9+IwiUFDGePhv9ra/yV
XVZkW3XmOipslZPseoZ+oYISQZDyKdCEtzp8n4Q4SXZbH/z3EO6E5W6oaP6fX3Ch
pbMZVfPhbrePZUsQiHwVHx8ZbcV+grvuRtDVd9EBnzF875BUB/wbfKFRKWtRyBc=
-----END CERTIFICATE-----

@@ -0,0 +1,31 @@
-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgICEAwwDQYJKoZIhvcNAQELBQAwWDELMAkGA1UEBhMCVVMx
EzARBgNVBAgMCkNhbGlmb3JuaWExDjAMBgNVBAoMBWdpdnZhMRAwDgYDVQQLDAdu
YW1hc3RlMRIwEAYDVQQDDAkxMjcuMC4wLjEwHhcNMTcxMjE4MjEzNTU5WhcNMjcx
MjE2MjEzNTU5WjByMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEW
MBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEOMAwGA1UECgwFZ2l2dmExEDAOBgNVBAsM
B25hbWFzdGUxFDASBgNVBAMMC2RlZXB0aG91Z2h0MIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEAqg/1329TufwLrsoal0NstM35Dedf33yqxoaBJ5u1y5V+
ZlW5IkGIWqNUZAXZQrv9YP9QYqFC7eN61IhLAgnZfiDlUEahm/6/T1zvf+1EUsd7
SCWfxYvJNfj1a2e39n3zJtVxB1DshvdfxOQ1j3d9SA8sxbbDrrLFaXzfD21FaUZv
mdgEAXu5sUh+sfvsMO/s0JA2FUIP8jal09P2CAFK4Nt9gih/1auvUKs3AuPKXWjb
xEwMQBibjKXBnqFaG8Xpcji7bDahtZvDxBaT68G/HQGUPNCaQ7vCMCuUSvPyX/eu
zTpIn5q18nxDxLLP7GCkhcq04OUZIdyEfXjberRebwIDAQABo4IBIzCCAR8wCQYD
VR0TBAIwADARBglghkgBhvhCAQEEBAMCBkAwMwYJYIZIAYb4QgENBCYWJE9wZW5T
U0wgR2VuZXJhdGVkIFNlcnZlciBDZXJ0aWZpY2F0ZTAdBgNVHQ4EFgQU4Qc8UKy4
zzrFkP0zuojME35caxswgYUGA1UdIwR+MHyAFMu5FgVEVb1gtWvxRf84ZzzPMGjm
oWCkXjBcMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UE
BwwNU2FuIEZyYW5jaXNjbzEOMAwGA1UECgwFZ2l2dmExEDAOBgNVBAsMB25hbWFz
dGWCAhABMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkq
hkiG9w0BAQsFAAOCAgEAV+CnMPQajFiD2SQQA7DYc51p4I5BuYnPSmbf7R76C4OO
or2JMuvC995QoLzNGrXE7DwJh/fsKIjP7xr2QzhfuXJPSKtKQzEFfOkHTVu4r2ge
O/bqDPkK9Ves6EK4ecRI5Grx5LJB2M0MNvmF5ylsRXNxp9HsTgou1Cs8/0OhApnw
X+d0mn8fjEWo9q2ZFSjDawgJS62hySXSdQKNmHweytrMAttbjpL0d/U7a1Xcb3cI
DO2MEnP2OkIGyBZ+8VBWWrj9JuVM/kf87SALndY/qPzsb5a4/3YkHni9cB2v3Och
ozKux5tW6RfFO54d4sGFDus9YsMGFlwQdT/FCqB5sAz94ovojFXrl/UqnKVDFbVT
p6Lyj1qE0J6dJefpQVO9Mb8Ixol9hPD7FhhCSonJfBHP6OG9XfgVpNAsNOZOMIBa
ue31CBysE2/DLYSkUuMH4zS/GdnSnBTbl6MCKtfyK5Rpaay0ZFdNytStSWswtwXV
CGgoz/4l7dOBum4HyprEM9h9Dw1gf4IDG4uY59tEtg9jlZHRBq1tEWascQuBne1m
ahKCEvbIjYfdWMUGMud7SOTPFBQDT4gVDzXnIB6iYMqwIyFx4OJQsIBlh8BRtwAP
NhGAPw51dDCwLc3Z/f1qXekIgmMVlbeH9EkR2RieL7PLyxsqJnQqBKz8Puw6Bqc=
-----END CERTIFICATE-----

@@ -0,0 +1,58 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAqg/1329TufwLrsoal0NstM35Dedf33yqxoaBJ5u1y5V+ZlW5
IkGIWqNUZAXZQrv9YP9QYqFC7eN61IhLAgnZfiDlUEahm/6/T1zvf+1EUsd7SCWf
xYvJNfj1a2e39n3zJtVxB1DshvdfxOQ1j3d9SA8sxbbDrrLFaXzfD21FaUZvmdgE
AXu5sUh+sfvsMO/s0JA2FUIP8jal09P2CAFK4Nt9gih/1auvUKs3AuPKXWjbxEwM
QBibjKXBnqFaG8Xpcji7bDahtZvDxBaT68G/HQGUPNCaQ7vCMCuUSvPyX/euzTpI
n5q18nxDxLLP7GCkhcq04OUZIdyEfXjberRebwIDAQABAoIBAC5g9ev+f3X8T+9W
PNQ91hqlBaQOEq5vYF+N9REpPPYNihA8lqXJ+3bEjlJM6ghyHlLirjiHxCn+XNQz
a0leCEuGiyNOb+qMGf552PMpcPWmY2+0mxMT4Ubv43ZsLdZyWOqhURbuseLI+fxH
RHgg3TDWup4dDtbI+F+hZ2/cnA5ubkdBKnXU/vQm27vCiQIq+Ma/Q8LYqcwl0nPB
t5dlufni/RimWyd9fOoCklff/MLruXWOXc2N4IB/YZtP+/OxxVjHRKeg+7a3eTBZ
vupzQlxw3QZz+ACmbilmGODyJceRfB9qdulM97q56qyX+aDYwnsy1wOTgvc8csIw
mGOZo/kCgYEA1+4v0YvqxRHq8M1GMvl/T3lx7NXc6jlHa4MEZyY0B/GG4kK20m0P
9UkEm1OizQiI9uPHXN0dmL3jjOpgqRCQqut+z8Z15smp/vtuvlwduY/WhZHuvvek
QbWY7MaX+zrceZITpfVfc9VWzLjyBcVgDoXL73dQ2ZU5/YUp1MlOJk0CgYEAyZ7N
fBS4UaoYpYhDeFJwh4tXJGu/i/NLALalmapzvFDfXuFgaAO4YbvqqtUMW1J9bi0M
0p+APs1xSsuFAoTquDkr08z5pV6SKYH08mtxej9cZPJIrUvbPxwbbGPQ/ZSF9rnG
ToE2GfhXMJFsR3R4qumeHVZK3MBAAwPGza76basCgYA37Pr5nPGLZR6ii6gY38H3
hY7aNnHnQDqdP+vOA3kKbaXvyDOtwI2Xi/fjev/5drJyr4AdLy/RNa1P/AxY/W9a
tW+8xLwYsDaVUe3W4+jW/MglBCz/zQf/9NbMzIrkiNQ9sHXiT/EPATxf/a7Bi+Nb
H5A4T4DjOeExJmI1OIZDKQKBgGyfbb1nvFXi+hxUaWUtpQqhe3VXx36yuLnNrTI4
rtnKCE2pxrLDLlcZUrhux5V7v6/X/YyL+h/btynAtAxDZ+GQi5g0WltJtB1AsqLY
V+6wrCqGjbkvoRNDJVMkA7haiEIAnGI3Itqi/PZhoqBsk4YhDtpnXzXHLbVyF21A
1BK5AoGAOBCVEv03x7EoqggigULP58Q1LL/xw529YcTb8CxbQcnoBKvgGrAj/5J0
kdBJkSWOhadGt3RDTxk4/HLjwgBKmX4Ss/Y6iIakKzL08yEXRlt/PbYSUhG+pF/8
d+YhKfhktop6RSEKMANnWFCxKVKOcRe6oYQW50WkX4ylRgyPdu4=
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgICEAwwDQYJKoZIhvcNAQELBQAwWDELMAkGA1UEBhMCVVMx
EzARBgNVBAgMCkNhbGlmb3JuaWExDjAMBgNVBAoMBWdpdnZhMRAwDgYDVQQLDAdu
YW1hc3RlMRIwEAYDVQQDDAkxMjcuMC4wLjEwHhcNMTcxMjE4MjEzNTU5WhcNMjcx
MjE2MjEzNTU5WjByMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEW
MBQGA1UEBwwNU2FuIEZyYW5jaXNjbzEOMAwGA1UECgwFZ2l2dmExEDAOBgNVBAsM
B25hbWFzdGUxFDASBgNVBAMMC2RlZXB0aG91Z2h0MIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEAqg/1329TufwLrsoal0NstM35Dedf33yqxoaBJ5u1y5V+
ZlW5IkGIWqNUZAXZQrv9YP9QYqFC7eN61IhLAgnZfiDlUEahm/6/T1zvf+1EUsd7
SCWfxYvJNfj1a2e39n3zJtVxB1DshvdfxOQ1j3d9SA8sxbbDrrLFaXzfD21FaUZv
mdgEAXu5sUh+sfvsMO/s0JA2FUIP8jal09P2CAFK4Nt9gih/1auvUKs3AuPKXWjb
xEwMQBibjKXBnqFaG8Xpcji7bDahtZvDxBaT68G/HQGUPNCaQ7vCMCuUSvPyX/eu
zTpIn5q18nxDxLLP7GCkhcq04OUZIdyEfXjberRebwIDAQABo4IBIzCCAR8wCQYD
VR0TBAIwADARBglghkgBhvhCAQEEBAMCBkAwMwYJYIZIAYb4QgENBCYWJE9wZW5T
U0wgR2VuZXJhdGVkIFNlcnZlciBDZXJ0aWZpY2F0ZTAdBgNVHQ4EFgQU4Qc8UKy4
zzrFkP0zuojME35caxswgYUGA1UdIwR+MHyAFMu5FgVEVb1gtWvxRf84ZzzPMGjm
oWCkXjBcMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UE
BwwNU2FuIEZyYW5jaXNjbzEOMAwGA1UECgwFZ2l2dmExEDAOBgNVBAsMB25hbWFz
dGWCAhABMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkq
hkiG9w0BAQsFAAOCAgEAV+CnMPQajFiD2SQQA7DYc51p4I5BuYnPSmbf7R76C4OO
or2JMuvC995QoLzNGrXE7DwJh/fsKIjP7xr2QzhfuXJPSKtKQzEFfOkHTVu4r2ge
O/bqDPkK9Ves6EK4ecRI5Grx5LJB2M0MNvmF5ylsRXNxp9HsTgou1Cs8/0OhApnw
X+d0mn8fjEWo9q2ZFSjDawgJS62hySXSdQKNmHweytrMAttbjpL0d/U7a1Xcb3cI
DO2MEnP2OkIGyBZ+8VBWWrj9JuVM/kf87SALndY/qPzsb5a4/3YkHni9cB2v3Och
ozKux5tW6RfFO54d4sGFDus9YsMGFlwQdT/FCqB5sAz94ovojFXrl/UqnKVDFbVT
p6Lyj1qE0J6dJefpQVO9Mb8Ixol9hPD7FhhCSonJfBHP6OG9XfgVpNAsNOZOMIBa
ue31CBysE2/DLYSkUuMH4zS/GdnSnBTbl6MCKtfyK5Rpaay0ZFdNytStSWswtwXV
CGgoz/4l7dOBum4HyprEM9h9Dw1gf4IDG4uY59tEtg9jlZHRBq1tEWascQuBne1m
ahKCEvbIjYfdWMUGMud7SOTPFBQDT4gVDzXnIB6iYMqwIyFx4OJQsIBlh8BRtwAP
NhGAPw51dDCwLc3Z/f1qXekIgmMVlbeH9EkR2RieL7PLyxsqJnQqBKz8Puw6Bqc=
-----END CERTIFICATE-----

Some files were not shown because too many files have changed in this diff.