This folder contains several tools to use and process the data recorded by the app_acct.fdx extension.
- database.sql :
An example database schema for use with the scripts in this folder (a sketch of a possible
"incoming" table is given after this list).
- app_acct.conf :
The part of app_acct.conf that is relevant to this database schema.
- purge_to_file.php :
This PHP script takes the records from the incoming table (populated by the app_acct.fdx
extension) and saves them to a file in SQL format. This is similar to the pg_dump
command, except that all the records saved to the file are removed from
the table. It can be run from a cron job, for example, to keep the incoming table at a
reasonable size and to move the data to another host for off-line processing. It can
also be used to aggregate the data from several hosts, for example if you are load-balancing
your accounting servers (provided that app_acct.fdx uses an identical table format
on all the servers). See the top of the file for configuration parameters, and the end of
this document for a sketch of the generated file's content.
- process_records.php :
This PHP script processes the records pertaining to user sessions, as follows:
* when a session is complete (STOP record received), it stores a session summary
into the processed records table (see the process_database.sql file for the format).
* It optionally archives the processed records into a different table before deleting them.
* It can also move records of unterminated sessions that are older than a configurable age
to an orphan_records table, so that they are not re-processed every time (see the sketch
after this list). This orphan_records table must have the same structure as the "incoming" table.
- display_results.php, display_self.php, display_stats.php :
These scripts give a few examples of how to display the processed data.
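For reference, here is a minimal sketch of what the "incoming" table described by database.sql
might look like, assuming PostgreSQL. The column names and types below are assumptions made
for illustration only; the authoritative definition is in database.sql.

    -- Hypothetical sketch only; see database.sql for the real schema.
    CREATE TABLE incoming (
      Session_Id               varchar,   -- Session-Id AVP of the record
      Accounting_Record_Type   integer,   -- 1:EVENT, 2:START, 3:INTERIM, 4:STOP
      Accounting_Record_Number integer,
      User_Name                varchar,
      recorded_on              timestamp with time zone DEFAULT now()
    );

    -- The orphan_records table used by process_records.php must have exactly
    -- the same structure as the incoming table:
    CREATE TABLE orphan_records ( LIKE incoming );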
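Similarly, the orphan handling performed by process_records.php amounts to statements of the
following kind (a sketch only, reusing the hypothetical columns above; the actual column names
and the age threshold are configured in the script):

    -- Move records of sessions that have no STOP record and are older than the
    -- threshold (one week here) to orphan_records, then remove them from the
    -- incoming table so that they are not examined again on the next run.
    INSERT INTO orphan_records
      SELECT i.* FROM incoming i
       WHERE i.recorded_on < now() - interval '1 week'
         AND NOT EXISTS (SELECT 1 FROM incoming s
                          WHERE s.Session_Id = i.Session_Id
                            AND s.Accounting_Record_Type = 4);  -- 4 = STOP_RECORD
    DELETE FROM incoming i
     WHERE i.recorded_on < now() - interval '1 week'
       AND NOT EXISTS (SELECT 1 FROM incoming s
                        WHERE s.Session_Id = i.Session_Id
                          AND s.Accounting_Record_Type = 4);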
USAGE:
*) Initial setup: create your database using the database.sql file.
*) Configure the app_acct.fdx extension using the tips from app_acct.conf.
The following processing steps can be run as cron jobs, for example.
1) On each accounting server for the realm, configure the app_acct.fdx extension to
dump the records into a local database (all servers must use the same database format).
The table would typically be "incoming".
2) Run the purge_to_file.php script on each server regularly, then move the generated
files onto a single server for processing. This server only needs the other tables.
3) Load the data from the files into the database on this server by running:
    psql < file.sql
Each file that has been loaded should then be archived and removed so that it is not
loaded again later.
4) Run the process_records.php script on this processing server. Now the database
contains the aggregated data, which can be visualized with the display_*.php scripts.
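As an illustration of step 3, the file produced by purge_to_file.php contains plain SQL. Its
content might look roughly like the sketch below; the target table, column names and values are
assumptions for illustration, and the real format depends on your schema and on the script's
configuration parameters.

    -- Hypothetical content of a file generated by purge_to_file.php:
    INSERT INTO incoming (Session_Id, Accounting_Record_Type, Accounting_Record_Number,
                          User_Name, recorded_on)
      VALUES ('acct1.example.net;1428474444;1', 2, 0, 'alice@example.net',
              '2024-01-01 10:00:00+00');
    INSERT INTO incoming (Session_Id, Accounting_Record_Type, Accounting_Record_Number,
                          User_Name, recorded_on)
      VALUES ('acct1.example.net;1428474444;1', 4, 1, 'alice@example.net',
              '2024-01-01 10:30:00+00');

Once such a file has been loaded with psql and archived, process_records.php can aggregate the
new records as described in step 4.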