Tuesday, June 18, 2013

Audit trails for financial systems - supporting the business

One of the most costly aspects of running a financial system is providing an accurate audit capability, so that, when needed, legal discovery can be performed in a way that satisfies both the opposition's and the court's concerns about the accuracy and reliability of the information.  This is especially important when the possibility of malfeasance exists.

The major cost factor in legal discovery is the lawyer's time - for which the losing side will eventually end up paying.  While some commercial tools now exist to improve the efficiency of e-mail and document searches (e.g. Proofpoint), performing efficient searches at a system's transaction level (to determine exactly what was done, and in what order) remains time-consuming and labor-intensive.  This is particularly true when the incident involves data that was altered outside a standard business transaction (e.g. a sale), and instead came from a core database table (e.g. the customer table or inventory table).

The problem can best be summed up by the fact that most logging systems capture only the user id and a timestamp of when a change occurred.  Most deletion logs show only the record id along with that timestamp.  While this can point to a possible incident, determining exactly what was done requires replaying the database transaction logs from backups (possibly several years old) - which takes time and resources that most businesses could better spend on other things.

One can keep a transaction log that mirrors the database - essentially an identical copy of the data content of the table in question, with a primary key containing a timestamp and the user id of the person making the change.  However, that still leaves the problem of searching the log of every table within the timeframe in question - a laborious task.

It would be better to have a single journal that encompasses the entire database and can be readily searched for all changes.  Unfortunately, relational databases don't handle polymorphic data very well.  XML, however, does.  A solution may be found by using the two in concert.

One of the capabilities of a number of modern programming languages (e.g. C#, Java) is introspection - the ability to examine a generic structure at run time to determine its characteristics.  This permits the programmer to write a single generic routine that can process data of all types.
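As a minimal sketch of what such a generic routine looks like in Java: the `Customer` class below is a hypothetical record type invented for illustration, but `describe` itself works on any object handed to it, discovering field names and values through reflection rather than compile-time knowledge.

```java
import java.lang.reflect.Field;

// Hypothetical record class for illustration - any plain object would do.
class Customer {
    String name = "Acme Ltd.";
    String creditLimit = "50000";
}

class Introspect {
    // Generic routine: list every declared field and its current value,
    // without knowing the concrete type at compile time.
    static String describe(Object rec) throws IllegalAccessException {
        StringBuilder sb = new StringBuilder();
        for (Field f : rec.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            sb.append(f.getName()).append(" = ").append(f.get(rec)).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.print(describe(new Customer()));
    }
}
```

The same routine would report the fields of an inventory row, a sales order, or any other table's record class without modification - which is what makes a single database-wide journal practical.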

This routine could, for example, parse a record about to be updated (or inserted or deleted) and build an XML construct containing the relevant information: the primary key field names and values, the changed field names, and their old and new values.  This XML construct could be saved as a binary field in a standard RDS journal record, whose other fields would include a timestamp, user id, table name, and a primary key.  This permits a single search to focus in on users, tables, and time periods.  Also, as we capture the before and after values and use an identity-seeded primary key for the journal table, the effort required to successfully illicitly alter the data increases significantly.  Additionally, one could move the changed record's primary key value itself into the RDS structure, where it might be helpful for more focused searches.
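Combining introspection with the journal idea, a sketch of the diff-to-XML step might look like the following.  The `CustomerRow` class and its `custId` key are illustrative assumptions, not a real schema, and a production version would need XML escaping and null handling beyond what is shown here.

```java
import java.lang.reflect.Field;

// Hypothetical table row - stands in for any record class.
class CustomerRow {
    int custId;          // primary key
    String name;
    String creditLimit;
    CustomerRow(int id, String n, String c) { custId = id; name = n; creditLimit = c; }
}

class ChangeJournal {
    // Compare the old and new versions of a record and emit an XML fragment
    // holding the primary key plus each changed field's before/after values.
    static String diffToXml(Object oldRec, Object newRec, String keyField)
            throws IllegalAccessException {
        StringBuilder xml = new StringBuilder("<change>");
        for (Field f : oldRec.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            Object before = f.get(oldRec), after = f.get(newRec);
            if (f.getName().equals(keyField)) {
                xml.append("<key name=\"").append(f.getName())
                   .append("\">").append(before).append("</key>");
            } else if (before == null ? after != null : !before.equals(after)) {
                xml.append("<field name=\"").append(f.getName()).append("\">")
                   .append("<old>").append(before).append("</old>")
                   .append("<new>").append(after).append("</new></field>");
            }
        }
        return xml.append("</change>").toString();
    }
}
```

A credit-limit change, for instance, would yield a payload containing `<key name="custId">42</key>` and a single `<field>` element with the old and new limits - ready to be stored as the binary XML field of the journal record, alongside the timestamp, user id, and table name columns.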

Ideally, this would be implemented on the database server in the appropriate trigger stored procedures for Insert, Delete, and Update.  That would maximize the efficiency of the operation and greatly increase the difficulty for any player attempting to manipulate the data.  For those instances where one is addressing cross-database operations - usually occurring where custom applications have to integrate with canned applications (such as an in-house app to ERP interface) - it may make more sense to address it programmatically in the app hosted on the web server.

A custom reporting utility would be required to decode the XML and return it in an easy-to-read format.  A generic report could separate the changed field data into columns for field name, old value, and new value.
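The decoding side of that utility is straightforward with a standard XML parser.  A minimal sketch, assuming the `<change>`/`<field>`/`<old>`/`<new>` payload layout described above:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

class ChangeReport {
    // Decode one journal XML payload into "field | old | new" report rows.
    static String toRows(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        StringBuilder out = new StringBuilder();
        NodeList fields = doc.getElementsByTagName("field");
        for (int i = 0; i < fields.getLength(); i++) {
            Element f = (Element) fields.item(i);
            out.append(f.getAttribute("name")).append(" | ")
               .append(f.getElementsByTagName("old").item(0).getTextContent()).append(" | ")
               .append(f.getElementsByTagName("new").item(0).getTextContent()).append('\n');
        }
        return out.toString();
    }
}
```

Joined against the journal record's own columns (timestamp, user id, table name), this yields the kind of per-change report a discovery team can scan directly.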


Many Rapid Deployment Tools, such as IronSpeed Designer, lend themselves to the programmatic approach because they employ code templates.  By altering the templates used for code generation, along with a support library to handle the XML formatting, one could easily make this a default operation of the code generator, eliminating the need for unit testing on up to 90% of the code and functionality.

One recommendation regarding the journal table itself: remove its update stored procedure, or alter it to only send a warning to the sysops of an illicit data change, capturing the relevant information (user, timestamp, table, fields altered, and the relevant old and new data) in a security breach report.
