Wednesday, June 25, 2014

Alternative Transaction Logging Technique for Faster Legal/Exacting Searches of Custom Application Databases

Due to developments over the last several years in compliance (driven by the 2007 financial fallout) and internal auditing (driven by the increase in data breaches), it is becoming much more challenging for organizations to address the time and cost of discovery and documentation.  These searches may require exact information about what was done, by whom, when, on which server, from what station and in what sequence.  Given that searches may require historical queries going back 12 or more years (due to compliance requirements), and the tendency for enterprise systems to evolve, this may require thousands of man-hours to reload the appropriate system backups (if they are available) and roll through transaction logs, all of which must be done by certifiably competent specialists, with lawyers and auditors looking over their shoulders to ensure that all requirements have been met.


As far back as 2001 I had been bothered by this issue and had been mulling over practical solutions, but kept putting the problem back on the shelf as other projects took priority.  Finally, a year or so ago I had to address this specific issue again while consulting for a startup here in Chicago.  Due to the nature of the business, it was imperative that they be able to meet every possible auditing and search need at minimum cost.  The solution had to be universal to every line-of-business application we employed, and work across database and server platforms.  It also had to have a structure flexible enough to survive unaltered into the foreseeable future.


While the startup never launched, and we never got any further than basic workflows for the applications, I finally found a solution to this conundrum when I started writing an ITILv3 support package last year.


To make it work required:
  • Implementation in a relational database environment.
  • No routine customization (common code for every use across the entire implementation).
  • Searchable variable metadata content.
  • Seamless operation for complex transactions.
  • Integration with the RAD tools I was using.
I began by selecting a suitable RAD tool which employed a customizable coding template in a language with which I was comfortable.  For me this was IronSpeed Designer.  This RAD tool permits the user to build fairly complex Web applications from MS SQL, Oracle and other databases in either VB or C#.  As it is template driven, using boilerplate code for all components (presentation, data view, business rules, etc.), and can address complex views, it was the perfect tool for me.


I started by determining the audit log structure.  This is a short synopsis of what I came up with as the base requirements:


  • User Identification
    • User ID
      • User SID
      • User Name
    • Workstation ID
      • HostName
      • IP
  • Server Information
    • IP
    • HostName
  • Time/Date of Transaction
  • Package Name
  • Originating Program
    • Name
    • Version
    • Last Update Date
  • Database Platform
    • Database Name
  • Transaction Information
    • Table Name
    • Transaction Type
    • Key Field Names
    • Key Field Data
    • Changed Field Names
    • Changed Field Data
In order to address the variable number of elements and their contents in the structure, and the fact that I might want to add to it in the future, I decided to fold the polymorphic data into an XML datagram.  This datagram was then treated as a single large binary field within the relational record.
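As a sketch of this idea (in Python rather than the VB of the actual implementation; the element names, attributes and sample fields here are illustrative, not the real schema), the variable transaction metadata can be folded into an XML datagram and serialized to a single binary value suitable for a blob column:

```python
import xml.etree.ElementTree as ET

def build_audit_datagram(table, tx_type, keys, changes):
    """Fold variable-length transaction metadata into one XML blob."""
    root = ET.Element("audit")
    tx = ET.SubElement(root, "transaction", table=table, type=tx_type)
    # Key fields: names and values that identify the affected row.
    for name, value in keys.items():
        ET.SubElement(tx, "key", name=name).text = str(value)
    # Changed fields: names and new values written by the transaction.
    for name, value in changes.items():
        ET.SubElement(tx, "field", name=name).text = str(value)
    # Serialize to bytes so it can be stored as a single binary column.
    return ET.tostring(root, encoding="utf-8")

blob = build_audit_datagram(
    "Customers", "UPDATE",
    keys={"CustomerID": 42},
    changes={"Email": "new@example.com"},
)
```

Because the schema of the datagram is open-ended, new elements can be added later without altering the relational table that stores it.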


On the programming side there were several challenges:
  • Writing the necessary code only once (all transactions employ the same code).
  • Integrating it into the support libraries for IronSpeed.
  • Ensuring that proper error processing was not compromised (critical!).
  • Writing the code in VB (IronSpeed source code libraries were provided in VB only).
I was able to solve the first challenge rather elegantly, thanks to the excellent engineering of the IronSpeed team, by employing reflection at the core data access library routine level.  This permitted a data-structure invariant solution that worked for both simple and complex (multi-table) transactions.
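The reflection trick can be sketched as follows (a Python stand-in for the VB/IronSpeed code; `CustomerRecord` and its field names are hypothetical). One routine inspects any record object at runtime and captures its fields, so the same audit code serves every table without per-table customization:

```python
def audit_fields(record):
    """Use reflection to capture every public field of any record object,
    so one common routine serves every table in the schema."""
    return {name: value
            for name, value in vars(record).items()
            if not name.startswith("_")}

# A hypothetical record type; in practice the records come from the
# data access layer and the audit code never knows their shape ahead of time.
class CustomerRecord:
    def __init__(self):
        self.CustomerID = 42
        self.Email = "new@example.com"

fields = audit_fields(CustomerRecord())
```

Because the routine never names a specific table or column, it is data-structure invariant: complex multi-table transactions just produce several such field maps, one per record touched.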


Integrating this custom module was very straightforward once I waded through the reference material on how to do this with IronSpeed.


Ensuring that the proper error processing was not compromised was simply a matter of ensuring that all possible errors were addressed, even if it was with an unknown error code.  No direct exits to the debugger permitted.
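A minimal sketch of that error-handling discipline (Python; the error codes and the `write_audit_row` callback are invented for illustration): every exception is mapped to a code, with a catch-all for anything unknown, so the audit layer never drops into the debugger or breaks the transaction it observes:

```python
# Hypothetical mapping from exception type to an application error code.
KNOWN_ERRORS = {ValueError: "E100", KeyError: "E200"}

def log_transaction(write_audit_row, record):
    """Attempt to write an audit row; every possible error is addressed,
    even if only with an unknown-error code. No direct exits permitted."""
    try:
        write_audit_row(record)
        return "OK"
    except Exception as exc:  # deliberate catch-all: nothing escapes
        return KNOWN_ERRORS.get(type(exc), "E999")  # E999 = unknown error
```

The caller then decides what to do with the returned code; the point is that the audit path itself can never raise.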


The most challenging aspect turned out to be writing the new support library in VB, which is a language that I despise.




By assigning relational table fields to the relevant search fields (database, table name, user, date/time, server, package, IP, etc.), one can now do a fast search across entire platforms.  This search capability could even be extended to the XML datagram for an appropriate string search (say, a person's name).  Once the desired dataset has been selected, a reporting program can decode the XML to display the relevant fields in a single comprehensive multi-tiered report/spreadsheet or dataset as appropriate.
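The two-stage search can be sketched like this (Python with an in-memory SQLite table standing in for the real audit log; the column names are illustrative): first a fast relational query on the indexed columns, then a string/XML search inside the datagrams of the selected subset:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical audit table: fast relational search columns plus the XML blob.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE audit_log (
    user_name TEXT, table_name TEXT, tx_time TEXT, datagram BLOB)""")
con.execute(
    "INSERT INTO audit_log VALUES (?, ?, ?, ?)",
    ("jdoe", "Customers", "2014-06-25T10:00:00",
     b"<audit><field name='Email'>new@example.com</field></audit>"))

# Stage 1: fast search on the relational columns (user, table, date, etc.).
rows = con.execute(
    "SELECT datagram FROM audit_log WHERE user_name = ? AND table_name = ?",
    ("jdoe", "Customers")).fetchall()

# Stage 2: string search inside the datagram, decoding the XML for reporting.
hits = [ET.fromstring(blob) for (blob,) in rows
        if b"new@example.com" in blob]
```

Only the small subset that survives stage 1 ever has its XML decoded, which is what keeps the historical searches fast.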




One element my specific implementation lacked was recording the key element data for new records.  As I was doing this in VB as opposed to TSQL, there would have been significant overhead in moving the data between the database server, webserver and back again to accomplish this.  If it were done in the core stored procedures for the tables, then it would be practical to add those fields to the audit log.




If you are interested in researching this further, you can find the relevant libraries and support documentation on the IronSpeed site at:




http://sjc.ironspeed.com/search?searchid=33017161&showas=post

Friday, June 13, 2014

Compliance and IT



I was only going to write one blog post today, but decided that this was a bit too relevant to wait.





I recently met with a senior manager of a local service provider. She was quite concerned about two things their previous IT Manager had not addressed:
 




  • Compliance (HIPAA/HITECH)
  • Business Continuity

As a result of that conversation I put together the following (modified for this audience) to help her address her concerns about her needs and her future IT Manager.
 










BC Plan Requirements
  • Must be compliant (HITECH, HIPAA, et al):
    • The organization needs to be compliant with, at a minimum, HITECH and HIPAA. There are probably others that have not surfaced as yet.
  • Must be maintainable and maintained:
    • As requirements, processes and personnel change, the plan must be maintained.
  • It must work:
    • You need to be able to, at any time, demonstrate the efficacy of the plan.


Compliance
Almost every IT compliance regime addresses the following issues:


  • Availability
  • Applications
  • Reporting
  • Disaster Recovery
  • Business Continuity
  • Access - as needed basis only - should require signoff by appropriate department management
  • Security:
    • User:
      • Internal
      • External
      • Partners
      • Providers
      • Customers/Clients/Members
    • Network:
      • Between Applications and Resources
      • Between Applications and Users
      • Between Departments
      • Between internal users, external users and the public (Internet)
      • Between sites (WAN)
    • Application
    • Need to know
  • Auditing:
    • Internal
    • Periodic
    • Legal Search
    • Organizational, Professional, Business, Government Audits
  • Documentation:
    • Systems
    • Services
    • Procedures/Workflows
    • Remediation
    • Verification
    • Structure
    • Procedures
    • Functionality
  • Required as part of BC Implementation:
    • Hot Site - IT and Office


In order to implement the plan, it will be necessary to set up a hot site capability.
 
 


Electronic Document Library


In order to ensure that all requisite documentation is available, it will be necessary to establish an electronic document library with appropriate security and redundancy. A possible viable solution is already in place, but research needs to be done to ensure that appropriate access control and redundancy measures are active to address BC/DR issues.
 


As a byproduct of establishing the document library, it might be appropriate to create a position for a dedicated librarian to manage metadata maintenance and assist with search and audit requirements.
 


Recovery Time Limit (to be determined)


Recommendations to Address Business Continuity Requirements


  • Workflows
    • Workflows are already established for business processes.
    • A regular review of existing and new processes needs to be implemented.
  • Provider contact info
    • All service and material providers will need to be notified in the event the primary location is unavailable for any length of time.
  • Partner contact info
    • All partners will need to be notified in the event the primary location is unavailable for any length of time.
  • Staff Redundancy
    • All departments should be populated and have trained personnel on staff to address any short or long term loss of employees for all job functions of a critical or sensitive nature, or requiring specific skills, to include management.
  • Full systems and process documentation – maintainable
    • Complete, maintainable documentation of all systems, processes and procedures must be kept current, which will require a regular, periodic review.
  • For IT, it would be desirable if ITILv3 compliance were followed, as this would bring the organization into alignment with most major financial institutions and simplify any changeover in personnel.
Food for thought. Feel free to reply with comments, questions or any omissions/errors.





To all Organization Senior Management, repeat after me: You can outsource jobs but you cannot outsource responsibility (read that last as liability).

Apologies for being away so long, but I have seen so many incidents of terrible management lately that I had to address them and also hold my tongue until I could think objectively again.


As to those incidents:
  • Corporate security breaches too numerous to mention.
  • Overzealous spying by our government on everyone.
  • Three- and four-hour waits to discover that not only can you not pay your monthly bill on-line via credit card, you can't do it by phone either - however, you can give your bank routing info over the phone to someone you don't know and who already has your social security, address and phone info (here, take everything I own).
  • Government launch of a healthcare site that is incomplete and can't work for the majority of the people it was designed to help.
Corporate security breaches too numerous to mention
Given the publicity, I am not going to add anything else to the mix.  You all know (or should know) the details about these.


Overzealous spying by our government on everyone
My only observation here is why was anyone surprised?  Behave and act accordingly.


Inability to make credit card payments
A very large healthcare provider, for over a year (since at least version 10, if not before), had not been allowing users who employed Internet Explorer to make payments via credit card.  The issue was a website design problem: when you tried to make a payment, you received a GoogleApps error.  Given the number of people who use this browser (600 million or so, if I am recalling correctly), one can expect that to be a problem.  I made them aware of this (via the usual tech support conduit) over a year ago.


Recently I got fed up enough to ask for a call with their CEO.  The conversation, while pointed, was as pleasant as these things go, and I think I got the point across.  It went something like this:


  • You have a problem.
  • It is a bad problem.
  • People can't pay their bills using their credit cards on your website.
  • It has been going on for a long time.
  • Your tech support people are well aware of it.
  • Trying to make a payment via phone takes several hours - and - involves giving your bank routing info to someone who already has your social security #, address and phone contact info whom you don't know.  Not a good thing.
  • So you were not aware of this.  Not good.
  • Oh, you have outsourced development of the website?  You have no control?
  • Well someone in the organization must be managing them.  I think you have a problem there that you need to address. 
Long story short: about a month later they had a fix for the problem.  They used an outsourced company to handle the web-based credit card payments - for an additional charge.  Questionable, given the recent data breach scandals at several of the credit transaction clearing houses, but a step in the right direction.


Federal healthcare website
Everyone knows about the problems with this site: crashes, links to nowhere, etc.  These were all results of bad management, both by the project managers and by the client (read: federal government agency bureaucracies).  One I found especially glaring.


Sometime late in the development of the website, it was decided by the powers that be that they wanted specific personal information about applicants before they could begin shopping.  I don't know the exact reasons for this, but suspect that it involved the IRS and the various companies providing the policies, so that they could provide the options they thought would serve them (meaning the provider) best (as opposed to showing everything that was available in the applicant's locale and letting them decide).  One of the key pieces of information required was year-to-date income for 2013.  Which was a big problem...  Why?


A large number of people who would be using the site don't work for medium or large businesses, which tend to provide corporate healthcare plans.  These are people who either had no full-time or reliable employment or are self-employed, which means that, in general, they wouldn't know their 2013 income until their taxes were done (if they did their taxes).  There was no way to say "I don't know my income."  You simply got a failure-to-process message and could not continue.  The same thing happened if you entered a zero income (which a lot of people, particularly in Chicago and Detroit, have these days).  So the site failed at its primary goal - to provide a way for everyone to get health insurance.


I was dismayed by the fact that the site's tech support was aware of this oversight and unable to address it up the ladder.  So earlier this year I injected myself into the process to educate the project's top manager about the process (he lives at 1600 Penn Ave. in DC).


I sent him a rather pointed e-mail explaining the relevant points detailed above:
  • Failures in managing the project - in particular: 
    • Design changes late in the game
      • Why were these not addressed before the design signoff was given?
      • Did anyone in authority actually read the design doc?
    • Launching the site without thorough testing.
  • The primary users of the website probably won't know their correct income, if they had any, until after the end of the year and possibly not until April 15th.
  • If you truly didn't know about the issues, then you need to find out who failed to pass them on and address that failure to communicate critical information.
There was a lot more to it, but the end results were:
  • Calls from the Dept. of Health and Human Services regarding the details of the specific issues mentioned above and how they might be addressed.
  • Removal of individuals from the project management.


So what is all of this caterwauling about?


It has come to the point where Information Technology operations can no longer be an area of ignorance or neglect, or treated by the CEO and CFO simply as a cost center that drags down profits.
  • It has to be done safely and securely.
  • It has to be managed by people who can not only manage projects, but have a thorough understanding of the platforms, concepts and tools with which their teams are working.
  • Contrarian views, and issues raised with the architecture and implementation, particularly concerning operations and security must be addressed and not suppressed.
  • There must be sufficient staffing of the operations department so that operations people can be thorough in their tasking, and alert in their vigilance as opposed to sleepwalking through their day.
  • Developers have to have sufficient time to analyze, review and test code so problems can be addressed before the code becomes operational.
  • Final testing needs to include expert users, so that important details don't get missed.
  • Everyone involved needs to be able to communicate coherently.  This can impede the use of offshore outsourcing, but can greatly reduce the potential for major delays and operational snafus.
It's very simple.  Senior Management has to learn that you cannot outsource responsibility.  Senior execs at retail chains and those in banking and finance are discovering this now.  Others will follow.
It's hard to build an empire on a house of cards - the slightest gust of wind can blow them away.