Friday, May 27, 2011

More on the state of IT security

In my last post I touched on the various threat vectors and some ways that might prove effective in addressing them.  Late today (5/28/2011), I learned from the MSN site more details of the extent of the RSA break-in (http://www.msnbc.msn.com/id/43199200/ns/technology_and_science-security) and discovered, to my dismay, that what I had feared, but not voiced, had in fact proven true.  It appears that not only were cell phone tokens compromised, but also enough information to invalidate the two-factor security devices that are used by many large defense corporations and by some banks for wire transfers.

These devices provide a new password on demand that is typically valid for only 60 seconds, to minimize the chance that an intruder with a valid user ID can hack an account.  Unfortunately, we now have proof that this solution no longer provides a safe method of securing information.  We only know for sure that Lockheed Martin was hacked; other contractors and banks that use this technology aren't talking at this point.
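To make the mechanism concrete, here is a minimal sketch of how a time-based one-time password can be derived from a shared secret, in the style of RFC 6238.  RSA's SecurID uses its own proprietary algorithm, so this is illustrative only; the secret, interval, and digit count below are assumptions:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, interval: int = 60, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238 style)."""
    counter = int(time.time()) // interval        # which time window we are in
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The code changes every `interval` seconds; both sides derive it from the
# same shared seed -- which is exactly why the theft of seed records is so damaging.
print(totp(b"shared-secret-seed"))
```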

As I see it, we are left with only two high-security solutions, only one of which is valid for external access.

For internal access:
  • Require simultaneous login by two users with IDs and passwords at the same station.  One should be from the security department and the other the user.
  • Logs need to be kept and stations visually monitored.  All administrative logins should be restricted to these monitored stations.
  • Everyone on site should be required to carry a picture ID that can be checked on demand by security.
  • Validation against a protected, isolated security server should be required both for user IDs used at login and for the ID cards carried by all personnel.
  • All maintenance processes need to be carried out by a two-man team.
For external access:
  • Use a token device (like RSA provides) in addition to biometric checks, verified against a copy held on a protected, isolated server system.
  • Require a callback from security with a series of personal question-and-answer challenges.  In addition, a voice comparison and stress analysis could be done.
  • This would tighten things up a bit, but it is still not a guarantee: fingerprints can be lifted, voices recorded, and so on.

Rant time
In truth, there is no security other than eternal vigilance... and that takes money, intelligent and alert security staff and admins, and responsible management.  CEOs and CFOs need to hire the best (not cheapest) IT people, and enough of them that they aren't sleepwalking through their shifts.  It would also be a good idea to listen to them rather than to the salesman who is trying to sell the latest, greatest solution.  You can now see where that leads.

Monday, May 23, 2011

On the state of IT security - Inhouse and in the cloud

I am sure that most of you who read this somewhat irregular column are by now quite familiar with the recent rash of security breaches (Michaels, the PlayStation Network (Sony), and RSA, to name a few).

What this leaves us all with is the continuing quandary of how to keep these people out.  So I think today that I will address several of the most common vectors and some possible ways of dealing with them. 

  • Brute force attack
    • Technique - These are generally massive and rapid-fire attempts, often from multiple apparent sources, which pound a service with login attempts.  The goal is quite simply to try every possibility until working access is obtained.
    • Possible solution - Track all login attempts; after 3 retries, reroute attempts from that source IP to an alternate site which accepts after a random number of rejections and drops the hacker into a honeypot, while notifying security and doing a backtrace.  Also, increase the minimum amount of time between those first 3 attempts from an IP (say about 5-10 seconds); see the throttling sketch after this list.
  • Break the service
    • Technique - hacker tries to break the service to obtain administrative control.  Buffer overruns are the first choice here.
    • Possible solution - Only one here.  Spend the time and the money to do testing for this.  Make sure your programmers address any conceivable error condition (default handling is fine, as long as it is handled).  Better a user gets thrown out than the service be compromised.  Don't rush an installation and don't rush the coders.  Make the coders check their code and then have an independent team confirm it.
  • SQL Injection
    • Technique - hijack a SQL query to the database by terminating the intended statement in a web page/form field, commenting out the balance of the query, and inserting your own.
    • Solutions
      • Don't put direct queries in the web form; use parameterized statements instead (see the sketch after this list).
      • Validate all fields, discarding any inappropriate characters; then start tracking the session, drop the hacker into the honeypot, and begin tracing.
      • Again, testing is imperative.  Best practices (Model-View-ViewModel, etc.) can also reduce this.
      • Lastly, don't rush the coders.  And test for this vulnerability.
  • Bad Users
    • Techniques
      • Passwords or user IDs written down, or kept in docs on unsecured laptops, phones, or PDAs
      • Cruising bad websites
      • Clicking on e-mail links from people you don't know or that you were not expecting
      • Poor helpdesk or security desk training
      • Poor security implementation
      • Poor management
      • Use of USB flash drives
      • Permitting non-company computers on the corporate network
      • Of all of these, poor corporate management is the most flagrant and costly, and it underlies all the preceding points.  If upper management doesn't follow appropriate security practices, it's hard to enforce them on the worker bees.
    • Solutions
      • Management must get serious about security.  This means developing and enforcing a corporate policy.   To include:
        • Training for all new employees
        • Retraining once per year (minimum)
        • Requiring managers to be aware of what their employees are doing on their computers (which is why they are called managers)
        • Failure to comply needs to be a mandatory dismissal.
      • To avoid the bad websites, threats from personal e-mails, and non-company computers, allow people to bring their laptops from home and provide them with public (guest-network) access.  While this may cost the company a couple of hundred dollars per year, it is nothing compared to the cost of a single virus outbreak or hack.
      • While USB drives are incredibly convenient, they are also the biggest new vector for hackers, as they pretty much bypass all security external to the attached computer.  The easiest course is not to use them and to block users from using them (the easiest way is to disconnect the motherboard USB connectors and then lock the computer case).  However, if you do decide to permit them, they should be serialized, tracked, audited, and erased before being returned to the available bin.
  • Outsourcing, public cloud computing, and fired employees
    • Technique - Typically, a recently fired employee decides to get revenge by breaking into their old employer's systems - and you get caught in the crossfire (Zodiac Island - loss of an entire year's worth of shows).  And/or the hosting company is less than stellar in its retention and backup policies (same case).
    • Solutions
      • Use in-house or a private cloud solution where your own team can verify that things are running properly.
      • Verify that the hosting company is following through on backups and retention.
      • Test your backup schemes regularly (once a week at least, if practical).  You don't have to check everything every time, but hit everything at least a couple of times a year, with the frequency based on the data's importance and how often it is updated; see the restore-verification sketch after this list.
      • Use more than one hosting entity for backups (e.g. use IBM and Sungard for replication sites).
      • Research your hosting provider, get customer references, and don't use a lowest cost service.  You get what you pay for.
      • When you let someone go, remove and verify removal of their access before letting them leave the building (under escort).  Also make sure that you get back any company owned equipment.
  • Corporate/nation-state/terrorist hacking
    • Technique - All of the above.  These people have the best hackers that money can buy and lots of bodies to throw at cracking your systems.
    • Solutions
      • Apply all the solutions above.
      • Have your firewalls and services tested regularly (at least once per year, or after any update).
      • Don't take any vendor's word that their product will keep you safe.
      • Employ multiple security solutions that protect you in multiple ways.
      • Spend the money to acquire good IT professionals (as opposed to the cheapest), and enough of them that they aren't sleepwalking through their work day (no more than 50 hours per week, and an average of no more than 42 hours - tired people miss things, and if they do, it is your fault, not theirs).
      • Consider implementing two- or three-factor security for all in-house and agent logins.
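Below is a minimal sketch of the per-IP throttling and honeypot rerouting described under "Brute force attack" above.  The threshold, delay, and the notify_security hook are illustrative assumptions, not a production design:

```python
import time
from collections import defaultdict

MAX_FREE_TRIES = 3      # retries before rerouting to the honeypot
BASE_DELAY = 5          # minimum seconds enforced between early attempts

attempts = defaultdict(list)   # source IP -> timestamps of recent attempts

def notify_security(ip: str) -> None:
    print(f"ALERT: repeated login failures from {ip}")  # stand-in for a real alert hook

def handle_login(ip: str, authenticate) -> str:
    now = time.time()
    history = attempts[ip]
    if history and now - history[-1] < BASE_DELAY:
        return "REJECT"                    # too fast: enforce minimum spacing
    history.append(now)
    if len(history) > MAX_FREE_TRIES:
        notify_security(ip)                # alert security and start the backtrace
        return "REDIRECT-TO-HONEYPOT"      # honeypot "accepts" after a random number of rejections
    return "OK" if authenticate() else "REJECT"

print(handle_login("203.0.113.9", lambda: False))
```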
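For the SQL injection vector, here is a small self-contained demonstration (using SQLite for brevity) of why parameterized statements beat string-built queries; the table and input are of course made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Vulnerable: the input is spliced into the statement, so the attacker's
# quote characters rewrite the query itself and every row comes back.
bad = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(bad).fetchall())      # [('alice', 's3cret')]

# Safe: a parameterized query treats the input strictly as data.
good = "SELECT * FROM users WHERE name = ?"
print(conn.execute(good, (user_input,)).fetchall())   # []
```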
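And for backup testing, a hedged sketch of a restore-verification pass: hash every file in a test restore against the live tree and report mismatches.  The directory names are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # 1 MB at a time
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: str, restore_dir: str) -> list:
    """Compare every file in a restored tree against the source tree."""
    mismatches = []
    for src in Path(source_dir).rglob("*"):
        if src.is_file():
            dst = Path(restore_dir) / src.relative_to(source_dir)
            if not dst.exists() or sha256(src) != sha256(dst):
                mismatches.append(str(src))
    return mismatches

# e.g. verify_restore("/data/finance", "/mnt/test-restore/finance")
```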
This is just what I can think of off the top of my head.  I am sure that there are additional vectors that I have missed.  Before you dismiss this as being too costly and resource intensive, consider what the cost would be if you were hacked and either sued, or lost critical data (e.g. financials, bank accounts, A/R).  If someone in upper management complains, ask them if they would be willing to take financial and possibly criminal responsibility for any lapse.  Then get their response in writing for the inevitable event.

Monday, April 18, 2011

On ITIL

One of the things I do from time to time is interview with various enterprises, sometimes for jobs and sometimes just out of curiosity.  Recently, I went through a couple of such meetings with a local financial institution and, for the first time, saw firsthand some interest at the enterprise level in acquiring talent schooled in ITIL.

Now this was not really all that surprising, as the importance of IT in the enterprise has grown over the years, and the need to catalog best practices has also increased just to keep the quite chaotic and ever-evolving morass running.  Another unsurprising reason is that the parent company was based in a British Commonwealth country, which is where ITIL originated a while back.

While I had read about ITIL in the past, I really hadn't studied it, so when the interviewer inquired about my knowledge of ITIL, I told him I had none...  and proceeded to spend a good portion of the next hour receiving a lecture on the importance, role, and value of ITIL in the enterprise.  Quite educational.  He was, if not an eloquent speaker, at least quite passionate.  He was quite adamant about one thing in particular - that ITIL was all about Risk Management (and (IT) operations management in general isn't?).  Anyway, he piqued my interest enough that I decided to make this my next course of study, just out of curiosity.

According to Wikipedia.org: "The Information Technology Infrastructure Library (ITIL) is a set of concepts and practices for Information Technology Services Management (ITSM), Information Technology (IT) development and IT operations".  I prefer to think of it as a guidance for technology in the enterprise.

I am now about midway through the foundations sequence and I see a lot of good thinking - if your interest is the philosophy of service provisioning.  Not really of much use in terms of concrete operational doctrine though (no easy outs here) - hence my preference for calling it a guidance (at least at the foundations level), and which is why it probably hasn't yet caught on with many small to medium businesses.  In passing, I would note that most companies really can't afford to even think about it as an enterprise process until they hit the billion dollar revenue mark (defining and tabulating all those metrics and having all those planning/review meetings tends to be labor intensive, and storing all that accumulated data for regulatory compliance can be costly).

As to the content, nothing really new - to me at least.  Things I have spent the last 3 decades learning through the old apprenticeship process, they have spindled, collated, acronymed, and formalized into a nice, (barely) digestible tome of wisdom.  Perfect for the MBA set, who usually start out at the (project) manager/consultant/analyst level, so they rarely learn the ins and outs of the reality of IT (which means dealing with the details of day-to-day operations in the trenches).  I am not intending to demean the MBA, quite the contrary; however, I do view a new MBA the way the army views 2nd Lieutenants - stuffed with lots of info, but very short on practical experience.  (As I once heard stated, the job description of a 2LT was to relay instructions to the troops, observe, report, and listen attentively to his sergeant for every pearl of wisdom he deigned to drop.)  Though this is changing for the better at many institutions: I had the pleasure of working with some MBA students on a team project last year, and they did a bang-up job on an analysis of Business Continuity solutions for one of my clients (though I still wouldn't want them running a datacenter for me).

I do see ITIL (at least the foundation level) as becoming a requirement for the CIO/CTO role in the next decade.  I also believe that it should be required for any C-level executive, so that they have at least a basic concept of what IT is trying to do (I also note that there is a lot of very good general thinking here for all aspects of enterprise operation).  In particular the CFO, to whom IT reports in many organizations (to the detriment of those companies in many, but not all, cases).

Wednesday, April 13, 2011

The current state of Deduplication

I recently attended one of the many enterprise IT conferences that occur every spring in the city and had the good fortune to hit a special channel session where one of the speakers, a specialist in storage technology, gave us an interesting heads-up on the state of deduplication in the enterprise.

To make a long story short:
  • Situation: A conference call between a client and a vendor of storage equipment with dedupe built in
  • Topic:  The cost of 2 additional disk drives ($53,000; comparable drives available from Newegg for $500.00)
  • Reason for Discussion:  Explain the cost
  • Justification:  Licensing for support, warranty, and extended services (including dedupe) for the additional drives.
He went further with this topic to a more interesting point, regarding a judicial ruling on SOX compliance with dedupe.  To the point: you cannot dedupe exact copies of docs that are transmitted to recipients at different times/dates.  So if you forgot to include one person on a mailing list, or had to resend because the recipient's e-mail got accidentally deleted, etc., then that e-mail, if SOX-relevant (and lawyers can argue that almost everything related to money, taxes, or law is), must have each exact copy stored independently and completely (no dedupe).  If one were to continue this line of thinking, you might expect the same ruling to eventually apply to any compliance-related documentation trail (HIPAA, etc.).

He told us one additional anecdote:
A large communications provider (phone, etc.) decided several years ago to acquire a dedupe solution from one of the major storage providers (part of their suite of applications/devices) at a significant sum (pick a suitable integer and add at least 7 zeros before the decimal point).  Their goal was to reduce the growth rate of their total document storage via deduplication.  After several rounds with operations and legal, they discovered that, instead of being able to dedupe across their entire document base, they were limited to deduping only a single-digit percentage of their documents.  A video of a presentation at one of these enterprise events, in which one of their IT execs pointed this out to his audience, was published on one of the major video sites; because of contractual obligations between the customer and their vendor, it was quickly pulled, first from the show's video list and eventually from the video site itself.

His recommendation to us was not to sell/recommend storage-device-integrated deduplication, but to use an add-on device that you can configure as required for a lower TCO.

This is all secondhand information, so I can't attest to its veracity.  However, I do recommend that you do a thorough check with all departments (including legal and compliance) to verify how effectively you can make use of a dedupe solution before you spend a lot of money on something you may not be able to use efficiently.

Monday, April 4, 2011

On the state of IT security and data storage

There have recently been several notable events in the IT world that, if you aren't already, should cause you to question the claims of both security and managed-services providers.  To the point: several weeks ago I sent out a link to a story about a major IT security provider that got hacked (see: http://www.msnbc.msn.com/id/42152645/ns/technology_and_science-security), and last week there was another noteworthy event in which an irate ex-employee of a data storage service provider wiped out a year's production for a TV production company that employed its services (see: http://news.yahoo.com/s/nm/20110331/tv_nm/us_zodiac ).

There is some commonality between the events.  Both companies provide external services (although it's not clear which products were compromised in the first case).  Both were used to ensure data security and integrity.  Both were used so that companies would not have to bear the expense of managing their own secure services.  And because their marketing was taken at face value, their customers suffered.

There are a couple of key points:
  1. There is no such thing as perfect security.
  2. Your security is only as good as the people you hire (not the companies you employ).
  3. Hype sells, but doesn't pay the damages from law suits.
  4. Outsourcing is cheaper in the short term, but won't help your case when your customers and/or shareholders sue (notice a train of thought here).
  5. Whether you outsource or not, you are still responsible for the final results (just ask BP about that).
  6. If you have a government entity as a client, be aware that they have deeper pockets for law suits (think just about infinite) than you do.

So how do you avoid, or at least reduce, the risk of these kinds of events happening to you?  Several things come to mind:
  • Don't depend on a single data storage source...  Keep local backups... That are checked... By your people... on a regular basis.
  • Use multiple layers and types of security... From different providers... That are monitored... By your people... On a continuing basis.
Now your CFOs and CEOs may complain that this is an unnecessary expense.  I acknowledge that each tasking would require at least one full-time position (and possibly multiple shifts) to ensure proper coverage and due diligence...  and that it is not inexpensive.  An appropriate response is to ask them if they would be willing to guarantee (with their personal assets) that such a breach will not happen.  Because while the courts can be somewhat forgiving when due diligence is performed, they will be absolutely scathing when it is not.

That is not to say that there aren't services that are trustworthy and reliable - or even that these were not.  Just don't put all your eggs in one basket (and keep a few hidden in the back of the fridge), lest you find yourself with egg on your face and a large mess to clean up.

Monday, March 21, 2011

Thoughts on Hardware selection

Thought I'd digress a bit on the hows and whys of server hardware selection.  There are lots of ideas on this, but it should always come down to four requirements:

  1. The hardware must be capable of supporting the solution both now and for the next 3-5 years.
  2. Performance and cost (both capital and recurring/operational) must be balanced.
  3. Reliability and cost must be balanced.
  4. Risk and cost must be balanced.
Notice the theme here?  Note how cost figures into almost everything?  The one good thing is that cost is the easiest component to figure.  Performance is not too hard to divine these days either.  But reliability and risk are tougher.

Choosing only the best equipment can be an expensive undertaking, what with budget constraints.  So often, just good enough has to do.  And, surprisingly, there is a lot of good, less expensive equipment out there that will do the job just fine.

For SANs:
There is no doubt that an EMC, NetApp, or HP SAN will get the job done, but do you really need that level of performance?  Perhaps a LeftHand solution or even a PROMISE SAN can fill the bill.  Maybe you can acquire a backup unit with the initial purchase, which may make a support contract unnecessary.  Is Fibre Channel really the right answer, or will iSCSI provide sufficient throughput?

For Networking:
Do you really need smart switches everywhere, or just at critical junctures?  What about network wiring when VOIP/POE is involved?  Home runs, or put the POE equipment in the department closets?  Use one vendor, or the most cost-effective solution?

For Servers:
Do we virtualize/cluster/replicate?  Do I use a 3U 4-processor unit with 48 cores and a built-in RAID array, or a 2U 4-node 2-processor-per-node unit (96 cores total) with a single or mirrored array for each node and a SAN with MPIO?

How do you decide?

The next several postings will address these issues.  First up: Server selection.

Practical Computing in the Cloud (ported from old site)

Cloud computing is the current hot topic in IT.  Providers are pushing it, vendors are pushing it, consultants are pushing it.  About the only people who don't get it are at Corp HQ... and the users.

In a nutshell...

Pros:
  • 7/24 monitoring is available
  • Server management is available
  • Network management is available
  • Server redundancy is available
  • 7/24 management is available
  • Network redundancy is available
  • Business Continuity is less risky
  • VOIP is more pragmatic across multiple sites
  • Backups can be easier to accomplish and more secure
  • Security is centralized
  • Compliance is easier (HIPAA, SOX)
Cons:
  • Loss of Total Control of Administration
  • Security across the corporate WAN
  • Loss of Control of Cost
  • A Good Business Continuity Plan is Mandatory
The explanation (point by point) starting with the Pros...
  • Monitoring - As opposed to having to hire and schedule operators and admins to be available 7/24/365, the hosting provider will usually provide an option to monitor the network and servers.  All good.  However, this doesn't mean that you can get rid of your admins.  You still need people who know how everything is put together so that when that critical business function breaks (the one you spend $100,000+ each year on for development), there are people available who know how to troubleshoot the problem quickly using procedures that they defined for recovery, so your downtime is minimized.  Monitoring does not imply management or recovery.
  • Server Management - Good as far as it goes.  Need patches installed?  As long as they work perfectly, no problem.  But throw in application, device, and driver incompatibilities, along with the occasional bug, and you can quickly find yourself either running a two-stage process (patching test systems before production), or else figuring out how to roll back patches on a machine that can be anywhere in the US and is no longer reachable over the network.
  • Network Management - A very good idea, if done by your IP Provider or possibly one of their recommended 3rd party partners.  This will generally keep accidents from happening.
  • Server Redundancy is available - Several OSs now support remote clustering, where different cluster members are at different locations.  This is a step up from traditional clustering solutions in that each member server is in a different geographical area.  The issues are keeping members in sync despite latency, and updating the servers.
  • 7/24 Management is available - Provided you can set up good inter-corporate communications, get a workable schedule put together, and find competent remote support, this is a major benefit.  No longer do you have to employ staffers for 3 shifts to ensure up-time.  Lower payroll costs, but a higher MRC (monthly recurring charge).
  • Network Redundancy is available -  No longer do you have to provide multiple network connections to the datacenter as your (nationwide) provider can provide that as a matter of course (but you still have to put it in the contract).  However, you still have to address the issue of whether to set up redundant connections for each work site (decision for the COO and CFO).
  • Business Continuity is less risky - What this means is that you have more resources available in case of an emergency.  In fact, Sungard can even provide you with a temp office space with equipment per contract, which can be upgraded if necessary.  While there are others that can provide the office space option, most are limited geographically or by the number of users that they can support that way.  There are numerous providers that can provide simple server hosting space.  The problem is maintaining sufficient staff in your business so that they can keep everything in sync and up to date.
  • VOIP is more pragmatic across multiple sites - VOIP should always be part of any cloud solution for a multiple-site entity.  Employ an MPLS network with firewall and VPN in the cloud, and use the same network/security for VOIP.  This can extend VOIP to your entire sales force anywhere wireless access is available, via a softphone program on the user's computer or laptop.  It reduces cell phone minute charges for your traveling employees, particularly for international users.
  • Backups can be easier to accomplish and more secure - With an MPLS cloud-based network backbone, and using data compression/encryption, backups can be done to your hosted backup servers/SANs from all connected business sites (provided they have adequate bandwidth).  The issue here is error recovery and the need for redundant network connections to ensure that the backups get done on time; a sketch of the compress-then-encrypt step follows this list.
  • Security is centralized - You no longer need a firewall at every site, just a good router that supports MPLS.  This means having a single firewall guy on staff as opposed to one per site.  And now all he does is tell the provider how to set up the firewall, so he will be awake in the morning and into the office on time (usually).
  • Compliance is easier (HIPAA, SOX) - Recent interpretations of HIPAA and SOX state that, for compliance, a company must be able to provide document discovery for the last 3 years and verify that regular, complete backups of pertinent data are performed.  A cloud solution can simplify proper business continuity practices (backups, auditing, reporting) by simplifying the backup procedure, providing independent verification for auditing, and standardizing all aspects of reporting.  It does this by reducing the staff required at each site to manage these tasks; the tasking is now done by your provider's management group, using management tool suites.  Again, you have the expense, but it is centralized, contracted, and outsourced, giving you a legally defensible position if the need arises.
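Here is a hedged sketch of that compress-then-encrypt step (compress first, since encrypted data no longer compresses).  It assumes the third-party Python cryptography package, and the key handling is simplified for illustration:

```python
import gzip
from cryptography.fernet import Fernet   # third-party: pip install cryptography

def pack_backup(plaintext: bytes, key: bytes) -> bytes:
    """Compress, then encrypt, a backup payload before it crosses the WAN."""
    return Fernet(key).encrypt(gzip.compress(plaintext))

def unpack_backup(blob: bytes, key: bytes) -> bytes:
    return gzip.decompress(Fernet(key).decrypt(blob))

key = Fernet.generate_key()          # store the key separately from the data!
blob = pack_backup(b"ledger rows ...", key)
assert unpack_backup(blob, key) == b"ledger rows ..."
```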
And now the Cons...

  • Loss of Total Control of Administration - With all the outsourcing and geographical distribution in the cloud solution, good management is essential, especially with the contractual agreements.  All involved parties (COO, CTO, CIO, CSO, CFO) need to fully comprehend what they are purchasing and be in agreement that it meets their corporate needs.
  • Security across the corporate WAN - Now you can support a mobile workforce... and that is the problem.  Whether it be bots, pornography, viruses, or an e-mail from cousin Sady with the latest baby pictures, the security issues you face now will require much more forethought and planning.  With cloud computing, Security must always be foremost in everyone's mind.  With everyone/thing connected, one compromised smartphone can lead to the pillaging of the entire environment if appropriate defenses are not in place.
  • Loss of Control of Cost - By outsourcing the cloud environment's management and monitoring tasking, you are now at the mercy of your provider's limitations.  While you may be able to negotiate a good initial contract, expect the costs to go up once your provider has you locked in.  Changing providers will be prohibitively expensive.  Be wary also of the financial stability of the selected providers and of the equipment and software vendors they employ in your solution.
  • A Good Business Continuity Plan is Mandatory - With the dispersal of the corporate datacenter into the cloud, you have greatly increased the complexity of your environment in exchange for better flexibility, reliability, and redundancy.  With the implementation of a cloud solution, you have to face the need for a verifiable Business Continuity plan.  A good plan is thorough, detailed, and exacting in its procedures.  It is also updated as often as any element of the environment changes, which can be as often as every week.  As you may gather, this is a costly undertaking.  Unfortunately, it is essential for corporate viability.  Many larger companies will not do business with companies that cannot address a systematic failure within 24 hours and prove that they can do so.
All in all, a cloud solution is in every business's future.  Just remember that with this solution you need expert guidance in every facet, and you will still have to maintain in-house expertise to manage that environment.

Downside of the Cloud and Hosted solutions

I had occasion to stop off at one of Chicago's premier theatres recently.  I didn't really want to go there, given the weather, but I did want to secure tickets for an upcoming show, and I hadn't been able to get to their ticketing web site for the last 3 days.  Thirty minutes later, I left with my order reservation and a promise that I could come back and pick up my tickets once they were able to charge my credit card.  The person behind the ticket counter informed me that their servers were inaccessible due to a problem with their Internet connection, which had been more down than up for most of the week.  On the upside, the theatre is just down the street, so that won't be very painful...  for me.

Like most theatres these days, they have either outsourced their ticketing system or hosted it offsite to simplify their cost structure and to make it more accessible to customers.

Like most businesses with the datacenter outside the building, they are dependent upon their Internet Provider, and, as it turns out, that is where the problem lies.

My guess is that they have redundant connections, but that doesn't help when the problem is related to issues at the datacenter.  The potential causes are many:
  • Indifferent or incompetent engineers/admins/management
  • Bad documentation
  • Growth (in traffic levels, number of sites or servers hosted)
  • Reliance on marginal or past end-of-life components in the network
  • Hardware failure
  • Insufficient or missed monitoring or audits
  • Accident or fire
  • Untested failover scenarios
So while the cloud, and outsourcing can reduce Asset valuation and payroll obligations on the balance sheet, it can also lead to increased downtime if not properly designed, implemented, documented and most importantly, tested.

A key facet of reducing this downtime on the client side is redundant IP connections.  But to make this work, you have to test it and verify that failover can occur smoothly, without loss of a transaction (short delays are usually acceptable); a trivial reachability probe like the sketch below is a starting point, though a real test means failing the primary circuit during live traffic.
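A minimal monitoring sketch, assuming hypothetical probe endpoints for the two circuits (this only checks reachability; it does not exercise a live failover):

```python
import socket
import time

# Hypothetical probe targets, one per circuit.
PATHS = [("primary-isp.example.com", 443), ("backup-isp.example.com", 443)]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    status = {f"{host}:{port}": reachable(host, port) for host, port in PATHS}
    print(time.strftime("%H:%M:%S"), status)   # alert if either path goes dark
    time.sleep(30)
```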

However, on the server side (hosted/cloud), there isn't much you can do.  You are at the mercy of the hosting/provider's ability to support their product.  Even if you provide the circuit(s), they still have to get it/them connected - safely, securely, and reliably - to your servers.  This is no mean trick.

So if you do decide on a cloud or hosted solution make sure you do the following:
  • Research your prospective provider thoroughly.
  • Talk with their other clients
  • Document every aspect and procedure
  • Test, test and test
  • And test some more
Lastly, don't forget to define and test a procedure that you will use when the solution eventually fails, which it will.  I leave you with a few mantras of IT Directors everywhere:
  • Murphy is the patron saint of computing.
  • He who has physical control of the assets, rules.
You need to allow for one and obtain the other.

Till next time...

Ironspeed - A way to build Web applications more efficiently

Like most developers, I am always looking for ways to do my job more efficiently.  This means:
  • Creating fewer bugs
  • Reducing the time and amount of code I have to test
  • Guaranteeing application security
  • Reducing development time.
I mostly focus my work in the Windows world, where Visual Studio is king.  Not perfect by any means, but certainly worth its cost.  Back in 2003, I also started using a new tool that I had read about in one of the trade journals, called IronSpeed Designer.

IronSpeed is a tool with which you can build a basic, fully functional web application in as little as 10 minutes (the time required relates directly to the number of screens and tables involved), once you have the database designed.  It does this by using boilerplate code to generate IIS-compatible ASP.NET applications based on a set of control XML files it generates during the build process.  You tell it the database to use, the screens you want, and the options you need, and it puts together a basic app for every screen.

Embedded security access code is an option.  That gives you the capability to use a unique user access setup in your database, integrate with Windows...   or you can use your own scheme and code it yourself.

Once you have the base application, you can use their designer to add, modify, or move fields as needed.  If you add fields, relationships, or tables to the database, just tell IronSpeed to update its database references and you are good to go (of course, you would still need to manually add the new fields to the screens where they are needed).

You use a drag-and-drop process to change screen and component layouts, usually setting up multiple levels of tables to subdivide the screen as necessary.  IronSpeed also supports tabbing now, so it is easier to implement screens with numerous fields.

To further reduce your workload, you can generate reports, and to define a report column you can use a fairly straightforward formula procedure (as in Excel).  As a bonus, it integrates with Microsoft SharePoint.

IronSpeed currently supports the following database environments:
  • Oracle
  • MySQL
  • Microsoft SQL
  • Microsoft Access
You also have several state management models from which to choose.

Code customization is also straightforward if you are familiar with the Visual Studio scheme.  You can also use IronSpeed in conjunction with Visual Studio to do line-by-line debugging with breakpoints.

While it does take time to learn, this is a great developer's tool.  They have an active forum and actually listen to their user base.  Additionally, they have good training videos and live web sessions too.  A great tool for the experienced coder.  You can download a free 14-day trial version from their site at:

http://ironspeed.com/