QA concepts are already familiar to me from my time as a microbiologist/immunologist with the FDA. We had a QA division that evaluated us quarterly in the following areas*:
– appropriate gear (lab coats, goggles, gloves appropriate for what you’re handling)
– OSHA regs (aisle spacing, fire extinguishers, etc)
– drills (fire, acid spill, etc)
Keeping things neat:
– do we have expired chemicals hanging around?
– are we keeping our documentation up-to-date and readable?
– record-keeping (temperature records for fridges & incubators)
– solution & culture standardization
– instrument calibration (making sure all the lasers point the right way!)
Here’s how I relate this to software testing:
Backups & version control. If you have these, you can get yourself out of anything. Remember to practice restoring your backups.
Keeping things neat:
perltidy & perlcritic are your friends. (I still say perlcritic needs to have a drinking game that goes along with it.) Keep your code & documentation fresh.
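perlcritic’s strictness can be tuned per project with a .perlcriticrc in the project root, which saves a lot of arguing with it. The settings below are illustrative, not my actual config:

```ini
# .perlcriticrc -- illustrative settings, not a recommendation
severity = 3        # 5 is gentlest, 1 is "brutal"
verbose  = 8        # include the policy name with each complaint

# silence a policy you disagree with by prefixing it with "-"
[-Subroutines::ProhibitExcessComplexity]
```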
Testing. Making sure that, given a certain input and environment, your code will produce a certain output. For a while I confused testing with error handling – but error handling only deals with a certain subset of inputs and %ENV. You want to include error handling in your testing – make sure that something that should throw an error actually does.
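That last point is easy to check with Test::More from the core distribution. The divide() sub below is a made-up example (not code from NMIS) – the point is the second assertion, which proves the error actually gets thrown:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# a made-up sub that should die on bad input
sub divide {
    my ( $numerator, $denominator ) = @_;
    die "division by zero\n" if $denominator == 0;
    return $numerator / $denominator;
}

# the happy path: given a certain input, we get a certain output
is( divide( 10, 4 ), 2.5, 'divide returns the quotient' );

# the error path: something that should throw an error actually does
eval { divide( 1, 0 ) };
like( $@, qr/division by zero/, 'divide dies on a zero denominator' );
```

Wrapping the call in eval and then inspecting $@ is the plain-Perl way; Test::Exception's dies_ok makes it prettier if you have it installed.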
Once I got a grip on what I wanted to do, I had to figure out how to accomplish it. Learning how to use the tools was the hard part. Hard enough, in fact, that it took me a year of sporadic false starts before I actually did anything productive. I’m not blessed with a separate QA team for my programming tasks; I have to do it myself, but that is no excuse for having crappy code.
My largest project is my own fork of NMIS, which has no existing tests. (It may have some now – I forked it a while ago.) I went for the low-hanging fruit & started by testing a simple subroutine that altered text input:
my $ifName = "Serial1/0/0.0";
is( convertIfName($ifName), 'serial1-0-0-0',
    'convertIfName should replace non-alphanumeric chars with hyphens and lowercase the whole schmear' );
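(The real NMIS implementation isn’t shown here; as a rough sketch, assuming a plain regex substitution, convertIfName could look something like this:)

```perl
use strict;
use warnings;

# hypothetical sketch -- not the actual NMIS code
sub convertIfName {
    my ($ifName) = @_;
    $ifName =~ s/[^a-zA-Z0-9]/-/g;    # non-alphanumerics become hyphens
    return lc $ifName;                # lowercase the whole schmear
}

print convertIfName("Serial1/0/0.0"), "\n";    # serial1-0-0-0
```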
Over the course of 3 days, I wrote something like 200 tests. (ETA: actually I think I mean assertions. I’m still learning the lingo.) These were all simple unit tests (basically, does this one little block of code do what it’s supposed to do). I haven’t started yet with integration testing (does it play well with others).
What I got out of it:
– I gained a much better understanding of how my code works. And found some interesting glitches – edge cases that (in theory, anyway) would never be executed in the existing production environment, but that should probably be tested for in case I decide I want to use them somewhere else.
– I found a lot of unused code & duplicated code, and places I could use now-standard Perl modules (like I said, my fork is old).
– Best of all, I can change my code (at least the parts I have tests for) at will and not worry that I’m going to screw something else up.
Glitches I hit:
– I have already been bitten in the ass by an edge case.
– Haven’t experienced any time savings yet, due to the learning curve.
– I’m about even with aggravation savings at this point – I am taking the next steps (mocking objects, getting ready to test an actual script^Wprogram instead of a module) and it’s like starting all over.
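For what it’s worth, CPAN has Test::MockObject for the mocking part, but plain Perl can fake a collaborator by temporarily replacing a sub through its glob. The Device package below is invented for illustration – the trick is the local glob assignment, which puts the real sub back when the block ends:

```perl
use strict;
use warnings;
use Test::More tests => 1;

package Device;    # invented example, not NMIS code

sub fetch_status { die "would really poll the network\n" }

sub report {
    my $status = fetch_status();
    return "status: $status";
}

package main;

# mock fetch_status so report() is testable offline
{
    no warnings 'redefine';
    local *Device::fetch_status = sub { return 'up' };
    is( Device::report(), 'status: up', 'report formats the mocked status' );
}
```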
* artificial categories which made it easier to draw parallels with software testing; thanks to Peter Eschright for the great idea from his talk at the recent BarCamp Portland.