My First Agile Project Part 7: Adventures in Agile Testing

Originally published on AgileSoftwareDevelopment.com




Picture courtesy of Sebastian Bergmann@flickr

In Part 7 of My First Agile Project, I’ll be talking about how we went about testing on our project. This part should stand alone if you haven’t read the other parts but if you want to catch up, see the series Table of Contents at the end of this post.

Our project was to integrate and configure a new billing system to replace an old custom Oracle Forms app. An integration and configuration project presents a lot of challenges for testing, which I’ll talk about below. In addition, like all projects, ours had its own special set of challenges. At first we thought the developers would do most of the real testing on the project with unit tests, plus verification of configuration changes by our Subject Matter Expert. The original plan included 3 sprints of testing at the end of the project just to make sure we had gotten all the needed functionality in during development. Looking back at this plan now, we all wonder how we could have been so stupid. :)

Keep reading for more on how our plans fared during development, what happened once we got into testing, and why we’re still testing now - 6 months past our original go-live date. As I said, testing on a configuration / integration project presents special challenges, so I hope our adventures will help other teams avoid some of the pitfalls we encountered. If you’re doing a similar project or have had different experiences you’d be willing to share with me and others, please post in the comments.

Working On An Unfinished Product

As I mentioned, we started the project intending to do most of our testing as we developed, then finishing with 3 testing sprints (12 weeks) of regression / integration testing. Our thinking was that since we would be working on a pre-built base product, doing mostly configuration, we wouldn’t have to test functionality in the base product. There was a big problem with that assumption, though.

Our company had decided, along with the vendor, that we would start working on our integration of the product while it was still in development. This was meant to give us a jump on going live. In my opinion, the vendor was far too confident in the state of the product when it said this was okay. Working with an unfinished product gave us the opportunity to influence its design, but it also meant we were delayed many times waiting for fixes or changes we needed. Since the platform we were building on wasn’t finished, a lot of things we completed broke in later sprints and needed reworking. So testing as we developed didn’t work the way it should have, which led to a lot of retesting later.

In addition, we’ve ended up testing every part of the application, even the parts built by the vendor that we thought would work out of the box. Next time we do an integration project, we’ll be much more careful to factor in time for testing the base product instead of assuming it will be correct from the start. I’m also going to recommend against starting work on a product as far from complete as this one was; the trouble it caused just wasn’t justified by the benefits. Our Product Owner might think differently, but from a developer’s perspective, working on something that’s changing underneath you is really difficult.

Developer Testing Isn’t Enough

A billing system touches a lot of parts of the company, which means we had a lot of integration points to develop. We ended up integrating with about 10 other systems and had probably 30 points of integration with those systems. With the product we were integrating, that meant a lot of Java code, but also a lot of rules written in the product’s custom language and a ton of custom web services to pass data around, also written in their language. The Java code is easy to unit test, which gives you a certain level of comfort with the code, though certainly not 100%. Where it gets hard is testing the rules and web services. We still don’t have a good way of testing that part, so we have to rely on our unit tests and manual tests to make sure the data we want is coming back and the rules are behaving properly in the application.
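To make that concrete, here’s roughly the shape of the Java-side unit tests we leaned on. This is a minimal sketch, not our actual code; the names (LegacyAccount, BillingAccountMapper and so on) are hypothetical stand-ins for the kind of mapping code that sat at each integration point.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    // Hypothetical mapper that converts a record from the legacy Oracle Forms
    // app into the format the new billing system's import expects.
    class LegacyAccount {
        String accountNumber;
        int daysPastDue;
    }

    class BillingAccountMapper {
        // Pads legacy account numbers to 10 digits for the new system.
        public String mapAccountNumber(LegacyAccount legacy) {
            return String.format("%010d", Long.parseLong(legacy.accountNumber));
        }

        // Flags accounts that have hit the 15-day delinquency threshold.
        public boolean isDelinquent(LegacyAccount legacy) {
            return legacy.daysPastDue >= 15;
        }
    }

    public class BillingAccountMapperTest {

        @Test
        public void accountNumbersArePaddedToTenDigits() {
            LegacyAccount legacy = new LegacyAccount();
            legacy.accountNumber = "12345";

            BillingAccountMapper mapper = new BillingAccountMapper();
            assertEquals("0000012345", mapper.mapAccountNumber(legacy));
        }

        @Test
        public void fifteenDaysPastDueCountsAsDelinquent() {
            LegacyAccount legacy = new LegacyAccount();
            legacy.daysPastDue = 15;

            BillingAccountMapper mapper = new BillingAccountMapper();
            assertTrue(mapper.isDelinquent(legacy));
        }
    }

Tests like this run fast and catch regressions in our own code, but they say nothing about whether the rules engine does the right thing with the data once it gets there, which is exactly where we got burned.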

We’ve found it very difficult to test a product that runs on a rules engine. The various levels of customization and rules add complexity very quickly. Once you’ve customized enough of a complicated product, figuring out whether a bug is in your code or in the product gets harder and harder. Also, “bugs” come in multiple types: problems in the code that cause error messages, numbers not adding up or processes not being followed, and plain incorrect behavior. There were a lot of times when the designers of the base product had an idea of how to do something and our Product Owner thought it was just wrong. So we would have to either work with them on changing the behavior or try to change it ourselves with customizations. This kind of thing leads to a lot of work, since it’s not just a bug to fix.

Since we relied on unit testing of integrations for most of the project, we didn’t find a lot of these bugs until late, when we started doing manual verification of every process and function in the application. A unit test is never going to see that an account that should have gone into delinquency after 15 days actually waited 16 days in months with 30 days, or that all the charges on the invoice PDF were 2 days off from when they should have happened. What we should have done is work in bigger chunks of functionality and have somebody go through the whole process and all of its scenarios early on. If we’d spent our sprints doing both development and testing, it obviously would have slowed the pace of development, but that time would have been made up, and then some, by eliminating a ton of rework.
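For what it’s worth, here’s the kind of scenario-level check I wish we had been running from the start: take a due date, apply the delinquency rule, and assert it lands exactly 15 days later no matter how many days the month has. This is only a sketch; DelinquencyRule here is a hypothetical stand-in for logic that really lives inside the vendor’s rules engine, which is part of why the equivalent check ended up being manual for us.

    import static org.junit.Assert.assertEquals;
    import java.time.LocalDate;
    import org.junit.Test;

    // Hypothetical stand-in for the delinquency rule we configured in the
    // product. The real rule runs in the vendor's rules engine; this is just
    // the shape of the check we should have automated early.
    class DelinquencyRule {
        static final int GRACE_DAYS = 15;

        public LocalDate delinquencyDate(LocalDate dueDate) {
            return dueDate.plusDays(GRACE_DAYS);
        }
    }

    public class DelinquencyScenarioTest {

        private final DelinquencyRule rule = new DelinquencyRule();

        @Test
        public void exactlyFifteenDaysInAThirtyDayMonth() {
            // Due April 20th (30-day month): delinquent May 5th, not May 6th.
            assertEquals(LocalDate.of(2009, 5, 5),
                    rule.delinquencyDate(LocalDate.of(2009, 4, 20)));
        }

        @Test
        public void exactlyFifteenDaysAcrossFebruary() {
            // Due February 20th (28-day month): still exactly 15 days later.
            assertEquals(LocalDate.of(2009, 3, 7),
                    rule.delinquencyDate(LocalDate.of(2009, 2, 20)));
        }
    }

Running checks like this against a test environment with the clock rolled forward would have caught the off-by-a-day class of bugs months before our manual testers did.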

Manual Testing

Once we figured out that we’d be doing a lot more testing than we originally anticipated, our Product Owner (the manager of the billing department) enlisted her billing team as manual testers. During development we had written literally thousands of test cases to verify all parts of the system. The new testing team worked through these test case books and added new cases all the time. They manually compared a percentage of accounts in the new system with the old, checked invoices against the legacy invoices, and went through their daily procedures in the new system. This was hard work but absolutely invaluable. For untrained testers, they’ve done a great job. I should have done some training on how to file good bug reports earlier on, but we’ve worked well together.

I wish we had done some automated regression testing of the system, but keeping UI tests up to date is a job in and of itself, and it isn’t anywhere near as flexible as manual testing. The bugs we’ve discovered through this manual testing have made up almost all of the developers’ work for the past few months. Also, nothing is as valuable for finding out what the users really want from the system as sitting with them and watching what they use, curse at and smile at.

A Cross Country Journey

In trying to explain the difficulty of a big software project to a non-technical friend, I came up with the following: a normal big software project is like driving cross-country without a map. In a non-Agile project, you try to work out in detail where you’ll be going before you leave. With Agile, you start out headed in the right direction and adjust as you go. Sometimes you hit a highway and make excellent time. Sometimes you hit the Grand Canyon and have to drive days out of your way. Our project, I said, was like driving cross-country without a map in a car that was still being built.

At the user conference for the company we purchased our billing system from, our Product Owner was asked to be on a panel about testing their products. The other participants had big teams of professional testers, hugely expensive automated testing tools, and lots of other advantages it would have been very nice for us to have. But even without them, I think we’ve done a good job of testing. We’ve got a very tight product that will be as right as we can make it when we go live, which is very important for a billing system. Even if you’ve got a small team, there’s no reason to think you can’t do a good job of testing your application, whether it’s a custom project or a configuration project. Yes, it takes time. But it’s worth it.

Thanks for reading, and if you have testing experiences you can share, please do so in the comments.

My First Agile Project Series
Part 1: Doing 80%
Part 2: Inception & Planning
Part 3: Viral Videos and Bad Jokes in Scrum Demos
Part 4: How to lose credibility and jeopardize your project with lack of management buy-in
Part 5: Our Top 5 Agile Mistakes
Part 6: The First End Of Our Project