I got into an interesting chat on Twitter about "Happy Path Testing". You see, in the experience of one guy, exploratory testers just don't have the patience to test requirements: they get bored and stop seeing problems once they become familiar with a UI. Well, it seems to me he is blaming several different issues on "exploratory testers", when really they are issues with some people, and with all humans. Let me break it down for you.
Issue 1: Functional Testing is part of the job.
By this, I mean that testing the requirements and verifying correct behavior (i.e., "The Happy Path") is part of every tester's job. There is NO exception for exploratory testers. The difference is that for exploratory testers this is where you start (your acceptance criteria), not where you finish. In many modern software testing groups, there is some automation in place to make sure that the happy path stays happy. This may live in FitNesse, Selenium, or unit tests (often written test-first), and it may be created and maintained by developers, testers, or both. Usually the basic happy path is best covered by developers, and more complex combinations are covered by testers using scripts or tools, but the point isn't to automate all testing. It is simply to automate the boring testing so that humans can do what they are better at, rather than running the same test the same way time after time. Also, just because a test is automated doesn't mean you never need to run it manually: it is good to test the user experience from time to time. Automation has a narrow validation scope in most cases; a human can generally detect more variance than automation can.
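To make that concrete, here is a minimal sketch of a "keep the happy path happy" check in Selenium with Python. The URL, element IDs, and credentials are hypothetical stand-ins, not any real application:

```python
# A minimal happy-path check: log in with valid credentials and
# confirm we land where we expect. Everything named here (URL,
# element IDs, credentials) is a made-up placeholder.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Walk the happy path: valid login, nothing tricky.
    driver.get("https://example.test/login")
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("correct-password")
    driver.find_element(By.ID, "submit").click()

    # The check itself is narrow: one assertion on the page title.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```

Notice how little this actually verifies: one assertion on the title. That's exactly the narrow validation scope mentioned above; a human on the same page would notice layout glitches, slow responses, and odd wording that this script sails right past.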
Today I spent several hours validating 117 XML files, and out of those I logged two separate issues. It turned out there was a valid explanation for both. So, did I waste my time? Am I doing a poor job? I say no. No, because this was new functionality that needed thorough testing. Even a slight problem would be quite expensive in this area, and this data change involves some math, so I had to validate beyond the smallest scope to make sure the changes didn't impact more than intended. The FitNesse tests and unit tests not only passed, but were reviewed by another developer and a tester. Also, this was a new area for me, so I learned more by going through these files difference by difference to understand why each change was there. So why did I stop short of validating all 580 of the files? For a great reason: after testing over 20% of them, I hadn't found even one unexpected difference, yet I'd validated every expected change over 100 times in total. It is possible that I missed a bug, but the likelihood was low. I conferred with my fellow testers; we all found the same results with different tests. The fixes were good. Did we feel bummed out by learning this? No. We felt proud of our developers, because more and more they are delivering solid code. Many times the bugs we find are requirements gaps we didn't consider, rather than something functionally missing or a code error. This doesn't mean our testing skills are slipping or that we no longer have to do functional testing. It simply means our team of developers has evolved beyond the "throw it over the wall" phase.
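For flavor, here is roughly the kind of comparison that validation involves, sketched in Python. The directory names and the expected-change pattern are hypothetical placeholders; the point is the shape of the check, not the specifics of our data:

```python
# A rough sketch: diff the old and new version of each XML file and
# flag any changed line beyond the one change we expected. The
# "baseline"/"updated" directories and the <adjustedRate> pattern
# are invented for illustration.
import difflib
import re
from pathlib import Path

EXPECTED = re.compile(r"<adjustedRate>.*</adjustedRate>")  # hypothetical expected change

for old_path in sorted(Path("baseline").glob("*.xml")):
    new_path = Path("updated") / old_path.name
    old_lines = old_path.read_text().splitlines()
    new_lines = new_path.read_text().splitlines()

    # Keep only added/removed lines, skipping the "---"/"+++" headers,
    # then drop the differences we expected to see.
    diff = [d for d in difflib.unified_diff(old_lines, new_lines, lineterm="")
            if d.startswith(("+", "-")) and not d.startswith(("+++", "---"))]
    unexpected = [d for d in diff if not EXPECTED.search(d)]

    if unexpected:
        print(f"{old_path.name}: {len(unexpected)} unexpected line(s)")
```

A script like this narrows hours of eyeballing down to the files that actually deserve a human look, which is the whole point: the tool does the repetition, the tester does the judging.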
I've worked on teams with another culture in the past: the culture of "We don't have time to write unit tests!" That reasoning is backwards. If you don't have time to test it, you surely don't have time for the customer to reject it, for you to fix it, skip testing it again, and have it sent back again. If your excuse is that you don't have enough time, realize that you are dumping boring work, and part of your responsibility, ON your exploratory testers. Don't be shocked if the best testers don't want to work with your team. Just as I'm sure you don't want to work with testers so limited that they can't start testing without "complete documentation", or who expect hand-holding, the best testers want to work with developers who take pride in the code they create. There is no joy in finding bugs that any monkey could find. The joy is in discovering a bug important enough that it SHOULD be fixed: not because it is a developer mistake, but because finding and fixing it before it harmed any customer saves many users from frustration and makes the product better.
Issue 2: Cognitive bias affects everyone.
The second issue is caused by something all humans can fall victim to: not JUST exploratory testers, and not just testers, but all humans! It's cognitive bias, of which there is a nice list linked here. Anyhooo... not specific at all to testers, but something we professional testers, along with all scientists, must work to avoid.
To summarize: the boring part IS your job as a tester. Any kind of tester. However, it is also your job to try to automate the boring part, for the simple fact that humans aren't especially good at repetitive tasks, and brain-engaged testing is the best sort of testing. It may not be possible to eliminate tedious checks, but reduce them where you can. Also, as a developer? The boring UNIT TESTS, and the discipline of doing some checks yourself before releasing code, are YOUR job. They aren't to be pushed off onto the testers. We share this in common: the more we automate the happy path, the more time we can spend writing interesting code and finding interesting bugs, rather than rehashing the old stuff or finding and fixing the same bugs over and over again.