Every movement begins with a moment.
In a digital agency like Moxie, the push for accelerated delivery is constant and expected. So, too, is the demand for quality. Meeting both demands, which Moxie does proudly and consistently, requires relentless assessment and improvement of internal processes. In keeping with this practice, Moxie's QA team, with input from the development team, recently re-evaluated our software testing methodology.
When planning for software testing, our approach had focused predominantly on manual techniques. While this was an effective solution, defects were beginning to pop up in certain products, such as marketing campaign delivery. To address these issues, and to stay true to our operational philosophy of repeatable, reliable, predictable, innovative and strategic delivery, we homed in on marketing campaign emails. The delivery workload in this area had been steadily increasing, and testing email is a clumsy, exhaustive and time-consuming process. With the ongoing rise in volume and complexity, the test team faced constant time constraints and the inevitable fatigue that comes with redundant testing. We (mere humans) simply couldn't keep up. It quickly became obvious that maintaining quality by adding manual testers or having them work longer hours wasn't a sustainable strategy.
Marketing email QA turned out to be a great case study for automation. Consider the work a manual tester performs on each campaign email: tedious tasks that involve thumbing through plain text to identify discrepancies, whereas a tool will leave no HTML untouched. Frequency of error is another issue with manual testing, owing to its redundant nature. Simply put, the more often a tester performs a specific task, the more likely he or she is to miss a defect. Not so with automation, which performs each of these functions efficiently, repeatedly and accurately 100% of the time.
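To make the "no HTML untouched" claim concrete, here is a minimal sketch of the kind of check an automated tool can run over an entire email in milliseconds. This is a hypothetical illustration, not Moxie's framework; a real implementation would use an HTML parser such as Cheerio, but a regex stands in here to keep the example dependency-free.

```javascript
// Scan every anchor tag in an email's HTML and flag suspicious hrefs:
// empty links, bare "#" placeholders, and unrendered template variables.
// A human skimming plain text can easily miss these; a script cannot.
function findBrokenLinks(emailHtml) {
  const anchorRe = /<a\b[^>]*href="([^"]*)"[^>]*>/gi;
  const problems = [];
  let match;
  while ((match = anchorRe.exec(emailHtml)) !== null) {
    const href = match[1];
    if (href === '' || href === '#' || href.includes('{{')) {
      problems.push(href);
    }
  }
  return problems;
}

const sample =
  '<a href="https://example.com">ok</a><a href="{{tracking_url}}">oops</a>';
console.log(findBrokenLinks(sample)); // → [ '{{tracking_url}}' ]
```

Run against every email in a campaign, a check like this behaves identically on the first message and the ten-thousandth, which is exactly where human accuracy degrades.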
It’s about augmentation, not minimization
When introducing automation, our goal was to augment manual testing, not minimize it. By mechanizing tedious, repetitive tasks, we freed our testers to perform other important tests that can't be automated; only a manual tester can perform exploratory testing or identify alternative test scenarios on the fly. A key advantage of this approach was reducing the fatigue of our manual team. In this instance, by focusing our efforts on identifying ideal automation candidates, we were able to shave 20% to 30% off the tester workload. As we move forward, we will continue to look for additional candidates that can help us achieve another 10% to 20% of coverage.
Options: To build or not to build
Selecting the appropriate tool is key, but in our case, no existing tool could support our development process given the technologies we had combined. As disappointing as this could have been, we were excited about the opportunity to pioneer a solution. One positive side effect of the exercise was the collaboration between the functional disciplines of development and QA. The automation engineers on the QA team worked closely with the developers to design the framework to be as flexible and scalable as possible: new modules can be added to easily extend functionality, and the development team can build hooks within the application under test for automated scripts to identify when testing future enhancements. It is easy to get carried away when determining the scope of automation; if you aren't careful, the cost of maintaining the automation scripts can far exceed the expense of introducing updates to the tested application. Working with the development team helped us avoid that trap and design a solution that would be robust and scalable.
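The "hooks within the application under test" idea is worth a concrete sketch. One common form of such a hook is a stable test attribute embedded in the markup, so scripts locate elements without depending on brittle copy or styling. The attribute name `data-test` below is our assumption for illustration, not a convention the article specifies.

```javascript
// Hypothetical example: the development team tags key elements in the
// email markup with a stable data-test attribute. Automated scripts can
// then assert the hook exists before running deeper checks, failing fast
// with a clear message when an expected element disappears.
const emailHtml = `
  <table>
    <tr><td data-test="hero-cta">
      <a href="https://example.com/offer">Shop now</a>
    </td></tr>
  </table>`;

function hasHook(html, hookName) {
  return html.includes(`data-test="${hookName}"`);
}

console.log(hasHook(emailHtml, 'hero-cta')); // → true
```

Because the hook survives copy and layout changes, scripts written against it keep working as the email evolves, which is precisely how maintenance cost stays below the cost of updating the tested application.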
Tools of the trade
Our automated test framework for marketing campaign emails is a Node.js-based application that leverages various open source libraries and modules, including Cheerio, mailparser, Handlebars and Hyperquest. The asynchronous, modular nature of Node has allowed us to run a colossal number of parallel tasks at breakneck speed, with the largest and most time-consuming portions of the functionality already available thanks to the open source community. PhantomJS renders the email HTML in a WebKit browser and attaches a screenshot to the returned report, and all test data is stored as detailed message objects in MongoDB. The database lets us track what we have already tested, along with the results, which we can later mine for metrics or use to build new applications. We store the framework in Atlassian Stash and designed it to be platform agnostic, allowing us to develop from any OS while running the production version on an Ubuntu server.
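To give a feel for the "detailed message objects" the framework persists, here is a hypothetical sketch of what one such record might look like. The field names are our assumptions; the actual schema isn't published. In the real framework, mailparser would supply the parsed email, PhantomJS the screenshot path, and the object would be written to MongoDB.

```javascript
// Assemble a test record for one campaign email. Storing the individual
// check results alongside an overall verdict is what makes later
// metrics and reporting possible.
function buildTestRecord(campaignId, checks) {
  return {
    campaignId,                        // which campaign email was tested
    testedAt: new Date().toISOString(),
    passed: checks.every((c) => c.ok), // overall verdict across all checks
    checks,                            // individual check results
    screenshot: null,                  // path filled in once the render completes
  };
}

const record = buildTestRecord('spring-promo-01', [
  { name: 'links-resolve', ok: true },
  { name: 'alt-text-present', ok: false },
]);
console.log(record.passed); // → false
```

With records shaped like this in a database, answering "what have we already tested, and how did it go?" becomes a simple query rather than a spreadsheet hunt.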
What opportunities have you seen to solve a problem via automation? Would existing tools suffice or leave you wanting more?
About the Author
Jonathan Terry is the Senior QA Engineer at Moxie. A passionate technologist, he’s always on the lookout for open source toys, headphones, problems to solve and time to spend with his family. You can find him on various social media outlets @jonyet and costumed annually at Dragon*Con.