I have worked in “enterprise” settings for most of my career. In those settings, automation covered a full regression suite and was costly to maintain. In this article I will share a few lessons learned from working in an agency setting, scoped to web applications.
- Enterprise (setting) – Typically an app or two with a very long life cycle. Most often, an introduced bug could mean lost revenue.
- Agency (setting) – Applications are developed and handed over to the client when development is complete.
- Concurrent (testing model) – ‘Regression’ automated tests are run after a new build. At the same time, or shortly after, you test the new functionality. A ticket is not complete until the new functionality is added to the automated regression plan.
- Does the automation require heavy maintenance?
- In development a page can change with every commit.
- How easy does your automation lend itself to the scrum methodology?
- If handing off your automation package takes longer than a couple of hours, then it is likely not set up in an intuitive way.
- Can anyone contribute at any time?
- Multiple testers on a given project ought to be able to make changes to test cases as needed.
- The answer to this question can depend on a couple of factors:
- Do you use source control?
- Is your automation framework available to the entire team?
- What value does the automation provide?
- If the manual regression test takes longer than a day and needs to run more than once a month, then automation is likely a worthwhile investment.
Providing Cost Benefit
In an agency setting we bill time and materials. If it takes me an hour to test a new function, it might take an hour and a half more to build a solid automated test (I always ballpark initial automation as “time to test + 50%”), so we are already at 2.5 hours. That isn’t a big deal, as the test pays for itself by saving regression hours on subsequent builds. Additional cost is incurred with each build that breaks your tests, and as your regression suite grows, so too does your required maintenance.

One way to minimize this cost is to use a simple framework. QA automation specialists have many frameworks to choose from, and those frameworks typically offer more than you really need. Selenium is one such tool: it provides a bevy of commands that mostly go unused, and it requires several other tools to actually use. With Selenium you work in a fragmented system. You might develop your test in Selenium IDE and export it to your language of choice, fix any issues with the exported code, and then use something like JUnit to run the tests. If you want to schedule your tests and email the results, you’ll have to write that yourself.
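The “time to test + 50%” ballpark above lends itself to a quick break-even estimate. This is an illustrative sketch only; the maintenance figure is an assumption, not a number from the article:

```python
def breakeven_builds(manual_hours, automation_overhead=0.5,
                     maintenance_hours_per_build=0.25):
    """Estimate how many builds it takes an automated test to pay for itself.

    manual_hours: hours to test the feature by hand once.
    automation_overhead: extra fraction of manual time to automate (0.5 = the +50% ballpark).
    maintenance_hours_per_build: assumed upkeep per build (hypothetical figure).
    """
    spent = manual_hours * (1 + automation_overhead)  # initial automation cost
    saved = 0.0
    builds = 0
    # Each subsequent build saves the manual hours but costs some maintenance.
    while saved <= spent:
        builds += 1
        saved += manual_hours
        spent += maintenance_hours_per_build
    return builds

# A 1-hour manual test (1.5h to automate) pays off within a few builds
# as long as maintenance stays light.
print(breakeven_builds(1.0))
```

Raising `maintenance_hours_per_build` pushes the break-even point out quickly, which is exactly the argument for keeping the framework simple.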
I am a fan of a paid product called Telerik Test Studio. Telerik lets us create, edit, run, schedule, and report on tests quite easily. Its downside is that it runs only on Windows and tests only IE, Firefox, Chrome, and WPF. Most shops end up using Selenium, as it’s free and can be extended to many devices via third-party plugins. Microsoft Test Manager uses the Selenium driver for Chrome, too.
I use my automation suite primarily to test MVP flows and happy paths. I include some negative tests, of course, but most of that ground should already have been covered in manual testing, so those tests provide little value beyond a “sanity” check. “But it doesn’t cover Safari or mobile,” you might contend. I have found mobile device emulators to be unreliable, and outside of Appium there isn’t a one-size-fits-all solution. I do most of my initial manual testing in Safari, as my primary machine is a Mac. I also have a pile of mobile devices and an intern. If something works in Telerik but not on those devices, it’s usually a device-specific issue anyway.
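One lightweight way to keep a suite focused on MVP flows is to tag each case and run only the tags you need. The sketch below is framework-agnostic and purely illustrative; the test names and tags are hypothetical:

```python
# Minimal tag-based test registry (illustrative, not tied to any framework).
REGISTRY = []

def regression_test(*tags):
    """Decorator that registers a test function under tags like 'happy' or 'negative'."""
    def wrap(fn):
        REGISTRY.append((fn, set(tags)))
        return fn
    return wrap

@regression_test("happy")
def test_checkout_happy_path():
    cart_total = 2 * 9.99          # placeholder for a real MVP flow
    assert round(cart_total, 2) == 19.98

@regression_test("negative")
def test_checkout_rejects_negative_quantity():
    quantity = -1                  # placeholder for a real negative case
    assert quantity < 1

def run(tag):
    """Run only the tests carrying the given tag; return (passed, failed)."""
    passed = failed = 0
    for fn, tags in REGISTRY:
        if tag in tags:
            try:
                fn()
                passed += 1
            except AssertionError:
                failed += 1
    return passed, failed

print(run("happy"))  # only the MVP/happy-path cases run
```

A regular regression run would execute the `happy` set; the `negative` set can stay opt-in as a sanity check.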
The Value of Automation
As noted before, automation provides a quick way to verify previous functionality in new builds. The argument that it is necessary for quicker, more reliable QA is easily won; I have never had an employer refuse to purchase an automation solution. My argument was made by showing how long it takes to create and run tests in a given framework. Compared to Selenium, the cost is often paid for in saved development hours. In the agency setting, I am always thinking of fairness to the client. Should the client have to pay for extra maintenance hours because you chose the wrong framework? No. That means maintenance needs to be as simple and quick as possible: simplicity lets any team member update the test cases, and how quickly issues are resolved depends on your testing framework.
Other Facets of Test Automation
In addition to automated regression tests, I rely on SortSite. This tool saves a lot of time that would otherwise be spent checking the items below manually, and it helps ensure a quality product.
Covered by SortSite
- Accessibility – check against WCAG and Section 508 guidelines
- Broken Links – check for broken links and spelling errors
- Compatibility – check for HTML, script and image formats that don’t work in common browsers
- Search Engine Optimization – check Google, Bing and Yahoo webmaster guidelines
- Privacy – check for compliance with EU and US law
- Web Standards – validate HTML and CSS
- Usability – check against Usability.gov guidelines
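As a rough illustration of the “Broken Links” item above, such a scan boils down to collecting every `href` on a page and then checking each one. This stdlib sketch does only the collection step; it is not how SortSite works internally:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from <a> tags: the first step of a link check."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page fragment for demonstration.
page = '<p><a href="/pricing">Pricing</a> <a href="https://example.com/docs">Docs</a></p>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)
# A real checker would now request each link and flag non-2xx responses,
# which is roughly the step a tool like SortSite automates for the whole site.
```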
Covered by ZAP
- A1: Injection
- A2: Broken Authentication and Session Management
- A3: Cross-Site Scripting (XSS)
- A4: Insecure Direct Object References
- A8: Cross-Site Request Forgery (CSRF)
- A10: Unvalidated Redirects and Forwards
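A toy version of one of these checks: a reflected-XSS probe (A3 above) injects a marker payload and looks for it echoed back unescaped. This is only a sketch of the idea, nothing like ZAP’s real scanner, and the pages below are fabricated for illustration:

```python
import html

PAYLOAD = '<script>alert("probe")</script>'

def reflects_unescaped(response_body: str) -> bool:
    """True if the raw payload appears in the response without HTML-escaping,
    a crude signal of reflected XSS."""
    return PAYLOAD in response_body

# Simulated responses: one escapes user input, one echoes it raw.
safe_page = f"<p>You searched for: {html.escape(PAYLOAD)}</p>"
vulnerable_page = f"<p>You searched for: {PAYLOAD}</p>"

print(reflects_unescaped(safe_page))        # escaped input is not flagged
print(reflects_unescaped(vulnerable_page))  # raw echo is flagged
```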
A Note on CI
When your dev team finds out you have automation, they’ll likely ask whether it can be included in the build process. This request comes from their love of the unit tests already included in the project. It is certainly feasible, but not without issues: it is almost guaranteed that some tests will fail because the new build changed an element’s id or an XPath. For that reason I don’t include automated regression in the build process; I just run it when I need to, which is often right after a build or deployment.
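One way to soften those “an id or XPath changed” failures is a fallback locator: try several strategies in priority order and use the first that matches. The sketch below runs against a fake element list rather than real Selenium code, and the attribute names are hypothetical:

```python
# Fake "DOM": each element is a dict of attributes (illustrative only).
dom = [
    {"id": "btn-submit-v2", "data-test": "submit", "text": "Submit"},
    {"id": "nav-home", "text": "Home"},
]

def find_first(elements, locators):
    """Return the first element matched by any locator, in priority order.

    Each locator is a predicate. Putting stable hooks (like a data-test
    attribute) ahead of brittle ids keeps a test alive when markup shifts."""
    for locate in locators:
        for el in elements:
            if locate(el):
                return el
    return None

submit = find_first(dom, [
    lambda el: el.get("id") == "btn-submit",     # old id, broken by the new build
    lambda el: el.get("data-test") == "submit",  # stable test hook
    lambda el: el.get("text") == "Submit",       # last-resort text match
])
print(submit["id"])
```

The same idea carries over to real WebDriver code by trying locator strategies in sequence before failing the test.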