Manual Software Testing Example

Taking a look at Google.com

First, let’s take a look at the site.

Google.com as of 05/29/2014

Here is a list of what I see:

  1. Favicon is a white g on a blue block
    • Not all browsers display favicons next to the address bar, because past phishing scams used a padlock image as the site icon.
    • All modern browsers support the favicon on tabs and bookmark bars.
  2. The URL in the address bar is https when I entered http
    • The site accepts http but redirects to https
  3. Padlock next to address bar
    • Clicking on it will allow the user to view the security certificate
  4. +You link to https://plus.google.com/
    • plus is a subdomain of google.com and is likely out of scope of testing google.com
  5. Gmail link to https://mail.google.com/mail/
    • mail is a subdomain of google.com and is likely out of scope of testing google.com
  6. Grid icon
    • On-click event: Display links to other services
  7. Sign-in button
    • https://accounts.google.com/ServiceLogin?hl=en&continue=https://www.google.com/
    • Links to accounts.google.com and will redirect (upon successful login) to google.com
      • continue is a URI parameter that holds the redirect URL
      • returnURL is the same thing but for ASP
  8. ‘Notification’ to install Chrome
    • Chrome icon image
    • Heading text, “A faster way to browse the web”
    • Close icon
      • Hides the ‘notification’
    • Install Google Chrome button
  9. Google Logo banner image
  10. TextBox for search input
  11. Google Search button
    • Goes to https://www.google.com/#q={searchTerm}
  12. I’m Feeling Lucky button
  13. Message of the Day with hyperlink
  14. Advertising, Business, About, Privacy & Terms, and Settings links
    • All part of the footer

Testing Google.com

Next, let’s discuss how to test each element.

1. Favicon

You can’t really test an image, but you can test how it displays in each browser.
Here is an example of an icon whose background appears to be clear until it is viewed on a background tab in Chrome:

favicon

2. HTTP to HTTPS

You can only make sure that all http calls you make are redirected to a secure connection. This is a server setting, so it shouldn’t matter which URL you try.
HTTPS can be captured and even decrypted using a proxy like Fiddler. Then you can see things like tracking pixels and post-backs.
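
If you want to script this check, here is a minimal PowerShell sketch (assuming Windows PowerShell and the .NET WebRequest class); it stops at the first response so you can inspect the redirect itself:

    # Minimal sketch: request the http URL and confirm the first response redirects to https.
    $request = [System.Net.WebRequest]::Create("http://www.google.com/")
    $request.AllowAutoRedirect = $false            # stop at the first response

    $response = $request.GetResponse()
    Write-Host "Status  : $([int]$response.StatusCode)"        # expect 301 or 302
    Write-Host "Location: $($response.Headers['Location'])"    # expect an https:// URL
    $response.Close()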

HTTPS Traffic Decrypted

3. Padlock

Click on the padlock to view the certificate information. The certificate should be valid. If not, most browsers will display some sort of warning.
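
Beyond clicking around in the browser, you can also pull the certificate programmatically and check its dates. This is a rough PowerShell sketch using .NET’s SslStream; the host name is just an example:

    # Rough sketch: retrieve the server certificate and check its subject and expiry.
    $tcp = New-Object System.Net.Sockets.TcpClient -ArgumentList "www.google.com", 443
    $ssl = New-Object System.Net.Security.SslStream -ArgumentList $tcp.GetStream()
    $ssl.AuthenticateAsClient("www.google.com")    # throws if the certificate chain is not trusted

    $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $ssl.RemoteCertificate
    Write-Host "Subject : $($cert.Subject)"
    Write-Host "Expires : $($cert.NotAfter)"       # should be a future date

    $ssl.Dispose()
    $tcp.Close()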

Website Identification

4. +You

Testing links that go out of scope should just be limited to verifying the link is valid.

Verify the link goes to plus.google.com. If you are signed out, you will go to accounts.google.com.
This should change to +{yourName} after signing in.

5. Gmail

Testing links that go out of scope should just be limited to verifying the link is valid.

Verify the link goes to mail.google.com. If you are signed out, you will go to accounts.google.com.

6. Grid

Click the grid to trigger the OnClick event. Verify each icon and link.

OnClick Event Pop-up

7. Sign in

Log in and verify the button is no longer visible.

The sign-in page is out of scope, but I’d like to cover it as it is a common element.

Google Sign in

Here we have a chance to do some security testing:

  • Never return “invalid username”
    • Using a brute-force attack, an attacker can find a valid username. From there, it’s a matter of trying passwords.
  • After a certain number of bad attempts with a valid username, some locking mechanism should be triggered (a sketch follows this list).
  • Remember me should not work to gain access to high value systems.
    • Example: After logging in, you should be required to re-enter your password to get to your account settings.
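
Here is a hedged PowerShell sketch of the lockout check; the URL and form field names are hypothetical, and it is meant only for an application you have permission to test:

    # Sketch: repeat bad logins for a known username and watch for a lockout or delay.
    # Only run this against an application you have permission to test.
    $loginUrl = "https://test.example.com/account/login"    # hypothetical endpoint
    for ($i = 1; $i -le 10; $i++) {
        $start = Get-Date
        $body  = @{ username = "knownUser"; password = "wrongPassword$i" }
        try {
            $status = (Invoke-WebRequest -Uri $loginUrl -Method Post -Body $body).StatusCode
        } catch {
            $status = $_.Exception.Response.StatusCode.value__
        }
        $elapsed = ((Get-Date) - $start).TotalMilliseconds
        Write-Host "Attempt $i -> HTTP $status in $([int]$elapsed) ms"
    }
    # Expect: after some number of attempts the account locks or the responses slow down.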

Read more on Authentication

8. Install Chrome

This element shouldn’t appear in Chrome. Verify the image, heading text, and the link.

9. Banner Image

Not much testing here. Is the image there?

10. TextBox

Here we have a chance to do some fuzz testing. Generally, a TextBox corresponds to a database value. When you enter your credentials on Google’s sign-in page and submit, the server compares that value to the values in the database. If you had access to this database, you could see the max size for each column and its data type. This information is useful in testing bounds and types. In the following example we see that EmailAddress is an nvarchar with a length limit of 50 characters. Being an nvarchar, we can enter almost any character. If it were an int, we could try entering an alphabetic character.

AdventureWorks Example
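
As a sketch of how that information drives test inputs, here are some boundary and type cases for a field stored as nvarchar(50); the limit comes from the AdventureWorks example, and the submission step is left as a comment:

    # Boundary and type cases for a field stored as nvarchar(50).
    $maxLength = 50
    $cases = @(
        ""                                  # empty
        "a"                                 # minimum length
        ("a" * ($maxLength - 1))            # just under the limit
        ("a" * $maxLength)                  # exactly at the limit
        ("a" * ($maxLength + 1))            # one over - should be rejected or truncated
        "Ωmega.Ünicode@example.com"         # non-ASCII is valid for nvarchar
        "1234567890"                        # numeric-looking input
    )
    foreach ($case in $cases) {
        Write-Host ("Length {0,3}: {1}" -f $case.Length, $case)
        # Submit $case to the EmailAddress field here and record how the application responds.
    }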

Having a connection to a database also comes with the potential for attacks. Common attacks are Unicode transformation and SQL injection. I will not be demonstrating these attacks on Google as I do not have their permission to do so.

11. Search button

Here are some test cases I derived from exploratory testing:

  • Click Search button without entering anything in the TextBox
    • Nothing should happen
  • Typing in search terms will hide the Search button, due to AutoComplete
    • Verify this behavior

12. I’m Feeling Lucky

Here are some test cases I derived from exploratory testing:

  • Hover text changes when no search term is present in the TextBox
    • Click the button and verify the trending searches are displayed
  • Typing in search terms will hide the Search button, due to AutoComplete
    • Verify this behavior

13. Message of the Day

Verify the message matches the URL.

14. Footer

More links to verify, including the Settings link.

Automated Software Testing Tools

About Automated Software Testing

In software testing, test automation is the use of special software (separate from the software being tested) to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or add additional testing that would be difficult to perform manually.
http://en.wikipedia.org/wiki/Test_automation

Picking the Right Tools for the Job

There is no one-size-fits-all solution. You will need to take into consideration:

  • End-users’ devices: desktop and/or mobile?
  • (Web Applications) Web browsers
  • Technology used to make the tested application (ASP, WinForms, etc.)
  • Size of QA Team
  • Size of QA Automation Team
  • Price

What makes an automation tool good? Here are my thoughts:

  • Usability
    • Buttons and options are tidy and sensibly grouped
    • The program flows like a book, left-to-right and top-to-bottom
  • Ease of Use
    • Beginners can record-and-click to create tests
    • Advanced users can write custom code
  • Productivity
    • A high purchase price is offset after a few months of test automation
    • Compare how long it takes to create and then modify a test in each tool
  • Comments
    • Each test, step, and piece of custom code has a description and/or comment

Automation Software

There are many tools in the wild, but I can only speak to the ones I have used.

These tools are good for scripting:

  • AutoIT
    • AutoIt v3 is a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting.
    • Example: I made a list of URLs to browse via IE to test a toolbar for memory leaks. Monitored the toolbar process via PerfMon.
  • AutoHotKey
    • Fast scriptable desktop automation with hotkeys
    • Example: I made a script that simulated a key press every 10 seconds.
  • Chromium browser automation
    • Extension for automating the Chromium browser: Create project -> Record -> Edit Automation -> Manage -> Play
    • Example: I recorded myself trying to log in to a site. Played back 10 times to see if the account got locked or if the time between requests was extended after each bad attempt.
  • PowerShell
    • Comes with most Windows OS distributions
    • Supports the .NET Framework
    • Example: I wrote a script that copied the latest build from the build machine to each test environment and installed it. The script also sent out an email when it successfully ran a warm-up test, telling QA the build was ready to be tested. (A trimmed sketch follows this list.)
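
For reference, here is a heavily trimmed sketch of that kind of deployment script; the share paths, server names, installer name, and SMTP details are all placeholders, and it assumes PowerShell remoting is enabled on the test machines:

    # Trimmed sketch of a build-deployment script.
    $buildShare  = "\\buildmachine\drops\latest"
    $testServers = @("test01", "test02")
    $installer   = "Setup.msi"

    foreach ($server in $testServers) {
        # Copy the latest build to the test machine.
        Copy-Item -Path (Join-Path $buildShare $installer) -Destination "\\$server\c$\installs" -Force

        # Install it silently (assumes PowerShell remoting is enabled).
        Invoke-Command -ComputerName $server -ScriptBlock {
            Start-Process msiexec.exe -ArgumentList "/i C:\installs\Setup.msi /qn" -Wait
        }

        # Warm-up test: the homepage should answer 200 before QA is notified.
        if ((Invoke-WebRequest -Uri "http://$server/app/").StatusCode -eq 200) {
            Send-MailMessage -To "qa@example.com" -From "build@example.com" `
                -Subject "Build ready on $server" -SmtpServer "mail.example.com" `
                -Body "Warm-up test passed; the latest build is ready for testing."
        }
    }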

These tools are good for testing:

  • Selenium
    • Open source; originally developed at ThoughtWorks, with Java-based tooling
    • Can run tests in JUnit or NUnit
    • Limited browser support
    • Free
  • Telerik Test Studio
    • Supports most modern browsers
    • Supports Silverlight and WPF applications
    • Framework is available for free
    • Integrates with TFS
  • Microsoft Test Manager
    • Part of Visual Studio (Test Professional, Premium or higher)
    • Same browser support as Selenium
    • Test plan and test case repository

Recommendations

For scripting, I prefer PowerShell. It is fully supported by Microsoft and comes pre-installed on most Windows OS deployments.

For testing, I prefer Test Studio. It has quarterly releases by Telerik and hits all the right notes in what I consider a good tool.

Why Telerik Test Studio?

  • Usability
    • Layout is clear and familiar (follows MS Office’s lead)
  • Ease of Use
    • Beginners can record-and-click to create new tests. They can also edit tests, like modify the find logic of an html element, rather easily.
    • Advanced users can write custom code in VB or C#.
  • One environment
    • In Selenium you need separate programs to create, edit, and play back tests.
    • In Test Manager you need to have Visual Studio Pro (or higher) to convert test cases from recorded actions to automated tests.
  • Compatibility
    • Browsers: Internet Explorer, Chrome, Firefox, Safari
    • Application Platforms: Silverlight and WPF
  • Productivity
    • Tests are created and modified quickly.
  • Comments
    • Each Test and Test List has a description. Each Test Step can be commented or have its description altered.

Was there a close second?

Microsoft Test Manager is evolving with each iteration and could come to surpass HP’s Quality Center for enterprise solutions. It could overtake Telerik if it let the Test Professional edition write automated tests and kept those tests in a separate solution from the application.

Strengths

  • Test plans and test cases are saved as part of the project
  • Manual testing has great features
    • Sidebar to step through test cases and mark as pass or fail
    • Take screenshots or videos of testing
    • Attach bugs in TFS

Weaknesses

  • Fragmented IDE requirements
    • Must have Test Professional to make tests but must have Visual Studio Professional (or higher) to convert tests to automation
  • Limited browser support
  • High initial cost

Best Practices

  • Try not to rely on existing data
    • Your data will typically be wiped when a new build is deployed or when a VM snapshot is reverted
  • Check for dependencies
    • Do not fail an automated test that deletes a record if the record cannot be found.
      • This is a setup issue. The functionality is not being tested, only its error handling.
  • One failed test should not invalidate the rest of testing
    • Just because the homepage test fails does not mean you can’t test other pages
  • Do not leave artifacts
    • Added a record? Why not modify it and then delete it too?
      • This will test CRUD operations
  • Try not to rely on known IDs
    • Data can be entered asynchronously or tests run out of order
    • Instead try to look up the information first, see Check for dependencies (above)
  • Make tests independent
    • Test 1: Login
    • Test 2: Login and Add Record
      • A test list that runs Test 1 then Test 2 will run the login code twice.
        • This adds to the overall test time
        • But it allows you to run each test individually after a test list fails
  • Use modular tests and code
    • Never repeat code; instead, write a module or function to handle common steps like logging on (a minimal sketch follows this list)
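
As an illustration of the modular-code point above, here is a minimal sketch of a reusable login helper; the URLs, field names, and credentials are hypothetical and not tied to any particular framework:

    # Reusable login helper so no test repeats the login steps.
    function Invoke-TestLogin {
        param(
            [string]$BaseUrl,
            [string]$UserName,
            [string]$Password
        )
        $body = @{ username = $UserName; password = $Password }
        # -SessionVariable keeps the authenticated cookies for later requests.
        Invoke-WebRequest -Uri "$BaseUrl/account/login" -Method Post -Body $body `
            -SessionVariable session | Out-Null
        return $session
    }

    # Test 2 (Login and Add Record) reuses the helper instead of repeating the login steps.
    $session = Invoke-TestLogin -BaseUrl "https://test.example.com" -UserName "qaUser" -Password "P@ssw0rd!"
    Invoke-WebRequest -Uri "https://test.example.com/records/add" -Method Post `
        -Body @{ name = "temp record" } -WebSession $session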

Security Software

OWASP offers an incredible security tool called Zed Attack Proxy. Beginners can simply enter the URL of their web application and click go.

Manual Software Testing Primer

About Software Testing

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to the process of executing a program or application with the intent of finding software bugs (errors or other defects).
http://en.wikipedia.org/wiki/Software_testing

Test Environment

The environment in which you will test is likely predetermined by the QA team, the Development team, or the Infrastructure team. Use of virtual machines (VMs) is standard practice. Virtual machines allow you to pre-configure an environment (an OS/software combo) and save it as a snapshot. A tester can fire up the VM, perform his/her testing, and revert the VM when finished. Use of a VM is preferred over your workstation because you want to limit potential risk factors. VMs should contain just your software (and any dependencies) and the OS. You may also want to include some testing tools, like Fiddler.
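
As one example of that snapshot-and-revert workflow, here is a short sketch using the Hyper-V PowerShell module; VMware and VirtualBox offer equivalent commands, and the VM and checkpoint names are placeholders:

    # Save a clean baseline, test, then revert when finished (Hyper-V module).
    Checkpoint-VM -Name "QA-Win7-IE11" -SnapshotName "CleanBaseline"   # save the pre-configured state
    Start-VM -Name "QA-Win7-IE11"                                      # fire up the VM and test

    # ...perform testing...

    Stop-VM -Name "QA-Win7-IE11" -Force
    Restore-VMSnapshot -VMName "QA-Win7-IE11" -Name "CleanBaseline" -Confirm:$false   # revert when finished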

Common VM Technologies

Testing the Application

The VM is fired up (or you have logged onto your remote testing device) and you have installed the latest build. Now what?

Verify the Build and Deployment

Here is a checklist of things to verify:

  • Does the application start?
    • For web application deployments, a warm-up test is recommended
      • Warm-up Test: Requests every publicly accessible page and asserts the response is 200 (OK); a sketch follows this checklist
        • Should be part of any deployment script
  • If the application requires a SQL connection, then verify the connection settings
    • Usually authentication is tied to a database/server, so try logging in
      • If there is an error:
        • Check the *.config file for your application in a text editor, like Notepad++ (a second sketch after this checklist shows how to script this check)
          • Inspect the connection string, verify
            • Server
            • Database
            • Authentication method
            • Authentication credentials
        • Ensure the database is online and can be connected to, by
          • (Windows) Run cmd.exe and type ping {serverName}
          • Connect to the SQL Server using a client like SQL Server Management Studio (included in SQL Server Express)
            • Drill down to {serverName}\Databases\{databaseName}\{tableName}
              • Where {tableName} is your user table
            • Right-click a table and click Select Top 1000 Rows
  • Release Builds only, also check:
    • Version Numbers: Right-click the executable, select Properties, and then select Details
    • Digital Certificates: Right-click the executable, select Properties, and then select Digital Certificates
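
Here is a minimal sketch of the warm-up test from the checklist above; the base URL and page list stand in for your application’s site map:

    # Warm-up test: request each known page and assert a 200 (OK) response.
    $baseUrl = "https://test.example.com"
    $pages   = @("/", "/login", "/about", "/contact")

    foreach ($page in $pages) {
        try {
            $status = (Invoke-WebRequest -Uri ($baseUrl + $page)).StatusCode
        } catch {
            $status = $_.Exception.Response.StatusCode.value__
        }
        if ($status -eq 200) { Write-Host "OK   $page" }
        else                 { Write-Warning "FAIL $page returned $status" }
    }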

Read more on Installation Testing
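
And here is the second sketch mentioned in the checklist: it reads the connection string from the application’s config file and checks that the database server answers. The config path, connection-string name, and key names are placeholders:

    # Pull the connection string from the application's config file and ping the server.
    [xml]$config = Get-Content "C:\inetpub\wwwroot\MyApp\web.config"
    $connString = $config.configuration.connectionStrings.add |
        Where-Object { $_.name -eq "DefaultConnection" } |
        Select-Object -ExpandProperty connectionString
    Write-Host "Connection string: $connString"

    # Basic reachability check for the server named in the connection string.
    $server = (($connString -split ";") | Where-Object { $_ -like "Data Source=*" }) -replace "Data Source=", ""
    Test-Connection -ComputerName $server -Count 2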

Smoke Test

Verify that the current build is worth the effort of a full test. Sometimes a build breaks so badly that you cannot perform simple tasks. Smoke tests should be done first in each environment, and this requires coordination in large teams. A build could be broken in one particular OS/software combination and the fix could require refactoring. In that case you wouldn’t want to continue testing in the other environments, as the code will change soon.

Read more on Sanity Testing and Smoke Testing

Verify functionality

This will differ widely depending on the application. Hopefully there is already some documentation for you to start.

Testing from here is split into two categories:

  • Functional: Tests focus on the immediate task
    • Usually reserved for updating existing code
    • Example: A new login page was made
      • Test the login functionality
  • Regression: A full run of all tests
    • Usually reserved for major versions, release builds, and when code modifies a shared resource.
    • Example: A new login page was made using new session management
      • Most of your tests require an active session

Read more on Regression Testing

Exploratory Testing

When you are given a new code area to cover, you should try:

  1. Happy Path Testing
  2. Negative Testing
  3. Fuzz Testing
  4. Security Testing
  5. Conformance Testing (if applicable)
  6. Accessibility Testing (if applicable)
  7. Language localization (if applicable)

In all cases you should also:

Other Testing

A/B Testing: Usually done by the marketing team. QA will have tested both pages and the mechanism that diverts users to one page or the other.

Acceptance Testing: The stakeholder will do some light testing after QA is finished to ensure the final product satisfactorily meets the requirements.

Performance Testing: Tests the application under various loads. Usually reserved for test automation.

Concurrent Testing

Concurrent Testing is the process of running automated tests while manual testers do exploratory testing. Any company that uses test automation will have some sort of concurrent testing. No one should rely solely on automated tests.

Read more on Concurrent Testing

Further Reading

Software Testing

  • Wikipedia parent article of most of the links in this article

Udacity CS258

  • Software Testing Methodologies Class Online

Q&A with a Software Tester

What is a QA software analyst?

Develops, publishes, and implements test plans. Writes and maintains test automation. Develops quality assurance standards. Defines and tracks quality assurance metrics such as defect densities and open defect counts. Requires a bachelor’s degree and 0-2 years of experience coding in C, C++, Java. Must have a working knowledge of quality assurance methodologies. Familiar with NT, UNIX and/or Solaris environments. Relies on experience and judgment to plan and accomplish goals. Performs a variety of tasks. Works under general supervision; typically reports to a manager. A certain degree of creativity and latitude is required.
Salary.com

What is a typical day in QA like?

Typically, a QA member will:

What is the career path?

Note: There is no formal system or naming convention.

  1. Software Tester – Entry-level testing positions (testers) that do black-box testing.
    1. Requires an aptitude for PC troubleshooting
    2. Executes Test Plans
  2. Analyst – More experienced testing positions that do white-box testing.
    1. Requires knowledge of programming logic and design
    2. May also require security testing proficiency
    3. Writes and Executes Test Plans
  3. Automation Specialist – Uses white-box testing to design for test automation.
    1. Requires knowledge of Computer Science, generally a Bachelor of Science
    2. Converts Test Plans to automated tests
  4. Team Lead – Provides guidance to peers.
    1. Can be a Tester, Analyst, or Automation Specialist
  5. Manager – Guides the efforts of the team and makes process adjustments

What are the differences between a developer and an automation specialist?

Both require programming knowledge, but a developer needs more of it. A developer is responsible for writing clean and efficient code to run in production environments. The automation specialist’s code is often only seen and used by QA, much like unit tests are only seen and used by developers. QA’s code is wrapped in large nets of error catching and handling. The application might simply delete record 13, but the tests have to make sure record 13 exists, the delete call was made, record 13 is gone, and no errors were returned.
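
To make the record 13 example concrete, here is an illustrative sketch; Get-Record and Remove-Record are hypothetical stand-ins for whatever API or page the application actually exposes:

    # The application just deletes the record; the test wraps every step in checks.
    $recordId = 13

    $record = Get-Record -Id $recordId                     # hypothetical cmdlet
    if (-not $record) {
        Write-Warning "Setup issue: record $recordId does not exist; skipping the delete test."
        return
    }

    try {
        Remove-Record -Id $recordId                        # hypothetical cmdlet
    } catch {
        throw "Delete call failed for record $recordId : $_"
    }

    if (Get-Record -Id $recordId) {
        throw "Record $recordId still exists after the delete call."
    }
    Write-Host "PASS: record $recordId was deleted with no errors."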

How do I become a tester?

Like most jobs, the education requirements are there to scare off people who lack confidence. I have seen people hired with no prior QA experience. As long as you show yourself capable of critical thinking, you will be fine.

Should I get certifications?

If you want, sure. Will it help? It shouldn’t. You can learn a great deal about testing and even show that you retained some of that information for a short while. The problem is that you can pass these tests by simply remembering the vocabulary, and that does not equate to proficiency.

Instead, I recommend doing some reading on testing. Also, it never hurts to get an online Master of Science in Computer Science for $6,600 from Georgia Tech.

How do I hire a good tester?

There are tests on ProveIt, but those kinds of tests mark you off for clicking a button instead of going to File->Open.

I recommend making your own test:

  • Use a VM snapshot that you can revert to with each interview
  • Give them the tools they need – Internet access and/or pre-installed software that is used by your team
  • Give them 30 minutes to take the test
  • Have a number of bugs ranging in difficulty
    • Easy Bugs – Ones found during Happy Path Testing
      • Example: Clicking ‘Submit’ on the login form causes a 404 error
    • Avg Bugs – Ones found during Negative Testing
      • Example: A letter can be entered in a number field
    • Hard Bugs – Ones found using tools
      • Example: JavaScript errors in the browser’s Developer Tools