No matter what side of the aisle you sit on politically, if you’re responsible for a major rollout of your company’s website, the government’s healthcare site rollout might make you cringe. With everything that is at stake, how do you know whether your site will meet the expectations of your customers on the date they expect? There are many critical elements to the success of a major launch: clearly defined business goals, written requirements, strong leadership, sound technology choices, selecting a partner with a proven track record, and much more. Testing is also a critical component of the process. There are three essential factors in an effective testing process:
1. Hire experienced Quality Assurance staff and involve them in the project early.
2. Make testing transparent.
3. Gather feedback, both formal and informal.
Hiring experienced people with a successful track record is a no-brainer, but too often tight budgets and schedules are managed by bringing QA staff on late, reducing testing staff levels, or cutting the testing time. Your test team needs time to fully understand the business and technical requirements, write detailed test plans, execute those plans, research and document all the defects found during testing, and retest as the code is fixed. This is an iterative process that should start as soon as the written requirements are delivered. The process of writing a thorough test plan will often highlight missing details in a requirement or design even before coding begins.
Although it’s true you won’t have a great site without great design and technical teams, without a very strong, integrated QA effort you won’t know when, or if, your site is really ready for prime time.
Metrics are Critical
Most projects track how many defects have been found and their relative severity. It’s an important data point, but alone it tells you only that some testing was done and some issues were found. To judge where your project is relative to your launch date, your business requirements, and your quality goals, you need, at a minimum, to plan for and track the following:
– Test development:
- What is the total number of tests required to test the full set of business requirements & features?
- How many tests are required for performance, integration, usability, compatibility, user interface?
- How many of the written tests have been reviewed by the business subject matter expert to ensure the business requirements are accurately reflected in the test, and by the technical team to ensure the technical design and code paths will be exercised?
- How many of the total number of tests have been written to-date?
– Test execution:
- Of the total number of tests, how many have been executed to date?
- Of the tests executed, how many have been completed with no defects found?
- Of the total number of tests written, how many need to be modified & re-executed due to requirement or design changes?
- Of the total number of tests, how many need to be re-executed for a final regression?
- What are the severity levels of the open defects by feature/function? What are your high-risk modules?
- How many defects have been fixed, retested and closed? Is your QA team able to keep up with test execution and retesting fixed defects?
- What is the fixed-to-found rate of defects reported each week? Has the progress flat-lined?
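The counts above can be rolled into a simple progress snapshot. Here is a minimal sketch in Python; the field names, class name, and numbers are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class TestMetrics:
    """Snapshot of the test-development and execution counts.
    All field names here are illustrative, not a standard."""
    total_planned: int   # tests required to cover all requirements
    written: int         # tests written to date
    reviewed: int        # written tests reviewed by business and technical SMEs
    executed: int        # tests executed to date
    passed: int          # executed tests that found no defects

    @staticmethod
    def pct(part: int, whole: int) -> float:
        return 0.0 if whole == 0 else round(100.0 * part / whole, 1)

    def summary(self) -> dict:
        return {
            "written_pct": self.pct(self.written, self.total_planned),
            "reviewed_pct": self.pct(self.reviewed, self.written),
            "executed_pct": self.pct(self.executed, self.total_planned),
            "pass_pct": self.pct(self.passed, self.executed),
        }

# Made-up numbers for a project in mid-flight.
m = TestMetrics(total_planned=400, written=300, reviewed=240,
                executed=200, passed=150)
print(m.summary())
```

Even a snapshot this small answers the scheduling questions above: how much of the planned test suite exists, how much has been vetted, and how much of the site has actually been exercised.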
The number of tests to be written and executed informs your QA staffing level and your timeline. Without written tests and documented results, you won’t know which features and functions have been tested, or whether the tests accurately and thoroughly exercise the requirements.
To judge whether your project is on schedule, you need to know how many of the requirements have been implemented and how well. The rate of test execution and the results will indicate whether the implementation is proceeding as scheduled. A requirement shouldn’t be considered complete when it’s implemented; it is complete when it’s tested and has met all the business requirements and quality goals. If the defect find rate equals the defect fix rate, progress has flat-lined. Until that trend line heads steadily downward, the quality of your site and your launch date may be in jeopardy.
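The flat-line check is easy to automate once you track weekly found and fixed counts. A minimal sketch, using made-up weekly numbers (not data from any real project):

```python
# Defects found and fixed per week -- illustrative numbers only.
found_per_week = [40, 35, 30, 28, 27]
fixed_per_week = [20, 30, 30, 29, 30]

def is_flatlined(found: list, fixed: list, weeks: int = 3) -> bool:
    """Progress has flat-lined when, over the last `weeks` weeks,
    the find rate matched or exceeded the fix rate every week --
    i.e., the open-defect backlog is not shrinking."""
    recent = zip(found[-weeks:], fixed[-weeks:])
    return all(fnd >= fx for fnd, fx in recent)

open_backlog = sum(found_per_week) - sum(fixed_per_week)
print(f"Open defects: {open_backlog}")
print(f"Flat-lined: {is_flatlined(found_per_week, fixed_per_week)}")
```

In this example the fix rate has overtaken the find rate in recent weeks, so the backlog is finally shrinking; if `is_flatlined` returned true near the launch date, that would be exactly the jeopardy signal described above.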
If you are managing a major launch of a site, consider including a usability test and a beta test phase. There are many usability testing service companies that can help you assess the site’s user interface and ease of use. And, as with a beta test, it will be better to hear customer criticism when there is still an opportunity to make changes, or adjust the official launch should that be necessary.
Incorporate informal methods of gathering feedback as well as formal ones. One way to verify the metrics you are tracking throughout the project is to use the traditional walk-around method. Continually ask your testing staff, and others, what they think of the site’s progress. They represent your first line of customers, and they should be able to give you a good indication of whether your site is ready for prime time.