– “The app keeps crashing”
– “It doesn’t work on my phone (Samsung Galaxy S4)”
– “I upgraded the app and now it doesn’t work anymore”
– “With the latest iOS upgrade the camera function in the app doesn’t work anymore”
At first we were surprised by how many obvious issues are reported in the reviews of most apps and other mobile services. Later, however, we found that 69% of all mobile services are never tested before launch on any device other than the one they were developed on, or by anyone other than the original developer.
As mobile adoption has become almost ubiquitous in developed markets, the business imperative for mobile is clear. But our study indicates that the speed of adoption and the proliferation of handheld devices, coupled with the use of social media, seem to have caught enterprise testing by surprise. Organizations may not be giving mobile the priority it warrants. Only 31% of respondents across the world currently test mobile applications, a figure that does not deviate much from region to region, and those surveyed readily admit to being ill-equipped for mobile testing. This suggests that QA has fallen behind the mainstream mobile curve. (World Quality Report 2012-2013 by HP, Sogeti and CapGemini)
Negative feedback and reviews can be disastrous for a mobile service, as it’s difficult to turn around from a bad start.
Mobile test strategy approach
Putting together a great test strategy is crucial to delivering high-quality mobile services. Here are the key steps for developing your mobile testing strategy:
1. Acceptance Criteria
First, define what will make you, as the company, product owner or developer, satisfied with the service by specifying the acceptance criteria. This should include:
- Use cases – define what the user will do and the expected result/outcome
- Look and feel – use mock-ups or wireframes to compare the look and feel of the final service with the original specification (note that this may change in the process)
- Non-functional requirements – define expectations for performance, upgrades, reliability, etc.
- Target devices and OS versions – define what devices and OS versions the service shall work on
The acceptance criteria will be used for the final User Acceptance Testing (UAT) described below.
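One way to make acceptance criteria actionable is to capture them as plain data, so that later test phases (and the final UAT) can walk through them mechanically. Below is a minimal sketch in Python; the criteria values, device names and the `covers_device` helper are illustrative assumptions, not part of any specific tool or project:

```python
# Hypothetical acceptance criteria captured as data. All values here are
# illustrative examples, not real project requirements.
ACCEPTANCE_CRITERIA = {
    "use_cases": [
        {"action": "log in with valid credentials",
         "expected": "home screen is shown"},
    ],
    "non_functional": {"max_startup_seconds": 3.0},
    "target_devices": [
        {"model": "Samsung Galaxy S4", "os": "Android 4.4"},
        {"model": "iPhone 5s", "os": "iOS 7"},
    ],
}

def covers_device(criteria, model):
    """Return True if the given device model is in scope for testing."""
    return any(d["model"] == model for d in criteria["target_devices"])
```

Keeping the criteria in one structured place makes it easy to check, for any given device or use case, whether it is in scope.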
2. Development testing
Testing should start from the first prototype release of code with some kind of working functionality and continue until the end of the project.
Unit testing is performed by the developer. Before any release – whether it is the first release, a bug fix or a feature update – the application must be tested by the developer prior to releasing it for further testing or to any users.
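As an illustration, a developer-level unit test might look like the following, using Python's standard `unittest` framework. The `normalize_username` helper is a made-up example function, not taken from any real app:

```python
import unittest

def normalize_username(raw):
    """Made-up example helper: trim whitespace and lowercase a username."""
    if not isinstance(raw, str):
        raise TypeError("username must be a string")
    return raw.strip().lower()

class TestNormalizeUsername(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_rejects_non_string(self):
        with self.assertRaises(TypeError):
            normalize_username(None)

# Run the suite explicitly so the example works in any context.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeUsername))
```

Tests like these are cheap to run on every build, which is exactly what the "before any release" rule above requires.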
3. Integration testing
Integration testing is usually performed as soon as access to back-end services, web services and APIs is available. Integration testing ensures that all APIs work as expected, that all areas of the system communicate with each other correctly and that there are no gaps in the data flow. The final integration test proves that the system works as an integrated unit when all the fixes are complete. This testing is usually done with automated test scripts. By investing in early integration testing you can avoid wasting valuable development resources before other systems are mature enough.
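One common way to start this kind of automated testing early, before the real back end is mature, is to run the same test script against an in-process stand-in for the API. The sketch below assumes a hypothetical `ProfileClient`, endpoint path and fake back end; none of these come from a real system:

```python
import json

class ProfileClient:
    """Hypothetical API client; `transport` abstracts the HTTP layer so the
    same test script can target either a fake or the real back end."""
    def __init__(self, transport):
        self.transport = transport

    def get_profile(self, user_id):
        status, body = self.transport("GET", "/api/users/%d" % user_id)
        if status != 200:
            raise RuntimeError("backend returned %d" % status)
        return json.loads(body)

def fake_backend(method, path):
    """In-process stand-in for the real back end, used until it is ready."""
    if method == "GET" and path == "/api/users/42":
        return 200, json.dumps({"id": 42, "name": "Alice"})
    return 404, ""

def test_profile_roundtrip():
    client = ProfileClient(fake_backend)
    assert client.get_profile(42) == {"id": 42, "name": "Alice"}
```

When the real services come online, only the transport is swapped out; the test script itself stays the same.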
4. Quality Assurance Testing
Real quality testing should be performed outside of the core development team by professional testers. There are several types of testing that are performed by the Quality Assurance Team:
a. Smoke testing
Smoke testing is preliminary testing to reveal simple failures severe enough to reject a prospective software release. A subset of test cases that covers the most important functionality of a component or system is selected and run, to ascertain whether the most crucial functions of a program work correctly. For example, a smoke test may ask basic questions like “Does the program run?” or “Does clicking the main button do anything?” The purpose is to determine whether the application is so badly broken that further testing is unnecessary.
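That go/no-go logic can be sketched as a small runner that executes the basic checks in order and stops at the first failure. The `FakeApp` object and the two checks are placeholders standing in for real launch and UI checks:

```python
def run_smoke_tests(checks):
    """Run each (name, check) pair in order; return the name of the first
    failing check, or None if the build passes and deeper testing can
    proceed."""
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            ok = False
        if not ok:
            return name
    return None

class FakeApp:
    """Placeholder for a real app under test."""
    def launch(self):
        return True
    def tap_main_button(self):
        return True

app = FakeApp()
checks = [
    ("app launches", app.launch),
    ("main button responds", app.tap_main_button),
]
```

A non-None result means the build is rejected before any time is spent on the deeper test phases below.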
b. Functional testing
Functional testing assures that each element of the service meets the functional requirements of the business as outlined in the requirements document/functional brief, use cases, system design specification and other functional documents produced, and it includes user interface testing. Typically the use cases in the acceptance criteria make up the core of the functional testing. Functional testing also includes device- and OS-specific testing, where the test team tests across all the physical devices agreed as the target devices for the service.
c. Exploratory Testing
Exploratory testing is not scripted or planned in as much detail as a more traditional test plan. Rather than walking through a series of pre-defined test cases, the tester is asked to think about the high-level test areas in the software and is encouraged to use their knowledge of the product and the code to navigate through different areas of it. As they walk through the software, the emphasis is on finding bugs, exposing new test areas, and learning more about the product to help guide future test efforts.
d. Bug fix verification Testing
Bugs are reported continuously throughout testing and when a bug has been confirmed as fixed by the developer then the use case where the bug was identified is performed again to verify that the bug has been fixed.
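This re-test loop can be sketched as a helper that re-runs the reproducing test case for every bug a developer has marked as fixed. The bug IDs and the reproducing tests below are placeholders:

```python
def verify_fixes(fixed_bugs, repro_tests):
    """Re-run the reproducing test for each bug marked as fixed.
    Returns the IDs of bugs whose fix could not be confirmed."""
    still_broken = []
    for bug_id in fixed_bugs:
        test = repro_tests[bug_id]
        try:
            passed = test()
        except Exception:
            passed = False
        if not passed:
            still_broken.append(bug_id)
    return still_broken
```

Bugs that come back in the `still_broken` list are reopened and sent back to the developer rather than closed.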
e. Performance Testing and other Non-functional testing
Non-functional testing proves that the documented performance standards or requirements are met. Examples of testable standards include response time and compatibility with specified browsers and operating systems, as well as stress testing the application.
When performance requirements are not specified in the documentation, the comparison should be made subjectively:
1) Compare to previous experience testing similar applications
2) Compare the same app on different platforms
3) Compare the same app with older versions
4) Compare app performance over a long test session without restarting the app
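Once a baseline exists, whether from documented requirements or recorded from an older version or another platform as in comparisons 1–3 above, the check can be automated. A minimal sketch; the timing helper and the 25% tolerance are illustrative choices, not a standard:

```python
import time

def measure_response(action, repeats=5):
    """Time an action several times and return the median in seconds,
    which is less noisy than a single measurement."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        action()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[len(samples) // 2]

def regressed(current, baseline, tolerance=0.25):
    """Flag a regression if the current time exceeds the recorded
    baseline by more than the given tolerance (25% by default)."""
    return current > baseline * (1 + tolerance)
```

The same comparison works for the long-session case: measure at the start of the session, then treat that figure as the baseline for measurements taken hours later.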
f. Regression testing
Regression testing is performed after the release of each phase to ensure that there is no impact on previously released software. Regression testing cannot be conducted on the initial build because the test cases are derived from defects found in previous builds, so initially there will be nothing to test against.
Tests are run in the following order, by status and then by individual priority:
- Fixed and resolved issues
- Full test case testing
- Exploratory testing
5. User Acceptance Testing (UAT)
Finally the service is ready to go live, but before this there is one final step: the customer of the project (external or internal) goes through the acceptance criteria one final time to ensure that the agreed minimum criteria are met. Depending on the preparations made beforehand, this can be a formality, or it can go on for months with additional iterations of the development and testing described above. Once the UAT is completed, the service is ready to go live.
6. Post launch testing
Quality assurance never stops. Once the service is live we recommend continued testing and monitoring of user feedback to make sure that the service is continuously enhanced and improved.
Please feel free to contact us for more information about testing.
Image courtesy of The Fox and King