Test Strategy
Purpose
The purpose of this document is to establish a shared understanding of the overall approach, tools, targets, and timing of test activities.
Guiding Principles
Principles underlying the testing approach.
| Principle | Description |
| --- | --- |
| Shared Responsibility | Everyone is responsible for testing and quality. |
| Test Automation | All types of tests (unit, integration, acceptance, performance, ...) should be automated. Manual testing will only be used for exploratory testing and UAT. |
| Data Management | Real-world examples and modified data, TBD. |
| Test Management | Test cases, code, documents and data will be treated with the same importance as production code. |
Quality and Test Objectives
Constantly deliver working software that meets the customer's requirements by providing fast feedback and preventing defects, rather than merely detecting them.
The following quality attributes have been identified as relevant and are used as a basis for the test approach in terms of priority and test targets.
| Attribute | Description | Measure and Target | Priority |
| --- | --- | --- | --- |
| Correctness | Features and functions work as intended. | 100% completion of agreed features; number of high-priority defects = 0 | |
| Maintainability | How easy it is to add features, correct defects or release changes to the system. | | |
| Interoperability | Ease with which the system can exchange information with other systems. | User interface renders and functions properly on predefined browser versions (see Configuration Testing). | |
| Integrity | Ability to protect against viruses, protect the privacy of entered data, and prevent unauthorised access and information loss. | All access will be via HTTPS over a secured connection. | |
| Availability | Percentage of planned up-time during which the system is required to be operational. | | |
| Performance | Responsiveness of the system under a given load and its ability to scale to meet growing demand. | Response time: TO DO; Throughput: TO DO | |
| Security | | | |
| Usability | | | |
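Once the response-time target is defined (it is still TO DO above), it can be enforced with a small automated probe. The following is a minimal sketch only; the endpoint URL, sample count, and threshold are placeholders rather than agreed targets:

```python
import statistics
import time

import requests

# Placeholder values -- replace with the agreed targets once they are defined.
BASE_URL = "https://staging.example.com/"   # hypothetical endpoint
MAX_MEDIAN_RESPONSE_SECONDS = 0.5           # hypothetical target
SAMPLES = 20

def test_median_response_time_within_target():
    """Probe the endpoint repeatedly and assert the median response time."""
    timings = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        response = requests.get(BASE_URL, timeout=10)
        timings.append(time.perf_counter() - start)
        assert response.status_code == 200
    assert statistics.median(timings) <= MAX_MEDIAN_RESPONSE_SECONDS
```

A probe like this belongs in the Performance environment (see Test Environments below), not in the per-commit CI run.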
Configuration Testing
Test devices
- MacBook Pro 13
- BrowserStack
Supported browsers
- Chrome (latest)
- Safari (latest)
- Firefox (latest, for UI tests)
- Mobile Safari
- Android WebView (native)
- Samsung Internet 5.4, 6.2
| | OS X | iOS | Android | Win 10 |
| --- | --- | --- | --- | --- |
| Chrome | | | | |
| Firefox | | | | |
| Safari | | | | |
| Edge | | | | |
| IE10 (TBD) | | | | |
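The browser/OS matrix can be exercised by running the same check against each browser through Selenium's Remote WebDriver, pointed at the Selenium Grid (or a BrowserStack hub). A minimal sketch, assuming a local grid endpoint and a placeholder page under test:

```python
import pytest
from selenium import webdriver

# Assumed endpoints -- a local Docker Selenium Grid or a BrowserStack hub URL.
GRID_URL = "http://localhost:4444/wd/hub"
PAGE_UNDER_TEST = "https://example.com/"  # placeholder

BROWSERS = ["chrome", "firefox"]  # extend per the support matrix above

@pytest.mark.parametrize("browser_name", BROWSERS)
def test_page_loads_in_each_browser(browser_name):
    """Run the same smoke check against every browser in the matrix."""
    options = (webdriver.ChromeOptions() if browser_name == "chrome"
               else webdriver.FirefoxOptions())
    driver = webdriver.Remote(command_executor=GRID_URL, options=options)
    try:
        driver.get(PAGE_UNDER_TEST)
        assert driver.title  # page rendered and has a title
    finally:
        driver.quit()
```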
Breakpoints
| | Desktop | Tablet | Mobile |
| --- | --- | --- | --- |
| Max width | 1400px | 768px | 360px |
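Rendering at each breakpoint can be spot-checked in automation by resizing the browser window to the widths above. A minimal sketch; the page URL and the checked element are hypothetical:

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

PAGE_UNDER_TEST = "https://example.com/"  # placeholder

# Widths taken from the breakpoint table above; heights are arbitrary.
BREAKPOINTS = {"desktop": (1400, 900), "tablet": (768, 1024), "mobile": (360, 640)}

@pytest.mark.parametrize("name,size", BREAKPOINTS.items())
def test_layout_renders_at_breakpoint(name, size):
    """Load the page at each breakpoint width and confirm the body is visible."""
    driver = webdriver.Chrome()
    try:
        driver.set_window_size(*size)
        driver.get(PAGE_UNDER_TEST)
        body = driver.find_element(By.TAG_NAME, "body")
        assert body.is_displayed(), f"body not visible at {name} breakpoint"
    finally:
        driver.quit()
```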
Test Approach
- Distinction between testing and checking
- Risk-weighted test scenarios supported by strategic test automation
- Checking is automated through unit tests, integration tests, etc.
- Testing is done through a combination of heuristic testing, risk-based testing, and exploratory testing

The workflow combines Test-Driven Development and context-driven testing:
| Step | Description | Roles |
| --- | --- | --- |
| Discuss | Discuss or workshop the required feature/user stories to create a shared understanding. Identify real-life scenarios and examples that have realistic context. | PO, TL, QA, UX, BA |
| Distill | Distill the required feature into an executable specification based on the user stories, examples and acceptance criteria. Specifications are kept in human-readable form using the format shown below the table. | PO, DEV, QA, BA |
| Develop | Develop the required feature using TDD. Automated tests should be built around the identified scenarios. | DEV, QA |
| Demonstrate | Demonstrate the implementation by running the tests and performing manual exploratory tests. | PO, UX, BA |

Specification format:

    Feature Story:
      As a [role]
      I want [feature]
      So that [benefit]

    Scenario Outline: Title
      Given [context]
      And [more context]
      When [event]
      Then [outcome]
      And [another outcome]
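As an illustration of how a distilled scenario feeds the Develop step, the sketch below automates a hypothetical scenario with pytest, mirroring the Given/When/Then structure in comments. The Wishlist class and the scenario itself are invented for illustration, not part of the actual system:

```python
# Hypothetical example: automating a distilled scenario with pytest.

class Wishlist:
    """Minimal stand-in for the feature under test."""
    def __init__(self):
        self.items = []

    def add(self, item):
        # Adding the same item twice must not create a duplicate.
        if item not in self.items:
            self.items.append(item)

def test_user_can_add_an_item_to_their_wishlist():
    # Given an empty wishlist
    wishlist = Wishlist()
    # When the user adds an item
    wishlist.add("blue shoes")
    # Then the wishlist contains that item
    assert wishlist.items == ["blue shoes"]
    # And adding it again does not duplicate it
    wishlist.add("blue shoes")
    assert wishlist.items == ["blue shoes"]
```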
Test Types
| Type | Definition | Tool |
| --- | --- | --- |
| Unit | Testing that verifies the implementation of software elements in isolation | |
| API/Service | Testing in which software elements are combined and their interactions verified through their service interfaces | |
| Acceptance / UI | Testing based on acceptance criteria to enable the customer to determine whether or not to accept the system | |
| E2E | Testing that verifies complete user journeys through the fully integrated system | |
| Cross-functional | Testing of cross-functional requirements, e.g. compliance with accessibility standards | |
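For illustration, an API/service-level check might look like the sketch below. The endpoint, payload, and response shape are hypothetical, and the Tool column above is still to be decided:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service endpoint

def test_create_and_fetch_item():
    """Service-level check: create a resource, then read it back."""
    created = requests.post(
        f"{BASE_URL}/items", json={"name": "blue shoes"}, timeout=10
    )
    assert created.status_code == 201
    item_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/items/{item_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "blue shoes"
```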
Defect Management
Ideally, defects are only raised and recorded when they are not going to be fixed immediately. In that case, the conditions under which they occur and their severity need to be accurately recorded so that the defect can easily be reproduced and then fixed.
Defect priority is defined by severity and impact, i.e. what happens and who is affected. QA determines the severity and impact of a defect so that the PO can decide when, or if, the defect should be resolved. The following table is to be understood as a reference framework for defect severity, impact and priority, as well as appropriate actions.
Please note that defects are not currently assigned a priority; it is listed here only as a quick reference for other parts of the document.
| Priority (PO) | Severity (QA) | Impact (QA) | Action (PO) |
| --- | --- | --- | --- |
| HIGH | System breakdown and/or key feature not working. | Affects a large number of users. | Needs to be resolved immediately and takes priority over any new feature. Story is not done before the defect is resolved. |
| MEDIUM | Causes undesirable behaviour but the system is still functional or a workaround exists. | Affects a moderate number of users. | Should be resolved during the normal course of development activities. Story may be considered done without the defect fix. |
| LOW | Mostly cosmetic; does not cause any major breakdown of the system. | Affects a minimal number of users. | The defect is an irritant, but repair can wait until the more serious defects have been fixed. Story can be considered done without the defect fix. |
Test Environments
| Name | Description | Data Setup | Test Usage |
| --- | --- | --- | --- |
| Local Dev | Local environment specific to each developer/tester machine, based on the version/branch of source code being developed. | Subset of production data enhanced with edge-case data | Unit, Functional, Acceptance, ... |
| Dev/CI | Supports continuous integration of code changes and execution of unit, functional and acceptance tests. Additionally, static code analysis is completed in this environment. | Subset of production data enhanced with edge-case data | Unit, Functional, Acceptance |
| UAT | Supports exploratory and user acceptance testing. | Subset of production data enhanced with edge-case data | Acceptance, Exploratory |
| Staging | Supports user acceptance testing. | Should be as close to production data as possible | |
| Performance | Supports performance testing. | Subset of production data | Performance |
| Prod | Production environment. | | |
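Test code can select the target environment through configuration rather than hard-coded URLs. A minimal sketch, assuming hypothetical host names and an environment variable named TEST_ENV:

```python
import os

# Hypothetical base URLs -- the real host names are project-specific.
ENVIRONMENTS = {
    "local": "http://localhost:8080",
    "ci": "https://ci.example.com",
    "uat": "https://uat.example.com",
    "staging": "https://staging.example.com",
}

def base_url() -> str:
    """Resolve the base URL for the environment under test (default: local)."""
    env = os.environ.get("TEST_ENV", "local")
    try:
        return ENVIRONMENTS[env]
    except KeyError:
        raise ValueError(
            f"Unknown TEST_ENV '{env}'; expected one of {sorted(ENVIRONMENTS)}"
        )
```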
Entry and Exit Criteria
Entry Criteria (In Dev -> Ready for QA)
Testing will only commence once all of the following criteria are met:

- Story implementation is complete
- Tests passed (unit, integration, journey)
- Successful desk check
- No outstanding high-priority defects
Exit Criteria (In Showcase -> Done)
Testing is considered done once all of the following criteria have been met:

- Test execution is complete
- Main functionality passed for supported browsers and platforms
- User acceptance testing complete (showcase with PO)
- No outstanding high-priority defects
- All outstanding minor and medium defects have been recorded
Additional Tools
Browser extensions (Chrome)

- Google Analytics Debug
- Accessibility Developer Tools
- Postman
- Referer Control
- Tag Assistant
- User Agent Switcher
- Web Developer
- Docker for Mac (Selenium Grid setup) - currently in a trial phase
- VMs for Windows/IE
QA activities for stories
- Check stories “In Analysis” for completeness and testability
- Story kick-off (“Ready for Dev”)
- Desk check before stories go to “Ready for Testing”
- Test stories (“In QA”)
- Present story to PO (“Ready for Showcase”) and deploy to the Showcase environment
- Deploy to the production environment