Testing metrics are quantitative measures used to estimate the quality, progress, health, and productivity of the software testing process. They aim to make testing more efficient and effective, and they support better decisions about subsequent testing by providing accurate and reliable data about the testing already performed.
A testing metric quantifies the degree to which a system, system component, or process possesses a given attribute. An everyday analogy is the weekly mileage of a scooter compared with the ideal mileage recommended by its manufacturer.
Software testing metrics are quantitative indications of the extent, capacity, dimension, amount, or size of some attribute of a process or product.
Why Are Test Metrics Vital?
We cannot improve a process unless we can compare it against something and see where it falls short or excels. Test metrics give the testing team that point of comparison. Test metrics are useful for:
- Making decisions about the next phase of testing activities
- Providing evidence for a prediction or claim
- Understanding what kind of improvement is needed in a given area
- Supporting decisions about process or technology changes
Types of Test Metrics:
1) Project Metrics: These metrics gauge the efficiency of the project team and of the testing tools the team uses during testing.
2) Product Metrics: These metrics deal with the quality of the software or application.
3) Process Metrics: These metrics help improve the efficiency of the processes in the software development life cycle.
Choosing the right kind of metric is crucial. Consider the following factors before deciding which metrics to use:
- Identifying the target audience for the metrics
- Defining the goal for the metrics
- Selecting the metrics that suit the project's needs
- Analyzing the cost-benefit trade-off of each metric, along with the project life-cycle phase in which it yields the most value
Manual Test Metrics:
Manual test metrics are generally divided into two classes:
(a) Base Metrics
(b) Calculated Metrics
Base metrics are the raw data collected by testers during test case development and execution, such as the number of test cases written and the number of test cases executed. Calculated metrics are derived from that base data; test managers typically use calculated metrics, such as % Completed and % Test coverage, for test reporting.
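To make the distinction concrete, here is a minimal Python sketch that derives calculated metrics from base counts a tester might record. All names and numbers are illustrative, not taken from any specific tool or project.

```python
# Base metrics: raw counts recorded directly during test development and execution.
# All sample numbers below are hypothetical.
total_test_cases = 200        # test cases designed
executed_test_cases = 150     # test cases actually run
passed_test_cases = 120
covered_requirements = 45
total_requirements = 50

# Calculated metrics: values derived from the base metrics for reporting.
percent_completed = (executed_test_cases / total_test_cases) * 100
percent_test_coverage = (covered_requirements / total_requirements) * 100
percent_passed = (passed_test_cases / executed_test_cases) * 100

print(f"% Completed:     {percent_completed:.1f}%")       # 75.0%
print(f"% Test coverage: {percent_test_coverage:.1f}%")   # 90.0%
print(f"% Passed:        {percent_passed:.1f}%")          # 80.0%
```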
Test Metrics Formulas:
Requirement Coverage = (Number of requirements covered / Total number of requirements) x 100
Test Design Coverage = (Total number of requirements mapped to test cases / Total number of requirements) x 100
Test Execution Coverage = (Number of test cases or scenarios executed (passed + failed) / Total number of test cases or scenarios planned) x 100
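As a quick illustration, the three coverage formulas above can be computed from plain counts; the numbers below are made up for the example.

```python
# Illustrative coverage calculations; all input numbers are hypothetical.
requirements_total = 40
requirements_covered = 36          # requirements exercised by at least one test
requirements_mapped_to_tests = 38  # requirements traced to test cases
tests_planned = 120
tests_executed = 110               # passed + failed

requirement_coverage = (requirements_covered / requirements_total) * 100          # 90.0
test_design_coverage = (requirements_mapped_to_tests / requirements_total) * 100  # 95.0
test_execution_coverage = (tests_executed / tests_planned) * 100                  # about 91.7
```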
Schedule Variance = ((Actual effort – Estimated effort) / Estimated effort) x 100
Schedule Slippage = ((Actual end date – Estimated end date) / (Planned end date – Planned start date)) x 100
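A short sketch of both schedule formulas, using Python's date type; the effort figures and dates are invented for illustration, and the planned end date is assumed to equal the estimated end date.

```python
from datetime import date

# Schedule variance from effort figures (person-hours); numbers are illustrative.
estimated_effort = 400
actual_effort = 460
schedule_variance = ((actual_effort - estimated_effort) / estimated_effort) * 100  # 15.0% over

# Schedule slippage from dates; all dates are hypothetical,
# and the planned end date is taken as the estimated end date.
planned_start = date(2024, 1, 1)
planned_end = date(2024, 1, 31)
actual_end = date(2024, 2, 6)

schedule_slippage = ((actual_end - planned_end).days /
                     (planned_end - planned_start).days) * 100  # 20.0%
```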
Defect Severity Index (DSI) = Sum of (Number of defects at a severity level x Severity level) / Total number of defects
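For example, the DSI weights each defect by its severity level. A small sketch with made-up severity weights (Critical = 4 down to Low = 1) and defect counts:

```python
# Defect Severity Index: weight each defect count by its severity level.
# Severity weights and counts below are illustrative assumptions.
severity_weights = {"Critical": 4, "High": 3, "Medium": 2, "Low": 1}
defect_counts = {"Critical": 2, "High": 5, "Medium": 10, "Low": 8}

weighted_sum = sum(severity_weights[s] * n for s, n in defect_counts.items())
total_defects = sum(defect_counts.values())
dsi = weighted_sum / total_defects
print(f"DSI = {dsi:.2f}")  # (2*4 + 5*3 + 10*2 + 8*1) / 25 = 51 / 25 = 2.04
```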
Passed Test Cases Percentage = (Number of Passed Tests/Total number of tests executed) X 100
Failed Test Cases Percentage = (Number of Failed Tests/Total number of tests executed) X 100
Blocked Test Cases Percentage = (Number of Blocked Tests/Total number of tests executed) X 100
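The three execution-status percentages above share the same shape, so a small helper covers all of them; the counts are illustrative.

```python
def status_percentage(count: int, total_executed: int) -> float:
    """Percentage of executed tests in a given status (passed, failed, or blocked)."""
    return (count / total_executed) * 100 if total_executed else 0.0

# Hypothetical run totals
executed = 200
passed, failed, blocked = 160, 30, 10

print(f"Passed:  {status_percentage(passed, executed):.1f}%")   # 80.0%
print(f"Failed:  {status_percentage(failed, executed):.1f}%")   # 15.0%
print(f"Blocked: {status_percentage(blocked, executed):.1f}%")  # 5.0%
```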
Fixed Defects Percentage = (Defects Fixed/Defects Reported) X 100
Average time taken by the development team to repair defects = Total time taken for bug fixes / Number of bugs
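A brief sketch of both defect-fix metrics; the counts and hours below are hypothetical.

```python
# Fixed defects percentage and average time to repair a defect.
# All figures below are invented for illustration.
defects_reported = 80
defects_fixed = 68
fixed_defects_percentage = (defects_fixed / defects_reported) * 100  # 85.0%

total_fix_time_hours = 204                                   # total time spent on bug fixes
average_repair_time = total_fix_time_hours / defects_fixed   # 3.0 hours per defect
```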
Number of tests run per period = Number of tests run/Total time
Test design efficiency = Number of tests designed /Total time
Test review efficiency = Number of tests reviewed /Total time
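These three productivity ratios divide a count of work items by the time spent on them; a compact illustration with made-up figures, measuring time in hours:

```python
# Productivity-style metrics: work items completed per unit of time.
# All counts and durations are illustrative.
tests_run, run_hours = 90, 30
tests_designed, design_hours = 60, 40
tests_reviewed, review_hours = 45, 15

tests_run_per_hour = tests_run / run_hours              # 3.0 tests executed per hour
test_design_efficiency = tests_designed / design_hours  # 1.5 tests designed per hour
test_review_efficiency = tests_reviewed / review_hours  # 3.0 tests reviewed per hour
```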
Defect Removal Efficiency = (Number of defects found during testing / (Number of defects found during testing + Number of defects found after delivery)) x 100
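Finally, a minimal Defect Removal Efficiency calculation; the defect counts are hypothetical.

```python
# Defect Removal Efficiency: share of all defects caught before delivery,
# expressed as a percentage. Counts are illustrative.
defects_found_in_testing = 95
defects_found_after_delivery = 5

dre = (defects_found_in_testing /
       (defects_found_in_testing + defects_found_after_delivery)) * 100
print(f"DRE = {dre:.1f}%")  # 95.0%
```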