Software Testing Framework
Document version: 2.0
Table of Contents

Revision History
Testing Framework
1.0 Introduction
1.2 Traditional Testing Cycle
2.0 Verification and Validation Testing Strategies
2.1 Verification Strategies
2.1.1 Reviews
2.1.2 Inspections
2.1.3 Walkthroughs
2.2 Validation Strategies
3.0 Testing Types
3.1 White Box Testing
White Box Testing Types
3.1.1 Basis Path Testing
3.1.2 Flow Graph Notation
3.1.3 Cyclomatic Complexity
3.1.4 Graph Matrices
3.1.5 Control Structure Testing
3.1.5.1 Condition Testing
3.1.5.2 Data Flow Testing
3.1.6 Loop Testing
3.1.6.1 Simple Loops
3.1.6.2 Nested Loops
3.1.6.3 Concatenated Loops
3.1.6.4 Unstructured Loops
3.2 Black Box Testing
Black Box Testing Types
3.2.1 Graph Based Testing Methods
3.2.2 Equivalence Partitioning
3.2.3 Boundary Value Analysis
3.2.4 Comparison Testing
3.2.5 Orthogonal Array Testing
3.3 Scenario Based Testing (SBT)
3.4 Exploratory Testing
3.5 Structural System Testing Techniques
3.6 Functional System Testing Techniques
4.0 Testing Phases
4.2 Unit Testing
4.3 Integration Testing
4.3.1 Top-down Integration
4.3.2 Bottom-up Integration
4.4 Smoke Testing
4.5 System Testing
4.5.1 Recovery Testing
4.5.2 Security Testing
4.5.3 Stress Testing
4.5.4 Performance Testing
4.5.5 Regression Testing
4.6 Alpha Testing
4.7 User Acceptance Testing
4.8 Beta Testing
5.0 Metrics
6.0 Test Models
6.1 The 'V' Model
6.2 The 'W' Model
6.3 The Butterfly Model
7.0 Defect Tracking Process
8.0 Test Process for a Project
9.0 Deliverables
Revision History

Version No. | Date | Author | Notes
1.0 | August 6, 2003 | Harinath | Initial document creation and posting on web site.
2.0 | December 15, 2003 | Harinath | Renamed the document to Software Testing Framework V2.0. Modified the structure of the document. Added the Testing Models section. Added SBT and ET testing types.

The next version of this framework would include Test Estimation Procedures and more metrics.
Testing Framework
Through experience, it has been determined that there should be around 30 defects per 1000 lines of code. If testing does not uncover roughly 30 defects, the logical conclusion is that the test process was not effective.
1.0 Introduction
Testing plays an
important role in today’s System Development Life Cycle. During Testing, we
follow a systematic procedure to uncover defects at various stages of the life
cycle.
This framework is aimed at providing the reader with the various Test Types, Test Phases, Test Models and Test Metrics, and at guiding how to perform effective testing in a project.
All the definitions and standards mentioned in this framework are existing ones. I have not altered any definitions, but wherever possible I have tried to explain them in simple words. Also, the framework, approach and suggestions are drawn from my experience. My intention with this framework is to help Test Engineers understand the concepts of testing and the various techniques, and apply them effectively in their daily work. This framework is not for publication or for monetary distribution.
If you have any
queries, suggestions for improvements or any points found missing, kindly write
back to me.
1.2 Traditional Testing Cycle
Let us look at
the traditional Software Development life cycle. The figure below depicts the
same.
Fig A: Testing after coding | Fig B: Testing in every phase of the life cycle
In the above
diagram (Fig A), the Testing phase comes after the Coding is complete and
before the product is launched and goes into maintenance.
However, the recommended test process involves testing in every phase of the life cycle (Fig B). During the requirements phase, the emphasis is on validation to determine that the defined requirements meet the needs of the project. During the design and program phases, the emphasis is on verification to ensure that the design and programs accomplish the defined requirements. During the test and installation phases, the emphasis is on inspection to determine that the implemented system meets the system specification.
The chart below
describes the Life Cycle verification activities.
Life Cycle Phase | Verification Activities
Requirements |
Design |
Program (Build) |
Test |
Installation |
Maintenance |
Throughout the
entire lifecycle, neither development nor verification is a straight-line
activity. Modifications or corrections to a structure at one phase will require
modifications or re-verification of structures produced during previous phases.
2.0 Verification and Validation Testing Strategies

2.1 Verification Strategies

The Verification Strategies, the persons / teams involved in the testing, and the deliverable of each phase of testing are briefly described below:
Verification Strategy | Performed By | Explanation | Deliverable
Requirements Reviews | Users, Developers, Test Engineers | Requirements Reviews help in baselining the desired requirements to build a system. | Reviewed and approved statement of requirements.
Design Reviews | Designers, Test Engineers | Design Reviews help in validating that the design meets the requirements and builds an effective system. | System Design Document, Hardware Design Document.
Code Walkthroughs | Developers, Subject Specialists, Test Engineers | Code Walkthroughs help in analyzing the coding techniques and whether the code meets the coding standards. | Software ready for initial testing by the developer.
Code Inspections | Developers, Subject Specialists, Test Engineers | Formal analysis of the program source code to find defects, as defined by meeting the system design specification. | Software ready for testing by the testing team.
2.1.1 Reviews

The focus of a Review is on a work product (e.g. Requirements document, Code, etc.). After the work product is developed, the Project Leader calls for a Review. The work product is distributed to the personnel who are involved in the review. The main audience for the review should be the Project Manager, the Project Leader and the Producer of the work product.
Major reviews
include the following:
1. In-Process Reviews
2. Decision-Point or Phase-End Reviews
3. Post-Implementation Reviews

Let us discuss the above-mentioned reviews in brief. Statistics show that reviews uncover over 65% of defects, whereas testing uncovers around 30%. So, it is very important to retain reviews as part of the V&V strategies.
In-Process Review

An In-Process Review looks at the product during a specific time period of the life cycle, such as a single activity. In-Process Reviews are usually limited to a segment of a project, with the goal of identifying defects as work progresses, rather than at the close of a phase or even later, when they are more costly to correct.
Decision-Point or Phase-End Review

This review looks at the product for the main purpose of determining whether to continue with the planned activities. These reviews are held at the end of each phase, in a semiformal or formal way. Defects found are tracked through resolution, usually by way of the existing defect tracking system. The common phase-end reviews are the Software Requirements Review, the Critical Design Review and the Test Readiness Review.

- The Software Requirements Review is aimed at validating and approving the documented software requirements for the purpose of establishing a baseline and identifying analysis packages. The Development Plan, Software Test Plan and Configuration Management Plan are some of the documents reviewed during this phase.
- The Critical Design Review baselines the detailed design specification. Test cases are reviewed and approved.
- The Test Readiness Review is performed when the appropriate application components are nearing completion. This review will determine the readiness of the application for system and acceptance testing.

Post-Implementation Review

These reviews are held after implementation is complete to audit the process based on actual results. Post-Implementation Reviews are also known as Postmortems and are held to assess the success of the overall process after release and to identify any opportunities for process improvement. They can be held up to three to six months after implementation, and are conducted in a formal format.
There are three general classes of reviews:

1. Informal or Peer Reviews
2. Semiformal or Walk-Throughs
3. Formal or Inspections

A Peer Review is generally a one-to-one meeting between the author of a work product and a peer, initiated as a request for input regarding a particular artifact or problem. There is no agenda, and results are not formally reported. These reviews occur on an as-needed basis throughout each phase of a project.
2.1.2 Inspections

A knowledgeable individual called a moderator, who is not a member of the team or the author of the product under review, facilitates inspections. A recorder, who records the defects found and the actions assigned, assists the moderator. The meeting is planned in advance, material is distributed to all the participants, and the participants are expected to attend the meeting well prepared. The issues raised during the meeting are documented and circulated among the members present and the management.
2.1.3 Walkthroughs

The author of the material being reviewed facilitates the walkthrough. The participants are led through the material in one of two formats: either the presentation is made without interruptions and comments are made at the end, or comments are made throughout. In either case, the issues raised are captured and published in a report distributed to the participants. Possible solutions for uncovered defects are not discussed during the review.
2.2 Validation Strategies
The Validation Strategies, the persons / teams involved in the testing, and the deliverable of each phase of testing are briefly described below:
Validation Strategy | Performed By | Explanation | Deliverable
Unit Testing | Developers / Test Engineers | Testing of a single program, module, or unit of code. | Software unit ready for testing with other system components.
Integration Testing | Test Engineers | Testing of integrated programs, modules, or units of code. | Portions of the system ready for testing with other portions of the system.
System Testing | Test Engineers | Testing of the entire computer system. This kind of testing usually includes functional and structural testing. | Tested computer system, based on what was specified to be developed.
Production Environment Testing | Developers, Test Engineers | Testing of the whole computer system before rolling out to UAT. | Stable application.
User Acceptance Testing | Users | Testing of the computer system to make sure it will work in the system regardless of what the system requirements indicate. | Tested and accepted system based on the user needs.
Installation Testing | Test Engineers | Testing of the computer system during installation at the user site. | Successfully installed application.
Beta Testing | Users | Testing of the application after installation at the client site. | Successfully installed and running application.
3.0 Testing Types
There are two types of testing:

- Functional or Black Box Testing
- Structural or White Box Testing
Before the
Project Management decides on the testing activities to be performed, it should
have decided the test type that it is going to follow. If it is the Black Box,
then the test cases should be written addressing the functionality of the
application. If it is the White Box, then the Test Cases should be written for
the internal and functional behavior of the system.
Functional
testing ensures that the requirements are properly satisfied by the application
system. The functions are those tasks that the system is designed to
accomplish.
Structural
testing ensures sufficient testing of the implementation of a function.
3.1 White Box Testing
White Box Testing, also known as glass box testing, is a testing method where the tester is involved in testing the individual software programs using tools, standards, etc.

Using white box testing methods, we can derive test cases that:

1) Guarantee that all independent paths within a module have been exercised at least once,
2) Exercise all logical decisions on their true and false sides,
3) Execute all loops at their boundaries and within their operational bounds, and
4) Exercise internal data structures to ensure their validity.

Advantages of White Box testing:

1) Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed.
2) We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis.
3) Typographical errors are random.
White Box Testing Types
There are various
types of White Box Testing. Here in this framework I will address the most
common and important types.
3.1.1 Basis Path Testing

Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
3.1.2 Flow Graph Notation
The flow graph
depicts logical control flow using a diagrammatic notation. Each structured
construct has a corresponding flow graph symbol.
3.1.3 Cyclomatic Complexity

Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
An independent path is any path through the
program that introduces at least one new set of processing statements or a new
condition.
Computing Cyclomatic Complexity

Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric. Complexity is computed in one of three ways:

1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
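To make the two formulas concrete, here is a small Python sketch of my own (not part of any standard) that computes V(G) for a hypothetical flow graph of a while loop containing an if/else; both formulas agree on the same value.

# A minimal, hypothetical sketch: cyclomatic complexity computed two ways
# for a flow graph of a while loop that contains an if/else.
# Node 1 = while test, node 2 = if test, node 3 = then-branch,
# node 4 = else-branch, node 5 = loop increment, node 6 = exit.
EDGES = [(1, 2), (1, 6), (2, 3), (2, 4), (3, 5), (4, 5), (5, 1)]
NODES = {1, 2, 3, 4, 5, 6}
PREDICATE_NODES = {1, 2}   # nodes with more than one outgoing edge


def v_from_edges_and_nodes(edges, nodes):
    """V(G) = E - N + 2."""
    return len(edges) - len(nodes) + 2


def v_from_predicates(predicate_nodes):
    """V(G) = P + 1."""
    return len(predicate_nodes) + 1


if __name__ == "__main__":
    v1 = v_from_edges_and_nodes(EDGES, NODES)   # 7 - 6 + 2 = 3
    v2 = v_from_predicates(PREDICATE_NODES)     # 2 + 1 = 3
    assert v1 == v2 == 3
    print("Cyclomatic complexity V(G) =", v1)
    # At most 3 independent paths are needed to cover every statement,
    # e.g. 1-6, 1-2-3-5-1-6, and 1-2-4-5-1-6.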
3.1.4 Graph Matrices
The procedure for
deriving the flow graph and even determining a set of basis paths is amenable
to mechanization. To develop a software tool that assists in basis path
testing, a data structure, called a graph
matrix can be quite useful.
A Graph Matrix is a square matrix whose
size is equal to the number of nodes on the flow graph. Each row and column
corresponds to an identified node, and matrix entries correspond to connections
between nodes.
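As an illustration only (my own sketch, reusing the hypothetical flow graph from the cyclomatic-complexity example above), the following builds a graph matrix and derives V(G) from it.

# Hypothetical sketch: a graph matrix for the same 6-node flow graph.
EDGES = [(1, 2), (1, 6), (2, 3), (2, 4), (3, 5), (4, 5), (5, 1)]
NUM_NODES = 6


def build_graph_matrix(edges, num_nodes):
    """Square matrix; entry [i][j] = 1 marks an edge from node i+1 to node j+1."""
    matrix = [[0] * num_nodes for _ in range(num_nodes)]
    for src, dst in edges:
        matrix[src - 1][dst - 1] = 1
    return matrix


if __name__ == "__main__":
    m = build_graph_matrix(EDGES, NUM_NODES)
    for row in m:
        print(row)
    # For each node, count its connections minus 1 (not below zero);
    # the sum of these values plus 1 again gives V(G) = 3.
    v_g = sum(max(sum(row) - 1, 0) for row in m) + 1
    print("V(G) from graph matrix =", v_g)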
3.1.5 Control Structure Testing
Described below
are some of the variations of Control Structure Testing.
3.1.5.1 Condition Testing
Condition
testing is a test case design method that exercises the logical conditions
contained in a program module.
3.1.5.2 Data Flow Testing
The
data flow testing method selects test paths of a program according to the
locations of definitions and uses of variables in the program.
3.1.6 Loop Testing

Loop Testing is a white box testing technique that focuses exclusively on the validity of loop constructs. Four classes of loops can be defined: simple loops, nested loops, concatenated loops, and unstructured loops.
3.1.6.1 Simple Loops
The
following sets of tests can be applied to simple loops, where ‘n’ is the
maximum number of allowable passes through the loop.
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. 'm' passes through the loop, where m < n.
5. n-1, n, and n+1 passes through the loop.
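A hypothetical helper of my own, shown below, turns these five guidelines into a concrete list of pass counts for a simple loop, given the maximum allowable passes n and a typical value m.

# Hypothetical sketch: derive the pass counts to exercise for a simple loop.
def simple_loop_pass_counts(n: int, m: int) -> list:
    assert 2 < m < n, "m should be a typical value strictly between 2 and n"
    # n + 1 passes deliberately exceeds the bound to probe the loop's limit.
    return [0, 1, 2, m, n - 1, n, n + 1]


if __name__ == "__main__":
    # For a loop allowed at most 100 passes, with 50 as a typical value:
    print(simple_loop_pass_counts(n=100, m=50))
    # -> [0, 1, 2, 50, 99, 100, 101]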
3.1.6.2 Nested Loops
If
we extend the test approach for simple loops to nested loops, the number of
possible tests would grow geometrically as the level of nesting increases.
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops to "typical" values.
4. Continue until all loops have been tested.
3.1.6.3 Concatenated Loops
Concatenated
loops can be tested using the approach defined for simple loops, if each of the
loops is independent of the other. However, if two loops are concatenated and
the loop counter for loop 1 is used as the initial value for loop 2, then the
loops are not independent.
3.1.6.4 Unstructured Loops
Whenever
possible, this class of loops should be redesigned to reflect the use of the
structured programming constructs.
3.2 Black Box Testing
Black box testing, also known as behavioral testing, focuses on the functional requirements of the software. All the functional requirements of the program will be used to derive sets of input conditions for testing.

Black Box Testing Types

The following are the most commonly used Black Box Testing types.
3.2.1 Graph Based Testing Methods
Software testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.
3.2.2 Equivalence Partitioning
Equivalence
partitioning is a black box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
EP can be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
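As a small illustration of guideline 1 (my own example, with an assumed requirement of an integer age field accepting 18 to 60), the sketch below lists one representative value per equivalence class.

# Hypothetical sketch: equivalence partitioning for an age field (18..60).
VALID_RANGE = (18, 60)   # assumed requirement, for illustration only

equivalence_classes = {
    "valid: 18 <= age <= 60": 35,   # any representative inside the range
    "invalid: age < 18": 10,
    "invalid: age > 60": 75,
}


def accepts_age(age: int) -> bool:
    """Stand-in for the function under test."""
    return VALID_RANGE[0] <= age <= VALID_RANGE[1]


if __name__ == "__main__":
    for label, representative in equivalence_classes.items():
        print(label, "-> accepted =", accepts_age(representative))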
3.2.3 Boundary Value Analysis
BVA
is a test case design technique that complements equivalence partitioning.
Rather than selecting any element of an equivalence class, BVA leads to the
selection of test cases at the “edges” of the class. Rather than focusing
solely on input conditions, BVA derives test cases from the output domain as
well.
Guidelines
for BVA are similar in many respects to those provided for equivalence
partitioning.
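Continuing the same assumed age-field example (my own sketch), BVA replaces the arbitrary class representatives with values at the edges of the range.

# Hypothetical sketch: boundary values for the age field with valid range 18..60.
def bva_values(low: int, high: int) -> list:
    """Values just below, at, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]


if __name__ == "__main__":
    print(bva_values(18, 60))   # -> [17, 18, 19, 59, 60, 61]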
3.2.4 Comparison Testing
There are situations in which independent versions of software are developed for critical applications, even when only a single version will be used in the delivered computer-based system. These independent versions form the basis of a black box testing technique called Comparison Testing or back-to-back testing.
3.2.5 Orthogonal Array Testing
The orthogonal array testing method is particularly
useful in finding errors associated with region faults – an error category
associated with faulty logic within a software component.
3.3 Scenario Based Testing (SBT)
Dr. Cem Kaner has explained Scenario Based Testing in great detail in "A Pattern for Scenario Testing", which can be found at www.testing.com.

What Scenario Based Testing is, and how and where it is useful, is an interesting question. I shall explain these two points in brief.

Scenario Based Testing is categorized under Black Box Tests and is most helpful when the testing is concentrated on the business logic and functional behavior of the application. Adopting SBT is effective when testing complex applications. Since almost every application is complex, it is the team's call whether or not to implement SBT. I would personally suggest using SBT when the functionality to test spans various features and functions. A good example is testing a banking application. As banking applications require the utmost care while testing, handling various functions in a single scenario yields effective results.

A sample transaction (scenario) can be: a customer logging into the application, checking his balance, transferring an amount to another account, paying his bills, checking his balance again, and logging out.
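The sketch below is my own, hypothetical rendering of that sample transaction as a single scenario-based test; the BankApp class and its method names are stand-ins for the real application under test, not a real API.

# Hypothetical sketch: the banking scenario above as one scenario-based test.
class BankApp:
    """In-memory stand-in for the application under test."""
    def __init__(self, balance):
        self.balance, self.logged_in = balance, False
    def login(self, user, password):
        self.logged_in = True
        return self
    def check_balance(self):
        return self.balance
    def transfer(self, amount, to_account):
        self.balance -= amount
    def pay_bill(self, biller, amount):
        self.balance -= amount
    def logout(self):
        self.logged_in = False


def test_customer_bill_payment_scenario():
    session = BankApp(balance=1000).login("customer1", "secret")
    opening = session.check_balance()
    session.transfer(amount=100, to_account="ACC-2")
    session.pay_bill(biller="ELECTRICITY", amount=50)
    closing = session.check_balance()
    session.logout()
    # One scenario exercises login, balance enquiry, transfer, bill payment
    # and logout together, and verifies the combined business effect.
    assert closing == opening - 150


if __name__ == "__main__":
    test_customer_bill_payment_scenario()
    print("scenario passed")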
In brief, use Scenario Based Tests when:

1. Testing complex applications.
2. Testing business functionality.

When designing scenarios, keep in mind:

1. The scenario should be close to a real-life scenario.
2. Scenarios should be realistic.
3. Scenarios should be traceable to any functionality or combination of functionalities.
4. Scenarios should be supported by sufficient data.
3.4 Exploratory Testing
Exploratory Tests are categorized under Black Box Tests
and are aimed at testing in conditions when sufficient time is not available
for testing or proper documentation is not available.
Exploratory testing is ‘Testing while Exploring’. When
you have no idea of how the application works, exploring the application with
the intent of finding errors can be termed as Exploratory Testing.
Performing Exploratory Testing

How to perform Exploratory Testing is a big question for many people. The following can be used to perform Exploratory Testing:

- Learn the application.
- Learn the business that the application addresses.
- Learn, to the maximum extent possible, the technology on which the application has been designed.
- Learn how to test.
- Plan and design tests as per the learning.
3.5 Structural System Testing Techniques
The following are
the structural system testing techniques.
Technique | Description | Example
Stress | Determine system performance with expected volumes. | Sufficient disk space allocated.
Execution | System achieves the desired level of proficiency. | Transaction turnaround time is adequate.
Recovery | System can be returned to an operational status after a failure. | Evaluate adequacy of backup data.
Operations | System can be executed in a normal operational status. | Determine that the system can run using the documentation.
Compliance | System is developed in accordance with standards and procedures. | Standards are followed.
Security | System is protected in accordance with its importance to the organization. | Access is denied.
3.6 Functional System Testing Techniques
The following are
the functional system testing techniques.
Technique | Description | Example
Requirements | System performs as specified. | Prove system requirements.
Regression | Verifies that anything unchanged still performs correctly. | Unchanged system segments function correctly.
Error Handling | Errors can be prevented or detected, and then corrected. | Error introduced into the test.
Manual Support | The people-computer interaction works. | Manual procedures developed.
Intersystems | Data is correctly passed from system to system. | Intersystem parameters changed.
Control | Controls reduce system risk to an acceptable level. | File reconciliation procedures work.
Parallel | Old system and new system are run and the results compared to detect unplanned differences. | Old and new systems can reconcile.
4.0 Testing Phases
4.2 Unit Testing
The goal of Unit Testing is to uncover defects using formal techniques like Boundary Value Analysis (BVA), Equivalence Partitioning, and Error Guessing. Defects and deviations in date formats, special requirements on input conditions (for example, a text box where only numerals or alphabets should be entered), and selections based on combo boxes, list boxes, option buttons and check boxes would be identified during the Unit Testing phase.
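A small sketch of my own (hypothetical function and values) showing how such a unit test might combine equivalence partitioning and BVA for a numeric-only text box; a real project might use unittest or pytest instead of plain asserts.

# Hypothetical sketch: unit test for a numeric-only quantity field (1..999).
def is_valid_quantity(text: str) -> bool:
    """Unit under test: accepts only integers from 1 to 999."""
    return text.isdigit() and 1 <= int(text) <= 999


def test_quantity_field():
    # Equivalence classes: valid digits, non-numeric input.
    assert is_valid_quantity("500")
    assert not is_valid_quantity("abc")
    # Boundary values around 1 and 999.
    assert not is_valid_quantity("0")
    assert is_valid_quantity("1")
    assert is_valid_quantity("999")
    assert not is_valid_quantity("1000")


if __name__ == "__main__":
    test_quantity_field()
    print("unit tests passed")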
4.3 Integration Testing
Integration
testing is a systematic technique for constructing the program structure while
at the same time conducting tests to uncover errors associated with
interfacing. The objective is to take unit tested components and build a program
structure that has been dictated by design.
Usually, the following methods of Integration testing are followed:

1. Top-down Integration approach.
2. Bottom-up Integration approach.
4.3.1 Top-down Integration
Top-down
integration testing is an incremental approach to construction of program
structure. Modules are integrated by moving downward through the control
hierarchy, beginning with the main control module. Modules subordinate to the
main control module are incorporated into the structure in either a depth-first
or breadth-first manner.
The integration process is performed in a series of five steps:

1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
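The following is my own hypothetical sketch of step 1: the real main control module is exercised while its directly subordinate components are replaced by stubs that return canned answers.

# Hypothetical sketch: top-down integration with stubs for subordinate modules.
class PricingStub:
    """Stub standing in for the not-yet-integrated pricing component."""
    def price_for(self, item_id):
        return 10.0   # canned value, enough to drive the main module


class InventoryStub:
    """Stub standing in for the not-yet-integrated inventory component."""
    def in_stock(self, item_id):
        return True


class OrderController:
    """The real main control module under test."""
    def __init__(self, pricing, inventory):
        self.pricing, self.inventory = pricing, inventory

    def quote(self, item_id, quantity):
        if not self.inventory.in_stock(item_id):
            return None
        return self.pricing.price_for(item_id) * quantity


if __name__ == "__main__":
    controller = OrderController(PricingStub(), InventoryStub())
    assert controller.quote("A-1", 3) == 30.0
    # Later, each stub is replaced one at a time by the real component and
    # the same tests are re-run (steps 2 to 5 above).
    print("top-down integration test with stubs passed")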
4.3.2 Bottom-up Integration
Bottom-up integration testing begins construction and testing with atomic modules (i.e. components at the lowest levels in the program structure). Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:

1. Low-level components are combined into clusters that perform a specific software sub-function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
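By way of contrast with the top-down sketch, here is a small hypothetical example of my own for the bottom-up case: a low-level cluster is exercised by a throwaway driver before any higher-level module exists.

# Hypothetical sketch: bottom-up integration of a small cluster via a driver.
def tax(amount):            # low-level component 1
    return amount * 0.1


def rounded_total(amount):  # low-level component 2, uses component 1
    return round(amount + tax(amount), 2)


def driver():
    """Throwaway driver coordinating test inputs and expected outputs."""
    cases = [(100.0, 110.0), (19.99, 21.99)]
    for amount, expected in cases:
        assert rounded_total(amount) == expected


if __name__ == "__main__":
    driver()
    print("bottom-up cluster tests passed")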
4.4 Smoke Testing
Smoke testing might be characterized as a "rolling integration strategy".
Smoke testing is
an integration testing approach that is commonly used when “shrink-wrapped”
software products are being developed. It is designed as a pacing mechanism for
time-critical projects, allowing the software team to assess its project on a
frequent basis.
The smoke test
should exercise the entire system from end to end. Smoke testing provides
benefits such as:
1) Integration risk is minimized.
2) The quality of the end-product is improved.
3) Error diagnosis and correction are simplified.
4) Progress is easier to assess.
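As an illustration only (my own sketch; the URL and endpoints are made up), a daily smoke test for a web application might just check that the critical parts respond at all, rejecting the build immediately if any of them do not.

# Hypothetical end-to-end smoke test sketch.
import urllib.request

BASE_URL = "http://localhost:8000"          # assumed local deployment
CRITICAL_ENDPOINTS = ["/login", "/accounts", "/reports"]


def smoke_test() -> bool:
    for path in CRITICAL_ENDPOINTS:
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=5) as resp:
                if resp.status >= 500:
                    print("FAIL:", path, resp.status)
                    return False
        except OSError as exc:
            print("FAIL:", path, exc)
            return False
    return True


if __name__ == "__main__":
    print("smoke test passed" if smoke_test() else "build rejected")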
4.5 System Testing
System testing is
a series of different tests whose primary purpose is to fully exercise the
computer based system. Although each test has a different purpose, all work to
verify that system elements have been properly integrated and perform allocated
functions.
The following
tests can be categorized under System testing:
- Recovery Testing
- Security Testing
- Stress Testing
- Performance Testing
4.5.1. Recovery Testing
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, reinitialization, checkpointing mechanisms, data recovery and restart are evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
4.5.2. Security Testing
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, password cracking, unauthorized entry into the software, and network security are all taken into consideration.
4.5.3. Stress Testing
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. The following types of tests may be conducted during stress testing:

- Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
- Input data rates may be increased by an order of magnitude to determine how input functions will respond.
- Test cases that require maximum memory or other resources.
- Test cases that may cause excessive hunting for disk-resident data.
- Test cases that may cause thrashing in a virtual operating system.
4.5.4. Performance Testing
Performance tests
are coupled with stress testing and usually require both hardware and software
instrumentation.
4.5.5. Regression Testing
Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.

Regression may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools.

The regression test suite contains three different classes of test cases:

- A representative sample of tests that will exercise all software functions.
- Additional tests that focus on software functions that are likely to be affected by the change.
- Tests that focus on the software components that have been changed.
4.6 Alpha Testing
Alpha testing is conducted at the developer's site, in a controlled environment, by the end-user of the software.
4.7 User Acceptance Testing
User Acceptance
testing occurs just before the software is released to the customer. The
end-users along with the developers perform the User Acceptance Testing with a
certain set of test cases and typical scenarios.
4.8 Beta Testing
The Beta testing
is conducted at one or more customer sites by the end-user of the software. The
beta test is a live application of the software in an environment that cannot
be controlled by the developer.
5.0 Metrics
Metrics are the
most important responsibility of the Test Team. Metrics allow for deeper
understanding of the performance of the application and its behavior. The fine
tuning of the application can be enhanced only with metrics. In a typical QA
process, there are many metrics which provide information.
The following can be regarded as the fundamental metric:
IEEE Std 982.2-1988 defines a Functional or Test Coverage Metric. It can be used to measure test coverage prior to software delivery. It provides a measure of the percentage of the software tested at any point during testing.
It is calculated as follows:

Function Test Coverage = FE / FT

where
FE is the number of test requirements that are covered by test cases that were executed against the software, and
FT is the total number of test requirements.
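A trivial sketch of my own (the numbers are made up) applying the FE/FT formula, expressed as a percentage:

# Hypothetical sketch: Function Test Coverage = FE / FT, as a percentage.
def function_test_coverage(executed_requirements: int, total_requirements: int) -> float:
    if total_requirements == 0:
        raise ValueError("FT must be greater than zero")
    return 100.0 * executed_requirements / total_requirements


if __name__ == "__main__":
    # e.g. 45 of 60 test requirements covered by executed test cases
    print(round(function_test_coverage(45, 60), 1), "% functional coverage")  # 75.0 %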
Software Release Metrics

The software is ready for release when:

1. It has been tested with a test suite that provides 100% functional coverage, 80% branch coverage, and 100% procedure coverage.
2. There are no level 1 or 2 severity defects.
3. The defect finding rate is less than 40 new defects per 1000 hours of testing.
4. The software reaches 1000 hours of operation.
5. Stress testing, configuration testing, installation testing, naïve user testing, usability testing, and sanity testing have been completed.
IEEE Software Maturity Metric

IEEE Std 982.2-1988 defines a Software Maturity Index that can be used to determine the readiness for release of a software system. This index is especially useful for assessing release readiness when changes, additions, or deletions are made to existing software systems. It also provides an historical index of the impact of changes. It is calculated as follows:
SMI = (Mt - (Fa + Fc + Fd)) / Mt

where
SMI is the Software Maturity Index value,
Mt is the number of software functions/modules in the current release,
Fc is the number of functions/modules that contain changes from the previous release,
Fa is the number of functions/modules that are additions to the previous release, and
Fd is the number of functions/modules that were deleted from the previous release.
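A short sketch of my own (with made-up module counts) applying the SMI formula; the index approaches 1.0 as the release stabilises.

# Hypothetical sketch: SMI = (Mt - (Fa + Fc + Fd)) / Mt.
def software_maturity_index(mt: int, fa: int, fc: int, fd: int) -> float:
    return (mt - (fa + fc + fd)) / mt


if __name__ == "__main__":
    # e.g. 120 modules in the current release: 5 added, 10 changed, 2 deleted
    print(round(software_maturity_index(mt=120, fa=5, fc=10, fd=2), 3))  # 0.858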
Reliability Metrics

Perry offers the following equation for calculating reliability:

Reliability = 1 - (number of errors (actual or predicted) / total number of lines of executable code)

This reliability value is calculated for the number of errors during a specified time interval.

Three other metrics can be calculated during extended testing or after the system is in production. They are:

MTTFF (Mean Time To First Failure)
MTTFF = the number of time intervals the system is operable until its first failure.

MTBF (Mean Time Between Failures)
MTBF = (sum of the time intervals the system is operable) / (number of failures for the time period).

MTTR (Mean Time To Repair)
MTTR = (sum of the time intervals required to repair the system) / (number of repairs during the time period).
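The small sketch below is my own, with made-up numbers, applying the reliability, MTBF and MTTR formulas above.

# Hypothetical sketch: reliability, MTBF and MTTR with sample values.
def reliability(errors: int, executable_loc: int) -> float:
    """Perry: Reliability = 1 - errors / total executable lines of code."""
    return 1 - errors / executable_loc


def mtbf(operable_hours, failures: int) -> float:
    return sum(operable_hours) / failures


def mttr(repair_hours) -> float:
    return sum(repair_hours) / len(repair_hours)


if __name__ == "__main__":
    print(reliability(errors=30, executable_loc=10_000))            # 0.997
    print(mtbf(operable_hours=[160.0, 150.0, 170.0], failures=3))   # 160.0
    print(mttr(repair_hours=[2.0, 4.0, 3.0]))                       # 3.0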
6.0 Test Models
There are various models of Software Testing. Here in this framework I will explain the three most commonly used models:

1. The 'V' Model.
2. The 'W' Model.
3. The Butterfly Model.
6.1 The ‘V’ Model
The following diagram depicts the ‘V’ Model
The diagram is self-explanatory. For an
easy understanding, look at the following table:
SDLC Phase | Test Phase
1. Requirements | 1. Build Test Strategy. 2. Plan for Testing. 3. Acceptance Test Scenario Identification.
2. Specification | 1. System Test Case Generation.
3. Architecture | 1. Integration Test Case Generation.
4. Detailed Design | 1. Unit Test Case Generation.
6.2 The ‘W’ Model
The
following diagram depicts the ‘W’ model:
The ‘W’ model depicts that the Testing
starts from day one of the initiation of the project and continues till the
end. The following table will illustrate the phases of activities that happen
in the ‘W’ model:
SDLC Phase | The first 'V' | The second 'V'
1. Requirements | 1. Requirements Review | 1. Build Test Strategy. 2. Plan for Testing. 3. Acceptance (Beta) Test Scenario Identification.
2. Specification | 2. Specification Review | 1. System Test Case Generation.
3. Architecture | 3. Architecture Review | 1. Integration Test Case Generation.
4. Detailed Design | 4. Detailed Design Review | 1. Unit Test Case Generation.
5. Code | 5. Code Walkthrough | 1. Execute Unit Tests.
 | | 2. Execute Integration Tests.
 | | 3. Regression Round 1.
 | | 4. Execute System Tests.
 | | 5. Regression Round 2.
 | | 6. Performance Tests.
 | | 7. Regression Round 3.
 | | 8. Performance/Beta Tests.
In the second ‘V’, I have mentioned Acceptance (Beta) Test Scenario Identification. This is because the customer might want to design the Acceptance Tests. In that case, as the development team executes the Beta Tests at the client's site, the same team can identify the scenarios.

Regression Rounds are performed at regular intervals to re-test the defects that have been raised and fixed.
6.3 The Butterfly Model

For testing software products, it is preferable to follow the Butterfly Model. The following picture depicts the test methodology.
Fig: Butterfly Model
In the Butterfly model of Test Development,
the left wing of the butterfly depicts the Test
Analysis. The right wing depicts the Test
Design, and finally the body of the butterfly depicts the Test Execution. How this exactly
happens is described below.
Test Analysis

Analysis is the key factor that drives any planning. During the analysis, the analyst does the following:
- Verify that each requirement is tagged in a manner that allows correlation of the tests for that requirement to the requirement itself. (Establish Test Traceability.)
- Verify traceability of the software requirements to system requirements.
- Inspect for contradictory requirements.
- Inspect for ambiguous requirements.
- Inspect for missing requirements.
- Check to make sure that each requirement, as well as the specification as a whole, is understandable.
- Identify one or more measurement, demonstration, or analysis methods that may be used to verify the requirement's implementation (during formal testing).
- Create a test "sketch" that includes the tentative approach and indicates the test's objectives.
During Test Analysis, the required documents are carefully studied by the test personnel, and the final Analysis Report is documented.

The following documents would usually be referred to:

1. Software Requirements Specification.
2. Functional Specification.
3. Architecture Document.
4. Use Case Documents.

The Analysis Report would consist of the understanding of the application, the functional flow of the application, the number of modules involved, and the effective test time.
Test Design
The
right wing of the butterfly represents the act of designing and implementing
the test cases needed to verify the design artifact as replicated in the
implementation. Like test analysis, it
is a relatively large piece of work.
Unlike test analysis, however, the focus of test design is not to
assimilate information created by others, but rather to implement procedures,
techniques, and data sets that achieve the test’s objective(s).
The outputs of
the test analysis phase are the foundation for test design. Each requirement or design construct has had
at least one technique (a measurement, demonstration, or analysis) identified
during test analysis that will validate or verify that requirement. The tester must now implement the intended
technique.
Software test
design, as a discipline, is an exercise in the prevention, detection, and
elimination of bugs in software.
Preventing bugs is the primary goal of software testing. Diligent and competent test design prevents bugs from ever reaching the implementation stage. Test design, with its attendant test analysis foundation, is therefore the premier weapon in the arsenal of developers and testers for limiting the cost associated with finding and fixing bugs.
During Test Design, based on the Analysis Report, the test personnel would develop the following:

- Test Plan.
- Test Approach.
- Test Case documents.
- Performance Test Parameters.
- Performance Test Plan.
Test Execution
Any test case should adhere to the following principles:

- Accurate: tests what the description says it will test.
- Economical: has only the steps needed for its purpose.
- Repeatable: tests should be consistent, no matter who executes them or when.
- Appropriate: should be apt for the situation.
- Traceable: the functionality covered by the test case should be easily found.
During the Test Execution phase, keeping to the project and test schedules, the designed test cases would be executed. The following documents will be handled during the test execution phase:

1. Test Execution Reports.
2. Daily/Weekly/Monthly Defect Reports.
3. Person-wise defect reports.
After the Test Execution phase, the
following documents would be signed off.
1. Project Closure Document.
2. Reliability Analysis Report.
3. Stability Analysis Report.
4. Performance Analysis Report.
5. Project Metrics.
7.0 Defect Tracking Process
The Defect Tracking process should answer the following questions:

1. When was the defect found?
2. Who raised the defect?
3. Was the defect reported properly?
4. Was the defect assigned to the appropriate developer?
5. When was the defect fixed?
6. Was the defect re-tested?
7. Is the defect closed?
The defect
tracking process has to be handled carefully and managed efficiently.
The following
figure illustrates the defect tracking process:
Defect Classification
This section defines a defect Severity Scale framework for determining defect criticality and the associated defect Priority Levels to be assigned to errors found in software.

The defects can be classified as follows:
Classification | Description
Critical | There is a functionality block. The application is not able to proceed any further.
Major | The application is not working as desired. There are variations in the functionality.
Minor | There is no failure reported due to the defect, but it certainly needs to be rectified.
Cosmetic | Defects in the User Interface or Navigation.
Suggestion | A feature which can be added for betterment.
Priority Level of the Defect

The priority level describes the time frame for resolution of the defect. The priority levels are classified as follows:
Classification | Description
Immediate | Resolve the defect with immediate effect.
At the Earliest | Resolve the defect at the earliest, on priority at the second level.
 | Resolve the defect.
Later | Could be resolved at the later stages.
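To show how the severity and priority classifications above might be captured in practice, here is a hypothetical sketch of my own of a simple defect record, of the kind a lightweight defect tracking tool could store; the field names and sample values are assumptions, not part of the framework.

# Hypothetical sketch: a defect record using the classifications above.
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class Severity(Enum):
    CRITICAL = "Critical"
    MAJOR = "Major"
    MINOR = "Minor"
    COSMETIC = "Cosmetic"
    SUGGESTION = "Suggestion"


class Priority(Enum):
    IMMEDIATE = "Immediate"
    AT_THE_EARLIEST = "At the Earliest"
    LATER = "Later"


@dataclass
class Defect:
    defect_id: str
    summary: str
    raised_by: str
    found_on: date
    severity: Severity
    priority: Priority
    assigned_to: str = ""
    fixed_on: Optional[date] = None
    retested: bool = False
    closed: bool = False


if __name__ == "__main__":
    d = Defect("DEF-101", "Transfer fails for amounts over 10,000",
               raised_by="tester1", found_on=date(2003, 12, 1),
               severity=Severity.CRITICAL, priority=Priority.IMMEDIATE)
    print(d)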
8.0 Test Process for a Project
In this section, I will explain how to go about planning your testing activities effectively and efficiently. The process is explained in a tabular format, giving the phase of testing, the activity, and the person responsible.

For this, I assume that the project has been identified and that the testing team consists of five personnel: a Test Manager, a Test Lead, a Senior Test Engineer and two Test Engineers.
SDLC Phase | Testing Phase/Activity | Personnel
1. Requirements | 1. Study the requirements for testability. 2. Design the Test Strategy. 3. Prepare the Test Plan. 4. Identify scenarios for Acceptance/Beta Tests. | Test Manager / Test Lead
2. Specification | 1. Identify System Test Cases / Scenarios. 2. Identify Performance Tests. | Test Lead, Senior Test Engineer, and Test Engineers
3. Architecture | 1. Identify Integration Test Cases / Scenarios. 2. Identify Performance Tests. | Test Lead, Senior Test Engineer, and Test Engineers
4. Detailed Design | 1. Generate Unit Test Cases. | Test Engineers
9.0 Deliverables

The deliverables from the Test team would include the following:

1. Test Strategy.
2. Test Plan.
3. Test Case Documents.
4. Defect Reports.
5. Status Reports (Daily/Weekly/Monthly).
6. Test Scripts (if any).
7. Metric Reports.
8. Product Sign-off Document.