‘Breaking a massive, complex assignment into small, simple objectives’ is one of the deep-seated rules of success, and it prevails in the software industry as well. The nature, complexity and size of software have grown exponentially, making development in one go a difficult process. Hence, today almost all software is developed by integrating small, structured, independent programs called modules. Such modular software systems help development teams proficiently handle large and complex development processes.
Since modular software is embedded everywhere, it must be designed to work reliably. As defined in the literature [13], software reliability is the probability that the software does not fail within a specified period of time under given circumstances. Therefore, with the increased complexity of product design, shortened development cycles and the highly destructive consequences of software failures, a major responsibility lies in the area of software testing. During the development of modular software, faults can creep into modules due to human error. These faults manifest themselves as failures when the modules are tested independently during the module-testing phase of the software development life cycle. To assess modular software quantitatively and to estimate the total number of faults removed from each module, mathematical tools known as software reliability growth models (SRGMs) are used. Various significant metrics, such as the initial number of faults, the failure intensity, the reliability within a specific period of time and the number of faults remaining, can then be readily determined.
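As a concrete illustration of how such metrics follow from an SRGM, the sketch below uses the classical Goel-Okumoto mean value function (revisited later for the models in this paper) with hypothetical parameter values chosen only for demonstration:

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto mean value function: expected number of faults
    detected and removed by testing time t, given an initial fault
    content a and fault detection rate b."""
    return a * (1.0 - math.exp(-b * t))

def remaining_faults(t, a, b):
    """Expected number of faults still latent at time t."""
    return a - go_mean_value(t, a, b)

def reliability(t, x, a, b):
    """NHPP reliability: probability of no failure in (t, t + x]
    given that testing has run up to time t."""
    return math.exp(-(go_mean_value(t + x, a, b) - go_mean_value(t, a, b)))

# Illustrative (hypothetical) parameters: a = 100 initial faults, b = 0.05.
a, b = 100.0, 0.05
print(round(go_mean_value(20, a, b), 2))    # faults removed by t = 20
print(round(remaining_faults(20, a, b), 2)) # faults still remaining
print(round(reliability(20, 5, a, b), 4))   # reliability over (20, 25]
```

With these parameters, roughly 63 of the 100 initial faults are expected to be removed by t = 20, leaving about 37 remaining.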
During the last three decades, a large number of SRGMs have been proposed in the literature [3, 13, 16, 19, 20, 22]. Early research related SRGMs to testing time [3, 17]. However, incorporating testing resources leads to more accurate SRGMs [4, 5, 9, 26], the reason being that the detection of faults is more intimately linked to the amount of resources expended on testing [18, 24], which includes:
(a) Manpower, which takes into account failure-identification personnel and programmers (failure-correction personnel); and
(b) Computer time.
The authors of [5] showed that a logistic testing-resource function can be directly incorporated into both exponential-type and S-shaped NHPP models under both ideal and imperfect debugging situations. Later, Kapur et al. [9] studied the testing-resource-dependent learning process and classified faults into two types on the basis of the amount of testing resources needed to remove them. In this paper too, we take the fault removal rate as a function of the testing resource.
To accomplish the goal of developing reliable modular software, consideration of the fault detection process is vital, as it helps in measuring how effectively the test techniques and test cases uncover bugs that lie dormant in the software. Many SRGMs in the literature assume that, during the fault detection process, each failure caused by a fault occurs independently and at a random time according to the same distribution [3, 20]. In practice, however, as testing progresses the testing team gains experience, and with the employment of new tools and techniques the fault detection rate (FDR) changes notably. Other factors that can affect the fault detection rate are the running environment, the testing strategy and the defect density. The point in time at which a change in the fault detection rate is observed is termed the ‘change point’. Accounting for the effect of the change point on software reliability is one of the imperative matters in the development of accurate SRGMs. Work on change points started with Zhao [28], who introduced change-point analysis in hardware and software reliability. Shyur [23] and Wang and Wang [25] also made contributions in this area. In addition, some research incorporated change-point analysis into models because testing-resource consumption may not be even over time [4, 10, 15].
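To make the change-point idea concrete, the following minimal sketch (our own illustration with hypothetical parameters, not the specific model developed later in the paper) shows an exponential-type mean value function whose fault detection rate switches at a change point in the cumulative testing resource:

```python
import math

def mvf_change_point(W, a, b1, b2, W_tau):
    """Expected faults detected after spending cumulative testing
    resource W, when the fault detection rate changes from b1 to b2
    at the change point W_tau (illustrative parameterisation)."""
    if W <= W_tau:
        return a * (1.0 - math.exp(-b1 * W))
    # After the change point the exponent accumulates at the new rate b2;
    # carrying over the term b1 * W_tau keeps the curve continuous at W_tau.
    return a * (1.0 - math.exp(-b1 * W_tau - b2 * (W - W_tau)))
```

Choosing b2 > b1 models a testing team that detects faults faster after gaining experience, while b2 < b1 models a slowdown; in either case the expected cumulative fault count remains continuous at the change point.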
Another main concern in the software industry is the software development cost. Overspending on development can result in a financial crisis for the company. On the other hand, spending too little can result in a low-quality software product, because the development firm will then have to set a low reliability aspiration level for each module. Thus arises the need to optimize the total development cost of modular software. During module testing, the modules are tested independently, but the testing activities of the different modules must be completed within a limited time, and these activities normally consume about 40-50% of the total limited software development resources. This persuades management to allocate testing resources among the software modules optimally so that the desired reliability of the software can be achieved. Such optimization problems are called “testing-resource allocation problems”. In this paper we formulate such a resource allocation problem; the resulting non-linear optimization problem is solved by an algorithm based on the Karush-Kuhn-Tucker (KKT) optimality conditions.
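A minimal sketch of how such an allocation can be computed, using hypothetical exponential-type modules rather than the exact formulation of this paper: maximise the total expected faults detected, sum of a_i (1 - exp(-b_i W_i)), subject to the budget sum of W_i <= W. The KKT stationarity condition a_i b_i exp(-b_i W_i) = lam for active modules gives each W_i in closed form, and the multiplier lam is found by bisection:

```python
import math

def allocate(modules, W_total):
    """Split a testing-resource budget W_total among modules so as to
    maximise total expected faults detected.  `modules` is a list of
    (a_i, b_i) pairs; all parameter values here are illustrative."""
    def spent(lam):
        # W_i(lam) from the KKT stationarity condition, clipped at zero
        # for modules whose marginal gain never reaches lam.
        return [max(0.0, math.log(a * b / lam) / b) for a, b in modules]
    # Total spend decreases as lam grows; bisect until the budget is met.
    lo, hi = 1e-12, max(a * b for a, b in modules)
    for _ in range(200):
        mid = (lo + hi) / 2
        if sum(spent(mid)) > W_total:
            lo = mid
        else:
            hi = mid
    return spent((lo + hi) / 2)
```

At the optimum, every module receiving a positive share has the same marginal fault-detection rate a_i b_i exp(-b_i W_i); modules whose marginal rate is below that level from the start receive nothing, which is exactly the complementary-slackness part of the KKT conditions.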
The work is organized as follows: Section 2 details the literature review on the testing-resource allocation problem. Section 3 describes the Goel-Okumoto software reliability growth model with change point and testing resource, which is required for modeling the failure mechanism of the modules. Section 4 elaborates on the formulation of our testing-resource allocation problem; in this section we also discuss an optimization algorithm based on the KKT optimality conditions. Section 5 illustrates the solution of the optimization problem through a numerical example. Finally, conclusions are drawn in Section 6.