Volume 2, Issue 3, September 2017, Pages: 100-106
Effective Approach for Code Coverage Using Monte Carlo Techniques in Test Case Selection
Varun Jasuja, Computer Science and Engineering, Guru Nanak Institute of Technology, Ambala, India
Rajesh Kumar Singh, Computer Science Application, SUS Institute of Computer, Tangori, India
Received: Feb. 20, 2017;       Accepted: Mar. 13, 2017;       Published: Mar. 29, 2017
DOI: 10.11648/j.dmath.20170203.17
Abstract
Source code analysis refers to the in-depth examination of source code and/or compiled code in order to help find defects relating to security, readability, understandability, and related parameters. Ideally, such techniques find flaws automatically and with a high degree of confidence that what is found is indeed a flaw. However, this is beyond the state of the art for many kinds of application security defects. Such tools therefore frequently serve as aids that help an analyst zero in on the security-relevant portions of the code so that flaws can be found more efficiently, rather than as tools that simply find defects automatically. Code coverage is a measure that describes the extent to which the source code of a program is exercised by a particular test suite. A program with high code coverage has been tested more thoroughly and has a lower chance of containing software bugs than a program with low code coverage. Many different metrics can be used to calculate code coverage; among the most basic are the percentage of program subroutines and the percentage of program statements executed during a run of the test suite. This research work focuses on the quality of source code using code coverage and analysis techniques. In the proposed work, an effective model-based approach is developed and implemented to improve the performance of code in terms of overall code coverage time, code complexity, and related metrics.
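To make the statement-coverage metric and the Monte Carlo selection idea concrete, the following is a minimal Python sketch, not the authors' implementation: it scores randomly sampled test subsets by the percentage of statements they execute and keeps the best subset found over a fixed number of trials. The coverage_map data, the budget and trials parameters, and the helper names are illustrative assumptions; real coverage data would come from an instrumentation tool.

import random

# Hypothetical coverage map: test case name -> set of statement ids it executes.
# In practice this would be produced by a coverage/instrumentation tool.
coverage_map = {
    "t1": {1, 2, 3, 4},
    "t2": {3, 4, 5},
    "t3": {6, 7},
    "t4": {1, 6, 8},
    "t5": {2, 9, 10},
}
total_statements = set().union(*coverage_map.values())

def statement_coverage(selected_tests):
    """Percentage of program statements executed by the selected test cases."""
    if not selected_tests:
        return 0.0
    covered = set().union(*(coverage_map[t] for t in selected_tests))
    return 100.0 * len(covered) / len(total_statements)

def monte_carlo_selection(budget, trials=1000, seed=42):
    """Sample random test subsets of size `budget` and keep the one with the best coverage."""
    rng = random.Random(seed)
    tests = list(coverage_map)
    best_subset, best_cov = None, -1.0
    for _ in range(trials):
        subset = rng.sample(tests, budget)
        cov = statement_coverage(subset)
        if cov > best_cov:
            best_subset, best_cov = subset, cov
    return best_subset, best_cov

if __name__ == "__main__":
    subset, cov = monte_carlo_selection(budget=3)
    print(f"Selected tests: {sorted(subset)}  statement coverage: {cov:.1f}%")

Under these assumptions, the random search trades optimality for simplicity: unlike a greedy or exhaustive selection, each trial costs only one coverage evaluation, which is the usual motivation for Monte Carlo test case selection.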
Keywords
Code Coverage, Software Testing, Automated Test Case Generation
To cite this article
Varun Jasuja, Rajesh Kumar Singh, Effective Approach for Code Coverage Using Monte Carlo Techniques in Test Case Selection, International Journal of Discrete Mathematics, Vol. 2, No. 3, 2017, pp. 100-106. doi: 10.11648/j.dmath.20170203.17
Copyright
Copyright © 2017 the authors. The authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.