1Miss. V. Mohanakumari, 2Ms. Kavitha. S., M.C.A., M.Phil., (Ph.D.)
1M.Phil. Research Scholar, Department of Computer Science, Auxilium College of Arts & Science for Women (Autonomous), Gandhi Nagar, Vellore, Tamilnadu, India.
2Assistant Professor, Department of Computer Science, Auxilium College of Arts & Science for Women (Autonomous), Gandhi Nagar, Vellore.
---------------------------------------------------------------------***---------------------------------------------------------------------
International Research Journal of Engineering and Technology (IRJET) | e-ISSN: 2395-0056 | p-ISSN: 2395-0072
Volume: 05 Issue: 08 | August 2018 | www.irjet.net
© 2018, IRJET.NET - All Rights Reserved

Abstract: The Python programming language is typically not seen as a language that can be formally verified. This research attempts to bridge that gap by introducing novel techniques to annotate Python programs with type specifications and contracts, and to translate them into statically verifiable components. The proposed method introduces a tool, Python Correct, which uses these techniques to perform extended static checking (ESC) on Python programs, as well as to generate executable test cases through symbolic execution. These analyses serve to improve code quality and development productivity. One of the problems that occurs when writing contracts is that whenever the program cannot be verified by an automatic verifier, the cause could lie in one of three different reasons. We aim to show that Python programs can benefit from existing static verification tools and techniques if they are simply made available to Python developers. The goal of program verification is to prove mathematically that a given program fulfils a given formal specification: for every input that fulfils the precondition, the postcondition and all loop invariants have to hold. The experimental results show the accuracy of error detection and validate the runtime process in the framework, reducing the time needed for error detection.

Introduction

Software testing is well established as an essential part of the software development process and as a quality assurance technique widely used in industry. Furthermore, literature suggests that 30 to 50% of a project's effort is consumed by testing. Developer testing (a developer test is "a codified unit or integration test written by developers") in particular has risen to be an efficient method to detect defects early in the development process. In the form of unit testing, its popularity has been increasing as more programming languages are supported by unit testing frameworks (e.g., JUnit, NUnit, etc.).

The main goal of testing is the detection of defects. Developer testing adds to this the ability to point out where the defect occurs. The extent to which detection of the cause of defects is possible depends on the quality of the test suite. In addition, Beck explains how developer testing can be used to increase confidence in applying changes to the code without causing parts of the system to break. This extends the benefits of testing to include faster implementation of new features or refactoring. Consequently, it is reasonable to expect that there is a relation between the quality of the test code of a software system and the development team's performance in fixing defects and implementing new features.

Python is a high-level, interpreted, interactive and object-oriented scripting language. Python is designed to be highly readable. It uses English
keywords frequently, whereas other languages use punctuation, and it has fewer syntactical constructions than other languages.

• Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to compile your program before executing it. This is similar to PERL and PHP.

• Python is Interactive − You can actually sit at a Python prompt and interact with the interpreter directly to write your programs.

• Python is Object-Oriented − Python supports an object-oriented style or technique of programming that encapsulates code within objects.

• Python is a Beginner's Language − Python is a great language for beginner-level programmers and supports the development of a wide range of applications, from simple text processing to WWW browsers to games.

Society is coming to use and rely more and more on computer systems, such as safety-critical systems, business systems, home appliance systems, entertainment systems, and mobile systems. Our dependence on such computer systems raises people's awareness of the social importance of software quality in these systems. The quality of software is eventually determined when a product is released. A product manager usually has to make the decision on when to release a product using collected software quality data; however, because software is substantially complex, methods for measuring software quality are sometimes inadequate. Consequently, a product manager may be pressured to make a release decision that meets the product release schedule with uncertain software quality. This thesis proposes a statistical software testing technique to provide a rationale for software quality to help make this management decision.

Software quality is measured with verification and validation. Verification is a method used to ensure that a product is built correctly. Validation is a method to ensure that the correct product is built. While both methods are important, this thesis will focus on verification. Well-known software verification techniques are testing, review, program proving, and model checking. This thesis will focus on testing. The number and importance of the test cases performed, and the absence of errors while executing those test cases, give a measure of confidence in software quality.

The problem of testing is that the number of input value combinations is exponential in the total number of input variable bits, which is large; thus, testing all combinations of input values is usually impractical. Hence, many testing approaches that determine the important input value combinations have been proposed. Black-box testing techniques in particular deliver benefits for testing large and complex software, since they only use specifications that abstract away the details of the source code.

The first issue in black-box testing is how to identify use cases to test. A use case is a detailed description of a sequence of transactions between the system and actors that exist outside the system. Black-box testing lists testing input patterns that can be derived from specifications, such as requirement specifications or module interface specifications. In industry, testers usually identify use cases by predicting possible usages of the product; however, since testers perform the usage prediction individually and ad hoc, the prediction depends on the testers' individual skills and preferences, which can cause uncertain software quality.
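The exponential blow-up described above can be made concrete with a short sketch (the function name below is ours, not from the paper): the number of exhaustive test inputs doubles with every added input bit.

```python
from itertools import product

def exhaustive_inputs(n_bits):
    """All combinations of n boolean inputs: 2**n test cases."""
    return list(product([0, 1], repeat=n_bits))

# 4 input bits are still testable exhaustively...
print(len(exhaustive_inputs(4)))    # 16
# ...but the count grows exponentially with the number of bits.
print(len(exhaustive_inputs(20)))   # 1048576
```

This is why black-box approaches sample the input space guided by specifications instead of enumerating it.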
software quality. Several researchers have It can be generated prioritized test cases from a list
suggested methods for improving this use cases of pairs (event, class of system’s executed
identification. events)with an occurrence probability. The event is
an invocation of an access program of a component.
Although she introduced classes of executed events,
Jacobson introduced a systematic process to she left room to discuss how to identify those
identify use cases. His process incorporated UML classes. Musa proposed the operational profile,
use case diagrams and produced the use cases. which profiles each user operation with its
However, since the description of use cases is occurrence probability and importance and
written in a natural language, which has ambiguity, prioritizes user operations to test. Although this
the exhaustive listing of all use cases is difficult. approach is reportedly beneficial the identification
Some researchers proposed identification methods of the set of user operations relies on an informal
using formal specifications such as Z[2], VDM refinement process, which makes completing an
[1],and state machine . By utilizing its formalism, exhaustive list of user operations a complex task.
they could generate an exhaustive list of use cases
that are specified. However, since most Testing process:
specification languages are difficult to write
because of their complicated model construction,
they are rarely adopted in industry.
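A minimal sketch of operational-profile-style prioritisation, in the spirit of Musa's approach described above (the events and probabilities are invented for illustration; the real method also weighs operation importance):

```python
# (event, occurrence probability) pairs, as in an operational profile.
profile = [
    ("search", 0.35),
    ("login", 0.40),
    ("checkout", 0.20),
    ("export_report", 0.05),
]

# Test the most frequently used operations first.
prioritized = sorted(profile, key=lambda pair: pair[1], reverse=True)
for event, prob in prioritized:
    print(f"test {event} (p={prob:.2f})")
```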
• A broad standard library − The bulk of Python's library is very portable and cross-platform compatible on UNIX, Windows, and Macintosh.

• Interactive Mode − Python has support for an interactive mode which allows interactive testing and debugging of snippets of code.

• Portable − Python can run on a wide variety of hardware platforms and has the same interface on all platforms.

• Extendable − You can add low-level modules to the Python interpreter. These modules enable programmers to add to or customize their tools to be more efficient.

• Databases − Python provides interfaces to all major commercial databases.

• GUI Programming − Python supports GUI applications that can be created and ported to many system calls, libraries, and windowing systems, such as Windows MFC, Macintosh, and the X Window System of Unix.

• Scalable − Python provides better structure and support for large programs than shell scripting.

Literature Survey:

Test case generation is often cited as one of the most challenging tasks in testing dependable systems. Besides its benefits as a verification technique in its own right, model checking is emerging as an efficient method for automating test case generation. Existing testing criteria and a range of new criteria, namely the vacuity-based ones inspired by formal requirements, have been used in model-checking-assisted test generation. This paper reviews some of these existing and new test criteria. We developed a unified framework for evaluating the effectiveness of these test criteria and the efficiency of model-checking-assisted test generation for these criteria. The benefits of this work are three-fold: first, the computational study carried out in this work assesses the practical effectiveness and efficiency of model-checking-assisted test case generation, which are important metrics to consider for selecting the right test criteria and test generation approach. Second, we propose a unified test generation framework based on generalized Büchi automata. The framework uses the same off-the-shelf model checker, in this case the SPIN model checker [10], to generate test cases for different criteria and compare them on a consistent basis. Last but not least, we describe in great detail the methodology and automated test generation environment that we developed on the basis of our unified framework. Such details are of interest to researchers who need to carry out their own experimental studies on test criteria, and to practitioners who want to integrate model-checking-assisted test generation into their testing process.

A test suite of high quality is able to cover each branch of the data flow graph of the program. Each input parameter has to be tested, and a corresponding testing method should be available in the unit test.

Python provides two very important features to handle any unexpected error in your Python programs and to add debugging capabilities to them:

– Exception Handling: This would be covered in this tutorial.
– Assertions: This would be covered in Assertions in Python.

• An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions.

• In general, when a Python script encounters a situation that it can't cope with, it raises an exception. An exception is a Python object that represents an error.

• When a Python script raises an exception, it must either handle the exception immediately or it will terminate and exit.
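The behaviour in the last bullet can be sketched directly (the function and input are illustrative): an unhandled exception terminates the script, while a try/except block handles it immediately and lets execution continue.

```python
def parse_age(text):
    # int() raises ValueError for non-numeric input.
    return int(text)

try:
    parse_age("not-a-number")
except ValueError as err:
    # Handled immediately, so the program does not terminate.
    print(f"handled: {err}")

print("execution continues")
```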
• There are two main error-handling models: status codes and exceptions. Status codes can be used by any programming language; exceptions require language/runtime support.

– Raising Exceptions
– Catching Exceptions
– Swallow the Exception
– Dealing With Transient Failure
– Error Logger

The generation of the errors is normally done automatically by specific mutation testing software tools. This thesis describes an approach for mutation testing in the programming language Python. The tool used for mutant generation is the proof-of-concept tool MutPy. In the event that a mutated version of the program is discovered by the existing testing data, the mutant is no longer needed for further testing and the tester can discard it. We can remove the mutant from the test results, because the output it delivers is not the one we expect. We refer to this operation as killing the mutant. The second goal of mutation testing is to have as many mutants as possible killed. Unfortunately, in many cases the software used for mutant generation tends to create mutants that are syntactically different from the original program but are, nevertheless, functionally equivalent to it. These are called functionally equivalent mutants, or simply equivalent mutants.
Python

AssertionError has taken on a special signifying value for Python testing tools – virtually all of them will provide testing assertions which raise AssertionErrors, and will attempt to catch AssertionErrors, treating them as failures rather than exceptions. The unittest documentation provides quite a nice description of this distinction.

Like Test::More's ok(), there's no default useful debugging information provided by assert other than what you provide. One can explicitly add it to the assert statement's name, which is evaluated at run-time:

assert expected == actual, "%s != %s" % (repr(expected), repr(actual))

However, evidence that assert was not really meant for software testing starts to emerge as one digs deeper. It's stripped out when one is running the code in production mode, and there's no built-in mechanism for seeing how many times assert was called – code runs that are devoid of any assert statement are indistinguishable from code in which every assert statement passed. This makes it an unsuitable building block for testing tools, despite its initial promise.

PyTest – a "mature full-featured Python testing tool" – deals with this by essentially turning assert into a macro. Testing libraries imported by code running under PyTest will have their assert statements rewritten on the fly to produce testing code capable of useful diagnostics and instrumentation. Running:

assert expected == actual, "Test description"

using python directly gives us a simple error:

Traceback (most recent call last):
  File "assert_test.py", line 4, in <module>
    assert expected == actual, "Test description"
AssertionError: Test description

whereas running it under py.test gives us proper diagnostic output:

assert_test.py:4: in <module>
    assert expected == actual, "Test description"
E   AssertionError: Test description
E   assert 'Foo' == 'Bar'

unittest, an SUnit descendent that's bundled as part of Python's core library, provides its own functions that directly raise AssertionErrors. The basic unit is assertTrue:

self.assertTrue(testCondition)

unittest's test assertions are meant to be run inside xUnit-style TestCases, which are run inside try/except blocks that catch AssertionErrors. Amongst others, unittest provides an assertEqual:

self.assertEqual(expected, actual)

Although surprisingly we've had to remove the test name, so that it'll be automatically set to a useful diagnostic message:

Traceback (most recent call last):
  File "test_fizzbuzz.py", line 11, in test_basics
    self.assertEqual(expected, actual)
AssertionError: 'Foo' != 'Bar'

Much as Perl unifies around TAP and Test::Builder, Python's test tools unify around the raising of AssertionErrors. One notable practical difference between these approaches is that a single test assertion failure in Python will cause other test assertions in the same try/except scope not to be run – and, in fact, not even acknowledged. Running:

assert 0, "Should be 1"
assert 1, "Should also be 1"

gives us no information at all about our second assertion.

You should always try to add checks to your code to make sure that it can deal with bad input and edge cases gracefully. We will look at this in more detail in the chapter about exception handling.
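A tiny sketch of the advice above about guarding against bad input and edge cases (the function name is invented for illustration):

```python
def safe_divide(a, b):
    # Reject the edge case explicitly rather than letting a bare
    # ZeroDivisionError surface from deep inside the program.
    if b == 0:
        raise ValueError("b must be non-zero")
    return a / b

print(safe_divide(6, 3))   # 2.0
try:
    safe_divide(1, 0)
except ValueError as err:
    print(f"rejected bad input: {err}")
```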
CONCLUSION AND FUTURE SCOPE:

…way it has helped big data technology to grow. Python has been an important part of Google since the beginning, and remains so as the system grows and evolves. Today, dozens of Google engineers use Python, and we're looking for more people with skills in this language.

REFERENCES:

… Conference on Computer and Information Science, Honolulu, Hawaii, pp. 405-411.

[6] Shengchao Qin, Guanhua He, (200 ), "Linking Object-Z with Spec#", 12th IEEE International Conference on Engineering Complex Computer Systems.