Software Testing Techniques

Testing techniques can be divided into the following:

Specification-based (black-box) techniques
Structure-based (white-box) techniques
Experience-based techniques

White-box techniques (Structure Based):
This is a software testing technique in which explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black-box testing, white-box testing uses specific knowledge of the programming code to examine outputs. Tests written using the white-box strategy cover the code's statements, branches, paths, and internal logic.
Unit testing and component testing are usually white-box testing, but the approach is equally important at the integration level to verify that one module calls another in the right way.
Structural testing has several well-defined sub-techniques:

Statement Testing: A component-level technique that exercises individual statements.
Loop Testing: The purpose of loop testing is to validate loop constructs. Typical tests check that the loop can be skipped, executed exactly once, and executed more than once.
Path Testing: discussed later.
Condition/Branch Testing: Validates all possible outcomes of a specific condition. For every decision (IF, FOR, WHILE, SWITCH), each branch must be executed at least once.

IF (a = b) THEN
    Statement 1
    Statement 2
ENDIF
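As a sketch of branch testing, consider a hypothetical discount function (the function and its rule are illustrative, not from any real system). Branch coverage requires at least one test that takes each outcome of the decision:

```python
def apply_discount(price, is_member):
    # Decision point: branch coverage requires both outcomes
    # (True and False) to be executed at least once.
    if is_member:
        return price * 0.9  # member branch
    return price            # non-member branch

# One test per branch achieves branch coverage of this decision:
assert apply_discount(100, True) == 90.0   # exercises the True branch
assert apply_discount(100, False) == 100   # exercises the False branch
```

With only the first assertion, statement coverage of the `if` body would be achieved but the False branch would remain untested, which is exactly the gap branch testing closes.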

Experience-based techniques:
Experience-based testing derives tests from the tester's skill, intuition, and experience with similar applications and technologies. It is useful for identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches.
A commonly used experience-based technique is error guessing, in which testers anticipate defects based on experience.

Black-Box  (Specification based):
Testing software based on output requirements, without any knowledge of the internal structure or code of the program. Common specification-based techniques include:

Equivalence Partitioning
Boundary Value Analysis
State Transition Testing
Cause-Effect Graphing
Syntax Testing
Use case testing
Equivalence partitioning (EP) is a test case design technique based on the premise that the inputs and outputs of a component can be partitioned into classes that, according to the component's specification, will be treated similarly by the component. Thus the result of testing a single value from an equivalence partition is considered representative of the complete partition.

As an example, consider a program that accepts days of the week and months of the year as inputs. Intuitively, you would probably not expect to have to test every date of the year. You would try months with 30 days (e.g. June) and months with 31 days (e.g. January), and you might remember the special case of February for both non-leap years (28 days) and leap years (29 days). Equally, looking at the days of the week, you would not, depending on the application, test every day; you might test a weekday (e.g. Tuesday) and a weekend day (e.g. Sunday). What you are in effect doing is deciding on equivalence classes for the data in question.

Not everyone will necessarily pick the same equivalence classes; there is some subjectivity involved. But the basic assumption is that any one value from an equivalence class is as good as any other when designing the test. This technique can dramatically reduce the number of tests needed for a particular software component.
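The weekday/weekend example above can be sketched as follows, using a hypothetical ticket-pricing function (the function and prices are assumptions for illustration):

```python
def ticket_price(day):
    # Hypothetical rule: weekend days cost more than weekdays.
    weekend = {"Saturday", "Sunday"}
    return 15 if day in weekend else 10

# Two equivalence partitions: weekdays and weekend days.
# One representative value stands in for its whole partition.
assert ticket_price("Tuesday") == 10   # representative weekday
assert ticket_price("Sunday") == 15    # representative weekend day
```

Two tests cover what would otherwise take seven, one per day, because every value in a partition is assumed to be treated the same way by the component.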

Boundary Value Analysis is based on two premises: firstly, that the inputs and outputs of a component can be partitioned into classes that, according to the component's specification, will be treated similarly by the component; and secondly, that developers are prone to making errors in their treatment of the boundaries of these classes. Test cases are therefore generated to exercise these boundaries.

State transition testing focuses on the testing of transitions from one state (e.g., open, closed) of an object (e.g., an account) to another state.
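A minimal sketch of boundary value analysis, assuming a hypothetical validation rule that accepts ages 18 through 65 inclusive:

```python
def is_valid_age(age):
    # Hypothetical spec: valid ages are 18..65 inclusive.
    return 18 <= age <= 65

# Boundary value analysis tests on and just beyond each boundary,
# where off-by-one mistakes (e.g. < instead of <=) tend to hide.
assert is_valid_age(17) is False  # just below the lower boundary
assert is_valid_age(18) is True   # lower boundary
assert is_valid_age(65) is True   # upper boundary
assert is_valid_age(66) is False  # just above the upper boundary
```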
A cause-effect graph is a graphical representation of inputs (causes) and their associated outputs (effects), which can be used to design test cases. Cause-effect graphs contain directed arcs that represent logical relationships between causes and effects; each arc can be combined using Boolean operators. Test cases can be derived directly from the graph, and the graph can also be used to visualize and measure the completeness and clarity of a test model.
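As a sketch, a cause-effect graph's Boolean relations can be captured directly in code. Here the business rule (loan granted only when both causes hold) is a hypothetical example, not from any real system:

```python
def grant_loan(has_income, has_collateral):
    # Causes: C1 = has_income, C2 = has_collateral.
    # Effect: E1 = loan granted, related by an AND arc (E1 = C1 AND C2).
    return has_income and has_collateral

# Test cases derived from the graph's AND relation:
assert grant_loan(True, True) is True     # both causes present -> effect
assert grant_loan(True, False) is False   # one cause missing -> no effect
assert grant_loan(False, True) is False   # other cause missing -> no effect
```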
Syntax-based testing is a technique in which a syntax-driven generator produces test cases from the syntax rules of a system. Every input has a syntax, and both valid and invalid values are created. It is a data-driven black-box technique for testing input data to language processors, such as string processors and compilers. Test cases are based on a rigid data definition.
Test execution automation is essential for syntax testing because this method produces a large number of tests.
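A small sketch of the idea, using an assumed, simplified syntax rule (an identifier is a letter followed by letters or digits); both valid and invalid inputs are derived from the rule:

```python
import re

# Hypothetical, simplified grammar for an identifier:
#   identifier ::= letter (letter | digit)*
IDENTIFIER = re.compile(r"[A-Za-z][A-Za-z0-9]*\Z")

def is_valid_identifier(text):
    return IDENTIFIER.fullmatch(text) is not None

# Syntax testing derives both conforming and violating inputs
# from the grammar; automation matters because real grammars
# yield many such cases.
valid_cases = ["x", "count1", "Total"]
invalid_cases = ["1x", "", "a-b"]
assert all(is_valid_identifier(c) for c in valid_cases)
assert not any(is_valid_identifier(c) for c in invalid_cases)
```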
Use case testing: Test cases are derived from use cases, exercising the scenarios (basic and alternative flows) in which an actor interacts with the system.
Decision table testing: Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. The specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they can either be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions, which results in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column, which typically involves covering all combinations of triggering conditions.
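The structure above can be sketched as a small table in code. The business rule (discounts for members and coupon holders) and the percentages are hypothetical, chosen only to illustrate one-test-per-column coverage:

```python
# Hypothetical decision table. Conditions: is_member, has_coupon.
# Each key is one column (rule): a unique combination of conditions.
# Each value is the resulting action (discount percentage).
RULES = {
    (True,  True):  20,  # rule 1: member with coupon
    (True,  False): 10,  # rule 2: member, no coupon
    (False, True):  5,   # rule 3: non-member with coupon
    (False, False): 0,   # rule 4: no discount
}

def discount(is_member, has_coupon):
    return RULES[(is_member, has_coupon)]

# Coverage standard: at least one test per column of the table.
assert discount(True, True) == 20
assert discount(True, False) == 10
assert discount(False, True) == 5
assert discount(False, False) == 0
```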
