Bug Life Cycle - BLC



The bug life cycle starts when a bug is found by a tester. After that, the tester has to do several things at different points in time.
The very first thing to do before logging a bug is to check whether the issue has already been identified: simply search the bug-tracking database for a similar issue. If the bug has already been entered, make sure it is still active and that the scenario matches the one you have identified. If it is not active, reopen it, add more information (if your repro steps are different, add them), and bring the issue to the developer's/team's attention.
Always avoid entering duplicate bugs; doing so reflects poorly on a tester's image.
After the tester enters a bug, it must be assigned to someone, usually a developer (eventually it reaches a developer). The developer examines the bug, fixes the code, produces a new build containing the fix, and assigns the bug back to the tester to verify it. The tester then verifies the fix, which is called regression testing. If the bug has been fixed, it is the tester's responsibility to close it. If not, the bug is reactivated and the cycle starts all over again.
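As an illustration only, the states and transitions described above can be sketched in Python as a small lookup table; the state names below are hypothetical, and real bug trackers define their own workflow.

# A minimal sketch of the bug states and transitions described above.
ALLOWED_TRANSITIONS = {
    "New": ["Assigned", "Duplicate"],     # entered by the tester
    "Assigned": ["Fixed"],                # developer fixes the code in a new build
    "Fixed": ["Closed", "Reactivated"],   # tester verifies the fix (regression testing)
    "Reactivated": ["Assigned"],          # cycle starts all over again
    "Closed": ["Reopened"],               # same scenario found again later
    "Reopened": ["Assigned"],
}

def can_move(current_state, target_state):
    # True if a bug may move from current_state to target_state
    return target_state in ALLOWED_TRANSITIONS.get(current_state, [])

print(can_move("Fixed", "Closed"))   # True
print(can_move("New", "Closed"))     # False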


Some Useful UNIX Commands for a QA Tester

In a UNIX environment, a tester is frequently involved in:
Running scripts as required by test case steps
File manipulation (copying/renaming/deleting)
Navigation (changing directories/listing files and directories)
Creating files/directories
Monitoring
Remote login
Using vi/other editors
Search commands: grep, find
The find command is used to search the UNIX system for specific files and/or directories.
Executing scripts
Scheduling scripts to run automatically using crontab jobs
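For illustration, here is a minimal Python sketch of the kind of search task listed above, using the standard subprocess module to run find and grep; the paths and the search pattern are hypothetical.

import subprocess

# Equivalent of: find /var/log -name "*.log"
result = subprocess.run(["find", "/var/log", "-name", "*.log"],
                        capture_output=True, text=True)
log_files = result.stdout.splitlines()

# Equivalent of: grep -n "ERROR" <file> for every file found
for path in log_files:
    hits = subprocess.run(["grep", "-n", "ERROR", path],
                          capture_output=True, text=True)
    if hits.stdout:
        print(path)
        print(hits.stdout)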

Useful protocols for a software tester

What is a web protocol?
When two or more computers communicate over the Internet, they must have a common way to communicate. They use protocols to do this. Simply put, a protocol is an agreement by which two or more computers can communicate.


TCP/IP : 
Transmission Control Protocol/Internet Protocol (TCP/IP) is the set of Internet communication protocols. TCP breaks data into small pieces (called packets) of no more than about 1500 characters each. Each packet is placed in its own Internet Protocol (IP) "envelope"; every envelope carries the address of the intended recipient and has the same header format as all the other envelopes. A router receives the packets and determines the most efficient way to send them to the recipient. Upon arrival at their destination, TCP checks the data for corruption against the header included in each packet. If TCP finds a bad packet, it sends a request that the packet be re-transmitted. A numeric IP address (a 32-bit address composed of four 8-bit numbers, each with a value between 0 and 255, separated by periods) works perfectly as a web address. However, Uniform Resource Locators (URLs) are used instead of IP addresses because they are more user friendly. So when a person types a URL into a browser, the request is sent to a Domain Name Server (DNS), which translates the URL's host name into an IP address understood by computers.
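As a small illustration of that last step, the lookup from a host name to a numeric IP address can be observed with Python's standard socket module; the host name here is just an example.

import socket

hostname = "www.example.com"
ip_address = socket.gethostbyname(hostname)   # the DNS lookup described above
print(hostname, "->", ip_address)             # e.g. 93.184.216.34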

HTTP/HTTPS :
HTTP (Hypertext Transfer Protocol) is the set of rules for transferring files (text, graphic images, sound, video, and other multimedia files) on the World Wide Web. Whenever you surf the web, your browser sends HTTP request messages for HTML pages, images, scripts, and style sheets. Web servers handle these requests by returning response messages that contain the requested resource.
The plain HTTP protocol is not suitable for a wide range of applications because its traffic can easily be monitored and replayed. For example, someone using a network monitor can easily capture passwords used to access a banking web site. HTTP therefore supports several authentication mechanisms to control access to pages and other resources, and HTTPS runs the same protocol over an encrypted SSL session (HTTP over SSL, the Secure Sockets Layer). So if a website address begins with https:// instead of http://, it is a secure site. The client and server create a shared secret key using a public/private key handshake. Typically, HTTP data is sent over TCP/IP port 80, whereas HTTP over SSL is sent over port 443.
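A minimal sketch of the request/response exchange described above, using Python's standard http.client module; www.example.com is a placeholder host and the port numbers shown are simply the usual defaults.

import http.client

# Plain HTTP on port 80
conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)   # e.g. 200 OK

# The same request over an encrypted SSL/TLS session on port 443
secure = http.client.HTTPSConnection("www.example.com", 443)
secure.request("GET", "/")
print(secure.getresponse().status)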
How to tell whether a web connection is secure:
In Internet Explorer, a lock icon appears in the Security Status bar, which is located on the right side of the Address bar. A site that is not secure shows a red Security Status bar and a certificate error instead of the lock icon. A site with a normal validation certificate shows a white Security Status bar and the lock icon. A site with an extended validation certificate shows a green Security Status bar.
Color in the Security Status bar and what it means:
Red - The certificate is out of date, invalid, or has an error. For more information, see "About Certificate Errors" in Related Topics.
Yellow - The authenticity of the certificate or the certification authority that issued it cannot be verified. This might indicate a problem with the certification authority's website.
White - The certificate has normal validation. This means that communication between your browser and the website is encrypted. The certification authority makes no assertion about the business practices of the website.
Green - The certificate uses extended validation. This means that communication between your browser and the website is encrypted and that the certification authority has confirmed the website is owned or operated by a business that is legally organized under the jurisdiction shown in the certificate and on the Security Status bar. The certification authority makes no assertion about the business practices of the website.

FTP: 
File Transfer Protocol (FTP), a standard Internet protocol, is the simplest way to exchange files between computers on the Internet. Like the Hypertext Transfer Protocol (HTTP), which transfers displayable Web pages and related files, and the Simple Mail Transfer Protocol (SMTP), which transfers e-mail, FTP is an application protocol that uses the Internet's TCP/IP protocols. FTP is commonly used to transfer Web page files from their creator to the computer that acts as their server for everyone on the Internet. It is also commonly used to download programs and other files to your computer from other servers.
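A minimal sketch of an FTP session with Python's standard ftplib module; the host is hypothetical and anonymous login is assumed.

from ftplib import FTP

ftp = FTP("ftp.example.com")   # connects to the default FTP command port, 21
ftp.login()                    # anonymous login unless a username/password is given
print(ftp.nlst())              # list the files in the current remote directory
ftp.quit()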

SOAP:
Simple Object Access Protocol (SOAP) is an XML-based object invocation protocol used for accessing web services. SOAP was developed so that distributed applications could communicate over HTTP and through firewalls. SOAP messages are independent of any operating system or transport and may be carried by a variety of Internet protocols, including SMTP, MIME, and HTTP.
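A minimal sketch of a SOAP call: an XML envelope posted over HTTP with Python's standard library. The endpoint, SOAPAction header, and message body below are hypothetical; a real service defines them in its contract (WSDL).

import http.client

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/stock">
      <Symbol>ABC</Symbol>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

conn = http.client.HTTPConnection("www.example.com")
conn.request("POST", "/soap", body=envelope,
             headers={"Content-Type": "text/xml; charset=utf-8",
                      "SOAPAction": "http://example.com/stock/GetPrice"})
print(conn.getresponse().status)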

SMTP:
Simple Mail Transfer Protocol (SMTP) is a protocol for sending email messages between servers. The messages can then be retrieved with an e-mail client using either POP or IMAP.
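A minimal sketch of sending a message over SMTP with Python's standard smtplib module; the server name and the addresses are placeholders.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "tester@example.com"
msg["To"] = "developer@example.com"
msg["Subject"] = "Build verification results"
msg.set_content("Regression run finished; see the report.")

with smtplib.SMTP("mail.example.com", 25) as server:   # SMTP usually listens on port 25
    server.send_message(msg)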

POP:
Post Office Protocol (POP) is a protocol used to retrieve e-mail from a mail server to an email client. POP stores your email on your computer in your email client (i.e. Thunderbird, Outlook, or whatever program you use to check email). When you check email, it is downloaded to your email client and removed from the mail server. This is why you can read your email when you are offline: because the email is actually on your computer, you do not need an Internet connection to see it.
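A minimal sketch of retrieving mail over POP3 with Python's standard poplib module; the server and credentials are placeholders.

import poplib

pop = poplib.POP3("mail.example.com")   # POP3 usually listens on port 110
pop.user("tester")
pop.pass_("secret")
message_count, mailbox_size = pop.stat()
print(message_count, "messages,", mailbox_size, "bytes")
pop.quit()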

IMAP:

Internet Message Access Protocol (IMAP) is a protocol for accessing mail that resides on a mail server using an email client. IMAP keeps your email on the mail server, so you can access it from multiple locations and with multiple email clients. For example, you can see the same email at home and at work. Likewise, you can see it in iCampus, Webmail, and Thunderbird.
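A minimal sketch of reading the same mailbox over IMAP with Python's standard imaplib module; unlike POP, the messages stay on the server. The server and credentials are placeholders.

import imaplib

imap = imaplib.IMAP4("mail.example.com")   # IMAP usually listens on port 143
imap.login("tester", "secret")
imap.select("INBOX")                       # mail remains on the server after reading
status, message_ids = imap.search(None, "ALL")
print(status, message_ids)
imap.logout()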

MIME: 

SMTP uses the MIME (Multipurpose Internet Mail Extensions) protocol to send binary data across TCP/IP networks. The MIME protocol converts binary data to pure text.
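A minimal sketch of that conversion using Python's standard email package: a binary attachment is base64-encoded so the whole message becomes plain text. The attachment bytes below are just an example.

from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Screenshot of the defect"
msg.set_content("Screenshot attached.")
msg.add_attachment(b"\x89PNG\r\n\x1a\n...binary image data...",
                   maintype="image", subtype="png", filename="defect.png")

# The binary attachment now appears as base64 text inside the message:
print(msg.as_string()[:400])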

TCP: 

TCP (a network protocol) is used for transmitting data from an application to the network. TCP is responsible for breaking data down into IP packets before they are sent and for reassembling the packets when they arrive. IP is responsible for sending and receiving the data packets over the Internet.
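A minimal sketch of a raw TCP connection using Python's standard socket module; the host and the request line sent over it are placeholders.

import socket

with socket.create_connection(("www.example.com", 80)) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\n\r\n")
    print(sock.recv(200))   # first bytes of the reply, reassembled from packets by TCP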

DHCP:
Dynamic Host Configuration Protocol (DHCP) is responsible for allocating dynamic IP addresses to computers in a network. It is a standard protocol defined by RFC 1541 (superseded by RFC 2131) that allows a server to dynamically distribute IP addressing and configuration information to clients. Normally the DHCP server provides the client with at least this basic information: IP address, subnet mask, and default gateway. Other information can be provided as well, such as Domain Name Service (DNS) server addresses and Windows Internet Name Service (WINS) server addresses. The system administrator configures the DHCP server with the options that are handed out to the client.

HTTP:
Hypertext Transfer Protocol (HTTP) takes care of the communication between a web server and a web browser. HTTP is used for sending requests from a web client (a browser) to a web server and returning web content (web pages) from the server back to the client. Its secure variant, HTTPS, is described above.
FTP - FTP refers to a network protocol responsible for transferring files from one computer to another on the Internet. The FTP service is provided over a TCP network connection. In order to establish an FTP connection, the user needs to point an FTP client at an FTP server. The information needed includes the FTP host, the FTP account credentials (username and password), and the FTP port. The default command port for FTP connections is port 21.

ICMP:
Internet Control Message Protocol (ICMP) takes care of error handling in the network. It is chiefly used by networked computers' operating systems to send error messages indicating, for instance, that a requested service is not available or that a host or router could not be reached.

SNMP:
Simple Network Management Protocol (SNMP) is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention.

Software Testing Techniques

Testing techniques can be divided into the following:

Specification-based - black-box techniques
Structure-based - white-box techniques
Experience-based techniques


White-box techniques (Structure Based):
White-box testing is a technique whereby explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black-box testing, white-box testing uses specific knowledge of the programming code to examine outputs. Tests written using the white-box strategy cover the code that was written: its branches, paths, statements, and internal logic.
Unit testing and component testing are usually white-box testing, but white-box techniques are equally important at the integration level, to verify that one module calls another in the right way.
Structural testing has well-defined types of testing, which include:


Statement Testing: component-level testing that exercises individual statements.
Loop Testing: the purpose of loop testing is to validate loop constructs. It usually tests that a loop can be skipped, executed exactly once, and executed more than once.
Path Testing: will be discussed later.
Condition/Branch Testing: validates all possible outcomes of a specific condition. For every decision (if, for, while, switch), each branch needs to be executed at least once, as in the pseudocode and sketch below.

IF (a = b) THEN
    Statement 1
ELSE
    Statement 2
END IF
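A minimal Python sketch of the two test cases branch coverage requires for the IF/ELSE above: one input that makes the condition true and one that makes it false. The function name is purely illustrative.

def classify(a, b):
    if a == b:
        return "statement 1"   # THEN branch
    else:
        return "statement 2"   # ELSE branch

assert classify(5, 5) == "statement 1"   # exercises the true branch
assert classify(5, 7) == "statement 2"   # exercises the false branch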


Experience-based techniques:
Experience-based testing is where tests are derived from the tester's skill, intuition, and experience with similar applications and technologies. It is useful for identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches.
A commonly used experienced-based technique is error guessing. Generally testers anticipate defects based on experience.

Black-box (Specification-based):
Testing software based on output requirements and without any knowledge of the internal structure or coding in the program.

Techniques:
Equivalence Partitioning
Boundary Value Analysis
State Transition Testing
Cause-Effect Graphing
Syntax Testing
Use Case Testing
Decision Table Testing
Equivalence partitioning (EP) is a test case design technique based on the premise that the inputs and outputs of a component can be partitioned into classes that, according to the component's specification, will be treated similarly by the component. Thus the result of testing a single value from an equivalence partition is considered representative of the complete partition.
As an example, consider any program that accepts days of the week and months of the year as inputs. Intuitively, you would probably not expect to have to test every date of the year. You would obviously try months with 30 days (e.g. June) and months with 31 days (e.g. January), and you may even remember to try the special case of February for both non-leap years (28 days) and leap years (29 days). Equally, looking at the days of the week, you would not, depending on the application, test every day; you might test a weekday (e.g. Tuesday) and a weekend day (e.g. Sunday). What you are in effect doing is deciding on equivalence classes for the set of data in question.
Not everyone will necessarily pick the same equivalence classes; there is some subjectivity involved. But the basic assumption you are making is that any one value from an equivalence class is as good as any other when you come to design the test. This technique can dramatically reduce the number of tests needed for a particular software component.
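A minimal Python sketch of the month example above: one representative value is picked from each equivalence partition instead of testing every month.

partitions = {
    "31-day months": "January",                          # any of Jan, Mar, May, Jul, Aug, Oct, Dec
    "30-day months": "June",                             # any of Apr, Jun, Sep, Nov
    "February in a non-leap year (28 days)": ("February", 2023),
    "February in a leap year (29 days)": ("February", 2024),
}

for partition_name, representative in partitions.items():
    print("Test the partition", repr(partition_name), "with", representative)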

Boundary Value Analysis is based on the following premise: firstly, the inputs and outputs of a component can be partitioned into classes that, according to the component's specification, will be treated similarly by the component; and secondly, developers are prone to making errors in their treatment of the boundaries of these classes. Thus test cases are generated to exercise these boundaries.
State transition testing focuses on testing the transitions from one state (e.g., open, closed) of an object (e.g., an account) to another state.
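A minimal Python sketch of the boundary value premise above, for a hypothetical day-of-month field that accepts 1 to 31: test values sit exactly on and just beyond each boundary.

lower, upper = 1, 31
boundary_values = [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]
print(boundary_values)   # [0, 1, 2, 30, 31, 32]; 0 and 32 should be rejected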
A cause-effect graph is a graphical representation of inputs (causes) with their associated outputs (effects), which can be used to design test cases. Cause-effect graphs contain directed arcs that represent logical relationships between causes and effects, and each arc can be qualified by Boolean operators. Such graphs can be used to derive test cases directly, or to visualize and measure the completeness and clarity of a test model for the tester.
Syntax-based testing is a technique in which a syntax command generator produces test cases based on the syntax rules of a system. Every input has a syntax, and both valid and invalid values are created. It is a data-driven black-box technique for testing input data to language processors, such as string processors and compilers. Test cases are based on a rigid data definition.
Test execution automation is essential for syntax testing because this method produces a large number of tests.
Use case testing: test cases are derived from use cases, i.e. from the interactions between actors (users or other systems) and the system under test, so that each scenario of a use case is exercised from start to finish.
Decision table testing: decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. The specification is analyzed, and the conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they can be either true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions which results in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column, which typically involves covering all combinations of triggering conditions.
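A minimal Python sketch of a decision table for a hypothetical login rule: each entry is one column, i.e. one combination of conditions with the action it triggers, and the coverage standard above means at least one test per entry.

decision_table = [
    # (valid username, valid password) -> expected action
    ((True,  True),  "grant access"),
    ((True,  False), "show error message"),
    ((False, True),  "show error message"),
    ((False, False), "show error message"),
]

for conditions, expected_action in decision_table:
    print(conditions, "->", expected_action)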