Manual Testing and Its Different Types
Manual testing is a software testing method in which testers execute test cases without the use of automation tools or scripts. A human tester interacts with the software as an end user would, manually executing test cases and checking for defects or discrepancies. Manual testing is an essential part of the software testing process and is often used in conjunction with automated testing.
Need for Manual Testing:
- Manual testing allows testers to use their creativity, intuition, and domain knowledge to explore the software, identify defects, and assess its overall quality. This type of testing is valuable for finding unanticipated issues that automated tests may not detect.
- Manual testing is essential for evaluating the visual aspects of the software, including the layout, design, and responsiveness of the user interface.
- Manual testing enables testers to evaluate the software from a user’s perspective, including aspects like usability, accessibility, and overall user experience. Testers can provide subjective feedback based on their experience, which automated tests cannot do.
Advantages of Manual Testing:
- In the early stages of development, when the software is rapidly evolving, writing and maintaining automated tests can be time-consuming. Manual testing offers a more flexible way to identify defects and provide feedback.
- Manual testing is effective for scenarios where there are no predefined test cases or limited documentation. Testers can quickly adapt to changes and new requirements.
- For tasks that are needed only once or infrequently, the overhead of creating and maintaining automated tests may not be justified. Manual testing is more efficient in such cases.
- Some test scenarios, particularly those involving intricate business logic or edge cases, may be challenging to automate effectively. Manual testing is better suited for these situations.
Types of Manual Testing:
- Black Box Testing: Black box testing focuses on testing the software’s functionality without knowledge of the internal structure, design, or code of the application. Testers examine the application’s inputs and outputs to verify that it meets the specified requirements, without considering how the program achieves those results. There are several types of black box testing, including:
- Functional Testing: This is one of the most common black box testing types. It involves testing the application’s functionality by providing various inputs and verifying that the expected outputs are produced. Common techniques include equivalence partitioning, boundary value analysis, and decision tables (a short sketch of the first two appears after this list).
- Non-functional Testing: Unlike functional testing, non-functional testing evaluates aspects of the software other than its specific functionalities. It includes testing for performance, usability, security, and scalability. Examples include load testing, stress testing, and security testing.
- Regression Testing: This type of testing focuses on ensuring that new code changes or feature additions do not negatively impact existing functionalities. Test cases are re-executed, and any deviations from expected behavior are identified.
- Smoke Testing: Smoke testing is a preliminary test that checks whether the critical and basic functionalities of the application are working. If these basic checks pass, more detailed testing can proceed.
- Integration Testing: This type of testing focuses on verifying the interactions between different modules or components of the application. The goal is to ensure that integrated components work together as expected.
- System Testing: System testing evaluates the entire software system as a whole. It aims to verify that the integrated system meets the specified requirements and functions correctly.
- Acceptance Testing: Acceptance testing is conducted to determine whether the software meets the business requirements and is ready for release. It is often divided into two subtypes: User Acceptance Testing (UAT) and Alpha/Beta Testing.
- Usability Testing: Usability testing assesses how user-friendly the software is. Testers evaluate the software’s interface and user experience to ensure it meets user expectations.
- Compatibility Testing: This testing checks how well the software performs on different platforms, browsers, and devices to ensure it is compatible with a variety of environments.
- Localization and Internationalization Testing: These tests focus on ensuring that the software can adapt to different languages and cultural settings (internationalization) and confirming that it works well in a specific locale (localization).
- Boundary Testing: Boundary testing checks the behavior of the software at the edge or boundary of its input domain. It helps identify potential issues related to limits, such as minimum and maximum values.
- Ad Hoc Testing: Ad hoc testing is informal and unstructured testing. Testers explore the software with little or no predefined test cases, often to find defects or unexpected behaviors.
- Exploratory Testing: Similar to ad hoc testing, exploratory testing involves testers exploring the software in real time, actively learning about it, and designing test cases on the fly.
- Equivalence Class Testing: Equivalence class testing divides the input domain into groups of equivalent inputs and then tests each group to ensure that the software behaves consistently within these classes.
- Decision Table Testing: Decision table testing is used when the behavior of the software depends on combinations of inputs. Test cases are derived from a decision table that outlines possible input combinations and their expected outcomes (see the example after this list).
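As a concrete illustration of equivalence partitioning and boundary value analysis, the sketch below derives test inputs for a hypothetical age-validation rule (ages 18 to 65 are valid). The `is_valid_age` function and the exact boundaries are assumptions made up for this example, not part of any particular system.

```python
# A minimal sketch of equivalence partitioning and boundary value analysis,
# assuming a hypothetical rule: ages 18-65 (inclusive) are valid.

def is_valid_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages 18 through 65."""
    return 18 <= age <= 65

# Equivalence classes: one representative value per class is enough,
# because every member of a class is expected to behave the same way.
equivalence_cases = [
    ("below valid range", 10, False),
    ("inside valid range", 30, True),
    ("above valid range", 70, False),
]

# Boundary values: test just below, at, and just above each boundary,
# where off-by-one defects are most likely to hide.
boundary_cases = [
    ("just below lower bound", 17, False),
    ("at lower bound", 18, True),
    ("at upper bound", 65, True),
    ("just above upper bound", 66, False),
]

for name, value, expected in equivalence_cases + boundary_cases:
    actual = is_valid_age(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {name} (age={value}) -> {actual}, expected {expected}")
```

The same inputs could just as easily be recorded in a manual test case sheet; the code simply makes the expected-versus-actual comparison explicit.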
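Decision table testing can be sketched the same way: each row of the table is one combination of conditions together with the expected outcome. The login rule below (access is granted only when the account is active and the password is correct) is a hypothetical example, not taken from any specific system.

```python
# A minimal decision table sketch, assuming a hypothetical login rule:
# access is granted only when the account is active AND the password is correct.

def grant_access(account_active: bool, password_correct: bool) -> bool:
    """Hypothetical system under test."""
    return account_active and password_correct

# Each row of the decision table is one combination of conditions
# together with the expected action (the test oracle).
decision_table = [
    # account_active, password_correct, expected_access
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

for active, correct, expected in decision_table:
    actual = grant_access(active, correct)
    assert actual == expected, f"Rule failed for active={active}, correct={correct}"
print("All decision table rows behave as expected.")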
- White Box Testing: White box testing, also known as structural testing or glass box testing, is a software testing method that examines the internal structure of the software’s code and logic. Testers with knowledge of the application’s code use that knowledge to create test cases that assess the program’s internal structure and the correctness of the code.
Following are the types of white box testing:
- Statement Coverage (Line Coverage): This type of testing aims to ensure that every statement in the code is executed at least once. It measures the percentage of code statements that are executed during testing (the sketch after this list contrasts statement and branch coverage).
- Branch Coverage (Decision Coverage): Branch coverage is concerned with testing every possible branch of conditional statements and loops within the code. It ensures that both true and false outcomes of conditions are tested.
- Path Coverage: Path coverage is a more comprehensive form of testing that involves testing every possible path through the code, including various combinations of branches and loops. It aims to ensure that every possible execution path is tested.
- Condition Coverage: Condition coverage focuses on testing the different conditions within the code, making sure that each condition evaluates to both true and false during testing.
- Function Coverage: Function coverage ensures that every function or subroutine within the code is called at least once during testing.
- Statement and Decision Coverage: This combines statement coverage and branch coverage to ensure that both individual statements and conditional branches are exercised.
- Modified Condition/Decision Coverage (MC/DC): MC/DC is a more stringent form of testing used in safety-critical applications. It ensures that each condition within a decision statement is tested for its contribution to the overall decision outcome.
- Multiple Condition Coverage: This white box testing type focuses on testing different combinations of conditions within a decision statement to ensure that all possible combinations are tested.
- Loop Testing: Loop testing concentrates on testing loops and their boundary conditions, including zero iterations, a single iteration, and multiple iterations (see the loop-testing sketch after this list).
- Data Flow Testing: Data flow testing identifies how data is used and manipulated within the code. It aims to find data-related issues, such as uninitialized variables, redundant assignments, or data flow anomalies.
- Control Flow Testing: Control flow testing explores the execution paths through the code, looking for control flow-related issues like dead code, unreachable code, or infinite loops.
- Static Analysis: Static analysis tools analyze the source code without executing it. They can identify potential issues such as coding standards violations, code complexity, and potential security vulnerabilities.
- Security Testing: White box security testing assesses the software’s code and design to identify and rectify potential security vulnerabilities, such as input validation issues, injection attacks, and authentication weaknesses.
- Code Review: Code review involves a thorough manual examination of the source code by experienced developers or testers to identify defects, adherence to coding standards, and best practices.
- Integration Testing: While integration testing is often considered a black box testing type, in white box integration testing, the focus is on the internal interactions and integrations of different components within the software, with knowledge of the code.
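To make the difference between statement coverage and branch coverage concrete, consider the minimal sketch below. The `apply_discount` function is hypothetical; the point is that one test input can execute every statement while still leaving the false branch of the decision untested.

```python
# A minimal sketch contrasting statement coverage with branch coverage.
# The function and inputs are hypothetical, chosen only for illustration.

def apply_discount(price: int, is_member: bool) -> int:
    discount = 0
    if is_member:          # decision with a true and a false branch
        discount = 10
    return price - discount

# Statement coverage: this single test executes every line at least once,
# but the 'if' condition is only ever evaluated as True.
assert apply_discount(100, True) == 90

# Branch coverage additionally requires the False outcome of the decision,
# which needs a second test input.
assert apply_discount(100, False) == 100
```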
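Loop testing, mentioned above, typically exercises a loop with zero, one, and many iterations. The summation helper below is a hypothetical example used only to show those three cases.

```python
# A minimal loop-testing sketch: exercise the loop with zero, one,
# and multiple iterations. The function under test is hypothetical.

def total(values):
    result = 0
    for v in values:   # loop under test
        result += v
    return result

assert total([]) == 0          # zero iterations
assert total([5]) == 5         # exactly one iteration
assert total([1, 2, 3]) == 6   # multiple iterations
```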
- Grey Box Testing: Grey box testing is a software testing approach that combines elements of both black box and white box testing. In grey box testing, testers have partial knowledge of the internal code and design of the application. This partial knowledge allows testers to design test cases that target specific areas of the code while also considering the software’s functionalities as a whole.
Following are the types of grey box testing:
- Data-Driven Testing: In data-driven testing, the tester uses knowledge of the code’s internal structure to design test cases based on input data and expected output. This approach can help uncover issues related to how data is processed within the application (a data-driven testing sketch appears after this list).
- State Transition Testing: Grey box testers may use their knowledge of the application’s architecture to create test cases that focus on transitions between different states or modes within the software. This is particularly useful for applications with complex state management.
- API Testing: Grey box testers often conduct API testing, which involves testing the application’s exposed APIs and web services. They may use knowledge of the API endpoints and their underlying code to design test cases that assess data exchange and integration.
- Database Testing: Testers with access to the application’s database schema and queries can perform grey box testing focused on database interactions. This includes testing data retrieval, storage, and validation.
- Code-Based Testing: While grey box testers do not have full access to the source code, they may still review and analyze specific sections of code to identify potential vulnerabilities, bugs, or areas that require testing.
- Scenario-Based Testing: Testers may use knowledge of the application’s architecture and code to create test scenarios that mimic real-world usage, combining various functionalities to assess end-to-end processes.
- Risk-Based Testing: Grey box testers can employ risk-based testing, where they prioritize testing efforts based on their understanding of the application’s architecture, code, and critical functionalities.
- Fault Injection Testing: Testers intentionally introduce faults or defects into specific areas of the code where vulnerabilities or weaknesses are suspected. This can help uncover how the application handles errors and exceptions (see the fault injection sketch after this list).
- Path Testing: Path testing, a combination of white box and black box techniques, involves testing specific execution paths through the code that are identified based on knowledge of the code’s structure.
- Code Coverage Analysis: Testers can use code coverage tools to measure how much of the code has been executed during testing. This helps identify untested or under-tested code segments.
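In practice, data-driven tests are often written by separating the test data from the test logic, for instance with pytest's parametrize decorator. The `convert_to_cents` function and its data rows below are hypothetical; the sketch only shows the general shape of a data-driven test.

```python
# A minimal data-driven testing sketch using pytest's parametrize feature.
# The function under test and the data rows are hypothetical.
import pytest

def convert_to_cents(amount: str) -> int:
    """Hypothetical system under test: parse a decimal string into cents."""
    dollars, _, cents = amount.partition(".")
    return int(dollars) * 100 + int(cents or 0)

@pytest.mark.parametrize(
    "amount, expected",
    [
        ("0.00", 0),
        ("1.50", 150),
        ("19.99", 1999),
        ("5", 500),   # no decimal part
    ],
)
def test_convert_to_cents(amount, expected):
    assert convert_to_cents(amount) == expected
```

Adding a new test case is then a matter of adding a data row, which keeps the test logic itself unchanged.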
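Fault injection at the code level is commonly done by forcing a dependency to fail and observing how the caller reacts. The sketch below uses Python's unittest.mock to make a hypothetical payment gateway raise a ConnectionError; all of the names involved are assumptions for illustration.

```python
# A minimal fault injection sketch using unittest.mock: the dependency is
# forced to raise an error so we can observe how the caller handles it.
# The PaymentGateway/checkout names are hypothetical.
from unittest.mock import Mock

class PaymentGateway:
    def charge(self, amount: int) -> str:
        return "ok"  # real implementation not relevant here

def checkout(gateway: PaymentGateway, amount: int) -> str:
    """Caller under test: should degrade gracefully when the gateway fails."""
    try:
        return gateway.charge(amount)
    except ConnectionError:
        return "payment unavailable, order queued"

# Inject the fault: the mocked gateway raises instead of charging.
faulty_gateway = Mock(spec=PaymentGateway)
faulty_gateway.charge.side_effect = ConnectionError("gateway down")

assert checkout(faulty_gateway, 100) == "payment unavailable, order queued"
print("Caller handled the injected fault as expected.")
```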