Test approaches
Test approaches are strategies or methodologies used to guide the testing process. They help ensure comprehensive coverage, effective defect detection, and efficient use of resources. Here are some common test approaches:
- Variable Analysis: Identify anything whose value can change. Variables may be obvious, subtle, or hidden.
- TouchPoints: Identify any public or private interface that provides visibility or control. These are places to provoke, monitor, and verify the system.
- Boundaries: Examine how the system behaves at, near, and beyond its defined limits. Boundaries can be numeric (e.g., maximum length, minimum value), logical (e.g., permission levels), or physical (e.g., storage capacity).
  - Approaching the boundary: Test values just below or above the limit (e.g., one less than the maximum allowed).
  - At the boundary: Use values exactly at the limit (e.g., the maximum allowed characters in a field).
  - Beyond the boundary: Attempt to exceed the limit and observe error handling or the system response.
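The three boundary probes can be sketched against a hypothetical length-limited field; `validate_name()` and its 20-character limit are assumptions for illustration, not a real API:

```python
# Boundary-value sketch for a hypothetical name field.
# MAX_LEN and validate_name() are illustrative assumptions.
MAX_LEN = 20

def validate_name(name: str) -> bool:
    """Accept names of 1 to MAX_LEN characters."""
    return 1 <= len(name) <= MAX_LEN

assert validate_name("a" * (MAX_LEN - 1))      # approaching the boundary
assert validate_name("a" * MAX_LEN)            # at the boundary
assert not validate_name("a" * (MAX_LEN + 1))  # beyond the boundary
```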
- Goldilocks: Evaluate system behavior with values or conditions that are too extreme, too minimal, and just right. This approach helps uncover issues related to limits, validation, and optimal operation.
  - Too big: Provide inputs or configurations that exceed recommended or allowed limits (e.g., overly large files, maximum field lengths, excessive numbers).
  - Too small: Use inputs or settings below minimum requirements (e.g., empty fields, zero values, minimal file sizes).
  - Just right: Test with values or conditions within the ideal or expected range, ensuring normal operation.
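As a minimal sketch, assume a hypothetical order-quantity rule with a 1 to 100 range (the range and `accept_quantity()` are illustrative):

```python
# Goldilocks sketch for a hypothetical order-quantity rule.
# The 1-100 range is an assumption for illustration.
MIN_QTY, MAX_QTY = 1, 100

def accept_quantity(qty: int) -> bool:
    return MIN_QTY <= qty <= MAX_QTY

assert not accept_quantity(10_000)  # too big
assert not accept_quantity(0)       # too small
assert accept_quantity(50)          # just right
```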
- CRUD: Test the system’s ability to handle the four fundamental operations (Create, Read, Update, and Delete) on data or entities. For each operation, consider different scenarios and edge cases:
  - Create: Add new records or entities, including valid, invalid, and duplicate entries.
  - Read: Retrieve and display data, checking for accuracy, completeness, and correct handling of missing or restricted information.
  - Update: Modify existing records, testing with valid changes, invalid updates, and concurrent modifications.
  - Delete: Remove records, verifying correct deletion, handling of dependencies, and system response to attempts to access deleted data. Apply CRUD testing across different data sets, user roles, and system states to ensure robust data management and integrity.
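A toy in-memory store, assumed purely for illustration, walks through all four operations including a duplicate create and a read after delete:

```python
# Minimal in-memory store used to exercise Create, Read, Update, Delete.
# The store and its rules are assumptions for illustration.
class Store:
    def __init__(self):
        self._rows = {}

    def create(self, key, value):
        if key in self._rows:
            raise ValueError(f"duplicate key: {key}")
        self._rows[key] = value

    def read(self, key):
        return self._rows.get(key)  # None for missing or deleted rows

    def update(self, key, value):
        if key not in self._rows:
            raise KeyError(key)
        self._rows[key] = value

    def delete(self, key):
        self._rows.pop(key, None)

s = Store()
s.create("c1", "Ada")            # Create: valid entry
try:
    s.create("c1", "Ada again")  # Create: duplicate is rejected
except ValueError:
    pass
s.update("c1", "Ada L.")         # Update: valid change
s.delete("c1")                   # Delete
assert s.read("c1") is None      # Read after delete returns nothing
```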
- Follow the Data: Perform a sequence of actions involving data, verifying integrity at each step and across transitions. This approach helps uncover issues related to data flow, transformation, and persistence throughout the system.
  - Example: Enter → Search → Report → Export → Import → Update → View
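A compressed version of this idea is a round trip: a record is exported and re-imported, and integrity is checked at the transition. JSON here stands in for whatever export format the real system uses:

```python
# Follow-the-data sketch: enter -> export -> import -> view,
# verifying the data survives each transition intact.
import json

record = {"id": 7, "name": "Widget", "price": 9.99}  # enter
exported = json.dumps(record)                        # export
imported = json.loads(exported)                      # import
assert imported == record                            # view: nothing lost
```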
- Configurations: Test the system under different configuration settings and environments to uncover issues related to compatibility, performance, and resource management. Vary configuration-related variables such as:
  - Screen resolution: Check how the user interface adapts to various display sizes and resolutions, ensuring proper layout and usability.
  - Network speed, latency, signal strength: Simulate slow, unstable, or intermittent network conditions to verify system responsiveness, error handling, and data synchronization.
  - Memory, disk availability: Adjust available system resources to observe behavior under low-memory or low-disk scenarios, checking for graceful degradation and appropriate warnings.
  - Peripheral count: Test with different numbers and types of connected devices (e.g., 0, 1, many monitors, mice, printers) to ensure correct detection, configuration, and operation.
- Interruptions: Test how the system responds to unexpected events or disruptions that interrupt normal operation. This helps identify issues with data integrity, recovery, and user experience. Consider the following scenarios:
  - Log off: User logs out during an active session or operation.
  - Shut down: System or device is shut down while tasks are in progress.
  - Reboot: System restarts unexpectedly during use.
  - Kill process: Application process is forcibly terminated.
  - Disconnect: Network connection is lost or interrupted.
  - Hibernate: Device enters sleep or hibernate mode during activity.
  - Timeout: Operations or sessions expire due to inactivity.
  - Cancel: User cancels an operation before completion.
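The cancel scenario can be sketched with a job that checks a cancel flag between steps and stops cleanly, leaving partial work recoverable; the job and its steps are illustrative assumptions:

```python
# Interruption sketch: a long-running job honors a cancel signal
# between steps instead of finishing or corrupting its work.
def run_job(steps, cancelled):
    """Process steps until done or cancelled; return completed work."""
    done = []
    for step in steps:
        if cancelled():
            break  # stop cleanly; caller can resume from `done`
        done.append(step)
    return done

# Simulate the user cancelling after two steps:
calls = iter([False, False, True])
result = run_job(["a", "b", "c", "d"], cancelled=lambda: next(calls))
assert result == ["a", "b"]
```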
- Starvation: Push system resources to their maximum capacity to observe how the application behaves under stress and resource exhaustion. This approach helps identify issues related to performance degradation, stability, and error handling when resources are scarce or fully consumed. Test scenarios may include:
  - CPU: Run intensive processes or simulate high computational load to check for slowdowns, crashes, or unresponsive behavior.
  - Memory: Allocate large amounts of memory or simulate memory leaks to verify how the system handles low-memory conditions and whether it fails gracefully.
  - Network: Saturate network bandwidth or introduce heavy traffic to assess responsiveness, data loss, and error recovery.
  - Disk: Fill disk space or perform frequent read/write operations to test how the system manages low disk availability, file system errors, and data integrity.
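One way to rehearse the "fails gracefully" check without actually exhausting real memory or disk is to simulate the resource with an artificial quota; the quota and exception below are assumptions for illustration:

```python
# Starvation sketch: an artificial capacity stands in for a real
# resource so graceful failure can be observed deterministically.
class ResourceExhausted(Exception):
    pass

def write_records(records, capacity):
    store = []
    for r in records:
        if len(store) >= capacity:
            # Fail loudly with context instead of silently dropping data.
            raise ResourceExhausted(
                f"capacity {capacity} reached after {len(store)} writes")
        store.append(r)
    return store

try:
    write_records(range(10), capacity=3)
except ResourceExhausted as e:
    assert "capacity 3" in str(e)
```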
- Position: Test how the system handles data or actions at different positions within a sequence, list, or structure. This helps uncover issues related to indexing, ordering, and boundary conditions. Consider the following scenarios:
  - Beginning: Perform operations on the first item or at the start of a sequence (e.g., insert, update, delete the first record).
  - Middle: Interact with items located in the middle of a list or sequence to check for correct handling and navigation.
  - End: Test with the last item or at the end of a sequence, ensuring proper processing and boundary management (e.g., deleting the last entry, navigating to the final page).
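The same delete operation, exercised at the beginning, middle, and end of an illustrative list:

```python
# Position sketch: one operation, three positions.
items = ["first", "mid-a", "mid-b", "mid-c", "last"]

def delete_at(seq, index):
    copy = list(seq)
    del copy[index]
    return copy

assert delete_at(items, 0)[0] == "mid-a"    # beginning: new first item
assert "mid-b" not in delete_at(items, 2)   # middle: item removed
assert delete_at(items, -1)[-1] == "mid-c"  # end: new last item
```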
- Selection: Test how the system handles operations or actions applied to a subset, none, or all items in a group or list. This approach helps uncover issues related to filtering, bulk actions, and edge cases in selection logic.
  - Some: Select a few items and perform actions (e.g., update, delete, export) to verify correct handling and feedback.
  - None: Attempt actions with no items selected to ensure the system prevents unintended operations and provides appropriate warnings or messages.
  - All: Select all items and execute bulk actions to check for performance, accuracy, and proper application of changes across the entire set.
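A sketch of a bulk delete applied to some, none, and all items; the guard against empty selections is the behavior under test, and the data model is an assumption:

```python
# Selection sketch: some, none, all.
def bulk_delete(items, selected):
    if not selected:
        raise ValueError("nothing selected")  # none: warn, don't act
    return [i for i in items if i not in selected]

items = ["a", "b", "c", "d"]
assert bulk_delete(items, {"b", "d"}) == ["a", "c"]  # some
assert bulk_delete(items, set(items)) == []          # all
try:
    bulk_delete(items, set())                        # none
except ValueError:
    pass
```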
- Count: Test how the system handles different quantities of items, entities, or transactions. This approach helps uncover issues related to empty states, singular cases, and scalability. Consider the following scenarios:
  - 0: No items present (e.g., zero transactions, empty lists). Verify that the system displays appropriate messages, prevents invalid actions, and handles empty states gracefully.
  - 1: A single item present (e.g., one transaction, one user). Check for correct processing, display, and handling of edge cases unique to singular instances.
  - Many: Multiple items present (e.g., many simultaneous transactions, large data sets). Assess system performance, accuracy, and usability when managing large quantities, and ensure bulk actions and navigation work as expected.
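A classic place 0/1/many bites is display text: the empty state, the singular case, and plural grammar each need their own branch. The wording below is an illustrative assumption:

```python
# Count sketch: 0, 1, many.
def summary(transactions):
    if not transactions:
        return "No transactions yet"       # 0: dedicated empty state
    if len(transactions) == 1:
        return "1 transaction"             # 1: singular grammar
    return f"{len(transactions)} transactions"  # many

assert summary([]) == "No transactions yet"
assert summary([10]) == "1 transaction"
assert summary([10, 20, 30]) == "3 transactions"
```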
- Multi-User: Test how the system handles simultaneous actions performed by multiple users, either from different accounts or the same account logged in from multiple devices or sessions. This approach helps uncover issues related to concurrency, data consistency, access control, and conflict resolution. Consider scenarios such as:
  - Simultaneous create, update, or delete: Two users (or two sessions of the same user) attempt to create, modify, or delete the same record at the same time. Verify how the system manages conflicts, prevents data loss, and maintains integrity.
  - Concurrent access: Multiple users access and interact with shared resources or data simultaneously. Check for proper synchronization, locking mechanisms, and responsiveness.
  - Session management: The same account is logged in from multiple devices or browsers. Test how the system handles session validity, updates, and potential security concerns.
  - Role-based actions: Different user roles (e.g., admin vs. regular user) perform overlapping or conflicting operations to ensure correct enforcement of permissions and business rules.
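One common conflict-resolution pattern worth probing is optimistic locking: each record carries a version number, and a stale write is rejected rather than silently overwriting the first writer. The record shape below is an assumption:

```python
# Multi-user sketch: optimistic locking with a version counter.
class StaleWrite(Exception):
    pass

record = {"value": "original", "version": 1}

def update(rec, new_value, expected_version):
    if rec["version"] != expected_version:
        raise StaleWrite("record changed since it was read")
    rec["value"] = new_value
    rec["version"] += 1

# Both users read version 1; user A writes first.
update(record, "A's edit", expected_version=1)
try:
    update(record, "B's edit", expected_version=1)  # B's write is stale
except StaleWrite:
    pass
assert record["value"] == "A's edit"  # no silent lost update
```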
- Flood: Test how the system handles a surge of simultaneous transactions, requests, or actions that flood the queue or processing pipeline. This approach helps uncover issues related to concurrency, queuing, rate limiting, and system stability under heavy load. Scenarios may include:
  - Submitting the same request or clicking a button multiple times in rapid succession.
  - Initiating bulk uploads, downloads, or data imports simultaneously.
  - Generating a large number of API calls or user actions at once.
  - Observing system response, error handling, and recovery when overwhelmed by excessive activity.
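One defense worth testing against the double-click scenario is an idempotency key, which collapses rapid duplicate submissions into a single effect; the key scheme below is an illustrative assumption:

```python
# Flood sketch: duplicate submissions collapsed by an idempotency key.
processed = set()
orders = []

def submit(idempotency_key, payload):
    if idempotency_key in processed:
        return "duplicate ignored"
    processed.add(idempotency_key)
    orders.append(payload)
    return "accepted"

# The user double-clicks "Buy" five times in rapid succession:
results = [submit("order-123", {"item": "book"}) for _ in range(5)]
assert results[0] == "accepted"
assert results.count("duplicate ignored") == 4
assert len(orders) == 1  # exactly one order created
```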
- Dependencies: Identify "has a" relationships (e.g., a Customer has an Invoice; an Invoice has multiple Line Items). Apply CRUD, Count, Position, and/or Selection heuristics:
  - Customer has 0, 1, many invoices
  - Invoice has 0, 1, many line items
  - Delete the last line item, then read
  - Update the first line item
  - Some, none, all line items are taxable
  - Delete a customer with 0, 1, many invoices
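The "delete a customer who still has invoices" case can be sketched with a referential-integrity guard; whether the real system blocks or cascades is a business rule, and the blocking behavior here is an assumption:

```python
# Dependency sketch: deleting a Customer who still owns Invoices
# is blocked to protect referential integrity.
customers = {"c1": "Ada", "c2": "Grace"}
invoices = {"i1": "c1"}  # invoice -> owning customer

def delete_customer(cid):
    if any(owner == cid for owner in invoices.values()):
        raise ValueError(f"customer {cid} still has invoices")
    del customers[cid]

delete_customer("c2")      # 0 invoices: delete succeeds
try:
    delete_customer("c1")  # 1 invoice: delete is blocked
except ValueError:
    pass
assert "c1" in customers   # customer with dependents survives
```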
- Constraints: Test how the system enforces and responds to constraints by intentionally violating them. This helps uncover weaknesses in validation, error handling, and business rule enforcement. Consider the following scenarios:
  - Leave required fields blank to verify that the system prompts for missing information and prevents incomplete submissions.
  - Enter invalid combinations in dependent fields (e.g., selecting incompatible options or values) to check for proper validation and error messaging.
  - Enter duplicate IDs or names to ensure the system detects and handles duplicates appropriately, preventing conflicts or data corruption.
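A sketch of deliberately violating a required-field rule and a uniqueness rule; the field names and rules are illustrative assumptions:

```python
# Constraints sketch: violate required-field and uniqueness rules
# on purpose and check the error messages.
existing_ids = {"u1", "u2"}

def create_user(user_id, email):
    if not email:
        raise ValueError("email is required")
    if user_id in existing_ids:
        raise ValueError(f"duplicate id: {user_id}")
    existing_ids.add(user_id)

try:
    create_user("u3", "")        # required field left blank
except ValueError as e:
    assert "required" in str(e)
try:
    create_user("u1", "a@b.c")   # duplicate ID
except ValueError as e:
    assert "duplicate" in str(e)
```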
- Input Method: Test how the system handles data entered through various input methods. This helps uncover issues related to validation, formatting, and compatibility across interfaces. Consider the following approaches:
  - Typing: Manually enter data to check for input validation, error handling, and user experience.
  - Copy/paste: Paste data from external sources to identify problems with formatting, hidden characters, or unexpected input.
  - Import: Use file or data import features to verify correct parsing, error reporting, and handling of different file types or formats.
  - Drag/drop: Test drag-and-drop functionality for files, images, or other objects to ensure smooth operation and proper feedback.
  - Various interfaces (GUI vs. API): Submit data through graphical user interfaces and programmatic APIs to confirm consistent behavior, validation, and error handling across all entry points.
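The copy/paste case is worth a concrete sketch: pasted text can carry hidden characters (zero-width spaces, non-breaking spaces) that typed input never would. Normalizing on entry is one defense; the normalization rules below are an assumption:

```python
# Input-method sketch: pasted text may contain invisible characters.
def normalize(text: str) -> str:
    return (text.replace("\u200b", "")   # strip zero-width space
                .replace("\u00a0", " ")  # map non-breaking space to space
                .strip())

typed = "john.doe@example.com"
pasted = "\u00a0john.doe@example.com\u200b"  # copy/paste debris
assert normalize(pasted) == typed
```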
- Sequences: Vary the order and combination of operations to uncover issues related to workflow, state management, and unexpected interactions. Consider the following scenarios:
  - Undo/redo: Test the system’s ability to correctly reverse or reapply actions, ensuring data integrity and proper state restoration.
  - Reverse: Perform actions in the opposite order of typical workflows to identify dependencies or assumptions in process sequencing.
  - Combine: Execute multiple operations together or in quick succession to observe how the system handles complex or overlapping tasks.
  - Invert: Swap the usual sequence of actions (e.g., delete before create) to check for robustness and error handling.
  - Simultaneous actions: Trigger multiple actions at the same time, either from different users or interfaces, to test concurrency and conflict resolution.
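The undo/redo case has a subtle state-management rule worth testing: performing a new action usually clears the redo history. A minimal sketch, with the editor itself an assumption:

```python
# Sequences sketch: undo/redo stacks; a new action clears redo history.
undo, redo, text = [], [], ""

def do(action):
    global text
    undo.append(text)
    redo.clear()       # new action invalidates redo history
    text = action

def undo_last():
    global text
    redo.append(text)
    text = undo.pop()

def redo_last():
    global text
    undo.append(text)
    text = redo.pop()

do("hello")
do("hello world")
undo_last()
assert text == "hello"
redo_last()
assert text == "hello world"  # state fully restored
```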
- Sorting: Test how the system handles sorting of data in different contexts and formats. Consider the following scenarios:
  - Alphabetic vs. numeric: Verify correct ordering for both text and numerical values, including edge cases like mixed types, case sensitivity, and special characters.
  - Across multiple pages: Ensure sorting is consistent when data spans multiple pages or views, and that navigation between pages preserves the selected sort order.
  - Custom sort criteria: Test sorting by different fields, user-defined criteria, and multi-level sorting (e.g., sort by date, then by name).
  - Performance and accuracy: Assess how sorting performs with large data sets and confirm that results are accurate and complete.
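Two classic alphabetic-vs-numeric traps, shown with illustrative data: lexicographic order puts "10" before "9", and default string sorting is case-sensitive:

```python
# Sorting sketch: numeric fields and user-facing names usually
# need explicit sort keys.
ids = ["9", "10", "2"]
assert sorted(ids) == ["10", "2", "9"]           # lexicographic surprise
assert sorted(ids, key=int) == ["2", "9", "10"]  # numeric intent

names = ["banana", "Apple", "Cherry"]
assert sorted(names) == ["Apple", "Cherry", "banana"]  # uppercase first
assert sorted(names, key=str.lower) == ["Apple", "banana", "Cherry"]
```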
- State Analysis: Identify all possible system states and the events or transitions that move the system between them. Represent these states and transitions visually (e.g., with a diagram or table) to clarify how the system behaves under different conditions. Combine with Sequences and Interruptions to test transitions, unexpected changes, and recovery from abnormal states.
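A transition table makes such an analysis directly testable; the document-workflow states and events below are assumptions for illustration:

```python
# State-analysis sketch: a transition table for a hypothetical
# document workflow. Anything not in the table is invalid.
TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
    ("published", "archive"): "archived",
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"cannot '{event}' from '{state}'")

assert step("draft", "submit") == "review"
assert step("review", "reject") == "draft"
try:
    step("draft", "approve")  # skipping review is not allowed
except ValueError:
    pass
```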
- Map Making: Identify a "base" or "home" state in the system, such as a default screen or initial configuration. From this base, systematically explore by taking one action or navigating in one direction, then returning to the base state after each step. Repeat this process for different actions or paths to ensure all transitions and states are covered. This approach helps uncover issues related to navigation, state management, and unexpected transitions.
- Users & Scenarios: Consider the different types of users and the various scenarios in which they interact with the system. This heuristic helps uncover issues related to usability, access, and edge cases by simulating real-world usage patterns.
  - Use cases: Test the system by following typical workflows or tasks that users are expected to perform, ensuring each step behaves as intended.
  - Soap operas: Create dramatic, complex scenarios that combine multiple actions, interruptions, and edge cases to mimic unpredictable real-world situations.
  - Personae: Develop fictional user profiles representing different backgrounds, skill levels, and needs. Test how each persona interacts with the system, identifying potential usability or accessibility issues.
  - Extreme personalities: Simulate users with unusual or challenging behaviors, such as those who always try to break the rules, enter unexpected data, or use the system in unconventional ways. This helps reveal vulnerabilities and robustness gaps.
Useful Resources
- Tomes, S. (2022, March 31). Test heuristics cheat sheet. Ministry of Testing. https://www.ministryoftesting.com/articles/test-heuristics-cheat-sheet