Test plan organization
A test plan should be organized to cater to the needs of creation, review, execution and tracking verification to closure.
For the purpose of creation and review, test plans are best organized hierarchically. The primary hierarchy should be one that helps the verification team with the execution activity.
The simplest test plan organization is multiple sections, with each section containing multiple tests related to it. Each test should also list its variations, to cover all the configurations. Each section header can describe the overall verification strategy for that section and broadly which features it covers. Each test description can focus on the test sequence, made up of the stimulus, synchronization and test-specific checks.
Apart from the technical information, the information necessary for execution tracking should stay in the same plan. This makes the test plan usable in everyday execution, and being part of everyday execution tracking helps the test plan stay current. A test plan cannot be completely written at the beginning of the project; it undergoes updates throughout the execution, so it is important for it to stay current.
In order to cater to the various needs of creation, review and tracking execution to closure, a test plan should be maintained in one common source. The source test plan can be maintained in a text file format or as statically initialized data structures in a scripting language. The text format is easy to edit and process in Linux or Unix environments with editors and shell commands.
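As a minimal sketch of the second option, the single source could be a statically initialized Python data structure. All section, test, tag and owner names below are purely illustrative, not a prescribed schema:

```python
# Single-source test plan as a statically initialized Python dict.
# Sections map to a strategy note and a list of test entries; each test
# carries its variations (exact batch-mode commands), flow tags and
# tracking fields in one place.
TEST_PLAN = {
    "error_injection": {
        "strategy": "Inject protocol errors and check graceful recovery",
        "tests": [
            {
                "name": "err_crc_corrupt",
                "description": "Corrupt CRC of a random packet; check the "
                               "error status flag and the recovery sequence",
                # exact batch-mode commands for every variation
                "variations": [
                    "run_test err_crc_corrupt -seed random",
                    "run_test err_crc_corrupt -seed random -burst_mode",
                ],
                "tags": ["low_power", "pre_commit"],  # flow tags
                "owner": "engineer_a",
                "milestone": "M2",
                "status": "PASS",
            },
        ],
    },
}
```

Because the plan is ordinary data, the views and regression lists discussed below reduce to simple traversals of this structure.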
A text file or data structure is not very effective for review by designers and architects. Different views of the test plan, in the form of a spreadsheet, mind map or other such formats, are better suited for review; different views cater to the needs of different audiences. Maintaining multiple copies manually to provide these views is a recipe for disaster: it is practically impossible to keep redundant copies of the same information in sync, and checking whether they are in sync becomes another project in itself. Yet not allowing different views reduces the efficiency and effectiveness of the corresponding stakeholders.
Utilities can be written to generate these views from the same text-based source test plan. Along with the different views, the information relevant for tracking can also be generated from it.
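One such utility, sketched below under the assumption that the source plan is a Python dict of sections each listing tests (all field names are hypothetical), flattens the hierarchy into a spreadsheet-friendly CSV view for review:

```python
import csv
import io

# Hypothetical single-source plan: sections mapping to lists of tests.
PLAN = {
    "normal_operation": {
        "tests": [
            {"name": "basic_read_write", "owner": "engineer_a",
             "milestone": "M1", "status": "PASS"},
            {"name": "back_to_back_writes", "owner": "engineer_b",
             "milestone": "M2", "status": "NYET"},
        ],
    },
}

def plan_to_csv(plan):
    """Generate a flat, spreadsheet-friendly view of the plan for review."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["section", "test", "owner", "milestone", "status"])
    for section, info in plan.items():
        for test in info["tests"]:
            writer.writerow([section, test["name"], test["owner"],
                             test["milestone"], test["status"]])
    return out.getvalue()

print(plan_to_csv(PLAN))
```

The same traversal can just as easily emit a mind-map outline or a status roll-up per section; the point is that every view is generated, never hand-maintained.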
A test plan should contain:
- Technical
- Primary section: The primary section should be selected based on what works for execution. For example, a grouping such as normal operation, error injection, low power tests, performance and miscellaneous might be more effective for everyday execution than a grouping based on specification chapters
- Test name: Follow a naming convention that makes it easy to find tests. The name should broadly reflect the category and the test intent
- Test description: The description of a test should be actionable and concise. It should clearly indicate the expected configuration, any additional test-specific DUT initialization prior to starting the test, and a clear sequence of stimulus and test-specific checks to be performed. The section description should clearly identify the global randomization and global checks commonly done across tests, to avoid duplicating them in every test description. This also spares reviewers from worrying that those checks are missing
- Test variations: Include the information about the variations of the test to be executed, including the exact batch-mode command for each. The regression list used for running tests contains the list of test commands; these commands should be extracted from the test plan to generate the regression list. This gives peace of mind that the regression is running exactly what is present in the test plan
- Low power: If a test is relevant for low power simulations, there should be a provision for tagging it as such. Using this tag, the regression list required for the low power simulations can be generated
- Gate simulations: If a test is relevant for gate-level simulation, it should also be tagged. Additionally, hierarchical, flat or with-package gate simulations can be tagged separately. Using these tags, the regression lists required for the different gate-level simulations can be generated
- Pre-commit: The subset of tests relevant for check-in regressions should be tagged. This allows any changes to test commands to reflect in pre-commit simulations immediately. It also enables easy dynamic updates of the pre-commit test list as execution progresses, to improve check-in regression coverage
- Management
- Owner: Provision to add an owner per test or per section: the name of the engineer responsible for writing and executing the test. This information is useful in assessing workloads per engineer and in assigning regression failure debug
- Milestone: Provision to add priority or milestone information to a test. This lets the prioritization be reflected in the test plan, allowing clear alignment of the verification team to milestones
- Test status: The status of each test in the last reference regression should be stored in the test plan. This allows the status to be visualized for different sections, or for custom groupings of sections, to generate insights about execution status for decisions during design release. Tags to track the status can be NYET, FAIL and PASS
- Review
- Misc.: Additional tags can be added so that hierarchical plans indexed by other useful criteria can be generated, for example a view indexed by the requirement specification. A specification-indexed view may not be helpful for execution but can be helpful during review of the test plan; a milestone-indexed view can be very helpful for tracking but not for writing or reviewing
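The tag-driven regression lists described above can be sketched as a single extraction utility. The plan structure, tag names and `run_test` commands below are assumptions for illustration, not a fixed format:

```python
# Sketch: extract regression lists from the single-source plan by flow tag.
PLAN = {
    "error_injection": {
        "tests": [
            {"name": "err_crc_corrupt",
             "variations": ["run_test err_crc_corrupt -seed random"],
             "tags": ["low_power", "pre_commit"]},
            {"name": "err_timeout",
             "variations": ["run_test err_timeout",
                            "run_test err_timeout -burst_mode"],
             "tags": ["gate_flat"]},
        ],
    },
}

def generate_regression_list(plan, tag=None):
    """Collect batch-mode commands from every test; with a tag, emit only
    the matching subset (e.g. 'low_power', 'gate_flat' or 'pre_commit')."""
    commands = []
    for section in plan.values():
        for test in section["tests"]:
            if tag is None or tag in test.get("tags", []):
                commands.extend(test["variations"])
    return commands

# Full regression versus the check-in (pre-commit) subset.
full = generate_regression_list(PLAN)
pre_commit = generate_regression_list(PLAN, tag="pre_commit")
```

Because every flow-specific list is derived from the one source, retagging a test in the plan immediately updates the low power, gate-level and pre-commit regressions without touching any list by hand.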