Test plan prerequisites
Before jumping into writing the test plan, familiarize yourself with the following prerequisites.
Verification strategy
Understand the verification strategy before writing the test plan. The verification strategy covers the technology used for verification, the division of verification across test benches, and any augmentation of verification by emulation, etc.
Approach: Constrained random verification
Considering the complexity of current DUTs, the popular approach is coverage-driven constrained random verification. In this approach, a self-checking constrained random test bench is built. This test bench will implement at least 80% of the overall checks required. These checks are captured in the checks plan. Most of the randomized parameters are covered by functional coverage.
All the globally randomized configurations and reusable stimulus should be captured in the randomization sheet or in the section descriptions of the test plan. A randomization sheet should list the variable name, the object to which it belongs, the constraint range, and the frequency of randomization. These need not be repeated in the test plan itself.
This allows the test plan writing process to focus on the enumeration of features and scenarios.
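A randomization sheet is just a structured table of the fields above. As a minimal sketch (the entries, field names, and ranges are purely illustrative, not from any real test bench), it can be modeled as:

```python
from dataclasses import dataclass

@dataclass
class RandomizationEntry:
    """One row of a randomization sheet (all names here are hypothetical)."""
    variable: str              # randomized variable name
    owner: str                 # object/class the variable belongs to
    constraint_range: tuple    # legal value range (min, max)
    frequency: str             # how often it is re-randomized

# Illustrative entries for a packet-based protocol test bench
randomization_sheet = [
    RandomizationEntry("packet_size", "packet_item", (1, 255), "per packet"),
    RandomizationEntry("inter_packet_gap", "driver_cfg", (0, 16), "per packet"),
    RandomizationEntry("num_lanes", "global_cfg", (1, 4), "once per test"),
]

for entry in randomization_sheet:
    print(f"{entry.owner}.{entry.variable}: "
          f"range {entry.constraint_range}, randomized {entry.frequency}")
```

Keeping this information in one table, rather than scattered through the test plan, is what lets the plan itself stay focused on features and scenarios.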
Type of tests: Directed vs. Constrained random
Another important consideration is whether to use a directed test or a constrained random test for a given feature or scenario. The simple answer might be to go with constrained random for everything. Yes, that's possible, but remember that constrained random is expensive: it comes with the cost of implementing it and cleaning it up. Constrained random stimulus also requires a self-checking environment, which is a complex task to build.
Constrained random is best suited for:
- Features with a large sequential state space and many events
- Features that are complex and have a high degree of interaction with other features
- Features containing a large number of permutations and combinations of programmable variations
What do we mean by variations? Let's take an example. Consider a communication protocol layer that can send a packet of size 1 to 255 bytes. It is simple to write a test that sends all the packet sizes. But if you want to cover every permutation of two back-to-back packets of different sizes, the problem space explodes: 255 × 254 = 64,770 combinations, to be exact.
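The arithmetic behind this explosion can be checked with a few lines of Python:

```python
# Ordered pairs of two back-to-back packets with *different* sizes,
# sizes drawn from 1..255: 255 choices for the first packet, 254 for the second.
sizes = range(1, 256)

pairs = [(a, b) for a in sizes for b in sizes if a != b]
print(len(pairs))  # 255 * 254 = 64770
```

Enumerating 64,770 directed cases by hand is impractical, which is exactly where constrained random stimulus with functional coverage earns its cost.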
Constrained random might be overkill for:
- Simple features that work in isolation without interacting much with the rest of the design
- Low-probability scenarios that take place only under certain specific conditions
- Important scenarios that have limited possibilities and can be enumerated completely. There is no point in waiting for a constrained random test to hit them; just sweep the limited possibilities
- Scenarios containing very specific, long sequences of stimulus and checks. Typically these also tend to be rare occurrences
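A toy simulation makes the third point concrete (the scenario count is illustrative; this is the classic coupon-collector effect, not a model of any real test bench). A directed sweep covers N enumerable scenarios in exactly N runs, while random selection keeps rediscovering scenarios it has already covered:

```python
import random

N = 20  # hypothetical number of completely enumerable scenarios

# Directed sweep: exactly N runs cover everything.
directed_runs = N

# Random selection: count how many runs it takes to hit all N scenarios.
random.seed(0)  # fixed seed for reproducibility
hit = set()
random_runs = 0
while len(hit) < N:
    hit.add(random.randrange(N))
    random_runs += 1

print(directed_runs, random_runs)  # random_runs is typically several times N
```

The gap only widens as scenarios get rarer, which is why enumerable or low-probability scenarios are better served by directed tests.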
Another important aspect is debugging. Debugging constrained random tests is quite challenging, because there is no sequentially executing code to inspect to see what generated the stimulus.
In a traditional directed test, the debug engineer can correlate the execution log with the test source code, which helps make sense of the scenario executed. With constrained random stimulus generation, the stimulus is pseudo-random and depends on the state and the randomized configurations. There is no direct code reference to correlate with; the scenario has to be figured out from the execution log of the specific test run.
That's why it's important to invest carefully in constrained random stimulus generation, keeping in mind the cost of developing the self-checking infrastructure and the additional complexity of debug.
Functional division
Do a broad functional division to set the focus and establish the scope for each feature. This division mechanism will vary on a case-by-case basis. Let's look at some of the ideas, taking communication protocols as an example.
- For example, consider a layer of a protocol stack. It will have three areas of operation:
  - Layer initialization
  - Layer's service to the upper layer
  - Layer management
- For some protocols, only one of these areas can be active at any given time. These functionalities may have very little interaction between them.
- Not all layer management functionality is equal. Some functions are initiated at a higher frequency than others, and some may be initiated only in very controlled settings. For instance, a test mode will not be initiated randomly in the middle of normal traffic; it will only be initiated while the DUT is idle.