Functional coverage – Value proposition
Functional coverage complements code coverage by addressing its inherent limitations. This blog will help you understand the key value proposition of functional coverage, which in turn will help you achieve the maximum impact from it.
Functional verification has to address verification requirements drawn from both the requirements specification and the implementation.
Functional coverage’s primary purpose is to find out whether functional verification has done its job well. The means used to achieve that goal don’t matter in this context: it can be all directed tests, constrained random, or emulation.
When functional verification relies on the constrained random approach, functional coverage helps determine the effectiveness of the constraints. It helps confirm that the constraints are correct and are not over-constraining. This aspect of functional coverage has become so popular that it has overshadowed the primary purpose.
Functional coverage focuses on the following three areas, with possible overlap between them, to figure out whether the functional verification objectives are met:
- Randomization or stimulus coverage
- Requirements coverage
- Implementation coverage
In the following sections we will briefly discuss what each of these means.
Randomization or stimulus functional coverage
The uncertainty of constrained random environments is both a boon and a bane. The bane part is addressed with functional coverage. It provides the certainty that randomization has indeed hit the important values we really care about.
At a very basic level it starts with ensuring that each of the randomized variables covers its entire range of possible values. This is not the end of it.
Functional coverage’s primary value for constrained random environments is to quantify the effectiveness of randomization. That by itself does not say anything about the effectiveness or completeness of the functional verification; it just means we have an enabler that can help achieve the functional verification goals.
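As a minimal illustration, here is a SystemVerilog sketch of stimulus coverage on a hypothetical packet transaction; the field names, ranges, and bins are assumptions, not from any particular design:

```systemverilog
// Minimal sketch with a hypothetical packet transaction: check that the
// randomized fields actually span their legal ranges and key combinations.
class packet_txn;
  rand bit [7:0] length;        // assumed legal range: 1..255
  rand bit [1:0] priority_lvl;  // four priority levels

  constraint c_len { length inside {[1:255]}; }

  covergroup cg_stimulus;
    cp_length : coverpoint length {
      bins min_len   = {1};
      bins short_len = {[2:16]};
      bins mid_len   = {[17:128]};
      bins long_len  = {[129:254]};
      bins max_len   = {255};
    }
    cp_priority : coverpoint priority_lvl;       // automatic bins 0..3
    len_x_prio  : cross cp_length, cp_priority;  // corners such as max_len at every priority
  endgroup

  function new();
    cg_stimulus = new();
  endfunction
endclass
```

Calling cg_stimulus.sample() after every successful randomize() call records what the constraints actually produced; a persistent hole in len_x_prio is often the first hint that the constraints are over-constraining.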
It’s the requirements and implementation coverage that really measure the effectiveness and completeness of the functional verification.
Requirements functional coverage
The requirements specification view of functional verification looks at the design from the end-application point of view.
Broadly, it looks at whether the test bench is capable of covering the required scope of verification from the application-scenario point of view. That includes the following (a covergroup sketch follows the list):
- All key atomic scenarios in all key configurations
- Concurrency or decoupling between features
- Application scenario coverage
- Software types of interactions:
  - Polling versus interrupt driven
- Error recovery sequences
- Virtualization and security
- Various types of traffic patterns
- Real-life interesting scenarios like:
  - Reset in the middle of traffic
  - Zero-length transfers for achieving something
  - Various types of low power scenarios with different entry and exit conditions
  - Various real-life delays (can be scaled proportionately)
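As a sketch of how some of the items above can be captured, here is a hedged example with hypothetical configuration and scenario enums; in a real project these bins would come straight from the requirements specification:

```systemverilog
// Hedged sketch with hypothetical enums: the bins come from the requirements
// specification (configurations and application scenarios), not from RTL signals.
typedef enum {CFG_X1, CFG_X2, CFG_X4} link_cfg_e;
typedef enum {SC_NORMAL_TRAFFIC, SC_RESET_MID_TRAFFIC,
              SC_ZERO_LENGTH_XFER, SC_LOW_POWER_ENTRY_EXIT} app_scenario_e;

covergroup cg_requirements with function sample(link_cfg_e cfg, app_scenario_e sc);
  cp_cfg      : coverpoint cfg;
  cp_scenario : coverpoint sc;
  // Every key application scenario should be observed in every key configuration.
  cfg_x_scen  : cross cp_cfg, cp_scenario;
endgroup

cg_requirements cg_req = new();
// e.g. cg_req.sample(CFG_X4, SC_RESET_MID_TRAFFIC); from the scoreboard or virtual sequence
```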
The bottom line is that requirements specification functional coverage should be able to convince someone who understands the requirements that the design will work, without that person knowing anything about the implementation. This is one of the primary values that functional coverage brings over code coverage.
Implementation functional coverage
Implementation-related functional coverage is a highly under-focused area, but it is equally important. Many verification engineers fall into the trap of assuming that code coverage will take care of it, which is only partially true.
Here implementation means the micro-architecture, clocking, reset, pads, and the interfaces to any other analog components.
Micro-architecture details to be covered include internal FIFOs becoming full and empty multiple times, arbiters experiencing various scenarios, concurrency of events, multi-threaded implementations, pipelines experiencing various scenarios, etc.
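A hedged white-box sketch of the FIFO portion, assuming a 16-deep FIFO and hypothetical signal names sampled from the RTL hierarchy (for example via bind):

```systemverilog
// White-box sketch; FIFO depth (16), signal names, and the at_least threshold
// are assumptions. Sampled from the RTL hierarchy on the FIFO clock.
covergroup cg_fifo_ma with function sample(bit fifo_full, bit fifo_empty,
                                           int unsigned level);
  option.at_least = 3;  // each bin must be hit at least 3 times, i.e. "multiple times"

  cp_full_edge  : coverpoint fifo_full  { bins went_full  = (0 => 1); }
  cp_empty_edge : coverpoint fifo_empty { bins went_empty = (0 => 1); }
  cp_level      : coverpoint level {
    bins empty_lvl = {0};
    bins low       = {[1:4]};
    bins mid       = {[5:12]};
    bins high      = {[13:15]};
    bins full_lvl  = {16};
  }
endgroup
```

Setting option.at_least to 3 is what turns "full and empty multiple times" into a measurable goal instead of a single lucky hit.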
Clocking coverage includes whether all the specification-defined clock frequency ranges are covered, the key relations between clocks for multiple clock domains, and special requirements such as spread spectrum, clock gating, etc.
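A sketch of what clocking coverage can look like, assuming the environment measures frequencies in MHz; the ranges, ratios, and names are hypothetical:

```systemverilog
// Sketch, assuming the environment measures frequencies in MHz and the spec
// defines the ranges shown; all numbers and names are hypothetical.
covergroup cg_clocking with function sample(int unsigned core_mhz,
                                            int unsigned bus_mhz,
                                            bit clk_gated_while_active);
  cp_core_freq : coverpoint core_mhz {
    bins min_range = {[50:99]};
    bins typ_range = {[100:399]};
    bins max_range = {[400:500]};
  }
  // Relation between the two clock domains (assumes bus_mhz is non-zero when sampled).
  cp_ratio : coverpoint (core_mhz / bus_mhz) {
    bins ratio_1 = {1};
    bins ratio_2 = {2};
    bins ratio_4 = {4};
  }
  cp_gating : coverpoint clk_gated_while_active { bins seen = {1}; }
endgroup
```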
For resets, cover whether external asynchronous resets arrive at all the key states of the internal state machines. For multi-client or multi-channel designs where channels are expected to operate independently, cover whether one can make progress while another is in reset, etc.
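A sketch of reset coverage with a hypothetical controller FSM and a two-channel independence check; the state names and signals are assumptions, and in practice the two coverpoints would likely be sampled on different events:

```systemverilog
// Sketch with a hypothetical controller FSM: the state coverpoint is meant to be
// sampled on the assertion edge of the external asynchronous reset, so every bin
// answers "reset arrived while the design was in this state".
typedef enum {ST_IDLE, ST_CONFIG, ST_ACTIVE, ST_FLUSH, ST_ERROR} ctrl_state_e;

covergroup cg_async_reset with function sample(ctrl_state_e state,
                                               bit ch0_in_reset,
                                               bit ch1_made_progress);
  cp_state_at_reset : coverpoint state;  // async reset seen in every key internal state
  // Independence: channel 1 kept making progress while channel 0 was held in reset.
  cp_independent : coverpoint ch1_made_progress iff (ch0_in_reset) {
    bins progressed = {1};
  }
endgroup
```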
Coverage of the interfaces to pads and analog blocks is very critical. Making sure all the possible interactions are exercised is still important, whether or not their effects can be seen in simulation.
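A sketch of pad-interface coverage with hypothetical pad-control signals:

```systemverilog
// Sketch with hypothetical pad-control signals: exercise every combination of
// pad mode, drive strength, and pull setting even if the analog effect itself
// is not visible in a digital simulation.
covergroup cg_pad_if with function sample(bit [1:0] pad_mode,
                                          bit [1:0] drive_strength,
                                          bit       pull_en);
  cp_mode  : coverpoint pad_mode;   // e.g. functional, GPIO, test, analog bypass
  cp_drive : coverpoint drive_strength;
  cp_pull  : coverpoint pull_en;
  mode_x_drive_x_pull : cross cp_mode, cp_drive, cp_pull;
endgroup
```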
A combination of white-box and black-box functional coverage addresses both of the above.