PinDown concluded that this failure is an always failing bug, i.e. it is not due to a recent mistake. The failure has always been there, at least as far back as the user-defined earliest revision. It occurs on every tested revision, both back in time and on the latest revision. With constrained random testing this often happens when a new test scenario uncovers an issue that has always been present.
The Historical Causes section provides some extra information: the same type of failure has occurred before, and back then, unlike now, it was due to a bad commit. The committed files in these old bad commits are listed as "Historical Causes" to provide a hint on what type of functionality may be involved. Sometimes it helps to know which sub-module normally causes a certain type of failure message.
|The confidence level for the Historical Causes section is low; it is just there as a hint on which sub-module has been causing this type of failure message before. However, the confidence level for an always failing bug is high: the chance that it is actually a regression bug after all tested revisions fail is low.|
In this example the test fails because the test scenario is new. The test has never passed, so we can exclude any recent mistake as the cause. This means that the problem was not caused by a bad commit.
In the "Historical Causes" section we can see that this type of failure is normally linked to the USB, more specifically to the usb_io.v file (the usb_top.v file probably just instantiates usb_io.v, so we can probably ignore it). This gives us an idea of where the problem may lie. We probably need to fix the USB IO, or something related such as this test, to make the test pass for the very first time.
There is actually an alternative explanation. The test may fail due to a recent mistake, i.e. a bad commit, but only if that commit happened more than 24 hours ago. If you look at the first line of the bug report above you will find the following text: "…the user-defined debug limit, which is set to 1 day, limit is 10582918, Jul 1". This means that the user who set up PinDown has told it to only debug 1 day's worth of revision history, so any bad commit older than that will not be found.
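The effect of the debug limit can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name and the data shapes are invented here, not part of PinDown.

```python
from datetime import datetime, timedelta

def revisions_within_debug_limit(revisions, debug_limit_days=1, now=None):
    """Hypothetical sketch: keep only revisions young enough to debug.

    `revisions` is a list of (revision_id, commit_time) tuples.
    A bad commit older than the debug limit is never examined,
    so it can never be identified as the cause of a failure.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=debug_limit_days)
    return [(rev, t) for rev, t in revisions if t >= cutoff]
```

With a 1-day limit, a bad commit from last week simply falls outside the examined window, which is why the failure is then reported as always failing instead.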
How Did PinDown Reach This Conclusion?
The Historical Causes section is based on a lookup in the PinDown database for similar failure messages in previous test runs. PinDown looks for old results in the database that match the test failure in the bug report, more specifically results that match these criteria:
The same test name as in the bug report
The same build name as in the bug report
The failure message must be similar, but not necessarily exactly the same
For randomly generated tests, any seed number is considered, not just the specific seed number in the bug report
Only historical regression bugs are considered, i.e. bugs which were due to bad commits. Using old regression bugs here produces a statistical link between file updates and failure messages, which can be useful to be aware of even for always failing bugs such as this one.
If many results are found, the list of files is sorted by the number of commits to each file that led to this type of failure message.
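Assuming each old result stores its test name, build name, failure message and the files of the bad commit, the matching and sorting described above can be sketched as follows. This is a hypothetical model of the criteria in the text, not PinDown's actual database query.

```python
from collections import Counter
from difflib import SequenceMatcher

def historical_causes(db_results, bug, similarity=0.8):
    """Hypothetical sketch of the Historical Causes lookup.

    `db_results` is a list of old regression bugs, each a dict:
    {"test": ..., "build": ..., "message": ..., "files": [...]}.
    `bug` is the current bug report in the same shape.
    """
    counts = Counter()
    for old in db_results:
        if old["test"] != bug["test"]:
            continue                  # must be the same test name
        if old["build"] != bug["build"]:
            continue                  # must be the same build name
        # the failure message must be similar, not necessarily identical;
        # seed numbers are deliberately ignored
        ratio = SequenceMatcher(None, old["message"], bug["message"]).ratio()
        if ratio < similarity:
            continue
        counts.update(old["files"])   # one bad commit per historical bug
    # files sorted by how many bad commits to them led to this failure type
    return [f for f, _ in counts.most_common()]
```

In the example from the text, usb_io.v would rank first because more bad commits to it have produced this failure message than commits to any other file.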
Always Failing Bug
This bug is essentially an always failing bug. PinDown reached this conclusion by testing the following during the debug phase (using the same test and seed as in the test phase):
The test fails when retested a second time on the same revision as during the test phase
The test fails on all older revisions as well. It never passes.
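The debug-phase reasoning above can be sketched as a short decision procedure. This is hypothetical: `run_test` and the classification labels are invented for illustration and do not correspond to PinDown's internals.

```python
def classify_failure(run_test, revisions):
    """Hypothetical sketch of the debug-phase conclusion.

    `run_test(revision)` returns True on pass, False on fail.
    `revisions` is ordered newest first; the first entry is the
    revision that failed during the test phase.
    """
    # Step 1: retest on the same revision, with the same test and seed
    if run_test(revisions[0]):
        return "intermittent"       # failure did not reproduce
    # Step 2: walk back through the older revisions
    for rev in revisions[1:]:
        if run_test(rev):
            return "regression"     # a passing revision exists: bad commit after it
    return "always_failing"         # fails on every revision, never passes
```

Only when the failure reproduces on the same revision and no older revision passes does the report become an always failing bug.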
Potential Stability Issues
The Historical Causes section does not have any stability issues; it is just a read from the PinDown database.
For always failing bugs, in rare cases this type of bug report may indicate an underlying problem:
Random Instability: With constrained random testing, reproducing the same test scenario using the same seed number is not possible if the testbench is too different from the revision of the testbench that was used during the test phase. This may cause the test to fail on every revision older than such a major testbench update.
Uncaptured Randomness: With constrained random testing, it is important to capture all randomly generated inputs to the test. If there is a random generator which is not controlled by a seed, then this random generator needs to be turned off or captured in some other way (perhaps with a separate seed). Otherwise we are not retesting the same test. A variant of this scenario is running in a very flaky environment, where the hardware on which the tests run differs significantly from machine to machine. This is not the case with computer farms, but when testing ASIC samples and PCB boards in an early development phase it may be. In this scenario it is important to re-run on exactly the same piece of hardware, again to minimise random behaviour. Some instability can be handled with the set_error_response command.
PinDown Setup Problem: If PinDown has been set up incorrectly, e.g. without the correct criteria for pass and fail, then PinDown will interpret the results incorrectly (see the Extraction User Guide).
Severe IT Issues: If all failures are due to IT issues, both in the test phase and when re-running the tests during the debug phase, then there may not be any real failures at all, just intermittent IT issues. In this scenario you should increase the number of times a test is repeated in the debug phase to make sure it is a real deterministic failure before debugging starts (see the set_error_response command).
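Two of the points above, uncaptured randomness and intermittent IT issues, can be illustrated with a small Python sketch. The function names are invented for illustration; the only PinDown-specific fact assumed is that the repeat count is configured with set_error_response, as stated above.

```python
import random

def generate_stimulus(seed, n=5):
    """All randomness is derived from one seed, so exactly the same
    stimulus can be regenerated during the debug phase."""
    rng = random.Random(seed)            # seeded: fully reproducible
    return [rng.randint(0, 255) for _ in range(n)]

def is_deterministic_failure(run_test, repeats=3):
    """Treat a failure as real only if it reproduces on every repeat,
    filtering out intermittent IT issues before debugging starts.
    (Hypothetical helper; in PinDown the repeat count is set via
    the set_error_response command.)"""
    return all(not run_test() for _ in range(repeats))

# The same seed always yields the same stimulus. An unseeded random
# generator mixed into stimulus generation would break this equality,
# and the debug phase would no longer be retesting the same test.
assert generate_stimulus(42) == generate_stimulus(42)
```

If the seed fully controls the stimulus, rerunning with the bug report's seed reproduces the test; if any randomness is left uncaptured, or the environment itself is flaky, the repeat check above is what separates a real deterministic failure from noise.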