This bug report is a progress report and contains what PinDown knows about the issue so far. It is sent before the conclusive bug report is issued.
The Historical Causes section provides some extra information: the same type of failure has occurred before, and back then it was due to a bad commit. The files committed in these old bad commits are listed as "Historical Causes" in order to hint at what type of functionality may be involved. Sometimes it helps to know which sub-module normally causes a certain type of failure message. When the final report arrives we will know whether the same files were involved this time too.
This debug progress report is enabled by the user (see "progressreports" in the man page for set_mail). PinDown can be set to send a progress report once, to capture the groups of failure signatures (or "buckets"). Alternatively, PinDown can be set to send progress reports continuously during the debug session, once per iteration, all the way until the conclusion. Progress reports are off by default.
This is a progress report, so we do not yet know why the test is failing. However, in the "Historical Causes" section we can see that this type of failure is normally linked to the USB, more specifically to the usb_io.v file (the usb_top.v file probably just instantiates usb_io.v, so we can probably ignore it). This gives us an idea of where the problem may lie. When the final bug report arrives we will see whether this is indeed where the problem was this time too.
How Did PinDown Reach This Conclusion?
The Historical Causes section is based on a lookup in the PinDown database for similar failure messages in previous test runs. PinDown looks for old results in the database that match the test failure in the bug report, more specifically results that match these criteria:
The same test name as in the bug report
The same build name as in the bug report
The failure message must be similar, but not necessarily exactly the same
For randomly generated tests, any seed number is considered, not just the specific seed number in the bug report
Only historical regression bugs are considered, i.e. bugs which were due to bad commits. Using old regression bugs here is a way to produce a statistical link between file updates and failure messages.
If many results are found, then the list of files is sorted by the number of commits to each file that led to this type of failure message.
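The lookup and sorting described above can be sketched in a few lines. This is an illustrative sketch only, not PinDown's actual implementation: the database records, field names, and the fuzzy-matching threshold are all assumptions made for the example.

```python
from collections import Counter
from difflib import SequenceMatcher

def similar(a, b, threshold=0.8):
    # Failure messages must be similar, but not necessarily identical.
    return SequenceMatcher(None, a, b).ratio() >= threshold

def historical_causes(db, test, build, failure_msg):
    counts = Counter()
    for bug in db:
        if bug["test"] != test or bug["build"] != build:
            continue
        if not bug["regression"]:           # only bugs due to bad commits
            continue
        if not similar(bug["failure_msg"], failure_msg):
            continue
        for f in bug["committed_files"]:    # files in the old bad commit
            counts[f] += 1
    # Most frequently implicated files first.
    return [f for f, _ in counts.most_common()]

# Hypothetical historical records (any seed would match, so no seed field):
db = [
    {"test": "usb_rw", "build": "asic", "regression": True,
     "failure_msg": "ERROR: usb timeout at 120ns",
     "committed_files": ["usb_io.v", "usb_top.v"]},
    {"test": "usb_rw", "build": "asic", "regression": True,
     "failure_msg": "ERROR: usb timeout at 340ns",
     "committed_files": ["usb_io.v"]},
]
print(historical_causes(db, "usb_rw", "asic", "ERROR: usb timeout at 95ns"))
```

Sorting by commit count is what pushes the most frequently implicated file (here usb_io.v) to the top of the Historical Causes list.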
PinDown has not yet reached a conclusion when these progress reports are sent out.
Potential Stability Issues
Historical Causes does not have any stability issues of its own; it is just a read from the PinDown database.
Conclusions about stability issues can only be drawn from the final bug report. However, for completeness, these are the types of stability problems that may affect the outcome:
Random Instability: With constrained random testing, reproducing the same test scenario using the same seed number is not possible if the testbench is too different from the revision of the testbench that was used during the test phase. This may cause the test to fail on every single revision older than such a major testbench update.
Uncaptured Randomness: With constrained random testing, it is important to capture all randomly generated inputs to the test. If there is a random generator that is not controlled by a seed, then this random generator needs to be turned off or captured in some other way (perhaps with a different seed). Otherwise we are not re-running the same test. A variant of this scenario is running in a very flaky environment: the hardware units on which the tests run may differ significantly from each other. This is not the case with computer farms, but when you are testing ASIC samples and PCB boards in an early development phase it may be. In this scenario it is important to re-run on exactly the same piece of hardware, again to minimise random behaviour. Some instability can be handled with the set_error_response command.
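The principle of capturing all randomness can be illustrated with a short sketch. This is a hypothetical test harness, not PinDown or testbench code: every random source is derived from one master seed, so a re-run with the same seed replays exactly the same stimulus. A generator that bypassed the master seed would break this reproducibility.

```python
import random

def run_test(master_seed):
    # Single captured seed; every other generator is derived from it.
    rng = random.Random(master_seed)
    stimulus_rng = random.Random(rng.random())  # per-component generators,
    delay_rng = random.Random(rng.random())     # all traceable to the seed
    stimulus = [stimulus_rng.randint(0, 255) for _ in range(4)]
    delays = [delay_rng.randint(1, 10) for _ in range(4)]
    return stimulus, delays

# The same seed reproduces the same test scenario on every re-run.
assert run_test(42) == run_test(42)
```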
PinDown Setup Problem: If PinDown has been set up incorrectly, e.g. without the correct criteria for pass and fail, then PinDown will interpret the results incorrectly (see the Extraction User Guide).
Severe IT issues: If all failures are due to IT issues, both in the test phase and when re-running the tests during the debug phase, then there may not be any real failures at all, just intermittent IT issues. In this scenario you should increase the number of times a test is repeated in the debug phase, to make sure a failure is real and deterministic before debugging starts (see the set_error_response command).
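The repeat-before-debugging idea can be sketched as follows. This is a hypothetical helper, not the actual set_error_response implementation: a failure is only handed to the debugger if it reproduces on every repeat, which filters out intermittent IT issues.

```python
def is_real_failure(run_test, repeats=3):
    # Re-run the test; only a failure on every repeat counts as real.
    return all(run_test() == "fail" for _ in range(repeats))

# A deterministic failure reproduces every time ...
assert is_real_failure(lambda: "fail") is True

# ... while an intermittent IT issue does not, so debugging never starts.
outcomes = iter(["fail", "pass", "fail"])
assert is_real_failure(lambda: next(outcomes)) is False
```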