PinDown concluded that this failure is not due to a recent mistake, i.e. it is not a regression bug. The failure has always been present, at least as far back as the user-defined earliest revision: it occurs on every tested revision, both back in history and on the latest revision.

With constrained random testing this often occurs when a new test scenario is created that uncovers an issue that has always been present.

Note: The confidence level for this type of bug report is high. When all tested revisions fail, the chance that the failure is actually a regression bug is low.

Example

[Image: images/bugreport_always_failing_v3.png]

How Did PinDown Reach This Conclusion?

PinDown tested the following during debug to reach this conclusion (using the same test and seed as in the test phase):

  • The test fails when retested a second time on the same revision as during the test phase

  • The test fails on all older revisions as well. It never passes.

If you suspect that there may be a passing revision further back, you can increase the debug window (set_earliest_revision) or increase the number of revisions that should be tested within the debug window (set_diagnosis_optimization -limit), as sketched below.
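For example, a minimal setup sketch along these lines widens the search. The argument values, and the assumption that the setup file accepts Tcl-style # comments, are illustrative; only the command names and the -limit option come from this guide, so check the PinDown command reference for the exact syntax:

    # Extend the debug window further back in history
    # (illustrative value; use a date or revision id as appropriate)
    set_earliest_revision 2024-01-01

    # Allow up to 50 revisions to be tested within the debug window
    # (illustrative limit)
    set_diagnosis_optimization -limit 50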

Potential Stability Issues

In rare cases this type of bug report may indicate an underlying problem:

  • Random Instability: With constrained random testing, reproducing the same test scenario using the same seed number is not possible if the testbench is too different from the revision of the testbench that was used during the test phase. This may cause the test to fail on every single revision older than such a major testbench update.

  • Uncaptured Randomness: With constrained random testing, it is important to capture all randomly generated inputs to the test. If there is a random generator that is not controlled by a seed, then it needs to be turned off or captured in some other way (perhaps with a separate seed). Otherwise we are not retesting the same test. A variant of this scenario is running in a very flaky environment, where the hardware executing the tests differs significantly from machine to machine. This is not normally the case with computer farms, but it may be when testing ASIC samples and PCB boards in an early development phase. In this scenario it is important to re-run on exactly the same piece of hardware, again to minimise random behaviour. Some instability can be handled with the set_error_response command (see the sketch after this list).

  • PinDown Setup Problem: If PinDown has been set up incorrectly, e.g. without the correct criteria for pass and fail, then PinDown will interpret the results incorrectly (see the Extraction User Guide).

  • Severe IT Issues: If all failures are due to IT issues, both in the test phase and when re-running the tests during the debug phase, then there may not be any real failures at all, just intermittent IT issues. In this scenario you should increase the number of times a test is repeated in the debug phase, to make sure a failure is real and deterministic before debugging starts (see the set_error_response command and the sketch below).
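As a sketch, a setting along the following lines makes PinDown re-run each failure several times during the debug phase, filtering out intermittent IT issues and other flakiness before bug hunting begins. The -repeat flag name and value are illustrative assumptions; only the set_error_response command itself is taken from this guide, so consult its documentation for the actual options:

    # Re-run each failing test several times in the debug phase so that
    # only deterministic failures are debugged (illustrative flag/value)
    set_error_response -repeat 3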