Papers
Presented at DVCon Feb 2015, San Jose, CA
A popular approach to regression testing is to test every commit to the revision control system with a short test suite. The idea is that if this short test suite fails, then we know which commit caused the problem, and the committer can be automatically notified about his or her mistake.
Presented at the Microprocessor Test and Verification Conference (MTVCon13)
The purpose of regression testing is to quickly catch any deterioration in the quality of a product under development. The more frequently tests are run, the earlier new issues can be detected, but frequent testing also places a larger burden on the engineers, who must manually debug all test failures, many of which fail due to the same underlying bug. However, there are software tools that automatically debug test failures back to the faulty change and notify the engineer who made that change. By analyzing data from a real commercial ASIC project, we measured whether bugs are fixed faster when using automatic debug tools compared to manual debugging. We found that bugs that had been automatically debugged were fixed 4 times faster.
Planning for Bugs
Bugs appear seemingly at random during the development of an ASIC, which makes planning a challenge. Here we show that some bug-related metrics are predictable and useful for planning. In this paper we explored four different ASIC projects and found that, on average, every 37th commit introduces a regression bug. We also found that the median bug is small.
Presented at MTVCon Dec 2014, Austin, TX
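The bug-rate figure from this paper translates directly into a planning estimate: the expected number of new regression bugs over a period is simply the commit count divided by 37. A minimal worked example, where the weekly commit rate is a hypothetical assumption:

```python
# Planning estimate based on the paper's finding that, on average,
# every 37th commit introduces a regression bug.
COMMITS_PER_BUG = 37      # measured across the four ASIC projects
commits_per_week = 250    # assumed team commit rate (illustrative only)

expected_bugs_per_week = commits_per_week / COMMITS_PER_BUG
print(f"{expected_bugs_per_week:.1f}")  # ~6.8 new regression bugs per week
```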
Lint tools analyze RTL statically and report code segments that do not comply with the selected coding guidelines. They are quick to run, and because the error messages are very precise, the issues are easy to fix. Fixing a linting issue requires far fewer resources than finding and fixing the same issue during simulation of a test. However, the large number of error and warning messages that lint tools produce is a problem.