We presented the poster "Predicting Bad Commits" at DVCon, Feb 26 2019, San Jose, CA, where we explored the feasibility of using machine learning to predict bugs. Bug prediction happens before tests are launched, at a point where there is not even a test failure to analyze.

The idea is that the better you can predict bugs, the better you can control the verification flow. You can decide whether to run a large or a small test suite depending on how risky the recent changes are. Debugging also takes less time, because you know the most likely culprits before you start.
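As a rough illustration of this risk-based flow, here is a minimal sketch in Python. The feature names, weights, and suite names are invented for illustration; they are not Verifyter's actual model, which would be trained on historical commit and bug data.

```python
# Hypothetical sketch: routing regression runs by predicted commit risk.
# Features and weights are illustrative, not a real trained predictor.

def commit_risk(features):
    """Score a commit's bug risk in [0, 1] from simple metadata features."""
    # Toy linear model; a real predictor would be trained on historical
    # commit/bug data mined from version control.
    weights = {
        "lines_changed": 0.002,
        "files_touched": 0.05,
        "touches_error_prone_area": 0.4,
    }
    score = sum(weights[name] * value for name, value in features.items())
    return min(score, 1.0)

def pick_test_suite(features, threshold=0.5):
    """Run the large suite for risky commits, the small one otherwise."""
    return "full_regression" if commit_risk(features) >= threshold else "smoke_suite"

risky = {"lines_changed": 400, "files_touched": 12, "touches_error_prone_area": 1}
safe = {"lines_changed": 5, "files_touched": 1, "touches_error_prone_area": 0}
print(pick_test_suite(risky))  # full_regression
print(pick_test_suite(safe))   # smoke_suite
```

The point of the sketch is only the routing decision: a cheap risk score decides how much verification effort each commit gets.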

We also participated in a round table discussion on AI and EDA, which will be published as an article series by Brian Bailey on semiengineering.com.

The participants were Larry Melling (Cadence), Harry Foster (Mentor), Daniel Hansson (Verifyter), Manish Pandey (Carnegie Mellon), Doug Letcher (Metrics) and Raik Brinkmann (OneSpin).

At our booth we talked about bug prediction and automatic debug, and we had quite a few visitors. According to our scientific studies, machine learning attracts more engineers than candy these days :-)

At MTVCon, Dec 10-11 2018, Austin, TX, Christian Graber from Verifyter presented a paper on bug prediction using machine learning, "Boosting Continuous Integration performance with Machine Learning".

This presentation was part of a session focused on how machine learning can be used in conjunction with mining version control data to predict bugs.

As part of this session, Alper Sen, associate professor at Bogazici University, presented a paper called "Predicting Buggy Modules During Virtual Prototype Development", which focused on predicting bugs in SystemC with very good results.

Avi Ziv from IBM Research presented "Mining version control data - are software and hardware the same?", where he talked about what the hardware community can learn from the software community, especially in the field called MSR, Mining Software Repositories, a very active software research community centered around the annual MSR conference. He also described some concrete work that has been done in this area inside IBM.
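To give a flavor of what mining version control data looks like in practice, here is an illustrative MSR-style heuristic (not IBM's method): flag bug-fixing commits by keywords in the commit message, then count fixes per file to find the most fault-prone files. Real studies (e.g. the SZZ algorithm family) go further and trace fixes back to the commits that introduced the bugs. The commit history below is invented.

```python
# Toy MSR heuristic: keyword-match bug-fix commits, count fixes per file.
import re
from collections import Counter

FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|patch)\b", re.IGNORECASE)

def is_bug_fix(message):
    """Crude classifier: does the commit message mention a fix?"""
    return bool(FIX_PATTERN.search(message))

def fault_prone_files(commits):
    """commits: list of (message, [changed files]), e.g. parsed from `git log`."""
    counts = Counter()
    for message, files in commits:
        if is_bug_fix(message):
            counts.update(files)
    return counts.most_common()

history = [
    ("Fix overflow in fifo pointer wrap", ["rtl/fifo.sv"]),
    ("Add burst support", ["rtl/axi_master.sv"]),
    ("Bug in fifo full flag, patch comparator", ["rtl/fifo.sv"]),
]
print(fault_prone_files(history))  # [('rtl/fifo.sv', 2)]
```

Even this crude keyword heuristic already produces a useful per-file fault ranking, which is the kind of signal a bug predictor can be trained on.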

At DVCon Europe, Oct 24-25 2018, Munich, we presented "Enabling Visual Design Verification Analytics – From Prototype Visualizations to an Analytics Tool using the Unity Game Engine", where we showed how the bug reports generated by PinDown, our automatic debugger for regression tests, can be visualized in a cool way that enables an analytical view. It allows you to see which areas of the design are error-prone and need some extra attention, and to identify areas of the design that lack test coverage. Here is a demo. We explored this field earlier this year, but this time we went deeper into the filtering aspects of the visualizer: you can set the time frame and search for a certain activity level/fault ratio to make it easier to analyze the data.
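The filtering step described above can be sketched in a few lines. The record fields, module names, and numbers below are invented for illustration; PinDown's actual report format may differ.

```python
# Hypothetical sketch of the visualizer's filtering: keep per-module
# bug-report records inside a time window whose fault ratio
# (failures / total runs) falls in a chosen band.
from datetime import date

reports = [
    {"module": "decoder", "date": date(2018, 9, 3),   "runs": 200, "failures": 30},
    {"module": "alu",     "date": date(2018, 10, 1),  "runs": 500, "failures": 5},
    {"module": "fifo",    "date": date(2018, 10, 12), "runs": 50,  "failures": 20},
]

def filter_reports(reports, start, end, min_ratio=0.0, max_ratio=1.0):
    """Return modules active in [start, end] with fault ratio in the band."""
    selected = []
    for r in reports:
        ratio = r["failures"] / r["runs"]
        if start <= r["date"] <= end and min_ratio <= ratio <= max_ratio:
            selected.append(r["module"])
    return selected

# Modules in Oct 2018 with a fault ratio above 10%:
print(filter_reports(reports, date(2018, 10, 1), date(2018, 10, 31), min_ratio=0.1))
# ['fifo']
```

Narrowing both the time frame and the fault-ratio band like this is what makes it practical to spot the error-prone areas in a large design.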



We spent most of our time in the booth talking to verification people about how PinDown uses machine learning to predict bugs before verification even starts. This speeds up PinDown's own debug process, and it also allows you to run regressions from a risk point of view (a large test suite for risky commits, a small test suite for safe commits).