
Parameterized control of simulation runs depending on simulation output and/or simulation state Disclosure Number: IPCOM000239072D
Publication Date: 2014-Oct-09
Document File: 5 page(s) / 95K

Publishing Venue

The Prior Art Database


In functional verification, it is often desirable to investigate a given scenario closely (e.g. by generating an all-events trace of all signals). Such a scenario might be a rare case. If it occurs only sporadically during regression runs, the user might never capture an occurrence, because in regression the pass rates of tests are typically close to 100% and the results of passing tests are deleted after a test ends. If an interesting scenario occurs once in tens of thousands of tests, manual runs are not an option. In the following, a method is presented to catch those scenarios. The conditions to terminate a simulation run are specified as parameters.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 50% of the total text.



Introduction. In order to manually recreate a scenario which occurs very rarely in random simulation, the standard way is to restrict randomness so that the probability of the scenario increases to a point where a limited number of tests has a significant chance of hitting the scenario the user is looking for.

However, this may turn out to be a time-consuming and error-prone process, especially in the case of window conditions or when multiple preconditions must match. A better approach is therefore to stop the test if such a scenario occurs, so that whenever the scenario is hit in regression, the test is forced to fail. In this case, all the information necessary to recreate the test is kept, so it is easy to re-run the test in order to produce an all-events trace.

Forcing the test to fail can be done by adding an assert to the VHDL code of the design.

Alternatively, a checker could be added to the verification code. However, changing the environment or the design is often problematic, especially in later stages of development. In addition, it is bad practice to bloat logic or verification code (rtx) by adding code which is not related to any checking, but serves merely to find out whether a scenario occurred.

Generic trigger code. Disclosed in the following is an easy way to achieve this goal: a separate code module interprets parameters in order to provide a generic checking mechanism. Instead of developing a hard-coded checker for each and every specific scenario, a generic mechanism is used to create a checker on the fly. This code resides in a separate code module (the trace monitor) which can be loaded in addition to the other verification code when needed. The advantage of this approach is its independence of project and hardware: it needs to be written only once. What exactly should be checked is specified by parameters at runtime. In our implementation, parameters may be patterns in the trace output file and/or signal values in the device under simulation.

During a simulation run, the trace monitor observes the trace output file written during simulation as well as signals in the simulation model. If a trace message or a signal value matches the specified parameter, the simulation is stopped so it can be re-run with the all-events trace option switched on.
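The observation loop described above can be sketched as follows. The function name scan_trace, the matches callback, and the use of a generic input stream are assumptions made for illustration; the disclosure does not show the monitor's actual code.

```cpp
#include <cassert>
#include <functional>
#include <istream>
#include <sstream>
#include <string>

// Sketch of the observation loop over a line-oriented trace file.
// matches() stands in for the configured pattern/signal check; the caller
// stops the simulation when scan_trace() reports a hit. All names here are
// illustrative, not taken from the disclosure.
bool scan_trace(std::istream& trace,
                const std::function<bool(const std::string&)>& matches) {
    std::string line;
    while (std::getline(trace, line)) {
        if (matches(line)) {
            return true;   // scenario of interest occurred: stop the run
        }
    }
    return false;          // nothing matched so far: keep simulating
}
```

In a real run the monitor would re-poll the growing trace file rather than stop at end-of-stream; that bookkeeping is omitted here.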

Facility values and trace patterns can be given as command-line parameters to the script that starts the simulation, e.g. in the form of
run -parm "TraceMonitor.TracePattern = reject_collision"

or in a separate parameter file, e.g.

TraceMonitor.Facility = { {
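Parameter lines of the `Key = value` form shown above can be split with a small helper. The function name parse_parm and its whitespace-trimming rules are hypothetical; the disclosure does not specify the parsing code.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Illustrative parser for one "Key = value" parameter line, whether given
// on the command line via -parm or read from a parameter file. parse_parm
// and its trimming behavior are assumptions for this sketch, not the
// disclosure's actual script code.
std::pair<std::string, std::string> parse_parm(const std::string& s) {
    auto trim = [](const std::string& t) {
        const auto b = t.find_first_not_of(" \t");
        const auto e = t.find_last_not_of(" \t");
        return b == std::string::npos ? std::string() : t.substr(b, e - b + 1);
    };
    const auto pos = s.find('=');
    if (pos == std::string::npos) {
        return {trim(s), std::string()};   // no '=': whole line is the key
    }
    return {trim(s.substr(0, pos)), trim(s.substr(pos + 1))};
}
```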




If a trace pattern is specified, the trace output file is inspected for a pattern match. After a line is written to the trace output file, it is also sent to a regular-expression parsing engine. In our case, the boost regex library is used. I...
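The per-line pattern check can be sketched as below. The disclosure's implementation uses the boost regex library; std::regex is substituted here so the example is self-contained, at the cost of minor differences in regex syntax.

```cpp
#include <cassert>
#include <regex>
#include <string>

// Check one freshly written trace line against the user-supplied pattern.
// trace_line_matches is an illustrative name; the disclosure feeds lines to
// a boost regex engine rather than std::regex.
bool trace_line_matches(const std::string& line, const std::string& pattern) {
    return std::regex_search(line, std::regex(pattern));
}
```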