Looking at our unit-tests, a lot of them are tests that simply issue a bunch of poke + step + expect calls.
Is there a way to get the svsim system to output this sequence to a yaml/json file without going through elaboration?
A colleague said that these are called "sequences" in UVM.
If so, then I could set up bazel to generate the sequences and have an action that uses a small C++ program to parse these poke + step + expect yaml/json sequences in a cc_test() on elaborated + firtool + verilated modules. The C++ side would then have to read in a string, apply it to the right input pin, and read/check the output pins. This is a mapping from a dynamic name in the yaml/json to a static name in C++.
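For illustration, a recorded trace could look something like the following. The schema is purely hypothetical, and the module/port names and values are made up:

```json
{
  "module": "GCD",
  "sequence": [
    { "op": "poke",   "port": "a",           "value": "12" },
    { "op": "poke",   "port": "b",           "value": "8" },
    { "op": "poke",   "port": "loadValues",  "value": "1" },
    { "op": "step",   "cycles": 1 },
    { "op": "poke",   "port": "loadValues",  "value": "0" },
    { "op": "step",   "cycles": 3 },
    { "op": "expect", "port": "resultValid", "value": "1" },
    { "op": "expect", "port": "result",      "value": "4" }
  ]
}
```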
Some developers might prefer to write sequences such as the above directly in .json/yaml, or to capture them from integration tests too.
Of course this would not work for more complicated test-cases that use peek() to modify the control flow in the Scala unit-test, or that use peek() to implement their own check. Still, it could be useful to log peek() for human inspection, to implement warnings or statistics in the cc_test(), or for some feature that my imagination fails to capture.
For even faster test regression execution, it would be possible to pin the test-trace artifact and only update it when the test actually changes. It is a little bit tricky to express the dependencies for the test-trace accurately: it depends on the unit-test and the test interface, not on the device under test. However, svsim does capture this concept in ModuleInfo, though it is a bit unclear to me how type information is captured. Does peek/poke/expect use the ModuleInfo extracted from the DUT?
```scala
ModuleInfo(
  name = "GCD",
  ports = Seq(
    new ModuleInfo.Port(
      name = "clock",
      isSettable = true,
      isGettable = true
    ),
    new ModuleInfo.Port(
      name = "a",
      isSettable = true,
      isGettable = true
    ),
    // ... remaining ports elided
  )
)
```
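To make the capture side concrete, here is a minimal sketch (not svsim API; the `Recorder`, the `Op` types, and the JSON field names are all made up) of a recorder that takes a module name, as in the ModuleInfo above, buffers poke/step/expect calls, and renders them into the hypothetical JSON format sketched earlier:

```scala
// A minimal, dependency-free sketch of a poke/step/expect trace recorder.
// Nothing here is part of svsim; it only illustrates the idea.
object SequenceRecorderSketch {

  sealed trait Op
  final case class Poke(port: String, value: BigInt) extends Op
  final case class Step(cycles: Int) extends Op
  final case class Expect(port: String, value: BigInt) extends Op

  final class Recorder(moduleName: String) {
    private val ops = scala.collection.mutable.ArrayBuffer.empty[Op]

    def poke(port: String, value: BigInt): Unit = ops += Poke(port, value)
    def step(cycles: Int = 1): Unit = ops += Step(cycles)
    def expect(port: String, value: BigInt): Unit = ops += Expect(port, value)

    private def render(op: Op): String = op match {
      case Poke(p, v)   => s"""    { "op": "poke", "port": "$p", "value": "$v" }"""
      case Step(n)      => s"""    { "op": "step", "cycles": $n }"""
      case Expect(p, v) => s"""    { "op": "expect", "port": "$p", "value": "$v" }"""
    }

    // Render the trace by hand to keep the sketch free of JSON libraries.
    def toJson: String =
      Seq(
        "{",
        "  \"module\": \"" + moduleName + "\",",
        "  \"sequence\": [",
        ops.map(render).mkString(",\n"),
        "  ]",
        "}"
      ).mkString("\n")
  }

  def main(args: Array[String]): Unit = {
    // Illustrative values only; real port names would come from the DUT's interface.
    val r = new Recorder("GCD")
    r.poke("a", 12)
    r.poke("b", 8)
    r.poke("loadValues", 1)
    r.step()
    r.poke("loadValues", 0)
    r.step(3)
    r.expect("resultValid", 1)
    r.expect("result", 4)
    println(r.toJson)
  }
}
```

The cc_test() side would do the inverse: parse the JSON and map each port string onto the corresponding signal of the verilated module.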
This also gives me an idea for how to address the "missing" fork/join() feature of svsim. The recorded peek/poke/expect output could contain several independent "threads", and the cc_test() would then launch multiple threads (or handle them in a single thread). The use-case would be to have one "thread" issuing transactions (Decoupled) and a separate thread checking expected answers (Valid).
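Sketching that idea in the same hypothetical format, the trace could carry one entry per "thread" and leave the scheduling to the cc_test() (again, all module/port names and timings below are invented):

```json
{
  "module": "MyQueue",
  "threads": [
    {
      "name": "driver",
      "sequence": [
        { "op": "poke", "port": "in_valid", "value": "1" },
        { "op": "poke", "port": "in_bits",  "value": "42" },
        { "op": "step", "cycles": 1 },
        { "op": "poke", "port": "in_valid", "value": "0" }
      ]
    },
    {
      "name": "checker",
      "sequence": [
        { "op": "step",   "cycles": 2 },
        { "op": "expect", "port": "out_valid", "value": "1" },
        { "op": "expect", "port": "out_bits",  "value": "42" }
      ]
    }
  ]
}
```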