The Megatest project was started for two reasons, the first was an immediate and pressing need for a generalized tool to manage a suite of regression tests and the second was the fact that I had written or maintained several such tools at different companies over the years. I thought a single open source tool, flexible enough to meet the needs of any team doing continuous integration and/or running a complex suite of tests for release qualification, would solve some problems for me and for others.
-- Matt Welland, original author of the Megatest tool suite.
Megatest was created to provide a generalized tool for managing suites of regression tests and to provide a multi-host, distributed alternative to "make". The EDA world is littered with proprietary, company-specific tools for this purpose; by going open source and keeping the tool flexible, the hope is that Megatest can be useful to any team at any company for continuous integration and almost any other general automation task.
Note: This road-map is a wish list and not a formal plan.
This test runs a single step called "stepname1" which runs the script "stepname.sh". Note that although it is common to put the actions needed for a test step into a script, it is not necessary.
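As a reminder of the shape of such a step definition, here is a minimal sketch (the step and script names are taken from the sentence above; the script is assumed to be executable and reachable from the test run directory):

[ezsteps]
# stepname1 runs stepname.sh; stdout/stderr are captured in stepname1.log
stepname1 stepname.sh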
Design your tests to surface errors. Ensure that all logs are processed by logpro (or a custom log processing tool) and can be reached by a mouse click or two from the test control panel.
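For example, a minimal logpro rule set might look like the sketch below. The patterns and descriptions are placeholders to be adapted to your tool's output; the important point is the "required" rule, which guards against silently empty or truncated logs:

;; minimal logpro sketch -- patterns are illustrative placeholders
(expect:required in "LogFileBody" > 0 "Run completed message"       #/Run completed/)
(expect:error    in "LogFileBody" = 0 "Any line containing ERROR"   #/ERROR/)
(expect:warning  in "LogFileBody" = 0 "Any line containing WARNING" #/WARNING/)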
To illustrate, here is a set of scripts with nested calls where script1.sh calls script2.sh which calls script3.sh which finally calls the Cadence EDA tool virtuoso:
#!/bin/bash
# script1.sh
code ...
script2.sh some parameters > script2.log
more code ...

#!/bin/bash
# script2.sh
code ...
script3.sh some more parameters > script3.log
more code ...

#!/bin/bash
# script3.sh
code ...
virtuoso params and switches ...
more code ...
The log files script2.log, script3.log and the log output from virtuoso are not accessible from the test control panel. It would be much better for future users of your automation to use steps more fully. One easy option is to post-process the logs in additional downstream steps:
[ezsteps]
step1 script1.sh
step2 cat script2.log
step3 cat script3.log

[logpro]
step1 ;; some logpro rules
  (expect:required in "LogFileBody" > 0 "Expect this output" #/something expected/)
step2 ;; some logpro rules for script2.sh
step3 ;; some logpro rules for script3.sh

[scripts]
script1.sh #!/bin/bash
  code ...

...
With the above testconfig the logs for every critical part of the automation are fully surfaced and rules can be created to flag errors, warnings and aborts and to ignore false errors. A user of your automation will be able to see the important error with two mouse clicks from the runs view.
An even better approach is to eliminate the nesting altogether if possible. As a general rule with layers, less is usually more. By flattening the automation into a sequence of steps you can re-run a single step with one click from the test control panel, or run only the errant step from the command line in the test xterm; see the sketch below.
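For instance, assuming each of the three scripts above can be run on its own (which the nested version does not guarantee), the flattened testconfig could look roughly like this:

[ezsteps]
# each script now runs as its own step, so each gets its own .log/.html
# and can be rerun individually from the test control panel
step1 script1.sh some parameters
step2 script2.sh some parameters
step3 script3.sh some more parameters

[logpro]
step1 ;; rules for script1.sh
step2 ;; rules for script2.sh
step3 ;; rules for script3.sh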
The message here is: make debugging and maintenance easy for future users (and yourself) by keeping clicks-to-error in mind.
From the dashboard click on a test PASS/FAIL button. This brings up a test control panel. Near the center left of the window there is a button "Start Xterm". Push this to get an xterm with the full context and environment loaded for that test. You can run scripts or ezsteps by copying them from the testconfig (hint: load the testconfig in a separate text editor window).
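For example, inside that xterm you might re-run a single step command copied from the [ezsteps] section (the script name below is from the earlier example and assumes the script is present in the test run directory):

# the xterm already has the test's environment loaded, so step commands
# can be run directly; capture the output so it can be inspected afterwards
./script1.sh some parameters > script1.rerun.log 2>&1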
With more recent versions of Megatest you can step through your test from the test control panel. Click on the cell labeled "rerun this step" to rerun only that step, or click on "restart from here" to rerun that step and all downstream steps.
NOTE 1: Visual feedback can take some time; give it a few seconds and you will see the step change color to blue as it starts running.
NOTE 2: Stepping through only works if you are using ezsteps.
Bisecting is a debug strategy intended to speed up finding the root cause of a bug.
It is common to start debugging where the problem was observed and then work backwards. However, by inspecting the output at stage "C" in the example above you can potentially save a lot of debug effort; this is similar to the bisecting feature found in source control tools such as git and fossil.
Most bugs in Megatest-based automation will be in the scripts called by your test steps and, if you follow the good design practices described above, they should be fairly easy to reproduce, isolate and find.
Some bugs, however, will come from subtle and hard-to-detect interactions between Megatest and your OS and Unix environment. This includes things like constructed variables that are legal in one context (e.g. tcsh) but illegal in another (e.g. bash), variables that come from your login scripts, and access and permissions issues (e.g. a script that silently fails because it cannot read needed data). Other bugs might be due to Megatest itself.
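As a concrete illustration of the shell-context class of problem, the same variable assignment can be fine in one shell and an error in the other; trying the line in both shells is a quick sanity check (the variable name and value below are made up):

# bash: a multi-word value simply needs quoting
export SIM_OPTS="-fast -no-warn"

# csh/tcsh: setenv expects a name and a single value, so the unquoted
# form is rejected while the quoted form works
#   setenv SIM_OPTS -fast -no-warn    <- error in csh
setenv SIM_OPTS "-fast -no-warn"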
To isolate bugs like this you may need to look at the log files at various stages in the execution process of your run and tests.
Stage | How to inspect | Watch for or try …
---|---|---
A: post config processing | megatest -show-config -target your/target | #f (failed var processing)
B: post runconfig | megatest -show-runconfig -target your/target | Add -debug 0,9 to see which file your settings come from
C: processing testconfigs | inspect output from "megatest -run …" | Messages indicating issues processing configs, dependency problems
D: processing testconfig for test launch | inspect output from the megatest runner | Zero items (items expansion yielded no items)
E,F: launching test | start the test xterm, look at mt_launch.log | Did your batch system accept the job? Has the job landed on a machine?
G: starting test | look at your batch system's logs for the process | Did the megatest -execute process start and run? Extract the "megatest -execute …" command and run it from your xterm.
H,H1,H2: step execution | look at <stepname>.log, <stepname>.html and your own internal logs | Do you have sufficiently tight logpro rules? You must always have a "required" rule!
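For stages A and B, a quick way to capture and scan the processed configs is sketched below (the target ubuntu/nfs/none is a placeholder; substitute your own):

# stage A: dump the fully processed megatest.config for one target
megatest -show-config -target ubuntu/nfs/none > show-config.log

# stage B: dump the processed runconfigs; -debug 0,9 also reports which
# file each setting came from
megatest -show-runconfig -target ubuntu/nfs/none -debug 0,9 > show-runconfig.log

# "#f" in the output marks a variable whose processing failed
grep '#f' show-config.log show-runconfig.log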
Sometimes finding the environment variable that is causing the problem can be very difficult. Bisection can be applied.
Edit the megatest.csh or megatest.sh file and comment out 50% of the lines each round, source the file in a fresh xterm and run the test.
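A sketch of one bisection round (the variable names and paths are hypothetical, standing in for whatever your megatest.sh actually contains):

# megatest.sh, round 1: second half of the exports temporarily disabled
export PATH=/tools/megatest/bin:$PATH
export PROJECT_AREA=/projects/myflow
# export LM_LICENSE_FILE=1700@licserver     # disabled this round
# export SIM_SETUP=/projects/myflow/setup   # disabled this round

Source the edited file in a fresh xterm, re-run the failing test, and keep halving whichever half still reproduces the problem.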
This idea can also be applied to your .cshrc, .bashrc, .aliases and other similar files.
A common issue when tcsh or csh shells are used for scripting is to forget (or choose not) to use -f on the #! line.
#!/bin/tcsh
# without -f, tcsh sources the user's startup files and the test environment is polluted
...
#!/bin/tcsh -f
...
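One quick way to confirm whether a startup file is being dragged into your scripts is to drop a temporary marker into it and watch for the message when the script runs; a sketch:

# added temporarily near the top of ~/.cshrc
echo "NOTE: .cshrc was sourced"

Remove the marker once you have confirmed where the unexpected settings come from.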
As described above, it is often helpful to know the content of variables in various contexts as Megatest works through the actions needed to run your tests. A handy technique is to force the startup of an xterm in the context being examined.
For example, if an item list is not being generated as expected you can inject the startup of an xterm as if it were an item:
[items]
CELLNAME [system getcellnames.sh]
[items]
DEBUG [system xterm]
CELLNAME [system getcellnames.sh]
When this test is run an xterm will pop up. In that xterm the environment is exactly that in which the script "getcellnames.sh" would run. You can now debug the script to find out why it isn’t working as expected.
Similarly, in a script you can just call the xterm. NOTE: This technique can be very helpful in debugging EDA tools run from Perl, Ruby, Python or Tcl scripts:
some_code();
$cmdline="virtuoso -some-switches and params ...";
print "$cmdline";  # print the command line so you can paste it into the xterm that pops up
system("xterm");   # this line is added for the debug and removed when done
system($cmdline);
more_code();
Run the "env" command to record the environment:
env | sort > stagename.log
In a config file:
#{shell env | sort > stagename.log}

# or

[system env | sort > stagename.log]
In scripts, just insert the commands. This example helps you identify whether "some commands …" changed any environment variables:
env | sort > somefile-before.log
some commands ...
env | sort > somefile-after.log
meld somefile-before.log somefile-after.log
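If meld is not available, any diff tool will do:

diff somefile-before.log somefile-after.log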
To try various values for a variable without mutating the current value:
# bash / sh
SOMEVAR=123 runcmd.sh
# csh / tcsh
(setenv SOMEVAR 123; runcmd.sh)

# or, from any shell
env SOMEVAR=123 runcmd.sh
This section is a collection of various useful tricks for debugging, configuring and generally getting the most out of Megatest that didn’t quite fit elsewhere.
The following example will limit the tests in the jobgroup "group1" to no more than 10 running simultaneously.
In your testconfig:
[jobgroups]
group1 10
custdes 4
The default location "tests" for storing tests can be extended by adding to your tests-paths section.
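A sketch of what this can look like in megatest.config (the paths are placeholders, and the key/path layout shown is an assumption based on the other config sections in this manual):

[tests-paths]
# additional directories searched for tests, alongside the default "tests" directory
1 ../shared-tests
2 /projects/common/megatest-tests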