Preface
This book is organised as three sub-books: getting started, writing tests and reference.
Why Megatest?
The Megatest project was started for two reasons. The first was an immediate and pressing need for a generalized tool to manage a suite of regression tests. The second was that the author had written or maintained several such tools at different companies over the years, and it seemed worthwhile to have a single open source tool flexible enough to meet the needs of any team doing continuous integration and/or running a complex suite of tests for release qualification.
Megatest Design Philosophy
Megatest is intended to provide the minimum needed resources to make writing and running a suite of tests and tasks for continuous build of software, design engineering or process control (via owlfs, for example) straightforward, without being specialized for any specific problem space. Megatest in and of itself does not know what constitutes a PASS or FAIL of a test. In most cases Megatest is best used in conjunction with logpro or a similar tool to parse, analyze and decide on the test outcome.
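For example, a logpro rule file pairs named expectations with regular expressions. The snippet below is only a rough, hypothetical sketch (consult the logpro documentation for exact syntax):

;; example.logpro - hypothetical rules, syntax sketch only
;; require at least one line matching DONE, otherwise the test fails
(expect:required in "LogFileBody" > 0 "Run completed" #/DONE/)
;; any line matching ERROR causes a FAIL
(expect:error in "LogFileBody" = 0 "Errors" #/ERROR/)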
Megatest Architecture
All data specifying the tests and configuring the system is stored in plain text files. All system state is stored in an sqlite3 database. Tests are launched using whatever launching system is available for the distributed compute platform in use. A template script is provided which can launch jobs on local and remote Linux hosts. Currently Megatest uses the network filesystem to call home to your master sqlite3 database.
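As a rough illustration of the plain text configuration (a sketch only; see the reference section for the authoritative list of sections and keys), a small megatest.config might look like:

# megatest.config (illustrative sketch)
[fields]
PLATFORM TEXT
OSTYPE TEXT

[setup]
# maximum number of jobs to run concurrently
max_concurrent_jobs 20
# where the tree of linked run directories is created
linktree /path/to/linktree

[disks]
disk0 /path/to/run/areas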
Road Map
Note 1: This road-map is tentative and subject to change without notice.
Note 2: Starting over. Old plan is commented out.
Current Items
ww05 - migrate to inmem-db

- Switch to inmem db with fast sync to on disk db's [DONE]
- Server polls tasks table for next action
- Task table used for tracking runner process [DONE]
- Task table used for jobs to run
- Task table used for queueing runner actions (remove runs, cleanRunExecute, etc)
Getting Started
How to install Megatest and set it up for running your regressions and continuous integration process.
Installation
Dependencies
Chicken Scheme and a number of "eggs" are required for building Megatest. See the script installall.sh in the utils directory of the distribution for a mostly automated way to install everything needed for building Megatest on Linux.
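As a hedged illustration (the egg names below are placeholders; the authoritative list is in the installall script), eggs are installed with Chicken's own package tool:

# install a few example eggs (placeholder list; see the installall script for the real one)
chicken-install sqlite3 regex srfi-69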
Writing Tests
Creating a new Test
The following steps will add a test "yourtestname" to your testsuite. This assumes starting from a directory where you already have a megatest.config and runconfigs.config.
1. Create a directory tests/yourtestname
2. Create a file tests/yourtestname/testconfig
[ezsteps]
stepname1 stepname.sh

# test_meta is a section for storing additional data on your test
[test_meta]
author myname
owner myname
description An example test
reviewed never
This test runs a single step called "stepname1" which runs the script "stepname.sh". Note that although it is common to put the actions needed for a test step into a script, it is not necessary.
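For completeness, a hypothetical stepname.sh could be as simple as the sketch below; by default the step's exit code is what marks the step as passed or failed (logpro can be layered on for finer-grained analysis):

#!/usr/bin/env bash
# stepname.sh - hypothetical example step; a non-zero exit code marks the step as failed
echo "Running the example step"
exit 0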
How To Do Things
Process Runs
Remove Runs
From the dashboard click on the button (PASS/FAIL…) for one of the tests. From the test control panel that comes up push the clean test button. The command field will be prefilled with a template command for removing that test. You can edit the command, for example change the argument to -testpatt to "%" to remove all tests.
megatest -remove-runs -target ubuntu/nfs/none -runname ww28.1a -testpatt diskperf/% -v
megatest -remove-runs -target %/%/% -runname % -testpatt % -v
Archive Runs
Megatest supports using the bup backup tool (https://bup.github.io/) to archive your tests for efficient storage and retrieval. Archived data can be rapidly retrieved if needed. The metadata for the run (PASS/FAIL status, run durations, time stamps etc.) are all preserved in the megatest database.
For setup information see the Archiving topic in the reference section of this manual.
To Archive
Hint: use the test control panel to create a template command by pushing the "Archive Tests" button.
megatest -target ubuntu/nfs/none -runname ww28.1a -archive save-remove -testpatt %
To Restore
megatest -target ubuntu/nfs/none -runname ww28.1a -archive restore -testpatt diskperf/%
Hint: You can browse the archive using bup commands directly.
bup -d /path/to/bup/archive ftp
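An interactive browse session might look roughly like the following (illustrative only; check the bup documentation for the ftp subcommands supported by your version):

bup -d /path/to/bup/archive ftp
# then at the bup prompt, for example:
#   ls
#   cd <target>/<runname>
#   get <some-file>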
Submit jobs to Host Types based on Test Name
[host-types]
general ssh #{getbgesthost general}
nbgeneral nbjob run JOBCOMMAND -log $MT_LINKTREE/$MT_TARGET/$MT_RUNNAME.$MT_TESTNAME-$MT_ITEM_PATH.lgo

[hosts]
general cubian xena

[launchers]
envsetup general
xor/%/n 4C16G
% nbgeneral

[jobtools]
launcher bsub
# if defined and not "no" flexi-launcher will bypass launcher unless there is no
# match.
flexi-launcher yes
Tricks
This section is a compendium of various useful tricks for debugging, configuring and generally getting the most out of Megatest.
Limiting your running jobs
The following example will limit tests in the jobgroup "group1" to no more than 10 running simultaneously.
In your testconfig:
[test_meta]
jobgroup group1
In your megatest.config:
[jobgroups]
group1 10
custdes 4
Debugging Tricks
Examining The Environment
Test Control Panel - xterm
From the dashboard click on a test PASS/FAIL button. This brings up a test control panel. Approximately at the center left of the window there is a button "Start Xterm". Push this to get an xterm with the full context and environment loaded for that test. You can run scripts or ezsteps by copying from the testconfig (hint: load the testconfig in a separate gvim or emacs window). This is the easiest way to debug your tests.
During Config File Processing
It is often helpful to know the content of variables in various contexts as Megatest does the actions needed to run your tests. A handy technique is to force the startup of an xterm in the context being examined.
For example, if an item list is not being generated as expected you can inject the startup of an xterm as if it were an item:
[items]
CELLNAME [system getcellnames.sh]
[items]
DEBUG [system xterm]
CELLNAME [system getcellnames.sh]
When this test is run an xterm will pop up. In that xterm the environment is exactly that in which the script "getcellnames.sh" would run. You can now debug the script to find out why it isn’t working as expected.
Organising Your Tests and Tasks
The default location "tests" for storing tests can be extended by adding to your tests-paths section.
[misc]
parent #{shell dirname $(readlink -f .)}

[tests-paths]
1 #{get misc parent}/simplerun/tests
The above example shows how you can use additional sections in your config file to do complex processing. By putting the results of relatively slow operations into variables the processing of your configs can be kept fast.
Alternative Method for Running your Job Script
[setup]
runscript main.csh
The runscript method is essentially a brute force way to run scripts where the user is responsible for setting STATE and STATUS and managing the details of running a test.
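A hypothetical sketch of such a runscript is shown below; the -test-status flag syntax is an assumption from memory and should be verified against megatest -h for your version:

#!/bin/tcsh
# main.csh - hypothetical runscript; do the work, then record the final STATE/STATUS ourselves
./run_the_actual_test.sh
set rc = $status
if ( $rc == 0 ) then
  $MT_MEGATEST -test-status :state COMPLETED :status PASS
else
  $MT_MEGATEST -test-status :state COMPLETED :status FAIL
endif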
Debugging Server Problems
Some handy Unix commands to track down issues with servers not communicating with your test manager processes. Please put in tickets at https://www.kiatoa.com/fossils/megatest if you have problems with servers getting stuck.
sudo lsof -i
sudo netstat -lptu
sudo netstat -tulpn
Reference
Megatest Config File Settings
Trim trailing spaces
[configf:settings trim-trailing-spaces yes]
Submit jobs to Host Types based on Test Name
[host-types]
general nbfake
remote bsub

[launchers]
runfirst/sum% remote
% general

[jobtools]
launcher bsub
# if defined and not "no" flexi-launcher will bypass launcher unless there is no
# match.
flexi-launcher yes
host-types
List of host types and the commandline to run a job on that host type.
general nbfake
launchers
runfirst/sum% remote
The testconfig File
Setup section
Header
[setup]
The runscript method is a brute force way to run scripts where the user is responsible for setting STATE and STATUS.
runscript main.csh
Requirements section
Header
[requirements]
Wait on Other Tests
# A normal waiton waits for the prior tests to be COMPLETED
# and PASS, CHECK or WAIVED
waiton test1 test2
Mode
The default (i.e. if mode is not specified) is normal. All pre-dependent tests must be COMPLETED and PASS, CHECK or WAIVED before the test will start.
[requirements]
mode normal
The toplevel mode requires only that the prior tests are COMPLETED.
[requirements]
mode toplevel
An item based waiton will start items in a test when the same-named item is COMPLETED and PASS, CHECK or WAIVED in the prior test. This was historically called "itemwait" mode. The terms "itemwait" and "itemmatch" are synonyms.
[requirements]
mode itemmatch
Itemmap
For cases where the dependent test has a similar but not identical itempath to the downstream test, an itemmap can allow for itemmatch mode.
[requirements]
mode itemmatch
itemmap .*x/ y/

# ## pattern replacement notes
#
# ## Example
# ## Remove everything up to the last /
itemmap .*/
#
# ## Example
# ## Replace foo/ with bar/
itemmap foo/ bar/
[requirements]
# With a toplevel test you may wish to generate your list
# of tests to run dynamically
#
# waiton #{shell get-valid-tests-to-run.sh}
Run time limit
runtimelim 1h 2m 3s # this will automatically kill the test if it runs for more than 1h 2m and 3s
Skip
A test with a skip section will conditionally skip running.
[skip]
prevrunning x
# rundelay 30m 15s
Skip on Still-running Tests
# NB// If the prevrunning line exists with *any* value the test will
# automatically SKIP if the same-named test is currently RUNNING. The
# "x" can be any string. Comment out the prevrunning line to turn off
# skip.
[skip]
prevrunning x
Skip if a File Exists
[skip]
fileexists /path/to/a/file # skip if /path/to/a/file exists
Skip if test ran more recently than specified time
[skip]
rundelay 15m 15s
Controlled waiver propagation
If the test is FAIL and a previous test in a run with the same MT_TARGET is WAIVED, then the following rules from the testconfig are applied: if a waiver check is specified in the testconfig, apply the check, and if it passes set this FAIL to WAIVED.
A waiver check has two parts: 1) a list of waiver name, rulename and file patterns, and 2) the rulename script spec (note that "diff" and "logpro" are predefined).
###### EXAMPLE FROM testconfig #########
# matching file(s) will be diff'd with previous run and logpro applied
# if PASS or WARN result from logpro then WAIVER state is set
#
[waivers]
# logpro_file rulename input_glob
waiver_1 logpro lookittmp.log

[waiver_rules]
# This builtin rule is the default if there is no <waivername>.logpro file
# diff diff %file1% %file2%
# This builtin rule is applied if a <waivername>.logpro file exists
# logpro diff %file1% %file2% | logpro %waivername%.logpro %waivername%.html
Ezsteps
To transfer the environment to the next step you can do the following:
$MT_MEGATEST -env2file .ezsteps/${stepname}
Triggers
In your testconfig, triggers can be specified:
[triggers]
# Call script running.sh when test goes to state=RUNNING, status=PASS
RUNNING/PASS running.sh
# Call script running.sh any time state goes to RUNNING
RUNNING/ running.sh
# Call script onpass.sh any time status goes to PASS
PASS/ onpass.sh
Scripts called will have the test-id, test-rundir and trigger added to the command line.
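A hypothetical trigger script might therefore look like the sketch below (the argument order mirrors the sentence above and should be treated as an assumption to verify):

#!/usr/bin/env bash
# running.sh - hypothetical trigger script; Megatest appends test-id, test-rundir and trigger
test_id=$1
test_rundir=$2
trigger=$3
echo "$(date) trigger=$trigger test-id=$test_id rundir=$test_rundir" >> /tmp/megatest-triggers.log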
HINT
To start an xterm (useful for debugging), use a command line like the following:
[triggers]
COMPLETED/ xterm -e bash -s --
There is a trailing space after the --.
Override the Toplevel HTML File
Megatest generates a simple html file summary for top level tests of iterated tests. The generation can be overridden. NOTE: the output of the script is captured from stdout to create the html.
# Override the rollup for specific tests
[testrollup]
runfirst mysummary.sh
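Because the html is built from whatever the script writes to stdout, a hypothetical mysummary.sh could be as small as the following (the MT_* variables are assumed to be present in the script's environment, as they are for test steps):

#!/usr/bin/env bash
# mysummary.sh - hypothetical rollup script; everything printed to stdout becomes the toplevel html
echo "<html><body>"
echo "<h2>Custom rollup for $MT_TESTNAME in run $MT_RUNNAME</h2>"
echo "</body></html>"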
Archiving Setup
In megatest.config add the following sections:
[archive]
# where to get bup executable
# bup /path/to/bup

[archive-disks]
# Archives will be organised under these paths like this:
#   <testsuite>/<creationdate>
# Within the archive the data is structured like this:
#   <target>/<runname>/<test>/
archive0 /mfs/myarchive-data/adisk1
Programming API
These routines can be called from the Megatest REPL.
API Call | Purpose / comments | Returns | Comments
---|---|---|---
(rmt:login run-id) | Verify that the version, testsuite area etc. are correct. | #( #t "successful login" ) |
(rmt:start-server run-id) | | #( success/fail n/a ) |
(rmt:kill-server run-id) | | #( success/fail n/a ) | Works only if the server is still reachable
(rmt:get-key-val-pairs run-id) | | #t=success/#f=fail | Works only if the server is still reachable
(rmt:get-keys run-id) | | ( key1 key2 … ) |
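A minimal sketch of exercising these interactively (this assumes your Megatest build provides the -repl switch; the run id of 1 is purely illustrative):

megatest -repl
;; at the resulting Scheme prompt, for example:
(rmt:get-keys 1)
(rmt:login 1)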
Megatest Internals