// This file is part of Megatest.
//
// Megatest is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// Megatest is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with Megatest.  If not, see <http://www.gnu.org/licenses/>.

Reference
---------

Megatest Use Modes
~~~~~~~~~~~~~~~~~~

.Base commands
[width="80%",cols="^,2m,2m",frame="topbot",options="header"]
|======================
|Use case                                 | Megatest command                        | mtutil
|Start from scratch                       | -rerun-all                              | restart
|Rerun non-good completed                 | -rerun-clean                            | rerunclean
|Rerun all non-good and not completed yet | -set-state-status KILLREQ; -rerun-clean | killrerun
|Continue run                             | -run                                    | resume
|Remove run                               | -remove-runs                            | clean
|Lock run                                 | -lock                                   | lock
|Unlock run                               | -unlock                                 | unlock
|Kill a run                               | -set-state-status KILLREQ; -kill-run    | killrun
|======================

Config File Helpers
~~~~~~~~~~~~~~~~~~~

Various helpers for more advanced config files.

.Helpers
[width="80%",cols="^,2m,2m,2m",frame="topbot",options="header"]
|======================
|Helper | Purpose | Valid values | Comments
| #{scheme (scheme code...)}    | Execute arbitrary scheme code                 | Any valid scheme       | Value returned from the call is converted to a string and processed as part of the config file
| #{system command}             | Execute program, inserts exit code            | Any valid Unix command | Discards the output from the program
| #{shell command} or #{sh ...} | Execute program, inserts result from stdout   | Any valid Unix command | Value returned from the call is converted to a string and processed as part of the config file
| #{realpath path} or #{rp ...} | Replace with normalized path                  | Must be a valid path   |
| #{getenv VAR} or #{gv VAR}    | Replace with content of env variable          | Must be a valid var    |
| #{get s v} or #{g s v}        | Replace with variable v from section s        | Variable must be defined before use |
| #{rget v}                     | Replace with variable v from target or default of runconfigs file | |
| #{mtrah}                      | Replace with the path to the megatest testsuite area | |
|======================

Config File Settings
~~~~~~~~~~~~~~~~~~~~

Settings in megatest.config

Config File Additional Features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Including the output from a script as if it were inline to the config file:

-------------------------
[scriptinc myscript.sh]
-------------------------

If the script outputs:

-------------------------
[items]
A a b c
B d e f
-------------------------

Then the config file effectively contains an items section exactly like the
output from the script. This is useful when dynamically creating items,
itemstables and other config structures. You can see the expansion of the call
by looking in the cached files (look in your linktree for the megatest.config
and runconfigs.config cache files, and in your test run areas for the expanded
and cached testconfig).
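For instance, a small generator script along the following lines (the script
name gen-items.sh and its output are purely illustrative) could be included
with +[scriptinc gen-items.sh]+ to produce the items section shown above:

-------------------------
#!/bin/bash
# gen-items.sh -- sketch of a generator script for [scriptinc].
# Everything printed to stdout is parsed as if it were part of the
# config file, so we emit a complete [items] section here.
echo "[items]"
echo "A a b c"
echo "B d e f"
-------------------------
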

Wildcards and regexes in Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-------------------------
[a/2/b]
VAR1 VAL1

[a/%/b]
VAR1 VAL2
-------------------------

Will result in:

-------------------------
[a/2/b]
VAR1 VAL2
-------------------------

You can use either the wildcard "%" or a regular expression:

-------------------------
[/abc.*def/]
-------------------------

Disk Space Checks
~~~~~~~~~~~~~~~~~

Some parameters you can put in the [setup] section of megatest.config:

-------------------
# minimum space required in a run disk
minspace 10000000

# minimum space required in dbdir:
dbdir-space-required 100000

# script that takes a path as its parameter and returns the number of bytes available:
free-space-script check-space.sh
-------------------

Trim trailing spaces
~~~~~~~~~~~~~~~~~~~~

NOTE: As of Megatest version v1.6548 trim-trailing-spaces defaults to yes.

------------------
[configf:settings trim-trailing-spaces no]
# |<== next line padded with spaces to here
DEFAULT_INDENT
[configf:settings trim-trailing-spaces no]
------------------

The variable DEFAULT_INDENT would be a string of three spaces.

Job Submission Control
~~~~~~~~~~~~~~~~~~~~~~

Submit jobs to Host Types based on Test Name
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.In megatest.config
------------------------
[host-types]
general nbfake
remote  bsub

[launchers]
runfirst/sum% remote
% general

[jobtools]
launcher bsub
# if defined and not "no" flexi-launcher will bypass launcher unless
# there is no host-type match.
flexi-launcher yes
------------------------

host-types
++++++++++

List of host types and the command line used to run a job on that host type.

.host-type => launch command
------------
general nbfake
------------

launchers
+++++++++

.test/itempath => host-type
------------
runfirst/sum% remote
------------

Miscellaneous Setup Items
+++++++++++++++++++++++++

Attempt to rerun tests in the "STUCK/DEAD", "n/a" and "ZERO_ITEMS" states.

.In megatest.config
------------------
[setup]
reruns 5
------------------

Replace the default blacklisted environment variables with a user-supplied list.

Default list: USER HOME DISPLAY LS_COLORS XKEYSYMDB EDITOR MAKEFLAGS MAKEF MAKEOVERRIDES

.Add a "bad" variable "PROMPT" to the variables that will be commented out in the megatest.sh and megatest.csh files:
-----------------
[setup]
blacklistvars USER HOME DISPLAY LS_COLORS XKEYSYMDB EDITOR MAKEFLAGS PROMPT
-----------------

Run time limit
++++++++++++++

-----------------
[setup]
# this will automatically kill the test if it runs for more than 1h 2m and 3s
runtimelim 1h 2m 3s
-----------------

Post Run Hook
+++++++++++++

This runs the script to-run.sh after all tests have completed. It is not
necessary to use -run-wait; on completion each test checks for other running
tests, and if there are none it calls the post run hook. Note that the output
from the script call is placed in a log file in the logs directory, with a
file name derived by replacing / with _ in post-hook-<target>-<runname>.log.

-------------------
[runs]
post-hook /path/to/script/to-run.sh
-------------------
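A minimal sketch of a post-hook script follows (the actions shown are
illustrative only); anything it prints to stdout ends up in the log file
described above:

-------------------
#!/bin/bash
# to-run.sh -- post run hook sketch; stdout is captured into
# logs/post-hook-<target>-<runname>.log as described above
echo "post-hook started: $(date)"
# site-specific actions go here, e.g. generating a report or notifying the team
echo "post-hook finished: $(date)"
-------------------
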

Tests browser view
~~~~~~~~~~~~~~~~~~

The tests browser (see the Run Control tab on the dashboard) has two views for
displaying the tests.

. Dot (graphviz) based tree
. No dot, plain listing

The default is the graphviz based tree, but if your tests do not view well in
that mode then use "nodot" to turn it off.

-----------------
[setup]
nodot
-----------------

Capturing Test Data
~~~~~~~~~~~~~~~~~~~

In a test you can capture arbitrary variables and roll them up in the megatest
database for viewing on the dashboard or web app.

.In a test as a script
------------------------
$MT_MEGATEST -load-test-data << EOF
foo,bar, 1.2, 1.9, >
foo,rab, 1.0e9, 10e9, 1e9
foo,bla, 1.2, 1.9, <
foo,bal, 1.2, 1.2, < , ,Check for overload
foo,alb, 1.2, 1.2, <= , Amps,This is the high power circuit test
foo,abl, 1.2, 1.3, 0.1
foo,bra, 1.2, pass, silly stuff
faz,bar, 10, 8mA, , ,"this is a comment"
EOF
------------------------

Alternatively you can use logpro triggers to capture values and inject them
into megatest using the -set-values mechanism:

.Megatest help related to -set-values
------------------------
Test data capture
  -set-values       : update or set values in the testdata table
    :category       : set the category field (optional)
    :variable       : set the variable name (optional)
    :value          : value measured (required)
    :expected       : value expected (required)
    :tol            : |value-expect| <= tol (required, can be <, >, >=, <= or number)
    :units          : name of the units for value, expected_value etc. (optional)
------------------------

Dashboard settings
~~~~~~~~~~~~~~~~~~

.Runs tab buttons, font and size
------------------
[dashboard]
btn-height x14
btn-fontsz 10
cell-width 60
------------------

Database settings
~~~~~~~~~~~~~~~~~

.Database config settings in the [setup] section of megatest.config
[width="70%",cols="^,2m,2m,2m",frame="topbot",options="header"]
|======================
|Var                    | Purpose | Valid values | Comments
|delay-on-busy          | Prevent concurrent access issues | yes\|no or not defined | Default=no, may help on some network file systems, may slow things down also.
|faststart              | Allow direct file access to sqlite db files | yes\|no or not defined | Default=yes, suggest no for central automated systems and yes for interactive use
|homehost               | Start servers on this host | <hostname> | Defaults to local host
|hostname               | Hostname to bind to | <hostname>\|- | On multi-homed hosts allows binding to a specific hostname
|lowport                | Start searching for a port at this portnum | 32768 |
|required               | Server required | yes\|no or not defined | Default=no, force start of server always
|server-query-threshold | Start server when queries take longer than this | number in milliseconds | Default=300
|timeout                | http api timeout | number in hours | Default is 1 minute, do not change
|======================

The testconfig File
-------------------

Setup section
~~~~~~~~~~~~~

Header
^^^^^^

-------------------
[setup]
-------------------

The runscript method is a brute force way to run scripts where the user is
responsible for setting STATE and STATUS.

-------------------
runscript main.csh
-------------------

Iteration
~~~~~~~~~

.Sections for iteration
------------------
# full combinations
[items]
A x y
B 1 2
# Yields: x/1 x/2 y/1 y/2

# tabled
[itemstable]
A x y
B 1 2
# Yields x/1 y/2
------------------

.Or use files
------------------
[itemopts]
slash path/to/file/with/items
# or
space path/to/file/with/items
------------------

.File format for / delimited
------------------
key1/key2/key3
val1/val2/val2
...
------------------

.File format for space delimited
------------------
key1 key2 key3
val1 val2 val2
...
------------------
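As a concrete illustration (the key and value names below are hypothetical,
and this assumes the first line names the keys while each subsequent line
supplies one set of values), a / delimited items file such as:

------------------
RELEASE/DEBUG
r1.2/yes
r1.2/no
r1.3/yes
------------------

would be expected to yield the item paths +r1.2/yes+, +r1.2/no+ and +r1.3/yes+.
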

Requirements section
~~~~~~~~~~~~~~~~~~~~

.Header
-------------------
[requirements]
-------------------

Wait on Other Tests
~~~~~~~~~~~~~~~~~~~

-------------------
# A normal waiton waits for the prior tests to be COMPLETED
# and PASS, CHECK or WAIVED
waiton test1 test2
-------------------

NOTE: Dynamic waiton lists must be capable of being calculated at the
beginning of a run. This is because Megatest walks the tree of waitons to
create the list of tests to execute.

.This works
-------------------
waiton [system somescript.sh]
-------------------

.This does NOT work (the full context for the test is not available, so #{shell ...} cannot be evaluated)
-------------------
waiton #{shell somescript.sh}
-------------------

.This does NOT work
-------------------
waiton [system somescript_that_depends_on_a_prior_test.sh]
-------------------

Mode
~~~~

The default (i.e. if mode is not specified) is normal. All pre-dependent tests
must be COMPLETED and PASS, CHECK or WAIVED before the test will start.

-------------------
[requirements]
mode normal
-------------------

The toplevel mode requires only that the prior tests are COMPLETED.

-------------------
[requirements]
mode toplevel
-------------------

An item-based waiton will start items in a test when the same-named item is
COMPLETED and PASS, CHECK or WAIVED in the prior test. This was historically
called "itemwait" mode. The terms "itemwait" and "itemmatch" are synonyms.

-------------------
[requirements]
mode itemmatch
-------------------

Overriding Environment Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Override variables before starting the test. Can include files (perhaps
generated by megatest -envdelta or similar).

--------------------
[pre-launch-env-vars]
VAR1 value1

# Get some generated settings
[include ../generated-vars.config]

# Use this trick to unset variables
#{scheme (unsetenv "FOOBAR")}
--------------------

Itemmap Handling
~~~~~~~~~~~~~~~~

For cases where the dependent test has a similar but not identical itempath to
the downstream test, an itemmap can allow for itemmatch mode.

.example for removing part of itemmap for waiton test (eg: item +foo-x/bar+ depends on waiton's item +y/bar+)
-------------------
[requirements]
mode itemwait
# itemmap <item pattern for this test> <item replacement pattern for waiton test>
itemmap .*x/ y/
-------------------

.example for removing part of itemmap for waiton test (eg: item +foo/bar/baz+ in this test depends on waiton's item +baz+)
-------------------
#
## pattern replacement notes
#
## Example
## Remove everything up to the last /
[requirements]
mode itemwait
# itemmap <item pattern for this test> <nothing here indicates removal>
itemmap .*/
-------------------

.example replacing part of itemmap (eg: item +foo/1234+ will imply waiton's item +bar/1234+)
-------------------
#
## Example
## Replace foo/ with bar/
[requirements]
mode itemwait
# itemmap <item pattern for this test> <item replacement pattern for waiton test>
itemmap foo/ bar/
-------------------

.example for backreference (eg: item +foo23/thud+ will imply waiton's item +num-23/bar/thud+)
-------------------
#
## Example
## can use \{number} in the replacement pattern to backreference a (capture) from the matching pattern, similar to sed or perl
[requirements]
mode itemwait
# itemmap <item pattern for this test> <item replacement pattern for waiton test>
itemmap foo(\d+)/ num-\1/bar/
-------------------

.example multiple itemmaps
-------------------
# multi-line; matches are applied in the listed order
# The following would map:
#   a123b321 to b321fooa123 then to 321fooa123p
#
[requirements]
itemmap (a\d+)(b\d+) \2foo\1
  b(.*) \1p
-------------------

Complex mapping
~~~~~~~~~~~~~~~

Complex mappings can be handled with a separate [itemmap] section (instead of
an itemmap line in the [requirements] section).

Each line in an itemmap section starts with a waiton test name followed by an
itemmap expression.

.eg: The following causes waiton test A's item +bar/1234+ to run when our test's +foo/1234+ item is requested, as well as causing waiton test B's +blah+ item to run when our test's +stuff/blah+ item is requested
--------------
[itemmap]
A foo/ bar/
B stuff/
--------------

Complex mapping example
~~~~~~~~~~~~~~~~~~~~~~~

// image::itemmap.png[]
image::complex-itemmap.png[]

We accomplish this by configuring the testconfigs of our tests C, D and E as follows:

.Testconfig for Test E has
----------------------
[requirements]
waiton C
itemmap (\d+)/res \1/bb
----------------------

.Testconfig for Test D has
----------------------
[requirements]
waiton C
itemmap (\d+)/res \1/aa
----------------------

.Testconfig for Test C has
----------------------
[requirements]
waiton A B

[itemmap]
A (\d+)/aa aa/\1
B (\d+)/bb bb/\1
----------------------

.Testconfigs for Test B and Test A have no waiton or itemmap configured
-------------------
-------------------

.Walk through one item -- we want the following to happen for testpatt +D/1/res+ (see blue boxes in the complex itemmapping figure above):
. eg from command line +megatest -run -testpatt D/1/res -target mytarget -runname myrunname+
. Full list to be run is now: +D/1/res+
. Test D has a waiton - test C. Test D's itemmap rule +itemmap (\d+)/res \1/aa+ -> causes +C/1/aa+ to run before +D/1/res+
. Full list to be run is now: +D/1/res+, +C/1/aa+
. Test C has a waiton - test A. Test C's rule +A (\d+)/aa aa/\1+ -> causes +A/aa/1+ to run before +C/1/aa+
. Full list to be run is now: +D/1/res+, +C/1/aa+, +A/aa/1+
. Test A has no waitons. All waitons of all tests in the full list have been processed. The full list is finalized.

itemstable
~~~~~~~~~~

An alternative to defining items is the itemstable section. This lets you
define the itempath in a table format rather than specifying components and
relying on getting all permutations of those components.

Dynamic Flow Dependency Tree
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.Autogeneration of the waiton list for dynamic flow dependency trees
-------------------
[requirements]
# With a toplevel test you may wish to generate your list
# of tests to run dynamically
#
waiton #{shell get-valid-tests-to-run.sh}
-------------------

Run time limit
~~~~~~~~~~~~~~

-----------------
[requirements]
runtimelim 1h 2m 3s
# this will automatically kill the test if it runs for more than 1h 2m and 3s
-----------------

Skip
~~~~

A test with a skip section will conditionally skip running.

.Skip section example
-----------------
[skip]
prevrunning x
# rundelay 30m 15s
-----------------

Skip on Still-running Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~

-----------------
# NB// If the prevrunning line exists with *any* value the test will
# automatically SKIP if the same-named test is currently RUNNING. The
# "x" can be any string. Comment out the prevrunning line to turn off
# skip.
[skip]
prevrunning x
-----------------

Skip if a File Exists
~~~~~~~~~~~~~~~~~~~~~

-----------------
[skip]
fileexists /path/to/a/file # skip if /path/to/a/file exists
-----------------

Skip if a File Does not Exist
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-----------------
[skip]
filenotexists /path/to/a/file # skip if /path/to/a/file does not exist
-----------------

Skip if a script completes with 0 status
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-----------------
[skip]
script /path/to/a/script # skip if /path/to/a/script completes with 0 status
-----------------

Skip if test ran more recently than specified time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.Skip if this test has been run in the past fifteen minutes and 15 seconds.
-----------------
[skip]
rundelay 15m 15s
-----------------

Disks
~~~~~

A disks section in the testconfig will override the disks section in
megatest.config. This can be used to allocate disks on a per-test or per-item
basis.

Controlled waiver propagation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the test is FAIL and the previous test in the run with the same MT_TARGET
is WAIVED, or if the test/itempath is listed under the matching target in the
waivers roll-forward file (see below for the file spec), then apply the
following rules from the testconfig: if a waiver check is specified in the
testconfig, apply the check, and if it passes then set this FAIL to WAIVED.

A waiver check has two parts: 1) a list of waivername, rulename and file
patterns, and 2) the rulename script spec (note that "diff" and "logpro" are
predefined).

-----------------
###### EXAMPLE FROM testconfig #########
# matching file(s) will be diff'd with previous run and logpro applied
# if PASS or WARN result from logpro then WAIVER state is set
#
[waivers]
# waivername rulename input_glob
waiver_1 logpro lookittmp.log

[waiver_rules]
# This builtin rule is the default if there is no <waivername>.logpro file
# diff   diff %file1% %file2%
# This builtin rule is applied if a <waivername>.logpro file exists
# logpro diff %file1% %file2% | logpro %waivername%.logpro %waivername%.html
-----------------

Waiver roll-forward files
^^^^^^^^^^^^^^^^^^^^^^^^^

To transfer waivers from one Megatest area to another it is possible to dump
waivers into a file and reference that file in another area.

.Dumping the waivers
---------------------------
megatest -list-waivers -runname %-a > mywaivers.dat
---------------------------

.Referencing the saved waivers
---------------------------
# In megatest.config; all files listed will be loaded - recommended to use
# variables to select directories to minimize what gets loaded.
[setup]
waivers-dirs /path/to/waiver/files /another/path/to/waiver/files
---------------------------

.Waiver files format
---------------------------
[the/target/here]
# comments are fine
testname1/itempath A comment about why it was waived
testname2 A comment for a non-itemized test
---------------------------

Ezsteps
~~~~~~~

Ezsteps is the recommended way to implement tests and automation in Megatest.

NOTE: Each ezstep must be a single line. Use the [scripts] mechanism to create
multiline scripts (see the example below).

.Example ezsteps with logpro rules
-----------------
[ezsteps]
lookittmp ls /tmp

[logpro]
lookittmp ;; Note: config file format supports multi-line entries where leading whitespace
  ;; is removed from each line; a blank line indicates the end of the block of text
  (expect:required in "LogFileBody" > 0 "A file name that should never exist!"
    #/This is an awfully stupid file name that should never be found in the temp dir/)
-----------------
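To sketch how this scales to multiple steps (the step names, commands and
rules below are purely illustrative), each entry in the [logpro] section is
keyed by the name of the ezstep whose output it should check:

-----------------
[ezsteps]
# each step must be a single line
listtmp  ls /tmp
listhome ls $HOME

[logpro]
listtmp ;;
  (expect:error in "LogFileBody" = 0 "No permission-denied errors" #/[Pp]ermission denied/)
listhome ;;
  (expect:required in "LogFileBody" > 0 "Some files were listed" #/\w+/)
-----------------
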

Automatic environment propagation with Ezsteps
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Turn on ezpropvars and environment variables will be propagated from step to
step. Use this to source script files that modify the environment where the
modifications are needed in subsequent steps.

NOTE: aliases and variables with strange whitespace or characters will not
propagate correctly. Put in a ticket on the
http://www.kiatoa.com/fossils/megatest site if you need support for a specific
strange character combination.

.Turn on auto propagate for bash
---------------------------
[setup]
ezpropvars sh
---------------------------

.Write your ezsteps. The loadenv.csh step will use /bin/csh as its shell, other steps will use bash.
---------------------------
[ezsteps]
# if your upstream file is csh you can force csh like this
loadenv.csh source $REF/ourenviron.csh
# if your upstream is bash
loadenv source $REF/ourenviron.sh
compile make
install make install
---------------------------

Bash and csh are supported. You can override the shell binary location from
the default /bin/bash and /bin/csh if needed.

.Turn on auto propagate for csh
---------------------------
[setup]
ezpropvars csh /bin/csh
---------------------------

.Example of auto propagation using extensions
---------------------------
[ezsteps]
step1.sh export SOMEVAR=$(ps -def | wc -l);ls /tmp
# The next step will get the value of $SOMEVAR from step1.sh
step2.sh echo $SOMEVAR
---------------------------

.Example of multi-line script
---------------------------
[scripts]
tarresults tar cfvz $DEST/srcdir1.tar.gz srcdir1
  tar cfvz $DEST/srcdir2.tar.gz srcdir2

[setup]
ezpropvars sh

[ezsteps]
step1 DEST=/tmp/targz;source tarresults
---------------------------

The above example will result in the files tarresults and ez_step1 being
created in the test dir.

Scripts
~~~~~~~

.Specifying scripts inline (best used for only simple scripts)
----------------------------
[scripts]
loaddb #!/bin/bash
  sqlite3 $1 <<EOF
  .mode tabs
  .import $2 data
  .q
  EOF
----------------------------

The above snippet results in the creation of an executable script called
"loaddb" in the test directory.

NOTE: every line in the script must be prefixed with the exact same number of
spaces. Lines beginning with a # will not work as expected. Currently you
cannot indent intermediate lines.

.Full example with ezsteps, logpro rules, scripts etc.
-----------------
# You can include a common file
# [include #{getenv MT_RUN_AREA_HOME}/global-testconfig.inc]

# Use "var" for a scratch pad
#
[var]
dumpsql select * from data;
sepstr .....................................

# NOT IMPLEMENTED YET!
# [ezsteps-addendum]
# prescript  something.sh
# postscript something2.sh

# Add additional steps here. Format is "stepname script"
[ezsteps]
importdb loaddb prod.db prod.sql
dumpprod dumpdata prod.db "#{get var dumpsql}"
diff (echo "prod#{get var sepstr}test";diff --side-by-side \
  dumpprod.log reference.log ;echo DIFFDONE)

[scripts]
loaddb #!/bin/bash
  sqlite3 $1 <<EOF
  .mode tabs
  .import $2 data
  .q
  EOF
dumpdata #!/bin/bash
  sqlite3 $1 <<EOF
  .separator ,
  $2
  .q
  EOF

# Test requirements are specified here
[requirements]
waiton setup
priority 0

# Iteration for your test is controlled by the items section
# The complicated if is needed to allow processing of the config for the dashboard when there are no actual runs.
[items]
THINGNAME [system generatethings.sh | sort -u]

# Logpro rules for each step can be captured here in the testconfig
# note: The ;; after the stepname and the leading whitespace are required
#
[logpro]
inputdb ;;
  (expect:ignore   in "LogFileBody" < 99 "Ignore error in comments" #/^\/\/.*error/)
  (expect:warning  in "LogFileBody"  = 0 "Any warning"              #/warn/)
  (expect:required in "LogFileBody"  > 0 "Some data found"          #/^[a-z]{3,4}[0-9]+_r.*/)

diff ;;
  (expect:ignore   in "LogFileBody" < 99 "Ignore error in comments" #/^\/\/.*error/)
  (expect:warning  in "LogFileBody"  = 0 "Any warning"              #/warn/)
  (expect:error    in "LogFileBody"  = 0 "< or > indicate missing entry" (list #/(<|>)/ #/error/i))
  (expect:error    in "LogFileBody"  = 0 "Difference in data"       (list #/\s+\|\s+/ #/error/i))
  (expect:required in "LogFileBody"  > 0 "DIFFDONE Marker found"    #/DIFFDONE/)
  (expect:required in "LogFileBody"  > 0 "Some things found"        #/^[a-z]{3,4}[0-9]+_r.*/)

# NOT IMPLEMENTED YET!
#
## Also: enhance logpro to take list of command files: file1,file2...
[waivers]
createprod{target=%78/%/%/%} ;;
  (disable:required "DIFFDONE Marker found")
  (disable:error "Some error")
  (expect:waive in "LogFileBody" < 99 "Waive if failed due to version" #/\w+3\.6.*/)

# test_meta is a section for storing additional data on your test
[test_meta]
author matt
owner matt
description Compare things
tags tagone,tagtwo
reviewed never
-----------------

Triggers
~~~~~~~~

Triggers can be specified in your testconfig or megatest.config.

.Triggers spec
-----------------
[triggers]

# Call script running.sh when the test goes to state=RUNNING, status=PASS
RUNNING/PASS running.sh

# Call script running.sh any time state goes to RUNNING
RUNNING/ running.sh

# Call script onpass.sh any time status goes to PASS
PASS/ onpass.sh
-----------------

Scripts called by a trigger will have test-id, test-rundir, trigger,
test-name, item-path, state, status and event-time added to the command line.

HINT: To start an xterm (useful for debugging), use a command line like the following:

.Start an xterm using a trigger for test completed.
-----------------
[triggers]
COMPLETED/ xterm -e bash -s -- 
-----------------

NOTE: There is a trailing space after the double-dash.

There are a number of environment variables available to the trigger script,
but since triggers can be called in various contexts not all variables are
available at all times. The trigger script should check for the variables it
needs and fail gracefully if one doesn't exist.

// ,cols="^,2m"
.Environment variables visible to the trigger script
[width="90%",frame="topbot",options="header"]
|======================
| Variable         | Purpose
| MT_TEST_RUN_DIR  | The directory where Megatest ran this test
| MT_CMDINFO       | Encoded command data for the test
| MT_DEBUG_MODE    | Used to pass the debug mode to nested calls to Megatest
| MT_RUN_AREA_HOME | Megatest home area
| MT_TESTSUITENAME | The name of this testsuite or area
| MT_TEST_NAME     | The name of this test
| MT_ITEM_INFO     | The variables and values for the test item
| MT_MEGATEST      | Which Megatest binary is being used by this area
| MT_TARGET        | The target variable values, separated by '/'
| MT_LINKTREE      | The base of the link tree where all run tests can be found
| MT_ITEMPATH      | The values of the item path variables, separated by '/'
| MT_RUNNAME       | The name of the run
|======================
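Below is a minimal sketch of a trigger script (the script name, log location
and actions are illustrative only). It reads the positional arguments listed
above and checks for the environment variables it needs, exiting quietly if
they are not set:

-----------------
#!/bin/bash
# running.sh -- trigger script sketch (illustrative)
# Positional arguments added by Megatest:
#   test-id test-rundir trigger test-name item-path state status event-time
test_id=$1
test_rundir=$2
trigger=$3

# Not all variables are available in every context; fail gracefully
if [ -z "$MT_TEST_NAME" ] || [ -z "$MT_RUN_AREA_HOME" ]; then
  exit 0
fi

echo "$(date): $MT_TEST_NAME/$MT_ITEMPATH (id $test_id) hit trigger $trigger" \
  >> "$MT_RUN_AREA_HOME/trigger-activity.log"
-----------------
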

Override the Toplevel HTML File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Megatest generates a simple html file summary for top level tests of iterated
tests. The generation can be overridden.

NOTE: the output of the script is captured from stdout to create the html.

.For test "runfirst" override the toplevel generation with a script "mysummary.sh"
-----------------
# Override the rollup for specific tests
[testrollup]
runfirst mysummary.sh
-----------------

Archiving Setup
---------------

In megatest.config add the following sections:

.megatest.config
--------------
[archive]
# where to get bup executable
# bup /path/to/bup

[archive-disks]
# Archives will be organised under these paths like this:
#   <testsuite>/<creationdate>
# Within the archive the data is structured like this:
#   <target>/<runname>/<test>/
archive0 /mfs/myarchive-data/adisk1
--------------

Environment Variables
---------------------

It is often necessary to capture and/or manipulate environment variables.
Megatest has some facilities built in to help.

Capture variables
~~~~~~~~~~~~~~~~~

.Commands
------------------------------
# capture the current environment into a db called envdat.db under
# the context "before"
megatest -envcap before

# capture the current environment into a db called startup.db with
# context "after"
megatest -envcap after startup.db

# write the diff from before to after
megatest -envdelta before-after -dumpmode bash
------------------------------

Dump modes include bash, csh and config.

You can include config data into megatest.config, runconfigs.config and
testconfig files. This is useful for capturing a complex environment in a
special-purpose test and then utilizing that environment in downstream tests.

.Example of generating and using config data
------------------------------
megatest -envcap original
# do some stuff here
megatest -envcap munged
megatest -envdelta original-munged -dumpmode ini -o modified.config
------------------------------

Then in runconfigs.config:

.Example of using modified.config in a testconfig
------------------------------
[pre-launch-env-vars]
[include modified.config]
------------------------------

Managing Old Runs
-----------------

It is often desirable to keep some older runs around, but this must be
balanced with the cost of disk space.

. Use -remove-keep
. Use -archive (can also be done from the -remove-keep interface)
. Use -remove-runs with -keep-records

.For each target, remove all runs but the most recent 3 if they are over 1 week old
---------------------
# use -precmd 'sleep 5;nbfake' to limit overloading the host computer but to allow the removes to run in parallel.
megatest -actions print,remove-runs -remove-keep 3 -target %/%/%/% -runname % -age 1w -precmd 'sleep 5;nbfake'
---------------------

Nested Runs
-----------

A Megatest test can run a full Megatest run in either the same Megatest area
or in another area. This is a powerful way of chaining complex suites of tests
and/or actions. If you are not using the current area you can use ezsteps to
retrieve and set up the sub-Megatest run area.

In the testconfig:

---------------
[subrun]
# Required: wait for the run or just launch it
# if no then the run will be an automatic PASS irrespective of the actual result
run-wait yes|no

# Optional: where to execute the run. Default is the current run area
run-area /some/path/to/megatest/area

# Optional: method to use to determine pass/fail status of the run
#   auto (default) - roll up the net state/status of the sub-run
#   logpro         - use the provided logpro rules, happens automatically if there is a logpro section
# passfail auto|logpro
# Example of logpro:
passfail logpro

# Optional:
logpro ;; if this section exists then logpro is used to determine pass/fail
  (expect:required in "LogFileBody" >= 1 "At least one pass" #/PASS/)
  (expect:error    in "LogFileBody"  = 0 "No FAILs allowed"  #/FAIL/)

# Optional: target translator, default is to use the parent target
target #{shell somescript.sh}

# Optional: runname translator/generator, default is to use the parent runname
run-name #{somescript.sh}

# Optional: testpatt spec, default is to first look for the TESTPATT spec from runconfigs unless there is a contour spec
test-patt %/item1,test2

# Optional: contour spec, use the named contour from the megatest.config contour spec
contour contourname

### NOTE: Not implemented yet! Let us know if you need this feature.
# Optional: mode-patt, use this spec for testpatt from runconfigs
mode-patt TESTPATT

# Optional: tag-expr, use this tag-expr to select tests
tag-expr quick

# Optional: (not yet implemented, remove-runs is always propagated at this time), propagate these actions from the parent
# test
# Note// default is % for all
propagate remove-runs archive ...
---------------

Programming API
---------------

These routines can be called from the megatest repl.

.API Keys Related Calls
[width="70%",cols="^,2m,2m,2m",frame="topbot",options="header,footer"]
|======================
|API Call                       | Purpose | Returns            | Comments
|(rmt:get-keys run-id)          |         | ( key1 key2 ... )  |
|(rmt:get-key-val-pairs run-id) |         | #t=success/#f=fail | Works only if the server is still reachable
|======================

:numbered!: