test_server_ctrl
This module provides a low level interface to the Test Server.
The test_server_ctrl module provides a low level interface to the Test Server. This interface is normally not used directly by the tester, but through a framework built on top of test_server_ctrl.
Common Test is such a framework, well suited for automated black box testing of target systems of any kind (not necessarily implemented in Erlang). Common Test is also a very useful tool for white box testing Erlang programs and OTP applications. Please see the Common Test User's Guide and reference manual for more information.
If you want to write your own framework, some more information can be found in the chapter "Writing your own test server framework" in the Test Server User's Guide. Details about the interface provided by test_server_ctrl follow below.
Functions
start() -> Result
Result = ok | {error, {already_started, pid()}}
This function starts the test server.
stop() -> ok
This stops the test server and all its activity. The running test suite (if any) will be halted.
add_dir(Name, Dir) -> ok
add_dir(Name, Dir, Pattern) -> ok
add_dir(Name, [Dir|Dirs]) -> ok
add_dir(Name, [Dir|Dirs], Pattern) -> ok
Name = term()
Dir = term()
Dirs = [term()]
Pattern = term()
Puts all suites (modules matching *_SUITE) in the given directories into the job queue. Name is an arbitrary name for the job; it can be any Erlang term. If Pattern is given, only modules matching Pattern* will be added.
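For illustration, a hedged sketch of queuing directory jobs (the directory names and pattern are assumptions, not part of the API):

ok = test_server_ctrl:start(),
%% Queue every *_SUITE module found in two directories as one job:
ok = test_server_ctrl:add_dir(all_suites, ["../app1/test", "../app2/test"]),
%% Queue only modules matching conn* (e.g. conn_SUITE) from one directory:
ok = test_server_ctrl:add_dir(conn_only, "../app1/test", "conn").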
add_module(Mod) -> ok
add_module(Name, [Mod|Mods]) -> ok
Mod = atom()
Mods = [atom()]
Name = term()
This function adds a module, or a list of modules, to the test server's job queue. Name may be any Erlang term. When Name is not given, the job gets the name of the module.
add_case(Mod, Case) -> ok
Mod = atom()
Case = atom()
This function will add one test case to the job queue. The job will be given the module's name.
add_case(Name, Mod, Case) -> ok
Name = string()
Equivalent to add_case/2, but the test job will get the specified name.
add_cases(Mod, Cases) -> ok
Mod = atom()
Cases = [Case]
Case = atom()
This function will add one or more test cases to the job queue. The job will be given the module's name.
add_cases(Name, Mod, Cases) -> ok
Name = string()
Equivalent to add_cases/2, but the test job will get the specified name.
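A hedged example of queuing individual test cases (module and case names are hypothetical):

%% The job is named after the module, i.e. "conn_SUITE":
ok = test_server_ctrl:add_case(conn_SUITE, login),
%% An explicitly named job with several cases from the same module:
ok = test_server_ctrl:add_cases("smoke", conn_SUITE, [login, logout]).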
add_spec(TestSpecFile) -> ok | {error, nofile}
TestSpecFile = string()
This function will add the content of the given test specification file to the job queue. The job will be given the name of the test specification file, e.g. if the file is called test.spec, the job will be called test.
See the reference manual for the test server application for details about the test specification file.
add_dir_with_skip(Name, [Dir|Dirs], Skip) -> ok
add_dir_with_skip(Name, [Dir|Dirs], Pattern, Skip) -> ok
add_module_with_skip(Mod, Skip) -> ok
add_module_with_skip(Name, [Mod|Mods], Skip) -> ok
add_case_with_skip(Mod, Case, Skip) -> ok
add_case_with_skip(Name, Mod, Case, Skip) -> ok
add_cases_with_skip(Mod, Cases, Skip) -> ok
add_cases_with_skip(Name, Mod, Cases, Skip) -> ok
Skip = [SkipItem]
SkipItem = {Mod,Comment} | {Mod,Case,Comment} | {Mod,Cases,Comment}
Mod = atom()
Comment = string()
Cases = [Case]
Case = atom()
These functions add test jobs just like the add_dir, add_module, add_case and add_cases functions above, but carry an additional argument, Skip. Skip is a list of items that should be skipped in the current test run. Test job items that occur in the Skip list will be logged as SKIPPED with the associated Comment.
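For illustration, a sketch of a Skip list using all three SkipItem forms (module and case names are hypothetical):

Skip = [{old_SUITE, "suite is obsolete"},
        {conn_SUITE, login_v1, "replaced by login_v2"},
        {conn_SUITE, [ipv6_open, ipv6_close], "no IPv6 on this host"}],
ok = test_server_ctrl:add_module_with_skip(nightly, [conn_SUITE, old_SUITE], Skip).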
add_tests_with_skip(Name, Tests, Skip) -> ok
Name = term()
Tests = [TestItem]
TestItem = {Dir,all,all} | {Dir,Mods,all} | {Dir,Mod,Cases}
Dir = term()
Mods = [Mod]
Mod = atom()
Cases = [Case]
Case = atom()
Skip = [SkipItem]
SkipItem = {Mod,Comment} | {Mod,Case,Comment} | {Mod,Cases,Comment}
Comment = string()
This function adds various test jobs to the test_server_ctrl job queue. These jobs can be of different type (all or specific suites in one directory, all or specific cases in one suite, etc). It is also possible to get particular items skipped by passing them along in the Skip list (see the add_*_with_skip functions above).
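A hedged sketch mixing the three TestItem forms (paths and names are assumptions):

Tests = [{"../app1/test", all, all},                  %% everything in one directory
         {"../app2/test", [m1_SUITE, m2_SUITE], all}, %% all cases in two suites
         {"../app3/test", m3_SUITE, [start, stop]}],  %% two specific cases
ok = test_server_ctrl:add_tests_with_skip(mixed_job, Tests, []).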
abort_current_testcase(Reason) -> ok | {error,no_testcase_running}
Reason = term()
When calling this function, the currently executing test case will be aborted. It is the user's responsibility to know for sure which test case is currently executing. The function is therefore only safe to call from a function which has been called (or synchronously invoked) by the test case.
set_levels(Console, Major, Minor) -> ok
Console = integer()
Major = integer()
Minor = integer()
Determines where I/O from test suites/test server will go. All text output from test suites and the test server is tagged with a priority value ranging from 0 to 100, where 100 is the most detailed (see the section about log files in the user's guide). Output from the test cases (using io:format/2) has a detail level of 50. Depending on the levels set by this function, this I/O may be sent to the console, the major log file (for the whole test suite) or the minor log file (separate for each test case).

All output with detail level:

- less than or equal to Console is displayed on the screen (default 1)
- less than or equal to Major is logged in the major log file (default 19)
- greater than or equal to Minor is logged in the minor log files (default 10)

To view the currently set thresholds, use the get_levels/0 function.
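For example, to also echo test case output (detail level 50) to the console while keeping the default log file thresholds:

ok = test_server_ctrl:set_levels(50, 19, 10).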
get_levels() -> {Console, Major, Minor}
Returns the current levels. See set_levels/3 for types.
jobs() -> JobQueue
JobQueue = [{list(), pid()}]
This function will return all the jobs currently in the job queue.
multiply_timetraps(N) -> ok
N = integer() | infinity
This function should be called before starting a test that requires extended timetraps, e.g. if extensive tracing is used. All timetraps started after this call will be multiplied by N.
scale_timetraps(Bool) -> ok
Bool = true | false
This function should be called before a test is started. The parameter specifies whether test_server should attempt to scale the timetrap values automatically, to compensate for delays caused by e.g. the cover tool.
get_timetrap_parameters() -> {N,Bool}
N = integer() | infinity
Bool = true | false
This function may be called to read the values set by multiply_timetraps/1 and scale_timetraps/1.
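A small usage sketch:

ok = test_server_ctrl:multiply_timetraps(10),   %% e.g. before a heavily traced run
ok = test_server_ctrl:scale_timetraps(true),
{10, true} = test_server_ctrl:get_timetrap_parameters().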
cover(Application,Analyse) -> ok
cover(CoverFile,Analyse) -> ok
cover(App,CoverFile,Analyse) -> ok
Application = atom()
CoverFile = string()
Analyse = details | overview
This function informs the test_server controller that the next test shall run with code coverage analysis. All timetraps will automatically be multiplied by 10 when cover is run.
Application and CoverFile indicate what to cover compile. If Application is given, the default is that all modules in the ebin directory of the application will be cover compiled. The ebin directory is found by adding ebin to code:lib_dir(Application).

A CoverFile can have the following entries:

{exclude, all | ExcludeModuleList}.
{include, IncludeModuleList}.
{cross, CrossCoverInfo}.
Note that each line must end with a full stop. ExcludeModuleList and IncludeModuleList are lists of atoms, where each atom is a module name.

CrossCoverInfo is used when collecting cover data over multiple tests. Modules listed here are compiled, but they will not be analysed when the test is finished. See cross_cover_analyse/2 for more information about the cross cover mechanism and the format of CrossCoverInfo.
If both an Application and a CoverFile are given, all modules in the application are cover compiled, except for the modules listed in ExcludeModuleList. The modules in IncludeModuleList are also cover compiled.

If a CoverFile is given, but no Application, only the modules in IncludeModuleList are cover compiled.
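For illustration, a hypothetical my_app.cover file combining these rules (the module names are assumptions):

{exclude, [my_app_debug]}.
{include, [my_app_extra]}.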
Analyse indicates the detail level of the cover analysis. If Analyse = details, each cover compiled module will be analysed with cover:analyse_to_file/1. If Analyse = overview, an overview of all cover compiled modules is created, listing the number of covered and not covered lines for each module.
If the test following this call starts any slave or peer nodes with test_server:start_node/3, the same cover compiled code will be loaded on all nodes. If the loading fails, e.g. if the node runs an old version of OTP, the node will simply not be part of the coverage analysis. Note that slave or peer nodes must be stopped with test_server:stop_node/1 for the node to be part of the coverage analysis; otherwise the test server will not be able to fetch coverage data from the node.
When the test is finished, the coverage analysis is automatically completed, logs are created and the cover compiled modules are unloaded. If another test is to be run with coverage analysis, test_server_ctrl:cover/2/3 must be called again.
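A hedged usage sketch (application name, cover file and test directory are assumptions):

ok = test_server_ctrl:cover(my_app, "my_app.cover", details),
ok = test_server_ctrl:add_dir(my_app_test, "../my_app/test").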
cross_cover_analyse(Level, Tests) -> ok
Level = details | overview
Tests = [{Tag,LogDir}]
Tag = atom()
LogDir = string()
LogDir is the log directory of the test Tag. This can either be the run.<timestamp> directory or the parent directory of this (in which case the latest run.<timestamp> directory is chosen).

Analyse cover data collected from multiple tests. The modules analysed are the ones listed in cross statements in the cover files. These are modules that are heavily used by tests other than the one where they belong or are explicitly tested. They should then be listed as cross modules in the cover file for the test where they are used but do not belong. See the example below.
This function should be run after all tests are completed, and the result will be stored in a file called cross_cover.html in the run.<timestamp> directory of the test the modules belong to.
Note that the function can be executed on any node, and it does not require test_server_ctrl to be started first.
The cross statement in the cover file must be like this:

{cross,[{Tag,Modules}]}.

where Tag is the same as Tag in the Tests parameter to this function and Modules is a list of module names (atoms).
Example:

If the module m1 belongs to system s1 but is heavily used also in the tests for another system s2, then the cover files for the two systems' tests could be like this:

s1.cover:
  {include,[m1]}.

s2.cover:
  {include,[....]}. % modules belonging to system s2
  {cross,[{s1,[m1]}]}.
When the tests for both s1 and s2 are completed, run

test_server_ctrl:cross_cover_analyse(Level,[{s1,S1LogDir},{s2,S2LogDir}])

and the accumulated cover data for m1 will be written to S1LogDir/[run.<timestamp>/]cross_cover.html.
Note that the m1 module will also be presented in the normal coverage log for s1 (due to the include statement in s1.cover), but that only includes the coverage achieved by the s1 test itself.
The Tag in the cross statement in the cover file has no other purpose than mapping the list of modules ([m1] in the example above) to the correct log directory where it should be included in the cross_cover.html file (S1LogDir in the example above). I.e. the value of Tag has no meaning; it could be foo as well as s1 above, as long as the same Tag is used in the cover file and in the call to this function.
trc(TraceInfoFile) -> ok | {error, Reason}
TraceInfoFile = atom() | string()
This function starts call trace on target and on slave or peer nodes that are started or will be started by the test suites.
Timetraps are not extended automatically when tracing is used. Use multiply_timetraps/1 if necessary.
Note that the trace support in the test server is in a very early stage of the implementation, and thus not yet as powerful as one might wish for.
The trace information file specified by the TraceInfoFile argument is a text file containing one or more of the following elements:
{SetTP,Module,Pattern}.
{SetTP,Module,Function,Pattern}.
{SetTP,Module,Function,Arity,Pattern}.
ClearTP.
{ClearTP,Module}.
{ClearTP,Module,Function}.
{ClearTP,Module,Function,Arity}.
SetTP = tp | tpl
These map to the corresponding functions in the ttb module in the observer application. tp means set trace pattern on global function calls. tpl means set trace pattern on local and global function calls.
ClearTP = ctp | ctpl | ctpg
These map to the corresponding functions in the ttb module in the observer application. ctp means clear trace pattern (i.e. turn off) on global and local function calls, ctpl means clear trace pattern on local function calls only and ctpg means clear trace pattern on global function calls only.
Module = atom()
Function = atom()
Arity = integer()
Pattern = [] | match_spec()
The trace result will be logged in a (binary) file called NodeName-test_server in the current directory of the test server controller node. The log must be formatted using ttb:format/1/2.
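For illustration, a hypothetical trace information file (module and function names are assumptions):

{tpl, my_module, []}.
{tp, my_module, handle_call, 3, [{'_',[],[{return_trace}]}]}.
{ctp, other_module}.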
stop_trace() -> ok | {error, not_tracing}
This function stops tracing on target, and on slave or peer nodes that are currently running. New slave or peer nodes will no longer be traced after this.
FUNCTIONS INVOKED FROM COMMAND LINE
The following functions are supposed to be invoked from the command line using the -s option when starting the Erlang node.
Functions
run_test(CommandLine) -> ok
CommandLine = FlagList
This function is supposed to be invoked from the command line. It starts the test server, interprets the arguments supplied from the command line, runs the tests specified and, when all tests are done, stops the test server and returns to the Erlang prompt.
The CommandLine argument is a list of command line flags, typically ['KEY1', Value1, 'KEY2', Value2, ...].
The valid command line flags are listed below.
Under a UNIX command prompt, this function can be invoked like this:
erl -noshell -s test_server_ctrl run_test KEY1 Value1 KEY2 Value2 ... -s erlang halt
Or make an alias (this is for UNIX/tcsh):
alias erl_test 'erl -noshell -s test_server_ctrl run_test \!* -s erlang halt'
And then use it like this:
erl_test KEY1 Value1 KEY2 Value2 ...
The valid command line flags are:

DIR dir
Adds all test modules in the directory dir to the job queue.

MODULE mod
Adds the module mod to the job queue.

CASE mod case
Adds the test case case in module mod to the job queue.

SPEC spec
Runs the test specification file spec.

SKIPMOD mod
Skips all test cases in the module mod.

SKIPCASE mod case
Skips the test case case in module mod.

NAME name
Names the test suite to something other than the default name. This does not apply to SPEC, which keeps its names.

COVER app cover_file analyse
Indicates that the test should be run with cover analysis. app, cover_file and analyse correspond to the parameters to test_server_ctrl:cover/3. If no cover file is used, the atom none should be given.

TRACE traceinfofile
Specifies a trace information file. When this flag is given, call tracing is started before the test begins. See trc/1 above for more information about the syntax of this file.
FRAMEWORK CALLBACK FUNCTIONS
A test server framework can be defined by setting the environment variable TEST_SERVER_FRAMEWORK to a module name. This module will then be the framework callback module, and it must export the following functions:
Functions
get_suite(Mod,Func) -> TestCaseList
Mod = atom()
Func = atom()
TestCaseList = [SubCase]
SubCase = atom()
This function is called before a test case is started. The purpose is to retrieve a list of subcases. The default behaviour of this function should be to call Mod:Func(suite) and return the result from this call.
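A minimal sketch of that default behaviour in a framework callback module:

get_suite(Mod, Func) ->
    Mod:Func(suite).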
init_tc(Mod,Func,Args0) -> {ok,Args1} | {skip,ReasonToSkip} | {auto_skip,ReasonToSkip} | {fail,ReasonToFail}
Mod = atom()
Func = atom()
Args0 = Args1 = [tuple()]
ReasonToSkip = term()
ReasonToFail = term()
This function is called before a test case or configuration function starts. It is called on the process executing the function Mod:Func. Typical use of this function can be to alter the input parameters to the test case function (Args) or to set properties for the executing process.
By returning {skip,Reason}, Func gets skipped. Func also gets skipped if {auto_skip,Reason} is returned, but then gets an auto skipped status (rather than user skipped). To fail Func immediately instead of executing it, return {fail,ReasonToFail}.
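A hedged sketch of a callback that passes the arguments through unchanged and sets a property on the executing process:

init_tc(_Mod, _Func, Args) ->
    %% Example only: trap exits in the process executing the test case.
    process_flag(trap_exit, true),
    {ok, Args}.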
end_tc(Mod,Func,Status) -> ok | {fail,ReasonToFail}
Mod = atom()
Func = atom()
Status = {Result,Args} | {TCPid,Result,Args}
ReasonToFail = term()
Result = ok | Skip | Fail
TCPid = pid()
Skip = {skip,SkipReason}
SkipReason = term() | {failed,{Mod,init_per_testcase,term()}}
Fail = {error,term()} | {'EXIT',term()} | {timetrap_timeout,integer()} | {testcase_aborted,term()} | testcase_aborted_or_killed | {failed,term()} | {failed,{Mod,end_per_testcase,term()}}
Args = [tuple()]
This function is called when a test case, or a configuration function, is finished. It is normally called on the process where the function Mod:Func has been executing, but if not, the pid of the test case process is passed with the Status argument. Typical use of the end_tc/3 function can be to clean up after init_tc/3.
If Func is a test case, it is possible to analyse the value of Result to verify that init_per_testcase/2 and end_per_testcase/2 executed successfully.

It is possible with end_tc/3 to fail an otherwise successful test case by returning {fail,ReasonToFail}. The test case Func will then be logged as failed with the provided term as reason.
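A hedged sketch that accepts both forms of Status and fails the test case if init_per_testcase/2 did not succeed:

end_tc(Mod, Func, {TCPid, Result, Args}) when is_pid(TCPid) ->
    end_tc(Mod, Func, {Result, Args});
end_tc(_Mod, _Func, {{skip,{failed,{_M,init_per_testcase,Reason}}}, _Args}) ->
    {fail, {init_per_testcase_failed, Reason}};
end_tc(_Mod, _Func, {_Result, _Args}) ->
    ok.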
report(What,Data) -> ok
What = atom()
Data = term()
This function is called in order to keep the framework up-to-date with the progress of the test. This is useful e.g. if the framework implements a GUI where the progress information is constantly updated. The following can be reported:
What = tests_start, Data = {Name,NumCases}
What = loginfo, Data = [{topdir,TestRootDir},{rundir,CurrLogDir}]
What = tests_done, Data = {Ok,Failed,{UserSkipped,AutoSkipped}}
What = tc_start, Data = {{Mod,Func},TCLogFile}
What = tc_done, Data = {Mod,Func,Result}
What = tc_user_skip, Data = {Mod,Func,Comment}
What = tc_auto_skip, Data = {Mod,Func,Comment}
What = framework_error, Data = {{FWMod,FWFunc},Error}
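A hedged sketch of a callback that prints test case results and ignores all other reports:

report(tc_done, {Mod, Func, Result}) ->
    io:format("~w:~w finished: ~p~n", [Mod, Func, Result]);
report(_What, _Data) ->
    ok.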
error_notification(Mod, Func, Args, Error) -> ok
Mod = atom()
Func = atom()
Args = [tuple()]
Error = {Reason,Location}
Reason = term()
Location = unknown | [{Mod,Func,Line}]
Line = integer()
This function is called as the result of function Mod:Func failing with Reason at Location. The function is intended mainly to aid specific logging or error handling in the framework application. Note that for Location to have relevant values (i.e. other than unknown), the line macro or test_server_line parse transform must be used. For details, please see the section about test suite line numbers in the test_server reference manual page.
warn(What) -> boolean()
What = processes | nodes
The test server checks the number of processes and nodes before and after the test is executed. With this function the framework is asked whether the test server should warn when the number of processes or nodes has changed during the test execution. If true is returned, a warning will be written in the test case minor log file.
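A minimal sketch: warn about changes in the number of processes but not about node changes (the policy itself is an arbitrary example):

warn(processes) -> true;
warn(nodes) -> false.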
target_info() -> InfoStr
InfoStr = string() | ""
The test server will ask the framework for information about the test target system and print InfoStr in the test case log file below the host information.