htf.main() — Testscript utility
htf.main is used to start tests from within a test-script when the htf command-line-utility is not used.
- htf.main(title: str = 'Testreport', tests: Any | List[Any] = '__main__', tags: str | None = None, fixture_tags: str | None = None, html_report: str | List[str] | None = None, junit_xml_report: str | List[str] | None = None, json_report: str | List[str] | None = None, yaml_report: str | List[str] | None = None, report_server: str | List[str] | None = None, extra_report: Any | List[Any] | None = None, create_drafts: bool = False, open_report: bool = False, pattern: str = 'test*.py', metadata: Dict[str, Any] | None = None, parameters: Dict[str, str] | None = None, exit: bool = True, fail_fast: bool = False, minimize_report_size: bool = False, interactive: bool = False, interactive_address: str = '0.0.0.0', interactive_port: int = 8080, open_browser: bool = True, shuffle: bool = False, shuffle_seed: str | int | None = None) int
Test runner.
- Parameters:
title="Testreport" – the test's title
tests="__main__" – a test-specifier or a list of test-specifiers (folder, file, test-case, test-case-method, module)
tags=None – test filter expression
fixture_tags=None – fixture filter expression
html_report=None – HTML-report filename(s)
junit_xml_report=None – JUnit-XML-report filename(s)
json_report=None – JSON-report filename(s)
yaml_report=None – YAML-report filename(s)
report_server=None – URL or list of URLs of report servers
extra_report=None – extra reports that support instance.render(data); this is useful, for example, for htf.DOORSTestReport, which needs additional parameters
create_drafts=False – if set to True, draft reports are created after each test
open_report=False – if set to True, the HTML test report is opened after the test run
pattern="test*.py" – the pattern to be used when searching for python files in folders
metadata=None – a dictionary that contains metadata about the test run
parameters=None – a dictionary that contains parameters for the test run
exit=True – if set to True, exit() is called at the end of the tests
fail_fast=False – if set to True, the test run ends after the first failing test
minimize_report_size=False – if set to True, the report size is minimized
interactive=False – if set to True, the tests are run interactively
interactive_address="0.0.0.0" – the IP address the interaction server listens on
interactive_port=8080 – the port the interaction server listens on
open_browser=True – if set to False, the browser is not opened automatically
shuffle=False – set to True to shuffle tests
shuffle_seed=None – the shuffle seed, or 'last' to use the last shuffle seed
- Returns:
0 if the run was successful, or the number of failed tests in case of errors and failures.
- Return type:
int
Running tests
This section shows some usage examples.
The first example covers the basic usage for a simple test-case running on Python 3.9+.
import htf


class ExampleTestCase(htf.TestCase):
    def test_example(self):
        self.assertTrue(True)


if __name__ == "__main__":
    htf.main()
This runs all tests within ExampleTestCase and generates an HTML-report named testreport.html.
Specifying tests
htf.main supports the tests parameter. It may be None, a test-specifier or a list of test-specifiers.
A test-specifier may be a python module, a feature file, a globbing-expression for python modules (e.g. test_*.py), a globbing-expression for feature files (e.g. features/*.feature), a folder or an import-string (python style).
In comparison to the htf command-line-utility it may also be a module-instance, a test-case or a test-case-method.
Test-specifiers can be mixed.
htf.main(tests=["<test-specifier1>", "<test-specifier2>", .. "<test-specifierN>"])
If only one test-specifier is supplied, it is not necessary to use a list
htf.main(tests="<test-specifier>")
If a test-specifier is a python module, it is imported and all testable objects are collected into a test-suite that is used for the test run.
htf.main(tests="test_1.py")
If a test-specifier is a globbing-expression, the expression is evaluated and all matching files are imported.
htf.main(tests="test_*.py")
If a test-specifier is a folder, this folder is searched recursively and all python modules found within (see File pattern, pattern=) are imported and all testable objects are collected into a test suite that is used for the test run.
Consider the folder tests, which contains python modules with unit tests.
To execute all test cases in the folder tests, simply use
htf.main(tests="tests")
In case tests is an importable package or module located in the python search path, the names might collide.
In this case, simply use
If a test-specifier is an import string, it may be a package, a module, a class (inheriting from htf.TestCase) or a method.
If a test-specifier is a package or a module, it is imported and the tests it contains are executed.
htf.main(tests=["tests", "tests.test_example"])
If a class is used, it should be a test case inheriting from htf.TestCase.
This test case is collected into a test suite and all of its tests are run.
htf.main(tests=tests.test_example.ExampleTestCase)
If a method is used, it should be a method within a test case.
htf.main(tests="tests.test_example.ExampleTestCase.test_one")
The examples above are analogous to the command-line-utility htf.
In contrast to the command-line-utility, htf.main supports even more test-specifiers.
To test a module-instance use
import htf
import htf_examples.htf_main_1

if __name__ == "__main__":
    htf.main(tests=htf_examples.htf_main_1)
To test a test-case use
import htf


class ExampleTestCase(htf.TestCase):
    def test_example(self):
        self.assertTrue(True)


if __name__ == "__main__":
    htf.main(tests=ExampleTestCase)
And to test a test-case-method use
import htf


class ExampleTestCase(htf.TestCase):
    def test_example1(self):
        self.assertTrue(True)

    def test_example2(self):
        self.assertTrue(True)


if __name__ == "__main__":
    htf.main(tests=ExampleTestCase.test_example1)
Warning
When specifying methods as test specifiers in Python 3, the test class will be the class implementing the method and not necessarily the one specified. This means the methods called will also be the ones of the implementing class and may not be the ones you expect.
To clarify, see the following example.
class Super:
    def whoami(self):
        print("Super")

    def test_whoami(self):
        self.whoami()


class Sub(Super):
    def whoami(self):
        print("Sub")
When running htf with Sub.test_whoami as a test specifier, Super.test_whoami is executed instead.
This is due to a limitation in Python 3 where the inheriting class cannot be resolved from the function reference.
htf.main(tests=Sub.test_whoami)
To work around this issue you can either use the class itself (Sub) as a specifier, or pass an instance instead:
htf.main(tests=Sub().test_whoami)
Test-title
To specify the test-title, use the title= argument.
htf.main(title="This is my first test")
HTML-report
By default htf.main generates an HTML-report called testreport.html.
To specify an HTML-report, use the html_report= argument.
htf.main(html_report="tests.html")
Multiple reports can be generated by supplying a list of filenames
htf.main(html_report=["tests1.html", "tests2.html"])
To open all HTML-reports, use open_report=True.
htf.main(html_report="tests.html", open_report=True)
JUnit-XML-report
htf.main is also able to generate JUnit-XML-compatible XML-reports. The reports can be evaluated by different tools, e.g. Jenkins.
To specify an XML-report, use the junit_xml_report= argument.
htf.main(junit_xml_report="tests.xml")
or with multiple reports
htf.main(junit_xml_report=["tests1.xml", "tests2.xml"])
JSON-report
htf.main can also generate JSON-reports.
To specify a JSON-report, use the json_report= argument.
htf.main(json_report="tests.json")
or with multiple reports
htf.main(json_report=["tests1.json", "tests2.json"])
YAML-report
htf.main can also generate YAML-reports.
To specify a YAML-report, use the yaml_report= argument.
htf.main(yaml_report="tests.yaml")
or with multiple reports
htf.main(yaml_report=["tests1.yaml", "tests2.yaml"])
Other reports
There are other reports that need more parameters and therefore have to be instantiated outside of htf.main.
For example, htf.DOORSTestReport needs additional information about the report-module, requirements-modules and links-modules.
To use such a report, use the extra_report= argument
import htf


class Tests(htf.TestCase):
    def test_assertionError(self):
        o = None
        print(o.no_there)


if __name__ == "__main__":
    doors_report = htf.DOORSTestReport(
        filename="doors.dxl",
        reportModule="/path/to/report/module",
        requirementsModules={
            "REQ_.*": "/path/to/requirements_module",
            "TEST_.*": "/path/to/test_specification",
        },
        linksModules={
            "REQ_.*": "/path/to/links/to/requirements_module",
            "TEST_.*": "/path/to/links/to/test_specification",
        },
    )
    htf.main(extra_report=doors_report)
You can also supply a list of report instances
htf.main(extra_report=[report1, report2, ..., reportN])
Report Server
To send test results to a test report server (hre), use report_server.
htf.main(report_server="https://reports.host/project/token")
To disable SSL-verification you can set the environment variable HTF_INSECURE to any value.
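One way to set the variable from within the test script itself is shown below. This is a minimal sketch; it assumes that setting HTF_INSECURE just before calling htf.main is early enough in your environment, and the report-server URL is a placeholder.
import os

import htf

# Assumption: any value of HTF_INSECURE disables SSL verification,
# as described above; set it before the report server is contacted.
os.environ["HTF_INSECURE"] = "1"

if __name__ == "__main__":
    htf.main(report_server="https://reports.host/project/token")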
Draft Reports
To create draft reports after each test, set create_drafts=True.
htf.main(create_drafts=True)
Minimized Report Size
To minimize the report size, use minimize_report_size=True.
When it is used, for example, captured stdout and stderr are only kept for failed tests.
By default the report size is not minimized.
htf.main(minimize_report_size=True)
Shuffle Tests
To shuffle tests before running, use shuffle=True.
htf.main(shuffle=True)
To set the shuffle seed, use shuffle_seed=<int>|'last'. If 'last' is used, the seed is loaded from 'shuffle_seed.txt'. If the file cannot be found, a new seed is created.
After a seed is created it is stored to be used next time. Using the seed, the shuffle result can be reproduced.
htf.main(shuffle=True, shuffle_seed=12345)
htf.main(shuffle=True, shuffle_seed='last')
Filename-templates
Filenames for reports can include prepopulated templates.
Available templates:
{{title}} - the title as a filename
{{date}} - the current date, formatted as "%Y-%m-%d_%H-%M-%S"
{{hostname}}, {{host}} or {{node}} - the hostname
{{htf_version}} or {{version}} - the htf version, e.g. htf_1.0.0
{{python_version}} - the python version, e.g. Python_3.6.1
To generate an HTML-report containing the current node and the current date
htf.main(html_report="test_{{host}}_{{date}}.html")
Fail fast
If the fail_fast=True argument is used, the test run ends after the first failing test.
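For example, to stop a test run over a folder of tests at the first failure (the folder name is a placeholder):
htf.main(tests="tests/", fail_fast=True)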
File pattern
If a test-specifier is a folder, the default pattern used for file discovery is test*.py.
The pattern can be changed using the pattern= argument.
htf.main(tests="tests/", pattern="*.py")
Tagging
To select tests using a logical expression, use the tags option set to an expression.
htf.main(tags="foo|bar")
To list all available tags, use the htf.get_tags function.
htf.get_tags(tests="/path/to/test")
- htf.get_tags(tests: Any | List[Any] = '__main__', pattern: str = 'test*.py') List[str]
Get a list of tags found in the specified tests.
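For example, to print the tags discovered in a folder before choosing a filter expression (the folder path and pattern are placeholders):
import htf

for tag in htf.get_tags(tests="tests/", pattern="test*.py"):
    print(tag)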
To use tags for fixtures, use the fixture_tags option set to an expression.
htf.main(fixture_tags="hardware")
Metadata
To add metadata about the test run, use the metadata parameter set to a dictionary containing metadata.
This feature can be used, for example, to add information about a tested firmware version, etc.
metadata_dict = dict(firmware_version="1.2.3", hardware_version="4.5.6")
htf.main(metadata=metadata_dict)
The metadata will appear in the test reports.
Parameters
You can add parameters to test runs to control the behavior of tests and fixtures.
Use the parameters parameter as a dict to supply parameters to test runs.
The parameters are available via the htf.fixtures.Parameters fixture.
htf.main(parameters=dict(name1="value1", name2="value2"))
The parameters will also appear in the test result metadata.
htf.run()
htf.run is an alias for htf.main.
- htf.run(title: str = 'Testreport', tests: Any | List[Any] = '__main__', tags: str | None = None, fixture_tags: str | None = None, html_report: str | List[str] | None = None, junit_xml_report: str | List[str] | None = None, json_report: str | List[str] | None = None, yaml_report: str | List[str] | None = None, report_server: str | List[str] | None = None, extra_report: Any | List[Any] | None = None, create_drafts: bool = False, open_report: bool = False, pattern: str = 'test*.py', metadata: Dict[str, Any] | None = None, parameters: Dict[str, str] | None = None, exit: bool = True, fail_fast: bool = False, minimize_report_size: bool = False, interactive: bool = False, interactive_address: str = '0.0.0.0', interactive_port: int = 8080, open_browser: bool = True, shuffle: bool = False, shuffle_seed: str | int | None = None) int
Test runner.
- Parameters:
title="Testreport" – the test's title
tests="__main__" – a test-specifier or a list of test-specifiers (folder, file, test-case, test-case-method, module)
tags=None – test filter expression
fixture_tags=None – fixture filter expression
html_report=None – HTML-report filename(s)
junit_xml_report=None – JUnit-XML-report filename(s)
json_report=None – JSON-report filename(s)
yaml_report=None – YAML-report filename(s)
report_server=None – URL or list of URLs of report servers
extra_report=None – extra reports that support instance.render(data); this is useful, for example, for htf.DOORSTestReport, which needs additional parameters
create_drafts=False – if set to True, draft reports are created after each test
open_report=False – if set to True, the HTML test report is opened after the test run
pattern="test*.py" – the pattern to be used when searching for python files in folders
metadata=None – a dictionary that contains metadata about the test run
parameters=None – a dictionary that contains parameters for the test run
exit=True – if set to True, exit() is called at the end of the tests
fail_fast=False – if set to True, the test run ends after the first failing test
minimize_report_size=False – if set to True, the report size is minimized
interactive=False – if set to True, the tests are run interactively
interactive_address="0.0.0.0" – the IP address the interaction server listens on
interactive_port=8080 – the port the interaction server listens on
open_browser=True – if set to False, the browser is not opened automatically
shuffle=False – set to True to shuffle tests
shuffle_seed=None – the shuffle seed, or 'last' to use the last shuffle seed
- Returns:
0 if the run was successful, or the number of failed tests in case of errors and failures.
- Return type:
int
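Since htf.run accepts the same arguments as htf.main, any of the examples above work unchanged with it, for example:
htf.run(title="Example", html_report="tests.html")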
htf.verify()
htf.verify verifies the system environment for validated installations.
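A minimal call, assuming no arguments are required (as the heading above suggests):
import htf

htf.verify()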