When writing an expected-to-fail test case, the cardinality of `$region_highlight` at the time the test is written may differ from the cardinality it will have once the bug is fixed. For example, with issue #641.5, the current highlighting is ['nice', 'x=y', 'y', 'ls'], four elements, but the correct highlighting would have three elements: ['nice', 'x=y', 'ls']. There is no point in reporting a separate test failure for the cardinality check in this case, nor for 'ls' being highlighted as 'command' rather than 'default'. At the same time, in other cases the current and correct highlighting may have the same number of elements (for example, this would be the case for a hypothetical "the command word is highlighted as an alias rather than a function" bug). Thus, the previous commit, q.v.
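In terms of the test format described in this README, such an expected-to-fail test could be sketched as follows (the buffer, style names, and explanation string are assumptions for illustration, not taken from issue #641.5):

```zsh
BUFFER='nice x=y ls'

expected_region_highlight=(
  "1 4 precommand" # nice
  "6 8 default"    # x=y
  "10 11 command"  # ls
)

# Suppress the cardinality check: the current (buggy) $region_highlight has
# four elements, whereas the three entries above describe the fixed behaviour.
expected_mismatch='x=y is split into two regions until the bug is fixed'
```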
zsh-syntax-highlighting / tests
===============================
Utility scripts for testing zsh-syntax-highlighting highlighters.
The test harness expects the highlighter directory to contain a `test-data`
directory with test data files.
See the `main` highlighter for examples.
Each test should define the string `$BUFFER` that is to be highlighted and the
array parameter `$expected_region_highlight`.
The value of that parameter is a list of strings of the form `"$i $j $style"`
or `"$i $j $style $todo"`.
Each string specifies the highlighting that `$BUFFER[$i,$j]` should have;
that is, `$i` and `$j` specify a range, 1-indexed, inclusive of both endpoints.
`$style` is a key of `$ZSH_HIGHLIGHT_STYLES`.
If `$todo` exists, the test point is marked as TODO (the failure of that test
point will not fail the test), and `$todo` is used as the explanation.
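Putting this together, a complete test file might look like the following sketch (the file name, buffer, and style names are illustrative assumptions, not taken from the repository):

```zsh
# Hypothetical test-data file, e.g. highlighters/main/test-data/example.zsh.
BUFFER='ls /nonexistent'

expected_region_highlight=(
  "1 2 command"                      # ls
  "4 15 path 'assumed known issue'"  # /nonexistent -- TODO test point
)
```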
If a test sets `$skip_test` to a non-empty string, the test will be skipped
with the provided string as the reason.
If a test sets `unsorted=1`, the order of highlights in
`$expected_region_highlight` need not match the order in `$region_highlight`.

Normally, tests fail if `$expected_region_highlight` and `$region_highlight`
have different numbers of elements. Tests may set `$expected_mismatch` to an
explanation string (like `$todo`) to avoid this and skip the cardinality check.
`$expected_mismatch` is set implicitly if the `$todo` component is present.
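For example, the `$skip_test` and `unsorted` knobs can be used like this (hypothetical buffer and reasons, shown only to illustrate the mechanism):

```zsh
# skip_test='fails under zsh 5.0; hypothetical reason'  # would skip the test

unsorted=1  # the entries below may be listed in any order

BUFFER='echo hello'
expected_region_highlight=(
  "6 10 default" # hello -- out of buffer order; acceptable because unsorted=1
  "1 4 builtin"  # echo
)
```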
Note: `$region_highlight` uses the same `"$i $j $style"` syntax but
interprets the indexes differently.
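For instance, assuming zle's documented semantics for `$region_highlight` (0-based offsets, with the end offset pointing one past the last character), the same two-character span is written differently in the two parameters:

```zsh
BUFFER='ls'

# Test format: 1-indexed, inclusive of both endpoints.
expected_region_highlight=( "1 2 command" )

# zle's own $region_highlight would describe the same span as "0 2 command":
# 0-based start offset, exclusive end offset.
```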
Note: Tests are run with `setopt NOUNSET WARN_CREATE_GLOBAL`, so any
variables the test creates must be declared local.
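For example, a scratch variable needs a `local` declaration (a sketch; the harness sources test files inside a function scope, which the wrapper function below merely emulates so the fragment is self-contained):

```zsh
# Emulate the harness's function scope so 'local' is legal here.
run_test() {
  local entry  # scratch variable; omitting 'local' would trip WARN_CREATE_GLOBAL
  BUFFER='ls'
  expected_region_highlight=()
  for entry in "1 2 command"; do
    expected_region_highlight+=( "$entry" ) # ls
  done
}
run_test
```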
Isolation: Each test is run in a separate subshell, so any variables,
aliases, functions, etc., it defines will be visible to the tested code (that
computes `$region_highlight`), but will not affect subsequent tests. The
current working directory of tests is set to a newly-created empty directory,
which is automatically cleaned up after the test exits. For example:

```zsh
setopt PATH_DIRS
mkdir -p foo/bar
touch foo/bar/testing-issue-228
chmod +x foo/bar/testing-issue-228
path+=( "$PWD"/foo )
BUFFER='bar/testing-issue-228'
expected_region_highlight=(
  "1 21 command" # bar/testing-issue-228
)
```
Writing new tests
-----------------
An experimental tool is available to generate test files:

```zsh
zsh -f tests/generate.zsh 'ls -x' acme newfile
```

This generates a `highlighters/acme/test-data/newfile.zsh` test file based on
the current highlighting of the given `$BUFFER` (in this case, `ls -x`).

This tool is experimental. Its interface may change. In particular, it may
grow ways to set `$PREBUFFER` to inject free-form code into the generated file.
Highlighting test
-----------------

`test-highlighting.zsh` tests the correctness of the highlighting. Usage:

```zsh
zsh test-highlighting.zsh <HIGHLIGHTER NAME>
```

All tests may be run with

```zsh
make test
```

which will run all highlighting tests and report results in TAP format.

By default, the results of all tests will be printed; to show only "interesting"
results (tests that failed but were expected to succeed, or vice-versa), run
`make quiet-test` (or `make test QUIET=y`).
Performance test
----------------

`test-perfs.zsh` measures the time spent doing the highlighting. Usage:

```zsh
zsh test-perfs.zsh <HIGHLIGHTER NAME>
```

All tests may be run with

```zsh
make perf
```