# integration_test.h

## Overview
Implements an integration test system. The difference between the unit test system in `unit_test.h` and the integration test system is that the unit test system is intended to test small, isolated pieces of functionality, creating as little context around them as possible. In contrast, the integration test system is intended to test whole applications. It might boot up a full application and then run one or more tests on it. By their very nature, integration tests are slower and more fragile than unit tests, but they can also find issues that are hard to detect with unit tests.

Each integration test runs in a specific *Context*, identified by a unique string hash. The *Context* specifies the "scaffolding" that is set up before the test runs. For example, some tests may want to run within a running editor, some tests may need two running editors to test collaboration, some may want to launch all executables themselves, etc.
## Index
* `TM_INTEGRATION_TEST_CONTEXT__THE_MACHINERY_EDITOR`
* `struct tm_integration_test_config_t`
* `tm_integration_test_runner_o`
* `struct tm_integration_test_runner_i`
    * `inst`
    * `context`
    * `config`
    * `app`
    * `wait()`
    * `record()`
    * `expect_error()`
    * `has_errors()`
    * `num_errors()`
* `TM_WAIT()`
* `TM_WAIT_LOOP()`
* `TM_INTEGRATION_TEST()`
* `TM_INTEGRATION_TESTF()`
* `TM_INTEGRATION_EXPECT_ERROR()`
* `struct tm_integration_test_i`
    * `name`
    * `path_config_json_file`
    * `config_key`
    * `context`
    * `tick()`
* `tm_integration_test_i_version`
## API
### `TM_INTEGRATION_TEST_CONTEXT__THE_MACHINERY_EDITOR`
~~~c
#define TM_INTEGRATION_TEST_CONTEXT__THE_MACHINERY_EDITOR
~~~

Context used for tests that want to run within The Machinery editor.
### `struct tm_integration_test_config_t`
`tm_integration_test_config_t` holds optional configuration data for a test (initialized from `tm_integration_test_i.path_config_json_file`).

~~~c
typedef struct tm_integration_test_config_t
{
    tm_config_i *config;
    tm_config_item_t object;
    TM_PAD(4);
} tm_integration_test_config_t;
~~~
### `tm_integration_test_runner_o`
~~~c
typedef struct tm_integration_test_runner_o tm_integration_test_runner_o;
~~~

Represents an integration test runner.
### `struct tm_integration_test_runner_i`
Interface for a "runner" that runs integration tests. The individual test functions will receive the runner as an argument. #### `inst` ~~~c tm_integration_test_runner_o *inst; ~~~ Instance data for the test runner. #### `context` ~~~c tm_strhash_t context; ~~~ The current context that the "runner" is running. Integration tests can check this value and decide not to run, if the context does not match what they are expecting. #### `config` ~~~c tm_integration_test_config_t config; ~~~ Optional. If a test specifies a `tm_integration_test_i.path_config_json_file`, this field will contain the config data of the test.. #### `app` ~~~c struct tm_application_o *app; ~~~ Pointer to the "application" that the integration tests are running in. The actual value of this parameter depends on which context the tests are running in. For example, in the `TM_INTEGRATION_TEST_CONTEXT__THE_MACHINERY_EDITOR` context, this will point to the editor's application object. Note that in some contexts, `app` may be NULL. #### `wait()` ~~~c bool (*wait)(tm_integration_test_runner_o *inst, float sec, uint64_t id); ~~~ Used by the integration tests to implement tests that run over several frames. When you call `wait()`, the function will return *false* for the specified number of seconds, then return *true* once and finally continue to return *false* again. The `id` should be a unique non-zero identifier for this `wait()` call. Typically, you would use `__LINE__`. This function is typically used with the helper `TM_WAIT()` or `TM_WAIT_LOOP()` macros (see below) which automatically use `__LINE__` as the `id`. For more advanced uses of `wait()`, you should assume the following behavior: * If we're not currently waiting on any ID, we will start waiting on `id` and return *false*. * If we're currently waiting at some other ID than `id`, we will just return *false*. * If we're currently waiting at `id` and `sec` has elapsed since we started waiting, we will return *true* and stop waiting. Otherwise, we will return *false*. If we're not waiting on anything when the `tick()` function exits, the test is considered done. #### `record()` ~~~c bool (*record)(tm_integration_test_runner_o *inst, bool pass, const char *test_str, const char *file, uint32_t line); ~~~ Records the result of an integration test with the runner. * `pass` specifies if the test succeeded or not. * `test_str` is a string describing the test that will be printed in test reports. * `file` is the `__FILE__` where the test is located. * `line` is the `__LINE__` where the test is located. Returns the value of `pass`. #### `expect_error()` ~~~c void (*expect_error)(tm_integration_test_runner_o *inst, const char *err, const char *file, uint32_t line); ~~~ Tells the test runner to expect the error message `err`. Normally, a test runner does not expect any errors to occur, so if a test logs an error (using `tm_error_i->errorf()`), the test is considered to have failed. However, sometimes you want to test that the error handling works and that an API produces a certain error when called in a certain way. To do that, you first call `expect_error()` with the error you expect and then run the test that should produce the error message. If the expected error message is written to `tm_error_i` before the next call to `record()`, the test is considered to have succeeded (produced the expected error message), otherwise, the test is considered to have failed (not produced the right error message). 
Note that to make this work, the test runner has to set up the `tm_error_api->def` interface to call into the test runner code. That way, it can intercept the `tm_error_i->errorf()` call and check if the error matches the expectations or not. #### `has_errors()` ~~~c bool (*has_errors)(void); ~~~ Returns *true* if any error has been logged. #### `num_errors()` ~~~c uint32_t (*num_errors)(void); ~~~ Returns the number of logged errors.
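For cases where the helper macros described below don't fit, the documented signatures above can be used directly. Here is a minimal sketch of a `tick()` function built on raw `wait()` and `record()` calls; `setup_scene()` and `scene_is_ready()` are hypothetical helpers used only for illustration:

~~~c
// Minimal sketch of a tick() function using wait() and record() directly.
// setup_scene() and scene_is_ready() are hypothetical helpers.
static void my_raw_test_tick(tm_integration_test_runner_i *tr)
{
    // First wait point: give the application one second to settle, then act.
    if (tr->wait(tr->inst, 1.0f, __LINE__))
        setup_scene(tr->app);

    // Second wait point: after another second, verify and record the result.
    if (tr->wait(tr->inst, 1.0f, __LINE__))
        tr->record(tr->inst, scene_is_ready(tr->app), "scene_is_ready(app)", __FILE__, __LINE__);

    // Once tick() returns with no pending wait, the test is considered done.
}
~~~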
### `TM_WAIT()`
~~~c
#define TM_WAIT(tr, sec)
~~~

Calls `tm_integration_test_runner_i->wait()` to wait for the specified time inside an integration test. A typical use case looks something like this:

~~~c
static void my_test_tick(tm_integration_test_runner_i *tr)
{
    const float step_time = 0.5f;

    if (TM_WAIT(tr, step_time))
        open(tr, "C:\\work\\sample-projects\\modular-dungeon-kit\\project.the_machinery_dir");

    if (TM_WAIT(tr, step_time))
        save_to_asset_database(tr, "C:\\work\\sample-projects\\modular-dungeon-kit\\modular-dungeon-kit.the_machinery_db");

    ...
~~~

The integration test runner will call the `tick()` function once per frame. After 0.5 seconds, the first `TM_WAIT()` call will return `true` and the file will be opened. After 1 second, the second `TM_WAIT()` call will return `true` and the file will be saved, etc.

The runner automatically detects when all `TM_WAIT()` calls in the `tick()` function have completed. At this point, the integration test is considered done and the runner will move on to the next test.

!!! TIP
    Note that the test function is not executed in a single serial call; instead, it is called multiple times as a `tick()` function. You have to take this into account when writing the function. For example, the values of all local variables will be reset between each call.
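Because locals are reset between calls, any state that must survive from one `tick()` to the next has to live elsewhere. One possible pattern is sketched below, using a `static` variable (assuming tests run one at a time on a single thread); `open_project()` and `project_is_open()` are hypothetical helpers:

~~~c
static void my_test_tick(tm_integration_test_runner_i *tr)
{
    // Persists across tick() calls, unlike ordinary locals.
    static bool opened;

    if (TM_WAIT(tr, 0.5f))
        opened = open_project(tr); // hypothetical helper

    if (TM_WAIT(tr, 0.5f))
        tr->record(tr->inst, opened && project_is_open(tr), "project opened", __FILE__, __LINE__);
}
~~~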
### `TM_WAIT_LOOP()`
~~~c
#define TM_WAIT_LOOP(tr, sec, i)
~~~

Since `TM_WAIT()` uses the `__LINE__` macro to uniquely identify wait points, it doesn't work when called in a loop. In this case, you can use `TM_WAIT_LOOP()` instead. It takes an iteration parameter `i` that uniquely identifies this iteration of the loop (typically, it would just be the iteration index). This, together with `__LINE__`, gives a unique identifier for the wait point.

!!! WARNING
    If you have multiple nested loops, be aware that using just the inner loop index `j` is not enough to uniquely identify the wait point, since it is repeated for each iteration of the outer loop. Instead, you want to combine the outer and inner index:

~~~c
for (uint32_t i = 0; i < n_i; ++i) {
    for (uint32_t j = 0; j < n_j; ++j) {
        if (TM_WAIT_LOOP(tr, 1, i * n_j + j)) {
            ...
        }
    }
}
~~~
### `TM_INTEGRATION_TEST()`
~~~c
#define TM_INTEGRATION_TEST(tr, assertion)
~~~

Integration test macro. Tests the `assertion` using the test runner `tr`. In case of an error, a stringified version of the `assertion` is logged.
### `TM_INTEGRATION_TESTF()`
~~~c
#define TM_INTEGRATION_TESTF(tr, assertion, format, ...)
~~~

As `TM_INTEGRATION_TEST()`, but records a formatted string in case of error.
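As a sketch of how the two macros might be used side by side inside a test's `tick()` function; `count_entities()` is a hypothetical helper:

~~~c
static void my_count_test_tick(tm_integration_test_runner_i *tr)
{
    const uint32_t n = count_entities(tr->app); // hypothetical helper

    // On failure, records the stringified assertion "n > 0".
    TM_INTEGRATION_TEST(tr, n > 0);

    // On failure, records the formatted message instead of the raw assertion.
    TM_INTEGRATION_TESTF(tr, n == 100, "expected 100 entities, got %u", n);
}
~~~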
### `TM_INTEGRATION_EXPECT_ERROR()`
~~~c
#define TM_INTEGRATION_EXPECT_ERROR(tr, error)
~~~

Macro for calling `expect_error()` with the current `__FILE__` and `__LINE__`.
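A sketch of the expected-error flow described under `expect_error()` above; `parse_file()` stands in for a hypothetical API that is expected to log "File not found" through `tm_error_i->errorf()`:

~~~c
static void my_error_test_tick(tm_integration_test_runner_i *tr)
{
    // Arm the expectation before triggering the error.
    TM_INTEGRATION_EXPECT_ERROR(tr, "File not found");

    // Hypothetical call that should log the expected error.
    parse_file(tr->app, "does_not_exist.json");

    // Per the expect_error() docs, the expectation is resolved at the next
    // record() call (here issued via TM_INTEGRATION_TEST()): the test passes
    // only if the expected error was logged in between.
    TM_INTEGRATION_TEST(tr, true);
}
~~~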
### `struct tm_integration_test_i`
Interface for integration tests.

#### `name`

~~~c
const char *name;
~~~

Name of the test.

#### `path_config_json_file`

~~~c
const char *path_config_json_file;
~~~

Optional. Path to a JSON config file used to configure the test. Environment variables in the form `%VAR%` in the path will be automatically replaced with the value of the environment variable. If specified, the loaded configuration data will be available in the `tm_integration_test_runner_i->config` field.

#### `config_key`

~~~c
tm_strhash_t config_key;
~~~

Optional. If provided, the object `root[config_key]` in the configuration file, where `root` is the root object, will be used as the configuration object. If not provided, `root` itself will be used as the configuration object.

#### `context`

~~~c
tm_strhash_t context;
~~~

Context that this test will run in. Tests will only be run in contexts that match their `context` setting.

#### `tick()`

~~~c
void (*tick)(tm_integration_test_runner_i *);
~~~

Ticks the test. The `tick()` function will be called repeatedly until all its `wait()` calls have completed.
### `tm_integration_test_i_version`
~~~c
#define tm_integration_test_i_version
~~~

Current version of `tm_integration_test_i` to use with `tm_add_or_remove_implementation()`.
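Putting the pieces together, here is a sketch of how a test might be defined and registered, assuming the usual The Machinery plugin entry point; `my_test_tick()` is the tick function from the examples above:

~~~c
static tm_integration_test_i my_test = {
    .name = "my-plugin -- editor smoke test",
    .context = TM_INTEGRATION_TEST_CONTEXT__THE_MACHINERY_EDITOR,
    .tick = my_test_tick,
};

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    // Registers (on load) or unregisters (on unload) the test.
    tm_add_or_remove_implementation(reg, load, tm_integration_test_i_version, &my_test);
}
~~~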