Help:Wikifunctions/Function call metadata

Whenever Wikifunctions runs a function, it collects and reports information about the run, including any errors that were raised, along with basic metrics such as the run's duration, CPU usage, and memory usage. This information gives function contributors and users some awareness of the performance characteristics of particular function implementations.

Where is this shown?
This information, known as function call metadata, is displayed in the user interface, in a pop-up dialog available in four different settings:


 * 1) Immediately after invoking a function from the Evaluate a function call page.
 * 2) When viewing test results on the page for a particular function.
 * 3) When viewing test results on the page for a particular implementation.
 * 4) When viewing test results on the page for a particular tester.

In setting 1, the metadata dialog can be displayed by clicking on the button labeled Show metrics (which may soon be changed to Show metadata). In the other three settings, it is displayed by clicking on the information icon (letter 'i' inside a circle) for a particular tester run.

When viewing the metadata dialog in setting 1, the user sees metadata from the function run they just requested. When viewing the metadata dialog for a tester in the other three settings, what's shown is the metadata for the most recent run of the tester. Additional information about tester run metadata is given below, in Metadata for tester runs.

Metadata is collected in the orchestrator, evaluator, and executor components. See Function Evaluation for Wikifunctions for general information about these components.

What do the different bits of data mean?
The currently implemented metadata elements are described in the following sections. The headings for individual metadata elements, shown in bold (e.g., Implementation type), are the labels that appear in the metadata dialog for an English-language reader.

Implementation metadata

 * Implementation type
 * The type (BuiltIn, Evaluated, or Composition) of the implementation used to run the function. See the Function model for more about implementation types.


 * Implementation ID
 * The persistent ID, if there is one, of the implementation used to run the function. See the Function model for more about persistent IDs.

Orchestrator metadata

 * Orchestration start time
 * Wall clock time when orchestration began, given to millisecond precision, in Coordinated Universal Time (UTC).


 * Orchestration end time
 * Wall clock time when orchestration finished, given to millisecond precision, in Coordinated Universal Time (UTC).


 * Orchestration duration
 * The time elapsed, given in milliseconds, between Orchestration start time and Orchestration end time.


 * Orchestration CPU usage
 * CPU time used by the orchestrator during the interval between Orchestration start time and Orchestration end time, given in milliseconds, as reported by the Node.js method process.cpuUsage().


 * Orchestration memory usage
 * Orchestrator memory allocation at the moment when the orchestrator finished handling the function call, as reported by the Node.js method process.memoryUsage().


 * Orchestration server
 * The virtual host on which the orchestrator ran while handling the function call, as reported by the Node.js method os.hostname(). As of this writing, this value is a Docker container ID.

Evaluator metadata

 * Evaluation start time
 * Wall clock time when evaluation began, given to millisecond precision, in Coordinated Universal Time (UTC).


 * Evaluation end time
 * Wall clock time when evaluation finished, given to millisecond precision, in Coordinated Universal Time (UTC).


 * Evaluation duration
 * The time elapsed, given in milliseconds, between Evaluation start time and Evaluation end time.


 * Evaluation CPU usage
 * CPU time used by the evaluator during the interval between Evaluation start time and Evaluation end time, given in milliseconds, as reported by the Node.js method process.cpuUsage().


 * Evaluation memory usage
 * Evaluator memory allocation at the moment when the evaluator finished handling the function call, as reported by the Node.js method process.memoryUsage().


 * Evaluation server
 * The virtual host on which the evaluator ran while handling the function call, as reported by the Node.js method os.hostname(). As of this writing, this value is a Docker container ID.

Executor metadata

 * Execution CPU usage
 * CPU time used by the executor, given in milliseconds, as reported by a property of the value returned from the corresponding Node.js method.


 * Execution memory usage
 * Memory used by the executor, given in bytes, as reported by a property of the value returned from the corresponding Node.js method.

Errors
Errors are currently reported, as instances of the Error type (Z5), from the orchestrator and evaluator components. Error conditions involving an executor are currently reported from the evaluator that spawned the executor, but in the near future we expect to begin reporting errors directly from executors. In rare circumstances, it's also possible that an error raised in the WikiLambda component might be reported.


 * Error(s)
 * An error that has been returned from the function call, presented in summary form for readability. Note that an error may have other errors nested within it.

Metadata for tester runs
Each run of a tester involves running two functions:


 * 1) The function being tested is run first.
 * 2) A result-checking function is then run to determine whether the result of the first function call is correct.

If the result of (1) is correct, and no errors arise in the execution of (2), the metadata dialog for the tester shows exactly the metadata for (1). If, on the other hand, the first function call has returned an incorrect result, the metadata dialog also shows these two metadata elements, in addition to the metadata returned for (1):


 * Expected result
 * The result expected from (1), as defined by the tester.


 * Actual result
 * The result actually returned from (1).

Similarly, if an error arises in the execution of (2), that error is displayed along with the metadata returned for (1):


 * Validator error(s)
 * An error that has been returned from (2), presented in summary form for readability.

Testers are instances of the Tester type (Z20), and are described in greater detail in the Function model.
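
The two-step tester flow described above can be sketched as follows; the helper names are illustrative only, not the actual Wikifunctions code:

```javascript
// Hypothetical sketch of a tester run: step (1) runs the function being
// tested, step (2) runs a result-checking function on its output. The
// extra Expected/Actual metadata elements appear only when (2) reports
// an incorrect result.
function runTester(testedFn, args, checkFn, expected) {
  const actual = testedFn(...args);       // (1) run the function being tested
  const pass = checkFn(actual, expected); // (2) run the result-checking function
  const metadata = {};                    // metadata from call (1) would be merged in here
  if (!pass) {
    // An incorrect result adds these two extra metadata elements.
    metadata.expectedResult = expected;
    metadata.actualResult = actual;
  }
  return { pass, metadata };
}
```

A validator error in step (2) would similarly be attached to the metadata from step (1), as described above.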

Caching of test results and metadata
Test results and metadata from tester runs are cached in a database as a performance optimization. As long as there have been no changes to the tested function, the tested implementation, or the tester itself, the cached metadata remains valid and it is unnecessary to rerun the tester.
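
This invalidation rule can be sketched as a cache keyed on the revisions of all three pieces, so that a change to any one of them produces a new key and forces a rerun. This is a hypothetical illustration of the rule, not the actual storage implementation:

```javascript
// Hypothetical sketch of test-result caching: the cache key combines the
// revisions of the tested function, the implementation, and the tester.
// A change to any of the three yields a new key, so the stale entry is
// simply never hit again and the tester is rerun.
const resultCache = new Map();

function cachedTesterResult(functionRev, implRev, testerRev, runTester) {
  const key = `${functionRev}:${implRev}:${testerRev}`;
  if (!resultCache.has(key)) {
    resultCache.set(key, runTester()); // rerun only when something changed
  }
  return resultCache.get(key);
}
```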