Help:Wikifunctions/Function call metadata

Whenever Wikifunctions runs a function, it collects and reports information about the run, including errors that have been raised and a variety of basic metrics such as the run's duration, CPU usage, and memory usage. The purpose of this information is to provide function contributors and users with some awareness of the performance characteristics of particular function implementations. This information, known as function call metadata, is displayed in the user interface, in a pop-up dialog available in three different settings:


 * 1) Immediately after invoking a function from the Evaluate a function call page
 * 2) When viewing tester status on the page for a particular function
 * 3) When viewing tester status on the page for a particular tester.

When viewing the metadata dialog in setting 1, the user sees metadata from the function run they just requested. When viewing the metadata dialog for a tester in settings 2 and 3, what's shown is the metadata for the most recent run of the tester. Additional information about tester run metadata is given below, in Metadata for tester runs.

Metadata is collected in the orchestrator, evaluator, and executor components. See Function Evaluation for Wikifunctions for general information about these components.

The currently implemented metadata elements are described in the following sections. The italicized headings (e.g., Orchestration start time) are the labels that show up in the metadata dialog for an English-language reader.

Orchestrator metadata
Orchestration start time. Wall clock time when orchestration began, given to millisecond precision, in Coordinated Universal Time (UTC).

Orchestration end time. Wall clock time when orchestration finished, given to millisecond precision, in Coordinated Universal Time (UTC).

Orchestration duration. The time elapsed, given in milliseconds, between Orchestration start time and Orchestration end time.
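The duration metric is simple wall-clock arithmetic over the two timestamps above. A minimal illustration (the timestamps here are invented for the example):

```javascript
// Orchestration duration = end time - start time, in milliseconds.
// The two ISO 8601 UTC timestamps below are made up for illustration.
const start = new Date('2024-05-01T12:00:00.000Z'); // Orchestration start time
const end = new Date('2024-05-01T12:00:00.250Z');   // Orchestration end time
const durationMs = end.getTime() - start.getTime();
console.log(durationMs); // 250
```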

Orchestration CPU usage. CPU time used by the orchestrator during the interval between Orchestration start time and Orchestration end time, given in milliseconds, as reported by Node.js.

Note: Orchestration CPU usage must be interpreted carefully, because it doesn't necessarily reflect CPU time used exclusively for the current function call. Depending on operational configuration and current load, it could reflect time spent on multiple different function calls, because the orchestrator may be configured to handle multiple calls in an interleaved fashion. The implementation of this metric will be revisited in the future, after operational configuration has been more permanently determined. See also Phabricator ticket T314953.

Orchestration memory usage. Orchestrator memory allocation at the moment when the orchestrator finished handling the function call, as reported by Node.js.

Note: Orchestration memory usage must be interpreted carefully, because it doesn't necessarily reflect memory allocation made exclusively for the current function call. Depending on operational configuration, current load, and garbage collection behavior, it could reflect memory needed for function calls handled previously, or concurrently with the current function call. The implementation of this metric will be revisited in the future, after operational configuration has been more permanently determined. See also Phabricator ticket T314953.

Orchestration server. The virtual host on which the orchestrator ran while handling the function call, as reported by Node.js. As of this writing, this value is a Docker container ID.

Evaluator metadata
Evaluation start time. Wall clock time when evaluation began, given to millisecond precision, in Coordinated Universal Time (UTC).

Evaluation end time. Wall clock time when evaluation finished, given to millisecond precision, in Coordinated Universal Time (UTC).

Evaluation duration. The time elapsed, given in milliseconds, between Evaluation start time and Evaluation end time.

Evaluation CPU usage. CPU time used by the evaluator during the interval between Evaluation start time and Evaluation end time, given in milliseconds, as reported by Node.js.

Note: the note for Orchestration CPU usage also applies here.

Evaluation memory usage. Evaluator memory allocation at the moment when the evaluator finished handling the function call, as reported by Node.js.

Note: the note for Orchestration memory usage also applies here.

Evaluation server. The virtual host on which the evaluator ran while handling the function call, as reported by Node.js. As of this writing, this value is a Docker container ID.

Executor metadata
Execution CPU usage. CPU time used by the executor, given in milliseconds, as reported by Node.js.

Note: this metric must be interpreted carefully, because it doesn't necessarily give an accurate report of the total CPU usage by the executor of the current function call. See also Phabricator ticket T313460.

Execution memory usage. Memory used by the executor, as reported by Node.js.

Note: this metric must be interpreted carefully, because it doesn't necessarily give an accurate report of the total memory usage by the executor of the current function call. See also Phabricator ticket T313460.

Errors
Errors are currently reported, as ZErrors, from the orchestrator and evaluator components. Error conditions involving an executor are currently reported from the evaluator that spawned the executor, but in the near future we expect to begin reporting errors directly from executors. In rare circumstances, it's also possible that an error raised in the WikiLambda component might be reported.


 * Error(s).  A ZError that has been returned from the function call, presented in summary form for readability.

Metadata for tester runs
Each run of a tester involves running two functions:


 * 1) the function being tested is run first
 * 2) a result-checking function is then run to determine if the result of the first function call is correct.

If the result of (1) is correct, and no errors arise in the execution of (2), the metadata dialog for the tester shows exactly the metadata for (1). If, on the other hand, the first function call has returned an incorrect result, the metadata dialog also shows these two metadata elements, in addition to the metadata returned for (1):


 * Expected result.  The result expected from (1), a ZObject, as defined by the tester.
 * Actual result.  The result actually returned from (1), a ZObject.

Similarly, if an error arises in the execution of (2), that error is displayed along with the metadata returned for (1):


 * Validator error(s).  A ZError that has been returned from (2), presented in summary form for readability.

Testers are instances of a type defined in, and are described in greater detail in, the Function model.
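The two-step tester flow above, and the extra metadata elements it can produce, can be sketched as follows. This is a hypothetical illustration with invented names (`runTester`, `tester.call`, `tester.expected`); real testers operate on ZObjects rather than plain values:

```javascript
// Hypothetical sketch of a tester run.
// Step (1): run the function under test.
// Step (2): run a result-checking function on the result.
function runTester(tester, runFunction, checkFunction) {
  const actual = runFunction(tester.call);           // step (1)
  const metadata = {};                               // metadata from (1) would also go here
  let passed = false;
  try {
    passed = checkFunction(actual, tester.expected); // step (2)
    if (!passed) {
      // An incorrect result adds "Expected result" and "Actual result".
      metadata.expectedResult = tester.expected;
      metadata.actualResult = actual;
    }
  } catch (err) {
    // An error raised in step (2) is surfaced as "Validator error(s)".
    metadata.validatorErrors = String(err);
  }
  return { passed, metadata };
}

// Usage with toy stand-ins for the two functions:
const tester = { call: [2, 3], expected: 5 };
const add = ([a, b]) => a + b;
const equals = (a, b) => a === b;
console.log(runTester(tester, add, equals).passed); // true
```

A failing run, e.g. `runTester({ call: [2, 3], expected: 6 }, add, equals)`, returns `passed: false` with `expectedResult` and `actualResult` populated, mirroring the dialog behavior described above.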