Extension:WikiLambda/Jupyter kernel proposal

''I've been doing some research on interpreters and debuggers which brought me right back around to Wikifunctions and how to implement them using multiple languages in a reasonable way that also lets function developers introspect their code as it runs. --Brion Vibber (WMF) (talk) 02:09, 26 February 2021 (UTC)''

There are a few pieces I'm looking at for the Wikifunctions backend and frontend story:
 * a sandboxed function evaluator that supports shoving arbitrary source code into popular programming language runtimes and returning output
 * preferably that also supports single-step debugging where the runtime supports it
 * a debugger UI that integrates well into MediaWiki frontend
 * being able to use the same debugger UI for client-side interpreters (for other projects like interactive diagrams)

Jupyter
I'm inclined to look further at Jupyter for the backend and the debug protocol; it's a project with years of experience sandboxing scripting languages for interactive execution from web pages.

A few things Jupyter does well:
 * already handles languages like Python (currently Python and C++ are the only languages with debugging support, which is new)
 * already deals with isolation/safety issues because it's often exposed to the web
 * has a pub-sub protocol between the kernels and the frontends, which we can probably adapt to allow code to invoke additional functions in a new kernel

How it generally works at the runtime/execution level:
 * "kernel" manages the programming language runtime, creating an interpreter context for each invocation
 * sends messages back and forth to management & frontends over the Jupyter protocol
 * to support debugging, kernel interfaces with the runtime's internal debugging API and exposes events, sources, and data
 * somewhere in all this there should be a container or chroot boundary and various safeties. need to confirm we understand how the pieces work.

The frontend in a Jupyter notebook would be replaced by the service layer we expose to MediaWiki, allowing functions to be invoked for use in a wiki page rendering or for an interactive debugging session. For the latter, we'd want a debugger frontend that integrates well with MediaWiki, showing source code and variables and letting you set breakpoints and such.

Performance
Runtime performance concerns include:
 * there will likely be some constant latency to spinning up a new execution environment for each function invocation.
 * reusing a running kernel for multiple invocations would leak global state across calls with different parameters, so it isn't safe. ;_;
 * ...unless we have a specialized library for pure functions that have no mutable state, and can execute multiple calls in-process!
 * source parsing/compilation may be slow on larger modules.
 * some language runtimes can cache compiled bytecode, which might reduce this cost somewhat if we can invoke that path by storing the source on the filesystem and importing it, rather than injecting the source directly
 * Sending large data sets between functions will be slow, incurring serialization/deserialization time. Best practice for function authors should be to pass around references to data sources when possible instead of raw data blobs.
 * Specific support for passing large buffers without a copy would be neat, but might be complex and probably won't work well with complex data types.

Languages like JS that use JIT compilation and do on-stack replacement will still have a chance to optimize code that runs long loops -- a Mandelbrot fractal generator would run reasonably fast, for instance, but this compilation would happen every time the fractal function was invoked. So better to call once per frame and return a buffer than call once per pixel!
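The per-frame-vs-per-pixel point can be sketched concretely: one invocation computes a whole row of Mandelbrot escape-time values and returns a buffer, so the per-call overhead is paid once per row rather than once per pixel. Everything here (the coordinate mapping, the function shape) is illustrative, not a proposed API.

```javascript
// Compute a whole row of Mandelbrot iteration counts in one invocation and
// return a buffer, instead of paying per-call overhead for every pixel.
// `y` is taken directly as the imaginary coordinate in this sketch.
function mandelbrotRow(y, width, maxIter) {
  const row = new Uint16Array(width);
  for (let x = 0; x < width; x++) {
    // Map the pixel column into the real axis, roughly [-2.5, 1.0].
    const cr = (x / width) * 3.5 - 2.5;
    const ci = y;
    let zr = 0, zi = 0, iter = 0;
    while (zr * zr + zi * zi <= 4 && iter < maxIter) {
      const tmp = zr * zr - zi * zi + cr;
      zi = 2 * zr * zi + ci;
      zr = tmp;
      iter++;
    }
    row[x] = iter; // escape-time value for this pixel
  }
  return row; // one buffer per call, not one call per pixel
}

const row = mandelbrotRow(0, 64, 100);
```

A long-running loop like this is also exactly the shape of code a JIT with on-stack replacement can optimize mid-run.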

Debugging
The Jupyter debugger protocol is based on the debug adapter protocol used by VS Code and other projects, extended a bit to cover Jupyter's model, where there's a web service intermediary between the execution kernels and the web front-end. We could use this protocol with the existing Python and C++ kernels, and could perhaps help add debugging support to a Node.js kernel, to get our primary working languages going.

We would have the additional complication that a virtual call stack may include functions running in multiple kernels, so we need to be able to stitch that into a cohesive debugging view of multiple sources and languages.

A MediaWiki-friendly debugger frontend could use our standard widget sets, pull sources from the wiki, etc., and manage the function invocation and debugging protocol via a persistent WebSockets connection. I'm not sure if we have anything else using WebSockets in primary production right now. This would mean a long-running connection to the client during the debug session, and similarly the kernel for the running function may stay open for minutes at a time or more while execution is paused and single-stepped, variables inspected, breakpoints set and cleared, and sources and stack traces perused. So the execution kernels would use additional memory over a much longer period than a normal function invocation, even if they aren't using CPU continuously.

If we wanted, we could potentially rework Scribunto on this framework and allow interactive debugging of Lua modules. Something to consider.

The debugger UI would also be useful for client-side widgets using a sandboxed interpreter, which I'm researching for a future project. The interpreter would need to expose the low-level Jupyter debugging API, and the debugger would just connect to a virtual event bus instead of a WebSocket.

Pure function kernel and optimizations
Early planning thoughts on functions have centered on the pure application of functions as calls with lists of arguments, which may themselves be further calls; this essentially defines a very simple programming language based on call expressions, with no variables (but with the promise of caching/memoization to speed up repeated calls). Many functions may be implementable this way, especially those that mainly call other functions and filter their data.

Because there's no global state that can be modified, this avoids some of the complications of general programming language implementations for the special case of calling another function that's implemented in the same way -- the same running interpreter process can simply load up an additional function definition and execute it in-process, without having to transfer arguments and results through the Jupyter protocol.

The language runtime extensions that expose the Wikifunctions service could also implement the pure-function interpreter in-process within a JavaScript or Python or Lua function, avoiding RPC overhead. This could be a large performance boost for common simple filter functions, for instance.
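A minimal sketch of that in-process idea, assuming a tiny call-expression model where a "call" is a function name plus arguments that may themselves be calls. Because functions are pure, results can be memoized by the serialized form of the call; the function names and expression shape here are all invented for illustration.

```javascript
// In-process evaluator for the pure call-expression model: nested calls are
// evaluated recursively in the same interpreter, with no RPC round-trips,
// and memoized since purity makes results cacheable. Names are invented.
const functions = {
  add: (a, b) => a + b,
  mul: (a, b) => a * b,
  upper: (s) => s.toUpperCase(),
};

const memo = new Map();

function evalCall(expr) {
  // Literals (non-call values) evaluate to themselves.
  if (typeof expr !== "object" || expr === null) return expr;
  const key = JSON.stringify(expr);
  if (memo.has(key)) return memo.get(key); // pure => safe to memoize
  const args = expr.args.map(evalCall);    // recursively evaluate sub-calls
  const result = functions[expr.fn](...args);
  memo.set(key, result);
  return result;
}

// (2 + 3) * 4, with the inner call evaluated in the same process.
const result = evalCall({ fn: "mul", args: [{ fn: "add", args: [2, 3] }, 4] });
```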

Synchronous RPC calls
When making a call into another Wikifunction that requires spinning up a new execution state (possibly in another language):
 * caller's runtime sends an event up to the function service with the target function and the serialized arguments
 * caller blocks waiting for a response...
 * function service launches the callee kernel and invokes it with the serialized arguments
 * callee runs
 * function service receives the serialized result from the callee kernel, and shuts it down
 * function service stores any cached data with suitable invalidation hooks registered
 * function service sends the result or error on to the caller's kernel
 * caller unblocks, translates the return value to a native programming language value, and returns.
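The steps above can be simulated in one process to show the data flow; the kernel table and service functions here are stand-ins, not a real API.

```javascript
// Simulates the synchronous call sequence: the "function service" launches a
// callee kernel, passes serialized arguments, collects the serialized result,
// and hands it back to the blocked caller. All names are illustrative.
const kernels = {
  // A callee kernel: deserializes args, runs, serializes the result.
  "Z-concat": (serializedArgs) => {
    const args = JSON.parse(serializedArgs);
    return JSON.stringify(args.join(""));
  },
};

function functionService(targetFn, serializedArgs) {
  const kernel = kernels[targetFn];                // launch the callee kernel
  const serializedResult = kernel(serializedArgs); // callee runs
  // (a real service would cache the result here and register invalidation hooks)
  return serializedResult;                         // sent to the caller's kernel
}

function callerInvoke(targetFn, nativeArgs) {
  const serializedArgs = JSON.stringify(nativeArgs);              // caller serializes
  const serializedResult = functionService(targetFn, serializedArgs); // blocks
  return JSON.parse(serializedResult); // translate back to a native value
}

const greeting = callerInvoke("Z-concat", ["Hello, ", "world"]);
```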

Async code
It occurs to me that this system would allow calls to other functions to proceed as non-blocking async RPC calls rather than blocking for a reply.

For JS code, this would allow using async functions and the await operator, or Promises with callbacks; other languages have similar structures. This might be useful for getting some parallelism, but it's also dangerous -- if you make ten thousand invocations of a sync function in a loop, you've only fired up one kernel which runs one function at a time in sequence; but if you fire off ten thousand async calls in a row and only later wait on them, you'll instantly fill the function execution engine's queue.

For this reason I would recommend either using synchronous function invocation only, or very carefully defining how much processor time and how many simultaneously active subprocesses can be spawned to reduce the impact of fork-bombs.
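If we did allow async calls, one way to bound "simultaneously active subprocesses" is a small worker-pool limiter, so ten thousand fired-off calls drain through a fixed number of in-flight RPCs instead of flooding the queue. This is a generic sketch; the limit value and the fake RPC are illustrative.

```javascript
// Cap how many async sub-calls may be in flight at once: `limit` workers
// each pull the next item until the work runs out, so at most `limit`
// RPCs are active simultaneously.
async function mapWithLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++;          // claim an index (safe: single-threaded JS)
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(Array.from({ length: limit }, worker));
  return results;
}

// Stand-in for an async Wikifunction RPC.
const fakeRpc = async (n) => n * n;

const done = mapWithLimit([1, 2, 3, 4, 5], 2, fakeRpc);
```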

Callbacks and first-class functions
Another possibility to consider is whether to support sending first-class function objects as arguments and return values, which enables callbacks and user-defined control structures like foreach loops over custom data structures.

This would potentially be a big addition to the model: it would make certain types of programming MUCH easier that otherwise must be split over multiple functions, with the callback's name and any necessary closure state manually passed in as parameters to the control function. That sort of pattern is common in today's MediaWiki templates, and isn't great. Being able to just pop a closure inline in your main function would be super nice.

Different from the async model, where the runtime returns to a low layer of the stack and then waits on additional data for its event loop, the callback model would keep the stack where it is when making a call, but would accept either a return event or a call event. A call event would be handled by the runtime invoking the registered callback function object, then passing its value back to the other side of the connection and waiting for the next message. Unlike async, there's no danger of using more than your share of allocated CPU time, since calls block on one side of the connection or the other.

Kernels passing function references around would keep some sort of reference count; this should integrate with the runtime language's garbage collector, so that when a reference gets collected, the registered UUID (or whatever identifier) can be freed on the caller's end. If passed from one kernel to another, ownership would be preserved and the call could be made from a third or fourth function, each of which would maintain its own reference while it lived.

Note that callbacks would have significantly less invocation overhead than invoking a fresh Wikifunction; there would still be data serialization and the costs of the RPC (maybe even local network transit between servers) but it would not have to set up a new execution state or parse or compile any new code. So calling a callback in a loop would have less per-iteration overhead than calling forward to another function written in JS/Python/etc.

Not sure if this is a good idea, but I really like callbacks and I think a pure-function-based system will get a big usability boost from being able to use filter functions and custom control structures.

Wikidata queries and query building
Let's say we have two functions: one which returns a list of Q references based on a CSV page in Commons, and another which takes that list and filters it based on the presence of some property:
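(The original code examples from this page are not preserved here; the following is a sketch of what such a pair might look like. The `wf.call` lookup-by-name-string helper, the Commons CSV function, and the property-check function are all invented names, with canned data standing in for the real services.)

```javascript
// Stand-in for looking up and invoking another Wikifunction by name string
// at runtime, with canned implementations instead of real service calls.
const wf = {
  call(name, ...args) {
    const impls = {
      "Commons-CSV-column": (page, col) =>
        ["Q42", "Q64", "Q90"],              // pretend: one column of the CSV page
      "has-property": (q, prop) => q !== "Q64", // pretend: Wikidata property check
    };
    return impls[name](...args);
  },
};

// Function 1: list of Q references from a CSV page on Commons.
function itemsFromCsv(page) {
  return wf.call("Commons-CSV-column", page, "item");
}

// Function 2: filter that list on the presence of some property.
// Note the per-item round-trip to the property-check function.
function filterByProperty(items, prop) {
  return items.filter((q) => wf.call("has-property", q, prop));
}

const filtered = filterByProperty(itemsFromCsv("Data:Example.csv"), "P31");
```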

This at least avoids round-tripping for every item in the filter, but if we were going to use this list and either pop it back into Wikidata to fetch some properties for display, or add some more filters to the query to avoid transferring data we didn't need, it might be nice to do a single round-trip and avoid having to send the long list of IDs to and from the query server multiple times. Especially if one were to refactor the data-file provider to be driven from Wikidata or another database.

Might be worth thinking about good interfaces for reading portions of large files, making queries of large CSV or RDF data sets, and being able to compose the filtering to minimize roundtrips while remaining both ergonomic and performant in a multi-language, multi-process RPC scenario!

Idempotency and non-determinism
The general idea of Wikifunctions is to have idempotent, deterministic functions whose output depends only on their inputs. But in practice, there are likely to be many sources of non-determinism, which need to either be shrugged at and left as-is, plugged/neutered, or taken into proper account with a suitable cache invalidation system.

Sources of non-determinism:
 * language features that are deliberately non-deterministic, like random number generators (Math.random and friends)
 * language features that return changing information about the world, like clocks and timers (Date.now and friends)
 * the state of the language runtime itself, like checking for a feature that isn't available yet in the version of the kernel that's running when the function is deployed
 * reading a sandboxed filesystem, if the language allows it, might vary over time depending on the container details
 * reading data from a service, such as loading a file from Wikimedia Commons or making a query to Wikidata

The last (using services that we provide) is the easiest to plan for, because we'll control the API for accessing them and can treat the state of the world as an input to the function for caching purposes. A function invocation that caches its output after reading a Commons file could register a cache-invalidation hook for when that particular filename gets re-uploaded or deleted, which would invalidate the cached function results and bubble up to anything that cached data based on them.
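A minimal sketch of that hook mechanism, assuming an in-memory registry; the cache keys, resource keys, and event wiring are all invented for illustration.

```javascript
// Treat "the state of the world" as a cache input: a function run that read
// a Commons file registers an invalidation hook keyed on that file, and a
// re-upload purges every cached result that depended on it.
const resultCache = new Map();       // callKey -> cached result
const invalidationHooks = new Map(); // resourceKey -> Set of dependent callKeys

function cacheResult(callKey, result, resourcesRead) {
  resultCache.set(callKey, result);
  for (const res of resourcesRead) {
    if (!invalidationHooks.has(res)) invalidationHooks.set(res, new Set());
    invalidationHooks.get(res).add(callKey);
  }
}

function onResourceChanged(resourceKey) {
  // e.g. fired when a Commons file is re-uploaded or deleted
  for (const callKey of invalidationHooks.get(resourceKey) ?? []) {
    resultCache.delete(callKey); // would also bubble up to dependent caches
  }
  invalidationHooks.delete(resourceKey);
}

cacheResult("rowCount(File:Example.csv)", 120, ["commons:File:Example.csv"]);
onResourceChanged("commons:File:Example.csv"); // re-upload purges the cache
```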

World state might be avoidable by patching runtimes (eg to make Math.random use a consistent seed number, or replace the JS Date constructor and Date.now etc with functions that return a constant stub value, or with functional versions that register a timeout-based cache invalidation hook) but this could be error-prone in that it's easy to miss something and end up with bad cached data.
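The patching idea can be sketched for two obvious cases: a seeded replacement for Math.random and a frozen Date.now. The seeded generator here is a small LCG chosen purely for illustration, and as noted, a thorough version would have to patch much more (the Date constructor, timers, etc.).

```javascript
// Neuter two sources of non-determinism before running user code:
// a deterministic, seeded Math.random and a constant-valued Date.now.
function makeDeterministicEnv(seed, fixedTimestamp) {
  let state = seed >>> 0;
  const seededRandom = () => {
    // Small linear congruential generator; any fixed PRNG would do.
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 2 ** 32;
  };
  Math.random = seededRandom;        // same seed => same sequence every run
  Date.now = () => fixedTimestamp;   // constant stub value
  // (an incomplete patch like this is exactly the error-prone part:
  // it's easy to miss `new Date()`, timers, locale data, etc.)
}

makeDeterministicEnv(12345, 1614297600000); // hypothetical deploy-time constant
const a = Math.random();
makeDeterministicEnv(12345, 1614297600000); // reset with the same seed
const b = Math.random();
```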

Wikidata, though, is generally tricky to cache on, since many things could change your result set. I don't know how hard it would be to devise a generic system for deriving cache-invalidation hooks from a query, but it sounds hard.

Cache updates and getting behind
For MediaWiki templates, we went with a system that re-renders all templates on demand in case of page cache invalidation, but the way we invalidate pages' caches is to go through the list of all the pages registered as using the template and update their cache-invalidation time. For very large numbers of affected pages this can be done in many batches and can take quite a bit of time.

I don't know if we want to plan a more general improvement to this sort of invalidation technique, but we might want to specifically plan for what happens when a large invalidation happens of pages using a certain function. For a common function this could be millions of pages, and if we have to re-invoke every function on every page as they get visited and re-rendered there might be a glut of new invocation calls.


Some open questions:
 * can we continue to serve stale cached data under certain circumstances?
 * can we partition a particular function or set of functions to use a "fair" amount of resources and avoid starving other work?
 * would throttling distinguish between top-level functions and the functions they call?
 * or invocations from cache invalidations versus invocations with no cached data?

Things to consider. I do kind of fear a giant out-of-control spike of usage from some rogue function that's spamming a bunch of loops -- or worse, a simple legit tweak to the source code of a fundamental function that's used in low-level templates everywhere, causing a re-render of every function on every page of Wikipedia. :D

Linking and explicit imports
The JS code examples above load other functions up by a name string at runtime. This is not ideal, because the function can't be looked up ahead of time. It might not matter much for a one-off call, but in sequences and loops you could end up making a *lot* of round-trips to the database or memcache to look up function references. If the functions being called are pure functions that can be implemented by our runtime in-process, then you might get an advantage from pre-fetching the sources of all linked functions.

This requires having the names of to-be-called functions available statically, either derived from the function source code or in some sort of metadata. JS or Python could use import statements, perhaps.
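A sketch of the metadata route, assuming callees are declared statically alongside the function; the metadata shape, prefetch helper, and source-lookup API are all invented names.

```javascript
// Declare callees statically so their sources can be prefetched in one
// batched round of lookups before execution, instead of resolving a name
// string (and hitting the database/memcache) at every call site.
const functionMetadata = {
  name: "filter-items-by-property",
  language: "javascript",
  // Statically declared dependencies, available without parsing the source:
  imports: ["Commons-CSV-column", "has-property"],
};

async function prefetchImports(meta, fetchSource) {
  const sources = {};
  // All lookups issued up front, in parallel.
  await Promise.all(
    meta.imports.map(async (name) => {
      sources[name] = await fetchSource(name);
    })
  );
  return sources;
}

// Stand-in for a database/memcache source lookup.
const fakeFetch = async (name) => `// source of ${name}`;

const prefetched = prefetchImports(functionMetadata, fakeFetch);
```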