Product Safety and Integrity/Account Security/Securing User-Managed Code

From mediawiki.org

(For more details about the March 2026 user script incident, jump to the incident FAQ below.)

Wikimedia projects offer users unparalleled flexibility to add custom code to their own sessions – and, for some privileged users, code that runs across entire wikis. This is an extremely empowering model that reflects the community-driven nature of the projects, as well as the reality that it is difficult to build every kind of high-efficiency feature directly into the platform to support the intense work of volunteers.

A major danger of this system is that risks are often taken on behalf of Wikimedia users without those users' knowledge or participation. This can happen through other users directly editing site-wide code, through code dependencies among scripts and gadgets, or even through dependencies on the integrity of third-party servers not controlled by the Wikimedia Foundation or the community.

There will always be risk in a system built around user-controlled (which can also mean attacker-controlled) code. For this system to work in the long run, it needs to be secure enough that these risks are tolerable, yet powerful enough that it still solves real user problems. We are not yet at the right balance. There have been successful attacks on targeted volunteers in the past that exploited risks in this system, and the Wikimedia Foundation and the communities continue to discover malicious code.

Since late 2025, our team has been researching ways to better secure this system, and in 2026 we are beginning to implement them. This work mainly covers user scripts, gadgets, and the management of site-wide interface code.

Areas of work

Content Security Policy

This has been the area we have put the most work into so far.

Content Security Policy (CSP) is a technical security standard that allows a website to specify (among other things) what servers the user's web browser is allowed to contact while the user is on that website.

Since January 2026, we've globally enabled a CSP in report-only mode, which tells users' browsers not to block anything, but simply to report back which third-party services are used on the wikis.
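
As a rough illustration of the mechanism, a report-only policy is delivered as an HTTP response header; switching the header name to Content-Security-Policy makes the same directives enforcing. The header names below are standard CSP syntax, but the directive values are illustrative, not Wikimedia's actual policy:

```
Content-Security-Policy-Report-Only: default-src 'self' *.wikimedia.org;
    report-uri https://example.invalid/csp-report
```

With this header, a browser that sees a page load a resource from an unlisted domain still allows the load, but sends a JSON violation report to the report-uri endpoint.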

We used the data from this process to develop an enforcing Content Security Policy that explicitly allows many of the third-party services currently in use, while blocking new ones. The goal is to avoid breaking most existing user scripts, while preventing new user scripts from relying on new third-party services.

We had been planning on deploying this toward the end of March 2026, but we accelerated its deployment after a user script incident. This CSP is now in place across all Wikimedia wikis. (If this broke your script or gadget, see below for more information.)

Since this CSP was intended to preserve most existing user script functionality, it covers a large range of different third-party services. It is still much too wide to be a safe long-term policy for Wikimedia projects.

We expect to coordinate with script and gadget authors to identify different kinds of use cases, and how we can narrow this set of third parties further over time. This may involve migrating code to a more defined and trusted set of locations, or shifting more code into the platform, or other solutions depending on the use case.

Site-wide code editing

Users must reauthenticate before they can edit site-wide scripts (e.g. MediaWiki:Common.js) and gadget code. This restriction was introduced in March 2026 after a user script incident, and makes it more difficult for malicious code to spread itself from one user to many others.

We are planning to improve the technical infrastructure behind this reauthentication system, and make the user experience clearer. After that, we plan to apply reauthentication requirements to other sensitive actions as well.

Code analysis

We're trying out traditional static analysis tools (like semgrep) and dynamic analysis, customized for user-managed code, to identify unexpected and suspicious behavior in user scripts.

We are also interested in calculating "risk scores" that could be used as the basis for other security features.
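
To make this concrete, a pattern-based scanner can flag suspicious constructs and sum their weights into a rough risk score. The patterns, names, and weights below are invented for illustration; they are not the rules used by the Foundation's actual tooling:

```javascript
// Minimal sketch of a pattern-based scanner for user-managed code.
// Patterns and weights are illustrative assumptions, not real rules.
const RISKY_PATTERNS = [
  { name: 'dynamic-eval', re: /\beval\s*\(/, weight: 5 },
  { name: 'external-fetch', re: /\bfetch\s*\(\s*['"]https?:\/\//, weight: 3 },
  { name: 'script-injection', re: /createElement\s*\(\s*['"]script['"]\s*\)/, weight: 4 },
];

function scanUserScript(code) {
  // Collect every pattern that matches the script's source text.
  const findings = RISKY_PATTERNS.filter((p) => p.re.test(code));
  return {
    findings: findings.map((p) => p.name),
    riskScore: findings.reduce((sum, p) => sum + p.weight, 0),
  };
}
```

A real analyzer would work on a parsed syntax tree (or observe runtime behavior) rather than regular expressions, but the output shape – named findings plus an aggregate score – is the kind of signal a risk-scoring system could consume.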

Safe import

We're looking into the possibility of developing a "safe import" function that could apply rules at import time and refuse to import scripts that don't meet them.

These rules could be based on risk scores from code analysis, on the permissions available to the user running the script, or on other factors identified by the Wikimedia Foundation and the community as risky.
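
One way such a gate might combine those signals is sketched below. The function name, thresholds, and decision shape are hypothetical; no such API exists yet. (The `editsitejs` right is real MediaWiki permission for editing site-wide JavaScript.)

```javascript
// Hypothetical "safe import" gate. Thresholds and structure are
// assumptions for illustration, not a planned implementation.
function safeImport(script, context) {
  // Rule 1: refuse scripts whose analysis risk score is too high for anyone.
  if (script.riskScore >= 8) {
    return { imported: false, reason: 'risk score too high' };
  }
  // Rule 2: apply a stricter threshold when the importing user holds
  // sensitive rights, since a compromise would have a larger blast radius.
  if (context.userRights.includes('editsitejs') && script.riskScore >= 4) {
    return { imported: false, reason: 'risky script on a privileged account' };
  }
  return { imported: true, reason: null };
}
```

The useful property of this shape is that the decision happens at import time, before the script ever runs, so even a worm that would behave maliciously at runtime never gets the chance.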

Roadmap

April to June 2026:

  • Audit staff accounts and remove unnecessary privileges.
  • Apply stricter reauthentication requirements for staff use of sensitive privileges.
  • Simplify the process of reauthentication for site-wide code editing, and adapt it to cover other sensitive actions by both staff and volunteers.
  • Continue refining the enforcing Content Security Policy.
  • Dynamically analyze user-managed code for risky behavior.

FAQ

Enforcing Content Security Policy

Why is it such a big deal to allow third-party services for use in user scripts? The scripts I use are safe, and I'm choosing to trust those third parties.

The biggest risks with third-party services are not about how good-faith users intend to use them, but about how bad-faith users could also use them, or about the risks presented by the third-party service itself.

My user script or gadget was broken when you added an enforcing CSP. How do I get it fixed?

If your script or gadget was already in use before March 2026 and it was broken, please file a Phabricator task, and make it a subtask of T419265. Please include the external domain(s) that need to be added to the CSP; you should be able to find these in the error messages printed to the browser console when the CSP is violated.
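
When a policy includes a reporting endpoint, violations also arrive as JSON reports whose shape is defined by the CSP specification. A small sketch of pulling the blocked domain out of such a report (the sample values below are invented):

```javascript
// Extract the blocked host from a standard CSP violation report.
// The "csp-report" shape follows the CSP specification; the sample
// report is made up for illustration.
function blockedDomain(report) {
  const uri = report['csp-report']['blocked-uri'];
  return new URL(uri).hostname;
}

const sample = {
  'csp-report': {
    'document-uri': 'https://meta.wikimedia.org/wiki/Example',
    'violated-directive': 'connect-src',
    'blocked-uri': 'https://example-cdn.invalid/lib.js',
  },
};
```

The hostname extracted this way is exactly the kind of domain to list in the Phabricator task.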

We will work to accommodate your request and provide a reasonable solution, but we cannot guarantee that we'll accommodate every request in full. The current CSP is a transitional policy, and we plan to work with developers to narrow the set of allowed third parties over time.

How do I get a new third-party domain added to the Content Security Policy, so I can use it in my user script or gadget?

The above process is only for third-party domains that were already in use. Going forward, we are focused on reducing, not increasing, the number of third-party services that can be connected to directly from a live user session.

Script and gadget developers should consider alternative approaches that don't require their scripts to contact a third-party service. For example:

  • Load libraries from https://cdnjs.toolforge.org, rather than from third-party CDNs. If the library you need isn't there, you can ask for it to be added.
  • Fonts, images, JSON files, scripts (JS) and styles (CSS) can be hosted on-wiki or in Toolforge/WMCS.
  • Host your tool in Toolforge, instead of on-wiki as a user script.
  • Put most of the logic into a Toolforge tool, and create a small user script that integrates the Toolforge tool on wiki.
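
As a sketch of the last pattern, a user script can keep its logic in a Toolforge tool and only build and send the request on-wiki. The tool name and endpoint below are placeholders, not a real tool:

```javascript
// Build the request URL for a hypothetical Toolforge tool endpoint.
// "my-tool" and its /api/check path are invented for illustration.
function toolApiUrl(pageTitle) {
  return (
    'https://my-tool.toolforge.org/api/check?title=' +
    encodeURIComponent(pageTitle)
  );
}

// In an actual user script, the result would be fetched and rendered
// on-wiki, along the lines of:
//   fetch(toolApiUrl(mw.config.get('wgPageName'))).then(/* show result */);
```

Because the heavy logic lives in Toolforge (an allowed origin), the on-wiki script stays small and never needs a new third-party domain added to the CSP.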

March 2026 user script incident

How did the incident begin, and what did the malicious user script do?

The incident began on March 5, 2026, when a Wikimedia Foundation staff member was testing performance and third-party resource usage of user scripts, using browser-based networking tools. A principal goal was to understand which third-party services were already in use by existing user scripts. Static analysis of the stored script code would not be enough, since resources can be fetched dynamically, and some fetched code could in turn cause other resources to be fetched.

This testing did not occur in a properly sandboxed environment; instead, it loaded various user JavaScript files into the staff member's Meta-Wiki global.js page. The testing encountered dormant worm code residing in a user JavaScript page on Russian Wikipedia, which was subsequently executed on Meta-Wiki in the context of a privileged user account.

The worm abused the staff member's privileges to add itself to MediaWiki:Common.js on Meta-Wiki, a site script that is loaded for every visitor on every page on that wiki. Once the worm had added its code there, it infected 97 users who visited Meta-Wiki during a 23-minute period.

For those users affected, the script deleted pages on their behalf every time they visited any page on Meta-Wiki. The worm also loaded some non-malicious external resources, causing each affected user's browser to briefly connect to two third-party services: ajax.googleapis.com (a code hosting service operated by Google) and cyclowiki.org (a Wikipedia-like wiki focused on Russian speakers). This sent the users' IP addresses, user agents, and other metadata that accompanies standard web requests to those services, but did not share information specific to the affected users' Wikimedia accounts. The script also attempted to connect to a domain (basemetrika.ru) that may at one point have been controlled by the original script author; however, this domain was unregistered and inactive during the incident, so no connection was established.

Wikimedia wikis were then put into read-only mode for around two hours while the incident was investigated and the worm was contained and removed. User-managed code was disabled for most of the day until cleanup and communications were complete, and some protections (an enforcing CSP and reauthentication for site-wide code editing) were put into place.

This was a significant operational error on the Wikimedia Foundation's part, and we are sorry for the disruption and cleanup that this incident caused.

This incident also demonstrated some of the risks in the Wikimedia platform, which allowed an operational error like this to escalate as quickly and severely as it did.

Why did you add restrictions like CSP and reauthentication after the incident?

The testing was happening because we were already planning to add an enforcing CSP very soon, and were in the final stages of analysis before doing so. We were similarly well aware of the risks around editing site-wide code, and had planned to require reauthentication for that (and other sensitive actions) later in 2026.

Due to the risks raised by the incident and increased attention on the weaknesses of this system, we decided to accelerate these plans and put some protections in place before re-enabling user scripts.

The reauthentication requirement makes it more difficult for malicious scripts to take privileged actions, or for a worm to spread itself using site JS. The enforcing CSP prevents malicious code from loading external code or "phoning home" to its author. The worm in this incident did try to phone home, but failed because the domain it was using had expired and was unregistered at the start of the incident.

What are you doing to prevent an incident like this from happening again?

We have conducted an internal retrospective on the incident, and are updating our processes and staff expectations around handling third-party code, and use of accounts with staff rights.

Our main focus is on designing systems that make it hard to make mistakes like the one that caused this incident, and that limit the damage when mistakes do happen. So, we are working on technical changes to 1) reduce the risk that staff accounts can pose to the wikis, and 2) reduce the risk that user-managed code can pose to the wikis and all of their users.

For staff accounts, we are not publicly sharing all of the steps we are taking. But one straightforward step is to significantly reduce the number of staff members who have potentially destructive privileges active on their accounts. As we impose reauthentication requirements for privilege escalation, we are considering applying them to staff accounts first, before adapting them for the use of sensitive rights by volunteer accounts.

For how we will reduce the risk of user-managed code in general, see our roadmap and plans above.

Contact

Subscribe to the newsletter