Talk:Citoid

About this board

Previous archives are at /Archive 1

Folly Mox (talkcontribs)

1. Would you be at all willing to maintain a brief list of known paywalled sources, so that Citoid can apply a "url-access" parameter to citations to such domains? I'm thinking of places like nytimes.com, ft.com, forbes.com, stltoday.com, latimes.com, etc. At present url-access always needs to be added manually, usually after a failed attempt to verify a claim. (A sketch of what I mean follows after this list.)

2. I've noticed that citations to The Guardian consistently render the website / work parameter as "the Guardian". Would you be willing to uppercase the first letter of the website / work parameter for all sources whose name doesn't match the first part of the domain name? There may be sources that prefer a different case styling, but it looks weird in the rendered template. Alternatively, could you uppercase the first letter of the website / work parameter when the first word is "the"? (Again, see the sketch after this list.)

3. An astute unregistered editor noticed at en:Help talk:Citation Style 1#Unix epoch that many sources using the date "1970-01-01" (the Unix epoch) are doing so in error. Would you be willing to discard this date as bogus for sources that are not books, journals, or periodicals?

4. Is this a good place to discuss improvements to Citoid, or would Phabricator work better? I've recently registered an account there.
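Regarding 1, here's a minimal sketch of the kind of mapping I mean (the domain list and function name are hypothetical, not anything Citoid actually ships):

// Hypothetical sketch: map known paywalled domains to |url-access=subscription.
const PAYWALLED_DOMAINS = ['nytimes.com', 'ft.com', 'forbes.com', 'stltoday.com', 'latimes.com'];

function urlAccessFor(url) {
  const host = new URL(url).hostname;
  for (const domain of PAYWALLED_DOMAINS) {
    // match the domain itself and any subdomain, e.g. www.nytimes.com
    if (host === domain || host.endsWith('.' + domain)) return 'subscription';
  }
  return undefined;
}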
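And regarding 2, the casing fix could be as simple as this sketch (assuming we only special-case a leading lowercase "the"):

// Hypothetical sketch: uppercase a leading "the" in the scraped work name.
function fixWorkName(name) {
  return name.startsWith('the ') ? 'The' + name.slice(3) : name;
}
// fixWorkName('the Guardian') === 'The Guardian'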

Folly Mox (talkcontribs)

3. So I realised I'm dumb: web sources should not report a date prior to c. 1995 in any case, so the Unix epoch should probably just be discarded regardless of source type.

FeRDNYC (talkcontribs)

3. The CS1 templates will all reject an |access-date= before Wikipedia's inception regardless of type (see this discussion), so we must be talking about the publication date. The bigger problem is that, unlike with a physical-media citation type, if a website shows a timestamp of 1970-01-01 on some page (which I cannot prove, but believe with near-certainty, happens somewhere in the wild), then that's the only date we have for that source. IOW, it's arguably "correct" to use it in the citation, despite its obvious impossibility.

Folly Mox (talkcontribs)

Yeah, I am talking about |date=, not |access-date=.

The Zotero translators seem to lean pretty heavily on HTML metadata, so it's possible the hypertext document could have a date listed as the Unix epoch, with the actual publication date somewhere in the byline or footer, but the more common scenario is probably like this one I fixed yesterday at en:Yuan Dynasty: https://www.academia.edu/2439642

Here, the service hosting the source (academia.edu) reports a bogus Unix epoch date, which any parser will pick up, but inspecting the actual source document reveals a publication date in 2010.

Folly Mox (talkcontribs)

I'd say that if a genuine web-based source has its only available publication date set prior to the deployment of the World Wide Web in the early 1990s, it's safest to ignore the date rather than use a known incorrect value.
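Something like this sketch is what I have in mind (the 1991 cutoff is my assumption, not an established rule):

// Hypothetical sketch: reject obviously bogus dates on web-based sources.
function plausibleWebDate(isoDate) {
  if (isoDate === '1970-01-01') return false;       // the Unix epoch
  return new Date(isoDate).getFullYear() >= 1991;   // pre-web dates are suspect
}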

The nice thing about book and journal sources is that they'll have more than one service documenting their existence, so if one site is erroneously reporting a Unix epoch date for the source, it can be cross-checked and corrected.

Folly Mox (talkcontribs)

Rereading my comment now a month later, I definitely wasn't clear about what constitutes "a genuine web based source". I miscommunicated similarly in a completely different discussion about overlinking, also by employing the term "genuine" as if I hadn't put a lot of assumptions behind it. Probably time to choose my words more carefully.

In any case, as regards the topic I was initially trying to discuss (the Unix epoch date "1970-01-01"), it makes more sense to have citation templates add it to a tracking category than to have Citoid never return a date, purely for visibility reasons. It's easy (although time-consuming) to run through a maintenance category full of likely bad data and fix it; it's much more difficult to find every citation without a publication date and ensure there actually is none available. The second set is probably three or four orders of magnitude larger than the first, so my initial idea was probably, uh, ill-considered 🙃

Reply to "A few questions"

Improving citation quality

Folly Mox (talkcontribs)

A few of us at en-wiki are about 55% through a massive cleanup project that resulted from the careless use of a user script that used Citoid to overwrite existing references with automatically generated ones. The current phase involves manually checking about 2400 diffs from the period January through April 2023. We haven't yet identified the full scope of the cleanup.

Through the course of this cleanup, we've found that only a small percentage of Citoid's references require no improvement by human editors. As one of the involved editors with a more technical background (although from decades ago), I'm trying to understand how the whole thing works, so we can improve the quality of references on en-wiki and avoid repeats of this sort of cleanup. I've personally invested probably over 100 hours in this over the past few weeks.

Folly Mox (talkcontribs)

From what I understand of the documentation available here, Citoid uses a fork of the Zotero translators. Is that correct? If so, how recently was it forked, and/or how often is it re-forked? Citoid/Determining if a URL has a translator in Zotero states "In production for wikimedia, we've enabled three translators."

Mvolz (WMF) (talkcontribs)

Sorry, that's outdated. We recently reactivated the fork, but not to make changes to what's enabled in translation-server. I've now fixed that page. (In the past we enabled 3 translators that Zotero didn't have enabled, not 3 total.)

Folly Mox (talkcontribs)

The repository at GitHub shows a great number of JavaScript files, so many that I can't even scroll to the end of "A" in the alphabetical listing. How does this align with having "enabled three translators"?

Talk:Citoid/Determining if a URL has a translator in Zotero has a comment from 2015 stating "Citoid uses its own HTML meta-data scraper as a fall-back when Zotero doesn't return any result." Is there any way to record / indicate this? Like a hidden comment before the closing ref tag along the lines of "citation created by generic translator", or a warning message to the editor along the lines of "automated citations to this site may contain errors, please double check"?

Citoid is a very powerful library, and during the course of my cleanup efforts I've dropped into the visual editor a couple of times to make use of it (in cases where the reference had been generated from a URL for which Citoid's behaviour is suboptimal, but which contained a DOI that could be used to create a complete citation). However, at en-wiki at least, there's a culture of trusting code to function perfectly in all cases where it doesn't generate any warnings or errors. Effecting cultural change is difficult, and creating references manually is time-consuming, so I'm exploring all avenues. I don't think my technical skills are high enough to start writing Zotero translators, and I'm not sure how to get Citoid to incorporate those translators in its dependencies.

Also, citations created from Google Books never include editors, misattributing their contribution as authorship. I'm not sure whether that can only be addressed by improving the translator or whether it's something going on within Citoid. Thanks in advance for your answers. Kindly,

Mvolz (WMF) (talkcontribs)

There's a "powered by Zotero" message in the citation picker if it's from Zotero, but Zotero also has a generic translators now, so that is probably not super useful to you (historically they did not- it's very rare to have a purely citoid response now).

The GitHub repo you've found is a good place to look to see what's available (I note the AWS tests haven't run in a while!). Note that some of the poor citation quality is going to come from JavaScript-loaded pages: things that work well with the Zotero browser extension, which deals with the fully loaded page, won't necessarily give the same results after being scraped using translation-server.

Folly Mox (talkcontribs)

(I tried to post the above in one go, but kept tripping the abuse filter for "linkspam". I couldn't even get the third paragraph to post as a single comment. Maybe the filter settings are too strict?)

Whatamidoing (WMF) (talkcontribs)

I think the problem of mistaking a book's editors for its authors predates Citoid and the visual editor. Part of the problem is that reality is complicated. If you look at page xix in https://books.google.com/books?id=bIIeBQAAQBAJ, you'll see that there is a main author, an editor, and a long list of people who wrote specific entries. The correct author's name depends on which bit you're actually citing, and you're supposed to notice the presence or absence of the author's initials at the end. In https://www.google.com/books/edition/The_Routledge_Encyclopedia_of_Mark_Twain/8BhUuxcKNPkC, however, Google correctly names the editors as being editors, and it would be nice if Citoid/Zotero could figure that out.

Folly Mox (talkcontribs)
I definitely never expect automated referencing to identify chapter contributors; sometimes the table of contents is not even available for preview. I've had some luck going directly to publishers for the info, but in a couple of cases I've had to leave the author attribution empty. Identifying editors as editors seems like pretty low-hanging fruit, by which I mean it's clearly stated at the bottom of the page, right by the publisher and ISBN information that is already being correctly scraped.
Folly Mox (talkcontribs)

Now that I think of it, something that's entirely within Citoid's remit would be, when creating a book citation, to use the "authorn-first" and "authorn-last" aliases instead of "firstn" and "lastn", since they'd be considerably easier to change into "editorn-" form, without needing to erase and retype the full parameter names as is currently necessary.
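For illustration (the names are hypothetical; |authorn-first= and |editorn-first= are existing CS1 aliases):

|first1=Jane |last1=Doe                  ← what Citoid emits now
|author1-first=Jane |author1-last=Doe    ← proposed output
|editor1-first=Jane |editor1-last=Doe    ← then just a prefix swap away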

Folly Mox (talkcontribs)

Citoid knows if it's using a Zotero translator or not. Does it know which one? If it does, and citation templates were updated to hold an appropriate hidden parameter, could the translator in use be surfaced and passed to the template? That could facilitate identifying which translators are consistently inaccurate, which seems like a good first step in trying to improve them or track them for manual correction.

Mvolz (WMF) (talkcontribs)

Zotero reports translator use in its logs; unfortunately the logging is not compatible with our infrastructure, so we have it turned off. But if you run a version locally and try the URL, it will tell you in the console.
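If you want to try that, the local run is roughly this (per the zotero/translation-server README; the example URL is arbitrary):

git clone --recurse-submodules https://github.com/zotero/translation-server
cd translation-server
npm install
npm start
# then, in another shell; the console running the server logs the translator used:
curl -d 'https://example.com/article' -H 'Content-Type: text/plain' http://127.0.0.1:1969/web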

Folly Mox (talkcontribs)

I should also mention that I was informed a few days ago at en:Module talk:Wd#References mapping that |website= (in citation templates) "should only get the domain name when the source is best known by that name". Citoid always fills this parameter, even when it can't discern a human-readable website name and falls back on the first part of the URL (which is often). Apparently this behaviour is not desirable in general.

Folly Mox (talkcontribs)

Does Citoid do any error checking on its values? It's not immediately clear where to find the source code, but it's pretty clear the user scripts downstream of Citoid don't double-check it, so we get silly things like a perfectly formatted citation to a 404 page, or numeric data in an author name field. I understand that the parsing issues themselves stem from Zotero, but if basic error checking could be performed in-house, it could cut down on the number of bogus citations added by good-faith editors not cautious enough to double-check script output.

Mvolz (WMF) (talkcontribs)

There is various error checking; for instance, we check if a website sends a 404 Page Not Found status code. Unfortunately, websites don't always comply with web standards, and some do silly things like report a 200 OK status code and then write "404" in the page text.

Folly Mox (talkcontribs)

Well, I guess it's fair that websites should probably follow standards and return 404 codes for their 404 pages, but since many of them don't, do you think it would be possible to check for "404", "page not found", "page does not exist", "we're sorry", etc. in the |title= parameter? An ounce of prevention saving a pound of cure, and all that.

I'm minded to return to this subtopic specifically because I thought of another title nearly universally indicating a failed reference: "is for sale", which is what typically shows up when a site has been usurped by a domain squatter.
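As a sketch of the check I'm imagining (the pattern list is just my guess at a starting set, not anything Citoid implements):

// Hypothetical sketch: flag titles that almost always indicate a dead or usurped page.
const BAD_TITLE_PATTERNS = [
  /\b404\b/i,
  /page not found/i,
  /page does not exist/i,
  /we're sorry/i,
  /is for sale/i,
];

function looksLikeDeadPage(title) {
  return BAD_TITLE_PATTERNS.some((re) => re.test(title));
}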

Reply to "Improving citation quality"
Ifly6 (talkcontribs)

On English Wikipedia the {{Cite journal}} template has a |jstor= parameter. Can Citoid be changed to extract the relevant stable identifier from the JSTOR URL instead of copying the provided URL into the URL field?

Mvolz (WMF) (talkcontribs)

If given a JSTOR link, it gives the stable JSTOR URL.

For most other links, it doesn't typically know the JSTOR identifier, so it can't use that to then get the JSTOR link. Most links to journal articles, if they include extra identifiers, will include the DOI, but not typically JSTOR.

Mvolz (WMF) (talkcontribs)

Ah, I misinterpreted you: you want the JSTOR URL to go into the jstor field instead of into the url field?

That's a little tricky. We could definitely return a JSTOR parameter in the API; the problem is that TemplateData and the Citoid extension only do really basic mapping, so the JSTOR link would end up in the url field as well, and it'd be linked in both. In the API we return a url no matter what, because it's a required parameter (the API guarantees a url in the url field), and for other language wikis that don't have separate parameters, they need it. We've had this issue as well with people not liking that we return both the doi and the resolved DOI link in the url field, though personally it doesn't bother me.

That kind of per-wiki customisation might have to be a per-wiki user script / common.js kind of solution, rather than something that goes in the back-end or the extension, which is designed to be fairly agnostic about the citation templates being used.
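For what it's worth, the user-script end of that could be tiny. A sketch, assuming the usual https://www.jstor.org/stable/<id> form (the function name is hypothetical):

// Hypothetical sketch: pull the stable identifier out of a JSTOR URL
// so it can go into |jstor= instead of |url=.
function jstorIdFromUrl(url) {
  const m = url.match(/jstor\.org\/stable\/([^?#]+)/);
  return m ? m[1] : null;
}
// jstorIdFromUrl('https://www.jstor.org/stable/1234567') === '1234567'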

Ifly6 (talkcontribs)

Yea, it would pretty much be just changing where the JSTOR parameter ends up. On English Wikipedia there are three interrelated issues.

First, Citation bot adds |jstor= parameters given the stable JSTOR URL, but this causes unnecessary duplication... which some people want retained just in case someone meant the URL to be there (even though nobody means much of anything when using these citation generators).

Second, the Internet Archive bot can then be run to "archive" the live JSTOR URLs (but not the parameters) because the URL is there... even though, because JSTOR is paywalled, the "archive" is just a landing page. Naturally some people don't want these useless archive links removed either.

Third, because JSTOR is paywalled, it isn't always the best free full-text source, and putting a URL there would on first glance seem misleading.

Anyway, I understand the technical issues involved, though I think the real solution in this instance is addressing the root cause, which is the unthinking addition of JSTOR URLs to templates that then triggers all of the downstream clutter. A user script would have insufficient adoption to go much of anywhere in nipping the issue.

Folly Mox (talkcontribs)

Just noting here that it's been my practice to remove |url= parameters when they point to JSTOR and put the stable JSTOR identifier in the |jstor= parameter instead, to avoid the unnecessary archive and access-date cruft that follow-on scripts produce. I understand if it's not possible not to return a url parameter, though.

en-wp's own in-house tools could be a vector for correction here, although the maintainers have been too busy to maintain them for a long time. Honestly, given how popular automated referencing has become, we could use about four times as much staffing at every point in the stack.

Ifly6 (talkcontribs)

Yea, when I was reading that Village Pump discussion about people claiming that an editor might have placed the Jstor URL there on purpose, my first thought was "lmao nobody formats citations manually anymore; there's no purpose involved".

Folly Mox (talkcontribs)

I missed that discussion, but there's no reason to duplicate a link (to jstor content) in the cruft-inducing url field when it can be safely tucked into the parameter specifically included to hold it.

These days if I'm citing a journal article, I'll usually swap into Visual Editor to generate the citation with Citoid, but I swap back into source editing to touch it up afterwards.

I do find it worrisome how widespread automated referencing has become when weighed against the accuracy of its output. I spend probably eighty per cent of my time on wiki cleaning up after thoughtless automatic references, but even with a team of fifteen or twenty, references would be flowing in faster than we could handle them, given the huge backlog currently present.

Ifly6 (talkcontribs)

I agree, which is why I was thinking to get at (at least) one of the sources of those automatic reference generators. Is it possible, Mvolz, to add some kind of post-processing to trigger on JSTOR? Or is that actually technically infeasible?

Reply to "Jstor citations"

Populating Cite_report for relevant reference types

Evolution and evolvability (talkcontribs)

Currently the visual editor Citoid formats reports using cite_journal. It'd be highly useful to wrap them instead in cite_report (especially when a Wikidata QID is provided, since such Wikidata items will state that the instance of (P31) = report). It'd also be ideal in those cases to include location data, since the location of the publisher / commissioning organisation / authoring organisation is often highly relevant (indeed, usually more relevant than a book publisher's city!). Either drawing from the country (P17) of the publisher (P123), or maybe the location (P276) of the cited item itself?


See here for an example where cite_report would be helpful in formatting.
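A sketch of the Wikidata check this would need (wbgetclaims is the real API module; the QID constant is a placeholder I haven't verified):

// Hypothetical sketch: decide whether to use cite_report by checking P31 on the cited item.
const REPORT_QID = 'Q10870555'; // placeholder; verify the actual item for "report"

async function isReport(qid) {
  const url = 'https://www.wikidata.org/w/api.php?action=wbgetclaims' +
    `&entity=${qid}&property=P31&format=json&origin=*`;
  const data = await (await fetch(url)).json();
  const claims = (data.claims && data.claims.P31) || [];
  return claims.some((c) => c.mainsnak?.datavalue?.value?.id === REPORT_QID);
}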

Reply to "PopulatingCite_report for relevant reference types"
Oiyarbepsy (talkcontribs)

I would like to suggest that Citoid automatically archives pages when they are used in a reference. Yeah, I know, you have a very long to-do list, but put this on your eventually list.

Jdforrester (WMF) (talkcontribs)

Do you mean "automatically finds the archive URL from the Internet Archive and makes it available" or something else?

Oiyarbepsy (talkcontribs)

Yes; or, automatically creating the archive would be even better. We could potentially eliminate the scourge of dead links.

Jdforrester (WMF) (talkcontribs)

"Creates the archive" meaning "asks the Internet Archive to archive the URL"? Or are you asking for WMF to archive the Web (in which case I believe the answer is a very clear "no" from Legal, as repeatedly discussed over the last few years).

Atlasowa (talkcontribs)

The Internet Archive is already "crawling all new external links, citations and embeds made on Wikipedia pages within a few hours of their creation / update." Only on enWP, it seems (and with the caveat of robots.txt).

The French WP even automatically adds an archive link to Wikiwix on all reference links.

The rest is rotting away.

PS: Both cite_web and cite_news have parameters for "archiveurl=" and "archivedate=".
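PPS: For the "finds the archive URL" variant, the Internet Archive's availability endpoint already returns what those two parameters need; a sketch (the mapping to template parameters is mine):

// Sketch: look up an existing Wayback Machine snapshot for a cited URL.
async function waybackSnapshot(url) {
  const api = 'https://archive.org/wayback/available?url=' + encodeURIComponent(url);
  const data = await (await fetch(api)).json();
  const snap = data.archived_snapshots && data.archived_snapshots.closest;
  // snap.url fills archiveurl=; snap.timestamp is YYYYMMDDhhmmss for archivedate=
  return snap && snap.available ? { archiveurl: snap.url, archivedate: snap.timestamp } : null;
}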

Reply to "Auto-archive"

Sometimes the visual editor citation tool changes tags from <ref> to <references>

Nathanielcwm (talkcontribs)

It seems to happen most frequently when a large number of citations are added in a single edit.

This behaviour is undesired and confusing due to <references> generally being used for the reflist.

Whatamidoing (WMF) (talkcontribs)

Could you give me a diff in which this happened?

Nathanielcwm (talkcontribs)

I was just able to reproduce it with this edit https://en.wikipedia.org/w/index.php?title=User:Nathanielcwm/sandbox/referencestest&diff=prev&oldid=1160061664


To reproduce it I did the following:

This was done under Windows 11 and Firefox 114.0.1 with the modern Vector skin

I have the following non default gadgets enabled:

  • Focus the cursor in the search bar on loading the Main Page
  • Twinkle
  • Prosesize
  • find-archived-section
  • Display pages on your watchlist that have changed since your last visit in bold
  • HotCat
  • ProveIt (not used for this edit)
  • MoreMenu
  • Replace the "new section" tab text with "+"
  • Change UTC-based times and dates, such as those used in signatures, to be relative to local time
  • Display an assessment of an article's quality in its page header
  • Dark mode toggle
  • Display links to disambiguation pages in orange
  • Strike out usernames that have been blocked
  • XTools
  • Make headers of tables display as long as the table is in view, i.e. "sticky"
  • Dark mode styling

And the following default gadgets disabled:

  • Reference Tooltips
  • refToolbar

and https://en.wikipedia.org/wiki/User:Nathanielcwm/common.js

Reply to "Sometimes the visual editor citation tool changes tags from <ref> to <references>"
HLHJ (talkcontribs)

When a journal article has a PubMed ID, can autofill please check if it has a PMC free-full-text ID, too? PubMed metadata will say if there is a PMC ID. Having a PMC ID means that the reference has a full-text link and is labelled with a little green unlocked padlock icon saying access is free, so it's really helpful to readers. It's even more helpful to remote medical professionals using Internet-in-a-Box, as they have access to PMC full texts but not the internet.
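NCBI's ID converter service makes this a one-request check; a sketch (the function name is mine, the endpoint is the real ID converter):

// Sketch: ask NCBI's ID converter whether a PMID has a corresponding PMC ID.
async function pmcidForPmid(pmid) {
  const api = 'https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/' +
    `?ids=${pmid}&format=json`;
  const data = await (await fetch(api)).json();
  const rec = data.records && data.records[0];
  return (rec && rec.pmcid) || null; // e.g. 'PMC1234567', or null if none
}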

It would also be nice to have autofill from a PMC ID or PMC URL work.

Thanks to everyone maintaining this tool!

Reply to "PMC"

UNABLE_TO_GET_ISSUER_CERT_LOCALLY

Klymets (talkcontribs)

npm ERR! code UNABLE_TO_GET_ISSUER_CERT_LOCALLY
npm ERR! errno UNABLE_TO_GET_ISSUER_CERT_LOCALLY
npm ERR! request to https://registry.npmjs.org/has-flag failed, reason: unable to get local issuer certificate

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2022-11-14T14_10_54_259Z-debug.log
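(This error means npm could not verify the registry's TLS certificate against the local CA store, which is common behind a corporate proxy that intercepts HTTPS. The usual fixes are to point npm at the relevant CA bundle, or, insecurely and only for testing, to disable strict SSL; the bundle path below is a placeholder:)

npm config set cafile /path/to/your-ca-bundle.pem
npm config set strict-ssl false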

Reply to "UNABLE_TO_GET_ISSUER_CERT_LOCALLY"

Error: Cannot find module 'w3c-xmlserializer/lib/XMLSerializer'

Guillaume Taillefer (talkcontribs)

Since the last post, I have managed to figure out most of the problems, but not all. I figured out that I needed to install many missing packages, which I could see were missing through npm outdated. I then downloaded all the missing packages and waited to see if npm start would finally work; it didn't. When I typed in npm start, it gave me this:

Error: Cannot find module 'w3c-xmlserializer/lib/XMLSerializer'

Require stack:

- /home2/wwiiarch/zotero/src/translation/translate.js

- /home2/wwiiarch/zotero/src/zotero.js

- /home2/wwiiarch/zotero/src/server.js

    at Function.Module._resolveFilename (node:internal/modules/cjs/loader:985:15)

    at Function.Module._load (node:internal/modules/cjs/loader:833:27)

    at Module.require (node:internal/modules/cjs/loader:1057:19)

    at require (node:internal/modules/cjs/helpers:103:18)

    at Object.<anonymous> (/home2/wwiiarch/zotero/src/translation/translate.js:44:24)

    at Module._compile (node:internal/modules/cjs/loader:1155:14)

    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1209:10)

    at Module.load (node:internal/modules/cjs/loader:1033:32)

    at Function.Module._load (node:internal/modules/cjs/loader:868:12)

    at Module.require (node:internal/modules/cjs/loader:1057:19) {

  code: 'MODULE_NOT_FOUND',

  requireStack: [

    '/home2/wwiiarch/zotero/src/translation/translate.js',

    '/home2/wwiiarch/zotero/src/zotero.js',

    '/home2/wwiiarch/zotero/src/server.js'

  ]

}

I tried npm install w3c-xmlserializer/lib/XMLSerializer but it gives me the following error:

npm ERR! code ENOENT

npm ERR! syscall open

npm ERR! path /home2/wwiiarch/zotero/w3c-xmlserializer/lib/XMLSerializer/package.json

npm ERR! errno -2

npm ERR! enoent ENOENT: no such file or directory, open '/home2/wwiiarch/zotero/w3c-xmlserializer/lib/XMLSerializer/package.json'

npm ERR! enoent This is related to npm not being able to find a file.

npm ERR! enoent

npm ERR! A complete log of this run can be found in:

npm ERR!     /home2/wwiiarch/.npm/_logs/2022-11-11T02_13_24_366Z-debug-0.log

I checked in the w3c-xmlserializer/lib/ folder to see if there was an XMLSerializer, but there wasn't; just the .js files in there. I have no idea how to fix this, so if anyone can, that'd be great.
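(One observation from the trace: the zotero code requires w3c-xmlserializer/lib/XMLSerializer, but as noted above, the version npm installed doesn't ship that file, which suggests the manual npm install runs pulled newer releases than the code was written against. Installing exactly what the lockfile pins, rather than package by package, may avoid the mismatch; untested on this setup:)

cd /home2/wwiiarch/zotero
npm ci   # installs exactly the versions recorded in package-lock.json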

My specs now are:

CentOS 6

shared hosting bluehost cpanel

node v16.18.1

npm 8.19.2

Thanks

Reply to "Error: Cannot find module 'w3c-xmlserializer/lib/XMLSerializer'"

npm WARN tar ENOENT: no such file or directory

Guillaume Taillefer (talkcontribs)

I tried installing both the translation-server and the zotero service from mediawiki via their git commands, and upon going into their respective directories and trying to use npm install, they give me 100+ different versions of the same error:

npm WARN tar ENOENT: no such file or directory ...

For example one of the ones from translation-service:

npm WARN tar ENOENT: no such file or directory, lstat '/home2/wwiiarch/translation-server/node_modules/.staging/core-js-f5f4dd1d/modules'

The common thing that I have found for both of these is that when I go into each node_modules directory, it shows that they're empty. This is despite the fact that none of the instructions mention this; they just say to use the git command. However, it seems that when I use the npm commands, it's looking for a .staging folder inside node_modules.

My version of node is v10.24.1 (before, I was trying v12.22.12, but it doesn't seem to change anything), and my version of npm is 6.14.16.

How do I fix this problem? Thanks

Mvolz (WMF) (talkcontribs)

Did you use the --recurse-submodules flag when you did git clone?

If not, you may be missing the submodules in your repo. If you do

git submodule init
git submodule update 
npm install

In your translation-server directory, hopefully that fixes it.

Guillaume Taillefer (talkcontribs)

Did you use the --recurse-submodules flag when you did git clone?

Yes, that's in the instructions for both translation-server and zotero.

I tried using the above commands in both translation-server/ and translation-server/node_modules.

For translation-server/node_modules, the two git commands resulted in no errors; I just typed them in, hit enter, and nothing. Then I did npm install, and it threw the hundreds of basically the same error again. In the middle of it, it gave this error:

npm ERR! Error while executing:

npm ERR! /usr/local/cpanel/3rdparty/lib/path-bin/git ls-remote -h -t ssh://git@github.com/zotero/wicked-good-xpath.git

npm ERR!

npm ERR! Permission denied (publickey).

npm ERR! fatal: Could not read from remote repository.

npm ERR!

npm ERR! Please make sure you have the correct access rights

npm ERR! and the repository exists.

npm ERR!

npm ERR! exited with error code: 128

And then it continued on and eventually stopped with the error that the log could be found in the debug logs. I then went to do it in just translation-server/. This did the exact same thing with the git commands, and when I did npm install, it did the same thing with the errors, except it continually repeated the same-looking error from my original comment, and then ended with this:

npm ERR! Error while executing:

npm ERR! /usr/local/cpanel/3rdparty/lib/path-bin/git ls-remote -h -t ssh://git@github.com/zotero/wicked-good-xpath.git

npm ERR!

npm ERR! Warning: Permanently added the ECDSA host key for IP address '140.82.113.4' to the list of known hosts.

npm ERR! Permission denied (publickey).

npm ERR! fatal: Could not read from remote repository.

npm ERR!

npm ERR! Please make sure you have the correct access rights

npm ERR! and the repository exists.

npm ERR!

npm ERR! exited with error code: 128

npm ERR! A complete log of this run can be found in:

npm ERR!     /home2/wwiiarch/.npm/_logs/2022-11-01T22_04_08_734Z-debug.log

It seems to me that since I am on shared hosting, this might have something to do with it. Thanks
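(The Permission denied (publickey) lines are git trying to fetch the zotero/wicked-good-xpath dependency over SSH without a GitHub SSH key on the host, which shared hosting typically won't have. A standard workaround, untested on this particular setup, is to rewrite SSH GitHub URLs to HTTPS:)

git config --global url."https://github.com/".insteadOf ssh://git@github.com/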

Guillaume Taillefer (talkcontribs)

Ok, so I have made some progress. First I deleted npm and node (as well as zotero), then redownloaded them with nvm (node v12.22.22). Then I retried downloading zotero, followed by cd zotero and npm install. I got the deprecation errors again. One of those deprecation messages mentioned something about an outdated core-js, so I did npm install --save core-js; as far as I remember, that succeeded. I then found out about npm outdated, which I typed in (in zotero), and got results of aws-sdk missing, along with md5, require, @zotero/eslint-config, etc. I then did npm install aws-sdk, which was mostly successful except for a few errors, which were taken care of using npm audit fix --force. I then fixed md5, require, @zotero/eslint-config, and some other ones, and it was pretty much up to date. I then did npm install, and finally it was successful! I then tried npm start, and there was a problem I have no idea how to solve:

> translation-server@2.0.4 start

> node src/server.js

/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/browser/parser/html.js:170

    after._pushedOnStackOfOpenElements?.();

                                      ^

SyntaxError: Unexpected token '.'

    at wrapSafe (internal/modules/cjs/loader.js:915:16)

    at Module._compile (internal/modules/cjs/loader.js:963:27)

    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)

    at Module.load (internal/modules/cjs/loader.js:863:32)

    at Function.Module._load (internal/modules/cjs/loader.js:708:14)

    at Module.require (internal/modules/cjs/loader.js:887:19)

    at require (internal/modules/cjs/helpers.js:74:18)

    at Object.<anonymous> (/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/browser/parser/index.js:4:20)

    at Module._compile (internal/modules/cjs/loader.js:999:30)

    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)

This all seems like it's supposed to be there, so I didn't exactly want to modify the code. However, I tried it once by deleting both the ? and the . for both the shown variable and the one under it, which showed no errors in the code. But when I ran npm start again, it gave me a whole load of errors:

ReferenceError: FinalizationRegistry is not defined

    at new IterableWeakSet (/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/living/helpers/iterable-weak-set.js:9:38)

    at new DocumentImpl (/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/living/nodes/Document-impl.js:177:34)

    at Object.exports.setup (/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/living/generated/Document.js:106:12)

    at Object.exports.create (/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/living/generated/Document.js:47:18)

    at Object.exports.createImpl (/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/living/generated/Document.js:51:27)

    at Object.exports.createImpl (/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/living/documents.js:10:19)

    at Object.exports.createWrapper (/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/living/documents.js:14:33)

    at new Window (/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/browser/Window.js:236:30)

    at exports.createWindow (/home2/wwiiarch/zotero/node_modules/jsdom/lib/jsdom/browser/Window.js:96:10)

    at new JSDOM (/home2/wwiiarch/zotero/node_modules/jsdom/lib/api.js:36:20)


Anyone have the answer to this? Thanks
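(Both errors point at the Node version rather than the code: the ?. optional-chaining syntax needs Node 14+, and FinalizationRegistry needs Node 14.6+, while this run is on Node 12. Upgrading via nvm, which is already in use here, should clear both; untested on this particular host:)

nvm install 14
nvm use 14
npm install
npm start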

Reply to "npm WARN tar ENOENT: no such file or directory"