How to try Wikispeech Screen reader?
I want to enable the screen reader facility, but how do I do it?
You can try a version under development on our demo wiki: https://wikispeech.wmflabs.org.
Current status?
Hi, what's the current status of the project? I'm especially interested in whether the team has approached the item "Conversion of text from raw to annotated text" and, if so, whether it can be reused for other languages. Thank you!
Contributing to the WikiSpeech Project
Hello WikiSpeech community,
First of all, I would like to say this is a great idea and I fully endorse it. I would like to contribute to this project and help improve it for its users. I will get started at once; any advice before I begin? My contributions will be technical. I have subscribed to the project on Phabricator and will set it up locally right away so that I can start fixing bugs and building new features :)
Development libraries, offline mode and a note of the current state of linux TTS
Hi, I have been following this project for some time as a Linux user with dyslexia. My first question: is there a library version for building into other software? Second question: is there an offline mode?
The reason for the first question is that the current TTS packages on Linux (Festival, espeak) have no easy way to interface with other software without executing a shell command. Another problem with the exec method is the length of the text; this may be a problem with the software I use, or with the data being sent via the command line.
The idea behind the offline mode is to offload the Wikimedia servers and to allow use by third-party applications on a PC or Raspberry Pi, for example Calibre or a screen reader.
Lastly, this goes beyond my two questions, but a note on the current state of TTS under Linux: the free/open-source voices are of poor quality (espeak), or the software is open source but the TTS voices are closed source or under a restricted license. A good guideline for the minimum quality of a TTS voice usable in a production environment is MS Sam (SAPI 4 and SAPI 5).
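The command-line-length problem described above can usually be avoided by sending the text over stdin instead of as an argument. A minimal sketch in Python (espeak reads from stdin when no text argument is given; `--stdout` writes WAV data to stdout rather than playing it):

```python
import shutil
import subprocess

def pipe_tts(cmd, text):
    """Run a TTS command, sending the text over stdin instead of argv.

    Piping via stdin sidesteps the OS limit on command-line length,
    which is what long article texts run into with the exec method.
    """
    result = subprocess.run(
        cmd,
        input=text.encode("utf-8"),
        capture_output=True,
        check=True,
    )
    return result.stdout

# Only attempt espeak if it is installed; the flags here follow the
# espeak man page (no text argument means "read stdin").
if shutil.which("espeak"):
    wav = pipe_tts(["espeak", "--stdout", "-v", "en"], "Hello from Wikispeech")
```

Festival can be driven the same way (`text2wave` also reads stdin), so the helper is not tied to one engine.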
First off, sorry for the delayed response.
The Wikispeech system is divided in two main components: the MediaWiki extension and the TTS server.
For your first question, the TTS server uses MaryTTS, along with a couple of other modules. You can find the latter at https://github.com/stts-se/pronlex and https://github.com/stts-se/wikispeech_mockup. While the server is primarily developed for use with MediaWiki, we try to make it general enough to be usable with other software.
As for your second question, it is possible to run the TTS server locally on your own server. Since it has an HTTP API, you can use it the same way, regardless of where the server is running.
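Because the server exposes an HTTP API, a client only needs to build a request URL and read back the audio. A sketch of what such a call could look like, assuming a MaryTTS-style `/process` endpoint on its default port 59125 (the actual Wikispeech server API may differ, so treat the parameter names and port as assumptions):

```python
from urllib.parse import urlencode

def mary_tts_url(text, locale="en_US", host="localhost", port=59125):
    """Build a request URL for a MaryTTS-style /process endpoint.

    The parameter names follow the MaryTTS 5.x HTTP interface; a
    locally run Wikispeech TTS server may use different ones.
    """
    params = urlencode({
        "INPUT_TEXT": text,
        "INPUT_TYPE": "TEXT",
        "OUTPUT_TYPE": "AUDIO",
        "AUDIO": "WAVE",
        "LOCALE": locale,
    })
    return f"http://{host}:{port}/process?{params}"

# With a server running, the same call works locally or remotely,
# e.g. urllib.request.urlopen(mary_tts_url("Hello world")).read()
# would return WAV bytes.
```

Swapping `host` is all it takes to point the same client at a local or a remote instance, which is the point made above.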
If you want to try the software, we would be happy for any feedback and could give support to some extent. Note, however, that both the MediaWiki extension and the TTS server are still very much under development.
Hello, how can we add more languages? For example, I could help with Spanish, which has very simple text-to-sound rules.
During the Wikispeech project, we will add support for Swedish, English and Arabic. However, the goal is to make it easy to add other languages later on. If you know of any TTS resources (voices, lexicons etc.) for Spanish, that would be a good start.
Oh, I thought that you could add words by copying and pasting IPA notation or something. I would volunteer to write or search them.
Hi! Yes, when a language is activated it will be possible to add IPA notation to improve the lexicon. We hope to be able to add more languages in 2018 and Spanish is certainly a strong candidate - so do keep an eye on this page! And thank you for volunteering to help!
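To illustrate why Spanish is considered an easy case: most of its spelling maps to sound by a small set of context rules. The toy sketch below is not the Wikispeech lexicon format or any real grapheme-to-phoneme module, just a simplified Castilian illustration (no stress marking, no dialect variation, no exceptions):

```python
def spanish_g2p(word):
    """Toy Spanish letter-to-sound rules (Castilian, very simplified).

    Real lexicon building needs stress assignment, dialects and
    exception handling; this only shows the regularity of the rules.
    """
    word = word.lower()
    digraphs = {"ch": "tʃ", "ll": "ʝ", "rr": "r", "qu": "k"}
    out = []
    i = 0
    while i < len(word):
        pair = word[i:i + 2]
        nxt = word[i + 1] if i + 1 < len(word) else ""
        if pair in digraphs:
            out.append(digraphs[pair])
            i += 2
        elif pair == "gu" and word[i + 2:i + 3] in ("e", "i"):
            out.append("g")          # "gu" before e/i: hard g, silent u
            i += 2
        elif word[i] == "c":
            out.append("θ" if nxt in "eií" else "k")
            i += 1
        elif word[i] == "g":
            out.append("x" if nxt in "eií" else "g")
            i += 1
        elif word[i] == "h":
            i += 1                   # h is silent
        else:
            singles = {"z": "θ", "j": "x", "ñ": "ɲ", "v": "b", "y": "ʝ"}
            out.append(singles.get(word[i], word[i]))
            i += 1
    return "".join(out)
```

For example, `spanish_g2p("queso")` yields "keso" and `spanish_g2p("gente")` yields "xente"; community-contributed IPA entries would then only need to cover the exceptions.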
How does Wikispeech relate to Wikidata?
In cases where multiple words share the same spelling but differ in pronunciation, it seems like Wikidata could help. It might be possible to select the correct alternative based on the context and Wikidata's knowledge.
Does Wikispeech try to integrate with Wikidata?
Hi ChristianKl and thank you for your question!
Currently Wikidata doesn't work as a dictionary (even though there are plans/considerations to integrate Wiktionary there). However, our hope is that the lexicon for Wikispeech, and recorded pronunciations for words, would be valuable to add either to Wikidata or Wiktionary in the future. See #7 here. One way could be to upload the sound recordings to Wikimedia Commons, with the IPA as part of the metadata; another option is to connect different words to the Wikispeech database. This is something we have to look into a bit more.
In the last Wikipedia Signpost there was a report from WikiConference India, where they worked on something called WikiSpeak: https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2016-08-18/News_and_notes
User script on enwiki
There is a user script on the English Wikipedia that is supposed to make articles voice-friendly. Perhaps it can serve as inspiration: https://en.wikipedia.org/wiki/User:P999/Toggle_VF
Apparently the iOS app is also working on text-to-speech: https://phabricator.wikimedia.org/T126179