In Pardus, each intern is assigned a project to work on during the internship period each summer. The list of available projects is announced on the Pardus Wiki, see “http://tr.pardus-wiki.org/Staj2010#Projeler”. Interns can choose one of these projects, a Pardus developer acting as a mentor can assign one to them, or an intern can even propose an idea/project of their own (if it is acceptable). After my orientation period was over, I took a break from exploring and learning the general system and started deciding on my project. Our internship coordinator gave me some ideas about each project, and I read their descriptions, requirements, difficulty levels and supplementary materials on the Pardus Wiki. After this research, we decided to choose the project called “Engelsiz Pardus” (i.e. unimpeded or accessible Pardus).

The aim of this project is to improve Pardus for the benefit of users with certain disabilities. Applications such as screen readers should be integrated with Pardus and KDE in order to make Pardus easier to use for disabled people. The tasks can be listed as: researching screen reader applications and speech synthesizers like Orca, Fire Vox, Linux Screen Reader, Suse BLinux and KTTSD/KTTSmgr; choosing the most appropriate ones to integrate with Pardus; and finally adding these capabilities to Pardus. The main requirement for this project is knowledge of PiSi packaging, since many new packages will be prepared and the problems that come up during this integration will have to be solved. Moreover, I should learn at least the general syntax and structure of the Python language, since it is used to write the actions.py script file of each package. Thus, my working schedule is set like this: first, pre-research on similar and useful screen reader applications, collecting the necessary information/links/sources/documents about them, examining these materials together with the current Pardus packages, and assessing their availability and how easily they could be integrated. While doing this, I should also start learning how to prepare PiSi packages and how to solve possible problems, in addition to learning Python.
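To give an idea of what that actions.py looks like, here is a minimal sketch for a typical autotools-based package, in the style that PiSi's actionsapi modules encourage. The documentation file names are hypothetical placeholders; a real package adapts each step to its own build system.

    from pisi.actionsapi import autotools
    from pisi.actionsapi import pisitools

    def setup():
        # Run ./configure for the source tree.
        autotools.configure()

    def build():
        # Equivalent to running "make" in the source tree.
        autotools.make()

    def install():
        # Run "make install" into the package's install image.
        autotools.install()
        # Ship extra documentation (file names here are hypothetical).
        pisitools.dodoc("README", "AUTHORS")

Together with the pspec.xml metadata file, a script like this is essentially all PiSi needs to build a simple package, which is why learning this structure early pays off.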
The pre-work period of the project took longer than we expected. The reason is that what we could do was not obvious at first, so research was the natural starting point. Thus, after the first days of general orientation and education, I spent nearly two days searching for and reading about different screen readers, speech synthesizers and dispatchers, examining their documentation, how-to manuals, installation details and system requirements. I collected links and sources that would be helpful later. I also tried to keep in mind that we were looking for the ones that can actually be applied to Pardus. For instance, if an application requires desktop or library dependencies that are not in the Pardus repository, we should check whether these prerequisites can be satisfied easily; if they are difficult or nearly impossible to satisfy, we should eliminate that application. Moreover, when comparing several applications that do more or less the same thing, we should favor the one with fewer dependencies, or the one whose dependencies are already in Pardus. The reason is that if we cannot meet these dependencies, we first need to prepare a package for each of them, and only then prepare the package for the application itself. Unfortunately, if such a prerequisite package itself has a dependency that is not present, we need to prepare yet another package for that too, and so on. As you can see, it is a recursive process, and choosing the easiest yet still powerful option is hard. Thus it deserves some research time before jumping at the first option we have.
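To make that recursion concrete, here is a toy Python sketch of the reasoning. The helpers get_deps and in_repo are hypothetical stand-ins for "list the dependencies of a package" and "is this package already in the Pardus repository"; the point is the post-order traversal, which yields the deepest missing dependencies first, i.e. the order in which the packages would have to be prepared.

    def missing_packages(package, get_deps, in_repo):
        """Return the packages we must prepare, deepest dependencies first."""
        order = []
        seen = set()

        def visit(pkg):
            if pkg in seen:           # skip dependencies shared by several packages
                return
            seen.add(pkg)
            for dep in get_deps(pkg):
                visit(dep)            # a package's dependencies come before the package
            if not in_repo(pkg):
                order.append(pkg)

        visit(package)
        return order

Comparing candidate applications then amounts to comparing the length of this list for each of them: the fewer missing packages, the less packaging work.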
With this motivation, I started searching for different applications. A comparison of possible screen readers can be found here: “http://en.wikipedia.org/wiki/Comparison_of_screen_readers”, and of speech synthesizers here: “http://en.wikipedia.org/wiki/Comparison_of_speech_synthesizers”. The distinction between screen readers and speech synthesizers is important. Speech synthesis is the artificial production of human speech, and a computer system used for this purpose is called a speech synthesizer. A text-to-speech (TTS) system converts normal language text into speech. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written works on a home computer. Some example speech synthesizers that work on Unix/Linux systems are eSpeak, Festival, Flite, FreeTTS and OpenTTS. A screen reader, however, is more general and different: it is a software application that attempts to identify and interpret what is being displayed on the screen, and this interpretation is then presented to the user via text-to-speech. Thus, we can say that screen readers use underlying speech synthesizers to do their work. Some example screen readers that work on Unix/Linux platforms are Brltty, Fire Vox, Linux Screen Reader (LSR), Orca, Suse Blinux and Emacspeak. I examined each of them via their websites and forums and read their documentation, both to get an idea of them and to be able to report back when necessary. I cannot summarize all the technical details about each of them here, but I gained a general idea of each.

Furthermore, since Pardus uses KDE (a desktop environment provided as the default working environment on many Linux distributions), we should focus on applications suitable for KDE. Most of these tools are prepared for the GNOME desktop, and even though applications written for KDE or GNOME can run on the other desktop by pulling in its libraries, choosing the ones written for KDE is better. For that purpose, KDE has KTTS, which stands for KDE Text-to-Speech. It is a subsystem within the KDE desktop for the conversion of text to audible speech. It is part of the KDE accessibility project, and you can visit KTTS at “http://accessibility.kde.org/developer/kttsd/”. Its main parts are KTTSD (the KDE Text-to-Speech Daemon, a non-GUI application that runs in the background, providing TTS support to KDE applications) and KTTSMGR (an application for configuring and controlling KTTSD). After working through its road map and general documentation, we discussed it with the project mentors and decided to focus on KTTS. A project like this could grow indefinitely, with many things added over time, so it will never really be finished, especially within the limited internship period. Thus, we decided to start from KDE accessibility and continue as far as we can. A “to do” list was prepared covering which packages should be prepared first, based on the requirements and dependencies I had collected and reported: KDE accessibility needs Speech Dispatcher or OpenTTS; OpenTTS in turn needs the dotconf library, eSpeak, Flite and festival-freebsoft-utils as speech synthesizers, and at-spi2-core plus at-spi2-atk etc. Moreover, for Fire Vox we need Orca or FreeTTS, Brltty, python-pyatspi etc. My first package to prepare is the dotconf library, but before that I need to learn how to prepare a PiSi package from the beginning.
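To give a feel for how applications talk to KTTSD, here is a small Python sketch that asks the daemon to speak a sentence over D-Bus. It assumes the KDE4-era org.kde.KSpeech interface at the /KSpeech object path, a say(text, options) method, and the python-dbus bindings; the exact service, path and method names vary between KDE versions, so treat these as assumptions to verify against the KTTS documentation.

    import dbus

    # KTTSD is assumed to register itself on the session bus
    # under these names (verify against your KDE version).
    bus = dbus.SessionBus()
    kttsd = bus.get_object("org.kde.kttsd", "/KSpeech")
    kspeech = dbus.Interface(kttsd, dbus_interface="org.kde.KSpeech")

    # say(text, options) queues the text for speaking and returns a job id;
    # 0 requests the default speaking options.
    job = kspeech.say("Engelsiz Pardus is running.", 0)
    print("queued speech job:", job)

This daemon-plus-client split is exactly why KTTSD is attractive for the project: once it is packaged and running, any KDE application can gain speech output through the same bus interface.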