Speech Recognition HOWTO

Stephen Cook, scook@gear21.com
1. Legal Notices

1.1. Copyright/License
Copyright (c) 2000-2002 Stephen C. Cook. Permission is granted to copy, distribute, and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation. This document is made available under the terms of the GNU Free Documentation License (GFDL), which is hereby incorporated by reference.

1.2. Disclaimer
The author disclaims all warranties with regard to this document, including all implied warranties of merchantability and fitness for a particular purpose; in no event shall the author be liable for any special, indirect, or consequential damages, or any damages whatsoever resulting from loss of use, data, or profits, whether in an action of contract, negligence, or other tortious action, arising out of or in connection with the use of this document.

1.3. Trademarks
All trademarks contained in this document are the property of their respective owners.

2. Foreword

2.1. About This Document
This document is aimed at beginner-to-intermediate Linux users interested in learning about speech recognition and trying it out. It may also help interested developers by explaining the basics of speech recognition programming. I started this document when I began researching what speech recognition software and development libraries were available for Linux. Automated Speech Recognition (ASR, or just SR) on Linux is just starting to come into its own, and I hope this document gives it a push in the right direction - by supporting both users and developers of ASR technology. I have left a variety of SR techniques out of this document, and instead I have focused on the "HOWTO" aspect (since this is a HOWTO...). I have included a Publications section so the interested reader can find books and articles on anything not covered here. This is not meant to be a definitive statement of ASR on Linux.
For the most recent version of this document, check the LDP archive, or go to: http://www.gear21.com/speech/index.html.

2.2. Acknowledgements
I would like to thank the following people for their help, reviewing, and support of this document:
2.3. Comments/Updates/Feedback
If you have any comments, suggestions, revisions, updates, or just want to chat about ASR, please send an email to me at scook@gear21.com.

2.4. ToDo
The following things are left "to do":
3. Introduction

3.1. Speech Recognition Basics
Speech recognition is the process by which a computer (or other type of machine) identifies spoken words. Basically, it means talking to your computer AND having it correctly recognize what you are saying. The following definitions are the basics needed for understanding speech recognition technology.
3.2. Types of Speech Recognition
Speech recognition systems can be separated into several different classes according to what types of utterances they have the ability to recognize. These classes are based on the fact that one of the difficulties of ASR is determining when a speaker starts and finishes an utterance. Most packages can fit into more than one class, depending on which mode they're using.
3.3. Uses and Applications
Although any task that involves interfacing with a computer can potentially use ASR, the following applications are the most common right now.
4. Hardware

4.1. Sound Cards
Because speech requires a relatively low bandwidth, just about any medium-to-high-quality 16-bit sound card will get the job done. You must have sound enabled in your kernel, and you must have the correct drivers installed. For more information on sound cards, please see "The Linux Sound HOWTO" available at: http://www.LinuxDoc.org/.

Sound card quality often starts a heated discussion about its impact on accuracy and noise. Sound cards with the 'cleanest' A/D (analog to digital) conversions are recommended, but most often the clarity of the digital sample depends more on the microphone quality, and even more on the environmental noise. Electrical "noise" from monitors, PCI slots, hard drives, etc. is usually nothing compared to the audible noise from computer fans, squeaking chairs, or heavy breathing.

Some ASR software packages may require a specific sound card. It's usually a good idea to stay away from specific hardware requirements, because they limit many of your possible future options and decisions. You'll have to weigh the benefits and costs if you are considering packages that require specific hardware to function properly.

4.2. Microphones
A quality microphone is key when using ASR. In most cases, a desktop microphone just won't do the job. They tend to pick up more ambient noise, which gives ASR programs a hard time. Hand-held microphones are also not the best choice, as it can be cumbersome to pick them up all the time. While they do limit the amount of ambient noise, they are most useful in applications that require changing speakers often, or when speaking to the recognizer isn't done frequently (when wearing a headset isn't an option). The best choice, and by far the most common, is the headset style. It allows the ambient noise to be minimized, while allowing you to have the microphone at the tip of your tongue all the time. Headsets are available without earphones and with earphones (mono or stereo).
I recommend the stereo headphones, but it's just a matter of personal taste. You can get an excellent-quality microphone headset for between $25 and $100. A good place to start looking is http://www.headphones.com or http://www.speechcontrol.com.

A quick note about levels: don't forget to turn up your microphone volume. This can be done with a program such as XMixer or OSS Mixer, and care should be used to avoid feedback noise. If the ASR software includes auto-adjustment programs, use them instead, as they are optimized for their particular recognition system.

4.3. Computers/Processors
ASR applications can be heavily dependent on processing speed. This is because a large amount of digital filtering and signal processing can take place in ASR. As with just about any CPU-intensive software, the faster the better. Also, the more memory the better. It's possible to do some SR with a 100MHz processor and 16M RAM, but for fast processing (large dictionaries, complex recognition schemes, or high sample rates), you should shoot for a minimum of 400MHz and 128M RAM. Because of the processing required, most software packages list their minimum requirements.

Using a cluster (Beowulf or otherwise) to perform massive recognition efforts hasn't yet been undertaken. If you know of any project underway, or in development, please send me a note! scook@gear21.com

5. Speech Recognition Software

5.1. Free Software
Much of the free software listed here is available for download at: http://sunsite.uio.no/pub/Linux/sound/apps/speech/

5.1.1. XVoice
XVoice is a dictation/continuous speech recognizer that can be used with a variety of X Window applications. It allows user-defined macros. This is a fine program with a definite future. Once set up, it performs with adequate accuracy. XVoice requires that you download and install IBM's (free) ViaVoice for Linux (see the Commercial section), and ViaVoice must be configured for XVoice to work correctly. Additionally, Lesstif/Motif (libXm) is required.
It is also important to note that because this program interacts with X windows, you must leave X resources open on your machine, so caution should be used if you use this on a networked or multi-user machine. This software is primarily for users. An RPM is available.

HomePage: http://www.compapp.dcu.ie/~tdoris/Xvoice/ http://www.zachary.com/creemer/xvoice.html
Project: http://xvoice.sourceforge.net
Community: http://www.onelist.com/community/xvoice

5.1.2. CVoiceControl/kVoiceControl
CVoiceControl (which stands for Console Voice Control) started its life as KVoiceControl (KDE Voice Control). It is a basic speech recognition system that allows a user to execute Linux commands by speaking them. CVoiceControl replaces KVoiceControl. The software includes a microphone-level configuration utility, a vocabulary "model editor" for adding new commands and utterances, and the speech recognition system itself. CVoiceControl is an excellent starting point for experienced users looking to get started in ASR. It is not the most user-friendly package, but once it has been trained correctly, it can be very helpful. Be sure to read the documentation while setting it up. This software is primarily for users.

Homepage: http://www.kiecza.de/daniel/linux/index.html
Documents: http://www.kiecza.de/daniel/linux/cvoicecontrol/index.html

5.1.3. Open Mind Speech
Started in late 1999, Open Mind Speech has changed names several times (it was VoiceControl, then SpeechInput, then FreeSpeech), and is now part of the "Open Mind Initiative". This is an open source project. Currently it isn't completely operational, and it is primarily for developers.

Homepage: http://freespeech.sourceforge.net

5.1.4. GVoice
GVoice is an ASR library that uses IBM's (free) ViaVoice SDK to control Gtk/GNOME applications. It includes libraries for initialization, the recognition engine, vocabulary manipulation, and panel control. Development on this has been idle for over a year.
This software is primarily for developers.

Homepage: http://www.cse.ogi.edu/~omega/gnome/gvoice/

5.1.5. ISIP
The Institute for Signal and Information Processing at Mississippi State University has made its speech recognition engine available. The toolkit includes a front-end, a decoder, and a training module. It's a functional toolkit. This software is primarily for developers.

The toolkit (and more information about ISIP) is available at: http://www.isip.msstate.edu/project/speech/

5.1.6. CMU Sphinx
Sphinx originally started at CMU and has recently been released as open source. This is a fairly large program that includes a lot of tools and information. It is still "in development", but includes trainers, recognizers, acoustic models, language models, and some limited documentation. This software is primarily for developers.

Homepage: http://www.speech.cs.cmu.edu/sphinx/Sphinx.html
Source: http://download.sourceforge.net/cmusphinx/sphinx2-0.1a.tar.gz

5.1.7. Ears
Although Ears isn't fully developed, it is a good starting point for programmers wishing to get started in ASR. This software is primarily for developers.

FTP site: ftp://svr-ftp.eng.cam.ac.uk/comp.speech/recognition/

5.1.8. NICO ANN Toolkit
The NICO Artificial Neural Network toolkit is a flexible back-propagation neural network toolkit optimized for speech recognition applications. This software is primarily for developers.

Homepage: http://www.speech.kth.se/NICO/index.html

5.1.9. Myers' Hidden Markov Model Software
This software by Richard Myers implements HMM algorithms in C++. It provides an example and learning tool for the HMM models described in L. Rabiner's book "Fundamentals of Speech Recognition". This software is primarily for developers.

Information is available at: http://www.itl.atr.co.jp/comp.speech/Section6/Recognition/myers.hmm.html

5.1.10. Jialong He's Speech Recognition Research Tool
Although not originally written for Linux, this research tool can be compiled on Linux.
It contains three different types of recognizers: DTW, Dynamic Hidden Markov Model, and Continuous Density Hidden Markov Model. It is for research and development use, as it is not a fully functional ASR system, but the toolkit contains some very useful tools. This software is primarily for developers.

More information is available at: http://www.itl.atr.co.jp/comp.speech/Section6/Recognition/jialong.html

5.1.11. More Free Software?
If you know of free software that isn't included in the above list, please send me a note at: scook@gear21.com. If you're in the mood, you can also send me where to get a copy of the software, and any impressions you may have about it. Thanks!

5.2. Commercial Software

5.2.1. IBM ViaVoice
IBM has made good on their promise to support Linux with their series of ViaVoice products for Linux, though the future of their SDKs isn't set in stone (their licensing agreement for developers isn't officially released as of this date - more to come). Their commercial (not free) product, IBM ViaVoice Dictation for Linux (available at http://www-4.ibm.com/software/speech/linux/dictation.html), performs very well, but has some sizeable system requirements compared to the more basic ASR systems (64M RAM and a 233MHz Pentium). For the $59.95US price tag you also get an Andrea NC-8 microphone. It also allows multiple users (but I haven't tried it with multiple users, so if anyone has any experience, please give me a shout). The package includes: documentation (PDF), a trainer, the dictation system, and installation scripts. Support for additional Linux distributions based on 2.2 kernels is also available in the latest release.

The ASR SDK is available for free, and includes IBM's SMAPI, a grammar API, documentation, and a variety of sample programs. The ViaVoice Run Time Kit provides an ASR engine and data files for dictation functions, and user utilities.
The ViaVoice Command & Control Run Time Kit includes the ASR engine and data files for command and control functions, and user utilities. (The SDK and Kits require 128M RAM and a Linux 2.2 or later kernel.)

The SDKs and Kits are available for free at: http://www-4.ibm.com/software/speech/dev/sdk_linux.html

5.2.2. Vocalis Speechware
More information on Vocalis and Vocalis Speechware is available at: http://www.vocalisspeechware.com and http://www.vocalis.com.

5.2.3. Babel Technologies
Babel Technologies has a Linux SDK available called Babear. It is a speaker-independent system based on a hybrid of Hidden Markov Model and Artificial Neural Network technology. They also have a variety of products for text-to-speech, speaker verification, and phoneme analysis.

More information is available at: http://www.babeltech.com.

5.2.4. SpeechWorks
I didn't see anything on their website that specifically mentioned Linux, but their "OpenSpeech Recognizer" uses VoiceXML, which is an open standard.

More information is available at: http://www.speechworks.com.

5.2.5. Nuance
Nuance offers a speech recognition/natural language product (currently Nuance 8.0) for a variety of *nix platforms. It can handle very large vocabularies and uses a unique distributed architecture for scalability and fault tolerance.

More information is available at: http://www.nuance.com.

5.2.6. Abbot/AbbotDemo
Abbot is a very-large-vocabulary, speaker-independent ASR system. It was originally developed by the Connectionist Speech Group at Cambridge University, and was transferred (commercialized) to SoftSound.

More information is available at: http://www.softsound.com.

AbbotDemo is a demonstration package of Abbot. This demo system has a vocabulary of about 5000 words and uses the connectionist/HMM continuous speech algorithm. It is a demonstration program with no source code.

5.2.7. Entropic
The fine people over at Entropic have been bought out by Microsoft... Their products and support services have all but disappeared.
Their support for HTK and ESPS/waves+ is gone, and their future is in the hands of Microsoft. Their old website at http://www.entropic.com has more information. K.K. Chin advised me that the original developers of the HTK (the Speech, Vision and Robotics Group at Cambridge) are still providing support for it. There is also a "free" version available at: http://htk.eng.cam.ac.uk. Also note that Microsoft still owns the copyright to the current HTK code...

5.2.8. More Commercial Products
There are rumors of more commercial ASR products becoming available in the near future (including L&H). I talked with a couple of L&H representatives at Comdex 2000 (Vegas), and none of them could give me any information on a Linux release, or even whether they planned on releasing any products for Linux. If you have any further information, please send the details to me at scook@gear21.com.

6. Inside Speech Recognition

6.1. How Recognizers Work
Recognition systems can be broken down into two main types. Pattern-recognition systems compare patterns to known/trained patterns to determine a match. Acoustic-phonetic systems use knowledge of the human body (speech production and hearing) to compare speech features (phonetics such as vowel sounds). Most modern systems focus on the pattern-recognition approach because it combines nicely with current computing techniques and tends to have higher accuracy. Most recognizers can be broken down into the following steps:
Although each step seems simple, each one can involve a multitude of different (and sometimes completely opposite) techniques.

(1) Audio/Utterance Recording: can be accomplished in a number of ways. Starting points can be found by comparing ambient audio levels (acoustic energy in some cases) with the sample just recorded. Endpoint detection is harder because speakers tend to leave "artifacts" including breathing/sighing, teeth chatter, and echoes.

(2) Pre-Filtering: is accomplished in a variety of ways, depending on other features of the recognition system. The most common methods are the "bank-of-filters" method, which uses a series of audio filters to prepare the sample, and the Linear Predictive Coding method, which uses a prediction function to calculate differences (errors). Different forms of spectral analysis are also used.

(3) Framing/Windowing: involves separating the sample data into pieces of a specific size. This is often rolled into step 2 or step 4. This step also involves preparing the sample boundaries for analysis (removing edge clicks, etc.).

(4) Additional Filtering: is not always present. It is the final preparation of each window before comparison and matching. Often this consists of time alignment and normalization.

(5) Comparison and Matching: offers a huge number of techniques. Most involve comparing the current window with known samples. There are methods that use Hidden Markov Models (HMMs), frequency analysis, differential analysis, linear algebra techniques/shortcuts, spectral distortion, and time distortion. All these methods are used to generate a probability and accuracy match.

(6) Actions: can be just about anything the developer wants. *GRIN*

6.2. Digital Audio Basics
Audio is inherently an analog phenomenon. Recording a digital sample is done by converting the analog signal from the microphone to a digital signal through the A/D converter in the sound card.
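Conceptually, that A/D step just maps a continuous voltage onto a grid of discrete values. As a rough illustration (plain Python with only the standard library; this is not code from any package mentioned above), here is what quantizing a clean sine wave at two common bit depths does to it:

```python
import math

def quantize(x, bits):
    """Snap a sample in [-1.0, 1.0] to the nearest of 2**bits levels
    and return the reconstructed (slightly wrong) value."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)      # spacing between adjacent levels
    return round(x / step) * step

def dynamic_range_db(bits):
    """Theoretical dynamic range of a linear quantizer, in decibels."""
    return 20 * math.log10(2 ** bits)

# A 100 Hz sine "recorded" at 8000 samples/sec for 10 milliseconds.
rate, freq = 8000, 100
samples = [math.sin(2 * math.pi * freq * n / rate) for n in range(80)]

for bits in (8, 16):
    err = max(abs(s - quantize(s, bits)) for s in samples)
    print("%2d bits: worst-case error %.6f, dynamic range %.1f dB"
          % (bits, err, dynamic_range_db(bits)))
```

Each extra bit halves the worst-case quantization error and buys roughly 6 dB of dynamic range, which is why 16-bit samples sound so much cleaner than 8-bit ones.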
When a microphone is operating, sound waves vibrate the magnetic element in the microphone, causing an electrical current to flow to the sound card (think of a speaker working in reverse). Basically, the A/D converter records the value of the electrical voltage at specific intervals.

There are two important factors during this process. First is the "sample rate": how often the voltage is recorded. Second is the "bits per sample": how accurately the value is recorded. A third item is the number of channels (mono or stereo), but for most ASR applications mono is sufficient. Most applications use preset values for these parameters, and users shouldn't change them unless the documentation suggests it. Developers should experiment with different values to determine what works best with their algorithms.

So what is a good sample rate for ASR? Because speech is relatively low bandwidth (mostly between 100Hz and 8kHz), 8000 samples/sec (8kHz) is sufficient for most basic ASR. But some people prefer 16000 samples/sec (16kHz) because it provides more accurate high-frequency information. If you have the processing power, use 16kHz. For most ASR applications, sampling rates higher than about 22kHz are a waste.

And what is a good value for "bits per sample"? 8 bits per sample will record values between 0 and 255, which means that the position of the microphone element is recorded as one of 256 positions. 16 bits per sample divides the element position into 65536 possible values. As with the sample rate, if you have enough processing power and memory, go with 16 bits per sample. For comparison, an audio Compact Disc is encoded with 16 bits per sample at about 44kHz.

The encoding format used should be simple - linear signed or unsigned. Using a U-Law/A-Law algorithm or some other compression scheme is usually not worth it, as it will cost you computing power and not gain you much.
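To put some numbers on these trade-offs, here is a small back-of-the-envelope sketch (plain Python; the companding curve is the standard continuous mu-law formula with mu = 255, shown only for illustration, not code from any product above). It compares raw data rates for the sample rates discussed here, and shows how a mu-law compander squeezes the linear range:

```python
import math

def data_rate(sample_rate, bits, channels=1):
    """Raw (uncompressed) audio data rate in bytes per second."""
    return sample_rate * (bits // 8) * channels

# Typical choices discussed above: basic ASR vs. CD audio.
print("8kHz/16-bit mono     :", data_rate(8000, 16), "bytes/sec")
print("44.1kHz/16-bit stereo:", data_rate(44100, 16, 2), "bytes/sec")

MU = 255  # standard mu-law constant used in North American telephony

def mulaw_encode(x):
    """Compress a sample in [-1.0, 1.0] along the mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_decode(y):
    """Expand a mu-law value back to the linear domain."""
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1) / MU, y)

# Quiet samples get a disproportionately large share of the output
# range - that is the point of companding, but note it costs a
# log/exp per sample on top of the ordinary A/D work.
for x in (0.01, 0.1, 1.0):
    print("linear %.2f -> mu-law %.3f" % (x, mulaw_encode(x)))
```

At 8kHz/16-bit mono, raw speech is only 16000 bytes/sec, which is why simple linear encoding is usually preferred: the storage saved by companding rarely justifies the extra per-sample arithmetic.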
7. Publications

If there is a publication that is not on this list that you think should be, please send the information to me at: scook@gear21.com.

7.1. Books
For a very LARGE online bibliography, check the Institut für Phonetik: http://www.informatik.uni-frankfurt.de/~ifb/bib_engl.html

7.2. Internet