LEADER |
00000cam a22000004i 4500 |
005 |
20221101225958.0 |
008 |
080807s2009 njua b 001 0 eng d |
010 |
|
|
|a 2008010335
|
020 |
|
|
|a 0131873210
|q hardcover (alk. paper)
|
020 |
|
|
|a 9780131873216
|q hardcover (alk. paper)
|
035 |
|
|
|a (ATU)b11371158
|
035 |
|
|
|a (OCoLC)213375806
|
040 |
|
|
|a DLC
|b eng
|e rda
|c DLC
|d YDX
|d UKM
|d BAKER
|d YDXCP
|d BWX
|d CDX
|d BTCTA
|d OCLCG
|d ATU
|
050 |
0 |
0 |
|a P98
|b .J87 2009
|
082 |
0 |
0 |
|a 410.285
|2 22
|
100 |
1 |
|
|a Jurafsky, Dan,
|d 1962-
|e author.
|9 253491
|
245 |
1 |
0 |
|a Speech and language processing :
|b an introduction to natural language processing, computational linguistics, and speech recognition /
|c Daniel Jurafsky, James H. Martin.
|
250 |
|
|
|a Second edition.
|
264 |
|
1 |
|a Upper Saddle River, N.J. :
|b Pearson Prentice Hall,
|c [2009]
|
264 |
|
4 |
|c ©2009
|
300 |
|
|
|a xxxi, 988 pages :
|b illustrations ;
|c 25 cm.
|
336 |
|
|
|a text
|b txt
|2 rdacontent
|
337 |
|
|
|a unmediated
|b n
|2 rdamedia
|
338 |
|
|
|a volume
|b nc
|2 rdacarrier
|
490 |
1 |
|
|a Prentice Hall series in artificial intelligence
|
504 |
|
|
|a Includes bibliographical references (pages 909-958) and index.
|
505 |
0 |
0 |
|g 1.
|t Introduction --
|g 1.1.
|t Knowledge in Speech and Language Processing --
|g 1.2.
|t Ambiguity --
|g 1.3.
|t Models and Algorithms --
|g 1.4.
|t Language, Thought, and Understanding --
|g 1.5.
|t The State of the Art --
|g 1.6.
|t Some Brief History --
|g 1.6.1.
|t Foundational Insights: 1940s and 1950s --
|g 1.6.2.
|t The Two Camps: 1957–1970 --
|g 1.6.3.
|t Four Paradigms: 1970–1983 --
|g 1.6.4.
|t Empiricism and Finite State Models Redux: 1983–1993 --
|g 1.6.5.
|t The Field Comes Together: 1994–1999 --
|g 1.6.6.
|t The Rise of Machine Learning: 2000–2008 --
|g 1.6.7.
|t On Multiple Discoveries --
|g 1.6.8.
|t A Final Brief Note on Psychology --
|g 1.7.
|t Summary --
|g Part I.
|t Words --
|g 2.
|t Regular Expressions and Automata --
|g 2.1.
|t Regular Expressions --
|g 2.1.1.
|t Basic Regular Expression Patterns --
|g 2.1.2.
|t Disjunction, Grouping, and Precedence --
|g 2.1.3.
|t A Simple Example --
|g 2.1.4.
|t A More Complex Example --
|g 2.1.5.
|t Advanced Operators --
|g 2.1.6.
|t Regular Expression Substitution, Memory, and ELIZA --
|g 2.2.
|t Finite-State Automata --
|g 2.2.1.
|t Using an FSA to Recognize Sheeptalk --
|g 2.2.2.
|t Formal Languages --
|g 2.2.3.
|t Another Example --
|g 2.2.4.
|t Non-Deterministic FSAs --
|g 2.2.5.
|t Using an NFSA to Accept Strings --
|g 2.2.6.
|t Recognition as Search --
|g 2.2.7.
|t Relating Deterministic and Non-Deterministic Automata --
|g 2.3.
|t Regular Languages and FSAs --
|g 2.4.
|t Summary --
|g 3.
|t Words and Transducers --
|g 3.1.
|t Survey of (Mostly) English Morphology --
|g 3.1.1.
|t Inflectional Morphology --
|g 3.1.2.
|t Derivational Morphology --
|g 3.1.3.
|t Cliticization --
|g 3.1.4.
|t Non-Concatenative Morphology --
|g 3.1.5.
|t Agreement --
|g 3.2.
|t Finite-State Morphological Parsing --
|g 3.3.
|t Construction of a Finite-State Lexicon --
|g 3.4.
|t Finite-State Transducers --
|g 3.4.1.
|t Sequential Transducers and Determinism --
|g 3.5.
|t FSTs for Morphological Parsing --
|g 3.6.
|t Transducers and Orthographic Rules --
|g 3.7.
|t The Combination of an FST Lexicon and Rules --
|g 3.8.
|t Lexicon-Free FSTs: The Porter Stemmer --
|g 3.9.
|t Word and Sentence Tokenization --
|g 3.9.1.
|t Segmentation in Chinese --
|g 3.10.
|t Detection and Correction of Spelling Errors --
|g 3.11.
|t Minimum Edit Distance --
|g 3.12.
|t Human Morphological Processing --
|g 3.13.
|t Summary --
|g 4.
|t N-grams --
|g 4.1.
|t Word Counting in Corpora --
|g 4.2.
|t Simple (Unsmoothed) N-grams --
|g 4.3.
|t Training and Test Sets --
|g 4.3.1.
|t N-gram Sensitivity to the Training Corpus --
|g 4.3.2.
|t Unknown Words: Open Versus Closed Vocabulary Tasks --
|g 4.4.
|t Evaluating N-grams: Perplexity --
|g 4.5.
|t Smoothing --
|g 4.5.1.
|t Laplace Smoothing --
|g 4.5.2.
|t Good-Turing Discounting --
|g 4.5.3.
|t Some Advanced Issues in Good-Turing Estimation --
|g 4.6.
|t Interpolation --
|g 4.7.
|t Backoff --
|g 4.7.1.
|t Advanced: Details of Computing Katz Backoff α and P* --
|g 4.8.
|t Practical Issues: Toolkits and Data Formats --
|g 4.9.
|t Advanced Issues in Language Modeling --
|g 4.9.1.
|t Advanced Smoothing Methods: Kneser-Ney Smoothing --
|g 4.9.2.
|t Class-Based N-grams --
|g 4.9.3.
|t Language Model Adaptation and Web Use --
|g 4.9.4.
|t Using Longer Distance Information: A Brief Summary --
|g 4.10.
|t Advanced: Information Theory Background --
|g 4.10.1.
|t Cross-Entropy for Comparing Models --
|g 4.11.
|t Advanced: The Entropy of English and Entropy Rate Constancy --
|g 4.12.
|t Summary --
|g 5.
|t Part-of-Speech Tagging --
|g 5.1.
|t (Mostly) English Word Classes --
|g 5.2.
|t Tagsets for English --
|g 5.3.
|t Part-of-Speech Tagging --
|g 5.4.
|t Rule-Based Part-of-Speech Tagging --
|g 5.5.
|t HMM Part-of-Speech Tagging --
|g 5.5.1.
|t Computing the Most-Likely Tag Sequence: An Example --
|g 5.5.2.
|t Formalizing Hidden Markov Model Taggers --
|g 5.5.3.
|t Using the Viterbi Algorithm for HMM Tagging --
|g 5.5.4.
|t Extending the HMM Algorithm to Trigrams --
|g 5.6.
|t Transformation-Based Tagging --
|g 5.6.1.
|t How TBL Rules Are Applied --
|g 5.6.2.
|t How TBL Rules Are Learned --
|g 5.7.
|t Evaluation and Error Analysis --
|g 5.7.1.
|t Error Analysis --
|g 5.8.
|t Advanced Issues in Part-of-Speech Tagging --
|g 5.8.1.
|t Practical Issues: Tag Indeterminacy and Tokenization --
|g 5.8.2.
|t Unknown Words --
|g 5.8.3.
|t Part-of-Speech Tagging for Other Languages --
|g 5.8.4.
|t Tagger Combination --
|g 5.9.
|t Advanced: The Noisy Channel Model for Spelling --
|g 5.9.1.
|t Contextual Spelling Error Correction --
|g 5.10.
|t Summary --
|g 6.
|t Hidden Markov and Maximum Entropy Models --
|g 6.1.
|t Markov Chains --
|g 6.2.
|t The Hidden Markov Model --
|g 6.3.
|t Likelihood Computation: The Forward Algorithm --
|g 6.4.
|t Decoding: The Viterbi Algorithm --
|g 6.5.
|t HMM Training: The Forward-Backward Algorithm --
|g 6.6.
|t Maximum Entropy Models: Background --
|g 6.6.1.
|t Linear Regression --
|g 6.6.2.
|t Logistic Regression --
|g 6.6.3.
|t Logistic Regression: Classification --
|g 6.6.4.
|t Advanced: Learning in Logistic Regression --
|g 6.7.
|t Maximum Entropy Modeling --
|g 6.7.1.
|t Why We Call it Maximum Entropy --
|g 6.8.
|t Maximum Entropy Markov Models --
|g 6.8.1.
|t Decoding and Learning in MEMMs --
|g 6.9.
|t Summary --
|g Part II.
|t Speech --
|g 7.
|t Phonetics --
|g 7.1.
|t Speech Sounds and Phonetic Transcription --
|g 7.2.
|t Articulatory Phonetics --
|g 7.2.1.
|t The Vocal Organs --
|g 7.2.2.
|t Consonants: Place of Articulation --
|g 7.2.3.
|t Consonants: Manner of Articulation --
|g 7.2.4.
|t Vowels --
|g 7.2.5.
|t Syllables --
|g 7.3.
|t Phonological Categories and Pronunciation Variation --
|g 7.3.1.
|t Phonetic Features --
|g 7.3.2.
|t Predicting Phonetic Variation --
|g 7.3.3.
|t Factors Influencing Phonetic Variation --
|g 7.4.
|t Acoustic Phonetics and Signals --
|g 7.4.1.
|t Waves --
|g 7.4.2.
|t Speech Sound Waves --
|g 7.4.3.
|t Frequency and Amplitude --
|g 7.4.4.
|t Interpretation of Phones from a Waveform --
|g 7.4.5.
|t Spectra and the Frequency Domain --
|g 7.4.6.
|t The Source-Filter Model --
|g 7.5.
|t Phonetic Resources --
|g 7.6.
|t Advanced: Articulatory and Gestural Phonology --
|g 7.7.
|t Summary --
|
505 |
8 |
0 |
|g 8.
|t Speech Synthesis --
|g 8.1.
|t Text Normalization --
|g 8.1.1.
|t Sentence Tokenization --
|g 8.1.2.
|t Non-Standard Words --
|g 8.1.3.
|t Homograph Disambiguation --
|g 8.2.
|t Phonetic Analysis --
|g 8.2.1.
|t Dictionary Lookup --
|g 8.2.2.
|t Names --
|g 8.2.3.
|t Grapheme-to-Phoneme Conversion --
|g 8.3.
|t Prosodic Analysis --
|g 8.3.1.
|t Prosodic Structure --
|g 8.3.2.
|t Prosodic Prominence --
|g 8.3.3.
|t Tune --
|g 8.3.4.
|t More Sophisticated Models: ToBI --
|g 8.3.5.
|t Computing Duration from Prosodic Labels --
|g 8.3.6.
|t Computing F0 from Prosodic Labels --
|g 8.3.7.
|t Final Result of Text Analysis: Internal Representation --
|g 8.4.
|t Diphone Waveform Synthesis --
|g 8.4.1.
|t Steps for Building a Diphone Database --
|g 8.4.2.
|t Diphone Concatenation and TD-PSOLA for Prosody --
|g 8.5.
|t Unit Selection (Waveform) Synthesis --
|g 8.6.
|t Evaluation --
|g 9.
|t Automatic Speech Recognition --
|g 9.1.
|t Speech Recognition Architecture --
|g 9.2.
|t Applying the Hidden Markov Model to Speech --
|g 9.3.
|t Feature Extraction: MFCC vectors --
|g 9.3.1.
|t Preemphasis --
|g 9.3.2.
|t Windowing --
|g 9.3.3.
|t Discrete Fourier Transform --
|g 9.3.4.
|t Mel Filter Bank and Log --
|g 9.3.5.
|t The Cepstrum: Inverse Discrete Fourier Transform --
|g 9.3.6.
|t Deltas and Energy --
|g 9.3.7.
|t Summary: MFCC --
|g 9.4.
|t Acoustic Likelihood Computation --
|g 9.4.1.
|t Vector Quantization --
|g 9.4.2.
|t Gaussian PDFs --
|g 9.4.3.
|t Probabilities, Log Probabilities and Distance Functions --
|g 9.5.
|t The Lexicon and Language Model --
|g 9.6.
|t Search and Decoding --
|g 9.7.
|t Embedded Training --
|g 9.8.
|t Evaluation: Word Error Rate --
|g 9.9.
|t Summary --
|g 10.
|t Speech Recognition: Advanced Topics --
|g 10.1.
|t Multipass Decoding: N-best Lists and Lattices --
|g 10.2.
|t A* ('Stack') Decoding --
|g 10.3.
|t Context-Dependent Acoustic Models: Triphones --
|g 10.4.
|t Discriminative Training --
|g 10.4.1.
|t Maximum Mutual Information Estimation --
|g 10.4.2.
|t Acoustic Models Based on Posterior Classifiers --
|g 10.5.
|t Modeling Variation --
|g 10.5.1.
|t Environmental Variation and Noise --
|g 10.5.2.
|t Speaker Variation and Speaker Adaptation --
|g 10.5.3.
|t Pronunciation Modeling: Variation Due to Genre --
|g 10.6.
|t Metadata: Boundaries, Punctuation, and Disfluencies --
|g 10.7.
|t Speech Recognition by Humans --
|g 10.8.
|t Summary --
|g 11.
|t Computational Phonology --
|g 11.1.
|t Finite-State Phonology --
|g 11.2.
|t Advanced Finite-State Phonology --
|g 11.2.1.
|t Harmony --
|g 11.2.2.
|t Templatic Morphology --
|g 11.3.
|t Computational Optimality Theory --
|g 11.3.1.
|t Finite-State Transducer Models of Optimality Theory --
|g 11.3.2.
|t Stochastic Models of Optimality Theory --
|g 11.4.
|t Syllabification --
|g 11.5.
|t Learning Phonology and Morphology --
|g 11.5.1.
|t Learning Phonological Rules --
|g 11.5.2.
|t Learning Morphology --
|g 11.5.3.
|t Learning in Optimality Theory --
|g 11.6.
|t Summary --
|g Part III.
|t Syntax --
|g 12.
|t Formal Grammars of English --
|g 12.1.
|t Constituency --
|g 12.2.
|t Context-Free Grammars --
|g 12.2.1.
|t Formal Definition of Context-Free Grammar --
|g 12.3.
|t Some Grammar Rules for English --
|g 12.3.1.
|t Sentence-Level Constructions --
|g 12.3.2.
|t Clauses and Sentences --
|g 12.3.3.
|t The Noun Phrase --
|g 12.3.4.
|t Agreement --
|g 12.3.5.
|t The Verb Phrase and Subcategorization --
|g 12.3.6.
|t Auxiliaries --
|g 12.3.7.
|t Coordination --
|g 12.4.
|t Treebanks --
|g 12.4.1.
|t Example: The Penn Treebank Project --
|g 12.4.2.
|t Treebanks as Grammars --
|g 12.4.3.
|t Treebank Searching --
|g 12.4.4.
|t Heads and Head Finding --
|g 12.5.
|t Grammar Equivalence and Normal Form --
|g 12.6.
|t Finite-State and Context-Free Grammars --
|g 12.7.
|t Dependency Grammars --
|g 12.7.1.
|t The Relationship Between Dependencies and Heads --
|g 12.7.2.
|t Categorial Grammar --
|g 12.8.
|t Spoken Language Syntax --
|g 12.8.1.
|t Disfluencies and Repair --
|g 12.8.2.
|t Treebanks for Spoken Language --
|g 12.9.
|t Grammars and Human Processing --
|g 12.10.
|t Summary --
|g 13.
|t Syntactic Parsing --
|g 13.1.
|t Parsing as Search --
|g 13.1.1.
|t Top-Down Parsing --
|g 13.1.2.
|t Bottom-Up Parsing --
|g 13.1.3.
|t Comparing Top-Down and Bottom-Up Parsing --
|g 13.2.
|t Ambiguity --
|g 13.3.
|t Search in the Face of Ambiguity --
|g 13.4.
|t Dynamic Programming Parsing Methods --
|g 13.4.1.
|t CKY Parsing --
|g 13.4.2.
|t The Earley Algorithm --
|g 13.4.3.
|t Chart Parsing --
|g 13.5.
|t Partial Parsing --
|g 13.5.1.
|t Finite-State Rule-Based Chunking --
|g 13.5.2.
|t Machine Learning-Based Approaches to Chunking --
|g 13.5.3.
|t Evaluating Chunking Systems --
|g 13.6.
|t Summary --
|g 14.
|t Statistical Parsing --
|g 14.1.
|t Probabilistic Context-Free Grammars --
|g 14.1.1.
|t PCFGs for Disambiguation --
|g 14.1.2.
|t PCFGs for Language Modeling --
|g 14.2.
|t Probabilistic CKY Parsing of PCFGs --
|g 14.3.
|t Learning PCFG Rule Probabilities --
|g 14.4.
|t Problems with PCFGs --
|g 14.4.1.
|t Independence Assumptions Miss Structural Dependencies Between Rules --
|g 14.4.2.
|t Lack of Sensitivity to Lexical Dependencies --
|g 14.5.
|t Improving PCFGs by Splitting Non-Terminals --
|g 14.6.
|t Probabilistic Lexicalized CFGs --
|g 14.6.1.
|t The Collins Parser --
|g 14.6.2.
|t Advanced: Further Details of the Collins Parser --
|g 14.7.
|t Evaluating Parsers --
|g 14.8.
|t Advanced: Discriminative Reranking --
|g 14.9.
|t Advanced: Parser-Based Language Modeling --
|g 14.10.
|t Human Parsing --
|g 14.11.
|t Summary --
|g 15.
|t Features and Unification --
|g 15.1.
|t Feature Structures --
|g 15.2.
|t Unification of Feature Structures --
|g 15.3.
|t Feature Structures in the Grammar --
|g 15.3.1.
|t Agreement --
|g 15.3.2.
|t Head Features --
|g 15.3.3.
|t Subcategorization --
|g 15.3.4.
|t Long-Distance Dependencies --
|g 15.4.
|t Implementation of Unification --
|g 15.4.1.
|t Unification Data Structures --
|g 15.4.2.
|t The Unification Algorithm --
|g 15.5.
|t Parsing with Unification Constraints --
|g 15.5.1.
|t Integration of Unification into an Earley Parser --
|g 15.5.2.
|t Unification-Based Parsing --
|g 15.6.
|t Types and Inheritance --
|g 15.6.1.
|t Advanced: Extensions to Typing --
|g 15.6.2.
|t Other Extensions to Unification --
|g 15.7.
|t Summary --
|g 16.
|t Language and Complexity --
|g 16.1.
|t The Chomsky Hierarchy --
|g 16.2.
|t Ways to Tell if a Language Isn't Regular --
|g 16.2.1.
|t The Pumping Lemma --
|g 16.2.2.
|t Proofs That Various Natural Languages Are Not Regular --
|g 16.3.
|t Is Natural Language Context-Free? --
|g 16.4.
|t Complexity and Human Processing --
|g 16.5.
|t Summary --
|g Part IV.
|t Semantics and Pragmatics --
|g 17.
|t The Representation of Meaning --
|g 17.1.
|t Computational Desiderata for Representations --
|g 17.1.1.
|t Verifiability --
|g 17.1.2.
|t Unambiguous Representations --
|g 17.1.3.
|t Canonical Form --
|g 17.1.4.
|t Inference and Variables --
|g 17.1.5.
|t Expressiveness --
|g 17.2.
|t Model-Theoretic Semantics --
|g 17.3.
|t First-Order Logic --
|g 17.3.1.
|t Basic Elements of First-Order Logic --
|g 17.3.2.
|t Variables and Quantifiers --
|g 17.3.3.
|t Lambda Notation --
|g 17.3.4.
|t The Semantics of First-Order Logic --
|g 17.3.5.
|t Inference --
|g 17.4.
|t Event and State Representations --
|g 17.4.1.
|t Representing Time --
|g 17.4.2.
|t Aspect --
|g 17.5.
|t Description Logics --
|g 17.6.
|t Embodied and Situated Approaches to Meaning --
|g 17.7.
|t Summary --
|
505 |
8 |
0 |
|g 18.
|t Computational Semantics --
|g 18.1.
|t Syntax-Driven Semantic Analysis --
|g 18.2.
|t Semantic Augmentations to Syntactic Rules --
|g 18.3.
|t Quantifier Scope Ambiguity and Underspecification --
|g 18.3.1.
|t Store and Retrieve Approaches --
|g 18.3.2.
|t Constraint-Based Approaches --
|g 18.4.
|t Unification-Based Approaches to Semantic Analysis --
|g 18.5.
|t Integration of Semantics into the Earley Parser --
|g 18.6.
|t Idioms and Compositionality --
|g 18.7.
|t Summary --
|g 19.
|t Lexical Semantics --
|g 19.1.
|t Word Senses --
|g 19.2.
|t Relations Between Senses --
|g 19.2.1.
|t Synonymy and Antonymy --
|g 19.2.2.
|t Hyponymy --
|g 19.2.3.
|t Semantic Fields --
|g 19.3.
|t WordNet: A Database of Lexical Relations --
|g 19.4.
|t Event Participants --
|g 19.4.1.
|t Thematic Roles --
|g 19.4.2.
|t Diathesis Alternations --
|g 19.4.3.
|t Problems with Thematic Roles --
|g 19.4.4.
|t The Proposition Bank --
|g 19.4.5.
|t FrameNet --
|g 19.4.6.
|t Selectional Restrictions --
|g 19.5.
|t Primitive Decomposition --
|g 19.6.
|t Advanced: Metaphor --
|g 19.7.
|t Summary --
|g 20.
|t Computational Lexical Semantics --
|g 20.1.
|t Word Sense Disambiguation: Overview --
|g 20.2.
|t Supervised Word Sense Disambiguation --
|g 20.2.1.
|t Feature Extraction for Supervised Learning --
|g 20.2.2.
|t Naive Bayes and Decision List Classifiers --
|g 20.3.
|t WSD Evaluation, Baselines, and Ceilings --
|g 20.4.
|t WSD: Dictionary and Thesaurus Methods --
|g 20.4.1.
|t The Lesk Algorithm --
|g 20.4.2.
|t Selectional Restrictions and Selectional Preferences --
|g 20.5.
|t Minimally Supervised WSD: Bootstrapping --
|g 20.6.
|t Word Similarity: Thesaurus Methods --
|g 20.7.
|t Word Similarity: Distributional Methods --
|g 20.7.1.
|t Defining a Word's Co-Occurrence Vectors --
|g 20.7.2.
|t Measuring Association with Context --
|g 20.7.3.
|t Defining Similarity Between Two Vectors --
|g 20.7.4.
|t Evaluating Distributional Word Similarity --
|g 20.8.
|t Hyponymy and Other Word Relations --
|g 20.9.
|t Semantic Role Labeling --
|g 20.10.
|t Advanced: Unsupervised Sense Disambiguation --
|g 20.11.
|t Summary --
|g 21.
|t Computational Discourse --
|g 21.1.
|t Discourse Segmentation --
|g 21.1.1.
|t Unsupervised Discourse Segmentation --
|g 21.1.2.
|t Supervised Discourse Segmentation --
|g 21.1.3.
|t Discourse Segmentation Evaluation --
|g 21.2.
|t Text Coherence --
|g 21.2.1.
|t Rhetorical Structure Theory --
|g 21.2.2.
|t Automatic Coherence Assignment --
|g 21.3.
|t Reference Resolution --
|g 21.4.
|t Reference Phenomena --
|g 21.4.1.
|t Five Types of Referring Expressions --
|g 21.4.2.
|t Information Status --
|g 21.5.
|t Features for Pronominal Anaphora Resolution --
|g 21.6.
|t Three Algorithms for Pronominal Anaphora Resolution --
|g 21.6.1.
|t Pronominal Anaphora Baseline: The Hobbs Algorithm --
|g 21.6.2.
|t A Centering Algorithm for Anaphora Resolution --
|g 21.6.3.
|t A Log-Linear Model for Pronominal Anaphora Resolution --
|g 21.6.4.
|t Features for Pronominal Anaphora Resolution --
|g 21.7.
|t Coreference Resolution --
|g 21.8.
|t Evaluation of Coreference Resolution --
|g 21.9.
|t Advanced: Inference-Based Coherence Resolution --
|g 21.10.
|t Psycholinguistic Studies of Reference --
|g 21.11.
|t Summary --
|g Part V.
|t Applications --
|g 22.
|t Information Extraction --
|g 22.1.
|t Named Entity Recognition --
|g 22.1.1.
|t Ambiguity in Named Entity Recognition --
|g 22.1.2.
|t NER as Sequence Labeling --
|g 22.1.3.
|t Evaluation of Named Entity Recognition --
|g 22.1.4.
|t Practical NER Architectures --
|g 22.2.
|t Relation Detection and Classification --
|g 22.2.1.
|t Supervised Learning Approaches to Relation Analysis --
|g 22.2.2.
|t Lightly Supervised Approaches to Relation Analysis --
|g 22.2.3.
|t Evaluation of Relation Analysis Systems --
|g 22.3.
|t Temporal and Event Processing --
|g 22.3.1.
|t Temporal Expression Recognition --
|g 22.3.2.
|t Temporal Normalization --
|g 22.3.3.
|t Event Detection and Analysis --
|g 22.3.4.
|t TimeBank --
|g 22.4.
|t Template-Filling --
|g 22.4.1.
|t Statistical Approaches to Template-Filling --
|g 22.4.2.
|t Finite-State Template-Filling Systems --
|g 22.5.
|t Advanced: Biomedical Information Extraction --
|g 22.5.1.
|t Biological Named Entity Recognition --
|g 22.5.2.
|t Gene Normalization --
|g 22.5.3.
|t Biological Roles and Relations --
|g 22.6.
|t Summary --
|g 23.
|t Question Answering and Summarization --
|g 23.1.
|t Information Retrieval --
|g 23.1.1.
|t The Vector Space Model --
|g 23.1.2.
|t Term Weighting --
|g 23.1.3.
|t Term Selection and Creation --
|g 23.1.4.
|t Evaluation of Information-Retrieval Systems --
|g 23.1.5.
|t Homonymy, Polysemy, and Synonymy --
|g 23.1.6.
|t Ways to Improve User Queries --
|g 23.2.
|t Factoid Question Answering --
|g 23.2.1.
|t Question Processing --
|g 23.2.2.
|t Passage Retrieval --
|g 23.2.3.
|t Answer Processing --
|g 23.2.4.
|t Evaluation of Factoid Answers --
|g 23.3.
|t Summarization --
|g 23.4.
|t Single Document Summarization --
|g 23.4.1.
|t Unsupervised Content Selection --
|g 23.4.2.
|t Unsupervised Summarization Based on Rhetorical Parsing --
|g 23.4.3.
|t Supervised Content Selection --
|g 23.4.4.
|t Sentence Simplification --
|g 23.5.
|t Multi-Document Summarization --
|g 23.5.1.
|t Content Selection in Multi-Document Summarization --
|g 23.5.2.
|t Information Ordering in Multi-Document Summarization --
|g 23.6.
|t Focused Summarization and Question Answering --
|g 23.7.
|t Summarization Evaluation --
|g 23.8.
|t Summary --
|g 24.
|t Dialogue and Conversational Agents --
|g 24.1.
|t Properties of Human Conversations --
|g 24.1.1.
|t Turns and Turn-Taking --
|g 24.1.2.
|t Language as Action: Speech Acts --
|g 24.1.3.
|t Language as Joint Action: Grounding --
|g 24.1.4.
|t Conversational Structure --
|g 24.1.5.
|t Conversational Implicature --
|g 24.2.
|t Basic Dialogue Systems --
|g 24.2.1.
|t ASR component --
|g 24.2.2.
|t NLU component --
|g 24.2.3.
|t Generation and TTS components --
|g 24.2.4.
|t Dialogue Manager --
|g 24.2.5.
|t Dealing with Errors: Confirmation and Rejection --
|g 24.3.
|t VoiceXML --
|g 24.4.
|t Dialogue System Design and Evaluation --
|g 24.4.1.
|t Designing Dialogue Systems --
|g 24.4.2.
|t Evaluating Dialogue Systems --
|g 24.5.
|t Information-State and Dialogue Acts --
|g 24.5.1.
|t Using Dialogue Acts --
|g 24.5.2.
|t Interpreting Dialogue Acts --
|g 24.5.3.
|t Detecting Correction Acts --
|g 24.5.4.
|t Generating Dialogue Acts: Confirmation and Rejection --
|g 24.6.
|t Markov Decision Process Architecture --
|g 24.7.
|t Advanced: Plan-Based Dialogue Agents --
|g 24.7.1.
|t Plan-Inferential Interpretation and Production --
|g 24.7.2.
|t The Intentional Structure of Dialogue --
|g 24.8.
|t Summary --
|g 25.
|t Machine Translation --
|g 25.1.
|t Why Machine Translation Is Hard --
|g 25.1.1.
|t Typology --
|g 25.1.2.
|t Other Structural Divergences --
|g 25.1.3.
|t Lexical Divergences --
|g 25.2.
|t Classical MT and the Vauquois Triangle --
|g 25.2.1.
|t Direct Translation --
|g 25.2.2.
|t Transfer --
|g 25.2.3.
|t Combined Direct and Transfer Approaches in Classic MT --
|g 25.2.4.
|t The Interlingua Idea: Using Meaning --
|g 25.3.
|t Statistical MT --
|g 25.4.
|t P(F|E): The Phrase-Based Translation Model --
|g 25.5.
|t Alignment in MT --
|g 25.5.1.
|t IBM Model 1 --
|g 25.5.2.
|t HMM Alignment --
|g 25.6.
|t Training Alignment Models --
|g 25.6.1.
|t EM for Training Alignment Models --
|g 25.7.
|t Symmetrizing Alignments for Phrase-Based MT --
|g 25.8.
|t Decoding for Phrase-Based Statistical MT --
|g 25.9.
|t MT Evaluation --
|g 25.9.1.
|t Using Human Raters --
|g 25.9.2.
|t Automatic Evaluation: BLEU --
|g 25.10.
|t Advanced: Syntactic Models for MT --
|g 25.11.
|t Advanced: IBM Model 3 and Fertility --
|g 25.11.1.
|t Training for Model 3 --
|g 25.12.
|t Advanced: Log-linear Models for MT --
|g 25.13.
|t Summary.
|
588 |
|
|
|a Machine converted from AACR2 source record.
|
650 |
|
0 |
|a Computational linguistics
|9 320129
|
650 |
|
0 |
|a Automatic speech recognition
|9 329957
|
700 |
1 |
|
|a Martin, James H.,
|d 1959-
|e author.
|9 413493
|
830 |
|
0 |
|a Prentice Hall series in artificial intelligence.
|9 1046569
|
907 |
|
|
|a .b11371158
|b 28-07-20
|c 27-10-15
|
942 |
|
|
|c B
|
945 |
|
|
|a 410.285 JUR
|g 1
|i A457831B
|j 0
|l cmain
|o -
|p $153.71
|q -
|r -
|s -
|t 0
|u 13
|v 7
|w 1
|x 3
|y .i12766288
|z 29-10-15
|
952 |
|
|
|0 0
|1 0
|4 0
|6 410_285000000000000_JUR
|7 0
|9 313307
|a C
|b C
|c cmain
|d 2015-10-29
|g 153.71
|i i12766288
|l 15
|m 8
|o 410.285 JUR
|p A457831B
|r 2024-05-22 10:32:51
|s 2024-04-24
|t 1
|v 153.71
|w 2021-10-31
|y B
|
998 |
|
|
|a (2)b
|a (2)c
|b 06-04-16
|c m
|d a
|e -
|f eng
|g nju
|h 0
|
999 |
|
|
|c 1187420
|d 1187420
|