Factorial Hidden Markov Models for full and weakly supervised supertagging

For many sequence prediction tasks in Natural Language Processing, modeling dependencies between individual predictions can improve the prediction accuracy of the sequence as a whole. Supertagging involves assigning lexical entries to words based on a lexicalized grammatical theory such as Combinatory Categorial Grammar (CCG). Previous work has used Bayesian HMMs to learn taggers for POS tagging and supertagging separately. Modeling them jointly has the potential to produce more robust and accurate supertaggers trained with less supervision, and thereby to help in the creation of useful models for new languages and domains. Factorial Hidden Markov Models (FHMMs) support joint inference for multiple sequence prediction tasks. Here, I use them to jointly predict part-of-speech tag and supertag sequences with varying levels of supervision. I show that supervised training of FHMMs improves performance compared to standard HMMs, especially when labeled training material is scarce. Second, FHMMs trained from tag dictionaries rather than labeled examples also outperform a standard HMM. Finally, I show that an FHMM and a maximum entropy Markov model can complement each other in a single-step co-training setup that improves the performance of both models when limited labeled training material is available.
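
To make the joint model concrete, below is a minimal Python sketch of one common FHMM factorization for this task: two hidden chains (POS tags and supertags) where the supertag chain conditions on the POS chain, and each word is emitted from the (POS, supertag) pair. The factorization, function names, and probability values are all illustrative assumptions, not the thesis's exact parameterization.

import math
from collections import defaultdict

# Toy factorial HMM for joint POS + supertag sequences. Assumed
# (illustrative) factorization:
#   P(words, pos, stag) = prod_t P(pos_t | pos_{t-1})
#                              * P(stag_t | stag_{t-1}, pos_t)
#                              * P(word_t | pos_t, stag_t)
# Tying the supertag chain to the POS chain is what lets the two
# tagging tasks share information during joint inference.

def make_table(entries, smooth=1e-6):
    """Conditional probability table with a tiny floor for unseen events."""
    table = defaultdict(lambda: smooth)
    table.update(entries)
    return table

def joint_log_prob(words, pos_seq, stag_seq, pos_trans, stag_trans, emit):
    """Log-probability of one fully observed (word, POS, supertag) sequence."""
    lp = 0.0
    prev_pos, prev_stag = "<s>", "<s>"
    for w, p, s in zip(words, pos_seq, stag_seq):
        lp += math.log(pos_trans[(prev_pos, p)])       # POS chain transition
        lp += math.log(stag_trans[(prev_stag, p, s)])  # supertag chain, tied to POS
        lp += math.log(emit[(p, s, w)])                # emission from both states
        prev_pos, prev_stag = p, s
    return lp

# Tiny hand-set example: "Ed walks" with CCG-style supertags.
pos_trans = make_table({("<s>", "NNP"): 0.5, ("NNP", "VBZ"): 0.6})
stag_trans = make_table({("<s>", "NNP", "NP"): 0.7, ("NP", "VBZ", "S\\NP"): 0.6})
emit = make_table({("NNP", "NP", "Ed"): 0.3, ("VBZ", "S\\NP", "walks"): 0.2})

print(joint_log_prob(["Ed", "walks"], ["NNP", "VBZ"], ["NP", "S\\NP"],
                     pos_trans, stag_trans, emit))

Exact decoding over the product state space is feasible only for small tag sets; with large supertag inventories, approximate inference (e.g., sampling) is typically used instead.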

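The single-step co-training setup mentioned in the abstract can also be sketched. The interface below (TaggerLike, co_train_once, top_k, the confidence measure) is invented for illustration and is not the thesis's actual code; it only shows the general shape of one co-training step between two taggers such as an FHMM and an MEMM.

# Hypothetical single-step co-training between two sequence taggers.
# Each model labels the unlabeled pool, and its most confident
# auto-labeled sentences are added to the OTHER model's training data.

class TaggerLike:
    """Invented minimal interface a tagger must expose for this sketch."""
    def train(self, tagged_sentences):
        raise NotImplementedError
    def tag(self, words):
        """Return (tag_sequence, confidence) for one sentence."""
        raise NotImplementedError

def co_train_once(model_a, model_b, labeled, unlabeled, top_k=100):
    # 1. Train both models on the small labeled seed set.
    model_a.train(labeled)
    model_b.train(labeled)
    # 2. Each model tags every sentence in the unlabeled pool.
    by_a = [(ws,) + tuple(model_a.tag(ws)) for ws in unlabeled]
    by_b = [(ws,) + tuple(model_b.tag(ws)) for ws in unlabeled]
    # 3. Swap each model's most confident auto-labeled sentences into
    #    the other model's training data and retrain once (single step).
    best_a = sorted(by_a, key=lambda t: t[2], reverse=True)[:top_k]
    best_b = sorted(by_b, key=lambda t: t[2], reverse=True)[:top_k]
    model_b.train(labeled + [(ws, tags) for ws, tags, _ in best_a])
    model_a.train(labeled + [(ws, tags) for ws, tags, _ in best_b])
    return model_a, model_b
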

Bibliographic Details
Main Author: Ramanujam, Srivatsan
Format: Thesis (application/pdf)
Language: English
Published: 2010
Subjects: Hidden Markov Models; Bayesian Models; Categorial Grammar; Supertagging; Joint Inference
Online Access:http://hdl.handle.net/2152/ETD-UT-2009-08-350