Co-training and Self-training for Word Sense Disambiguation

Description:

This paper investigates the application of co-training and self-training to word sense disambiguation. Optimal and empirical parameter selection methods for co-training and self-training are investigated, achieving various degrees of error reduction. A new method that combines co-training with majority voting is introduced; it smooths the bootstrapping learning curves and improves the average performance.
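As background for the bootstrapping loop the abstract refers to, here is a minimal self-training sketch in plain Python. It is illustrative only, not the paper's method: the "classifier" is a per-sense centroid over a single synthetic numeric feature, and all names, data, and parameters (`per_round`, `rounds`) are invented for the example.

```python
def train(labeled):
    # Toy "sense classifier": per-sense mean of one numeric feature.
    sums, counts = {}, {}
    for x, sense in labeled:
        sums[sense] = sums.get(sense, 0.0) + x
        counts[sense] = counts.get(sense, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

def predict(model, x):
    # Confidence = negative distance to the nearest sense centroid.
    sense = min(model, key=lambda s: abs(x - model[s]))
    return sense, -abs(x - model[sense])

def self_train(labeled, unlabeled, per_round=2, rounds=3):
    # Bootstrapping: label the unlabeled pool, promote the most
    # confident predictions into the training set, retrain, repeat.
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        model = train(labeled)
        scored = sorted(((predict(model, x), x) for x in unlabeled),
                        key=lambda t: t[0][1], reverse=True)
        for (sense, _), x in scored[:per_round]:
            labeled.append((x, sense))
            unlabeled.remove(x)
    return train(labeled)

# Two seed examples, five unlabeled occurrences (hypothetical data).
model = self_train([(0.1, "bank/river"), (0.9, "bank/finance")],
                   [0.2, 0.15, 0.8, 0.85, 0.5])
```

Co-training differs in that two classifiers trained on independent feature views label examples for each other; the majority-voting variant the abstract mentions additionally combines several such classifiers' votes before promoting a label.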

Creator(s): Mihalcea, Rada, 1974-
Creation Date: May 2004
Partner(s):
UNT College of Engineering
Collection(s):
UNT Scholarly Works
Affiliation: University of North Texas

Physical Description: 8 p.
Keyword(s): word sense disambiguation | optimal parameter settings | sense classifiers | bootstrapping
Source: Conference on Natural Language Learning (CoNLL), 2004, Boston, Massachusetts, United States
Identifier:
  • ARK: ark:/67531/metadc30955
Resource Type: Paper
Format: Text
Rights:
Access: Public