Digital Library
Details of article 4 of 6 articles found
  Learning drifting concepts: Example selection vs. example weighting
Title: Learning drifting concepts: Example selection vs. example weighting
Author: Ralf Klinkenberg
Published in: Intelligent data analysis
Pagination: Volume 8 (2004), no. 3, pages 281-300
Year: 2004-08-19
Abstract: For many learning tasks where data is collected over an extended period of time, its underlying distribution is likely to change. A typical example is information filtering, i.e. the adaptive classification of documents with respect to a particular user interest. Both the interest of the user and the document content change over time. A filtering system should be able to adapt to such concept changes. This paper proposes several methods to handle such concept drifts with support vector machines. The methods either maintain an adaptive time window on the training data [13], select representative training examples, or weight the training examples [15]. The key idea is to automatically adjust the window size, the example selection, and the example weighting, respectively, so that the estimated generalization error is minimized. The approaches are theoretically well-founded as well as effective and efficient in practice. Since they do not require complicated parameterization, they are simpler to use and more robust than comparable heuristics. Experiments with simulated concept drift scenarios based on real-world text data compare the new methods with other window management approaches. We show that they can effectively select an appropriate window size, example selection, and example weighting, respectively, in a robust way. We also explain how the proposed example selection and weighting approaches can be turned into incremental approaches. Most evaluation methods for machine learning, such as cross-validation, assume that the examples are independent and identically distributed, which is clearly unrealistic in the case of concept drift. Alternative evaluation schemes are therefore used to estimate and optimize the performance of each learning step within the concept drift handling frameworks, as well as to evaluate and compare the different frameworks.
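The window-adjustment idea described in the abstract can be illustrated with a small sketch: train on the most recent h batches for several candidate window sizes h, estimate each model's error on the newest data, and keep the h with the lowest estimated error. Everything below is hypothetical illustration, not the paper's actual method or data: the synthetic drifting stream, the batch sizes, and the nearest-centroid classifier (a trivial stand-in for the support vector machine) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(t, n=50):
    """Hypothetical drifting stream: the concept flips at batch t = 5."""
    y = rng.integers(0, 2, size=n)
    sign = 1.0 if t < 5 else -1.0           # sudden concept drift
    margin = 0.5 + rng.random(n)            # keep points away from x0 = 0
    x0 = sign * (2 * y - 1) * margin        # class-informative feature
    x1 = rng.normal(size=n)                 # irrelevant noise feature
    return np.column_stack([x0, x1]), y

batches = [make_batch(t) for t in range(10)]

def centroid_error(train_batches, test_batch):
    """Error of a nearest-centroid classifier (stand-in for the SVM)."""
    X = np.vstack([b[0] for b in train_batches])
    y = np.concatenate([b[1] for b in train_batches])
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    X_t, y_t = test_batch
    pred = (np.linalg.norm(X_t - mu1, axis=1)
            < np.linalg.norm(X_t - mu0, axis=1)).astype(int)
    return float(np.mean(pred != y_t))

def best_window(batches, max_h=8):
    """Pick the window size minimizing the error estimated on the newest batch."""
    newest = batches[-1]
    errors = {h: centroid_error(batches[-1 - h:-1], newest)
              for h in range(1, max_h + 1)}
    return min(errors, key=errors.get)      # ties resolve to the shortest window

h = best_window(batches)   # after the drift, a short window should win
```

The design choice mirrors the abstract's key idea: the window size is not a fixed parameter but is re-selected at each step so that the estimated error on recent data is minimized, which lets the learner discard training examples from before the drift.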
Publisher: IOS Press
Source file: Elektronische Wetenschappelijke Tijdschriften

Koninklijke Bibliotheek - National Library of the Netherlands