Year 2001

  1. Marco Visentin; Cesare Furlanello,
    Time Series Boosting for the Automatic Selection of Panels in Marketing and Financial Studies,
  2. Cesare Furlanello; Stefano Merler; Stefano Menegon,
    Metodi informatici WebGIS per l'analisi e la sorveglianza epidemiologica delle infezioni trasmesse da zecche [WebGIS computing methods for the epidemiological analysis and surveillance of tick-borne infections],
  3. C. Chemini; Annamaria Rizzoli; Cesare Furlanello,
    La ricerca su zecche e malattie trasmesse in Trentino [Research on ticks and tick-borne diseases in Trentino],
  4. Stefano Merler; Cesare Furlanello; Barbara Larcher; Andrea Sboner,
    Tuning Cost-sensitive Boosting and its Application to Melanoma Diagnosis,
    Second International Workshop on Multiple Classifier Systems,
    Springer Verlag,
    pp. 32-,
    (Second International Workshop on Multiple Classifier Systems, Cambridge, UK, 02/07/2001 - 04/07/2001)
  5. Cesare Furlanello; P. Lecca; S. Zamboni,
    Servizi G.I.S. e metodi statistici per la gestione della tutela e della ricerca archeologica in Trentino [G.I.S. services and statistical methods for the management of archaeological protection and research in Trentino],
    Atti del Convegno Archeologia e Territorio,
  6. Bruno Caprile; Cesare Furlanello; Stefano Merler,
    Exact Bagging with k-Nearest Neighbour Classifiers,
    A formula is exhibited for the exact computation of Bagging classifiers when the base model adopted is k-Nearest Neighbour (k-NN). The formula holds in any dimension, does not require the extraction of bootstrap replicates, and yields an implementation of Bagging that is as fast as the computation of a single k-NN classifier. It also shows that Bagging with 1-Nearest Neighbour is perfectly equivalent to plain 1-NN,
  7. Bruno Caprile; Cesare Furlanello; Stefano Merler,
    The Dynamics of AdaBoost Weights Tells You What's Hard to Classify,
    The dynamical evolution of weights in the AdaBoost algorithm contains useful information about the role that the associated data points play in the building of the AdaBoost model. In particular, the dynamics induces a bipartition of the data set into two (easy/hard) classes. Easy points have little influence on the making of the model, while the varying relevance of hard points can be gauged in terms of an entropy value associated with their evolution. Smooth approximations of entropy highlight regions where classification is most uncertain. Promising results are obtained when the proposed methods are applied in the Optimal Sampling framework,
  8. Stefano Merler; Cesare Furlanello; Barbara Larcher; Andrea Sboner,
    SSTBoost: Automatic Model Selection in Cost-sensitive Boosting,
    This paper introduces SSTBoost, a predictive classification methodology designed to target the accuracy of a modified boosting algorithm towards required sensitivity and specificity constraints. The SSTBoost method is demonstrated in practice for the automated medical diagnosis of cancer on a set of skin lesions (42 melanomas and 110 naevi) described by geometric and colorimetric features. A cost-sensitive variant of the AdaBoost algorithm is combined with a procedure for the automatic selection of optimal cost parameters. Within each boosting step, different weights are considered for errors on false negatives and false positives, and differently updated for negatives and positives. Given only a target region in the ROC space, the method also completely automates the selection of the cost parameters ratio, typically of uncertain definition. On the cancer diagnosis problem, SSTBoost outperformed, in both accuracy and stability, a battery of specialized automatic systems based on different types of multiple classifier combinations, as well as a panel of expert dermatologists. The method can thus be applied for the early diagnosis of melanoma cancer or in other problems in which an automated cost-sensitive classification is required,
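The exact-bagging result summarized in entry 6 lends itself to a short numerical check. The sketch below assumes the standard closed form for the probability that the j-th nearest neighbour of a query point is the nearest one inside a bootstrap replicate of size n; the function names and toy data are illustrative, not taken from the paper:

```python
import numpy as np

def neighbor_weights(n):
    """Probability that the j-th nearest neighbour of a query point is the
    nearest one within a size-n bootstrap replicate (ties ignored)."""
    j = np.arange(1, n + 1)
    return (1 - (j - 1) / n) ** n - (1 - j / n) ** n

def exact_bagged_1nn(X, y, queries):
    """Exact bagging of 1-NN: a weighted vote over the training points
    ordered by distance, with no bootstrap replicates drawn at all."""
    w = neighbor_weights(len(X))
    preds = []
    for q in queries:
        order = np.argsort(np.linalg.norm(X - q, axis=1))
        votes = {}
        for weight, label in zip(w, y[order]):
            votes[label] = votes.get(label, 0.0) + weight
        preds.append(max(votes, key=votes.get))
    return np.array(preds)

def plain_1nn(X, y, queries):
    # ordinary 1-NN: label of the single nearest training point
    return np.array([y[np.argmin(np.linalg.norm(X - q, axis=1))] for q in queries])

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Q = rng.normal(size=(10, 2))
```

Because the nearest neighbour's weight, 1 - (1 - 1/n)^n, always exceeds 1/2, its label outweighs all other points combined in a two-class vote, which is exactly why bagged 1-NN coincides with plain 1-NN.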
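The easy/hard bipartition described in entry 7 can be illustrated with a toy experiment. The paper gauges hard points through an entropy of their weight evolution; the sketch below records the weight trajectories and uses a cruder proxy (mean weight across rounds above the uniform level), so it should be read as an illustration of the weight dynamics, not as the authors' method:

```python
import numpy as np

def stump(x, thr, sign):
    # decision stump on 1-D inputs: sign * (+1 if x > thr else -1)
    return sign * np.where(x > thr, 1, -1)

def adaboost_weight_traces(x, y, rounds=15):
    """Minimal AdaBoost with 1-D threshold stumps that records every
    training point's weight at each round."""
    n = len(x)
    w = np.full(n, 1.0 / n)
    traces = [w.copy()]
    for _ in range(rounds):
        best = None
        for thr in np.unique(x):
            for sign in (1, -1):
                err = w[stump(x, thr, sign) != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * stump(x, thr, sign))
        w /= w.sum()
        traces.append(w.copy())
    return np.array(traces)              # shape: (rounds + 1, n)

# One mislabeled point inside the negative region stays misclassified,
# so AdaBoost keeps inflating its weight; easy points' weights decay.
x = np.arange(20.0)
y = np.where(x < 10, -1, 1)
y[5] = 1                                 # the deliberately hard point
traces = adaboost_weight_traces(x, y)
hard = traces.mean(axis=0) > 1.0 / len(x)   # crude easy/hard bipartition
```

Plotting the rows of `traces` makes the bipartition visible directly: easy points' trajectories collapse towards zero after a few rounds, while the hard point's trajectory stays large and oscillates.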
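The asymmetric weighting of false negatives and false positives at the heart of entry 8 can be sketched as follows. This is an AdaC2-style variant in which mistakes are additionally scaled by a class-dependent cost; it is meant only to illustrate the idea of cost-sensitive boosting, not the SSTBoost update itself, and the automatic selection of the cost ratio from a target ROC region is omitted:

```python
import numpy as np

def stump(x, thr, sign):
    # decision stump on 1-D inputs
    return sign * np.where(x > thr, 1, -1)

def cost_boost(x, y, c_pos=1.0, c_neg=1.0, rounds=30):
    """Cost-sensitive AdaBoost sketch: errors on positives are re-weighted
    by c_pos, errors on negatives by c_neg (AdaC2-style, illustrative)."""
    cost = np.where(y > 0, c_pos, c_neg)
    w = cost / cost.sum()                  # cost-aware initial weights
    model = []
    for _ in range(rounds):
        best = None
        for thr in np.unique(x):
            for sign in (1, -1):
                err = w[stump(x, thr, sign) != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump(x, thr, sign)
        # asymmetric update: mistakes are additionally scaled by their cost
        w = w * np.exp(-alpha * y * pred) * np.where(pred != y, cost, 1.0)
        w /= w.sum()
        model.append((alpha, thr, sign))
    return model

def predict(model, x):
    score = sum(a * stump(x, t, s) for a, t, s in model)
    return np.where(score >= 0, 1, -1)

# Overlapping 1-D classes: penalizing false negatives (c_pos) should
# leave no more missed positives than penalizing false positives (c_neg).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1, 1, 30), rng.normal(1, 1, 30)])
y = np.concatenate([-np.ones(30, int), np.ones(30, int)])
fn_protect = int((predict(cost_boost(x, y, c_pos=5.0), x)[y == 1] == -1).sum())
fn_plain = int((predict(cost_boost(x, y, c_neg=5.0), x)[y == 1] == -1).sum())
```

Sweeping `c_pos / c_neg` traces out different operating points in the ROC plane; SSTBoost's contribution, per the abstract, is to search that ratio automatically given only a target ROC region.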