Generalized Stacked Sequential Learning

dc.contributor
Universitat de Barcelona. Departament de Matemàtica Aplicada i Anàlisi
dc.contributor.author
Puertas i Prats, Eloi
dc.date.accessioned
2015-02-24T11:17:02Z
dc.date.available
2015-02-24T11:17:02Z
dc.date.issued
2014-11-04
dc.identifier.uri
http://hdl.handle.net/10803/285969
dc.description.abstract
Over the past few decades, machine learning (ML) algorithms have become a very useful tool in tasks where designing and programming explicit, rule-based algorithms is infeasible. Some examples of applications where machine learning has been applied successfully are spam filtering, optical character recognition (OCR), search engines and computer vision. One of the most common tasks in ML is supervised learning, where the goal is to learn a general model able to predict the correct label of unseen examples from a set of known labeled input data. In supervised learning it is often assumed that data is independent and identically distributed (i.i.d.). This means that each sample in the data set has the same probability distribution as the others and all are mutually independent. However, classification problems in real-world databases can break this i.i.d. assumption. For example, consider the case of object recognition in image understanding. In this case, if one pixel belongs to a certain object category, it is very likely that neighboring pixels also belong to the same object, with the exception of the borders. Another example is the case of a laughter detection application working on voice recordings. A laugh has a clear pattern alternating voice and non-voice segments. Thus, discriminant information comes from the alternating pattern, and not just from the samples on their own. Another example can be found in the case of signature section recognition in an e-mail. In this case, the signature is usually found at the end of the mail, so important discriminant information is found in the context. Another case is part-of-speech tagging, in which each example describes a word that is categorized as noun, verb, adjective, etc. In this case it is very unlikely that patterns such as [verb, verb, adjective, verb] occur. All these applications present a common feature: the sequence/context of the labels matters. Sequential learning (25) breaks the i.i.d.
assumption and assumes that samples are not independently drawn from a joint distribution of the data samples X and their labels Y. In sequential learning the training data actually consists of sequences of pairs (x, y), so that neighboring examples exhibit some kind of correlation. Sequential learning applications usually consider one-dimensional neighborhood relationships, but these types of relationships appear very frequently in other domains, such as images or video. Sequential learning should not be confused with time series prediction. The main difference between the two problems lies in the fact that sequential learning has access to the whole data set before any prediction is made, and the full set of labels is provided at the same time. Time series prediction, on the other hand, has access to the real labels only up to the current time t, and the goal is to predict the label at t + 1. Another related but different problem is sequence classification. In this case, the problem is to predict a single label for an entire input sequence. If we consider the image domain, the sequential learning goal is to classify the pixels of an image taking into account their context, while sequence classification is equivalent to classifying one full image into one class. Sequential learning has been addressed from different perspectives: from the point of view of meta-learning, by means of sliding window techniques, recurrent sliding windows, or stacked sequential learning, where the method is formulated as a combination of classifiers; or from the point of view of graphical models, using for example Hidden Markov Models (HMMs) or Conditional Random Fields (CRFs). In this thesis, we are concerned with meta-learning strategies. Cohen et al. (17) showed that stacked sequential learning (SSL from now on) performed better than CRFs and HMMs on a subset of problems called “sequential partitioning problems”. These problems are characterized by long runs of identical labels.
Moreover, SSL is computationally very efficient, since it only needs to train two classifiers a constant number of times. Considering these benefits, we decided to explore sequential learning in depth using SSL, and to generalize Cohen's architecture to deal with a wider variety of problems.
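The two-classifier scheme described above can be illustrated with a minimal sketch, assuming a simple two-stage SSL pipeline: a base classifier is trained on the raw features, its out-of-sample predictions over a sliding window of neighboring samples are appended as extra features, and a second classifier is trained on the extended representation. The window size, the toy data, and the choice of logistic regression are illustrative assumptions, not the exact setup used in the thesis.

```python
# Minimal sketch of stacked sequential learning (SSL): stage 1 predicts
# labels from raw features; stage 2 sees each sample's features plus the
# stage-1 predictions of its neighbors, capturing label context.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def window_features(pred, w=2):
    """For each position, collect the stage-1 predictions of the
    2*w + 1 samples centered on it (edges padded by replication)."""
    padded = np.pad(pred, w, mode="edge")
    return np.array([padded[i:i + 2 * w + 1] for i in range(len(pred))])

# Toy "sequential partitioning" sequence: long runs of identical labels,
# observed through a noisy 1-D feature.
rng = np.random.default_rng(0)
y = np.repeat([0, 1, 0, 1], 25)
X = (y + rng.normal(0.0, 0.8, size=y.size)).reshape(-1, 1)

# Stage 1: base classifier; cross-validated predictions are used so the
# second stage does not train on overfitted base-classifier outputs.
y_base = cross_val_predict(LogisticRegression(), X, y, cv=5)

# Stage 2: extend features with the windowed neighbor predictions.
X_ext = np.hstack([X, window_features(y_base, w=2)])
meta = LogisticRegression().fit(X_ext, y)
acc = meta.score(X_ext, y)
```

Because the neighbor predictions are constant inside each run of identical labels, the second-stage classifier can smooth away isolated base-classifier errors, which is exactly why SSL suits problems with long runs of identical labels.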
eng
dc.format.extent
118 p.
cat
dc.format.mimetype
application/pdf
dc.language.iso
eng
cat
dc.publisher
Universitat de Barcelona
dc.rights.license
Access to the contents of this thesis is subject to acceptance of the terms of use established by the following Creative Commons license: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.rights.uri
http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.source
TDX (Tesis Doctorals en Xarxa)
dc.subject
Intel·ligència artificial
cat
dc.subject
Inteligencia artificial
cat
dc.subject
Artificial intelligence
cat
dc.subject
Aprenentatge automàtic
cat
dc.subject
Aprendizaje automático
cat
dc.subject
Machine learning
cat
dc.subject
Visió per ordinador
cat
dc.subject
Visión por ordenador
cat
dc.subject
Computer vision
cat
dc.subject
Reconeixement de formes (Informàtica)
cat
dc.subject
Reconocimiento de formas (Informática)
cat
dc.subject
Pattern recognition systems
cat
dc.subject.other
Ciències Experimentals i Matemàtiques
cat
dc.title
Generalized Stacked Sequential Learning
cat
dc.type
info:eu-repo/semantics/doctoralThesis
dc.type
info:eu-repo/semantics/publishedVersion
dc.subject.udc
51
cat
dc.contributor.director
Pujol Vila, Oriol
dc.contributor.director
Escalera Guerrero, Sergio
dc.embargo.terms
cap
cat
dc.rights.accessLevel
info:eu-repo/semantics/openAccess
dc.identifier.dl
B 6854-2015
cat


Documents

EPiP_PhD_THESIS.pdf

4.658Mb PDF
