
2 MULTIVARIATE CLASSIFIERS
A classifier is any procedure that assigns objects to classes. In the present
context, a classifier would separate signal events from the background. The
time-honored conventional classification methods of examining univariate
(1-dimensional) and bivariate (2-dimensional) distributions of variables to
optimize cuts for separating signal and background events do not, in general,
provide the maximum possible discrimination when correlations exist between
variables. Multivariate classifiers, which fully exploit the correlations that
exist among several variables, provide a discriminating boundary between signal
and background in multi-dimensional space that can yield discrimination close
to the theoretical maximum (the Bayes limit (5)).
In the multivariate approach, one encodes each event as a point in a multi-
dimensional space, called feature space, corresponding to a vector $z$ of feature
variables such as electron $E_T$ ($E_T^e$), neutrino $E_T$ ($\not\!\!E_T$), $H_T$ ($\sum E_T(\mathrm{jets})$), etc.
This feature space is then mapped onto a one- or few-dimensional output
space in such a way that the signal and background vectors are mapped onto
different regions of the output space. The aim of the multivariate methods
is to reduce the dimensionality of the problem without losing information in
the process. The optimal way to partition the feature space into signal and
background regions is to choose the mapping to be the Bayes discriminant
function. Each cut on the value of the function corresponds to a discriminating
boundary in feature space. The Bayes discriminant function is simply the
ratio of the probability $P(s|z)$ that a given event is a signal event and the
probability $P(b|z)$ that it is a background event. It is written as
$$\frac{P(s|z)}{P(b|z)} = \frac{P(z|s)\,P(s)}{P(z|b)\,P(b)} \qquad (1)$$
The quantities $P(z|s)$ and $P(z|b)$ are the likelihood functions for the signal and
background, respectively (hereafter denoted $f(z)$, with or without an appropriate
subscript). The ratio of the prior probabilities, $P(s)/P(b)$, is the ratio of the
signal and background cross-sections. Some multivariate classifiers approximate
the likelihood functions, while the neural network classifier arrives at the
Bayesian probability for the signal, $P(s|z)$, without calculating the likelihood
functions for each class separately. The three classifiers used by D0 are
described in the following sections.
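
As an illustration, here is a minimal sketch of such a discriminant, with toy
univariate Gaussian densities standing in for the true signal and background
likelihoods; the function names, parameter values, and priors below are
hypothetical, not from the paper.

```python
# Sketch of the Bayes discriminant of Eq. (1): the ratio of posterior
# probabilities P(s|z)/P(b|z) = f_s(z) P(s) / (f_b(z) P(b)).
# Toy univariate Gaussian likelihoods stand in for the true densities;
# all parameter values are illustrative, not from the paper.
import math

def gaussian(z, mu, sigma):
    """Univariate Gaussian density, a stand-in likelihood f(z)."""
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bayes_discriminant(z, f_s, f_b, p_s, p_b):
    """Eq. (1): likelihood ratio times the ratio of prior probabilities."""
    return (f_s(z) * p_s) / (f_b(z) * p_b)

# Hypothetical densities: signal peaks at larger values of the feature
# (e.g. an H_T-like variable) than background.
f_s = lambda z: gaussian(z, 200.0, 50.0)
f_b = lambda z: gaussian(z, 100.0, 40.0)

# Priors in the ratio of the assumed signal/background cross-sections.
for z in (80.0, 150.0, 250.0):
    print(f"z = {z:6.1f}  D(z) = {bayes_discriminant(z, f_s, f_b, 0.1, 0.9):.3g}")
```

A cut on the value of this output then corresponds to a discriminating boundary
in the original feature space, as described above.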
H-matrix Method
This is the familiar covariance-matrix method, which is also known as the
Gaussian classifier. It was introduced in the 1930s (6,7) as a tool for
discriminating one class of feature vector $z$ from another. The vector $z$ is
assumed to be distributed according to a multivariate Gaussian with covariance
matrix $M$ and mean $\mu$. The likelihood function is therefore,
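
$$f(z) = \frac{1}{(2\pi)^{d/2}\,|M|^{1/2}}\,\exp\!\left[-\frac{1}{2}\,(z-\mu)^{T} M^{-1}\,(z-\mu)\right],$$

where $d$ is the dimension of the feature space.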
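
A minimal sketch of this classifier, assuming the standard construction
(per-class means and covariances estimated from training events, with the
inverse covariance playing the role of the H matrix); the class name, variable
names, and toy data below are hypothetical, not from the paper:

```python
# Minimal sketch of an H-matrix (Gaussian) classifier, assuming the
# standard construction: per-class mean and covariance estimated from
# training events, events compared via the quadratic form
# chi^2 = (z - mu)^T M^{-1} (z - mu).  Names and toy data are illustrative.
import numpy as np

class HMatrix:
    def fit(self, X):
        """Estimate the mean vector and inverse covariance from rows of X."""
        self.mu = X.mean(axis=0)
        self.h = np.linalg.inv(np.cov(X, rowvar=False))  # the "H matrix"
        return self

    def chi2(self, z):
        """Quadratic form (z - mu)^T M^{-1} (z - mu) for a feature vector z."""
        d = z - self.mu
        return float(d @ self.h @ d)

# Toy training samples in a 2-D feature space (e.g. H_T and electron E_T).
rng = np.random.default_rng(0)
sig = HMatrix().fit(rng.normal([200.0, 80.0], [40.0, 20.0], size=(500, 2)))
bkg = HMatrix().fit(rng.normal([100.0, 40.0], [30.0, 15.0], size=(500, 2)))

# Classify an event by whichever Gaussian hypothesis fits it better.
z = np.array([180.0, 70.0])
print("signal-like" if sig.chi2(z) < bkg.chi2(z) else "background-like")
```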
