Collaborative Filtering for Brain-Computer Interaction Using Transfer Learning and Active Class Selection
objective function, and the other is to help define the hypothesis.
In particular, in kNN one role of the auxiliary data is to help define
the objective function, and the other is to serve as potential
neighbors. In [30] we investigated both roles and found that using
the auxiliary training samples in the validation part of the internal
cross-validation algorithm generally achieved better performance,
so only this approach is considered in this paper. In each iteration,
the TL algorithm computes a_p by leave-one-out cross-validation
on the N_p primary training samples, and a_a by using the N_p
primary training samples to classify a selected set of "good"
auxiliary training samples. Its pseudo-code is given in Figure 2;
the algorithm is denoted TL in this paper.
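As a concrete illustration, a_p and a_a for one candidate k could be computed as in the following minimal Python sketch, which assumes scikit-learn, feature matrices Xp and Xa, and label vectors yp and ya; the function name tl_accuracies is illustrative only and not from the paper.

from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def tl_accuracies(Xp, yp, Xa, ya, k):
    """Compute a_p (leave-one-out CV accuracy on the primary samples) and
    a_a (accuracy of a primary-trained kNN on the selected "good"
    auxiliary samples) for one value of k."""
    knn = KNeighborsClassifier(n_neighbors=k)
    # a_p: leave-one-out cross-validation over the N_p primary samples
    a_p = cross_val_score(knn, Xp, yp, cv=LeaveOneOut()).mean()
    # a_a: train on all primary samples, validate on the auxiliary samples
    a_a = knn.fit(Xp, yp).score(Xa, ya)
    return a_p, a_a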
How the "good" auxiliary training samples are selected is very
important to the success of the TL algorithm. The general
guideline is to select auxiliary training samples that are similar to
the primary training samples. Specifically, we use the mean
squared difference between class-mean feature vectors, computed as follows:
1. Compute the mean feature vector of each class for the new
subject from the N_p primary training samples. These are
denoted m'_i, where i = 1, 2, ..., c is the class index.
2. Compute the mean feature vector of each class for each
subject in the auxiliary dataset. These are denoted m_i^j, where
i = 1, 2, ..., c is the class index and j is the subject index.
3. Select the subject with the smallest difference from the new
subject, i.e., argmin_j Σ_{i=1}^{c} ||m'_i − m_i^j||^2, and use his/her data
as the auxiliary training data.
We note that the way to select "good" auxiliary training samples
may be application dependent, and there are multiple potential
avenues for research in this area. One of them is pointed out in the
Future Research section.
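A minimal sketch of the selection rule above, assuming each auxiliary subject's data is available as a (features, labels) pair of NumPy arrays; the function names are illustrative only:

import numpy as np

def class_means(X, y, classes):
    # Stack the mean feature vector of each class into a (c, d) array.
    return np.stack([X[y == ci].mean(axis=0) for ci in classes])

def select_closest_subject(Xp, yp, aux_subjects, classes):
    """Pick the auxiliary subject whose per-class mean feature vectors are
    closest, in summed squared distance, to the new subject's:
    argmin_j sum_i ||m'_i - m_i^j||^2."""
    m_new = class_means(Xp, yp, classes)              # m'_i from primary data
    dists = [np.sum((m_new - class_means(Xj, yj, classes)) ** 2)
             for Xj, yj in aux_subjects]              # one set of m_i^j per subject
    return int(np.argmin(dists))                      # index of the selected subject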
Active Class Selection (ACS)
This section introduces the theory and an algorithm for ACS.
For simplicity the kNN classifier is used; however, the algorithm
can be extended to other classifiers such as the SVM.
ACS theory. Active learning (AL) [26-28] has been attracting
a great deal of research interest recently. It addresses the following
problem: suppose that we have a considerable amount of unlabeled
offline training samples and that their labels are very difficult,
time-consuming, or expensive to obtain; which training samples
should be selected for labeling so that the maximum learning
(classification or prediction) performance can be obtained from the
minimum labeling effort? For example, in speech emotion
estimation [29,44,45], the utterances and their features can be
easily obtained; however, it is difficult to evaluate the emotions
they express. In this case, AL can be used to select the most
informative utterances to label so that a good classifier or predictor
can be trained based on them. Many different approaches have
been proposed for AL [28] so far, e.g., uncertainty sampling [46],
query-by-committee [47,48], expected model change [49],
expected error reduction [50], variance reduction [51], and
density-weighted methods [52].
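To make the first of these concrete, a bare-bones sketch of uncertainty sampling (illustrative code, independent of the cited implementations) queries the unlabeled sample about which a fitted probabilistic classifier is least confident:

import numpy as np

def uncertainty_query(clf, X_unlabeled):
    """Return the index of the unlabeled sample that a fitted probabilistic
    classifier is least confident about, i.e. the one whose largest
    posterior class probability is smallest."""
    proba = clf.predict_proba(X_unlabeled)  # shape (n_samples, n_classes)
    return int(np.argmin(proba.max(axis=1)))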
Input: l_0 initial primary training samples;
       N_A auxiliary training samples;
       l_i, the number of new primary training samples to generate in Iteration i;
       I, the maximum number of iterations;
       α, the minimum satisfactory classification accuracy;
       k̄, the maximum k in the kNN classifier;
       c, the number of classes.
Output: k_o, the optimal k in the kNN classifier.

for i in [1, I]
    η_p = 0, η_a = 0, N_p = Σ_{j=0}^{i-1} l_j;
    for k in [1, k̄]
        Compute a_p, the leave-one-out cross-validation accuracy using the N_p primary samples;
        Select N_a "good" auxiliary samples from the N_A auxiliary training samples;
        Compute a_a, the classification accuracy using the N_p primary samples in training and the N_a auxiliary samples in validation;
        if a_p > η_p
            η_p = a_p, η_a = a_a, k_o = k;
        else if a_p == η_p and a_a > η_a
            η_a = a_a, k_o = k;
        end
    end
    if η_p < α
        Generate l_i new primary training samples uniformly from the c classes;
    else
        Return k_o;
    end
end
Return k_o
Figure 2. The TL algorithm, in which primary and auxiliary training samples are used together in determining the optimal k in the
kNN classifier.
doi:10.1371/journal.pone.0056624.g002
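For readers who want to experiment, the inner loop of Figure 2 could be rendered in Python roughly as follows, reusing the same accuracy computations as the earlier tl_accuracies sketch; the function name select_k, the data layout, and the omission of the outer ACS iteration are simplifications, with k_max playing the role of k̄:

from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def select_k(Xp, yp, Xa, ya, k_max):
    """Inner loop of Figure 2: pick the k that maximizes a_p (the
    leave-one-out accuracy on the primary samples), breaking ties
    by a_a (the accuracy on the selected auxiliary samples).
    Assumes k_max is smaller than the number of primary samples."""
    eta_p, eta_a, k_o = 0.0, 0.0, 1
    for k in range(1, k_max + 1):
        knn = KNeighborsClassifier(n_neighbors=k)
        # a_p: leave-one-out cross-validation on the primary samples
        a_p = cross_val_score(knn, Xp, yp, cv=LeaveOneOut()).mean()
        # a_a: train on primary samples, validate on auxiliary samples
        a_a = knn.fit(Xp, yp).score(Xa, ya)
        if a_p > eta_p or (a_p == eta_p and a_a > eta_a):
            eta_p, eta_a, k_o = a_p, a_a, k
    return k_o, eta_p

The outer loop of Figure 2 would call select_k once per iteration and, whenever the returned eta_p falls below α, generate l_i new primary training samples uniformly across the c classes before trying again.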