In the context of machine learning, a kernel method is effectively the analysis of a data set through a choice of a distance/metric-function on it. The corresponding integral kernels turn out to contain useful information, if chosen correctly.
As a special case, if the data set is a “choice of rankings”, hence the set of permutations of $n$ elements, distance functions come from the graph distance of Cayley graphs of the symmetric group, such as the Kendall distance (or the Cayley distance), with associated kernels such as the Mallows kernel. With these choices, the kernel method overlaps with geometric group theory.
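For illustration, here is a minimal Python sketch (not part of the original entry) of the Kendall distance between two rankings, counted as the number of discordant pairs, together with the associated Mallows kernel $\exp(-\lambda \, d(\sigma, \tau))$; the decay parameter `lam` is a hypothetical choice.

```python
import math
from itertools import combinations

def kendall_distance(sigma, tau):
    """Number of pairs (i, j) ranked in opposite order by sigma and tau,
    i.e. the graph distance on the Cayley graph of the symmetric group
    generated by adjacent transpositions."""
    n = len(sigma)
    return sum(
        1
        for i, j in combinations(range(n), 2)
        if (sigma[i] - sigma[j]) * (tau[i] - tau[j]) < 0
    )

def mallows_kernel(sigma, tau, lam=0.5):
    """Mallows kernel exp(-lam * Kendall distance); lam is a free bandwidth parameter."""
    return math.exp(-lam * kendall_distance(sigma, tau))

# two rankings of four items
print(kendall_distance((0, 1, 2, 3), (1, 0, 3, 2)))  # 2 discordant pairs
print(mallows_kernel((0, 1, 2, 3), (1, 0, 3, 2)))    # exp(-1.0) ≈ 0.3679
```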
A kernel on a set $X$ is a function $k \colon X \times X \to \mathbb{R}$ (although complex-valued kernels are also used) which is symmetric and positive definite, in the sense that for any $n \geq 1$ and any $x_1, \ldots, x_n \in X$, the matrix $\big(k(x_i, x_j)\big)_{i j}$ is positive definite, i.e., $\sum_{i, j} c_i c_j \, k(x_i, x_j) \geq 0$ for all $c_1, \ldots, c_n \in \mathbb{R}$.
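As a quick numerical illustration of this condition (an added sketch, not from the original entry), one can assemble the Gram matrix $\big(k(x_i, x_j)\big)_{i j}$ for a handful of points and check that its eigenvalues are nonnegative; the linear kernel used here is just the simplest choice.

```python
import numpy as np

def gram_matrix(kernel, points):
    """Gram matrix K_ij = kernel(x_i, x_j) for a finite list of points."""
    return np.array([[kernel(x, y) for y in points] for x in points])

# the ordinary dot product is the simplest symmetric, positive definite kernel
linear_kernel = lambda x, y: float(np.dot(x, y))

X = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 2.0])]
K = gram_matrix(linear_kernel, X)

# symmetry and positive (semi-)definiteness: all eigenvalues >= 0
assert np.allclose(K, K.T)
print(np.linalg.eigvalsh(K))  # all entries nonnegative (up to rounding)
```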
This may be reformulated as a mapping $\phi \colon X \to \mathcal{H}$, where $\mathcal{H}$ is a reproducing kernel Hilbert space, a function space in which pointwise evaluation is a continuous linear functional.
The ‘kernel’ and ‘feature map to Hilbert space’ stories relate to each other as follows:

$$ k(x, y) \;=\; \langle \phi(x), \phi(y) \rangle_{\mathcal{H}} \,. $$

The ‘reproducing’ aspect of $\mathcal{H}$ means that

$$ f(x) \;=\; \langle f, k(x, -) \rangle_{\mathcal{H}} \qquad \text{for all } f \in \mathcal{H}, $$

and this evaluation is continuous. So we have

$$ k(x, y) \;=\; \langle k(x, -), k(y, -) \rangle_{\mathcal{H}} \,, $$

so that one may take $\phi(x) = k(x, -)$. $\mathcal{H}$ is the (closure of the) span of the set of functions $\{ k(x, -) \mid x \in X \}$, and, under certain conditions (the representer theorem), when we find the $f \in \mathcal{H}$ for which a functional (typically a regularised empirical risk) is minimised, it takes the form

$$ f \;=\; \sum_{i} \alpha_i \, k(x_i, -) \,. $$
Many linear algebra techniques in $\mathcal{H}$ just involve the inner product, and so can be conducted as a form of nonlinear algebra back in $X$, using the ‘kernel trick’.
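For instance (an illustrative sketch, not from the original entry), the squared distance between feature-space images can be computed from kernel evaluations alone, via $\|\phi(x) - \phi(y)\|^2 = k(x,x) - 2 k(x,y) + k(y,y)$, which is the basic pattern behind the kernel trick:

```python
import numpy as np

def feature_space_sq_distance(kernel, x, y):
    """||phi(x) - phi(y)||^2 computed without ever constructing phi,
    using only kernel evaluations (the kernel trick)."""
    return kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)

# with the linear kernel this reproduces the ordinary squared Euclidean distance
linear_kernel = lambda x, y: float(np.dot(x, y))
x, y = np.array([1.0, 2.0]), np.array([3.0, 1.0])
print(feature_space_sq_distance(linear_kernel, x, y))   # 5.0
print(float(np.sum((x - y) ** 2)))                      # 5.0
```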
Kernels characterise the similarity of the input set. But how do they get chosen? Often a Gaussian radial-basis function (RBF) kernel is selected:

$$ k(x, y) \;=\; \exp\!\left( - \frac{\| x - y \|^2}{2 \sigma^2} \right) \,. $$
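A minimal sketch of this kernel (added here for illustration; the bandwidth `sigma` is a hypothetical choice, in practice tuned, e.g. by cross-validation):

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel exp(-||x - y||^2 / (2 sigma^2))."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-np.sum(diff ** 2) / (2.0 * sigma ** 2)))

print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))  # 1.0: identical points are maximally similar
print(rbf_kernel([0.0, 0.0], [3.0, 4.0]))  # exp(-12.5): distant points are nearly orthogonal
```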
Note that there’s a Bayesian version of kernel methods which uses Gaussian processes.
Consider binary classification, where we look to form a classifier which accurately finds the labels in $\{-1, +1\}$ of samples drawn from a distribution over $X \times \{-1, +1\}$ on the basis of their $X$-values, given a set of labelled training data. The support vector machine approach to this task looks to find the hyperplane in the associated Hilbert space, $\mathcal{H}$, which best separates the images $\phi(x_i)$ of the data points, so that those with the same label fall in the same half-space. The control for overfitting comes from choosing, among the hyperplanes that classify the training data accurately, the one with the largest perpendicular distance (the margin) to the nearest of the training points (the support vectors).
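As a concrete illustration (a sketch assuming the scikit-learn library is available; it is not part of the original entry), a support vector machine with Gaussian RBF kernel can be fitted to a toy data set that is not linearly separable in the original coordinates; the values of `C` and `gamma` are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

# toy training data: points in the plane labelled by which "ring" they lie on,
# a pattern that is not linearly separable in the original space
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, size=200)
radii = np.where(np.arange(200) % 2 == 0, 1.0, 3.0)
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
y = np.where(radii < 2.0, -1, 1)

# maximum-margin separating hyperplane in the RKHS of the RBF kernel
clf = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X, y)
print(clf.score(X, y))    # training accuracy
print(len(clf.support_))  # number of support vectors
```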
Survey and introduction:
Thomas Hofmann, Bernhard Schölkopf, Alexander J. Smola, Kernel methods in machine learning, Annals of Statistics 2008, Vol. 36, No. 3, 1171-1220 (arXiv:math/0701907)
Julien Mairal, Jean-Philippe Vert, Machine Learning with Kernel Methods, lecture notes 2017 (webpage, pdf, pdf)
See also:
On positive definiteness via Fourier transform/Bochner's theorem:
Kenji Fukumizu, Bharath Sriperumbudur, Arthur Gretton, Bernhard Schölkopf, Characteristic Kernels on Groups and Semigroups, Advances in Neural Information Processing Systems 21 : 22nd Annual Conference on Neural Information Processing Systems 2008 (NIPS 2008), 473-480 (mpg:5466, pdf)
Kenji Fukumizu, Theory of Positive Definite Kernel and Reproducing Kernel Hilbert Space 2008 (pdf, pdf)
Proof that the Mallows kernel and Kendall kernel are positive definite:
Discussion of weighted variants:
On Hamming distance kernels via topological quantum computation:
See also
In this paper I propose a generative model of supervised learning that unifies two approaches to supervised learning, using a concept of a correct loss function. Addressing two measurability problems, which have been ignored in statistical learning theory, I propose to use convergence in outer probability to characterize the consistency of a learning algorithm. Building upon these results, I extend a result due to Cucker-Smale, which addresses the learnability of a regression model, to the setting of a conditional probability estimation problem. Additionally, I present a variant of Vapnik-Stefanyuk’s regularization method for solving stochastic ill-posed problems, and use it to prove the generalizability of overparameterized supervised learning models.
On kernel methods applicable to persistence diagrams/barcodes for making topological data analysis amenable to “topological” machine learning:
Jan Reininghaus, Stefan Huber, Ulrich Bauer, Roland Kwitt, A Stable Multi-Scale Kernel for Topological Machine Learning, NIPS’15: Proceedings of the 28th International Conference on Neural Information Processing Systems 2 (2015) 3070–3078 (arXiv:1412.6821)
Roland Kwitt, Stefan Huber, Marc Niethammer, Weili Lin, Ulrich Bauer, Statistical Topological Data Analysis – A Kernel Perspective, in: Advances in Neural Information Processing Systems (NIPS 2015), ISBN:9781510825024, doi:10.5555/2969442.2969582
Bastian Rieck, Filip Sadlo, Heike Leitte, Topological Machine Learning with Persistence Indicator Functions, in: Topological Methods in Data Analysis and Visualization V, TopoInVis (2017) 87-101, Mathematics and Visualization, Springer (arXiv:1907.13496, doi:10.1007/978-3-030-43036-8_6)
Raphael Reinauer, Matteo Caorsi, Nicolas Berkouk, Persformer: A Transformer Architecture for Topological Machine Learning (arXiv:2112.15210)
On kernel methods in topological data analysis via quantum computation: