B. Banaschewski, R. Harting, Lattice aspects of radical ideals and choice principles, Proc. London Math. Soc. s3-50 3 (1985) 385–404 doi
Kimmo I. Rosenthal, A general approach to Gabriel filters on quantales, Comm. Algebra 20 11 (1992) 3393–3409 doi
James Freitag, Rémi Jaoui, Rahim Moosa, When any three solutions are independent, Invent. math. 230 (2022) 1249–1265 arXiv:2110.08123 doi
Alberto Canonaco, Mattia Ornaghi, Paolo Stellari, Localizations of the category of A∞ categories and internal Homs over a ring, arXiv:2404.06610
Leland McInnes, John Healy, James Melville, UMAP: Uniform manifold approximation and projection for dimension reduction, arXiv:1802.03426
Hông Vân Lê, Supervised learning with probabilistic morphisms and kernel mean embeddings, arXiv:2305.06348
In this paper I propose a generative model of supervised learning that unifies two approaches to supervised learning, using a concept of a correct loss function. Addressing two measurability problems, which have been ignored in statistical learning theory, I propose to use convergence in outer probability to characterize the consistency of a learning algorithm. Building upon these results, I extend a result due to Cucker–Smale, which addresses the learnability of a regression model, to the setting of a conditional probability estimation problem. Additionally, I present a variant of Vapnik–Stefanyuk's regularization method for solving stochastic ill-posed problems, and use it to prove the generalizability of overparameterized supervised learning models.
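For context, "convergence in outer probability" is the standard device from empirical process theory for handling possibly non-measurable quantities, such as suprema of losses over uncountable hypothesis classes. A minimal formulation, following the usual textbook definition (e.g. van der Vaart–Wellner) rather than necessarily the one used in the paper: for a probability space $(\Omega, \mathcal{F}, P)$, the outer probability of an arbitrary subset $A \subseteq \Omega$ is

$$ P^*(A) \,=\, \inf \{\, P(B) \,:\, A \subseteq B,\; B \in \mathcal{F} \,\}, $$

and a sequence of (possibly non-measurable) maps $X_n$ into a metric space $(S, d)$ converges to $X$ in outer probability if, for every $\epsilon > 0$,

$$ P^*\big( d(X_n, X) > \epsilon \big) \,\to\, 0 \quad \text{as}\; n \to \infty. $$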