Hông Vân Lê

Hông Vân Lê is a Vietnamese-Czech mathematician working in differential geometry (especially contact geometry, symplectic geometry, and special holonomy manifolds), information geometry, and the mathematics of machine learning.

  • webpage at the Institute of Mathematics of the Czech Academy of Sciences in Prague

Selected writings

  • Hông Vân Lê, Geometric structures associated with a simple Cartan 3-form, Journal of Geometry and Physics 70 (2013) 205–223, arXiv:1103.1201
  • Nihat Ay, Jürgen Jost, Hông Vân Lê, Lorenz Schwachhöfer, Information geometry, Ergebnisse der Mathematik und ihrer Grenzgebiete 3. Folge 64, Springer (2017)

The following treatments of information geometry (and of the Fisher metric in particular) use diffeological spaces, motivated by singular statistical models, including those arising in machine learning:

  • Hông Vân Lê, Natural differentiable structures on statistical models and the Fisher metric, Information Geometry (2022), arXiv:2208.06539, doi

  • Hông Vân Lê, Diffeological statistical models and diffeological Hausdorff measures, video yt, slides pdf

  • Hông Vân Lê, Alexey A. Tuzhilin, Nonparametric estimations and the diffeological Fisher metric, in: F. Barbaresco, F. Nielsen (eds.), Geometric Structures of Statistical Physics, Information Geometry, and Learning (SPIGL 2020), Springer Proceedings in Mathematics & Statistics 361, pp. 120–138, doi

In this paper, first, we survey the concept of diffeological Fisher metric and its naturality, using functorial language of probability morphisms, and slightly extending Lê’s theory in (Le2020) to include weakly $C^k$-diffeological statistical models. Then we introduce the resulting notions of the diffeological Fisher distance, the diffeological Hausdorff–Jeffrey measure and explain their role in classical and Bayesian nonparametric estimation problems in statistics.
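
For background only (a standard formula, not taken from the cited papers): on a smooth finite-dimensional statistical model, i.e. a family of probability densities $p(x;\theta)$ parametrized by $\theta = (\theta^1, \dots, \theta^n)$, the classical Fisher metric which these diffeological notions generalize is

$$ g_{ij}(\theta) \;=\; \int \frac{\partial \log p(x;\theta)}{\partial \theta^i}\, \frac{\partial \log p(x;\theta)}{\partial \theta^j}\; p(x;\theta)\, dx \,. $$

Roughly speaking, the diffeological approach replaces the smooth parameter manifold by a diffeological space, so that singular models, where this expression degenerates or the parametrization fails to be an immersion, can still be treated.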

  • Hông Vân Lê, Diffeological statistical models, the Fisher metric and probabilistic mappings, Mathematics 8 2 (2020) 167, arXiv:1912.02090

  • Hông Vân Lê, Supervised learning with probabilistic morphisms and kernel mean embeddings, arXiv:2305.06348

In this paper I propose a generative model of supervised learning that unifies two approaches to supervised learning, using the concept of a correct loss function. Addressing two measurability problems, which have been ignored in statistical learning theory, I propose to use convergence in outer probability to characterize the consistency of a learning algorithm. Building upon these results, I extend a result due to Cucker–Smale, which addresses the learnability of a regression model, to the setting of a conditional probability estimation problem. Additionally, I present a variant of Vapnik–Stefanyuk’s regularization method for solving stochastic ill-posed problems, and use it to prove the generalizability of overparameterized supervised learning models.
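
For orientation on the Cucker–Smale regression setting mentioned in the abstract (a standard formula, stated here only as background): given a sample $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^m$ and a reproducing kernel Hilbert space $\mathcal{H}_K$, the regularized least-squares estimator is

$$ f_{\mathbf{z},\lambda} \;=\; \underset{f \in \mathcal{H}_K}{\operatorname{argmin}} \; \frac{1}{m} \sum_{i=1}^m \big( f(x_i) - y_i \big)^2 \;+\; \lambda\, \| f \|_K^2 \,, \qquad \lambda > 0 \,, $$

a Tikhonov-type regularization of the empirical risk; Vapnik–Stefanyuk-style regularization of stochastic ill-posed problems is in the same spirit.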

category: people

Last revised on July 17, 2024 at 14:06:33. See the history of this page for a list of all contributions to it.