A neural network is a class of functions used in both supervised and unsupervised learning to approximate a correspondence between samples in a dataset and their associated labels.
Where $X \subset \mathbb{R}^{n_0}$ is compact, $\{T_i \colon \mathbb{R}^{n_{i-1}} \to \mathbb{R}^{n_i}\}_{1 \leq i \leq \ell}$ a finite set of affine maps such that $T_i(x) = W_i x + b_i$, where $W_i$ is the layer $i$ weight matrix and $b_i$ the layer $i$ bias, and $\sigma \colon \mathbb{R} \to \mathbb{R}$ a non-linear activation function, a neural network is a function $N \colon X \to \mathbb{R}^{n_\ell}$, such that on input $x$, $N$ computes the composition:

$$N(x) = (T_\ell \circ \sigma \circ T_{\ell - 1} \circ \sigma \circ \cdots \circ \sigma \circ T_1)(x)$$

where $\sigma$ is applied component-wise.
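For illustration, this composition may be sketched in a few lines of NumPy. This is only a minimal sketch: the choice of ReLU for $\sigma$, the layer shapes, and the random parameters are arbitrary assumptions, not part of the definition.

```python
import numpy as np

def affine(W, b):
    """The affine map T_i(x) = W_i x + b_i."""
    return lambda x: W @ x + b

def sigma(x):
    """A non-linear activation, applied component-wise (ReLU, for illustration)."""
    return np.maximum(x, 0.0)

def network(layers):
    """Compose N(x) = (T_l . sigma . T_{l-1} . ... . sigma . T_1)(x)."""
    def N(x):
        for T in layers[:-1]:
            x = sigma(T(x))
        return layers[-1](x)  # no activation after the output layer
    return N

rng = np.random.default_rng(0)
T1 = affine(rng.normal(size=(4, 3)), rng.normal(size=4))  # R^3 -> R^4
T2 = affine(rng.normal(size=(2, 4)), rng.normal(size=2))  # R^4 -> R^2
N = network([T1, T2])
print(N(np.ones(3)))  # a point of R^2
```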
Typically, $T_1$ is called the input layer, $T_\ell$ the output layer, and layers $T_2$ to $T_{\ell - 1}$ are hidden layers. In particular, a real-valued 1-hidden-layer neural network with activation $\sigma$ computes:

$$N(x) = \sum_{i = 1}^{k} a_i \, \sigma(\langle w_i, x \rangle + b_i) + c$$

where $a_i$ is the output weight, $c$ the output bias, $w_i$ the $i$-th row of the hidden weight matrix, and $b_i$ the hidden bias. Here, the hidden layer is $k$-dimensional.
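The 1-hidden-layer formula can likewise be sketched directly, under the same illustrative assumptions (ReLU for $\sigma$, random parameters):

```python
import numpy as np

def one_hidden_layer(x, W, b, a, c):
    """Compute N(x) = sum_i a_i * sigma(<w_i, x> + b_i) + c,
    where w_i is the i-th row of the hidden weight matrix W."""
    hidden = np.maximum(W @ x + b, 0.0)  # the k-dimensional hidden layer
    return a @ hidden + c

rng = np.random.default_rng(0)
k, n = 8, 3                    # hidden width k, input dimension n
W = rng.normal(size=(k, n))    # hidden weight matrix
b = rng.normal(size=k)         # hidden biases b_i
a = rng.normal(size=k)         # output weights a_i
c = rng.normal()               # output bias
print(one_hidden_layer(np.ones(n), W, b, a, c))  # a real number
```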
On the learning algorithm as gradient descent of the loss functional:
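(Schematically, writing $\theta = (W_i, b_i)_{1 \leq i \leq \ell}$ for the parameters and $\mathcal{L}$ for a loss functional, for instance the empirical squared error $\mathcal{L}(\theta) = \tfrac{1}{m} \sum_{j = 1}^{m} \| N_\theta(x_j) - y_j \|^2$ over a sample $\{(x_j, y_j)\}_{j = 1}^{m}$, gradient descent iterates

$$\theta_{t+1} = \theta_t - \eta \, \nabla_\theta \mathcal{L}(\theta_t)$$

for a step size $\eta > 0$.)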
On the learning algorithm as analogous to the AdS/CFT correspondence:
Yi-Zhuang You, Zhao Yang, Xiao-Liang Qi, Machine Learning Spatial Geometry from Entanglement Features, Phys. Rev. B 97, 045153 (2018) (arXiv:1709.01223)
Wen-Cong Gan, Fu-Wen Shu, Holography as deep learning, Int. J. Mod. Phys. D 26, no. 12, 1743020 (2017) (arXiv:1705.05750)
Jae-Weon Lee, Quantum fields as deep learning (arXiv:1708.07408)
Koji Hashimoto, Sotaro Sugishita, Akinori Tanaka, Akio Tomiya, Deep Learning and AdS/CFT, Phys. Rev. D 98, 046019 (2018) (arXiv:1802.08313)
A category-theoretic treatment of backpropagation:
Application of tensor networks and specifically tree tensor networks:
Ding Liu, Shi-Ju Ran, Peter Wittek, Cheng Peng, Raul Blázquez García, Gang Su, Maciej Lewenstein, Machine Learning by Unitary Tensor Network of Hierarchical Tree Structure, New Journal of Physics, 21, 073059 (2019) (arXiv:1710.04833)
Song Cheng, Lei Wang, Tao Xiang, Pan Zhang, Tree Tensor Networks for Generative Modeling, Phys. Rev. B 99, 155131 (2019) (arXiv:1901.02217)