In deep learning and game theory, we usually think of neural networks/economic agents as processes taking in an input and producing an output. However, we additionally want to model the fact that these processes have extra, "hidden" inputs not available to the outside world. In neural networks we call these weights (or parameters), and in game theory we call these strategies.
In other words, we want to form a category where a morphism $A \to B$ contains the data of a) a parameter space $P$ and b) a morphism $f \colon P \otimes A \to B$. From this description we see that this construction necessitates a choice of some underlying monoidal category $(\mathcal{C}, \otimes, I)$.
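To make the idea concrete, here is a minimal Haskell sketch (illustrative only; the type `Para` and the example `affine` below are not taken from the references), working in the cartesian monoidal category of Haskell types and functions, where the tensor is the pair type:

```haskell
{-# LANGUAGE GADTs #-}

-- A parameterised map from a to b: an (existentially hidden)
-- parameter type p together with a map p ⊗ a → b, with the
-- tensor taken to be the Haskell pair type (p, a).
data Para a b where
  Para :: ((p, a) -> b) -> Para a b

-- Example: a one-dimensional "affine layer", parameterised by
-- a weight and a bias.
affine :: Para Double Double
affine = Para (\((w, b), x) -> w * x + b)
```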
Such a morphism might be visualised using the string diagram language of monoidal categories (below, left). However, this notation does not emphasise the special role played by the parameter object $P$, which is part of the data of the morphism itself. Parameters and data in machine learning have different semantics; by separating them on two different axes, we obtain a graphical language which is more closely tied to these semantics (below, right).
This gives us an intuitive way to compose parameterised maps:
This construction is called $\mathbf{Para}$. It was originally introduced in (Fong, Spivak and Tuyéras 2019) in a specialised form, then successively refined in (Gavranović 2019), (Capucci et al. 2020) and (Cruttwell et al. 2021).
Let $(\mathcal{C}, \otimes, I)$ be a symmetric monoidal category. Then $\mathbf{Para}(\mathcal{C})$ is a bicategory with the following data: its objects are the objects of $\mathcal{C}$; a 1-morphism $A \to B$ is a pair $(P, f)$ of an object $P$ of $\mathcal{C}$ (the parameter space) and a morphism $f \colon P \otimes A \to B$ in $\mathcal{C}$; a 2-morphism $(P, f) \Rightarrow (Q, g)$ is a reparameterisation, i.e. a morphism between the parameter objects $P$ and $Q$ compatible with $f$ and $g$.
The sequential composition of a map $(P, f) \colon A \to B$ and $(Q, g) \colon B \to C$ is given by the animation above. The composite is a $(Q \otimes P)$-parameterised map $A \to C$ defined as the composite
$$(Q \otimes P) \otimes A \;\xrightarrow{\;\cong\;}\; Q \otimes (P \otimes A) \;\xrightarrow{\;Q \otimes f\;}\; Q \otimes B \;\xrightarrow{\;g\;}\; C.$$
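Continuing the same illustrative Haskell sketch (the `Para` type is repeated so the block stands alone; `compose` and `identityPara` are made up for this page, not taken from the references), sequential composition pairs the two parameter types and reassociates before applying $f$ and then $g$:

```haskell
{-# LANGUAGE GADTs #-}

-- Repeated from the sketch above so this block stands alone.
data Para a b where
  Para :: ((p, a) -> b) -> Para a b

-- Sequential composition: the composite of (P, f) : a → b and
-- (Q, g) : b → c is parameterised by the pair (q, p); we
-- reassociate ((q, p), a) to (q, (p, a)), apply f, then g.
compose :: Para b c -> Para a b -> Para a c
compose (Para g) (Para f) = Para (\((q, p), x) -> g (q, f (p, x)))

-- Identity: parameterised by the monoidal unit, here ().
identityPara :: Para a a
identityPara = Para (\((), x) -> x)
```

Up to the associativity and unit isomorphisms handled implicitly by Haskell's pair type, this is exactly the composite displayed above.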
The construction defined here works in the more general setting of actegories, where the parameter objects are drawn from a monoidal category acting on $\mathcal{C}$.
todo
todo
When the base category $\mathcal{C}$ is set to be the category of optics (in computer science), then $\mathbf{Para}(\mathcal{C})$ recovers the category of neural networks defined in (Capucci et al. 2020).
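To convey the flavour of this statement, here is a small Haskell sketch, not the construction from the paper but an illustration under simplifying assumptions: taking the base category to consist of simple lenses, a parameterised morphism carries its parameter space through both a forward and a backward pass, which is the shape of a layer in gradient-based learning. The names `Lens`, `ParaLens` and `affineLayer` are made up for the example.

```haskell
{-# LANGUAGE GADTs #-}

-- A simple lens: a forward pass and a backward pass that sends
-- feedback on the output back to feedback on the input.
data Lens a b = Lens
  { fwd :: a -> b
  , bwd :: a -> b -> a   -- original input, output feedback ↦ input feedback
  }

-- Lens composition: forward passes compose left to right,
-- backward passes thread the feedback back through.
compLens :: Lens b c -> Lens a b -> Lens a c
compLens (Lens g g') (Lens f f') =
  Lens (g . f) (\a c' -> f' a (g' (f a) c'))

-- A Para morphism over lenses: a hidden parameter type p and a
-- lens whose input is the pair (p, a).  The backward pass then
-- produces feedback on both the parameter and the input, which
-- is the shape of a gradient-based parameter update.
data ParaLens a b where
  ParaLens :: Lens (p, a) b -> ParaLens a b

-- Example: a one-dimensional affine layer as a parameterised lens,
-- whose backward pass returns gradient-like feedback (illustrative).
affineLayer :: ParaLens Double Double
affineLayer = ParaLens $ Lens
  (\((w, b), x) -> w * x + b)
  (\((w, _b), x) dy -> ((dy * x, dy), dy * w))
```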
Brendan Fong, David Spivak, Rémy Tuyéras, Backprop as Functor: A compositional perspective on supervised learning, 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS) 2019, pp. 1-13, 2019. (arXiv:1711.10455, LICS'19)
Bruno Gavranović, Compositional Deep Learning (arXiv:1907.08292)
Matteo Capucci, Bruno Gavranović, Jules Hedges, Eigil Fjeldgren Rischel, Towards Foundations of Categorical Cybernetics (arXiv:2105.06332)
G.S.H. Cruttwell, Bruno Gavranović, Neil Ghani, Paul Wilson, Fabio Zanasi, Categorical Foundations of Gradient-Based Learning (arXiv:2103.01931)