Categorical compositional distributional semantics, also known as DisCoCat for short, uses category theory to combine the benefits of two very different approaches to linguistics: categorial grammar and distributional semantics.
Specifically, it uses the fact that pregroups and the category of finite-dimensional real vector spaces are both examples of rigid monoidal categories. Semantics for pregroup grammars may then be given as a strong monoidal functor, which allows sentence meanings to be computed from word meanings by linear algebra, compositionally on the grammatical structure of the sentence.
DisCoCat was first introduced in Coecke, Sadrzadeh and Clark 2010.
There is a curious (if probably shallow) analogy between DisCoCat and topological quantum field theory, in that both concern strong monoidal functors from a free (rigid monoidal or compact closed) category to a category of vector spaces.
Consider a very simple fragment of English consisting of nouns and verbs. Write $n$ for the type of nouns, and $s$ for the type of sentences. Let $P$ be the free pregroup generated by $n$ and $s$. In $P$, a noun will be assigned the type $n$ and a transitive verb will be assigned the type $n^r s n^l$, since it will produce a sentence if provided with nouns on the left and right. Now given nouns such as “Alice” and “Bob” and transitive verbs such as “loves” and “hates”, the grammaticality of a sentence such as “Alice hates Bob” is witnessed by the fact that
$$n \cdot (n^r s n^l) \cdot n \;\leq\; s$$

in the free pregroup.
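The reduction above can be sketched concretely. The following is a toy illustration (not from any of the cited papers), encoding a pregroup type as a list of (base type, adjoint order) pairs, so that $n$ has order $0$, $n^l$ has order $-1$ and $n^r$ has order $+1$. A greedy stack-based pass suffices for simple contractions like this one, though it is not a complete pregroup parser.

```python
# Hypothetical sketch of pregroup type reduction. A type is a list of
# (base, adjoint_order) pairs: n = ("n", 0), n^l = ("n", -1), n^r = ("n", 1).

def reduce_types(types):
    """Greedily cancel adjacent pairs x^(k) x^(k+1) -> 1, which covers
    both pregroup contractions x x^r <= 1 and x^l x <= 1."""
    stack = []
    for t in types:
        if stack and stack[-1][0] == t[0] and stack[-1][1] + 1 == t[1]:
            stack.pop()  # the pair contracts to the unit
        else:
            stack.append(t)
    return stack

# "Alice hates Bob": n . (n^r s n^l) . n
sentence = [("n", 0), ("n", 1), ("s", 0), ("n", -1), ("n", 0)]
print(reduce_types(sentence))  # [('s', 0)] -- reduces to the sentence type
```

Greedy left-to-right cancellation is enough here because each contraction involves adjacent factors; more complex grammars need genuine proof search.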
A strong monoidal functor $F \colon P \to \mathbf{FdVect}$ is completely determined by where it sends the generators $n$ and $s$. The vector spaces $N = F(n)$ and $S = F(s)$ are called the noun space and the sentence space, and their choice is a matter of linguistic modelling. A common approach is to pick a large set of $k$ basis words (as many as your sparse matrix implementation can handle) and take $N = \mathbb{R}^k$. The choice of $S$ is not well understood, but $S = \mathbb{R}^2$ is a common choice, with the standard basis vectors interpreted as “true” and “false”.
Given a grammatical parsing such as $n \cdot (n^r s n^l) \cdot n \leq s$, we automatically get a linear map

$$N \otimes (N \otimes S \otimes N) \otimes N \longrightarrow S$$

given by appropriate tensor contractions.
The final ingredient we need is word vectors. We need to pick vectors $\mathbf{Alice}, \mathbf{Bob} \in N$ and $\mathbf{loves}, \mathbf{hates} \in N \otimes S \otimes N$. The choice of these vectors is also a matter for linguistics, but a common idea is to count how often Alice (for example) appears in a corpus (for example the full text of Wikipedia) close to each of the chosen basis words.
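The co-occurrence idea can be sketched as follows. The corpus, basis words and window size here are all hypothetical toy choices; a real implementation would use sparse matrices over a corpus such as Wikipedia.

```python
# Toy illustration (not a production pipeline): build a count-based word
# vector by counting co-occurrences with chosen basis words in a window.
from collections import Counter

def count_vector(word, corpus_tokens, basis_words, window=5):
    """Return the vector of co-occurrence counts of `word` with each basis word."""
    counts = Counter()
    for i, tok in enumerate(corpus_tokens):
        if tok == word:
            lo, hi = max(0, i - window), i + window + 1
            for neighbour in corpus_tokens[lo:i] + corpus_tokens[i + 1:hi]:
                if neighbour in basis_words:
                    counts[neighbour] += 1
    return [counts[b] for b in basis_words]

corpus = "alice loves bob and alice hates carol and bob loves alice".split()
basis = ["loves", "hates", "and"]
print(count_vector("alice", corpus, basis, window=2))  # [2, 1, 1]
```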
Finally, we obtain a sentence vector in $S$ by applying this linear map to $\mathbf{Alice} \otimes \mathbf{hates} \otimes \mathbf{Bob}$.
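Putting the pieces together, here is a minimal numpy sketch with tiny hypothetical spaces $N = \mathbb{R}^2$ and $S = \mathbb{R}^2$; the word vectors and verb tensor entries are made up for illustration. The einsum contraction pairs each noun vector with the matching noun factor of the verb tensor, exactly as the pregroup reduction dictates.

```python
import numpy as np

# Tiny hypothetical spaces: N = R^2, S = R^2.
alice = np.array([1.0, 0.0])  # vector in N
bob   = np.array([0.0, 1.0])  # vector in N

# A transitive verb lives in N (x) S (x) N, i.e. a 2x2x2 tensor.
hates = np.zeros((2, 2, 2))
hates[0, 0, 1] = 1.0  # made-up entry sending (alice, bob) to the "true" axis
hates[1, 0, 0] = 1.0  # made-up entry

# The reduction n (n^r s n^l) n <= s becomes a tensor contraction:
# sum_{i,j} alice_i * hates[i, s, j] * bob_j, leaving a vector in S.
sentence = np.einsum("i,isj,j->s", alice, hates, bob)
print(sentence)  # [1. 0.] with the entries chosen above
```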
All earlier papers on DisCoCat use free pregroups for the grammar category, building on earlier work by Joachim Lambek on pregroup grammars. Unfortunately, in Preller 2014 (fact 1) it is proven that any strong monoidal functor from a free pregroup to $\mathbf{FdVect}$ is necessarily trivial, in the sense that it sends every generator to the monoidal unit. One should instead use a free rigid monoidal category, a kind of categorification. Morphisms in the free autonomous category can be viewed as proof-relevant grammatical reductions. They may be encoded as string diagrams; see pregroup grammar.
This allows a slightly more elegant reformulation of our basic example. Let $G$ be the free autonomous category on the objects $n$, $s$ and the morphisms $\mathrm{Alice} \colon I \to n$, $\mathrm{Bob} \colon I \to n$ and $\mathrm{loves} \colon I \to n^r s n^l$. Then a sentence such as “Alice loves Bob”, together with its grammaticality, is witnessed by a morphism $f \colon I \to s$ in $G$. Now the word vectors $F(\mathrm{Alice})$, $F(\mathrm{Bob})$ and $F(\mathrm{loves})$ become part of the data defining a strong monoidal functor $F \colon G \to \mathbf{FdVect}$, and the resulting sentence vector is simply $F(f) \in S$.
One criticism of DisCoCat is that a single model is unlikely to be complex enough to capture the intricacies of a natural language. A response to this criticism is that DisCoCat models should only model a given fragment of language with some level of detail. These models can then be combined or updated to represent more complex fragments of natural language. Indeed, in Translating and Evolving: Towards a Model of Language Change in DisCoCat the authors construct a category of DisCoCat models whose objects are strong monoidal functors

$$F \colon G \to \mathbf{FdVect}$$

where $G$ is a category which is freely monoidal on some set of grammatical types. In practice $G$ might be freely equipped with a rigid structure or a Frobenius algebra on each object.
A morphism from a language model $F \colon G \to \mathbf{FdVect}$ to a language model $F' \colon G' \to \mathbf{FdVect}$ is a tuple $(J, \alpha)$ where $J \colon G \to G'$ is a monoidal functor and $\alpha \colon F \Rightarrow F' \circ J$ is a monoidal natural transformation filling the triangle formed by $F$, $J$ and $F'$.
Morphisms in this category are called translations because they represent ways of changing your semantic interpretation of grammatical types in a way which is coherent with a change in grammar.
In Mathematical foundations for a compositional distributional model of meaning a different description of a DisCoCat model was given. Let $G$ be a free autonomous category representing grammar and assume that you have an assignment of each grammatical type to a finite-dimensional vector space. Then grammatical computations of meaning can be performed in the product category $G \times \mathbf{FdVect}$. Let $g$ be an object in $G$ and let $v$ be its semantic meaning. Then a grammatical reduction $\alpha$ built from the units and counits in $G$ can be paired with the corresponding structure maps in $\mathbf{FdVect}$. This pairing allows you to apply the structure maps corresponding to $\alpha$ to the vector $v$ to get a new semantic meaning. In essence, this approach identifies an autonomous subcategory of $G \times \mathbf{FdVect}$ where meaning computations can occur.
This product space representation can be related to the monoidal functor perspective via the category of elements. Using the forgetful functor $U \colon \mathbf{FdVect} \to \mathbf{Set}$, the product space of a language model $F \colon G \to \mathbf{FdVect}$ can be defined as the category of elements $\int U \circ F$. The product space representation is useful for equipping a language model with a specific set of words. If $X$ is a set of words, then a map $X \to \int U \circ F$ describes an assignment of each word to a grammatical type and a semantic meaning.
DisCoCat is relatively easy to modify by replacing the semantic category with some other category with the appropriate structure. Examples include
Relations, which yields a semantics in regular logic, closely related to Cartesian bicategories, see de Felice, Meichanetzidis and Toumi 2019
Density matrices to model semantic ambiguity such as “financial bank” vs “river bank”
Convex relations (i.e. relations in the category of convex spaces) to model cognition (Bolt et al 2016)
Games to model dialogue and Wittgenstein’s language games (Hedges and Lewis 2018)
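The density-matrix variant mentioned above can be illustrated with a toy example (all vectors and weights are hypothetical): an ambiguous word such as “bank” is represented as a mixture of the pure states of its senses, and the von Neumann entropy of the mixture quantifies the ambiguity.

```python
import numpy as np

# Toy sketch of the density-matrix idea. The sense vectors are made up;
# in practice they would come from a distributional model.
financial_bank = np.array([1.0, 0.0])
river_bank     = np.array([0.0, 1.0])

def pure(v):
    """Density matrix |v><v| of a unit vector v."""
    return np.outer(v, v)

# Equal-weight mixture representing the ambiguous word "bank".
bank = 0.5 * pure(financial_bank) + 0.5 * pure(river_bank)

# Von Neumann entropy > 0 signals a mixed (ambiguous) state.
eigvals = np.linalg.eigvalsh(bank)
entropy = -sum(p * np.log2(p) for p in eigvals if p > 1e-12)
print(entropy)  # 1.0 bit for an equal mixture of orthogonal senses
```

An unambiguous word would be a pure state with entropy zero, so the entropy cleanly separates the two cases.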
Linguistics using category theory on the n-category café
Cognition, convexity and category theory on the n-category café
Meeting the Dialogue Challenge on the n-category café
Bob Coecke, Mehrnoosh Sadrzadeh and Stephen Clark, Mathematical foundations for a compositional distributional model of meaning. Lambek Festschrift, special issue of Linguistic Analysis, 2010. (arXiv:1003.4394)
Anne Preller, From logical to distributional methods. QPL 2013. (arXiv:1412.8527)
Josef Bolt, Bob Coecke, Fabrizio Genovese, Martha Lewis, Daniel Marsden and Robin Piedeleu, Interacting conceptual spaces I: Grammatical composition of concepts, SLPCS 2016. (arXiv:1703.08314)
Giovanni de Felice, Konstantinos Meichanetzidis and Alexis Toumi, Functorial Question Answering, ACT 2019. (arXiv:1905.07408)
Jules Hedges and Martha Lewis, Towards functorial language-games, CAPNS 2018. (arXiv:1807.07828)
The category of DisCoCat models and its relationship to the product space representation is discussed in:
Translating and Evolving: Towards a Model of Language Change in DisCoCat. (arXiv:1811.11041)
Last revised on December 9, 2020 at 14:17:23. See the history of this page for a list of all contributions to it.