Categorical compositional distributional semantics, also known as DisCoCat for short, uses category theory to combine the benefits of two very different approaches to linguistics: categorial grammar and distributional semantics. Specifically, it uses the fact that pregroups and the category of finite-dimensional real vector spaces are both examples of compact closed categories. It turns out that word meanings can be arranged into a strong monoidal functor, which allows sentence meanings to be computed from word meanings by linear algebra, compositionally on the grammatical structure of the sentence.
DisCoCat was first introduced in Coecke, Sadrzadeh and Clark 2010.
There is a curious (if probably shallow) analogy between DisCoCat and topological quantum field theory, in that both concern strong monoidal functors from a free compact closed category to a category of vector spaces.
Consider a very simple fragment of English consisting of nouns and verbs. Write $n$ for the type of nouns, and $s$ for the type of sentences. Let $\mathcal{C}$ be the free pregroup generated by $n$ and $s$. In $\mathcal{C}$, a noun will be assigned the type $n$ and a transitive verb will be assigned the type $n^{r} s n^{l}$, since it will produce a sentence if provided with nouns on the left and right. Now given nouns such as “Alice” and “Bob” and transitive verbs such as “loves” and “hates”, the grammaticality of a sentence such as “Alice hates Bob” is witnessed by the fact that
$$n (n^{r} s n^{l}) n \leq s$$
in the free pregroup.
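Such reductions can be checked mechanically by cancelling adjacent pairs $x x^{r} \leq 1$ and $x^{l} x \leq 1$. A minimal sketch (illustrative only, and restricted to single-letter generators, encoding $n^{r}$ as `'nr'` and $n^{l}$ as `'nl'`):

```python
def reduce_types(types):
    """Greedily cancel adjacent pairs x x^r and x^l x in a list of
    pregroup types, witnessing a reduction in the free pregroup."""
    types = list(types)
    changed = True
    while changed:
        changed = False
        for i in range(len(types) - 1):
            a, b = types[i], types[i + 1]
            # b == a + 'r' encodes x x^r <= 1; a == b + 'l' encodes x^l x <= 1
            if b == a + 'r' or a == b + 'l':
                del types[i:i + 2]
                changed = True
                break
    return types

# "Alice hates Bob": n (n^r s n^l) n reduces to s
print(reduce_types(['n', 'nr', 's', 'nl', 'n']))  # → ['s']
```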
A strong monoidal functor $F : \mathcal{C} \to FVect_{\mathbb{R}}$ is completely determined by where it sends the generators $n$ and $s$. The vector spaces $F (n)$ and $F (s)$ are called the noun space and the sentence space, and their choice is a matter of linguistic modelling. A common approach is to pick a large set of $N$ basis words, say $N = 1000$ (or as many as your sparse matrix implementation can handle), and take $F (n) = \mathbb{R}^{N}$. The choice of $F (s)$ is not as well understood, but $F (s) = \mathbb{R}^{2}$ is a common choice, with the standard basis vectors interpreted as “true” and “false”.
Given a grammatical parsing such as $n n^{r} s n^{l} n \leq s$, we automatically get a linear map
$$F (n) \otimes F (n^{r} s n^{l}) \otimes F (n) \to F (s)$$
given by appropriate tensor contractions.
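Concretely, with $F (n^{r} s n^{l}) = F (n) \otimes F (s) \otimes F (n)$, the two cups of the pregroup reduction become contractions of the verb tensor against the subject and object vectors. A minimal numpy sketch, with illustrative toy dimensions:

```python
import numpy as np

# Toy dimensions (assumptions, not from the article):
N, S = 4, 2  # dim F(n) = N (noun space), dim F(s) = S (sentence space)

def reduce_sentence(subj, verb, obj):
    """Apply the linear map F(n) ⊗ (F(n) ⊗ F(s) ⊗ F(n)) ⊗ F(n) → F(s)
    induced by the reduction n (n^r s n^l) n ≤ s: the cups n n^r ≤ 1
    and n^l n ≤ 1 become tensor contractions."""
    # verb has shape (N, S, N); contract its first axis with subj and
    # its last axis with obj, leaving a vector in the sentence space.
    return np.einsum('i,isj,j->s', subj, verb, obj)

subj = np.random.rand(N)
verb = np.random.rand(N, S, N)
obj = np.random.rand(N)
sentence = reduce_sentence(subj, verb, obj)
assert sentence.shape == (S,)
```

Note that the map is linear in each argument separately, as the functoriality of $F$ demands.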
The final ingredient we need is word vectors. We need to pick vectors $[\![ Alice ]\!], [\![ Bob ]\!] \in F (n)$ and $[\![ loves ]\!] \in F (n) \otimes F (s) \otimes F (n)$. The choice of these vectors is also a matter for linguistics, but a common idea is to count how often Alice (for example) appears in a corpus (for example the full text of Wikipedia) close to each of the chosen basis words.
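The co-occurrence counting idea can be sketched as follows; the basis words, corpus, and window size are all illustrative assumptions:

```python
from collections import Counter

# Hypothetical basis words and a toy corpus (illustrative only).
basis_words = ['love', 'hate', 'park', 'letter']
corpus = ('Alice met Bob in the park . Alice wrote Bob a letter . '
          'Bob read the letter in the park .').split()

def word_vector(target, window=3):
    """Entry b counts how often `target` occurs within `window`
    positions of basis word b, giving a vector in R^len(basis_words)."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == target:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            for j in range(lo, hi):
                if corpus[j] in basis_words:
                    counts[corpus[j]] += 1
    return [counts[b] for b in basis_words]
```

In practice the raw counts are usually reweighted (e.g. by TF-IDF or pointwise mutual information), but the shape of the construction is the same.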
Finally, we obtain a sentence vector
$$[\![ \text{Alice loves Bob} ]\!] \in F (s)$$
as the image of $[\![ Alice ]\!] \otimes [\![ loves ]\!] \otimes [\![ Bob ]\!]$ under this linear map.
All earlier papers on DisCoCat use free pregroups for the grammar category, building on earlier work by Joachim Lambek on pregroup grammars. Unfortunately, in Preller 2014 (fact 1) it is proven that any strong monoidal functor from a free pregroup to $FVect$ is necessarily trivial, in the sense that it sends every generator to the monoidal unit $\mathbb{R}$. This is worked around by using a free autonomous category instead of a free pregroup (this is a sort of categorification). Morphisms in the free autonomous category can be viewed as proof-relevant grammatical reductions.
This allows a slightly more elegant reformulation of our basic example. Let $\mathcal{C}$ be the free autonomous category on the objects $n, s$ and the morphisms $Alice : 1 \to n$, $Bob : 1 \to n$ and $loves : 1 \to n^{r} s n^{l}$. Then a sentence such as “Alice loves Bob”, together with its grammaticality, is witnessed by a morphism $f : 1 \to s$. Now the word-vectors $[\![ Alice ]\!] = F (Alice)$, $[\![ Bob ]\!] = F (Bob)$ and $[\![ loves ]\!] = F (loves)$ become part of the data defining $F : \mathcal{C} \to FVect$, and the resulting sentence vector is simply $F (f)$.
DisCoCat is relatively easy to modify by replacing the semantic category $FVect$ with some other category with the appropriate structure. Examples include:

* Relations, to model simple boolean semantics, usually as a simpler toy model
* Density matrices, to model semantic ambiguity (such as “financial bank” vs. “river bank”)
* Convex relations (i.e. relations in the category of convex spaces), to model cognition (Bolt et al. 2016)
* Formulas of regular logic, to model Montague semantics (de Felice, Meichanetzidis and Toumi 2019)
* Games, to model dialogue (Hedges and Lewis 2018)
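To illustrate the first of these, semantics in $Rel$ can be simulated with 0-1 arrays: subsets and relations replace vectors and tensors, and the contraction happens over the boolean semiring (“does a witness exist?”). A toy sketch, with all names and data hypothetical:

```python
import numpy as np

nouns = ['Alice', 'Bob']
alice = np.array([1, 0])  # the subset {Alice} of the noun "space"
bob   = np.array([0, 1])

# loves ⊆ nouns × {true, false} × nouns, with sentence-space basis
# index 0 = "true", index 1 = "false" (a toy assumption):
loves = np.zeros((2, 2, 2), dtype=int)
loves[0, 0, 1] = 1  # (Alice, true, Bob): Alice loves Bob
loves[1, 1, 0] = 1  # (Bob, false, Alice): Bob does not love Alice

def reduce_rel(subj, verb, obj):
    """Same contraction pattern as in FVect, truncated to booleans:
    an entry is 1 iff some witnessing pair of nouns exists."""
    return (np.einsum('i,isj,j->s', subj, verb, obj) > 0).astype(int)

print(reduce_rel(alice, loves, bob))  # → [1 0], i.e. "true"
```

This makes the sense in which $Rel$ is a toy model concrete: it is the same contraction as in $FVect$, but over $\{0, 1\}$ instead of $\mathbb{R}$.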
Last revised on July 10, 2019 at 10:43:18.