- algebra, higher algebra
- universal algebra
- monoid, semigroup, quasigroup
- nonassociative algebra
- associative unital algebra
- commutative algebra
- Lie algebra, Jordan algebra
- Leibniz algebra, pre-Lie algebra
- Poisson algebra, Frobenius algebra
- lattice, frame, quantale
- Boolean ring, Heyting algebra
- commutator, center
- monad, comonad
- distributive law

- group, normal subgroup
- action, Cayley's theorem
- centralizer, normalizer
- abelian group, cyclic group
- group extension, Galois extension
- algebraic group, formal group
- Lie group, quantum group

Let $R$ be a commutative ring. A *polynomial* with coefficients in $R$ is an element of a **polynomial ring** over $R$. A polynomial ring over $R$ consists of a set $X$ whose elements are called “variables” or “indeterminates”, and a function $X \to R[X]$ to (the underlying set of) a commutative $R$-algebra that is universal among such functions, so that $R[X]$ is the free commutative $R$-algebra generated by $X$; a polynomial is then an element of the underlying set of $R[X]$.

Much like “vector”, the term “polynomial” in this sense may seem slightly deprecated from the viewpoint of modern mathematics. We no longer think of a vector space as consisting of things called “vectors” (i.e. we don’t assign an objective meaning to “vectors”); it’s the other way around, where we introduce a type of structure called a vector space and then, relative to a given vector space context, declare that “vector” just means an element therein. Similarly, the term “polynomial” in the sense above has seemingly been subordinated to the structural concept of polynomial ring. (In Linderholm’s *Mathematics Made Difficult*, page 152, there is an amusing passage where someone points at the expression $a_0 + a_1 X + a_2 X^2 + \ldots + a_n X^n$ and asks “Well, how about it? Is it a polynomial, or isn’t it?” and the respondent says, “Yeah, sure, I guess. Looks like one. Yeah, sure, that’s a polynomial all right” – an answer which is not wrong exactly, but not quite right either, since it fails to recognize that there is something questionable about the question.)

On further reflection, however, we might more objectively identify the concept of “polynomial” (let us say a polynomial in $n$ variables) with a definable $n$-ary operation in the theory of commutative $R$-algebras. From a categorical perspective, if $U: CommAlg_R \to Set$ is the forgetful functor, a definable $n$-ary operation means a natural transformation $U^n \to U$. The connection is that the functor $U^n$ is representable, being represented by the free object $F[n] = R[x_1, \ldots, x_n]$, so that a natural transformation $U^n \to U$ is canonically identified with a transformation $\hom(R[x_1, \ldots, x_n], -) \to U$, or equivalently, by the Yoneda lemma, with an element of $U R[x_1, \ldots, x_n]$. In pursuit of this objective meaning (which is essentially due to Lawvere), we find that the “variable” $x_i$ stands for the $i^{th}$ projection map $U^n \to U$, and that the meaning of (let’s say) $x_1 x_2 \in R[x_1, x_2, x_3]$ is that it is the definable operation whose instantiation at any commutative $R$-algebra $A$ is the function $A^3 \to A$ taking $(a, b, c)$ to $a b$.
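To make this concrete, here is a minimal Python sketch (the function name `op_x1_x2` is purely illustrative): the polynomial $x_1 x_2 \in R[x_1, x_2, x_3]$ names one and the same ternary operation in every commutative $R$-algebra, which duck typing lets us instantiate at two different algebras.

```python
# A sketch of "polynomial as definable operation": the polynomial
# x1*x2 in R[x1, x2, x3] names the ternary operation that, in ANY
# commutative R-algebra A, sends (a, b, c) to a*b.  Python duck
# typing plays the role of "instantiating" the same operation in
# different algebras.  (The function name is illustrative.)

def op_x1_x2(a, b, c):
    """The definable operation named by x1*x2 in R[x1, x2, x3]."""
    return a * b

# Instantiation at the algebra of integers:
assert op_x1_x2(2, 5, 7) == 10

# Instantiation at the Gaussian integers (modeled via complex numbers):
assert op_x1_x2(1 + 2j, 3 - 1j, 0) == 5 + 5j
```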

Of course there are traditional standard expressions that people usually have in mind when they speak of “a polynomial” as such. But leaving it at that, where polynomials are merely identified with certain types of expressions (as by the characters in Linderholm’s book), ignores the deeper objective meaning of definable operations which of course is the actual point of it all.

Finally, sometimes “polynomial” is construed to mean a *polynomial function*. This is actually just a particular instantiation of a definable operation. The default meaning is that, if we are working for instance with a polynomial ring in one variable $R[x]$, then we have a composite

$U R[x] \cong \hom(U, U) \stackrel{ev_R}{\to} \hom(U R, U R)$

where the second map sends a natural transformation to its component at $R$ as an $R$-algebra. Put differently, the set $\hom(U R, U R)$ carries a commutative $R$-algebra structure under the pointwise operations, and there is a unique $R$-algebra map $R[x] \to \hom(U R, U R)$ that sends ‘$x$’ to the identity map. The value of a polynomial $p \in R[x]$ under this map is then the corresponding polynomial function $U R \to U R$.

This conflation of polynomials with polynomial functions is often forgivable, particularly in those cases where the map $R[x] \to \hom(U R, U R)$ is injective (so that ‘polynomials’ are identified with certain types of functions). Of course the map *won’t* be injective if $R$ is finite, to give one example. But in analysis, where we consider functions on $\mathbb{R}$ or $\mathbb{C}$, the conflation is familiar and rarely cause for concern. The conflation may also be responsible for certain notational artifacts, such as the common (and useful!) practice of writing $p(x)$ for polynomials $p$.
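A small Python sketch of this failure of injectivity over a finite ring (the helper name `eval_mod2` is illustrative): over $R = \mathbb{Z}/2$, the polynomial $x^2 + x$ is nonzero in $R[x]$ yet induces the zero function.

```python
# Illustrating why polynomial != polynomial function over a finite
# ring: over R = Z/2, the polynomials x^2 + x and 0 are distinct
# elements of R[x] but induce the same function R -> R.

def eval_mod2(coeffs, a):
    """Evaluate a polynomial (coefficient list, lowest degree first)
    at a, working modulo 2."""
    return sum(c * a**n for n, c in enumerate(coeffs)) % 2

p = [0, 1, 1]   # x^2 + x
zero = [0]      # the zero polynomial

# As elements of R[x] they differ, but as functions on Z/2 they agree:
assert all(eval_mod2(p, a) == eval_mod2(zero, a) for a in (0, 1))
```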

With these preliminary remarks out of the way, we recall some of the more syntactic considerations with an example.

The set of **polynomials** in one variable $z$ with coefficients in $R$, also called the set of **univariate polynomials**, is the set $R[\mathbb{N}]$ of all formal linear combinations on elements $n \in \mathbb{N}$, thought of as powers $z^n$ of the variable $z$. As a string of symbols, a polynomial is frequently represented by a form like

$a_n z^n + \cdots + a_1 z + a_0
\,,$

where $n$ is an arbitrary natural number and $a_0, \dots, a_n \in R$, subject to the usual fine print (where we work modulo the congruence generated by equations of the form

$0 z^{n+1} + a_n z^n + \cdots + a_1 z + a_0 = a_n z^n + \cdots + a_1 z + a_0$

so that we ignore coefficients of zero). (The **degree** of a polynomial is the maximum $n$ for which $a_n$ is nonzero, in which case the **leading term** of the polynomial is $a_n z^n$. A polynomial is **constant** if it is zero or of degree $0$. The degree of the zero polynomial may be left chastely undefined, although for some purposes it may be convenient to define it as $-1$ or as $-\infty$. Even $0$ is possible if one is prepared to observe some fine print. *Chacun à son goût*.)

This set is equipped with an $R$-module structure (where formal linear combinations are added and scalar-multiplied as usual) and also with the structure of a ring, in fact a commutative algebra over $R$, denoted $R[z]$ and called the **polynomial ring** or **ring of polynomials**, with ring multiplication

$R[z] \cdot R[z] \to R[z]$

the unique one that bilinearly extends the multiplication of monomials given by

$z^k \cdot z^l = z^{k+l}
\,.$

Thus, one way to construct a polynomial ring is first to construct the free commutative monoid generated by a set $X$ (the monoid of monomials), and then to construct the free $R$-module generated by the underlying set of that monoid, extending the monoid multiplication to a ring multiplication by bilinearity.
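The construction just described can be sketched in a few lines of Python in the univariate case, representing a polynomial as a finitely supported function from the exponent monoid $\mathbb{N}$ to $R$ (here $R = \mathbb{Z}$; the helper name `poly_mul` is illustrative):

```python
# A minimal sketch of the free-module-over-free-monoid construction,
# univariate case: a polynomial is a dict {exponent: coefficient},
# i.e. a finitely supported function N -> R with R = Z here.
# Multiplication is induced bilinearly from z^k * z^l = z^(k+l).

def poly_mul(p, q):
    """Multiply polynomials given as dicts {exponent: coefficient}."""
    result = {}
    for k, a in p.items():
        for l, b in q.items():
            result[k + l] = result.get(k + l, 0) + a * b
    # Drop zero coefficients, per the fine print above:
    return {n: c for n, c in result.items() if c != 0}

# (z + 1) * (z - 1) = z^2 - 1
assert poly_mul({1: 1, 0: 1}, {1: 1, 0: -1}) == {2: 1, 0: -1}
```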

In addition to the ring structure, there is a further operation $R[z] \times R[z] \to R[z]$ which may be described as “substitution” or “composition”; see the operadic description below for a general account (which applies in fact to any Lawvere theory).

In the case of univariate polynomials, the set of functions $R[z] \to R[z]$ is a function $R$-algebra under the pointwise operations, with a commutative $R$-algebra homomorphism $C:R \to (R[z] \to R[z])$ sending each $r \in R$ to the corresponding constant function. Writing $s:R \to R[z]$ for the inclusion of constant polynomials, there exists a unique commutative $R$-algebra homomorphism $i:R[z] \to (R[z] \to R[z])$ inductively defined by $i(s(r)) \coloneqq C(r)$ for $r \in R$ and $i(z) \coloneqq id_{R[z]}$. The **substitution** or **composition** of univariate polynomials $(-) \circ (-):R[z] \times R[z] \to R[z]$ is the uncurrying of $i$: $p \circ q \coloneqq i(p)(q)$ for $p, q \in R[z]$.
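A hedged Python sketch of this substitution map, with polynomials as coefficient lists (lowest degree first) and all helper names illustrative; Horner's scheme realizes $i(p)(q)$ entirely inside $R[z]$:

```python
# Substitution p ∘ q in R[z]: i sends constants to constants and z to
# the identity, so p ∘ q is "p with q plugged in for z".  Coefficient
# lists are lowest degree first; Horner's scheme stays within R[z].

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for k, a in enumerate(p):
        for l, b in enumerate(q):
            out[k + l] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def compose(p, q):
    """p ∘ q: substitute q for z in p, via Horner's scheme."""
    result = [0]
    for a in reversed(p):
        result = poly_add(poly_mul(result, q), [a])
    while len(result) > 1 and result[-1] == 0:
        result.pop()  # normalize away trailing zero coefficients
    return result

# (z^2 + 1) ∘ (z + 1) = (z + 1)^2 + 1 = z^2 + 2z + 2
assert compose([1, 0, 1], [1, 1]) == [2, 2, 1]
```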

Moreover, there is a noncommutative analogue of polynomial ring on a set $X$, efficiently described as the free $R$-module generated by the (underlying set of the) free monoid on $X$. This carries also a ring structure, with ring multiplication induced from the monoid multiplication. A far-reaching generalization of this construction is given at distributive law.

Finally: polynomial algebras may be regarded as graded algebras (graded over $\mathbb{N}$). Specifically: let us regard $R[X]$ as the free $R$-module generated by (the underlying set of) the free commutative monoid $F(X)$. The monoid homomorphism $F(!): F(X) \to F(1) \cong \mathbb{N}$ induced by the unique function $!: X \to 1$ gives an $\mathbb{N}$-fibering of $F(X)$ over $\mathbb{N}$, with typical fiber $F(X)_n$ whose elements are called *monomials of degree $n$*. Then the *homogeneous component* of degree $n$ in $R[X]$ is the $R$-submodule generated by the subset $F(X)_n \subset F(X)$. The elements of this component are called *homogeneous polynomials of degree $n$*.
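The grading can be sketched in Python by representing a monomial as its exponent tuple (an element of the free commutative monoid) and taking its degree to be the sum of the exponents, i.e. its image under $F(!)$; the helper name is illustrative:

```python
# A sketch of the N-grading: splitting a polynomial, stored as
# {exponent_tuple: coefficient}, along the degree map F(X) -> N
# (total degree = sum of exponents) yields its homogeneous components.

def homogeneous_components(p):
    """Split {exponent_tuple: coefficient} by total degree."""
    components = {}
    for exps, c in p.items():
        n = sum(exps)  # degree = image under F(!): F(X) -> N
        components.setdefault(n, {})[exps] = c
    return components

# p = x^2 + x*y + y + 3 in R[x, y], exponent tuples (e_x, e_y):
p = {(2, 0): 1, (1, 1): 1, (0, 1): 1, (0, 0): 3}
assert homogeneous_components(p) == {
    2: {(2, 0): 1, (1, 1): 1},   # degree-2 component: x^2 + x*y
    1: {(0, 1): 1},              # degree-1 component: y
    0: {(0, 0): 3},              # degree-0 component: 3
}
```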

To verify that $R[z]$ is indeed free on one generator, one checks, per the definition of free objects, that $R$-algebra homomorphisms

$f : R[z] \to K$

to another commutative $R$-algebra $K$ are in natural bijection with functions of sets

$\bar f : * \to K$

from the singleton to the set underlying $K$. Take $\bar f \coloneqq f(z)$. Using $R$-linearity and multiplicativity, this is directly seen to yield the desired bijection.

Similarly, the set of polynomials in any given set of variables with coefficients in $R$ is the free commutative $R$-algebra on that set of generators; see symmetric power and symmetric algebra.

As usual in the study of universal algebra via Lawvere theories, there is an operad whose $n^{th}$ component $C_n$ is the free algebra $R[x_1, \ldots, x_n]$, and whose operadic multiplication is given by maps

$C_k \times C_{n_1} \times \ldots \times C_{n_k} \to C_n$

($n = n_1 + \ldots + n_k$) that take a tuple of elements $(p; q_1, \ldots, q_k)$ to $p(q_1(x), \ldots, q_k(x))$. Formally, it takes this tuple to the value of $p$ under the unique algebra map $R[x_1, \ldots, x_k] \to R[x_1, \ldots, x_n]$ that extends the mapping $x_j \mapsto i_j(q_j)$. Here the $i_j: R[x_1, \ldots, x_{n_j}] \to R[x_1, \ldots, x_n]$ are appropriate coproduct inclusions (in the category of commutative rings), where $i_j(x_l) = x_{n_1 + \ldots + n_{j-1} + l}$. A particularly important case of substitution is the case $k=1$ and $n_1 = 1$, where the map $R[x] \times R[x] \to R[x]$ is ordinary substitution $(p, q) \mapsto p(q(x))$. This is a special case of the more general notion of Tall-Wraith monoid.

In case $R$ is an integral domain, the field of fractions of $R[z]$ is the field $R(z)$ of rational fractions.

The underlying $R$-module of the polynomial ring $R[z]$ on one generator is the natural numbers object in the category RMod of $R$-modules.

Polynomial rings on one generator also have the structure of a differential algebra.

For every univariate polynomial ring $R[z]$, one could inductively define a function called a **derivative** or **derivation**

$\frac{\partial}{\partial z}: R[z] \to R[z]$

such that

- for all polynomials $p, q \in R[z]$ and constant polynomials $a, b \in R[z]$, $\frac{\partial}{\partial z}(a \cdot p + b \cdot q) = a \cdot \frac{\partial}{\partial z}(p) + b \cdot \frac{\partial}{\partial z}(q)$;
- for all polynomials $p, q \in R[z]$, $\frac{\partial}{\partial z}(p \cdot q) = p \cdot \frac{\partial}{\partial z}(q) + \frac{\partial}{\partial z}(p) \cdot q$;
- $\frac{\partial}{\partial z}(z) = 1$.

Thus the univariate polynomial ring $R[z]$ is a differential algebra.
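A small Python sketch of this derivative on coefficient lists (lowest degree first), together with a spot-check of the Leibniz rule; the helper names are illustrative:

```python
# The derivative on R[z] as coefficient lists [a0, a1, a2, ...]:
# linearity and the Leibniz rule force d/dz(z^n) = n z^(n-1),
# which is all the code needs.

def derivative(p):
    """Formal derivative of a coefficient list, lowest degree first."""
    return [n * a for n, a in enumerate(p)][1:] or [0]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for k, a in enumerate(p):
        for l, b in enumerate(q):
            out[k + l] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

# d/dz (3z^2 + 2z + 5) = 6z + 2
assert derivative([5, 2, 3]) == [2, 6]

# Spot-check the Leibniz rule on p = z + 1, q = z^2:
p, q = [1, 1], [0, 0, 1]
lhs = derivative(poly_mul(p, q))
rhs = poly_add(poly_mul(p, derivative(q)), poly_mul(derivative(p), q))
assert lhs == rhs
```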

In the multivariate polynomial ring $R[z_1, z_2, \ldots, z_n]$, there is a derivation

$\frac{\partial}{\partial z_i}: R[z_1, z_2, \ldots, z_n] \to R[z_1, z_2, \ldots, z_n]$

for each $1 \leq i \leq n$ called a **partial derivative**.

Since $R[z]$ is power-associative, the powers $z^n$ are unambiguously defined, and for every positive integer $n \in \mathbb{Z}_+$,

$\frac{\partial}{\partial z}z^{n+1} = j(n+1) \cdot z^{n}$

where $j:\mathbb{Z}_+ \to \mathbb{Z} \to R[z]$ is the canonical function from the positive integers to $R[z]$.

For all constant polynomials $a \in R[z]$ (no hypothesis on $R$ is needed here: the Leibniz rule gives $\frac{\partial}{\partial z}(1) = \frac{\partial}{\partial z}(1 \cdot 1) = 2\frac{\partial}{\partial z}(1)$, whence $\frac{\partial}{\partial z}(1) = 0$),

$\frac{\partial}{\partial z}a = 0$

Given a polynomial $p \in R[z]$, we define the **set of antiderivatives** of $p$ to be the fiber of the derivative at $p$:

$\mathrm{antiderivatives}(p) \coloneqq \left\{q \in R[z] \bigg| \frac{\partial}{\partial z}\left(q\right) =_{R[z]} p\right\}$
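Over $R = \mathbb{Q}$ every polynomial has antiderivatives, and the fiber above is the coset of a particular solution by the constants; here is a Python sketch using exact rational arithmetic (helper names illustrative). Note that over other rings the fiber may be empty: $z^{p-1} \in \mathbb{F}_p[z]$ has no antiderivative, since $\frac{\partial}{\partial z} z^p = p z^{p-1} = 0$.

```python
# Antiderivatives over R = Q: integrate a coefficient list term by
# term.  The fiber over p is then {q + c : c constant} for this
# particular solution q.

from fractions import Fraction

def antiderivative(p):
    """One antiderivative (constant term 0) of [a0, a1, ...] over Q."""
    return [Fraction(0)] + [Fraction(a, n + 1) for n, a in enumerate(p)]

def derivative(p):
    return [n * a for n, a in enumerate(p)][1:] or [Fraction(0)]

# An antiderivative of 3z^2 + 1 is z^3 + z:
q = antiderivative([1, 0, 3])
assert q == [0, 1, 0, 1]
assert derivative(q) == [1, 0, 3]
```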

In the case where $R = k$ is a field, the polynomial ring $k[x]$ has a number of useful properties. One is that it is a Euclidean domain, where the degree serves as the Euclidean function:

Let $R$ be a commutative ring. Given $f, g \in R[x]$ where the leading coefficient of $g$ is a unit (e.g., if $g$ is a monic polynomial), there are unique $q, r \in R[x]$ such that $f = q \cdot g + r$ and $\deg(r) \lt \deg(g)$.

If $\deg(f) \lt \deg(g)$, then $q = 0$ and $r = f$ will serve. Otherwise we may argue by induction on $\deg(f)$, where if $a_m x^m$ is the leading term of $f$ and $b_n x^n$ the leading term of $g$, then $h(x) = f(x) - a_m b_n^{-1} x^{m-n}g(x)$ has lower degree than $f(x)$. This proves existence. Uniqueness is clear, since if $q \cdot g + r = q' \cdot g + r'$ and $q \neq q'$, we have $\deg((q - q')g) = \deg(r' - r) \lt \deg(g)$ which is impossible; then $r = r'$ quickly follows from $q = q'$.
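The inductive existence argument is effectively an algorithm: repeatedly cancel the leading term of $f$ against a multiple of $g$. Here is a hedged Python sketch for a monic divisor, using exact rational coefficients (the function name is illustrative):

```python
# Division with remainder in R[x] for monic g, following the proof:
# kill the leading term of f with a multiple of g until deg f < deg g.
# Coefficient lists are lowest degree first; Fractions keep things exact.

from fractions import Fraction

def divmod_poly(f, g):
    """Return (q, r) with f = q*g + r and deg r < deg g; g monic."""
    f = [Fraction(a) for a in f]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    while len(f) >= len(g) and any(f):
        shift = len(f) - len(g)
        c = f[-1]                      # leading coefficient of f
        q[shift] += c
        for i, b in enumerate(g):      # subtract c * x^shift * g
            f[shift + i] -= c * b
        while len(f) > 1 and f[-1] == 0:
            f.pop()                    # drop the cancelled leading term
    return q, f

# Divide z^3 + 2z + 1 by z^2 + 1:  q = z, r = z + 1
q, r = divmod_poly([1, 2, 0, 1], [1, 0, 1])
assert q == [0, 1] and r == [1, 1]
```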

If $k$ is a field, then $k[x]$ is a Euclidean domain. As a result, $k[x]$ is a principal ideal domain, and therefore a unique factorization domain.

See Euclidean domain for a proof.

For any commutative ring $R$, if $a \in R$ is a root of $p(x) \in R[x]$, i.e., if the value of the polynomial function $p(a)$ is $0$, then $p(x)$ is of the form $(x - a)q(x)$ for some $q(x) \in R[x]$.

Since $x - a$ is monic, we may write $p(x) = (x - a)q(x) + r$ where $\deg(r) \lt 1$, whence $r$ is a constant. Evaluating the polynomial function at $x = a$ then gives $r = 0$.
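The proof can be run as Ruffini's rule (synthetic division), a one-pass Python sketch in which the final Horner carry is exactly the remainder $r = p(a)$ (function name illustrative):

```python
# Factor theorem via Ruffini's rule: divide p(x) by the monic
# polynomial x - a in one pass.  The final "carry" is the remainder
# r = p(a); when a is a root it vanishes, exhibiting p = (x - a) q.

def divide_by_linear(p, a):
    """Return (q, r) with p(x) = (x - a) q(x) + r.
    p is a coefficient list, lowest degree first."""
    q = []
    carry = 0
    for c in reversed(p):       # Horner: carry accumulates p(a)
        carry = carry * a + c
        q.append(carry)
    r = q.pop()                 # the last carry is the remainder p(a)
    return list(reversed(q)), r

# p(x) = x^2 - 3x + 2 has the root a = 1:
q, r = divide_by_linear([2, -3, 1], 1)
assert r == 0
assert q == [-2, 1]             # p(x) = (x - 1)(x - 2)
```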

This observation may be exploited in various neat ways. One is that if $p(x)$ is a polynomial, then $p(y) = p(x) + (y - x)q(x, y)$ for some unique $q(x, y) \in k[x, y]$. A consequence is that the Lawvere theory of commutative $k$-algebras is a Fermat theory. The derivative of $p$ may be defined to be $q(x, x) \in k[x]$.

Last revised on August 21, 2024 at 01:41:55.