Let $R$ be a commutative ring. A polynomial with coefficients in $R$ is an element of a polynomial ring over $R$. A polynomial ring over $R$ consists of a set $X$ whose elements are called “variables” or “indeterminates”, and a function $X \to R[X]$ to (the underlying set of) a commutative $R$-algebra that is universal among such functions, so that $R[X]$ is the free commutative $R$-algebra generated by $X$; a polynomial is then an element of the underlying set of $R[X]$.
Much like “vector”, the term “polynomial” in this sense may seem slightly deprecated from the viewpoint of modern mathematics. We no longer think of a vector space as consisting of things called “vectors” (i.e. we don’t assign an objective meaning to “vectors”); it’s the other way around, where we introduce a type of structure called a vector space and then, relative to a given vector space context, declare that “vector” just means an element therein. Similarly, the term “polynomial” in the sense above has seemingly been subordinated to the structural concept of polynomial ring. (In Linderholm’s Mathematics Made Difficult, page 152, there is an amusing passage where someone points at the expression $a_0 + a_1 X + a_2 X^2 + \ldots + a_n X^n$ and asks “Well, how about it? Is it a polynomial, or isn’t it?” and the respondent says, “Yeah, sure, I guess. Looks like one. Yeah, sure, that’s a polynomial all right” – an answer which is not wrong exactly, but not quite right either, since it fails to recognize that there is something questionable about the question.)
On further reflection, however, we might more objectively identify the concept of “polynomial” (let us say a polynomial in $n$ variables) with a definable $n$-ary operation in the theory of commutative $R$-algebras. From a categorical perspective, if $U: CommAlg_R \to Set$ is the forgetful functor, a definable $n$-ary operation means a natural transformation $U^n \to U$. The connection is that the functor $U^n$ is representable, being represented by the free object $F[n] = R[x_1, \ldots, x_n]$, so that a natural transformation $U^n \to U$ is canonically identified with a transformation $\hom(R[x_1, \ldots, x_n], -) \to U$ or to an element of $U R[x_1, \ldots, x_n]$, by the Yoneda lemma. In pursuit of this objective meaning (which is essentially due to Lawvere), we find that the “variable” $x_i$ stands for the $i^{th}$ projection map $U^n \to U$, and that the meaning of (let’s say) $x_1 x_2 \in R[x_1, x_2, x_3]$ is that it is the definable operation whose instantiation at any commutative $R$-algebra $A$ is the function $A^3 \to A$ taking $(a, b, c)$ to $a b$.
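To make this concrete, here is a minimal Python sketch (all names hypothetical, not from any library): a polynomial is encoded as a dictionary from exponent tuples to coefficients, and its instantiation at any commutative ring $A$ is the induced function $A^n \to A$.

```python
def definable_op(p):
    """Turn a polynomial (dict mapping exponent tuples to coefficients)
    into its instantiation: a function A^n -> A, for any ring A whose
    elements support +, * and integer coefficients."""
    def instantiate(*args):
        total = 0
        for exponents, coeff in p.items():
            term = coeff
            for a, e in zip(args, exponents):
                term = term * a**e
            total = total + term
        return total
    return instantiate

# x1*x2 in R[x1, x2, x3]: exponent tuple (1, 1, 0), coefficient 1
x1_x2 = {(1, 1, 0): 1}
op = definable_op(x1_x2)
```

At integer arguments, `op(a, b, c)` returns `a * b`, ignoring the third variable, exactly as described above.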
Of course there are traditional standard expressions that people usually have in mind when they speak of “a polynomial” as such. But leaving it at that, where polynomials are merely identified with certain types of expressions (as by the characters in Linderholm’s book), ignores the deeper objective meaning of definable operations, which is of course the actual point of it all.
Finally, sometimes “polynomial” is construed to mean a polynomial function. This is actually just a particular instantiation of a definable operation. The default meaning is that, if we are working for instance with a polynomial ring in one variable $R[x]$, then we have a composite

$$R[x] \;\cong\; \mathrm{Nat}(U, U) \;\longrightarrow\; \hom(U R, U R)$$
where the second map sends a natural transformation to its component at $R$ itself, regarded as an $R$-algebra. Put differently, the set $\hom(U R, U R)$ carries a commutative $R$-algebra structure under the pointwise operations, and there is a unique $R$-algebra map $R[x] \to \hom(U R, U R)$ that sends ‘$x$’ to the identity map. The value of a polynomial $p \in R[x]$ under this map is then the corresponding polynomial function $U R \to U R$.
This conflation of polynomials with polynomial functions is often forgivable, particularly in those cases where the map $R[x] \to \hom(U R, U R)$ is injective (so that ‘polynomials’ are identified with certain types of functions). Of course the map won’t be injective if $R$ is finite, to give one example. But in analysis, where we consider functions on $\mathbb{R}$ or $\mathbb{C}$, the conflation is familiar and rarely cause for concern. The conflation may also be responsible for certain notational artifacts, such as the common (and useful!) practice of writing $p(x)$ for polynomials $p$.
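The finite case mentioned above is easy to check by hand; here is a small Python sketch (hypothetical names, coefficients listed lowest degree first) showing that over $\mathbb{Z}/2$ the nonzero polynomial $x^2 + x$ induces the zero function, so the map to functions is not injective.

```python
def eval_poly(coeffs, a, modulus):
    """Evaluate a polynomial (list of coefficients, lowest degree first)
    at a, with all arithmetic mod `modulus`."""
    total = 0
    for c in reversed(coeffs):        # Horner's rule
        total = (total * a + c) % modulus
    return total

p = [0, 1, 1]   # x + x^2, a nonzero polynomial over Z/2
values = [eval_poly(p, a, 2) for a in range(2)]
```

Since both field elements are roots, `values` is `[0, 0]`: a nonzero polynomial with the same induced function as the zero polynomial.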
With these preliminary remarks out of the way, we recall some of the more syntactic considerations with an example.
The set of polynomials in one variable $z$ with coefficients in $R$ is the set $R[\mathbb{N}]$ of all formal linear combinations on elements $n \in \mathbb{N}$, thought of as powers $z^n$ of the variable $z$. As a string of symbols, a polynomial is frequently represented by a form like

$$a_0 + a_1 z + a_2 z^2 + \ldots + a_n z^n$$

where $n$ is an arbitrary natural number and $a_0, \dots, a_n \in R$, subject to the usual fine print (where we work modulo the congruence generated by equations of the form

$$a_0 + a_1 z + \ldots + a_n z^n \;=\; a_0 + a_1 z + \ldots + a_n z^n + 0 z^{n+1}$$

so that we ignore coefficients of zero). (The degree of a polynomial is the maximum $n$ for which $a_n$ is nonzero, in which case the leading term of the polynomial is $a_n z^n$. A polynomial is constant if its degree is $0$. The degree of the zero polynomial may be left chastely undefined, although for some purposes it may be convenient to define it as $-1$ or as $-\infty$. Even $0$ is possible if one is prepared to observe some fine print. Chacun à son goût.)
This set is equipped with an $R$-module structure (where formal linear combinations are added and scalar-multiplied as usual) and also with the structure of a ring, in fact a commutative algebra over $R$, denoted $R[z]$ and called the polynomial ring or ring of polynomials, with ring multiplication

$$\Big(\sum_m a_m z^m\Big) \cdot \Big(\sum_n b_n z^n\Big) \;=\; \sum_k \Big(\sum_{m+n=k} a_m b_n\Big) z^k,$$

the unique one that bilinearly extends the multiplication of monomials given by

$$z^m \cdot z^n \;=\; z^{m+n}.$$
Thus, one way to construct a polynomial ring is first to construct the free commutative monoid generated by a set $X$ (the monoid of monomials), and then to construct the free $R$-module generated by the underlying set of that monoid, extending the monoid multiplication to a ring multiplication by bilinearity.
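The one-variable case of this construction fits in a few lines of Python (hypothetical names): the free commutative monoid on one generator is $(\mathbb{N}, +)$, so polynomials are finitely supported maps $\mathbb{N} \to R$, here dictionaries, and the bilinear extension of $z^m \cdot z^n = z^{m+n}$ is convolution of coefficients.

```python
def poly_mul(p, q):
    """Multiply polynomials given as dicts {exponent: coefficient},
    bilinearly extending z^m * z^n = z^(m+n)."""
    result = {}
    for m, a in p.items():
        for n, b in q.items():
            result[m + n] = result.get(m + n, 0) + a * b
    # work modulo zero coefficients, per the fine print above
    return {e: c for e, c in result.items() if c != 0}

# (1 + z) * (1 - z) = 1 - z^2
prod = poly_mul({0: 1, 1: 1}, {0: 1, 1: -1})
```

The degree-1 terms cancel, leaving `{0: 1, 2: -1}`, i.e. $1 - z^2$.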
In addition to the ring structure, there is a further operation $R[z] \times R[z] \to R[z]$ which may be described as “substitution”; see Remark 5 below for a general description (which applies in fact to any Lawvere theory).
Moreover, there is a noncommutative analogue of polynomial ring on a set $X$, efficiently described as the free $R$-module generated by the (underlying set of the) free monoid on $X$. This carries also a ring structure, with ring multiplication induced from the monoid multiplication. A far-reaching generalization of this construction is given at distributive law.
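The noncommutative variant differs only in the choice of monoid: words (tuples of generators) under concatenation instead of exponent vectors under addition. A Python sketch (hypothetical names):

```python
def nc_mul(p, q):
    """Multiply noncommutative polynomials given as dicts {word: coefficient},
    where a word is a tuple of generators from X."""
    result = {}
    for u, a in p.items():
        for v, b in q.items():
            w = u + v                    # concatenation: NOT commutative
            result[w] = result.get(w, 0) + a * b
    return {w: c for w, c in result.items() if c != 0}

x = {('x',): 1}
y = {('y',): 1}
xy, yx = nc_mul(x, y), nc_mul(y, x)      # xy and yx are distinct monomials
```

Here `xy` is the word $(x, y)$ and `yx` is $(y, x)$; the two products are different, as expected in the free (noncommutative) algebra.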
Finally: polynomial algebras may be regarded as graded algebras (graded over $\mathbb{N}$). Specifically: let us regard $R[X]$ as the free $R$-module generated by (the underlying set of) the free commutative monoid $F(X)$. The monoid homomorphism $F(!): F(X) \to F(1) \cong \mathbb{N}$ induced by the unique function $!: X \to 1$ gives an $\mathbb{N}$-fibering of $F(X)$ over $\mathbb{N}$, with typical fiber $F(X)_n$ whose elements are called monomials of degree $n$. Then the homogeneous component of degree $n$ in $R[X]$ is the $R$-submodule generated by the subset $F(X)_n \subset F(X)$. The elements of this component are called homogeneous polynomials of degree $n$.
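With monomials encoded as exponent tuples, the grading map $F(!)$ is just "sum of exponents", and extracting a homogeneous component is a filter. A Python sketch (hypothetical names):

```python
def homogeneous_component(p, n):
    """Extract the degree-n homogeneous part of a polynomial given as a
    dict {exponent_tuple: coefficient}; the total degree of a monomial
    is the sum of its exponents (the image under F(!))."""
    return {e: c for e, c in p.items() if sum(e) == n}

# p = 3 + x*y + x^2 - 5*y^2, with exponent tuples over variables (x, y)
p = {(0, 0): 3, (1, 1): 1, (2, 0): 1, (0, 2): -5}
quadratic_part = homogeneous_component(p, 2)
```

The degree-2 component collects $x y + x^2 - 5 y^2$, while the degree-0 component is the constant term alone.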
By the definition of free objects one needs to check that algebra homomorphisms

$$f \colon R[z] \longrightarrow K$$

to another algebra $K$ are in natural bijection with functions of sets

$$\bar f \colon \{z\} \longrightarrow U(K)$$

from the singleton to the set underlying $K$. Take $\bar f \coloneqq f(z)$. Using $R$-linearity, this is directly seen to yield the desired bijection.
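Concretely, the bijection says an algebra map out of $R[z]$ is freely determined by where $z$ goes, and the map determined by $z \mapsto k$ is evaluation at $k$. A small Python illustration (hypothetical names, integer coefficients, polynomials as coefficient lists with lowest degree first):

```python
def extend(k):
    """The unique algebra map R[z] -> K sending z to k: evaluation at k."""
    def phi(coeffs):
        return sum(c * k**i for i, c in enumerate(coeffs))
    return phi

phi = extend(3)           # the map Z[z] -> Z determined by z |-> 3
p, q = [1, 2], [5, 0, 1]  # 1 + 2z  and  5 + z^2
pq = [5, 10, 1, 2]        # their product (1 + 2z)(5 + z^2), computed by hand
```

That `phi(pq) == phi(p) * phi(q)` (both sides come to 98) illustrates that evaluation is indeed a ring homomorphism.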
Similarly, the set of polynomials in any given set of variables with coefficients in $R$ is the free commutative $R$-algebra on that set of generators; see symmetric power and symmetric algebra.
As usual in the study of universal algebra via Lawvere theories, there is an operad whose $n^{th}$ component $C_n$ is the free algebra $R[x_1, \ldots, x_n]$, and whose operadic multiplication is given by maps

$$C_k \times C_{n_1} \times \ldots \times C_{n_k} \longrightarrow C_n$$

($n = n_1 + \ldots + n_k$) that take a tuple of elements $(p; q_1, \ldots, q_k)$ to $p(q_1(x), \ldots, q_k(x))$. Formally, it takes this tuple to the value of $p$ under the unique algebra map $R[x_1, \ldots, x_k] \to R[x_1, \ldots, x_n]$ that extends the mapping $x_j \mapsto i_j(q_j)$. Here the $i_j: R[x_1, \ldots, x_{n_j}] \to R[x_1, \ldots, x_n]$ are appropriate coproduct inclusions (in the category of commutative rings), where $i_j(x_l) = x_{n_1 + \ldots + n_{j-1} + l}$. A particularly important case of substitution is the case $k=1$ and $n_1 = 1$, where the map $R[x] \times R[x] \to R[x]$ is ordinary substitution $(p, q) \mapsto p(q(x))$. This is a special case of the more general notion of Tall-Wraith monoid.
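The one-variable substitution $(p, q) \mapsto p(q(x))$ can be sketched in Python (hypothetical names, coefficient lists with lowest degree first), using a Horner-style evaluation of $p$ at the polynomial $q$:

```python
def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

def substitute(p, q):
    """Return p(q(x)): evaluate p, coefficient by coefficient from the
    top down, in the ring R[x] with q playing the role of the argument."""
    result = [0]
    for c in reversed(p):
        result = poly_add(poly_mul(result, q), [c])
    while len(result) > 1 and result[-1] == 0:   # trim trailing zeros
        result.pop()
    return result

# p = 1 + x^2, q = 2 + x:  p(q(x)) = (x + 2)^2 + 1 = 5 + 4x + x^2
comp = substitute([1, 0, 1], [2, 1])
```

The result `[5, 4, 1]` is the coefficient list of $5 + 4x + x^2$, matching the expansion by hand.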
In case $R$ is an integral domain, the field of fractions of $R[z]$ is the field $R(z)$ of rational functions.
If $R = \mathbb{R}$ is the real numbers regarded as a Euclidean space equipped with its metric topology, regard a polynomial $P \in \mathbb{R}[X]$ as a function $\mathbb{R} \to \mathbb{R}$. Then this is a continuous function. See at polynomials are continuous.
In the case where $R = k$ is a field, the polynomial ring $k[x]$ has a number of useful properties. One is that it is a Euclidean domain, where the degree serves as the Euclidean function:
Let $R$ be a commutative ring. Given $f, g \in R[x]$ where the leading coefficient of $g$ is a unit (e.g., if $g$ is a monic polynomial), there are unique $q, r \in R[x]$ such that $f = q \cdot g + r$ and $\deg(r) \lt \deg(g)$ (with the convention $\deg(0) = -\infty$).
If $\deg(f) \lt \deg(g)$, then $q = 0$ and $r = f$ will serve. Otherwise we may argue by induction on $\deg(f)$: if $a_m x^m$ is the leading term of $f$ and $b_n x^n$ the leading term of $g$, then $h(x) = f(x) - a_m b_n^{-1} x^{m-n} g(x)$ has lower degree than $f(x)$. This proves existence. For uniqueness, suppose $q \cdot g + r = q' \cdot g + r'$ with $q \neq q'$; since the leading coefficient of $g$ is a unit, $\deg((q - q')g) = \deg(q - q') + \deg(g) \geq \deg(g)$, contradicting $\deg((q - q')g) = \deg(r' - r) \lt \deg(g)$. Then $r = r'$ quickly follows from $q = q'$.
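The existence argument in the proof is an algorithm; here is a Python sketch (hypothetical names, coefficient lists with lowest degree first), restricted to monic $g$ so that no inverses are needed and it works over any commutative ring:

```python
def divmod_poly(f, g):
    """Return (q, r) with f = q*g + r and deg(r) < deg(g), for monic g.
    Repeatedly subtracts c * x^(m-n) * g, as in the inductive step."""
    f = list(f)
    q = [0] * max(len(f) - len(g) + 1, 1)
    while len(f) >= len(g) and any(f):
        while f and f[-1] == 0:          # strip zeros so the leading
            f.pop()                      # term is genuine
        if len(f) < len(g):
            break
        shift = len(f) - len(g)          # m - n in the proof
        c = f[-1]                        # leading coefficient a_m (b_n = 1)
        q[shift] += c
        for i, b in enumerate(g):        # f <- f - c * x^shift * g
            f[shift + i] -= c * b
    return q, f

# divide f = 1 + 2x + x^3 by the monic g = 1 + x^2:  q = x, r = 1 + x
q, r = divmod_poly([1, 2, 0, 1], [1, 0, 1])
```

Indeed $x^3 + 2x + 1 = x \cdot (x^2 + 1) + (x + 1)$, and the degree of the remainder is below $\deg(g) = 2$.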
If $k$ is a field, then $k[x]$ is a Euclidean domain. As a result, $k[x]$ is a principal ideal domain, and therefore a unique factorization domain.
See Euclidean domain for a proof.
For any commutative ring $R$, if $a \in R$ is a root of $p(x) \in R[x]$, i.e., if the value of the polynomial function $p(a)$ is $0$, then $p(x)$ is of the form $(x - a) q(x)$ for some $q(x) \in R[x]$.
Since $x - a$ is monic, we may write $p(x) = (x - a)q(x) + r$ where $\deg(r) \lt 1$, whence $r$ is a constant. Evaluating the polynomial function at $x = a$ gives $0 = p(a) = (a - a)q(a) + r = r$, so the term $r$ is $0$.
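Division by the monic polynomial $x - a$ is exactly synthetic division; a Python sketch (hypothetical names, coefficients highest degree first in this block):

```python
def divide_by_linear(p, a):
    """Divide p (coefficients, highest degree first) by x - a.
    Returns (q, r) with p(x) = (x - a) q(x) + r, where r = p(a)."""
    q = []
    carry = 0
    for c in p:
        carry = carry * a + c    # Horner step: accumulates p(a)
        q.append(carry)
    return q[:-1], q[-1]         # quotient coefficients and remainder

# p(x) = x^2 - 3x + 2 has root a = 2:  p(x) = (x - 2)(x - 1), remainder 0
q, r = divide_by_linear([1, -3, 2], 2)
```

The remainder equals $p(a)$, so $a$ is a root exactly when the remainder vanishes, recovering the factorization $p(x) = (x - a) q(x)$ of the proposition.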
This observation may be exploited in various neat ways. One is that if $p(x)$ is a polynomial, then $p(y) = p(x) + (y - x)q(x, y)$ for some unique $q(x, y) \in R[x, y]$. A consequence is that the Lawvere theory of commutative $R$-algebras is a Fermat theory. The derivative of $p$ may be defined to be $q(x, x) \in R[x]$.
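The quotient $q(x, y)$ can be written down explicitly using the identity $y^k - x^k = (y - x)\sum_{i+j=k-1} x^i y^j$, and setting $y = x$ then recovers the familiar formula $p' = \sum_k k\, a_k x^{k-1}$. A Python sketch (hypothetical names, coefficient lists with lowest degree first):

```python
def fermat_quotient(p):
    """The unique q(x, y) with p(y) = p(x) + (y - x) q(x, y), as a dict
    {(i, j): coefficient} of monomials x^i y^j."""
    q = {}
    for k, a in enumerate(p):
        for i in range(k):               # y^k - x^k = (y-x) sum x^i y^(k-1-i)
            key = (i, k - 1 - i)
            q[key] = q.get(key, 0) + a
    return q

def derivative(p):
    """p'(x) = q(x, x), collected back into a coefficient list."""
    d = [0] * max(len(p) - 1, 1)
    for (i, j), c in fermat_quotient(p).items():
        d[i + j] += c
    return d

# p = 1 + 3x + x^3  ==>  p' = 3 + 3x^2
dp = derivative([1, 3, 0, 1])
```

The result `[3, 0, 3]` is the coefficient list of $3 + 3x^2$, agreeing with the usual derivative.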