Minkowski's inequality




What is commonly known as Minkowski’s inequality is the statement that the $p$-norm $\|f\|_p \coloneqq \left(\int_X |f|^p \, d\mu\right)^{1/p}$ on Lebesgue spaces indeed satisfies the triangle inequality.
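As a discrete analogue of this definition, the $p$-norm and the triangle inequality it is claimed to satisfy can be sketched in Python. The names `p_norm`, `f`, `g`, and `mu` are illustrative assumptions, not from the article; a finite weighted sum stands in for the integral:

```python
def p_norm(f, mu, p):
    """Discrete analogue of the L^p norm: (sum_i |f_i|^p * mu_i)^(1/p).

    f is a list of sample values, mu a list of measure weights.
    """
    return sum(abs(x) ** p * m for x, m in zip(f, mu)) ** (1.0 / p)

f = [1.0, -2.0, 3.0]
g = [0.5, 4.0, -1.0]
mu = [0.2, 0.3, 0.5]
p = 3

# the triangle inequality (Minkowski) for this discrete measure
lhs = p_norm([a + b for a, b in zip(f, g)], mu, p)
rhs = p_norm(f, mu, p) + p_norm(g, mu, p)
assert lhs <= rhs
```

This is only a numerical sanity check for one choice of data, of course, not a proof; the article's argument below establishes the inequality in general.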


Our proof of Minkowski’s inequality is broken down into a few simple lemmas. The plan is to boil it down to two things: the scaling axiom, and convexity of the function $x \mapsto |x|^p$ (as a function from real or complex numbers to nonnegative real numbers).

First, some generalities. Let $V$ be a (real or complex) vector space equipped with a function $\|(-)\| \colon V \to [0, \infty]$ that satisfies the scaling axiom: $\|t v\| = |t| \, \|v\|$ for all scalars $t$, and the separation axiom: $\|v\| = 0$ implies $v = 0$. As usual, we define the unit ball in $V$ to be $\{v \in V \mid \|v\| \leq 1\}$.


Given that the scaling and separation axioms hold, the following conditions are equivalent:

  1. The triangle inequality is satisfied.
  2. The unit ball is convex.
  3. If $\|u\| = \|v\| = 1$, then $\|t u + (1-t)v\| \leq 1$ for all $t \in [0, 1]$.

Since conditions 2 and 3 are pretty obviously equivalent, we just prove 1 and 3 are equivalent. Condition 1 implies condition 3 easily: if $\|u\| = 1 = \|v\|$ and $0 \leq t \leq 1$, we have

$$\|t u + (1-t)v\| \;\leq\; \|t u\| + \|(1-t)v\| \;=\; t\,\|u\| + (1-t)\,\|v\| \;=\; t + (1-t) \;=\; 1.$$

Now we prove that 3 implies 1. Suppose $\|v\|, \|v'\| \in (0, \infty)$. Let $u = \frac{v}{\|v\|}$ and $u' = \frac{v'}{\|v'\|}$ be the associated unit vectors. Then

$$\frac{v + v'}{\|v\| + \|v'\|} \;=\; \left(\frac{\|v\|}{\|v\| + \|v'\|}\right)\frac{v}{\|v\|} + \left(\frac{\|v'\|}{\|v\| + \|v'\|}\right)\frac{v'}{\|v'\|} \;=\; t u + (1-t)u'$$

where $t = \frac{\|v\|}{\|v\| + \|v'\|}$. If condition 3 holds, then

$$\|t u + (1-t)u'\| \leq 1,$$

but by the scaling axiom, this is the same as saying

$$\frac{\|v + v'\|}{\|v\| + \|v'\|} \leq 1,$$

which is the triangle inequality.
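The rescaling step in this direction of the argument can be checked numerically. The helper `p_norm` and the sample vectors below are illustrative assumptions, not from the article; the check confirms both the algebraic identity $t u + (1-t)u' = \frac{v + v'}{\|v\| + \|v'\|}$ and the resulting bound:

```python
def p_norm(v, p):
    # p-norm of a finite sequence (illustrative helper, counting measure)
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

p = 2.5
v = [1.0, -2.0, 0.5]
w = [3.0, 1.0, -1.0]

nv, nw = p_norm(v, p), p_norm(w, p)
t = nv / (nv + nw)
u = [x / nv for x in v]       # unit vector v / ||v||
u2 = [x / nw for x in w]      # unit vector v' / ||v'||

# t*u + (1-t)*u' equals (v + v') / (||v|| + ||v'||), as in the rescaling step
combo = [t * a + (1 - t) * b for a, b in zip(u, u2)]
target = [(a + b) / (nv + nw) for a, b in zip(v, w)]
assert all(abs(a - b) < 1e-12 for a, b in zip(combo, target))

# condition 3 then gives the triangle inequality
assert p_norm(combo, p) <= 1 + 1e-12
```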

Consider now $L^p$ with its $p$-norm $\|f\| = |f|_p$. By the lemma above, the Minkowski inequality is equivalent to

  • Condition 4: If $|u|_p^p = 1$ and $|v|_p^p = 1$, then $|t u + (1-t)v|_p^p \leq 1$ whenever $0 \leq t \leq 1$.

This allows us to remove the cumbersome exponent $1/p$ in the definition of the $p$-norm.


Define $\phi \colon \mathbb{C} \to \mathbb{R}$ by $\phi(x) = |x|^p$. Then $\phi$ is convex, i.e., for all $x, y$,

$$|t x + (1-t)y|^p \;\leq\; t\,|x|^p + (1-t)\,|y|^p$$

for all $t \in [0, 1]$.


The function $g \colon x \mapsto |x|$ is convex, and for $1 < p$ the function $f \colon t \mapsto t^p$ for $t \geq 0$ is monotone increasing and convex, by the first and second derivative tests. Thus $g(t x + (1-t)y) \leq t\,g(x) + (1-t)\,g(y)$, and then

$$f(g(t x + (1-t)y)) \;\leq\; f(t\,g(x) + (1-t)\,g(y)) \;\leq\; t\,f(g(x)) + (1-t)\,f(g(y)),$$

so $f \circ g \colon x \mapsto |x|^p$ is convex.
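The convexity inequality for $\phi(x) = |x|^p$ can be spot-checked at random points. The sampling setup below is an illustrative assumption, not from the article; a small tolerance absorbs floating-point error:

```python
import random

p = 3.0
phi = lambda x: abs(x) ** p  # the function whose convexity is claimed

random.seed(0)
for _ in range(1000):
    x = random.uniform(-5, 5)
    y = random.uniform(-5, 5)
    t = random.random()
    # convexity: phi(t*x + (1-t)*y) <= t*phi(x) + (1-t)*phi(y)
    assert phi(t * x + (1 - t) * y) <= t * phi(x) + (1 - t) * phi(y) + 1e-9
```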

Proof of Minkowski’s inequality

Let $u$ and $v$ be unit vectors in $L^p$. By condition 4, it suffices to show that $|t u + (1-t)v|_p^p \leq 1$ for all $t \in [0, 1]$. But

$$\int_X |t u + (1-t)v|^p \, d\mu \;\leq\; \int_X t\,|u|^p + (1-t)\,|v|^p \, d\mu$$

by the convexity lemma above. Using $\int_X |u|^p \, d\mu = 1 = \int_X |v|^p \, d\mu$, we are done.
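The whole reduction can be replayed in the discrete setting: normalize to unit vectors, mix them, and confirm condition 4. All names below (`p_norm_p`, `raw_u`, `mu`, etc.) are illustrative assumptions, not from the article:

```python
def p_norm_p(f, mu, p):
    # the integral of |f|^p against a discrete measure mu (no 1/p root)
    return sum(abs(x) ** p * m for x, m in zip(f, mu))

mu = [0.25, 0.25, 0.5]
p = 4
raw_u = [1.0, -2.0, 0.5]
raw_v = [0.0, 1.0, 3.0]

# normalize so that the integral of |u|^p and of |v|^p is 1
u = [x / p_norm_p(raw_u, mu, p) ** (1.0 / p) for x in raw_u]
v = [x / p_norm_p(raw_v, mu, p) ** (1.0 / p) for x in raw_v]

t = 0.3
mix = [t * a + (1 - t) * b for a, b in zip(u, v)]

# pointwise convexity integrates to t*1 + (1-t)*1 = 1, which is condition 4
assert p_norm_p(mix, mu, p) <= 1 + 1e-12
```

As before, this only illustrates the argument at one data point; the convexity lemma is what makes the bound hold for every $u$, $v$, and $t$.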

Another commonly seen proof of Minkowski’s inequality derives it with the help of Hölder's inequality; see there for some commentary on this. But this is probably not the first thing one would think of unless one knows the trick, whereas the alternative proof given above seems geometrically motivated and fairly simple.


Last revised on December 23, 2022 at 10:52:43.