
Simple proof of Prékopa's Theorem: log-concavity is preserved by marginalization


The following result is well-known:

Suppose that $H(x,y)$ is a log-concave distribution for $(x,y) \in \mathbb R^m \times \mathbb R^n$, so that by definition we have$$H \left( (1 - \lambda)(x_1,y_1) + \lambda (x_2,y_2) \right) \geq H(x_1,y_1)^{1 - \lambda} H(x_2,y_2)^{\lambda},$$ and let $M(y)$ denote the marginal distribution obtained by integrating over $x$:$$M(y) = \int_{\mathbb{R}^m} H(x,y) \, dx.$$ Let $y_1, y_2 \in \mathbb R^n$ and $\lambda \in (0,1)$ be given. Then the Prékopa–Leindler inequality applies, and it can be written in terms of $M$ as$$M((1-\lambda) y_1 + \lambda y_2) \geq M(y_1)^{1-\lambda} M(y_2)^\lambda,$$ which is log-concavity of $M$.
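As a quick sanity check (a hypothetical Gaussian example of my own, not part of the statement): take $H(x,y) = e^{-(x^2 + xy + y^2)}$ on $\mathbb R \times \mathbb R$, which is log-concave because the quadratic form $x^2 + xy + y^2$ is convex. Completing the square in $x$ gives$$M(y) = \int_{\mathbb R} e^{-(x^2 + xy + y^2)} \, dx = \sqrt{\pi} \, e^{-\frac{3}{4} y^2},$$which is again log-concave, as the theorem predicts.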

Now, I want to understand this in a very simple example, with $f: \mathbb R^2 \rightarrow \mathbb R$:

$$e^{-g(y)} = \int_{\mathbb R} e^{-f(y,z)} \ dz.$$

Then, I want to prove that $g'' \ge 0$ holds whenever the Hessian $D^2 f$ is positive definite everywhere. We assume for simplicity that $f$ is such that the above integral is well-defined.
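Before looking for a proof, here is a minimal numerical sketch of the claim, using the hypothetical test function $f(y,z) = y^2 + yz + z^2$ (my choice, not from the statement above), whose Hessian $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ is positive definite:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical test function: f(y, z) = y^2 + y*z + z^2.
# Its Hessian [[2, 1], [1, 2]] has eigenvalues 1 and 3, so D^2 f > 0 everywhere.
def f(y, z):
    return y**2 + y * z + z**2

def g(y):
    """g(y) = -log( integral of exp(-f(y, z)) dz ), computed by quadrature."""
    val, _ = quad(lambda z: np.exp(-f(y, z)), -np.inf, np.inf)
    return -np.log(val)

def g_second(y, h=1e-2):
    """Central finite-difference approximation of g''(y)."""
    return (g(y + h) - 2.0 * g(y) + g(y - h)) / h**2

for y in (-2.0, -0.5, 0.0, 1.0, 3.0):
    print(f"y = {y:5.1f}   g''(y) ~ {g_second(y):.4f}")
```

For this particular $f$ the integral is Gaussian and can be done in closed form: $g(y) = \tfrac{3}{4} y^2 - \tfrac{1}{2} \log \pi$, so $g''(y) = \tfrac{3}{2} > 0$, which is what the script reports.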

It is easy to see that

$$g''(y) = \langle D_{yy}f \rangle_z - \operatorname{Var}_z (D_{y}f)$$

where $\langle \cdot \rangle_z$ is the expected value $$ \langle F \rangle_z(y) := \frac{\int_{\mathbb R} F(y,z) \, e^{-f(y,z)} \, dz}{\int_{\mathbb R} e^{-f(y,z)} \, dz}$$ and $\operatorname{Var}_z$ is the variance with respect to the probability measure with density $p(z) \propto e^{-f(y,z)}$.
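For completeness, the computation behind this identity, differentiating under the integral sign (assuming enough regularity and decay to justify doing so): first,$$g'(y) = \frac{\int_{\mathbb R} \partial_y f \, e^{-f(y,z)} \, dz}{\int_{\mathbb R} e^{-f(y,z)} \, dz} = \langle \partial_y f \rangle_z,$$and differentiating once more, keeping track of the $y$-dependence of the weight $e^{-f}$,$$g''(y) = \langle \partial_{yy} f \rangle_z - \Big( \langle (\partial_y f)^2 \rangle_z - \langle \partial_y f \rangle_z^2 \Big) = \langle D_{yy} f \rangle_z - \operatorname{Var}_z (D_y f).$$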

However, it is not at all clear to me from this representation why $g''\ge 0$ holds.

Is there a pedestrian way to see this from the above expression for the second derivative?

I am looking for a derivation of this inequality that is more calculus-based (using the second derivative) than the usual arguments via convex combinations.

