A compact space need not be second countable, e.g. an uncountable set with the finite-complement topology. If a compact Hausdorff space is second countable, then it is a Polish space, namely a separable completely metrizable space. All uncountable Polish spaces are Borel isomorphic; in particular, they are Borel isomorphic to the unit interval. Thus every compact space admits a probability measure (in fact every nonempty measurable space admits a probability measure: simply take a Dirac measure).
Every compact group admits a suitably regular bi-invariant probability measure, namely the Haar measure.
(A wrong proof: take a probability measure obtained as above, translate it from both sides, and take the average. This is wrong, and the argument above is irrelevant. The honest route is to first show that a left-invariant measure exists, which one proves quite nontrivially using the Riesz representation theorem, and then to show that it is in fact bi-invariant: compactness forces the modular function to be trivial, since its image is a compact subgroup of the multiplicative group of positive reals, hence $\{1\}$.)
A group is called unimodular if its modular function is trivial. A group is unimodular if and only if its left Haar measure and right Haar measure coincide. Examples of unimodular groups are abelian groups, compact groups, discrete groups, semisimple Lie groups and connected nilpotent Lie groups. An example of a non-unimodular group is the $ax+b$ group of transformations of the form $x \mapsto ax + b$ with $a > 0$, $b \in \mathbb{R}$. This example shows that a solvable Lie group need not be unimodular.
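To make the non-unimodularity of the $ax+b$ group concrete, here is a sketch of the standard computation, under the convention $\mu_L(Eg) = \Delta(g)\,\mu_L(E)$ (some texts use the reciprocal convention, which flips $\Delta$ to $a$). Writing $g = (a,b)$ for $x \mapsto ax+b$, the multiplication is $(a,b)(a',b') = (aa',\, ab'+b)$. Left translation by $(a,b)$ has Jacobian $a^2$ in the variables $(a',b')$, and right translation by $(a_0,b_0)$ has Jacobian $a_0$, so
$$d\mu_L = \frac{da\,db}{a^2}, \qquad d\mu_R = \frac{da\,db}{a}.$$
Changing variables gives $\mu_L(E g_0) = a_0^{-1}\,\mu_L(E)$, hence
$$\Delta(a,b) = a^{-1} \not\equiv 1,$$
so the left and right Haar measures genuinely differ.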
Every compact Lie group (which is very well understood) admits a bi-invariant Riemannian metric making it into a space of nonnegative sectional curvature (such spaces are also well understood). In general, noncompact Lie groups do not carry a bi-invariant Riemannian metric, though all connected semisimple (or reductive) Lie groups carry a bi-invariant pseudo-Riemannian metric, given by the Killing form. Indeed, the existence of a bi-invariant Riemannian metric forces the Lie algebra to be the Lie algebra of a compact Lie group.
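The nonnegative-curvature claim comes from the standard formula for the sectional curvature of a bi-invariant metric: for orthonormal $X, Y$ in the Lie algebra,
$$\mathrm{sec}(X,Y) = \tfrac{1}{4}\,\big\|[X,Y]\big\|^2 \;\ge\; 0,$$
so the curvature is nonnegative, vanishing exactly on planes spanned by commuting directions.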
Thanks to the Haar measure, every continuous representation of a compact group is unitarizable. In general, for non-compact groups, it is a much more serious question which representations are unitarizable.
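The unitarization is the classical averaging argument: given a continuous representation $\pi$ of a compact group $G$ on a Hilbert space $(H, \langle\cdot,\cdot\rangle)$, define
$$\langle v, w\rangle_\pi := \int_G \langle \pi(g)v,\, \pi(g)w\rangle \, d\mu(g),$$
where $\mu$ is the Haar probability measure. Invariance of $\mu$ gives $\langle \pi(h)v, \pi(h)w\rangle_\pi = \langle v, w\rangle_\pi$ for all $h \in G$, so $\pi$ is unitary with respect to $\langle\cdot,\cdot\rangle_\pi$. (In infinite dimensions one also checks, e.g. via uniform boundedness, that the new inner product is equivalent to the old one.)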
However, asking for bi-invariant vector fields is a naive question with a negative answer in general: a left-invariant vector field is also right-invariant precisely when the corresponding Lie algebra element is fixed by the adjoint action, so a nonabelian group typically has few or no nonzero bi-invariant vector fields.
Fubini-Tonelli principle: imagine a saw-shaped "square" with very deep sawteeth and small area. If we integrate iteratedly, then in one order the inner integral can only be bounded by the sup-norm, and the resulting estimate (roughly the sup-norm times the measure of the square) is very rough; in the other order, the sparsity is easily detected and we get a much more accurate estimate. This is the Tonelli principle, which guides us to integrate first along the direction that detects the sparsity. Similarly, cancellation may be easily detected from one perspective but not from another. In this case we should integrate first in the coordinate that carries the cancellation, and this is the Fubini principle.
Integration by parts is in some sense also a Fubini principle. Take summation by parts for example: if we interpret the sum $\sum_n a_n b_n$ as the total area of consecutive growing rectangles (stacked along one axis), then summation by parts tells us that this area is the area of the enveloping rectangle minus the total area of the complementary consecutive growing rectangles (stacked along the other axis): with $A_n = a_1 + \cdots + a_n$,
$$\sum_{n=1}^{N} a_n b_n = A_N b_N - \sum_{n=1}^{N-1} A_n \,(b_{n+1} - b_n).$$
If the boundary term $A_N b_N$ vanishes (which is usually the case in the limit), then the original sum equals minus the complementary sum. It is usually easier to get a precise estimate for the latter, since we are detecting the cancellation from the other perspective.
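The summation-by-parts identity just described, $\sum_{n=1}^N a_n b_n = A_N b_N - \sum_{n=1}^{N-1} A_n (b_{n+1} - b_n)$ with $A_n = a_1 + \cdots + a_n$, is easy to check numerically; here is a minimal sketch (the helper names are mine):

```python
from itertools import accumulate
from fractions import Fraction

def abel_sum(a, b):
    """Compute sum(a_n * b_n) via summation by parts.

    Uses the identity: sum_{n=1}^N a_n b_n
      = A_N b_N - sum_{n=1}^{N-1} A_n (b_{n+1} - b_n),
    where A_n = a_1 + ... + a_n are the partial sums.
    """
    A = list(accumulate(a))      # partial sums A_1, ..., A_N
    N = len(a)
    boundary = A[-1] * b[-1]     # area of the enveloping rectangle
    # total area of the complementary growing rectangles:
    correction = sum(A[n] * (b[n + 1] - b[n]) for n in range(N - 1))
    return boundary - correction

# Sanity check against the direct sum, in exact rational arithmetic,
# with an alternating (cancelling) sequence a and a decaying sequence b.
a = [Fraction((-1) ** n, n + 1) for n in range(10)]
b = [Fraction(1, (n + 1) ** 2) for n in range(10)]
assert abel_sum(a, b) == sum(x * y for x, y in zip(a, b))
```

In analysis one lets $N \to \infty$ and exploits that the partial sums $A_n$ of an oscillating sequence stay bounded, which is exactly the "cancellation seen from the other perspective."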
Integrate along cancellation/sparsity: many tricks, such as polar coordinates, integration over upper level sets (the layer-cake formula, whose importance is well known), the co-area formula, etc., can all be seen as instances of the Fubini-Tonelli principle. Differentiate along cancellation: in a broader sense, the principle even applies to PDEs. To solve certain PDEs, one uses the characteristics, along which significant cancellation happens, so that the PDE reduces to an ODE, or a second-order PDE reduces to a first-order stochastic ODE.
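The layer-cake formula mentioned above, $\int f \, d\mu = \int_0^\infty \mu(\{f > t\}) \, dt$ for $f \ge 0$, has a discrete form that can be verified directly: for nonnegative integer values, the sum equals the sum over integer thresholds of the sizes of the upper level sets. A minimal sketch (the function name is mine):

```python
def layer_cake_sum(values):
    """Sum nonnegative integers by integrating sizes of upper level sets.

    Discrete layer-cake formula: sum(values) equals the sum over
    thresholds t = 0, 1, 2, ... of #{i : values[i] > t}.
    """
    total = 0
    t = 0
    while True:
        count = sum(1 for v in values if v > t)  # "measure" of {f > t}
        if count == 0:
            return total
        total += count
        t += 1

values = [3, 0, 5, 2, 2]
assert layer_cake_sum(values) == sum(values)  # both equal 12
```

Summing by thresholds rather than by index is precisely an exchange of the order of summation over the region $\{(i, t) : t < f_i\}$, i.e. Fubini-Tonelli.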
Randomization trick: sometimes we know cancellation/sparsity appears somewhere but it is hard to identify directly. One trick is to let a random vector detect the cancellation/sparsity for us: employ a suitable randomization to obtain an extra "axis", and integrate first along the random axis (i.e. take the average), so that the cancellation/sparsity is detected. Examples include random series, spherical projections, random (dyadic) grids, etc.
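A toy instance of the randomization trick: for fixed reals $a_1, \dots, a_n$ and independent uniform random signs $\varepsilon_i \in \{\pm 1\}$, averaging over the random "axis" gives $\mathbb{E}\big|\sum_i \varepsilon_i a_i\big|^2 = \sum_i a_i^2$, since every cross term $\varepsilon_i \varepsilon_j a_i a_j$ ($i \ne j$) cancels in expectation. For small $n$ the expectation can be computed exactly by enumerating all sign patterns; a minimal sketch (names are mine):

```python
from itertools import product
from fractions import Fraction

def mean_square_of_signed_sums(a):
    """E |sum_i eps_i a_i|^2 over independent uniform signs eps_i = +/-1.

    Averaging over the random signs makes every cross term
    eps_i eps_j a_i a_j (i != j) cancel, leaving sum_i a_i^2.
    """
    n = len(a)
    total = Fraction(0)
    for signs in product((1, -1), repeat=n):  # all 2^n sign patterns
        s = sum(e * x for e, x in zip(signs, a))
        total += Fraction(s * s)
    return total / 2 ** n

a = [Fraction(3), Fraction(-1), Fraction(4)]
assert mean_square_of_signed_sums(a) == sum(x * x for x in a)  # = 26
```

This is the $p = 2$ case underlying Khintchine-type inequalities: the random average isolates exactly the $\ell^2$ mass, which is how random series "detect" cancellation.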