A deep result is that, in order to invert a continuous map locally, it is enough that the map is everywhere differentiable with invertible derivative (see Tao’s post on the inverse function theorem for everywhere differentiable maps). This recalls another deep result, invariance of domain (there is a similar result that we may call “invariance of Borel sets”, see this SE post, though it does not require the dimensions to match).

Now let’s focus on ODEs. Assume that y is defined implicitly (sometimes we intentionally make it implicit) by F(x, y) = 0; differentiating the equation we get F_x + F_y · y′ = 0. Note that we are not losing much information, because any solution stays in a level set of F. But now y′ = −F_x / F_y, which is a reasonable ODE with the given initial data.
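As a minimal numerical sketch (a hypothetical example with F(x, y) = x² + y² − 1, whose zero level set is the unit circle), one can integrate the resulting ODE y′ = −F_x/F_y and check that the solution indeed stays on the level set:

```python
# Sketch: turning the implicit relation F(x, y) = 0 into the ODE
# y' = -F_x / F_y and integrating it numerically.
# Hypothetical choice: F(x, y) = x^2 + y^2 - 1 (unit circle).
import math

def F_x(x, y):  # partial derivative of F in x
    return 2 * x

def F_y(x, y):  # partial derivative of F in y
    return 2 * y

def slope(x, y):
    return -F_x(x, y) / F_y(x, y)

def rk4(x0, y0, x1, n=1000):
    """Integrate y' = slope(x, y) from x0 to x1 with n RK4 steps."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = slope(x, y)
        k2 = slope(x + h / 2, y + h * k1 / 2)
        k3 = slope(x + h / 2, y + h * k2 / 2)
        k4 = slope(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Starting at (0, 1), the solution should stay on the level set F = 0,
# i.e. follow the upper unit circle y = sqrt(1 - x^2).
y_end = rk4(0.0, 1.0, 0.5)
print(abs(y_end - math.sqrt(1 - 0.25)))  # small numerical error
```

The printed discrepancy is just the RK4 discretization error; the exact solution never leaves the level set, exactly as the remark above predicts.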

For some problems this has given us a desired formula for the solution:

Example 1:

Example 2:

Example 3:

Example 4:

The implicit function theorem is used substantially in the theory of differentiable manifolds and in the method of characteristics for first-order nonlinear PDEs.

The implicit function theorem can even be generalized to infinite dimensions; this generalization is the Nash–Moser theorem, whose proof implements the Nash–Moser iteration scheme.


Every compact group admits a suitably regular bi-invariant probability measure, namely a **Haar measure**. ~~(a wrong proof: take a probability measure obtained above, translate it from both sides and then take the average; this is wrong and the argument above is irrelevant)~~ To see this it is enough to show that there exists a left-invariant measure (which one proves quite nontrivially using the Riesz representation theorem), since compactness forces the modular function to be identically 1: it is a continuous homomorphism into the multiplicative group (0, ∞), and the only compact subgroup of (0, ∞) is {1}.

A group is called unimodular if its modular function is trivial. A group is unimodular if and only if its left and right Haar measures coincide. Examples of unimodular groups are abelian groups, compact groups, discrete groups, semisimple Lie groups and connected nilpotent Lie groups. An example of a non-unimodular group is the *ax* + *b* group of transformations x ↦ ax + b (a > 0) of the real line. This example shows that a solvable Lie group need not be unimodular.
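For concreteness, a direct computation (a sketch, under the usual parametrization of the *ax* + *b* group and one common convention for the modular function) gives the two Haar measures explicitly:

```latex
% Parametrize g = (a, b), a > 0, acting by x \mapsto ax + b, with
% composition (a_1, b_1)(a_2, b_2) = (a_1 a_2,\, a_1 b_2 + b_1).
d\mu_L(a, b) = \frac{da\, db}{a^2}, \qquad
d\mu_R(a, b) = \frac{da\, db}{a}, \qquad
\Delta(a, b) = \frac{1}{a},
```

where Δ is taken with the convention μ_L(Eg) = Δ(g) μ_L(E) (with the opposite convention one gets Δ(a, b) = a). Since μ_L ≠ μ_R, the group is indeed not unimodular.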

Every compact Lie group (which is very well understood) admits a **bi-invariant metric** making it into a space of nonnegative sectional curvature (such spaces are also well understood). In general, noncompact Lie groups do not have a bi-invariant Riemannian metric, though all connected semisimple (or reductive) Lie groups carry a bi-invariant *pseudo*-Riemannian metric coming from the Killing form. The existence of a bi-invariant Riemannian metric implies that the Lie algebra is the Lie algebra of a compact Lie group.

Thanks to the Haar measure, every (continuous) representation of a compact group is **unitarizable**. In general, for non-compact groups, it is a more serious question which representations are unitarizable.
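The averaging trick behind unitarizability can be sketched in the simplest compact case, a finite group, where the Haar measure is just the uniform average. Below is a hypothetical non-unitary representation of Z/2Z, ρ(1) = M with M² = I; averaging the standard inner product over the group yields a new inner product (encoded by a Gram matrix B) with respect to which ρ acts unitarily, i.e. MᵀBM = B:

```python
# Sketch: the averaging (Weyl) trick for a finite group.
# New inner product <x, y> = sum_g (rho(g) x) . (rho(g) y), encoded by
# the Gram matrix B = sum_g rho(g)^T rho(g); unitarity with respect to
# B means M^T B M = B.

M = [[1.0, 1.0],
     [0.0, -1.0]]   # M @ M = I, but M is not orthogonal

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]

# Average rho(g)^T rho(g) over the two group elements {e, M}
MtM = matmul(transpose(M), M)
B = [[I[i][j] + MtM[i][j] for j in range(2)] for i in range(2)]

lhs = matmul(transpose(M), matmul(B, M))
print(lhs, B)  # M preserves the averaged inner product: M^T B M == B
```

The same computation, with the sum replaced by an integral against Haar measure, is exactly how unitarizability is proved for general compact groups.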

However, the existence of a nontrivial **bi-invariant vector field** is a naive question with a negative answer.


**Integration by parts** is in some sense also a Fubini principle. Take summation by parts for example: if we interpret the sum as the total area of consecutive (growing along one axis) rectangles, then summation by parts tells us that this area is the area of the enveloping rectangle minus the total area of the complementary consecutive (growing along the other axis) rectangles. If the area of the enveloping rectangle vanishes in the limit (which is usually the case), this yields the summation-by-parts formula. However, it is usually easier to get a more precise estimate for the partial sums, since we are detecting the cancellation from another perspective.
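A quick numerical check of the identity described above (a sketch with the hypothetical choices a_n = sin n and b_n = 1/n), in the form ∑ a_n b_n = A_N b_N − ∑ A_n (b_{n+1} − b_n), where A_n = a_1 + ⋯ + a_n:

```python
# Sketch: verifying the summation-by-parts (Abel) identity numerically.
import math
from itertools import accumulate

N = 50
a = [math.sin(n) for n in range(1, N + 1)]  # oscillating terms
b = [1.0 / n for n in range(1, N + 1)]      # slowly varying weights
A = list(accumulate(a))                      # partial sums A_n

lhs = sum(x * y for x, y in zip(a, b))
rhs = A[-1] * b[-1] - sum(A[n] * (b[n + 1] - b[n]) for n in range(N - 1))
print(abs(lhs - rhs))  # agree up to floating-point error
```

Note that the oscillation of a_n is invisible term by term on the left, but the boundedness of the partial sums A_n on the right makes the cancellation explicit.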

**Integrate along cancellation/sparsity:** Many tricks, such as polar coordinates, integration through upper level sets (whose importance is well known), the co-area formula, etc., can be seen as instances of the Fubini–Tonelli principle. **Differentiate along cancellation:** In a broader sense, the principle can even be applied to PDEs. To treat certain PDEs, people use the characteristics, along which significant cancellation happens, so that one can reduce the PDE to an ODE, or reduce its order, say from a second-order PDE to a first-order stochastic ODE.
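One of the simplest instances is the layer-cake formula ∫ f dμ = ∫₀^∞ μ({f > t}) dt for f ≥ 0, i.e. integration through upper level sets. A minimal numerical sketch, with counting measure on a hypothetical finite sample:

```python
# Sketch: the layer-cake formula for a nonnegative function, checked on
# a finite set with counting measure (hypothetical sample values).
f = [0.3, 1.7, 0.9, 2.5, 1.1]

direct = sum(f)

# Integrate the measure of the upper level sets {f > t} over t,
# using a midpoint Riemann sum in t.
T, steps = max(f), 100000
dt = T / steps
layer_cake = sum(sum(1 for x in f if x > (k + 0.5) * dt)
                 for k in range(steps)) * dt

print(abs(direct - layer_cake))  # small discretization error
```

Swapping the order of summation over the sample and integration in t is exactly the Fubini–Tonelli step.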

**Randomization trick:** Sometimes we know cancellation/sparsity appears somewhere, but it is hard to identify. One trick to tackle the problem is to let a random vector detect the cancellation/sparsity for us. We can employ a suitable randomization to obtain an extra “axis”; integrating first along the random axis (i.e. taking the average), the cancellation/sparsity may be detected. Examples include random series, spherical projection, random (dyadic) grids, etc.
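A toy instance of this trick: for independent random signs ε_n = ±1 one has E|∑ ε_n a_n|² = ∑ a_n², so averaging over the random “axis” first recovers the ℓ² norm regardless of the sign pattern of the sequence. A sketch with a hypothetical sequence:

```python
# Sketch: random signs detecting cancellation.
# E |sum_n eps_n a_n|^2 = sum_n a_n^2 for independent signs eps_n.
import random

random.seed(0)
a = [1.0, -2.0, 3.0, -4.0]
l2_sq = sum(x * x for x in a)  # = 30

trials = 200000
acc = 0.0
for _ in range(trials):
    s = sum(random.choice((-1.0, 1.0)) * x for x in a)
    acc += s * s
estimate = acc / trials

print(l2_sq, estimate)  # the empirical average approaches sum a_n^2
```

This is the mechanism behind Khintchine-type inequalities for random series.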


Reference: Walter Rudin, *Fourier Analysis on Groups*.
