You don't need matrix inversion.

Kind of, sometimes anyway.
Math · Tricks · C++
Speed up inversion of definite matrices using the code in (6).

I suppose that, if you are reading this, I don't need to explain why you might want to speed up matrix inversion (or avoid it entirely) wherever possible. There is a trick you can use to speed up matrix inversion by somewhere between $20\%$ and $70\%$, depending on your use case.

I don't claim to have discovered this; in fact, a specialized version of this algorithm is commonly used to calculate the squared Mahalanobis distance. Another version, specialized for the inversion of positive definite Hermitian matrices, is probably used here. I just haven't seen this generalization anywhere.
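To make that special case concrete: the squared Mahalanobis distance $d^2 = (x - \mu)^\top \Sigma^{-1} (x - \mu)$ is usually computed without ever forming $\Sigma^{-1}$, by solving against the Cholesky factor of $\Sigma$ instead. Here is a minimal sketch using Eigen; the function name and signature are mine, not from any particular library:

```cpp
#include <Eigen/Dense>

// Squared Mahalanobis distance d^2 = (x - mu)^T Sigma^{-1} (x - mu),
// computed without Sigma.inverse(): factor Sigma = L L^T, solve
// L y = (x - mu) by forward substitution, then d^2 = y^T y.
double squaredMahalanobis(const Eigen::VectorXd& x,
                          const Eigen::VectorXd& mu,
                          const Eigen::MatrixXd& Sigma) {
    const Eigen::LLT<Eigen::MatrixXd> llt(Sigma);           // Sigma = L L^T
    const Eigen::VectorXd y = llt.matrixL().solve(x - mu);  // forward substitution
    return y.squaredNorm();                                 // y^T y
}
```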

Limitations

This trick only works for definite matrices, and you need to know the definiteness. However, I think this is commonly the case (think covariance matrices, etc.).
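If you are unsure whether a given matrix qualifies, one cheap runtime check is to attempt the very Cholesky factorization this trick is built on and see whether it succeeds. A sketch using Eigen, where `LLT::info()` reports a `NumericalIssue` if the matrix is not (numerically) positive definite:

```cpp
#include <Eigen/Dense>

// Returns true if M admits a Cholesky factorization, i.e. if M is
// numerically positive definite. Note this is a floating-point check,
// not a mathematical proof of definiteness.
bool isPositiveDefinite(const Eigen::MatrixXd& M) {
    const Eigen::LLT<Eigen::MatrixXd> llt(M);
    return llt.info() == Eigen::Success;
}
```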

Generalization

Let's assume you need to invert a matrix $M > 0$ (for $M < 0$, just use $-M$) in order to calculate the result of some multiplication with another matrix:
$$
\begin{split}
A &= M^{-1}B \\
M &\in \text{GL}(n, \mathbb{R}),\quad M > 0,\quad B \in \mathbb{R}^{n \times k}
\end{split}
$$
(1)
This is a generalization of three common problems:
  1. Inverting a matrix: $B = I \implies A = M^{-1}$
  2. Calculating a matrix-vector product of the inverse: $B = v \in \mathbb{R}^n \implies A = M^{-1}v$
  3. Calculating a matrix-matrix product of the inverse, as in (1)
You could implement a solution to (1) like this:
A = M.inverse() * B;
(2)
But should you?

Equations aren't instructions

A pitfall when implementing equations in code is that there is a difference between '=' and '=='. The former is an assignment; the latter tells us that its left side is equal to its right side. This allows us to transform (1), by left-multiplying both sides with $M$, into
$$
MA = B
$$
(3)
which we can now solve for $A$. Intuition may tell you that this just means going back to (1), which leads to the code in (2) and is therefore not helpful.

Now, the trick: let your computer solve for $A$. First, perform a Cholesky decomposition on $M$:
$$
M = LL^\top \implies A = (LL^\top)^{-1}B = L^{-\top}L^{-1}B
$$
(4)
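As a small worked example (the numbers are mine, purely for illustration):

$$
\begin{pmatrix} 4 & 2 \\ 2 & 3 \end{pmatrix}
= \underbrace{\begin{pmatrix} 2 & 0 \\ 1 & \sqrt{2} \end{pmatrix}}_{L}
\underbrace{\begin{pmatrix} 2 & 1 \\ 0 & \sqrt{2} \end{pmatrix}}_{L^\top}
$$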

So I have made the problem worse

In addition to the decomposition, there are now more matrix multiplications and the number of inverses hasn't changed. Let's avoid any matrix inversion by applying the same logic from above to solve:
$$
\begin{split}
C &= L^{-1}B \iff LC = B \\
A &= L^{-\top}C \iff L^\top A = C
\end{split}
$$
(5)
Isn't this the exact problem that we attempted to solve in the beginning… but twice?

Actually, no

The matrix $L$ has a triangular shape. This means (5) can be trivially solved: the first system by forward substitution, the second (with the upper triangular $L^\top$) by backward substitution. An implementation might look like this:
const Eigen::LLT<Eigen::MatrixXd> LLT = M.llt();               // M = L L^T
const Eigen::MatrixXd C = LLT.matrixL().solve(B);              // forward substitution:  L C = B
const Eigen::MatrixXd A = LLT.matrixL().transpose().solve(C);  // backward substitution: L^T A = C
(6)
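For reference, here is a self-contained version of (6), assuming Eigen and double precision (the setup and names are illustrative, not taken from the original benchmarks):

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    const int n = 4;

    // Build a random symmetric positive definite matrix: M = R R^T + n I.
    const Eigen::MatrixXd R = Eigen::MatrixXd::Random(n, n);
    const Eigen::MatrixXd M =
        R * R.transpose() + n * Eigen::MatrixXd::Identity(n, n);
    const Eigen::MatrixXd B = Eigen::MatrixXd::Random(n, n);

    // Solve M A = B via the Cholesky factor instead of forming M^{-1}.
    const Eigen::LLT<Eigen::MatrixXd> LLT = M.llt();
    const Eigen::MatrixXd C = LLT.matrixL().solve(B);             // L C = B
    const Eigen::MatrixXd A = LLT.matrixL().transpose().solve(C); // L^T A = C

    // Sanity check against the naive implementation from (2).
    std::cout << "max deviation from M.inverse() * B: "
              << (A - M.inverse() * B).cwiseAbs().maxCoeff() << "\n";
    return 0;
}
```

Note that Eigen can also fold both triangular solves into a single call: `A = M.llt().solve(B);`.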

Is this faster?

Before going into the benchmarks, keep in mind that classic matrix inversion and the Cholesky-based technique both operate in $O(n^3)$, so any speedup has to come from constant factors.
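For intuition on those constants, consider the usual textbook flop counts (quoted from memory, so treat them as rough figures): a Cholesky factorization costs about $\tfrac{1}{3}n^3$ flops, forming an explicit inverse about $2n^3$, and both a pair of triangular solves and the multiplication $M^{-1}B$ with $k$ right-hand sides cost on the order of $2kn^2$. So the comparison is roughly

$$
\underbrace{\tfrac{1}{3}n^3 + 2kn^2}_{\text{Cholesky + two triangular solves}}
\quad \text{vs.} \quad
\underbrace{2n^3 + 2kn^2}_{\text{explicit inverse + multiplication}}
$$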

I ran some benchmarks on these problems:
  1. Computing the inverse $M^{-1}$ using $B = I$
  2. Computing $M^{-1}v$ where $v$ is a vector
  3. Computing $M^{-1}B_k$ where $B_k \in \mathbb{R}^{n \times k}$
Benchmarks were implemented in C++ using Eigen and can be found here. I chose random positive definite matrices $M$ and random values for $B$, and measured the computation time relative to the respective implementation using the .inverse() method.

Here are the relative computation times:
|        | $M^{-1}$ | $M^{-1}B_{k=n}$ | $M^{-1}B_{k=4}$ | $M^{-1}v$ |
|--------|----------|-----------------|-----------------|-----------|
| $n=8$  | $70\%$   | $54\%$          | $50\%$          | $45\%$    |
| $n=20$ | $77\%$   | $55\%$          | $38\%$          | $30\%$    |
As you can see from the benchmarks, the greatest benefit is gained when calculating products with the inverse, and the savings grow as the number of columns $k$ in $B$ decreases.