Optimal transport is a powerful tool for measuring distances between signals and images. A common choice is the Wasserstein distance, which requires treating each signal as a probability measure. This places restrictive conditions on the signals, and although ad hoc renormalisation can be applied to sets of unnormalised measures, it often dampens features of the signal. A second disadvantage is that, despite recent advances, computing optimal transport distances for large data sets remains difficult. In this talk I will extend the linearisation of optimal transport distances to the Hellinger–Kantorovich distance, which can be applied between any pair of non-negative measures, and to the TLp distance, a version of optimal transport applicable to functions. Linearisation provides an embedding into a Euclidean space in which the Euclidean distance approximates the optimal transport distance in the original space. In particular, this method allows the application of off-the-shelf data analysis tools such as principal component analysis, and it reduces the number of optimal transport computations from O(n^2) to O(n) for a data set of size n.
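To illustrate the linearisation idea, here is a minimal sketch in one dimension, where the construction is especially simple: with a uniform reference measure, the optimal transport map is the quantile function, so the embedding of an empirical measure is just its sorted sample vector, and the Euclidean distance between embeddings recovers the Wasserstein-2 distance exactly. (The function names and the Gaussian test signals below are illustrative choices, not part of the talk; in higher dimensions, or for Hellinger–Kantorovich and TLp, the embedding only approximates the distance.)

```python
import numpy as np

def linear_ot_embedding(samples):
    # 1D empirical measure with uniform weights: the optimal transport
    # map from a uniform reference is the quantile function, so the
    # embedding is simply the sorted sample vector.
    return np.sort(samples)

def w2(a, b):
    # Exact 1D Wasserstein-2 distance between equal-size empirical
    # measures: L2 distance between sorted samples (quantile functions).
    return np.sqrt(np.mean((np.sort(a) - np.sort(b)) ** 2))

rng = np.random.default_rng(0)
# Three hypothetical "signals": shifted Gaussian samples.
signals = [rng.normal(loc=m, size=200) for m in (0.0, 1.0, 2.0)]

# n embeddings instead of n(n-1)/2 pairwise OT solves.
E = np.stack([linear_ot_embedding(s) for s in signals])

# Euclidean distance in the embedded space vs the OT distance.
d_embed = np.sqrt(np.mean((E[0] - E[1]) ** 2))
d_true = w2(signals[0], signals[1])
assert np.isclose(d_embed, d_true)  # exact agreement in 1D
```

Because each signal is mapped to a fixed-length vector, the stacked embeddings `E` can be fed directly to tools such as PCA, as mentioned above.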