# Scattering Transform Tutorial

## Motivation

The following tutorial explains what a scattering transform is and why we might use one. We are interested in scattering transforms as a way to enable a machine learning approach to materials science. Materials science drives many new technologies, and our major goal is to harness the power of supercomputing to make new discoveries. By providing a uniform mathematical description of materials, scattering transforms will allow us to examine a vastly larger pool of candidates by facilitating machine learning over existing sets of sparse data. Hopefully the following provides some intuition about what scattering transforms are and why we prefer them over similar methods. Overall, we are looking for three properties as we transform our data into something the computer can understand. They are:

- Translational invariance
- Rotational invariance
- Deformation stability

In the following sections we will discuss each of these three requirements in various settings.

## Similar methods and why not to use them

### Fourier transforms and their relatives

The first thing that we should examine is the Fourier transform. Fourier transforms by themselves have none of the three properties that we are looking for; however, the power spectrum (essentially just a squared Fourier transform) is a good first shot at what we want. The power spectrum is defined as follows:

$$P(\omega) = |\hat{f}(\omega)|^2$$

In English, the power spectrum is the squared magnitude of the FFT (Fast Fourier Transform) of the signal. It can be shown that this power spectrum is indeed translationally invariant.
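A quick justification, using the continuous Fourier transform for simplicity and writing $\hat{f}$ for the transform of a signal $f$: translating the signal only multiplies its Fourier transform by a unit-modulus phase, which the squared magnitude discards.

```latex
% Fourier shift theorem: translating f by a only introduces a phase factor
\widehat{f(\cdot - a)}(\omega)
  = \int f(t - a)\, e^{-i\omega t}\, dt
  = e^{-i a \omega}\, \hat{f}(\omega)
% ... and the squared magnitude removes that phase entirely:
\left|\widehat{f(\cdot - a)}(\omega)\right|^{2}
  = \left|e^{-i a \omega}\right|^{2} \left|\hat{f}(\omega)\right|^{2}
  = \left|\hat{f}(\omega)\right|^{2}
```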

The following example should convince you that this is the case!

Here we have simply taken our "atoms" (actually just some Gaussians) and computed their power spectra. Now let's slide them around a little.

// graphics grid for a transformed version of the original //

As you can see, the power spectrum of the transformed image is the same! If you're not convinced that this is the case, here is proof: below is the plot of the difference of the two power spectra.

// difference of the power spectra //
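The same experiment can be run numerically. This is a minimal sketch assuming NumPy; the Gaussian width and the shift amount are arbitrary choices. We build a Gaussian "atom" on a periodic grid, circularly shift it, and check that the two power spectra agree to machine precision.

```python
import numpy as np

# A 1-D "atom": a Gaussian bump on a periodic grid of 256 samples.
n = 256
x = np.arange(n)
gaussian = np.exp(-0.5 * ((x - n // 2) / 5.0) ** 2)

def power_spectrum(signal):
    """Squared magnitude of the FFT of the signal."""
    return np.abs(np.fft.fft(signal)) ** 2

# Slide the atom around: a circular shift by 40 samples.
shifted = np.roll(gaussian, 40)

# The two power spectra agree up to floating-point round-off.
diff = np.max(np.abs(power_spectrum(gaussian) - power_spectrum(shifted)))
print(diff)
```

The shift is circular (`np.roll`) because the discrete Fourier transform treats the signal as periodic; for a localized atom away from the boundary this matches the intuitive picture of sliding it around.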

So power spectra do indeed have the first property that we are looking for. But what about rotations? What if we take the Gaussians from figure 1 and rotate them just a bit:

// rotated gaussians //

It should be immediately apparent that these two power spectra are not the same. But honestly, this isn't the reason we aren't going to use power spectra: you can actually take one more step, integrating over all rotations, to make them rotationally invariant. The real reason is that they have quite a bit of trouble with our third property.

// Show the power spectra integrated over all possible rotations //
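That extra step can be sketched as follows (assuming NumPy; the atom positions and widths are arbitrary choices). Averaging the 2-D power spectrum over circles of constant radius integrates out the angular variable, so a pair of atoms and the same pair rotated 90 degrees about the image centre produce the same descriptor, even though their raw power spectra differ.

```python
import numpy as np

n = 128
yy, xx = np.mgrid[0:n, 0:n]

def gaussian2d(cx, cy, sigma=4.0):
    # Isotropic Gaussian "atom" centred at (cx, cy).
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))

def power_spectrum(image):
    # 2-D power spectrum, shifted so zero frequency sits at the centre.
    return np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2

def angular_average(ps):
    # Integrate over all rotations: average the power spectrum over
    # circles of constant radius about the zero frequency.
    r = np.hypot(yy - n // 2, xx - n // 2).astype(int).ravel()
    return np.bincount(r, weights=ps.ravel()) / np.bincount(r)

# Two atoms on a horizontal line, and the same pair rotated
# 90 degrees about the centre of the image.
image = gaussian2d(40, 64) + gaussian2d(88, 64)
rotated = gaussian2d(64, 40) + gaussian2d(64, 88)

# The raw power spectra differ substantially...
raw_diff = np.max(np.abs(power_spectrum(image) - power_spectrum(rotated)))
# ...but the angularly averaged spectra coincide.
avg_diff = np.max(np.abs(angular_average(power_spectrum(image))
                         - angular_average(power_spectrum(rotated))))
print(raw_diff, avg_diff)
```

A 90-degree rotation is used here because it is exact on a square pixel grid; for arbitrary angles the same identity holds in the continuum, up to interpolation error.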

Because they are built on Fourier transforms, power spectra run into real problems when we deform the signal even a little: a small deformation can change the power spectrum dramatically. It is this instability that makes power spectra bad candidates for our machine learning approach.
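To see the instability concretely, here is a small sketch (assuming NumPy; the base frequency and the 1% dilation are arbitrary choices). Dilating a high-frequency oscillation by just 1% shifts its spectral peak by a couple of frequency bins, so the power spectra barely overlap and the relative change is of order one.

```python
import numpy as np

n = 1024
t = np.arange(n) / n

def power_spectrum(x):
    return np.abs(np.fft.fft(x)) ** 2

# A high-frequency oscillation and a slightly dilated copy:
# t -> (1 + eps) * t with eps = 1%.
freq = 200
eps = 0.01
x = np.cos(2 * np.pi * freq * t)
x_deformed = np.cos(2 * np.pi * freq * (1 + eps) * t)

ps, ps_d = power_spectrum(x), power_spectrum(x_deformed)

# The deformation is tiny, but the spectral peak moves from bin 200
# to bin 202, so the relative change in the power spectrum is huge.
rel_change = np.linalg.norm(ps - ps_d) / np.linalg.norm(ps)
print(rel_change)
```

The effect gets worse at higher frequencies: a fixed relative dilation shifts a peak at frequency $\omega$ by $\epsilon\omega$, so no matter how small the deformation, some part of the spectrum moves by more than a bin.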