Image super-resolution is a fundamental problem in satellite remote sensing and computer vision. The Sentinel-2 satellite, recently launched by the European Space Agency, plays a critical role in several Earth observation missions. Because the sensor is far from its targets, each pixel of a Sentinel-2 image covers an area of hundreds to thousands of square meters, so super-resolving the image is necessary for subsequent classification and identification applications to be effective. Unlike a conventional 3-band RGB image, a Sentinel-2 image has 13 spectral bands (spanning the visible and invisible regions), which is one of the most challenging aspects. Worse still, the 13 bands come at different spatial resolutions, so conventional super-resolution methods do not apply. A further challenge is that Sentinel-2 is a new satellite, so the big data required to train high-performance neural networks is lacking, ruling out the popular deep-learning approach.

The left subplot shows the low-resolution Sentinel-2 image (the algorithm's input), and the right subplot shows the super-resolved image (the algorithm's output).

The Intelligent Hyperspectral Computing Laboratory (IHCL) proposed a novel unsupervised algorithm that significantly outperforms the state-of-the-art ATPRK method in both reconstruction accuracy and computational efficiency. We achieve this without model training or big data: all we need is a single Sentinel-2 image, from which we compute the spatial details. How is this possible?

It is up to the doer to undo the knot. If what is missing is spatial information, let us seek other information from the spatial domain.

The key lies in the highly structured spatial patches, i.e., the self-similarity widely observed in natural images. This prior knowledge about the spatial structure is used to make up for the missing spatial information. Because self-similarity had been only a concept, never mathematically defined, existing machine learning technologies resort to plug-and-play (PnP) priors when solving the induced optimization problem, and such PnP-driven approaches have no convergence guarantee in general.
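To make the self-similarity idea concrete, here is a minimal sketch of the classic non-local patch-matching view of it: each patch is linked to its most similar patches elsewhere in the image, with Gaussian weights on the patch distance. The function name, patch size, and bandwidth are illustrative choices, not IHCL's actual construction.

```python
import numpy as np

def self_similarity_weights(img, patch=5, stride=5, k=8, sigma=0.1):
    """Link each patch to its k most similar patches elsewhere in the
    image, weighted by a Gaussian kernel on the patch distance
    (an illustrative sketch of the non-local self-similarity prior)."""
    H, W = img.shape
    patches = [img[i:i + patch, j:j + patch].ravel()
               for i in range(0, H - patch + 1, stride)
               for j in range(0, W - patch + 1, stride)]
    P = np.stack(patches)                        # (num_patches, patch*patch)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).mean(-1)  # pairwise distances
    np.fill_diagonal(d2, np.inf)                 # exclude trivial self-matches
    idx = np.argsort(d2, axis=1)[:, :k]          # k nearest patches per row
    w = np.exp(-np.take_along_axis(d2, idx, axis=1) / (2 * sigma ** 2))
    return idx, w / w.sum(axis=1, keepdims=True) # row-normalized weights

rng = np.random.default_rng(0)
img = rng.random((40, 40))                       # stand-in for a real band
idx, w = self_similarity_weights(img)
```

In a real image, textured regions (fields, streets, coastlines) produce many strong off-diagonal weights, which is exactly the redundancy a super-resolution prior can exploit.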

For the first time, our IHCL team mathematically defined the concept of self-similarity, allowing us to derive closed-form solutions to the involved optimization problems and hence a fast algorithm. Beyond being explicit, our definition of self-similarity is a convex function. Specifically, we formulate the self-similarity pattern as a weighted graph (learned directly from the texture of the input image, which yields a scene-adapted regularization) and embed that graph into a quadratic term. The resultant convexity allows us to design a fast algorithm, with a provable convergence guarantee, by employing powerful convex optimization theory. The algorithmic steps involve two types of high-dimensional matrix inversion, which we compute efficiently by exploiting the so-called BCCB (block-circulant-with-circulant-blocks) structure and the sparsity structure, respectively. The general form of our super-resolution framework also allows us to compute a basis for multi-resolution imagery such as Sentinel-2. Compared with computation in the color space, this basis enables a more stable computation of spatial details in a low-dimensional eigenspace. Finally, we conclude the article with a seven-word quatrain:
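The following sketch illustrates why convexity pays off, under simplifying assumptions (not IHCL's exact formulation): with a graph-Laplacian quadratic regularizer, the subproblem min_x ||y − x||² + λ·xᵀLx has the closed form x* = (I + λL)⁻¹y, one sparse linear solve; and with periodic boundary conditions a spatial blur is a BCCB matrix diagonalized by the 2-D DFT, so its inversion reduces to a pair of FFTs. The random sparse W (standing in for the learned similarity graph) and the 3×3 box blur are illustrative.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)

# --- Sparse inversion: min_x ||y - x||^2 + lam * x^T L x has the
# closed form x* = (I + lam * L)^{-1} y, one sparse linear solve.
n = 32 * 32
y = rng.standard_normal(n)                   # flattened noisy image
# A random sparse symmetric W stands in for the learned similarity graph.
W = sparse.random(n, n, density=0.005, random_state=0)
W = (W + W.T) * 0.5
L = sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W  # graph Laplacian
lam = 0.3
x = spsolve((sparse.identity(n) + lam * L).tocsc(), y)

# --- BCCB inversion: with periodic boundaries a spatial blur H is
# block-circulant with circulant blocks, diagonalized by the 2-D DFT,
# so (H^T H + mu * I)^{-1} b costs only a pair of FFTs.
h = np.zeros((32, 32))
h[:3, :3] = 1.0 / 9.0                        # 3x3 box blur (illustrative)
Hf = np.fft.fft2(h)                          # eigenvalues of H
b = rng.standard_normal((32, 32))
mu = 0.1
z = np.real(np.fft.ifft2(np.fft.fft2(b) / (np.abs(Hf) ** 2 + mu)))
```

Both inversions avoid ever forming a dense n-by-n matrix, which is what keeps the overall algorithm fast at satellite-image scale.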

Do not worry about lacking big data.

No need to train any neural network.

Just give us a single imaging dataset.

We then compute spatial details for you!