Technion - Israel Institute of Technology
Uncertainty quantification is crucial for the deployment of image restoration models in safety-critical domains, like autonomous driving and biological imaging. To date, methods for uncertainty visualization have mainly focused on per-pixel estimates. Yet, a heatmap of per-pixel variances is typically of little practical use, as it does not capture the strong correlations between pixels. A more natural measure of uncertainty corresponds to the variances along the principal components (PCs) of the posterior distribution. Theoretically, the PCs can be computed by applying PCA on samples generated from a conditional generative model for the input image. However, this requires generating a very large number of samples at test time, which is painfully slow with the current state-of-the-art (diffusion) models. In this work, we present a method for predicting the PCs of the posterior distribution for any input image, in a single forward pass of a neural network. Our method can either wrap around a pre-trained model that was trained to minimize the mean square error (MSE), or can be trained from scratch to output both a predicted image and the posterior PCs. We showcase our method on multiple inverse problems in imaging, including denoising, inpainting, super-resolution, colorization, and biological image-to-image translation. Our method reliably conveys instance-adaptive uncertainty directions, achieving uncertainty quantification comparable with posterior samplers while being orders of magnitude faster.
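For intuition, the sampling-based route described above (PCA over posterior samples) can be sketched as follows. This is a minimal illustration only, not the NPPC network itself; `sampler(y, n)` stands for a hypothetical posterior sampler (e.g., a conditional diffusion model), and the sample count is arbitrary. Having to draw many such samples per input is exactly the cost that NPPC avoids by predicting the PCs in a single forward pass.

```python
import numpy as np

def posterior_pcs_via_sampling(sampler, y, n_samples=1000, k=5):
    """Baseline sketch: estimate the top-k posterior PCs of x|y by
    running PCA on samples from a (hypothetical) posterior sampler."""
    samples = sampler(y, n_samples)            # (n_samples, d) array, d = #pixels
    x_hat = samples.mean(axis=0)               # posterior-mean estimate
    centered = samples - x_hat                 # center the samples before PCA
    # SVD of the centered sample matrix gives the principal directions
    # (rows of vt) and, after rescaling, the std along each of them.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    sigmas = s[:k] / np.sqrt(n_samples - 1)    # std of the posterior along each PC
    pcs = vt[:k]                               # (k, d) unit-norm principal directions
    return x_hat, pcs, sigmas
```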
Visualizing the uncertainty in MNIST digit denoising/inpainting (please allow about a second for the image to update after each slider move).
[Interactive viewer: each example shows the input image, the first posterior PC (PC #1), and the perturbed reconstruction \(\hat{x} + \alpha\sigma\), with \(\alpha\) controlled by the slider.]
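Each slider above simply traverses one predicted direction: the displayed image is \(\hat{x} + \alpha\,\sigma_k v_k\), where \(v_k\) is the selected PC and \(\sigma_k\) is the predicted standard deviation along it. Below is a minimal sketch of how such frames could be rendered from the model's outputs; the function and argument names are illustrative and not part of any released code.

```python
import numpy as np

def pc_traversal(x_hat, pc, sigma, alphas=np.linspace(-3, 3, 13)):
    """Perturb the prediction x_hat along one unit-norm principal
    direction `pc` by alpha standard deviations `sigma`, producing one
    frame per value of alpha (what the slider shows as x_hat + alpha*sigma)."""
    return [x_hat + a * sigma * pc for a in alphas]
```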
Visualizing reconstruction uncertainty in CelebAHQ faces for the tasks of 8x noisy super-resolution, inpainting, and colorization.
[Interactive viewer: each example shows the input image, a selected posterior PC (PC #1, #2, or #4, depending on the example), and the perturbed reconstruction \(\hat{x} + \alpha\sigma\), with \(\alpha\) controlled by the slider.]
Here, we apply NPPC to the task of translating a microscopy image of a biological specimen from one fluorescent stain to another:
Visualizing the uncertainty in bioimage translation, where nuclear stains (cells) are predicted from actin stains (cytoskeletons).
[Interactive viewer: each example shows the input image, a selected posterior PC (PC #2 or #3), and the perturbed reconstruction \(\hat{x} + \alpha\sigma\), with \(\alpha\) controlled by the slider.]
Uncertainty Quantification via Neural Posterior Principal Components
Elias Nehme, Omer Yair, Tomer Michaeli.
Bibtex
Recently, the field of uncertainty quantification has witnessed a number of interesting works dealing with pixel correlations in images, such as:
This webpage was originally made by Matan Kleiner with the help of Hila Manor for SinDDM and can be used as a template. It is inspired by the template that was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code for the original template can be found here. A lot of features are taken from Bootstrap. All icons are taken from Font Awesome, except for the dice icon, which is taken from icons8.