A Review of Automatic Tuning of Denoising Algorithms Parameters Without Ground Truth

An initial review for a literature survey

Authors
Affiliations

Siju K S

School of Artificial Intelligence

Amrita Vishwa Vidyapeetham

Dr. Vipin V.

School of Artificial Intelligence

Amrita Vishwa Vidyapeetham

Published

September 17, 2024

Abstract

This review explores the mathematical concepts, noise types, and denoising algorithms, focusing on additive Gaussian noise. The paper’s methodology was reproduced using Gaussian smoothing, with MSE and PSNR metrics. Optimization of the error function was performed using various techniques, including gradient descent and scipy minimization, highlighting the performance of simple denoisers.

Keywords

Bilevel optimization, Denoising, Hyper-parameter tuning, Additive noise, Gaussian smoothing, Denoising algorithms, Mean squared error (MSE), Peak signal-to-noise ratio (PSNR), Gradient descent, Finite difference method, Optimization

1 Introduction

This review undertakes a detailed exploration of the paper titled Automatic Tuning of Denoising Algorithms Parameters without Ground Truth. The primary objective of this work is not to propose a novel denoiser but rather to develop a framework for automatically tuning the hyperparameters of existing denoising algorithms using only the noisy input image, eliminating the need for clean reference images. This paradigm shift offers a unique unsupervised approach to parameter selection, which is of significant interest in real-world applications where ground truth images are often unavailable.

2 Background and inspiration for the work

This work is inspired by supervised models such as Noise2Noise (N2N) and Noisy as Clean (NaC), which have been successful in estimating denoised images through carefully defined loss functions optimized with access to clean or synthetic reference images. In contrast, the proposed method introduces novel unsupervised loss functions that allow the system to infer optimal hyperparameters directly from the noisy data.

3 Core Contents

This review attempts to recreate and dissect the theoretical framework and algorithmic implementations discussed in the paper. In particular, I focus on the key differences between the proposed method and the following models:

3.1 Reference models

  • Noise2Noise (N2N): A supervised learning method where the target output is a second noisy version of the input image, and the denoising model learns to map between such noisy pairs.
  • Noisy as Clean (NaC): The noisy input is treated as if it were clean, allowing for simpler loss-function optimization but often at the cost of performance.
  • Noisier to Noise (Nr2N): A self-supervised approach in which additional noise is added to the already noisy image, and the denoiser is trained to map this doubly noisy input back toward the singly noisy one.
  • Recorrupted to Recorrupted (R2R): A more robust method that recorrupts the noisy image into two statistically independent noisy versions, which serve as a training pair for better generalization in real-world scenarios.

3.2 Major contribution

The critical contribution of the reviewed paper lies in proposing alternative unsupervised loss functions and an inference scheme that automatically selects the hyperparameters such that the results empirically match those obtained through supervised methods. This approach demonstrates that comparable performance to supervised models can be achieved without access to clean reference data, leading to a potential breakthrough in how denoising algorithms are deployed in practical scenarios.

3.3 Review approach

This review compares the following aspects of the proposed methodology with existing denoising techniques:

  1. Loss Function Definition and Optimization: Unlike the supervised models, where loss functions like Mean Squared Error (MSE) or structural similarity are optimized against clean images, the unsupervised loss functions in this paper rely on indirect metrics such as residual variance and image sharpness to estimate the quality of the denoised output.

  2. Inference Scheme: While supervised methods explicitly optimize their denoising algorithms based on the availability of paired clean and noisy data, the proposed inference scheme iteratively adjusts hyperparameters using gradient-based optimization on the unsupervised loss functions. The optimization process aims to converge on the denoised image \(\hat{x} \coloneqq x^*\), which matches the empirical quality of the ground truth.

3.3.1 General form of the loss function

Denoising algorithms are of the form \(A_\theta(y)\), where \(\theta\in \mathbb{R}^d\). To estimate the parameter \(\theta\), one typically constructs mappings of the form \[\Theta_\lambda:y\longrightarrow \theta\] with parameters \(\lambda\in \mathbb{R}^d\). In short, this mapping sends an image and its features to a set of denoiser parameters.

These mapping parameters are found by optimizing the average error produced by a discrepancy function \(\mathcal{L}\). In modern computational terminology, the process is to find the optimal parameters \(\lambda^*\) such that \[\lambda^*=\underset{\lambda\in \mathbb{R}^d}{\mathrm{argmin}}\, \mathbb{E}\left(\mathcal{L}(A_{\Theta_\lambda(y)}(y),x)\right)\]
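To make this supervised objective concrete, the following minimal sketch tunes a single-parameter Gaussian-smoothing denoiser (the stand-in used for this review's reproduction) against a synthetic ground truth; the names A and supervised_loss, and the synthetic data, are illustrative, not from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = np.clip(gaussian_filter(rng.random((64, 64)), 3.0), 0, 1)  # synthetic "clean" image
y = x + 0.1 * rng.standard_normal(x.shape)                     # noisy observation y = x + n

def A(theta, img):
    # Gaussian-smoothing denoiser; theta is the smoothing width
    return gaussian_filter(img, sigma=theta)

def supervised_loss(theta):
    # MSE against the ground truth x -- exactly the quantity that is
    # unavailable in practice and that the paper seeks to avoid
    return np.mean((A(theta, y) - x) ** 2)

res = minimize_scalar(supervised_loss, bounds=(0.1, 5.0), method="bounded")
print(f"supervised theta* = {res.x:.3f}, MSE = {res.fun:.5f}")

This supervised optimum serves as the yardstick against which the unsupervised schemes below are measured.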

Previous approaches demand a dataset for training, which may not be available in real situations. The new approach proposed in this article is unsupervised, yet it is aligned with the supervised models proposed in Noise2Noise (N2N), Noisy as Clean (NaC), Noisier to Noise (Nr2N), and Recorrupted to Recorrupted (R2R).

The main thread of the work is that this novel approach defines an unsupervised loss (one that does not depend on the ground truth \(x\)) achieving the same minimizer \(\lambda^*\) as its supervised counterpart.

3.3.2 Context of the work

The inspiring works are supervised and carry the disadvantages of overfitting and/or non-generalizability with respect to a finite dataset \(\Omega_{i,j}\). The authors claim that, in the proposed unsupervised approach, the parameters are tuned at test time by directly optimizing the loss function. The work is divided into two stages:

  1. Define the loss function for \(A_\theta\) in various setups with a low-cardinality parameter set \(\theta\) relative to the number of pixel values.
  2. Solve the optimization problem (minimizing the loss function using the gradient descent method). For the gradient calculations, they used automatic differentiation.

3.4 Loss functions and Inference schemes

With reference to the four published articles, the authors proposed the following loss functions and inference schemes. In each case, two noisy images are considered.

  1. Noise to Noise (N2N): (Lehtinen 2018)

The loss function is

\[ \hat{\theta}=\underset{\theta}{\mathrm{argmin}}\, ||A_\theta (y)-y'||^2_2\]

and the inference scheme is

\[x^{N2N}\coloneqq A_{\hat{\theta}}(y)\simeq x^*\]

where \(y\) and \(y'\) are two noisy observations defined by \(y=x+n_1\) and \(y'=x+n_2\).
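A minimal sketch of this tuning rule, reusing the Gaussian-smoothing stand-in denoiser (synthetic images and variable names are illustrative):

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = np.clip(gaussian_filter(rng.random((64, 64)), 3.0), 0, 1)  # synthetic clean image
y1 = x + 0.1 * rng.standard_normal(x.shape)  # y  = x + n1
y2 = x + 0.1 * rng.standard_normal(x.shape)  # y' = x + n2

def n2n_loss(theta):
    # ||A_theta(y) - y'||^2: the clean image x never enters the loss
    return np.mean((gaussian_filter(y1, sigma=theta) - y2) ** 2)

theta_hat = minimize_scalar(n2n_loss, bounds=(0.1, 5.0), method="bounded").x
x_n2n = gaussian_filter(y1, sigma=theta_hat)  # inference: x^{N2N} := A_theta_hat(y)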

  2. Noisy as Clean (NaC): (Xu et al. 2020) The loss function is

\[ \hat{\theta}=\underset{\theta}{\mathrm{argmin}}\, ||A_\theta (z)-y||^2_2\]

and the inference scheme is

\[x^{NaC}\coloneqq A_{\hat{\theta}}(y)\simeq x^*\]

where \(z\) is a doubly noisy observation defined by \(z=y+n_s\).
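Under the same stand-in denoiser, the NaC scheme can be sketched as follows (again with illustrative synthetic data):

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = np.clip(gaussian_filter(rng.random((64, 64)), 3.0), 0, 1)  # synthetic clean image
y = x + 0.1 * rng.standard_normal(x.shape)  # noisy observation
z = y + 0.1 * rng.standard_normal(y.shape)  # doubly noisy: z = y + n_s

def nac_loss(theta):
    # the noisy y plays the role of the "clean" target for A_theta(z)
    return np.mean((gaussian_filter(z, sigma=theta) - y) ** 2)

theta_hat = minimize_scalar(nac_loss, bounds=(0.1, 5.0), method="bounded").x
x_nac = gaussian_filter(y, sigma=theta_hat)  # inference: x^{NaC} := A_theta_hat(y)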

  3. Noisier to Noise (Nr2N): (Moran et al. 2020)

The loss function is

\[ \hat{\theta}=\underset{\theta}{\mathrm{argmin}}\,\underset{(y,z)}{\mathbb{E}} \left(||A_\theta (z)-y||^2_2\right)\]

and the inference scheme is

\[x^{Nr2N}\coloneqq \frac{(1+\alpha^2)A_{\hat{\theta}}(z)-z}{\alpha^2}\]

Note

This approach places no restriction on the noise other than that it be additive. However, the doubly noisy input \(z\) carries an artificially high noise level. To mitigate this, the variance of \(n_s\) is lowered by drawing \(n_s\sim \mathcal{N}(0,\alpha\sigma)\) with \(\alpha<1\).
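A sketch of the Nr2N loss and its bias-corrected inference with the Gaussian-smoothing stand-in (alpha = 0.5 is an illustrative choice):

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
sigma, alpha = 0.1, 0.5  # noise level and scaling factor
x = np.clip(gaussian_filter(rng.random((64, 64)), 3.0), 0, 1)
y = x + sigma * rng.standard_normal(x.shape)          # noisy observation
z = y + alpha * sigma * rng.standard_normal(y.shape)  # noisier: n_s ~ N(0, alpha*sigma)

def nr2n_loss(theta):
    # the denoiser maps the noisier z back toward the noisy y
    return np.mean((gaussian_filter(z, sigma=theta) - y) ** 2)

theta_hat = minimize_scalar(nr2n_loss, bounds=(0.1, 5.0), method="bounded").x
# bias-corrected inference: x^{Nr2N} = ((1 + alpha^2) A_theta_hat(z) - z) / alpha^2
x_nr2n = ((1 + alpha**2) * gaussian_filter(z, sigma=theta_hat) - z) / alpha**2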

  4. Recorrupted to Recorrupted (R2R): (Pang et al. 2021)

In this reference, the ‘doubly noisy’ images are created from the noisy image \(y\) as \[\begin{align*} z_1&=y+D^Tn_s\\ z_2&=y-D^{-1}n_s \end{align*}\] with \(D\) being any invertible matrix and \(n_s\) drawn from the same distribution as \(n\).

As a result, \(z_1=x+n_1\) and \(z_2=x+n_2\), where \(n_1\) and \(n_2\) are two zero mean independent noise vectors.

The loss function is

\[ \hat{\theta}=\underset{\theta}{\mathrm{argmin}}\,\mathbb{E} \left(||A_{\theta} (z_1)-z_2||^2_2\right)\]

and the inference scheme is

\[x^{R2R}\coloneqq \frac{1}{M}\sum\limits_{m=1}^MA_{\hat{\theta}}(z_1^m)\]

where \(z_1^m\) denotes the \(m\)-th independent recorruption of \(y\).
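With \(D=I\), recorruption reduces to adding and subtracting the same noise draw, which gives the following sketch with the Gaussian-smoothing stand-in (M = 8 is an illustrative choice):

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
sigma = 0.1
x = np.clip(gaussian_filter(rng.random((64, 64)), 3.0), 0, 1)
y = x + sigma * rng.standard_normal(x.shape)  # noisy observation

def recorrupt(img, rng):
    # with D = I: z1 = img + n_s and z2 = img - n_s form an independent zero-mean pair
    n_s = sigma * rng.standard_normal(img.shape)
    return img + n_s, img - n_s

z1, z2 = recorrupt(y, rng)

def r2r_loss(theta):
    return np.mean((gaussian_filter(z1, sigma=theta) - z2) ** 2)

theta_hat = minimize_scalar(r2r_loss, bounds=(0.1, 5.0), method="bounded").x
M = 8  # average the tuned denoiser over M fresh recorruptions of y
x_r2r = np.mean([gaussian_filter(recorrupt(y, rng)[0], sigma=theta_hat)
                 for _ in range(M)], axis=0)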

3.4.1 Optimization of loss function

For the optimization, the authors used a gradient-based approach. The gradient is evaluated by automatic differentiation, and the iterative update formula for \(\theta\) is:

\[\theta^{(n+1)}=\theta^{(n)}-\eta\, \nabla_\theta\mathcal{L}(\theta^{(n)})\]

Here \(\theta^{(0)}\), the initial parameter value, is found by manually tuning \(\theta\) on a single image.
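Since the Gaussian-smoothing stand-in used in this review is not implemented in an automatic-differentiation framework, the sketch below approximates the gradient by a central finite difference instead; the step size eta, the perturbation h, and the iteration count are illustrative choices, and the initial theta mimics a manually tuned starting value.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
x = np.clip(gaussian_filter(rng.random((64, 64)), 3.0), 0, 1)
y1 = x + 0.1 * rng.standard_normal(x.shape)
y2 = x + 0.1 * rng.standard_normal(x.shape)

def loss(theta):
    # the N2N loss serves as the objective L(theta)
    return np.mean((gaussian_filter(y1, sigma=theta) - y2) ** 2)

theta, eta, h = 1.0, 10.0, 1e-4  # theta_0 from manual tuning; eta = step size
for n in range(200):
    grad = (loss(theta + h) - loss(theta - h)) / (2 * h)  # central finite difference
    theta = max(theta - eta * grad, 0.05)                 # keep the smoothing width positive
print(f"theta after gradient descent: {theta:.3f}")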

3.4.2 Presented use case

The authors applied the proposed method to fine-tune the parameters of the denoiser Denoising via Quantum Interactive Patches (DeQuIP). The implementation was done in PyTorch 1.12.0 with the BSD400 dataset as ground truth. Unfortunately, PyTorch 1.12.0 is not compatible with recent Python versions such as 3.11.2. As per the authors' claim, the proposed approach makes it possible to obtain an average PSNR within less than 1% of the best achievable PSNR. In this review, a miniature model is developed using the authors' concept.
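For reference, the two metrics used throughout the comparisons can be computed as follows (a minimal sketch for images scaled to \([0,1]\)):

import numpy as np

def mse(a, b):
    # mean squared error between two images
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=1.0):
    # peak signal-to-noise ratio in dB; peak is the maximum pixel value
    return 10.0 * np.log10(peak ** 2 / mse(a, b))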

Figure 1: Example of denoising results (visualization of parameter fine-tuning).
Figure 2: Example of denoising results (visualization of parameter fine-tuning).
Figure 3: Example of denoising results (visualization of parameter fine-tuning).
Figure 4: Example of denoising results (visualization of parameter fine-tuning).

The skill of the proposed algorithm on the same sample images used by the authors is shown in Figure 1, Figure 2, Figure 3, and Figure 4.

4 Conclusion

The review examined several influential works that inspired the authors’ unsupervised denoising framework, including Noise2Noise (N2N), Noisy as Clean (NaC), Noisier to Noise (Nr2N), and Recorrupted to Recorrupted (R2R). These models demonstrated how effective denoising can be achieved without relying on clean ground-truth images, focusing solely on noisy data. Building on these ideas, the authors introduced novel unsupervised cost functions and inference schemes to match the performance of supervised denoising models. Using Gaussian smoothing as a basic case study, the review reproduced these methods and explored the optimization of the error functions through scipy minimization and custom gradient descent. Metrics such as MSE and PSNR provided a comparative analysis, reinforcing that while the unsupervised method closely mirrors the results of supervised models, further refinement is needed to fully realize its potential in more complex scenarios.

5 References

Dabov, Kostadin, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. 2007. “Image Denoising by Sparse 3-d Transform-Domain Collaborative Filtering.” IEEE Transactions on Image Processing 16 (8): 2080–95. https://doi.org/10.1109/TIP.2007.901238.
Dutta, Sayantan, Adrian Basarab, Bertrand Georgeot, and Denis Kouamé. 2021a. “Image Denoising Inspired by Quantum Many-Body Physics.” In 2021 IEEE International Conference on Image Processing (ICIP), 1619–23. https://doi.org/10.1109/ICIP42928.2021.9506794.
———. 2021b. “Quantum Mechanics-Based Signal and Image Representation: Application to Denoising.” IEEE Open Journal of Signal Processing 2: 190–206. https://doi.org/10.1109/OJSP.2021.3067507.
Floquet, Arthur, Sayantan Dutta, Emmanuel Soubies, Duong-Hung Pham, and Denis Kouamé. 2024. “Automatic Tuning of Denoising Algorithms Parameters without Ground Truth.” IEEE Signal Processing Letters 31 (January): 381–85. https://doi.org/10.1109/LSP.2024.3354554.
Knuth, Donald E. 1984. “Literate Programming.” Comput. J. 27 (2): 97–111. https://doi.org/10.1093/comjnl/27.2.97.
Krull, Alexander, Tim-Oliver Buchholz, and Florian Jug. 2019. “Noise2Void - Learning Denoising from Single Noisy Images.” In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2124–32. https://doi.org/10.1109/CVPR.2019.00223.
Lehtinen, J. 2018. “Noise2noise: Learning Image Restoration Without Clean Data.” arXiv Preprint arXiv:1803.04189.
Moran, Nick, Dan Schmidt, Yu Zhong, and Patrick Coady. 2020. “Noisier2Noise: Learning to Denoise from Unpaired Noisy Data.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12061–69. https://doi.org/10.1109/CVPR42600.2020.01208.
Nguyen, Pascal, Emmanuel Soubies, and Caroline Chaux. 2023. “Map-Informed Unrolled Algorithms for Hyper-Parameter Estimation.” In 2023 IEEE International Conference on Image Processing (ICIP), 2160–64. https://doi.org/10.1109/ICIP49359.2023.10222154.
Pang, Tongyao, Huan Zheng, Yuhui Quan, and Hui Ji. 2021. “Recorrupted-to-Recorrupted: Unsupervised Deep Learning for Image Denoising.” In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2043–52. https://doi.org/10.1109/CVPR46437.2021.00208.
Ramani, Sathish, Thierry Blu, and Michael Unser. 2008. “Monte-Carlo Sure: A Black-Box Optimization of Regularization Parameters for General Denoising Algorithms.” IEEE Transactions on Image Processing 17 (9): 1540–54. https://doi.org/10.1109/TIP.2008.2001404.
Selesnick, Ivan. 2017. “Total Variation Denoising via the Moreau Envelope.” IEEE Signal Processing Letters 24 (2): 216–20. https://doi.org/10.1109/LSP.2017.2647948.
Xu, Jun, Yuan Huang, Ming-Ming Cheng, Li Liu, Fan Zhu, Zhou Xu, and Ling Shao. 2020. “Noisy-as-Clean: Learning Self-Supervised Denoising from Corrupted Image.” IEEE Transactions on Image Processing 29: 9316–29. https://doi.org/10.1109/TIP.2020.3026622.
Zhu, Xiang, and Peyman Milanfar. 2010. “Automatic Parameter Selection for Denoising Algorithms Using a No-Reference Measure of Image Content.” IEEE Transactions on Image Processing 19 (12): 3116–32. https://doi.org/10.1109/TIP.2010.2052820.

Citation

BibTeX citation:
@online{k_s2024,
  author = {K S, Siju and Vipin V., Dr.},
  title = {A {Review} of {Automatic} {Tuning} of {Denoising}
    {Algorithms} {Parameters} {Without} {Ground} {Truth}},
  date = {2024-09-17},
  langid = {en},
  abstract = {This review explores the mathematical concepts, noise
    types, and denoising algorithms, focusing on additive Gaussian
    noise. The paper’s methodology was reproduced using Gaussian
    smoothing, with MSE and PSNR metrics. Optimization of the error
    function was performed using various techniques, including gradient
    descent and `scipy` minimization, highlighting the performance of
    simple denoisers.}
}
For attribution, please cite this work as:
K S, Siju, and Dr. Vipin V. 2024. “A Review of Automatic Tuning of Denoising Algorithms Parameters Without Ground Truth.” Center for Computational Engineering and Networking. September 17, 2024.