TY - JOUR
T1 - Shared Prior Learning of Energy-Based Models for Image Reconstruction
AU - Pinetz, Thomas
AU - Kobler, Erich
AU - Pock, Thomas
AU - Effland, Alexander
N1 - Publisher Copyright:
© by SIAM. Unauthorized reproduction of this article is prohibited.
PY - 2021
Y1 - 2021
AB - We propose a novel learning-based framework for image reconstruction particularly designed for training without ground truth data, which has three major building blocks: energy-based learning, a patch-based Wasserstein loss functional, and shared prior learning. In energy-based learning, the parameters of an energy functional composed of a learned data fidelity term and a data-driven regularizer are computed in a mean-field optimal control problem. In the absence of ground truth data, we change the loss functional to a patch-based Wasserstein functional, in which local statistics of the output images are compared to uncorrupted reference patches. Finally, in shared prior learning, both aforementioned optimal control problems are optimized simultaneously with shared learned parameters of the regularizer to further enhance unsupervised image reconstruction. We derive several time discretization schemes of the gradient flow and verify their consistency in terms of Mosco convergence. In numerous numerical experiments, we demonstrate that the proposed method generates state-of-the-art results for various image reconstruction applications, even if no ground truth images are available for training.
KW - convolutional neural network
KW - deep learning
KW - energy-based learning
KW - gradient flow
KW - mean-field optimal control
KW - Mosco convergence
KW - optimal transport
KW - shared prior learning
KW - Wasserstein distance
UR - https://www.scopus.com/pages/publications/85131728909
U2 - 10.1137/20M1380016
DO - 10.1137/20M1380016
M3 - Article
AN - SCOPUS:85131728909
SN - 1936-4954
VL - 14
SP - 1706
EP - 1748
JO - SIAM Journal on Imaging Sciences
JF - SIAM Journal on Imaging Sciences
IS - 4
ER -