Inverse Problems Leveraging Pre-trained Contrastive Representations

Sriram Ravula*, Georgios Smyrnis*, Matt Jordan, Alexandros G. Dimakis

The University of Texas at Austin

To appear in NeurIPS 2021

Classifying corrupted images

Our robust encoders observe highly corrupted images, and a simple linear probe on their representations classifies them. We present the top-3 classes from our models alongside those from the end-to-end supervised baselines. For three different types of forward operators, our robust encoders classify correctly and produce reasonable top-3 alternatives. In contrast, the supervised baselines fail completely, even though they were fine-tuned on exactly this task of classifying corrupted images, starting from a powerful ImageNet-pretrained ResNet-101. We expect that most humans would also fail to classify such highly corrupted images.
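The linear probe described above is a single linear layer trained on frozen robust representations. Below is a minimal sketch of top-3 prediction with such a probe; the function and argument names (`top3_classes`, `probe_weights`, `probe_bias`) are hypothetical stand-ins, not the paper's actual code.

```python
def top3_classes(z, probe_weights, probe_bias, class_names):
    """Apply a linear probe to a representation vector z and return the
    three most likely class names.

    z             -- representation of the corrupted image (list of floats)
    probe_weights -- one weight vector per class (list of lists)
    probe_bias    -- one bias per class
    """
    # logits[k] = <z, w_k> + b_k for each class k
    logits = [sum(zi * wi for zi, wi in zip(z, w)) + b
              for w, b in zip(probe_weights, probe_bias)]
    # rank classes by descending logit and keep the top three
    ranked = sorted(range(len(logits)), key=lambda k: -logits[k])
    return [class_names[k] for k in ranked[:3]]
```

In the paper's setting, `z` would come from the robust encoder applied to the corrupted input, and the probe weights are the only parameters trained with labels.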

Abstract

We study a new family of inverse problems for recovering representations of corrupted data. We assume access to a pre-trained representation learning network R(x) that operates on clean images, like CLIP. The problem is to recover the representation R(x) of an image x, given only a corrupted version A(x) for some known forward operator A. We propose a supervised inversion method that uses a contrastive objective to obtain excellent representations for highly corrupted images. Using a linear probe on our robust representations, we achieve higher accuracy than end-to-end supervised baselines when classifying images with various types of distortions, including blurring, additive noise, and random pixel masking. We evaluate on a subset of ImageNet and observe that our method is robust to varying levels of distortion. Our method outperforms end-to-end baselines across a wide range of forward operators, even when trained with only a fraction of the labeled data.
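To make the forward operators concrete, here is an illustrative NumPy sketch of the three corruption types named in the abstract. The specific parameters (masking fraction, noise level, kernel size) are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(x, p=0.9):
    """A(x): zero out roughly a fraction p of the pixels at random."""
    keep = rng.random(x.shape) >= p
    return x * keep

def additive_noise(x, sigma=0.5):
    """A(x): add i.i.d. Gaussian noise with standard deviation sigma."""
    return x + sigma * rng.standard_normal(x.shape)

def box_blur(x, k=3):
    """A(x): simple k-by-k mean blur (illustrative, not the paper's kernel)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out
```

Each operator A is known at training time, so corrupted inputs A(x) can be generated on the fly from clean images x.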

Our proposed method

We initialize a student and a teacher model from a pretrained CLIP encoder. Clean image batches are fed to the teacher, while distorted versions of the same images are fed to the student. The student is trained with a contrastive loss that pulls student and teacher representations of the same original image together while pushing representations of different images apart.
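The student-teacher objective above can be sketched as an InfoNCE-style loss, where row i of the student batch (distorted image i) is matched against row i of the teacher batch (clean image i), and all other pairs in the batch serve as negatives. This is a minimal NumPy sketch assuming L2-normalized representations and a temperature `tau`; it is not the paper's exact implementation.

```python
import numpy as np

def contrastive_loss(student_z, teacher_z, tau=0.07):
    """InfoNCE-style loss between student representations of distorted
    images and teacher representations of the corresponding clean images.

    student_z, teacher_z -- (n, d) arrays; matched rows are positive pairs,
    all other same-batch pairs are negatives. tau is a temperature
    (0.07 is an assumed value, as in common contrastive setups).
    """
    s = student_z / np.linalg.norm(student_z, axis=1, keepdims=True)
    t = teacher_z / np.linalg.norm(teacher_z, axis=1, keepdims=True)
    logits = s @ t.T / tau  # (n, n) cosine-similarity matrix, scaled
    # cross-entropy with the diagonal (matched pairs) as the target class
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))
```

During training, only the student is updated; the teacher stays frozen so the student's representations of corrupted images are driven toward CLIP's representations of the clean images.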

Citation

APA

Ravula, S., Smyrnis, G., Jordan, M., & Dimakis, A. G. (2021). Inverse Problems Leveraging Pre-trained Contrastive Representations. arXiv preprint arXiv:2110.07439.

Bibtex

@misc{ravula2021inverse,
  title={Inverse Problems Leveraging Pre-trained Contrastive Representations},
  author={Sriram Ravula and Georgios Smyrnis and Matt Jordan and Alexandros G. Dimakis},
  year={2021},
  eprint={2110.07439},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}