Learning Generative Models Across Incomparable Spaces

Abstract

Adversarial training has become the de facto standard for generative modeling. While adversarial approaches have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety, they are not applicable when one wishes the generated distribution to recover some, but not all, of its aspects. For example, one might be interested in modeling purely relational or topological aspects (such as cluster or manifold structure) while ignoring or constraining absolute characteristics (e.g., global orientation in Euclidean spaces). Furthermore, such absolute aspects are not available if the data is provided in an intrinsically relational form, such as a weighted graph. In this work, we propose an approach to learn generative models across such incomparable spaces that relies on the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. We show how the resulting framework can be used to learn distributions across spaces of different dimensionality or even different data types.
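To make the relational comparison concrete, below is a minimal sketch (not the authors' code) of computing a Gromov-Wasserstein discrepancy between two samples living in spaces of different dimensionality, using the POT (Python Optimal Transport) library. The sample sizes, dimensions, and variable names are illustrative assumptions; GW only ever compares pairwise distances within each space, never across spaces, which is why no common embedding is needed.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 2))  # sample in a 2-D space
y = rng.standard_normal((100, 5))  # sample in a 5-D space

# Intra-space pairwise distance matrices: the only geometry GW uses.
Cx = ot.dist(x, x)
Cy = ot.dist(y, y)
Cx /= Cx.max()  # normalize scales so the comparison is purely relational
Cy /= Cy.max()

# Uniform weights over the two empirical distributions.
p = ot.unif(len(x))
q = ot.unif(len(y))

# Gromov-Wasserstein discrepancy between the two metric-measure spaces.
gw = ot.gromov.gromov_wasserstein2(Cx, Cy, p, q, loss_fun="square_loss")
print(f"GW discrepancy: {gw:.4f}")
```

Because the discrepancy depends only on the distance matrices Cx and Cy, it is invariant to rotations and translations of either sample, which is exactly the property that lets a generator be trained against data given only in relational form (e.g., a weighted graph).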

Publication
NeurIPS Workshop on Relational Representation Learning