On the Effectiveness of Two-Step Learning for Latent-Variable Models
Author: | Maxime Gasse, Cem Subakan, Laurent Charlin |
---|---|
Year of publication: | 2020 |
Subject: |
Computer science, Machine learning, Artificial intelligence, Latent-variable models, Two-step learning, Autoencoder, Prior probability, Probability distribution, Generative models |
Source: | MLSP |
DOI: | 10.1109/mlsp49062.2020.9231729 |
Description: | Latent-variable generative models offer a principled solution for modeling and sampling from complex probability distributions. Implementing a joint training objective with a complex prior, however, can be a tedious task, as one is typically required to derive and code a specific cost function for each new type of prior distribution. In this work, we propose a general framework for learning latent-variable generative models in a two-step fashion. In the first step of the framework, we train an autoencoder, and in the second step we fit a prior model on the resulting latent distribution. This two-step approach offers a convenient alternative to joint training, as it allows for a straightforward combination of existing models without the hassle of deriving new cost functions or coding joint training objectives. Through a set of experiments, we demonstrate that two-step learning achieves performance comparable to joint training, and in some cases even yields more accurate modeling. |
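The two-step procedure described in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's actual method: the autoencoder is replaced by PCA (a linear autoencoder with tied weights) and the prior model by a single Gaussian fitted to the encoded data, so the sketch stays dependency-free; the paper itself considers richer autoencoders and priors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 500 points near a 2-D subspace of R^5 (illustrative only).
Z_true = rng.normal(size=(500, 2))
W_true = rng.normal(size=(2, 5))
X = Z_true @ W_true + 0.05 * rng.normal(size=(500, 5))

# Step 1: "train an autoencoder". As a minimal stand-in we use PCA,
# i.e. a linear autoencoder; any encoder/decoder pair would slot in here.
X_mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
encode = lambda x: (x - X_mean) @ Vt[:2].T
decode = lambda z: z @ Vt[:2] + X_mean

# Step 2: fit a prior on the latent distribution produced by the
# encoder. Here: a single Gaussian estimated from the latent codes.
Z = encode(X)
mu, cov = Z.mean(axis=0), np.cov(Z, rowvar=False)

# Sampling from the two-step model: draw latents from the fitted
# prior, then push them through the decoder.
z_new = rng.multivariate_normal(mu, cov, size=10)
x_new = decode(z_new)
print(x_new.shape)  # (10, 5)
```

The appeal noted in the abstract is visible here: step 2 never touches the autoencoder's training objective, so swapping the Gaussian for any other density model requires no new cost-function derivation.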
Database: | OpenAIRE |
External link: |