(2018) Feature Generating Networks for Zero-Shot Learning.
Abstract
Suffering from the extreme training data imbalance between seen and unseen classes, most existing state-of-the-art approaches fail to achieve satisfactory results for the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution. Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets – CUB, FLO, SUN, AWA and ImageNet – in both the zero-shot learning and generalized zero-shot learning settings.
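The following is a minimal sketch of the idea described in the abstract: a conditional generator maps a class-level semantic descriptor plus noise to a CNN feature vector, and its loss combines a Wasserstein critic term with a classification loss so the synthesized features stay discriminative. It is not the authors' released code; the layer sizes, dimensions, weighting `beta`, and the names `Generator`, `Critic`, `generator_loss`, and `clf` (a pretrained softmax classifier on seen classes) are illustrative assumptions. The critic's own training step (e.g. with a gradient penalty) is omitted.

```python
# Sketch of a conditional WGAN that generates CNN features from class embeddings,
# with an added classification loss on the generated features (assumed dimensions).
import torch
import torch.nn as nn

FEAT_DIM, ATTR_DIM, NOISE_DIM = 2048, 312, 312  # e.g. ResNet features + CUB-like attributes (assumed)

class Generator(nn.Module):
    """Maps (class embedding, noise) -> synthetic CNN feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ATTR_DIM + NOISE_DIM, 4096),
            nn.LeakyReLU(0.2),
            nn.Linear(4096, FEAT_DIM),
            nn.ReLU(),  # pooled CNN features are non-negative
        )

    def forward(self, attr, noise):
        return self.net(torch.cat([attr, noise], dim=1))

class Critic(nn.Module):
    """Wasserstein critic conditioned on the class embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + ATTR_DIM, 4096),
            nn.LeakyReLU(0.2),
            nn.Linear(4096, 1),
        )

    def forward(self, feat, attr):
        return self.net(torch.cat([feat, attr], dim=1))

def generator_loss(G, D, clf, attr, labels, beta=0.01):
    """WGAN generator objective plus a cross-entropy term from a pretrained classifier."""
    noise = torch.randn(attr.size(0), NOISE_DIM)
    fake = G(attr, noise)
    wgan_term = -D(fake, attr).mean()                             # fool the critic
    cls_term = nn.functional.cross_entropy(clf(fake), labels)     # keep features class-discriminative
    return wgan_term + beta * cls_term
```

Once such a generator is trained on seen classes, features can be synthesized for unseen classes from their semantic descriptors alone, and a standard softmax classifier (or any multimodal embedding method, per the abstract) can then be trained on the synthetic features.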
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Divisions: | Mario Fritz (MF) |
| Conference: | CVPR IEEE Conference on Computer Vision and Pattern Recognition |
| Depositing User: | Tobias Lorenz |
| Date Deposited: | 12 May 2021 14:11 |
| Last Modified: | 08 Aug 2021 19:51 |
| URI: | https://publications.cispa.saarland/id/eprint/3422 |