[mlpack] [GSoC 17] Generative Adversarial Network (GAN)
marcus.edel at fu-berlin.de
Sun Mar 19 11:32:18 EDT 2017
Welcome, and thanks for getting in touch!
> I think the WGAN is wonderful, so I want to implement it too, and I wonder:
> is implementing just one of SGAN or WGAN enough for three months' work? When
> I tried to integrate the two modules, I found there is not much in common
> between them, so I'm not sure what I should do. Can you give me some advice
> and guide me on what to do next?
It is a really great idea and a well written paper. Regarding whether
implementing a single model (SGAN or WGAN) is enough work for GSoC: I don't
think so, even if you'd like to implement a bunch of different test scenarios.
I think adding another model besides WGAN or SGAN would fulfill that
requirement. What do you think?
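As a side note, the WGAN paper's point that the change from the original GAN is small holds in code as well. Here is a rough NumPy sketch of the two discriminator/critic objectives and the weight-clipping step (this is not mlpack code; the function names are made up for illustration):

```python
import numpy as np

def gan_discriminator_loss(d_real, d_fake):
    # Original GAN: the discriminator outputs probabilities in (0, 1);
    # the loss is the binary cross-entropy (negative log-likelihood).
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def wgan_critic_loss(c_real, c_fake):
    # WGAN: the critic outputs unbounded scores; the loss is a plain
    # difference of means, which estimates the Wasserstein distance.
    return np.mean(c_fake) - np.mean(c_real)

def clip_weights(weights, c=0.01):
    # WGAN enforces the Lipschitz constraint by clipping every weight
    # into [-c, c] after each critic update.
    return [np.clip(w, -c, c) for w in weights]
```

In mlpack this would map onto the ANN layer/FFN API rather than raw NumPy, but the delta stays about this small: swap the loss function and add a clipping step after each critic update.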
> On 19 Mar 2017, at 08:40, YuLun Cai <buptcyl at gmail.com> wrote:
> I am YuLun Cai from China. I am currently in my first year of Master studies. I am interested in participating in GSoC 17 with mlpack on the Essential Deep Learning Modules project.
> Among the topics given on the wiki page, I am interested in implementing GAN modules. I have done a course in Advanced Machine Learning, and I've finished the Stanford course "CS231n: Convolutional Neural Networks for Visual Recognition" as self-study, which helped me a lot in understanding deep learning.
> I've built mlpack from source on my own machine successfully, then looked at the source code in the ANN module (the activation functions, lots of the layers, and the API in ffn.hpp and rnn.hpp) to learn how to build a neural network in mlpack.
> I also read the resources about GANs on the GSoC project wiki. I think the "Stacked Generative Adversarial Networks" paper is interesting: it consists of a top-down stack of GANs and tries to invert the hierarchical representations of a discriminative bottom-up deep network to generate images.
> In addition, the recent Wasserstein GAN paper has gotten a lot of attention, and many people think it is excellent:
> * it proposes a new GAN training algorithm that works well on the common GAN datasets
> * there is only a small difference between the original GAN algorithm and the WGAN algorithm
> * its training algorithm is backed up by theory: it shows that the original GAN sometimes provides no gradient to train on when using the KL or JS divergence, and proves that the Wasserstein distance always provides a usable gradient
> * WGAN can train the discriminator (critic) to convergence, improves the stability of learning, and gets rid of mode collapse
> I think the WGAN is wonderful, so I want to implement it too, and I wonder: is implementing just one of SGAN or WGAN enough for three months' work? When I tried to integrate the two modules, I found there is not much in common between them, so I'm not sure what I should do. Can you give me some advice and guide me on what to do next?
>  https://arxiv.org/abs/1612.04357
>  https://arxiv.org/abs/1701.07875
> _______________________________________________
> mlpack mailing list
> mlpack at lists.mlpack.org