[mlpack] [GSoC] Contributing to mlpack

Tyler Zhu rui.tyler.zhu at gmail.com
Tue Mar 21 17:52:32 EDT 2017


Hi Ryan,

Thanks for your reply. GSoC seems to be popular not only at UAlberta but all around the world; I’m really excited to see people from so many countries who love coding and machine learning.

Thanks for pointing me to the previous discussions of this idea. It has clearly been discussed widely, and there is plenty of potential work around it. I agree that the F-W algorithm itself is just the foundation: applications such as low-rank matrix completion and orthogonal matching pursuit (OMP) can be built on top of it. I also found some papers on parallel/distributed implementations of F-W; we can think about those, but implementing F-W and its applications would be the first and essential step. The core of that work is the “argmin” (conditional gradient) step, as in the rough sketch below.
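To make this concrete, here is a minimal sketch of one F-W iteration for minimizing f(x) subject to ||x||_1 <= tau, written in Armadillo-style C++. The function name and signature are placeholders of my own, not mlpack’s actual optimizer API:

    #include <armadillo>

    // One Frank-Wolfe iteration for: minimize f(x) s.t. ||x||_1 <= tau.
    // The "argmin" (conditional gradient) step over the l1-ball just picks
    // a signed vertex at the largest-magnitude gradient coordinate, so no
    // projection is ever required.
    arma::vec FrankWolfeStep(const arma::vec& x,
                             const arma::vec& gradient,
                             const double tau,
                             const size_t k /* iteration number */)
    {
      // Linear minimization oracle:
      //   s = argmin over { ||s||_1 <= tau } of <gradient, s>.
      const arma::uword i = arma::index_max(arma::abs(gradient));
      arma::vec s(x.n_elem, arma::fill::zeros);
      s(i) = (gradient(i) > 0) ? -tau : tau;

      // Classic diminishing step size: gamma_k = 2 / (k + 2).
      const double gamma = 2.0 / (k + 2.0);

      // A convex combination of feasible points stays feasible.
      return (1.0 - gamma) * x + gamma * s;
    }

Because each step adds at most one vertex, the iterate after k steps has at most k nonzeros; the analogous property over the nuclear-norm ball (rank at most k after k steps) is exactly what the matrix completion application exploits.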

Best regards,
Tyler Zhu (Rui)
University of Alberta

> On Mar 21, 2017, at 3:33 PM, Ryan Curtin <ryan at ratml.org> wrote:
> 
> On Mon, Mar 20, 2017 at 11:48:05PM -0600, Tyler Zhu wrote:
>> Dear All,
>> 
>> Nice to meet you all! My name is Rui; I'm a PhD student at the
>> University of Alberta. My research interests include distributed
>> machine learning, low-rank matrix completion, and other exciting
>> topics. I have been an mlpack user in my research for a while; I'm
>> really fascinated by this library and would like to contribute to
>> it.
> 
> Hi Rui!
> 
> It seems GSoC is quite popular among University of Alberta students this
> year. :)
> 
>> I’m quite interested in “Low rank/sparse optimization using
>> Frank-Wolfe”. Frank-Wolfe is an old but well-known algorithm in
>> optimization. It has the same convergence rate as (projected)
>> gradient descent, which is the more popular method; but when the
>> projection is expensive to compute, Frank-Wolfe is one of the best
>> alternatives. Examples can be found in various papers, e.g., on
>> low-rank matrix completion and orthogonal matching pursuit.
>> 
>> I think this algorithm can be implemented as a new optimizer, since
>> it can be used in many applications. My preliminary plan is to
>> borrow ideas from the existing implementations of other optimizers,
>> and the major work will be implementing the “argmin” step, also
>> called the “conditional gradient” step in the literature.
> 
> I think this is a good approach.  Keep in mind that just the
> implementation of F-W might not be enough to fill a whole summer, so it
> may be worthwhile to propose some additional work on top of that.  Maybe
> some extensions of F-W or some applications might work.  There has been
> quite a bit of discussion of this project over the years on the mailing
> list; you might check out some of those posts for more information:
> 
> http://lists.mlpack.org/pipermail/mlpack/2017-March/003185.html
> http://lists.mlpack.org/pipermail/mlpack/2016-March/002575.html
> 
> Let me know if I can clarify anything.
> 
> Thanks,
> 
> Ryan
> 
> -- 
> Ryan Curtin    | "I am a golden god!"
> ryan at ratml.org |   - Russell


