
    OPT 2010 - OPT 2010 3rd International Workshop on Optimization for Machine Learning


    Deadline: October 24, 2010 | Date: December 10, 2010

    Venue/Country: Whistler, Canada

    Updated: 2010-10-23 10:10:27 (GMT+9)

    Call For Papers - CFP

    OPT 2010

    3rd International Workshop on Optimization for Machine Learning

    NIPS*2010 Workshop

    December 10th, 2010, Whistler, Canada

    URL: http://opt.kyb.tuebingen.mpg.de/

    Abstract

    Optimization is a well-established, mature discipline, but the way we use
    it is undergoing a rapid transformation: the advent of modern
    data-intensive applications in statistics, scientific computing, data
    mining, and machine learning is forcing us to drop theoretically powerful
    methods in favor of simpler but more scalable ones. This changeover shows
    itself most starkly in machine learning, where we often have to process
    massive datasets; this necessitates not only reliance on large-scale
    optimization techniques but also the development of methods "tuned" to the
    specific needs of machine learning problems.
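    The trade-off described above, simple scalable updates in place of
    theoretically stronger batch methods, can be illustrated with stochastic
    gradient descent, which touches one example per step so each update costs
    O(1) regardless of dataset size. The toy least-squares problem, step size,
    and iteration count below are illustrative assumptions, not part of this
    call:

```python
import random

# A minimal stochastic gradient descent (SGD) sketch for least-squares:
# minimize f(w) = (1/n) * sum_i (w * x_i - y_i)^2, one sample at a time.
# The data, step size, and iteration count are illustrative assumptions.

data = [(x, 3.0 * x) for x in range(1, 11)]  # noiseless pairs with true w = 3

random.seed(0)
w = 0.0
eta = 0.001  # small constant step size keeps every single-sample update stable
for _ in range(5000):
    x, y = random.choice(data)      # draw one example per step
    grad = 2.0 * (w * x - y) * x    # gradient of the single-sample squared loss
    w -= eta * grad
print(w)  # converges toward the true weight 3.0
```

    A full-batch method would scan all n examples per iteration; SGD trades
    per-step accuracy for per-step cost, which is exactly the scalability
    pressure the abstract describes.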

    Background and Objectives

    We build on OPT*2008 and OPT*2009, the forerunners of this workshop, both
    held as NIPS workshops. Beyond that significant precedent, there have been
    several other related workshops, such as the "Mathematical Programming in
    Machine Learning / Data Mining" series (2005 to 2007) and the BigML
    workshop at NIPS 2007.

    Our workshop has the following major aims:

    * Provide a platform for increasing the interaction between researchers
    from optimization, operations research, statistics, scientific computing,
    and machine learning;
    * Identify key problems and challenges that lie at the intersection of
    optimization and ML;
    * Narrow the gap between optimization and ML, to help reduce rediscovery
    and thereby accelerate new advances.

    Call for Participation

    This year we invite two types of submissions to the workshop:

    (i) contributed talks and/or posters
    (ii) open problems

    For the latter, we ask the authors to prepare a few slides that clearly
    present, motivate, and explain an important open problem; the main aim
    here is to foster active discussion. The topics of interest for the open
    problem session are the same as those for regular submissions; please see
    below for details.

    In addition to open problems, we invite high-quality submissions for
    presentation as talks or posters during the workshop. We are especially
    interested in participants who can contribute theory, algorithms,
    applications, or implementations with a machine learning focus on the
    following topics:

    Topics

    * Stochastic, Parallel, and Online Optimization

    - Large-scale learning, massive data sets

    - Distributed algorithms

    - Optimization on massively parallel architectures

    - Optimization using GPUs, Streaming algorithms

    - Decomposition for large-scale, message-passing and online learning

    - Stochastic approximation

    - Randomized algorithms

    * Algorithms and Techniques (application oriented)

    - Global and Lipschitz optimization

    - Algorithms for non-smooth optimization

    - Linear and higher-order relaxations

    - Polyhedral combinatorics applications to ML problems

    * Non-Convex Optimization

    - Non-convex quadratic programming, including binary QPs

    - Convex Concave Decompositions, D.C. Programming, EM

    - Training of deep architectures and large hidden variable models

    - Approximation Algorithms

    * Optimization with Sparsity constraints

    - Combinatorial methods for L0 norm minimization

    - L1, Lasso, Group Lasso, sparse PCA, sparse Gaussians

    - Rank minimization methods

    - Feature and subspace selection

    * Combinatorial Optimization

    - Optimization in Graphical Models

    - Structure learning

    - MAP estimation in continuous and discrete random fields

    - Clustering and graph-partitioning

    - Semi-supervised and multiple-instance learning
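    As one concrete instance of the sparsity topics listed above, a
    proximal-gradient sketch in the style of ISTA for the Lasso shows how the
    L1 soft-thresholding step drives coefficients exactly to zero. The toy
    data, regularization weight `lam`, and step size below are illustrative
    assumptions, not taken from the call:

```python
# A sketch of ISTA (proximal-gradient descent) for the Lasso:
#   minimize 0.5 * ||X w - y||^2 + lam * ||w||_1
# Soft-thresholding is the proximal operator of the L1 norm and is what
# produces exact zeros. Data, lam, and step size are illustrative assumptions.

def soft_threshold(v, t):
    """Elementwise prox of t*|.|: shrink each entry toward zero, clip at zero."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(X, y, lam, step, iters=200):
    m, n = len(X), len(X[0])
    w = [0.0] * n
    for _ in range(iters):
        # residual r = X w - y
        r = [sum(X[i][j] * w[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient of the smooth part: g = X^T r
        g = [sum(X[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step on the smooth part, then prox step on the L1 part
        w = soft_threshold([w[j] - step * g[j] for j in range(n)], step * lam)
    return w

# Toy problem whose generating weights are sparse: w_true = [2, 0].
X = [[1.0, 0.2],
     [0.2, 1.0]]
y = [2.0, 0.4]  # y = X @ w_true
w = ista(X, y, lam=0.3, step=0.8)
print(w)  # the second coefficient is driven exactly to 0.0
```

    The step size is chosen below 1/L, where L is the largest eigenvalue of
    X^T X (here 1.2), which is the standard condition for ISTA to converge.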

    Important Dates

    * Deadline for submission of papers: 24th October 2010

    * Notification of acceptance: 12th November 2010

    * Final version of submission: 20th November 2010

    * Workshop date: 10th December 2010

    Please note that at least one author of each accepted paper must be
    available to present the paper at the workshop. Further details regarding
    the submission process (style files, page limits, etc.) are available at
    the workshop homepage.

    Workshop

    The workshop will be a one-day event with a morning and an afternoon
    session. In addition to a lunch break, long coffee breaks will be offered
    in both the morning and the afternoon.

    A new session on open problems is proposed to spur active discussion and
    interaction among the participants. A key aim of this session will be to
    establish areas and problems of interest to the community.

    Invited Speakers

    * Yurii Nesterov -- Catholic University of Louvain

    * Laurent El Ghaoui -- University of California, Berkeley

    * Mark Schmidt -- University of British Columbia

    Workshop Organizers

    * Suvrit Sra, Max Planck Institute for Biological Cybernetics

    * Sebastian Nowozin, Microsoft Research, Cambridge, UK

    * Stephen Wright, University of Wisconsin, Madison

