

    RL 2012 - ICML Workshop on Representation Learning


    Website: icml.cc/2012/workshops/

    Category: RL 2012

    Deadline: May 07, 2012 | Date: June 26, 2012-July 01, 2012

    Venue/Country: Edinburgh, U.K.

    Updated: 2012-03-24 09:04:23 (GMT+9)

    Call For Papers - CFP

    In this workshop we consider the question of how we can learn meaningful and useful representations of data. There has been a great deal of recent work on this topic, much of it emerging from researchers interested in training deep architectures. Deep learning methods such as deep belief networks, sparse coding-based methods, convolutional networks, and deep Boltzmann machines have shown promise as a means of learning invariant representations of data and have already been successfully applied to a variety of tasks in computer vision, audio processing, natural language processing, information retrieval, and robotics. Bayesian nonparametric methods and other hierarchical graphical model-based approaches have also recently been shown to learn rich representations of data.

    By bringing together researchers with diverse expertise and perspectives who share an interest in how to learn data representations, we will explore the challenges and promising directions for future research in this area.

    In an opening overview talk and in a panel discussion involving our invited speakers, we will attempt to address some of the issues that have recently emerged as critical in shaping the future development of this line of research:

    How do we learn invariant representations? Feature pooling is a popular and highly successful means of achieving invariant features, but is there a tension between feature specificity and robustness to structured noise (movement along an irrelevant factor of variation)? Does it make sense to think in terms of a theory of invariant features?
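
    For concreteness, here is a minimal sketch of the pooling idea discussed above, in Python with NumPy; the 1-D feature map, non-overlapping windows, and toy values are illustrative assumptions, not material from the workshop.

        import numpy as np

        def max_pool_1d(features, pool_size=2):
            # Non-overlapping max pooling over a 1-D feature map: keep the
            # strongest response in each window, so a shift of a feature
            # within its window leaves the pooled code unchanged (invariance),
            # at the cost of discarding the feature's exact position (specificity).
            n = len(features) - len(features) % pool_size  # drop any ragged tail
            return features[:n].reshape(-1, pool_size).max(axis=1)

        x         = np.array([0.9, 0.0, 0.0, 0.0])
        x_shifted = np.array([0.0, 0.9, 0.0, 0.0])  # same feature, shifted by one position
        print(max_pool_1d(x))          # [0.9 0. ]
        print(max_pool_1d(x_shifted))  # [0.9 0. ]  -- identical pooled representations

    Shifts larger than the pool window do change the output, which is one way to see the tension between specificity and robustness raised above.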

    What role does learning really play? There is some evidence that learning is not as important as previously believed. Rather, the feature extraction process itself seems to play the most significant role in determining how well the data are represented. For example, there is evidence that the use of feedback in feature extraction could be very important.

    How can several layers of latent variables be effectively learned? There has been a great deal of empirical work showing the importance of certain architectures and inference algorithms for learning representations that retain information about the input while extracting increasingly abstract concepts. We would like to discuss which modules are key in these hierarchical models and which inference methods are best suited to discovering useful representations of data. We would also like to investigate which inference algorithms are more effective and scalable in the number of data points and in the feature dimensionality.
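
    As a concrete (and deliberately simplified) illustration of the greedy layer-wise strategy behind several of the models mentioned above, the sketch below stacks two restricted Boltzmann machines using scikit-learn's BernoulliRBM; the synthetic binary data, layer sizes, and hyperparameters are assumptions chosen only for illustration.

        import numpy as np
        from sklearn.neural_network import BernoulliRBM

        rng = np.random.RandomState(0)
        # Toy binary data standing in for, e.g., binarized image patches.
        X = (rng.rand(500, 64) > 0.5).astype(float)

        # Greedy layer-wise training: each RBM models the representation
        # produced by the layer below, and its hidden activations become
        # the input to the next, more abstract layer.
        layers = []
        representation = X
        for n_hidden in (32, 16):
            rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                               n_iter=20, random_state=0)
            representation = rbm.fit_transform(representation)
            layers.append(rbm)

        print(representation.shape)  # (500, 16): a lower-dimensional, more abstract code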

    The workshop also invites paper submissions on the development of representation learning methods, deep learning algorithms, theoretical foundations, inference and optimization methods, semi-supervised and transfer learning, and applications of deep learning and unsupervised feature learning to real-world tasks. Accepted papers will be presented mainly as posters.
