Category JNLE-SIO 2011
Deadline: November 30, 2010 | Date: December 27, 2011
Venue/Country: Call For papers, U.S.A
Updated: 2011-03-18 10:18:19 (GMT+9)
Machine learning and statistical approaches have become indispensable for a large part of Computational Linguistics and Natural Language Processing research. On the one hand, they have enhanced system accuracy and have significantly sped up some design phases, e.g. the inference phase. On the other hand, their use requires careful parameter tuning and, above all, the engineering of machine-based representations of natural language phenomena, e.g. by means of features, which sometimes depart from the common-sense interpretation of such phenomena.

These difficulties become more marked when the input/output data have a structured and relational form: the designer has both to engineer features for representing the system input, e.g. the syntactic parse tree of a sentence, and to devise methods for generating the output, e.g. by building a set of classifiers that provide the boundaries and type (argument, function or concept type) of some of the parse-tree constituents.

Research in empirical Natural Language Processing has tackled these complexities since the early work in the field; e.g. part-of-speech tagging is a problem in which both the input (word sequences) and the output (POS-tag sequences) are structured. However, the models initially designed were mainly based on local information. The use of such ad hoc solutions was mainly due to the lack of statistical and machine learning theory suggesting how models should be designed and trained to capture dependencies among the items in the input/output structured data. In contrast, recent work in machine learning has provided several paradigms to globally represent and process such data: structural kernel methods, linear models for structure learning, graphical models, constrained conditional models, and re-ranking, among others.

However, none of the above approaches has been shown to be superior in general to the rest. A general expressivity-efficiency trade-off is observed, making the best option usually task-dependent.
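To make the POS-tagging example above concrete, the following is a minimal sketch of one of the structure-learning paradigms mentioned (a Collins-style structured perceptron with Viterbi decoding over tag sequences). The tag set, feature templates and toy data are invented for illustration and are not part of this call:

```python
# Toy structured perceptron for POS tagging: the input (word sequence) and
# the output (tag sequence) are both structured, and decoding searches the
# whole space of tag sequences. Tags, features and data are illustrative.
from collections import defaultdict

TAGS = ["DET", "NOUN", "VERB"]  # toy tag set (assumption)

def features(words, i, prev_tag, tag):
    """Local features joining the input words with the output tags."""
    return [f"word={words[i]}|tag={tag}", f"prev={prev_tag}|tag={tag}"]

def score(weights, feats):
    return sum(weights.get(f, 0.0) for f in feats)

def viterbi(words, weights):
    """Exact decoding over the structured output space of tag sequences."""
    n = len(words)
    best = [{} for _ in range(n)]  # best[i][tag] = (score, previous tag)
    for t in TAGS:
        best[0][t] = (score(weights, features(words, 0, "<s>", t)), None)
    for i in range(1, n):
        for t in TAGS:
            best[i][t] = max(
                (best[i - 1][p][0] + score(weights, features(words, i, p, t)), p)
                for p in TAGS)
    tag = max(TAGS, key=lambda t: best[n - 1][t][0])
    tags = [tag]
    for i in range(n - 1, 0, -1):  # follow back-pointers
        tag = best[i][tag][1]
        tags.append(tag)
    return list(reversed(tags))

def train(data, epochs=5):
    """Perceptron update: reward gold-sequence features, penalize predicted."""
    w = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = viterbi(words, w)
            if pred != gold:
                prev_g = prev_p = "<s>"
                for i in range(len(words)):
                    for f in features(words, i, prev_g, gold[i]):
                        w[f] += 1.0
                    for f in features(words, i, prev_p, pred[i]):
                        w[f] -= 1.0
                    prev_g, prev_p = gold[i], pred[i]
    return w

data = [(["the", "dog", "barks"], ["DET", "NOUN", "VERB"]),
        (["a", "cat", "sleeps"], ["DET", "NOUN", "VERB"])]
w = train(data)
print(viterbi(["the", "cat", "barks"], w))  # → ['DET', 'NOUN', 'VERB']
```

Replacing the update rule with a margin-based one, or the local features with kernels over trees, yields the other paradigms discussed in this call; the expressivity-efficiency trade-off mentioned above shows up directly in the cost of the decoding step.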
Overall, the special issue is devoted to the study of engineering techniques for effectively using natural language structures in the input and in the output of typical computational linguistics applications. Therefore, the study of the generalization of new or traditional methods, which allows for fast design in different or novel NLP tasks, is one important aim of this special issue.

Finally, the special issue also seeks (partial) answers to the following questions:

* Is there any evidence (empirical or theoretical) that can establish the superiority of one class of learning algorithms/paradigms over the others when applied to some concrete natural language structures?
* When we use different classes of methods, e.g. SVMs vs. CRFs, or different paradigms, what do we lose and what do we gain from a practical viewpoint (implementation, efficiency and accuracy)? This question is particularly interesting when considering different structure types: syntactic or semantic, both shallow and deep.
* Can we empirically demonstrate that theoretically motivated algorithms, e.g. SVM-struct, improve on simpler models, e.g. re-ranking, in the NLP case?
* Are there any other novel engineering approaches to NLP input and output structures?

TOPICS

For this special issue we invite submissions of papers describing novel and challenging work/results in theories, models, applications or empirical studies on statistical learning for natural language processing involving structured input and/or structured output. Therefore, the invited submissions must concern (a) any kind of natural language problem; and (b) natural language structured data.

Given the target above, the range of topics to be covered will include, but will not be limited to, the following:

* Practical and theoretical new learning approaches and architectures
* Experimental evaluation/comparison of different approaches
* Kernel methods
* Algorithms for structured output (batch and on-line):
  - structured SVMs, Perceptron, etc.
  - on sequences, trees, graphs, etc.
* Bayesian learning, generative models, graphical models
* Relational learning
* Constrained conditional models
* Integer linear programming approaches
* Graph-based algorithms
* Ranking and re-ranking
* Scalability and efficiency of ML methods
* Robust approaches: noisy data, domain adaptation, small training sets, etc.
* Unsupervised and semi-supervised models
* Encoding of syntactic/semantic structures
* Structured data encoding deep semantic information and relations
* Relation between the syntactic and semantic layers in structured data

DATES

Call for papers: 30 November 2010
Submission of articles: 20 April 2011
Preliminary decisions to authors: 26 July 2011
Submission of revised articles: 28 September 2011
Final decisions to authors: 23 November 2011
Final versions due from authors: 27 December 2011

INSTRUCTIONS

Articles submitted to this special issue must adhere to the NLE journal guidelines available at:
http://journals.cambridge.org/action/displayMoreInfo?jid=NLE&type=ifc
(see the section "Manuscript requirements" for the journal LaTeX style).

We encourage authors to keep their submissions below 30 pages. Send your manuscript as a PDF attached to an email addressed to JNLE-SIO@disi.unitn.it:

- with subject field: JNLE-SIO, and
- including the names of the authors and the title of the submission in the body.

An alternative way to submit to JNLE-SIO is to submit a paper to TextGraphs 6 and be selected for contributing to JNLE. See the website:
http://www.textgraphs.org/ws11/index.html
The selected workshop papers must be extended to journal papers following the indications of both the TextGraphs 6 reviewers and the JNLE-SIO editors. These extended versions must be submitted to JNLE-SIO no later than August 28, 2011, for the second round of review of JNLE-SIO.

GUEST EDITORS

Lluís Màrquez
TALP Research Center, Technical University of Catalonia
lluism@lsi.upc.edu
http://www.lsi.upc.edu/~lluism/
Alessandro Moschitti
Information Engineering and Computer Science Department, University of Trento
moschitti@disi.unitn.it
http://disi.unitn.eu/moschitti
GUEST EDITORIAL BOARD

Roberto Basili, University of Rome, Italy
Ulf Brefeld, Yahoo! Research, Spain
Razvan Bunescu, Ohio University, US
Nicola Cancedda, Xerox, France
Xavier Carreras, UPC, Spain
Stephen Clark, University of Cambridge, UK
Trevor Cohn, University of Sheffield, UK
Walter Daelemans, University of Antwerp, Belgium
Hal Daumé, University of Maryland, US
Jason Eisner, Johns Hopkins University, US
James Henderson, University of Geneva, Switzerland
Liang Huang, ISI, University of Southern California, US
Terry Koo, MIT CSAIL, US
Mirella Lapata, University of Edinburgh, UK
Yuji Matsumoto, Nara Institute of Science and Technology, Japan
Ryan McDonald, Microsoft Research, US
Raymond Mooney, University of Texas at Austin, US
Hwee Tou Ng, National University of Singapore, Singapore
Sebastian Riedel, University of Massachusetts, US
Dan Roth, University of Illinois at Urbana-Champaign, US
Mihai Surdeanu, Stanford University, US
Ivan Titov, Saarland University, Germany
Kristina Toutanova, Microsoft Research, US
Jun'ichi Tsujii, University of Tokyo, Japan
Antal van den Bosch, Tilburg University, The Netherlands
Scott Wen-tau Yih, Microsoft Research, US
Fabio Massimo Zanzotto, University of Rome "Tor Vergata", Italy
Min Zhang, A*STAR, Singapore