

    CROWDSEARCH 2012 - The First International Workshop on Crowdsourcing Web Search


    Deadline: February 08, 2012 | Date: April 16, 2012-April 20, 2012

    Venue/Country: Lyon, France

    Updated: 2012-01-14 17:38:03 (GMT+9)

    CALL FOR PAPERS

    CROWDSEARCH 2012

    First International Workshop on Crowdsourcing Web Search

    Lyon (France), April 17, 2012 - Co-located with WWW 2012

    http://crowdsearch.como.polimi.it/

    GOALS OF THE WORKSHOP

    Link analysis, which has shaped Web search technology over the last decade, can be seen as a massive mining of crowd-sourced reputation associated with pages. With the exponential increase of social engagement, link analysis is now complemented by other kinds of crowd-generated information, such as multimedia content, recommendations, tweets, and tags, and each person can ask for information or advice on dedicated sites. With the growth of online presence, we expect questions to be routed directly to informed crowds. At the same time, many kinds of tasks - either directly used for search or indirectly used for enriching content to make it more searchable - are explicitly crowdsourced, possibly in the form of games. Many such tasks can be used to craft information, e.g., by naming and tagging data objects and by resolving representational ambiguities and conflicts, thereby enhancing the scope of searchable objects. Thus, social engagement is empowering and reshaping the search for Web information.

    CrowdSearch is targeted at enabling, promoting, and understanding individual and social participation in search. It addresses important research questions, such as: How can search paradigms make use of social participation? Will keyword-based search adapt seamlessly to social search, or will new models of interaction emerge? Should social interaction be stimulated by curiosity, games, friendship, or other incentives? Is there a "crowdsearching etiquette" to be used when engaging friend or expert communities? Should new sources of information be socially scouted? Which mechanisms may be used to improve or reshape search results based on social ranking? How do social ranking models compare to advertising? Will social interaction solve the problems of data integration? What is the role of semantics, and can it help CrowdSearch?

    The workshop aims to gather researchers from different fields to debate the various concepts, approaches, architectural choices, and technical solutions for opening information search to the active participation of human beings. The key idea is that human beings should be actively involved in different stages of search, and their actions should be composed and intermixed with those of computers to obtain the best possible search results.

    KEYNOTE SPEAKERS

    * Sihem Amer-Yahia, QCRI: Crowd-Sourcing Literature Review in SUNFLOWER

    * Donald Kossmann, ETH Zurich: Using the Crowd to Solve Database Problems

    TOPICS OF INTEREST

    The topics of interest for this workshop include (but are not limited to):

    * Large-scale knowledge discovery, content enrichment, and quality assessment with the support of humans and communities.

    * Models for task crowdsourcing and game creation for information augmentation, integration, extraction, classification, and retrieval.

    * Software models, architectures, and tools for combining information management with human and social computations.

    * Throughput, processing time, and results-quality optimization of queries that involve both data and human sources.

    * Incentive mechanisms for engaging users in tasks and games, either individually or cooperatively within social networks.

    * Techniques for identifying and mitigating spam and abuse in crowd search tasks.

    * Approaches for measuring the effectiveness and quality of human and social applications for information retrieval, and their empirical assessment.

    * Human and social computation in multimedia content processing for search.

    * Use cases and applications of human-assisted information retrieval.

    * The role of crowd search in "big data" applications.

    * User models and human factors in task design for crowdsourced search applications, e.g., cognitive bias, bounded rationality, understanding the boundaries between search questions and spam, etc.

    Registration will be open to all WWW 2012 attendees.

    SUBMISSION GUIDELINES

    The workshop will accept:

    * Regular research papers (maximum length: 6 pages)

    * Industrial / Experience papers (maximum length: 4 pages)

    * Position / Vision papers (maximum length: 4 pages)

    Papers should be submitted as PDF files, in double-column ACM SIG proceedings format (http://www.acm.org/sigs/publications/proceedings-templates; for LaTeX, use "Option 2").

    Papers should be submitted electronically using the EasyChair

    system at

    https://www.easychair.org/conferences/?conf=crowdsearch2012

    no later than 23:59 Pacific Standard Time, February 8, 2012.

    IMPORTANT DATES

    * Abstract submission deadline: February 1, 2012 (strongly recommended)

    * Papers submission deadline: February 8, 2012

    * Notification of papers acceptance: March 5, 2012

    * Papers camera-ready version: March 12, 2012

    * Workshop date: April 17, 2012

    WORKSHOP PROCEEDINGS

    The proceedings of the workshop will be published as CEUR Workshop Proceedings.

    ORGANIZERS

    Ricardo Baeza-Yates, Yahoo! Research (rbaeza@acm.org)

    Stefano Ceri, Piero Fraternali, Politecnico di Milano (stefano.ceri@polimi.it, piero.fraternali@polimi.it)

    Fausto Giunchiglia, Università di Trento (fausto@disi.unitn.it)

    PROGRAM COMMITTEE

    * Omar Alonso, Bing

    * Marco Brambilla, Politecnico di Milano

    * Alessandro Bozzon, Politecnico di Milano

    * Fabio Casati, University of Trento

    * Petros Daras, ITI CERTH

    * Michael J. Franklin, University of California, Berkeley

    * Erol Gelenbe, Imperial College

    * Masataka Goto, National Institute of Advanced Industrial Science and Technology (AIST)

    * Ebroul Izquierdo, Queen Mary University of London

    * Anthony Jameson, DFKI

    * Alejandro Jaimes, Yahoo! Research Barcelona

    * Martha Larson, TU Delft

    * Matt Lease, University of Texas

    * Stefano Mazzocchi, Google

    * Stefano Mizzaro, Università di Udine

    * Wolfgang Nejdl, L3S

    * Neoklis Polyzotis, University of California Santa Cruz

    * Alexander J Quinn, University of Maryland College Park

    * Dave Robertson, University of Edinburgh

    * Yannis Velegrakis, University of Trento

    Workshop Contact: piero.fraternali@polimi.it

