Bellot, Patrice; Bogers, Toine; Geva, Shlomo; Hall, Mark; Huurdeman, Hugo; Kamps, Jaap; Kazai, Gabriella; Koolen, Marijn; Moriceau, Véronique; Mothe, Josiane; Preminger, Michael; SanJuan, Eric; Schenkel, Ralf; Skov, Mette; Tannier, Xavier and Walsh, David
(2014).
DOI: https://doi.org/10.1007/978-3-319-11382-1_19
Abstract
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2014 evaluation campaign, which consisted of three tracks. The Interactive Social Book Search Track investigated users' information-seeking behavior when interacting with various sources of information for realistic task scenarios, and how the user interface impacts search and the search experience. The Social Book Search Track investigated the relative value of authoritative metadata and user-generated content for search and recommendation, using a test collection with data from Amazon and LibraryThing, including user profiles and personal catalogues. The Tweet Contextualization Track investigated tweet contextualization: helping a user understand a tweet by providing a short background summary generated from relevant Wikipedia passages aggregated into a coherent whole. INEX 2014 was an exciting year in which, for the third time, we ran our workshop as part of the CLEF labs. This paper gives an overview of all the INEX 2014 tracks, their aims and tasks, the test collections built, and the participants, and presents an initial analysis of the results.