The Open University
Multimedia resource discovery

Rüger, Stefan (2011). Multimedia resource discovery. In: Melucci, Massimo and Baeza-Yates, Ricardo eds. Advanced Topics in Information Retrieval. The Information Retrieval Series. Heidelberg: Springer, pp. 157–186.

Full text available as:
Full text not publicly available due to copyright restrictions; a copy can be requested from the OU author.
DOI (Digital Object Identifier) Link: http://dx.doi.org/10.1007/978-3-642-20946-8_7

Abstract

This chapter examines the challenges and opportunities of Multimedia Information Retrieval and corresponding search engine applications. Computer technology has changed our access to information tremendously: we used to search library card catalogues by author or title (which we had to know) in order to locate relevant books; now we can issue keyword searches over the full text of whole book repositories in order to identify the authors, titles and locations of relevant books. What about the corresponding challenge of finding multimedia by fragments, examples and excerpts? Rather than asking for a music piece by artist and title, can we hum its tune to find it? Can doctors submit scans of a patient to retrieve medically similar images of diagnosed cases from a database? Can your mobile phone take a picture of a statue and, by sending that picture to a service, tell you about the statue's artist and significance?

In an attempt to answer some of these questions, we introduce basic concepts of multimedia resource discovery technologies for a number of different query and document types: piggy-back text search, i.e., reducing the multimedia documents to pseudo-text documents; automated annotation of visual components; content-based retrieval, where the query is an image; and fingerprinting to match near-duplicates.
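To make the idea of content-based retrieval concrete, the following is a minimal sketch in which images are reduced to grey-value histograms and ranked by histogram intersection against a query image. This is a generic illustration of the technique, not the chapter's own implementation; the toy "images", bin count and similarity measure are assumptions chosen for brevity.

```python
def histogram(pixels, bins=4):
    """Quantise 8-bit grey values into a normalised histogram."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint ones."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def rank(query_pixels, collection):
    """Score every image in the collection against the query, best match first."""
    q = histogram(query_pixels)
    scored = [(name, intersection(q, histogram(px))) for name, px in collection]
    return sorted(scored, key=lambda t: -t[1])

# Toy "images": flat lists of 8-bit grey values.
collection = [
    ("dark", [10, 20, 30, 40] * 4),
    ("bright", [200, 210, 220, 230] * 4),
]
print(rank([15, 25, 35, 45] * 4, collection)[0][0])  # prints "dark"
```

Real systems replace the grey-value histogram with richer colour, texture or learned features, but the query-by-example loop — extract features, compare, rank — has the same shape.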

Some of the research challenges arise from the semantic gap between the simple pixel properties that computers can readily index and high-level human concepts; related to this is an inherent technological limitation of automated annotation of images from pixels alone. Other challenges arise from polysemy, i.e., the many meanings and interpretations inherent in visual material, and the correspondingly wide range of a user's information needs.

This chapter demonstrates how these challenges can be tackled by automated processing and machine learning, and by utilising the skills of the user, for example through browsing or through a process called relevance feedback, thus putting the user at centre stage. The latter is made easier by "added value" technologies, exemplified here by summaries of complex multimedia objects such as TV news, information visualisation techniques for document clusters, visual search by example, and methods to create browsable structures within the collection.
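Relevance feedback of the kind described above is classically implemented with the Rocchio formula, which moves the query vector towards documents the user marked relevant and away from those marked non-relevant. The sketch below is a generic illustration, not the chapter's method; the weights alpha, beta and gamma are conventional defaults, not values prescribed by the source.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Return a modified query vector, shifted towards the centroid of
    relevant documents and away from the centroid of non-relevant ones."""
    def centroid(vectors):
        if not vectors:
            return [0.0] * len(query)
        return [sum(v[i] for v in vectors) / len(vectors)
                for i in range(len(query))]

    rel, nonrel = centroid(relevant), centroid(nonrelevant)
    return [alpha * q + beta * r - gamma * n
            for q, r, n in zip(query, rel, nonrel)]

# One round of feedback: the user marked one document as relevant.
q2 = rocchio([1.0, 0.0], relevant=[[0.0, 1.0]], nonrelevant=[])
print(q2)  # prints [1.0, 0.75]: the query has shifted towards the relevant example
```

Iterating this loop — search, judge, re-weight, search again — is what puts the user at centre stage: each round of judgments refines the query without the user having to articulate their information need explicitly.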

Item Type: Book Chapter
Copyright Holders: 2011 Stefan Rüger, Springer-Verlag
ISBN: 3-642-20945-9, 978-3-642-20945-1
Academic Unit/Department: Knowledge Media Institute
Item ID: 29020
Depositing User: Stefan Rüger
Date Deposited: 28 Jun 2011 15:54
Last Modified: 25 Oct 2012 09:36
URI: http://oro.open.ac.uk/id/eprint/29020