User evaluation of the relevance of an image often depends on context, which
implies that the performance of a ranking function may also be highly context dependent.
A key component of a retrieval system operating in such an environment is context
awareness in the query process. An early approach to developing methods utilizing
context in information retrieval was so-called context-aware retrieval applications.
These applications could be interactive, where the user directly issued a request to
retrieve relevant data items, or proactive, where documents were presented to the user
automatically. To help define the field, the following context-aware features have been identified that such applications should support:
- presentation of information and services to a user depending on the perceived user context,
- automatic execution of a service triggered by the context of the user, and
- tagging of context data to information elements for later retrieval.
The Fast Search & Transfer (FAST) search engine is one example of a system
performing context-aware computing, an approach it terms contextual insight. The system supports
text-based information retrieval from distributed databases by collecting and processing
the data and storing the processed data in a central database.
None of the approaches discussed above combine content-based queries with text-based
queries while also utilizing context in a distributed setting, and none of the systems
support the combination of content-based queries with text-based queries on
full-text document collections. Instead, most of them rely on manually added
annotations attached to all images stored in single image collections. The FAST system
is an exception and supports information retrieval from full-text documents, but
does not support content-based image retrieval in the current version of the system. In
addition, the FAST approach makes use of a centralized database solution where the
contents of all participating databases are copied into a central repository for further
processing. The other systems proposed for combining content-based image retrieval
and text retrieval support only keyword searches against recorded annotations in
combination with the content-based image retrieval, and these systems are not
developed for distributed settings. Moreover, most of these systems do not allow for
users to choose both the seed image and specify the query terms.
The proposed CAIRANK system represents an approach that
combines content-based image retrieval algorithms with text-retrieval algorithms
developed for full-text queries, using context to improve the quality of result-set
ranking in a distributed setting. The CAIRANK approach also involves the
user more closely in the process of formulating queries, in that the user specifies both the seed
image and the query terms to be used in the query.
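The section does not specify how CAIRANK fuses the two evidence sources, so the following is only an illustrative sketch of one common way such a combination can be done: a linear weighting of a content-based image similarity score and a text relevance score, where the weight (the hypothetical `alpha` parameter below) could be chosen based on context. All names and values here are assumptions for illustration, not CAIRANK's actual ranking function.

```python
def combined_score(image_sim, text_score, alpha=0.5):
    """Linearly combine a content-based and a text-based score.

    image_sim  -- similarity of a candidate image to the seed image, in [0, 1]
    text_score -- relevance of the candidate's document to the query terms, in [0, 1]
    alpha      -- hypothetical context-dependent weight; a context favouring
                  visual similarity would push alpha towards 1
    """
    return alpha * image_sim + (1.0 - alpha) * text_score


def rank(candidates, alpha=0.5):
    """Sort (id, image_sim, text_score) tuples by combined score, best first."""
    return sorted(candidates,
                  key=lambda c: combined_score(c[1], c[2], alpha),
                  reverse=True)


# Toy example: three candidates with made-up scores.
candidates = [("a", 0.9, 0.2), ("b", 0.5, 0.8), ("c", 0.7, 0.5)]
print([cid for cid, _, _ in rank(candidates, alpha=0.5)])  # → ['b', 'c', 'a']
```

A context-sensitive system could, for instance, raise `alpha` when the user's context suggests the seed image dominates the information need, and lower it when the query terms do.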