 Title & Authors
Interactive Information Retrieval: An Introduction
Borlund, Pia
 Abstract
The paper introduces the research area of interactive information retrieval (IIR) from a historical point of view. The focus is on evaluation, because much IR research deals with evaluation methodology, owing to the core research interests in IR performance, system interaction, and satisfaction with retrieved information. In order to position IIR evaluation, the Cranfield model and the series of tests that led to it are outlined. Three iconic user-oriented studies and projects that have all contributed to how IIR is perceived and understood today are presented: the MEDLARS test, the Book House fiction retrieval system, and the Okapi project. On this basis, the call for alternative IIR evaluation approaches, motivated by the three revolutions (the cognitive, the relevance, and the interactive revolutions) put forward by Robertson & Hancock-Beaulieu (1992), is presented. As a response to this call, the IIR evaluation model by Borlund (e.g., 2003a) is introduced. The objective of the IIR evaluation model is to facilitate IIR evaluation as close as possible to actual information searching and IR processes, though still in a relatively controlled evaluation environment, in which the test instrument of a simulated work task situation plays a central part.
 Keywords
interactive information retrieval; IIR; evaluation; human-computer information retrieval; HCIR; IIR evaluation model; user-oriented information retrieval; information retrieval; IR; history
 Language
English
 Cited by
1. Has Retrieval Technology in Vertical Site Search Systems Improved over the Years? A Holistic Evaluation for Real Web Systems. Journal of Information Science Theory and Practice, 2015, vol. 3, no. 4, pp. 19-34.
 References
1. Aitchison, J. & Cleverdon, C. (1963). Aslib Cranfield research project: Report on the test of the Index of Metallurgical Literature of Western Reserve University. Cranfield: The College of Aeronautics.
2. Beaulieu, M. & Jones, S. (1998). Interactive searching and interface issues in the Okapi Best Match Probabilistic Retrieval System. Interacting with Computers, 10, 237-248.
3. Beaulieu, M. (1997). Experiments on interfaces to support query expansion. Journal of Documentation, 53 (1), 8-19.
4. Beaulieu, M., Robertson, S. & Rasmussen, E. (1996). Evaluating interactive systems in TREC. Journal of the American Society for Information Science, 47 (1), 85-94.
5. Belkin, N.J. (1980). Anomalous states of knowledge as a basis for information retrieval. The Canadian Journal of Information Science, 5, 133-143.
6. Belkin, N.J. (2008). Some(what) grand challenges for information retrieval. ACM SIGIR Forum, 42 (1), 47-54.
7. Belkin, N.J., Cool, C., Croft, W.B. & Callan, J.P. (1993). The effect of multiple query representation on information retrieval system performance. In R. Korfhage, E. Rasmussen, & P. Willett (Eds.), Proceedings of the 16th ACM SIGIR Conference on Research and Development in Information Retrieval. Pittsburgh, 1993. New York: ACM Press, 339-346.
8. Belkin, N.J., Oddy, R. & Brooks, H. (1982). ASK for information retrieval: Part I. Background and theory. Journal of Documentation, 38 (2), 61-71.
9. Borlund, P. & Ingwersen, P. (1997). The development of a method for the evaluation of interactive information retrieval systems. Journal of Documentation, 53 (3), 225-250.
10. Borlund, P. & Ingwersen, P. (1998). Measures of relative relevance and ranked half-life: Performance indicators for interactive IR. In B.W. Croft, A. Moffat, C.J. van Rijsbergen, R. Wilkinson, & J. Zobel (Eds.), Proceedings of the 21st ACM SIGIR Conference on Research and Development in Information Retrieval. Melbourne, 1998. Australia: ACM Press/York Press, 324-331.
11. Borlund, P. (2000a). Evaluation of interactive information retrieval systems. Åbo: Åbo Akademi University Press. Doctoral thesis, Åbo Akademi University.
12. Borlund, P. (2000b). Experimental components for the evaluation of interactive information retrieval systems. Journal of Documentation, 56 (1), 71-90.
13. Borlund, P. (2003a). The IIR evaluation model: A framework for evaluation of interactive information retrieval systems. Information Research, 8 (3). Retrieved from http://informationr.net/ir/8-3/paper152.html
14. Borlund, P. (2003b). The concept of relevance in IR. Journal of the American Society for Information Science and Technology, 54 (10), 913-925.
15. Bruce, H.W. (1994). A cognitive view of the situational dynamism of user-centered relevance estimation. Journal of the American Society for Information Science, 45, 142-148.
16. Cleverdon, C.W. & Keen, E.M. (1966). Aslib Cranfield Research Project: Factors determining the performance of indexing systems. Vol. 2: Results. Cranfield.
17. Cleverdon, C.W. (1960). Aslib Cranfield Research Project: Report on the first stage of an investigation into the comparative efficiency of indexing systems. Cranfield: The College of Aeronautics.
18. Cleverdon, C.W. (1962). Aslib Cranfield Research Project: Report on the testing and analysis of an investigation into the comparative efficiency of indexing systems. Cranfield.
19. Cleverdon, C.W., Mills, J. & Keen, E.M. (1966). Aslib Cranfield Research Project: Factors determining the performance of indexing systems. Vol. 1: Design.
20. Cool, C. & Belkin, N.J. (2011). Interactive information retrieval: History and background. In I. Ruthven & D. Kelly (Eds.), Interactive information seeking, behaviour and retrieval. London: Facet Publishing, 1-14.
21. Ellis, D. (1989). A behavioural approach to information retrieval systems design. Journal of Documentation, 45 (3), 171-212.
22. Ellis, D. (1996). Progress and problems in information retrieval. London: Library Association Publishing.
23. Fidel, R. (2012). Human information interaction: An ecological approach to information behavior. Cambridge, MA: MIT Press.
24. Goodstein, L.P. & Pejtersen, A.M. (1989). The Book House: System functionality and evaluation. Roskilde, Denmark: Risø National Laboratory (Risø-M-2793).
25. Gull, C.D. (1956). Seven years of work on the organization of materials in the special library. American Documentation, 7, 320-329.

26. Harter, S.P. & Hert, C.A. (1997). Evaluation of information retrieval systems: Approaches, issues, and methods. In M.E. Williams (Ed.), Annual Review of Information Science and Technology, 32, 3-94.
27. Harter, S.P. (1996). Variations in relevance assessments and the measurement of retrieval effectiveness. Journal of the American Society for Information Science, 47 (1), 37-49.
28. Ingwersen, P. & Järvelin, K. (2005). The turn: Integration of information seeking and retrieval in context. Dordrecht, Netherlands: Springer.
29. Ingwersen, P. (1992). Information retrieval interaction. London: Taylor Graham.
30. Järvelin, K. & Kekäläinen, J. (2000). IR evaluation methods for retrieving highly relevant documents. In N.J. Belkin, P. Ingwersen, & M.-K. Leong (Eds.), Proceedings of the 23rd ACM SIGIR Conference on Research and Development in Information Retrieval. Athens, Greece, 2000. New York, NY: ACM Press, 41-48.
31. Järvelin, K. (2011). Evaluation. In I. Ruthven & D. Kelly (Eds.), Interactive information seeking, behaviour and retrieval. London: Facet Publishing, 113-138.
32. Kelly, D. (2009). Methods for evaluating interactive information retrieval systems with users. Foundations and Trends in Information Retrieval, 3 (1-2), 1-224.
33. Kuhlthau, C.C. (1993). Seeking meaning: A process approach to library and information science. Norwood, NJ: Ablex Publishing.
34. Lancaster, W.F. (1969). MEDLARS: Report on the evaluation of its operating efficiency. American Documentation, 20, 119-142.
35. Lu, W., Robertson, S.E. & Macfarlane, A. (2007). CISR at INEX 2006. In N. Fuhr, M. Lalmas, & A. Trotman (Eds.), Comparative Evaluation of XML Information Retrieval Systems: 5th International Workshop of the INitiative for the Evaluation of XML Retrieval (INEX 2006), Dagstuhl, Germany. LNCS 4518. Springer-Verlag, 57-63.
36. Lu, W., Robertson, S.E. & Macfarlane, A. (2006). Field-weighted XML retrieval based on BM25. In N. Fuhr, M. Lalmas, S. Malik, & G. Kazai (Eds.), Advances in XML Information Retrieval and Evaluation: Fourth Workshop of the INitiative for the Evaluation of XML Retrieval (INEX 2005), Dagstuhl, 28-30 November 2005. LNCS 3977. Springer-Verlag.
37. Marchionini, G. (2006). Toward human-computer information retrieval. Bulletin of the American Society for Information Science, June/July 2006. Retrieved from http://www.asis.org/Bulletin/Jun-06/marchionini.html
38. Martyn, J. & Lancaster, F.W. (1981). Investigative methods in library and information science: An introduction. Virginia: Information Resources Press (2nd impression, September 1991).
39. Pejtersen, A.M. & Austin, J. (1983). Fiction retrieval: Experimental design and evaluation of a search system based on users' value criteria (Part 1). Journal of Documentation, 39 (4), 230-246.
40. Pejtersen, A.M. & Austin, J. (1984). Fiction retrieval: Experimental design and evaluation of a search system based on users' value criteria (Part 2). Journal of Documentation, 40 (1), 25-35.
41. Pejtersen, A.M. & Fidel, R. (1998). A framework for work centered evaluation and design: A case study of IR on the web. Grenoble, March 1998. [Working paper for MIRA Workshop, unpublished].
42. Pejtersen, A.M. & Rasmussen, J. (1998). Effectiveness testing of complex systems. In M. Helander (Ed.), Handbook of human-computer interaction. Amsterdam: North-Holland, 1514-1542.
43. Pejtersen, A.M. (1980). Design of a classification scheme for fiction based on an analysis of actual user-librarian communication and use of the scheme for control of librarians' search strategies. In O. Harbo & L. Kajberg (Eds.), Theory and application of information research. Proceedings of the 2nd International Research Forum on Information Science. London: Mansell, 146-159.
44. Pejtersen, A.M. (1989). A library system for information retrieval based on a cognitive task analysis and supported by an icon-based interface. In Proceedings of the 12th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 1989). ACM, 40-47.
45. Pejtersen, A.M. (1991). Interfaces based on associative semantics for browsing in information retrieval. Roskilde, Denmark: Risø National Laboratory (Risø-M-2883).
46. Pejtersen, A.M. (1992). New model for multimedia interfaces to online public access catalogues. The Electronic Library, 10 (6), 359-366.
47. Pejtersen, A.M., Olsen, S.E. & Zunde, P. (1987). Development of a term association interface for browsing bibliographic data bases based on end users' word associations. In I. Wormell (Ed.), Knowledge engineering: Expert systems and information retrieval. London: Taylor Graham, 92-112.
48. Rasmussen, J., Pejtersen, A.M. & Goodstein, L.P. (1994). Cognitive systems engineering. New York: John Wiley & Sons.
49. Robertson, S.E. & Hancock-Beaulieu, M.M. (1992). On the evaluation of IR systems. Information Processing & Management, 28 (4), 457-466.
50. Robertson, S.E. (1981). The methodology of information retrieval experiment. In K. Sparck Jones (Ed.), Information retrieval experiments. London: Butterworths, 9-31.

51. Robertson, S.E. (1997a). Overview of the Okapi projects. Journal of Documentation, 53 (1), 3-7.
52. Robertson, S.E. (Ed.). (1997b). Special issue on Okapi. Journal of Documentation, 53 (1).
53. Robertson, S.E., Lu, W. & MacFarlane, A. (2006). XML-structured documents: Retrievable units and inheritance. In H. Legind Larsen, G. Pasi, D. Ortiz-Arroyo, T. Andreasen, & H. Christiansen (Eds.), Proceedings of the 7th International Conference on Flexible Query Answering Systems, FQAS 2006, Milan, Italy, June 7-10, 2006. LNCS 4027. Springer-Verlag, 121-132.
54. Robertson, S.E., Walker, S. & Beaulieu, M. (1997). Laboratory experiments with Okapi: Participation in the TREC programme. Journal of Documentation, 53 (1), 20-34.
55. Ruthven, I. & Kelly, D. (Eds.). (2011). Interactive information seeking, behaviour and retrieval. London: Facet Publishing.
56. Ruthven, I. (2008). Interactive information retrieval. Annual Review of Information Science and Technology, 42 (1), 43-91.
57. Salton, G. (1972). A new comparison between conventional indexing (MEDLARS) and automatic text processing (SMART). Journal of the American Society for Information Science, (March-April), 75-84.
58. Salton, G. (1981). The SMART environment for retrieval system evaluation: Advantages and problem areas. In K. Sparck Jones (Ed.), Information retrieval experiments. London: Butterworths, 316-329.
59. Sanderson, M. (2010). Test collection based evaluation of information retrieval systems. Foundations and Trends in Information Retrieval, 4 (4), 247-375.
60. Saracevic, T. (1995). Evaluation of evaluation in information retrieval. In E.A. Fox, P. Ingwersen, & R. Fidel (Eds.), Proceedings of the 18th ACM SIGIR Conference on Research and Development in Information Retrieval. Seattle, 1995. New York: ACM Press, 138-146.
61. Schamber, L. (1994). Relevance and information behavior. In M.E. Williams (Ed.), Annual Review of Information Science and Technology (ARIST), 29. Medford, NJ: Learned Information, Inc., 3-48.
62. Schamber, L., Eisenberg, M.B. & Nilan, M.S. (1990). A re-examination of relevance: Toward a dynamic, situational definition. Information Processing & Management, 26, 755-775.
63. Sharp, J. (1964). Review of the Cranfield-WRU test literature. Journal of Documentation, 20 (3), 170-174.
64. Sparck Jones, K. (1971). Automatic keyword classification for information retrieval. London: Butterworths.
65. Sparck Jones, K. (1981a). Retrieval system tests 1958-1978. In K. Sparck Jones (Ed.), Information retrieval experiments. London: Butterworths, 213-255.
66. Sparck Jones, K. (1981b). The Cranfield tests. In K. Sparck Jones (Ed.), Information retrieval experiments. London: Butterworths, 256-284.
67. Sparck Jones, K. (Ed.). (1981c). Information retrieval experiments. London: Butterworths.
68. Spink, A., Greisdorf, H. & Bateman, J. (1998). From highly relevant to not relevant: Examining different regions of relevance. Information Processing & Management, 34 (5), 599-621.
69. Swanson, D.R. (1965). The evidence underlying the Cranfield results. Library Quarterly, 35, 1-20.
70. Swanson, D.R. (1986). Subjective versus objective relevance in bibliographic retrieval systems. Library Quarterly, 56, 389-398.
71. Tague-Sutcliffe, J. (1992). The pragmatics of information retrieval experimentation, revisited. Information Processing & Management, 28 (4), 467-490.
72. Thorne, R.G. (1955). The efficiency of subject catalogues and the cost of information searches. Journal of Documentation, 11 (3), 130-148.
73. Voorhees, E.M. & Harman, D.K. (2005a). The text retrieval conference. In E.M. Voorhees & D.K. Harman (Eds.), TREC: Experiment and evaluation in information retrieval. Cambridge, MA: The MIT Press, 3-19.
74. Voorhees, E.M. & Harman, D.K. (Eds.). (2005b). TREC: Experiment and evaluation in information retrieval. Cambridge, MA: The MIT Press.
75. Walker, S. & De Vere, R. (1990). Improving subject retrieval in online catalogues: 2. Relevance feedback and query expansion. London: British Library. (British Library Research Paper 72).
76. Walker, S. (1989). The Okapi online catalogue research projects. In The online catalogue: Developments and directions. London: The Library Association, 84-106.
77. Wang, P. (2001). Methodologies and methods for user behavioral research. In M.E. Williams (Ed.), Annual Review of Information Science and Technology, 34, 53-99.
78. Wilson, M. (2011). Interfaces for information retrieval. In I. Ruthven & D. Kelly (Eds.), Interactive information seeking, behaviour and retrieval. London: Facet Publishing, 139-170.
79. Xie, I. (2008). Interactive information retrieval in digital environments. IGI Publishing.