Making Federated Searching More Usable
Frank Cervone
Assistant University Librarian for Information Technology
Northwestern University
The Education Institute
Tech Tuesdays: Talking with Techies #14

Agenda
• What are some of the key issues related to federated searching that could affect usability?
• What have we learned from the usability research that has already been done?
  – What information should be included on the result screen of a federated search?
  – What should you consider when developing specific subject area groupings?
  – Where should the federated search box be placed on a home page?

A fundamental issue
• The federated search engine concept is still new to most people
• They don't have a mental model for it
  – People understand searching Google
  – They understand searching individual databases
• Most people have no clue what "metasearch" means

While it may be "old hat" for some of us…
• Federated search is still relatively new
  – For the majority, it is still in the early stages of implementation or planning
• Federated search can be complicated
  – Offering services as a seamlessly integrated whole
  – How does the architecture of the software influence service implementation?

Component integration
• Tight vs. loose integration
  – Integration with other services
• Advantages of tight integration
  – "Big bang" implementation
• Advantages of looser integration
  – Staged introduction of the services

Key questions - Authentication and authorization
• Support for external authentication
  – LDAP, Active Directory
• How does it interact with an authentication service to determine authorization to services?
  – Allow only students in grades 6-8
  – Deny access from all campuses except Main

Key questions - Interface customization
• Customized searches for specific populations?
  – Undergrads, Med school, specific branches?
• Can user interfaces be customized at a departmental, divisional, or institutional level?
• How are "collections" of databases defined?
• Can users create their own collections of databases?
  – Can individual users save "preferences"?
    • Number of items returned
    • Preferred databases

Issues related to usability
• The issues don't differ much regardless of the model
• Federated search usability shares many characteristics with usability testing of a library's main web site
• Certainly, the goals are similar:
  – Ensure that people are able to find and use resources in the most efficient and straightforward manner possible

Upfront work
• Customize the OpenURL resolver's "target menu" page
• Some things to consider
  – Limit the number of sources to display
  – Display alternative services
    • Link to catalog
    • Link to ILL
    • Citation download
    • Online chat
• (An example OpenURL request is sketched a few slides below)

Our testing program
• A program of usability testing has been in place for well over three years
• In that time, we have tested various aspects of our web site and our library catalog
• Each iteration feeds into the next, so we are able to incrementally improve the interface over time

Introducing federated searching
• We decided we wanted this to be a "named" service
• But what to call it?
  – We gave participants a quick tour of the federated search engine, by way of a usability testing question
  – We then asked them how they would describe what the search engine is
• None of the test participants spontaneously used a "standard" term
  – metasearch, federated searching, megasearch
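As a point of reference for the "Upfront work" slide above: an OpenURL resolver's target menu is driven entirely by the citation metadata carried in the incoming request. The sketch below is only an illustration of what such a request looks like, using the OpenURL 1.0 (Z39.88-2004) KEV journal format; the resolver base URL and the citation values are hypothetical.

```python
# Illustrative sketch only: builds an OpenURL 1.0 (Z39.88-2004) KEV request
# for a journal article. The resolver base URL and citation data are
# hypothetical; a real link resolver receives a request like this and renders
# its "target menu" (full text, catalog link, ILL, citation download, chat)
# from the metadata it carries.
from urllib.parse import urlencode

RESOLVER_BASE = "https://resolver.example.edu/openurl"  # hypothetical resolver

citation = {
    "url_ver": "Z39.88-2004",                       # OpenURL version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal article format
    "rft.genre": "article",
    "rft.atitle": "Usability of federated search interfaces",
    "rft.jtitle": "Journal of Example Librarianship",
    "rft.issn": "0000-0000",                        # placeholder ISSN
    "rft.volume": "12",
    "rft.issue": "3",
    "rft.spage": "145",
    "rft.date": "2006",
}

print(f"{RESOLVER_BASE}?{urlencode(citation)}")
```

Which services appear on the target menu, and in what order, is a resolver configuration decision rather than a search interface decision, which is why the slide treats it as "upfront work."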
So what happened?
• Participants described it in terms of our current resource finder
  – Often with extra superlative adjectives or descriptors
• We knew from prior testing that names for services did not, in general, stick with our undergraduates
  – "NUcat" and "Answers Online"
• We're not sure why this is
  – But we could see the blank stares when we used the names

Preference testing
• Consequently, we ran a preference test
• We tested the working name for the project
  – What did they think of "Einstein"?
• Surprisingly, they liked it
• As one student described it: "Well, it's not clear what it is, but now that I know, I'll remember it. I mean, Google doesn't make a whole lot of sense as a name either."

What is the point?
1. When in doubt, perform a preference test and find out what resonates with people
2. "Common wisdom" is sometimes wrong
   – Many usability studies have dismissed FAQ pages as a waste of time
     • But this observation depends on context
     • For a well-known site, it is true that genuinely frequent questions are rare
     • However, our testing showed that this new service really did have a set of frequently asked questions!
3. The last point of the story is more complex

A simple and streamlined interface?
• What's the goal of a federated search engine?
  – Make it simple!
  – Make it like Google?
• The federated search model is different
  – Actually quite different from Google
    • Database? Subject area?
    • Google does not provide the same mental model

Are strict subject categories a good thing?
• They do help filter out some of the "noise" from a less-than-specific search
• However, what if the searcher does not know what the subject area is?
• What if we have not defined a subject area that is logical from the perspective of the patron?
• What about interdisciplinary studies?
• Strict subject categories could be an impediment

The answers are sometimes surprising…
• They defy the "conventional" wisdom we may have about the best way to approach an information retrieval task
• Usability testing of federated search products at Northwestern and other large research universities has demonstrated some of the interesting conundrums that federated search engines raise

For example…
• Design point - the interface needed to be very simplified and designed to appeal to undergraduates
• Core group of databases
  – Would cover the broad range of programs at the university
  – No need to know about a specific subject area or even about specific databases

Conflict with other goals…
• This conflicts with librarians' desire to teach about using databases
  – Which database is most appropriate for a given topic
• Several factors went into the decision
  – Faculty and other researchers do not care about "databases"
    • Faculty are concerned with the source of the material, that is, the journal itself
  – Databases change configurations regularly anyway
    • FirstSearch to ProQuest
    • Web of Science to SCOPUS
  – Instruction is not something that can be forced on a web visitor

What does the research say?
• Several usability studies indicate a clear preference for "simple search"
  – Does not require the user to select a database
  – Does not provide any sort of "complex" query statements
  – The user defaults to a group of databases if no group is specifically selected
    • The default typically provides broad interdisciplinary coverage
    • This works for specific areas as well

The advantages of quick search
• Users feel they are very successful at finding resources
• It is a good place to start when they are beginning to research their topic
• An unexpected advantage: it includes areas that may not be self-evident
  – Graduate students and faculty often find many significant citations they were not previously aware of
  – Most participants in these tests were only familiar with a very limited number of databases in their area
  – They rarely searched in other databases outside their realm of expertise
  – This is similar to earlier studies on searching behavior

Why?
• People become accustomed to searching in specific native interfaces
  – They are not especially interested in learning others
  – Particularly when the benefits are not clear
• Researchers claim their behaviors are different
  – This is not true
  – There is a surprising amount of similarity among all types of users

Conflict!
• Students (graduate and undergraduate) have clear expectations about how results should be displayed
  – In relevance order
    • Even when it's not clear how relevance is determined
• This is in direct conflict with the traditional approach
  – Alphabetical within reverse chronological order
• By forcing the traditional ordering, we create a disconnect

You will do as you are told…
• As a result, at NU we have decided to display results in relevance order
• Other institutions have taken other approaches
  – Patrons are able to figure out the display order fairly quickly
    • If it is alphabetical within reverse chronological order
  – But they do not like it
  – When they have the opportunity, they change the order to relevance

However, this is not without problems
• How is relevance determined?
  – By the federated search engine
  – By the individual databases
• (One approach to merging scores across databases is sketched a few slides below)

Looking for articles
• Novice researchers "fish" for articles
• Advanced researchers tend to have specific articles in mind
  – Advanced researchers say they tend to focus on databases for these types of searches
  – But when presented with a federated search, this behavior cannot be reliably observed
• The vast majority of library patron communities, when given the choice, prefer starting at a "Quick Search"
• In fact, when presented with a list that allows them to "find a database," test participants generally find it confusing

"I'd just quit and go to ProQuest"
• Because it was one of the few databases the student knew the name of
• It seems to "provide good articles most of the time"
• This type of response, although perhaps articulated differently, was common
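To illustrate the relevance conundrum flagged above: each native database scores relevance on its own scale, so a federated search engine has to reconcile those scales before it can present a single merged list. The sketch below is not any vendor's actual algorithm; it is a minimal illustration, assuming each source returns raw numeric scores, of one common approach (per-source min-max normalization followed by a merged sort). The sample data are invented.

```python
# Minimal sketch of one way a federated search engine might merge results
# from databases that score relevance on different scales. Per-source
# min-max normalization is shown for illustration only; the sample records
# are invented.

def normalize(results):
    """Rescale one source's raw scores to the 0-1 range."""
    scores = [r["score"] for r in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
    return [dict(r, score=(r["score"] - lo) / span) for r in results]

def merge(result_sets):
    """Normalize each source separately, then interleave by normalized score."""
    merged = [r for results in result_sets for r in normalize(results)]
    return sorted(merged, key=lambda r: r["score"], reverse=True)

# Invented sample data: two databases using very different scoring scales.
database_a = [
    {"title": "Article A1", "source": "Database A", "score": 950},
    {"title": "Article A2", "source": "Database A", "score": 400},
]
database_b = [
    {"title": "Article B1", "source": "Database B", "score": 0.82},
    {"title": "Article B2", "source": "Database B", "score": 0.15},
]

for record in merge([database_a, database_b]):
    print(f'{record["score"]:.2f}  {record["source"]}: {record["title"]}')
```

Even with normalization, the merged order still depends on how each native database computed its raw scores in the first place, which is exactly why the slide presents this as an open problem rather than a solved one.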
Broad vs. focused coverage
• This is a delicate balancing act we must perform if our web sites are to be successful
• Finding the right grouping of databases for "subject areas" or "interests" can be a major problem in setting up the federated search engine
• The natural desire to set up long lists of items to provide comprehensive coverage may work against the library's best interest

Too much leads to too little
• Long lists of resources are often developed in the hope of promoting awareness of the various resources a library purchases
  – The hope is that people will click through to the native interface when it is appropriate
  – In usability tests, there is no evidence that creating these long lists does, in fact, enhance awareness
• Students often found these long lists of resources confusing
  – "Those long lists really make me feel stupid."

If it's not federatable, it's ignored
• In general, people don't use non-federatable databases from within a federated search interface
  – They simply ignore these databases
• One exception is a well-known resource people expect to see, even if it's not federatable
  – HeinOnline or LexisNexis
• Alleviate the problem by having a "best choices" group of three databases in each area
  – Provide the rest as a secondary grouping afterwards

Results lists should be informative
• Streamlined result lists can be confusing
• Result lists work better when a more "traditional" citation format is used

Custom combinations of databases
• Functionality for combining databases in unique ways, geared toward the specific interests of the researcher
• A clear overriding theme: patrons find this confusing and, for the most part, avoid using it
• The default view for combining databases is based on a subject classification model
  – For people working in multidisciplinary areas, this is not usable

We can't fix that …
• Federated search software cannot be easily customized to address this
• An important issue in usability testing: sometimes we cannot immediately "fix the problem"
• Solving usability issues often can only be accomplished by working in partnership with our vendors to design solutions that will work better for our patrons

Some emerging best practices
• Predefine collections of databases
  – Broad subject areas
    • At least at first
• Put the most frequently used databases at the top of lists
• If possible, expose descriptive information about databases

Resource definition considerations
• Have correct technical information
  – Is the Z39.50 configuration "up to spec"?
• Categorize resources early in the process
  – Otherwise you'll have to retouch every database record
  – Kick-start the process
    • Appoint a small group to make the first pass at each area
• Clearly define the extent of coverage
  – What databases will be included in each collection?
  – Will there be a maximum number?
• Think about performance characteristics
  – Differences in results among databases
  – Do these databases "play well" together?
  – If not, what do you do?
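To make the preceding best practices and resource definition considerations concrete, here is a minimal sketch of how predefined collections and per-database connection details might be recorded. All field names, hosts, and groupings are invented for illustration; the Z39.50 parameters (host, port, database name, record syntax) stand in for the kind of "correct technical information" the slide refers to, and the shape of the data reflects the "best choices" shortlist plus secondary grouping described above.

```python
# Illustrative sketch of predefined database collections for a federated
# search configuration. All names, hosts, and groupings are invented; the
# point is the shape of the data: a "best choices" shortlist per subject
# area, a secondary grouping, and the Z39.50 connection details that must
# be correct for each federatable resource.

resources = {
    "acme_index": {
        "label": "ACME Periodical Index",        # hypothetical database
        "federatable": True,
        "z3950": {
            "host": "z3950.example.org",
            "port": 210,
            "database": "ACME",
            "record_syntax": "USMARC",
        },
    },
    "campus_catalog": {
        "label": "Library Catalog",
        "federatable": True,
        "z3950": {
            "host": "catalog.example.edu",
            "port": 7090,
            "database": "CATALOG",
            "record_syntax": "USMARC",
        },
    },
    "legal_archive": {
        "label": "Well-Known Legal Archive",      # expected by users, not federatable
        "federatable": False,
        "link": "https://legal.example.com/",
    },
}

collections = {
    "Social Sciences": {
        "best_choices": ["acme_index", "campus_catalog"],  # short primary list
        "also_available": ["legal_archive"],               # secondary grouping
    },
}

for area, groups in collections.items():
    primary = [resources[key]["label"] for key in groups["best_choices"]]
    print(f"{area}: start with {', '.join(primary)}")
```

Keeping the primary list deliberately short mirrors the "too much leads to too little" finding: a small, curated "best choices" set is what test participants could actually use.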
In summary
• OpenURL and federated searching are important new services that we are bringing to our patrons
• It is critical for us to make sure that these services work from the perspective of the patron
• They represent extensions of the library that further enable self-service information provision, which will be critical to the library of the future
• Although usability testing in this area may be in its infancy, it is critical that we explore it vigorously. While we know some things about how federated search works, there is still a lot more for us to learn, and our future depends on it

Thanks!
Frank Cervone
Assistant University Librarian for Information Technology
Northwestern University
Evanston, IL 60208
[email protected]

© 2006 Frank Cervone