Librarians are adept at searching for material in traditional catalogs because we understand how the catalog searches metadata: it matches queries against the classic fields like title and author. But there is a push to make library catalogs more like Google.
We are dealing with the "Google generation," after all, and the fear is that if our patrons can't find the information they are looking for in library catalogs by searching a few keywords, they won't use the catalogs (and thus library resources) and will always resort to Google and its very convenient (and maybe less reputable) results.
The trend now is to use software that allows for discovery layers in catalogs. "Discovery layers are a relatively new software component for libraries that provide a search interface for users to find information held in the library’s catalog and beyond. Typically, a discovery layer is based on an enterprise search platform that can interact with a metadata index and will normally include additional features that allow your library to customize the search results. The primary function of a discovery layer is a user interface that allows patrons to navigate and find information. This generally sits atop harvested metadata such as catalog records, index/abstract records, and other information from local and/or remote databases."
"The discovery layer interface will generally provide a unified view across resources such as local archival management systems, institutional repositories or the catalog component of a library management system. Some will even crawl local websites. Harvested data is indexed and presented to the end user in a single set of results. One key factor for a discovery layer is that it indexes data that lies outside the library’s immediate catalog - e.g. web based content stored remotely, metadata for copyrighted works not in the library catalog, or content stored in other libraries. The discovery layer can therefore cover a far greater scope than a simple search of the library catalog."
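The harvest-and-index idea described above can be sketched in a few lines of Python. This is only an illustration: the record sources, fields, and titles here are invented, and real discovery products are far more sophisticated (full-text relevance ranking, faceting, deduplication).

```python
# A toy discovery layer: metadata harvested from several sources is
# normalized to one schema, merged into a single index, and searched
# as one unified result set. All data below is invented for illustration.

catalog_records = [
    {"title": "Information Literacy Instruction", "author": "Grassian",
     "source": "catalog"},
    {"title": "The Oxford Guide to Library Research", "author": "Mann",
     "source": "catalog"},
]
repository_records = [
    {"title": "Undergraduate Research Skills Survey", "author": "Smith",
     "source": "institutional repository"},
]

# The "index": harvested records from every source, merged together.
index = catalog_records + repository_records

def discover(query):
    """Return records from ALL sources whose title or author matches
    the query -- one set of results, regardless of where a record lives."""
    q = query.lower()
    return [r for r in index
            if q in r["title"].lower() or q in r["author"].lower()]

# A single keyword search spans the catalog AND the repository.
for record in discover("research"):
    print(record["title"], "--", record["source"])
```

The key point the quote makes is visible even at this scale: because the index holds records harvested from outside the immediate catalog, one search covers a far greater scope than the catalog alone.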
Barbara Fister discussed the flattening of knowledge as libraries try to keep up with Google. "Trying to emulate the convenience and simplicity of Google and Amazon, libraries are (once again) putting too high a value on volume of information and too little on curation. We have told vendors that we want as much full text as possible in the databases we subscribe to, which has made it harder, not easier, for undergraduates to use a database like Academic Search Premier and find articles they can understand that have been published in journals whose titles their teachers will recognize."
Further, she states, "[w]hen we ported the contents of card catalogs into databases, we kept the same data structures. We could search by authors, titles, and subjects, and included bits of description and local location information. The same thing happened in the shift from indexes and abstracts to database retrieval. [But] the current vogue for discovery layers – licensed software maintained with a great deal of local labor by librarians that allows library users to search both the catalog and licensed databases all at once – is at least in part an attempt to flatten the library’s collection of knowledge just as Google does."
"Unlike Google, we have to pay a lot and put a lot of staff hours into it to make it customized for a local collection."
Making library catalogs more like Google seems like a good thing (and I think there is a lot of potential), but our students need to be taught information literacy, too. We cannot market our catalog as being like Google without also discussing how the information is retrieved, and that students may need to go beyond the first few results of a search.
It's an uphill battle to engage students with this information. Their main concern is a good grade that is "earned" through the least amount of effort. Librarians and educators need to focus on filling the "knowledge in action" gap to ensure that our students know how to evaluate and contextualize their research.