Verify Internet Sources, Or Shepardize The Internet

Inside Higher Ed recently ran an article about information literacy on the web. The author argues that websites should carry some form of updating information for the user.

"Every law school student knows “shepardizing.” It is the process by which one learns how and in what ways to research a legal case that may have been affected by subsequent cases.  Shepardizing is a critical process in a legal system based on precedent.  Stare decisis notwithstanding, one must know the latest decision on any specific legal question to proceed to the next.  In the old days, it was done by hand and rather laborious, requiring not only denoting a case, but also reading those subsequent cases to evaluate the nuances of “modified,” “distinguished,” or even “overruled.”  I was in law school during the transition to digitized process.  In one of my first jobs as a lawyer, the attorney who gave me the assignment thought me brilliant because I came back within 20 minutes with the up-to-date case that significantly modified the one he asked me to research.  His opinion shifted when I explained the automated West Law program that did all the work!"

The author's point is that if Westlaw could automate the shepardizing process, search engines should be able to do the same for websites, signaling whether the information is still valid.

"Information literacy 101 instructs students not to accept the first link in a search, to test for validity, to evaluate the source, and to do a researcher’s version of shepardizing. In other words, to dive deeper exploring subsequent research. With all the knowledge that search engines integrate, some form of updating information, or at least denoting links with metadata that contextualizes it, shouldn’t be too difficult to create." 

The author goes on to say that "[u]sers, especially computer scientists, research faculty, and reference librarians, should already be thinking about how this metadata should operate. Responsible “shepardizing” helps citizens as well as students because it prizes transparent, objective, valid and sometimes even peer reviewed or tested information. And that direction shapes user experience of the Internet."
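What would "metadata that contextualizes" a link actually look like? The article doesn't specify, but here is a minimal sketch of one possibility: a record a search engine could attach to each result, loosely modeled on shepardizing signals such as "modified" or "overruled." All field names and values here are hypothetical illustrations, not any real search-engine API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical per-link record, modeled loosely on shepardizing signals.
# None of these fields exist in any real search engine; this is a sketch.
@dataclass
class LinkMetadata:
    url: str
    last_verified: str                    # ISO date the content was last checked
    status: str                           # e.g. "current", "superseded", "retracted"
    superseded_by: Optional[str] = None   # a newer source, if one exists

def annotate(meta: LinkMetadata) -> str:
    """Render a one-line 'shepardized' annotation for a search result."""
    note = f"{meta.url} [{meta.status}, verified {meta.last_verified}]"
    if meta.superseded_by:
        note += f" -> see {meta.superseded_by}"
    return note

# Example: an older study that has been superseded by newer research.
print(annotate(LinkMetadata(
    url="https://example.edu/study-2015",
    last_verified="2021-03-01",
    status="superseded",
    superseded_by="https://example.edu/study-2020",
)))
```

Even a simple annotation like this, surfaced next to each search result, would give users the researcher's equivalent of knowing whether a case has been "modified" or "overruled."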

This is a great argument, and one I had never considered. As a reference librarian, I teach information literacy and the manual vetting of sources, but I had never entertained the possibility of Google doing that job for me. In previous posts, I have written about the perils of relying solely on Internet resources, and the quality of information is of utmost concern. If search engines can make the job of vetting information easier, the quality of scholarship built on online sources will increase, which is a good thing for society as a whole.

For more information about digital literacy, see Cornell's Digital Literacy website. 

