The Problem with Impact Factor in Law
While working as a Faculty Services & Scholarly Communications Librarian, I presume I am not alone in having been asked to create an impact factor by which to judge the scholarly work of faculty.
In fact, Gary Lucas at Texas A&M was recently asked a similar question:
Texas A&M University assesses its colleges and departments based partly on scholarly impact and using quantitative metrics. The law school’s dean has assigned me the task of identifying scholarly impact metrics for use in assessing the performance of our law faculty collectively and individually. This essay discusses the major issues that arise in measuring the impact of legal scholarship. It also explains important scholarly impact metrics, including the Leiter score and Google Scholar h-index, and the major sources of information regarding scholarly impact, including Google Scholar, Westlaw, Hein Online, SSRN, and bepress.
Ultimately, Lucas proposes ranking scholarship by Google Scholar citation count to provide a much-needed supplement to existing ranking schemes, including ranking schools by U.S. News peer reputation score.
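For context, the Google Scholar h-index that Lucas discusses is straightforward to compute once you have per-article citation counts: it is the largest h such that an author has at least h articles cited at least h times each. Here is a minimal sketch in Python, assuming the citation counts (hypothetical numbers below) have already been gathered from whatever source a school settles on:

```python
def h_index(citation_counts):
    """Return the largest h such that at least h articles
    have h or more citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: five articles with these citation counts
print(h_index([42, 17, 9, 3, 1]))  # -> 3 (three articles cited 3+ times)
```

The calculation itself is trivial; as the rest of this post argues, the hard part is agreeing on where the citation counts come from and whether anyone trusts them.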
Lucas has made a noble effort toward measuring impact in law. But creating an impact factor from scratch and getting other law schools on board to use it is a magnum-opus-level undertaking that other brave souls have attempted before, to no avail.
Ultimately, it is unlikely that Google Scholar will see widespread adoption. The metadata created by Google Scholar is neither reliable nor reproducible, and it distorts metric indicators at both the individual and journal levels, as other authors have noted. Additionally, when the topic of using impact in promotion and tenure decisions is broached, law faculty will inevitably analyze the impact factors to death.
Because of the inherent difficulty and the extensive resources it would take to create, from scratch, an impact factor that faculty trust as a reliable indicator of their work, we're left without a metric that all schools will adopt and use consistently enough to make individual impact measurement viable in law.