New publications on the San Francisco Declaration on Research Assessment

DaCosta Lab

There is a pressing need to improve the ways in which the outputs of scientific research are evaluated. To date, these outputs, which include research articles, intellectual property, and trained young scientists, have largely been evaluated on the basis of the Journal Impact Factor. The Journal Impact Factor is calculated by Thomson Reuters and was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of individuals and institutions. Using such a tool for research assessment is problematic: citation distributions within journals are highly skewed, the properties of the Journal Impact Factor are field-specific, Journal Impact Factors can be manipulated by editorial policy, and the data used to calculate them are not openly available to the public. To address this issue, a group of editors and publishers of scholarly journals met during the Annual Meeting of The American Society for Cell Biology (ASCB) in San Francisco, CA, on December 16, 2012. The group developed a set of recommendations, referred to as the San Francisco Declaration on Research Assessment (DORA), to improve the way in which the quality of research output is evaluated. To learn more, please visit: https://sfdora.org/read/.

Towards implementing DORA principles at the UHN research institutes, an advisory group comprising Drs. Mathieu Albert, James Chow, Ralph DaCosta, Anna Gagliardi (Chair), Michael Hoffman, Behrang Keshavarz, Pia Kontos, Jenny Liu, Mary Pat McAndrews, and Stephanie Protze has developed measures by which to report and assess research activity, outputs, and impact.

The group applied an evidence- and consensus-based process to derive measures from published research and from national and international best practices, and then engaged researchers and research leaders across all of the institutes through a two-round Delphi survey. This approach identified ten measures that reflect the relevance of the research program, funding, innovative outputs, publications, collaboration, recognition, and challenges to research productivity. The group also examined the institutes' current assessment rubrics against these measures to identify gaps and areas for improvement.

Publication 1: DORA-compliant measures of research quality and impact to assess the performance of researchers in biomedical institutions: Review of published research, international best practice and Delphi survey

Abstract

Objective: The San Francisco Declaration on Research Assessment (DORA) advocates for the assessment of biomedical research quality and impact, yet academic organizations continue to employ traditional measures such as the Journal Impact Factor. We aimed to identify and prioritize measures for assessing research quality and impact.

Method: We conducted a review of published and grey literature to identify measures of research quality and impact, which we included in an online survey. We assembled a panel of researchers and research leaders, and conducted a two-round Delphi survey to prioritize measures rated as high (rated 6 or 7 by ≥ 80% of respondents) or moderate (rated 6 or 7 by ≥ 50% of respondents) importance.

Results: We identified 50 measures organized in 8 domains: relevance of the research program, challenges to research program or productivity, team/open science, funding, innovations, publications, other dissemination, and impact. Rating of measures by 44 panelists (60%) in Round One and 24 (55%) in Round Two of the Delphi survey resulted in consensus on the high importance of 5 measures: research advances existing knowledge, research plan is innovative, an independent body of research (or fundamental role) supported by peer-reviewed research funding, research outputs relevant to discipline, and quality of the content of publications. Five measures achieved consensus on moderate importance: challenges to research productivity, potential to improve health or healthcare, team science, collaboration, and recognition by professional societies or academic bodies. There was high congruence between researchers and research leaders across disciplines.

Conclusions: Our work contributes to the field by identifying 10 DORA-compliant measures of research quality and impact, a more comprehensive and explicit set of measures than prior efforts. Research is needed to identify strategies to overcome barriers to the use of DORA-compliant measures, and to “de-implement” traditional measures that do not uphold DORA principles yet remain in use.

 

Publication 2: Exploring the merits of research performance measures that comply with the San Francisco Declaration on Research Assessment and strategies to overcome barriers of adoption: qualitative interviews with administrators and researchers

Abstract



Background: In prior research, we identified and prioritized ten measures for assessing research performance that comply with the San Francisco Declaration on Research Assessment, a set of principles adopted worldwide that discourages the use of journal-based metrics in research assessment. Given the shift away from assessment based on the Journal Impact Factor, we explored potential barriers to implementing and adopting the prioritized measures.


Methods: We identified administrators and researchers across six research institutes, conducted telephone interviews with consenting participants, and used qualitative description and inductive content analysis to derive themes.


Results: We interviewed 18 participants: 6 administrators (research institute business managers and directors) and 12 researchers (7 on appointment committees) who varied by career stage (2 early, 5 mid, 5 late). Participants appreciated that the measures were similar to those currently in use, comprehensive, relevant across disciplines, and generated using a rigorous process. They also said the reporting template was easy to understand and use. In contrast, a few administrators thought the measures were not relevant across disciplines. A few participants said it would be time-consuming and difficult to prepare narratives when reporting the measures, and several thought that it would be difficult to objectively evaluate researchers from a different discipline without considerable effort to read their work. Strategies viewed as necessary to overcome barriers and support implementation of the measures included high-level endorsement of the measures, an official launch accompanied by a multi-pronged communication strategy, training for both researchers and evaluators, administrative support or automated reporting for researchers, guidance for evaluators, and sharing of approaches across research institutes.


Conclusions: While participants identified many strengths of the measures, they also identified a few limitations and offered corresponding strategies to address these barriers; we will apply those strategies at our organization. Ongoing work is needed to develop a framework to help evaluators translate the measures into an overall assessment. Given that little prior research has identified research assessment measures and strategies to support their adoption, this research may be of interest to other organizations that assess the quality and impact of research.
