
P22 Asynchronous Discussion: Highlighting TIER2

    Lauren Cadwallader

    TIER2

    TIER2 is a three-year EU-funded project investigating a range of methods, tools, and practices that aim to systematically understand and enhance reproducibility in the social, life, and computer sciences through targeted interventions. A key area of interest is “epistemic diversity”: the recognition that the meanings, relevance, and feasibility of “reproducibility” vary widely across research disciplines and methodologies. The project is examining interventions such as containerization tools, editorial checks for reproducibility, automated monitoring tools, and policy instruments.

    Our project and its relation to computational reproducibility: what we are trying to address and how

    Our project TIER2 (EC Horizon-funded, 2023-2025) is investigating a range of methods, tools, and practices related to increasing levels of reproducibility. A key concept for us is “epistemic diversity”: the meanings, relevance, and feasibility of “reproducibility” vary widely across research disciplines and methodologies. A large part of our project is geared towards computational reproducibility, with areas of research including containerization tools for computationally reproducible workflows (in the life and computer sciences), reproducibility checklists for computational social sciences, general-purpose “reproducibility management plans” (an extension of data management plans, DMPs), an editorial handbook for reproducibility-related checks at journal level, automated tools to monitor levels of re-use of data, code, and other materials, as well as policy instruments (funder reproducibility promotion plans) and research into specific interventions (e.g., measures to increase data-sharing).
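    One simple practice that complements containerized workflows is recording the exact computational environment alongside results, so a run can later be audited or reconstructed. The sketch below is our own minimal illustration of that idea, not a TIER2 tool; the function name and output format are hypothetical.

    ```python
    # Illustrative sketch (not TIER2's actual tooling): capture the interpreter
    # and package versions used for an analysis, for a reproducibility log.
    import json
    import platform
    from importlib import metadata

    def environment_snapshot(packages):
        """Record Python, OS, and package versions for the named packages."""
        snapshot = {
            "python": platform.python_version(),
            "platform": platform.platform(),
            "packages": {},
        }
        for name in packages:
            try:
                snapshot["packages"][name] = metadata.version(name)
            except metadata.PackageNotFoundError:
                snapshot["packages"][name] = None  # package not installed
        return snapshot

    if __name__ == "__main__":
        # Write the snapshot next to the analysis outputs
        print(json.dumps(environment_snapshot(["pip"]), indent=2))
    ```

    Saved as a small JSON file next to each set of outputs, such a snapshot costs almost nothing and makes it far easier to rebuild a matching container image later.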

    Our project’s vision for computational reproducibility

    Our aim is to increase the reusability of research results and foster trust, integrity, and efficiency in processes of scholarship. By embracing epistemic diversity, our goal is to systematically understand and enhance reproducibility in social, life, and computer sciences wherever relevant and feasible through targeted interventions. A key element of this is computational reproducibility, which should be the default for all research to which the concept is applicable. However, we should also recognise that research can be computationally reproducible and still wrong – hence we must also continue to work on improving all areas of reproducibility more broadly, including the cultural disincentives that fuel questionable research practices.

    The challenges we see to achieving this vision

    Communities are now aware that action must be taken at all levels of research culture (cf. the Center for Open Science strategy for culture change, https://www.cos.io/blog/strategy-for-culture-change), from enabling infrastructure and tools, through training and awareness, to community building and normalisation, to incentives through reward/recognition or policy requirements. Targeting and coordinating action across all these areas is the ultimate challenge.

    Achieving this vision involves numerous challenges, including the diverse epistemological, social, and technical factors that influence reproducibility in different scientific contexts. Each discipline brings unique challenges that require tailored approaches to ensure effective reproducibility practices. With others, we also see a need for a brighter framing of the issue: from talk of “crisis” to “opportunity”. There is also a wealth of evidence on what works and where, which would benefit from systematisation to identify priority areas for action and gaps in knowledge. Additionally, coordinating efforts across the broad stakeholder groups of researchers, publishers, and funders poses logistical and alignment challenges. These challenges necessitate continuous dialogue, adaptable strategies, and strong collaborative networks to implement effective reproducibility tools and practices across varied research contexts. Finally, in seeking to change systems we always run the risk of unintended consequences, and these should be closely monitored.

    Figure 1 below outlines TIER2’s specific key strategic priorities in addressing these challenges.

    Questions for members of the Research Data Alliance Reproducibility Interest Group

    • How are you addressing issues of epistemic diversity with regard to computational reproducibility (e.g., differences in cultures, standards, workflows, tools across disciplines)?

    • When planning for computational reproducibility, what are the most important aspects that are not already covered in a Data Management Plan?

    • How can researchers, institutions, funders, publishers, scholarly societies and others more effectively collaborate to increase computational reproducibility?

    • What effective practices or tools have you implemented in your research to enhance computational reproducibility?

    Fig. 1. TIER2 Strategic Priorities. From: Ross-Hellauer T (2023) Strategic priorities for reproducibility reform. PLOS Biology 21(1): e3001943. https://doi.org/10.1371/journal.pbio.3001943
