
P22 Asynchronous Discussion: Highlighting ScreenIT (Berlin Institute of Health)


    Limor Peer
    Participant

    Another group we’d like to highlight for you today is ScreenIT from the Quest Center at the Berlin Institute of Health, Charité. The ScreenIT pipeline screens for common problems that can affect transparency or reporting and provides feedback to authors.

ScreenIT created a collection of automated screening tools, combined into a single pipeline, to detect common problems and beneficial practices in published papers, such as blinding, randomization, power calculations, limitations sections, open data, open code, and common data visualization problems.

    Tracey Weissgerber provided information about this initiative and their vision for reproducibility and its challenges in the thread below. Please feel free to continue the conversation within the thread!

Briefly, tell us about your work/organization and how it relates to computational reproducibility. What are you trying to address, and how?

ScreenIT is a group of researchers and tool developers from around the world who create automated screening tools to detect common problems or beneficial practices in published papers. We predominantly focus on biomedical research, and our tools detect features like blinding, randomization, power calculations, limitations sections, open data, open code and common data visualization problems. We have combined our tools into a single pipeline. During the pandemic, we used this pipeline to screen more than 23,000 papers and post public reports using the web annotation tool hypothes.is. We have also used these tools to create a “Screen My Paper” website for our institution, which is undergoing testing. We hope that these reports help authors to make their papers more transparent and reproducible. In the context of FAIR, our recent correspondence highlights the importance of detailed methods for responsible data reuse and calls for those with relevant expertise to join us in developing guiding principles for responsible data reuse (https://www.nature.com/articles/d41586-021-02906-8). FAIR predominantly addresses the question “Can I reuse this data?”. We’d like to extend the conversation to data reuse more broadly by asking “Should I reuse this data?” and “How should I reuse this data responsibly?”.
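    To make the idea of automated screening concrete, here is a minimal, hypothetical sketch of a keyword-based check for a few of the practices mentioned above. It is not ScreenIT’s actual code or API; the check names, regular expressions, and example text below are illustrative assumptions only, and real screening tools rely on more sophisticated, validated detection.

```python
# Hypothetical, simplified sketch of a keyword-based screening step
# (illustrative only; NOT ScreenIT's actual code, tool names, or patterns).
import re

# Each check maps a reporting practice to a naive regular expression.
# Real tools use validated, far more sophisticated detection methods.
CHECKS = {
    "randomization": re.compile(r"\brandomi[sz](?:e|ed|ation)\b", re.I),
    "blinding": re.compile(r"\bblind(?:ed|ing)\b|\bmask(?:ed|ing)\b", re.I),
    "power_calculation": re.compile(r"\bpower (?:analysis|calculation)\b|\bsample size\b", re.I),
    "limitations_section": re.compile(r"\blimitations\b", re.I),
    "open_data": re.compile(r"\bdata (?:are|is) available\b|\bdata availability\b", re.I),
    "open_code": re.compile(r"\bcode (?:is|are) available\b|\bgithub\.com\b", re.I),
}


def screen_text(full_text: str) -> dict:
    """Flag whether each reporting practice appears to be mentioned in the text."""
    return {name: bool(pattern.search(full_text)) for name, pattern in CHECKS.items()}


if __name__ == "__main__":
    # Toy excerpt standing in for a paper's full text (hypothetical example).
    excerpt = (
        "Animals were randomized to treatment groups and outcome assessors were "
        "blinded to group allocation. All data are available in a public repository; "
        "analysis code is available at https://github.com/example/repo."
    )
    for practice, found in screen_text(excerpt).items():
        print(f"{practice}: {'mentioned' if found else 'not detected'}")
```

    A real pipeline would run many such tools over the full text of each paper and publish the combined report, for example as a public web annotation, as described above.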

    What is your/your organization’s vision when it comes to computational reproducibility (e.g., all scholarship is computationally reproducible by default)?

Publications routinely fail to include details that are essential for evaluating scientific rigor and the risk of bias. Compliance with reporting guidelines, which outline the essential details that should be reported for different study types, remains poor. Our goal is to create automated tools that help authors improve reporting before they submit their paper. As noted above, we’d also like to develop guiding principles for responsible data reuse.

    What are some of the challenges you see to achieving this vision?

Legacy manuscript submission systems are a major challenge. While we would like our tools to be integrated into journal submission systems, these systems are extremely inflexible and the costs for any changes are very high. Sustaining and maintaining the open-source tools in the pipeline is also a challenge.

    What would you like to ask the members of our Interest Group?

Our tools are focused on biomedicine, and we don’t have tools that are well suited to screening for features associated with rigorous science in computational papers. We would be interested in exploring opportunities to collaborate on developing tools for screening computational papers. As we are not computationally focused, we would also like to extend the conversation to include factors associated with reproducibility in fields that are not computational.
