Brief introduction describing the activities and scope of the group(s):
Scientific reproducibility provides a common purpose and language for data professionals and researchers. For data professionals, reproducibility offers a framework for honing and justifying curation actions and decisions; for researchers, it offers a rationale for adopting best practices early in the research lifecycle. Curating for reproducibility (CURE) comprises activities that ensure that statistical and analytic claims about a given dataset can be reproduced with that dataset.

Academic libraries and data archives have been stepping up to provide systems and standards for making research materials publicly accessible, but the datasets housed in repositories rarely meet the quality standards required by the scientific community. Even as data sharing becomes normative practice in the research community, there is growing awareness that access to data alone – even well-curated data – is not sufficient to guarantee the reproducibility of published research findings. Computational reproducibility – the ability to recreate computational results from the data and code used by the original researcher – is a key requirement for researchers to reap the benefits of data sharing, but one that recent reports suggest is not being met.

Data curation workflows that enable data access often fall short when research reproducibility is the ultimate goal. Code review and result verification are required to confirm the integrity of the scientific record, to build upon previous work, and to develop innovations. Several initiatives confirm that the scientific community is embracing these ideas. For example, the CURE Consortium has been implementing practices and developing workflows and tools that support curating for reproducibility in the social sciences.
CURE-FAIR stands for "Curating for Reproducible and FAIR Data and Code."
Additional links to informative material: