
RDA Case Statement

GORC International Benchmarking WG

 

  1. Charter

The Global Open Research Commons (GORC) is an ambitious vision of a global set of interoperable resources necessary to enable researchers to address societal grand challenges including climate change, pandemics, and poverty. The realized vision of the GORC will provide frictionless access to all research artifacts (including, but not limited to, data, publications, software, and compute resources) and to metadata, vocabulary, and identification services, for everyone, everywhere, at all times.

The GORC is being built by a set of national, pan-national, and domain-specific organizations such as the European Open Science Cloud, the African Open Science Platform, and the International Virtual Observatory Alliance (see Appendix A for a fuller list). The GORC IG is working on a set of deliverables to support coordination amongst these organizations, including a roadmap for global alignment to help set priorities for Commons development and integration. In support of this roadmap, this WG will develop and collect a set of benchmarks for GORC organizations to measure their user engagement and development internally, gauge their maturity, and compare features across commons.

In the first case, the WG will collect information about how existing commons measure success, adoption, or use of their services within their organization, such as data downloads, contributed software, and similar KPIs and access statistics.

Second, we will also develop, validate, collect, and curate a set of benchmarks that will allow Commons developers to compare features across science clouds. In the latter case, for example, we would consider benchmarks such as evidence of, or the existence of:

        1. A well-defined decision-making process
        2. A consistent and openly available data privacy policy
        3. Federated Authentication and Authorization infrastructure
        4. Community-supported and well-documented metadata standard(s)
        5. A workflow for adding and maintaining PIDs for managed assets
        6. A mechanism for utilizing vocabulary services or publishing to the semantic web
        7. A process to inventory research artefacts and services
        8. An Open Catalogue of these artefacts and services
        9. A proven workflow to connect multiple different research artefact types (e.g. data and publications; data and electronic laboratory notebooks; data and related datasets)
        10. A mechanism to capture provenance for research artefacts
        11. Mechanisms for community engagement and input; an element or scale for inclusion

We anticipate that the first set of metrics will be quantitative measures used within an organization, while the second set of benchmarks will be comparable across organizations.

  2. Value Proposition

This WG is motivated by the broader goal of openly sharing data and related services across technologies, disciplines, and countries to address the grand challenges of society. The deliverables of the WG will inform roadmaps for the development of the infrastructure necessary to meet that goal, while engagements and relationships formed during the work period will help forge strong partnerships across national, regional, and domain-focused members, which are crucial to its success. Identifying observable and measurable benchmarks in pursuit of the global open science commons will help create a tangible path for development and support strategic planning within and across science commons infrastructures. In the future, best practices for commons development will emerge based on the experience of which actions led to successful outcomes. This work will provide a forum for discussion that will allow members to identify the most important features and the minimal elements required to guide their own development and build a commons that is globally interoperable. Finally, it will support developers as they seek resources to build the global commons by helping them respond to funding agencies' requirements for measurable deliverables.

The proposed WG was discussed at the RDA 16 virtual plenary.[1] Participants discussed the initial work packages and agreed during the meeting that this was a worthy goal and an appropriate approach.

 

  3. Engagement with Existing Work

The GORC IG builds on and incorporates the previous National Data Services IG. The Commons investigated by this WG are likely to have considered or implemented outputs from other RDA groups, such as the Domain Repositories IG, the Data Fabric IG, and the Virtual Research Environment IG, to name a few. These groups, and many others outside of RDA, have recommendations that speak to the functionality and features of various components of Commons; for example, the re3data.org schema for collecting information on research data repositories for registration, and the EOSC FAIR WG and Sustainability WG that seek to define the EOSC as a Minimum Viable Product (MVP). We will review these and other related outputs to see whether they have identified benchmarks that support our goals. This review period will ensure that we do not duplicate existing efforts. Appendix B of this case statement identifies a few of these existing efforts, both within and outside RDA; this list will be expanded and reviewed by the WG members.

 

  4. Work Plan

To create these deliverables, members of the group will:

  1. Create a target list of Commons (Appendix A).
  2. Review public-facing documentation of each Commons to extract benchmarking information (both KPIs and feature lists).
  3. Review public-facing documentation of recommendations and roadmaps from related communities to extract benchmarking information (Appendix B). This evaluation phase will include an examination of outputs from other RDA WGs and position papers available in the wider science infrastructure community, along with experiences gathered by the WG’s members.
  4. Conduct outreach to Commons representatives and related organizations, since benchmarking information may not be easily found in public documents, to ask for additional feedback and information about benchmarks used by their communities. This may include benchmarks already in use, as well as benchmarks that organizations feel would be useful but have not yet been implemented.
  5. Synthesize and document the benchmarks into two deliverables, described below.

We anticipate that the WG will create sub-working groups or task groups. The WG will decide whether to define the task groups according to the deliverables, creating a Commons Internal Benchmarking TG and a Commons External Benchmarking TG, or to subdivide according to a typology of the commons, for example with some members looking at pan-national, national, or domain-specific commons, or by some other division of labor.

The WG will proceed according to the following schedule:

Jan-Mar 2021: Group formation
  1. Agreement on the scope of work and deliverables (broad scope)
  2. Case statement community review
  3. Creation of sub-working groups

Apr-Sept 2021: Begin literature review of public-facing documents from Science Commons and related organizations. Refine scope: consolidate the list of topics to be addressed in the deliverables and assess the level of resources available to achieve them.

Oct-Dec 2021: Begin outreach to Science Commons and related organizations. Update at RDA17.

Jan-Mar 2022: First draft of Internal Benchmarks distributed for community review.

Mar-Jun 2022: First draft of External Benchmarks distributed for community review.

July 2022: Final deliverables.

 

  5. Deliverables

This group will create Supporting Outputs in furtherance of the goals of the GORC IG, specifically:

D1: a non-redundant set of KPIs and success metrics currently utilized, planned, or desired for existing science commons;

D2: a list of observable international benchmarks of features, structures, and functionality that can help define a Commons and that will feed into a roadmap of Commons interoperability; and

D3: an Adoption Plan, described in section 9 below.

  6. Mode and Frequency of Operation

The WG will meet monthly over Zoom, at a time to be determined by the membership. The WG will also communicate asynchronously online using the mailing list functionality provided by RDA and via shared online documents. If post-COVID international travel resumes during the WG's 18-month work period, we will propose and schedule meetings during RDA plenaries and at other conferences where a sufficient number of group members are in attendance.

  7. Addressing Consensus and Conflicts

The WG will adhere to the stated RDA Code of Conduct and will work towards consensus, which will be achieved primarily through mailing list discussions and online meetings, where opposing views will be openly discussed and debated amongst members of the group. If consensus cannot be achieved in this manner, the group co-chairs will make the final decision on how to proceed.

The co-chairs will keep the working group on track by reviewing progress relative to the deliverables. Any new ideas about deliverables or work that the co-chairs deem to be outside the scope of the WG defined here will be referred back to the GORC IG to determine if a new WG should be formed.

  8. Community Engagement

The working group case statement will be disseminated to RDA mailing lists and communities of practice related to Commons development that are identified by the GORC IG in an effort to cast a wide net and attract a diverse, multi-disciplinary membership. Similarly, when appropriate, draft outputs will also be published to relevant stakeholders and mailing lists to encourage broad community feedback.

  9. Adoption Plan

The WG will create an adoption plan for distributing and maintaining the deliverables. A specific plan will be developed to facilitate adoption or implementation of the WG Recommendation and other outcomes within the organizations and institutions represented by WG members. This will include possible strategies for broader adoption within the global community, in such a way as to facilitate the interoperability of global infrastructures. Pilot adoptions or implementations would ideally start within the 18-month timeframe, before the WG is complete. We envision implementation occurring when developers of commons compare themselves with similar organizations. We also envision that the adoption plan will speak to how we include the benchmarks in the larger GORC roadmap being created by the parent IG.

  10. Initial Membership

Co-chairs:

  1. Karen Payne <ito-director@oceannetworks.ca>
  2. Mark Leggott <mark.leggott@rdc-drc.ca>
  3. Andrew Treloar <andrew.treloar@ardc.edu.au>

 

Appendix A: List of Commons

 

Pan National Commons

  1. European Open Science Cloud
  2. African Open Science Platform
  3. Nordic e-Infrastructure Collaboration
  4. Arab States Research and Education Network (ASREN)
  5. LIBSENSE  (LIBSENSE is a community of practice, not an infrastructure. The infrastructure will be built by the RENs, NRENs and universities)
  6. WACREN
  7. LA Referencia

 

National Commons

European Roadmaps - The European Commission and European Strategy Forum on Research Infrastructures (ESFRI) encourage Member States and Associated Countries to develop national roadmaps for research infrastructures.

  1. German National Research Data Infrastructure (NFDI)
  2. DANS
  3. ATT (Finland)
  4. GAIA-X (non-member state?; see also) (focused on data sharing in the commercial sector, without excluding research)
  5. UK
    1. UK Research and Innovation
    2. JISC
    3. Digital Curation Centre

Non-European

  1. QNL (Qatar)
  2. China Science and Technology Cloud (CSTCloud); see also
  3. Australian Research Data Commons
  4. Canadian National Data Services Framework (in development)
  5. National Research Cloud (US; AI focused)
  6. NII Research Data Cloud (Japan)
  7. KISTI (South Korea)

 

Domain Commons

  1. International Virtual Observatory Alliance (IVOA)
  2. NIH Data Commons; Office of Data Science Strategy (USA)
  3. NIST RDaF (USA)
  4. Earth Sciences
    1. DataOne Federation
    2. Federation of Earth Science Information Partners (ESIP)
    3. EarthCube
    4. GEO / GEOSS
    5. Near-Earth Space Data Infrastructure for e-Science (ESPAS, prototype)
    6. Polar
      1. The Arctic Data Committee landscape map of the Polar Community
      2. Polar View - The Canadian Polar Data Ecosystem (includes international initiatives, infrastructure and platforms)
      3. Polar Commons / Polar International Circle (PIC) [not sure if this is active]
      4. PolarTEP
    7. Infrastructure for the European Network for Earth System Modelling (IS-ENES)
  5. Global Ocean Observing Systems (composed of Regional Alliances)
  6. CGIAR Platform for Big Data in Agriculture
  7. Social Sciences & Humanities Open Cloud (SSHOC)
  8. DiSSCo (https://www.dissco.eu/): research infrastructure for natural collections (a commons for specimens and their digital twins)
  9. ELIXIR Bridging Force IG (in the process of being redefined as “Life Science Data Infrastructures IG”)
  10. Global Alliance for Genomics and Health (GA4GH)
  11. Datacommons.org - primarily statistics for humanitarian work

 

Gateway/Virtual Research Environment/Virtual Laboratory communities and other Services

  1. International Coalition on Science Gateways
  2. Data Curation Network
  3. CURE Consortium
  4. OpenAire
  5. RDA VRE IG

 

Appendix B: Draft List of WG/IG, documents, recommendations, frameworks and roadmaps from related and relevant communities

 

  1. RDA Outputs and Recommendations Catalogue
  2. RDA Data publishing workflows (Zenodo)
  3. RDA FAIR Data Maturity Model
  4. RDA 9 functional requirements for data discovery
  5. Repository Platforms for Research Data IG
  6. Metadata Standards Catalog WG
  7. Metadata IG
  8. Brokering IG
  9. Data Fabric IG
  10. Repository Platform IG
  11. International Materials Resource Registries WG
  12. RDA Collection of Use Cases (see also)
  13. Existing service catalogues (for example the eInfra service description template used in the EOSC)
  14. the Open Science Framework
  15. Matrix of use cases and functional requirements for research data repository platforms.
  16. Activities and recommendations arising from the interdisciplinary EOSC Enhance program
  17. Scoping the Open Science Infrastructure Landscape in Europe
  18. Docs from https://investinopen.org/about/who-we-are/
  19. Monitoring Open Science Implementation in Federal Science-based Departments and Agencies: Metrics and Indicators
  20. Next-generation Metrics: Responsible Metrics and Evaluation for Open Science. Report of the European Commission Expert Group on Altmetrics (see also)
  21. Guidance and recommendations arising from EOSC FAIR WG and Sustainability WG
  22. Outputs from the International FAIR Convergence Symposium (Dec 2020), particularly the session “Mobilizing the Global Open Science Cloud (GOSC) Initiative: Priority, Progress and Partnership”
  23. The European Strategy Forum on Research Infrastructures (ESFRI) Landscape Analysis “provides the current context of the most relevant Research Infrastructures that are available to European scientists and to technology developers”
  24. NIH Workshop on Data Metrics (Feb 2020)
Review period start:
Friday, 8 January, 2021 to Monday, 8 February, 2021

 

CASE STATEMENT: RDA/CODATA Epidemiology common standard for surveillance data reporting WG


 

 

1. WG CHARTER

A concise articulation of what issues the WG will address within a 12-18 month time frame and what its “deliverables” or outcomes will be.

 

In May 2020, the Organization for Economic Cooperation and Development (OECD) discussed why and how Open Science is critical to preventing and combating pandemics such as COVID-19 caused by the novel coronavirus, SARS-CoV-2 (OECD 2020). Open Science is transparent and accessible knowledge that is shared and developed through collaborative networks (Vicente-Saez and Martinez-Fuentes 2018). FAIR (findable, accessible, interoperable, and reusable) data principles are an integral part of Open Science. FAIR data principles emphasise machine-actionability (i.e., the capacity of computational systems to find, access, interoperate, and reuse data with no or minimal human intervention) (GoFAIR).  

 

However, there is an urgent need to develop a common standard for reporting communicable disease surveillance data, without which Open Science and FAIR data will be difficult to achieve. Limited by antiquated systems and the lack of an established infrastructure, our ability to react and adjust has been outpaced by the tempo of the disease's spread (Austin et al. 2020a,b; Garder et al. 2020).

 

The need for developing a common standard for reporting epidemiology surveillance data was articulated by the RDA COVID-19 Epidemiology work group (WG) in their recommendations and guidelines, and supporting output (RDA COVID-19 WG 2020; RDA COVID-19 Epidemiology WG 2020). 

 

On October 27, 2020, the WHO, UNESCO, HCHR, and CERN issued a Joint Appeal for Open Science, a call on the international community to take all necessary measures to enable universal access to scientific progress and its applications (UNESCO et al. 2020; UNESCO 2020):

 

"The open science movement aims to make science more accessible, more transparent and thereby more effective. A crisis such as the COVID-19 pandemic demonstrates the urgent need to strengthen scientific cooperation and ensure the fundamental right to universal access to scientific progress and its applications. Open Science is about free access to scientific publications, data and infrastructure, as well as open software, open educational resources and open technologies such as tests or vaccines. Open science also promotes trust in science, at a time when rumours and false information abound."

 

Michelle Bachelet, United Nations High Commissioner for Human Rights stated, 

 

"Data are a vital human rights tool."

 

The WG will build upon existing standards and guidelines to develop uniform definitions and data elements to improve data comparability and interoperability. 

 

We will build upon the work begun by the RDA COVID-19 Epidemiology WG, and extend beyond the COVID-19 pandemic to provide an actionable specification for reporting communicable disease surveillance data and metadata, including geospatial data.

 

This work will be a consensus building effort that contributes to CODATA’s Decadal programme:

  • Enabling Technologies and Good Practice for Data-Intensive Science
  • Mobilising Domains and Breaking Down Silos
  • Advancing Interoperability Through Cross-Domain Case Studies

 

Outcome

A standard specification for reporting communicable disease surveillance data. 

 

2. VALUE PROPOSITION

A specific description of who will benefit from the adoption or implementation of the WG outcomes and what tangible impacts should result.

 

Epidemiology surveillance data will enable governments and public health agencies to detect and respond to newly emergent threats of disease. Early detection may prevent development of epidemics and pandemics. It will also enable them to deliver more effective responses at all stages of the threat, from emergence through containment, mitigation, and reopening of society in the case of pandemics. Epidemiology surveillance data and geospatial data are large and varied. Treated as a strategic asset, they have the potential to support evidence-informed policy, stimulate new research areas, expand collaboration opportunities, and increase the health and economic well-being of society. A common standard for reporting epidemiology surveillance data will support these outcomes by improving data and metadata management and provision of findable, accessible, interoperable, reusable, ethical, and reproducible (FAIRER) data. 

 

The common standard for reporting epidemiology surveillance data is intended for implementation by government and international agencies, policy and decision-makers, epidemiologists and public health experts, disaster preparedness and response experts, funders, data providers, teachers, researchers, clinicians, and other potential users.

 

3. ENGAGEMENT WITH EXISTING WORK IN THE AREA

A brief review of related work and plan for engagement with any other activities in the area.

 

Nature of the problem to be addressed

The World Health Organization (WHO) defines public health surveillance as, “An ongoing, systematic collection, analysis and interpretation of health-related data essential to the planning, implementation, and evaluation of public health practice” (WHO 2020a). The WHO is a source of international standardized COVID-19 data and evidence-based guidelines, and is an invaluable source of technical guidance (WHO 2020b). Available instruments include a case-based reporting form, data dictionary, template, and aggregated weekly reporting form (WHO 2020c). There is also a global COVID-19 clinical data platform for clinical characterization and management of hospitalized patients with suspected or confirmed COVID-19 (WHO 2020d). The WHO (2020e) also notes that continued vigilance is needed to detect the emergence of novel zoonotic viruses affecting humans. 

 

Unfortunately, there are inconsistencies in the manner in which agencies in various jurisdictions collect and report their data. This is due to gaps in existing standards, and to failures to comply with the standards that do exist.

 

COVID-19 threat detection has been slow and ineffective, resulting in the rapid development of a pandemic. Countries around the world have implemented a disparate series of public health measures in attempting to suppress and mitigate the spread of the disease. The world was not prepared to respond to a novel zoonosis that spreads with the tempo and severity of COVID-19 (Greenfield et al. 2020). The pandemic has resulted in serious health and economic consequences for both High Income Countries (HICs) and Low and Middle Income Countries (LMICs) (Bong et al., 2020).

 

The RDA COVID-19 WG recommendations, guidelines, and supporting output highlighted discrepancies in COVID-19 incidence and mortality data across data sources, which could be directly attributed to varying definitions and reporting protocols (RDA COVID-19 Epidemiology WG, 2020a,b). For example, mortality data from COVID-19 are frequently not comparable between and within jurisdictions due to varying definitions (Dudel, 2020). Variations resulting from discrepancies in official statistics limit effective disease-specific strategies (Modi et al., 2020; Modig & Ebeling, 2020). Other variables (e.g., confirmed cases, probable cases, probable deaths, negative tests, recoveries, critical cases) are also inconsistently defined (Austin et al. 2020b). For example, while the WHO (2020f) defines a confirmed case as "a person with laboratory confirmation of COVID-19 infection, irrespective of clinical signs and symptoms", other datasets report confirmed cases as the number of both laboratory-positive subjects and probable cases (JHU, 2020). The US CDC (2020) has amended its previous policy and now reports case counts from commercial and reference laboratories, public health laboratories, and hospital laboratories, but still excludes data from other testing sites within a jurisdiction (e.g., point-of-care test sites). In Turkey, the number of cases published until the end of July represented only symptomatic COVID-19 subjects, excluding asymptomatic laboratory-positive individuals (Reuters Editors, 2020). Other issues affecting data accuracy include duplicate event records, laboratory report delays, missing data, and incorrect dates.

 

Much of the developed world has notifiable disease surveillance systems for effective and efficient reporting within national borders, though the data elements they collect vary. There is also a large number of international data standards that should be used when reporting epidemiology surveillance data (Table 1). However, these do not address the specific requirements that would ensure epidemiology surveillance data are comparable and interoperable.

 

Table 1. Initial list of data standards useful for notifiable disease surveillance systems (non-exhaustive) [SOURCE: Haghiri et al. 2019].

 

Format: Proposed Standard(s)

  • Machine-organizable data: HL7
  • Medical document exchange formats: Clinical Document Architecture (CDA), Continuity of Care Document (CCD), and Continuity of Care Record (CCR)
  • Markup language: XML Document Transform (XDT)
  • Classification systems: International Classification of Diseases (ICD, ICD-9, ICD-9-CM); other classification systems (DRG, CPT, ICECI, HCPCS, ICPM, ICF, DSM)
  • Nomenclature systems: LOINC, SNOMED, RxNorm
  • Standard content-maker formats: standard address format definition, standard contact number format definition, standard ID format definition, and standard date format definition

 

Disease surveillance systems rely on complex hierarchies for data reporting. Raw data are collected at the local level, then anonymized and aggregated as necessary before being sent up a hierarchy with many levels. Even in many of the most developed regions of the world, much of this process is still done by hand, although the push to electronic medical records is gaining traction. As a result, most disease surveillance systems across the world experience reporting lags of at least one to two weeks (Fairchild et al. 2018; Janati et al. 2015).

 

Publicly available data are published on websites that are often difficult to navigate, making the data and associated definitions hard to find.

 

Historical data are not final when first published, due to undetected errors, late or missing data, laboratory delays, etc.; datasets are updated as data become available. This problem, often called "backfill," is due to the complex reporting hierarchy and antiquated systems that disease surveillance systems rely on. Backfill can in some cases drastically affect analyses (Fairchild et al. 2018). The problem is compounded when corrected and missing case counts are added to the date on which the correction was reported, instead of the date on which the event occurred.

 

Case definitions used in epidemiological surveillance data are often unclear. Fairchild et al. (2018) have highlighted the challenges with data reporting and stressed the importance of explicit and clear case definitions. Even with standardized definitions, regions with limited funding for public health institutions may struggle to adopt a framework of best practices. We will develop guidelines that recognise these limitations and that will support both LMICs and HICs.

 

Engagement with other related activities

The proposed “Epidemiology common standard for surveillance data reporting WG” will address a high-priority challenge based on assessments of public health needs during a pandemic using COVID-19 as a use case.

 

The initial WG membership (see Section 6, below) is well connected to various community-based initiatives and WGs that address similar and other relevant topics. The WG will monitor and align its efforts with other related activities, including:

 

RDA WG and interest groups (IG):

Others:

See also “Solicited WG membership” in Section 6, below.

 

4. WORK PLAN 

A specific and detailed description of how the WG will operate.

 

Deliverables

D1 (months 4-12). Epidemiology common standard for surveillance data reporting.

This deliverable will contain the developed common standard specification for reporting epidemiology surveillance data, including variable names, definitions, and rationale.

 

D2 (months 12-16). Guidelines for adopting the common standard.

Guidelines will be based on lessons learned during development of the standard.

 

Milestones

M1 (months 0-1). Engagement of representatives from prominent stakeholders in public health. 

We will seek engagement with the WHO, ECDC, US CDC, ICMR, etc.

 

M2 (months 0-3). Identification of standards gaps and issues concerning data interoperability and comparability across and within jurisdictions. 

  • We will use COVID-19 surveillance data as a use-case to identify issues that can be resolved by implementation of a common standard for reporting communicable disease surveillance data.

  • Identify related standards and guidelines.

  • Identify standards gaps.

 

M3 (months 1-3). Definition of the scope of the standard and detailed objectives.

We will develop a detailed project management and work plan. 

 

M4 (months 3-6). Hackathon.

A hackathon will be conducted for the RDA 17th Plenary in April 2021. The objective will be to combine publicly available COVID-19 related datasets and to present solutions that overcome the barriers encountered. The hackathon will be announced at the 16th Plenary in November 2020 and will be opened in February 2021. Participants will present their results at the 17th Plenary at which time judging will take place and winners announced. Winners will be offered co-authorship on a peer-reviewed publication. From November to January, we will seek sponsors for cash prizes to be awarded to 1st, 2nd, and 3rd place winners.

 

M5 (months 3-12). Development of a draft standard for reporting epidemiology surveillance data.

 

M6 (months 12-14). Public review of the draft standard.

 

M7 (months 12-16). Development of guidelines for adoption of the standard

 

M8 (months 15-17). Finalization of the standard.
 

M9 (months 1-18). Dissemination and Communication.

WG activities and outcomes will be disseminated via the RDA website, preprint(s), submission to a peer-reviewed journal, RDA Plenaries, conference presentations, and social media.

 

Simplified Gantt Chart

[Timeline grid spanning months 1-18, showing deliverables D1 (draft, then final) and D2 (draft, then final), milestones M1 through M9, and RDA Plenaries P17, P18, and P19; month ranges as listed under Deliverables and Milestones above.]

 

 

Work space

The WG will use the following platforms for communication and development:

  • RDA website

  • Google Drive

    • Working documents will be managed on Google Drive to facilitate open collaboration.

  • GitHub 

    • We will develop a public GitHub repository to host the hackathon material, models, source code, and the proposed common standard, and to raise and resolve issues.

  • Zotero

Tools

We will use a variety of tools, for example:

  • Visualization
    • Mindmapping
    • Infographics
  • Gantt charting for project management
  • Voting and consensus building tools

 

Meetings

  • Meetings will be held weekly.

  • An online platform (e.g., GoToMeeting, Zoom, WebEx, MS Teams, or Google Meet) will be used for meetings. Participants will be asked to activate their video to enhance communication effectiveness. The WG will meet at RDA Plenaries, the first such meeting being at the 16th Plenary on November 12, 2020 at 12:00 - 1:30 AM UTC

  • Agenda, minutes, and rolling notes will be circulated via Google Docs.

  • Discussions will be held at the RDA 16th, 17th, and 18th Plenaries, and at other conferences and workshops where possible. 

Consensus

A description of how the WG plans to develop consensus, address conflicts, stay on track and within scope, and move forward during operation, and

 

Consensus will be achieved mainly through discussions in our regular weekly meetings, where conflicting viewpoints will be identified and openly discussed and debated by group members. If consensus cannot be reached in this manner, the final decision will be taken by the group co-chairs. By setting realistic deadlines and assessing progress on assigned tasks, the co-chairs will keep the WG on track and within scope.

 

Community engagement

A description of the WG’s planned approach to broader community engagement and participation. 

 

To encourage broader community engagement and participation in the development of a standard, the WG case statement will be circulated to various public health organizations and epidemiological societies across the globe, and on social media (LinkedIn and Twitter). Regular updates on events and news related to epidemiology common standards will be posted on the RDA WG webpage to encourage the involvement of specialists in the field.

 

License

WG outputs will be published under a CC BY-SA license. 

 

5. ADOPTION PLAN

A specific plan for adoption or implementation of the WG outcomes within the organizations and institutions represented by WG members, as well as plans for adoption more broadly within the community. Such adoption or implementation should start within the 12-18 month timeframe before the WG is complete.

 

WG members will be encouraged to implement the new standard and guidelines within their organizations. We will pursue adoption by a variety of stakeholders and research communities, particularly those involved in public health. The standard will be disseminated via RDA webinars, other scientific presentations, and Twitter. We will also seek to publish the final standard and guidelines as an open access peer-reviewed journal article. We will follow up with adoption stories.

 

6. INITIAL WG MEMBERSHIP

A specific list of initial members of the WG and a description of initial leadership of the WG.

 

Co-Chairs: Claire Austin and Rajini Nagrani

 

RDA Liaison: Stefanie Kethers

 

Members/Interested:

Soegianto Ali

Anthony Juehne

Nada El Jundi

Fotis Georgatos

Jitendra Jonnagaddala

Miklós Krész

Gary Mazzaferro

Jiban K. Pal

Carlos Luis Parra-Calderon

Bonnet Pascal

Fotis Psomopoulos

Stefan Sauermann

Henri Tonnang

Marcos Roberto Tovani-Palone    

Anna Widyastuti

Becca Wilson

Eiko Yoneki

 

Current initial membership

The initial WG includes:

  • Cross-domain expertise 

    • biostatistics, clinical informatics, computer engineering, data science, epidemiology, global health, health informatics, health sciences, interoperability, IT architecture, mathematics, open science, pathology, predictive modeling, public health, research data management, software development, veterinary medicine
  • Experience 

    • academia, editor of scientific journals, government, international WG leadership, program director, research, standards development.
  • Regional representation 

    • Africa (sub-Saharan), Asia (maritime southeast), Asia (south), Australasia, Europe, North America, and South America.
  • Income groups

    • Two lower-middle-income, two upper-middle-income, and fifteen high-income countries.

 

Initial membership comprises a core group from the RDA-COVID19-Epidemiology WG, plus members who bring further domain-specific expertise. We aim to strengthen the group by expanding global participation (low income, lower-middle income, and upper-middle income countries), interdisciplinary expertise, and stakeholder representation to address this pressing common epidemiology surveillance data challenge across the public health domain.

 

Actively soliciting WG membership

The initial membership does not currently include any potential adopters. We will solicit the active participation of representatives from key stakeholders, including the following:

Official agencies and funders

  • Official agencies, organizations, and funders having international reach
  • Supernational organizations
  • European Centre for Disease Prevention and Control (ECDC)
  • Global Early Warning System (GLEWS+)
  • Global Health Security Agenda (GHSA)
  • Global Influenza Surveillance and Response System (GISRS)
  • Global Partnership for Sustainable Development Data (GPSDD)
  • GloPID-R
  • Indian Council of Medical Research (ICMR)
  • Observational Health Data Sciences and Informatics (OHDSI)
  • UN Office for Disaster Risk Reduction (UNDRR)
  • United Nations Educational, Scientific and Cultural Organization (UNESCO)
  • U.S. Centers for Disease Control and Prevention (CDC)
  • Wellcome Trust
  • World Data System (WDS)
  • World Health Organization (WHO)
  • World Bank World Development Indicators (WDI)

     

Data aggregators in academia

  • Johns Hopkins University (Killeen et al. 2020)
  • University of California, Berkeley (Altieri et al. 2020)
  • University of Oxford (Roser et al. 2020)

News Outlets

  • The Atlantic

  • The Economist

  • The Financial Times

  • The New York Times

Communications/graphic artist expertise

 

 

7. REFERENCES

Altieri, N., Barter, R. L., Duncan, J., Dwivedi, R., Kumbier, K., Li, X., Netzorg, R., Park, B., Singh, C., Tan, Y. S., Tang, T., Wang, Y., Zhang, C., & Yu, B. (2020). Curating a COVID-19 Data Repository and Forecasting County-Level Death Counts in the United States. Harvard Data Science Review. https://doi.org/10.1162/99608f92.1d4e0dae

 

Austin, Claire C; Nagrani, Rajini; Widyastuti, Anna; El Jundi, Nada (2020a). Global status of COVID-19 data: A cross-jurisdictional and international perspective. Canadian Public Health Association Conference. October 14-16. https://www.cpha.ca/publichealth2020

 

Austin, Claire C; Widyastuti, Anna; El Jundi, Nada; Nagrani, Rajini; and the RDA COVID-19 WG. (2020b). Surveillance Data and Models: Review and Analysis, Part 1 (September 18, 2020). Preprint available at SSRN: http://dx.doi.org/10.2139/ssrn.3695335

 

Bong CL, Brasher C, Chikumba E, McDougall R, Mellin-Olsen J, Enright A (2020). The COVID-19 Pandemic: Effects on Low- and Middle-Income Countries. Anesth Analg, 131:86-92. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7173081/

 

CDC. Coronavirus Disease 2019 (COVID-19) in the U.S. Centers for Disease Control and Prevention. 2020 [cited 2020 Oct 23]. Available from: https://covid.cdc.gov/covid-data-tracker

 

Fairchild G, Tasseff B, Khalsa H, Generous N, Daughton AR, Velappan N, Priedhorsky R, Deshpande A (2018). Epidemiological Data Challenges: Planning for a More Robust Future Through Data Standards. Front Public Health, 6:336. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6265573/

 

Gardner, L., Ratcliff, J., Dong, E., & Katz, A. (2020). A need for open public data standards and sharing in light of COVID-19. The Lancet Infectious Diseases, 0(0). https://doi.org/10.1016/S1473-3099(20)30635-6

 

Greenfield J., Tonnang E.Z., Mazzaferro G., Austin, C.C.; and the RDA-COVID19-WG. (2020). Epi-TRACS: Rapid detection and whole system response for emerging pathogens such as SARS-CoV-2 virus and the COVID-19 disease that it causes. IN: COVID-19 Data sharing in epidemiology, version 0.06b. Research Data Alliance RDA-COVID19-Epidemiology WG. https://doi.org/10.15497/rda00049

 

GLEWS (2013). Global Early Warning System. http://www.glews.net/?page_id=5

 

Haghiri H, Rabiei R, Hosseini A, Moghaddasi H, Asadi F (2019). Notifiable Diseases Surveillance System with a Data Architecture Approach: A Systematic Review. Acta Inform Med, 27:268-277. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7004293/

 

Janati A, Hosseiny M, Gouya MM, Moradi G, Ghaderi E (2015). Communicable Disease Reporting Systems in the World: A Systematic Review. Iran J Public Health, 44:1453-1465. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4703224/

 

JHU (2020). Coronavirus resource center. Johns Hopkins University. https://coronavirus.jhu.edu/

 

Killeen, B. D., Wu, J. Y., Shah, K., Zapaishchykova, A., Nikutta, P., Tamhane, A., Chakraborty, S., Wei, J., Gao, T., Thies, M., & Unberath, M. (2020). A County-level Dataset for Informing the United States’ Response to COVID-19. ArXiv:2004.00756 [Physics, q-Bio]. http://arxiv.org/abs/2004.00756

 

Modig K, Ebeling M (2020). Excess mortality from COVID-19: Weekly excess death rates by age and sex for Sweden. Preprint available at medRxiv: https://doi.org/10.1101/2020.05.10.20096909

 

Norton, A., Pardinz-Solis, R., & Carson, G. (2017). Roadmap for data sharing in public health emergencies. GloPID-R. https://www.glopid-r.org/our-work/data-sharing/

 

OECD (2020). Why open science is critical to combatting COVID-19—OECD. Organisation for Economic Co-Operation and Development, May 12, 2020. https://read.oecd-ilibrary.org/view/?ref=129_129916-31pgjnl6cb&title=Why-open-science-is-critical-to-combatting-COVID-19

 

OHDSI (2020). Observational Health Data Sciences and Informatics. https://ohdsi.github.io/TheBookOfOhdsi/

 

RDA COVID-19 WG (2020). Recommendations and guidelines. Research Data Alliance. https://doi.org/10.15497/rda00052

 

RDA COVID-19 Epidemiology WG (2020). Sharing COVID-19 epidemiology data: Supporting output. Research Data Alliance. https://doi.org/10.15497/rda00049

 

Reuters Editors. Turkey has only been publishing symptomatic coronavirus cases - minister. Reuters. 2020 [cited 2020 Oct 15]; Available from: https://www.reuters.com/article/health-coronavirus-turkey-int-idUSKBN26L3HG

 

Roser, M., Ritchie, H., Ortiz-Ospina, E., & Hasell, J. (2020). Coronavirus Pandemic (COVID-19). Our World in Data. https://ourworldindata.org/coronavirus

 

SDMX (2020). The Business Case for SDMX. SDMX Initiative. https://sdmx.org/?sdmx_news=the-business-case-for-sdmx

 

UN (2018). Overview of standards for data disaggregation. United Nations. https://unstats.un.org/sdgs/files/Overview%20of%20Standards%20for%20Data%20Disaggregation.pdf

 

UN (2020). IAEG-SDGs—Data Disaggregation for the SDG Indicators. United Nations. https://unstats.un.org/sdgs/iaeg-sdgs/disaggregation/

 

UNESCO (2020). Preliminary report on the first draft of the Recommendation on Open Science—UNESCO Digital Library. United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000374409.locale=en.page=10

 

UNESCO, WHO, HCHR, & CERN (2020, October 27). ​Joint Appeal for Open Science. https://events.unesco.org/event/?id=1522100236

 

Vicente-Saez, R., & Martinez-Fuentes, C. (2018). Open Science now: A systematic literature review for an integrated definition. Journal of Business Research, 88, 428–436. https://doi.org/10.1016/j.jbusres.2017.12.043

 

WHO (2020a). Public health surveillance. United Nations, World Health Organization. https://www.who.int/immunization/monitoring_surveillance/burden/vpd/en/

 

WHO. (2020b). Country & Technical Guidance—Coronavirus disease (COVID-19). World Health Organization. https://www.who.int/emergencies/diseases/novel-coronavirus-2019/technical-guidance-publications?publicationtypes=df113943-c6f4-42a5-914f-0a0736769008

 

WHO. (2020c). Global COVID-19 Clinical Data Platform for clinical characterization and management of hospitalized patients with suspected or confirmed COVID-19. World Health Organization. https://www.who.int/docs/default-source/documents/emergencies/information-sheet-global-covid19-data-platofrm.pdf?sfvrsn=ff1f4e64_2

 

WHO. (2020d). Global COVID-19 Clinical Data Platform. World Health Organization. https://www.who.int/teams/health-care-readiness-clinical-unit/covid-19/data-platform

 

WHO (2020e). Preparing GISRS for the upcoming influenza seasons during the COVID-19 pandemic – practical considerations. United Nations, World Health Organization. https://apps.who.int/iris/bitstream/handle/10665/332198/WHO-2019-nCoV-Preparing_GISRS-2020.1-eng.pdf.

 

WHO (2020f). COVID-19 case definition. https://www.who.int/publications/i/item/WHO-2019-nCoV-Surveillance_Case_Definition-2020.1

 

WHO (2020g). Global Influenza Surveillance and Response System (GISRS). United Nations, World Health Organization. https://www.who.int/influenza/gisrs_laboratory/en/.

 

WHO (2020h). COVID-19 Core Version Case Record Form (CRF). United Nations, World Health Organization. https://media.tghn.org/medialibrary/2020/05/ISARIC_WHO_nCoV_CORE_CRF_23APR20.pdf 

 

WHO (2020i). COVID-19 Rapid Version Case Record Form (CRF). United Nations, World Health Organization. https://media.tghn.org/medialibrary/2020/04/ISARIC_COVID-19_RAPID_CRF_24MAR20_EN.pdf

 

WHO (2020j). WHO Information Network for Epidemics (EPI-WIN). United Nations, World Health Organization. https://www.who.int/teams/risk-communication/about-epi-win

 

 

Review period start:
Wednesday, 28 October, 2020 to Friday, 25 December, 2020
Custom text:
Body:
Review period start:
Monday, 26 October, 2020
Custom text:
Body:

Introduction (A brief articulation of what issues the IG will address, how this IG is aligned with the RDA mission, and how this IG would be a value-added contribution to the RDA community):

Extensive work has been, and continues to be, done on data interoperability in the technical and information domains. However, a large portion of the challenges in building interoperable information infrastructures result from the interplay between organisations, institutions, economics, and individuals. Collectively these form the social dynamics that foster or hinder progress towards achieving technical and information interoperability.

These are some of the most difficult challenges to address, and there is currently only a limited body of work on how to address them in a systematic way. In keeping with the mission of the RDA, this group will focus on what is required to build the social bridges that enable open sharing and re-use of data.

The focus of this interest group is to identify opportunities for the development of systematic approaches to address the key social challenges and to build a corpus of knowledge on building and operating interoperable information infrastructures.

 

User scenario(s) or use case(s) the IG wishes to address (what triggered the desire for this IG in the first place):

Within Australia, the National Collaborative Research Infrastructure Strategy (NCRIS) set forth the need to establish a National Environmental Prediction System (NEPS). This requires collaboration, coordination and (most importantly) interoperability between a range of facilities, organisations and government entities for the system to work effectively. A number of the facilities involved have recently come to the realization that the social dynamics between facilities are a key factor in the success (or failure) of this initiative.

Within the United States, initiatives such as the Pacific Research Platform, the National Research Platform, and the Eastern Regional Network are a few examples of cross-institutional initiatives whose success is dependent as much on social dynamics as on overcoming technical challenges.

The problem exists at smaller scales as well.  At the institutional level, the need to drive adoption across IT, IT Security, Research units, and Libraries provides a persistent challenge.

The BoF session held at the 13th Plenary session highlighted that similar challenges exist within other research domains.

There are many solutions being applied every day around the world to address these challenges. Many are conceived and developed through the knowledge and experience of the individuals involved. However, at present there is limited systematic knowledge on this topic for practitioners to draw upon.

For example, the RDA itself is an instrument intended to address some of the challenges that exist in the social dynamics across the global research data landscape.  As such it provides both an interesting case study as well as a representative microcosm of the broader challenges in this space.

 

Objectives (A specific set of focus areas for discussion, including use cases that pointed to the need for the IG in the first place.   Articulate how this group is different from other current activities inside or outside of RDA.):

Currently there is no other IG within the RDA with a specific focus on the social dynamics (i.e., the interplay between organisations, institutions, economics, and individuals) relating to interoperable information infrastructure.

The main objectives of this IG are to:

  • Identify organisational, institutional, economic, and individual aspects that increase the friction to achieving information interoperability.
  • Develop a corpus of knowledge, including models, frameworks and patterns that can be applied by practitioners to develop the desired social dynamics that reduce friction and foster information interoperability.
  • Identify and develop case studies of solutions that demonstrate the application of the corpus of knowledge on this topic.  It is acknowledged that often the details of specific case studies could be sensitive and documented case studies may need to be synthesised drawing upon actual cases.

The purpose of this IG is to create a body of knowledge and illustrative case studies that equip practitioners to understand the social dynamics at play in their specific context and to draw on this knowledge to influence positive change.

                                                                                                   

Participation (Address which communities will be involved, what skills or knowledge should they have, and how will you engage these communities.  Also address how this group proposes to coordinate its activity with relevant related groups.):

Participation in this IG is open to anyone with an interest in social dynamics as they relate to building interoperable data infrastructures. Specific skills and knowledge that would be useful for this IG include:

  • Social psychology
  • Organisational behaviour and organisational psychology
  • Economics
  • Legal frameworks
  • Digital anthropology
  • Digital ethnography

It is expected that many of the topics of interest for this IG will have some degree of overlap with other IGs and WGs within RDA.  It is intended that this IG will keep these related IGs informed of its activity, and seek to coordinate with them on topics that overlap or have a common interest.  It is feasible that in the future we could hold joint sessions at plenary events around common topics.

Drawing on the descriptions provided on the RDA website, the following IGs have been identified as potentially having overlapping interests with this IG:

  1. Big Data IG
  2. Biodiversity Data Integration IG
  3. Chemistry Research Data IG
  4. CODATA/RDA Research Data Science Schools for Low and Middle Income Countries
  5. Data Economics IG
  6. Data Fabric IG
  7. Data Foundations and Terminology IG
  8. Data in Context IG
  9. Data policy standardisation and implementation IG
  10. Digital Practices in History and Ethnography IG
  11. Domain Repositories IG
  12. Early Career and Engagement IG
  13. Education and Training on handling of research data IG
  14. ELIXIR Bridging Force IG
  15. Engaging Researchers with Data IG
  16. Ethics and Social Aspects of Data IG
  17. Federated Identity Management
  18. Global Water Information IG
  19. National Data Services IG
  20. Physical Samples and Collections in the Research Data Ecosystem IG
  21. PID IG
  22. Preservation Tools, Techniques, and Policies
  23. RDA/CODATA Legal Interoperability IG
  24. RDA/CODATA Materials Data, Infrastructure & Interoperability IG
  25. RDA/NISO Privacy Implications of Research Data Sets IG
  26. RDA/WDS Certification of Digital Repositories IG
  27. Research Data Architectures in Research Institutions IG

Outcomes (Discuss what the IG intends to accomplish.  Include examples of WG topics or supporting IG-level outputs that might lead to WGs later on.):

There are three primary outcomes of this IG:

  1. Create a community of interest on the social dynamics of interoperable information infrastructures;
  2. Create a corpus of knowledge on the topic;
  3. Identify and develop case studies of solutions that demonstrate the application of the corpus of knowledge on this topic.

Some initial topics that could lead to Working Groups include:

  • Problem and solution patterns in Information Infrastructure;
  • Governance  & participation models;
  • Frameworks for trust;
  • Incentives and disincentives for collaboration and participation;
  • Specific institutional partnerships known to exist, how they came to be, and their varying degrees of success

Mechanism (Describe how often your group will meet and how will you maintain momentum between Plenaries.):

The group will aim to hold at least one virtual meeting between Plenaries. It will also establish a mechanism (possibly the mailing list) for offline discussions.

Timeline (Describe draft milestones and goals for the first 12 months):

   

  • Months 1–6: Research and identify organisational, institutional, economic, and individual challenges to achieving interoperability
  • Months 7–12: Identify case studies
  • Months 12–24: Creation of knowledge corpus
  • Months 24+: Apply knowledge corpus to case studies

  

 

Potential Group Members (Include proposed chairs/initial leadership and all members who have expressed interest):

 

  • Kheeran Dharmawardena, Co-Chair (kheerand@cytrax.com.au)
  • Greg Madden (gregmadden@psu.edu)
  • Heidi Laine (heidi.laine@csc.fi)
  • Jay Pearlman (jay.pearlman@fourbridges.org)
  • Jeremy Cope (jez.cope@bl.uk)
  • Kathleen Gregory (kathleen.gregory@dans.knaw.nl)
  • Kiera McNeice (kmcneice@cambridge.org)
  • Lisa Raymond (lraymond@whoi.edu)
  • Maggie Hellström (margareta.hellstrom@nateko.lu.se)
  • Stefanie Kethers (stefanie.kethers@ardc.edu.au)

 

Review period start:
Tuesday, 20 October, 2020
Custom text:
Body:

Introduction

Data management plans (DMPs) serve as the first step in the research data management (RDM) lifecycle. They aid in recording metadata at various levels during the data description process and are intended to be adapted as a project evolves. During consultations and training focused on data management plans, it becomes clear that the views of research funders and researchers differ widely on the use of DMPs: research funders want to know what happens to the data during and after the project, while researchers want support in their daily work with data and tend to see the DMP as an additional bureaucratic burden. Creating DMPs is further complicated because individual disciplines may have very different requirements and challenges for data collection and data management.

 

The full case statement of the WG is attached to this page.

 

 

 

 

 

Review period start:
Tuesday, 20 October, 2020 to Friday, 20 November, 2020
Custom text:
Body:

Disciplinary groups play a pivotal role in the RDA ecosystem. Not only do they provide a mechanism for engaging with the outputs of the RDA community, they are also instrumental in providing input to the cross-cutting technical and socio-cultural groups. With its formation and endorsement in 2013, the Biodiversity Data Integration IG has been one of the pioneering groups in RDA, focusing from the outset on linking the wider communities of practice in the biodiversity data domain with experts and other communities within RDA.

 

The COVID-19 outbreak was a game changer that will have a permanent effect on digital strategies. It has made it abundantly clear that the domain needs to expand its digital strategies beyond simply adding content online. This includes strengthening relationships with our audiences, supporting the development of people and their digital skills, evaluating joint policies, and building the ability to evaluate choices in the ever-changing technology landscape. The BDI IG aims to be a platform for discussion and development of recommendations and guidance that can help institutions and researchers in the domain in their digital transformation.

 

This charter has been updated because there are now many developments in the field of biodiversity data that would benefit from alignment with developments in other scientific domains, and vice versa. Examples are the global movement to FAIR data, implementing RDA recommendations towards a FAIR DO infrastructure for specimens with Natural Science Identifiers, an extended Catalogue of Life, and the establishment of a Global Alliance for Biodiversity Knowledge.

 

Please find the updated Charter here: https://www.rd-alliance.org/sites/default/files/case_statement/RDA%20BDI...

Review period start:
Wednesday, 12 August, 2020 to Saturday, 12 September, 2020
Custom text:
Body:

Reuse of health and clinical research data (including social care, and hereafter referred to as health research data) has major restrictions when compared to other research data in the biomedical domain. This primarily pertains to the concerns imposed by privacy, sensitivity and ethical issues raised by making data freely available. Meanwhile, research data management (RDM) practices such as the creation of data management plans (DMPs), sharing datasets, the deposition of data in repositories, and the application of FAIR data principles to research outcomes are becoming increasingly common as they are required by funder mandates. Beyond this funder requirement, there is also the wider need for researchers to share, find and access data to progress common goals as the primary value, and to promote integrity and reproducibility.

The last few years have seen a rapid rise in uptake of the FAIR principles, which originated in the life sciences domain but have now been adopted to varying degrees across all research domains. Concomitant with the rise of FAIR datasets has been an increase in open research, which urges researchers to make their data available for reuse, especially when publicly funded. However, an important caveat when comparing FAIR with open research is the phrase “as open as possible, as closed as necessary”.

The recent enforcement of the GDPR in Europe is a prime example of a legal framework that places strict regulations around the processing and sharing of personal data, and puts the onus on the data controller to ensure that provisions are in place. Although the GDPR is the most far-reaching data protection legislation currently in the world, other territories also restrict the secondary use of personal data and health data, e.g. the USA (HIPAA), Ireland (Health Research Regulation), India (Personal Data Protection Bill) and South Africa (Protection of Personal Information Act). As well as internationally enforced restrictions, there are those at national and local levels, and together they all require evidence that the sharing and reuse of health research data are carried out responsibly and in line with stated aims. The legislation is not meant to impose barriers but to protect individuals' rights.

FAIR adoption in the health research domain is complicated by numerous factors, including concerns regarding the ethical, moral, cultural, technical, and legal constraints of primary source data. We therefore propose this WG to address some of these issues and to:

  • Analyse and report legal and ethical issues surrounding data privacy of health research data at the national level.
  • Identify commonalities across territories that can be a foundation for harmonisation of guidelines on FAIR adoption.
  • Provide Health Research Performing Organizations (HRPOs) with a set of clear and simple guidelines for implementing FAIR Open Data policy in health research.

Therefore, the main purpose of this WG is to provide HRPOs (such as universities, public research institutes, hospitals, medical charities, etc.) with a set of clear and simple guidelines, which will define, establish and enable implementation of an aligned FAIR data policy at the institutional level.

Review period start:
Friday, 5 June, 2020 to Monday, 6 July, 2020
Custom text:
Body:

Update October 2020:

Based on the revisions requested by the TAB, we have updated the IG charter (changes in yellow) with regard to the following aspects:

  • updated information on the co-chairs nomination/election, including geographical spread.
  • updated information on our contacts with relevant other IG and WG, to speed up collaborations

We thank the TAB for their useful feedback and hope the updated charter will result in the formal recognition of our IG (in formation). Don't hesitate to contact us in case of questions.

Many thanks and best regards,

Mijke Jetten, on behalf of the other co-chairs as well,

Peter Neish, Niklas Zimmer, Mohammad Akhlaghi, Varsha Khodiyar, Michelle Barker, Romain DAVID, Debora Drucker, Christina Drummond, Yan Wang

 


May 2020:

In attachment, you find the charter proposal for the Professionalising Data Stewardship Interest Group (in formation). Part of the submission process is a round of community review (up to June 18) on the proposed charter. We kindly ask you to read the charter and provide feedback via the 'comments option'. 

Many thanks in advance,

Mijke, Marta & Peter (co-chairs)

 


Original Charter (May 2020): https://www.rd-alliance.org/sites/default/files/case_statement/Professio...

Revised Charter (October 2020): https://www.rd-alliance.org/sites/default/files/case_statement/Update%20...

Review period start:
Monday, 18 May, 2020 to Thursday, 18 June, 2020
Custom text:
Body:

Please note that the CURE FAIR WG's Case Statement was revised in October 2020, following the community review and TAB review. This final version is dated October 2020.

The version that underwent community and TAB review was created in May 2020. Both versions are attached to this page.


The goal of the working group is to establish guidelines and standards for curating for reproducible and FAIR data and code (Wilkinson et al., 2016). The ultimate objective is to improve FAIR-ness and long-term usability of “reproducible file bundles” across domains.

When we think of specific research outputs, we might think of data, software, codebooks, etc. These individual outputs may have inherent value. For example, a set of observations that is very costly to produce, or that cannot be repeated, or a script that can be used by others for computation. Traditional curation has considered these outputs as its core objects. But in the context of empirical research, these outputs interact with each other, often to produce specific findings or results. Nowadays, the process by which results are generated is captured in computation. Our approach to curation takes this process into account and focuses on computational reproducibility.

 

Computational reproducibility is the ability to repeat the analysis and arrive at the same results (National Academies of Sciences, Engineering, and Medicine, 2019; Stodden, 2015). It requires using the data and code from the original analysis, along with additional information about study methods and the computational environment. The reasons to pursue computational reproducibility are to preserve a complete scientific record, to verify scientific claims, to do science and build upon the findings, and to teach (Elman, Kapiszewski, & Lupia, 2018; Resnik & Shamoo, 2017; Stodden, Bailey, & Borwein, 2013).

 

In this framework, the object of the curation is a “reproducible file bundle” and its component parts, including the files and their elements (e.g., variables), with the goal of enabling continued access and independent reuse of the bundle for the long term.

The CURE-FAIR WG is focused on the curation practices that support computational reproducibility and FAIR principles.

By curation we refer to the activities designed for “maintaining, preserving and adding value to digital research data throughout its lifecycle” (Digital Curation Center, n.d.).

The WG will deliver:

  1. A snapshot of the current state of CURE-FAIR practices drawing upon community surveys and reviews of practice.
  2. A synthesis of practices relating to curating for computational reproducibility and FAIR principles.
  3. A final document outlining standards and guidelines for CURE-FAIR best practices in publishing and archiving computationally reproducible studies, including the associated computational methods and materials.
Review period start:
Friday, 15 May, 2020 to Monday, 15 June, 2020
Custom text:
