Public Workshop in Leiden, as part of the FAIR Convergence Symposium, 12:00-15:30 CEST (10:00-13:30 UTC); two 90-minute sessions with a 30-minute break.
In case you missed the event, you can watch the workshop recording on Vimeo and check the presentation decks.
Location: Collegezaal 2, LUMC, and online.
The WorldFAIR project is a major new global collaboration between partners from thirteen countries across Africa, Australasia, Europe, and North and South America. WorldFAIR will advance implementation of the FAIR data principles, in particular those for Interoperability, by developing a Cross-Domain Interoperability Framework and recommendations for FAIR assessment in a set of 11 disciplines or cross-disciplinary research areas.
FAIR Implementation Profiles (FIPs) are an approach, developed by GO FAIR, through which a research community expresses its practices and decisions around FAIR. The methodology involves a series of questions about how the community makes data and metadata FAIR and which ‘FAIR Enabling Resources’ (FERs) are used. The WorldFAIR project is exploring FIPs with our 11 case studies.
A FIP consists of a set of questions about practice in relation to each of the FAIR principles, and the process is supported by an online tool, the FIPs Wizard. One potential benefit is the creation, as ‘nanopublications’, of a network of FIPs and FERs, encoded in RDF, which can be visualised and analysed. As more FIPs and FERs are created, this network will become a resource offering considerable insight into FAIR practices.
In this workshop we will reflect on the experience and explore our findings. What have we learnt from the process?
PART ONE: THE EXPERIENCE OF THE CASE STUDIES
Introduction to WorldFAIR and FIPs; Simon Hodson, CODATA (10 mins)
Presentations from six Case Studies (7.5 minutes each): we have invited 6 of the 11 WorldFAIR Case Studies to present their experiences of developing FIPs. The other 5 case studies are invited to participate and to contribute to the discussion.
- Chemistry, IUPAC: Leah McEwan (remote), Ian Bruno and Stuart Chalk (onsite)
- Nanomaterials: Iseult Lynch and Thomas Exner (onsite)
- Social Surveys: Steve McEachern and Hilde Orten (remote)
- Agricultural Biodiversity: Maarten Trekels (onsite) and Debora Drucker (remote)
- Disaster Risk Reduction: Bapon Fakhruddin and Jill Bolland (remote)
- Cultural Heritage: Beth Knazook (onsite)
Each case study is asked to describe its experience and, in particular, to respond to the following questions:
- What have you learnt from the process?
- Has using FIPs helped you describe practices around FAIR in your case studies?
- Has it helped identify any gaps or areas which would benefit from further attention?
- Has the process identified ways in which the FIPs methodology and the tools around it can be improved?
- What have you learnt about the FAIRness of your community or domain?
- Have you identified any next steps in response to what you have learnt?
The presentations will be followed by a general discussion of about 50 minutes, in which all Case Studies will be invited to share their experiences and all participants will be invited to contribute.
The discussion will take place either side of a 30-minute break in which refreshments will be available for onsite participants.
PART TWO: DISCUSSION OF EXPERIENCE, FINDINGS AND NEXT STEPS
Summary of what we have learnt and implications for the Cross-Domain Interoperability Framework (CDIF): Arofan Gregory, CODATA (15 mins), followed by c. 45 minutes of discussion.
This presentation and the subsequent discussion will respond to the following questions:
- How have the outcomes of the FIPs assisted the project in the development of a Cross-Domain Interoperability Framework and recommendations for more domain-sensitive FAIR assessment?
- What commonalities have we identified?
- Has the process helped our identification of components of CDIF and candidate standards?
- Have we identified any specific needs in domains that should be part of domain sensitive FAIR assessment?
- What are the key findings about FIPs as a methodology?
- What improvements would we recommend?