
31 Oct 2019

Applying FAIR principles to historical data

The idea came up while I was participating in the Working Group (WG) on the FAIR Data Maturity Model, which took place on the first day of the 14th RDA Plenary in Helsinki. One of the opposing views discussed during the WG concerned machine-processable data versus human access. The question posed was: does FAIRness require data to be machine-processable, or can human access to data also be considered FAIR?

But let’s start from the beginning. I had the opportunity to participate in the RDA Plenary because I had been selected as a grantee of the RDA Europe 4.0 programme for Experts. It was a great pleasure to realize that I would have the chance to take part in one of the biggest data events, where people from different disciplines and communities exchange their views on challenging issues such as data sharing, data responsibility and data FAIRness. Even before going to the Plenary, I had decided to engage in every “fair” activity. The reason is that at my home institute (Hellenic Centre for Marine Research) I deal with data management, and over the last six months I have earned the nickname of “fair lady”, as I closely follow every FAIR activity (conferences, workshops, publications, etc.).

So let’s come back to the question posed. In order to be considered FAIR (Findable, Accessible, Interoperable, Reusable), must scientific data be machine-processable and actionable? In other words, must it be digitized? And here comes the dilemma: what about data that exists only in PDF format, such as historical data? Could it become FAIR at some point?

A plethora of historical datasets exists in the form of unorganized printed documents or PDF files held by institutions, libraries or personal collections. Data of this type, which is neither digitized nor stored on a remote server (e.g. in the cloud or another online repository), is at risk of being lost to future use. Permanent loss of data has fatal consequences, simply because observations collected in a certain area over a certain period of time can never be retrieved again. Consequently, loss of data equals the loss of unique resources and, ultimately, the loss of our natural and cultural wealth.

In my view, this type of data should attract our attention. In response to this waste of valuable data, the public sector should play its role and guarantee long-term support by funding digitization and preservation projects and data management plans for historical data. Implementation of the FAIR principles should be a prerequisite for these projects and plans.

In other words, rescued historical data should be: (a) open, as there are no copyright issues; (b) enriched with descriptive metadata that is always available to the public (a minimal sketch of such a metadata record is given below); (c) documented with respect to provenance and scope; (d) made available using standardized methodologies, controlled vocabularies and ontologies; and, last but not least, (e) interoperable with trustworthy and certified data repositories and knowledge bases.
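To make point (b) a little more concrete, the sketch below shows one possible form of a machine-actionable metadata record for a digitized historical dataset. It is only an illustration under assumptions, not a prescribed format: the dataset name, identifiers and URLs are hypothetical placeholders, and the record uses widely adopted terms from the schema.org Dataset vocabulary, serialized as JSON-LD with Python's standard library.

```python
import json

# A minimal, machine-actionable metadata record for a hypothetical digitized
# historical dataset, expressed as JSON-LD with schema.org Dataset terms.
# All names, identifiers and URLs below are illustrative placeholders.
record = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Historical sea-surface temperature logs, Aegean Sea (1930-1960)",
    "description": "Digitized temperature observations transcribed from "
                   "printed expedition reports.",
    "identifier": "https://doi.org/10.xxxx/placeholder",        # persistent identifier (Findable)
    "license": "https://creativecommons.org/licenses/by/4.0/",  # open licence (Reusable)
    "url": "https://repository.example.org/dataset/12345",      # landing page (Accessible)
    "keywords": ["sea surface temperature", "historical data", "Aegean Sea"],
    "temporalCoverage": "1930/1960",
    "spatialCoverage": "Aegean Sea",
    "isBasedOn": "Printed expedition reports held in the host institute's "
                 "library (provenance note)",                   # provenance (Reusable)
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",                            # open, structured format (Interoperable)
        "contentUrl": "https://repository.example.org/files/sst_1930_1960.csv"
    }
}

# Serialize the record so it can be harvested and indexed by machines.
print(json.dumps(record, indent=2))
```

Embedding such a record in a dataset's landing page, or exposing it through a repository's harvesting interface, is one common way to make otherwise static, scanned material findable and processable by machines, which speaks directly to the machine-processability question raised above.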

How can we understand our present or glimpse our future if we cannot understand or know our past in depth? How can we know who we are if we do not know who we were? This applies to us both as humans and as scientists. Part of our future lies in our past; that is why historical data could and should be FAIRified!
