RDA Guidance on AI Tools Usage

08 June 2023

1. Introduction

This Artificial Intelligence (AI) tool usage guidance provides guidelines and best practices for the use of AI tools during Research Data Alliance (RDA) activities, including meetings and events. AI tools are increasingly used alongside web conferencing platforms to support efficiency and productivity during work-related tasks and meetings. While these tools have vast potential for Open Science [1], the RDA recognises the need for community guidance on their responsible use in an RDA context, particularly in meetings. This guidance covers AI meeting assistants that record audio, transcribe notes, automatically capture slides or screenshots, and generate summaries. It aims to support the responsible and ethical use of such tools while minimising the risk of misuse that may harm individuals or conflict with the RDA's mission, vision, guiding principles and Code of Conduct. Given the fast-evolving AI technology landscape, this guidance will be reviewed regularly and updated as necessary.


2. Scope

This guidance applies to:

  • All RDA community members and any attendee participating in meetings organised by the RDA, irrespective of membership.
  • AI tools that can record online meetings through the RDA-approved web conferencing platform (Zoom) and disseminate transcripts.
  • AI tools that can be used to record in-person meetings and disseminate transcripts.


3. Context

Text rendering of spoken discussions and presentations by AI meeting assistants may help to promote inclusion, accessibility and participation during meetings by assisting participants who have disabilities, are not fluent in the meeting language, and/or are unable to fully engage for the entire meeting duration. However, there are privacy implications, since some AI meeting assistants use the data collected to improve their transcription services (e.g. read.ai) (see Section 4.4 below). As such, while AI tools can assist participants, the RDA recognises that they may also cause participants to feel unsafe and unwilling to speak freely.


4. Guidance

In alignment with the RDA’s Code of Conduct, the following guidelines and best practices aim to provide a safe and inclusive environment for all members and meeting attendees:


4.1. Due diligence

Consistent with the RDA's mission, vision and guiding principles, users should choose tools with open and transparent methodologies and documentation so that their operational processes can be better understood. Where available, users should familiarise themselves with a tool's privacy policy, be aware that AI tools may generate biased, inaccurate and/or inappropriate results (e.g. recordings, transcripts, screenshots and summaries), and understand why this might occur based on the tool's data usage policy.


4.2. Consent

Users of an AI meeting assistant must explicitly inform participants about the use of the tool and request consent from all participants before the meeting. The user should also declare the use of the tool at the beginning of the meeting and should only use it if all participants approve.

AI bots may not be scheduled to attend meetings in place of a person; the RDA considers this incompatible with its guiding principles of 'consensus' and being 'community-driven', and in breach of the RDA Code of Conduct (inappropriate content, "Opinions of fictitious parties"). If an RDA meeting facilitator identifies the potential presence of an AI bot that has not been declared by its user, the facilitator may note the presence of the bot to other participants and highlight the potential for meeting recording and post-meeting transcript dissemination. The facilitator may also remove an AI bot if deemed necessary.


4.3. Accessibility of AI-generated outputs

The RDA does not promote the use of AI tools that require a participant to create an account and log in to access the outputs, or that collect or disseminate personal information. If information is collected, all AI-generated outputs must be openly accessible to all participants after a meeting.


4.4. Data Privacy

At all times, AI tools should align with the RDA's Data Privacy Policy. The RDA takes the collection of personal information very seriously, in accordance with its privacy policy. To comply with the policy, tools that collect, store or disseminate personal contacts during a meeting must not be used.

AI tools should have a clear data privacy and usage statement and/or policy. Ideally, the policy should list how and what types of data are collected (for example, personal account, financial and IP information). For examples, see the privacy policies of Otter.ai and Sembly.ai.

Users should also consider where the data is stored: many tools allow users to select where and how data is stored, for example in a private cloud, with end-to-end encryption, and shared only upon request. As a concrete example, Fireflies is a commercial tool whose published storage policy covers aspects such as server location and type, as well as the option to delete your metadata records, among other information.


1. Wang, Kuansan (2019) Opportunities in Open Science With AI. Frontiers in Big Data, Vol. 2. https://doi.org/10.3389/fdata.2019.00026


Relevant Articles

Chubb, J., Cowling, P. & Reed, D. (2022) Speeding up to keep up: exploring the use of AI in the research process. AI & Society 37, 1439–1457. https://doi.org/10.1007/s00146-021-01259-0

University of Göttingen (2023) Open Science and AI https://pad.gwdg.de/OpenScienceGOE20230315

Wang, Kuansan (2019) Opportunities in Open Science With AI. Frontiers in Big Data, Vol. 2. https://www.frontiersin.org/articles/10.3389/fdata.2019.00026/full

Zhou & Hunter (2023) Accessibility Powered by AI: How Artificial Intelligence Can Help Universalize Access to Digital Content. Scholarly Kitchen https://scholarlykitchen.sspnet.org/2023/06/05/guest-post-accessibility-...