Answered By: Laurissa Gann
Last Updated: Aug 17, 2023

Given the excitement and interest around large language models like OpenAI's ChatGPT, it's important to address the implications of their use for scientific writing and publishing. AI tools can help writers save time, correct grammar, develop well-structured text, summarize large amounts of text, or overcome barriers faced by disadvantaged populations. But along with the potential benefits come potential problems like factual errors, poorly chosen or fake sources, security and confidentiality risks, plagiarism, and lack of accountability. 

Here are some best practices to consider before using an AI-based application.

Follow all guidelines

When deciding whether and how to use AI tools in your writing, first review all applicable guidelines for AI use. These include guidelines from your employer or institution, professional organizations, journals, and granting agencies.

Ensure content integrity

Authors are responsible for the integrity of their published content, including text generated by AI tools. Therefore, AI tools should not be listed as co-authors, as they do not qualify for authorship. Instead, they should be listed in the acknowledgments or methods section of the manuscript (AMA guideline).

Safeguard PII and PHI

We are prohibited from entering personally identifiable information, protected health information, or institutional intellectual property, including unpublished manuscripts and grants, into web-based search engines and tools such as ChatGPT and other generative platforms (ADM1187).

Disclose use

Authors should always disclose the use of AI tools when publishing or generating academic materials (ICMJE and AMA guidelines).

Do not use AI for peer review

Reviewers of journal articles and grant applications are entrusted with confidential material and required to maintain confidentiality throughout the peer review process. Entering a manuscript or grant application into an AI tool would therefore breach that confidentiality.
