REMS (Research Experience for Medical Students)

Artificial Intelligence

Artificial intelligence (AI) is a rapidly evolving field with the potential to transform the health sciences. AI will present both benefits and challenges to health care professionals in their research and scholarly publication journey. While AI tools can help analyze data, proofread scholarly communication, and summarize literature, they also raise ethical and legal concerns. Privacy, bias, accountability, security, and other issues should be considered when researchers use such tools. Additionally, medicine and the health sciences require the background and experience of trained professionals to confirm that AI-generated output is accurate, responsible, and effective when applied in practice.

For an in-depth guide to artificial intelligence in medicine and health sciences education, visit the SMHS Artificial Intelligence Research Guide.

Data Privacy

Concerns about privacy and security, including (but not limited to):

  • Unauthorized access to sensitive research data.
  • Data breaches and leaks of confidential information.
  • Re-identification of de-identified data, compromising participant anonymity (illustrated in the sketch after this list).
  • Misuse of AI to infer personal information from seemingly anonymous data.
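
Why is re-identification a real risk? The short Python sketch below illustrates a simple linkage attack: a "de-identified" research table is joined to a public record on quasi-identifiers (ZIP code, birth year, sex), re-identifying a participant. All names and values here are invented for illustration; this is a sketch of the concept, not a real dataset.

    # Hypothetical linkage attack: joining "de-identified" rows to a public
    # record on shared quasi-identifiers can reveal who a row belongs to.
    deidentified_rows = [
        {"zip": "58202", "birth_year": 1998, "sex": "F", "diagnosis": "asthma"},
        {"zip": "58201", "birth_year": 1997, "sex": "M", "diagnosis": "diabetes"},
    ]

    # Publicly available information (e.g., a voter roll or social media profile).
    public_record = {"name": "Jane Doe", "zip": "58202", "birth_year": 1998, "sex": "F"}

    for row in deidentified_rows:
        # A row matching on all quasi-identifiers is likely the same person.
        if all(row[k] == public_record[k] for k in ("zip", "birth_year", "sex")):
            print(f"{public_record['name']} likely has: {row['diagnosis']}")

This is why removing direct identifiers alone is not sufficient: quasi-identifiers must also be generalized or suppressed.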

Tips to mitigate privacy and security issues:

  • Obtain informed consent from participants.
  • Remove personally identifiable information (PII) through anonymization and de-identification (see the sketch after this list).
  • Encrypt data at rest and in transit.
  • Use secure data storage and transmission practices.
  • Review the AI company’s data usage policies.
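
To make the de-identification and encryption tips concrete, here is a minimal Python sketch. It is illustrative only: the regular expressions cover just a few common PII patterns (a real project should follow a recognized standard such as the HIPAA Safe Harbor method), and the encryption step assumes the third-party cryptography package.

    import re
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    # Illustrative PII patterns only -- NOT a complete de-identification standard.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def deidentify(text):
        """Replace matched PII with labeled placeholders."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
        return text

    def encrypt_record(record, key):
        """Encrypt a de-identified record before storage or transmission."""
        return Fernet(key).encrypt(record.encode("utf-8"))

    key = Fernet.generate_key()  # store the key separately from the data
    note = "Contact participant at jane.doe@example.com or 555-123-4567."
    token = encrypt_record(deidentify(note), key)
    print(Fernet(key).decrypt(token).decode("utf-8"))

Fernet provides authenticated symmetric encryption, so the same key is needed to decrypt; in practice, keys should be managed separately from the data they protect.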

AI Regulation

Currently, most AI is not regulated by the United States government. The Food and Drug Administration does regulate medical devices, but that is the extent of its AI-regulation mandate.

On October 30, 2023, President Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence; around the same time, the first-ever international AI Safety Summit was held in the UK, with political and technology leaders from around the world in attendance.

The US White House had previously produced a “Blueprint for an AI Bill of Rights,” which identifies five principles to "help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs."

On May 16, 2023, OpenAI CEO Sam Altman testified at a United States Senate Judiciary subcommittee hearing on "...the state of AI development and some of the concerns its usage had, especially regarding the job market and the spread of election disinformation." Following this testimony, Senator Richard Blumenthal (D-CT), the subcommittee’s chair, spoke to reporters about potential steps to regulate AI at the federal level.

Prior to these events, and even prior to the public release of ChatGPT, the United Nations Educational, Scientific and Cultural Organization (UNESCO) released its Recommendation on the Ethics of Artificial Intelligence.

Globally, other governments have already moved to regulate AI:

Other institutional responses:

AI companies also have their own guidelines to help protect users (and themselves). For example, OpenAI states: "Consumer-facing uses of our models in medical, financial, and legal industries; in news generation or news summarization; and where else warranted, must provide a disclaimer to users informing them that AI is being used and of its potential limitations."

Finally, as researchers, you will follow the regulations of Institutional Review Boards (IRBs), which oversee research involving human subjects.

Peer Review

One of the final steps in scholarly publication, and frequently a challenging one, is peer review. Researchers have recently noted that their work has waited up to two years to complete the peer review process, and this delay has some looking to AI for help. While this is a very new aspect of publication, it has received considerable attention lately.

Ultimately, the decision will likely be made by the publishers and editors, who have already begun to draft guidelines on AI and peer review. In July of 2023, the NIH clearly stated that “Using AI in Peer Review is a Breach of Confidentiality,” and Elsevier's Publishing Ethics Policies include statements on the use of generative AI and AI-assisted technologies in both the journal editorial process and the journal peer review process. Additionally, several editors of biomedical and humanities journals published the “Editors' Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing” in September of 2023.

Additional information on AI in publishing can be found on the Chester Fritz Library AI Research Guide.