Artificial Intelligence

A guide to artificial intelligence in medicine and health sciences education

Privacy and data collection

How different AI systems use the data and information users enter into them is not transparent.

On its FAQ page, OpenAI says of ChatGPT:

"Will you use my conversations for training?

  • Yes. Your conversations may be reviewed by our AI trainers to improve our systems."

In April 2023, OpenAI announced new privacy controls that make it possible to delete your chat history and turn off conversation logging in ChatGPT via the application's settings. Even so, like many other websites, ChatGPT still collects personal information for a variety of purposes and shares it with third parties, as its privacy policy indicates.

Also in April 2023, "...Italy became the first country to block ChatGPT due to privacy related concerns. The country’s data protection authority said that there was no legal basis for the collection and storage of personal data used to train ChatGPT. The authority also raised ethical concerns around the tool’s inability to determine a user’s age, meaning minors may be exposed to age-inappropriate responses. This example highlights wider issues relating to what data is being collected, by whom, and how it is applied in AI." (UNESCO)

Refrain from inputting any sensitive or personal information into any kind of AI or chatbot system.

Cognitive bias and discrimination

Because it depends on datasets curated and selected by humans, artificial intelligence is rife with cognitive bias:

  • Consultant Leon Furze offers strategies for teaching AI ethics in his resource Teaching AI Ethics
  • ProPublica found that Black defendants were nearly twice as likely as white defendants to be misclassified as high risk by COMPAS, a recidivism-prediction algorithm used by courts (Larson 2016); a sketch of the underlying calculation follows this list
  • Amazon shut down a recruitment AI tool it had developed because it was consistently discriminating against female applicants. (Hamilton 2018)
  • Galactica, an LLM similar to ChatGPT trained on 46 million text examples, was shut down by Meta after three days because it spewed "false and racist information." (Getahun 2023)
  • An algorithm widely used in hospitals recommended that Black patients receive less medical care than their white counterparts. (Obermeyer 2019)
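
To make the COMPAS finding above concrete: "misclassified" refers to false positives, people who were flagged as high risk but did not go on to reoffend, measured separately for each group. The Python sketch below is not ProPublica's code, and the counts are illustrative placeholders, chosen only to echo the roughly two-to-one disparity in false positive rates that the investigation reported.

```python
# Group-wise false positive rate: the share of people who did NOT
# reoffend but were still labeled high risk by the tool.
# All counts below are hypothetical, for illustration only.

def false_positive_rate(flagged_non_reoffenders: int, non_reoffenders: int) -> float:
    """Fraction of non-reoffenders wrongly labeled high risk."""
    return flagged_non_reoffenders / non_reoffenders

# Illustrative counts per group (not real COMPAS data):
groups = {
    "group A": {"flagged_non_reoffenders": 45, "non_reoffenders": 100},
    "group B": {"flagged_non_reoffenders": 23, "non_reoffenders": 100},
}

for name, counts in groups.items():
    print(f"{name}: false positive rate = {false_positive_rate(**counts):.0%}")

# group A: false positive rate = 45%
# group B: false positive rate = 23%
```

A tool can have similar overall accuracy for both groups and still wrongly flag one group at twice the rate of the other, which is why audits examine error rates per group rather than overall accuracy alone.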

"It is important to note that ChatGPT is not governed by ethical principles and cannot distinguish between right and wrong, true and false. This tool only collects information from the databases and texts it processes on the internet, so it also learns any cognitive bias found in that information. It is therefore essential to critically analyse the results it provides and compare them with other sources of information." (UNESCO)

Accessibility

There are two main concerns around the accessibility of ChatGPT.

  • The first is the lack of availability of the tool in some countries due to government regulations, censorship, or other restrictions on the internet.
    • ChatGPT is currently banned in China, Russia, North Korea, Iran, and other countries. (Browne 2023)
  • The second concern relates to broader issues of access and equity: the uneven global distribution of internet availability, cost, and speed. In connection, "teaching and research/development on AI has also not been evenly spread around the world, with some regions far less likely to have been able to develop knowledge or resources on this topic" (UNESCO)

AI as assistive technology

  • "On the one hand, ChatGPT could be used as assistive technology by the millions of people with a communication disability or difficulty. On the other hand, the widespread use of this technology, and the perception of algorithmic objectivity, could create a standard of “correct English” that further marginalizes and stigmatizes alternative modes of communication." (Ciurra 2023)

Commercialization

ChatGPT was created by a private for-profit company, OpenAI.

  • the free version of ChatGPT runs on the older GPT-3.5 model
  • the company charges subscribers $20 a month for ChatGPT Plus, which uses the newer GPT-4 model and can browse the internet to search for answers
  • both versions of ChatGPT retain the data users enter

OpenAI paid Kenyan workers less than $2 an hour to screen toxic content out of ChatGPT and make it safer for users. (Perrigo 2023)

Lack of regulation

Currently, most AI is not regulated by the United States government. The Food and Drug Administration does regulate medical devices, but that is the extent of its AI-regulation mandate.

The US White House has produced:

  • an "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence"
    • "...which seeks to lessen the harm posed by deepfakes and also require "that companies report to the federal government about the risks that their systems could aid countries or terrorists to make weapons of mass destruction" (Kang, Sanger, 2023 NY Times).
    • The order also:
      • "Directs the National Science Foundation to pilot the National AI Research Resource (NAIRR) to explore the infrastructure, governance mechanisms, and user interfaces needed to make distributed computational, data, model, and training resources available to the research community in support of AI-related research and development.
      • Directs the U.S. Patent & Trademark Office to consult with the Copyright Office to issue recommendations to the President on potential executive actions on copyright and AI, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training. 
      • Directs the Department of Education to develop resources, policy, and guidance on the use of AI including an “AI toolkit.”
      • Directs the Office of Management and Budget to identify and assess the government’s collection and use of personally identifiable information from third-party data brokers. 
      • Directs federal agencies to promote competition in AI including addressing concentrated control of inputs in AI models and encourages the Federal Trade Commission (FTC) to ensure fair competition in the AI marketplace."
        • the above bulleted information is excerpted from a 10/31/2023 email from SPARC
  • a “Blueprint for an AI Bill of Rights”, which identifies 5 principles to "help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs."
  • via the Office of Management and Budget, "a draft policy for the use of AI by the U.S. government"

On May 16, 2023, OpenAI CEO Sam Altman testified at a United States Senate Judiciary subcommittee hearing on "...the state of AI development and some of the concerns its usage had, especially regarding the job market and the spread of election disinformation." Following this testimony, Senator Richard Blumenthal (D-CT), the subcommittee’s chair, talked to reporters about potential steps to regulate AI at the federal level.

Globally, other governments have already moved to regulate AI:

Other institutional responses:

AI and intellectual property

What does it mean to cite an AI that was trained on work created by humans, possibly without their permission?

Some of the data used to train AI models is copyrighted, and the original creators never gave permission for this reuse.

"In a case filed in late 2022, Andersen v. Stability AI et al., three artists formed a class to sue multiple generative AI platforms on the basis of the AI using their original works without license to train their AI in their styles, allowing users to generate works that may be insufficiently transformative from their existing, protected works, and, as a result, would be unauthorized derivative works. If a court finds that the AI’s works are unauthorized and derivative, substantial infringement penalties can apply." (Appel, Neelbauer, and Schweidel 2023)

"Getty, an image licensing service, filed a lawsuit against the creators of Stable Diffusion alleging the improper use of its photos, both violating copyright and trademark rights it has in its watermarked photograph collection." (Appel, Neelbauer, and Schweidel 2023)

"In each of these cases, the legal system is being asked to clarify the bounds of what is a “derivative work” under intellectual property laws — and depending upon the jurisdiction, different federal circuit courts may respond with different interpretations. The outcome of these cases is expected to hinge on the interpretation of the fair use doctrine, which allows copyrighted work to be used without the owner’s permission “for purposes such as criticism (including satire), comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research,” and for a transformative use of the copyrighted material in a manner for which it was not intended." (Appel, Neelbauer, and Schweidel 2023)

Academic journal publishers appear to overwhelmingly prohibit AI co-authorship:

  • Springer Nature, publisher of Nature: "Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript."

Resources

Appel, G., Neelbauer, J., & Schweidel, D. A. (2023, April 7). Generative AI Has an Intellectual Property Problem. Harvard Business Review.

Edwards, B. (2023, February 23). AI-generated comic artwork loses US Copyright protection. Ars Technica.

Zirpoli, C. T. (2023, May 11). Generative Artificial Intelligence and Copyright Law (LSB10922). Congressional Research Service, United States Congress.

Grant, D. (2023, May 5). New US copyright rules protect only AI art with ‘human authorship.’ The Art Newspaper - International Art News and Events.

United States Copyright Office webpage on "Copyright and Artificial Intelligence"

Vincent, J. (2022, November 15). The scary truth about AI copyright is nobody knows what will happen next. The Verge.

Readings on ethical issues, environmental and human costs:

Ciurria, M. (2023, March 30). Ableism and ChatGPT: Why People Fear It Versus Why They Should Fear It. Blog of the APA.

Marks, A. (2023, January 18). Bestiality and Beyond: ChatGTP Works Because Underpaid Workers Read About Horrible Things. Rolling Stone.

Luccioni, S. (2023, April 12). The mounting human and environmental costs of generative AI. Ars Technica.

Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time.

Wong, M. (2023, June 2). AI Doomerism Is a Decoy. The Atlantic.

Readings on bias in AI

Getahun, H. (n.d.). ChatGPT could be used for good, but like many other AI models, it’s rife with racist and discriminatory bias. Insider. Retrieved July 13, 2023

Hamilton, I. A. (n.d.). Why it’s totally unsurprising that Amazon’s recruitment AI was biased against women. Business Insider. Retrieved July 13, 2023

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779.

Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016, May 23). How We Analyzed the COMPAS Recidivism Algorithm. ProPublica. Retrieved July 13, 2023

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.

Palmer, K. (2022, April 6). ‘We need to be much more diverse’: More than half of data used in health care AI comes from the U.S. and China. STAT.

Why Meta’s latest large language model survived only three days online. (n.d.). MIT Technology Review. Retrieved July 13, 2023

Readings on why ChatGPT lies:

Smith, C. (2023, March 13). Hallucinations Could Blunt ChatGPT’s Success. IEEE Spectrum.