Do you have comments or questions?
This guide to artificial intelligence is very much a work in progress, and we would love to hear your thoughts on other resources we can include or issues to be addressed!
Fill out this form to share your feedback, comments, and/or questions with the library.
The ways different AI systems use the data and information entered by users are not transparent.
On their FAQ page, OpenAI says of ChatGPT:
"Will you use my conversations for training?
Yes. Your conversations may be reviewed by our AI trainers to improve our systems."
In April 2023, OpenAI announced new privacy controls, making it possible to delete your chat history and turn off conversation logging in ChatGPT via the application's settings. However, ChatGPT's privacy policy still indicates that, like many other websites, it collects personal information for a variety of reasons and shares that information with third parties.
Also in April 2023, "...Italy became the first country to block ChatGPT due to privacy related concerns. The country’s data protection authority said that there was no legal basis for the collection and storage of personal data used to train ChatGPT. The authority also raised ethical concerns around the tool’s inability to determine a user’s age, meaning minors may be exposed to age-inappropriate responses. This example highlights wider issues relating to what data is being collected, by whom, and how it is applied in AI." (UNESCO)
Refrain from inputting any sensitive or personal information into any kind of AI or chatbot system.
Because artificial intelligence depends on human-curated and human-selected datasets, it is rife with cognitive bias:
"It is important to note that ChatGPT is not governed by ethical principles and cannot distinguish between right and wrong, true and false. This tool only collects information from the databases and texts it processes on the internet, so it also learns any cognitive bias found in that information. It is therefore essential to critically analyse the results it provides and compare them with other sources of information." (UNESCO)
There are two main concerns around the accessibility of ChatGPT.
AI as assistive tech:
ChatGPT was created by a private for-profit company, OpenAI.
Currently, most AI is not regulated by the United States Government. The Food and Drug Administration does regulate medical devices, but that is the extent of its AI-regulation mandate.
The US White House has produced:
On May 16, 2023, OpenAI CEO Sam Altman testified at a United States Senate Judiciary subcommittee hearing on "...the state of AI development and some of the concerns its usage had, especially regarding the job market and the spread of election disinformation." Following this testimony, Senator Richard Blumenthal (D-CT), the subcommittee’s chair, talked to reporters about potential steps to regulate AI at the federal level.
Globally, other governments have already moved to regulate AI:
Other institutional responses:
Some of the data used to train AI models is copyrighted, and the original creators never gave permission for this reuse.
"In a case filed in late 2022, Andersen v. Stability AI et al., three artists formed a class to sue multiple generative AI platforms on the basis of the AI using their original works without license to train their AI in their styles, allowing users to generate works that may be insufficiently transformative from their existing, protected works, and, as a result, would be unauthorized derivative works. If a court finds that the AI’s works are unauthorized and derivative, substantial infringement penalties can apply." (Appel, Neelbauer, and Schweidel 2023)
"Getty, an image licensing service, filed a lawsuit against the creators of Stable Diffusion alleging the improper use of its photos, both violating copyright and trademark rights it has in its watermarked photograph collection." (Appel, Neelbauer, and Schweidel 2023)
"In each of these cases, the legal system is being asked to clarify the bounds of what is a “derivative work” under intellectual property laws — and depending upon the jurisdiction, different federal circuit courts may respond with different interpretations. The outcome of these cases is expected to hinge on the interpretation of the fair use doctrine, which allows copyrighted work to be used without the owner’s permission “for purposes such as criticism (including satire), comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research,” and for a transformative use of the copyrighted material in a manner for which it was not intended." (Appel, Neelbauer, and Schweidel 2023)
Appel, G., Neelbauer, J., & Schweidel, D. A. (2023, April 7). Generative AI Has an Intellectual Property Problem. Harvard Business Review.
Edwards, B. (2023, February 23). AI-generated comic artwork loses US Copyright protection. Ars Technica.
Zirpoli, C. T. (2023, May 11). Generative Artificial Intelligence and Copyright Law (LSB10922). Congressional Research Service, United States Congress.
Grant, D. (2023, May 5). New US copyright rules protect only AI art with ‘human authorship.’ The Art Newspaper - International Art News and Events.
United States Copyright Office webpage on "Copyright and Artificial Intelligence"
Vincent, J. (2022, November 15). The scary truth about AI copyright is nobody knows what will happen next. The Verge.
Ciurria, M. (2023, March 30). Ableism and ChatGPT: Why People Fear It Versus Why They Should Fear It. Blog of the APA.
Marks, A. (2023, January 18). Bestiality and Beyond: ChatGPT Works Because Underpaid Workers Read About Horrible Things. Rolling Stone.
Luccioni, S. (2023, April 12). The mounting human and environmental costs of generative AI. Ars Technica.
Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time.
Wong, M. (2023, June 2). AI Doomerism Is a Decoy. The Atlantic.
Getahun, H. (n.d.). ChatGPT could be used for good, but like many other AI models, it’s rife with racist and discriminatory bias. Insider. Retrieved July 13, 2023
Hamilton, I. A. (n.d.). Why it’s totally unsurprising that Amazon’s recruitment AI was biased against women. Business Insider. Retrieved July 13, 2023
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779.
Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (n.d.). How We Analyzed the COMPAS Recidivism Algorithm. ProPublica. Retrieved July 13, 2023
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
Palmer, K. (2022, April 6). ‘We need to be much more diverse’: More than half of data used in health care AI comes from the U.S. and China. STAT.
Why Meta’s latest large language model survived only three days online. (n.d.). MIT Technology Review. Retrieved July 13, 2023
Smith, C. (2023, March 13). Hallucinations Could Blunt ChatGPT’s Success. IEEE Spectrum.