Do you have comments or questions?
This guide to artificial intelligence is very much a work in progress, and we would love to hear your thoughts on other resources we can include or issues to be addressed!
Fill out this form to share your feedback, comments, and/or questions with the library.
See UND's Code of Student Life for guidance on the use of AI or re-use of AI-generated content (section II.B specifically covers "cheating" and plagiarism).
Refer to the ethical guidelines of your professional organization.
When publishing, look for journal policies on use of AI or inclusion of AI as a co-author. Most journals prohibit AI as a co-author.
In general, you must declare when and how you use the technology in your writing, but there isn't yet consensus on how to do so:
APA has issued concrete guidelines for how to cite chatbots as tools, which could be confused with citing them as sources. The International Committee of Medical Journal Editors (ICMJE) and Elsevier, on the other hand, clearly advise against citing chatbots.
It is important to remember that content generated by AI tools may be inaccurate and may draw on copyrighted material, as the sections below explain.
The following are some current recommendations, although they will continue to evolve.
Currently, APA recommends that text generated from AI be formatted as "Personal Communication." As such, it receives an in-text citation but not an entry on the References list.
Rule: (Communicator, personal communication, Month Date, Year)
(OpenAI, personal communication, January 16, 2023).
When asked to explain psychology's main schools of thought, OpenAI's ChatGPT gave a response that included ... (personal communication, February 22, 2023).
Many AI tools cannot access the internet to look up answers or references. ChatGPT, for example, is a large language model: it predicts which word is likely to follow another in a sequence, based on the frequencies of words in the dataset used to train it. The free version of ChatGPT, GPT-3.5, was trained on pre-2021 data and cannot access the internet, so it cannot look up the answer to a question. In effect, ChatGPT was created to mimic human speech, not to say truthful things.
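To make the idea of "predicting the next word" concrete, here is a toy Python sketch of frequency-based next-word prediction. It is only an illustration with a made-up training sentence; real large language models such as ChatGPT use neural networks trained on enormous datasets and consider much longer context. The key point is the same, though: the model produces likely-sounding text, not verified facts.

from collections import Counter, defaultdict

# Tiny, made-up training text (an assumption for illustration only).
training_text = "the cat sat on the mat and the cat ate the fish"

# Count how often each word follows another in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the word seen most often after `word` in the training text."""
    candidates = follows.get(word)
    if not candidates:
        return None  # the word never appeared in training; there is nothing to "look up"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # prints "cat", the most frequent follower, not a checked fact

Because a model like this only reproduces patterns from its training data, a question about anything outside that data can still receive a fluent, confident, and wrong answer.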
Hallucinations and fabrications are “…mistakes in the generated text that are semantically or syntactically plausible but are in fact incorrect or nonsensical. In short, you can’t trust what the machine is telling you.” (Smith 2023)
Even large language models that can access the internet may fabricate information: see "Why you shouldn’t trust AI search engines" by Melissa Heikkilä, MIT Technology Review, February 14, 2023.
Fact-check any references or details an AI outputs if you intend to reuse the information.
Some of the data used to train AI models is copyrighted, and the original creators never gave permission for this use.
"In a case filed in late 2022, Andersen v. Stability AI et al., three artists formed a class to sue multiple generative AI platforms on the basis of the AI using their original works without license to train their AI in their styles, allowing users to generate works that may be insufficiently transformative from their existing, protected works, and, as a result, would be unauthorized derivative works. If a court finds that the AI’s works are unauthorized and derivative, substantial infringement penalties can apply." (Appel, Neelbauer, and Schweidel 2023)
"Getty, an image licensing service, filed a lawsuit against the creators of Stable Diffusion alleging the improper use of its photos, both violating copyright and trademark rights it has in its watermarked photograph collection." (Appel, Neelbauer, and Schweidel 2023)
"In each of these cases, the legal system is being asked to clarify the bounds of what is a “derivative work” under intellectual property laws — and depending upon the jurisdiction, different federal circuit courts may respond with different interpretations. The outcome of these cases is expected to hinge on the interpretation of the fair use doctrine, which allows copyrighted work to be used without the owner’s permission “for purposes such as criticism (including satire), comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research,” and for a transformative use of the copyrighted material in a manner for which it was not intended." (Appel, Neelbauer, and Schweidel 2023)
Appel, G., Neelbauer, J., & Schweidel, D. A. (2023, April 7). Generative AI Has an Intellectual Property Problem. Harvard Business Review.
Edwards, B. (2023, February 23). AI-generated comic artwork loses US Copyright protection. Ars Technica.
Zirpoli, C. T. (2023, May 11). Generative Artificial Intelligence and Copyright Law (LSB10922). Congressional Research Service, United States Congress.
Grant, D. (2023, May 5). New US copyright rules protect only AI art with ‘human authorship.’ The Art Newspaper - International Art News and Events.
United States Copyright Office. Copyright and Artificial Intelligence [webpage].
Vincent, J. (2022, November 15). The scary truth about AI copyright is nobody knows what will happen next. The Verge.