
Artificial Intelligence

A guide to artificial intelligence in medicine and health sciences education

Anticipating and responding to student use of AI

Take-home points:

Invite students into the conversation. Talk to the students early about your academic integrity expectations and provide examples of acceptable and unacceptable work. (D'Agostino 2023)

  • Syllabus Tips. This article includes topics to consider when updating a class syllabus along with ideas for assignments to incorporate AI.
    Boris Steipe (2023) “Syllabus Resources”. The Sentient Syllabus Project http://sentientsyllabus.org
    • Sections include "Assessing Materials," "Assessing Performance," "Academic Integrity," "Specific Assessment Types," and "Example Statement on Academic Integrity."

Ask questions to promote discussion and thought, such as:

  • Why is academic integrity important?
  • What is the purpose of higher education if you don't learn?

Get to know your students' voices. Detecting AI-produced work is easier if you know what your students' true writing voices sound like. Always consider a student's writing history and the broader context of the assignment before making a decision. When plagiarism is suspected, talking to the student individually is the easiest first step to addressing the problem.

Use prompts that ask for personal experiences. Large language models like ChatGPT can fabricate plausible-sounding personal experiences, but they lack the specific, verifiable details of a real one.

Reconsider formal language use as a learning outcome. Large language models like ChatGPT excel at mimicking boilerplate or highly formalized language because it is so rules-based and standardized. Students unfamiliar with academic language conventions may be tempted to use ChatGPT as a tool if they're asked to write in a voice that feels unnatural or foreign to them.

Run your assignment through ChatGPT. If you assign a task that can be solved by ChatGPT/ other generative AI, run it through ChatGPT first. Review the answer you receive, and tell your students about your experience (and that you’ve saved the output). ChatGPT does not produce the same answer each time the same question is posed, but the outputs may still be fairly similar.

Make plagiarizing difficult. Use some of the assignment design strategies suggested above and/or contact UND instructional designers at TTADA for help creating assignments that discourage use of a chatbot.

Some of the material above is adapted from the Montclair State University Office for Faculty Excellence.


Data on ChatGPT use is available from the Pew Research Center.

AI and intellectual property

What does it mean to cite an AI that was trained on work created by humans? Possibly without permission?

Some of the data used to train AI models was copyrighted, and the original creators never gave permission for this reuse.

"In a case filed in late 2022, Andersen v. Stability AI et al., three artists formed a class to sue multiple generative AI platforms on the basis of the AI using their original works without license to train their AI in their styles, allowing users to generate works that may be insufficiently transformative from their existing, protected works, and, as a result, would be unauthorized derivative works. If a court finds that the AI’s works are unauthorized and derivative, substantial infringement penalties can apply." (Appel, Neelbauer, and Schweidel 2023)

"Getty, an image licensing service, filed a lawsuit against the creators of Stable Diffusion alleging the improper use of its photos, both violating copyright and trademark rights it has in its watermarked photograph collection." (Appel, Neelbauer, and Schweidel 2023)

"In each of these cases, the legal system is being asked to clarify the bounds of what is a “derivative work” under intellectual property laws — and depending upon the jurisdiction, different federal circuit courts may respond with different interpretations. The outcome of these cases is expected to hinge on the interpretation of the fair use doctrine, which allows copyrighted work to be used without the owner’s permission “for purposes such as criticism (including satire), comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research,” and for a transformative use of the copyrighted material in a manner for which it was not intended." (Appel, Neelbauer, and Schweidel 2023)

And academic journal publishers seem to overwhelmingly prohibit AI co-authorship:

  • Springer Nature: “Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria... authorship carries with it accountability for the work.”
  • Nature: "Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript."

Resources

Appel, G., Neelbauer, J., & Schweidel, D. A. (2023, April 7). Generative AI Has an Intellectual Property Problem. Harvard Business Review.

Edwards, B. (2023, February 23). AI-generated comic artwork loses US Copyright protection. Ars Technica.

Zirpoli, C. T. (2023, May 11). Generative Artificial Intelligence and Copyright Law (LSB10922). Congressional Research Service, United States Congress.

Grant, D. (2023, May 5). New US copyright rules protect only AI art with ‘human authorship.’ The Art Newspaper - International Art News and Events.

United States Copyright Office webpage on "Copyright and Artificial Intelligence"

Vincent, J. (2022, November 15). The scary truth about AI copyright is nobody knows what will happen next. The Verge.

How do you cite AI?

Citation style guides have started to address the use of large language models and AI tools such as ChatGPT in writing and research. In general, you must declare when and how you used the technology in your writing, but there isn't yet consensus on how to do so.

It is important to remember that content generated by AI tools:

  • was trained on data which might have been copyrighted, without the permission of the copyright holder
  • is usually nonrecoverable, so it cannot be retrieved later or linked in your citation.

The following are some current recommendations, although they will continue to evolve.

APA

Currently, APA recommends that text generated from AI be formatted as "Personal Communication." As such, it receives an in-text citation but not an entry on the References list.

Rule: (Communicator, personal communication, Month Date, Year)

Examples: 

(OpenAI, personal communication, January 16, 2023).

When asked to explain psychology's main schools of thought, OpenAI's ChatGPT's response included ... (personal communication, February 22, 2023).

Resources:

APA Blog entry "How to cite ChatGPT", Timothy McAdoo 2023

Can you detect AI use?

AI Detectors mostly do not work

AI detectors have a low degree of accuracy. OpenAI's own AI detector had an accuracy rate of just 26% in January 2023 (Deziel 2023). And in August 2023, OpenAI stated publicly in the FAQ on its website that AI detectors do not work (Edwards 2023).

Text from OpenAI's FAQ for Educators:

Do AI detectors work?

  • In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.

  • Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this [essay]?” or “could this have been written by AI?” These responses are random and have no basis in fact.

  • To elaborate on our research into the shortcomings of detectors, one of our key findings was that these tools sometimes suggest that human-written content was generated by AI.

    • When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.

    • There were also indications that it could disproportionately impact students who had learned or were learning English as a second language and students whose writing was particularly formulaic or concise.

  • Even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.

Perplexity and Burstiness

Perplexity and burstiness are two qualities of writing that correlate with a human author, and that AI text often lacks: "Perplexity measures how complex a text is, while burstiness compares the variation between sentences. The lower the values for these two factors, the more likely it is that a text was produced by an AI." (Deziel 2023).
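The burstiness idea can be illustrated with a small sketch. The function below is a hypothetical, simplified proxy (not how commercial detectors actually compute burstiness): it measures only the variation in sentence lengths, using just the Python standard library. It shows why uniformly sized sentences score low on this measure, and it also shows how crude such a signal is — which is part of why these detectors are unreliable.

```python
import re
import statistics

def burstiness_proxy(text: str) -> float:
    """Crude burstiness proxy: variation in sentence lengths.

    Splits the text on sentence-ending punctuation and returns the
    standard deviation of sentence lengths (in words). Higher values
    indicate more variation between sentences, a trait associated
    with human writing. This is an illustration, not a real detector.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Three sentences of identical length: zero variation.
uniform = "The cat sat here. The dog sat there. The bird sat up."
# A one-word sentence next to a long one: high variation.
varied = ("Stop. The meeting, which had dragged on for three hours "
          "without resolution, finally ended. Everyone left quickly.")

print(burstiness_proxy(uniform))                               # prints 0.0
print(burstiness_proxy(uniform) < burstiness_proxy(varied))    # prints True
```

Note that a concise human writer (or a student writing in formulaic academic prose) would also score low on a measure like this, echoing OpenAI's finding above that detectors can disproportionately flag formulaic or concise writing.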

AI in higher education readings

Alimardani, A., & Jane, E. A. (2023, February 19). We pitted ChatGPT against tools for detecting AI-written text, and the results are troubling. The Conversation.

Bartlett, T. (2023, July 7). A Study Found That AI Could Ace MIT. Three MIT Students Beg to Differ. The Chronicle of Higher Education.

Caines, A. (2022, December 30). ChatGPT and Good Intentions in Higher Ed. Is a Liminal Space.

Edwards, B. (2023, September 8). OpenAI confirms that AI writing detectors don’t work. Ars Technica.

Fowler, G. A. (2023, April 14). Analysis | We tested a new ChatGPT-detector for teachers. It flagged an innocent student. Washington Post.

Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching. (2023). Generative Artificial Intelligence in Teaching and Learning at McMaster University. McMaster University.

Mikeladze, T., Meijer, P. C., & Verhoeff, R. P. (2024). European Journal of Education, n/a(n/a), e12663. https://doi.org/10.1111/ejed.12663

(2023, July 31). Advice | Should You Add an AI Policy to Your Syllabus? The Chronicle of Higher Education.

Teng, M., Singla, R., Yau, O., Lamoureux, D., Gupta, A., Hu, Z., Hu, R., Aissiou, A., Eaton, S., Hamm, C., Hu, S., Kelly, D., MacMillan, K. M., Malik, S., Mazzoli, V., Teng, Y.-W., Laricheva, M., Jarus, T., & Field, T. S. (2022). Health Care Students’ Perspectives on Artificial Intelligence: Countrywide Survey in Canada. JMIR Medical Education, 8(1), e33390.

United Nations Educational, Scientific and Cultural Organization. (2023). ChatGPT, artificial intelligence and higher education: What do higher education institutions need to know? UNESCO-IESALC.

U.S. Department of Education, Office of Educational Technology, Artificial Intelligence and Future of Teaching and Learning: Insights and Recommendations, Washington, DC, 2023.