Generative Artificial Intelligence

Kristin Shingler

Generative Artificial Intelligence (genAI)

This chapter will focus on generative artificial intelligence (genAI). The content will be broken down into three parts:

  1. What is artificial intelligence?
  2. Generative AI Deep Dive
  3. Generative AI in Education

Each section will build on the previous one, with its own learning objectives and knowledge checks. Feel free to revisit each section as needed!

I. What is artificial intelligence?

Learning Objectives

After reading this section you will be able to:

  • Define the phrase artificial intelligence and identify technologies that meet this definition
  • List historical usages of artificial intelligence
  • Identify current uses of artificial intelligence in the field of dentistry

 

The term “artificial intelligence” was coined by John McCarthy in the mid-1950s in connection with a workshop at Dartmouth College. This phrase encompasses the science and engineering technologies that explore and develop computational understanding of intelligent behavior [1]. The Merriam-Webster dictionary simplifies this definition to “the capability of computer systems or algorithms to imitate intelligent human behavior.”[2] If we apply this definition throughout history, we can find technological advances that pre-date the usage of the term artificial intelligence, and since its advent, innovations in artificial intelligence have continued to grow.

 

Timeline:

  • 1936 – Alan Turing describes the Universal Machine, a theoretical general-purpose computer
  • 1955 – Allen Newell, John Clifford Shaw, and Herbert Simon engineer the program Logic Theorist to mimic human problem-solving skills
  • 1955 – “Artificial Intelligence” coined
  • 1957 – Frank Rosenblatt’s “The Perceptron” artificial neural network
  • 1964 – Chatbot ELIZA by Joseph Weizenbaum
  • 1997 – Garry Kasparov defeated by IBM’s Deep Blue (chess)
  • 2011 – Ken Jennings defeated by IBM Watson (Jeopardy)
  • 2012 – Geoffrey Hinton and his team win the ImageNet competition with an image classification system built on deep learning
  • 2017 – The Transformer model[3] is introduced, enabling modern natural language processing (e.g., chatbots)
  • 2019 – OpenAI creates GPT-2 using deep neural networks and achieves human-like language processing
  • 2022 – OpenAI releases ChatGPT, a model that improves through reinforcement learning from human-provided feedback

Medicine and Dentistry Timeline:

  • 1971 – A ranking algorithm called INTERNIST-1 is developed to help reach diagnoses
  • 1976 – MYCIN is used to suggest antibiotic treatments for infections with bacterial pathogens using a “backward chaining” artificial intelligence approach
  • 1976 – Gunn’s investigation of acute abdominal pain diagnosis using computer analysis [4]
  • 1986 – DXplain, a program that generated diagnoses for 500 diseases based on input symptoms, is released by Massachusetts General Hospital (it now contains >2,600 conditions)
  • 1990s – Computer-aided design and manufacturing (CAD/CAM) systems are used to design models of human teeth and create 3D models of dental crowns based on a person’s remaining dentition
  • 1991 – Pathology reports are created with ~95% diagnostic accuracy using the Pathology Expert Interpretative Reporting System
  • 2002 – Fuzzy logic in cancer diagnosis [5]
  • 2007 – Dental radiographs are first analyzed to find tooth decay using machine learning
  • 2017 – IBM’s Watson was used by neurologists to identify RNA-binding proteins that are changed in ALS patients.
  • 2019 – AI-powered devices for diagnoses of cancer are approved by the FDA
  • 2022 – The FDA authorized 91 AI-powered healthcare devices in this year alone.

From the timelines above, we can see that different algorithms and theories have been used to create artificial intelligence programs and devices over time. We can further classify artificial intelligence based on its programming and capabilities. The most basic division of AI systems categorizes them as strong or weak. While strong AI works to reach decisions in multiple fields based on a multi-task algorithm, weak AI focuses on solving a specific task. Currently, no true strong AI applications are available, and their development raises concerns about ethics and safety. Weak AI applications, also referred to as narrow AI programs, are common and include facial recognition programs, chatbots, and social media content recommendations.

We can see that the area of weak or narrow artificial intelligence is still quite broad. These programs are divided into expert-based systems and machine learning (ML). Expert-based systems rely on input situations and solutions that have been supplied by humans, and thus reflect the human decision-making process.[6] This is in contrast to ML, where the algorithm reaches a solution by learning patterns from its training data[7]. ML is divided into further subsets that include deep learning and several types of neural networks, including artificial neural networks, convolutional neural networks, and generative adversarial networks. The details of deep learning and neural networks are beyond the scope of this introduction, so they will not be covered here. Furthermore, all ML methods can be classified as supervised, semi-supervised, or unsupervised based on the extent to which their training datasets are labeled. Research in ML and its subdivisions is ongoing and intensive, leading to the many AI innovations that we see released and integrated into our daily lives.

 

 

II. Generative AI Deep Dive

Learning Objectives

After reading this section you will be able to:

  • Define key terms related to generative and predictive artificial intelligence, including “generative AI,” “large language models (LLMs),” and “hallucinations.”
  • Analyze the differences between generative AI and predictive AI, highlighting their functionalities, limitations, and applications.
  • Evaluate the ethical implications of using generative AI tools

In the previous section we discussed different types and subtypes of artificial intelligence. Generative AI (commonly referred to as genAI) is another subtype of artificial intelligence. GenAI programs use machine learning to create new text, images, audio files, and videos based on patterns learned from existing data. This is in contrast to predictive AI programs, which use machine learning to predict or forecast events, text, patterns, and trends based on training data. Predictive AI has been integrated into many parts of our everyday lives, with examples such as auto-completion of email addresses, next-word suggestions when texting, and design template suggestions in programs such as PowerPoint and Canva. Note that these predictions suggest already existing content. The novelty of genAI is that it creates content that was not pre-existing.

So how does generative AI work? Generative AI programs are built on large language models (LLMs). LLMs are complex algorithms designed to respond to prompts by predicting the most likely next word in a sentence. These predictions are possible because the model has been “trained,” meaning it has been fed massive datasets that contain large amounts of text. The text is then analyzed for content (the words it includes) and structure (grammar). Generative AI companies use different datasets to train their LLMs, meaning different programs contain different information and will produce different output. Some common sources of text for training LLMs include websites (typically those scanned by bots that crawl the internet), books (both fiction and non-fiction), scientific papers, news articles, data with training instructions for the LLM, and logs of past conversations between the models and users. In the table below you can find different companies that have released genAI tools, the names of those tools, the name of the database the LLM was trained on, and the most recent training date.
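
To build intuition for what “predicting the most likely next word” means, here is a minimal, purely illustrative Python sketch. It is not how a real LLM works internally (real models use neural networks with billions of learned parameters); it simply counts which word most often follows another word in a tiny sample text and uses that count to make a prediction.

from collections import Counter, defaultdict

# A tiny stand-in for a "training dataset."
training_text = (
    "the patient has a cavity . the patient needs a filling . "
    "the dentist fills the cavity ."
)

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current_word, following_word in zip(words, words[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word):
    # Return the word most frequently seen after `word`, or None if unseen.
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))     # prints "patient" (seen most often after "the")
print(predict_next("cavity"))  # prints "."

A real LLM replaces these simple counts with patterns learned by a neural network from enormous amounts of text, but the core task is the same: given the words so far, choose a likely next word.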

Company | Generative AI Tool | LLM Training Database | LLM Training Date*
OpenAI | ChatGPT | CommonCrawl | January 2022
Microsoft | CoPilot# | ChatGPT 4 + Bing Search Index | January 2022 + Bing Search Results
Google | Gemini | “Publicly accessible sources” and Gemini Apps | July 2023
Anthropic | Claude | Proprietary dataset “AI-2023” | August 2023
HuggingFace | HuggingChat | Llama 3.1 from Meta | December 2023

 

*These LLM training dates are current as of August 2024.

#University of Minnesota students can log into CoPilot using their University email account to gain access to a “Protected” account where their data will not be used for training purposes.

 

 

The list above is not exhaustive, and new genAI tools are constantly being developed and deployed. When you’re looking at a new tool, it’s always a good idea to learn how the model it’s built on is trained and how recently that training occurred. Other considerations when selecting a genAI tool can be found below:

  • Cost – Some companies offer free and paid versions of their genAI programs. There are often differences between versions in the data used, how much you can use the program, and the capabilities offered.
  • Capabilities – Some genAI tools only generate text, while others can be used to generate code, images, music and other audio files, and even movies.
  • Your data – Are you using a genAI program that is training its model based on the conversation you’re having with it? If so, what data do you need to enter, and are there privacy concerns surrounding it?
  • Accuracy and ethics – Two other things to add to your evaluation of new genAI tools are the accuracy of the program and the ethics surrounding it. These concerns and limitations have large impacts on the selection of a genAI tool, and more information is available below.

 

Creating new content with genAI programs is a unique opportunity, and the limits keep expanding. As this new technology evolves and its use expands, it’s imperative to consider the limitations and concerns regarding the use of generative AI programs. Some of these are listed below:

  • Accuracy – Generative AI programs don’t have built-in fact-checkers to validate the output they produce. Because the models they are built on predict the next most likely word, they can generate inaccurate and false information, often referred to as hallucinations. In addition, how recently the model has been trained can impact the output of generative AI programs.
  • Ethics – Many concerns regarding the use of genAI stem from the moral principles that guide individuals’ choices when using these programs. These ethical issues include:
    • Bias – Bias, defined as systematic error in decision-making processes that results in unfair outcomes, enters generative AI from multiple sources and manifests in several ways when these programs produce output. Some examples of bias in generative AI tools include sampling bias, algorithmic bias, confirmation bias, measurement bias, interaction bias, representation bias, and generative bias. A complete discussion of these biases with real-world examples can be found in “Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies.”
    • Access and Equity – Many generative AI programs have both free and paid versions, which can lead to access and equity issues. This is particularly true when the paid version allows access to a more recently trained model, doesn’t limit the number of prompts that can be submitted, or offers more data privacy protections for users. Similarly, a quality internet connection is required to use genAI programs, and this is not available to everyone.
    • Copyright – There are concerns over how the data used to train LLMs was accessed. This question is bound up in copyright law and is the center of much debate and attention. These concerns can also wade into the realm of plagiarism, as many genAI tools don’t cite (or don’t cite properly) the sources of the data used to generate the output provided to users.
    • Academic Integrity – Along the same lines as copyright infringement and plagiarism, many concerns have been expressed about academic integrity violations due to the use of genAI programs. Embedded within this concern are learners copying genAI output for direct submission without fact-checking and editing, the potential loss of critical thinking skills, and a reduced capacity to assess learners’ progress and knowledge in courses.
    • Environmental Impact – The energy demands of generative AI programs are huge, and continue to grow as these programs become more popular and integrated into our daily lives. GenAI use can impact our environment in multiple ways, including a large carbon footprint, increased need for computing power, increased demand for electricity production and consumption, and hastened reduction of already depleted natural resource pools. While there is work being done to create more efficiency in genAI programs and lower their environmental impact, this is still a concern as the use of these programs accelerates. [8]

III. Generative AI in Education

Learning Objectives

After reading this section you will be able to:

  • Determine if genAI use is allowable for a certain class and/or assignment
  • Brainstorm ways to use genAI in your education
  • Identify the components of a well-written genAI prompt

Now that we’ve covered the basics of artificial intelligence and highlighted the utility of generative AI, along with concerns and considerations for uses of those tools, we can shift to how genAI can be used in education. Here’s a general outline of the steps you can take to use genAI for certain school work:

  1. Determine if the use of generative AI is allowed in the course and on the specific assignment you’re completing. When answering this question, a great place to start is the syllabus for the class. The University of Minnesota shares three syllabus statements with faculty that ban the use of genAI, allow genAI use in certain contexts or on certain projects, or fully allow the use of genAI. You should also review the instructions for the specific assignment, and if you’re still unsure whether genAI is allowed, ask your instructor.
  2. Select a generative AI tool to use. You can evaluate these tools based on the table above and on other parameters you might use to assess the benefits of technology tools.
  3. Design your prompt! Consider how you want to interact with the program, what you need from the output, and what information you can and want to share with the tool. You’ll find more information on how to design prompts and ways to engage with the tools below.
  4. Fact-check and edit the output. As we mentioned, generative AI produces output that may be incorrect and doesn’t include the most recent information about many topics. Add to this the fact that directly copying work that isn’t yours (even if it was done by a supercomputer) is a violation of academic integrity, and we’re left with this crucial last step! Take what is given, evaluate it for correctness, add in additional resources, especially those that were written after the LLM of your choice was trained, and add your own thoughts and flair to get the most value out of using a generative AI tool.
  5. Cite your use of generative AI in your work. The University of Minnesota Libraries has put together an informative LibGuide on citing genAI use for you to reference. This guide includes recommendations on how to cite AI tools in several commonly used citation formats with examples and links to other resources.

Are you excited to use genAI when allowed but not sure what to do with it? Take a look below for some fun and helpful options.

GenAI Use Examples

Use a genAI tool to

  • Summarize a document
  • Create a study schedule that incorporates time for self-care
  • Act as a personal tutor
  • Explain a concept in a certain style (some examples are as a Twitter thread, in football terms, as a vocal coach, or as a recipe)
  • Write sample questions about a certain topic to use for studying
  • Draft an email to your classmates or a professor
  • Create images for a presentation
  • Write a mnemonic to remember the stages of the cell cycle (level this one up by asking it to relate to something you enjoy, like Harry Potter or dog breeds)

The list above isn’t exhaustive, but hopefully can inspire you! A good question to ask when wondering how genAI can be utilized for a task is how you can make that AI tool a team member or assistant so that you maximize your time and learning.

Along with deciding how you want to use genAI tools, you have to decide what you want to ask genAI to do for you. This is called prompt design or prompt engineering, and it can take some practice. The general concept is that the more information you share in your prompt, the better the output will be. When you are designing your prompts, be mindful that the information you share may be used to continue to train the model, so nothing private should be shared. This includes personal information, protected health information (PHI), and copyrighted materials, among other things.

In order to design a good genAI prompt, use these steps (a sample prompt follows the list):

  1. Establish your viewpoint (who are you, what is your role)
  2. Establish your audience (who is the content being generated for)
  3. State what background knowledge you have
  4. Include any steps that are needed
  5. State the desired outcome or goal of the prompt
  6. Assess the output
  7. Revise and edit as needed
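
For instance, a hypothetical prompt that follows these steps (the scenario and details are invented purely for illustration) might read: “I am a first-year dental student (viewpoint) creating a study guide for my classmates (audience). We have already covered the stages of the cell cycle in lecture (background). Please write five multiple-choice practice questions about the cell cycle, each with an answer key and a one-sentence explanation (steps and desired outcome).” You would then assess the output against your goal and revise the prompt or edit the response as needed.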

If you’d like to learn more, OpenAI has a resource on prompt engineering.

Key Takeaways

Generative AI is a complex topic. Here are some key takeaways.

  • The use of generative AI and the output it generates should always be carefully considered.
  • Generative AI can be helpful, and knowing more about this technology can allow you to harness its power more effectively.

As a final thought, the use of generative AI can help to streamline many tasks and is often described as a tool to increase efficiency and save time. If you find this to be true for yourself, please use that saved time to take care of yourself, rather than overloading your schedule. We want you to be a whole person, so harness that educational efficiency to promote your own wellbeing!

 


  1. Shapiro SC. Artificial intelligence. In: Shapiro SC (ed) Encyclopedia of Artificial Intelligence, vol. 1, 2nd edn. New York: Wiley, 1992.
  2. “Artificial intelligence.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/artificial%20intelligence. Accessed 26 Jun. 2024.
  3. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. "Attention is All You Need." NIPS 2017.
  4. Gunn AA. The diagnosis of acute abdominal pain with computer analysis. J R Coll Surg Edinb. 1976 May;21(3):170-2. PMID: 781220.
  5. Schneider J, Bitterlich N, Velcovsky HG, Morr H, Katz N, Eigenbrodt E. Fuzzy logic-based tumor-marker profiles improved sensitivity in the diagnosis of lung cancer. Int J Clin Oncol. 2002 Jun;7(3):145-51. doi: 10.1007/s101470200021. PMID: 12109515.
  6. Liebowitz, J., "Expert Systems: A Short Introduction." Engineering Fracture Mechanics. 1995 50(5/6):601-607.
  7. Schmidhuber, J. "Deep learning in neural networks: An overview." Neural Networks 2015 61:85-117.
  8. Bashir, Noman, Priya Donti, James Cuff, Sydney Sroka, Marija Ilic, Vivienne Sze, Christina Delimitrou, and Elsa Olivetti. 2024. “The Climate and Sustainability Implications of Generative AI.” An MIT Exploration of Generative AI, March. https://doi.org/10.21428/e4baedd9.9070dfe7.

License

A Guide for Success at the University of Minnesota School of Dentistry Copyright © 2021 by Kristin Shingler and Shannon Gilligan Wehr. All Rights Reserved.
