
Learn to Use AI

AI is rapidly transforming education, research and work. The best way to adapt is to learn the basics and think of ways to incorporate AI into your workflows. 

This page includes some resources to help you learn more about AI.


What is Generative AI?

What Is AI?

AI stands for Artificial Intelligence. You can think of it as teaching computers to think and reason the way humans can. Just as you learn from experience, AI learns from data – looking at lots of examples and trying to find patterns. For example, if it sees millions of pictures of dogs, it learns what a dog looks like.

You are already acquainted with some forms of AI – most of us have used chatbots, many people use assistants like Siri and Alexa, and smart appliances, ranging from ovens to thermostats, are becoming increasingly common. Another form of AI you have likely used is facial recognition to unlock your phone or electronic device.

“Traditional” AI vs. Generative AI

While the impact of generative AI is just beginning, “traditional” non-generative AI is already widely used:

  • Non-generative AI: Non-generative AI is also called traditional or predictive AI. It is programmed with strict rules or instructions to do specific tasks. For example, traditional AI might be good at recognizing pictures of cats, but it can’t create a new picture on its own. It needs someone to give it the rules or tell it what to do. 
  • Generative AI: The user enters prompts, and the tool can create new things like drawings, stories or even music all by itself. It’s like an artist robot that can think up ideas and make something from scratch. Continuing the same example, generative AI can not only recognize a cat but can also create a new image of a cat that has never existed before, based on the images it learned from.

The rise of generative AI is fueled by three key factors: massive amounts of data available online, the concentrated power of cloud computing, and advancements in neural networks, which mimic the human brain's learning patterns.

Ohio State’s approved generative AI tool is Microsoft Copilot with commercial data protection. Copilot is a chatbot that uses public online data to provide you with information. The main benefit of logging in with your Ohio State credentials to use Copilot with commercial data protection is additional security, which means your conversations aren’t stored and Microsoft can’t access any data from your chat. 

However, to best protect institutional data, it is a best practice to use only S1 (public) or S2 (internal) institutional data in approved AI tools, like Copilot with commercial data protection. S3 (private) and/or S4 (restricted) data can be included in approved AI tools only when necessary for your education, business or research use case. Read more about security guidelines for institutional data in AI on the Office of Technology and Digital Innovation website.


AI Terms*

The terms in this section are classified as referring to traditional AI or generative AI. The usefulness of AI in business is largely due to advances in generative AI, but an understanding of traditional AI terms will be helpful as you learn more about AI, particularly in helping you discern whether a source is referring to traditional or generative AI. Differences between traditional and generative AI are explained on this website's home page.

* New AI terms constantly come to the forefront as existing terms become more relevant and new ones are created. While basic terms are included here, this list is not comprehensive. If you have a term to add to this glossary, contact us at ai@osu.edu.

  • Artificial Intelligence/Machine Learning Models
    • Supervised methods: This learning model is used in traditional AI (not generative AI). It uses labeled data to tell the AI model which targets it should focus on. A few examples of targets include numeric values, membership in a specific class, or whether something is correct or incorrect. The downside to this model is that it is task-specific and does not translate outside of the data provided.
    • Unsupervised methods: This learning model is used in both traditional and generative AI. The model focuses on detecting patterns of characteristics and grouping together similar data, which makes it best suited to clustering data (both approaches are contrasted in the first sketch after this list).
  • Bias: Bias in AI refers to situations where traditional or generative AI systems produce unfair or prejudiced results based on how the model was trained and/or designed. Bias can be higher in more complex models with many variables to track, but using a larger quantity of high-quality data to train the model can lower both bias and variance. There are many types of biases AI users should be aware of, including:
    • Sample Bias: sample is too small or not representative of the population
    • Programmatic Morality Bias: AI doesn’t know right from wrong
    • Ignorance Bias: users blindly believing results
    • Overton Window Bias: AI can struggle to account for how views evolve over time, making it difficult to detect potential controversies
    • Deference Bias: users believe AI has wisdom and are overly trustful of AI outputs
  • Decision Trees: Binary decision trees are used in traditional AI for classification tasks, even when the model includes a large number of features. Decision trees are still one of the most widely used types of AI models, especially when the input features can logically be split into two different parts (e.g., present or not present, A or B, true or false). Decision trees are not inherently generative AI models, as they do not generate new content but rather make decisions based on the patterns learned from the training data (the first sketch after this list uses a decision tree classifier).
  • Hallucinations: Hallucinations are incorrect or misleading results that generative AI models sometimes produce. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.
  • Large Language Model (LLM): An LLM is a type of AI that has been trained on a vast amount of text data. Through this training, LLMs learn to understand and generate language much like humans do. Think of it as a very advanced version of auto-complete on your phone, but instead of predicting just the next word, it can predict entire sentences and paragraphs that make sense in the context of what has been written before. While all LLMs are a form of generative AI, not all generative AI relies on LLMs. 
  • Natural Language Processing (NLP): NLP is a field of AI that focuses on the interaction between computers and humans through natural language. The ultimate objective of NLP is to enable computers to understand, interpret and generate human language in a valuable way – to determine intent, sentiment and meaning. It involves various tasks such as translating languages, responding to spoken commands and summarizing large volumes of text. Essentially, NLP allows machines to read, decipher, understand and make sense of the human languages in a manner that is both meaningful and useful. NLP is used with traditional and generative AI systems. 
  • Neural Networks: Inspired by the structure and function of the human brain, neural networks are a powerful machine learning method used in both traditional and generative AI. These networks consist of interconnected processing units, mimicking simplified neurons, organized in layers. This architecture allows them to excel at pattern recognition and learning from data. All data inputs into a neural network are converted into numerical representations (vectors) for the network to process. This allows them not only to group unlabeled data according to similarities among the inputs, but also to classify data when they have a labeled dataset to train on.
  • Overfitting: In AI, overfitting is when a model learns the training data so well, including the random noise, that it can’t make accurate predictions on new, unseen data. It’s like the model is so focused on the details of the examples it was trained on that it can’t apply the rules to anything else. Overfitting can occur in both traditional and generative AI (see the overfitting sketch after this list).
  • Regression: Regression is a widely used machine learning tool – in both traditional and generative AI – that enables AI to predict things, even before they have occurred, using the signals that are available. It involves two kinds of variables: 1) those that are in our control or observed and 2) unknowns that are not in our control and cannot currently be observed. AI that uses regression most often uses multiple linear regression, considering a variety of variables to best predict the answer to a question. The AI model uses the data it has been trained on to find the best relationship between the predictors and the outcome so it can make accurate predictions for new data it hasn’t seen before (a regression sketch follows this list).
  • Retrieval-Augmented Generation (RAG): RAG is a solution for AI tools that return information in response to prompts. A RAG framework addresses two of the biggest problems with LLMs – it enables them to be more accurate and up-to-date by adding a meticulously maintained "content store" of source material. It is particularly useful for knowledge-intensive tasks, information that is continuously updated and the integration of domain-specific information. It can also be restricted to native data (for example, only Ohio State data) as the source material for an AI tool (a simplified RAG sketch follows this list).
  • Variance: In the context of AI, variance refers to how much the model’s predictions for a given data point change when the model is trained on slightly different data. It’s a measure of the model’s consistency. If a model has high variance, its predictions are very sensitive to small changes in the training data, which can lead to overfitting (the overfitting sketch after this list also illustrates high variance).
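
The sketches below make a few of these terms concrete. This first one contrasts a supervised decision-tree classifier with unsupervised k-means clustering using the scikit-learn library; the tiny dataset, labels and cluster count are made-up illustrations, not part of any Ohio State tool.

    # A minimal sketch contrasting supervised and unsupervised learning.
    # The tiny dataset and parameters below are illustrative assumptions.
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.cluster import KMeans

    # Toy features: [weight_kg, ear_length_cm] for a handful of animals
    features = [[4.0, 7.0], [5.0, 8.0], [30.0, 12.0], [35.0, 11.0]]
    labels = ["cat", "cat", "dog", "dog"]          # targets supplied by a human

    # Supervised: a decision tree learns to map features to the given labels
    classifier = DecisionTreeClassifier().fit(features, labels)
    print(classifier.predict([[4.5, 7.5]]))        # likely ['cat']

    # Unsupervised: k-means groups similar rows without any labels at all
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    print(clusters)                                # e.g. [0 0 1 1] (cluster ids, not names)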
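
Next, a minimal multiple linear regression sketch, as referenced in the regression entry above. All of the numbers (hours studied, hours slept, exam scores) are made-up illustrations.

    # A minimal multiple linear regression sketch; all numbers are made up.
    from sklearn.linear_model import LinearRegression

    # Predictors we can observe: [hours_studied, hours_slept]
    X = [[2, 6], [4, 7], [6, 8], [8, 8], [10, 9]]
    # Outcome we want to predict: exam score
    y = [55, 65, 75, 82, 90]

    model = LinearRegression().fit(X, y)
    print(model.coef_, model.intercept_)   # learned relationship between predictors and outcome
    print(model.predict([[7, 7]]))         # prediction for a student the model has never seen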
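
The overfitting and variance entries can be illustrated by fitting the same noisy data with models of different complexity. The data and polynomial degrees below are arbitrary choices for illustration.

    # A minimal overfitting sketch: a high-degree polynomial chases the noise,
    # while a low-degree one generalizes better. All numbers here are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    y = 2 * x + rng.normal(scale=0.1, size=x.size)   # roughly linear data plus noise

    simple = np.polyfit(x, y, deg=1)    # low complexity: captures the trend
    complex_ = np.polyfit(x, y, deg=9)  # high complexity: memorizes the noise (overfits)

    x_new = 1.2                          # a point outside the training range
    print(np.polyval(simple, x_new))     # close to 2 * 1.2 = 2.4
    print(np.polyval(complex_, x_new))   # often wildly off: a sign of high variance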
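
Finally, a deliberately simplified sketch of the retrieval-augmented generation (RAG) pattern. Real RAG systems use vector embeddings and an actual LLM; the keyword scoring, sample documents and printed prompt below are assumptions meant only to show the shape of the approach, not a real Copilot or Ohio State API.

    # A toy retrieval-augmented generation (RAG) loop. Real systems embed the
    # documents and call an LLM; this keyword version only shows the pattern.
    def retrieve(question, documents, k=2):
        """Score each document by word overlap with the question and keep the top k."""
        q_words = set(question.lower().split())
        scored = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
        return scored[:k]

    def build_rag_prompt(question, documents):
        """Paste the retrieved 'content store' passages into the prompt."""
        context = "\n".join(f"- {d}" for d in retrieve(question, documents))
        return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

    docs = [
        "Autumn tuition deadlines are posted each July.",           # made-up content store
        "Parking permits are sold through the campus portal.",
        "The library offers 24-hour study rooms during finals.",
    ]
    print(build_rag_prompt("When are tuition deadlines posted?", docs))
    # A real system would now send this prompt to an LLM instead of printing it.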

 

Free Learning Resources

LinkedIn and other industry leaders offer free courses to help you learn about AI. Students may be interested in exploring MLA's Student Guide to AI Literacy.

Free Courses from LinkedIn Learning

Faculty, staff and students can leverage LinkedIn Learning to get started with learning about AI – remember to log in through BuckeyeLearn to access courses at no cost. We recommend you get started with a few basic courses.

AI Primer: BuckeyeLearn LinkedIn Learning Courses to start your GenAI journey 

  • Get Ready for Generative AI (5 minutes, 26 seconds): This video offers a very brief overview of AI. It is most effective when combined with other courses, like those listed here. 
  • AI Productivity Hacks to Reimagine Your Workday and Career (27 minutes): This course shows you how new generative AI tools can automate routine tasks, boost productivity, enhance collaboration, and improve decision-making. Discover key hacks to reimagine your workday and create new workflows.
  • Prompt Engineering: How to Talk to the AIs (29 minutes): This basic course explains prompts and how to write them to increase the likelihood of AI tools returning your desired results. 
  • Ethics in the Age of Generative AI (38 minutes): An overview of ethical concerns related to AI, focusing on human-centered approaches and maintaining human control over AI-generated content. 

LinkedIn Learning Courses by Category

  • Case Studies in AI
  • Ethics in AI
  • AI and Research
  • AI and Cloud Services

You can also find helpful Copilot courses from LinkedIn Learning through BuckeyeLearn, but many of these courses highlight features that are not yet available through Ohio State’s subscription to Copilot with commercial data protection.

 

Free Courses from Industry Leaders

Since AI is still new to most of us, service providers are interested in educating the public to encourage adoption of AI tools. Here are a few places where you can find free courses to build knowledge and sharpen your skills:

 

Prompting for Higher Education: Best Practices 

Prompt engineering is an exciting area that is developing as the GenAI ecosystem grows. Prompts guide generative AI (GenAI) tools, like ChatGPT or Copilot, to produce specific outputs or complete tasks based on your instructions. The following sections provide tips for writing prompts. Assume that all references to "AI" in this section refer specifically to generative AI.

The Anatomy of an AI Prompt: Be Specific and Minimize Assumptions

By default, AI tends toward broad, general responses. To guide AI toward more specific, and therefore more useful, responses, include at minimum these four main components when creating a prompt: Goal, Context, Source and Expectations. Remember, the more information you provide an AI tool, the better it can meet your request.

  • Goal: What response do you want (e.g., bulleted list, data table, paragraph, image)?
  • Context: Why do you need the information and who is involved (e.g., instructor, student, other staff, management)?
  • Source: Which information sources or samples should be used (e.g., existing spreadsheets, background information)?
  • Expectations: How should the AI respond (e.g., use simple or complex language, explain something in layman’s terms, use a certain tone)?

Being specific in prompting will also help minimize the assumptions the AI makes in creating a response to a prompt. Think of your prompt as a set of clear instructions. The more detailed it is, the better the AI can perform each task, resulting in higher quality output.
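
As a concrete illustration of the four components, the short sketch below assembles a prompt from Goal, Context, Source and Expectations fields. The example values are illustrative assumptions, not a required template; swap in your own details before pasting the result into Copilot or another approved tool.

    # A minimal sketch that assembles a prompt from the four components above.
    # The example values are assumptions; replace them with your own details.
    def build_prompt(goal, context, source, expectations):
        return (
            f"Goal: {goal}\n"
            f"Context: {context}\n"
            f"Source: {source}\n"
            f"Expectations: {expectations}"
        )

    prompt = build_prompt(
        goal="Create a bulleted study guide on cell organelles.",
        context="I am an instructor preparing a review session for first-year biology students.",
        source="Base it on the attached lecture outline: [Insert Text].",
        expectations="Use simple language and keep it to one page.",
    )
    print(prompt)  # paste the assembled prompt into your AI tool of choice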

4 Steps to Writing Great AI Prompts
  1. Learn About Your Prompt’s Subject: Understand the subject matter to write a specific and precise prompt. This background knowledge can help you avoid issues such as AI-generated hallucinations (believable, but incorrect or entirely made-up outputs) and misinformation. Having background knowledge also helps you to evaluate the quality and accuracy of the AI's response.  
  2. Refine, Iterate and Experiment with Prompting Styles: You often won’t get a perfect response on the first try. Refine your prompt and try again. Use "iterative refinement," a technique that involves providing feedback on the AI's responses and requesting improvements. Many successful prompting sessions evolve into dialogues through the refining process. Refining may even involve changing the conversation style. For example, you can try asking the tool to be more creative or more precise in generating a response. Copilot, which the university has licensed for all faculty, staff and students with data protection, offers you three built-in conversation styles.   
  3. Keep Your Audience in Mind: As you refine your prompt, ensure the generated content aligns with your target audience. Consider their knowledge level and adjust the complexity as needed. Don't be afraid to explicitly tell the AI who you're creating the content for.  
  4. Consider Ethical Implications: Ethics are crucial when using AI in education. Be mindful of issues like data security, privacy and copyright concerns. Always be transparent regarding your use of AI tools.   


In summary, remember to do the following when prompting AI:   

  • Be clear and concise: State your desired outcome or question in a straightforward manner.  
  • Provide context: The more information you give the AI about the topic, the better it can tailor its response.  
  • Use specific language: Avoid ambiguity. The more precise your terms, the more focused the AI's output will be. 
  • Refine and iterate: Don't be afraid to experiment with different prompts to achieve the best results.   
Writing Inclusive Photo Prompts

It’s important to be aware that AI has biases that impact its responses. If an AI model is trained on a dataset that lacks inclusivity and diversity, biases can emerge. If you experiment with headshot-generating apps, a popular alternative to the time and cost of hiring a professional photographer for a business headshot, you will likely notice that the results are over-filtered and lack nuance.

One company that has articulated this issue particularly well is Dove Soap, a Unilever brand that is well known for its inclusive “Real Beauty” advertising campaign. The Real Beauty Prompt Playbook does a great job of offering tips on how to create images on the most popular generative AI programs that are representative of a more inclusive population. It also has a glossary of terms to make your prompting more inclusive.

This content is for educational purposes only and should not be construed as an endorsement or recommendation of Dove/Unilever brands.

Sample Higher Ed Prompts You Can Try

Try some of the prompts below to start getting acquainted with AI. Seeing the answers will give you an idea of what kinds of responses you can expect when writing prompts for AI tools. These samples demonstrate how being specific in your prompts helps you get responses in the form that is most helpful to you.

Create a rubric for a term paper in a 1000-level college course on creative writing. Students are asked to write their own story for this term paper. The rubric should have five components, and each component should be graded on a 1-5 scale.   

I need help studying for a biology quiz on the parts of the cell and their functions. Please quiz me.   

Create a workout plan for a 35-year-old woman. This workout plan should include three days per week of strength training and three days per week of cardio. Each day should have different exercises. The goal of this program is to increase overall fitness.   

I’m starting my first job and need outfit ideas. I am a 24-year-old male, and the work environment is business casual. Please provide images and descriptions of potential outfits.   

Write a SQL query that connects the results of these two queries: SELECT "email", "points" FROM "user" LIMIT 100000 and SELECT "achievementid", "title", "authoremail" FROM "achievement" LIMIT 10000.

Draft an email to my boss to update her on a project I have been working on. Here is relevant information about the project [Insert Text].   

Generate 3-5 bullet points to prepare me for a meeting with Client X to discuss their next branding campaign. Generate the bullet points as a summary of this text: [Insert Text]. Please use simple language.

 

Pros and Cons of AI

AI proponents highlight the many benefits of the new technology. AI has the promise of improving accessibility and user experience with technology tools. It improves ease of use for non-technical users and reduces the cost of expensive, complex technologies, making them more accessible to smaller businesses. However, with every transformative technology comes change, and by its very nature that change brings both positive and negative outcomes. The LinkedIn Learning course Ethics in the Age of Generative AI addresses many of the moral concerns to be debated when discussing AI, but there are also practical concerns.

Practical Concerns when Using AI
  • Lack of Transparency: AI models often lack transparency and the ability to explain their predictions and decisions. This is known as the "black-box" problem. Some AI tools, like Microsoft Copilot, mitigate this issue by citing sources of information in their answers. 
  • Lack of Specialized Knowledge: AI is designed to answer general prompts; very specific, in-depth knowledge might require a specialized plug-in, and most of these are designed for ChatGPT. If you are, for example, a chemistry professor, you may find AI unhelpful for your students. 
  • Fabricated Answers: AI has been known to make things up in answering questions – a defect that has been termed “hallucinations” – so you must verify the information AI provides. Incorrect or misleading results could be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model or biases in the data used to train the model. 
  • Imprecise Results: Detailed prompts help, but in some cases, AI just can’t quite deliver the response you seek. A user’s understanding of prompt engineering and the continual improvement of available tools will help improve this issue. 
  • Biases: 
    • Sample Bias: sample is too small or not representative of the population 
    • Programmatic Morality Bias: AI doesn’t know right from wrong 
    • Ignorance Bias: users blindly believing results 
    • Overton Window Bias: views change over time, which AI struggles to account for, making it difficult to detect controversy 
    • Deference Bias: users believe AI has wisdom 
  • Amplification of Propaganda: AI amplifies propaganda if such materials are included in its training data. “Fake news” and “deep fakes” are a legitimate concern as AI becomes more prevalent and advanced.