Prompt Navigator

A Human-Centered AI Prompt Engineering Guide for College Faculty

Lead your classroom with confidence in the AI era. Prompt Navigator is your comprehensive, human-centered mission plan for crafting prompts that save time, foster inclusive learning, and uphold academic integrity. Built specifically for higher-education faculty, it blends research-backed strategies with ready-to-use tools so you can start applying them today.

Explore the Sections

Core Techniques

These nine techniques give faculty a practical toolkit for consistent, high-quality AI outputs. Each card includes a definition, a higher-ed use case, guidance on when to use it, and copyable example prompts.

Zero-Shot Prompting

Use when: you want a quick baseline answer or definition without providing examples.

Definition

Zero-shot prompting asks the AI to perform a task without any examples in the prompt. You give a direct instruction or question and the model responds using its general training. It’s the fastest way to get an initial draft, explanation, or list because you don’t have to prepare demonstrations or special formatting ahead of time.

Use Case (Higher Ed)

Use for fast definitions (“Explain operant conditioning in ~120 words for non-majors”), quick contrasts (“Summarize how qualitative and quantitative methods differ”), or rapid lists (“Give five discussion starters related to academic integrity in first-year seminars”). In class prep, it’s a handy way to draft a first pass that you can refine with follow-ups.

When/Why to Use

Best for straightforward tasks where you want the model’s unprimed attempt. It minimizes prep time and can kick off an iterative workflow (baseline → refine). If outputs are too generic or inconsistent, move to few-shot or add context to tighten format, tone, or constraints.

Explain the concept of social stratification in simple terms for an intro sociology class (~120 words) and include one concrete, everyday example.
List the three most important differences between photosynthesis and cellular respiration in a single paragraph for first-year students.
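
To run the same zero-shot call outside a chat window, here is a minimal Python sketch. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name is a placeholder, so substitute whatever your campus provides.

    # Zero-shot: one direct instruction, no examples in the prompt.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Explain the concept of social stratification in simple terms "
                       "for an intro sociology class (~120 words) and include one "
                       "concrete, everyday example.",
        }],
    )
    print(response.choices[0].message.content)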

Few-Shot Prompting

Use when: the output must match a specific format, tone, or rubric.

Definition

Few-shot prompting includes one or more examples of the task and desired output inside the prompt. The model uses these demonstrations to infer structure, tone, and level, then produces a new response that mirrors the pattern. This leverages in-context learning to reduce ambiguity and improve consistency.

Use Case (Higher Ed)

Provide an example student answer with model feedback, then ask for feedback on a new answer “in the same style.” Or show two well-formed quiz items (stem, distractors, rationale) and request five new items that match difficulty and structure. You can also include an example paragraph edit to teach the model your revision voice.

When/Why to Use

Reach for few-shot when zero-shot output was generic, when format is critical (rubrics, item shells, citation style), or when you need a stable voice (e.g., supportive writing tutor vs. strict copyeditor). Better examples yield better outputs; one to three concise demonstrations are usually enough.

You are grading short answers.

Example student answer:
"[paste]"

Example instructor feedback (tone, length, structure to imitate):
"[paste]"

Now provide feedback on the next student answer in the same style:
"[paste new answer]"

System & Role Prompts

Use when: you need consistent behavior, constraints, or a specific persona.

Definition

System instructions set global ground rules for the session (tone, limits, policies, or audience). Role prompts ask the AI to adopt a persona (e.g., encouraging writing tutor, skeptical peer reviewer, reference librarian). Together, they shape how the model responds across turns and reduce unwanted surprises.

Use Case (Higher Ed)

Set a system message like: “You are a helpful teaching assistant for freshman writing who explains in plain language and avoids giving full solutions.” Then add a role for the task: “Act as an encouraging writing tutor and give concise, actionable feedback on this draft.” For research support, use a librarian persona that emphasizes source evaluation and citation.

When/Why to Use

Use whenever the manner of response matters as much as content—feedback tone, accessibility, policy alignment, or simulations. Roles help the model surface relevant knowledge and keep responses within scope (a tutor asks questions back; an editor focuses on clarity and mechanics).

System (session-wide): You are a supportive teaching assistant for first-year courses. Use clear, neutral language, avoid full solutions, and flag uncertainty.

Role (this task): Act as an encouraging writing tutor. Give concise feedback on the draft below tied to our rubric.
Rubric: "[paste]"
Draft: "[paste]"

Context Injection

Use when: outputs must follow your local readings, rubrics, levels, or constraints.

Definition

Context injection supplies the model with the background it needs: course level and audience, specific source material (excerpts, notes, slides), and explicit constraints (length, format, tone). By reducing ambiguity, you increase alignment to your course.

Use Case (Higher Ed)

Generate study guides from assigned readings; create quiz items from pasted passages; evaluate a paragraph against your rubric; adapt a lesson for online or HyFlex with stated time limits and materials. When you include the source text or policy, the output can follow it.

When/Why to Use

Use whenever “according to our materials” matters. Pair with few-shot for tone/format. Also useful to encode accessibility or integrity notes (e.g., use alt-text; avoid claims without citations).

Using the rubric below, score the student paragraph and give two specific suggestions tied to criteria.

Course/Level: Intro Psych (non-majors)
Rubric: "[paste rubric]"
Student paragraph: "[paste]"
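
Scripted, context injection is string assembly: read your local materials and interpolate them into the prompt. A sketch under the same assumptions; the file names are hypothetical.

    # Context injection: build the prompt from local course materials.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    rubric = Path("rubric.txt").read_text()               # hypothetical files
    paragraph = Path("student_paragraph.txt").read_text()

    prompt = (
        "Using the rubric below, score the student paragraph and give two "
        "specific suggestions tied to criteria.\n\n"
        "Course/Level: Intro Psych (non-majors)\n"
        f"Rubric:\n{rubric}\n\n"
        f"Student paragraph:\n{paragraph}"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)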

Step-Back Prompting

Use when: you want the model to identify the problem type and plan before solving.

Definition

Step-back prompting asks the model to describe the kind of problem it’s facing and outline a brief plan (2–4 steps) before attempting the solution. This promotes meta-cognition and reduces impulsive answers.

Use Case (Higher Ed)

Before solving a stats word problem, the model identifies it as a sampling vs. inference issue, lists the steps (define variables, choose test, check assumptions), and then proceeds. In philosophy, it can name whether a prompt calls for conceptual analysis, argument reconstruction, or application, then plan the response.

When/Why to Use

Helpful for multi-step or unfamiliar tasks and when teaching students how to approach problems. It surfaces a plan you can critique, making the reasoning process visible.

Identify the type of problem, outline a 3-step plan to solve it, then provide the solution.

Problem:
"[paste]"

Chain-of-Thought Prompting

Use when: the reasoning steps themselves are instructional or must be checked.

Definition

Chain-of-thought (CoT) prompting asks the model to show intermediate reasoning steps rather than only a final answer. You can cap the number of steps to keep it concise and then request the final answer separately.

Use Case (Higher Ed)

In math or logic, have the model explain each step (up to a set limit) and then state the final result. In history or policy analysis, ask for a short evidence chain that links claims to sources before delivering a conclusion.

When/Why to Use

Use when transparency and pedagogy matter—teaching problem-solving, grading reasoning quality, or verifying how a claim was reached. Avoid for trivial tasks or when brevity is essential.

Solve the problem and explain in numbered steps (max 6). Then give the final answer on a separate line labeled “Answer:”.

Problem:
"[paste]"

Self-Consistency

Use when: single-run outputs vary and you need a more reliable result.

Definition

Self-consistency generates multiple independent solutions or reasoning paths to the same prompt, then compares and selects the best/most consistent answer. You can do this by regenerating, or by instructing the model to produce several alternatives and reconcile them.

Use Case (Higher Ed)

For complex calculations, conceptual proofs, or grading rubrics, ask for 3 variants and then a short synthesis that identifies agreement and chooses a final. This reduces the chance of a single erroneous run steering decisions.

When/Why to Use

Useful when correctness is critical or when earlier runs felt unstable. Sampling multiple paths and converging can reduce hallucinations and produce more dependable outcomes.

Produce three independent answers to the question below. Then summarize points of agreement, note any conflicts, and choose the best answer with a one-sentence justification.

Question:
"[paste]"

Tree-of-Thought Prompting

Use when: there are multiple viable approaches and you want options before deciding.

Definition

Tree-of-thought prompting explores several branches of reasoning or strategy rather than a single linear path. The model proposes options, evaluates them against criteria, and then converges on a recommended approach.

Use Case (Higher Ed)

Lesson design: request three distinct approaches to teach a topic (e.g., analogy-first, hands-on demo, group problem-solving) with objectives, a keystone activity, and pros/cons. Then ask for a recommendation based on your course constraints.

When/Why to Use

Ideal when many paths could work and you want structured exploration before committing. It makes trade-offs explicit and supports critical comparison.

Propose 3 distinct approaches to teach [topic] to non-majors.
For each: objectives, one keystone activity, pros/cons.
Then recommend one approach and explain why given a 50-minute class and 25 students.
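
A lightweight scripted version of the propose-evaluate-converge loop: one call to branch, a second to judge the branches against your constraints. Same assumptions; the topic is a placeholder.

    # Tree-of-thought, light: branch in one call, evaluate and converge in another.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"       # placeholder model name
    topic = "confidence intervals"  # placeholder topic

    branches = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (f"Propose 3 distinct approaches to teach {topic} to "
                        "non-majors. For each: objectives, one keystone "
                        "activity, pros/cons."),
        }],
    ).choices[0].message.content

    verdict = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": ("Here are three candidate approaches:\n\n" + branches +
                        "\n\nEvaluate them for a 50-minute class of 25 students "
                        "and recommend one, explaining why."),
        }],
    ).choices[0].message.content
    print(verdict)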

ReAct (Reason & Act)

Use when: the task benefits from alternating reasoning with actions or questions.

Definition

ReAct interleaves reasoning with “actions” (asking a question, requesting missing info, invoking a tool) and then continues based on observations. It’s a think-step, act-step, observe-step loop that prevents premature conclusions.

Use Case (Higher Ed)

For scoping a literature review or project: the model restates the goal, identifies gaps, asks clarifying questions, proposes a next action, and iterates once before producing a step-by-step plan with milestones and check-ins.

When/Why to Use

Use for open-ended tasks that require inquiry and iteration (research planning, stakeholder analysis, design projects). It encourages deliberate progress rather than guess-and-go answers.

Follow a Reason → Action → Observation loop to build a research plan.

Reason: Restate my topic and identify what information is missing.
Action: Ask me up to 3 clarifying questions.
Observation: Wait for answers.

Repeat the loop once. Then output a step-by-step plan with milestones and risks.
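
A hand-rolled ReAct loop can be as simple as letting the model ask its clarifying questions and feeding your typed answers back as the observation. A sketch under the same assumptions; real agent frameworks add tool calls, but the loop structure is the point.

    # Minimal ReAct-style loop: reason, act (ask questions), observe, then plan.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder model name

    messages = [{
        "role": "user",
        "content": ("We are scoping a literature review on AI feedback in "  # placeholder topic
                    "writing courses. Follow a Reason -> Action -> Observation "
                    "loop: restate the goal, note what is missing, then ask me "
                    "up to 3 clarifying questions and stop."),
    }]

    for turn in range(2):  # turn 0: clarifying questions; turn 1: the plan
        reply = client.chat.completions.create(model=MODEL, messages=messages)
        text = reply.choices[0].message.content
        print(text)
        messages.append({"role": "assistant", "content": text})
        if turn == 0:
            # Observation step: the instructor answers the model's questions.
            answers = input("Your answers: ")
            messages.append({
                "role": "user",
                "content": answers + "\n\nNow output a step-by-step plan "
                                     "with milestones and risks.",
            })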

Prompt Templates

Ready-to-use prompt templates for common higher education tasks. Customize these templates with your specific context and requirements.

Lesson Plan Generator

Generate a [Duration]-minute lesson plan on [Topic] for [Course Name], a [Level] general education course. Include: 2-3 specific learning objectives, at least one interactive activity, and a brief assessment (e.g., a quick quiz or discussion prompt) to check understanding. Ensure the lesson plan is appropriate for [Class Size] students and engages diverse learning styles.

Usage: Fill in, for example, [Duration] with "50" (for a 50-minute class), [Topic] with "climate change impacts," [Course Name] with "ENV 101: Intro to Environmental Science," [Level] with "freshman-level," and [Class Size] with "30". This prompt will yield a structured lesson plan tailored to a freshman ENV 101 class.
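
If you keep templates like this in a file, a few lines of Python can fill the placeholders consistently; this sketch simply renames the bracketed fields to valid identifiers and uses str.format.

    # Fill a reusable prompt template; fields mirror the bracketed placeholders.
    template = (
        "Generate a {duration}-minute lesson plan on {topic} for {course}, a "
        "{level} general education course. Include: 2-3 specific learning "
        "objectives, at least one interactive activity, and a brief assessment "
        "to check understanding. Ensure the lesson plan is appropriate for "
        "{class_size} students and engages diverse learning styles."
    )

    prompt = template.format(
        duration=50,
        topic="climate change impacts",
        course="ENV 101: Intro to Environmental Science",
        level="freshman-level",
        class_size=30,
    )
    print(prompt)  # paste into your chat tool, or send via the API as above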

Syllabus Outline Draft

Draft a syllabus outline for [Course Name] (a [Term Length] course). Provide a week-by-week breakdown of topics and readings for [Number of Weeks] weeks, and list major assignments (e.g., [Assignment Type1], [Assignment Type2]). Include a brief course description and 3–4 course learning outcomes aligned with general education goals.

Usage: Replace [Course Name] (e.g., "HIST 202: World History since 1500"), [Term Length] (e.g., "15-week semester"), [Number of Weeks] (e.g., "15"), and assignment types (e.g., "midterm exam", "research paper"). The AI will produce a skeleton syllabus with topics by week and key components.

Lesson Plan Adaptation (Inclusive Design)

Adapt the following lesson plan on [Topic] for a different context: currently designed for [Current Context], modify it for [New Context]. Ensure you adjust the activities and examples to fit the new context and maintain the learning objectives. Provide the adapted lesson sequence and notes on changes made for [New Context].

Usage: Use this when you have a lesson for one context and want to tailor it to another. For example, Current Context = "in-person classroom" and New Context = "online asynchronous format", or Current Context = "a history class" and New Context = "an interdisciplinary honors seminar". The template ensures the prompt asks the AI to keep the objectives but change the methods suitably.

Constructive Feedback Generator

Provide constructive feedback on a [Assignment Type] submission that [Student Challenge]. The feedback should start by highlighting strengths (at least one specific praise), then address the areas for improvement kindly and specifically. Assume this is for a [Level] student in [Course Name]. Aim for a feedback length of about one solid paragraph.

Usage: Insert the context: e.g., [Assignment Type] = "lab report", [Student Challenge] = "has a good analysis but contains several factual errors" or "is well-written but lacks a clear thesis statement", [Level] = "junior" (for a 300-level course), [Course Name] = "Psychology Research Methods". The AI will produce feedback that acknowledges the good (maybe "well-structured report") and then gently points out errors or thesis issues with suggestions – a balanced critique to guide the student.

Rubric Draft

Create a rubric for a [Assignment Type] in [Course Name]. Include [Number of Criteria] key criteria (e.g., Content Accuracy, Organization, Analysis, Writing Clarity, etc.) relevant to the assignment. For each criterion, provide descriptors for at least three performance levels (e.g., Excellent, Satisfactory, Needs Improvement) in a way that would help [Level] students understand expectations.

Usage: For example, [Assignment Type] = "oral presentation", [Course Name] = "COMM 110 Public Speaking", [Number of Criteria] = "4". This prompt yields a rubric with 4 criteria (perhaps Delivery, Content, Visual Aids, Organization) and descriptions of what Excellent vs. Satisfactory vs. Needs Improvement looks like for each – which you can then tweak to perfectly match your assignment and scoring.

Summative Feedback Letter

Draft a brief feedback letter to a student about their [Assignment Type] performance. The student did well in [Strength Area] but struggled with [Student Challenge]. In the letter, (a) acknowledge their success in [Strength Area] with specific detail, (b) address [Student Challenge] by explaining what could be improved and offering 1-2 suggestions or resources, and (c) end on an encouraging note about improvement and next steps. Use a supportive, professorly tone.

Usage: Suppose an assignment is a "research paper", [Strength Area] = "framing a compelling argument", [Student Challenge] = "integrating scholarly sources correctly". The AI will compose a letter (you can send via email or LMS) that might say "Dear Student, I wanted to commend you on…, etc. However, in terms of integrating sources, … suggestions … I'm confident you can address these issues in future work. Sincerely…". This saves time in crafting individualized yet structured feedback letters.

Differentiated Instruction Strategies

Suggest two ways to teach [Concept] in [Course Name] that accommodate different learning preferences or needs. For each strategy, identify the target learning style or student group (e.g., visual learners, English language learners, hands-on learners, etc.) and describe how to implement the strategy in class. Ensure the suggestions are feasible in a [Class Size] class.

Usage: If [Concept] = "the water cycle", [Course Name] = "GEOG 101", and [Class Size] = "large (100-student lecture)", the output might give: (1) a visual approach (diagrams or an animation) for visual learners, and (2) an interactive demo or storytelling for other learners – each with guidance on how to run it even in a big class (perhaps via breakout groups or an online forum). This helps make lessons more inclusive.

Adaptation for Diverse Learners

Take the following assignment: [Brief Assignment Description]. Propose one modification or support to make it more accessible for students with [Specific Need] (e.g., non-native English speakers, hearing impairment, anxiety, etc.), and another modification for [Different Need]. Explain how each modification maintains the assignment's learning objectives while providing the needed support or flexibility.

Usage: Example: Brief Assignment Description = "10-minute in-class oral presentation on a current event," Specific Need = "severe public speaking anxiety", Different Need = "a hearing impairment". The AI might suggest an alternative format for the anxious student (such as pre-recording the presentation or presenting one-on-one) and captions or sign-language interpretation for the student with a hearing impairment, with a justification for each. The template asks the AI to maintain the learning objectives (so the core skill is still assessed) while adding accommodations.

Inclusive Discussion Prompt

Generate a class discussion prompt on [Topic] that is inclusive and invites perspectives from students of diverse backgrounds (cultural, academic, etc.). The prompt should be open-ended and avoid assumptions that everyone has the same experience. Also suggest two follow-up questions I, as the instructor, could use to probe deeper or invite quieter students to contribute.

Usage: If [Topic] = "the impact of social media on daily life". The output will be something like: main prompt – "Describe an experience you've had… (ensuring it's framed broadly)", follow-ups like "For those who might not use XYZ, how do you see parallels in other areas?" etc. This helps create a welcoming discussion environment.

Advising Email Draft

Draft an email from a faculty advisor to a student who [Student Situation]. The tone should be supportive and proactive. In the email, (a) acknowledge the student's situation or concern, (b) provide at least two suggestions or resources (on campus or strategies) to help with [Issue], and (c) invite the student to follow up or meet to discuss further. Sign off as an advisor.

Usage: E.g., [Student Situation] = "is struggling academically after the midterm exams" (Issue: time management and study strategies), or "is unsure about choosing a major" (Issue: career direction). The AI will produce a thoughtful email like: "Hi [Name], I heard that you're feeling uncertain about... It's completely normal... Here are some steps/resources... Let's find time to chat..." – which you can personalize and send.

Degree Planning Guide

Explain the course planning for [Major/Program] in an easy-to-understand way for a second-year student advisee. The student wants to graduate on time and is concerned about prerequisites and sequencing. Provide a brief overview (bullet points or short paragraphs) of: (a) which courses or requirements they should prioritize each year (Years 2, 3, 4), (b) any critical prerequisites or GPA requirements to note, and (c) advice on internships or extracurriculars for this field. Write it as if an advisor is speaking to the student.

Usage: Suppose [Major/Program] = "Psychology B.A." The output might break down: Year 2: finish core Psych intro courses & stats; Year 3: take research methods (prereq for senior thesis), some electives in areas of interest; Year 4: capstone and advanced seminars; note if a minor or study abroad can fit; etc. This template yields a nice planning summary you can give to students.

Role-Play Script (Advisor-Student)

Provide a sample dialogue between a faculty advisor and a student who [Student Issue]. The dialogue should show the advisor asking open-ended questions and guiding the student toward a solution. Aim for about 8-10 exchanges. At the end, include a brief advisor reflection or note on why certain questions were asked (to illustrate good advising practice).

Usage: If [Student Issue] = "wants to drop a course late in the semester due to stress", the script will show Advisor: "What's worrying you most about this course?", Student: "...", Advisor: "Have you considered...", etc., ending with an advisor note such as "(Advisor note: Acknowledged feelings, explored options like tutoring or withdrawal deadlines...)." This can help advisors-in-training, and it can prepare faculty for delicate conversations by modeling the interaction.

Syllabus AI Policy Draft

Draft a syllabus policy for [Course Name] about the use of AI tools (like ChatGPT) in coursework. The policy should clearly state: (a) which uses of AI are allowed (if any) and for what purposes (e.g., brainstorming, grammar checking), (b) which uses are forbidden (e.g., fully writing assignments), (c) how students should credit or disclose AI assistance if allowed, and (d) the consequences of misuse in line with academic integrity standards. The tone should be educational, not just punitive, explaining the rationale behind the policy.

Usage: Fill in [Course Name] or keep it general. The output will be a comprehensive policy paragraph you can tweak, e.g., "In this course, the use of generative AI is permitted only for preliminary idea generation or proofreading. Students must cite any AI assistance... Using AI to produce passages of your assignment is not allowed and will be considered plagiarism... The reason for this policy is ...".

Academic Integrity Case Discussion

Create a hypothetical classroom scenario that deals with a breach of academic integrity involving AI. For instance, a student [AI Misuse Scenario]. Outline the scenario in a few sentences, and then provide 3 open-ended discussion questions I can ask the class about how to handle it and what the ethical implications are. The goal is to prompt student reflection on honesty and AI.

Usage: Example [AI Misuse Scenario] = "submitted a paper that was mostly AI-generated without disclosure" or "used AI to solve homework and got an identical answer as another student". The result will give a short narrative (e.g., "Alice and Bob both submit very similar essays…") and questions like "What should the instructor do next?", "Is using AI in this way different from copying from a book? Why or why not?", and "How could Alice have used AI ethically, if at all?" Good for facilitating class dialogue on these timely issues.

Student Handout on AI Use

Generate a one-page handout outline for students titled 'Using AI Tools Responsibly in [Institution Name] Courses'. It should have: an introduction paragraph on why this is important, then 3 sections – (1) Allowed Uses of AI (with examples of acceptable assistance), (2) Prohibited Uses (with examples like plagiarism via AI), (3) Tips for Transparency (how to acknowledge or approach instructors about AI use). Write it in student-friendly language.

Usage: Put your [Institution Name] or leave it general. The output will be an outline you can expand into a handout or webpage. For example, Allowed: "idea brainstorming, getting feedback on drafts, practice quizzes," Prohibited: "don't have it write your essay or code," Tips: "when in doubt, ask; cite AI as you would a source if it contributed to your work; keep drafts to show your own process," etc. This template gives you a starting framework to educate students on navigating AI ethically.

Common Prompt Problems in Higher Education

AI can save time and improve teaching, but only if your prompts are clear and targeted. These are the most frequent mistakes faculty make — and how to avoid them.

Problem: Too Vague. Prompts without context produce generic results.
Fix: Include course level, focus, and objectives.

Problem: No Audience Level. Without a stated learner level or background, AI may pitch content too high or low.
Fix: Specify the intended audience (e.g., "first-year non-majors").

Problem: Missing Format or Length. AI will guess, often incorrectly.
Fix: State the format (bullets, table, paragraph) and desired length.

Problem: Overloaded Prompts. Multiple tasks in one prompt can confuse the AI.
Fix: Break requests into smaller, sequential prompts.

Problem: No Role or Tone. Without guidance, tone and style may be off.
Fix: Assign a role (e.g., "You are a supportive writing tutor…") to anchor the voice.

Problem: No Iteration or Verification. First outputs can contain errors or bias.
Fix: Review, refine, and verify before use.

Avoiding these problems means more relevant, precise, and classroom-ready AI outputs. Revisit the Core Techniques above to get it right from the start.

Refinement Workflow & Tuning Checklist

Effective prompts rarely happen in one try. Use this streamlined workflow to refine and perfect your AI instructions for clear, accurate, and relevant results.

Quick Refinement Workflow

1. Define Your Goal: Be specific about the task and purpose.
2. Add Context: Include audience, course level, content source, and constraints.
3. Draft Clearly: Keep it one cohesive request; split complex tasks.
4. Test: Run it and check for accuracy, tone, and format.
5. Spot Gaps: Identify missing details, wrong tone, or off-target content.
6. Refine: Adjust scope, tone, format, and add examples if needed.
7. Iterate: Test and tweak until results consistently meet expectations.
8. Document: Save the final prompt for reuse or sharing.

Tuning Checklist

Before sending your final prompt, confirm it has:

Clarity: No vague verbs or unclear asks.
Context: Subject, learner level, and key details included.
Format: Output type and length specified.
Tone & Style: Academic, friendly, or role-based as needed.
Cognitive Level: Matches desired depth (e.g., analysis vs. recall).
Academic Fit: Supports learning goals and integrity.
Inclusivity: Avoids bias, stereotypes, and exclusionary language.

💡 Pro Tip

Even good prompts can improve. Test, refine, and save the best versions — they become powerful templates for future use.

References and Further Reading

This guide has drawn from a variety of scholarly and professional sources that explore prompt engineering and AI in education. For those interested in delving deeper into the research or getting more perspectives, below is a list of references and recommended readings:


Core Research Papers

Lee, D., & Palmer, E. (2025). Prompt engineering in higher education: a systematic review to help inform curricula.

International Journal of Educational Technology in Higher Education, 22(7).

Provides a comprehensive review of how prompt engineering is being discussed and implemented in HE, including various frameworks and the importance of aligning prompts with educational goals.

Lo, L. S. (2023). The CLEAR Path: A Framework for Enhancing Information Literacy through Prompt Engineering.

Journal of Academic Librarianship, 49(4).

Introduces the CLEAR framework (Concise, Logical, Explicit, Adaptive, Reflective) for creating effective prompts, particularly in the context of library instruction and information literacy. A good example of an acronym-based strategy to teach prompt design.

Korzynski, P., Mazurek, G., Krzypkowska, P., & Kurasinski, A. (2023). Artificial intelligence prompt engineering as a new digital competence.

Source details via Springer/ResearchGate.

Discusses the AI PROMPT framework and argues that prompt crafting is an emerging digital skill. Useful for understanding the actionable guidelines proposed for text-to-text prompt scenarios and how they can be taught.

Kojima, T., Gu, S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large Language Models are Zero-Shot Reasoners.

Advances in Neural Information Processing Systems (NeurIPS 2022).

Famous for discovering that adding "Let's think step by step" to questions enables chain-of-thought reasoning in LLMs. This work underpins the idea that even without examples, prompting a reasoning process can improve accuracy.

Yao, S. et al. (2022). ReAct: Synergizing Reasoning and Acting in Language Models.

Google Research Blog & arXiv.

Introduces the ReAct paradigm where LMs both reason and take actions (like calling tools). While technical, it provides insight into how advanced prompting can allow AI to use external information and why interleaving thinking with doing yields better results on certain tasks.

Wang, X. et al. (2022). Self-Consistency Improves Chain of Thought Reasoning in Language Models.

arXiv.

Proposes the self-consistency decoding strategy, showing that by sampling multiple reasoning paths and taking a majority vote, LMs' performance on reasoning tasks can significantly improve. Background for the self-consistency technique mentioned in this guide.

Practical Guides & Industry Resources

Effective Prompting in Higher Education – Faculty Guide (2023).

SATLE Project, University College Dublin.

A practical guide that covers best practices for prompting (like clarity, testing for bias) and provides resources such as Danny Liu's "Prompt Engineering for Educators" and Acar's HBR article. Good for a condensed list of do's and don'ts in an education context.

Acar, O. A. (2023). AI Prompt Engineering Isn't the Future – Problem Formulation Is.

Harvard Business Review.

An opinion piece emphasizing that defining the problem well is more important than tweaking prompts endlessly. Useful perspective to ensure we don't lose sight of the real pedagogical goals when using AI.

Fourrage, L. (2025). Top 10 Prompt Mistakes to Avoid in 2025.

Nucamp Blog.

An industry-focused article listing common prompting pitfalls (vagueness, no role, etc.) and solutions, confirming many issues addressed in our Section 3. It's a helpful external checklist to cross-reference when diagnosing prompt problems.

Cain, C. (2023). Prompt Engineering: The Steering Mechanism for Generative AI in Education.

White paper.

Describes prompt engineering as a "steering mechanism" and outlines components for effective prompts (Cain also proposed a framework with three pivotal components, referenced in the systematic review). Good conceptual read linking prompt design to controllability of AI outputs.

Academic & Conference Papers

Eager, D., & Brunton, J. (2023). AI in Education: A Roadmap for Prompt Patterns.

Conference paper.

Among other things, discusses prompt pattern catalogs and notes the difficulty of a one-size-fits-all framework for prompts. Validates the idea that educators should choose or adapt frameworks fit for purpose.

Kumar, M. et al. (2024). Academic Integrity in the Age of AI: Challenges and Recommendations.

Journal of Academic Ethics.

Discusses how AI like ChatGPT complicates plagiarism definitions and suggests faculty strategies, underscoring the need for clear AI policies (like those we templated in Section 5).

University & Organization Guides

Danny Liu's blog post: "Prompt engineering for educators – making generative AI work for you" (2023)

University of Sydney.

Provides practical examples of using ChatGPT in teaching.

MSU Denver's AI for All Prompt Engineering Guide (2023)

Metropolitan State University of Denver.

An A-to-Z wiki style guide with basics and advanced prompting techniques, tailored for a broad audience including educators.

Ohio State University, Office of Teaching & Learning: Prompt Engineering for Instructors (Pressbooks, 2023)

The Ohio State University.

A resource similar in spirit to this guide, focusing on how educators can craft prompts for various teaching tasks.

Tools, Tutorials & Additional Resources

LearnPrompting.org (2023). Prompt Engineering Guide

Free online course.

A free online course with sections on advanced techniques like self-consistency and tree-of-thought, which complements the descriptions given here.

OpenAI's Cookbook (GitHub)

OpenAI documentation.

Has examples of using techniques like few-shot and chain-of-thought in code, which can be useful if faculty or students venture into writing custom AI scripts or want to understand under the hood.

Zawacki-Richter, O. et al. (2024). ChatGPT in Higher Education: A Delphi Study on Implications and Policy.

Preprint.

Gathers expert opinions on the use of GenAI in universities, useful for understanding the broader context and reinforcing the guide's policy section.

Sun, H. et al. (2023). "Teachers' Prompting Techniques in AI-Enhanced Classrooms".

Case study.

This entry is a placeholder: look out for emerging research on how real teachers are adopting AI. Following journals like Computers & Education or Educational Technology Research & Development will keep you updated on new findings and best practices.

We encourage you to explore these works to broaden your perspective. Prompt engineering is a fast-evolving field – staying informed through such readings will help you refine your skills and adapt to new AI developments. Happy prompting, and happy reading!