Teaching Computational Thinking with ELIZA: A Classroom Module for Middle and High Schoolers
education · AI literacy · lesson plan


Unknown
2026-03-08

Turn the EdSurge ELIZA activity into a 3-lesson classroom module that teaches rule-based chatbots, debugging, and AI literacy for middle and high schoolers.

Hook: Teach computational thinking with a 1960s bot — fast, low-cost, high-impact

Teachers and curriculum leads: if you need a ready-to-run classroom module that demystifies how chatbots work, shows their limits, and builds students' debugging and computational thinking skills, this ELIZA-based lesson plan delivers. Inspired by a Jan 2026 EdSurge classroom report on students interacting with ELIZA, the module converts that demonstration into a step-by-step three-session unit for middle and high school learners.

Why this matters in 2026

Across late 2025 and early 2026, schools and districts are prioritizing AI literacy: educators are asking for pragmatic classroom activities rather than abstract ethics talks. Policymakers and edtech guidelines now emphasize explainability, hands-on debugging, and the limits of rule-based models. ELIZA — the 1960s rule-based “therapist” bot originally created by Joseph Weizenbaum — is ideal for this purpose because it is simple enough for students to inspect but sophisticated enough to reveal common misconceptions about AI: it simulates conversation without understanding.

EdSurge (Jan 16, 2026): middle schoolers chatting with ELIZA uncovered how AI really works (and doesn’t).

Module overview (what students will gain)

  • Computational thinking concepts: decomposition, pattern recognition, abstraction, algorithms, and debugging.
  • AI literacy: difference between rule-based systems and modern statistical/ML models; why a chatbot can seem “intelligent” without understanding.
  • Practical skills: reading rules, writing simple conversational patterns, tracing and fixing logic errors in a bot.
  • Metacognition: reflection on trust, bias, and appropriate uses of conversational agents.

Designed for classroom constraints

This unit was built to run in three 45–60 minute lessons (flexible for block schedules). Materials are low-cost: any device with a browser, printed worksheets, and optional Python or Scratch environment for extensions.

Lesson sequence at a glance

  1. Lesson 1 — Explore ELIZA: interact, observe, and record patterns (45–60 min)
  2. Lesson 2 — Reverse-engineer the rules: decode how ELIZA responds and write your own patterns (45–60 min)
  3. Lesson 3 — Debug and extend: find and fix conversational failures; reflect on limits and ethics (45–60 min)

Learning objectives (SMART-aligned)

  • By the end of Lesson 1, students will identify at least five patterns in ELIZA responses and record them in a conversation log.
  • By the end of Lesson 2, students will create and test at least three rule-based conversational patterns and explain how each pattern maps to ELIZA's response.
  • By the end of Lesson 3, students will locate, explain, and correct at least two bugs in a provided ELIZA script and write a 150-word reflection on ELIZA’s limitations.

Materials and prep

  • Devices with a browser and internet access (or local copies of an ELIZA webpage). Example: browser ELIZA clones or local Python notebook with a simple rule engine.
  • Printed student worksheet (conversation log, rule worksheet, debugging checklist).
  • Teacher copy: annotated ELIZA script (original DOCTOR script simplified), sample rules for demonstration, and answer key for debugging tasks.
  • Optional: Scratch / Blockly project or a short Python file (provided below) for extension lessons.

Pre-class steps for teachers

  1. Test an online ELIZA instance (many educational sites host browser ELIZAs). If district policy restricts external tools, download a small local ELIZA script—see the appendix for a no-dependencies Python example.
  2. Print worksheets and prepare projected slides showing example conversations and typical failure modes.
  3. Decide grouping (pairs work well to encourage peer debugging) and set classroom norms for sensitive topics (ELIZA’s “therapist” prompts can produce emotional content; clarify safety protocols and provide opt-outs).

Lesson 1 — Explore ELIZA (45–60 min)

Hook (5–7 min)

Begin with a quick prompt: “Type a message to ELIZA as if you were confiding something that worries you. Observe how the bot replies.” Give students 5 minutes to chat. Emphasize that their role is observer-researcher.

Activity: Conversation logging (20–25 min)

  • Students work in pairs. One student chats; the other logs the last 6–10 turns (student input + ELIZA response).
  • Use the worksheet to tag each ELIZA response for style (e.g., question, reflection, reframe), trigger words (e.g., mother, always), and whether the response matches the input meaningfully.

Class synthesis (10–15 min)

  • Collect 3 representative conversation snippets. Project them and ask: What patterns do you see in ELIZA’s replies? (Look for mirrored pronouns, substitution of keywords, and generic questioning.)
  • Introduce these core concepts: pattern matching, substitution, and lack of semantic understanding.

Exit ticket (5 min)

Students write one sentence: “ELIZA seems smart because…” or “ELIZA is not smart because…” Use this to surface misconceptions for Lesson 2.

Lesson 2 — Reverse-engineer and author rules (45–60 min)

Warm-up (5 min)

Show a short ELIZA exchange where a keyword triggers a stock reply. Ask students to predict the rule behind it.

Mini-lecture: How ELIZA works (10 min)

Explain at a high level: ELIZA uses a list of patterns (often regex-like) and responses; when input matches a pattern, the bot applies substitutions (e.g., “I am” -> “you are”) and inserts the transformed phrase into a response template.
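The mini-lecture above can be made concrete with a minimal sketch: one hypothetical rule that matches a phrase, swaps first-person words to second person, and inserts the result into a template (the rule, the `reflect` helper, and the sample sentence are illustrative, not from the original DOCTOR script).

```python
import re

def reflect(text):
    # Swap first-person words to second person before reusing the capture.
    swaps = {"i am": "you are", "my": "your", "i": "you", "me": "you"}
    return " ".join(swaps.get(w, w) for w in text.lower().split())

# One illustrative rule: match "I am ..." and mirror it back as a question.
match = re.search(r"\bI am (.+)", "I am worried about my exam")
if match:
    reply = f"How long have you been {reflect(match.group(1))}?"
    # → "How long have you been worried about your exam?"
```

Projecting a three-line rule like this helps students see that the “understanding” is entirely string manipulation.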

Guided practice: Build a rule (20–25 min)

  1. Provide students with a simplified rule template: pattern + transformation + response choices.
  2. Example rule type: if input contains "I feel X", respond with "Why do you feel X?" where X is the captured group.
  3. Students create three rules on the worksheet. Each rule must include an example input, the expected ELIZA response, and a brief explanation of why it works.
  4. Pairs then test their rules on the ELIZA interface (many browser ELIZAs allow adding custom rules; otherwise use teacher-provided script or a local Python notebook).
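For classes using the Python environment, the example rule from step 2 can be written as a regex with a captured group and checked against several test inputs (a sketch; the variable names are placeholders students can change):

```python
import re

# The "I feel X" rule: the parentheses capture X for reuse in the template.
pattern = re.compile(r"\bI feel (.+)", re.IGNORECASE)
template = "Why do you feel {0}?"

replies = []
for text in ["I feel nervous about tests", "Today I feel great"]:
    m = pattern.search(text)
    replies.append(template.format(m.group(1)) if m else None)
# replies → ["Why do you feel nervous about tests?", "Why do you feel great?"]
```

Having students predict each reply before running the loop reinforces the pattern → capture → template mapping.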

Reflection (5–10 min)

Discuss: What kinds of student inputs break your rules? How would you change rules to be more robust? This is where debugging begins.

Lesson 3 — Debugging and limits of rule-based chatbots (45–60 min)

Intro and objectives (5 min)

Frame the lesson: today we practice systematic debugging and reflect on why ELIZA can fool us into thinking it understands.

Debugging lab (25–30 min)

  • Teacher provides one or two intentionally flawed ELIZA rules (for example, a rule that captures too broad a pattern, causing irrelevant substitutions).
  • Students use a structured debugging checklist: reproduce the bug, isolate which rule triggers, hypothesize a fix, apply the fix, and retest.
  • Examples of common bugs and debugging strategies:
    • Overbroad pattern: the pattern "I" matches almost any input. Fix: narrow it with word boundaries, e.g., "\bI feel\b".
    • Incorrect substitution: failing to convert "I" to "you" properly, which produces ungrammatical responses. Fix: add a pronoun map or a post-processing substitution step.
    • Order-of-rules problem: a specific rule is overshadowed by a catch-all default. Fix: order rules from most to least specific.
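The order-of-rules bug in the list above can be demonstrated in a few lines (a sketch using a hypothetical two-rule list): the responder returns the first matching rule, so a catch-all listed first shadows the specific rule.

```python
import re

def respond(text, rules):
    # Return the response of the FIRST rule whose pattern matches.
    for pattern, template in rules:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."

specific = (r"\bI feel (.+)", "Why do you feel {0}?")
catch_all = (r"(.*)", "Tell me more.")

buggy_reply = respond("I feel stuck", [catch_all, specific])   # → "Tell me more."
fixed_reply = respond("I feel stuck", [specific, catch_all])   # → "Why do you feel stuck?"
```

Asking students to explain why only the ordering changed between the two calls is a quick formative check on the debugging lab.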

Class discussion: Limitations and ethics (10–15 min)

Lead students through guided questions:

  • When did ELIZA appear to “understand” you? What made it convincing?
  • What are the risks of confusing rule-based chatbots with real understanding (e.g., misplaced trust, privacy, misinformation)?
  • How do modern AI systems differ? Briefly contrast ELIZA’s deterministic rules with statistical models that rely on large datasets and compute.

Summative task (homework or class wrap-up)

Students write a 150–250 word reflection describing a bug they fixed, the debugging steps they used, and what that taught them about computational thinking and the limitations of chatbots.

Assessment and rubrics

Use a simple 10-point rubric:

  • Conversation log completeness (3 pts)
  • Rule design correctness and creativity (3 pts)
  • Debugging process and reflection quality (4 pts)

Differentiation and accessibility

  • For younger or less experienced students: limit to two lessons, focus on pattern spotting and ethical reflection; provide partially completed rules to test.
  • For advanced students: extend with coding tasks in Python or Scratch — implement a small pattern engine, add pronoun mapping and order-of-rules logic.
  • Provide alternative expressive options: oral reflections, visual flowcharts, or multimodal presentations for students with writing challenges.

Extension activities (project ideas)

  • Implement ELIZA in Scratch: use broadcast messages to simulate pattern matching and response selection.
  • Compare ELIZA to a small transformer demo (teacher-led), and have students chart differences in behavior, failure modes, and opacity.
  • Design a classroom “Turing test” where one group creates scripted responses and another uses ELIZA — classmates guess which is human-made.

Sample teacher-ready artifacts

Below are condensed artifacts you can paste into a shared drive or LMS for student use.

Conversation log worksheet (fields)

  • Student name / pair
  • Timestamp
  • Student input
  • ELIZA reply
  • Pattern tags (keyword, pronoun swap, question, generic)
  • Notes: why the reply fits or fails

Rule template (fill-in)

  • Pattern to match (example: "I feel (.*)")
  • Captured group explanation (what is X?)
  • Response template (example: "Why do you feel X?")
  • Test inputs

Practical debugging checklist for conversational agents

  1. Reproduce: get a concrete input that shows the failure.
  2. Isolate: which rule fired? (Use logging or systematic testing.)
  3. Hypothesize: why did it fail (overbroad pattern, missing substitution, wrong order)?
  4. Fix: adjust the pattern, substitution map, or ordering.
  5. Retest: try both the original failing input and other edge-cases.
  6. Document: log the fix and why it solved the problem.
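Step 5 of the checklist (retest) can be run as a tiny test loop (a sketch; the inputs and expectations are illustrative): keep the original failing input and edge cases together, and rerun the whole list after every fix.

```python
import re

# Each case pairs an input with whether the rule SHOULD match it.
pattern = re.compile(r"\bI feel (.+)", re.IGNORECASE)
test_cases = [
    ("I feel tired", True),          # the original failing input
    ("I fell down", False),          # edge case: must NOT match
    ("Sometimes I feel lost", True),
]

results = [bool(pattern.search(text)) == expected
           for text, expected in test_cases]
all_pass = all(results)  # → True when every case behaves as expected
```

Even without a testing framework, this habit of rerunning all cases after each change is the transferable debugging skill the lesson targets.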

Short, no-dependencies Python example (teacher copy)

Use this in a local environment or paste into an online Python runner for the advanced extension. The script below shows the core idea of pattern matching plus substitution using only the standard library.

import re
import random

# Each rule pairs a regex pattern with one or more response templates.
rules = [
    {"pattern": r"\bI feel (.+)", "responses": ["Why do you feel {0}?", "Do you often feel {0}?"]},
    {"pattern": r"\bmy (.+)", "responses": ["Tell me more about your {0}."]},
]

# Swap first-person words to second person before reusing a capture.
pronoun_map = {"i": "you", "my": "your", "me": "you", "am": "are"}

def swap_pronouns(text, mapping):
    return " ".join(mapping.get(word.lower(), word) for word in text.split())

def respond(user_input):
    for rule in rules:
        match = re.search(rule["pattern"], user_input, re.IGNORECASE)
        if match:
            captured = swap_pronouns(match.group(1), pronoun_map)
            return random.choice(rule["responses"]).format(captured)
    return "Tell me more."

print(respond("I feel anxious about my exams"))  # e.g. "Why do you feel anxious about your exams?"

Note: Use small, commented examples when teaching code. Encourage students to add a new rule and test it immediately.

By 2026, AI literacy has matured from “what is AI?” lessons into skills-oriented curricula: debugging, transparency, and accountable design are core competencies. This ELIZA module aligns with those priorities: it foregrounds explainability (students can read every rule), reproducibility (small deterministic system), and critical reflection on deployment risks. Use it as a launch point for discussions on data privacy, consent, and when automated advice is inappropriate — all central topics in recent educational technology guidance.

Real classroom examples and teacher tips

From EdSurge's Jan 2026 reporting and subsequent educator pilots:

  • Middle school teachers found that framing ELIZA as a “language plumbing” demo reduced emotional content and made debugging more objective.
  • High school AP Computer Science classes used the Python extension to connect pattern-based approaches to regular expressions and string processing lessons.
  • Teachers reported that students’ post-module reflections were more nuanced: many shifted from “AI is magic” to understanding limits and appropriate uses.

Common classroom pitfalls and how to avoid them

  • Emotional triggers: Because ELIZA uses therapeutic prompts, establish opt-outs and offer an alternative script (e.g., ELIZA as a friendly chat-bot or a robot pet) for sensitive students.
  • Technical blockers: If internet access is unreliable, prepare an offline copy or run the local Python example in advance.
  • Misleading comparisons: Avoid conflating ELIZA with modern generative models — use explicit comparison activities to highlight differences.

Resources and reproducible assets

  • EdSurge’s Jan 2026 classroom piece (inspiration for this module).
  • Primary historical reference: Joseph Weizenbaum’s ELIZA (1960s) and the DOCTOR script.
  • Small educator-ready ELIZA scripts (search for "educational ELIZA clone" for browser-based demos or use local Python pseudocode above).

Final takeaways — what students truly learn

  • Computational thinking: students practice breaking a problem into patterns and rules, and iteratively refining algorithms.
  • Debugging mindset: structured reproducible steps that transfer to any software project.
  • Critical AI literacy: understanding that apparent intelligence can be produced by simple rules, and that real-world AI involves trade-offs in data, transparency, and trust.

Call to action

Ready to run this module? Download the printable lesson pack, worksheets, and teacher scripts from our resources page and pilot the three-lesson unit next week. Share student artifacts and reflections with your district’s AI-literacy coordinator — and if you try the Python extension, send a short note on what worked so we can refine teacher supports across the network.
