It started at a teaching workshop last semester: Craig Kapp and Rob Egan presented a seminar at the NYU Center for Teaching and Learning called "Real-Time Insights: Leveraging AI for Responsive Teaching in Large Classrooms." They (re-)introduced a deceptively simple concept: the exit ticket. The idea is that at the end of every class session, you ask students three quick questions, each with a different shape metaphor:
- 🔵 Circle: What is still circling in your mind? (What are you confused about?)
- 🟥 Square: What "squared" with your understanding? (What clicked today?)
- 🔺 Triangle: What are three key takeaways from today's session?
Then you take those answers and run them through an LLM, so you have feedback in hand before the next session.
Getting structured feedback from students after every single session? Not at the end of the semester when it's too late to change anything, but right now, while you can still do something about it? I immediately wanted to try it.
Below I describe the approach Craig and Rob presented, along with my own adjustments to the recipe. I hope you find it useful.
The setup: Making it required (and why that matters)
The setup starts with the exit ticket surveys configured as auto-graded quizzes on Brightspace (NYU's LMS). The auto-grading part is a nice little trick: one of the questions is simply "Select True in this question to get your points." Students complete the survey and get their credit. No manual processing of ~50 submissions on my end.
We do tell students upfront: write something substantive. Don't game the system. We reserve the right to deduct points if someone slacks through the exit tickets all semester. And here's the nice irony: since we're already running AI-powered analysis on the responses, identifying freeriders who type "asdf" every week is trivial. The same pipeline that processes the feedback also flags the people not providing any.
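If you want a sense of what that flagging amounts to, here is a rough sketch. This isn't the portal's internal logic (which I haven't seen), just a minimal pass over a hypothetical CSV export, with placeholder column names, that surfaces submissions too short to be substantive:

```python
# Sketch: flag low-effort exit-ticket responses in a (hypothetical) CSV export.
# Column names are placeholders; adjust them to match your actual export.
import pandas as pd

RESPONSE_COLUMNS = ["circle", "square", "triangle"]  # the three exit-ticket questions
MIN_WORDS = 3  # anything shorter is almost certainly "asdf"-grade filler

def flag_low_effort(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    # Count words in each answer; treat missing answers as zero words.
    word_counts = df[RESPONSE_COLUMNS].fillna("").apply(
        lambda col: col.str.split().str.len()
    )
    # A submission is "low effort" only if every one of its answers is too short.
    df["low_effort"] = (word_counts < MIN_WORDS).all(axis=1)
    return df[df["low_effort"]][["student_name"] + RESPONSE_COLUMNS]

if __name__ == "__main__":
    print(flag_low_effort("exit_tickets_week05.csv"))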
The critical design decision: make it part of the grade, not optional. Optional feedback gets ~30% response rates and self-selected complainers. Required feedback gets everyone. And because this is formative feedback (not evaluative), students have every reason to be honest and detailed. They're not rating me. They're telling me what they need.
Compare this to the end-of-semester evaluation. Students fill it out in December, the professor reads it in January (maybe), and any changes happen next year for a completely different group of students. The feedback loop is so long that it barely qualifies as a loop. Exit tickets close that loop within days. Sometimes hours.
From exit ticket to next session: the processing pipeline
So now I have all this feedback. ~50 students, after every session, telling me what confused them, what clicked, and what they're taking away. The question becomes: how do you actually process all of that quickly enough to act on it?
NYU IT built an official path for this, which Rob demonstrated in the seminar. You export the exit ticket responses into the Brightspace Insights Portal (which Rob's team manages) and run AI-powered analysis using a prompt like this:
You are an expert Instructional Designer and Data Scientist assisting
a professor with the course "AI/ML Product Management" at NYU Stern
School of Business (undergraduate).
Your goal is to analyze student feedback survey data to improve course
delivery. The survey questions and student answers are provided below.
Please perform the following two steps:
### Step 1: Thematic Analysis
Analyze the responses to identify key themes. Do not just look for
keywords; look for semantic similarities and underlying sentiment. For
each theme, provide:
1. **Theme Name**: A concise title.
2. **Prevalence**: The approximate number of students who mentioned this.
3. **Explanation**: A brief summary of the sentiment or issue.
4. **Evidence**: A direct, representative quote from the data.
### Step 2: Actionable Pedagogy (Bloom's Taxonomy)
For each theme identified above, propose a short course activity.
* If the theme represents a **knowledge gap/pain point**, propose a
remedial activity.
* If the theme represents a **strength/interest**, propose an activity
to deepen understanding.
* **Constraint**: The activity must be supported by Bloom's Taxonomy.
Explicitly state which level of Bloom's Taxonomy the activity targets
(e.g., Application, Analysis, Evaluation).
**Format**:
Start the suggestion section for each theme with the label: "PRACTICE IDEA".
I attach the survey data.
It's a well-designed prompt. Thematic coding, prevalence counts, representative quotes, remedial activities aligned with Bloom's Taxonomy. The output is genuinely useful.
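If you would rather script this step than use the portal, the sketch below shows roughly what it could look like: the exported responses get appended to the prompt above and sent to an LLM API. To be clear, this is not what Rob demonstrated and not my day-to-day workflow; the OpenAI SDK, the model name, and the file names are stand-ins just to show the shape of the step.

```python
# Sketch: run the thematic-analysis prompt over an exported exit-ticket CSV.
# Hypothetical file names and model; requires `pip install openai pandas` and an API key.
import pandas as pd
from openai import OpenAI

ANALYSIS_PROMPT = open("exit_ticket_prompt.txt").read()  # the prompt shown above

def analyze(csv_path: str) -> str:
    # Drop identifying columns before sending the data anywhere.
    responses = pd.read_csv(csv_path).drop(columns=["student_name"], errors="ignore")
    # Append the raw survey data, as the last line of the prompt promises.
    full_prompt = ANALYSIS_PROMPT + "\n\n" + responses.to_csv(index=False)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model will do
        messages=[{"role": "user", "content": full_prompt}],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    print(analyze("exit_tickets_week05.csv"))
```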
But I prefer to do something slightly different. I use the same prompt from the Insights Portal, but I run it inside NotebookLM with just the student feedback as input. For those unfamiliar: NotebookLM is Google's AI-powered research assistant. You upload your own documents, and it generates analysis, summaries, explainer videos, and podcast-style audio overviews grounded entirely in your uploaded sources. NYU provides institutional access through Google Workspace, so the data never trains any AI models, which matters when you're working with student feedback.
Why NotebookLM over the Insights Portal? Because the exit ticket analysis is just the starting point. What I really need is to prepare the follow-up material. Once NotebookLM identifies the themes and suggests activities, I take those suggestions and combine them with my lecture slides, readings, and case studies (which are already loaded in the same notebook). Then I ask it to generate explainers, videos, infographics, and targeted activities that address the confusion, all grounded in my actual course content.
The Insights Portal gives me a diagnosis. NotebookLM gives me the diagnosis and helps me build the treatment.
My workflow after every class:
- Students complete the exit ticket on Brightspace (takes them 2-3 minutes)
- I export the responses and upload them into a NotebookLM notebook, together with the materials for that session (a small cleanup script, sketched after this list, can flatten the raw export first)
- NotebookLM identifies the themes: what's confusing people, what clicked, what they found most valuable
- Based on those themes, I generate explainer materials, short videos, and targeted activities for the next session
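About that export step: the raw quiz export may not be in a shape you'd want to hand to NotebookLM as-is. A small script like this one (column names are placeholders; adjust them to your export) can flatten it into a plain-text file grouped by question:

```python
# Sketch: flatten a (hypothetical) exit-ticket export into one plain-text file
# grouped by question, ready to drop into a NotebookLM notebook as a source.
import pandas as pd

QUESTIONS = {
    "circle": "What is still circling in your mind?",
    "square": 'What "squared" with your understanding?',
    "triangle": "What are three key takeaways from today's session?",
}

def export_for_notebooklm(csv_path: str, out_path: str) -> None:
    df = pd.read_csv(csv_path)
    with open(out_path, "w", encoding="utf-8") as out:
        for column, question in QUESTIONS.items():
            out.write(f"## {question}\n")
            for answer in df[column].dropna():
                out.write(f"- {answer.strip()}\n")
            out.write("\n")

if __name__ == "__main__":
    export_for_notebooklm("exit_tickets_week05.csv", "exit_tickets_week05.txt")
```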
(As an example, here is the NotebookLM notebook for the Zillow Offers case, which we use to discuss leading and lagging metrics, model and output monitoring, concept drift, adverse selection, and other product-management topics. Note: this notebook contains only course materials for preparing the case discussion, not student feedback data.)
One small but annoying wrinkle: NotebookLM's default slide output has that unmistakable "AI-generated" aesthetic. You know the one. (Yes, the slides are visually gorgeous compared to my own, but after a while it starts feeling a bit like slop.) So I started uploading the NYU brand style guide as an additional source in my notebooks and prompting NotebookLM to follow it when generating visual materials. The results are noticeably closer to proper NYU-branded slides. Not perfect, but much better than the generic AI look. I'm still waiting for NotebookLM to support custom templates or branding natively, but that's a different story.
The per-session overhead is maybe 15-20 minutes.
Why this actually works
The circle/square/triangle structure does something clever: it gives students permission to be confused. "What is still circling in your mind?" is a much less intimidating question than "What don't you understand?" And the three-takeaways question forces them to reflect, even briefly, which helps consolidate their learning.
But the real reason students engage is that they see the results. When I open the next class by saying "Several of you mentioned you were confused about X, so let's spend 15 minutes on this before we move on," students learn that their feedback actually matters. It creates a virtuous cycle: they write thoughtful responses because they know I'll respond, and I can respond because NotebookLM makes processing all the responses feasible. Without the AI assist, no professor has time to synthesize free-text responses from 50 students and build targeted follow-up materials. Definitely not after every single session. The economics just don't work.
With NotebookLM doing the heavy lifting? The economics suddenly work beautifully.
The exit ticket has been around for decades. Craig and Rob simply showed how to supercharge it with AI. The hard part was never getting students to talk. It was finding the time to listen. Once students realize someone is actually listening, they start saying things worth hearing. That's the loop. That's the whole trick.