It's hard to pinpoint the exact moment when something fundamentally shifts. There's no day when you wake up and say, "Today, everything is different." It's more like boiling a frog. Except in this case, the frog is me, and the water feels amazing.
Over the last few weeks, a confluence of AI developments crossed an invisible threshold. None of them is dramatic on its own. All of them together are profoundly changing how I work, how I teach, and, honestly, how I think about what comes next.
Claude stopped being a chatty know-it-all
Let me start with the most concrete thing. Around December, Claude became... different. Not in some flashy, press-release way. It just started being right. Consistently, reliably right. The suggestions were spot on. The reasoning was good. The writing did not feel like fluffy AI slop. The output needed minimal editing.
I know, I know: "AI is getting better" isn't exactly breaking news. People have been saying this for years. But there's a qualitative difference between "impressive compared to what we had before, but I still need to direct and edit this very carefully" and "I now trust this thing with real work." We crossed that line.
Here's the moment it hit me. Yesterday, I had a brainstorming session with a student. We shared documents, exchanged ideas, sketched out some research directions. Normal academic stuff. Afterwards, I dumped my messy meeting notes into Claude and asked it to organize them.
What came back was not just a cleaned-up document with better formatting.
It was a research program.
Legitimate research questions, well thought out, properly scoped, organized into a coherent agenda with clear methodological approaches. I sat there staring at my screen. I did not feel like a professor advising a student and making some progress. It felt like we had really been two grad students goofing around with half-baked ideas, and then our wise, respected senior professor walked into the room, sat down, and said: "OK, here's how research is actually done. Here's how you think about this. Here's how you organize your work."
Not a helpful assistant anymore. Claude was setting the agenda this time around. It was the senior colleague. It was the advisor.
The Agent That Puts PhD Students to Shame
And then there's the agent setup, which is where things get truly surreal.
When you pair Claude with GitHub for memory, an AGENTS.md file for context, and a TODO.md for task tracking, something clicks. The AI labs have been saying for a while that their agents were reaching "PhD student level." I've supervised PhD students for 20 years. I love them. Truly. But let me be blunt: I have never worked with a PhD student this organized and this diligent.
None of them has ever created a table mapping every data-driven claim in the LaTeX source to the specific code and data files that support it. None of them has ever had the full pipeline for the data analysis and the figures in a Makefile, ready to rerun everything if necessary. None of them has ever had a reproducibility package ready before we even sent out the first manuscript.
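If you have never set up something like this, here is roughly the shape such a pipeline takes. This is a minimal sketch under my own assumptions about the layout; the file and script names (data/raw.csv, scripts/analysis.py, and so on) are hypothetical placeholders, not the ones from our project.

```make
# Hypothetical sketch of an agent-maintained pipeline:
# raw data -> analysis -> figures -> manuscript.
# File and script names are illustrative placeholders.

all: paper.pdf

# Run the statistical analysis on the raw data.
results/analysis.csv: data/raw.csv scripts/analysis.py
	python scripts/analysis.py data/raw.csv results/analysis.csv

# Regenerate the figures from the analysis output.
figures/main_result.pdf: results/analysis.csv scripts/make_figures.py
	python scripts/make_figures.py results/analysis.csv figures/main_result.pdf

# Rebuild the manuscript whenever the text or the figures change.
paper.pdf: paper.tex figures/main_result.pdf
	latexmk -pdf paper.tex

clean:
	rm -f paper.pdf results/analysis.csv figures/main_result.pdf
```

The specific tool matters less than the property it buys you: every number and figure in the manuscript can be regenerated from scratch with a single make.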
The only downside? I will never get to have drinks with this PhD student in the future and feel happy watching them become so much more successful than I am.
A paper is about to go out. I started writing in earnest on Saturday. It took a total of four days of work to get to a submittable manuscript. The experimental analysis, the writing, the polishing. Four days. This would have taken four weeks minimum with a human collaborator, and that's being generous. And the quality isn't "good enough for a draft." It's "ready for submission with minor tweaks."
I find myself glued to my screen all day, but I am not doing busy work. I write down what needs to be done, and it happens behind the scenes. An hour later I get the next iteration back; I look, I give feedback, we cross items off the TODO.md, and we move forward. This is real work getting done. Not just coding. Paper writing. Report preparation. Coding practices are leaking into other kinds of work, and things are moving. My real work is getting done, not just my academic software prototypes.
It's like having an infinite pool of employees, each one eager, competent, and ready to come back with actual deliverables. Not drafts that need to be rewritten. Not outlines that need to be fleshed out. Deliverables.
Teaching as Curation: The NotebookLM Story
Let me tell you about another shift that's been happening in parallel, this one in our classroom.
We teach an AI Product Management course at Stern, and starting in November, something strange happened to how we prepare. We stopped creating content. We started curating it.
Here's our workflow now: After every class session, we collect student feedback. What clicked, what didn't, what questions came up, what topics generated the most energy. We dump all of this (the feedback forms, our own notes, relevant articles, the previous session's materials) into NotebookLM.
And then we ask it to help us design the next session.
NotebookLM digests the student feedback, identifies the gaps, suggests educational activities, and creates new explainer material that directly addresses what students found confusing or wanted to explore further. It connects themes across sessions that we might not have noticed. It proposes case studies that are relevant to the questions students actually asked, not the ones we assumed they'd ask.
The result? The course is absurdly adaptive. Every session builds on what students actually need, not on a syllabus we wrote in August. We're not creating lectures from scratch anymore. We're curating a learning experience, with AI as our editorial partner. The student feedback loop, which used to inform maybe the next semester's version of the course, now informs the next class.
We feel like careful curators because we're still the ones making the final calls. For now. For how long? No idea. Perhaps by the summer even the curation will be something the AI does better than we do.
Education is changing. Bloom's two-sigma problem, the finding that one-on-one tutoring outperforms classroom instruction by two standard deviations, is solvable. Now. What is our role? No clue. Perhaps the future of education does not need professors. But the future of education is bright. Looking back, we will not believe how bad we were. It is almost like going from writing with a marker on transparencies to having an interactive demo of the concept. That transition took 30 years. Let's see where we will be in 30 months.
So... Everybody's a CEO Now?
Here's where I start to feel a little dizzy. The marginal cost of competence is hitting zero.
If I can supervise an AI agent the way I'd supervise a research team (giving it direction, reviewing output, iterating on results) and if this scales to writing papers, analyzing data, building prototypes, designing courses... then what am I? I'm a manager. A director. A CEO of a one-person company with an arbitrarily large AI workforce.
But here's the question: What happens when everyone can do this?
When every professor can produce research at 10x the speed. When every consultant can deliver analyses that used to require a team of five. When every entrepreneur can build and ship products without hiring engineers. When every student can produce work indistinguishable from an expert's.
Do we still need employees? Is it even feasible for everyone to operate like a one-person business? And if so, who are the customers? If everyone is a CEO, who is buying?
I don't have answers. The words people have been saying for the last few years, "AI will change everything," "this is the new industrial revolution," "knowledge work will be transformed," those words haven't changed.
But the feeling has.
It used to feel like a prediction. Now the thing being predicted is here. You will feel it soon, if you have not felt it already. It will be a mix of awe and fear. Impostor syndrome to the fullest. What exactly am I adding here?
I'd love to tell you that the human role is now "taste, judgment, direction-setting" and that AI just handles the execution. That's the comforting version. But I just told you that Claude set the research agenda, not me. So even that may not hold for long.
Bye now
And for now, if you'll excuse me, I need to go review the deliverables my AI team just submitted. Four papers in the queue, a course redesign in progress, and a blog post that, unlike this one, I didn't write myself.
OK fine, I didn't write this one myself either.
(Kidding. Mostly.)